I remember, a couple of years ago, working with a developer who seemed irritated by empty lines in source code. I think he was missing an important dimension of software: readability. Grab the book nearest to you. Go ahead, do it. Grab any book close to you. Open it to any page. Are there any blank spaces between lines? Unless you grabbed a children's book, empty lines will emerge like crack in the '80s, making paragraphs easier on the eyes (instead of making them red, though). Speaking of the devil…
I feel better now*. Just as empty lines break up ideas in books, empty lines in source code offer an extra dimension for structuring it. It's something the developer has to say about the code, almost like non-verbal communication. I'm exaggerating, but really: use empty lines to separate groups of related lines when Extract Method is reaching its limit.
Perfectly refactored code would have only one line per method, which is a huge boost in detail complexity. Don't forget detail complexity! Sometimes we are so concerned with minimizing dynamic complexity that we end up adding tons of detail complexity. 100 methods are harder to read than 20 methods five times as big, provided you place empty lines properly.
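A quick sketch of what I mean (the invoice example is made up for illustration): instead of extracting a method for every two lines, the blank lines mark the three "paragraphs" of the method — gather, compute, report.

```java
// Hypothetical invoice routine: blank lines group related statements
// so the method reads like three short paragraphs.
public class InvoiceExample {

    public static double totalWithTax(double[] lineAmounts, double taxRate) {
        double subtotal = 0;
        for (double amount : lineAmounts) {
            subtotal += amount;
        }

        double tax = subtotal * taxRate;

        return subtotal + tax;
    }

    public static void main(String[] args) {
        System.out.println(totalWithTax(new double[] {10.0, 20.0}, 0.25)); // prints 37.5
    }
}
```

Extracting `computeSubtotal`, `computeTax`, and `computeTotal` here would triple the number of names a reader has to juggle without making anything clearer.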
By the way, keep an eye on semantics: use the most suitable words for identifiers, and avoid Hungarian notation. If you need a variable to store the number of times some action has been attempted, call it attemptsCount (or something similar with two words); don't use things like ac, or just count (count of what?). Saving some milliseconds on each keystroke is not as good as saving two or three days of development time plus two or three days of debugging time. Let's do the math just for fun. When I type attemptsCount it takes me 2 or 3 times longer than typing count. Let's say 5 times longer. If we have to type attemptsCount 1000 times and each time takes 2 seconds, we have a total of 2000s (or 33 minutes). If we have to type count 1000 times and each time takes 0.4s (2s / 5), it will take a total of 400s (or 7 minutes). Total gain using count: 26 minutes. Let's say half an hour. Now, depending on the context, this could be absolutely nothing or way too much. Are you counting just one thing in your application? If so, count is OK, but I bet that if you are typing count 1000 times, you are counting more than one single thing. So attemptsCount seems more appropriate for real-life applications. Just let the code speak for itself.
And remember, it's not only about complexity and semantics, which in turn determine maintainability. It's also about writing code that is nice to read and easy on programmers' eyes — if you are not like the aforementioned co-worker, of course.
* Because of the blank line — don't get me wrong! I like to keep my neurons intact... well, except for some beers once in a while. By the way, brain cells do regenerate and reproduce, even after maturity. Thanks, biologists. Love you, Viviana.
Finally! I've managed to write a Maven POM configuration for Enterprise App. Just add the dependency to your pom.xml:
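The declaration follows the usual Maven shape; the groupId, artifactId, and version below are placeholders, so take the real coordinates from the add-on's page at Vaadin Directory:

```xml
<!-- Hypothetical coordinates; check Vaadin Directory for the real ones -->
<dependency>
    <groupId>com.example</groupId>
    <artifactId>enterprise-app</artifactId>
    <version>1.0.0</version>
</dependency>
```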
You may also need to add the official Maven repository for Vaadin add-ons:
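Vaadin Directory serves add-ons from its own Maven repository; the declaration typically looks like this (the repository id is arbitrary):

```xml
<repositories>
    <repository>
        <id>vaadin-addons</id>
        <url>http://maven.vaadin.com/vaadin-addons</url>
    </repository>
</repositories>
```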
Better late than never ;)
Some months ago, I got involved in a project where I needed to generate quite big reports (more than 1M rows) extracted mainly from an SQL table growing at a very fast pace. This table played a central role in the everyday usage of the system.
To avoid degrading performance, we scheduled a database backup and a process to empty that large, problematic table every 3 months. But the problem with the reports remained. At some point we ended up with a table of more than 3M rows and we needed to generate reports with more than 1M rows. Even when the report was small, querying this table took too much time (sometimes more than 10 minutes).
Here is what we did. First, we decided to design something like an ETL process using only SQL (somebody will argue that this is not ETL, but it is). How? We developed a module to execute SQL scripts, defined at run-time, every day at 3:00 am.
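A minimal sketch of such an executor, using only the JDK's ScheduledExecutorService (this is not the original module — the runScripts body is a placeholder for reading the user-defined scripts and running them through JDBC):

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of a daily job that fires at 3:00 am and then every 24 hours.
public class NightlyScriptExecutor {

    // Seconds from 'now' until the next occurrence of 'target' time.
    static long secondsUntil(LocalTime target, LocalDateTime now) {
        LocalDateTime next = now.with(target);
        if (!next.isAfter(now)) {
            next = next.plusDays(1); // already past 3:00 am today
        }
        return Duration.between(now, next).getSeconds();
    }

    public static void main(String[] args) {
        // Daemon threads so this demo JVM can exit after scheduling.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r);
            t.setDaemon(true);
            return t;
        });

        long initialDelay = secondsUntil(LocalTime.of(3, 0), LocalDateTime.now());
        scheduler.scheduleAtFixedRate(
                NightlyScriptExecutor::runScripts,
                initialDelay,
                TimeUnit.DAYS.toSeconds(1),
                TimeUnit.SECONDS);
    }

    static void runScripts() {
        // placeholder: load the run-time-defined SQL scripts and execute them via JDBC
    }
}
```

In production you would rather hand this to Quartz, cron, or the application server's scheduler, but the idea is the same: push the heavy SQL to a time when nobody is using the system.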
These scripts were basically "insert into" statements: they took data from some tables and inserted the result into another table. Data in the destination table was more suitable for reporting, so we gained time when generating the reports by moving processing (expensive joins, mostly) from working hours to 3:00 am, when nobody (not even the CEO) was using the system. Reports became way faster (around 20 seconds for the biggest one).
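The scripts had roughly this shape — an INSERT ... SELECT that pre-computes the joins into a flat reporting table. Table and column names here are made up for illustration:

```sql
-- Hypothetical nightly script: the expensive joins run at 3:00 am and
-- the pre-joined rows land in a flat, report-friendly table.
INSERT INTO report_sales (sale_date, customer_name, product_name, amount)
SELECT s.created_at, c.name, p.name, s.amount
FROM sales s
JOIN customers c ON c.id = s.customer_id
JOIN products p ON p.id = s.product_id
WHERE s.created_at >= CURRENT_DATE - INTERVAL '1' DAY;
```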
A couple of months after going to production with this scheme, we faced another problem. The destination table, supposed to be more suitable for reporting, started to be a problem itself. As you may guess, it grew too large, and we were facing our initial situation again. The solution: split that large data set into smaller ones. How? Every row in that table had a timestamp, so we divided the data into semesters, with one table per semester. The script executor module was modified to put the data into the correct table according to the current date, and the reporting application was updated to let users select the semester (so the app could query the correct table).
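The routing logic boils down to a one-liner. A hypothetical helper (table naming scheme is mine, not the original project's): rows from January through June go to the S1 table of their year, the rest to S2.

```java
import java.time.LocalDate;

// Hypothetical per-semester table routing for the reporting data set.
public class SemesterTables {

    public static String tableFor(LocalDate date) {
        int semester = date.getMonthValue() <= 6 ? 1 : 2;
        return "report_" + date.getYear() + "_S" + semester;
    }

    public static void main(String[] args) {
        System.out.println(tableFor(LocalDate.of(2013, 2, 10)));  // prints report_2013_S1
        System.out.println(tableFor(LocalDate.of(2013, 11, 3)));  // prints report_2013_S2
    }
}
```

Both the script executor and the report queries just build the table name from the date; a database with native partitioning could do the same transparently, but plain tables kept the solution portable.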
Reports are fast at the time of writing this. In any case, this solution gave us some time to think about implementing a Big Data solution, NoSQL maybe... Wait! This is already kind of a Big Data solution :)
A few days ago, an Enterprise App user asked me if lazy loading is better than pagination. My answer: totally! Pagination is an old, so-called Web 1.0 solution. Back then, no AJAX was possible at all.
It's not very usable to have a long row of 1, 2, 3, 4, 5, 6, ..., 100 links. Lazy loading (à la Vaadin) is not only easier to use but also easier to understand for the end user. If a user sees a scroll bar, she will understand that scrolling that thing will cause the rows to... well, scroll. No need for further explanation (is there any explanation to make?). Chances are that pagination will cause most novice users to get lost, at least the first time they use the software.
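Under the hood, the contract is simple. This is a hypothetical sketch of the idea, not Vaadin's actual container API: the UI reports a total size (so the scroll bar is proportional) and fetches only the rows currently visible, as an offset/limit window.

```java
import java.util.List;
import java.util.function.BiFunction;

// Hypothetical lazy-loading contract: fetch only the visible window of rows.
// Assumes 0 <= offset <= totalSize.
public class LazyRows<T> {

    private final BiFunction<Integer, Integer, List<T>> fetcher; // (offset, limit) -> rows
    private final int totalSize;

    public LazyRows(int totalSize, BiFunction<Integer, Integer, List<T>> fetcher) {
        this.totalSize = totalSize;
        this.fetcher = fetcher;
    }

    public int size() {
        return totalSize; // lets the UI render a proportional scroll bar
    }

    public List<T> visibleRows(int offset, int limit) {
        return fetcher.apply(offset, Math.min(limit, totalSize - offset));
    }

    public static void main(String[] args) {
        // Demo fetcher that fabricates row numbers; a real one would run
        // a SELECT ... LIMIT/OFFSET query against the database.
        LazyRows<Integer> rows = new LazyRows<>(1_000_000,
                (offset, limit) -> java.util.stream.IntStream.range(offset, offset + limit)
                        .boxed().collect(java.util.stream.Collectors.toList()));

        System.out.println(rows.visibleRows(999_995, 10)); // prints [999995, 999996, 999997, 999998, 999999]
    }
}
```

Every scroll event turns into another visibleRows call, so a million-row table costs no more per render than a ten-row one.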
Sometimes scrolling through the entire data set is certainly necessary, and pagination can make that task a pain in the neck. I think pagination is OK if we have, let's say, seven pages at most (thanks, Miller). If the number of pages is uncertain at design or development time, I would definitely try a different approach. In any case, if the target client devices allow it, I will go with lazy loading.
I don't know why the heck I kept using pagination for so long. I think I just got used to using pagination for every table. DisplayTag could be guilty, but even DisplayTag can be configured to use appealing, delectable lazy loading.
Some of you will argue "Google uses pagination". I must admit there are situations where pagination makes a lot of sense. I can't imagine an almost infinite page with all the results of a Google search. Also, I feel very comfortable reading online books in a paginated user interface. But a lot of debate is happening out there. Even Google itself prefers view-all search results.
Another great thing about lazy loading, especially (but not exclusively) when used in tables, is that it is a good way to make your web applications look more modern, more Web 2.0. Just note how Facebook, Twitter, and other heavyweights use lazy loading to show their data. Go ahead, modernize your web apps with some lazy loading.
I have been a ProjectLocker and Assembla user for years. They both offer excellent tools for software project management. However, now I'm moving to GitHub, 'cause you know, I'm becoming kind of an open-source activist (don't take that too seriously; it doesn't mean I'm trying to be the next Stallman or something). GitHub seems better suited for social open-source projects.
I have written my share of code, so it's time to give something back to the world (also, I'm publishing all that code to advance my developer career, but that doesn't sound like the thoughts of an open-source activist). Check it out, tons of code for you: QBasic, C, C++, PHP, and Java (mostly Java), including some projects using well-known Java technologies such as Spring Framework, Hibernate, JPA/TopLink, Struts 2, SiteMesh, DisplayTag, Vaadin, and Enterprise App.
Enterprise App users! Now you can follow the add-on's development here. I will migrate all issues and tasks to GitHub to handle everything from one place. You can fork the project and send patches (through pull requests) if you want to.
As some of you already know, I will be making a big noise about Enterprise App and InfoDoc Pro next month. Big noise without a new website is not worthwhile. So I acquired a hosting service (no free advertising for them right now), a new domain (alejandrodu.com), and a bunch of new ideas to make Enterprise App and InfoDoc Pro shine as a golden Java coffee cup (or at least as the yellow one in the picture).
So far, Enterprise App has more than 600 downloads at Vaadin Directory and I'm aware of more than 15 projects using it worldwide. I would like to thank all license buyers for supporting this web site as well as the add-on itself.
InfoDoc Pro is currently approaching a stable release. Thanks to Thota Software Solutions, Aio Technology S.A.S. and Colombitrade S.A.S. for all the feedback and financial support for the project.