May 27, 2005

Christopher Judd


The Eclipse Platform will be a Hard Act to Follow

Posted April 7, 2005.

A friend of mine recently told me that, based on historical trends, he thinks the Eclipse Platform has about a year or two left. Only time will tell, but either way, Eclipse has left a permanent mark on our industry.

Having spent the last couple of months studying and researching Eclipse and its internals for Pro Eclipse JST, the thing that excites me most is the plug-in architecture. I couldn't care less about the SWT debate. I think we as an industry really need to use the Eclipse Platform as a learning tool for writing highly extensible frameworks and plug-ins. I have been installing and working with Eclipse plug-ins large and small, open source and commercial, from both the Eclipse Foundation and third parties. I have also been involved in setting up internal update sites and contributing update files to small open source projects. It is easy to tell which plug-ins have been designed and written well, and it is extremely easy to tell which have considered extensibility and which have not. I have written plug-ins for other platforms like Delphi and JBuilder in the past. Eclipse gets the highest marks from me and has set my expectations high. For another platform to compete, it will have to offer me the following:

  • Perspectives & Views
  • Template Engine
  • Graphic Framework
  • Modeling Framework
  • Multi-language support
  • Easy updates and versioning
  • Well documented integration & extensions
  • Large industry support & strong community
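To make the first item on that list concrete, here is roughly what contributing a view looks like in an Eclipse plug-in's plugin.xml (the ids, class, and names below are hypothetical); everything on the list is wired up through declarative extension points like this one:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.0"?>
<plugin>
   <!-- Contribute a view through the org.eclipse.ui.views extension point. -->
   <extension point="org.eclipse.ui.views">
      <view id="com.example.views.SampleView"
            name="Sample View"
            class="com.example.views.SampleView"/>
   </extension>
</plugin>
```

Because the contribution is declarative, the platform can show the view in the Show View dialog without ever loading the plug-in's code — a big part of why Eclipse scales to so many plug-ins.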

I feel like I am doing a Bank of America commercial now. I just want the maximum.

Theory of Constraints

Posted March 29, 2005.

In his article Best Practices for Risk-Free Deployment, John Birtley uses the analogy of a convoy of ships. He says:

Software projects are a complex set of interdependent people and teams and can be likened to a convoy of ships. A convoy has to move at the rate of the slowest ship. Increasing the speed of a single ship in the convoy will not increase the speed of the convoy – it will simply increase the amount of wasted capacity in the speeded-up ship.

Speeding up the slowest ship will, however, have a positive effect since the whole convoy can now move faster.

This reminded me of one of my favorite books and most valuable lessons. The book is The Goal by Eliyahu Goldratt, and the lesson it presents is the Theory of Constraints (TOC). The concept is that a constraint, or bottleneck, prevents an entire process from becoming more productive. Automating a single task will not increase output if that task is not the bottleneck. Only increasing capacity at the constraint itself will increase output.

The development process can suffer from many constraints. These constraints can include requirements gathering, testing, and deployment cycles. So just improving developers' productivity by automating the build process with Ant may not make an impact on the output of the entire process.
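Goldratt's point reduces to a one-line calculation: the throughput of a pipeline is the minimum over its stages. A small sketch of the convoy analogy (the ship speeds are made-up numbers):

```java
public class Convoy {
    // A convoy moves at the speed of its slowest ship; likewise, a
    // process produces at the rate of its most constrained stage.
    static int convoySpeed(int[] speeds) {
        int min = Integer.MAX_VALUE;
        for (int s : speeds) {
            min = Math.min(min, s);
        }
        return min;
    }

    public static void main(String[] args) {
        int[] speeds = {12, 9, 15};              // knots; ship 2 is the bottleneck
        System.out.println(convoySpeed(speeds)); // 9

        speeds[2] = 30;                          // speed up a non-bottleneck ship
        System.out.println(convoySpeed(speeds)); // still 9: wasted capacity

        speeds[1] = 14;                          // add capacity at the bottleneck
        System.out.println(convoySpeed(speeds)); // 12: the whole convoy speeds up
    }
}
```

Swap "ship" for "requirements, build, test, deploy" and the same minimum applies to a development process.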

Look for the bottleneck in your IT shop and add capacity. You might be amazed at the results.

Continuous Integration Lessons Learned

Posted December 31, 2004.

I just finished setting up a continuous integration (CI) environment using CruiseControl (CC) and lava lamps. Previously, I had always used Ant and an OS-specific scheduled task configured to run at midnight to enable continuous integration. However, this time I took the time to try out CruiseControl, and I am glad I did.

CruiseControl is a daemon that can be configured to monitor a source code repository such as CVS for changes, or to build at specific times. When one of these events occurs, CC can use Ant or Maven to check the source code out of the repository and build it. CC can publish the results in multiple ways, including email, instant messaging, a web site, and X10 for fun visual indicators such as lava lamps.

During the process of using CC, I learned the following valuable lessons, which I wanted to share.

First, once every 24 hours is not frequent enough for continuous integration. As mentioned above, I used to set up CI environments to build once every 24 hours. When I initially set up CC, I was asked to set the builds to run every 4 to 6 hours. There were skeptics who believed more frequent builds triggered by repository activity would interfere with a team so new to CI and cause unnecessary anxiety. However, building every 4 to 6 hours was a problem while initially setting up and configuring CC, so I set CC to check the repository every minute and, if something had changed, to wait for 5 minutes of inactivity before starting the build. Fortunately, I forgot to change it to a less frequent interval, and the one-minute check made it into the final configuration. We discovered that the short frequency actually provided the best results, giving everybody comfort since they got immediate feedback. Plus, without frequent builds, one bad build could leave the red lava lamp lit all day.
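The one-minute check with a five-minute quiet period can be expressed directly in CruiseControl's config.xml. A minimal sketch, with a hypothetical project name and paths (both interval and quietperiod are in seconds):

```xml
<cruisecontrol>
  <project name="myproject" buildafterfailed="true">
    <!-- Check CVS every 60 seconds, but wait for 5 minutes of
         repository inactivity before kicking off a build. -->
    <modificationset quietperiod="300">
      <cvs localworkingcopy="checkout/myproject"/>
    </modificationset>
    <schedule interval="60">
      <ant buildfile="checkout/myproject/build.xml" target="build"/>
    </schedule>
  </project>
</cruisecontrol>
```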

Second, the default Ant logging configuration can cause major performance problems. By default, Ant logs all messages to an XML file regardless of their logging level. In a short couple of weeks the log file reached 36 MB. The size of the log file would not have been a major problem if the web site did not try to transform the document 8 times. Each transformation was taking approximately 6 minutes, so 8 transformations led to an approximately 48-minute wait after clicking on the build link on the web site status page. Configuring the ant builder in the CC schedule with uselogger="true" caused Ant to log at the configured logging level. This reduced the log file to under 1 MB and greatly improved the usability of the web site.
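In config.xml terms, the fix is a single attribute on the ant builder (a sketch; the buildfile path and target are hypothetical):

```xml
<schedule interval="60">
  <!-- uselogger="true" makes Ant write through a logger that honors
       the logging level, instead of dumping every message into the
       XML build log via the default listener. -->
  <ant buildfile="checkout/myproject/build.xml"
       target="build"
       uselogger="true"/>
</schedule>
```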

Third, there are multiple audiences for the builds, so there is a need for both a continuous integration build and a nightly build. The audience of the continuous build should be the developers themselves. Developers need quick feedback to build confidence. They need to know that what they check in to the repository works outside their development environment and that what they check out of the repository works. So this build should focus on the code compiling, passing the unit tests, and being able to be packaged and possibly deployed. The second audience is management and architects. Managers are often trying to collect metrics from tools like NCSS and JUnit (number of unit tests). Architects are often interested in code quality reports such as PMD and unit test code coverage. These types of reports take longer to produce and don't need to run continuously. A separate build that runs at midnight is perfect for executing metric and code quality reports. Of course, developers should be able to run these reports at any time in their development environments, since the same build scripts should be used by both the developers and the CI environment.
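One way to run the two builds side by side is to give the continuous build an interval-based schedule and the nightly metrics build a fixed time. A sketch of the CruiseControl configuration, with hypothetical paths and targets:

```xml
<schedule interval="60">
  <!-- Continuous build for developers: compile, unit test, package. -->
  <ant buildfile="checkout/myproject/build.xml" target="build"/>
  <!-- Nightly build at midnight: metrics and code quality reports. -->
  <ant buildfile="checkout/myproject/build.xml" target="reports" time="0000"/>
</schedule>
```

Because both builders call the same build.xml, developers can run the reports target locally at any time, as the post recommends.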

Fourth, a CI web site like the one already included in CC is a very valuable communication tool. While developers need to be notified immediately of build problems via email, IM, or lava lamps, others such as managers do not. A web site can provide the information they need at their convenience.

Fifth, lava lamps are a fun way to provide a visual indicator of the build. I initially thought the idea was rather hokey, but I was wrong. If you want to learn how to integrate lava lamps with CC, check out Mike Clark's Pragmatic Automation web site.


True Test Driven Development with Groovy

Posted November 10, 2004.

Practically Groovy: Unit test your Java code faster with Groovy is a nice article. However, it misses a major value Groovy adds to unit testing: Groovy can enable true test-first development. One of the issues with doing test-driven development in Java is the fact that it is statically typed. To run tests, they must compile, and they cannot compile unless there is at least a stubbed method to call. With Groovy you don't have that requirement, because it is dynamic. If a test tries to execute a method that does not exist, an exception is thrown, which is a valid test result reminding someone that it must be implemented.
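A minimal sketch of what this looks like in practice (the class and method names are made up): the test below compiles and runs even though Calculator has no add() method yet, and the resulting MissingMethodException is the failing-test signal that drives the implementation.

```groovy
class Calculator {
    // No add() method yet -- the test can still compile and run.
}

class CalculatorTest extends GroovyTestCase {
    void testAdd() {
        def calc = new Calculator()
        // Groovy resolves the call at runtime, so this compiles;
        // it fails with a MissingMethodException until add() exists.
        assertEquals(5, calc.add(2, 3))
    }
}
```

In Java the equivalent test would not even compile, so the red-green cycle cannot start until a stub exists; in Groovy the failing test itself is the starting point.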

Below you'll find links to our other bloggers' individual blogs.

Gary Cornell

Jason Gilmore

Jim Sumser

Steve Anglin

Ewan Buckingham

Nick Wienholt

Matt Stephens

Rob Warner

Other Bloggers