Friday, November 2, 2007

Big Ball of Mud

This post at Object Mentor reminded me that I'd never actually read the "Big Ball of Mud" paper by Brian Foote and Joseph Yoder. Quoting from the paper -
"A BIG BALL OF MUD is haphazardly structured, sprawling, sloppy, duct-tape and bailing wire, spaghetti code jungle. We’ve all seen them. These systems show unmistakable signs of unregulated growth, and repeated, expedient repair."
I'm sure everyone has seen one, worked on one, and even created one. I'm also certain that in general things aren't much different since that paper was first published in 1999. Here is a link to a "Big Ball Of Mud" Google video presentation by Brian Foote from August 2007. I guess that since the authors are still talking about the pattern we should expect it to be around for a while longer.

The next time you need to wade into the swamp, you might want to take a copy of the book "Working Effectively with Legacy Code" with you.

Wednesday, October 10, 2007

Testability and Design

As Michael Feathers notes in 'The Deep Synergy Between Testability and Good Design', there is a lot of discussion about how to test private methods. I think he's absolutely correct that the need to test a private method is a hint about the design. I've been reviewing a class that I was never completely happy with because it exposes private methods for testing. Looking at it today I'm sure it would be possible to extract a class that would improve the design and fix the problem. It would have been much easier to clean this up when I was writing the code than to find the time to fix it now. I wish I had listened more closely to the hint that my test was giving me.

It's pretty amazing how much we learn about the goodness of a design when we try to test it. In the past I used to write fewer and larger classes than I do today. Most of the reason for the change is better unit tests.

Wednesday, October 3, 2007

The Discipline of Agile

An article by Scott Ambler at Dr. Dobb's does a nice job refuting the idea that agile development is not disciplined. I think some of this might come from the lack of ceremony on agile development projects. Things get done and no one makes a big deal of it. Software is normally usable (and very likely releasable) at the end of every iteration, so the actual release can be anti-climactic. Contrast that with traditional methods where every phase or milestone is a big deal and release is often an earth-shaking event. Maybe the traditionalists are just mistaking all their ceremony for real discipline.

Friday, September 28, 2007

Never Start With the Data Model

My bias is for object oriented design and domain modeling. I am convinced that you don't build great software by starting design with a data model. I think starting with the data model is just about the worst idea and pretty much guarantees mediocre software. The best systems start with a domain model and work through the various layers of the system down to the database, not the other way around.

The database should be left until as late in development as possible. Experience shows the opposite usually happens: creating the database is one of the first things on a project task list. Instead we should be asking how soon we really need a database. The unit tests will be better and faster when they don't need a database. You know that data access should be encapsulated behind an interface. Why not leave the database until we know the data requirements of an at least partially working system?

Honestly, I can't think of a harder way to try to conceptualize a system than through a data model. Do your users understand a database schema? Is a database schema the model of your system you want to carry around in your head? Great software requires a model that can be used to communicate with users, developers, product owners, and domain experts. I don't think a data model is how any business person conceptualizes their business processes. What you need is a consistent coherent domain model that can be used to make sense of a system at any level. You must be able to "peel the onion" to discover details as needed. Design by data model immediately forces all the nasty gnarly implementation details front and center. It is a well understood user interface design principle that bad designs expose details of the data model via data entry screens. The worst designs percolate the data model from top to bottom.

The best software solutions model a problem in the problem domain. A data model is a solution in the database domain not the problem domain. So why is the database schema where design so often starts? Maybe it's the mainframe mentality in big corporations, or naive developers who don't really understand object orientation, or just a general ignorance of object oriented design and domain modeling. Eric Evans wrote a great book on domain driven design that is not as widely read as it deserves to be. In that book he talks about the "ubiquitous language" of a software project -
"A domain model can be the core of a common language for a software project. The model is a set of concepts built up in the heads of people on the project, with terms and relationships that reflect domain insight. These terms and interrelationships provide the semantics of a language that is tailored to the domain ..."
I don't see such a language ever coming from a data model.

What prompted me to write this was a post touting the new release of an "active record" object relational mapping framework for Java and some controversy it generated in this post. As you can probably guess, I'm not really a fan of the active record design pattern. But reading the posts and the comments, and knowing how popular Ruby on Rails is, it's clear that lots of developers are very big fans.

I don't get it. Maybe it's because active record is so easy to understand. I suppose active record might work for small systems. But I would never want to use it on any even moderately sized enterprise software project. I see enough ugly code, highly coupled, tangled classes, and mangled hierarchies without database access code mixed into my business classes and immediately coupling my design to the database. I shudder to imagine the maintenance headaches it would cause. At least I'm in good company -
"Another argument against Active Record is the fact that it couples the object design to the database design. This makes it more difficult to refactor either design as a project goes forward."

Martin Fowler, Patterns of Enterprise Application Architecture
"In an object oriented program, UI, database, and other support code often gets written directly into the business objects. Additional business logic is embedded in the behavior of the UI widgets and database scripts. This happens because it is the easiest way to make things work, in the short run.

When the domain-related code is diffused through such a large amount of code, it becomes extremely difficult to see and to reason about. Superficial changes to the UI can actually change business logic. To change a business rule may require meticulous tracing of UI code, database code, or other program elements."

Eric Evans, Domain Driven Design

I hope the fans of active record know what they could be getting themselves into, but as usual I doubt that is the case.
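The coupling Fowler and Evans describe is easy to see in code. Here is a minimal sketch, using a hypothetical Order class (not taken from any framework), contrasting the active record style with a domain object kept behind a repository interface:

```java
// Active record style: the domain class knows about SQL and the connection.
// Changing the schema, or testing without a database, means touching this class.
class OrderActiveRecord {
    long id;
    String customer;

    void save(java.sql.Connection conn) throws java.sql.SQLException {
        try (java.sql.PreparedStatement ps =
                 conn.prepareStatement("UPDATE orders SET customer = ? WHERE id = ?")) {
            ps.setString(1, customer);
            ps.setLong(2, id);
            ps.executeUpdate();
        }
    }
}

// Domain model style: the entity holds business state and behavior only;
// persistence sits behind an interface that tests can fake.
class Order {
    private final long id;
    private String customer;

    Order(long id, String customer) { this.id = id; this.customer = customer; }
    long id() { return id; }
    String customer() { return customer; }
    void renameCustomer(String name) { this.customer = name; }
}

interface OrderRepository {
    void save(Order order);
    Order findById(long id);
}
```

A test can implement OrderRepository with an in-memory map, so the domain logic is exercised without a database; the active record version can't even be instantiated usefully without a Connection.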

Wednesday, September 19, 2007

There Aren't Any Rules

I know a lot of programmers won't agree but there are few, if any, "rules" for writing good code. It doesn't matter what platform or language or type of software you are writing. If you think someone can hand you a set of rules that you can follow to write good code you are mistaken. I try not to call anything a rule anymore. Now I call them guidelines or principles. The difference is that a guideline needs to be interpreted and judged to know when it should be applied. A guideline might also apply to more than one case.

Sorry, but I reserve the right to change my mind tomorrow about what I told you today was the right thing. This profession is too young, changes too fast, and is too much of a craft for rules written in stone. I've been doing this long enough to know that many things we consider best practices today are likely to change tomorrow. Remember Hungarian notation? At one point you would have flunked a code review if you didn't use it, now you'd be more likely to flunk if you did. How about object oriented design? I remember when inheritance and polymorphism were going to save the world. Now too much inheritance is a sure sign of bad design. How about unit testing? We're still discovering the best techniques for things like mock objects.

Much of what makes good code depends on situation and context. Michael Feathers says it very clearly -
"Good programming is contextual. Practice is contextual. We can articulate rules about how to use language constructs correctly, but they're just guidelines. Context is king."

When Good Code Looks Bad, Michael Feathers
I'd like to add that it is also about having the knowledge and judgment to know which guideline to apply when. There aren't any shortcuts, it takes judgment, practice, and study to write good code. Becoming a good programmer is hard work and you won't be one by following a set of rules. But you might become one if you practice and study the art of programming.

Wednesday, September 5, 2007

Metrics To Improve Unit Testing

At Test Often Joe Ponczak lists seven metrics that you can use to make your methods more testable, "Seven Metrics to Improve Your Unit Testing". Experienced unit testers will probably be familiar with the metrics but it never hurts to review. I mainly mention this because it is the blog for CODIGN Software which makes some nice reasonably priced Eclipse plug-ins for writing JUnit tests and analyzing code coverage.

p.s. I sure don't like the white on black color scheme on the blog though.

Tuesday, September 4, 2007

What makes software so hard?

Great article in the August issue of The Rational Edge, "What makes software so hard?". I have been convinced for some time that software is a design and not a manufacturing process. The article clearly describes the difference by explaining where the costs in each type of process lie -
"On the other hand, for software projects, the relationship between architecture and design versus production is not only inverted, but in fact approaches infinity: since the production or manufacturing costs for software are almost nil, almost all cost for a software project comes from the workflows dealing with the creative parts -- that is, the invention and design of software."
Another great quote is the author's comparison of software to writing a novel. The metaphor fits so well -
"In a sense, creating software is far more closely related to writing a novel than to any traditional engineering effort -- an author is free to construct whatever type of story, limited only by his imagination and whatever facts may be necessary to the narrative. Furthermore, the author must construct the architecture of the story in his head, and to make sure the design and implementation of the story is consistent with the overall architecture. The quality of the author's product is to a high degree based on whether he succeeds in keeping the details of the story consistent with the architecture."
It is the notion that software is like manufacturing that has really turned me against big process improvement efforts like CMMi that are pushed from the top down. Anyone in a corporate environment has likely lived through at least one such effort. They usually seem to be attempts to create predictability that does not exist in software and to turn developers into interchangeable pieces that can be shuffled around a Gantt chart.

Thursday, August 30, 2007

The Mysteries of Debugging?

Why do so many programmers find debugging so hard? Sure there are exceptionally wicked bugs, but most of the time we make debugging harder than it needs to be. The only secret I know of is having the right attitude and using the right approach.

How Not to Debug

Trial and error

Just guess at what the problem is. Add lots of print statements to the code and hope one of them shows you what the problem is. Make changes until the problem goes away. You don't need to know the cause as long as the bug is fixed.

Blame it on the ...

That's impossible, certainly it can't be a mistake in my code. It must be the compiler, database, network, and so on and so on.

Don't Understand the Problem

Don't dig deep enough to understand the cause. Fix the first and most obvious thing you find. Then fix the next most obvious thing when the same bug shows up again later.

Gentlemen Start Your Debuggers

That's what the debugger is for so fire that sucker up and start stepping. We'll just step through every line of code until we find the bug. You've got nothing but time right?

The Right Approach to Debugging

First don't panic! Programming is about solving problems and a bug is just another problem to solve. Of course you must approach debugging in a logical and organized way. The first thing you need are some clues. Using the clues you can develop a theory and tests to validate the theory. Once your theory is validated you can implement a fix. This is very similar to applying the scientific method:
  1. Gather data
  2. Form a hypothesis
  3. Perform experiments that test the hypothesis
  4. Prove or disprove the hypothesis
  5. Rinse and repeat as needed

Techniques for Successful Debugging


Reproduce the Bug

You need to reliably reproduce the bug. If you can't reproduce it when needed you can't test it or know when it is fixed. Reproducing a bug can be the hardest part of debugging.

Find the simplest test case that demonstrates the bug. You want to make it quick and easy because you will need to recreate the bug many times. The harder the bug is to recreate the less sure you will be of the cause and your solution. It is often worth the effort to create the smallest simplest program with the least code and clutter that shows the bug.

Analyze All the Available Data

Before rushing into a theory about the cause of a bug you need to make sure you have completely analyzed all the data that you have about it. Don't jump to conclusions because your first instinct will often be wrong. Look at the problem from as many directions as possible first.

Make sure you understand what the data is saying about the problem. We've got a great new technology called an exception that holds enough information all by itself to tell you the exact problem and the line of code where it occurred. Take the time to read and understand the full exception output. Time and again I find myself pointing out to a programmer that the exception is telling them exactly what the problem is and all they need to do is read the exception output.

Turn on as much application logging as possible and take the time to thoroughly examine the log or trace files. There are so many freely available open source, high quality, easy to use, logging and tracing frameworks available for every platform that there is no excuse for any application not to generate high quality error and debug logs.

Narrow Things Down

Use a binary search or divide and conquer technique to zero in on the problem code. You need an organized hunting expedition not a haphazard ramble through the code to find bugs.

Look at what has changed recently. If things worked fine last week then figure out what has changed in the code or its runtime environment and look there first.

Explain the Bug to Somebody Else

When you aren't making any progress, stop, take a breath and find someone else to talk the problem over with. So often the simple act of explaining a problem generates an insight before you are even finished with the explanation. If just explaining things doesn't work then the other person may have a great idea of their own.

Fix the Real Problem

The symptom you see may not be the actual bug. Find and fix the root cause, not just the symptom. When you find the problem look around for any similar problems. We all tend to make the same mistake more than once.

Write a Test Before You Fix

First, the test is a good demonstration of the bug, and when the test succeeds it is proof that the bug is fixed. Second, you just spent valuable time finding and fixing the bug, and a test will help ensure that it does not come back to waste your time again.
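As a sketch of the idea, assume a hypothetical bug report against an invented Price class: a discount over 100% produced a negative price. The test below is written first to demonstrate the bug, then made to pass by the fix:

```java
class Price {
    // Hypothetical bug: a discount over 100% produced a negative price.
    // The regression test was written first to demonstrate the bug;
    // the Math.max clamp is the fix that makes it pass.
    static int afterDiscount(int cents, int discountPercent) {
        int discounted = cents - (cents * discountPercent) / 100;
        return Math.max(0, discounted); // fix: never return a negative price
    }
}
```

The failing test documents the bug better than any comment could, and it stays in the suite to catch a regression.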

The Compiler is Not Broken

The compiler, database, whatever, is not broken. There could be a bug there, but don't start from that assumption; it will just waste time. Believe me, it is not a bug in the compiler. Compiler writers are way smarter than you or me.

One Change at a Time

Never make more than one change before testing the bug again. If you make two changes, how will you know which one fixed the bug, or whether both changes were actually needed?

Check the Simplest Thing First

Bugs are often caused by some silly mistake or oversight and the simple things are easy to check and fix. The unlikely things are hard to check, so save them for later.

Use the Debugger

I saved this one almost for last because it should be a last resort. Debuggers are wonderful and powerful tools! But debuggers can also be tedious, time consuming, and confusing. Sometimes they are the only way to figure a problem out, but to use a debugger effectively you first need to narrow down the code that needs to be checked. Don't let the debugger be the first tool you reach for.

Use Tools to Find Bugs Before You Deploy

The best way to fix a bug is to never let it get deployed. It should not be necessary to remind any programmer to turn on as many compiler warnings as possible, but unfortunately, I know it is. When you have all the compiler warnings removed from your code, run a static analysis tool over it too. There are many open source and commercial code analysis tools available, so get at least one and use it to analyze ALL of your code for bugs.


"The Pragmatic Programmer: From Journeyman to Master" by Andrew Hunt, David Thomas
"Code Complete" by Steve McConnell
"The Practice of Programming" by Brian W. Kernighan, Rob Pike
Debugging strategy: easy stuff first
Fix The Bug, Not The Symptom

Tuesday, August 28, 2007

New Application of the Builder Pattern

At "Mistaeks I Hav Made" Nat Pryce writes about an alternative to the Object Mother pattern that is based on the Builder pattern I blogged about previously. It looks like a useful application of the Builder pattern. Object Mother is a technique for creating test data for unit tests. I've used Object Mothers and the article is right that over time they get messy and bloated.

In the last line of the article Nat says -
"In some cases, Builders have so improved the code that they ended up being used in the production code as well."

Which is a nice validation of the underlying Builder pattern. I wonder if they used the same builder, complete with the default values, in the production code or wrote new ones.
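A minimal sketch of the test-data builder idea, using a hypothetical Invoice class rather than Nat's actual code: every field has a sensible default, so each test only states the details it cares about.

```java
class Invoice {
    final String customer;
    final int amountCents;
    final boolean paid;

    Invoice(String customer, int amountCents, boolean paid) {
        this.customer = customer;
        this.amountCents = amountCents;
        this.paid = paid;
    }
}

// Test-data builder: every field starts with a safe default, each
// with...() call overrides one detail, and build() produces the object.
class InvoiceBuilder {
    private String customer = "Any Customer";
    private int amountCents = 1000;
    private boolean paid = false;

    InvoiceBuilder withCustomer(String c) { this.customer = c; return this; }
    InvoiceBuilder withAmountCents(int a) { this.amountCents = a; return this; }
    InvoiceBuilder paid() { this.paid = true; return this; }

    Invoice build() { return new Invoice(customer, amountCents, paid); }
}
```

A test that only cares that an invoice is paid can say new InvoiceBuilder().paid().build() and ignore everything else, which is what keeps these from bloating the way Object Mothers do.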

Monday, August 27, 2007

Developer Testing is Habit Forming (and Habit Changing)

A while back Tim Ottinger wrote a great article about how "Testing Will Challenge Your Conventions". Among the points that really hit home for me were -

  1. "Interfaces suddenly seem like a really good idea...". Mock objects can be so useful when used correctly that creating an interface is often the first thing I do.
  2. "Singletons and static methods no longer seem like a great way to do work...". As I hope everyone knows by now the Singleton pattern is way overrated and makes writing truly isolated unit tests very hard.
  3. "Private makes less sense...". I still struggle with making a method public just for testing but sometimes there is no other way. Whenever I find the need to do so, I give my design a hard look to make sure it is as good as it should be.
  4. "You need to be able to pass a class everything it might need at construction time...". To write isolated unit tests your classes cannot configure themselves. Of course long constructor parameter lists can be a problem but I can use the builder pattern as one way to counter that.
  5. "Smaller methods are the norm." When I think about the size of methods I write now compared to a few years ago I am amazed at how small the methods are now. It is getting to where I can't read or don't have the patience to read methods over 10 to 15 lines long.
Those are a few of my thoughts; go read the whole thing for yourself.
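Points 1 and 4 can be sketched together; the Clock interface and SessionTimeout class below are my own illustration, not Tim's. The class is handed its dependency at construction time, so a test can substitute a fake:

```java
interface Clock {
    long nowMillis();
}

// The class never reaches out for its own dependencies; whoever constructs
// it supplies the Clock, so a test can pass in a controllable fake.
class SessionTimeout {
    private final Clock clock;
    private final long startedAt;

    SessionTimeout(Clock clock) {
        this.clock = clock;
        this.startedAt = clock.nowMillis();
    }

    boolean expiredAfter(long millis) {
        return clock.nowMillis() - startedAt > millis;
    }
}
```

In production you pass a Clock backed by System.currentTimeMillis(); in a test you pass a fake whose time you set by hand, so the timeout logic is tested in isolation and instantly.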

Friday, August 24, 2007

Discipline and Software Development

Jeff Atwood writes over at Coding Horror that "Discipline Makes Strong Developers". Discipline is important and the best developers are certainly disciplined. But what sort of discipline are we actually talking about? Let's start with what sort of discipline we are not interested in -
  • Not imposed discipline.
  • Not about drill sergeants or enforcers.
  • Not about being able to code in low level languages like C or assembler.
  • Not about being anal-retentive.
  • While I have great respect for Watts Humphrey, it's not about recording and reporting every minute detail of your work day.
Disciplined software development is about -
  • self-discipline
  • focus
  • attitude
  • approach
  • organization
  • responsibility and accountability
This includes having the discipline to -
  • write the unit tests
  • add the Java Doc comments
  • find the best name for a variable or method
  • leave the code better than you found it
  • fix problems not of your making
  • improve your skills
  • learn new things
  • keep the coding standards
  • use source control
  • check-in changes frequently
  • run the unit tests often
  • run the unit tests before every check-in
  • write a test for a bug before you fix it
  • get your code reviewed
  • fix the problem not just treat the symptom
  • keep code consistent
As a technical lead I struggle with this. I have no desire to be the cop, but there can be too much bad code to ignore. One helpful option is automation. Use analysis tools to identify coding problems, violations of coding standards, and generate test coverage metrics. Automate your builds and do a nightly integration build or preferably a continuous integration build after every change. Run the analysis as part of every integration build. Fail the build if there are too many violations and notify the development team of every failure.

I think that ultimately it is up to the individual developer to have pride in his or her work. I think the Pragmatic Programmers said it best. A developer must "Care About Your Craft" and "Think! About Your Work".

Tuesday, August 21, 2007

What is software design?

Michael Feathers has a recent blog "It's All Design" where he says:

When I think about what we do in software development, I find it hard to imagine similar things happening in other fields.

He goes on to speculate that an auto designer would never be given a requirement such as there must be fifteen drink holders. I think he's wrong and that we operate in the same way as in many other fields. Come on, there really are vehicles with fifteen cup holders and I can't imagine any auto designer doing that without such a requirement.

In the comments Michael says "I try to imagine what happens in conversations with architects." Well my wife has a degree in Architecture, but worked in the field only briefly before moving into software development. She has found great parallels between the work in both fields. We are currently working with an architect to design an addition to our house and one of the first things our architect did was ask about any features and requirements we had in mind.

Don't get me wrong, I agree that software development is just one big design process. I really hate how the software development process is so often compared to a manufacturing process. In software the manufacturing is not the development of an application it is copying that application to a disk. What we do, even the lowest level coding, is design not manufacturing.

But I don't agree with Michael Feathers that requirements and design are the same thing:

Maybe requirements is just a word that we use because we're dividing our design work between two groups.. a group that determines the higher level design (what the product will do); and the lower level design (how the product will do it).

Requirements frame design but are not the same as design. Something like a performance requirement does not say anything about the design of a product. Software development is a design process and requirements are part of the process, but stating a requirement is not the same as making a design decision.

Thursday, August 16, 2007

"Mocks Aren't Stubs" Revisited

Reading the book "xUnit Test Patterns: Refactoring Test Code" made me take another look at Martin Fowler's 2004 article "Mocks Aren't Stubs". I'm glad I did because the article has been significantly updated (it's practically a rewrite and nearly twice as long) to reflect new thinking about mock objects. The first thing I noticed was the new terminology that is consistent with the "xUnit Test Patterns" book. The second thing I noticed is the new ideas about the different styles of using mock objects. The updated article is definitely worth a read or a re-read.

It looks like we are beginning to develop a shared language around unit testing, like what happened with refactoring and design patterns. It's interesting to watch this kind of thing as it matures and we learn more. It reminds me of how much object oriented development has changed over the last twenty years.

Tuesday, August 14, 2007

Java Mock Object Frameworks Reviewed

I've been reading the book "xUnit Test Patterns: Refactoring Test Code" so you are going to get a few posts on unit testing. The book is huge, 27 chapters and 944 pages, and packed with useful information. This clearly was no hastily compiled book, the author has invested a lot of time and effort. The book has a website here. Right now I'm reading the chapter on "Test Doubles", what you and I would probably call mock objects, but the author classifies into five types: Dummy Object, Test Stub, Test Spy, Mock Object, and Fake Object. The classification is sensible and really makes you think about how you use mocks and stubs. I've been using mock objects for years and never really thought that much about it.
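Hand-rolling one of those five types makes the classification concrete. Here is a minimal Test Spy, with an invented Mailer interface and WelcomeService class (not examples from the book): the spy records the calls made to it so the test can verify them afterward.

```java
interface Mailer {
    void send(String to, String body);
}

class WelcomeService {
    private final Mailer mailer;
    WelcomeService(Mailer mailer) { this.mailer = mailer; }
    void welcome(String address) { mailer.send(address, "Welcome!"); }
}

// Test Spy: records what was sent so the test can inspect it afterward.
// A Test Stub would instead feed canned return values into the class
// under test; a Mock Object would verify the expected calls itself.
class SpyMailer implements Mailer {
    final java.util.List<String> recipients = new java.util.ArrayList<String>();
    public void send(String to, String body) { recipients.add(to); }
}
```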

Despite using mock objects for years, I haven't kept up with the mock object frameworks. Until recently I've been using the original static libraries and had not tried any of the dynamic mock frameworks like DynaMock, EasyMock, or jMock. I've finally gotten tired of writing custom mock objects and decided it was time to try something new. Over the last couple weeks I've been testing EasyMock, jMock, and rMock. So here are my thoughts on those frameworks.

EasyMock 1 (Java 1.3 or 1.4)

EasyMock uses recording to set expectations. A mock instance is created and the expected method calls are specified by method calls with the expected parameters.
  • Documentation is decent and better than other v1 kits.
  • Includes a tutorial with source code.
  • Generally seems simpler than the others to understand.
  • Some extra code, instances of the control and mock instance for each mock.
  • Need calls to the replay and verify methods for each mock used in a test.
  • Recording expectations using actual method calls is an easy to understand metaphor.
  • Specifying return values is not so easy to understand; it clashes with the recording metaphor, and that inconsistency makes things harder.
  • Had to record method calls, using null values, even when I did not care about what parameters were used to invoke a method. It seems like this would make it harder for someone else to understand the intent of the test.
  • Less proxy casting than jMock, but two variables are needed for each mock. Need the mock and a reference to the interface that the mock implements.
  • Tests extend standard JUnit TestCase.
jMock 1 (Java 1.3 or 1.4)

jMock uses expectation specification and essentially implements its own little language for setting expectations on a mock.
  • Documentation is ok, but could be better. There are lots of classes so it can be hard to figure out where to look for something in the Java Doc. The tutorial is not included in the download. There is no full example with source code.
  • jMock Usage is very consistent across all aspects of setting expectations and return values.
  • Expectation definition at first seems verbose, but needs fewer lines of code than EasyMock. Expectations and return values are specified together which can make them easier to read and understand.
  • Method names are specified as strings and may hamper refactoring.
  • Casting proxies is annoying.
  • Tests must extend MockObjectTestCase.
rMock 2 (Java 1.3 and up)

rMock follows the same model of recording expectations as EasyMock.
  • Quite a bit of documentation, but it was not as useful. Had to generate my own Java Doc. No source code examples are included.
  • This framework seems to want to totally redefine how unit tests are written. It implements a whole new assert framework which I found confusing, but it does work with JUnit.
  • No advantage over EasyMock.
  • No special support for JUnit 4 features or Java 5+.
  • Tests must extend RMockTestCase.
EasyMock 2 (Java 5 and up)

This is a nice upgrade from version 1.
  • Documentation the same quality as version 1. Includes a tutorial with source code.
  • Much improved over version 1.
  • Less code than version 1, no control objects, no proxy casting.
  • Does require static imports for the cleanest looking code. More static imports than jMock.
  • Still requires calls to replay and verify methods for every mock in a test.
  • Still relatively the simplest to understand.
  • Same inconsistencies in the model between recording expectations and setting return values as the previous version.
  • Tests extend standard JUnit TestCase.
jMock 2 (Java 5 and up)

This is a significant upgrade that I found much easier to use.
  • Documentation is better. The tutorial is not included in the download. Still no full source code examples.
  • Less code, no proxy casting.
  • Does require static imports for the cleanest looking code.
  • Model is more consistent and simpler.
  • The syntax of expectation setup needs a little getting used to.
  • All mock variables must be declared final.
  • Seems to have the best features of record and play back without the inconsistencies.
  • Tests no longer have to extend MockObjectTestCase. Extending MockObjectTestCase is probably still the easiest thing for JUnit 3.
While I preferred jMock, EasyMock is also an excellent framework; either would be a good choice. I do not recommend rMock. The choice between EasyMock and jMock will come down to personal preference and perhaps the skills of your developers. I think the slightly steeper learning curve of jMock is worth it for the consistency of its model. I was pleasantly surprised at how much easier jMock 2 was to use than jMock 1.

Friday, August 10, 2007

New Version of Cobertura Code Coverage Tool Released

This is not exactly new, but version 1.9 of Cobertura is available. It's not a big update but is worth getting just for the improvements to the branch coverage reporting. I found the branch coverage of no use at all prior to the 1.9 release. Now it is one of the best features. Instead of marking an "if" statement as 100% covered when the "if" block was never entered, Cobertura now requires all conditions to be tested for 100%. The html report even has a nice context popup that tells exactly how many conditions have been tested.

Version 1.9 is a seamless upgrade. I was able to drop it into our builds without any changes to the Ant scripts.
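The difference the stricter branch coverage makes is easy to see with a compound condition (the method below is my own example, not from Cobertura's documentation). A single test calling canEdit(true, true) executes every line, so line-based reporting shows 100%, while condition-aware branch coverage reports the false outcomes of both conditions as untested:

```java
class Access {
    // One call with (true, true) touches every line of this method, but the
    // admin == false and active == false branches are never taken. Coverage
    // that tracks each condition separately exposes that gap.
    static boolean canEdit(boolean admin, boolean active) {
        if (admin && active) {
            return true;
        }
        return false;
    }
}
```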

Friday, August 3, 2007

Recommended Books

Here is a list of the top books on various software development topics I recommend. Believe it or not, I've read all but one or two of them, which should give you a hint as to how long I have been doing this.


"Object Design: Roles, Responsibilities, and Collaborations"
by Rebecca Wirfs-Brock, Alan McKean

This is one of the best books on object-oriented design you will find. Its focus is the responsibility-driven approach to object-oriented design.

"Domain-Driven Design: Tackling Complexity in the Heart of Software"
by Eric Evans

This is an amazing book and still a favorite. This book explains how to model the problem domain knowledge and create a ubiquitous domain language.

"Object Thinking"
by David West

This is a quirky book and I suspect it won't be to everyone's liking. But for serious OO designers it is well worth reading and full of thought provoking ideas and opinions.

"Head First Object-Oriented Analysis and Design: A Brain Friendly Guide to OOA&D"
by Brett D. McLaughlin, Gary Pollice, Dave West

A gentle and entertaining introduction to the subject. This book is mainly aimed at novice designers and will probably bore more experienced developers.

"Design Patterns: Elements of Reusable Object-Oriented Software"
by Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides

What is left to say about the book that started the design patterns movement, other than that every serious developer should read it at least twice? Just please ignore the Singleton pattern.

"Agile Software Development, Principles, Patterns, and Practices"
by Robert C. Martin

A great book with lots of example code that really lays out some key OO design principles such as DRY (Don't Repeat Yourself). Though I thought the bowling example was a bit weak and I told Robert Martin so.

"UML Distilled: A Brief Guide to the Standard Object Modeling Language, (3rd Edition)"
by Martin Fowler

You won't find a simpler, shorter, or more readable introduction to UML anywhere.


"The Mythical Man-Month: Essays on Software Engineering"
by Frederick P. Brooks

Who would have thought this book would still ring true after so many years? This is a classic that you really ought to read.

"Extreme Programming Explained: Embrace Change, (2nd Edition)"
by Kent Beck, Cynthia Andres

Kent Beck is arguably the leading voice for agile development and this is the book that started it all. A must read for anyone interested in agile software development. Sadly, so far I've only read the first edition.

"Lean Software Development: An Agile Toolkit for Software Development Managers"

"Implementing Lean Software Development: From Concept to Cash"
by Mary Poppendieck, Tom Poppendieck

Having trouble convincing your management that agile software development makes sense? If these two books don't help then nothing will.

"Waltzing With Bears: Managing Risk on Software Projects"
by Tom Demarco, Timothy Lister

Another classic that still works. If you only read one book on software project risk this should be it.

"Software Configuration Management Patterns: Effective Teamwork, Practical Integration"
by Stephen P. Berczuk, Brad Appleton

This book will help you move beyond simple check-in and check-out to managing your project's artifacts. It explains the proven SCM patterns and practices needed to succeed.

"Pragmatic Version Control Using CVS"
by Dave Thomas, Andy Hunt

An excellent concise introduction to version control in general and CVS in particular. This is a great book for the new or inexperienced version control user.

"Continuous Integration: Improving Software Quality and Reducing Risk"
by Paul Duvall

This is a very good book. Ever since Martin Fowler's seminal article on the subject we've badly needed this book. I only wish it had been written a couple years earlier. This book will be most useful to those new to the concepts of "Continuous Integration".


"Refactoring: Improving the Design of Existing Code"
by Martin Fowler, Kent Beck, John Brant, William Opdyke, Don Roberts

This is the bible on refactoring. If you have not read this book and think you practice "refactoring", think again. Refactoring, along with unit testing, is a key technique of agile development.

"Refactoring to Patterns"
by Joshua Kerievsky

This book is about using patterns to improve the design of existing code bases.

"The Pragmatic Programmer: From Journeyman to Master"
by Andrew Hunt, David Thomas

This is a must read for every programmer. It is full of all sorts of techniques for improving your craft. Ever wonder why the best programmers are 10 or 20 times more productive? It's probably because they already use the techniques in this book.

"Working Effectively with Legacy Code"
by Michael Feathers

Strategies for fixing that crusty old untested legacy code.

"Code Complete"
by Steve McConnell

I first read this book twenty years ago. The newest edition is just as good a guide to the why and how of writing great code as the original. This should be on every programmer's bookshelf.

Java Programming

"Pragmatic Unit Testing in Java with JUnit"
by Andy Hunt, Dave Thomas

You won't find a better tutorial on unit testing or JUnit. Give this book to any programmer new to developer testing.

"Effective Java Programming Language Guide"
by Joshua Bloch

Every Java programmer must read this book! Any Java programmer that hasn't should not be allowed anywhere near a Java compiler.

"Java Generics and Collections"
by Maurice Naftalin, Philip Wadler

A clear guide to a tough subject. I think everyone will learn at least one new thing.

"Thinking in Java (5th Edition)"
by Bruce Eckel


"Patterns of Enterprise Application Architecture"

by Martin Fowler

"Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions"
by Gregor Hohpe, Bobby Woolf

Friday, July 20, 2007

Java Dates, Calendars and TimeZones

We all know Java date handling is a pain. I'm sure most Java developers give as little thought as possible to dates, less to calendars, and none to time zones. We use new java.util.Date() as much as possible and avoid java.util.Calendar like the plague. This works out, more or less, until you need to account for time zones. Java dates don't include any time zone related methods, so it's easy to ignore them; we forget that a date's underlying value is always relative to Greenwich Mean Time (GMT).

The thing to remember is that a Java date is a very simple object. If you remove the deprecated constructors and methods, there is not much left besides equals(), hashCode() and getTime(). A date is really just a wrapper around a Java long integer value, and that long integer is the number of milliseconds since January 1, 1970, 00:00:00 GMT. We forget about the GMT business because whenever we print a date it looks like this -

Sun Jul 15 10:00:00 CDT 2007

It is easy to forget that the output of toString() is the value of the date in a localized format for display and not the real value. The same date printed in two different time zones would look different even though the actual long integer value is the same. Here is an example that may help -

import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;
import java.util.TimeZone;

public class CalendarTimeZoneTest {
    private static String getCalendarDate(Calendar cal) {
        return (cal.get(Calendar.MONTH) + 1) + "/" +
            cal.get(Calendar.DAY_OF_MONTH) + "/" +
            cal.get(Calendar.YEAR) + " " +
            cal.get(Calendar.HOUR_OF_DAY) + ":" +
            cal.get(Calendar.MINUTE) + ":" +
            cal.get(Calendar.SECOND);
    }

    private static Calendar getCalendarForTimeZone(TimeZone tz) {
        Calendar cal = new GregorianCalendar(tz);
        cal.set(Calendar.HOUR_OF_DAY, 10);
        cal.set(Calendar.MINUTE, 0);
        cal.set(Calendar.SECOND, 0);
        cal.set(Calendar.MILLISECOND, 0); // zeroed so runs are repeatable
        cal.set(Calendar.MONTH, 6);       // July - months are zero-based
        cal.set(Calendar.YEAR, 2007);
        cal.set(Calendar.DAY_OF_MONTH, 15);
        return cal;
    }

    private static void showDate(Calendar cal) {
        System.out.println("  Time zone    : " + cal.getTimeZone().getDisplayName());
        System.out.println("  Milliseconds : " + cal.getTimeInMillis());
        System.out.println("  Calendar Date: " + getCalendarDate(cal));
        System.out.println("  Local Date   : " + new Date(cal.getTimeInMillis()));
    }

    public static void main(String[] args) {
        Calendar localCal = getCalendarForTimeZone(TimeZone.getDefault());
        Calendar japanCal = getCalendarForTimeZone(TimeZone.getTimeZone("Japan"));
        System.out.println("Same date/time in different time zones");
        showDate(localCal);
        showDate(japanCal);

        japanCal = new GregorianCalendar(TimeZone.getTimeZone("Japan"));
        japanCal.setTimeInMillis(localCal.getTimeInMillis());
        System.out.println("Same milliseconds in different time zones");
        showDate(localCal);
        showDate(japanCal);
    }
}
First the example creates two calendars with the same date and time for two different time zones. This results in two dates with different millisecond values. Next, the example creates two calendars with the same milliseconds in two different time zones. The output looks like -

Same date/time in different time zones
Time zone : Central Standard Time
Milliseconds : 1184511600000
Calendar Date: 7/15/2007 10:0:0
Local Date : Sun Jul 15 10:00:00 CDT 2007

Time zone : Japan Standard Time
Milliseconds : 1184461200000
Calendar Date: 7/15/2007 10:0:0
Local Date : Sat Jul 14 20:00:00 CDT 2007

Same milliseconds in different time zones
Time zone : Central Standard Time
Milliseconds : 1184511600000
Calendar Date: 7/15/2007 10:0:0
Local Date : Sun Jul 15 10:00:00 CDT 2007

Time zone : Japan Standard Time
Milliseconds : 1184511600000
Calendar Date: 7/16/2007 0:0:0
Local Date : Sun Jul 15 10:00:00 CDT 2007

What I hope the example demonstrates is that the same Java date, i.e. millisecond value, has different meanings in different time zones. If you want the same date and time in different time zones you must take the offset between time zones into account. The easiest way to do this using the standard Java libraries is to create a date from its parts, year, month, day, etc., using a Java calendar object.
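Another way to see that formatting is display-only is to format the same Date object for two different time zones. This is a sketch of my own (not from the original post), reusing the millisecond value from the output above:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class SameInstantTwoZones {
    // Format the same Date for display in a given time zone.
    static String inZone(Date d, String tzId) {
        SimpleDateFormat fmt =
            new SimpleDateFormat("EEE MMM d HH:mm:ss zzz yyyy", Locale.US);
        fmt.setTimeZone(TimeZone.getTimeZone(tzId));
        return fmt.format(d);
    }

    public static void main(String[] args) {
        // The millisecond value from the Central time example above.
        Date d = new Date(1184511600000L);

        // One Date, one long value, two different displayed dates.
        System.out.println(inZone(d, "America/Chicago")); // Sun Jul 15 10:00:00 CDT 2007
        System.out.println(inZone(d, "Japan"));           // Mon Jul 16 00:00:00 JST 2007
    }
}
```

The Date itself never changes; only the SimpleDateFormat's time zone decides what you see.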

Sunday, July 15, 2007

Lazy Initialization Using an On Demand Holder

I was reminded of the "On Demand Holder" idiom the other day. If you're not familiar with it, the idiom is a thread-safe replacement for lazy initialization using double-checked locking. As we all know, double-checked locking is broken. Here is an example implementation -
public class Something {
    private Something() { }

    private static class Holder {
        private static final Something instance = new Something();
    }

    public static Something getInstance() {
        return Holder.instance;
    }
}
This works because of the way classes are loaded. The inner class Holder is not loaded and initialized until a thread references it, so the static instance of Something is not created until the first time that the getInstance() method is called. Here is more information and references for "On Demand Holder".
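Here is a small self-contained sketch of my own (the class names are hypothetical) that makes the lazy behavior visible: the constructor's print statement doesn't appear until the first getInstance() call, and every caller gets the same instance.

```java
public class LazyDemo {
    private LazyDemo() {
        System.out.println("LazyDemo instance created");
    }

    // Initialized only when first referenced; the JVM's class
    // initialization rules also make this thread safe without
    // any explicit synchronization.
    private static class Holder {
        private static final LazyDemo INSTANCE = new LazyDemo();
    }

    public static LazyDemo getInstance() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        System.out.println("before first getInstance() call");
        LazyDemo a = getInstance();  // Holder is initialized here
        LazyDemo b = getInstance();  // same instance, no new construction
        System.out.println("same instance: " + (a == b));
    }
}
```

Running main prints "before first getInstance() call" before "LazyDemo instance created", which shows the instance really is created on demand.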

Saturday, July 14, 2007

Programmer Personality Test

Here's a link to an interesting "Programmer Personality Test". If you've ever taken a Myers-Briggs test, then this one should be familiar. It's short, painless, and worth taking if just for the entertainment value. You might learn something, and it could spark interesting discussions with your teammates. A test like this could be useful as part of the hiring process, though this one is short and I'm not sure how accurate it is.

Thursday, July 5, 2007

What's Your Build Process?

So what is your build process like? Is it simple or complicated? Is it manual or automated? How many of the Five R's of Agile SCM Baselines does it satisfy? I've been doing a lot of work on software build processes lately, and here is a simple four step process that mostly satisfies those "Five R's" -

  1. Get the latest sources from the SCM repository (branch or mainline).
  2. Execute the build against the latest sources.
  3. Commit the results of the build back to the SCM repository.
  4. Label the results of the build in the SCM repository (branch or version as needed).
Four steps; it looks pretty simple. I can't think of anything simpler that would still be robust. So how about automating the process? What would be the simplest way to get it fully automated and as mistake-proof as possible? There are lots of build servers out there, both open source and commercial; it seems like one of them ought to be able to handle the job.

I've evaluated quite a few build servers recently (I'll post reviews later) and can't find one suitable product. They all easily handle continuous integration builds, but none can automate all four build steps. Obviously, all the servers can manage step 2, and they all partially handle step 1, i.e. getting the sources from the mainline. All the servers can check out source code, but if you want to do it from more than one branch you need a configuration for each branch. None of the servers can commit anything back to the repository without external scripting of some sort, let alone branch your project. If I need to write scripts or maintain multiple configurations, what have I gained over just doing the four step process manually?

This is for Java application development, and Ant is used for the builds. I've looked at enhancing the Ant builds, and as much as I like Ant, it is just not well suited for this task by itself. It might be possible to write custom Ant tasks, but I really don't want to create a custom build system that needs to be maintained. It's enough work just maintaining the Ant builds as they are. Something seems wrong when it is so hard to automate such a simple process.

Sunday, July 1, 2007

A Java Builder Pattern

There's a Builder pattern that Joshua Bloch has briefly described in a couple of his "Effective Java Reloaded" sessions at JavaOne. This Builder is not necessarily a replacement for the original design pattern. The problems this Builder pattern can solve are too many constructors, too many constructor parameters, and overuse of setters to create an object.

Here are some examples of the pattern in use. These examples create various Widgets with two required properties and several optional ones -

Widget x = new Widget.Builder("1", 1.0).model("A").build();
Widget y = new Widget.Builder("2", 2.0).model("B").serialNumber("123").build();
Widget z = new Widget.Builder("3", 4.0).model("C").serialNumber("456").manufacturer("Acme").build();

The basic idea behind the pattern is to limit the number of constructor parameters and avoid the use of setter methods. Constructors with too many parameters, especially optional ones, are ugly and hard to use. Multiple constructors for different modes are confusing. Setter methods add clutter and force an object to be mutable. Here is a class skeleton of the pattern -

public class Widget {
    public static class Builder {
        public Builder(String name, double price) { ... }
        public Widget build() { ... }
        public Builder manufacturer(String value) { ... }
        public Builder serialNumber(String value) { ... }
        public Builder model(String value) { ... }
    }

    private Widget(Builder builder) { ... }
}

Notice that Widget has no public constructor and no setters and that the only way to create a Widget is using the static inner class Widget.Builder. Widget.Builder has a constructor that takes the required properties of Widget. Widget's optional properties can be set using optional property methods on the Widget.Builder. The property methods of Widget.Builder return a reference to the builder so method calls can be chained.

A really nice feature of this pattern is the ability to do pre-creation validation of an object's state. When setters are used to set object state during creation, it is virtually impossible to guarantee that the object has been properly created.
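As a sketch of that idea (a hypothetical Part class of my own, not the Widget from this post), build() can check the required state and fail fast, so an invalid object never exists:

```java
public class Part {
    private final String name;
    private final double price;

    public static class Builder {
        private String name;
        private double price;

        public Builder(String name, double price) {
            this.name = name;
            this.price = price;
        }

        public Part build() {
            // Pre-creation validation: reject bad state before the
            // Part is ever constructed.
            if (name == null || name.isEmpty()) {
                throw new IllegalStateException("name is required");
            }
            if (price < 0) {
                throw new IllegalStateException("price must be non-negative");
            }
            return new Part(this);
        }
    }

    private Part(Builder builder) {
        this.name = builder.name;
        this.price = builder.price;
    }

    public String getName() { return name; }
    public double getPrice() { return price; }

    public static void main(String[] args) {
        Part ok = new Part.Builder("gear", 9.99).build();
        System.out.println(ok.getName() + " $" + ok.getPrice());
        try {
            new Part.Builder("", -1.0).build();  // never constructed
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

With setters there is no single point where all the state is known; with a Builder, build() is exactly that point.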

Here is the full source for Widget and its Builder -

public class Widget {
    public static class Builder {
        private String name;
        private String model;
        private String serialNumber;
        private double price;
        private String manufacturer;

        public Builder(String name, double price) {
            this.name = name;
            this.price = price;
        }

        public Widget build() {
            // any pre-creation validation here
            Widget result = new Widget(name, price);
            result.model = model;
            result.serialNumber = serialNumber;
            result.manufacturer = manufacturer;
            return result;
        }

        public Builder manufacturer(String value) {
            this.manufacturer = value;
            return this;
        }

        public Builder serialNumber(String value) {
            this.serialNumber = value;
            return this;
        }

        public Builder model(String value) {
            this.model = value;
            return this;
        }
    }

    private String name;
    private String model;
    private String serialNumber;
    private double price;
    private String manufacturer;

    /**
     * Creates an immutable widget instance.
     */
    private Widget(String name, double price) {
        this.name = name;
        this.price = price;
    }

    public String toString() {
        return super.toString() + " {"
            + "name=" + getName()
            + " model=" + getModel()
            + " serialNumber=" + getSerialNumber()
            + " price=" + getPrice()
            + " manufacturer=" + getManufacturer()
            + "}";
    }

    public String getManufacturer() {
        return manufacturer;
    }

    public String getModel() {
        return model;
    }

    public String getName() {
        return name;
    }

    public double getPrice() {
        return price;
    }

    public String getSerialNumber() {
        return serialNumber;
    }
}
Notice that Widget's private constructor takes the required properties and that the Builder sets the optional properties. Another thing to note is that Widget, as implemented, is an immutable object.