Saturday, March 31, 2007

Some of us are more equal than the others

A few days ago I told you about the three forces that are often at work within a project: what, when and how. This is a follow-up to tell you what the discussion I had was about.

I feel strongly that there should be a balance between what is built, how it's built and when it will be done. You need to strike a balance between these three forces to make any project a success. That balance can be achieved in two very different ways: either make all three the responsibility of one person, or make each force the responsibility of a separate person.

Years ago, when projects were often relatively simple, scope was relatively clear and deadlines were realistic, it was reasonable to have one person run an entire project. And even today there are projects and people that can make the "one responsible person" setup work. Mind you: that one person doesn't have to do all the work himself or herself. But he or she is responsible for all three facets (what, when and how) at the same time and has to strike a balance between them all the time. Constantly balancing whether having this extra feature is really worth the extra time and risk. Or whether changing the code structure this close to the deadline is really worth the risk it introduces of not being done on time.

But most projects these days have grown way too complex for one person to handle all these responsibilities. These days it is more common to have three separate people fulfill the three separate roles: the functional guy, the technical guy and the "deadline guy". They're always fighting with each other and often you will find two of them ganging up on the third. But in the end they always have to find a compromise on which all three agree.

Now imagine what happens if one of these three is also made responsible for the project as a whole. What happens to the balance of power between the three then? They're all responsible for one aspect, but suddenly one of them is also responsible for the total result. This completely destroys the delicate balance of power that was so carefully introduced by having a different person be responsible for each of the factors. You might as well just have one person be responsible for all three powers. In fact, that would be better. Because then at least there is one leader, instead of three leaders, one of which is a bit more important than the others.

Tuesday, March 27, 2007

The trias projectica

I had an interesting (that seems to be my stopgap these days) discussion the other day about who is in charge of a project. Who is responsible for its timely and correct delivery? As is often the case, "the other side" claimed the project manager is ultimately responsible for the entire project. I disagree strongly with that view.

As far as I can see there are (at least) three key factors in a project:

  • what do we build?
  • how do we build it?
  • when do we deliver it?

These are known by many other names (like: features, quality, deadline), but I'd like to stick with what, how and when for now. The what is determined by a functional designer, or these days more often either a product manager or a "usability" expert. The how will normally be determined by the developers, the technical lead or maybe even an "architect". The when is the responsibility of the project manager, the resource planner or the "director".

As you might know or notice, there is a dependency between these factors. You can choose to create more functionality (what) in the same way (how), but that will certainly take more time (when). You can also say that you want more functionality and want it delivered in the same time, no matter how it's built. It will certainly result in lower quality (not to mention less motivated developers), but it can be done.

The important thing is that a change in one of the factors influences the others. And the trick to a successful project is balancing these three factors.

Sunday, March 25, 2007

If it didn't take effort

I think every developer should be able to write his programs within a simple text editor. Knowing how to change your applications without your preferred IDE will be very valuable in precisely those situations where your IDE is not available. You know, at the live web server, at a customer site or even at the desk of another developer (if your organization chose not to standardize on one IDE).

But of course working within an IDE does make life much easier. Auto-complete is such a time saver (especially if it is the smart-complete type that IntelliJ IDEA introduced me to), and having parameter hints or even complete API help at your fingertips really makes a lot of difference.

But there is something I really don't like about the way IDEs try to save us time. They almost all have the ability to generate getters and setters for member fields. And when they do, they generate documentation for those getters and setters. That's great, because it saves you from having to write your own. Right? Wrong!

Since even the smartest IDE has no idea of the semantics of a certain getFoo method, all it can generate is documentation of the kind "gets the value of Foo". How useful is that? When was the last time you were using a certain API, weren't sure of the functionality of one of its getters, opened the API docs, read "gets the value of Bar" and thought "... of course, how silly of me not to realize that"?
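To make that concrete, here's a small Java sketch (the class and its fields are made up for illustration): the first getter carries the kind of comment an IDE generates, the second the kind that took actual effort and tells the reader something the signature doesn't.

```java
public class Order {
    private int foo;          // stand-in for any plain field
    private double discount;  // as a fraction, e.g. 0.25 for 25%

    /** Gets the value of foo. */
    public int getFoo() {
        // The comment above adds nothing beyond the method name.
        return foo;
    }

    /**
     * Returns the discount as a fraction between 0.0 and 1.0,
     * already capped at the store-wide maximum of 30%.
     */
    public double getDiscount() {
        // This comment documents actual semantics: the cap.
        return Math.min(discount, 0.30);
    }

    public void setDiscount(double discount) {
        this.discount = discount;
    }
}
```

The second comment is the one you'd be glad to find in the API docs; the first is the one this post is complaining about.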

Yet we all like it very much when our IDE generates such comments for the getters we write (or actually generate). As it allows us to get to the finish as soon as possible. Let's see... the code seems to work, so what else is needed? Camel-cased all method names... check... Pascal-cased the class name... check... Meaningful local variable names... check... And documentation for all public methods... we only have getters and setters, so that's a... check! We're done, so let's commit it and allow others to admire my work.

When I review someone's code and see this type of documentation, I always make a note of it and take it up with them. And to make it easy to remember, I've again come up with this one-liner: if it didn't take effort to write comments, they're of no value to me when I read them. My tech writer colleagues will probably nod in agreement when they read this, but it really is true. And don't you dare use this mentioning of tech writers to say that real documentation should really be in a separate written document, like a programmer's reference guide!

Think about it: how often do you look something up in the API docs of your language or library? How useful is that documentation? And how much effort do you think the creators put into it? Does it seem like half of it was generated by their IDEs? Or closer to the truth: if it looks like half of it was generated by an IDE, how often do you curse the developers for not providing you with better documentation? And more than likely those are precisely the libraries where you reach for the programmer's reference guide. Which was written by someone who really spent time on documenting the functionality, instead of hoping his word processor would auto-complete it for him.

Friday, March 23, 2007

iframe.onload doesn't fire for ActiveX controls

Last week I had a few very frustrating days with the web interface of a new application we're building. The web application basically allows you to navigate websites as they were at a given moment in time.

The web application is built up from some panels with additional information and a main iframe hosting the actual web page. You type the URL of the page you want to see and presto, there you have it. On the side we'll show you all the other versions of that page we have available. Clicking on one of those versions will bring it up in the iframe.

So far, so good. The problems started when allowing the user to click links in the page that is showing in the iframe. We use a mix of rewriting the HTML before it's shown and intercepting the requests on the server, so that following links actually takes you to the destination as it was at the moment you are viewing. It might sound a bit complex, but it feels a bit like time travel. Imagine the Wayback Machine, with links working historically.
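The rewriting step could look something like this hedged Java sketch (the `/archive` endpoint, the `at` parameter and the regex-based approach are my assumptions for illustration, not the actual implementation): every link in the page is redirected back into the archive, carrying along the timestamp of the version being viewed.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LinkRewriter {
    // Naive href matcher; a real implementation would use an HTML parser.
    private static final Pattern HREF = Pattern.compile("href=\"([^\"]+)\"");

    /**
     * Rewrites every link so it points back into the archive,
     * preserving the timestamp of the version the user is viewing.
     */
    public static String rewrite(String html, String timestamp) {
        Matcher m = HREF.matcher(html);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String target = "/archive?url=" + m.group(1) + "&at=" + timestamp;
            m.appendReplacement(out,
                    Matcher.quoteReplacement("href=\"" + target + "\""));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

With this in place the server can intercept the `/archive` request and serve the historical version, which is the "time traveling" effect described above.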

To update the information in the local panels we hook the onload event of the iframe. And that's where the problems started. Because on some of our systems, the onload event doesn't seem to fire when we open a PDF. It seems to only happen on IE, but not consistently on all our IE systems. A workaround with periodically inspecting the iframe also doesn't work, because somehow we can't even get the URL for the PDFs. And before you ask: yes, the PDFs are served from the same domain as the rest of the page.

It's really frustrating, because when the onload doesn't fire it breaks our UI logic. And the workarounds we've had to do are not pretty and slow the UI responsiveness way more than we'd like.

I've done some searching to see whether other people have also had this problem. But of all the questions being posted on iframes, this doesn't seem to be a common occurrence. That's why I post it here. Has anyone had the same problem? And if so, what was your solution?

Monday, March 19, 2007

Our Mac doesn't run Sherlock

One of the good things about Macs is that women are instantly attracted to them. So those same pheromones that once attracted her to me last Saturday started flowing between my wife and our new iMac. She approached it slowly at first, carefully absorbing its beauty and the fact that a 20" screen looks pretty huge in our living room. Believe it boys: size does matter!

Within minutes she was happily playing with Safari ("just like Firefox"), iTunes ("hey, where are my songs?"), Photo Booth ("hihi, you look funny") and the simple games that come with OSX. That last one started her on a quest for the favorite game she has on her Windows box: Sherlock - a puzzle game with horrible graphics but very addictive gameplay. The game wasn't installed on the iMac of course, but that didn't stop her from searching and meanwhile exploring the new computer. Her quest to find the game in the end brought her to the Applications menu, where -much to her delight- she found it: Sherlock!

She clicked the icon, starting the Mac search application. Let's just say that she wasn't too excited about the way the OSX programmers had implemented her favorite game!

Sunday, March 18, 2007

Macs are heavy

Yesterday I finally gave in and bought my first Mac. I've been ogling them for years, because the machines look gorgeous and the programs I use are mostly available for any platform. The reason I hadn't gotten a Mac so far had simply been the price. They sure are expensive.

But this weekend there was a sale at a local Mac store, so I gave in and spent some of my savings on a brand new 20" iMac. One thing I didn't count on though, was the weight of such a machine. Macs look so clean and bright that I somehow imagined they wouldn't weigh more than a few kilos. Boy was I wrong! The short distance I had to carry the Mac from the store to the car was killing. I had to stop twice just to get my breath back.

Luckily I could park close to home, so I had less trouble there. And the unpacking of such a beautiful device easily made up for the hard work to get it home. I still have no idea what I need it for, but my iMac sure looks great in the living room.

Saturday, March 17, 2007

More on zooming in

Sorry about the lack of updates in the last few weeks. It's been terribly busy at work, which left even less time than usual for updating my web log. The good news of such activity at work is of course that I have more topics than ever to post on.

But I'd like to start with some more background on my one-liner of last week: looking closely, insignificant details turn into things of infinite beauty.

For those of you that don't know, one of my hobbies is photography. If you want to see some of my work, visit StutterShutter which is a joint effort with some friends or BuitenBlog (in Dutch) where my wife and I post regular updates.

My specialty when it comes to photography is close-ups. What became my specialty actually started out as a way to get enough photos out of our small living room to fill a twice-a-week photo blog. If you just show overview pictures, you'll be running out of material within weeks. But if you move in close enough, even a small living room can last a few years.

We've been running our photo blog for over three years now. And even though we've moved to a different house with a bigger living room and expanded the scope of our photo blog to include things outside of the house, I still post almost nothing but close-ups.

I was recently talking with my wife about why that is. And at the risk of sounding a bit over-dramatic: the way I take my pictures has made me aware of the beauty of many things that I never noticed before. By moving up close, I started to appreciate things like the colors of a sunrise or sunset, the intricate details of many flowers and insects and the lines left in the sand at low tide. But also many man-made things, such as a simple screw, my wife's medical supplies, the twisting pattern in a cable and Nespresso cups.

So if you ever find yourself with some free time and a camera on your hands, zoom in. You never know... you might discover something that was always there, but you never noticed.

Tuesday, March 13, 2007

Zoom in

Looking closely, insignificant details turn into things of infinite beauty.

Sunday, March 4, 2007

Don't generate

Last week I spent quite some time debugging a web service and its corresponding client code. The client code was generated from the web service description (WSDL). I'm sure the generated client provides a wonderfully easy way to access a web service from your code, but it didn't help to make my debugging effort any easier. Code generators have the habit of not focusing on the maintainability of their generated offspring, let alone caring about the sanity of their designated debugger. In the end I'm pretty sure I caught all the problems lurking in there, but it reminded me of one of my older principles: don't generate code from data.

If you've written code to interpret some data and then generate code from it, why not skip the code generation step? Just keep the code that interprets the data and work from that. That means that as soon as you update the data, your program will work with it. No forgetting to generate the updated code, no risk of overwriting handwritten changes (see the "generation gap" pattern).
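As a hedged Java sketch of what I mean (the schema and field names are invented for illustration): instead of generating a class with typed fields from a schema, keep the schema as data and interpret it at runtime. Update the schema, and the program immediately works with it.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SchemaInterpreter {
    // The "data" a code generator would turn into a class with getters
    // and setters. Here it stays data, loaded or edited at will.
    private final Map<String, Class<?>> schema = new LinkedHashMap<>();

    public SchemaInterpreter() {
        schema.put("name", String.class);
        schema.put("age", Integer.class);
    }

    /**
     * Validates a record against the schema by interpretation,
     * instead of relying on generated, compiled-in types.
     */
    public boolean isValid(Map<String, Object> record) {
        for (Map.Entry<String, Class<?>> field : schema.entrySet()) {
            Object value = record.get(field.getKey());
            if (value == null || !field.getValue().isInstance(value)) {
                return false;
            }
        }
        return true;
    }
}
```

The interpreter is one small piece of code you can step through in a debugger, instead of thousands of generated lines you have to wade through.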

Some might find this approach less flexible and not as easily supported by tools (although that is changing with the current push for DSLs), but I find it a lot easier to control... and debug.