Sunday, 03 September, 2006
Tales from the Trenches: The Forth Incident
In my first games programming job, I took over development of a game that the original designer had started. He had written a "first playable" version that included basic gameplay and graphics. My task was to work with the publisher to complete the design and then implement it. The contract called for a Windows version and a version for the Macintosh. Since we didn't have any in-house Macintosh experience, we contracted with another development firm to complete the Mac version.
The entire project was written in C. We wrote all of the internal game logic to be platform independent so that it could compile without change on either platform. I put all of the Windows-specific user interface code in separate modules, and had a very strict separation between platform-specific and platform-independent code. Such was common practice, and it had worked well for us in other projects that we had developed for multiple platforms. All the other programmer had to do was develop the Mac-specific user interface and attach it to the base game logic. Simple, right?
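To make the arrangement concrete, here's a minimal sketch of that kind of separation. All of the names here are hypothetical, not from the actual game: the core game logic calls through a small platform interface, and each port supplies its own implementation of that interface.

```c
#include <stdio.h>

/* ---- platform.h: the interface every port must implement ----
 * The platform-independent core calls only these functions and
 * knows nothing about GDI, QuickDraw, or any other toolkit. */
void platform_draw_tile(int x, int y, int tile_id);

/* ---- game.c: platform-independent game logic ----
 * Compiles unchanged on Windows and Mac. */
typedef struct {
    int player_x;
    int player_y;
} GameState;

void game_init(GameState *gs) {
    gs->player_x = 0;
    gs->player_y = 0;
}

void game_move_player(GameState *gs, int dx, int dy) {
    gs->player_x += dx;
    gs->player_y += dy;
    /* The core asks the platform layer to draw; it never draws itself. */
    platform_draw_tile(gs->player_x, gs->player_y, 1 /* player tile */);
}

/* ---- win_ui.c (or mac_ui.c): the platform-specific module ----
 * A real port would call the native graphics API here; this stub
 * just logs so the sketch is runnable anywhere. */
void platform_draw_tile(int x, int y, int tile_id) {
    printf("draw tile %d at (%d,%d)\n", tile_id, x, y);
}
```

The Mac contractor's whole job, in this scheme, is to write another `platform_draw_tile` (and its siblings) against the same header, leaving the tested core untouched.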
The first few milestones in the Mac port came and went, with the programmer showing good progress on the port. One day, however, we got a call from the owner of the other development house. He had to fire the programmer who had been working on the game. Why? Because the programmer--a very junior developer who was working at his first job--had decided that it would be easier to throw out the 20,000 lines of fully-tested platform-independent C code and replace it with entirely new code written in Forth. Let's just say that none of us involved in the project were very happy about that.
I don't have anything against Forth as a programming language, and I'll be the first to admit that my C code didn't present the best possible interface, but discarding a large body of existing and proven code in favor of something new is almost always a bad idea.
This "throw it away and start over" mentality is very common among junior developers who don't understand that things are usually much more difficult than they appear. They think that it's easier and faster to rewrite code from scratch than to take the time to study and understand the existing interfaces. Besides, implementing user interface code isn't nearly as cool or interesting as developing core game logic. And we all know how programmers will favor cool and interesting over tedious UI hacking.
Ultimately, of course, the blame lies squarely on the project manager, who let several milestones go by before performing even a cursory inspection of the code. Had he examined the programmer's work at the first milestone, he would have saved himself a whole lot of time, money, and heartache. As it was, the owner had to pull a programmer from another project so that they could finish our contract on time.
The moral of the story is twofold. First, a complete rewrite takes longer than you think, especially if you don't fully understand what you're rewriting. Worse, the new code will likely reintroduce non-obvious bugs that existed in the old code but were fixed over time. The old code, as clunky as it might appear, embodies a whole lot of hard-won knowledge that will almost certainly be lost in a rewrite.
Second, programmers need supervision, especially junior programmers at the beginning of a product development cycle. You have to live with your early decisions throughout the product cycle, and any mistake made early on becomes ever more expensive to fix as time goes by. It's imperative that managers keep an eye on the development team's early decisions.