Tuesday, 26 February, 2002
About two years ago we created an in-house application to send survey invitation emails and track respondents so we could send reminder messages to people who hadn't responded. The program was necessary because our survey tool, Inquisite, was designed for anonymous surveys where you can't tell who a person is unless you ask (and they respond). Over the years, we've made the custom invitation mailer program available to clients as a custom solution. It is still very much an in-house application with a somewhat goofy user interface, and we make some assumptions about the client's environment that we just couldn't make if we tried to sell it as part of a commercial package. Just this week, for example, we've had to make changes so it will support SQL Server databases (in-house we'd only used it with Access databases), and we had to add some more stringent communications error checking to support other types of SMTP servers. Those kinds of things make the difference between an in-house utility application and a successful commercial product.
In his book Zen of Windows 95 Programming (an excellent book, by the way, despite the terrible title), Lou Grinzo makes a distinction between public code (code that other people see) and private code (code that only the original programmer ever sees), and explains how private code has a distressing tendency to become public code—almost 100% of the time. He makes a very good case for doing things right no matter what. You may think that the one-shot conversion application you just wrote will never escape from your hard drive, but you're probably wrong. You're going to offer it to a co-worker one of these days, and he's going to give it to somebody else. Before you know it, the development team's going to drop it into the product and a year from now one of your best customers is going to run smack dab into a big ugly bug that resulted from an assumption you made when you wrote the program. Don't tell me it won't happen. It has. To every programmer. At best it's embarrassing. Certainly, the development team bears some of the blame for incorporating what is essentially untested code, but a large part of the blame falls squarely onto the shoulders of the original author.
On my white board I have written:
If you don't have the time to do it right,
where will you find the time to do it over?
Every time I ignore that, I end up regretting it.
Monday, 25 February, 2002
A couple of months after I bought my Osborne I computer (see December 4, 2001), Dad and I bought an Epson MX-80 F/T printer from Orange Micro. That would have been in March of 1982. For $625, we got an 80-column printer that would print 80 characters per second (about a page a minute) in four different typefaces. The printer's base price was about $500, but we had to spring for the special serial interface card because the Osborne's parallel interface wasn't compatible. The printer was nothing fancy, but combined with WordStar on the Osborne I, it beat my old typewriter hands down.
Somebody at the office last week was looking for a dot matrix printer to test some label printing software they're writing as part of a larger custom solution. The only dot matrix printer at the office is connected to the phone system for some kind of logging or another, so I offered to bring mine in. Yes, I still have that old Epson. 20 years and still going strong. I'll post a picture if I can remember to take the camera to the office.
So we connected the printer to a computer and told Windows that it was an Epson MX-80 F/T. Started Notepad and sent some text to the printer. The printer didn't like it—started spitting out pages, beeping, and printing all kinds of weird characters. Why? Because Windows was sending graphics output to the printer, expecting the old Graftrax option to be installed. At first I thought there would be an option on the driver to disable graphics output, but no luck. Then I thought that maybe Epson's web site might have the answer. (It didn't, by the way, but I did find a place to download a PDF of the original user guide. Go to http://support.epson.com/cgi/find_product.pl?product=Dotmatrix&tab=documentation.html, and then pick your printer.) After an hour, I finally remembered that Windows has a "Generic/Text" driver. Once we installed that driver, the printer worked. Watching and listening to that little printer go brought back some fond memories.
In retrospect, I'm astonished that hardware from 20 years ago still works with current machines. Perhaps more astonishing is that I was able to find a ribbon for the printer at Fry's. As long as Epson continues to design new printers that use that type of ribbon, I'll be able to get ribbons for my old printer. I like that idea. It's the ultimate expression of the "if it ain't broke, don't fix it" mentality that I wish the computer industry as a whole and most other industries in general would adopt.
Sunday, 24 February, 2002
Gamespot ran an article recently about abandonware games: computer games that are no longer available through retail channels, and no longer marketed, distributed, or supported by the companies that published them. The list of such games is long indeed, and includes TriTryst, the first game I ever wrote. People are collecting the games and making them available for free over the Internet; a practice which has raised more than one eyebrow. Software publishers insist that to do so is a violation of their copyrights. Activist game junkies insist that, since the publishers have "abandoned" the titles, they have somehow relinquished their rights to control the distribution. Some have gone so far as to suggest that copyright law be amended to include such a clause. Most of the arguments are along the lines of "once you've let the cat out of the bag, you can't put it back in." From my perspective, it looks like just another attempt at eliminating laws that protect intellectual property.
A common argument is that by making a protected work available the publisher incurs some public obligation, in perpetuity, to continue to make that work available regardless of financial considerations. If the publisher decides to stop distributing the work, then it must release the work into the public domain, transfer the rights to somebody else, or in some other way continue to make the work available. I disagree. Strongly. The creator of a work should, under the law, have full control over that work's disposition. The creator of a work (or those to whom he assigns rights), not "the public" has, and should continue to have, full control over the work's disposition, including removing it from the market at any time for any reason whatsoever. Intellectual property is property, and its creators and owners deserve full protection.
That said, I'm happy to see TriTryst and other abandoned games available on the abandonware sites. The more respectable of the sites (The Underdog is one such) have a policy of not posting games that are currently active, and removing any game if the publisher so requests. Copyright holders, for the most part, allow this to continue because the old games increase the publishers' exposure without affecting sales of their new games. As long as the abandonware sites honor publishers' requests to discontinue distribution of particular games, then everybody wins, except those who would dictate to an owner how his property is to be used.
Saturday, 23 February, 2002
Always among the last to try something new, I waited until today before finally checking out the Krispy Kreme Doughnuts shop that opened down the road. It's quite the impressive place. As you walk in the door you can see the doughnut-making machine—hundreds of raw doughnuts rising as they move along towards the deep fryer, where they float partially submerged in the oil, and then are flipped so the other side gets fried. Then up the conveyor belt and back down and through to an icing waterfall. They offered us a fresh, hot doughnut that'd just come out of the icing. The thing melted in my mouth. It made a very good first impression. The service was fast—even on a Saturday morning—and very friendly. We'll be back.
Thursday, 21 February, 2002
A couple of weeks ago (see February 8), I mentioned that we were doing some multi-user debugging on the latest version of our software. The project started when I was doing some stress testing for a potential client—they wanted to know how many transactions we could handle per second. I tried Microsoft's Web application stress tool to gather the metrics, but it is too generalized to be of much use. Instead, I wrote a program that would simulate multiple users hitting our site and taking surveys. The first version of the program just hit the server as hard and as fast as it could, with as many threads as possible. That was good for maximum throughput testing, but didn't simulate "typical" use, which is what we really wanted. I'll be the first to admit that we should have done this long ago. Ideally, we should have developed the test application in parallel with the product. But we're all on limited budgets these days, and getting something out is often more important than delivering the highest quality software. (Much to my dismay, I might add, but that's a separate rant.)
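The "typical use" version of the simulator boils down to giving each simulated user a random think time between page requests instead of hammering the server flat out. My tool was written in Delphi; here's a minimal Python sketch of the same idea, with the actual HTTP fetch stubbed out (the page names and timings are hypothetical):

```python
import random
import threading
import time

completed = []
completed_lock = threading.Lock()

def simulated_user(user_id, pages, think_time_range=(0.001, 0.005)):
    """Walk through a survey one page at a time, pausing between pages
    the way a real respondent would, rather than at maximum throughput."""
    for page in pages:
        # fetch_page(page) would issue the real HTTP request; stubbed out here
        time.sleep(random.uniform(*think_time_range))
    with completed_lock:
        completed.append(user_id)

threads = [threading.Thread(target=simulated_user, args=(i, ["q1", "q2", "q3"]))
           for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Cranking up the thread count or shrinking the think times moves the test back toward the hit-it-as-hard-as-possible mode.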
Since the development team is busy building the system and it's my client who wanted the performance metrics, it fell to me to implement a testing tool. Not that I mind—I get few opportunities these days to do real programming work. I've spent the better part of two weeks on the project, and have assisted the development team in tracing a number of potentially embarrassing bugs, and some serious performance bottlenecks.
Along the way (and here's the point of today's ramblings), I had to implement a priority queue in which to store events. My first pass just implemented it as a sorted list with a sequential search to locate the insertion point for new items. That's all well and good for a small queue, but becomes terribly inefficient when the queue size grows beyond a couple hundred items. Inserting an item into a sorted list takes on the order of N operations. Removing an item takes a similar amount of time. When the average queue size is 100 items, that's not much of a problem. When N reaches 10,000, though, you start to notice the difference. A binary heap implementation of priority queues takes an average of log2 N operations to insert or delete an item. In a priority queue containing 10,000 items, inserting into a sorted list will take 10,000 operations (finding the insertion spot and then inserting the item). A binary heap implementation will take about 14 operations.
Not having written a priority queue for at least a decade, I was at a loss when it came time to implement that improvement. Fortunately, you can find anything on the Internet. A good place to start looking for priority queue implementations is this page, which has links to many different implementations and a good primer on priority queues in general. I based my Delphi implementation on the ANSI C Reference Implementation found here. That page also has a good description of the data structure and a good explanation of the implementation.
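For readers who want to play with the idea without writing the heap by hand: most languages ship one. My implementation was Delphi, but here's a sketch in Python, whose standard heapq module provides exactly the binary-heap operations described above (the event names are made up):

```python
import heapq

class EventQueue:
    """Binary-heap priority queue for simulation events, ordered by firing time.
    Insert and remove-min are O(log n), versus O(n) for a sorted list."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so events with equal times stay in FIFO order

    def push(self, fire_time, event):
        heapq.heappush(self._heap, (fire_time, self._seq, event))
        self._seq += 1

    def pop(self):
        fire_time, _, event = heapq.heappop(self._heap)
        return fire_time, event

q = EventQueue()
q.push(5.0, "send request")
q.push(1.0, "user think time over")
q.push(3.0, "timeout check")
print(q.pop())  # the earliest event comes out first, regardless of insert order
```

With 10,000 queued events, each push or pop touches about 14 levels of the heap instead of walking thousands of list entries.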
Tuesday, 19 February, 2002
I ran across the Peek-A-Booty site today. (By the way, that's www.peek-a-booty.org, not .com—the .com site is a porn site.) The .org site is something totally different—a way around Internet censorship. Peek-a-Booty is a distributed Web application that circumvents some types of DNS filtering. Think of it this way: Say your employer or your government blocks your access to a particular Web site—you can't get to it from your PC. But the Peek-A-Booty site isn't blocked and you really want to get to the blocked site. Instead of going directly to the blocked site, you contact Peek-A-Booty and ask it to contact the blocked site and return the contents to you. The idea is to have a very loose collection of these servers—similar to the Napster model—so employers couldn't shut you down just by blocking the main Peek-A-Booty site. This would render almost every current type of filtering ineffective.
Discussing this with some friends, we hit upon the not-original realization that the only effective means of censorship (or security, for that matter) is the granting model. That is, the user may visit only approved sites. Any model that attempts to perform censorship based on a restrictive model (i.e. you can visit all sites except those on the restricted list) is doomed to fail because there's always another site with unapproved content. That's not to say that the granting model is perfect. Nothing prevents somebody from changing the content on an "approved" site such that the new content is inappropriate for whatever group you're trying to restrict. Still, it's much easier to restrict access to approved sites rather than prevent access to unapproved sites.
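The asymmetry between the two models is easy to see in code. A toy sketch (the host names are hypothetical):

```python
APPROVED = {"www.example.org", "intranet.example.com"}  # hypothetical allowlist
BLOCKED = {"www.badsite.example"}                       # hypothetical blocklist

def granting_model_allows(host):
    # Granting model: deny everything not explicitly approved.
    # A brand-new proxy site is blocked by default.
    return host in APPROVED

def restrictive_model_allows(host):
    # Restrictive model: allow everything not explicitly blocked.
    # A proxy like Peek-A-Booty wins by default, because its host
    # isn't on the list until somebody notices it.
    return host not in BLOCKED
```

The restrictive list has to chase an unbounded set of new sites; the granting list never does.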
I'm not sure where all this is going. I have to think that any organization that is performing Web censorship based on a restrictive model is, for the most part, just covering its butt. The people actually implementing the blocking model have to be smart enough to realize that they'll never win. But no organization that I'm aware of has the audacity to implement the truly effective granting model because of the inevitable controversy that would ensue. It's an intriguing mental exercise, though, and I'm interested to see what innovations the Internet blocking software industry comes up with to solve the Peek-A-Booty problem.
Monday, 18 February, 2002
<!--#include virtual="diary/2001/2001_11.htm" -->
<!--#include virtual="diary/2001/2001_10.htm" -->
<!--#include virtual="diary/2001/2001_09.htm" -->
Server Side Includes to the rescue: the three directives above pull each month's diary file into the page. Now, this has to be in a .SHTML file, but that's no problem.
Reader Eric Lawrence tried to make that point to me last week, but I thought he was talking about Microsoft's Front Page Extensions implementation, which I wanted no part of.
Anyway, there are still some issues with the #include syntax that I haven't cleared up, but it looks like I'll be able to use the SSI directives to create my indexed web diary. Stay tuned.
Friday, 15 February, 2002
Today's lesson, class, is on the art of finding and fixing bugs. It'll be a very short lesson—just one point for me to make and for you to ponder. When you're trying to find a bug, make as few changes as possible to the code. Doing otherwise risks introducing new bugs or, worse, masking rather than fixing the problem. And debugging is not the time when you optimize that routine you've been thinking about, or rewrite some code that you think is ugly. Working is more important than fast, and until you get it working you can't know where the real bottlenecks are, so any optimization you make is at best wishful thinking.
It's such a simple lesson, and painfully obvious to anybody who takes a few minutes to think it over. Why do so many programmers ignore it?
Thursday, 14 February, 2002
If there's one thing every junior consultant needs to have injected into their head with a heavy duty 2500 RPM DeWalt Drill, it's this: Customers Don't Know What They Want. Stop Expecting Customers to Know What They Want. It's just never going to happen. Get over it.
The emphasis is his. I agree with that, but then he gets it wrong.
He goes on to say that, since the customer doesn't know what he wants, it's up to you as a developer to obtain the domain knowledge and build what you think the customer wants. That, in my experience, is a recipe for disaster. I usually agree with most of what Joel has to say about development, but he's dead wrong on this point. In 20 years of writing software for hire, I have never been successful using that method. The worst projects I've been on are not those in which the client hovers over me every minute, but rather those on which the client is studiously disinterested until delivery time, and then completely surprised when the software doesn't work as expected.
If you're building software, be it for a product, on contract, or part of an in-house system, you must get regular input from your prospective users. If it's important enough for them to hire a programmer, then it should be important enough for them to take the time to explain the business process to the programmer and help design a usable system. Certainly the programmer has to learn something about the business, but he shouldn't have to become a domain expert for every project he's assigned. It takes years to become competent in some fields. Do you really expect a programmer—even a really bright programmer—to become an expert and create a program to solve the domain-specific problem in a reasonable amount of time? Have fun, guy. Me? I'd suggest you walk away from a contract if you think the client won't have the time to spend with you.
Wednesday, 13 February, 2002
I bargained with Life for a penny
and Life would pay no more
however I begged in evening
when I counted my scanty store.
For Life is a just employer
it gives you what you ask.
But once you have set the wages,
then you must bear the task.
I worked for a menial's hire
only to learn, dismayed
That any wage I had asked of Life
Life would have willingly paid.
If anybody knows who wrote that, or even where I might be able to search for who wrote it, please let me know.
Tuesday, 12 February, 2002
Many of our clients use a custom application that we wrote to send email invitations to surveys. The application is a little rough around the edges, but it works well. We've made versions with Web interfaces, and other customizations for various clients. It's what I'd call a semi-custom application. I frequently get calls or emails from clients who have received return notifications from some SMTP server to which the program has sent mail. Usually they're wondering why my program is so stupid that it can't figure out if an email address is valid.
Explaining the SMTP protocol and its nuances to the uninitiated is difficult. I've yet to come up with a simple explanation that non-technical people can grasp. They refuse to believe that it really is as simple as paper mail: you address the envelope and drop it in the mailbox. Somehow or another, the letter either gets to its intended recipient, or it comes back to you as undeliverable. Email works the same way. You tell your email program to deliver a message to firstname.lastname@example.org, and press the send key. Your email program contacts the SMTP server that you told it about (you did, really, when you configured your system for your ISP) and sends the mail. The SMTP server accepts the message; in effect saying "I'll deliver it for you, and let you know if for some reason I can't." How it all works under the hood is irrelevant to the discussion.
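Maybe showing the conversation helps. When a client sends a message, the whole exchange is just a handful of text commands, and the server's acceptance means only "I'll try," not "the address is valid." A sketch of the command sequence a client emits (host and address names are placeholders):

```python
def smtp_conversation(sender, recipient, body):
    """The commands a mail client sends to an SMTP server. The server's
    '250 OK' reply after the final '.' means only 'accepted for delivery
    attempt' -- a bad address often bounces later, as a separate message."""
    return [
        "HELO client.example.com",
        f"MAIL FROM:<{sender}>",
        f"RCPT TO:<{recipient}>",
        "DATA",
        body,
        ".",
        "QUIT",
    ]

for line in smtp_conversation("me@example.com",
                              "firstname.lastname@example.org",
                              "Please take our survey."):
    print(line)
```

Nothing in that exchange verifies the mailbox exists; the receiving server may not find out itself until long after my program has hung up.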
If you know of a good introduction to SMTP for non-technical users to read, please point me at it.
Monday, 11 February, 2002
Some years ago, must have been 1998 or so, a very amusing AVI file made the rounds at the game company where I was working. I downloaded and saved it to my hard drive, and would show it to people from time to time. The cartoon featured a fat purple alien, an improbably constructed woman (or female-form android, I was never sure), a C3PO-style robot, and an evil clown. I'm sure it's considered "adult" humor due to some animated nudity and some foul language. That notwithstanding, it's very, very funny, and some seriously cool animation.
I don't know when it came back, but a co-worker sent me the link today. It's called Tripping the Rift, and you can find it at www.trippingtherift.com. There you'll find all sorts of information about the cartoon's creation, and some scenes from the "upcoming" sequel that seems to be caught in bureaucratic limbo. On the downloads page you can download a much nicer rendition than the crappy little AVI that I thought was so funny four years ago. It's a hefty download, though, 35 megabytes, and you'll also need a divx codec that you can get by following a link from the downloads page.
Friday, 08 February, 2002
Always suspect your own code first. I knew on Wednesday evening that the failure was due to either IIS or our software dropping the request. I've been in this business long enough to suspect my own code first, and I thought it exceedingly odd that IIS would be dropping a request and returning a blank page. At first I resisted testing against another web server, but decided it couldn't hurt. I'm glad we had another web server to test against because it verified that the problem was with our code.
When you're writing a log file, be sure that the information written is usable. We were writing a tremendous amount of stuff to the log, most of which was useless for debugging purposes when multiple simultaneous requests were being processed. In a multi-threaded application, the log information must include the thread ID and some information that ties that thread ID to the request being processed. We spent almost a full day running tests and modifying the log output so that we could get enough information to track the bug. My only hope is that the modified logging code remains in the shipping product.
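Our code is Delphi, but the principle is language-neutral. As a sketch of what "usable" means, here's Python's logging module stamping every entry with the thread ID plus a request identifier carried in from the caller (the logger name and request IDs are invented for illustration):

```python
import io
import logging
import threading

buf = io.StringIO()  # stand-in for the real log file
handler = logging.StreamHandler(buf)
# %(thread)d is the thread ID; request_id ties the entry to one request
handler.setFormatter(logging.Formatter(
    "%(asctime)s [thread %(thread)d] req=%(request_id)s %(message)s"))
log = logging.getLogger("survey")
log.addHandler(handler)
log.setLevel(logging.INFO)

def handle_request(request_id):
    log.info("processing", extra={"request_id": request_id})

threads = [threading.Thread(target=handle_request, args=(f"R{i}",))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(buf.getvalue())
```

With both identifiers on every line, you can grep out one request's entire path through the system even when three requests interleave.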
Murphy's Law applies. In spades. If there's any chance for resource contention, you can bet that it will happen. The root of the problem was a piece of code that reads a configuration file. This code opened the file in exclusive mode, barring any other thread from opening the file during the few milliseconds it took to read the information. If there's any chance of resource contention, you either code for the possibility by using a mutual exclusion lock, or you write code to eliminate the possibility.
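The fix has two parts: stop opening the file exclusively, and serialize access with a lock so no two threads race on it anyway. A minimal Python sketch of the pattern (the config format and names are hypothetical; our actual code is Delphi):

```python
import os
import tempfile
import threading

_config_lock = threading.Lock()

def read_config(path):
    """Read the shared configuration file under a mutual-exclusion lock.
    The file is opened for plain shared reading, never exclusively, so one
    thread's read can't make another thread's open fail."""
    with _config_lock:
        with open(path, "r") as f:
            return dict(line.strip().split("=", 1) for line in f if "=" in line)

# demo: several threads reading the same config file concurrently
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("smtp_server=mail.example.com\n")

results = []
def worker():
    results.append(read_config(path)["smtp_server"])

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
os.unlink(path)
```

Either half alone would have fixed our bug; doing both means a future change to one doesn't quietly reintroduce it.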
Squashing exceptions is a very bad idea. Although the root cause of the problem was the resource contention, we were unable to determine the real problem because the exception being raised by the thread's failure to open the file was being squashed by the exception handler. We were correctly throwing an exception, but handling it improperly and not even writing a log entry. If you're writing code to use exceptions, then use them!
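In other words: if you can't genuinely handle an exception, at least record it and let it propagate. A sketch of the pattern in Python (the logger name and file path are placeholders):

```python
import logging

log = logging.getLogger("config")

def load_config(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        # Log the failure with enough context to debug it, then re-raise.
        # Silently swallowing the exception here is exactly what hid our bug.
        log.error("could not open config file %r", path, exc_info=True)
        raise
```

The caller still sees the failure, and the log tells you which file, which error, and where, instead of a blank page and a mystery.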
I'm happy to say that I'm not personally responsible for any of the affected code. However, the experience has left me with a renewed appreciation for the difficulties of writing multi-threaded code, and you can bet that some of my other applications will be getting a good looking over in the near future.
Thursday, 07 February, 2002
CNN is running a story about the SoloTrek personal transportation device by Millennium Jet. Although still in the development stages, the technology shows some real promise. It uses an internal combustion engine and ducted fans for lift, and computer controls to keep the thing balanced while in the air. Right now it only works about two feet off the ground, but they're aiming for 8,000 ft. and 80 MPH for 120 nautical miles. The primary market is military (Special Forces), and then police, fire, and Search and Rescue. "Ultimately," says the article, "it may one day reach the civilian market."
It's that last line that scares me. Can you imagine thousands of thrill-seekers jetting around with 325-pound vehicles attached to their backs? Even with computer-controlled balance and recovery systems (i.e. parachute), it's a disaster waiting to happen. I hope these things remain expensive and difficult to operate. Otherwise I'm staying indoors and out of the flight path. Sure, I would like one, but I wouldn't trust your average Austin driver with one.
Monday, 04 February, 2002
There's something to be said for hometown banks. After all the crap we went through trying to refinance our mortgage with the big banks, we finally went to our local bank where we've had our checking and savings accounts for the last 6 years. 30 minutes after sitting down with the loan officer, we had a verbal commitment for a home equity loan to pay off the mortgage, plus a little cash out to help finance our remodeling. The Board has to approve the loan, of course, but it's likely they will considering that we're borrowing much less than 50% of the property's market value. Better yet, closing costs are limited to a title policy and a few filing fees—the bank is doing all the documents in-house so there's no silly "document processing fees." And no appraisal, again because of the relatively small amount that we're borrowing. The rate's not bad, either: 7%. Not as low as a real mortgage loan, but better than most of the home equity loans I've seen, especially considering the homestead law weirdness. And it's better than the 8.375% we've been paying.
Perhaps best of all, the local bank will be keeping the loan rather than selling it to some huge conglomerate bank. We'll have a real live person we can contact if we have any questions about or problems with our loan. Given the troubles we've had being shuttled through Chemical Bank, Citibank, and Wells Fargo in the last 7 years, that's going to be a huge relief.
Sunday, 03 February, 2002
This story at Wired News came across my desk last week. The guy claims to be "electrically sensitive"—he can detect and is harmed by electromagnetic radiation from appliances, radio towers, cell phones, etc. He and people like him are doing a very good job of keeping wireless technology and high-bandwidth Internet service out of Mendocino, California.
Flat Earthers abound.
Saturday, 02 February, 2002
I've been thinking a lot lately about creating an index for my Web diary (see January 19). After looking around a bit, I've decided that I'll have to create my own application to do it. I'll come up with a format for anchors in my diary entries (something like 20020202 for February 2, 2002) and insert those anchors into the HTML. My custom program will parse the HTML looking for those links, and then prompt me for index keywords for each topic. I'll store the keywords in a database and the program can generate the index automatically in whatever format I like. This doesn't eliminate the problem of an entry's location changing after three months, but it eliminates my having to worry about it. If I re-generate the index every time I post, it'll always be up to date.
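The anchor-scanning half of that program is the easy part. A quick Python sketch of the idea (the anchor format is my own convention, and the regex assumes the simple `<a name="YYYYMMDD">` form I described):

```python
import re

# anchors like <a name="20020202"> for February 2, 2002
ANCHOR_RE = re.compile(r'<a name="(\d{8})">')

def find_entry_anchors(html):
    """Return the date-stamped anchor names found in a diary page,
    in document order."""
    return ANCHOR_RE.findall(html)

sample = '<a name="20020202">...</a> some entry <a name="20020119">...</a>'
print(find_entry_anchors(sample))  # ['20020202', '20020119']
```

From there it's just a loop: prompt for keywords on each new anchor, store keyword/anchor pairs in the database, and emit the index page on every regeneration.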
That's still not ideal. Ideal would be if there were a standard include syntax for HTML. What I really wanted to do was create the diary entries in their respective months' files, and create a diary template (this page, for example) that reads something like this:
Put all of the introductory stuff here.
I was very surprised to find that no such thing exists in the HTML standard. Or if it does I haven't yet found it. I could do it if I set up Microsoft Front Page Server Extensions on my Web, but my experience with that technology has been less than satisfactory and I'm not willing to put up with its flakiness.
Friday, 01 February, 2002
I finally installed all of the cabinets in the laundry room, and have attached the doors and drawer fronts. I still have to adjust the doors so that they hang plumb and level, but the hinges make short work of that. The four adjustment screws on the hinge let you move the door on all three axes. A few minutes with a screwdriver is all I'll need. It'll take a little longer to get the drawer fronts aligned correctly.
Installing cabinets is relatively easy, but you have to pay attention. The most important part is getting a level reference line. Once you have that, it's a simple matter of shimming the cabinets up to the line and attaching them to the wall and to each other.