Monday, 31 May, 2004
A Google search for "konqueror windows share" revealed the Access to Windows Shares article on SuSE's site, which gives instructions for the command line and Konqueror on SuSE versions 8.1 and later. The command-line setup was painless, and within a few minutes I had full access to my Windows shares both from the command line and from Konqueror. The basic idea is to create a directory on the Linux box and then map the Windows share to that directory with this mount command:
mount -t smbfs //server/share /linuxpath/dir
After that, I could just browse to /linuxpath/dir and I was able to access the files on the Windows share. I can make the mapping permanent by adding the mount point to the /etc/fstab file, the only drawback being that I need to identify each individual share in that file or use the administrative shares (C$, D$, etc.).
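For the permanent mapping, the fstab entry might look something like the sketch below. The server, share, mount point, and user names are placeholders from my setup, and the exact option names can vary between smbfs versions, so check the mount documentation before relying on it:

```shell
# Hypothetical /etc/fstab entry for a permanent smbfs mount.
# Replace server, share, mount point, and user with real values.
# Putting a password here is insecure; the credentials= option pointing
# at a root-readable file is the usual alternative.
//server/share  /linuxpath/dir  smbfs  username=jmischel,uid=jmischel  0 0
```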
Adding browse capability so that I can browse for machines and shares from Konqueror was a little more involved. I had to start YAST again and install a couple more packages, then configure LISa, the LAN Information Server. On my first try I could access the Windows shares only by IP address, apparently because I didn't instruct LISa to send NetBIOS broadcasts to locate servers. I had to install the Samba server package and enable it before I could access by machine name. This is hinted at on the Web page, but not explicitly stated. I'm somewhat surprised that I have to enable the Samba server in order for this to work. I wonder if the reason is that the program that LISa uses, nmblookup, is part of the Samba package. Perhaps it's possible to install that program by itself?
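Since LISa apparently shells out to nmblookup for discovery, one way to test that piece in isolation is to run it by hand. This is just a sketch: the broadcast address is a placeholder for your own subnet, and the guard is there because nmblookup only exists once the Samba package is installed.

```shell
# Check whether nmblookup (the Samba NetBIOS lookup tool that LISa
# depends on) is available, and if so send a broadcast query for all
# NetBIOS names on the local subnet.
# 192.168.1.255 is a placeholder broadcast address.
if command -v nmblookup >/dev/null 2>&1; then
    nmblookup -B 192.168.1.255 '*'
    echo "nmblookup: present"
else
    echo "nmblookup: not installed"
fi
```

If this prints nothing but "present", the broadcast went out and no machines answered, which points at the network rather than at LISa's configuration.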
Sunday, 30 May, 2004
Today's Linux experiment was frustrating, but ultimately fruitful. I've been wanting to convert my day-to-day work to Linux, and I've chosen SuSE Linux 9.1 as the particular distribution. I installed the Evolution email client the other day, and have been using it since. I haven't converted my address book yet, which is something of an inconvenience, but I'll manage. Today's experiment was installing the POPFile spam filter, which I've been using on Windows since last fall. This turned out to be very difficult.
I downloaded the cross platform version of POPFile to my home directory, extracted the files, and tried to run the program. The Perl interpreter failed with the message "Can't locate HTML/Tagset.pm." It seems that the SuSE install didn't include all of the necessary Perl modules. So I started up SuSE's setup program, YAST, located the module, and installed it. Try number two got a little further before exiting with the error message "Can't locate DBI.pm," which turns out to be the Perl database interface.
I've had this problem before: trying to get all of the necessary Perl modules installed. Figuring that these things have to be listed somewhere, I took another look at the documentation that came with POPFile. Their installation instructions for the cross platform version say:
Get Perl running on your machine, then download the POPFile Perl zip from the POPFile Home Page, and extract it to a directory of your choice.
Not very helpful, as Perl is indeed running on my machine. Checking the Web site, I finally found the list of required modules in their HowTo section here. I installed the DBI module from my SuSE DVD, but was unable to find the SQLite module there so I figured I'd get it from CPAN like the instructions say. I was able to download and extract the files, but the build failed. Why? Because the SuSE installation didn't include make, gcc, and other utilities required for building programs! Granted, I didn't tell it to install development tools, but considering how much Linux software is distributed in source form with makefile-based installations, it seems reasonable to expect the default install to include a minimum set of build tools. So, back to YAST and installation of some development tools. After all that, the CPAN install failed again for reasons unknown, but since it left the downloaded files in my root directory I was able to manually execute make and install the program.
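For anyone hitting the same wall, the module hunt can be shortened by probing for everything up front instead of discovering one missing module per failed launch. This is my own little helper, not part of POPFile; the module list comes from the errors above plus the SQLite requirement from the HowTo.

```shell
# Probe for the Perl modules POPFile's cross-platform version needs.
# "perl -Mmodule -e 1" exits nonzero if the module can't be loaded.
for mod in HTML::Tagset DBI DBD::SQLite; do
    if perl -M"$mod" -e 1 >/dev/null 2>&1; then
        echo "$mod: installed"
    else
        echo "$mod: MISSING"
    fi
done
```

When a CPAN install dies partway but leaves the extracted source behind, the manual fallback is the usual sequence from the source directory: perl Makefile.PL, make, make test, then make install as root.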
Did you know that you must have root permissions to run POPFile? Many Linux systems restrict access to ports lower than 1024, and the standard POP port is 110. Fortunately, POPFile gives a nice error message about that one and I had no trouble making POPFile run as root. This, too, is not mentioned in the POPFile documentation. It is, however, mentioned in the FAQ.
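The root requirement follows from the Unix convention that only root may bind ports below 1024, and the standard POP3 port is 110. A minimal sketch of the rule, assuming that convention:

```shell
# Ports below 1024 are "privileged" on Unix: only root may bind them.
# POPFile listens on the standard POP3 port, 110, hence the root
# requirement when it runs as a proxy on that port.
port=110
if [ "$port" -lt 1024 ] && [ "$(id -u)" -ne 0 ]; then
    echo "port $port: root privileges required to bind"
else
    echo "port $port: ok to bind"
fi
```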
All told, installing POPFile on my system took about 4 hours of real work (including researching and head-scratching) spread out over an 8 hour period. I realize that an experienced Linux head would have figured all this stuff out a lot quicker, and I expect that the next thing I try to install will go much more smoothly. Still, I'm a reasonably bright guy who's been using computers for 25 years. I have some idea of what's going on. Imagine the problems a casual computer user would have with this stuff. I'd be willing to bet that most would give the program a try, see it fail, and give up. More tomorrow after I've sorted out my thoughts on this one.
Saturday, 29 May, 2004
More experiments on the Linux front. Today I thought I'd figure out why I can't access my Windows shares from my SuSE 9.1 system. Konqueror, the KDE file manager, dies unexpectedly when I attempt to access one of the shares. I managed to get Mandrake 9.2 running on my other lab computer here, and am able to access the shares from Konqueror without trouble. So, there's something wrong with the SuSE configuration. Figuring I might as well start at the beginning, I tracked down the Samba documentation and learned how to use it from the command line. With the command:
smbclient //server/share -U server/jmischel
I was able to connect to the Windows share from my SuSE system. I can get a directory listing and read the files. But even after connecting with smbclient, I'm unable to connect using Konqueror. So that means there's a problem with the Konqueror configuration, or with the interface between Konqueror and Samba. I still haven't tracked that one down.
Friday, 28 May, 2004
Debra started riding with me in the mornings about a week after I returned from my trip to Harlingen. (See April 1 for details.) We spent the first four or five weeks riding the mountain bikes: one hour in the morning on Monday, Wednesday, and Friday, and a longer ride of two hours or so on Sunday. Mountain bikes are okay, but they're not built for hours sitting in the saddle in the same position like road bikes are. After a 25 mile ride a couple of weeks ago, we showered and then headed down to the Bicycle Sport Shop. We'd been planning to get Debra a road bike at some point, and we'd pretty much figured out what she should get.
The bike in the picture is a Lemond Zurich. It's at the high end of the low-end racing bikes, if that makes any sense. The frame is carbon fiber and Reynolds 853 steel. Bontrager wheels, Shimano Ultegra components. It's a sweet ride. It's probably overkill for the kind of riding we do, but to get a less expensive bike means sacrificing comfort. Debra's had it for almost two weeks now, and says that riding it is a lot more comfortable than the mountain bike. Sunday we'll go out for 30 miles.
Wednesday, 26 May, 2004
Now here's a new one: Video Game Helps Players Lose Weight. Who would have thought that something good could come out of kids' fascination with video games? It seems that Konami's Dance Dance Revolution (DDR) game is helping players stay more active and lose weight. At $1.00 to $1.50 for a six-minute session at the local video arcade, it's not the cheapest way to lose weight. You're better off spending the $40 for the PC version (there also are versions for the PlayStation, PlayStation 2, and XBox) and $40 more for a dance pad. See GETUPMOVE.COM for testimonials and more information, and check out the DDR Freak fan site for tips, hints, cheats, etc.
Tuesday, 25 May, 2004
Linus Torvalds and kernel maintainer Andrew Morton have adopted a revised process for Linux kernel submissions. The revised process requires that developers who submit contributions have to acknowledge their right to submit the code: in effect certifying that the code is their own work or otherwise free of legal entanglements. This acknowledgement, called the Developer's Certificate of Origin (DCO), also ensures that developers get their due credit. See the press release for full information.
The Slashdot comments on this issue are mixed, with some saying that it's a Good Thing, and others forecasting doom, gloom, and Linux kernel development being overwhelmed with bureaucratic process to the detriment of innovation. Conspiracy theories include "big corporations" wanting an audit trail so that they have somebody to sue when something goes wrong, malicious agents of Linux detractors "sneaking" copyrighted code into the kernel, and all manner of other nefarious plots. Seems to me that the kernel and the rest of the open source world would be better off if these people expended their creativity and time on software development rather than on thinking up new and entirely implausible ways that others could hijack or derail kernel development.
The discussion of legal liability is especially humorous. The group is about evenly divided between those who say that the GPL protects them from being sued for liability, and those who say that the GPL's limitation of liability clauses are not recognized in some localities. What's laughable is that most of the people worrying about this have absolutely no grounds to fear being sued, simply because they don't have enough money to make it worthwhile. If something goes wrong and a lawsuit is filed, the lawyers will go after the money, wherever it is, not some poor slob who submitted a kernel patch. Oh, that person might be named in the suit, but the lawyers aren't going to hit him too hard. What's the point of trying to get a million dollar judgement from somebody who makes $50,000 per year? On the other hand, if the developer in question has money, it's doubtful that the GPL will protect him when the big gun lawyers come calling.
Limitation of liability clauses in voluntary contracts like the GPL seem intended to deter small claims that would cost more in legal fees than one would be likely to obtain in a settlement. They're like unlocked gates that deter honest people from walking into somebody's back yard. When claims move up into the nosebleed multi-million dollar range, the rules change and the lawyers start mentioning "malicious intent," "willful negligence," and other things that render liability limitation useless. The simple fact is that if you publish any code, you're opening yourself up to liability claims if somebody experiences problems with it. That's the way the legal system works. Deal with it or keep your code to yourself.
I probably should stop reading comments on Slashdot.
Saturday, 22 May, 2004
I'm back to torturing Linux again, this time in a real effort to move my personal email, writing, and day-to-day work from Windows. I've installed SuSE 9.1 Professional on my 1.2 GHz AMD machine and am slowly getting it configured. I'm a little disappointed so far by the default install, and have been adding packages as I find that I need them. I was surprised that the default installation doesn't include the Samba client so that I can copy files from my Windows machine. Even after installing Samba I'm not able to access my Windows shares, and the error message I get is very cryptic: "client process died unexpectedly." SuSE's support Web site hasn't been very helpful with this one. I find it strange that I was able to get Samba working without trouble in SuSE 7.0 and 8.0, but am having all this trouble with the newer version. The Windows computer hasn't changed--it's still Windows 2000 and I know that the shares work as I'm able to access them from other Windows machines and from Samba running on a Mandrake install.
I'm a little confused by the problems that Fedora is having with my video hardware (see my March 16 entry for full information). According to this bug report the problem lies in the S3 video driver and the workaround is to use the VESA driver, although that's not an optimum solution. The RedHat team kicks the problem back to the XFree86 team. All well and good, I guess, for free software, but I'd be pretty upset if I paid for the software and it didn't work with such common hardware. It's still a mystery to me why Mandrake and SuSE work but Fedora doesn't. Perhaps the working ones have decided to use the VESA driver by default because the S3 driver is broken. I guess I should look into that.
Of all the distributions I've used lately I'm most impressed with Mandrake's install, but even that one has some odd problems. I installed it on an older machine the other day. It correctly identified my video hardware, but then configured XFree86 to use a video mode that the card doesn't support. As a result, X fails to start and reports that the video mode isn't supported. It seems to me that the installation program should be able to determine the valid video modes and act accordingly. Very odd.
I've settled on SuSE 9.1 Professional for my production machine and will continue configuring it to meet my needs. I have a lab machine that I'll be using to evaluate other distributions and perform some experiments. I'll keep you posted here.
Wednesday, 19 May, 2004
I have a standard set of tools that I usually install on any machine that I'll be working with on a regular basis. This set of tools is a hodgepodge of stuff I've collected over the years, either written by me or acquired by other means. The tools include a text search utility (grep), a hex file viewer (my home-grown DUMPJ program), an ASCII chart display (surprisingly, I still use that), a file comparer, and various other odds and ends that I've found useful. Some of these tools are showing their age and don't work particularly well with modern systems, but most of them are still quite useful. I haven't installed those tools on my development machine at work because they frown on "unauthorized" software installs. I'll eventually get the OK to install them here, but in the meantime I'm limited to what comes with Windows and Visual Studio .NET.
My task today was to find all the "RAISERROR" statements in all of the project's stored procedures, which are stored as text files in one of the project directories. "If I had grep," I thought, "I'd just use it in that directory." Since I don't have grep, I tried to use Windows Explorer. No dice. It reported no matches when I know for a fact that at least some of the files contain that statement. Then I found the Windows FINDSTR command, which is something of a bastardized grep. After fooling with the syntax for a while, all to no avail, I'd almost given up. Then it occurred to me that the stored procedures are saved in Unicode rather than plain old ASCII text. Sure enough, if I opened a file in Notepad and saved it as straight text, FINDSTR and Windows Explorer worked as expected. So now I'm sitting here wondering how Microsoft managed to ship a Unicode-enabled Notepad with Windows 2000, but still hasn't managed to add Unicode support to their text search programs.
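The Unicode failure mode is easy to reproduce with Unix tools, which makes the cause obvious. In this sketch, iconv stands in for Notepad's save-as-plain-text step and grep for FINDSTR; the sample text and file names are invented for illustration:

```shell
# Create a plain-text file and a UTF-16 copy of it.
printf 'RAISERROR (50001, 16, 1)\n' > proc_ascii.txt
iconv -f UTF-8 -t UTF-16LE proc_ascii.txt > proc_utf16.txt

# A byte-oriented search finds the string in the plain file...
grep -c 'RAISERROR' proc_ascii.txt

# ...but not in the UTF-16 copy, where every character is two bytes
# (the pattern's bytes no longer appear consecutively in the file):
grep -c 'RAISERROR' proc_utf16.txt || true

# Converting back to a byte encoding first makes the search work:
iconv -f UTF-16LE -t UTF-8 proc_utf16.txt | grep -c 'RAISERROR'
```

That round trip through a byte encoding is exactly what resaving the file from Notepad accomplished.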
I did eventually solve my problem, by the way. I used the "Find in Files" functionality of Visual Studio .NET to find what I needed. I find it incredibly annoying to have to launch Visual Studio just to search for files that contain particular strings, especially when Microsoft is using Unicode in more and more situations. My hope is that Longhorn's tools will fix this shortcoming.
Friday, 14 May, 2004
Estimating a project can be very confusing to those who've never done it before. Approached incorrectly, it's impossible even to get started, and people who are unfamiliar with the process are hesitant to put anything on paper for fear of being wrong. After all, how can one come up with a reasonably accurate time estimate on a medium or large project without first doing detailed requirements analysis and design? My answer is "decomposition and experience."
The most important part of project estimation is decomposition. It's impossible to estimate a project of any meaningful size without first breaking it down into smaller pieces. Some people like to think in terms of subsystems: user interface, data access, business logic, etc. I find that a functional breakdown is more effective. People also differ on the granularity of their tasks. Some say to break it down into week-sized chunks and estimate from there, some say one to three days. The approach I use is to break a project down into chunks that I can think about in isolation--tasks that are small enough to fit into my head. Only then do I try to attach time estimates to the pieces. I like to end up with chunks that take no longer than a week to complete. That way, I can see measurable progress on a weekly basis and I can readjust the schedule if things start getting out of control.
When it comes to estimating a project, experience is very helpful. As an e-business consultant, many of the projects to which I'm assigned are very similar. They all require some kind of database access, user interface, server interaction, business logic, and reporting. I'm able to draw on experience from many other projects, saying "this project is similar to the one I did last year," and applying some rules of thumb. Even if I have no detailed information about the client's business process, I can infer certain requirements. At minimum, I can make a reasonable estimate by considering the average, best, and worst cases of similar projects.
When working with a team that has never used our estimating techniques before, it's sometimes hard to get the other developers to move ahead without more detailed requirements. We always manage to do it, though, and they are surprised at the end of the project that we managed to come very close to the original time estimate. If you look at the individual tasks' estimates versus actuals, you'll see quite a bit of variation, but the overall project comes in pretty close to right on time: maybe a little under, maybe a little over. This happens even when large parts of the project plan are changed: some requirements dropped and others added. My boss says that this is an example of "the law of large numbers," the idea being that if you get enough things with sufficient granularity, the overs and shorts will balance each other out. Whatever the reason, I've found over the years that it works.
Thursday, 13 May, 2004
Obesity Kills. We've long known that people who are very fat typically have many more health problems than the less corpulent. Doctors and others have been telling us for decades that being overweight can cause heart problems, diabetes, high blood pressure, and any number of other illnesses. The long-accepted explanation was that all the extra weight put too much strain on the heart and other organs.
According to the Associated Press article linked above, that obvious and very plausible answer is dead wrong. Although the extra weight does contribute to some conditions like arthritis and sleep apnea, the real killers are the fat cells themselves. Apparently, fat cells are little chemical factories that churn out all manner of hormones and such to help regulate the body. But when there's an overabundance of fat cells, chemical levels become toxic. I suspect we'll be hearing a lot about the biology of fat over the next few years.
Tuesday, 11 May, 2004
I was about halfway through a series of long posts lamenting the overabundance of unreasonable people in the world when I realized that I was whining. And I'm reasonably certain that those of you reading this aren't interested in hearing me whine. There's nothing quite like writing to reveal how I was wallowing in self pity. Now that that's over...
As Jeff Duntemann pointed out in his diary entry on certainty, certainty sells. Well, unreasonableness sells too. Combined (would that be unreasoning or unreasonable certainty?) they sell even better. People flock to extremism. Nobody in recent memory has gotten elected by saying "I'm a reasonable guy." That's the way things are, and no amount of whining or wishing on my part is going to change it. But I can wonder why, can't I?
The simplest explanation is that people in general are unsure of themselves and eager to embrace something, anything, that they can latch onto and put their doubts to rest. There's a lot of comfort in feeling like you have the answer. Although I'm sure some people work this way, that answer paints the average person with a very unflattering brush. And although it answers the "certainty" question, however unsatisfactorily, it sheds very little light on the question of why so many people act unreasonably and like to see or read reports of unreasonable behavior. That one's got me stumped.
Sunday, 09 May, 2004
I've been in something of a blue funk lately, maybe even a little depressed as I ponder the lack of reasonableness in the world. More to the point, I'm lamenting the overabundance of unreasonableness: people who insist that their way is it and everybody else is full of ... well ... "it". Jeff Duntemann's recent diary entry (see May 6, 2004) about certainty tied a couple of loose ends together for me, which allowed me to solidify my thoughts on the matter. More in a couple of days.
Saturday, 08 May, 2004
There are two widely unknown laws of physics you should know if you're considering taking up cycling. These laws are well known to cyclists, but the general public is not aware of them and physicists for some reason do not publicize them.
The Laws of Physics and Cycling
- Headwinds are stronger than tailwinds. If you've ever ridden in wind you know this to be true. If you head into the wind at the beginning of your ride, when you turn around to head back the wind will slow. Similarly, if you head out with the wind, when you turn into the wind it increases. This also is true of circular courses, where a wind coming from anywhere on your front will be stronger than any wind that quarters from behind. Headwinds increase as the ride progresses. Tailwinds diminish. The average ratio of headwind to tailwind is approximately (I don't have the space here to show the proof) 2:1, but it never approaches 1:1.
- Hills are larger, steeper, and longer on the way up than on the way down. Pedaling up a hill will lower your average speed more than pedaling down the same hill. It often is impossible to measure the effect that going down a hill has on your speed, but regardless of the hill's steepness you can always measure the effect that pedaling up has. Physicists are still trying to determine the exact relationship. There are many variables, including the smoothness of the road and the hill's position relative to your starting point.
I took part in the Armadillo Hill Country Classic today--a 100-mile ride organized by the Austin Cycling Association. Today is the first time I've done this ride. It was well organized and well supported. It was a good course except for the many hills and the headwind on the return. Even so, I managed to turn in my best time ever for a 100-mile ride. I didn't do everything right. I slacked off on the eating and drinking about halfway through and bonked badly at about 70 miles. Fortunately I recognized the problem before it got too bad and began increasing my consumption. 10 miles of taking it relatively easy and I had my strength back. This is a very good ride that I would recommend to anybody. They had six different distances from 14 to 120 miles. Well worth checking out if you're a cyclist in the Austin area.