Modernizing It's All About the Pentiums

One of my favorite rap songs is Weird Al's "It's All About the Pentiums". In some ways the exaggerative style of the lyrics has protected the song, and it has aged relatively well, given the subject matter, but even so there are some parts of it that just seem like they could stand to be modernized. Here are a few of my suggestions. (Note that I'm only listing the parts I suggest changes for. The full lyrics, if you don't know them, are available elsewhere.)

Original lyric: Defraggin' my hard drive for thrills
Comments: While defragmentation is not entirely obsolete as such, it seems a lot less relevant now than ten years ago, and indeed there are much geekier things one can do with a filesystem these days than defragment it. Unfortunately, tuning the filesystem parameters for performance doesn't seem to fit the metre here, so I had to go with something older, older even than defragmentation, but something all geeks still do from time to time.
Suggested revision: Partition my hard drive for thrills (or stet)

Original lyric: Installed a T1 line in my house
Comments: While residential frame relay is still geeky in general, T1 is no longer the same kind of overkill that it was when the song was written, and many ordinary consumers have that much bandwidth from DSL or even cable modem service. My revised lyric, however, should be good for at least ten years more.
Suggested revision: Installed an OC3 at my house

Original lyrics: Upgrade my system at least twice a day / I'm strictly plug-and-play, I ain't afraid of Y2K
Comments: The first line here would have been fine as it stands, except that changing the rhyme makes the next line easier to work with. The latter did not age well and is arguably the worst lyric in the song. (The part about Y2K was laughable within six months after the song's release, and probably should not have been included in the first place.) The line can't even really be upgraded with a modern equivalent, because there isn't one. PnP is a very bad memory for most geeks, with no modern equivalent really, and nobody is very much afraid of 2038, most software having already been upgraded to 64-bit datetime values as I write this. That makes this line a good choice to slip in something that wasn't on most people's radar when the song was written in the nineties. Network security and reliability seems obvious. Social networking and user-created content are another option, but security seems more likely to still be an issue in ten years. Threading and Unicode seem too technical, managed code too likely to be taken for granted in another ten years.
Suggested revision: My battery backup is certified green. / My generator's clean. My firewall is lean and mean.

Original lyric: You think your Commodore 64 is really neato
Comments: No one could have predicted in 1999 that the Commodore 64 would be cool again in 2007, but the retro trend, among teenagers and gamers, has really changed the flavor of meaning this line carries. A small brand substitution should restore the original sentiment.
Suggested revision: You think your Packard Bell is really neato

Original lyric: In a 32-bit world, you're a 2-bit user
Comments: Yeah, there is still a lot of 32-bit stuff out there, but the cool CPUs now are all 64-bit, and the days of 32-bit software are numbered. I wouldn't have revised the song for this, and it may even be a little ahead of its time, but since we're making changes, this should be updated too.
Suggested revision: In a 64-bit world, you're a 2-bit user


Wow, look at that. With only six changes (most of them very small, and two of them scarcely even necessary) a song from 1999 feels current in 2007. Given the subject matter, that's actually fairly amazing. It's hard to find a computer book from 1999 that's worth the paper it's printed on, so these lyrics really have aged quite well.

Book Series and Strange Titles

This morning I just happened to see this book, and it set me thinking. I realize, of course, that Visual Quickstart Guide is the series title, and so they kept it on this book so as to match the other books in the series. Nonetheless, the idea of having a visual guide to text-based markup standards is... funny. Well, it's funny to me. So this set me thinking about what other bizarre or oxymoronic titles there could be, if books on certain topics were published as part of a series with a poorly-matched series title...


  • sed and awk Visual Quickstart Guide

  • Brain Surgery for Dummies

  • Teach Yourself Patience in 21 Days

  • TeX: Quick & Easy

  • Emacs: The Missing Manual

Cleaning Up Digital Photos Again

We had a holiday open house at work again, and again we took photos, and again it's my job to put them in a photo gallery on the web. I'm still working on that at work (we took a lot of photos, and I have other things to do as well), but I brought one photo home because I thought it would make for a good demonstration of some of the techniques I use to clean up mediocre photographs.


Here's a sample of what a section of the original photo looks like. This is how it came out of the camera. (Click the image to see a full-size version. Notice how grainy it is.) As I've said before, the ideal thing would be to take better photographs in the first place. A camera that just does better in low light would be good, for instance. In this case, it also might have helped to take the photo in a different part of the building, but I wanted a photo of the punch stand, and it was set up in just about the darkest part of the library, barring a closet. (The building is an old Carnegie building, and this is the part they barely touched when they remodeled in 1990, because it's near the beautiful historical dome. So no lights were added. It looks fine when you're standing there, but it's not so good for photographs.)

Also, I might have gotten slightly better results if I'd moved closer and used less zoom, but I had to take the shot when I could get it, because there were cookies on the table, and most of the time I'd have gotten shots of several people's backs. So this is the photo I've got to work with. Oh, here's a scaled-down version of the whole thing. This is, again, mostly just like it came out of the camera, except for being scaled down.

The image is a bit dark, so naturally the first thing I did was to take the levels tool and lighten it up. I discussed this step in more detail before, so I won't belabor it again now. The result looked like this.
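(Incidentally, if you ever want to script this kind of levels adjustment instead of doing it by hand in GIMP, something like the following Python/Pillow sketch gets you the same basic effect. The file names and the black/white input points are placeholders for illustration, not the actual values I used.)

    # A rough, scriptable analogue of "lighten it with levels", using Pillow.
    # The black/white input points below are made-up illustration values.
    from PIL import Image

    def adjust_levels(img, black=10, white=180):
        """Map `black` to 0 and `white` to 255, stretching everything between."""
        scale = 255.0 / (white - black)
        return img.point(lambda v: max(0, min(255, int((v - black) * scale))))

    photo = Image.open("original.jpg").convert("RGB")
    lightened = adjust_levels(photo)   # lowering the white point brightens the image
    lightened.save("lightened.jpg")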

The image is easier to see now, but if we stop here we've almost done as much harm as good, since the poor quality of the image is also now entirely too easy to see. The image is blotchy with apparently random blobs of strong color -- green, blue, red, purple, ... ick. Scaling down will help (click the image to see how much worse the full-size version is), but I'd like to clean it up a little if possible before I scale it down. So I thought maybe I'd try a despeckle filter. Despeckle is designed for much milder damage than this, but I thought I'd see what I could make it do.

I tried adaptive and/or recursive despeckling, but on an image this bad they scarcely have any impact at all, so ultimately I had to turn off those options. This is unfortunate, because it creates blur, but when an image is this badly speckled, a little blur can actually be better than the alternative.
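(If you want a quick scripted stand-in for a basic despeckle pass, a small median filter does roughly the same job: it knocks out isolated off-color pixels at the cost of a little softening, which is the same trade-off I just described. A minimal Pillow sketch, not the actual GIMP filter:)

    # Median filtering as a rough stand-in for a simple despeckle pass: each
    # pixel becomes the median of its 3x3 neighborhood, which removes isolated
    # speckles but blurs fine detail slightly.
    from PIL import Image, ImageFilter

    lightened = Image.open("lightened.jpg")
    despeckled = lightened.filter(ImageFilter.MedianFilter(size=3))
    despeckled.save("despeckled.jpg")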

So this is a sample of what I managed to get out of the despeckle filter. It's a bit disappointing, and if I were a real graphics artist I could probably have tweaked the numbers a bit and got something slightly better, or known which other filter to use and how to apply it, but I'm just a network administrator trying to wear an extra hat, so this is what I managed.

I was tempted to scrap it at this point.

However, I've learned that when you're trying to clear up artifacts, like speckles, from an image, sometimes what doesn't look so good on its own can be useful in combination with the original image, by using one of the various layer blending modes. Of course I had run the despeckle filter on a separate layer that was a copy of the previous image, so by changing the blending mode and adjusting the opacity, maybe I could salvage something yet.


I tried several different blending modes, but I'm only going to show you the ones that I thought worth keeping. Here it is (on the left) with the despeckled image in overlay mode with an opacity of 65%, over the (non-despeckled) lightened image. And here it is (right) with the despeckled image in screen mode with an opacity of 33%, overtop of the version with the 65% overlay, overtop of the lightened image.

The screen and the overlay have different effects, and in this case the one was effectively muting some of the speckles out of one part of the image, and the other was de-emphasizing the speckles in other parts of the image, as well as making the image overall lighter. So I combined those effects, and at this point I did my first scale-down, to 1536x1235 (an even fraction of the original size), using, of course, bicubic interpolation. I also cropped the image slightly, which got rid of the big white thing at the left (which is actually the edge of a rectangular pillar, and really pink rather than white, but nevermind).
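(For anyone curious what screen and overlay actually do to the pixel values, here is the standard arithmetic, sketched in Python with NumPy. The arrays are stand-ins, not my actual layers; only the formulas and the opacity mixing are the point.)

    # Standard screen and overlay blend formulas, with channels in the 0..1
    # range, plus an opacity mix back toward the underlying layer.
    import numpy as np

    def screen(base, top):
        # Screen always lightens: 1 - (1 - a)(1 - b)
        return 1.0 - (1.0 - base) * (1.0 - top)

    def overlay(base, top):
        # Overlay darkens the darks and lightens the lights of the base layer.
        return np.where(base < 0.5,
                        2.0 * base * top,
                        1.0 - 2.0 * (1.0 - base) * (1.0 - top))

    def blend(base, top, mode, opacity):
        # Apply the mode at full strength, then fade it over the layer below.
        return base * (1.0 - opacity) + mode(base, top) * opacity

    base = np.random.rand(4, 4, 3)           # stand-in for the lightened image
    top = np.random.rand(4, 4, 3)            # stand-in for the despeckled layer
    step1 = blend(base, top, overlay, 0.65)  # 65% overlay
    step2 = blend(step1, top, screen, 0.33)  # then 33% screen over that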

I hadn't noticed originally, because of the other issues that the image had, but by now it was obvious that I was going to have to do something about the man's red eyes. Generally what I do in cases like this is take the lasso tool and select the most egregiously red parts of the eyes, copy, paste, make it a separate layer, desaturate it, and then turn down the opacity until it looks as close to natural as I can get.


If you want to get elaborate you can also recolorize the eyes, especially if you know what color the person's eyes are supposed to be, but in this instance, especially at this scale, I didn't deem that step necessary. Gray is good enough here.
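(Here is the same red-eye trick expressed as a script, for the sake of completeness: desaturate a copy of the offending region and blend it partway back over the original. The crop box and the opacity below are made-up placeholder values.)

    # Red-eye touch-up by desaturation: copy the red region, convert the copy
    # to gray, then blend it over the original at partial opacity so the eye
    # still looks like an eye rather than a flat gray patch.
    from PIL import Image

    img = Image.open("blended.jpg").convert("RGB")
    box = (210, 145, 240, 170)                 # hypothetical eye region (left, top, right, bottom)
    region = img.crop(box)
    gray = region.convert("L").convert("RGB")  # desaturated copy
    softened = Image.blend(region, gray, 0.8)  # 80% gray, 20% original
    img.paste(softened, box)
    img.save("redeye-fixed.jpg")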

Finally, I brightened the image up a little using the technique I discussed before, lightened that back up a bit (using levels), noted that the effect was still too strong, turned down the opacity on that layer (letting the previous, unbrightened version show through partially) until I thought it was about as good as I was going to get, scaled it down (bicubic interpolation again) one last time to 512x412 (a factor of the previous size), and this is the result:

Okay, it's still not a terrific photo, but compare it to the original:

A is for apple

A is for apple, B is for ball, C is for caterpillar sitting on a wall.
D is for doggy chewing on a bone, E is for eggnog that you drink all alone.
F is for farctate, G is for Grog, H is for helicopter flying in the fog.
I is for Igloo, J is for Jam, K for kalamari, and L is for lamb.
M is for mnemonic, N for networking, O is for a few people running everything.
P is for pituitary, the size of a pea, Q is for quaffing, R for rascally.
S is for slapstick, comedy that's dumb, T is for tungsten, U for ultimatum.
W is for Wabbit, X for XYZZY, Y is for yoghurt, Z for Zenity.
Now I know my ABC vocabulary. Next time maybe you will sing the song with me.

Book Review: The Age of Turbulence: Adventures in a New World

One of the things various people have said over the years about Alan Greenspan is that he tends to underestimate his own influence. Reading his book, I think I'm seeing that too. For example, in the introduction, he relates how after 9/11 he made a speech that put a brave face on things, saying that the economy had become resilient to shocks, but he didn't fully believe it and didn't expect he was fooling anyone. Then he turned out to be right: the economy recovered relatively quickly. It's obvious to me that at least part of the reason the economy recovered so quickly is because Greenspan suggested that it would. People believed (correctly, in my opinion) that he was the leading economics expert in the world, and so when he made positive statements, that gave people confidence, which generally has a lifting effect on the economy.

Another example: barely a page later, he relates that after a meeting with lawmakers, he went home thinking all he'd done was reinforce what the lawmakers were already thinking, but the press acted like it was his agreement that made the whole thing happen. Well, it probably was. Apparently the lawmakers in question actually believe that the Chairman of the Fed is some kind of expert on economics, and if he agrees with what they're thinking, that gives them the confidence to go forward with it, and if he has reservations (as at the previous meeting) they hesitate (as they did).

So now, when he comes out of retirement with a book saying that we are now living in a world with a "global capitalist economy that is more flexible, resilient, open, self-correcting, and fast-changing than it was even a quarter of a century earlier", people are going to believe that, too, and they're going to behave accordingly. I wouldn't have been very surprised if, in the wake of the book's publication, the economy had surged up a bit: Greenspan just said a bunch of positive things about the economy, so let's all go out and do stuff with money. (It didn't work out that way because there were other forces at work, some of which I mention below...)

The historical narrative in the first half of the book is fascinating, not because I wasn't familiar with the basic events (I lived through and remember most of chapters 5-11), but because the perspective of an economist lights things up just differently enough to show up some things (trends, causes, and generalities) that I'd not been aware of before. Greenspan is a much better writer than I would have expected, and his story is compelling.

After going through the economic history of the last several decades, the author goes on to explore the economic issues that are currently facing various parts of the world, and the cultural and political issues that have important implications for economic policy and development. This is interesting material, as well, though of course much remains to be seen regarding how history will bear out his predictions.

Reading this book has raised some questions in my mind.

First, why is the short-term federal funds rate the only lever that the central bank in the US has to effect monetary policy? (I'm not saying, necessarily, that there should be other levers; I'm asking the question because I don't know the answer.) Greenspan indicates that the Fed was aware of the risk to the economy posed by the "irrational exuberance" of the dot-com bubble but was unable to do anything about it. Indeed, they briefly attempted to control the rising stock prices but found their measures ineffective and possibly counterproductive over the long term, so they left off trying. We can't fight market forces, they concluded. So then we had the dot-com bust and several years of pretty hard times for the IT industry, which had an impact on the entire economy.

Not much later the Fed again saw a sudden inflation in another market they could not effectively oversee, the real-estate market. There was nothing they could do about it, and when the bubble popped the housing market deteriorated quite significantly. The results include a credit crunch and the bankruptcy or collapse of a number of major lenders, especially in the subprime market (i.e., creditors that lend to normal people who don't have the 20% down payment and other resources needed to get the best interest rates). A lot of first-time home buyers have been foreclosed on, as I understand it not so much because of anything wrong they've done as merely because the market now cannot support the loans they were offered during the real-estate boom. The home (which is the collateral) is not worth the outstanding loan amount, so if they can't make a payment they're stuck: there's no basis for an extension, and they can't sell their way out. This sort of thing is obviously not good for the overall long-term health of the economy, but what could be done about it? Are there additional levers that could (if Congress were so inclined) be granted to the Fed to assist them in more effectively smoothing out short-term economic forces and promoting the long-term health of the economy? And if so, what would be the other consequences of giving the central bank these additional powers?

Price controls obviously are NOT the answer. Just about all modern economists take it as an axiom that if the markets get too far out of touch with reality they will eventually correct themselves, and it is these market corrections that cause all the problems. The sorts of controls that characterize central planning (socialism and especially Marxism) are only good for forcing markets further out of touch with reality, which invariably causes more problems than it solves, as Eastern Europe discovered.

However, the role of the central bank, primarily, is to control macroeconomic forces, most especially the value of money. (This is why we call it monetary policy, after all.) Controlling inflation (and deflation, if that becomes an issue) is very clearly within their mandate. But if the inflation occurs because of a situation in a market over which they have little or no influence, how can they control that inflation and keep the value of the currency stable? Besides the stock market and real estate, what other markets are there that the Fed cannot readily influence? What dangers does our economy face in the future? Just for instance (and purely *cough* hypothetically, ahem), what if labor becomes significantly overvalued? What kinds of havoc would the resulting market correction wreak?

A Word About Standardization

If you pay any attention at all to what people in technical circles (especially programmers) are saying, you will be familiar with the concept of standardization: everyone agrees to adhere to a particular standard, or the standards published by a particular body, and as a result everybody's stuff works together. To hear some of the zealots talk, virtually everything about computers, and especially software, should be standardized.

But that's not really necessary, or even a good idea.

What should be standardized in computers and software is exactly analogous to what should be standardized in the physical world. Specifically, it is not the things themselves that need to be standardized, but the connections and interactions between the things. This is equally true whether the things in question are kitchen appliances, part of your household plumbing, audio equipment, or computer applications.

Electrical and electronic equipment varies tremendously in terms of how it works inside and what it accomplishes, but it all interacts with other systems -- specifically, the power grid -- in pretty much exactly the same way. Setting aside for the moment what happens when you travel overseas, and heavy industrial equipment that runs off weird stuff like three-phase, normal stuff all runs off the same voltage (err, one of two voltages) and the same cycle of alternating current and plugs into one of a very small number of outlet styles. (Most things in the US plug into the same one style of outlet, but there are additional special styles for electric stoves, clothes dryers, and certain kinds of lighting. Other countries may have different exceptions. But even the exceptions are standardized: all clothes dryers in the US, for instance, plug into the same kind of receptacle.) If a device needs something different, the manufacturer includes (either inside the thing or along with it) a power supply or plug-pack that effectively makes the differences go away, or it runs off standard-form-factor batteries that can be recharged in a standard charger that plugs into a standard outlet. You can go to any store and buy any electric device for any purpose by any manufacturer, and it may do all kinds of weird stuff on the inside, who knows, but its interface with the power grid will be the same as for every other device. It doesn't matter if it's a computer, a blender, or an electric nose cleaner. You buy it, you take it home, you plug it in, and it Just Works.

Which brings me around to software.

Some things are pretty well standardized already. Network connectivity, for instance, is pretty much locked into TCP/IP. IPv6 has been coming for ten years, but at this point I am not convinced it will EVER replace IPv4, and there are no other contenders at all. There are some new application-layer protocols (e.g., BitTorrent), but these mostly are related to kinds of applications that were never standardized in the past. As far as things that have been around for a while are concerned (e.g., email), the standards are firmly entrenched. (The idea of an email-sending program that doesn't use SMTP has been tossed around in theory on a number of occasions, but trying to get anybody to use one is like trying to sell a life preserver that doesn't float.) The last time I can think of that established protocols were ditched in favor of new ones was when telnet and rcp gave way to ssh and scp. Even there, the telnet style of plain text over a TCP connection lives on: protocols such as HTTP and SMTP are still just readable text on a socket, which is why you can poke at them by hand with a telnet client.
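(If you have never tried it, the easiest way to see what I mean is to speak one of these protocols by hand over a raw TCP connection, which is exactly what a telnet client lets you do. Here is a quick Python sketch of a bare-bones HTTP request, just to show that it really is plain text all the way down; the host name is just a stand-in.)

    # Speaking HTTP "by hand" over a plain TCP socket, much as you could with a
    # telnet client: the whole exchange is readable lines of text.
    import socket

    host = "example.com"   # stand-in host
    with socket.create_connection((host, 80)) as sock:
        request = "GET / HTTP/1.1\r\nHost: " + host + "\r\nConnection: close\r\n\r\n"
        sock.sendall(request.encode("ascii"))
        response = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:              # server closed the connection
                break
            response += chunk

    print(response.decode("latin-1").splitlines()[0])   # e.g. "HTTP/1.1 200 OK"

And that, incidentally, is why you can still debug a misbehaving mail or web server with nothing fancier than a telnet client.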

Another place where things need to connect has to do with data. Internally, when a program is working with data, it can use whatever data structure it wants, whatever makes the programmer's job easiest. Nobody cares what structure the program uses internally, except for the people who maintain the program itself. But when the program goes to export the data and store it somewhere (in a file, in a database, wherever), now you have to consider the possibility that the user might also want to work with this data using other programs, and so you need a standard file format.
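(To make that concrete, here is a tiny illustrative sketch in Python. The internal structure is whatever is convenient for the program; the export step writes a garden-variety CSV file that any spreadsheet can open. The data is obviously made up.)

    # Internal representation: whatever shape is convenient for the programmer.
    # Export: a standard, widely-readable format (CSV), so the data isn't
    # trapped inside one application.
    import csv

    # Internal structure, chosen purely for the program's own convenience:
    events = {
        "open_house": {"date": "2007-12-14", "attendees": 112},
        "book_sale":  {"date": "2008-03-02", "attendees": 87},
    }

    # Export step: flatten it into a format other programs can read.
    with open("events.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["event", "date", "attendees"])
        for name, info in events.items():
            writer.writerow([name, info["date"], info["attendees"]])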

Programs that don't consider this possibility can end up creating data that the user can't ever do anything else with, and that severely limits their usefulness. There are lots of examples, but I'm going to pick on Microsoft Publisher in particular. Everything I say about it, of course, also applies to any other software that doesn't support any standard formats or other mechanisms for data interchange.

If Publisher could export its publications to a widely-supported format like PDF (or Word document format, or anything else that's widely supported), then people who wanted to send their flier or poster or whatever by email would have a way to do so. Better still, if it just used a standard format in the first place, then the user wouldn't have to go through the extra step of exporting: they could just attach the thing and send it. But Publisher can't do that. It only saves in one format: Publisher format. Nothing can open its files except Publisher. So if a user creates something in Publisher and asks me, "How do I send this to [someone]", there's only one answer I can give them: "Well, first, instead of Publisher, you have to create it in a program that supports a standard file format." Users do not like this answer, because it means the work they've already done has to be thrown away, but there's nothing to be done about that: Publisher simply does not provide for data interchange. (Copy and paste can be used to extract limited portions of a Publisher document, mostly the text, but that is usually not what the user put the most work into and is most eager to preserve.)

I call software like this Dead End Software, because there's no outlet: any data you put into it is trapped there, and there's no way to get it out. I strongly recommend against using such software for anything other than quick one-off work. (By "one-off" I mean something you're never going to need to refer to later, e.g., a Wet Paint sign. Even there you want to be careful, because a lot of times you think you won't need to refer to something later and then it turns out that you do.) It's a black hole, a final resting place for your data. Avoid, avoid, avoid.

Hepburn Must Stop

People who know me generally are aware of the fact that I am interested in language. The topic has always fascinated me, ever since I got my dad to explain parts of speech to me when I was three or four years old. (Actually, come to think of it, I already was familiar with three writing systems at that point, as well.) So it should come as no surprise that I've been looking at assorted language-related stuff in my spare time ever since I left college.

Most recently, I've been looking at Japanese. Yeah, I know, it's a weird one to pick (especially since it seems almost all of the English-speaking people with any interest in it are obsessed with anime and manga, which don't interest me at all), but hey, I've seldom been accused of being excessively typical.

Anyway, in the course of reading (mostly on the internet) about Japanese, one of the things I've run into is the Hepburn Romanization. This is a system whereby Japanese text is transliterated into Latin characters. Transliteration is seldom without problems, and studying a foreign language from a text that transliterates everything is generally inadvisable (unless all you want to learn is how to say "Does anyone speak English? Does this airplane go back to the United States? How much is a ticket?"), but it seems to me that Hepburn is particularly obnoxious, especially for English speakers.

In the first place, learning to correctly pronounce the Romanized Japanese is at least as hard as learning hiragana, maybe worse, because of the need to unlearn long-ingrained habits associated with English use of the same characters (e.g., it's difficult to learn to pronounce "ou" as a held long o rather than as it would be pronounced in English). This is compounded by the fact that Hepburn uses Latin vowel pronunciations, so e is a and i is e and u is oo and so forth, like in Spanish. The Latin vowel mappings by themselves, if they were the only major issue, would be no big deal at all, but in Spanish you don't have combinations that would be diphthongs in English showing up every other syllable to screw with your mind.

Hepburn doesn't even have the good grace to be easy to type on a US-English keyboard, because it uses a diacritical mark (which for added bonus points is not even a mark that's particularly common in European character sets) on vowels when they are held for an extra mora. Since this is untypeable on most keyboards, in practice you usually either see no indication at all that the vowel is held (which is extremely bad, because it makes non-identical words identical, and the absolutely *last* thing Japanese needs is twice as many homonyms) or else a second vowel character is used, which aggravates the aforementioned vowel pronunciation issue for English speakers. Using a punctuation mark to indicate a held vowel should have been an extremely obvious approach, since after all that is what katakana does, but no.

The letter y is even worse than the vowels, because you have to unlearn the notion that it could ever under any circumstances be a vowel, even when it directly follows a stop consonant. Did you know that "Tokyo" is two syllables? Also "Kyoto". This shows up in approximately seven out of every ten Japanese words and is *hard* for an English speaker to get used to reading correctly. When you see the corresponding hiragana, you don't have this problem, because each symbol stands for exactly one syllable (or "mora" or whatever they call them), so it's very obvious where the syllable divisions go. This is fairly important in Japanese, and the Romanization obscures it.

Just in case the y issue didn't do enough to obscure the syllable boundaries (which, it bears repeating, are important in Japanese), Romanization also obscures the syllable divisions in other areas, though I think a certain amount of that would be fundamentally unavoidable in any system that transliterates a syllabary into a true alphabet. (Alphabets are inherently suited for writing languages with a more freeform syllable structure allowing for closed syllables and arbitrary blends; the only closed syllables you have in Japanese are with the sokuon, and the only blends you have are the aforementioned yoon.) The only thing worse than transliterating a syllabary into a true alphabet is trying to go the other direction and write a language like English in something like katakana, which is just wholly altogether unworkable (not that that stops the Japanese from doing it, of course).

The most egregious offense I want to talk about, though, is the letter r. Hepburn uses the r to represent an alveolar flap, a sound we don't have in English at all. Now, the idea of using a letter that wouldn't otherwise be used to represent a sound that wouldn't otherwise be represented makes a certain amount of sense, but r is a particularly unfortunate choice here, at least for English speakers, because of the various bizarre properties of the r sound that English speakers take for granted and do without thinking. (For native speakers of Romance languages, I suppose Hepburn is maybe not so bad, but in practice how many people are there who speak Spanish and Japanese but not English?) There are other letters that could have been used, not least l, which is somewhat closer to the sound anyhow, but no, Hepburn uses the r. Problem is, if you pronounce it as r, or anything even vaguely like r, you're in for all manner of trouble, because r has all sorts of phonemic consequences. It colors every letter it sits next to, either before or after, especially vowels. It's also completely impossible to form certain very-common Japanese blends (most notably ryo and ryu, which it should be noted are one syllable each, see the previous paragraph about y) if you pronounce this r as the English r.

Aside from the blends, and the weird and unfortunate mess it makes out of adjacent vowels, r isn't even a stop ("plosive") consonant. It's a liquid. Japanese doesn't have liquids, unless you count the syllabic nasal (which is altogether another topic, and believe it or not Hepburn Romanization manages to make that one harder to read easily as well).

So anyway, all of that is to say, every time I run into Japanese language-learning materials that make extensive use of Romanization (which is *annoyingly* common), I cringe and go looking for something else. I suppose the writers of these materials believe that transliterating everything will make it "easier" for English speakers by removing the need to learn kana, but honestly, anybody who is even *slightly* serious about learning a language can certainly handle picking up at least hiragana, and everything thereafter will be *much* easier than with the Romanization.

It's not like hiragana is anywhere close to being even the tip of the iceberg for what characters you've got to learn if you actually ever want to be able to read any actual Japanese. I mean, you can't even look up words you don't know in a dictionary without learning two or three hundred radicals (and their lexical order) just to get started, so 46 hiragana characters is really no big deal.

The economy: near death, or cuts and scrapes?

Last night somebody told me that the stock market has lost trillions of dollars (he said how many trillion, but I don't remember) in a few days. Okay. My immediate response was, "What's that as a percentage?" I mean, yeah, trillions of dollars sounds like a lot, but the US economy is bigger than most people realize. It can afford to lose a few dollars here or there, from time to time.

As it turns out, the overall percentage of loss, since the last major peak (in 2007) is around 30%, depending on which index you look at. That sounds like a lot, and for a short-term drop it is a pretty good-sized chunk, but it's hardly the end of the world as we know it. There are peaks and bubbles (the fruits of periodic irrational exuberance), and then there are corrections back down to a more sane, gradual, and sustainable growth rate. 30% is a pretty large correction, but it's not out of line with what we've seen in the past.

So I went to a website that does stock charts. There are a number of sites out there that do basically the same thing, but in this instance I happened to select MSN Money, because it came up first in the search results. I went to the financial site, and I asked for charts of the Dow Jones Industrial Average, because that's a well-known index. It's not the only index we could look at, but it's a common choice, and, I believe, an instructive one.

I'm going to show two different charts here. The first represents the short-term view, on a linear scale.


Okay, yeah, that looks pretty bad. Actually, it looks worse than it should, because they've left off the bottom portion of the chart, starting the linear vertical scale at eight thousand rather than at zero. When you have to leave off more than half the numbers at the bottom of the scale in order to show the interesting part without making the chart too big, it usually means you should have used a logarithmic scale.
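(If you want to see the difference for yourself, here is a minimal sketch in Python with matplotlib, using a made-up index series rather than actual Dow data: the same numbers plotted once on a truncated linear scale and once on a logarithmic scale.)

    # The same (fabricated) index series on a truncated linear scale versus a
    # logarithmic scale. Roughly exponential long-run growth looks like a hockey
    # stick on a linear chart but close to a straight line on a log chart.
    import numpy as np
    import matplotlib.pyplot as plt

    years = np.arange(1930, 2009)
    index = 100 * 1.07 ** (years - 1930)     # pretend ~7% average annual growth

    fig, (linear_ax, log_ax) = plt.subplots(1, 2, figsize=(10, 4))

    linear_ax.plot(years, index)
    linear_ax.set_ylim(bottom=8000)          # chop off the bottom, like the chart above
    linear_ax.set_title("Linear scale, truncated")

    log_ax.plot(years, index)
    log_ax.set_yscale("log")                 # equal vertical distance = equal percentage change
    log_ax.set_title("Logarithmic scale")

    plt.tight_layout()
    plt.show()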

Now, let's step back and look at the larger picture. This second chart is on a logarithmic scale. (Otherwise the first three quarters of it would sort of resemble a flat line across the bottom.) Take a look:



What a difference! The black lines are original. I've taken the liberty of putting a red circle around the current economic crisis. On the one hand, yes, that's one of the biggest drops on the chart. On the other hand, it's clearly nothing very far out of the ordinary. If anything, that weird bulge around the (most recent) turn of the century is more unusual. The big dip at the left, of course, corresponds with the Great Depression.

I said I was going to show two charts, but here's an extra bonus image of the second chart, this time with a trend line drawn in, in green:


On the one hand, we're not seeing the steep growth of the eighties and nineties, but on the other hand this crisis has got to go some to look anything like the sharp drop of the early thirties.

That flat section across the seventies is called "stagflation", and in some ways that was worse than the current crisis, because it just went on and on and on, and then it went on and on and on some more, some twelve or thirteen years before things really started to pick up in the early eighties.

Of course, I don't know the future, and it's conceivable that things could keep going down until the current crisis turns into a second Great Depression. But there's no reason to assume that's what's going to happen. What we've seen so far is part of the normal up and down motion that happens all the time.

I don't want to be accused of being an unbridled optimist, so I'll say now that just because the economy hasn't completely collapsed doesn't mean our society isn't headed for a peck of trouble in other ways. All I'm saying is, some people are blowing the current economic crisis out of proportion. It's not really our gravest concern. There are, indeed, much more worrisome things to be upset about. (The condition of the public education system, just for instance, is outright terrifying. But that's a topic for another day.)

Bad Analogies 101

At church we're going through an evangelism course called Way of the Master. I want to be clear up front that just because the course uses one tortured analogy doesn't make the whole course bad. It only makes the analogy bad. Indeed, the course could potentially be valuable. (How valuable? Too soon to tell. We've only had one week of it so far. Though, already, it has managed to set me thinking, and that in itself is not without value.) Still, I feel the need to vent about exactly how terrible this analogy is, so here we go.

The analogy is that a firefighter arrives at a house fire and proceeds to sit locked in the fire truck, listening to music on headphones, while the house burns to the ground around a family of five, whom he can see screaming and calling out to be rescued, which pleas he ignores. When questioned about his actions, he says he was testing the CD player that he'd bought as a gift for the fire chief, at great personal expense.

Aside from the obvious physical problems with the analogy (he can see the family from the truck, but they can't get out of the fire without help), the really broken part is the spiritual side of the picture. I don't know about you, but to my knowledge I've never had an unbeliever cry out to me for help with spiritual things. Ever.

Believe me, if somebody even *asked* me to explain salvation to them, let alone *cried out*, I'd be... willing isn't even the right word. Enthusiastic probably falls short of the mark as well. That's the kind of thing you daydream about, but it does not generally, you know, happen. On the contrary, people typically don't know the house is burning around them, and the few who do know it are usually convinced they cannot be helped.

Now, I'm not saying it's right for us to sit and do nothing just because unbelievers are content to stay that way. It's certainly not. But I *am* saying the analogy is flawed.

Here's another analogy: I'm not a firefighter. I'm a geologist. My geology degree is from an unaccredited college, which most people have never heard of, which has had to move across international boundaries numerous times as various governments have tried to shut it down. The government of my country officially tolerates the school, but you can tell they consider it an embarrassment.

So I have an unaccredited degree, and I call myself a geologist, but I am not employed as a geologist. I work some other job to pay the bills. The equipment I use is different from the equipment that other geologists use, too.

For the past few years I've been independently studying a certain mountain, and I have concluded that not only is it actually a stratovolcano (a fact which was previously unknown), but furthermore it is active, and is building up tremendous seismic pressure even as we speak and will soon erupt. It's impossible to know an exact timeframe, but with every passing month the pressure builds higher. There is a large bulge on the side of the mountain that has doubled in size over the last six weeks. It's going to go, soon, and furthermore it's going to be a very potent eruption. From the amount of pressure that's building up, my estimates say it could be bigger than Krakatoa, or at least comparable.

At the base of the mountain there is, of course, a town. When the volcano erupts, it will blow a large chunk of mountain, tens of thousands of tons of rock, down the mountainside and straight through the town. And on top of that there could be lava, volcanic ash, quakes, and so on, all the usual destructive stuff that goes with a major volcanic eruption.

I have to convince the people to evacuate.

People have been living in the town for generations. The mountain has never erupted in the past. The local news runs stories about what a crackpot I am. The local authorities, as well as the state government, are reassuring people that of course the mountain is an ordinary mountain, just like all the other mountains in the area.

But the people still need to evacuate. They're going to die if they stay. I know they're going to die, but I sit in my bedroom and make excuses for why I'm not talking to them about the volcano. I even post about it on my blog (which nobody reads), but I don't go out and tell people about the volcano.

Okay, so this analogy isn't perfect either. I think it's closer than the other one.

Esoteric Knowledge Quiz #3

Do your friends, family, and coworkers accuse you of being a repository of useless information? (Mine do.) Here's your chance to test your knowledge of obscure tidbits...
I've had some help coming up with the questions for this one, so hopefully they represent a broader spectrum of topics.


  1. Which was the first toy to be called an action figure?

    1. Barbie

    2. Captain Action

    3. G.I. Joe

    4. Superman
    Thanks to Mark Harris for sending in this question.


  2. Which of the following is not a hardwood tree?

    1. Hickory

    2. Cedar

    3. Sycamore

    4. Balsa


  3. The 1985 NFL season will forever be remembered by fans for the historically great Chicago Bears defense, which helped power the team to a convincing 46-10 win over the New England Patriots in Super Bowl XX. But what team led the league in offense that year?

    1. Chicago Bears

    2. Miami Dolphins

    3. San Diego Chargers

    4. San Francisco 49ers
    Thanks to Dave Gable for sending in this question.


  4. What is the traditional well-known TCP/IP port number commonly associated with NNTP?

    1. 17

    2. 70

    3. 119

    4. 179
    For an extra bonus geek point, which one is (or was) commonly associated with gopher?


  5. If you have desmodromic valves on your vehicle's engine, what is their function?

    1. Opening and closing the intake and exhaust valves without the use of return springs

    2. Removing air from the crankcase to lower internal air resistance

    3. Routing exhaust gases through the engine to help it warm up faster

    4. Allowing gaseous fuel vapors to be burned alongside liquid fuel

    For bonus points, which of the following brands of vehicle is most likely to have these valves?

    1. Ducati

    2. Peugeot

    3. Kawasaki

    4. Honda
    Thanks to Andy Kerr for sending in this two-part question.




Most of the answers are in the comments now.



Past Quizzes: 1, 2


Quiz four is already in the works. If you have questions to contribute to future quizzes, send them in to jonadab@NO SPAM THANKS ANYWAYbright.net with the phrase Esoteric Knowledge Quiz in the subject line (or, if you are on the Wheeitology list, you can just post them there). Thanks!

Esoteric Knowledge Quiz #2

Do your friends, family, and coworkers accuse you of being a repository of useless information? (Mine do.) Here's your chance to test your knowledge of obscure but interesting tidbits...


  1. What is the major factor usually blamed for Dutch Disease?

    1. too much lowland in too small an area

    2. too much of a natural resource

    3. too many seventeen-year cicadas

    4. too little magnesium in the diet



  2. What does the word farctate mean?

    1. stuffed full

    2. fluoroastatic acid

    3. play in an irresponsible manner

    4. pass gas


  3. In Star Trek: The Next Generation, when Dr. Soong puts the emotion chip intended for Data into Lore, the android sings a song. The song that he sings features two major characters, only one of whom is named in the part that Lore sings. What is the name of the other character?

    1. Abu Hasan

    2. Kassim Baba

    3. Enkidu

    4. Ivan Skavinsky Skavar


  4. Which of the following islands is currently claimed by Japan, but controlled by another country?

    1. Atlasov

    2. Iturup

    3. Wake Island

    4. Okinawa


  5. The following quote is an excerpt from (a translation of) a letter sent to a civil authority. Who wrote it? "In the second place, I was not even examined, let alone asked what my faith was, much less found guilty of any misdeed. Such a procedure, firstly, is counter to the Jewish law, John 7:51, where Nicodemus says: 'Does our law judge a man without first giving him a hearing and learning what he does?' Yes, it is also counter to the Gentile justice, Acts 25:16, where Festus says, '... it was not the custom of the Romans to give up any one before the accused met the accusers face to face, and had opportunity to make defense concerning the charge laid against him.'"

    1. Johann Reuchlin

    2. Huldrych Zwingli

    3. Alexander Mack

    4. Galileo Galilei




The answers have been posted in the comments.

Microsoft Gives a Month for Seven

Okay, it's time to revisit Windows Seven Dates and my Vienna Timeline. Network World reports (see also the slashdot discussion) that Microsoft is now putting a specific month to their projected release date: January 2010.

Of course, that doesn't mean it'll actually be available in January 2010. Haha. No. In the first place, the date can and probably will still slip a bit. In the second place, new versions of Windows are never actually available to the public on the official release date. No, they become available only to select partners on the official release date. (Select partners, in Microsoft parlance, are the large multinational megacorporations whose Software Assurance licensing allows them to install any version they want, any time they want, on any computer they want. Typically the IT departments of these large corporations would never in a million years actually deploy a brand new release the same year it comes out. Most of them have only moved from Windows 2000 to Windows Server 2003 and Windows XP in the last few months; some of them still haven't.) Actual availability to the public comes several months later. That's how it was with Windows 95, Windows XP, and Vista (which was officially released in November 2006 but not generally available to consumers until early 2007). There's no reason to believe Seven would be different in this regard.

Nonetheless, their putting a specific month on their release prediction is significant. Very significant. There's a lot less wiggle-room in a month than there is in a vague projection like "early 2010", and in the past Microsoft has usually not projected specific months until they're legitimately close to having something they believe they can bang into release-quality shape in approximately that amount of time.

If you look at my timeline, it doesn't call for a specific-month release projection until 2016Q2, less than a year from actual release in 2017. Going by just that alone, one could be excused for concluding that my timeline is off by eight years.

I don't think my timeline is off by quite that much. For one thing, even if Microsoft actually comes out with the product in January 2010, that's only seven years ahead of my timeline. Furthermore, the projection in question on my timeline is for a release date only two quarters forward from when it's projected; as of now January 2010 is still six quarters out, three times as far into the future. Historically, the further into the future a projection is, the more room there is for it to still be pushed back. I fully expect Microsoft to push back this release date at least once yet, and then on top of that I expect them to release to select partners only, buying a few extra months before actually shipping the new OS to the public.

Still, this caught me off guard, and I'm now very much convinced that my timeline is overlong, and that Microsoft will beat it by several years. Also there have been fewer feature announcements than I predicted, and I believe this is significant: Microsoft actually learned from the Vista development experience and is aiming somewhat lower for Seven, no doubt deliberately. More realistic goals, less wasted time. My timeline was written with the assumption that they had not learned this lesson, but it appears now that they have. Which is good, for Microsoft and for their customers.

The Value of Dead Languages

Sometimes people question the utility of studying a dead language, but honestly, I think dead languages have just as much utility as living ones, albeit for different reasons. The value of knowing dead languages was kind of hammered home for me tonight.

My dad was on a discussion forum, and he turned to me and said, "How do you spell sarcophagus? p-h-a?" I didn't even have to think about this question; the answer was blindingly obvious. I just looked at him and said, "Dad, it's from σαρξ and φαγομαι." So, yeah, p-h-a then.

Obviously, I didn't study Greek just so I could spell English better. But having studied it I do know English better, and not just spelling either.

Vista Notes: the Gadget Sidebar

Okay, so where I work we've been putting off Vista uptake, waiting for the service packs to come out. I was hoping to wait for SP2, but definitely I didn't want to touch the thing until SP1 (which just arrived).

But after June, there is to be no more OEM XP, and no more retail XP either. Meaning, any computers we replace after that point, if they need to be Windows (which our library automation software requires on staff workstations, unless we want to use the ILS through remote desktop, which most of the staff probably don't want), will probably be Vista. In preparation for this... I, as TCG, need to have at least some familiarity with this turkey.

So late last week a new computer arrived, containing Vista. Which I am currently using as I type this. I've been taking notes, and I feel the need to vent, so I will be documenting some aspects of my experience here.

The first thing I want to talk about is the gadget sidebar. This was for me one of the most exciting features of Vista.

It's something Gnome and KDE have had for aeons, of course. (I know for sure that Gnome had already had it for a while the first time I saw Gnome, in RedHat 6.0. That was Gnome 0.x. I didn't see KDE until a while later (circa Mandrake 7.1), but when I did it did already have this feature. And we're still talking twentieth century here, back when the default web browser for both of said environments was Netscape 4.x. Konqueror didn't exist, and the Mozilla project releases up to that point had version numbers starting with "M" for "milestone". So yeah, it's been a while, and panel applets in Gnome and KDE weren't really a new feature even then.)

Nonetheless, it's a feature that OS X does not have, and it's an incredibly useful feature, at least potentially. Now, I already knew that Vista's implementation would not be as flexible as that in Gnome and KDE. Among other things, I knew that the traditional taskbar elements would not be reimplemented as gadgets and so would not be able to be positioned wherever you like. As before, they're still locked in place. Similarly, the new gadgets cannot be positioned on the taskbar, only on the sidebar. And you can only have one sidebar. And it has to go on the side, not top or bottom. (From long experience using side panels on other systems, I can tell you that you will probably want it on the left, rather than the right.) All this I knew. (How did I know all this stuff? I've been acquainted with Windows since it ran on DOS, so by now I'm somewhat familiar with how Microsoft does things. Also I'd seen screenshots.) But despite these caveats I was still looking forward to the sidebar, because it's still a big step forward from what Windows XP provided in this area (namely, squat), and it's still potentially very useful.

Of course, the first thing I wanted to do to the sidebar was resize it. The default size is preposterously, exaggeratedly large, the size you would want it to be for a tradeshow demo so people can see it clearly as they approach your booth from across the room. For daily use, this is terribly impractical, because it consumes way too much screen space. In Gnome you can make any panel any size, down to like 12 pixels if you want (though in practice it's not very useful below about 24px on a modern display resolution). So I wanted to resize the sidebar. I suspect everyone who uses Vista will at one point or another want to resize the sidebar.

Only, it doesn't do that. Well, there's a third-party Sidebar Styler you can get that, among other things, is supposed to let you resize the sidebar. And it does, after a fashion. Specifically, it lets you resize the panel itself, i.e., the background. But the gadgets do not change size. At all.

Now, Gnome has a couple of badly-behaved gadgets that will not resize to fit a narrow side panel. (The RSS reader is one example of this.) Of course, in Gnome, you can put those applets on a top or bottom panel, where more width is available. And they're not really the most useful applets anyway, so even if you *only* use a side panel, you can pretty much live without those badly-behaved applets.

But on Vista, *all* the gadgets -- not just all the default bundled ones, but *all* the ones I managed to find -- do not resize. At all. (Well, some of them do expand and become even more preposterously huge if you undock them. Determining the practical usefulness of that particular "feature" is left as an exercise.)

So I can either let the sidebar consume some hundred and fifty pixels off the side of my screen, or I can turn off the sidebar. I suppose if I had the budget for a round of thirty-inch monitors for every workstation, capable of 2560x2048 or higher resolution, then this might not be such a big deal. As it stands a lot of our systems here at the library aren't even up to 1600x1200 yet, so the sidebar, if we want to use it, will consume 12% of the screen width (at 1280x1024), or even 15% (at 1024x768, which is all some of the older LCDs can handle; no, we do not generally replace the display when we buy a PC; we replace it when the display dies). This realization was the first of several significant disappointments regarding the sidebar.

The second thing I noticed about the sidebar is no big deal, and in fact I rather expected it: the bundled gadgets are junk. I don't think there's a single one of them I would ever use. The clock is not only analog-only, but also takes up far more space than you would ever want to devote to something that basically just shows you the time. Its options are limited to one of about six skins, showing the second hand or not, and timezone. The calendar is even more underwhelming. It has no options whatsoever, it's orange, and like the clock it takes up FAR more space than it should. If you want both the time and date, with the default gadgets, you've used up almost 300 pixels of your sidebar's height right there, just for the time and date. The CPU meter looks cool for about fifteen seconds until you realize you can't actually tell whether your system is loaded or not by looking at it. And so on.

So the bundled gadgets are junk. As I said, I expected this. The purpose of the bundled gadgets is pretty much just to demonstrate the concept, and what you're really going to do is download gadgets that will actually be useful. I was pleased to discover that there's a readily accessible "Get more" link, which takes you to some kind of Microsoft-hosted gallery. This is not altogether dissimilar to the "Get Extensions" link in Firefox, a system that works *reasonably* well. In a few minutes I was able to find some gadgets I would actually use: a CPU/memory meter that can actually be read at a glance, a digital clock, a current-date gadget that doesn't take up a fifth of the sidebar and isn't orange... I later replaced the clock and date gadgets with a single gadget that shows the date and the time, plus weather, in less than the space the bundled weather gadget consumes, much less the clock or date, much less all three.

Most importantly, I found something called App Launcher, which is exactly what you would hope it would be: a way to put launchers on the sidebar. Gnome and KDE treat launchers as first-class citizens on the panels, so you don't need a special applet to have the capability. But that's a detail. The important thing here is that I can now have launchers on the sidebar. This brings it much closer to parity with KDE's panels. (Gnome, of course, has drawers, which increase the usefulness of the panel to another level, but that's another discussion. And nothing's stopping a third-party developer from creating a drawers gadget for the Windows sidebar, though I didn't see one on my first trip through the gallery.)

Now, granted, the taskbar has had QuickLaunch since 1998. But the taskbar is so cramped for space that you can't really afford to put very many launchers in the QuickLaunch. Five or six is about all most people want in there, because every three of them consumes the space of one window on the task list, and the Windows task list already has way too strong a tendency to run out of space and start doing grouping and/or scrolling, which pretty much chucks convenience right out the window. Gnome and KDE let you put the menu and the notification area ("system tray" in Windows parlance) and the launchers all on a different panel from the task list, so that the task list has the full width of the screen. Well, now with Windows you can at least put the launchers on a different panel, namely the sidebar. The menu and system tray are still stuck on the taskbar, on either side of the task list, but at least the task list no longer has to compete for space with QuickLaunch as well.

So anyway, putting app launchers on the sidebar means you can easily have a couple of dozen launchers readily accessible. Which means you don't need to get to the desktop all the $#@! time. Which means you don't have to compulsively minimize everything constantly. This is a big usability win. Real big, in my estimation.

The next thing I noticed about the sidebar is that the much-hyped transparency in Vista is of quite limited utility. (This actually applies to more than just the sidebar -- the transparent window title bars, if you use Aero, have the same issues. But for the moment I'm talking about the sidebar.) On the one hand, making the gadgets mostly transparent does make them stand out less, which is nice, because it lets you focus on what you're doing and only look at the gadgets when you need them. Good. On the other hand, the main purpose of transparency in other contexts typically is to let you see through to what's behind, but the transparency in Vista does not really allow for this, because of the inherent blur. If someone knows of a way to turn the blur off so that the transparency can be actually useful, please tell me about it.

Also, the sidebar does not have a hide button. It can be turned off, and you can put windows in front of it, and a hotkey will bring it to the front... but to my knowledge there's no way to just quickly hide (and subsequently unhide) the sidebar. (This is particularly unfortunate in combination with the fact that Windows Explorer does not seem to be smart enough to avoid putting desktop icons behind the sidebar by default, although, as I noted earlier, with the App Launcher gadget installed the desktop icons are much less important than they used to be.) Additionally, maximizing a window causes it to cover up the sidebar. Given the non-configurable large size of the Windows Sidebar, that's probably for the best, but it's sure not ideal.

I should point out too that you can't position the gadgets where you want them. Well, you can arrange their order, but they always start at the top and work their way down, leaving the bottom blank if you haven't filled the bar. This is somewhat unfortunate, since there are certain kinds of gadgets one might specifically want to position at the bottom (e.g., directly above the Start button). This is not a big deal, though, and it's something that could be fixed in a later version without requiring gadgets to be updated. It is also worth noting that the sidebar fills the whole side even if you haven't filled it. This is a good default, but ideally there should be an option to let it end where it runs out of gadgets. Again, though, this is functionality that could be added in a later version without requiring gadgets to be updated.

Despite the disappointments, I still feel that the gadget sidebar is one of the most exciting new features in Vista -- perhaps the most exciting one of all from a user's perspective.

I have more to say about Vista -- much more -- but I'll leave the rest for another post.

Seven Dates

In light of a couple of recent items in the news (see also slashdot coverage), I'm going to say a few words again about the Windows Seven Development Timeline, as previously discussed here.

First, let's get that story about the XPHE extension out of the way. This is actually official info, but it's nonetheless irrelevant to my timeline. Because it applies only to special (ultra-portable) hardware, this extension would mean nothing for mainstream computers even if it included the Pro edition, which it doesn't. Id est, this is not a story about Microsoft changing its operating system plans. Like most systems, the latest version of Windows requires beefier hardware than a several-year-old version. That's normal, and because hardware continuously improves it's mostly no big deal, though of course people whine about it a lot. (Remember DOS? It can run comfortably on a system with a single-digit-megahertz processor and RAM measured in kilobytes. XP isn't quite that old and lean, but it's older and leaner than Vista.) So this is just about ultra-portable hardware not being up to the specs of a modern desktop.

Now, on to the more interesting stuff: dates.

The soundbite you keep hearing is "Sometime in the next year or so we will have a new version." That's from the horse's mouth, but the words "or so" are, IMO, rather telling. Microsoft presumably wants you to think, or at least hope, that "or so" means something like "plus or minus a couple of months", i.e., that the new version would be out sometime in 2009. But the words "or so" could just as easily mean "or two, or three, or more... you know, schedules change as things progress". Which IMO is probably what it will eventually turn out to mean.

CNET was told (by an MS representative, they say, and I have no particular reason to disbelieve that) "roughly three years from Vista's January 2007 debut". That would be 2010 Q1, closer to two years than one from now -- and again, "roughly" is an important word. The person who's saying this knows, or at least suspects, that that date will slip (as all release dates tend to do, and not just at Microsoft).

So then, looking at my Windows Seven Development Timeline, there are a couple of different places this announcement might fit, though none are a very good fit. The 2011 Q2 announcement (predicting a release in early 2012) seems too close (that one clearly predicts less than a year out, and this is more like one to two years). The 2008 Q2 prediction is a little far out, and in any case we really already had that one, over a year ago (yes, it was ahead of my schedule). So I think the current prediction identifies most closely with the 2009 Q4 prediction (second half of 2011), which is more specific than this one, but seems to be of the right general duration.

As best I can figure, that places Microsoft about six quarters ahead of my timeline, give or take (depending on how you interpret the technical announcements, and whether Dev Corvin has actual information or is just making stuff up). That's a year and a half! If this progress keeps up, Seven could have a shorter (real) dev timeframe than Vista did, which would bring my predictions up short (not that that would be a bad thing).

So now we're looking for some non-date announcements: something about security, something for developers, and something related to the internet. (Actually, all that talk about Live could potentially qualify for the last.) Those are listed for 2010 on my timeframe, so if they come in the next few months we'll definitely be ahead of schedule.

But let me be perfectly clear here: if Seven is actually available to customers in 2010, I will be absolutely flabbergasted. There's a reason my timeline shows the date being pushed back repeatedly. Six quarters ahead of my cynical schedule would ultimately mean release in mid-2015, and if they keep gaining quarters at that rate (six quarters off my timeframe for every five that pass) they could potentially make 2012. If they short-circuit the last couple years of my timeline entirely they could maybe even make 2011. But that's wildly optimistic. 2010 would mean they were meeting their own estimates, which as far as I'm aware has never happened in their entire history as a company.
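
For anyone who wants to check the arithmetic behind those numbers, here's the back-of-the-envelope version. It's only a rough sketch: the cynical release date is pegged at the start of 2017 (that's just mid-2015 plus the six quarters), "now" is called the start of 2008, and everything is measured in years, so the decimals mean nothing beyond "early" or "mid".

    #include <stdio.h>

    int main(void) {
        const double now     = 2008.00;   /* roughly when I'm writing this             */
        const double cynical = 2017.00;   /* my timeline's final, oft-pushed-back date */
        const double lead    = 6 * 0.25;  /* they're about six quarters ahead          */
        const double gain    = 6.0 / 5.0; /* six extra quarters per five that pass     */

        /* Scenario 1: the six-quarter lead just holds steady. */
        printf("lead holds steady:  %.2f\n", cynical - lead);   /* ~2015.5, i.e. mid-2015 */

        /* Scenario 2: they keep gaining at the same rate, so each real year
         * chews through (1 + gain) years of my timeline. */
        printf("lead keeps growing: %.2f\n",
               now + (cynical - now) / (1.0 + gain));           /* ~2012.1, i.e. early 2012 */

        return 0;
    }

The short-circuit scenario is just the second calculation with the last couple of years lopped off the cynical date first, which is where 2011 comes from.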

Dropping Binary Compatibility With Previous Versions

Apple did this in 2001, with OS X. At the same time, they also completely scrapped their old codebase, a move that was long overdue. The old Mac OS didn't have real multitasking, a sane framework for non-GUI programs, memory protection... in short, it was in much worse shape than Windows, technically speaking. Apple had concentrated totally on the UI, and that was not sustainable. UI is important, but you have to have a strong foundation to build it on.

Anyway, my point is, while Apple needed to make huge changes, and Microsoft can probably get away with smaller changes, dropping binary backward compatibility with old system libraries is something every OS has to do periodically. Until now, though, Microsoft has only done it gradually, piecemeal, and by accident. (If you try to run software designed for a long-dead version of Windows, you'll discover what I mean. Little things will just not quite work right.)

As this article notes, the attempt to retain binary backward compatibility across multiple versions costs something. Now Microsoft wants to free itself from those costs, as Apple did with the release of OS X.

Most Unix systems don't incur these costs in the first place, at least not in the same way, because they don't worry so much about binary compatibility. They don't need to, because they have source compatibility. In an environment where you have the source code for everything anyway, you can just recompile as necessary when you upgrade to a new version of the core system. (Take this philosophy to its logical extreme and you get Gentoo, or the BSD ports system, where the user's system recompiles everything from scratch, locally, every time they upgrade anything. But the distributor can also pre-compile the software for each major version of the OS and make pre-compiled versions available, which is what most distributors do, because it makes upgrades faster for the user.)
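
To make the binary-versus-source distinction concrete, here's a toy sketch (it has nothing to do with any real Windows or Mac API) of the kind of thing that quietly breaks old binaries when a library's data layout changes underneath them, and of why simply recompiling the same source makes the problem go away:

    /* Toy example: a library's public struct gains a field between releases.
     *
     *   old header:  struct widget { int x, y; };
     *   new header:  struct widget { int flags, x, y; };
     *
     * A program compiled against the old header still looks for x at offset
     * zero, so when the new library hands it a struct laid out the new way,
     * it quietly reads the wrong value -- the "little things not quite
     * working right" effect.  The same source recompiled against the new
     * header is fine, which is why source-compatible systems can shrug this
     * sort of change off. */
    #include <stdio.h>
    #include <string.h>

    struct widget_old { int x, y; };         /* layout the old binary was built with */
    struct widget_new { int flags, x, y; };  /* layout the new library actually uses */

    int main(void) {
        struct widget_new w = { .flags = 0, .x = 640, .y = 480 };

        /* Simulate the old binary applying its compiled-in layout to the new
         * library's data: it finds flags where it expects x. */
        struct widget_old stale;
        memcpy(&stale, &w, sizeof stale);
        printf("old binary thinks x = %d (it wanted 640)\n", stale.x);

        /* The same source, recompiled against the new header, picks up the
         * new layout automatically and reads the right field. */
        printf("recompiled program sees x = %d\n", w.x);
        return 0;
    }

In real life the mismatch is between separately compiled programs and shared libraries rather than two structs in one file, and struct layout is only one of many ways an ABI can drift, but the failure mode is the same.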

But Apple and Microsoft both rely heavily on proprietary third-party software, for which source code is not available, except to the ISVs who produce the software -- and they cannot always be relied upon to do any porting, especially not punctually; Apple had significant trouble getting Adobe to finally support the new version, and Adobe still hasn't moved to Cocoa, most of a decade later. Microsoft doesn't rely as heavily on any single ISV as Apple does on Adobe, but that's only because the stuff they rely on is spread out over a larger number of ISVs. So they have to think about this issue.

The logical solution is to do what Apple did: supply an emulated old-version environment for running old-version software, with all the performance penalties that implies. Software that is updated promptly can be run natively, with the advantages that go along with that. I don't think they can afford to do this every major version, but at this point they're well overdue for it.

Whether they'll actually do it is an open question. I don't know whether Dev Corvin actually has any significant inside information, and of course it's so early in the Windows Seven development timeframe at this point that any decision that's made can potentially be changed several times before release. Nonetheless, it's an interesting point.

Whether (and how) this figures into my timeline is another question. Assuming it's a for-real announcement originating from Microsoft, it would be a fairly sweeping technical announcement of the general type that my timeline has slated for 2010. But it's not related to security, and there was only one sentence about how this sort of thing is good for developers, and it's not clear that even that sentence necessarily means third-party developers. So I'm not sure there's a specific timeline entry to pin this on.

Chain Forwards on the Web

Well, I've been tagged.

Back in 1996, I used to receive (and ignore) email messages with these kinds of instructions all the time, but this is the first time I've ever seen it outside the context of email. In honor of the fact that it's the first time, I'm actually going to attempt to play along somewhat. This time only.

[The Age of Turbulence: Adventures in a New World] I am currently reading Greenspan's book, and happen to have it sitting within eight inches of my keyboard. It's a fascinating read, by the way. I'm on page 339 at the moment, but backtracking to page 123 (which just happens to be the first page of chapter six), here are sentences 6-8:

...In the process, the demise of central planning exposed the almost unimaginable extent of the rot that had accumulated over decades.

But the biggest surprise that awaited me was an extraordinary tutorial on the roots of market capitalism. This is the system with which, of course, I am most familiar, but my understanding of its foundations was wholly abstract...

That part was easy. Now the business about sending it on to (err, tagging) five more people, that's another matter. My blog is kind of atypical. Almost all visitors find a specific post (most commonly this one, as it happens) through a search engine, in the same way one would find content on a traditional website. I guess you could say I'm on the fringes of the (do I have to hate myself for using this word?) blogosphere. Naturally, the two people whom I have some reason to believe might occasionally drop by just to see if I've posted anything lately have both already been tagged.

Very well, there's nothing in the instructions that says you can only tag people whom you're confident will read the post and know they were tagged. So, in the spirit of following the letter of the instructions and ignoring the intent, I'm tagging Andy Jentes (a friend who, as far as I know, does not have a web presence at the moment), Dave Gable (another friend, going by the handle randomneuralfirings over at Xanga), Derek Lowe (a biochemist over at pipeline.corante.com), Dan Rutter (a hardware reviewer from down under -- dansdata.blogsome.com), and, umm, Tim Vroom (of perlmonks.org fame). Hey, why not?