The XHTML 100

In the spirit of Marko Karppinen’s The State of the Validation, here are the results of testing 119 XHTML sites for standards compliance. This is not a rigorous scientific exercise; the methodology had several shortcomings, some of which I detail below.

Most of the sites tested are the personal web pages of the “Alpha Geeks”, an elite group of well-linked web designers and programmers, along with some of their friends. Because these are individuals, I do not plan to “name names” by publishing the exact list of URLs tested. Sorry. However, the general sample group is pretty easy to reconstruct. If you’re the type of person who is interested in XHTML — if you’re the type of person who would waste time reading the rest of this post — just look at your own blogroll, and start validating. Your results should be roughly the same as mine.

This post is divided into three sections: Test Description, Data Collection, and Results.

Test Description

The tests derive from the first three criteria described in an earlier entry. I only tested sites that claimed to be XHTML — in other words, I only validated sites that provided an XHTML DOCTYPE (or something that was trying to be an XHTML DOCTYPE, anyway). I ignored sites that provided an HTML DOCTYPE or that didn’t have a DOCTYPE at all. It would have been interesting to test HTML 4.01 standards compliance too, but that wasn’t the point of this exercise.

The “fourth” test described in the earlier entry gets at the question, “Why are you using XHTML in the first place?” I think this is a good question to ponder… but for this survey I thought it best to focus on the first three tests, which are less philosophical and more straightforward and mechanical.

For the sake of brevity, as soon as a site failed, I stopped applying all further tests. One strike, you’re out.

  • Level 1: The “main” page must validate as XHTML. (“The Simple Validation Test”)

    This test is self-explanatory. Select the home page of the site and run it through the W3C validator. (For the curious, a rough sketch of automating this check appears just after this list.) Note that in many cases the page I tested was not a top-level page, but a main journal/weblog page, as in http://domain.com/blog. The distinction doesn’t matter too much. We just want to validate the main entry point to the site… or the page that outsiders tend to link to in their blogrolls, anyway.

    The great majority of XHTML sites failed to pass Level 1.

  • Level 2: Three secondary pages must validate as XHTML. (“The Laziness Test”)

    I designed this test to weed out people who go to the effort to make sure their home page validates… and then simply slap an XHTML DOCTYPE on the top of the rest of their pages and call it quits.

    A “secondary” page is simply another page on the website that is only one or two clicks away from the main page, such as an “About” page, a “Contact” page, or a monthly archive page. These secondary pages often had images, forms, or other elements that were not present on the main page, thus providing a useful test of proper tag nesting and valid attribute usage. If the secondary page lacked an XHTML DOCTYPE I skipped it; if it had an XHTML DOCTYPE, it was fair game.

    Of course, a more thorough test would validate all pages on the site and then characterize the total results (somehow). I chose to validate just three pages. Basically, I figure that if I can quickly select three other pages that all validate, then you’ve done a pretty good job of making sure that your site is in solid shape. Of course, some people will pass this test based on the luck of the draw, and so clearly this test overestimates the number of people who have “perfectly valid” sites. Hey, I’m okay with that.

    The majority of XHTML sites that passed Level 1 failed to pass Level 2.

  • Level 3: The site must serve up the proper MIME-type (application/xhtml+xml) to conforming user agents. (“The MIME-type Test”)

    The “conforming user agent” I used to sniff for the MIME-type was Mozilla 1.3. Mozilla has been around long enough that its ability to handle application/xhtml+xml should be well-known. Furthermore, Mozilla indicates that it can handle the proper MIME-type through its HTTP Accept header. If the site served up text/html to other browsers, that was fine — I was just looking for some acknowledgment of this issue.

    If an author makes it past Test 2, he or she clearly knows a thing or two about XHTML. If he or she then fails Test 3, we can conclude one of two things:

    • The author is ignorant of the spec.
    • The author is willfully ignoring the spec.

    Either way, it’s a failure. XHTML is not simply about making sure all your tags are closed and your attributes are quoted. XHTML might look superficially like HTML, but it is an entirely different beast. Those who know enough to pass Test 2 should know enough to understand the MIME-type as well.

    Anyway, the great majority of XHTML sites that passed Level 2 failed to pass Level 3.
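
As promised above, here is a rough sketch of what the validation check looks like if you automate it, instead of pasting URLs into the W3C validator by hand all afternoon (which is what I actually did). Treat it as an illustration only: it assumes the public validator at validator.w3.org and its X-W3C-Validator-Status response header, and the site and page URLs are made up.

    <?php
    // Ask the W3C Markup Validator whether a given page validates.
    // Assumes the validator reports its verdict in the
    // X-W3C-Validator-Status response header ("Valid", "Invalid", "Abort").
    function is_valid_xhtml($url)
    {
        $check   = 'http://validator.w3.org/check?uri=' . urlencode($url);
        $headers = get_headers($check, 1);
        if ($headers === false || !isset($headers['X-W3C-Validator-Status'])) {
            return null; // validator unreachable, or header missing
        }
        return $headers['X-W3C-Validator-Status'] === 'Valid';
    }

    // Hypothetical usage: the main page (Level 1) plus three secondary
    // pages (Level 2) of an imaginary site.
    foreach (array('/blog/', '/about/', '/contact/', '/archives/2003/03/') as $path) {
        var_dump(is_valid_xhtml('http://www.example.org' . $path));
    }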

The reasons why you should serve up your XHTML as application/xhtml+xml are well-documented. First and foremost, the spec says so:

The ‘application/xhtml+xml’ media type [RFC3236] is the [emphasis not mine] media type for XHTML Family document types, and in particular it is suitable for XHTML Host Language document types….

‘application/xhtml+xml’ SHOULD be used for serving XHTML documents to XHTML user agents. Authors who wish to support both XHTML and HTML user agents MAY utilize content negotiation by serving HTML documents as ‘text/html’ and XHTML documents as ‘application/xhtml+xml’.

Second, there’s Hixie’s famous article on the matter, which describes why you need to use the proper MIME-type. Personally, I think Hixie is a little too strict. He argues strenuously that serving up XHTML as text/html is wrong, and then relegates to Appendix B the concept of serving up different MIME-types to different user agents: “Some advanced authors are able to send back XHTML as application/xhtml+xml to UAs that support it, and as text/html to legacy UAs…” (A side note: this distinction about “advanced” authors is a little odd. First, as the results demonstrate, XHTML is hard enough that even advanced authors get it wrong most of the time. Second, configuring your server to do some minimal MIME-type negotiation really isn’t that tough. If you’re advanced enough to know what XHTML is, you’re advanced enough to add a few lines to your .htaccess file. Or add a little PHP snippet for your dynamic pages. Et cetera.)

Anyway, without Hixie’s Appendix B, we’re stuck. If you serve up your pages as application/xhtml+xml to all browsers, you’ll run into IE, which chokes on this MIME-type. The only non-suicidal thing to do is to serve text/html to the primitive browsers that don’t understand the proper MIME-type, and application/xhtml+xml to the ones that do.
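
For the record, here is roughly what that negotiation might look like as one of those little PHP snippets. This is a sketch only (my own illustration, not something lifted from any of the surveyed sites); a more careful version would also honor q-values in the Accept header and pick sensible charsets.

    <?php
    // Minimal MIME-type negotiation for an XHTML page. Browsers that
    // advertise application/xhtml+xml in their Accept header (e.g.
    // Mozilla) get the proper MIME-type; everything else (e.g. IE)
    // falls back to text/html. Must run before any output is sent.
    if (isset($_SERVER['HTTP_ACCEPT'])
        && strpos($_SERVER['HTTP_ACCEPT'], 'application/xhtml+xml') !== false) {
        header('Content-Type: application/xhtml+xml; charset=utf-8');
    } else {
        header('Content-Type: text/html; charset=utf-8');
    }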

Data Collection

I collected results for 119 XHTML websites. I reviewed about half the sites on April 19, 2003, and the other half on April 20, 2003. I used Mozilla 1.3 to sniff for MIME-types, but for the majority of my testing I used Safari Beta 2, because of its superior speed and tab management. (A side note: for beta software, Safari performed extremely well, humming along smoothly with fifteen or twenty tabs open at once. It did consistently crash on a couple of URLs, which I plan to submit with the bug reporting tool.)
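
Incidentally, you don’t strictly need a browser to do the MIME-type sniffing. Here is a rough sketch of the same check in PHP (illustrative only; the URL is made up, and the Accept header is abbreviated compared to what a real browser like Mozilla actually sends).

    <?php
    // Request a page while advertising XHTML support, then report the
    // Content-Type header the server actually sent back.
    $context = stream_context_create(array('http' => array(
        'header' => "Accept: application/xhtml+xml,text/html;q=0.9\r\n",
    )));
    file_get_contents('http://www.example.org/blog/', false, $context);
    // $http_response_header is filled in by the http:// stream wrapper.
    foreach ($http_response_header as $line) {
        if (stripos($line, 'Content-Type:') === 0) {
            echo $line, "\n"; // e.g. "Content-Type: application/xhtml+xml"
        }
    }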

Finding 119 XHTML websites is not quite as easy as it first appears. At first I tried searching Google for terms such as “XHTML standards” or “XHTML DOCTYPE”. But as it turned out, sites that talk about XHTML standards and DOCTYPEs are surprisingly unlikely to be XHTML sites.

I finally hit upon a method that yielded a reasonable percentage of XHTML websites. I went to the blogs of several very well-known bloggers who write about web standards: the “Alpha Geeks”. I then methodically went through their blogrolls. Some observations:

  • This method is likely to overestimate the number of valid XHTML sites. The Alpha Geeks and their friends are among the most tech-savvy people publishing on the web — and furthermore, they have the enormous freedom to tailor their site so that it validates. (Large corporations are for various reasons much more sluggish.)

  • The blogrolls of the Alpha Geeks consisted primarily of fellow Alpha Geeks. There were other sites, of course — news sites, journalist-bloggers, music aficionado-bloggers, bloggers who drive traffic by posting pictures of themselves in their underwear, and so on. But the majority of the links were web standards advocates, web designers, and programmers.

  • Even in this elite crowd, a large percentage of people either didn’t bother with DOCTYPEs or were using HTML DOCTYPEs. I didn’t spend time validating the latter, although it would have been an interesting exercise.

  • A significant fraction of the Alpha Geeks were the so-called “Microsoft bloggers”. Microsoft is doing a pretty good job of getting its employees out there in the Alpha Geek community. Interestingly, nearly all the Microsoft bloggers are using HTML DOCTYPEs. Do they know something the rest of us don’t?

  • One of the more popular blogging tools of the Alpha Geeks was Movable Type. The majority of Alpha Geek MT sites were not using MT’s default templates — usually their MT installation was highly customized. Radio was also a popular choice, although Radio blogs did not contribute significantly to the number of XHTML sites. A few of the Alphas “roll their own” systems (more power to them). Blogger was surprisingly rare, considering its popularity in general — perhaps because it isn’t as customizable as Movable Type. The ground-breaking (but now unsupported) Greymatter was even rarer.

  • Of the XHTML sites, XHTML 1.0 Transitional was the most popular choice by a wide margin. This isn’t too surprising. XHTML 1.0 Transitional is the default DOCTYPE for Movable Type, and it has the added benefit of allowing you to use all sorts of wonderfully semantic tags and attributes such as the <center> tag and the border attribute for images.

Many Alpha Geeks (including some vociferous standards advocates) failed validation very badly, with dozens and dozens of errors of varying types. On the other hand, a few Alpha Geeks came tantalizingly, frustratingly close to validation. Typically this sort of failure would arise on the last page, where the author would make a tiny error such as forgetting to escape a few entities or inserting naked text inside a blockquote. I can certainly understand how these kinds of errors can creep in, no matter how diligently you try to avoid them. (And I can sympathize — the blockquote validation error is a personal bugbear of mine.)

But it doesn’t matter whether I feel bad or not. It doesn’t matter if I think the errors are “small” or “forgivable”. That has absolutely nothing to do with the specs, or the validator…

“Listen! And understand! That Validator is out there. It can’t be bargained with! It can’t be reasoned with! It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, EVER… until you are validated!”

And, umm, on that note, let’s get to the results.

Results

Of the 119 XHTML sites tested:

  • 88 sites (74%) failed Test 1 (“Simple Site Validation”).
  • 18 sites (15%) passed Test 1, but failed Test 2 (“The Laziness Test”).
  • 12 sites (10%) passed Test 2, but failed Test 3 (“The MIME-type Test”).
  • Leaving us with one site (1%) that managed to pass all three tests.

I know I promised not to name names, but I must make an exception. For the one man in the entire set who passed all three tests, let’s hear it for… beandizzy! Yay beandizzy! At the time of this writing, beandizzy is reformulating his design — but as of a week ago, his site validated perfectly and served up the right MIME-type. So congratulations, beandizzy. You have beaten the elite of the elite. You stand alone on the mountain top. (Well, there might be the occasional string theorist standing alongside you — but really, physicists are best ignored.)

As for the rest, the results speak for themselves. Even among the elite of the elite, the savviest of the savvy, adherence to standards is pretty low. Note that this survey most likely overestimates adherence to XHTML standards, since you would expect the Alpha Geeks to rate high on XHTML standards comprehension.

Also, I have to admit that I grew rather emotionally invested in the test process. I figured twenty sites would be enough to get at least one compliant site. When that failed, I went on to 40, 60, … amazed that not one site had passed. By the time I reached beandizzy’s site (#98) I was pretty drained. I surveyed the rest of the blogroll I was on and then gave up. So again, this survey most likely overestimates XHTML standards adherence, because I quit soon after I got one success.

Conclusions are forthcoming. But there’s one thing that’s clear right off the bat: XHTML is pretty damn hard. If the Alpha Geeks can’t get it right, who can?

Reload Assiduously

No, no, no. The plan all along was to secede from Southern California, not to secede from the entire country. Sheesh. Of course, some folks seem to think they’ve already seceded.

So I don’t read the Volokh Conspiracy, but my old friend and personal attorney1 Eric Stenberg writes to inform me about an interesting article on free speech and violent video games by Prof. Eugene Volokh. The article includes a lengthy excerpt of an opinion by Judge Posner of the Seventh Circuit.

Eric says that in law school he and his classmates read a great deal of Posner, and that he (Judge Posner, not Eric) is considered one of the most important legal scholars in the United States today. Judge2 for yourself:

Zombies are supernatural beings, therefore difficult to kill. Repeated shots are necessary to stop them as they rush headlong toward the player. He must not only be alert to the appearance of zombies from any quarter; he must be assiduous about reloading his gun periodically, lest he be overwhelmed by the rush of the zombies when his gun is empty.

And all this time I thought law school would be a drag. Eric baby, where do I sign up?

Finally: I want to apologize to my friends and family for all the geek stuff in the last couple of posts. I’ve recently been quite curious about the status of XHTML on the current World Wide Web. In fact, I spent a fair chunk of Sunday and Monday evening scouring the web for XHTML websites and subjecting them to the validation tests described earlier.

The results were, shall we say, not pretty. But I’m still collecting my thoughts on the matter. Assiduously reloading my shotgun, as it were. So just fair warning: there’s going to be more on this forthcoming. Probably a lot more. I know that you, my loved ones, couldn’t care less about web standards, so please bear with me for now. It seems we’re all going to have to suffer together.

1. Well, the guy I tend to pester with legal questions, anyway.

2. Tee-hee!

We Have a Winner!

So the XHTML2 people have judged poor little old HTML and found it wanting. XHTML1 isn’t good enough for them either, because it’s really just a reformulation of HTML, and neither language is “semantic enough”. The Semantic Web is not here yet, but through the technological solution of XHTML2, we’re going to make it happen.

Given that the plan is to make the web a better place by pushing out a “better” markup language, it behooves us to ask: how are we doing on current standards compliance? No, really, stop chuckling, it’s a serious question. XHTML 1.0 has been out for over three years, and the percentage of sites that conform to the XHTML specification is not noticeably different from zero.1 A naive person might wonder whether this failure of XHTML 1 to gain traction could pose a problem.

Then — a revelation. Through rather mysterious circumstances, I found myself at the website of Jacques Distler. After browsing his site for a while, I have come to the inescapable conclusion that Jacques Distler may well be the only person on the planet who understands the XHTML 1 specification and uses it properly.

That is, Distler’s site:

  1. Passes the “Simple Validation Test”: The home page validates as XHTML. (Most sites with a DOCTYPE of XHTML or HTML fail this test.)

  2. Passes the “Laziness Test”: In this test, we try validating two or three random pages deeper in the site. (This test is meant to filter out people who lazily slap a DOCTYPE on all their pages but only bother to actually validate their home page.)

  3. Passes the “MIME-type Test”: The server is serving up the pages with a MIME-Type of application/xhtml+xml, in accordance with the specification.2 (All together, now: “Serving up XHTML as text/html is evil.” Good.)

  4. Passes the (somewhat ad-hoc) “Cluefulness Test”: Distler is actually using XHTML to do something that you can’t do using plain old HTML. Namely, he’s embedding valid MathML markup in order to display equations directly in his blog. (If you’re going to go to all the trouble of getting your site to validate as XHTML, you really ought to do something with it, yes?)

So there you have it, folks. One man is knowledgeable enough to implement our latest web standards correctly. The fact that this particular man is a theoretical physicist specializing in string theory and quantum field theory should not dissuade us in the slightest from believing that the technological solution the W3C has chosen will ultimately prevail.

1. In fact, after some preliminary investigation on my own, I began to worry that the absolute number of sites that conform to the XHTML specification is, in fact, exactly zero. More on this later.

2. Note that Distler’s site comes awfully close to failing the MIME-type test. Distler is serving up his pages as application/xhtml+xml to all browsers that support it (such as Mozilla) and text/html to browsers that don’t, such as Internet Explorer. According to the specification, XHTML 1.0 documents “MAY” under certain circumstances be served up as text/html, but XHTML 1.1 documents “SHOULD NOT”. To wit: “In particular, ‘text/html’ is NOT suitable for XHTML Family document types that adds elements and attributes from foreign namespaces, such as XHTML+MathML.” But we’ll give him a pass on this one.

3. Note that serving application/xhtml+xml to some browsers and text/html to other browsers is the only realistic way to follow the specification as of this date. Serving text/html to all user agents is plainly wrong. How about serving up application/xhtml+xml in all cases? While admirably purist, this approach is suicidal, because IE6 chokes on that MIME-type.

Well, not ALL standards are crap

I realize that in my last entry, I never really answered the opening question, “Are Standards Crap?” If you couple my failure to answer the question with my disparaging remarks about forward compatibility, standards, and XHTML 2… you might come to the conclusion that I think standards are crap.

Well, I won’t offer a non-apology apology — I take full responsibility for my lack of clarity. So here’s what I really think. Some standards are crap, and some are fine, or even good. HTML 4.01 has its quirks, but on balance it’s a fine standard. CSS 1 and 2 are excellent, and I’m eagerly looking forward to version 3. XHTML 1, I’m less thrilled about. Still debating this one internally.1

And as for XHTML 2? Well, let’s have no ambiguity here. Check out this gem from the public www-html lists:

From: “Ernest Cline”
To: www-html@w3.org, www-html-editor@w3.org
Date: Wed, 09 Apr 2003 20:31:25 -0400
Subject: [XHTML2] Poor little old <a>

Let’s face it. There is very little purpose that <a> serves in the
current working draft. About the only thing it still has going for it
is that links specified by <a> are still supposed to look like links…

The thread continues with a chorus of yeas.2 Not one dissenting voice. Heck, why keep the poor little old <a> tag, anyway? Oh, I dunno. How about because the <a> tag has been the primary linking mechanism in HTML ever since the very first version of the language, you blithering…

Arrrrgh. Sorry about that. It just irks me to no end that the XHTML 2 folks are not only purposefully breaking backward compatibility (which is bad enough)… but they seem to be taking a gleeful pleasure in the destruction of the old. What gives? Who let these people in, anyway? Who gave these people the keys to the vault, and why aren’t the soldiers guarding the entrance?

This would all be less troublesome if only the XHTML 2 people would listen to Zeldman’s comments from a few months ago and name their specification something else. “Advanced Markup Language”, or “Semantic Markup Language”, or something like that.3 Then we could proceed with good conscience.

But instead, here’s what’s going to happen. In less than a year, the XHTML 2 people are going to push their vision of the Semantic Web out on us. Thousands of web designers are going to jump on the bandwagon and unthinkingly start slapping XHTML 2 DOCTYPEs on their websites. The browser makers will ignore the new standard. Or mis-implement it. Or divert valuable resources that should be used to improve the current standards. And thus, in five or six years we’ll have a mishmash of incompatible browsers and tag-soup pages pretending to be XHTML 2. By that time, the designers of XHTML 2 (by then XHTML 2.2) will be bored and champing at the bit to unleash XHTML 3 on the world. Wheeee.

What a train wreck.

1. Right now I’m thinking XHTML 1 qualifies as “Mostly Harmless”.

2. The thread continues with a digression into whether one could keep the <a> tag around if it was used for nesting links like the (currently useless) <object> tag, along with an off-topic discussion of whether to dump the <acronym> tag.

3. Of course those are bad names too, because they imply that HTML isn’t advanced and isn’t semantic. But you get the idea.

Are Standards Crap?

Mark Pilgrim thinks so — or at least if he doesn’t think so, a few months ago he was frustrated enough to say so out loud. More recently, Peter-Paul Koch argues that web standards and forward compatibility are not one and the same thing.

Koch’s argument confuses Jeffrey Zeldman, author of the forthcoming Forward Compatibility: Designing & Building With Standards. This confusion is perhaps understandable, given Koch’s rather contorted English constructions. So here’s my take on it. Forward compatibility means, “will my website of today perform well in the browsers and devices of tomorrow?” Koch is asking, “If I follow web standards, will I achieve forward compatibility?” Koch’s answer is a resounding no.

People who believe the answer is “yes” are assuming that the browser-makers will implement web standards properly. But time and time again, the browser-makers have failed to do this. Ask Mark about the OBJECT element, or XHTML Basic. Better yet, ask Anil Dash. He’ll give you an earful.

Presumably the “yes” people fall into one of two camps:

  1. The strict interpretation: Web Standards = Forward Compatibility. Since the strict interpretation is contraindicated by vast reams of physical evidence from the real world, this one is safely ignored. We can only marvel at the stubborn persistence of this belief, which serves as definitive proof of the power of articles in glossy magazines and hip web publications to bend and warp reality.

  2. The loose interpretation (the “faithful” interpretation): Web Standards != Forward Compatibility… but they will at some unspecified point in the future. This belief at least takes the current situation into account, and couples it with a sweet, childlike faith that things will be better in the future.

    That is to say, despite all historical evidence to the contrary, and despite the almost total absence of market forces that would push things in this direction, all the browser-makers will eventually realize the error of their ways and repent. And yea, they shall smite the wicked users of table-based layouts with brimstone and fiery ash, and the righteous shall be redeemed. By their XHTML2 DOCTYPEs shall ye know them.

Meanwhile, Zeldman has a sensible, practical take on XHTML2:

Regardless of the 2 in its name, XHTML 2 will not make XHTML 1 obsolete. Browsers will not stop supporting XHTML 1. Designers will never have to use XHTML 2. Those who find it beneficial will adopt it. Those who don’t, won’t.

That many designers might never use the emerging specification does not seem to bother most of the framers of XHTML 2, nor does it seem to make them question the value or practicality of what they are creating.

If one of the driving forces behind the Web Standards Project can say such things about the latest, greatest version of HTML, then it behooves us all to be wary. So I’ve started reading the public email lists for XHTML2. I’m still formulating my thoughts on XHTML2, but my initial take is that it is a real chamber of horrors. I’d like to write more, but I must cool my boiling blood and be off for Passover Seder.

More to come. Happy Pesach.

It Bleeds, It Bleeds…

The public beta 2 of Safari is out. Up until recently I had only been playing with it. But the latest version is faster than Mozilla, and it seems more stable than Mozilla, and now it has tabs — and that’s good enough for me.

Not to say Safari doesn’t have its little quirks. For example, today I discovered a bug where list items can bleed over into right-floating divs (see the simplified test page). Mark Pilgrim has been kind enough to add the bug to his unofficial list of Safari bugs, and I’ve also submitted the bug to Apple using the built-in bug report tool, so we’ll see how it goes.

In other web design news, I’ve moved all of my MOTWM class notes to the new template, and I’ve also posted a whole slew of notes that had been sitting around in my notebook for months. So if you’re in the class and you’ve been missing out on the notes, my apologies — go get ’em while they’re hot. Bill has asked that I not provide the URL directly on this site, but the URL is quite guessable. If you can’t figure out what the URL to MOTWM on goer.org could be, then send me an email and I’ll send you the link. Ah, security through obscurity. It’s a beautiful thing.

Finally, a hearty congratulations to Justin and his wife Dana, who are now the proud parents of an 8lb 12oz baby boy named Kevin Davis. They grow ’em plump and healthy over at the Miller household. Despite this good news, I am forced to conclude that some people will go to extraordinary lengths to avoid going to Poker Night when I’m on a winning streak. I wonder what excuse Justin is going to pull out this week? “I’ve got a really late meeting tonight.” “My wife is going into labor.” Pfft. Whatever, Justin.

Thoughts on Master of Orion III

For a week or two, I had been hanging around the local Apple Store, waiting for Master of Orion III for the Macintosh to come out. Of course, this is not the most efficient way to get a new game, but that’s okay — I just like the Apple Store aesthetic anyway. Although it was a bit disturbing when a middle-aged gentleman walked up to me and started asking me questions about the iMac line. I was totally confused, until I suddenly realized I was wearing half-rimmed glasses, a black polo shirt, black belt and shoes, and clean, pressed khakis. Suddenly it all made sense. The Apple Store aesthetic is a powerful force.

Well, finally Master of Orion III arrived. Now, some reviewers have complained that this game is too complicated and slow-paced. My response to that is: if you don’t like setting budgetary policy, managing local and empire-wide tax levels, zoning planetary developmental regions, and arguing over affairs of state with your peers in the Galactic Senate, well then I guess you’re not cut out to be a Galactic Overlord, are you?

The game does have its difficulties — most of which are obviated by simply giving up control and letting the AI do its work. Space combat is just one example. I’d like to be able to control my ships… but by the time I’ve manually selected all my task forces and given them orders, the enemy has already powered up weapons, launched fighter squadrons, swept me with sensors, identified and targeted my vessels, fired off a devastating missile salvo, destroyed half of my fleet, made off with my daughters, and keyed my car. So I have to give the battle AI full control. It’s kind of sad, but at least I get to watch a pretty light show. And I can still rotate the battlefield and zoom the camera in and out. I have a theory that doing this helps my forces, somehow.

Anyway, I thought I’d share the following account of the first game I managed to win on “Medium” difficulty. As Dave Barry likes to say, I swear I am not making any of this up.

My chosen species was the Imsaeis, a large, bloated race of floating gas giant dwellers. (The game documentation describes them as being outwardly “cheerful” and “agreeable”, while secretly “striving to be in control”.) The game began with a long phase of peaceful expansion, trade, and colonization. Life was good. But all of a sudden, another race launched a surprise attack on one of my worlds! Shocked and outraged, I had no choice but to declare total war. I immediately went to my allies, through private channels and in the Senate. I begged them, bribed them, and even demanded that they assist me in the struggle — or at least condemn my attackers for their misdeeds. But nobody would agree to help. In fact, some of them even seemed angered by my behavior.

I would have to go it alone.

First, the covert war began. My enemy had already infiltrated my territory with spies and saboteurs, with orders to blow up key buildings and demoralize my population. To root these agents out, I was forced to drastically increase my internal security measures (by adjusting the slider on the helpfully labelled “Oppressometer”). This led to no small amount of unrest amongst my citizens. I also retaliated with my own spies, hoping to assassinate key leaders and probe the enemy’s technological capabilities.

Soon the “black ops” phase gave way to the war itself. My massive economy had enabled me to construct a mighty armada, which I launched deep into his territory. Our fleets met, and my larger numbers and superior technology carried the day. I soon achieved space superiority and began landing troops. But to my great surprise, his ground resistance proved particularly tenacious, and I was forced to order my troops to hold their ground while I called in reinforcements. Eventually I managed to sweep his organized armies and his local militia forces aside. And thus the war was over, and his population properly pacified. Err, liberated.

Shortly thereafter, all the other species recognized that my military and economic superiority gave me an overwhelming lead over them all. So they all banded together and did the only logical thing they could… they elected me President of the Galaxy, ending the game on a peaceful and happy note.

I’m pretty sure that this is how these things generally work out.

Charles and Louis, Louis and Charles

Last evening I saw Sarah in her high school production of Kiss Me, Kate. I mention this because I thought everyone would like to know that my little sister is quite probably the greatest actress of her generation. And possibly the greatest singer of her generation too (what with the mediocre competition and all). You heard it here first.

The other kids were also pretty good, particularly her co-star and the two gangsters. The latter even had helpful advice to offer, in an impromptu educational segment of the show:


Brush up your Shakespeare
Start doing it now.
Brush up your Shakespeare
And the women you will wow.

Ah, so that’s why I’ve stuck with those MOTWM classes for so many years. Although I have to admit that the correlation between the wowing of women and one’s knowledge of Shakespeare is not as clear-cut as the gangsters allege. Maybe I’m hanging out in the wrong bars?

Anyway, this quarter we won’t be doing the English Renaissance for a few more weeks. Right now we’re focusing on pre-Renaissance France, a subject that would be far less confusing if the French nobility had been just a little more innovative in naming their male heirs. For example: in the late fourteenth century, the King (Charles) goes mad, but not before he has a son (Charles) who succeeds in mostly unifying France. He has a son (Louis) who ends up fleeing his father’s wrath and living temporarily with France’s great rival, his uncle, the Duke of Burgundy. After his father’s (Charles’s) death, Louis takes the throne, where he soon becomes embroiled in a conflict with his cousin (Charles), now Duke of Burgundy. After many years Louis defeats Charles, fends off the English, establishes the present-day borders of France, and has a son (Charles), who invades Italy and sparks the Italian Wars. Charles dies young and without an heir, so his cousin (Louis) takes the throne, assisted by a dashing young military leader, the Duke of Bourbon (Charles), who will figure prominently later on…

And on it goes. Thank goodness American history does not have this problem, recent presidents aside. (I, for one, have trouble keeping my John Adams straight from my John Quincy Adams. I’ve just resorted to mentally assigning the second one a little flag: “less important”.) One hopes that we do not continue on this path of modelling ourselves after French nobility. However, I hear that George W. Bush’s young nephew, George P. Bush, grandson of George and son of Jeb (Duke of Florida), has political aspirations. I smell trouble.

From Dollars to Doughnuts

I’ve been trying to get M’ris to add an RSS feed to her journal. Considering that M’ris hand-codes every one of her entries, this is a dubious proposition at best. Nevertheless, I soldiered on with what I thought was an extraordinarily persuasive argument:

This would make MY life much easier. Oh, I’m sure you’re thinking that this would involve some work for you. But why are you always thinking about yourself? It should be about me, me, meee!

She replied that:

You should never believe that other people aren’t thinking about you, you, yoooooou. If it appears that we aren’t, you should convince yourself that we are preparing an elaborate surprise for you. The longer you have to wait for it, the more elaborate it’s likely to be. Aren’t we nice?

So while I wait for what is bound to be an extraordinary surprise from M’ris & co., here are some items in brief:

  • Poker night tonight. I hope to continue my massive winning streak from last time, where I took ’em for twelve dollars. Just eight or nine more nights like that, and I’ll be back in the black.

  • Or about thirty-five more nights like that, and I’ll have paid for my recent brake job. When one’s brakes go from making a squeaking noise to making a rattling, grinding noise, it behooves one to take care of the problem, damn the cost.

    Not that I’m unhappy about this. On the contrary! It’s like my old Rastafarian roommate Miles used to say, usually while grinning from ear-to-ear, “I love paying bills.” Why do you love paying bills, Miles? “Because I hate having my power and phone shut off.” Truer words were never spoken. I love paying for brake repair. I hate involuntarily sailing through busy intersections.

  • With the aid of my newly repaired brakes, I popped down to LA for the weekend. I saw Eric and Susan, Jason and Megan, and Elana and Adiv. Now that was all nice, but the real triumph of the weekend (not to give short shrift to my oldest friends and my sister and brother-in-law) was the successful Quest for Donutman Strawberry Doughnuts.

  • For years, I thought I had lost Donutman forever. I couldn’t quite remember how to get there, and to make matters worse I couldn’t quite remember the name of the place: I thought it might be “Donutman”, but I also remember that back in my college days, many of us called it “Foster’s” (which may I say is quite wrong: it’s “Donutman”). My searches on the web pulled up nothing — I found a couple of personal websites raving about the donuts, but none of them provided an address or directions or anything useful. Heartbreaking but true.

    So on Sunday morning Eric, Susan, and I struck out for the Claremont area. Our original plan was to find some poor student playing frisbee, kidnap him or her, and extract the information by all means necessary. Desperate times call for desperate measures. Fortunately it didn’t come to that — the street names started to look familiar to me again, and soon enough we found Donutman. And I had me a strawberry doughnut. Huzzah!

    For future reference, Donutman is located at 915 E. Alosta in Glendora, CA. The nearest major cross street is Elwood. If this information helps just one poor, befuddled alumnus of the Claremont Colleges find their strawberry doughnut, this journal entry — nay, this entire website — will have been worth it.