You are viewing snarke

Bending Reality to my Will


September 23rd, 2014


10:36 pm - Why Hugo Base Design Contests Are a Bad Idea
Wait, what?

"But Dave, didn't you enter a Hugo Base design contest?"

Yes, I did, and I am all too aware of how the success of that entry has badly undercut my case. Nevertheless, I believe the headline is quite true.

You see, while I was still in college, my home town's city council held a couple of design contests for banners to decorate the downtown district. Since I was studying graphic design at the time, I decided to enter. When I lost the first one, I thought "Oh, well, I guess that's just how it goes." But when I lost the second one, it became obvious that the problem was not with my designs, but with the fact that there were factors being used to choose the winner that weren't included in the contest specs. Things like "we really like bright colors," and "even though we said you could use up to three silkscreened colors on the fabric, we're actually very miserly, so designs with just one color have a real edge."

At that point, I'd already had enough of giving away my skilled, trained labor for free, and decided I would not be entering any more design contests.

I stuck to that, too, until 2008. Because it's the Hugo, which has enormous personal significance. I first got to attend the awards ceremony in 1993, and sitting in that crowd and watching winners pick up their trophies was absolutely thrilling. Even though I was at WorldCon in a professional capacity, it was obvious to me (or so it seemed) that my career path did not lead toward ever being eligible to receive one of my own. "What award," I asked myself, "would be even more thrilling to receive than a Hugo?" It happens to be a very, very short list. Nobel Prize, MacArthur Grant, Kennedy Center Award. That's it. An Oscar, Emmy, Clio, Tony, Pulitzer? Not as amazing as a Hugo, not to me.

This hopefully gives you some idea of just how big a deal it had to be to make me break my rule about entering design contests.

But now, you see, I have a Hugo; the one I made. I am, of course, hugely biased as to where it would fall on a scale of "best to worst base designs ever," but there's an awful lot of fairly good evidence that it's somewhere in the top 25% at least. All of which means that I really doubt I'll ever enter another Hugo base design contest. Ever.

Unfortunately, perhaps in part because of the results Montreal (and probably Scotland) got from their contests, having base design contests has become more common. This is a Bad Thing.

"No, no, it's a good thing! We will get to choose from among multiple options, so we can get the best base!" No, you'll get to choose from among a very limited number of options, most of which will be unusable, and the remainder of which will probably be merely okay. Because what you're going to get from a contest is entries from amateurs. Really good designers don't have to give away their time for free to get work. They're not going to give you designs.

Fortunately, you don't have to just take my word for it. AIGA is the leading guild for graphic artists in the U. S., and they have a handy form letter for their members (or anybody else) to use to help educate people about asking designers to work for free. It's called "spec work", defined as "work done prior to engagement with a client in anticipation of being paid," and here's some of what that letter says: "AIGA, the nation’s largest and oldest professional association for design, strongly discourages the practice of requesting that design work be produced and submitted on a speculative basis in order to be considered for acceptance on a project, [because] successful design work results from a collaborative process between a client and the designer [whereas] design competitions ... result in a superficial assessment of the project. [Also,] requesting work for free demonstrates a lack of respect for the designer and the design process."

They do suggest an alternative approach. "A more effective and ethical approach to requesting speculative work is to ask designers to submit examples of their work from previous assignments as well as a statement of how they would approach your project. You can then judge the quality of the designer’s previous work and his or her way of thinking about your business."

As it happens, AIGA has an unusually mellow take on spec work. The Graphic Artists Guild says "Artists and designers who accept speculative assignments (whether directly from a client or by entering a contest or competition) risk losing anticipated fees, expenses, and the potential opportunity to pursue other, rewarding assignments."

The Registered Graphic Designers of Ontario goes so far as to "prohibit its members from engaging in speculative (spec) work" and goes on to state that "Spec work is universally condemned as an unethical business practice by responsible designers and design organizations around the world."

The Society of Graphic Designers of Canada says "The practice of asking for free design concepts in order to choose the 'right designer' or the 'best design' or the 'best logo' undermines and devalues the professional designer's education, experience, hard work and the entire design industry. GDC members do not engage in contests or other speculative, commercial projects."

There's even a domain dedicated to explaining the problems with spec work: http://www.nospec.com

So the problems with this contest so far are (a) the judges won't even get to see work from the most talented designers, and (b) the designs they do see are the designer's "best guess", rather than something custom-tailored based on interacting with the client. The design firm "artwurks unlimited" neatly summarizes the third big downside: "Speculative requests are often a result of 'I’ll know it when I see it,' thinking on the part of the client. The problem here is that it’s self-centered point-of-view rather than a position serving the needs and wants of the audience."

Not long after Montreal's base design contest, the WSFS Mark Protection committee (I think?) held a design contest for a logo, so that there would finally be some kind of symbol that could go on Hugo-winning books' covers and the like. It's an excellent idea. Unfortunately, the winning logo is not. As a graphic designer, my personal specific field of interest has always been logos and logotypes. Designing the Hugo base was a stretch for me. Designing logos is not.

Had I been hired by the Mark Committee, I know that part of my job would have been educating the judges in what makes a good logo. It's obvious to me why they picked the design that won: it does a very good job of replicating the appearance of the trophy, and the judges clearly thought that was important. Sadly, they were wrong. Most book buyers have never, and will never, see a real Hugo award. Making the logo look just like the trophy is not very important. Making the logo robust (recognizable under a variety of conditions and sizes), unique (not confused with anybody else's logo), eye-catching, and thematically appropriate (it does need to be Hugo-esque), are much more important.

Part of the irony of being so obsessed with duplicating the rocket is that logos are strongest when they're silhouettes: a single solid color, or black; but the silhouette of the Hugo doesn't look like a rocket! That's why the winning logo has to be two-tone black and gray. If you make it all black, the result is just sorta a lumpy vertical line that really doesn't have any "zoom" or "swoosh" to it at all. It would look a lot more like the Hugo if it looked less like a Hugo. It would look even more like a Hugo if it were foil-stamped in silver on a book cover. Alas, because it's two-tone, that's not going to happen. You can't half-stamp foil.

By now, you might be thinking that I'm about to say that no future Hugo awards committee should ever hold a contest again. Actually, no, I'm not. There is one very important consideration that tips me away from being that draconian, and that's budget. Science fiction fandom has never been a big-budget operation, and there's no way an awards committee could afford a professional at normal union rates.

One of the requirements in Montreal's contest guidelines was that each base should cost no more than $150. My proposal included, as part of that price, a modest but reasonable budget for my time, as well as the materials. I found out later that many of the previous bases had not paid the fabricator or artist for their time at all, and it was generally agreed that even though those people had been quite willing to work for free, it was better if there was at least some acknowledgement of the value of skilled labor. For my part, I had put a fair amount of thought into how to keep the materials cost low, and the fabrication time short, in order to free up more of the budget for my own compensation. The Montreal administrators, in turn, told me point blank that they were entirely satisfied with the value I'd set on my time.

And yet! In order to make those bases, I ended up working fairly closely with Quiring Monuments, the largest grave marker maker in the Pacific Northwest. They were my source for the granite, and then they were sandblasting the partially completed bases. Naturally, when it was all over, I took my personal display Hugo over there to show them so they could see what it was they'd helped me make. I'd mostly worked with a woman in the front office, but when I was showing off the trophy, an older gentleman from a fancier office came out to see it, and he asked me what I'd been paid to do the work. When I told him $150, he was actually outraged. I was told that I should have received at least $800 for that kind of work.

Maybe so, but I don't think we're going to be handing out $800 trophies any time soon. Ergo, if a Hugo committee wants a great base for their awards, they have to find a competent professional who cares enough about the Hugos to cut them a really sweet deal. If they can't find anybody willing to do it who they believe can do it, that's when it's time to hold a design contest. It's better to pick from amateur designs handicapped by a contest communication blackout, than to have nothing at all.

But that means that a contest should be the last resort, used only if the committee can't find anybody better. Montreal couldn't come up with somebody, or so they told me. I heard the same thing from somebody on the Sasquan committee, but in their case, it just means they didn't bother looking. I can think of two or three intraregional fans who have skills and talents well suited for making a beautiful trophy base, entirely aside from myself, and I can think of at least a dozen more who might. Since I nearly won a Hugo (in 2010) for my 2009 Hugo Base design, and roughly two-thirds of the current committee knows me personally, to have one of them claim that they would have liked to have just appointed somebody except that they couldn't think of anybody to ask is just silly.

"Well, um, Dave? Here you are, kinda shouting and ranting and acting all scornful and stuff. Maybe it's because they know you that they didn't ask." Yup, that's a possibility. I am definitely not a 'people person,' and although I'm not aware of being actively disliked by any of the people on the committee (well, at least not until now), it's quite possible I am. Nevertheless, whether I could have been in the running to design this year's base or not does not change the fact that holding a contest is a bad idea. Nor should you mistake this blog entry as some kind of attempt to get the current committee to change its mind about having a contest and instead ask me to do it. I am no longer interested.
Current Mood: sad


September 11th, 2014


02:09 pm - For the Record: The Endpoint of the Electric Car
In the long run, the ideal primary-use car (one that can be used for both short and long trips) will be an electric car with a turbine range extender.

I think I first made this prediction three or so years ago, after the Leaf had been announced but before it was available. You see, the current crop of "plug-in hybrids" is still doing it wrong: they're connecting the fuel engine to the wheels. This is just dumb. One of the biggest, if not THE biggest, weaknesses of the internal-combustion (IC) car is the drive train. You need this elaborate and complicated transmission to turn the limited speed range of the engine into the far greater range required for propelling a car around. Since IC engines can't start themselves, an IC-only car has to keep the engine running when the car is stopped, so not only do you have to change gears (the transmission), you also have to be able to disconnect the engine entirely (the clutch or torque converter). The engine itself has to operate at a wide range of speeds, which inevitably means compromising overall efficiency to gain flexibility, and requires yet more mechanical moving parts (throttle body, variable output fuel injectors, and so forth).

The *last* thing we should be doing is making cars *more* complicated. Oh, goody, even more bits that can break down. Hybrid cars, like CFL lights, are lousy ideas from an engineering standpoint. They're both transition products, distinctly inferior to their coming replacements, but necessary (or at least economically desirable) because they can take partial advantage of new technologies until the infrastructure exists to make them unnecessary. If LEDs hadn't shown up so quickly, twisty-tube CFLs would eventually have been replaced by smaller versions of the traditional fluorescent fixture, with straight-line pin-mount tubes, because buying a new ballast every time you buy a new bulb is stupid, and putting the ballast in the same space as the bulb means the heat from the bulb cooks the ballast and causes it to fail prematurely. What we really need to do is not prohibit the sale of incandescent bulbs, but prohibit the sale of any more light fixtures with Edison sockets. We've got vastly superior alternatives to the industrial-age light socket these days, and one of the worst things you can do to either fluorescent or LED lighting is to try to cram it into a sphere-based 'lightbulb' form factor. But I digress.

An IC engine typically has more than 100 moving parts, which have to work in an environment with major temperature swings, serious pressure differentials, and an astonishing amount of high-speed metal-on-metal contact. An electric motor, by contrast, has one moving part, no significant pressure differentials, and generally will (and would prefer to) operate at much lower temperatures. All of that translates to much, much greater overall reliability.

Do you take your current IC car in for an oil change every 3 months/3k miles, or do you follow the manufacturer's recommendations, which are usually 6 months and 5k or 7.5k miles? Either way, compare that with the Leaf's dealer-recommended service schedule: first service visit is at 6 months, mostly to inspect for possible factory-caused problems. The next is sometime after 24 months. Again: 1 service appointment in the first two years of ownership, and that appt. doesn't necessarily involve changing or replacing a thing.
I expect that we'll eventually see primary-electric-drive cars easily exceed 1,000,000 miles. The overwhelming majority of them will be scrapped because of impact damage, not internal component failure.

The only real (as opposed to perceived) problem that *I* see for electric cars is a range issue, but probably not the one you think. Charging stations are going to keep proliferating, and even if your destination doesn't have a fancy car-charging station, if you can't find a regular old wall socket to plug the car into, you weren't even trying. The problem isn't that there's no place to charge a car, the problem is the time it takes to 'refill the tank'. The I-5 corridor along the west coast is already *very* well stocked with charging stations, but that won't let you get from Seattle to L. A. in 16 hours, like you can in a gas-powered car.

By the way, the oft-cited 'issue' of needing to replace the electric car's battery pack is a perceived problem, not a real one. Nissan, which (based on the evidence so far) seems to be very good about providing reasonable, accurate, real-world-use statistics for the Leaf, says they expect a 5-year-old Leaf's battery pack to have about 80% of its original charge capacity. In fact, Nissan's warranty guarantees at least 70% capacity out to 5 years or 60,000 miles, so some Leaf owners might well not need to replace the batteries until the car is 6-9 years old. The current price for a new battery pack is $5,500, which is already pretty reasonable, and it's likely to go lower as manufacturing volume drives down the cost of lithium-ion batteries.
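For the curious, here's a quick back-of-the-envelope sketch of that battery math, using only the figures quoted above. The 8-year amortization window is my own illustrative assumption, not something from Nissan.

```python
# Battery figures from the post: 24 kWh pack, ~80% expected capacity at
# 5 years, 70% warranty floor, $5,500 replacement price.
PACK_KWH = 24.0
EXPECTED_AT_5_YEARS = 0.80
WARRANTY_FLOOR = 0.70
REPLACEMENT_PRICE = 5500

expected_kwh = PACK_KWH * EXPECTED_AT_5_YEARS   # 19.2 kWh
floor_kwh = PACK_KWH * WARRANTY_FLOOR           # 16.8 kWh

# Even if you do eventually replace the pack, amortizing $5,500 over
# (say) 8 years of service is hardly catastrophic:
per_year = REPLACEMENT_PRICE / 8                # $687.50/year

print(f"Expected capacity at 5 years: {expected_kwh:.1f} kWh")
print(f"Warranty floor:               {floor_kwh:.1f} kWh")
print(f"Replacement cost per year:    ${per_year:.2f}")
```

Even at the warranty floor, that's two-thirds of the original range, and the replacement works out to less than most people spend on gas in a few months.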

That brings us to the reversed-priorities 'plug-in hybrid'. As with the traditional gasoline-powered car, connecting your fuel engine to the wheels means throwing a huge pile of heavy and unreliable junk into the car. Dumb. What we need is a 'fuel-assisted hybrid.' Good grief, of *course* it's a 'plug-in'; why would you ever drive a car that couldn't plug in? But if you need to be able to go further than a single charge will take you, that's when you use up some fuel in order to fire up the onboard fuel-powered generator.

Connecting the gas motor to a generator, and ONLY to a generator, makes a gigantic difference in the parts count. The engine itself can be set to run at one constant speed, and optimized for maximum efficiency at that speed. It and the directly-attached generator can be located anywhere in the car that is most convenient, without any need to ensure any mechanical linkage to the wheels.

Once you've abandoned the idea of connecting the engine to the wheels (like the rusty junker of an idea that it is), you have a much wider range of engines to choose from. One of the potentially most efficient, with a potentially astronomical power-to-weight and power-to-volume ratio to boot, is the gas turbine engine.

What we're talking about in this case is basically a cute little teensy-weensie jet engine. Like an electric motor, it typically has one moving part: the spinning shaft with blades that runs down the middle. In theory, the advantages are small size, reliability, and efficiency. (I was terribly amused when I first learned that the M1A1 Abrams tank has a turbine engine. Not the sort of vehicle that you'd expect to be jet-powered. :) The downsides are noise and heat, but there's no reason to think those are insurmountable. Sure, the exhaust gases might melt asphalt if you're not careful, but that just means you need to be careful; maybe a heat exchanger where an IC car has its catalytic converter. Whatever.

There's a microturbine currently available that, with generator, is about 1.5 feet long and 6 inches in diameter, and generates 7.5kW. That's big enough to completely recharge the Leaf in about 3 hours, and at least some of those hours can be driving hours. 7.5kW isn't quite big enough to keep the Leaf moving continuously at freeway speeds, but if you turned it on at the beginning of your trip, it would have the effect of turning the existing 24kWh packs into 34kWh packs (assuming you spend 90 minutes driving 100 miles), and the effective range from 100 to 140 miles. With a higher-capacity turbine, you could run the car continuously, in effect using it as a gas-powered car, while still getting far better mpg than IC cars. The transmission & torque converter of an IC car is replaced by the generator, voltage controller, and electric motor of the electric car.
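The arithmetic in that paragraph is easy to check. Here's a quick Python sketch using the figures above (the 90-minute, 100-mile trip is the post's assumption); note the math actually comes out slightly better than the round numbers quoted.

```python
# Range-extender math: a 7.5 kW microturbine running throughout a
# 90-minute, 100-mile trip on a 24 kWh pack.
PACK_KWH = 24.0
TURBINE_KW = 7.5
BASE_RANGE_MILES = 100.0
TRIP_HOURS = 1.5

added_kwh = TURBINE_KW * TRIP_HOURS                        # 11.25 kWh
effective_kwh = PACK_KWH + added_kwh                       # 35.25 kWh
effective_range = BASE_RANGE_MILES * effective_kwh / PACK_KWH

print(f"Energy added en route: {added_kwh:.2f} kWh")
print(f"Effective pack size:   {effective_kwh:.2f} kWh")
print(f"Effective range:       {effective_range:.0f} miles")  # ~147
```

So "34 kWh and 140 miles" is, if anything, a conservative rounding of roughly 35 kWh and 147 miles.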

This is, by the way, exactly the system that trains have been using for decades now. The steam locomotive was replaced by the diesel locomotive, but diesels are actually diesel-electrics: the diesel motor drives a generator, and the wheels are turned by electric motors. They just don't carry a lot of batteries along or try to plug in every time they arrive at a station.

"Oh, you make it sound all super-spiffy, but if this is such a great idea, how come nobody's doing it?" Indeed. I think there are two reasons. First, as you might have noticed, auto manufacturers in general, like most really big companies, truly suck at getting radical new technologies out the door. They're so invested in the current system that they have enormous trouble committing to something new. I'm very impressed that a company as big as Honda managed to get the first hybrid into showrooms, although introducing a new car is such an incredibly capital-intensive process that there may have been previous attempts that I didn't even hear about. (Yes, Honda, not Toyota. Honda's Insight came out a year before the Prius, and significantly outperformed it as well. It was, and is, a superior hybrid. Unfortunately for Honda, it turned out that people would happily trade poorer performance for extra doors; the original Insight was a two-seater car, and the first Prius was a four-door sedan.)

GM, semi-famously, HAD what could have easily been the first successful mass-market electric car, but after distributing EV1s in California, had a corporate psychotic break and snatched them all back and crushed them. It took brash start-up Tesla to give the world some idea of just how good an all-electric car could be. In the same vein, it's going to be years before a major car manufacturer finally gets over the idea that you can't put a gas-powered engine in a car without connecting it to the wheels, and any company small enough to already have that clue in their closet probably doesn't have the capital to develop the car.

The other reason we don't have an electric car with fuel-assisted range extender is that, when it comes to turbine engines, bigger ones are *easier* to build. The smaller you make the turbine, the faster it has to spin in order to run well and the tighter the tolerances are for the various parts. A plane's jet engine might run around 10,000 rpm, but car-sized turbines have to spin in the neighborhood of 100,000 rpm.

Tricky engineering = expensive. If we were manufacturing as many microturbines as we do V-8s, I suspect they'd cost a lot less than the V-8, but we don't, and they don't. Capstone's C65 turbine is slightly larger than most car engines, generates 65kW (about 90hp), and costs $56,000. On the other hand, they actually built a sports car around their slightly smaller C30 turbine that could go 0-60 in 3.9 seconds, had a top speed of 150mph, and could go up to 500 miles before refueling. The exhaust from the turbine, by the way, met California's emissions standards without any further processing.

Until/unless some cities get serious about microtransit, most Americans are going to be getting around by car for a long time to come (self-driving or otherwise). Most 2nd cars will be electric, and most 1st cars will be hybrids. Mark my words, the turbine-assisted electric will be the winner in the long run.


April 23rd, 2014


05:44 pm - The sound of reality.
I try not to get too distracted by Quora, but some of the questions people ask are awfully intriguing. Many are asinine, of course, but those are easy to skip. Although I tried, I could not let a question I saw today pass without comment, in part because I felt none of the other 22 answers had done a very good job of it.

The question was: "Why does vinyl sound more 'real' than a CD?" Not surprisingly, more than a few people basically said "It doesn't," with one or two adding the equivalent of "...you idiot." to the answer. Somebody else said that, well, it was a scientific fact that vinyl was better, and went on to invoke the Nyquist limit, apparently blithely unaware of RIAA equalization.

Here was my reply:

This reminds me of when Babylon 5 first aired. It was the first science fiction television series to use CGI for the spacecraft, rather than motion-controlled cameras and miniature models. "I don't like their spaceships," a friend of mine said. "They don't look real."

"You mean they don't look like little plastic space ships."

Both CDs and vinyl involve huge compromises in terms of sound reproduction. Although the frequencies a CD cuts off (everything above about 22,000 Hz) are beyond the range of most humans' hearing, the consequences of that cutoff do extend downward into the audible range. Those effects can be reduced with careful design of the electronics, but generally you won't find that kind of care in equipment that costs less than a couple of thousand dollars.
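A one-liner, for the curious, showing where that ~22,000 Hz figure comes from: CD audio is sampled at 44,100 Hz, and the Nyquist theorem says a sampled signal can only represent frequencies below half the sample rate.

```python
# CD audio's frequency ceiling is half its 44,100 Hz sample rate.
CD_SAMPLE_RATE_HZ = 44_100
nyquist_hz = CD_SAMPLE_RATE_HZ / 2
print(f"CD frequency ceiling: {nyquist_hz:.0f} Hz")  # 22050 Hz
```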

Vinyl, on the other hand, is seriously handicapped at the low end. A loud bass drum would cause the groove on a record to have to move so far that it would cut into the next groove over, so low frequencies are dialed way down for pressing onto vinyl, and then the home amplifier runs them through a filter that reverses the effect.

I've performed with symphony orchestras, in marching bands, in large classical choirs and small jazz choirs. I don't think either CDs or vinyl sound especially 'real.' I think digital is much better than vinyl on really cheap equipment. On a system between, say, $400 and $2000 or so, I wouldn't be surprised if a $200 turntable and a really good record sounded more real than a CD in a $200 CD/DVD combo player.

However, one of the things that makes actual live music sound so great is the dynamic range. That amazing downbeat for "O Fortuna," the opening number of Carmina Burana? Wow. Then, just a few seconds later, the whisper of "semper crescis, aut decrescis..." Vinyl just flat out cannot do that. If you record it at a low enough level to keep the loud part from cutting into the neighboring grooves, the quiet is so quiet that it's almost impossible to stamp that subtle a wiggle into the vinyl. The CD audio standard has an amazing dynamic range. One of the first CDs I ever bought was a Telarc sampler that included an excerpt of the 1812 Overture, with real cannons. In the end, although the CD might have had the full dynamic range of that music, it didn't matter, because the stereo couldn't play it. When I was in a science club in high school, we bought a new stereo to play music during the lunch hour, with a receiver rated at 1000 watts per channel. Trying to play the 1812 caused it to shut down. In order to actually play the cannon blasts, the volume had to be turned down to where the symphonic part was just way too quiet.
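To put a number on "amazing": 16-bit CD audio has a theoretical dynamic range of about 96 dB. A small Python sketch; the formula is the standard 20·log10 of the number of quantization levels, and the 24-bit figure is included for comparison (it isn't from this entry).

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of linear PCM at the given bit depth."""
    return 20 * math.log10(2 ** bits)

print(f"16-bit (CD):     {dynamic_range_db(16):.1f} dB")  # ~96.3 dB
print(f"24-bit (hi-res): {dynamic_range_db(24):.1f} dB")  # ~144.5 dB
```

Commonly cited figures for vinyl land somewhere around 60-70 dB, which is exactly the gap this paragraph is describing.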

Then there's the fact that both CDs and vinyl record, at most, two channels of sound (yes, even if you have Dolby 5.1 playback, if it's coming from a compact disc or a record, it's reconstructed from two channels). "Real" music, if it involves more than two performers, doesn't just come from two places.

As an instrumentalist, I'm primarily a percussionist, and generally speaking, percussion instruments are the hardest to reproduce on recordings. To recreate a bass drum takes massive amounts of power; to get a cymbal or tambourine right requires extreme precision at the highest frequencies. I've heard a stereo play back a tambourine well enough to sound 'real' to me exactly twice in my life. The first time involved six-foot tall Magneplanar speakers that I think cost around $12,000 for the pair. I have no idea what the rest of the equipment attached to them cost, but it was probably between $20k and $30k. The other time a speaker actually sounded 'real' to me (with a drum set ride cymbal), it involved some JBL speakers with titanium tweeters, and once again, the whole stereo system was well over $20,000. (Both times, the sound source was a CD.)

So why does vinyl sound more 'real' TO YOU than compact discs? Could be any one of a number of reasons. It's what you've grown accustomed to, or you listen to music that suits vinyl better than CDs, or you've got a better turntable than CD player, or you're less sensitive to the audio compromises typical of vinyl vs those common to CDs.

Finally, please note that this entire essay was comparing compact disc digital audio and stamped vinyl records. On the analog side, 15-inches-per-second, 1"-wide two-track magnetic tape will totally outperform both of those, but 96 kHz sample-rate, 24-bit digital recording would crush them all in terms of fidelity. Even then, you still won't be able to hear what I hear when I'm playing with an orchestra. Musicians on all sides of me, with every instrument's sound unaltered by anything besides the air itself? That's real.

Enjoy and use whichever technology sounds better to you. As far as I'm concerned, they're both so far from 'real' that it doesn't really matter that much which one I listen to.


December 9th, 2013


01:10 pm - Smooth
I've been meaning to write this little essay for years. Why not today?

Some years ago, I surprised a friend of mine by telling him that one of the reasons I use an electric razor is because it shaves closer than the ubiquitous multi-blade manual kind. I think most people have no idea how good a quality electric is these days.

A few caveats, though. I've found (and myriad reviews and comments online seem to support this) that there is a much wider range of performance between various electrics (Remington, Norelco, Braun, Panasonic) than between different brands of manual ones (Gillette, Schick, etc.). That's also true of different models by the same manufacturer. If you try an electric razor that retails for less than $120, you're wasting your time (and if you go for one that's more than $300, you're probably wasting your money). Also, it can take up to a month for you and your face (or whatever you're shaving) to get used to the new way of shaving. It should be obvious that, the 'magic of technology' notwithstanding, the results you get after just one week using an electric shaver might not measure up to what you can do with a manual after years and years of practice; that's not necessarily the fault of the tool.

I do keep both kinds in my bathroom, because they have different strengths and weaknesses. But a few Halloweens ago, I was going to put makeup all over my head, so I shaved off everything but my eyebrows, and, as an experiment, used a brand-new Gillette three-bladed safety razor on the left side, and my Panasonic electric on the right. I had just finished when friends started coming over for the party, so I took my head over to them and had a few people check out the results, which were unanimous. The electric side was much smoother. The difference for my beard (cheeks and neck) was less pronounced, but still obvious.

Also, while the Gillette was a fresh-from-the-box blade head, my Panasonic's blades were, oh, probably four or five years old at that point, which leads to the other quality where electrics utterly kick butt over modern manuals: cost. How often do you change the blade on your manual, and what do you pay for them? If I did all my shaving with manuals, even if I shopped at Costco, I'd still be spending at least $100/year on blades, and probably a lot more than that.

Also, manual blades have a sloped failure curve. Every day, they work almost as well as the day before. "It seems pretty dull. A new blade would feel a lot better, but geez, they're expensive! What to do. . . ." My electric shaver says I'm supposed to change the blade and foil head every six months. If I did, it would still be cheaper than manuals after two years, but in point of fact, I didn't change the replaceable parts until the foil finally wore through some five to seven years after I bought it, and the new parts cost $40. All the electrics but the Norelco rotary kind use a thin perforated sheet of metal to keep the cutter blades from cutting you, and I've found that, until the day the foil actually wears through and has a hole in it, the cutters work pretty much as well as they did when they were new. So, after a decade, total cost of ownership for electric, about $200. Estimated equivalent cost of manual: $1000–$1500.
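The decade-long comparison works out roughly like this (a sketch using this entry's own figures; the ~$160 shaver purchase price is my assumption, chosen so that shaver plus $40 in replacement parts lands on the ~$200 total):

```python
YEARS = 10

# Electric: one shaver (assumed ~$160, within the $120-$300 sweet spot
# mentioned earlier) plus one $40 foil/blade replacement in the decade.
electric_total = 160 + 40                      # ~$200

# Manual: the conservative $100/year in blades, plus a higher estimate
# for the upper bound.
manual_low = 100 * YEARS                       # $1,000
manual_high = 150 * YEARS                      # $1,500

print(f"Electric, 10 years: ${electric_total}")
print(f"Manual,   10 years: ${manual_low}-${manual_high}")
```

Roughly a 5x to 7x difference over ten years, which is why the cost argument alone would be enough for some people.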

Speaking of that perforated foil leads to the third way that I find electric shavers dramatically superior to manuals: safety. No matter how much pressure I use, or how I hold it, or which direction I move it, the only times my electric has ever drawn blood have been when the foil failed. (Well, almost. It did bite me once when being used on a more intimate part of my person, mostly because I had grown over-confident. However, had I used a manual razor as cavalierly on the same skin, I would undoubtedly have been in danger of expiring from blood loss.)

It's possible that the reason the electric does a better job shaving is that I can move it in any direction. I can get a smoother shave with the manual if I shave against the direction of hair growth, but it's only smoother for an hour or so. After that, razor burn raises red welts. Using a blade 'backwards' drives the cut ends of the hair into the skin a short distance, and some of those ends then catch the edge of the follicle and irritate the skin. I've never had a problem with razor burn from the electric.

Both manuals and electrics work best (in my experience) on shorter hair; generally hair that has been shaved no more than a few days ago. The manual tends to cut maybe a quarter-inch track through longer hairs (say, on my head if I haven't shaved it for a couple weeks), whereas the electric is all but useless. Also, the manual's better for areas where there are only a few hairs (long or short), like the back of my neck.

That's because of a characteristic difference in the way they get the job done. According to my watch, it takes me about the same amount of time to shave my beard with either one, but it feels like it takes longer with the electric, because I have to go over the same spot four or five times. Each pass of the electric gets some of the hair, but not all. The manual will usually cut a more or less clear swath with each pass. However, then I have to drop it into the sink and thrash it about to clear the whiskers from the blades for the next pass. With the electric, I generally don't have to stop until I'm done, so there's a lot more time with the shaver on my face, but much less rinsing it out.

By the way, it appears that the best electric shaver manufacturer today is the same as it was over a decade ago when I bought my current shaver: Braun. Nevertheless, I purchased what was (and still is) the #2 shaver: Panasonic, because then, as now, Braun didn't make a wet/dry shaver. Electric or manual, hair is easier to cut when it has absorbed water, so the best time to shave is during or after a shower. If I were just shaving my beard, I might have gone for the Braun, but because I also wanted it for my head, being able to shave in the shower or tub made the cleanup afterwards much easier. Thus, the Panasonic.

Although it's probably as bad an idea as talking on your cell phone, an electric shaver does let you shave while driving, if you're really running late, because an electric shaver works quite well without any skin prep at all. This is particularly handy for quick touch-ups. I have used a manual directly on my skin a few times. If there are just a few hairs, or they're thin, and the blade is really sharp, it's not too bad. Otherwise, shaving cream is a requirement, or else Ow!

I'll use shaving cream with the electric, too, if I'm doing the whole beard. Not only will it help soften the whiskers, but it also helps alleviate one of the negatives about the electric: heat. By the time I've run it around my face enough to finish the job, the friction of the blades on the foil will have warmed up the foil quite a bit. Shaving cream and/or dipping the cutter head into water now and then fixes that problem.

The one factor where I find the manual clearly superior is noise. The rotary Norelco shavers are generally a lot quieter than the linear oscillating blades of the others, but it's still a loud high-pitched buzzing gizmo that you may be putting right next to your ear now and then.

The other category where the manual is better is that, with the exception of the amusing new vibrating manual razors, you don't have to worry about their batteries running down. I haven't found that to be all that big a problem with the Panasonic, though. A full charge, even today, is still enough to shave my face and head at least twice, if not three times, or just my beard for at least a week, so I only bring its power cord along on trips longer than a week.

So:
-Closer shave? Electric.
-Cheaper? Electric.
-Safer? Electric.
-Faster? Tie.
-Better for thin areas? Manual.
-Better for longer hairs? Manual, by a nose. Clippers are the real answer.
-In wet/shower? Tie.
-Outside the bathroom? Electric.
-Noise? Manual.

For being clearly superior in the first three categories, the trophy goes to the electric shaver.

Just for the record, I did not receive any consideration or compensation from anybody for this commentary.

(Leave a comment)

October 30th, 2013


01:43 pm - Captain Underpants
If you are, or live with, a male of the human race, you may or may not have noted a curious (well, I think it's curious) characteristic of men's underwear, specifically of briefs or boxer briefs. Ignore boxers: they're too loose to be relevant. But briefs, also known as 'jockeys' or 'tighty-whities,' and boxer briefs (similar, but with an inch or two of inseam), as manufactured by, oh, say, Hanes or Fruit of the Loom or Sears, do something very strange.

They lay flat.

I admit, as with so many other things in the world, I took this for granted for many years before one day thinking "What the heck?"

I have had a couple of different female friends over the years discuss bras with me; for example, I've learned that many women are slightly asymmetric, and wouldn't it be wonderful if you could get a bra that was one size on the left but a slightly different size on the right? I can only imagine the laughter and ridicule I would have received if I'd tried to convince them that what they really needed was a bra that lay flat on a table.

Now, there are manufacturers that make men's underwear that does not lay flat. I bought a pair (and why do we call one brief "a pair of underwear??") a couple of decades ago. I didn't buy them for their three-dimensional nature, but once I'd tried them on, it was suddenly obvious to me why they should be.

I think, though, that what I like best about the question "Why do men's underpants lay flat?" is that it actually springboards in many different directions.

Anatomy: After all, although both women's breasts and men's genitals present as curved surfaces, they're not the same curve, they're not the same mass, and there are structural differences. Perhaps it happens that if there were a male equivalent of 'cup size,' it would turn out that most men tend to be "A cups." Maybe they're squishier. Maybe many other things, some not necessarily appropriate for polite conversation.

There's also apparently a difference in what the garment wearer will tolerate. I know from experience that one of the consequences of flat underwear is the opportunity for 'wardrobe malfunction,' although since it happens inside pants, it usually just requires some discreet adjustments to put things right. However, boxers are the most popular style of underwear, and they are, in effect, one ongoing wardrobe malfunction, inasmuch as they don't even provide the support that briefs do (as long as everything stays properly inside the briefs, anyway).

Engineering: Fabric does stretch, after all. Maybe the available stretch in underpants is adequate to reshape to an appropriate curve for men, where it would not be for a breast.

Fashion/Design: Not high fashion, but just the process of creating clothing for humans. There are many companies making more anatomically-conformant underwear. So why would the mainstream manufacturers so rarely follow suit? Expense? Market demand? And why, oh why, are there so many racks of tacky boxers for sale? "Metrosexuals" notwithstanding, is the typical male really that fashion-backward? {sounds of retching}

Culture: How does it come about that I've had more conversations with women about the practical considerations of the fit of a bra than I've had with men about the fit of underwear? And I'm fairly confident that I'm not unusual in that way. I'm sure there are many many men who've never discussed either subject, but I really doubt more than a tiny fraction have the opposite ratio.

Then there's the related issue about men and their neuroses related to endowment. On more than one occasion, I've considered raising this topic, but refrained because I was worried that it would come across as, well, bragging. Will my co-conversationalists, if they haven't already thought about this topic, think that I have this opinion just because I was more observant or thoughtful about this particular topic, or because it's a "bigger" issue for me? Never mind why it should even matter. There are some strange differences in the likely responses to a woman's stating "it's hard to find a comfortable bra because my breasts are just too large" and a man saying (or even implying) "it's hard to find comfortable underwear because my penis is just too big."

Gay Culture: Most of the non-flat underwear I'm aware of is marketed primarily to gays. Why? Yes, yes, I can think of two glaringly obvious reasons right off the bat; 'real men' (i.e. the classic macho straight guy) would be embarrassed near to death to talk about underwear; and, just like the bras at Victoria's Secret, some of the more dimensional items are intended specifically to enhance the wearer's sex appeal. Generally (although not exclusively!) I've noticed a bulging crotch is more likely to turn the head of a gay man than a straight woman. But are those the only reasons? I think not...

So. A simple question with, IMHO, many intriguing ramifications. Assuming, that is, that you're not somebody too embarrassed to even think about the subject. :)




Huh. LiveJournal has an "adult content" flag. My choices are "none" and "explicit". I would like to know why "implicit" isn't an option, thank you very much. And why isn't "innuendo" a choice? Huh? Huh?

(11 comments | Leave a comment)

July 12th, 2013


12:54 pm - Geezerology
I've surprised myself a little with how I've felt about gray hair as I've gotten older. But then, the gray hairs have surprised me with how capricious they've been. My beard started going gray first, and there's a lot of gray there. Frankly, I didn't like it. A salt-and-pepper chin but my original very dark brown on the sides of my head looked rather silly to me. The top of my head doesn't have much hair left, but the fact that there are a few still hanging on is even less esthetically acceptable to me. Stop looking straggly and pathetic! Give up! Let go! But I digress.

Now my temples are going gray, and I'm fine with that. It looks fairly elegant and distinguished to me. Also, I'm old enough that I figure I probably ought to have gray hair by now anyway. Which of course means I'm way past due, because how many of us ever think we should have started getting gray hair when we started getting gray hair? Hardly surprising, though. When I think about "people with gray hair," my brain is more or less sampling everybody I know, and comes back with an average age of fifty to sixty or so. I point out to it that it should only include people who are starting to turn gray, and it still gives me forty-five to fifty.

Doing a bit of research for this post, by the way, turned up this rather fascinating report (check me out, citing the original research, who's da dude? Me!) which I found through an article in a British newspaper: "The researchers set out to test a widely-accepted “rule of thumb” in the cosmetics industry, that by the age of 50, 50 per cent of the population had at least 50 per cent grey hair. In fact, the new study found that less than a quarter of those taking part had that much grey hair at that age. In many parts of the world, it was a substantially lower proportion."

Now, it's hardly fair to compare when you spot your own first gray hair to the age your brain hands you for when it notices other people with gray hair. First of all, we're probably going to notice our own much earlier than somebody else's. We're probably looking a lot more closely at our own, and we tend to be more critical of our own appearance. On top of that, because the typical human thought patterns are already tricking many of us into thinking we're going gray earlier than we "ought to," many of us dye it to hide the gray, thus skewing the perceived age related to gray hair upwards even further.

My gray hairs started appearing in my late 30's, which apparently was just a little bit behind the average for Caucasian men, and for the most part, the hair follicles seem to be switching over to gray fairly slowly. At the rate I'm going, I probably won't be predominantly gray on my head until I'm in my mid to late 60's.

So here I am, pretty much copacetic with gray hairs showing up on my head right by my ears. Then one day I find a gray hair on my chest. This should not have surprised me in the slightest, but I found myself quite annoyed! It's particularly silly since I don't even particularly like my chest hair, so if it wants to camouflage itself against my pale epidermis, I ought to be glad. But no, I was offended, and promptly plucked the dang thing. Then I had pretty much the same reaction when I found one in one of my eyebrows. "I don't think so!" {poit!}

I mentioned this a few months ago to some friends, and one of them agreed that some gray hairs were more disturbing/offensive than others, and managed to rather gracefully imply that the ones that had bothered them the most were ironically located where they were very unlikely to be seen. I'd had the same reaction myself, but I was at a loss as to how to say that in polite company, so I'd waited to raise the topic until my eyebrow provided a more genteel example.

So I know it's not just me, and yet, how ridiculous is that? I should be pleased if the hairs that are normally covered by clothing turn gray, on the grounds that I'm probably going to go gray at a certain rate, but yea, let's keep the most visible hair dark and put the gray hair where it doesn't show.

Now I have not really tried to seriously manage gray hairs by pulling them out. I do pluck them out now and then, but I know that the end result is either giving up in exhaustion as the rate increases, or looking as if I have mange. Still, what I've found from the few I have pulled is rather perplexing. That eyebrow hair had a white tip. The majority of the pubic hairs have been white at the end, and dark at the base. The melanocytes decide to retire and let the hair grow in white, and then, what, Moe comes along and slaps them, and they get back to work?

The Interwebs were less than helpful for this. Lots of people report hair that shows spontaneous repigmentation, with nearly as many 'helpful' respondents claiming it must be due to diet, or sun bleaching, or whatever, because "real" gray hair doesn't change back. PubMed wasn't very helpful, either. I turned up just one clear reference: "Indeed, it is not too uncommon to see spontaneous repigmentation along the same individual hair shaft in early canities." ('Graying: gerontobiology of the hair follicle pigmentary unit.' Exp Gerontol. 2001 Jan;36(1):29-54.) All remaining search results were related to vitiligo, not to age-related pigment changes. BTW, "canities" means "grayness or whiteness of the hair."

The hair cycle, as you might know, is that a follicle spends some amount of time manufacturing a hair, then takes a few weeks (or months) off. When the rest period is over, it releases the hair and begins growing a new one. Arm hairs (for example) have a short growth phase and a long rest phase, which is why they're so much shorter than head hair.

So here I am, finding hair follicles that shut down pigment production, and then start it back up again partway through the job. I would love to know what that follicle does with the next hair that it grows. Does it stay white next time? Is it back to brown? Does it restart partway through, but maybe later?

I have no doubt that there are commercial research labs working on how to restart hair follicle melanocyte activity, but we probably won't hear much about it until they've got some results.

In the mean time, I guess I'll just have to deal with my own hair follicles trying to freak me out. Ah, Mistress Biology, you are a wacky thing indeed.
Current Mood: Amused

(1 comment | Leave a comment)

July 10th, 2013


08:13 pm - Portland Mass Transit: Is it really that good?
A friend of mine from Portland was recently complaining about the Seattle mass transit system. There are a bunch of different ones here in Seattle, and you have to pay for each segment! Pierce, Community Transit, Metro, Sound; what a mess. In Portland, it's all tied together. You can just pay once and not worry about transfers.

Yea, whatever. It's not the first time I've had somebody extol the virtues of the Portland transit system. I can't say I'm in love with Seattle's mass transit system, but I'm not very impressed with Portland's, either. But this time I was near my computer, so I decided to rustle up some facts to see how they compare, because I rather suspected the reason Portland's seems so much easier to use is because there just isn't very much of it.

Yup.

First of all, I'm just going to compare the Seattle city transit system, "Metro," to the Portland system, "TriMet." They both handled about 110 million passenger trips last year. With TriMet, 54 million of those were bus trips, the rest were commuter rail. Metro only has busses. In fact, Metro has 220 bus routes and 1600 vehicles. TriMet has 79 routes and approximately 600 busses, plus the 4 rail lines.
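To put those raw counts on a common footing, here's a quick sketch of how the bus figures shake out per route. The numbers are the approximations quoted above, not official agency statistics:

```python
# Average annual bus trips per route, using the rough figures from
# the text: Metro ~110M bus trips over 220 routes; TriMet ~54M bus
# trips over 79 routes. Treat these as ballpark numbers only.

def trips_per(annual_trips, units):
    """Average annual passenger trips per route (or per vehicle)."""
    return annual_trips / units

metro_per_route = trips_per(110_000_000, 220)   # Seattle Metro, bus only
trimet_per_route = trips_per(54_000_000, 79)    # Portland TriMet, bus only

print(f"Metro:  {metro_per_route:,.0f} bus trips per route per year")
print(f"TriMet: {trimet_per_route:,.0f} bus trips per route per year")
```

Interestingly, by this crude measure TriMet's bus routes actually carry more riders apiece, which fits the high-usage point below; the difference is that Metro is running nearly three times as many routes.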

The "greater Portland metropolitan area" (Hillsboro, Portland, Vancouver WA, and the like) is actually bigger than the "greater Seattle metropolitan area" (Seattle, Tacoma, Bellevue, Woodinville, Issaquah), 6,600 sq. miles vs 5,400 sq. miles, but I can't easily figure out how much of the Portland area actually has usable mass transit service. With about 1/3rd the routes but more area, it seems improbable that the coverage is as good, but maybe they're long routes that cover a lot of territory. {shrug}

Now, keep in mind my friend was whining about how complicated it is to get to Tacoma or Everett, where you start on a Metro bus, then transfer to a Sound Transit bus, and then go to a Pierce Transit or Community Transit bus. Quite frankly, I'm too lazy to hit the Pierce, Sound, Community, and state ferry web sites to add in their route count and resources. It seems rather unnecessary. For Portland, TriMet is the *only* transit system. The intra-Metro fare transfer system is just as good as the TriMet one. But TriMet doesn't have any other major systems to interface with, so no wonder that it seems easier.

Particularly annoying is that my unhappy friend even used to have an ORCA card, but seemed to think bitching that it was going to cost $7.75 to get to Tacoma via cash fares was better than coughing up the $5 to buy a card (available at dozens of locations, including vending machines in the bus tunnel stations, and many local retailers). The ORCA card provides free transfers between the different transit systems.

Portland's system does have remarkably high usage rates, and I am happy that there is a transit system that's really being used. According to TriMet, 26% of evening commuters are using mass transit. Wow. They also point out that "More people ride TriMet than transit systems in larger cities, such as Dallas, Denver and San Diego." Seattle's not on that list because Metro had about 10 million more, I believe. On the other hand, the Seattle area has 80% more people, and is almost twice as dense, so you'd think they could do better than that. Maybe when the light rail line is extended north, they will. {shrug}

If you want to tell me that you like Portland's transit system, go right ahead. Just don't follow it up with a complaint about Seattle's. Could Seattle have had a system as well-liked as Portland's, had they done things differently? Beats me. *Could* they have done things differently, which is to say, was it ever politically possible to get the right amount of money spent on the right things to have made something else? There's no way to know. But it's still clear that the Portland system is just a heck of a lot smaller than the Seattle one, and that makes the job a lot easier. Walla Walla has 9 routes, and the standard fare is 50 cents. I'm sure it's much easier to figure out routes there, and what a bargain! Is it better?

Apples and oranges. Apples and oranges.

(3 comments | Leave a comment)

June 18th, 2013


10:55 am - But I don't WANT to be innovative!
Yesterday's housechilling party was all kinds of fun (yay!). I wonder if you can tell how successful a party was by how much cleanup you have to do afterwards? Probably not, but one of the things I had to deal with was the remainder of the enormous ripe casaba that Nick brought. My first thought was "sorbet!" but after I pureed some of it and gave it a taste, I don't think that will work very well. Like watermelon, the casaba has a very delicate flavor; it's quite watery. I think it will either freeze up solid or require way too much sugar, just like watermelon sorbet.

The solution with watermelon sorbet, by the way, is to add a teaspoon of ethanol, or two teaspoons of, say, vodka. The ethanol does what the sugar does: it's an antifreeze. The ethanol's just a lot more powerful than the sugar.
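If you're curious just how much more powerful, a back-of-the-envelope sketch using the ideal freezing-point-depression formula (ΔT = Kf × molality) makes it plain. This is idealized textbook chemistry, not a sorbet recipe, and it ignores everything else dissolved in the puree:

```python
# Gram for gram, why ethanol beats sucrose as antifreeze: the freezing-
# point drop scales with moles of dissolved solute, and an ethanol
# molecule is far lighter than a sucrose molecule.

KF_WATER = 1.86  # cryoscopic constant of water, in deg C per (mol/kg)

def depression_per_gram(molar_mass_g):
    """Freezing-point drop from 1 g of solute in 1 kg of water, in deg C."""
    moles = 1.0 / molar_mass_g
    return KF_WATER * moles

ethanol = depression_per_gram(46.07)    # ethanol, C2H5OH
sucrose = depression_per_gram(342.30)   # table sugar, C12H22O11

print(ethanol / sucrose)  # ethanol is ~7x more effective, gram for gram
```

(Ethanol also disrupts ice crystal formation in other ways, so in practice the gap is even bigger than the simple ratio suggests.)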

However, I don't have any Everclear in the pantry right now, so I went to Plan B: jelly or jam.

The canonical home canning text, the Ball Blue Book Of Preserving, does not have any recipes for melon jelly. WTH? (heck)

To make a successful jam or jelly, you need to get the pectin/sugar/water ratio right, and that varies from fruit to fruit. I need some kind of melon jelly recipe as a guide. Casaba is closely related to honeydew, but probably even a watermelon jelly recipe would work.

Apparently the concept of melon jelly is just mind-blowingly radical. There is absolutely nothing on the internet for casaba jelly. The only honeydew jelly recipe I found was in a blog by somebody who just made it up on the fly, and hers didn't set up. I'm guessing it's because she just swapped honeydew for the plums she used the very first time she made jam, and plums, being quite high in natural pectin, don't need any additional in order to set up.

I did find a watermelon jelly recipe from Marisa McClellan on a blog entitled "Food in Jars." This is a blogger who specializes in home canning, and yet she posts that watermelon is something "I would [not] have even considered putting in my jam pot" except that somebody asked her for a watermelon jelly recipe. This lack of imagination became even more startling when I found Apronstrings Blog sharing a "Honeydew Jam with Mint and Lime" recipe that was adapted from "Cantaloupe Jam with Mint and Lime," which the blogger found . . . in the "Food In Jars" book. Yes, the book was written by Marisa McClellan. So she's made cantaloupe jam, but would never have thought to make watermelon jelly?

Yea, I know I'm unusually creative. But good grief! How hard can it be to say "I have more [name any fruit you can imagine] than I can eat all at one sitting. What will I do with it?" and answer with something other than "Wrap it in plastic and put it in the refrigerator." And yet, if Google's results are to be believed, rarely has anybody (or at least, anybody who shares recipes online), even thought about trying to make watermelon jelly, and nobody has ever tried it with casaba.

Bizarre.




I did find some related stuff that was pretty awesome, though. Marisa has obviously done a lot more than just parrot back instructions from other people. The overwhelming majority of home canners work via the "because they said so" principle. "You use that much sugar because the recipe says so." And it's extraordinarily rare that somebody *writing* a recipe knows enough about the science to tell you *why* a particular step is there. As a result, most people think that the sugar used in home canning fruit is there for flavor, and thus that it's no big deal to cut back if you think adding, say, five cups of sugar to six cups of fruit is excessive.

The "Food in Jars" blog is smarter than that. I already knew that sugar is critical for shelf life. As she notes, it's a preservative. It is aggressively hygroscopic, like salt. It's why you can store maple syrup in the cupboard without it going bad; the sugar concentration is really high. Bacteria and mold can't grow in the syrup because the water is sucked right out of them by the sugar; they actually get dehydrated.

What I *didn't* know is how it works to help jam set. I was aware that adding sugar changes the boiling point of water. Turns out that 220° F is the temperature at which sugar and pectin bond. Not enough sugar means you can't get the mix hot enough to trigger the bonding. Neat!

And, from "Local Kitchen" I come across the (in retrospect, terribly obvious) idea (also set forth by Marisa) that if you are going to experiment with making preserves out of fruits that you don't have an Officially Sanctioned recipe for, that you should test the pH. What water-bath canning doesn't kill is botulism spores. However, they are prevented from growing in a high-acid environment, where "high-acid" is pH 4.6 or below. The "Local Kitchen" blogger made honeydew melon jam (with forsythia and citrus!), and was ready to can when they remembered to test. The pH was around 6.0, so they froze it instead.

Since I do happen to have appropriate litmus paper handy, I can and shall do the appropriate testing. I found a comment on another site that said that one should use an electronic tester, because litmus paper wasn't accurate enough, but I am fairly confident that such a statement only makes sense if we're talking about the original wide-range stuff, that does its color-change from around 4 to 10 or so. I have some that reads 4-7, and then different paper for 6-11.
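The acidity rule itself is simple enough to express as a one-liner. Here's a minimal sketch; the 4.6 cutoff is the conventional high-acid threshold mentioned above:

```python
# The high-acid rule for water-bath canning, as described above:
# botulism spores survive water-bath processing, but they can't grow
# in an environment at or below pH 4.6.

SAFE_PH_LIMIT = 4.6

def water_bath_safe(ph: float) -> bool:
    """True if the measured pH is acidic enough to inhibit botulism growth."""
    return ph <= SAFE_PH_LIMIT

# The honeydew jam mentioned above measured around pH 6.0,
# so it went in the freezer instead of the canner:
print(water_bath_safe(6.0))
print(water_bath_safe(3.2))  # typical for citrus-heavy preserves
```

(The usual caveat applies: when a batch fails the test, the fallback is freezing or pressure canning, not fudging the measurement.)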

(3 comments | Leave a comment)

June 11th, 2013


02:57 pm - Dear Discover Magazine . . . .
I just sent off my very first letter to a magazine. It was a minor issue, but it was about the English language, so I couldn't help myself. . . .




Dear Editors:

Just finished reading my most recent "Discover" (really great issue, by the way), and now I'm writing my very first letter to a magazine.

Bill Andrews, in his "20 Things You Didn't Know About Gravity," item #4, says "Passengers on amusement park rides and the International Space Station experience microgravity—incorrectly known as zero gravity—because they fall at the same speed as the vehicles".

Sorry, Bill, but you're the one who's incorrect. You've made the common mistake of confusing English and Math. The English language phrase "zero gravity" is not in any way a replacement for "0g". English routinely rounds numerical amounts. "My commute was great! There was no traffic!" doesn't mean there was not a single car on the road; it means there were so few cars that they had no effect. If I want you to understand that the freeway was truly devoid of any other vehicles, I have to say "There was literally not another car on the road!" The editors at Merriam-Webster are on top of that distinction, since they have documented the commonly held definition of "zero gravity" as "lacking apparent gravitational pull" (emphasis mine).

If I'm riding a roller coaster or the Vomit Comet, and I'm in a situation where the various forces (gravitational and otherwise) cancel out to the point that I can't detect them without instruments, then I am, by definition, experiencing what most English speakers call "zero gravity." And, unlike the language of mathematics, meaning in English (and most other common general-purpose languages), is decided (in effect) by popular vote.

I've worked as an editor for many years myself, and I don't always *like* that; if I were the King of English, I would immediately ban "utilize" since "use" is an entirely superior and 100% compatible replacement. But I'm not, and I can't. And what people experience on the International Space Station fits the definition of the common English phrase "zero gravity." Sorry.

(Yes, I know that English as used in professional journals often assigns different definitions to things, and using the phrase "zero gravity" for a paper in Aerospace Science and Technology might be rather inappropriate, and even incorrect. But this is Discover, not (shudder) Scientific American.)
Current Mood: informative

(Leave a comment)

November 25th, 2012


01:52 pm - A New Appreciation For Gardening
I've just finished, er, something. "...reading a graphic novel?" No. "...playing a game?" Kind of. "...having an experience?" Closer. "...experiencing Botanicula." That'll have to do.

Botanicula is a computer program/game/graphic novel from Amanita Design, which came to me via the Humble Bundle program, and it's a stunning example of what's possible when you really take advantage of what a computer can bring to the art of telling a story.

First, though, a brief recap of history, starting with the Choose Your Own Adventure books. Literature with hyperlinks in the dead tree medium. Although illustrated, they were still fundamentally a text-based story. Much less well known are Infocom's computerized comic books of the time. They released three titles that were a cross between a traditional comic book and a computer game, where the story line was presented with different perspectives and one could switch back and forth between them. The release of HyperCard for the Macintosh helped spur computer-based interactive fiction. In fact, while researching this post, I discovered that Myst was originally written in HyperCard.

I think these days there are some parallel-ish developments. Or maybe they're converging. It's hard to say. I've seen very little contemporary hypertext literature. I think that is, in part, because it's just so hard to do. A writer already has monumental challenges just making a classic linear narrative compelling and engaging. If the reader is allowed to wander around the text, experiencing things in (at best) a semi-predictable order, it's so much harder to get the story's events and characters to set up the story arc in a way that will leave the reader satisfied.

Then there's "interactive fiction," or IF. I am not really satisfied with the current trends in IF because the term is used primarily for what most people think of as "text adventures," of which the original was Colossal Cave, and the most famous are Infocom's Zork games. There is innovative work being done in this area, but it is still very much focused on working through the command-line interface, where a 'reader' will type things like "go west," "pick up the letter," and the like. There are many potential ways to interact with fiction other than typing words at a prompt, but what I feel are some of the most innovative and promising approaches aren't really encompassed by the IF community as such.

In fact, most of the best "interactive fiction" I've seen in the past few years has caught me by surprise because the work presented itself as a computer game. Alas, this belief that a computer program designed to entertain must by definition be a game is, I think, the cause of the major flaws in the works I've enjoyed. To illustrate why, I need to tell you about my encounter with "Millennium."

In the late 1980's, my computer of choice was the late, lamented, way-ahead-of-its-time Amiga, and, like many computer owners of the time, there was a certain percentage of the programs in my floppy case that were illegal copies. I wasn't as avid as some of my friends, but there were certainly a couple of occasions when I got together with a fellow Amiga owner and we'd spend time running some kind of copy-protection-bypass program and swapping software.

During one of these sessions, I acquired a program called "Millennium." Since my friend didn't have the original box it came in, I didn't know anything about it. So, when I popped it into the drive and launched it, the initial screen was pretty baffling: a picture that was fairly obviously some kind of moon base. Most of the systems were deactivated. I eventually found a button that activated a power plant, which provided enough power to activate the computer, which . . .

It was obviously a "game," but for me, it really wasn't much of a challenge. With very few exceptions, the next action I should take seemed pretty obvious. What I didn't know was why I was managing a moon base in the first place. The backstory unfolded piece by piece while I took care of my various tasks: the fact that something catastrophic had happened on Earth (but what?), and that there were other colonies in the system (somewhere? friendly?).

When I finally "won the game," I felt like I'd been watching a movie or reading a book, more than playing a game, and I loved it! I strongly suspected that, had I bought the game in the store, a lot of the backstory would have been plastered all over the box, and I was very glad I hadn't seen it, because the gradual unfolding of the context was, for me, the best part.

Now, as it happens, if you want to retrace my journey, you probably can, because it wasn't hard for me to dig up the details of that mysterious game. Its full name was "Millennium 2.2," released for the Atari ST and Amiga, and for DOS as "Millennium: Return to Earth." There's a Wikipedia entry, which you should avoid, since it pretty much spoils the entire storyline in the first paragraph. Even better, they re-released the game in 2006 (for, I must assume, Windows, since the download contains only an *.exe file, although the web page totally neglects to say what system is required). Again, avoid the "readme.txt" file if you want to experience it as a movie/novel; the designers clearly see it as a game, so most of the context is spelled out in a few paragraphs right at the beginning of the file.

"Botanicula", delightfully, didn't try to tell me what it was about. Well, okay, when you start, there's a 'cinematic' sequence that sets up the story line, but mostly it's a game of discovery. It's entirely wordless; the story is told through the animation and the sounds made by the objects in the game. It reminded me of the movie Yellow Submarine and the animated series Samurai Jack on more than one occasion. It's surreal and beautiful and unexpected. It has a sonic background/soundtrack that adds immensely to the experience.

I only wish it weren't as much of a game as it is. "It's a game! There must be puzzles! The player must solve the puzzles!" I eventually got tired of the story grinding to a halt because I couldn't figure out what the **** I was supposed to do, and found a walk-through. Still, "Botanicula" is, for me, hugely better than many other similar games I've played because it seemed less obsessed with forcing me to be clever than many of them. You still need to be familiar with this genre, and know that you will spend a fair amount of time waving your mouse pointer around the screen looking for objects that change the pointer from the arrow to the hand. But Botanicula's art and design do a really good job of cueing those 'hot spots.' Each screen tends to be fairly minimal, and the objects you are likely to interact with are clear and sharp; objects that are decorative are usually slightly out of focus.

Arrows appear readily to show you where the exits from each screen are. Sound effects usually made it clear whether I'd started a good sequence of events, found the right button but lacked the item needed to complete it, or just discovered an amusing toy added for fun. Generally, I was led, gently but firmly, from chapter to chapter of the story.

It's rather the antithesis of a first person shooter, in that most of it happens at an easy, measured pace. Things float, drift, flutter, flex, bend, and sway much more than they snap, pop, explode, flash, or blink. Everything's small, and cute, except for the bad things, which are deliciously scary, rather than horrifyingly evil.

Except for the worm and star. There's a point near the middle of the game where the job is to move a star through a maze before a worm catches up to it, and this bit resembles nothing so much as those wooden marble maze games where you have to tilt the board to get the ball through the maze without dropping it into a hole. I was infuriated when I hit that stupid thing. Maybe if I'd had a mouse with the sensitivity dialed down, or a good trackball, it would have been okay, but with the trackpad on my laptop, mastering that stupid maze to get the star through fast enough was going to take me hours of trying over and over again, and there was no way I was going to waste that much of my life just to finish this story.

It turned out that there was a trick. It wasn't in the walk-through I had, but rather in Amanita's discussion group, and it's so non-obvious that the person who posted it speculated that it might have been an unintentional bug. There's a loop near the beginning of the maze, and if you take the star around the loop and back to the start, the worm gets confused and heads toward the end of the maze. You can then follow along behind it. When it gets to the end, it turns around and goes back to the start, so you have to move the star into a side branch as it passes, and you still have to complete the maze before the worm gets back to the beginning or the star just resets. Even so, it's vastly easier to complete the maze in the time it takes the worm to traverse the entire maze forwards and backwards than in the time it takes for it to simply catch up to you.

I think this is the key difference between thinking of something like "Botanicula" as a work of interactive fiction vs. a game. If it's a game, then you're going to think in terms of "winning" and "losing." If it's a story, then you should plan to make sure that every reader will be able to finish it. A writer doesn't want a reader to feel like they've "lost" when reading a novel. "Yeah, I couldn't finish it, so I gave up and read something else." Yikes.

I can't really fault Amanita, though, because they clearly think of it as a game. And yet, it is so close to working as a brilliant and innovative interactive graphic novel. All it would need is some mechanism(s) for graceful challenge decay. If somebody spends more than a certain amount of time wandering back and forth without obtaining a token or entering new territory or whatnot, then some subtle hinting might appear. Make the little creature start peeking out or rustling the leaf it's hiding behind. Have an arrow appear briefly to suggest a direction to travel. Have one of the characters pop up a thought balloon suggesting a course of action. If I'm wandering around stuck, I wouldn't mind just pacing back and forth trying random crap if I knew that eventually I'd either find what I was missing or the game would give me a clue.
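To make the idea concrete, here's a minimal sketch of what such a "graceful challenge decay" mechanism might look like. This is entirely my own invention, not Amanita's code; the hint names and timings are hypothetical. The idea is just: track the time since the player last made progress, and escalate from subtle to explicit hints the longer they stay stuck.

```python
import time

class HintManager:
    """Sketch of graceful challenge decay: the longer a player goes
    without making progress, the more explicit the hints become.
    Hint names and the escalation ladder are purely illustrative."""

    # mildest hint first, most explicit last (hypothetical names)
    HINTS = ["rustle_leaf", "flash_arrow", "thought_balloon"]

    def __init__(self, patience=60.0, now=time.monotonic):
        self.patience = patience        # seconds stuck before each new hint level
        self.now = now                  # injectable clock, handy for testing
        self.last_progress = now()
        self.level = 0

    def record_progress(self):
        """Call whenever the player obtains a token or enters new territory."""
        self.last_progress = self.now()
        self.level = 0                  # player is unstuck; reset escalation

    def pending_hint(self):
        """Return the next hint to display, or None if the player
        hasn't been stuck long enough to earn one."""
        stuck_for = self.now() - self.last_progress
        if (self.level < len(self.HINTS)
                and stuck_for >= self.patience * (self.level + 1)):
            hint = self.HINTS[self.level]
            self.level += 1
            return hint
        return None
```

The game loop would call `pending_hint()` each frame and `record_progress()` on any real advance; because the level resets on progress, a player who never gets stuck never sees a hint at all.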

As it is, I'm pretty sure the game will let me wander around indefinitely without helping me along, and there were at least three places during the game where, when I resorted to reading the walk-through to find out how to advance, I said "there's no f***ing way I would ever have thought to try something as nonsensical as that!!" Now, there were also times when I *did* think to try something pretty nonsensical, but each reader is going to have different moments of "aha!"

So in a sense, as a work of fiction, it's a failure. It's unreadable unless you have the Cliff Notes to hand, or annotations, or what have you. But it and Trauma (which I described in a Google+ post as 'Deft, dream-like, and fairly short, [striking] a delicate balance between "too obvious" and "too obscure." Delightful.') are still very satisfying, engaging, and immersive. I recommend them both.
Current Mood: pleased
