
Bending Reality to my Will


July 25th, 2015

01:13 pm - Ask Not For Whom The (Bronze) Bell Tolls...
   I’ve enjoyed playing/reading text adventures for many years, as my re-creation of the “Double Fanucci” deck from Infocom’s Zork series clearly attests. For a while I was attending meetings of the Seattle Interactive Fiction group as well, and it was just a few weeks ago, while telling a friend how it is still possible, and in fact easy, to play those old games using a truly remarkable selection of engines that can process and present works in this field, that I found and installed “Frotz” on my iPad.
   For those of you who haven’t had the experience, this particular form of computer game, a genre that currently is known as “interactive fiction”, usually goes something like this: when you start the game, you’re given some paragraphs that describe where you are, and probably some background info. Then you move around, typing commands like “go north” (usually abbreviated to simply “n”), “get accordion,” “inventory,” “squeeze duck,” “examine toenails,” and so forth.
   A huge amount of work has been done to make the underlying engine, the “parser”, understand English as much as possible, so that typing something like “put spanner in left wheel” or “read about nunchucks in manual on shelf” doesn’t utterly confuse the interpreter.
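For illustration, here is a toy sketch of that kind of command handling. It is nothing like the real parsers under the hood of a Z-machine interpreter; it just shows the general idea of expanding abbreviations, dropping noise words, and splitting a command into a verb and its objects (the word lists are my own invention):

```python
# Toy illustration of text-adventure command parsing: expand common
# abbreviations, strip noise words, and split into verb + objects.
ABBREVIATIONS = {"n": "go north", "s": "go south", "i": "inventory"}
NOISE_WORDS = {"the", "a", "an"}  # words the parser can safely ignore

def parse(command: str):
    """Return (verb, [remaining words]) from a player's typed command."""
    command = command.strip().lower()
    # A bare abbreviation like "n" expands to a full command first.
    command = ABBREVIATIONS.get(command, command)
    words = [w for w in command.split() if w not in NOISE_WORDS]
    if not words:
        return (None, [])
    return (words[0], words[1:])

print(parse("n"))                          # ('go', ['north'])
print(parse("squeeze the duck"))           # ('squeeze', ['duck'])
print(parse("put spanner in left wheel"))  # ('put', ['spanner', 'in', 'left', 'wheel'])
```

A real parser, of course, also has to resolve which in-world object each noun phrase refers to, which is where most of the "huge amount of work" goes.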
   This is all well and good, but returning to Interactive Fiction (IF) via Frotz just reminds me of my ongoing disappointment with the genre.
   You see, the origins of the field were computer-mediated puzzles. Oh, yes, Leather Goddesses of Phobos, A Mind Forever Voyaging, and Leisure Suit Larry, to name but a few notable titles of the genre, did have storylines, but they were really a series of puzzles. The “reader’s” job was to solve the puzzles.
   Puzzles are fun, don’t get me wrong, but I don’t feel like I’ve ever found a piece of IF that really lives up to the “fiction” part of the nomenclature. A novel (well, a well-written novel at least) does not come to a grinding halt because I couldn’t figure out where the ivory key was hidden.
   Years ago, I accidentally had an interactive novel experience. I acquired a bootleg copy of a game for my Amiga. I’d never heard of it before, I didn’t have the box, or instructions, or anything except the name of the game: “Millenium.”
   I popped it in my drive, and found myself on the moon in a mostly mothballed moonbase. Now what? Well, there were rooms with devices that didn’t work because there wasn’t enough power, and a backup generator, and, well, it seemed clear I needed to turn on the backup generator, since the power control room said we were running exclusively on solar panels.
   Why was I on the moon? Dunno. But there was almost always a fairly obvious “next thing that needed to be done” for me to do, and, bit by bit, the story revealed itself. Earth was uninhabitable. There had been other colonies in the system. If I could get communications back on line, I might be able to reach them. I was unexpectedly attacked; I had to get the defensive systems up and running before they came back and destroyed me. The research lab eventually came up with the means to terraform the Earth, and the story (and game) ended when the terraforming systems were installed and activated.
   As a game, I would have thought it was probably too easy and obvious. But (possibly because I didn’t have a box to spoil the story for me), the storyline, gradually unfolding in front of me, was a blast.
   The other big success for me in this genre was “Marathon.” This was a Doom-style first-person shooter for the Mac. Now, I am not a big fan of “twitch” games, where forward progress is made by mastering complex button presses and joystick wiggles with split-second timing. I would never have played Marathon for more than an hour or two if I hadn’t turned up a cheat code that rendered my avatar unkillable. Once that was done, I could enjoy exploring the alien-infested ship.
   At first, this still didn’t seem too promising. Every now and then there would be a computer terminal, and the ship’s AI would, with prompting, cough up little nuggets of info to help me along, presumably with the goal of eventually getting rid of the alien invaders, repairing the ship, and completing whatever mission the spacecraft had been on when it had been attacked. But then the computer’s information started to get a bit strange. The AI had what seemed like a pleasant, helpful personality, but bit by bit, it became clear that it had actually gone insane, and what it was really trying to do was (among other things) get me killed. Dang!
   A brief aside here. There might be some of you thinking “But you were invulnerable! That totally ruins the game!” You would be dead wrong. If I hadn’t found that cheat code, I would have deleted the game from my computer in complete frustration long before I even found out that there was a subtle and sophisticated storyline embedded in it. I still had to figure out where to go and what to do, and there were also a couple of places I got stuck because I didn’t have finely-honed video-game reflexes. One spot in particular required jumping off a platform and then jumping again at just the right split-second to get to a door high on a wall, or some such thing, and I had to do that over and over and over again before I finally got it. It was really annoying. However, I still spent quite a few hours playing that game, and had so much fun that I also played all the way through the sequel.
   There’s also an important lesson for game designers here. I’ve spoken with some designers (and plenty of players) who would probably say “But that’s not how you’re supposed to play it! Why did they leave those cheat codes in? ‘Cause, well, that’s cheating!” If you’re one of those people, here’s a word of advice: get over it. You’re being an egotistical fool. This is a single-player puzzle-style game, not a multiplayer head-to-head challenge. If a user has more fun playing your game with impervious armor, or with an unlimited money supply, or with a magic key to open all the normally locked doors, then they’re having more fun! So let them have fun! Include an “undo,” or a save/restore system, or hints, or other optional ways to make your game easier, and let the end user decide how much assistance they want in order to have the most fun playing your game. If you’re worried about bragging rights, or you’re doing some kind of global leaderboard where people get to post scores or completion times, then you can alter scores to reflect what assists were or were not used, or have separate leaderboards for people who get through a game without using the undo or restore feature, or whatever. But limiting your audience because you think players shouldn’t play your game the “wrong” way is just dumb.
   One even briefer historical note: the company that published Marathon was working on a follow-up product for the Macintosh that promised to be amazing: an epic first-person shooter that would blow people away. In fact, it almost certainly would have. There’s every reason to believe that if it had come out for the Mac, it would have made hordes of PC/Windows gamers wet themselves in envy. But, as it happens, Microsoft was getting ready to enter the console game market to go toe-to-toe with Sony and Nintendo, and they wanted a killer app. So they made the publisher of Marathon an offer they didn’t refuse, and the rest is history. The publisher was Bungie, and the game that was originally slated for the Mac, and instead came out for the Xbox, was “Halo.”
   But enough of history, Halo, and games that demand massive video cards, and back to Frotz on my iPad, and the humble text adventure. Frotz, as it happens, comes with a couple dozen works of interactive fiction pre-installed, a very intelligent decision on somebody’s part. I thought it would be fun to pick out a title and play it, even though playing on my iPad meant I’d be entering text using just a picture of a keyboard, rather than an actual keyboard, never mind a good keyboard. But that’s a rant for another day.
   I settled on a work entitled “Bronze.” Each work has a little blurb to give you some idea what it’s like, sort of like the back cover text of a paperback. Bronze had what appears to be part of somebody’s review of the game, and it said, in part: “This game is intended for those of us new to interactive fiction. Puzzles have multiple solutions...no time limits...the difficulty is on the easier side...[and] commands new to the form, such as GO TO, remove a lot of the tedium of the old-school games. I recommend Bronze to [people who] would have little patience with the intentionally frustrating and pedantic I-Fs of old.”
   Now, I am not “new to interactive fiction,” but I don’t have much patience for “intentionally frustrating” anything, and as for pedantic, well, please refer back to my previous comments about egotistical fools. So this sounded like an excellent place to dive in.
   I regret to say that I think the reviewer is almost certainly correct in his description of the game. The writing is solid, it is probably much easier than many modern titles, and some of the special commands did indeed remove what traditionally were very tedious aspects of the genre.
   One minor remaining problem was that I did still eventually have to build a map. Yes, mapping the terrain can be fun, but in this case I was playing on my iPad. If I were playing on a computer, I could easily have the text adventure in one window, and OmniGraffle in the other, and could jump back and forth building the map as needed. In the 80s, I usually used graph paper, but that meant having to redraw maps more than once when I found that I’d started in the wrong spot, and the map was now going off the edge of the page. Mapping on my iPad, though, means having to pop out of Frotz and into a draw program, then back to Frotz, then back to the draw program. I don’t have a structured drawing app for my iPad (a la OmniGraffle or Visio), so playing/reading Bronze when I’m away from home would mean playing without a map. Instead, I kept my laptop handy, and built my map on that while playing.
   For, oh, about the first two-thirds of the game, everything was going pretty well. The back story was slowly unfolding about me as I wandered around the castle. Many things that seemed mysterious and intriguing at first were slotted into context bit by bit. The text artfully included little nudges to prod me to keep moving so that I didn’t get stuck trying to deal with, say, the jigsaw puzzle with one missing piece, until I’d finished exploring the castle, since solutions to puzzles were often a matter of just finding the missing piece (sometimes quite literally).
   But eventually I got stuck. There’s different kinds of stuck in these games. Usually, it’s a “I need to get through that door/open this box/do some specific thing. I just have to find the means to accomplish that.” But this was a Grade 2 stuck: “Now what? Okay, the end goal is, er, to make the beast feel better? But how is that done? What am I looking for?”
   There was an unusually well-done hinting system, where I could “think about” various objects, which sometimes would provide useful insights and information. But I was soon getting very tired of pecking out “think about XXXX” on my iPad as I cast about more or less randomly trying to figure out what to do next.
   The real problem here is that this is where suspension of disbelief fails. I had been reading a story. I was immersed in the tale, seeing this castle in my mind’s eye, admiring the portraits, smelling the roses, and seeing dust motes dancing in the sunlight that fell on the mechanical chessplayer. The Grade 2 stuck brought me to a screeching halt. Now I was forced into puzzle analysis, and the world I’d been enjoying was merely data.
   Eventually I had to resort to a walkthrough: a transcript of commands to solve the story. Talk about a buzzkill! Reading through the walkthrough eventually got me to some commands that I hadn’t tried. Long commands. Movement commands abbreviate to single letters. About 98% of the remainder are two-word commands, usually “[verb] [noun]” style. The ones I needed to progress were seven-word commands. Worse, they were along the lines of “look up [object or character] in the [adjective] [noun].” There were two different books, plus records, and notes. There were many items that needed to be read about in each, and many items needed to be researched in more than one of them.
   In effect, what I hadn’t done was to make a list of everything that might bear on the current situation, and looked them up in my reference texts one after the other until I’d extracted the information I needed. Basically the sit-and-read equivalent of bumbling about a dungeon pulling on every wall sconce you find hoping you’ll stumble across a secret passage. Gaaaah!
   So I typed in five or six of those seven word commands, learned important facts, and then headed off to do some things that now seemed fairly straightforward. Sadly, I had to do that a couple of times, so I never did get back into the story before I reached the end.
   And did I “win?” Well, no, not exactly. In order to not completely spoil the remaining fun, I wasn’t reading the walkthrough and following it step by step. I was skimming it for hints, which I’d then try to use on my own. As a result, when I’d put the P in the Q, picked up the Z, and then X’ed the Y, the game told me “Congratulations. You [accomplished this thing], but you didn’t also [accomplish this other thing].” Other thing? I hadn’t been aware that I was supposed to accomplish the other thing.
   I wish that I could say that Bronze just isn’t a particularly good piece of I-F. Unfortunately, I doubt that’s the case. It’s probably still one of the best introductory titles available, even though it was published in 2006. Certainly the comments on IFDB would support that.
   No, I’m afraid that Bronze is what it is, or more tellingly, isn’t what I wish it was, for two reasons. One of them is that the I-F community still tends to think of these things as games, not stories, which puts a very heavy focus on the puzzles. There are authors and player/readers in the community who are more story-driven, but I think they’re still a minority.
   The other main problem is much bigger. Writing non-linear fiction is really freakin’ hard.
   You see, I-F isn’t the only time this kind of writing has appeared. Other vectors into this genre, coming from other starting points, have been tried. When HyperCard came out, there was a flurry of attempts to write hypertext stories and novels. The invention and growth of the World Wide Web was another chance to create works that linked in multiple directions. I have a work written by Kathryn Cramer and published by Eastgate in my library, as well as a hypertext-enhanced edition of “A Fire Upon the Deep” by Vernor Vinge that was published by Brad Templeton in 1993. The “Choose Your Own Adventure” books are another avenue into non-linear fiction.
   For a would-be writer, a good course or workshop on writing can be very dismaying. The importance of opening lines, avoiding the expository lump, showing vs. telling, successful character development; there’s just an astonishing amount of stuff that goes into making a story good. Some people are lucky enough to intuitively grasp enough that they can become successful authors by just sitting down and writing, but most authors have to write a lot of junk before they finally produce something good, let alone great.
   When attempting this non-linearly, many of the standard tools for the writer are lost. You can’t control pacing as well, you don’t even necessarily know which part of the story a reader will get to first, you can’t take for granted what a reader is aware of as the story progresses, and so forth. Many authors will end up with more pages of notes than pages of manuscript, and that’s for a single path through the work!
   Hard, yes, but not impossible. Millenium and Marathon were to some degree unintentional examples of interactive fiction, but I would be absolutely delighted to have another experience like those again.
   Currently, the term “interactive fiction” is almost synonymous with “text adventure,” which I think is really unfortunate, because I think it really limits many people’s thoughts on what can be done along these lines. What I think can be fairly convincingly described as the best piece of interactive fiction ever published has barely any text in it at all! I am referring, of course, to a title that was to non-linear storytelling what the iPod was to MP3 players: Myst.
   It was still fundamentally a great big pile of puzzles, many of them extremely difficult, but (in stark contrast to “7th Guest” released about the same time), the puzzles were made to work for and advance the plot. Even when you were absolutely baffled as to how you were supposed to get some blankety-blank door open, you were still there, in the world.
   Myst, like Bronze and many other works of interactive fiction, had more than one possible ending. I’m not arguing for works that don’t have puzzles in some form, because one potential pitfall of I-F is when the interactive element is pointless. If everything is going to happen whether I act or not, then why isn’t this story just a normal linear story that can be printed on paper? Most I-F puzzles are such that if you don’t solve the puzzle, the story stops dead. Some of them will branch one way if you solve it (sometimes before some kind of time limit) and branch another if you don’t. The latter approach complicates the writing, but helps a lot in keeping the reader immersed.
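That branch-either-way structure can be sketched as a tiny story graph. This is my own illustrative sketch (the node names and text are invented, not from Bronze, Myst, or any other published work): each puzzle node carries both a "solved" branch and a "failed" branch, so the story keeps moving instead of stopping dead.

```python
# A minimal story graph where a puzzle advances the plot whether or
# not the reader solves it; only the path taken differs.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    text: str
    on_solve: Optional[str] = None  # next node if the puzzle is solved
    on_fail: Optional[str] = None   # next node if the reader gives up or time runs out

story = {
    "door": Node("A locked bronze door bars the hall.",
                 on_solve="library", on_fail="garden"),
    "library": Node("The door opens onto a dusty library."),
    "garden": Node("You slip out a window into the garden instead."),
}

def advance(node_id: str, solved: bool) -> str:
    """Follow the appropriate branch; with no branch, the story waits."""
    node = story[node_id]
    nxt = node.on_solve if solved else node.on_fail
    return nxt if nxt is not None else node_id

print(story[advance("door", solved=False)].text)
```

The writing cost shows up immediately: every branch node roughly doubles the prose that has to be written, which is exactly why the branch-either-way approach "complicates the writing" even as it preserves immersion.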
   Perhaps one of these days a well-known successful linear author will take on the challenge of writing a non-linear work. Or some unknown but talented author will publish a non-linear story that breaks out of the genre and is recognized as A Work of Literature. The ubiquity of devices that can present non-linear fiction is a critical advantage, of course. We might eventually see courses and workshops that cover how to manage branching stories, book clubs that get together to discuss non-linear works, and scholarly journals that deconstruct the latest works.
   Some day, perhaps. First, we just need more titles that are, first and foremost, a really good read rather than a really good game. I can hardly wait.
Current Mood: disappointed


September 23rd, 2014

10:36 pm - Why Hugo Base Design Contests Are a Bad Idea
Wait, what?

"But Dave, didn't you enter a Hugo Base design contest?"

Yes, I did, and I am all too aware of how the success of that entry has badly undercut my case. Nevertheless, I believe the headline is quite true.

You see, while I was still in college, my home town city council held a couple of design contests for banners to decorate the downtown district. Since I was studying graphic design at the time, I decided to enter. When I lost the first one, I thought "Oh, well, I guess that's just how it goes." But when I lost the second one, it became obvious that the problem was not with my designs, but with the fact that there were factors being used to choose the winner that weren't included in the contest specs. Things like "we really like bright colors," and "even though we said you could use up to three silkscreened colors on the fabric, we're actually very miserly, so designs with just one color have a real edge."

At that point, I'd already had enough of giving away my skilled, trained labor for free, and decided I would not be entering any more design contests.

I stuck to that, too, until 2008. Because it's the Hugo, which has enormous personal significance. I first got to attend the awards ceremony in 1993, and sitting in that crowd and watching winners picking up their trophies was absolutely thrilling. Even though I was at WorldCon in a professional capacity, it was obvious to me (or so it seemed) that my career path did not lead toward ever being eligible to receive one of my own. "What award," I asked myself, "would be even more thrilling to receive than a Hugo?" It happens to be a very very short list. Nobel Prize, MacArthur Grant, Kennedy Center Award. That's it. An Oscar, Emmy, Clio, Tony, Pulitzer? Not as amazing as a Hugo, not to me.

This hopefully gives you some idea of just how big a deal it had to be to make me break my rule about entering design contests.

But now, you see, I have a Hugo; the one I made. I am, of course, hugely biased as to where it would fall on a scale of "best to worst base designs ever," but there's an awful lot of fairly good evidence that it's somewhere in the top 25% at least. All of which means that I really doubt I'll ever enter another Hugo base design contest. Ever.

Unfortunately, perhaps in part because of the results Montreal (and probably Scotland) got from their contests, having base design contests has become more common. This is a Bad Thing.

"No, no, it's a good thing! We will get to choose from among multiple options, so we can get the best base!" No, you'll get to choose from among a very limited number of options, most of which will be unusuable, and the remainder of which will probably be merely okay. Because what you're going to get from a contest is entries from amateurs. Really good designers don't have to give away their time for free to get work. They're not going to give you designs.

Fortunately, you don't have to just take my word for it. AIGA is the leading guild for graphic artists in the U. S., and they have a handy form letter for their members (or anybody else) to use to help educate people about asking designers to work for free. It's called "spec work", defined as "work done prior to engagement with a client in anticipation of being paid," and here's some of what that letter says: "AIGA, the nation’s largest and oldest professional association for design, strongly discourages the practice of requesting that design work be produced and submitted on a speculative basis in order to be considered for acceptance on a project, [because] successful design work results from a collaborative process between a client and the designer [whereas] design competitions ... result in a superficial assessment of the project. [Also,] requesting work for free demonstrates a lack of respect for the designer and the design process."

They do suggest an alternative approach. "A more effective and ethical approach to requesting speculative work is to ask designers to submit examples of their work from previous assignments as well as a statement of how they would approach your project. You can then judge the quality of the designer’s previous work and his or her way of thinking about your business."

As it happens, AIGA has an unusually mellow take on spec work. The Graphic Artists Guild says "Artists and designers who accept speculative assignments (whether directly from a client or by entering a contest or competition) risk losing anticipated fees, expenses, and the potential opportunity to pursue other, rewarding assignments."

The Registered Graphic Designers of Ontario goes so far as to "prohibit its members from engaging in speculative (spec) work" and goes on to state that "Spec work is universally condemned as an unethical business practice by responsible designers and design organizations around the world."

The Society of Graphic Designers of Canada says "The practice of asking for free design concepts in order to choose the 'right designer' or the 'best design' or the 'best logo' undermines and devalues the professional designer's education, experience, hard work and the entire design industry. GDC members do not engage in contests or other speculative, commercial projects."

There's even a domain dedicated to explaining the problems with spec work: http://www.nospec.com

So the problems with this contest so far are (a) the judges won't even get to see work from the most talented designers, and (b) the designs they do see are the designer's "best guess", rather than something custom-tailored based on interacting with the client. The design firm "artwurks unlimited" neatly summarizes the third big downside: "Speculative requests are often a result of 'I’ll know it when I see it' thinking on the part of the client. The problem here is that it’s [a] self-centered point-of-view rather than a position serving the needs and wants of the audience."

Not long after Montreal's base design contest, the WSFS Mark Protection committee (I think?) held a design contest for a logo, so that there would finally be some kind of symbol that could go on Hugo-winning books' covers and the like. It's an excellent idea. Unfortunately, the winning logo is not. As a graphic designer, my personal specific field of interest has always been logos and logotypes. Designing the Hugo base was a stretch for me. Designing logos is not.

Had I been hired by the Mark Committee, I know that part of my job would have been educating the judges in what makes a good logo. It's obvious to me why they picked the design that won: it does a very good job of replicating the appearance of the trophy, and the judges clearly thought that was important. Sadly, they were wrong. Most book buyers have never seen, and will never see, a real Hugo award. Making the logo look just like the trophy is not very important. Making the logo robust (recognizable under a variety of conditions and sizes), unique (not confused with anybody else's logo), eye-catching, and thematically appropriate (it does need to be Hugo-esque), are much more important.

Part of the irony of being so obsessed with duplicating the rocket is that logos are strongest when they're silhouettes: a single solid color, or black; but the silhouette of the Hugo doesn't look like a rocket! That's why the winning logo has to be two-tone black and gray. If you make it all black, the result is just a sort of lumpy vertical line that really doesn't have any "zoom" or "swoosh" to it at all. The logo would actually evoke the Hugo better if it looked less literally like the trophy. It would look even more like a Hugo if it were foil-stamped in silver on a book cover. Alas, because it's two-tone, that's not going to happen. You can't half-stamp foil.

By now, you might be thinking that I'm about to say that no future Hugo awards committee should ever hold a contest again. Actually, no, I'm not. There is one very important consideration that tips me away from being that draconian, and that's budget. Science fiction fandom has never been a big-budget operation, and there's no way an awards committee could afford a professional at normal union rates.

One of the requirements in Montreal's contest guidelines was that each base should cost no more than $150. My proposal included, as part of the price, a modest but reasonable budget for my time, as well as the materials. I found out later that many of the previous bases had not paid the fabricator or artist for their time at all, although it was generally agreed that despite the fact that those people had been quite willing to do that, it was better if there was at least some acknowledgement of the value of skilled labor. For my part, I had put a fair amount of thought into how to keep the materials cost low, and the fabrication time short, in order to free up more of the budget for my own compensation. The Montreal administrators, in turn, told me point blank that they were entirely satisfied with the value I'd set on my time.

And yet! In order to make those bases, I ended up working fairly closely with Quiring Monuments, the largest grave marker maker in the Pacific Northwest. They were my source for the granite, and then they were sandblasting the partially completed bases. Naturally, when it was all over, I took my personal display Hugo over there to show them so they could see what it was they'd helped me make. I'd mostly worked with a woman in the front office, but when I was showing off the trophy, an older gentleman from a fancier office came out to see it, and he asked me what I'd been paid to do the work. When I told him $150, he was actually outraged. I was told that I should have received at least $800 for that kind of work.

Maybe so, but I don't think we're going to be handing out $800 trophies any time soon. Ergo, if a Hugo committee wants a great base for their awards, they have to find a competent professional who cares enough about the Hugos to cut them a really sweet deal. If they can't find anybody willing to do it who they believe can do it, that's when it's time to hold a design contest. It's better to pick from amateur designs handicapped by a contest communication blackout, than to have nothing at all.

But that means that a contest should be the last resort, used only if the committee can't find anybody better. Montreal couldn't come up with somebody, or so they told me. I heard the same thing from somebody on the Sasquan committee, but in their case, it just means they didn't bother looking. I can think of two or three intraregional fans who have skills and talents well suited for making a beautiful trophy base, entirely aside from myself, and I can think of at least a dozen more who might. Since I nearly won a Hugo (in 2010) for my 2009 Hugo Base design, and roughly two-thirds of the current committee knows me personally, to have one of them claim that they would have liked to have just appointed somebody except that they couldn't think of anybody to ask is just silly.

"Well, um, Dave? Here you are, kinda shouting and ranting and acting all scornful and stuff. Maybe it's because they know you that they didn't ask." Yup, that's a possibility. I am definitely not a 'people person,' and although I'm not aware of being actively disliked by any of the people on the committee (well, at least not until now), it's quite possible I am. Nevertheless, whether I could have been in the running to design this year's base or not does not change the fact that holding a contest is a bad idea. Nor should you mistake this blog entry as some kind of attempt to get the current committee to change its mind about having a contest and instead ask me to do it. I am no longer interested.
Current Mood: sad


September 11th, 2014

02:09 pm - For the Record: The Endpoint of the Electric Car
In the long run, the ideal powerplant for a primary-use car (one that can be used for short and long trips) will be an electric car with a turbine range extender.

I think I first made this prediction three or so years ago, after the Leaf had been announced but before it was available. You see, the current crop of "plug-in hybrids" are still doing it wrong: they're connecting the fuel engine to the wheels. This is just dumb. One of the biggest weaknesses, if not THE biggest weakness, of the internal-combustion (IC) car is the drive train. You need this elaborate and complicated transmission to turn the limited speed range of the engine into the far greater range required for propelling a car around. Since IC engines can't start themselves, an IC-only car has to keep the engine running when the car is stopped, so not only do you have to change gears (the transmission), you also have to be able to disconnect the engine entirely (the clutch or torque converter). The engine itself has to operate at a wide range of speeds, which inevitably means compromising overall efficiency to gain flexibility, and requires yet more mechanical moving parts (throttle body, variable output fuel injectors, and so forth).

The *last* thing we should be doing is making cars *more* complicated. Oh, goody, even more bits that can break down. Hybrid cars, like CFL lights, are lousy ideas from an engineering standpoint. They're both transition products, distinctly inferior to their coming replacements, but necessary (or at least economically desirable) because they can take partial advantage of new technologies until the infrastructure exists to make them unnecessary. If LEDs hadn't shown up so quickly, twisty-tube CFLs would eventually have been replaced by smaller versions of the traditional fluorescent fixture, with straight-line pin-mount tubes, because buying a new ballast every time you buy a new bulb is stupid, and putting the ballast in the same space as the bulb means the heat from the bulb cooks the ballast and causes it to fail prematurely. What we really need to do is not prohibit the sale of incandescent bulbs, but prohibit the sale of any more light fixtures with Edison sockets. We've got vastly superior alternatives to the industrial-age light socket these days, and one of the worst things you can do to either fluorescent or LED lighting is to try to cram it into a sphere-based 'lightbulb' form factor. But I digress.

An IC engine typically has more than 100 moving parts, which have to work in an environment with major temperature swings, serious pressure differentials, and an astonishing amount of high-speed metal-on-metal contact. An electric motor, by contrast, has one moving part, no significant pressure differentials, and generally will (and would prefer to) operate at much lower temperatures. All of that translates to much, much greater overall reliability.

Do you take your current IC car in for an oil change every 3 months/3k miles, or do you follow the manufacturer's recommendations, which are usually 6 months and 5k or 7.5k miles? Either way, compare that with the Leaf's dealer-recommended service schedule: first service visit is at 6 months, mostly to inspect for possible factory-caused problems. The next is sometime after 24 months. Again: 1 service appointment in the first two years of ownership, and that appt. doesn't necessarily involve changing or replacing a thing.
I expect that we'll eventually see primary-electric-drive cars easily exceed 1,000,000 miles. The overwhelming majority of them will be scrapped because of impact damage, not internal component failure.

The only real (as opposed to perceived) problem that *I* see for electric cars is a range issue, but probably not the one you think. Charging stations are going to keep proliferating, and even if your destination doesn't have a fancy car-charging station, if you can't find a regular old wall socket to plug the car into, you weren't even trying. The problem isn't that there's no place to charge a car, the problem is the time it takes to 'refill the tank'. The I-5 corridor along the west coast is already *very* well stocked with charging stations, but that won't let you get from Seattle to L. A. in 16 hours, like you can in a gas-powered car.

By the way, the oft-cited 'issue' of needing to replace the electric car's battery pack is a perceived problem, not a real one. Nissan, who (based on the evidence so far) seems to be very good about providing reasonable, accurate, real-world-use-based statistics for the Leaf, says that they expect a 5-year-old Leaf's battery pack will have about 80% of its original charge capacity. In fact, Nissan's warranty guarantees at least 70% capacity out to 5 years or 60,000 miles, so some Leaf owners might well not need to replace the batteries until the car is 6-9 years old. The current price for a new battery pack is $5,500, which is already pretty reasonable, but it's likely to go lower as manufacturing volume drives down the cost of lithium-ion batteries.
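If you want to see what those percentages actually mean in practice, here's a quick back-of-envelope sketch. The 24kWh pack size and 100-mile nominal range are round numbers I'm assuming for illustration, not measurements:

```python
# Back-of-envelope check of the Leaf battery figures discussed above.
# Pack size and range are round illustrative numbers, not measurements.
pack_kwh = 24.0        # original Leaf pack capacity
range_miles = 100.0    # nominal range when the pack is new

for pct in (80, 70):   # 80% expected at 5 years; 70% is the warranty floor
    usable = pack_kwh * pct / 100
    est_range = range_miles * pct / 100
    print(f"{pct}% capacity: {usable:.1f} kWh, roughly {est_range:.0f} miles")
```

Even at the warranty floor, that's still a car with most of its useful range intact, which is why I call the battery-replacement worry a perceived problem.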

That brings us to the reversed-priorities 'plug-in hybrid'. As with the traditional gasoline-powered car, connecting your fuel engine to the wheels means throwing a huge pile of heavy and unreliable junk into the car. Dumb. What we need is a 'fuel-assisted hybrid.' Good grief, of *course* it's a 'plug-in'; why would you ever drive a car that couldn't be plugged in? But if you need to be able to go further than a single charge will take you, that's when you use up some fuel in order to fire up the onboard fuel-powered generator.

Connecting the gas motor to a generator, and ONLY to a generator, makes a gigantic difference in the parts count. The engine itself can be set to run at one constant speed, and optimized for maximum efficiency at that speed. It and the directly-attached generator can be located anywhere in the car that is most convenient, without any need to ensure any mechanical linkage to the wheels.

Once you've abandoned the idea of connecting the engine to the wheels (like the rusty junker of an idea that it is), you also have a much wider range of engines to choose from. One of the potentially most efficient, and one with a potentially astronomical power-to-weight and power-to-volume ratio, is the gas turbine engine.

What we're talking about in this case is basically a cute little teensy-weensie jet engine. Like an electric motor, it typically has one moving part: the spinning shaft with blades that runs down the middle. In theory, the advantages are small size, reliability, and efficiency. (I was terribly amused when I first learned that the M1A1 Abrams tank has a turbine engine. Not the sort of vehicle that you'd expect to be jet-powered. :) The downsides are noise and heat, but there's no reason to think those are insurmountable. Sure, the exhaust gases might melt asphalt if you're not careful, but that just means you need to be careful; maybe a heat exchanger where an IC car has its catalytic converter. Whatever.

There's a microturbine currently available that, with generator, is about 1.5 feet long and 6 inches in diameter, and generates 7.5kW. That's big enough to completely recharge the Leaf in about 3 hours, and at least some of those hours can be driving hours. 7.5kW isn't quite big enough to keep the Leaf moving continuously at freeway speeds, but if you turned it on at the beginning of your trip, it would have the effect of turning the existing 24kWh packs into roughly 35kWh packs (assuming you spend 90 minutes driving 100 miles), raising the effective range from 100 to about 145 miles. With a higher-capacity turbine, you could run the car continuously, in effect using it as a gas-powered car, while still getting far better mpg than IC cars. The transmission and torque converter of an IC car are replaced by the generator, voltage controller, and electric motor of the electric car.
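For the curious, here's the arithmetic behind that range-extension claim, sketched in Python. The assumptions are the ones stated above: the turbine runs flat-out at 7.5kW for the whole 90-minute trip, and (optimistically) all of its output reaches the battery and motor:

```python
# Rough math for the 7.5 kW microturbine range extender described above.
# Assumes constant full output for the whole trip, with no conversion losses.
pack_kwh = 24.0
base_range_miles = 100.0
turbine_kw = 7.5
trip_hours = 1.5                      # "90 minutes driving 100 miles"

extra_kwh = turbine_kw * trip_hours   # energy generated en route
effective_kwh = pack_kwh + extra_kwh  # pack "looks" this big for the trip
effective_range = base_range_miles * effective_kwh / pack_kwh

print(f"{effective_kwh:.1f} kWh effective, about {effective_range:.0f} miles")
```

Real-world losses would shave some of that back, but the basic point stands: a small, constant trickle of generation stretches the pack considerably.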

This is, by the way, exactly the system that trains have been using for decades now. The steam locomotive was replaced by the diesel locomotive, but diesels are actually diesel-electrics: the diesel motor drives a generator, and the wheels are turned by electric motors. They just don't carry a lot of batteries along or try to plug in every time they arrive at a station.

"Oh, you make it sound all super-spiffy, but if this is such a great idea, how come nobody's doing it?" Indeed. I think there are two reasons. First, as you might have noticed, auto manufacturers in general, like most really big companies, truly suck at getting radical new technologies out the door. They're so invested in the current system that they have enormous trouble committing to something new. I'm very impressed that a company as big as Honda managed to get the first hybrid into showrooms, although introducing a new car is such an incredibly capital-intensive process that there may have been previous attempts that I didn't even hear about. (Yes, Honda, not Toyota. Honda's Insight came out a year before the Prius, and significantly outperformed it as well. It was, and is, a superior hybrid. Unfortunately for Honda, it turned out that people would happily trade poorer performance for extra doors; the original Insight was a two-seater car, and the first Prius was a four-door sedan.)

GM, semi-famously, HAD what could have easily been the first successful mass-market electric car, but after distributing EV1s in California, had a corporate psychotic break and snatched them all back and crushed them. It took brash start-up Tesla to give the world some idea of just how good an all-electric car could be. In the same vein, it's going to be years before a major car manufacturer finally gets over the idea that you can't put a gas-powered engine in a car without connecting it to the wheels, and any company small enough to already have that clue in their closet probably doesn't have the capital to develop the car.

The other reason we don't have an electric car with fuel-assisted range extender is that, when it comes to turbine engines, bigger ones are *easier* to build. The smaller you make the turbine, the faster it has to spin in order to run well and the tighter the tolerances are for the various parts. A plane's jet engine might run around 10,000 rpm, but car-sized turbines have to spin in the neighborhood of 100,000 rpm.
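That "smaller spins faster" rule isn't arbitrary: for comparable aerodynamics, the blade TIP speed stays roughly constant, so shaft rpm scales inversely with rotor radius. A quick sketch, using an illustrative tip speed and rotor sizes that I've picked as plausible assumptions (not specs for any particular engine):

```python
import math

# Why small turbines spin fast: if blade tip speed is held roughly constant,
# shaft rpm scales as 1/radius. The numbers below are illustrative guesses.
tip_speed_ms = 500.0               # a plausible near-transonic tip speed

def rpm_for_radius(radius_m):
    # tip speed = circumference * (rev/s), so rev/min = v / (2*pi*r) * 60
    return tip_speed_ms / (2 * math.pi * radius_m) * 60

print(f"0.5 m rotor (jet-engine scale): {rpm_for_radius(0.5):,.0f} rpm")
print(f"0.05 m rotor (microturbine):    {rpm_for_radius(0.05):,.0f} rpm")
```

A tenfold shrink in radius means a tenfold jump in rpm, which lands you right in the ~10,000 vs ~100,000 rpm territory mentioned above, along with all the bearing and tolerance headaches that implies.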

Tricky engineering = expensive. If we were manufacturing as many microturbines as we do V-8s, I suspect they'd cost a lot less than the V-8, but we don't, and they don't. Capstone's C65 turbine is slightly larger than most car engines, generates 65kW (about 90hp), and costs $56,000. On the other hand, they actually built a sports car around their slightly smaller C30 turbine that could go 0-60 in 3.9 seconds, had a top speed of 150mph, and could go up to 500 miles before refueling. The exhaust from the turbine, by the way, met California's emissions standards without any further processing.

Until/unless some cities get serious about microtransit, most Americans are going to be getting around by car for a long time to come (self-driving or otherwise). Most 2nd cars will be electric, and most 1st cars will be hybrids. Mark my words, the turbine-assisted electric will be the winner in the long run.

(9 comments | Leave a comment)

April 23rd, 2014

05:44 pm - The sound of reality.
I try not to get too distracted by Quora, but some of the questions people ask are awfully intriguing. Many are asinine, of course, but those are easy to skip. Although I tried, I could not let a question I saw today pass without comment, in part because I felt none of the other 22 answers had done a very good job of it.

The question was: "Why does vinyl sound more 'real' than a CD?" Not surprisingly, more than a few people basically said "It doesn't," with one or two adding the equivalent of "...you idiot." to the answer. Somebody else said that, well, it was a scientific fact that vinyl was better, and went on to invoke the Nyquist limit, apparently blithely unaware of RIAA equalization.

Here was my reply:

This reminds me of when Babylon 5 first aired. It was the first science fiction television series to use CGI for the spacecraft, rather than motion-controlled cameras and miniature models. "I don't like their spaceships," a friend of mine said. "They don't look real."

"You mean they don't look like little plastic space ships."

Both CDs and vinyl involve huge compromises in terms of sound reproduction. Although the frequencies a CD cannot reproduce (everything above roughly 22,000 Hz) are beyond the hearing range of most humans, the consequences of that cutoff do extend downward into the audible range. Those effects can be reduced with careful design of the electronics, but generally you won't find that kind of care in equipment that costs less than a couple of thousand dollars.
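If you're wondering where that ~22,000 Hz figure comes from: it's the Nyquist limit, half of the CD's 44.1kHz sampling rate. A quick sketch also shows how few samples per cycle you get near the top of the band, which is exactly why the reconstruction electronics have to be designed carefully:

```python
# The CD's frequency ceiling: the Nyquist limit is half the sample rate.
# Near the top of the band there are barely more than two samples per
# cycle, so reconstructing those tones cleanly demands careful filtering.
cd_rate = 44_100                        # CD sample rate, Hz
print("Nyquist limit:", cd_rate / 2)    # half the sample rate, in Hz
for tone_hz in (1_000, 10_000, 20_000):
    print(f"{tone_hz:>6} Hz tone: {cd_rate / tone_hz:.1f} samples per cycle")
```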

Vinyl, on the other hand, is seriously handicapped at the low end. A loud bass drum would cause the groove on a record to have to move so far that it would cut into the next groove over, so low frequencies are dialed way down for pressing onto vinyl, and then the home amplifier runs them through a filter that reverses the effect.

I've performed with symphony orchestras, in marching bands, in large classical choirs and small jazz choirs. I don't think either CDs or vinyl sound especially 'real.' I think digital is much better than vinyl on really cheap equipment. On a system between, say, $400 and $2000 or so, I wouldn't be surprised if a $200 turntable and a really good record sounded more real than a CD in a $200 CD/DVD combo player.

However, one of the things that makes actual live music sound so great is the dynamic range. That amazing downbeat for "O Fortuna," the opening number of Carmina Burana? Wow. Then, just a few seconds later, the whisper of "semper crescis aut decrescis..." Vinyl just flat out cannot do that. If you record it at a low enough level to keep the loud part from cutting into the neighboring grooves, the quiet is so quiet that it's almost impossible to stamp that subtle a wiggle into the vinyl. The CD audio standard has an amazing dynamic range. One of the first CDs I ever bought was a Telarc sampler that included an excerpt of the 1812 Overture, with real cannons. In the end, although the CD might have had the full dynamic range of that music, it didn't matter, because the stereo couldn't play it. I was in a science club in high school, and we were buying a new stereo to play music during the lunch hour, and the receiver we got had 1000 watts per channel. Trying to play the 1812 caused it to shut down. In order to actually play the cannon blasts, the volume had to be turned down to where the symphonic part was just way too quiet.
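To put a number on "amazing": the theoretical dynamic range of 16-bit audio works out to about 96 dB. (Vinyl is usually quoted somewhere in the 60-70 dB neighborhood, though that figure varies a lot with the pressing and the playback gear, so treat it as rough.)

```python
import math

# Theoretical dynamic range of 16-bit CD audio: 20 * log10(2**16) dB.
# Each bit of resolution buys about 6 dB of range.
bits = 16
cd_dynamic_range_db = 20 * math.log10(2 ** bits)
print(f"16-bit dynamic range: {cd_dynamic_range_db:.1f} dB")  # 96.3 dB
```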

Then there's the fact that both CDs and vinyl record, at most, two channels of sound (yes, even if you have Dolby 5.1 playback, if it's coming from a compact disc or a record, it's reconstructed from two channels). "Real" music, if it involves more than two performers, doesn't just come from two places.

As an instrumentalist, I'm primarily a percussionist, and generally speaking, percussion instruments are the hardest to reproduce on recordings. To recreate a bass drum takes massive amounts of power; to get a cymbal or tambourine right requires extreme precision at the highest frequencies. I've heard a stereo play back a tambourine well enough to sound 'real' to me exactly twice in my life. The first time involved six-foot tall Magneplanar speakers that I think cost around $12,000 for the pair. I have no idea what the rest of the equipment attached to them cost, but it was probably between $20k and $30k. The other time a speaker actually sounded 'real' to me (with a drum set ride cymbal), it involved some JBL speakers with titanium tweeters, and once again, the whole stereo system was well over $20,000. (Both times, the sound source was a CD.)

So why does vinyl sound more 'real' TO YOU than compact discs? Could be any one of a number of reasons. It's what you've grown accustomed to, or you listen to music that suits vinyl better than CDs, or you've got a better turntable than CD player, or you're less sensitive to the audio compromises typical of vinyl vs those common to CDs.

Finally, please note that this entire essay was comparing compact disc digital audio and stamped vinyl records. On the analog side, two-track magnetic tape, one inch wide and running at 15 inches per second, will totally outperform both of those, but 96kHz sample-rate, 24-bit digital recording would crush them all in terms of fidelity. Even then, you still won't be able to hear what I hear when I'm playing with an orchestra. Musicians on all sides of me, with every instrument's sound unaltered by anything besides the air itself? That's real.

Enjoy and use whichever technology sounds better to you. As far as I'm concerned, they're both so far from 'real' that it doesn't really matter that much which one I listen to.

(1 comment | Leave a comment)

December 9th, 2013

01:10 pm - Smooth
I've been meaning to write this little essay for years. Why not today?

Some years ago, I surprised a friend of mine by telling him that one of the reasons I use an electric razor is because it shaves closer than the ubiquitous multi-blade manual kind. I think most people have no idea how good a quality electric is these days.

A few caveats, though. I've found (and myriad reviews and comments online seem to support this) that there is a much wider range of performance between various electrics (Remington, Norelco, Braun, Panasonic) than between different brands of manual ones (Gillette, Schick, etc.). That's also true of different models by the same manufacturer. If you try an electric razor that retails for less than $120, you're wasting your time (and if you go for one that's more than $300, you're probably wasting your money). Also, it can take up to a month for you and your face (or whatever you're shaving) to get used to the new way of shaving. It should be obvious that, the 'magic of technology' notwithstanding, the results you get after just one week using an electric shaver might not measure up to what you can do with a manual after years and years of practice; that's not necessarily the fault of the tool.

I do keep both kinds in my bathroom, because they have different strengths and weaknesses. But a few Halloweens ago, I was going to put makeup all over my head, so I shaved off everything but my eyebrows, and, as an experiment, used a brand-new Gillette three-bladed safety razor on the left side, and my Panasonic electric on the right. I had just finished when friends started coming over for the party, so I took my head over to them and had a few people check out the results, which were unanimous. The electric side was much smoother. The difference for my beard (cheeks and neck) was less pronounced, but still obvious.

Also, while the Gillette was a fresh-from-the-box blade head, my Panasonic's blades were, oh, probably four or five years old at that point, which leads to the other quality where electrics utterly kick butt over modern manuals: cost. How often do you change the blade on your manual, and what do you pay for them? If I did all my shaving with manuals, even if I shopped at Costco, I'd still be spending at least $100/year on blades, and probably a lot more than that.

Also, manual blades have a sloped failure curve. Every day, they work almost as well as the day before. "It seems pretty dull. A new blade would feel a lot better, but geez, they're expensive! What to do. . . ." My electric shaver says I'm supposed to change the blade and foil head every six months. If I did, it would still be cheaper than manuals after two years, but in point of fact, I didn't change the replaceable parts until the foil finally wore through some five to seven years after I bought it, and the new parts cost $40. All the electrics but the Norelco rotary kind use a thin perforated sheet of metal to keep the cutter blades from cutting you, and I've found that, until the day the foil actually wears through and has a hole in it, the cutters work pretty much as well as they did when they were new. So, after a decade, total cost of ownership for electric, about $200. Estimated equivalent cost of manual: $1000–$1500.
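If you like seeing the arithmetic spelled out, here's the ten-year comparison as a sketch. The $120 shaver price, the $40 replacement parts, and the $100-a-year blade habit are the figures from my own experience above; this is simple arithmetic, not a rigorous accounting:

```python
# Ten-year cost-of-ownership sketch, using the figures from the post.
years = 10
electric_shaver = 120          # low end of the recommended price range
electric_parts = 2 * 40        # roughly two foil/blade sets per decade
electric_total = electric_shaver + electric_parts

manual_blades_per_year = 100   # conservative Costco-shopper estimate
manual_total = years * manual_blades_per_year

print(f"electric: about ${electric_total}, manual: ${manual_total} or more")
```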

Speaking of that perforated foil leads to the third way that I find electric shavers dramatically superior to manuals: safety. No matter how much pressure I use, or how I hold it, or which direction I move it, the only times that my electric has ever drawn blood is when the foil failed. (Well, almost. It did bite me once when being used on a more intimate part of my person, mostly because I had grown over-confident. However, had I used a manual razor as cavalierly on the same skin, I would undoubtedly have been in danger of expiring from blood loss.)

It's possible that the reason the electric does a better job shaving is because I can move it in any direction. I can get a smoother shave with the manual if I shave against the direction of hair growth, but it's only smoother for an hour or so. After that, razor burn raises red welts. Using a blade 'backwards' drives the cut ends of the hair into the skin a short distance, and some of those ends then catch the edge of the follicle and irritate the skin. I've never had a problem with razor burn from the electric.

Both manuals and electrics work best (in my experience) on shorter hair; generally hair that has been shaved no more than a few days ago. The manual tends to cut maybe a quarter-inch track through longer hairs (say, on my head if I haven't shaved it for a couple weeks), where the electric is all but useless. Also, the manual's better for areas where there are only a few hairs (long or short), like the back of my neck.

That's because of a characteristic difference in the way they get the job done. According to my watch, it takes me about the same amount of time to shave my beard with either one, but it feels like it takes longer with the electric, because I have to go over the same spot four or five times. Each pass of the electric gets some of the hair, but not all. The manual will usually cut a more or less clear swath with each pass. However, then I have to drop it into the sink and thrash it about to clear the whiskers from the blades for the next pass. With the electric, I generally don't have to stop until I'm done, so there's a lot more time with the shaver on my face, but much less rinsing it out.

By the way, it appears that the best electric shaver manufacturer today is the same as it was over a decade ago when I bought my current shaver: Braun. Nevertheless, I purchased what was (and still is) the #2 shaver, a Panasonic, because then, as now, Braun didn't make a wet/dry shaver. Electric or manual, hair is easier to cut when it has absorbed water, so the best time to shave is during or after a shower. If I were just shaving my beard, I might have gone for the Braun, but because I also wanted it for my head, being able to shave in the shower or tub meant a much easier cleanup afterwards. Thus, the Panasonic.

Although it's probably as bad an idea as talking on your cell phone, an electric shaver does let you shave while driving, if you're really running late, because an electric shaver works quite well without any skin prep at all. This is particularly handy for quick touch-ups. I have used a manual directly on my skin a few times. If there are just a few hairs, or they're thin, and the blade is really sharp, it's not too bad. Otherwise, shaving cream is a requirement, or else Ow!

I'll use shaving cream with the electric, too, if I'm doing the whole beard. Not only will it help soften the whiskers, but it also helps alleviate one of the negatives about the electric: heat. By the time I've run it around my face enough to finish the job, the friction of the blades on the foil will have warmed up the foil quite a bit. Shaving cream and/or dipping the cutter head into water now and then fixes that problem.

The one factor where I find the manual clearly superior is noise. The rotary Norelco shavers are generally a lot quieter than the linear oscillating blades of the others, but it's still a loud high-pitched buzzing gizmo that you may be putting right next to your ear now and then.

The other category where the manual is better is that, with the exception of the amusing new vibrating manual razors, you don't have to worry about their batteries running down. I haven't found that to be all that big a problem with the Panasonic, though. A full charge, even today, is still enough to shave my face and head at least twice, if not three times, or just my beard for at least a week, so I only bring its power cord along on trips >7 days.

-Closer shave? Electric.
-Cheaper? Electric.
-Safer? Electric.
-Faster? Tie.
-Better for thin areas? Manual.
-Better for longer hairs? Manual, by a nose. Clippers are the real answer.
-In wet/shower? Tie.
-Outside the bathroom? Electric.
-Noise? Manual.

For being clearly superior in the first three categories, the trophy goes to the electric shaver.

Just for the record, I did not receive any consideration or compensation from anybody for this commentary.

(Leave a comment)

October 30th, 2013

01:43 pm - Captain Underpants
If you are, or live with, a male of the human race, you may or may not have noted a curious (well, I think it's curious) characteristic of men's underwear, specifically of briefs or boxer briefs. Ignore boxers: they're too loose to be relevant. But briefs, also known as 'jockeys' or 'tighty-whities,' and boxer briefs (similar, but with an inch or two of inseam), as manufactured by, oh, say, Hanes or Fruit of the Loom or Sears, do something very strange.

They lay flat.

I admit, as with so many other things in the world, I took this for granted for many years before one day thinking "What the heck?"

I have had a couple of different female friends over the years discuss bras with me; for example, I've learned that many women are slightly asymmetric, and wouldn't it be wonderful if you could get a bra that was one size on the left but a slightly different size on the right? I can only imagine the laughter and ridicule I would have received if I'd tried to convince them that what they really needed was a bra that lay flat on a table.

Now, there are manufacturers that make men's underwear that does not lay flat. I bought a pair (and why do we call one brief "a pair of underwear??") a couple of decades ago. I didn't buy them for their three-dimensional nature, but once I'd tried them on, it was suddenly obvious to me why they should be.

I think, though, that what I like best about the question "Why do men's underpants lay flat?" is that it actually springboards in many different directions.

Anatomy: After all, although both women's breasts and men's genitals present as curved surfaces, they're not the same curve, they're not the same mass, and there are structural differences. Perhaps it happens that if there were a male equivalent of 'cup size,' it would turn out that most men tend to be "A cups." Maybe they're squishier. Maybe many other things, some not necessarily appropriate for polite conversation.

There's also apparently a difference in what the garment wearer will tolerate. I know from experience that one of the consequences of flat underwear is the opportunity for 'wardrobe malfunction,' although since it happens inside pants, it usually just requires some discreet adjustments to put things right. However, boxers are the most popular style of underwear, and they are, in effect, one ongoing wardrobe malfunction, inasmuch as they don't even provide the support that briefs do (assuming, of course, that everything stays properly inside the briefs).

Engineering: Fabric does stretch, after all. Maybe the available stretch in underpants is adequate to reshape to an appropriate curve for men, where it would not be for a breast.

Fashion/Design: Not high fashion, but just the process of creating clothing for humans. There are many companies making more anatomically-conformant underwear. So why would the mainstream manufacturers so rarely follow suit? Expense? Market demand? And why, oh why, are there so many racks of tacky boxers for sale? "Metrosexuals" notwithstanding, is the typical male really that fashion-backward? {sounds of retching}

Culture: How does it come about that I've had more conversations with women about the practical considerations of the fit of a bra than I've had with men about the fit of underwear? And I'm fairly confident that I'm not unusual in that way. I'm sure there are many many men who've never discussed either subject, but I really doubt more than a tiny fraction have the opposite ratio.

Then there's the related issue about men and their neuroses related to endowment. On more than one occasion, I've considered raising this topic, but refrained because I was worried that it would come across as, well, bragging. Will my co-conversationalists, if they haven't already thought about this topic, think that I have this opinion just because I was more observant or thoughtful about this particular topic, or because it's a "bigger" issue for me? Never mind why it should even matter. There are some strange differences in the likely responses to a woman's stating "it's hard to find a comfortable bra because my breasts are just too large" and a man saying (or even implying) "it's hard to find comfortable underwear because my penis is just too big."

Gay Culture: Most of the non-flat underwear I'm aware of is marketed primarily to gays. Why? Yes, yes, I can think of two glaringly obvious reasons right off the bat; 'real men' (i.e. the classic macho straight guy) would be embarrassed near to death to talk about underwear; and, just like the bras at Victoria's Secret, some of the more dimensional items are intended specifically to enhance the wearer's sex appeal. Generally (although not exclusively!) I've noticed a bulging crotch is more likely to turn the head of a gay man than a straight woman. But are those the only reasons? I think not...

So. A simple question with, IMHO, many intriguing ramifications. Assuming, that is, that you're not somebody too embarrassed to even think about the subject. :)

Huh. LiveJournal has an "adult content" flag. My choices are "none" and "explicit". I would like to know why "implicit" isn't an option, thank you very much. And why isn't "innuendo" a choice? Huh? Huh?

(11 comments | Leave a comment)

July 12th, 2013

12:54 pm - Geezerology
I've surprised myself a little with how I've felt about gray hair as I've gotten older. But then, the gray hairs have surprised me with how quixotic they've been. My beard started going gray first, and there's a lot of gray there. Frankly, I didn't like it. A salt-and-pepper chin but my original very dark brown on the sides of my head looked rather silly to me. The top of my head doesn't have much hair left, but the fact that there are a few still hanging on is even less esthetically acceptable to me. Stop looking straggly and pathetic! Give up! Let go! But I digress.

Now my temples are going gray, and I'm fine with that. It looks fairly elegant and distinguished to me. Also, I'm old enough that I figure I probably ought to have gray hair by now anyway. Which of course means I'm way past due, because how many of us ever think we should have started getting gray hair when we started getting gray hair? Hardly surprising, though. When I think about "people with gray hair," my brain is more or less sampling everybody I know, and comes back with an average age of fifty to sixty or so. I point out to it that it should only include people who are starting to turn gray, and it still gives me forty-five to fifty.

Doing a bit of research for this post, by the way, turned up this rather fascinating report (check me out, citing the original research, who's da dude? Me!) which I found through an article in a British newspaper: "The researchers set out to test a widely-accepted “rule of thumb” in the cosmetics industry, that by the age of 50, 50 per cent of the population had at least 50 per cent grey hair. In fact, the new study found that less than a quarter of those taking part had that much grey hair at that age. In many parts of the world, it was a substantially lower proportion."

Now, it's hardly fair to compare when you spot your own first gray hair to the age your brain hands you for when it notices other people with gray hair. First of all, we're probably going to notice our own much earlier than somebody else's. We're probably looking a lot more closely at our own, and we tend to be more critical of our own appearance. On top of that, because the typical human thought patterns are already tricking many of us into thinking we're going gray earlier than we "ought to," many of us dye it to hide the gray, thus skewing the perceived age related to gray hair upwards even further.

My gray hairs started appearing in my late 30's, which apparently was just a little bit behind the average for Caucasian men, and for the most part, the hair follicles seem to be switching over to gray fairly slowly. At the rate I'm going, I probably won't be predominantly gray on my head until I'm in my mid to late 60's.

So here I am, pretty much copacetic with gray hairs showing up on my head right by my ears. Then one day I find a gray hair on my chest. This should not have surprised me in the slightest, but I found myself quite annoyed! It's particularly silly since I don't even particularly like my chest hair, so if it wants to camouflage itself against my pale epidermis, I ought to be glad. But no, I was offended, and promptly plucked the dang thing. Then I had pretty much the same reaction when I found one in one of my eyebrows. "I don't think so!" {poit!}

I mentioned this a few months ago to some friends, and one of them agreed that some gray hairs were more disturbing/offensive than others, and managed to rather gracefully imply that the ones that had bothered them the most were ironically located where they were very unlikely to be seen. I'd had the same reaction myself, but I was at a loss as to how to say that in polite company, so I'd waited to raise the topic until my eyebrow provided a more genteel example.

So I know it's not just me, and yet, how ridiculous is that? I should be pleased if the hairs that are normally covered by clothing turn gray, on the grounds that I'm probably going to go gray at a certain rate, but yea, let's keep the most visible hair dark and put the gray hair where it doesn't show.

Now, I haven't seriously tried to manage gray hairs by pulling them out. I pluck one now and then, but I know that doing it in earnest ends either in giving up in exhaustion as the rate increases, or in looking as if I have mange. Still, the few I have pulled have been rather perplexing. That eyebrow hair had a white tip. The majority of the pubic hairs have been white at the end, and dark at the base. So the melanocytes decide to retire and let the hair grow in white, and then, what, Moe comes along and slaps them, and they get back to work?

The Interwebs were less than helpful for this. Lots of people report hair that shows spontaneous repigmentation, with nearly as many 'helpful' respondents claiming it must be due to diet, or sun bleaching, or whatever, because "real" gray hair doesn't change back. PubMed wasn't very helpful, either. I turned up just one clear reference: "Indeed, it is not too uncommon to see spontaneous repigmentation along the same individual hair shaft in early canities." ('Graying: gerontobiology of the hair follicle pigmentary unit.' Exp Gerontol. 2001 Jan;36(1):29-54.) All remaining search results were related to vitiligo, not to age-related pigment changes. BTW, "canities" means "grayness or whiteness of the hair."

The hair cycle, as you might know, is that a follicle spends some amount of time manufacturing a hair, then takes a few weeks (or months) off. When the rest period is over, it releases the hair and begins growing a new one. Arm hairs (for example) have a short growth phase and a long rest phase, which is why they're so much shorter than head hair.

So here I am, finding hair follicles that shut down pigment production, and then start it back up again partway through the job. I would love to know what that follicle does with the next hair that it grows. Does it stay white next time? Is it back to brown? Does it restart partway through, but maybe later?

I have no doubt that there are commercial research labs working on how to restart hair follicle melanocyte activity, but we probably won't hear much about it until they've got some results.

In the mean time, I guess I'll just have to deal with my own hair follicles trying to freak me out. Ah, Mistress Biology, you are a wacky thing indeed.
Current Mood: Amused

(1 comment | Leave a comment)

July 10th, 2013

08:13 pm - Portland Mass Transit: Is it really that good?
A friend of mine from Portland was recently complaining about the Seattle mass transit system. There are a bunch of different ones here in Seattle, and you have to pay for each segment! Pierce, Community Transit, Metro, Sound; what a mess. In Portland, it's all tied together. You can just pay once and not worry about transfers.

Yea, whatever. It's not the first time I've had somebody extol the virtues of the Portland transit system. I can't say I'm in love with Seattle's mass transit system, but I'm not very impressed with Portland's, either. But this time I was near my computer, so I decided to rustle up some facts to see how they compare, because I rather suspected the reason Portland's seems so much easier to use is because there just isn't very much of it.


First of all, I'm just going to compare the Seattle city transit system, "Metro," to the Portland system, "TriMet." They both handled about 110 million passenger trips last year. With TriMet, 54 million of those were bus trips; the rest were rail. Metro has only buses. In fact, Metro has 220 bus routes and 1600 vehicles. TriMet has 79 routes and approximately 600 buses, plus the 4 rail lines.

The "greater Portland metropolitan area" (Hillsboro, Portland, Vancouver WA, and the like) is actually bigger than the "greater Seattle metropolitan area" (Seattle, Tacoma, Bellevue, Woodinville, Issaquah): 6,600 sq. miles vs. 5,400 sq. miles. But I can't easily figure out how much of the Portland area actually has usable mass transit service. With about 1/3rd the routes but more area, it seems improbable that the coverage is as good, but maybe they're long routes that cover a lot of territory. {shrug}
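Just for kicks, here's the back-of-the-envelope math in Python. This is a rough sketch using the approximate figures quoted above, and routes per square mile is obviously a crude proxy for actual coverage:

```python
# Rough route-density comparison using the approximate figures above.
# Route count per square mile is a crude proxy for coverage quality.
metro_routes, metro_area = 220, 5400    # Seattle Metro routes / greater-Seattle sq. mi.
trimet_routes, trimet_area = 79, 6600   # TriMet bus routes / greater-Portland sq. mi.

metro_density = metro_routes / metro_area
trimet_density = trimet_routes / trimet_area

print(f"Metro:  {metro_density:.3f} routes/sq mi")
print(f"TriMet: {trimet_density:.3f} routes/sq mi")
print(f"Metro has {metro_density / trimet_density:.1f}x the route density")
```

That works out to Metro having roughly 3.4 times the route density, for whatever a number that crude is worth.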

Now, keep in mind my friend was whining about how complicated it is to get to Tacoma or Everett, where you start on a Metro bus, then transfer to a Sound Transit bus, and then go to a Pierce Transit or Community Transit bus. Quite frankly, I'm too lazy to hit the Pierce, Sound, Community, and state ferry web sites to add in their route counts and resources. It seems rather unnecessary. For Portland, TriMet is the *only* transit system. The intra-Metro fare transfer system is just as good as the TriMet one, but TriMet doesn't have any other major systems to interface with, so no wonder it seems easier.

Particularly annoying is that my unhappy friend even used to have an ORCA card, but apparently thought bitching that it was going to cost $7.75 to get to Tacoma via cash fares was better than coughing up the $5 to buy a card (available at dozens of locations, including vending machines in the bus tunnel stations and many local retailers). The ORCA card provides free transfers between the different transit systems.

Portland's system does have remarkably high usage rates, and I am happy that there is a transit system that's really being used. According to TriMet, 26% of evening commuters are using mass transit. Wow. They also point out that "More people ride TriMet than transit systems in larger cities, such as Dallas, Denver and San Diego." Seattle's not on that list because Metro had about 10 million more, I believe. On the other hand, the Seattle area has 80% more people, and is almost twice as dense, so you'd think they could do better than that. Maybe when the light rail line is extended north, they will. {shrug}

If you want to tell me that you like Portland's transit system, go right ahead. Just don't follow it up with a complaint about Seattle's. Could Seattle have had a system as well-liked as Portland's, had they done things differently? Beats me. *Could* they have done things differently, which is to say, was it ever politically possible to get the right amount of money spent on the right things to have made something else? There's no way to know. But it's still clear that the Portland system is just a heck of a lot smaller than the Seattle one, and that makes the job a lot easier. Walla Walla has 9 routes, and the standard fare is 50 cents. I'm sure it's much easier to figure out routes there, and what a bargain! Is it better?

Apples and oranges. Apples and oranges.

(3 comments | Leave a comment)

June 18th, 2013

10:55 am - But I don't WANT to be innovative!
Yesterday's housechilling party was all kinds of fun (yay!). I wonder if you can tell how successful a party was by how much cleanup you have to do afterwards? Probably not, but one of the things I had to deal with was the remainder of the enormous ripe casaba that Nick brought. My first thought was "sorbet!" but after I pureed some of it and gave it a taste, I don't think that will work very well. Like watermelon, the casaba has a very delicate flavor; it's quite watery. I think it will either freeze up solid or require way too much sugar, just like watermelon sorbet.

The solution with watermelon sorbet, by the way, is to add a teaspoon of ethanol, or two teaspoons of, say, vodka. The ethanol does what the sugar does: it's an antifreeze. The ethanol's just a lot more powerful than the sugar.

However, I don't have any Everclear in the pantry right now, so I went to Plan B: jelly or jam.

The canonical home canning text, the Ball Blue Book Of Preserving, does not have any recipes for melon jelly. WTH? (heck)

To make a successful jam or jelly, you need to get the pectin/sugar/water ratio right, and that varies from fruit to fruit. I need some kind of melon jelly recipe as a guide. Casaba is closely related to honeydew, but probably even a watermelon jelly recipe would work.

Apparently the concept of melon jelly is just mind-blowingly radical. There is absolutely nothing on the internet for casaba jelly. The only honeydew jelly recipe I found was in a blog by somebody who just made it up on the fly, and hers didn't set up. I'm guessing it's because she just swapped honeydew for the plums she used the very first time she made jam, and plums, being quite high in natural pectin, don't need any additional pectin in order to set up.

I did find a watermelon jelly recipe from Marisa McClellan on a blog entitled "Food in Jars." This is a blogger who specializes in home canning, and yet she posts that watermelon is something "I would [not] have even considered putting in my jam pot" except that somebody asked her for a watermelon jelly recipe. This lack of imagination is more startling when I found Apronstrings Blog sharing a "Honeydew Jam with Mint and Lime" recipe that was adapted from "Cantaloupe Jam with Mint and Lime" that the blogger found . . . in the "Food In Jars" book. Yes, the book was written by Marisa McClellan. So she's made cantaloupe jam, but would never have thought to make watermelon jelly?

Yea, I know I'm unusually creative. But good grief! How hard can it be to say "I have more [name any fruit you can imagine] than I can eat all at one sitting. What will I do with it?" and answer with something other than "Wrap it in plastic and put it in the refrigerator." And yet, if Google's results are to be believed, rarely has anybody (or at least, anybody who shares recipes online), even thought about trying to make watermelon jelly, and nobody has ever tried it with casaba.


I did find some related stuff that was pretty awesome, though. Marisa has obviously done a lot more than just parrot back instructions from other people. The overwhelming majority of home canners work via the "because they said so" principle. "You use that much sugar because the recipe says so." And it's extraordinarily rare that somebody *writing* a recipe knows enough about the science to tell you *why* a particular step is there. As a result, most people think that the sugar used in home canning fruit is there for flavor, and thus that it's no big deal to cut back if you think adding, say, five cups of sugar to six cups of fruit is excessive.

The "Food in Jars" blog is smarter than that. I already knew that sugar is critical for shelf life. As she notes, it's a preservative. It is aggressively hygroscopic, like salt. It's why you can store maple syrup in the cupboard without it going bad; the sugar concentration is really high. Bacteria and mold can't grow in the syrup because the water is sucked right out of them by the sugar; they actually get dehydrated.

What I *didn't* know is how it helps jam set. I was aware that adding sugar changes the boiling point of water. Turns out that 220° F is the temperature at which sugar and pectin bond. Not enough sugar means you can't get the mix hot enough to trigger the bonding. Neat!
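Out of curiosity, here's a rough sketch of the boiling-point math, assuming ideal colligative behavior, which a concentrated sugar syrup definitely violates (and real jam also loses water as it boils), so treat this as direction-of-effect only:

```python
# Back-of-the-envelope: how much sucrose would an *ideal* solution need
# to boil at 220 F instead of water's 212 F?  Real jam is far from an
# ideal solution, but the direction of the effect holds.
KB_WATER = 0.512     # ebullioscopic constant of water, degC * kg / mol
M_SUCROSE = 342.3    # molar mass of sucrose, g/mol

delta_t_c = (220 - 212) * 5 / 9         # 8 F of elevation, converted to degC
molality = delta_t_c / KB_WATER         # mol sucrose per kg of water
grams_per_kg_water = molality * M_SUCROSE

print(f"~{delta_t_c:.1f} degC elevation -> {molality:.1f} mol/kg "
      f"(~{grams_per_kg_water:.0f} g sucrose per kg water, ideally)")
```

Even this idealized version says you'd need sugar on the order of kilograms per kilogram of water to move the boiling point that far, which squares nicely with those five-cups-of-sugar-to-six-cups-of-fruit recipes.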

And, from "Local Kitchen" I came across the (in retrospect, terribly obvious) idea (also set forth by Marisa) that if you are going to experiment with making preserves out of fruits you don't have an Officially Sanctioned recipe for, you should test the pH. What water-bath canning doesn't kill is botulism spores. However, they are prevented from growing in a high-acid environment, where "high-acid" is pH 4.6 or below. The "Local Kitchen" blogger made honeydew melon jam (with forsythia and citrus!), and was ready to can when they remembered to test. The pH was around 6.0, so they froze it instead.

Since I do happen to have appropriate pH test paper handy, I can and shall do the appropriate testing. I found a comment on another site that said one should use an electronic tester because pH paper isn't accurate enough, but I'm fairly confident that statement only makes sense for the original wide-range stuff, which does its color change from around pH 4 to 10 or so. I have paper that reads 4-7, and different paper for 6-11.

(3 comments | Leave a comment)

June 11th, 2013

02:57 pm - Dear Discover Magazine . . . .
I just sent off my very first letter to a magazine. It was a minor issue, but it was about the English language, so I couldn't help myself. . . .

Dear Editors:

Just finished reading my most recent "Discover" (really great issue, by the way), and now I'm writing my very first letter to a magazine.

Bill Andrews, in his "20 Things You Didn't Know About Gravity," item #4, says "Passengers on amusement park rides and the International Space Station experience microgravity—incorrectly known as zero gravity—because they fall at the same speed as the vehicles".

Sorry, Bill, but you're the one who's incorrect. You've made the common mistake of confusing English and Math. The English language phrase "zero gravity" is not in any way a replacement for "0g". English routinely rounds numerical amounts. "My commute was great! There was no traffic!" doesn't mean there was not a single car on the road; it means there were so few cars that they had no effect. If I want you to understand that the freeway was truly devoid of any other vehicles, I have to say "There was literally not another car on the road!" The editors at Merriam-Webster are on top of that distinction, since they have documented the commonly held definition of "zero gravity" as "lacking apparent gravitational pull" (emphasis mine).

If I'm riding a roller coaster or the Vomit Comet, and I'm in a situation where the various forces (gravitational and otherwise) cancel out to the point that I can't detect them without instruments, then I am, by definition, experiencing what most English speakers call "zero gravity." And, unlike the language of mathematics, meaning in English (and most other common general-purpose languages), is decided (in effect) by popular vote.

I've worked as an editor for many years myself, and I don't always *like* that; if I were the King of English, I would immediately ban "utilize" since "use" is an entirely superior and 100% compatible replacement. But I'm not, and I can't. And what people experience on the International Space Station fits the definition of the common English phrase "zero gravity." Sorry.

(Yes, I know that English as used in professional journals often assigns different definitions to things, and using the phrase "zero gravity" for a paper in Aerospace Science and Technology might be rather inappropriate, and even incorrect. But this is Discover, not (shudder) Scientific American.)
Current Mood: informative

(Leave a comment)
