Bending Reality to my Will


August 28th, 2019

04:30 pm - "Hey, you kids get off my d*mn cloud!"

A friend of mine posted a link on Facebook to an article about Tokyo police capturing drug-running drones with net-bearing drones, and then the crooks using their own capture-drones to bring down the police drones. It's rather amusing, but it reminded me of the "solution" to drone overflight that I'd thought of a few years ago.

You see, there's this interesting puzzle. Legally, homeowners also own the airspace over their land up to about, well, nobody's really sure. It's definitely up to the top of the tallest thing on your property; either the top of your house, or the top of a tree if you have one, because that's airspace you're demonstrably "using." It's below 500', because that's where the FAA says planes can fly, and all navigable airspace is theirs. It's probably up to at least 85', according to the Supreme Court.

Now, the current (as of this writing) FAA rules on drones say they can't go above 400'. And the uncredited author at Hackernoon clearly believes that this is a real problem for drone operators. "Without an efficient mechanism to enable . . . property owners to grant low-altitude Right-of-Way access to [drones], there is no practical way [to] . . . operate at low-altitude without trespassing."



October 3rd, 2018

01:31 pm - Set-top Box Neepery

So after not quite two years of fairly unsatisfactory media service from CenturyLink, they're coming tomorrow to make some changes. One of them is to move us from their fancy-schmancy Prism television service back to DirecTV. Our network connection is glass fiber right through the wall of the house and into the back of the transceiver. In theory, that meant gigabit-speed Internet (which is what Margaret had ordered from them when this service went in). I will skip over most of the stupidity associated with the initial install except to mention that when they didn't show up to install it the first time on the date _they_ had scheduled, and Margaret called to find out why, the person at CenturyLink who answered said "What? But we haven't even deployed the fiber backbone to your neighborhood yet! It won't be installed for a few more months!"

Clowns. An institution of clowns. 

*I* eventually discovered that, although (as I learned later) the fiber was actually theoretically capable of >4 Gb/second service, we nevertheless had only been allowed 40 MB/sec (roughly 400 Mbit/sec) for our pipe. Which, while plenty fast, is not gigabit.
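For anyone puzzling over the bytes-versus-bits arithmetic in figures like these, here's a minimal sketch. (The helper function is my own illustration, and the 10-bits-per-byte rule of thumb for framing and protocol overhead is an assumption on my part, not anything CenturyLink publishes.)

```python
def mb_per_s_to_mbit(megabytes_per_s: float, bits_per_byte: float = 8) -> float:
    """Convert megabytes/sec to megabits/sec."""
    return megabytes_per_s * bits_per_byte

# Raw conversion: 8 bits per byte.
print(mb_per_s_to_mbit(40))      # prints: 320
# Old serial-line rule of thumb: ~10 bits per byte once framing and
# protocol overhead are counted, which is how 40 MB/s gets rounded
# up to "roughly 400 Mbit/s."
print(mb_per_s_to_mbit(40, 10))  # prints: 400
```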

However, because of this magical massive data pipe, CenturyLink offered this new Prism service whereby one's 'cable' programming came over the fiber rather than through a dish. Tidy! Convenient! If only.


August 16th, 2018

03:31 pm - ATM Machines
I just ran across this bit of text in an introduction to the current standard password-validation-and-verification system used by Linux: "There are many PAM modules (yes I know that's redundant but saying “PAMs” or “PA modules” is awkward)"

Now I should explain that PAM is an acronym for "Pluggable Authentication Module."  The author is apologizing for saying (or so he thinks) "Pluggable Authentication Module modules." Just like pedantic smart-asses like to say "Don't say ATM machine, it's redundant."

I hope you're not one of those people, because I'm about to slap you down hard. What I realized reading that bit of text is something that's actually obvious to most people, at least ones who aren't pedantic: an acronym is not the phrase it replaces. Most people know this intuitively, so they don't have a comeback when a pedant gets snotty. But "ATM" is the name of an entire process. There are ATM cards, and ATM servicepeople, and ATM locations. Just because the name of this system is derived from the phrase "Automated Teller Machine" doesn't mean you necessarily get to treat it like a plug-in replacement or short form of it.

It's actually very easy to find obvious examples. If you try to expand LASER (Light Amplification by Stimulated Emission of Radiation, an acronym that has now graduated to full-on official "it's a word" status), it doesn't make sense. You have to put "beam" or "machine" or "device" after it if you expand it. "I used a laser cutter" becomes "I used a light amplification by stimulated emission of radiation-based cutting tool."

"What kind of computer do you have?" "Oh, it's an old IBM." "Oh, it's an old International Business Machines [machine]."

"I'm going to get an Uber to your place. What's your ZIP?" "...What's your Zone Improvement Plan [code number]?"

"Did you catch the new Avengers movie?" "We saw it in IMAX." "...Yea, we saw it in Image Maximum. I mean, in Image Maximum format, at an Image Maximum theater."

"Check out this awesome animated GIF!" "...this awesome animated Graphics Interchange Format [file]!"

The core purpose of language is communication. The reason to care about grammar and vocabulary and pronunciation and all that jazz is so that we can all talk to each other and (more or less) understand each other. If somebody tells me they need a "DVD" to boot their computer, it would probably be a good idea for them to be specific, and tell me if they need a "Digital Video Disc disc" or a "Digital Video Disc drive" in order to get up and running again.

Technically, I shouldn't include VHS or CRT, because they're not acronyms, they're initialisms. A "Vee Aitch Ess player," not a "vuhs player"; and never mind the amusing result of expanding the related "VCR player" to get a "video cassette recorder player." But I don't think you'd be confused if I just went ahead and called them acronyms anyway.

Nevertheless, I understand why some people think they're being clever when they declare "ATM machine" to be redundant. There are many acronyms that are intended to be used as drop-in replacements for the phrases they're derived from. Most of the common acronyms used in SMS (Short Message Service) messages, aka "texts", are examples of that. ROFL. LOL. IMHO.

So, yea, it's an easy mistake to make. But I believe whole-heartedly that it is a mistake, so if you come up to me some time and try to show off your cleverness and linguistic knowledge by declaring that "ATM machine" is redundant, and acting as if somebody's apparent lack of knowledge of that fact is an indication of inferior education, you're going to be sadly disappointed by my response.

Speaking of pedants, in looking up some lists of acronyms, I came across this statement on the Reader's Digest web site: "Fact: 100 percent of people pronounce "sriracha" wrong." Apparently the guy (yes, it's a guy) who created it calls it "SEE rah chah." I've only ever heard it as "sih RA chah." RD was urging me not to pronounce it "SREAR-rah-chah." I hadn't even noticed the bizarre little 'r' right after the 's'. But I've got a little news flash for the editors at RD. If "100%" (they are presumably rounding up from, oh, 99.9947% or the like) of people pronounce it "wrong," and they're all pronouncing it the same way, then the wrong way's the right way now. The name has changed, and Reader's Digest had better get on board or expect to look like fools or asses. It's a fine line between "smart," "smarty-pants," and "smart-ass." I ought to know. I've crossed it many times myself.


August 26th, 2017

11:28 am - Casaba Jelly Redux, and more Secrets of Jelly-Making
Well. It's been a jelly-making summer, and I have for a few years now been intending to make a second batch of my one-of-a-kind Casaba Jelly. Two days ago I actually found casaba melons at Thriftway. When I got one home, I discovered I'd never actually written down the recipe! Mon Dieu!

Jam/Jelly/Preserve recipes are infuriatingly robotic. "Use one package of this specific pectin-product, exactly this much fruit, precisely this much sugar, no more and no less of this much lemon juice. If you don't, Doom Will Befall You!" One is expected to be a Good Little Housewife and just follow the instructions. No getting creative! It's not allowed!

Fortunately, over the years, I have found various bits and pieces of information that provide clues to the desperately needed "why" to the Rules. I don't know that I'd consider my sources authoritative, but at least it's something to go on when creating new recipes.

So. Step one, juice the melon. This turned out to be a lot harder than I'd remembered. Chop it open, scoop out the seeds, cut off the rind, puree. So far so good. But for beautiful clear jelly, a pale yellow like stained glass, I need juice, not puree. I poured the glop into coffee filters to let it drain. The glop holds onto juice fiercely; the liquid stopped dripping out and the puree was still sopping wet. Wrap-and-squeeze is really tricky because the more moisture remains, the easier it is to just burst the wet paper filter, and this stuff was super-wet. In retrospect, putting it all into one big cheesecloth bag, then putting that in a cotton bag, then pressing out the juice, would probably have been faster.

Anyway, in the end, I had about 4 1/2 cups of juice (1.3 liters). The watermelon jelly recipe I was using as a starting point called for six cups. I ended up scrapping it and going back to first principles, which I've actually never done before.

So: four cups (because that seemed a reasonable goal for "juice for one melon") into the pot. Heat it up. Add one package of Sure-Jell dry pectin. Attach thermometer, and stir in four cups of sugar. Bring to a boil. Boiling point: 215°F. The sugar raises the boiling point of the liquid, but that's not high enough. It takes 220°F (or so I've read) to trigger the reaction that causes the pectin to set. That, and sufficient acid. More on that later.

Adding sugar: 4.5 cups. 5 cups. Eeek! Sudden surge of foamy hot syrup pours over the edge! Heat off! Towels! Wiping! Belatedly throw in 1/2 teaspoon of butter for anti-foam effect. Return to boil, carefully, stirring all the while. Argh! 219°F. Fine. Add another 1/2 cup of sugar. Aaand there! 220°F, maybe even 221.

Now it's time to check the pH. Jellies and other water-bath canning foods have three forms of defense against being spoiled by bacteria and fungi. First is that high concentration of sugar. Sugar is very hygroscopic: it pulls water toward it. Most bacteria landing on the jelly (or hiding out in it) will be instantly mummified.

For those micro-organisms that can resist being dehydrated, there's the heat. Pasteurization occurs at about 160°F. Milk, for example, is heated to that point, because that kills off the majority of Bad Things that can contaminate food. Jelly, being a liquid, and being heated to 220°, is completely pasteurized, and pouring it into the jars sterilizes them as well. Sealing the jars keeps new bacteria and fungi from invading our jelly, so until it's opened, it's safe.

Except for one particularly insidious evil little germ: Clostridium botulinum. The bacteria itself is not very dangerous, in that you can't normally "catch" a case of botulism. Rather, the bacteria, if it can have a little grow-and-reproduce party on something a human wants to eat, poops out a remarkably nasty neurotoxin. The neurotoxin itself, fortunately, will break down if heated to 85°C (185°F). Less fortunately, botulism spores won't. They can survive temperatures up to 250°F, so boiling doesn't kill them.

Fortunately, they only grow under fairly specific conditions. Canning does give them the low-oxygen environment they like. What they don't like is the acidity. Most fruit is fairly acidic, and botulism spores don't grow if the pH is below 4.3. This is why if you want to home-can meat or vegetables, you need a pressure cooker; a pressure cooker can get the temperature past 250° and kill botulism in foods that aren't acidic enough.
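The safety rule described above boils down to a single acidity check, sketched here as a hypothetical helper of my own (the function name and structure are purely illustrative; I'm using the pH 4.3 threshold from the text, though official canning guides commonly cite 4.6):

```python
def canning_method(ph: float) -> str:
    """Pick a home-canning method based on the food's acidity."""
    if ph < 4.3:
        # Acidic enough that botulism spores can't grow, so
        # boiling-water temperatures (212°F) are sufficient.
        return "water bath"
    # Not acidic enough: you need ~250°F to kill the spores themselves,
    # which only a pressure canner can reach.
    return "pressure canner"

print(canning_method(3.2))  # typical fruit jelly -> water bath
print(canning_method(6.0))  # green beans, meat   -> pressure canner
```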

The tricky bit is that "acidic" doesn't automatically mean "sour." A ripe peach is deliciously sweet, but it's still acidic enough to safely water-bath can. The sour of the unripe peach is still in there, it's just masked by the sweet. But are melons acidic enough to safely water-bath can?

According to my litmus strip test, yes. The litmus paper I happened to have on hand wasn't ideal for this, because it's designed to read from 4 to 7. It doesn't really give me a precise read on how far between "4" and "5" something is, because those two values are pretty close in color. But the casaba jelly in the pot looked like a solid "4" to me. I added 3 tablespoons of lemon juice (pH 2.0–2.6) just in case. Note that a pH value of 2 means lemon juice is 100 times more acidic than something with a pH value of 4.
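Since pH is a base-10 logarithmic scale, that "100 times" figure falls straight out of the arithmetic. A quick illustrative sketch (the function is my own, just to show the math):

```python
def acidity_ratio(ph_a: float, ph_b: float) -> float:
    """How many times more acidic (more hydrogen ions) a is than b.

    Each whole-number drop in pH is a tenfold increase in acidity.
    """
    return 10 ** (ph_b - ph_a)

# Lemon juice (pH ~2) versus the pH-4 jelly in the pot:
print(acidity_ratio(2.0, 4.0))  # prints: 100.0
```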

By the way, the county extension office (aka the government) says to use bottled lemon juice because it has a standardized acid level. While this is true, it turns out that fresh lemon juice (excluding Meyer lemons) is all but guaranteed to be significantly more acidic than the bottled juice.1,2

Finally, there's the fourth safety factor for canned goods: you. If you pull a jar off the shelf and the lid isn't sealed, if it looks funny or smells weird or tastes odd, congratulations! Your built-in bio-assay analysis tools have functioned as designed: what you're holding may no longer be "food." Throw it out.

So, my casaba melon jelly recipe isn't Officially Certified by, well, I'm not sure exactly who would have to do what before Sure-Jell or Ball or whomever would be able to put it in their recipe books, but personally, I have at least as much confidence in the safety and reliability of my casaba jelly as I do anything else I've home-canned.

By the way, this stuff tastes amazing. The fresh casaba is quite watery; the flavor is delicate, and the melon itself is a pale green-white. In jelly form, it's an intense fruity punch, and a lovely golden color.

Although I do provide more specific instructions here than most jelly recipes offer, this is still not a first-time jelly-maker's recipe. You should at least have the instructions from the box of pectin at hand to explain some of the details of "processing" that I've omitted.

Casaba Melon Jelly
  • 4 cups casaba melon juice (juice from one melon)

  • 5 1/2 cups sugar

  • 1 package powdered pectin (Sure-Jell or similar)

  • 3 tbsp lemon juice (fresh or bottled)

Remove seeds and rind from melon, puree, then strain. Wrapping the puree in layers of cheesecloth and letting the juice drip out for a few hours, then gently squeezing to wring out the remaining juice is one method.

Heat lids and jars in a simmering water bath. (You should NOT boil them. Boiling can damage the seals on the lids. The water should be around 150–180°F to soften the lid seals and prevent thermal shock from shattering the jars.)

Put juice in a large saucepan and bring to a boil. Add pectin, stirring until dissolved. Add butter (foam control). Add sugar, stir until dissolved. Bring to a full boil for about five minutes. A "full rolling boil" will probably get you a countertop full of foam, butter notwithstanding, so it's much less messy to bring it just to a boil and keep stirring it to keep the foam down. The goal is to get the mixture to 220°F.

Turn down heat to hold at a simmer, skim foam. (Foam can go in a jar as well for foamed jelly.)

Pour jelly into pre-heated jars. Fill to about 6 mm (1/4 inch) from top. Screw on lids. Process for 5 minutes in a water-bath canner.

Makes 4 pints (aka 8 half-pint jars, or 16 4oz jars)

Note: if you didn't get 4 cups of juice from the melon, you can add up to 1/4 cup of water to make up the difference. More than that means you should probably get another melon.


May 22nd, 2017

03:19 pm - ADD: Inside, Outside, Upside-down

I woke up this morning, as I often do, with a “to do” list all set up and ready to go in my head. At the top of the list was a pretty simple little task that’s been on my list for over a week. Well, it’s actually been there three weeks, but I forgot about it for the first two.

It’s a letter I need to send. I don’t even really need to write it; I’ve sent this letter before, so it’s on my computer somewhere. It’s, oh, a fifteen-minute task at worst, and probably more like eight. And yet, it’s somehow not gotten done for a week.

This morning, I sat up and said “Okay, you’ve got nothing else on your list more important or urgent than that. So! Let’s do it!”

That was a couple of hours ago.

I have what is generally called Attention Deficit Disorder. I also have medication for same, which I took this morning at 6am, because it takes a couple hours to ‘kick in.’ But today it didn’t kick hard enough, so instead of writing that {redacted} letter, I found myself thinking about thinking. Again.


First of all, the well-meaning (I hope) but still remarkably stupid psychiatrists who worked on the most recent volume of the DSM (DSM-5, the Diagnostic and Statistical Manual of Mental Disorders, fifth edition) decided to rename ADD back to ADHD. “H” stands for Hyperactivity, and the broader mental health community started leaving the H out once it was understood that the hyperactivity is a side-effect, and one that is not present in everybody with this mental condition. As a result of the idiotic rename, somebody (for example, me) can now be Officially Diagnosed with “Attention Deficit Hyperactivity Disorder (without Hyperactivity).”

The problem, gentlemen, was not with the lack of an H, but rather the inappropriate and misleading presence of that first D. Attention is not like strength, or blood pressure, where one person can have more than another. If you are awake, you’re paying attention to something. Where behavior becomes a problem to be treated is in the question of what the attention is being paid to.

In fact, when I first started seeing a psychiatrist in order to get evaluated and diagnosed, he was leaning at first toward a diagnosis of OCD (obsessive-compulsive disorder), rather than ADD, because I tend to exhibit a lot more hyperfocus than hypofocus. Hypofocus is when somebody can’t hold their attention steady; it’s the classic “Hey, you’ll never guess what happened to me today! I was going into the store . . . oh! look! a butterfly!” Hyperfocus is sitting down on a curb with a book waiting for a parade to go by, and looking up a few chapters later to find out the parade’s already come and gone, and so has everybody else, so you’re just sitting there by yourself.

Really, the core difference between OCD hyperfocus (the ‘obsessive’ part), and ADD hyperfocus, is that with OCD, you can be hyperfocused on something you don’t want to keep thinking about. You can be completely sick and tired of worrying about germs, you can read and understand all the explanations about why germs aren’t an imminent threat to you, but still you can’t help worrying about them.

ADD hyperfocus is because you do like thinking about germs, or why clouds make shapes, or programming, or reading, and your brain is so into thinking about that thing that it shuts out everything else, like how much time has passed. I am both a performer and composer of music, but I don’t attend many concerts because there’s a pretty good chance I’m going to miss part of it. I’ll start out listening, but then I’ll think about how trombones work, or start designing a spit-gutter for trumpets (so they can stop dumping the liquid from their spit-valves onto the floor), and the next thing I know, everybody around me starts applauding. The concert’s over, and I missed it.

A far better designation for the mental condition I share with many many other people is Attention Management Disorder. I will make a choice about wanting to pay attention to one thing, but then some other (subconscious, unconscious, whatever) part will reassign my attention elsewhere. It will get reassigned to something that I want to work on, and in fact is often something that I would much rather be working on, but it’s not the thing that I’d chosen to work on.

Please, do not tell me “Oh, that happens to me, too. Maybe I have ADD.” The reason I get to have a prescription is because the way I think is so different from most people that it makes it harder for me to be a human being. I have behaviors that have a significant negative effect on my ability to keep myself fed and clothed and to get along with other people.


One of the more exasperating things about being neuroatypical is trying to explain one’s atypicality. If you’re wheelchair-bound, a fully-mobile friend might not remember at first that you can’t just sail into any building unless there’s a ramp or some such thing, but if you point this out, it’s really not hard for them to understand the problem. They can see the chair, and imagining the problems that can result isn’t a huge leap.

Mental differences are so much harder to explain. My own mother still cannot comprehend why it’s so hard for me to keep my kitchen counter clean. “Why don’t you just do it?” Ah, the number of times as a kid I was told I just needed to “buckle down” and get that homework done, or as an adult when friends would say “sometimes you just have to do stuff you don’t want to do,” as if what I needed was just a nice pep talk so I would get over being lazy.

Willpower, right? I just don’t have enough willpower. That’s all I need, just some more willpower. Yea, I actually believed that for years, until I needed some serious dental work.

Now, my mediocre teeth are definitely genetic, although not being able to make myself care for them as I should hasn’t helped. My orthodontist had pulled not just my wisdom teeth, but four more molars as well, so instead of the standard 32 teeth, I only have 24. Nevertheless, back in 2010, I had 32 fillings. Most of those fillings had been done without anesthetic. I’d often have nitrous oxide, but nitrous doesn’t affect pain, it just relaxes you. In fact, if a cavity’s not too big, there’s a good chance that drilling it out for a filling will be painless, and if not, it will be a fairly brief pain. Admittedly, it will be a pretty darn intense sharp hot biting pain, but still brief.

However, after 32 fillings, there wasn’t all that much tooth left to fill. The next step meant getting caps, or crowns. This involves grinding down the outside of the tooth to make sort of a cylindrical peg, then cementing a hollow ceramic replacement tooth over the top of it. Twenty of my twenty-four teeth needed to be crowned. The only possible way I could afford to have that much work done was by going to Mexico. The upside was that, even including airfare, hotel, meals, and all that, the total cost was still only a third of what it would have cost in the US. The downside was I had to have all twenty teeth done at the same time. That meant six hours in the chair the first day, and eleven hours on the second.

The reason a dental drill can hurt is because teeth have nerves for detecting heat, and that spinning grinding tip gets really really really hot. Grinding down an entire tooth is not painless. I can state this with absolute certainty, because during that eleven-hour session, one of the shots of novocaine didn’t work.

When I first realized that this tooth wasn’t numb like the others had been, I thought “Hey, I’ve had fillings done. I’ll just see how this goes.” Now, the primary purpose of novocaine is not to make you, the patient, more comfortable. The really important reason is because your reflexive reaction to pain is to pull away, and if a dentist has a drill spinning at 30,000 rpm in your mouth, and you suddenly jerked your head to the side, really awful things could happen as a result.

Pretty soon there were tears sliding down my face from the pain of having it ground down. I could have waved my hand to stop them, told them the deal, had them reapply the novocaine, waited for it to take effect, and everything would have been fine. But that would have meant adding at least 30 minutes to the time I’d spend in the chair that day, waiting for it to kick in, and I really didn’t want to be there any longer than necessary. So I sat perfectly still until they’d finished that tooth. Then I told them that we’d need to do another injection before going any further.

Quite the little anecdote, no? Now, look me in the eyes and convince me that my biggest problem is a lack of willpower.

Maybe not, eh? But then why? How can I be incapable of doing things that you find easy, simple, even trivial? You know, I’d really like to be able to just spell it out for you, believe me, I would. It’s not an easy thing to do, though. First of all, I have to actually know what’s going on inside my head. Do you understand what’s going on inside your head? Do you know why you get angry at some things but not others, why you like your favorite color, why you like to go dancing more than your friends do, why you straighten crooked pictures on the wall but your friends don’t?

Fortunately, I do. My evaluating psychiatrist commented that he’d rarely worked with somebody who could describe their inner mental landscape as clearly and coherently as I could. However, understanding what I’m trying to describe is just the beginning. How do I explain what’s going on inside my head in a way you’ll understand unless I also understand what’s going on inside your head?

Actually, the problem’s even worse than that. One of the many ways in which I have come to understand I think differently from most of my fellow humans is that my imagination works better than most. I could not even begin to count the times that something that seemed totally utterly obvious to me was unexpected, unanticipated, or unrealized by people around me, because they hadn’t bothered to construct and test a model of the situation in their heads, or were incapable of it; they didn’t or couldn’t imagine what else could be.

So if I want a typical somebody to understand what it’s like inside my head, I have to first figure out what the differences are between my thought processes and theirs, then I have to come up with a way of enabling them to imagine thinking like me, when it’s almost certainly the case that they’re going to have a lot more difficulty coming over to my side of the chasm than I did getting to theirs.

It’s taken me decades, but I think I might have finally come across a metaphor that might do the trick. This is based both on careful observation of my own behavior and feelings under various situations, on other people’s behavior, and on the descriptions of both neurotypicals and those with ADD in attempting to describe what it’s like inside their heads.

First of all, imagine making a list of all the things you might do today. You might have “pay bills,” “mow lawn,” “do laundry,” and “fix dinner” on the list, along with “watch Game of Thrones,” “take a bubblebath,” “watch random videos on YouTube,” and “pet the cat.” Then put the list in order with whatever you’d most like to do at the top, and the task you’d enjoy least at the bottom. This list should have at least three times as many things as you could actually do in one day, if not way more than that.

Now, take a moment to remember the last time you were on a plane with a crying baby, or whenever you were trying to read, relax, or get something done but there was somebody nearby with a baby or small child who was crying or screaming or making a ruckus. Remember that?

Okay. Now, imagine getting up in the morning and doing, well, whatever you normally do. Except that you’re wearing earbuds, and somebody else is following you around with the iPod that they’re connected to. The iPod has a recording of a crying baby on it, and the person with the iPod sets the volume based on how far down your list the thing you’re currently trying to do falls.

Reading a book in the back yard? No baby. Washing the dishes? Loud crying baby. Watching TV? Very quiet baby. Paying the bills? Baby screaming in your ears.

Now add in the understanding that you don’t get to wait until this baby falls asleep, or grows up, or gets taken away. Every time you try to pay your bills, there will be a screaming baby. Every time you look at the laundry hamper full of clothes, you can hear that baby make that gasping noise that means they’re about to start crying like a police siren again.

Now, I would confidently bet a fairly large sum of money that if you actually tried this experiment, it would be much more frustrating and aggravating than you would imagine, because it’s calibrated to my imagination, and I’m pretty sure mine is better than yours. I would also bet money that if you have taken the time to actually sit and imagine this experiment (not just thought about it while reading this essay, but sat back, closed your eyes, and really took the time to visualize the whole thing), that you’re thinking that I’ve exaggerated a bit to try to get my point across. I’m tired of people thinking that I’m messy because I’m lazy, and I’m getting my revenge by beating them over the head with my example to show “see! it’s harder to be me than you think!”

You would be wrong.

I actually think that just having a baby scream in your ears might not go far enough. Because this experiment is for people who do not have Attention Management Disorder, you might, unlike myself, still be able to make yourself pay those bills despite the horrible noise. To bring your actual behavior into alignment with mine, there might have to also be a pain component; paying bills might have to also cause your arm to ache, or your leg to cramp a bit, before you’d have as much difficulty paying your bills or doing your laundry or cleaning your bathroom as I do.

Before you got to the point of waking up one morning with “send an email to so-and-so” at the top of your To Do list, an email that’s already mostly written, containing good news, to somebody you like, with your computer sitting within reach, and finding that you simply cannot make yourself do it. Even though you’ve had decades of experience tricking yourself into doing things that are hard. Even though you’ve taken medication that makes it easier to do things that are hard.

If that happened to you because you were wearing the Screaming Baby earbuds with the muscle-ache-causing armband, then you might indeed have experienced one of the really annoying ways in which my brain doesn’t do what I want it to.

I told you earlier that I’ve been trying for years to find a way to describe this experience. I think this is, in many ways, a really good analogue. When I try to get the laundry done, or pick up my room, it’s like my brain is full of static; like it’s distracted. Thinking gets difficult. Trivial inconveniences become irritating or enraging. It’s very much like having a baby crying in the other room where I get sick and tired of hearing that noise but can’t really go tell the kid to shut up or anything like that, except I’m not hearing any noise like that. I’m just reacting as if I were.

Now, if your response to a crying baby is more like “oh, the poor thing, I wish I could hold it and make it feel better,” then this thought experiment isn’t going to work for you. So it goes.

Upside Down

I should explain that my head is not just an infinite world of crying babies. There are lots of things about how my mind works that I really like. The super-charged imaginator and the high-resolution introspector, for example. I also have no doubt that some of the best qualities of my mind are irrevocably connected to some of the worst. It all comes as a package deal. My imagination and general inventiveness is paid for (at least in part) by a very sub-par memory, for example. I’m not sure what the up side is of the thing that makes it so effing hard to do stuff that isn’t fun, but I’m sure there’s something I gain in return.

If you live or work with somebody with ADD, maybe this will help you understand them a little better. Maybe not: it’s a good bet that what’s currently filed under “ADD” is actually two or more different syndromes that just happen to cause similar behaviors.

As for me, the more I can understand and accept how my brain works, what it does well, what’s hard, and what’s just too hard, the better I get at working around the weak spots and leaning on my strengths. Just like everybody else.


(Leave a comment)

January 25th, 2017

03:07 pm - Positive Diversity
    I’ve never been very good at the whole ‘social media’ thing. A close friend’s daughter (who apparently does (or at least did?) follow my LiveJournal blog) once said something to the effect that my posts were always really big. What can I say? I’m an essayist, not a sound-bite-ist. In these Trumpian, MyLiveInstaFaceTweet times, LiveJournal is so last week, and having your own personal web site? Please! That’s, like, pre-Cambrian or something! Not the same as having your own professional web site, I should point out; of course there’s a georgerrmartin.com web site. (There’s also a davidhowell.com web site, by the way. They make “museum-quality gifts.”)

    But I digress, and I hadn’t even gressed yet. The point I had barely begun to make is that if I wrote essays based on my readership, I wouldn’t write essays. Today’s topic is ‘diversity,’ and more than ever, it feels both pointless and unnecessary. Still, I’ll say it because I want it to be something that I said.

    This diatribe is prompted by this week’s issue of Time magazine (January 30, 2017), and more specifically, by four different articles found therein. Page 45 has “All The Presidents’ Men,” where Gretchen Carlson notes that Mr. Trump says he “loves women” and is “one of the biggest supporters of women out there.” And that his cabinet appointees are...mostly older white men. She asks, “Are these picks the only qualified candidates out there?” She does note the four women he did pick: Elaine Chao, Betsy DeVos, Nikki Haley, and Linda McMahon. Then she goes back to check Obama’s cabinet, and finds just Hillary handling the gender diversity of his Cabinet. Then she looks at “dubya’s” cabinet, and finds “a handful of women.” Ms. Carlson concludes that this “kind of women problem isn’t just a Republican or Democratic thing,” and that there is “much work to be done.”

    For the record, I entirely agree that there’s much work to be done. I’m not sure if I agree with what work needs to be done. Ms. Carlson doesn’t say, but I think a lot of people would jump to a conclusion about that, a conclusion that appears on page 54, in an article/interview with M. Night Shyamalan. The article notes that “Shyamalan...has at times disappointed fans with a lack of diversity in his films. He says he sees progress as inevitable but adds that he resists diversity for diversity’s sake.”

    I don’t doubt for a moment that there are plenty of directors (and business owners and managers and myriad other people in charge of hiring, choosing, or selecting) who use “I’m hiring based on qualifications, not race,” as a convenient smoke screen to deny a real bias. On the other hand, I have had people tell me to my face that they were angry because [organization or social institution] failed to have enough [underrepresented demographic group] in the top slots, without any seeming recognition that, in fact, there weren’t enough “qualified candidates” to allow the kind of diversity that the angry person wanted to see.

    “Not enough candidates” is not an excuse for a lack of diversity, but it is a reason, and is indicative of a deeper problem to solve.

    Page 60 of Time has “9 Questions for Lisa Lucas,” who is the director of the National Book Foundation. The first question notes that Ms. Lucas is both the first female and the first black person to run the National Book Foundation. She says she finds it “sad” that these are firsts in 2017, and that although it says something about her own career, “you don’t ever want to be the only one in the room.” The second question notes that three out of four of this year’s winners were black. She replies, “I think we want our national body of excellent literature to look like our excellent body of American citizens.”

    It’s clear from what else she says, by the way, that she doesn’t mean “of course there should be lots of black winners because blacks are excellent.” She obviously wants a spectrum of ethnic backgrounds, genders, and other such characteristics to be showing up on the short list. It’s only a one-page interview, but I gotta say, to me she sounds sharp as a tack and like a totally awesome person to be running the NBF.

    Finally, there’s a movie review for “20th Century Women,” which leads off with the observation that “Stories about women as told by men get a bad rap these days...we hardly need our lives mansplained to us...” but goes on to suggest that there’s no reason to dismiss men writing about women any more than to ridicule women writing about men, since there’s no exclusivity clause on “the sympathetic imagination.”


    Bear with me. I’ve got one more anecdote to share, and then I’ll start pulling all these threads together. Back in 2012 I was part of a small committee to select a Guest of Honor for a convention I work with/for. During the time we were making that selection, I attended the World Science Fiction convention, and had a conversation with a woman who had very definite opinions on SF conventions’ overly white male bias for Guests of Honor. She felt that GoH selection committees had an obligation to make up for years of just inviting white men as their Guests.

    I didn’t tell her I was on such a committee at that time, because, frankly, I think a bias against white men is still a bias, and does not represent much of an improvement. My fellow selection committee members and I had already discussed that we were dedicated to making sure we didn’t overlook candidates just because they weren’t mainstream, but after that WorldCon, I came back with even more enthusiasm for finding a GoH who wasn’t a white guy. Not because I agreed that we had a responsibility to make up for other conventions’ choices, but because of the opportunity it represented. One of our selection criteria was to have a GoH who was going to make our convention more interesting. This might seem obvious, but if you don’t explicitly include it, you can easily fail to achieve it. “Interesting” is very, very close to being “not like us”: somebody with experiences and perspectives that aren’t in lock-step with those of everybody else at our convention, and since most of the attendees are white . . .

    And yet, an “enlightened” policy like that is still just institutional prejudice. It’s the same kind of skin-color-based shortcut that has been so stupidly and unfairly applied in the past. But I still think it was a good idea, because it’s right here that so many people make a huge, huge mistake, and see the issue as a dichotomy, when it absolutely is not.

    Actually, there are two huge thought errors about to pounce at this point. First, that this is a binary issue. It’s right or wrong, black or white, left or right, this or that. Bullshit. This is complicated, fuzzy, and gray. Not seeing the desired results is not proof that there's a problem. Second is the error of applying statistical averages to individuals. Unwinding the second one will take a few words.

    For years, women weren’t allowed to apply for work as firefighters. The justification offered was that firefighting required a lot of physical strength, and women didn’t have the upper body strength to do the work. Well, on average, that’s true, but the average man isn’t strong enough to pass the test, either. A New York Times article[1] noted that about 10% of the women (and 57% of the men) who took the city’s firefighter physical exam scored a passing grade. So the fact that women as a class aren’t as good at arm-wrestling as men does not mean that no individual woman can beat a specific man at arm-wrestling.
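To make the arithmetic concrete, here's a quick sketch. The 10% and 57% pass rates are the figures cited above; the applicant counts are invented purely for illustration:

```python
# Pass rates from the cited NYT figures; applicant counts are hypothetical.
women_rate, men_rate = 0.10, 0.57

women_taking, men_taking = 200, 2000  # invented numbers for illustration

women_pass = round(women_taking * women_rate)  # 20 women pass
men_pass = round(men_taking * men_rate)        # 1140 men pass

# The class average tells you nothing about a given individual: every one
# of those 20 women out-scored the 860 men who failed the very same exam.
print(women_pass, men_pass)
```

The gap between the two averages is real, and yet the test still admits members of both classes; judging any single applicant by her class average would reject people who demonstrably pass.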

    Twelve percent of the U.S. population is black[2]. Twenty-four percent of the U.S. population below the poverty line is black[3]. That does not mean that any particular black person you meet is poor, even if they’re wearing a ratty old hoodie. What is typical or likely of a class is not true of every single individual in that class. Mark my words: you can see this mistake made over and over again, every day.
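A quick way to see what those two percentages do and don't imply is to run them through Bayes' rule. The 12% and 24% figures are from the text above; the overall poverty rate of 13% is my own rough assumption, used only to make the arithmetic concrete:

```python
# Figures from the text: 12% of the population is black, and 24% of the
# population below the poverty line is black.
p_black = 0.12
p_black_given_poor = 0.24

# Assumed overall poverty rate (illustrative; not from the text).
p_poor = 0.13

# Bayes' rule: P(poor | black) = P(black | poor) * P(poor) / P(black)
p_poor_given_black = p_black_given_poor * p_poor / p_black

# About 0.26: even at double the population-wide rate, roughly three out
# of four black Americans are NOT below the poverty line.
print(round(p_poor_given_black, 2))
```

The point survives any plausible poverty rate: doubling a base rate that is well under 50% still leaves the large majority of the class outside the category.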

    So, back to choosing a guest of honor. Just because somebody’s Asian, or female, does not automatically mean they would be an interesting guest of honor. Unfortunately, anything we use to guess “interesting-ness” is, at best, a guess, with the arguable exception of only choosing from the pool of candidates who have already been satisfactory GoHs at other conventions. In a post-racial world, a candidate’s ethnicity would be useless for selecting for ‘interesting’. In this world, it is nearly impossible to have life experiences similar to those of somebody with a notably different amount of melanin in their skin than yourself, or of a different gender, so the odds that a non-white-male guest of honor would have significantly divergent experiences are pretty darn high.

    A better approach, I would argue, would be to evaluate the candidates based on where they grew up, where they attended high school and/or college (if they even did), what organizations they’ve belonged to, and what jobs (besides the science-fiction-related ones that presumably brought them to our attention in the first place) they’ve held. Choosing based on race or gender is only relevant to the degree that society has forced different experiences on that person because of their race or gender.

    But society does do that, thus making it relevant. Sad but true.

    There’s also the brownie point argument. A convention might have an incredibly diverse lineup of speakers or presenters, but if that diversity does not include diversity of race or gender, they’re going to take a lot of flak. Their lineup is going to be, fairly or not, judged on its appearance. In fact, that certainly seems to be what Ms. Carlson did when she scored the presidential cabinets based on how many women were in them. In a nutshell, she’s “profiling” (a la the TSA selecting travelers based on race). What really matters for helping society reduce the inequalities of gender is having people in those seats who care about reducing those inequalities, and despite what some people seem to think (class != individual), there are white men who firmly believe in changing the status quo.

    By the way, please don’t fall into that general-to-specific thought trap and imagine I’m defending Mr. Trump’s current cabinet. The Trump administration might (or might not, I haven’t found data for this) have more women than Obama’s, but I would be very surprised if the Trump picks, as a group, were even remotely as concerned about gender inequality as the Obama group. To put it mildly.

    But I do think it’s important, really important, to not excoriate any organization if all you’ve done is eyeballed the people in the top slots. It is entirely reasonable to expect that a woman is more likely to care about and take actions to improve gender equality; that somebody who’s obviously non-Caucasian is more likely to care about ethnic fairness, and so forth. But that's a class characteristic. I cannot help but think that far too many of the people who tweeted #OscarsSoWhite did so long before anybody actually looked into it more than skin-deep. Assuming an all-white roster reflects racial bias without fact-checking is just as biased and unfair as not hiring somebody because they’re black.[4] Also, given the widespread availability of ‘fake news’ sites and ‘true’ but criminally misleading articles online, retweeting a message that references a source without actually going and reading the source yourself does not count as fact-checking.

    You see, part of the problem is that Cabinet seats, Oscar nominations, and guest of honor slots are high-visibility slots at the end of the chain, not the beginning. Since 12% of the citizens are black, should 12% of the cabinet be black? No way, because being appointed to a Cabinet position is no more a lottery than being a firefighter is, and there aren’t enough blacks (or women) who are qualified to fill those slots proportional to the general population.

    In fact, despite having doubled down on my commitment to find a non-white-guy guest for our convention, when my committee got right down to brass tacks, the person we ended up selecting as our top candidate was . . . a white guy. I believe that none of our criteria were needlessly biased by gender or race, but our potential pool of candidates was so overwhelmingly full of white guys that to have chosen somebody who wasn’t would have meant accepting a pretty significant compromise.

    Telling selection committees or the Academy of Motion Picture Arts and Sciences to pick more minorities is, to a very great degree, treating a symptom instead of a cause. Now, 2017’s Oscar nominee lineup has more black nominees than any previous year. One might suspect (I do, at least) that this is partly due to some people getting shamed into becoming aware of their biases, and working to counter them. But it’s also partly due to having a lot more, and better, movies to choose from involving black actors and directors (Loving, Hidden Figures, Fences, Moonlight, Nocturnal Animals, and others). In turn, that reflects an increase in the number of minorities making it to Hollywood.

    I think one would have to be pretty self-delusional to claim that minorities aren’t just as capable of being CEOs and Oscar-winning actors as whites, unless one would also accept that whites are genetically inferior physically, and thus can’t be good at sports. White Men Can’t Jump? Oh, please. And yet, if only 12% of the population is black, why is the NBA 75% black? The NFL, 68%?[5] Major League Baseball, 8.2%?[6] Er, come again? I do not for a moment believe that baseball is somehow so fundamentally different from football or basketball that there’s some kind of biological explanation for that. Nor am I going to declare the NBA is unfairly avoiding hiring white guys because of their skin color based solely on that number.

    (If you're curious about those numbers, there are many good articles and studies about the sports population bias. For example, it’s a lot easier to play basketball on a street or small playground than baseball, and urban areas have a lot more streets than baseball parks. Urban areas also have more minorities. That’s one example factor (of unknown importance) among dozens.)

    I am not about to tell Mr. Shyamalan that he should be ashamed for not casting more minorities in his movies. If he wants to put extra time and effort into encouraging auditions from inner-city actors, or from other countries, then he might indeed find some previously-unknown brilliant actor to fill a role. But that’s up to him to decide if he thinks the payoff would be worth the cost, and I’m not going to disparage him if he doesn’t.

    It’s also the case that simple random chance can cause somebody or some organization that is truly being fair to appear to have a bias for or against a minority group. Random chance can also hide the fact that an organization has a bias. I have no intention of apologizing for or excusing anybody who’s stupid enough to hurt themselves and the rest of us by failing to put the most qualified, capable people possible into a position. But jumping the gun and attacking somebody or some group prematurely, judging them based on the color of the skin of the people they hire, select, or honor, runs the risk of alienating a group that might actually have been an ally.
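That claim about random chance is easy to check with a short simulation. The 12% minority share, the ten-year window, and the trial count are all invented for illustration:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# Hypothetical setup: 12% of the candidate pool is from a minority group,
# and a perfectly fair committee picks one guest of honor a year for 10 years.
MINORITY_SHARE = 0.12
YEARS = 10
TRIALS = 100_000

# Count the simulated decades in which a fair committee never once happened
# to pick a minority candidate.
all_majority_runs = sum(
    all(random.random() >= MINORITY_SHARE for _ in range(YEARS))
    for _ in range(TRIALS)
)

# Closed form: 0.88 ** 10 is about 0.28, so a completely unbiased committee
# still produces an all-majority decade more than a quarter of the time.
print(all_majority_runs / TRIALS)
```

With numbers like these, a decade of white-guy Guests of Honor is weak evidence of bias on its own; it's the kind of run an honest coin produces all the time.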

    Let me say that again, because it's basically the entire point of my essay. Attacking, shaming, or insulting an organization or individual prematurely, judging them based on appearance (simply counting the minority board members, or assuming a white guy Doesn't Get It because he's a White Guy) is not only exactly the sort of pre-judging we're trying to fight, but could easily result in alienating a person or group who might actually be a real or potential ally.

    That is something I can attest to personally. I’m more inclined, these days, to put effort into countering bias against gender-identity, sexual orientation, and weight than into ethnic or gender bias, because of hostility toward men and whites that I’ve experienced both as a member of the class and as an individual. Oh, I am entirely aware that being able to choose whether or not to deal with gender inequality is a Privilege I have merely because I’m male, and there are women who would not hesitate a moment in trying to shame me for that. They will fail. I have only one lifetime allotted to me, and only a limited amount of that life I can spend on anything. I am indeed very fortunate to fall on the unfairly benefitted side of a long list of ways that humans divide humanity, but as long as I’m working to balance those lines, I don’t feel any obligation to favor one dividing line over another. So if it comes down to a matter of choosing between causes of equal relevance, I’m going to work on the cause where people aren’t trying to make me feel shitty about being born on the uphill side. If I’m not working hard enough to suit you on your favorite cause, you won’t have much luck convincing me to change that by trying to make me feel shitty. Quite the opposite.

    I really wish more people would try an alternative approach to this problem. Time’s movie reviewer (Stephanie Zacharek) didn’t call out a male director for portraying women badly; she called out Mike Mills because she thought he “deserved credit” for “wondering what [the lead character of the movie] was feeling in the first place.” Shaming the President for picking a cabinet of white men isn’t necessarily wrong, but unless somebody’s doing something about the problem of not having enough qualified candidates in the first place[7], then it’s probably not very useful, either.

    The next time you find yourself about to retweet #BlackLivesMatter or anything else related to inequality or bias, ask yourself a couple of questions first. Are you about to share something negative or something positive? If it’s negative, is that mostly what you pass along? And finally, are you spending your time and energy talking about a symptom, or something closer to the root cause?

    I cannot help but believe that somebody has been thinking along those lines, since there are now notably more women going to 4-year colleges than men. For that matter, my cousin, currently the Chief Innovation and Research Officer for the Spokane Public Schools, was recognized by KCTS9 for his work as a high school principal in dramatically increasing the number of minority students who were applying for college (and taking AP courses and otherwise being ready to go to college). It seems to me that there’s a serious lack of effort in identifying and rewarding positive effort, compared to the amount of time spent on the negative.

    More than once, I've had women tell me that something they find pretty insufferable is guys who seem to think they deserve (one might even say feel "entitled to") some kind of pat on the back for merely not being an asshole. "I didn't pinch anybody's ass today! How come nobody appreciates that?" I absolutely agree that anybody who's pouting about not getting thanked for clearing a bar that low needs to recalibrate their expectations. BUT! If patting them on the back for not being a douchebag (or a dick; you can choose from a variety of gender-biased insults here) will encourage them to maintain this behavior . . . ? Or, thanking somebody who maybe doesn't think they're entitled to recognition?

    Yes, I do understand what I'm saying here. If you have a home with a nice carpet, and there's a bunch of shoes by the door, and a guest is clueless enough that they wear their shoes into the home anyway, you shouldn't have to thank them for not having walked through a mud puddle before entering your house. I am not saying you have any obligation or responsibility to thank people for merely not being unfair or oppressive or biased or whatever. What I'm saying is that, despite that, if you do go ahead and thank that shoe-wearing guest for having wiped their feet before coming into your house (whether they actually did or not), you have probably improved the chances that, in the future, they will take their shoes off. You'll certainly have drawn their attention to your concern, and you'll probably have made them feel appreciated, which could also make them more interested in you, and what you care about, in return.

    Yes, there are plenty of people who are actively attempting to oppress other groups of people. "Build The Wall" and such crap. But there are (I think) far more people who are simply unaware of their own biases, or blissfully ignorant of institutional biases. People who could be working to counteract those biases if they were motivated to do so. If handing out compliments for behavior that doesn't actually deserve compliments will recruit more people into working against bias, then by gum, I'm going to hand them out! What I care about is actually making things change, more than satisfying my own smug self-righteousness. Better yet is acknowledging the efforts of somebody who is working to improve diversity and fairness, of course.

    If you really care about who’s in the top slots, look for ways to make sure that everybody’s getting a chance to get in at the ground floor, and not getting blocked at all the steps in between. Try to spend at least as much time and effort thanking, helping, and supporting people who are doing it right as you do insulting and finger-pointing at people who are doing it wrong. I really believe it’s more effective, and it’s definitely more fun.




[4] Please, do not waste time and comment space pointing out evidence that supports Academy bias. I’m already aware of it, but whether or not they were, in fact, unfairly ignoring worthy minority candidates is entirely beside the point. If you declare the Smart car an overpriced, underpowered, uncomfortable piece of junk after doing nothing more than looking at a picture of it, the fact that all the major car magazines would agree with you does not change the fact that you came to that conclusion through prejudice and bias, not evidence.



[7] No, I don’t know for a fact that there is a shortage of qualified female candidates. Given all the press I’ve seen in the past few years about women in business and politics, though, I don’t feel like it’s a particularly risky assumption. Please do keep in mind that in this case, “qualified” doesn’t mean “could probably do the job,” it means (or should mean) “is the best possible person to do the job.”

(1 comment | Leave a comment)

September 6th, 2016

01:55 pm - How to be a Tech Support Genius, Part N
All kinds of people are amazed at how much I know about computers.

Ha. I'm a freakin' tech support genius because my own problems are so preposterously complicated and crazy that everybody else's problems seem trivial.

My laser cutter's been down for almost three weeks, because the rack box that was its controller simply failed to boot one day. The software for the laser cutter (a) only runs under Windows, and (b) has a USB dongle that has to be present or it won't actually tell the cutter to do anything. So I had a special separate machine just running Windows XP (the laser cutter software's Chinese, and it seemed prudent to give it an old Windows to run under).

Alas, pfft! The machine just doesn't boot one morning. Now, it hadn't been the most reliable system anyway; if I left it on for more than a day or so, it would freeze and have to be rebooted. And having yet another computer on for hours at a time, especially a rack mount server, meant I was using a hundred watts of power or so while it was on. So I had been seriously considering moving Windows over to a virtual server on my *other* rack mount Intel box. It's running Ubuntu, and has a lot more processing power.

Except that last time I upgraded Ubuntu, the install tripped over itself, botched the process, and left me with a machine with no GUI. I really didn't see how I could possibly run a virtual Windows machine when there was no window for Windows to be in. So step one was to try upgrading Ubuntu from the 2014 version to the 2016 version, and try to get my graphic interface back.

Well, just the upgrade took almost an entire week. Oh, my, SO many ways that Ubuntu doesn't know how to install itself, but thinks it does. And the pre-OS bootloader (it's called Grub) turns out to be completely incompatible with the hardware RAID controller in my box. I'm not even sure how that's possible.

So I eventually get MadHatter upgraded from Ubuntu 14.04 LTS to Ubuntu 16.04 LTS. The MythTV part didn't successfully transition because MySQL didn't. Yet another reason (as if I needed more reasons) to consider MySQL a piece of junk. PostgreSQL, also running on MadHatter, transitioned without incident. Once I got MySQL repaired and operating, everything seemed mostly back to normal. MadHatter still didn't have a GUI, but at least it had a login prompt. Under 14.04, the broken GUI meant that once it finished booting, the screen was just blank. It wasn't possible to do anything from the keyboard. Being a server, that wasn't actually so bad; I did most of my work with it by opening a command line from some other computer. Sometimes I used the WebMin http-based administrative interface, and every now and then I would open a graphic user interface via VNC.

This last one takes some explaining. Macs have "Screen Sharing," which is Apple's much friendlier renaming of "Virtual Network Computing," which Wikipedia defines as "a graphical desktop sharing system that uses the Remote Frame Buffer protocol (RFB) to remotely control another computer." Now, on a Mac, what this does (at least, whenever I've used it, and I use it a lot) is pretty much what you'd expect: you get a window on Mac A that shows you the desktop of Mac B, and when you move your pointer through the window on A, the pointer on B follows it.

It's possible that the Macs can also do it the way Ubuntu does, although I haven't ever tried it. On Ubuntu, when I've opened up a window onto a GUI, instead of giving me a window onto the same thing that the 'real' monitor is showing, it creates a brand-new desktop in its imagination, and shows me that. On a Mac, Apple calls the graphical UI "Aqua," which encompasses things like having the window controls be red-yellow-green dots in the upper left, the highlight colors, how the drag bars look, how many times a menu item blinks when you select it (three), and all that sort of thing. On Ubuntu, there are a plethora of different desktop GUIs to choose from. I'm not even really sure how many. There's GNOME, and Unity, and maybe Lubuntu and Xubuntu, or maybe I should say Xfce and LightDM; I just don't have time to wade through the documentation muck to figure out what they're actually doing with all that stuff.

But anyway, waaay back with Ubuntu 12.04, my 'real' desktop had one UI, but when I connected with VNC, I had a different UI. Like that wasn't a pain in the butt. With 14.04, the local one, as mentioned, was broken. With 16.04, VNC was broken again (another system that didn't successfully make the upgrade from 14.04 to 16.04), but as one of the lesser-used interfaces, I didn't worry about it.

Instead, I set out to get the GUI working again. Now, according to most of the doc, it should have just worked. With earlier Ubuntus, there was a "server" edition and a "desktop" edition, and basically, the server version left out the desktop GUI, the open source word processor, the video file player, Minesweeper, and lots and lots of other stuff you'd really only use if you were sitting in front of your computer. With 16.04, they seem to have moved toward a model where you get a catalog of apps you might want to use, and when you go to use them, that's when they get downloaded and installed. So with 16.04, there's no longer a separate server edition. Even though Linuxes justifiably pride themselves on working really well on hardware that OSX or Windows no longer even pretends to be able to use, the extra drive space that the GUI occupies is pretty trivial, and if you aren't using it, it doesn't use enough processor time or system memory to worry about.

Which raised the question of why I didn't have it. Online research kept turning up exactly the same advice. "If it's not there, just install it." Specifically, "sudo apt-get install ubuntu-desktop" or its equivalent. Which I did. After doing so, one reboots, and one then has the Ubuntu Unity GUI.

Not. When I did that, nothing changed. I tried re-installing. I tried removing, rebooting, and installing. I tried wiggling my ears and holding my nose. Nothing worked.

Maybe it was time for a different approach. The entire reason I even have my own rack of servers in my home goes back to one mission-critical purpose: backups. The whole thing grew out of the fact that I have been running enterprise-grade backup software for my whole home network since 1994. Admittedly, even my backup software is now a certified antique, but that's partly because I can't afford to upgrade it. I was a beta-tester for Dantz/EMC's "Retrospect" backup software version 6, in part because I had a tape library backup at the time. Retrospect came in four versions: 'Express' for home, about $60, which backed up the computer you ran it on; the small business version, which backed up multiple machines on a local network and knew how to back up onto all kinds of tape drives (as well as CDs, removable disks like ZIP drives and the like, or to other hard drives); the medium-sized business version, which supported tape libraries (tape drives with automated tape changing mechanisms); and the full-blown enterprise version which could almost make coffee in the morning. I have the medium-sized version, which I think retailed for $600 at the time. The current equivalent is $499 (single-server, twenty client licenses. I have 25 client licenses with my current R6 install.)

What makes all this relevant is that Retrospect isn't a "Mac backup" system. It's an everything-backup system. The backup server is running on my XServe, but I have clients for Mac, Windows, and Linux. Every single computer I own gets backed up every twenty-four hours or (if it's a laptop and away from home, or not left on all the time) as soon as it's back on the local network.

So I decided to give up on the idea of cutting down on the number of computers I had active for now. Fine! I'll just restore the Windows system back onto the computer that stopped booting. Let's see. It's been a long time since I had to do a full system restore from a Retrospect backup. . . . Ah. Oh. Hmm. For a Windows machine, the instructions are to start by installing Windows, then the Retrospect client. From there, Retrospect could take over the machine, and overwrite the 'new' Windows with the backup version, thus restoring the machine to its pre-failure state.

If only I could find my damn WinXP install CD. I'd actually found it about a week ago. I'd said to myself "Oho! Say, I should keep that handy." And I'm pretty sure I picked it up and carefully put it somewhere safe. I just wish I could remember what the *$&$# I did with it.

Well, I guess that's not going to work.

All right, then, on to Plan B (or is it Plan E? Plan J? I've lost count at this point). I could mount the SCSI drive in some other computer, have Retrospect install the entire Windows structure onto the drive, then move it back to the first machine. I think it's possible for OSX to format a drive with an MBR, and I know Ubuntu can, and it has to be MBR in order to boot into Windows, but whether or not merely "MBR"-ing a drive and pouring all the Windows files into the partition would actually work is another question. AND since the drives in the former Windows machine were server-style SCA SCSI drives, I would have to come up with an SCA to wide-SCSI adaptor, and my box of SCSI cables has already been moved to Seattle.

This just feels way too complicated. Yeah, I know, it was already way too complicated. But this is feeling complicated even for ME.

In all the digging and scrounging for that here-again gone-again WinXP disc, I did trip across an intriguing alternative. VirtualPC for Mac. Now, this is ALSO antique software left over from when Alexandria Digital Literature needed to run some Windows program or other, but it's an actual Microsoft product and came with its own copy of Windows 7. So instead of running a virtual Windows on my Ubuntu server, maybe I can run it on one of my Macs. None of them have anywhere near the processor power, but then, I don't think I need all that much.

The VirtualPC manual says it requires OSX 10.3 or better, and at least a G3 processor. Gracious me. I start by installing it on my shipping computer, a MacBook Pro running 10.6. The VirtualPC software is almost certainly going to be non-universal, which is to say, it won't have Intel-processor code folded in with the PowerPC processor code. That means if I try running it on a Mac with an Intel processor, it has to be OSX 10.6 or earlier, when OSX still had PowerPC-emulating libraries built-in.

Of course, that means I've got an Intel Mac pretending to be a PowerPC Mac in order to run software which in turn is pretending to be an Intel computer running Windows. I am not totally shocked when VirtualPC crashes on launch on the MacBook Pro.

That means either pulling an old PowerPC Mac tower out of storage, or using my alarm clock. One of the active computers in my home is a 450MHz G4 Macintosh "Cube". It's in my bedroom running a custom program I wrote to be my alarm clock. The cube does not have a cooling fan; it was cooled via convection. This means it's pretty much silent. The only noise it makes is when its hard drive spins up, and I put the quietest drive I had in it, so it's ideally suited for being in the room where I sleep. Also, it's running OSX 10.4.

And, indeed, VirtualPC installs on the cube without any problems at all. And the laser cutter program installs into VirtualPC's imaginary Windows 7 environment without complaint. Now, can the software see the laser cutter itself? Yes, it can! Well, once I fiddle with the VirtualPC control panel. As I'd expected, I/O is a little complicated. I have, in effect, two computers sharing one body, so arrangements need to be made. VirtualPC is clever enough to be able to share the Ethernet port with the Mac OS, but it's not possible to share USB ports in the same way. USB devices have to be designated as belonging to one or the other. I tell VirtualPC that it gets the "Unknown USB device" (aka the laser cutter) and the "USB Key", and the Mac gets to keep the speakers. The keyboard and mouse are special, they get handed back and forth depending on what window I'm in.

But then, tragedy! Despite the fact that the USB Key 'belongs' to VirtualPC, the laser cutter software is displeased, refusing to concede that a valid key is in place. I cannot convince it that it's being unreasonable; it refuses to energize the laser.

Damn. Now what?

I go back to Mad Hatter. Why won't it launch the damn GUI? I tinker with mysterious GRUB settings; no dice. I try just a 'core' install of ubuntu-desktop. Nope. I'm really tired of having to re-learn the whole interface every time I upgrade Ubuntu, but so be it; I try installing a different UI. Eenie meenie miney mo...Xubuntu/Xfce.

Aha! Oho! And . . . er, hmm. Well, on reboot, I actually have a splash screen! Oddly, it says "Lubuntu" not "Xubuntu," although I'm almost positive I picked the X flavor not the L one when I did the install, but who cares! It's not a text interface! And any minute now, it should quit blinking those little dots, and pop up some kind of box that says "username? Password?".

Any minute now.


Okay, it's waiting until I hit a key . . . or move the mouse . . . or click . . . well, crap. I'm back to the situation I had under 14.04. Instead of being able to log in but only with the command line, now I have a splash graphic that has locked me out entirely. The direct-connect video and keyboard are completely useless.

A few days later, I decide to take a closer look at VNC. There are bits and pieces of it lying around MadHatter, but they're probably left over from the 14.04 install. Oh, geez, and there are two different VNC servers that I could install. I think. The various "how to" guides just tell me "do this, and do that, and there you go!" and I have to be sure I'm not mixing things up. Only one of the guides even admits that there's an alternative, and it doesn't explain why it picked one instead of the other. And, as usual, everybody assumes that you can just type their commands and everything will work out hunky-dory.

Meh. I run the install. It doesn't work. I am not even a tiny bit surprised. The instructions seem to be unaware that 16.04 has mostly junked the previous scheme for launching daemons and other programs that need to be run when the computer boots up. Waay back in the day, from I-don't-know-when to about 2010, one launched background processes like mail servers and print managers and all that sort of thing by putting little start-up scripts in a folder called "init.d", or (and I think this one's even older than that one) putting them in a list of commands for a program called 'cron.' Cron was already going to get activated when the computer booted up, and cron, in turn, would work through a list of other things that should get launched. Cron, however, wasn't intended to be clever. If you told it "run that," it would, but if "that" blew up, crashed, errored out, or otherwise had some kind of issue, well, about all Cron could do was file a report.

The init.d system was slightly better, because it eventually encompassed some delightfully obvious commands, "start" and "stop." (What? Linux commands with vowels? How rare!) Postgres, MySQL, and MythTV were all apps that launched on boot, for example, and if, say, I needed to change some major setting on MythTV, I could shut it down with "stop mythtv-backend" make the change to the configuration file, and then "start mythtv-backend" and whatever mysterious jiggery-pokery was involved would just get handled by the appropriate start/stop script.
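For anyone who never saw one, an init.d-style script was really just a shell case statement. This toy version (a hypothetical "exampled" daemon, not any real service) shows the whole start/stop idea:

```shell
# A toy init.d-style control script for a hypothetical "exampled" daemon.
# Real ones in /etc/init.d did the same thing, just with more bookkeeping.
cat > exampled <<'EOF'
#!/bin/sh
case "$1" in
  start) echo "starting exampled" ;;   # would launch the daemon here
  stop)  echo "stopping exampled" ;;   # would kill the daemon here
  *)     echo "usage: $0 {start|stop}"; exit 1 ;;
esac
EOF
chmod +x exampled
./exampled start
./exampled stop
```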

Alas, this system also had limitations. I'm not sure why this wasn't good enough, but I do know that Apple got fed up with it years ago. OSX is built around Unix as well, but they replaced the init.d directory/collection of individual scripts with a master control system called SystemStarter back in, um, 10.2? 10.3? Users could add things to the startup sequence by putting appropriate control files into their [~]/Library/StartupItems folders. Just a few years ago, Apple replaced THAT with launchd, which uses [~]/Library/LaunchAgents and LaunchDaemons.

Anyway, the Ubuntu people also seem to have decided that init.d-style startup has to go. The only instructions I could find for how to get VNC to work with the new system were for the *other* VNC program, the one I hadn't installed, but I managed to figure out how to adapt them. Hallelujah! Holy crap! It actually worked! Unbelievable!
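For the record, here's roughly the shape of the thing that finally worked, as a sketch: a systemd unit file for a per-display VNC server. The file name, user name, binary path, and geometry are all assumptions on my part; adapt them to whichever of the two VNC servers you actually installed.

```shell
# Sketch of a systemd unit for VNC (names and paths are assumptions).
cat > vncserver@.service <<'EOF'
[Unit]
Description=VNC server for display %i
After=network.target

[Service]
Type=forking
User=YOUR_USER
ExecStart=/usr/bin/vncserver :%i -geometry 1280x800
ExecStop=/usr/bin/vncserver -kill :%i

[Install]
WantedBy=multi-user.target
EOF
# Then, as root: cp vncserver@.service /etc/systemd/system/
#                systemctl daemon-reload && systemctl enable --now vncserver@1
```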

Now for VirtualBox. There was a menu, with a submenu, with "VirtualBox" listed. It didn't work. That's okay, it had been installed way back when, before the VNC or desktop bits, so there's probably parts missing. Uninstall. Reinstall. Yup, that's better.

But still not working. I could create a virtual machine, but if I tried to start said machine, I was told that "vboxdev" wasn't installed, and I needed to copy it into the necessary folder. Easier said than done; the place that I was supposed to copy it from didn't exist. Huh?

In fact, there was no such file as "vboxdev" on the computer. Anywhere. So why, o VirtualBox, did you not bring that file with you when you were installed? The answer took another couple of hours to dig out, mostly because there were dozens of explanations on the Interwebs that were completely useless. What it actually boiled down to, and what no error message or log file correctly reported, was that Ubuntu hadn't bothered to install the header libraries.

Because Linux tries to run on, well, almost any imaginable computing device, from multi-core Xeon processors to low-watt ARM chips (you can run it on a Newton! It's amazing!), Linux programs almost never come as binary files. "Compiling" means translating more or less human-readable code into processor-specific numeric gibberish. Sometimes you can compile for a lesser processor which other processors can handle, but eventually that means not taking advantage of the newer chip's full potential. Currently, this is most commonly seen with programs that require a "64-bit processor," and thus won't run on older 32-bit ones.

The Linux solution is to have the computer do its own compiling, so that it can create the code that is ideally suited for itself. In theory, this makes a piece of software capable of running on an incredibly vast range of systems. There are (for me at least) two notable downsides to this scheme. The lesser one is that installing a program can take time. Lots of time. I've had some programs take three or four hours to install, with the processors running close to 100% the entire time, as chunk after chunk after chunk was converted from source code to an intermediate linkable library and then to the final binary files.

That's the lesser problem. The bigger one is that nobody writes programs from scratch any more. Instead, you take advantage of what other people have already written. You don't need to (or want to) reinvent how to move a pointer across the screen, or how to print to a printer, or how to communicate over a network, or how to make menus work, or any of thousands of other things that other people have already figured out how to do. Instead, you use a library of pre-defined functions.

There are hundreds of libraries, and most of them have been updated/upgraded/improved dozens or hundreds of times. So your program needs to keep track of WHICH libraries, and what versions, it needs, and the compiling process has to match all the right parts up, or the program will try to do X and there won't be any X-ability in place to catch that request, which is Not Good.

In Windows, such libraries are called DLLs (dynamic-link libraries). In OSX, they're .frameworks or .dylibs. In Linux, they're mostly libsomethingorother, and have all kinds of endings (.so, .py, .mo, or nothing, for example). There seem to be at least 4000 libraries in 16.04, although it might be as few as 1000 (due to the same library having multiple names) or more than 16,000 (because my search script didn't count anything not in a directory named /lib). Suffice to say, "lots and lots and lots."
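If you want to do your own census (my numbers above came from a similar hack), something like this works. The paths and the filename pattern are guesses at where your distribution keeps things, so the count is only a ballpark:

```shell
# Count files that look like shared libraries in the usual places.
find /lib /usr/lib -name 'lib*.so*' 2>/dev/null | wc -l
# ldd shows which libraries one particular binary links against.
if command -v ldd >/dev/null; then ldd /bin/ls; fi
```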

I cannot begin to count how many times I've tried to install something on a Linux machine and had it fail because some library, or header, or widget, or fiddly bit, was missing or the wrong version or otherwise tasted bad. Even on OSX, I absolutely dread dealing with installing something if I can't get a pre-compiled binary, because my experience has been that well over half the time, the 'automatic' compile/install will fail, usually with a cryptic or flat-out erroneous error message. I think maybe 20% of the time, the problem is un-solvable, and I have to give up attempting to use that program.

Many of these libraries get installed when something else gets installed and pulls them along with it. Installing Ruby means getting a bunch of Ruby libraries. Installing MythTV means installing MySQL means installing a bunch of MySQL libraries. But there are also libraries that come 'baked in' to Linux. The kernel or core libraries are pretty much a given. Still, what's in those libraries changes over time, so you can't use them unless you've got an index. The index is contained in the "header files."

Now, since nearly every program you ever install in Linux has to be compiled, and nearly every program is going to want to use at least some of the functions in the kernel libraries, and you can't do that without the header files, one would think that the Ubuntu installer wouldn't even dream of NOT installing them.

Sometimes that installer program can be unimaginably idiotic. Because my install had no header files. This is so pointless that it took me almost twenty minutes to figure out how to GET it to install those files. Why would you need to know how to do that? Why wouldn't it already be done? Damfino.

apt-get install linux-headers-generic
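(If you want to check for yourself before uninstalling and reinstalling anything: the headers, when present, live under /usr/src, and they have to match the kernel you're actually running. Package names here are the Debian/Ubuntu ones.)

```shell
# Is there a header tree matching the running kernel?
uname -r
ls -d /usr/src/linux-headers-"$(uname -r)" 2>/dev/null \
  || echo "headers missing; try: sudo apt-get install linux-headers-$(uname -r)"
```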

Uninstall Virtualbox. Reinstall Virtualbox. Wait two to three hours for an immense amount of compiling to complete. Launch Virtualbox.

Gasp. It seems to be working.

Oh, drat. Yes, it seems to be working perfectly. Because it's now saying "Hello, I'm your new imaginary computer. It's time to install an operating system on your new imaginary computer. Please insert your install disk in the CDROM drive, so I can capture it, attach it to the imaginary computer, and we can get started with installing the OS."

It does seem that I am just a few minutes away from finally having exactly what I wanted: a virtual Windows system running on my multi-core Ubuntu server which I can use to drive my laser cutter without having to have extra (heat-producing power-consuming noise-making) hardware running to do it.

Just as soon as I can find my !*#%*# WinXP install disc.

So. Why am I a 'tech-support genius'?

Practice. Lots and lots and lots of practice. That's why.

Current Mood: numb


August 16th, 2016

06:55 pm - Why I Hate Unix, Part . . . seven?
This is going to be a pretty geeky diatribe, but you shouldn't have to know how to write a shell script in order to follow along.
  Two weeks ago, there were nine computers active and running in my home*. That's not including the Amiga 3000 or the TiBook, which might get fired up once a year or so to play some old games, nor my iPhone, iPads, or anything else that isn't a multi-purpose programmable chunk of computing technology. Most of those computers are Macintoshes, but there are two machines in my server rack that are not. MadHatter was running Linux, specifically Ubuntu 14.04LTS. It's a machine left over from Alexandria Digital Literature, and it's got serious horsepower, specifically a quad-core 3GHz Xeon processor and a hardware SATA RAID5 drive controller. Its primary duties include hosting my database, and running the MythTV server.
  The other machine is an older less powerful server, which I brought back online about a year ago when I got my laser cutter, because the laser cutter control program only ran on Windows. So I blew the dust off JubJub, with a dual-core Xeon and a SCSI Raid5 controller, and installed . . . um, I think Windows XP? on it.
  Well, last week, after a very busy month, I finally had some more work to do with the laser cutter. When I punched JubJub's "on" button, after all the usual pre-Windows boot screens flashed by, I got a blank black screen. No error message, no beeps, no whining, but also no Windows.
  The simplest thing to do at this point would have been to reformat and reinstall the whole thing, or restore the system from backups. However, as a separate server, that's an extra 100 watts of power used when it's running, and MadHatter is still woefully underused, so I figured this might be a good time to move Windows over to MadHatter and run it in a virtual box. I'd been thinking about doing that for some months.
  However, most of the virtual PC stuff seemed to take for granted the presence of a GUI. When I upgraded MadHatter two years ago from Ubuntu 12.04LTS to 14.04LTS, the installer program had screwed things up in a fairly major way. MadHatter didn't have a GUI. I could see and use one if I connected remotely (via VNC), but the actual physical screen just came up black. Since I do almost all my work on it remotely via a command line, I'd given up trying to fix it, but it looked like it might be a problem for the virtual PC systems.
  The other problem with MadHatter was that it wasn't actually using the RAID array. If you aren't that familiar with the term, what the hard drive controller was supposed to do was take the six 400 gigabyte hard drives and tie them together into one 2000 gigabyte hard drive that would remain stable even if one of the drives failed. Actually, only three of the original six drives were still functioning, but I had replaced one with a 500 gig drive. When I'd tried to upgrade 12 to 14, I spent about two days trying to get it to boot up before I gave up and just ran the whole thing off a 1.5 terabyte drive in one of the remaining slots.
  Now, Ubuntu releases new versions of the OS every six months, but every two years they do an LTS version, which is "Long Term Support." They promise to keep updating and bug-fixing LTS releases for at least a year or two (or more?) after the release of the *next* LTS version. That's why I only run LTS editions on my server. Well, it's been two years, so there's a new LTS, the 16.04 edition (That's 2016, 4th month).
  Good old Linux/Unix. They love to make things insanely complicated. As you might know, what we call "booting" a computer is actually a corruption of "bootstrapping," because getting a computer started is like having it pull itself up by its bootstraps. The main logic board has the BIOS, which has only a very very simple program on it. The BIOS's job is to know just enough about hard drives to grab a bunch of bytes off a drive which hopefully contain an actual program, and then to run that program.
  On the Mac, the firmware boot loader is slightly more sophisticated. It knows enough to be able to look for a drive's 'name,' and a little bit about the file structure, so that if you rearrange your drives it will still boot off the drive you expect, and it can find an actual file on the drive and pull THAT into memory and run it, although that file had better have the right name and be in the right spot.
  Linux, naturally, has at least three *different* schemes for getting a computer up and running. There's GRUB, LILO, GRUB2, and I think at least one more. The Wintel-type machines' BIOS isn't as fancy as the Mac version. It just grabs the first couple of sectors of data off the first hard drive it can find, loads that, and runs it. If you re-arrange your drive cables, the computer might fail to boot, if the first drive it finds didn't have a bootloader installed on sector zero.
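That "grab the first couple of sectors" test really is that crude: sector zero is 512 bytes, and the BIOS considers a drive bootable if the last two of those bytes are the magic signature 0x55 0xAA. You can see the whole mechanism on a scratch file, without going anywhere near a real disk:

```shell
# Build a fake 512-byte "sector zero" and stamp the boot signature on it.
dd if=/dev/zero of=fake-mbr.img bs=512 count=1 2>/dev/null
printf '\125\252' | dd of=fake-mbr.img bs=1 seek=510 conv=notrunc 2>/dev/null
# The last two bytes are now 55 aa, which is all the BIOS checks for.
tail -c 2 fake-mbr.img | od -An -tx1
```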
  So I started by creating an install DVD for Ubuntu 16.04 LTS, and trying to install it. The hardware RAID controller lets me tell it which physical drive will appear to be "first" to the BIOS, so if I want to change which drive is the boot drive, I don't have to actually physically swap them around. I moved the drive with the current 14.04 OS to the back of the line, turned the four small drives into one redundant drive array, and told Ubuntu "install yourself."
  It did, and, once it was done, ejected the DVD and said "I'm ready to reboot! [enter]" I press the button...and it fails the reboot.
  I spent two full days finding all kinds of ways for Ubuntu 16 to not work, and eventually it became clear that GRUB2, the latest and most recommended bootloader, was just flat incompatible with a 3Ware redundant drive array. Now, that should have been impossible. It's hardware RAID: it should be quite impossible for the software to be able to tell the difference between a hardware drive array and a simple hard drive. Nevertheless, they clearly weren't getting along.
  Well, fine. Linux actually has a very well respected system for creating raid arrays in software. So I told the hardware drive system to make all the drives appear as ordinary drives, and instead assembled them into an array using "mdadm" which is the software raid tool. GRUB2 proudly claimed to be able to boot off a raid array.
  Now, this is actually a pretty impressive trick. Remember, the BIOS is just grabbing a glop of bytes from the first drive it finds, which, in this case, will be GRUB. When I format the hard drives, I have to save a chunk of space to hold GRUB. (It may be that it no longer has to be the very beginning of the drive, don't quote me on that.) All the online instructions recommend saving about 1 megabyte for that. When GRUB loads, it can't just go "hey, there's the actual file system over there" like it normally would, since in this case, the files have been sliced into pieces and sprinkled across multiple hard drives. It has to know how to re-combine the parts. Even the Mac bootloader can't do that.
  Naturally, it took me two MORE days before I got to that stage. Turns out that GRUB is *theoretically* capable of this trick, but in reality, it totally sucks at the job. I installed and rebooted over and over again, but most of the time, it couldn't even find the drive array.
  Now, GRUB, like the Mac bootloader and unlike the Windows one, recognizes that what order your drives are in is a really lousy way to keep track of what to boot from, so when you set it up, you can specify a drive either by position ("boot off the 3rd hard drive"), by name ("boot off 'Fred'"), or by unique ID number. ID is the recommended method.
  Eventually I had a boot menu set up that let me choose any of the above methods. Now I'd told mdadm when it made the array to call it "md0", because that's the expected default. If I had two software RAID arrays, the 2nd one would be "md1". But when I'd reboot, Ubuntu would decide to call this md127. I have no idea why. Except when it didn't.
  The afternoon of day 4 was the first time I actually got Ubuntu to BOOT directly into the array. But sometimes it could only boot using the drive number, sometimes it would boot using the ID, and sometimes it would only boot with the name. Say what?? I wasn't changing anything! Thanks so much, GRUB/Ubuntu/Linux for being quixotic and ill-behaved. Geez.
  Now that I'd finally gotten it booting correctly, I cloned my old Ubuntu 14 files onto the new array, booted onto it, and then told it "upgrade yourself to 16." That way I wouldn't have to export and re-install the database, MythTV, and all the other stuff.
  "No," it replied. "The network is broken."
  In fact, it was now broken on my ORIGINAL working partition. WTF? I hadn't changed anything there! Now, if I booted into one of my newer version 15 partitions, everything was hunky-dory, so there was nothing wrong with the hardware. But booting into either copy of 14; no networking. And don't forget, my 14 install didn't have direct control, either! It booted into a broken GUI that just gave me a black screen!
I eventually discover the secret key sequence that forces it to cough up a command line on the console. I have no idea what I did that un-broke the networking system, but after six hours or so, it suddenly started working again.
  "Okay! Now, Ubuntu 14, please upgrade yourself to 16." Naturally, after five hours of the installer running, it rebooted and was once again broken. It would eventually boot, but it was so upset that it wouldn't allow saving anything to the hard drive. The file system was read-only.
So after four full days of this, the install manager (dpkg) is currently spending a couple hours churning through all its packages, re-updating, re-downloading, and re-installing them to get them (I hope) to a properly working state.
I might actually have managed to upgrade this server before a full week has passed. Fingers crossed.
* * * * *
  If anybody else wants to try booting Ubuntu off a RAID5 array, I'll tell you what I had to do, because as far as I can tell, nobody's ever actually done this before. There are a couple of web pages that say it can be done, and one guy says "it's easy" and explains how (his instructions didn't even remotely work), another guy says "it's really hard" and explains how (his didn't work either, but then, they didn't make a lot of sense), but neither of them say "and I did it this way and it works great." The official Ubuntu documentation's page has a comment area where a bunch of guys basically say "These instructions are seriously broken, and need to be fixed," but can't agree on what would work, and there are a number of pages where people say, basically "Booting off RAID5 is a total mess; and might not even be possible."
  The first problem is whether or not to partition a drive using the old-school Windows-style MBR (Master Boot Record) or the new-school GPT. MBR has only a fixed space for the bootloader, and it's possible to set up GRUB so that it becomes too big to fit. I used GPT. That turned out to be a mistake. I eventually discovered GRUB wasn't booting off the RAID partition because it couldn't find it, and that was because it didn't have a f***ing clue what GPT was. I had to hard-code in "don't forget to include the GPT module!" to the GRUB installer before it could read the partitions on the GPT disks. Idiotic! I also ended up force-loading the "raid5rec" and "mdraid09" modules, because, again, it was too stupid to figure out that it needed them on its own. (Note that "insmod part_gpt" in the menu command failed; it couldn't find part_gpt.mod. I had to include it in the environment variable to cause it to be included during installation of GRUB.)
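If you're trying to reproduce this, the place I ended up putting that "environment variable" was the GRUB defaults file. A sketch (mdraid09 is GRUB's actual spelling of the old-RAID-metadata module):

```shell
# In /etc/default/grub, then re-run update-grub:
GRUB_PRELOAD_MODULES="part_gpt mdraid09 raid5rec"
```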
  Secondly, if you do use GPT, you'd better be sure to create a 1 megabyte partition on every drive, and mark it as a "bios_boot" partition. Probably. Some people thought this was necessary, others didn't. (Note that 99.8% of all info online is actually about booting to a RAID1 array, or 'mirrored' array, not the more sophisticated RAID5, and most of it is actually from around 2011.)
  Then, when you create the array with mdadm, you have to be sure to tell it to use "--metadata=0.9" This is the oldest possible metadata structure, but it also appears to be the only one GRUB actually can comprehend. Unfortunately, if you don't explicitly require this, mdadm will use version 1.2, and then GRUB won't have a clue how to piece the drives together into the array at boot time.
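In command form, that looks something like this sketch; the device names are placeholders for your actual RAID member partitions, so don't paste it anywhere blindly:

```shell
# --metadata=0.9 is the critical flag; everything else is ordinary mdadm.
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    --metadata=0.9 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
```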
  The "update-grub" program uses another program called 'os-prober' to find partitions with operating systems on them, in order to magically create a full and complete boot loading menu. It consistently failed to identify my array as having an OS on it, even though it would recognize the identical twin version on the extra drive. So I got to build the menu command by hand.
  One of the few resources online that actually talks about booting to RAID5 tells you that you need to use the UUID of the array itself, rather than the UUID of the partition (or rather, he uses both, in different spots). This turned out to be erroneous advice on my system. Only the partition ID was relevant.
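Putting the pieces together, the hand-built menu entry looked roughly like this. The UUID is a placeholder, and the kernel paths are assumptions (your vmlinuz/initrd names will be versioned):

```shell
menuentry "Ubuntu 16.04 on RAID5" {
    insmod part_gpt
    insmod mdraid09
    insmod raid5rec
    search --no-floppy --fs-uuid --set=root YOUR-PARTITION-UUID
    linux /boot/vmlinuz root=UUID=YOUR-PARTITION-UUID ro
    initrd /boot/initrd.img
}
```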


*For the curious, the roll call is as follows: a shiny new used Mac Pro tower is my primary desk/personal machine. A 15" MacBookPro handles shipping, and has the USB scale and label printer attached. There's a MacMini for running the MythTV client, attached to my TV. A MacBook Air is my actual travelling laptop, along with a 17" MacBookPro that is held back running 10.6 for Freehand and other legacy software. Mentioned above is the Ubuntu server, and the just-self-decommissioned Windows machine. Sharing the server rack with them is my XServe, which is my mail/web/DNS/file/backup server. There's a Mac Cube in my bedroom which runs the alarm clock program I wrote for myself. Those are the active, powered systems. Offline but operable ones include the TiBook with 10.4 for old games, the Amiga 3000 for even older ones, a Quicksilver tower for testing/evaluating SCSI drives, a Pentium tower running Windows for Workgroups 3.1 (no, seriously!) with a 5 1/4" floppy drive for, well, reading old floppy disks, an original Banana PC-9000-style Macintosh because it's cool, and about five or six other PowerPC-era Macs and a couple of 68030 Macs because I haven't gotten rid of them yet. Plus parts to build three or four Intel machines, a box full of MacLaptop bits, and my very first computer, the Ohio Scientific Challenger 1P.


May 3rd, 2016

01:09 am - Apples, Oranges, AND Pears, Please
For some years now a variety of news sources have presented consumers with those lovely little charts that show why compact fluorescent bulbs are better than incandescent, and these days, why LED bulbs are even better-er. And this is all true, or at least, is mostly true now that LED bulbs can be had for less than $10, instead of the $50 that they cost just a year or two ago.

However, what people keep conveniently ignoring is that we've had something as good as LED lighting for years already, the always-so-conveniently-overlooked NON-compact fluorescent lighting.

Now, frequently at this point when I bring up standard fluorescent lighting, somebody will tell me they hate how harsh and blue the light is, they hate how it flickers, and/or they hate the buzzing. These are all things that are only true of fixtures that are at least 20 years old. F-bulbs light up because one drives a pretty high voltage through them, so the current can fly through the tube without a wire. To get that high voltage, F-fixtures use a transformer, commonly called a ballast. In Ye Olde Days, the ballast was magnetic, which meant it tended to hum, and the bulb was driven by an arc running at the standard 60 Hz frequency that you get out of your wall socket. Modern fixtures use cheaper, lighter, more reliable electronic ballasts, which also drive the bulb at 20,000 Hz, utterly eliminating any perception of flicker. The only remaining behavior of f-bulbs that is grounds for grumping is the half-second lag between when you hit the switch and when the bulb lights up, and even that isn't a requirement. The brief pause only occurs with a 'rapid-start' ballast. 'Instant-start' ballasts kick a voltage spike down the tube to get it to ionize and fire up in milliseconds. The downside of instant-start is that it shortens the bulb life a bit.

So how good are standard f-bulbs vs. LED? Basically, f-bulbs kicked LED bulbs to the curb until about a year ago, when the cost of LED bulbs finally got low enough to make them competitive. Two years ago, I paid $45 for a 1200-lumen LED bulb. That's about the amount of light you'd get from a classic 90-watt incandescent bulb. Now, your standard modern four-foot T8 (aka one inch in diameter) fluorescent bulb puts out 3000 lumens, but most fixtures don't do the best job of getting the light from the back of the bulb out where it can do some good, so I'm going to downgrade the lumen rating to 2250. That means your standard two-bulb fluorescent light fixture puts out 4500 lumens. I have two of them in my kitchen to provide adequate task lighting there.

So that's what I'm going to use as my benchmark: 4500 lumens of light. There aren't any LED or CFL bulbs that can deliver that much light, so, like the fluorescent fixture, it's going to take multiple bulbs. I'm using bulbs from Philips' catalog whenever possible, because they make high-quality lamps and they make all the different types.

Let's start back two years ago, when I bought that last LED bulb. For LED lighting, I'll take three Philips LED bulbs: 1500 lumens each, 14 watts, $35/bulb, and a rated lifetime of 10,950 hours.

"Hey, I thought LED bulbs were good for 25,000 or 50,000 hours!" Only the really dim ones. Philips' 9-watt LED bulb provides 700 lumens and lasts 25,000 hours, but you need to buy twice as many to get the same amount of light. Don't worry, I'll let the longer-life bulb play in a bit. But right now, it's the 1500-lumen bulb on deck.

Our compact fluorescent bulb lasts 10,000 hours (as long as you don't put it in a ceiling fixture!), costs $7.30, uses 29 watts, and provides 1500 lumens. Well, that's what they say. The fact that CFL bulbs are usually coiled or otherwise smooshed together means a fair amount of the light is trapped within the coils of the bulb. In my experience, CFLs are much darker than their lumen rating would indicate. But I'm feeling generous; we'll let it stand for now.

The non-compact F-bulb is a four-foot T8. 32 watts, $3.50 at Home Depot, and with a rated lifetime of 46,000 hours. That's right; it will outlast the LED bulb more than 4 times over. Rated for 3000 lumens, I'm only crediting it with 2250, as mentioned.

Then there's the hot bulbs. The incandescent bulb is a 100-watt classic with an estimated 1500 hour lifespan, and a cost of $0.50 each.

Finally, a halogen bulb. Because they run hotter, we can get 1500 lumens for only 72 watts, but the bulbs cost $3 each, and the lifespan's only 1000 hours.

Our starting race course will be turning the bulb on for 4 hours per day, with a cost for the electricity of $0.12 per kilowatt-hour. And here's the results:

The X-axis is years in service, and the Y is total cost of ownership. The LED bulb starts saving money vs. the hot bulbs halfway through the 2nd year, but loses all the ground it gained when you have to buy new bulbs during year 4. Except that, if you'd bought the first set four years ago, you don't have to pay $35/bulb now. Here's what happens with today's bulb prices:

That blue LED line is wayyy down there now. And yet, the yellow line of the standard fluorescent still wins, if only by a hair. I confess that the CFLs are doing much better than I expected, but they're still the lamest option among the cold bulbs.
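For anyone who wants to re-run the race themselves, the cost curves behind these graphs are just total-cost-of-ownership arithmetic. Here's a minimal sketch using the bulb figures from above (the function and variable names are mine, not from any real tool):

```python
import math

def tco(price, watts, lifetime_hours, years,
        hours_per_day=4.0, rate_per_kwh=0.12, bulbs=1):
    """Total cost of ownership: bulb purchases plus electricity."""
    hours = years * 365 * hours_per_day
    purchases = math.ceil(hours / lifetime_hours)    # sets of bulbs bought
    energy_kwh = hours * watts * bulbs / 1000.0      # total energy consumed
    return purchases * price * bulbs + energy_kwh * rate_per_kwh

# Three 1500-lumen LED bulbs vs. a two-tube T8 fixture, over four years
led  = tco(35.00, 14, 10950, years=4, bulbs=3)   # → $134.43
tube = tco( 3.50, 32, 46000, years=4, bulbs=2)   # → $51.85
```

Swap in the CFL, halogen, or incandescent numbers, or today's lower LED prices, and you can reproduce the crossover points for yourself.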

Now, $0.12/kWh for power is a national average. Here in the northwest, my electric bill is $0.067 for the first 600 kWh, then it goes to $0.10. I only used 1000 kWh last month, so my average was about $0.0827/kWh. Cheaper electricity favors the less-efficient but longer-lasting f-bulbs:
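That average rate comes straight from the tier arithmetic; here's a quick sketch (ignoring the fixed fees and taxes that pad a real bill and nudge the average up a bit):

```python
def blended_rate(kwh, tier1_limit=600, tier1=0.067, tier2=0.10):
    """Average $/kWh under a two-tier residential rate, before fees and taxes."""
    cost = min(kwh, tier1_limit) * tier1 + max(0, kwh - tier1_limit) * tier2
    return cost / kwh

blended_rate(1000)   # roughly $0.080/kWh before fees
```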

If, on the other hand, you're paying $0.18/kWh, LEDs do much better:

And yet, those old-fashioned glass tubes are still right there, neck and neck with the LEDs. The LED bulbs are smaller, oh, yes, and that's a very nice thing, to be sure. But they're also a new and immature technology. I've had a couple of LED bulbs burn out on me, years before they should have.

One more graph. I need to give the dimmer, longer-lasting LED bulb a chance to shine, pun intended. This is the chart that you'll usually see somebody use to pitch the virtue of LED bulbs. The contestants are a 9-watt LED bulb, 700 lumens, with a lifespan of 25,000 hours and a price tag of $8.50; a 15-watt CFL for $6 with a lifespan of 12,000 hours; a 2'-long T8 fluorescent rated for 1400 lumens, priced at $3.00, with a lifespan of 30,000 hours; a $3 43-watt 1000-hour halogen bulb; and a $0.75 75-watt 800-lumen incandescent bulb good for 1,500 hours.

Those stair-steps, by the way, are when the bulb burns out and has to be replaced. You don't see it on the hot bulbs so much because you have to replace them at least once/year.

The hot bulbs do have one serious advantage, though. They have a CRI of 100. The Color Rendering Index is a code number to tell you how close to full-spectrum the light from a particular lamp is. Really cheap f-bulbs have a CRI of 70, which will look white to the eye, but because there are specific colors missing from the spectrum, colored objects in that light might look unexpectedly dark, depending on whether the pigment on the object mostly reflects colors that aren't present in the light from the bulb. Really good fluorescent bulbs have a more complex mixture of phosphors which provide a smoother and more even spectrum. A CRI of 80 or better is nice; artists should use bulbs with a CRI of 90 or better.

Now, "white" LEDs actually work a lot like fluorescent bulbs. The "LED" part actually emits blue (or, in some designs, violet) light, and then there's a phosphor layer that downshifts part of it to fill out the rest of the visible spectrum. For the 700-lumen graph, the LED bulb had a CRI of only 80. The CFL had a rating of 82. The T8 bulb scored an 85. Also, for that graph, I selected bulbs whose light output matched the halogen bulb. Halogen bulbs are slightly whiter than incandescent bulbs. I'm not going to go into the full explanation of color temperature because some of you know it already, and some of you don't really care, but in a nutshell, you can get bulbs that are more pink-ish or more blue-ish. Incandescent bulbs only come in one color, because it's what you get when you heat something (like, say, a tungsten filament) to a temperature of 2,700 kelvin. That's about 4,400º Fahrenheit, and that's how hot the filament gets in a bulb. Halogen bulbs use a clever trick to reduce the rate at which the tungsten evaporates, so they can take the little wire all the way to about 4,900ºF, aka 3,000 K, and emit a light with a bit more blue in it, so it doesn't look as orange-y.
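Those kelvin-to-Fahrenheit figures are just the standard unit conversion, if you want to check them:

```python
def kelvin_to_f(k):
    """Convert a color temperature in kelvin to degrees Fahrenheit."""
    return (k - 273.15) * 9 / 5 + 32

round(kelvin_to_f(2700))   # → 4400 (incandescent filament)
round(kelvin_to_f(3000))   # → 4940, i.e. roughly the 4,900ºF quoted for halogens
```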

Because the cold bulbs aren't making light by heating something up to crazy-hot temperatures, they can offer a much wider selection of colors. Incandescent-colored bulbs are usually called "warm white," "soft white," or "kitchen/bath." There's no official standard term; do yourself a favor and look for the numeric color temperature. If you want your LED or fluorescent bulbs to blend perfectly with your incandescent bulbs, you want 2700K. If you're matching halogens, get 3000K.

Personally, I usually like my lighting a little less orange, so I prefer Neutral, Cool White, or Daylight bulbs. Neutral is what Philips calls 3500K. Cool White is usually about 4100K. 5000K bulbs are sometimes called Sunlight.  Don't confuse Sunlight with Daylight. Daylight is what you get outdoors in the shade, which means mostly the light from the blue sky, not the yellow sun. That's usually around 6500K, so it's quite a bit more blue than Cool White. Finally, unless you've recalibrated your computer monitor, it's almost certain to be set to have a white point of 9300K. I'm not sure if you can get a bulb at that temperature, though. For the first series of graphs, all the cold bulbs had a color temperature of 5000K.

LED bulbs don't yet have nearly the range of color temps available, but most of them offer 2700K and 5000K, and that's a good start. Over time, I'm sure they'll keep getting cheaper and offer more options.

Lights out. :)


February 18th, 2016

05:43 am - Phoney Baloney
Act One: I Say Good-bye To My Land Line and Most of My Cell Phone Bill

A few years ago, I had to do a lot of cost cutting. Most of you know why; those that don't, I'll just say there was a divorce. One of the things that got cut was my phone bill. I switched from a T-Mobile plan that was running me about $40/month to one from PureTalkUSA that's $10/month, and replaced the ISDN land-line phone service with VoIP.

For most residential users, VoIP means MagicJack or Skype or some such. However, all of them, although much cheaper than a regular phone line, are still not free. I found a way to get my land-line costs down to $0, although it's a pretty convoluted scheme.

First, there's an account with sipsorcery.com. This free service gives me the internet component for VoIP. I can run client software on any of my computers, and even on my iPhone, and all those devices connect to the sipsorcery server to allow me to talk to people over the internet. However, this doesn't include a phone number. That's the part that makes Skype not free; they charge you to provide a bridge to cross over from the internet networks to the telephone networks. Without that bridge, you can't call any "normal" phone or any cellular phones because they're all connected to the phone network.

The 2nd part is an account with IPKall. *These* guys provide a free phone number. If somebody calls my IPKall number, they've got a server somewhere that accepts the call from the phone network, and bridges it onto the internet, specifically as a VoIP connection to a SIP server such as the one at sipsorcery. However, you cannot call OUT from that gateway. It's one-way phone net -> internet.

So the third piece is a Google Voice account. Google Voice *also* gave me a free ten-digit phone number. Google Voice, however, doesn't have any "real" phone lines of their own at all.  When somebody calls my GVoice number, I'd bet good money that the result is something like this: that call goes to whichever local phone company switch/computer (probably the ubiquitous Lucent 5ESS) 'owns' my phone number. The 5ESS tells the Google server that it has an inbound call for that number. The GVoice server tells the 5ESS to hold that call in suspension for just a moment. Then the GVoice server tells the 5ESS to call *me*. Specifically, it calls (1) my cellular phone number, (2) my IPKall VoIP number, and (3) my Charter VoIP/POTS number. (When I switched from DirecTV to Charter for television and internet, the Charter system included phone service at no extra charge.) Hopefully, at some point I 'answer the phone' at one of those numbers. The 5ESS tells GVoice that one of the numbers it's calling has gone off-hook. GVoice tells it to drop all the other calls, and then to take that *inbound* call it's got, and plug it into the line that just answered. Then GVoice lets go of the call.
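That simultaneous-ring dance can be sketched as code. This is purely my conceptual model of the control flow (all the names are hypothetical; it doesn't touch any real Google Voice or telephone-switch API):

```python
from dataclasses import dataclass

@dataclass
class Leg:
    """One outbound 'find me' call placed on behalf of the server."""
    number: str
    answered: bool = False
    state: str = "ringing"

def route_call(caller: str, legs: list) -> str:
    """Ring every forwarding number; bridge the first that answers, drop the rest."""
    for leg in legs:
        if leg.answered:
            for other in legs:
                other.state = "bridged" if other is leg else "dropped"
            return f"{caller} <-> {leg.number}"   # inbound call linked to winner
    return "voicemail"                            # nobody picked up

legs = [Leg("cell"), Leg("IPKall"), Leg("Charter")]
legs[1].answered = True
route_call("inbound caller", legs)   # → "inbound caller <-> IPKall"
```

The key point the sketch captures is that the server only shuffles call legs around; the voice itself never passes through it unless every leg falls through to voicemail.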

Note that the GVoice server never had to actually handle any data packets that had a human voice wrapped inside. It just had to be able to tell the 5ESS what to do. Now, the GVoice server does have to handle the voice call if I don't answer any of my phones, because then it provides voice mail services.

But if the IPKall number only accepts inbound calls, how do I call my dentist? Simple. Well, not exactly simple, but not rocket science either.  I sit down at my computer, open the Google Voice web page, and tell it I want to call my dentist. The GVoice server calls me first, and once I've answered, then it calls my dentist. If/when they answer, it tells the 5ESS to link the two calls, and lets go. So even my outbound calls are actually inbound calls.

Act Two: My Tech Leaps Forward

Now, this has been working fairly well for 3+ years. But about a year ago, in return for providing household tech support to a friend of my mom's, I received some of the Apple technology that they'd replaced with newer things. I'd been using an iPhone 3GS. This couple had upgraded their iPhone 4 and iPhone 5 to iPhone 6s, a MacBook Air to a Mac Pro, and an iPad to an iPad Mini 2 (or 3? I forget). In return for tying all their new gear together (along with their AppleTV, her iMac, and their new printer), I got the hand-me-downs.

Now the iPad (which one might want to call an iPad 1, except that Apple never called it that, because until the 2 came along, who needed to? So I tend to refer to it as the "iPad dead stop") cannot be upgraded past iOS 5.1.1. The current version is 9.2+, and new apps these days almost always require at least iOS 8, so while I've really enjoyed it, most of the apps I use have to run on the iPhone 5. There are a lot of apps I'd much rather have on a bigger screen.

But all this shiny new (or new-ish) tech has made me more acutely aware that my super-cheap cell plan does not include any data. It's voice and text only. (That means SMS, not MMS. You can't 'text' me a picture, just words.) That means no mobile mapping, no Siri in the car, and no GPS-enabled games, unless I can find an open WiFi hotspot.

Act Three: I Consider The Virtues of A Vow Of Silence

Let me emphasize something explained in Act One. When I'm at home, within range of my WiFi access point, I frequently will answer phone calls using my iPhone's VoIP client app, rather than as a cell phone. If I take the call via the cell network, it eats up my minutes. So it occurs to me that if I could find an adequately affordable data plan, I wouldn't NEED to have voice. If I could get a 'tablet' data-only plan, I could also answer and place phone calls.

Phone calls are nothing like, say, streaming a hi-def movie. You do NOT need "super-fast 4G LTE" data for a phone call. 4th generation cell data runs about 5 to 10 megabits/second. (Well, that's what Verizon says on their web page. Technically, it's required to deliver at least 100Mbps to moving users, and 1Gbps to stationary ones, or it can't be called 4G.) Third gen speeds are 200kbps to as much as 10Mbps, depending on how new the 3G tower hardware is. A phone call needs about 12kbps. (An aside: Apple's FaceTime can be used for either videophone or sound-only calls. It doesn't have a mechanism for bridging to the telecom net, so you can only use it to talk to people on internet-enabled gizmos, but it uses an advanced codec for the audio, and most people notice the dramatically better sound quality compared to traditional phone calls almost immediately. As a bonus, FaceTime calls are encrypted end-to-end, so nobody except possibly the NSA can tap or intercept the call. Currently none of the alternatives, such as Skype and Google Hangouts, offer that.)
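To put that 12kbps figure in perspective, here's the back-of-the-envelope arithmetic (the 12kbps number is from the text above; real codec bitrates vary a bit):

```python
def call_megabytes(minutes, kbps=12):
    """Data consumed by a voice call at a given bitrate, in (decimal) megabytes."""
    bits = kbps * 1000 * minutes * 60
    return bits / 8 / 1_000_000

per_hour = call_megabytes(60)     # → 5.4 MB for an hour of talking
hours_in_2gb = 2000 / per_hour    # about 370 hours of talk in a 2 GB quota
```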

So if I could only find somebody that offered, say, 0 bytes of 4G LTE but all the 3G data I can eat at some super-cheap price, I would be a very happy camper. I could use my iPhone to access the internet when I'm not home, and I could activate a WiFi hotspot so all my widgets would have internet access wherever I go! Sweet!

I went looking. Nobody offered exactly that (no surprise there), and, in fact, most of the cheap plans offered a little bit of LTE data, and then charged more if you went over your limit, even though you'd already be limited to 3G after using up your LTE quota. T-mobile came the closest, with a $20/month plan with 2 gigabytes of LTE, after which it drops to 3G (or, in some areas, 2G), but at no extra charge for all you can eat.

Act Four: I Travel To Barbaric Lands, and the Internet Forsakes Me

That's where things stood until I went to Seattle for Foolscap. I was staying with my friend Margaret, who, thanks to typical CenturyLink incompetence, had been without her land line and internet access for almost two months. She'd been surviving on crumbs of internet from her cell phone. The convention hotel wasn't much better. Non-guest access anywhere but the lobby was appallingly expensive. I had a couple of friends who could let me use their mobile hotspots, but if they left the room I was in, there went my internet connection as well.

As a result, I decided now was the time to act on my research. I took off from the hotel Saturday afternoon, went to the nearby mall, walked into the T-Mobile store, and said "I want your $20 tablet plan, please. For my iPhone. Yes, I know that it's a phone, and I know that the tablet plan means I won't be able to call people. That's what I want. So can I get that?"


Well, actually, it was more like "It depends" at first. According to my salesman, if he sold me a SIM card for a tablet and I put it in my phone, T-Mobile's cell towers would still be able to recognize that the device containing that SIM was a phone, not a tablet, and would not enable a connection. Unless, that is, my iPhone happened to be one originally purchased in another country? The T-mobile towers would not recognize a non-US phone's serial number, so they would assume it must be some kind of tablet, and thus allow the data connection.

"We've tried it before, and it just doesn't work." Igor had the grace to look embarrassed and somewhat apologetic about this, and in return, I didn't come right out and say that in essence T-Mobile had hardwired "behave like an asshole" into their cell towers.

"But you also have mobile hotspots, right?"


"So if I got the $20 data plan, and one of those, and then tethered my phone to it via WiFi, then everything wourld work?"


"So I'd have to spend $100 for the hotspot box, and then have to carry two devices around with me, even though there's absolutely no technical reason whatsoever why my iPhone couldn't serve just as well as the hotspot?"

"Pretty much."


"Ah! But we've got a special promotion right now: a free tablet!" As Igor explained, instead of buying a mobile hotspot for $100, I could get a somewhat larger mobile hotspot that happened to have a display and touch screen built into it, for free. Well, sort of free, in a sense. It was a $200 Android tablet, which I would have to pay for, except that for every month that I kept my T-mobile service, they'd make the $7 monthly payment for the tablet until it was paid off, two years later. If I cancelled the plan, I'd owe the balance. No interest, no penalty, just the remainder of the cost.

Although I'd still have to carry two gizmos, at least the 2nd gizmo was free, and possibly even useful. But could I perhaps get an iPad instead? It would be wonderful to have something even slightly newer than an iPad dead stop. Yes, but I would naturally have to pay for one, either $500-ish or $650-ish, depending on which model I wanted.

I see. I guess I'll just stick with the Android, then. Sigh.

Act Five: My Android Is Kissed By A Princess, and Is Transformed

Other than the "yet one more gizmo to carry" issue, the Android tablet-as-hotspot worked really really well. Oh my gosh, having the internet with me wherever I went was even more fun than I'd imagined. Whee! My phone, my iPad, and my laptop all had internet wherever I went!

Barely one day later I'm telling part of this story at the post-convention dinner, and Beth wants to see this new tablet of mine. It seems that her Nook had died recently, her niece had received an iPad upgrade for Christmas and had given her old one to Beth, and that while Beth appreciated that quite a bit, the hand-me-down iPad was a lot bigger than her Nook had been. It was big enough that Beth really wasn't happy with it as a Nook-replacement. My Android, on the other hand, was much more the form factor she wanted. Would I be interested in trading?

Oh, dearie me yes! But is the iPad a WiFi-only, or was it a cellular-enabled one? Aha! A sim-card slot! Disappointingly, popping my SIM card into the iPad didn't work, but by the time we discovered that, I was so excited about turning my Android into an iPad upgrade that I arranged with Beth to meet the next day and take the iPad into the T-Mobile store to see if they could figure out what the deal was.

As it happened, the solution to the problem was provided by a tech at the Apple Store, and was excruciatingly embarrassing to Beth, Igor, and myself. None of the three of us had thought to reboot the iPad.{slap forehead}

So presto! The Apple guy turns the iPad off and back on, and there it is! I immediately try to turn on the hotspot, but I can't figure out how to do that, which is odd, because I could have sworn it was right near the top of the Settings screen. And it is. On my iPhone.

Come again?

I didn't learn all the details until later, but basically, it was a case of Apple, Inc. being even bigger dickwads than T-Mobile. The iPad 2 used to have a hotspot function, but Apple took it away with iOS 7. Every other iPad was still allowed to be a hotspot, but not so the 2. Unless I jailbroke the iPad and installed a hotspot app on it, although, the Apple tech rushed to add, that would void my warranty.

I rolled my eyes. A lot. And explained that I had already repeatedly deleted the gigabyte-sized iOS 9 upgrade that my phone kept downloading, because Apple kept making the iPhones less accessible with each iOS release, and I was already sick and tired of being prevented from accessing my own d**m data on my own d**m phone, so I wasn't going to let it upgrade until I could jailbreak it. It has been well over a decade since the last time I thought Apple's warranty had more value than being able to open the case.

Act Six: Where Apple Doubles Down on Being A Jackass, and I Make a Surprising Discovery

It's more than a week before I have time to try jailbreaking the iPad. The whole "jailbreaking" thing is, to me at least, a weirdly un-Apple-like issue to start with. For crying out loud, they have released nearly the entire Mac operating system as open source! I've seen a super-cheap Asus notebook running OSX! Well, strictly speaking, it was running Darwin, which is the core that OSX is built on. But it looks just like OSX, and it's completely legal. Anybody who wants to can download the source and compile it for whatever they want to try to run it on.

But with iOS, it's quite the opposite. With the iPad which can't be upgraded past iOS 5, I can plug in a USB cable and look at the files on its flash drive. Starting with iOS 6, or maybe 7, Apple quit allowing programs to do that. They've never let people just load whatever they want onto the phone. No, you are supposed to only get apps via the iTunes store, which means only ones they've officially approved. Well, fuck you, Apple. I don't mind if you void the warranty and wash your hands of any obligation to help me fix my phone if I load any non-approved software. That's perfectly reasonable. But trying to force your opinion on me is utterly offensive.

And an asinine waste of resources. The hackers figure out how to circumvent Apple's childish selfishness. Apple releases a new version of the OS that blocks that avenue. The hackers find another one. Apple blocks that. Around and around and around.

So I start researching jailbreaking the iPad. The most recent iOS version they've broken is 9.0.2. Annoyingly, somebody had upgraded the iPad recently; it's got iOS 9.2.1 on it. I can't jailbreak it until I've retrograded the OS back down to 9.0.2. I try to do that three times, but I am unsuccessful. More research reveals that Apple started authenticating all the upgrades, and the iThings won't allow firmware upgrades that aren't authenticated. This has to be done by a server at Apple at the time of the firmware install, and Apple stopped authenticating 9.0.2 last October 15th. It isn't possible to jailbreak the iPad yet.

And, as it happens, I'm just as screwed over on my iPhone. As I mentioned, I have repeatedly refused to allow the phone to install updates, because there have been multiple occasions when I needed my phone to have functionality that Apple couldn't be bothered to provide, and I'm sick and tired of it. Thus, my phone is still held back to iOS 8.4.1.

Nobody ever released a jailbreak tool for 8.4.1. There are tools for jailbreaking 8.2 and 8.3, but it appears they didn't get around to smashing open 8.4 before 9 was released. Now that they've cracked 9.0.2, I can't imagine anybody's going to bother trying to break 8.4.1. And, since Apple won't authenticate anything before 9.1, I cannot upgrade the iPhone just a little bit forward. I might as well let it upgrade to 9.2, because somebody might eventually jailbreak that version.

Thus, after spending hours trying to make this work, I am left with a phone that has absolutely no reason at all to not be the hotspot except that T-Mobile won't let it, and an iPad that has absolutely no reason at all not to be a hotspot except that Apple won't let it.

I decide, as a lark, because it's so easy to do, to go ahead and pop the T-mobile SIM card that's in my iPad into the iPhone. I'm a little curious as to the form that the non-working-ness will take. Will it say "No Signal" or "Carrier Error" or make a funny farting noise, or what? Naturally, the iPad's set up for the mini SIM size but the phone takes the micro SIM. All SIM chips are the same; it's just a matter of extra plastic to make the actual chip fit in larger, older trays. I put the T-mobile SIM in my phone, and the Puretalk SIM in my iPad.

I haven't tried sending a text to the iPad yet, but the T-mobile SIM in my iPhone is not malfunctioning nor showing an error message.

In fact, It. Works. Perfectly.

Well, almost perfectly. I have driven around town stopping wherever I wanted, and accessed the internet on my iPad by connecting to my iPhone via WiFi as a hotspot. I've plugged a USB cable into the phone and a laptop, and the laptop's happily turned the phone into a cellular modem. I've even tethered the iPad to the iPhone via Bluetooth, instead of WiFi.

The only thing that doesn't work right is the voice over IP part! The VoIP client on the iPhone itself is unable to connect to the sipsorcery server. Clients running on gizmos tethered to the iPhone can reach the server, and if I call my Google Voice number, they will ring, indicating an incoming call. But! None of them can *receive* the sound packets! If I talk on the phone, the person on the other end can hear me, but I cannot hear them. Either T-mobile is somehow partially firewalling the VoIP packets, or there's an address conflict with the local subnet somewhere.

The Ironic Conclusion

So do I now have to try to figure out how to route my VoIP packets around the barrier, whatever it might be? No, actually, that problem is suddenly a moot point. Because the very next day after my epic battle with the iToys, I got an email message.

"IPKall will discontinue service on May 1."

Thus, some time in the next couple of months, I have to figure out some other means for bridging my VoIP calls to the telecom network.

Now, one might ask, since Google already has their Google Voice servers connected to the telecom network, why they aren't just bridging the phone calls onto the internet themselves. In fact, many people have asked exactly that. Google hasn't answered. I'm not surprised, because the only honest answer would basically boil down to "We're too lazy/cheap to do it." There was a 3rd party that had actually done the necessary coding, which some people were using to connect their VoIP to Google Voice without all the extra steps in between. Google bought that company, and shortly thereafter, they shut down the direct-connect service.

Oh, and even though my T-Mobile SIM card is a Tablet-only SIM (even though there's no such thing. It's just T-Mobile making bullshit up as an excuse for their behavior) it still has to have a phone number associated with it. Igor let me choose what area code I wanted. I told him "well, why not just use my existing number, since I'm going to drop the voice-plan anyway."

"No, we can't do that. That's a voice number."

I didn't say a word. I just gave him A Look. He replied with an embarrassed shrug; we both knew this was just more T-Mobile make-believe.

