Monday, November 14, 2011

The Microblogger's Dilemma

Will every site soon have a "status update" field? How can I possibly update all of them?!

This has passed trend and is headed straight toward pandemic. I suppose because it's so easy to implement, there is a proliferation of "what are you doing now?" status-update opportunities. Twitter. Facebook. LinkedIn. Google Plus. And those are just the biggest, most popular ones, not counting geolocation updates, which are a whole other set of sites (FourSquare, etc.).

The dilemma is ... which one do you update? If only one or two interesting things happen to you in a day, or a week, which site gets them? I solved this for a while by tying my Twitter account to LinkedIn and Facebook, but there is an unspoken disapproval of that kind of leveraged posting. You can't be a true Facebook devotee if you only post to Facebook via Twitter, right? And how could you possibly post both to Google Plus and Facebook?! That's heresy!

[An aside to you Facebook devotés ... yes, I know that they now like their name to be all lower case, but like Wall Street Journal editors, I refuse to follow all weird capitalization schemes, preferring to stick to my own journalistic standards].

If auto-reposting from Twitter to Facebook and LinkedIn is not cool, can I copy/paste the same thing to Facebook and Google Plus? What if most people, like me, are members of all of them? Won't they see through my ruse, and discount my "interestingness" because I post the same thing to all of my microblogs?

If I post different things to each site, what does that mean?  Someone who is interested in what I have to say now has to look in 3 or 4 places?  What a waste of time for them, and for me.

And yet, if you join Facebook, or Twitter, or Google Plus, and neglect them, that's the worst of all, right?

The paradox I find greatest of all is the parallel proliferation of "get funding quick!" sites for angel and VC funding. I have joined several recently, to look around, in pursuit of elusive angel funding. But if you run one of these sites, you don't want me to also be on a competing one (never mind that the founders of one site may well be active on the other). I actually got an email from somebody at one of them basically saying that I hadn't spent enough time on their site, or filled out enough data, or updated my status enough, and therefore I wasn't worthy of being recommended for investment. It should occur to them that the less time I have to update yet another web site, the more likely it is that I'm doing actual work worthy of investment. Myopic.

I am posting this diatribe, er, open question, to my old-fashioned blog (gasp!), which I have maintained since the middle of the last decade. It is at least persistent through all these trends, and it supports more than 140 characters. I will, of course, post a link to this through to all my microblogs -- and I will do little else. To feed the appetite of these microblogs, I must do that as rabidly as I once processed email (okay, okay, I still rabidly process email).

I guess my point is ... are we being asked to declare our allegiances to particular vendors/technologies by where we choose to update our status?  What an odd result of a weird little microtrend.

Friday, November 11, 2011

Open Letter to Web Site Developers

Dear Web Site Developer (or misguided management):

Please don't do any of these things on your web sites:
  1. Ask me to enter my email address twice.
  2. Tell me what characters I can and can't use in my password.
  3. Time out my sessions for my own protection.
  4. Make me change my password every so often.
  5. Make every field in your form *required.
  6. Make it impossible for me to change my email address.
  7. Insist that I provide you with a security question and answer.
I know how to type my email address, I know more than you do about creating a secure password, and I do not forget my passwords. You have meetings where you talk about "reducing friction" for people joining your sites. But you create friction every time I log in, not just when I sign up.

If you are a bank, and your page times out after 5 minutes and I have to log in AGAIN, inside my highly secure physical location with no possible access to my computer by anyone but me ... are you protecting me, or irritating me?

Glenn Reid

Tuesday, July 12, 2011

Expert Culture vs. Ease of Use

There is a phenomenon I call an "expert culture" where things that are hard to understand and use become popular precisely because they are hard to use. Once you figure out how to use something complicated, you become an "expert", and it feels good to be an expert. You help other people because it makes you feel smart, and then they learn, and then they are an expert too.

Conversely, products that are easy to use and have few unnecessary features are often dismissed as trivial or underpowered.

This is a fascinating and bizarre contrast, and it is very counter-intuitive. We are all led to believe that things that are Easy to Use get adopted, and complicated things are eschewed. There are many counterexamples to this, although Apple products are perhaps an existence proof that at least somebody buys Ease of Use.

This occurred to me as I was deleting some early posts on Google Plus that were open questions, trying to figure out how Google+ worked. Valid points, I felt, and reflective of a "newbie" experience on a new platform. I was deleting them because I felt foolish for having posted them, and I realized that Google Plus is an expert culture, and Facebook is "for the rest of us".

Circles alone, in Google+, are really complicated, even once you know how they work. Consider this graphic representation of the rules for who can see your post on Google+. If that's not an expert culture, I don't know what is. At least half the posts I have seen go by on Google+ are in fact about how to use Google+. That tells you something too.

[I started posting this on Blogger because it's essentially a blog post, and I may finish it there too. But I wanted to see if this medium could replace blogging completely. I don't think so, not quite yet. I don't have enough control, can't set a title, and I can't embed links and things like that. Maybe I'm just used to those things, and blogging shouldn't rely on them. We'll see.]

Sunday, July 03, 2011

JIT vs Buffering

I've been studying Just-In-Time techniques for business and manufacturing and process (JIT, or the Toyota Production System). There are fascinating parallels between manufacturing and computer science, where JIT is also used to some extent. I have some thoughts to add to this discussion, and no place to add them, so I'm blogging about it.

JIT has two things at its core: predictive forecasting, and signaling. Predictive forecasting is easy to understand: the more predictable your task is, the leaner your process can become, because you know what is going to happen. Signaling, or Kanban, is an abstraction for somebody raising their hand and shouting: "hey, we're out of rivets over here!" Both are necessary for JIT, and they are to some extent opposites of each other. If you knew exactly what was going to happen, you would have no need for signaling.
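That "raise your hand" signal is simple enough to sketch in code. Here is a toy Python version -- the bin size, reorder point, and part names are all invented for illustration -- in which a consumer draws parts from a bin, and the Kanban signal goes up when stock falls to a reorder point:

```python
from collections import deque

REORDER_POINT = 2   # shout for more when stock falls to this level
BIN_SIZE = 5        # parts delivered per replenishment

def consume(bin_, signals):
    """Take one part from the bin; raise the Kanban signal if stock is low."""
    part = bin_.popleft()
    if len(bin_) <= REORDER_POINT:
        signals.append("hey, we're out of rivets over here!")
    return part

bin_ = deque(["rivet"] * BIN_SIZE)
signals = []
for _ in range(4):
    consume(bin_, signals)

print(len(bin_), len(signals))  # → 1 2  (one part left, signal raised twice)
```

Note where forecasting sneaks back in: the reorder point is set high enough that replenishment arrives before the bin actually empties, which requires predicting roughly how fast parts get consumed.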

The real world is, of course, somewhere in between, and the process you design is at the mercy of the business problem you're addressing. Toyota developed this process for manufacturing cars, where demand isn't completely predictable, but it has a little bit of built-in time padding -- a car does not need to be finished within 3 hours (approximate assembly time) of ordering to meet customer expectations.

Banking, on the other hand, is essentially real time. If a customer asks for their money, you have to give it to them. There is a complex JIT system behind banking that tries to predict this demand and provide the money "just in time" from thin reserves at branches and central banks, but a failure in predictive forecasting of JIT banking is considered bad: a "run on the bank."

I like looking at problems from both ends. If you consider real-time phenomena that don't quite work because of the inability to perfectly forecast demand, like banking, computer networking, restaurant management, and the flow of water and electricity, you see that buffering (inventory) is introduced to smooth out the flow and meet demand in real time.

In manufacturing and retail, where inventory and stock were presumed, the effort went the opposite way: remove the buffering and provide real-time supply -- JIT manufacturing.

I believe that these are opposite sides of the same coin. An inventory buffer can reach zero, the effect being that a signal is thrown, and the process waits for inventory to be delivered. [It is worth noting that this is exactly how computer networking works, with signals and everything].

Here is my thesis, then:
All process has buffering (inventory) and variable demand. Optimization is the same in all scenarios: you are simply adjusting the size of the buffer.
This is commonplace in computer networking. You explicitly state the size of a buffer, and you can adjust it. 2k? 20k? 200k? Depends on what you're doing (chat vs downloading movies), and on, yes, how predictable demand is. Chat has low volume and a high degree of unpredictability; video download is completely predictable and high volume. The buffers are much larger for video download than for chat.
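To make the chat-vs-video tradeoff concrete, here is a toy simulation (every number in it is invented) that counts how often a buffer of a given size runs dry -- each dry spell being a thrown signal -- under steady versus bursty demand:

```python
import random

def stalls(buffer_size, demands, refill=3):
    """Count how often demand hits a buffer that can't cover it (a thrown signal)."""
    level, misses = buffer_size, 0
    for d in demands:
        if d > level:
            misses += 1          # buffer ran dry: signal, wait for a full refill
            level = buffer_size
        else:
            level -= d
        level = min(buffer_size, level + refill)  # steady resupply each tick
    return misses

random.seed(1)
steady = [3] * 100                                        # video download: predictable
bursty = [random.choice([0, 0, 8]) for _ in range(100)]   # chat: quiet, then a burst

for size in (4, 16, 64):
    print(size, stalls(size, steady), stalls(size, bursty))
```

Under perfectly steady demand even a tiny buffer never stalls; under bursty demand, a larger buffer absorbs the bursts and stalls less -- which is exactly the adjust-the-buffer-size optimization described above.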

Let's consider another example: a team of roofers putting new shingles on a building. There are several intriguing layers of process and signaling and buffering in this seemingly straightforward task.

First, the size of the roof is known, so it would seem that the raw materials can be supplied in exact quantities. But roofers always order a little more material -- nails, shingles, and tar paper -- than is strictly necessary. This allows for the unexpected (unpredictability in forecasting), including errors and damaged materials. The less waste the better, so the amount of extra material is adjusted for each job, trying to match the challenges of the roof geometry to the skills of the crew and the number of shingles per package. Can you do the job with nothing left over, and no trips to the hardware store?

Next there is the transportation issue. Some roofing contractors lift all the shingles up onto the roof ahead of the job, spacing them approximately according to coverage. This is an extreme of buffering in advance. Other contractors have somebody yell for more shingles as needed (signaling) and have somebody run up the ladder with another pack of shingles. Which is better? It depends on what you're optimizing. Pre-placing the shingles suggests that the application of shingles is more skilled labor, and more expensive, and should never have to wait for the [cheaper] labor to carry the shingles to them. But if the nailer yells before he/she is actually out of shingles, then the more skilled resource is never idled, and the effect is the same. Or more correctly, the optimization moves upstream, to the difference between serializing the tasks (putting all shingles on the roof before you start) and the cost differential between renting a boom lift truck vs a crew of people to carry shingles up the ladder.

The reason I thought of roofing as an example is that it has buffering in unexpected places. How many shingles does a roofer take out of the pack before walking over to the area currently being roofed? Buffering. How many nails does the roofer take in hand at once? Buffering.

The size of a buffer is often artificially constrained to be the size of a roofer's hand, or the available shelving space in a warehouse. To the extent that the "natural" buffer size that would best optimize the process does not match up with the actual size of the buffer available, you get inefficiency.

This mismatch between physical buffer capacity (tank size, shelf size, memory chip size) and the optimal buffer size for a given process is, I think, a very profound issue at the heart of many, many serious problems.

Sunday, June 05, 2011

Why Podcasting Never Really Happened

Back in the day, I predicted that podcasting was not really "a thing" and would go nowhere. Nobody really agreed with me, but I believe that now it is safe to declare podcasting officially dead. (Blogging is almost dead, too, so this posting is perhaps paradoxical).

Here is why podcasting (as an authoring paradigm) never happened: audio is the same as video. Except not as good.

Video and audio are both time-based media. In fact, audio is simply a subset of video. You can see this on YouTube by finding a song you like, posted with lame photographs layered on top of it as the "video" part.

But it is actually much harder to edit audio than to edit video, because there is nothing to look at when you're editing (almost nothing -- you can get waveforms that help a little, but they're pretty hard to use effectively). Video has cues and transition points and also audio, so it's just easier to edit. Period.

So, audio is less good, less interesting, and harder to produce and edit than video, and it takes just as long to consume. Why would it ever become a popular consumer authoring medium? Exactly.

It's actually easier to understand the value of black-and-white TV after seeing color TV than it is to understand the value of audio-only on your computer, or iPod. Podcasting failed, and instead, iPods now all have tiny video screens!

Tuesday, April 12, 2011

The Truth about Multitasking

I've been reading recently about scientists studying multitasking and declaring that we're no good at it. I have something to say about that.

First, some background. Multitasking is a word borrowed from computer science and reapplied to life, which doesn't happen that often. At the very heart of a computer's operating system is a switching mechanism for doing a little bit of work on each of many running programs, in a "round robin" approach, so that each of them gets a little time slice. No one program gets to run unbridled for any length of time. Processors are so blindingly fast that you can't tell the difference. A typical CPU today can execute billions of instructions per second, so the switching back and forth between, say, your browser and your email program and updating the display happens so many times per second that you couldn't possibly perceive it.

But yes, the computer is multitasking. And guess what? It doesn't come for free.

To switch back and forth between the many processes running on your computer, the operating system does what is called a context switch. This is important, so pay attention. A context switch isn't just jumping around between processes. The CPU also needs to store the context of each process, so it can come back to it later. The context is as little as possible -- a bunch of memory locations, a few details of what the processor was doing, and things like that. It's not unlike the folders on your desk that contain the context of your human tasks -- the bills you need to pay, or the phone number and resume of the job candidate you're about to call.

There are decades of research on operating systems that try to trim down how much context must be saved when switching between processes. The less, the better, with a huge multiplier in efficiency for every little bit you don't have to store, because the CPU has to physically copy the information back and forth every time it does a context switch. So do you, by the way, when you multitask. Understanding this, and making your context storage small and efficient, helps a lot with multitasking efficiency. If you have to open up your Word document, or find the phone number, every time you get back around to a task, the startup cost is too great, and you spend all your time thrashing (a real computer science term, believe it or not) and not doing real work. It feels like real work, but if you're just opening and closing folders (contexts) you're not doing anything useful.
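A deliberately tiny sketch of that store-and-reload cycle, in Python -- nothing like how a real kernel does it, and the names and fields are invented -- but the shape is the same:

```python
# A process's saved context: the minimum needed to resume it later.
class Context:
    def __init__(self, pc, registers, open_files):
        self.pc = pc                  # where we were in the program
        self.registers = registers    # a few details of what the processor was doing
        self.open_files = open_files  # the "folders on the desk"

def switch(current, saved_contexts, next_pid):
    """Store the running process's context, then reload another one."""
    pid, ctx = current
    saved_contexts[pid] = ctx                       # the store...
    return next_pid, saved_contexts.pop(next_pid)   # ...and the reload

# Process 1 is running; process 2 was parked earlier and now gets the CPU back.
saved = {2: Context(pc=10, registers={"ax": 1}, open_files=[])}
pid, ctx = switch((1, Context(pc=5, registers={}, open_files=[])), saved, next_pid=2)
print(pid, ctx.pc)  # → 2 10
```

The copy in and out of `saved_contexts` is the whole cost of the switch; the smaller the `Context`, the cheaper it is -- for CPUs and for people.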

Now here's where it gets interesting.

Computers don't just switch evenly between all processes. They look around to see where time is best spent. For example, many processes are blocked on I/O (input/output). That means they're waiting for a file to be opened from your disk, or something to come back from the "cloud" over the network, or any of various other wait states that are commonplace deep inside a computer. If a process is blocked, you don't give it any time. Simple as that. More CPU time for the other processes.
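That skip-the-blocked rule shows up clearly even in a toy round-robin scheduler (the process names here are invented), which hands out time slices and passes over anything blocked on I/O:

```python
def run_round_robin(processes, ticks):
    """Hand out one time slice per tick to the next runnable process in the ring."""
    work = {pid: 0 for pid in processes}
    order = list(processes)
    i = 0
    for _ in range(ticks):
        for _ in range(len(order)):      # scan at most once around the ring
            pid = order[i % len(order)]
            i += 1
            if processes[pid]["blocked"]:
                continue                 # blocked on I/O: no time slice for you
            work[pid] += 1               # one time slice of real work
            break
    return work

procs = {
    "browser": {"blocked": False},
    "email":   {"blocked": False},
    "backup":  {"blocked": True},    # waiting on the disk the whole time
}
result = run_round_robin(procs, 6)
print(result)  # → {'browser': 3, 'email': 3, 'backup': 0}
```

The blocked process gets nothing, and its share of the six ticks is silently redistributed to the two runnable ones.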

See what I mean, about getting interesting?

In the real world, our I/O is blocked all the time, too. We're waiting for somebody to call us back. We're at a stop light. We're waiting for the DSL modem to reboot. We're standing in line at a store, or even worse, at the DMV.

Human beings are not stupid. We know, as a species, that we can use our blocked I/O time for something more valuable. It is why we text madly at stop lights, or talk on the phone at the DMV, or do things that seem anti-social to old people, but are good time management when done well.

When scientists study "multi-tasking" in the lab, they do it abstractly, with tasks that don't exist in the real world, and have very few wait states. So yes, it is less efficient to switch frequently between truly parallel tasks that have no wait states. You spend time context switching and lose efficiency. But that's not how the real world is, and that does not seem to be accounted for in these studies.

Consider the teenager listening to music, texting, talking on the phone, and doing homework. Crazy? Good multi-tasker? If you dissected this scenario carefully, you would see all kinds of wait states. Texting has big holes in the timeline, waiting for somebody to reply. Even talking on the phone is like that. You can tune out for a few seconds and the person you're talking to doesn't notice, or care. With some people, the talkers, you can help yourself to 30 seconds of not listening and they won't even notice. Listening to music can be a background task, requiring a different part of your brain than conscious thought. Gee, there actually is a fair amount of time left over for homework, as long as it's the kind of homework that doesn't require your frontal cortex (which is, alas, most of it): coloring in the graph for biology; searching for something on Google; writing your name and the date at the top of each piece of paper.

In the study I linked at the beginning of this article, the "multi-tasking" was forced on people by interrupting them with annoying tasks, which did not allow them to make the context switch themselves, or save state. That's not a useful measure of multi-tasking; it's the reason we don't answer our phones and close the door at the office. Interruptions, I claim, are not a form of multi-tasking.

Anyway, I think it's time somebody did a study on human multi-tasking with the ideas of context switching and wait states deeply embedded in the study. I guarantee they would get very, very different results.

Friday, February 18, 2011

Cost cutting: rolls of toilet paper? Really?

It was bad enough when Cheerios boxes suddenly became 20% thinner than they used to be. Same price, less content. There was a rash of that when the economic downturn really hit. I understood that it was probably necessary, but found it discouraging.

Today, I discovered that the same brand of toilet paper that I've been buying forever (Scott) has made their rolls 3/8" narrower than they used to be. That may not seem like a problem, except that the toilet paper holder at work is one of these designs:

And the roll simply doesn't fit! It falls right out of the holder:

Here's the difference in the old and new sizes. It's quite significant. In the words of Wayne Campbell: "Shyah, as if we wouldn't notice!"

Luckily, I have a machine shop, and I'm not afraid to use it. And I have some inventory of copper tubing of just the right diameter: