Wednesday, April 02, 2014

Origins of the Peace Sign


I found this fascinating: the Peace Sign, which we all take for granted as an icon, was designed by somebody, on purpose, to advocate nuclear disarmament:
"It was invented by a member of our movement (Gerald Holtom) as the badge of the Direct Action Committee against Nuclear War, for the 1958 Aldermaston peace walk in England. It was designed from the naval code of semaphore, and the symbol represents the code letters for ND.'"

Borrowed liberally from this site: Peace Sign History

Thursday, August 08, 2013

Mediocrity Creeps Back: A Review of iPhoto '11

iPhoto, You've Changed...

I led the original team that built iPhoto 1.0 (I have the custom-printed T-shirt and scars to prove it). I still use iPhoto and I still love it, 12 years later.  But I also have some criticisms.  For the record, I was responsible for iPhoto 1.0 through (I think) iPhoto 4.0, but I have not worked at Apple since the end of 2003.

I just now started using iPhoto '11 because I have resisted updating to Lion and Mountain Lion (I still like Snow Leopard better). But I bought a new laptop, so I'm using all the new stuff, like it or not. I mention this only because my perspective on iPhoto '11 is fresh, having just started using it.

Overall, my impression of iPhoto '11 is that it has some interesting new features that I won't use, and some old ones that I wish would go away (Face Recognition), but worse, the user interface has gradually degraded. It is pretty, and it is "easy to use", but it does not strike the right balance between power and ease of use.  It just isn't as good as previous versions, for the same level of functionality.

The mantra for iPhoto 1.0 was essentially that the user interface should disappear — photos are something you look at, so you want a very visual interface, with more photo, less UI.  This is the balance that is largely missing in iPhoto '11.  There is much more UI, and a lot less Photo.

The biggest problem is that the tools in iPhoto '11 are inside the area used for the photo itself, so if you click on Info or Edit, the photo gets [dramatically] smaller to make room for the tools.  This is maybe good for "ease of use", but bad for "usability".  The buttons in the Edit panel are way wider than they need to be, because the Info/Edit space is a fixed size — what a waste of space for labeled buttons that say things like "Crop" and "Adjust".

In the screen shots here, you can see the photo before and after the Info button is clicked, and how much the photo is reduced in size.



The whole point of iPhoto is the photos -- they should be as large as possible.  Calculating the area in pixels, I see that the photo occupies only 42.9% of the pixels within the window.  Less than half of the area of the screen is devoted to the photo!  It's much worse when a photo is in Portrait orientation — only 30% of the available pixels are used for the photograph!
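
The arithmetic is simple enough to sketch, if you want to check it on your own screen (the dimensions below are made-up placeholders, not measurements from my screenshots):

    # Back-of-the-envelope: what fraction of the window do the photo's pixels occupy?
    # The dimensions are hypothetical placeholders, not measured from the screenshots.
    def photo_fraction(photo_w, photo_h, window_w, window_h):
        """Percentage of the window area covered by the photo."""
        return 100.0 * (photo_w * photo_h) / (window_w * window_h)

    # A landscape photo squeezed by the Info/Edit panel:
    print(round(photo_fraction(920, 690, 1440, 1024), 1))   # 43.1 with these numbers
    # A portrait photo wastes even more of the window:
    print(round(photo_fraction(460, 690, 1440, 1024), 1))   # 21.5 with these numbers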



A couple of versions back, the Adjust user interface was a floating panel, and the other features (like Crop and Enhance) were along the bottom bar, using far less screen real estate. The functionality of this new vertical strip of space is the same, but its use of space is dramatically worse. This is a step in the wrong direction. I know how decisions like this are made: in the name of "consistency".  Put all the features in the same piece of real estate, because they are similar.  But this is a programmer's point of view, not necessarily a user's point of view, and if it has consequences like reducing the amount of space to display the photo (and also changing the view when you Edit, so it changes size up and down as you're viewing) then it is a bad decision.  To quote the late Steve Jobs, "consistency is overrated."

Next topic: Manage Keywords:



Where did these keywords come from? I certainly didn't create them, and I don't want to look through them, much less use them.  What the heck is this?!  It's surprising, confusing, and useless. The keywords bear a vague resemblance to some of my photos, including words such as "barrel" and "cloud", leading me to believe that there is some kind of feature recognition going on — like face recognition but for barrels and clouds — that suggests these keywords for me to use.

Really?!

And if the features are all now in the window as Edit and Create and Add To (which I don't like, obviously), why are Keywords in their own floating panel?  Why not put Adjust back in a floating panel, which is better than where it is now, since I can see my photo better?  There isn't much cohesive thought going into these features, or their arrangement.  I suspect Design By Committee.

I could find fault with many more features, as everything I look at has gotten slightly more cluttered, less good, or otherwise muddled, but I will stop there on the laundry list, and consider the more philosophical underpinnings of these choices...

Tradeoffs

Software design is about making tradeoffs: space vs. accessibility, speed vs. fidelity, ease of use vs. power. We thought a lot about these issues 10-12 years ago, and struck a good balance where you mostly saw your photos, and didn't have a bunch of useless features.  I think that balance has gradually eroded since then, each release being slightly less good than the one that came before it. It is amusing, and saddening, to see some of the tradeoffs that we made so long ago being reversed — with the outcome that was predicted those long years ago:

Maybe it's slightly easier to find the Adjust controls (which you only need to do once ever), and it's more consistent now (which doesn't matter that much), but the photo is now a lot smaller when you're editing it — and that's not worth it!  Bad tradeoff.

Sort Photos...

One of the few anecdotes I tell about Steve Jobs is from iPhoto 1.0, when we were just about to ship it. And I mean just about to ship it!  It was December, and we were in Golden Master Candidate 3 or something close to that.  No more changes, other than very high-priority bug fixes, and those only cautiously.  We had a Sort Photos submenu, just exactly as it appears in iPhoto '11:



Steve was going through the menus one final time before we shipped it, and he stopped on this submenu.  The conversation went something like this:

Steve: "What is this menu for?"
Glenn: "So you can sort your photos by different things."
Steve [looking through them]: "They are sorted by Date by default, right?"
Glenn: "Yes."
Steve: "Get rid of that menu item" [Sort By Date]
Glenn: "Okay."
Steve: "Why would you want to Sort By Caption?"
Glenn: "I can't think of any good reason to sort by caption"
Steve: "Get rid of it."
Glenn: "Okay."
Steve: "Why would you want to sort by any of these other things?"
Glenn: [some lame possibilities provided]
Steve: "Get rid of the whole menu."
Glenn: "I can do that easily, as you know, in Interface Builder — but the documentation, particularly the localized documentation, will need to be changed, too, and we don't have enough time for that."
Steve: [after a few moments thought]: "Fuck the French and German documentation."

So of course we made the change, and of course Cheryl Thomas' team managed to update the French and German documentation on time anyway, by working late hours, and we shipped it without the Sort Photos submenu.  I realized that Steve was right, that you really didn't need to sort your photos by this and that, when there were already so many other ways to organize and view your photos, and probably few people would ever use the Sort Photos menu, and all it did was clutter up the application.

So it seems odd to me that the Sort Photos menu is now back, and Steve is gone. It makes me sad, considering both of those points.  Will mediocrity start to take over, now that he is gone? It is as though Sort Photos won out, in the end.

Friday, July 26, 2013

Mobile is the Future! Or is it?

I hear this every day: "The future is mobile."  Yahoo is getting on that train now; everybody is getting on board.  I am not so sure this is a smart bet in the long term, unless you're betting against the U.S. economy.  Here's why I think that.

The information economy is a combination of two basic things:

  1. Doing work and producing things.
  2. Talking about work, and talking about producing things.
The United States, as a whole, has been transitioning from (1) to (2) over the past 50 years, although we are also, to some extent, reinventing the ways in which we do work and produce things, in digital terms. But the transition away from production, toward "making money as a side effect of other people producing things," is showing pretty strongly.

After everything is said and done, there's always a lot more said than done.

You can make a case that mortgage-backed securities, insurance, Gmail, Facebook, Goldman Sachs as a whole, and Skype are all Category 2 activities, as are the meetings in which we all spend half our days, talking about what we're going to do.

You can also make a good case that Tesla Motors, Make Magazine, SpaceX, and Google Glass are proof that we're staging a resurgence of our Category 1, production-based roots.

But this is all backdrop against which I am considering the computer industry itself, the purveyor of tools for the information economy: where is the growth market for people using computing technology: Category 1, or Category 2, and which way is it trending?

I believe that real computers -- laptops and desktops -- will hold a firm grip on the part of the economy that actually produces work product, whether it's manufacturing or spreadsheets, images, video, or the written word. You need a real keyboard, a big screen, and a file system to "do work".

Mobile platforms, on the other hand, are great for communication, keeping up in real time, touching base, chatting, updating status, checking in. You just can't actually type a paragraph or edit anything meaningful.

The real reason that Mobile is a high-growth area right now, outselling computers and seeming like a trend, is that people are backfilling a void that has existed, where these tools were not part of the workflow -- and also that the phones/tablets themselves have short-life obsolescence built in, so you need to replace them a lot more often.

But I simply do not believe that mobile technology is replacing desktop/laptop technology. I think it is augmenting it, as a better communications platform, which is why the phone itself has proven to be the perfect platform for this: it's communications technology, not work-producing technology.

If you believe that people doing real work and producing things is disappearing and we're all going to just be talking about it on our tablets and phones -- and if you're actually right about that -- then I'm moving to some other country.

Wednesday, June 26, 2013

Blow up your TV. Throw Away Your Paper. Move to the Country.


I hear a lot of complaining recently about Google and Facebook and how evil they might be, and how they are taking over the world, and invading your privacy, and blah blah blah.

News flash: using a credit card is far more invasive of your privacy than anything on the internet. They know everything about you at Visa, and they sell it to everybody else.  It has been going on for a long time, and you don't seem to care. Facebook has very little information, comparatively.  Quit worrying about it.

The internet is the TV of the new generation. Remember when hippies were yelling about turning off the "boob tube" and how it was rotting our collective brains and how TV alone would destroy modern civilization?  Farmville is equivalent to "Green Acres".  The internet is equivalent to TV -- it is an entertainment medium, supported by advertising. The only real difference is that TV has become really expensive (yet still ad-supported) and the internet is still mostly free.  Though I suppose Comcast has their hands in both of those pockets, don't they?

You don't have to use Google search, or Gmail, or Google Calendars, or any of the rest of it.  You don't.  But you can if you want to, and it's [mostly] free, so why not?

It's just that simple.  People need to shut up and make their choices.  Watch TV if you want to.  Create a Tumblr if you want to.  And if you don't want to, or are worried about your privacy, then don't.


Tuesday, May 07, 2013

Adobe Creative Cloud 1 - First Impressions

Adobe is really launching the Creative Cloud with CS6.  I decided to have a look.  I'm a very long-term power user of Adobe products, so I have been very curious to see what the Creative Cloud is all about.

The first thing you have to deal with (note that I didn't say "enjoy") is the Adobe Application Manager.  It's a little bit better than its predecessor, but still seems to think that it is better at downloading apps than, say, the browser. I'm not sure I agree.  It also has some issues (see below).

I have a pretty strong feeling that it brought down my internet connection, which is normally rock solid.  I clicked on a large number of apps to download (who wouldn't, with a new Cloud offering?), and after a few minutes and one successful download/install, the internet connection was hung up so badly that I had to hard reset the Comcast router.  Maybe that was a coincidence.  I started a whole bunch of downloads after reboot, to see.

The next thing I noticed is that two of the apps that I downloaded -- directly from Adobe -- needed updates immediately after I downloaded them!  What's up with that?!

Adobe Lightroom 4.4 shows that it downloaded, but the "Launch App" link is grayed out, and the app seems not to be present in the Applications folder (!?).


This is a fairly major problem because (1) it seems not to have installed the app, and (2) I can't click "Install" because there is no Install link.  Re-launching the Application Manager shows Lightroom 4.4 as "Up to date" but it's not in Applications and I don't believe it's installed.  This seems like a MAJOR BUG.  It also shows that Photoshop and Illustrator and Flash Professional and Premiere all need updates.  Aaaargh!  I *just* downloaded them, guys!

The installer should have links that take you to a page describing these apps.  I didn't know what Prelude was, or Muse, or Audition, and there was no way to find out from the Application Manager, which is where I was making the download decisions.  The Application Manager also has a fixed-size window.  Why?  It has a scroll bar (more content than will fit) and I have a huge screen. It's annoying to have to scroll to see things that I have plenty of room to see.  How hard is it to make the window resizable?  They even disable the green "maximize" button, which is harder to do, programmatically, than to actually make the window resizable.  And the names of three of their apps don't fit in the window without ellipses (...), which would display easily if I could resize the window (see screen shot above).  A little thing, perhaps, but after 25+ years of app development, you'd think Adobe would be leading the way on UX design, not trailing behind the pack on basic usability issues.

The next thing I noticed is that Dreamweaver CS6 is a lot like CS5.  A lot.  I'm not a big Dw user, but I haven't noticed any differences at all so far.  It's the first app that downloaded so it's the first I'm trying out.  I'll try some of the others.

Hmmm.  There are some serious glitches in mainstay applications that I discover within seconds of using them:

  • If you drag a file onto Photoshop to open it, the app launches but does not open the file.  This behavior has worked (on the Mac) for 20 years. After it is running, if I drop the same file onto Photoshop, it opens.  Bug.
  • The cursor in Illustrator displays with a really bad artifact.  I suspect it's because my "graphics card is not supported", which I learned from Photoshop, but not from Illustrator.  I have a Mac Pro that is about six years old, and works with literally every other app that I have ever tried.  Sorry, Adobe, but "fail".  Below this list is a screen shot of the cursor artifact.  Unusable, right?



All in all, I'm not terribly impressed by the quality of this major release.  Luckily I still have my CS5 apps installed.  I think I'll be going back to them, and possibly canceling my subscription -- a definite downside to the subscription model.


Wednesday, May 01, 2013

Starbucks Pisses Me Off [and is forgiven]

I have been a VERY loyal Starbucks customer for many years, as anyone who knows me will attest. I say good things about them.  I am a brand ambassador for them.  I go to Starbucks every day, though I took a few months off for health reasons at the end of last year.

My drink for the past couple of years has been a "triple short no-whip mocha" which seems to fall into the cracks of their policies, because I like a small amount of milk, but it costs as much as a Venti with 3 shots.  I pay $4.65 for an 8-ounce coffee drink.  Every day.  By conservative estimate, I spend about $1500/year at Starbucks.

But that's not what I'm pissed about.  That's just background.

A few years back, I bought a Gold Card when they cost $25.  It gave me a 10% discount on all purchases.  I loved it.

Since then, I've watched as they changed the "rewards program" to reward me less and less for my loyalty.  First they got rid of the 10% discount in favor of a free drink every time you bought 10 drinks, but they mailed you a post card to redeem the freebie, knowing that few people would get it together to redeem the post cards. Then it was 12 drinks, now I think it is up to 13 drinks.  But it gets worse.

There now is some kind of minimum threshold you have to meet to maintain "gold status", which is 30 stars within some time frame.  As if somehow I am no longer a "gold customer", which by most retail standards I certainly am.  They have my history in their little computers, since I always use my card (now the app) for my purchases.

Here's what pisses me off.  Yesterday I was getting close to a free drink, I noticed in the little app.  Two more stars to go.  Woo hoo!  Today I bought a $4.65 mocha, a panini sandwich, and a water, and used my app to pay....

I got an incredibly unfriendly alert that popped up and said, "You have failed to meet the minimum criteria to maintain membership.  Your reward stars have been reset to 0."

I stared at it in disbelief.  This is how they reward their best customers!  Gamification is one thing, but actually penalizing me for not earning 30 stars in some arbitrary amount of time -- and how exactly am I supposed to do that if you reset my count to 0?

This is appalling to me, and actually made me upset, right there at the cash register.  I got totally, completely pissed off at Starbucks, and vowed to boycott them.

Is that what you want with your "Rewards" program, Starbucks?  To piss off one of your best customers with your little star program, to the point where he doesn't want to come back into your stores, and will be considering Peet's or some other worthy competitor from now on?

Congratulations to your brain-dead rewards marketing team for totally screwing up what once really did feel like a "gold" program, and made me happy to buy coffee at absurdly inflated prices.  No more.  No more.






[Epilogue/update: 5/2/13]

A friend happened to post something on Starbucks wall at almost the same time yesterday that I posted this, complaining in almost the same way about no longer wanting to remain loyal to Starbucks.  I put a comment on her post, mentioning this blog entry.  I think a Starbucks employee must have read my comment, and this blog post, because I got the below email today.  I am undecided whether or not this suggests great customer service and I am happy again (a possibility) or whether it is impersonal (no actual contact from customer service, just this email) and lame and would not have happened had I not complained publicly.  Whether or not I am Gold is somewhat beside the point -- they need to seriously revisit the reward system because it is not, in fact, set up to reward actual loyal customers; it's more like a video game where you can periodically get a "game over" screen and have to replay the whole level.  Who wants that in a coffee rewards system?

Here's to another glowing year at Gold. Raise a mug to celebrate.
Thanks for staying. The 30 Stars you earned keeps you at the Gold level another year. Here's to another year of rewards galore.

You're on a roll, so keep earning those Stars. Another 30 within a year keeps you at Gold level for yet another year. We're hoping this will be a happily ever after type of thing.

[Epilogue/update: 5/22/13]

I've decided I forgive Starbucks, and have reloaded my card twice since posting this article.  The baristas are great, and overall it's a great company.

Tuesday, October 09, 2012

Tanks


Tanks, a set on Flickr.

I took these photos a while back when touring "Pony Tracks Ranch", the largest private collection of tanks and heavy artillery in the U.S., and maybe the world. It's now the Military Vehicle Technology Foundation.

Wednesday, August 01, 2012

The Science of the Full Moon

By the time I finish this blog post, it should be exactly a full moon: 8:28:30pm, according to my iPhone app.

I am a very scientific thinker, yet I believe in the power of the full moon.  Why?

Scientists observe things carefully and try to draw correlations and conclusions. Some of them we can prove, some we can't. But I am really good at noticing and recognizing patterns, and that's where science starts: observation and pattern matching.

It is my observation that people are weird around the full moon, and more passionate, and more impulsive, and more romantic.

Does the moon cause this?

Not directly, as in gravitational pull or tides or anything like that. But we all see and experience the moon, and it affects us, like daisies or sunshine or the ocean. In that sense, yes, the full moon affects human behavior and makes us slightly crazy ... in a good way.

I will drive up winding Highway 84 tonight and try to see the moon at every opportunity, and I will think of my Mom, in Maine, who just finished doing the very same thing.  She loves the full moon.


Wednesday, July 11, 2012

Beginnings of iMovie

I just ran across an email exchange between me and Steve Jobs from 1998, in which the very beginnings of iMovie are envisioned.  It was from a discussion before I was hired into Apple to actually build iMovie.
I was ridiculously long-winded, and Steve was very terse.  That's how it was. Many years later I saw him typing an email and understood why he was always so terse: he was a very slow, two-finger typist!  I know, hard to believe.
Here is the email, unedited.

-------------------------------------------------------------------------

From: Steve Jobs
Date: Mon,  8 Jun 98 12:32:41 -0700
To: Glenn Reid
Subject: Re: questions

I'd also love to just drag special effects on my video timeline and have
it know what to do.  Automatically find the nearby splice and install the
transition effect perfectly and all automatically.


Steve


X-Sender: rtbrain@206.184.139.133
In-Reply-To: <<9806060028.AA00332@ni-master.pixar.com>
Date: Fri, 5 Jun 1998 18:37:46 -0700
To: Steve Jobs <
From: Glenn Reid <
Subject: Re: questions

>I think Avid Cinema is the closest and best thing out there.  Can
>we do better?

Depends on what "better" means.  Obviously one could go way off
the deep end on features but not get it right (there are lots of
examples of that :-)

The biggest hurdle, I think, is conceptual: to make people believe
they can do this themselves and that it's not complicated.  Desktop
publishing was an easy leap for people because it was only a little
more complicated than a typewriter, which they understood.  By
contrast, 3D has never caught on because nobody thinks they can do
3D, and to a large extent, they're right.  Too damn complicated.

Video is right in between stovetop publishing and 3D: it's possible
to bring it to the masses, but it has to be dirt simple.  Avid
Cinema is close to being that.  I like their top-level approach, which
is four steps:  1) Storyboard, 2) Bring Video In, 3) Edit Movie,
4) Send Movie Out.  It doesn't get much simpler than that, unless
you get rid of Storyboard.  They have a fair number of editing
tools, a timeline, and other things that resemble video editors
more than they resemble stovetop publishing, and I'd be tempted
to simplify those even more.

What I think is needed is the "SimpleText" of video.  It doesn't need
to do much other than let you read in some movies, do some splicing
and editing, and write it back out.  Get rid of the dead time, the
places where you said stupid things right into the microphone, etc.,
and send it to grandma.

The hardest thing about editing video is finding the stuff you
want on a tape and getting rid of the stuff you don't want.  There's
no magic to that: it's just grunt work.  We might be able to do
some guesswork to find the transition points, but it would only
be guesswork.

The key will be to find a conceptual leap that thinks of video
differently than ever before.  If you've ever looked at the "Variations"
dialog in Photoshop; something like that.  Instead of giving you
a dialog box to adjust RGB (who knows if a picture needs "more red"
or "more magenta" by looking at it?) they show 10-12 variations
that would result from adding blue, magenta, etc, and you just click
on the one you like.  It then shifts your choice into the middle
and lets you keep clicking to improve the image by choosing the one
you like the best from the variations presented to you.  It's brilliant.

I think there's still some room for this king of conceptual leap
in video editing: something simple and elegant that makes finding
what you want easy.  The only thing that comes to mind right now
is a kind of "binary search".  You show two points in the video
and let them click somewhere between, as in "I think it's a little
after the birthday hat, but the backyard stuff was quite a bit
later":

   "birthday hat"                                          "back yard"

   |---------------X------------------------------------------------|


You click around where the X is and it narrows down the search again
and again, until you find the end point.  It could be quasi-animated
or aided by guesswork somehow.

Anyway, I'm digging in too deep, but I think there is room for
innovation and simplification.  I'm guessing, from talking with
Sina and Will Stein, that what you're after is the Democratization of
Video Editing, the simplest and most obvious tool yet for doing
basic editing.  It seems like it could be done.


Glenn

-------------------------------------------------------------------------
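
Reading that old email again, the "binary search" idea is easy enough to sketch in code. This is purely illustrative — nothing from iMovie, and the clip length, the target moment, and the simulated user are all invented:

    # A rough sketch of the interval-narrowing ("binary search") idea from the
    # email above. Hypothetical names and numbers; not from any real editor.
    import random

    def narrow_to_moment(clip_seconds, user_picks, tolerance=1.0):
        """Show the user a window of the clip, let them click roughly where the
        moment falls, and narrow the window around each click until it is
        smaller than `tolerance` seconds."""
        lo, hi = 0.0, float(clip_seconds)
        while hi - lo > tolerance:
            guess = user_picks(lo, hi)            # "a little after the birthday hat"
            span = (hi - lo) / 4.0                # keep a window around the click
            lo, hi = max(lo, guess - span), min(hi, guess + span)
        return (lo + hi) / 2.0

    # A simulated user who "knows" the moment is at 137 seconds but clicks imprecisely:
    rng = random.Random(1)
    def fuzzy_user(lo, hi):
        wobble = (hi - lo) * 0.1
        return min(hi, max(lo, 137.0 + rng.uniform(-wobble, wobble)))

    print(round(narrow_to_moment(600, fuzzy_user), 1))       # lands near 137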


Thursday, February 16, 2012

Dumber and Dumber

I read today about "Mountain Lion", Apple's continued dumbing down of MacOS X.  I sat up a little straighter in my chair.

Everywhere I look, things are getting dumber.  A "smart phone", maybe, but it's really just a small, dumb laptop with a phone in it.

Our country is getting dumber -- our blockbuster IPO's are fluffy stuff like "social networking", the Super Bowl and Desperate Housewives are the pinnacle of entertainment, while science is being slowly but systematically bred out of our youth.

Nobody can spell or write well any more.  Our goods are not well made, sound bites have replaced thoughtful discourse, people say they want to "Fix America" when they really just want cheaper gas and more crap to watch on TV.

So-called Mountain Lion got me thinking about the Renaissance today, and how we are spiraling backwards by almost any cultural or intellectual measure.


It has been almost 500 years since Leonardo da Vinci died, at the age of 60.  He contributed more to the world in one short lifetime than the rest of the world's population has managed in the subsequent several hundred years.


What the hell is going on, and why aren't we doing anything about it?

Saturday, February 04, 2012

Social Networking and the River of Crap

Learn what is floating downstream in your personal River of Crap.

All the social networks now have at their core what I call the River of Crap.  That is the central news feed that is pulled from your contact list and what they choose to post or "share".  It is effectively a newspaper, where the writers and editors are your friends/contacts.  It really can be a river of crap, but luckily you can filter it. Some of it is advertising, just like a newspaper, and some of it is truly interesting and relevant.

I've been building, using, and thinking about social networking for a long time. At its core, it's not what we all think it is.  It is not a way to "stay in touch", it is not a "friends list", it is not a place to "upload photos"....

Facebook and Twitter and Google Plus are a form of personal branding: they have become how we tell the world who we are, and we do it by showing others the things we like.  A few decades ago we did this with bumper stickers and T-shirts and baseball caps with logos on them.  Now we do it by posting on a social network instead.  When you share a picture of Obama with words superimposed on it, you are making a statement about yourself.  When you Like a post that criticizes the Susan G. Komen foundation for pulling funding from Planned Parenthood, you are essentially putting a bumper sticker on your virtual car.

Facebook and Twitter are also editorial services, where the editors are people you know.  The most common reason stated for not liking Facebook, or not wanting to participate, is the sense that you have to constantly read about "what other people had for breakfast."  While that's true sometimes, it misses the point.

I used to monitor Google News every day, and sometimes I still do check it.  But it is an automated filter, and it is not nearly as effective as my own personal Rivers of Crap.  If anything interesting or important is going on, I read about it first on a social network.  Don't you?

There are slight (but important!) differences between Facebook, Twitter, and Google Plus when it comes to the River of Crap, and how it gets filtered.  This may well be the sorting algorithm by which the winner is chosen, in the long run.

Facebook has some hidden algorithm that decides what you should see.  It's complicated, and doesn't feature all your friends equally.  Everybody notices this, and nobody really likes it, but you can't quite tell what it's not showing you, so nobody complains all that much about it.  It is a filter, but you can't control it.  The remaining filter controls are all basically "hide" in one form or another.  If you don't like a post, or a person's posts, you can hide it, or them, from your Feed.  When I talk to people I realize that most Facebook users don't really use this.  They just accept the River of Crap for what it is, and use Facebook more or less based on that.  That is probably why Facebook tries to automate the filter on your behalf, because few people take control and do it themselves.

Twitter doesn't have filtering, they have Search.  Your River of Crap is just there, scrolling by, and perhaps because of the real-time focus and 140-character limit, people post a lot more frequently to Twitter than other networks.  It's a faster flowing River of Crap!  But to filter and decide what you want to read, you end up using #hashtag searches to follow topics.  Less good as a newspaper metaphor, but better for research, because you can find information posted by people whom you are not following.

Google Plus introduced Circles as a new way to aggregate the people you're trying to follow or pay attention to.  It is complicated, both in terms of posting (do people really make the decision to post to circles other than Public?) and in trying to use it to filter what you read.  And it combines the two-way nature of Friends lists (facebook) with the one-way nature of Following (twitter) but in doing so, it confuses most of us.  Frankly, the whole thing doesn't work very well yet as a social network.  But it is a more powerful mechanism in the long run.  Like many powerful mechanisms, if nobody uses the power, and people just stare at the River of Crap and decide whether to participate or not, it will likely not win hearts and minds.

What's interesting to me is that the people who build and run these services don't seem to understand what they have built. They don't offer Newspaper-like filtering, or topic-based viewing, or any other way to control the River of Crap.  They still think they're building networks of Friends.
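
A "topic section" over the River of Crap is not hard to imagine, either. Here is a toy sketch of what I mean — invented posts, invented keywords, and nothing any of these networks actually offers:

    # A toy "newspaper section" pulled out of the river: keep only posts that
    # mention a topic you care about. Posts and keywords are invented.
    feed = [
        {"author": "Alice", "text": "Pancakes for breakfast again"},
        {"author": "Bob",   "text": "Komen reverses its Planned Parenthood decision"},
        {"author": "Carol", "text": "My FarmVille barn is finally finished"},
        {"author": "Dave",  "text": "SpaceX announces the next Falcon 9 launch date"},
    ]

    def section(posts, keywords):
        keywords = [k.lower() for k in keywords]
        return [p for p in posts if any(k in p["text"].lower() for k in keywords)]

    for post in section(feed, ["spacex", "komen", "falcon"]):
        print(post["author"] + ": " + post["text"])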

Sunday, January 29, 2012

The Jackling House

I wonder if the people who prevented Steve Jobs from demolishing the Jackling house for so many years feel bad?  From reading their page, it sure doesn't look like it.

I mean, it got demolished anyway, but Steve died and didn't get to build his dream house.

Nice going, guys.

Monday, November 14, 2011

The Microblogger's Dilemma

Will every site soon have a "status update" field? How can I possibly update all of them?!

This has passed trend and is headed straight toward pandemic. I suppose because it's so easy to implement, there is a proliferation of "what are you doing now?" kinds of status update opportunities. Twitter. Facebook. LinkedIn. Google Plus. And that's just the biggest, most popular ones, and doesn't consider geolocation updates, which is a whole nother set of sites (FourSquare, etc).

The dilemma is ... which one do you update?  If only one or two interesting things happen to you in a day, or week, what site do you update? I solved this for a while by tying my Twitter account to LinkedIn and Facebook, but there is an unspoken disapproval of this leveraging of posting. You can't be a true Facebook devotee if you only post to Facebook via Twitter, right?  And how could you possibly post both to Google Plus and facebook?!  That's heresy!

[An aside to you Facebook devotees ... yes, I know that they now like their name to be all lower case, but like Wall Street Journal editors, I refuse to follow all weird capitalization schemes, preferring to stick to my own journalistic standards].

If auto-reposting from Twitter to Facebook and LinkedIn is not cool, can I copy/paste the same thing to Facebook and Google Plus?  What if most of the people, like me, are members of all of them? Won't they see through my ruse, and discount my "interestingness" because I post the same thing to all of my microblogs?

If I post different things to each site, what does that mean?  Someone who is interested in what I have to say now has to look in 3 or 4 places?  What a waste of time for them, and for me.

And yet, if you join Facebook, or Twitter, or Google Plus, and neglect them, that's the worst of all, right?

The paradox I find greatest of all is the parallel proliferation of "get funding quick!" sites for angel and vc funding.  I have joined several recently, to look around, in pursuit of elusive angel funding. But if you run, say, angel.co, you don't want me to also be on growvc.com (never mind that the founders of one site may well be active on the other). I actually got an email from somebody at angel.co basically saying that I hadn't spent enough time on their site, or filled out enough data, or updated my status enough, and therefore I wasn't worthy of being recommended for investment. It should occur to them that the less time I have to update yet-another web site, the more likely it is that I'm doing actual work worthy of investment. Myopic.

I am posting this diatribe, er, open question, to (gasp!) blogger.com, where I maintain an old-fashioned blog from the middle of the last decade.  It is at least persistent through all these trends, and supports more than 140 characters.  I will, of course, post a link to this through bit.ly to all my microblogs -- and I will do little else. In order to feed the appetite of these microblogs, I must do that as rabidly as I once processed email (okay, okay, I still do rabidly process email).

I guess my point is ... are we being asked to declare our allegiances to particular vendors/technologies by where we choose to update our status?  What an odd result of a weird little microtrend.


Friday, November 11, 2011

Open Letter to Web Site Developers

Dear Web Site Developer (or misguided management):

Please don't do any of these things on your web sites:
  1. Ask me to enter my email address twice.
  2. Tell me what characters I can and can't use in my password.
  3. Time out my sessions for my own protection.
  4. Make me change my password every so often.
  5. Make every field in your form *required.
  6. Make it impossible for me to change my email address.
  7. Insist that I provide you with a security question and answer.
I know how to type my email address and I know more about how to create a secure password than you do, and I do not forget my passwords. You have meetings where you talk about "reducing friction" for people to join your sites. You create friction every time I log in, not just when I sign up.

If you are a bank, and your page times out after 5 minutes and I have to log in AGAIN, inside my highly secure physical location with no possible access to my computer by anyone but me ... are you protecting me, or irritating me?

Sincerely,
Glenn Reid

Tuesday, July 12, 2011

Expert Culture vs. Ease of Use

There is a phenomenon I call an "expert culture" where things that are hard to understand and use become popular precisely because they are hard to use. Once you figure out how to use something complicated, you become an "expert", and it feels good to be an expert. You help other people because it makes you feel smart, and then they learn, and then they are an expert too.

Conversely, products that are easy to use and have few unnecessary features are often dismissed as trivial or underpowered.

This is a fascinating and bizarre contrast, and it is very counter-intuitive. We are all led to believe that things that are Easy to Use get adopted, and complicated things are eschewed. There are many counterexamples to this, although Apple products are perhaps an existence proof that at least somebody buys Ease of Use.

This occurred to me as I was deleting some early posts on Google Plus that were open questions, trying to figure out how Google+ worked. Valid points, I felt, and reflective of a "newbie" experience on a new platform. I was deleting them because I felt foolish for having posted them, and I realized that Google Plus is an Expert culture, and facebook is "for the rest of us".

Circles alone, in Google+, are really complicated, even once you know how they work. Consider this graphic representation of the rules for who can see your post on Google+. If that's not an expert culture, I don't know what is. At least half the posts I have seen go by on Google+ are in fact about how to use Google+. That tells you something too.

[I started posting this on Blogger because it's essentially a blog post, and I may finish it there too. But I wanted to see if this medium could replace blogging completely. I don't think so, not quite yet. I don't have enough control, can't set a title, and I can't embed links and things like that. Maybe I'm just used to those things, and blogging shouldn't rely on them. We'll see.]

Sunday, July 03, 2011

JIT vs Buffering

I've been studying Just-In-Time techniques for business and manufacturing and process (or JIT, or the Toyota Production System). There are fascinating parallels between computer science, where JIT is also used to some extent, and manufacturing. I have some thoughts to add to this discussion, and no place to add them, so I'm blogging about it.

JIT has two things at its core: predictive forecasting, and signaling. Predictive forecasting is easy to understand: the more predictable your task is, the leaner your process can become, because you know what is going to happen. Signaling, or Kanban, is an abstraction for somebody raising their hand and shouting: "hey, we're out of rivets over here!" Both are necessary for JIT, and they are to some extent opposites of each other. If you know exactly what's going to happen, you should have no need for signaling.

The real world is, of course, somewhere in between, and the process you design is at the mercy of the business problem you're addressing. Toyota developed this process for manufacturing cars, where demand isn't completely predictable, but it has a little bit of built-in time padding -- a car does not need to be finished within 3 hours (approximate assembly time) of ordering to meet customer expectations.

Banking, on the other hand, is essentially real time. If a customer asks for their money, you have to give it to them. There is a complex JIT system behind banking that tries to predict this demand and provide the money "just in time" from thin reserves at branches and central banks, but a failure in predictive forecasting of JIT banking is considered bad: a "run on the bank."

I like looking at problems from both ends. If you consider real-time phenomena that don't quite work because of the inability to perfectly forecast demand, like banking, computer networking, restaurant management, and the flow of water and electricity, you see that buffering (inventory) is introduced to smooth out the flow and meet demand in real time.

In manufacturing and retail, where inventory and stock were presumed, the effort was the opposite: to remove the buffering and provide real-time supply: JIT manufacturing.

I believe that these are opposite sides of the same coin. An inventory buffer can reach zero, the effect being that a signal is thrown, and the process waits for inventory to be delivered. [It is worth noting that this is exactly how computer networking works, with signals and everything].

Here is my thesis, then:
All process has buffering (inventory) and variable demand. Optimization is the same in all scenarios: you are simply adjusting the size of the buffer.
This is commonplace in computer networking. You explicitly state the size of a buffer, and you can adjust it. 2k? 20k? 200k? Depends on what you're doing (chat vs downloading movies), and on, yes, how predictable demand is. Chat has low volume and a high degree of unpredictability; video download is completely predictable and high volume. The buffers are much larger for video download than for chat.
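
To make the thesis concrete, here is a toy simulation — all of the numbers are invented, and it is not tied to any real system — where the only knob is the size of the buffer, and a Kanban-style signal fires whenever the shelf runs dry:

    # One supplier delivering a steady trickle, one consumer with variable demand,
    # and a shelf (buffer) of a given capacity. All numbers are invented.
    import random

    def simulate(capacity, hours=10000, restock_per_hour=2, seed=42):
        rng = random.Random(seed)
        stock, stockouts, inventory_hours = 0, 0, 0
        for _ in range(hours):
            stock = min(capacity, stock + restock_per_hour)   # steady upstream supply
            demand = rng.randint(0, 4)                        # averages 2/hour, but varies
            if demand > stock:
                stockouts += 1                                # signal: "we're out of rivets!"
            stock -= min(demand, stock)
            inventory_hours += stock                          # stuff sitting on the shelf
        return stockouts, inventory_hours / hours

    for capacity in (2, 4, 8, 16):
        misses, avg_stock = simulate(capacity)
        print("capacity", capacity, "stockouts", misses, "avg shelf stock", round(avg_stock, 1))

A small buffer means lots of signaling and waiting; a big buffer means capital sitting on the shelf. The whole game is tuning the size.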

Let's consider another example: a team of roofers putting new shingles on a building. There are several intriguing layers of process and signaling and buffering in this seemingly straightforward task.

First, the size of the roof is known, so it would seem that the raw materials can be supplied in exact quantities. But the roofers always order a little more material -- nails, shingles, and tar paper -- than is strictly necessary. This is to allow for the unexpected (unpredictability in forecasting), including errors and damaged materials. The less waste the better, so the amount of extra material is adjusted for each job, trying to match the challenges of the roof geometry to the skills of the crew and the number of shingles per package. Can you do the job with nothing left over, and no trips to the hardware store?
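
The ordering math itself is trivial — the interesting part is choosing the cushion. A back-of-the-envelope sketch (the coverage and waste numbers are invented placeholders):

    # How many packs to order for a roof, with a cushion for the unexpected.
    # Coverage per pack and the waste fraction are invented placeholders.
    import math

    def packs_to_order(roof_sqft, sqft_per_pack=33.3, waste_fraction=0.10):
        needed = roof_sqft * (1 + waste_fraction)
        return math.ceil(needed / sqft_per_pack)

    packs = packs_to_order(2400)
    print(packs, "packs, about", round(packs * 33.3 - 2400), "sq ft of cushion")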

Next there is the transportation issue. Some roofing contractors lift all the shingles up onto the roof ahead of the job, spacing them approximately according to coverage. This is an extreme of buffering in advance. Other contractors have somebody yell for more shingles as needed (signaling) and have somebody run up the ladder with another pack of shingles. Which is better? It depends on what you're optimizing. Pre-placing the shingles suggests that the application of shingles is more skilled labor, and more expensive, and should never have to wait for the [cheaper] labor to carry the shingles to them. But if the nailer yells before he/she is actually out of shingles, then the more skilled resource is never idled, and the effect is the same. Or more correctly, the optimization moves upstream, to the difference between serializing the tasks (putting all shingles on the roof before you start) and the cost differential between renting a boom lift truck vs a crew of people to carry shingles up the ladder.

The reason I thought of roofing as an example is that it has buffering in unexpected places. How many shingles does a roofer take out of the pack before walking over to the area currently being roofed? Buffering. How many nails does the roofer take in hand at once? Buffering.

The size of a buffer is often artificially constrained to be the size of a roofer's hand, or the available shelving space in a warehouse. To the extent that the "natural" buffer size that would best optimize the process does not match up with the actual size of the buffer available, you get inefficiency.

This mismatch between physical buffer capacity (tank size, shelf size, memory chip size) and the optimal buffer size for a given process is, I think, a very profound issue at the heart of many, many serious problems.


Sunday, June 05, 2011

Why Podcasting Never Really Happened

Back in the day, I predicted that podcasting was not really "a thing" and would go nowhere. Nobody really agreed with me, but I believe that now it is safe to declare podcasting officially dead. (Blogging is almost dead, too, so this posting is perhaps paradoxical).

Here is why podcasting (as an authoring paradigm) never happened: audio is the same as video. Except not as good.

Video and audio are both time-based media. In fact, audio is simply a subset of video. You can see this on YouTube by finding a song you like, posted with lame photographs layered on top of it as the "video" part.

But it is actually much harder to edit audio than to edit video, because there is nothing to look at when you're editing (almost nothing -- you can get waveforms that help a little, but they're pretty hard to use effectively). Video has cues and transition points and also audio, so it's just easier to edit. Period.

So, audio is less good, less interesting, and harder to produce and edit than video, and it takes just as long to consume. Why would it ever become a popular consumer authoring medium? Exactly.

It's actually easier to understand the value of black-and-white TV after seeing color TV than it is to understand the value of audio-only on your computer, or iPod. Podcasting failed, and instead, iPods now all have tiny video screens!

Tuesday, April 12, 2011

The Truth about Multitasking

I've been reading recently about scientists studying multitasking and declaring that we're no good at it. I have something to say about that.

First, some background. Multitasking is a word borrowed from computer science and reapplied to life, which doesn't happen that often. In a computer's operating system, at the very heart of it, is a switching mechanism for doing a little bit of work on a lot of running programs, in a "round robin" approach, so that each of them gets a little time slice. No one program gets to run unbridled for any length of time. Processors are so blindingly fast that you can't tell the difference. A typical CPU today can execute many billions of instructions per second, so the fact that it is switching back and forth between, say, your browser and your email program and updating the display is done so many, many times per second that you couldn't possibly perceive it.

But yes, the computer is multitasking. And guess what? It doesn't come for free.

To switch back and forth between the many processes running on your computer, the operating system does what is called a context switch. This is important, so pay attention. A context switch isn't just jumping around between processes. The CPU needs to also store the context from each process, so it can come back to it later. The context is as little as possible--a bunch of memory locations, a few details of what the processor was doing, and things like that. It's not unlike the folders on your desk that contain the context of your human tasks -- the bills that you might be needing to pay, or the phone number and resume of the job candidate you're about to call.

There are decades of research on operating systems that try to trim down how much context you need to save when switching between processes. The less, the better, with a huge multiplier in efficiency for every little bit you don't have to store, because the CPU has to physically copy the information back and forth every time it does a context switch. So do you, by the way, when you multitask. Understanding this, and making your context storage small and efficient, helps a lot with multitasking efficiency. If you have to open up your Word document, or find the phone number, every time you get back around to a task, the startup cost is too great, and you spend all your time thrashing (a real computer science term, believe it or not) and not doing real work. It feels like real work, but if you're just opening and closing folders (contexts) you're not doing anything useful.
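
A toy calculation makes the multiplier obvious (the numbers are invented, not from any real CPU or study):

    # The more context you drag between tasks, the more of the day goes to
    # switching instead of working. Numbers are invented for illustration.
    def day_breakdown(switches, minutes_per_slice, context_items, minutes_per_item=0.1):
        work = switches * minutes_per_slice
        overhead = switches * context_items * minutes_per_item
        return work, overhead

    for context in (2, 20, 200):   # a sticky note vs. a folder vs. a messy desk
        work, overhead = day_breakdown(50, 5, context)
        print("context", context, ":", work, "min of work,", round(overhead), "min of switching")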

Now here's where it gets interesting.

Computers don't just switch evenly between all processes. They look around to see where time is best spent. For example, many processes are blocked on I/O (input/output). That means that they're waiting for a file to be opened from your disk, or something to come back from the "cloud" over the network, or any of various other wait states that are commonplace deep inside a computer. If a process is blocked, you don't give it any time. Simple as that. More CPU time for the other processes.
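
Here is the same idea in miniature — a round-robin loop that gives no time to blocked tasks. It is a made-up sketch, not how any real kernel is written:

    # A miniature round-robin scheduler that skips blocked tasks.
    # Entirely invented for illustration; not how a real kernel works.
    from collections import deque

    tasks = deque([
        {"name": "browser", "work_left": 3, "blocked_until": 0},
        {"name": "email",   "work_left": 2, "blocked_until": 6},   # waiting on the network
        {"name": "compile", "work_left": 5, "blocked_until": 0},
    ])

    tick = 0
    while any(t["work_left"] > 0 for t in tasks):
        task = tasks.popleft()
        tasks.append(task)                        # round robin: to the back of the line
        if task["work_left"] == 0 or tick < task["blocked_until"]:
            continue                              # finished or blocked on I/O: no CPU time
        task["work_left"] -= 1                    # one time slice of real work
        tick += 1
        print("slice", tick, ":", task["name"])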

See what I mean, about getting interesting?

In the real world, our I/O is blocked all the time, too. We're waiting for somebody to call us back. We're at a stop light. We're waiting for the DSL modem to reboot. We're standing in line at a store, or even worse, at the DMV.

Human beings are not stupid. We know, as a species, that we can use our blocked I/O time for something more valuable. It is why we text madly at stop lights, or talk on the phone at the DMV, or do things that seem anti-social to old people, but are good time management when done well.

When scientists study "multi-tasking" in the lab, they do it abstractly, with tasks that don't exist in the real world, and have very few wait states. So yes, it is less efficient to switch frequently between truly parallel tasks that have no wait states. You spend time context switching and lose efficiency. But that's not how the real world is, and that does not seem to be accounted for in these studies.

Consider the teenager listening to music, texting, talking on the phone, and doing homework. Crazy? Good multi-tasker? If you dissected this scenario carefully, you would see all kinds of wait states. Texting has big holes in the timeline, waiting for somebody to reply. Even talking on the phone is like that. You can tune out for a few seconds and the person you're talking to doesn't notice, or care. With some people, the talkers, you can help yourself to 30 seconds of not listening and they won't even notice. Listening to music can be a background task, requiring a different part of your brain than conscious thought. Gee, there actually is a fair amount of time left over for homework, as long as it's the kind of homework that doesn't require your frontal cortex (which is, alas, most of it): coloring in the graph for biology; searching for something on google; writing your name and the date at the top of each piece of paper.

In the study I linked at the beginning of this article, the "multi-tasking" was forced on people by interrupting them with annoying tasks, which did not allow them to make the context switch themselves, or save state. That's not a useful measure of multi-tasking, it's the reason we don't answer our phones and close the door at the office. Interruptions, I claim, are not a form of multi-tasking.

Anyway, I think it's time somebody did a study on human multi-tasking with the ideas of context switching and wait states deeply embedded in the study. I guarantee they would get very, very different results.

Friday, February 18, 2011

Cost cutting: rolls of toilet paper? Really?



It was bad enough when Cheerios boxes suddenly got 20% thinner than they used to be. Same price, less content. There was a rash of that when the economic downturn really hit. I saw that it was probably necessary, but found it discouraging.

Today, I discovered that the same brand of toilet paper that I've been buying forever (Scott) has made their rolls 3/8" narrower than they used to be. That may not seem like a problem, except that the toilet paper holder at work is one of these designs:


And the roll simply doesn't fit! It falls right out of the holder:


Here's the difference in the old and new sizes. It's quite significant. In the words of Wayne Campbell: "Shyah, as if we wouldn't notice!"


Luckily, I have a machine shop, and I'm not afraid to use it. And I have some inventory of copper tubing of just the right diameter:


Wednesday, June 16, 2010

Inventor Labs blog

We started a product design-oriented blog over at Inventor Labs. I will continue to blog here (sporadically, as always) but may put more technical or product-related posts over there.