I've been studying Just-In-Time techniques for business, manufacturing, and process design (JIT, also known as the Toyota Production System). There are fascinating parallels between computer science, where JIT is also used to some extent, and manufacturing. I have some thoughts to add to this discussion, and no place to add them, so I'm blogging about it.
JIT has two things at its core: predictive forecasting and signaling. Predictive forecasting is easy to understand: the more predictable your task is, the leaner your process can become, because you know what is going to happen. Signaling, or Kanban, is an abstraction for somebody raising their hand and shouting: "hey, we're out of rivets over here!" Both are necessary for JIT, and they are to some extent opposites of each other. If you know exactly what's going to happen, you should have no need for signaling.
The real world is, of course, somewhere in between, and the process you design is at the mercy of the business problem you're addressing. Toyota developed this process for manufacturing cars, where demand isn't completely predictable but there is a little built-in time padding -- a car does not need to be finished within three hours (the approximate assembly time) of ordering to meet customer expectations.
Banking, on the other hand, is essentially real time. If a customer asks for their money, you have to give it to them. There is a complex JIT system behind banking that tries to predict this demand and provide the money "just in time" from thin reserves at branches and central banks, but a failure in predictive forecasting of JIT banking is considered bad: a "run on the bank."
I like looking at problems from both ends. If you consider real-time phenomena that don't quite work because of the inability to perfectly forecast demand, like banking, computer networking, restaurant management, and the flow of water and electricity, you see that buffering (inventory) is introduced to smooth out the flow and meet demand in real time.
In manufacturing and retail, where inventory and stock were presumed, the effort went the other way: remove the buffering and provide real-time supply -- JIT manufacturing.
I believe that these are opposite sides of the same coin. An inventory buffer can reach zero, the effect being that a signal is thrown, and the process waits for inventory to be delivered. [It is worth noting that this is exactly how computer networking works, with signals and everything].
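To make that concrete, here is a minimal sketch in Python (the names and quantities are mine, purely illustrative): a bounded queue is the inventory buffer, and a blocking get() is the "hey, we're out of rivets" signal that stalls the process until a delivery arrives.

```python
import queue
import threading
import time

parts = queue.Queue(maxsize=5)      # the inventory buffer, capped at 5 items

def supplier():
    """Delivers parts at its own (imperfectly predictable) pace."""
    for i in range(20):
        parts.put(f"rivet-{i}")     # blocks if the buffer is already full
        time.sleep(0.01)            # simulated, variable delivery time

def station():
    """Consumes parts; waits whenever the buffer runs dry."""
    for _ in range(20):
        part = parts.get()          # buffer at zero: this is the signal and the wait
        # ... install the part ...
        parts.task_done()

threading.Thread(target=supplier).start()
station()
```

Shrink maxsize and the station waits more often; grow it and you carry more inventory. That one knob is the whole story.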
Here is my thesis, then:
Every process has buffering (inventory) and variable demand. Optimization is the same in every scenario: you are simply adjusting the size of the buffer.
This is commonplace in computer networking. You explicitly state the size of a buffer, and you can adjust it. 2k? 20k? 200k? Depends on what you're doing (chat vs downloading movies), and on, yes, how predictable demand is. Chat has low volume and a high degree of unpredictability; video download is completely predictable and high volume. The buffers are much larger for video download than for chat.
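This is literal, not a metaphor: on an ordinary TCP socket you can ask for the receive buffer size you want. A small sketch (the 200k figure is just an example, and the OS may round or cap whatever you request):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Request a ~200 KB receive buffer (value is in bytes).
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 200 * 1024)

# The kernel may adjust the value, so read back what we actually got.
actual = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"requested ~200 KB, got {actual} bytes")
```

A chat client would live happily with a couple of kilobytes; for bulk video transfer, a much larger buffer keeps the predictable, high-volume stream flowing without stalls.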
Let's consider another example: a team of roofers putting new shingles on a building. There are several intriguing layers of process and signaling and buffering in this seemingly straightforward task.
First, the size of the roof is known, so it would seem that the raw materials can be supplied in exact quantities. But roofers always order a little more material -- nails, shingles, and tar paper -- than is strictly necessary. This allows for the unexpected (unpredictability in forecasting), including errors and damaged materials. The less waste the better, so the amount of extra material is adjusted for each job, trying to match the challenges of the roof geometry to the skills of the crew and the number of shingles per package. Can you do the job with nothing left over, and no trips to the hardware store?
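That ordering decision is just buffer sizing with a formula attached. A back-of-the-envelope sketch (the coverage and waste numbers are placeholders I've chosen, not roofing advice):

```python
import math

def bundles_to_order(roof_area_sqft, coverage_per_bundle_sqft=33.3, waste_factor=0.10):
    """Cover the roof plus a padding buffer for cuts, damage, and mistakes."""
    needed = roof_area_sqft * (1 + waste_factor) / coverage_per_bundle_sqft
    return math.ceil(needed)        # you can only buy whole bundles

print(bundles_to_order(2400))       # 80 bundles for a 2,400 sq ft roof at 10% waste
```

Shrink the waste factor toward zero and you risk a trip to the hardware store; pad it and you cart leftover bundles back down the ladder.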
Next there is the transportation issue. Some roofing contractors lift all the shingles up onto the roof ahead of the job, spacing them approximately according to coverage. This is an extreme of buffering in advance. Other contractors have somebody yell for more shingles as needed (signaling) and have somebody run up the ladder with another pack. Which is better? It depends on what you're optimizing. Pre-placing the shingles assumes that applying shingles is the more skilled, more expensive labor, and that it should never have to wait for the [cheaper] labor carrying shingles up to it. But if the nailer yells before he or she is actually out of shingles, the more skilled resource is never idled, and the effect is the same. Or, more correctly, the optimization moves upstream, to the tradeoff between serializing the tasks (putting all the shingles on the roof before you start) and the cost differential between renting a boom lift truck and paying a crew of people to carry shingles up the ladder.
The reason I thought of roofing as an example is that it has buffering in unexpected places. How many shingles does a roofer take out of the pack before walking over to the area currently being roofed? Buffering. How many nails does the roofer take in hand at once? Buffering.
The size of a buffer is often artificially constrained to be the size of a roofer's hand, or the available shelving space in a warehouse. To the extent that the "natural" buffer size that would best optimize the process does not match up with the actual size of the buffer available, you get inefficiency.
This mismatch between physical buffer capacity (tank size, shelf size, memory chip size) and the optimal buffer size for a given process is, I think, a very profound issue at the heart of many, many serious problems.
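Here is a toy way to see the cost of that mismatch, with made-up numbers: when the hand (or shelf, or tank) holds fewer items than the buffer size the process would really like, you pay in extra refill trips.

```python
def refills_needed(total_items, optimal_batch, physical_capacity):
    """How many times the local buffer must be topped up."""
    batch = min(optimal_batch, physical_capacity)   # capacity clamps the buffer
    return -(-total_items // batch)                 # ceiling division

# 600 nails to drive; the process would like handfuls of 12,
# but suppose a hand only holds 7 at a time.
print(refills_needed(600, optimal_batch=12, physical_capacity=7))    # 86 refills
print(refills_needed(600, optimal_batch=12, physical_capacity=12))   # 50 refills
```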