Saturday, January 28, 2012

Sustainability v. Efficiency

In 1977, before I went up to Cambridge to study mathematics, I worked - "interned" as one would say nowadays - for six months at IBM's then UK headquarters at North Harbour (just north of Portsmouth). My team's job was mathematical modeling of IBM's internal information-processing systems, which operated on 370-series mainframes. It was an exciting time for me as I discovered for the first time that I had knowledge and skills that people would actually pay for.  Not that these were particularly sophisticated... I remember having to give a painstaking explanation of why the geometric-series formula 1/(1-x)=1+x+x^2+... can be applied when x is a square matrix.
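That matrix fact is easy to check numerically. Here is a minimal sketch (mine, not from the original post) in plain Python for a 2x2 case: the partial sums of I + X + X^2 + ... converge to (I - X)^-1 provided every eigenvalue of X lies inside the unit circle.

```python
def mat_mul(A, B):
    """Multiply two 2x2 matrices stored as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    """Add two 2x2 matrices."""
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

I = [[1.0, 0.0], [0.0, 1.0]]
# Example matrix: its eigenvalues (0.5 and 0.1) lie inside the unit
# circle, so the geometric series below converges.
X = [[0.2, 0.1], [0.3, 0.4]]

# Partial sums of the series I + X + X^2 + ...
S = [[0.0, 0.0], [0.0, 0.0]]
term = I
for _ in range(200):
    S = mat_add(S, term)
    term = mat_mul(term, X)

# Direct inverse of (I - X), using the 2x2 cofactor formula.
M = [[I[i][j] - X[i][j] for j in range(2)] for i in range(2)]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
inv = [[ M[1][1] / det, -M[0][1] / det],
       [-M[1][0] / det,  M[0][0] / det]]
```

After 200 terms the residual is on the order of 0.5^200, so the two results agree to machine precision.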

Towards the end of my internship, my manager asked me to work out a modeling assignment for a different group, who were concerned about the rapid growth of demand for the service they ran.  How long, their manager (a rather senior figure) wanted to know, could they continue operating effectively with their present hardware?  My colleagues watched with some amazement as their brash seventeen-year-old wunderkind presented his conclusions to the pinstriped executive.  "Your system will freeze solid in six months", I said.

Then I left. I heard the sequel later.  Right on cue, the system had run out of capacity to handle the demands made on it!  To be honest, I think this reflects luck as much as it does any clever modeling on my part.   But the modeling I had done was based on the simple idea that there is a tradeoff between the efficiency and the sustainability (or perhaps we should say resilience) of any system that has to cope with random events.

The classic mathematical technique that exemplifies this is called queuing theory.  Consider the example of a bank teller who takes one minute on average to serve each customer.  Customers arrive at random and wait in a line (in the order of arrival) until the teller is free.  It might seem natural to believe that if customers also arrive at one-minute intervals (on average) the teller will be working with maximum efficiency.  However, queuing theory predicts that in that circumstance the average customer will have to wait infinitely long to get served! (In the simplest case, the expected number of people in line is proportional to 1/(m-1), where m is the average interval between customers in minutes and m > 1.)   It was this model that I had implemented to come up with my six-month prediction.
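The 1/(m-1) behavior is easy to check by simulation. The sketch below (my own illustration, not from the original post; the function name and parameters are my choices) simulates the simplest such model, an M/M/1 queue with mean service time of one minute, using the Lindley recursion: each customer's wait is the previous customer's wait, plus that customer's service time, minus the gap before the next arrival, floored at zero.

```python
import random

def average_wait(mean_interarrival, n_customers=200_000, seed=1):
    """Simulate an M/M/1 queue (random arrivals, exponentially
    distributed service times with mean 1 minute) and return the
    average time a customer spends waiting in line."""
    rng = random.Random(seed)
    wait = 0.0    # waiting time of the current customer
    total = 0.0
    for _ in range(n_customers):
        total += wait
        service = rng.expovariate(1.0)                   # mean 1 minute
        gap = rng.expovariate(1.0 / mean_interarrival)   # mean m minutes
        # Lindley recursion: next customer's wait.
        wait = max(0.0, wait + service - gap)
    return total / n_customers

# Theory for this model: average wait = 1/(m-1) minutes when m > 1.
# As m shrinks toward 1 minute, the wait blows up.
for m in (2.0, 1.5, 1.25, 1.1):
    print(f"m = {m}: simulated {average_wait(m):.2f}, theory {1/(m-1):.2f}")
```

With m = 2 (the teller idle half the time) the average wait is about one minute; at m = 1.1 it is already around ten minutes, and at m = 1 it grows without bound.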

It is not hard to intuit why the model behaves in this way.  With a one-minute interval between customers, the teller is operating at full capacity.  Any slight random perturbation - two customers arriving at the same time, or one customer who takes much longer than usual to process - builds up a backlog, and there is no "slack" in the system to let it clear.  If a system is to have some resilience to random effects, it must have some "slack" in its normal operation.  In other words, it must be somewhat inefficient.

This has implications for sustainability issues like energy supply.  Undoubtedly, the greenest and most inexpensive energy source out there is "negawatts" - energy that we refrain from consuming, in other words, ways to make our energy use more efficient.  Homes and household appliances, for instance, have become fantastically more energy-efficient over recent decades.  Using less energy has to be at the center of a push for sustainability.  But a completely efficient energy economy would have no slack for random events - a transmission-line failure, a hard frost, a national emergency.  Localization of the economy, however desirable it might be from a sustainability standpoint, would also push against global measures of efficiency.  I haven't seen this question addressed in back-of-envelope calculations (valuable as these are) about what a green energy economy might look like.

1 comment:

Angela Jones said...

John, thanks for explaining mathspeak in plain English in a way anyone can relate to.