I am of an age to have been reading science fiction stories in which fusion reactors provide electricity for the better part of 50 years now. For much of that time, I've also been reading non-specialist technical publications about progress toward that goal. One piece of black humor that has emerged over that period is the definition of a new basic constant in physics: the time remaining before commercial fusion power is ready is always 30 years. There is little reason to believe that the constant is changing.
ITER is a multi-country research effort broadly believed to be the project with the best chance of delivering commercial fusion [1]. At present, the schedule calls for first plasma in 2020 and first deuterium-tritium fusion in 2027. ITER itself won't be used to generate electricity. The project's schedule calls for DEMO, ITER's successor, to begin generating power in 2033. After an indeterminate period of DEMO operation, PROTO -- the prototype for a commercial reactor -- can be designed and built. Given all of that, 2043 seems like the nearest potential date -- still 30 years out. There are potential game-breakers in the project as well; for example, one of the purposes of ITER is to determine whether the structural materials actually hold up under the high neutron flux that will be produced.
The problem, in my mind, is that we (taking my usual parochial view of "we" to mean the US) need to take action sooner than that. I believe that the US, and the heavily-populated eastern portion of the country in particular, faces an electricity crisis within the next 30 years.
In 2011, states in the US Eastern Interconnect accounted for about 71% of all US generation. In addition, the Eastern Interconnect is much more dependent on nuclear and coal than the other parts of the country: about 70% of the Eastern Interconnect's generation comes from coal and nuclear, compared with about 45% for the Texas Interconnect and about 38% for the Western Interconnect [2]. That 70% in the Eastern Interconnect consists of 23% from aging nuclear reactors and 47% from coal. Over the next 30 years, the license extensions for the nuclear plants will expire. At least IMO, no one in their right mind is going to grant further extensions to those plants. Too many of them are aging badly, and spent fuel storage will become an increasingly difficult problem. Popular opinion seems to be trending towards forcing coal-fired plants to clean up their act -- not just the controversial CO2 requirements, but also the much less controversial emissions of sulfur and nitrogen oxides, heavy metals, and fine particulates. In many cases, that will translate into replacement systems, not upgrades of existing plants.
So the first problem is that if the US, and the Eastern Interconnect in particular, is to maintain its generating capacity, it appears that it must undertake a substantial capital program over the next 30 years, before fusion will be commercially ready. Having spent that money, companies are not going to undertake another spending program until they are ready to begin retiring those relatively new non-fusion plants. That introduces one set of delays. The second problem is that advanced technology can be very much a "use it or lose it" proposition. In 1969, the Saturn V rocket could lift 118,000 kg to low earth orbit. We used that lift capability to put men on the moon. Today, the largest rocket available anywhere can lift only a small fraction of that to LEO. New systems that would match the Saturn V are on the drawing boards, but no one expects them to be ready before 2030. Fusion could easily turn into the same sort of thing: the PROTO system gets built, but falls into disuse because of the capital timing issue, and we lose the hands-on knowledge needed for broad commercial deployment.
And that's why I'm not betting on fusion as a solution. Not because we won't eventually solve the technical problems, but because the timing and limits on available capital are going to be bad.
[1] I acknowledge that there are other programs pursuing other strategies. ITER is the program that almost everyone thinks of when they think "fusion research".
[2] Figures are for calendar year 2011, from the EIA by-state-by-source spreadsheet. Coal usage will be down somewhat across all three regions in 2012, due to a glut of cheap natural gas. Nuclear usage will be down in the Western Interconnect in 2012 due to the San Onofre reactors being offline for most of the year.
Thursday, February 28, 2013
Wednesday, February 20, 2013
Electric Transportation
It seems like every blog and/or online news source I read regularly has had something about the public pissing contest between The New York Times and Tesla Motors. The subject of the debate is a 500-mile test drive of the Tesla Model S done by John Broder, a Times writer. Mr. Broder gave the car and Tesla's rapid charging stations located along the route a bad review; Elon Musk, Tesla's founder, claimed that the writer was biased and had gone out of his way to make the car look bad. One interesting side note is that Mr. Musk's assertions are based on the very detailed data logging that the car performs. He notes that such logging became standard practice for cars provided to the media after the Top Gear television show gave the Tesla Roadster a pre-planned bad review.
To the extent that their budgets will allow, most Americans who purchase a car get one designed to deal with the most extreme 10% of their driving experience. Weight and drive-train designed to get to work on the six days per year that it has snowed. Cargo capacity to take the family on the annual camping trip. Range to drive 600 miles to Grandma's for the holidays. The Tesla Model S is an attempt to design a car that meets one set of extreme needs -- certainly not the most extreme set -- subject to the constraint of an all-electric traction system. On the one hand, Mr. Musk asserts the design is successful. On the other hand, Mr. Broder asserts that it is not. On the gripping hand, it's a moot point; most of us are going to live long enough to see the day when most Americans no longer buy vehicles based on their extreme needs, but rather their routine ones.
What I'm anticipating is that within our lifetimes we're much more likely to be covering most of that 500 miles in a pleasant electric train at 120 or 150 mph, with the driving on each end done in something like MIT's city car. At the home end of the trip, it gets you and your bag(s) to the station -- quite possibly a neighborhood light-rail station, with a change to the long-distance train elsewhere. At the destination end, you pick up a rental from the head of the line and pay by the mile and the day for the short distances you drive it. Or if Google and Tyler Cowen have their way, the self-piloting car drives you, which would be particularly helpful in an area you don't know. Yes, it's slightly less convenient, since your schedule is dictated to some degree by the train schedules. In exchange, covering the main part of the trip in three hours, with none of the hassles of driving on the New Jersey Turnpike, makes up for quite a lot. And it is very much worth pointing out that for the 90% or more of driving trips that are routine -- from home to work and back, or on a loop covering a batch of local errands -- the city car is a far better match to the job than the Model S.
Photo credits: Tesla Model S, Motor Trend. MIT city car, MIT.
Monday, February 18, 2013
Siting Solar Power Plants
Will Oremus at Slate makes fun of a Fox News subject-matter expert who thinks Germany gets much more sunshine than the US, and that that's why Germany's solar power installations have proceeded more quickly than America's. The position is absurd on its face; the Slate piece includes a nice graphic, courtesy of the US National Renewable Energy Laboratory (NREL), showing the solar resources available in Germany, Spain, and the US. Essentially everywhere in the US has more sunshine than Germany; only Alaska is comparably bad. You have to wonder where this expert lives. Denver is not the sunniest place in the country, but even so, conventional wisdom here is that "if the sun doesn't shine for three days in a row, the whole city is ready to commit suicide." In Germany... well, this page with weather statistics for Bremen, in northwestern Germany, indicates a median annual cloud cover of 85%.
Rather than piling on the Fox News expert, I want to use the map to set up some different questions. Consider just the contiguous states of the US. Draw a north-south line that passes through roughly the middle of Nebraska and the Dakotas. In round numbers, three-quarters of the US population lives east of that line. Almost half of the people who live east of that line live in the 15 states that have an Atlantic Ocean seaport (or are Vermont). And just under half of that group live in the heavily urbanized BosWash corridor. OTOH, all of the best solar resources in the US are west of that line, with the very best far to the west in Arizona, California, and Nevada.
When the time comes that the US East Coast (and BosWash in particular) decides that it needs solar power on a large scale [1], there are two options. They can do local installation, which has some drawbacks: it will take more panels to generate a given amount of electricity per year than it would in the Southwest [2]; there are significant periods when the sky stays overcast; much of the non-agricultural open space is wooded; and almost all the land is in private hands. The alternative is to build installations in the Southwest, along with high-voltage DC transmission lines to move the power east. In the Southwest, the sun shines more often and more brightly, many of the desirable areas don't grow trees, and there are enormous areas owned by the federal government. Any split that puts three-quarters of the US population on the side of "let's use a portion of those western public land holdings to generate power for us" gives them the votes in Congress to make it so.
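To put a rough number on that first drawback -- using ballpark capacity factors that are my own illustrative assumptions, not figures from the NREL map -- a fixed photovoltaic panel in the Northeast might average something like a 15% capacity factor, versus roughly 25% in the desert Southwest. Delivering the same annual energy locally then takes on the order of

\[ \frac{0.25}{0.15} \approx 1.7 \]

times as many panels, before the overcast periods, wooded land, and private ownership issues even enter the picture.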
Is it reasonable to think that the East will want access to western renewable resources? Absolutely. NREL's Renewable Electricity Futures Study is one of the largest and most detailed studies available of scenarios under which the US achieves high penetrations of renewable generation. The fundamental model used in that study is a linear optimization that minimizes total costs. In the unconstrained scenarios, the amount of additional transmission capacity increases exponentially with increasing penetration of renewable power (see Figure ES-8 in the Executive Summary). The study identifies the principal use of that increased capacity as providing eastern load centers with access to high-quality western renewable resources. That is, western renewables plus long transmission lines cost less than eastern renewables. People may not always go for the lowest-cost option, but that's the way to bet.
And finally, is it reasonable to think that developers will prefer to make use of public, rather than private or state, lands? Again, the answer is yes, if that's where the resources are. Wyoming is by far the largest coal producer among the 50 US states; over 80% of all coal mined in Wyoming is produced from federally-owned lands (and much of that Wyoming coal is burned in states as far away as Georgia). Advocates for developing the Green River oil shale deposits in Colorado, Wyoming and Utah — a terrible idea, by the way, but that's a topic for a different day — are constantly pushing to open federal land, even though large high-quality deposits on private holdings are readily available.
Will there be an eastern grab for western renewable resources? I say yes, although it probably won't happen for 25-30 years. Will the western states resent it? Also yes. "Interesting" things may happen.
[1] For various reasons, I believe that time will eventually come. Reasonable people can disagree.
[2] Here "panel" means a photovoltaic panel, which can operate under a wide range of sky conditions. The advantage of locations in the Southwest is much greater for concentrating solar power, which requires direct sunshine.
Wednesday, February 13, 2013
Feature Creep
One of Cain's Laws™ says "To the extent that the budget will allow, put the complicated parts in software rather than hardware" [1]. Recently I've been working on what is now the annual reprogramming of The World's Most Sophisticated Whole-House Fan Controller™. While taking care of that chore, I found myself thinking about the flip side of the Law, and one of the larger risks associated with systems implemented mostly in software: feature creep.
The fan controller has been a good platform for feature creep. There are no physical controls per se, only a 320-by-240 pixel graphic display with a simple resistive touch screen. Every single part of the actual user input controls is defined in the software running on the microcontroller. Over the two-year life of the controller, there have been at least five different styles of UI used: little dynamically labeled virtual buttons, big virtual buttons, a virtual slider, a virtual keypad (shown to the left), and a virtual five-way keypad [2].
The complexity of the UIs and of the software behind them has varied considerably. At the low end of the scale was the interface with three big buttons: off, low, and high. At the other end is what the code comments call the "target temperature" mode: the user specifies, on the ten-key keypad, a target temperature they'd like to reach, and the software decides what speed to run the fan and for how long [3]. Last summer the target temperature mode was the one used most frequently. At least I think it was -- I haven't taken the obvious step of having the controller count the number of times each interface is used to start the fan. My usability friends would be disappointed in me if they knew I wasn't measuring things.
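A minimal sketch of what a mode like that boils down to -- this is not the controller's actual firmware, and the hardware helpers (read_indoor_temp_f, fan_run, sleep_minutes) are stand-ins I've assumed for whatever the real board provides:

```c
/* Assumed hardware hooks -- placeholders, not the real controller's API. */
extern int  read_indoor_temp_f(void);     /* current indoor temperature, deg F */
extern void fan_run(int speed);           /* 0 = off, 1 = low, 2 = high */
extern void sleep_minutes(int minutes);

#define FAN_OFF  0
#define FAN_LOW  1
#define FAN_HIGH 2
#define FAILSAFE_MINUTES (12 * 60)        /* hidden timer: never run past 12 hours */

/* Pick a speed from the gap between the indoor reading and the target. */
static int pick_speed(int indoor_f, int target_f)
{
    int gap = indoor_f - target_f;
    if (gap <= 0) return FAN_OFF;         /* at or below target: stop */
    if (gap <= 4) return FAN_LOW;         /* small gap: the quiet speed is enough */
    return FAN_HIGH;                      /* big gap: move as much air as possible */
}

void target_temperature_mode(int target_f)
{
    for (int elapsed = 0; elapsed < FAILSAFE_MINUTES; elapsed++) {
        int speed = pick_speed(read_indoor_temp_f(), target_f);
        fan_run(speed);
        if (speed == FAN_OFF)
            return;                       /* target reached */
        sleep_minutes(1);
    }
    fan_run(FAN_OFF);                     /* fail-safe: the fan never runs forever */
}
```

The 4-degree threshold is arbitrary; the point is that the whole decision lives in a few lines of software, which is exactly what makes features so cheap to keep adding.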
If I ever redesign the controller guts, one of the things I would add is something that can make noise. The feedback of an audible click when a virtual button gets pressed would be helpful. As it turns out, it wouldn't really cost any more, or take up any more board space, to have a component that produces tones rather than just a click. Of course, that would open up a whole new range of questions. How many kinds of noise should the controller make? Should there be a menu of noises from which the user can choose? How loud should the noises be? How complicated should the noises be? The microcontroller doesn't have enough memory to play back sampled audio, but has plenty to store a pitch-plus-duration definition of several little tunes. Are there any conceivable circumstances under which having a fan controller render part of "Blue Suede Shoes" is useful? Should there be a part of the interface to allow the user to enter their own pitch-plus-duration tunes? Is there a market for selling (probably copyright violating) tune definitions to be keyed in?
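For scale, a pitch-plus-duration representation really is tiny. The sketch below is purely illustrative -- the note values and the tone_start/tone_stop/delay_ms driver calls are assumptions, not part of the controller:

```c
#include <stdint.h>

/* Assumed beeper driver -- placeholders for whatever the hardware provides. */
extern void tone_start(uint16_t freq_hz);
extern void tone_stop(void);
extern void delay_ms(uint16_t ms);

typedef struct {
    uint16_t freq_hz;      /* 0 means a rest (silence) */
    uint16_t duration_ms;
} note_t;

/* Four bytes per note, so even a long tune fits easily in a small flash. */
static const note_t demo_tune[] = {
    { 659, 150 }, { 659, 150 }, { 0, 100 }, { 659, 300 }, { 523, 150 }, { 659, 300 },
};

void play_tune(const note_t *tune, unsigned length)
{
    /* e.g. play_tune(demo_tune, sizeof demo_tune / sizeof demo_tune[0]); */
    for (unsigned i = 0; i < length; i++) {
        if (tune[i].freq_hz)
            tone_start(tune[i].freq_hz);
        delay_ms(tune[i].duration_ms);
        tone_stop();
    }
}
```

At four bytes a note, a few dozen tunes would cost only a couple of kilobytes of flash -- which is precisely why the feature is so tempting.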
One of the existing capabilities of the controller that I've never taken real advantage of is the radio receiver that I built into it. The idea was that the controller would be able to receive messages from a remote control so that it wasn't necessary to climb the stairs [4]. The man who originally installed our whole-house fan says that would be popular with some of his elderly clients. The only remote I've built is a little breadboard rig that sends simple messages. But with even a simple display, and a few more buttons... well, the wireless protocol I'm using allows up to 27 bytes per message, which is enough to send complicated instructions. Like, say, "Play 'Blue Suede Shoes'". But that's a project for next year's reprogramming exercise.
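Twenty-seven bytes is more than enough structure for a remote. As an illustration only -- the 27-byte limit is the one fact taken from above; the field names and opcodes are invented -- a message might be laid out like this:

```c
#include <stdint.h>

#define MAX_MSG_BYTES 27   /* payload limit of the wireless protocol in use */

enum remote_opcode {
    OP_FAN_OFF   = 0,
    OP_FAN_SPEED = 1,      /* arg = speed level */
    OP_TARGET_F  = 2,      /* arg = target temperature, deg F */
    OP_PLAY_TUNE = 3,      /* data = a short pitch-plus-duration tune */
};

typedef struct {
    uint8_t opcode;                     /* which command this is */
    uint8_t arg;                        /* small numeric argument */
    uint8_t data[MAX_MSG_BYTES - 3];    /* optional payload, e.g. tune notes */
    uint8_t checksum;                   /* simple integrity check */
} remote_msg_t;

_Static_assert(sizeof(remote_msg_t) <= MAX_MSG_BYTES,
               "message must fit within the radio's 27-byte payload");
```

A message that small still has room to carry "play tune N" or even the tune itself, which is how feature creep hops from the controller to the remote.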
[1] This Law grew out of a 1980s Bellcore project where I was building the physical layer for the world's first ISDN test set. The applied research people had been working on a three-board all-hardware implementation of the physical layer for two years. I was not popular with them when I convinced my management to let me build a software-based version using one of the new (and expensive) single-chip signal processors from TI instead. The true value of the Law became obvious when we prepared to go into the field to test three switch manufacturers' products. Each of the three implemented a different incorrect version of the physical layer link activation protocol. With relatively modest changes in the software that ran on my board, the test set could interface with all three of those incorrect implementations. The applied research boards wouldn't bring up the link for any of the three.
[2] One of those widgets with up, down, left, and right arrow keys, plus a button in the middle. I've always thought there ought to be a spiffy name for such an input widget, but "five-way keypad" is all I've ever seen.
[3] In both of these cases there was/is a fail-safe in the form of a hidden timer that will shut the fan off after twelve hours no matter what the temperature is. One of the design principles has been that the controller should never run the fan forever.
[4] Assuming that the windows upstairs are already open, or that having the fan draw strictly from downstairs windows is desired. Running a fan with a three-quarter horsepower motor in a house with no open windows can lead to some... interesting results as air is drawn through the remaining available paths. One of the better stories I have heard is about pulling large volumes of air down the chimney, and what the soot and ashes did to the light-colored carpet.
Monday, February 11, 2013
Saying Nasty Things About Spreadsheets
James Kwak at The Baseline Scenario and Lisa Pollack at FTAlphaville both point to the task force report (PDF) on how JPMorgan's Chief Investment Office lost billions of dollars. Among the factors identified in the report are a number of impressive spreadsheet errors. The spreadsheet program, in this case, was Excel. Excel and its spreadsheet brethren are very likely the most commonly used business software in the world. And not just in the business world; in some academic fields, Excel is the platform of choice for numerical work simply because it is almost certainly safe to assume that the people with whom you want to share particular calculations will have Excel available [1].
Spreadsheet errors of disastrous proportions are relatively common. So much so that there is a European Spreadsheet Risks Interest Group (EuSpRIG) that holds an annual conference, tries to identify best spreadsheet practices, and collects horror stories from around the world. Researchers at the University of Hawaii have been studying the issue of spreadsheet errors for many years.
Contemporary spreadsheets even "help" you introduce errors. For example, when you insert a new row or column within a range used in a formula, the software will typically "correct" the formula to accommodate the change. All of the spreadsheet programs that I have worked with will, particularly for boundary cases, sometimes make the wrong change. When I worked for the Colorado General Assembly's Joint Budget Committee, all of the analysts feared inserting new rows in a certain spreadsheet because we knew that the software (not Excel in this case) was going to break an unknown number of formulas in seemingly random places when we did the insertion.
Many years ago I had the unfortunate experience of being the representative from the programming side of a project sent to the project leader's staff meetings. It was a moderately large project that was going to cost a hundred million dollars or so to deploy. The decision to go forward was being made on the basis of a very large spreadsheet that incorporated market penetration data, financial calculations, detailed operational cost estimates, and so forth. At some point I asked to see the test cases for the spreadsheet that verified the accuracy of the calculations. When I was told that there was no formal testing, but the people who had built the spreadsheet had been "very careful," my mouth got away from me. "If we did the real-time software that way," I blurted, "you'd fire the whole lot of us." That was the episode that really hammered home for me that building a spreadsheet is a programming exercise, too often done by people who have never been trained in programming [2].
Certainly my own personal coding fails to follow best practices, leading to occasional mistakes I would prefer not to have made. There's really no excuse for some of the bad practices; I plead personal laziness and ancient habits. I do a much better job when someone else is paying for my services. Still, I have long believed that spreadsheet software in general makes it difficult to follow good practices. Test cases for code are hard to do because of the embedded data. Flow control and the order of calculations can be difficult to determine, even with visual displays that show cell dependencies in different ways. Version control can be difficult. It's too easy for developers to "peek" at data in other parts of the sheet that they're not supposed to be using [3]. At least I know what the risks for my sloppy practices are, which is more than can be said for most spreadsheet users.
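To make the "spreadsheets are programming" point concrete: once a formula is pulled out of a cell and into a function, test cases become trivial to write and to keep. This is a hypothetical toy, not anything from the project described above:

```c
#include <assert.h>
#include <stdio.h>

/* The kind of formula that usually lives in a cell as =(cost-salvage)/life. */
static double straight_line_depreciation(double cost, double salvage, double life_years)
{
    return (cost - salvage) / life_years;
}

int main(void)
{
    /* Test cases are just more code: easy to rerun, review, and version. */
    assert(straight_line_depreciation(1000.0, 100.0, 9.0) == 100.0);
    assert(straight_line_depreciation(500.0, 500.0, 5.0) == 0.0);
    puts("formula tests pass");
    return 0;
}
```

Equivalent checks are possible inside a spreadsheet, but they have to live in yet more cells, tangled up with the data they are supposed to verify.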
Still, it's kind of surprising that a firm the size of JPMorgan, making decisions involving billions of dollars of the firm's own money, would allow such a bad set of spreadsheet mistakes to be made.
[1] It's also common to assume that they will have the Microsoft Windows version of Excel, which implies more than simple spreadsheet calculations. That version of Excel includes, for example, the nonlinear optimization Solver and VBA (Visual Basic for Applications).
[2] Yes, there are good self-taught programmers. But that's not the way to bet.
[3] Object-oriented programming languages, which enforce concealing the data structures and algorithms inside a class, have been widely adopted for a reason. I'm old enough to remember that good programmers, in the days before OOP languages were common, very often applied OOP philosophy anyway.
Friday, February 8, 2013
Coal Consumption Trends
Last week, a number of blogs and other news sites commented on an EIA graphic showing that China's coal consumption in 2011 was approaching the total consumption of the rest of the world. I downloaded the data from the EIA and put together my own slightly different graphic, shown below. In the spirit of Jeffrey Brown's work on net oil exports, I lumped China and India together. Jeffrey puts them together because both have large populations and rapidly developing economies, and both have outgrown -- at least for the time being -- their domestic petroleum supplies. The same arguments seem to me to apply to coal; while both have large coal reserves, and their use of coal is growing rapidly, they appear to have outgrown their ability to mine and move domestic coal to the places where they can burn it usefully.
The red line traces the combined coal consumption of China plus India; the green line shows consumption for the United States; the blue line shows consumption for the rest of the world. China and India have moved from a bit under 2.0 billion short tons per year in 2000 to a bit over 4.5 billion short tons per year in 2011, an annual growth rate over 7.5%. The rest of the world has increased its coal consumption slightly, and the US has decreased its consumption by a little. When 2012 numbers are available, the level of US consumption will likely be noticeably lower: due to a glut of low-cost natural gas, US electricity generators have cut back on their use of coal. The rest of the world's consumption may increase; for example, Germany's decision to shut down the nuclear fleet has resulted in an increase in the use of other sources, including coal.
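As a quick check on that growth-rate figure: going from roughly 2.0 to roughly 4.5 billion short tons over the eleven years from 2000 to 2011 implies a compound annual rate of

\[ \left(\frac{4.5}{2.0}\right)^{1/11} - 1 \approx 0.077, \]

or about 7.7% per year.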
The bottom line here seems clear -- "fixing" the coal portion of global carbon dioxide releases from fossil fuels is largely out of the developed countries' hands. China and India are growing their economies furiously, trying to lift billions of people out of poverty. For the last sixty or more years, no one has known how to grow an economy without growing electricity consumption, and China and India are not exceptions. For base load generation in a relatively poor country, coal has no rivals today in terms of availability, transportability, and relatively inexpensive investments in relatively simple technology. Absent an alternative, China and India will no doubt continue to grow their coal use as fast as possible rather than face the domestic consequences of reduced electricity supplies.
Certainly the developed countries have nothing that they can offer to China and India as an alternative. No clean-coal systems that could capture and sequester the emissions from burning coal. No proven modular nuclear systems that might be suitable for use where the grid is inadequate for moving power over long distances. No viable storage system that could deal with the intermittency (on the various time scales) that limit renewables. And no form of high-growth economy that doesn't require large energy resources.
Monday, February 4, 2013
Government IT Worker Shortage
A couple of weeks back, Government Technology put up a piece titled "6 Ways to Address the IT Labor Shortage". Some state and local governments are facing shortages of IT workers, particularly in system administration. The problem is going to get worse, as states are looking at as much as 40% of their workforce retiring over the next decade. Further, according to the article, the federal Bureau of Labor Statistics is forecasting that job openings for IT workers will grow faster than the number of candidates graduating with appropriate bachelor's degrees. The four-year degree distinction is significant; many state civil service systems require a four-year degree as an entry hurdle for "professional" positions.
The six ways pretty much boil down to the same thing: push more kids into STEM (science, technology, engineering, mathematics) fields of study, and tailor those studies more to real-world demands. Since system administration seems to be the worst problem, the goal would presumably be to have new grads emerge with both theoretical knowledge and hands-on experience. I understand that position, but I do point out that the professors who teach at schools granting four-year CS or software engineering degrees are probably going to push back against teaching, say, Microsoft Windows certification classes. That's not why they went into academia.
Changing gears for a moment, readers at Slashdot (News for Nerds) are regularly pointed at articles like the one titled "Programmers: Before you turn 40, get a plan B". The articles -- not all written by old geeks like me -- assert that age discrimination is alive and well in the IT world, and oldsters are pushed out. Depending on any particular author's views, it may be described as being "pushed out", or it may be described as "left voluntarily" because they are no longer willing to put up with insane hours and management that is frequently clueless about IT. The articles generally point out that the drop-off with age in employment within IT is much more pronounced than the drop-offs that occur in other engineering fields. Suffice it to say that there are a substantial number of Baby Boomers (persons aged roughly 48-66 as I write this) who were IT folk but are now doing something else for a living, voluntarily or otherwise.
Since I'm a Boomer myself, I feel entitled to say unpleasant things about us. As a group, the Boomers are ill-prepared for retirement. Some of it is because, as a group, we didn't save enough -- many of us, for example, because we thought owning a house whose price soared during the bubble meant we didn't need to save otherwise. Some of it because we have been through two serious stock market declines whacking our savings over the course of a critical decade. Critical in terms of where it fell relative to retirement age -- a big market decline in the decade before you plan to retire means there's not enough money [1]. When I was taking public policy classes at the University of Denver as a 50-something, I often told my 20- and 30-something classmates that as part of the policy-making cadre, the first crisis they would have to deal with regarding the Boomers wasn't the bankrupting of Medicare or Social Security, but would be that the Boomers as a group couldn't afford to retire and the US private sector wouldn't be prepared to employ us.
While there are a lot of details that would have to be worked out, there certainly appears to be an opportunity for a win-win situation here. The governments appear to need people now, not just in several years after a STEM education program ramps up -- assuming a push for STEM education actually produces the needed bodies. Given the number of articles like the one referenced above, I'm not sure that it will be easy to push young people into what appears to be a dead-end career, particularly when doing so requires years of difficult study. I understand some of the obvious arguments against this type of fix: that the Boomers lack exactly the right skill set, that they won't be long-term career employees, etc. At the same time, the Boomers are available now, and one of the purposes of requiring a four-year degree is to hire people who have "learned to learn" and can acquire specific new skills relatively quickly. We ought to be able to work something out.
[1] It may be significant to note that Boomers who have had a long-term career in state or local government positions -- the 40% of the workforce that the article fears will retire over the next decade -- are probably in better shape in theory than many. Governments are pretty much the last employers still offering defined-benefit pension plans with cost-of-living increases. In practice, there is some question as to whether state and local governments will actually be able to deliver on those pension promises over the long haul.
Sunday, February 3, 2013
Grid Vulnerability
Over the last year, there have been a huge number of articles, postings, and everything else about the wide range of vulnerabilities of the US power grid to terrorist cyber attacks. If I were running al-Qaeda, by tomorrow morning (i.e., early February 4) I'd have my press release ready to take credit for the power outage at the Super Bowl, saying "Yes, your power grid is just that vulnerable. And we can do it from anywhere in the world, any time we want."