Thursday, November 29, 2012

East Coast Energy Risks

CNN recently ran a print piece titled "Data shows East Coast gas shortages were inevitable".  The article points out that, given the situation before Hurricane Sandy, any sizable disruption was going to create a gasoline shortage.  A number of reasons for the short-term problem were cited: low regional reserves, recent refinery closures, heavy dependence on long pipelines, and local retailers' dependence on grid electricity to operate their pumps.  The author concludes that it will not be easy for the New York metropolitan area to avoid the same risk in future storms.

Nor was the situation confined to the New York region.  The area around Washington, DC also experienced widespread power outages, as shown in the chart to the left (Credit: Washington Post), although it recovered much more quickly.  The risks are widespread throughout the BosWash urban corridor.  These are short-term risks: high winds knock out power lines, storm surge floods other sorts of infrastructure, and so on.  The more important long-term point alluded to in the article is that the region sits at the end of long pipelines, long power lines, long rail lines, and long shipping routes over which it receives the large majority of its energy inputs.

The situation is likely to get worse over the next couple of decades.  The area is home to a number of aging nuclear reactors.  Yesterday, the New York Public Service Commission ordered Con Edison, the principal power provider for New York City, to develop plans to keep the power on in the event the Indian Point nuclear complex is shut down (Indian Point provides about 25% of the city's electricity).  The Indian Point operating licenses expire in 2013 and 2015, and renewals are likely to be held up both by procedures at the Nuclear Regulatory Commission and by the political opposition of the governor.  Nor are the Indian Point reactors the only ones aging badly (see Oyster Creek's tritium leaks in New Jersey for another example).  Proposed alternatives -- new gas-fired generation, increased imports of hydro power from Quebec -- generally increase the region's dependence on distant energy supplies.

From time to time I get into arguments with people about the future of the US East Coast cities in an energy-constrained future.  The people I argue with assert that those cities are in the best position, because they use so much less energy per capita than, say, Mississippi or South Dakota.  My side of the argument is that those cities are very risky places to be, because while they may use less energy, they are dependent on a very large long-distance network of transport systems to get the energy they do use.  A modern city without electricity isn't a city any more; in fact, it quickly becomes uninhabitable as the elevators, refrigeration, water, sewage treatment and so forth quit working.

I'm not as pessimistic as John Michael Greer, but I do anticipate a slow, steady change in what America and the world look like as energy constraints begin to pinch.  In the long run, I expect (although I don't suppose I'll live long enough to see it) the US to separate into multiple independent parts.  One of the interesting aspects of that separation will be how BosWash behaves.  The region is wealthy and has tremendous political power within the current structure, but it is heavily dependent on a far-flung network to deliver the energy it needs.  Whether it can keep the energy flowing over that network will be an interesting question.

Monday, November 26, 2012

Easily-Defeated Article Limits

It has become increasingly common for newspapers to limit free access to their content.  Both the New York Times and the LA Times restrict the number of articles that you can read each month unless you're a paying subscriber.  This being the Internet, people immediately began looking for ways to defeat the limit.  There are a couple of different ways to do it (they're widely known, so I don't feel like I'm costing either paper anything).  One is to periodically delete all of the cookies the sites have stored with your browser.  Another is to keep your browser from running scripts from those sites.

Let me begin by remarking that both newspapers are doing their best to "have their cake and eat it too."  They want to make it easy for people to download articles without taking any extra actions.  For example, when I provide a link to a NY Times piece (which I have done), someone can follow that link and get a copy of the article immediately.  Unless, of course, the person following my link has already downloaded ten articles this month, in which case the Times wants them to get a message that their free-article limit has been reached and they'll have to pay if they want to see that particular article.  How does the Times implement that check against the limit?

Based on what we know about how the check can be blocked, there are two parts to the mechanism.  First, each time you download an article, a "cookie" comes with it.  A cookie is just a chunk of data that your browser stores.  In this particular cookie is a count of how many articles you've downloaded this month.  Whenever you make a request to the site that sent you the cookie, a copy of the cookie goes to the server as part of your request.  The Times' server increments the article count in the cookie and sends it back.  From what we know, the Times' server does not do the actual blocking; it just increments the article count.  As part of the article download, the Times also sends along a script -- a piece of code that your browser executes.  The script checks the article count in the cookie and blocks the display of the article if the count is too high.  We also know that if the cookie doesn't exist, the Times sends back a cookie with a count of zero.
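
For the technically inclined, here is a minimal sketch in Python of the scheme as I understand it.  To be clear, this is my reconstruction, not the Times' actual code; the cookie name, the limit of ten, and the function names are all invented for illustration.

```python
# Toy simulation of the article-limit scheme described above.
# The cookie name, the limit of ten, and the function names are
# hypothetical -- this is a sketch of the logic, not the Times' code.

ARTICLE_LIMIT = 10

def server_response(request_cookies):
    """Server side: increment the counter, creating the cookie at zero if absent."""
    count = request_cookies.get("article_count", 0)   # missing cookie -> start at zero
    new_cookies = {"article_count": count + 1}
    article = "...article text..."
    script = client_side_check                        # the script shipped along with the page
    return article, new_cookies, script

def client_side_check(cookies):
    """Client side: the downloaded script decides whether to display the article."""
    return cookies.get("article_count", 0) <= ARTICLE_LIMIT

cookies = {}                                          # a reader with cookies and scripts enabled
for n in range(12):
    article, cookies, script = server_response(cookies)
    print(n + 1, "displayed" if script(cookies) else "blocked: free-article limit reached")

# Deleting the cookies (cookies = {}) or simply never running `script`
# defeats the limit, exactly as described above.
```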

Why implement the limit check in this fashion?  It makes things easier for the Times because all of the hard work is being done on your computer, not on their server.  For every request, the Times' server gets to do the same thing: increment the cookie counter, creating a new cookie if necessary, and send back the requested page plus the updated cookie plus the script.  No checking against the database to see if you're a subscriber.  No generating a different sort of response.  This makes the server simpler, faster, and (IMO probably the deciding factor) cheaper.  It also makes it easy to defeat the limit: either delete the Times' cookies periodically or refuse to allow the script to run.  Deleting the cookie in order to defeat the limit does raise an ethical question (I'm intentionally taking action in order to read articles that the owner hasn't given me "permission" to read).  Keeping the script from running is a murkier case.

There are good reasons to block scripts.  Scripts can collect a lot of personal information about you and send it off to the bad guys.  In extreme cases, scripts can mess with your computer in bad ways.  Security advisers often recommend blocking script execution generally (the University of California at Santa Cruz guidelines for campus users are an example of such a recommendation).  If a person has blocked scripts, the Times' limit on the number of articles that can be viewed is defeated.  If a person was already running the Firefox browser with the NoScript add-on blocking execution of scripts, the Times turning on its article limit would have been a total non-event: that person's perception of the Times' site would have been exactly the same after the limit was turned on as it was before.  In effect, the Times is asking readers to operate their browsers in an insecure fashion so that the Times can implement article limits cheaply.

The Times is essentially saying, "We're going to put articles up in a public place.  We request that you only read ten articles per month without buying a subscription.  We want you to remember your count and stop at the appropriate time.  We're going to count everything you read, no matter how trivial, no matter how you got there (including following bad links we ourselves provide), against the limit.  And we're not going to make a serious effort to keep you from reading past the limit."  That's not a business arrangement; that's a request for contributions.

Tuesday, November 20, 2012

Fun with Cartograms

A cartogram is a map in which the geometry is distorted so that the displayed area of a region matches some variable other than physical area.  Red/blue cartograms with US states distorted to reflect the number of electoral votes rather than the physical area become popular every four years.  Entire web sites have been created to distribute cartograms.  From time to time, I find myself wanting to generate a cartogram, but have lacked the appropriate software.  Last week I decided to do something about that.  I spent a day looking at various free packages available on the Internet.  Some wouldn't run on my Mac; some required learning obscure details of a complex user interface; some required map data in specific formats I didn't have available.

Ultimately, I decided to build my own little system around M.E.J. Newman's cart and interp programs.  The programs are written in vanilla C and compiled cleanly on my Mac [1].  The paper describing the detailed algorithm [2] is also available.  I already had a file with state outlines that I had obtained from Wikipedia.  A couple hundred lines of Perl later, I had working code that would generate cartograms for the 48 contiguous states plus the District of Columbia, using Dr. Newman's programs to do the hardest part.  There are still a lot of details to attend to before things are more general and more automatic, but at least I can play with maps.
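
For anyone curious what the glue code has to do, here is a rough sketch of the pipeline in Python (my actual code is Perl, and this is not it).  The grid size, the file names, and especially the command-line arguments I pass to cart and interp are assumptions for illustration; check Dr. Newman's documentation for the real invocation.

```python
# Rough sketch of the cartogram pipeline, in Python rather than the Perl I
# actually used.  The grid size, file names, and the command-line arguments
# passed to `cart` and `interp` are assumptions for illustration only.
import subprocess

NX, NY = 512, 512                      # density grid resolution (assumed)

def point_in_polygon(x, y, poly):
    """Standard ray-casting test; poly is a list of (x, y) vertices."""
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def write_density_grid(state_polys, state_values, path):
    """Rasterize the outlines onto an NX x NY grid, spreading each state's
    value (population, federal acreage, electoral votes, ...) evenly over
    the cells it covers.  Outline coordinates are assumed to already be
    scaled to [0, NX) and [0, NY)."""
    grid = [[0.0] * NX for _ in range(NY)]
    for name, poly in state_polys.items():
        cells = [(i, j) for j in range(NY) for i in range(NX)
                 if point_in_polygon(i + 0.5, j + 0.5, poly)]
        if not cells:
            continue                   # outline too small for this grid; skip in the sketch
        for i, j in cells:
            grid[j][i] = state_values[name] / len(cells)
    with open(path, "w") as f:
        for row in grid:
            f.write(" ".join("%g" % v for v in row) + "\n")

def run_cartogram(density_path, points_in, points_out):
    """Let Newman's programs do the hard part: `cart` computes the
    diffusion-based transformation from the density grid, and `interp`
    pushes the outline points through it.  The argument order shown here
    is an assumption, not a verified invocation."""
    subprocess.run(["./cart", str(NX), str(NY), density_path, "grid.dat"], check=True)
    with open(points_in) as fin, open(points_out, "w") as fout:
        subprocess.run(["./interp", str(NX), str(NY), "grid.dat"],
                       stdin=fin, stdout=fout, check=True)
```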

The first map, shown to the left, is the basic undistorted map.  It's either a conic or an equal-area projection of the continental US; the Wikipedia page doesn't say which.  That's not really important, as the two are essentially identical over this area.  The Wikipedia file has a couple of small errors in the outline descriptions that show up in certain drawings.  I corrected the worst one, but plan on obtaining different outlines at some point in the future anyway.

The next map is distorted so that the area of each state reflects its population.  The proportions are not perfect, but close.  The errors are probably due to my using too little padding around the map.  The Gastner and Newman paper discusses how much padding is appropriate, and I used less than they recommend.  In areas where the population density is roughly the same across adjacent states (e.g., Illinois, Indiana, and Ohio), the shapes of the states are recognizable.  Where density changes drastically (e.g., California), the shapes are more distorted.  This is the classic problem for cartograms -- how to adjust parameters so that things don't get distorted too badly.

The next map is distorted so that the area of each state represents the size of the federal land holdings within that state.  The same 11 states are shaded violet in this map and the preceding one.  It is one thing to read that most such holdings are in the West; the cartogram makes that painfully obvious.  The two violet areas can't be compared directly; the total areas of the 48 states in the two distorted maps don't match.  With some care, though, the diffusion algorithm should make it possible to set things up so that the maps can be compared.

Finally, just to show that I can do it, here is the basic red/blue map distorted by each state's electoral votes for the 2012 Presidential election.  Note how prominent Washington, DC becomes in this map, as it expands so its area equals that of the other states with three votes (e.g., Wyoming and Vermont).  As always, there are lots of things that could be done with color shading to convey additional information.


[1] The programs depend on one external library for an implementation of the Fast Fourier Transform.  That library is also free, has been ported to many operating systems, and built just fine on my Mac.

[2] Michael T. Gastner and M. E. J. Newman (2004), "Diffusion-based method for producing density-equalizing maps," Proc. Natl. Acad. Sci. USA 101, 7499-7504.

Saturday, November 17, 2012

Western Donor and Recipient States

By far the most popular entry in this blog has been one I wrote about federal donor and recipient states: that is, whether states pay more or less in federal taxes than they receive in federal expenditures.  In the last ten days or so, there has been a sharp increase in the number of times that page is downloaded.  I suspect that is related to a prediction I made to a friend shortly before the election: that if President Obama won, there would be an increase in the use of the terms "secession" and "revolution" by people in so-called red states [1].  The increase has certainly happened.  My opinion is that the increased interest in donor/recipient status comes from people in blue areas researching the often-quoted factoid that blue states (and, within states, blue areas) subsidize red ones.

That previous piece wasn't concerned with the red-blue differences, but with how the 11 states from the Rockies to the Pacific compare to the rest of the country.  By the usual Tax Foundation measures, the western states clearly subsidize the rest of the country.  Inside that group, depending on exactly what you compare, five states are donors: California, Colorado, Nevada, Oregon, and Washington.  Of the remaining six, all but Arizona have populations below three million -- some way below three million -- so those don't have a large effect on the total.  In this piece, I want to argue that even for the six western recipient states, the reasons they are recipients don't necessarily match the conventional wisdom of poor, lazy, etc.

The first unconventional factor is the very large federal government land holdings in the West.  In each of the 11, between 30% and 85% of the state's area is owned by the feds.  This ownership results in assorted distortions.  Wyoming is an example.  Almost 40% of the coal mined in the US is produced in Wyoming.  Much of that production is from federal lands.  In states where large amounts of coal are produced from private land -- West Virginia, Kentucky -- the state levies substantial severance taxes.  Wyoming can't tax the federal government.  Instead, the federal government shares a portion of the royalty revenue it gets from the coal mining with the state.  In the federal flow of funds accounting, these royalty payments show up as federal expenditures.  Absent those large payments, Wyoming would be a donor rather than a recipient.

Another factor is the size of some of the operations conducted on federal land in the West.  A number of national laboratories and large military bases are located in the West.  These operations loom even larger when compared to the small populations of the states where they sit.  New Mexico is an example.  Sandia National Laboratories is located in New Mexico, and the Lab's $2B annual budget is counted as an expenditure in New Mexico.  The White Sands military reservation (including the White Sands Missile Range) has a similar budget.  Those two facilities alone account for about $2,000 in federal expenditures for every person who lives in New Mexico.  Idaho is another small state with a large national laboratory.
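
The per-capita arithmetic is trivial, but worth making explicit.  Using the rough numbers cited above (about $2B apiece for the two facilities, and roughly two million New Mexico residents in the 2010 Census):

```python
# Back-of-the-envelope check of the roughly $2,000-per-person figure above.
# Budgets are the rough $2B figures cited in the text; the population is the
# approximate 2010 Census count for New Mexico.
sandia_budget      = 2.0e9      # dollars per year, roughly
white_sands_budget = 2.0e9      # dollars per year, roughly
nm_population      = 2.06e6     # 2010 Census, approximately

per_capita = (sandia_budget + white_sands_budget) / nm_population
print(f"${per_capita:,.0f} per New Mexico resident")   # on the order of $2,000
```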

Finally, there are some demographic factors that affect the outcome.  States like Arizona are home to a large number of retirees.  While a farmer is working in Illinois, his Social Security and Medicare taxes are collected in Illinois.  When he retires to Arizona, his SS and Medicare payments are federal expenditures in Arizona.  Increasingly, he may bring Medicaid money into Arizona as well, if he moves into a nursing home and is poor enough [2].  There doesn't appear (to me) to be any sane way to account for people retiring to states other than those where they paid their taxes while they were working (and where their children continue to work and pay taxes).  Absent some way to account for that situation, Southwestern (and Southern) states receive disproportionate social insurance payments, but it's not their fault.

My point is that determining the donor/recipient status of a state in a meaningful way is harder than just comparing tax receipts and flow-of-funds expenditures.  In the West, it's harder than in most places.


[1] There are very few all-red or all-blue states.  The real divide is a rural/urban thing, which becomes clear if you look at the red-blue maps done at the county level.  For example, Georgia may be red, but Atlanta and adjacent counties are blue, at least in the last two Presidential elections.

[2] Almost 50% of Medicaid expenditures are now for long-term care, particularly for the low-income elderly.

Monday, November 12, 2012

IEA World Energy Outlook 2012

The International Energy Agency (IEA) released its World Energy Outlook 2012 report today.  The big news in this forecast is the prediction that by 2020 the US will be the largest oil producer in the world, and that by 2030 the US will return to being a net oil exporter.  The last time the US was a net oil exporter was in the late 1940s.  I've written about the WEO report before.  I said then that I found the forecasts improbable.  I still do, and for the same reason.  The numbers in the report are generated from the IEA's World Energy Model (WEM).  In the documentation for that model we find:
The main exogenous assumptions concern economic growth, demographics, international fossil fuel prices and technological developments.... Demand for primary energy serves as input for the supply modules.

As a modeller myself, I've always complained bitterly about this structure.  In effect, it allows the people using the model to: (1) assume a politically acceptable level of growth; (2) work backwards to the supplies and prices of energy necessary to produce that growth; and (3) assign production levels to various sources in order to produce the necessary supplies and prices.  In past years the IEA assigned large amounts of supply growth to the OPEC countries.  Now that OPEC has suggested that it won't be providing large increases in production, the IEA forecasts that tight oil in the US (plus natural gas liquids) will provide the needed increases.  What's missing in this picture?  There should be a feedback loop that links primary energy production costs to supplies and prices.  Producing a million barrels per day over decades from tight US formations such as the Bakken requires lots of new wells to be drilled essentially forever.  Money spent on drilling is not available to the rest of the economy.  Energy supplies and prices need to be a part of the economic model, not specified outside of it.
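
To make the structural complaint concrete, here is a toy sketch of the two structures.  It is emphatically not the WEM, and every number in it is made up; the point is only to show what adding the missing feedback loop does to the shape of the output.

```python
# Toy illustration of the structural point above.  This is emphatically NOT
# the IEA's World Energy Model; every number here is invented.  The point is
# only the difference in shape once the missing feedback loop is added.

YEARS = 30

def exogenous_growth(growth=0.03):
    """WEM-style structure: growth is assumed, energy demand is derived from
    it, and supply is then assigned to meet that demand."""
    gdp, path = 1.0, []
    for _ in range(YEARS):
        gdp *= 1 + growth
        path.append(gdp)
    return path

def with_feedback(base_growth=0.03, cost_escalation=0.05):
    """The missing loop: each year the marginal barrel costs more to produce
    (more wells drilled in tighter rock), and the extra money spent on
    energy is a drag on growth."""
    gdp, cost_share, path = 1.0, 0.04, []
    for _ in range(YEARS):
        cost_share *= 1 + cost_escalation              # energy takes a bigger bite of GDP
        drag = 0.5 * (cost_share - 0.04) / 0.04 * base_growth
        gdp *= 1 + base_growth - drag
        path.append(gdp)
    return path

no_fb, fb = exogenous_growth(), with_feedback()
peak_year = fb.index(max(fb)) + 1
print(f"no feedback:   GDP grows every year, reaching {no_fb[-1]:.2f} in year {YEARS}")
print(f"with feedback: GDP peaks in year {peak_year} at {max(fb):.2f}, "
      f"then declines to {fb[-1]:.2f}")
```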

The graph to the left is an example of the kind of linkage I'm talking about [1].  Since the 1970s, each time US expenditures on crude oil have increased sharply to levels at or above 4% of GDP, a recession has followed.  People have built models with energy as part of the economy for decades.  The model in Limits to Growth is probably the best-known of the group.  Ayres and Warr have published a considerable amount of work in which the availability and cost of energy are a core part of the economic model.  Such models seem to yield pretty consistent results: without "then a miracle happens" technology interventions or new cheap sources of oil, we are living in the years in which economic output peaks and begins to decline.
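
For a sense of where a number like "4% of GDP" comes from, here is the back-of-the-envelope version using round illustrative figures.  These are not Kopits' actual data, and his chart is surely built more carefully; this is just one simple way to compute such a ratio.

```python
# Rough computation of crude oil spending as a share of GDP, using round
# illustrative numbers for 2012 (not Kopits' series).
barrels_per_day  = 18.5e6       # rough US petroleum consumption, barrels/day
price_per_barrel = 100.0        # rough average crude price, dollars/barrel
gdp              = 16.0e12      # rough 2012 US GDP, dollars

annual_oil_spend = barrels_per_day * price_per_barrel * 365
print(f"oil spend: ${annual_oil_spend / 1e9:,.0f}B "
      f"= {100 * annual_oil_spend / gdp:.1f}% of GDP")
```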

Over the last few years, the IEA forecasts have shown a lot of change from year to year.  This year there's a big swing in oil (and near-oil) production away from OPEC and toward the US.  Big swings don't give me much confidence in the underlying models.


[1]  Credit: Steven Kopits of Douglas-Westwood, testifying before the US House Subcommittee on Energy.

Friday, November 2, 2012

Election Day and Numbers

It's almost election day, so I feel obligated to write something political.

I'm a numbers guy -- always have been, probably always will be.  When politicians propose policy, I'm one of the people who demand that they show numbers that make at least some sense.  And when I want to know whether my candidates are doing well, I look at numbers: fund raising and polls in particular.  I'm also a pseudo-academic [1], so I have been pleased that some real academics look at ways to combine multiple polls to give more accurate results.  The chart to the left, an example from the fivethirtyeight web site earlier this month, shows estimated probabilities for Obama or Romney winning the electoral college and the popular vote.

Nate Silver and Sam Wang have been under attack lately.  Silver and Wang are only two of the better-known aggregators; there are also sites like Votamatic and Real Clear Politics' No Toss-Up States.  All four of those show Obama with a high probability of winning the electoral college vote; not surprisingly, much of the criticism comes from Romney supporters.  One of the common complaints leveled at Nate Silver in particular is that he doesn't weight all polls evenly.  These attacks seem particularly partisan since unskewedpolls.com -- a site that is openly partisan in favor of Romney, and that mangles the reported polling data to show Romney winning in a landslide -- is not given the same treatment [2].
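
For readers wondering what weighting polls unevenly actually looks like, here is a bare-bones sketch.  It is nothing like Silver's real model; the poll numbers, sample sizes, and pollster "quality" scores are all invented, and the only point is that bigger samples and historically better pollsters count for more.

```python
# Bare-bones poll aggregator -- nothing like Silver's actual model, just an
# illustration of weighting polls unevenly.  All numbers below are invented.

# (candidate's share, sample size, pollster quality score in (0, 1])
polls = [(0.51, 1200, 1.0),
         (0.49,  600, 0.6),
         (0.52,  900, 0.9),
         (0.48,  500, 0.5)]

unweighted = sum(share for share, _, _ in polls) / len(polls)

num = den = 0.0
for share, n, quality in polls:
    variance = share * (1 - share) / n   # sampling variance of the reported share
    weight = quality / variance          # bigger samples and better pollsters count more
    num += weight * share
    den += weight

print(f"unweighted mean: {unweighted:.3f}")
print(f"weighted mean:   {num / den:.3f}")
```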

Another form of attack comes from the media.  The Washington Post's Chris Cillizza, whose Fix column rates various races, recently moved Ohio from "lean Obama" to "tossup".  Cillizza's reason?  "…the absolute necessity for Romney to win the state if he wants to be president - leads us to move it back to the 'tossup' category."  Not that the numbers have changed, but that Ohio has become the must-win state for Romney, and therefore it becomes a tossup.  Maybe Chris is right.  OTOH, I'm more inclined to the theory that Chris' job #1 is to sell newspapers and pull eyeballs to the Washington Post's web site.  That's a lot easier to do if it looks like a horse race.

And finally, there are attacks based on the hypothesis that polls can be wildly wrong because they don't reflect the secret behind-the-scenes things that only a political insider would know.  Certainly polls can be wrong.  They can be wrong even beyond the margin-of-error numbers that are always included in the press releases [3].  Polls were one of the reasons that the Chicago Tribune printed its "Dewey Defeats Truman" headline.  But the statisticians who design the polls continue to learn their craft and improve their skills.  For example, one hears that cell phones have made polling less accurate.  Yep, and you can bet that the poll designers were at the forefront of identifying that problem, and then of designing methods to account for the effect.
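
As an aside, the margin-of-error number in the press releases is just the sampling error for a simple random sample, and it's easy to compute (my own illustration, not any particular pollster's method):

```python
# The press-release margin of error: for a poll of n respondents reporting a
# share p, the 95% MOE is roughly 1.96 * sqrt(p * (1 - p) / n).
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1000, 2000):
    print(n, f"respondents: ±{100 * margin_of_error(0.5, n):.1f} points")
# The point in the text: real-world errors (bad sampling frames, cell phones,
# likely-voter screens) can exceed this purely statistical number.
```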

People like Nate Silver and Sam Wang, even though they may personally prefer an Obama victory, depend for their livelihoods on being accurate and unbiased.  Upton Sinclair famously said, "It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"  It is just as difficult to get an academic statistician to bias their results when their reputation and future salary depend on being unbiased.  I've hung out with academics and former academics most of my life.  Based on that experience, I'm a firm believer that if you can bring real numbers that show Silver and Wang have flaws in their models, they'll be the first to admit it.  So far, the attackers aren't bringing numbers.



[1]  The way I define these things, real academics get PhDs and work for universities.  They teach, do research, speak at conferences, and publish in refereed journals.  I'm only a pseudo-academic.  I stopped at multiple Master's degrees, and my research activities were within the confines of the old Bell System and its various derivative parts following the 1984 break-up.  I have occasionally spoken at conferences and did publish a paper in the refereed IEEE Spectrum, but the conferences were, and Spectrum is, aimed at practicing engineers as well as academics.

[2]  Nate attempts to weight polls based on several factors, including historical accuracy.  At least IMO, the manipulations done by unskewedpolls.com and some others lack that sort of statistical justification.

[3]  Other factors: the Tribune's political insiders also predicted a Dewey win, and working around a year-long printers' union strike forced the Tribune to go to press before any actual results were available.

Thursday, November 1, 2012

Do the Math's "Star Trek Future" Survey

Over at Do the Math, Tom Murphy has an interesting piece about a semi-formal survey of physicists he conducted, asking for their opinions about the achievability of various advanced technologies and situations.  The physicists ran the gamut from undergraduate majors to grad students to full faculty members.  The results are discouraging for those who -- like me -- were promised flying cars when we were young.  The surveyed physicists saw self-driving cars becoming generally available within 50 years.  Everything else on the list -- fusion energy, lunar colonies, contact with aliens -- was "out there" tech or applications, and a lot of things -- artificial gravity, warp drive, teleportation -- were in the "not going to happen" category.

Tom provides lots of caveats.  He cheerfully admits that he isn't a survey expert, and may have screwed up the structure of the questions.  The choices for time frames are quite broad: less than 50 years, more than 50 but less than 500, more than 500 but less than 5,000, and so forth.  The physicists put fusion energy into the second of those categories; both 75 years and 475 years in the future fall into that band.  He identifies an "expert gradient" pattern: the graduate students are more pessimistic than the undergrads, and the faculty members are more pessimistic still.  Tom even references -- in more polite terms -- the old saw that science advances one funeral at a time.

Like me, Tom thinks that our current high-tech society faces a number of difficult fundamental challenges in this century.  He takes a more global view than I do.  When he considers potential energy sources, for example, he often looks at global needs.  I admit to being a lot more parochial than that.  I think that there are big chunks of the world that have very little chance of maintaining their current population level and maintaining (or achieving) a high-tech society, so we need to be looking at regional solutions.  For example, India will have problems because of its large and growing population.  Africa will have problems because of the lack of existing infrastructure.  And so on.

The good news -- if you can call it that -- is that exotic new science doesn't appear to be necessary for some regions.  One of Tom's most interesting posts is the concluding one in his examination of alternate energy sources, an energy matrix that compares those sources on several measures (availability, potential size, etc.).  Tom's conclusions?  Electricity is a solvable problem: a relatively small number of technologies, most already in existence, will probably suffice.  Transportation, on the other hand, is a hard problem, because it is much harder to electrify than heating/cooling, lighting, and the like.  I agree with those conclusions for some regions.

One of the regions that I worry about is the US's Eastern Interconnect.  Almost 70% of the US population lives in the states that make up that area; in 2010, those states were responsible for 72% of all US electricity generation; and also in 2010, 73% of that electricity came from coal and nuclear power plants (50% coal, 23% nuclear).  Over the next 25 years, the large majority of those nuclear plants will reach the end of their operating license extensions, and it seems unlikely (at least to me) that very many of them will be allowed to continue operation.  The Eastern Interconnect also accounts for 80% of coal-fired generation in the US, so any major reductions in coal use to address climate change and air pollution will fall very heavily on that region.  It's not clear to me where adequate supplies of electricity are going to come from.