Wednesday, March 26, 2014

Mike the Pythoneer...

A few weeks back my old Mac Mini got to the point where it was giving me the "gray screen of death" every 18 to 36 hours [1].  Replacing the RAM -- failing RAM being the most frequent cause of kernel panics -- didn't fix the problem.  I decided that six-and-a-half years was a good run, that I had outgrown the RAM size limit, that having a graphics chip too dumb to support acceleration for OpenGL wasn't good, and that having no OS upgrade path for the old machine was a bad thing.  So I got a new Mini.  It has not been an entirely painless process, largely because there have been a bunch of changes in the Apple development tools.

I have a couple of pieces of software that I use regularly that I wrote in Perl.  I'm entirely dependent on one of them, a note-taking application that also acts as the front-end for (sort of) organizing a whole collection of files of various types -- PDFs, images, old source code -- that is pretty static these days [2].  Another, part of a package for drawing cartograms, is under off-and-on development.  Both were broken under the default Perl/Tcl/Tk that comes as a standard part of the new OS X, and required some effort to get running again.  To get the note-taking app running I ended up downloading and using ActiveState's Perl instead of the Apple default.  At one point in the past I had toyed with the idea of rewriting the note-taker in Python (for other reasons) and had some code that tested all of the necessary GUI functionality; that old Python code ran on the new Mini with no problems using the Apple default installation.

Reading a variety of Internet things led me to (perhaps erroneously) conclude that: Apple is moving away from Perl and towards Python for scripting; Tkinter is a required part of the Python core, but Perl has no required GUI module; so Apple is more likely to keep the chain of Python/Tcl/Tk working in the future.  Suddenly, switching to Python seemed much more compelling than before.  I also came across Eric Raymond's old "Why Python?" article from the Linux Journal.  His experience seemed to match my own recollections of writing that little Python program related to the note-taker: I got over the "eeewww" thing about the use of white space to identify block structure fairly quickly, and I seemed to be writing useful non-toy code fairly quickly.

One of my favorite (and I use that word with some ambiguity) time-wasters on my computer is FreeCell.  Since Windows 95, Microsoft has always provided a simple version of the game as part of the standard distribution.  When I switched to a Mac at home several years ago, the lack of a simple free reasonably-attractive version of the game grated on me [3].  I ended up using a Javascript version in my browser, but recently that one began to act flaky, putting up placeholders instead of images for some of the cards.  "Two birds with one stone," I thought.  "Practice Python and get a version of FreeCell that looks and behaves like I want."

The good news is that with a couple of manuals open in browser tabs, writing the game went quickly.  Call it 15 hours total over three days to go from nothing to something up and running that my fingers are almost comfortable with.  And by nothing, I mean just that: no cardface images, no thoughts on data structure.  A blank slate.  Some of that time went into small rewrites, whenever I found an example that showed a better Python idiom.  The bad news comes in several parts:
  • There's a version of FreeCell that my fingers know just sitting there on the desktop now, begging to be "tested".
  • Feature creep is going to be an issue.  It should have undo.  It should have the ability to remember the current layout and return to it.  It should have a built-in solver that works from any position.
  • It should run on my old Linux laptop.  It should run on my Android tablet.  It should run on my wife's iPhone (well, maybe that's a stretch).
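As it turns out, a FreeCell layout needs surprisingly little state: eight cascades, four free cells, four foundations.  Here's a minimal sketch of a deal in that spirit -- the card encoding and layout below are my own illustration for this post, not the actual program:

```python
import random

RANKS = "A23456789TJQK"
SUITS = "CDHS"  # clubs, diamonds, hearts, spades

def new_deal(seed=None):
    """Shuffle a 52-card deck and deal it FreeCell-style:
    cascades 1-4 get seven cards, cascades 5-8 get six."""
    deck = [rank + suit for suit in SUITS for rank in RANKS]
    random.Random(seed).shuffle(deck)
    cascades = [deck[i::8] for i in range(8)]   # deal round-robin across eight piles
    free_cells = [None] * 4                     # the four open cells, initially empty
    foundations = {suit: [] for suit in SUITS}  # built up ace-to-king by suit
    return cascades, free_cells, foundations

cascades, cells, foundations = new_deal(seed=20140326)
print([len(c) for c in cascades])  # [7, 7, 7, 7, 6, 6, 6, 6]
```

Undo, at least, then falls out almost for free: push a deep copy of the whole state onto a list before each move, and pop to undo.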
So I guess I'm going to be a Pythoneer (I actually think "Pythonista" sounds better, but that's been preempted as the name of a commercial application).  Expect an occasional update on how things are going...

[1]  In the event of certain kernel panics, Mac OS X puts up a translucent overlay to block the display, along with a dialog box that says "You need to restart your computer" in several languages.  Until recently, I had no idea that such a thing even existed.  For the record, since it no longer has to support nearly as many processes, the old Mini has been up without a problem for more than three weeks.

[2]  At last count, a few hundred pages of notes and more than 100M of images, PDFs, etc.  As to why write my own when there are dozens of note-taking applications out there, let's just say that I'm an old geek and paranoid and don't like to have critical data stored in a proprietary file format.

[3]  I'm sure that all of the authors of the solitaire packages out there think their games are laid out attractively.  I just happen to disagree.

Tuesday, March 18, 2014

3D Printing

Seth Stevenson at Slate has a column about his disastrous attempts to run a 3D printer.  The picture to the left is one of his attempts.  His experience seems educational, given that the price of printers continues to fall.  Slate apparently gives Seth a bigger hobby budget than my wife gives me, since they bought him a printer.

The Solidoodle 4 looks like a really nice little hobby machine.  Steel frame and covers, compact footprint, relatively large working volume (a cube eight inches on a side -- you can build relatively large things), ability to use either ABS or PLA plastic.  Billed as being driven over a USB connection by any of Windows, Linux, or a Mac.  With a retail price tag of $999, fully assembled (many low-cost 3D printers come as kits, with some degree of assembly required).

Seth's experience, though, demonstrates that there's more to 3D printing than just taking the gadget out of the packaging and firing it up.  Calibration, cleaning, temperature selection, little details like sometimes spraying the bed on which the object is printed with hair spray so the plastic adheres properly, putting up with the noise and the smells.  If the experiment had been more successful, there would have been the longer-term problems of maintaining an inventory of materials (i.e., do you have enough red plastic on hand to make that spiffy Christmas tree ornament?).  The list of problems reminds me of homemade printed circuit boards.

You can make a printed circuit board at home in your garage or basement [1] in a matter of hours.  The results look like homemade PCBs.  OTOH, if you need to make PCBs infrequently, you can upload the computer files to a service bureau that will, at a cost of a couple of dollars per square inch, make the board for you and mail it to your home.  You get a much higher quality result, at the cost of time (a couple of weeks is typical) and a bit more expense.  When I built the World's Most Sophisticated Whole-House Fan Controller™ a few years ago, I had a service bureau make my boards, and was very happy with the entire experience [2].

There are a rapidly growing number of 3D printing service companies.  Generally speaking, they're going to have better printers than I could possibly afford -- unless I become a service company myself -- because they keep them busy; they already know things like the proper temperature setting for all of the materials they use; they have experience with the idiosyncrasies of their equipment; and they keep suitable stocks of material on hand.  I would be very interested in reading a column where Seth tries that route, as well.  Certainly for the time being, any 3D printing experiments I conduct will be done through a service bureau.

[1] Given the nature of the chemicals and the stains they can make, don't try this in your kitchen.  Trust me on this.

[2] Some of that might be that I ordered two copies but they sent me four for the agreed-upon price.  Since I eventually used three of the boards (due to operator error and a voltage surge), that turned out very well.

Monday, March 10, 2014

TABOR Lawsuit to Proceed?

You know you're a policy wonk of some sort when you find the following of great interest: last Friday the 10th Circuit US Court of Appeals ruled [PDF] that the Colorado TABOR lawsuit could go forward.  No word yet on whether the defendants in the case will appeal to the SCOTUS or not.  For those to whom this might be of passing interest...

TABOR, the Taxpayers Bill of Rights, is an amendment to the Colorado state constitution passed in 1992 which imposed two main restrictions on state and local governments across the state.  First, it restricted the rate at which government spending could increase year-over-year to the sum of population growth plus inflation, unless increases beyond that were approved by a vote of the people in the jurisdiction.  Excess revenue beyond what could be spent had to be refunded.  Second, it required that new taxes, or increases to existing tax rates, must also be approved by a vote of the people.  The effect was to impose direct-democracy restrictions on the ability of elected government officials to set tax rates and spending levels.
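In rough arithmetic terms, the revenue limit works like this (a sketch with hypothetical numbers, not actual Colorado budget figures):

```python
def tabor_limit(prior_base, population_growth, inflation):
    """Maximum allowed revenue under TABOR: the prior year's base,
    grown by the sum of population growth and inflation."""
    return prior_base * (1 + population_growth + inflation)

def refund_due(revenue, limit):
    """Revenue collected above the limit must be refunded to taxpayers."""
    return max(0.0, revenue - limit)

# Hypothetical figures: a $10B base, 1.5% population growth, 2% inflation.
limit = tabor_limit(10_000_000_000, 0.015, 0.02)   # $10.35B allowed
print(refund_due(10_600_000_000, limit))           # roughly $250 million must be refunded
```

Note what the formula doesn't track: the cost drivers of what government actually buys (health care, education), which can grow faster than population plus inflation.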

Whether by design or coincidence, the most serious consequences of the TABOR restrictions come into play during recessions.  Following the recession of 2001, the Colorado state budget was in sufficiently dire straits that the legislature put Referendum C on the ballot in 2005.  Ref C provided a five-year time-out on TABOR refunds, and set a new floor from which future spending increases would be measured.  Ref C was supported by then-governor Bill Owens; many people believe that support cost Mr. Owens a bright future on the Republican national stage.  The recession of 2007-09 also put large pressures on the state budget.  In response to the pressures brought on by recessions, the Colorado legislature has resorted to a certain amount of accounting trickery: the Colorado Opportunity Fund shields a considerable part of state higher education funding from TABOR, as did a rewrite of the unemployment insurance statutes to replace the word "tax" with "premium" wherever it appeared.

In 2011, a group of current and former members of the state General Assembly filed suit in federal court challenging TABOR on the grounds that by removing control of taxes and spending levels from the legislature, Coloradans were denied the "Republican Form of Government" guaranteed by the US Constitution.  To this point, the argument has been whether the plaintiffs have standing to sue, and whether the entire matter is non-justiciable due to the political question doctrine.  In 2012, the District Court ruled that (a) plaintiffs had standing and (b) this case is sufficiently different from others settled in the past that the political-question doctrine did not apply.  Last Friday, the Appeals Court agreed, and remanded the case to the District Court for further proceedings.

Opinions.  Bear in mind that IANAL (although I had to pretend to be one at times while I was working for the state legislature), nor am I without bias in this matter:
  • To this point, the defense has been remarkably lazy.  Their argument consists basically of "state legislators don't get standing just because they lose a vote" and "the SCOTUS says Guarantee Clause matters are non-justiciable."  In fact, the SCOTUS decisions are much more nuanced than that.  Where the defense has ignored the details entirely, plaintiffs appear to have done their homework on those nuances, building arguments that all of the SCOTUS conditions for standing and justiciability are satisfied.  Thus far, the District and Appeals Courts have concurred.
  • In light of this, I expect the defendants to appeal to the SCOTUS.  I say that because I expect that the defense has been just as lazy in preparing arguments in the event that the case goes forward.  Their better odds of success at this point lie in getting a conservative SCOTUS -- and politically, TABOR is a darling of small-government conservatives -- to say the Appeals Court misinterpreted things and toss the whole case.
  • Should the case go forward, the defense argument will largely be "the legislature gets to split the pie up however they want, subject to other restrictions that aren't being challenged in this suit -- they just don't get to set the size of the pie."  And that such a restriction doesn't impose an undue burden, since they can always ask the voters to make the pie bigger (although in Colorado, making such a request requires a two-thirds super-majority in each chamber).
  • Should the case go forward, and the defendants make that argument, they will lose -- setting the size of the pie will be defined to be a core legislative function.  And lose again on appeal.  And then win in the SCOTUS, assuming the current make-up.

Saturday, March 1, 2014

A Handset? Really?

News stories today reported that President Obama spent 90 minutes on the phone with Russian President Putin, warning him about the consequences of taking military action in Ukraine.  This is the picture that accompanied one such story.  Seriously, dude, do you want me to believe that you're dumb enough to spend 90 minutes on an important diplomatic call holding a handset?

Over the course of my professional career(s), I spent a lot of time on long-duration phone calls.  I will guarantee that if you do so, at some point in the call there comes a moment when you need both hands free.  Maybe you have to type something into the computer.  Maybe you need to pin the piece of paper down while you jot a note.  Maybe you just need both hands free because you can't say "Vladimir, you have no idea who you are f*cking with!" without appropriate hand waving [1].  The moment comes, and you can't do the proper thing if one hand is busy holding a handset up to your ear.

Before making/taking a call that was supposed to last that long, I had my headset on.  I still wear a headset even if I'm just calling my Mom, let alone making a business call.  And my headset is no doubt behind the times.  If I had the NSA at my beck and call, you can d*mned well be sure that I would have the world's absolute best echo cancelers in place, and I'd be talking in the open air, hands waving the whole time, without benefit of a headset.

Let's get with the times.  A handset on a coiled cord is not the impression we're trying to make.

[1] Just so you remember why I couldn't get elected dog catcher, let alone something that requires even more diplomacy :^)

Photo credit: Official White House photo by Pete Souza

Friday, January 31, 2014

Two Nuclear Paths

In a previous post I laid some groundwork for eventually arguing that different parts of the US are following two very different policies with respect to nuclear electricity.  It's not a new topic for me; I've written about the differences in the size of the role nuclear has in powering the three electric grids in the US; that nuclear power is much more important in the Eastern Interconnect than in the Western; and that the evidence provided by plans for new reactors suggests things will continue that way.  Over the last several months there have been a number of events that reinforce my thinking on the subject.

Georgia Power placed the 900-ton bottom portion of the containment vessel for reactor unit 3 at the Vogtle power plant.  Georgia is an ideal example of a state that needs nuclear power.  The state has, at least compared to its growing demand, quite limited undeveloped renewable resources suitable for generating electricity.  For that matter, the state has little in the way of fuel resources, depending on imports of everything.  It always startles me to remember that the 3.5 GW coal-fired Scherer power plant is fueled exclusively with Wyoming Powder River Basin coal that travels 2,100 miles to be burned [1].  Long supply lines can be fragile.  The 2011 Mississippi, Missouri, and Ohio river floods were only the most recent to create various disruptions in rail transport from west to east.  Nuclear fuel is much more compact and refuelings infrequent.

In a different direction, Southern California Edison notified the Nuclear Regulatory Commission that they will retire the two reactors (half of the California fleet) at the San Onofre power plant rather than repairing them.  The plant was taken offline in January 2012, when excessive wear in steam-generator tubes resulted in the release of a small amount of radioactive steam.  Last summer, SCE urged conservation and the California independent system operator gave permission for two gas-fired generators to be temporarily restored to service.  Those generators had been shut down as part of a clean-air effort, with plans to replace them with new generators in a better location.

Also in the Western Interconnect, an interesting study of the Columbia Generating Station (the nuclear power plant in Washington State) was published.  The report estimates that the cost to produce electricity at CGS is significantly higher than the price at which the same amount of electricity could have been purchased in the wholesale market.  The report identifies several reasons why that is so, including: it's the only nuclear plant the owner operates, it is located far from the demand centers where its electricity is consumed, and that the region now has a surplus of renewable generating capacity.

On a final note, a number of industrial-scale solar power plants began delivering power to the grid.  These include the 250 MW California Valley Solar Ranch (PV); the 130 MW Tenaska Imperial Solar Energy Center South project (PV); and the 392 MW Ivanpah Solar Electric plant (central-tower solar thermal).  Summer 2014 ought to be interesting in Southern California — particularly if it's a hot one — as they try to juggle intermittent renewables and natural gas to balance the decommissioning of San Onofre.

[1] Clean up the ash ponds.  Install new pollution control equipment for fine particulates and various noxious gases.  Potentially, pay a carbon tax or buy emission permits for the CO2.  There are a number of reasons to think that coal-fired electricity will cost more in the future.

Friday, January 24, 2014

Western Secession 6 - East vs West in Maps

The broad theme of this series of posts is that a peaceful partition of the US into at least two parts is likely in the middle sort of future (probably more than 25 years, probably less than 50).  The particular partition that I think about is East and West.  Previously, I argued that there is a natural geographic dividing line between the two: the Great Plains region, already pretty empty of people and generally getting emptier.  This post puts up a whole pile of maps in order to argue that there are fundamental differences in the situations faced by the East and the West, which will in turn lead them to want to take different paths in solving some problems.  When those differences become more important than the similarities between the regions, separation becomes a viable option.  Some of these maps have appeared in earlier posts.

Mountains.  One-third of the 48 contiguous states isn't like the other two-thirds, as shown in the relief map to the left.  From the western edge of the Great Plains to the Pacific Ocean, the terrain is dominated by mountains.  East of the Great Plains, not nearly so much.  The highest point east of the Great Plains is only 1,400 feet higher than my house in a Denver suburb; I routinely make up that difference on "easy" hikes up into the foothills.  The difference in terrain has a number of consequences, as discussed along with the next several maps.  Map credit: U.S. Geological Survey.  (Aside: I love this map.  The USGS says it's based on 12 million elevation data points extracted from their topo maps.  A 56"x36" paper version is available for $12.00.)

Settlement patterns.  The mountainous terrain dictated where people could settle (and continues to do so today).  Steep gradients mean that the rivers are generally not navigable over long distances (the Columbia being a limited exception).  Areas where agriculture is practical, and often the types of agriculture, are limited (the growing season at altitude can be remarkably short).  All of this dictates where significant numbers of people can live, summarized in the population map to the left (each white dot represents 7500 people).  In some ways, the western part of the US is more urban than the eastern part.  Not in the sense of tall buildings and small apartments, but rather that a larger majority of the people live in the urban and suburban areas of a few metro areas.  Metro areas are fewer, and much farther apart.  The spaces between metro areas are empty in a way that occurs rarely in the eastern part of the country.  Map credit: U.S. Census Bureau.

Transportation.  The map to the left shows truck freight volume by federal highway route for 2007, with thicker lines indicating more tonnage.  Just as the terrain had a large influence on where sizable cities are possible in the West, mountain ranges (and more importantly mountain passes) dictate where most of the transportation routes must run.  In some cases, the routes today are the same routes that wagon trains used when they headed out across the Great Plains headed for the West Coast.  Wyoming's South Pass is the only sane place for a busy freight route to cross the Rockies between Colorado and the Canadian border.  Implicit in this map is that a good deal of the east-west traffic involves transport between the coastal port cities and the more heavily populated East.  Rail freight volumes show a similar pattern, with the addition of a huge-volume route headed east from northeastern Wyoming.  That route carries very large shipments of Powder River Basin low-sulfur coal to eastern power plants.  Map credit: U.S. Department of Transportation.

Federal land holdings.  As a result of the settlement patterns and timing (the federal government made an enormous change in public land policy around 1900), the federal government has very large land holdings in the western states.  The cartogram to the left, where states have been resized to represent the area owned by the feds, illustrates the point.  This has made life difficult for state governments in many ways.  Policies affecting a variety of things — some not immediately obvious — can be difficult to manage when the largest landowner in the state (about 40% of the land, on average) is free to simply ignore the state law and do what it pleases.  Local resentment towards federal ownership rises and falls in cycles, and seems to be on the upswing again in recent years.  Map credit: author's own work, using a wrapper around Mark Newman's highly useful cart and interp programs.

Water.  Precipitation west of the Great Plains is very low compared to the areas on the east side.  The areas with the heaviest precipitation are, for the most part, mountain ranges or valleys between ranges where the water falls as snow in the winter.  Agriculture in the West has always been about storage and management of water.  As Mark Twain is famously credited with saying, "Whiskey is for drinking; water is for fighting over."  Irrigation is important even in the Pacific Northwest, which appears to be much wetter, due to seasonal variations.  During the critical growing months of July and August, Seattle and Portland are as dry as, or drier than, Phoenix and Denver.  Phoenix and Denver get summer rainfall from thunderstorms triggered by the North American Monsoon that does not reach Oregon or Washington.  The prime irrigation example is California's Central Valley: with irrigation it is perhaps the richest farming area in the world; without irrigation, it's a semi-arid near-desert.  Map credit: Oregon Climate Service.

Fire.  The last few maps painted a picture of a West that is sparsely settled and dry, with large areas held by the federal government, much of it in the form of undeveloped national forests and wilderness areas.  That's a nice prescription for wildfires.  Fire is, in fact, a natural part of many western ecosystems.  For example, some tree species have evolved so that the heat of a fire (which burns off brush and grass that would compete with the seedlings) is required to release their seeds.  The map to the left illustrates the number of wildfires from 1980 to 2003 that individually covered more than 250 acres.  250 acres is, by western standards, a small fire.  In most recent years, at least one western wildfire has reached at least 100,000 acres.  Some have been several times that large.  Western wildfires have become much more dangerous and damaging in recent years, though, in part due to misguided fire-suppression policies on federal land during the first half of the 20th century that allowed huge amounts of fuel to accumulate.  Map credit: U.S. National Aeronautics and Space Administration.

US electric power grids.  The next few posts in this series are going to talk about electricity.  I make no bones about it — I believe that managing the transition from fossil-fuel powered electricity generation to something else, in quantities sufficient to support modern tech, is the public policy problem for the next 50 years.  The US power grid is actually three grids that are largely independent, illustrated in the map to the left.  The dividing line between the Western Interconnect and the others falls largely within the Great Plains.  That division isn't surprising.  Historically, the three grids grew out of the connections between large utilities, and the Great Plains are a wide (and expensive) barrier for long-distance high-capacity power connections to cross.  In addition, much of the Plains region is served by rural electric cooperatives rather than larger utilities.  The important point to make here is that the two large interconnects are managed separately.  Map credit: Real Energy Services blog.

Solar and onshore wind renewable energy resources.  I also believe that there will be important differences in opinion on how to solve the supply problem, divided largely along the line between the Eastern and Western Interconnects (or down the middle of the Great Plains, or between mountain and non-mountain, wet-vs-dry, or any of several other factors that all yield much the same result).  Think of it in terms of the answers to the question, "Where will the non-fossil-fuel supply of electricity come from?"  Wind and solar (and conventional hydro) are renewable sources with large potential.  The map to the left shows where good on-shore wind and solar resources occur in the US.  Basically, from the Great Plains west.  Another important east-west difference is that many of the best resources in the West are relatively close to major population centers.  This map doesn't include conventional hydroelectricity; compared to demand, the large share of undeveloped hydro also falls in the Western Interconnect.  Map credit: Recycled Energy blog, using data from the National Renewable Energy Laboratory.

Nuclear power plants.  The other large existing non-fossil source that is commonly discussed is nuclear fission (commercial fusion has been 30 years away for the last 60 years, and according to the ITER timetable, still is).  The map to the left shows the location of all commercial power reactors in the US.  Fission power is very much an eastern thing.  There were never a large number of reactors in the Western Interconnect, and the number has been steadily declining: the Fort St. Vrain generating station in Colorado, the Trojan station in Oregon, and most recently the pair of San Onofre reactors in California have been decommissioned.  The highlighted reactor is the Columbia Generating Station located on the Hanford Nuclear Reservation in Washington, which was the subject of a recent analysis whose conclusions were quite negative.  One might think that the geographic distribution would also make the problem of storing long-lived nuclear waste largely an eastern thing, but political power has — so far — dictated that waste burial will be consigned to the West.  The overall distribution of fission plants seems unlikely to change; all of the proposed new reactors that have reached the NRC license review stages are located in the Eastern or Texas Interconnects.  Map credit: McCullough Research's Economic Analysis of the Columbia Generating Station.

Saturday, January 18, 2014

The Internet of Things

One of my friends makes an annual pilgrimage to Las Vegas for the Consumer Electronics Show, walks tens of miles of the convention floor aisles, and sends out a sometimes serious, sometimes tongue-in-cheek review of the overall theme.  This year he reports that it's "the internet of things."  The chip makers are producing the hardware to make it cheap to embed a processor, wifi, and IP stack; the consumer companies are racing to put the hardware into everything you can imagine (and some that I certainly didn't).

Interestingly, along with his e-mail announcing this year's report, I found an article from the BBC about a smart refrigerator that had been hacked and included in a spam-bot network [1].  I suspect that this is just the beginning of the problem.  As the article notes, security is probably not high on the list of features the consumer electronics firms are working on.  After all, security is hard, it's largely invisible (except when it's annoyingly visible), and who's going to buy their smart refrigerator based on how secure it is?  Like most consumer things that have become smart, it's going to be all about screen real estate and the size of the app store.

This is a subject that I thought about a lot in a previous career.  I have a patent for a software architecture that allowed smart devices (cable television set-top boxes specifically) to live behind a stout firewall and extend limited functionality to the Internet in a controlled manner.  Because even back then I was really afraid of the damage that could be done to the devices, and that the devices could do, if they were just attached transparently to the Internet.  Even with this sort of protection, having lots of relatively simple-minded devices running in my house was a scary thought.  Part of my job was finding ways to use little cracks in a firewall to implement gross security breaches.  It's amazing what you can do if you can get the right piece of software to run on something behind the firewall.  Given enough devices behind my home's firewall, especially if some of those devices are portable and get attached to other networks occasionally, somebody is going to figure out a way to get that first piece of code in place.

On a lighter note, smarts are going to be embedded in things we wear as well.  I eagerly await reports of the first celebrity wardrobe malfunction that gets blamed on "somebody hacked the clothing."

[1] The use of a picture of a Samsung smart refrigerator should not be taken to indicate that the hacked refrigerator was a Samsung product, or that Samsung refrigerators' security is either better or worse than that of any other smart appliance.  It's just a convenient picture.