Saturday, September 19, 2015

Infrastructure Needs - Another Regional Reference

From time to time, I write something about failing infrastructure in the US.  Usually, I complain about how I seem to be living in a completely different country than the one some article is describing.  This week, The Atlantic ran a piece about the problem of combined storm and sanitary sewer systems.  Such systems are prone to overflows of raw sewage into nearby rivers and lakes when it rains hard.  One of the links in that article leads to the EPA map shown here, which identifies the largest of the 770 or so combined sewer systems in the US.

The American Society of Civil Engineers provides annual (very pessimistic) reports on the nation's infrastructure.  Regional cost estimates in their 2013 report reflect the distribution shown in the map: per-capita costs to fix the problems in the Mid-Atlantic, Great Lakes, and New England regions are much higher than in other parts of the country.  Those regions, plus portions of the Plains region, are the only ones where the cost of wet-weather overflows is a significant part of future water-system spending needs.

Correcting the overflows inherent in combined systems is expensive.  Washington, DC is about half-way through a 20-year, $2.6B project to eliminate most of the three billion gallons of untreated sewage released into nearby rivers annually.  The project includes boring some 13 miles of 25-foot-wide tunnels more than a hundred feet below the city.  Milwaukee has reduced its sewage releases into Lake Michigan by almost 80% by digging a longer, deeper tunnel -- at a cost of over $3B.  Somewhat over half of the cost of the Milwaukee system, which began construction in 1983, came in the form of federal grants.  The federal government has largely stopped making grants for these purposes, making loans instead.  The EPA estimates the total cost of upgrading the combined systems across the country at about $90B; the ASCE estimates are significantly higher.

In many cases, local governments are not going to be able to afford the kinds of construction needed to fix the problems.  Detroit, poster child for the Rust Belt, is an example.  This Scientific American piece summarizes the situation -- bankruptcy, massive debt, shrinking population, and long-term climate predictions that include more frequent heavy-precipitation events.  There will be pressure to turn a local problem into a state one, and state problems into federal ones.  In the future, I believe, such spending will be an increasing source of friction between different regions of the country.

Thursday, September 17, 2015

Voting Stuff

Earlier this week, The Atlantic ran a pair of pieces about voting in the United States.  This one talked about low voter turnout, and how various practices disproportionately increase that problem for the poor and minorities.  This one talked about the aging of voting machines in the US.  I was disappointed, because the word "mail" didn't appear even once in either article.  There's a revolution in voting practices happening in the American West, and it didn't even get a mention.

This graphic from the New York Times illustrates three levels of by-mail balloting.  The lightest shade indicates states where less than 5% of votes cast in 2010 were cast by mail; the middle shade, states where the figure was between 5% and 18%; the darkest shade, states where it was greater than 18%.  Those dividing points don't make clear just how widely used vote-by-mail has become in the West.  Colorado, Oregon, and Washington mail a ballot to all registered voters; in all three, more than 90% of votes cast are cast by mail.  In 2014, more than 60% of all votes cast in Arizona were cast by mail.  That same year, more than 50% of all votes cast in California were cast by mail.

Once instituted, vote-by-mail enjoys remarkable bipartisan support from voters.  In a recent poll asking "Should the state continue to use its vote-by-mail arrangement?", 80% of Democrats and 75% of Republicans answered yes.  Several California government officials seem to be actively pushing universal vote-by-mail as a way to lower the costs of conducting elections.  Both Arizona and California have direct ballot initiatives, so it seems likely that even without action by the state governments, vote-by-mail will become standard within a few years.  That would make vote-by-mail the norm in all of the "big five" western population states [1]; the smaller states seem likely to follow along.

Vote-by-mail directly addresses several of the issues raised in The Atlantic's piece on low voter turnout.  It specifically creates a sizable early-voting window.  It fixes the problem of balancing work and other life concerns against taking time off to vote.  Whether it has increased turnout in the three states that have gone the farthest is an open question [2], but it addresses problems that are claimed to contribute to low turnout.  Vote-by-mail also addresses the technology issues that were raised.  Scanners for hand-marked paper ballots are accurate and relatively cheap.  Additionally, the total number of machines of all types needed, and the related expenses, are greatly reduced.

One of the interesting phenomena I have seen with respect to vote-by-mail is geographic.  My friends who are opposed to the idea almost all grew up east of the Mississippi River, and seem to be terrified that vote-by-mail will lead to widespread voter fraud.  The western experience has been that such fraud is almost non-existent -- certainly no worse than the fraud experienced in states without heavy use of vote-by-mail.  There's probably a Ph.D. dissertation in there for some sociology or political science graduate student.




[1] As usual, I use "western" to mean the 11 contiguous states west of the Great Plains (sorry, Texas).  The "big five" by population are Arizona, California, Colorado, Oregon, and Washington, which account for about 85% of the total western population.

[2] All three are direct initiative states. There is quite a bit of evidence that having a contentious initiative item on the ballot does increase turnout.

Saturday, July 4, 2015

Arizona v. Arizona and East v. West

On the last day of the term, the US Supreme Court finally released its decision in the case of Arizona Legislature v. Arizona Independent Redistricting Commission. In this case, the citizens of the state of Arizona had stripped their legislature of the power to draw the districts for election of members of the US House of Representatives. The legislature sued, citing the Elections Clause of the US Constitution:

The Times, Places and Manner of holding Elections for Senators and Representatives, shall be prescribed in each State by the Legislature thereof; but the Congress may at any time by Law make or alter such Regulations, except as to the Places of chusing Senators.

By a 5-4 vote, the Court decided that at least for the Elections Clause, the term "Legislature" means the normal legislative process in the state, rather than simply the representative body.  If the normal process includes citizen initiatives, the Court said, then the voters can decide how districts are to be drawn, even if the process excludes the traditional legislature entirely.  The deciding vote was cast by Justice Kennedy, much to the surprise of the conventional wisdom. Not that Kennedy was the deciding vote; that much they got right. What they got wrong was which way Kennedy would jump. Why did they get it wrong?

My answer to that question is that they forgot that Kennedy is a westerner. A California boy who went to college at Stanford, practiced law in the Golden State, and sat on the Ninth Circuit Court of Appeals before moving up to the Supreme Court. Appointed by Ronald Reagan because Kennedy was a known California quantity. I strongly suspect that the last paragraph of the majority's syllabus in the opinion is Kennedy's work:
Banning lawmaking by initiative to direct a State’s method of apportioning congressional districts would not just stymie attempts to curb gerrymandering. It would also cast doubt on numerous other time, place, and manner regulations governing federal elections that States have adopted by the initiative method. As well, it could endanger election provisions in state constitutions adopted by conventions and ratified by voters at the ballot box, without involvement or approval by “the Legislature.”
Shorter form: the citizen initiative genie was let out of the bottle a hundred years ago, and trying to put the genie back in the bottle now would be extraordinarily messy.

Citizen initiatives were widely adopted in the western states during the Progressive Era at the beginning of the 20th century.  The map to the left highlights in blue the states with direct initiatives for constitutional amendments, statutes, or both (direct initiatives bypass the legislature entirely) [1].  West of the Great Plains, direct initiatives are the rule, not the exception.  The statement quoted above is almost certainly true of all those western states: all of them have provided regulation of "the times, places and manner of holding elections" by statute or amendment that bypassed the legislature.

While the initiative processes were adopted in the West during the Progressive Era, initiatives are used today by both liberals and conservatives.  During the time I have lived in Colorado, conservatives were successful in adding the Taxpayer's Bill of Rights (TABOR) that removed state and local governments' ability to raise tax rates or introduce new taxes without submitting the measures to the voters.  Liberals were successful in having amendments adopted that increased spending on K-12 education and created a renewable energy mandate.  It's a western thing, and it's not surprising that Justice Kennedy was unwilling to try to stuff the genie back into the bottle.


[1] Florida is a latecomer to the party, having adopted direct initiatives for amendments when it rewrote its constitution in 1968.  Since then, the legislature and governors have made it as difficult as possible for Floridians to actually exercise that power.

Wednesday, June 3, 2015

Ahead of My Time?

A while back I was complaining about the evils of CSS and its misuse to render the Web ugly, unreadable, or both.  I even threatened to resort to writing code to clean pages up as they were downloaded.  Over the course of April and May I spent quite a few spare minutes putting together a preliminary piece of JavaScript code to carry out that threat.  Tip of the hat to the folks who do GreaseMonkey, the Firefox extension that makes it straightforward to have a piece of code executed whenever a page finishes loading; it's the product I use to launch my code.  I have to admit that I have been surprised by how much content can be given a consistent appearance by working in a bottom-up fashion, independent of the Web site -- I thought the code would have to be considerably more complicated.
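The skeleton of that kind of script is surprisingly short.  Here's a minimal sketch -- not my actual code, and the font family and minimum size are placeholders for whatever the reader prefers:

    // ==UserScript==
    // @name        Normalize fonts (sketch)
    // @include     *
    // @grant       none
    // ==/UserScript==

    // Walk every element on the page and force the font choices.
    // FONT and MIN_PX are placeholder preferences, not defaults.
    (function () {
        var FONT = "Georgia, serif";
        var MIN_PX = 14;
        var all = document.getElementsByTagName("*");
        for (var i = 0; i < all.length; i++) {
            var el = all[i];
            el.style.setProperty("font-family", FONT, "important");
            var size = parseFloat(window.getComputedStyle(el).fontSize);
            if (size && size < MIN_PX) {
                el.style.setProperty("font-size", MIN_PX + "px", "important");
            }
        }
    })();

The key is the third argument to setProperty: an inline style flagged "important" outranks any stylesheet rule regardless of how specific its selector is, which is what lets a bottom-up pass win the argument with the page's CSS.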

I installed the newest version of Firefox for desktop machines today, and noticed that "reader mode" is now available by default.  If the software thinks that it can extract a simplified version of the page's principal content, an extra icon appears in the address bar.  Click on the icon and a new pane opens, with the text that the software thinks is the primary page content rendered in a simple layout.  Unsurprisingly, it doesn't get everything right; pictures might be excluded, and graphics that combine images and text are likely to be garbled.  On some pages -- like the individual post pages Blogger generates for this site -- the software doesn't recognize that there's a main piece of content to extract, so reader mode isn't offered.  Still, I am clearly not the only one who thinks the Web is being rendered unreadable by the designers.  I'm not interested in stripping away as much of the clutter as reader mode does; I simply want text presented consistently in terms of fonts and sizes.

Content extraction and formatting tools turn out not to be new (my bad, you can't keep up to date about everything).  Readability, for example, is available for a variety of platforms, and its makers are attempting to build commercial products and services around the idea of simplifying content.  But having the software embedded in a popular desktop browser and enabled by default is probably a bigger thing for user acceptance.  Mozilla offers guidelines for structuring page content that make it easier for the reader-mode software to recognize and extract content.  Will page designers be encouraged by the folks paying the bills to conform to those guidelines?  Will the advertising companies figure out ways to present ads so that they are still included in the simplified material?  How long before there's a user preference setting that invokes reader mode automatically if the page content is recognized as conforming to the guidelines?  And of course, Mozilla is a much bigger target than an individual user; it might be tempting for an ad-selling firm to go to court on the legal theory that tools that rewrite possibly copyrighted material should be illegal, or that damages should be paid.

Speaking broadly, this falls into a type of war that I've long claimed the content providers can't win.  Content providers have to conform to standards so that their content can be rendered.  In this case, they have to stick with HTML and JavaScript's DOM.  Content consumers have a steadily increasing amount of processing power at their disposal for tearing the HTML apart and extracting a subset of the content.  Browsers give consumers the ability to write and/or install software on their own.  Nor is the necessary software all that complex, as I've demonstrated.  So the content providers can't win the war on legal grounds, because they can't put enough people in jail to matter.  The only way to "win" is to make content that is compelling and pleasant to use.
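To make the "not that complex" claim concrete, here is a toy version of the extraction step.  It is nothing like the real Readability code -- just the core density heuristic: credit each paragraph's text to its parent element, and keep whichever parent accumulates the most.

    // Toy content extractor.  Reader mode and Readability apply many
    // more heuristics (link density, class names, and so on); this is
    // only the central idea.
    function findMainContent(doc) {
        var scores = new Map();      // element -> accumulated text length
        var best = null;
        var bestScore = 0;
        var paragraphs = doc.getElementsByTagName("p");
        for (var i = 0; i < paragraphs.length; i++) {
            var text = paragraphs[i].textContent.trim();
            if (text.length < 25) continue;    // skip stubs and captions
            var parent = paragraphs[i].parentNode;
            var score = (scores.get(parent) || 0) + text.length;
            scores.set(parent, score);
            if (score > bestScore) {
                bestScore = score;
                best = parent;
            }
        }
        return best;    // null when nothing looks like body text
    }

Twenty-odd lines running entirely on the reader's machine, against whatever the page designer had in mind -- which is rather the point.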

For at least one personal situation, the whole thing is an amusing development.  I was invited to be part of a group discussing site appearance and functionality for a multi-author blog (as a reader who regularly comments, not as a member of the editorial staff that makes the actual decisions).  I've been thinking that perhaps I should resign that "position", since I now run my own rewriting software by default, and it does things that affect the size and proportions of various widgets on the blog's pages.  More interesting for the long term, though, is the question of how much effort should go into appearance issues, since it seems likely that contemporary browsers will be making more and more of the decisions about what content to show and how to format it.

Saturday, April 25, 2015

Challenging the Fifth Branch


In the early 1900s, at the height of the Progressive movement, most of the states in the western half of the US adopted citizen initiative processes to put statutes, constitutional amendments, or both on the ballot. While some eastern states also adopted such provisions, it is not surprising that the movement was widespread in the West. Citizens in western states believed broadly that they were being exploited by outside interests – read large Eastern corporate interests in particular – and that those interests had purchased control of the state legislatures.

I'm a fan of the citizen initiative. Some of it might be that I simply grew up with initiatives as part of the system. Still, state and local government is a machine, elected officials have to fit into that machine, and there are things that replacing individual cogs can't accomplish. Consider the issue of Congressional redistricting, which arises every ten years following the decennial census. The Elections Clause of the US Constitution (Article I, Section 4) assigns responsibility for redistricting to the legislature of each state. It would be unusual indeed if a state legislature willingly gave up control of that process [1]. In 2000, by ballot initiative, the people of Arizona approved a change in the state constitution that took Congressional redistricting completely out of the hands of the legislature.

In 2012, after the commission approved a new map, the legislature sued the commission in federal court, arguing that the ballot initiative violated the Elections Clause. Congressional redistricting cases are decided by a special three-judge federal court; that court split 2-1 and rejected the challenge, upholding the validity of the citizen initiative. The main precedents cited were Ohio ex rel. Davis v. Hildebrant and Smiley v. Holm, where the courts had interpreted the Article I, Section 4 "legislature" to mean the legislative process as determined by the individual states. The Supreme Court heard oral arguments in the Arizona case in March of this year. The expert consensus is that the justices seemed inclined to find that "legislature" should be interpreted more narrowly, to mean only the elected legislative body (e.g., this piece at SCOTUSblog).

The Arizona case is not the only initiative-related case potentially before the Supreme Court this term. In 1992, Colorado passed the Taxpayer's Bill of Rights (TABOR). Among other things, TABOR takes away the state legislature's power to raise existing tax rates or introduce new taxes on its own; the legislature can only refer such measures to the voters. In 2011 some members of the Colorado state legislature sued in federal court, arguing that taking away the legislature's power to control taxation violated the constitutional requirement that states have a republican form of government. The district court found that the situation was sufficiently different from historical precedents to proceed with a trial [2]. That opinion was upheld by the appeals court, and has been appealed to the Supreme Court. To date, almost five months after materials were distributed to the justices, no decision has been made as to whether to grant certiorari.

If I wear my natural western-populist hat then I want the Court to rule in favor of the people in both cases [3]. I expect that me to be disappointed. 

If I wear my paranoid "of course the Supreme Court is partisan" hat then I would expect them to simply rule against the people in Arizona and for the people in Colorado, because the Arizona suit was brought largely by Republican legislators and the Colorado suit largely by Democrats. I expect that me to be disappointed as well. 

The reality will be, I expect, rather messy. When the Founders wrote the Constitution, they pretty clearly saw no role for the voters once they had chosen their representatives. It seems unlikely, though, that after a hundred years the Court would decide that initiatives are simply not allowed. That leaves them in the position of trying to draw lines with regard to how much this "fifth branch" of government can do [4]. And line drawing always seems to be messy. 



[1] In 1983, the Washington State legislature referred to the people an amendment that created a largely independent redistricting commission. The proposal didn't remove the legislature from the process entirely, as it can still overrule the commission with a two-thirds majority vote in each chamber.

[2] The distinction drawn by the court in the Colorado case is that it is not just more difficult for the legislature to raise taxes – as in California's two-thirds supermajority requirement – but actually impossible for the Colorado legislature to raise taxes. The court says that's a new situation that should be argued at trial. 

[3] Full disclosure: when I was a budget analyst for the Colorado legislature, TABOR made my job a lot more difficult. However, when the state faced a real budget crisis in the early naughts, the legislature referred a ballot issue granting limited relaxation of the TABOR limits to the people, and the people approved it. 

[4] I would say "fourth branch", except that's been pretty much taken to mean the regulatory bureaucracy. 



Image credit: Washington State Legislature oral history site.

Sunday, March 29, 2015

With CSS, More Seems to be Less

In the beginning -- at least for my purposes here -- there was HTML.  Tags gave straightforward markup capabilities, but large amounts of the actual presentation were left to the readers' discretion.  Some people like serif fonts; some people don't.  Some people need bigger text to read comfortably.  Some people don't like 36-point headlines.  In an online world, those are choices that ought to be left to the reader.

Then the Web was increasingly taken over by commercial concerns, who hired graphics design professionals to lay out more sophisticated pages.  Professionals who took much greater advantage of the newer HTML capabilities to force the pages to look the way the professionals thought they should.  Color!  Giant text!  Tiny text!  Five fonts for different purposes on every page!  Not to put too fine a point on it, but some of the professionals seemed to have taken degrees in Ugly and Hard to Read.

Some Web browsers gave their users tools to fight back.  One of the reasons I settled on Firefox early on was that it let me override part of the ugliness.  I could specify fonts that were usually honored, and a minimum font size (a maximum would also be nice, if anyone at Mozilla is listening).  From time to time I would look over someone's shoulder to see how they experienced the web.  I gloated internally about how much more consistent in appearance "my" web was compared to theirs.  How much easier to read.  How much more visually attractive, because of my superior taste in fonts [1].

Then came CSS, and the widespread adoption of various CSS libraries, and the use of JavaScript to set CSS properties, and the widespread adoption of JavaScript libraries that did all sorts of things besides the feature or two the page developer needed.  Some browsers helped the readers out again, by allowing the user to have their own style sheet that was applied to all pages.  But CSS has, among other things, specificity rules that make it possible for the page developer to identify a particular element at such a fine grain that none of the browsers will honor the user's choices.  In fact, the specificity rules can get so complex that developers need specialized tools to debug their pages, trying to figure out why some bit of text here or there isn't in the font/size/whatever that they expected.  The image accompanying this post is a portion of a screenshot of Firebug, one such tool, showing part of the CSS cascade that resulted in a piece of text on a downloaded page ignoring my font choices.
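You don't need Firebug just to see the cascade's verdict, though; the browser will tell you directly.  A quick console sketch (the list of preferred families is whatever you've configured -- Georgia and Verdana here are stand-ins):

    // List every element whose computed font family isn't one you chose.
    // getComputedStyle reports what the cascade actually decided, after
    // all of the specificity battles are over.
    function auditFonts(preferred) {
        var offenders = [];
        var all = document.getElementsByTagName("*");
        for (var i = 0; i < all.length; i++) {
            var family = window.getComputedStyle(all[i]).fontFamily;
            var first = family.split(",")[0].replace(/["']/g, "").trim();
            if (preferred.indexOf(first) === -1) {
                offenders.push(all[i]);
            }
        }
        return offenders;
    }

    // Example: just how badly is this page ignoring my preferences?
    console.log(auditFonts(["Georgia", "Verdana"]).length +
                " elements override my fonts");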

I suppose I shouldn't feel strongly enough about this to be sitting here considering the question, "How hard would it be to write a Firefox add-on that (reasonably intelligently) overrides the font choices for all of the content on every page?"  But I am.  Guess I've finally turned into a full-blown curmudgeon.







[1] Not to mention faster, because Firefox had Adblock, and it's amazing how much faster the front page of a newspaper site loads if your browser simply skips loading 30 advertising elements from DoubleClick's under-engineered servers.

Wednesday, March 25, 2015

Will Distributed Generation Kill the Big Utilities?

A friend of mine wrote an interesting piece for Utility Dive this week.  The basic premise is that the rapid growth of edge generating capacity – think rooftop solar – will outpace the country's transmission and distribution grid's ability to cope with the added complexity.  I'm an advocate of renewable energy supplies, so I think it's a question worth considering.  I think of the problem in terms of three questions: (1) Which grid?  (2) Can such a collapse happen?  (3) Will such a collapse be allowed to happen?

Which grid?  I continue to be disappointed by pieces written as if the US had a single network for transmitting electric power.  There are three: the Eastern, Western, and Texas Interconnects are synchronized AC networks with minimal AC-DC-AC connections between them.  The history that led to three grids is straightforward.  The Great Plains are a relatively empty buffer, nearly 500 miles wide, that splits the country.  The yellow line on the figure to the left (you should be able to use your browser's "view image" option to see it full size) runs roughly down the middle of the Great Plains, and comes relatively close to the dividing line between the Eastern and Western Interconnects.  Texas was able to opt to be its own grid and did so, hoping to avoid federal interference.  Note that El Paso, the only major metro area in Texas that is west of the Great Plains, is part of the Western, rather than Texas, grid.  Each of the three grids has a different degree of complexity, a different renewable resource portfolio, and a somewhat different political environment.

Can such a collapse happen?  In order: least likely for the Texas grid; somewhat more so, but not a significant risk, for the Western grid; quite possible for the Eastern grid.  The fundamental difference is a simple matter of complexity.  The Texas grid serves about 27 million people, and the large majority of the demand is inside the triangle formed by Houston, Dallas-Fort Worth, and San Antonio.  The Western grid serves about 70 million people, and the large majority of demand happens in a half-dozen or so large population centers: Seattle-Portland, Northern California around San Francisco Bay, Southern California, Las Vegas, Phoenix-Tucson, Front Range Colorado, and Salt Lake City.  A small number of single-state situations, and limited interconnect problems.  The Eastern grid, covering 205 million people across 36 states, is a whole 'nother matter.  I've written before about the relative complexity of the grids as illustrated by low-carbon studies done by the various national labs.  There are plenty of nuts-and-bolts studies for the Western grid that all come up with similar plans.  Such studies for the Eastern grid simply don't seem to exist, because the problem is so much harder.

Will such a collapse be allowed to happen?  In Texas, not just no, but hell no.  A single state legislature, a single PUC; they can regulate the snot out of distributed generation in ways that keep it useful without threatening the grid.  In the West, probably not.  Most of the individual Western states have statutory requirements for a large renewable share of total supply and are already thinking about intrastate distributed generation.  There are only a couple of interstate grid topologies that make sense.  The Western Governors Association spends considerable effort looking at the regional transmission grid as a whole.  When it comes to the Eastern grid, though, I throw up my hands.  When people write about the US having an aging, third-world, rickety electric grid, they're largely writing about the East.  Too many jurisdictions, too many places to connect... yeah, it could be allowed to happen.

My bottom line is that I acknowledge the risk my friend writes about; I just don't acknowledge that it's a national problem.

Sunday, March 22, 2015

My Name is Michael, and I'm a Pack Rat

Sometimes the first step to solving a problem is to admit that you have one...

Out in my garage is a galvanized steel bucket. Here's a picture of my daughter with the bucket. She's carrying about three small rocks in it, destined to be mulch for weed control, because she needed to be helping. My daughter now has a daughter of her own just about that age, who looks equally serious when taking on an important task like moving rocks. (I know because the granddaughter and I took a walk around the neighborhood the other day, and stopped to spend a few minutes returning the neighbor's rocks that someone had kicked out onto the sidewalk to their proper place in the landscaping. It was clearly Important Work.) The bucket is stained with this and that these days, and there are other better buckets in the garage, but I'm not about to get rid of it.

I have a piece of software called "scraps" that I use every day. I wrote the first version of it something more than 30 years ago, with the intent of using it instead of writing things on scraps of paper that I would promptly lose. Do you want to know the name, address, and phone number of the dentist who pulled my wisdom teeth the year after I moved to New Jersey and went to work for Bell Labs? It's in there, down at the bottom of a scrap titled "Dentists" that has contact information for every dentist I've seen since then. For no more than it does, an unconscionable amount of time has been spent porting that hunk of antique C code to every operating system I've ever used, from assorted versions of UNIX to DOS to Linux to Mac OSX...

There are a large number of bookcases in our house. The overflow from those is stashed away in bankers' boxes down in the basement. Some of the books in them are textbooks from when I was an undergraduate, and not just books from my major. Some of the books are trash fiction that filled long hours on business trips in the days before portable computing, and that I'll never read again. (Sorry, Eric Van Lustbader.) I've been trying to get rid of some of them, but seem to be incapable of throwing any away until I have a suitable EPUB version stored and properly backed up. (It's astounding the range of out-of-print books that people have scanned and put up on the internet.)

My name is Michael, and – among other vices – I'm a pack rat...

Friday, January 9, 2015

New Toy

One of the things on my list of long-term projects is a DIY book scanner. There are a ridiculous number of paper books in my house, and I'm getting old enough to think about downsizing the living quarters at some point. Lots of people have done impressive things in the field, like this one. Being who I am, though, I want a system where the hardware is simpler and smaller, even if that means that the software has to be somewhat more competent. The rough rule-of-thumb of "put everything possible into software" served me well for 25 years in a high-tech field, and I'm not about to give it up.

The usual camera people put into a DIY scanner is either an old smartphone or an old snapshot-grade digital camera.  Those are cool, particularly if one or both are readily at hand.  My own experience with them, at least with a book scanner in mind, is that it's a hassle to get them to take the picture when I want it, and to get the picture out of the camera and into a computer where it can be worked on.  On a different tack, I've also been wanting to spend some time playing with a Raspberry Pi computer.  Hoping to kill two birds with one stone, I purchased a Pi (model B+), a micro-SD card that would let me set up Raspbian easily, and a five-megapixel camera module.

After going through the old electronics box, I plugged a monitor into the HDMI port, a keyboard and mouse into two of the USB ports, and an Ethernet cable to tie it to the switch in my office, and powered it up.  It looked like... Linux.  I felt like the little girl from Jurassic Park: "It's a UNIX system.  I know this!"  The second day I put all of the old electronics back in the box and just ran "ssh -Y" with the appropriate options from my Mac. I haven't gone very far with the camera yet, besides configuring the Pi to use it and verifying that it works. On the other hand, I have spent some time entertaining myself by porting a variety of small software that I've written over the years to the Pi.  Just to see how things went. And "porting" implies that the process was harder than it actually was.

After fooling around for a bit, I copied part of my collection of cartogram software over to it. One of the core pieces is written in C – after downloading a couple of needed libraries (using the provided Debian aptitude and apt-get commands), those just compiled. My front- and back-end code that sets things up for those core number-crunching programs is written in Perl – had to download the GD module, but that also just worked. Copied over some data and map files and ran things and – cartograms! Things ran pretty slowly, but the chip doesn't have the kind of heavy-duty double-precision floating point hardware support that "real" processors provide.

I'm looking forward to seeing how much of the job can be done on the Pi.