Tuesday, August 12, 2008

Disaster. Recovery. Invention.

In most of this past year's pieces here in the land of SANNAS (a fine team of wonks, and also a fantastic snack with a good lager), the theme has often centered upon the increasing demands of distributed, ephemeral data and the challenge of managing the process of custody and validation.
This article is being typed onto a 1GB stick; about 2MB of that stick contains an encryption program. Acceptable overhead, IMHO, for the promise of securing the Next Great Novel and Sudoku downloads, as well as the launch codes for the Acme® 'Lil Armageddon family of products, my sonnets to Paris Hilton, and other juicy bits.
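
For the bean counters, a back-of-the-envelope check in Python (a minimal sketch; the byte counts are just this column's numbers, nothing more) of what that encryption program actually costs:

    # What does 2MB of crypto cost on a 1GB stick?
    stick_bytes = 1 * 1024**3    # the 1GB stick
    crypto_bytes = 2 * 1024**2   # the 2MB encryption program
    overhead = crypto_bytes / stick_bytes
    print(f"overhead: {overhead:.2%}")   # -> overhead: 0.20%

Two-tenths of a percent to keep the sonnets safe. A bargain.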

I do not, as they say, keep a tidy desk. My brain stays healthy by understanding my own LIFO filing system: the ability to read the strata and the high-probability parts of the piles wherein nestles the airline magazine, or the clipping of a local paper's crank, that I wanted to riff upon at leisure. This is an elegant strategy for promoting mental health, albeit with some risk of structural collapse of the entropy-friendly piles of arcane lore.

Somewhere, someone must be working on a desktop computing metaphor that allows for significant standing loads. Bearing walls. Like that. At the very least, maybe something like "The Clapper" to find that 1GB slice of memory...

So, here's the thing: data all over the place, connected and unconnected, with the not-so-subtle growth of metadata to describe the context and provenance of information, along with the burden of incremental data to manage the data, and thereby added processing cycles for data management itself. Extremely bright designers have delivered high-value tool infrastructures, and I, for one, am not worthy of holding their pocket protectors when it comes to difficult code, algorithm implementation, and generally customer-focused implementations.
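
To make that burden concrete, here is a minimal Python sketch (the record shape and field names like origin and custodian are illustrative, not anyone's standard) of what every blob starts dragging around:

    from dataclasses import dataclass, field
    import hashlib
    import time

    @dataclass
    class ManagedBlob:
        """The payload, plus the metadata that manages the payload."""
        payload: bytes
        origin: str        # provenance: where it came from
        custodian: str     # custody: who answers for it
        created: float = field(default_factory=time.time)
        checksum: str = ""

        def __post_init__(self):
            # Validation costs a hashing pass on every write: cycles
            # spent managing the data rather than using it.
            self.checksum = hashlib.sha256(self.payload).hexdigest()

    blob = ManagedBlob(b"the Next Great Novel", origin="1GB stick", custodian="me")
    print(len(blob.payload), "payload bytes, plus all the bookkeeping")

Every field is sensible on its own; in aggregate, the bookkeeping grows faster than the books.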

But in the realm of Disaster Recovery mechanisms and services, preemptive trumps reactive. A few disaster scenarios, treated as use cases, deliver the example.

Pandemic flu, weather, earthquake, toxic spill, extended outages of power and water, and other broken infrastructure should be the object of sand-table exercises, at a minimum, to game through what might plausibly work in each scenario.
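
At the sand table, the game is a simple cross-check: which recovery mechanism still stands when a given scenario knocks out a given dependency? A minimal Python sketch (the scenario and mechanism names here are illustrative, not a standard taxonomy):

    # Which recovery mechanisms survive which disaster?
    scenarios = {
        "pandemic flu":    {"staff"},
        "weather":         {"power", "transport"},
        "earthquake":      {"power", "water", "transport", "site"},
        "toxic spill":     {"site", "transport"},
        "extended outage": {"power", "water"},
    }
    mechanisms = {
        "courier tape to offsite vault": {"staff", "transport", "site"},
        "remote replication":            {"power"},
        "hosted recovery suite":         {"staff"},
    }
    for scenario, broken in scenarios.items():
        ok = [m for m, needs in mechanisms.items() if not (needs & broken)]
        print(f"{scenario}: {', '.join(ok) or 'nothing -- back to the table'}")

If a scenario's row comes up empty, the exercise has paid for itself before lunch.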

Rather makes removable media a bit of a problem during times of "saw fan, engaged same," not to mention getting to the unnetworked, unautomated, and unavailable mélange of annotated manuals and Post-it notes which, don't you know, are the keys to the kingdom, whether one acknowledges that or not.

The adhocracy of portable data (iPhone, et al.) seems to drive the industry toward some sort of nexus, wherein the overall practice and form of storage management and optimization will trend toward something that looks very much like Open Source toolkits and standards. For some, this will be the defining disaster; however, other mature technology (e.g., MVS et seq.) informs us that the core functionality and benefits of a "mature" technology do not by any means always disappear, but rather become the subject of new core businesses and invention. Ye Olde Virtual Machine has shown tenacity in meeting the market need, albeit in quite new forms.

So, vis-à-vis Disaster Recovery, the pressure is on for shifts that make for highly interoperable and fungible networked storage resources (think Googleplex) with arbitrarily attached processing and storage tools. A lot of the "math" to predict the future comes from the good works of people like Gene Amdahl and Jim Gray (of Microsoft fame), in that a feasibility test can be accomplished with relative ease; with new cost factors and performance factors in hand, the maxim "in the long run, all costs are variable" will again prove out through new invention. Of particular interest will be the results of open standards initiatives (akin to posited Web 3.0 mechanisms), where ontology will bloom like kudzu in Dixie.
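
Amdahl's contribution makes for the quickest of those feasibility tests. A minimal Python sketch of his law (the 95% parallel fraction is illustrative, not a measurement):

    # Amdahl's law: serial work caps the speedup, no matter how many
    # arbitrarily attached nodes the networked pool offers.
    def amdahl_speedup(parallel_fraction: float, nodes: int) -> float:
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / nodes)

    for n in (2, 8, 64, 1024):
        print(f"{n:>4} nodes: {amdahl_speedup(0.95, n):5.2f}x")

At 95% parallel, a thousand nodes buy less than a 20x speedup; the stubborn serial 5% sets the ceiling. That is the sort of "math" that tells you, cheaply, whether the new cost and performance factors pencil out.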

And that, as the young lady informs us, is "hot".