Took a look at the published "inventory" of information on Google to give myself some orientation to the development timeline of what folks have been saying about availability (specifically "High Availability") and "Continuous Data Protection," and to see when people started turning ideas into products.
The HA story zipped right along from around 1985 or so (this is a survey, not a census, dear reader), with articulated specifications, managed service offerings, products, and the rest carrying through to our current world. Continuous Data Protection, or at least *that* particular search term, shows up circa 1991 as a preliminary to the disk mirroring products that appeared later that decade.
The pre-Sysplex days (and more people than IBM were working on the distributed problem) rested upon dark fiber, which to me reflected some people's longing for dial tone at 40 pfennigs a minute. SMDS and SONET offerings hadn't yet shown up, but amid some (rumored) blue sparks and flame the results were pretty convincing: having trusted data in at least two places at once, with a prayer (improvement) of recovering one's systems from the distributed data, well.... very good thing.
I'd argue, however, that the Continuous Data Protection model is the converged solution to the question of application availability; the economics of (planned) redundancy favor that kind of information distribution. Kindred concerns of custody, compliance, and reliable connectivity, while significant, do invite innovations in where the data objects get placed. Market momentum for building higher availability into applications comes from known-good libraries of "how to do this."
The DeDupe market space, as well, offers cost relief through the ability to recycle blocks and realize more efficiency in net storage capacity. The cautionary tale here comes from distributed computing, wherein some applications resemble Yorkie Terriers: very, very good at playing "this, all of this, is mine!" to the tune of "Big Boss Man," resulting in a conundrum of which manager manages the managers' managers, plus a stack of dueling control systems and, oh heck, let's put another piece of monitoring software in there, that ought to hold 'em....
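For the curious, a minimal sketch of where dedupe's cost relief comes from: store each chunk once, keyed by its content hash, and keep only references for repeats. The fixed chunk size and SHA-256 choice here are illustrative assumptions, not any particular vendor's design.

```python
import hashlib

CHUNK_SIZE = 4096  # illustrative fixed-size chunking; real products vary


def dedupe_store(stream, store):
    """Store a byte stream as content-addressed chunks.

    `store` maps hash -> chunk bytes; repeated chunks cost only a reference.
    Returns the ordered list of chunk hashes (the "recipe" to rebuild the stream).
    """
    recipe = []
    while True:
        chunk = stream.read(CHUNK_SIZE)
        if not chunk:
            break
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:      # new data: pay for it once
            store[digest] = chunk
        recipe.append(digest)        # repeated data: reference only
    return recipe


def rebuild(recipe, store):
    """Reassemble the original stream from its recipe."""
    return b"".join(store[d] for d in recipe)
```

Fixed-size chunking keeps the sketch short; real products tend to favor variable-size (content-defined) chunking so that an insert near the front of a file doesn't shift every boundary behind it.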
Which in turn brings back memories of notoriously brittle High Availability systems from the 90s, wherein the prudent management discipline was to establish that it was at last working and hang up a sign that said "Was working fine when I left it. You broke it."
Some local use cases (involving moderate storage requirements and a thin network infrastructure) indicate that Continuous is the way to go (assuming that the data "containers or objects" allow for incremental updates). That saves network, and it keeps one closer to the point in time when the fan will make a rude noise. Looked at seriously, the peer-to-peer model has some wonderful attributes of survivability and redundancy (boy, you can say that again), along with the potential for borrowing resources across networks. So in no way is it a motherhood issue as to how.
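A minimal sketch of why incremental updates save network, assuming fixed-size blocks and a hash comparison (both are illustrative assumptions, not any particular CDP product's protocol): only the blocks whose content changed since the last snapshot have to cross the wire.

```python
import hashlib

BLOCK = 4096  # illustrative block size


def block_hashes(data: bytes) -> list[str]:
    """Hash each fixed-size block of the object."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]


def incremental_update(old: bytes, new: bytes) -> dict[int, bytes]:
    """Return only the changed blocks; that is all that gets shipped."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    changed = {}
    for idx in range(len(new_h)):
        if idx >= len(old_h) or new_h[idx] != old_h[idx]:
            changed[idx] = new[idx * BLOCK:(idx + 1) * BLOCK]
    return changed
```

Applying that small dict of changed blocks at the replica, instead of re-copying the whole object, is what keeps a thin pipe ahead of the change rate.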
Barbie: "Math is hard." She caught H-E-double-hockey-sticks for that, but it's a fact.
Meanwhile, the *what* is the motherhood issue (viz., a requirement to keep things going). But the *how* (one's chosen implementation)? Hoo wee! That *how* is a poser. To me, though, there's something in the thought that "this system swaps things about all of the time and keeps running with a provable audit trail of service levels" is more comforting than "it's in Lou's truck." One can always, as it were, burn a disk. Demonstrating recovery during system operation as a normal course of business.... cool.