If one looks at the real and virtual "stuff" we produce and consume, one of the big state changes comes from new economies of scale mediated by networked information.
Adoption rates for all sorts of things increasingly look like vertical lines (e.g., faster broadband versus slower), and better product architectures afford no small degree of modularity and component-based systems.
What used to be called information float has become, to a great extent, less of a factor because federations of "smart friends and strangers" can rapidly vet vapor and mojo.
This new component-based and interoperable lot of hardware, software, bio, and info means that the friction of creation and the status quo form less of a barrier than before. This is the kind of environment that welcomes punctuated equilibria (evolution by jerks, as they say!) and disintermediation.
Removing (or mitigating) other frictional issues such as health care in turn improves the formation of new, and I assert, generally smaller organizations. From personal experience, I can say that the 1990s recession spawned a good number of high quality companies and think tanks.
With the provisioning of fractional services (payroll, HR, storage, webservers...) that carry almost atomistic (and often "free") marginal costs compared to the big box running at 1% capacity, the production function begins to take rational, versus whole-number or integer, steps. Again, with health care, the exploitation of the quaint law of large numbers in a common pool... well, you do the math.
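A toy sketch of the integer-versus-rational steps point, with invented numbers (not anyone's actual pricing):

```python
import math

# Toy numbers, purely illustrative: capacity bought only in whole "big boxes"
# versus metered, fractional provisioning.
def integer_step_cost(demand, box_capacity=100.0, box_cost=10_000.0):
    """Cost when capacity comes only in whole boxes (round up to the next box)."""
    return math.ceil(demand / box_capacity) * box_cost

def fractional_cost(demand, unit_cost=100.0):
    """Cost when capacity is metered per unit actually consumed."""
    return demand * unit_cost

# A small shop using 1% of a box still pays for the whole box under integer steps:
print(integer_step_cost(1.0))  # 10000.0
print(fractional_cost(1.0))    # 100.0
```

The step function versus the smooth line is the whole argument in two functions.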
Trust in God but lock your car... I do believe that opportunities will be exploited if only from the fatigue of status quo and the inertia of the known.
Friday, March 06, 2009
Tuesday, March 03, 2009
Conservation Economy (via Ecotrust)

Portland, Oregon's Ecotrust.org has made extensive use of a pattern language (see Christopher Alexander's A Pattern Language) as a design framework for sustainable systems.
"A Conservation Economy
When the health of ecosystems and communities is not integrated into economic activities, all three suffer. In turn, economic dependence on destructive activities creates apparent conflicts between work, nature, and community. How can we create an economy that effectively meets human needs while regenerating natural systems? An economy which grows organically — and fills new niches — by working with nature and enriching human capacities?
In A Conservation Economy, Economic arrangements of all kinds are gradually redesigned so that they restore, rather than deplete, Natural Capital and Social Capital. This will create extraordinary opportunities for those who foresee and drive these changes. The Fundamental Needs of people — and the Ecosystem Services which sustain them — are the starting point for a different kind of economic prosperity that can endure generation after generation."
The details can be found at http://www.conservationeconomy.net/conservation_economy.html
Monday, October 13, 2008
Destination Cloud Computing
Just a lead-in: cloud computing will be vital for "small" businesses because "a man's got to know his limitations."
Now we might Twitter, but for all that, companies growing from startup to second stage, usually around 5+ employees, all at once notice a rumble in the fuselage.
We had service bureaus, used ADP for payroll, and edged into Quicken to run the books. Cloud computing: the next big thing (and it has been for years).
Labels: Information Technology, SanNasTimes Article, Storage
Tuesday, August 12, 2008
Disaster. Recovery. Invention.
In most of this last year's pieces here at the land of SANNAS (a fine team of wonks and also a fantastic snack with a good lager) the theme often centered upon increasing demands of distributed ephemeral data and the challenge of managing the process of custody and validation.
This article's being typed onto a 1GB stick; about 2MB of that stick contains an encryption program; acceptable overhead IMHO for the promise of securing the Next Great Novel and Sudoku downloads, as well as the launch codes for the Acme ® 'Lil Armageddon family of products, my sonnets to Paris Hilton, and other juicy bits.
I do not, as they say, keep a tidy desk. My brain stays healthy by understanding my own LIFO filing system, and by knowing the strata and the high-probability parts of the piles wherein nestles the airline magazine or the clipping of a local paper's crank I wanted to riff upon at leisure. This represents an elegant strategy promoting mental health, albeit with a risk of structural collapse of the entropy-friendly piles of arcane lore.
Somewhere, someone must be working on a desktop computing metaphor that allows for significant standing loads. Bearing walls. Like that. At the very least, maybe something like "The Clapper" to find that 1Gb slice of memory...
So, here's the thing: data all over the place, connected and unconnected, with the not-so-subtle growth of metadata to describe the context and provenance of information, along with the burden of incremental data to manage the data, and thereby added processing cycles for data management itself. Extremely bright designers have delivered high-value tool infrastructures, and I, for one, am not worthy of holding their pocket protectors when it comes to difficult code and algorithm implementation, and generally customer-focused implementations.
But in the realm of Disaster Recovery mechanisms and services, preemptive trumps reactive. A few disaster-mode scenarios, treated as use cases, make the point.
Pandemic flu, weather, earthquake, toxic spill, and extended outages of power, water, and other infrastructure should be the object of sandtable exercises, at a minimum, to game through what might plausibly work in each scenario.
Rather makes removable media a bit of a problem during times of "saw fan, engaged same," not to mention getting to the unnetworked, unautomated, and unavailable mélange of annotated manuals and Post-It notes which, don't you know, are the keys to the kingdom, whether one acknowledges that or not.
The adhocracy of portable data (iPhone, et al.) seems to drive the industry toward some sort of nexus, wherein the overall practice and form of storage management and optimization will trend toward something that looks very much like Open Source toolkits and standards. For some this will be the defining disaster; however, other mature technology (e.g., MVS et seq.) informs us that the core functionality and benefits of "mature" technology do not by any means always disappear, but become the subject of new core businesses and invention. Ye Olde Virtual Machine has shown tenacity in meeting the market need, albeit in quite new forms.
So, vis-à-vis Disaster Recovery, the pressure is on for shifts that make for highly interoperable and fungible networked storage resources (think Googleplex) with arbitrarily attached processing and storage tools. A lot of the "math" to predict the future comes from the good works of people like Gene Amdahl and Jim Gray (of Microsoft fame), in that a feasibility test can be accomplished with relative ease; with new cost factors and performance factors in hand, the maxim of "in the long run, all costs are variable" will again prove out with new invention. Of particular interest will be the results of open standards initiatives (akin to posited Web 3.0 mechanisms) where ontology will bloom like kudzu in Dixie.
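That Amdahl-flavored feasibility test can be gamed in a few lines; the 90% parallel fraction below is an invented example, not a measured workload:

```python
def amdahl_speedup(parallel_fraction, n):
    """Amdahl's law: overall speedup when a fraction p of the work spreads
    across n units and the remaining (1 - p) stays stubbornly serial."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n)

# If 90% of a recovery workload fans out across cheap networked nodes,
# piling on nodes saturates quickly -- the serial 10% caps the win:
print(round(amdahl_speedup(0.9, 10), 2))    # 5.26
print(round(amdahl_speedup(0.9, 1000), 2))  # 9.91
```

Cheap to run, and it tells you quickly whether the new cost and performance factors pass the giggle test.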
And that, as the young lady informs us, is "hot".
Labels: Design, Information Technology, SanNasTimes Article, Storage
Thursday, July 17, 2008
Disk Payload Management
Transfer of data has an upper bound at the speed of light and a lower bound set by the budget, excluding strange action at a distance and physics not yet known. It's all fun and games until something divides by zero.
In a delightful teaser article, Neil J. Gunther's "The Guerrilla Manual" delivers a bolus of refreshing views on capacity planning and performance management with a cleansing amount of terse common sense.
In particular, he notes, "You never remove the bottleneck, you just shuffle the deck."
Network Effects and Blinkenlights
Back in the mid 1980s, at least one large financial institution allocated IT budgets using a simple ratio of numbers of customer accounts by type, with appropriate finagle factors. At least it was a model that, assuming a lot of linearity, had simplicity and apparent transparency going for it.
Of course, these were the times of data centers with big boxes, and the occasional minicomputer. The unit costs of processing, networks, and storage were significant vis a vis cycles or bits or bytes per dollar and cycles per watt.
Of course, also, the use cases for the technology moved rather slowly, with occasional punctuation with growing online inquiry from, say, customer service agents or the addition of Automatic Teller Machines to the CICS olio of the big iron code.
More gadgets and new approaches to programming by the end users (unclean!!!) resulted in rather surprising effects upon infrastructure through rampant flaming queries (what did he say?) and even complete suites of large-scale computing systems dedicated to new types of models. In the case of financial services, one big dude jammed with APL for determination of fixed-income dynamics. APL, for those who don't recall, was developed for passive-aggressive savants who didn't want management looking into what they'd written. But, letting the punishment fit the crime, APL rocked for matrix operations and was a darling of the early generation of quants, including those laugh-a-minute actuaries.
Somewhere, someplace, someone is hacking FPGAs to stick into a Beowulf cluster of Xboxes. I gotta feeling.
So where were we... Oh, so the point is that the common factor around these early instances of "end user" computing involved moderate and increasing network effects. Transactional data could be used as feeds to these user managed systems, and network effects with emphasis upon storage and I/O tuning became significant as a means of moving the bottleneck back to the cpu. Now pick another card.
The disk-to-disk discussion comprises several use cases, ranging from performance optimization (e.g., put the top 10 movies on the edge of the network) to business continuance to the meta issue of secure transfer and "lockup" of the data. Problem is, how does one deal with this mess, which embraces Service Oriented Architectures and Halo dynamism?
Intelligent Payloads?
This problem of placing data and copies of data in "good enough" sites on the network seems encumbered by how these data are tagged in such a way as to inform the "system" itself on the history of the atomic piece of interest as it transits other systems and networks. Perhaps something that appends usage information to the information itself, rather like appending travel stickers to an old steamer trunk tracing its owner's tours of Alice Springs, Kenosha, and Banff.
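A minimal sketch of the steamer-trunk idea, with hypothetical system names and a deliberately simple record shape (nothing here is a real product's metadata format):

```python
import time

def add_sticker(payload, system, action):
    """Append a small provenance record (a "travel sticker") to the payload's
    own metadata as it transits a system. The record shape is hypothetical."""
    payload.setdefault("stickers", []).append(
        {"system": system, "action": action, "ts": time.time()}
    )
    return payload

parcel = {"data": "quarterly-feed.csv", "stickers": []}  # invented payload
add_sticker(parcel, "ingest-node-7", "validated")
add_sticker(parcel, "edge-cache-3", "replicated")
print([s["system"] for s in parcel["stickers"]])  # ['ingest-node-7', 'edge-cache-3']
```

The point is that the history rides with the trunk, not in some out-of-band ledger.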
And no, I'm not advocating still another inband system monitor... more MIBs than MIPS and all of that problem.
This could, I believe, be a fertile area for new types of automation that begin to apply optimization (satisficing, most likely, in the sense of "good enough" strategies; see Herbert Simon for more G2) and thereby, maybe (he qualifies again!), reduce the amount of time and money spent upon forensics and weird extraction of the information needed to govern surprisingly fluid dynamic systems.
Zipf's Law (think top 10 lists, 80/20 rule, The Long Tail issues, etc.) and other power law behaviors will still apply to the end product of such analysis, but perhaps the informed payloads will ease the analysts' management of these turbulent parcels. (Some insights to the framing of the problem of getting top level insight into systems structures and how they express emergent behaviors can be found at the Santa Fe Institute and their many papers on "Small World" problems.)
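A back-of-envelope check on the Zipf point, with an arbitrary catalog of 100 items whose "traffic" falls off as 1/rank:

```python
# Zipf-flavored toy: item traffic ~ 1/rank. What share does the top 20%
# of a 100-item catalog carry? (Catalog size and weighting are arbitrary.)
N = 100
weights = [1.0 / rank for rank in range(1, N + 1)]
total = sum(weights)
top_20_share = sum(weights[: N // 5]) / total
print(round(top_20_share, 2))  # 0.69 -- in the neighborhood of the fabled 80/20
```

Tune the exponent and the tail fattens or thins, but the power-law skew is what the informed payloads would have to live with.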
So, the bounds on this problem of course reduce to time and money. That topic is also taken up by Gunther, with emphasis upon what some of my old gang at the Wall Street joint referred to as "the giggle test" for feasibility.
This is a brief piece about an intriguing problem where more insight can be gained from Operations Research methodologies than from Information Technology praxis per se.
It nets out to (sorry) not only "if it isn't measured, it isn't managed," but adds the cautionary insight of "if it isn't modeled, it isn't managed."
Labels: Design, Information Technology, SanNasTimes Article, Storage