Excluding the outsourcing of software application development and helpdesks to India over the past few years, there have been some very significant technology outsourcing innovations in the course of human history. One of the earliest was tools, particularly the arrowhead, which eliminated the need to be up close and personal when killing your food.
You can imagine the scene: Caveman Woz in his cave startup, knapping and polishing his flint. Caveman Job, with real turtle bones around his neck, would be like, “this is going to change the way you kill animals”, and everyone would be like “nooo”, and then Caveman Job would say “one more thing” and he’d bring out a bow, attach the flint to a stick, and kill a deer 200 yards away, and everybody would be like, “oooh”, and then there’d be a cavestarter round of bartering to fund production of arrowheads, bows and arrows, and everybody would get one. And then Caveman Gor would be like “I invented this bow and arrow” and Caveman Ted would be like “the bow and arrow is a series of intertwined vines”… (Okay, enough of this…)
Another disruptive technology outsourcing innovation was the cooking pot, which greatly reduced the need for humans to spend all day hunting and eating by outsourcing the stomach’s function of breaking down food; it allowed tuberous roots to be consumed and greatly increased the calorific value of meat. This newly available time allowed humans to indulge more in non-crucial activities such as art, philosophical thought and omphaloskepsis.
You can imagine the scene: Cooking Pot support gets a visit from a disgruntled cooking pot buyer, who would be like “this pot doesn’t cook”, and Pot support would be like “have you tried blowing out the fire and rekindling it?” and he’d be like, “fire? Well, I put it over the dried sticks and leaves” and they’d be like “you have to start the fire”, &c, &c,…
The current evolutionary technology outsourcing innovation appears to be memory, both human and computer. On the human side, there’s no need to remember anything any more – or maybe it’s just me and I can’t remember anything any more. It used to be that you actually had to memorise things; then you just sat at a computer and Googled stuff; now I just walk around asking Glass random stuff and Glass reads the results back to me. But I don’t need to remember any of it because I can just ask again when I need it. This just-in-time concept is something I observe in my teenage girls; they even explicitly say they don’t need to learn something because they can just Google it.
My youngest is the most fervent in this; she has a violent and allergic reaction to the concept of the vocabulary test, which she ably demonstrates with C’s and D’s:
“Why do I have to learn how to spell? Autocorrect will just fix it, or I’ll just Google it”.
As a parent I have an equally violent and allergic reaction to this, but it did get me thinking. What is the future of literacy? How long before we don’t have to read and write because assistants like Glass are, in a sense, returning us to the oral tradition? (Or is it aural tradition? Is it the speaking tradition or the listening tradition?)
This trend is captured in the 21st-century skillset requirements around critical thinking, a shift away from what to learn to how to learn – a trend we should be adopting not only in our interpretation activities, but in all facets of our museum professional work. I had a conversation at a recent conference about the decline of reading in schools and what a problem this will become. I was less pessimistic: mass literacy is a fairly recent thing in the scheme of things, so maybe the shift away from the written word and print to rich media and virtual assistants like Glass will usher in a return to the oral (and visual) tradition – oral tradition 2.0, perhaps?
In the computer memory world, this outsourcing trend is playing out as we transition to the cloud. I can’t survive without anytime access to my data, for either work or play. For work, I’ve pretty much shifted to Google Drive – accessible from my iPhone, tablet, laptop and home computer, or anyone else’s computer for that matter. For play, I have Netflix and Spotify, and I’ve written this post on my iPhone, tablet and home computer over the course of a single day.
Former Microsoft VP Steven Sinofsky (@stevesi), who oversaw IE and SkyDrive development among other things, has a great post, Designing for exponential trends of 2014, which elegantly captures the cloud trend:
Cloud first becomes cloud-only … don’t distract with attempting to architect or committing to on-premises … recent grads will default to cloud-first productivity.
Which raises the question of how long before mobile-first becomes mobile-only. He covers mobile too:
While today it seems inconvenient if one needs to resort to “analog” to use a service, 2014 is a year in which every service has a choice and those that don’t exist in a mobile world won’t be picked.
This trend towards alternative choices is also playing out in museums, where it’s often easier for a programmatic team to rent cloud storage, cloud server space, or a service-based app than to enter into laborious negotiation with a traditional IT department that hasn’t comprehended that we all have alternatives now. In fact, the IT department is less and less the definitive source for technology information in a museum, because the question is less and less about what technology you know and more and more about how you use the technology that is available.
The Sinofsky post is an interesting read from someone who is well informed. He also talks about phablets, a term new to me, which I initially thought was shorthand for how fabulous tablets are but is in fact a descriptive word for a device that straddles the functions of a phone and a tablet. I’ll be staunchly refusing to use that term, but maybe I’ll coin the term Phlass instead.