All Watched Over By Search Engines of Loving Grace

Google’s shopping spree has continued with the purchase of the British artificial intelligence (AI) start-up DeepMind, acquired for an eye-watering £400M ($650M). This is Google’s 8th biggest acquisition in its history, and the latest in a string of purchases in AI and robotics. Boston Dynamics, an American company famous for building agile robots capable of scaling walls and running over rough terrain (see BigDog here), was mopped up in 2013. And there is no sign that Google is finished yet. Should we be excited or should we be afraid?

Probably both. AI and robotics have long promised brave new worlds of helpful robots (think Wall-E) and omniscient artificial intelligences (think HAL), which remain conspicuously absent. Undoubtedly, the combined resources of Google’s in-house skills and its new acquisitions will drive progress in both these areas. Experts have accordingly fretted about military robotics and speculated how DeepMind might help us make better lasagne. But perhaps something bigger is going on, something with roots extending back to the middle of the last century and the now forgotten discipline of cybernetics.

The founders of cybernetics included some of the leading lights of the age, including John von Neumann (designer of the digital computer), Alan Turing, the British roboticist Grey Walter and even people like the psychiatrist R.D. Laing and the anthropologist Margaret Mead. They were led by the brilliant and eccentric figures of Norbert Wiener and Warren McCulloch in the USA, and Ross Ashby in the UK. The fundamental idea of cybernetics was to consider biological systems as machines. The aim was not to build artificial intelligence per se, but rather to understand how machines could appear to have goals and act with purpose, and how complex systems could be controlled by feedback. Although the brain was the primary focus, cybernetic ideas were applied much more broadly – to economics, ecology, even management science. Yet cybernetics faded from view as the digital computer took centre stage, and has remained hidden in the shadows ever since. Well, almost hidden.

One of the most important innovations of 1940s cybernetics was the neural network, the idea that logical operations could be implemented in networks of brain-cell-like elements wired up in particular ways. Neural networks lay dormant, like the rest of cybernetics, until being rediscovered in the 1980s as the basis of powerful new ‘machine learning’ algorithms capable of extracting meaningful patterns from large quantities of data. DeepMind’s technologies are based on just these principles, and indeed some of their algorithms originate in the pioneering neural network research of Geoffrey Hinton (another Brit), whose company DNN Research was also recently bought by Google and who is now a Google Distinguished Researcher.
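The 1940s insight can be made concrete with a McCulloch-Pitts-style threshold neuron. The sketch below is purely illustrative (the function names, weights, and thresholds are my own choices, not anyone's actual code): each unit fires when the weighted sum of its inputs reaches a threshold, and such units can be wired into any logical operation.

```python
def neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Basic logic gates as single threshold units.
AND = lambda a, b: neuron([a, b], [1, 1], 2)   # both inputs must fire
OR  = lambda a, b: neuron([a, b], [1, 1], 1)   # either input suffices
NOT = lambda a: neuron([a], [-1], 0)           # an inhibitory connection

# Wiring units together yields operations no single unit can compute:
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))
```

Modern machine learning replaces the hand-set weights and hard thresholds with learned weights and smooth activations, but the underlying picture – computation by networks of simple brain-cell-like elements – is the same.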

What sets Hinton and DeepMind apart is that their algorithms reflect an increasingly prominent theory about brain function. (DeepMind’s founder, the ex-chess-prodigy and computer games maestro Demis Hassabis, set up his company shortly after taking a Ph.D. in cognitive neuroscience.) This theory, which came from cybernetics, says that the brain’s neural networks achieve perception, learning, and behaviour through repeated application of a single principle: predictive control. Put simply, the brain learns about the statistics of its sensory inputs, and about how these statistics change in response to its own actions. In this way, the brain can build a model of its world (which includes its own body) and figure out how to control its environment in order to achieve specific goals. What’s more, exactly the same principle can be used to develop robust and agile robotics, as seen in BigDog and its friends.
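The predictive-control principle can be sketched in a few lines. In this toy loop (all names and constants are invented for illustration; this is not the brain's or DeepMind's actual algorithm), an agent never observes the world directly, only noisy sensory samples. It updates an internal prediction to reduce prediction error, and acts to push that prediction toward a goal – the two halves of the principle described above.

```python
import random
random.seed(1)

goal = 21.0        # desired sensory input (say, room temperature)
prediction = 0.0   # the agent's internal model of its input
state = 15.0       # hidden world state the agent can nudge

for step in range(200):
    sensed = state + random.gauss(0.0, 0.1)   # noisy sensory sample

    # Perception: update the model to reduce prediction error.
    prediction += 0.2 * (sensed - prediction)

    # Action: nudge the world to shrink the gap between model and goal.
    action = 0.3 * (goal - prediction)
    state += action

print(round(prediction))   # the prediction settles near the goal of 21
```

The same feedback structure – predict, compare, correct – is what lets a legged robot like BigDog recover from a shove: its controller continually predicts the consequences of its actions and acts to cancel the error.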

Put all this together and the cybernetic ideal of exploiting deep similarities between biological entities and machines resurfaces. These similarities go far beyond superficial (and faulty) assertions that brains are computers; rather, they recognize that prediction and control lie at the very heart of both effective technologies and successful biological systems. This means that Google’s activity in AI and robotics should not be considered separately, but instead as part of a larger view of how technology and nature interact: Google’s deep mind has deep roots.

What might this mean for you and me? Many of the original cyberneticians held out a utopian prospect of a new harmony between people and computers, well captured by Richard Brautigan’s 1967 poem – All Watched Over By Machines of Loving Grace – and recently re-examined in Adam Curtis’ powerful though breathless documentary of the same name. As Curtis argued, these original cybernetic dreams were dashed against the complex realities of the real world. Will things be different now that Google is in charge? One thing that is certain is that the simple idea of a ‘search engine’ will seem increasingly antiquated. As the data deluge of our modern world accelerates, the concept of ‘search’ will become inseparable from ideas of prediction and control. This really is both scary and exciting.

Google Prepares For A Future Where We See Ourselves In Every Computing Interaction

Google seems to have paid at least $500 million to acquire DeepMind, an artificial intelligence startup that has a number of high-profile investors, and that has demoed tech which shows computers playing video games in ways very similar to human players. Facebook reportedly also tried to buy the company, and the question on most people’s minds is “Why?”

More intelligent computing means more insightful data gathering and analysis, of course. Any old computer can collect information, and even do some basic analytics work in terms of comparing and contrasting it to other sets of data, drawing simple conclusions where causal or correlational factors are plainly obvious. But it still takes human analysts to make meaning from all that data, and to select the significant information from the huge, indiscriminate firehose of consumer data that comes in every day.

AI and machine learning expertise can help improve the efficiency and quality of data gathered by Google and other companies who rely on said information, but it can also set the company up for the next major stage in computing interaction: turning the Internet of Things into the Internet of Companions. Google is hard at work on tech that will make even more of our lives computer-centric, including driverless cars and humanoid robots to take over routine tasks like parcel delivery, but all those new opportunities for computer interaction need a better interface if they’re to become trusted and widely used.

Google has already been working on building software and tech that anticipates the needs of a user and acts as a kind of personal valet. Google Now parses information from your Gmail and search history to predict what you’ll ask about and provide the information in advance. Now has steadily been growing smarter and incorporating more data sources, but it still has plenty of room for improvement, and there’s no better way to anticipate a human’s needs than with a computer that thinks like one.

Another key component of Google’s future strategy has to do with hardware. The company’s last high-profile acquisition was Nest Labs, which it bought for $3.2 billion in cash earlier this month. Nest’s smart thermostat also uses a significant amount of machine learning to help anticipate the schedule and needs of its users, which is something that DeepMind could assist with on a basic level. But there’s a larger opportunity, as once again a more human element could help make the Internet of Things a more accessible concept for the average user.

We’ve seen little from DeepMind beyond computers that can play video games, but that demonstration speaks volumes about what Google can do with the company. Robotics and hardware investments like those already made by the company are interesting, to be sure, but DeepMind is in many ways the thread that will draw all these separate initiatives together: There’s an adoption disconnect between technically impressive innovations, and convincing everyday end users to actually embrace them. DeepMind could help humanize tech that seems otherwise deeply impersonal (and in the case of self-driving cars, even anti-human) in a way that spurs uptake.

More human machines could be a big reason why Google has reportedly created an ethics board to supervise the use of DeepMind’s AI tech. Google probably isn’t that worried about the possibility of accidentally creating Skynet, but when you start building computing devices that think and act like humans, you’re bound to get into fraught moral territory, both in terms of what said tech can learn and know about its users, and in terms of what responsibility, if any, we have to treat said tech differently than any standard computer.

Depending on your view of Google and what it does, the DeepMind acquisition is either troubling or exciting. Of course, it has the potential to be both, as does any potential advancement in AI and machine learning, but I can’t help but be enthralled by the possibilities of the picture Google is painting with its latest big-picture moves. More than any other company, it seems committed to a future that lives up to the vision of the science fiction blockbusters we all grew up with, and it’s impossible to deny the allure of that kind of ambition.