The idea of a killer robot capable of making its own lethal decisions defines The Terminator in James Cameron’s 1984 film.

Luckily for humanity, autonomous killer robots do not exist just yet. In spite of huge advances in technology, truly autonomous robots remain in the domain of science fiction.

By the end of 2020, the excitement that had driven autonomous vehicle initiatives was beginning to wane. Uber sold off its self-driving division, and while the regulatory framework for autonomous vehicles is far from clear, the technology itself remains a major stumbling block.

A machine operating at the edge of a network – whether it is a car or a robot or a smart sensor to control an industrial process – cannot rely on back-end computing for real-time decision-making. Networks are unreliable and latency of just a few milliseconds may mean the difference between a near miss and a catastrophic accident.
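
As a rough illustration of why a few milliseconds matter at road speed, the short Python sketch below computes the distance a vehicle covers while waiting for a remote decision. The speed and latency figures are illustrative assumptions, not numbers from the article.

```python
# Back-of-the-envelope check on why network latency matters for real-time control.
# The speed and latency figures below are illustrative assumptions.

def distance_travelled_m(speed_kmh: float, latency_ms: float) -> float:
    """Distance covered while waiting 'latency_ms' for a remote decision."""
    speed_ms = speed_kmh * 1000 / 3600          # convert km/h to m/s
    return speed_ms * (latency_ms / 1000)       # metres travelled during the delay

if __name__ == "__main__":
    for latency in (5, 20, 100):                # plausible round-trip latencies in ms
        print(f"{latency:>4} ms at 100 km/h -> {distance_travelled_m(100, latency):.2f} m")
```

Even a 5 ms round trip corresponds to well over a tenth of a metre at motorway speed, which is why the decision has to be made on the device itself.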

Experts generally accept the need for edge computing for real-time decision-making, but as those decisions evolve from simple binary “yes” or “no” responses to some semblance of intelligent decision-making, many believe that current technology is unsuitable.

The reason is not solely that advanced data models cannot adequately capture real-world situations, but also that current approaches to machine learning are incredibly brittle and lack the adaptability of intelligence in the natural world.

In December 2020, during the virtual Intel Labs Day event, Mike Davies, director of Intel’s neuromorphic computing lab, discussed why he felt existing approaches to computing require a rethink. “Brains really are unrivalled computing devices,” he said.

Davies measured the latest autonomous racing drones, whose on-board processors consume around 18W of power yet can barely fly a pre-programmed route at walking pace, against nature: “Compare that to the cockatiel parrot, a bird with a tiny brain which consumes about 50 milliwatts of power.”

The bird’s brain weighs just 2.2 grams, compared with the 40 grams of processing hardware the drone has to carry. “On that meagre power budget, the cockatiel can fly at 22 mph, forage for food and communicate with other cockatiels,” he said. “They can even learn a small vocabulary of human words. Quantitatively, nature outperforms computers three-to-one on all dimensions.”
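
For readers who want to put those figures side by side, the simple Python sketch below works out the ratios between the drone and the cockatiel using the numbers quoted above; the walking-pace figure of roughly 3 mph is an assumption.

```python
# Quick ratio comparison of the figures cited above: racing drone vs cockatiel.
# The walking-pace value (~3 mph) is an assumption; the other numbers come from the article.

drone = {"power_w": 18.0, "mass_g": 40.0, "speed_mph": 3.0}    # on-board processing: power draw, mass, rough walking pace
bird = {"power_w": 0.050, "mass_g": 2.2, "speed_mph": 22.0}    # cockatiel brain: ~50 mW, 2.2 g, 22 mph flight

print(f"power ratio (drone/bird): {drone['power_w'] / bird['power_w']:.0f}x")
print(f"mass ratio  (drone/bird): {drone['mass_g'] / bird['mass_g']:.0f}x")
print(f"speed ratio (bird/drone): {bird['speed_mph'] / drone['speed_mph']:.0f}x")
```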

Outperforming the brain has long been a goal of computing, but for Davies and the research team at Intel’s neuromorphic computing lab, much of the immense effort going into artificial intelligence is, in some ways, missing the point. “Today’s computer architectures are not optimised for that kind of problem,” he said. “The brain in nature has been optimised over millions of years.”

According to Davies, while deep learning is a valuable technology for intelligent edge devices, it is a limited tool. “It solves some types of problems extremely well, but deep learning can only capture a small fraction of the behaviour of a natural brain.”

So while deep learning can be used to enable a racing drone to recognise a gate to fly through, the way it learns this task is not natural. “The CPU is highly optimised to process data in batch mode,” he said.

“In deep learning, to make a decision, the CPU needs to process vectorised sets of data samples that may be read from disks and memory chips, to match a pattern against something it has already stored,” said Davies. “Not only is the data organised in batches, it also needs to be uniformly distributed. This is not how data is encoded in organisms that have to navigate in real time,” he added.
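
The minimal Python sketch below, using illustrative sizes and random weights rather than any real model, shows the batch-mode pattern Davies describes: a uniformly shaped, vectorised batch of samples pushed through a network in a single pass.

```python
import numpy as np

# Minimal sketch of batch-mode inference: the model consumes a uniformly shaped,
# vectorised batch of samples in one pass. Sizes and weights are illustrative only.

rng = np.random.default_rng(0)
batch = rng.standard_normal((64, 128))      # 64 samples, each a 128-dimensional feature vector
weights = rng.standard_normal((128, 10))    # a single dense layer mapping features to 10 classes

logits = batch @ weights                    # the whole batch is processed as one matrix multiply
predictions = logits.argmax(axis=1)         # one decision per sample, but only after the full batch
print(predictions.shape)                    # (64,)
```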

A brain processes data sample by sample, rather than in batch mode. But it also needs to adapt, which involves memory. “There is a catalogue of past history that influences the brain and adaptive feedback loops,” said Davies.
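
By way of contrast with the batch sketch above, the toy Python example below processes one sample at a time and adapts its weights using feedback from each outcome, with a decaying trace standing in for that catalogue of past history. The task, learning rate and decay constant are illustrative assumptions.

```python
import numpy as np

# Toy online learner: one sample at a time, with an adaptive feedback loop and a
# decaying memory trace of past inputs. All constants and the task are illustrative.

rng = np.random.default_rng(1)
weights = np.zeros(4)                            # adaptive state, updated continuously
memory_trace = np.zeros(4)                       # decaying record of past inputs

for _ in range(1000):                            # a stream of individual samples, not a batch
    x = rng.standard_normal(4)
    target = float(x[0] > 0)                     # toy task: detect the sign of the first feature
    prediction = 1.0 / (1.0 + np.exp(-weights @ x))
    error = target - prediction                  # feedback signal from this single sample
    memory_trace = 0.9 * memory_trace + x        # past history influences the update
    weights += 0.05 * error * memory_trace       # adapt immediately, then move to the next sample

print(np.round(weights, 2))
```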

Making decisions at the edge

Intel is exploring how to rethink computer architecture from the transistor up, blurring the distinction between CPU and memory. Its goal is a machine that processes data asynchronously across millions of simple processing units working in parallel, mirroring the role of neurons in biological brains.
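
The toy Python sketch below, which is not Intel’s architecture or programming interface, gives a flavour of that idea: many simple units, each holding its own state, that only do work when a spike event reaches them.

```python
from collections import defaultdict, deque

# Toy event-driven spiking model: memory and compute live together in each unit,
# and work only happens when a spike arrives. Not Intel's architecture or API.

THRESHOLD, LEAK = 1.0, 0.9

class Neuron:
    def __init__(self):
        self.potential = 0.0                     # local state stored with the unit itself

    def receive(self, weight):
        self.potential = self.potential * LEAK + weight
        if self.potential >= THRESHOLD:          # fire and reset
            self.potential = 0.0
            return True
        return False

neurons = [Neuron() for _ in range(1000)]
synapses = defaultdict(list)
synapses[0] = [(1, 0.6), (2, 1.2)]               # neuron 0 projects to neurons 1 and 2

events = deque([(0, 1.5)])                       # an input spike arriving at neuron 0
while events:                                    # process spikes one event at a time
    target, weight = events.popleft()
    if neurons[target].receive(weight):
        events.extend(synapses[target])          # a firing neuron generates new events downstream

print([round(n.potential, 2) for n in neurons[:3]])
```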

In 2017, it developed Loihi, a 128-core design based on a specialised architecture fabricated on 14 nm process technology. The Loihi chip includes 130,000 neurons, each of which can communicate with thousands of others. According to Intel, developers can access and manipulate on-chip resources programmatically by means of a learning engine that is embedded in each of the 128 cores.
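
As an illustration of the kind of rule such a learning engine can express, the Python sketch below implements a textbook spike-timing-dependent plasticity (STDP) update, in which a synapse strengthens when the presynaptic spike precedes the postsynaptic one; the constants are generic textbook values, not Loihi parameters, and this is not Intel’s programming interface.

```python
import numpy as np

# Textbook STDP rule: strengthen a synapse when the presynaptic spike precedes the
# postsynaptic spike, weaken it otherwise. Constants are illustrative, not Loihi's.

A_PLUS, A_MINUS, TAU = 0.05, 0.055, 20.0         # potentiation/depression amplitudes, time constant (ms)

def stdp_delta(pre_spike_t: float, post_spike_t: float) -> float:
    dt = post_spike_t - pre_spike_t              # positive: pre fired before post
    if dt >= 0:
        return A_PLUS * np.exp(-dt / TAU)        # strengthen the synapse (potentiation)
    return -A_MINUS * np.exp(dt / TAU)           # weaken it (depression)

weight = 0.5
for pre_t, post_t in [(10, 15), (40, 38), (60, 61)]:   # three spike pairings (times in ms)
    weight = float(np.clip(weight + stdp_delta(pre_t, post_t), 0.0, 1.0))
    print(f"pre={pre_t} post={post_t} -> weight={weight:.3f}")
```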

When asked about application areas for neuromorphic computing, Davies said it can solve problems similar to those targeted by quantum computing. But while quantum computing is likely to end up as part of datacentre computing in the cloud, Intel’s aspiration is to develop neuromorphic computing as co-processor units in edge computing devices. In terms of timescales, Davies said he expects such devices to be shipping within five years.

As a real-world example, researchers from Intel Labs and Cornell University have demonstrated how Loihi could learn and recognise hazardous chemicals in the field, using an approach based on the architecture of the mammalian olfactory bulb, which gives the brain its sense of smell.
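
The Python sketch below is a deliberately simplified stand-in for that task: it matches a noisy reading from a chemical sensor array against previously learned odour templates with a nearest-template rule, rather than the spiking olfactory-bulb model the researchers actually used. The array size and chemical names are assumptions.

```python
import numpy as np

# Simplified stand-in for the odour-recognition task: match a noisy sensor-array reading
# against learned odour templates. The real work used a spiking olfactory-bulb model on
# Loihi; the sensor count and chemical names here are assumptions for illustration.

rng = np.random.default_rng(2)
SENSORS = 72
odours = {name: rng.random(SENSORS) for name in ("ammonia", "acetone", "methane")}

def identify(reading: np.ndarray) -> str:
    """Return the learned odour whose template is closest to the sensor reading."""
    return min(odours, key=lambda name: np.linalg.norm(reading - odours[name]))

sample = odours["acetone"] + rng.normal(0, 0.1, SENSORS)   # a noisy re-occurrence of a learned chemical
print(identify(sample))                                     # expected: "acetone"
```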

For Davies and other neuromorphic computing researchers, the biggest stumbling block is not the hardware, but persuading programmers to move beyond 70 years of traditional programming and learn how to program a parallel neurocomputer efficiently.

“We are focusing on developers and the community,” he said. “The hard part is rethinking what it means to program when there are thousands of interacting neurons.”
