An artificial intelligence research group recently presented a self-driving bicycle capable of navigating around obstacles, following a person, and responding to voice commands. The self-driving bike itself was of little practical use, but the AI behind it was remarkable: the bike was powered by a neuromorphic chip, a special type of AI computer.
Neuromorphic computing is not a new concept; it was first proposed in the 1980s. However, recent advances in artificial intelligence have revived interest in neuromorphic computers.
The rise of deep learning and neural networks has spurred the development of AI hardware tailored to neural network computations. One of the developments of recent years is neuromorphic computing, which is exciting because of its structural similarity to both biological and artificial neural networks.
How deep neural networks function
Recent advances in artificial intelligence center on artificial neural networks (ANNs), AI software roughly modeled on the structure of the human brain. Neural networks are made up of artificial neurons, tiny computing units that perform simple mathematical functions.
A single artificial neuron is of little use on its own. But when neurons are stacked in layers, they can accomplish impressive feats, such as recognizing images and transcribing text. Deep neural networks can contain hundreds of millions of neurons, spread across dozens of layers.
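To make this concrete, here is a minimal sketch of how artificial neurons work: each one computes a weighted sum of its inputs and passes it through a nonlinear function, and stacking a few of them produces a tiny two-layer network. All the weights and inputs below are arbitrary illustrative values, not taken from any real model.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of the inputs,
    squashed through a sigmoid activation into the range (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Two neurons form a hidden layer; a third reads their outputs.
inputs = [0.5, -1.2]
hidden = [
    neuron(inputs, [0.8, 0.4], 0.1),
    neuron(inputs, [-0.3, 0.9], 0.0),
]
output = neuron(hidden, [1.0, -1.0], 0.2)
```

Deep networks are simply this pattern repeated: the outputs of one layer become the inputs of the next, dozens of layers deep.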
When developing a deep learning model, engineers run many examples through the neural network along with the expected outcome. The AI model adjusts each of its artificial neurons as it reviews more and more data, gradually becoming more reliable at the specific task it was created for, such as detecting cancer on slides or flagging suspicious bank transactions.
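The adjustment process described above can be sketched for a single neuron: repeatedly present labelled examples and nudge the weights in the direction that reduces the prediction error. This is a minimal gradient-descent sketch; the toy dataset, learning rate, and iteration count are made up for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy labelled examples: (features, expected outcome).
# Here the label simply follows the first feature.
examples = [([0.0, 1.0], 0), ([1.0, 0.0], 1), ([1.0, 1.0], 1), ([0.0, 0.0], 0)]

weights, bias, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(1000):                      # repeated passes over the data
    for x, y in examples:
        pred = sigmoid(weights[0] * x[0] + weights[1] * x[1] + bias)
        err = pred - y                     # how far off the prediction was
        # Nudge each weight against the error gradient.
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
        bias -= lr * err
```

After training, the neuron's prediction for an input like `[1.0, 0.0]` is close to 1 and for `[0.0, 0.0]` close to 0. A real deep learning framework performs the same kind of update across millions of weights at once via backpropagation.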
Complexities of neural networks on traditional hardware
Traditional computers rely on one or more central processing units (CPUs). CPUs pack a lot of power and can perform complex operations at high speed. But since neural networks are massively distributed, running them on classical computers is a challenge: the CPU must emulate millions of artificial neurons through registers and memory locations, and calculate each of them in turn.
Graphics processing units (GPUs), the hardware used in games and 3D applications, can perform a large amount of work in parallel and are especially good at matrix multiplication, the core operation of neural networks. Arrays of GPUs have proven very useful for neural network processing.
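To see why matrix multiplication is the core operation: a fully connected layer computes all of its neurons' weighted sums at once as a single matrix product, and every entry of the result can be computed independently of the others, which is exactly the kind of work GPUs parallelize well. A minimal pure-Python sketch, with made-up weights:

```python
def matmul(A, B):
    """Naive matrix multiplication. Each output entry is an
    independent dot product, so all of them could run in parallel,
    which is what GPU hardware exploits."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

batch = [[1.0, 2.0],            # two input vectors (a mini-batch)
         [3.0, 4.0]]
weights = [[0.5, -1.0, 0.25],   # a layer mapping 2 inputs to 3 neurons
           [1.5,  0.0, -0.5]]
activations = matmul(batch, weights)   # 2 x 3 result: one row per input
```

A CPU evaluates these dot products one after another; a GPU computes thousands of them simultaneously, which is why GPU arrays run neural networks so much faster.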
GPU manufacturers have benefited from the rising popularity of neural networks and deep learning. In the past several years, Nvidia, the graphics hardware company, has seen its stock price multiply in value.
Nevertheless, GPUs lack the physical structure of neural networks and must still emulate neurons in software, albeit at breakneck speed. The mismatch between GPUs and neural networks causes inefficiencies, such as high energy consumption.
How neuromorphic chips work
Unlike general-purpose processors, neuromorphic chips are physically structured like artificial neural networks. Each neuromorphic chip consists of many small computing units that correspond to artificial neurons. Unlike CPUs, these computing units cannot perform many different operations; each has just enough capacity to perform the mathematical function of a single neuron.
Another key characteristic of neuromorphic chips is the physical connections between the artificial neurons. These connections make neuromorphic chips more like the human brain, which consists of biological neurons and the links between them, called synapses. It is the ability to create many physically connected artificial neurons that gives neuromorphic computers their real strength.
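Neuromorphic hardware typically implements spiking neurons, which communicate through brief pulses over their physical connections rather than continuous values. A common software abstraction for this is the leaky integrate-and-fire model, sketched below; the threshold, leak, and weight parameters are illustrative, not taken from any particular chip.

```python
def lif_neuron(input_spikes, threshold=1.0, leak=0.9, weight=0.4):
    """Leaky integrate-and-fire neuron: the membrane potential
    accumulates weighted input spikes, decays ("leaks") each time
    step, and emits an output spike when it crosses the threshold."""
    potential, out = 0.0, []
    for spike in input_spikes:
        potential = potential * leak + weight * spike
        if potential >= threshold:
            out.append(1)       # fire an output spike...
            potential = 0.0     # ...and reset the membrane potential
        else:
            out.append(0)
    return out

# A steady stream of input spikes produces a slower output rhythm.
spikes_out = lif_neuron([1] * 10)
```

Because such a neuron only does work when a spike arrives, a chip built from millions of them can stay largely idle, one reason neuromorphic hardware consumes so little power.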
The architecture of neuromorphic computers makes them much more efficient at running neural networks. They can run AI models faster than CPUs and GPUs while consuming less power. This matters because power consumption is already a key challenge for AI.
Their smaller size and lower power consumption also make neuromorphic chips suitable for cases where AI algorithms need to run at the edge instead of in the cloud.
Neuromorphic chips are characterized by the number of neurons they contain. The Tianjic chip that powered the self-driving bike mentioned at the beginning of this article contained approximately 40,000 artificial neurons and 10 million synapses in an area of about 3.8 square millimeters. Compared to a GPU running an equal number of neurons, Tianjic performed 1.6-100x faster and consumed 12-10,000x less power.
However, 40,000 neurons is only about as many as a fish's brain contains. The human brain has approximately 100 billion neurons.
The Tianjic chip was more a proof of concept than a commercially available neuromorphic computer. Other companies are already developing neuromorphic chips for production. Intel's Loihi chips and Pohoiki Beach computers are two examples. Pohoiki delivers 1,000x better performance and is 10,000x more energy-efficient than equivalent GPUs.
Artificial general intelligence (AGI) & neuromorphic computing
Ben Dickson of TechTalks says that AI chips that look much more like our brain may open up new paths to understanding and creating intelligence. The author also discusses the darker effects of the new technology and what we must watch out for. This article was originally published on TechTalks, where you can read the original Techtalks.com edition of this post.
- In a paper published in Nature, the AI researchers who created the Tianjic chip said their work could help bring us closer to artificial general intelligence (AGI).
- AGI is meant to replicate the capabilities of the human brain.
- The AI technologies currently available are narrow: they can solve specific problems but are poor at generalizing their knowledge.
- According to Tianjic's designers, the AI chip was able to solve a variety of problems on one device, including object detection, speech recognition, navigation, and obstacle avoidance.
- Artificial general intelligence requires more than stitching together several narrow AI models.