The Silicon Trend Tech Bulletin


Deep Learning AI Revives a 70-Year-Old Idea: The Neural Network

Published Sat, Nov 06 2021 10:59 am
by The Silicon Trend


For the past decade, the best-performing AI systems, such as Google's automatic translator and the speech recognizers on smartphones, have grown out of a technique called deep learning. Deep learning is in fact a new name for an AI approach known as the neural network, an idea that is more than 70 years old.

 

Neural Networks

Neural nets were a significant research field in both computer science and neuroscience until 1969. In a neural net, the computer learns to perform a task by analyzing training examples. Most of today's nets are arranged into layers of nodes, and data moves through them in only one direction. Each node assigns a number called a weight to each of its incoming connections; when the network is active, the node receives a different data item over each connection, multiplies it by the corresponding weight, and adds the results together. If the sum exceeds the node's threshold, the node fires and passes the value along its outgoing connections. When a neural net is being trained, its weights and thresholds are initially set to random values and then adjusted as training proceeds. The first trainable neural net, the Perceptron, was introduced in 1957; it had only a single layer with adjustable weights and thresholds, sandwiched between the input and output layers.
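The weight-and-threshold arithmetic described above is easy to make concrete. The snippet below is a minimal Python sketch in the spirit of a single-layer Perceptron, not code from any system mentioned in this article; the node_output helper, the input values, and the random ranges are all invented for illustration.

```python
import random

def node_output(inputs, weights, threshold):
    """One node: weight each incoming value, sum the results, and fire
    (output 1) only if the sum exceeds the node's threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# A tiny net in the spirit of the 1957 Perceptron: a single node with
# adjustable weights and a threshold sitting between input and output.
inputs = [0.5, -1.2, 3.0]                           # data arriving at the input layer (made up)
weights = [random.uniform(-1, 1) for _ in inputs]   # training starts from random weights
threshold = random.uniform(-1, 1)                   # ...and a random threshold

print(node_output(inputs, weights, threshold))      # the output node's decision: 0 or 1
```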

 

Deep Learning Revolution

In the 1980s, researchers developed algorithms for modifying a neural net's weights and thresholds, but for a long time the results were disappointing. With enough training a network can classify data well, yet what exactly do the settings it ends up with mean? For a time, neural nets were supplanted by alternative approaches to machine learning built on cleaner mathematics.
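The article does not say which weight-adjustment algorithms those researchers used, so the sketch below shows the classic perceptron learning rule as one simple, hypothetical example of how training nudges weights and thresholds toward the training data; the train_step helper, the learning rate, and the example values are all assumptions made for illustration.

```python
def train_step(weights, threshold, inputs, target, lr=0.1):
    """One perceptron-style update: if the node's prediction is wrong,
    shift each weight (and the threshold) to reduce the error."""
    total = sum(x * w for x, w in zip(inputs, weights))
    prediction = 1 if total > threshold else 0
    error = target - prediction                               # -1, 0, or +1
    new_weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    new_threshold = threshold - lr * error                    # lower the bar if the node should have fired
    return new_weights, new_threshold

# One labelled training example, pushed through a single update.
weights, threshold = [0.2, -0.4], 0.0
weights, threshold = train_step(weights, threshold, inputs=[1.0, 0.5], target=1)
print(weights, threshold)
```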

The recent resurgence of neural nets, the deep-learning revolution, comes courtesy of the gaming sector. The complex imagery and rapid pace of today's video games require hardware that can keep up, and the outcome has been the graphics processing unit (GPU), whose many parallel processing cores give it an architecture not unlike that of a neural net. Today, the deep-learning technique is responsible for the best-performing systems in almost every area of AI research.
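One way to see that resemblance is that a whole layer of nodes amounts to a large batch of identical multiply-and-add operations, which is exactly the uniform, parallel workload GPUs are built for. The NumPy sketch below is only an illustration of that idea; the array sizes are made up and nothing here is tied to any particular GPU.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 100))     # 64 example inputs with 100 features each (made-up sizes)
W = rng.standard_normal((100, 32))     # weights for a layer of 32 nodes
thresholds = rng.standard_normal(32)   # one threshold per node

# Every node's weighted sum over every example is the same multiply-and-add,
# so the whole layer collapses into a single matrix product.
activations = (X @ W) > thresholds     # 64 x 32 node decisions computed at once
print(activations.shape)               # (64, 32)
```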

 

Under the Hood

Tomaso Poggio, a professor of brain and cognitive sciences at MIT, and his colleagues at the Center for Brains, Minds, and Machines (CBMM) have released a three-part theoretical study of neural networks. The first part addresses the range of computations that deep-learning nets can perform and when deep nets offer an advantage. The second and third parts address the problem of global optimization, that is, guaranteeing that a network has found the settings that best fit its training data, and how well those settings generalize beyond it. The CBMM researchers' work could finally break the cycle of neural nets falling in and out of favor, though many theoretical questions about these techniques remain unanswered.