The Silicon Trend Tech Bulletin


Are Other AI Domains Lagging Behind NLP in Terms of Innovation?

Published Thu, May 19 2022 06:11 am
by The Silicon Trend

 


The US multinational tech conglomerate Meta recently introduced a 175-billion-parameter Open Pretrained Transformer (OPT) model. The company claims that this large model, trained on publicly available data sets, is the first language model of its scale to be released together with both its pretrained weights and its training code. OPT joins several other advanced language models created and launched recently; as a result, the NLP domain of AI has seen rapid innovation in the last few years, with participation from leading tech giants across the globe.
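As a hedged illustration of what that release means in practice: the published OPT checkpoints can be loaded through the Hugging Face transformers library. This is a minimal sketch, assuming transformers and torch are installed, using the small facebook/opt-125m variant rather than the full 175B model:

```python
# Sketch: loading a small OPT variant via Hugging Face transformers.
# Assumes `pip install transformers torch` and network access to the
# facebook/opt-125m checkpoint; the 175B model needs far more hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# Tokenize a prompt and let the model continue it.
inputs = tokenizer("Large language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```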

 


 

NLP Domain Progress

Natural Language Processing (NLP) is one of the most remarkable fields of AI. Even in humans, proficiency in multiple languages is considered a significant indicator of intelligence, as it is generally deemed suggestive of an ability to parse challenging messages and decode variations across slang, dialects, and context.

Unsurprisingly, AI researchers consider teaching machines to grasp and respond to natural language a milestone, and even a step toward achieving general intelligence.

Widely considered a breakthrough, the 175-billion-parameter GPT-3, introduced by OpenAI in 2020, was trained on roughly 700 GB of data scraped from across the web. The model set a precedent for even larger, more advanced, and more computationally expensive models.

 


 

Innovation Supporting NLP

Innovation in NLP long lagged behind other areas, such as computer vision, which benefited tremendously from the emergence of massive pretrained models enabled by ImageNet. Those models helped achieve state-of-the-art outcomes in object detection, semantic segmentation, video recognition, and human pose estimation.

The most definitive creation of recent years was the Transformer, developed at Google Brain in 2017. It is a novel neural network architecture based on the self-attention mechanism. The model outperformed both convolutional and recurrent models, and it was observed that the Transformer requires comparatively little computational power to train, making it a good fit for modern ML hardware.
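To make the self-attention idea concrete, here is a minimal NumPy sketch of the scaled dot-product attention at the heart of the Transformer. It is an illustration of the mechanism, not the original implementation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention.

    Q, K, V: (seq_len, d_k) arrays of queries, keys, and values.
    """
    d_k = Q.shape[-1]
    # Similarity of every query with every key, scaled by sqrt(d_k)
    # to keep softmax gradients stable.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted mix of all value vectors.
    return weights @ V

# Self-attention: queries, keys, and values all come from the
# same sequence of token embeddings.
x = np.random.rand(5, 64)  # 5 tokens, 64-dim embeddings
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (5, 64)
```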

Because of the Transformer and the subsequent invention of BERT, NLP had its ImageNet moment. BERT revolutionized NLP, and since then several variations of the model have been put forward, such as ALBERT, XLNet, and RoBERTa. Beyond Transformers, representation-learning approaches such as ULMFiT and ELMo have also drawn wide attention by showing that pretrained language models can achieve state-of-the-art results on a variety of NLP tasks.
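A short sketch of how such pretrained models are typically reused today, assuming the Hugging Face transformers library is installed and using the publicly released bert-base-uncased checkpoint:

```python
# Sketch: extracting contextual embeddings from a pretrained BERT
# checkpoint; assumes `pip install transformers torch`.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("NLP had its ImageNet moment.", return_tensors="pt")
with torch.no_grad():
    # One 768-dim contextual vector per input token.
    hidden = model(**inputs).last_hidden_state
print(hidden.shape)  # (1, seq_len, 768)
```

These frozen embeddings, or a fine-tuned version of the same checkpoint, are what downstream tasks such as classification or question answering typically build on.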

 
