The Silicon Trend Tech Bulletin


Google's Innovative Conversation Technology: LaMDA

Published Wed, Jun 15 2022 11:39 am
by The Silicon Trend

 


The US tech giant Google has long had a soft spot for language. Early on, the company set out to translate the web, and it has since developed ML techniques that give it a better grasp of the intent behind search queries. Over time, advances in these and other areas have made it easier and easier to organize and access the enormous amounts of information conveyed by the written and spoken word.

Google's latest research innovation, LaMDA, adds pieces to one of the most tantalizing parts of the language puzzle: conversation. While conversations tend to revolve around particular topics, their open-ended nature means they can begin in one place and end up somewhere entirely different. For instance, a chat with a friend about a TV show could evolve into a discussion about the country where the show was filmed, before settling into a debate about that country's best regional cuisine.

 


The Long Path to LaMDA

LaMDA's conversational skills have been years in the making. Like earlier language models such as BERT and GPT-3, it is built on the Transformer architecture, which yields a model that can be trained to read many words, pay attention to how those words relate to one another, and then predict what word is likely to come next. Unlike most other language models, however, LaMDA was trained on dialogue.
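To make that "read many words, attend to how they relate, and predict what comes next" idea concrete, here is a minimal sketch of next-word prediction with a Transformer language model. LaMDA itself has not been released, so the publicly available GPT-2 model stands in purely as an illustrative assumption, and the prompt is invented for the example.

```python
# Minimal sketch of next-word prediction with a Transformer language model.
# GPT-2 is used as a stand-in because LaMDA is not publicly available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "We were talking about that TV show, and then the conversation drifted to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model attends over every word in the prompt and assigns a score
    # to each vocabulary entry as a candidate for the next word.
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

next_token_id = logits[0, -1].argmax().item()
print(tokenizer.decode([next_token_id]))
```

Generating a full reply is just this step repeated: each predicted word is appended to the input and fed back into the model.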

During training, LaMDA picked up on several of the nuances that distinguish open-ended conversation from other forms of language. One of those nuances is sensibleness: does a response make sense in the context of what was said before? Sensibleness alone, however, does not make a good response. Satisfying responses also tend to be specific, relating clearly to the context of the conversation.

LaMDA builds on earlier Google research, published in 2020, which showed that Transformer-based language models trained on dialogue could learn to talk about virtually anything. Once trained, LaMDA can be fine-tuned to improve the sensibleness and specificity of its responses.
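As a rough illustration of how qualities like sensibleness and specificity can be turned into something measurable, the sketch below reranks candidate replies with two scoring functions. Google has not published LaMDA's classifiers or fine-tuning recipe, so every function, heuristic, and weight here is a hypothetical stand-in rather than the real method.

```python
# Hypothetical sketch: rerank candidate replies by "sensibleness" and
# "specificity" scores. The heuristics are placeholders for illustration;
# LaMDA's real quality signals are learned models, not hand-written rules.
from typing import List


def sensibleness_score(context: str, reply: str) -> float:
    # Placeholder: an empty reply, or one that merely echoes the context,
    # is treated as less sensible.
    if not reply.strip():
        return 0.0
    return 0.5 if reply.strip() == context.strip() else 1.0


def specificity_score(context: str, reply: str) -> float:
    # Placeholder: replies that reuse content words from the context are
    # treated as more specific than generic ones such as "That's nice."
    context_words = set(context.lower().split())
    reply_words = set(reply.lower().split())
    return len(context_words & reply_words) / max(len(reply_words), 1)


def pick_response(context: str, candidates: List[str]) -> str:
    # Choose the candidate with the best weighted combination of scores.
    def combined(reply: str) -> float:
        return (0.6 * sensibleness_score(context, reply)
                + 0.4 * specificity_score(context, reply))

    return max(candidates, key=combined)


context = "I love the mountain scenery in that show."
candidates = [
    "That's nice.",
    "Yes, the mountain scenery around the filming locations is stunning.",
]
print(pick_response(context, candidates))
```

In LaMDA's case such qualities are judged with models trained on human ratings rather than hand-written heuristics; the snippet only shows the general pattern of scoring and selecting responses.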

 


First Things First: Responsibility

Google has also explored dimensions such as interestingness, assessing whether responses are meaningful, unexpected, or even humorous. And, being Google, the company cares a great deal about factuality and is probing methods to ensure LaMDA's responses are not just compelling but accurate. The most crucial question Google asks, however, is whether its technologies adhere to its AI Principles. Language may be one of humanity's most essential tools, but like all tools it can be misused.

 


When developing technologies like LaMDA, Google's highest priority is working to minimize such risks. Issues in ML models, such as unfair bias, are well known, which is why Google has built open-source resources that researchers can use to analyze models and the data on which they are trained.

Recently, an interview with LaMDA, conducted by a Google engineer and a collaborator, was published on Medium. Due to technical limitations, the interview took place over several distinct chat sessions.

You can find the complete interview here: Is LaMDA Sentient? — an Interview