Google has long been at the forefront of organizing and providing access to information conveyed in written and spoken form. It has developed machine learning techniques that help the company understand the intent behind search queries.
According to Google, language is one of computer science's most difficult puzzles, and an even more challenging part of that puzzle is conversation. The search giant has introduced LaMDA (Language Model for Dialogue Applications) as a stepping stone toward solving that puzzle. It can converse in a free-flowing way about a seemingly endless number of topics, an ability that unlocks more natural ways of interacting with technology and entirely new categories of potential applications.
As Google describes in its blog post:
While conversations tend to revolve around specific topics, their open-ended nature means they can start in one place and end up somewhere completely different. A chat with a friend about a TV show could evolve into a discussion about the country where the show was filmed before settling on a debate about that country’s best regional cuisine.
LaMDA's conversational skills are built on Transformer, the neural network architecture that Google Research invented and open-sourced in 2017 and that also underpins many recent language models such as BERT and GPT-3.
That architecture produces a model that can be trained to read many words, pay attention to how those words relate to one another, and then predict what words it thinks will come next. Unlike most other language models, LaMDA was trained on dialogue, which taught it nuances such as sensibleness, that is, whether a response makes sense in the context of the conversation. LaMDA strives to be not only sensible but also specific.
Google explains:
After all, the phrase “that’s nice” is a sensible response to nearly any statement, much in the way “I don’t know” is a sensible response to most questions. Satisfying responses also tend to be specific, by relating clearly to the context of the conversation. LaMDA has been trained to be both.
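LaMDA itself is not publicly available, but the next-word-prediction behaviour described above can be sketched with an open Transformer model. The snippet below is a minimal illustration, assuming the Hugging Face transformers library and the public GPT-2 checkpoint purely as stand-ins for the idea, not for LaMDA's actual implementation: given a piece of context, the model assigns probabilities to the words it thinks could come next.

```python
# Minimal sketch of Transformer-style next-word prediction.
# Uses the open GPT-2 checkpoint via Hugging Face "transformers"
# only to illustrate the principle; LaMDA's model and training
# data are not public.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "I just finished watching a great TV show about"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Logits over the vocabulary for every position in the prompt.
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probability distribution for the word that comes next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>12s}  p={prob:.3f}")
```

A dialogue model such as LaMDA builds on this same principle, but is trained on conversation data and tuned toward the sensibleness and specificity qualities described above.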
LaMDA is still in the early stages of development, and sensibleness and specificity are not the only qualities Google is trying to build into the model. It also wants responses that are interesting, insightful, compelling, and factually correct. In addition, it is working to minimize misuse of the technology: Google says it scrutinizes the model's outputs so that responses minimize bias, avoid hate speech, and do not spread misleading information.