At its October 4th event, Google kicked off by talking about Google Assistant, which was first introduced at the company's I/O 2016 developer conference. The web giant showed off a few queries, similar to what we've seen before with Google Now. As you'd expect, Google Assistant will be available across all sorts of devices, including watches, phones, TVs, and in the car.
Sundar Pichai, who delivered the keynote, explained that the Knowledge Graph from which Google Assistant draws its information already stores 70 billion facts about people, places, and things. Along with the Knowledge Graph, Assistant will be able to tap into other technologies Google has at its disposal, including natural language processing, translation, voice recognition, and image recognition.
According to Pichai, image recognition accuracy has improved from 89.6% in 2014 to 93.9% today. With this improvement, the system can detect attributes such as colours and count objects in a picture. Translation has also improved significantly, producing more natural-sounding results than before.
Pichai went on to discuss WaveNet, a new text-to-speech technology that generates more natural-sounding speech. It will enable Google to support different languages and personalities, differentiate between dialects, and even capture emotion.
Google first rolled out Assistant in its new messaging service, Google Allo, and today the company announced plans to bring it to two new 'surfaces': one for the phone and one for the home. The new Assistant will ship with the Pixel phone, which Google also announced today.