Dubbed Bard, the service will compete with the Microsoft-backed ChatGPT
An “experimental conversational AI service” named Bard will be made available to “trusted testers” on Monday, Alphabet CEO Sundar Pichai has announced. Google will also start using AI technology to improve and expand searches.
“Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models. It draws on information from the web to provide fresh, high-quality responses,” Pichai wrote in a blog post.
The AI is powered by Language Model for Dialogue Applications (LaMDA) technology, which Google unveiled two years ago. Pichai said the testing was intended “to make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information,” after which Bard will be made “more widely available to the public in the coming weeks.”
According to Pichai, Google “re-oriented the company around AI six years ago,” and developed a set of AI technologies that are “creating entirely new ways to engage with information, from language and images to video and audio.” The company is now rolling out these technologies to its signature web search service.
“Soon, you’ll see AI-powered features in Search that distill complex information and multiple perspectives into easy-to-digest formats, so you can quickly understand the big picture and learn more from the web,” Pichai wrote.
LaMDA previously made headlines in June 2022, when Google engineer and ethicist Blake Lemoine claimed that the program had become self-aware. The company said Lemoine’s claims were “wholly unfounded” and fired him for violating “employment and data security policies.”
Bard’s reveal comes less than three months after ChatGPT, developed by the Microsoft-backed OpenAI, became available to the general public. Within weeks, the AI became wildly popular – and caused alarm at schools and universities due to its ability to mimic academic writing. It has also raised questions about the political and cultural bias of the humans training it to “think,” as well as ethical concerns about the hiring of low-paid workers in Kenya to help censor its output.
Pichai sought to head off criticism by saying that Google’s AI will be developed “responsibly” and in line with the AI principles the company published in 2018. Google also provides “education and resources” for everyone involved with the project to “make AI safe and useful,” he said.