Google on Monday announced its experimental conversational artificial intelligence chatbot service, which it calls Bard.
As competition in the field of artificial intelligence heats up, tech giants and start-ups alike are racing to develop the next big thing in AI. Recent missteps, however, have shown that the industry is still in its infancy and that much work remains. The latest of these missteps involves Google’s AI-powered chatbot, Bard, and has already cost the company a staggering $100 billion in market value.
OpenAI, on the other hand, seems to be one step ahead with its revolutionary ChatGPT chatbot, which uses AI to create human-like text. The growing popularity of ChatGPT, its wide range of applications, and the potential it holds to disrupt internet searches have forced other companies to fast-track their own AI developments.
Microsoft, Google, and Baidu are all racing to catch up to OpenAI, with Google arguably the furthest along among the established companies. Google’s chatbot, Bard, is powered by LaMDA (Language Model for Dialogue Applications), the company’s own conversational AI language model.
Google CEO Sundar Pichai stated that technologies like LaMDA will eventually be integrated with Google’s search engine. Bard is expected to be released to the public in a few weeks, but this latest misstep may lead Google to take a step back and perfect the technology before making it widely available.
The mistake in question: in a promotional demo, Bard claimed that the James Webb Space Telescope took the very first pictures of a planet outside our solar system. In fact, the first image of an exoplanet was captured by the Very Large Telescope in Chile in 2004, well before James Webb launched. In the aftermath, Google shares took a significant hit, falling nearly 8% in midday trading, with the company’s market cap dropping from $1.35 trillion last week to $1.27 trillion and wiping out roughly $100 billion in market value.
A spokesperson for Google commented on the situation, saying that Bard’s error highlights “the importance of a rigorous testing process,” and that the company is starting its Trusted Tester program this week to ensure that Bard’s responses meet a high standard of quality and accuracy before its public release.
Bard is not the only chatbot to have suffered from inaccuracies, however. ChatGPT, for instance, has been prone to racial and gender biases, and has confidently given incorrect answers on certain topics. These missteps underscore that AI technology still has a long way to go before it can be treated as a reliable source of information.
As competition in the AI industry intensifies, it is crucial that companies take the time to thoroughly test and refine their technology before releasing it to the public. Given the potential AI holds to revolutionize our lives, it is vital that it be developed and deployed responsibly and safely.