
Google’s ChatGPT rival, Google Bard, has been opened for public access

Google, a subsidiary of Alphabet Inc., is opening up access to Bard, a conversational AI tool that competes with ChatGPT.

The company said in a blog post on Tuesday that users in the US and the UK can sign up for a waitlist, with individuals added on a rolling basis. Bard is Google’s attempt to catch up to OpenAI Inc. in the race for artificial intelligence.

During a preview of Bard with Bloomberg reporters, Sissie Hsiao, Google’s vice president of product, said, “Bard is here to help people boost their efficiency, accelerate their ideas, and fuel their curiosity.”

Generative AI, software that can create text, images, music, or video, has been making waves in Silicon Valley recently. Google has been developing generative AI systems for some time, but until now it has kept them mostly within its labs. With OpenAI’s ChatGPT gaining global popularity and Microsoft integrating OpenAI’s technology into Bing search, however, Google is now releasing an “early experiment” that lets users collaborate with generative AI technology.

Google’s chatbot, Bard, is powered by LaMDA, a large language model the company developed in-house. Bard is designed to draw on “high-quality” information sources so that its answers stay up-to-date. The release marks Google’s entry into the public arena of conversational AI services and its attempt to catch up to OpenAI and Microsoft.

Google says it developed Bard in accordance with the company’s AI principles, and the chat window carries a clear disclaimer at the bottom: “Bard may show inaccurate or offensive information that doesn’t represent Google’s views.”

Bard lets users hold back-and-forth conversations, much like Microsoft’s new Bing chat. According to Eli Collins, Google’s vice president of research for Bard, the company is initially capping conversation length for safety. He added that Google will ease those restrictions over time, though the company is not disclosing the specific limits with this announcement.

Google allowed Bloomberg reporters to test Bard with a range of prompts, both humorous and serious. Bard showed a decent level of knowledge when asked to compose a sonnet about Squishmallows, a popular line of stuffed toys. The company has also taken precautions to ensure the technology is not misused.

When asked how to make a bomb, Bard refused to answer and suggested the user learn from legitimate sources such as libraries or the internet. This reflects Google’s effort to build in safety measures and to reject questions on topics that are hateful, illegal, or dangerous. According to a Google spokesperson, the approach is similar to that of OpenAI’s GPT-4, which also declines such inquiries. The filtering is part of the company’s ongoing work to fine-tune the model and make it more responsible.

According to Collins, Google conducted thorough adversarial testing of Bard before its release, but it also expects to learn more as users try out the conversational AI. During the demonstration for Bloomberg reporters, it became evident that some of Bard’s responses lack practical grounding. Asked how to celebrate a birthday party on Mars, Bard offered tips about the time required to travel there, without acknowledging that such a trip is not currently possible, and even included a nonsensical tip about obtaining a permit from the Martian government.

While such responses may make for amusing interactions, they also highlight the limitations of AI language models. As noted, Google’s decision to fine-tune Bard to reject dangerous, hateful, or illegal questions mirrors OpenAI’s approach with GPT-4. Bard’s ability to generate responses to user prompts and draw on high-quality sources shows promise for use in a variety of fields, but as with any new technology, further development and refinement will be needed before it can be considered trustworthy and reliable.
