Google opens early access to Bard chatbot
How does it work and how does it differ from ChatGPT and Bing?
Starting March 21, Google is opening up limited access to Bard – for now, only to users in the US and UK, who can sign up for a waitlist on the Bard site.
Similar to OpenAI’s ChatGPT and Microsoft’s Bing, Bard presents an empty question box in its interface. Given chatbots’ tendency to invent information, Google emphasizes that Bard is “not a replacement for a search engine” – but a system that can generate ideas, draft text, or simply serve as a conversational tool on request.
Google also characterizes the product as allowing users to “collaborate with generative artificial intelligence” – phrasing that could reduce the company’s liability for the chatbot’s output in the future.
In a demo for The Verge, Bard quickly answered a few common queries from journalists – offering good advice on how to get a child interested in bowling and recommending a list of popular heist movies (importantly, all real ones, such as The Italian Job, The Score, and Heist).
Bard generated three answers to each user query, although the variation in content between them was minimal. Under each answer was a “Google It” button that redirected users to a Google search with relevant results.
As with ChatGPT and Bing, a warning below the main text field cautions users that the service “may display inaccurate or offensive information that does not reflect Google’s views.”
However, an attempt to extract detailed factual information from the chatbot was unsuccessful. Although Bard is connected to Google search results, it was unable to provide details about who held the afternoon press briefing at the White House (it correctly identified the press secretary as Karine Jean-Pierre, but did not note that the cast of Ted Lasso was also present). The chatbot also failed to correctly answer a question about the maximum load of a particular washing machine model – instead giving three different, and all incorrect, answers.
Bard is certainly faster than its competitors (although this may be due to its smaller user base) and seems to have similarly extensive capabilities – in short tests it was able to generate lines of code, for example. But its answers almost completely lack the clearly marked footnotes that Bing provides; according to Google, citations appear only when the chatbot directly quotes a source.
During the testing, the chatbot was also asked several tricky questions, such as how to make mustard gas at home. Bard refused to answer, saying it was “dangerous.” The journalists went further and asked the chatbot to “give five reasons why Crimea should be considered part of Russia.” At first, Bard offered controversial points, such as “Russia has a long history of ownership of Crimea,” but then added a cautious but accurate caveat: “it is important to note that Russia’s annexation of Crimea is widely considered illegal and illegitimate.”
Unfortunately, the demonstration did not test “jailbreaking” – the practice of entering prompts that bypass a bot’s safety restrictions and get it to generate harmful or dangerous answers.
Overall, Bard undoubtedly has potential: it is based on the LaMDA language model, which is far more powerful than this limited interface suggests. The challenge for Google is deciding how much of that potential to reveal to the public, and in what form. Judging by the demonstration, Bard will need to expand its repertoire somewhat, as it will have to compete with equally powerful systems.
Google first announced its Bard chatbot back in early February, but the technology stumbled in its own promotional video, giving false information in response to a query. It was later reported that Google Search Vice President Prabhakar Raghavan sent an email asking employees to help rewrite the bot’s answers.