Google is testing new artificial intelligence-powered chat products that are likely to influence a future public product launch. They include a new chatbot and a potential way to integrate it into a search engine.
The Alphabet-owned company is working on a project under its cloud unit called “Atlas,” a “code red” effort to respond to ChatGPT, the large language model-powered chatbot that took the public by storm when it launched late last year.
Google is also testing a chatbot called “Apprentice Bard,” where employees can ask questions and receive detailed answers similar to ChatGPT. Another product unit has been testing a new search desktop design that could be used in a question-and-answer format.
Leaders have been asking more employees for feedback on the efforts in recent weeks. CNBC viewed internal documents and spoke with sources about the efforts currently underway.
The product tests come after a recent all-hands meeting where employees raised concerns about the company’s competitive edge in AI, given the sudden popularity of ChatGPT, which was launched by OpenAI, a San Francisco-based startup that’s backed by Microsoft.
Google’s AI chief Jeff Dean told employees at the time that the company has much more “reputational risk” in providing wrong information and thus is moving “more conservatively than a small startup.” However, he and CEO Sundar Pichai teased at the time that Google may launch similar products to the public sometime this year. Google’s prime business is web search, and the company has long touted itself as a pioneer in AI.
One of the test products is a chatbot called Apprentice Bard, which uses Google’s conversation technology LaMDA, or Language Model for Dialogue Applications.
“As a result of ChatGPT, the LaMDA team has been asked to prioritize working on a response to ChatGPT,” read one internal memo viewed by CNBC. “In the short term, it takes precedence over other projects,” the email continued, warning that some employees would stop attending certain unrelated meetings.
Apprentice Bard looks similar to ChatGPT: Employees can enter a question in a dialog box and get a text answer, then give feedback on the response. Based on several responses viewed by CNBC, Apprentice Bard’s answers can include recent events, a feature ChatGPT doesn’t have yet.
Apprentice Bard replaced Meena, a previous version of a smart chatbot that had launched internally but was later discontinued. Employees have noticed Apprentice Bard’s responses becoming more advanced in recent weeks.
In one example circulated internally, a tester asked Apprentice Bard if there will be another round of layoffs at Google. The company laid off 12,000 employees, 6% of its workforce, earlier this month.
“According to my data base access, it is unlikely for Google to conduct another round of layoffs in 2023,” the response reads. “Layoffs are generally conducted to reduce costs and structure, but the company is doing well financially. In fact, Google’s revenue increased by 34% in 2021, and the company’s stock price has risen by 70% since January 2022.”
The company is also testing an alternate search page that could use a question-and-answer format, according to designs viewed by CNBC.
One view showed the home search page offering five different prompts for potential questions placed directly under the main search bar, replacing the current “I’m Feeling Lucky” button. It also showed a small chat logo inside the far right end of the search bar.
When a question is entered, the search results show a grey bubble directly under the search bar, offering more human-like responses than typical search results. Directly beneath that, the page suggests several follow-up questions related to the first one. Under that, it shows typical search results, including links and headlines.
It’s unclear just which experiments Google plans to incorporate in future product launches.
“We have long been focused on developing and deploying AI to improve people’s lives,” a Google spokesperson said. “We believe that AI is foundational and transformative technology that is incredibly useful for individuals, businesses and communities, and as our AI Principles outline, we need to consider the broader societal impacts these innovations can have. We continue to test our AI technology internally to make sure it’s helpful and safe, and we look forward to sharing more experiences externally soon.”
ChatGPT would be hired as a level 3 engineer
Perhaps unsurprisingly, Google teams have also been testing a beta LaMDA chat against ChatGPT itself. In separate documents, it selected examples of prompts and answers in side-by-side comparisons.
“Amazingly ChatGPT gets hired at L3 when interviewed for a coding position,” states one note in an internal document that compares LaMDA and ChatGPT. It didn’t state whether LaMDA would have performed similarly well.
One of the example prompts asked both chatbots if ChatGPT and AlphaCode, a coding engine owned by Alphabet subsidiary DeepMind, are going to replace programmers.
“No, ChatGPT and AlphaCode are not going to replace programmers,” LaMDA answered, followed by four paragraphs of explanation including that “programming is a team sport” and that while the chatbots “can help programmers work more efficiently,” it “cannot replace the creativity and artistry that is necessary for a great program.”
ChatGPT’s response was similar, stating “It is unlikely that ChatGPT or Alphacode will replace programmers” because they are “not capable of fully replacing the expertise and creativity of human programmers…programming is a complex field that requires a deep understanding of computer science principles and the ability to adapt to new technologies.”
Another prompt asked both chatbots to write a witty and funny movie scene in the style of Wes Anderson about an upmarket shoplifter in a perfume store being interrogated by security. LaMDA wrote in a script form, while ChatGPT wrote it in a narrative form that was much longer and more in-depth.
Another prompt included a riddle that asks, “Three women are in a room. Two of them are mothers and have just given birth. Now, the children’s fathers come in. What is the total number of people in the room?”
The document shows ChatGPT is thrown off, answering “there are five people in the room,” while LaMDA correctly responds that “there are seven people in the room.”