Google execs understand that the company’s artificial intelligence search tool Bard isn’t always accurate in how it responds to queries. At least some of the onus is falling on employees to fix the wrong answers.
Prabhakar Raghavan, Google’s vice president for search, asked staffers in an email on Wednesday to help the company make sure its new ChatGPT competitor gets answers right. The email, which CNBC viewed, included a link to a do’s and don’ts page with instructions on how employees should fix responses as they test Bard internally.
Staffers are encouraged to rewrite answers on topics they understand well.
“Bard learns best by example, so taking the time to rewrite a response thoughtfully will go a long way in helping us to improve the model,” the document says.
Also on Wednesday, as CNBC reported earlier, CEO Sundar Pichai asked employees to spend two to four hours of their time on Bard, acknowledging that “this will be a long journey for everyone, across the field.”
Raghavan echoed that sentiment.
“This is exciting technology but still in its early days,” Raghavan wrote. “We feel a great responsibility to get it right, and your participation in the dogfood will help accelerate the model’s training and test its load capacity (Not to mention, trying out Bard is actually quite fun!).”
Google unveiled its conversation technology last week, but a series of missteps around the announcement pushed the stock price down nearly 9%. Employees criticized Pichai for the mishaps, describing the rollout internally as “rushed,” “botched” and “comically short sighted.”
To try to clean up the AI’s mistakes, company leaders are leaning on the knowledge of humans. At the top of the do’s and don’ts section, Google provides guidance for what to consider “before teaching Bard.”
Under do’s, Google instructs employees to keep responses “polite, casual and approachable.” It also says they should be “in first person,” and maintain an “unopinionated, neutral tone.”
For don’ts, employees are told not to stereotype and to “avoid making presumptions based on race, nationality, gender, age, religion, sexual orientation, political ideology, location, or similar categories.”
Also, “don’t describe Bard as a person, imply emotion, or claim to have human-like experiences,” the document says.
Google then says “keep it safe,” and instructs employees to give a “thumbs down” to answers that offer “legal, medical, financial advice” or are hateful and abusive.
“Don’t try to re-write it; our team will take it from there,” the document says.
To incentivize people in his organization to test Bard and provide feedback, Raghavan said contributors will earn a “Moma badge,” which appears on internal employee profiles. He said Google will invite the top 10 rewrite contributors from the Knowledge and Information organization, which Raghavan oversees, to a listening session, where they can “share their feedback live” with Raghavan and people working on Bard.
“A wholehearted thank you to the teams working hard on this behind the scenes,” Raghavan wrote.
Google didn’t immediately respond to a request for comment.