For months, Google, a subsidiary of Alphabet, has run a community on the Discord platform that gives insiders such as Google product managers, designers and engineers a forum to openly discuss the effectiveness and usefulness of the company’s artificial intelligence (AI) tools.
But many of those insiders are now questioning whether it is worth pouring so many resources into the development of the AI chatbot Bard.
Two participants in Google’s Bard community on Discord shared details of discussions held between July and October, which suggest that even the executives responsible for developing the chatbot are ambivalent about its potential.
Dominik Rabiej, a senior product manager for Bard, wrote in the Discord forum in July: “My rule of thumb is not to trust the output of a large language model (LLM) unless I can independently verify it. I would love to get the model to that point, but it isn’t there yet.”
Rabiej suggested limiting people’s use of Bard to “creative/brainstorming applications.” He added that using Bard for coding is also a good option, “because you inevitably have to verify that the code works.”
Cathy Pearl, Bard’s head of user experience, wrote on the forum in August: “The biggest challenge I’m still thinking about is: what is an LLM really useful for? Whether it can truly make a difference remains to be determined.”
**Betting it all on Bard**
For Google, ensuring Bard’s success is crucial. The company has long been far ahead in search, its financial lifeline, which accounts for about 80% of parent company Alphabet’s revenue.
But Google’s dominance in search has been challenged by the advent of generative AI, with some predicting that new tools from OpenAI and other startups could upend its strong position in the market.
Google launched the AI chatbot Bard in March this year to compete with ChatGPT. Since then, it has added a steady stream of new features, including the ability to analyze photos and to answer queries in dozens of languages.
Last month, Google also released its most ambitious update to date: Bard Extensions, which connects Bard to popular Google services such as Gmail, Maps, Docs and YouTube.
But as Google integrates Bard more deeply into its core products, the company has also received complaints that the tool fabricates facts and offers potentially dangerous advice.
On the same day it launched the extensions last month, the company also announced a “Google It” button that lets people check with one click whether the answers Bard generates match the results returned by its search engine.
**Overwhelmed with negative feedback**
On the Bard forum, some internal users questioned why Google relies on low-paid, overworked contractors to refine Bard’s answers.
Although the company has publicly stated that it does not rely solely on these workers to improve the AI models that power Bard, and that there are many other ways to improve their accuracy and quality, Tris Warkentin, Bard’s director of product management, responded that human input is important for training Bard’s algorithms.
Warkentin wrote: “Human involvement is vital so that Bard can be everyone’s product… We don’t need an ivory tower; we need something for everyone.”
In mid-July, one user brought up Project Nimbus, a $1.2 billion contract under which Google and Amazon provide AI tools to the Israeli military. The user questioned whether AI should be turned into a lethal weapon, and he was subsequently banned from the forum on the grounds that users must avoid “political, religious or other sensitive topics” in their chats.
Questions have also been raised about the enormous cost of maintaining LLMs. One insider asked on the Discord forum: “Is anything being done to reduce the staggering resource costs of LLMs? In particular, the water consumption and the huge demand for GPUs.”
Pearl responded: “I believe we will continue to find ways to get the same behavior with fewer resources.”
In addition, James, a user experience designer on Bard, said in the Discord community: “Despite my generally negative view of the impact a new generation of AI could have, I think education is one of the most interesting areas where this technology is likely to be ‘done well.’”
He believes that higher-education institutions may use the technology “to help students create richer experiences, because it offers support in different subjects almost around the clock.”
Google insiders are dissatisfied with the AI tool "Bard": Is it worth spending so many resources to develop?
Original source: CaiLian News