2023-10-15 05:50:56

nodez on Nostr: "My rule of thumb is not to trust LLM output unless I can independently verify it," ...

"My rule of thumb is not to trust LLM output unless I can independently verify it," Dominik Rabiej, a senior product manager for Bard, wrote in the Discord chat in July, referring to large language models -- the AI systems trained on massive amounts of text that form the building blocks of chatbots like Bard and OpenAI's ChatGPT. "Would love to get it to a point that you can, but it isn't there yet."

"The biggest challenge I'm still thinking of: what are LLMs truly useful for, in terms of helpfulness?" said Googler Cathy Pearl, a user experience lead for Bard, in August. "Like really making a difference. TBD!"

[...] Two participants on Google's Bard community on chat platform Discord shared details of discussions in the server with Bloomberg from July to October. Dozens of messages reviewed by Bloomberg provide a unique window into how Bard is being used and critiqued by those who know it best, and show that even the company leaders tasked with developing the chatbot feel conflicted about the tool's potential.

Expounding on his answer about "not trusting" responses generated by large language models, Rabiej suggested limiting people's use of Bard to "creative / brainstorming applications." Using Bard for coding was a good option too, Rabiej said, "since you inevitably verify if the code works!"

https://www.bloomberg.com/news/articles/2023-10-11/google-insiders-question-usefulness-of-bard-ai-chatbot?leadSource=uverify%20wall
Author Public Key
npub17ryxfn6h8hshzpfmaaxl8vcuvkfnx7sf07aanusd0pgxujgvddjq7y9shm