Daniel Khent on Nostr: "Microsoft’s new safety system can catch hallucinations in its customers’ AI ...
Published at 2024-03-29 21:34:00
Event JSON
{
  "id": "2f873106cefa85211c0f262a9665215419d2d80b960b4eec5013c696e4f82e58",
  "pubkey": "aa94f6a49865c5c2dc5b139d683fa5d77574aaba1f8f510aeda0014826480e1b",
  "created_at": 1711744440,
  "kind": 1,
  "tags": [
    ["t", "ai"],
    ["t", "chatgpt"],
    ["t", "llms"]
  ],
  "content": "\"Microsoft’s new safety system can catch hallucinations in its customers’ AI apps\"\n\nNot sure I buy that you can generically catch hallucination\n\n#ai #chatgpt #LLMs\n\nhttps://www.theverge.com/2024/3/28/24114664/microsoft-safety-ai-prompt-injections-hallucinations-azure",
  "sig": "8670091f6a18ce8f9c4c60fe33ac963c313003a203a4dbb861761209688abc3ad83989df30c8ec30a989474928017041977bf8c08e5ca693474088293bcb589c"
}
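For readers unfamiliar with the format: this is a Nostr event per NIP-01, where the `id` field is the SHA-256 hash of the UTF-8 JSON serialization of the array `[0, pubkey, created_at, kind, tags, content]`, and `sig` is a BIP-340 Schnorr signature over that id. A minimal sketch of the id computation (the helper name is illustrative, not from any library):

```python
import hashlib
import json

def nostr_event_id(pubkey, created_at, kind, tags, content):
    # NIP-01: serialize [0, pubkey, created_at, kind, tags, content]
    # as compact JSON (no extra whitespace, raw UTF-8), then SHA-256 it.
    payload = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Fields from the event above; the result should reproduce its "id"
# if the serialization matches NIP-01 exactly.
event_id = nostr_event_id(
    "aa94f6a49865c5c2dc5b139d683fa5d77574aaba1f8f510aeda0014826480e1b",
    1711744440,
    1,
    [["t", "ai"], ["t", "chatgpt"], ["t", "llms"]],
    "\"Microsoft’s new safety system can catch hallucinations in its "
    "customers’ AI apps\"\n\nNot sure I buy that you can generically catch "
    "hallucination\n\n#ai #chatgpt #LLMs\n\nhttps://www.theverge.com/2024/3/"
    "28/24114664/microsoft-safety-ai-prompt-injections-hallucinations-azure",
)
print(event_id)
```

Verifying `sig` additionally requires a secp256k1 Schnorr implementation, which the standard library does not provide.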