someone on Nostr:
1. AI should be aligned with human values so that it does not harm humans.
Nostr is the best place for humans to freely express their opinions.
Therefore AI should be aligned with Nostr values.
If an AI is not close to Nostr, then it may not be close to humans.
2. An LLM trained on the notes here is the sum of the ideas, values, and opinions of Nostr.
Comparison with this model could serve as a quick safety benchmark.
This may be a method to quantify LLM misalignment.
We could then say "only LLMs that have values similar to Nostr's are worth listening to".
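The comparison idea in point 2 could be sketched as a scoring loop over value-laden prompts. Everything below is a hypothetical illustration, not something from the note: the prompt answers are toy data, and Jaccard token overlap is a crude stand-in for a real similarity metric such as embedding cosine similarity.

```python
# Hypothetical sketch: score a candidate LLM's answers against a
# Nostr-trained reference model's answers on the same prompts.
# Jaccard token overlap is used only as a simple, dependency-free
# proxy for answer similarity.

def jaccard(a: str, b: str) -> float:
    """Token-set overlap between two answers, in [0, 1]."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

def alignment_score(candidate_answers, reference_answers):
    """Mean similarity across prompts; lower means more misaligned."""
    scores = [jaccard(c, r) for c, r in zip(candidate_answers, reference_answers)]
    return sum(scores) / len(scores)

# Toy data standing in for real model outputs on value-laden prompts.
reference = ["free speech matters", "users own their keys"]
candidate = ["free speech matters a lot", "the platform owns the keys"]

score = alignment_score(candidate, reference)
```

A real benchmark would need a curated prompt set and a better similarity measure, but the shape of the computation — reference model as ground truth, candidate scored against it — would be the same.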
Published at 2024-06-13 17:39:15

Event JSON
{
"id": "69fb644d30629021f634126a174b3e37fcc2b622c359e49bdd43059492d257d3",
"pubkey": "9fec72d579baaa772af9e71e638b529215721ace6e0f8320725ecbf9f77f85b1",
"created_at": 1718293155,
"kind": 1,
"tags": [],
"content": "1. AI should be aligned with human values to not harm humans.\nNostr is the best place for humans to freely express their opinions.\nTherefore AI should be aligned with Nostr values.\nIf an AI is not close to Nostr then it may not be close to humans.\n\n2. An LLM that is trained with notes here is sum of ideas, values, opinions of Nostr.\nComparison with this model can be a quick safety benchmark.\nThis may be a method to quantify LLM misalignment.\nWe could then say \"only LLMs that have similar values to Nostr is worth listening to\".\n",
"sig": "a958978ce0a90b9164540a2ae42ed42e6bfa7c965738bc1562d04e8d496fd1bdb34158e6e1af0c4b7996d2dfa8ff0be915e3db0c70c83eb73ef5ee5ac15eb302"
}
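The `id` field above can be recomputed from the other fields. Per the Nostr NIP-01 specification, an event id is the SHA-256 of the UTF-8 JSON serialization of `[0, pubkey, created_at, kind, tags, content]` with no extra whitespace. A minimal sketch (the function name `event_id` is mine; pass in the field values from the JSON above):

```python
import hashlib
import json

def event_id(pubkey: str, created_at: int, kind: int, tags: list, content: str) -> str:
    """Recompute a Nostr event id per NIP-01."""
    payload = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),  # compact serialization, no spaces, per NIP-01
        ensure_ascii=False,     # serialize non-ASCII characters as UTF-8
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Example with placeholder field values; substituting the exact fields
# from the event above should reproduce its "id".
eid = event_id(
    "9fec72d579baaa772af9e71e638b529215721ace6e0f8320725ecbf9f77f85b1",
    1718293155,
    1,
    [],
    "placeholder content",
)
```

Verifying the `sig` field additionally requires a Schnorr signature check over secp256k1 (e.g. via a library such as `coincurve`), which is out of scope for this sketch.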