Ben Evans on Nostr
A: Adding more parameters to an LLM makes it "better".
B: Continued ingestion of the general Internet corpus "improves" general models.
Opposition: What if neither of those hypotheses is true? What if neither A nor B comes to pass?
What if:
~A: Adding more parameters adds nothing in terms of utility;
~B: Increased output of LLMs pollutes general input sets.
C: Subsequent LLMs are even worse than current generations.
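Hypotheses ~B and C together describe a feedback loop often called model collapse: each generation trains on its predecessor's output and degrades. A minimal sketch of that dynamic, under the assumption (not from the post) that a one-dimensional Gaussian stands in for the model, fit by plain mean/stdev estimation to samples drawn from the previous generation:

import random
import statistics

random.seed(42)

n = 200  # samples per generation (finite, which is what drives the drift)
data = [random.gauss(0.0, 1.0) for _ in range(n)]  # the "real" corpus

for generation in range(15):
    # "Train" the model: fit mean and stdev to the current corpus.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"gen {generation:2d}: mu={mu:+.3f} sigma={sigma:.3f}")
    # The next corpus is drawn entirely from the previous model's output.
    data = [random.gauss(mu, sigma) for _ in range(n)]

Because each fit uses finitely many samples, the estimated variance follows a multiplicative random walk and tends to shrink across generations, so later "models" become narrower, degenerate copies of the original distribution: a toy version of ~B polluting the input set and C following from it.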
Published at 2023-12-20 23:54:49

Event JSON
{
  "id": "3cf792c6ef3a947bb3eff66de1301637e05737b7b0c69db9b3e7219aa5de4d9f",
  "pubkey": "e780dfc7f1fe58b31b0ca8d26d5f5bcb2e610fe70d8eb0db1594d4bf8a9db5ec",
  "created_at": 1703112889,
  "kind": 1,
  "tags": [
    [
      "proxy",
      "https://mastodon.social/users/kittylyst/statuses/111615206294110379",
      "activitypub"
    ]
  ],
  "content": "A: Adding more parameters to a LL model makes it \"better\"\n\nB: Continued ingest of the general Internet corpus \"improves\" general models.\n\nOpposition: What if neither of those hypotheses is true? What if neither A or B comes to pass?\n\nWhat if:\n\n~A: Adding more parameters adds nothing in terms of utility;\n\n~B: Increased output of LLMs pollutes general input sets. \n\nC: Subsequent LLMs are even worse than current generations.",
  "sig": "e2b73d2fabf839073a5db5cf60b3afb1ba62e6b3c25b8f948e8a9a59b7c14618effa60e05cf922937ad64fb19229353945d7fcd6c63f71720ade84c531fbc169"
}