2024-06-04 01:50:36

liminal 🦠 on Nostr:

Now that's a statement I can't relate to more 😅

I first encountered LDA on a previous project -> "find k topics in n documents" sounds great until you realize you need to be pretty confident about how many topics there actually are. Not sure I could justify imposing that assumption, I moved on to hierarchical LDA, but in both cases I just didn't know how to interpret the clusters.
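For context, a minimal sketch of that "pick k upfront" workflow with scikit-learn; the toy corpus and n_components=2 are illustrative choices, not anything from the project:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "cats purr and chase mice",
    "dogs bark and fetch sticks",
    "stocks rallied as markets opened higher",
    "bond yields fell after the rate decision",
]

# Bag-of-words counts: LDA works on word co-occurrence statistics.
counts = CountVectorizer(stop_words="english").fit_transform(docs)

# You must commit to a topic count k before seeing any clusters.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # shape: (n_docs, k)
print(doc_topics.round(2))
```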

Embedding models, being trained primarily on natural language (BERT, GPT), just need to associate co-occurring words with each other. You get a high-dimensional vector per document, and the idea is that if concepts are similar, they'll be close to each other in that vector space. So you can cluster spatially without needing to assume how many clusters there are 🙂
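A sketch of that embedding route on the same toy corpus, assuming sentence-transformers and scikit-learn >= 1.3 are installed; the model name is one common choice, not something the post specifies:

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import HDBSCAN

texts = [
    "cats purr and chase mice",
    "dogs bark and fetch sticks",
    "stocks rallied as markets opened higher",
    "bond yields fell after the rate decision",
]

# Each text becomes a high-dimensional vector; similar meanings
# end up near each other in the embedding space.
vectors = SentenceTransformer("all-MiniLM-L6-v2").encode(texts)

# No k parameter: HDBSCAN finds dense regions on its own and
# labels points that fit no cluster as -1.
labels = HDBSCAN(min_cluster_size=2).fit_predict(vectors)
print(labels)
```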