2024-05-23 03:19:03

iefan 🕊️ on Nostr:

I've just built an AI assistant that performs voice recognition and text-to-speech directly on the device. It's using a fine-tuned Google Gemini Flash model, which is fast and works great.
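
For the voice side in the browser, one option is the standard Web Speech API. Here's a minimal sketch of a voice round-trip (not necessarily how the assistant above does it); note that SpeechRecognition is vendor-prefixed in some browsers and, depending on the platform, may use a network service rather than running fully on-device:

```typescript
// Minimal browser voice round-trip sketch using the Web Speech API.
// SpeechRecognition is vendor-prefixed in some browsers and may not be
// fully on-device, depending on the platform.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

const recognizer = new SpeechRecognitionImpl();
recognizer.lang = "en-US";
recognizer.interimResults = false;

recognizer.onresult = (event: any) => {
  const transcript: string = event.results[0][0].transcript;

  // Hand the transcript to the assistant (placeholder), then speak the reply.
  const reply = `You said: ${transcript}`;
  speechSynthesis.speak(new SpeechSynthesisUtterance(reply));
};

// Most browsers require this to be triggered by a user gesture (e.g. a button).
recognizer.start();
```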

I know, Google, right? But what if we replace that model with an open-source one, like Phi-3 or Gemma-2b, that can also run locally on a device, even a phone? It might be a bit slower and more battery-intensive, but in return, you get a completely private AI assistant that can run offline.

The fun part is I can make it into a PWA, so it can run on any device—Android, iOS, and PC. Plus, it will have proper Nostr integration.
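
The offline/PWA part boils down to a web app manifest plus a service worker that caches the app shell (and, once downloaded, the model files). A rough sketch of the service-worker side, with illustrative cache and file names that aren't from this post:

```typescript
// sw.ts: service worker sketch that caches the app shell so the assistant loads offline.
// Cache name and file list are illustrative only.
const CACHE = "assistant-shell-v1";
const SHELL = ["/", "/index.html", "/app.js", "/manifest.webmanifest"];

self.addEventListener("install", (event: any) => {
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(SHELL)));
});

self.addEventListener("fetch", (event: any) => {
  // Cache-first, falling back to the network; downloaded model weights can be
  // added to the same cache so later runs work without a connection.
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```

The page itself only needs to register it once with navigator.serviceWorker.register("/sw.js").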

You can check how these models might perform on your device using WebGPU in your browser. We'll use this tech, and even better options as they become available.

LLM demo in browser: https://webllm.mlc.ai/
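
For a sense of what driving one of these in-browser models looks like, here's a rough sketch using MLC's @mlc-ai/web-llm package (the library behind the demo above). The model ID comes from WebLLM's prebuilt list and the API may shift between versions, so treat it as illustrative:

```typescript
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Downloads and caches the weights once, compiles for WebGPU, then runs
// entirely in the browser. The model ID is taken from WebLLM's prebuilt list
// and may change between releases.
const engine = await CreateMLCEngine("Phi-3-mini-4k-instruct-q4f16_1-MLC", {
  initProgressCallback: (progress) => console.log(progress.text),
});

const reply = await engine.chat.completions.create({
  messages: [
    { role: "system", content: "You are a private, offline voice assistant." },
    { role: "user", content: "Summarize my last three notes." },
  ],
});

console.log(reply.choices[0].message.content);
```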

I'll also add PDF and vision capabilities. If I'm not making a big miscalculation, a single toggle should let you run Stable Diffusion in the same PWA: offline, local, and completely private.

Let me know if you have any suggestions or recommendations for models or features. I'll share an initial version soon, and from there, we can improve it together.
Author Public Key: npub1cmmswlckn82se7f2jeftl6ll4szlc6zzh8hrjyyfm9vm3t2afr7svqlr6f