2023-06-08 16:08:13
- reply
Pretty hyperbolic. "A few evil verifiers" means you don't use them, clients delist them, etc. Verifiers would be incentivized by performance and trust, and with an open data format, their performance is transparent. It would probably not be used for anything security sensitive. It could be used for initial hydration of an app, and then a lazy fetch could confirm or deny it. It would probably never be used for something like, say, acceptance of an event on a relay. One connection to your chosen (fictional) historical relay and a filter on n verifiers, with custom acceptance logic, for example: if the verifiers do not unanimously agree, fall back to a standard fetch.
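A minimal sketch of that acceptance logic, assuming a hypothetical `queryVerifier` call and a `standardFetch` fallback (neither exists in any current spec; both are placeholders):
```
// Hypothetical sketch: query n verifiers and fall back to a standard
// relay fetch when they do not unanimously agree. queryVerifier and
// standardFetch are assumptions, not part of any existing spec.
type VerifierResult = { hash: string };

async function fetchWithVerifiers(
  verifiers: string[],
  query: object,
  queryVerifier: (url: string, query: object) => Promise<VerifierResult>,
  standardFetch: () => Promise<unknown>
): Promise<unknown> {
  const results = await Promise.all(verifiers.map((v) => queryVerifier(v, query)));

  const unanimous =
    results.length > 0 && results.every((r) => r.hash === results[0].hash);

  if (!unanimous) {
    // Custom acceptance logic: any disagreement means the verified
    // result is not trusted, so fall back to a standard fetch.
    return standardFetch();
  }
  return results[0];
}
```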
NIP-05 would probably need to be amended for batching. That would incrementally reduce the number of connections, but it would still require multiple requests for multiple NIP-05 providers.
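As a rough illustration of what such an amendment might look like: the `?names=` query parameter and the per-domain grouping below are assumptions, not part of NIP-05 today (which only defines `?name=<local-part>`).
```
// Hypothetical batched NIP-05 lookup: group identifiers by domain so
// there is one request per provider, then query an assumed batched
// endpoint. The ?names= parameter is a hypothetical amendment.
async function batchedNip05Lookup(identifiers: string[]): Promise<Record<string, string>> {
  const byDomain = new Map<string, string[]>();
  for (const id of identifiers) {
    const [name, domain] = id.split("@");
    if (!name || !domain) continue;
    const names = byDomain.get(domain) ?? [];
    names.push(name);
    byDomain.set(domain, names);
  }

  const pubkeys: Record<string, string> = {};
  for (const [domain, names] of byDomain) {
    // One request per NIP-05 provider, regardless of how many names it hosts.
    const res = await fetch(
      `https://${domain}/.well-known/nostr.json?names=${names.join(",")}`
    );
    const json = await res.json();
    for (const name of names) {
      const pubkey = json?.names?.[name];
      if (pubkey) pubkeys[`${name}@${domain}`] = pubkey;
    }
  }
  return pubkeys;
}
```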
2023-05-15 16:06:20
- reply
Probably not; data can always be lost because the protocol is lightweight and simple, and redundancy/replication is intentionally omitted from the protocol. Additionally, it's not impossible for the network to have private splinters that are separate from the rest of nostr, intentionally or otherwise. Nostr.band cannot sync notes from relays it does not know about. While it's possible that nostr.band, as a centralized service, will capture the majority of the network, it's debatable whether that would remain feasible, practical and/or economical over time. It would be extensive but probably incomplete. Also, some have taken it upon themselves, for better or worse, to replicate data between relays.
2023-03-21 21:14:53
- reply
1. Umbrel: user friendly, low technical knowledge required. Has a GUI and a vibrant marketplace.
2. Raspiblitz: easy to set up, but more technical. Requires using the CLI for many operations.
Both run on cheap hardware (a Raspberry Pi) with low power consumption, and both resolve over Tor. To be accessible via clearnet, use tunnelsats.
I've spent today refactoring `nostrwatch-js` and combing the logs on the daemons.
1. I found a bug in `nostrwatch-js` that caused problems for slow relays and resulted in latencies not being calculated correctly. This is an edge case, but I found 6 relays affected by it in production.
2. I found a bug in `nostrwatch-js` that caused write checks to fail for slow relays. (This is **not** related to paid relay write checks; those should fail.)
3. I found a pretty big issue in the daemons, where the jobs overlapped after ~7 days of running. I haven't been able to identify whether this caused any issues with the data yet, but it definitely caused the daemons to crash periodically (see the sketch after this list).
4. It is possible the improvements to `nostrwatch-js` mentioned above will resolve issues some relay operators have reported related to Uptime, but I cannot confirm that with certainty at the moment.
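For context on the overlap issue in item 3, here is a minimal illustration (not the actual daemon code) of the kind of guard that keeps a recurring job from starting while the previous run is still in flight:
```
// Illustrative sketch: an in-flight flag so a recurring job skips a
// tick when the previous run has not finished, instead of letting
// runs pile up and overlap.
function scheduleNonOverlapping(job: () => Promise<void>, intervalMs: number) {
  let running = false;
  return setInterval(async () => {
    if (running) {
      // Previous run still active; skip this tick rather than
      // starting a second, overlapping run.
      return;
    }
    running = true;
    try {
      await job();
    } finally {
      running = false;
    }
  }, intervalMs);
}
```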
I have long wanted to refactor `nostrwatch-js` (formerly `nostr-relay-inspector`) and have spent part of yesterday and today doing so. I also added features that are in line with the goals of `[email protected]`. `0.3` provides more data to operators and completely refactors the relay detail page.
The improvements to `nostrwatch-js` will be rolled out to the daemons tomorrow, and to nostr.watch with the `0.3` release, or possibly as a patch to `0.2` if `0.3` takes longer than anticipated.
2023-03-01 12:23:45
- reply
NM. So it slices out the first item. Each relay is an item in an array.
```
[
'relays',
'wss://relay.damus.io',
'wss://eden.nostr.land',
'wss://relay.snort.social',
'wss://offchain.pub',
'wss://nos.lol',
'wss://brb.io',
'wss://nostr.mutinywallet.com',
'wss://relay.nostrica.com',
'wss://relay.orangepill.dev/',
'wss://puravida.nostr.land'
]
```
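Concretely, assuming that array is the tag in question, slicing out the first item just drops the `'relays'` label and leaves the relay URLs:
```
// The first element is the 'relays' label, so the relay URLs are
// everything after index 0.
const tag = [
  'relays',
  'wss://relay.damus.io',
  'wss://eden.nostr.land'
];

const relayUrls = tag.slice(1);
// relayUrls => ['wss://relay.damus.io', 'wss://eden.nostr.land']
```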