David Vorick [ARCHIVE] on Nostr
2023-06-07 19:58:29

📅 Original date posted: 2017-03-31
📝 Original message: Sure, your math is pretty much entirely irrelevant because scaling systems
to massive sizes doesn't work that way.

At 400B transactions per year we're looking at block sizes of 4.5 GB, and a
database measured in petabytes. How much RAM do you need to process blocks
like that? Can you fit that much RAM into a single machine? No, you can't,
so you have to rework the code to operate on a computer cluster.
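
(To make the arithmetic concrete, here is a rough back-of-envelope sketch in
Python. The ~590-byte average transaction size is an assumed figure chosen to
be consistent with the 4.5 GB block estimate, not a number from the post.)

# Back-of-envelope sizing for 400 billion transactions per year.
# The average transaction size below is an illustrative assumption; real
# averages vary with transaction type and script complexity.

TX_PER_YEAR = 400e9
AVG_TX_BYTES = 590                   # assumed average transaction size
BLOCKS_PER_YEAR = 365.25 * 24 * 6    # one block every ~10 minutes

tx_per_block = TX_PER_YEAR / BLOCKS_PER_YEAR
block_bytes = tx_per_block * AVG_TX_BYTES
yearly_bytes = TX_PER_YEAR * AVG_TX_BYTES

print(f"transactions per block: {tx_per_block:,.0f}")          # ~7.6 million
print(f"block size:             {block_bytes / 1e9:.1f} GB")    # ~4.5 GB
print(f"raw tx data per year:   {yearly_bytes / 1e12:.0f} TB")  # ~240 TB

# Roughly 4.5 GB per block and ~240 TB of raw transaction data per year --
# before indexes, the UTXO set, and redundancy, which is how the working
# database grows into the petabyte range.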

Already we've hit a significant problem. You aren't going to rewrite
Bitcoin to do block validation on a computer cluster overnight. Further,
are storage costs the same once you're setting up clusters? Are bandwidth
costs the same? Are RAM and CPU costs the same? No, they aren't. Clusters
are a lot more expensive to set up per-resource because the machines need
to talk to each other and synchronize with each other, and you have a LOT
more parts, so you have to build in redundancies that aren't necessary in
single-machine deployments.

Also worth pointing out that peak transaction volumes are typically 20-50x
average transaction volumes. So your cluster can't just plan to handle 15k
transactions per second; you're really looking at more like 200k or even
500k transactions per second to handle peak volumes. And if it can't, you're
still going to see full blocks.
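
(The same arithmetic, assuming the 400B transactions per year from above and
treating the 20-50x multiplier as a planning range:)

# Average throughput implied by 400 billion transactions per year, scaled by
# the 20-50x peak-to-average multipliers mentioned above.

TX_PER_YEAR = 400e9
SECONDS_PER_YEAR = 365.25 * 24 * 3600

avg_tps = TX_PER_YEAR / SECONDS_PER_YEAR
print(f"average: {avg_tps:,.0f} tx/s")                    # ~12,700 tx/s

for multiplier in (20, 50):
    print(f"peak at {multiplier}x: {avg_tps * multiplier:,.0f} tx/s")

# Peak capacity lands in the hundreds of thousands of tx/s, which is what the
# cluster actually has to be provisioned for if blocks are not to fill up.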

You'd need a handful of experts just to maintain such a thing. Disks are
going to be failing every day when you are storing multiple PB, so you
can't just count a flat cost of $20/TB and expect that to work. You're
going to need redundancy and fault tolerance so that you don't lose the
system when a few of your hard drives all fail within minutes of each
other. And you need a way to rebuild everything without taking the system
offline.
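
(A sketch of the expected-failure arithmetic; the working-set size, replication
factor, drive capacity, and annualized failure rate below are all illustrative
assumptions, not figures from the post:)

# Expected drive failures for a petabyte-scale fleet. Every number here is an
# assumption; the point is that failures become a routine operational event
# you have to engineer around, not an exception you handle by hand.

RAW_DATA_PB = 3        # assumed working set, in petabytes
REPLICATION = 3        # assumed number of copies kept for redundancy
DRIVE_TB = 4           # assumed drive capacity (2017-era)
AFR = 0.03             # assumed annualized failure rate per drive

drives = RAW_DATA_PB * 1000 * REPLICATION / DRIVE_TB
failures_per_year = drives * AFR

print(f"drives in fleet:            {drives:,.0f}")
print(f"expected failures per year: {failures_per_year:.0f}")
print(f"mean days between failures: {365 / failures_per_year:.1f}")

# With thousands of drives you are replacing disks every few days; grow the
# fleet for indexes, headroom, and peak capacity and it approaches daily.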

This isn't even my area of expertise. I'm sure there are a dozen other
significant issues that one of the Visa architects could tell you about
when dealing with mission-critical data at this scale.

--------

Massive systems operate very differently and are much more costly per-unit
than tiny systems. Once we grow the block size large enough that a single
computer can't do all the processing by itself, we get into a world of
much harder, much more expensive scaling problems. Especially because we're
talking about a distributed system where the nodes don't even trust each
other. And transaction processing is largely non-parallel: you have to
check each transaction against every other transaction to make sure that
they aren't double-spending each other. This takes synchronization and
prevents 500 CPUs from all crunching the data concurrently. You have to be
a lot more clever than that to get things working and consistent.
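
(A minimal sketch of that serialization point: every validator has to check
and update one shared set of spent outputs, so the check-and-mark step can't
simply be fanned out across 500 CPUs. The data structures here are simplified
stand-ins, not Bitcoin's actual UTXO handling:)

# Minimal illustration of the serialization point in double-spend checking.
# Every validation thread must agree on a single set of already-spent
# outputs, so this step cannot simply run on 500 CPUs independently.

import threading

spent_outputs = set()          # shared view of which outputs are spent
spent_lock = threading.Lock()  # the synchronization referred to above

def try_apply(tx_inputs):
    """Atomically check that no input is already spent, then mark them spent.

    tx_inputs is a list of (txid, output_index) pairs -- a simplified
    stand-in for real transaction inputs.
    """
    with spent_lock:                        # the serialization point
        if any(outpoint in spent_outputs for outpoint in tx_inputs):
            return False                    # conflicting spend: reject
        spent_outputs.update(tx_inputs)     # mark the inputs as spent
        return True

# Two transactions spending the same output: only one can win, and deciding
# which one requires the shared, synchronized state above.
print(try_apply([("aa" * 32, 0)]))   # True
print(try_apply([("aa" * 32, 0)]))   # False -- double spend rejected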

When talking about scalability problems, you should ask yourself what other
systems in the world operate at the scales you are talking about. None of
them have cost structures in the six-figure range, and I'd bet (without
actually knowing) that none of them have cost structures in the seven-figure
range either. In fact, I know from working in a related industry that the
cost structures for the datacenters (plus the support engineers, plus the
software management, etc.) that do airline ticket processing are above $5
million per year for the larger airlines. Visa is probably even more
expensive than that (though I can only speculate).

Author Public Key
npub14h948ndawt32ps3y225sslch8344tmw8k5z0lxuxr6wq9tu6sl2qj9yjx0