Walrus vs Strawman I: Why Full Replication Wastes Massive Bandwidth
Full replication—storing complete copies across all validators—is the strawman approach to decentralized storage. Every validator holds every byte. Conceptually simple. Cryptographically straightforward. And monumentally wasteful.
The bandwidth costs accumulate silently. Writing a blob to n validators means transmitting the complete data n times. Reads are no lighter: retrieving a blob means downloading every byte from at least one validator. Scale this to millions of blobs on networks with thousands of validators, and bandwidth, not storage, becomes the limiting cost factor.
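The arithmetic above can be made concrete with a back-of-the-envelope sketch. The blob size and validator count below are hypothetical parameters chosen for illustration, not measurements from any real network:

```python
# Back-of-the-envelope bandwidth cost of full replication.
# All parameters are illustrative assumptions.

def write_bandwidth_bytes(blob_size: int, n_validators: int) -> int:
    """Full replication: the writer transmits the complete blob to every validator."""
    return blob_size * n_validators

def read_bandwidth_bytes(blob_size: int) -> int:
    """A reader downloads the full blob from at least one validator."""
    return blob_size

GiB = 1024 ** 3
blob = 1 * GiB          # hypothetical 1 GiB blob
validators = 1000       # hypothetical validator count

total_write = write_bandwidth_bytes(blob, validators)
print(f"Writing one 1 GiB blob to {validators} validators "
      f"moves {total_write // GiB} GiB over the network")
```

The point is the multiplier: write bandwidth scales linearly with the validator count, so a thousand-validator network pays a thousand-fold cost on every write.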
Filecoin and Arweave accepted this tradeoff knowingly. Full replication made security arguments trivial and implementation simple. The cost was acceptable when storage was expensive and decentralized networks were experimental. That era has ended. Modern applications routinely handle terabyte-scale datasets where full replication multiplies bandwidth costs by 25 or more.
The bandwidth waste compounds through redundancy. Each blob is not only replicated in full; the redundancy mechanisms layered on top of it are replicated too, since every validator stores every byte of every mechanism. The system pays the full multiplier at every layer.
Walrus rejects this strawman not on principle but out of pragmatism. When storage is cheap and bandwidth is expensive, replicating everything becomes economically irrational. Smarter encoding delivers security at a fraction of the bandwidth cost.
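To see why smarter encoding wins, compare total transmitted bytes under full replication against a generic k-of-n erasure code, where any k of the n shards suffice to reconstruct the blob. This is a sketch of the general technique (e.g. Reed-Solomon), not of Walrus's actual encoding scheme, and the n and k values below are assumed for illustration:

```python
# Compare write bandwidth: full replication vs. a k-of-n erasure code.
# Parameters are illustrative assumptions, not Walrus's real configuration.

def replication_cost(blob_size: float, n_validators: int) -> float:
    """Full replication transmits the whole blob to every validator."""
    return blob_size * n_validators

def erasure_cost(blob_size: float, k: int, n: int) -> float:
    """k-of-n erasure coding: each validator receives one shard of size
    blob_size / k, and n shards are sent in total, so the write transmits
    blob_size * (n / k) bytes."""
    return blob_size * (n / k)

blob, n, k = 1.0, 1000, 334   # hypothetical: any ~1/3 of shards reconstruct
print(f"full replication: {replication_cost(blob, n):.0f}x blob size")
print(f"erasure coded:    {erasure_cost(blob, k, n):.1f}x blob size")
```

With these assumed parameters the erasure-coded write moves roughly 3x the blob size instead of 1000x, while still surviving the loss of most validators, which is the "fraction of the bandwidth cost" the argument turns on.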

