Not sure you’d be able to find 100k people to host a 1TB server though. Plus, redundancy would be better anyway, since it gives you more sources to download from when a node is slow or has gone offline.
Yes, it’s a big ask, because it’s a lot of data. Any distributed solution will require either a large number of people or a huge commitment of storage capacity per node. Both 100,000 people and 1TB per node are a lot to ask for, but that’s basically the minimum viable level for that much data (roughly 100PB in total, with no redundancy to spare). Ten million people each committing 50GB would be great: that works out to about five full copies, enough redundancy that you could lose 80% of the nodes before losing any data. But that’s not a realistic level of participation to expect.
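A quick sanity check on those numbers, as a rough sketch. It assumes the archive is about 100PB (the size implied by 100,000 nodes at 1TB each with a single copy), and it ignores real-world concerns like uneven placement and erasure coding; the function name and figures are just illustrative.

    # Back-of-the-envelope check of the redundancy claim above.
    # Assumes an archive of roughly 100 PB; all figures are illustrative.

    DATASET_PB = 100  # assumed total archive size, in petabytes

    def coverage(nodes, tb_per_node, dataset_pb=DATASET_PB):
        """Return how many full copies of the dataset the swarm holds."""
        total_pb = nodes * tb_per_node / 1000  # 1 PB = 1000 TB
        return total_pb / dataset_pb

    # Minimum viable: 100k nodes at 1 TB each -> exactly one copy, no slack.
    print(coverage(100_000, 1.0))        # 1.0

    # 10M nodes at 50 GB (0.05 TB) each -> five copies; in the ideal case
    # you could lose 80% of the nodes and still hold one full copy.
    print(coverage(10_000_000, 0.05))    # 5.0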