ArchiveTeam is independent of the Internet Archive (IA), but most of what they collect does end up in the Wayback Machine. Storage space (like yours) isn't usually what they're after; it's the bandwidth and "virgin" IP addresses of the so-called "warriors", volunteer machines running AT's code to scrape targeted websites and upload the results to AT's servers, where they're collected and eventually pushed to IA.
How do I go about this? I’ve got 30TB available, 24/7 uptime. It’s not much but it might help.
Thanks, I'll look into it.
I already have a full backup of the entire English Wikipedia using Kiwix. It’s surprisingly small. Around 100GB.
In case you haven’t looked into it yourself yet…
Check out https://tracker.archiveteam.org/ for current projects
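If you want to put that uptime and bandwidth to use, the usual entry point is running an ArchiveTeam Warrior, which also ships as a Docker image. A minimal sketch (image name and port per AT's wiki at the time of writing, so verify it's still current before running):

```shell
# Run the ArchiveTeam Warrior in Docker (image name per the AT wiki; confirm before use).
# The web UI for picking a project comes up on http://localhost:8001
docker run --detach \
  --name archiveteam-warrior \
  --publish 8001:8001 \
  --restart unless-stopped \
  atdr.meo.st/archiveteam/warrior-dockerfile
```

From the web UI you pick a project (or "ArchiveTeam's choice") and set a concurrency limit; the warrior then handles downloading and uploading to AT's staging servers on its own.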