cross-posted from: https://lemmy.ml/post/5768010
You know BOINC, the thing where you can donate your processing power to specific computational projects?
Is there anything like that, but for hosting platforms / services?
Something where you could say “I am willing to dedicate this much of my CPU, RAM and storage space to this project or this group of people”.
Say that I have a server that is more or less collecting dust, and I want to make it do something productive.
I am aware of YunoHost and alternatives, but that still requires me to choose which things to deploy and also somehow then offer that to the community.
As a certified lazy dude, I would much rather say “here’s the computer, use it for whatever you need the most”.
The issue I see with this is that my goodwill could be abused to host something inappropriate or even illegal, and then I would be held responsible. So there should be some transparency requirement or some other mechanism that helps prevent this.

And yes, self-hosting would not be the accurate term for this kind of distributed resource sharing. “Crowd-sourced self-hosting”? “Crowd-hosting” sounds like a good description for this phenomenon.
Some implementation of this probably already exists. Please provide any relevant names or links that would help me find more about this.
The problem here, to my understanding (context: I work in IT, but I’m not claiming to have a PhD in comp sci or whatever) is that something like BOINC works because the computation is highly self-contained. Basically you’re just working through a list of math problems.
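To make "self-contained" concrete, here's a hypothetical sketch (not BOINC's actual protocol, and the prime-counting task is just an invented stand-in): each work unit carries everything a volunteer machine needs, so it can be crunched in isolation and the single result sent back whenever it's done.

```python
# Hypothetical sketch of BOINC-style work units: each task is fully
# self-contained, so any volunteer machine can compute it in isolation.

def make_work_units(limit, chunk):
    """Split 'count primes below limit' into independent number ranges."""
    return [(lo, min(lo + chunk, limit)) for lo in range(2, limit, chunk)]

def crunch(unit):
    """A worker needs nothing but the unit itself -- no shared state,
    no database, no coordination with other workers."""
    lo, hi = unit
    return sum(1 for n in range(lo, hi)
               if all(n % d for d in range(2, int(n ** 0.5) + 1)))

# The server only merges the returned numbers; it doesn't care which
# machine did which range, in what order, or how long each one took.
units = make_work_units(10_000, 1_000)
total_primes = sum(crunch(u) for u in units)
```

The key property is that `crunch` reads nothing and writes nothing outside its own arguments, which is exactly what a Lemmy or Mastodon server can't promise.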
But something like, say, Lemmy or Mastodon isn’t really all that heavy on raw math. Instead it’s all about referencing items in databases, and dynamically assembling them into pages that are presented to a user. So it’s mostly about a) storing information, and b) accessing the stored information.
You can’t really offload that, because you have to be able to trust that wherever you’re putting the data, it’ll still be there when you need it. Not very easy when you might just turn your PC off at night… Or have a power outage. You also have to deal with the security and privacy issues involved in placing that data on random people’s computers.
Then you have the problem of connection speeds. Consumer internet connections typically have pathetic upload speeds. More generally, the biggest issue with any kind of distributed database is that you need very fast links between every component, which is why they’re run on datacenter interconnects rather than scattered across random home connections.
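A quick back-of-the-envelope on why the latency matters (the round-trip numbers below are rough assumptions, not measurements): a page that needs 50 sequential database lookups is imperceptible at datacenter latencies but takes seconds over residential links.

```python
# Illustrative latency arithmetic -- RTTs are assumed ballpark figures.
queries_per_page = 50       # sequential DB lookups to assemble one page

lan_rtt_ms = 0.2            # same-datacenter round trip (assumption)
home_rtt_ms = 50.0          # residential internet round trip (assumption)

lan_total_ms = queries_per_page * lan_rtt_ms     # 10 ms total
home_total_ms = queries_per_page * home_rtt_ms   # 2500 ms total

print(f"datacenter: {lan_total_ms} ms, crowd-hosted: {home_total_ms} ms")
```

Same query count, same data, roughly a 250x difference purely from where the components sit.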
Once you actually present the data to the user’s PC, most of the “processing” happens on their end, so you’re already donating as much power as you reasonably can.
Like I said, I’m not a hardcore computer scientist, so there might be something I’m missing here, but to my understanding there’s really no way that you could usefully leverage any kind of “borrowed” processing power for any sort of platform or service outside of the very narrow field of “crunching big numbers.”