Lemmy.world is very popular and one of the largest instances. They do a great job with moderation, and there are a lot of positives with lemmy.world.

Over the last month, federation issues have become more and more drastic. Some comments from lemmy.world take days to synchronize with other instances, or never arrive at all.

The current incarnation of ActivityPub as implemented in Lemmy cannot keep up with the activity rate of a very popular instance. So now lemmy.world is becoming an island. This is bad because it fractures the discussion and encourages more centralization on lemmy.world, which actually weakens the ability of the federated universe to survive a single instance failing or just turning off.

For the time being, I encourage everyone to post to communities hosted on other instances so that the conversation can be consistently accessed by people across the entire Fediverse. I don’t think it’s necessary to move your user account: as I understand it, your client posts to a community’s host instance when you make a comment in that community.

Update: other threads about the delays.

Great writeup: https://lemmy.world/post/13967373

Other people having the same issue:

  • https://lemmy.world/post/15668306
  • https://aussie.zone/comment/9155614
  • https://lemmy.world/post/15654553
  • https://lemmy.world/post/15634599
  • https://aussie.zone/comment/9103641

  • jet@hackertalks.com (OP) · 6 months ago

    If the front end is a single website, then it can be taken down, and provides a central weak point.

    • Kecessa@sh.itjust.works · 6 months ago (edited)

      Not really if in the background everything is divided over a bunch of servers and backed up by other servers

      It’s the only way to solve the centralization of users, take the option away from them and handle it in the background.

      • jet@hackertalks.com (OP) · 6 months ago

        Let’s say whoever is running the front end doesn’t like a community and blocks it… How do we prevent that?

        • Kecessa@sh.itjust.works · 6 months ago

          You don’t have “somebody running the front end” though, it’s all done by the people providing hosting services.

          Think “crypto philosophy as a message board,” but instead of having everyone sync the whole history, you split all data randomly in a way that guarantees it is stored on three servers at all times.

          Heck, you could also have multiple front ends if you wanted, all pulling and pushing data to the same servers, and this way you could log in from any of them. The front end would only have an influence on UI/UX; in the background the data would always come from the same places, and for this reason the front-end dev wouldn’t have the power to block communities.
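A minimal sketch of the “guaranteed to be on three servers” idea above, using rendezvous (highest-random-weight) hashing: every front end independently computes the same owners for an item, with no central coordinator. The node names and replica count are hypothetical, chosen just to illustrate the scheme.

```python
import hashlib

SERVERS = ["node-a", "node-b", "node-c", "node-d", "node-e"]  # hypothetical hosts
REPLICAS = 3  # each item lives on three servers, as proposed above

def replica_set(item_id: str, servers=SERVERS, k=REPLICAS):
    """Rank servers by a hash of (server, item); the top k own the item.
    Any client anywhere computes the same answer with no coordination."""
    ranked = sorted(
        servers,
        key=lambda s: hashlib.sha256(f"{s}:{item_id}".encode()).hexdigest(),
        reverse=True,
    )
    return ranked[:k]
```

A nice property of this over `hash(item) % len(servers)`: when a server leaves, only the items it owned get re-placed onto the next-best server; everything else stays where it was.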

          • jet@hackertalks.com (OP) · 6 months ago

            Ok. So just like Lemmy but communities are spread using some hash table over multiple existing nodes?

            • Kecessa@sh.itjust.works · 6 months ago

              Yep, divide everything so no one has real power: the users decide what they want and don’t want on their feed, the hosts decide whether they want to host NSFW content (and the users can make the same choice), and the users don’t have to decide what instance they register with; their credentials are just stored in the database.

              For the front end you then have two ways to deal with it: a single front end where the hosts “vote” on how to deal with it (crypto style), or the hosts are just a database that anyone can access, which allows anyone to create a front end…

              • jet@hackertalks.com (OP) · 6 months ago

                Your second scenario where the hosts decide and the users choose the hosts is what Lemmy is doing now

                • Kecessa@sh.itjust.works · 6 months ago (edited)

                  Not really though; with Lemmy the host and front end are the same. You access Lemmy from hackertalks, I access it from sh.itjust.works. Both our respective instances host our respective data and provide us a UI to access their instance, and each could decide to only give us access to the content that it hosts, as some instances already do.

                  What I’m talking about is front-end devs not having to host any of the content themselves: just accessing the database hosted by others, showing that info in the UI they developed, and pushing changes to the database when users sign up or post comments.

                  The front end doesn’t have control over where the info is stored and doesn’t store anything locally; the back end doesn’t have control over who’s pulling and pushing data. A host can choose to filter out NSFW content from their own servers, but that just means the system won’t pick them to host that content and will instead pick servers that don’t mind hosting it.

                  In this way the hosting is like any hosting service but completely decentralized: the data is open to all, and no single host can wipe it because of the backups (contrary to Lemmy, where if an instance disappears all the content it hosted is gone).
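The front-end/back-end split described here can be sketched as follows. The in-memory dicts stand in for independently operated storage hosts, and all names are hypothetical; the point is that any “front end” calling these functions sees the same data and keeps nothing itself.

```python
import hashlib

STORES = {"node-a": {}, "node-b": {}, "node-c": {}}  # stand-ins for storage hosts

def owners(key: str, k: int = 2):
    """Deterministically pick k replica hosts for a key (rendezvous hashing)."""
    return sorted(STORES, key=lambda s: hashlib.sha256(f"{s}:{key}".encode()).hexdigest())[:k]

def put(key: str, value: str):
    for node in owners(key):      # write to every replica; the front end keeps nothing
        STORES[node][key] = value

def get(key: str):
    for node in owners(key):      # read from the first replica that still has the key
        if key in STORES[node]:
            return STORES[node][key]
    return None
```

Because every front end resolves the same replica set, losing one host (or one front end) loses nothing, which is the claimed contrast with a Lemmy instance disappearing.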

                  • jet@hackertalks.com (OP) · 6 months ago (edited)

                    So the data would be stored in something like IPFS? Or Ceph?

                    Ceph has issues because the metadata nodes needed for disk access are central points of control.

                    IPFS could work, but the cost of hosting goes up exponentially. But yeah, you could build Lemmy on top of IPFS.
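A toy illustration of the content addressing that makes IPFS-style storage workable among untrusting hosts (a plain dict here, not the real IPFS API): the key is the hash of the content, so any host can prove the data it serves is authentic, and identical content deduplicates automatically.

```python
import hashlib

store: dict[str, bytes] = {}  # stand-in for blocks scattered across many hosts

def add(content: bytes) -> str:
    """Store content under its own hash (a stand-in for a real IPFS CID)."""
    cid = hashlib.sha256(content).hexdigest()
    store[cid] = content
    return cid

def cat(cid: str) -> bytes:
    """Fetch and verify: a tampered or wrong block fails its own hash check."""
    data = store[cid]
    if hashlib.sha256(data).hexdigest() != cid:
        raise ValueError("block does not match its address")
    return data
```

No host can silently alter a block without changing its address, though, as the comment notes, this says nothing about the cost of keeping many copies hosted.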

    • RubberDuck@lemmy.world · 6 months ago

      Like RAID 6 or higher: data is distributed with built-in redundancy to make up for nodes dropping off.

      But then community admins would need to back up their communities, as RAID is not backup.

      I just don’t know how that would work with the data… restoration, distribution across new nodes, etc. But then… I’m not a dev.

      • jet@hackertalks.com (OP) · 6 months ago

        RAID doesn’t work in this context because we’re assuming we have antagonistic peers, so central control of any element gives away control of the whole system.

        In a redundant array of inexpensive disks, the assumption is that there’s a benevolent administrator organizing everything.

        • RubberDuck@lemmy.world · 6 months ago (edited)

          I get that… sort of.

          In RAID the admin supplies the disks, creates the pools, and the RAID platform does the rest. Is this really different?

          In the analogy, an admin starts a pool of 1, other admins join their nodes into the pool, and the system handles distributing content across the nodes in the pool. No RAID-level selection, as the system aims for optimal redundancy.

          I just expect this setup to run into similar issues around equitable data and load distribution, as not all nodes will be equal in power, storage capacity, bandwidth, etc., something actual RAID arrays should not have…

          But it’s cool to think about.