Please. Captcha by default. Email domain filters. Auto-block federation from servers that don’t comply. By default. Urgent.

meme not so funny

And yes, to address some comments: this post is being upvoted by bots. A single computer was needed, not “thousands of dollars” spent.

  • HTTP_404_NotFound@lemmyonline.com

    Sigh…

    All of those ideas are bad.

    1. Captchas are already pretty weak at combating bots. It’s why reCAPTCHA and the others were invented. The people who run bots spend lots of money for their bots to… bot. They have access to quite advanced modules for decoding captchas. As well, they pay kids in India and Africa pennies to just create accounts on websites.

    I am not saying captchas are completely useless; they do block the lowest-hanging fruit currently, that being most of the script kiddies.

    2. Email domain filters.

    Issue number one has already been covered below/above by others. You can use a single Gmail account to register a basically unlimited number of accounts.
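    The Gmail trick works because Gmail ignores dots in the local part and discards anything after a plus sign, so a naive per-address check misses every alias. A minimal sketch of a normalizer a registration filter could use (illustrative only, not Lemmy’s actual code):

```python
def normalize_email(address: str) -> str:
    """Collapse common Gmail aliases to one canonical address.

    Gmail ignores dots in the local part and discards any "+tag"
    suffix, so user.name+lemmy@gmail.com delivers to username@gmail.com.
    """
    local, _, domain = address.lower().partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.split("+", 1)[0].replace(".", "")
    return f"{local}@{domain}"

# These three all collapse to the same canonical address:
aliases = ["User.Name@gmail.com", "username+bot1@gmail.com", "u.ser.name+x@gmail.com"]
```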

    Issue number two: spammers LOVE to use Office 365 for spamming. Most of the spam I find actually comes from *.onmicrosoft.com inboxes. It’s quick for them to spin one up on a trial, and by the time the trial is over, they have moved on to another inbox.

    3. Auto-blocking federation for servers that don’t follow the above two broken rules.

    This is how you destroy the platform. When you block legitimate users, the users will think the platform is broken, because none of their comments are working and they can’t see posts properly.

    They don’t know this is due to admins defederating servers. All they see is broken content.

    At this time, your best option is admin approvals, combined with keeping tabs on users.

    If you notice an instance is harboring spammers, work with its admins. Let’s use my instance as an example: I have my contact information right in the sidebar. If you notice there is spam, WORK WITH US, and we will help resolve the issue.

    I review my reports. I review spam on my instance. None of us are going to be perfect.

    There are very intelligent people who make lots of money creating “bots” and “spam”. NOBODY is going to stop all of it.

    The only way to resolve this, is to work together, to identify problems, and take action.

    Nuking every server that doesn’t have captcha enabled is just going to piss off the users and ruin this movement.

    One possible thing that might help: an easy way to list the registered users on a server. I noticed that actually… doesn’t appear to be easily accessible, without hitting REST APIs or querying the database.
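    For what it’s worth, the underlying query is simple once you are in the database. A sketch using an in-memory SQLite stand-in; the `person` table and its columns here are assumptions loosely based on Lemmy’s PostgreSQL schema, so verify against your version:

```python
import sqlite3

# Stand-in for Lemmy's PostgreSQL database. The `person` table and its
# columns (name, local, published) are assumed, not taken from Lemmy's code.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE person (name TEXT, local INTEGER, published TEXT)")
db.executemany(
    "INSERT INTO person VALUES (?, ?, ?)",
    [
        ("alice", 1, "2023-06-01"),           # local user
        ("bob", 1, "2023-06-14"),             # local user
        ("spammer@remote", 0, "2023-06-15"),  # federated user, not registered here
    ],
)

# List locally registered users, newest first.
local_users = db.execute(
    "SELECT name, published FROM person WHERE local = 1 ORDER BY published DESC"
).fetchall()
```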

    • Dessalines@lemmy.ml

      This is all 100% correct. People have already written captcha-bypassing bots for lemmy, we know from experience.

      The only way to stop bots, is the way that has worked for forums for years: registration applications. At lemmy.ml we historically have blocked any server that doesn’t have them turned on, because of the likelihood of bot infiltration from them.

      Registration applications have 100% stopped bots here.

      • eyy@lemm.ee

        You’re right that captchas can be bypassed, but I disagree that they’re useless.

        Do you lock your house? Are you aware that most locks can be picked and windows can be smashed?

        captchas can be defeated, but that doesn’t mean they’re useless - they increase the level of friction required to automate malicious activity. Maybe not a lot, but along with other measures, it may make it tricky enough to circumvent that it discourages a good percentage of bot spammers. It’s the “Swiss cheese” model of security.

        Registration applications stop bots, but it also stops legitimate users. I almost didn’t get onto the fediverse because of registration applications. I filled out applications at lemmy.ml and beehaw.org, and then forgot about it. Two days later, I got reminded of the fediverse, and luckily I found this instance that didn’t require some sort of application to join.

        • HTTP_404_NotFound@lemmyonline.com

          Don’t read the first sentence and then gloss over the rest.

          I am not saying captchas are completely useless; they do block the lowest-hanging fruit currently, that being most of the script kiddies.

      • Stovetop@lemmy.ml

        But even then, what’s to stop an army of bots from just ChatGPTing their way through the application process?

        I went to a website to generate a random username, picked the first option of polarbear_gender, and then just stuck that and the application questions for lemmy.ml into ChatGPT to get the following:

        I want to join Lemmy.ml because I’m really into having meaningful discussions and connecting with others who have similar interests. Lemmy.ml seems like a great platform that fosters a diverse exchange of ideas in a respectful way, which I like.

        When it comes to the communities I’d love to be a part of, I’m all about ones that focus on environmental conservation, wildlife preservation, and sustainability. Those topics really resonate with me, and I’m eager to jump into discussions and learn from fellow passionate folks.

        As for my username, I chose it because I’ve got respect for polar bears and how they live with the environmental challenges they face. And throwing in “gender” is just my way of showing support for inclusivity and gender equality. Building a more just and fair society is important to me.

        I don’t know the full criteria that people are approved or declined for, but would these answers pass the sniff test?

        I’m just worried that placing too much trust in the application process contributes to a false sense of security. A community that is supposedly “protected” from bots can be silently infiltrated by them and cause more damage than in communities where you can either reasonably assume bots are everywhere, or there are more reliable filtering measures in place than a simple statement of purpose.

        • HTTP_404_NotFound@lemmyonline.com

          As I said in my post-

          There are very intelligent people who make lots of money creating “bots” and “spam”. NOBODY is going to stop all of it.

          The only way to resolve this, is to work together, to identify problems, and take action.

          If I decide I want to write spam bots for Lemmy, there isn’t much that is going to stop me. Even approvals aren’t hard to work around. Captchas are comically easy to get past. Registered emails? Not a problem either. I can make a single valid email and then re-use it once on every single instance. Writing a script that waits for approvals is quite easy.

            • imaqtpie@sh.itjust.works

              Btw, what’s the deal with your instance? I noticed you’re from one of the original servers from 4 years ago. Do you know why it was founded or can you direct me to some information?

              I’m from the reddit migration, although a bit more experienced than most (having been here over 2 weeks makes me a unicorn on my server).

              I’d like to spread some more knowledge about the history of the platform and what kind of different servers are out there. Problem is, I don’t have any knowledge! Help!

                • imaqtpie@sh.itjust.works

                  Ah, I see. So tchncs.de hosts other federated platforms, and someone probably decided to set up a Lemmy site when it was originally created 4 years ago. But it was likely pretty empty until the past couple of weeks.

                  Ok, good to know. I don’t really know about XMPP/Jabber, but I like what I see on Wikipedia. Thanks!

        • HTTP_404_NotFound@lemmyonline.com

          I admin a decent-sized Facebook group, currently at 10.8k members.

          Luckily, the Facebook group is specifically for people living in a certain geographical area. As such, I am able to ask questions only somebody living in the area would know.

          You would be surprised: there are LOTS of spammers who answer all of the questions (just getting the wrong answers on the area-specific ones).

          Duct-cleaning spam has been a real problem. lmao.

      • mohawk@kbin.social

        Wait, what’s the difference between the suggested auto-block and you historically blocking instances without applications? Are there other criteria you use to determine the block?

        Not saying I know the answer, just curious.

    • eyy@lemm.ee

      Haven’t you heard of the “Swiss cheese” model of security?

      The best way to ensure your server is protected is to unplug it from the Internet and put it in an EMF-shielded Faraday cage.

      There’s always a tradeoff between security, usability and cost.

      captchas can be defeated, but that doesn’t mean they’re useless - they increase the level of friction required to automate malicious activity. Maybe not a lot, but along with other measures, it may make it tricky enough to circumvent that it discourages a good percentage of bot spammers.

    • sugar_in_your_tea@sh.itjust.works

      I disagree. I think the solution is moderation. Basically, have a set of tools that identify likely bots, and let human moderators make the call.

      If you require admins to manually approve accounts, admins will either automate approvals or stop approving. That’s just how people tend to operate imo. And the more steps you put between people wanting to sign up and actually getting an account, the fewer people you’ll get to actually go through with it.

      So I’m against applications. What we need is better moderation tools. My ideal would be a web of trust: you get more privileges the more trusted people vouch for you. I think that trust should start from the admins, then extend to the mods, and then to regular users.

      But lemmy isn’t that sophisticated. Maybe it will be some day, IDK, but it’s the direction I’d like to see things go.

      • tetris11@lemmy.ml

        HackerNews does something similar, where new users don’t have the ability to downvote until they have earned enough upvotes from other users.

        We could extend that, and literally not allow upvotes to properly register if the user is too new. The vote would still show on the comment/post, but the ranking of the comment/post will only be influenced by seasoned users. That way, users could scroll down a thread, see a very highly upvoted comment bang in the middle, and think for themselves “huh, probably bots”.
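        That two-score idea could be sketched like this; the age threshold and field names are made up for illustration:

```python
from dataclasses import dataclass

MIN_ACCOUNT_AGE_DAYS = 30  # hypothetical cutoff for a "seasoned" account

@dataclass
class Vote:
    value: int           # +1 or -1
    voter_age_days: int  # age of the voting account, in days

def displayed_score(votes: list) -> int:
    """Score shown on the comment/post: every vote is visible."""
    return sum(v.value for v in votes)

def ranking_score(votes: list) -> int:
    """Score used to sort the thread: only seasoned accounts count."""
    return sum(v.value for v in votes if v.voter_age_days >= MIN_ACCOUNT_AGE_DAYS)

# One old account and a 90-day account vote honestly; two brand-new accounts inflate:
votes = [Vote(+1, 400), Vote(-1, 90), Vote(+1, 2), Vote(+1, 1)]
```

        A comment showing a high displayed score but a flat ranking score is exactly the “huh, probably bots” signal described above.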

        Very hierarchical solution, heavily reliant on the mods not playing favourites or having their own agenda.

        • sugar_in_your_tea@sh.itjust.works

          heavily reliant on the mods

          As any solution to this sort of problem should be, IMO.

          If the mods suck, then go make another community. If enough of the mods are good, they can be a huge part of the solution. I’m envisioning this:

          1. users report other users
          2. mods ban users based on reports, if the reports have merit
          3. admins block instances based on reports from mods, if the reports are consistent
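          That escalation path can be read as a data flow. A sketch of step 2’s triage, with hypothetical names and thresholds; a human mod still reviews every candidate:

```python
BAN_REPORT_THRESHOLD = 3  # hypothetical: distinct reporters needed to surface a user

def ban_candidates(reports: list) -> set:
    """reports: (reported_user, reporting_user) pairs from step 1.

    Counts *distinct* reporters per reported user, so one account
    cannot mass-report somebody onto the list by itself.
    """
    reporters = {}
    for reported, reporter in reports:
        reporters.setdefault(reported, set()).add(reporter)
    return {user for user, who in reporters.items() if len(who) >= BAN_REPORT_THRESHOLD}

reports = [
    ("spambot", "alice"), ("spambot", "bob"), ("spambot", "carol"),
    ("regular_user", "grudge_holder"), ("regular_user", "grudge_holder"),
]
```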

          Transparency keeps the mods honest, or at least allows users in the community to name and shame bad mods.

          Some automated tools to help mods out are always welcome.

      • HTTP_404_NotFound@lemmyonline.com

        I disagree. I think the solution is moderation.

        But that is basically agreeing with what I said.

        So I’m against applications.

        I don’t like them either, but, the problem is, I don’t have ANY other tools at my disposal for interacting with and viewing the other users.

        What we need is better moderation tools.

        Just a way to audit user activity and comments would be a big start. I honestly cannot find a method for doing this via the UI; I have to resort to querying the database just to dig up details.

          • HTTP_404_NotFound@lemmyonline.com

            Well, I dug around and built a pretty simple WebAssembly GUI for Lemmy just now.

            It would appear the API is actually missing easy ways to just… query users and actions. However, skipping past the Lemmy API and going directly to the database is easy enough.

            And from there, it’s pretty easy to run each user’s post history through, say, a piece of ML which detects the potential for spam and reports on the highest-risk users.
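            As a toy illustration of that last step; a real deployment would use a trained classifier, and the marker list here is invented:

```python
# Toy spam-risk scorer over users' post histories. A real pipeline would use
# a trained model; this keyword/link heuristic only shows the shape of it.
SPAM_MARKERS = ("free money", "click here", "duct cleaning")  # invented examples

def spam_risk(posts: list) -> float:
    """Fraction of a user's posts that trip a spam marker or are link-only."""
    if not posts:
        return 0.0
    flagged = sum(
        1 for p in posts
        if any(m in p.lower() for m in SPAM_MARKERS) or p.strip().startswith("http")
    )
    return flagged / len(posts)

def highest_risk(histories: dict, top_n: int = 5) -> list:
    """Report the top-N riskiest users, riskiest first."""
    return sorted(histories, key=lambda u: spam_risk(histories[u]), reverse=True)[:top_n]

histories = {
    "bot": ["CLICK HERE for free money", "http://spam.example"],
    "human": ["nice post", "I agree with this"],
}
```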

            • sugar_in_your_tea@sh.itjust.works

              I wonder how hard that would be to throw into a separate service? We could probably set up a replica of the main db to run the ML algo against and ship it as an optional add-on service.

              Admins could then set it to update the model on some schedule and to flag likely spammers/scammers.

              Sounds feasible.

              • HTTP_404_NotFound@lemmyonline.com

                Querying users is actually extremely easy.

                Querying users and posts, then performing sentiment analysis, is also extremely easy.

                (From the database, that is.)

                The API is a bit limited.

                I am messing around with the database right now… and, well, it maintains a LOT of data.

    • homesnatch@lemmy.one

      Captcha is like locking your car… There are still ways to get in, but it blocks the casual efforts.

      I review my reports. I review spam on my instance. None of us are going to be perfect.

      Do you review upvote bots? The spam account itself is easily replaceable; the coordinated army of upvote bots may be harder to track down.