Over the past 5–6 months, I’ve been noticing a lot of new accounts spinning up that follow this format:

  • https://instance.xyz/u/gmbpjtmt
  • https://instance.xyz/u/tjrwwiif
  • https://instance.xyz/u/xzowaikv

What are they doing?

Mostly, if not exclusively, they’re boosting and/or downvoting US news and politics posts/comments to fit their agenda.

Edit: They could also be manipulating other regional news/politics, but my instance is regional and doesn’t subscribe to those communities, which limits my visibility into the overall manipulation patterns.

What do these have in common?

  1. Most are on instances that allow signups without applications. (I’m guessing the few on application-based instances predate that requirement being enabled, since those accounts are several months old; but that’s just a guess, and they could have easily applied and been approved.)
  2. Most have random 8-character usernames (occasionally 7 or 9 characters).
  3. Most consistently upvote and/or downvote a common set of users.
  4. No posts/comments.
  5. No avatar or bio (pretty common in general, but telling when combined with the other attributes).
  6. Update: I’ve had several anonymous reports (thanks!) that these users are registering with @sharklasers.com email addresses, a throwaway email service.

What can you, as an instance admin, do?

Keep an eye on new registrations to your instance. If you see any that fit this pattern, pick a few (and a few off this list) and see if they’re voting along the same lines. You can also look in the login_token table to see if there is IP address overlap with other users on your instance and/or any other of these kinds of accounts.
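As a rough sketch of that IP-overlap check (not my actual script), assuming you’ve exported `user_id` and `ip` pairs from the `login_token` table, something like this will surface the overlaps:

```python
from collections import defaultdict

def ip_overlap(rows):
    """Group user ids by IP address; return only IPs shared by more than one user.

    rows: iterable of (user_id, ip) tuples, e.g. exported from login_token.
    """
    by_ip = defaultdict(set)
    for user_id, ip in rows:
        by_ip[ip].add(user_id)
    return {ip: users for ip, users in by_ip.items() if len(users) > 1}

# Hypothetical data: users 7 and 12 logged in from the same address
rows = [(7, "203.0.113.5"), (12, "203.0.113.5"), (3, "198.51.100.9")]
print(ip_overlap(rows))
```

Shared IPs aren’t proof on their own (think CGNAT or campus networks), but they’re a good starting point for a closer look.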

You can also check the local_user table to see if the email addresses are from the same provider (not a guaranteed way to match them, but it can be a clue) or if they’re the same email address using plus-addressing (e.g. [email protected], [email protected], etc.).
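A quick sketch of the plus-addressing check (the addresses here are made-up placeholders):

```python
def normalize_email(addr):
    """Strip plus-addressing so variants of the same mailbox compare equal.

    e.g. "user+politics@example.com" and "user+news@example.com" both
    normalize to "user@example.com".
    """
    local, _, domain = addr.lower().partition("@")
    base = local.split("+", 1)[0]  # drop everything after the first '+'
    return f"{base}@{domain}"

print(normalize_email("User+politics@Example.com"))  # user@example.com
```

Group the local_user emails by their normalized form and any group with more than one account is worth a look.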

Why are they doing this?

Your guess is as good as mine, but US elections are in a few months, and I highly suspect some kind of interference campaign based on the volume of these that are being spun up and the content that’s being manipulated. That, or someone, possibly even a ghost or an alien life form, really wants the impression of public opinion being on their side. Just because I don’t know exactly why doesn’t mean that something fishy isn’t happening that other admins should be aware of.

Who are the known culprits?

These are the accounts fitting that pattern which have been positively identified. There are certainly more; this list is by no means comprehensive.

These all seem to be part of a campaign. I’ve tried to separate out the garden-variety “to win an argument” style manipulators (which were omitted) from the accounts suspected of being part of the campaign, but I may have missed some. If there are any false positives, I do apologize.

[New: 9/18/2024]: https://thelemmy.club/u/fxgwxqdr
[New: 9/18/2024]: https://discuss.online/u/nyubznrw
[New: 9/18/2024]: https://thelemmy.club/u/ththygij
[New: 9/18/2024]: https://ttrpg.network/u/umwagkpn
[New: 9/18/2024]: https://lemdro.id/u/dybyzgnn
[New: 9/18/2024]: https://lemmy.cafe/u/evtmowdq
https://leminal.space/u/mpiaaqzq
https://lemy.lol/u/ihuklfle
https://lemy.lol/u/iltxlmlr
https://lemy.lol/u/szxabejt
https://lemy.lol/u/woyjtear
https://lemy.lol/u/jikuwwrq
https://lemy.lol/u/matkalla
https://lemmy.ca/u/vlnligvx
https://ttrpg.network/u/kmjsxpie
https://lemmings.world/u/ueosqnhy
https://lemmings.world/u/mx_myxlplyx
https://startrek.website/u/girlbpzj
https://startrek.website/u/iorxkrdu
https://lemy.lol/u/tjrwwiif
https://lemy.lol/u/gmbpjtmt
https://thelemmy.club/u/avlnfqko
https://lemmy.today/u/blmpaxlm
https://lemy.lol/u/xhivhquf
https://sh.itjust.works/u/ntiytakd
https://jlai.lu/u/rpxhldtm
https://sh.itjust.works/u/ynvzpcbn
https://lazysoci.al/u/sksgvypn
https://lemy.lol/u/xzowaikv
https://lemy.lol/u/yecwilqu
https://lemy.lol/u/hwbjkxly
https://lemy.lol/u/kafbmgsy
https://discuss.online/u/tcjqmgzd
https://thelemmy.club/u/vcnzovqk
https://lemy.lol/u/gqvnyvvz
https://lazysoci.al/u/shcimfi
https://lemy.lol/u/u0hc7r
https://startrek.website/u/uoisqaru
https://jlai.lu/u/dtxiuwdx
https://discuss.online/u/oxwquohe
https://thelemmy.club/u/iicnhcqx
https://lemmings.world/u/uzinumke
https://startrek.website/u/evuorban
https://thelemmy.club/u/dswaxohe
https://lemdro.id/u/efkntptt
https://lemy.lol/u/ozgaolvw
https://lemy.lol/u/knylgpdv
https://discuss.online/u/omnajmxc
https://lemmy.cafe/u/iankglbrdurvstw
https://lemmy.ca/u/awuochoj
https://leminal.space/u/tjrwwiif
https://lemy.lol/u/basjcgsz
https://lemy.lol/u/smkkzswd
https://lazysoci.al/u/qokpsqnw
https://lemy.lol/u/ncvahblj
https://ttrpg.network/u/hputoioz
https://lazysoci.al/u/lghikcpj
https://lemmy.ca/u/xnjaqbzs
https://lemy.lol/u/yonkz

Edit: If you see anyone from your instance on here, please please please verify before taking any action. I’m only able to cross-check these against the content my instance is aware of.

    • Coelacanth@feddit.nu · 4 months ago

      I believe “Russian Bot Farm Presence” is the preferred metric of social network relevance in the scientific community.

      • Admiral Patrick@dubvee.org (OP) · 3 months ago

        Lol, that sounds like a Randall Munroe unit of measurement, and I love it. If there’s not already an xkcd for that, there should be.

    • Admiral Patrick@dubvee.org (OP) · 4 months ago

      I hope this post doesn’t tank the monthly active users stats lol. Mostly that’s me hoping this problem isn’t as big as I fear.

    • What surprises me is that these all seem to be on other instances - including a few big ones like just.works - rather than someone spinning up their own instance to create unlimited accounts to downvote/spam/etc.

      • schizo@forum.uncomfortable.business · 3 months ago

        Not really: if you’re astroturfing, you don’t do all your astroturfing from a single source because that makes it so obvious even a blind person could see it and sort it out.

        You do it from all over the place, mixed in with as much real user traffic as you can, and you do it steadily, without being hugely bursty from any single location.

        Humans are very good at pattern matching and recognition (which is why we’ve not all been eaten by tigers and leopards) and will absolutely spot the single source, or extremely high volume from a single source, or even just the looks-weird-should-investigate-more pattern you’d get from, for example, exactly what happened to cause this post.

        TLDR: they’re doing this because they’re trying to evade humans and ML models by spreading the load around, making it not a single source, and also trying to mix it in with places that would also likely have substantial real human traffic because uh, that’s what you do if you’re hoping to not be caught.

  • kersploosh@sh.itjust.works · 4 months ago

    After digging into it, we banned the two sh.itjust.works accounts mentioned in this post. A quick search of the database did not reveal any similar accounts, though that doesn’t mean they aren’t there.

  • A Basil Plant@lemmy.world · 3 months ago

    My bachelor’s thesis was about comment amplifying/deamplifying on reddit using Graph Neural Networks (PyTorch-Geometric).

    Essentially: there used to be commenters who would constantly agree / disagree with a particular sentiment, and these would be used to amplify / deamplify opinions, respectively. I fed a set of metrics [1] into a Graph Neural Network (GNN), and it produced reasonably good results back in the day. Since PyTorch-Geometric came out, there have been numerous advancements in GNN research as a whole, and I suspect the field is significantly more developed now.

    Since upvotes are known to the instance administrator (for brevity, not getting into the fediverse aspect of this), and since their email addresses are known too, I believe that these two pieces of information can be accounted for in order to detect patterns. This would lead to much better results.

    In the beginning, such a solution needs to look for patterns first and these patterns need to be flagged as true (bots) or false (users) by the instance administrator - maybe 200 manual flaggings. Afterwards, the GNN could possibly decide to act based on confidence of previous pattern matching.

    This may be an interesting bachelor’s / master’s thesis (or a side project in general) for anyone looking for one. Of course, there’s a lot of nuances I’ve missed. Plus, I haven’t kept up with GNNs in a very long time, so that should be accounted for too.

    Edit: perhaps IP addresses could be used too? That’s one way reddit would detect vote manipulation.

    [1] account age, comment time, comment time difference with parent comment, sentiment agreement/disagreement with parent commenters, number of child comments after an hour, post karma, comment karma, number of comments, number of subreddits participated in, number of posts, and more I can’t remember.
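    Even without a GNN, a much simpler baseline on the voting data alone, pairwise overlap (Jaccard similarity) of vote histories, can surface accounts voting in lockstep. A sketch, assuming you can export (voter, target, score) rows:

```python
from itertools import combinations

def lockstep_pairs(votes, threshold=0.8, min_votes=5):
    """Flag pairs of accounts whose vote histories overlap suspiciously.

    votes: iterable of (voter, target_id, score) rows, score in {+1, -1}.
    Returns (voter_a, voter_b, jaccard) for pairs whose (target, score)
    sets have Jaccard similarity >= threshold.
    """
    history = {}
    for voter, target, score in votes:
        history.setdefault(voter, set()).add((target, score))
    flagged = []
    for a, b in combinations(sorted(history), 2):
        ha, hb = history[a], history[b]
        if len(ha) < min_votes or len(hb) < min_votes:
            continue  # too little data to judge either account
        jaccard = len(ha & hb) / len(ha | hb)
        if jaccard >= threshold:
            flagged.append((a, b, round(jaccard, 2)))
    return flagged
```

    The pairwise loop is O(n²) in the number of voters, so in practice you’d restrict it to recently active accounts, but it needs no training data at all, which makes it a decent first filter before anything ML-based.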

    • Admiral Patrick@dubvee.org (OP) · 3 months ago

      That would definitely work for rooting out ones local to an instance, but not cross-instance. For example, none of these were local to my instance, so I don’t have email or IP data for those and had to identify them based on activity patterns.

      I worked with another instance admin who did have one of these on their instance, and they confirmed IP and email provider overlap among those accounts, as well as a local alt of an active user on another instance. Unfortunately, there is no way to prove that the local alt actually belongs to the “main” account on the other instance. Due to privacy policy conflicts, they couldn’t share the actual IP/email values, but they could confirm that there was overlap among the suspect accounts.

      Admins could share IP and email info and compare, but each instance has its own privacy policy which may or may not allow for that (even for moderation purposes). I’m throwing some ideas around with other admins to find a way to share that info that doesn’t violate the privacy of any instances’ users. My current thought was to share a hash of the IP address, IP subnet, email address, and email provider. That way those hashes could be compared without revealing the actual values. The only hiccup with that is that it would be incredibly easy to generate a rainbow table of all IPv4 addresses to de-anonymize the IP hashes, so I’m back to square one lol.
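      For illustration, one direction I’ve toyed with (purely hypothetical, nothing any of us have deployed) is a keyed hash: if participating admins exchange a secret key out of band, an outsider can’t precompute that rainbow table without the key. A key-holding admin still could brute-force the IPv4 space, so it only narrows the problem rather than solving it:

```python
import hashlib
import hmac

def keyed_fingerprint(value, key):
    """HMAC-SHA256 fingerprint of an IP or email for cross-instance comparison.

    Unlike a plain hash, this can't be reversed with a precomputed rainbow
    table unless the attacker also holds the shared key.
    """
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()

key = b"shared-admin-secret"  # hypothetical; exchanged out of band
a = keyed_fingerprint("203.0.113.5", key)
b = keyed_fingerprint("203.0.113.5", key)
assert a == b  # same value + same key -> same fingerprint, comparable without revealing the IP
```

      Two admins hashing the same IP with the same key get matching fingerprints, so overlap can be confirmed without either side seeing the other’s raw values.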

      • A Basil Plant@lemmy.world · 3 months ago

        Yes, this would essentially be a detection mechanism for local instances. However, a network trained on all available federated data could still yield favorable results. You may end up not needing IP addresses and emails at all; just upvotes/downvotes across a set of existing comments would help.

        The important point is figuring out all possible data you can extract and feed it to a “ML” black box. The black box can deal with things by itself.

    • Admiral Patrick@dubvee.org (OP) · 4 months ago

      I strongly advise verifying first, but yes.

      I can only verify them based on the posts/comment votes my instance is aware of. That said, I do have sufficient data and enough overlap to establish a connection/pattern.

  • Otter@lemmy.ca · 4 months ago

    I think what we need is an automated solution that flags groups of accounts for suspected vote manipulation.

    We appreciate the work you put into this, and I imagine it took some time to put together. That will only get harder to do if someone / some entity puts money into it.

    • Admiral Patrick@dubvee.org (OP) · 4 months ago

      Yeah, this definitely seems more like script kiddie than adversarial nation-state. We’re not big enough here, yet anyway, that I think we’d be attracting that kind of attention and effort. However, it is a good practice run for identifying this kind of thing.

      • Starbuncle@lemmy.ca · 3 months ago

        It’s easy on Reddit because they have their own username generator when you sign up, but the usernames being used here are very telling. Random letters is literally the absolute bare minimum effort for randomly generating usernames. A competent software engineer could make something substantially better in an afternoon and I feel like an adversarial nation-state would be using something like a small language model trained solely on large lists of scraped usernames.

    • SorteKanin@feddit.dk · 3 months ago

      automated solution

      On the other hand, any automated solution can be worked around. Such a system would be open source like the rest of Lemmy, so you’d know exactly which criteria you need to meet to avoid getting hit by the filter.

      • Otter@lemmy.ca · 3 months ago

        I guess it could end up being an arms race.

        What if the tool were more of a toolbox, where each instance could configure it the way they want (e.g. thresholds before something is flagged)? Similar to how AutoMod works, where the options are well known but it’s hard to tell what any particular space is running behind the scenes.

        At the very least, tools like this can make silent vote manipulation harder, even if they don’t stop it entirely.

  • Onno (VK6FLAB)@lemmy.radio · 4 months ago

    As an end user, i.e. not someone who hosts an instance or has extra permissions, can we in any way see who voted on a post or comment?

    I’m asking because over the time I’ve been here, I’ve noticed that many, but not all, posts or comments attract a solitary down vote.

    I see this type of thing all over the place. Sometimes it’s two down votes, indicating that it happens more than once.

    I note that human behaviour might explain this to some extent, but the voting happens almost immediately, in the face of either no response, or positive interactions.

    Feels a lot like the Reddit down vote bots.

    • Admiral Patrick@dubvee.org (OP) · 4 months ago

      As a regular user, I don’t think there’s much you can do, unfortunately (though thank you for your willingness to help!). Sometimes you can look at a post/comment from Kbin to see the votes, but I think Mbin only shows the upvotes. Most former Kbin instances, I believe, switched to Mbin when development on Kbin stalled.

      The solitary downvotes are annoying for sure. “Some people, sigh” is just my response to that. I just ignore those.

      Re: Downvote bots. I can’t say they’re necessarily bots, but my instance has scripts that flag accounts that exclusively give out downvotes and then bans them. That’s about the best I can do, at present, to counter those for my users.

      • Tanoh@lemmy.world · 4 months ago

        Re: Downvote bots. I can’t say they’re necessarily bots, but my instance has scripts that flag accounts that exclusively give out downvotes and then bans them. That’s about the best I can do, at present, to counter those for my users.

        It is usually not a good idea to specify what your exact metrics are for a ban. A bad actor could see that and then get around it by randomly upvoting something every now and then.

        • Admiral Patrick@dubvee.org (OP) · 4 months ago

          True. But it uses a threshold ratio. They’d have to give out a proportional number of upvotes to “fool” it, and at that point, they’re an average Lemmy user lol. That script isn’t (currently) set up to detect targeted vote brigading, just accounts that are only here to downvote stuff. I’ve got other scripts to detect that, but they just generate daily/weekly reports.

          It takes time to detect them, but it does prevent most false positives that way (better to err on the side of caution and all that).
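          Roughly, the logic of the downvote-only check looks like this (the numbers are illustrative, not my actual thresholds):

```python
def flag_downvote_only(upvotes, downvotes, min_total=20, max_upvote_ratio=0.05):
    """Flag an account that almost exclusively downvotes.

    Waits for min_total votes before judging (fewer false positives),
    then flags if upvotes are at most max_upvote_ratio of all votes.
    """
    total = upvotes + downvotes
    if total < min_total:
        return False  # not enough data yet; err on the side of caution
    return upvotes / total <= max_upvote_ratio

assert flag_downvote_only(0, 50) is True    # pure downvoter -> flagged
assert flag_downvote_only(10, 40) is False  # mixed voting -> left alone
assert flag_downvote_only(0, 5) is False    # too few votes to judge
```

          The min_total wait is what makes it slow to trigger but also what keeps the false-positive rate down.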

  • XNX@slrpnk.net · 4 months ago

    How did you discover this? I wonder if private voting will make it too difficult to discover

    • Admiral Patrick@dubvee.org (OP) · 4 months ago

      I’ll try to summarize this as briefly as I can:

      I was replying to a comment in a big news community about 5 months ago. It took me probably 2 minutes, at most, to compose my reply. By the time I submitted the comment (which triggered the vote counts to update in the app), the comment I was replying to had received ~17 downvotes. This wasn’t a controversial comment or post, mind you.

      17 votes in under 2 minutes on a comment is a bit unusual, so I pulled up the vote viewer to see who all had downvoted it so quickly. Most of them were these random 8 character usernames like are shown in the post.

      From there, I went to the DB to look at the timestamps on those votes, and they were all rapid-fire, back to back. (e.g. someone put the comment AP ID into a script and sent their bot swarm after it)

      So that’s when I realized something fishy was happening and dug deeper. Looking at what those accounts were upvoting, however, revealed more than what they were downvoting. I’ve been keeping an eye out for those types of accounts since. They stopped registering for a while, but they started coming up again within the last week or two.
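      That rapid-fire, back-to-back timestamp pattern is straightforward to check for; roughly (the window and count here are arbitrary, not what I actually use):

```python
def vote_bursts(timestamps, window=120.0, min_votes=10):
    """Detect bursts: at least min_votes votes on one item within `window` seconds.

    timestamps: vote times for a single post/comment, as Unix seconds.
    Returns True if any span of min_votes consecutive votes fits in `window`.
    """
    ts = sorted(timestamps)
    for i in range(len(ts) - min_votes + 1):
        if ts[i + min_votes - 1] - ts[i] <= window:
            return True
    return False

# 17 votes landing 5 seconds apart -> burst; 10 minutes apart -> normal
assert vote_bursts([1000 + 5 * k for k in range(17)]) is True
assert vote_bursts([1000 + 600 * k for k in range(17)]) is False
```

      Organic votes on a non-controversial comment trickle in; a swarm fed an AP ID by a script lands in one tight cluster like the first case.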

      I wonder if private voting will make it too difficult to discover

      Depends how it’s implemented. If the random usernames that are supplied from the private votes are random for each vote, that would make it nearly impossible to catch (and would also clutter the person table on instances with junk, one-off entries). If the private voting accounts are static and always show up with the same identifier, I don’t think it would make it much more difficult than it is now with these random user accounts being used. The kicker would be that only the private version of the account would be actionable.

      The only platform with private voting I know of right now is Piefed, and I’m not sure if the private voting usernames are random each time or static (I think they’re static and just not associated with your main profile). All that said, I’m not super clear on how private voting is implemented.

  • ericbomb@lemmy.world · 3 months ago

    But this is SOO tedious. The annoying bit is that it could be just one person who set it up over a weekend and has a script they plug into when they want to troll, and now all admins/mods have to do more work.

    You’re fighting the good fight! So annoying that folks are doing it on freaking lemmy.

    • Buddahriffic@lemmy.world · 3 months ago

      I wonder if there’s a way for admins to troll back. Like instead of banning the accounts, send them into a captcha loop with unsolvable or progressively harder captchas (or ones designed to poison captcha solving bots’ training).

  • dethada@lemmy.zip · 3 months ago

    Is there any existing open-source tool for manipulation detection on Lemmy? If not, we should create one to reduce the manual workload for instance admins.

    • johannesvanderwhales@lemmy.world · 3 months ago

      If there were, upvote botters would use it to verify that new botting methods weren’t detectable. There’s a reason why Reddit has so much obfuscation around voting and bans.

      • Draconic NEO@lemmy.world · 3 months ago

        I mean if a new account or an account with no content on it starts downvoting a lot of things or upvoting a lot of things that’s generally a red flag that it’s a vote manipulation account. It’s not always but it’s usually pretty obvious when it actually is. A person who spends their entire time downvoting everything they see, or downvoting things randomly is likely one of those bots.

        Could they come up with ways around it? Sure by participating and looking like real users with post and comment history. Though that requires effort and would slow them down majorly, so it’s something that they’re very unlikely to do.

      • dethada@lemmy.zip · 3 months ago

        Good point, but is it then possible to come up with detection algorithms that make it hard for upvote botters even when they know the algorithm? I think that would be more ideal than security through obscurity, but I’m not sure how feasible it is.

        • johannesvanderwhales@lemmy.world · 3 months ago

          I don’t know, honestly. With AI it would be pretty difficult to be foolproof. I’m thinking of the MIT card-counting team and how they played as archetypal players to obscure their activities. You could easily make an account that upvoted content in a way that looked plausible; I’m sure there are many real humans who upvote stories positive to one political party and downvote the other. Edit: I mean fuck, if you wanted to, you could create an instance just to train your model. Edit 2: For that matter, you could create an instance to bypass any screening for botters…

  • Lampshade@lemmy.sdf.org · 4 months ago

    What stops the botters from setting up their own instances to create unlimited users for manipulating votes?

    I guess admins also have to be on top of detecting and defederating from such instances?

    • Admiral Patrick@dubvee.org (OP) · 4 months ago

      What stops the botters from setting up their own instances to create unlimited users for manipulating votes?

      Nothing, really. Though bad instances like that would be quickly defederated from most. But yeah, admins would have to keep an eye on things to determine that and take action.

    • Mac@mander.xyz · 3 months ago

      This has already happened multiple times. They get found out fairly quickly and defederated by pretty much everyone.

    • Draconic NEO@lemmy.world · 3 months ago

      They usually get found out pretty easily and then defederated by everyone. There’s a service called fediseer which allows instance admins to flag instances as harmful, which other admins can use to determine if they should block an instance.

      For that to really work, they would have to rotate between a lot of domain names, either by changing their own instance’s domain or by using a proxy. Either way, they’d run out of domains rather quickly.

      It’s way easier for them to just get accounts on the big servers and hide there as if they were normal lurking users.

  • DarkThoughts@fedia.io · 4 months ago

    Fedia hiding the activity is one of those things that I kinda dislike, as it was an easy way to detect certain trolls.

    • Admiral Patrick@dubvee.org (OP) · 4 months ago

      Yeah, I’m split on public votes.

      On one hand, yeah, there’s a certain type of troll that would be easy to detect. It would also put more eyes on the problem I’m describing here.

      On the other, you’d have people doing retaliatory downvotes for no reason other than revenge. That, or reporting everyone who downvoted them.

      It depends on the person to use that “power” responsibly, and there are clearly people out there who would not wield it responsibly lol.

      • nondescripthandle@lemmy.dbzer0.com · 3 months ago

        I’m fully against public downvotes because I already see people calling out other users by name in threads they’re not even part of. There’s no world where that behavior gets better when you give them more tools to witch hunt. Lemmy is as much an insular echo chamber as any social media, and there are plenty of users dedicated to keeping it that way.

      • DarkThoughts@fedia.io · 3 months ago

        I think retaliatory downvotes happen either way if you’re in an argument. Same with report abuse, which, if it happens to a high degree, would be the moderator’s responsibility to ban the perpetrator (reports here are not anonymous like they were on Reddit).

        Also, someone with an abusive mind can easily use another instance that shows activity to identify downvoters. The votes are public either way for federation purposes; they’re just hidden on certain instances - at least at the user-facing level, but they’re still there technically.