User research on anti-disinformation in federated/decentralised networks

Hey everyone, this is just a quick one to see if there’s anyone interested in anti-disinformation within social networks and, in particular, those that are federated/decentralised.

I’m working on a short piece of user research with two former colleagues who forked the MoodleNet codebase into Bonfire, a general-purpose federated social network. They’ve got funding for an ‘extension’ of this to consider how to deal with disinformation in those kinds of environments.

If you know someone who’s interested in this kind of thing, could you let me know? I’d like to talk with them in the coming weeks, especially if they have an immediate use case :slight_smile:


I’d be interested to know which federated social networks intend to use this type of measure, and even more interested to know where the funding comes from (in this case the European Cultural Foundation). One of the benefits of decentralised social networks, in my mind anyway, is resistance to censorship (or “anti-disinformation” if you prefer), so I’d like to keep track of the ones who are letting it in through the back door.

Thanks Alan, one of the great things (in my experience) about protocol-based services such as federated social networks is that positive innovations tend to spread quickly. I’m hoping that progress made on anti-disinformation (think shared blocklists or content warnings) could propagate quickly throughout lots of different types of instances.
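To make the shared-blocklist idea concrete, here’s a minimal sketch of how one instance might merge blocklists published by peer instances. This is purely illustrative: the function name, data format, and `min_votes` threshold are my own assumptions, not Bonfire’s or any Fediverse project’s actual API.

```python
# Hypothetical sketch of shared-blocklist merging between instances.
# Names and format are illustrative, not from any real project.
from collections import Counter

def merge_blocklists(local, peer_lists, min_votes=2):
    """Adopt a peer-suggested domain block only if at least
    `min_votes` peers list it, so a single instance's judgement
    doesn't propagate automatically across the network."""
    counts = Counter()
    for peers in peer_lists:
        counts.update(set(peers))  # each peer counts once per domain
    suggested = {domain for domain, n in counts.items() if n >= min_votes}
    return set(local) | suggested

merged = merge_blocklists(
    {"spam.example"},
    [
        ["disinfo.example", "troll.example"],
        ["disinfo.example"],
    ],
)
# "disinfo.example" is listed by two peers and gets adopted;
# "troll.example" is only listed by one and does not.
```

The voting threshold is one possible guard against the censorship concern: no single instance’s block decision spreads on its own, only blocks that several independent communities agree on.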


Typically, where do things like content warnings originate from in this type of decentralised protocol-based service? Is it from human moderators or is it more automated?

I guess if I had to clarify my position it would be that I’m all for preventing misinformation spread as long as it’s actually misinformation, and so my concerns revolve around the processes of designating it as such.

Right, exactly, and one explicit aim of this project (hence the name ‘Zappa’) is to ensure the line into “censorship” isn’t crossed. The thing is, we all have different perspectives which is one of the reasons the Fediverse is different to centralised services like Twitter and Instagram.

So while the team and I have some armchair theories about what might work, we’re keen to talk to people who have a real-world need for this kind of thing, so we can figure out what might work in practice :slight_smile:


OK, I’m genuinely interested in that case :slight_smile: I would love for this to be handled better than it is currently. I’m not sure if I know anyone working on this or with a real-world need right now, but I will keep an eye out…


Hi @dajbelshaw

I am very interested in this. Besides being part of a tech coop in Mexico, I work with WITNESS - an org dealing closely with disinformation. I am also part of the C2PA - a coalition building a standard for content provenance. On the WITNESS side, we are particularly interested in how this applies to decentralized web/entities. I would love to have a chat with you on this. My email if you are up for it:


Great, thanks Jacobo - just emailed you! :star_struck:

Hello. I am from Hypha Co-op, and I work with Starling Lab and Distributed Press, so “disinfo across dweb networks” is definitely on my mind. @dajbelshaw I PM’d you my email address. I’m based in Toronto (ET), happy to have a quick call to share notes.


huh, that’s a really interesting problem. i am a little concerned about the approach here, in that the metric being proposed for whether something is misinformation appears to be whether most people in the local community believe it or not - after all, users in a conspiracy theory group would rate ordinary news stories as misinformation. it seems like there’d be a strong motivation to rate uncomfortable truths as misinfo to stop thinking about them

Thanks @benhylau - have emailed!

And @asa you’re absolutely correct in that there’s a fine line between dealing with disinformation and censorship, which is why this project is so interesting :slight_smile:

honestly, to me it’s not so much a question of ‘going too far’ as the attempt to rate the trustworthiness of a source as a scalar quantity at all. i would say that there’s no such thing as a less biased source nor should there be, rather the question is about which biases the source has. but i say that as though i know anything about it, obviously any research is a good start