

#SocialCoop inline poll: should SocialCoop be one of the signatories of the [[Fedipact]] effort to *preemptively defederate* with Threads.net?

https://www.loomio.com/d/AZcJK6y2 is an ongoing Loomio discussion about this but I wanted to see some in-instance discussion ideally.

  • yes (say why) (44%, 4 votes)
  • no (say why) (55%, 5 votes)
  • maybe (0%, 0 votes)
9 voters. Poll end: 4 months ago

in reply to In [[Flancia]] we'll meet

I think the "limit" option is the best choice for us, and I think that prevents us from being a signatory.
in reply to In [[Flancia]] we'll meet

@ntnsndr I'm curious: are you unaware of the huge volumes of content on Threads which is against the social.coop rules / code of conduct? or do you think Facebook deserves an exception to defederation for some reason?
in reply to three word chant

@3wordchant @ntnsndr thank you so much for raising this point!

I am unaware of the fraction involved, and you're right that I should be made aware. I am also unaware, in detail, of [[threads]]'s position on blocking well-defined subsets of users en masse, which is the direction I think we should go in for the general case of very large instances that cater to large, diverse populations while maintaining a reasonable approximation of a rational, pro-social ethical stance in cases of conflict.

in reply to In [[Flancia]] we'll meet

in general I just want to try to think first, as a community, of the large number of *people* who are on [[threads]] because that's where their friends are, for example -- and how to help them onboard to the #Fediverse as well as we can!

I would rather their first contact be with friendly, open people and groups like those at #socialcoop

in reply to In [[Flancia]] we'll meet

Content warning: links to fascist and bigoted content

[further replies in this subthread hidden behind the same content warning]
in reply to In [[Flancia]] we'll meet

@3wordchant Unfortunately I think there is a need to treat Threads a bit differently than other instances, given that it is so large and varied. Despite its failures of enforcement and policy, it at least has a bare-bones policy against hate speech, which distinguishes it from platforms that actively encourage such things. https://help.instagram.com/477434105621119?ref=igtos&helpref=faq_content

The problems it poses should be weighed against the benefits, esp. enabling our members to reach a larger network of people.

in reply to Nathan Schneider

@3wordchant I don't think we have much leverage against Threads by refusing to federate our few hundred members. In contrast, being visible on Threads could help more people there see the option of doing social media cooperatively.

Unlike a space like Gab, most people are joining Threads simply by default, and are not directly associating with the accounts you mention.

I think limiting is an appropriate compromise.

in reply to Nathan Schneider

@ntnsndr I don't see how the implication that we'd federate with Gab if it had a few million more non-bigoted users is in line with s.c's Federation Abuse Policy.

Folks who want to do outreach to Threads (or Gab) users are completely able to sign up for accounts on those platforms if they like; going back to "balance" it seems obvious that s.c users' safety is more important than making life slightly more convenient for that subset of users who want to evangelise in that way (1/2)

@flancian

in reply to three word chant

@ntnsndr a core strength of the fediverse, and part of the explicit agreement with s.c users, has been to have better moderation (and thus more community safety) than corporate equivalents. Twitter has been a great example of the importance of upholding standards of behaviour, through the rapidly increasing toxicity on the platform after those standards were relaxed in the name of including problematic voices (2/2)

CC @flancian

in reply to three word chant

@3wordchant I think it is a good point that this may require a policy change.

Since Gab does not have a recognizable hate speech policy, no, I don't think a large number of users there would change our approach.

I think the availability of a tool like limiting allows us to make an appropriate choice. In my view social media should never be entirely on-or-off, just like social life should not be. People are complex, and the fediverse should reflect that.

in reply to Nathan Schneider

@3wordchant simple shunning (like sanctions, etc) tends to do more harm than good in other aspects of social life. I think outright banning is less than ideal online too. As the fediverse evolves, I hope the tools improve for enabling self-defense, collective action, and more fine-grained accountability.
in reply to Nathan Schneider

@ntnsndr there's evidence to suggest that sanctions are effective at reducing harmful behaviour¹; it doesn't simply migrate elsewhere when communities are banned in one place. For me (and by my reading of s.c's Federation Abuse Policy), that applies equally to "Facebook should ban hate groups" and "s.c should not federate with Facebook until they enforce their rules against harmful behaviour" (1/2)

¹https://techcrunch.com/2017/09/11/study-finds-reddits-controversial-ban-of-its-most-toxic-subreddits-actually-worked/

CC @flancian

in reply to three word chant

@3wordchant right, I know the studies and evidence. But that is based on the current tools available in online life, where censorship and exile are basically all the tools allow. I have argued, along with others, for exploring better options, e.g.: https://journals.sagepub.com/doi/10.1177/20563051221126041
in reply to Nathan Schneider

@ntnsndr thanks for the paper, I'll read it as soon as I can.

Meanwhile, I fully agree with you that it will be helpful to have better tools.

In the meantime, while those tools are built, and given that you are familiar with the evidence that deplatforming reduces harmful behaviour, why do you think it is better not to use the (admittedly blunt) tools available? (1/2)

CC @flancian

in reply to three word chant

@ntnsndr Do you think there might be some component of personal privilege in your preference for this approach, and do you see any potential exclusionary impact in applying that preference to a community space that intends/claims to be welcoming to people who face more marginalisation than you do? (2/2)

CC @flancian

This entry was edited (4 months ago)
in reply to three word chant

@3wordchant perhaps. That's why we have had discussions in the co-op about this for months. But the question cuts both ways. For some dynamics of marginalization, being cut off from the wider world is a privilege. For instance, I can only prioritize S.c these days because I no longer have a job that correlates my income to social media reach.
in reply to three word chant

@3wordchant What do we gain in safety by defederating that we wouldn’t gain by limiting?

If we limit, then we will not see any Threads posts in the federated TL. If SC users decide to follow individual Threads accounts and boost any toxic bigotry into local, they’ll be in violation of our own internal codes of behavior and will be dealt with.

This does raise a new Q though. Are we in violation of rules if we quote boost something in order to critique it? Are CWs sufficient?
@ntnsndr @flancian

This entry was edited (4 months ago)
in reply to MJ

@jotaemei @3wordchant I think our rules and moderation team are wise enough to detect the difference between affirmation and critique.

Thanks for these points.

in reply to Nathan Schneider

I worded that concern poorly. Yes, I expect as much from our rules and moderation team, but I was thinking still about how even quote boosted posts could be triggering for some members, and if we should have some guidelines for those cases. @3wordchant @flancian
This entry was edited (4 months ago)
in reply to MJ

@jotaemei off the top of my head (others with more knowledge of ActivityPub might be able to think of further examples), defederating would prevent Threads users from invisibly organising harassment campaigns in replies to s.c users, and prevent s.c users' content from reaching unexpected audiences of hate groups on Threads by being boosted (or whatever Threads calls that there).

CC @ntnsndr @flancian

in reply to three word chant

@3wordchant @jotaemei @ntnsndr these are good examples, thank you. Playing devil's advocate here a bit:

- The telephone can be used to organize a harassment campaign. Should numbers not be able to call each other freely because of this? Should the government tap all lines because of this? My gut feel says no to both. Does this intuition not apply here because of speed or some other factor in this particular network? I'm unsure.

This entry was edited (4 months ago)
in reply to In [[Flancia]] we'll meet

@3wordchant @jotaemei @ntnsndr

- On boosts as a danger/weapon: I'm sorry but I don't see how federation makes the problem significantly worse for what amount to public web posts that can already be scraped, etc. Maybe a visibility rule to 'only show to logged in users from instances in a user-kept allowlist' would be needed for such cases?

Essentially user-defined per-post federation allowlists might be needed in the long term.

in reply to In [[Flancia]] we'll meet

"authorized fetch" is part of what you're describing, and I hope its adoption continues to increase.

As for "you can still see the content on the web", sure, but there's a wide zone between "technically impossible" and "absolutely trivial to do" – surely you agree that putting *any* friction in the way of the bigots who demonstrably exist on Threads will reduce the amount of harm caused, even if not to zero?

CC @jotaemei @ntnsndr

in reply to three word chant

@3wordchant @jotaemei @ntnsndr full disclosure: I am currently not keen on adopting authorized fetch on Social.coop either. IIUC it makes federation significantly more complex to implement, particularly for smaller/new servers (that don't run Mastodon). I'm happy to be shown wrong here, though; maybe I am over-estimating the barriers to federation it would add.
in reply to In [[Flancia]] we'll meet

@3wordchant @jotaemei @ntnsndr On the principle of minimizing/obstructing harm: this point is of course valuable but it also reminds me of many conversations I've had about scraping the Fediverse. In the end I think there might be a philosophical gap here between camps 'the Fediverse should be part of the open web first' and 'the Fediverse should be a walled garden first' -- a more ethical and federated one, but a walled garden in the end.
in reply to In [[Flancia]] we'll meet

@3wordchant @jotaemei @ntnsndr My (surely privileged, tech-bro-influenced) position is currently "open web first": if someone doesn't want their posts to be seen widely, they should use a non-open visibility setting.

This doesn't mean I think we shouldn't defederate from actively fascist instances, or that we shouldn't work to improve the paltry visibility settings we have now in Mastodon. We should do both. It's just that Threads doesn't seem like a fascist/troll instance to me, and I've seen plenty.

in reply to In [[Flancia]] we'll meet

@3wordchant @jotaemei @ntnsndr now, if harmful accounts stay up once Threads has set up moderation/admin communication channels... then my position on them will change.

You pointed out earlier that this position might be inconsistent/irrational as the onus of work should be on them given their track record. That's fair. I'm still processing this and I might change my default position because of this.

in reply to In [[Flancia]] we'll meet

Exactly. In February, Facebook will celebrate 20 years of having had the opportunity to set up effective moderation. The parent company's 2022 revenue was over $116 billion; Instagram (the business unit of which Threads is a part) had estimated revenue over $50 billion the same year. I think it's very fair to say that they have had a huge opportunity to improve their content standards, if they were going to.

CC @jotaemei @ntnsndr

in reply to three word chant

@3wordchant @jotaemei I think the basic fact of the matter is that moderation at that scale is a fool's errand. You're always going to be either too restrictive or not restrictive enough for huge numbers of people. That's the beauty of the fediverse—we can be in a global network with more fine-grained moderation choices at the server level.
in reply to Nathan Schneider

@ntnsndr while I agree with you about the benefits of decentralisation, I think framing this as a "basic fact" about scale ignores the factors specific to FB's organisational structure, constituency of its investors, business model, and the nature of the (lack of) legal regulation in its country of origin, and many of the countries where it is most popular.

CC @flancian @jotaemei

in reply to Nathan Schneider

@ntnsndr thanks for the answer. Do you agree that this special-case weighing is a departure from our established Federation Abuse Policy, and that until s.c makes a change to that policy, the status quo should be that Threads is defederated, based on "fails to enforce policies to deal with hate speech" and the (many) documented instances of Facebook failing to enforce their Community Guidelines?

CC @flancian
