Since Meta announced they would stop moderating posts, much of the mainstream discussion around social media has centered on whether a platform has a responsibility for the content posted on its service. I think that's a fair discussion, though I favor less moderation in almost every instance.
But as I think about it, the problem is not moderation at all: we had very little moderation in the early days of the internet and social media, and yet people didn't believe the nonsense they saw online, unlike nowadays, where even official news platforms have reported on outright bullshit made up on social media. To me the problem is the goddamn algorithm that pushes people into bubbles that reinforce their views, correct or incorrect; and I think anyone with two brain cells and an iota of understanding of how engagement algorithms work can see this. So why is the discussion about moderation and not about banning algorithms?
Algorithms can be useful - and at a certain scale they're necessary. Just look at Lemmy - even as small as it is, there's already some utility in algorithms like "Active", "Hot" and "Scaled", and as the number of communities and instances grows they'll be even more useful. The trouble starts when there are perverse incentives to drive users toward one type of content or another; the absence of those incentives is, I think, one of the fediverse's key strengths.
But correct me if I'm wrong (I'm not a programmer): Lemmy's algorithms are basically just sorting; they don't choose between two pieces of media to show me, but rather how to order them. Facebook et al. will simply not show content that I won't engage with or that would make me spend less time on the platform.
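That distinction can be sketched in a few lines of Python. This is purely illustrative - the post fields and the engagement scores are made up, not any platform's actual data model:

```python
# Illustrative sketch only -- not Lemmy's or Facebook's actual code.
posts = [
    {"id": 1, "timestamp": 100, "predicted_engagement": 0.2},
    {"id": 2, "timestamp": 200, "predicted_engagement": 0.9},
    {"id": 3, "timestamp": 150, "predicted_engagement": 0.1},
]

# Sorting: every post is still shown; only the order changes.
sorted_feed = sorted(posts, key=lambda p: p["timestamp"], reverse=True)

# Filtering: posts below an engagement threshold are never shown at all.
filtered_feed = [p for p in posts if p["predicted_engagement"] > 0.5]

print([p["id"] for p in sorted_feed])    # all three posts, newest first
print([p["id"] for p in filtered_feed])  # only the "engaging" post survives
```

With sorting you can always scroll far enough to see everything; with filtering, some posts simply never reach you.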
I agree that they are useful, but at a certain point we as a society need to weigh the usefulness of certain technologies against their potential for harm. If the potential for harm is greater than the benefit, then maybe we should curb that potential somewhat, or remove it altogether.
So maybe we could refine the argument: we need to limit which signals algorithms can use to push content? Or maybe all social media users should have access to an algorithm-free feed, with the algorithm-driven feed hidden by default and customizable by users?
Algorithm is just a fancy word for rules to sort by. "New" is an algorithm that says "sort by the timestamp of the submissions". That one is pretty innocuous, I think. Likewise "Active", which just says "sort by the last time someone commented" (or whatever). "Hot" and "Scaled", though, involve business logic – rules that don't have one technically correct solution, but involve decisions and preferences made by people to accomplish a certain aim. Again, in Lemmy's case I don't think either the "Hot" or "Scaled" algorithm should be too controversial – and if they are, you can review the source code, make comments or a PR for changes, or stand up your own Lemmy instance that does it the way you want.

For walled-garden SM sites like TikTok, Facebook and Twitter/X, though, we don't know what the logic behind the algorithm says. We can speculate that it's optimized to keep people using the service for longer, or to encourage them to come back more frequently, but for all intents and purposes those algorithms are black boxes, and we have to assume that they're working only for the benefit of the companies, not the users.
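To make the "business logic" point concrete, here's a sketch contrasting a purely mechanical sort like "New" with a hot-style rank that discounts score by age. The formula shape (log of score over a power of age) resembles what many link aggregators use, but the constants here are invented for illustration - they are not Lemmy's actual values, which live in its source code:

```python
import math

def new_rank(post):
    # "New": sort purely by submission time -- no tunable decisions.
    return post["created"]

def hot_rank(post, now):
    # A hot-style rank: score discounted by age. The log dampens huge
    # scores; the exponent 1.8 and the +2 offset are arbitrary tuning
    # choices (illustrative, not Lemmy's real constants) -- exactly the
    # kind of "business logic" somebody had to decide on.
    age_hours = (now - post["created"]) / 3600
    return math.log(max(post["score"], 1)) / (age_hours + 2) ** 1.8

now = 1_000_000
posts = [
    {"id": "big_score", "created": now - 6 * 3600, "score": 5000},
    {"id": "brand_new", "created": now - 1 * 3600, "score": 3},
]

by_new = sorted(posts, key=new_rank, reverse=True)
by_hot = sorted(posts, key=lambda p: hot_rank(p, now), reverse=True)
print([p["id"] for p in by_new])
print([p["id"] for p in by_hot])
```

The two feeds disagree: "New" puts the fresh post first no matter what, while the hot-style rank lets a highly upvoted older post win. Change the exponent or the offset and the winner can flip - which is why who tunes those numbers, and to what end, matters.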
It would be really nice if, at the very least, we could get some insight into how these algorithms are tuned. It seems obvious that Facebook and X want users to get pissed off. That does not seem ethical at all and should be examined.
While transparency would be helpful for discussion, I don't think it would stop propaganda, misinformation and outright bullshit from being disseminated to the masses, because people just don't care. Even if the algorithm were transparently built to push false narratives, people would just shrug and keep using it. The average person doesn't care about the who, what or why as long as they're entertained. But yes, transparency would be a good first step.
Nah. It's just that people, including me, don't want to think too much about the information when it's presented to us. Most like to read just the headline and draw a conclusion. It's the laziness in thinking and the emotional reactions that make this whole situation worse.
Algorithms (recommendation engines) are just a catalyst.
Non-consensual user-modeling systems should be heavily regulated.
The real question is how do you ban algorithms without banning editorial discretion of the press?