I've been speaking and writing lately about how tech companies try to shift the discussion on misinformation, polarization, harassment, etc., away from the systems and structures that are inherently toxic and toward questions of individual behavior.
This way they can blame their own users for any pathology and steer clear of calls for systemic change.
Today Elon Musk has come through with a perfect illustration for my future talks.
This should be obvious, but having an algorithm that behaves that way is a DELIBERATE CHOICE.
It would be easy enough, for example, to implement basic sentiment analysis so that, going forward, the algorithm doesn't boost content you have reacted negatively to.
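To make that concrete, here's a minimal sketch of the idea — all names are hypothetical, and a real platform would use a trained model rather than a toy word list, but it shows how negative reactions and crude sentiment scoring could downrank inflammatory content instead of boosting it:

```python
# Hypothetical sketch: use a user's negative reactions as a ranking signal
# so the feed stops boosting similar content. A real system would use a
# trained sentiment model, not this toy lexicon.

NEGATIVE_WORDS = {"outrage", "disgrace", "traitor", "destroy", "enemy"}

def sentiment_score(text: str) -> float:
    """Crude lexicon-based sentiment: fraction of inflammatory words."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in NEGATIVE_WORDS for w in words) / len(words)

def rank_feed(posts, negatively_reacted_ids, engagement):
    """Rank posts, penalizing inflammatory language instead of boosting it.

    posts: list of (post_id, text); engagement: post_id -> raw score.
    Posts the user reacted negatively to are dropped outright.
    """
    ranked = []
    for post_id, text in posts:
        if post_id in negatively_reacted_ids:
            continue  # the user already said "show me less of this"
        # Scale raw engagement down by how inflammatory the text is.
        score = engagement[post_id] * (1.0 - sentiment_score(text))
        ranked.append((score, post_id))
    ranked.sort(reverse=True)
    return [post_id for _, post_id in ranked]
```

With this, a calm post with engagement 25 outranks an inflammatory one with engagement 40, because the latter's score is cut by its sentiment penalty. The point isn't that this sketch is production-ready; it's that the signal already exists and wiring it into ranking is a choice.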
Musk is playing it both ways. He keeps the algorithm that boosts inflammatory content and drives the online conflicts that draw views and clicks, while shifting the blame for those conflicts onto the individuals involved.
That sucks.
"He keeps the algorithm..."
Perhaps it's due to other changes (such as reinstating formerly banned accounts), but Twitter feels like it has amplified this algorithmic choice.
@Greggentry @ct_bergstrom Yes. Instead of seeing accounts I follow, I’m offered up accounts those accounts follow. If I’m following legal analysts who follow domestic extremists, I want the analyses, not the hot takes from the extremists, in my feed.