Granting free speech to others is not a natural human impulse. It is unsettling to accept that one's world view is a perspective rather than a set of absolute truths. To institutionalize free speech is to tacitly accept that 'truth' is more relative than we would like. That does not mean the notion of 'my truth' is supported, but conflicting evidence is the norm in the real world. So the impulse to 'moderate' misinformation is not surprising. However, from the Russian collusion hoax to the source of the SARS-CoV-2 virus to the Hunter Biden laptop, the misinformation censored by the major social media platforms has often turned out to be the more likely correct side of the story.
It is understandable that a company offering a website that is ostensibly a public forum will feel a responsibility to keep the site usable, and to ensure that the venom that can arise when contentious issues are discussed does not intimidate participants to the point that they hold back from participating as fully as they wish. So the first impulse is to ban hate speech. The problem is that people have different tolerances for hostile rhetoric. Wherever the site sets its hurdle, it will be too low for some and too high for others.
Sadly, most of the largest sites that have been banning hate speech have now morphed into banning 'misinformation'. The implication is that free speech does not protect speech that is wrong. In fact, free speech does protect questionable statements, and it even protects outright lies. Still, at worst, the people who were against COVID-19 vaccines, who believed that Trump actually won the 2020 election, or who think the fear of climate change is overblown are merely mistaken. History, evidence, and the fullness of time have often strengthened positions such as these that had been censored or throttled as misinformation.
Elon Musk cannot 'fix' Twitter simply by tweaking the algorithms or changing the membership of moderation boards. A completely different approach is required. I advocate the method delineated below, which involves moderation on three levels.
- Twitter should have algorithms that flag potentially illegal speech, or posts that appear to constitute reasonable evidence of crimes, and refer them to law enforcement. These cases transcend simple censoring or banning.
- Users should be able to set blanket filters on their accounts. For example, if a person does not want to see any pornography, they can select that setting and such content will be throttled completely. If the algorithm still lets through something they did not want to see, they can flag it, and similar posts will be throttled. These blanket, consumer-driven filtering options should expand to include categories that might offend people on either side.
- When a person bans or throttles someone, others who are statistically similar (banned by the same people) will be downgraded; essentially, they will be throttled more than before the ban. Likes and retweets work the other way, lowering the throttling of that author and of those who are similar. The statistics behind this idea are complex, but it is ultimately doable.
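The third level, collaborative throttling driven by ban and like signals, can be sketched in code. The class below is a minimal illustration of the idea, not a description of any platform's actual system: the names (`ThrottleModel`, `spread`, `troll_a`) and the choice of Jaccard overlap as the similarity measure are my own assumptions standing in for the more complex statistics a real implementation would need.

```python
from collections import defaultdict

class ThrottleModel:
    """Illustrative sketch: per-user throttle scores nudged by ban/like
    signals and propagated to statistically similar authors."""

    def __init__(self, spread=0.5):
        self.spread = spread               # fraction of a ban signal passed to similar authors
        self.banned_by = defaultdict(set)  # author -> set of users who banned them
        self.scores = defaultdict(float)   # (user, author) -> throttle score

    def similarity(self, a, b):
        """Jaccard overlap of the user sets that banned authors a and b."""
        sa, sb = self.banned_by[a], self.banned_by[b]
        if not sa or not sb:
            return 0.0
        return len(sa & sb) / len(sa | sb)

    def ban(self, user, author):
        """Raise the author's throttle for this user, and partially raise
        throttles for authors who are banned by similar sets of people."""
        self.banned_by[author].add(user)
        self.scores[(user, author)] += 1.0
        for other in list(self.banned_by):
            if other != author:
                self.scores[(user, other)] += self.spread * self.similarity(author, other)

    def like(self, user, author):
        """A like lowers this user's throttling of the author (floor at zero)."""
        self.scores[(user, author)] = max(0.0, self.scores[(user, author)] - 1.0)

    def throttle(self, user, author):
        """Current throttle score; higher means shown less often to this user."""
        return self.scores[(user, author)]
```

For example, if two authors have been banned by the same users, a third user banning one of them automatically raises the throttle on the other as well. A production system would replace the brute-force loop with nearest-neighbor search over a sparse interaction matrix, but the core mechanism is the same.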
In essence, each user will, over time, create their own custom silo that will be a fuzzy set when compared with other users. This will create 'environments'. Some users will be inclined toward STEM, others toward the arts. Some will create ribald and 'in your face' environments while others will be more urbane. That is the proper implementation of free speech. Essentially, you are free to say whatever you want, and I am free not to hear it. It is not the responsibility of Twitter, or of any other social media platform purporting to be a public square, to ensure that the public square is carefully moderated to eliminate harsh rhetoric and/or misinformation. They can, through modern technology, allow those in the public square to choose the audiences to which they belong and the speech they will hear.