Tuesday, November 15, 2022

Misinformation and Social Media Moderation

 

Granting free speech to others is not a natural human impulse.  It is very unsettling to think that one's world view is a perspective rather than an absolute truth.  To institutionalize free speech is to tacitly accept that 'truth' is more uncertain than we would like and often more relative than absolute.  That does not mean that throwing out rationality and retreating to a position of 'my truth' is supported; rather, we must accept that conflicting evidence is the norm in the real world and, consequently, temper our certainty at all times.

So, the impulse to 'moderate' misinformation is not surprising.  However, it is unacceptable for two reasons.  First, we have practical examples, from the Russian collusion allegations to the source of the SARS-CoV-2 virus to Hunter Biden's laptop, where the 'misinformation' censored by the major social media platforms often turned out to be more likely correct than not.  Second, it is a matter of evenness of application.


I am a Marlovian.  That means that I believe that Christopher Marlowe wrote the plays and sonnets that have been attributed to William Shakespeare.  I have concluded that the evidence best supports the hypothesis that Marlowe's death was faked, that he fled to the continent, and that he had his friend, Shakespeare, register the plays in his name.  I won't go into all the evidence here.  However, the community of experts in English literature would consider the Marlovian theory to be misinformation.  It may be correct or it may be wrong, but either way, the First Amendment and other nations' provisions for protecting unpopular speech allow me to present my case.  I and other Marlovians are not censored, even though experts consider our theory misinformation, which illustrates how unevenly 'misinformation' moderation is applied.

It is obviously not just me and my Marlovian belief.  For example, in 1980 Walter and Luis Alvarez proposed that the primary cause of the extinction event 66 million years ago at the KT boundary was an asteroid impact.  The hypothesis arose from geological studies that Walter Alvarez undertook in the 1970s, which found excess iridium in the KT boundary layer.  Despite the rewritten history you will find on the Internet, the initial reaction was extraordinarily rancorous, with the mainstream paleontology community flatly rejecting it.  Eventually, the 'smoking gun' crater was found in the Yucatan and dated to the KT boundary.  That made the hypothesis difficult to reject, though to this day some paleontologists still try.  Until then, it was absolutely treated like 'misinformation'.  However, while derided, it was not censored.

When I was young, it was believed that ulcers were caused by stress and lifestyle.  However, in 1982 Barry Marshall and Robin Warren discovered that most ulcers are caused by a bacterium known as Helicobacter pylori, a discovery for which they eventually won the Nobel Prize in Medicine.  When they first published their findings, they were ridiculed.  Today, their claim would be called misinformation.  So, again, we see the danger of censoring what is considered misinformation.

In a more recent example, the 'Hunter Biden laptop', when published by the New York Post, was labeled as misinformation and consequently suppressed in most of the news media.  We now know that it was legitimate.  Some people, including me, think that the misbehavior of a politician's family member is not relevant to the election process.  However, the laptop does contain material that may be construed as evidence that Joe Biden was involved in a 'pay for play' scheme with foreign governments while Vice President.  I will leave that to be adjudicated elsewhere, but the laptop itself, whether it was politically relevant or not, was an example of 'misinformation' that turned out to be true.  And that is the point.

Because we can only hope to distinguish truth from falsity through an open and unthrottled public discourse, and because we can never be completely sure whether someone is mistaken or purposely lying, lies, too, must be protected speech.  In a truly open, free speech environment, lies eventually fall under the weight of contrary evidence.

I could go on and on, but I think I have made my point.  A very significant portion of what might be called the advancement of human knowledge began as misinformation.  If we censor it, we may remove disingenuous and often silly narratives from the public discourse, but only at the cost of stifling important new insights.  This is why Free Speech must protect misinformation.  We must allow the process of argumentation to resolve these issues, not moderation boards, whether constituted by social media platforms or by government officials.

In criminal matters, the jury is admonished that, in order to convict, guilt must be established beyond a reasonable doubt.  Even that is a less stringent burden of proof than the one that should be met before condoning censorship.  Traditionally, that standard has been the likelihood that the speech could reasonably be expected to result in overt public harm, with crying 'fire!' in a crowded building, where it could cause stampeding deaths, as the most often quoted example.  Some legal scholars even claim that the possibility of public harm is too low a burden of proof.

Lastly, hate speech needs to be protected, too.  For many, that may seem counter-intuitive, but the same argument about uneven enforcement applies.  If you listen to a Leftist talk about Donald Trump, the hatred is obvious and openly expressed.  If you listen to a Rightist talk about child pornographers, likewise, the hate is not disguised.  Nearly everyone, save for a few very devout Christians on the Right and absolute Libertines on the Left, hates, and most people consider their hate to be justified.

So, we are caught in a situation where we all disapprove of speech that communicates hatred in some contexts but approve of it in others.  Some think it is OK to hate Nazis; some hate Jews; others hate the opposite sex; a growing number hate people who try to impose gender norms.  It seems that everyone wants to ban some hate speech, but nobody wants to ban all hate speech.  Thus, the banning of hate speech becomes highly problematic.

So, it should be obvious that a different approach to the moderation of social media sites must be found.  This is a very current concern because Elon Musk has purchased Twitter and has vowed to restore Free Speech.  If he continues with the existing moderation model, he is doomed to fail.  As we saw above, there is no way to do it well.

However, it is understandable that a company offering a website that is ostensibly a public forum will feel a responsibility to keep its site usable, and to ensure that the venom that can arise when contentious issues are discussed does not intimidate participants to the point that they hold back or even flee.  So, the first impulse is to ban misinformation and hate speech.  I, personally, don't think that ad hominem has any place in public discourse, and I block people who engage in it.  However, anyone who has waded into social media knows that most people don't agree with me.
People have different tolerances for hostile rhetoric.  Wherever the social media site sets its hurdle for hate speech, it will be too low for some users and too high for others.

Elon Musk cannot 'fix' Twitter simply by tweaking the algorithms or changing the members of moderation boards.  A completely different approach is required.  After some deep reflection, I advocate the method delineated below.  It involves moderation on three levels.

  1. Twitter should have algorithms that flag potentially illegal speech or posts that appear to constitute reasonable evidence of crimes.  These should be referred to law enforcement; such instances transcend simple censoring or banning.  As an additional, though minor, challenge, each jurisdiction within which Twitter operates will likely need a different algorithm.  The bigger problem will be determining within which jurisdiction the speech actually took place.  (A minimal sketch of such a per-jurisdiction dispatcher appears after this list.)

  2. Users should be able to set blanket filtering on their accounts.  For example, if a person does not want any pornography, they can select that setting and it will be 100% throttled.  If the algorithm inadvertently allows them to see something they don't want to see, they can flag it and, using algorithms, similar posts will be throttled.  Over time, the algorithm will learn precisely what the user means by pornography.  These blanket, consumer-based filtering options should expand to include categories that might offend people on both sides.  (A sketch of such a learning filter also appears after this list.)

  3. When a user blocks or throttles someone, others who are statistically similar (blocked by the same people) will be downgraded or throttled as well; essentially, they will be throttled more than before the block.  Conversely, when you like or retweet a post, that lowers the author's throttling, if there is any, along with that of statistically similar authors.  The statistics behind this idea are complex, but it is ultimately doable.  (A simplified sketch follows the list.)
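
To make level 1 concrete, here is a minimal Python sketch of how per-jurisdiction flagging might be dispatched.  Everything in it is a hypothetical illustration: the detector rules, the function names, and the referral step are my assumptions, not Twitter's actual systems, and real legal-content detection would be vastly more involved.

```python
# Minimal sketch of level 1: per-jurisdiction flagging of potentially
# illegal posts, referred to law enforcement rather than merely censored.
# All names and rules below are hypothetical illustrations.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    post_id: str
    text: str
    jurisdiction: str  # the hard part: inferring where the speech "took place"

# Each jurisdiction gets its own detector, since what is illegal differs
# from one legal system to the next.
def us_detector(text: str) -> bool:
    # placeholder rule: e.g., true threats, incitement to imminent lawless action
    return "true threat" in text.lower()

def de_detector(text: str) -> bool:
    # placeholder rule: e.g., content prohibited under German law
    return "banned symbol" in text.lower()

DETECTORS: dict[str, Callable[[str], bool]] = {
    "US": us_detector,
    "DE": de_detector,
}

def refer_to_law_enforcement(post: Post) -> None:
    # In a real system this would open a case with the relevant agency.
    print(f"Referring post {post.post_id} to authorities in {post.jurisdiction}")

def screen(post: Post) -> None:
    detector = DETECTORS.get(post.jurisdiction)
    if detector and detector(post.text):
        refer_to_law_enforcement(post)
```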
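Level 2 is essentially an online text classifier trained per user.  The sketch below, which assumes scikit-learn is available and uses invented class and method names, shows the general shape: the user's flags become labeled training examples, and the filter gradually learns that user's personal definition of an unwanted category.

```python
# Minimal sketch of level 2: a per-user filter that starts from a blanket
# category setting and learns from the user's own flags.  A hypothetical
# illustration; a production system would use far richer content models.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

class PersonalFilter:
    def __init__(self):
        self.vectorizer = HashingVectorizer(n_features=2**16)
        self.model = SGDClassifier(loss="log_loss")  # supports online updates
        self.trained = False

    def flag(self, text: str, unwanted: bool) -> None:
        """The user marks a post as unwanted (True) or acceptable (False)."""
        X = self.vectorizer.transform([text])
        y = [1 if unwanted else 0]
        self.model.partial_fit(X, y, classes=[0, 1])  # incremental learning
        self.trained = True

    def should_throttle(self, text: str) -> bool:
        if not self.trained:
            return False  # fall back to the blanket category setting
        X = self.vectorizer.transform([text])
        return bool(self.model.predict(X)[0])
```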
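Level 3 amounts to collaborative filtering over block lists.  The following deliberately simplistic sketch uses Jaccard similarity of 'who blocked whom' to propagate a block to statistically similar authors; the real statistics would, as noted, be more complex, and all names and weights here are illustrative assumptions.

```python
# Minimal sketch of level 3: propagating a block to statistically similar
# authors via Jaccard similarity of blocker sets.  Hypothetical and
# deliberately simplistic.
from collections import defaultdict

blocked_by: dict[str, set[str]] = defaultdict(set)  # author -> users who blocked them
throttle: dict[str, dict[str, float]] = defaultdict(lambda: defaultdict(float))
# throttle[user][author] in [0, 1]; higher means shown less often

def jaccard(a: set[str], b: set[str]) -> float:
    # Overlap between two blocker sets, 0.0 (disjoint) to 1.0 (identical).
    return len(a & b) / len(a | b) if (a | b) else 0.0

def block(user: str, author: str, all_authors: list[str], weight: float = 0.5) -> None:
    blocked_by[author].add(user)
    throttle[user][author] = 1.0  # fully throttled for this user
    # Authors blocked by the same people get throttled a little more, too.
    for other in all_authors:
        if other != author:
            sim = jaccard(blocked_by[author], blocked_by[other])
            throttle[user][other] = min(1.0, throttle[user][other] + weight * sim)

def like(user: str, author: str, relief: float = 0.2) -> None:
    # Likes and retweets lower the author's throttling, if there is any.
    throttle[user][author] = max(0.0, throttle[user][author] - relief)
```

Note that in all three sketches the throttling is per viewer: nothing is removed globally, which is exactly what distinguishes this approach from the moderation-board model.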

In essence, each user will, over time, create their own custom silo, which will be a fuzzy set when compared with other users' silos.  This will create 'environments'.  Some users will be inclined toward STEM, others toward the arts.  Some will create ribald and 'in your face' environments while others will be more urbane.  That is the proper implementation of free speech.  Essentially, you are free to say whatever you want, and I am free to hear it or not hear it.  It is not the responsibility of Twitter, or of any other social media platform purporting to be a public square, to assure that the public square is carefully moderated to eliminate harsh rhetoric and/or misinformation.  They can, through modern technology, allow those in the public square to choose the audiences to which they belong and the speech they will hear.


I understand that some people will say that believing in pyramid-building aliens is harmless misinformation, but that advocating for opening schools during the COVID-19 pandemic was dangerous.  No.  Actually opening schools may or may not have been dangerous, but arguing for or against it was not.  Clearly, both the governments of many jurisdictions and most large social media platforms were overtly attempting to stifle any messaging suggesting that schools should have been open.  Today, in hindsight, it is not clear that their position was the correct one.

EUNA needs to realize that no matter how good the intentions behind this censorious impulse may be (and I am not sure that they are always pure), it is fundamentally illiberal and should not be supported by thoughtful citizens.


