How Misinformation Spreads—and Why We Trust It

In the mid-1800s a caterpillar the size of a human finger began spreading across the northeastern U.S. This appearance of the tomato hornworm was followed by terrifying reports of fatal poisonings and aggressive behavior toward people. In July 1869 newspapers across the region posted warnings about the insect, reporting that a girl in Red Creek, N.Y., had been “thrown into spasms, which ended in death” after a run-in with the creature. That fall the Syracuse Standard printed an account from one Dr. Fuller, who had collected a particularly enormous specimen. The physician warned that the caterpillar was “as poisonous as a rattlesnake” and said he knew of three deaths linked to its venom.

Although the hornworm is a voracious eater that can strip a tomato plant in a matter of days, it is, in fact, harmless to humans. Entomologists had known the insect to be innocuous for decades when Fuller published his dramatic account, and his claims were widely mocked by experts. So why did the rumors persist even though the truth was readily available? People are social learners. We develop most of our beliefs from the testimony of trusted others such as our teachers, parents and friends. This social transmission of knowledge is at the heart of culture and science. But as the tomato hornworm story shows us, our ability has a gaping vulnerability: sometimes the ideas we spread are wrong.

Over the past five years the ways in which the social transmission of knowledge can fail us have come into sharp focus. Misinformation shared on social media Web sites has fueled an epidemic of false belief, with widespread misconceptions concerning topics ranging from the prevalence of voter fraud, to whether the Sandy Hook school shooting was staged, to whether vaccines are safe. The same basic mechanisms that spread fear about the tomato hornworm have now intensified—and, in some cases, led to—a profound public mistrust of basic societal institutions. One consequence is the largest measles outbreak in a generation.

“Misinformation” may seem like a misnomer here. After all, many of today's most damaging false beliefs are initially driven by acts of propaganda and disinformation, which are deliberately deceptive and intended to cause harm. But part of what makes propaganda and disinformation so effective in an age of social media is the fact that people who are exposed to it share it widely among friends and peers who trust them, with no intention of misleading anyone. Social media transforms disinformation into misinformation.

Many communication theorists and social scientists have tried to understand how false beliefs persist by modeling the spread of ideas as a contagion. Employing mathematical models involves simulating a simplified representation of human social interactions using a computer algorithm and then studying these simulations to learn something about the real world. In a contagion model, ideas are like viruses that go from mind to mind. You start with a network, which consists of nodes, representing individuals, and edges, which represent social connections. You seed an idea in one “mind” and see how it spreads under various assumptions about when transmission will occur.

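The contagion setup just described takes only a few lines to implement. The sketch below is a minimal illustration, not the researchers' actual code; the ring network, transmission probability, and step count are arbitrary assumptions chosen for the example.

```python
import random

def simulate_contagion(adjacency, seed_node, p_transmit=0.3, steps=20, rng=None):
    """Seed an idea at one node and let it spread along social ties.

    Each step, every node that holds the idea passes it to each
    neighbor that lacks it, independently, with probability p_transmit.
    """
    rng = rng or random.Random(0)
    holders = {seed_node}
    for _ in range(steps):
        newly_convinced = set()
        for node in holders:
            for neighbor in adjacency[node]:
                if neighbor not in holders and rng.random() < p_transmit:
                    newly_convinced.add(neighbor)
        holders |= newly_convinced
    return holders

# A small ring network: each "mind" is linked to its two neighbors.
n = 10
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
reached = simulate_contagion(ring, seed_node=0)
print(f"{len(reached)} of {n} nodes ended up holding the idea")
```

Varying `p_transmit` or the network shape changes how far and how fast the idea travels, which is exactly the kind of question these models are built to probe.
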
Contagion models are extremely simple but have been used to explain surprising patterns of behavior, such as the epidemic of suicide that reportedly swept through Europe after publication of Goethe's The Sorrows of Young Werther in 1774 or when dozens of U.S. textile workers in 1962 reported suffering from nausea and numbness after being bitten by an imaginary insect. They can also explain how some false beliefs propagate on the Internet. Before the last U.S. presidential election, an image of a young Donald Trump appeared on Facebook. It included a quote, attributed to a 1998 interview in People magazine, saying that if Trump ever ran for president, it would be as a Republican because the party is made up of “the dumbest group of voters.” Although it is unclear who “patient zero” was, we know that this meme passed rapidly from profile to profile.

The meme's veracity was quickly evaluated and debunked. The fact-checking Web site Snopes reported that the quote was fabricated as early as October 2015. But as with the tomato hornworm, these efforts to disseminate truth did not change how the rumors spread. One copy of the meme alone was shared more than half a million times. As new individuals shared it over the next several years, their false beliefs infected friends who observed the meme, and they, in turn, passed the false belief on to new areas of the network.

This is why many widely shared memes seem to be immune to fact-checking and debunking. Each person who shared the Trump meme simply trusted the friend who had shared it rather than checking for themselves. Putting the facts out there does not help if no one bothers to look them up. It might seem like the problem here is laziness or gullibility—and thus that the solution is merely more education or better critical thinking skills. But that is not entirely right. Sometimes false beliefs persist and spread even in communities where everyone works very hard to learn the truth by gathering and sharing evidence. In these cases, the problem is not unthinking trust. It goes far deeper than that.

Trust the evidence

The Facebook page “Stop Mandatory Vaccination” has more than 140,000 followers. Its moderators regularly post material that is framed to serve as evidence for this community that vaccines are harmful or ineffective, including news stories, scientific papers and interviews with prominent vaccine skeptics. On other Facebook group pages, thousands of concerned parents ask and answer questions about vaccine safety, often sharing scientific papers and legal advice supporting antivaccination efforts. Participants in these online communities care very much about whether vaccines are harmful and actively try to learn the truth. Yet they come to dangerously wrong conclusions. How does this happen?

The contagion model is inadequate for answering this question. Instead we need a model that can capture cases where people form beliefs on the basis of evidence that they gather and share. It must also capture why these individuals are motivated to seek the truth in the first place. When it comes to health topics, there might be serious costs to acting on false beliefs. If vaccines are safe and effective (which they are) and parents do not vaccinate, they put their kids and immunosuppressed people at unnecessary risk. If vaccines are not safe, as the participants in these Facebook groups have concluded, then the risks go the other way. This means that figuring out what is true, and acting accordingly, matters deeply.

To better understand this behavior in our research, we drew on what is called the network epistemology framework. It was first developed by economists 20 years ago to study the social spread of beliefs in a community. Models of this kind have two parts: a problem and a network of individuals (or “agents”). The problem involves picking one of two choices: These could be “vaccinate” and “don't vaccinate” your children. In the model, the agents have beliefs about which choice is better. Some believe vaccination is safe and effective, and others believe it causes autism. Agent beliefs shape their behavior—those who think vaccination is safe choose to perform vaccinations. Their behavior, in turn, shapes their beliefs. When agents vaccinate and see that nothing bad happens, they become more convinced vaccination is indeed safe.

The second part of the model is a network that represents social connections. Agents can learn not only from their own experiences of vaccinating but also from the experiences of their neighbors. Thus, an individual's community is highly important in determining what beliefs they ultimately develop.

The network epistemology framework captures some essential features missing from contagion models: individuals intentionally gather data, share data and then experience consequences for bad beliefs. The findings teach us some important lessons about the social spread of knowledge. The first thing we learn is that working together is better than working alone, because an individual facing a problem like this is likely to prematurely settle on the worse theory. For instance, he or she might observe one child who turns out to have autism after vaccination and conclude that vaccines are not safe. In a community there tends to be some diversity in what people believe. Some test one action; some test the other. This diversity means that usually enough evidence is gathered to form good beliefs.

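A stripped-down version of such a model can be simulated directly. In the hypothetical sketch below, each agent holds a credence that option B (say, vaccinating) is the better of two choices; agents who believe it act on it, everyone sees the shared results, and all update by Bayes' rule. The success rates, agent count, and complete-network sharing are simplifying assumptions for illustration, not the authors' exact setup.

```python
import random

def bayes_update(credence, successes, trials, eps=0.05):
    """Posterior probability that B is the better option, given that B
    succeeds at rate 0.5 + eps if it is better and 0.5 - eps if not.
    (The binomial coefficient cancels in the ratio, so it is omitted.)"""
    p_good, p_bad = 0.5 + eps, 0.5 - eps
    like_good = p_good ** successes * (1 - p_good) ** (trials - successes)
    like_bad = p_bad ** successes * (1 - p_bad) ** (trials - successes)
    return credence * like_good / (credence * like_good + (1 - credence) * like_bad)

def simulate(n_agents=6, rounds=50, trials=10, eps=0.05, seed=1):
    rng = random.Random(seed)
    credences = [rng.random() for _ in range(n_agents)]  # P(B is better)
    for _ in range(rounds):
        # Only agents who already favor B try it, so only they produce
        # evidence; B really is better here (success rate 0.5 + eps).
        results = [sum(rng.random() < 0.5 + eps for _ in range(trials))
                   for c in credences if c > 0.5]
        # For simplicity everyone sees every result (a complete network).
        for successes in results:
            credences = [bayes_update(c, successes, trials, eps)
                         for c in credences]
    return credences

final = simulate()
print([round(c, 3) for c in final])
```

In most runs the believers' trials pile up evidence for the better option and the whole group converges on it, but if no one starts out willing to try B, no evidence is ever produced and the community settles prematurely on the worse theory, which is the failure mode described above.
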
But even this group benefit does not guarantee that agents learn the truth. Real scientific evidence is probabilistic, of course. For example, some nonsmokers get lung cancer, and some smokers do not get lung cancer. This means that some studies of smokers will find no connection to cancer. Relatedly, although there is no actual statistical link between vaccines and autism, some vaccinated children will be autistic. Thus, some parents observe their children developing symptoms of autism after receiving vaccinations. Strings of misleading evidence of this kind can be enough to steer an entire community wrong.

In the most basic version of this model, social influence means that communities end up at consensus. They decide either that vaccinating is safe or that it is dangerous. But this does not fit what we see in the real world. In actual communities, we see polarization—entrenched disagreement about whether or not to vaccinate. We argue that the basic model is missing two crucial ingredients: social trust and conformism.

Social trust matters to belief when individuals treat some sources of evidence as more reliable than others. This is what we see when anti-vaxxers trust evidence shared by others in their community more than evidence produced by the Centers for Disease Control and Prevention or other medical research groups. This mistrust can stem from all sorts of things, including previous negative experiences with doctors or concerns that health care or governmental institutions do not care about their best interests. In some cases, this distrust may be justified, given that there is a long history of medical researchers and clinicians ignoring legitimate issues from patients, particularly women.

Yet the net result is that anti-vaxxers do not learn from the very people who are collecting the best evidence on the subject. In versions of the model where individuals do not trust evidence from those who hold very different beliefs, we find communities polarize, and those with poor beliefs fail to learn better ones.

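One way to add social trust to such a model is to let agents simply ignore evidence from anyone whose credence is too far from their own. The sketch below is a hypothetical toy version of that idea; the threshold rule, learning rate, and starting camps are all invented for illustration.

```python
import random

def step(credences, rng, threshold=0.3, lr=0.1, p_true=0.7):
    """One round: agents who favor the action try it (it succeeds with
    probability p_true) and share the outcome; each agent heeds only
    evidence from sources whose credence is within `threshold` of its own."""
    evidence = {j: 1.0 if rng.random() < p_true else 0.0
                for j, cj in enumerate(credences) if cj > 0.5}
    updated = []
    for i, c in enumerate(credences):
        heeded = [outcome for j, outcome in evidence.items()
                  if abs(c - credences[j]) <= threshold]
        if heeded:
            c += lr * (sum(heeded) / len(heeded) - c)
        updated.append(c)
    return updated

rng = random.Random(0)
creds = [0.05, 0.10, 0.15, 0.85, 0.90, 0.95]  # two entrenched camps
for _ in range(200):
    creds = step(creds, rng)
print([round(c, 2) for c in creds])
```

The skeptical camp never acts, produces no evidence of its own, and discounts everyone who does, so its members never move: polarization is stable even though plenty of good evidence is being generated nearby.
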
Conformism, meanwhile, is a preference to act in the same way as others in one's community. The urge to conform is a profound part of the human psyche and one that can lead us to take actions we know to be harmful. When we add conformism to the model, what we see is the emergence of cliques of agents who hold false beliefs. The reason is that agents connected to the outside world do not pass along information that conflicts with their group's beliefs, meaning that many members of the group never learn the truth.

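Conformity can be added even more simply: let each agent copy the majority action in its neighborhood, regardless of evidence. The toy network below (invented for illustration) shows why a tight clique can hold a false belief indefinitely even when one member is in contact with better-informed outsiders.

```python
def conformist_step(actions, adjacency):
    """Each agent adopts the action taken by the majority of its
    neighbors plus itself (ties go to action 0)."""
    new_actions = {}
    for node, neighbors in adjacency.items():
        votes = [actions[j] for j in neighbors] + [actions[node]]
        new_actions[node] = 1 if 2 * sum(votes) > len(votes) else 0
    return new_actions

# Agents 0-4 form a tight clique holding the false belief (action 0);
# agent 0 is also linked to outsiders 5-7, who hold the truth (action 1).
adjacency = {
    0: [1, 2, 3, 4, 5],
    1: [0, 2, 3, 4], 2: [0, 1, 3, 4], 3: [0, 1, 2, 4], 4: [0, 1, 2, 3],
    5: [0, 6, 7], 6: [5, 7], 7: [5, 6],
}
actions = {i: (0 if i < 5 else 1) for i in range(8)}
for _ in range(10):
    actions = conformist_step(actions, adjacency)
print(actions)
```

The bridge agent hears the truth every round but is always outvoted by its own clique, so the conflicting information never propagates inward; the configuration is a stable fixed point.
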
Conformity can help explain why vaccine skeptics tend to cluster in certain communities. Some private and charter schools in southern California have vaccination rates in the low double digits. And rates are startlingly low among Somali immigrants in Minneapolis and Orthodox Jews in Brooklyn—two communities that have recently suffered from measles outbreaks.

Interventions into vaccine skepticism need to be sensitive to both social trust and conformity. Simply sharing new evidence with skeptics will likely not help, because of trust issues. And convincing trusted community members to speak out for vaccination might be difficult because of conformism. The best approach is to find individuals who share enough in common with members of the relevant communities to establish trust. A rabbi, for instance, might be an effective vaccine ambassador in Brooklyn, whereas in southern California, you might need to get Gwyneth Paltrow involved.

Social trust and conformity can help explain why polarized beliefs can emerge in social networks. But at least in some cases, including the Somali community in Minnesota and Orthodox Jewish communities in New York, they are only part of the story. Both groups were the targets of sophisticated misinformation campaigns designed by anti-vaxxers.

Influence operations

How we vote, what we buy and who we acclaim all depend on what we believe about the world. As a result, there are many wealthy, powerful groups and individuals who are interested in shaping public beliefs—including those about scientific matters of fact. There is a naive idea that when industry attempts to influence scientific belief, they do it by buying off corrupt scientists. Perhaps this happens sometimes. But a careful study of historical cases shows there are much more subtle—and arguably more effective—strategies that industry, nation states and other groups utilize. The first step in protecting ourselves from this kind of manipulation is to understand how these campaigns work.

A classic example comes from the tobacco industry, which developed new techniques in the 1950s to fight the growing consensus that smoking kills. During the 1950s and 1960s the Tobacco Institute published a bimonthly newsletter called “Tobacco and Health” that reported only scientific research suggesting tobacco was not harmful or research that emphasized uncertainty regarding the health effects of tobacco.

The pamphlets employ what we have called selective sharing. This approach involves taking real, independent scientific research and curating it, by presenting only the evidence that favors a preferred position. Using variants on the models described earlier, we have argued that selective sharing can be shockingly effective at shaping what an audience of nonscientists comes to believe about scientific matters of fact. In other words, motivated actors can use seeds of truth to create an impression of uncertainty or even convince people of false claims.

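The arithmetic of selective sharing is easy to demonstrate. In the hypothetical simulation below, every "study" is real, honestly generated data on an intervention that genuinely works 70 percent of the time; the curator simply declines to pass along the favorable ones. All numbers are invented for the example.

```python
import random

def run_study(rng, p=0.7, n_trials=10):
    """One honest 'study': count successes in n_trials Bernoulli(p) trials."""
    return sum(rng.random() < p for _ in range(n_trials))

rng = random.Random(42)
studies = [run_study(rng) for _ in range(200)]

# An honest digest reports every study; the curated digest passes along
# only studies in which at most half the trials succeeded.
honest_rate = sum(studies) / (len(studies) * 10)
curated = [s for s in studies if s <= 5]
curated_rate = sum(curated) / (len(curated) * 10)

print(f"success rate across all studies:    {honest_rate:.2f}")
print(f"success rate in the curated digest: {curated_rate:.2f}")
```

No individual study is fabricated, yet a reader who sees only the curated digest would reasonably conclude the intervention fails about half the time: seeds of truth, misleading in aggregate.
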
Selective sharing has been a key part of the anti-vaxxer playbook. Before the recent measles outbreak in New York, an organization calling itself Parents Educating and Advocating for Children's Health (PEACH) produced and distributed a 40-page pamphlet entitled “The Vaccine Safety Handbook.” The information shared—when accurate—was highly selective, focusing on a handful of scientific studies suggesting risks associated with vaccines, with minimal consideration of the many studies that find vaccines to be safe.

The PEACH handbook was especially effective because it combined selective sharing with rhetorical strategies. It built trust with Orthodox Jews by projecting membership in their community (though published pseudonymously, at least some authors were members) and emphasizing concerns likely to resonate with them. It cherry-picked facts about vaccines intended to repulse its particular audience; for instance, it noted that some vaccines contain gelatin derived from pigs. Wittingly or not, the pamphlet was designed in a way that exploited social trust and conformism—the very mechanisms crucial to the creation of human knowledge.

Worse, propagandists are constantly developing ever more sophisticated methods for manipulating public belief. Over the past several years we have seen purveyors of disinformation roll out new ways of creating the impression that certain false beliefs are widely held, including by your friends and others with whom you identify. They do this especially through social media conduits such as Twitter bots and paid trolls and, most recently, by hacking or copying your friends' accounts. Even the PEACH creators may have encountered this kind of synthetic discourse about vaccines. According to a 2018 article in the American Journal of Public Health, such disinformation was distributed by accounts linked to Russian influence operations seeking to amplify American discord and weaponize a public health issue. This strategy works to change minds not through rational arguments or evidence but simply by manipulating the social spread of knowledge and belief.

The sophistication of misinformation efforts (and the highly targeted disinformation campaigns that amplify them) raises a troubling problem for democracy. Returning to the measles example, children in many states can be exempted from mandatory vaccinations on the grounds of “personal belief.” This became a flash point in California in 2015 following a measles outbreak traced to unvaccinated children visiting Disneyland. Then governor Jerry Brown signed a new law, SB277, removing the exemption.

Immediately vaccine skeptics filed paperwork to put a referendum on the next state ballot to overturn the law. Had they succeeded in getting 365,880 signatures (they made it to only 233,758), the question of whether parents should be able to opt out of mandatory vaccination on the grounds of personal belief would have gone to a direct vote—the results of which would have been susceptible to precisely the kinds of disinformation campaigns that have caused vaccination rates in many communities to plummet.

Luckily, the effort failed. But the fact that hundreds of thousands of Californians supported a direct vote about a question with serious bearing on public health, where the facts are clear but widely misconstrued by certain activist groups, should give serious pause. There is a reason that we care about having policies that best reflect available evidence and are responsive to reliable new information. How do we protect public well-being when so many citizens are misled about matters of fact? Just as individuals acting on misinformation are unlikely to bring about the outcomes they desire, societies that adopt policies based on false belief are unlikely to get the results they want and expect.

The way to decide a question of scientific fact—are vaccines safe and effective?—is not to ask a community of nonexperts to vote on it, especially when they are subject to misinformation campaigns. What we need is a system that not only respects the processes and institutions of sound science as the best way we have of learning the truth about the world but also respects core democratic values that would preclude a single group, such as scientists, dictating policy.

We do not have a proposal for a system of government that can perfectly balance these competing concerns. But we think the key is to better separate two essentially different issues: What are the facts, and what should we do in light of them? Democratic ideals dictate that both require public oversight, transparency and accountability. But it is only the second—how we should make decisions given the facts—that should be up for a vote.

This article was originally published with the title "Why We Trust Lies" in Scientific American 321, 3, 54-61 (September 2019)

doi:10.1038/scientificamerican0919-54

ABOUT THE AUTHOR(S)

Cailin O'Connor

Along with Weatherall, she is co-author of The Misinformation Age: How False Beliefs Spread (Yale University Press, 2019). She is also a member of the Institute for Mathematical Behavioral Sciences.

James Owen Weatherall

Along with O'Connor, he is co-author of The Misinformation Age: How False Beliefs Spread (Yale University Press, 2019). Weatherall is also a professor of logic and philosophy of science at the University of California, Irvine.
