This story is syndicated from The Spectator, the newspaper of Stuyvesant High School in New York City. The original version of the story ran here.
The number of times my mom has told me something and then supported it with “I saw it on Facebook” is too many to count, and many people in my social circles can relate to this experience. Indeed, Facebook has been deemed an unregulated “haven for misinformation,” and 76 percent of Americans disapprove of the social media giant because of it. However, with the current Israel-Hamas conflict overshadowing political and social discourse, this disapproval of widespread misinformation doesn’t seem genuine at all. Instead, people are now encouraging and engaging in it.
At the time of writing this article, I have counted over 50 infographics on the Israel-Hamas conflict posted on Instagram stories, from my closest friends to mere Stuyvesant acquaintances. Though each infographic had its own biases, something tied almost all of them together: inadequate sources. We can see this lack of evidential integrity on a larger scale as well. Between celebrities posting photos of children they believed to be endangered Israeli children but who were actually Gazan children, the fiercely debated blame on either Israel or the Palestinian Islamic Jihad (PIJ) for the Al-Ahli Arab hospital bombing, and the White House backtracking a statement made by Biden regarding unverified claims of beheaded Israeli babies, it's clear that this misinformation is getting out of hand.
And it's not just infographics. Reposts on X (formerly known as Twitter) that make claims about events that supposedly occurred only hours earlier often get exposed by reliable sources for misrepresenting the situation. Their true origins include video game footage, firework celebrations, videos taken from other wars, and more.
One notable example is a popular post on X showing footage of CNN journalists taking shelter during Hamas's surprise attack on October 7. In the video, you can hear the voice of a man saying, "Look round as if you're in danger; try to look nice and scared." Using that statement as evidence, the user claimed that CNN had staged the attack. However, after the user posted the video, multiple fact-checkers found that the voice had been fabricated. This example shows how the Israel-Hamas conflict is overloaded not only with misinformation but with disinformation as well.
The difference between the two comes down to intent. Disinformation is false information presented as fact by someone who knows it is false; that prior knowledge means it is spread with the intent to mislead. Misinformation, on the other hand, is false information spread without the intent to mislead and without prior knowledge that it is false.
AI has also contributed to this epidemic of disinformation, as deepfakes of celebrities expressing support for a certain side and generated images of burnt corpses have been going viral on social media platforms, and many are quick to believe them. Though some argue that the effect of AI-generated images in the broader scheme of disinformation is vastly overstated, the realism of AI is not something to disregard.
The effects of this type of disinformation have been historically destructive. For instance, exaggerated accounts of German attacks during World War I were used to garner public support, even though the level of atrocities committed by both sides was later reported to be of about the same severity. The resulting public skepticism made many witnesses reluctant to report Nazi atrocities in the lead-up to World War II. It is a situation of the boy who cried wolf. On top of that, the propaganda fueled anti-German sentiment across Allied countries.
Another example comes from the Philippine-American War, when political cartoons portrayed the Filipino population as savage and unclean, which served as justification for violent tactics in pursuit of annexation. Some estimates placed total deaths as high as 6,000 Americans and 300,000 Filipinos, a testament to how severe the effects of propaganda can be. Even in previous wars between Israel and Hamas, cognitive biases played into manipulating perspectives of certain events.
This rapid spread of disinformation and misinformation is not entirely people's fault, as social media companies have unethical incentives to let unchecked information spread across their platforms. Controversial and contentious content has higher engagement rates, which reaps better profits for social media companies. Therefore, it is in their economic interest to keep posts containing false information up.
Nevertheless, it is still our responsibility to hold ourselves accountable for the information we repost. We need to wait for fact-checkers, evaluate the reliability of the sources, and check for cited sources. Moreover, once we realize our initial information is wrong, we should quickly admit our mistake and correct ourselves.
But for some reason, this simple process isn't happening. Even after the misinformed posts I've seen reposted by Stuyvesant students and my friends get debunked, I haven't seen anyone admit their information was incorrect. This phenomenon is more than just anecdotal. The Pew Research Center reports that 16 percent of the U.S. adult population has shared a story they then realized was fake, and these people were also unlikely to admit that they were wrong. When someone discovers that something they believe is incorrect, they experience the psychological discomfort known as cognitive dissonance. The instinctive reaction to this unpleasant feeling is to let confirmation bias kick in and double down on the incorrect facts.
Instead of doubling down, some people take down their story after they realize their mistake. But this is not a better solution; it's cowardly and ineffective. It doesn't tell the dozens to hundreds of people who saw that story that the information needed to be corrected.
Unfortunately, some go beyond being afraid to admit their mistakes: they purposefully use incorrect information to gain support for a certain perspective. The aforementioned Pew Research report finds that 14 percent of the U.S. adult population has engaged in this furtive behavior. When this happens, misinformation becomes disinformation. A mistake becomes purposeful misdirection.
When someone wants to convince a crowd, they use disinformation that triggers emotional reactions to amass impassioned, impulsive support. They're so dead set on painting the other side as the worst enemy possible that they push hyperbolic images, stories, and videos to the center of attention, regardless of their validity. That's why the stories that go viral on X are often about the most vulnerable groups, such as children, pregnant women, and the elderly. Yet when the other side mirrors this behavior, they are quick to call it out as misleading and wrong. The double standard is jarring.
This behavior is inexcusable and morally egregious. Nothing in this world can justify using fabricated violence to emotionally manipulate people into believing a certain way, even if that belief is true.
Keeping all of this in mind, I urge everyone to be as meticulous as possible when reposting something. Take away the bias in the filtration of what information is accurate and what is not. And when a mistake does happen—and it will, because we are human—have the humility to admit it openly. Don’t let your passion on the subject cloud your ethical judgment.
The Israel-Hamas conflict is a dire situation where the loss of family members and friends, as well as the destruction of homes and social spaces, is becoming more and more common. So give it the respect it deserves, and be as truthful as possible.