In today’s interconnected world, the dissemination of disinformation has become a powerful weapon, employed by adversaries to destabilize societies, manipulate perceptions, and influence public opinion.
Psychology plays a critical role in both the creation and the counteraction of disinformation. Adversaries craft their messages to resonate on a deep psychological level. They know that fear and anger are potent emotions that can drive individuals to share misleading information without verifying its accuracy, amplifying the reach and impact of the lies.
Combating disinformation effectively requires a deep understanding of the same psychological principles. Training programs must focus on psychological resilience and critical thinking skills that empower individuals to recognize and resist manipulative tactics. Techniques such as inoculation theory can build cognitive resistance against disinformation attacks.
Inoculation theory, introduced by social psychologist William J. McGuire in the 1960s, draws an analogy between biological immunization and psychological resistance. Just as a vaccine exposes the body to a weakened form of a virus to build immunity, inoculation theory posits that exposing individuals to a weakened form of an argument or misinformation can build cognitive resistance to future, stronger attacks.
Furthermore, organizations must leverage psychological insights to craft counter-narratives that are not only factually accurate but also emotionally engaging and persuasive. By aligning these counter-narratives with the values and beliefs of the target audience, they can effectively neutralize the impact of disinformation.
What are deception, disinformation, misinformation, and propaganda?
Misinformation and disinformation are both forms of false information, but they differ significantly in intent and implications. Understanding these differences is crucial, especially in today’s information-rich society where the rapid spread of both has serious consequences.
Misinformation refers to false or inaccurate information that is spread without the intent to mislead. It occurs when people share information they believe to be true at the time, but that later proves to be false.
Disinformation is false or inaccurate information that is deliberately spread with the intention to deceive. It is a tactic often used in psychological warfare, propaganda, and by individuals or groups trying to influence public opinion or obscure the truth. Disinformation can be much more damaging than misinformation because it involves intentional deceit.
Examples of disinformation include information fabricated by governments to sway public opinion or destabilize other nations, false rumors spread about political candidates, and fake news stories created for financial gain or political influence.
Both misinformation and disinformation, through fabricated stories or misrepresented facts, can fuel political polarization, societal division, confusion, and violence.
Propaganda is the systematic attempt to shape perceptions, manipulate cognitions, and direct behavior to achieve a response that furthers the desired intent of the propagandist. It is often associated with persuasive techniques used by governments, organizations, and media to promote a specific political cause or point of view.
Propaganda often involves a more complex set of messages over time that build on each other, whereas disinformation can be more direct and immediate.
Propaganda uses disinformation as a method. While the French philosopher Jacques Driencourt asserted that everything is propaganda, the term is most often associated with political persuasion and psychological warfare.
Propaganda and hybrid warfare are closely intertwined in the modern geopolitical landscape. Hybrid warfare refers to a strategy that blends conventional warfare, irregular warfare, and cyber warfare with other influencing methods, such as economic warfare and propaganda. The aim is to exploit the vulnerabilities of the opponent across multiple domains (political, military, economic, social, and informational), often without crossing the threshold that would traditionally define war.
Bots, trolls, and fake news sites disseminate propaganda widely and quickly. State-controlled media outlets broadcast the state’s narrative domestically and internationally. Cyber attacks and espionage are used to steal information, which is subsequently altered and used in disinformation campaigns and propaganda. Incorporating elements of truth makes falsehoods more believable and harder to detect. This technique leverages several psychological and social dynamics that enhance the persuasiveness and durability of a lie.
Propaganda frequently utilizes emotional language to connect on a personal level and drive reactions. Complex issues are often reduced to simplified, binary choices, making it easier for the message to resonate with a wide audience.
Psychological warfare can be used to influence the opinion and behavior of the civilian population. It involves the use of psychological tactics to influence, disrupt, or demoralize adversaries. This type of warfare targets the mind, inducing fear, doubt, and confusion with the goal of weakening the fighting spirit or gaining a tactical advantage.
Psychological warfare uses misinformation (to mislead about one’s own capabilities, intentions, or movements) and propaganda (to influence the perceptions and attitudes of the enemy as well as neutral or even friendly populations).
In deception, according to Bell and Whaley, someone is showing the false and hiding the real. Hiding the real is divided into masking, repackaging, and dazzling, while showing the false is divided into mimicking, inventing, and decoying.
Disinformation is a subset of deception. While all disinformation is deception, not all deception is disinformation. Deception is a broader category that includes any act intended to mislead, while disinformation refers specifically to false information created and shared with the intent to deceive.
Propaganda may use deceptive techniques, including disinformation, to achieve its goals. Not all deceptive acts are propaganda if they don't aim to persuade on ideological grounds.
Detecting misinformation, disinformation, deception, and propaganda.
Citizens are remarkably bad at detecting misinformation, disinformation, deception, and propaganda.
They often trust what others say, and usually they are right to do so. This is called the "truth bias". People also tend to believe something when it is repeated. They tend to believe what they learn first about a topic, and subsequent rebuttals may reinforce the original information rather than dispel it.
Humans have an unconscious preference for things they associate with themselves, and they are more likely to believe messages from users they perceive as similar to themselves. They consider sources credible if other people consider them credible. They trust fake user profiles with images and background information they like.
Citizens must understand that millions of fake accounts follow thousands of real users, creating the perception of a large following. This large following enhances perceived credibility, and attracts more human followers, creating a positive feedback cycle.
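A toy simulation makes this feedback cycle concrete. The parameters below (a fixed number of purchased fake followers and a hypothetical 1 percent conversion rate per step) are illustrative assumptions, not measured values.

```python
# Toy model of the fake-follower feedback cycle (illustrative parameters only).
# Assumption: the probability that a real user follows an account grows with
# its apparent follower count -- a simplified stand-in for perceived credibility.

def simulate_credibility_cycle(fake_followers: int, steps: int = 10) -> list[int]:
    """Return total follower counts over time, starting from purchased fakes."""
    real_followers = 0
    history = []
    for _ in range(steps):
        total = fake_followers + real_followers
        # Hypothetical attraction rate: 1% of apparent followers convert
        # into new real followers each step.
        real_followers += int(total * 0.01)
        history.append(fake_followers + real_followers)
    return history

print(simulate_credibility_cycle(fake_followers=100_000))
```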
People are more likely to believe others who are in positions of power. Fake accounts present false credentials, such as fabricated affiliations with government agencies, corporations, activist groups, and political parties, to boost credibility.
Freedom of information and expression are of paramount importance in many cultures, and the more freedom of information we have, the better. But the more information we have, the more difficult it becomes to tell right from wrong. The right of expression and the freedom of information can be turned against citizens: information is frequently weaponized.
The Internet and social media are key game-changers in exploiting rights and freedoms. They provide the opportunity to spread limitless fake photos, reports, and "opinions". State-sponsored adversaries wage online wars using Twitter, Facebook, LinkedIn, Instagram, Pinterest, Viber, and other platforms. Imagination is the only limit.
Social media platforms, autonomous agents, and big data are directed towards the manipulation of public opinion. Social media bots (computer programs that use artificial intelligence to mimic human behaviour and conversations) spread political views, manufacture trends, game hashtags, spam the opposition, attack journalists and individuals, and carry out character assassination.
In the hands of state-sponsored groups, these automated tools can be used both to boost and to silence communication and organization among citizens.
By some estimates, 62 percent of all web traffic is generated by bots, not humans. Millions of social media accounts are bots, according to researchers at the University of Southern California.
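As a rough illustration of how bot-like accounts can be flagged, the sketch below scores an account against a few behavioral heuristics. The features and thresholds are hypothetical, chosen for this example rather than drawn from any published detection system.

```python
# Minimal heuristic bot score (illustrative thresholds, not a production detector).

from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float
    followers: int
    following: int
    profile_has_photo: bool
    account_age_days: int

def bot_score(acct: Account) -> float:
    """Return a 0..1 score; higher means more bot-like (hypothetical heuristics)."""
    score = 0.0
    if acct.posts_per_day > 50:                        # inhuman posting volume
        score += 0.35
    if acct.following > 10 * max(acct.followers, 1):   # mass-following pattern
        score += 0.25
    if not acct.profile_has_photo:                     # sparse profile
        score += 0.15
    if acct.account_age_days < 30:                     # newly created account
        score += 0.25
    return min(score, 1.0)

suspect = Account(posts_per_day=120, followers=15, following=4800,
                  profile_has_photo=False, account_age_days=12)
print(f"bot score: {bot_score(suspect):.2f}")  # prints 1.00 for this account
```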
Machine-driven communications tools (MADCOMs) use cognitive psychology and AI-based persuasive techniques. These tools spread information, messages, and ideas online for influence, propaganda, counter-messaging, disinformation, and intimidation. They use human-like speech to dominate the information space and capture the attention of citizens.
Artificial intelligence (AI) technologies enable computers to simulate cognitive processes, such as elements of human thinking. Machines can make decisions, perceive data or the environment, and act to satisfy objectives.
Disinformation agents often use the 4D Approach: Dismiss, Distort, Distract, Dismay.
Negative reporting and comments are dismissed. Disinformation agents either deny the allegations or discredit the people reporting them.
Disinformation agents always distort information to serve their overall narrative. They also use distraction techniques, to turn attention away from their activities.
Distraction creates doubt and confusion. The development of conspiracy theories around the facts further adds to the confusion.
Disinformation agents know how to hold up a mirror to any accusation, claiming that the accuser has already committed exactly the offense they now blame others for.
Spreading dismay, by warning that certain actions will have disastrous consequences for those planning them and the population, is another effective disinformation tool.
Through troll factories, disinformation operations and their agents patrol social media to:
a. attack those who have a different opinion.
b. amplify and validate the main disinformation message.
The weaponization of information, culture and money is a vital part of a new hybrid war, which combines disinformation with covert and small-scale military operations. According to Sun Tzu, to fight and conquer in all our battles is not supreme excellence; supreme excellence consists in breaking the enemy's resistance without fighting.
Understanding machine-driven communications tools (MADCOMs)
Key Features of MADCOMs
1. Automation of Communication Tasks: MADCOMs can automate routine communication tasks, such as responding to customer inquiries, scheduling meetings, or managing emails. This capability enhances efficiency and can lead to significant cost savings for businesses.
2. Personalization: Through the use of data analytics and machine learning, MADCOMs can personalize interactions based on the user's past behavior, preferences, and data. This level of personalization is often used in marketing, customer service, and even personalized news feeds.
3. Scalability: Unlike human-driven communication, MADCOMs can handle a vast number of interactions simultaneously, making them ideal for scaling operations in business contexts without a corresponding increase in human resources.
4. Real-Time Analysis and Response: Some MADCOMs are capable of analyzing large volumes of data in real-time and providing responses or communications based on this analysis. This feature is particularly useful in fields like financial trading, real-time customer service, and emergency response systems.
On the positive side, MADCOMs can be used in customer service, where virtual assistants and chatbots are used to handle customer queries, complaints, and FAQs, providing 24/7 service which can adapt to the customer's language and tone. They can be used in healthcare, where AI-driven tools help in patient management by scheduling appointments, reminding patients about medication, and even providing basic diagnostic support. They can be used in marketing, where automated tools manage social media posts, interact with users, and personalize advertising content based on user behavior and preferences.
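As a deliberately simplified illustration of the customer-service use case, the sketch below implements a tiny rule-based responder that combines automated intent matching with light personalization. The intents, keywords, and reply templates are invented for this example.

```python
# Minimal rule-based customer-service responder illustrating two MADCOM
# features: automated replies and light personalization from user data.
# All intents, keywords, and templates here are invented for illustration.

RESPONSES = {
    "refund":   "Hi {name}, I've opened a refund request for your last order.",
    "shipping": "Hi {name}, your order is on its way and should arrive soon.",
    "hours":    "Hi {name}, our support team is available 24/7.",
}

def reply(message: str, user_profile: dict) -> str:
    """Match a message against known intents and personalize the template."""
    text = message.lower()
    for intent, template in RESPONSES.items():
        if intent in text:
            return template.format(name=user_profile.get("name", "there"))
    # Fall back to a human agent when no intent matches.
    return "Sorry {name}, I didn't understand that. A human agent will follow up.".format(
        name=user_profile.get("name", "there"))

print(reply("What are your hours?", {"name": "Alex"}))
```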
On the negative side, MADCOMs can significantly enhance the capabilities of state and non-state actors in conducting hybrid warfare operations. The integration of MADCOMs into hybrid warfare strategies emphasizes the evolution of conflict in the digital age.
Role of MADCOMs in Hybrid Warfare:
1. Information Operations: MADCOMs can automate and optimize the dissemination of propaganda and disinformation at scale. These tools can generate, distribute, and amplify tailored messages across various platforms (social media, news sites, etc.), targeting specific populations to influence perceptions, sow discord, or manipulate public opinion.
2. Cyber Operations: AI-driven tools enhance cyber warfare capabilities by automating tasks such as network intrusion, data exfiltration, and the deployment of cyberattacks that can disrupt critical infrastructure or access sensitive information. These operations can be conducted faster and with more complexity than would be feasible for human operators alone.
3. Psychological Warfare: MADCOMs facilitate psychological operations by enabling the rapid analysis of large data sets to identify psychological vulnerabilities within a target population. Customized psychological messages can be crafted and delivered to exploit these vulnerabilities, thereby weakening enemy morale or garnering support for the aggressor’s cause.
4. Decision Support Systems: In hybrid warfare contexts, MADCOMs can serve as advanced decision support tools for military and political leaders, providing real-time strategic analysis, scenario forecasting, and response recommendations. This can improve the speed and effectiveness of decision-making in the fast-paced environment of hybrid conflicts.
Defending against misinformation, disinformation, deception, and propaganda.
Defending against misinformation, disinformation, deception, and propaganda is essential for peace and stability, democracy, risk and compliance management in companies and organizations, national security, and the protection of critical infrastructure.
Malicious tactics can lead to significant social, political, and economic consequences if left unchecked. Addressing these challenges involves a multi-faceted approach that incorporates education, technological solutions, regulatory frameworks, and individual responsibility.
1. Awareness and Training.
Awareness and training are crucial in teaching people how to critically evaluate the sources and content of information. This includes understanding the context in which information is presented, recognizing biased or misleading framing, and verifying facts through multiple reputable sources.
Encouraging critical thinking helps individuals question and analyze the information they receive rather than accepting it at face value. Educational initiatives can focus on helping people understand logical fallacies, cognitive biases, and the techniques used in producing manipulative content.
Governments and organizations must educate citizens and employees about the dangers of misinformation and disinformation.
2. Technological Solutions.
Development and wider adoption of automated fact-checking tools can help in real-time identification of false information. These tools can be integrated into social media platforms and web browsers to alert users to disputed content as they encounter it.
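One building block of such tools is matching incoming claims against a database of statements that have already been fact-checked. A minimal sketch follows, using fuzzy string similarity as a crude stand-in for the semantic matching a production system would require; the sample claims and verdicts are invented.

```python
# Minimal claim-matching sketch: compare an incoming claim against a small
# database of previously fact-checked statements using fuzzy similarity.
# Real systems use semantic embeddings; difflib is a rough stand-in.

from difflib import SequenceMatcher

# Hypothetical fact-check database (claim -> verdict).
FACT_CHECKS = {
    "drinking bleach cures the flu": "FALSE",
    "the election was held on schedule": "TRUE",
}

def check_claim(claim: str, threshold: float = 0.6) -> str:
    """Return the verdict of the closest known claim, or 'UNVERIFIED'."""
    best_verdict, best_ratio = None, 0.0
    for known, verdict in FACT_CHECKS.items():
        ratio = SequenceMatcher(None, claim.lower(), known).ratio()
        if ratio > best_ratio:
            best_verdict, best_ratio = verdict, ratio
    return best_verdict if best_ratio >= threshold else "UNVERIFIED"

print(check_claim("Drinking bleach will cure the flu"))  # likely matches FALSE
```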
AI can detect patterns indicative of fake news and propaganda, analyzing the authenticity of images and videos or detecting bot-like activity on social media platforms.
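A common baseline for this kind of detection is a supervised text classifier. The sketch below, assuming scikit-learn is available, trains a TF-IDF plus logistic-regression model on a handful of invented headlines; a real detector would need thousands of labeled articles and careful evaluation.

```python
# Baseline fake-news text classifier: TF-IDF features + logistic regression.
# The training headlines and labels below are invented for illustration;
# a real detector needs large labeled corpora and careful evaluation.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Scientists publish peer-reviewed study on vaccine safety",   # reliable
    "Government confirms budget figures in official report",      # reliable
    "SHOCKING secret cure THEY don't want you to know about",     # fake
    "You won't BELIEVE what this politician is hiding from you",  # fake
]
labels = ["reliable", "reliable", "fake", "fake"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

print(model.predict(["SHOCKING truth they are hiding from you"]))  # likely ['fake']
```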
3. Regulatory and Policy Measures.
Governments can enact laws that require transparency in online advertising, mandate the disclosure of information sources, and penalize the deliberate spread of false information that can cause harm.
A good example is the Digital Services Act (DSA) of the European Union.
Regulatory bodies can work with technology companies to enforce standards and practices that minimize the spread of harmful content while respecting freedom of speech. This includes adjusting algorithms that prioritize sensational or misleading content to gain user engagement.
Misinformation and disinformation are often cross-border issues, particularly when they involve state actors. International cooperation and treaties can play a critical role in addressing these challenges collectively.
Defending against misinformation, disinformation, deception, and propaganda requires a coordinated effort that combines educational initiatives, technological innovations, regulatory frameworks, and individual vigilance. By fostering a culture that values truth and openness, societies can better defend against the adverse effects of these deceptive practices.