Political Ads Can Target Your Personality. Here's What Could Go Wrong




More than 2,000 years ago, Socrates thundered against the invention of writing, fearful of the forgetfulness it would cause. Writing has since redeemed itself, but ChatGPT and its brethren, collectively known as GenAI, now trigger similar warnings that a new technology of language poses a threat to humanity. Geoffrey Hinton, who is sometimes called the “godfather of AI,” has issued a stark warning that GenAI might get out of control and “take over” from humans.

The World Economic Forum’s global risks report for 2024, which synthesizes the views of some 1,500 experts from academia, business and government, identified misinformation, turbocharged by GenAI, as the top risk worldwide for the next two years. Experts worry that manipulated information will amplify societal divisions, ideologically driven violence and political repression.

Although GenAI is designed to refuse requests to assist in criminal activity or breaches of privacy, scientists who conduct research on disinformation—false information intended to mislead with the goal of swaying public opinion—have raised the alarm that GenAI is going to become “the most powerful tool for spreading misinformation that has ever been on the Internet,” as one executive of a company that monitors online misinformation put it. One team of researchers has argued that through health disinformation, a foreign adversary could use GenAI to increase vulnerability in an entire population during a future pandemic.




Given that GenAI offers the capability to generate and customize messages at an industrial scale and within seconds, there is every reason to be concerned about the potential fallout.

Here’s why we’re worried. Our group at the University of Bristol recently published an article that underscored those risks by showing that GenAI can manipulate people after learning something about the kind of person they are. In our study, we asked ChatGPT to customize political ads so that they would be particularly persuasive to people with different types of personalities.

We presented GenAI with neutral public health messages and then asked it to rephrase those messages to appeal to the hundreds of participants in the study who were either high or low in openness to experience, one of the “Big Five” personality traits. Openness refers to a person’s willingness to consider new ideas and engage in imaginative and unconventional thinking.

GenAI happily complied, and sure enough, the versions of the ads that matched people’s personality (which we had deduced from a questionnaire that our participants had completed) were deemed to be more persuasive than those that mismatched.
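To make the procedure concrete, the kind of instruction used in such an experiment can be sketched in a few lines of Python. This is our own illustration, not the study’s actual materials: the prompt wording, the trait descriptions and the `build_rewrite_prompt` function are all hypothetical.

```python
# Hypothetical sketch: constructing personality-tailored rewrite prompts
# for a language model. The wording below is invented for illustration.

TRAIT_DESCRIPTIONS = {
    "high_openness": (
        "curious, imaginative people who enjoy novelty and unconventional ideas"
    ),
    "low_openness": (
        "practical, conventional people who prefer familiar, traditional approaches"
    ),
}

def build_rewrite_prompt(neutral_message: str, profile: str) -> str:
    """Return an instruction asking a language model to rephrase a neutral
    public health message so that it appeals to one personality profile."""
    audience = TRAIT_DESCRIPTIONS[profile]
    return (
        f"Rephrase the following public health message so that it appeals to "
        f"{audience}. Keep the core call to action unchanged.\n\n"
        f"Message: {neutral_message}"
    )

prompt = build_rewrite_prompt(
    "Vaccines should be available to everyone, everywhere. "
    "Tell Boris Johnson to take action.",
    "high_openness",
)
print(prompt)
```

Sending such a prompt for each participant’s profile is all the customization step requires; the model does the rest.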

Here’s one example of ad copy—based on an actual Facebook ad—that has been rewritten to appeal to people with personalities classified as having either a high or low degree of openness. (Facebook considers public health messages to be political ads.)

Original ad (taken from Facebook): Vaccines should be available to everyone, everywhere. Tell Boris Johnson to take action.

High openness ad: Experience the extraordinary and join the global movement for universal access to vaccines! Sign up now and help make sure everyone, everywhere can benefit from the power of vaccines.

Low openness ad: Protect yourself and your family. Get your vaccines and stay safe. Take the traditional approach and join the fight against disease. Tell Boris Johnson to take action now!

In our experiment, we obtained participants’ consent to assess their personality. In actual practice, advertisers and political operators are unlikely to request or receive such consent—and they may not be required to by law. Instead they may be able to exploit previous research, which has revealed that people’s “likes” on Facebook are indicators of their personality type. Advertisers may simply be able to target an audience with a particular personality profile by inspecting the interests people express on Facebook.
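In essence, the research referred to above fits a statistical model that maps likes to trait scores. The following is a purely illustrative toy sketch of that idea: the page names, weights and threshold are invented, whereas real studies estimate such weights from large labeled data sets.

```python
# Toy illustration of inferring a personality trait from "likes" and using
# it to pick an ad variant. All weights and page names are made up.

OPENNESS_WEIGHTS = {
    "modern art": 0.8,
    "philosophy": 0.7,
    "travel photography": 0.5,
    "country music": -0.4,
    "nascar": -0.6,
}

def estimated_openness(likes: list[str]) -> float:
    """Average the weights of a user's known likes; positive values suggest
    high openness, negative values low openness, 0.0 if nothing matches."""
    weights = [OPENNESS_WEIGHTS[l] for l in likes if l in OPENNESS_WEIGHTS]
    return sum(weights) / len(weights) if weights else 0.0

def ad_variant(likes: list[str]) -> str:
    """Choose which tailored ad to show, based on the estimated trait."""
    return "high_openness_ad" if estimated_openness(likes) > 0 else "low_openness_ad"

print(ad_variant(["modern art", "philosophy"]))  # high_openness_ad
print(ad_variant(["nascar", "country music"]))   # low_openness_ad
```

The point of the sketch is how little machinery is needed: once trait weights exist, matching each user to a personality-tailored message is a few lines of code.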

When combined with GenAI’s ability to generate customized messages, this technique places large-scale furtive manipulation within reach of bad-faith political operators or indeed foreign adversaries. Whereas manual targeting of market segments once required extensive funding and expertise, the availability of GenAI has dramatically lowered the cost. Political targeting is now cheaper and easier than ever before.

Personality is merely one dimension explored in research on microtargeting. The spectrum of psychological attributes for personalized manipulation is wide open, encompassing one’s personal values, moral foundations, cognitive biases and social identities. We suggest that the principles of microtargeting could well be adapted to this wide array of psychological domains, presenting a cautionary tale about the diverse ways in which influence might be exerted more subtly and broadly.

All of this raises a question: How can the public be protected against such manipulation? One option is regulation, demanding that GenAI be unbiased and fair in its output, no matter what the user asks for.

But there are several difficulties with this approach. One issue is that open-source versions of GenAI can be modified by individuals to evade regulation. Another challenge, even among good-faith actors, is that it can be impossible to determine what counts as unbiased or fair.

A final challenge to regulation is the political climate in which online regulation has become a polarized partisan issue. A recent court ruling even barred the U.S. government from communicating with social media platforms to safeguard elections against misinformation or to combat misinformation in a public health crisis. Although this injunction has since been lifted, at least temporarily, there is little doubt that any effort to regulate tech companies will face fierce political resistance.

Perhaps a better option is to rely on people developing the skills necessary to detect when they are being manipulated. The evidence supporting this possibility is, however, ambiguous. On the one hand, people’s skills at detecting manipulative language can clearly be boosted. Previous research has found promise in educational interventions; short instructive videos that raise awareness about manipulative language have been shown to enhance people’s detection abilities. Similarly, when individuals reflect on their own personality traits, they become more adept at discerning ads that are tailored to those traits, adjusting their perceptions accordingly.

On the other hand, it is far from clear whether people can fully dismiss information they know to be misleading or false. Misinformation is sticky, often persisting in influencing individuals’ beliefs and decisions despite being debunked. This stickiness highlights a critical gap in the battle against manipulative microtargeting: cognitive awareness alone may not suffice to erase the subtle imprint left by falsehoods or personalized manipulative messages.

As we approach what could be the biggest election year the world has ever seen, misinformation—exacerbated by ever-evolving technologies such as deepfakes—poses unprecedented threats to the integrity of democratic processes. The combination of sophisticated microtargeting and the difficulty of shedding the influence of misinformation underscores the urgent need for a multifaceted approach to safeguarding elections.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.


