As Trinidad and Tobago prepares for its upcoming general election, a leading technology expert is raising red flags about the growing role of artificial intelligence in political campaigning.
Digital anthropologist Daren Dhoray warns that the technology can be used to manipulate public opinion and undermine democratic processes.
Dhoray further cautioned that generative AI tools, such as ChatGPT, could pose significant threats to election integrity if left unregulated.
“Whether it is currently playing a role—I wouldn’t want to say no—but I’m almost certain that AI is influencing what the general public thinks about political party A or party B, especially since all political parties are now utilizing social media for much of their campaign messaging. It makes it difficult to discern what is real and what is fake.”
Dhoray’s comments come amid allegations from Opposition Leader Kamla Persad-Bissessar that members of the ruling People’s National Movement (PNM) are using AI to spread fake news as part of their campaign strategy.
Recently, an audio clip surfaced purporting to feature a conversation between Persad-Bissessar and Tobago House of Assembly (THA) Chief Secretary Farley Augustine, allegedly discussing collaboration ahead of the election. Persad-Bissessar has since labelled the clip as fake.
Prime Minister Stuart Young has pushed back, suggesting there is evidence that the United National Congress (UNC) has also been using AI to create fake social media profiles and generate misleading comments online.
Dhoray said he would not be surprised if major political parties have already incorporated AI into their campaign strategies—and highlighted the risks that come with it.
“It all comes back to the laws we have regarding how data can and cannot be used. While we do have some data protection and privacy legislation, it might be a stretch to use those laws to charge someone specifically for misinformation. Ethically, there’s also the question: is it morally right to use someone’s image or voice without consent? In cases like that, the law could be called upon under defamation of character.”
Dhoray urged the public to remain vigilant, noting that AI-generated content is becoming increasingly difficult to identify.
“It’s quite easy to do now. For those who aren’t familiar with the technology, it’s important to understand that a WhatsApp video being shared might not actually feature the person it appears to. It could very well be a fake video or image. This makes it much harder for those who are less tech-savvy.”
He maintained that while AI presents innovative opportunities for improving political campaigns and voter engagement, it also raises serious concerns related to misinformation, cybersecurity, and ethics.
“AI-controlled chatbots can flood social media with coordinated messages that amplify disinformation or suppress opposing views. Phishing is already a major issue, but now AI makes it easy to automate phishing attacks that mimic official election communications. This can trick people into sharing sensitive data, or even lead to the creation of fraudulent fundraising websites that spoof legitimate campaign pages—stealing donations from unsuspecting supporters.”