Political parties and AI ethics – voluntary accountability or regulatory response?

My Churchill Fellowship research uncovered an intriguing asymmetry in how political parties approach AI-generated content during elections. This finding raises fundamental questions about voluntary ethical leadership versus regulatory intervention in an age where artificial intelligence increasingly shapes political discourse.

The evidence from my fieldwork shows that right-leaning parties are more willing to deploy AI-generated disinformation, creating particular challenges for politically neutral institutions like electoral commissions. This asymmetric deployment presents a fascinating case study in how technological capabilities interact with political incentives.

My upcoming Churchill report examines what internally enforceable AI ethics codes for political parties might look like: voluntary codes of conduct governing AI use in campaigns, with transparency mechanisms and meaningful accountability structures. Such approaches could allow parties to demonstrate ethical leadership while creating public accountability through disclosure rather than prohibition.

Australia’s current system lets politicians behave poorly, then blame everyone and everything else when turnout falls, informal voting rises, or exit poll numbers worsen. What is stopping political parties from behaving better without the need for enforcement by others?

My research also explores how parties might champion legislative reform creating consequences for deliberately false statements in politics. The Compassion in Politics approach, currently under consideration by the Welsh Senedd, proposes judicial processes for assessing deliberate deception, with significant sanctions possible.

This raises broader questions about democratic discourse: Should political parties self-regulate their use of AI technologies, or do the stakes require external oversight? How might ethical codes address not only obviously deceptive practices like deepfakes but also subtler issues like AI-generated micro-targeting and synthetic social media engagement? And if we all agree this is important, who is going to regulate this sphere, and who will foot the bill? State and territory electoral commissions are already underfunded for the work they do.

What happens when voluntary approaches prove insufficient? The question remains whether self-regulation by political parties can address AI-enabled information challenges, or whether democratic institutions require more robust intervention frameworks to maintain electoral integrity.