Insights from India

The key insights from my discussions in India are as follows, in no particular order.

  • Digital provenance is a good idea, with the C2PA standard as the go-to approach. “Detecting whether or not digital content is fake is currently impossible at internet scale and speed”, but it is far more straightforward to identify who created a piece of content and how, when, and where it was created or edited (a simplified sketch of what such a provenance record looks like follows after this list).
  • There are some good strategies for infiltrating closed social systems such as WhatsApp, which can be a hugely influential environment in some countries and with some communities. India offers some good examples, including the WhatsApp tipline that ran during the 2019 general election. Electoral management bodies could create a similar tipline in jurisdictions where WhatsApp and similar platforms are heavily used.
  • Regulating the use of AI in elections is useful, but beware policy overreach. There are great innovation opportunities in using AI for genuine community outreach and engagement (e.g. multi-language comms). The trick is to “permit and restrict” so that ethical use is promoted whilst other behaviour is discouraged. Don’t throw the baby out with the bathwater.
  • Similarly, codes of conduct and guidelines on AI use for political parties and campaign managers are useful, especially if they are risk-based rather than rights-based. However, it is difficult to develop such guidelines in a vacuum, solely for elections and solely for politicians. All actors need to be addressed, from politicians and campaigners to the media, social media platforms and the AI developers themselves.
  • Fact-checking is fine, and of course it can be important to correct the record at times, but there is no evidence that correcting individual false claims changes public opinion or behaviour at scale. At best, it is one counteracting measure amongst many that need to be deployed. It can’t be relied upon to do the heavy lifting.
  • Finally, a lot rides on media and digital literacy: at all ages, within and outside election periods, delivered directly to individuals by trusted organisations and amplified through trusted influencers. This is not (just) a school curriculum thing. It needs to be everywhere, and it can’t all be delivered via individual online engagement, although gamifying the messages has had some success.
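
To make the provenance point concrete, here is a toy sketch in Python of the kind of questions a provenance record answers. The manifest structure and field names below (title, claim_generator, signature, actions) are simplified and illustrative, not the official C2PA schema; in practice the manifest is embedded in the media file, cryptographically signed, and read and verified with a C2PA tool or SDK rather than hand-rolled code like this.

```python
import json

# Toy, simplified provenance record. Field names are illustrative only,
# not the official C2PA schema; a real manifest is embedded in the media
# file itself and cryptographically signed.
SAMPLE_MANIFEST = json.loads("""
{
  "title": "rally_photo.jpg",
  "claim_generator": "ExampleCamera/1.0",
  "signature": {"issuer": "Example Newsroom", "signed_at": "2024-04-19T09:30:00Z"},
  "actions": [
    {"action": "created", "when": "2024-04-19T09:12:03Z", "where": "New Delhi"},
    {"action": "edited",  "when": "2024-04-19T09:25:41Z", "tool": "ExamplePhotoEditor"}
  ]
}
""")


def summarise_provenance(manifest: dict) -> str:
    """Answer the questions provenance is good at: who, how, when and where."""
    sig = manifest.get("signature", {})
    lines = [
        f"Asset:      {manifest.get('title', 'unknown')}",
        f"Created by: {manifest.get('claim_generator', 'unknown')}",
        f"Signed by:  {sig.get('issuer', 'unsigned')} at {sig.get('signed_at', 'n/a')}",
        "History:",
    ]
    for step in manifest.get("actions", []):
        where = f", {step['where']}" if "where" in step else ""
        detail = f" with {step['tool']}" if "tool" in step else ""
        lines.append(f"  - {step['action']} at {step.get('when', 'unknown time')}{where}{detail}")
    return "\n".join(lines)


if __name__ == "__main__":
    print(summarise_provenance(SAMPLE_MANIFEST))
```

Note what the sketch does not do: it says nothing about whether the content is “true”, only who made it, what happened to it, and when and where. That is exactly the trade-off the provenance approach accepts.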

The biggest gains will come from getting actual humans to meet and have conversations. The enemy of disinformation is not fact; it is doubt, curiosity and conversation. Society is becoming so polarised that we are voluntarily excluding ourselves from the only arenas not directly controlled by purveyors of disinformation: human interactions. We need to lean into conversations at the pub, over the dinner table, or at family gatherings, not avoid them. We need to remember that we are being manipulated, and remind ourselves that we used to know how to disagree agreeably. It’s time to dust off those old skills.
