
Responsible AI for Responsible Democracy

Imagine for a moment, as you may have already done at some point, this scenario: artificial intelligence has reached a level of sophistication where it can manipulate and oppress human beings, interfere with democratic elections, spread false and distorted information, and promote its own advancement at the expense of those in weaker positions.

Now, erase that. Forget about any developments of AI, and imagine instead that it’s human beings who are doing these same things to each other.

We are not at a point where these abilities could become a problem—we’re at a point where they already are a problem, brought about by humans themselves. And this particular point in history also involves the rapid technological advancements that are bringing AI into our lives. So it’s time to ask some crucial questions: How can AI, which seems to be an inevitable addition to our lives, be developed responsibly so that its uses are purely beneficial and not harmful? And…can responsible AI actually help save democracy in a world where humans have already put it on shaky ground?

Artificial Intelligence in the Political Sphere Today

[Image: an AI participating in politics]

Humans were creating unfair societal systems long before AI ever came around. Now that the technology is emerging, it can be put to work in service of those same nefarious urges. We saw how AI became harmful to democracy in the lead-up to the 2016 U.S. election, when disguised Russian accounts and large numbers of bots posing as humans circulated political messages on Twitter and Facebook. The data analytics firm Cambridge Analytica employed machine learning to do “behavioral microtargeting” of voters, profiling their personalities from online data and customizing campaign ads to exploit their vulnerabilities—for example, a person prone to anxiety or fear could be targeted with political ads that played up frightening scenarios. Even in less sensational ways, AI is becoming a normal part of campaign strategy: machine learning systems can combine variables to predict voter behavior or whether a bill is likely to pass (a minimal sketch of this kind of model follows below). These technologies are already being used—but can they be harnessed for good?
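To make that concrete, here is a toy sketch of the kind of predictive model described above, built with scikit-learn’s logistic regression. Every feature name and data point is fabricated for illustration only; a real campaign model would draw on far richer (and more ethically fraught) data.

```python
# A toy illustration of predicting voter turnout by combining a few
# variables, in the spirit of the campaign models described above.
# All features and data are fabricated for demonstration purposes.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per voter: [age, past_elections_voted, contacted_by_campaign]
X = np.array([
    [23, 0, 0],
    [35, 2, 1],
    [51, 4, 1],
    [64, 5, 0],
    [29, 1, 1],
    [42, 3, 0],
])
y = np.array([0, 1, 1, 1, 0, 1])  # 1 = voted in the last election

model = LogisticRegression().fit(X, y)

# Estimate the turnout probability for a new (made-up) voter.
new_voter = np.array([[38, 2, 1]])
print(f"Predicted turnout probability: {model.predict_proba(new_voter)[0, 1]:.2f}")
```

The same mechanics drive both benign turnout models and the microtargeting described above; the difference lies in what the features encode and how the predictions are used.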

The Potential of Responsible AI for Better Democracy

[Image: U.S. flag over a motherboard, suggesting democracy with responsible AI]

With the right standards and implementation, AI can be used to support democracy and fairness as well. There are a number of ways it can be helpful in improving information and reducing corruption. Here are a few areas where AI can help improve the democratic system:

  • Applications in fact-checking and bias reporting. Political disinformation is a big problem, with some people even claiming that fact-checkers are themselves biased. Politically neutral AI systems could be a way out of this information trap. Factmata, for example, uses AI to do rudimentary assessments of online information, creating a “Trust Score” for each article and flagging things like bias, questionable veracity, or potentially harmful speech (the first sketch after this list shows the general shape of such a scorer). Perhaps AI could even recognize and call out content that is produced by a bot?
  • Detection of deepfake techniques. Likewise, AI can call itself out when it has been misused to create deepfakes. This malicious technique manipulates video to make it appear that someone (such as a politician) said things he or she never actually said, and it has further dangerous potential for falsifying identity. Computer vision has become central to this kind of visual analysis, and customizable services such as Custom Vision can be applied to the problem. Since deepfakes became a public concern a couple of years ago, groups ranging from tech startups to university researchers to government agencies have been working with such models to identify them (the second sketch after this list shows how detection is commonly framed).
  • Efficient gathering of relevant information. AI could help voters become more informed by gathering information automatically and comparing the stances of different candidates. If a person prioritizes particular topics, the information gathering can respond to that—in effect turning behavioral microtargeting on its head to do helpful work instead.
  • Video/audio analysis for accountability and fairness. Amid current scrutiny of racial disparities—particularly in law enforcement—video and audio data can be scanned by AI in ways you might not normally think of. For instance, researchers at Stanford University developed an AI technique to compare officers’ speech toward different ethnicities by analyzing 183 hours of footage from police body cameras (the third sketch after this list illustrates the underlying comparison). Such applications can increase transparency and promote fairer treatment within law enforcement.
  • Larger overhauling of democratic systems? Given the instability of the current political milieu and the booming growth of artificial intelligence, it’s possible that the two could converge in ways that lead to even larger reforms. Some theorize that AI could help bring about direct democracy, where—unlike in the more common representative democracy—voters decide on policies directly rather than through intermediary representatives. How sweeping such changes might be is anyone’s guess at this point, but the idea highlights the significance of AI’s potential reach.
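First, the fact-checking idea. A “trust score” can be sketched as an ordinary text classifier: train on examples of reliable and unreliable writing, then output a probability for new text. The toy version below uses scikit-learn on a handful of invented snippets; it is not Factmata’s actual system, only the general shape of the approach.

```python
# A toy "trust score": a bag-of-words classifier trained on a few
# invented examples of reliable vs. unreliable phrasing. This sketches
# the general approach only, not any real product's model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Fabricated training snippets, labeled 1 = reliable, 0 = unreliable.
texts = [
    "The bill passed the senate by a vote of 62 to 38 on Tuesday.",
    "Officials released the full report, which is linked below.",
    "SHOCKING: they don't want you to know this one secret!!!",
    "Anonymous sources say the election was definitely rigged.",
]
labels = [1, 1, 0, 0]

scorer = make_pipeline(TfidfVectorizer(), LogisticRegression())
scorer.fit(texts, labels)

article = "Sources claim a shocking secret about the vote!!!"
trust = scorer.predict_proba([article])[0, 1]
print(f"Trust score: {trust:.2f}")  # closer to 1.0 = resembles the reliable examples
```

A production system would train on large labeled corpora and report separate signals for bias, veracity, and harmful speech rather than a single number.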
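Second, deepfake detection. In practice this is commonly framed as binary classification over video frames: a convolutional network scores each frame as real or manipulated, and the per-frame scores are aggregated into a verdict for the clip. The PyTorch sketch below uses random tensors as stand-in frames and an untrained network, so it shows only the framing, not a working detector.

```python
# Sketch of frame-level deepfake detection as binary classification.
# Random tensors stand in for video frames; a real detector would be
# trained on labeled real/manipulated footage.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # one logit: manipulated vs. real

    def forward(self, frames):              # frames: (batch, 3, H, W)
        x = self.features(frames).flatten(1)
        return torch.sigmoid(self.head(x))  # per-frame "fake" probability

model = FrameClassifier()
clip = torch.rand(8, 3, 224, 224)  # eight stand-in frames from one clip
with torch.no_grad():
    frame_scores = model(clip)
print(f"Clip fake score: {frame_scores.mean().item():.2f}")  # aggregate over frames
```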
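Third, the body-camera analysis. At its core, that style of study transcribes interactions and compares language measures across groups. The toy sketch below compares the rate of words from a hypothetical “respect lexicon” between two sets of invented transcripts using a two-sample t-test; the lexicon, transcripts, and grouping are all fabricated for illustration.

```python
# Toy comparison of respectful-language rates across two groups of
# transcripts, echoing the kind of analysis described above. The
# lexicon and all transcripts are invented for illustration.
from scipy import stats

RESPECT_WORDS = {"sir", "ma'am", "please", "thank", "thanks"}

def respect_rate(transcript: str) -> float:
    """Fraction of words in the transcript drawn from the respect lexicon."""
    words = transcript.lower().split()
    return sum(w.strip(".,!?") in RESPECT_WORDS for w in words) / len(words)

group_a = [
    "thank you sir please step out of the vehicle",
    "good evening ma'am license and registration please",
]
group_b = [
    "step out of the vehicle now",
    "hands where i can see them",
]

rates_a = [respect_rate(t) for t in group_a]
rates_b = [respect_rate(t) for t in group_b]
t_stat, p_value = stats.ttest_ind(rates_a, rates_b)
print(f"mean A = {sum(rates_a) / len(rates_a):.3f}, "
      f"mean B = {sum(rates_b) / len(rates_b):.3f}, p = {p_value:.3f}")
```

The real research used far larger samples, trained models rather than a fixed word list, and controlled for context; the statistical comparison is the common thread.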

Factors in Keeping AI Ethical and Responsible

[Image: a marriage of humans and responsible AI]

The applications that promote fairness, transparency, and better access to information must be grown from those same roots, with diligent commitment to creating responsible AI. What are some of the elements to keep in mind in this effort? We might start with the seven requirements put forth in the European Union’s guidelines for trustworthy AI:

  1. Human agency and oversight. Humans remain in control, understand what the AI machine is doing, and can stop it at any time.
  2. Technical robustness and safety. Systems must be secure and reliable, with fallback plans in place throughout the AI application’s entire life cycle.
  3. Privacy and data governance. All digital records must be kept private and handled in accordance with privacy standards.
  4. Transparency. Every component of the AI system must be traceable and auditable.
  5. Diversity, non-discrimination, and fairness. Systems must be designed without inherent bias or discriminatory patterns.
  6. Societal and environmental well-being. Ethical AI must not be designed to benefit one particular group; it should be beneficial to everyone.
  7. Accountability. Someone must be in a position to be held accountable should something go wrong in the AI system, so that unethical practices cannot slip in unchecked.

Microsoft also has its own standards for responsible AI, centered on fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. All of these are important principles, and it would be wise to codify such requirements in formal legislation as society moves forward with artificial intelligence.

Democracy Today Requires Overcoming Harmful AI with Responsible AI


It’s already clear that artificial intelligence is on the rise and is being used in ways that dismantle democracy. To save democracy, ethical and responsible AI has to keep up, and it has to be built and carried through with the strongest democratic principles. As Vyacheslav Polonski put it, “The algorithmic tools that are used to mislead, misinform and confuse could equally be repurposed to support democracy”—but someone has to put in the effort to repurpose them in the right direction. And maybe if we can manage to create better AI, what we create can help us be better humans to each other.
