AI: can two prongs promote innovation without harm?

Public health researchers in the US are proposing a novel approach to encourage self-regulation within the AI community, with the aim of reducing harmful outcomes without stifling innovation.

“Efforts to promote ethical and trustworthy AI must go beyond what is legally mandated as the baseline for acceptable conduct,” said Jennifer Wagner of Penn State University. “We can and should strive to do better than what is minimally acceptable.”

The idea is to combine two existing but seemingly opposed methods of managing intellectual property: copyleft licensing (of which Creative Commons share-alike licences are examples) and patent trolling, where a company owns intellectual property only to make money by suing others, rather than making anything with it or licensing it out.


Embodied in a concept the team calls Caite (Copyleft AI with Trusted Enforcement), the combined approach is built on an ethical-use licence.

“This license would restrict certain unethical AI uses and require users to abide by a code of conduct,” according to Texas A&M University, which worked with Penn State and Seattle company Sage Bionetworks. “Importantly, it would use a copyleft approach to ensure that developers who create derivative models and data must also use the same license terms as the parent works. The license would assign the enforcement rights of the license to a designated third party known as a ‘Caite host’.”
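
To make the copyleft mechanism concrete, here is a purely illustrative sketch, not drawn from the paper itself, of how share-alike terms might travel from a parent model to a derivative; the class, field names and derive_licence function are hypothetical.

    from dataclasses import dataclass

    # Illustrative only: a toy model of how Caite-style copyleft terms
    # might propagate. All names and fields here are assumptions, not
    # part of the actual licence proposed in the Science paper.

    @dataclass(frozen=True)
    class CaiteLicence:
        prohibited_uses: frozenset  # unethical uses the licence restricts
        code_of_conduct: str        # conduct terms every user must accept
        enforcement_host: str       # third party holding enforcement rights

    def derive_licence(parent: CaiteLicence) -> CaiteLicence:
        # Copyleft requirement: a derivative model or dataset carries the
        # parent's terms unchanged, so the restrictions cannot be stripped.
        return CaiteLicence(
            prohibited_uses=parent.prohibited_uses,
            code_of_conduct=parent.code_of_conduct,
            enforcement_host=parent.enforcement_host,
        )

    parent = CaiteLicence(
        prohibited_uses=frozenset({"unlawful surveillance", "discriminatory scoring"}),
        code_of_conduct="example-code-of-conduct-v1",
        enforcement_host="Example Caite Host",
    )
    child = derive_licence(parent)
    assert child == parent  # the terms travel with the derivative work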

The Caite host sets consequences for unethical actions, such as financial penalties or reporting violations of consumer protection law, said Texas A&M, while creating policies that promote self-reporting and “give flexibility that typical government enforcement schemes often lack”.

It can also create incentives for AI users to report biases that they discover in an AI model, enabling the Caite host to broadcast warnings to others using that model.

For the concept to work, a large portion of the AI community would have to participate, said the team, and further research and funding would be needed before a pilot programme could be kick-started. Diverse members of the AI community would then need to steer it until it became self-sustaining.

The research is covered in ‘Leveraging IP for AI governance’, published in Science (payment required for full access).

