
AI experts demand better protection for whistleblowers in open letter


Employees at OpenAI, Google DeepMind and other AI companies who speak openly about their concerns should be better protected by their employers. To achieve that, these companies will have to foster a culture of open criticism.

Artificial intelligence (AI) offers a wide range of possibilities and the potential to deliver unprecedented benefits to humanity. However, these technologies also pose serious risks, ranging from the deepening of existing inequalities and the manipulation of public opinion to the spread of misinformation and the loss of control over autonomous systems.

AI companies, governments across the world and AI experts have acknowledged these risks. In practice, however, guidance from the scientific community, policymakers and the public remains insufficient. Furthermore, AI companies have done all they can to avoid effective oversight, sharing as little as possible about the potential risks of AI technology.

“AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. However, they currently have only weak obligations to share some of this information with governments, and none with civil society,” current and former employees of OpenAI, Google DeepMind and Anthropic write in an open letter.

Moral appeal to advanced AI companies

The only way to truly get a grip on the dangers of AI technology is for employees to speak their minds and hold their employers publicly accountable. “Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues,” the AI professionals say.

They stress that ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks they are concerned about are not yet regulated. “Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry.”

The AI experts therefore make a moral appeal to advanced AI companies to better protect employees who speak out. First, these companies should not enter into or enforce any agreement that prohibits criticism over risk-related concerns. Employers should also facilitate a verifiably anonymous process through which current and former employees can raise risk-related concerns.

Furthermore, AI companies will have to support a culture of open criticism and allow current and former employees to raise risk-related concerns about AI technology with the public. Lastly, AI companies should not retaliate against employees who publicly share risk-related confidential information after other processes have failed.

“As long as such a process does not exist, current and former employees should retain their freedom to report their concerns to the public,” the AI experts conclude.

