We tend to think of A.I. as some mysterious thing with the ability and knowledge to help us perform tasks and make our lives easier. But there are actual humans working behind the scenes at large corporations who not only build these A.I. systems, but also get a firsthand look at the positive and negative sides of A.I., and at the risks involved in such a new and, if left unchecked, potentially dangerous technology.

Some of the employees who have seen these potential dangers firsthand have spoken out publicly, and they are not happy with how these corporations have responded. What they want is a guarantee of non-retaliation when they go public with their concerns.

Ars Technica:

On Tuesday, a group of former OpenAI and Google DeepMind employees published an open letter calling for AI companies to commit to principles allowing employees to raise concerns about AI risks without fear of retaliation. The letter, titled “A Right to Warn about Advanced Artificial Intelligence,” has so far been signed by 13 individuals, including some who chose to remain anonymous due to concerns about potential repercussions.

The signatories argue that while AI has the potential to deliver benefits to humanity, it also poses serious risks that include “further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”

They also assert that AI companies possess substantial non-public information about their systems’ capabilities, limitations, and risk levels, but currently have only weak obligations to share this information with governments and none with civil society.

Non-anonymous signatories to the letter include former OpenAI employees Jacob Hilton, Daniel Kokotajlo, William Saunders, Carroll Wainwright, and Daniel Ziegler, as well as former Google DeepMind employees Ramana Kumar and Neel Nanda.

The group calls upon AI companies to commit to four key principles: not enforcing agreements that prohibit criticism of the company for risk-related concerns, facilitating an anonymous process for employees to raise concerns, supporting a culture of open criticism, and not retaliating against employees who publicly share risk-related confidential information after other processes have failed.

In May, a Vox article by Kelsey Piper raised concerns about OpenAI’s use of restrictive non-disclosure agreements for departing employees, which threatened to revoke vested equity if former employees criticized the company. OpenAI CEO Sam Altman responded to the allegations, stating that the company had never clawed back vested equity and would not do so if employees declined to sign the separation agreement or non-disparagement clause.

But critics remained unsatisfied, and OpenAI soon did a public about-face on the issue, saying it would remove the non-disparagement clause and equity clawback provisions from its separation agreements, acknowledging that such terms were inappropriate and contrary to the company’s stated values of transparency and accountability. That move from OpenAI is likely what made the current open letter possible.

Dr. Margaret Mitchell, an AI ethics researcher at Hugging Face who was fired from Google in 2021 after raising concerns about diversity and censorship within the company, spoke with Ars Technica about the challenges faced by whistleblowers in the tech industry. “Theoretically, you cannot be legally retaliated against for whistleblowing. In practice, it seems that you can,” Mitchell stated. “Laws support the goals of large companies at the expense of workers. They are not in workers’ favor.”

Mitchell highlighted the psychological toll of pursuing justice against a large corporation, saying, “You essentially have to give up your career and your psychological health to pursue justice against an organization that, by virtue of being a company, does not have feelings and does have the resources to destroy you.” She added, “Remember that it is incumbent upon you, the fired person, to make the case that you were retaliated against—a single person, with no source of income after being fired—against a trillion-dollar corporation with an army of lawyers who specialize in harming workers in exactly this way.”

The open letter has garnered support from prominent figures in the AI community, including Yoshua Bengio, Geoffrey Hinton (who has warned about AI in the past), and Stuart J. Russell. It’s worth noting that AI experts like Meta’s Yann LeCun have taken issue with claims that AI poses an existential risk to humanity, and other experts feel like the “AI takeover” talking point is a distraction from current AI harms like bias and dangerous hallucinations.

Even with the disagreement over what precise harms may come from AI, Mitchell feels the concerns raised by the letter underscore the urgent need for greater transparency, oversight, and protection for employees who speak out about potential risks: “While I appreciate and agree with this letter,” she says, “There needs to be significant changes in the laws that disproportionately support unjust practices from large corporations at the expense of workers doing the right thing.”

It is concerning that, at the moment, no one is really looking over the shoulders of these large corporations to make sure they are not putting their employees or the general public at risk as they develop these A.I. systems. From the outside looking in, everything seems very secretive. We have no idea where this A.I. future is taking us, but hopefully more people will become aware of both the potential benefits and the risks as they are exposed to a world where A.I. is in every product and service we use.