
O.XYZ’s Ahmad Shadid Warns National Security Priorities May Undermine Fairness And Transparency In AI


Before the inauguration of US President Donald Trump, the National Institute of Standards and Technology (NIST) completed a report on the safety of advanced AI models.

In October last year, a computer security conference in Arlington, Virginia, brought together a group of AI researchers for a pioneering “red teaming” exercise aimed at rigorously testing a state-of-the-art language model and other AI systems. Over two days, the teams uncovered 139 new ways to make the systems malfunction, from producing false information to exposing sensitive data. Crucially, their findings also revealed weaknesses in a recent US government standard intended to guide companies in evaluating the safety of AI systems.

Intended to help organizations assess their AI systems, the report was among several NIST-authored AI documents withheld from publication due to potential conflicts with the policy direction of the incoming administration.

In an interview with Mpost, Ahmad Shadid, CEO of O.XYZ, an AI-led decentralized ecosystem, discussed the dangers of political pressure and secrecy in AI safety research.

Who Is Authorized To Release NIST’s Red Team Findings?

According to Ahmad Shadid, political pressure can influence what gets published, and the NIST report is a clear example of this. He emphasized the need for independent researchers, universities, and private laboratories that are not bound by such pressures.

“The challenge is that they don’t always have the same access to resources or data. That’s why we need — or better said, everyone needs — a global, open database of AI vulnerabilities that anyone can contribute to and learn from,” Ahmad Shadid told Mpost. “There should be no government or corporate filter for such research,” he added.

Concealing AI Vulnerabilities Hampers Safety Progress And Empowers Malicious Actors, Warns Ahmad Shadid

He further explained the risks associated with concealing vulnerabilities from the public and how such actions can hinder progress in AI safety.

“Hiding key educational research gives bad actors a head start while keeping the good guys in the dark,” Ahmad Shadid said.

Companies, researchers, and startups cannot address problems they are unaware of, which leaves hidden obstacles for AI firms and unresolved flaws in AI models.

According to Ahmad Shadid, open-source culture has been fundamental to the software revolution, enabling continuous development and strengthening software through the collective identification of vulnerabilities. In AI, however, that openness has largely faded; Meta, for example, is reportedly considering making its development process closed-source.

“What the NIST hid from the public due to political pressure could’ve been the exact knowledge the industry needed to address some of the risks around LLMs or hallucinations,” Ahmad Shadid said to Mpost. “Who knows, bad actors might be busy taking advantage of the ‘139 new ways to break AI systems,’ which were included in the report,” he added.

Governments Tend To Prioritize National Security Over Fairness And Transparency In AI, Undermining Public Trust 

The suppression of safety research reflects a broader pattern in which governments prioritize national security over concerns about fairness, misinformation, and bias.

Ahmad Shadid emphasized that any technology used by the general public must be transparent and fair. He highlighted the need for transparency rather than secrecy, noting that the confidentiality surrounding AI underscores its geopolitical significance.

Major economies such as the US and China are investing heavily to gain an advantage in the AI race, including billions in subsidies and aggressive talent acquisition.

“When governments put the term ‘national security’ above fairness, misinformation, and bias—for a technology like AI that’s in 378 million users’ pockets—they’re really saying those issues can wait. This can only lead to building an AI ecosystem that protects power, not people,” he concluded.

