
Increasingly powerful artificial intelligence (AI) models are seemingly rolling off the assembly lines of leading vendors weekly — but with a notable omission: safety reports.
In pumping out faster, better AI models without proper documentation, some vendors are raising questions about the safety of their products and their impact on organizations that adopt them. In recent weeks, OpenAI and Alphabet Inc.’s Google launched models without safety details, setting off alarms among experts and enterprises.
Last week, OpenAI introduced GPT-4.1, a new family of AI models that outperform some current models in programming – but without a safety report, known as a model or system card. A company spokesperson, explaining the absence to TechCrunch, said, “GPT-4.1 is not a frontier model, so there won’t be a separate system card released for it.”
(On Friday, OpenAI did share a “safety-focused reasoning monitor” to reduce the risk that its latest AI models, o3 and o4-mini, will be misused to create chemical or biological threats. The system is designed to detect prompts related to dangerous materials and instruct the models to withhold potentially harmful advice.)
Weeks after debuting Gemini 2.5 Pro, its most powerful AI model, Google on Thursday published a technical report that – critics claim – was light on details, making it hard to assess risks posed by the model.
AI model development is churning out new software at a breathtaking clip. The premise is valid: Speed has value, and tools that accelerate development are essential. But the conveyor-belt pace at which models are delivered requires an accompanying system card, an honest appraisal of each model that supports independent research and safety evaluations.
“Recent shifts in safety posture from major players — such as OpenAI suggesting it might relax safeguards if a competitor does — underscore a dangerous trend: a race to the bottom,” cautions Peter Nguyen, chief marketing officer at Autonomys. “In a space this high-stakes, relying on voluntary ethics is no longer enough.”
Nguyen recommends blockchain as a “transparent, tamper-proof record” of how AI models are trained, accessed and used. “If AI is the engine of our future, blockchain must be the braking system — offering verifiable guardrails that evolve with the tech, but don’t cave to the whims of the market,” he said. “Safety can’t be a feature you toggle off under pressure. With blockchain, it doesn’t have to be.”
Dee Dee Walsh, who works in developer and AI marketing at Growth Acceleration Partners, a custom AI software development and technology company, says AI model safety boils down to early scout teams that get “unfettered access to everything and identify all the gotchas.”
Safety First
Dan Balaceanu, co-founder and chief product officer of agentic AI platform DRUID AI, stresses that security is often the first question an organization will put to a prospective AI provider before integrating the technology into its processes. Organizations are particularly interested in how the model retains data, whether their data is used to train the model, and potential data-leak risks.
“Ultimately, any data sharing practices and storage need to comply with stringent privacy regulations, as well as regular security monitoring and data encryption,” he said. “Failure to meet strict standards can be detrimental, as any data breaches can harm a business’s public standing and potentially trigger legal or financial penalties.”
The need for speed has given rise to “vibe coding,” where AI-generated code gets pushed without much understanding or validation. Y Combinator has already said most of its startups are generating code with AI, and that trend is only growing.
“The real risk isn’t the speed itself. It’s the gap between what people think AI can do and what it can actually do,” Code Metal CEO Peter Morales said in an email. “Public confidence is ballooning past its capabilities. People trust it on subjective topics and assume it can write clean, production-ready code. What’s funny is that Anthropic, a company focused on AI safety, is also helping fuel this mismatch.”
“Their CEO said we’re a year away from AI doing all coding, and now we’re less than a year out from that prediction,” Morales continued. “It sets expectations that ignore the hard parts. AI might assist in every domain eventually, but it won’t be producing safety-critical systems without extensive validation and oversight. That mismatch is where things break.”