Raimondo’s announcement comes on the same day that Google touted new data highlighting the prowess of its latest artificial intelligence model, Gemini, which it says surpasses OpenAI’s GPT-4, the model that powers ChatGPT, on some industry benchmarks. The US Commerce Department may get early warning of Gemini’s successor if the project uses enough of Google’s ample cloud computing resources.

Rapid progress in the field of AI last year prompted some AI experts and executives to call for a temporary pause on the development of anything more powerful than GPT-4.

Samuel Hammond, senior economist at the Foundation for American Innovation, a think tank, says a key challenge for the US government is that a model does not necessarily need to surpass a compute threshold in training to be potentially dangerous.

Dan Hendrycks, director of the Center for AI Safety, a non-profit, says the requirement is proportionate given recent developments in AI, and concerns about its power. “Companies are spending many billions on AI training, and their CEOs are warning that AI could be superintelligent in the next couple of years,” he says. “It seems reasonable for the government to be aware of what AI companies are up to.”

Anthony Aguirre, executive director of the Future of Life Institute, a nonprofit dedicated to ensuring transformative technologies benefit humanity, agrees. “As of now, giant experiments are running with effectively zero outside oversight or regulation,” he says. “Reporting those AI training runs and related safety measures is an important step. But much more is needed. There is strong bipartisan agreement on the need for AI regulation, and hopefully Congress can act on this soon.”

Raimondo said at the Hoover Institution event on Friday that the National Institute of Standards and Technology (NIST) is currently working to define standards for testing the safety of AI models, as part of the creation of a new US government AI Safety Institute. Determining how risky an AI model is typically involves probing it to try to provoke problematic behavior or output, a process known as “red teaming.”

Raimondo said that her department is working on guidelines that will help companies better understand the risks that might lurk in the models they are developing. These guidelines could include ways of ensuring AI cannot be used to commit human rights abuses, she suggested.

The October executive order on AI gives NIST until July 26 to have those standards in place, but some people working with the agency say it lacks the funds or expertise required to get this done adequately.
