19 September 2024

The US Government Will Get Sentient AI First – OpenAI, Anthropic Sign Key Deal

  • President Biden established the US Artificial Intelligence Safety Institute as part of the National Institute of Standards and Technology (NIST) in October 2023.
  • OpenAI and Anthropic have agreed to give NIST early access to cutting-edge AI models, including OpenAI’s upcoming Strawberry model.
  • The effort aims to improve safety standards for AI development but imposes few restrictions on what NIST might do with these models.

The US government has been known to drag its heels on new technologies; just look at the SEC vs. crypto.

So why is it moving so fast on AI?

A recent agreement between the government and two leading AI developers gives NIST nearly unfettered access to cutting-edge AI models before public release.

What’s in it for Uncle Sam? Time for a closer look. 

The Rise of a New Government Agency

The key player in the US government’s involvement with AI is the US Artificial Intelligence Safety Institute. It’s all in the name – the stated purpose of the agency, founded last year by an executive order from President Biden, is to ensure that key safety principles form the bedrock of AI development.

That emphasis on safety is spelled out in the White House’s Blueprint for an AI Bill of Rights; its first principle is the right of US citizens to ‘safe and effective systems.’

To that end, NIST negotiated agreements with Anthropic and OpenAI, two companies leading the drive toward artificial general intelligence (AGI).

The agreements cover:

  • Collaboration: Working together on AI safety research, testing, and evaluation.
  • Access to models: The institute will receive access to new AI models from these companies before and after public release.
  • Safety research: Focus on evaluating AI capabilities, safety risks, and methods to mitigate these risks.

What Will NIST Do With AI?

The agreements between OpenAI, Anthropic, and NIST are strictly voluntary. By entering into them, the AI companies receive a huge PR boost and the implicit blessing of the US government.

‘we are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models. for many reasons, we think it’s important that this happens at the national level. US needs to continue to lead!’

— Sam Altman (@sama) August 29, 2024

But what does NIST get? Virtually unfettered access to the latest models.

In other words, if OpenAI or Anthropic develops an AGI, the US government will get it first.

An AGI – artificial general intelligence – is a type of AI that can match or outperform humans across a wide range of intellectual tasks. Most AI in use today, by contrast, is narrow AI, designed for specific tasks.

And interestingly, there’s no requirement for NIST to disclose what it would do with an AGI.

  • Will NIST say no to a specific release, like the upcoming Strawberry model from OpenAI?
  • Will NIST deploy an AGI for government use before public release?
  • And if so, would it even say anything?

One thing is clear: the agreement gives the US government, through NIST, a direct voice in private AI and AGI development. It also sets the stage for collaboration with other governments, such as the UK.

Additionally, the US AI Safety Institute plans to provide feedback to Anthropic and OpenAI on potential safety improvements to their models in close collaboration with its partners at the UK AI Safety Institute.

A Soft Touch – Room for Innovation, or a Lack of Transparency?

The framework amounts to ‘soft touch’ government regulation, giving NIST oversight without binding rules. While that flexibility is valuable for AI companies, it comes at the cost of transparency.

In fact, the agreement raises the real possibility that the US government could obtain an AGI from Anthropic or OpenAI and deploy it with no one the wiser.


