US Leads the Way in AI Safety with New Institute

The US takes the lead in AI safety, emphasizing transparency and accountability in AI advancement through a new partnership with the UK.


Gina Raimondo, the Commerce Secretary, announced the US will establish an AI safety institute. 

The focus will be on assessing risks linked to advanced AI models. 

Raimondo said this during her speech at the AI Safety Summit in Britain. She emphasized the importance of collaboration between academia, industry, and the private sector. 

Raimondo asked experts from academia and industry to join the AI safety institute. She believes that involving a diverse group of individuals, such as researchers, engineers, and industry leaders, is crucial. 

This diverse group will enable a comprehensive understanding and evaluation of the risks associated with advanced Artificial Intelligence models. This collaborative approach aims to harness the collective expertise needed to navigate the complex landscape of AI safety.

AI safety: Gov-private & global collaboration

The secretary of commerce highlighted that the government cannot undertake this critical mission alone. She emphasized the importance of private sector participation in Artificial Intelligence research, development, and deployment. 

The institute will work closely with private companies to improve AI safety standards by using their resources and expertise.

Raimondo wants to create a formal partnership between the U.S. AI Safety Institute and its UK counterpart. 

This international collaboration is significant because AI safety is a global concern. By working together, both countries can share knowledge, best practices, and research findings, enhancing the overall safety of Artificial Intelligence technologies on a global scale.

The National Institute of Standards and Technology leads the charge

The National Institute of Standards and Technology (NIST) will lead the institute's work, spearheading the US government's efforts to evaluate advanced Artificial Intelligence models. 

The institute’s core responsibilities include the development of standards to ensure the safety and security of AI models.

The institute will develop guidelines for authenticating AI-generated content. Researchers will also establish controlled testing environments to study both emerging dangers posed by AI and the existing effects of Artificial Intelligence.

Artificial Intelligence systems can affect national security, the economy, public health, and safety, which is why this initiative is important. 

AI Security Mandate for Accountability and Transparency

President Joe Biden issued an executive order on Monday. The order requires developers of AI systems that pose risks to the United States to share safety test results with the U.S. government. 

This requirement, issued under the Defense Production Act, is a precaution against harm caused by such systems.

The executive order instructs government agencies to develop regulations for testing AI safety. It also requires them to address risks such as chemical, biological, radiological, nuclear, and cybersecurity threats. 

By creating a regulatory framework that promotes transparency, accountability, and safety in AI development, the U.S. government aims to strike a balance between innovation and security.




Copyright © 2022 Trill! Mag