NIST has also taken other steps, including forming an AI safety advisory group in February of this year that brings together AI creators, users, and academics to help establish guardrails for AI use and development.
The advisory group, called the AI Safety Institute Consortium (AISIC), has been tasked with developing guidelines for red-teaming AI systems, evaluating AI capabilities, managing risk, ensuring safety and security, and watermarking AI-generated content. Several major tech companies, including OpenAI, Meta, Google, Microsoft, Amazon, Intel, and Nvidia, have joined the consortium to help ensure the safe development of AI.