Navigating Compliance and Innovation: The new frontier for AI and cloud computing providers

Jan. 17, 2025
As the industry evolves, proactive measures will be key to ensuring the responsible development and deployment of advanced technologies.

In a significant move to bolster the safety and security of advanced technologies, the U.S. Department of Commerce’s Bureau of Industry and Security (BIS) recently proposed mandatory reporting rules for advanced artificial intelligence (AI) developers and cloud computing providers. This initiative aims to ensure that ‘frontier’ AI models and computing clusters are developed and deployed responsibly, with a keen focus on security and compliance.

As AI standards continue to shift, organizations may be unprepared to comply with new regulations. This article explores the implications of the BIS proposal and outlines actionable steps organizations can take to maintain compliance while fostering innovation and competitiveness.

Understanding the BIS Reporting Proposal

The BIS reporting requirements, as outlined in Section 4.2(a)(i) of Executive Order 14110, apply to companies developing dual-use foundational AI models and those possessing large-scale computing clusters. These entities are required to report on various aspects of their work, including model training activities, cybersecurity measures, and ownership of model weights. Additionally, they must provide results from AI red-team testing and detailed safety measures to mitigate potential security risks.

Interestingly, the proposal is expected to apply to a limited number of entities—primarily developers of powerful AI models and computing clusters. The technical thresholds set by the BIS, such as AI model training runs exceeding 1026 computational operations or computing clusters with a data center network speed above 300 Gbit/s. Additionally, these clusters must have a theoretical maximum of more than 1020 operations per second for AI training.

Implications for Data Management and Governance

While the reporting requirements may not apply to most companies, the proposal signals a broader trend toward increased regulation in the sector. Organizations should proactively enhance their data and AI governance frameworks to ensure accurate data capture and reporting related to AI model training and computing resources. This may necessitate investments in upgraded data management systems to enable real-time tracking and streamline compliance processes.

Moreover, the emphasis on safety and security underscores the need for comprehensive risk management strategies. This includes conducting red teaming exercises and regular assessments of AI systems to identify vulnerabilities and establish protocols for addressing identified risks.

Additionally, due to guidelines from different regulatory bodies such as the European Union’s AI Act and the General Data Protection Regulation (GDPR), multinational businesses have to navigate and comply with various regulatory frameworks.

Stakeholder Management for Establishing Governance Frameworks

To effectively establish governance frameworks, organizations should engage with regulators to provide feedback on compliance requirements and involve key stakeholders—such as customers, employees, and investors—in shaping these frameworks. Regular training sessions for employees to update them on compliance and security measures related to AI and cloud technologies are also crucial, as they embed a culture of compliance through consistent education, leadership support, and clear policies.

Benchmarking and Model Evaluation

Companies can use publicly available benchmarks like General Language Understanding Evaluation (GLUE) or TruthfulQA to help navigate the complexities of AI model selection and evaluation. These standardized frameworks facilitate quick comparison and evaluation across multiple models, helping organizations align AI models with their specific business and compliance needs.

To select and test the right AI models, companies should start by evaluating model performance through small-scale testing and scalability assessments. Incorporating public benchmarks can significantly enhance the evaluation process, providing a robust framework for selecting the right models.

Continuous Benchmarking and Red Teaming

Continuous benchmarking is crucial for addressing emerging risks and challenges associated with AI. Regular performance assessments help track model accuracy and identify areas for improvement. Red teaming exercises, in particular, play a vital role in simulating adversarial attacks, exposing vulnerabilities, and stress-testing AI systems.

These exercises are important in enhancing security and ensuring compliance with evolving standards. By continually evaluating models, companies can stay ahead of emerging risks and maintain a competitive edge in the rapidly changing AI landscape.

The BIS proposal marks a pivotal moment for AI and cloud computing providers, highlighting the need for robust compliance strategies that prioritize safety and security. By embracing continuous benchmarking, red teaming, and comprehensive data governance, organizations can navigate the regulatory landscape while fostering innovation and maintaining competitiveness. As the industry evolves, proactive measures will be key to ensuring the responsible development and deployment of advanced technologies.

About the Author

Gagan Tandon | Chief Data and AI Officer at WillowTree

Gagan Tandon, Chief Data and AI Officer at WillowTree, a TELUS Digital Company, has two decades of experience leading data and AI services for Fortune 100 clients worldwide. He leads WillowTree’s global Data and AI practice—a cross-functional, international team of researchers, data scientists, AI innovators, RPA engineers, software engineers, client service leads, and growth marketing experts. Gagan's team helps organizations develop tailored strategies to deliver high-quality solutions, insights, and recommendations that enhance customer experiences and drive operational efficiency.