The European Union (EU) recently passed landmark legislation to regulate artificial intelligence (AI) — a great stride in AI governance. The act outlines a robust legal structure, establishing standards for AI development and deployment in the EU that prioritize data protection, human rights, and human oversight of AI. In recent years, artificial intelligence has experienced unprecedented growth, transforming numerous sectors and offering significant advantages to both individuals and businesses. Given this rapid advancement, the implementation of regulations is a crucial step toward ensuring the safe and transparent use of AI technology.
Such measures are essential not only for mitigating risks but also for fostering an environment where AI can continue to contribute positively to society. The EU regulation establishes a framework aimed at minimizing risks, fostering opportunities, and enhancing transparency in the use of artificial intelligence. This foundation is pivotal for harnessing the benefits of AI while simultaneously safeguarding human values and ethics. As with any major regulatory structure, however, it creates new challenges and considerations for businesses operating in the EU.
The Challenge of Regulating AI
I founded my startup centered around customer assurance — a holistic security framework designed to accelerate enterprise transactions and innovation — in the wake of GDPR, having seen many bright, innovative companies fail to stay profitable while adjusting to that regulation. While global AI regulation is essential, governments and lawmakers should ensure that these regulatory frameworks don't set us on a path to repeat history: over-regulation can stymie innovation.
As more use cases for AI emerge, competitive businesses must rise to the challenge of driving innovation while remaining compliant in an evolving regulatory landscape. The EU AI Act, despite its good intentions, poses significant implementation challenges for businesses due to the complex nature of AI systems. Its stringent guidelines may curb creativity and deter businesses — particularly startups and small to medium-sized enterprises (SMEs) — from adopting AI technologies. The act also introduces challenges for companies bringing new AI innovations to market. Its additional layers of compliance could essentially function as a new “tax” on innovation, slowing down transactions and ultimately putting companies at a competitive disadvantage globally.
How Businesses Can Foster a Culture of Innovation and Compliance
So, how can businesses navigate an ever-evolving regulatory landscape without sacrificing innovation or commercial growth? Certainly not by proceeding with business as usual. Here are some strategies business leaders can consider not only to comply with new regulations, but also to use them to strengthen their operations, market positioning and security measures.
- Devote time and resources to cybersecurity and GRC: Investing in cybersecurity and Governance, Risk Management and Compliance (GRC) ensures regulatory compliance and data protection, a priority in many new regulations. Achieve this by allocating resources to build compliance and security teams, or by identifying third-party experts or vendors to strengthen your security posture. Stringent cybersecurity measures and a robust GRC framework will serve as an essential foundation for adapting to regulatory changes.
- Embed security into your business strategy: Once you’ve established security and compliance teams, ensure that regulatory readiness and security measures are part of high-level strategic discussions — not just siloed to their respective teams. It may take a bit of education to establish a broader understanding of the importance of these topics, but it will pay off when regulatory compliance becomes embedded in your company’s planning, product development and growth initiatives.
- Apply stringent data protection measures: As I mentioned above, regulatory frameworks often prioritize data protection measures — with guidelines on how data should be collected, stored, accessed, shared, transferred, and transformed. To align with these guidelines while still making the most out of data, ensure your teams have a clear understanding of the types of data they’re handling, and determine how you can utilize that data to deliver business value and align with security measures. You can do this by deploying technical safeguards, as well as establishing policies and training to educate staff on maintaining data privacy and integrity.
- Stay abreast of regulatory changes: Finally, stay vigilant. Regularly monitor domestic and global AI regulations, staying on top of new laws, guidelines, and industry standards. As regulations evolve, adjust the strategies and processes above to stay compliant with all relevant frameworks, protecting your organization against potential legal challenges, fines, and even reputational damage.
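The data-protection bullet above mentions deploying technical safeguards alongside policies and training. As one illustrative sketch (not a measure prescribed by the EU AI Act), a common safeguard is pseudonymizing direct identifiers before data is stored or shared, so analytics teams can still join and count records without handling raw personal data. The key name and field names below are hypothetical placeholders:

```python
import hashlib
import hmac

# Hypothetical key for illustration; in practice, load from a secrets manager,
# never hard-code it.
PSEUDONYM_KEY = b"replace-with-a-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable, keyed token (HMAC-SHA256).

    The same input always yields the same token, so records can still be
    joined or deduplicated, but the original value cannot be recovered
    without the key.
    """
    return hmac.new(PSEUDONYM_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def prepare_record(record: dict, pii_fields: set) -> dict:
    """Return a copy of the record with designated PII fields pseudonymized."""
    return {
        key: pseudonymize(val) if key in pii_fields else val
        for key, val in record.items()
    }

# Example: the email is tokenized before storage; non-PII fields pass through.
record = {"email": "jane@example.com", "plan": "enterprise"}
safe = prepare_record(record, pii_fields={"email"})
```

Keyed hashing (rather than a plain hash) matters here: without the secret key, an attacker could precompute tokens for known email addresses and reverse the mapping.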
What the EU AI Act Means for U.S. Companies
While the U.S. hasn’t officially put forth national AI regulations, the EU AI Act is a sign of what’s to come: national and regional organizations introducing guidelines and frameworks tailored to local circumstances and needs.
Still, many U.S. companies may soon be pressured to align with new EU regulations to stay competitive in the global marketplace. In some cases, this may deter U.S. companies from expanding to European or global markets, or prompt them to alter product development plans for certain regions. As more regional AI regulations emerge, compliance will likely become increasingly complex in the global marketplace, with the potential for contradictory frameworks.
Companies based in the U.S. that either develop or leverage AI should consider themselves part of the regulation conversation. Yes, get processes and personnel in place to align with EU standards, but don't stop there. Business leaders should regularly be in contact with regulators, customers, and industry partners to discuss, anticipate and influence regulatory changes. By getting ahead of regulatory changes and committing to ethical AI development, deployment, and data protection, U.S. business leaders can position themselves as leaders in AI governance.
As we've seen, given its complexities and widespread applications, AI is a tricky thing to regulate. The EU AI Act and the regulations emerging from other nations are well-intentioned, but implementing them is another story. Just as AI is constantly learning and evolving, so should our approach to regulating it: governments and lawmakers should take input from AI companies and users seriously to find the balance between regulation, innovation, and market momentum.
Before founding SecurityPal, Pukar was a seed investor and advisor for a variety of tech startups, and he previously served as a Government, Regulatory Affairs & Public Policy Analyst for PwC, and an Official Liaison (Diplomatic function) to the Economy of Taiwan on behalf of the United States Department of State. Pukar holds a BA from Stanford University.