The path to ethical AI leads through an ever-changing landscape of government policy.
Are you ready for the changes ahead? 

Regulation’s Effects on AI Business

Although designed primarily for the European Union, the proposed tech policies will reach beyond Europe’s borders, effectively applying to most if not all online businesses—particularly those hoping to maintain access to any of the markets in the EU’s 27 member states. This mirrors how the General Data Protection Regulation (GDPR) rolled out and now applies to virtually every site and service on the internet. 

Insofar as the EU AI Act is concerned, most businesses developing and deploying AI technology shouldn’t face significant disruption. The businesses that should be concerned, however, are those fielding high-risk systems, which will be subject to the aforementioned reporting and auditing processes pre- and post-deployment. Businesses found running afoul of the AIA’s rules risk penalties of up to €30 million or 6% of their global revenue—whichever is greater—signaling the EU’s earnestness in enforcing the policy. 

All that said, why should industry care whether government gets involved? Or, more provocatively, should government be involved in the first place? The EU’s and the US government’s answer is yes: industry alone cannot be trusted to drive responsible innovation when its paramount incentive is profit. Concerns over stifling innovation are well noted, however, and it is undoubtedly in the interest of governments to facilitate industry if they wish to foster flourishing economies. Still, it bears considering how those successful economies are achieved, the idea here being “not at the expense of people, their privacy, or their rights.” If corporations aren’t incentivized to innovate in those directions by default, it’s up to the state to exert its influence to make them. 

There are myriad examples of tech companies needing government guidance and oversight, but perhaps none better than Facebook, a vast platform that develops and leverages some of the world’s most sophisticated AI. Its growing number of scandals and whistleblowers illustrate its outsized effect in shaping people’s lives both on and off the internet. Given its ability to reach billions, to boost or stifle the kinds of information its users see, and to influence the media environment it both caters to and operates in, government calls to rein in or penalize the company for its missteps have ranged from fines to a potential breakup. 

Given Facebook’s size, wealth, and power, however, imposing penalties to “solve” its many issues would be difficult even with the most extreme sanctions, such as a breakup, particularly as that would likely not lead to substantive change in the parts of the company that have proven problematic. In the words of Facebook whistleblower and data scientist Frances Haugen: “the problems here [are] about the design of algorithms, of AI.” To effect real change, government must step in before harms occur and set rules and guidelines for businesses and their technology, an assessment that even Meta, formerly Facebook, agrees with.  

The truth is that industry and government need each other; they are in this fight together. Between the EU’s pioneering tech policies in the AI space and the US’ maturing agency-based AI policy, the proposed regulations would help push tech platforms to shape their products in ways that are safer and more ethical for the markets they aim to serve. However, rules and penalties needn’t be the only focus of these policies. Given that the EU’s overarching policy goals are to promote trustworthy, secure, and human-centered AI and to encourage wider adoption of the technology, they also have a dimension of encouraging AI innovation and business. 

Similarly, in the US, a significant product of the White House’s National Artificial Intelligence Initiative Act is the National AI Research Resource (NAIRR) and its attendant National AI Research Resource Task Force—a team of government and industry experts who will advise on the implementation and administration of the government-provisioned compute cloud, data, and tools, and on how they can best be leveraged to promote AI research and innovation. Ensuring that the tech industry also has a say in contributing to AI innovation, experts on the task force include those from Google, the Allen Institute for AI, IBM, and our very own Defined.ai CEO and Founder, Daniela Braga. 

Regulation, though an occasionally fraught term, is ultimately a back-and-forth process. Its successes depend not only on industry compliance, but also industry input and feedback on its effects. As the proposals mature and are eventually adopted, industry will play an important ongoing role in moderating, improving, and streamlining future refinements as, ultimately, it’s these laws that will govern us while we conduct business. 

The Road Ahead 

Now that we’ve explored the different AI policy proposals in development between the EU and the US—and discussed their necessity—it’s clear their implementation is imminent and will reshape the AI landscape. AI legislation is coming, and businesses must be prepared for it. 

It thus naturally follows that the earlier businesses move to adapt their practices and processes to the proposed policies, the sooner, easier, and—most importantly—less costly compliance will be once the rules officially come into effect. A distinct advantage of early adoption is the ability to trial what some of these policy-compliant processes would look and feel like, opening avenues for a dialog with the government agencies proposing these policies and influencing modifications and refinements that would be beneficial to business. 

Given that both the EU and US regulations’ overarching aims are to ensure AI and the opportunities it unlocks will be safe, unbiased, and privacy-preserving, they naturally lend themselves to reinforcing ethical AI principles. It thus follows that early adoption and compliance would make for a major selling point and business advantage for AI businesses, an important consideration given the public’s growing wariness of large platforms and of tech in general.  

As discussed in previous posts, committing to and building ethical AI is no small endeavor. From data sourcing and processing to algorithmic design and selection, testing and benchmarking, implementation, and ongoing auditing, there are many steps in the AI life cycle at which businesses and their AI/ML teams need to factor in ethical considerations. Daunting as that task may be, the need for regulations makes it clear just how important ethical AI will be to our technological future. However, it’s also important to remember that we’re not in this alone: businesses will rise to the challenge of not only designing more ethical AI systems and processes, but also providing ethical AI services in the space. 

From its inception, Defined.ai has been and continues to be an ethical AI business. Long before regulation was proposed, CEO and Founder Daniela Braga made it her mission to stamp out bias in AI training data, making it easier for businesses to build AI models that were fair, inclusive, and representative in their outputs. With the introduction of the GDPR, Defined.ai was one of the first to adapt and implement the policies because they made sense for the AI future we wanted to see. 

“We took a year as a small startup to develop our own data privacy manifesto and policies, and while the world tried to catch up [to the GDPR], we were already implementing these, ahead of the game,” recalls Braga of the GDPR’s introduction. “We are a data company—obviously we had to tackle that. When the California Consumer Privacy Act came, we were already operating under GDPR requirements.” 

Whether compliance with the proposed EU and US regulations will be as swift as GDPR compliance was remains to be seen. However, Defined.ai has been and will always be a step ahead, both to help our partners navigate these challenges and because it’s the right thing to do. These regulations are, after all, some of the best ways we can reach our ethical AI future sooner. 

“What I can tell you is that being small, we were able to adapt faster, but we saw our clients—we work with the largest AI builders in the world—struggling to catch up,” says Braga, noting the need for cooperation in this space. 

Given Defined.ai’s forward-thinking vision of an ethical AI future and our size and agility, we can’t imagine a future where we aren’t sharing our vision and expertise. We’re all in this race toward a better, more just AI future together. Why not join us and see what Defined.ai can do to help you realize your ethical AI goals and meet compliance with the coming AI regulations?