Generative AI and other low-code/no-code technologies are spreading like wildfire in enterprises worldwide. Regulators have taken notice, expressing concern about these deployments and promising action against companies that offer or use AI-powered tools that run afoul of laws governing privacy, fair trade, and other areas. As a result, risk management leaders and their teams must ensure that their generative AI deployments pose no risk to enterprise data or to compliance with applicable laws and regulations.
Regulators to AI Builders: We’re Watching You
In a May 3, 2023, New York Times opinion piece, U.S. Federal Trade Commission (FTC) chair Lina Khan compared the explosive growth of generative AI to that of “Web 2.0” in the early 2000s. “What we initially conceived of as free services were monetized through extensive surveillance of the people and businesses that used them. The result has been an online economy where access to increasingly essential services is conditioned on the widespread hoarding and sale of our personal data,” Khan wrote. “What began as a revolutionary set of technologies ended up concentrating enormous private power over key services and locking in business models that come at extraordinary cost to our privacy and security,” she added.
Khan and the FTC have announced that they will do everything possible to prevent similar problems with generative AI. Specifically, the FTC intends to crack down on companies that make or sell AI-powered products that create or amplify bias or deception. The FTC has also warned AI developers against engaging in unfair business practices or unfairly limiting access to the resources necessary to build AI-powered solutions.
To underscore this intention, the FTC and three other U.S. government agencies issued a joint statement declaring that decisions companies make with AI-powered tools must comply with all U.S. laws. The FTC was joined by the Civil Rights Division of the U.S. Department of Justice, the Consumer Financial Protection Bureau (CFPB), and the U.S. Equal Employment Opportunity Commission (EEOC). “Although many of these [AI-powered] tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes,” the statement said.
Scrutiny of generative AI and related technologies is increasing worldwide as well. In early April 2023, Italy became the first Western country to ban ChatGPT, the generative AI-powered chatbot and the fastest-growing consumer application to date. The Garante, Italy’s data protection authority, ordered OpenAI, ChatGPT’s creator, to stop processing data for or from Italian users. The agency lifted the ban only after OpenAI made multiple changes to the software to address the Garante’s concerns.
The European Union (EU) has already proposed major legislation called the European AI Act. Among other things, the Act is expected to severely limit the use of AI in several specific areas, including critical infrastructure, education, and the legal and judicial systems. The Act is also expected to align with the EU’s General Data Protection Regulation (GDPR) and to focus on AI applications that could affect fundamental human rights or safety.
Regulators are focusing in particular on the data used to train the so-called large language models (LLMs) that drive generative AI. They are concerned both with the accuracy of that data and with protecting the privacy of the individuals it describes. Inaccurate data can lead to critical or even catastrophic consequences, especially in areas such as energy and healthcare. And data that compromises privacy could bring penalties and reputational damage exceeding those caused by violations of current laws and regulations.
Two Critical Enterprise Challenges
The rapid growth of attention from legislators and regulators parallels the unprecedented growth of generative AI and other AI-powered and low-code/no-code deployments at enterprises worldwide. (OpenAI does not operate in China, but several Chinese companies are reportedly building alternatives to ChatGPT and other generative AI tools.) As a result, wherever your enterprise does business, government agencies are likely considering or implementing laws and rules that will affect where and how your business can use AI.
This means your business now faces two near-immediate challenges. First, to protect valuable enterprise data and maximize its business value, you need to know where generative AI and other low-code/no-code deployments are happening, preferably as they happen. Second, beyond identifying those deployments and their potential risks to your data, you must know, or quickly learn, whether they violate any local, national, or transnational laws or rules. Meeting both challenges will require more effort, from more participants, than enterprise data protection alone.
Four Things You Need To Do Now
Figure out what you have. Then put processes and technologies in place that provide guardrails for generative AI and other low-code/no-code deployments by citizen developers across your enterprise. Those guardrails must include instant notification when a deployment takes place. They must also support careful analysis of how each actual or requested deployment affects, or could affect, the accuracy, consistency, security, and timeliness of the data that drives decisions at your business.
Determine your legal posture. In parallel with determining what you have, determine whether any of your deployments violate, or could soon violate, laws or regulations that apply where you do business. California, Colorado, Connecticut, Utah, and Virginia have comprehensive data privacy laws in place, and other states have introduced or announced plans for privacy protection laws as well. Most of these will likely apply only within their respective states, but the GDPR protects individuals in the EU regardless of where the organizations processing their data are based. Gathering this information and keeping it current will require collaboration among those responsible for IT, cybersecurity, legal affairs, compliance, and perhaps HR.
Get and keep users on board. Many, if not most, enterprise generative AI and low-code/no-code deployments take place with little or no corporate oversight. Users must therefore be educated consistently and frequently about the risks to enterprise data and to legal and regulatory compliance, and about their own valuable roles in mitigating those risks. Your HR team can help deliver this training to new hires at onboarding and to all employees on a regular schedule.
Keep current. The laws and regulations affecting AI deployments are growing and evolving as fast and unpredictably as the technologies themselves. You and your colleagues must ensure that your enterprise can stay on top of both evolutionary trajectories. Again, this will require collaboration beyond IT and risk management.
A single episode of non-compliance with a relevant law or regulation can be costly to your enterprise, both financially and reputationally. It is critical that you and your colleagues keep tabs on laws and regulations as they evolve, and adapt the management of your generative AI and low-code/no-code deployments as needed to minimize risks to your data and your legal standing.
How Incisive Can Help
At Incisive Software, we’re committed to helping organizations build a strong foundation for success based on accurate and trustworthy data. However, with the growing reliance on citizen-developed applications, low-code/no-code and open-source tools, and complex spreadsheets, the risks of data errors and mismanagement have become greater than ever.
We’re dedicated to providing innovative solutions that empower organizations to reduce their exposure to these risks, improve data quality and enable confident decision-making. By combining automation, modern technologies, and proven practices, our solutions bring greater accuracy, control, and insight to managing an organization’s most complex, critical, and sensitive data resources.