Generative AI Regulation Is on the Horizon, but How Will This Impact the Market?

By Reece Hayden | 2Q 2023 | IN-6988

Although most governments remain in an exploratory or drafting phase, generative Artificial Intelligence (AI) regulation is on the horizon. Private and public stakeholders must be ready to build a fair framework that mitigates the risks of generative AI, while ensuring that social and economic opportunities can flourish. Given the scope of generative AI, these complex frameworks must address six interconnected areas to ensure that we move toward “responsible AI”; however, global inconsistency will have far-reaching social, economic, and geopolitical implications.

Governments Begin to Define Their Regulatory Approach to Generative AI

NEWS


Regulators are already playing catch-up with generative AI, given the pace of innovation. They are only now discussing frameworks, which will likely take 1 to 2 years to come into force. The goal is to mitigate AI risks, while maximizing social and economic opportunities. However, dreams of a common global framework seem far off, as responses so far have been fragmented, with some governments looking to apply a “light touch,” while others are being slightly more restrictive:

  • Australia: The government has rolled out a Responsible AI Network, as well as funding for the responsible rollout of the technology. In addition, talks are ongoing about amending the Privacy Act to cover transparency within generative AI.
  • European Union (EU): The proposed AI Act calls for three main requirements: 1) generative AI models and applications must be independently reviewed before commercial release; this review will allow regulators to classify risk across a framework spanning minimal, limited, high, and unacceptable risk; 2) generative AI is banned for real-time facial recognition and biometric security; and 3) content generated by AI must be disclosed to consumers to prevent copyright infringement.
  • India: Following a pro-innovation approach, the government is expected to apply a “light touch” to regulation, with no immediate plans for legislation. But certain checks and balances are being put in place to help standardize and support “responsible AI” development.
  • United Kingdom: Bullish on AI safety regulation, Rishi Sunak is looking to position the United Kingdom as the “geographic home of global AI safety regulation.” Although different proposals have been made, the latest announcement has indicated a relatively fluid approach that places any regulation in the hands of the affected sectors. This framework will complement the United Kingdom’s existing General Data Protection Regulation (GDPR) and comes with additional economic stimulus to drive Venture Capital (VC) investment in U.K. AI companies.
  • United States: Given the U.S. political structure, federal regulation is unlikely to be forthcoming; however, plenty of influential private/public stakeholders are lobbying for regulation. The United States has taken tentative steps toward regulation, as the National Telecommunications and Information Administration (NTIA) has put out a formal request for policy input targeting the issue of AI ecosystem accountability.
  • China: Proposed regulation by the Cyberspace Administration of China (CAC) looks to target both service providers and users. Some of the key areas are: 1) legal responsibility for training data; 2) generated content must respect “social virtue and public order customs”; 3) service providers must define the audience/purpose of their services and adopt suitable measures to prevent reliance/addiction; 4) service providers cannot retain user information; and 5) security assessments for public AI tools.
  • Japan: The government is trying to catch up with the United States and China by limiting regulation. So far, Japan has said that it will not enforce copyright on any data used to train generative AI models.

Six Key Areas That Regulation Should Focus On

IMPACT


Regulation should balance risks and rewards. Below, ABI Research explores the six key areas that regulatory frameworks should address, as they represent the most significant “risk-reward” trade-offs for stakeholders:

  • Copyright: High-profile legal challenges have resulted from vendors using “copyrighted” data for training; e.g., Getty Images recently sued Stability AI, the developer of Stable Diffusion, claiming its images were used without permission or compensation. Regulation should look to protect enterprise creative property, while allowing vendors to optimize model performance and cost.
  • Privacy: Countries, enterprises, and consumers are worried about the data used to prompt chatbots. Data gathered and stored by vendors or countries could have significant intellectual property, geopolitical, and public privacy implications. Enterprise intellectual property (e.g., internal code) could be leaked to competitors through model training, or worse, consumer/enterprise activity could be spied on. However, regulating this may be challenging if countries want to remain internationally competitive, given the value of these user data for training. Instead, defining legal requirements to disclose when/if data will be captured, where they will be stored, and how they will be used can help better inform user actions, while still ensuring that vendors have an opportunity to pursue a potential competitive advantage.
  • Ownership: Ownership of content generated by generative AI is hugely complicated, both for monetization and for legal liability. Some argue that generative AI is simply a tool, so any content generated is owned by the user, while others suggest ownership lies with the generative AI developer. A clear distinction is evident when enterprises fine-tune and deploy models on their own infrastructure, but can applications like ChatGPT claim ownership when their tools are used?
  • Job Displacement: One of the hardest areas to regulate without hindering enterprise investment/innovation. Enterprise deployment of generative AI will start to displace jobs as it augments productivity and even automates previously human-operated processes. Minimizing widespread redundancies is essential, but regulators need to balance this with enterprises’ freedom to deploy and innovate in order to access cost savings and deliver new customer experiences/services.
  • Sourcing: As AI-generated content becomes increasingly realistic, misinformation could become a significant geopolitical challenge. Enforcing clear regulation, from foundation models upward, that imprints watermarks/citations highlighting how and where content was generated is vital for misinformation prevention. In addition, this can support differentiation between human- and AI-generated content, allowing human creativity to retain some inherent value.
  • Transparency: Most generative AI applications are “black boxes,” but ensuring transparency/explainability will help with bias prevention, troubleshooting, and reducing hallucinations. However, some could argue that disclosing the workings of “closed-source” models could eliminate competitive differentiators like performance. This will be a difficult line for regulators to tread and will likely receive significant opposition from market leaders looking to protect their market primacy.

One question is outstanding: who should be regulated? Foundation models, applications, and users could all be targeted. ABI Research recommends a “light” top-down framework that implements laws to reduce copyright risk, limits misinformation through watermarks at the foundation model level, is complemented with a bottom-up, enterprise-led model, is built through private-public cooperation, and defines how employees can use AI safely.

Balanced Regulation Can Be Pro-Innovation

RECOMMENDATIONS


The enterprise Business-to-Business (B2B) generative AI market remains nascent, with most enterprises still assessing the opportunities and risks of this new technology. Some “doomsayers” argue that regulation will hamstring innovation in the market; however, ABI Research argues that balanced regulation that moves the market toward “responsible AI” usage will have a positive impact on the enterprise market. For example, regulation clarifying who owns content generated by AI will help enterprises more effectively define their implementation strategies. On top of this, a regulatory push toward “responsible AI” will spur innovation across core areas, as the market is already demonstrating:

  • Copyright regulation on training data, if imposed, will increase the cost of data, placing greater value on data services (e.g., curation, creation, and federated databases). The squeeze on training data is already evident as companies look to protect their intellectual property, and it has contributed to increased investment funding for data service startups: MOSTLY AI (a synthetic data company) raised US$25 million; Snorkel AI raised US$85 million at a valuation of US$1 billion; and Hazy raised US$9 million.
  • NVIDIA is looking to position itself as a “responsible AI” vendor by innovating across multiple areas. It has released NeMo Guardrails, an open-source toolkit for adding safety controls to conversational systems, and entered partnerships with Getty Images, Adobe, and Shutterstock to license content to train Picasso, its text-to-image generation service.

These examples of innovation and investment will prove to be only a drop in the ocean if balanced regulation is introduced. However, adverse consequences could result from regulation, especially concerning the cost of training foundation models and the development of new services. For this reason, private/public cooperation is essential to balance risks and rewards, even if it opens the process to lobbying.
