5 FAQs from General Counsels on AI (July 2023)

I’m sharing the five most common questions I received from in-house GCs this month on use of AI, along with my thoughts and related legal developments.

1.     Where is the US on regulating AI?

AI regulation in the US is a rapidly evolving work in progress, with a mix of proposed and enacted laws at the federal and state levels. These rules tackle key concerns like consumer protection, privacy, bias, and transparency. But unlike the EU AI Act, there is no single comprehensive regulation. This means we need to keep monitoring developments closely and analyze each company's legal exposure case by case.

2.     What risks are there if we use AI to generate code?

When it comes to AI-generated code, it’s crucial to be cautious. Weigh the potential risks against the benefits, and pay attention to the specific use cases involved. It’s wise to seek legal counsel on concrete actions that will minimize risk. And for private companies, expect a thorough open source audit during the buyer’s diligence process.

3.     Do we need to have an AI usage policy or can we ban usage altogether?

Just as with search engines and SaaS software, understanding AI’s do’s and don’ts is crucial: knowing how to prompt models safely, which uses of AI-generated output are acceptable, and what risks, such as inaccuracies and biases, to watch for. A well-designed policy promotes innovation while ensuring ethical and safe AI practices. Embrace AI responsibly with a thoughtfully crafted AI usage policy that meets your organization’s needs.

4.     Can I use scraped data to train my AI models?

a)     The legality of using scraped data to train models depends on the specific facts and may hinge on whether the use qualifies for the fair use defense to copyright infringement. In the past four weeks, several complaints were filed against major technology companies, including Google, OpenAI, and Meta, alleging copyright infringement, privacy violations, and other claims. While the outcomes of these cases may clear up the uncertainty around how the fair use doctrine applies to AI, it may take years before final decisions are reached.

b)     Industry pressure is mounting, with Hollywood writers striking, an open letter from the Authors Guild, and publishers calling for action against companies using their data for model training.

c)     Some companies are locking down their APIs (e.g., Twitter and Reddit) to protect their data, while others are publicizing recent licensing deals for training data (e.g., OpenAI’s notable deals with Shutterstock and Associated Press).

d)     Take note of Google’s privacy policy update this month, revealing that it may use public data to train its AI models.

5.     Are we liable for the output of our models?

a)     Whether you are building your product or service on top of a third-party model or building your own foundation models, it is important to include appropriate contractual terms and disclosures around the risks to minimize your potential liability.

b)     Keep an eye on the ongoing debate over whether Section 230 applies to AI-generated content, or DM me for a conversation.


The information in this social media post (“post”) is provided for general informational purposes only and may not reflect the most recent developments. Nothing contained in this post should be construed as legal advice, nor is it intended to be a substitute for legal counsel on any subject matter. No reader should act or refrain from acting on the basis of any information included in, or accessible through, this post without seeking appropriate legal or other professional advice.
