Episode 4

Published on: 4th Dec 2024

Balancing Risk to Reward: How Businesses Can Embrace AI with Legal Confidence

This podcast episode dives into the intricate relationship between law and artificial intelligence, featuring Angela Dunning, a prominent AI lawyer. One of the main points discussed is the balancing act companies must perform when considering the use of AI: while there are significant opportunities for innovation and efficiency, there are equally important legal risks to navigate, particularly around intellectual property and liability. Dunning emphasizes the need for companies to proactively understand how AI is being used within their operations, as the legal landscape is still evolving. The conversation sheds light on the importance of having clear protocols for AI use, especially regarding copyright issues and compliance with legal standards. As organizations increasingly adopt AI technologies, the insights shared provide a valuable framework for making informed decisions that align with both business goals and legal requirements.

The episode features a deep dive into the intersection of law and artificial intelligence, led by Adam Davidson and guest Angela Dunning, a prominent litigation partner with extensive experience in IP law. The conversation begins with an exploration of how AI has increasingly become a part of legal practices, particularly in intellectual property litigation. Angela discusses the varying approaches companies are taking towards AI, with some legal departments adopting a 'stay away' strategy due to perceived risks, while others embrace AI to maximize benefits. This dichotomy highlights the importance of finding a middle ground, where companies can safely experiment with AI while also establishing protocols for its use.

One of the key themes in this episode is the importance of tracking AI's usage within companies. Angela emphasizes that even if a company decides to abstain from AI, it cannot ignore the reality of an AI-driven world. She advocates for a proactive approach where companies identify use cases for AI and develop practical monitoring strategies. The conversation also delves into specific risks associated with AI, such as copyright issues and the potential for employees to inadvertently use AI tools without proper oversight. Angela underscores the need for transparency and communication within organizations regarding AI, as this will help mitigate risks and promote responsible usage.

The episode wraps up with a fascinating discussion on copyright and the implications of using generative AI. Angela explains the current stance of the US Copyright Office, which asserts that AI-generated content cannot be copyrighted, thus entering the public domain. This creates significant challenges for businesses that seek to protect the content created through AI. The discussion touches upon the famous 'monkey selfie' case, which illustrated the necessity for human authorship in copyright claims. Overall, this episode provides valuable insights into the legal landscape of AI, offering guidance for businesses navigating the complex world of intellectual property and technology.

Takeaways:

  • AI's integration into business creates both opportunities and risks that must be navigated carefully.
  • Legal departments often advise caution with AI, but a balanced approach is essential.
  • Companies need to monitor AI use internally to avoid unintended legal liabilities and risks.
  • Understanding the copyright implications of AI-generated content is crucial for businesses today.
  • There are significant differences in risk levels based on how AI is used across industries.
  • Open communication between legal teams and executives can foster smarter AI deployment strategies.
Transcript
Adam Davidson:

Hi, this is Adam Davidson, one of the co-founders of FeedForward.

Adam Davidson:

And this, of course, is the FeedForward podcast.

Adam Davidson:

This week is the first of what I expect to be many episodes about the fascinating, confusing world of law and AI.

Adam Davidson:

We have Angela Dunning, one of the leading AI lawyers in the country, who's going to talk to us about AI and IP, AI and risk, and AI and opportunity.

Adam Davidson:

This is a conversation specifically for non-lawyers, so I think you'll get an awful lot out of it.

Adam Davidson:

And we're going to have Angela back on another time.

Adam Davidson:

So if there are questions you wish I had asked or questions on your mind, please post them in the forum or email us and we'll make sure to ask them next time we talk to Angela.

Adam Davidson:

It's great to meet you, Angela.

Angela Dunning:

It's nice to meet you, by the way.

Adam Davidson:

Yeah, thank you so much for doing this.

Adam Davidson:

I'm really excited for this conversation.

Adam Davidson:

So, yeah.

Adam Davidson:

Who are you and what do you do for a living?

Angela Dunning:

Hi, I'm Angela Dunning.

Angela Dunning:

I'm a litigation partner at Cleary Gottlieb based in Silicon Valley.

Angela Dunning:

I've been practicing IP litigation for about 25 years.

Adam Davidson:

Cool.

Adam Davidson:

And AI has become an increasing part of your life as a lawyer?

Angela Dunning:

It has, me along with everyone else, but that is a huge part of my practice these days.

Adam Davidson:

Okay, great.

Adam Davidson:

So I'm hoping this is the first of several conversations.

Adam Davidson:

I wanted to start a little broad because I have not actually had a good talk with a lawyer about AI, but I have had a lot of talks with people about lawyers and AI.

Adam Davidson:

So something I hear a lot is I'll talk to someone either in the C-suite of a large company or a senior vice president or something.

Adam Davidson:

Often it could be marketing, it could be consumer-focused, or the head of innovation.

Adam Davidson:

And a very common thing is, well, we're not really doing much with AI because our chief legal officer, our general counsel, says it's too risky.

Adam Davidson:

They don't want us touching it.

Adam Davidson:

Is that something you hear?

Angela Dunning:

Yeah.

Angela Dunning:

I think so far we've seen a range of approaches from companies' legal departments, ranging from just stay away from it, it's too risky, all the way to no holds barred.

Angela Dunning:

Let's figure out how to use this and maximize the benefits we can provide our users and customers and employees.

Angela Dunning:

And honestly, I think something in the middle makes the most sense.

Angela Dunning:

Even if you're choosing to abstain from use of AI, you're not immune to the changes in the environment, in the world we live in, where AI is a fact of life and companies really need to figure out how to make use of it in furtherance of their business.

Angela Dunning:

So I think the best thing that chief legal officers or general counsels can do is figure out the best use cases for their business, come up with a practical protocol for deployment and monitoring of its use.

Angela Dunning:

Just saying no runs the risk that employees are going to use it anyway and they're not going to tell you.

Adam Davidson:

I'd say not risk.

Adam Davidson:

I would say certainty that employees are currently using it all the time and just won't tell you about it.

Angela Dunning:

I mean, that's absolutely right.

Angela Dunning:

And tracking and monitoring is really important for a company.

Angela Dunning:

So the board needs to know how AI is being used in furtherance of the business.

Angela Dunning:

The public disclosures with respect to use of AI need to be accurate.

Angela Dunning:

But also if AI is being used, for instance, in the generation of code or other content that the company needs to own, that needs to be tracked and disclosed in any copyright applications, or you run the risk of potential waiver of coverage for that content.

Angela Dunning:

And so tracking becomes really important.

Angela Dunning:

And having a culture in which employees can come forward with proposed use cases and have a process for getting that approved, I think becomes very important.

Angela Dunning:

The risk level differs depending on what the use case is.

Angela Dunning:

And so that should be taken into account too.

Angela Dunning:

So let me come back and address that question if you have it.

Adam Davidson:

Sure.

Adam Davidson:

Okay, so I want to get into.

Adam Davidson:

I mean, Certainly we at FeedForward are advocates of using AI.

Adam Davidson:

I mean, that's sort of our whole reason.

Adam Davidson:

And I might personally be on the reckless side, but I want to begin with honoring the risk, like honoring that skeptical or that concerned lawyer.

Adam Davidson:

So would it be helpful to kind of list.

Adam Davidson:

Here are some risky areas that you need to think about.

Adam Davidson:

And I'm assuming it's quite different for different industries.

Adam Davidson:

Like if you are a medical company that needs to follow HIPAA or you're a bank with lots of proprietary data.

Adam Davidson:

So yeah, walk me through what some of the reasonable risks are and how those vary based on different companies, different industries.

Angela Dunning:

Sure, the analysis is going to be different for every company and industry and use case.

Angela Dunning:

So I can't give legal advice on any of these issues in particular, but I can highlight some overall risks and considerations that companies should take into account.

Angela Dunning:

So certainly there are some use cases that are riskier than others.

Angela Dunning:

If companies are using AI to generate content that is going to be used purely internally, for instance on the company's intranet, or to create documents and templates that employees may use to document sales efforts, or any number, really a limitless list, of things that companies may need content for and can do more efficiently using AI tools, that's probably a lower level of risk.

Angela Dunning:

If, on the other hand, the company is using AI to generate content that is going to be public facing or that they need to own the copyright in, such as the code base to a consumer facing product, that is going to be a riskier use case and needs to be monitored.

Angela Dunning:

Because again, there are certain limitations on what can be copyrighted where AI is used to augment the creation process.

Angela Dunning:

And there is also a risk that in using code-generating AI tools, code that has been open sourced and is subject to an open source license may wind up in the final code base and inadvertently subject the company to copyleft or open source terms that they hadn't planned on.

Adam Davidson:

Oh wow.

Angela Dunning:

Yeah.

Adam Davidson:

So If I use AI to create the FeedForward app and that becomes a wildly successful app, and I don't realize that Claude or ChatGPT or whatever had accessed some open source code, I might find that even though we now make a billion dollars a year from this, it's actually an open source project and I can't protect it.

Angela Dunning:

So obviously the circumstances, you know, you're.

Adam Davidson:

Not giving me legal advice for this particular FeedForward.

Angela Dunning:

Obviously the answer to that question depends on the specific circumstances.

Angela Dunning:

But that is a concern, that's a reasonable risk.

Adam Davidson:

So I guess to be blunt, like when I hear this conversation, the legal office is, you know, saying no, it's usually in the context of, oh, there are a bunch of sticklers.

Adam Davidson:

And so what I am trying to establish is they're not just a bunch of sticklers.

Adam Davidson:

Maybe some of them are, but there are reasonable risks.

Adam Davidson:

So that IP risk, the copyright risk. There are some plain-vanilla risks that we already know about, right?

Adam Davidson:

Like having a public facing chatbot that starts giving discounts that we didn't mean to give, that we now, as I understand it, are likely to have to honor.

Adam Davidson:

Right?

Adam Davidson:

Not to mention the reputational risk of a chatbot that starts saying really offensive things or whatever.

Adam Davidson:

Right.

Adam Davidson:

Those, those would be just kind of obvious risks, I guess.

Angela Dunning:

Well, I haven't heard of the chatbot that offers discounts the company hasn't approved yet.

Angela Dunning:

Presumably any company deploying a chatbot has taken steps to fine tune that model to adhere to certain parameters and rules.

Angela Dunning:

But you know, there are a number of other risks that I could detail and all of these need to be considered.

Angela Dunning:

They don't mean it's prohibitive, they just mean that companies need to make smart choices about where and how to use AI tools.

Angela Dunning:

So another risk that we are seeing is that where companies in the same industry, so competitors are using the same tool to set prices, the Federal Trade Commission may take a dim view of that and find that there is collusive activity.

Angela Dunning:

Right, that you've basically hired the same person or entity to set prices in a way that is collusive.

Adam Davidson:

Oh, interesting.

Adam Davidson:

I've done that.

Adam Davidson:

I've actually, I had a consulting gig and I asked ChatGPT and Claude like, how much should I charge?

Adam Davidson:

Each one had a different amount and I actually ended up kind of in the middle because I thought they both had valid points.

Adam Davidson:

But if 20 other consultants had done the same thing for the same job, I hadn't thought of that.

Adam Davidson:

I mean that just like we can't all hire the same pricing consultant who says from now on a gallon of oil should be $3.25.

Adam Davidson:

And we all do that.

Adam Davidson:

Wow, I hadn't thought of that.

Angela Dunning:

That's right.

Angela Dunning:

So that's another area that hasn't gotten too much coverage.

Angela Dunning:

I want to turn back and highlight the copyright risk issue that I mentioned in just a little bit further detail.

Angela Dunning:

So currently the US Copyright Office has taken the position that any content created using generative AI tools, whether that is text or images or video, cannot be copyrighted, cannot be registered.

Angela Dunning:

Instead, it would be in the public domain.

Angela Dunning:

It's not that the platform owns it.

Angela Dunning:

Neither the platform nor the user are able to own and control use of that content.

Angela Dunning:

And this has significance to the decision as to whether and what to use AI for.

Angela Dunning:

Now, the Copyright Office is not going to have the final say on that question.

Angela Dunning:

This is being litigated in the courts.

Angela Dunning:

And the courts may find that there are certain uses of AI that entail such significant human contribution and creativity that they should be protected by copyright.

Angela Dunning:

But the contours of that ownership and where it will land remain to be seen.

Angela Dunning:

And so it is important that companies consider whether the content they're creating is something that they need to own and prevent others from using when they decide whether to deploy AI tools in the creation of that content.

Adam Davidson:

All right, so that is a huge area and I've been reading on it and as a non lawyer, I just find I get more confused with everything I read.

Adam Davidson:

I might also be confused because I don't understand.

Adam Davidson:

But it seems to me the Copyright Office and the Patent and Trademark Office, they're trying to look at models that don't quite match AI.

Adam Davidson:

Like I think, were you involved in the monkey selfie case, where a monkey took a picture and it was determined that it couldn't be copyrighted because it wasn't taken by a human?

Adam Davidson:

And so they're using that.

Adam Davidson:

Right.

Adam Davidson:

And then there's a case about a kind of cataract lens device.

Adam Davidson:

Anyway, can you tell me about the monkey case and how that applies to AI and what we can learn from that?

Angela Dunning:

Sure.

Angela Dunning:

That was possibly the greatest case I've ever had the chance to work on.

Angela Dunning:

It's not every day a monkey sues your client for copyright infringement.

Angela Dunning:

But the photographer, the human photographer in that case, David Slater, went from England to Indonesia to photograph these crested macaques in the wild, where they are severely endangered.

Angela Dunning:

And the story goes that he was having no luck getting them to pose for his camera.

Angela Dunning:

And he set out a lead and they started clicking selfies, which he, you know, then took back to England to, you know, finally develop and take a look at and thought, holy cow, I've got the next cover of National Geographic.

Angela Dunning:

These are amazing.

Angela Dunning:

He sought to register them with the Copyright Office.

Angela Dunning:

It rejected the copyright registration on the ground that the monkeys took the photos, not him, and therefore they lacked a human author.

Angela Dunning:

At the same time, then Wikipedia published the photographs and destroyed any chance he had of selling those for a meaningful amount.

Angela Dunning:

And then the final shoe dropped and the monkey filed a lawsuit for copyright infringement, saying, hey, I'm the author and you infringed my rights.

Angela Dunning:

Now, the monkey never came to court, so I never had a chance to depose him or ask how much he actually cared about this case.

Angela Dunning:

It's plain he didn't because he didn't show up.

Angela Dunning:

But the case ended up standing for the proposition that human authorship is required for a copyright and that only a human has standing to enforce the Copyright Act.

Adam Davidson:

How did the monk, like, how did the lawsuit actually.

Adam Davidson:

Was it the Indonesian government or who actually filed the lawsuit?

Adam Davidson:

Or was it a very smart monkey who understood copyright law?

Angela Dunning:

I have no idea how smart Naruto was.

Angela Dunning:

He may have been brilliant, but the lawsuit was filed on his behalf by PETA, the People for the Ethical Treatment of Animals.

Angela Dunning:

And I believe they had a genuine interest in bringing attention to the intelligence and warmth of these creatures.

Angela Dunning:

I don't think that using a federal lawsuit to do that was an appropriate use of the judicial system.

Angela Dunning:

But nevertheless, the case went to a federal district court judge who dismissed it for lack of standing on the part of the monkey.

Angela Dunning:

And then the monkey appealed to the Ninth Circuit, where I am happy to say that reason prevailed and the monkey was found not to be a proper litigant in the case.

Angela Dunning:

Obviously, it's silly.

Angela Dunning:

You can make all sorts of jokes about how everybody was going bananas or, you know, what a.

Angela Dunning:

Let me stop there.

Angela Dunning:

We'll come back.

Angela Dunning:

You could cut that joke off there.

Adam Davidson:

No, no, that was good.

Adam Davidson:

I like that joke.

Angela Dunning:

But the primary takeaway from that case that is of relevance to the current AI discussions is the human authorship requirement.

Angela Dunning:

And that was never really in dispute.

Angela Dunning:

If you look back to U.S. Supreme Court cases, it is plain that we've always granted copyright for purposes of, as the Constitution says, promoting the progress of science and the useful arts by encouraging authors and artists to create work for the benefit of society.

Angela Dunning:

Right.

Angela Dunning:

To increase the discourse, the public discussion, to further the art and science that we all benefit from in our culture.

Angela Dunning:

And we do that by giving copyright to humans to incentivize creation.

Angela Dunning:

Now, this opens up very interesting questions in the current debate over fair use and whether AI furthers the progress of science and the useful arts.

Angela Dunning:

And I would argue that it most certainly does.

Angela Dunning:

But this human authorship requirement is critical because in the current context of the Copyright Office's discussions, it has used that requirement to say, AI content cannot be owned at this time.

Angela Dunning:

I would argue that's not quite the conclusion that flows from the premise.

Angela Dunning:

Yes, human authorship is required.

Angela Dunning:

What constitutes human authorship in the context of AI augmented work is an open question, and again, one that the courts are going to have to resolve.

Angela Dunning:

It may not be sufficient, for purposes of owning a copyright, to enter a prompt into a tool saying that you want a picture of a cat holding a monkey or a monkey holding a cat.

Angela Dunning:

But there are very sophisticated prompts.

Angela Dunning:

There are efforts being undertaken by artists and creators that involve hours and weeks and months of work to create truly beautiful things using AI tools.

Angela Dunning:

And there is a good argument, I think, that that work should be protected just like any other act of human creation.

Adam Davidson:

I want to have a much longer talk with you about AI and IP.

Adam Davidson:

I was thinking this would be kind of an intro to the whole legal field, legal area.

Adam Davidson:

But, yes, it's fascinating.

Adam Davidson:

I mean, it makes me think of John Cage, the composer, who would use dice, like randomizers, to generate compositions. So should we say he didn't compose it, that the dice composed it?

Adam Davidson:

You could imagine.

Adam Davidson:

I'm sure there's been painters, you know, modernist or postmodernist painters, who set up machines to create certain patterns.

Adam Davidson:

Like, is it.

Adam Davidson:

You know, so to me, it.

Adam Davidson:

When I have read what the Copyright and Patent and Trademark Offices say, they're acting like AI is its own thing that's somehow a machine that no human activity can go through, which is unlike any other machine ever.

Adam Davidson:

Right.

Adam Davidson:

Like, you know, I think of like the famous Dadaist art that was just a urinal in a museum, and the artist didn't create the urinal.

Adam Davidson:

You know, a machine made the urinal, presumably.

Adam Davidson:

But the choice to put the urinal there in a museum, et cetera, is not just considered art, it's considered a crucial part of 20th century art.

Adam Davidson:

So, but what you're saying is, from a legal defensiveness standpoint, that may be a fight for artists or whatever; that's a fight for another day.

Adam Davidson:

But today, taking a fairly conservative view, if there is either crucial creative work or coding or even, I'm assuming, machinery that you are developing, and it's really important that you have IP protection, is it: don't use AI at all?

Adam Davidson:

I don't want to put you in a place of having to give legal advice, but how should someone think about that?

Angela Dunning:

Sure.

Angela Dunning:

Well, first let me say generative AI is a tool.

Angela Dunning:

Like many others, it is a tool that artists or creators or companies can choose to use or not use.

Angela Dunning:

And in any event, they need to deploy in a way that makes sense for their businesses and their use cases.

Angela Dunning:

So some of the factors that I would suggest companies take into account in deciding whether and how to use it are, again, do I need to own the copyright in what is created?

Angela Dunning:

If not, it's probably an easier call to go ahead and experiment.

Angela Dunning:

If you do, then there are certain steps that companies can take to mitigate some of the risks that we've described.

Angela Dunning:

So tracking what is being created on AI versus by humans becomes important.

Angela Dunning:

If you do need to apply for copyright protection, the Copyright Office will require that you disclose the AI-generated portions. You may be able to own, and you should be able to own, the human-created portions.

Angela Dunning:

The Copyright Office has also said that in a work that is generated using AI tools, that the human creator may own the copyright in the overall composition or layout of the work, even if they can't own the individual AI created elements.

Angela Dunning:

There's also a manner of using AI tools where content owned by the user is fed into the tool as a prompt.

Angela Dunning:

So you can imagine that copyrighted text or images owned by the user are used to prompt further work or output by the tool.

Angela Dunning:

If that output is substantially similar to the human created copyrighted content that was used to prompt it, then the resulting work should still fall within the scope of the original copyright, and third parties should not be able to copy that work without infringing the underlying work that was used in the prompt.

Angela Dunning:

So there are a variety of approaches that companies can bring to use these tools effectively and with risk mitigation in mind.

Adam Davidson:

So as we wrap up this initial call, and if it's okay with you, Angela, I'd like to give a shout out to the FeedForward membership to send questions that we can maybe answer.

Adam Davidson:

So I'm generally not talking to lawyers.

Adam Davidson:

I'm generally talking to business executives who are talking to lawyers and want to persuade them.

Adam Davidson:

You know, as you said, not being so conservative that you don't allow AI use at all, which really means you just allow lots of AI use.

Adam Davidson:

You just don't know what's happening or just go nuts.

Adam Davidson:

Do whatever you want.

Adam Davidson:

Finding that middle ground.

Adam Davidson:

And do you have advice for sort of non-lawyer executives who are talking to their general counsels and chief legal officers? Like, make the best case for let's not maximize conservatism or risk aversion.

Adam Davidson:

How do people have that conversation?

Angela Dunning:

I think it's the same conversation a company has any time they're trying to weigh the risks and benefits of taking a particular approach.

Angela Dunning:

Obviously, generative AI tools can introduce a lot of efficiency.

Angela Dunning:

They can reduce the amount of time needed to perform tasks that employees don't always like to perform.

Angela Dunning:

You know, they can be used.

Angela Dunning:

Let me give you one example.

Angela Dunning:

I have a friend who runs a startup devoted to generating new drugs for treatment of orphan diseases.

Angela Dunning:

Incredibly important work.

Angela Dunning:

He uses generative AI tools to generate pitch decks that he can use with investors.

Angela Dunning:

That is not work that he would have time to do himself.

Angela Dunning:

And without the ability to raise money, he couldn't focus on his main goal, which is to bring these therapies to the clinic and hopefully to market to help patients.

Angela Dunning:

That is a perfect example of a way in which generative AI tools can be used to benefit a company with little risk and something that companies and legal departments need to be open to considering.

Angela Dunning:

And I think if you set up an open line of communication, if you have somebody, hopefully with knowledge of the space designated to help establish protocols for clearance of particular projects, for vetting of tools and the terms of service that come along with those tools, somebody who is assigned to make sure that any steps are being taken that may be necessary to avail yourself of indemnification or other protections in the terms offered by companies.

Angela Dunning:

If there is somebody monitoring again how these tools are being used, I mean, let me give you another example of a risk.

Angela Dunning:

If you don't know your employees are using these tools and they go around your back and sign up for a personal account using one of these tools to help with their work, the company may find that that employee has bound the company to a certain set of terms that would be disadvantageous, rather than actually negotiating an enterprise agreement for use of those tools.

Angela Dunning:

Again, it's just something to be considered.

Angela Dunning:

What are the terms under which we want to use this tool?

Angela Dunning:

Are there provisions that we want to negotiate?

Angela Dunning:

Do we want to use this on premises in a closed environment so that we can protect our confidential information and trade secrets to the extent they're being uploaded?

Angela Dunning:

Heaven forbid an employee who feels like they have to circumvent management uploads confidential information or trade secrets of the company to a non private version of a tool and somehow subjects that information to a claim that it is no longer a trade secret or protected, or allows it to be trained on by the tool or accessed by third parties.

Angela Dunning:

These are all things that companies need to be aware of.

Angela Dunning:

And if you don't have a plan in place, if you don't have people that your employees can talk to to figure out how to use these tools responsibly and effectively, I think the risk of the unknown is greater than having a plan in place to make sure you know how the tools are being used and that they're being used effectively and appropriately.

Adam Davidson:

Yeah, that makes complete sense to me.

Adam Davidson:

a fundamental level, like in:

Adam Davidson:

Not really, because we're all using Google Maps and spam blockers and.

Adam Davidson:

But generative AI.

Adam Davidson:

Like my mom who's 84 says, I'm just not going to use it.

Adam Davidson:

And that's like a reasonable thing to say.

Adam Davidson:

I would expect by the end of:

Adam Davidson:

Another thing I'm hearing from you is that maybe the safest places to play, which is not to say totally safe, is that internal stuff.

Adam Davidson:

You get AI as an internal helper.

Adam Davidson:

Like we had a great talk with the people at the NBA team, the Portland Trail Blazers, who are running their AI adoption, and they're currently not using AI at all for customer-facing communication.

Adam Davidson:

That means social media, that means emails, but they're using it like crazy on internal work, including tracking customer support and other things.

Adam Davidson:

And that seems like maybe not for everybody, but that seems like the kind of thing you're talking about where you're not saying we're not going to use it, but you are putting a box around it as you get familiar with it, as you get to know it.

Adam Davidson:

Is that reasonable advice?

Angela Dunning:

Yeah, I think that's right.

Angela Dunning:

Again, every company is going to have a different set of use cases and a different risk tolerance, but avoiding the issue altogether and just saying don't use it isn't practical or likely to be adhered to.

Angela Dunning:

I will also say one more thing: I'll throw another risk at you based on what you just described.

Angela Dunning:

Many companies are using versions of third party trained generative AI models in house, right?

Angela Dunning:

You take in the model and you can fine tune it for your own purposes, whether that's to serve as a chat bot for your customers or for some other purpose.

Angela Dunning:

It is very important for companies to be aware of their obligations of disclosure to users and customers about how their data is being used.

Angela Dunning:

So one important point to keep in mind is that if your terms of use have a general provision that says customer or user data may be used to improve the company's products or services, that is likely not adequate notice if what you are doing with that data is fine-tuning or training an AI model.

Angela Dunning:

Companies would be well advised to look at their terms of service to make sure that they accurately disclose or reflect how AI is being used.

Angela Dunning:

And if they want to use customer or user data for purposes of training generative AI tools, they should strongly consider making that clear in their terms.

Adam Davidson:

Fabulous.

Adam Davidson:

All right, Angela, we could obviously talk for hours, and I hope we do over the next few months if that's okay.

Adam Davidson:

So thank you so much.

Adam Davidson:

This was fabulous.

Adam Davidson:

I'm grateful that you joined us for the FeedForward podcast.

Angela Dunning:

Adam, it was my pleasure.

Angela Dunning:

Thanks so much for having me.

Adam Davidson:

You can listen to other episodes of the FeedForward podcast.

Adam Davidson:

You're probably already subscribed if you heard this one, but if not, it's just FeedForward FM.

Adam Davidson:

Our member rules are that you can share FeedForward FM with anyone who works at your company.

Adam Davidson:

So not just the three active members, but anyone at your company.

Adam Davidson:

But we do ask that you not share it beyond your company.

Adam Davidson:

Please let us know what you think, what questions you want us to ask Angela, maybe suggestions of other folks you think we should talk to, other topics, et cetera.

Adam Davidson:

I'm Adam Davidson.

Adam Davidson:

I'm always on the Discord, or you can email me at adamdomdavidson.com.


About the Podcast

Feedforward Member Podcast
Feedforward is a member community for corporate leaders learning about AI.
Each episode dives deep into one company or one issue that will help executives make better decisions around AI.

About your host


Adam Davidson

Adam Davidson is a co-founder of Feedforward.

He also co-founded NPR's Planet Money and hosted a Freakonomics series on AI.

Adam was a business journalist for more than 30 years, working at NPR, The New York Times Magazine, and The New Yorker.