Amid all of the amazing innovations going on right now, there’s a lot of concern about how to treat human “content.”

And before any rules get written, societies have to look more closely at the issue.

First of all, we have to define human content, and recognize how broad that category of information really is. You have creative works like songs, poems, and pieces of visual art. But you also have professional intellectual property – information about how someone does their job, about their routines, their strategies, and how they excel in their given role. Then you have their likeness – their characteristics, their voices, their faces, and their bodies – what makes them “them.”

All of this is personal data, and all of it should be protected. In fact, you can already see some canary-in-the-coal-mine cases around compliance with the European GDPR. So the issues of intellectual property and fair use in AI go far beyond what we are used to dealing with in the legal world.

With that in mind, here are three aspects of this debate that leaders are weighing as they try to figure out appropriate regulation for AI systems.


Influence and Ownership

One of the biggest questions is this: in all of this broad application of personal data, how do you know when a system has crossed the line from taking general influence to stealing content?

In other words, the owners or leaders of AI companies could argue that their systems are just taking information piecemeal from different places, even as the underlying intellectual property, whatever it is, is siphoned off into their systems. Some would argue that the lawsuits brought by The New York Times and others against OpenAI and other model companies represent exactly this kind of case.

I want to showcase part of a conversation that Chris Anderson recently had with Sam Altman of OpenAI, and I’ll be coming back to it, because the two really touched on all three of my categories here.

One part that’s relevant to influence begins with Altman describing how AI can take direct influence from a particular work, or coalesce something from broader training sets, and how it’s often hard to tell the difference.

Sam Altman: “If you can’t tell the difference, how much do you care?”

Chris Anderson: “So that’s what you’re saying — it doesn’t matter. But isn’t that, though, at first glance, just IP theft?”

The two then discuss how there may be more than one human source, and questions about how to divide up the money. Altman seemed to suggest that in his view, humans will still be at the center of the process.

The consensus seems to be that it’s hard to tell when a system crosses the line.

Here’s some additional thought on fair use in AI:

“The goal of (AI) training is to teach AI how to recognize patterns and generate outputs that mimic human creativity, which has worked to varying degrees of success,” writes Syed Balkhi at Copyrighted. “As I’m sure you can imagine, this crosses critical legal lines. Specifically, most people want to know if using copyrighted material to train an AI is an infringement or is fair use. AI advocates would say that training data is used in transformative ways: the AI does not reproduce the original content; rather, it abstracts patterns to create new works. However, critics say that even such indirect use is an exploitation of copyrighted material, especially when the original creators did not agree to the use of their works. This makes it a serious copyright infringement issue. Legal cases are just beginning to address these issues.”

Consent for AI Use

Another key issue is consent.

I’ve seen this happen with my own eyes: someone comes on stage with a new AI application and talks to the host about what it can do. The host wants a demo, so the presenter works up a little piece based on the host’s own data, and they have a laugh about how well the AI system does.

But once in a while, someone will turn to the purveyor of AI systems, and say, “you know, I never really gave consent for that.”

And that makes everyone stop and think for a minute: how do we enforce consent?

This is a question we have to be thinking about.

It Thinks You’re Good

Going back to the discussion between Altman and Anderson: there’s a piece where Anderson brings up the movie “Her” and talks about an AI reviewing someone’s work, deciding it is good, and influencing that person’s decision to bring their artistic efforts to the world.

This is something that creative people can be pretty excited about. It seems counterintuitive, and sort of strange, that an AI could succeed in promoting an author, artist, or musician where humans have failed. But it makes a kind of odd sense as well. If we understand AI as an agent for our collective consciousness, drawing on the information on the Internet, then we would value its evaluations of anything, whether that’s recommendations of previously published content or enthusiasm for a new, unpublished work.

And then there’s AGI. Without going directly back to the above conversation, I came up with two themes that Altman and Anderson discussed, and that I’ve heard elsewhere, characterizing what these systems – systems endowed with a form of artificial general intelligence – will be able to do:

1. Do my job

2. Do stuff for me

So first, AGI agents would be able to mine your professional data and replicate your role in your company or business, even if you’re a strategy consultant or leader – maybe especially if you’re a strategy consultant or leader.

The second one is a little more straightforward. AI will be doing tasks for us that we don’t want to do ourselves. And that covers a broad range of tasks, from using a robot to do the dishes, to writing a report or drafting a grant application.

This is all a lot to think about, not just in terms of job displacement, but in trying to create a regulatory regime for something brand new. Let’s keep thinking about it together.