AI, Music, and Copyright: Where Things Stand (briefly).

Semi-briefly.

Obviously enough, especially if you pay attention to copyright blogs (say, because you're a forensic musicologist), generative AI continues to blur the edges of authorship, originality, and the boundaries of copyright law. The headlines arrive daily from, again say, law professor Edward Lee's superb chatgptiseatingtheworld Substack, which tracks every single case out there. Through it all, though, the legal landscape, especially where music copyright is concerned, remains largely familiar. So we can indeed cover it fairly briefly.

The same basics have guided music copyright disputes for decades, and they will continue to do so regardless of the tech: a clear understanding of substantial similarity, protectable expression, and how courts treat evidence. That said …

Training vs. Output: Phase one and phase two?

Across the current wave of lawsuits, the critical distinction remains the same:

Using copyrighted works to train a model is treated as one question; evaluating similarity in an AI-generated output is treated as another. And we're very much stuck on the first for now.

Courts have not yet decided whether model training is infringing. My money is on "it's not," or at least "not very." But I'm still persuadable otherwise, and with fifty different lawsuits out there, why would we expect the rulings to be aligned or clear? I do expect this much to be clear, though: when an AI system produces an output, the analysis will default to the traditional copyright framework of protectable expression and substantial similarity.

Nothing about machine learning alters that. It can’t.

Patience is a virtue, or something.

We may have to take the long road. Maybe that's best. When I see where things are going, I question the efficiency of starting from square one and insisting that square two be the next step. Recently, I wrote an analysis of the Ninth Circuit's revival of Ambrosetti v. Oregon Catholic Press, a decade-old dispute, not because of the musicological merits, which I find lacking, but because of the procedural wrangling. That piece is here: detailed analysis of Ambrosetti.

In that article, I noted that "the similarities are trivial, so access is irrelevant." (One of the musicologists in the case predictably argues otherwise, but they're wrong.) And I went on to complain that copyright cases often drift into process over substance; Ambrosetti shows you can revive a lawsuit without reviving a viable claim. The access phase gets a new look, thereby reviving the should-be-doomed-anyway extrinsic-test analysis. Here, again, I'm open to being persuaded that this is how it needs to go and not a waste of time. Still, when courts fixate, rightly or wrongly, on threshold questions and allow weak similarity claims to advance, the entire ecosystem absorbs the cost.

In all these AI litigations, the first phase has consumed all the energy so far.

The state of play, real quick:

1. Training is being evaluated through fair-use principles.

Is model training "transformative"? Does it create market substitution? Courts are testing these questions, but there are no final answers yet.

2. Output liability remains conventional.

If an AI-generated musical work resembles a copyrighted piece in a protectable way, the analysis is the familiar substantial similarity test. No new doctrine has replaced it, nor can I imagine what might.

3. Very generally, some courts, at least, seem to be losing patience with speculative harm.

I'm showing my hand a little here, but courts are increasingly expecting concrete, not hypothetical, examples of copying or market displacement.

For Artists and Rights-Holders: What’s so new about any of this?

  • AI outputs can infringe if they copy protectable expression.
  • Training may be infringement; that question is undecided and is the battleground for now. Has been for a while. Hey Polymarket, make a market; I'll vote with my dollars.
  • Creators are integrating AI tools into their workflows, and would do well to. That's a good place for this energy.

What This Means for the Coming Years

Eventually, we'll have a lot of music that we know was trained on existing music. Is that new? No. In some sense, I'm training on existing music nearly every time I sit at a piano. But an algorithm doesn't sit at a piano. So it's different. Or not. It could be a better mousetrap; good old competition.

For the AI Moment

The law is unchanged for now. These cases are young. And substantial similarity may yet be the only test that resolves anything.

For now, we're in the procedural, early, "isn't this just plain wrong?" phase, but the real stuff lies ahead.
