Key Takeaways From the Copyright Office's Report on the Economic Impact of AI

In February 2025, the U.S. Copyright Office released a report titled “Identifying the Economic Implications of Artificial Intelligence for Copyright Policy.” Edited by Brent Lutes, the Office’s chief economist, the volume represents the collective insight of an ad hoc committee of economic scholars tasked with “identifying the most consequential economic characteristics of AI and copyright, and what factors may inform policy discussions and decisions.”

Rather than contributing to alarmist narratives about the demise of human creativity, the report situates the discussion within well-established economic frameworks of supply and demand, incentive structures, and cost-benefit analyses. Viewing AI through an economic lens allows us to move beyond legal technicalities to examine the real-world impacts on creative markets and incentives.

These are the measured musings of social scientists — and it’s very clear we’re still on step one of the scientific method. That is to say, you won’t find any solutions here. Throughout the 50+ pages of the report, the authors repeatedly acknowledge (no fewer than 22 times) that “empirical research is still needed” before any definitive conclusions can be reached. While the technology itself is developing at warp speed, its impact on the marketplace of human creativity is an ongoing experiment unfolding in real time, with economic and creative implications that cannot yet be fully quantified or predicted.

So, what indicators do these scholars suggest policymakers should monitor as AI transforms creative industries? Here are the highlights:

AI Giveth and AI Taketh Away

First question: will AI eliminate the need for artists? Generative AI is getting better at creating original music, drawings, and literature every day, with output that is increasingly indistinguishable from human-generated work. And because AI doesn’t need to eat, sleep, or pay rent, the concern is that a flood of machine-made works would divert revenue away from human creators.

Indeed, this reasoning is part of why the Copyright Office has been so steadfast in asserting that AI-generated outputs should not receive copyright protection. If the goal of copyright is to incentivize human creators, sending money to the machines is not the way to go.

But the economists offer a counterpoint. AI-generated works may well create a substitution effect, they acknowledge, but that impact could be offset by AI’s role as a productivity-enhancing tool that shifts the supply curve for human creativity. By enabling creators to produce higher-quality content more efficiently across various media, AI functions much like previous technological disruptions such as digitization. The analysis suggests that copyright policy needs a nuanced approach, one that recognizes AI as both competitor and facilitator in creative markets and that could rebalance the economic landscape for human creators despite their initial competitive disadvantage.

This pattern of creative disruption is hardly unprecedented. The report draws a compelling parallel to the invention of photography. “The new device put many portrait artists out of business, pushing them to other occupations,” it says. “At the same time, cameras facilitated new forms of creative expression and paved the way for further innovations like cinematography. The camera, despite its displacing effects, clearly led to advancements in the production of creative works and generally improved social welfare.”

The Protection of Publicity Rights from Coast to Coase

This section of the report, written primarily by Shane Greenstein, considers rights of publicity (name, image, likeness, or “NIL” rights) in the context of AI, focusing on non-nefarious commercial uses rather than the unambiguously deceptive “deepfakes.” Applying Coasean property rights theory, Greenstein analyzes whether these rights should be assigned to individual subjects or to the public, noting that while rights should theoretically flow to their highest-value user through negotiation when transaction costs are low, NIL contexts often involve significant and asymmetric transaction costs.

The analysis compares scenarios ranging from voice actors (where straightforward contracting can work efficiently) to crowd scenes using composite AI faces (where individual negotiations become virtually impossible). Rather than imposing a one-size-fits-all solution, Greenstein suggests that optimal policies might need to be context-specific. He also presents a rather pessimistic take on the potential introduction of federal legislation to supplant the current patchwork of state publicity laws. While artist advocates have relentlessly pushed for federal harmonization of these laws, Greenstein warns that uniformity will not resolve the underlying economic tensions between competing interests or the practical challenges of enforcement. Instead of providing the clear-cut framework advocates desire, federal legislation would still need to navigate the same difficult trade-offs between creator protections and public access, potentially introducing new ambiguities rather than eliminating them.

Unlimited Problems of Unbounded Models

“Unbounded” AI models — systems trained on vast, diverse datasets containing billions of works from countless creators with no clearly defined boundaries — present significantly greater economic challenges than their bounded counterparts. While courts deliberate whether “AI ingestion” constitutes fair use, securing licenses for unbounded models continues to cause heartburn for content providers and developers alike.

Policy approaches involve significant tradeoffs: unlimited access to copyrighted materials might accelerate AI innovation but undermine artistic incentives, while strict protections could preserve authors’ rights but hamper technological progress and potentially favor wealthy incumbents who can afford licensing.

The most immediate practical hurdle is royalty distribution. When an AI system trains on billions of creative works, determining each creator’s fair share becomes administratively overwhelming. This creates what experts call the “penny problem” — situations where processing a payment costs more than the payment itself, resulting in an inefficient system that fails to adequately compensate contributors despite using their work.
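To see why the math breaks down, consider a minimal sketch in Python. All of the figures here (the size of the licensing pool, the number of works in the corpus, and the cost of processing a single payment) are hypothetical assumptions chosen for illustration, not numbers from the report:

```python
# Hypothetical illustration of the "penny problem" in royalty distribution.
# The pool size, corpus size, and processing cost below are assumptions
# made for this example, not figures from the Copyright Office report.

LICENSING_POOL = 100_000_000       # total annual license fees paid by AI developers (USD)
WORKS_IN_CORPUS = 1_000_000_000    # copyrighted works in the training data
PROCESSING_COST = 0.30             # administrative cost of issuing one payment (USD)

def per_work_royalty(pool: float, works: int) -> float:
    """Equal pro-rata share of the pool for each work in the corpus."""
    return pool / works

share = per_work_royalty(LICENSING_POOL, WORKS_IN_CORPUS)
print(f"Each work earns ${share:.2f} per year")              # $0.10
print(f"Issuing that payment costs ${PROCESSING_COST:.2f}")  # $0.30

if share < PROCESSING_COST:
    print("Penny problem: paying the royalty costs more than the royalty itself.")
```

Under these assumed numbers, every individual payout is underwater before the creator sees a cent, which is exactly the administrative inefficiency the report flags.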

Compulsory licensing offers a potential middle path by addressing the prohibitive transaction costs. Similar to mechanical royalties — where artists can record cover versions by paying standardized fees instead of negotiating directly with songwriters — compulsory licensing for AI would establish predetermined rates for using copyrighted works in training data. This approach could democratize AI development beyond tech giants and create a more level playing field for smaller developers.

Critics argue that compulsory licensing’s predetermined royalties fail to reflect the varying value different works bring to specific AI applications, treating unique creative contributions as interchangeable commodities. If authors and artists perceive their work as systematically undervalued in AI training datasets, it could diminish their motivation to produce high-quality content, potentially undermining the very creative ecosystem upon which AI systems depend.

Socioeconomic Impacts

The potential displacement of human authors is more than an economic concern. Catherine Tucker, one of the report’s contributing authors, also raises concerns about cultural homogenization as AI systems, trained on existing data, produce works that reflect and potentially amplify existing biases and trends. The result might be a less diverse, less innovative creative landscape.

Tucker outlines four key sources of data distortion: variations in creators’ financial resources, differences in copyright protection status, uneven digitization across cultures, and disparities in data availability. These factors collectively skew AI training toward privileged perspectives — favoring public domain works (pre-1929), open-access content from creators with alternative income sources, already-digitized materials that reflect historical power structures, and large-scale datasets with concentrated rights ownership like Reddit (whose user base represents only 11% of U.S. adults and skews male).

Furthermore, when creators choose to protect their copyright by restricting AI access to their work, they may inadvertently write themselves out of history as AI increasingly shapes cultural production and preservation. This creates a troubling paradox: artists must either surrender their creative rights for inclusion in the AI-generated canon or maintain their rights at the cost of cultural erasure and diminished influence.

The economists warn that without careful policy intervention, these biases could reinforce existing inequalities, diminish diverse perspectives, and potentially erase certain cultural heritages from AI-generated content. This erasure would particularly impact indigenous cultures, non-English works, and artists from less-privileged backgrounds who depend on commercializing their art for survival.

Finding Balance

The report functions as a sophisticated economic scale — weighing potential harms of AI against its benefits. Rather than seeking to eliminate all negative impacts, the economists aim to identify policies that would tip the balance toward net positive outcomes.

As the line between human and AI creation grows increasingly blurred, our legal frameworks must evolve thoughtfully. Finding the right balance requires careful consideration of competing interests and a clear-eyed assessment of practical realities. The goal isn’t to simply maximize AI development or to absolutely protect copyright; it’s to create a system where both can flourish sustainably.