
Responsible AI in the Arts: How Creative Disciplines Are Shaping AI Developments Everywhere

Responsible AI Institute

A little over a year ago, “Heart on My Sleeve” achieved internet virality for mimicking the vocal styles of Drake and The Weeknd, without either artist ever recording the song. The individual behind the track used computer-generated versions of these musicians’ voices to create a tune that sparked widespread controversy. At first, there was confusion over whether the track was eligible for Grammy consideration, and musical artists across the country responded with outcry over how their talents could be used without permission for others’ gain. Since then, incidents involving the nonconsensual use of human artists’ creative works by AI systems to generate content have continued to emerge, underscoring the need for adequate guidelines to address these developments.

Consent to use artist-made video, audio, design, and written materials for AI training has become a complicated question in the U.S., one that manifested acutely this past year in American courts. Specifically, advancements in generative AI (GenAI) are introducing copyright infringement issues in creative disciplines that may change how we think about artistry and IP laws. Regardless of how these court cases are settled, the approach we take to addressing AI-related concerns in the creative arts will have far-reaching implications beyond these disciplines.

In this blog, I cover why we need a set of responsible AI practices specific to individual artistic fields, drafted jointly by artists and responsible AI advocates. These practices should guide art usage in the advancement of GenAI and inform how AI systems are developed across these domains. The issues emerging in creative fields are not occurring in a vacuum, and they will influence machine learning guardrails across society. With that in mind, this blog addresses the following topics:

  • Challenges arising from AI in the creative arts
  • How these issues are shaping our understanding of responsible AI
  • What these developments mean for the larger AI community
  • What we should do about it
  • Our call for collaboration on these challenges

What Are the Challenges Arising from AI in the Arts?

GenAI is becoming increasingly common in product development, and the creative arts are adopting these tools as well. However, lingering legal ambiguity surrounds GenAI’s use in the arts, particularly regarding copyright infringement from unlicensed content in training data and the ownership of AI-generated works. For instance, if a GenAI model owner used their AI system to create a movie with machine-made scripts, music, and visual effects, it is currently unclear whether this GenAI film could be copyrighted by the model owner. It is equally unclear whether the owner could be sued for patent, trademark, or copyright infringement, depending on the artistic resources used to train the model.

To navigate these questions, U.S. courts are attempting to establish how intellectual property (IP) laws should be applied to GenAI content, and several cases under review right now could produce competing rulings. While these cases primarily turn on how “fair use” doctrine and “derivative works” are interpreted, their larger outcome may be the conclusion that AI applications introduce a legal dynamic that existing IP laws cannot capture. Given the resulting patchwork of court responses, Congress may face pressure to draft a comprehensive AI package that clarifies IP rights for AI systems.

So far, the majority of U.S. court cases addressing these AI-driven issues in the arts relate to four central disciplines: written works (including fiction, non-fiction, and journalism), visual arts, photography, and music. The largest concentration of cases focuses on written works, since most GenAI applications are text-based. These cases highlight the tension between AI developers, who train models on copyrighted material or journalistic content, and the rights holders who never granted permission. Some of the major cases include Alter v. OpenAI, New York Times v. Microsoft, The Intercept Media and Raw Story Media v. OpenAI, Center for Investigative Reporting v. OpenAI, Daily News v. Microsoft, Kadrey v. Meta, and Leovy v. Google, among others. The initial takeaway from these cases seems to be that AI companies must obtain proper permissions to use written materials and follow copyright laws when training their models.

Additionally, the Andersen v. Stability AI et al. and Thaler v. Perlmutter cases show how these issues relate to the visual arts. These cases highlight the need for transparency when artistic works, like paintings or graphics, are included in training datasets, and they illustrate the lingering ambiguity over how authorship should be bestowed on model developers in a responsible way across the AI ecosystem. If a model is trained on thousands of artists’ paintings or unprotected digital designs, is it fair, ethical, and legal for the model owner to then copyright the products of that model? Right now, we don’t have a comprehensive legal answer to this question in the U.S.

Photography is also being affected by AI-related litigation, with “fair use” doctrine at the center of cases like Getty Images v. Stability AI and Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith. Under Section 107 of the Copyright Act, “fair use” is a doctrine designed to protect freedom of expression by permitting the unlicensed use of copyrighted works under select circumstances. Typically, “fair use” allows exceptions to copyright for educational purposes or when only small portions of an owner’s works are used, while creative expression and imaginative works are unlikely to qualify for such exceptions. Whether this stipulation applies to GenAI model training will depend on future U.S. jurisprudence; GenAI developers could argue for education-based copyright exceptions to using artworks in their training data, but upcoming adjudications will determine the legal strength of this argument. A related issue is being raised by concerned photographers: whether AI-generated photos should receive the same type of copyright protection and accreditation as physically photographed images. In his book, After Photography, Fred Ritchin wrote that “photography’s strongest suit [is] its ability to see in ways that humans cannot, and to find emerging rhythms of life that people may sense but cannot focus upon.” Yet U.S. courts are now deciding the extent to which AI can legally serve this purpose in photography, which may degrade human conceptions of reality.

Finally, GenAI is affecting the music industry, as showcased in the Concord Music Group, Inc. v. Anthropic PBC case and in public controversies where artists’ voices were used in songs like “Heart on My Sleeve.” One issue raised in the Concord case is whether “derivative works” include GenAI outputs from models trained on other artists’ lyrics. Often referred to as the adaptation right, the “derivative work” right refers to the legal right of individuals in the U.S. to copyright something that incorporates some or all of a preexisting work. When an AI model is trained on the vocal talents or lyrics of artists without their permission, it introduces uncertainty over who should receive royalties and authorship rights.

How Are These Issues Impacting Our Conceptions of Responsible AI?

Beyond the IP concerns mentioned above, the questions of assigning ownership to AI-generated works and of replacing human artistry with machine-produced art are shaping how we broadly think about responsible AI. In the Thaler v. Perlmutter case, mentioned in the visual arts discussion above, the U.S. Copyright Office (USCO) refused to register an AI-generated work, and that decision was upheld by the U.S. District Court for the District of Columbia. Because the USCO requires human authorship to bestow IP rights on a work, it remains questionable whether artists can legally receive credit and residuals from partially AI-generated content.

In the future, artists using GenAI systems in creative disciplines may receive rewards for their work if they can prove a portion of human authorship or disguise the amount of AI-generated material included in their work. Under these circumstances, unassisted artists may be put at an economic disadvantage relative to artists who use algorithmic processing, which could unfairly undermine the value of authentic human artistry. Imagine a scenario in which an AI developer has trained their model to produce songs tailored to the artistic preferences of Grammy judges, based on a database of music that previously received awards. Should this developer be given the same award consideration as a singer-songwriter who did not have access to these tools? Regulatory frameworks may need to shift to address such scenarios, which are increasingly becoming reality.

Additionally, in 2023, when the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) and the Writers Guild of America (WGA) went on strike to protest the growing use of GenAI in media studios, the public rallied behind protecting human artists and their likenesses. A major issue at the center of these strikes was whether studios should cut costs by replacing the work normally produced by writers and actors with GenAI content. These events showcased a growing perception that responsible AI practices may require our societies to protect human employment in certain fields over AI use cases, along with our ability to produce artistic media that we find enriching. This topic is highly contentious, because businesses do not always share the prerogative to protect human employment and creativity, but it could determine the longevity of artistic disciplines. At the Responsible AI Institute, we believe that our conceptions of ethical AI should evolve as new AI applications change human livelihoods. However, what responsible AI practices should look like in these disciplines is yet to be determined.

What Do These Developments Mean for the Larger AI Community?

Organizations that use AI systems trained on publicly available data could experience significant disruptions, depending on how U.S. courts rule on IP rights in the above cases. Companies that use GenAI products must ensure their training data is free of unlicensed content and obtain explicit permissions to avoid legal penalties for breaching IP laws. Some companies are choosing to completely restrict employee access to GenAI applications to avoid these penalties, especially when creating marketing materials; if a company distributes unlicensed AI-produced video, audio, image, or text-based content, it could face steep fines. Moreover, these developments could mean that training data used in GenAI systems is predominantly sourced from countries with weak copyright laws, which would overrepresent the perspectives and creative styles of communities from those countries in AI training data. For better or worse, American datasets may be used less often in AI training if companies do not want to compensate the artists behind the creative works.
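To make the permissions point above concrete, here is a minimal sketch, in Python, of license-aware filtering of a training corpus. The record schema, the `license` and `consent_obtained` fields, and the permitted-license list are all illustrative assumptions for this blog, not a reference to any real pipeline, and a metadata check is of course no substitute for legal review.

```python
# Minimal sketch of license-aware dataset filtering (assumed schema, not a real pipeline).
from dataclasses import dataclass

# Hypothetical allow-list; an actual list would come from legal counsel.
PERMITTED_LICENSES = {"CC0", "CC-BY", "CC-BY-SA"}

@dataclass
class TrainingRecord:
    content: str
    license: str            # e.g., "CC-BY" or "all-rights-reserved"
    consent_obtained: bool  # explicit permission from the rights holder

def filter_training_data(records):
    """Keep only records under a permitted license or with explicit consent."""
    return [
        r for r in records
        if r.license in PERMITTED_LICENSES or r.consent_obtained
    ]

if __name__ == "__main__":
    corpus = [
        TrainingRecord("public-domain poem", "CC0", False),
        TrainingRecord("news article", "all-rights-reserved", False),
        TrainingRecord("licensed lyrics", "all-rights-reserved", True),
    ]
    usable = filter_training_data(corpus)
    print(f"{len(usable)} of {len(corpus)} records cleared for training")
```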

Also, the issue of GenAI displacing human artists with machine-generated content has implications beyond the arts. Emerging AI regulations touch upon how AI systems may impact a person’s employment opportunities and attempt to limit the degree to which such impacts are permissible. For example, the Colorado AI Act stipulates that AI systems which significantly affect a Colorado resident’s employment opportunities are considered high-risk and must meet strict requirements. In the future, “high-risk” categorizations of AI systems may be expanded to include applications that deny human creators the opportunity to own and continue producing their own artistic works, which would impact the use and development of GenAI in several sectors.

What Should We Do About It?

To protect the endurance of human artistry and avoid the displacement of human work, responsible AI principles need to be integrated across artistic disciplines and GenAI development. Based on existing AI governance standards, we know that AI systems built using human works should be transparent and accountable to instill trust, such that information about the system and its outputs is available to individuals interacting with the system. We also know these systems should be privacy-enhanced, incorporating practices that safeguard human autonomy, identity, and dignity and that emphasize an individual’s right to consent. What we don’t know is how these principles should be operationalized in specific artistic disciplines.

The responsible AI field needs to expand and draft AI best practices informed by artists across written fields, the visual arts and film, photography, and music. AI governance frameworks contain significant gaps on what responsible AI should include in these disciplines and how those practices relate to GenAI, in part because the arts are often sidelined in conversations on AI’s impact. Still, there are emerging examples of how these gaps can be filled, such as the Archival Producers Alliance’s “Best Practices for Use of Generative AI in Documentaries,” released in September 2024. These guidelines are among the first of their kind for the arts and review how filmmakers should responsibly use primary sources with AI applications.

The good news is that artists don’t have to start from scratch; responsible AI is a growing movement, with many organizations collaborating on best practices and principles that can support groups in artistic disciplines and GenAI. At the Responsible AI Institute, we invite collaboration on sector-specific AI guidance to improve the trustworthiness of AI systems everywhere. We also encourage groups to think creatively about what responsible AI can look like – from creating datasets of artistic content for IP protection, to maintaining the provenance of AI-generated content (see the sketch below). Expanding responsible AI to address issues in the arts will be an iterative process, but we firmly believe in the importance of this goal. Fred Ritchin accurately predicted that “our media, in the digital environment, will profoundly and permanently change us – our worldview, our concept of soul and art, our sense of possibility.” While AI systems are changing how humans interact with the arts, we can protect human creativity and autonomy with responsible AI practices.
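As one hedged illustration of the provenance idea mentioned above, the Python sketch below hashes a generated artifact and records basic generation metadata. The manifest fields are assumptions for illustration only; real provenance efforts would more likely build on an established standard such as C2PA content credentials rather than an ad hoc format.

```python
# Minimal provenance sketch (illustrative only, not an implementation of any standard).
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_manifest(content: bytes, model_name: str, source_notes: str) -> dict:
    """Record a content hash plus basic generation metadata for an AI output."""
    return {
        "sha256": hashlib.sha256(content).hexdigest(),  # ties the manifest to the exact bytes
        "generator_model": model_name,                  # which system produced the work
        "source_notes": source_notes,                   # e.g., licensing status of training data
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    artwork = b"...generated image bytes..."  # placeholder content
    manifest = build_provenance_manifest(
        artwork,
        model_name="example-image-model-v1",  # hypothetical model name
        source_notes="trained on licensed and public-domain sources only",
    )
    print(json.dumps(manifest, indent=2))
```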

Our Call for Collaboration 

Want to get more insights and support the creation of AI best practices for artistic disciplines? We invite you to fill out this contact form to express your interest in participating in our upcoming 2025 initiative on the creative arts. We are assembling working groups across artistic fields, and we aim to bring together diverse artists, AI practitioners, IP experts, art-led collectives, and responsible AI (RAI) advocates interested in shaping how RAI can be integrated into these disciplines and GenAI broadly. Ultimately, we hope to better protect the longevity of human artistry and avoid the mass displacement of human work by AI technologies. For questions or more information, please contact our AI Policy Analyst, Sez Harmon, at sez@responsible.ai.

About the Responsible AI Institute

Founded in 2016, Responsible AI Institute (RAI Institute) is a global and member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI assessments, benchmarks and certifications that are closely aligned with global standards and emerging regulations.

Members include leading companies such as Amazon Web Services, Boston Consulting Group, Genpact, KPMG, Kennedys, Ally, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.

Media Contact

Nicole McCaffrey

Head of Strategy & Marketing 

Responsible AI Institute

nicole@responsible.ai 

+1 (440) 785-3588

Find Responsible AI Institute Here:

RAI Hub

LinkedIn

YouTube
