Future of AI Q2 2024

Lessons on responsible use of generative AI from industry heavyweights


Rebecca Finlay

CEO, Partnership on AI

Amidst the rapid evolution of generative AI technologies, global policymakers are racing to regulate potential risks. Learn how collaborative, voluntary efforts in business, media and civil society can inform both policy and practice.


Ensuring safe and responsible AI has always been the mission of Partnership on AI (PAI). Years of work on deepfakes and other synthetic media have made one thing clear: any guidance or regulation on how to build, create or distribute AI-generated audio, video and images will struggle to keep pace with the field’s development.

Magnitude of synthetic media governance

To appreciate the urgency of the issue, consider the plethora of AI image-generation apps or presidential election deepfakes. To fill the guidance gap, PAI created its framework on Responsible Practices for Synthetic Media, following consultation with companies, media and civil society organisations across the generative media ecosystem.

Additionally, we published an unprecedented body of work showing how organisations address issues such as transparency, digital dignity, safety and expression. Media organisations like the BBC and CBC, technology companies, AI startups and more contributed case studies offering an inside look into how synthetic media governance can be applied, augmented, expanded and refined for use in practice.

Enhancing institutional transparency

The transparency of our case studies helps ensure accountability in the responsible creation, distribution and use of synthetic media. The cases highlight institutional practices and tactics for addressing questions such as how to build disclosure mechanisms and how to develop policies that prevent misuse. Policymakers and other actors clearly need access to this information.


The complexity of governance recommendations

The diversity of use cases for synthetic media made our goal of producing clear and tangible governance recommendations much more difficult; our ecosystem-wide approach added further complexity.

The cases touched on vast societal dynamics, from freedom of speech and the meaning of harm to transparency, creative endeavour and consent. Each of these topics warrants its own dedicated analysis.

Guiding synthetic media governance

In direct response to the rapid pace of AI development and the lessons learned from reflecting on these cases, we will continue to iterate on the framework to advance better organisational practice. Government regulation and policy are key complements to the Synthetic Media Framework and our governance activities at PAI.

Consent, transparency, support for creative expression and harm mitigation are central to synthetic media policymaking. These practices, which have been tested in the field, can provide a foundation to inform regulatory momentum.
