Navigating the Intersection of Art, AI, and Ownership in Web3
It's a tale as old as time: new inventions, new fears. Artists are at the vanguard of emerging tech – confronting biases, navigating authorship, and asking what it means to be human in an age of AI.
The emergence of AI has forced us as a society to reflect on what it means to be human and, ultimately, on whether we believe humans are inherently good or evil. Artists are leading the charge in exploring the new medium, raising questions about human creativity, representation, and ownership. In a Digital Art Week panel discussion entitled “Diverse Perspectives on AI”, NFT community World of Women brought together artists, activists, and industry specialists to shed light on the complexities of navigating the multitude of contradictions exposed by AI.
Human vs machine: the role of AI in artistic expression
While society has a long record of backlash against new technologies – painter Paul Delaroche famously declared “from today, painting is dead!” upon seeing his first photograph – history has shown us that the creative process evolves and adapts, reacting to and eventually encompassing new inventions. As with its predecessors, AI can’t be uninvented, so we’re forced to adopt a position.
Many creatives have spoken out against the dangers associated with AI, with some filing legal challenges. The panellists at the “Rewriting the AI Rulebook for Artists” event were much more optimistic. In their view, AI is an opportunity – much like any other tool or medium, it’s a canvas upon which creators can project their visions and engage with the complexities of our lived reality. But whether they see AI as a collaborator or a threat, the panellists were unanimous in the sentiment that machines will never replace humans.
In other words, the machine is a dynamic medium through which we as individuals can explore the interplay between humanity and technology. Artists can harness the capabilities of AI to augment their own practice, leveraging its production speed to explore new forms of expression and, ultimately, push the boundaries of art.
Addressing biases in AI-generated art
Panellist Victoria Emslie, actor and disability activist, praised AI’s role in offloading distracting menial work, especially for neurodiverse individuals, but equally cautioned against homogenous inputs amplifying pre-existing biases. Echoing Holly Herndon’s observation that “you are what you eat”, Emslie shared details of her work examining depictions of disability in high fashion, for which generative AI failed to produce believable images. One year later, despite societal and technological progress, the same experiment yielded disappointingly similar results.
The root of the problem is that, as machines learn from vast amounts of data – images, text, video, and other media – they inevitably inherit the biases encoded within those datasets. Artists working in the field consciously subvert these assumed realities by introducing diversity. Projects like Jake Elwes’ seminal “Zizi – Queering the Dataset” aim to disrupt mainstream representation by introducing underrepresented identities and experiences into machine training data. By deliberately injecting diverse material into the learning models, even to the point of oversaturation, artists can disrupt dominant narratives and foster inclusion.
The pervasive nature of biases embedded within algorithms poses significant challenges. Aarti Samani, another panellist, argued that the only way to combat them is through conscious effort – by accepting the obligation to provide feedback on AI outputs. In her work, she explores bias correction algorithms to mitigate the impact of bias. By incorporating principles of fairness and equity into the design and implementation of machine learning systems, we shape a more ethical, socially responsible future.
Moderating AI
But who’s overseeing the whole process? The challenge is that there’s no singular consensus and regulation, as usual, lags behind innovation. One way regulatory bodies are attempting to confront obvious biases and problematic AI outputs is through censorship. The problem is determining who decides what should be censored – in other words, how can we prevent harm without stifling expression?
Commercially available generative AI attempts to sidestep the issue of problematic output by banning certain words from prompts. Artists like Stephanie Dinkins [challenge the refusal of Midjourney to depict slave ships](https://www.nytimes.com/2023/07/04/arts/design/black-artists-bias-ai.html) by asking, “what is this technology doing to history?” Similarly, the anonymous artist OONA, speaking on a later panel, mentioned an artist who explores the role of coded language in subjugated communities by using queer vernacular to bypass content moderation.
These examples demonstrate that no matter how familiar AI becomes with what we as a society deem acceptable, a human element will be required to prevent genuine harm. It’s a topic being tackled from various perspectives – the creation of consensual, ethical training algorithms via exactly.ai, or the consent layer introduced to AI training by organisations like Holly Herndon and Mat Dryhurst’s Spawning, come to mind – and will continue to be hotly debated as the technology advances and becomes available on a mass scale.
Collective consciousness vs ownership
The artist’s relationship with AI raises another conflict, namely that between AI’s “collective consciousness”, as Refik Anadol termed it, and individual authorship. As artists pursue greater autonomy in their work, often by migrating to web3 – the third iteration of the internet, itself an emerging technology – they encounter the paradox of establishing demonstrable ownership over a collectively generated output.
At the heart of this tension is the concept of provenance. Experiments like Holly Herndon’s Holly+, an AI-trained voice model based on Herndon’s own, and Mario Klingemann’s Botto, an algorithm positioned as an “autonomous artist”, illustrate creators’ desire for attribution. Both are governed by a decentralised autonomous organisation (DAO), though in the case of Holly+, Herndon holds veto rights.
The legal issues surrounding AI algorithms trained on publicly available data are a primary cause for concern. In the “black box” that is machine learning, how can an artist know that their copyrighted work isn’t being used to train algorithms? One solution being explored is anchoring AI-generated artworks within a blockchain-based framework of ownership. Beyond the practical benefit of maintaining a record of provable ownership, preserving authenticity via immutability establishes trust and credibility, appealing to the principles of transparency heralded by a new generation of artists.