Hey guys, Andy here. It has been a bewildering week with regard to copyright and AI and, for once, I refer to real-life events and not those in the tech space. Starting with the firing of the head of the Library of Congress, events in both the UK and the USA show an increasingly clear divergence between subject matter experts and creative rights on one side and political will on the other.
On 8th May, the White House fired Librarian of Congress Carla Hayden, the first African American in the role, with no reason given in a two-line email. A day later the Copyright Office published a 108-page report offering guidelines on how US copyright law should apply to generative AI, in particular with regard to fair use. Less than twenty-four hours later Shira Perlmutter, the Register of Copyrights and head of the office, who was appointed by the Librarian of Congress, was herself also fired by the White House. The report is not law but it broadly reads in favour of authors and copyright owners. A breakdown by Aaron Moss, detailing “weight”, transformation, comparisons with human learning, and market saturation, can be found here.
It is worth noting that the Library of Congress, a research library, is the world’s biggest library, holding 180 million works and open to those aged 16 and over. The library holds books, films, audio recordings, photographs, newspapers, maps and manuscripts. The firing of both the Librarian of Congress and the Register of Copyrights is, according to the FT, “The exercise of arbitrary power, asserting control over knowledge.”
However, for all the support for authors that Perlmutter’s report offers (and at the time of writing the report remains extant), Republicans in the USA have submitted a bill blocking any state from enforcing any law or regulation targeting a range of automated computer systems for 10 years; this covers everything from chatbots to search results, from mapping systems to health care, from safety for minors to deepfakes.
In moves that mirror the USA, in the UK an amendment requiring tech companies to reveal which copyrighted materials were used in their models, passed by 272 votes to 125 in the House of Lords, was stripped from the government’s Data Bill by means of an “arcane parliamentary procedure”. It is a blow to the UK creative industry, with the country acting as a vassal of the USA, in effect a 51st state, to which the assets will be offshored and which will see the financial gains.
Why is this important, especially for authors and creators who do not fall under the jurisdiction of the USA? For rights holders, whether in the USA or not, the guidelines offer a detailed rebuttal to those who argue in favour of unauthorised AI training. This, in turn, is important due to the scale and value of what is at stake.
Despite legal opinion on both sides of the pond supporting transparency and regulation by subject matter experts, consistent with current legislation, lawmakers are barrelling towards abandoning the labour force of an entire industry and handing over the fruits of its labour, and potentially their nation’s cultural history, to a handful of tech oligarchs without recompense.
Stephen Klein, CEO and founder of Curioser.AI, has estimated that, “The fair market value of the content appropriated for AI training is conservatively estimated between $45 billion and $165 billion.” This appropriation of works by tech oligarchs, who then rent their services back to those they stole from, has been described by John McCormick, Associate Director of Learning Design at Brandeis University, as “Technofeudalism”.
Becca Caddy, freelance journalist and author, elaborates, “Because the truth is, we’re not just training machines. We’re training ourselves to accept a world where our most meaningful expressions become raw material for someone else’s profit.”
It is no longer hyperbole to believe unauthorised AI training could be devastating for individual creators, leading not only to job displacement but also to a devaluation of human artistic and intellectual labour. As Pascal Hetzscholdt, a senior director at publishing house Wiley, says,
“An over-reliance on AI for content generation also risks a homogenization of culture, where outputs become increasingly derivative of past works rather than reflecting genuinely novel human perspectives. Furthermore, a development ethos that sidelines copyright concerns may also neglect other crucial ethical considerations, leading to AI systems that perpetuate biases, compromise privacy, or are deployed in manipulative ways.”
The existential question, nervously laughed at and brushed off by some just months ago, is now pertinent. What is art?
andybarnham
I am a portrait photographer based in Cheltenham, UK. Born in Hong Kong to a Chinese mum and British dad, I had an international upbringing while being educated in the UK. I started photography as a hobby while serving as an officer in the British Army.
After my service I turned this passion into a career and became immersed in London's sartorial scene. I am now focusing my camera on portraiture, using an eye for detail refined over ten years. As a former Royal Artillery officer, it is only fitting that I shoot with a Canon camera.