[This post first appeared in Photo AI, my newsletter about how computational photography is fundamentally changing photography as we know it. Subscribe here, or view the archives at Photo-AI.com.]
When I was making AI-generated images using Adobe Firefly for last week’s newsletter, I was struck not only by how good some of the results were, but also by a surprising feeling: creative ownership.
Now, that doesn’t make much sense.
On the surface, generative AI involves very little creative “work.” You type a phrase describing what you’d like to see, and after a few seconds or minutes, the service generates a set of possibilities that attempt to match the description. It doesn’t matter whether I’m a good photographer, or great at Photoshop, or see light differently than people who aren’t photographers or artists. By many empirical standards, the result is better than what I could make without the software’s assistance, particularly given the minimal effort involved.
I typed “porcupine reading a book at the base of a tree blooming cherry blossoms in spring,” which resulted in an image that I immediately fell in love with. It made me want to know more about why this particular porcupine is hanging out there and what he’s reading. It immediately evoked a story I wanted to follow.
I also created a striking portrait of a woman by typing “young black bald woman in a black tank top portrait bokeh at night in front of a neon sign _photo”. (The underscore denotes one of the options Firefly presents, in this case specifying a “photo” image instead of an “art” image; when you save the image to disk, the filename includes the entire text prompt.) I really like how the image turned out—weird disconnected earring notwithstanding—and think about what it would take to shoot this portrait in the real world, with its challenges of setting up lighting and location and coordinating with a model.
When the result appeared, I felt a sensation similar to the one I get when I run across an image in my library where all the elements come together to make a good photo. I got that “good job, Jeff” feeling that is often fleeting for photographers (even the ones not named Jeff).
Is having a sense of creative accomplishment even fair when talking about GenAI? Did I really create those images? It’s a murky question. The US Copyright Office has weighed in a few times so far. In February 2023, it denied copyright to the Midjourney-created images in a graphic novel, but affirmed copyright for the text. In March 2023, the office elaborated on its stance and signaled that further public guidance was coming. It also stressed the importance of human-created artworks (emphasis mine):
In the Office’s view, it is well-established that copyright can protect only material that is the product of human creativity. Most fundamentally, the term “author,” which is used in both the Constitution and the Copyright Act, excludes non-humans. The Office’s registration policies and regulations reflect statutory and judicial guidance on this issue.
This is a legal interpretation, which I feel runs alongside the bigger question: why did I feel as if I had created those works?
I think some of it is appreciation that Firefly produced aesthetically pleasing results. I get the same feeling when I view other people’s photos that I like in the Glass app: I appreciate the finished product.
But there’s also the ephemeral nature of these AI-generated works. If you go back to the previous newsletter, you’ll see that the image of the porcupine reading a book is different from the images above, even though both were made using the exact same text prompt.
This is considered an annoyance now, because people may want to generate different scenes using the same characters or elements. (In Firefly you can, in a limited way: click the three-dot icon and choose Use as Reference Image. The ability to upload your own source images is not yet implemented.)
That limitation, though, gives AI art something sought after by art collectors and patrons for centuries: uniqueness. Just as the distinctive brush strokes of a Monet painting make it one of a kind—even if it’s one of a series of paintings of the same scene, or a copy of another version—every output from a GenAI service is different from every other.
Plus, regardless of the mechanism of their creation—and despite the fact that the mechanism was built on machine learning models trained on millions of existing images—none of my images existed at all until I summoned them into being.
And yes, I did just say “my” images.