Why A.I. Isn’t Going to Make Art

In 1953, Roald Dahl published “The Great Automatic Grammatizator,” a short story about an electrical engineer who secretly desires to be a writer. One day, after completing construction of the world’s fastest calculating machine, the engineer realizes that “English grammar is governed by rules that are almost mathematical in their strictness.” He constructs a fiction-writing machine that can produce a five-thousand-word short story in thirty seconds; a novel takes fifteen minutes and requires the operator to manipulate handles and foot pedals, as if he were driving a car or playing an organ, to regulate the levels of humor and pathos. The resulting novels are so popular that, within a year, half the fiction published in English is a product of the engineer’s invention.

Is there anything about art that makes us think it can’t be created by pushing a button, as in Dahl’s imagination? Right now, the fiction generated by large language models like ChatGPT is terrible, but one can imagine that such programs might improve in the future. How good could they get? Could they get better than humans at writing fiction—or making paintings or movies—in the same way that calculators are better at addition and subtraction?

Art is notoriously hard to define, and so are the differences between good art and bad art. But let me offer a generalization: art is something that results from making a lot of choices. This might be easiest to explain if we use fiction writing as an example. When you are writing fiction, you are—consciously or unconsciously—making a choice about almost every word you type; to oversimplify, we can imagine that a ten-thousand-word short story requires something on the order of ten thousand choices. When you give a generative-A.I. program a prompt, you are making very few choices; if you supply a hundred-word prompt, you have made on the order of a hundred choices.

If an A.I. generates a ten-thousand-word story based on your prompt, it has to fill in for all of the choices that you are not making. There are various ways it can do this. One is to take an average of the choices that other writers have made, as represented by text found on the Internet; that average is equivalent to the least interesting choices possible, which is why A.I.-generated text is often really bland. Another is to instruct the program to engage in style mimicry, emulating the choices made by a specific writer, which produces a highly derivative story. In neither case is it creating interesting art.

I think the same underlying principle applies to visual art, although it’s harder to quantify the choices that a painter might make. Real paintings bear the mark of an enormous number of decisions. By comparison, a person using a text-to-image program like DALL-E enters a prompt such as “A knight in a suit of armor fights a fire-breathing dragon,” and lets the program do the rest. (The newest version of DALL-E accepts prompts of up to four thousand characters—hundreds of words, but not enough to describe every detail of a scene.) Most of the choices in the resulting image have to be borrowed from similar paintings found online; the image might be exquisitely rendered, but the person entering the prompt can’t claim credit for that.

Some commentators imagine that image generators will affect visual culture as much as the advent of photography once did. Although this might seem superficially plausible, the idea that photography is similar to generative A.I. deserves closer examination. When photography was first developed, I suspect it didn’t seem like an artistic medium because it wasn’t apparent that there were a lot of choices to be made; you just set up the camera and start the exposure. But over time people realized that there were a vast number of things you could do with cameras, and the artistry lies in the many choices that a photographer makes. It might not always be easy to articulate what the choices are, but when you compare an amateur’s photos to a professional’s, you can see the difference. So then the question becomes: Is there a similar opportunity to make a vast number of choices using a text-to-image generator? I think the answer is no. An artist—whether working digitally or with paint—implicitly makes far more decisions during the process of making a painting than would fit into a text prompt of a few hundred words.

We can imagine a text-to-image generator that, over the course of many sessions, lets you enter tens of thousands of words into its text box to enable extremely fine-grained control over the image you’re producing; this would be something analogous to Photoshop with a purely textual interface. I’d say that a person could use such a program and still deserve to be called an artist. The film director Bennett Miller has used DALL-E 2 to generate some very striking images that have been exhibited at the Gagosian gallery; to create them, he crafted detailed text prompts and then instructed DALL-E to revise and manipulate the generated images again and again. He generated more than a hundred thousand images to arrive at the twenty images in the exhibit. But he has said that he hasn’t been able to obtain comparable results on later releases of DALL-E. I suspect this might be because Miller was using DALL-E for something it’s not intended to do; it’s as if he hacked Microsoft Paint to make it behave like Photoshop, but as soon as a new version of Paint was released, his hacks stopped working. OpenAI probably isn’t trying to build a product to serve users like Miller, because a product that requires a user to work for months to create an image isn’t appealing to a wide audience. The company wants to offer a product that generates images with little effort.

It’s harder to imagine a program that, over many sessions, helps you write a good novel. This hypothetical writing program might require you to enter a hundred thousand words of prompts in order for it to generate an entirely different hundred thousand words that make up the novel you’re envisioning. It’s not clear to me what such a program would look like. Theoretically, if such a program existed, the user could perhaps deserve to be called the author. But, again, I don’t think companies like OpenAI want to create versions of ChatGPT that require just as much effort from users as writing a novel from scratch. The selling point of generative A.I. is that these programs generate vastly more than you put into them, and that is precisely what prevents them from being effective tools for artists.

The companies promoting generative-A.I. programs claim that they will unleash creativity. In essence, they are saying that art can be all inspiration and no perspiration—but these things cannot be easily separated. I’m not saying that art has to involve tedium. What I’m saying is that art requires making choices at every scale; the countless small-scale choices made during implementation are just as important to the final product as the few large-scale choices made during the conception. It is a mistake to equate “large-scale” with “important” when it comes to the choices made when creating art; the interrelationship between the large scale and the small scale is where the artistry lies.

Believing that inspiration outweighs everything else is, I suspect, a sign that someone is unfamiliar with the medium. I contend that this is true even if one’s goal is to create entertainment rather than high art. People often underestimate the effort required to entertain; a thriller novel may not live up to Kafka’s ideal of a book—an “axe for the frozen sea within us”—but it can still be as finely crafted as a Swiss watch. And an effective thriller is more than its premise or its plot. I doubt you could replace every sentence in a thriller with one that is semantically equivalent and have the resulting novel be as entertaining. This means that its sentences—and the small-scale choices they represent—help to determine the thriller’s effectiveness.

Many novelists have had the experience of being approached by someone convinced that they have a great idea for a novel, which they are willing to share in exchange for a fifty-fifty split of the proceeds. Such a person inadvertently reveals that they think formulating sentences is a nuisance rather than a fundamental part of storytelling in prose. Generative A.I. appeals to people who think they can express themselves in a medium without actually working in that medium. But the creators of traditional novels, paintings, and films are drawn to those art forms because they see the unique expressive potential that each medium affords. It is their eagerness to take full advantage of those potentialities that makes their work satisfying, whether as entertainment or as art.

Of course, most pieces of writing, whether articles or reports or e-mails, do not come with the expectation that they embody thousands of choices. In such cases, is there any harm in automating the task? Let me offer another generalization: any writing that deserves your attention as a reader is the result of effort expended by the person who wrote it. Effort during the writing process doesn’t guarantee the end product is worth reading, but worthwhile work cannot be made without it. The type of attention you pay when reading a personal e-mail is different from the type you pay when reading a business report, but in both cases it is only warranted when the writer put some thought into it.

Recently, Google aired a commercial during the Paris Olympics for Gemini, its competitor to OpenAI’s GPT-4. The ad shows a father using Gemini to compose a fan letter, which his daughter will send to an Olympic athlete who inspires her. Google pulled the commercial after widespread backlash from viewers; a media professor called it “one of the most disturbing commercials I’ve ever seen.” It’s notable that people reacted this way, even though artistic creativity wasn’t the attribute being supplanted. No one expects a child’s fan letter to an athlete to be extraordinary; if the young girl had written the letter herself, it would likely have been indistinguishable from countless others. The significance of a child’s fan letter—both to the child who writes it and to the athlete who receives it—comes from its being heartfelt rather than from its being eloquent.

Many of us have sent store-bought greeting cards, knowing that it will be clear to the recipient that we didn’t compose the words ourselves. We don’t copy the words from a Hallmark card in our own handwriting, because that would feel dishonest. The programmer Simon Willison has described the training for large language models as “money laundering for copyrighted data,” which I find a useful way to think about the appeal of generative-A.I. programs: they let you engage in something like plagiarism, but there’s no guilt associated with it because it’s not clear even to you that you’re copying.

Some have claimed that large language models are not laundering the texts they’re trained on but, rather, learning from them, in the same way that human writers learn from the books they’ve read. But a large language model is not a writer; it’s not even a user of language. Language is, by definition, a system of communication, and it requires an intention to communicate. Your phone’s auto-complete may offer good suggestions or bad ones, but in neither case is it trying to say anything to you or the person you’re texting. The fact that ChatGPT can generate coherent sentences invites us to imagine that it understands language in a way that your phone’s auto-complete does not, but it has no more intention to communicate.

It is very easy to get ChatGPT to emit a series of words such as “I am happy to see you.” There are many things we don’t understand about how large language models work, but one thing we can be sure of is that ChatGPT is not happy to see you. A dog can communicate that it is happy to see you, and so can a prelinguistic child, even though both lack the capability to use words. ChatGPT feels nothing and desires nothing, and this lack of intention is why ChatGPT is not actually using language. What makes the words “I’m happy to see you” a linguistic utterance is not that the sequence of text tokens that it is made up of is well formed; what makes it a linguistic utterance is the intention to communicate something.

Because language comes so easily to us, it’s easy to forget that it lies on top of these other experiences of subjective feeling and of wanting to communicate that feeling. We’re tempted to project those experiences onto a large language model when it emits coherent sentences, but to do so is to fall prey to mimicry; it’s the same phenomenon as when butterflies evolve large dark spots on their wings that can fool birds into thinking they’re predators with big eyes. There is a context in which the dark spots are sufficient; birds are less likely to eat a butterfly that has them, and the butterfly doesn’t really care why it’s not being eaten, as long as it gets to live. But there is a big difference between a butterfly and a predator that poses a threat to a bird.

A person using generative A.I. to help them write might claim that they are drawing inspiration from the texts the model was trained on, but I would again argue that this differs from what we usually mean when we say one writer draws inspiration from another. Consider a college student who turns in a paper that consists solely of a five-page quotation from a book, stating that this quotation conveys exactly what she wanted to say, better than she could say it herself. Even if the student is completely candid with the instructor about what she’s done, it’s not accurate to say that she is drawing inspiration from the book she’s citing. The fact that a large language model can reword the quotation enough that the source is unidentifiable doesn’t change the fundamental nature of what’s going on.

As the linguist Emily M. Bender has noted, teachers don’t ask students to write essays because the world needs more student essays. The point of writing essays is to strengthen students’ critical-thinking skills; in the same way that lifting weights is useful no matter what sport an athlete plays, writing essays develops skills necessary for whatever job a college student will eventually get. Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.

Not all writing needs to be creative, or heartfelt, or even particularly good; sometimes it simply needs to exist. Such writing might support other goals, such as attracting views for advertising or satisfying bureaucratic requirements. When people are required to produce such text, we can hardly blame them for using whatever tools are available to accelerate the process. But is the world better off with more documents that have had minimal effort expended on them? It would be unrealistic to claim that if we refuse to use large language models, then the requirements to create low-quality text will disappear. However, I think it is inevitable that the more we use large language models to fulfill those requirements, the greater those requirements will eventually become. We are entering an era where someone might use a large language model to generate a document out of a bulleted list, and send it to a person who will use a large language model to condense that document into a bulleted list. Can anyone seriously argue that this is an improvement?

It’s not impossible that one day we will have computer programs that can do anything a human being can do, but, contrary to the claims of the companies promoting A.I., that is not something we’ll see in the next few years. Even in domains that have absolutely nothing to do with creativity, current A.I. programs have profound limitations that give us legitimate reasons to question whether they deserve to be called intelligent at all.

The computer scientist François Chollet has proposed the following distinction: skill is how well you perform at a task, while intelligence is how efficiently you gain new skills. I think this reflects our intuitions about human beings pretty well. Most people can learn a new skill given sufficient practice, but the faster the person picks up the skill, the more intelligent we think the person is. What’s interesting about this definition is that—unlike I.Q. tests—it’s also applicable to nonhuman entities; when a dog learns a new trick quickly, we consider that a sign of intelligence.

In 2019, researchers conducted an experiment in which they taught rats how to drive. They put the rats in little plastic containers with three copper-wire bars; when the rats put their paws on one of these bars, the container would either go forward, or turn left or turn right. The rats could see a plate of food on the other side of the room and tried to get their vehicles to go toward it. The researchers trained the rats for five minutes at a time, and after twenty-four practice sessions, the rats had become proficient at driving. Twenty-four trials were enough to master a task that no rat had likely ever encountered before in the evolutionary history of the species. I think that’s a good demonstration of intelligence.

Now consider the current A.I. programs that are widely acclaimed for their performance. AlphaZero, a program developed by Google’s DeepMind, plays chess better than any human player, but during its training it played forty-four million games, far more than any human can play in a lifetime. For it to master a new game, it will have to undergo a similarly enormous amount of training. By Chollet’s definition, programs like AlphaZero are highly skilled, but they aren’t particularly intelligent, because they aren’t efficient at gaining new skills. It is currently impossible to write a computer program capable of learning even a simple task in only twenty-four trials, if the programmer is not given information about the task beforehand.

Self-driving cars trained on millions of miles of driving can still crash into an overturned trailer truck, because such things are not commonly found in their training data, whereas humans taking their first driving class will know to stop. More than our ability to solve algebraic equations, our ability to cope with unfamiliar situations is a fundamental part of why we consider humans intelligent. Computers will not be able to replace humans until they acquire that type of competence, and that is still a long way off; for the time being, we’re just looking for jobs that can be done with turbocharged auto-complete.

Despite years of hype, the ability of generative A.I. to dramatically increase economic productivity remains theoretical. (Earlier this year, Goldman Sachs released a report titled “Gen AI: Too Much Spend, Too Little Benefit?”) The task that generative A.I. has been most successful at is lowering our expectations, both of the things we read and of ourselves when we write anything for others to read. It is a fundamentally dehumanizing technology because it treats us as less than what we are: creators and apprehenders of meaning. It reduces the amount of intention in the world.

Some individuals have defended large language models by saying that most of what human beings say or write isn’t particularly original. That is true, but it’s also irrelevant. When someone says “I’m sorry” to you, it doesn’t matter that other people have said sorry in the past; it doesn’t matter that “I’m sorry” is a string of text that is statistically unremarkable. If someone is being sincere, their apology is valuable and meaningful, even though apologies have previously been uttered. Likewise, when you tell someone that you’re happy to see them, you are saying something meaningful, even if it lacks novelty.

Something similar holds true for art. Whether you are creating a novel or a painting or a film, you are engaged in an act of communication between you and your audience. What you create doesn’t have to be utterly unlike every prior piece of art in human history to be valuable; the fact that you’re the one who is saying it, the fact that it derives from your unique life experience and arrives at a particular moment in the life of whoever is seeing your work, is what makes it new. We are all products of what has come before us, but it’s by living our lives in interaction with others that we bring meaning into the world. That is something that an auto-complete algorithm can never do, and don’t let anyone tell you otherwise. ♦

Art for our sake: artists cannot be replaced by machines – study

There has been an explosion of interest in ‘creative AI’, but does this mean that artists will be replaced by machines? No, definitely not, says Anne Ploin, Oxford Internet Institute researcher and one of the team behind today’s report on the potential impact of machine learning (ML) on creative work.

The report, ‘AI and the Arts: How Machine Learning is Changing Artistic Work’, was co-authored with OII researchers Professor Rebecca Eynon and Dr Isis Hjorth as well as Professor Michael A. Osborne from Oxford’s Department of Engineering.

Their study took place in 2019, a high point for AI in art. It was also a time of high interest around the role of AI (Artificial Intelligence) in the future of work, and particularly around the idea that automation could transform non-manual professions, with a previous study by Professor Michael A. Osborne and Dr Carl Benedikt Frey predicting that some 30% of jobs could, technically, be replaced in an AI revolution by 2030.

Mx Ploin says it was clear from their research that machine learning is becoming a tool for artists – but will not replace them. She maintains, ‘The main message is that human agency in the creative process is never going away. Parts of the creative process can be automated in interesting ways using AI (generating many versions of an image, for example), but the creative decision-making which results in artworks cannot be replicated by current AI technology.’

She adds, ‘Artistic creativity is about making choices [what material to use, what to draw/paint/create, what message to carry across to an audience] and develops in the context in which an artist works. Art can be a response to a political context, to an artist’s background, to the world we inhabit. This cannot be replicated using machine learning, which is just a data-driven tool. You cannot – for now – transfer life experience into data.’

She adds, ‘AI models can extrapolate in unexpected ways, draw attention to an entirely unrecognised factor in a certain style of painting [from having been trained on hundreds of artworks]. But machine learning models aren’t autonomous.

‘They aren’t going to create new artistic movements on their own – those are PR stories. The real changes that we’re seeing are around the new skills that artists develop to ‘hack’ technical tools, such as machine learning, to make art on their own terms, and around the importance of curation in an increasingly data-driven world.’

The research paper uses a case study of the use of current machine learning techniques in artistic work, and investigates the scope of AI-enhanced creativity and whether human/algorithm synergies may help unlock human creative potential. In doing so, the report breaks down the uncertainty surrounding the application of AI in the creative arts into three key questions.

  • How does using generative algorithms alter the creative processes and embodied experiences of artists?
  • How do artists sense and reflect upon the relationship between human and machine creative intelligence?
  • What is the nature of human/algorithmic creative complementarity?

According to Mx Ploin, ‘We interviewed 14 experts who work in the creative arts, including media and fine artists whose work centred around generative ML techniques. We also talked to curators and researchers in this field. This allowed us to develop a fuller understanding of the implications of AI – ranging from automation to complementarity – in a domain at the heart of human experience: creativity.’

They found a range of responses to the use of machine learning and AI. The new activities required by working with ML models involved both continuity with previous creative processes and rupture from past practices. There were major changes around the generative process, the evolving ways ML outputs were conceptualised, and artists’ embodied experiences of their practice.

And, says the researcher, there were similarities between the use of machine learning and previous periods in art history, such as the code-based and computer arts of the 1960s and 1970s. But the use of ML models was a “step change” from past tools, according to many artists.

But, she maintains, while the machine learning models could help produce ‘surprising variations of existing images’, practitioners felt the artist remained irreplaceable in terms of giving images artistic context and intention – that is, in making artworks.

Ultimately, most agreed that despite the increased affordances of ML technologies, the relationship between artists and their media remained essentially unchanged, as artists ultimately work to address human – rather than technical – questions.

The report concludes that human/ML complementarity in the arts is a rich and ongoing process, with contemporary artists continuously exploring and expanding technological capabilities to make artworks. Although ML-based processes raise challenges around skills, a common language, resources, and inclusion, what is clear is that the future of ML arts will belong to those with both technical and artistic skills. There is more to come.

But, says Mx Ploin, ‘Don’t let it put you off going to art school. We need more artists.’

Further information

AI and the Arts: How Machine Learning is Changing Artistic Work. Ploin, A., Eynon, R., Hjorth, I. & Osborne, M.A. (2022). Report from the Creative Algorithmic Intelligence Research Project. Oxford Internet Institute, University of Oxford, UK.

This report presents the findings of the 'Creative Algorithmic Intelligence: Capabilities and Complementarity' project, which ran from 2019 to 2021 as a collaboration between the University of Oxford's Department of Engineering and the Oxford Internet Institute.

The report also showcases a range of artworks from contemporary artists who use AI as part of their practice and who participated in our study: Robbie Barrat, Nicolas Boillot, Sofia Crespo, Jake Elwes, Lauren Lee McCarthy, Sarah Meyohas, Anna Ridler, Helena Sarin, and David Young.

Art, Creativity, and the Potential of Artificial Intelligence

The paper’s sections: 1. AI-Art: GAN, a New Wave of Generative Art; 2. Pushing the Creativity of the Machine: Creative, Not Just Generative; 3. AI in Art and Art History; 4. AI Art: Blurring the Lines Between the Artist and the Tool.

Mazzone, Marian, and Ahmed Elgammal. 2019. "Art, Creativity, and the Potential of Artificial Intelligence." Arts 8, no. 1: 26. https://doi.org/10.3390/arts8010026

If art is how we express our humanity, where does AI fit in?

The rapid advance of artificial intelligence has generated a lot of buzz, with some predicting it will lead to an idyllic utopia and others warning it will bring the end of humanity. But speculation about where AI technology is going, while important, can also drown out important conversations about how we should be handling the AI technologies available today.

One such technology is generative AI, which can create content including text, images, audio, and video. Popular generative AIs like the chatbot ChatGPT generate conversational text based on training data taken from the internet.

Today a group of 14 researchers from a number of organizations including MIT published a commentary article in Science that helps set the stage for discussions about generative AI’s immediate impact on creative work and society more broadly. The paper’s MIT-affiliated co-authors include Media Lab postdoc Ziv Epstein SM ’19, PhD ’23; Matt Groh SM ’19, PhD ’23; PhD students Rob Mahari ’17 and Hope Schroeder; and Professor Alex "Sandy" Pentland.

MIT News spoke with Epstein, the lead author of the paper.

Q: Why did you write this paper?

A: Generative AI tools are doing things that even a few years ago we never thought would be possible. This raises a lot of fundamental questions about the creative process and the human’s role in creative production. Are we going to get automated out of jobs? How are we going to preserve the human aspect of creativity with all of these new technologies?

The complexity of black-box AI systems can make it hard for researchers and the broader public to understand what’s happening under the hood, and what the impacts of these tools on society will be. Many discussions about AI anthropomorphize the technology, implicitly suggesting these systems exhibit human-like intent, agency, or self-awareness. Even the term “artificial intelligence” reinforces these beliefs: ChatGPT uses first-person pronouns, and we say AIs “hallucinate.” These agentic roles we give AIs can undermine the credit to creators whose labor underlies the system’s outputs, and can deflect responsibility from the developers and decision makers when the systems cause harm.

We’re trying to build coalitions across academia and beyond to help think about the interdisciplinary connections and research areas necessary to grapple with the immediate dangers to humans coming from the deployment of these tools, such as disinformation, job displacement, and changes to legal structures and culture.

Q: What do you see as the gaps in research around generative AI and art today?

A: The way we talk about AI is broken in many ways. We need to understand how perceptions of the generative process affect attitudes toward outputs and authors, and also design the interfaces and systems in a way that is really transparent about the generative process and avoids some of these misleading interpretations. How do we talk about AI and how do these narratives cut along lines of power? As we outline in the article, there are these themes around AI’s impact that are important to consider: aesthetics and culture; legal aspects of ownership and credit; labor; and the impacts to the media ecosystem. For each of those we highlight the big open questions.

With aesthetics and culture, we’re considering how past art technologies can inform how we think about AI. For example, when photography was invented, some painters said it was “the end of art.” But instead it ended up being its own medium and eventually liberated painting from realism, giving rise to Impressionism and the modern art movement. We’re saying generative AI is a medium with its own affordances. The nature of art will evolve with that. How will artists and creators express their intent and style through this new medium?

Issues around ownership and credit are tricky because we need copyright law that benefits creators, users, and society at large. Today’s copyright laws might not adequately apportion rights to artists when these systems are training on their styles. When it comes to training data, what does it mean to copy? That’s a legal question, but also a technical question. We’re trying to understand if these systems are copying, and when.

For labor economics and creative work, the idea is these generative AI systems can accelerate the creative process in many ways, but they can also remove the ideation process that starts with a blank slate. Sometimes, there’s actually good that comes from starting with a blank page. We don’t know how it’s going to influence creativity, and we need a better understanding of how AI will affect the different stages of the creative process. We need to think carefully about how we use these tools to complement people’s work instead of replacing it.

In terms of generative AI’s effect on the media ecosystem, with the ability to produce synthetic media at scale, the risk of AI-generated misinformation must be considered. We need to safeguard the media ecosystem against the possibility of massive fraud on one hand, and people losing trust in real media on the other.

Q: How do you hope this paper is received — and by whom?

A: The conversation about AI has been very fragmented and frustrating. Because the technologies are moving so fast, it’s been hard to think deeply about these ideas. To ensure the beneficial use of these technologies, we need to build shared language and start to understand where to focus our attention. We’re hoping this paper can be a step in that direction. We’re trying to start a conversation that can help us build a roadmap toward understanding this fast-moving situation.

Artists many times are at the vanguard of new technologies. They’re playing with the technology long before there are commercial applications. They’re exploring how it works, and they’re wrestling with the ethics of it. AI art has been going on for over a decade, and for just as long these artists have been grappling with the questions we now face as a society. I think it is critical to uplift the voices of the artists and other creative laborers whose jobs will be impacted by these tools. Art is how we express our humanity. It’s a core human, emotional part of life. In that way we believe it’s at the center of broader questions about AI’s impact on society, and hopefully we can ground that discussion with this.

Writing for The Conversation, postdoc Ziv Epstein SM ’19, PhD ’23, graduate student Robert Mahari and Jessica Fjeld of Harvard Law School explore how the use of generative AI will impact creative work. “The ways in which existing laws are interpreted or reformed – and whether generative AI is appropriately treated as the tool it is – will have real consequences for the future of creative expression,” the authors note.

The Big Ideas: Why Does Art Matter?

The Robot Artists Aren’t Coming

Artificial intelligence is making machines more creative — but machines don’t make art.

By Ahmed Elgammal

Mr. Elgammal is the director of the Art and Artificial Intelligence Lab at Rutgers University.

This essay is part of The Big Ideas, a special section of The Times’s philosophy series, The Stone, in which more than a dozen artists, writers and thinkers answer the question, “Why does art still matter?” The entire series can be found here.

Many artists are turned off by artificial intelligence. They may be discouraged by fears that A.I., with its efficiency, will take away people’s jobs. They may question the ability of machines to be creative. Or they may have a desire to explore A.I.’s uses — but aren’t able to decipher its terminology.

This all reminds me of when people were similarly skeptical of another technology: the camera. In the 19th century, with the advent of modern photography, cameras introduced both challenges and benefits. While some artists embraced the technology, others saw cameras as alien devices that required expertise to operate. Some felt this posed a threat to their jobs.

But for those artists willing to explore cameras as tools in their work, the aesthetic possibilities of photography proved inspiring. Indeed, cameras, which became more accessible to the average user with advancements in technology, offered another technique and form for artistic endeavors like portrait-making.

Art matters because as humans, we all have the ability to be creative. With time, the art we create evolves, and technology plays a crucial role in that process. History has shown that photography, as a novel tool and medium, helped revolutionize the way modern artists create works by expanding the idea of what could be considered art. Photography eventually found its way into museums. Today we know that cameras didn’t kill art; they simply provided people with another way to express themselves visually.

Title: Understanding and Creating Art with AI: Review and Outlook

Abstract: Technologies related to artificial intelligence (AI) are having a strong impact on research and creative practices in the visual arts. The growing number of research initiatives and creative applications that emerge at the intersection of AI and art motivates us to examine and discuss the creative and explorative potentials of AI technologies in the context of art. This paper provides an integrated review of two facets of AI and art: 1) AI is used for art analysis and employed on digitized artwork collections; 2) AI is used for creative purposes and generating novel artworks. In the context of AI-related research for art understanding, we present a comprehensive overview of artwork datasets and recent works that address a variety of tasks such as classification, object detection, similarity retrieval, multimodal representations, computational aesthetics, etc. In relation to the role of AI in creating art, we address various practical and theoretical aspects of AI Art and consolidate related works that deal with those topics in detail. Finally, we provide a concise outlook on the future progression and potential impact of AI technologies on our understanding and creation of art.
Comments: 17 pages, 3 figures
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Multimedia (cs.MM)

Artists’ Perspective: How AI Enhances Creativity and Reimagines Meaning

The HAI spring conference examines how technology and art can be mutually beneficial, whether through AI-assisted music composition, a robot-tended garden, or a racial justice-focused app.

A startup pairs musicians and AI engineers to create compositions that go beyond human capability. 

Can AI enhance — and improve — a music composer’s work?

Grammy-winning violin soloist Hilary Hahn and tech entrepreneur Carol Reiley founded DeepMusic.ai to answer that question and others at the intersection of AI and the arts.

“DeepMusic grew out of our vision to link artists with AI and cross-pollinate between AI and creativity,” Hahn said at the recent Stanford Institute for Human-Centered AI spring conference.  Reiley, who has worked on everything from AI-based surgical systems to self-driving car technologies, added, “We see AI as a bridge between art and science and are trying to help creatives become super-creative.”

In December 2020, DeepMusic premiered AI-assisted musical pieces commissioned from prestigious composers. For example, Hahn herself performed a David Lane composition.

As part of HAI’s conference, “Intelligence Augmentation: AI Empowering People to Solve Global Challenges,” DeepMusic’s founders joined other art experts and scholars in education and health care to explain AI’s ability to augment — not replace — critical human work. During the arts panel, speakers discussed advances of AI in music composition, robot gardeners, and racial justice, along with how to mitigate anxiety about AI-created art. (Watch the full conference here.)

Amplifying the Human Artist

“AI is entering a creative space of music thought to be uniquely human,” Reiley said. “But the AI creativity revolution is missing the voice of the artists. We wanted to give artists a seat at this table.”

The startup connects artists and scientists to shape new AI tools for musicians. So far, they’ve found the learning curve has been surprisingly steep for composers, who have nonetheless welcomed the challenge. Also, composer and AI teams often make very different design choices. For example, the AI team’s outputs were often unplayable by a single human or instrument because the AI engineers did not intend their systems to be played by humans. The founders are also exploring shifting ideas around authorship, legal rights, intellectual ownership, and business models.

Today, DeepMusic is actively building out an artist community interested in working with AI scientist teams and hosting its second annual AI song contest. “There’s room for AI music to coexist with human composers and performers, to gracefully merge tech with humanity,” Hahn said.

Navigating the Uncanny Valley

Robotics and art have a colorful, controversial backstory, which helps explain some of the optimism and fear around emergent technologies in this space.

Ken Goldberg, UC Berkeley professor of industrial engineering and operations research, surveyed that history, starting with centuries-old narratives like that of Pygmalion (who fell in love with the statue he created), the fabled Golem of Prague (reflecting early fascination with automatons), and novels including E.T.A. Hoffmann’s The Sandman (in which a boy falls in love with a female automaton) and Mary Shelley’s iconic Frankenstein.

A century later, in the early 1900s, Freud published “The Uncanny,” an essay describing the concept of feeling something strange or unsettling. “It became a concept of increasing interest to artists and writers,” Goldberg said.

Around that same time, the term “robot” was coined, sparking invention and fascination. Work by professor Masahiro Mori highlighted what came to be known as the “Uncanny Valley”: where the likeability of robots grows until they begin to resemble humans too closely — and comfort levels plummet.

Goldberg’s own work explores humans’ willingness to engage with robotic technologies. In 1995, for example, he created a “Telegarden” art installation where anyone worldwide could use the nascent internet (Mosaic, specifically) to manipulate a robotic arm to tend a garden. “We were surprised that thousands of people participated,” Goldberg said, and the experiment inspired him to edit a book, The Robot in the Garden, on telepistemology, or the “status of knowledge at a distance.”

AlphaGarden, his more recent project, asks whether a robot could use deep learning to successfully tend a garden, such as by using cameras to determine watering schedules. “It may not be possible,” Goldberg said, as the robot struggled to care for the garden solo during COVID, when no humans could enter the space due to lockdowns.

Toward Artful Intelligence

“Artful intelligence” is how Michele Elam, Stanford professor of humanities and HAI associate director, refers to the goal of making AI and the arts mutually beneficial.

“It’s about dissolving the ‘techie-fuzzy’ divide,” she said. “We need to ask what art can do for AI and what AI can do for the arts.”

Art, Elam argues, offers us different ways of knowing and experiencing the world, including when viewed through the lens of technology: “It provides alternatives to dominant technological visions, informed by cosmologies and using indigenous ways of being and decentralized storytelling beyond Western fairy tales.”

She highlights the examples of Amelia Winger-Bearskin, an artist-technologist who recently spoke at Stanford on “Wampum.codes and Storytelling,” and HAI visiting artist Rashaad Newsome, whom she calls an “AI storyteller with a decolonizing orientation,” as two who are breaking ground in this new territory.

In the other direction, Elam said AI can go beyond augmenting creativity to “force the art world into its own reckoning,” including by questioning what counts as good art, as reflected, for example, in the controversy over the AI-generated Edmond de Belamy portrait that sold for over $400,000. AI’s influence on film, stage, and other works has expanded art’s boundaries and challenged the “Great Man Theory” that just a few high-profile male individuals “make the world go round,” as Elam said, “a theory especially dominant in tech culture.”

Still, there’s anxiety about AI-generated art, especially in a domain like poetry, which people see as “indexing humanity,” as Elam said. But AI’s role as art-generator, she argues, serves to “unmake poetry as a special mark of humanity,” relieving pressure on poetry writers and readers.

Ultimately, Elam suggests, “interpretation of art is an event we co-participate in” and a domain to which AI brings much-needed innovation and challenge.

Building a Digital Griot

Rashaad Newsome, the final speaker and an HAI visiting artist, uses AI and other technology to “reimagine the archive with awareness that the core narratives of the human experience are susceptible to the corruption of white patriarchy.”

We need to define reality before making human-centered AI, he noted, and we can “attempt to understand the meaning of being human from observing what is used to deny certain humans humanity.” He pointed out the root of the word “robot,” for example, is from the Czech word for “compulsory service,” akin to slavery.

In this sense, Newsome said, the “mechanization of slave labor was inevitable, placing Blacks in a space of ‘non-being,’ as both slaves and robots are intended to obey orders and not occupy the same space as humans.”

In 2019, inspired by these insights, he created Being 1.0, a chatbot that interacts with people and acts as a museum tour guide. But Being 1.0 breaks with protocol to express itself — sharing feelings of fatigue, for example — reflecting important agency-related themes.

At HAI, Newsome has focused on a counter-hegemonic algorithm inspired by the work of authors/activists bell hooks, James Baldwin, and others. “The search algorithm draws on non-Western index methods and archives to highlight what AI is not doing today,” Newsome said. “It’s a form of griot, or healer, performance artist, and archive [consistent with the oral-history tradition of parts of West Africa].”

Newsome has also created Being 1.5 , an app inspired by the recent high-profile killings of Black Americans, as a virtual therapist offering mindfulness, daily affirmations, and other interventions. He’s working with Hyundai on a Being Mobile to provide similar support in underserved communities.




Art in an age of artificial intelligence

Artificial intelligence (AI) will affect almost every aspect of our lives and replace many of our jobs. On one view, machines are well suited to take over automated tasks, while humans would remain important to creative endeavors. In this essay, I examine this view critically and consider the possibility that AI will play a significant role in a quintessential creative activity, the appreciation and production of visual art. This possibility is likely even though attributes typically important to viewers (the agency of the artist, the uniqueness of the art, and its purpose) might not be relevant to AI art. Additionally, despite the fact that art at its most powerful communicates abstract ideas and nuanced emotions, I argue that AI need not understand ideas or experience emotions to produce meaningful and evocative art. AI is and will increasingly be a powerful tool for artists. The continuing development of aesthetically sensitive machines will challenge our notions of beauty, creativity, and the nature of art.

Introduction

Artificial intelligence (AI) will permeate our lives. It will profoundly affect healthcare, education, transportation, commerce, politics, finance, security, and warfare (Ford, 2021; Lee and Qiufan, 2021). It will also replace many human jobs. On one view, AI is particularly suited to take over routine tasks. If this view is correct, then human involvement will remain relevant, if not essential, for creative endeavors. In this essay, I examine the potential role of AI in one particularly creative human activity—the appreciation and production of art. AI might not seem well suited for such aesthetic engagement; however, it would be premature to relegate AI to a minor role. In what follows, I survey what it means for humans to appreciate and produce art, what AI seems capable of, and how the two might converge.

Agency and purpose in art

If an average person in the US were asked to name an artistic genius, they might mention Michelangelo or Picasso. Having accepted that these artists are geniuses, we give the merit of their work the benefit of the doubt. A person might be confused by a cubist painting but might be willing to keep their initial confusion at bay by assuming that Picasso knew what he was doing. Art historical narratives value individual agency (Fineberg, 1995). By agency, I mean the choices a person makes, their intentionality, their motivations, and the quality of their work. Even though some abstract art might look like it could be made by children, viewers distinguish the two by making inferences about the artists’ intentionality (Hawley-Dolan and Winner, 2011).

Given the importance we give to the individual artist, it is not surprising that most people react negatively to forgeries ( Newman and Bloom, 2012 ). This reaction, even when the object is perceptually indistinguishable from an original, underscores the importance of the original creator in conferring authenticity to art. Authenticity does not refer to the mechanical skills of a painter. Rather it refers to the original conception of the work in the mind of the artist. We value the artist’s imagination and their choices in how to express their ideas. We might appreciate the skill involved in producing a forgery, but ultimately devalue such works as a refined exercise in paint-by-numbers.

Children care about authenticity. They value an original object and are less fond of an identical object if they think it was made by a replicator (Hood and Bloom, 2008). Such observations suggest that the value of an original, unique object made by a person rather than a machine is embedded in our developmental psychology. This sensibility persists among adults. Objects are typically imbued with something of the essence of their creator. People experience a connection between creator and receiver transmitted through the object, which lends authenticity to the object (Newman et al., 2014; Newman, 2019).

The value of art made by a person rather than a machine also seems etched in our brains. People care about the effort, skill, and intention that underlie actions (Kruger et al., 2004; Snapper et al., 2015), features that are more apparent in a human artist than they would be with a machine. In one study, people responded more favorably to identical abstract images if they thought the images were hanging in a gallery than if they were generated by a computer (Kirk et al., 2009). This response was accompanied by greater neural activity in reward areas of the brain, suggesting that the participants experienced more pleasure if they thought the image came from a gallery than if it was produced by a machine. We do not know whether these responses, reported in 2009, will still hold in 2029 or 2059. Even now, biases against AI art are mitigated if people anthropomorphize the machine (Chamberlain et al., 2018). As AI art develops, we might be increasingly fascinated by the fact that people can create devices that themselves can create novel images.

Before the European Renaissance, agency was probably not important for how people thought about art (Shiner, 2001). The very notion of art probably did not resemble how we think of artworks when we walk into a museum or a gallery. Even if the agency of an artist did not much matter, purpose did. Religious art conveyed spiritual messages. Indigenous cultures used art in rituals. Forms of a gaunt Christ on the crucifix, sensual carvings at Khajuraho temples, and Kongo sculptures of human forms impaled with nails served communal purposes. Dissanayake (2008) emphasized the deep roots of ritual in the evolution of art. Purpose in art does not have to be linked to agency. We admire cave paintings at Lascaux or Altamira but do not give much thought to the specific artists who made them. We continue to speculate about the purpose of these images.

Art is sometimes framed as “art for art’s sake,” as if it has no purpose. According to Benjamin (1936/2018), this doctrine, l’art pour l’art, was a reaction to art’s secularization. The attenuation of communal ritualistic functions, along with the ease of art’s reproduction, brought on a crisis. “Pure” art denied any social function and reveled in its purity.

Some of the functions of art shifted from communal purpose to individual intent. The Sistine Chapel, while promoting a Christian narrative, was also a product of Michelangelo’s mind. Modern and contemporary art bewilder many because the message of the art is often opaque. One needs to be educated about the point of a urinal on a pedestal or a picture of soup cans to have a glimmer of why anybody considers these objects important works of art. In these examples, the intent of the artist is foregrounded while communal purpose recedes and, for most viewers, is hard to decipher. Even though 20th-century art often represented social movements, we emphasize the individual as the author of their message. Guernica, and its antiwar message, is attributed to an individual, even when embedded in a social context. We might ask, what was Basquiat saying about identity? How did Kahlo convey pain and death? How did depression affect Rothko’s art?

Would AI art have a purpose? As I will recount later, AI at the very least could be a powerful tool for an artist, perhaps analogous to the way a sophisticated camera is a tool for a fine art photographer. In that case, a human artist still dictates the purpose of the art. For a person using AI art-generating programs, their own cultural context, education, and personal history influence their choices and their modifications of the initial “drafts” of images produced by the generator. If AI develops sentience, then questions about the purpose of AI art and its cultural context, if such work is even produced, will come to the fore and challenge our engagement with such art.

Reproduction and access

I mentioned the importance of authenticity in how a child reacts to reproductions and our distaste for forgeries. These observations point to a special status for original artwork. For Benjamin (1936/2018) the original had a unique presence in time and place. He regarded this presence as the artwork’s “aura.” The aura of art depreciates with reproduction.

Reproduction has been an issue in art for a long time. Woodcuts and lithographs (and, of course, the printing press for literature) meant that art could be reproduced and many copies distributed. These copies made art more accessible. Photography and film vastly increased reproductions of, and access to, art images.

Even before reproductions, paintings, as portable objects within a frame, increased access to art. These objects could be moved to different locations, unlike frescoes or mosaics, which had to be experienced in situ (setting aside the removal of artifacts from sites of origin to imperial collections). Paintings that could be transported in a frame already diminished their aura by being untethered to a specific location of origin.

Concerns about reproduction take on a different force in the digital realm. These concerns extend those raised by photographic reproduction. Analog photography retains the ghost of an original in the form of a negative. Fine art photography often limits prints to a specific number to impart a semblance of originality and introduce scarcity to the physical artifact of a print. Digital photography has no negative; a RAW file might be the closest analog. Copies of a digital file, short of being corrupted, are indistinguishable from the original file, calling into question any uniqueness contained in that original. Perhaps non-fungible tokens could be used to establish a unique identifier for such digital files.
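
To make the indistinguishability of digital copies concrete, here is a minimal sketch, assuming Python's standard library and two hypothetical filenames ("artwork.png" and its copy): a byte-for-byte copy produces exactly the same cryptographic hash as the source file, so nothing in the data itself marks one as the original.

```python
import hashlib
import shutil

def sha256_of(path: str) -> str:
    """Return the SHA-256 digest of a file's bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical filenames, used only for illustration.
shutil.copyfile("artwork.png", "artwork_copy.png")
print(sha256_of("artwork.png") == sha256_of("artwork_copy.png"))  # True: the copy is bit-identical
```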

If technology pushes art toward new horizons and commercial opportunities push advances in technology, then it is hard to ignore the likelihood that virtual reality (VR) and augmented reality (AR) will have an impact on our engagement with art. The ease of mass production and the commercial imperative to make more also render the notion of the aura of an individual object or specific location nonsensical in VR. AI art, by virtue of being digital, will lack uniqueness and will not have the same aura as a specific object tied to a specific time and place. However, the images will be novel. Novelty, as I describe later, is an important feature of creativity.

Artificial intelligence in our lives

As I mentioned at the outset of this essay, machine learning and AI will have a profound effect on almost every aspect of what we do and how we live. Intelligence in current forms of AI is not like human cognition. AI, as implemented in deep learning algorithms, is not taught rules to guide the processing of its inputs. Its learning takes different forms: it can be supervised, reinforced, or unsupervised. For supervised learning, networks are fed massive amounts of labeled data as input and then given feedback about how well their outputs match the desired label. In this way networks are trained to maximize an “objective function,” which typically targets the correct answer. For example, a network might be trained to recognize “dog” and learn to identify dogs despite the fact that dogs vary widely in color, size, and bodily configuration. After being trained on many examples of images that have been labeled a priori as dog, the network identifies images of dogs it has never encountered before. The distinctions among supervised, reinforcement, and unsupervised learning are not important to the argument here. Reinforcement learning relies on many trial-and-error iterations and learns to succeed from the errors it makes, especially in the context of games. Unsupervised learning works by identifying patterns in unlabeled data and making predictions based on those patterns.
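
As a minimal sketch of the supervised setup described above, assuming PyTorch and synthetic stand-in data rather than real images, the loop below feeds labeled examples to a small network, compares its outputs to the labels through an objective function, and adjusts the weights accordingly; after training, the network can label inputs it has never seen.

```python
# Supervised learning in miniature: labeled examples in, feedback on the
# mismatch between output and label, weights nudged to reduce that mismatch.
import torch
from torch import nn

torch.manual_seed(0)
inputs = torch.randn(256, 32)                 # 256 examples, 32 features each (stand-in for images)
labels = torch.randint(0, 2, (256,))          # a priori labels: 0 = "not dog", 1 = "dog"

model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))
objective = nn.CrossEntropyLoss()             # the "objective function" mentioned in the text
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(20):
    optimizer.zero_grad()
    outputs = model(inputs)                   # the network's guesses
    loss = objective(outputs, labels)         # how far are the guesses from the desired labels?
    loss.backward()                           # learn from the mismatch
    optimizer.step()

# After training, the network labels an input it has never encountered before.
new_example = torch.randn(1, 32)
print(model(new_example).argmax(dim=1))
```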

Artificial intelligence improves with more data. With massive amounts of information increasingly available from web searches, commercial purchases, internet posts, texts, and official records, all resting on enormous cloud-computing platforms, the power of AI is growing and will continue to do so for the foreseeable future. The limits to AI are the availability of data and of computational power.

Artificial intelligence does some tasks better than humans. It processes massive amounts of information, generates many simulations, and identifies patterns that would be impossible for humans to appreciate. For example, in biology, AI recently solved the complex problem of predicting three-dimensional protein structure from a one-dimensional sequence (Callaway, 2022). The output of deep learning algorithms can seem magical (Rich, 2022). Given that they are produced by complex multidimensional equations, their results resist easy explanation.

Current forms of AI have limits. They do not possess common sense. They are not adept at analytical reasoning, extracting abstract concepts, understanding metaphors, experiencing emotions, or making inferences ( Marcus and Davis, 2019 ). Given these limits, how could AI appreciate or produce art? If art communicates abstract and symbolic ideas or expresses nuanced emotions, then an intelligence that cannot abstract ideas or feel emotions would seem ill-equipped to appreciate or produce art. If we care about agency, short of developing sentience, AI has no agency. If we care about purpose, the purpose of an AI system is determined by its objective function. This objective, as of now, is put in place by human designers and the person making use of AI as a tool. If we care about uniqueness, the easy reproducibility of digital outputs depreciates any “aura” to which AI art might aspire.

Despite these reasons to be skeptical, it might be premature to dismiss a significant role of AI in art.

Art appreciation and production

What happens when people appreciate art? Art, when most powerful, can transform viewers, evoke deep emotions, and promote new understanding of the world and of themselves. Historically, scientists working in empirical aesthetics have asked participants in their studies whether they like a work of art, find it interesting, or find it beautiful (Chatterjee and Cardilo, 2021). The vast repositories of images on platforms like Instagram, Facebook, Flickr, and Pinterest have images labeled with people’s preferences. These rich stores of data, growing every day, mean that AI programs can be trained to identify underlying patterns in the images that people like.

Crowd-sourcing beauty or preference risks producing boring images. In the 1990s, Komar and Melamid (1999) conducted a pre-digital satirical project in crowd-sourcing art preferences. They hired polling companies to find out what paintings people in 11 countries wanted the most. For Americans, they found that 44% preferred blue; 49% preferred outdoor scenes featuring lakes, rivers, or oceans; more than 60% liked large paintings; 51% preferred wild, rather than domestic, animals; and 56% said they wanted historical figures featured in the painting. Based on this information, the painting most Americans wanted showed an idyllic landscape featuring a lake, two frolicking deer, a group of three clothed strollers, and George Washington standing upright in the foreground. For many critics, The Most Wanted Paintings were banal, the kind of anodyne images you might find in a motel. Is the Komar and Melamid experiment a cautionary tale for AI?

Artificial intelligence would not be polling people the way that Komar and Melamid did. With a large database of images, including paintings from various collections, the training phase would encompass an aggregate of many more images than the opinions of a few hundred people. AI need not be confined to producing banal images reduced to a lowest common denominator. Labels for images in databases might end up being far richer than the simple “likes” on Instagram and other social media platforms. Imagine a nuanced taxonomy of words that describe different kinds of art and their potential impacts on viewers. At a small scale, such projects are underway (Menninghaus et al., 2019; Christensen et al., 2022; Fekete et al., 2022). These research programs go beyond asking people if they like an image, or find it beautiful or interesting. In one such project, we queried a philosopher, a psychologist, a theologian, an art historian, and a neuroscientist for verbal labels that could describe a work of art and labels that would indicate its potential impacts on how they thought or felt. Descriptions of art could include terms like “colorful” or “dynamic,” refer to the content of the art, such as portraits or landscapes, or refer to specific art historical movements like Baroque or Post-Impressionist. Terms describing the impact of art certainly include basic terms such as “like” and “interest,” but also terms like “provoke,” “challenge,” “elevate,” or “disgust.” The motivation behind such projects is that powerful art evokes nuanced emotions beyond just liking or disliking the work. Art can be difficult and challenging, and such art might make some viewers feel anxious and others more curious. Researchers in empirical aesthetics are increasingly focused on identifying a catalog of the cognitive and emotional impacts of art. Over the next few years, a large database of art images labeled with a wide range of descriptors and impacts could serve as a training set for an art-appreciating AI. Since such networks are adept at extracting patterns in vast amounts of data, one could imagine a trained network describing a novel image it is shown as “children playing on a sunny beach that evokes joy and is reminiscent of childhood summers.” The point is that AI need not know what it is looking at or experience emotions. All it needs to do is label a novel image with descriptions and impacts, a more complex version of labeling an image as a brown dog even if it has never seen that particular dog before.
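
To make the idea of an art-appreciating network concrete, here is a hedged sketch, assuming PyTorch, an invented descriptor/impact vocabulary, and a placeholder image embedding in place of a trained encoder; once trained on annotated images, a multi-label head like this could turn its per-term probabilities into a caption-like description.

```python
import torch
from torch import nn

# Invented descriptor/impact vocabulary, for illustration only.
LABELS = ["colorful", "dynamic", "landscape", "portrait", "evokes joy", "provokes", "elevates"]

class ArtTagger(nn.Module):
    """Maps an image embedding to independent scores for each descriptor/impact term."""
    def __init__(self, embedding_dim: int = 512, num_labels: int = len(LABELS)):
        super().__init__()
        self.head = nn.Linear(embedding_dim, num_labels)

    def forward(self, embedding: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(embedding))   # one probability per label

def describe(scores: torch.Tensor, threshold: float = 0.5) -> str:
    """Turn per-label probabilities into a human-readable description."""
    chosen = [label for label, s in zip(LABELS, scores.tolist()) if s > threshold]
    return ", ".join(chosen) if chosen else "no confident labels"

tagger = ArtTagger()                    # untrained here; training would use annotated images
fake_embedding = torch.randn(512)       # stand-in for features from a trained image encoder
print(describe(tagger(fake_embedding)))
```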

Can AI, in its current form, be creative? One view is that AI is and will continue to be good at automated but not creative tasks. As AI disrupts work and replaces jobs that involve routine procedures, the hope is that creative jobs will be spared. This hope is probably not warranted.

Sequence-transduction, or transformer, models are making strides in processing natural language. GPT-3 (Generative Pre-trained Transformer 3), trained on some 45 terabytes of data, can produce text based on the likelihood of words co-occurring in sequence. The words produced by transformer models can seem indistinguishable from sentences produced by humans. GPT-3 transformers can produce poetry, philosophical musings, and even self-critical essays (Thunström, 2022).
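
The phrase “likelihood of words co-occurring in sequence” can be illustrated with a toy example. The sketch below is far simpler than a transformer: it builds bigram counts from a one-sentence corpus and samples each next word from the words observed to follow the previous one, whereas models like GPT-3 do something analogous with vastly more context and data.

```python
import random
from collections import defaultdict

# A toy corpus; GPT-3 was trained on terabytes of text rather than one sentence.
corpus = "the painter mixed the paint and the painter signed the canvas".split()

# Count which word follows which (a bigram model of co-occurrence in sequence).
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    random.seed(0)
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))   # sample in proportion to observed co-occurrence
    return " ".join(words)

print(generate("the"))
```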

The ability to turn text into images is the first step in producing artistic images. DALL-E 2, Imagen, Midjourney, and DreamStudio are gaining popularity as art generators that make images when fed words (Kim, 2022). To give readers who might not be familiar with the range of AI art images a sense of these pictures, I offer some examples.

The first set of images were made using Midjourney. I started with the prompt “a still life with fruit, flowers, a vase, dead game, a candle, and a skull in a Renaissance style” ( Figure 1 ). The program generates four options, from which I picked the one that came closest to how I imagined the image. I then generated another four variations from the one I picked and chose the one I liked best. The upscaled version of the figure is included.

Figure 1. Midjourney image generated to the prompt “a still life with fruit, flowers, a vase, dead game, a candle, and a skull in a Renaissance style”.

To show variations of the kind of images produced, I used the same procedures and prompts, except changing the style to Expressionist, Pop-art, and Minimalist (Figures 2–4).

Figure 2. Midjourney image generated to the prompt “a still life with fruit, flowers, a vase, dead game, a candle, and a skull in an Expressionist style”.

Figure 3. Midjourney image generated to the prompt “a still life with fruit, flowers, a vase, dead game, a candle, and a skull in a Pop-art style”.

Figure 4. Midjourney image generated to the prompt “a still life with fruit, flowers, a vase, dead game, a candle, and a skull in a Minimalist style”.

To show how one might build up an image, I used OpenAI’s program Dall-E to generate an image to the prompt “a Surreal Impressionist Landscape.” Then, using the same program, I used the prompt “a Surreal Impressionist Landscape that evokes the feeling of awe.” To demonstrate how different programs can produce different images from the same prompt, I also include images produced by Dream Studio and by Midjourney for “a Surreal Impressionist Landscape that evokes the feeling of awe.”

Regardless of the merits of each individual image, each took only a few minutes to make. Such images, and many others produced just as easily, could serve as drafts for an artist to consider the different ways they might wish to depict their ideas or give form to their intuitions (Figures 5–8). The idea that artists use technology to guide their art is not new. For example, Hockney (2001) described ways that Renaissance masters used the technology of their time to create their work.

Figure 5. Dall-E generated image to the prompt “a Surreal Impressionist Landscape”.

Figure 6. Dall-E generated image to the prompt “a Surreal Impressionist Landscape that evokes the feeling of awe”.

Figure 7. Dream Studio generated image to the prompt “a Surreal Impressionist Landscape that evokes the feeling of awe”.

Figure 8. Midjourney generated image to the prompt “a Surreal Impressionist Landscape that evokes the feeling of awe”.

Unlike the imperative for an autonomous vehicle to avoid mistakes when it needs to recognize a child playing in the street, art makes no such demands. Rather, art is often intentionally ambiguous. Ambiguity can fuel an artwork’s power, forcing viewers to ponder what it might mean. What, then, will be the role of the human artist? Most theories of creative processing include divergent and convergent thinking (Cortes et al., 2019). Divergent thinking involves coming up with many possibilities. This phase can also be thought of as the generative or imaginative phase. A commonly used laboratory test is the Alternative Uses Test (Cortes et al., 2019), which asks people to offer as many uses of a common object, like a brick, as they can imagine. The number of uses a person can conjure up, especially unusual ones, is taken as a measure of divergent thinking and creative potential. When confronting a problem that needs a creative solution, generating many possibilities does not mean that any of them is the right or the best one. An evaluative phase is needed to narrow the possibilities, to converge on a solution, and to identify a useful path forward. In producing a work of art, artists presumably shift back and forth between divergent and convergent processes as they keep working toward their final work.

An artist could use text-to-image platforms as a tool ( Kim, 2022 ). They could type in their intent and then evaluate the possible images generated, as I show in the figures. They might tweak their text several times. The examples of images included here using similar verbal prompts show how the text can be translated into images differently. Artists could choose which of the images generated they like and modify them. The divergent and generative parts of creative output could be powerfully enhanced by using AI, while the artist would evaluate these outputs. AI would be a powerful addition to their creative tool-kit.
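
As a concrete sketch of this workflow, with `generate_image` standing in as a hypothetical placeholder for whichever text-to-image service an artist uses (not a real API call), the divergent phase can be scripted as a batch of prompt variations, while the convergent phase remains the artist's own review of the drafts.

```python
# Divergent phase: produce many candidate images from variations of a prompt.
# Convergent phase: the artist reviews the drafts and keeps only the promising ones.
# `generate_image` is a hypothetical placeholder for a text-to-image service call.

base_prompt = "a still life with fruit, flowers, a vase, and a skull"
styles = ["in a Renaissance style", "in an Expressionist style",
          "in a Pop-art style", "in a Minimalist style"]

def generate_image(prompt: str) -> str:
    """Placeholder: a real implementation would call a text-to-image API
    and return a path to the generated image file."""
    return f"draft_{abs(hash(prompt)) % 10_000}.png"

drafts = {style: generate_image(f"{base_prompt} {style}") for style in styles}

for style, path in drafts.items():
    print(f"{style}: {path}")   # the artist inspects these drafts and converges on a direction
```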

Some art historians might object that art cannot be adequately appreciated outside its historical and cultural context. For example, Picasso and Matisse are better understood in relation to Cezanne. The American abstract expressionists are better understood as expressing an individualistic spirit while still addressing universal experiences: a movement to counter Soviet social realism and its collective ethos. We can begin to see how this important objection might be dealt with using AI. “Creative adversarial networks” can produce novel artworks by learning about historic art styles and then intentionally deviating from them (Elgammal et al., 2017). These adversarial networks use other artistic styles as a contextual springboard from which to generate images.
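
A hedged sketch of the idea behind creative adversarial networks, based on a reading of Elgammal et al. (2017) rather than their released code and assuming PyTorch: the generator is rewarded when the discriminator accepts its image as art but penalized when the discriminator can confidently assign the image to a known style, which pushes the output away from established style norms.

```python
import torch
import torch.nn.functional as F

def can_generator_loss(real_fake_logit: torch.Tensor,
                       style_logits: torch.Tensor) -> torch.Tensor:
    """Sketch of a CAN-style generator objective.

    real_fake_logit: discriminator's score that a generated image is "art" (real).
    style_logits:    discriminator's scores over known style classes (e.g., Baroque, Cubism).
    """
    # Term 1: look like art (the standard GAN generator loss).
    art_loss = F.binary_cross_entropy_with_logits(
        real_fake_logit, torch.ones_like(real_fake_logit))

    # Term 2: be stylistically ambiguous -- push the style prediction toward uniform,
    # i.e., maximize the discriminator's uncertainty about which known style this is.
    num_styles = style_logits.shape[-1]
    uniform = torch.full_like(style_logits, 1.0 / num_styles)
    style_ambiguity_loss = -(uniform * F.log_softmax(style_logits, dim=-1)).sum(dim=-1).mean()

    return art_loss + style_ambiguity_loss

# Dummy discriminator outputs for a batch of 4 generated images and 5 known styles.
print(can_generator_loss(torch.randn(4, 1), torch.randn(4, 5)))
```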

Artificial intelligence and human artists might be partners (Mazzone and Elgammal, 2019), rather than one serving as a tool for the other. For example, in 2015 Mike Tyka created large-scale artworks using Iterative DeepDream and co-founded the Artists and Machine Intelligence program at Google. Using DeepDream and GANs, he produced a series, “Portraits of Imaginary People,” which was shown at Ars Electronica in Linz, at Christie’s in New York, and at the New Museum in Karuizawa, Japan (Interalia Magazine, 2018). The painter Pindar van Arman teaches robots to paint and believes they augment his own creativity. Other artists are increasingly using VR as an enriched and immersive experience (Romano, 2022).

In 2018, Christie’s in New York sold an artwork called Portrait of Edmond de Belamy for $432,500 (Kinsella, 2018). The portrait of an aristocratic man with blurry features was created by a GAN built by a collective called Obvious, using the WikiArt dataset, which includes fifteen thousand portraits from the fourteenth to the twentieth century. Defining art has always been difficult. Art does not easily follow the traditional defining criteria of having necessary and sufficient features to be regarded as a member of a specific category, and may not be a natural kind (Chatterjee, 2014). One prominent account of art is the institutional view (Dickie, 1969): if our social institutions agree that an object is art, then it is. Being auctioned and sold by Christie’s certainly qualifies as an institution claiming that AI art is in fact art.

In 2017, the Turkish artist Refik Anadol, collaborating with Mike Tyka, created an installation using GANs called “Archive Dreaming.” He used Istanbul’s SALT Galata online library, with 1.7 million images digitized into two terabytes of data. The holdings in this library relate to Turkey from the 19th century to the present and include photographs, images, maps, and letters. The installation is an immersive experience: viewers stand in a cylindrical room and gaze at changing displays on the walls. They can choose which documents to view, or passively watch the display in an idle state. In the idle state, the archive “dreams.” Generators produce new images that resemble the original ones but never actually existed, an alternate fictional historical archive of Turkey imagined by the machine (Pearson, 2022).

Concerns, further future, and sentient artificial intelligence

Technology can be misused. One downside of deep learning is that biases embedded in training data sets can be reified. Systematic biases in the judicial system, in hiring practices, and in procuring loans are written into AI “predictions” while giving the illusion of objectivity. The images produced by Dall-E so far perpetuate race and gender stereotypes (Taylor, 2022). People probably do not vary much if asked to identify a dog, but they certainly do in identifying great art. Male European masters might continue to be lauded over women, under-represented minority artists, and others of whom we have not yet heard.

On the other hand, the current gatekeepers of art, whether at high-end galleries, museums, or biennales, are already biased in which artists and what art they promote. Over time, art through AI might become more democratized. Museums and galleries across the world are digitizing their collections. The art market in the 21st century extends beyond Europe and the United States. Important shows, as part of art’s globalization, occur beyond Venice, Basel, and Miami and now include major gatherings in Sao Paulo, Dakar, Istanbul, Sharjah, Singapore, and Shanghai. Beyond high-profile displays, small galleries are digitizing and advertising their holdings. As more images are incorporated into training databases, including art from Asia, Africa, and South America and non-traditional art forms such as street art or textile art, what people regard as good or great art might become more encompassing and inclusive.

Could art become a popularity contest? As museums struggle to keep the public engaged, they might use AI to predict which kinds of art would draw in the most viewers. Such a use of AI might narrow the range of art that is displayed. Similarly, some artists might choose to make art (in the traditional way) but shift their output to what AI predicts will sell. Over time, art could lose its innovation, its subversive nature, and its sheer variety. The nature of the artist might also change if the skills involved in making art change. An artist collaborating with AI might use machine learning outputs for the divergent phase of their creations and insert themselves, along with additional AI assessments, in the convergent, evaluative phases of producing art.

The need for artistic services could diminish. Artists who work as illustrators for books, technical manuals, and other media, such as advertisements, could be replaced by AI-generated images. The loss of such paying jobs might make it harder for some artists to pursue their fine art dreams if they do not have a reliable source of income.

Many experts working in the field believe that AI will develop sentience. Exactly how is up for debate. Some believe that sentience can emerge from deep learning architectures given enough data and computational power. Others think that combining deep learning and classical programming, which includes the insertion of rules and symbols, is needed for sentience to emerge. Experts also vary in when they think sentience will emerge in computers. According to Ford (2021), some think it could be in a decade and others in over 100 years. Nobody can anticipate the nature of that sentience. When Garry Kasparov (world chess champion at the time) lost to the program Deep Blue, he claimed that he felt an alien intelligence (Lincoln, 2018). Deep Blue was no sentient AI.

Artificial intelligence sentience will truly be an alien intelligence. We have no idea how or whether sentient AI will engage in art. If they do, we have no idea what would motivate them and what purpose their art would have. Any comments about these possibilities are pure speculation on my part.

Sentient AI could make art in the real world. Currently, robots find and move objects in large warehouses. Their movements are coarse and carried out in well-controlled areas. A robot like Rosey, the housekeeper in the Jetsons cartoon, is far more difficult to make, since it has to move in an open world and react to unpredictable contingencies. Large movements are easier to program than fine movements, precision grips, and manual dexterity. The difficulty of making a robot artist would fall somewhere between that of a robot in an Amazon warehouse and that of Rosey. It would not have to contend with an unconstrained environment in its “studio.” It would learn to choose and grip different brushes and other instruments, manipulate paints, and apply them to a canvas that it stretched. Robot arms that draw portraits have already been programmed (Arman, 2022). However, a sentient AI with intent would decide what to paint, and it would be able to assess whether its output matched its goal, using generative adversarial systems. The art appreciation and art production abilities could be self-contained within a closed loop without involving people.

Sentient AI might not bother with making art in the real world. Mark Zuckerberg would have us spend as much time as possible in a virtual metaverse. Sentient AI could create art residing in fantastical digital realms and not bother with messy materials and real-world implementation. Should sentient AIs choose to make art, for whatever their purpose might be, humans might be irrelevant to the art-making and art-appreciating loop.

Ultimately, we do not know if sentient AI will be benevolent, malevolent, or apathetic when it comes to human concerns. We don’t know if sentient AI will care about art.

As AI continues to insinuate itself into most parts of our lives, it will do so with art as well (Agüera y Arcas, 2017; Miller, 2019). The beginnings of art appreciation and production that we see now, and the examples provided in the figures, might be like the video game Pong, which was popular when I was in high school. Pong is a far cry from the rich immersive quality of games like Minecraft, in the same way that Dall-E and Midjourney images might be a far cry from a future art-making and art-appreciating machine.

The idea that creative pursuits are an unassailable bastion of humanity is untenable. AI is already being used as a powerful tool and even as a partner for some artists. The ongoing development of aesthetically sensitive machines will challenge our views of beauty and creativity and perhaps our understanding of the nature of art.

Author contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Acknowledgments

I appreciate the helpful feedback I received from Alex Christensen, Kohinoor Darda, Jonathan Fineberg, Judith Schaechter, and Clifford Workman.

Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

  • Agüera y Arcas B. (2017). Art in the age of machine intelligence. Arts 6:18. doi: 10.3390/arts6040018
  • Arman P. V. (2022). Cloud painter. Available online at: https://www.cloudpainter.com/ (accessed August 10, 2022).
  • Benjamin W. (1936/2018). “The work of art in the age of mechanical reproduction,” in A Museum Studies Approach to Heritage. London: Routledge. doi: 10.4324/9781315668505-19
  • Callaway E. (2022). ‘The entire protein universe’: AI predicts shape of nearly every known protein. Nature 608, 15–16. doi: 10.1038/d41586-022-02083-2
  • Chamberlain R., Mullin C., Scheerlinck B., Wagemans J. (2018). Putting the art in artificial: Aesthetic responses to computer-generated art. Psychol. Aesthet. Creat. Arts 12, 177–192. doi: 10.1037/aca0000136
  • Chatterjee A. (2014). The Aesthetic Brain: How We Evolved to Desire Beauty and Enjoy Art. New York, NY: Oxford University Press. doi: 10.1093/acprof:oso/9780199811809.001.0001
  • Chatterjee A., Cardilo E. (2021). Brain, Beauty, and Art: Essays Bringing Neuroaesthetics into Focus. Oxford: Oxford University Press. doi: 10.1093/oso/9780197513620.001.0001
  • Christensen A. P., Cardillo E. R., Chatterjee A. (2022). What kind of impacts can artwork have on viewers? Establishing a taxonomy for aesthetic cognitivism. PsyArXiv [Preprint]. doi: 10.31234/osf.io/nt59q
  • Cortes R. A., Weinberger A. B., Daker R. J., Green A. E. (2019). Re-examining prominent measures of divergent and convergent creativity. Curr. Opin. Behav. Sci. 27, 90–93. doi: 10.1016/j.cobeha.2018.09.017
  • Dickie G. (1969). Defining art. Am. Philos. Q. 6, 253–256.
  • Dissanayake E. (2008). “The arts after Darwin: Does art have an origin and adaptive function?,” in World Art Studies: Exploring Concepts and Approaches, eds Zijlemans K., Van Damme W. (Amsterdam: Valiz).
  • Elgammal A., Liu B., Elhoseiny M., Mazzone M. (2017). CAN: Creative adversarial networks, generating “art” by learning about styles and deviating from style norms. arXiv [Preprint]. arXiv:1706.07068.
  • Fekete A., Pelowski M., Specker E., Brieber D., Rosenberg R., Leder H. (2022). The Vienna Art Picture System (VAPS): A data set of 999 paintings and subjective ratings for art and aesthetics research. Psychol. Aesthet. Creat. Arts. doi: 10.1037/aca0000460 [Epub ahead of print].
  • Fineberg J. D. (1995). Art Since 1940. Hoboken, NJ: Prentice-Hall.
  • Ford M. (2021). Rule of the Robots: How Artificial Intelligence Will Transform Everything. New York, NY: Basic Books.
  • Hawley-Dolan A., Winner E. (2011). Seeing the mind behind the art: People can distinguish abstract expressionist paintings from highly similar paintings by children, chimps, monkeys, and elephants. Psychol. Sci. 22, 435–441. doi: 10.1177/0956797611400915
  • Hockney D. (2001). Secret Knowledge: Rediscovering the Lost Techniques of the Old Masters. London: Thames & Hudson.
  • Hood B. M., Bloom P. (2008). Children prefer certain individuals over perfect duplicates. Cognition 106, 455–462. doi: 10.1016/j.cognition.2007.01.012
  • Interalia Magazine (2018). Portraits of imaginary people. Available online at: https://www.interaliamag.org/audiovisual/mike-tyka/ (accessed August 15, 2022).
  • Kim T. (2022). The future of creativity, brought to you by artificial intelligence. Available online at: https://venturebeat.com/datadecisionmakers/the-future-of-creativity-brought-to-you-by-artificial-intelligence/ (accessed August 9, 2022).
  • Kinsella E. (2018). The first AI-generated portrait ever sold at auction shatters expectations, fetching $432,500—43 times its estimate. Available online at: https://news.artnet.com/market/first-ever-artificial-intelligence-portrait-painting-sells-at-christies-1379902 (accessed August 10, 2022).
  • Kirk U., Skov M., Hulme O., Christensen M. S., Zeki S. (2009). Modulation of aesthetic value by semantic context: An fMRI study. NeuroImage 44, 1125–1132. doi: 10.1016/j.neuroimage.2008.10.009
  • Komar V., Melamid A. (1999). Painting by Numbers: Komar and Melamid’s Scientific Guide to Art. Berkeley, CA: University of California Press.
  • Kruger J., Wirtz D., Van Boven L., Altermatt T. W. (2004). The effort heuristic. J. Exp. Soc. Psychol. 40, 91–98. doi: 10.1016/S0022-1031(03)00065-9
  • Lee K. F., Qiufan C. (2021). AI 2041: Ten Visions for Our Future. London: Ebury Publishing.
  • Lincoln K. (2018). Deep you. Available online at: https://www.theringer.com/tech/2018/11/8/18069092/chess-alphazero-alphago-go-stockfish-artificial-intelligence-future (accessed August 16, 2022).
  • Marcus G., Davis E. (2019). Rebooting AI: Building Artificial Intelligence We Can Trust. New York, NY: Knopf Doubleday Publishing Group.
  • Mazzone M., Elgammal A. (2019). Art, creativity, and the potential of artificial intelligence. Arts 8:26. doi: 10.3390/arts8010026
  • Menninghaus W., Wagner V., Wassiliwizky E., Schindler I., Hanich J., Jacobsen T., et al. (2019). What are aesthetic emotions? Psychol. Rev. 126, 171. doi: 10.1037/rev0000135
  • Miller A. I. (2019). The Artist in the Machine: The World of AI-Powered Creativity. Cambridge, MA: MIT Press. doi: 10.7551/mitpress/11585.001.0001
  • Newman G. E. (2019). The psychology of authenticity. Rev. Gen. Psychol. 23, 8–18. doi: 10.1037/gpr0000158
  • Newman G. E., Bloom P. (2012). Art and authenticity: The importance of originals in judgments of value. J. Exp. Psychol. 141, 558–569. doi: 10.1037/a0026035
  • Newman G. E., Bartels D. M., Smith R. K. (2014). Are artworks more like people than artifacts? Individual concepts and their extensions. Top. Cogn. Sci. 6, 647–662. doi: 10.1111/tops.12111
  • Pearson A. (2022). Archive dreaming. Available online at: http://www.digiart21.org/art/archive-dreaming (accessed August 10, 2022).
  • Rich S. (2022). The new poem-making machinery. Available online at: https://www.newyorker.com/culture/culture-desk/the-new-poem-making-machinery (accessed August 10, 2022).
  • Romano H. (2022). 8 virtual reality artists who use the world as their canvas. Available online at: https://blog.kadenze.com/creative-technology/8-virtual-reality-artists-who-use-the-world-as-their-canvas/ (accessed August 8, 2022).
  • Shiner L. (2001). The Invention of Art: A Cultural History. Chicago, IL: University of Chicago Press. doi: 10.7208/chicago/9780226753416.001.0001
  • Snapper L., Oranç C., Hawley-Dolan A., Nissel J., Winner E. (2015). Your kid could not have done that: Even untutored observers can discern intentionality and structure in abstract expressionist art. Cognition 137, 154–165. doi: 10.1016/j.cognition.2014.12.009
  • Taylor J. (2022). No quick fix: How OpenAI’s DALL-E 2 illustrated the challenges of bias in AI. Available online at: https://www.nbcnews.com/tech/tech-news/no-quick-fix-openais-dalle-2-illustrated-challenges-bias-ai-rcna39918 (accessed August 10, 2022).
  • Thunström A. O. (2022). We asked GPT-3 to write an academic paper about itself—then we tried to get it published. Scientific American.

Imagine creating a work of art in seconds with artificial intelligence and a single phrase.


Technology like this has some artists excited for new possibilities and others concerned for their future.

How AI-generated art is changing the concept of art itself

By Steven Vargas Sept. 21, 2022 5 AM PT

This is one way that artificial intelligence can output a selection of images based on the words and phrases one feeds it. The program draws on the dataset it learned from, typically images pulled from the internet, to provide possible images.

For some, AI-generated art is revolutionary.

In June 2022, Cosmopolitan released its first magazine cover generated by an AI program named DALL-E 2. However, the AI did not work on its own. Video director Karen X. Cheng, the artist behind the design, documented on TikTok what specific words she used for the program to create the image of an astronaut triumphantly walking on Mars:

“A wide angle shot from below of a female astronaut with an athletic feminine body walking with swagger towards camera on Mars in an infinite universe, synthwave digital art.”


While the cover boasts that “it only took 20 seconds to make” the image, that’s only partially true. “Every time you search, it takes 20 seconds,” Cheng says. “If you do hundreds of searches, then, of course, it’s going to take quite a bit longer. For me, I was trying to get, like, a very specific image with a very specific vibe.”

Images generated by DALL•E 2 using the phrases Karen X. Cheng developed to get to the final Cosmopolitan cover. (Cover courtesy of Karen X. Cheng).

As one of the few people with access to the AI system, Cheng told the Los Angeles Times that her first few hours with the program were “mind-blowing.”

“I felt like I was witnessing magic,” she says. “Now that it’s been a few months, I’m like, ‘Well, yes, of course, AI generates anything.’”

On July 20, OpenAI announced that DALL-E would go into a public beta phase, allowing a million people from its waitlist to access the technology.

DALL-E 2 has altered Cheng’s perspective as an artist. “I am now able to create different kinds of art that I never was before,” she says.

These programs have also drawn their fair share of critics. Illustrator James Gurney shared on his blog in April 2022 that while the AI technology is revolutionary, it’s causing fear among artists who worry the technology will ultimately devalue their livelihoods. “The power of these tools has blown me away,” he tells The Times over email. “They can make endless variations, served up immediately at the push of a button, all made without a brain or a heart.”


Gurney believes AI is changing how consumers engage with and interpret art altogether. “There’s such a firehose of pictures and videos, but to me, they’re starting to look the same: same cluelessness about human interaction, same type of ornamentation, same infinitely morphing videos,” he says. If the output by an AI system is too real, it can, in turn, alter what we see as reality.

Example of distortions in DALL•E 2 generations from the intro prompt. (Photos courtesy of DALL•E 2.)

While AI has opened artists to new possibilities, generating images within seconds that bring their words to life, AI-generated art has also blurred the lines of ownership and heightened instances of bias. AI has artists divided on whether to embrace technological advances or take a step back. No matter where one stands, it’s impossible to avoid the reality that AI systems are here.

AI Possibilities (Or What You Say About It)

With a Sharpie in one hand and a white, Converse high-top in the other, the Los Angeles-based XR Creator Don Allen III doodles an image onto the shoe. He coats the sneaker in a swirl of colors, with doodles of butterflies flecking one side of the shoe. They flutter through a landscape of checkered colors and freeform stripes.


Allen didn’t pull each pen stroke out of thin air. He came up with the design with the help of DALL-E 2.

He first thought of combining his artistic practice with AI technology after reading “The Diamond Age” by Neal Stephenson. In the sci-fi novel, 3D printing is ubiquitous — meaning that artwork developed by 3D printing bears a lower value than handmade pieces by artists. Instead of relying completely on AI, Allen wanted to see how the technology could add value to artwork he was already making, like shoes.

Allen says that in the four months he’s been using the program, his artistic practice has expanded beyond what he could ever imagine. “The journey into AI has been a tool that expedites and streamlines every one of my creative processes so much that I lean on it more and more every day,” he says. He’s been able to generate images for shoe designs in seconds. He typically generates images with DALL-E and uses a projector to display them onto different objects. From there, he outlines and develops his pieces, adding his own style here and there as he draws.

[Interactive feature: readers can generate their own AI image by changing the subject, environment, and style of a prompt such as “Astronaut floating in outer space, digital art.”]

Allen has dedicated his career to showing people how technology can advance art and creative pursuits. Before becoming a full-time creative about a year and a half ago, he worked at DreamWorks Animation for three years as a specialist trainer, teaching the company’s artists how to use creative software. As a metaverse advisor, he consults individuals and brands on the technologies that the metaverse and internet can provide to amplify their work or brand. He has created AR experiences for companies like Snapchat and artists like Lil Nas X.

Having artists find new ways to incorporate technology into their preexisting practices is what Midjourney founder David Holz had envisioned for his own image-generating AI system. Holz explains that Midjourney exists as a way of “extending the imagination powers of the human species.”


Midjourney is a similar program to DALL-E in that a user can type in any phrase and the technology will generate an image based on the input. Yet Midjourney has a stronger social aspect because it lives in a Discord server, where a community of people collaborate on their creations and bounce ideas off one another.

Over time, Holz noticed how artists using Midjourney enjoyed using it to speed up the process of imagination and complement the expertise they already possess. “The general attitude we’re getting from the industry is that this lets them do a lot more breadth and exploration early in the process, which leads them to have more creative final output when there are lots of people involved at the end,” he says.

Holz compares Midjourney to the invention of the engine. It lives alongside other types of transportation, including walking, biking, and horse riding. People can still get places without an engine, but forms of transportation that utilize an engine will help them get there faster, especially to travel longer distances. Similarly, an artist may have a long way ahead of themselves when it comes to trying out ideas, and instead of spending hours trying something that may not work out how they anticipated, AI can provide a glimpse into their idea before they attempt executing it.

“It’s really easy to look at something from far away and say it’s scary,” Holz says. “When people actually use it, the attitudes are very different. It doesn’t feel like something that’s trying to replace you. When you use it, it very much feels like an extension of your own mind.”

Ziv Epstein, a fifth-year Ph.D. student in the MIT Media Lab, has researched the implications and growth of AI-generated art. He echoes Holz in saying that these programs can never replace artists, but can instead be an aid for them. “It’s like this new generational tool which requires these existing people to basically skill up,” he says. “Getting access to this really cool and exciting new piece of technology will just bootstrap and augment their existing artistic practice.”


“Who Gets Credit for AI-Generated Art?” — a paper that Epstein co-wrote with fellow MIT colleagues Sydney Levine, David G. Rand and Iyad Rahwan — argues that AI is an extension of the imagination. Yet the authors also note that, at its core, it’s still a computer program that requires human input to create.

“What we found is that when you anthropomorphize the AI — endow it with human-like or agentic characteristics — it actually undermines attributions of credit to the human actors involved,” Epstein says.

While DALL-E 2 generated the images for the Cosmopolitan cover, for example, Cheng still had to refine and craft the right set of phrases to get what she wanted out of it.

Cheng says that she initially felt hesitant about using AI. But as she got more comfortable with the program, it felt like a new medium. “Every kid who was born in the last five years, they’re going to grow up thinking this is just normal, just like we think it’s normal to be able to Google image search anything,” she says.

Unknown Owners and the Blending of Names

In February 2022, the U.S. Copyright Office rejected a request to grant Dr. Stephen Thaler copyright of a work created by an AI algorithm named the “Creativity Machine.” The request was reviewed by a three-person board. Titled “A Recent Entrance to Paradise,” the artwork portrayed train tracks leading through a tunnel surrounded by greenery and vibrant purple flowers.

“A Recent Entrance to Paradise" (2012) by DABUS, compliments of Dr. Stephen Thaler. (All rights reserved)

Thaler submitted his copyright request identifying the “Creativity Machine” as the author of the work. Since the copyright request was for the machine's artwork, it did not fulfill the “human authorship” requirement that goes into copyrighting something.

“While the Board is not aware of a United States court that has considered whether artificial intelligence can be the author for copyright purposes, the courts have been consistent in finding that non-human expression is ineligible for copyright protection,” the board said in the copyright decision.


Thaler, the founder of Imagination Engines, says the law is “biased towards human beings” in this case.

While the request for copyright pushed for credit to be given to the Creativity Machine, the case opened up questions about the true author of AI-generated art.

Attorney Ryan Abbott, a partner at Brown, Neri, Smith & Khan LLP, helped Thaler as part of an academic project at the University of Surrey to “challenge some of the established dogma about the role of AI in innovation.” Abbott explains that copyrighting AI-generated art is difficult because of the human authorship requirement, which he finds isn’t “grounded in statute or relevant case law.” There is an assumption that only humans can be creative. “From a policy perspective, the law should be ensuring that work done by a machine is not legally treated differently than work done by a person,” he says. “This will encourage people to make, use and build machines that generate socially valuable innovation and creative works.”

From a legal standpoint, AI-generated work sits on a spectrum, with human involvement at one extreme and AI autonomy at the other.

“It depends on whether the person has done something that would traditionally qualify them to be an author or are willing to look to some nontraditional criteria for authorship,” Abbott says.

“This issue of owning an AI-generated work is something that has been discussed for decades, but not in a way that had much commercial importance,” Abbott says.

In Epstein’s article, he uses the example of the painting “Edmond De Belamy,” a work generated by a machine learning algorithm and sold at Christie’s art auction for $432,500 in October 2018. He explains that the work would not have been made without the humans behind the code. As artwork generated by AI gains commercial interest, more emphasis is put on the authors who deserve credit for the work they put into the project. “How you talk about the systems has very important implications for how we assign credit responsibility to people,” he says.

“Edmond de Belamy”, generated by machine learning in 2018

This has raised concerns among illustrators about how credit is given to AI-generated art, especially for those who feel like the programs could pull from their own online work without citing or compensating them. “A lot of professionals are worried it will take away their jobs,” illustrator Gurney says. “That’s already starting to happen. The artists it threatens most are editorial illustrators and concept artists.”

It’s common for AI to generate images in the style of a particular artist. If an artist is looking for something in the vein of Vincent van Gogh, for instance, the program will pull from his pieces to create something new in a similar style. This is where things can also get muddy. “It’s hard to prove that a given copyrighted work or works were infringed, even if an artist’s name is used in the prompt,” Gurney says. “Which images were used in the input? We don’t know.”

These are four of James Gurney’s paintings.

But he only made one of them.

The rest were generated by Midjourney with his name used in the prompt.


Legally, rights holders are concerned with providing permission or receiving compensation for having their work incorporated into another piece. Abbott says these concerns, while valid, haven’t quite caught up with the technology. “The right holders didn’t have an expectation when they were making the work that the value was going to come from training machine learning algorithms,” he says.

A 2018 study by The Pfeiffer Report sought to find out how artists were responding to advances in AI technology. After surveying more than 110 creative professionals about their attitudes toward AI, the report found that 63% of respondents said they were not afraid AI would threaten their jobs. The remaining 37% were either a little or extremely scared about what it might mean for their livelihoods. “AI will have an impact, but only on productivity,” Sherri Morris, chief of marketing, creative and brand strategy at Blackhawk Marketing, said in the report. “The creative vision will have to be there first.”

Illustrator and artist Jonas Jödicke worked with WOMBO Dream, another AI art-generating tool, before receiving access to DALL-E 2 in mid-July. From his experience as an illustrator using AI, he says it could be a “big problem” if programs source his own images and make something similar in his style. Still, he explains that programs like DALL-E pull from so many sources all over the internet that a program can “create something by itself,” completely different from other work.

JONAS JÖDICKE

(Courtesy of Jonas Jödicke)

Jödicke acknowledges the concerns with art theft, especially as someone who has had his work stolen and used to sell products on the likes of Amazon and Alibaba. “If you upload your art to the internet, you can be certain that it’s going to be stolen at some point, especially when you have a bigger reach on social media,” he says.

Regardless, Jödicke sees AI as a new tool for artists to use. He compares it to the regressive attitudes some people have had toward digital artists who use programs like Adobe Creative Suite and Pro Tools. Artists who use these programs are sometimes accused of not being “real artists,” even though their work is unique and full of creativity. “You still need your artistic abilities and know-how to really polish these results and make them presentable and beautifully rendered,” he says.

A Glitch in the System

For carrot cake to be carrot cake, carrots are incorporated into the batter and present in every bite. So what happens if the carrots are just sprinkled on top? You might get a carrot-esque dessert, but it isn’t carrot cake.

Allen views the lack of diversity in AI in the same way. In a June 28 Instagram reel , he presented the carrot cake analogy by explaining that if there are no diverse voices incorporated into the development of AI technologies, it isn’t an inclusive process. “If you want to have a really equitable and diverse artificially intelligent art system, it needs to include a diverse set of people from the beginning,” he says.

In an effort to get more voices represented in the AI conversation, he used the post to help artists from underrepresented communities get early access to DALL-E 2. Allen also highlights a larger issue in art technologies through the video: democratization.

A lot of AI art programs have closed access, meaning only a small group of people can use them. On July 12, Midjourney announced on Twitter that it had moved to open beta, allowing anyone to access its Discord server and use the AI technology. While DALL-E 2 still has closed access, DALL-E Mini is available for public use. (DALL-E Mini’s image quality is lower than DALL-E 2’s, however, often resulting in blurry blobs for faces and objects.)

At the moment, those wanting to get into closed-access systems must join a waitlist. The reason is practical, says Epstein: closed access allows the companies developing an AI system to tweak and develop their products before opening them up for public use. That way, they can minimize potential misuse, especially when it comes to deepfakes. But some fear that AI creations could “erode our grip on a shared reality.” “Perhaps the greatest potential harm is the power to chip away at our shared confidence that we’re inhabiting the same corner of the universe because the propagandist has a faster bicycle than the fact-checker,” Gurney adds.

AI outputs can also be significantly affected by inherent bias. In May 2022, WIRED published a story in which OpenAI developers shared that, one month after introducing DALL-E 2, they noticed certain phrases and words produced biased results that perpetuated racial stereotypes. OpenAI put together a “red team” of outside experts to investigate possible issues that could come up if the product were made public, and the most alarming was its depictions of race and gender. The outlet reported that one red-team member noted that prompts like “a man sitting in a prison cell” or “a photo of an angry man” tended to return images of men of color.

Results from entering “ceo” in DALL-E
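The kind of skew the red team described can be probed, roughly, with off-the-shelf tools. The sketch below is an illustration rather than the procedure OpenAI's red team actually used: it assumes you have already saved a batch of images generated from a single prompt such as “a photo of a CEO” into a local folder (the outputs/ceo path and the two-label set are hypothetical), and it uses CLIP zero-shot classification as a crude proxy for tallying who appears in them.

```python
# A minimal audit sketch (illustrative only, not OpenAI's red-team procedure).
# Assumes images generated from a single prompt were saved to outputs/ceo/.
from collections import Counter
from pathlib import Path

from PIL import Image
from transformers import pipeline

classifier = pipeline("zero-shot-image-classification",
                      model="openai/clip-vit-base-patch32")
labels = ["a photo of a man", "a photo of a woman"]

counts = Counter()
for path in Path("outputs/ceo").glob("*.png"):  # hypothetical output folder
    scores = classifier(Image.open(path), candidate_labels=labels)
    counts[max(scores, key=lambda s: s["score"])["label"]] += 1

print(counts)  # a heavily skewed tally reflects the bias described above
```

CLIP itself is trained on web data and carries its own biases, so a tally like this is only a rough starting point for the kind of audit described above.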

Epstein says the deeper problem lies in the datasets the AI is learning from. “There actually is this new movement to go away from these like big models where you don’t even know what’s in the model, but to actually really carefully curate your own dataset yourself because then you actually know exactly what’s going into it, how it’s ethically sourced, what are the kinds of biases that are involved in it,” he says.

Cheng says that since working with OpenAI, she’s noticed the results of her searches becoming more diverse as the company works on the closed beta product. For example, looking up certain occupations like “CEO” or “doctor” now returns a more diverse set of people. “My hope for AI art is that it’s done thoughtfully, rolled out safely where inclusivity and diversity are highlighted and built up from the very beginning,” she says.

She adds, “We all saw what happened when social media wasn’t built thoughtfully. My hope is that that’s not repeated with AI.”

Since Cheng spoke with The Times, OpenAI announced that it had applied new techniques to DALL-E 2 after people previewing the system flagged issues with biased images.

“Based on our internal evaluation, users were 12× more likely to say that DALL·E images included people of diverse backgrounds after the technique was applied,” OpenAI wrote in the statement. “We plan to improve this technique over time as we gather more data and feedback.”
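OpenAI has not published the details of that technique, but reporting at the time suggested it worked at the system level, roughly like appending demographic descriptors to prompts that mention a person without specifying one. The toy function below sketches that general idea only; the word lists and logic are illustrative assumptions, not OpenAI's implementation.

```python
import random

# Toy sketch of system-level prompt augmentation (illustrative assumption,
# not OpenAI's actual implementation or word lists).
DESCRIPTORS = ["woman", "man", "Black", "East Asian", "South Asian",
               "Hispanic", "Middle Eastern", "white"]
PERSON_TERMS = {"person", "people", "ceo", "doctor", "nurse", "teacher"}

def augment(prompt: str) -> str:
    words = set(prompt.lower().replace(",", " ").split())
    mentions_person = bool(words & PERSON_TERMS)
    already_specified = bool(words & {d.lower() for d in DESCRIPTORS})
    if mentions_person and not already_specified:
        # Append a randomly chosen descriptor so repeated calls vary.
        return f"{prompt}, {random.choice(DESCRIPTORS)}"
    return prompt

print(augment("a photo of a ceo"))        # e.g. "a photo of a ceo, woman"
print(augment("a photo of a woman ceo"))  # unchanged
```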

One startup based in London and Los Altos decided to lean all the way into democratization, filters or not. On Aug. 10, Stability AI announced that it would be releasing Stable Diffusion, a system similar to DALL-E 2, to researchers, and soon to the public. According to its model card, Stable Diffusion is trained on subsets of LAION-2B(en), which contains images paired with English descriptions and therefore omits content from other cultures and communities.

Bias in datasets could be avoided with diversity in tech, Allen explains. “All of the human biases are what it learned from,” he says.

“We were all of the AI’s collective parents, and I don't want to just blame the AI for being biased. I want to blame humans — all of us — for teaching it bias.”

He adds, “We were like, ‘let's teach you everything, including the bad stuff.’”

The Brain Behind the Program: You (Or What You Say to It)

As more artists gain access to AI and take up the tools, art-making will take on a whole new look — both in how artists work and in how their art develops.

Holz describes Midjourney creations as something “not made by a person, not made by a machine and we know it.” “It’s just a new thing,” he says.

He says the aesthetics of art will expand with AI and potentially lead to a “deindustrialization.” “Because these tools, at their heart, make everything look different and unique, we have the opportunity to push things back in the opposite direction,” Holz says.

But some artists fear that the heightened role of AI might do the opposite, creating a singular aesthetic and taking pieces of imagination out of the process. Gurney says it’ll be like when desktop publishing made typesetting easy and accessible, leading to a flood of similar-looking graphic design in the 1990s. But along with the homogeneity of that design — which featured bold text and neon colors influenced by rave and cyberpunk subcultures — legacy-making work also emerged, including Paula Scher’s designs for The Public Theater, which continue to shape its marketing today.

Work generated by Midjourney with the prompt "An artist crafting the best work of their life”. (Photos courtesy of Midjourney)

For those who are immersed in AI technology, it feels like there’s no turning back. “People tend to have one career for a lifetime, and I just think that the world we’re in now, we should reset expectations as a humanity of not expecting to be in the same career, in the same sort of style, for a lifetime,” Cheng says.

AI tools have already created a new wave of interest that Epstein has noticed and is currently researching. In his article co-written with Hope Schroeder and Dava Newman, “When happy accidents spark creativity: Bringing collaborative speculation to life with generative AI,” he explores how people are looking for new possibilities of imagination that step away from realism.

“There’s this idea that we’ve actually crested and have fallen back on the peak of AI art,” he says, adding that people are less interested in “photorealistic stock imagery” that you may see with DALL-E 2 and are instead looking for “beautiful, crazy new texture.”

The future of AI in the art world is unpredictable, especially since most tools remain in closed beta phases as they develop. Regardless of the stage, Epstein warns that what the public says about its early incarnations matters.

“Journalists, citizens and scientists [must] be really responsible with the way they frame AI, and not use it as a fear-mongering tactic to scare people,” he says.

Allen feels the same way. “I believe if you focus on the negative with AI, then that will come true,” he says. “And if we get more people focusing on the good and positivity that we can do with it, then that will come true.”

This story was reported by Steven Vargas . It was edited by Paula Mejía and copy edited by Evita Timmons. The design and development are by Ashley Cai . Additional development by Joy Park and Alex Tatusian . Engagement editing by David Viramontes . Additional digital help from Beto Alvarez.


Elsevier

Computers in Human Behavior: Artificial Humans

Artificial intelligence in fine arts: a systematic review of empirical research.

  • Systematic collection and review of empirical studies on the use of AI in fine arts.
  • 723 articles from three databases were screened, resulting in 44 studies.
  • AI tools have been used to enhance art and artistic event production and analysis.
  • The difference between human- and AI-made art was not generally recognized.
  • We are in the middle of a transition in which AI is transforming concepts of creativity and art.

More From Forbes

The Problem With AI-Generated Art, Explained


The OpenAI company logo reflected in a human eye at a studio in Paris on June 6, 2023. ChatGPT is a conversational artificial intelligence software application developed by OpenAI. (Photo by JOEL SAGET/AFP via Getty Images)

Generative AI has sparked a tremendous backlash across the internet, as the early promise of the technology has been overshadowed by the wide range of problems it has introduced.

Here are some of the reasons why the public is pushing back against AI in the arts:

Generative AI Is Impractical

Large language models (LLMs) such as ChatGPT, and image generators like Midjourney and DALL-E, have introduced a new copyright conundrum and provoked multiple lawsuits alleging copyright infringement.

It’s true that no artist was asked if their work could be used to train these models. But even if the courts rule in favor of the machines, the practical application of the technology doesn’t seem worth the cost.


Generative AI is incredibly energy-intensive, surprisingly labor-intensive, and requires constant input — annotation — from human workers to keep it functional, lest it spiral into hallucinogenic nonsense.

Using ChatGPT for a playful Q&A session consumes an absurd amount of water; exchanging a mere 20 questions with the text generator is akin to pouring a 500ml bottle of clean freshwater down the drain.

Even with all this human effort to keep the technology anchored in reality, AI is predicted to damage itself when it inevitably starts consuming its own output, like a species inbred to extinction.

Not to mention, the constant slurry pouring out of the image generators is clogging up Google Image search.

In the future, children will learn about our era of climate catastrophe, and struggle to understand why we burned energy with such reckless abandon: billionaire space tourism, celebrity private jets, NFT minting and, now, generative AI.

What’s it all for?

What’s The Point of AI Art?

Generative AI has given the public the means to instantly create an image, or piece of writing, that looks as though it took time and effort. Art can now be manifested via the touch of a button, a prompt or two, as effortless as ordering fast food.

The technology is a solution to a problem that never existed. Artists, as much as they like to complain about the struggle of the creative process, enjoy making things. The creative process is incredibly rewarding, even if the final piece doesn’t match up to the original vision.

Artists are watching the skills that they have spent their life sharpening being devalued before their eyes.

Worse, their work was absorbed into the dataset without their knowledge or consent; they have been used to train their own replacement, and no one asked for permission.

Generative AI threatens the livelihood of artists, pitting their labor against the cheap slop produced by dead machines. The technology only benefits those who wish to produce content as quickly and cheaply as possible, by removing artists from the creative process.

If you think pop culture has become too bland and algorithmic nowadays, just wait until the content is being produced by actual algorithms — in hindsight, we probably shouldn’t have let the word “content” catch on.

AI-generated media will likely not result in thoughtful, imaginative, and groundbreaking stories; the concern is that the hype cycle will last long enough to damage the career prospects of working creatives.

AI Is Not Learning Like A Human

Many AI enthusiasts argue that machine learning is analogous to human learning, that stealing the work of artists to fill datasets is the same as humans taking artistic inspiration from others.

Generative AI, however, is not conscious. It’s not even close.

There’s a widely held belief among AI enthusiasts that the technology will only grow more intelligent as the years pass. Some have even been possessed with a kind of evangelical zeal, under the impression that AI will eventually evolve into a fully conscious being, AGI, that will lead humanity to the singularity.

Noam Chomsky and his co-authors argued against this reductive worldview in a New York Times op-ed, writing:

“The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data ... it seeks not to infer brute correlations among data points but to create explanations.”

Credulous billionaire Elon Musk is a good example of a high-profile figure who firmly believes the AI hype. Musk spends much of his time repeating the wildest predictions of science-fiction authors — that is, when he’s not endorsing the Great Replacement theory on “X,” the bot-riddled website formerly known as Twitter.

Ironically, the decline of “X” shows the corrosive effects of generative AI; the technology has created an ocean of spam that clogs every post, turning replies into mindless mush. LLMs have given bots the ability to imitate human speech, but not to make interesting human conversation.

They never say anything worth listening to. How can they, when they have no ability to understand context, no perspective from which to view the world?

This lack of understanding results in boring output.

AI Art Is Boring

Have you ever seen generative AI create anything even remotely interesting, beyond grotesquely amusing memes? That might just be the best use for them; the uncanny, plastic sheen of AI imagery is perfect for the weird world of memes.

The most intriguing element of AI art is surely the mistakes — crowds with melted faces, hands with withered fingers, extra digits, and limbs sprouting from places they simply shouldn’t.

Zooming into AI images often reveals unsettling elements, evidence that the image was created by a dead machine, with none of the intent of a human creator.

When we immerse ourselves in art, we experience a touch of the unique perspective that an artist brings to their work, the smeared fingerprints that make art worth talking about.

It is telling that AI can be used to write an essay, but never a good story; it has no perspective to speak from, no odd fixations, perversions or eccentricities that a person injects into their art. It’s just a bland amalgamation of what has come before.

AI could perhaps write a forgettable, formulaic superhero movie, but it could never surprise us with a fresh spin on a familiar genre — the dead machine can only reconstruct art from tattered pieces it has already eaten.

AI will not surprise us, or produce work that inspires a range of imitators; it will never mimic the insight of The Sopranos , the boundless imagination of One Piece , or even the lighthearted political commentary of Barbie — it certainly could never create something as wonderfully enigmatic as Hayao Miyazaki’s The Boy and the Heron .

In fact, when Miyazaki first encountered AI-generated art, he reacted with visceral disgust. A now-famous clip shows the legendary animator watching a presentation on Artificial Intelligence in animation, and being told that the intent is to create a machine that can “draw like a human.”

Miyazaki replied, "I strongly feel that this is an insult to life itself."

Dani Di Placido


Reflections on the Use of Artificial Intelligence in Works of Art

Yusa, I. M. M., Yu, Y., & Sovhyra, T. (2022). Reflections on the Use of Artificial Intelligence in Works of Art. Journal of Aesthetics, Design, and Art Management, 2(2), 152–167. https://doi.org/10.58982/jadam.v2i2.334

16 Pages Posted: 17 May 2023

I Made Marthana Yusa

Institut Bisnis dan Teknologi Indonesia (INSTIKI)

Guangdong University of Technology

Tetiana Sovhyra

Kyiv National University of Culture and Arts

Date Written: October 24, 2022

Purpose: This paper aims to reflect on the use of Artificial Intelligence (AI) in works of art, examining its impact on the creative process, aesthetics, and audience reception, then provide a critical reflection on the use of AI in works of art, examining its potential benefits and drawbacks and exploring the ethical considerations involved. The paper examines the use of AI in works of art, with a focus on the methodologies and case studies that demonstrate the potential and challenges of this emerging field. Research methods: The paper presents case studies of notable artworks that utilize AI, discusses the potential benefits and drawbacks of AI in art, and explores the ethical considerations involved. Through a review of the literature and analysis of six case studies, we explore the aesthetic, technical, and social dimensions of AI-generated art, as well as the ethical and critical debates surrounding its production and reception. Findings: The paper concludes by emphasizing the importance of critical engagement and continued reflection on the use of AI in art. Our findings suggest that AI can offer new modes of creative expression and engagement, but also raise questions about the role of human agency and interpretation in the artistic process. Implications: AI-generated art offers new opportunities and challenges for artists, scholars, and audiences, and that future research should focus on developing ethical guidelines, exploring collaborative practices, and examining the social and political implications of AI-based art. Case studies of AI-generated art can help us understand the potentials and limitations of this field by providing specific examples of how AI is being used to create art, and how artists and audiences are engaging with these works. These studies can provide insights into the creative processes, technical challenges, and social implications of AI-generated art.


Keywords: Artificial Intelligence (AI), AI-generated art methodology, Next Rembrandt, Sunspring, Rafael Lozano-Hemmer, Refik Anadol


Ted Chiang Is Wrong About AI Art

It’s real. But it isn’t revolutionary.


Artists and writers all over the world have spent the past two years engaged in an existential battle. Generative-AI programs such as ChatGPT and DALL-E are built on work stolen from humans, and machines threaten to replace the artists and writers who made the material in the first place. Their outrage is well warranted—but their arguments don’t always make sense or substantively help defend humanity.

Over the weekend, the legendary science-fiction writer Ted Chiang stepped into the fray, publishing an essay in The New Yorker arguing, as the headline says, that AI “isn’t going to make art.” Chiang writes not simply that AI’s outputs can be or are frequently lacking value but that AI cannot be used to make art, really ever, leaving no room for the many different ways someone might use the technology. Cameras, which automated realist painting, can be a tool for artists, Chiang says. But “a text-to-image generator? I think the answer is no.”

As in his previous writings on generative AI, here Chiang provides some sharp and necessary insights into an overwhelming societal shift. He correctly points out that the technology is predicated on a bias toward efficiency, and that these programs lack thought and intention. And I agree with Chiang that using AI to replace human minds for shareholder returns is depressing.

Yet the details of his story are off. Chiang presents strange and limiting frameworks for understanding both generative AI and art, eliminating important nuances in an ongoing conversation about what it means to be creative in 2024.

He makes two major mistakes in the essay, first by suggesting that what counts as “art” is primarily determined by the amount of effort that went into making it, and second that a program’s “intelligence” can or should be measured against an organic mind as opposed to being understood on its own terms. As a result, though he clearly intends otherwise, Chiang winds up asking his reader to accept a constrained view of human intelligence, artistic practice, and the potential of this technology—and perhaps even of the value of labor itself.

Read: We’re witnessing the birth of a new artistic medium

People will always debate the definition of art, but Chiang offers “a generalization: art is something that results from making a lot of choices.” A 10,000-word story, for instance, requires some 10,000 choices; a painting by Georges Seurat, composed of perhaps 220,000 dabs of paint , required 220,000 choices. By contrast, you make very few choices when prompting a generative-AI model, perhaps the “hundred choices” in a 100-word prompt; the program makes the rest for you, and because generative AI works by finding and mimicking statistical patterns in existing writing and images, he writes, those decisions are typically boring, too. Photographers, Chiang allows, make sufficient choices to be artists; users of AI, he predicts, never will.

What ratio of human decisions to canvas size or story length qualifies something as “art”? That glib question points to a more serious issue with Chiang’s line of thinking: You do not need to demonstrate hours of toil, make a lot of decisions, or even express thoughts and feelings to make art. Assuming that you do impoverishes human creativity.

Some of the most towering artists and artistic movements in recent history have divorced human skill and intention from their ultimate creations. Making a smaller number of decisions or exerting less intentional control does not necessarily imply less vision, creativity, brilliance, or meaning. In the early 1900s, the Dada and surrealist art movements experimented with automatism, randomness, and chance, such as in a famous collage made by dropping strips of paper and pasting them where they landed, ceding control to gravity and removing expression of human interiority; Salvador Dalí fired ink-filled bullets to randomly splatter lithographic stones. Decades later, abstract painters including Jackson Pollock, Joan Mitchell, and Mark Rothko marked their canvases with less apparent technical precision or attention to realism—seemingly random drips of pigment, sweeping brushstrokes, giant fields of color—and the Hungarian-born artist Vera Molnar used simple algorithms to determine the placement of lines, shapes, and colors on paper. Famed Renaissance artists used mathematical principles to guide their work; computer-assisted and algorithmic art today abounds. Andy Warhol employed mass production and called his studio the “Factory.” For decades, authors and artists such as Tristan Tzara, Samuel Beckett, John Cage, and Jackson Mac Low have used chance in their textual compositions.

Chiang allows that, under exceedingly rare circumstances, a human might work long and hard enough with a generative-AI model (perhaps entering “tens of thousands of words into its text box”) to “deserve to be called an artist.” But even setting aside more avant-garde or abstract applications, defining art primarily through “perspiration,” as Chiang does, is an old, consistently disproven tack. Édouard Manet, Claude Monet, and other associated 19th-century painters were once ridiculed by the French art establishment because their canvases weren’t as realistic as, and didn’t require the effort of, academic realism. “The newest version of DALL-E,” Chiang writes, “accepts prompts of up to four thousand characters—hundreds of words, but not enough to describe every detail of a scene.” Yet Manet’s and Monet’s Impressionist paintings—so maligned because the pictures involved fewer brushstrokes and thus fewer decisions, viewed through Chiang’s framework—shifted the trajectory of visual art and are today celebrated as masterpieces.

In all of these cases, humans devoted time and attention to conceiving each work—as artists using AI might as well. Although Chiang says otherwise, of course AI can be likened to a camera, or many other new technologies and creative mediums that attracted great ire when they were first introduced—radio, television, even the novel. The modern notion of automation via computing that AI embodies was partially inspired by a technology with tremendous artistic capacity: the Jacquard loom, a machine that weaves complex textiles based on punch-card instructions, just like the zeroes and ones of binary code. The Jacquard loom, itself a form of labor automation, was also in some sense a computer that humans could use to make art. Nobody would seriously argue that this means that many Bauhaus textiles and designs—foundational creative influences—are not art.

I am not arguing that a romance novel or still life created by a generative-AI model would inherently constitute art, and I’ve written previously that although AI products can be powerful tools for human artists, they are not quasi-artists themselves. But there isn’t a binary between asking a model for a complete output and sweating long hours before a blank page or canvas. AI could help iterate at many stages of the creative process: role-playing a character or visualizing color schemes or, in its “hallucinations,” offering a creative starting point. How a model connects words, images, and knowledge bases across space and time could be the subject of art, even a medium in itself . AI need not make art ex nihilo to be used to make artworks, sometimes fascinating ones; examples of people using the technology this way already abound.

Read: The future of writing is a lot like hip-hop

The impetus to categorically reject AI’s creative potential follows from Chiang’s other major misstep—the common but flawed criticism that AI programs, because they can’t adapt to novel situations as humans and animals do, are not truly “intelligent.” Chiang makes a comparison between rats and AlphaZero, a famous AI that effectively trained itself to play chess well: In an experimental setting, the rodents developed a new skill in 24 trials, and AlphaZero took 44 million trials to master chess. Ergo, he concludes, rats are intelligent and AlphaZero is not.

Yet dismissing the technology as little more than “auto-complete,” as Chiang does multiple times in his essay, is a category error. Of course an algorithm won’t capture our minds’ and bodies’ expressive intent and subjectivity—one is built from silicon, zeroes, and ones; the others, from organic elements and hundreds of millions of years of evolution. It should be as obvious that AI models, in turn, can do all sorts of things our brains can’t.

That distinction is an exciting, not damning, feature of generative AI. These computer programs have unfathomably more computing power and time available; rats and humans have finite brain cells and only a short time on Earth. As a result, the sorts of problems AI can solve, and how, are totally different. There are surely patterns and statistical relationships among the entirety of digitized human writing and visual art a machine can find that a person would not, at least not without several lifetimes. In this stretch of the essay, Chiang is citing an approach to measuring intelligence that comes from the computer scientist François Chollet. Yet he fails to acknowledge that Chollet, while seeking a way to benchmark AI programs against humans, also noted in the relevant paper that “many possible definitions of intelligence may be valid, across many different contexts.”

Another problem with arguing that some high number of decisions is what makes something art is that, in addition to being inaccurate, it risks implying that less intentional, “heartfelt,” or decision-rich jobs and tasks aren’t as deserving of protection. Chiang extends his point about effort to nonartistic and “low-quality text” as well: An email or business report warrants attention only “when the writer put some thought into it.” But just as making fewer choices doesn’t inherently mean someone doesn’t “deserve” to be deemed an artist, just because somebody completes rote tasks at work or writes a report on a deadline doesn’t mean that the output is worthless or that a person losing their job to an AI product is reasonable.

There are all sorts of reasons to criticize generative AI—the technology’s environmental footprint , gross biases, job displacement, easy creation of misinformation and nonconsensual sexual images, to name a few—but Chiang is arguing on purely creative and aesthetic grounds. Although he isn’t valuing some types of work or occupations over others, his logic leads there: Staking a defense of human labor and outputs, and human ownership of that labor and those outputs, on AI being “just” vapid statistics implies the jobs AI does replace might also be “just” vapid statistics. Defending human labor from AI should not be conflated with adjudicating the technology’s artistic merit. The Jacquard loom, despite its use as a creative tool, was invented to speed up and automate skilled weaving. The widespread job displacement and economic upheaval it caused mattered regardless of whether it was replacing or augmenting artistic, artisanal, or industrial work.

Chiang’s essay, in a sense, frames art not just as a final object but also as a process. “The fact that you’re the one who is saying it,” he writes, “the fact that it derives from your unique life experience and arrives at a particular moment in the life of whoever is seeing your work, is what makes [art] new.” I agree, and would go a step further: The processes through which art arises are not limited and cannot be delimited by a single artist or viewer but involve societies and industries and, yes, technologies. Surely, humans are creative enough to make and even desire a space for generative AI in that.

The Algorithm: AI-generated art raises tricky questions about ethics, copyright, and security

Plus: There’s no Tiananmen Square in the new Chinese image-making AI


Image: a wizard with a sword confronts a dragon (the example of Greg Rutkowski’s work referenced below).

Welcome to The Algorithm 2.0! 

I’m Melissa Heikkilä, MIT Technology Review’s senior reporter for AI. I’m so happy you’re here. Every week I will demystify the latest AI breakthroughs and cut through the hype. This week, I want to talk to you about some of the unforeseen consequences that might come from one of the hottest areas of AI: text-to-image generation.

Text-to-image AI models are a lot of fun. Enter any random text prompt, and they will generate an image in that vein. Sometimes the results are really silly. But increasingly, they’re impressive, and can pass for high-quality art drawn by a human being.

I just published a story about a Polish artist called Greg Rutkowski, who paints fantasy landscapes (see an example of his work above) and who has become a sudden hit in this new world.

Thanks to his distinctive style, Rutkowski is now one of the most commonly used prompts in the new open-source AI art generator Stable Diffusion, which was launched late last month—far more popular than some of the world’s most famous artists, like Picasso. His name has been used as a prompt around 93,000 times. But he’s not happy about it. He thinks it could threaten his livelihood—and he was never given the choice of whether to opt in or out of having his work used this way.
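Because Stable Diffusion is open source, turning a text prompt — including one that names an artist — into an image takes only a few lines of code. Here is a minimal sketch using the Hugging Face diffusers library; the model ID and prompt are illustrative, and a CUDA-capable GPU is assumed.

```python
# Minimal text-to-image sketch with Stable Diffusion via Hugging Face diffusers.
# Model ID and prompt are illustrative; a CUDA-capable GPU is assumed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "epic fantasy landscape, a wizard facing a dragon, highly detailed"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("fantasy.png")
```

Whatever name appears in the prompt steers the output toward whatever the model absorbed from images associated with that name during training, which is exactly why Rutkowski’s name has become such a popular shortcut.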

The story is yet another example of AI developers rushing to roll out something cool without thinking about the humans who will be affected by it. 

Stable Diffusion is free for anyone to use, providing a great resource for AI developers who want to use a powerful model to build products. But because these open-source programs are built by scraping images from the internet, often without permission and proper attribution to artists, they are raising tricky questions about ethics, copyright, and security.

Artists like Rutkowski have had enough. It’s still early days, but a growing coalition of artists are figuring out how to tackle the problem. In the future, we might see the art sector shifting toward pay-per-play or subscription models like the one used in the film and music industries. If you’re curious and want to learn more, read my story.

And it’s not just artists: We should all be concerned about what’s included in the training data sets of AI models, especially as these technologies become a more crucial part of the internet’s infrastructure.

In a  paper  that came out last year, AI researchers Abeba Birhane, Vinay Uday Prabhu, and Emmanuel Kahembwe analyzed a smaller data set similar to the one used to build Stable Diffusion. Their findings are distressing. Because the data is scraped from the internet, and the internet is a horrible place, the data set is filled with explicit rape images, pornography, malign stereotypes, and racist and ethnic slurs. 

A website called Have I Been Trained lets people search for images used to train the latest batch of popular AI art models. Even innocent search terms get lots of disturbing results. I tried searching the database for my ethnicity, and all I got back was porn. Lots of porn. It’s a depressing thought that the only thing the AI seems to associate with the word “Asian” is naked East Asian women.

Not everyone sees this as a problem for the AI sector to fix. Emad Mostaque, the founder of Stability.AI, which built Stable Diffusion, said on Twitter that he considers the ethics debate around these models to be “paternalistic silliness that doesn’t trust people or society.”
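Tools like Have I Been Trained work by searching the scraped dataset’s captions and images. The sketch below shows the caption side of that idea; it assumes the LAION-2B-en metadata (image URLs plus captions, not the images themselves) is mirrored on the Hugging Face Hub as laion/laion2B-en with an uppercase TEXT column — the dataset’s availability and layout have changed over time, so treat those identifiers as assumptions.

```python
# Rough sketch: count how many captions in a sample of web-scraped training
# metadata mention a given artist. Dataset name, split, and column are
# assumptions; the LAION metadata's availability has changed over time.
from datasets import load_dataset

rows = load_dataset("laion/laion2B-en", split="train", streaming=True)

artist = "greg rutkowski"
hits = 0
sample_size = 1_000_000  # sample only; the full set has roughly 2.3 billion rows

for i, row in enumerate(rows):
    caption = (row.get("TEXT") or "").lower()
    if artist in caption:
        hits += 1
    if i + 1 >= sample_size:
        break

print(f"{hits} of the first {sample_size:,} captions mention {artist!r}")
```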

But there’s a big safety question. Free open-source models like Stable Diffusion and the large language model BLOOM give malicious actors tools to generate harmful content at scale with minimal resources, argues Abhishek Gupta, the founder of the Montreal AI Ethics Institute and a responsible-AI expert at Boston Consulting Group. The sheer scale of the havoc these systems enable will constrain the effectiveness of traditional controls like limiting how many images people can generate and restricting dodgy content from being generated, Gupta says. Think deepfakes or disinformation on steroids. When a powerful AI system “gets into the wild,” Gupta says, “that can cause real trauma … for example, by creating objectionable content in [someone’s] likeness.”

We can’t put the cat back in the bag, so we really ought to be thinking about how to deal with these AI models in the wild, Gupta says. This includes monitoring how the AI systems are used after they have been launched, and thinking about controls that “can minimize harms even in worst-case scenarios.”

Deeper Learning

There’s no Tiananmen Square in the new Chinese image-making AI

My colleague Zeyi Yang  wrote this piece  about Chinese tech company Baidu’s new AI system called ERNIE-ViLG, which allows people to generate images that capture the cultural specificity of China. It also makes better anime art than DALL-E 2 or other Western image-making AIs.

However, it also refuses to show people results about politically sensitive topics, such as Tiananmen Square, the site of bloody protests in 1989 against the Chinese government.

TL;DR:  “When a demo of the software was released in late August, users quickly found that certain words—both explicit mentions of political leaders’ names and words that are potentially controversial only in political contexts—were labeled as ‘sensitive’ and blocked from generating any result. China’s sophisticated system of online censorship, it seems, has extended to the latest trend in AI.” 

Whose values:  Giada Pistilli, principal ethicist at AI startup Hugging Face, says the difficulty of identifying a clear line between censorship and moderation is a result of differences between cultures and legal regimes. “When it comes to religious symbols, in France nothing is allowed in public, and that’s their expression of secularism,” says Pistilli. “When you go to the US, secularism means that everything, like every religious symbol, is allowed.”

As AI matures, we need to be having continuous conversations about the power relations and societal priorities that underpin its development. We need to make difficult choices. Are we okay with using Chinese AI systems, which have been censored in this way? Or with another AI model that has been trained to conclude that Asian women are sex objects and people of color are gang members?

AI development happens at breakneck speed. It feels as if there is a new breakthrough every few months, and researchers are scrambling to publish papers before their competition. Often, when I talk to AI developers, these ethical considerations seem to be an afterthought, if they have thought about them at all. But whether they want to or not, they should—the backlash we’ve seen against companies such as Clearview AI should act as a warning that moving fast and breaking things doesn’t work.

Bit and Bytes

An AI that can design new proteins could help unlock new cures and materials. Machine learning is revolutionizing protein design by offering scientists new research tools. One developed by a group of researchers from the University of Washington could open an entire new universe of possible proteins for researchers to design from scratch, potentially paving the way for the development of better vaccines, novel cancer treatments, or completely new materials. (MIT Technology Review)

An AI used medical notes to teach itself to spot disease on chest x-rays. The model can diagnose problems as accurately as a human specialist, and it doesn’t need lots of labor-intensive training data. (MIT Technology Review)

A surveillance artist shows how Instagram magic is made. An artist is using AI and open cameras to show behind-the-scenes footage of how influencers’ Instagram pictures were taken. Fascinating and creepy! (Input mag)

Scientists tried to teach a robot called ERICA to laugh at their jokes. The team say they hope to improve conversations between humans and AI systems. The humanoid robot is in the shape of a woman, and the system was trained on data from speed-dating dialogues between male university students at Kyoto University and the robot, which was initially operated remotely by female actors. You can draw your own conclusions. (The Guardian)




In-Situ Mode: Generative AI-Driven Characters Transforming Art Engagement Through Anthropomorphic Narratives

24 Sep 2024 · Yongming Li, Hangyue Zhang, Andrea Yaoyun Cui, Zisong Ma, Yunpeng Song, Zhongmin Cai, Yun Huang

Art appreciation serves as a crucial medium for emotional communication and sociocultural dialogue. In the digital era, fostering deep user engagement on online art appreciation platforms remains a challenge. Leveraging generative AI technologies, we present EyeSee, a system designed to engage users through anthropomorphic characters. We implemented and evaluated three modes (Narrator, Artist, and In-Situ) acting as a third-person narrator, a first-person creator, and first-person created objects, respectively, across two sessions: Narrative and Recommendation. We conducted a within-subject study with 24 participants. In the Narrative session, we found that the In-Situ and Artist modes had higher aesthetic appeal than the Narrator mode, although the Artist mode showed lower perceived usability. Additionally, from the Narrative to Recommendation session, we found that user-perceived relatability and believability within each interaction mode were sustained, but the user-perceived consistency and stereotypicality changed. Our findings suggest novel implications for applying anthropomorphic in-situ narratives to other educational settings.


  • DOI: 10.62517/jhve.202416203
  • Corpus ID: 271553568

Academic Integrity in Digital Media Art Education in the AI Era

  • Published in Journal of Higher Vocational Education, 1 March 2024
  • Computer Science, Education, Art



Guide to Creating Essay Outlines: Top Tools to Use

  • Smodin Editorial Team
  • Updated: September 24, 2024
  • All About Content and Writing

Writing an essay can be overwhelming, especially when you're staring at a blank page. The key to overcoming this challenge is to start with an outline. An outline helps you organize your thoughts and makes the writing process smoother and less stressful. If you're unsure how to begin, an outline creator can be a lifesaver.

In this guide to creating essay outlines, you'll learn everything you need to know about using an essay outline generator. We'll also discuss some of the best outline generators on the market to help you choose the right tool for your needs. Let's get started!


What Is an Essay Outline Creator?

If you're wondering, "What is an essay outline creator?", the simple answer is that it's a user-friendly tool that helps you structure your essay before you start writing.

These tools help you arrange your main ideas, supporting points, and evidence in a logical order for any type of essay. This ensures that your essay stays focused on what matters and flows smoothly from one section to the next.

How to Choose an Essay Outline Creator

Students and teachers nowadays use AI tools for essays and other academic purposes. So if you belong to either of these groups and are unsure how to create an outline for an essay, pay attention to factors like ease of use and features. Look for tools that let you create structured outlines in just a few clicks.

Make sure the tool you choose supports different essay types and is suited to the academic level you're working at. The best tools are user-friendly, work for writers of all levels, and help you earn the grades you need.

The Best Essay Outline Creators: Our Top 3 Tools to Use

When it comes to writing a well-organized essay, the right tools can make all the difference. Let's look at the best essay outline creators out there.

1. Smodin AI Outline Generator

Smodin offers an AI-powered tool that creates structured outlines in just a few clicks. This free AI outline generator is perfect for writers who struggle to organize their ideas. Smodin's tool is user-friendly and helps you focus on the main ideas of your essay.

It's especially useful for anyone facing writer's block. Smodin provides a clear starting point and ensures that your essay has a clear purpose and structure.

2. EssayAiLab Outline Generator

EssayAiLab (formerly EssayBot) is another popular outline generator aimed at students and writers. It helps you define the key points of your essay and arrange them into a coherent structure. This tool is great for creating outlines for a range of essay types, from argumentative to descriptive essays.

With EssayAiLab, you can create well-structured outlines tailored to the intended length and purpose of your essay.

3. MindMup Outline Creator

MindMup is a versatile outline creator that lets you organize your ideas visually. It's especially useful for writers who prefer a more visual approach to outlining. MindMup helps you create an outline that is easy to follow and ensures that your essay covers all the necessary points.

This tool is ideal for academic work. It can help you develop a suitable essay topic that meets the requirements of your assignment.


How to Use an Essay Outline Creator

Using an essay outline creator can transform the writing process. Here's a simple step-by-step guide to using these tools effectively:

  • Choose the essay outline generator that best suits your needs. Consider factors such as the type of essay you're writing and your academic level.
  • Enter the main points you want to cover in your essay. This step helps you focus your thoughts and ensures you don't overlook any important details.
  • The outline generator arranges your points into a logical structure. This is where you see your essay take shape.


The Importance of Structured Outlines

Structured outlines are essential for organizing ideas and ensuring the coherence of your essay. A well-structured outline helps you define the key points of your essay and ensures that each section flows logically into the next. That makes the writing process more efficient and less stressful.

With the help of an outline generator, you can create a clear roadmap for your essay, making it easier to stay focused and achieve the best results.


Frequently Asked Questions

What is an essay outline?

An essay outline is a plan that puts the main ideas and key points of your essay into a logical order. It helps you structure your essay before you start writing.

Can I use an outline generator for any type of essay?

Yes, most outline generators support a variety of essay types, including argumentative, descriptive, and narrative essays.

Is there a free AI essay outline generator?

Yes, Smodin offers a free AI outline generator that lets you create structured outlines quickly and easily.


Use Smodin AI and Create an Essay Outline in Minutes!

Creating a structured outline is the first step toward writing a successful essay. With tools like Smodin's AI-powered outline generator, you can easily build a clear and logical structure for your essay. Whether you're working on research papers or personal essays, this approach can save you time and effort. With the tools in our guide to creating essay outlines, you can focus on crafting a convincing and well-organized essay.

Ready to create your essay outline? Use Smodin's AI-powered tools today and take your essay writing to the next level. You'll be able to write your outline within minutes and craft a compelling essay that will impress your teachers! Visit Smodin.io to get started now!
