
The AI Art Movement


The Start of AI Art
Between the late 1950s and the early 1960s, the German philosopher Max Bense developed his Information Aesthetics theory. This pioneering concept was focused on developing exact scientific measures to introduce objectivity into aesthetics. By applying mathematics to the theory, he managed to connect art with rationality, paving the way for a new understanding of digital art.
Bense’s work provided a new theoretical framework in which art was objective rather than externally influenced. Scientist-artists such as Frieder Nake, Georg Nees, and A. Michael Noll were influenced by Bense’s Information Aesthetics as they explored the use of algorithms to create visual art.
Nees was the first person to exhibit his ‘computer art’ publicly, at a seminar in 1965 organised by Bense himself at the Institute of Philosophy and Theory of Knowledge.
At this stage, digital art began to grow beyond the confines of aesthetic theory.
In the early 1970s, Harold Cohen, a British artist living in California, developed the AARON system – a computer program written to create visual images autonomously.
Cohen’s research began with finding ways to apply AI to fine art. After developing AARON, he kept refining the program’s decision-making – its choices of colour and composition. AARON started out as a program that could only create monochrome line drawings, and evolved to produce digital prints of coloured shapes, such as human figures and foliage.
Bense argued for developing art along rational lines, holding that rationality was our main defence against fascism. Early computer artists were accordingly focused on developing generative aesthetics: a theory based on removing subjectivity from artistic processes, creating an aesthetic perspective backed by the transparency of science.
Nake has also defended the idea of generative aesthetics in interviews, commenting that computer art is not designed to produce masterpieces but a system of designs. These designs are not about artistic craft, but rational aesthetic coherence.
AI Art Today
AI systems have been gradually introduced into our lives – social media, phone assistants, online searches, and even self-driving cars – performing numerous tasks in an automated manner.
In the past four years, we have seen a surge in the use of algorithms to create works of art, opening up a new path for AI artists. In 2019 there were claims that we were entering a ‘Gold Rush’ of AI art, prompted largely by the first AI artwork to be sold at auction, for $432,000.
The painting – Portrait of Edmond Belamy – was one of a group of portraits of the fictional Belamy family by a Paris-based collective known as Obvious.
The founders of Obvious, Hugo Caselles-Dupré, Pierre Fautrel, and Gauthier Vernier, are researchers and artists whose work aims to explore the creative potential of Artificial Intelligence.
The collective is now collaborating with Kamel Mennour to sell three NFT-based video portraits.
To create the Portrait of Edmond Belamy they used a GAN (Generative Adversarial Network), an algorithm composed of two networks – a Generator and a Discriminator. The system was fed a data set of over 10,000 portraits spanning the 14th to the 20th century; the Generator creates new images based on the set, while the Discriminator tries to spot the difference between an image made by the Generator and one that is human-made. Training continues until the Generator produces portraits the Discriminator can no longer tell apart from the real ones.
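This adversarial loop can be illustrated with a minimal sketch – not Obvious’s actual code, and using one-dimensional toy data in place of portrait images. Here a linear ‘Generator’ learns to mimic a target distribution while a logistic ‘Discriminator’ learns to tell real samples from generated ones; both update in turn by gradient ascent on their respective objectives:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    x = np.clip(x, -50.0, 50.0)  # numerical safety
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # "Real" data: a stand-in for the human-made portrait data set
    return rng.normal(loc=4.0, scale=0.5, size=(n, 1))

gw, gb = rng.normal(), 0.0  # Generator parameters: fake = gw * noise + gb
dv, dc = rng.normal(), 0.0  # Discriminator parameters: D(x) = sigmoid(dv * x + dc)
lr = 0.05

for step in range(2000):
    z = rng.normal(size=(32, 1))   # noise input to the Generator
    fake = gw * z + gb             # generated samples
    real = real_batch(32)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    pr = sigmoid(dv * real + dc)
    pf = sigmoid(dv * fake + dc)
    dv += lr * float(np.mean((1 - pr) * real - pf * fake))
    dc += lr * float(np.mean((1 - pr) - pf))

    # Generator update: push D(fake) toward 1, i.e. fool the Discriminator
    pf = sigmoid(dv * fake + dc)
    gw += lr * float(np.mean((1 - pf) * dv * z))
    gb += lr * float(np.mean((1 - pf) * dv))

# After training, generated samples should drift toward the real distribution
samples = gw * rng.normal(size=(1000, 1)) + gb
```

The same dynamic, scaled up to deep convolutional networks and image data, underlies the Belamy portraits: the Generator never sees the real data directly, only the Discriminator’s verdict on its output.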
Christie’s, the auction house that organised the sale, commented that the future of the art market will be deeply shaped by new technologies, as AI algorithms are poised to influence art history and visual culture.
Sotheby’s also entered the AI art market in 2019, selling a work titled Memories of Passerby I by Mario Klingemann for $51,000.
Klingemann, a leading pioneer of the AI art movement, questions the inner workings of systems, exploring human perception and aesthetic theory in his art. To create Memories of Passerby I, he trained a neural network to produce surreal images based on a data set of portraits from the 17th to the 19th century. The installation consists of two screens on which the computer displays two portraits that continuously morph into different faces; the paintings are created in real time, as the viewer looks at the work.
The AI art movement, which was started by artists like Frieder Nake, Georg Nees, Manfred Mohr and Vera Molnár, is now being led by a large community of artists who work across different disciplines to explore creativity through a technological outlet.
Sougwen Chung is an award-winning artist whose work approaches an understanding of the dynamics between humans and systems, by combining hand-made and machine-made marks. Creating a relationship with the AI has allowed her to raise questions about control and authorship, as well as analyse our interactions with others.
Another prominent figure in the AI art movement is London-based artist and researcher Memo Akten. His AI projects are focused on creating reflections of ourselves and making sense of how we see the world.
One of his most famous pieces, Deep Meditations, is an hour-long sound and video installation that works as both a celebration of life and a spiritual journey. The work invites the viewer to acknowledge and appreciate our experience in and as part of the universe.
A fourth pioneering artist in this medium is Refik Anadol, a Turkish media artist and researcher who uses machine learning to transform architectural spaces into canvases to produce live media art.
His works attempt to tackle our experiences with space in an era controlled by media and technology. He creates digital architectural designs to conceptualise the shifting spatial understanding and give it a new meaning.
The Future of AI Art?
Marking the 48th anniversary of Picasso’s death in April 2021, a pair of scientists released a set of NFTs based on a recreation of a lost artwork, believed to be by Santiago Rusiñol, that was hidden beneath Picasso’s The Crouching Beggar.
The project behind this work, Oxia Palus, was founded in 2019 by Anthony Bourached and George Cann. Its goals are to resurrect the world’s lost artworks with AI, extend creative boundaries through new technology, advocate for the responsible use of AI by creative industries and arts-education centres, and develop the creative jobs of the future.
To reconstruct the artwork beneath Picasso’s canvas, they processed X-ray fluorescence images of The Crouching Beggar alongside Rusiñol’s paintings. Using a 3D height map, they re-layered paint onto the canvas, capturing the artist’s texture and style. By combining spectroscopic imaging, AI, and 3D printing, they can bring back the visible traces of an earlier painting hidden within an existing artwork, contributing to the conservation of historic art.
These events are clearly not the future but the present – yet they are a sign of where the convergence of AI and art might be going.
Bourached has argued that instead of using AI to generate new creative concepts, Oxia Palus is looking backwards: the future of AI in art lies not only in creation but also in preservation.
As AI develops and artists create artworks based on algorithms and machine learning, the next step for AI art would be computers that can recognise emotions. If AI manages to understand how visual culture makes people feel, experts would have succeeded in creating computers with emotional intelligence – and therefore something more human.
An AI’s judgements are only as good as the data it is trained on. To teach a machine to understand emotions in art, therefore, a vast amount of information is required. This is what a team of researchers from Stanford University, École Polytechnique, and King Abdullah University of Science and Technology have assembled.
They have created a dataset called ArtEmis, accompanied by machine learning models that aim to shed light on the relationship between visual content, language, and emotion. The set includes over 80,000 images and over 400,000 emotional attributions. To build it, volunteers were asked to name the dominant emotion an artwork made them feel and explain it in a sentence. The resulting models are designed to categorise an artwork into one of eight emotional categories, and then explain what in the image justifies that emotion.
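As a toy illustration of the classification step – not the ArtEmis models themselves, which pair a deep image encoder with a caption generator – the sketch below maps an image embedding to a probability distribution over eight emotion categories via a linear layer and softmax. The weights here are random and untrained, so the predicted label is arbitrary; the category names follow those reported for ArtEmis:

```python
import numpy as np

# Eight emotion categories, as reported for the ArtEmis dataset
EMOTIONS = ["amusement", "awe", "contentment", "excitement",
            "anger", "disgust", "fear", "sadness"]

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating
    e = np.exp(logits - logits.max())
    return e / e.sum()

def classify(image_features, weights, bias):
    """Toy linear head: image embedding -> distribution over 8 emotions."""
    probs = softmax(weights @ image_features + bias)
    return EMOTIONS[int(np.argmax(probs))], probs

rng = np.random.default_rng(1)
features = rng.normal(size=512)                 # stand-in for an image embedding
W = rng.normal(size=(8, 512)) * 0.01            # untrained, random weights
b = np.zeros(8)

label, probs = classify(features, W, b)
print(label)
```

In the real system the explanation sentence is generated conditioned on both the image and the predicted emotion, which is what makes the captions affective rather than purely descriptive.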
ArtEmis has shown a lot of promise: in some instances, the captions it produces reflect the abstract concept of the artwork, going beyond the known capabilities of a computer.
The team behind the system hopes that in the near future ArtEmis will serve as an aide for artists – a tool to evaluate their works and ensure they have the emotional effect intended.
In an article in The Gradient, Fabian Offert argues that artists’ focus in using AI may shift from aesthetic exploration to its critical potential. He hopes that AI art will drive innovation by providing criticism of itself.
AI art, like other forms of innovation, will be used according to its usefulness in the real world. The goal will no longer be aesthetic but critical, opening up further opportunities or differing ideas – ‘just as abstraction is a critique of realism in painting’.
This future normalisation of AI art will transform machine learning into a set of tools, with no philosophical dilemma.
I am just hoping Offert is right, and this doesn’t turn into an episode of Black Mirror.
Words by Eugenia Pacheco Aisa.
Want more news and stories from the art world? Check out Mouthing Off’s Art Section.