Remembering Desmond Paul Henry: Pioneer of Machine-Generated Art
Today, we commemorate the birthday of the artistic visionary, Desmond Paul Henry (1921-2004), whose pioneering approach to art continues to influence and inspire. An esteemed philosopher and Manchester University lecturer, Henry was at the forefront of the global computer art movement of the 1960s. It was in our historical exhibition, "Automat und Mensch," where we had the honor of showcasing Henry's beautiful machine-generated pieces.
Born on July 5th, 1921, Desmond Paul Henry was a visionary exponent of the synergy between art and technology. He pioneered the concept of using computers for interactive graphic manipulation. His analog computer-derived drawing machines from the 1960s serve as a crucial bridge between the Mechanical Age and the Digital Age.
In 1961, thanks to celebrated artist L.S. Lowry and A. Frape, Henry's career reached new heights when he won first prize in a "London Opportunity" art competition. Lowry, recognizing Henry’s potential, insisted on showcasing his machine drawings at Henry's London solo exhibition, titled “Ideographs”, at the Reid Gallery. Henry's groundbreaking work in machine-generated art caught the attention of the media, landing him a spot on the BBC's North at Six series and drawing interest from the American magazine, Life.
Henry's pieces were featured in various exhibitions during this period, including "Cybernetic Serendipity" held at the Institute of Contemporary Arts in London. This exhibition, featuring his interactive Drawing Machine 2, toured the United States, amplifying his international recognition.
Henry constructed three electro-mechanical drawing machines from modified bombsight analogue computers (a technology primarily used in World War II bombers to calculate the precise release of bombs onto their targets). His drawing machines were not merely functional; they were intricately designed systems that combined gears, belts, cams, and differentials. Each machine took up to six weeks to construct. The resulting drawings, each a symphony of lines and curves, could take anywhere from two hours to two days to complete. The machines were powered by an external electric source driving one or two servo motors, which coordinated the motions of the suspended drawing implements.
Henry's electromechanical drawing machines embraced the unpredictable beauty born from the "mechanics of chance", much as the works of artist Jean Tinguely did. Unlike Tinguely's, however, Henry's creations also allowed for interactivity, inviting personal and artistic input during the drawing process.
During this period, he created around 800 machine-drawings, each an infinitely varied combination of repetitive single lines forming abstract curves. Some of these works were exhibited in 2019 at the “Automat und Mensch – A History of AI and Generative Art” exhibition at Kate Vass Galerie, along with other historically significant generative artworks.
Artworks
Happy Birthday to Kjetil Golid!
Celebrating the birthday of artist Kjetil Golid, we take a closer look at his remarkable career in generative art. Hailing from Norway, Kjetil explores algorithms and data structures through captivating visualizations. His projects fuse aesthetic visuals with original algorithms, resulting in mesmerizing and unpredictable outcomes. Kjetil advocates for creative expression through programming, openly sharing his code and even developing user-friendly tools for non-coders to create interactive visuals.
One notable exhibition of Kjetil's work took place at Kate Vass Galerie in 2020, when he was featured in the "Game of Life - Emergence in Generative Art" exhibition. This show served as a tribute to the late mathematician John Horton Conway, renowned for his influential "Game of Life" concept. Kjetil's process results in bold works with basket weave-like patterns that resemble graphic pixelated flags or banners. These pieces recall computing's origins in the Jacquard loom, a device that employed punch cards to simplify the intricate weaving process of 18th-century textiles. We believe this exhibition marked a turning point in Kjetil's career, subsequently leading to his feature in the New York Times.
Kjetil's artistic journey reached new heights with his “Archetype” drop at Artblocks in 2021. The selected “Archetypes” were showcased at the ZKM Cube as part of the exhibition "CryptoArt It’s Not About Money," alongside other prominent NFT artworks like CryptoPunks and Cryptokitties. “Archetype” explores the use of repetition as a counterweight to unruly, random structures. As every single component looks chaotic alone, the repetition brings along a sense of intentionality, ultimately resulting in a complex, yet satisfying expression.
Kjetil's "Iterations" series made its debut at the Phygital show in April 2022 at Kate Vass Galerie. The series explores the interplay between structure and chaos, creating a visual impression of a structure by repeating random layouts of blocks. However, the process intentionally introduces imperfections and mutations to reintroduce an element of chaos. This was the first time when Kjetil accompanied his digital work with unique 1/1 signed fine art prints 120x120cm.
Among his notable works is "Curvescape VII – Outpost" which adds architectural elements to a previously barren landscape. The scale and nature of these structures remain enigmatic, blending the aesthetics of a free-spirited pencil with the precision of computer-generated imagery.
Later, at the turn of 2022 and 2023, Kjetil's "10 EXPANSE" series made a grand entrance during the New Year's Eve auctions. Minted on his own smart contract, the series divides the plane into a grid and transforms each grid cell into cuboids of varying dimensions. The coloration of each cuboid's face is determined by its slant relative to a light source, translating geometric relationships into numerical values mapped to specific locations in the color space.
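For readers curious how a rule like that can be expressed in code, here is a rough Python sketch of the general idea. It is not Kjetil's own implementation, and the palette and light direction are invented for illustration, but it shows how a face's slant relative to a light source can be turned into a number and then into a colour.

```python
import numpy as np

# Hypothetical palette: positions in colour space, indexed by how much a face tilts toward the light.
PALETTE = ["#0d1b2a", "#1b263b", "#415a77", "#778da9", "#e0e1dd"]

def face_colour(normal, light_dir=(0.3, 0.5, 1.0)):
    """Map the slant of a cuboid face (its unit normal) relative to a light source to a palette colour."""
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    n /= np.linalg.norm(n)
    l /= np.linalg.norm(l)
    slant = (np.dot(n, l) + 1.0) / 2.0                 # geometric relationship -> number in [0, 1]
    return PALETTE[int(slant * (len(PALETTE) - 1))]    # number -> location in the colour space

# Each face's orientation relative to the light picks out a shade:
for normal in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
    print(normal, face_colour(normal))
```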
Through his inventive and boundary-pushing artworks, Kjetil Golid continues to captivate audiences with his exploration of algorithms, generative art, and the fascinating interplay between structure and chaos.
Read more about the artist and see the full portfolio of his works HERE.
More Projects
History of AI - The new tools: ChatGPT
In recent months, the development of new AI tools has made it clear that AI has become a part of our everyday lives and is no longer restricted to the realm of research centers and tech companies. AI has gradually become a technology that businesses and individuals can use to transform the way they work and live. In our previous article, we presented the history of AI, which was first introduced in 1956 at the historical Dartmouth Conference. Now, we are excited to bring you the latest tools that showcase what AI is capable of today.
Let's start with last year's biggest AI sensation: ChatGPT, a language model developed by OpenAI to generate human-like responses to natural language and to assist in various tasks. The development started with the creation of its predecessor, GPT-1, in 2018. Since then, the San Francisco-based artificial intelligence company has been working to improve its language generation capabilities through iterative training on large datasets of text. Developers have been feeding the system large amounts of text, such as books, articles, and websites, and using this data to teach it the patterns it needs to generate coherent responses.
The technology
When OpenAI launched ChatGPT at the end of November 2022, they were not prepared for the huge interest it would quickly gain. However, most of the technology behind ChatGPT is not new. It is based on neural networks, which are designed to simulate the way the human brain works: they process information through interconnected nodes that identify patterns and relationships in data. Neural networks date back to the 1950s, but the specific neural network architecture that ChatGPT uses, called a transformer, was developed by Google in 2017. It quickly became a popular solution for language translation and text summarization thanks to its self-attention mechanism, which allows the model to weigh the relevance of different parts of a text when processing each word.
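As a rough illustration of what "self-attention" means in practice, the minimal NumPy sketch below implements scaled dot-product attention for a single head. It shows the general mechanism only, not OpenAI's actual implementation, and the matrices are random stand-ins for learned weights.

```python
import numpy as np

def scaled_dot_product_attention(X, Wq, Wk, Wv):
    """Minimal single-head self-attention over a sequence of token vectors X.

    X          : (seq_len, d_model) input embeddings
    Wq, Wk, Wv : (d_model, d_k) learned projection matrices (random here)
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                        # project tokens to queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])                  # how strongly each token attends to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)           # softmax over the sequence
    return weights @ V                                       # each output is a weighted mix of value vectors

# Toy usage: 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(scaled_dot_product_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```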
Another important technology is "transfer learning," a machine learning technique in which developers reuse a pre-trained model as the starting point for a new task. While the concept of transfer learning has been around since the 1970s, it has developed significantly in recent years. It allows a model like ChatGPT to be pre-trained on vast amounts of text and then fine-tuned for generating chat responses or answering questions.
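A hedged sketch of what that fine-tuning step can look like, using the Hugging Face transformers library and the small, openly available GPT-2 model as a stand-in for the much larger GPT models (a generic illustration, not OpenAI's actual pipeline):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Start from a model that has already been pre-trained on a large general text corpus.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Transfer learning: freeze the earlier transformer blocks so the pre-trained knowledge is reused,
# leaving only the last two blocks (plus embeddings and output head, in this simple sketch) trainable.
for param in model.transformer.h[:-2].parameters():
    param.requires_grad = False

# Training then continues with an ordinary language-modelling loop
# over the task-specific dataset (e.g. question-answer pairs or chat transcripts).
print(sum(p.numel() for p in model.parameters() if p.requires_grad), "trainable parameters")
```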
“Reinforcement learning” is another key component of ChatGPT. Reinforcement learning is a type of machine learning where the model learns through trial and error by receiving feedback from its environment.
The secret of ChatGPT's success lies not in inventing all these technologies, but in combining and scaling existing ones. By using techniques such as transfer learning and reinforcement learning on top of the transformer architecture, OpenAI has been able to create a language model that generates remarkably human-like responses.
The Story of Chatbots
The history of chatbots goes back decades. One of the earliest chatbots was ELIZA, developed in the mid-1960s by Joseph Weizenbaum at MIT. ELIZA was a program that simulated conversation using pattern matching and a set of predefined responses. It was designed to mimic the conversational style of a psychotherapist and was used as a tool for studying human-machine communication.
In the 1970s and 1980s, chatbots began to be used in a variety of applications, including customer service, information retrieval, and entertainment. One popular chatbot during this time was called Parry, which was developed by Kenneth Colby in the 1970s. Parry was designed to simulate a patient with paranoid schizophrenia and was used as a tool for studying the human perception of mental illness.
In the 1990s, with the rise of the internet, chatbots became more widely used for online customer service and support. One notable chatbot during this time was ALICE, which was developed by Richard Wallace in the mid-1990s. ALICE was designed to simulate conversation with a human user and was used in customer service and information retrieval.
Overall, early forms of chatbots were developed and used for a variety of applications, including studying human-machine communication, simulating mental illness, and providing online customer service and support. The development of chatbots has paved the way for more advanced language models like ChatGPT, and we look forward to seeing how this technology will continue to evolve in the future.
The road to mainstream popularity
After the release of GPT-1 in 2018, OpenAI introduced the next model, GPT-2, in 2019, which was trained on even larger text data than its predecessor. It was capable of generating high-quality, human-like text, such as news, stories, or even computer code, and it quickly received a lot of attention from various industries and researchers. In 2020, OpenAI released GPT-3, which had impressive capabilities in generating coherent text and performed well in answering questions, summarising and translating. GPT-3.5 later built on this model by further improving it, adding new features such as generating texts based on specific scenarios and preventing the repetition of phrases. In early 2022, InstructGPT was introduced, a variation of the GPT series designed to be more controllable, allowing users to provide more specific instructions and guide the model by suggesting possible paths for it to follow.
While all these models were available on the company's website for developers to integrate into their own software, they did not gain mainstream popularity. However, this has changed with the emergence of ChatGPT in November 2022, which is based on the GPT-3.5 architecture and has already been adopted by millions of users.
We might ask: why did ChatGPT become the most talked-about AI model? It was one of the first AI models to be made publicly accessible and understandable to non-experts. It mimics human conversation and generates outputs in the way humans understand and communicate. In other words, it aligns with what a human wants from a conversational AI: helpful and truthful responses, an easily accessible chat interface, and the ability to ask follow-up questions if necessary.
Users have found a variety of ways to use the model, such as creating resignation letters, answering test questions, writing poetry, or even seeking life advice. In many ways, ChatGPT can be a "virtual best friend" who can help you in many situations and even show empathy.
The limitations
ChatGPT's popularity has made it a target for users attempting to exploit its flaws. Some of these users have discovered that ChatGPT can be coaxed into generating unwanted outputs. OpenAI has taken action to address this issue by using adversarial training to prevent users from tricking the model into producing harmful or incorrect responses. The training pits multiple chatbots against each other, with one chatbot trying to generate text that forces another to produce unwanted responses. The successful attacks are then added to the training data so that the model can learn to ignore them.
ChatGPT also has other limitations that could be improved. Like any other language model, ChatGPT can contain biases and stereotypes that may be offensive to specific individuals. It also has limited knowledge of events that happened after September 2021, and it is not able to answer questions about new developments in a particular field. It may also have difficulty understanding context, especially sarcasm and humor. If a user is being sarcastic, the model sometimes misses the intended meaning and responds incorrectly. And even though ChatGPT can generate empathetic responses, it cannot do so reliably in all situations.
GPT-4 and the future
OpenAI has updated the system several times since its launch. The latest update occurred on the 14th of March 2023, when GPT-4 was released. This latest version is capable of solving even more complex problems, better understanding and reasoning about the context of conversations, and generating even more human-like responses quickly and accurately. In addition to improving its capabilities, the developers have also made it a priority to enhance its safety, making it less likely to provide harmful or incorrect answers.
In addition to its popularity among users, many big companies have taken note of ChatGPT's capabilities. OpenAI has recently partnered with Microsoft and with the global management consulting firm Bain, which works with major companies like Coca-Cola. This partnership will allow these companies to integrate ChatGPT's language capabilities into their own products and services, further expanding the model's impact.
ChatGPT as an artistic practice
ChatGPT can be used for artistic purposes in a variety of ways. As a language model, it can generate text in response to prompts or questions, which artists can use to inspire their creative work.
Here are some ways in which artists can use ChatGPT for artistic purposes (a minimal API sketch follows the list):
Writing prompts: ChatGPT can be used to generate writing prompts for artists. They can provide a topic or theme, and ChatGPT can generate a variety of prompts that help the artist explore the topic from different angles. These prompts can inspire poetry, short stories, or other forms of written work.
Collaborative storytelling: ChatGPT can be used to collaborate with others on a story. Artists can start by providing a prompt or the beginning of a story, and then invite others to add to it by typing in their responses. As the story progresses, ChatGPT can use the information provided by the collaborators to generate the next part of the story.
Dialogue writing: ChatGPT can be used to generate dialogues between characters. Artists can provide the context of the conversation, the personalities of the characters, and the tone of the exchange, and ChatGPT can generate a dialogue that fits all three.
Character creation: ChatGPT can be used to create unique characters for an artwork. Artists can describe the physical appearance, personality traits, and backstory of a character, and ChatGPT can generate a detailed description that helps them visualize the character and bring it to life in their artwork.
Poetry generation: ChatGPT can be used to generate poetry in response to a given prompt. Artists can provide a theme or a set of keywords, and ChatGPT can use its language generation capabilities to produce a poem that fits them.
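For artists comfortable with a little code, the same prompt-generation idea can be scripted. The sketch below is a minimal example assuming the official openai Python package (v1.x) and an API key in the OPENAI_API_KEY environment variable; the model name and prompt wording are purely illustrative.

```python
from openai import OpenAI  # assumes the openai package (v1.x) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def writing_prompts(theme: str, n: int = 3) -> str:
    """Ask the chat model for a few short writing prompts on a given theme."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[
            {"role": "system", "content": "You are a helpful assistant for artists."},
            {"role": "user", "content": f"Give me {n} short writing prompts about {theme}."},
        ],
    )
    return response.choices[0].message.content

print(writing_prompts("metamorphosis and machines"))
```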
ChatGPT can also be combined with other AI tools, such as Midjourney. While ChatGPT generates text-based prompts, dialogues, and character descriptions that can inspire creative work, Midjourney turns text prompts into images. By combining the two, artists can use ChatGPT to develop prompts and ideas for new artworks and then use Midjourney to render those ideas visually, iterating on the results until they find the most successful compositions. Ultimately, by using these tools in tandem, artists can leverage the power of AI to enhance their creative process. With the continued development of AI and machine learning technologies, ChatGPT is likely to become even more advanced and valuable for artistic purposes in the future.
“METAMORPHOSES” BY MISS AL SIMPSON - 100 ITERATIONS 31/05
Miss AL Simpson, an award-winning crypto artist, has been at the forefront of the web3 movement since 2018. Renowned for her distinctive style, she seamlessly merges digital graffiti with animated 3D historical motifs. Notably, AL Simpson has embarked on pioneering AI collaborations that push the boundaries of traditional art paradigms. By embracing the potential of artificial intelligence as a creative partner, she redefines the future of crypto art.
As part of the curated group exhibition "Do Androids Dream About Electric Sheep?" by Kate Vass Galerie, opening on 31/05, Miss AL Simpson presents her first AI long-form project, "Metamorphoses." This series comprises 100 unique live-generated iterations and serves as the opening of the ongoing show. Drawing inspiration from the epic poem "Metamorphoses" by the Roman poet Ovid, Miss AL Simpson embarks on a novel approach for a long-form project. The mythological tales within Ovid's work explore transformation, encompassing love, desire, power, and the social and political climate, themes of change that mirror the turbulence of our current society.
Miss AL Simpson's rendition of "Metamorphoses" is a timeless and enduring series of works. Ovid's "Metamorphoses" has profoundly influenced Western literature and art for centuries. Miss AL Simpson draws upon these timeless stories to explore the complex relationship between AI and humanity. For instance, in the tale of Narcissus, who becomes transfixed by his reflection, Simpson finds a parallel to humans fixating on the output of AI, perceiving it as conscious and relatable rather than recognizing it as the product of an algorithm. With masterful skill, she adapts the themes of transformation and the interplay between the human and AI realms, resonating with contemporary audiences. Leveraging the capabilities of 0KAI, Miss AL Simpson develops her concept by employing prompts and a combination of traits and rarities, blending the poems with her earlier analogue artwork from 2018 to create a beautiful body of work.
"Metamorphoses" will be presented as a Dutch auction on May 31, 2023, at 8 pm (CET) on https://0kai.k011.com, offering art enthusiasts an opportunity to engage with and acquire this remarkable collection.
This series of works is about the "AI commodification of consciousness" as a future concern, viewed within the context of the past through Ovid's "Metamorphoses."
Written by Miss AL Simpson
ART PRACTICE - COMMODIFICATION
My artistic practice has always questioned the issue of COMMODIFICATION. Commodification is the transformation of goods, services, ideas, and people into commodities or objects of trade. To date, I have been particularly intrigued by the commodification of women within advertising and the COLLECTIVE CONSCIOUSNESS.
In the context of fashion magazines, women (their images, personalities, and stories) are often commodified, particularly through advertising. In many fashion magazines, the presentation of women often conforms to conventional beauty standards and societal expectations. The images, articles, and advertisements typically emphasize physical appearance, style, and consumer goods, implicitly suggesting that women should strive to achieve these standards and lifestyles. This depiction can commodify women by reducing their value to their physical appearance and consumption habits, and by promoting a specific, often unattainable, ideal of femininity. This commodification process is not just about selling products; it also sells an image, a lifestyle, and a specific set of values. Advertisers use these techniques to create a desire for their products, linking them to the attainment of the promoted ideals.
My practice (pre-AI) was always about ripping up this commodification, sometimes literally, by tearing the magazines into collage, and sometimes by using ink, textures and paint to obliterate the fashion magazine advert completely. This was done in both analogue and digital ways, and the method can be seen in a lot of my pre-AI artworks. In this way, I have also always explored the issue of transformation.
This is one aspect of the commodification of consciousness but what is interesting is whether tokenizing NFTs is another aspect of the commodification of consciousness too. In terms of the commodification of consciousness, if we interpret consciousness broadly to include the creative and intellectual output of an individual, then the creation of NFTs based on our artistic ideas and execution could be seen as a form of commodifying consciousness. Each NFT is a tokenized version of an artistic idea, and the sale of these tokens effectively turns those ideas into commodities.
THIS PROJECT
As you can see, there is a connection between this project and my practice. I also like the fact that I am literally feeding my old cryptoart images (based on a similar theme) into the machine to generate new AI images.
As an artist who has explored the concept of "COMMODIFICATION" for the whole of my practice, it seemed like a natural progression to look, with a kind of futurism, at how COMMODIFICATION might look down the line. With the rapid growth of AI, I was intrigued by the concept of the AI COMMODIFICATION OF CONSCIOUSNESS, both for its mystery and for what it might mean for humans. I thought it might be interesting to explore this transformative idea by looking at past mythology through the eyes of Ovid's "Metamorphoses", which is all about transformation.
The AI Commodification of Consciousness refers to the idea of artificial intelligence technologies being developed to a point where they can replicate, simulate, or even surpass human consciousness, with this capacity being bought, sold, and traded as a commodity. This raises a host of ethical, philosophical, and socio-economic questions, including the nature and value of consciousness, the ethical treatment of artificial entities, and the potential consequences of creating and distributing such technology.
"Metamorphoses" is a Latin narrative poem by the Roman poet Ovid, completed in 8 CE. It is an epic exploration of transformation and change in mythology, ranging from the creation of the world to the deification of Julius Caesar.
With this project, "Metamorphoses", I have brought together all three aspects into one:
1. Commodification
2. Transformation
3. Relating the other two concepts to a future where AI and humans work hard to find out what that means for our Consciousness.
I wanted to keep some reference to my early roots of exploration of this field so there is an AI interpretation of ripped Vogue magazine textures and dripping black ink. However, are the figures Greek figures from Ovid or are they future AI metaphors? That is for the viewer to decide.
DETAILS OF THE PROJECT
1. Pygmalion and the Statue (Book X): Pygmalion sculpts a woman out of ivory that is so beautiful and lifelike he falls in love with it. He prays to Venus to bring the statue to life, and his wish is granted. This story relates to AI in that we are creating something artificial (the statue/AI), that could become 'alive' in a metaphorical sense (consciousness). It brings up questions about the creation of artificial life and love for the artificial.
2. Daedalus and Icarus (Book VIII): Daedalus, a skilled craftsman, creates wings for himself and his son Icarus to escape from Crete. Icarus flies too close to the sun, melting his wings and causing him to fall into the sea and drown. This story brings up themes of human hubris, the misuse of technology, and unintended consequences, all of which are relevant to AI development.
3. Echo and Narcissus (Book III): As I detailed in the previous responses, this tale's themes of self-love, replication, and the inability to return love can all be related to AI commodification of consciousness.
4. Tiresias (Book III): Tiresias was transformed from a man into a woman, and then back into a man. This could be used to discuss the fluidity and transformation of identity, a relevant theme when considering how AI might assume various roles and identities.
5. Arachne and Minerva (Book VI): Arachne, a skilled mortal weaver, challenges the goddess Minerva to a weaving contest. Despite Arachne's undeniable skill, Minerva transforms her into a spider for her hubris. This tale could be related to AI, questioning how far we can push our technological 'weaving' before we incur unforeseen consequences.
NARCISSUS AND ECHO
In the story of Narcissus and Echo from Ovid's "Metamorphoses", a critical moment and line that encapsulates the tragedy of Narcissus is:
"quis fallere possit amantem?"
This line translates to "Who could deceive a lover?"
This line comes from the following context in Book III (lines 339-510), where Narcissus sees his own reflection in the pool and falls in love with it, not realizing it's his own image:
"stupet ipse videnti, seque probat, nitidisque comis, lucidus et ore. qui simul adspexit, simul et notavit amantem; quodque videt, cupit, et, quae petit, ipse recusat, atque eadem adspiciens perituraque desiderat ora, ignarusque sui est. incenditque notando, utque sitim, quae non sitiat, bibendo movet.
. . .
manibusque sua pectora pellit, et illas rubet ire vias, et rubet ire vias, tactaque tangit humum lacrimis madefacta suis. . . . quis fallere possit amantem?"
Translated, these lines convey:
"He wonders at himself, and stirs the water, and the same form appears again. He does not know what he sees, but what he sees kindles his delight. He sees himself in vain, and thinks what he sees to be nothing. He himself is the object that he burns for, and so he both kindles and burns in his desire.
. . .
He struck his naked body with hands not strong for the purpose. His chest reddened when struck, as apples are wont to become, or as the purple surface of a grape, when it is pressed with the finger, before it is ripe for the table. . . Who could deceive a lover?
In the line "quis fallere possit amantem?", Narcissus is so consumed with his own image that he fails to recognize the deception – the lover he sees is himself. It demonstrates the destructive power of self-love and obsession, and could be related to AI in the sense of our societal narcissism and infatuation with our own technological prowess, to the point that we might fail to perceive the pitfalls and risks of creating machines that mimic or surpass our own cognitive abilities.
PYGMALION AND GALATEA
The key Latin line from Ovid's "Metamorphoses" that encapsulates the moment of transformation when Pygmalion's ivory statue (later known as Galatea) becomes a living woman is as follows:
"corpus erat!"
This line translates to "It was a body!"
This short exclamation is found in Book X, line 243. The full context is as follows (lines 238-243):
"oscula dat reddique putat loquiturque salutatque, et credit tactis digitos insidere membris, et metuit, pressos veniat ne livor in artus, et modo blanditias adhibet, modo grata puellis munera fert illi, conchas teretesque lapillos et volucres et mille modis pictas anseris alas."
Translated, these lines convey:
"He gives it kisses and thinks they are returned; he speaks to it; he holds it, and imagines that his fingers sink into the flesh; and is afraid lest bruises appear on the limbs by his pressure. Now he brings presents to it, such as are pleasing to girls; shells, and pebbles, and the feathers of birds, and presents of amber."
The sentence "corpus erat!" (line 243) comes after these lines, and it is the moment when Pygmalion realizes that the statue has transformed into a living woman. He can feel warm flesh instead of the cold ivory he had sculpted, marking the incredible transformation from inanimate object to living being. It is this moment that relates most strongly to the concept of artificial intelligence, particularly the point at which AI might become indistinguishable from human consciousness.
DAEDALUS AND ICARUS
The story of Daedalus and Icarus in Ovid's "Metamorphoses" provides one of the most iconic cautionary tales in literature. Here's a key Latin line from the tale found in Book VIII:
"ignarus sua se quem portet esse parentem."
Roughly translated, it means "unaware that he is carrying his own downfall."
The line comes from the following larger context (lines 183-235), where Daedalus warns his son, Icarus, about the dangers of their flight:
"medio tutissimus ibis. neu te spectatam levis Aurora Booten aut Hesperum caelo videas currens olivum: temperiem laudem dixit. simul instruit usum, quale sit iter facias monstrat, motusque doceri.
. . .
ignarus sua se quem portet esse parentem."
Translated, these lines convey:
"You will go most safely in the middle. Lest the downy Bootes, seen by you, or Helice with her son, or Orion with his arms covered with bronze, draw you away, take your way where I lead; I command you! We go between the sword and the late-setting constellation of the Plough. Look not down, nor summon the constellations that lie beneath the earth, behind you; but direct your face to mine, and where I lead, let there be your way. I give you to these to be taken care of; and, although I am anxious for my own safety, my chief concern is for you, which doubles my fear. If, as I order, you control your course, both seas will be light to me. While he gives him advice, and fits the wings on his timid shoulders, the old man's cheeks are wet, and his hands tremble. He gives his son a kiss, one never to be repeated, and, raising himself upon his wings, he flies in advance, and is anxious for his companion, just as the bird, which has left her nest in the top of a tree, teaches her tender brood to fly, and urges them out with her wings. He bids him follow, and directs his own wings and looks back upon those of his son. Some angler, catching fish with a quivering rod, or a shepherd leaning on his staff, or a ploughman at his plough-handle, when he sees them, stunned, might take them for Gods, who can cleave the air with wings. And now Samos, sacred to Juno, lay at the left (Delos and Paros were left behind), Lebynthos, and Calymne, rich in honey, upon the right hand, when the boy began to rejoice in his daring flight, and leaving his guide, drawn by desire for the heavens, soared higher. His nearness to the devouring Sun softened the fragrant wax that held the wings: and the wax melted: he shook his bare arms, and lacking oarage waved them in vain in the empty air. His face shouting 'Father, father!' fell into the sea, which from him is called the Icarian. But the unlucky father, not a father, said, 'Icarus, where are you? In what place shall I seek you, Icarus?' 'Icarus' he called again. Then he saw the feathers on the waves, and cursed his arts, and buried the body in a tomb, and the island was called by the name of his buried child."
In the line "ignarus sua se quem portet esse parentem," Daedalus is unaware that he carries his own sorrow, or his own downfall. This is a poignant moment, highlighting the tragic nature of technological overreach, which is a highly relevant theme when discussing the potential risks and dangers associated with artificial intelligence.
ARACHNE AND MINERVA
In the tale of Arachne and Minerva (also known as Athena) from Ovid's "Metamorphoses", a significant line that illustrates the central conflict and subsequent transformation is:
"mutatque truces voltus et inpendit Arachnen."
This translates to "She changes her savage expression, and attacks Arachne."
This line is found within the following larger context in Book VI (lines 1-145), where Arachne, a mortal weaver, dares to challenge the goddess Minerva to a weaving contest:
"quod tamen ut fieret, Pallas sua tela removit, quaeque rudis fecerat, laesaque stamina vellis corripuit virgaque truces acuit iras, mutatque truces voltus et inpendit Arachnen."
Translated, these lines convey:
"However, so that it might be done, Pallas removed her own web, and the threads that the unskilled woman had made, and had spoiled with her fleece; and she struck Arachne's impudent head with her hard boxwood shuttle, and attacked her with it. And Arachne was afraid, and she grew pale."
Following this, Minerva turns Arachne into a spider for her hubris, forcing her to weave for all eternity. This story is a cautionary tale about the consequences of challenging the gods, or in a broader sense, transgressing natural or established boundaries.
When applying this to AI, it could be interpreted as a warning about the potential dangers of challenging the natural order with our technological creations. Just as Arachne faced consequences for her hubris in challenging a god, we may face unforeseen consequences in our pursuit of creating machines that mirror or even surpass human intelligence. The transformation of Arachne into a spider can symbolize the transformation of society and individuals through the advent of artificial intelligence.
TIRESIAS
In the story of Tiresias in Ovid's "Metamorphoses", a key Latin line that captures his transformation from man to woman and back to man again is:
"Corpora Cecropius rursus nova fecit in artus."
This line translates to "Again he made new bodies with his Cecropian hands."
This line comes from the larger context in Book III (lines 316-338), where Tiresias experiences his unique transformations:
"... si qua est fiducia vero, tu mihi, qui volucres oras mutaris in illas quas petis, et gemino reparas te corpore, Tiresia, dic' ait 'o, melior, cum femina sis, an auctor cum sit amor nostri?' Tiresias 'quamquam est mihi cognitus error,' dixit 'et ante oculos, ut eram, non semper adesse, nec male Cecropias inter celeberrima matres versatus, septemque annos effecerat illa. octavo, ad veterem silvis rediere figuram, et cecidere nova forma pro parte virili. saepe, puer, plena vitam sine femina duxi, saepe meis adopertus genitalibus ignes persensi, multoque tui tunc artior ignis. corpora Cecropius rursus nova fecit in artus, omnibus ut feminae cessissent partibus illi; tuque, puer, neque enim est dubium, tibi magis uror.' "
Translated, these lines convey:
" ... if there is any confidence in the truth, you tell me, who transform the wings that you seek into those that you have, and repair yourself with a double body, Tiresias, whether, when you are a woman, or when the author of our love is, is better. Tiresias replied, 'Although I am known to be mistaken, and not always to be before my eyes, as I was, nor to be badly among the most famous mothers of Cecropia, and that woman had completed seven years. In the eighth year, they returned to their old figure in the woods, and their new shape fell away in the male part. Often, boy, I have lived a life without a woman, often I have felt fires covered by my genitals, and your fire is then much tighter. Again he made new bodies with his Cecropian hands, until all parts of that woman had given way to him; and you, boy, for there is no doubt, I burn for you more.' "
Tiresias, having lived as both a man and a woman, is asked by Jupiter and Juno to settle a dispute over which gender derives more pleasure from love. His unique experience of transformation and identity can be related to AI in terms of how artificial intelligence can assume various roles and identities, providing unique insights that may not be accessible to humans. This can open up discussions about the fluidity and adaptability of AI, and its ability to take on diverse perspectives and functions.
MORPHING BEYOND THE SURFACE: THE ART OF NANCY BURSON'S DIGITAL PORTRAITURE
Nancy Burson is an American artist and photographer born in 1948. She is best known for her pioneering work in computer morphing technology and is considered to be the first artist to apply digital technology to the genre of photographic portraiture. Utilizing the morphing effects she developed while she was at MIT, Burson has incorporated these techniques in various ways throughout her career including age-enhancing techniques, the creation of composite portraits, and the development of race-related works. Many people may recognize some of her creations, even if they aren’t familiar with her name. Some of her most recognizable works include the digitally morphed “Trump/Putin” on the cover of Time magazine, and the updates of missing children's portraits on milk cartons.
We can see her art in museums and galleries such as MoMA, the Metropolitan Museum, the Whitney Museum, the Victoria and Albert Museum in London, the Centre Pompidou in Paris, the LA County Museum of Art, and the Getty Museum, among others. She was a visiting professor at Harvard and a member of the adjunct photography faculty at NYU for five years.
In this article, we will focus on Burson's morphing technique, presenting how she used this technology in different ways throughout her career. We will showcase her works and discuss why her art is important in the history of digital art and portrait photography. This article will include an exclusive interview with Nancy Burson herself. Burson shares her inspirations and the creative process behind her work, as well as discusses her past and personal reflections on her art with Kate Vass.
Morphing technology
Motion pictures and animations often use morphing techniques to create a seamless transition between two images. Morphing is a geometric interpolation technique that has been around for a long time, with traditional methods like the tabula scalata and other mechanical transformations. Besides these techniques, probably the most effective early way to morph images was "dissolving", developed in the 19th century. Dissolving is a gradual transition from one projected image to another, for instance a landscape dissolving from day to night. The technique was groundbreaking in the 19th and early 20th centuries, proving the potential of visual effects in motion pictures.
Since the 1980s, computers have replaced dissolving and made morphing more convincing than ever before. Digital morphing involves distorting one image at the same time as it fades into another by marking corresponding points and vectors on the before and after images. For example, we can mark key points on a face, such as the corners of the nose or the location of the eyes, and mark the same points in the second picture. The computer then distorts the first face to match the shape of the second face while gradually blending the two.
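As a deliberately simplified sketch of that idea (assuming Python with OpenCV and NumPy installed): a production morph interpolates a dense, triangulated mesh of corresponding points, whereas here a single affine transform stands in for the warp before the two images are cross-dissolved.

```python
import cv2
import numpy as np

def simple_morph(img_a, img_b, pts_a, pts_b, t=0.5):
    """Very simplified morph between two same-sized images.

    pts_a, pts_b : corresponding key points (eye corners, nose tip, ...) as (N, 2) arrays.
    t            : 0.0 gives pure image A, 1.0 gives pure image B.
    """
    pts_a, pts_b = np.float32(pts_a), np.float32(pts_b)
    pts_mid = (1 - t) * pts_a + t * pts_b                    # intermediate point positions
    M_a, _ = cv2.estimateAffinePartial2D(pts_a, pts_mid)     # warp A's shape toward the midpoint
    M_b, _ = cv2.estimateAffinePartial2D(pts_b, pts_mid)     # warp B's shape toward the midpoint
    h, w = img_a.shape[:2]
    warped_a = cv2.warpAffine(img_a, M_a, (w, h))
    warped_b = cv2.warpAffine(img_b, M_b, (w, h))
    return cv2.addWeighted(warped_a, 1 - t, warped_b, t, 0)  # cross-dissolve the two warped images

# Usage (assumes two same-sized face photos and hand-marked corresponding landmarks):
# img_a, img_b = cv2.imread("face_a.jpg"), cv2.imread("face_b.jpg")
# halfway = simple_morph(img_a, img_b, pts_a, pts_b, t=0.5)
```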
One of the first companies to use digital morphing techniques was the computer graphics firm Omnibus. It used the technique first for commercials and then for movies, contributing to the film "Flight of the Navigator", which featured scenes with a computer-generated spaceship that appeared to change its shape. Bob Hoffman and Bill Creber were the developers of that software. Morphing techniques then found their main home in the film industry: morphs appeared in movies like "Willow" (1988), "Indiana Jones and the Last Crusade" (1989), and "Terminator 2: Judgment Day" (1991). In 1992, Gryphon Software released MORPH, the first morphing software designed exclusively for personal computers.
Morphing Technology in Art
Nancy Burson is widely regarded as the first artist to use digital technology to create composite portraits in the early 1980s. Prior to this, however, composite images were created through analog techniques, by overlaying multiple images to blend them into a single picture. Figures like Francis Galton and William Wegman made significant strides toward analog composite images before the arrival of digital technology.
In the 1870s, Francis Galton first applied compositing techniques for the purpose of visualizing different human "types". He tried to determine whether specific facial features could be associated with distinct types of criminality. Galton used multiple exposures to create composite photographs of different segments of the population, including mentally ill and tuberculosis patients as well as what he regarded as the "healthy and talented" classes, such as Anglican ministers and doctors.
Later, William Wegman continued this tradition by creating composite images in his artistic practice in 1972. His piece "Family Combinations" compared a photograph of himself with the genetic combination of his parents: he used two negatives simultaneously, one of his mother and one of his father, to create a single image.
Nancy Burson: “I remember seeing Wegman’s Family Combinations when I was already working with MIT. And I didn’t find out about Galton until I was already making my own composites. It was a shock in that Galton was responsible for Eugenics, or the science of you are what you look like. I was horrified at the thought of anyone thinking that my composites were done with the same line of racist thinking. However, when they were first shown at the International Center of Photography (ICP) in 1985, it did seem that people understood my images had a light heartedness to them.”
Nancy Burson’s development
St. Louis-born Nancy Burson moved to New York in 1968. She visited the exhibition, "The Machine as Seen at the End of the Mechanical Age" at the Museum of Modern Art (MoMA), which made a tremendous impact on her life. The exhibition featured moving objects and video images, which sparked Nancy's interest in technology.
N.B.: “What I liked best about the Machine show at MoMA was that some of the pieces were interactive. The idea that viewers could participate in the art was huge and quite memorable for me. It turned the museum into a fun experience and not just a cultural one. The pieces that stuck with me the most were Nam June Paik’s videos presented in what I remember as dark cases. I remember that after seeing that exhibition, I began to visualize in my mind an interactive age machine in which viewers could see themselves older. I knew nothing at all about computers so I knew I’d have to find the people who did!
I grew up with a science background. My mother was a lab technician and my favorite memories of my childhood were of doing fake experiments with real blood! I put everything I could under my brother’s microscope!”
Nancy met early computer graphics consultants through Experiments in Art and Technology (EAT), the organization founded by artist Robert Rauschenberg (known for his collages incorporating everyday materials) and Billy Klüver, a Bell Labs engineer. Rauschenberg and Klüver had worked with scientists to organize the "9 Evenings" performances at the 69th Regiment Armory in NYC in 1966, and they established the non-profit EAT that same year. EAT paired artists with scientists, and Nancy was matched with an early graphics specialist who informed her that the technology to produce aged images was years in the future. In the interim, Burson turned to painting, and in 1976 it was suggested she contact Nicholas Negroponte, the head of MIT's Architecture Machine Group, later called The Media Lab. They had just hooked up a camera to a computer, an early version of a digitizer or scanner. It was one of the first times that a computer interacted with a live version of a face: it took five minutes to scan a live face, and subjects had to lie underneath the copy stand and be told when they could blink.
Originally, she thought that beyond an art context her idea would only be useful for cinematic special effects. However, she soon realized that the technology she developed was more useful than she initially thought. By 1981, Nancy was issued a patent for her aging machine, "The Method and Apparatus for Producing an Image of a Person's Face at a Different Age."
N.B.: “In my initial meetings with MIT, I was paired with a collaborator, Thomas Schneider. The project was dubbed ATE, an homage to its first beginnings from EAT. It was Tom’s design for the triangular grid which still remains the standard morphing grid in the industry today for everything from sophisticated AI software to SnapChat. When I left MIT in 1978, Tom and I became co-inventors, and he turned over his rights to me. In 1979, I was invited to present my initial video of three faces aging that were the result of my work with MIT at the annual SIGGRAPH computer graphics conference. It was that conference that first exposed the new facial morphing technologies to everyone who was to become anyone in the industry, including those who became the graphics specialists at Industrial Light and Magic (ILM) and Pixar films.
Through my former colleagues at MIT, I continued my work with two computer graphics specialists working for Computer Corporation of America, Richard Carling and David Kramlich. We fashioned our early morphing sequences into funny video presentations that were shown in the SIGGRAPH film shows in 1983 and 1984, further exposing the technology to everyone in the industry. And it was Richard, David, and I who collaborated on my first book called Composites: Computer-Generated Portraits published in 1986.”
Early works (1976–1999)
To facilitate the development of the software being built by MIT from 1976 to 1978, Burson created a self-portrait series, "Five Self-portraits at Ages 18, 30, 45, 60, and 70," in which she documented her own future aging process, collaborating with a makeup artist who used makeup to transform her face to each of those ages. It was that study that informed how the software would enable her to explore methodologies for altering and manipulating human features. Through the 1980s, she also used digital technology to blend the faces of groups of individuals to produce Composite Portraits.
Burson often digitally combined and manipulated images of well-known individuals, such as famous politicians and celebrities. Through these images, Burson investigated political issues, gender, race, and standards of beauty. She created one of her most important works, “Warhead I” (1982), by using images of five world leaders—Ronald Reagan, Brezhnev, Margaret Thatcher, François Mitterrand, and Deng Xiaoping—proportionally weighting the image by the number of nuclear warheads deployable by the nation they led.
Burson also examined the concept of beauty in our society. In some of her first composites, made in 1982, she compared styles between the 1950s and the 1980s, exploring how beauty is defined in our society. Her "First Beauty Composite" (1982) featured the faces of Bette Davis, Audrey Hepburn, Grace Kelly, Sophia Loren, and Marilyn Monroe, while her "Second Beauty Composite" (1982) featured images of Meryl Streep, Brooke Shields, Jane Fonda, Diane Keaton, and Jacqueline Bisset.
She also explored other themes through her composite portraits. Her "10 Businessmen From Goldman Sachs" (1982) was a blend of 10 businessmen from that firm, while her "First and Second Male Movie Star Composites" blended images of movie stars in yet another style comparison. The "Big Brother" image, produced in 1983, was a mix of the faces of political figures Hitler, Stalin, Mussolini, Mao, and Khomeini, made for a CBS TV special that recreated George Orwell's Big Brother from his novel 1984.
The process of morphing faces was not only time-consuming but also required a considerable amount of skill and effort in its early days.
N.B.: “In 1982, the process that we used to morph faces was different in that we would overlay all the images on the eyes and ascertain an average. Then we would warp, or morph, each face to fit the average. It took 25 minutes to scan a face and morph it to the average face and then we’d combine them all together. It was an incredibly tedious process. Nowadays, there are some great programs available online which morph faces together and can weight them in different percentages as well.”
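For the curious, here is a rough modern sketch of that idea; it is not a reconstruction of Burson's 1982 pipeline, just an illustration of aligning each face on the eyes and then averaging the aligned images, with optional unequal weights as in the proportionally weighted "Warhead I" (Python with OpenCV and NumPy assumed).

```python
import cv2
import numpy as np

def similarity_from_eyes(eyes, target_eyes):
    """Build a 2x3 similarity transform that maps the two eye points onto shared target positions."""
    (x1, y1), (x2, y2) = eyes
    (u1, v1), (u2, v2) = target_eyes
    # Solve u = a*x - b*y + tx,  v = b*x + a*y + ty  for (a, b, tx, ty).
    A = np.array([[x1, -y1, 1, 0],
                  [y1,  x1, 0, 1],
                  [x2, -y2, 1, 0],
                  [y2,  x2, 0, 1]], dtype=float)
    a, b, tx, ty = np.linalg.solve(A, np.array([u1, v1, u2, v2], dtype=float))
    return np.array([[a, -b, tx], [b, a, ty]], dtype=float)

def composite(images, eye_points, weights=None, size=(512, 512)):
    """Blend several face images after aligning them on the eyes.

    Equal weights give a plain average; unequal weights give a proportionally weighted composite.
    """
    target_eyes = [(size[0] * 0.35, size[1] * 0.40), (size[0] * 0.65, size[1] * 0.40)]
    if weights is None:
        weights = [1.0] * len(images)
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    acc = np.zeros((size[1], size[0], 3), dtype=float)
    for img, eyes, w in zip(images, eye_points, weights):
        M = similarity_from_eyes(eyes, target_eyes)
        acc += w * cv2.warpAffine(img, M, size).astype(float)
    return acc.astype(np.uint8)

# Usage: composite([img_a, img_b], [eyes_a, eyes_b], weights=[0.7, 0.3])
# where eyes_a = [(x_left, y_left), (x_right, y_right)] are hand-marked eye coordinates.
```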
One of Nancy Burson’s most important works is the “Age Machine”, which utilized her patented technology of 1981, “The Method and Apparatus for Producing an Image of a Person’s Face at a Different Age.”
The “Age Machine” was an interactive installation that simulated the aging process, offering a glimpse into what viewers might look like in the future. The installation scanned the face, and the viewer interactively moved data points onto their features (corners of the eyes, nose, mouth, etc.). The software then applied an aging template that corresponded to the viewer’s facial structure. Over the years, David Kramlich increased the speed of the process from 25 minutes to just a few seconds, making it possible for viewers to be active participants within the museum experience. Quickly garnering attention for its boundary-pushing creativity, the “Age Machine” used cutting-edge technology to delve into questions surrounding identity and aging. It was shown at many important venues, including the Venice Biennale in 1995 and the New Museum (NYC) in 1992.
Another variation of the initial morphing process was used by law enforcement agencies to find missing children and adults, predicting how they would look many years after their disappearance. Copies of the software were acquired by the FBI as well as the National Center for Missing Children. By the mid-1980s, at least half a dozen children and several adults had been found thanks to Burson’s updating methodologies and collaborations with law enforcement. Through her art, Burson did what other artists only dream about: she changed people's lives.
N.B.: “I was completely unaware of the software’s potential use in finding missing children. However, in 1983, the Ron Feldman Gallery asked me into a group exhibition called “The 1984 Show”. The piece I chose to exhibit were aged images of Prince Charles and Diana with their “composite” teenage son, William. A woman with a missing child saw the show and asked me if there was anything I could do to update children’s faces. It was the true beginning of my work with missing kids. I knew the process we had been using to composite faces needed to be adjusted to update children’s faces so David Kramlich and I reworked the process. At that point, there were some Hollywood producers who came to us with a couple of serious cases of parental abductions and when they were aired on national TV, the children were found within the hour. It was wild to see the images we’d created on a computer screen come to life when those kids were found.”
Based on the same technique, Burson developed other machines as well. The “Anomaly Machine” (1993) showed viewers how they would look with a facial anomaly, based on the research and data she collected while taking portraits of real children’s faces with various craniofacial disorders. The “Couples Machine” combined two people in whatever percentages they wanted. All these machines were offshoots of the original Couples Machine from 1989, which combined viewers’ faces with movie stars and politicians and was shown originally at the Whitney Museum.
Later works (2000 - now)
Besides the above creations, another important artwork was based on Nancy’s development of "The Method and Apparatus for Producing an Image of a Person's Face at a Different Age". “The Human Race Machine” was a public art project inspired by a mid-1998 meeting with one of Zaha Hadid's staff and launched at the Mind Zone in the London Millennium Dome on January 1st, 2000. It was the perfect backdrop for the project, which was an immense success, with long queues of visitors waiting for hours. “The Human Race Machine” used morphing technology to show people how they would look as a different race, including Asian, Black, Hispanic, Indian, Middle Eastern, and White. After the British launch, “The Human Race Machine” went on to be featured in diversity programs in colleges and universities across the US for over a decade. It was also featured in Burson’s traveling retrospective, which began at New York University’s Grey Art Gallery in 2002 and was nominated for Best Museum Show of the Year by the American Art Critics Association.
The project provided viewers with the visual experience of being another race, and perhaps helped to promote a message of unity by allowing people to experience what it is like to be someone of a different race. She later created several other works that explore the concepts of diversity and equality.
N.B.: “In 2000, I did a project for Creative Time that accompanied a group exhibition exploring DNA in which I showed the “Human Race Machine”. Accompanying the exhibition, Creative Time commissioned a billboard at Canal and Church Street that said, “There’s No Gene For Race.” I consider the “DNA HAS NO COLOR” message of these sculptures and billboard an update of that same concept. Race is merely a social construct having nothing to do with the science of genetics. In fact, our DNA is really transparent, although scientists color it so it can be seen. With the prevailing upturn in racism today, I believe “DNA HAS NO COLOR” is an even more timely message than “There’s No Gene For Race” was almost 23 years ago. We are all one race, the human one.”
Nancy Burson has always had a keen interest in political issues. In relationship to her “Human Race Machine”, she created a series of images of Donald Trump as different races prior to his election. The “What If He Were: Black-Asian-Hispanic-Middle Eastern-Indian” was originally commissioned by a prominent magazine, but the magazine decided not to publish the images. Despite this setback, Burson remained committed to her work. These images were her attempt to not only expose Trump as a racist, but to challenge public consciousness regarding racist beliefs in the US.
Even though the work depicting Trump could not be published in that magazine, Huffington Post eventually published it after it was shown at the Rose Gallery (Santa Monica, CA) in 2016.
Another work, called “Trump/Putin”, was selected for the cover of Time magazine in 2018. This work showcased a slow transformation of the faces of Donald Trump and Vladimir Putin into one face. The image illustrated the sudden, unexpected liaison between these two leaders and Russia's potential involvement in the 2016 United States presidential election.
Burson already knew the photo editor of Time magazine, Paul Moakley, who had selected one of her earlier works, “Androgyny” for Time’s book, “100 Photographs: The Most Influential Images of All Time”. Moakley was impressed by “Trump/Putin” and featured it on the cover of Time magazine. The image went viral immediately.
Nancy Burson's passion for addressing pressing political issues through art continues to this day, as evidenced by her recent work "The Face of Global Carbon Emissions". This piece depicts the top five heads of state whose countries are the biggest contributors to global warming. The composite image is weighted according to the approximate percentages of each country's carbon emissions. The featured leaders are Xi Jinping of China (contributing 28%), Joe Biden of the US (contributing 15%), Narendra Modi of India (contributing 7%), Vladimir Putin of Russia (contributing 5%), and Fumio Kishida of Japan (contributing 3%).
N.B.: “I prefer to make composites these days as political statements. “The Face of Global Carbon Emissions” grew out of my concern for our Mother Earth. We need to reduce our carbon emissions or risk extinction of humanity. I wanted to put a human face on the problem by combining the five world leaders who hold the power to change climate policies.”
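For readers curious about the mechanics, the kind of weighted composite described above can be approximated in a few lines of code. The sketch below is not Burson's actual software: it is a minimal illustration, assuming pre-aligned portrait files of identical size (the file names are hypothetical), of how each face can contribute to an averaged image in proportion to a weight such as a country's share of emissions.

```python
# Minimal sketch of a weighted photographic composite (not Burson's software):
# each aligned face image contributes in proportion to its weight, here the
# approximate carbon-emission shares mentioned above.
import numpy as np
from PIL import Image

# Hypothetical, pre-aligned portrait files of identical size.
faces = {
    "xi.png": 0.28,
    "biden.png": 0.15,
    "modi.png": 0.07,
    "putin.png": 0.05,
    "kishida.png": 0.03,
}

total = sum(faces.values())
composite = None
for path, weight in faces.items():
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    contribution = img * (weight / total)      # normalise so weights sum to 1
    composite = contribution if composite is None else composite + contribution

Image.fromarray(composite.astype(np.uint8)).save("composite.png")
```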
Expressing social issues and bias
Nancy Burson has dedicated her career to exploring the complexities and mysteries of the human face and the societal issues it reflects. Faces have always been an important part of human communication and expression. Portraiture represents a person's likeness and has been used throughout history to depict individuals and their personalities, capturing their essence through subtle cues like facial expressions and body language. Nancy Burson does not use portraiture to represent personal expression. Rather, she highlights social, political, and cultural issues, focusing on larger societal biases and injustices.
The human face is a mysterious and changeable thing. Burson's art explores how it is influenced by social and scientific ideas and biases. Her work uncovers the fluidity of concepts such as beauty, sex, race, power, family, and even species. It challenges viewers to reevaluate their ways of seeing the world. She wants us to see the works and actively engage with them, instead of just decoratively viewing them.
Morphing Techniques and AI
Morphing techniques and many AI image generative programs blur the line between reality and illusion. AI can generate pictures and videos that appear almost real, and morphing techniques can allow for the seamless transition of one image into another. When these two technologies intersect, the possibilities for creating lifelike illusions are almost limitless.
One prominent example of AI-based morphing is the deepfake, which uses an algorithm to replace one person’s face with another in a realistic way. The technique raises deep concerns about its potential use for fake news and propaganda.
Another example of the intersection of morphing techniques and AI is GAN (Generative Adversarial Networks). GAN is a type of deep learning model that can generate new realistic images by blending and morphing elements from other images. It is applied in various fields, like art, fashion, and design. In recent years, GANs have been used to create unique and visually striking art pieces that blend elements from different styles seamlessly. The resulting images are both surreal and captivating, blurring the line between reality and imagination.
Nancy Burson has been keeping a close eye on the advancements in technology, including the rise of artificial intelligence. While she acknowledges the potential benefits of this new tool, she also recognizes the potential for misuse and unintended consequences.
N.B.: “Each new technology brings its own potential misuses along with it. I consider AI to be a powerful new tool that I think is especially good at illustrating abstract concepts. Clearly some limitations have already been found, such as the software’s apparent racial bias that’s especially disturbing when used for law enforcement. I never imagined there would be a time that AI composited fake people would be used as spokespersons for websites. That’s terrifying to me.”
MANIFESTO TERRICOLA by SOLIMÁN LÓPEZ
We are thrilled to introduce the "Manifesto Terricola", an artistic and scientific project developed by Soliman Lopez in collaboration with Polo De Vinci, MIT Media Lab, ESAT, Vivien Roussel, Maggie Coblentz, Javier Forment, and Lena Von Goedeke. The project explores the current state of humanity and our future, with a particular focus on the environmental changes happening on the planet. The manifesto is stored in DNA and preserved in a biodegradable ear-shaped sculpture on Svalbard Island in the Arctic. It defends a positivist view of biotechnology as a new material of human expression, working in harmony with nature. The message of the Manifesto Terricola is integrated into this time capsule, frozen and icy at the Arctic pole, as a kind of farewell to the Earth we once knew, to say that we have changed it and ourselves. The project draws inspiration from art history, using the format of a manifesto, and refers to Vincent Van Gogh's ear. Solimán López, a well-known figure in the contemporary art world, developed the project using his expertise in new technologies, new media art, blockchain, and biotechnology.
Written by Soliman Lopez
1< INTRODUCTION
The main objective of this document is to explain the details of the project "Manifesto Terricola" by artist and researcher Solimán López, in collaboration with Polo De Vinci, MIT Media Lab and ESAT (Escuela Superior de Arte y Tecnología). The project is developed under the supervision of researcher and expert in biomateriality Vivien Roussel, with the support of researcher and artist Maggie Coblentz and the collaboration of bioinformatician Javier Forment and artist Lena Von Goedeke.
In the context of the residencies proposed by the two collaborating institutions, with the island of Svalbard as a field of work, the artist was selected to develop an artistic project relevant to the local, global, artistic and human context: one that questions the current reality of humanity and our future, and that opens research opportunities within the analysis of the Arctic as a testimony to the environmental change taking place on planet Earth. After several meetings and deliberations, and taking into account the subject matter, the technological challenge and the proposed innovation, the working team chose the "Manifesto Terricola" project on the basis of the details explained below.
2< WHAT IS THE MANIFESTO TERRICOLA?
It is an artistic document that presents the current situation of humanity in different areas such as economics, ethics and morality, psychology, geopolitics, the environment, art and the connection between humans and the Earth.
But the document also has a particular materiality: it is stored in DNA and encapsulated in a biodegradable, bioprinted material in the shape of an ear, produced to be preserved on Svalbard Island in the Arctic.
Formally, it follows two historical approaches to the meaning of an artistic manifesto, being at the same time a work and text of intentions and a scientific tool. These intentions lie in an intrinsic analysis of materiality, the intangible, the anthropocene, science, digital storage and art itself.
The manifesto is separated into 5 conceptual blocks:
Block I: ENVIRONMENTAL TRANSITION
Block II: SCIENCE AND BELIEFS
Block III: ECONOMICS
Block IV: TRANSHUMANISM / AUGMENTED HUMANS
Block V: CONSCIOUSNESS / ART
Each of these content blocks is stored in its own DNA sequence, and all of them are inserted together in a bioprint that stores and encapsulates the digital content.
The Manifesto claims pure conceptual production and the new materialities of the digital and intangible as a form of expression and a sublime relationship with nature in order to extend the understanding of the very essence of human technology and bioentities.
The Manifesto Terricola also defends a positivist stance towards the advances of biotechnology as new materials of human expression and conciliation with nature and the biological. The text proclaims the integration and acceptance of technology as a new engine of consciousness for the future of humanity, thought and art, and positions our generation in context with a future one and vice versa. The overall message of the Manifesto Terricola is at the same time integrated into a natural time capsule, frozen and icy at the Arctic pole, as a kind of farewell to the Earth we once knew: yes, we have changed you, and in doing so we have changed ourselves.
The manifesto also calls for a conceptual engagement with the crucial debate of the moment, arguing that all current artwork must have an intrinsic critical eye on these issues in order to balance with the current moment.
The Manifesto takes the form of a sculpture in the shape of a bioprinted human ear containing the DNA-encoded information of the manifesto itself.
3< THE ARTISTIC WORK, ITS REPRESENTATION
The work consists of two parts: the content of the manifesto (its audio and image), and the materiality that houses it, a human ear printed three-dimensionally using what is called bioprinting.
This object has a strong presence in the history of art, with clear references: the painter Vincent Van Gogh's tragic episode with his own ear, severed to "make himself heard"; the famous ear implanted in the forearm of the artist Stelarc so that an implant could listen and speak at the same time; and the project by the bio-artist Joe Davis, who introduced an image of the Milky Way into the DNA of a mouse ear.
These precedents serve as a solid base for a new evolution of this ear, one that has listened to the history of art and now proposes another materiality and adds new concepts.
On the one hand, the ear itself represents transhumanism through its materiality and the possibility of being printed in a biomaterial. On the other hand, this object/organ/listening interface holds frequency information within itself and, like Stelarc's ear, listens and speaks, while carrying a global message like Joe Davis's. It also, of course, represents the tragedy of our humanity as destroyers of our own environment.
Within the project there is an important communication dimension linked to its distribution and dissemination. Certain actions and exhibitions are therefore foreseen, whose main object will be a replica of the ice core that incorporates the manifesto inside it.
The creation of large-format video projections and immersive installations are also foreseen.
4< MAIN PRECEPTS FOR THE MANIFESTO TERRICOLA
Why Terricola?
From the Latin terricola ("inhabitant of the earth or soil"). Looking at the etymology of the word, the prefix terri- is self-explanatory (earth, from terra), but it is interesting to look at the suffix -cola, which comes from -cul, referring to cultivation. The root -col/-cul is related to the Indo-European root kwel, meaning to stir or to move, which we find, for example, in a word of Greek origin such as kyklos, meaning wheel. It is even more interesting that the Latin verb colere means both to cultivate and to inhabit, which also connotes the concept of the sedentary, a term already mentioned in relation to the first civilisations that manipulated cereals to build a viable economy in their environment. That notion is now called into question by the manifesto, insofar as our land is no longer suitable for this cultivation or this habitability, turning us into extra-terrestrial beings.
Why the Arctic and Svalbard?
The technical and conceptual characteristics required to carry out the project are ideal in Svalbard. On the one hand, it is a sensitive place for the use of art as a model of communication and commitment to the preservation of the Arctic as a key element in the global balance. On the other hand, the scientific research intrinsic to the project finds at this latitude a perfect place to work in low-temperature conditions, in an isolated location with easy access for proper installation and monitoring. Finally, Svalbard has great symbolic power as a place weakened by and victim of the repercussions of human activity and the accelerating impact of climate change, making it a global icon for a committed, frontal message such as the one proposed here.
Who is the manifesto addressed to?
The target audience of this manifesto is anyone who wants to approach it in any of its forms, but within the framework of the general concept and the intellectual proposal it implies, we can identify 4 fundamental population groups in terms of the relationship they have with technology and the evolution of our species:
1º Humans who have accepted the change brought by the Anthropocene and who, through laziness, selfishness, ignorance, comfort and many other related factors, openly contribute to the uncontrolled entropy generating the imbalance we are experiencing.
2º Humans who intend, through an outstanding use of technology, to reverse the changes and return the Earth to its ideal state of habitability, with advanced design, sustainability as a watchword, and cooperation.
3º Humans who, with an outstanding use of technology, contribute like the first group to the growth of the Anthropocene, but whose roadmap is the exploration of other places in space, leaving the Earth behind as habitat in a kind of space colonisation based on the exponential growth of our technology, consciousness, energy and species.
4º Humans isolated from the debate by unmet primary needs, whose level of consciousness, resources, geographic location and political situation leave them without the capacity to position themselves on the issue.
Motivations:
Every work of art, whatever its nature, requires an action, however symbolic it may be. A motivation. I define below some of the motivations for the creation of the Manifesto Terricola.
Eco anxiety: It has been mentioned before, but this manifesto captures the global eco-anxieties derived from a society in biological check.
Art: It is in itself a motivation, but also an instrument and vehicle of communication; a means of visual, conceptual and graphic transmission; symbol and information package, as well as a generator of community and debate.
Message:
It is said that the end is the message, and in this case that is very evident. The proposed message is one embedded in neutrality. The manifesto proposes a place of debate and a message addressed to a potential recipient in the future, to make them understand the moment they are in when they receive it and to draw clear lines of understanding and experience around the project's central issue: the "habitability of the earth and our suitability to it".
Permafrost: Leaving a message for generations to come has always been an aim of our civilisations, from the most ancestral ones in their eagerness to build in stone and leave hidden messages in every centimetre. This intention is also present in this project, through the staging of the manifesto within a natural process invoked under the concept of permafrost.
Research: To pose a scientific experiment that can re-signify the sustainable encounter of humanity's information in balance with nature or what we could call biodata storage.
5< SCIENTIFIC DIMENSION
Numerous scientific studies have already identified DNA as a suitable bio-marker for studying characteristics related to melting ice, permafrost, movement, friction and the identification of life in glacial and, more generally, hydrological environments, where traces are particularly difficult to control. It is worth highlighting a study published in Environmental Science & Technology in which such problems are addressed with the use of PLA (polylactic acid) and DNA encapsulants such as silica or clay gel, among others, which together make these bio-markers an ideal material.
In our eagerness to keep the environment intact and not generate any alteration in the short or long term, we have chosen the same materiality for our artistic action, knowing that it is a material that is already being used in different ongoing studies in places such as Antarctica, Latin America and Asia.
Earlier we explained how the manifesto is an enunciative text and at the same time a work in itself. But it is also a scientific object that helps us to recognise the changes in the place in which it is embedded, identifying four fundamental lines of scientific contribution in the context of current research on the island.
- DNA as a guarantor of digital information: We are interested in studying how the DNA containing the manifesto's information endures extreme conditions of temperature, friction and radiation. We intend the manifesto to become a proof-of-life beacon that will be recovered in the coming years and provide hard evidence of the versatility of this digital storage model, which can help preserve part of our general culture.
We can demonstrate that the information integrated into the glacier through the manifesto is durable, and that it is a future opportunity that revalues the ecosystem and connects it with new nanotechnologies.
- Biomaterial as a guarantor of environmental neutrality: We also want to demonstrate with this action how scientific and artistic production can be carried out with zero environmental impact, through the choice of suitable materials arising from in-depth research into biomaterials.
- Analysis of the suitability of storing digital information in a way that is harmless and sustainable for nature and sensitive ecosystems: Demonstration of new environmental applications for the linking and storage possibilities of the visual and digital culture of today's society. Stable natural media as an inert repository.
- DNA as a vibratory witness: Analysis of the vibrational changes in DNA resulting from climate change and environmental disturbances.
Our project is an exploration of data conservation and new materialities. We are interested in studying how synthetic DNA containing translated digital information can hold up under extreme conditions of temperature, friction and radiation. It takes the form of an art-science project led by the Université Léonard de Vinci (represented by the Association Léonard de Vinci) and the artist Soliman Lopez. The artist proposes to explore the question of humanity's narratives at a time of climate crisis and dematerialisation of data, through a DNA encapsulation in biodegradable biomaterial (biotechnology developed by Tissue Labs) and a DNA encoding method developed with the bioinformatician Javier Forment from the Universidad Politécnica de Valencia and the GenScript laboratory, with which the artist has been collaborating for four years. The Manifesto Terricola claims pure conceptual production and the new materialities of the digital and intangible as a form of expression and a sublime relationship with nature, in order to extend the understanding of the essence of human technology and nature.
Humanity is faced with the question of how to preserve its heritage and what traces it will leave behind, whether intentionally or not. An important part of human activity is recording and storing data for various purposes. These means are not entirely unrelated to the global rise in temperature. For thousands of years, permafrost and polar ice have been naturally accumulating data from environmental activities and life on the earth's surface. It is therefore possible to rationally consider the issue of storage and its environmental cost from a more ecological and sustainable perspective. If we are to continue producing this vast amount of information, it is urgent to find a better way to store it and connect it directly to the natural environment, conceptually and molecularly.
With the "Manifesto Terricola" project, Solimán López proposes to use the concept of permafrost for biological storage (DNA storage) under the polar ice, merging sustainability, research into new materials, social responsibility towards humanity, and art. Our project relates to research on DNA, nanomaterials and biomaterial degradability with regard to data conservation and bioprinting. DNA tracers are already used in different research areas (hydrology, nanomaterials, DNA and chemical tracing, etc.) for their properties: "specificity, environmental friendliness, stable migration" and high differentiability. DNA is here a guarantor of digital information: we intend to realise a manifesto whose digital content, extracted from the manifesto's text and encoded in DNA form, becomes a proof-of-life beacon that will be recovered in the coming years and provide hard evidence of the versatility of this digital storage model, which can help preserve part of our human culture. We will use an artist's text that addresses the same question the research does, but from a sociological perspective, including a visual icon to make the research more recognisable and self-explanatory.
We also propose a new visual encoding through the metaphor of frequencies, integrating into the stored information more possibilities of understanding for future generations. Using a universal language such as vibration and frequency waves brings us closer to a wider community, not tied to a single kind of language.
On the scientific side, the analysis of vibrational changes in DNA resulting from climate change and environmental alterations will be pursued with the help of the research team from the De Vinci Innovation Center and the rest of the team. In practical terms, we insert a biodegradable object, developed in partnership with the biological lab, into a hole left by an ice drill core, as a marker of our time. It contains the DNA-encoded text of the manifesto.
Biomaterial is a guarantor of environmental neutrality, and this action demonstrates how scientific and artistic production can be carried out with zero environmental impact. We base our research on our partners' expertise and on the scientific literature on environmental DNA markers: "DNA molecules are essential constituents of life, which have existed ubiquitously in food, air, water, and also in almost all life forms. Therefore, it is naturally nontoxic. In addition, the sequences of DNA tracers are all artificially designed and chemically produced, but not from the genome of any organisms; therefore they neither possess any hereditary properties nor carry any potential risk of introducing foreign genes to any hosts."
With this project and the subsequent correlative analysis, we demonstrate that the information embedded in the glacier (through the manifesto) is sustainable and an opportunity for the future to revalue the ecosystem and connect it with new nanotechnologies. Demonstrating new environmental applications for the linking and storage possibilities of the visual and digital culture of today's society is a huge challenge. Natural media, as a place of inert storage, can change the perception of materiality and our relationship to the world. Perhaps this message can lead people to a particular relationship with data and matter, where data is not destructive but constitutive of the materiality of the world.
6< THE ACTION IN SVALBARD
Date: 15 - 26 April 2023.
Localisation: TBD.
Action description:
We propose an exploration using an ice drill in a higher area of a glacier. We want to extract a column of ice and deposit our biomaterial at its base.
We propose a depth of around 100 metres for placing the biomaterial.
Once the biomaterial, around 10 cm in size, is deposited at the bottom, we will put the ice column back in its place.
With this action we ensure the permanence of the biomaterial, frozen and in a safe conceptual place. If the melting process should ever expose the sculpture at the surface, it will mean we need to react more urgently and faster to stop the melting. The whole process will be recorded.
7< ENVIRONMENTAL CLARIFICATIONS
The DNA created in the laboratory for the purpose of storing and encoding digital information is a totally inert and isolated material that is encapsulated in a mineral such as silica, which is also present in nature. If at any time this micron-sized material were to come into direct contact with an aquatic environment, for example, it would be completely diluted and biodegraded without any additional effect on its environment.
On the other hand, for the "beacon" object, we propose several material alternatives, all of them equally effective due to their inert character, present in nature and their total biodegradation.
In summary:
DNA with no possible expression.
Easily degradable and absorbable biomaterial without any trace.
Inert material doubly protected by an inert material.
Materials that are visually discreet and in total harmony with their environment.
Absolute respect for local communities.
8< DESCRIPTION OF MATERIALS USED
Below we describe the material used for the sculpture/manifesto:
4 µg (micrograms of ribonucleic acid) x 5, plus biomaterial for 3D printing.
In general in bioprinting the most common materials are:
Biodegradable polymers, such as collagen, alginate, poly(urethane ester) and poly(lactic-co-glycolic acid) (PLGA).
Gels, such as agarose gel, gelatine, polymeric hydrogels and alginate gels.
Nanoparticles, such as gold, silver and copper nanoparticles.
Ceramic materials, such as hydroxyapatite, bioglass and zirconium oxide.
Metallic materials, such as titanium, stainless steel and gold.
Extracellular matrices, such as matrigel matrices and chondroitin sulphate matrices.
Cells, such as stem cells, progenitor cells and differentiated cells.
Growth factors, such as nerve growth factors and bone growth factors.
As our material is supposed to be inert we will use ceramic materials and biodegradable polymers to obtain 0 trace.
Bio materiality:
The possibilities for storing DNA in materials available in nature are very broad, but on this occasion we have decided to work with the more stable ones in the short history of DNA encapsulation, so as to avoid surprises.
All natural environments are sensitive, but this issue is even more visible in the context of an Arctic island. The ecosystem must be respected and the introduction of a non-"natural" material is not within the possibilities and logic of the project.
On the other hand, we have already mentioned the conceptual idea of associating the project with an evident aesthetic minimalism, which could be linked to Suprematism and Malevich's "White on White" in an exercise of "zero pollution" and respect for the environment. Furthermore, this visual exercise reinforces the idea of the "out-of-place artifact" (OOPArt) and the effect of surprise within the permafrost that the current situation makes seem possible in the future. The size of the intervention is also no more than 8 cm.
The use of a bioprint is an important part of the project and helps us understand it better: an object designed for listening, which at the same time contains a sound message in its materiality, and which is embedded in the depths of nature in order to listen.
It should also be noted in this section that biology is positioned as a perfect place to store the work and its content, identifying a sort of metalanguage with nature through its insertion into nature itself. On the one hand, we include the content molecularly; at the same time, and symbolically, we bury it in its essence, in a glacier, with a material that will listen to the Earth.
9< STEPS OF CREATION
We define below the steps necessary to carry out the work.
1º. Compilation of conceptual and intellectual material for the development and drafting of the Terricola Manifesto at document level.
2º. Tokenisation of the text on one or more blockchains for its memory and digital registration, as well as proof of existence and authenticity (see the hashing sketch after this list).
3º. Rematerialisation of the digital document: transcription into a DNA sequence using Python code.
4º. Sending the coding to the laboratory. In this case, the most feasible option is the GENSCRIPT laboratory with which the artist has already worked in the past.
5º. Creation in the laboratory of the DNA with the sequence sent.
6º. Sending of the genetic material to the Paris team for its final encapsulation.
7º. Encapsulation in biomaterial. Introduction of the DNA in the process of creating the bio print.
8º. Proof of its existence. Sending the sample to the laboratory of origin for analysis and verification. Presence test.
9º. Tokenisation of the final result.
10º. Journey to the Arctic for its introduction into the glacier.
11º. Identification of the precise location and extraction of ice block with similar techniques for iceberg stratigraphic analysis.
12º. Cut section at the critical length identified in a possible melt as "point of no return".
13º. Insertion of the "terricola disc" and compression between the ice blocks.
14º. Insertion of the ice block in its place of origin or "hollow".
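As a technical aside on step 2º above, "proof of existence" schemes typically anchor a cryptographic fingerprint of a document on-chain rather than the document itself. The sketch below is a minimal illustration of that idea, not the project's actual tooling; the file name is hypothetical.

```python
# Sketch of the "proof of existence" idea behind step 2: a SHA-256 digest
# uniquely fingerprints the exact contents of the manifesto file, and it is
# this digest (not the text itself) that would be registered on a blockchain.
import hashlib

def manifesto_digest(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

print(manifesto_digest("manifesto_terricola.zip"))  # hypothetical file name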
10< STORAGE AND FREQUENCIES
DNA STORAGE:
It is the process of encoding binary information into, and decoding it from, synthesised DNA strands.
Under the concept of DoT (DNA of Things), analogous to IoT (Internet of Things), these DNA strands can be incorporated into different materials that encapsulate the DNA or molecularly coexist with it. This is what was done in the artist's previous works: in the Harddiskmuseum, by encapsulating in silica the DNA containing the Museum's metadata, and in OLEA, through a group of bacteria expressing DNA designed from the synthesis of a token's smart contract. In this case, the .zip file, transformed into binary code using Python, will subsequently be sequenced in DNA and encapsulated in the biomaterial used for bioprinting.
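The exact encoding used with the GenScript laboratory is not detailed here, but the underlying idea of "sequencing a binary file in DNA" can be illustrated with a minimal, assumption-laden sketch: each pair of bits is mapped to one of the four bases. Real DNA-storage pipelines add error correction, avoid long runs of identical bases and split the data into short addressable fragments; none of that is modelled below, and the file name is hypothetical.

```python
# Illustrative sketch only: maps every 2 bits of a file to one DNA base
# (00->A, 01->C, 10->G, 11->T). Not the project's actual encoding scheme.
BASES = "ACGT"

def bytes_to_dna(data: bytes) -> str:
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):               # two bits at a time, MSB first
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def dna_to_bytes(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

payload = open("manifesto_terricola.zip", "rb").read()   # hypothetical file
sequence = bytes_to_dna(payload)
assert dna_to_bytes(sequence) == payload                  # lossless round trip
```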
The sequence: We intend to offer a new possibility and metaphor in the synthesis of information. To do so, we turn again to nature, choosing vibration and frequency as the matter of representation.
The Earth has what in physics is called a "magnetic transverse wave", which corresponds to the electromagnetic emissions produced by the Earth's motion and its events, such as storms, and their "encapsulation" in the ionosphere, resulting in a resonance of 7.8 Hz, as described by Winfried Otto Schumann in the 1950s. This frequency and its measurement are also used by scientists as a kind of thermometer to analyse changes on our planet, such as global temperature and weather conditions.
Coincidentally, the vibrations analysed in the human brain are also centred around a 7.8 Hz frequency, an eerie coincidence that somehow connects us with the Earth in an invisible but plausible way. Both frequencies are changing: since the 1980s, the Earth's frequency has reportedly risen to levels of 30 Hz, so we could say we are in a vibrational imbalance which, although not scientifically proven, is conceptually interesting to analyse. Is this further evidence of our extinction as earthlings? It is for this reason that the sequencing of our information will focus on this frequency metaphor, taking the form of a double file: an image with the spectrogram of a recording of a mechanical voice reading the text of the manifesto, and the audio itself.
The document will eventually contain 5 files.
5 image fragments showing the sound wave (spectrogram), the text associated with it, and the frequencies of the phonemes.
Each of the 5 DNA blocks corresponds to one of the 5 content blocks included in the manifesto.
Each element of DNA will be associated with one frequency in resonance with 7.8 Hz, to make the encoding vibrate in harmony with the Earth.
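By way of illustration, a spectrogram image of the kind described above could be produced along the following lines, here sketched with SciPy and Matplotlib and a hypothetical recording of one content block; this is not the project's actual tooling.

```python
# Sketch: render a spectrogram image of a voice recording of one manifesto
# block. File names are hypothetical; parameters are illustrative only.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("manifesto_block_1.wav")
if samples.ndim > 1:                 # mix stereo down to mono
    samples = samples.mean(axis=1)

freqs, times, power = spectrogram(samples, fs=rate, nperseg=1024)
plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-12), shading="gouraud")
plt.ylabel("Frequency [Hz]")
plt.xlabel("Time [s]")
plt.savefig("manifesto_block_1_spectrogram.png", dpi=150)
```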
Annex < BACKGROUND AND CONTEXT
The artist Solimán López has an extensive career in contemporary art, new technologies, interactives, blockchain and biotechnology, and is a relevant figure in the international panorama of what is known in the sector as "new media".
Among his works closest to the concept of this project, we can highlight the Harddiskmuseum, which was founded in 2013 under the concept of storing works of art based on digital files on a hard drive to revalue them and extract them from the noise of the internet. An attempt to put the spotlight on the value of the intangible in contemporary culture.
In 2019 this museum was the first to be stored in DNA under DNA Storage techniques that will be explained later due to the important role they play in this project. The Harddiskmuseum is also accompanied in its origin by the "Manifesto Intangible", a document that gives conceptual sense to the project and a clear precedent of the artist's artistic intentions.
Another reference project in the context of "Manifesto Terricola" is OLEA, in which the artist manages to store the smart contract of an Ethereum blockchain token also in DNA, which gives rise to what he calls a “biotoken".
This DNA, expressed in a group of bacteria, is subsequently introduced into olive oil, creating an original biological bank connected physically and conceptually with the blockchain and historically linking two economies, the first represented by agriculture and a current one based on digital.
In other projects such as Introns, the artist plays with the concept of DNA profiling to create geometric avatars that represent humans within the context of the metaverse, posing a visual and conceptual alternative to the concept of the traditional anthropomorphic avatar, reified and sexualised by the consumption of the body.
These are some of the examples the artist has been developing from his studio, but in the broader context, new media and digital art have gained, in recent years, a fundamental relevance for understanding contemporary culture.
With the pandemic provoked by Covid-19, and the processes of assimilation of disruptive technologies such as Web 3.0, most evidently represented by NFTs and the controversial cryptocurrencies, we find ourselves in a technological place where everything seems possible.
A second consequence of this pandemic is the close relationship between biology, science and digital technologies. Recall that our PCR results were already available on our digital devices, creating a direct conceptual bridge between the personal biological and the personal mobile device. In countries such as China and Japan, moreover, this process takes place in situ, in the home, where citizens already have the biotechnological machinery at their disposal to obtain results from their DNA samples.
All in all, the global context has meant that the relationship between art, science and technology (biotechnology) is nowadays very close.
Technology has become the new hope of humanity in many ways, even provoking a mystical displacement to the detriment of religion, which now struggles to adapt its historical and nineteenth-century precepts to a society that is no longer binary, neither hetero nor homo, neither black nor white, but undefined, a question that breaks with all the established canons firmly represented by the concept of heaven and hell.
If we continue with this simile, we can affirm that we find ourselves precisely in an ideological limbo, in which the forces and factual power indicate that technology and all that it entails, is a great power and a new weapon of mass destruction, with risks even as serious as sentimental genocide or genocide of the collective conscience.
And meanwhile, in this loop of anaesthesia, provoked in a certain way by the misuse of these technologies, especially communication technologies, which have turned our mobiles into the pixels of a great world television (commonly known as the "idiot box"), the Earth continues to spin, or at least it seems so (its core, precisely at this time in January 2023, has slowed its spin once again, as if to suggest it is going in another direction), and its polarisation, as is happening with society, is shifting.
We are both direct victims and executioners at the same time of what already has a name, "eco-anxiety", a psychological disorder already often mentioned by psychologists, which describes the alteration of the patient's state due to his or her concern about climate change. The term was coined and supported by the American Psychological Association in 2017.
But it is perhaps more interesting to bring up a term previously coined by the Australian philosopher Glenn Albrecht that already includes this tension and suffering derived from the habitable environment. “Solastalgia” defines the same state mentioned in eco-anxiety but in native population groups who are deprived of their traditional living conditions, their territory, customs, objects and surrounding nature, which makes them forced "aliens" from their former Earth. Perhaps a feeling that is already present in Inuit communities and peoples.
This climate change has also already been summed up in a term that comes out of the mouths of philosophers, snobs, environmental professionals, psychologists, artists, politicians... a widespread term but one that we have not yet defined in its magnitude because when a new terminology is introduced, its acceptance is inherently proposed. And this acceptance has implications of great weight for the future of humanity.
The "anthroposcene". This term, proposed by Nobel laureate Paul Crutzen in 2002, defines the current age symbolised by a constant and evident deterioration of the environment as a consequence of human activity. The term also contains other interesting definitions to be questioned, such as the concept of environment itself. From the moment this term becomes popular, as we said, we find a succinct acceptance. An acceptance of having islands and even continents of plastic, or the fact of beginning to take advantage of the melting Arctic ice to establish faster trade routes between East and West, or rather, between China and Europe. These acceptances are sometimes somewhat absurd, or is it that there is something we have not been told and that we are already prepared to inhabit an Earth that will never be as we understand it?
At times, I myself feel a bit like this, as if to say: well, it seems that everything is under control, that they know what they are doing, and that the change in our habitat is not urgent because a plan is already in place to deal with it.
But I honestly don't think this is the case. Our ecosystem is seriously damaged and for those of us who like it pure and clean, our environment is already an assault on the senses, the intellect, education, ethics, morality, intellectuality, conscience and respect for our species and others. There is practically no place in the world where we do not already have an imprint present. Even those places we have not physically visited already bear our trace, as in the case of that sea plastic bag, which like an anthropocene jellyfish floated in waters more than 10,000 metres deep in the Mariana Trench in the Pacific Ocean. Its discoverer, Victor Vescovo, broke the depth record for an exploration and evidenced another record for humanity, that of our complete incapacity to stop this vicious circle that is driven by the economy of the here and now.
If we talk about economy, we must go back to the origins of agriculture, back in 10,000 BC according to studies at the Göbekli Tepe site in Turkey. This period coincided with a change in the genetics of cereals, which allowed their intensive and extensive cultivation, the sedentarisation of humans and consequently "civilisation". This gesture changed everything, as the Earth ceased to be the Eden that offers us its fruits and became the factory of our desires. The motives and technology of this genetic alteration would be the subject of another study. What is interesting here to contextualise the "Manifesto Terricola" project is how the economy has taken an ideological place that only a new economy can replace, and for that we must re-economise the Earth and put it back in the right place of interest. This seems very complex in a global ecosystem where everything is polarised by immediate desire, well-being and the right to a "dignified life", a fight that, given the number of inhabitants on the planet, has already generated serious geopolitical borders for the understanding of the different contemporary civilisations. Hatred is the locus of the economy and is taking over everything to such an extent that we can affirm that we are in a society enraged, lost in anger and obviously disconnected by the consequences of this violent act.
But the situation is unsustainable, and an economy without a place in which to develop seems impossible. Here my doubts arise about the plan B being drawn up somewhere. It cannot be that such a strategy has not been designed by those whose economy will leave our atmosphere and spread to other places in the universe; if it has not, then logic itself does not exist.
It is very scary to see how the human being commercialises with everything that is essential, the aforementioned agriculture and the product of the Earth, the appropriation of its land and species, of the place called the world and its fundamental concepts and facts such as health represented by medicine or ethics by religion, among other disciplines.
All these issues are basic to the future of a complex society, and that is precisely where the primary business is established. The rest we all know. The only alternative, a paradigm shift, lies in moving the interests of this basic economy to other scales of value and principle, leaving the Earth to breathe easy; for if there is one thing I know for sure, it is that we are on Earth only as tenants.
Within this global social, economic and ideological context, when things go wrong for us, we find no solution and we feel alone in this vast universe, religions and great technologies emerge and, like a new deity, they are proposing a window of salvation to the one that is coming. This is undoubtedly generating an unprecedented ideological crisis, already noted recently, but much deeper than a paragraph can explain. The leap in humanitarian consciousness that we are witnessing is generating a social and class split. We could imagine this differential leap between the extreme Catholic and evangelist believers who make up more than 50% of the population of a country like Brazil, and demoralised atheist population groups living in innovation clusters and technological hubs on islands and natural paradises, who see in technology a new technocratised model of life that offers both informational and technical well-being as well as an evolved level of consciousness thanks to this massification of information, which, if well treated, is a very powerful weapon.
It is undoubtedly an unprecedented conceptual war, in which technology and science, with their empirical power, are brutally pitted against the power of history, myth and the deities present in the vestiges of an antiquity now questioned by the very techniques used, precisely because technology has called into question its origins and its main actors. This information, as controversial as any other for a system kept under control through faith, dismantles myths at the click of a button, which are in turn defended with the noise of fake news, image alteration, CGI and paid testimonies. A truly interesting moment, which distracts us so much that we forget precisely the big issues that are fundamental to keeping our ecosystem habitable.
This power struggle (science and technology vs. religion and faith) has led to what we have called "transhumanism", another interesting term coined in recent history by Fereidoun M. Esfandiary, which refers to a cultural, intellectual and technological movement in which human beings are modified through disruptive tools that improve their capacities on various levels. Undoubtedly, the leap in consciousness proposed in the interesting analysis of the Kardashev Scale could serve as an example to understand this evolutionary leap proposed by technology as a whole, taking us from mere hunter-gatherers to intergalactic beings who control space, time and matter at will, a kind of spatial agriculture which would undoubtedly have to be reviewed at its base.
Feeling transhumanist, and accepting this condition, gives us an environmental and medial "carte blanche" to relate to the world in an unconditional and unlimited way, expanding our physical, intellectual and cognitive capacities while at the same time helping us to adapt more quickly to an environment that we ourselves are modifying. This haughty attitude, which situates us as semi-gods empowered by technology, generates rivers of ink, as does the critical positioning it implies about our species and the identification of the moment when we cease to be who we are and become another, no longer terrestrial, species. It is what I call the "threshold", a tense positioning that provokes a leap in one direction or the other, the precipice or solid ground. What becomes evident is that staying on the threshold forever is not possible, because the poles have a great magnetism and in this matter there is no middle ground: to belong to technology is to belong to an intertwined system of hyperconnections and data that makes possible a new network, as if it were the pulses of fungi underground, while belonging to the disconnected model of information can generate a real split, even at the species level.
In any case, it is interesting to contextualise how transhumanism is the confluence and solution to this moral conflict between science and religion, giving way to a figure that contemplates both concepts through the empowerment and increase of once-human capacities thanks to technology, which will turn us into a different species, perhaps adapted to another planet that no longer answers to the name of Earth, just as we would not answer to the name of humans or Homo sapiens. This question is particularly relevant in the context of the "Manifesto Terricola" project, since the intentions and the overall message of the project do not lie in the idea of knowing ourselves in an uninhabitable world, but in understanding ourselves in the need to adapt physiologically, or to flee, and to rethink the opportunity this offers us or to regret what we have lost.
An ideal place to ask ourselves these questions is in the space of art and philosophy or the evolution of the intellect. Using this technological functionality to expand the capacities of expression and understanding of the world becomes a utilitarian obligation of these advances, with art being the place of convergence as it has been historically. The spaces of manifestation that we understand today as artistic have historically reflected the changes of the moment, the doubts, challenges, aesthetics, wars and disputes, opportunities and duels, and although we may think of ourselves as more "modern", art today continues to do the same. This is where the application of technology comes into play from a conceptual and respectful point of view, with a global message that expresses itself for all, and where the project begins to make sense.
After this contextualisation and existential approach, we present the main motivations of the Manifesto Terricola as a new opportunity of global consciousness change.
13< DOCUMENTATION AND COMMUNICATION
Audiovisual and photographic documentation of the scientific and artistic actions required to carry out the project, including the use of a drone with the necessary permits, is foreseen.
Bibliography and references:
https://editorial.centroculturadigital.mx/articulo/la-conciencia-la-ultima-fronterade-lo-humano
http://www.quimicaviva.qb.fcen.uba.ar/v19n2/E0182.html
http://www.ideam.gov.co/web/ecosistemas/glaciares#:~:text=Un%20glaciar%20se%20compone%20de,estado%20originando%20peque%C3%B1os%20drenajes%20o
https://brent.xner.net/pdf/PriscuetalQS.pdf
https://www.dw.com/es/deshielo-de-glaciares-tibetanos-revela-1000-microbios-desconocidospotencialmente-peligrosos/a-62373260
https://www.biotekis.es/dia-a-dia-con-las-enzimas/
http://www.scielo.org.co/scielo.php?script=sci_arttext&pid=S0034-74342006000400008
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3020885/
https://pubmed.ncbi.nlm.nih.gov/11055733/
https://www.frontiersin.org/articles/10.3389/fmicb.2022.894893/full
https://www.nature.com/articles/s41467-021-21587-5
https://academic.oup.com/nsr/article/7/6/1092/5711038
https://www.colorado.edu/ecenter/sites/default/files/attached-files/seracare_stability_of_genomic_dna_at_various_storage_conditions_isber2009.pdf
http://wearcam.org/cyborgsconference/CyborgsConference2021_proceedings_excerpt_p4-6_and47-64_.pdf
https://oursounduniverse.com/the-infrared-frequencies-of-dna-bases-science-and-artby-s-alexjander/
https://academics.skidmore.edu/blogs/jlinz/atom-tones/
https://pubs.acs.org/doi/10.1021/acs.est.7b02928
https://pure.roehampton.ac.uk/ws/portalfiles/portal/668193/Christian_Bok_s_Xenotext_Experiment_Conceptual_Writing_and_the_Subject_of_no_Subjectivity.pdf
https://www3.gobiernodecanarias.org/medusa/ecoblog/ysansanc/category/musica/analisis/
http://solar-center.stanford.edu/SID/activities/ionosphere.html
https://www.nature.com/articles/s41598-018-36341-z
https://interestingengineering.com/science/what-is-the-schumann-resonance
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5805718/
https://www.nature.com/articles/s41467-020-15744-5
https://www.nature.com/articles/s41598-020-60105-3
https://pubmed.ncbi.nlm.nih.gov/25584811/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1143999/pdf/biochemj00324-0036.pdf
https://pubs.rsc.org/en/content/articlelanding/2017/ra/c7ra06125k
https://pub.dega-akustik.de/ICA2019/data/articles/000410.pdf
https://hal.science/hal-00612116/document
https://pubs.acs.org/doi/10.1021/bi500344j
https://pubmed.ncbi.nlm.nih.gov/9181808/
https://hal.science/hal-01743602/document
https://academic.oup.com/nar/article/46/4/2074/4774278
https://www.sciencedaily.com/releases/2017/07/170704093618.htm
https://academic.oup.com/gji/article/206/2/1366/2606029
https://www.ncbi.nlm.nih.gov/guide/dna-rna/
https://www.researchgate.net/figure/a-Attachment-efficiency-of-DNA-RNA-on-clay-hydrogelb-Protection-of-DNA-in-clay_fig2_258335731
https://www.mdpi.com/2073-4360/13/10/1582#
https://www.xataka.com/investigacion/nueva-tecnica-mit-para-almacenar-archivos-adnmediante-capsulas-promete-hacer-facil-su-posterior-recuperacion
https://link.springer.com/article/10.1007/s42514-022-00094-z
https://berthub.eu/articles/posts/amazing-dna/
https://www.researchgate.net/publication/277023595_Natural_Data_Storage_A_Review_on_sending_Information_from_now_to_then_via_Nature
https://www.sciencedirect.com/science/article/abs/pii/S0165232X18304233
CONTACT DETAILS:
projects@solimanlopez.com
vivien.roussel@devinci.fr
HISTORY OF AI - THE NEW TOOLS: DALL-E AND MIDJOURNEY
While AI as an artistic tool has gained significant attention in recent years, it's important to note that this concept is not new. One of the earliest AI systems, AARON, was developed in the 1970s by Harold Cohen. AARON created paintings and drawings based on a set of rules and decision-making processes that Cohen programmed into the software. Since AARON's development, many other AI-based tools and software have been created to aid artists in their work. These tools have contributed to the field of AI art and have made it possible for artists to explore new forms of creativity and expression. In this article, we will discuss some of the recent AI tools and software that have emerged in the 21st century and are widely used in the art world: DALL-E and Midjourney.
DALL-E
The San Francisco-based company OpenAI not only developed ChatGPT, which we discussed in one of our previous articles, but also the image-generative AI model DALL-E, which is closely associated with the firm's name. DALL-E is a deep-learning-based system that generates digital images from natural language descriptions. It is named after the famous surrealist artist Salvador Dalí and the animated robot from the Pixar movie WALL-E.
Prior to developing DALL-E, OpenAI was experimenting with a language processing AI. In 2019, the company released GPT-2, which had 1.5 billion parameters and was trained on a large dataset consisting of 8 million web pages. This software was designed to predict the next word in a given text and was also capable of tasks such as question-answering, text summarization, and translation. The next iteration, GPT-3, uses far more parameters (175 billion) and is capable of performing even more impressive natural language processing tasks.
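Because the GPT-2 weights were released openly, its core task of next-word prediction can be tried directly. The sketch below uses the Hugging Face transformers library rather than OpenAI's own infrastructure, and is purely illustrative.

```python
# Sketch: GPT-2's core task, continuing a text by predicting the next tokens,
# using the openly released weights via the Hugging Face `transformers` library.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The computer art movement of the 1960s",
                   max_new_tokens=20, num_return_sequences=1)
print(result[0]["generated_text"])
```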
DALL-E was developed using this model, which serves as a basis for some of the underlying technology used in the project. In addition to GPT-3, the developers also utilize GAN (Generative Adversarial Networks) to bring DALL-E to life.
GAN works with two neural networks: a "generative" network that is trained on a specific dataset (such as images of flowers or objects) until it can recognize them and generate a new image; and the "discriminator" system that has been trained to distinguish between real and generated images and evaluates the generator's first attempts. After sending messages back and forth a million times, the generator AI produces better and better images. Besides GAN, DALL-E also uses a combination of other deep learning techniques, such as transformation, autoencoders, reinforcement learning, and attention mechanisms.
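To make that back-and-forth concrete, here is a deliberately tiny GAN sketch in PyTorch: a "generator" learns to imitate a simple one-dimensional distribution while a "discriminator" learns to tell its output from real samples. It illustrates the adversarial principle described above, not DALL-E's actual architecture.

```python
# Minimal GAN sketch (PyTorch): generator vs. discriminator on toy 1-D data.
import torch
import torch.nn as nn

torch.manual_seed(0)
real_data = lambda n: torch.randn(n, 1) * 0.5 + 3.0   # "real" samples near 3.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Train the discriminator to separate real samples from generated ones.
    real = real_data(64)
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

print(G(torch.randn(5, 8)).detach().squeeze())  # generated samples, near 3.0
```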
While DALL-E is not the first AI technology capable of generating images, it is the first designed specifically to create images from textual inputs, setting it apart from other models. Previous image-generating AI technologies, such as DeepDream and StyleGAN, were able to generate images, but the results were often less realistic, often blurry, and lacking detail.
In contrast, DALL-E is capable of creating images that resemble real photographs, with a high level of realism and detail. Like OpenAI's other product, ChatGPT, DALL-E is accessible and understandable to a wide audience. It has democratized AI-generated art, allowing anyone to collaborate with artificial intelligence to create unique images.
To use DALL-E, the user provides a text prompt that describes the objects, people, or styles they want to see in an image. DALL-E then breaks down the text into discrete units called tokens, which are essentially individual words that have been isolated from the original text. These tokens are fed into a language model to understand the context and generate a semantic representation of the text. Using this representation, DALL-E creates an initial image by passing the semantic representation through an encoder network, which produces a low-dimensional vector image. After generating the initial image, DALL-E refines the image multiple times to make it more realistic. To do this, the image is passed through a series of decoder networks that gradually improve the image. During this process, a discriminator network helps to evaluate the realism of the image and provides guidance to the refinement process. This refinement continues until the image reaches sufficient realism. Once the image is complete, it is presented to the user as output.
DALL-E was first introduced in 2021. One year later, the company released an updated version, DALL-E 2. The new product delivers better correspondence between text and visuals and returns results within a few seconds. DALL-E 2 implements a diffusion technique that begins with a random pattern of dots and systematically transforms it into a picture by identifying specific characteristics of the image. The output images are more realistic and detailed and boast a higher resolution than ever before. Alongside these enhancements, DALL-E 2 introduces a feature known as "variations": the user supplies the system with an existing image, and it generates as many variants as they need. It is also possible to mix in another image, cross-pollinating the two and blending the most important parts of each.
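The diffusion idea can likewise be sketched: start from pure noise and repeatedly subtract a predicted bit of it. In the toy loop below, the untrained nn.Sequential stands in for the large text-conditioned model a real system would use; the step count and tensor sizes are arbitrary assumptions.

```python
# Toy sketch of the diffusion idea described above: start from random noise
# and repeatedly remove a little of it, step by step.
import torch
import torch.nn as nn

steps, image_size = 50, 3 * 32 * 32
noise_predictor = nn.Sequential(            # placeholder for a trained, prompt-conditioned model
    nn.Linear(image_size, 256), nn.ReLU(),
    nn.Linear(256, image_size),
)

image = torch.randn(1, image_size)           # "a random pattern of dots"
with torch.no_grad():
    for t in range(steps, 0, -1):
        predicted_noise = noise_predictor(image)
        image = image - (1.0 / steps) * predicted_noise   # remove a small amount of noise

print(image.shape)  # the denoised tensor would then be reshaped into a 32x32 image
```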
Midjourney
Midjourney is an independent research lab that has gained recognition for its text-to-image AI program of the same name. Similar to DALL-E, the program is designed to create images from textual descriptions. Users type a word or phrase at the input prompt and receive a compelling image on-screen within about a minute of computation. Midjourney has developed its own distinctive style that has caught the attention of many in recent years.
The idea for the program came from David Holz, co-founder of Leap Motion, a company that produces motion-sensing technology for computers and virtual reality headsets. In 2020 Holz, with a small team, began working on Midjourney, realizing the potential of AI technology. In particular, OpenAI’s CLIP technology sparked his interest in creating high-quality images from AI models using text input. Midjourney was first released in March 2022, and the team has been working to improve it ever since, releasing new versions every few months.
Midjourney's main distinction compared with other AI tools is its emphasis on painterly aesthetics rather than photorealism. One of its notable strengths is its ability to adapt real art styles and apply them to any combination of elements the user wants. It is particularly good at generating visually stunning environments, including fantasy and sci-fi scenes.
Midjourney utilizes a Machine Learning (ML) algorithm to process the user's description. Although not much is known about the specifics, it is believed that Midjourney employs a form of the latent diffusion model, which is the same technology that powers Stable Diffusion. After analyzing the user's input, Midjourney generates the image that best matches the description, then applies the desired art style(s) before seamlessly merging them.
Midjourney is unique among other AI bots as it doesn't offer a website or mobile application for users to access its services. Instead, users must join Midjourney's Discord server, which is a platform that enables people to communicate through voice and text messages and share media content such as images and videos. Discord works with various operating systems, including Windows, Android, iOS, iPadOS, and Linux, and can be accessed through web browsers.
After creating a Discord account, users can visit Midjourney's website and click "Join the Beta" to get started. Upon accepting the beta invite, they gain access to the platform. However, the free package is limited to 25 images and only allows access to the public chat, which may result in longer wait times. To use Midjourney, users type "/imagine" in the public chat and provide a detailed prompt that includes preferred art styles, moods, and subjects. Once the input is analyzed, Midjourney generates a grid of four images for the user to choose from.
Images created by Midjourney for all users in a Discord channel are generated and made available within about a minute, contributing to a sense of community. There are also options for upgrading to a subscription. Subscribers can send their text prompts to the Midjourney bot in a private Direct Message in the Discord app and receive images in response without public interaction from other users in a public channel. Despite this, any images generated by Midjourney are still publicly visible by default.
Although the platform is a consumer product, around 30%-50% of its users are professionals who use it for concept work in commercial art projects. Midjourney helps generate a wealth of creative ideas, which can help stakeholders converge more quickly on the idea they want. The platform can also give artists more confidence in areas where they feel less sure of themselves, such as color, composition, and backgrounds.
Midjourney has gained popularity and attracted attention from various industries. The British magazine “The Economist” and the leading Italian newspaper “Corriere della Sera” have used Midjourney to create covers and comics. In 2022, a Midjourney-generated image, “Théâtre d'Opéra Spatial”, won first place in a digital art competition at the Colorado State Fair. The program has also been used to create illustrations for an AI-generated children's book called “Alice and Sparkle”, with its creator, Ammaar Reshi, spending hours selecting the best results from hundreds of generated images.
History of AI - innovation of film, computer graphics & animation
This article delves into the origins of film as an artistic medium, tracing its roots to earlier traditions such as storytelling, literature, theatre, and visual arts, and examines its development as a visual art form using film technologies from the late 19th century.
“Art challenges technology, technology inspires art”
—Catmull and Wallace, 2014
Film as an artistic medium has its roots in various earlier traditions, including storytelling, literature, theatre, and visual arts. For instance, ancient traditions such as Cantastoria involved combining storytelling with a series of images displayed or indicated sequentially. Furthermore, light and shadows were utilized to create art forms that predate modern film technology.
The history of filmmaking and technology highlights the significance of collaboration between artists and technologists, each inspiring and pushing the other to greater heights. During the early days of photography, most photographers had to be both artists and technologists, constantly experimenting with new techniques to enhance their art. However, in film and animation, this interaction has become even more integral to the art form.
The development of film as a visual art form using film technologies began in the late 19th century, and its exact origins are not well-defined. However, the public screening of ten of the Lumière brothers' short films in Paris on 28 December 1895 is often considered the breakthrough moment for projected cinematographic motion pictures, marking the emergence of film as a commercially viable medium.
The history of film technology is as diverse and complex as the art form itself, with early inventions and innovations playing a crucial role in making it possible. Even before the creation of photography, film's roots can be traced back to 1832 and the development of the phenakistoscope, an early device featuring a rotating disk with sequential drawings of moving objects. The zoetrope, which came shortly after, used a rotating drum with images on the inside to create a similar effect. These devices captivated audiences with short animations that played continuously, effectively telling a moving story.
To demonstrate that horses lifted all four hooves off the ground during a gallop, a racehorse breeder in California hired the photographer Eadweard Muybridge. Muybridge used multiple cameras to take successive photos of horses in motion, then mounted the images on a rotating disk and projected them onto a screen, effectively creating the first "moving picture". Around the same time, Thomas Edison invented the phonograph, a device designed for recording and reproducing sound. In 1888, Edison tasked his lab assistant, Dickson, with inventing a motion picture camera to complement the phonograph's audio capabilities, which resulted in the creation of the kinetograph.
The Lumière brothers unveiled the cinématographe in 1895, a lightweight and portable device that could function as a camera, projector, and printer. Their debut film, "Workers Leaving the Lumière Factory," was one of the earliest examples of moving pictures. The Lumière brothers also ventured into various camera technologies, colour processing, and creative techniques to utilize them.
Throughout film history, it has often been tinkerers and collaborative teams of artists and technologists who have driven its evolution. The stage magician Georges Méliès filmed fantastical stories like “A Trip to the Moon”, employing a wide range of clever in-camera tricks to create delightfully inventive and beguiling films, while Walt Disney used and pushed new technologies of sound and color recording and drove other innovations along the way, such as the multiplane camera.
Orson Welles' pioneering film techniques were primarily facilitated by the use of new camera lenses by his cinematographer, Gregg Toland. The advent of portable cameras and audio equipment allowed the French New Wave to experiment with film techniques, which, in turn, influenced up-and-coming American directors such as Francis Ford Coppola and George Lucas. Despite working with a limited budget, Lucas and his team on Star Wars were early pioneers of various visual effects, such as creating the sound effect for the "blaster" by hitting telephone guy-wires, as well as innovators in digital film editing and compositing, as noted by Rubin in 2005.
Since then, digital and computer graphics technology has revolutionized how stories are told on film. Directors such as Michel Gondry and James Cameron have pushed these technologies to unforeseen limits, introducing new techniques and styles of storytelling. With each new development, filmmakers rapidly adopt these technologies to transform the medium again.
Computer Animation (from the 1940s to the present day)
The history of computer animation can be traced back to the 1940s and 1950s when pioneers like John Whitney began experimenting with computer graphics. Nowadays, it's hard to find a recent film that doesn't use some form of 3D animation technology, including Tenet, Dune, and Marvel movies. The combination of live-action and 3D animation is seamlessly integrated into these films.
John Whitney, Sr. was a prominent figure in computer animation and is considered one of its pioneers. He and his brother James created experimental films in the 1940s and 1950s using a custom-built device made from old anti-aircraft analogue computers. The device controlled the motion of lights and lit objects, making it the first example of motion-control photography. Whitney is best known for the animated title sequence of Alfred Hitchcock's Vertigo, on which he collaborated with graphic designer Saul Bass. He established Motion Graphics Inc. in 1960, which produced titles for film and TV while he continued to create experimental works. His motion-control model photography was used in Stanley Kubrick's 2001: A Space Odyssey in 1968, as was the slit-scan photography technique employed in the film's "Star Gate" finale.
In the 1960s, the arrival of digital computers opened up new possibilities for computer graphics. While they were initially used for scientific and engineering purposes, artistic experimentation emerged by the mid-1960s, led by Dr Thomas Calvert. One of the earliest programmable digital computers was SEAC, to which a drum scanner was added in 1957, creating the first digital image by scanning a photograph of Russell Kirsch's son. With the computer, the team could extract line drawings, count objects, recognize character types, and display digital images on an oscilloscope screen. This breakthrough paved the way for subsequent computer imaging and highlighted the significance of that first digital photograph.
In Sweden, a 49-second vector animation was created on the BESK computer showing a car traveling down a planned highway, using a specially designed digital oscilloscope and camera controlled by the computer. This short animation was broadcast on national television. At the same time, Bell Labs in New Jersey was a leading research contributor in computer graphics, animation, and electronic music. Researchers like Edward Zajac, Michael Noll, and Ken Knowlton established themselves as pioneering computer artists.
Ivan Sutherland is credited with creating interactive computer graphics and is also an internet pioneer. He created Sketchpad I while working at the Lincoln Laboratory in 1962. The program was the first graphical user interface, enabling users to interact directly with images on the screen, and is regarded as one of the most significant computer programs ever written by an individual.
In 1963, Edward Zajac created "A Two Gyro Gravity Gradient Attitude Control System", one of the first computer-generated films, at Bell Labs. It demonstrated how a satellite could be permanently stabilized to face the Earth while orbiting. Ken Knowlton developed the Beflix animation system the same year, which used simple "graphic primitives" to produce dozens of artistic films by artists such as Stan VanDerBeek, Knowlton himself, and Lillian Schwartz. Meanwhile, William Fetter, a graphic designer for Boeing, developed ergonomic descriptions of the human body in 1964, resulting in the first 3D wire-frame figures. These figures, known as the "Boeing Man", became iconic in the early history of computer graphics.
Michael Noll created early computer-generated 3D movies in 1965, including stereographic and four-dimensional hyper-objects. By 1967, he used 4D animation to create computer-animated title sequences for films and TV specials.
Charles Csuri was also one of the pioneers of computer animation. As a professor, fine artist, and computer scientist, his research and artistic vision led to advances in software that created new artistic tools for 3D computer graphics, computer animation, gaming, and 3D printing, all before their widespread commercial applications. In the early days of his career, Csuri's art consisted of his drawings and sketches, which he made in order to mathematically transform them using analog and digital technologies.
One of the best examples of his early period is "Chaos to Order" (1967). The creation process involved random distribution techniques, with each line of the bird placed at a random position within the composition. The computer then generated a chaotic version of the hummingbird, with lines and shapes overlapping and intersecting in a seemingly haphazard manner. Csuri then, in progressive stages, brought the bird back together, manipulating the random placement of lines and shapes to create a more coherent and recognizable image of a hummingbird.
A group of Soviet physicists and mathematicians, led by Nikolai Konstantinov, created a mathematical model for the motion of a cat in 1968. They used a BESM-4 computer to develop a program that solved the ordinary differential equations for this model. Using alphabet symbols, the computer printed hundreds of frames on paper, later filmed sequentially, resulting in the first-ever computer animation of a character - a walking cat.
The art journal Studio International published a special issue titled "Cybernetic Serendipity – The Computer and the Arts" in July 1968. The issue showcased a comprehensive collection of computer art from organizations worldwide, exhibited in London, San Francisco, and Washington, DC. This event was a milestone in the development of computer art and is widely regarded as a source of inspiration. Notable examples of computer art from this period include "Chaos to Order" by Charles Csuri and "Running Cola is Africa" by Masao Komura and Koji Fujino.
Towards 3D: mid-1970s into the 1980s
The 1973 sci-fi movie Westworld was the first to use digital image processing. The Gunslinger android's perspective was portrayed as pixelated through motion picture photography digitally processed by John Whitney Jr. and Gary Demos at Information International, Inc. They used the Technicolor Three-strip Process to separate each frame of the source images by colour, convert them into rectangular blocks according to their tone values, and output them back to film as cinegraphic block portraiture.
Home computers saw the introduction of 3D computer graphics software in the late 1970s, including the first known example, 3D Art Graphics, by Kazumasa Mitazawa for Apple II in 1978. Disney's 1979 film The Black Hole used wireframe rendering created by Disney engineers to depict the black hole. Ridley Scott's Alien used wireframe graphics for navigation monitors the same year. The 1980s brought further advancements in hardware, such as framebuffer technologies, which combined with improved computer power and affordability to enable radical new developments in commercial graphics workstations.
From the 1990s to now, computer animation has expanded in film and TV
In the 1990s, CGI technology had advanced enough for major film and TV production. James Cameron's Terminator 2: Judgment Day was a box-office milestone for CGI in 1991, and in 1993 Steven Spielberg's Jurassic Park marked another, integrating 3D CGI dinosaurs with life-sized animatronic models.
Computer animation is thriving across industries today, providing more opportunities for animators than ever before. The story of computer animation contradicts the idea that art and technology are in conflict; they often work together to evolve and grow an art form. A great example is Pixar Animation Studios, which pioneered 3D computer animation as an art form through close collaboration between artists and engineers. Pixar's culture treats both groups as crucial to the company's success, resulting in years of technical and creative innovation as well as commercial and artistic success.
As technology advances, 3D animation is expected to improve its capabilities and complexity significantly. AI and machine learning hold immense potential in accelerating the animating process, paving the way for exploring a wider array of animation styles.
Kate Vass Galerie Introduces 0KAI - A Revolutionary On-Chain AI Art Technology
0KAI: GENERATIVE AI ON-CHAIN ENCRYPTED
The art world is constantly evolving, and new technologies are pushing the boundaries of what's possible. The latest innovation in this space is 0KAI, a groundbreaking on-chain AI art technology that's set to transform the way we create and collect AI art. The first project to launch on 0KAI is 'Algoritmo Divino' by Ganbrood, a pioneering AI artist from the Netherlands, which will be available on the 18th of April.
ABOUT 0KAI
Kate Vass Galerie is proud to introduce 0KAI, the first on-chain AI art technology that safeguards the privacy of the artist's input, transforming the way AI art is created and collected. It launches through KVG's new digital extension, K011.
A secret no one can read, but everyone can see: the artist's integrity is kept intact while the collector holds the seed inputs for each artwork. With 0KAI, we enable prompt encryption and real-time artwork generation, storing the encrypted prompt as a hash on-chain, saved permanently.
The next pioneering AI masterpieces will be generated on-chain, showcasing a new era of human-machine interaction. In pioneering this approach to collecting AI art, we want to give the collector more than just unique iterations: the core of each artist's creativity.
Because AI artists have different approaches, we decided to leave artists complete flexibility regarding the token mechanics and prompt-generation techniques, giving each of them their own smart contract.
Each artwork is created in real time, with nothing pre-minted, and is stored on Arweave after minting.
The 0KAI engine ensures true randomness in the artwork's generation. Additionally, rarity traits are defined by the artist, influencing the prompt and hence the image generation.
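As a generic illustration of the mechanism described here (not 0KAI's actual contract code, which is not reproduced in this article), the following standard-library Python sketch encrypts a prompt, publishes only its hash, and draws a rarity trait from a random seed; the XOR cipher and the trait table are hypothetical placeholders.

```python
# Generic illustration: encrypt a prompt, expose only its hash publicly,
# and derive a rarity trait from a random seed. Not 0KAI's implementation.
import hashlib
import secrets

def encrypt(prompt: bytes, key: bytes) -> bytes:
    # toy XOR cipher standing in for a real encryption scheme
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(prompt))

prompt = b"a golden stag ascending through clouds"   # hypothetical artist prompt
key = secrets.token_bytes(32)                        # held off-chain by the artist/collector
ciphertext = encrypt(prompt, key)

onchain_hash = hashlib.sha256(ciphertext).hexdigest()  # the secret everyone can see
seed = secrets.randbits(64)                             # randomness for this mint

# hypothetical artist-defined rarity table, weighted toward common traits
traits = ["common", "common", "common", "rare", "legendary"]
rarity = traits[seed % len(traits)]

print(onchain_hash, rarity)
```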
In summary, 0KAI is a game-changing technology that enables long-form AI art while providing complete transparency and security for artists and collectors. With its one-of-a-kind features, 0KAI offers a new and innovative approach to collecting art on-chain.
‘Algoritmo Divino’ by Ganbrood
On the 18th of April, we are happy to launch our first project by one of the most prominent AI artists, Ganbrood. His first long-form on-chain AI drop, 'Algoritmo Divino', will go live as a Dutch auction with 333 unique live-generated iterations.
‘Abandon all hope, ye who enter here.’ - Dante Alighieri, Inferno, Canto III.
This quote speaks to the idea of transformation: letting go of old ways of thinking and embracing a new perspective. Here, the Divine Comedy serves as an allegory for long-form synthetic image generation with neural networks, itself an evolution in art and image-making.
Ganbrood (Bas Uterwijk), from the Netherlands, has been re-evaluating his role as a photographer and has worked with AI for the last four years. Primarily self-taught, he has always been involved in forms of visual storytelling that imitate and distort reality. Since 2019 he has combined his different skills and experiences in working with generative adversarial networks (GANs): deep-learning, AI-based software that interprets and synthesizes photographs. With the help of these neural networks, he has constructed portraits of photographic quality that were never recorded by an actual camera.
New AI tools are transforming the creative process for artists, pushing the boundaries of what is possible and inspiring new forms of expression. 'Algoritmo Divino' is the first on-chain series by Ganbrood, generated in real time during the mint. Three hundred thirty-three unique iterations will surprise and guide you through the three realms of Inferno, Purgatorio, and Paradiso. The rarity and prompt are encrypted on-chain, giving the collector a unique opportunity to collect the artwork's 'secret ingredient': the core 'prompt', encrypted and saved eternally on the blockchain.
In Dante's Divine Comedy, the transformation of souls as they journey through the afterlife is a central theme. The characters' experiences and behaviours lead to their conversion towards greater virtue and understanding or deeper learning and suffering.
Similarly, long-form synthetic image generation using diffusion neural networks produces images that progress and evolve. The process involves repeatedly applying a diffusion step to an initial image, resulting in a series of intermediate images that gradually evolve towards a final, synthesized image. Each image represents a different stage in the generation process, and the final image depicts the culmination of the journey.
The Divine Comedy's idea of journey and progression is a powerful metaphor in 'Algoritmo Divino' by Ganbrood, which represents many levels of transformation, from prompt-to-image generation to personal growth and development. His own transformation as an artist from photography to synthetic images, and his embrace of AI and new tools as part of the creative process, are all reflected in 'Algoritmo Divino'.
AI artists use various methods to create their work, which is often individual and curated. They typically begin with an initial image, prompts, or other techniques; the process is selective and detailed. This allows the artist to choose elements that align with their style or to manipulate the final image to their concept and intention.
Long-form AI art, on the other hand, is the opposite of this standard practice. Mastery of AI requires a high level of sensitivity to set the parameters in a way that consistently produces high-quality results. It involves feeding the algorithm a broader, more variable range of inputs, which removes the artist's final control and makes the process more challenging. Paradoxically, this interaction with the machine's algorithms underscores the importance of human intuition and demonstrates the artist's skill and talent.
Ganbrood challenges himself with long-form generation, which contradicts his standard process. The irreversibility of the results leaves little or no control over the output of the black box. Nevertheless, his signature themes of animals and gods are present in each of the 333 iterations, symbolizing humanity's desire to become godlike.
333 unique live-generated iterations of 'Algoritmo Divino' will be available to mint on 18th April at 7 pm CET. For the first time, the collector can collect AI artwork in real time, together with the encrypted prompt and the rarities the artist has uniquely integrated.
HAPPY BIRTHDAY TO ISKRA VELITCHKOVA
In this article, we would like to focus on one of our talented artists, Iskra Velitchkova, who is celebrating her birthday today.
Iskra Velitchkova is a Bulgarian self-taught artist, currently living in Spain. Her works explore the interactions between machines and humans, and how this relationship can help us better understand our limits and ourselves. As a visual thinker, Iskra has a proven record in the tech and artificial intelligence industry, which she now applies to her own artistic research.
Tradition and culture are essential elements of her work, which she combines with her Balkan roots and the influences of Mediterranean culture to create unique and authentic pieces. Iskra's art is inspired by her belief in the power of dialogue between science, art, and people. Her work explores questions about roots and limits, as well as the intersection of emotions, traditions, and cultures. It combines abstraction and figuration, often featuring minimalist elements reminiscent of movements such as Bauhaus, Suprematism, and Dadaism, together with organic shapes that recall particular elements from nature.
Iskra's work is based on a combination of digital media and physical, traditional materials and methods. She uses generative techniques to explore different patterns and modes of representation, then trains artificial neural networks to find relationships, new shapes, and spaces. This process leads her to create new-media digital pieces, physical paintings in acrylic, oil, and ink, plotted generative works, and sculptures made of clay.
Iskra's work has been exhibited in galleries and museums around the world, including at prestigious events such as Art Basel Miami, IEEE VIS Berlin, and Art SG in Singapore. Her recent exhibitions include "Proof of People" in London, "Unblock Gaudi" at the Museum Angewandte Kunst in Frankfurt, and "Natively Digital 1.3: Generative Art" at Sotheby's New York.
Projects
Kate Vass Galerie has been collaborating with Iskra Velitchkova since 2022, starting with the Phygital exhibition which explored the works of generative artists with both digital and physical components. For the show, Iskra presented three pieces from her “Un-distance” series: “#1 You”, “#2 You and me”, and “#3 Us”. Each piece consisted of a digital NFT and a plotter drawing. While the digital pieces were created using Processing, for the physical works, she used a plotter machine to create the drawings on paper using a pen and ink, and she hand-painted the background. Iskra tried out a new technique for this project by layering multiple drawings on top of each other. By working with the limitations of the computer and embracing the randomness of the process, she was able to create unexpected and intriguing results that made her question the nature of the outcome. The project aimed to create an algorithm by carefully balancing lines, colors, and overlaps to give the impression of a spring emerging.
At the beginning of 2023, Kate Vass Galerie collaborated again with Iskra on the project PAL, which was showcased at the Art SG fair in Singapore. PAL is an inaugural collaborative performance created by Iskra and another generative artist, Marcelo Soria-Rodríguez. The live performance lasted four days, from January 11th to 15th, during the art fair. The installation included an analogue computer, a video recorder, one VHS tape, a timer, a generative system attached to a digital screen, and a magnet. The entire process was recorded on the VHS tape, which was then digitized to create 100 unique pieces, launched as NFTs on KVG's new digital marketplace, K011. PAL explores the imperfections and connections between humans and machines and asks how their interactions can create unexpected beauty in those imperfections. During the performance, the VHS tape was continuously played and rewound on an old VHS player while a small magnet was passed over it to induce random distortions and real-life interference, creating unique and unpredictable variations.
Works in our Collection
We are honored to have several works by Iskra Velitchkova in our collection, including one work each from her “PAL” and “Un-distance” series, as well as her well-known work "Psychedelic chicken." For the past few years, Iskra has been exploring the anatomy of bird shapes through her work. She employs recursive methods to build, break, and refine her own generative models, exploring how organic and natural illusions can emerge from code through human intervention in the process.
It is fascinating to see how she translates these explorations into her art, resulting in unique and intriguing pieces that blend technology and traditional art techniques.
Happy birthday, Iskra, and best wishes for the coming years!
History of AI - The invention of photography
“From today, painting is dead!”
—Paul Delaroche, painter, at a demonstration of the daguerreotype in 1839.
Technology has significantly increased artistic and professional opportunities throughout history by offering artists more advanced tools. It revitalizes art forms that may have become stagnant. While the introduction of new technologies can initially cause concern and fear among traditional artists about being replaced, it ultimately leads to the development of new artistic styles. Additionally, these technological advancements make art more accessible to a larger segment of society, both as creators and consumers. This trend has been especially evident in the past two centuries, starting with the Industrial Revolution.
This narrative draws parallels between photography's history and AI's current status as an artistic tool. In the beginning, photography was not viewed as a legitimate form of art, much as some still consider AI non-artistic because of its mechanical nature. While some artists embraced photography, others saw it as a threat to traditional art forms. The introduction of photography displaced older technologies, such as certain forms of portraiture, which are no longer viewed as art. However, as photography developed and became more widespread, artists learned to master it and express themselves through the new medium. The controversy surrounding photography eventually dissipated as it became a widely accepted art form, accessible to non-experts and hobbyists.
Moreover, photography revolutionized painting itself, pushing it towards greater abstraction. I predict that AI as an artistic tool will follow a similar trajectory, with new AI tools eventually being fully recognized as artists' tools. AI may even stimulate traditional media, much as photography breathed new life into older art forms.
The invention of photography is credited to Frenchman Louis Daguerre and Englishman William Henry Fox Talbot, who independently developed their own photographic processes in the mid-19th century. Daguerre's process, known as the daguerreotype, involved exposing a copper plate coated with silver iodide to light and then treating it with mercury vapor to create a permanent image. Talbot's process, known as the calotype, involved coating a paper negative with silver chloride and then developing a positive image from the negative. Photography changed the way people saw and recorded the world around them. Prior to photography, the only way to record an image was through painting or drawing, which were time-consuming and often required significant artistic skill. With the invention of photography, images could be captured quickly and accurately, allowing people to document events, people, and places with unprecedented detail.
Photography quickly became a popular medium for artistic expression, with photographers experimenting with different techniques and styles to capture the world in unique and compelling ways. Photographers such as Julia Margaret Cameron and Henri Cartier-Bresson used the camera to create beautiful and evocative portraits, while others, such as Ansel Adams, used it to capture the natural beauty of the world around them.
The question of whether photography can be considered art was a subject of debate for decades, producing three central positions. Some argued that photography could not be regarded as art because it was created by a mechanical device rather than by human creativity. Many artists were dismissive of photography and considered it a threat to "real art." For instance, the poet Charles Baudelaire stated in a review of the Salon of 1859, "If photography is allowed to supplement art in some of its functions, it will soon supplant or corrupt it altogether, thanks to the stupidity of the multitude which is its natural ally." The second perspective was that photography could be helpful to artists, for example as reference material, but should not be seen as equal to drawing and painting. The third group compared photography to established art forms like etching and lithography and believed that photography could eventually be as significant as painting.
By 1863, a painter-photographer named Henri Le Secq claimed, "One knows that photography has harmed painting considerably, and has killed portraiture especially, once the livelihood of the artist." Photography replaced most older forms of portraiture, such as the silhouette, and no one particularly regretted this loss. These debates illustrate how new technologies, like AI as an artistic tool, can disrupt traditional art forms and provoke discussions about their legitimacy and significance.
As history shows, the invention of photography had an unexpected impact on painting, forcing artists to question their role in creating realistic depictions of the world. With the widespread availability of cameras, photorealistic images became common, causing painters to explore new forms of abstraction. The Tonalist movement sought to create atmospheric scenes, while the Impressionists were likely influenced by the "imperfections" of early photographs. Symbolists and post-Impressionists moved away from perceptual realism altogether. Photography continued to influence modern art, with multiple-exposure photography influencing Futurism and Cubism. It's possible that photography was a major catalyst for the Modern Art movement, inspiring artists and pushing them beyond realism.
The Pictorialist movement, which emerged in 1885, aimed to establish photography as a recognized art form. It revolutionized photography by emphasizing the aesthetic qualities of beauty, tonality, and composition over factual representation, seeking to elevate photography to the level of painting and to gain recognition for it in galleries and other artistic institutions. In its early years, photography had been used primarily for scientific and documentary purposes, but this perception began to shift in the 1850s, when figures like the English painter William Newton advocated for photography's artistic potential.
Their efforts helped win acknowledgment of photography's artistic potential, culminating in 1910 with the "Buffalo Show" organized by Alfred Stieglitz at the Albright Gallery in Buffalo, NY, the first photography exhibition at an American art museum. Photography was then firmly established as an art form and was free to move beyond the constraints of Pictorialism.
To avoid making hasty generalizations about the relationship between computer tools and art, this article aims to highlight a few historical references, exploring the emergence of photography first; and, in the next chapter, delving into film and computer animation.
Today, photography continues to be a popular and important medium for artistic expression, with digital technology making it easier than ever for anyone to capture and share images. With advances in technology, such as high-resolution sensors, image stabilization, and editing software, photographers can now create images that are more detailed, dynamic, and visually stunning than ever before. Whether used for artistic expression or practical purposes, photography remains a powerful tool for capturing and communicating the beauty and complexity of the world around us.
These cases illustrate how, despite initial concerns that new technology might replace artists, it often presents them with new possibilities and roles. In the next chapter, I will examine how automation is used in procedural and computer art. While outsiders may perceive the computer or the technology as the primary creative force, the reality is that humans remain the authors of the work in each field. The value of human artistry remains unquestionable, and human input remains essential.
Generative artworks in Westworld season 4
The world-famous, Emmy Award-winning HBO series Westworld returned for its fourth season on the 27th of June. If you watched the first few episodes of the show, you might have discovered a few hidden gems on the walls.
Westworld has always pushed the boundaries of future technologies. The dystopian science-fiction series has taken on the most compelling questions about robot consciousness, the ethics of artificial intelligence, and neuroscience. When it comes to new technology and AI, the creators pay attention to every small detail, even what hangs on the wall. They featured many generative artworks by several well-known artists, such as Mario Klingemann and Entangled Others Studio, which fit perfectly into this futuristic world.
In this article, we collected all the featured artists and artworks from the show.
MARIO KLINGEMANN, The Butcher’s Son series
Mario Klingemann is a generative artist and skeptic whose preferred tools are neural networks, code, and algorithms. His interests are manifold and in constant evolution, involving artificial intelligence, deep learning, generative and evolutionary art, glitch art, data classification and visualization, and robotic installations. If there is one common denominator, it is his desire to understand, question, and subvert the inner workings of systems of any kind. He also has a deep interest in human perception and aesthetic theory.
His artwork series The Butcher's Son reflects the artist's long-term interest in AI. He focused on the human body, training his AI models to explore postures based on the analysis of images found on the internet. The artwork was generated entirely by a machine and 'painted' by a generative adversarial network, using a method known as 'transhancement', which adds hair, skin texture, or other pixelated forms to the image.
ENTANGLED OTHERS STUDIO, Hybrid Ecosystems series
Sofia Crespo and Feileacan McCormick, co-founders of Entangled Others Studio, experiment with generative art to move further into an entanglement with the more-than-human world of us & others. A world where diversity and inter-connectedness are nurtured and engaged through neural art.
The Hybrid Ecosystems series is an exploration into unveiling our entangled world. At first glance, the digital and physical worlds seem separate, occupying different layers of reality that only reluctantly interact. In reality, these two layers are tightly interwoven, constantly influencing, interacting, shaping, reshaping, consuming, and acting upon one another. The artwork explores how we can stretch our imagination to see a world of harmoniously interacting artificial and natural life as one single, sustainable ecosystem.
A Mathematical Visual Poem: A Cognitive View of "Pandemic Meditation" by Kaz Maslanka
Kate Vass Galerie is excited to feature an article written by Kaz Maslanka about mathematical visual poetry. Maslanka, an aerospace engineer, artist, mathematical visual poet, and philosopher, has been a pioneer of this genre since the early 1980s. In his article, Maslanka introduces the concept of mathematical visual poetry through one of his recent works, "Pandemic Meditation," and focuses on a particular structure known as the Similar Triangles Poem or Proportional Poem. The article (A Cognitive View of "Pandemic Meditation" (A Mathematical Visual Poem)) was originally published in the January 2022 issue of the Journal of Humanistic Mathematics.
Mathematical visual poetry is a poetic genre whereby metaphorical expressions are created using mathematical structures. Within the structure, the poetics are understood by the cross-mapping of numerous conceptual domains including visual, lexical, and mathematical. Here I focus on one particular mathematical visual poetic structure: what I call a Similar Triangles Poem or Proportional Poem. To illustrate the ideas discussed, I present “Pandemic Meditation,” a mathematical visual poem; in particular, I discuss how this mathematical poem uses the mechanisms of poetic metaphor in the context of the embodied mind. The intent of this paper is not to explain “Pandemic Meditation,” for explanations of poetry serve only to kill it. Instead, the intent here is to give the reader the tools to access similar triangles poems in general, and this expression in particular, and to show how it functions within the definitions of poetic metaphor. This paper can be used as a template to study all similar triangles visual poems, and more generally, as a source to study visual poetry.
1. Inspiration For “Pandemic Meditation”
The imagery in Figure 1 comes from a photo I shot at the Philadelphia art museum; it is a Joseon Dynasty (Korean) painting of Myeongbujeon (Judgement Hall), where the ten kings of Hell reside to judge the dead. Overlaid across the painting, I have chosen to display equations from physics.
The great zen master Hyon Gak Sunim repeatedly states that zen meditation is a “technology” delivering one to understand their “true self”. The enlightened state of meditation happens before thought, so the barrier to enlightenment is the act of thinking. It is interesting to note that Rodin’s sculpture “The Thinker” sits at the gates of hell. In what follows, I share with the reader a particular outcome of the overwhelming act of thinking inspired by the real-time horrors during the pandemic of 2020.
I have always had an affinity for Eastern wisdom, and I have been fortunate enough to have been introduced to a few Korean Seon (Zen) masters. It is their insight that continues to help me with my continuing endeavor to learn more about and live my life guided by their wisdom. As part of this endeavor, I continue to practice meditation, which through much effort has kept me sane during the pandemic.
More specifically, meditation keeps me grounded in the present moment. When one is in the present moment, one can detach from the discontentment that is inherent within the human condition. In these recent days of the COVID-19 pandemic, I have not achieved this state easily, to say the least. The chaos surrounding the affliction has made meditation challenging. The stress of watching the bodies in New York City being stored in refrigerated trucks on the streets due to there being nowhere else to put them has been extremely disturbing for me. The endless news cycles constantly remind me of my age as well as the volatile populations that remain at risk. The latter, among other issues during this crisis, have kept me up at night thinking about family.
The poetic expression, “Pandemic Meditation” (see Figure 2), was inspired during an anxiety-ridden meditation I experienced. Observing my mind was a trial, as the ideas and visual elements seemed to jump about incessantly. During this particular meditation session, thirteen anxieties were spinning around my mind—empty—yet, I was attached such that I could count them. It was a dark space and this is not particularly how I’d like to report meditation. Meditation is the most successful way to minimize anxiety and it helps provide inspiration to continue my poetic expressions. And in this instance, it provided me with an experience where I could see relationships in a way as to point me in the direction of where more practice is needed. I collected my thoughts, arranged, ordered them, and created this expression.
2. Cognitive Linguistics
I want to discuss the mechanisms of poetic metaphor, but before I can do this, first we need an understanding of basic or conventional metaphor. Conventional metaphor is the type of metaphor we use in our normal conversations and must be understood in order to study literary metaphor. A comprehensive study of metaphor is beyond the scope of this paper. However, I would like to put forth some initial concepts and definitions concerning metaphor.
A conceptual domain is any construction of coherent thoughts that can be understood in reference to our experience. While limited, for expediency, you may want to start off by just thinking of a conceptual domain as a mental construction, idea, or concept. Lakoff states:
“Each such conceptual metaphor has the same structure. Each is a unidirectional mapping from entities in one conceptual domain to corresponding entities in another conceptual domain. As such conceptual metaphors are part of our system of thought. Their primary function is for us to reason about relatively abstract domains using the inferential structure of relatively concrete domains.” [4]
When constructing or analyzing a metaphor, we perform an ontological mapping across the source domain to the target domain. That is, we understand all the elements of the target domain in terms of what we know about all the elements of the source domain. The source domain expresses concrete concepts where the target domain expresses relatively abstract concepts.
To better understand these mappings between the source and target domains, we need a nomenclature to discuss them. Cognitive linguists name the metaphors using the mnemonic, TARGET IS SOURCE. To illuminate how the nomenclature works for analyzing conventional conceptual metaphor, Lakoff describes the process as:
“Aspects of one concept, the target, are understood in terms of non-metaphoric aspects of another concept, the source. A metaphor with the name A IS B is a mapping of part of the structure of our knowledge of source domain B onto target domain A.” [5]
Lakoff gives us the example LOVE IS A JOURNEY. In this metaphor, we understand the ontology of love in reference to the ontology of a journey. In conventional metaphor, a metaphorical expression is the language used to convey a metaphor. It is important to note that a metaphorical expression is not a metaphor. The metaphor is at a superordinate level relative to the metaphorical expression. One metaphor can be thought of as the root of many metaphorical expressions. In other words, many metaphorical expressions share the same common metaphor. Lakoff states:
“If metaphors were merely linguistic expressions then we would expect different linguistic expressions to be different metaphors.” [2]
When we look at the metaphor LOVE IS A JOURNEY, we can see that many metaphorical expressions are built from that single metaphor. Here are three of numerous examples Lakoff gives that use the metaphor, LOVE IS A JOURNEY:
“We’ve hit a dead-end street.”
“We can’t turn back now.”
“Their marriage is on the rocks.” [2]
Again, one can see from these expressions that we are understanding the target domain LOVE from what we understand about the source domain, JOURNEYS. There are numerous other examples of metaphorical expressions that utilize this metaphor.
3. Poetic Metaphor
Up to this point, I have only talked about conventional metaphors and not addressed poetic or novel metaphor. Concerning poetic metaphor, Lakoff puts forth the same premise as conventional metaphor thus pointing out the relationship between thought and language:
“The generalization governing poetic metaphorical expressions are not in language but in thought. They are general mappings across conceptual domains.” [2]
Lakoff and Turner [5] tell us there are three basic mechanisms for interpreting linguistic expressions as novel metaphor: extensions of conventional metaphors; generic-level metaphors; image-metaphors.
A similar triangles poem is a poem in the form of a/b = c/d, where we are cross-mapping concepts across the variables to create poetic expressions. What I would like to show in this paper is that successful similar triangles poems use all three mechanisms listed above within the one construction. To illustrate my point, I will analyze “Pandemic Meditation,” which is a similar triangles poem. I will say a bit more about similar triangles poems soon, in Section 4. But before that, let us open up Lakoff and Turner’s three mechanisms.
Extension of conventional metaphor is recognized as an expression that is a novel use of a metaphor that we use in our everyday language. Lakoff and Johnson go into great detail illuminating conventional metaphor in their book Metaphors We Live By [3]. Suffice it to say that the extension of conventional metaphor evokes a creative realization that gives us an understanding of something common, yet in a very different way.
The next mechanism is image-mapping, and Lakoff gives us an example from Andre Breton:
“For example, consider:
My wife ... whose waist is an hourglass.
This is the superimposition of the image of an hourglass onto the image of a woman’s waist by virtue of their common shape.” [2]
Let us note that image-metaphor and image-schema are different; I will address that later. That said, Mark Johnson states:
“[image-schemas are] the recurring patterns of our sensory-motor experience by means of which we can make sense of that experience and reason about it”. [1]
Our third mechanism is generic-level metaphors. These extend to a large array of metaphoric expressions. Here I will focus on a subset of the generic-level metaphors, the GENERIC IS SPECIFIC metaphor. When we get a general understanding of something by merely reading a specific case for it, we are experiencing a GENERIC IS SPECIFIC metaphor. Proverbs operate in this manner because they make specific statements that can be used in numerous general situations. Lakoff and Turner explain:
“There exists a single generic-level metaphor, GENERIC IS SPECIFIC, which maps a single-level schema onto an indefinitely large number of parallel specific-level schema that all have the same generic-level structure as the source domain schema.” [5]
It is interesting to note that when applied, a pure mathematical equation automatically expresses a generic-level schema in a couple of ways.
Firstly, we see that the single equation structure provides numerous applied mathematical specific-level schemas. For instance, in physics, we have the equations for force: F = ma, distance: d = vt, or Ohm’s law: V = IR. And there are many more of these types of equations, all being specific-level expressions with the same generic-level structure, that is a = b times c.
In addition, within an equation, each variable has the ability to express infinite levels of value. In essence, those values (or, in the context of mathematical poetry, levels of importance or magnitude) function in relation to each other as different specific-level schemas expressed in the form of a generic-level equation. In other words, the equation is at the generic level, and any fixed set of values for the variables would be a specific-level schema.
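Written out, the relationship described above looks like this: one generic-level structure subsuming several specific-level physics equations (the symbols are those used in the text).

\[
  a = b \cdot c
  \qquad\Longrightarrow\qquad
  F = m\,a, \qquad d = v\,t, \qquad V = I\,R
\]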
At the end of the paper, I will address this metaphor in the context of “Pandemic Meditation”. Along the way, we will also see how each of the three mechanisms functions more generally in the context of similar triangles poems.
4. Similar Triangles Image-schema
I defined a similar triangles poem as a poem in the form of a/b = c/d, where we are cross-mapping concepts across the variables to create poetic expressions. I occasionally use the term “proportional poem” as a synonym. Yet, I believe that, at least in this present context, the similar triangles image scheme is more helpful in understanding how the poetic form interacts with the construct of poetic metaphor.
The equation used in “Pandemic Meditation” uses a similar triangles image-schema; see Figure 3 for a visual summary of the properties of similar triangles that will be relevant to us here. More generally, similar triangles poems intrinsically contain an image-schema, whereby the structure of a pair of similar triangles is part of the metaphorical mapping. What makes this method captivating is that aspects of this structure lend themselves directly to mapping metaphoric expressions across the horizontal bar and the equal sign of the equation.
When we are using a similar triangles poetic scheme, the cognitive structure of the similar triangles provides the overall image-schema for the equation, and the terms for the equation provide the conceptual domains.
There are multiple ways these domains can be grouped to construct a metaphorical expression; see Figure 4 (we will say a lot more about this figure in Section 6). What is consistent in every grouping is that we map the conceptual values of the source domain to the corresponding conceptual values in the target domain. The image-schema inherent in the analogy of the proportional legs of the triangles will always be compared to the image-schema created by the proportional analogy of the conceptual domains that are being expressed. The terms of the equations have values that are open to interpretation but are generally experienced as contextual importance or magnitude.
As in all poetry, it is up to the reader to find meaning in the expression. Symmetry, in this context, can be explained as the concepts that remain consistent when other things change as we move between two domains. The important factor and the subsequent strength of the metaphoric expressions will be judged on the clarity in the symmetry expressed when finding a cognitive pattern across the corresponding ontologies between the source and target domains. The metaphor can be thought of as the union in the set of ontologies of the source domain and the set of ontologies of the target domain.
The union is the symmetry. In a mathematical visual poem, there will always be some imagery (image-metaphors) that we can map to each other or to the lexical or mathematical expressions (conceptual domains); see Figure 5 for a depiction of the image-mappings involved in a similar triangles poem. We will say more about this in Section 7.
5. Similar Triangles Equation
Before getting further into the methods for analyzing these structures for poetics, let us examine how similar triangles can be used as a language to solve the dilemma of finding the height of a Mission Beach palm tree. The photo/diagrams we will use will offer us a good example of how to analyze similar triangles or proportional equations.
Notice a man and a palm tree in Figure 6. If I were to give you a tape measure and ask you to tell me how tall the palm tree is, would you be able to do it? It is easy if you understand the relationship between similar triangles and are able to set up the equation.
An interesting property of similar triangles is that there is a proportional relationship between the sides of the two triangles. In the second diagram, you can see this proportional relationship: the tree's height is to its shadow as the man's height is to his shadow. In algebra, we can set up an equation to express this relationship; see Figure 7.
The tree’s height divided by the tree’s shadow is equal to the man’s height divided by his shadow; see Figure 8.
We can physically measure both of the shadows and the man’s height, plug them into the equation given in Figure 7, and solve the equation for the tree’s height. This method will give us the answer to how tall the palm tree is.
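To make the arithmetic concrete, here is a minimal worked example with hypothetical measurements (these numbers are illustrative only, not taken from the figures): suppose the man is 1.8 m tall, his shadow is 2.4 m long, and the tree's shadow is 9.6 m long. Then

\[ \frac{h_{\text{tree}}}{9.6\ \text{m}} = \frac{1.8\ \text{m}}{2.4\ \text{m}} \quad\Longrightarrow\quad h_{\text{tree}} = 9.6\ \text{m} \times \frac{1.8}{2.4} = 7.2\ \text{m}, \]

so the palm tree in this hypothetical case would be 7.2 metres tall.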
6. Mapping Cognitive Domains
Within the similar triangles poem structure, there are eight important mappings that we can focus on. We use algebra to solve for each cognitive domain (variable) and analyze the results. A successful proportional poem is a creation whereby the source domain and target domain share a cognitive pattern. It is up to the reader to parse through the four syntactical arrangements and the four syntactical aspects shown in Figure 4.
There are two redundancies in the syntactical arrangements, for we will find that two of the mappings are reciprocals of the other two. This means that we will be swapping the target and source domains to see what happens. If the poem is strong, then two of the arrangements will resonate strongly, while the other two may only provide interesting nuances. That said, a strong expression will make sense in each of the four syntactical aspects. Figure 11 shows one aspect used in "Pandemic Meditation".
Yet, all four aspects are meaningful, and we list two of them below, in Figures 12 and 13.
7. Image-Mappings
Since a mathematical visual poem automatically includes images, metaphors, and image-schemas, we are able to image-map numerous metaphorical expressions within its structure. Not only can we map image-metaphors onto each other, but we can also map them to image-schemas in the mathematical and lexical domains; see Figure 5. In other words, we can map image-schemas to abstract target domains that do not contain image-schemas. For instance, we can map an image-metaphor to one of the target domains in the similar triangles equation (once again, see Figure 5).
We also need to be cognizant that image-schema metaphors are not the same as image-metaphors. As Lakoff and Turner state:
“Image-metaphors map rich mental images onto other rich mental images. They are one-shot metaphors, relating one rich image with one other rich image. Image-schema, as the name suggests are not rich mental images; they are instead very general structures, like bounded regions, paths and centers (as opposed to peripheries), and so on. The spatial senses of prepositions tend to be defined in terms of image-schemas (e.g., in, out, to, from, along, and so on).” [5]
This leads us to notice that within the similar triangles image-schema of "Pandemic Meditation," a container image-schema can be found as well. We therefore have image-schemas within image-schemas. In "Pandemic Meditation," the container-schemas are expressed as "in a shouting match" or "encircling the corpse that my soul drags".
In this poetic expression, you will notice that the number 13 provides a schema as well, for it is repeated in numerous ways, including in the circled letters of the reflecting anagrams ("eleven plus one = twelve plus two"). An ominous sky, a large raven, three skeletons, and a swinging/jumping monkey can be seen among the circling image-schemas; see Figure 10 for some of the details.
The reader is to map the images to the domains, searching for a cognitive pattern connecting us to the metaphors. The image-schema of 13 circles connecting the identical letters to each other maps to the image-schema created by the domain of 13 vultures. The 13 vultures map to the skeletons, which map to the raven, with a three-way connection to the concept of death. A JURY OF 13 ACUTE ANXIETIES IN A SHOUTING MATCH maps to the 13 vultures due to the connecting container schema. The Buddhist concept of monkey mind, an analogy of our disquieted mind, maps to the 13-circles schema as a path-schema for the monkey to follow. The skeletons, raven, and vultures map to the target domain of A JURY OF 13 ACUTE ANXIETIES IN A SHOUTING MATCH or to the target domain of the NON-ATTACHED MIND (or lack thereof).
I am fascinated by similar triangles equations and by the numerous ways of solving them, which together offer a versatile form for poetry. We map across the horizontal bar as well as the equal sign, enabling the target and source domains to move around the equation, creating new syntax and richer meaning. We also change meaning by assigning different values or levels of importance to the variable concepts, as shown in the four syntactical aspects from Figure 4. That is, aesthetic value is found as one creates values for the poem while reading it, much like finding personal meaning in a traditional lexical poem. There are multiple values present in this semantically dynamic equation.
8. The Equation
Let us now look at the proportionality equation in “Pandemic Meditation” and explore a few ways we can manipulate it algebraically. We have already seen Figure 9, which displays the equation written in the syntax of the poem and also in a serif font. Figure 11 below shows us the form as it appears in the poem itself.
Because proportional poems carry the image-schema of similar triangles, they can be read as a is to b as d is to e (as in Figure 3). The mappings of the target and source can be reversed so that the structure can be read lexically in four different ways (recall Figure 4.)
Let us first focus on two different ways we can view the similar triangles structure built into “Pandemic Meditation” and then look at four attributes to take into consideration.
Two Examples of Arrangements
One of the new forms would be asserting that the value of NON-ATTACHED MIND is to the value of A JURY OF 13 ACUTE ANXIETIES IN A SHOUTING MATCH as the value of MY CORPOREAL BODY is to the value of A DARK CLOUD OF 13 VULTURES ENCIRCLING THE CORPSE MY SOUL DRAGS.
Or we can mathematically manipulate the original equation from Figure 11 to get Figure 12.
Another form would be asserting that the value of NON-ATTACHED MIND is to the value of MY CORPOREAL BODY as the value of A JURY OF 13 ACUTE ANXIETIES IN A SHOUTING MATCH is to the value of A DARK CLOUD OF 13 VULTURES ENCIRCLING THE CORPSE MY SOUL DRAGS.
Or once again, we can mathematically manipulate the original equation from Figure 11, this time, to get Figure 13.
The Four Aspects
To illustrate the dynamics inherent in these equations, let us now view the four syntactical aspects inherent in the solutions for each variable of the expression. Note that these values vary from zero to infinity and everything in between; once again we are referring to Figure 4. A short algebraic sketch of why these trade-offs hold follows the list below.
When the value of NON-ATTACHED MIND becomes near-infinite, then the value of A DARK CLOUD OF 13 VULTURES ENCIRCLING THE CORPSE MY SOUL DRAGS becomes near zero.
When the value of A DARK CLOUD OF 13 VULTURES ENCIRCLING THE CORPSE MY SOUL DRAGS becomes near-infinite then the value of NON-ATTACHED MIND becomes near zero.
When the value of A JURY OF 13 ACUTE ANXIETIES IN A SHOUTING MATCH becomes near-infinite then the value of MY CORPOREAL BODY becomes near zero.
When the value of MY CORPOREAL BODY becomes near-infinite then the value of A JURY OF 13 ACUTE ANXIETIES IN A SHOUTING MATCH becomes near zero.
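As a minimal algebraic sketch (using the generic form a/b = c/d as a stand-in for the poem's actual equation in Figure 11, which is not reproduced here), these trade-offs follow from cross-multiplication: with b and c held fixed, the cross terms a and d are inversely proportional.

\[ \frac{a}{b} = \frac{c}{d} \;\Longrightarrow\; ad = bc \;\Longrightarrow\; a = \frac{bc}{d}, \qquad \text{so } d \to \infty \Rightarrow a \to 0 \text{ and } d \to 0 \Rightarrow a \to \infty. \]

Reading NON-ATTACHED MIND and A DARK CLOUD OF 13 VULTURES ENCIRCLING THE CORPSE MY SOUL DRAGS as one such cross pair, and A JURY OF 13 ACUTE ANXIETIES IN A SHOUTING MATCH and MY CORPOREAL BODY as the other, reproduces the four statements above.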
9. Poetic Metaphor Mechanisms in “Pandemic Meditation”
The paper set out to show how the mechanisms of poetic metaphor are present in similar triangles poems. While there are many examples, let us highlight at least one example for each of the three mechanisms of poetic metaphor described in Section 3 in relation to the mathematical visual poem “Pandemic Meditation”.
Extended conventional metaphor: When we map the values in the source domains of A DARK CLOUD OF 13 VULTURES ENCIRCLING THE CORPSE MY SOUL DRAGS and MY CORPOREAL BODY to the target domains of A JURY OF 13 ACUTE ANXIETIES IN A SHOUTING MATCH and NON-ATTACHED MIND, we are performing an example of an extended conventional metaphor. The conventional metaphor used in these expressions is the MIND IS BODY metaphor. These particular metaphorical expressions are not conventional in the sense that they are not part of our basic use of language. As you solve the equation in numerous ways, you read many expressions of the MIND IS BODY metaphor.
Image-mappings: There are numerous examples of image-mappings in this poem, but I would like to focus on two:
(a) The image-schema of the similar triangles mapping one to another is an example of mapping an image-schema to a similar image-schema. In other words, similar triangles ensure the image-schema is consistent across the expressions.
(b) The image of the 13 circles moving between the letters of the anagrams to the 13 anxieties and again to the 13 vultures uses the same circling image-schema, which is another example of image-mapping expressed in the poem.
The generic-level metaphor: This metaphor mechanism can be realized in this poem by noticing that the GENERIC IS SPECIFIC metaphor exists within the specific-level expression of A JURY OF 13 ACUTE ANXIETIES IN A SHOUTING MATCH AS A DARK CLOUD OF 13 VULTURES ENCIRCLING THE CORPSE MY SOUL DRAGS. Due to vultures being symbols of death, we see that we have the conventional metaphor DEATH IS A DEVOURER. With our mapping, we see the expression DEATH IS ANXIETY. This can be understood as not only 13 specific-level omens of death but countless specific-level omens of death for every anxiety that is encountered during the pandemic.
I will reiterate that the equation structure alone affords numerous values within the variable terms/domains. This means that the conceptual domains are variable and thus able to express countless fixed specific-level expressions for the four conceptual domains within the equation.
For example, NON-ATTACHED MIND can express a mind of enlightenment or a mind in hell; it can also express everything in between (see Figure 4). Of course, the proportional schema means that when you change the value (importance) of a conceptual domain, one of the other conceptual domains must change importance as well, because of our expectation that the similar triangles schema must remain consistent.
10. Conclusion
I would like to end this essay with a personal note. The aesthetic beauty that I have found within a mathematical visual proportional poem lies in an alternating cognizance between the two experiences below:
The beauty found in analyzing the multiple dynamic lexical and mathematical cross-domain mappings inherent in the similar triangles image-schema.
The conflation of lexical and sensorial visual imagery.
From the examples above, I hope readers can clearly see that similar triangle poems satisfy the requirements of poetic metaphor. I also hope that they are intrigued by the form and will try to construct or write their own similar triangles poems.
Acknowledgments: This essay was written as a response to a request from Jesse Russell Brooks, Director of the Film and Video Poetry Society, Los Angeles, California.
References
[1] Mark Johnson, "The philosophical significance of image schemas," pages 15–34 in From Perception to Meaning: Image Schemas in Cognitive Linguistics, edited by Beate Hampe (De Gruyter Mouton, Berlin, 2008).
[2] George Lakoff, "Contemporary Theory of Metaphor," pages 202–251 in Metaphor and Thought, edited by Andrew Ortony (Cambridge University Press, 1992).
[3] George Lakoff and Mark Johnson, Metaphors We Live By, University of Chicago Press, 1980.
[4] George Lakoff and Rafael Núñez, Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being, Basic Books, 2000.
[5] George Lakoff and Mark Turner, More Than Cool Reason: A Field Guide To Poetic Metaphor, University of Chicago Press, 1989.
Coded: Art Enters the Computer Age, 1952–1982
"Coded: Art Enters the Computer Age, 1952–1982" is an exhibition that explores the critical relationship between early computer art and other contemporary art movements of the time. It features international and interdisciplinary artworks from artists, writers, musicians, choreographers, and filmmakers who worked directly with computers and used algorithms or other systems to produce their work. The exhibition is organized by Leslie Jones, the curator of Prints and Drawings at the Los Angeles County Museum of Art (LACMA).
The exhibition begins with works from the early 1950s when only universities and research centers could afford access to computers. It focuses on these early experiments when artists had to collaborate with other researchers and engineers or teach themselves coding. For these artists, in addition to technical knowledge and aesthetic sense, it was also important to connect with art historical genres. Even the first generative artists like A. Michael Noll or Frieder Nake wanted to engage with the history of art, creating artworks that referred to works by Piet Mondrian or Paul Klee. The exhibition also features work by the Hungarian-born artist Vera Molnar, who was also engaged with constructivism and concrete art.
The exhibition investigates the relationship between early computer art and other art movements, such as Op art, constructivism, conceptual art, Fluxus, and minimalism. This approach of connecting generative art with mainstream art movements is not a new concept. Many researchers, such as Grant Taylor ("When the Machine Made Art: The Troubled History of Computer Art", 2014), Charlie Gere ("Digital Culture", 2002), and Philip Galanter ("Generative Art and Complexity Theory", 2003), have provided detailed overviews of the early history of computer art and its connections to other art movements. However, "Coded" is the first major exhibition to bring these works together in one place and showcase them side by side. For the first time, visitors can see Sol LeWitt's cubes next to Manfred Mohr's work, a relationship that has often appeared in the literature but has never before been shown to the public together. "Coded" also highlights a less well-known movement in Croatia, "New Tendencies", which organized exhibitions between 1969 and 1973, making it one of the first attempts to introduce computer art to a wider audience by linking it to art historical movements.
The exhibition not only showcases fine art but also examples from other fields, such as poetry, film, dance, and music, providing a comprehensive overview of the medium's earliest days. It features computer-aided compositions by Lejaren Hiller and Leonard Isaacson, as well as work by Max Mathews, who developed the first music program at Bell Labs. The exhibition features a playlist of computer-generated music created with the help of Mark "Frosty" McNeill, which visitors can listen to on LACMA's website and within the exhibition. In addition to the playlist, the show also includes several other programs, such as Analívia Cordeiro's upcoming performance integrating dance and computer art.
The exhibition ends in 1982, when personal computers emerged, as LACMA wanted to represent only the early years of computer art and how the computer was used at that time. Although the exhibition features significant artists such as Manfred Mohr, Vera Molnár, François Morellet, Frieder Nake, Harold Cohen, Charles Csuri, Desmond Paul Henry, and A. Michael Noll, some artists who also made important contributions to the development of this art form during its early stages are missing.
Artists like Roman Verostko and Herbert W. Franke made a huge impact on computer-generated art. In 1982, Roman Verostko developed an interactive program that controlled the drawing arm of a pen plotter, creating intricate and complex drawings that were unlike anything else at the time. Franke was a pioneer in both computer-generated art and generative photography, developing innovative techniques for creating abstract photographic images with computer algorithms and producing works that were both aesthetically pleasing and technically inventive.
Gottfried Jäger and Hein Gravenhorst were also significant artists in the field of generative photography. Their work explored the potential of using computers to generate and manipulate photographic images. Nancy Burson is known for being the first artist to experiment with composite photographs and for pioneering the technique of “morphing”, which involves using a computer program to overlay and manipulate photos. It is important to recognize the significance of these artists in computer-generated art and to acknowledge the ways in which they helped to shape and define the early years.
The Los Angeles County Museum of Art (LACMA) has a long-standing commitment to exhibiting and collecting works of art that incorporate technology and new media. The museum has been at the forefront of presenting and promoting the use of technology in art since the 1960s and has showcased groundbreaking exhibitions about computer art and digital media. This has helped to establish the museum as a leading institution in the field of new media and technology-based art. The museum's acquisition of Frederick Hammersley's drawing in 1972 was a significant moment in the development of computer art, and this exhibition "Coded: Art Enters the Computer Age, 1952-1982" continues this legacy.
Date: Feb 12–Jul 2, 2023
Location: Los Angeles County Museum of Art, BCAM, Level 2
THE HISTORY OF ARTIFICIAL INTELLIGENCE – PART I.
Artificial Intelligence (AI) is a hot topic today, with programs like ChatGPT showing the world how powerful and capable it has become. However, AI is not a new concept invented in the 21st century. The idea of AI can be traced back to ancient mythology where tales of artificial beings that possessed human-like qualities were common.
The idea of creating machines that could think like humans began to take shape in the mid-twentieth century, with the emergence of electronic computers. In the 1940s and 1950s, computers could not store commands, only execute them. They were also extremely expensive, and only prestigious universities or big companies could afford them. During this time, pioneers such as John von Neumann and Alan Turing made significant contributions to the field, laying the groundwork for modern programming.
In 1956, the concept of AI was formally launched at a historic conference, the "Dartmouth Summer Research Project on Artificial Intelligence" (DSRPAI), hosted by John McCarthy and Marvin Minsky. The "Logic Theorist", a program designed to mimic human problem-solving skills created by Allen Newell and Herbert A. Simon, was introduced there; it is considered to be the first AI program in the world. McCarthy proposed a two-month project that brought together top researchers to explore artificial intelligence. Unfortunately, the conference did not bring the success McCarthy had wished for. However, the significance of this event cannot be overstated: AI gained its name, its mission, and its first success.
In the 1960s, computers became faster, cheaper, and more accessible, and they could store more information. This period saw significant progress in the field of AI, with the development of programs such as the "General Problem Solver", which could solve a range of problems by breaking them down into smaller parts. Joseph Weizenbaum's ELIZA, created in 1966, also showed promise as the first chatbot. In Japan, Waseda University introduced WABOT-1 in 1972, the first full-scale "intelligent" humanoid robot. Its limb control system allowed it to walk, grip, and transport objects with its hands. It could also measure distances and directions, and communicate with a person in Japanese. These successes convinced governments to fund AI research at several universities and research institutions.
Alongside AI, computer-generated art was also taking its first steps. The genre dates back to the 1960s, when artists like Frieder Nake, Georg Nees, A. Michael Noll, and Vera Molnár began to use computers to create geometric and abstract artworks. These artists developed algorithms and mathematical formulas to create simple patterns and shapes. While they were not necessarily working with what we would now consider AI, they were important in establishing the field of computer-generated art and in exploring computers as a tool for artistic expression.
In the 1970s, after the initial optimism, AI researchers encountered many obstacles, and the promised results failed to materialize. The biggest obstacle was the lack of computing power: computers could not store enough data or process it fast enough. AI became the subject of critique, and researchers faced a crisis as funding was cut. The so-called "AI winter" lasted until the 1980s.
However, many researchers didn't give up on their hopes and continued to work in this field. A significant milestone in the history of AI-generated art is the development of AARON by Harold Cohen, which is considered to be the first AI art system. Cohen had been working on the project since the early 1970s at the University of California, San Diego, where he taught. AARON was designed to produce art autonomously, using a symbolic rule-based approach to generate images. In its initial form, it created simple black and white drawings that grew more complex over the years. At first, he used custom-built plotter devices, and sometimes he coloured these images by hand. In the 1980s, he managed to add more representational imagery such as plants, people, and interior scenes.
In the 1980s, AI experienced a resurgence in popularity. Expert systems, pioneered by Edward Feigenbaum, were a major success, as they mimicked the decision-making process of a human expert and provided advice to users. The Japanese government funded this line of work, along with other AI tools, through its "Fifth Generation Computer" project. Unfortunately, as had happened before, most of the ambitious goals were not met, and the second "AI winter" began, with most of the funding being cut again.
In the 1990s and 2000s, many goals of artificial intelligence were achieved. In 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. The event was broadcast live on the internet and served as a huge step forward for AI. In the same year, speech recognition software was added to Windows. In 2002, AI entered the home for the first time with Roomba, a robotic vacuum cleaner. AI was also used in fields such as mathematics, electrical engineering, economics, and operations research. It seemed that there was no problem AI could not solve. During this period, computer storage grew, and the emergence of the internet provided access to large amounts of data. Cheaper and faster computers were able to successfully solve many problems of AI.
The advancements in AI technology took a significant leap forward in 2011 when IBM's Watson computer competed on the game show Jeopardy!. Watson was designed to process natural language questions and provide accurate responses, and it was able to defeat two of Jeopardy!'s most successful human competitors.
Another significant development came as Geoffrey Hinton's work on deep learning gained momentum in the 2010s, when he joined Google. Deep learning uses artificial neural networks to analyze and process data, learning from examples. Google quickly recognized the potential of deep learning and invested heavily in the technology. Its DeepMind division created AlphaGo, a computer program that in 2016 defeated the world champion Lee Sedol at the game of Go. This was an important accomplishment, as Go has vastly more possible board configurations than chess, making it far more complex.
Deep learning systems have given a huge boost to the development of image generation. Generative adversarial networks (GANs) were developed by Ian Goodfellow and colleagues in 2014. GANs pit two neural networks against each other: a "generator" learns from a specific dataset (such as images of flowers or other objects) to produce new images resembling it, while a "discriminator", trained to distinguish between real and generated images, evaluates the generator's attempts. After many rounds of this back-and-forth, the generator produces better and better images. Since the introduction of GANs, many artists have made significant contributions to the field of AI-generated art using this technology, such as Mario Klingemann and Robbie Barrat; the latter has used GANs to generate surreal and abstract portraits. The use of GANs in art has opened up new possibilities for artists to create unique works.
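For readers curious about the mechanics, here is a deliberately tiny sketch of that generator/discriminator loop in Python with PyTorch. It is only an illustration on synthetic 2-D points, and the network sizes, learning rates, and circle-shaped "dataset" are invented for the example; it is not the architecture from Goodfellow's paper or from any artist's system mentioned above.

```python
import torch
import torch.nn as nn

latent_dim = 16

# "Generator": turns random noise into a candidate (fake) sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 2),            # a fake 2-D "data point"
)

# "Discriminator": outputs a logit scoring real vs. generated.
discriminator = nn.Sequential(
    nn.Linear(2, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in for a real dataset: points on the unit circle.
    angles = torch.rand(n, 1) * 6.2831853
    return torch.cat([angles.cos(), angles.sin()], dim=1)

for step in range(2000):
    real = real_batch()
    ones = torch.ones(real.size(0), 1)
    zeros = torch.zeros(real.size(0), 1)

    # 1) Train the discriminator to tell real samples from generated ones.
    fake = generator(torch.randn(real.size(0), latent_dim))
    d_loss = loss_fn(discriminator(real), ones) + \
             loss_fn(discriminator(fake.detach()), zeros)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the freshly updated discriminator.
    fake = generator(torch.randn(real.size(0), latent_dim))
    g_loss = loss_fn(discriminator(fake), ones)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The alternating updates are the "back and forth" described above: each side improves against the other until the generator's output becomes hard to distinguish from the training data.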
Another significant contribution to the field of image generation is DeepDream, developed by Alexander Mordvintsev at Google in 2015. DeepDream uses a convolutional neural network to find and amplify patterns in images, giving them a dream-like, psychedelic appearance. In the following years, several companies released apps (like Artbreeder) that transform photos into the style of well-known paintings or generate images from text prompts (such as DALL-E).
Not only in the art field, but in general, AI research and development has rapidly increased. Many other big tech companies, like Microsoft and Facebook, are investing in the field. In recent years, AI has entered into our everyday lives, from virtual assistants (Siri or Alexa) to personalized recommendations (on Netflix or Spotify). AI has also been used in healthcare to develop more accurate diagnoses, and in the finance industry to detect fraud and make better investment decisions. In the future, AI technologies will continue to influence the way we live, work and interact with the world around us.
In our next article, we'll introduce the most recent achievements of AI, including ChatGPT, DALL-E 2, and Lumen5. Stay tuned!
Browse the works of some of our AI artists:
Panel talks at «Le monde non objectif» exhibition finissage
On 31 January, at the finissage of "Le monde non objectif", the solo show of Swiss generative artist Eko33 curated by Kate Vass Galerie at unpaired, we also hosted a panel about the history and the future of generative art. The conversation foregrounded key definitions to examine the history of the relationship between the human and the machine. The panel set out to outline current tendencies and their relation to decades of experimentation in art history.
Our speakers were Eko33, Johannes Gees, Lukas Amacher, and Georg Bak, chaired by Kate Vass.
Digital art did not develop in an art-historical vacuum; it has strong connections to previous art movements, including Dada, Fluxus, and conceptual art. The importance of these movements for digital art resides in their emphasis on formal instruction and their focus on concept, event, and audience participation, as opposed to unified material objects. The idea of rules as a process for creating art has a clear connection with the algorithms that form the basis of all software and every computer operation: a procedure of formal instructions that accomplishes a 'result' in a finite number of steps. Just as with Dadaist poetry, the basis of any form of computer art is instruction as a conceptual element. Notions of interaction and 'virtuality' were also explored early on by artists such as Marcel Duchamp and László Moholy-Nagy in relation to objects and their optical effects. Duchamp's work, in particular, has been extremely influential in digital art: the shift from object to concept embodied in many of his works can be seen as a predecessor of the 'virtual object' as a structure in the process. Fluxus group performances and events in the 1960s were also often based on the execution of precise instructions.
The element of 'controlled randomness' that emerges in Dada, OULIPO, and the works of Duchamp and John Cage points to one of the basic principles and most common paradigms of the digital medium: the concept of random access as a basis for processing and assembling information. Computers were used for the creation of artworks as early as the 1960s. A. Michael Noll, a researcher at Bell Labs, created some of the earliest computer-generated images, e.g. Gaussian Quadratic (1963), which were shown at the Howard Wise Gallery in New York in 1965.
We invite you to browse the catalogue of the show here and follow up for more events:
#generativeart #curatedart #digitalart #historyofart
The Father of Computer Art: The Art of Charles Csuri
Charles Csuri (1922-2022) is known as "The Father of Computer Art". He was a professor, fine artist, and computer scientist who made significant contributions to the field of computer art. Through his research and artistic vision, Csuri developed software that created new artistic tools for 3D computer graphics, computer animation, gaming, and 3D printing.
Charles Csuri was born in 1922 in Grant Town, West Virginia, to immigrant parents from Hungary. A football scholarship to The Ohio State University in 1942 enabled him to attend college. After receiving the Bronze Star for heroism in World War II at the Battle of the Bulge, he continued his education on the GI Bill. In 1943, the US Army had sent him to the Newark College of Engineering, where he studied algebra, trigonometry, analytic geometry, calculus, physics, and chemistry, and received a degree in engineering. After the war, Csuri, an All-American O.S.U. football player, turned down an opportunity to join the NFL. Instead, he received his MFA in classical art education, studying painting, drawing, and sculpture alongside his close friend Roy Lichtenstein in the 1940s. In 1947, Csuri became a professor teaching fine art while exhibiting his paintings in New York. During this time, he created traditional paintings, drawings, and sculptures, but from the early 1950s he had a continuing dialogue with a friend and engineer at The Ohio State University about using the computer to create art. The idea seemed like science fiction until 1964, when he saw a raster image made with a computer. In the same year, he took a course in computer programming and created his first computer-generated picture with an analogue computer; in 1965 he used an IBM 7094 with a drum plotter, programming in FORTRAN. In addition to teaching fine art, Csuri became a professor of Computer Information Science and created digital art with custom software in a Unix environment.
In 1968, Csuri sent shockwaves through the university by becoming the first artist to receive financial support from the National Science Foundation (NSF). Csuri used this funding to support computer scientists in a collaborative effort to develop the artist's tools he envisioned for creating computer art. He implemented a three-part program: building up a library of data and programs focused on new artistic tools for the generation and transformation of images, developing a graphics console, and establishing an educational program. With the grant, he founded a formal organization at OSU in 1971, the Computer Graphics Research Group (CGRG), later renamed the Advanced Computing Center for the Arts and Design (ACCAD), in order to develop and experiment with the potential of computer animation. NSF was so impressed with his work that it supported his creative research for twenty years. The results of this research have been applied to a variety of fields, including flight simulators, computer-aided design, visualization of scientific phenomena, magnetic resonance imaging, education for the deaf, architecture, and special effects for television and film. In 1981, Csuri co-founded one of the first computer animation companies in the world, Cranston/Csuri Productions, which produced animation for all three major U.S. television networks, the BBC, and commercial clients.
EARLY PERIOD (1963-1974)
His early works, from mid-1963 to the 1970s, are significant. His way of thinking about computer art, his creative process, and his intentions during this period helped shape his later art. Many of his themes, such as object transformation, hierarchical levels of control, and randomness, were established in these early years. In the early days of his career, Csuri's art consisted of drawings and sketches, which he made in order to transform them mathematically using analogue and digital technologies. In 1968, Csuri made an important advance by creating a sculpture with a numerically controlled milling machine, the seed of his pioneering developments in the next phase of his career. In the early 1970s, he took this visionary idea further to develop 3D animation and real-time interactive art objects.
One of the best examples of his early period is Hummingbird (1967), which is considered to be one of the earliest computer-animated films. To create the film, he generated over 30,000 images with more than 25 different motion sequences using computer punch cards. The output was then drawn directly onto film using a microfilm plotter. The resulting animation shows a hummingbird dissolving, recomposing, and floating along imaginary waves. The film received an award at the Fourth International Experimental Film Competition in Brussels, and MoMA in New York purchased it in 1967 for its permanent collection as one of the first computer animations.
Csuri's interest in randomness and game-like playfulness is exemplified in the work Random War (1967). For this work, he gave the computer pictures of 400 soldiers along with a written list of their names. Using a pseudorandom number generator, the computer determined the distribution of the soldiers on the battlefield. The names became the soldiers under the categories of "Missing in Action", "Wounded", "Dead", and "Medals Awarded". Csuri used the soldiers' names to personalize the randomness and chaos of war. He also used the names of pop icons, presidents, and ordinary people to emphasize that war does not discriminate in life and death. This was an important conceptual and ironic commentary on the computer playing God, in an age when computers were considered evil. For Csuri, a veteran, this artwork was pivotal in his search for meaning across his life-spanning digital artistic career.
MIDDLE PERIOD (1989-2000)
In 1990, Csuri became professor emeritus, retiring from his role at ACCAD to focus on creating digital art. During this middle period, he started experimenting with combining traditional drawings and paintings with his evolving proprietary software. He brought his drawings into a three-dimensional computer space and used techniques like texture mapping, bump mapping, and embossing.
Csuri described his works as "organic looking", suggesting that the use of traditional media helped to remove the sterile feeling of mechanical computer forms. Developing the technique called texture mapping, he wrapped virtual models with his oil paintings. The results are playful and focus on the beauty and elegance of life.
A good example of this period is A Child's Face. Csuri created a thick application of oil paint that he wrapped around a computer-generated bust, which appears to emerge from the background. Csuri plays with positive and negative space in a strong mix of pattern, color, and three-dimensional texture. He also applies this technique to his drawing in The Hungarians.
During this period, he continued to define artistic tools in collaboration with computer programmers through the development of custom software. Some of his core programming tools were the ribbon tool, the fragmentation tool, and the colormix tool. The fragmentation tool gave Csuri the ability to play with abstraction and representation. In the work Cosmic Matter, Csuri fragmented digital models of classical sculpture to simulate paint flying through the solar system.
The colormix tool allowed him to define the location of specific colors and create color spaces through which he could place or move his objects. In Wonderous Spring, he used prismatic colors in a transparent layering of forms.
Csuri wanted to "draw" in three dimensions, so he spearheaded the development of a ribbon tool that appears to draw calligraphic lines in three-dimensional space. Csuri used this tool to create works such as Wire Ball and Horseplay. Another of his digital developments during this period was his experimentation with the interplay of light and transparency; his artistic vision was to create a reflective, glass-like quality in 3D layers of transparency and objects, as in Balancing Act and Clearly Impressive.
LATER PERIOD (1996-2006)
In his later period, Csuri used the core artistic tools he had created in the early 1990s, which he continually modified throughout his artistic career. One example is the fragmentation tool, later applied to millions of objects in a random generative process. Csuri often functioned as an artistic editor while striving to create by computer something that could never be accomplished by hand, as in Celestial Clutter, Mosaic Lines, Festive Frame, and Abstract Ribbons. Nevertheless, the influence of the history of art and the great masters continued to be an important part of his creative ambitions.
The father of modern art, Paul Cézanne, had a significant influence on Csuri's work and philosophy. Cézanne used the simplified geometric forms of the sphere, cylinder, and cone, modeling them with light and color and composing a new order that transformed the visual language of art. Csuri spent years studying Cézanne's methods of manipulating color and light. He likewise redefined objects in a three-dimensional space in which he created new forms that synthesize realism and abstraction. Csuri was further influenced by the artwork of Leonardo da Vinci and named many of his files Leo. As early as 1966, Csuri created The Leonardo Da Vinci Series with an IBM 7094 and a drum plotter, transforming The Vitruvian Man by computer. He was inspired by concepts like chiaroscuro and sfumato, as seen in Light_0008, where a smoky quality blurs contours so that shapes emerge atmospherically. Cézanne's color is referenced in Landscape Two.
In Venus in the Garden Frame 73, from the Venus series, Csuri uses the famous Greek sculpture of Venus de Milo with his artistic set of parameters that control light, color, transparency, objects, camera position, and distances in a generative process. Here Csuri explores the mysteries of ancient myths and the evolution of organic forms. The sculpture is embedded in repeating and juxtaposed decorative motifs of ribbons, leaves, and flowers that blur the line between the figurative and the abstract. The interplay of elements in this algorithmic painting creates a rhythmic balance in an organic composition.
Origami Flowers is another example of the influence of traditional art history in Csuri’s art. The clump of irises is a reference to Csuri’s admiration for Japanese art and Van Gogh’s Irises. The work is a mathematical object constructed from polygons, in three-dimensional space with a light source placed by the artist. It consists of a complexity of shapes and lines, the intensities of light and dark colors, and the spatial depth resulting from multiple overlaps of material.
In the 1960s, Charles Csuri began exploring the use of randomness and chance in his art, as seen in works such as "Random War" and "Feeding Time." In the 1990s, he returned to this interest and created a series of generative art pieces known as the Infinity series. These works were generated using computer algorithms and mathematical equations, resulting in unique, endlessly repeating pieces. Csuri created three-dimensional environments composed of geometric forms, colors, lights, and shadows, assembled from thousands of fragmented objects. The overarching theme throughout his career has been in search of artistic and conceptual meaning in every creation by image transformation.
Charles Csuri continued creating digital art and animation until the age of 99. He is recognized as a true renaissance man: an athlete, professor, artist, and revolutionary innovator who combined art, science, and technology as one of the first pioneers of the genre. As an inspiring visionary, he mentored 60 PhD students in digital art and programming, who have continued and will continue his legacy as a pioneer in the field of digital art and animation. Charles Csuri will be remembered in the history of art as a founder of the digital art movement. His revolutionary pictures and animations have been showcased in international exhibitions, and his artwork can be found in renowned museums such as the Museum of Modern Art in New York, the Centre Pompidou in Paris, the Victoria and Albert Museum in London, the ZKM in Karlsruhe, LACMA in Los Angeles, the Whitney Museum of American Art in New York, and the Museum of Contemporary Art in Zagreb, Croatia.
Browse the works of Charles Csuri HERE!
Interview with Swiss artist, Eko33 by Arltcollector.eth
The original article is published on www.arltcollector.substack.com on 6th December 2022.
The generative artist known as Eko33 has been making art with computers since 1999. From being featured at Venice Biennale to now having his own solo-exhibition, he has already become an influential and respected artist in the gen art and NFT communities alike.
It was my pleasure to interview Eko33 and speak with him on Twitter Spaces on November 30, 2022. We spoke about many things, including his creative process and his upcoming solo-exhibition opening on December 7th, 2022, curated by Kate Vass. I think a blog post on the Kate Vass Galerie website articulated it best.
In it, his work "focuses on clear geometrical structures while including non-obvious yet advanced and diversified algorithmic forms. His works have no recognizable subject matter, using elements of art, such as lines, shapes, forms, colors, and texture. He aims to create pure art using algorithms. Emotions conveyed by his work are evoked by an eloquent use of colors and saturation on top of a unique layering approach underlying the subjectivity and biases of the human condition."
Eko, you’ve been featured in several exhibitions around the world this year, such as Venice Biennale, Art Basel 2022, and Cortesi Gallery. What was your experience like being a part of these shows?
It’s always a fantastic experience to meet real-life collectors and artists within international art fairs.
It's interesting to see the evolution of the space within a concise period of time. I remember NFT art week Shenzhen and Art Basel Miami in December 2021, and how fast things changed and evolved by the time I was doing the opening of Bloomberg's pavilion with Tezos at the Venice Biennale and at Art Basel in Basel last May.
Traditional art galleries are entering the space, and the technology and know-how needed to display digital art are also making tremendous progress on a monthly basis.
Attendance for digital art keeps on increasing; I remember at Paris+ Art Basel how crowded the area where NFTs were displayed was.
Is there an event in particular that really stood out to you the most from this past year?
Each of them is unique in its own way. One of my best memories was looking at my long-form series being displayed alongside Herbert W. Franke's 'Mondrian' (1979) at Art Basel in Basel, and then having my work featured in Forbes magazine as part of the best of Art Basel 2022.
Is there a place you have traveled to or an event you’ve attended recently that inspired you enough to create something?
When I travel I always take a lot of pictures with my phone or a drone. I’m doing this because I use it as an archive for creating color palettes.
I travel extensively and even if I may not be cognizant of it all the time I’m sure it influences my emotions and mindsets. The first thing I do upon arriving in a new place is to look for interesting museums or bookstores.
When I worked on the long-form series called "Epochs", I started the project while in Lisbon, Portugal, and became fascinated by azulejos, which turned out to be the starting point of a deep dive into the work of Sébastien Truchet and his famous "Truchet tiles", which became the core of the project.
Quick side note, where is one of your favorite places to travel and why?
One of the best places to travel to is back home; I absolutely love Switzerland. I live in a rather secluded area in the Swiss Alps, and I'm grateful each time I go back to discover the evolution of nature. I'm never bored contemplating it.
On a more "exotic" note, one of my favorite places is a French island called Corsica. It's a jewel in the Mediterranean Sea with authentic nature and locals.
Your solo-exhibition opening December 7th 2022 in Zug, Switzerland is entitled, ‘Le monde non objectif’. How did you come together with Kate Vass Galerie to curate this collection?
Kate had the idea for me to initially show two collections, one called “artefacts” and the other called “Untitled”.
Then we took time to review everything I created over the past 18 months and we believed it would be a good idea to expand the scope of the exhibition and to include more facets of my work.
When did you begin working on this body of work?
It's a rather recent body of work; the "oldest" pieces showcased are from projects that started two years ago.
I read that coding is usually the end of the process for you when creating something. Can you explain in a bit more detail your creative process and how you spend 80% of your time drawing and only 20% actually coding?
I’ve been collecting a lot of mid-century furniture and old magazines from this era. I also enjoy collecting and reading art books.
The first step of my process is going through all these references, bookmarks from previous readings, etc. It can be literature, non-fiction, research papers, or drawings I’ve made by hand in the past.
My process can also evolve and change over time. I then draw a lot, identify color palettes, and collect more references.
After this, I start coding. Usually, I spend most of my coding time pushing the code to its limits as well as identifying the space of parameters suitable for the project. I like to anticipate as much as I can while leaving space for randomness.
I like to say that I work between control and randomness.
So, I’m curious to know, how did you first get into NFTs?
I’ve been in the crypto space for quite some time. Initially, I thought it was a scam but gradually I learned more about it. Usually, the more you learn about blockchain technology the more you understand its underlying potential.
The intersection of game theory, cryptography, decentralization, technological challenges, and network effects is really fascinating to me.
I first got into NFTs as a collector and then I naturally started minting my own work.
What are your overall thoughts about this space?
It's quite rare to find so many smart and passionate people in the same place.
Of course, not everyone has good intentions but overall it reminds me of the thrill and excitement I went through when I saw the arrival of the internet.
I have to highlight your spontaneous tweets simply titled “Gm if you love computer generative art!” which is always accompanied by one of your thrilling works of art. How did this ongoing series come about?
Working with computers and machines can be a lonely adventure. To avoid cabin fever I like imagining that each morning I pass by my friends and say good morning to them with a big smile on my face.
This is exactly what I’m doing on Twitter while sharing my advanced work in progress at the same time. It’s also a way for me to show my collectors that I’m here and keep working hard on new projects.
As an artist going into 2023, do you feel pressure or think it’s necessary to be more active on Twitter and other social platforms to promote yourself and your work?
I don’t know if it can be qualified as pressure because at the end of the day everyone does whatever they want. Having said that, I know it can be difficult for some artists to be here and active each and every day.
Sometimes your engagement can fluctuate, and if you don't have a thick skin you may start having doubts about what you do, and even sometimes feel a bit depressed if the audience does not react as you expected.
Social media algorithms have no mercy and I quite enjoy this type of blunt reminder.
I agree. I've come to appreciate the algorithm. Looking back now, how has generative computer art evolved since you began creating it in 1999?
It’s day and night. In 1999 I remember having to learn as much as I could on my own. The Internet was still in its infancy. Finding a learning community was challenging to say the least.
From a technical standpoint, I remember IRCAM in Paris created a hardware midi interface to work with sensors. I spent the entire summer in order to be able to afford it. Nowadays you can DIY this type of hardware for less than 30 euros.
The Intel Pentium CPUs were revolutionary at the time but nowadays our phones are way way more powerful. What we do in real-time today was just science fiction in 1999.
From an artistic point of view, I'm sometimes a bit surprised to see projects made in 2022 that are quite close to what was made a while ago.
With all the new technology available today, I believe we should aim for more radical and ambitious innovation.
I’m really working hard to integrate electronics, laser cutters, 3D printing, direct GPU interfacing, and game engines such as Unity in my practice. This is my main focus and I hope I can present this type of generative art in the near future.
Did you ever imagine the world would eventually become so receptive to digital and generative computer art?
Not even in my wildest dreams. I’ve witnessed the evolution and growing interest in digital and generative art and it’s really an amazing breakthrough.
I’m especially happy for artists like Vera Molnar, Herbert W. Franke, and others who could see this change unfolding after decades of modest interest from the public at large for generative art.
On that note, is there anybody you really look up to as an artist, past or present?
Ben Laposky, Michael Noll, Frieder Nake, Georg Nees, Sophie Taeuber-Arp, Mark Wilson, Waldemar Cordeiro, Paul Klee, Sonia Delaunay, Kandinsky, Aurelie Nemours, Sol LeWitt, Attila Kovacs, Chuck Csuri, Johan Shogren, François Morellet, Robert Mallary, Bridget Riley, Josef Albers, Anni Albers
I was reading your blog posts like “The story of Anni Albers and her contribution to generative art”, as well as previewing your “probably nothing” podcast, which you host. What other things do you love to do when you’re not creating art?
I believe we never spend enough time with friends and family. I learned this the hard way when I lost my father because of a car accident when I was quite young.
When I'm not creating art and I'm not with my friends or family, I enjoy making electronic music, especially generative electronic music with Eurorack modules.
I like the analog feeling of making music from a pure electric signal, patching cables, not being able to save them, and not using computers at all, just a bunch of obscure small modules combined together.
I love to hear about your passion for music. I must know, what are some of your favorite albums that you can listen to for inspiration when creating?
I listen to music all day long while creating new works, my tastes are all over the place. I absolutely love Hindustani music, of course, Ravi Shankar is among my favorites. Ambient music is also something I especially enjoy such as Brian Eno’s album called Apollo: Atmospheres and Soundtracks.
Other legends include Ennio Morricone, Frank Zappa, Pierre Henry, Luc Ferrari, Philip Glass, Kraftwerk, Tangerine Dream, Boards of Canada, Karlheinz Stockhausen, Erik Satie, Klaus Nomi, Nicolas Jaar, Khruangbin, Jean-Michel Jarre, and Apparat.
Lastly, I want to ask what you’re currently building that’s getting you excited.
Right now I’m entirely focused on the forthcoming solo exhibition curated by Kate Vass and displayed at unpaired. Gallery on December 7th in Zug. It’s going to be a great opportunity to discover my new series called “untitled”.
It has never been exhibited before and will be presented on a high-quality digital display as well as super high-quality physical pieces in large format.
In early 2023 I’m also going to have a very exciting project which hasn’t been announced publicly yet. All I can say is that I’m thrilled and super proud about it. I didn’t want to rush the project and it was worth it as the curation panel really liked it.
Next year I’m finally going to be more focused on using electronics and real-time interactive installations to create generative art.
I hope to be able to combine zero knowledge proof technology with creative and artistic use cases during live events with robotics, lasers, and real-time minting experiences.
It’s incredible to learn about how you got started in generative art, and to experiment with real-time interactive installations and new tech. You’ve come a long way in a short amount of time.
Your curiosity to innovate and execute new ideas is inspiring. It helps push the whole community forward.
We’re all looking forward to the release of your solo show and the future of Eko33. Next time we do this, I hope to visit you in the Swiss Alps!
‘Le monde non objectif’ Solo-exhibition by Swiss artist Eko33
Kate Vass Galerie is happy to present 'Le monde non objectif', the solo show of Swiss artist Jean-Jacques Duclaux, aka Eko33, which takes place in Zug, at unpaired. Gallery, on the 7th of December.
At the beginning of the 20th century, artists began to free themselves from academic constraints, abandon pictorial aspects, and experiment only with colors and forms. One of the first figures to abandon objective art was Kazimir Malevich, a leading member of the Suprematism movement. He set out his vision in his book The Non-Objective World, published in 1927 as part of the Bauhausbücher series and regarded as a manifesto of Suprematism.
Malevich was among the first painters who attempted to achieve an absolute painting free from every objective reference. Non-objective art takes nothing from reality and emphasizes the "primacy of pure feeling". In contrast to Constructivism, non-objective art opposes any link between art and utility and any imitation of nature. Malevich aimed to create pure art, using geometric forms, where feeling was the determining factor, the one and only source of creation.
“Blissful sense of liberating nonobjectivity drew me forth into the “desert,” where nothing is real except feeling… and so feeling became the substance of my life.” - Malevich, 1927
The formal elements of non-objective art remain part of recent artistic practices. Many artists of the 20th century used exclusively geometrical forms and excluded every objective element. In the 1960s, a forerunner of generative art, Sol LeWitt, embraced the philosophy of 'non-objectivity' by creating the geometric 'Wall Drawing' series. Technology now provides even more tools for artists to explore the concept of absolute art.
The works of Jean-Jacques Duclaux, also known as Eko33 focus on clear geometrical structures while including non-obvious yet advanced and diversified algorithmic forms. His works have no recognizable subject matter, using elements of art, such as lines, shapes, forms, colors, and texture. He aims to create pure art using algorithms. Emotions conveyed by his work are evoked by an eloquent use of colors and saturation on top of a unique layering approach underlying the subjectivity and biases of the human condition.
Eko33 has created digital artworks using computer code since 1999. His practice is focused on creating artistic software and processes that generate unique artworks. Far from letting computers do as they wish, he defines the artistic rules and gives space to controlled luck and randomness.
In his artistic practice, classical approaches play a major role. Coding is usually the end of the process, as he allocates more than 80% of his time to drawing on paper and only 20% to implementation. After creating the algorithm and identifying the range of parameters matching his artistic vision, he moves on to the final stage, adding another layer of code to build the texture and final touches of his artworks. He uses multiple computer languages, such as Processing, P5.js, Python, VEX, and GLSL, as well as 3D rendering engines, alongside the custom software he has built. Sometimes he uses an old Commodore SX-64 to pay tribute to the pioneers.
Opens on 7th December at 6:30 PM in Zug, unpaired. Gallery (Rigistrasse 2, 6300)
Listen to the Twitter Space Interview with Arlt Collector and Kate Vass here!
For more information click here!
DIMENSIONALISTI - CADAF NYC 2022
Kate Vass Galerie is pleased to announce the exclusive program for CADAF 2022, taking place in New York from 11-13 November. We will present the work of computer art pioneer Charles Csuri, as well as Ganbrood and Lucas Aguirre, who work with 3D technology in their artistic practices.
Over the centuries, many artists have tried to depict our three-dimensional world on a two-dimensional surface. The Italian Renaissance was defined by the synthesis of art and science, resulting in the mastery of three-dimensional perspective with the help of Euclidean geometry. The illusion of three-dimensional depth was a major turning point in art history, allowing the development of more naturalistic styles.
During the first decades of the twentieth century, major modern art movements, such as Cubism and Futurism, developed a revolutionary new approach to representing reality. The aim of Cubist artists such as Picasso and Braque was to show different viewpoints at the same time, within the same space, by breaking objects into planes. The idea of the fourth dimension was a favorite subject of mathematics, science fiction, and art. There are many historical arguments about how the Cubists encountered literature on the fourth dimension, but there is no doubt that their art was influenced by the ideas of Henri Poincaré and Maurice Princet. By the end of the 1920s, the temporal fourth dimension of Einsteinian relativity theory had achieved widespread popularity. As a result, some avant-garde artists recognized that space and time are not separate categories, as had previously been taken for granted, but related dimensions in the non-Euclidean conception. Like non-Euclidean geometry, the fourth dimension was a symbol of liberation for artists. It encouraged them to depart from visual reality and to reject the one-point perspective system that had been used for depicting three dimensions.
Non-Euclidean geometry and the new conception of space and time influenced the Hungarian poet and art theorist Charles Sirató. As an emigrant in Paris, Sirató was fascinated by modern art, especially by paintings with depth and sculptures with moving elements. In 1936, he wrote a manifesto in which he gathered the avant-garde tendencies together into a single movement, which he called Dimensionism. Sirató used the formula “N + 1” to express how the arts (literature, painting, and sculpture) each had to absorb a new dimension: literature should leave the line and enter the plane, painting should leave the plane and enter space, moving from two to three dimensions, while sculpture should step out of closed forms into four dimensions. The movement’s endpoint would be “cosmic art”, which could be experienced with all five senses. The manifesto was signed by many prominent modern artists, including Hans Arp, Kandinsky, Robert Delaunay, and Marcel Duchamp.
Although the movement has since been almost completely forgotten, the representation of three-dimensional and four-dimensional space has remained an important theme in art. In the second half of the twentieth century, technology provided artists with even more tools to experiment with.
The research and artistic vision of Charles Csuri, one of the fathers of computer art, led to advances in software that created new artistic tools for 3D computer graphics, computer animation, and 3D printing. His work “Cosmic Matter” from 1989 is an iconic example of Csuri’s early development of artistic tools that pioneered the field of computer art and animation. For “Cosmic Matter”, Csuri developed, in collaboration with other programmers, a texture-mapping technique that allowed him to map his oil paintings onto 3D objects, and he used object fragmentation to make the paint appear to fly through the air as animation. He also employed an embossing technique long before such image-processing capabilities appeared in Photoshop. In this picture, Csuri combines his formal fine-art training with his revolutionary technology. The work was featured in Charles Csuri's international retrospective exhibition “Beyond Boundaries” and at SIGGRAPH in 2006, and in IEEE Computer Graphics and Applications in 1990.
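For readers unfamiliar with the term, the core idea of texture mapping can be illustrated with a short, general Python sketch. This is not Csuri's software, only a minimal illustration of the principle: each point on a 3D surface carries (u, v) coordinates that index into a 2D image, so a scanned painting can be "wrapped" onto geometry.

```python
import numpy as np

def sample_texture(image: np.ndarray, u: float, v: float) -> np.ndarray:
    """Nearest-neighbour lookup of an RGB image at texture coordinates (u, v) in [0, 1]."""
    h, w, _ = image.shape
    x = min(int(u * w), w - 1)           # column in the image
    y = min(int((1.0 - v) * h), h - 1)   # row (v conventionally runs bottom-up)
    return image[y, x]

# Example: colour the vertices of a triangle by sampling a stand-in "painting".
painting = np.random.rand(512, 512, 3)                 # placeholder for a scanned oil painting
uv_per_vertex = [(0.1, 0.2), (0.8, 0.3), (0.5, 0.9)]   # texture coords assigned to 3D vertices
vertex_colors = [sample_texture(painting, u, v) for u, v in uv_per_vertex]
print(vertex_colors)
```

In a full renderer, this lookup happens for every visible surface point rather than only at vertices, which is how a flat image ends up covering a 3D object.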
Spatial depth has always been a decisive element in Ganbrood's work. From his start as a 3D animator in the early 1990s, through his special effects for film and his later photography, dimensionality has played a significant role as an essential part of his visual language. In his latest series, SOMNIVM, he combines visual themes that have inspired him since his youth: ancient mythology, fairy tales, fine art, theatre, and science fiction. He used an artificial intelligence algorithm, a GAN (Generative Adversarial Network), to mix these different visual elements. The neural network has enabled him to create more abstract, pseudo-figurative pieces that at first glance look figurative but, on close inspection, turn out to be abstract shapes. The series also demonstrates his penchant for illusions and trickery; the works ‘Skené’ and ‘The Displaced’ explore mind-altering effects such as pareidolia, apophenia, and synchronicity. The contradictory mediums he combines, such as fresco painting, photography, and 3D, unbalance human visual recognition.
The Argentinian artist Lucas Aguirre combines elements of physical reality in a digital space. His work “Aparecida” is one of his first experiments combining analog painting with three dimensions. His painted strokes are scanned three-dimensionally and reconfigured in virtual reality. The result of these operations returns to the physical plane and is continued analogically, in a back-and-forth between traditional modes and new media. He understands digital tools as a means to subvert everyday elements and thus generate other visions. The virtual is a space of pure potentiality.
#102, 1964
52 x 39.5 cm
Unique work on paper
Drawing machine 2
Condition report: Signed and dated on front by the artist.
White Indian ink tube pens on black drawing paper. Hand-applied highlights.
TO INQUIRE ABOUT THIS WORK PLEASE CONTACT US: info@katevassgalerie.com