MEMO AKTEN
NEWS
Deep Meditations: We are all connected #06 - Plug me in.mp4
From Deep Meditations: We are all connected series
Video work with music
Unique
PRICE: 6 ETH
If you are interested in this work and would like to pay in ETH, please email us at info@katevassgalerie.com and transfer the amount to our wallet HERE.
Description:
Deep Meditations is a series of works that exist in various forms, primarily a large-scale video and sound installation; a multi-channel, long-form abstract film; a monument that celebrates life, nature, the universe and our subjective experience of it. The work invites us on a spiritual journey through slow, meditative, continuously evolving images and sounds, told through the imagination of a deep artificial neural network.
We are invited to acknowledge and appreciate the role we play as humans as part of a complex ecosystem heavily dependent on the balanced co-existence of many components. The work embraces and celebrates the interconnectedness of all human, non-human, living and non-living things across many scales of time and space – from microbes to galaxies.
Exhibited at Untitled Miami Art Fair, Miami, Kate Vass Galerie Booth A43, December 2024
SOLD
Deep Meditations: We are all connected #05 - Mad World.mp4, 2020
From Deep Meditations: We are all connected series
Video work with music
Unique
If you are interested in this work and would like to pay in ETH, please email us at info@katevassgalerie.com and transfer the amount to our wallet HERE.
Exhibited at Untitled Miami Art Fair, Miami, Kate Vass Galerie Booth A43, December 2024
Deep Meditations: We are all connected #04 - Underworld.mp4, 2020
From Deep Meditations: We are all connected series
Video work with music
Unique
PRICE: 6 ETH
If you are interested in this work and would like to pay in ETH, please email us at info@katevassgalerie.com and transfer the amount to our wallet HERE.
Exhibited at Untitled Miami Art Fair, Miami, Kate Vass Galerie Booth A43, December 2024
BigGAN Study #4 - BigGAN Madness.mp4, 2018
From #BigGAN Studies series
Video work with music
Unique
PRICE: 6 ETH
If you are interested in this work and would like to pay in ETH, please email us at info@katevassgalerie.com and transfer the amount to our wallet HERE.
Description:
A number of experiments in audio-reactive, automated – and controlled – traversal of the latent space of a large generative adversarial network (in this case BigGAN, the model by Google DeepMind). These videos use many different methods of navigating the latent space. Each video uses a different form of audio-reactive traversal, and some additionally use a custom controlled method. Many BigGAN (or other GAN) music videos followed, but these are the very first – at least the first to be posted publicly.
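The idea of audio-reactive latent traversal described above can be sketched in outline: map a per-frame loudness envelope to the step size of a walk through the generator's latent space, so the imagery changes faster when the music is louder. The sketch below is illustrative only – the function names and parameters are assumptions, not the artist's actual code – and it stands in for BigGAN with plain latent vectors rather than a real generator.

```python
import numpy as np

def audio_reactive_latents(audio_energy, dim=128, base_step=0.01, gain=0.5, seed=0):
    """Generate a latent-space path whose speed follows an audio envelope.

    audio_energy: 1-D sequence of per-frame loudness values, roughly in [0, 1].
    Returns an array of shape (n_frames, dim), one latent vector per frame,
    which would then be fed to a generator (e.g. BigGAN) frame by frame.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(dim)            # starting point in latent space
    direction = rng.standard_normal(dim)
    direction /= np.linalg.norm(direction)  # unit direction of travel
    frames = []
    for e in audio_energy:
        # louder audio -> larger step, so the motion "dances" with the music
        z = z + (base_step + gain * e) * direction
        frames.append(z.copy())
    return np.stack(frames)

# synthetic loudness envelope standing in for a real audio analysis
energy = np.sin(np.linspace(0, 10, 300)) * 0.5 + 0.5
path = audio_reactive_latents(energy, dim=128)
print(path.shape)  # (300, 128)
```

In practice the envelope would come from an onset/RMS analysis of the soundtrack, and richer variants change the travel direction over time as well as the speed.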
Exhibited at Untitled Miami Art Fair, Miami, Kate Vass Galerie Booth A43, December 2024
BigGAN Study #2 - It's more fun to compute.mp4, 2018
From #BigGAN Studies series
Video work with music
Unique
PRICE: 6 ETH
If you are interested in this work and would like to pay in ETH, please email us at info@katevassgalerie.com and transfer the amount to our wallet HERE.
Exhibited at Untitled Miami Art Fair, Miami, Kate Vass Galerie Booth A43, December 2024
Reincarnation, 2009
3 chapters
Unique NFT each
Technique: custom software, computer vision, motion tracking, contemporary dance, fluid simulation
To inquire about availability, please email us at info@katevassgalerie.com.
Layers of Perception: Meditation #3, 2023
SOLD OUT
Video work
Unique
PRICE: 6 ETH
Continuing on from my 2017 “Learning to see” series, “Layers of perception” extends my exploration into using state-of-the-art machine learning (AI) technologies to investigate human perception and, more broadly, our self-affirming cognitive biases, our inability to see the world from others’ points of view, and the resulting social and political polarization.
We see the world through a very specific lens. A lens shaped by both our evolutionary history and biology, and our upbringing and culture.
We have evolved to perceive space at certain scales, scales relevant to our ancestors, prey, predators and immediate environment. Animals, rocks, trees; ranging from grains of sand to mountains. We do have the incredible capacity to build tools to sense scales much smaller and much larger. But we cannot intuitively comprehend these scales, the quantum weirdness of the subatomic realm, or the dynamics of supermassive galactic structures many light years across.
Similarly, we have evolved to perceive time at certain scales, timescales relevant to our ancestors and their activities. And again, though we are able to build tools to sense timescales much smaller and much larger, we cannot comprehend the femtosecond timescales of subatomic particle dynamics, or the geological timescales of erosion, or the cosmic timescales of star and galaxy formation.
The lens through which we see the world is also shaped by our upbringing and culture. It is a very convincing illusion that the reality which we perceive is the truth, the full truth, and nothing but the truth. This illusion, that our individual perceived “reality” is the one true reality, is not compatible with the global communities in which we live, and leaves us vulnerable to political manipulation and polarization, distracting us from the crises we need to solve together.
“Layers of perception” bridges vastly different realities and scales of space and time. For a brief moment, the commonalities between radically different worlds are brought together and shown in a unified vision. In doing so, similar to “Learning to see”, the piece works on dual layers: it uses the natural limitations of our neurobiological perception and related biases as a metaphor to reflect on the limitations of our higher-level cognition, how we make meaning, and what we consider to be the truth and our “reality”.
Layers of Perception: Meditation #2, 2023
NFT Video work
Unique
PRICE: 6 ETH
Layers of Perception: Meditation #1, 2023
NFT Video work
Unique
PRICE: 6 ETH
SOLD OUT
‘Learning to see: Gloomy Sunday’, 2017
Part - ‘Earth’
Price: 8 ETH
original video divided into 4 chapters: Fire, Water, Air, Earth(Flower)
1ch HD Video
unique 1/1 video as NFT minted on artist's own smart contract on Ethereum
Exhibited at “Automat und Mensch” show, Zürich, Kate Vass Galerie, 2019
Description:
Learning to See is an ongoing collection of works that use state-of-the-art machine learning algorithms to reflect on ourselves and how we make sense of the world. The picture we see in our conscious mind is not a mirror image of the outside world, but is a reconstruction based on our expectations and prior beliefs. An artificial neural network looks out onto the world, and tries to make sense of what it is seeing. But it can only see through the filter of what it already knows. Just like us. Because we too, see things not as they are, but as we are.
In this context, the term seeing refers to both the low-level perceptual and phenomenological experience of vision, and the higher-level cognitive act of making meaning and constructing what we consider to be truth. Our self-affirming cognitive biases and prejudices define what we see, and how we interact with each other as a result, fuelling our inability to see the world from each other’s point of view and driving social and political polarization. The interesting question isn’t only “when you and I look at the same image, do we see the same colors and shapes?”, but also “when you and I read the same article, do we see the same story and perspectives?”. Everything that we see, read, or hear, we try to make sense of by relating it to our own past experiences, filtered by our prior beliefs and knowledge. In fact, even with these sentences that I’m typing right now, I have no idea what any of them means to you. It’s impossible for me to see the world through your eyes, think what you think, and feel what you feel, without having read everything that you’ve ever read, seen everything that you’ve ever seen, and lived everything that you’ve ever lived. Empathy and compassion are much harder than we might realize, and that makes them all the more valuable and essential.
SOLD OUT
‘Learning to see: Gloomy Sunday’, 2017
Part - ‘Air’
Price: 8 ETH
original video divided into 4 chapters: Fire, Water, Air, Earth(Flower)
1ch HD Video
unique 1/1 video as NFT minted on artist's own smart contract on Ethereum
Exhibited at “Automat und Mensch” show, Zürich, Kate Vass Galerie, 2019
SOLD OUT
‘Learning to see: Gloomy Sunday’, 2017
Part - ‘Fire’
Price: 8 ETH
original video divided into 4 chapters: Fire, Water, Air, Earth(Flower)
1ch HD Video
unique 1/1 video as NFT minted on artist's own smart contract on Ethereum
Exhibited at “Automat und Mensch” show, Zürich, Kate Vass Galerie, 2019
AUTOPOEISIC TRANSMOGRIFICATION FRAGMENT #006,
NFT, Open Edition
MP4 file
Price: 0.04 ETH
SOLD OUT
‘Learning to see: Gloomy Sunday’, 2017
Part - ‘Water’
Price: 8 ETH
original video divided into 4 chapters: Fire, Water, Air, Earth(Flower)
1ch HD Video
unique 1/1 video as NFT minted on artist's own smart contract on Ethereum
Exhibited at “Automat und Mensch” show, Zürich, Kate Vass Galerie, 2019
Reincarnation, 2009
Video work
1 channel HD 1920×1080 @ 60fps video
Duration: 9:26 seamless loop
Technique: custom software, computer vision, motion tracking, contemporary dance, fluid simulation
Unique
To inquire about availability, please email us at info@katevassgalerie.com.
Deep Meditations: We are all connected #08 - Avril 14.mp4, 2020
From Deep Meditations: We are all connected series
Video work with music
Unique
PRICE: 6 ETH
If you are interested in this work and would like to pay in ETH, please email us at info@katevassgalerie.com and transfer the amount to our wallet HERE.
Exhibited at Untitled Miami Art Fair, Miami, Kate Vass Galerie Booth A43, December 2024