
Imagination Engine I: Generating Abstract Art through EEG


Jennifer testing out the Imagination Engine on a participant wearing an OpenBCI headset

Overview

Is mind reading becoming a reality? Recent research suggests that what transpires in our brains might not be as concealed as once believed. The DreamDiffusion paper demonstrated that images of the everyday objects a person is looking at can be reconstructed from their EEG signals. What if, instead of real-world images, we expose individuals to abstract art and train machine learning models accordingly? Abstract art can provoke strong emotional responses and uniquely activate the visual cortex in response to various colors and shapes. This project, known as the "Imagination Engine," aims to leverage these insights to translate brain activity into abstract art.



Background

Recent advances in neuroscience and machine learning have opened doors to new possibilities for understanding and harnessing the human brain's capabilities. A notable illustration of this progress is the DreamDiffusion paper, which investigates the conversion of brain activity, particularly EEG data, into visual images. This pioneering work challenges the notion that the inner workings of the brain are impenetrable.

The DreamDiffusion project adopted a three-step approach. First, it trained an EEG encoder on a vast dataset of EEG recordings, enabling it to convert brain signals into lower-dimensional representations. Second, it employed fine-tuning with Stable Diffusion to enhance the quality of generated images by aligning EEG signals with visual content. Lastly, it incorporated a CLIP encoder, which added a semantic layer to the generated images, imbuing them with not only visual coherence but also conceptual significance.
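To make the first step concrete, here is a minimal sketch of the kind of encoder involved: a small transformer that patches a 16-channel, 2-second EEG window into tokens and maps them to lower-dimensional embeddings. The dimensions and architecture are illustrative assumptions, not the actual DreamDiffusion encoder (which is pre-trained with masked signal modeling on a much larger corpus).

```python
import torch
import torch.nn as nn

class EEGEncoder(nn.Module):
    """Illustrative transformer encoder mapping raw EEG windows to
    lower-dimensional token embeddings (not DreamDiffusion's architecture)."""
    def __init__(self, n_channels=16, n_samples=250, d_model=256, n_tokens=10):
        super().__init__()
        # 250 samples = 2 s at the Cyton+Daisy's 125 Hz sampling rate (assumed).
        # Split the (channels x samples) window into temporal patches and
        # project each patch to a d_model-dimensional token.
        self.patch_len = n_samples // n_tokens
        self.proj = nn.Linear(n_channels * self.patch_len, d_model)
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)

    def forward(self, eeg):                                # (b, channels, samples)
        b, c, t = eeg.shape
        x = eeg.unfold(2, self.patch_len, self.patch_len)  # (b, c, n_tokens, patch_len)
        x = x.permute(0, 2, 1, 3).reshape(b, -1, c * self.patch_len)
        return self.encoder(self.proj(x) + self.pos)       # (b, n_tokens, d_model)
```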

Unfortunately, DreamDiffusion didn't publish their pre-trained EEG encoder. For this project, I followed the methods of steps 2 and 3 in the paper.


Project Goals


Inspired by the convergence of neuroscience and machine learning showcased in DreamDiffusion, our "Imagination Engine" project seeks to push the boundaries of mind-reading technology. While DreamDiffusion concentrated on translating EEG data into recognizable everyday objects, our goal is to explore the realm of abstract art.

The human brain's robust response to emotions and abstract concepts conveyed through art is well-documented. Different colors, shapes, and patterns elicit distinct neural reactions. By training our machine learning model on abstract art, we aim to teach it the intricate language of emotions and artistic expression encoded within EEG data. Our ultimate objective is to empower the machine to craft abstract art from the depths of the human mind.


Methods & Technical Implementation



Data Collection

For data collection, an OpenBCI 16-channel headset was employed to capture brain activity. The process involved selecting a random image from the DELAUNAY dataset of abstract art and recording 2 seconds of EEG data while the participant gazed at the image. To ensure data quality and mitigate fatigue, a brief break was scheduled after every 8 images. In total, a dataset of EEG recordings paired with 1,300 distinct images was collected.
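As a rough illustration of this loop, the sketch below uses BrainFlow (OpenBCI's official streaming library) to grab a 2-second EEG window per image. The serial port, file paths, and image-display step are placeholders; the real collection script may differ.

```python
import random
import time
from pathlib import Path

import numpy as np
from brainflow.board_shim import BoardShim, BoardIds, BrainFlowInputParams

IMAGE_DIR = Path("delaunay_images")     # hypothetical local copy of DELAUNAY
RECORD_SECONDS = 2                      # EEG window recorded per image
IMAGES_PER_BLOCK = 8                    # rest break after every 8 images

params = BrainFlowInputParams()
params.serial_port = "/dev/ttyUSB0"     # adjust for your OpenBCI dongle
board_id = BoardIds.CYTON_DAISY_BOARD.value   # 16-channel Cyton + Daisy
board = BoardShim(board_id, params)

board.prepare_session()
board.start_stream()

images = random.sample(sorted(IMAGE_DIR.glob("*.jpg")), k=1300)
for i, image_path in enumerate(images):
    # ...display image_path to the participant here...
    board.get_board_data()              # flush anything buffered so far
    time.sleep(RECORD_SECONDS)          # participant views the image
    window = board.get_board_data()     # ~2 s of samples, rows = channels
    eeg = window[BoardShim.get_eeg_channels(board_id)]
    np.save(f"eeg_{i:04d}.npy", eeg)    # saved alongside its image pairing
    if (i + 1) % IMAGES_PER_BLOCK == 0:
        input("Break -- press Enter to continue...")

board.stop_stream()
board.release_session()
```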


Training

The training phase adopted a two-fold approach inspired by DreamDiffusion's steps 2 and 3. Since the pre-trained EEG encoder used in DreamDiffusion was not available, the training procedure had to differ slightly. First, fine-tuning with Stable Diffusion was implemented to enhance the model's capacity to generate meaningful images from EEG signals. Subsequently, the model underwent further refinement through alignment with a CLIP encoder.
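To sketch the CLIP-alignment step: pooled EEG embeddings are projected into CLIP's image-embedding space and pulled toward the embedding of the image the participant actually viewed, so that the conditioning carries semantic content. Here `eeg_encoder`, the projection width, and the cosine loss are assumptions for illustration; in the full pipeline, the projected EEG tokens would stand in for the text embeddings fed to Stable Diffusion's cross-attention layers.

```python
import torch
import torch.nn.functional as F
from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection

clip = CLIPVisionModelWithProjection.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14")
clip.requires_grad_(False)          # CLIP stays frozen; only the EEG side trains

# Project pooled EEG features (d_model=256 in the sketch above) into CLIP space.
proj = torch.nn.Linear(256, clip.config.projection_dim)

def alignment_loss(eeg_batch, pil_images):
    """eeg_batch: (b, channels, samples); pil_images: the images the
    participant viewed while each EEG window was recorded."""
    tokens = eeg_encoder(eeg_batch)         # EEGEncoder from the earlier sketch
    eeg_emb = proj(tokens.mean(dim=1))      # pooled EEG embedding, (b, clip_dim)
    pixels = processor(images=pil_images, return_tensors="pt").pixel_values
    with torch.no_grad():
        img_emb = clip(pixel_values=pixels).image_embeds
    # Cosine loss pulls each EEG embedding toward its paired image embedding.
    return 1.0 - F.cosine_similarity(eeg_emb, img_emb).mean()
```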


Results & Insights

It's important to emphasize that this project is still a work in progress, with a limited amount of training data compared to the DreamDiffusion paper. Despite these constraints, intriguing correlations and patterns have emerged in the generated results. Notably, the Imagination Engine has shown promising abilities in capturing specific colors and compositions from EEG data, hinting at its potential to delve deeper into abstract art generation. This ongoing exploration signifies the exciting possibilities that lie ahead in this endeavor.


You can find more on Jennifer Ziyuan Huang's GitHub: https://hoyiki.github.io/
