Atlas of Perception

Exploring the elementary visual concepts that compose our world
The Atlas of Perception radial display - a circular arrangement of 13 monitors showing continuous visualization of neural network features

A New Visual Grammar

What if everything we see is composed of just a few hundred visual elements? Much like how three primary colors combine to form the vast spectrum we perceive, our visual world might be built from a limited set of fundamental concepts.

The Atlas of Perception reveals these building blocks of visual understanding by exploring the latent space of neural networks. By examining how machines parse our world, we gain insight into the grammar of appearance itself—the modular semantics of vision.

The Elements of Seeing

Modern neural networks perceive our world through their own version of visual concepts—elemental patterns that combine to form complex imagery. Through artistic interpretation and mechanistic interpretability techniques, this project makes these hidden elements visible.

Displayed on a continuous circular canvas that has no beginning and no end, these visualizations mirror the formless nature of the abstract feature spaces within AI systems. The radial arrangement invites viewers to consider perception as circular rather than linear—a constellation of interrelated concepts rather than a hierarchy.

Circuit pattern visualization - neural network feature

Circuit Patterns

Visualizations resembling printed circuit boards—realistic at first glance but revealing themselves as nonsensical upon closer inspection. A reflection of how neural networks construct meaning from patterns.

Organic forms visualization - neural network feature

Organic Forms

Flowing, organic lines reminiscent of medical illustrations. These patterns reveal how neural networks encode biological structures and movement, creating a strange familiarity through abstract representation.

Geometric abstraction visualization - neural network feature

Geometric Abstraction

Series of right angles and squares—pure abstraction in the style of op art. These represent the most fundamental visual building blocks, the geometric primitives from which more complex patterns emerge.

Various neural network features

...

These are just the beginning. Inside neural networks lie hundreds more visual primitives—from texture patterns to spatial relationships, from motion indicators to abstract concepts that have no names in human language.

Beyond Human Perception

What can AI systems teach us about seeing? Rather than imposing human interpretations onto machine perception, the Atlas of Perception explores these systems on their own terms, revealing a visual language both alien and familiar.

As we contemplate these visualizations, we're invited to reconsider our understanding of visual perception itself—to see beyond seeing, and recognize the constructed nature of all visual experience.

Technical

The current version of the Atlas of Perception, debuting at the CVPR 2025 Art Show, displays visual concepts extracted from a Sparse Autoencoder (SAE) with 4,608 latents trained on the SigLIP model. This TopK SAE was trained by Sangwu Lee on the CC3M dataset, using .last_hidden_state embeddings from the google/siglip-so400m-patch14-384 model, and is documented here. Hardware design and fabrication by Paul Yarin.
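To make the setup concrete, here is a minimal NumPy sketch of a TopK SAE forward pass. The latent count (4,608) matches the description above, and 1152 is the hidden size of the SigLIP SoViT-400m model; the sparsity level k, the random weights, and the variable names are illustrative assumptions, not the trained model.

```python
# Hedged sketch of a TopK sparse autoencoder forward pass.
# n_latents matches the described SAE; k and all weights are assumed.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_latents, k = 1152, 4608, 32  # k = 32 is an assumed sparsity level

W_enc = rng.standard_normal((d_model, n_latents)) * 0.02
b_enc = np.zeros(n_latents)
W_dec = rng.standard_normal((n_latents, d_model)) * 0.02

def encode(x):
    """ReLU pre-activations, then keep only the k largest per row (TopK)."""
    pre = np.maximum(x @ W_enc + b_enc, 0.0)
    latents = np.zeros_like(pre)
    idx = np.argpartition(pre, -k, axis=-1)[:, -k:]
    np.put_along_axis(latents, idx,
                      np.take_along_axis(pre, idx, axis=-1), axis=-1)
    return latents

x = rng.standard_normal((4, d_model))  # stand-in for .last_hidden_state tokens
z = encode(x)                          # shape (4, 4608), at most k nonzero per row
recon = z @ W_dec                      # linear decoder back to embedding space
```

The TopK constraint replaces the usual L1 sparsity penalty: each token's embedding is explained by at most k of the 4,608 latents, which is what makes individual latents interpretable as visual concepts.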

Latents selected for the atlas were tested for self-activation: when the visualizations themselves are shown to the SigLIP model, they reliably re-activate their corresponding latent. This creates a closed loop in which the neural network recognizes its own internal representations made visible.
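The self-activation check can be sketched as follows. This is a hypothetical illustration of the closed loop: `embed_and_encode` is a mock stand-in for running an image through SigLIP and the SAE, and the top-n ranking criterion is an assumption about what "reliably re-activate" means in practice.

```python
# Hedged sketch of the self-activation test. In the real pipeline,
# embed_and_encode would run the rendered visualization through SigLIP
# and the SAE; here it is mocked so latent 7 fires for the test input.
import numpy as np

n_latents = 4608

def embed_and_encode(image):
    """Stand-in for: image -> SigLIP .last_hidden_state -> SAE latents."""
    z = np.zeros(n_latents)
    z[7] = 3.2  # pretend the visualization strongly activates latent 7
    return z

def is_self_activating(image, latent_idx, top_n=5):
    """A latent passes if it ranks among the top-n activations when its
    own visualization is fed back through the model (assumed criterion)."""
    z = embed_and_encode(image)
    top = np.argsort(z)[::-1][:top_n]
    return latent_idx in top

print(is_self_activating("visualization_for_latent_7.png", 7))
```

Filtering on this criterion discards latents whose visualizations drift away from the concept they were optimized to depict, keeping only those the model itself recognizes.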