A cartographic exploration of interpretable latents in large language models. Twelve Gemma Scope sparse autoencoder "features" in the Gemma 2 2B model are investigated by clustering and rendering thousands of their maximally activating prompts. Select a latent below to explore a map of the concepts that activate it.
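The underlying pipeline (cluster a latent's maximally activating prompts, then lay them out in 2D) can be sketched as follows. This is a minimal illustration, not the site's actual code: the random `embeddings` matrix stands in for prompt embeddings, and the cluster count and projection method are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Hypothetical stand-in: each row is an embedding of one
# maximally activating prompt for a single SAE latent.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 64))

# Group the prompts into candidate "concept" clusters.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
labels = kmeans.fit_predict(embeddings)

# Project to 2D to render the prompts as a map,
# coloring each point by its cluster label.
coords = PCA(n_components=2).fit_transform(embeddings)

print(coords.shape)   # one (x, y) position per prompt
```

In practice a nonlinear projection such as UMAP or t-SNE is a common choice for this kind of map, since it tends to separate clusters more visibly than PCA.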