[Figure: Word Embedding Space (t-SNE Projection) — axes: Dimension 1 (t-SNE), Dimension 2 (t-SNE)]
💡 What does this visualization show?

Every word in an LLM's vocabulary is represented by a high-dimensional vector (e.g., d = 8,192 for Llama 3 70B). These vectors capture semantic relationships: words with similar meanings have similar vectors and therefore lie close together in the embedding space. The t-SNE projection makes this structure visible in 2D – notice the clear clusters for animals, countries, verbs, and adjectives.
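The core idea – similar words have similar vectors, and a 2D projection reveals clusters – can be sketched in a few lines. The toy embeddings below are invented for illustration (real models learn them during training, at far higher dimension), and the projection uses PCA as a simpler linear stand-in for t-SNE:

```python
import numpy as np

# Hypothetical toy embeddings (d = 4 here; real LLMs use thousands of
# dimensions, e.g. d = 8,192 for Llama 3 70B). Words from the same
# semantic category are given deliberately similar vectors.
embeddings = {
    "cat":    np.array([0.9, 0.8, 0.1, 0.0]),
    "dog":    np.array([0.8, 0.9, 0.0, 0.1]),
    "france": np.array([0.1, 0.0, 0.9, 0.8]),
    "japan":  np.array([0.0, 0.1, 0.8, 0.9]),
}

def cosine(u, v):
    """Cosine similarity: near 1 for similar words, near 0 for unrelated."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(embeddings["cat"], embeddings["dog"]))     # high (same cluster)
print(cosine(embeddings["cat"], embeddings["france"]))  # low (different cluster)

# Project to 2D with PCA (a linear stand-in for t-SNE; t-SNE itself is
# non-linear, e.g. sklearn.manifold.TSNE, but PCA keeps this dependency-free).
X = np.stack(list(embeddings.values()))
X = X - X.mean(axis=0)                 # center the data
U, S, Vt = np.linalg.svd(X, full_matrices=False)
coords = X @ Vt[:2].T                  # each row: one word's 2D coordinates

for word, (x, y) in zip(embeddings, coords):
    print(f"{word}: ({x:+.2f}, {y:+.2f})")
```

In the printed coordinates, "cat"/"dog" land near each other and far from "france"/"japan" – the same clustering the t-SNE plot above shows for real embeddings.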