Glossary

Key terms and concepts in transformer-based LLMs

About This Project

Langsplain is an interactive educational tool designed to help you understand how modern Large Language Models work under the hood.

What You'll Learn

  • How text is tokenized and converted to embeddings
  • The mechanics of self-attention and why it's powerful (see the sketch after this list)
  • How transformer blocks process information layer by layer
  • What Mixture of Experts (MoE) is and why it matters
  • How LLMs generate text one token at a time
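
To make the self-attention bullet concrete, here is a minimal sketch of scaled dot-product attention in plain JavaScript. It is an illustration under assumed inputs, not Langsplain's actual code: the softmax and dot helpers, the selfAttention function, and the tiny 2-dimensional vectors are all made up, and in a real transformer the queries, keys, and values come from learned projections of the token embeddings.

  // Softmax turns raw scores into positive weights that sum to 1.
  function softmax(xs) {
    const max = Math.max(...xs);                 // subtract max for numerical stability
    const exps = xs.map(x => Math.exp(x - max));
    const sum = exps.reduce((a, b) => a + b, 0);
    return exps.map(e => e / sum);
  }

  // Dot product of two equal-length vectors.
  function dot(a, b) {
    return a.reduce((acc, ai, i) => acc + ai * b[i], 0);
  }

  // For each token: score its query against every key, normalize the scores
  // with softmax, then mix the value vectors by those weights.
  function selfAttention(queries, keys, values) {
    const dk = keys[0].length;
    return queries.map(q => {
      const scores = keys.map(k => dot(q, k) / Math.sqrt(dk)); // scaled dot products
      const weights = softmax(scores);                         // attention weights
      return values[0].map((_, dim) =>
        weights.reduce((acc, w, t) => acc + w * values[t][dim], 0)
      );
    });
  }

  // Three toy "tokens" with 2-dimensional query/key/value vectors (made up).
  const Q = [[1, 0], [0, 1], [1, 1]];
  const K = [[1, 0], [0, 1], [1, 1]];
  const V = [[1, 0], [0, 1], [0.5, 0.5]];
  console.log(selfAttention(Q, K, V));

Running it prints one mixed vector per token: tokens whose keys align with a query get more weight in that query's output, which is the kind of pattern the Attention Demo visualizes.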

Interactive Features

  • Guided Tour: A step-by-step walkthrough of the architecture
  • Clickable Diagram: Click any component to learn more
  • Attention Demo: Visualize how tokens attend to each other
  • MoE Demo: See how routing works in expert models (a routing sketch follows this list)
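
As a rough idea of what the MoE Demo is visualizing, below is a minimal sketch of top-k expert routing in plain JavaScript. Everything here is an assumption for illustration: the four hard-coded gate scores, the route function, and the top-2 selection are not Langsplain's implementation, and in a real MoE layer the gate scores come from a learned router network.

  // Softmax turns raw scores into positive weights that sum to 1.
  function softmax(xs) {
    const max = Math.max(...xs);
    const exps = xs.map(x => Math.exp(x - max));
    const sum = exps.reduce((a, b) => a + b, 0);
    return exps.map(e => e / sum);
  }

  // Pick the k experts with the highest gate scores for this token and
  // return their indices with softmax-normalized mixing weights.
  function route(gateScores, k = 2) {
    const topK = gateScores
      .map((score, expert) => ({ expert, score }))
      .sort((a, b) => b.score - a.score)
      .slice(0, k);
    const weights = softmax(topK.map(r => r.score));
    return topK.map((r, i) => ({ expert: r.expert, weight: weights[i] }));
  }

  // Hypothetical gate scores for one token over four experts.
  console.log(route([0.1, 2.3, -0.5, 1.7]));
  // -> experts 1 and 3 are chosen; only they run for this token, and their
  //    outputs are combined using the returned weights.

Because only the selected experts run for each token, an MoE model can hold far more parameters than it actually uses on any single token.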

Technical Notes

This visualization uses simplified, toy-sized models for demonstration purposes. Real LLMs have much larger hidden dimensions (e.g., 4096-8192 vs. our 64) and far more layers (32-96 vs. our 3). The attention patterns shown are computed on actual (tiny) weights, but they won't match the behavior of a production model.
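
To put that scale gap in numbers, here is a back-of-the-envelope comparison in plain JavaScript. The config objects are illustrative only, using the low end of the dimension and layer ranges quoted above (real production models vary), and the estimate relies on the rough rule that transformer parameter counts grow with layers times the hidden size squared.

  // Toy model used in this visualization vs. an assumed production-scale LLM.
  const toyModel = { hiddenSize: 64, numLayers: 3 };
  const productionModel = { hiddenSize: 4096, numLayers: 32 }; // low end of 4096-8192 / 32-96

  // Rough ratio of parameter counts: params scale roughly with layers * hiddenSize^2.
  const roughScale =
    (productionModel.numLayers * productionModel.hiddenSize ** 2) /
    (toyModel.numLayers * toyModel.hiddenSize ** 2);

  console.log(`~${Math.round(roughScale)}x more parameters, very roughly`);

Even at the low end of those ranges, the gap is tens of thousands of times more parameters, which is why the demo can be faithful to the mechanism without reproducing production behavior.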

Credits

Built with vanilla JavaScript, D3.js for visualizations, and Anime.js for animations. No framework dependencies - just clean, educational code.