About this demo

This is a ~119K-parameter GPT-2-style transformer with 2 layers, 64-dimensional embeddings, and 4 attention heads. It was trained from scratch using a custom autograd engine written in Odin, then compiled to WebAssembly for inference. The entire model runs locally in your browser — no server calls.
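As a rough sanity check on the size quoted above, here is a sketch (in Python, since the demo's own Odin source isn't shown) of how a GPT-2-style parameter count is tallied. The vocabulary and context sizes are assumptions for illustration — only the 2 layers, 64-dim embeddings, and 4 heads come from the demo description.

```python
def gpt2_param_count(vocab_size: int, context_len: int,
                     d_model: int, n_layers: int) -> int:
    """Count learnable parameters in a GPT-2-style decoder
    with tied input/output embeddings."""
    # Token + learned positional embeddings
    emb = (vocab_size + context_len) * d_model
    # Per block: QKV + output projections (with biases),
    # a 4x-wide MLP (with biases), and two LayerNorms
    attn = 4 * d_model * d_model + 4 * d_model
    mlp = 8 * d_model * d_model + 5 * d_model
    ln = 2 * (2 * d_model)
    block = attn + mlp + ln
    final_ln = 2 * d_model
    return emb + n_layers * block + final_ln

# With an assumed character-level vocab of 128 and a 128-token context,
# the count lands in the same ~100K range as the model above.
print(gpt2_param_count(vocab_size=128, context_len=128,
                       d_model=64, n_layers=2))
```

The exact total depends on details not stated here (vocabulary size, context length, whether embeddings are tied), which is why the figure is approximate.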

Based on Andrej Karpathy's MicroGPT.