TinyBookyLLM is a 65M-parameter LLM with a Llama-3-style architecture, trained on my laptop in roughly half an hour using only ~5GB of memory. It was trained on ~32M tokens of English text from Project Gutenberg, tokenized with a 16k-vocabulary BPE tokenizer. The architecture: 9 layers, hidden size 640, intermediate size 2048, 8 query heads sharing 2 KV heads (grouped-query attention), RoPE position embeddings, SiLU activations, and a 512-token sequence length. Training ran for 16,000 steps at a peak learning rate of 5e-5 with cosine decay and no warmup. The output reads as a convincingly coherent pastiche of 19th-century English prose, a fun demonstration of how much style a very small model can absorb from a focused corpus.
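
For concreteness, here is a minimal sketch of a matching model definition using Hugging Face Transformers' `LlamaConfig`. The field values come straight from the specs above; everything else (RoPE base, untied embeddings, dropout, and other defaults) is left at the library's defaults and is an assumption about the actual setup.

```python
from transformers import LlamaConfig, LlamaForCausalLM

config = LlamaConfig(
    vocab_size=16_000,            # 16k BPE tokenizer
    hidden_size=640,
    intermediate_size=2048,
    num_hidden_layers=9,
    num_attention_heads=8,        # query heads (head_dim = 640 / 8 = 80)
    num_key_value_heads=2,        # GQA: each KV head serves 4 query heads
    hidden_act="silu",
    max_position_embeddings=512,  # training sequence length
)

model = LlamaForCausalLM(config)
print(f"{model.num_parameters() / 1e6:.0f}M parameters")  # ~65M with untied embeddings
```

The parameter count checks out: ~4.96M per layer (attention + SwiGLU MLP) times 9 layers, plus two 16,000 x 640 embedding matrices, lands at roughly 65M.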
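And a sketch of the stated schedule, continuing from the snippet above: cosine decay from a peak LR of 5e-5 over 16,000 steps with no warmup. The choice of AdamW and the loop skeleton are assumptions; only the schedule itself comes from the description.

```python
import torch
from transformers import get_cosine_schedule_with_warmup

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)  # lr is the peak
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=0,           # no warmup: start at the peak LR
    num_training_steps=16_000,    # cosine decay toward zero over all steps
)

for step in range(16_000):
    ...                           # forward/backward pass elided
    optimizer.step()
    scheduler.step()              # advance the cosine schedule each step
    optimizer.zero_grad()
```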