A low-code way to learn AI

Learn how AI works from a real LLM implemented entirely in Excel

Excited to share that Spreadsheets-are-all-you-need will be at the AI Engineer World’s Fair, one of San Francisco’s biggest technical AI conferences. We look forward to seeing you there!

Details and registration at https://www.ai.engineer/worldsfair

“Probably the best 10 minutes you can invest to understand LLMs”

Guillaume Decugis (Entrepreneur & VC)

“I have seen nothing which could come close in traceability and accessibility to understand transformers and LLMs”

Maximilian Hentschel (AI Principal Product Manager)

Watch the demo

Watch the 10-minute demo from the Seattle AI Tinkerers meetup

Sophisticated yet simple

Spreadsheets-are-all-you-need is a low-code introduction to the details behind today’s Large Language Models (LLMs) that’s ideal for:

  • Technical executives, marketers, and product managers
  • Developers and scientists transitioning into machine learning
  • AI policy makers and ethicists

If you can understand a spreadsheet, then you can understand AI!

Learn from a real LLM

Spreadsheets-are-all-you-need implements the forward pass of GPT-2 (an ancestor of ChatGPT that was state of the art only a few years ago) entirely in Excel using standard spreadsheet functions.
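For intuition about what "the forward pass in spreadsheet functions" means, here is a sketch (in Python, for readability) of one scaled dot-product attention step, the core operation of the Transformer; the sheet expresses this same arithmetic in cell formulas. The matrices below are toy numbers invented for illustration; GPT-2 small actually runs 12 such heads of dimension 64 in each of its layers.

```python
import math

def attention(Q, K, V):
    """Scaled dot-product attention. Q, K, V are lists of row
    vectors, one row per token."""
    d = len(K[0])
    out = []
    for q in Q:
        # similarity of this query with every key, scaled by sqrt(d)
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in K]
        # softmax the scores into attention weights
        m = max(scores)  # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        weights = [e / sum(exps) for e in exps]
        # output = attention-weighted average of the value vectors
        out.append([sum(w * v[i] for w, v in zip(weights, V))
                    for i in range(len(V[0]))])
    return out

# Toy example: one query token attending over two key/value tokens
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
out = attention(Q, K, V)
print(out)
```

Every step here (dot products, exponentials, weighted sums) is expressible with ordinary spreadsheet functions, which is what makes the Excel implementation possible.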

This same Transformer architecture is the foundation for OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Bard/Gemini, Meta’s Llama, and many other LLMs.

More lessons to come! Get notified!

Future videos will walk through more details on the internals of modern AI. Subscribe below to get notified about new tutorials and updates.

“Absolutely amazing for getting a deeper understanding of large language models.”

ShiSh S. (Microsoft)

“Demonstrates step-by-step how transformers work in generative AI language models using a tool familiar to us all: Excel.”

Lucy Tancredi (SVP, Technology)

“Should be required coursework.”

Scott Arnold

“A force of nature at helping people understand LLMs…the Gutenberg Press of AI.”

Rafael Martins

Watch the lessons

Enjoyed a video? Share it with a friend!

Lesson 1: Demystifying GPT with Excel

In this 10-minute video we kick things off by walking through the high-level architecture of GPT-2 and witnessing each phase of the Transformer come to life in an Excel spreadsheet.

Lesson 2: Byte Pair Encoding & Tokenization

In this lesson we dive into the first phase of GPT, tokenization, and the Byte Pair Encoding (BPE) algorithm used in models like ChatGPT. We cover:

  • Algorithm Walkthrough: A detailed look at the BPE algorithm, including its learning phase and how it is applied to tokenize language data.
  • Spreadsheet Simulation: A hands-on demonstration of GPT-2’s tokenization process via a spreadsheet model.
  • Limitations and Alternatives: A discussion of the challenges of BPE and a look at other tokenization methods.
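The learning phase mentioned above can be sketched in a few lines of Python: start from individual characters and repeatedly merge the most frequent adjacent pair. This is a simplified character-level illustration, not GPT-2's actual byte-level tokenizer (which applies a pre-trained merge table and regex pre-splitting).

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Return the most common adjacent pair of tokens."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return pairs.most_common(1)[0][0] if pairs else None

def merge_pair(tokens, pair):
    """Replace every occurrence of `pair` with its concatenation."""
    merged, i = [], 0
    while i < len(tokens):
        if i < len(tokens) - 1 and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

# "Learning": start from characters and grow the vocabulary by
# merging the most frequent adjacent pair, over and over.
tokens = list("low lower lowest")
for _ in range(3):
    tokens = merge_pair(tokens, most_frequent_pair(tokens))
print(tokens)
```

After three merges the common stem "low" has become a single token, which is exactly the kind of subword unit BPE discovers in real corpora.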

Lesson 3: Word Embeddings

This lesson is a gentle introduction to the amazing world of word embeddings, the key step that translates your prompt into numbers for the Transformer. In this video we:

  • Motivate why LLMs need embeddings in the first place
  • Discuss the key concepts behind word embeddings
  • Reproduce the famous “king – man + woman = queen” result using a spreadsheet, and discover that sometimes “king – man + woman = king”
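The arithmetic in the last bullet is easy to try yourself. The sketch below uses 4-dimensional vectors invented purely for the demo (real embeddings have hundreds of dimensions and are learned from data), and shows why the answer is sometimes "king": the nearest neighbor of king – man + woman is often the original word itself, so the classic "= queen" result excludes the input words from the search.

```python
import math

# Made-up 4-dimensional vectors, for illustration only
embeddings = {
    "king":  [0.9, 0.8, 0.1, 0.2],
    "queen": [0.9, 0.1, 0.8, 0.2],
    "man":   [0.1, 0.5, 0.5, 0.1],
    "woman": [0.1, 0.4, 0.6, 0.1],
    "apple": [0.2, 0.2, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return sum(x * y for x, y in zip(a, b)) / (norm(a) * norm(b))

# Element-wise king - man + woman
target = [k - m + w for k, m, w in
          zip(embeddings["king"], embeddings["man"], embeddings["woman"])]

# Searching ALL words, the nearest neighbor can be "king" itself...
nearest_all = max(embeddings, key=lambda w: cosine(target, embeddings[w]))

# ...so the classic result excludes the three input words.
candidates = [w for w in embeddings if w not in ("king", "man", "woman")]
nearest = max(candidates, key=lambda w: cosine(target, embeddings[w]))

print(nearest_all, nearest)
```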

Extra: An end-to-end walkthrough of the Excel sheet

This is a high-level walkthrough of the Excel implementation. It is primarily geared to those who already understand Transformers and want to know how the standard architecture maps onto the spreadsheet.

Try it yourself


The sheet is available as an xlsb (Excel binary) file in the Releases section of the GitHub repo. You should be able to download and run this file in Excel for Mac or PC.


If you’re trying to quickly orient yourself to the spreadsheet, this walkthrough video may be helpful, though it is not aimed at beginners. Beginners should start with the lesson videos.

Note that the implementation is just enough to run very small workloads:

  • Full GPT-2 small (124M parameter) model, including the byte pair encoding, embedding, multi-headed attention, and multi-layer perceptron stages
  • Inference/forward pass only (no training)
  • Context is limited to 10 tokens in length
  • 10 characters per word limit
  • Zero temperature output only
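The last limitation is worth unpacking. Sampling temperature rescales the next-token logits before the softmax; as it approaches zero, all the probability piles onto the single highest-scoring token, so "zero temperature" means the model always picks the argmax and its output is deterministic. A short sketch (the logit values are made up):

```python
import math

def softmax(logits, temperature):
    """Convert logits to probabilities at a given temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]            # hypothetical next-token scores
warm = softmax(logits, 1.0)         # probabilities are spread out
cold = softmax(logits, 0.05)        # nearly all mass on the best token
greedy = logits.index(max(logits))  # the temperature-0 choice: argmax
print(warm, cold, greedy)
```

Supporting only the greedy argmax keeps the spreadsheet simple: no random sampling is needed, just a MAX over the final scores.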

This sheet is very big. Unfortunately, it is not unusual for Excel to lock up while using this spreadsheet (though so far only on a Mac). It is highly recommended to use manual calculation mode and the Windows version of Excel (either on a Windows machine or via Parallels on a Mac).


Bugs are not out of the question. Please file issues on GitHub.


@ianand on Twitter

ianand/spreadsheets-are-all-you-need on GitHub

Discuss on Hacker News


What about Google Sheets?

This project actually started in Google Sheets, but the full 124M-parameter model was too big, so it moved to Excel. I’m still exploring ways to make this work in Google Sheets, but it is unlikely to fit into a single file as it does in Excel.

Why can’t I chat with it like ChatGPT? Why doesn’t it match ChatGPT’s output?

Aside from the minuscule context length, it also lacks the instruction tuning and reinforcement learning from human feedback (RLHF) that turn a large language model into a chatbot.

Why is it called Spreadsheets-are-all-you-need?

The name is a play on the title of the famous “Attention Is All You Need” paper, which first described the Transformer machine learning architecture that underlies ChatGPT, Claude, Bard, and many of the latest generative AI tools.