The Feynman diagram is a visual representation of subatomic particles and their interactions. The diagrams are not meant as literal pictures of the particles and their interactions; rather, they are a shorthand for the mathematical formulas that describe them. They are extremely useful in quantum field theory and other areas of physics – Frank Wilczek, who won the 2004 Nobel Prize in Physics, credits Feynman diagrams as an invaluable tool in his research.
Designing your own notation can compress complicated topics into understandable ones. The catch-22 is that it takes a deep understanding of the topic to build the notation in the first place. But just like compression, the upfront cost can be amortized over time by saving other resources (space, compute, and so on).
Programmers and mathematicians do the most basic version of this on a day-to-day basis with variables and conventions (e.g., i represents an iterator variable, or “blackboard bold” ℝ as the set of all real numbers). Then there’s Paul Erdős, who came up with his own conversational vocabulary. Or Donald Knuth, who created his own typesetting system, TeX, to write his books. Or Paul Graham, who wrote Hacker News in his own programming language, Arc.
It's why developers can be insanely productive by writing their own tools (today, I still use most of the developer tools that I built – minikube, skaffold, virgo, and LLaMaTab, to name a few). Even if they aren't global maxima for productivity (that's the best case), they might be local maxima for personal use. That's why Rob Pike still uses Acme as a text editor, despite it being dated compared to newer tools like VSCode.
If you push the definition of notation to mean language, large language models see the world through their own notation – tokenization and vector embeddings. We're learning just how important some of these special tokens are in the tokenization language – whether it's start-of-sequence tokens or "register" tokens that LLMs appear to use as scratch space to keep track of information.
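To make the idea concrete, here's a minimal sketch of how special tokens become part of a tokenizer's vocabulary. The vocabulary and token names below are purely illustrative – real tokenizers use subword schemes like BPE over vocabularies of tens of thousands of entries – but the pattern of reserving dedicated IDs for markers like a start-of-sequence token is the same.

```python
# Toy word-level tokenizer with reserved special tokens.
# Illustrative only -- not the vocabulary of any real model.

SPECIAL = {"<s>": 0, "</s>": 1}          # start/end-of-sequence markers
VOCAB = {"the": 2, "cat": 3, "sat": 4}   # tiny word-level vocabulary

def encode(text: str) -> list[int]:
    """Split on whitespace and wrap the sequence in special tokens."""
    ids = [SPECIAL["<s>"]]
    ids += [VOCAB[word] for word in text.lower().split()]
    ids.append(SPECIAL["</s>"])
    return ids

print(encode("the cat sat"))  # [0, 2, 3, 4, 1]
```

The model never sees `<s>` as text; it only ever sees ID 0, which is why these tokens can take on whatever internal meaning training assigns them.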