Deep dives on software, machine learning, and the systems behind them.
Featured
-
Building Model Armor: multi-layer safety filtering for LLMs
Google's Model Armor wraps gen-AI calls in a managed safety pipeline you flip on with a flag. This article rebuilds that pipeline from scratch — rule-based filters, classifiers, LLM judges, prompt rewriting, and output moderation — so you understand what each layer actually buys you and where the seams are when you swap pieces into your own stack.
-
Securing a web app you don't fully understand
You inherited an app and can't vouch for what's inside it. Rewriting takes quarters you don't have. This is a layered playbook for hardening it from the outside — secret scanning, a WAF at the edge, bot protection, OSV-Scanner for dependencies, then SAST and DAST — in the order you should add them, and what each layer actually catches.
-
Understanding CNNs — convolutions, feature maps, and pooling
What a convolution actually computes, and why stacking convolutions lets a network see edges, then textures, then objects. Built bottom-up with interactive demos: feature maps you can poke at, pooling shown as the dimensionality squeeze it really is. By the end, every diagram in a CNN paper should read like English.
-
How neural networks learn: deep dive into backpropagation and gradient descent
Most tutorials wave at gradient descent and skip backprop, or bury it in chain-rule notation. This one builds both from the bottom up: what a gradient is, why the loss surface curves, and how the chain rule walks errors backward through the network — with widgets you can scrub to see each piece move.