The glue nobody teaches
By Krzysztof Kozłowski
I've been spending time with Kubernetes — across experiments, side projects, and various platforms I've worked with. The biggest surprise wasn't the YAML, or the networking model, or all the moving parts.
The biggest surprise was that every tutorial teaches you the pieces — and almost nobody teaches you how they fit together.
A pile of pieces
There's a guide on setting up a pod.
There's another one on Helm charts.
There's another one on ArgoCD.
There's another one on Prometheus.
Each one is well written. Each one is correct. Each one stops at the edge of its tool — as if that's where understanding ends.
The moment I tried to wire all of these into something that actually works end-to-end — multiple environments, secrets that aren't just base64 in a YAML file, observability that's actually useful when something goes sideways, CI/CD that doesn't hand cluster credentials to every pipeline — I hit a wall.
Not because any single piece was too hard. Each piece, on its own, has a tutorial.
The wall is the glue between them.
The questions nobody answers
Try to find a clear answer for any of these:
- How do you structure a Terraform repo when you have 4 environments, 3 layers (networking, platform, app), and 6 engineers who don't want their state files locked by each other every afternoon?
- Where exactly do secrets live, if not in Kubernetes Secrets and not in YAML? And how do they get from your cloud vault into a running pod, automatically, without anyone copy-pasting anything?
- How does a CI pipeline build an image and then not apply it to the cluster itself, without breaking the deploy flow, losing the audit trail, or making the developer's life worse?
- What does observability "good enough" look like? Metrics? Sure. Logs? Obviously. But how do you actually go from a Slack alert to the exact trace, the exact log line, the exact SQL query in under sixty seconds?
- Who decides the order in which things deploy? Database migration before app rollout, ingress after backend is ready, monitoring before the workload it monitors. Where does that ordering live, and what enforces it?
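To make that last question concrete: one common answer is to put the ordering in the deployment tool itself and version it in Git. As a hedged sketch (resource names here are illustrative, not from any real setup), ArgoCD lets you annotate resources with sync waves so a migration Job is forced to run before the app it migrates for:

```yaml
# Wave -1: synced before anything in wave 0 or later.
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate                      # illustrative name
  annotations:
    argocd.argoproj.io/sync-wave: "-1"
---
# Wave 0 (the default): applied only after wave -1 reports healthy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend                         # illustrative name
  annotations:
    argocd.argoproj.io/sync-wave: "0"
```

ArgoCD applies lower waves first and waits for them to become healthy before moving on, so the ordering lives next to the manifests instead of in someone's head. That is one pattern, not the only one, and it still leaves open who decides the wave numbers.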
The answers exist. They're scattered across vendor docs, GitHub issues, conference talks, and the experience of engineers who learned the hard way.
That scattered state is the problem.
What this blog is
These are notes from exploring this hands-on. Not a "10 years of Kubernetes wisdom" pitch. Not a curriculum. Just an honest record of what it took to connect the dots — what worked, what didn't, what I learned the hard way, and what I'd do differently next time.
I'm going to write about:
- Azure infrastructure patterns — hub-spoke setups in Terraform, AKS networking that doesn't surprise you under load
- GitOps with ArgoCD — App-of-Apps, sync waves, drift detection, and what breaks the moment more than one person touches the repo
- Observability that actually helps — metrics, logs, and traces connected end to end, with the collector treated as a system in its own right
- Zero-trust on the platform — workload identity, External Secrets Operator, no plaintext credentials anywhere
- Lessons from experiments — including the setups that worked locally and fell apart the moment I scaled them up
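As a small taste of the zero-trust item above: the External Secrets Operator is one way to get a cloud vault value into a pod without anyone copy-pasting. A minimal sketch, assuming a cluster-wide store backed by Azure Key Vault via workload identity (all names here are illustrative):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-db-credentials      # illustrative name
spec:
  refreshInterval: 1h           # periodically re-sync from the vault
  secretStoreRef:
    kind: ClusterSecretStore
    name: azure-keyvault        # illustrative store, authenticated via workload identity
  target:
    name: app-db-credentials    # the Kubernetes Secret the operator materializes
  data:
    - secretKey: password       # key inside the generated Secret
      remoteRef:
        key: db-password        # the entry in the cloud vault
```

The pod then mounts the generated Secret as usual; the plaintext value never lives in Git, and rotation in the vault propagates on the next refresh. Wiring up the store and the identity behind it is exactly the kind of glue a future post will cover.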
Some posts will be opinions. Some will be patterns. Some will be specific stories where I burned hours and want you to skip the same wall.
The bar I'm setting for myself: every post should be something you couldn't have learned from a vendor doc.
Why now
Because I should have started years ago.
For a long time I thought, "everyone already knows this." They don't. I've watched experienced engineers give up on Kubernetes three times because no one bridged the gap between "here's a pod" and "here's a production platform." The pieces are documented everywhere. The blueprint that connects them isn't.
That's what I want to build here.
If there's an infrastructure or platform topic you've always wanted someone to explain from a practical, hands-on perspective — open an issue on the repo, or message me on LinkedIn. I'll write about it.
Otherwise, see you in the next note.