Release Management with No Budget, No Tool, and a Confluence Page
When I joined the project, we were weeks away from the first real go-live. The kind of moment everyone had been waiting for: business stakeholders, the development team, and a good part of the company leaning in. Code had already been going to production before I got there. Several deployments, in fact, to a restricted group of users. Shipping early, validating with real traffic. That's exactly what you want.
And the team had been rightly focused on that — getting the product in front of real users, learning fast, iterating. That was the correct priority for that stage. But it meant that some of the scaffolding around releases hadn't been built yet. There was no changelog, no version map, no single place where someone had written down "release X contained services A, B, C at these versions." The knowledge existed, but it was scattered across Git histories, people's memories, and different tools.
Two weeks in, someone asked on a call what went out in the last deployment. I remember opening Azure DevOps in one tab, GitHub in another, Confluence in a third, trying to stitch together an answer in real time. I got through it, but it took way longer than it should have. Not because nobody had done the work — the work was there — but because there was no single place that pulled it all together. That was the moment I decided to build that place.
It wasn't about blame or about what should've been done earlier. It was about the fact that I never wanted to scramble like that again. Not in an abstract "we should improve our processes" way. In a very concrete "I need to be able to answer this question in thirty seconds" way.
The starting point
Services lived in GitHub. Work items in Azure DevOps. Documentation in Confluence. Three tools, each fine on its own, none of them telling a coherent story about releases. A deployment would go out, and if you wanted to know what was in it, you had to ask the person who triggered it. If they remembered. If they were available.
Work items showed "done", but not which release they belonged to. Confluence had some pages, mostly from months ago. Git tags existed, but without a consistent pattern. Not a catastrophe — a slow leak. The kind that costs you twenty minutes here, a confused stakeholder there, and eventually an incident where nobody's quite sure what version of what is running in production.
I looked at a few tools that promised to unify this. Evaluated them, read the docs, and imagined the integration work. And then I closed those tabs and opened a blank Confluence page instead. Sometimes the right answer is simpler than you want it to be.
What I actually did
I started with semantic versioning. Not the "we discussed it and everyone agreed" kind — the kind where you write down what a breaking change means in your specific context and you make it part of how PRs get reviewed. We documented it, and then I repeated it in reviews until people stopped needing the reminder. I just had to survive the period where it felt like I was nagging.
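To make that concrete, the written-down definition can be very short. This is a hypothetical example of the kind of document I mean, not our actual one, loosely following semver.org:

```
Given MAJOR.MINOR.PATCH:
- MAJOR: a consumer of the service must change something to keep working
  (removed endpoint, renamed field, changed response shape)
- MINOR: new capability, existing consumers unaffected
  (new endpoint, new optional field)
- PATCH: behavior fix, no contract change
Reviewers check the proposed bump against the diff before approving.
```

The last line is the part that makes it stick: the convention lives in the review checklist, not just on a wiki page.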
Then came the Confluence release page. Every release gets one. On it, two things: release notes in plain language — not commit messages — and what I call the bill of materials. A table. Every service in the release with its new version. A snapshot of production at that point in time.
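For illustration, the bill of materials can be as plain as this (hypothetical service names and versions):

```
Release 2025-03-12

Service        Version   Previous
orders-api     1.4.2     1.4.0
billing-api    2.0.0     1.9.3
web-frontend   3.1.0     3.0.4
```

A "Previous" column is optional, but it turns the table into a diff: you can see at a glance not just what is running, but what moved in this release.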
I didn't expect the bill of materials to become the most valuable part of the whole process, but it did. The release notes are nice. Stakeholders appreciate them. But when something goes sideways at odd hours, the person assigned to debug the problem opens that bill of materials and immediately sees what changed. No digging through pipelines, no comparing Helm charts, no pinging people.
The last piece was simpler but just as important: tagging every Azure DevOps work item with the release it ships in. Manual? Yes. Every single time. But it closes the loop — you can go from a release to its work items to the PRs to the code. That traceability chain is what lets me answer the question that used to paralyze me, now in under thirty seconds, for any release going back months.
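If you did want to script the tagging step one day, the Azure DevOps REST API accepts a JSON Patch document against the work item's `System.Tags` field. This is a minimal sketch under assumed names (org, project, PAT, and the `release-2024.10` tag are all hypothetical), not our actual tooling:

```python
# Sketch: tag an Azure DevOps work item with the release it shipped in.
# Azure DevOps stores tags as a single '; '-separated string, so we
# read the current value, append, and PATCH it back.
import base64
import json
import urllib.request


def build_tag_patch(existing_tags: str, release_tag: str) -> list:
    """Build the JSON Patch document that appends release_tag to System.Tags."""
    tags = [t.strip() for t in existing_tags.split(";") if t.strip()]
    if release_tag not in tags:  # keep the operation idempotent
        tags.append(release_tag)
    return [{"op": "add", "path": "/fields/System.Tags", "value": "; ".join(tags)}]


def tag_work_item(org: str, project: str, item_id: int, release_tag: str, pat: str) -> None:
    base = f"https://dev.azure.com/{org}/{project}/_apis/wit/workitems/{item_id}"
    auth = "Basic " + base64.b64encode(f":{pat}".encode()).decode()

    # Read the work item's current tags.
    req = urllib.request.Request(f"{base}?api-version=7.0",
                                 headers={"Authorization": auth})
    with urllib.request.urlopen(req) as resp:
        fields = json.load(resp)["fields"]

    # PATCH the appended tag list back (JSON Patch content type is required).
    patch = build_tag_patch(fields.get("System.Tags", ""), release_tag)
    req = urllib.request.Request(
        f"{base}?api-version=7.0",
        data=json.dumps(patch).encode(),
        method="PATCH",
        headers={"Authorization": auth,
                 "Content-Type": "application/json-patch+json"})
    urllib.request.urlopen(req)
```

Even so, I'd script only the mechanical PATCH; deciding which work items belong to a release is exactly the judgment call the manual step exists for.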
What surprised me
The team adopted it faster than I thought they would. I was bracing for the "great, another process" pushback, but it didn't really come. The feedback I got instead was that it brought clarity — people actually wanted to know what was in a release, they just hadn't had a clean way to see it. Once the structure was there, developers started filling in release pages without me asking. That was the moment I knew it would stick.
But the thing that surprised me most came from outside the engineering team. Our business owner started using the release notes as the basis for a newsletter that goes out to end users after every release. I didn't design the notes for that purpose. I wrote them so a product person could understand what shipped without reading commit logs. But they turned out to be exactly what someone needed to communicate changes externally. That's when the release page stopped being "an engineering thing" and became part of how the product talks to its users.
The other surprise was that the manual work turned out to be quietly useful, not just tolerable. When I write release notes, I think about what we're actually shipping. When I fill in the bill of materials, I catch version mismatches I would've missed if a script had done it for me. The process forces a pause, and more than once, that pause caught something.
People ask me why I haven't automated it. The APIs are all there — Azure DevOps, GitHub releases, and Confluence REST. I could stitch them together. But I've looked at what it would actually take — the edge cases, the error handling, the maintenance when one API changes — and the return isn't there yet. The bill of materials is the first candidate. Pulling deployed versions programmatically and pushing them to a Confluence table is doable, and I'll probably get to it soon. The rest stays manual for now, and I'm genuinely okay with that.
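For that first automation candidate, the Confluence side is the straightforward half. A sketch of what it might look like, assuming the Confluence Cloud REST API and hypothetical page IDs and service names (where the `versions` dict comes from — Helm, the cluster, a pipeline — is the part I've deliberately left out, because that's where the edge cases live):

```python
# Sketch: render a {service: version} dict as a Confluence storage-format
# table and push it to an existing page via the Cloud REST API.
import json
import urllib.request


def render_bom_table(versions: dict) -> str:
    """Render {service: version} as a Confluence storage-format table."""
    rows = "".join(
        f"<tr><td>{svc}</td><td>{ver}</td></tr>"
        for svc, ver in sorted(versions.items())
    )
    return ("<table><tbody>"
            "<tr><th>Service</th><th>Version</th></tr>"
            f"{rows}</tbody></table>")


def update_bom_page(base_url: str, page_id: str, versions: dict, auth_header: str) -> None:
    # Confluence requires the next version number on every update,
    # so read the page first, then PUT the new body.
    get = urllib.request.Request(
        f"{base_url}/wiki/rest/api/content/{page_id}?expand=version",
        headers={"Authorization": auth_header})
    with urllib.request.urlopen(get) as resp:
        page = json.load(resp)

    body = {
        "id": page_id,
        "type": "page",
        "title": page["title"],
        "version": {"number": page["version"]["number"] + 1},
        "body": {"storage": {"value": render_bom_table(versions),
                             "representation": "storage"}},
    }
    put = urllib.request.Request(
        f"{base_url}/wiki/rest/api/content/{page_id}",
        data=json.dumps(body).encode(), method="PUT",
        headers={"Authorization": auth_header,
                 "Content-Type": "application/json"})
    urllib.request.urlopen(put)
```

Thirty-odd lines for the happy path — and the error handling, retries, and "what if someone edited the page by hand" cases are why I haven't rushed it.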
What I'd do differently
If I were starting over, I'd write down the process earlier. For the first few releases, I kept it mostly in my head — I knew the steps, I'd just do them. Which is fine until you realize that a process that lives in one person's head isn't really a process; it's a dependency. If I step away from the project for any reason, does the release management survive? That's the real test. Not whether it works when you run it, but whether it works when you're not in the room. I eventually documented everything in a runbook, but I wish I'd done it from the start.
A note on when this stops working
I'll be the first to admit — this works because of where we are right now. The project went live about a year ago, the team is small enough that I'm still close to the code, and we're not releasing ten times a day. If any of those things change significantly, the manual approach starts to crack.
I've seen what mature platforms look like. Dozens of teams, services deploying independently throughout the day, nobody owning the full picture anymore. You can't ask someone to hand-update a Confluence table in that world. You need automated release notes generated from conventional commits, dashboards that pull live versions straight from the cluster, tooling that does the boring work so people can focus on the interesting parts.
So why am I writing an article about a manual process? Because most projects aren't there yet. Most projects are somewhere in the messy middle — past the "just ship it" phase but nowhere near the "fully automated platform engineering" phase. And in that middle, a Confluence page with a table and someone who cares enough to keep it updated goes a surprisingly long way.
If you're in a similar spot
If you just joined a project and you can't answer "what's running in production right now?" — start with the bill of materials. Not a versioning strategy. Not a branching model. Not a tool evaluation. A Confluence page with a table. Service names, versions, last deploy date. You can set it up in twenty minutes, and it will pay for itself the first time someone asks.
The tooling gap between Azure DevOps, GitHub, and Confluence is real, and it's annoying, and no amount of process design makes it go away. You're going to be the glue between them for a while. Just make sure you're being intentional about it instead of reactive.
That's the whole thing. No maturity model, no acronym, no certification needed. Just a process that means I never again have to freeze on a call when someone asks what we shipped. If you've solved this differently — or if you're stuck in the same spot and figuring it out — I'd genuinely like to hear about it.