We Built a Strategy Engine. Now Every Decision Goes Through It.

If you can't explain why an AI project should exist in a bootstrapped business, you shouldn't build it. Our strategy engine exists to stop us lying to ourselves.

[Hero image: stainless steel workshop scene with an open notebook of checklists and a laptop beside a glass of amber liquid, under moody lighting.]

Build Log


A bootstrapped company doesn't get to pretend. When cash is tight and the team is small, every new project competes directly with something else that also genuinely matters: new product development competes with sales calls, process improvements compete with fulfilment, even what you'd loosely call "interesting experiments" compete with the energy reserves of people who are already running on not quite enough sleep. We spent the first five years of Asterley Bros inside that constraint, and it shaped the way we think about every decision we make now. Including, maybe especially, decisions about what to build with AI.

That discipline is why we built what we now call our strategy engine. It's not a grand AI overlord and it's not a concept from a pitch deck. It's decision discipline encoded into a tool specifically so we stop lying to ourselves about what deserves our time and what doesn't. Every meaningful decision now goes through it: hiring, new product development, supply chain changes, whether we build a new AI system at Absolution Labs, whether we change a production process at Asterley Bros. The bar is consistent and it doesn't move for excitement.


The AI ROI problem is real, even for people spending serious money

The loudest AI stories are the big wins, but the distribution of actual results is still pretty ugly. That's not a story about AI not working. It's a story about how hard it is to measure and operationalise, and if it's hard for companies with proper innovation budgets and dedicated data teams, it's doubly hard for a six-person operation that also has to make vermouth, ship boxes, keep the books straight, and show up at trade shows with product that actually tastes right.

The honest answer to why ROI is elusive for most businesses is structural rather than technical. Projects stay in pilot mode, never fully integrating into workflows. They optimise the wrong metric because nobody paused to define the right one at the start. Or they generate genuine efficiency somewhere that nobody thought to measure, so the value is real but invisible, and eventually the tool gets quietly abandoned because it's hard to justify renewing something you can't prove is working. The ROI problem is mostly a governance problem dressed up as a technology problem.


What we mean by "Strategy Engine"

The tool works by forcing every strategic idea through the same structured evaluation. Gap analysis: what capability or resource do we currently lack that this project requires? Critical risk analysis: where could this fail, and what would prevent that failure? A clear view of our current capacity: can we actually build and maintain this given what we're already running? And a forced ROI conversation: what's the measurable outcome, what's the build cost, what's the ongoing review cost, and what does the return need to be to justify all of it? The engine then ranks and categorises the ideas, surfaces the failure modes, and makes the trade-offs legible in a way that a founder's gut feeling can't.
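To make the shape of that evaluation concrete, here is a minimal sketch of what one pass through it might look like in code. Everything here is illustrative: the field names, weights, and numbers are invented for the example, not taken from our actual tool.

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    """One strategic idea, captured in a form that can be compared to others."""
    name: str
    gap: str                       # capability or resource we currently lack
    risks: list[str]               # where this could fail
    fits_capacity: bool            # can we build AND maintain it right now?
    hours_saved_per_week: float    # the measurable outcome
    build_hours: float             # one-off cost to build
    review_hours_per_month: float  # ongoing maintenance and review cost

def score(idea: Initiative) -> float:
    """Crude net-benefit score over a 12-month horizon.

    Hours returned per year, minus build cost and a year of review time.
    Ideas that exceed current capacity score zero: they don't get built yet.
    """
    if not idea.fits_capacity:
        return 0.0
    yearly_saved = idea.hours_saved_per_week * 52
    yearly_cost = idea.build_hours + idea.review_hours_per_month * 12
    return yearly_saved - yearly_cost

ideas = [
    Initiative("Trade order portal", "no self-serve ordering", ["low adoption"],
               True, 3.0, 40.0, 2.0),
    Initiative("Demand forecaster", "no forecasting capability", ["stale data"],
               True, 1.0, 120.0, 8.0),
]
ranked = sorted(ideas, key=score, reverse=True)
print([i.name for i in ranked])  # the portal outranks the forecaster
```

The real engine weighs far more than hours, but the principle is the same: once every idea is expressed in the same comparable fields, ranking stops being a debate about enthusiasm.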

The aim is to strip out human emotional bias and judge each idea on its actual merit. Founders are supposed to be optimistic. That optimism is also, in practice, how you accidentally spend six months on something that looked compelling at the whiteboard and unravelled the moment it met real operations. The strategy engine is the structure that sits between the excitement and the build decision, and it's improved almost every significant call we've made since we started using it properly.


The inputs matter more than the model

We don't talk about the "model" very much internally because the model is genuinely the least interesting piece of this. The hard work is defining what we're actually trying to decide, with enough specificity that the answer can be compared to other answers. For each proposed initiative we capture the constraint it removes (time, cash, quality risk, lead time), the workflow it touches, the operational dependencies, the downside if it fails, and the maintenance burden including how often it will realistically need review. Without that structure you can't rank anything. You just have a collection of ideas that all feel approximately important, and you end up building what the most enthusiastic person in the room argued for most recently.

The forcing function of structured inputs is that it catches vagueness before it becomes wasted time. "This will help with operations" is not an input; it's a hunch. "This removes 3 hours per week from trade order management by replacing the email thread with an automated portal" is an input. Once you've expressed it that clearly, you can actually evaluate it: is 3 hours per week worth the build cost? What's the maintenance overhead? What breaks if the portal goes down? Those questions all have answerable forms once the framing is specific enough.
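Once the input is that specific, the ROI question becomes simple arithmetic. Here's a back-of-envelope version for the trade-portal example above; the pound values are invented assumptions for illustration, not real figures from our books.

```python
# Back-of-envelope ROI check for a "3 hours/week from trade order
# management" claim. All monetary values below are illustrative.
hours_saved_per_week = 3.0
hourly_value = 30.0           # assumed value of an hour of the team's time
build_cost = 1500.0           # assumed one-off cost to build the portal
review_cost_per_month = 50.0  # assumed ongoing maintenance and review cost

monthly_benefit = hours_saved_per_week * 52 / 12 * hourly_value  # 390.0
net_monthly = monthly_benefit - review_cost_per_month            # 340.0
payback_months = build_cost / net_monthly

print(f"net £{net_monthly:.0f}/month, payback in {payback_months:.1f} months")
# net £340/month, payback in 4.4 months
```

The point isn't the precision of the numbers; it's that a vague "this will help with operations" can't produce a payback period at all, and a specific claim can.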


A strategy engine is basically a founder bullshit detector

Here's the uncomfortable truth about this kind of tool: it works best precisely in the situations where you most want to ignore it. Founders are very good at narrating why something would be "really cool." We're also very good at using "AI" as a justification for pursuing things because it feels modern and defensible, when the actual underlying motivation is that it's interesting and we want to build it. The strategy engine has pushed back on ideas that we were genuinely excited about, ideas that felt entirely sensible until they were put against real business metrics, proper market analysis, and a cold look at our current capabilities and capacity. Some of those ideas weren't wrong; they were just wrong for right now. Others turned out to have failure modes that weren't obvious until the evaluation surfaced them.

That's the value. Not that the engine makes decisions for you, but that it makes your reasoning legible and comparable and honest in a way that's hard to achieve when it's all happening inside your own head. You can still override it. You're the one running the business. But at least you know you're overriding it, and you can articulate why.


The reinvestment thesis: the point isn't to save money

There's a thread from this week's thinking that ties the strategy engine to the broader question of what we're actually trying to do with all of this. We're not building AI tools at Asterley Bros and Absolution Labs to reduce headcount or save money in the abstract sense. The goal is to automate the things that don't add value to the business or to our customers, and then take those returned hours and consciously reinvest them into the work that only humans can do well: tastings, site visits, masterclasses, new product development, proper one-to-one customer relationships, and marketing that has a genuine point of view behind it.

A strategy engine makes that reinvestment explicit rather than aspirational. It turns "this will save time" into "this buys us six hours a week, and those hours will go to X." That second framing is entirely different in practice because it forces you to think about the destination of the efficiency, not just the efficiency itself. It's the difference between freeing up capacity and actually using it for something that builds the business. The first without the second is just a tidier version of the status quo.


A simple comparison: three ways small teams justify AI work

| Approach | How it decides | Where it breaks | What the strategy engine adds |
| --- | --- | --- | --- |
| "Feels useful" | Intuition and excitement | Bias, pet projects, weak follow-through | Forces explicit goals, costs, and failure modes |
| "Tech first" | Build because the tool exists | Optimises the wrong metric, low adoption | Anchors builds to constraints and workflows |
| "Ops first" | Start from bottlenecks and pain | Can under-invest in strategic bets | Ranks ops wins against longer-term compounding bets |

The maintenance cost nobody budgets for

A strategy engine also makes visible a cost that most teams quietly ignore: maintenance. If you build an AI tool and never revisit it, it drifts. The world changes, your data changes, your prompts quietly rot, and one day you realise you've been making decisions based on something that hasn't properly matched reality for weeks. So part of the ROI calculation isn't just build cost, it's the ongoing review cost. If you can't afford to review a tool properly and regularly, you can't honestly afford to own it, because the version of the tool that nobody's maintaining is worse than no tool at all. It gives you false confidence in outputs that have gone stale. That maintenance cost belongs in the strategy engine evaluation from the very first conversation about whether to build something. We've started being much more rigorous about this, and it's changed which projects we actually commit to.
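The arithmetic of that is brutal and worth seeing once. A sketch with invented numbers: a tool that looks like a clear win on build cost alone can be a net monthly loss once honest review time is budgeted.

```python
# Illustrative only: how an unbudgeted review cost flips the sign.
monthly_benefit = 200.0        # assumed value of time the tool returns per month
review_hours_per_month = 6.0   # honest estimate of checking prompts/data/outputs
hourly_value = 40.0            # assumed value of an hour of the reviewer's time

net_without_review = monthly_benefit  # 200.0: looks like an obvious win
net_with_review = monthly_benefit - review_hours_per_month * hourly_value
print(net_with_review)  # -40.0: the "saving" is actually a monthly loss
```

Whether the real numbers are these or not, the question the engine forces is the same: what does the review time cost, and does the return still clear it?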


What this looks like week to week

Practically, the engine shows up as a weekly ritual. Every week we identify the bottlenecks and constraints we're bumping into, then run the highest-priority candidates through the evaluation to decide what to remove next. Sometimes that surfaces a new build. Sometimes it's unglamorous automation that was already on the list: trade ordering, invoicing, production planning. And sometimes it tells us to do nothing, go back out and sell, and come back to the building work next week. That last answer is probably the most valuable one it gives us, and also the one that founders are most likely to resist. The engine doesn't have an ego investment in activity. We do. That's the whole tension it's designed to resolve.


Frequently asked questions

What is an AI strategy engine?
It's an internal decision-support tool that evaluates every strategic idea through the same structured lens: gaps, risks, capacity, and expected ROI.

How do you measure AI ROI in a small business?
Tie each build to a constraint it removes and an outcome it changes, then include build cost plus ongoing review and maintenance time.

Does a strategy engine replace human judgement?
No. It reduces emotional bias and improves clarity, but accountability still sits with the humans running the business.

Why do many AI projects fail to show measurable impact?
Many projects stay as pilots and don't integrate into workflows, so spend remains experimental and hard to measure.


Robert Berry is co-founder of Asterley Bros, a London-based premium aperitivo company, and Absolution Labs, an AI automation consultancy for drinks businesses. He makes vermouth by day and builds AI systems in the margins.