Lanbow
Three Signals
Box CEO Aaron Levie recently made what might be the most precise observation about AI and jobs this year: AI won't eliminate roles — it will raise the complexity of every role.
The logic is straightforward. When everyone has access to the same AI tools, "getting it done" is no longer the standard. "Getting it done better than everyone else" is. An engineer's job is no longer writing code. A financial analyst's job is no longer running reports. Every role's bar is being pushed higher by AI, not lower.
Around the same time, OpenAI Chief Scientist Jakub Pachocki said in a podcast interview that they are not far from models that can work autonomously for several days — without precise instructions, without constant intervention, with models assessing their own progress and adjusting course independently. This isn't a long-term vision. He's talking about a timeline of months.
The third signal is perhaps the most telling. A product manager at a major tech company spent over an hour trying to get an AI model to handle a simple scheduled email task. The model repeatedly broke the template. He switched to a different model, and the same task worked on the first try. His conclusion: models are still inconsistent in executing agentic tasks.
Three signals, one conclusion: AI execution capability is growing exponentially, but who makes the decisions, how they're made, and how decision quality is maintained — that layer hasn't kept pace.
This aligns with what we observe across our client engagements. Stanford's HAI AI Index (2026) shows global enterprise AI adoption has reached 78%, yet PwC's Global AI Study (2026) indicates that only 20% of enterprises will capture 75% of AI's commercial value.
The gap isn't tools. It's decisions.

Three "Tool Traps" in Global Advertising
Project these signals onto multi-market advertising, and a familiar problem emerges.
Trap 1: Tools without a system.
Most brands expanding internationally have already adopted AI creative generation, automated bidding, and multi-platform management tools. The tool inventory looks impressive, but these tools share no common decision logic. The creative team uses one AI to batch-produce visuals. The media buying team uses another to adjust bids. Nobody owns the decision of whether to shift budget from TikTok to Meta this week, or whether to pivot creative direction from feature showcases to lifestyle narratives. Gartner's 2025 CMO Survey reveals the same paradox: enterprises use only 49% of their MarTech stack's capabilities on average, yet CMOs continue to purchase more tools. The tools are running. The decisions are pulling in different directions.
Trap 2: Data without judgment.
The dashboard has everything — CPA, ROAS, CTR, frequency, reach. But data is not judgment. When CPA rises 20%, the correct response could be "increase budget to push through the learning phase," "kill the creative and change direction," or "exit this market entirely." Three decisions with completely different cost structures. AI can tell you CPA went up. But deciding how to respond requires not more data, but a decision framework. Platform metrics don't always reflect real change — creative fatigue, competitor bid increases, audience pool shifts, and seasonal fluctuations can all produce false signals. Traditional approaches rely on individual buyer experience, but human processing bandwidth is limited and cannot compound structured priors across markets and clients.
Trap 3: Execution without compounding.
This is the most insidious one. Most teams operate in "weekly reset" mode: whatever was learned from last week's creatives, audiences, and feedback starts over from scratch the following Monday. Execution-level information isn't structurally retained. Every decision cycle begins from zero. Success and failure patterns live in scattered personal documents. The campaign process lacks structured logging and replay mechanisms. Experience cannot be reused. Market entry cold starts cannot be accelerated. AI speeds up execution — but if the learning from each execution isn't captured by a system, faster execution just means faster waste.
Three traps, one root cause: the tool layer has been commoditized by AI, but the decision layer above it remains empty.
Budget Doubled. ROAS Went Up.
This isn't theoretical.
A North American DTC brand working with Lanbow doubled their ad budget after deploying the decision system. Conventionally, expanding the audience pool means diminishing marginal returns, so ROAS should decline. Instead, ROAS increased by more than 10%, the best-performing cohort reached nearly 7x, and purchase volume grew 76%. The team and platforms remained the same, while both creative quality and decision strategy were upgraded through the system.
The system executed through two rounds of Diagnose → Evolve iteration:
In Round 1, the system observed high CTR but low Purchase/ATC rates, and determined that post-click behavior resembled browsing and inspiration-seeking rather than purchase intent. It decomposed the broad audience into two distinct Ad Sets based on actionable purchase scenarios, each with independent budgets and learning windows. Creative direction shifted from generic feature showcases to specific purchase decision points — durability messaging and low-risk commitment framing.
In Round 2, Cohort A showed improved add-to-cart rates but continued to drop off at final checkout. Cohort B produced high ROAS with higher average order values. The system allocated incremental budget to the more stable winner first, then layered retargeting across different funnel drop-off points — browse-no-cart, cart-no-purchase — staging budget increases to avoid learning phase overlap.
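The staging logic from Round 2 can be sketched in a few lines. The cohort names, split ratios, and seven-day learning window below are illustrative assumptions, not the actual allocation: fund the stable winner first, then launch the retargeting layers one learning window apart so no two ad sets are in the learning phase at the same time.

```python
# Sketch of staged budget allocation. Amounts, ratios, and the 7-day
# learning window are illustrative assumptions.
from datetime import date, timedelta

LEARNING_WINDOW_DAYS = 7  # assumed platform learning-phase length

def stage_budget(incremental_budget: float, start: date):
    """Allocate the winner first, then stagger retargeting layers."""
    plan = []
    # 1. The stable winner (high ROAS, higher AOV) gets priority funding.
    plan.append(("winner_scale", incremental_budget * 0.6, start))
    # 2. Retargeting layers launch one learning window apart so their
    #    learning phases never overlap.
    for i, layer in enumerate(["browse_no_cart", "cart_no_purchase"]):
        launch = start + timedelta(days=(i + 1) * LEARNING_WINDOW_DAYS)
        plan.append((layer, incremental_budget * 0.2, launch))
    return plan

for name, budget, launch in stage_budget(10_000, date(2026, 3, 1)):
    print(name, budget, launch)
```

The design choice worth noting is the staggering itself: launching everything at once would put several ad sets into learning simultaneously, making their signals impossible to attribute.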
No team members were swapped. No platforms were switched. Creative iteration accelerated, but the biggest upgrades happened at the decision layer.
Execution improved, and decisions improved even more. This is the fundamental difference between system-driven growth and experience-driven growth.
From Experience-Driven to System-Driven
For the past decade, the scarcest resource in international advertising has been "an experienced media buyer." Find someone who can navigate both Meta and Google, and they can single-handedly support growth in one market. This is experience-driven growth: decision quality depends on individual capability. This model faces two fatal problems in the AI era.
First, experience doesn't scale.
A single media buyer can manage 3–5 markets at most. When a brand needs to run campaigns across Southeast Asia, the Middle East, and Latin America simultaneously, the answer isn't a better individual. It's a decision system that can operate in parallel across all markets.
Second, experience has a shelf life.
Levie is right — AI is continuously raising role complexity. Last year's effective campaign strategy may completely fail this year due to platform algorithm updates, competitive shifts, or user behavior migration.
McKinsey's State of AI report (2026) shows global enterprise AI governance maturity at only 2.3 out of 5, with 74% of enterprises listing "model inaccuracy" as their highest risk. The depreciation rate of individual experience is accelerating. Systems can continuously learn and continuously calibrate.
System-driven doesn't mean replacing people.
Quite the opposite. When AI takes on execution, human value concentrates on calibration and judgment. AI drives scaled execution. Frontline operators own unit economics calibration and compliance boundaries. The human role shifts from "the person who does things" to "the person who defines what's right." And experience compounding solves the third trap: every campaign execution produces structured signals that the next campaign can directly reuse.
No more weekly resets. Weekly accumulation instead.
Gartner predicts that by 2028, 33% of enterprise applications will embed Agentic AI. The competitive moat is shifting from "which market to enter" to "what system manages multiple markets."
You don't lack data. No team lacks data today. You have Meta, Google, TikTok — you have all of it.
What you lack is an enterprise growth decision system.
Advertising is investing.





