Agent-Led Growth

Token-Efficient Software: Why the Next Era of Architecture Will Be Shaped by How AI Reads, Not How Humans Click

A thought leadership argument for why token efficiency — the cost for AI agents to discover, read, and act on data — is the new architectural primitive. Covers how databases, APIs, UIs, SaaS pricing, and deployment models all change when the primary software operator is an AI agent.

Pascal · 18 min read

Token-efficient software is an emerging architectural philosophy where systems are designed to minimize the cost — in LLM tokens — for AI agents to discover, read, and act on data. Instead of optimizing for page load speed, click depth, or API response time, token-efficient architecture optimizes for how cheaply and accurately an AI agent can understand and operate a system. This changes everything: the data layer, the API surface, the UI, the deployment model, and the pricing.

This might sound like a niche optimization concern. It's not. It's the most consequential architectural shift since mobile-first design, and it's happening right now.

Mobile-first didn't just shrink websites. It restructured navigation, killed Flash, invented responsive grids, and created entirely new software categories (ride-sharing, mobile banking, stories). Token-first won't just make software cheaper to operate with AI. It will restructure what software looks like, how it's built, and who it's built for.

This article maps the shift across every layer of the stack.

The New Performance Metric

For thirty years, software performance has been measured in human terms:

  • Page load time. How fast does the screen render?
  • Click depth. How many clicks to reach a feature?
  • API latency. How quickly does the server respond?
  • Time to interactive. When can the user start doing things?

These metrics assume a human is the operator. A person sitting in front of a screen, clicking, reading, waiting. Every optimization in modern software — CDNs, lazy loading, caching, progressive rendering — exists to make that human's experience faster.

But a new operator has entered the building. AI agents — systems like Claude Code, Cursor, GitHub Copilot, and the growing ecosystem of autonomous agents — now interact with software systems millions of times per day. They don't click. They don't see pixels. They don't wait for page loads. They consume data through tool calls and reason about it in tokens.

And tokens cost money. Every token an AI agent spends discovering what's in your system, reading data, parsing responses, and figuring out how to act — that's a real cost. At scale, it becomes the dominant cost.

Tokens per operation is the new page load time.

Here's the shift:

| Era | Primary operator | Key metric | Optimization target |
|---|---|---|---|
| Desktop (1995–2010) | Human at desk | Page load time | Server speed, bandwidth |
| Mobile (2010–2020) | Human on phone | Time to interactive | Responsive design, app size |
| Cloud (2015–2025) | Human via browser | API latency | Microservices, CDNs, caching |
| Agentic (2025–) | AI agent | Tokens per operation | Data format, context density, middleware reduction |

The companies that figured out mobile-first early — Instagram, Uber, Stripe — defined the last decade of software. The companies that figure out token-first early will define the next one.

The Token Cost Model

Every interaction an AI agent has with your data costs tokens in two dimensions:

  1. Discovery — finding what's relevant
  2. Consumption — reading the actual content

These two costs compound across every layer of the stack.

How AI Agents Actually Read Data

When an AI agent like Claude Code needs to check your project status, here's what happens with a traditional web application backed by a database:

Agent → Tool call (request tokens) → API server → SQL query → Database
Database → SQL result → API server → JSON response → Tool call (response tokens) → Agent reasoning

Each arrow is a translation layer. Each translation costs tokens. A typical task lookup:

  • 1 tool call to query the API (~200 tokens request + ~500 tokens response)
  • Often needs a second query for related data (joins, foreign keys, context)
  • JSON responses carry structural overhead — keys, nesting, type markers
  • Total: ~1,000–2,000 tokens per data retrieval

Now compare the same operation when data lives in a well-structured markdown file:

Agent → Read tool call → File system → Markdown content → Agent reasoning

One hop. One read. A kanban board file is ~200–500 tokens for 20–30 tasks. Related context (project notes, goals, dependencies) lives in adjacent files accessible by glob pattern. Total: ~300–800 tokens per data retrieval.

That's a 2–3x difference per operation. Over hundreds of operations per day, it's the difference between a $50 monthly AI bill and a $150 one. Over thousands of agents across an organization, it's a budget line item.

Why Markdown Wins on Token Density

This isn't about markdown being trendy. It's about information density per token.

Claude's tokenizer — like most LLM tokenizers — was trained on massive amounts of markdown. The encoding is efficient. A markdown table uses roughly 40% fewer tokens than the equivalent JSON array carrying the same information. A markdown heading with a bulleted list uses roughly 60% fewer tokens than the same data in nested JSON objects.

Consider a simple list of five tasks:

JSON (98 tokens):

{"tasks":[{"id":1,"title":"Review proposal","status":"in_progress","assignee":"pascal","due":"2026-04-05"},{"id":2,"title":"Send invoice","status":"todo","assignee":"pascal","due":"2026-04-06"},{"id":3,"title":"Update CRM","status":"done","assignee":"pascal","due":"2026-04-03"},{"id":4,"title":"Write blog post","status":"in_progress","assignee":"pascal","due":"2026-04-07"},{"id":5,"title":"Client call prep","status":"todo","assignee":"pascal","due":"2026-04-05"}]}

Markdown (54 tokens):

## Tasks
- [ ] Review proposal — due Apr 5 (in progress)
- [ ] Send invoice — due Apr 6
- [x] Update CRM — due Apr 3
- [ ] Write blog post — due Apr 7 (in progress)
- [ ] Client call prep — due Apr 5

Same information. 45% fewer tokens. And the markdown version is simultaneously more readable by humans — a property that JSON does not share.

This isn't a trivial difference. It's a compounding advantage that affects every read operation an AI agent performs.
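The conversion itself is mechanical. A minimal sketch of going from the JSON shape to the denser markdown checklist (field names mirror the example above; character counts are a rough proxy for token counts, not an exact tokenizer measurement):

```python
import json

def tasks_to_markdown(raw_json: str) -> str:
    """Convert a JSON task list into the denser markdown checklist form."""
    tasks = json.loads(raw_json)["tasks"]
    lines = ["## Tasks"]
    for t in tasks:
        box = "x" if t["status"] == "done" else " "
        note = " (in progress)" if t["status"] == "in_progress" else ""
        lines.append(f"- [{box}] {t['title']} — due {t['due']}{note}")
    return "\n".join(lines)

raw = (
    '{"tasks":['
    '{"id":1,"title":"Review proposal","status":"in_progress","due":"Apr 5"},'
    '{"id":2,"title":"Update CRM","status":"done","due":"Apr 3"}]}'
)
md = tasks_to_markdown(raw)
print(md)
print(f"JSON: {len(raw)} chars, markdown: {len(md)} chars")
```

Structural keys like `id` and `status` dissolve into checkbox state and a parenthetical, which is exactly where the savings come from.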

The Data Layer: Files Beat Databases (at Agent Scale)

This is the most provocative claim in this article, so let me be precise about the conditions under which it's true.

For AI agent operations at the scale most businesses operate — dozens to hundreds of active records, not millions — structured markdown files in a filesystem are more token-efficient than relational databases. Not slightly. Significantly.

The Middleware Tax

A Postgres database is an excellent piece of engineering. But between an AI agent and the data inside that database, there are layers:

  1. An API server — Express, FastAPI, whatever — that translates HTTP requests into SQL
  2. An ORM or query builder — that translates code into SQL
  3. The database engine — that executes the query and returns rows
  4. A serializer — that translates rows back into JSON
  5. The HTTP response — that carries that JSON back to the agent

Each layer is a translation. Each translation adds tokens (in the API response), latency (in the round trip), and failure surface (in the error handling).

The filesystem has one layer: the file.

| Factor | Database (Postgres + API) | Filesystem (Markdown) |
|---|---|---|
| Tokens per read | ~1,000–2,000 | ~300–800 |
| Middleware layers | 4–5 (API, ORM, DB, serializer, HTTP) | 1 (file read) |
| Write cost | API call + SQL INSERT/UPDATE | Edit tool (just the diff) |
| Multi-entity context | Multiple queries (joins, subqueries) | One file or adjacent files |
| Search | SQL WHERE (precise, but costly per query) | Grep/Glob (fast, less precise) |
| Human readability | Requires a UI to visualize | Readable in any text editor |
| Version control | Requires migration tooling | Native git |

When Databases Still Win

Databases are better when:

  • You have thousands or millions of records — file-per-record doesn't scale past ~500 items
  • You need transactional consistency — ACID guarantees matter for financial data, user accounts, concurrent writes
  • You need relational queries across large datasets — "find all customers who bought X and also did Y" across 100,000 records
  • You have multiple concurrent writers — file-level merge conflicts are worse than row-level locks

But here's the thing: most operational software doesn't hit these thresholds. A typical B2B company has 50–500 active deals, 5–50 active projects, 10–100 recurring tasks. At this scale, a well-organized file system is not just competitive with a database — it's superior for AI agent operations because it eliminates every middleware layer between the agent and the data.

The File-First Architecture

What does file-first look like in practice?

workspace/
├── INDEX.md              # 200 tokens — thin index of everything
├── projects/
│   ├── client-a.md       # 400 tokens — tasks, status, context, notes
│   ├── client-b.md       # 350 tokens
│   └── client-c.md       # 300 tokens
├── pipeline/
│   ├── active-deals.md   # Current pipeline as markdown table
│   └── closed-q1.md      # Archived deals
├── goals/
│   ├── q2-2026.md        # North star + pillar metrics
│   └── monthly/april.md  # Monthly targets
└── kanban/
    ├── sprint.md          # Current sprint board
    └── backlog.md         # Everything else

An AI agent operating on this system:

  1. Reads INDEX.md (~200 tokens) to orient
  2. Reads the one relevant file (~400 tokens)
  3. Total: ~600 tokens for full context on a project

The equivalent database operation: query the projects table, join with tasks, join with notes, join with contacts, serialize to JSON, parse. Total: ~1,500+ tokens and 2–3 tool calls instead of 1.

The file-first system is also version-controlled by default (it's git), human-readable by default (it's markdown), and requires zero infrastructure (no server, no ORM, no migrations).
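The two-read orientation flow is simple enough to sketch. This assumes the workspace layout above; the file names and the ~4-characters-per-token heuristic are illustrative, not exact tokenizer math:

```python
import tempfile
from pathlib import Path

def read_with_budget(path: Path, budget: dict) -> str:
    """Read a file and charge a rough token cost (~4 chars per token) to the budget."""
    text = path.read_text()
    budget["tokens"] += max(1, len(text) // 4)
    return text

def project_context(workspace: Path, project: str) -> tuple[str, int]:
    """Two reads total: the thin index to orient, then the one relevant file."""
    budget = {"tokens": 0}
    read_with_budget(workspace / "INDEX.md", budget)
    detail = read_with_budget(workspace / "projects" / f"{project}.md", budget)
    return detail, budget["tokens"]

# Demo workspace mirroring the layout above
ws = Path(tempfile.mkdtemp())
(ws / "projects").mkdir()
(ws / "INDEX.md").write_text("# Index\n- projects/client-a.md — onboarding project\n")
(ws / "projects" / "client-a.md").write_text("# Client A\n- [ ] Kickoff call — Apr 5\n")
detail, tokens = project_context(ws, "client-a")
```

The point is the shape, not the numbers: one cheap orientation read, one targeted read, no round trips through an API.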

The API Layer: Thinner, Flatter, or Gone

If the data layer is shifting toward files, what happens to APIs?

The Current API Model

Modern software architecture looks roughly like this:

Human → Browser → Frontend → API Gateway → Microservice → Database
AI Agent → Tool Call → API Gateway → Microservice → Database

Both humans and AI agents go through the same API. But the API was designed for the human path. It returns data shaped for UI rendering — paginated, formatted, nested in ways that make frontend development convenient.

AI agents don't need pagination. They don't need UI-friendly formatting. They don't need nested objects that mirror component hierarchies. They need flat, dense, context-rich data in the fewest possible tokens.

What Token-Efficient APIs Look Like

Three patterns are emerging:

1. Context-window-aware endpoints. Instead of returning 50 records per page (designed for a list UI), return a single dense summary with exactly the fields an agent needs. An endpoint like /api/pipeline/summary that returns 200 tokens of structured overview is more valuable to an agent than /api/deals?page=1&limit=50 that returns 3,000 tokens of detailed records.

2. Markdown response format. Some API designers are starting to offer Accept: text/markdown as a response format alongside JSON. The same data, returned as a markdown table instead of a JSON array, is cheaper for agents to consume. This sounds radical until you realize it's just content negotiation — a feature HTTP has supported since 1997.

3. File-based APIs. The most extreme pattern: instead of querying an API, the agent reads a file that the system keeps updated. This is essentially what static site generators do — pre-render the data into a format that's cheap to read. Applied to operational software, it means writing a summary markdown file every time the underlying data changes, so agents read files instead of making API calls.
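Pattern 2 can be sketched without any web framework. A minimal, framework-agnostic handler, assuming a hypothetical deals endpoint (the data and field names are illustrative):

```python
import json

DEALS = [
    {"name": "Acme Corp", "value": "$50K", "stage": "Negotiation"},
    {"name": "Globex", "value": "$20K", "stage": "Proposal"},
]

def render_deals(accept: str) -> tuple[str, str]:
    """Content negotiation: markdown for agents, JSON for frontends."""
    if "text/markdown" in accept:
        rows = [f"| {d['name']} | {d['value']} | {d['stage']} |" for d in DEALS]
        body = "\n".join(["| Deal | Value | Stage |", "|---|---|---|"] + rows)
        return "text/markdown", body
    return "application/json", json.dumps({"deals": DEALS})

ct_md, md_body = render_deals("text/markdown")
ct_js, js_body = render_deals("application/json")
```

Same data, two shapes, one `Accept` header check. In a real server this would hang off the request's `Accept` header; the dispatch logic doesn't change.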

The MCP Pattern

The Model Context Protocol (MCP) is already pushing this direction. MCP servers expose tools that AI agents can call directly — no frontend, no browser, no UI layer. But current MCP implementations still return JSON. The next evolution is MCP servers that are context-window-aware — that know the agent has a limited token budget and optimize their responses accordingly.

Imagine an MCP tool that, instead of returning all 47 fields of a CRM contact, returns a 50-token markdown summary:

**John Smith** — VP Marketing at Acme Corp
Deal: $50K, Negotiation stage, next step: send proposal by Apr 7
Last contact: Apr 2 (email, discussed pricing)

That's everything an agent needs to make a decision. The 47-field JSON object was everything a frontend needed to render a contact page. Different operators, different data shapes.
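A sketch of such a summarizing tool, assuming a wide CRM record with illustrative field names (a real implementation would pick the fields relevant to the agent's current task):

```python
def contact_summary(contact: dict) -> str:
    """Collapse a wide CRM record into a few decision-ready markdown lines."""
    return (
        f"**{contact['name']}** — {contact['title']} at {contact['company']}\n"
        f"Deal: {contact['deal_value']}, {contact['stage']} stage, "
        f"next step: {contact['next_step']}\n"
        f"Last contact: {contact['last_contact']}"
    )

record = {  # imagine 40+ more fields here that only a frontend would render
    "name": "John Smith", "title": "VP Marketing", "company": "Acme Corp",
    "deal_value": "$50K", "stage": "Negotiation",
    "next_step": "send proposal by Apr 7",
    "last_contact": "Apr 2 (email, discussed pricing)",
}
print(contact_summary(record))
```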

The UI Layer: Dashboards Become Read-Only

This is where it gets uncomfortable for the software industry.

The Current Model: UI as Operating Interface

Today's SaaS products are built around their UI. The dashboard isn't just a way to view data — it's the way to operate the system. You update deals by clicking in the CRM. You manage campaigns by navigating the ads platform. You publish content by using the CMS interface.

The UI is the operating layer. And every pixel of it costs engineering time, design time, and maintenance time. A typical B2B SaaS product spends 60–70% of its engineering budget on frontend development — building, maintaining, and iterating on the interface through which humans operate the system.

The Agentic Model: UI as Read-Only Dashboard

When AI agents become the primary operators, the UI's role changes fundamentally. You don't need a form to update a deal if the agent does it via API. You don't need a campaign builder if the agent configures campaigns programmatically. You don't need a content editor if the agent writes and publishes through the CMS API.

What you still need is visibility. Humans want to see what's happening. They want dashboards that show pipeline status, campaign performance, content metrics. But they're reading, not writing. Monitoring, not operating.

This means the UI becomes a read-only dashboard — a visualization layer on top of a system that's operated by agents.

The implications for software companies are enormous:

| Aspect | Current model (UI as operator) | Agentic model (UI as monitor) |
|---|---|---|
| Frontend engineering | 60–70% of eng budget | 20–30% of eng budget |
| UI complexity | Forms, wizards, multi-step flows | Charts, tables, status indicators |
| User interaction model | CRUD operations via clicks | Read-only with exception handling |
| Mobile app necessity | High (operate on the go) | Low (monitor on the go, operate via agent) |
| Accessibility burden | Full (every feature needs ARIA, keyboard nav) | Reduced (mostly data visualization) |

The "Glass Cockpit" Pattern

Aviation went through this transition decades ago. Early cockpits required pilots to manually adjust hundreds of individual controls — fuel mixture, propeller pitch, cowl flaps, trim tabs. Modern glass cockpits display information on screens while autopilot systems handle most operations. The pilot monitors and intervenes when needed.

Software is heading toward the same pattern. The AI agent is the autopilot. The dashboard is the glass cockpit. The human monitors and intervenes for exceptions, strategy, and judgment calls.

This doesn't mean UIs disappear. It means the UIs that survive are the ones optimized for situational awareness, not for data entry.

Software Architecture: Flat, Readable, Context-Dense

Token-efficient software architecture looks fundamentally different from what we build today.

Today's Architecture: Optimized for Machines Talking to Machines

Modern cloud architecture is a stack of abstractions:

Browser → CDN → Load Balancer → API Gateway → Auth Middleware → 
Rate Limiter → Service Mesh → Microservice A → Message Queue → 
Microservice B → Cache → Database → ...

Every layer adds latency, complexity, and operational overhead. But each layer also solves a real problem: scaling, security, reliability, observability.

The question token-efficient architecture asks is: how many of these layers exist because a human is the operator?

The CDN exists because humans experience page load latency. The load balancer exists because thousands of humans hit the frontend simultaneously. The complex auth middleware exists because humans need session management, CSRF protection, and cookie handling. Rate limiting exists because humans (and bots pretending to be humans) make unpredictable request patterns.

An AI agent operating via tool calls doesn't need most of this. It doesn't load pages. It doesn't manage sessions. It doesn't make unpredictable request patterns. It makes structured, authenticated, sequential tool calls.

Tomorrow's Architecture: Optimized for Agents Reading Context

Token-efficient architecture has three principles:

1. Minimize layers between agent and data.

Every middleware layer translates data from one format to another. Each translation costs tokens in the response and complexity in the system. The ideal agent architecture is:

Agent → Authenticated tool call → Data

One layer. Authentication and the data. Everything else — routing, serialization, formatting — is either eliminated or pushed to the edges.

2. Pre-compute context, don't query it.

Instead of making agents run complex queries to assemble context, pre-compute it. Write summary files. Generate status documents. Build indexes. This is the same principle behind search engine indexing — expensive computation happens at write time so that reads are cheap.

A system that updates a STATUS.md file every time something changes gives agents instant, cheap context. A system that requires three API calls to assemble the same context charges the agent 3x the tokens every time.

3. Make data human-readable and agent-readable simultaneously.

Markdown is uniquely positioned here. It's the only common format that is simultaneously:

  • Efficient for LLM tokenizers (trained on markdown)
  • Readable by humans without any tooling
  • Editable by humans in any text editor
  • Version-controllable with git
  • Parseable by AI agents in a single read

JSON is agent-readable but human-hostile. HTML is human-rendered but agent-expensive. SQL is query-powerful but requires middleware. Markdown is the intersection of all three requirements.

SaaS Pricing: The Per-Seat Model Dies

This is the business model implication of token-efficient thinking, and it's the one that should worry every SaaS company.

Why Per-Seat Pricing Exists

SaaS companies charge per seat because:

  1. Each human user requires a UI — which costs engineering and infrastructure
  2. Usage roughly scales with users — more seats means more API calls, more data, more support
  3. It's legible — buyers understand "10 users × $50/month"

Why Per-Seat Breaks in the Agentic Era

When AI agents operate your software:

  • One agent can do the work of 5–10 human operators
  • The agent doesn't need a UI — it needs API access
  • Usage doesn't scale with seats — it scales with operations

A company that previously needed 10 HubSpot seats ($500/month) can now run the same operations through one AI agent with API access. HubSpot still provides the same value — the CRM, the data, the infrastructure. But the pricing model doesn't capture it because the value isn't delivered through seats anymore.

What Replaces Per-Seat

Three pricing models are emerging:

1. Per-operation pricing. Charge for what the system does, not who uses it. Stripe already does this — they charge per transaction, not per user. As AI agents become the primary operators, more SaaS will move to operation-based pricing.

2. Per-token or per-API-call pricing. Charge for the data consumed. This directly incentivizes token-efficient system design — the cheaper your system is for agents to operate, the more competitive you are.

3. Platform fees. Charge a flat fee for access to the infrastructure, regardless of how it's operated. This is the model that benefits token-efficient architectures most — a flat fee for data access means the cheapest agent wins.

The SaaS companies that adapt pricing fastest will thrive. The ones that cling to per-seat in a world where agents replace seats will watch their revenue contract as customers consolidate 10 seats into 1 API connection.

Deployment: Simpler, Lighter, Fewer Moving Parts

Token-efficient systems are easier to deploy because they have fewer components.

The Current Deployment Stack

A modern SaaS application requires:

  • Frontend hosting (Vercel, Netlify, AWS CloudFront)
  • API servers (containers, serverless functions)
  • Database (managed Postgres, MySQL, MongoDB)
  • Cache layer (Redis, Memcached)
  • Background job processors (queues, workers)
  • Auth service (Auth0, Clerk, custom)
  • Monitoring and logging (Datadog, Sentry)

That's seven categories of infrastructure for a single application. Each one needs configuration, monitoring, scaling, and maintenance.

The Token-Efficient Deployment Stack

A token-efficient system built for agent operation can often be:

  • A git repository with structured markdown files (data layer)
  • A lightweight API for agent authentication and access control
  • A read-only dashboard for human monitoring (optional)
  • A CI/CD pipeline that validates changes (optional)

That's one to four components instead of seven. The complexity reduction isn't marginal — it's structural.

This doesn't mean every application can be reduced to a git repo. But it means the question changes from "what infrastructure do I need to build this product?" to "what's the minimum infrastructure that lets agents operate on this data?"

Version Control as a Feature, Not a Bolt-On

When data lives in files, you get version control for free. Every change is a git commit. Every state transition is a diff. Every rollback is a git revert.

Database systems require custom migration tooling, audit tables, and temporal queries to achieve what git provides natively. For AI agent operations — where you want to track what the agent changed, when, and why — git is a perfect audit log.

What This Means: Software Built for Readers, Not Clickers

The throughline of this entire article is a single observation: software has been designed for humans who click, but the next generation of software will be designed for agents who read.

This reversal touches every decision a software builder makes:

Data format

Before: Choose the format that's easiest to query (SQL) or render (JSON). After: Choose the format that's cheapest to read and richest in context per token (markdown, structured plaintext).

API design

Before: Design endpoints around UI needs — paginated lists, nested objects, CRUD operations. After: Design endpoints around context needs — dense summaries, flat structures, pre-computed state.

UI architecture

Before: Build the UI as the primary operating interface — forms, wizards, multi-step flows. After: Build the UI as a monitoring dashboard — charts, status indicators, exception alerts.

Infrastructure

Before: Scale for concurrent human users — load balancers, CDNs, session management. After: Scale for sequential agent operations — API throughput, token-efficient responses, pre-computed context.

Pricing

Before: Charge per seat — each human operator is a revenue unit. After: Charge per operation or per data access — each agent action is a revenue unit.

Version control

Before: Version control is for code. Data lives in databases with migration tooling. After: Version control is for everything. Data-as-files means git tracks all state changes.

The Transition Period: Hybrid Systems

We're not jumping from GUI-first to agent-first overnight. The transition will produce hybrid systems:

  • Databases with markdown exports. Systems that maintain a relational database as the source of truth but generate markdown summaries that agents read. The database handles scale and consistency; the markdown handles token efficiency.
  • APIs with dual response formats. Endpoints that return JSON for frontends and markdown for agents. Content negotiation via Accept headers — a 29-year-old HTTP feature finally finding its purpose.
  • Dashboards with agent operation logs. UIs that show both human-readable dashboards and a feed of what agents are doing. The human monitors; the agent operates.
  • Git-backed operational systems. Project management, CRM, and task systems where the underlying data is files in a repository, with a web UI rendering them as dashboards. Changes can come from the UI (human) or from file edits (agent).

These hybrid patterns let organizations adopt token-efficient design incrementally, without abandoning existing infrastructure.
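The first hybrid pattern, a database as source of truth with markdown exports, fits in a page of stdlib Python. The schema and formatting here are illustrative; in production the export would run on a write trigger or a schedule:

```python
import sqlite3

def export_pipeline_markdown(conn: sqlite3.Connection) -> str:
    """Render the deals table as a markdown summary agents can read in one pass."""
    rows = conn.execute(
        "SELECT name, value, stage FROM deals ORDER BY value DESC"
    ).fetchall()
    lines = ["## Active Pipeline", ""]
    lines += [f"- {name}: ${value:,} ({stage})" for name, value, stage in rows]
    return "\n".join(lines)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE deals (name TEXT, value INTEGER, stage TEXT)")
conn.executemany("INSERT INTO deals VALUES (?, ?, ?)", [
    ("Acme Corp", 50000, "Negotiation"),
    ("Globex", 20000, "Proposal"),
])
pipeline_md = export_pipeline_markdown(conn)
print(pipeline_md)
```

The database keeps transactional writes and relational queries; the exported file is what the agent actually reads.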

The Objection: "This Doesn't Scale"

The strongest objection to file-first, token-efficient architecture is scale. And it's partly valid.

At 10 million records, you need a database. No filesystem structure handles that gracefully. At 10,000 concurrent writers, you need transactional guarantees. Markdown files can't provide row-level locking.

But here's the counter: most software doesn't operate at that scale. The vast majority of B2B applications serve companies with dozens to thousands of active records, not millions. The vast majority of operational workflows involve single-digit concurrent writers, not thousands.

The software industry has been building for Silicon Valley scale and deploying to small-business reality. A 50-person manufacturing company doesn't need a distributed database with eventual consistency. They need a system their AI agent can read cheaply and their team can monitor visually.

Token-efficient architecture isn't for everyone. But it's for more companies than you'd think — specifically, it's for the 95% of businesses that were never going to hit database scale limitations anyway.

FAQ

What is token-efficient software?

Token-efficient software is software designed to minimize the number of LLM tokens required for AI agents to discover, read, and operate on its data. This includes choosing data formats that compress well for LLM tokenizers (like markdown over JSON), reducing middleware layers between agents and data, and pre-computing context so agents don't need multiple queries to understand the system state.

Why does token efficiency matter?

Because AI agents are becoming the primary operators of software. Every token an agent spends reading your data costs money and consumes context window space. At scale — across hundreds of agent operations per day — the difference between a token-efficient system (300–800 tokens per read) and a token-inefficient one (1,000–2,000 tokens per read) translates to real cost differences and capability differences. Agents with more remaining context window make better decisions.

Does this mean databases are obsolete?

No. Databases remain the right choice for large-scale data (millions of records), transactional consistency requirements, complex relational queries across large datasets, and high-concurrency write scenarios. Token-efficient architecture argues that for operational data at typical business scale (dozens to thousands of records), file-based storage is more efficient for AI agents to operate on.

What's wrong with JSON for AI agents?

JSON carries structural overhead — curly braces, quotes around keys, commas, brackets, type markers — that communicates structure to parsers but wastes tokens for LLMs. A markdown table conveys the same tabular data in roughly 40% fewer tokens. JSON is the right format for machine-to-machine communication. Markdown is a better format for machine-to-LLM communication.

How does this affect SaaS pricing?

Per-seat pricing breaks when AI agents replace human operators. One agent with API access can do the work of 5–10 human users, collapsing seat count without reducing the value the software provides. SaaS companies will need to shift toward per-operation, per-API-call, or platform-fee pricing models that capture value based on what the system does, not how many humans log in.

Can I adopt token-efficient architecture incrementally?

Yes. The most practical starting point is hybrid systems: keep your database as the source of truth, but generate markdown summaries that AI agents read. This gives you the scale and consistency of a database with the token efficiency of files. Over time, as agent operations become more central, you can shift more of the operational layer toward file-based systems.

What does token-efficient UI design look like?

Read-only dashboards optimized for monitoring, not operating. Charts, status indicators, and exception alerts instead of forms, wizards, and multi-step flows. The UI becomes a "glass cockpit" — a situational awareness tool for humans, while AI agents handle the actual operations through APIs or file system interactions.

Is this only relevant for AI-native companies?

No. Any company that uses AI agents to interact with its software systems benefits from token-efficient design. As AI coding assistants, operational agents, and automation tools become standard across industries, the companies whose systems are cheapest for agents to operate will have a structural cost advantage.

How does version control fit into this?

When data lives in files rather than databases, you get version control (git) for free. Every data change is a commit. Every state transition is a diff. This provides an automatic audit trail, easy rollbacks, and a complete history of what changed, when, and why — capabilities that databases require custom tooling to approximate.

Where can I learn more about building agent-first systems?

Start by examining your current token costs. If you use Claude Code, Cursor, or similar tools, pay attention to how many tool calls and tokens each operation requires. Map your most common agent workflows and identify where middleware layers are adding unnecessary token overhead. The highest-impact change is usually at the data layer — moving frequently-read operational data into structured markdown files that agents can consume in a single read.

Further Reading

  • [AI Agents Don't Need UIs: Why CLI-First Tools Are the Future of Marketing Ops](/blog/cli-first-marketing-ops)
  • [The Claude Code GTM Stack: Running an Entire Go-to-Market Operation from Your Terminal](/blog/claude-code-gtm-stack)
  • [Building an AI-Native Agency: Architecture, Operations, and Lessons](/blog/building-ai-native-agency)
  • [Agent-Led Growth: The GTM Operating Model for the Next Five Years](/blog/agent-led-growth)

*Pascal is the founder of [Ryzo](https://ryzo.nl), where he builds agentic systems that replace expensive SaaS stacks for European SMBs. His daily operations run through Claude Code — optimized for token efficiency, not dashboard aesthetics. Before founding Ryzo, he spent a decade building B2B go-to-market systems across Europe. You can reach him on [LinkedIn](https://linkedin.com/in/pascalbrouwers).*