March 25, 2026

I Simulated My Market Launch Before Writing a Single Line of Code – Here’s What AI (MiroFish) Told Me

Most founders test ideas by calling a few friends, sending a survey to their Twitter/X followers, or shipping an MVP and hoping for the best.
I did something different: I ran a market launch simulation with 44 AI agents, each representing a realistic user persona, and watched them react to my product in real time. I didn’t build this from scratch. I used MiroFish, an open-source multi-agent simulation engine that turns documents into a digital parallel world of AI agents.

This wasn’t a thought experiment. It was a social simulation driven by real market research, graph-based knowledge, and multiple AI models – including local models on my MacBook Air, Gemini 2, and GPT‑4.
Here’s what I learned, why I think AI market simulation for startups is about to be a standard tool, and how you can do it before writing a single line of code.

The Problem With Traditional Market Research

Traditional market validation is still mostly manual, slow, and biased.

  • Surveys are biased. People tell you what sounds smart or polite, not what they’ll actually do.
  • User interviews are tiny samples. Ten calls can give insight, but not a sense of market dynamics.
  • You can’t see network effects. Static interviews don’t show how ideas spread (or die) in communities.
  • Feedback comes too late. By the time you realize you built the wrong thing, you’ve already spent months of dev time.

I’ve done the usual discovery calls and interviews. They’re still useful. But they don’t answer questions like:
“What happens when skeptical managers, early‑adopter developers, and budget‑conscious founders talk to each other about my product?”

That’s where AI social simulation comes in.

What Is AI Social Simulation?

AI social simulation is basically agent-based modeling plus modern LLMs.
Instead of static personas in a slide deck, you simulate a small society of agents who talk, disagree, influence each other, and change their minds over time.

In my setup:

  • A graph-based knowledge engine ingests a market research brief and builds a network of entities, topics, and relationships.
  • AI agents are assigned personas: each with beliefs, behaviors, geographies, professions, and motivations.
  • These agents interact across simulated social platforms (Twitter/X‑style feeds, Reddit‑like threads, private chats).
  • You observe emergent behavior: who adopts early, who objects loudly, what spreads, and where interest dies.

If you’ve read my earlier posts on building multi-agent systems with OpenClaw and Paperclip, this is a natural extension of that work — but focused on market behavior instead of engineering workflows.

You’re not just asking, “Would you use this?”
You’re watching a small synthetic market argue about your launch.

In my case, I used MiroFish as the core engine.
You feed it seed material (in my case, a market research brief), it builds a knowledge graph with Zep, generates agent personas, and then runs a social simulation where those agents post, reply, argue, and shift opinions over time.
Under the hood, it uses the OASIS framework from CAMEL-AI to model social actions like posting, commenting, following, and reposting.
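
To make that loop concrete, here’s a minimal sketch of a persona-conditioned agent turn. This is illustrative Python, not MiroFish’s or OASIS’s actual API; the persona fields mirror the ones described above, but every name in the snippet is my own invention.

```python
from dataclasses import dataclass

from openai import OpenAI  # any chat-completion client works; swap in your own

client = OpenAI()  # reads OPENAI_API_KEY from the environment

@dataclass
class Persona:
    name: str
    role: str          # e.g. "indie dev", "enterprise PM"
    geography: str     # e.g. "US", "EU", "Global South"
    motivation: str    # e.g. "speed", "cost", "control"
    risk_profile: str  # "early adopter" vs "conservative"
    stance: str        # current opinion; can shift between rounds

def agent_turn(persona: Persona, feed: list[str]) -> str:
    """One social action: the agent reads recent posts and replies in character."""
    system = (
        f"You are {persona.name}, a {persona.risk_profile} {persona.role} "
        f"in {persona.geography}, motivated mainly by {persona.motivation}. "
        f"Your current stance on the product: {persona.stance}. "
        "React to the feed below in 1-3 sentences, in character. "
        "You may agree, object, ignore, or change your stance if persuaded."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": "\n".join(feed[-10:])},  # only recent posts
        ],
    )
    return resp.choices[0].message.content
```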

[Image: MiroFish]

My Setup — What It Actually Took

The whole flow ran on top of MiroFish, which handles the GraphRAG build, agent generation, simulation runs, and the reporting layer.

[Image: MiroFish agent personas]

Here’s the concrete workflow I used to simulate product launch with AI:

  1. Seed document: my market brief
    I wrote a 6–8 page document: target users, jobs‑to‑be‑done, positioning, pricing assumptions, and competitive landscape. This became the “source of truth” for the simulation.
  2. GraphRAG build with Zep
    I fed the document into a GraphRAG pipeline via MiroFish, which uses Zep to extract entities and relationships and build a knowledge graph.
    For a reasonably rich doc, the build took around 7–10 minutes.
  3. Persona generation: 44 distinct agents
    From that graph, MiroFish generated 44 agents, each with:
    • Role (e.g., founder, PM, indie dev, agency owner)
    • Geography (US, EU, Global South, etc.)
    • Motivation (speed, stability, cost, control, etc.)
    • Risk profile (early adopter vs conservative)
  4. Running the simulation: 10 rounds
    I dropped a launch‑style announcement into the environment and let agents interact over 10 rounds (a stripped‑down version of this loop is sketched in code right after this list).
    They could:
    • Comment publicly
    • Reply to each other
    • Share, ignore, or push back
    • Change their stance between rounds
  5. Model choices and cost
    I experimented with multiple models:
    • A local 12B model via LM Studio on my MacBook Air
    • Gemini 2 for fast, low-latency turns
    • GPT‑4 for deeper synthesis and reporting
    The cost was surprisingly low: even with cloud APIs, a full run was around $0.01–$0.03 per simulation, which is negligible compared to any real-world campaign spend or dev sprint.
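
Here’s the round loop from step 4, stripped to its skeleton. Again, this is an illustrative sketch reusing the `Persona` and `agent_turn` names from the earlier snippet; MiroFish’s real loop also models following and reposting via OASIS.

```python
import random

def run_simulation(personas: list[Persona], announcement: str, rounds: int = 10) -> list[str]:
    """Seed the feed with a launch post, then let every agent react each round."""
    feed = [f"LAUNCH: {announcement}"]
    for round_no in range(1, rounds + 1):
        random.shuffle(personas)  # so no single persona always anchors the discussion
        for p in personas:
            post = agent_turn(p, feed)
            feed.append(f"[round {round_no}] {p.name} ({p.role}): {post}")
    return feed
```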

This is in the same “DIY but serious” spirit as my earlier self‑hosted AI agent setup on AWS — except this time, the “product” is a synthetic market instead of an engineering team.

My First Attempt: LM Studio on a MacBook Air

Because I’m me, I started by going local‑first.
I used LM Studio with Qwen2.5 Coder 12B on my MacBook Air.

A few observations:

  • It worked, but my MacBook Air started to crawl: the fans spun up and everything slowed down.
  • The agent conversations looked okay at a distance, but something felt off.
  • Reasoning was often shallow and repetitive, and skeptical personas didn’t push back as much as they should.

The main issue wasn’t just speed. It was fidelity.
The simulation didn’t feel like a messy, opinionated real world — it felt like 44 slightly‑different autocomplete bots.

That’s when I realized: if I want this to actually influence roadmap decisions, I can’t compromise too much on model quality.
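
For what it’s worth, the local attempt needed almost no special code: LM Studio serves an OpenAI-compatible API on localhost (port 1234 by default), so the same client works with a base-URL swap. The model identifier below is just whatever name LM Studio shows for your loaded model:

```python
from openai import OpenAI

# LM Studio's local server speaks the OpenAI API; no real key is needed,
# but the client requires a non-empty string.
local = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

resp = local.chat.completions.create(
    model="qwen2.5-coder-12b",  # use the identifier LM Studio shows for your model
    messages=[{"role": "user", "content": "React to this launch post as a skeptical PM."}],
)
print(resp.choices[0].message.content)
```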

Switching to Gemini 2 and GPT‑4

Next, I switched to Gemini 2 and GPT‑4 via API.

I kept the same setup:

  • Same seed document
  • Same persona generation logic
  • Same 10‑round simulation structure

Two things changed immediately:

  1. Agent behavior felt more human.
    • Agents formed coherent, persistent opinions across rounds.
    • Skeptical personas stayed skeptical with better arguments.
    • Some “users” changed their mind after reading others’ comments — something that rarely emerged in the local 12B run.
  2. The objections became sharper.
    • Instead of generic “I’m not sure this will work,” I saw objections like:
      • “This adds another tool to our already messy stack.”
      • “We tried something similar last year; adoption died after the champion left.”
    • These are the kinds of objections real users bring up in proper discovery calls.

In other words, better models didn’t just improve the writing — they upgraded the simulation’s psychology.

That’s when I concluded: for this use case, investing in better AI models is not a luxury; it’s part of the research budget.
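
For the curious, the routing I ended up with is simple: a fast Gemini model for the high-volume agent turns, GPT‑4 for the one-off synthesis pass. A sketch, where the exact model IDs and the split are my choices rather than anything MiroFish prescribes:

```python
import os

import google.generativeai as genai
from openai import OpenAI

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
fast_model = genai.GenerativeModel("gemini-2.0-flash")  # the cheap, low-latency tier
openai_client = OpenAI()  # reads OPENAI_API_KEY

def agent_turn_fast(prompt: str) -> str:
    """High-volume path: 44 agents x 10 rounds of short, in-character posts."""
    return fast_model.generate_content(prompt).text

def synthesize_report(transcript: str) -> str:
    """Low-volume path: one deep pass over the full transcript."""
    resp = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Summarize adoption patterns, objections, and stance shifts "
                       "in this simulated launch discussion:\n" + transcript,
        }],
    )
    return resp.choices[0].message.content
```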

What the Simulation Revealed

Without naming the product, here are some of the key insights the AI social simulation tool surfaced:

  • Segmented adoption curves
    • Indie developers and solo founders adopted immediately.
    • Mid‑level managers and enterprise stakeholders hesitated, wanting proof of team-wide adoption, not just individual productivity gains.
  • A dominant, unexpected objection
    The most common objection wasn’t price or features. It was “team adoption friction”:
    • “This is great for power users, but how do I get the rest of my team to care?”
      This pushed me to think beyond “feature set” and more about onboarding, training material, and internal champions.
  • Geographic behavior differences
    • Global South users were more optimistic and willing to experiment as long as cost stayed sane.
    • US and EU users wanted case studies, benchmarks, or signals that “other teams like us already use this.”
  • Word-of-mouth patterns (see the sketch after this section)
    • Two highly connected agents (community builders) drove a disproportionate amount of awareness.
    • When they were neutral, adoption slowed; when they turned positive, a noticeable chunk of the network followed.
  • One insight that would have taken months to learn
    If I had gone straight to market, I probably would’ve spent a quarter optimizing features and pricing.
    The simulation made it clear that internal sell‑through and social proof might matter more than yet another power feature.

That’s the kind of thing traditional surveys almost never tell you explicitly.
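
That word-of-mouth pattern is also checkable rather than vibes: if you export who-replied-to-whom pairs from the run, a few lines of networkx surface the hub agents. A sketch, assuming you have those interaction pairs (the agent names here are placeholders):

```python
import networkx as nx

# interactions: (author, replied_to) pairs exported from the simulation transcript
interactions = [
    ("indie_dev_03", "community_builder_01"),
    ("pm_07", "community_builder_01"),
    # ... one pair per reply in the run
]

G = nx.DiGraph()
G.add_edges_from(interactions)

# Degree centrality: the share of the network each agent directly touches.
centrality = nx.degree_centrality(G)
hubs = sorted(centrality, key=centrality.get, reverse=True)[:5]
print("Most connected agents:", hubs)
```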

[Image: MiroFish simulation report]

Why Model Quality Changes Everything (For Real)

Let me be blunt:
Running this with a small local model vs with Gemini 2 + GPT‑4 felt like the difference between tabletop role-play and a real focus group.

  • With the local 12B model:
    • Agents were flatter, less skeptical, too agreeable.
    • Conversations often repeated the same points with minor wording changes.
  • With Gemini 2 + GPT‑4:
    • Agents had more consistent personalities and deeper reasoning.
    • Objections stung a bit because they sounded eerily like human stakeholders I’ve actually met.

This convinced me of a simple principle:

For strategic decisions, cheap ≠ good enough.

Yes, better models cost more per token.
But if you’re a founder, PM, or investor, you’re already spending far more money and time on things that may never work.

  • Burning a few dollars on higher‑fidelity AI agent simulation is trivial compared to a single sprint of engineering time.
  • The upside is insight into the future: how different user segments, geographies, and roles might react before you commit to a build.

I don’t see this as “API cost.”
I see it as buying a small time machine.

Who Should Be Doing This

From what I’ve seen, pre‑launch market research AI like this makes sense for:

  • Pre‑launch founders
    Validate positioning, pricing narratives, and messaging before you start coding.
  • Product managers
    Stress‑test feature rollouts and change‑management narratives across different stakeholder personas.
  • Growth and marketing teams
    Model how a campaign plays out across geographies, communities, and influence networks.
  • Investors and advisors
    Use AI agent simulation for business to pressure‑test market sizing claims and go‑to‑market stories.

If your work depends on predicting how groups of people will respond, simulating the social layer first is an increasingly sane move.

How This Fits Into My Broader Agent Work

If you’ve read my previous posts (the multi-agent systems built with OpenClaw and Paperclip, and the self-hosted AI agent setup on AWS), this market simulation is the next chapter.

There, I used agents to ship code and infrastructure.
Here, I’m using agents to simulate users and markets.

The common thread:
I don’t want AI as a demo. I want it as a teammate that reduces my uncertainty — whether that’s in engineering, infrastructure, or market validation.

Simulation Is the New Unfair Advantage

The cost barrier for doing this is basically gone.

  • A full run costs roughly what you’d pay for a coffee.
  • The setup takes an afternoon, not a quarter.
  • You’re not replacing talking to real users — you’re front‑loading the insight so that your real conversations are sharper and better targeted.

For me, AI market simulation for startups is becoming a standard part of how I think about new products.
I still talk to humans. I still ship MVPs.
But now, I also ask: “What does my synthetic market think?” before I invest serious time and money.


Want to run your own simulation?
I’ve put together a full setup guide (tools, prompts, and architecture) in a separate post so you can replicate this without reinventing the wheel.

Frequently Asked Questions

What is AI market simulation in this context?

AI market simulation here means using a swarm of AI agents, each with a realistic persona, to model how real users might react to a product launch, pricing change, or campaign before you actually ship it.

What is MiroFish and why did you choose it?

MiroFish is an open‑source multi‑agent simulation engine that turns documents into a digital world of AI agents who debate, share, and change opinions over time. I chose it because it combines GraphRAG, social interaction (via OASIS), and LLMs into one pipeline that is designed specifically for prediction and scenario testing.

How many agents did you use in your simulation and why?

For this startup launch experiment, I used 44 agents, each mapped to a specific persona cluster (role, geography, motivation, risk profile), so the simulation stays rich but still runs in under 20 minutes.

Can AI market simulation replace real user interviews or surveys?

No. It is a front‑loaded layer, not a replacement. The goal is to surface likely objections, adoption patterns, and narrative risks early, so that real user interviews and surveys are more focused and higher quality.

How accurate are these simulations in predicting real‑world behavior?

They are not crystal balls, but they are very good at exposing plausible dynamics: who is likely to resist, which narratives might dominate, and how different segments or geographies may diverge. Treat them as decision support, not truth.

Do I need to be a machine learning engineer to use this?

No. You do need to be comfortable with basic dev tooling (APIs, environment variables, running a small web app), but most of the heavy lifting (graph building, agent generation, and simulation) is handled by MiroFish.
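
As a taste of the “basic dev tooling” level involved, the environment-variable part amounts to about this much. The variable names below are just the conventional ones; the setup guide covers the exact config MiroFish expects:

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # pulls keys from a local .env file into the environment

for key in ("OPENAI_API_KEY", "GOOGLE_API_KEY"):
    assert os.environ.get(key), f"Missing {key} in your .env"
```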

Where can I find your full setup guide and configuration?

I’ve published a separate MiroFish setup guide that covers the full stack (MiroFish, Zep, GPT‑4o‑mini, Gemini), architecture diagram, prompts, .env config, and cost breakdown, linked from the “Want to run your own simulation?” section in this post.

Categories: Technical, Leadership

Written by Sanjay Shankar

Engineering Leader & Creative Technologist.
