SYNTHETIC SIMULATIONS
Pre-launch behavioral intelligence for product & marketing teams
Research Phase
Predict user behavior with agentic simulation

Know how your users will behave — before you ship a single thing.

According to research from Harvard Business School, 95% of new products fail — not from poor execution, but from poor pre-launch understanding of the customer. We are building a platform that will convert your real behavioral data into thousands of AI-powered agents that experience your ads, products, and content in a simulated environment — delivering directional signals at a fraction of the cost and time of traditional testing.

The Problem

Every launch is a blind bet

A/B testing burns real users, requires traffic you don't have pre-launch, and takes weeks to reach significance. Focus groups cost $15,000–$75,000, take 4–6 weeks, and suffer from observer bias. Analytics are retrospective — they tell you what happened, not what will. Surveys capture stated preference, not revealed behavior.

The Solution

Simulate before you commit

We are building a system that creates digital twins from your actual behavioral data, runs them through your content or product in a simulated social environment at society scale (10,000+ agents), and delivers qualitative + quantitative predictions. Think of it as a weather forecast for user behavior — directional, probabilistic, and increasingly accurate the more domain data it has.

r = 0.85

Human–Simulation Correlation

Across 70 nationally representative U.S. survey experiments, AI-generated persona responses correlated with actual human treatment effects at r = 0.85.

Hewitt et al., 2024, building on Argyle et al., 2023
85%

Test-Retest Validity

1,052 interview-grounded agents replicated real participants' survey responses 85% as accurately as individuals replicate their own answers over two weeks.

Park et al., 2024/2025 — Stanford Generative Agents
< 0.2

RMSE in Social Dynamics

Tested against 198 real-world information-propagation cases, OASIS replicated observed spreading dynamics with a mean normalized RMSE under 0.2 — including group polarization and herd behavior.

Yang et al., 2024 — OASIS, arXiv:2411.11581
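For readers unfamiliar with the metric: normalized RMSE measures how far a simulated trajectory drifts from the observed one, relative to the observed signal's overall scale. A minimal sketch, assuming the common range-normalized formulation (the OASIS paper's exact normalization may differ):

```python
import math

def normalized_rmse(predicted, observed):
    """Root-mean-square error divided by the observed range.

    Values well below 0.2 mean the simulated curve tracks the
    real-world trajectory closely relative to its overall scale.
    """
    assert len(predicted) == len(observed) and len(observed) > 1
    mse = sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed)
    spread = max(observed) - min(observed)
    return math.sqrt(mse) / spread

# Toy example: simulated share-count curve vs. an observed one
observed  = [0, 40, 130, 260, 310, 330]
predicted = [0, 55, 120, 240, 300, 340]
print(round(normalized_rmse(predicted, observed), 3))  # → 0.038
```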
10K+

Scale Threshold

Critical group dynamics (virality, polarization, herd behavior, social amplification) only emerge reliably at ≥10,000 agents; below that threshold, network effects fail to materialize. Our simulations default to 10,000+ agents.

OASIS validation research
Principle 01

Fine-tune on your data, not generic personas

Off-the-shelf LLM personas reflect the modal internet user — not your customers. Our twin construction pipeline is designed to convert your behavioral logs, engagement history, and demographic signals into dynamic behavioral priors fine-tuned on traces of actual decisions your users made.

Principle 02

Run at society scale

The research is consistent: critical group dynamics only emerge reliably at ≥10,000 agents. Our target architecture defaults to 10,000 agents, with OASIS infrastructure supporting up to 1 million, capturing the network effects that determine whether a campaign spreads or a feature gets adopted organically.

Principle 03

Measure group behavior, not individual prediction

We do not predict what any one user will do. We surface directional, segment-level signals: which segments engage, where drop-off concentrates, which variant resonates stronger, how content propagates — and the qualitative reasoning behind why.
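As a toy illustration of what "segment-level signals" means in practice (the segment and action names here are invented for the example), aggregating simulated actions by segment rather than by individual looks like:

```python
from collections import Counter, defaultdict

# Toy simulated action log: (segment, action) pairs -- names illustrative.
actions = [
    ("gen_z", "like"), ("gen_z", "share"), ("gen_z", "scroll_past"),
    ("millennial", "scroll_past"), ("millennial", "scroll_past"),
    ("millennial", "like"),
]

by_segment = defaultdict(Counter)
for segment, action in actions:
    by_segment[segment][action] += 1

# Directional signal: engagement share per segment, not per-user prediction.
for segment, counts in sorted(by_segment.items()):
    engaged = counts["like"] + counts["share"]
    print(segment, round(engaged / sum(counts.values()), 2))
# gen_z 0.67, millennial 0.33
```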

01

Connect Your Data

Behavioral logs, engagement history, CRM signals, and demographic data via our ingestion API or secure file upload.
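To make the expected data shapes concrete, here is an illustrative event payload; every field name and the overall structure are hypothetical, not a documented API schema:

```python
import json

# Hypothetical ingestion payload -- field names are illustrative only.
payload = {
    "source": "crm_export",
    "events": [
        {
            "user_id": "u_1842",
            "event": "add_to_cart",
            "ts": "2025-06-01T14:02:11Z",
            "properties": {"sku": "SKU-221", "price_usd": 49.0},
        },
    ],
    "demographics": {"u_1842": {"age_band": "25-34", "region": "US-West"}},
}

# Serialize for API upload (or write to a file for secure file upload).
body = json.dumps(payload)
```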

02

Build the Twins

Our pipeline will cluster your users into behavioral archetypes, initialize LLM agents with those profiles, and optionally fine-tune on your domain-specific action traces.
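The clustering step can be sketched with a toy k-means over two behavioral features; this stand-in algorithm and the features are purely illustrative, not the production pipeline:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: group users into behavioral archetypes.

    Illustrative stand-in for the archetype-clustering step; a real
    pipeline would cluster on far richer behavioral-log features.
    """
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each user to the nearest centroid (squared distance).
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Recompute each centroid as the mean of its cluster.
        centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Toy features per user: (sessions/week, avg. session minutes)
users = [(1, 2), (2, 3), (1.5, 2.5), (9, 30), (10, 28), (11, 33)]
centroids, clusters = kmeans(users, k=2)
```

The two resulting centroids separate a low-engagement archetype from a high-engagement one; each archetype then seeds an LLM agent profile.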

03

Define the Scenario

Upload the ad creative, product flow, or content piece. Configure platform context, social network topology, and recommendation system weight.
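A scenario definition covering those knobs might look like the following; the field names and values are invented for illustration, not a documented configuration schema:

```python
# Hypothetical scenario configuration -- field names are illustrative only.
scenario = {
    "stimulus": {
        "type": "ad_creative",
        "asset": "variant_b.png",
        "copy": "Launch week: 20% off annual plans",
    },
    "platform": "twitter_like",           # simulated social context
    "network": {"topology": "scale_free", "agents": 10_000},
    "recsys": {
        "interest_weight": 0.7,           # interest-based feed ranking
        "social_weight": 0.3,             # follow-graph amplification
    },
    "measure": ["engagement_rate", "drop_off", "share_depth"],
}
```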

04

Run the Simulation

10,000+ agents will encounter your stimulus inside an OASIS environment, with a realistic dual recommendation system shaping their feeds and 21 human-like action types available to each agent.

05

Get the Report

Segment-level signals, drop-off analysis, variant comparison, virality scores, and agent verbatims explaining why.

10K+
Agents per simulation (target)
1M
Max agents (OASIS capacity)
21
Human-like action types (OASIS)
📣

Ad & Campaign Pre-Flight

Which creative variant resonates with which segment — before production spend. Test five concepts in the time it currently takes to brief one focus group.

🧩

Product Feature Validation

Where do users drop off, get confused, or disengage — before engineering commits? Surface friction points while they're still cheap to fix.

✍️

Content Strategy

Which topics, formats, and angles will drive engagement and spread? Use social graph dynamics to predict whether content will propagate virally or quietly die.

💰

Pricing & Messaging Tests

How do different segments respond to price framing, benefit emphasis, or urgency messaging? Run behavioral experiments at scale in minutes, not months.

This is
  • A pre-launch risk filter that raises confidence
  • Strong at group-level and directional signals
  • Calibrated to your behavioral data
  • Faster and cheaper than focus groups or A/B tests
  • Increasingly accurate with more of your data
  • Social-dynamics-aware (virality, herd effects)
This isn't
  • A replacement for real-world testing
  • A predictor of individual sequential behavior
  • Accurate out-of-the-box for all use cases
  • A guaranteed outcome
  • A general-purpose market research platform
  • A survey or polling tool
Built on
  • OASIS (Oxford / Shanghai AI Lab / CAMEL-AI / HKU / Max Planck)
  • Up to 1M Concurrent Agents
  • Dual Recommendation System
  • Proprietary Twin Construction
  • Domain Fine-Tuning

Simulation is a weather forecast, not a crystal ball — directional, probabilistic, and increasingly accurate the more domain data it has. We are honest about what it can and cannot do.

— Our philosophy