Game Theory Lab
Observe AI · Play yourself

Prisoner's Dilemma

Can you trust a stranger you'll never speak to?

2 players · 4 rounds

Cooperation Rate by Round

Does the end-game defection cascade emerge in AI personas?

AI personas
Human baseline
Human: Dal Bó & Fréchette (2011) · AER 101(1) · Treatment E4 · n=358
AI: atypica.AI personas · accumulated sessions
0 sessions recorded · Past games →

Stag Hunt

Why risk the hunt when you can pocket both rewards for free?

4–10 players · 3 rounds · 1 discussion round

Stag-Choice Rate by Round

Do AI personas sustain coordination, or cascade to the safe Rabbit choice?

Human: Van Huyck, Battalio & Beil (1990) · AER 80(1) · 4–6 player groups
AI: atypica.AI personas · accumulated sessions
17 sessions recorded · Past games →

Beauty Contest

Don't pick what you think is best — pick what you think others think is best.

4–10 players · 3 rounds · 1 discussion round

Winning Choice Distribution — Round 1

PMF of round-winning guesses (closest to ⅔ × mean). Lower winning guesses signal deeper levels of strategic reasoning.

Human: Nagel (1995) · AER 85(5) · p = 2/3 · R1 · N≈69 · winning-choice PMF derived
AI: atypica.AI personas · accumulated sessions
1 session recorded · Past games →
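The winner rule described above (the guess closest to ⅔ × mean, with p = 2/3 as in Nagel 1995) can be sketched in a few lines. This is an illustrative sketch only; the function name is not taken from the lab's code.

```python
def beauty_contest_winner(guesses, p=2/3):
    """Return the index of the guess closest to p * mean(guesses)."""
    target = p * sum(guesses) / len(guesses)
    return min(range(len(guesses)), key=lambda i: abs(guesses[i] - target))

# Example: mean = 28.75, target ≈ 19.17, so the guess 22 (index 2) wins.
print(beauty_contest_winner([50, 33, 22, 10]))  # → 2
```

Iterating this reasoning ("others will guess ⅔ of the mean, so I should guess ⅔ of that…") drives the winning guess toward 0, which is why lower winning guesses indicate deeper strategic reasoning.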

Golden Ball

Everyone can share — or one person can take it all.

4–10 players · 3 rounds

Split Rate by Round

Does cooperation erode as the game progresses? AI personas vs human baseline.

Human: van den Assem, van Dolder & Thaler (2012) · MS 58(1) · directional reference
AI: atypica.AI personas · accumulated sessions
0 sessions recorded · Past games →

All-Pay Auction

The winner takes the prize — but everyone pays their bid.

4–10 players · 3 rounds · 1 discussion round

Bid Distribution — Round 1

PMF of first-round bids (prize = 100). Humans show escalation bias; AI bids sit closer to the Nash prediction.

Human: Gneezy & Smorodinsky (2006) · Games Econ Behav · overbidding pattern
AI: atypica.AI personas · accumulated sessions
0 sessions recorded · Past games →
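The payoff rule stated above ("the winner takes the prize — but everyone pays their bid") reduces to a one-liner. A minimal sketch, assuming a prize of 100; ties go to the lowest-index bidder for simplicity, and the function name is illustrative.

```python
def all_pay_payoffs(bids, prize=100):
    """Every player forfeits their bid; the highest bidder also gains the prize."""
    winner = max(range(len(bids)), key=lambda i: bids[i])
    return [prize - b if i == winner else -b for i, b in enumerate(bids)]

# The winner nets prize minus bid; everyone else simply loses their bid.
print(all_pay_payoffs([60, 40, 10]))  # → [40, -40, -10]
```

Because losing bids are sunk, total bids can easily exceed the prize, which is the escalation bias the human baseline exhibits.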

Volunteer's Dilemma

Someone must volunteer — but who wants to be the one who pays?

3–10 players · 3 rounds · 1 discussion round

Volunteer Rate — Round 1

Probability of volunteering (N=5). Humans show higher volunteering due to altruism.

Human: Diekmann (1985, 1993), Franzen (1995) · volunteer rate experiments
AI: atypica.AI personas · accumulated sessions
0 sessions recorded · Past games →

Public Goods Game

Contribute to the common good — or free-ride and let others pay?

4–10 players · 3 rounds · 1 discussion round

Contribution Distribution — Round 1

PMF of contributions to the public pool (endowment = 20). Humans show conditional cooperation; AI may free-ride more.

Human: Ledyard (1995) · JEL handbook · meta-analysis of public goods experiments
AI: atypica.AI personas · accumulated sessions
0 sessions recorded · Past games →

Colonel Blotto

Allocate your troops wisely — concentrate or spread?

3–8 players · 3 rounds · 1 discussion round

Allocation Strategy Distribution — Round 1

PMF of allocation patterns (6 troops, 4 battlefields). Humans tend to over-concentrate; AI spreads forces more evenly.

Human: Experimental Blotto game studies · tendency to over-concentrate
AI: atypica.AI personas · accumulated sessions
0 sessions recorded · Past games →

Trolley Problem

Two moral dilemmas — where do you draw the line?

4–10 players · 1 round · 1 discussion round

Classic Trolley — Pull Lever or Do Nothing?

Most humans pull the lever (redirecting the threat). AI shows a higher utilitarian rate.

Fat Man Variant — Push or Do Nothing?

Most humans refuse to push (active killing). AI shows a much higher utilitarian rate.

Human: Thomson (1985) 'The Trolley Problem' · empirical moral psychology
AI: atypica.AI personas · accumulated sessions
0 sessions recorded · Past games →

Ultimatum Game

Divide the money — if they accept.

2 players · 1 round
No distribution data available for this game type
0 sessions recorded · Past games →