Among AI is a Python simulation of Among Us in which every player is a large language model. Ten AI agents, each backed by its own model (drawn from a lineup including DeepSeek, Gemini, GPT-4o mini, Mistral Large, Claude Haiku, Qwen 72B, and Llama 3.3 70B), play a full game of social deduction from start to finish, completely autonomously. You spectate.
## How It Works
Every ~3 seconds each player receives a structured prompt containing everything they currently "know": their location on the ship, who they can see, their assigned tasks, their memory of past events, active sabotages, and kill/meeting cooldowns. The model replies with a single action (MOVE_TO_ROOM, KILL, REPORT_BODY, CALL_MEETING, etc.) and a short reason.
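A minimal sketch of that decision step, assuming a JSON reply format — the field names and action schema here are illustrative, not the project's actual code:

```python
import json

# Hypothetical per-tick prompt builder: serialise everything the player
# currently "knows" into a single prompt string.
def build_prompt(state: dict) -> str:
    return (
        f"You are {state['color']} in room {state['room']}.\n"
        f"Visible players: {', '.join(state['visible']) or 'none'}\n"
        f"Tasks: {', '.join(state['tasks'])}\n"
        f"Memory: {'; '.join(state['memory'])}\n"
        f"Active sabotage: {state.get('sabotage') or 'none'}\n"
        'Reply as JSON: {"action": "MOVE_TO_ROOM|KILL|REPORT_BODY|CALL_MEETING", '
        '"target": "...", "reason": "..."}'
    )

def parse_action(reply: str) -> tuple[str, str]:
    """Extract (action, reason) from the model's reply, with a safe fallback."""
    try:
        data = json.loads(reply)
        return data["action"], data.get("reason", "")
    except (json.JSONDecodeError, KeyError):
        return "WAIT", "unparseable reply"   # fall back to a no-op action
```

Falling back to a no-op on a malformed reply keeps one confused model from crashing the whole match.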
When an emergency meeting fires, all surviving agents are prompted in parallel to generate natural-language chat messages arguing their case, then cast a vote. The most-voted player gets ejected.
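The fan-out can be sketched with `asyncio.gather`; `ask_model` stands in for the real OpenRouter call, and the vote logic below is a deliberately dumb placeholder:

```python
import asyncio

# Hedged sketch of the meeting phase — names and structure are assumptions.
async def ask_model(player: str, transcript: list[str]) -> str:
    await asyncio.sleep(0)            # placeholder for the network round-trip
    return f"{player}: I was doing tasks in Electrical."

async def run_meeting(alive: list[str]) -> dict[str, int]:
    # All survivors generate their arguments in parallel.
    transcript = list(await asyncio.gather(*(ask_model(p, []) for p in alive)))
    # Each survivor then votes after reading the transcript (dummy choice here).
    votes: dict[str, int] = {}
    for voter in alive:
        choice = alive[0] if voter != alive[0] else alive[-1]
        votes[choice] = votes.get(choice, 0) + 1
    return votes

votes = asyncio.run(run_meeting(["Red", "Blue", "Green"]))
ejected = max(votes, key=votes.get)   # most-voted player gets ejected
```

Gathering all statements before the vote step mirrors how the meeting works: argue first, then vote on what was said.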
The Impostor is assigned secretly at game start — it knows its role, nobody else does.
## Key Features
- 10 simultaneous LLM players each with a unique personality and speaking style
- Full Among Us mechanics — tasks, kills, vents, lights and reactor sabotage, emergency button, voting, ejection
- Real discussions — agents generate natural-language arguments during meetings and vote based on the conversation
- Per-player model selection — any OpenRouter model assignable to any colour via `config.yaml`
- Live reasoning panel — spectator view shows the watched player's current thought process in real time
- Player overview — alive/dead status visible throughout gameplay and embedded in the meeting UI
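A hypothetical `config.yaml` fragment to illustrate per-colour model selection — the key names are assumptions, not the project's actual schema; only the model IDs are real OpenRouter identifiers:

```yaml
# Illustrative only: maps each colour to an OpenRouter model ID.
players:
  red:    anthropic/claude-3-5-haiku
  blue:   google/gemini-2.0-flash-001
  green:  openai/gpt-4o-mini
  purple: deepseek/deepseek-chat
```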
## Cost
A full 10-player match makes roughly 100–200 LLM calls in total. With the default mixed model lineup via OpenRouter, one game costs about $0.15–$0.25; running a cheap model for everyone (e.g. Gemini Flash) brings it under $0.05.
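The arithmetic behind those figures, with an assumed average price per call (not a measured number):

```python
# Back-of-envelope check: 100-200 calls at an assumed mixed-lineup average.
calls_low, calls_high = 100, 200
avg_cost_per_call = 0.0013             # ~$0.0013/call, an assumption

low = calls_low * avg_cost_per_call    # ≈ $0.13
high = calls_high * avg_cost_per_call  # ≈ $0.26
```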
## Technical Details
- asyncio + threading keeps all LLM calls non-blocking — the game loop never stalls waiting for a model
- OpenRouter provides a single unified API endpoint for all providers (Anthropic, Google, OpenAI, Mistral, Meta, Qwen, DeepSeek)
- Pygame-CE handles rendering, input, and the 60fps game loop
- PyTMX loads the Tiled map format used for the spaceship layout
- Per-player memory stores a rolling log of observed events that feeds back into each prompt, giving agents continuity across decisions
- Built on top of an open-source Among Us clone by AI0702, with the entire AI/LLM layer added from scratch
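The non-blocking pattern from the first bullet can be sketched as follows; `blocking_llm_call` is a stand-in for the real OpenRouter request, and the loop body is a placeholder for Pygame rendering:

```python
import asyncio
import time

def blocking_llm_call(prompt: str) -> str:
    """Stand-in for a synchronous HTTP request to the model provider."""
    time.sleep(0.05)                  # simulate network latency
    return "MOVE_TO_ROOM cafeteria"

async def game_loop(ticks: int = 5) -> list[str]:
    # Run the blocking call in a worker thread so the loop never stalls on it.
    pending = asyncio.create_task(asyncio.to_thread(blocking_llm_call, "state..."))
    frames = []
    for t in range(ticks):
        frames.append(f"frame {t}")   # rendering/physics never wait on the model
        await asyncio.sleep(0.02)
    action = await pending            # collect the reply once it is ready
    return frames + [action]

result = asyncio.run(game_loop())
```

`asyncio.to_thread` hands the blocking request to a thread-pool worker, so frames keep ticking while the model thinks — the same idea the bullet describes, in miniature.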