OpenAI Symphony is an open-source Apache 2.0 orchestrator released April 27, 2026, that maps every active Linear issue to a dedicated Codex agent workspace. It polls Linear every 30 seconds, creates an isolated sandbox per issue, and keeps agents running until each task is done -- no human supervision required. Internal OpenAI teams reported a 500% increase in landed PRs in three weeks.
I've been watching the agent orchestration space for months, and Symphony is the first tool I've seen that systematically solves the "I can only manage three Codex sessions at once before I lose track" problem. Here's what shipped, how the architecture actually works, and the honest caveats before you wire it up to your board.
What is OpenAI Symphony?
Symphony is an open-source spec and reference implementation for autonomous Codex orchestration over Linear. Every open Linear issue in an active state gets its own agent workspace -- Symphony polls your board every 30 seconds, creates an isolated sandbox per issue, launches Codex in app-server mode, and runs the agent until the task completes. Released April 27, 2026, under Apache 2.0.
OpenAI identified the ceiling: engineers could manage roughly 3 to 5 concurrent Codex sessions before context-switching became painful and its overhead compounded faster than output scaled. Symphony removes that ceiling by treating your issue tracker as an agent command center. You add tasks from anywhere -- a meeting, your phone -- and agents start working without you watching.
OpenAI is explicit: Symphony is not a product they intend to maintain. It's a reference implementation to study, fork, and rebuild. Codex built the Elixir reference implementation in one shot. OpenAI then had Codex rewrite it in TypeScript, Go, Rust, Java, and Python to stress-test the spec and surface ambiguities -- a useful validation technique in itself.
How does the Linear-to-agent pipeline work?
Symphony runs a poll-dispatch-execute loop. Every 30 seconds it queries Linear for issues in configured active states. For each active issue, it creates an isolated workspace directory, launches a Codex app-server process inside it, and sends a session prompt assembled from the issue context plus your WORKFLOW.md. Each agent runs in its own sandbox with no shared state between issues.
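The loop above can be sketched in a few lines of Python. This is an illustration, not the reference implementation (which is Elixir): `fetch_active_issues` stands in for the Linear query and `launch_agent` stands in for starting a Codex app-server process, and both names are my own.

```python
from pathlib import Path

def poll_once(fetch_active_issues, launch_agent, workspace_root, running):
    # One tick of a Symphony-style poll-dispatch-execute loop.
    # Illustrative sketch only; function names are assumptions.
    for issue in fetch_active_issues():  # stands in for the Linear query
        if issue["id"] in running:
            continue  # an agent is already working this issue
        sandbox = Path(workspace_root) / issue["id"]
        sandbox.mkdir(parents=True, exist_ok=True)  # isolated per-issue workspace
        # stands in for launching Codex in app-server mode inside the sandbox
        running[issue["id"]] = launch_agent(issue, sandbox)
    return running
```

In the real system this tick fires every 30 seconds, and crashed agents are restarted by OTP supervision rather than by the loop itself.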
The WORKFLOW.md is the central configuration artifact. It uses YAML front matter for settings and Markdown body text as the actual Codex session prompt. You encode your real development process: branch naming convention, test commands, PR template requirements, walkthrough video format. Every agent for every issue reads this file before starting. It's the equivalent of an onboarding doc that every contractor on your team gets on day one.
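As a concrete sketch, a minimal WORKFLOW.md might look like the following. The front-matter field names here are assumptions for illustration; check SPEC.md for the actual schema.

```markdown
---
# Orchestrator settings (field names are illustrative, not the spec's schema)
active_states: ["Todo", "In Progress"]
workspace_root: /srv/symphony/workspaces
---
You are working a single Linear issue to completion.

- Branch from main as issue/<ISSUE-ID>-<short-slug>.
- Run the full test suite before opening a PR.
- Open the PR using the team template and link the Linear issue.
- Attach proof of work: CI status, PR link, and a short walkthrough video.
```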
When an agent finishes, it produces proof of work: CI status, PR link, review feedback, complexity analysis, and optionally a walkthrough video. Symphony auto-restarts crashed agents and auto-picks up new work as issues enter active states. The Elixir reference uses Erlang/OTP process supervision for fault tolerance and supports hot code reloading without stopping active subagents -- specifically useful during early WORKFLOW.md iteration when you're still tuning the prompt.
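The proof-of-work bundle can be thought of as a small record. The schema below is my own illustration of the artifacts the spec names; the spec lists the artifacts, not this structure.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ProofOfWork:
    # Artifacts an agent reports on completion. Field names are
    # illustrative, not a structure the Symphony spec defines.
    ci_status: str                      # e.g. "passing" / "failing"
    pr_url: str                         # link to the opened pull request
    review_feedback: List[str] = field(default_factory=list)
    complexity_notes: str = ""          # agent's complexity analysis
    walkthrough_video_url: Optional[str] = None  # optional walkthrough
```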
What does the 500% PR increase actually mean?
OpenAI reported a 500% increase in landed PRs among some internal teams in Symphony's first three weeks. That number is real. It's also the figure most coverage has reprinted without context. Before routing your full Linear board through Symphony, the right question is not whether agents can open PRs at 5x velocity -- it's whether your team can review them at that rate.
The core critique, surfaced by InfoWorld's coverage: generation scales effortlessly, validation does not. If Symphony opens 20 PRs per day and your team can review 5, you've shifted the bottleneck from writing code to reviewing it. You haven't eliminated it. Output volume rises; the burden of review, testing, and governance rises with it. This is a structural property of autonomous agent output at scale, not a design flaw in Symphony.
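The arithmetic is easy to make concrete with a toy model of the review queue:

```python
def review_backlog(prs_per_day, reviews_per_day, days):
    # Unreviewed-PR backlog at the end of each day when generation
    # outpaces review capacity (toy model of the validation bottleneck).
    backlog, history = 0, []
    for _ in range(days):
        backlog = max(backlog + prs_per_day - reviews_per_day, 0)
        history.append(backlog)
    return history

# 20 PRs/day opened, 5/day reviewed: the queue grows by 15 per day,
# so a single working week leaves 75 PRs waiting.
```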
What OpenAI's teams did gain was speculative capacity. Symphony made it trivial to spin up agents on ideas, refactors, and hypotheses they'd never have prioritized under normal engineering constraints -- then keep only what looked promising. That's a real shift in how you use engineering bandwidth, and it's meaningfully different from the raw PR throughput story that led most headlines.
Get the AI Agent Briefing
One email per week. The best AI agent news, tutorials, and tools -- written by someone who actually builds with them.
Subscribe Free
How do you set up Symphony?
Symphony requires a Linear account, Codex access, and a devbox or always-on server to run the orchestrator. The reference implementation is Elixir; community TypeScript, Go, and Python ports are in early development. Getting the Elixir version running takes roughly 30 minutes with prerequisites in place. The setup sequence from SPEC.md:
- Clone the repo: git clone https://github.com/openai/symphony
- Configure Linear: Set your Linear API key and define which workflow states Symphony treats as active.
- Write WORKFLOW.md: Encode your team's real development process -- branch naming, test commands, PR requirements, proof-of-work expectations.
- Set a workspace root: Configure the directory where Symphony creates per-issue sandboxes.
- Start the orchestrator: Symphony begins polling Linear on the 30-second cadence.
One critical step before enabling Symphony across a full board: verify your Linear workflow state mapping is airtight. Symphony triggers on "active" states -- if issues in design review, blocked, or waiting-for-external are misconfigured as active, agents will start working on tasks that aren't ready. Define your state map explicitly in the WORKFLOW.md front matter before turning Symphony on at scale. This is the most common early misconfiguration.
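A small pre-flight check catches this class of mistake. The guard below is my own sketch, not part of Symphony: `NEVER_ACTIVE` is an example denylist of not-ready states, and the spec defines no such check.

```python
# Example denylist of states that should never dispatch an agent.
# This set and the function are illustrative add-ons, not the spec.
NEVER_ACTIVE = {"Design Review", "Blocked", "Waiting for External", "Done", "Canceled"}

def validate_state_map(active_states):
    # Sanity-check a Linear state map before enabling Symphony broadly:
    # any not-ready state configured as active means agents start
    # working issues that aren't ready.
    misconfigured = set(active_states) & NEVER_ACTIVE
    if misconfigured:
        raise ValueError(f"states misconfigured as active: {sorted(misconfigured)}")
    return True
```

Run a check like this against the front-matter state list every time the Linear workflow changes, not just at initial setup.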
What Symphony doesn't handle yet
Symphony is a spec and reference implementation, not a managed service. There are four real gaps to plan around: Linear-only backend support, no built-in audit trail, uncapped Codex API costs at high concurrency, and no OpenAI maintenance commitment. Each has practical implications for teams evaluating it for production use.
Linear only. The shipped code targets Linear. GitHub Issues, Jira, and Asana are not supported. The SPEC.md is abstract enough that you could build a different backend adapter, but OpenAI hasn't built them. If your team doesn't use Linear, plan for custom adapter work before any of the orchestration value is accessible.
No structured audit trail. Symphony requires proof of work (CI status, PR links) but the spec doesn't mandate structured logging or compliance-ready audit records. For regulated environments or teams with traceability requirements, this is a gap you'd need to fill before treating Symphony as a production system rather than an experiment.
Uncapped API costs at scale. Parallelizing agents across 20 or 30 active issues simultaneously means 20 or 30 concurrent Codex sessions. Each runs a full agent loop. There's no built-in cost cap, rate limiter, or concurrency ceiling in the current spec. At high issue volume, Codex API costs compound fast. Control your Linear state machine carefully to limit how many issues are "active" at once while you're still calibrating.
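A ceiling is straightforward to bolt on yourself. The guard below sketches the kind of concurrency cap the current spec omits; every name in it is my own, not Symphony's API.

```python
def dispatchable(active_issue_ids, running_ids, max_concurrent=10):
    # Cap how many issues may get an agent at once -- a cost control the
    # current spec doesn't provide. Illustrative add-on code.
    budget = max(max_concurrent - len(running_ids), 0)
    waiting = [i for i in active_issue_ids if i not in running_ids]
    return waiting[:budget]  # dispatch only up to the remaining budget
```

Filtering the dispatch list this way bounds concurrent Codex sessions (and thus spend) regardless of how many issues your team marks active.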
No OpenAI maintenance commitment. The announcement was explicit: study it, fork it, rebuild it for your stack. If you build on Symphony and hit edge cases in the Elixir implementation, you're shipping your own fixes. That's a reasonable tradeoff for teams with engineering capacity. Worth knowing upfront if you're a smaller team expecting maintained open-source tooling.
FAQ
What is OpenAI Symphony?
OpenAI Symphony is an open-source Apache 2.0 orchestration spec that turns a Linear project board into a control plane for Codex coding agents. Every active Linear issue gets a dedicated agent workspace that runs until the task is complete. Released April 27, 2026. OpenAI built it as a reference implementation to demonstrate Codex App Server -- not a product with ongoing maintenance.
Does Symphony work with GitHub Issues or Jira?
No -- the current reference implementation targets Linear only. The SPEC.md is abstract enough to support different issue tracker backends, but OpenAI hasn't built adapters for GitHub Issues, Jira, or Asana. Community ports for alternative backends are in early development. If your team uses anything other than Linear, plan for custom integration work before adopting Symphony in any capacity.
How does Symphony prevent concurrent agents from conflicting?
Symphony creates an isolated workspace directory for each active Linear issue -- agents run in separate sandboxes and do not share state. Each workspace has its own Codex app-server process and its own git working tree. The Elixir reference implementation uses Erlang/OTP process supervision to restart crashed agents and manage concurrent issue polling on the 30-second cadence without coordination overhead.
What does the 500% PR increase actually mean?
Among some internal OpenAI teams, landed pull requests increased 500% in Symphony's first three weeks. This reflects Codex implementation capacity unlocked by continuous autonomous agents -- not a 5x gain in reviewer throughput. The bottleneck shifts from writing code to reviewing it. Teams need matching review capacity or the production rate gain stalls at the code review queue.