Codex in the ChatGPT Mobile App: Preview Lets You Run, Review, and Steer Codex from Your Phone
OpenAI shipped a preview of Codex inside the ChatGPT mobile app on May 14, 2026. From your phone you start new work, review outputs, steer execution, and approve next steps -- but Codex itself keeps running on a host machine (laptop, Mac mini, or devbox) that pushes live context to mobile. The preview is rolling out on iOS and Android across all plans, including Free and Go, in supported regions. Windows host support is coming soon.
For where this fits in the Codex stack: the `codex remote-control` app-server from 0.130.0 is the underlying plumbing for headless operation, and the April post on Mac computer use, browser comment mode, and thread automations is where the "Codex for almost everything" framing started.
Key Takeaways
- Preview, not GA. Rolling out gradually on iOS and Android in supported regions.
- All plans, including Free and Go. Per the OpenAI Help Center release notes for May 14, 2026 ("Codex remote access from ChatGPT mobile").
- Host stays on the Mac. Codex keeps running on a laptop, Mac mini, or devbox; the phone is a remote-control surface that loads live context from the host.
- Live context surfaced to mobile. Project context, approvals, plugins, screenshots, terminal output, diffs, and test results.
- Windows host support: coming soon. macOS is the only supported host at preview launch.
- Setup starts in the Codex Mac app, continues in ChatGPT mobile via QR code. Update both apps first.
What "Codex in ChatGPT Mobile" Actually Is
The OpenAI Developer Community post is precise: "Now in preview: Codex in the ChatGPT mobile app". The first paragraph spells out the model:
> Codex continues running on a laptop, Mac mini, or devbox. You can start new work, review outputs, steer execution, and approve next steps from your phone.
This is remote control of a host, not Codex running on the phone. The mobile app is a thin control layer that pulls state from wherever Codex is actually executing. The Help Center release-notes entry for May 14, 2026 names it "Codex remote access from ChatGPT mobile" and lists exactly what the mobile app loads: project context, approvals, plugins, screenshots, terminal output, diffs, and test results.
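The Help Center's list of what mobile loads can be pictured as a single payload the phone pulls from the host. OpenAI publishes no schema for this; the shape below is purely illustrative, with field names taken from the Help Center's plain-English list.

```python
from dataclasses import dataclass, field

# Hypothetical shape of the live context the mobile app loads from the host.
# OpenAI publishes no schema; these field names are illustrative only.
@dataclass
class LiveContext:
    project_context: str
    pending_approvals: list[str] = field(default_factory=list)
    plugins: list[str] = field(default_factory=list)
    screenshots: list[bytes] = field(default_factory=list)
    terminal_output: str = ""
    diffs: list[str] = field(default_factory=list)
    test_results: dict[str, bool] = field(default_factory=dict)

# The phone reads and acts on this state; it never produces it locally.
ctx = LiveContext(project_context="repo: my-app, branch: feature/login")
ctx.pending_approvals.append("apply diff to src/auth.py")
```

The point of the sketch is the direction of data flow: every field originates on the host, which is why the host must stay awake and online.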
If you have been running Codex with long-horizon /goal workflows, browser comment mode, or computer-use sessions on a Mac, this is the missing surface that lets you keep those running while you are away from the desk.
Rollout: Plans, Platforms, Regions
| Dimension | Status at preview launch |
|---|---|
| Mobile platforms | iOS and Android |
| Plans | All ChatGPT plans, including Free and Go |
| Regions | Supported regions only (OpenAI does not enumerate; preview is rolling out gradually) |
| Host OS | macOS today; Windows coming soon |
| Phase | Preview, not general availability |
The all-plans rollout is the noteworthy detail. Codex's other recent surfaces -- the Chrome extension, computer use on Mac -- shipped first to paid tiers. Putting the mobile control surface on Free and Go means OpenAI wants the broadest possible adoption of "I asked Codex to do something from my phone" as a habit.
Setup: macOS Host + Mobile + QR Pairing
The Help Center release-notes entry describes the pairing flow:
- Update the ChatGPT mobile app to the latest version.
- Update the Codex app on macOS to the latest version.
- Open the Codex app on the Mac and begin setup there.
- Scan the QR code with the ChatGPT mobile app to complete pairing.
- Keep the host machine awake, online, and running Codex while you use mobile.
That last requirement is load-bearing. The mobile app is not running Codex; it is reading from the Mac. If the laptop sleeps in your bag, the phone loses live context. For "leave it running while I commute" workflows, this implies either keeping the lid open, configuring the Mac not to sleep on power, or using a Mac mini/devbox you can leave running.
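For the "configure the Mac not to sleep" option, macOS ships a stock `caffeinate` utility that holds the machine awake only for as long as a wrapped command runs. A minimal sketch (the `codex` invocation stands in for whatever long-running command you use; check `man caffeinate` on your system for flag details):

```python
import sys

# macOS's `caffeinate` utility: -d keeps the display awake, -i prevents
# idle sleep, -s prevents system sleep while on AC power. Wrapping a
# command holds the machine awake only for that command's lifetime.
KEEP_AWAKE = ["caffeinate", "-dis"]

def run_awake(cmd: list[str], platform: str = sys.platform) -> list[str]:
    """Prefix cmd with caffeinate on macOS; pass it through unchanged elsewhere."""
    return KEEP_AWAKE + cmd if platform == "darwin" else cmd

# On a Mac: run_awake(["codex"]) -> ["caffeinate", "-dis", "codex"];
# hand the resulting list to subprocess.run(...) or a shell.
```

The system-wide alternative is `sudo pmset -c sleep 0` (never sleep on AC power), but wrapping the one long-running process is the lighter-touch option.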
How a Reader Uses This
A few concrete workflows where the mobile surface adds something real, not just a redundant UI:
- Long-running goal-mode tasks. You kicked off a multi-step `/goal` workflow in Codex CLI before lunch. From your phone, you can see when the agent hits an approval gate, scan the diff, approve, and let it keep going -- without opening a laptop.
- Computer-use sessions on a Mac mini. You have Codex driving apps on a Mac mini at home or in the office. The phone lets you check what it's doing, intercept a wrong turn, and steer the next step.
- Plugin-heavy workflows. Plugins (Chrome, integrations) surface to mobile as live context. If a plugin asks for approval mid-run, you handle it on the phone.
- Test-result watch. Codex finished a test run while you were in a meeting. Mobile shows the test results and any diffs, with the same approval surface you'd see on the host.
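Across all four workflows, the phone-side loop reduces to the same pattern: hit an approval gate, inspect the diff, approve or steer. OpenAI exposes no public API for this; the sketch below is a purely hypothetical model of that control flow, not real Codex code.

```python
from dataclasses import dataclass

# Purely hypothetical model of an approval gate -- OpenAI exposes no such
# API; this only illustrates the approve-or-steer decision from the phone.
@dataclass
class Gate:
    description: str
    diff: str

def decide(gate: Gate, banned: tuple[str, ...] = ("rm -rf", "force-push")) -> str:
    """Approve routine gates; flag anything matching a banned operation for steering."""
    if any(term in gate.diff for term in banned):
        return "steer"   # intervene from the phone before the agent proceeds
    return "approve"     # let the agent keep going

gate = Gate("apply migration", diff="+ CREATE TABLE users (...)")
```

The interesting property is that the decision needs only the live context the host already pushes (the diff); nothing executes on the phone.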
Where mobile is the wrong surface today: long composer-style sessions where you'd be typing significant prompts, or anything where you need full IDE context. Those still belong on the host.
What This Is Not
Three clarifications to head off confusion:
- Not Codex running on the phone. The phone does not execute agent tasks locally. The host does.
- Not a replacement for the CLI or IDE. The mobile app is a control surface layered on top of the existing Codex install.
- Not Windows-ready. macOS is the only supported host at preview launch. Windows host support is on the announced roadmap.
How It Fits With Other Recent Codex Surfaces
OpenAI has shipped several Codex surfaces in close succession:
- April 16, 2026: Mac computer use, in-app browser comment mode, and thread automations -- the "Codex for almost everything" reposition.
- May 7, 2026: Codex CLI 0.129.0 shipped a `/hooks` browser, plugin management changes, and modal Vim editing in the composer.
- May 9, 2026: Codex CLI 0.130.0 added the `codex remote-control` headless entrypoint.
- May 14, 2026 (this post): Codex in the ChatGPT mobile app, preview.
The progression is consistent: Codex on more surfaces, with more ways to drive the same underlying agent from wherever you are. Mobile is the obvious missing one, and it is now in preview.
When to Use Codex Mobile vs. the Other Surfaces
| Surface | Best for |
|---|---|
| Codex CLI | Heavy composer-style work, repo-scoped tasks, scripting via `codex exec` |
| Codex app (macOS) | Local UI, computer use, in-app browser, plugins |
| ChatGPT web app | Casual prompts, mobile-browser fallback when the mobile app preview is not available |
| `codex remote-control` (0.130.0) | Running Codex headless as a service or for custom integrations |
| ChatGPT mobile (this) | Steering existing host sessions: approvals, diffs, test results, light prompts |
Caveats and Open Questions
- Preview means changes. Behavior, surfaces, and host requirements may change before GA. Do not make mobile-only workflows load-bearing yet.
- Region availability. Neither the Developer Community post nor the Help Center entry enumerates regions. If the mobile app does not surface the feature yet, that's expected during a rollout.
- Privacy and audit posture. The mobile app surfaces live context including terminal output and screenshots from the host. Teams with stricter audit requirements should treat the mobile pairing the same as adding any other remote-access path to a developer workstation.
Sources
- OpenAI Developer Community announcement: https://community.openai.com/t/now-in-preview-codex-in-the-chatgpt-mobile-app/1380940
- OpenAI Help Center, ChatGPT release notes (May 14, 2026 entry): https://help.openai.com/en/articles/6825453-chatgpt-release-notes
Frequently Asked Questions
What did OpenAI ship on May 14, 2026?
A preview of Codex inside the ChatGPT mobile app. From your phone you can start new work, review outputs, steer execution, and approve next steps. Codex itself keeps running on a host machine (laptop, Mac mini, or devbox), and the mobile app loads live context from that host: project context, approvals, plugins, screenshots, terminal output, diffs, and test results.
Which platforms and plans is it available on?
The preview is rolling out on iOS and Android in supported regions, across all ChatGPT plans including Free and Go. The host today must be macOS; Windows host support is coming soon. The OpenAI Help Center release notes (May 14, 2026) call it "Codex remote access from ChatGPT mobile" and confirm the all-plans rollout.
How do you set it up?
Update the ChatGPT mobile app and the Codex app on macOS. Setup starts in the Codex app on the Mac, then continues in ChatGPT mobile after scanning a QR code. The host machine must remain awake, online, and running Codex while you are using mobile -- if the host sleeps or disconnects, mobile loses live context.
Is this Codex moving to the phone, or remote control of a host?
Remote control of a host. Codex continues running on the host (laptop, Mac mini, or devbox) and the mobile app is a control surface that loads live context from there. That is why the host has to stay awake and online. Practically, it means you can leave a long-running task at your desk and steer it on the train, but the work happens where Codex is installed.
Why does this matter relative to existing Codex surfaces?
Until now, the ways to drive Codex outside the CLI were the web app, IDE integrations, the in-app browser, and (per 0.130.0) the new `codex remote-control` headless app-server. The ChatGPT mobile surface adds a first-party, phone-native control layer that brings approvals, diffs, and terminal output to a device most people already carry. For long-running goal-mode and computer-use sessions, that is meaningfully different from refreshing the web app on a phone browser.
What can you actually do from the phone today?
Per OpenAI's announcement: start new work, review outputs, steer execution, and approve next steps. The Help Center detail lists the live context the mobile app pulls from the host -- project context, approvals, plugins, screenshots, terminal output, diffs, and test results. Treat it as the approval-and-steering surface; the host is still where heavy execution happens.