The AI Catchup -- May 5, 2026

3 min read

Welcome back. The throughline this fortnight: agent runtimes are moving out from behind chat surfaces and into things you can program against. Cursor put its agent into an SDK. Codex CLI made goals first-class. Warp open-sourced the client. And Cursor 3.2 quietly turned cross-repo, parallel agent work into the default flow.

Let us get into it.

This Week in AI

The big story is the agent-as-runtime shift. Two drops in two days, each one moving agent capability into a programmable layer instead of a UI surface.

First, Cursor shipped the Cursor SDK in public beta on April 29. @cursor/sdk exposes the same agent runtime, harness, and models that power Cursor desktop, CLI, and web -- with three runtimes (local, Cursor-hosted cloud, self-hosted cloud), Composer 2 support, and standard token-based pricing. It is the largest move yet to turn Cursor from an editor product into a programmable platform.

Second, Codex CLI 0.128.0 landed persisted /goal workflows on April 30. The release ships app-server APIs and model tools for goals, runtime continuation logic so a turn does not stop short, and TUI controls to create, pause, resume, and clear goals. Felipe Coury credits Eric Traut (the Pyright lead) and frames it as Codex's productized take on the Ralph loop -- keep a goal alive across turns, do not stop until it is achieved.
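The pattern is easy to sketch outside any particular runtime. Here is a minimal, hypothetical version of a persistent-goal loop -- the names and shapes below are our own illustration, not Codex's APIs: re-invoke the agent turn after turn until the goal reports done, the goal is paused, or a turn budget runs out.

```typescript
// Hypothetical sketch of a persistent-goal ("Ralph loop") pattern.
// runTurn stands in for one agent turn; a real runtime would call the model here.

type TurnResult = { done: boolean; note: string };

interface Goal {
  description: string;
  paused: boolean;
}

async function pursueGoal(
  goal: Goal,
  runTurn: (g: Goal, turn: number) => Promise<TurnResult>,
  maxTurns = 10,
): Promise<{ achieved: boolean; turns: number }> {
  for (let turn = 1; turn <= maxTurns; turn++) {
    // A paused goal stops pursuing but is not cleared; it can be resumed later.
    if (goal.paused) return { achieved: false, turns: turn - 1 };
    const result = await runTurn(goal, turn);
    if (result.done) return { achieved: true, turns: turn };
    // Not done yet: continue to the next turn instead of stopping short,
    // which is the continuation behavior the release notes describe.
  }
  return { achieved: false, turns: maxTurns };
}
```

The interesting design choice is that the loop, not the model, owns the stopping condition -- the model can end a turn, but only the goal state ends the work.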

Two different shapes, one direction: agents are becoming infrastructure.

Tool Spotlight

Warp open-sourced its client on April 27 under AGPL v3, with the warpui UI framework crates released under MIT. The repository at github.com/warpdotdev/warp is the actual Rust client codebase, not an issues placeholder. The same release ships a TOML settings file you can edit from the settings page or by asking Warp's agent to update settings for you. The server stays closed-source. OpenAI is the founding sponsor, with Thibault Sottiaux quoted on the launch.

For teams that wanted to audit Warp's process model, telemetry, or agent integration before adopting it -- the source is now there to read under AGPL. The MIT-licensed UI framework is reusable on its own.

What We Are Watching

Cursor 3.2 shipped on April 24 with three changes that pull cross-repo, parallel agent work into the default flow. The /multitask command runs async subagents to parallelize a request instead of queueing it. The worktrees experience runs isolated branch work in the background by default, with one-click foregrounding. Multi-root workspaces let a single agent session target frontend, backend, and shared-library folders at once. If you are on a 2.x build, you will pick up Cursor 3, 3.1, and 3.2 together when you update.

Comparison Reading

The Codex CLI vs Claude Code vs Cursor architecture deep-dive is the right anchor for understanding why this fortnight's announcements look the way they do. Sandboxing models, context windows, plugin systems, and scheduling primitives differ across the three -- and the SDK and runtime shapes shipping right now inherit those underlying choices. Cursor SDK leads with the in-editor context layer made programmable. Codex CLI's /goal is the runtime piece that turns a single turn into a persistent objective.

Quick Hits

  • Codex CLI 0.128.0 also added a codex update command, configurable TUI keymaps, plan-mode nudges that suggest creating a plan when a turn is going long, action-required terminal titles for backgrounded sessions, and the ability to edit /statusline and /title while a turn is active. The /goal headline overshadowed a strong release.
  • Cursor SDK public-beta caveats worth noting: APIs may change before GA, inline MCP servers are not persisted across Agent.resume(), artifact download is not yet implemented for local agents, and tool call schemas are unstable -- parse defensively.
  • Warp's open-source release is client-only. The server stays proprietary, so self-hosting Warp end-to-end is not on the table -- but the AGPL client is a meaningful audit and trust signal for security-conscious teams.
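On the "parse defensively" point above: one way to handle unstable tool call schemas is a narrow validator that rejects anything it does not recognize rather than trusting the payload shape. A hypothetical sketch -- the name/arguments shape here is an assumption for illustration, not the Cursor SDK's actual schema:

```typescript
// Hypothetical defensive parser for an agent tool-call payload.
// Assumes a { name, arguments } shape; returns null on anything unexpected.

interface ToolCall {
  name: string;
  arguments: Record<string, unknown>;
}

function parseToolCall(raw: unknown): ToolCall | null {
  if (typeof raw !== "object" || raw === null) return null;
  const obj = raw as Record<string, unknown>;
  if (typeof obj.name !== "string") return null;

  // Arguments sometimes arrive as a JSON-encoded string; accept both forms.
  let args: unknown = obj.arguments;
  if (typeof args === "string") {
    try {
      args = JSON.parse(args);
    } catch {
      return null;
    }
  }
  if (typeof args !== "object" || args === null || Array.isArray(args)) return null;
  return { name: obj.name, arguments: args as Record<string, unknown> };
}
```

Returning null instead of throwing lets the caller decide whether an unparseable call is a retry, a skip, or a hard error.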

That is it for this fortnight. The AI Catchup publishes every other Tuesday. If you found this useful, subscribe to get it in your inbox.

Until next time -- stay caught up.
