AI Catchup

Cursor Canvases: When to Ask the Agent for a UI Instead of Text

Cursor 3.1 introduced interactive canvases -- agent-generated dashboards and custom interfaces that render as durable artifacts in the side panel of the Agents Window. Built from a first-party React component library (tables, charts, diagrams), canvases are best for multi-source data exploration, repeated triage, and reusable mini-tools you keep using all session.

Cursor shipped interactive canvases on April 15, 2026 as part of Cursor 3.1, giving the agent a way to answer with a custom interface instead of more text. A canvas is an agent-generated dashboard or mini-tool composed from Cursor's first-party React component library, rendered as a durable artifact in the side panel of the Agents Window. It is designed for the moments when you would rather click around a UI than scroll through another wall of markdown.

Key Takeaways

  • Canvases are agent-generated interactive UIs, not static images
  • Rendered as durable artifacts in the Agents Window side panel
  • Built from Cursor's first-party React component library (tables, charts, diagrams, custom components)
  • Persist across the session, unlike chat output
  • Best for incident response, PR review clustering, eval failure analysis, ML experiment tracking

What Is a Cursor Canvas?

A canvas is a custom interactive interface the Cursor agent emits in response to a prompt. Instead of returning a paragraph of analysis or a table dumped into chat, the agent assembles a small UI -- a dashboard, a triage view, a diff cluster, a chart panel -- from a library of building blocks Cursor maintains in-house. The result renders inline in the Agents Window and stays there for the rest of the session.

The component library is React-based and first-party. The blocks include tables, boxes, diagrams, and charts, alongside Cursor components you have already seen elsewhere in the editor like diffs and to-do lists. Logic and interactivity -- click handlers, filters, the way one row drills into another -- are tailored to the prompt that produced the canvas. You do not author the canvas markup. You ask for what you want, the agent emits it, and Cursor renders it.

The shape is closer to a generated mini-app than a generated document. Canvases ship in both the Agents Window and the editor, so you can request one from either surface.

How Canvases Differ From Text or Code Replies

A text reply is read once and lost in scrollback. A code reply edits a file and lives in the repo. A canvas does neither. It is a UI that exists alongside the conversation, that you can click into, and that holds state across the session.

That changes how you frame the prompt. With text, you ask "summarize the last 20 PRs and their status." With a canvas, you ask "show me the last 20 PRs grouped by author with status badges and a filter for stale ones." The first answer is one paragraph you read and forget. The second is a panel you keep open for the next two hours, sorting and filtering as your team's work changes underneath it.

The canvas also avoids two everyday irritations of chat-shaped output. Long tables wrap badly inside chat bubbles and become hard to scan. Multi-source comparisons that would otherwise require copying numbers between Slack, a dashboard, and a spreadsheet collapse into a single rendered view. The agent does the join in the canvas; you do the looking.

Where Canvases Render

Canvases live in the side panel of the Agents Window -- the same column where the terminal, the in-editor browser, and the source-control view already sit. Cursor's framing is that canvases are a peer pane in that column, not a popup, not a modal, and not a chat attachment. You can switch between the agent's terminal output, a browser tab the agent opened, source control, and the canvas without leaving the window or losing the canvas's state.

The side-panel-as-peer pattern is not unique to Cursor in 2026. The same shape -- a canvas, a terminal, a browser, and source control coexisting next to a coding agent -- shows up in the Warp agentic development environment, and it is becoming the default mental model for where agent-produced artifacts belong. Pinning canvases to the side panel rather than threading them into chat is what makes them durable in practice. They survive scrolling, survive new agent turns, and are still there when you come back from lunch.

The Four Use Cases Cursor Highlights

Cursor's launch post calls out four jobs canvases handle better than plain text. They are worth memorizing because they are the shape of the prompts where canvas-versus-text is an obvious win.

1. Incident response. When something breaks at 2am, the data you need is spread across Datadog, Sentry, and a Databricks query. Cursor's example pulls from all three and joins them into a single chart in a canvas. You stop alt-tabbing between three browser tabs and start asking the chart questions: which release introduced the error spike, which service is the loudest, which customer accounts are affected. The agent assembles the join. You drive the canvas.
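The join the agent builds behind a canvas like this is conceptually simple. Here is a rough sketch in plain Python, with invented sample data standing in for the Datadog, Sentry, and Databricks sources (in a real session the agent fetches these; the field names here are illustrative, not Cursor's schema):

```python
from collections import Counter

# Invented sample data standing in for error events (Datadog/Sentry)
# and a deploy log (Databricks query). A real canvas pulls live data.
error_events = [
    {"service": "api", "release": "v2.4.1"},
    {"service": "api", "release": "v2.4.1"},
    {"service": "api", "release": "v2.4.1"},
    {"service": "worker", "release": "v2.4.0"},
]
deploys = {"v2.4.0": "2026-04-14T22:10", "v2.4.1": "2026-04-15T01:55"}

# The join: error volume per release, annotated with its deploy time,
# so the error spike lines up against the release that introduced it.
by_release = Counter(e["release"] for e in error_events)
joined = [
    {"release": r, "errors": n, "deployed_at": deploys[r]}
    for r, n in by_release.most_common()
]
print(joined[0])  # the release with the loudest error volume
```

The canvas version of this is the same join rendered as a chart with drill-downs; the point is that the agent assembles the join once and you interrogate the result.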

2. PR review. A long backlog of pull requests is hard to review one diff at a time. A canvas can group related diffs together and surface pseudocode summaries of what each cluster does, so you scan ten PRs at the level of intent before you open any of them in detail. The clustering happens at canvas-build time. The reading happens at your pace.
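The clustering step is the interesting part. A minimal sketch of one plausible grouping heuristic, clustering PRs by the top-level directory they touch (the PR data here is invented; a real canvas would read it from source control and likely cluster more intelligently):

```python
from collections import defaultdict

# Invented PR metadata for illustration. Cluster key: the
# top-level path each PR touches.
prs = [
    {"id": 101, "files": ["auth/login.py", "auth/session.py"]},
    {"id": 102, "files": ["billing/invoice.py"]},
    {"id": 103, "files": ["auth/tokens.py"]},
]

clusters = defaultdict(list)
for pr in prs:
    # A PR touching several areas lands in several clusters.
    for area in sorted({f.split("/")[0] for f in pr["files"]}):
        clusters[area].append(pr["id"])

print(dict(clusters))
```

In the canvas, each cluster would carry a pseudocode summary of its intent, so you read "auth: session handling refactor" before opening any individual diff.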

3. Eval failure clustering. Eval suites generate hundreds of pass/fail rows that are useless as a flat list. A canvas can cluster failures by symptom -- which prompts failed for similar reasons, which model versions regressed together, which slices of the dataset are systematically broken. You go from "the eval suite has 312 failures" to "there are 4 distinct failure modes, here is the worst one."
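The move from "312 failures" to "4 failure modes" is a group-by over failure reasons. A toy sketch with invented eval rows, clustering on the error type prefix (a real canvas would cluster on richer signals than string prefixes):

```python
from collections import Counter

# Invented failure messages; the agent would pull real rows
# from the eval harness. Cluster key: the error type.
failures = [
    "AssertionError: expected JSON, got prose",
    "AssertionError: expected JSON, got prose",
    "Timeout: model produced no output",
    "AssertionError: expected JSON, got prose",
    "Timeout: model produced no output",
]

# Worst failure mode first -- the triage order you actually want.
modes = Counter(f.split(":")[0] for f in failures)
for mode, count in modes.most_common():
    print(f"{mode}: {count}")
```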

4. ML experiment tracking. Comparing training runs across hyperparameters is the canonical "I need a small dashboard" task. A canvas can render the comparison inline -- runs by config, loss curves overlaid, a row per checkpoint -- without you spinning up a Streamlit app or a notebook. The dashboard exists for as long as you need it and disappears with the session.
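The core of that mini-dashboard is a sort over run metadata. A sketch with invented runs, sorted into a leaderboard by final loss (a canvas would render this as an interactive table with overlaid loss curves rather than a print):

```python
# Invented experiment runs for illustration; real ones would come
# from wherever the training jobs log their metrics.
runs = [
    {"run": "lr-3e-4", "lr": 3e-4, "final_loss": 1.82},
    {"run": "lr-1e-4", "lr": 1e-4, "final_loss": 1.74},
    {"run": "lr-1e-3", "lr": 1e-3, "final_loss": 2.41},
]

# Leaderboard: best final loss first.
leaderboard = sorted(runs, key=lambda r: r["final_loss"])
best = leaderboard[0]
print(f"best: {best['run']} (loss {best['final_loss']})")
```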

The thread connecting all four is multi-source data the agent can pull together for you, with enough interactivity that you keep coming back to the same canvas for the next half hour.

When to Ask for a Canvas vs Plain Text

Not every question wants a canvas. The choice is mostly about whether you are reading the answer once or working with it.

Ask for a canvas when:

  • The answer pulls from multiple sources you would otherwise stitch by hand
  • You expect to filter, sort, or re-query the same data several times in the next hour
  • There are many similar items to scan visually -- PR clusters, eval rows, experiment runs
  • The output is a reusable mini-tool you keep open for the rest of the session

Stick with plain text or a code edit when:

  • The question has a one-shot answer that fits in a paragraph
  • The agent is editing files and the canonical artifact is the diff, not a UI
  • The work ends in a commit, a deploy, or a chat message rather than a clickable view
  • You do not need to come back to the answer after reading it

The cost of a canvas is the agent's time to assemble it and the side panel real estate. For a quick lookup, it is overkill. For a triage pass over 50 alerts, it is the difference between a 10-minute scroll and a 10-second scan. (For a different angle on instructing Cursor, see the Cursor rules tutorial on auto-generating skills from your chat history.)

Canvases vs Claude Artifacts vs ChatGPT Canvas

Canvases sit inside a small but growing category of "durable, interactive, agent-generated artifacts." The conceptually closest cousins are Claude Artifacts and ChatGPT Canvas. All three give the model a way to emit a persistent, interactive surface instead of more chat.

The differentiator Cursor leans on is the dev environment. A Cursor canvas is tied to the agentic workflow specifically, and that workflow has access to your repo, your terminal, and your source control. A canvas can pull data and context from those surfaces in a way that a chat-window artifact cannot. Claude Artifacts and ChatGPT Canvas live inside chat threads. Cursor canvases live alongside the rest of the dev environment as a peer pane in the Agents Window.

That is the comparison Cursor itself draws, and it is the one to take seriously. The artifact-vs-canvas naming differences across vendors are still settling, and the safer way to reason about them is by where the artifact lives -- inside chat or alongside the IDE -- and what it can pull from. For a broader read on how Cursor and Claude Code line up across the rest of their feature surface, the Cursor vs Claude Code comparison is the longer treatment. For the broader 2026 agentic-tools context, Claude Code Routines is the other major April 2026 launch worth pairing with this one.

Pricing and Availability

Canvases ship as part of the Cursor 3.1 release, available in both the Agents Window and the editor. The launch changelog does not mention tier gating, so canvases are available wherever Cursor 3.1 is installed. Treat that as a snapshot of the launch state rather than a pricing guarantee; the Cursor 3.1 changelog and the canvas announcement are the canonical references for what shipped on day one.

The most useful thing to do this week is to take a single recurring task -- a triage queue, an alerts dashboard, a backlog grooming pass -- and try asking for it as a canvas instead of as a text answer. You will know within one session whether canvases earn their place in the side panel. If the canvas you build on Monday is still open and useful on Friday, you have found the shape of prompt this feature was built for.

Frequently Asked Questions

Are Cursor canvases included in all plans?

Canvases ship as part of Cursor 3.1 and the launch changelog does not mention any tier gating, so they are available across plans wherever Cursor 3.1 is installed. Cursor has not promised that this stays the case forever, so check the current pricing page if you need a contractual answer for a procurement review.

Can I export a Cursor canvas?

The April 15, 2026 launch announcement does not describe a dedicated export format. Canvases render as durable artifacts in the Agents Window side panel, so for now plan to share them by screenshot or by re-running the prompt that generated them in a teammate's editor.

What is the difference between Cursor canvases and Claude artifacts?

Both are durable, interactive, agent-generated UIs. The differentiator Cursor highlights is integration with the dev environment: a canvas can pull from your repo, terminal, and source control, and it lives as a peer pane next to those tools rather than inside a chat thread. Claude Artifacts and ChatGPT Canvas live in the chat surface.

Get the weekly AI Catchup

Tools, practices, and what matters -- in your inbox every Monday.