Internal AI Workspaces: The Ramp Glass Playbook for Company-Wide AI Adoption
Ramp's internal 'Glass' platform pushed AI adoption to 99 percent by treating the harness -- not the model -- as the bottleneck. Every employee gets a fully configured workspace via SSO, a marketplace of 350+ shared skills, persistent memory from existing systems, and scheduled automations. Here is the playbook other companies can borrow.
Most companies running AI pilots stall around 20 to 40 percent adoption. Ramp pushed past 99 percent of employees using AI daily by building an internal workspace called Glass -- and the lesson is not about a clever model choice. The lesson is that the harness, not the model, is the bottleneck. Every Ramp employee gets a fully configured AI workspace on day one: tools wired through Okta SSO, a marketplace of 350+ shared skills, persistent memory built from Slack and Notion and Linear, and cron-style automations that run overnight. This article distills that playbook into something other companies can borrow.
Key Takeaways
- Models are good enough; setup friction is the actual bottleneck
- One Okta SSO sign-in connects every internal tool
- A skills marketplace turns one person's win into the team's baseline
- Persistent memory from Slack, Notion, and Linear means agents do not start cold
- Cron-style automations let agents work overnight and post to Slack
- Build vs buy: own it if AI productivity is strategic, because vendor cycles are too slow
The Harness vs Models Thesis
Ramp's CEO Eric Glyman announced Glass in early April 2026, and the long-form companion piece by Seb Goddijn -- co-authored with Shane Buchan, Cameron Leavenworth, Calvin Kipperman, Jay Sobel, and Caroline Horn -- hit the Hacker News front page the same day. The thesis is the part most internal AI strategy decks get wrong.
Goddijn's framing, in his own words: "The primary barrier to AI adoption wasn't the models themselves, but the complexity of setting up environments." A frontier model is impressive on a benchmark and useless if the salesperson cannot connect it to Gong, the support agent cannot read a Zendesk ticket, and the finance lead cannot pull yesterday's spend. Setup friction, not capability, kept AI pilots from generalizing.
The shift is conceptual: stop treating AI as an app you launch and start treating it as a workspace your employees already live inside. That reframing is what produced the 99 percent daily-use number. Adoption follows the path of least resistance, and that path is a workspace that opens already configured.
The harness includes the model, but it also includes authentication, integrations, memory, the catalog of skills the team has authored, the scheduler that runs jobs overnight, and the multi-pane interface that lets an analyst work three threads at once. Most of that work is unsexy. All of it compounds.
SSO: Making Setup One Click Instead of a Quest
The first piece of the harness is single sign-on wired through Okta SSO. When a new Ramp employee opens Glass, they do not chase API keys, install CLIs, or read a setup guide. They sign in once. Every internal tool the company has approved is already connected: research tools, an inspect tool, command-line utilities, Slack, Notion, Linear, Gong, and Zendesk. The agent reads and writes against those systems immediately, scoped to that employee's permissions.
Compare that to the standard internal AI rollout. An enthusiastic engineer connects three tools in 20 minutes; the rest of the company hits a permission wall, files a ticket, and never comes back. By centralizing integrations behind SSO, Ramp turned a setup quest into a sign-in. The bar to first useful action drops from hours to seconds.
The architectural detail worth borrowing: integrations are owned by a platform team, not by individual employees. New tools onboard centrally, get audited once, and become available to everyone. That is the only model that scales beyond a handful of power users.
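The pattern is compact enough to sketch. The following is a hypothetical illustration of a centrally owned integration registry scoped by SSO grants -- tool names, scope strings, and the registry shape are all assumptions, not Glass internals:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a platform team registers integrations once;
# each employee's agent sees only what their SSO grants allow.

@dataclass
class Integration:
    name: str
    required_scope: str  # OAuth/SSO scope the employee must hold

@dataclass
class Workspace:
    user: str
    scopes: set = field(default_factory=set)  # granted via Okta at sign-in

# Registered centrally, audited once, available to everyone.
REGISTRY = {
    "slack":  Integration("slack",  "slack:read_write"),
    "notion": Integration("notion", "notion:read"),
    "linear": Integration("linear", "linear:read_write"),
}

def available_tools(ws: Workspace) -> list:
    """Tools the agent may call, scoped to this employee's permissions."""
    return [i.name for i in REGISTRY.values() if i.required_scope in ws.scopes]

ws = Workspace("new.hire@example.com", {"slack:read_write", "notion:read"})
print(available_tools(ws))  # -> ['slack', 'notion']
```

The point of the sketch is the ownership boundary: employees never touch tokens, and revoking a scope in the identity provider revokes the agent's access everywhere at once.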
The Skills Marketplace Model
The second piece is a marketplace called Dojo. Dojo holds 350+ skills, each a markdown file describing how to do one task well. They are stored in Git, versioned, code-reviewed, and discoverable inside Glass. A "Sensei" recommendation system surfaces relevant skills to each employee based on their role, the tools they use, and recent activity, so a new sales rep does not have to know Dojo exists to get value from it.
Goddijn's framing for why this matters: "One person's breakthrough should become everyone's baseline." Without a marketplace, every employee invents their own version of the same prompt, makes the same mistakes the previous person already solved, and the company pays the discovery cost a hundred times. With a marketplace, the cost is paid once and amortized across everyone who installs the skill.
The structural choice is the giveaway. Skills are markdown, not closed prompts buried in a vendor UI. They live in Git so they can be reviewed like code, rolled back if a regression lands, and forked when a team needs a variant. (For a complementary pattern, see how Cursor rules can be auto-generated from chat history -- skills emerge from real usage rather than being designed in a vacuum.) The marketplace is not a content library. It is a code repository with a discovery layer on top.
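Ramp has not published a Dojo skill verbatim, but given the description -- one markdown file per task, stored in Git and reviewed like code -- a skill file might look something like this (contents entirely illustrative, not an actual Dojo skill):

```markdown
# Skill: Gong call summary (illustrative example)

## When to use
The user shares a Gong call recording or transcript and wants a
structured recap before their next customer touchpoint.

## Steps
1. Pull the transcript through the Gong integration.
2. Extract objections, competitor mentions, and committed next steps.
3. Output three bullets per section, with timestamp links back to the call.

## Guardrails
- Never quote customer names into shared channels.
- Flag any pricing commitments for human review before sending.
```

Because it is plain markdown in a repository, a regression in step 2 is a `git revert`, and a team that needs a variant forks the file instead of rewriting a buried prompt.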
The cultural payoff is the second-order effect Goddijn names: "We don't believe in lowering the ceiling. We believe in raising the floor." A skills marketplace does not cap what your strongest users can do. It pulls the median user up to where the strong users already are.
Persistent Memory From Existing Systems
The third piece is persistent memory that the agent does not have to build from a cold start every session. Glass auto-builds context from where employees already do work: their team members, active projects, the Slack channels they participate in, the Notion docs they own, and the Linear tickets assigned to them. A 24-hour synthesis pipeline keeps that memory fresh as the underlying systems change.
The contrast with cold-start AI is stark. A vanilla chat session starts knowing nothing about your team, your projects, or your last quarter. You spend the first 10 minutes of every conversation pasting context. Persistent memory eliminates that loop. The agent walks in already knowing what your week looks like.
The architectural decision worth noting is the source of the memory. Ramp did not ask employees to maintain a separate "memory file." The memory is derived from systems employees use anyway, so the agent reads from systems of record directly. That is the only design that survives contact with reality. A separate memory store decays the moment people stop updating it.
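A minimal sketch of that derivation, with stubbed API clients standing in for the real Slack, Notion, and Linear integrations -- the `fetch_*` functions and the output shape are invented for illustration; Glass's actual pipeline is not public:

```python
from datetime import datetime, timezone

# Hypothetical daily synthesis pass: context is derived from systems
# of record, so no one maintains a separate memory file by hand.

def fetch_slack_channels(user: str) -> list:
    return ["#team-finance", "#launch-q2"]      # stub: would call the Slack API

def fetch_notion_docs(user: str) -> list:
    return ["Q2 planning", "Spend policy"]      # stub: would query Notion

def fetch_linear_tickets(user: str) -> list:
    return ["FIN-203: reconcile exports"]       # stub: would query Linear

def synthesize_memory(user: str) -> dict:
    """Rebuild the agent's starting context from live systems of record."""
    return {
        "user": user,
        "as_of": datetime.now(timezone.utc).isoformat(),
        "channels": fetch_slack_channels(user),
        "docs": fetch_notion_docs(user),
        "tickets": fetch_linear_tickets(user),
    }

memory = synthesize_memory("new.hire@example.com")
```

Run on a 24-hour cadence, a pass like this keeps the agent's context no staler than a day without asking anyone to update anything.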
Scheduled Automations and Headless Mode
The fourth piece is the move from chat to scheduled automation. Glass supports cron-style jobs that run on a schedule and post results into Slack. The example Ramp shared is a daily spend-anomaly report that runs at 8am, scans the prior day's transactions, flags outliers against historical patterns, and lands in a finance Slack channel before the team starts work.
This is the headless mode. You set up the task once, walk away, and the agent does the work overnight. If a step needs human approval -- a permission scope to widen, an irreversible action to confirm -- the request comes to your phone. The interaction model shifts from "I sit at the keyboard while the agent works" to "the agent works and I check in when asked."
The leverage compounds. Headless mode also unlocks workflows humans never do because they are too tedious to remember: weekly competitive scans, daily dashboard checks, end-of-day pipeline summaries. For a public-product analog, Claude Code Routines lets individual developers schedule the same kind of recurring work without building a Glass-grade harness in-house. The pattern is the same: cron the agent, ship the results to where the team already looks.
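A stripped-down sketch of that pattern -- cron the agent, ship the results to Slack. The report logic, threshold, and webhook call are illustrative assumptions, not Ramp's implementation:

```python
import json
import urllib.request

# Hypothetical headless job. In production this would be registered
# with a scheduler (e.g. cron "0 8 * * *"); fields are illustrative.

def build_report(transactions: list) -> str:
    """Flag any transaction more than 3x its vendor baseline."""
    flagged = [t for t in transactions if t["amount"] > t["baseline"] * 3]
    lines = [f"- {t['vendor']}: ${t['amount']} (baseline ${t['baseline']})"
             for t in flagged]
    return "Daily spend anomalies:\n" + ("\n".join(lines) or "none")

def post_to_slack(text: str, webhook_url: str) -> None:
    """Deliver the report where the team already looks."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire-and-forget; add retries in practice

report = build_report([
    {"vendor": "CloudCo", "amount": 900, "baseline": 100},
    {"vendor": "SaaS Inc", "amount": 50, "baseline": 60},
])
# post_to_slack(report, WEBHOOK_URL)  # invoked by the 8am schedule
```

Everything interesting happens before anyone is at a keyboard; the human's first contact with the work is the finished report in the channel.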
Skill Examples by Role (Sales, Support, Finance, Engineering)
The skills in Dojo are not abstract templates. They are role-specific workflows authored by the people who do the work. A representative cross-section of what the marketplace looks like in practice:
Sales. A Gong call analysis skill takes a recorded call, extracts objections and competitive mentions, and produces a structured summary. A competitive battlecard generator pulls public information about a named competitor, formats it against Ramp's positioning, and hands the rep a one-page document before the next call.
Support. A Zendesk investigation skill takes a ticket, auto-pulls the customer's full ticket history, checks account health signals, and surfaces the relevant context the support agent needs without manual digging. The agent does the lookup; the human does the judgment.
Finance. The spend-anomaly skill mentioned earlier runs as a scheduled job, but variants of it are also available on demand for ad hoc investigation. A finance lead can ask "show me anomalous spend in the last week" and get the same analysis the cron job produces, scoped to whatever window matters.
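One plausible way to flag outliers against historical patterns is a per-vendor z-score; this is an illustrative stand-in, not Ramp's detection logic, and real logic would handle seasonality, new vendors, and category baselines:

```python
import statistics

def anomalous(history: list, today: float, threshold: float = 3.0) -> bool:
    """Flag today's spend if it sits more than `threshold` sample
    standard deviations from the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return stdev > 0 and abs(today - mean) / stdev > threshold

history = [120.0, 110.0, 130.0, 125.0, 115.0]   # prior daily spend, one vendor
print(anomalous(history, 480.0))  # -> True  (roughly 45 deviations out)
print(anomalous(history, 128.0))  # -> False (within normal variation)
```

The same function serves both surfaces the section describes: the cron job sweeps every vendor nightly, while the ad hoc path runs it over whatever window the finance lead asks about.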
Engineering. PR-review workflows that read a diff, run static checks, and surface likely issues for the human reviewer. Repo-specific debugging skills that load the right context for a given service before the engineer starts asking questions. (A different orchestration surface for some of this work is Warp's agentic development environment, which gives engineers a terminal-native version of the multi-pane workspace pattern.)
The unifying property: every skill encodes a real workflow that one employee figured out, then turned into a reusable artifact for the rest of the team. The marketplace is an institutional memory of how to do things well.
The Six-Step Playbook for Your Company
Most of what Ramp built is replicable without their engineering org. The playbook in six steps:
1. Audit which configuration steps stop non-technical employees from using AI. Sit with three employees from non-engineering roles and watch them try to use the AI tool of your choice. Every time they hit a permission wall, an API key prompt, or a "connect to your account" detour, write it down. That list is your harness backlog.
2. Centralize tool integrations behind SSO so connection is one click, not a setup quest. A platform team owns the integrations. Individual employees never paste tokens. New tools are onboarded once, audited once, and become available to everyone. The bar to first useful action should be a sign-in, not a guide.
3. Build an internal skills/prompts marketplace, code-reviewed and versioned. Skills live in Git as markdown, get reviewed like code, and surface in the workspace through a discovery layer. Sensei-style recommendations are nice but optional; the non-negotiable is that the marketplace exists and the review loop closes.
4. Wire persistent memory from your existing systems (chat, docs, tickets) so the agent does not start cold each session. Read from systems of record. Do not ask employees to maintain a separate memory store. A daily synthesis pipeline keeps context fresh without a custom event bus.
5. Bias to learn-by-doing: ship installable wins instead of training sessions. Ramp's observation, paraphrased: the people who got the most value installed a skill on day one and got a result. They did not sit through a training session. Optimize for the moment a new user runs their first useful skill, not the moment they finish onboarding documentation.
6. Build vs buy: own it if AI productivity is strategic, because vendor cycles are too slow and the internal feedback loop into your own product is too valuable. Off-the-shelf tools cover the easy 80 percent. The remaining 20 percent is where your competitive edge lives. If AI is strategic to how your business works, the build cost pays back in iteration speed and the data flywheel between internal usage and external product.
The playbook does not require a Ramp-sized engineering team. The smallest viable version is one engineer running an SSO-backed proxy in front of an off-the-shelf agent, a Git repo of skills, and a daily cron that posts a report to Slack. Start there. Add Sensei later.
Where the Market Is Going (Anthropic Cowork, Notion Custom Agents)
Ramp's pattern is not isolated. The same architecture is showing up in shipping products from major vendors over the last few quarters.
Anthropic Cowork, which moved from preview to general availability between January and April 2026, gives individual users a desktop agent with file access, scheduled tasks, and mobile dispatch. Cowork ships with 11 open-source role-specific plugins covering sales, finance, legal, marketing, HR, engineering, design, and operations -- the same role taxonomy Glass organizes Dojo around. The convergence is not coincidence. Roles are the natural unit of organization once you accept that AI productivity is workflow-shaped, not chat-shaped.
Notion 3.3 Custom Agents, released in February 2026, lets teams define agents with specific tool access and personas inside the same workspace where their docs and databases live. The bet is that a Notion-shaped agent works because it is already inside the system of record -- the same persistent-memory thesis Glass operationalizes.
Across all three -- Glass, Cowork, Notion -- the pattern is identical: SSO-connected tools, role-specific skills or plugins, persistent context from existing systems, scheduled jobs that run without a human at the keyboard. The category is consolidating around this shape because it is what works.
The implication for any team picking a 2026 strategy: the question is no longer about model choice. The question is what your harness looks like. The model is a swappable component. The harness is the moat.
Frequently Asked Questions
What is Ramp's Glass and is it available externally?
Glass is Ramp's internal AI workspace, built only for its own employees. It is not a product you can sign up for, buy, install, or license. There is no external availability, pricing page, or waitlist. Ramp shared the architecture and lessons publicly so other companies can borrow the playbook, but the Glass software itself stays internal.
What is the difference between an AI skill and an AI agent?
A skill is a reusable, code-reviewed instruction file -- typically markdown -- that teaches an agent how to do one specific task well. An agent is the model plus its tools, memory, and runtime. Skills are the playbook; the agent is the player. One agent can load many skills based on the task at hand.
Should we build or buy our internal AI platform?
Buy if AI productivity is a supporting cost center and off-the-shelf tools cover 80 percent of your workflows. Build if AI productivity is strategic to your business, your data and tools are heavily custom, or you want the internal feedback loop to inform your own product. Ramp built because vendor cycles were too slow.