
Key takeaway: Granting AI agents unrestricted access to cloud infrastructure is an unacceptable security risk. Upsun provides a "zero-trust" framework by utilizing isolated, production-perfect preview environments that allow AI to be productive without the risk of a hallucinated production outage.
TL;DR: The end of the "root access" LLM
In 2026, the primary hurdle to AI adoption is trust. You wouldn't give a junior developer root access on day one; you would give them an isolated environment and a senior engineer to review their Pull Requests. Yet, many teams are handing over production-level API tokens to LLMs that are statistically guaranteed to hallucinate.
This isn't just a security nightmare; it’s a reliability one. An agent doesn't need to be malicious to be dangerous; it just needs to be wrong about a resource limit or a service binding.
Key takeaway: Infrastructure for the agentic era must be designed for graduated trust, where agents only earn the right to modify production state after proving logic in a version-controlled sandbox.
On Upsun, we treat governance as code. By providing a platform that handles container orchestration and isolation, we provide the "predictable world" AI agents need to be successful without infrastructure drag.
Your entire topology lives in one version-controlled file (.upsun/config.yaml), turning probabilistic guesses into deterministic actions based on your real environment.
Key takeaway: AI agents must prove their logic in a byte-level clone of production before they are ever granted permission to touch the live environment.
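As a sketch of what "governance as code" can look like, here is an illustrative unified configuration. The application name, runtime, and service versions below are assumptions for the example, not defaults; consult the Upsun configuration reference for the exact keys your stack needs.

```yaml
# .upsun/config.yaml — illustrative sketch, not a drop-in config.
applications:
  app:
    type: "nodejs:20"              # pinned runtime: nothing for the agent to guess
    relationships:
      database: "db:postgresql"    # explicit service binding the agent can read

services:
  db:
    type: "postgresql:15"          # pinned service version, reviewed like any code

routes:
  "https://{default}/":
    type: upstream
    upstream: "app:http"
```

Because every binding and version is declared here, an agent reading this file reasons about the environment you actually run, and any change it proposes arrives as a reviewable diff.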
The real power of Upsun for AI-enabled development is the ability to validate fixes safely. Production-perfect preview environments give the agent a secure validation loop: it proposes a change on a branch, exercises it against an isolated clone, and only after human review is the change promoted to production.
Key takeaway: The goal of "zero-trust" isn't to slow down development; it's to make high-velocity innovation sustainable.
In the "vibe coding" era, speed often comes at the expense of governance. Upsun balances AI autonomy with human decision-making by moving governance into the platform layer.
Giving an AI a "blank check" to your cloud account is a relic of early-stage AI hype. For the enterprise, the path forward is deterministic governance. By hosting your AI context on Upsun, you ensure that your agents are "infrastructure literate" but strictly governed within a secure sandbox.
The question isn't whether you can trust the AI. It's whether you can trust the platform that isolates it.
Does the AI ever see my actual production data?
Through Upsun’s production-perfect clones, an agent can interact with a replica of your production data in an isolated sandbox. This allows for "production-accurate" validation without any risk to your live site.
How do we prevent "Shadow AI" infrastructure?
By defining your AI stack, including specific model versions and service relationships, in the unified configuration file, you treat the AI infrastructure as part of the application logic. This ensures every interaction inherits the same version-controlled security guardrails.
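One way to sketch this is to pin the AI stack alongside the application in the same version-controlled file. The variable names and model identifier below are illustrative assumptions, not Upsun defaults:

```yaml
# Illustrative only: the AI stack declared next to the app it serves.
applications:
  app:
    type: "python:3.12"
    variables:
      env:
        LLM_MODEL: "gpt-4.1"       # assumed name; model upgrades go through code review
        LLM_MAX_TOKENS: "2048"     # limits live in Git, not in someone's shell history
```

With the model version and limits committed, there is no "Shadow AI" configuration living outside the repository: every change to the stack is a diff someone approved.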
Does running these isolated environments increase cloud costs?
Yes, every environment is a billable resource. However, Upsun allows you to define a lower resource profile for validation, and Git-driven integrations can automatically tear down these environments the moment a PR is merged to prevent "staging waste".
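A hedged sketch of what a leaner validation footprint can look like; the profile key and value below are assumptions, and the exact resource controls depend on your plan and Upsun version:

```yaml
# Illustrative sketch — verify the exact resource keys against the Upsun docs.
applications:
  app:
    type: "python:3.12"
    container_profile: BALANCED    # assumption: a modest profile for preview work
# Automatic teardown after merge is typically configured on the Git
# integration itself, not in this file.
```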
What role does the Upsun MCP Server play in this?
The Upsun MCP Server acts as the authoritative, read-only bridge between the LLM and your environment. It lets the agent "read" the configuration and services via a secure API without requiring root access to your cloud provider's console.