Here’s the uncomfortable truth: most companies do not have an AI problem. They have a delivery problem wearing an AI costume.
MIT’s Project NANDA research has been widely cited for a brutal headline statistic: roughly 95% of corporate generative AI pilots fail to produce measurable business impact or returns, while only about 5% break through to meaningful outcomes. (Yahoo Finance) The models are impressive. The demos are dazzling. The budgets are real. And yet the results rarely survive contact with production workflows.
If you are leading an AI initiative, that number should not depress you. It should focus you.
Because the “5%” are not necessarily the companies with the biggest GPU spend, the flashiest chatbot, or the loudest AI rebrand. The 5% are the ones that treat AI like any other production capability: versioned, tested, observable, secure, and repeatable. They build systems where AI can be evaluated against real data, in real environments, with real governance. They connect pilots to workflows, not slides. And they stop confusing experimentation with delivery.
That is where Upsun’s AI story starts.
We are not here to sell hype. We are here to help teams ship, and to get your proof of concept (PoC) reliably into production.
Our strategy is simple: focus on helping our customers in the three places where AI outcomes are won or lost.
If you want to be in the successful 5%, you do not need more AI theater. You need a better path from idea to production.
Let’s address the question we get right away.
“Where are the GPUs?”
We do not offer GPU infrastructure in Upsun today, and that is intentional.
Most companies we speak with are not training foundation models. They are not running large-scale inference fleets on their own hardware. They are building products that consume best-in-class models through APIs and services, then wrapping those capabilities with company-specific context, governance, and user experience. That is not a compromise. It is the dominant pattern.
So instead of building a GPU catalog to check a box (and waste a lot of time and resources), we put our effort where it helps the majority of teams succeed: everything around the model.
Because models are not the hard part anymore. The hard part is making AI features behave like production software.
When AI initiatives stall, it is rarely because “the model wasn’t smart enough.” More often it is because the system around the model was never designed to be versioned, tested, observable, secure, and governed in production.
In other words, teams succeed or fail where platforms either help or hurt.
Upsun’s thesis is that AI workloads look like modern applications: they span multiple services, they evolve quickly, and they need to be governed like everything else. That is exactly what a cloud application platform should excel at.
AI has a funny effect on technology stacks: it makes them more diverse, not less.
A team might ship a Node.js API, a Python retrieval service, a background worker for document processing, and a small PHP admin interface, all in the same product. Another team might have a .NET application that calls model APIs, with a Python microservice for evaluation harnesses and batch jobs. AI multiplies this “glue code,” and glue code shows up in whatever language makes sense.
That is why runtime flexibility matters.
Upsun is designed to support common and less-common runtimes across languages and frameworks, with Git-driven configuration and predictable deployment flows. That means AI teams can choose the right tool for each part of the system without having to beg an internal platform team for exceptions or wait months for a bespoke runtime.
This is also why “API-centric” matters. AI products are API products. They integrate with model providers, data sources, observability stacks, queues, and internal services. A platform that makes integrations awkward will quietly kill AI momentum.
If we had to pick one reason AI projects fail in production, it would be this:
Teams do not test AI behavior under production-like conditions.
They test prompts in a notebook. They test against a small dataset. They test with “happy path” inputs. Then they ship, cross their fingers, and hope.
That approach fails for normal software. With AI it fails faster and louder, because the edge cases are the product. Small data quirks, formatting differences, missing fields, or stale context can flip outputs. Evaluations that look great in a controlled environment can crumble when exposed to the messy reality of actual users.
So our platform story for AI starts with environments.
Upsun’s Git-driven approach enables teams to spin up isolated environments per branch, with configuration tracked and versioned. Pair that with production-like data workflows (including cloning and sanitization patterns where appropriate), and teams can validate AI features under realistic conditions before they hit users.
This is not just about correctness. It is about confidence.
AI features are inherently probabilistic. You cannot eliminate uncertainty, but you can eliminate guesswork about whether your system behaves the same way in dev, staging, and production.
That is how teams move from “this seems fine” to “we can ship this.”
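To make that concrete, here is a minimal sketch of what a pre-merge behavioral check could look like: a small script that runs the same assertions against whichever environment URL it is pointed at. The environment variable, endpoint path, and response shape below are hypothetical placeholders, not part of Upsun’s API.

```python
import os
import sys
import requests

# Hypothetical: CI injects the preview environment's URL for the branch under test.
BASE_URL = os.environ["PREVIEW_ENVIRONMENT_URL"]

# A few realistic inputs, including the messy ones that break demos.
CASES = [
    {"query": "Summarize invoice INV-2024-001", "must_mention": "total"},
    {"query": "", "must_mention": "question"},  # empty input should prompt for a question, not crash
    {"query": "résumé de la facture 42 ?", "must_mention": "facture"},  # non-English input
]

def passes(case: dict) -> bool:
    """Call the AI endpoint in this environment and apply a cheap behavioral assertion."""
    response = requests.post(
        f"{BASE_URL}/api/assistant", json={"query": case["query"]}, timeout=30
    )
    response.raise_for_status()
    answer = response.json().get("answer", "")
    return case["must_mention"].lower() in answer.lower()

if __name__ == "__main__":
    failures = [case for case in CASES if not passes(case)]
    print(f"{len(CASES) - len(failures)}/{len(CASES)} checks passed")
    sys.exit(1 if failures else 0)
```

Point the same script at dev, staging, and a production-like branch environment, and the question “does it behave the same way everywhere?” stops being guesswork.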
A lot of AI risk is not “Skynet.” It is operational: mishandled secrets, environments that are not isolated, fuzzy service boundaries, and configuration that drifts between deploys.
These are not exotic failures. They are basic delivery failures.
The best risk reduction is boring infrastructure hygiene: isolated environments, secrets management, clear service boundaries, and predictable config. If your platform makes those things easy, AI becomes safer by default.
That is the kind of “AI support” that matters for most teams. Not a GPU checkbox. A system that makes shipping AI reliable.
There is another trend that is easy to misunderstand.
AI-augmented development is not just autocomplete.
Yes, code assistants inside IDEs are useful. But the bigger shift is that development workflows are becoming agent-assisted end-to-end. People are using AI to reason about architectures, generate scaffolding, write tests, update configs, triage logs, and propose fixes.
If you want a blunt description for non-technical stakeholders, “vibe coding” captures the vibe, but not the reality. Real teams still need rigor: reviews, guardrails, reproducibility, and accountability.
So we ask a different question:
How do we make Upsun a platform that AI agents can use safely and effectively, while keeping humans primary in the loop?
Teams do not just read docs anymore. Their tools read docs.
That changes what “good documentation” means. It is no longer enough to have pages that look nice. Docs need to be structured, consistent, and machine-consumable, so assistants can retrieve the right information without hallucinating.
That is why we invest in the unglamorous details of documentation: consistent structure, precise terminology, and formats that machines can consume reliably.
These details sound small. They are not.
In an AI-augmented workflow, bad docs do not just slow humans down. Bad docs become bad outputs at scale.
As agentic workflows mature, developers will expect assistants to do more than write code. They will expect agents to understand the platform they deploy on.
That is where MCP (Model Context Protocol) becomes interesting.
With MCP-style integration, an AI assistant can retrieve authoritative platform context: configuration schemas, best practices, environment details, and operational constraints. Instead of guessing how to structure a config file or how to wire services together, the assistant can query the source of truth.
Upsun’s direction here is straightforward: give customers MCP options that reduce friction and increase correctness.
That includes ideas like exposing configuration schemas, best practices, and environment details to assistants through MCP, so they can query the source of truth instead of guessing.
The principle is more important than any single implementation: we want customers to spend less time fighting tooling and more time shipping.
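As a rough sketch of the mechanics (not a published Upsun integration), this is how an assistant-side client might pull platform context over MCP using the official Python SDK; the server command and tool name here are made up for illustration.

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical: an MCP server that exposes platform context to assistants.
server = StdioServerParameters(command="example-platform-mcp-server", args=[])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover what the server offers, then ask for authoritative context.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            result = await session.call_tool("get_config_schema", {"runtime": "python"})
            print(result.content)

asyncio.run(main())
```

The value is not the plumbing; it is that the assistant answers from the platform’s own schema and docs rather than from whatever it half-remembers.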
If you are building AI features that go beyond a chatbot, you end up needing retrieval. You need a place to store embeddings, run similarity search, and attach real-world metadata and filters so your application can fetch the right context at the right time. In other words, you need a vector database or a vector-capable data layer.
Upsun supports that workflow in a pragmatic way.
If you want a dedicated vector store, you can run Chroma as part of a multi-application project on Upsun. Chroma is a popular open-source vector database designed for AI applications that need to store, query, and manage embeddings efficiently, and you can configure it as a Python application with persistent storage across deployments.
You can also run Qdrant, which is a vector similarity search engine and vector database designed for semantic matching and filtering-heavy use cases. Again, Upsun supports running it as a standalone application in a multi-application project, keeping it isolated, configurable, and persistent across deploys.
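As an illustration, application code talking to a Chroma service running elsewhere in the same project might look like the sketch below; the hostname and port are placeholders for whatever relationship you configure, not Upsun defaults.

```python
import chromadb

# Placeholder host/port: point this at the Chroma app in your project.
client = chromadb.HttpClient(host="chroma.internal", port=8000)

# Collections hold embeddings plus the documents and metadata behind them.
collection = client.get_or_create_collection("product_docs")

collection.add(
    ids=["doc-42"],
    documents=["Upsun supports running Chroma as its own app in a multi-app project."],
    metadatas=[{"source": "docs", "lang": "en"}],
)

# Similarity search with a metadata filter, so the app fetches the right context.
results = collection.query(
    query_texts=["How do I run a vector database on Upsun?"],
    n_results=3,
    where={"source": "docs"},
)
print(results["documents"][0])
```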
This matters for AI-augmented development because retrieval is not something you validate by reading code. You validate it empirically: does retrieval surface the right context, does quality hold up against realistic data volumes, and do changes to ingestion or embeddings help or hurt?
Branch-based environments make these questions testable. Your AI agent can build an environment from a branch, run ingestion against a cloned dataset, evaluate retrieval quality, and give you evidence, not vibes. That is the difference between “demo ready” and “production ready.”
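For example, a retrieval check in such an environment can be as simple as measuring how often the expected document appears in the top results. The labeled queries and document IDs below are hypothetical:

```python
import chromadb

# Hypothetical labeled set: query -> ID of the chunk that should be retrieved.
LABELED_QUERIES = {
    "How do I enable a PostgreSQL extension?": "doc-postgres-extensions",
    "How do I add a second runtime to my app?": "doc-composable-image",
}

def hit_rate_at_k(collection, labeled: dict, k: int = 5) -> float:
    """Fraction of queries whose expected document shows up in the top k results."""
    hits = 0
    for query, expected_id in labeled.items():
        result = collection.query(query_texts=[query], n_results=k)
        if expected_id in result["ids"][0]:
            hits += 1
    return hits / len(labeled)

if __name__ == "__main__":
    client = chromadb.HttpClient(host="chroma.internal", port=8000)  # placeholder host/port
    docs = client.get_or_create_collection("product_docs")
    print(f"hit rate @5: {hit_rate_at_k(docs, LABELED_QUERIES):.2f}")
```

Run that after ingestion in a branch environment and compare the number to main before you merge.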
AI workflows also tend to break the “one runtime per app” assumption.
A very common pattern looks like this: a Node.js API in front, a Python worker behind it for ingestion, embeddings, and evaluation, and the supporting tools each of them needs, all shipped as one application.
Upsun’s composable image is built specifically for this reality. It enables you to install several runtimes and tools in your application container, and it is built on Nix, which means you can pull from a very large package ecosystem and keep builds deterministic and reproducible.
In practice, that means you can declare multiple runtimes for a single application container in configuration, so your Node API and Python worker can coexist without inventing a fragile build process from scratch.
Because it is configuration-driven, it is also AI-friendly. Your assistant can propose changes to the exact config that defines how your environment is built, not just code that assumes the runtime magically exists.
Upsun also supports the practical details teams always get stuck on: pinning versions, keeping builds reproducible, and persisting data across deployments.
And if your vector approach leans on Postgres for parts of the stack, Upsun lets you enable PostgreSQL extensions through configuration rather than manual ops steps: extensions are declared under configuration.extensions in .upsun/config.yaml and must come from the supported list.
Put these pieces together: flexible runtimes, branch-based environments, realistic data, and a retrieval layer you can actually evaluate. The philosophy becomes clear.
That is AI-augmented development the way it should be: less “look what the model wrote,” more “here is the environment, the data, the retrieval layer, and the proof that it works.”
The third part of our AI story is internal, but it shows up in the customer experience.
We are using AI in Upsun, but we are selective. We are not interested in “AI for AI’s sake.” We do not want to ship a generic chat widget, rename the company, and call it a roadmap.
We want to apply AI where it removes real friction.
And we want to do it in a way that respects a hard reality: most AI pilots fail because they never connect to the workflow. (Computerworld)
So our product AI strategy starts with workflow blockers.
One of the biggest onboarding cliffs in modern platforms is configuration.
New projects often stall at the same point: the developer has code, but needs the right platform configuration to deploy it correctly. They have to choose runtime settings, define services, wire routes, set build steps, handle environment variables, and more.
That is exactly the kind of task AI is good at, if you constrain it properly.
Our first step has been using AI to help customers create config files. That does not mean “free-form prompt and hope.” It means guided generation, grounded in the platform’s schema, with validation and human review.
This is a perfect example of “use AI wisely.”
You do not need a chatbot for that. You need an assistant that understands your platform’s rules.
Config generation taught us something important:
AI is most useful when you pair it with constraints, context, and a tight feedback loop.
When you give the model structure (schemas), authoritative context (docs), and validation (CI checks or platform validation), you get outputs that are dramatically more reliable than “prompt engineering” alone.
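A minimal sketch of that loop, assuming a JSON-schema-style validator and a placeholder model call (neither the schema fragment nor generate_config reflects Upsun’s actual implementation):

```python
import jsonschema  # pip install jsonschema

# Illustrative schema fragment; a real platform schema would be far richer.
CONFIG_SCHEMA = {
    "type": "object",
    "required": ["applications"],
    "properties": {
        "applications": {"type": "object"},
        "services": {"type": "object"},
        "routes": {"type": "object"},
    },
    "additionalProperties": False,
}

def generate_config(prompt: str, feedback: str = "") -> dict:
    """Placeholder for a model call that returns a candidate config as a dict."""
    raise NotImplementedError("call your model provider of choice here")

def generate_validated_config(prompt: str, max_attempts: int = 3) -> dict:
    """Generate, validate against the schema, and feed errors back to the model."""
    feedback = ""
    for _ in range(max_attempts):
        candidate = generate_config(prompt, feedback)
        try:
            jsonschema.validate(candidate, CONFIG_SCHEMA)
            return candidate  # valid: hand off to human review and CI
        except jsonschema.ValidationError as err:
            feedback = f"Previous attempt was invalid: {err.message}"
    raise RuntimeError("could not produce a schema-valid config; escalate to a human")
```

The model drafts; the schema and the reviewer decide.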
This lesson scales beyond config.
It applies to log triage, debugging suggestions, documentation answers, and configuration changes alike.
In other words, the best product AI features look less like conversation and more like automation with intelligence.
Once you can generate config safely, the next logical move is to help customers reason about what happens after deployment.
That is where agentic capabilities can bring real value, especially when paired with observability.
Imagine an assistant that can correlate a failed deploy with the relevant logs and metrics, suggest a likely cause, propose a fix, and verify the result in an isolated environment.
That is the direction we are moving toward: AI that helps customers get from “something broke” to “it is fixed and verified,” faster.
Again, the point is not to replace engineers. The point is to remove the repetitive investigation work that drains teams and slows delivery.
The MIT “GenAI Divide” framing resonates because it captures what many leaders feel: adoption is high, transformation is low. (Yahoo Finance)
That gap is not closed by buying more tools. It is closed by building better systems.
So when Upsun uses AI inside the product, we treat it like any other capability: we ask whether it is versioned, tested, observable, and connected to a real workflow.
If the answer is no, it is probably a demo, not a product feature.
We are building a platform where AI work is not a special project, but a normal, repeatable, production-grade workflow.
That is how customers stop living in pilot purgatory.
And it is how teams start behaving like the 5%: not by chasing novelty, but by mastering execution.
Because the real competitive advantage in the AI era is not access to models. It is the ability to ship, learn, and improve faster than everyone else, without breaking trust.
Upsun exists to make that boring, powerful loop easier.
No gimmicks. No rebrand theater. Just a platform that helps you ship.
If you are building AI features and you want them to survive beyond the demo, treat them like production software: give them real environments, real data, real observability, and a clear path from proof of concept to production.