
Years back, I sat in on a platform evaluation where the customer spent forty-five minutes of the meeting on one thing: their custom PHP content management system.
They had opinions about the CMS. Strong opinions. They had benchmarks, a migration plan, a proof of concept. They had a diagram. They had questions about the deployment pipeline for this CMS that were, for a single application, more thoroughly considered than most organizations' entire infrastructure strategies.
Later in the meeting, somebody asked, almost in passing, about the rest of their stack.
Come to find out, they had three hundred other applications. On-prem, AWS, Rackspace, some of them running on servers nobody had logged into in months. No version control on most of them. Production hotfixes on a weekly basis. A wiki page somewhere that purported to list all the environments and had last been edited in 2018.
The evaluation we were in the middle of was about the one CMS. The elephant in the next four rooms was the other three hundred apps.
I've sat in enough of those meetings to notice a pattern. The questions that decide whether a platform choice ages well are rarely the questions that show up in the evaluation. Here are five that belong in your next one.
Not "does the platform support this framework." Not "does it have an integration for your database." The specific question: if you drew a map of your applications, how much of that map could be managed by the platform you're evaluating, and how much is integrated through marketplace connectors, wired up by your DevOps team, or sitting in a cloud account the platform doesn't see?
Most platforms cover a recognizable slice. The slice is usually the application runtime, the deployment pipeline, and a managed compute layer. Everything else lives somewhere else: databases, queues, workers, background jobs, services in languages the platform doesn't run.
The percentage of the application on the platform is the percentage of the application that actually gets the platform's benefits. The rest is still yours to run or manage.
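If you want the arithmetic rather than the impression, the tally is small enough to script. A minimal sketch in Python, assuming a hand-built inventory; the field names and entries are illustrative, not output from any real tool:

```python
from collections import Counter

# Hypothetical inventory: one entry per application, tagged with whether
# the platform under evaluation would manage it. In practice this comes
# from your CMDB, your cloud bills, or the spreadsheet nobody admits to.
inventory = [
    {"name": "marketing-cms", "managed_by_platform": True},
    {"name": "billing-worker", "managed_by_platform": False},
    {"name": "legacy-reports", "managed_by_platform": False},
    # ...the other ~300 entries
]

counts = Counter(app["managed_by_platform"] for app in inventory)
covered, total = counts[True], len(inventory)

print(f"{covered}/{total} apps on the platform "
      f"({covered / total:.0%} of the map gets the platform's benefits)")
```

The script isn't the point. The point is that the number exists, and somebody in the evaluation should know it.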
The second question is about preview environments, the load-bearing part of modern developer experience. If they work, bugs die in review. If they don't, bugs die in front of customers.
The test isn't "does the preview URL load." Every competent platform gets that right. The test is whether the services and data behind the preview match production well enough that a reviewer can answer "does this work?" without guessing. For most teams on most platforms today, the honest answer is "the code does, the data doesn't, and good luck spotting the difference."
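One way to stop guessing is a small drift check between the preview's data and a production-like snapshot. A sketch, assuming Postgres on both sides; the DSNs, the table list, and the 50% tolerance are all hypothetical placeholders, not anything a platform provides:

```python
import psycopg2  # third-party driver: pip install psycopg2-binary

# Hypothetical connection strings; substitute your own.
PREVIEW_DSN = "postgresql://preview-db.internal/app"
SNAPSHOT_DSN = "postgresql://prod-snapshot.internal/app"

# Illustrative table names: pick the tables a reviewer actually
# exercises when they click through a preview.
TABLES = ["users", "orders", "invoices"]

def row_counts(dsn):
    """Row counts per table: a cheap proxy for 'does the data match?'"""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            counts = {}
            for table in TABLES:
                # Table names come from the fixed list above, not user
                # input, so string formatting is safe here.
                cur.execute(f"SELECT count(*) FROM {table}")
                counts[table] = cur.fetchone()[0]
            return counts

preview, snapshot = row_counts(PREVIEW_DSN), row_counts(SNAPSHOT_DSN)
for table in TABLES:
    ratio = preview[table] / max(snapshot[table], 1)
    note = "" if 0.5 <= ratio <= 1.5 else "  <- reviewer is guessing here"
    print(f"{table:10} preview={preview[table]:>8} "
          f"snapshot={snapshot[table]:>8}{note}")
```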
The third question: what happens when somebody asks where, or on whose cloud, you run? Not today, maybe. At some point, if your roadmap includes regulated customers, enterprise procurement, sovereign-region requirements, or a CFO who wants cloud spend to count against a committed-use agreement, somebody will ask: "Can you run in eu-west-1?" "Can you not use AWS, because our biggest customer competes with Amazon?" "Can we pay for this through our Azure marketplace commitment?"
If the answer is "we'd have to replatform," the next conversation is about whether the deal is worth the quarter-long migration. That's a conversation better had before the deal is on the table than during.
The fourth question is about compliance, because certifications are scope statements. The question isn't "do we have SOC 2." The question is which controls apply to which systems, and where the boundary runs.
Platforms govern what they manage. For most platforms, the governance boundary is the managed compute layer. If your application extends beyond that layer, and most do, the parts outside the boundary are governed by whatever else the team has assembled, audited separately, or quietly not audited at all.
The audit scope should match the architecture diagram. If it doesn't, the gap between them is where the audit gets expensive.
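A crude but honest version of that comparison is a set difference between the systems the audit report names and the systems the architecture diagram shows. Both lists below are invented for illustration:

```python
# Hypothetical lists: what the auditors looked at vs. what actually runs.
audit_scope = {"managed-compute", "deploy-pipeline", "app-runtime"}
architecture = {"managed-compute", "deploy-pipeline", "app-runtime",
                "postgres-cluster", "redis-queue", "ml-batch-jobs",
                "legacy-cms-vm"}

# Everything in the diagram but not in the audit is the expensive gap.
ungoverned = architecture - audit_scope
if ungoverned:
    print("Outside the audit boundary:")
    for system in sorted(ungoverned):
        print(f"  - {system}")
```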
The fifth question is the cultural one. It's also the hiring question, the onboarding question, and the retention question, in that order.
The frontend team almost always has a modern workflow. The backend team often doesn't. The data team usually doesn't. The ML team definitely doesn't. If a new engineer has to learn three different deployment models to ship their first feature, the platform choice isn't just a technical decision. It's shaping the developer experience of everyone you're going to hire for the next three years.
One workflow for every runtime is a team decision as much as a technical one.
For each of the five, there's a simple test. Can you answer the question clearly, without an "it depends," in under thirty seconds?
If yes, you know where you stand. If no, the question is telling you where the work is. That's not a crisis. It's just the scope of your next internal conversation, before the platform choice hardens into something harder to change.
The customer evaluating that CMS, the one with the three hundred other apps, did not work through these five questions in the meeting I was in. They worked through them later, reluctantly, after somebody on their own team pushed back on the scope. The CMS got a good platform. The three hundred apps got a multi-year project to migrate off the weekly hotfixes and the stale wiki. The CMS decision was fine. It just wasn't the decision they actually needed to be making.
Most platform evaluations are about the one app in the room. The five questions are about the other three hundred.
If this resonated, the next piece in this series goes deep on question two: why preview environments that look right often aren't, and what byte-for-byte cloning actually changes about the review process.