What mid-market IT teams wish they knew before deploying AI agents

AI · developer workflow · platform engineering · GitOps · preview environments · observability · security
04 February 2026

AI agents are quickly shifting from experimentation into day-to-day operations. That shift is showing up in the data. McKinsey’s latest State of AI research highlights both broader AI use and the growing focus on “agentic AI,” even as many organizations still struggle to scale safely. 

For mid-market IT teams, agents can feel like the unlock: automate repetitive workflows, reduce backlog pressure, and deliver more output without expanding headcount.

The lesson early adopters tend to learn too late is simple: agents don’t just add productivity; they also add a new operational surface area. If governance isn’t embedded early, the risks appear after deployment, when it’s hardest to unwind them.

Here is what mid-market IT teams consistently wish they had in place before AI agents reached production.

What is an AI agent and why does it change governance requirements?

An AI agent is a system that doesn’t only generate output; it can also take actions across tools and services. In practice, that means an agent may be able to write code, query data, call APIs, trigger workflows, or modify records. Once deployed, it operates at machine speed and often across multiple systems.

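To make “take actions” concrete, here is a minimal sketch of an agent executing a plan against a set of tools. Every name in it is hypothetical, and in a real agent the plan would come from a model rather than be hard-coded; that is precisely why the execution loop, not the model, is where governance applies.

```python
# A minimal, illustrative sketch of what separates an agent from a chat
# model: it selects and executes actions. All names are hypothetical; no
# specific framework's API is implied.

def query_orders(customer_id: str) -> str:
    """Stand-in for a read-only data lookup."""
    return f"orders for {customer_id}: ['#1001', '#1002']"

def refund_order(order_id: str) -> str:
    """Stand-in for an action that modifies records."""
    return f"refund issued for {order_id}"

# The agent's action surface: every entry is something it can *do*,
# not just something it can describe.
TOOLS = {"query_orders": query_orders, "refund_order": refund_order}

def run_agent(plan: list[tuple[str, str]]) -> None:
    """Execute a plan step by step. A real agent would receive the plan
    from an LLM; here it is hard-coded for clarity."""
    for tool_name, arg in plan:
        result = TOOLS[tool_name](arg)  # machine speed, no human between steps
        print(result)

run_agent([("query_orders", "cust-42"), ("refund_order", "#1001")])
```
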
That is why agents change governance requirements. Traditional governance assumes humans can intervene. Agents reduce or remove that window.

A helpful way to frame it: copilots assist humans inside a workflow; agents become part of the workflow itself.

What do teams underestimate before deploying AI agents?

Most teams underestimate how quickly agents stop being “tools” and start becoming infrastructure.

At the beginning, an agent is often limited in scope. One team owns it. It’s used for a narrow task. Access is granted pragmatically to make it useful. But usefulness spreads fast: more teams ask for it, more workflows depend on it, more data sources are connected, and more permissions are added.

By the time an organization recognizes the agent is business-critical, it may already have broad access without clear boundaries, auditability, or ownership.

Which risks only become visible once agents run in production?

Several risks only surface after agents become part of real workflows.

  1. Access expands quietly. What started as “read-only” access grows into cross-system access as teams connect new tools and data sources. The agent becomes a convenience layer over systems that were previously separated by process.
  2. Trust becomes implicit. Teams begin treating agent outputs as stable because they “almost always work.” When an agent is wrong, it often fails in subtle ways that don’t trigger alerts until the impact is real.
  3. Accountability becomes unclear. When an agent touches multiple systems, troubleshooting becomes harder because no single person “made the decision,” yet the organization still owns the outcome.

This is where teams feel the governance gap most acutely: when they try to answer basic operational questions and realize the evidence trail is incomplete.

What’s the one thing teams regret not putting in place earlier?

The most common regret is not defining boundaries upfront.

Teams wish they had established, in a form that can be reviewed and enforced (see the sketch after this list):

  • what data agents are allowed to access
  • what actions they are permitted to take
  • where agents are allowed to operate (dev vs production)
  • what monitoring and review is required

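One lightweight way to establish those boundaries is to write them down as data that lives in version control, so they are reviewed like any other change. A minimal sketch, with hypothetical field and agent names:

```python
# Boundaries captured as reviewable data rather than wiki prose.
# The AgentPolicy shape is an illustration, not a standard.

from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPolicy:
    name: str
    data_scopes: frozenset[str]      # what data the agent may access
    allowed_actions: frozenset[str]  # what actions it may take
    environments: frozenset[str]     # where it may operate (dev vs production)
    review: str                      # what monitoring and review is required

SUPPORT_TRIAGE = AgentPolicy(
    name="support-triage",
    data_scopes=frozenset({"tickets:read"}),
    allowed_actions=frozenset({"label_ticket", "draft_reply"}),
    environments=frozenset({"dev", "staging"}),  # deliberately not production
    review="weekly sample of drafted replies",
)
```

Because the boundaries are code, tightening them later is a reviewable diff rather than a disruptive renegotiation.
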
Once agents are relied on in production, tightening controls feels disruptive. And when controls arrive only after a scare or an audit, governance becomes reactive and inconsistent.

What trade-offs do early adopters only discover after deployment?

Early adopters often discover a false trade-off between speed and safety.

When governance is missing, teams move fast initially, then slow down later as incidents force controls in a hurry. Exceptions accumulate. Rules vary by team. Operational overhead increases.

When governance is designed into workflows early, teams tend to move faster over time. Guardrails reduce uncertainty. Changes become predictable. Trust improves because behavior can be explained and reviewed.

In other words, the trade-off is not speed versus safety. It is planned structure versus reactive friction.

Why governance has to be embedded into agent workflows

Agent governance cannot rely on documentation alone. Policies that live in wikis are advisory. Agents require enforceable controls.

Effective governance shows up where agents operate (a sketch of this follows the list):

  • access is explicit and scoped
  • environments are separated clearly
  • behavior is observable
  • unsafe actions are blocked before they execute

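What “enforceable” can look like in practice is a single choke point that every agent action must pass through, so the policy is checked and logged before anything runs. A minimal sketch under that assumption (agent and action names are hypothetical):

```python
# Enforcement at the point of execution: one gateway logs each attempt and
# checks an explicit allowlist before the action runs. Names are illustrative.

import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("agent-gateway")

# Explicit, scoped access: the (agent, action) pairs that are permitted.
ALLOWED = {
    ("support-triage", "label_ticket"),
    ("support-triage", "draft_reply"),
}

class ActionBlocked(Exception):
    """Raised when an agent attempts an action outside its policy."""

def execute(agent: str, action: str, run):
    log.info("attempt agent=%s action=%s", agent, action)  # behavior is observable
    if (agent, action) not in ALLOWED:
        log.warning("blocked agent=%s action=%s", agent, action)
        raise ActionBlocked(f"{action!r} is not permitted for {agent!r}")
    return run()  # the action executes only after the check passes

execute("support-triage", "label_ticket", lambda: "labeled")  # allowed
# execute("support-triage", "delete_ticket", lambda: None)    # would raise
```
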
This is the difference between “we have a policy” and “the policy actually works.”

As AI systems become more dynamic and gain access to tools and services at runtime, new risks emerge that cannot be addressed through documentation alone. This reinforces why real-time governance needs to be designed into the architecture rather than added later.

How platforms help teams govern AI agents without slowing delivery

Governance is easier when teams build on an AI-ready application platform that makes systems predictable.

When environments, configuration, and deployments are standardized, teams can apply consistent rules without inventing bespoke controls for every agent integration. This matters most in mid-market teams, where governance needs to scale without creating new operational roles.

This is where a platform approach can help: not by “governing AI” on its own, but by making the underlying workflows governable.

What Upsun provides before you deploy AI agents

Upsun doesn’t claim to solve AI governance end-to-end. What it does provide is a foundation that makes governance easier to embed into delivery workflows.

In practice, that includes:

  • Declarative, Git-driven configuration that makes environment and service setup explicit and reviewable
  • Isolated environments and previews that support safe testing before going to production
  • Clear separation between environments and data to reduce accidental exposure
  • Observability built into the platform to understand behavior once deployed

These capabilities help IT teams design governed workflows that developers can actually follow, because they’re built into the way software is delivered, not bolted on afterward.

What to do before deploying AI agents in production

Before agents reach production, mid-market IT teams should be able to answer, plainly:

  • Where are agents used today?
  • What data can they access?
  • What actions can they take?
  • What evidence trail exists when something goes wrong? (see the sketch below)

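The last question is the one most teams cannot answer after the fact. An evidence trail does not need to be elaborate to be useful; it needs to exist before the incident. A minimal sketch, with hypothetical field names:

```python
# An evidence trail as one structured record appended per agent action.
# Field names are illustrative; a real deployment would ship these records
# to centralized, tamper-resistant logging.

import json
import time
import uuid

def audit(agent: str, action: str, inputs: dict, outcome: str,
          path: str = "agent_audit.jsonl") -> None:
    record = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "agent": agent,      # which agent acted
        "action": action,    # what it did
        "inputs": inputs,    # with what parameters
        "outcome": outcome,  # what the result was
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

audit("support-triage", "label_ticket", {"ticket": "T-901"}, "labeled: billing")
```
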
If those answers are unclear, risk already exists, even if no incident has happened yet.

Governance first, scale second

AI agents will become more common, more capable, and more autonomous. The question is not whether teams adopt them. The question is whether adoption happens deliberately or spreads without structure.

The teams that scale agents safely tend to do one thing early: they embed governance into workflows before agents become critical infrastructure. That is what keeps speed high and surprises low.

What to do next

If AI agents are already part of your workflows, the next step isn’t choosing better models; it’s ensuring your platform can support experimentation without exposing production to risk. That means predictable environments, clear boundaries, and validation before deployment.
