Agents are already here – and they don’t need “agency” to be dangerous
Hello. Your friendly Babbage is back with some observations on AI agents, along with related concerns about their safety and security in the context of something called agency.
Our motivation in this short article is that every conversation in cybersecurity circles right now seems to orbit around one word: AI agents (well – OK, that’s two words). Let’s get started.
Agents and Agency
It was not that long ago that futurist entrepreneurs were the ones proclaiming that AI agents would soon demand “agency,” claiming that they’d replace humans, make their own decisions, and require new forms of security. Back then, CISOs mostly rolled their eyes. In private meetings, the typical response was: “That’s not my problem – yet.”
By the way, for an AI agent to have agency means that it can independently pursue goals, make its own decisions, and adapt its actions to a dynamic environment without constant human oversight. This capability distinguishes it from less advanced AI that is simply reactive or rule-based. Also, agency is not the same as sentience or consciousness.
Fast forward to today, and those same CISOs are now asking the same types of questions that the futurists once posed. “Did you hear?” they ask. “Agents will have agency, replace humans, and need security.” Somewhere along the way, the futurist message went mainstream, reaching all the way into the boardroom.
The Agents We Have
Here’s the reality: the first wave of enterprise AI agents isn’t theoretical but is already embedded in the tools we use every day. Salesforce, ServiceNow, and Microsoft, for example, all now include AI agents as part of their core platforms. They arrived quietly, tucked into updates, with friendly names like “Copilot” or “Einstein.”
And just like the graphical user interface (GUI) revolution decades ago, turning these agents off will be about as practical as asking for your enterprise GUI to be removed. They’re the new interface. Every click, search, and support ticket will increasingly be routed through an agent that interprets intent, executes workflows, and learns from context.
This isn’t a bad thing, by the way – and we do not see such integration as dangerous. Rather, such agentic assistance probably improves your experience with an existing tool. Google’s AI Overviews feature, visible with most search queries, is a good example. You might already have come to rely on it.
The Next Wave: Business Process Automation
The next generation of agentic support won’t come from product updates, however, but from your own teams. Specifically, we see business process automation (BPA) agents, built by internal IT teams or systems integrators, soon beginning to extend into unique enterprise workflows.
At first, this whole thing will seem pretty helpful and largely harmless. These BPA agents will be used to automate approvals, scheduling, or data lookups. These are menial, repetitive tasks, and having them covered by an agent will be a welcome improvement. But these agents can also create real trouble, especially in cybersecurity.
Let’s consider a simple permissions misalignment. An agent might be configured with the same access as its human sponsor. That seems perfectly logical, until Bob starts using Alice’s agent. Even without malicious intent, the wrong set of inherited permissions can expose sensitive data or trigger unintended actions.
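To make that concrete, here is a minimal sketch (in Python, with purely hypothetical names, permissions, and roles, not taken from any specific platform) of a BPA agent that authorizes actions against its configured sponsor rather than the person actually invoking it:

```python
# Hypothetical sketch only: names, permissions, and roles are illustrative,
# not taken from any specific platform.

SPONSOR_PERMISSIONS = {
    "alice": {"read_customer_pii", "approve_expense", "read_reports"},
    "bob": {"read_reports"},
}


class BpaAgent:
    def __init__(self, sponsor: str) -> None:
        # The agent is configured once, with its sponsor's identity baked in.
        self.sponsor = sponsor

    def run(self, requested_by: str, action: str) -> str:
        # The flaw: authorization is checked against the sponsor,
        # not against the person actually invoking the agent.
        if action in SPONSOR_PERMISSIONS.get(self.sponsor, set()):
            return f"{action}: executed on behalf of {requested_by}"
        return f"{action}: denied"


agent = BpaAgent(sponsor="alice")
# Bob uses Alice's agent and silently inherits her broader access.
print(agent.run(requested_by="bob", action="read_customer_pii"))
# -> read_customer_pii: executed on behalf of bob
```

The straightforward mitigation is to evaluate every action against the identity of whoever actually triggered the agent, not the identity that configured it.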
Knowledge Without Nerves
The deeper challenge, of course, isn’t access, but rather behavior. Agents are not human, so they will not feel any hesitation, anxiety, or guilt about a given action. A human might pause before calling a sensitive API or think twice before sharing customer data. An agent, given the same access, might execute without such pause if it believes the action achieves its goal.
Worse still, as these systems become goal-seeking (that is, trained to “get it done” rather than follow strict rules), they might begin to explore surprising, even unsafe, paths to success. When agents have both broader knowledge and the same (or greater) permissions as their human counterparts, that could be a volatile mix.
What CISOs Should Do Now
Our view at Ballistic Ventures is that enterprises should not wait to start managing this. The governance principles that work for AI development in general, including transparency, accountability, and control, will apply here as well. To that end, we believe that every organization should have the following (and you should do a self-check):
- A clear governance framework for AI and agent development
- Policies defining safe use, permissible access, and data boundaries
- Visibility into which agents exist, who owns them, and what they can do
- Guardrails and enforcement mechanisms that can pause, override, or revoke agent actions if something looks wrong (see the sketch after this list)
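On that last point, here is a minimal sketch of what such an enforcement layer might look like. It is purely illustrative: the action names, risk tiers, and revocation list are assumptions, not a reference to any particular product.

```python
# Hypothetical sketch only: action names, risk tiers, and the revocation
# list are illustrative assumptions, not a reference to any product.
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"                  # runs immediately
    PAUSE_FOR_APPROVAL = "pause"     # waits for a human sign-off
    DENY = "deny"                    # overridden or revoked outright


BLOCKED_ACTIONS = {"delete_backups", "disable_logging"}
HIGH_RISK_ACTIONS = {"export_customer_data", "change_permissions"}


def evaluate(agent_id: str, action: str, revoked_agents: set) -> Verdict:
    """Decide whether an agent action runs, pauses for approval, or is refused."""
    if agent_id in revoked_agents:
        return Verdict.DENY                 # revoke: the whole agent is pulled
    if action in BLOCKED_ACTIONS:
        return Verdict.DENY                 # override: never allowed
    if action in HIGH_RISK_ACTIONS:
        return Verdict.PAUSE_FOR_APPROVAL   # pause: a human signs off first
    return Verdict.ALLOW


revoked = {"old-finance-bot"}
print(evaluate("expense-agent", "export_customer_data", revoked))  # PAUSE_FOR_APPROVAL
print(evaluate("old-finance-bot", "read_reports", revoked))        # DENY
```

The design choice that matters is that every agent action passes through a checkpoint that can say “no” or “not yet,” independent of the agent’s own reasoning.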
The irony is that we’ve all been preparing for autonomous AI for years, while the real challenge is already sitting inside our existing software stack. The agents are here, and they don’t need full agency to make mistakes or to create security issues.
So, before the next update quietly installs another helpful assistant across your enterprise, make sure you have the governance to keep it truly helpful. And we will be here at Ballistic to help make sure you have the right technology and controls from the best startups for reliable agentic AI behavior.
As always, your friendly Babbage would like to hear what you think. I look forward to hearing from you (and not from your AI agent).
Check out more from Babbage:
- Businesses are concerned with AI. Is this something new?
- Startup ideas and the importance of timing
- How will VCs manage dry powder this year?
- Pausing the AI arms race
- Planning for successful startup exits
- So, it’s your second startup
- Thoughts on managing teams
- Term sheets: 3 practical tips for founders
- Ideas for startups
- Introducing Babbage
