Artificial intelligence has changed the nature of surveillance. Modern video analytics can identify behaviors, correlate signals across sensors, and surface potential threats with a speed and consistency that was not possible even a few years ago. Detection, once the limiting factor in security operations, is no longer the primary challenge.
What has replaced it is a harder, more consequential question: once AI detects something, who is responsible for deciding what happens next?
That question sat at the center of a recent Security Info Watch webinar, Surveillance Gets Smarter: The AI Analytics Revolution in Real-Time Threat Detection, which brought together Alex Vourkoutiotis, Chief Technology Officer at ECAM, Antoinette King of Credo Cyber Consulting, Kasia Hanson of KFactor Global Security Advisory, Jody Russell of Ambient.ai, and Mike Arnold of Acoem. While the panelists represented different parts of the security ecosystem, their perspectives converged on a shared realization: AI has accelerated detection faster than most organizations can operationalize response.
As AI becomes more capable, response is where security outcomes are now won or lost.
Detection Has Outpaced Response
Throughout the discussion, panelists described a familiar pattern emerging across deployments. As AI analytics improve, organizations gain visibility into activity that was previously missed or ignored. Jody Russell spoke to how modern computer vision models do not simply reduce false alarms; they surface meaningful events that legacy systems never detected. From a technology standpoint, this is a success. From an operational standpoint, it can be destabilizing.
Mike Arnold expanded on this challenge from a multi-sensor perspective. As organizations add acoustic detection and other sensing technologies to video, situational awareness improves, but so does signal volume. Each new modality adds context, but also complexity. Without orchestration, more data does not necessarily lead to better decisions.
Kasia Hanson described what happens next for many clients. Improved detection quietly shifts responsibility downstream. Security teams suddenly need more people to review alerts, clearer escalation protocols, and tighter operational discipline just to keep up with what AI is now revealing. AI succeeds technically before organizations are ready structurally.
Alex Vourkoutiotis framed this dynamic bluntly. “With agentic now, we’re finding that we’re able to find a lot more needles and we’re able to make that haystack a whole lot smaller,” he said. “So your organizations are finding that the agent counts that they currently have would typically need to increase to be able to vet the amount of information that agentic AI is presenting to them.”
Better detection does not eliminate work. It redistributes it. And unless response is designed alongside detection, that redistribution often lands squarely on the client.
Why Human-in-the-Loop Is an Operational Requirement
This reality is why every panelist, from technologists to consultants, emphasized the importance of keeping humans involved. Human-in-the-loop was not presented as a philosophical preference, but as an operational necessity. Antoinette King spoke to the accountability implications of automated systems influencing decisions in real-world environments, underscoring that AI outputs require human interpretation to ensure appropriate action.
Vourkoutiotis brought that idea into operational focus. “Typically in the security industry, when we’re talking human-in-the-loop for the application of AI, we’re talking about the monitoring operator,” he said. “That’s the last kind of individual that’s going to have a touch point with that.”
That human presence exists because AI outputs are not binary. They are probabilistic, contextual, and often uncertain. “So the human agent as part of that loop is important to verify when you have low risk stratification as an output from AI that needs human attention,” Vourkoutiotis explained.
In practice, human-in-the-loop is not about mistrusting AI. It is about managing ambiguity before it turns into error. As AI systems surface more edge cases and gray areas, the need for human judgment increases, not decreases.
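To make that pattern concrete, here is a minimal sketch of confidence-based routing, where ambiguous detections go to a monitoring operator rather than triggering automated action. It is purely illustrative and not a description of ECAM's system; the detection fields, thresholds, and queue names are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A single AI-generated alert with a model confidence score (0.0-1.0)."""
    camera_id: str
    event_type: str       # e.g. "perimeter_breach", "loitering"
    confidence: float     # the model's probability estimate for the event

# Hypothetical thresholds -- real values would be tuned per site and per event type.
AUTO_DISMISS_BELOW = 0.20
AUTO_ESCALATE_ABOVE = 0.95

def route(detection: Detection) -> str:
    """Decide where a detection goes: logged, queued for a human, or escalated.

    The middle band is the important part: ambiguous detections are never
    acted on automatically -- they are queued for an operator to verify.
    """
    if detection.confidence < AUTO_DISMISS_BELOW:
        return "log_only"                           # likely noise; keep for audit, take no action
    if detection.confidence > AUTO_ESCALATE_ABOVE:
        return "operator_confirms_then_dispatch"    # high confidence still gets a human touch point
    return "operator_review_queue"                  # uncertain: a human resolves the ambiguity

# Example: a mid-confidence event lands with a person, not an automated response.
print(route(Detection(camera_id="cam-12", event_type="loitering", confidence=0.62)))
```

The specific numbers matter less than the shape of the logic: as models surface more gray-area events, the middle band is where human judgment does its work.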
Agentic AI Concentrates Responsibility
That tension becomes even more pronounced as AI moves toward agentic behavior. As the panel discussed systems capable of recommending or initiating actions rather than simply alerting, the stakes of response rose sharply. “The agentic system has a lot of responsibility if it’s going to automate part or all of that process,” Vourkoutiotis said.
Jody Russell noted that as AI begins to operate closer to action, organizations must be extremely clear about escalation logic and human override. Automation can accelerate good decisions, but it can just as easily accelerate the wrong ones if response pathways are not tightly controlled. Antoinette King echoed this concern, pointing out that autonomy without clearly defined responsibility creates risk that organizations are often unprepared to absorb.
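One way to picture that kind of control is a sketch in which any agent-proposed action above a defined severity is held until a human explicitly approves it. The action names, severity levels, and approval mechanism below are illustrative assumptions, not a description of any panelist's product.

```python
from enum import IntEnum

class Severity(IntEnum):
    NOTIFY = 1        # send an alert to the operator console
    LOCKDOWN = 2      # trigger access-control changes
    DISPATCH = 3      # request a guard or law-enforcement response

# Hypothetical policy: anything at or above this severity needs a human override point.
HUMAN_APPROVAL_AT = Severity.LOCKDOWN

def execute_action(action: str, severity: Severity, human_approved: bool) -> str:
    """Run an agent-proposed action only if the escalation policy allows it.

    Low-severity actions may proceed automatically; higher-severity actions
    are held until a person explicitly approves them.
    """
    if severity >= HUMAN_APPROVAL_AT and not human_approved:
        return f"HELD: '{action}' awaiting human approval (severity {severity.name})"
    return f"EXECUTED: '{action}' (severity {severity.name})"

# The agent can notify on its own, but cannot dispatch without a person in the loop.
print(execute_action("alert operator to perimeter breach", Severity.NOTIFY, human_approved=False))
print(execute_action("dispatch guard to gate 3", Severity.DISPATCH, human_approved=False))
```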
Vourkoutiotis connected these concerns directly to training and operational oversight. “So we put a ton of effort into training of the agentic to make sure that it is making quality decisions, human-based decisions,” he said.
Agentic AI does not remove responsibility. It concentrates it. As autonomy increases, the cost of poor response rises alongside it.
Hybrid Security Only Works When Response Is Designed In
This is where much of the industry’s conversation around hybrid security becomes incomplete. Hybrid models, combining AI, cameras, and human guards, were widely discussed during the webinar as the practical path forward in an environment of constrained budgets and expanding risk surfaces. Vourkoutiotis described how this plays out operationally. “So we do find, in fact, that we do a hybrid solution where we’re augmenting restrictive budgets to provide much more robust security coverage,” he said. “Where you might only be able to have one or two guards at a large facility, cameras can augment that.”
But hybrid security introduces a critical dependency. “The issue becomes twofold,” Vourkoutiotis noted. “How much do you trust the system for accurate responses? And then from an agentic point of view, what is the decision-making tree that that system is inferring for the organization?”
Kasia Hanson reinforced this from a consulting standpoint, noting that many organizations adopt hybrid models without fully accounting for the operational lift required to verify, escalate, and respond to AI-driven detections. Detection scales faster than response unless someone deliberately designs for that gap.
Integration Without Control Increases Risk
The same pattern appears in conversations around integration. Panelists discussed the benefits of bringing together video, acoustics, and other sensing technologies to create a richer understanding of risk. Mike Arnold spoke to the power of multiple modalities working together. Vourkoutiotis agreed, but drew a critical distinction between integration and control. “A lot of organizations want to say that they will integrate everything and anything,” he said.
Integration without response design, however, creates its own problems. “If we don’t do it appropriately, then you just get additional information that the systems don’t handle appropriately, people don’t know what to do with it, and either you turn it off or it’s to your detriment because you don’t.”
More signals do not improve security unless they are translated into clear, timely action.
Why Accuracy Matters More Than Efficiency
Vourkoutiotis challenged the industry’s historical focus on efficiency. “Artificial intelligence from a computer vision model was typically used for efficiency,” he said. “That didn’t really have a positive impact on the customer.”
Efficiency gains disappear when organizations must add staff simply to manage alert volume. At ECAM, the metric is different. “Our premise is to use this strictly to increase accuracy, protect human life, and ensure that we’re mitigating risk,” Vourkoutiotis said. Accuracy changes outcomes. Efficiency follows as a result, not as a goal.
How ECAM Removes the Response Burden
Taken together, the discussion revealed a clear gap in the market. Many organizations can deliver better detection. Fewer are willing to own what happens after detection occurs. This is where ECAM’s model diverges.
While every panelist emphasized the importance of humans in the loop, most organizations stop at advocacy. ECAM assumes the responsibility directly. “A human in the loop gives us protective measures to determine that we are in fact making the right decisions and acting on them appropriately, and the agentic system has a lot of responsibility if it’s going to automate part or all of that process,” said Vourkoutiotis. “From the governance perspective, I would say that that is in fact the most important part to have a human intervention on the AI systems, because we need to determine that agentic AI is making quality decisions that are based on what a human being would want to infer on a decision and not what a computer system is learning amongst the data sets that we give it,” he added.
By developing the AI, integrating it into live environments, and providing the monitoring agents and security professionals who verify detections and execute response, ECAM removes the operational burden that improved detection typically creates for clients. Response is not delegated. It is delivered.
What Leaders Should Take Away
This recent webinar ultimately underscored an inflection point for the industry. AI works. Detection is accelerating. Signal volume will continue to grow. The differentiator now is not who can see more, but who is prepared to act responsibly at scale.
As surveillance gets smarter, security will belong to those who take ownership of the full lifecycle, from detection through decision to response. Integrated intelligence backed by human action, not just better analytics, is what turns AI into real security.
Next Steps
AI is changing what security systems can detect. The harder question is who is responsible for what happens next.
For organizations exploring AI-driven surveillance that want to improve detection without inheriting operational complexity, ECAM designs and operates end-to-end security programs that combine AI analytics with human verification and response.