Agent vs. Agentless, Why Not Both? The Story Behind Topia 3.5
TL;DR - A month ago we realized that a significant number of companies would deploy Vicarius's Topia if we also offered an agentless version, so we built one.
How It All Started
"I Like your technology, but I cannot afford to deploy another agent...". Boy, we had LOADS of these calls lately. As a small vendor, it's very hard to disrupt an industry that goes through the same path for the last 15 years.
We know, we know... You already have so many things installed and running, so why add another one? Trust us: to perform continuous, contextual vulnerability assessment, you must be where the application is installed AND actually in use (we even got Gartner to back us on that).
On top of that, imagine something that could secure your vulnerable applications as they are being used, without patching. Neat, right?
We strongly believe that an agent-based approach is a must for maintaining a strong security posture and staying up to date with emerging threats.
When theory meets reality, though, you realize there are scenarios where you simply cannot deploy an agent: a locked-down server the IT team cannot access, a network managed by another department, or any number of other reasons.
That is why we took the feedback and headed back to the coding cave.
With an agent we can accomplish the following (let's stick to the highlights):
1. Predicting unknown vulnerabilities and detecting known ones.
2. Prioritizing based on actual application behavior and usage.
3. Protecting against exploitation of the software as it is being used.
* All of this is done continuously and in real time.
If we want to keep the competitive advantage we enjoy today, we must offer the same functionality without an agent. Items 1 and 2 can be achieved rather easily through the right network queries (which, to be honest, rely on other "agents" - services such as WMI, RPC, SNMP, and others...).
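To make item 2 a bit more concrete, here is a minimal, hypothetical sketch of the idea: an inventory collected over the network (say, via a WMI query) is matched against a list of known-vulnerable version ranges. The product names, versions, and matching logic below are invented for illustration; this is not Topia's actual implementation.

```python
# Inventory as a remote query might return it: (product name, version).
# All entries here are made up for the sake of the example.
inventory = [
    ("ExampleFTP", "2.3.1"),
    ("ExampleViewer", "1.0.9"),
    ("ExampleOffice", "16.2.0"),
]

# Toy advisory list: product name -> first fixed version.
# Anything below the fixed version is considered vulnerable.
advisories = {
    "ExampleFTP": "2.4.0",
    "ExampleViewer": "1.1.0",
}

def parse(version):
    """Turn '2.3.1' into (2, 3, 1) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def find_vulnerable(inventory, advisories):
    """Return the (name, version) pairs that fall below the fixed version."""
    return [
        (name, version)
        for name, version in inventory
        if name in advisories and parse(version) < parse(advisories[name])
    ]

print(find_vulnerable(inventory, advisories))
# [('ExampleFTP', '2.3.1'), ('ExampleViewer', '1.0.9')]
```

A real product would of course deal with vendor-specific version schemes, CPE matching, and richer advisory data, but the core loop - inventory in, prioritized vulnerable list out - is the same shape.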
Not Everything Is Simple
The third part was more challenging - how can you secure software when you're not "there"?
Fortunately, the "outsource" approach which we used to collect the data required for predicting and prioritizing, could actually be used here. By taking advantage of existing technologies already deployed on the asset, we can offer patch-less protection.
The "heavy lifting" is being done by the server - finding unknown vulnerabilities by reversing the binaries and prioritizing risks based on the context. The remaining part is to feed the relevant service with the vulnerable areas. The agentless protection feature will be introduced in the future.
To maintain a high security posture, an agent-based approach remains the horse you should bet on. There is no replacement for being where all the action happens.
That being said, in some cases you must also support the agentless approach to achieve full risk visibility. While network scans are less reliable, they let you reach some of the "darker corners" of your company.
Now remember - Nobody ever got fired for choosing Vicarius.