The terms Artificial Intelligence (AI) and Machine Learning (ML) are among the hottest buzzwords in today’s digital world. As the development and implementation of these technologies grow, their place in our lives remains a source of ongoing discussion and increasingly heated debate. Will these machines soon replace the need for software developers, programmers, and coders altogether? At some point, will they make decisions based solely on their own protocols? No one knows for sure. But the question of whether these technologies may someday outgrow the need for human support is nothing new. As people, we have always toyed with the notion of being physically and intellectually usurped by our own innovations.

From the ancient Greek myth of Talos, the giant bronze automaton who hurled boulders at the Argonauts, to the cyborgs of Hollywood’s The Terminator, our fascination with the dominion of machines is well-established. What’s not so clear, however, is whether human intelligence will retain its value in a world run primarily by devices. And as the predictive powers of computers continue to evolve, this uncertainty about the human component becomes increasingly relevant in the cybersecurity industry, where ML plays a major role in digital protection.

The Human Edge

Although AI and ML are often used interchangeably, particularly as they apply to Big Data and analytics, it’s worth understanding the distinction between them. ML refers to a machine’s ability to take information and run with it, building algorithms and outputs on its own, while AI is the next-level application of this computing ability: carrying out tasks in a “smart” and increasingly independent way. When applied to cybersecurity, new AI algorithms use the intelligent automation of ML to respond more effectively to digital threats, inevitably reducing the human component. When ML principles are incorporated into systems, those systems can adapt over time, providing a dynamic edge against all sorts of bad actors. Unlike people, AI never gets tired, stressed out, or overworked; it performs with unflagging consistency. That said, AI still has certain limitations that can only be remedied by the intervention of human thinking.

The lifeblood of black hats is adaptability. They are always looking for new ways to confuse or circumvent the security models put in place to deter them. So, as AI models become more effective at detecting threats, hackers become more adept at finding other avenues of attack. This process, known as adversarial ML, involves studying how models work and mastering the art of evasion, something AI itself has only a limited capacity to fight.
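To make the idea of adversarial evasion concrete, here is a minimal sketch. Everything in it is invented for illustration: the feature names, weights, threshold, and the assumption of a simple linear scorer. The point is only to show the mechanic: an attacker who understands the model can pad a sample with benign-looking features until it slips under the detection threshold, without changing what the malware actually does.

```python
# Toy adversarial-ML evasion against a linear malware scorer.
# All feature names, weights, and thresholds below are made up for the sketch.

WEIGHTS = {
    "entropy": 0.8,        # packed/encrypted sections raise suspicion
    "imports_crypto": 0.6,  # use of crypto APIs raises it further
    "signed": -0.9,        # a valid code signature lowers the score
    "padding_kb": -0.01,   # benign-looking padding slightly lowers it
}
THRESHOLD = 1.0  # score >= THRESHOLD means the file is flagged as malicious

def score(features):
    """Linear score: weighted sum of the file's static features."""
    return sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)

def evade(features, mutable, step=1.0, max_steps=200):
    """Greedily inflate attacker-controllable features until the sample
    slips under the detection threshold (mimicry-style evasion)."""
    sample = dict(features)
    for _ in range(max_steps):
        if score(sample) < THRESHOLD:
            return sample
        # bump the mutable feature whose weight most reduces the score
        k = min(mutable, key=lambda f: WEIGHTS[f])
        sample[k] = sample.get(k, 0.0) + step
    return sample

malware = {"entropy": 2.0, "imports_crypto": 1.0, "signed": 0.0}
print(score(malware))  # well above the threshold: flagged
evaded = evade(malware, mutable=["padding_kb"])
print(score(evaded) < THRESHOLD)  # padded copy now scores as benign
```

A real model is far more complex than this linear toy, but the asymmetry it illustrates is the one described above: the defender’s model is fixed at any given moment, while the attacker is free to probe it and adapt.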

Cybersecurity solutions that rely solely on AI may be able to battle most of today’s cyber threats, but they will never be able to win the war against the unknowns. Just like the old adage, “fighting fire with fire,” the devious and all-too-human thinking of a hacker may be temporarily thwarted by the powers of ML, but it can only be fully vanquished by the equally creative mind of another human.

Technical Pitfalls

While most industry experts agree on the significance of ML in digital security, some cybersecurity professionals go further, suggesting it can now stand alone in the fight against malware. This ML-centric thinking emphasizes the power of reverse engineering: millions of malware samples are collected and examined, and algorithms are trained to recognize files with similar features. When a file drops on a disk, it is statically analyzed in milliseconds, compared to known malware features, and identified accordingly. While this approach is lightweight and offers better detection rates than AV signatures, it captures only a limited scope of what’s really out there.

Because a lot of malware reuses the same essential code, false positives are common. And an AI-generated algorithm can only flag files that resemble what it has already blacklisted, sometimes consuming precious hours, days, weeks, or even months before landing on a verdict. Meanwhile, the systems in need of protection remain vulnerable to never-before-seen attacks. This lag can leave customers pondering the state of their own protection and feeling a bit like participants in a high-stakes game of digital Russian Roulette.

Others in the industry take a more balanced approach to security by purposefully integrating human functions into the processing intelligence of computers. As security solution providers, we feel confident in the value of interplay between humans and machines, which is why we have designed our technology to maximize the predictive qualities of ML while still acknowledging its limitations. To attain real security, it is essential to keep humans “in the loop,” combining their analytic strength with the power of modern computing.
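One common way to keep humans “in the loop” is three-way triage: let the model auto-handle confident verdicts and route the uncertain middle band to an analyst queue. The sketch below is illustrative only; the probability values, band boundaries, and filenames are invented, not taken from any real product.

```python
# Hedged sketch of human-in-the-loop triage. The confidence band
# (0.15-0.85) and all sample probabilities are illustrative assumptions.

ANALYST_QUEUE = []  # files awaiting a human verdict

def triage(filename: str, malicious_prob: float,
           low: float = 0.15, high: float = 0.85) -> str:
    """Three-way decision: allow, block, or escalate to a human analyst."""
    if malicious_prob >= high:
        return "block"        # model is confident it's malicious
    if malicious_prob <= low:
        return "allow"        # model is confident it's benign
    ANALYST_QUEUE.append((filename, malicious_prob))
    return "escalate"         # ambiguous: exactly where ML alone fails

verdicts = {f: triage(f, p) for f, p in [
    ("invoice.pdf", 0.03),
    ("setup.exe", 0.97),
    ("macro_doc.docm", 0.55),  # the gray zone the article is about
]}
print(verdicts)
print(ANALYST_QUEUE)  # [('macro_doc.docm', 0.55)]
```

The design choice worth noting is the middle band: narrowing it saves analyst time but pushes more gray-zone files onto the machine alone, which is precisely the trade-off between automation and human judgment discussed above.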

Practical Problems

Because ML can evolve with the threatscape and be trained to directly address the remediation of problematic behaviors, it is a natural choice for easing workloads, improving productivity, and enhancing digital security, all with little to no human intervention. However, it’s important to keep in mind that ML depends on a set of algorithms for predicting the nature and origin of cyberattacks. While these algorithms do improve over time at detecting and addressing threats, they are only effective against variations on the theme of existing viruses, Trojans, worms, and other such malware. They cannot anticipate entirely new strains designed by malicious actors to breach systems and compromise data; only humans can do that. This means malware with a completely unfamiliar presentation, behavior, and provenance remains elusive to the powers of AI.

When customers rely solely on AI for endpoint security and cyber threat prevention, they invariably run questionable applications without any real understanding of the danger. Is it good? Is it bad? Could it lead to a breach? Users are equipped with nothing more than question marks as they make decisions about files to keep business running smoothly and without interruption. And as they struggle to protect themselves while maintaining day-to-day operations, they remain open to unseen threats.


At the end of the day, humans still offer the best and brightest way to control the shortcomings of computers. Just ask the ancient Greeks. Even the massive robotic Cretan, Talos, who could drive off pirates with a volley of rocks, had a weakness. Forged by the blacksmith god Hephaestus, his divine blood flowed through a single vein sealed by a bronze nail. Otherwise, his form was fierce and impenetrable. In the end, Talos was destroyed not by an army of other mechanical men, but by one mortal woman who simply pulled the nail from his body and watched his power drain away.

Related Resources:

Artificial Intelligence To The Rescue: Predicting Natural Disasters

Better IT Security with AI

Implementing Artificial Intelligence into Businesses

How Can AI Assist In The Prevention Of Attacks?
