News coverage has been buzzing about Artificial Intelligence (AI) and the implications it holds for our everyday lives. AI is widely seen as the next stepping stone toward a better future, but some cyber security analysts think the latest AI tools, such as ChatGPT from OpenAI, can be used for harm just as easily as for good.

True AI as portrayed in popular film and TV shows is still firmly in the realm of science fiction; however, there have been advances in the technology which have left security analysts and other developers with high hopes for the future of the technology space. There is just as much anxiety over the fast-growing landscape of AI development, so much so that Google has reportedly declared a ‘code red’ over the debut of the ChatGPT AI bot.

What Is AI Capable of in the Cyber Security Space?

In the past, most researchers passed over AI as nothing more than a parlor trick with nothing really under the hood, with most solutions turning out to be overhyped by marketing teams and far less sophisticated or practical than advertised. But as the technology has advanced over the years, AI tools such as ChatGPT have given security analysts more pause.

Security researcher Casey John Ellis, chief technology officer and founder of BugCrowd, said that ChatGPT has influenced the way he has been thinking about the role of machine learning and AI in innovation. In a short period of time, security researchers have been able to use the tool to perform a number of offensive and defensive cyber security tasks, such as generating or polishing convincing phishing emails, developing usable Yara rules, spotting buffer overflows in code, generating evasion code that could help attackers bypass threat detection, and even writing malware.
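
To make one of those tasks concrete, here is a minimal sketch of what a usable Yara rule looks like when loaded and run with the yara-python library. The rule itself, its marker strings, and the sample data are hypothetical placeholders written for illustration, not output produced by ChatGPT.

    # Minimal sketch: compiling and running a Yara rule with yara-python
    # (install with `pip install yara-python`). The rule below is a
    # hypothetical example of the kind of detection rule analysts have
    # asked ChatGPT to draft.
    import yara

    # A toy rule that flags data containing either of two hypothetical
    # marker strings associated with a made-up downloader stub.
    RULE_SOURCE = r"""
    rule Suspicious_Stub_Example
    {
        meta:
            description = "Illustrative rule: flags a hypothetical downloader stub"
        strings:
            $url = "hxxp://example-payload-host.test" ascii
            $cmd = "powershell -enc" ascii nocase
        condition:
            any of them
    }
    """

    rules = yara.compile(source=RULE_SOURCE)

    # Scan an in-memory buffer; in practice you would scan files on disk.
    sample = b"...powershell -enc SQBFAFgA... trailing bytes"
    for match in rules.match(data=sample):
        print(f"Matched rule: {match.rule}")

The point of a rule like this is that it encodes an analyst's knowledge of a threat as a reusable signature, which is exactly the kind of routine drafting work researchers have been handing off to the bot.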

Although OpenAI, the developer of ChatGPT, claims that restrictions are in place to prevent the tool from being used to write malicious code directly, analysts report that they have still been able to get ChatGPT to generate code usable as ransomware by indirectly describing what they wanted, without stating what the code would be used for.

Researcher Dr. Suleyman Ozarslan noted that he could describe the tactics, techniques, and procedures of ransomware without labeling it as such, and get the bot to help build the software in the programming language Swift. Ozarslan compared it to a 3D printer: a 3D printer may not print a working gun, but you can still get it to print a grip, a trigger, and a magazine, each on its own.

Although the current iterations are still primitive, it's easy to see how far AI and machine learning have come and what implications they could have for the future of the cyber security space. The limiting factors are still clear, in that you have to know what you're doing in order to build software with an AI bot like ChatGPT. However, cyber criminals may still be able to use it to their advantage. For example, the AI could easily be used to write phishing emails in perfect English, or to make spam messages and calls more convincing.

It's not all bad, though, as developments in AI tools and capabilities can also help stop the bad guys. Many cyber security firms are using AI and machine learning to stop malware, and AI-driven security tools such as SentinelOne are quickly becoming favorites among anti-virus programs.
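
As a rough illustration of the approach (not SentinelOne's actual engine, which is proprietary and far more sophisticated), here is a minimal sketch of how a machine learning classifier can flag files as malicious based on static features. The feature set and training samples are made-up placeholders.

    # Minimal sketch of ML-based malware detection: train a classifier on
    # static file features, then score an unseen file. The features and
    # training samples below are made-up placeholders for illustration only.
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical static features per file:
    # [file size in KB, count of imported functions,
    #  byte entropy (0-8), 1 if the binary is packed else 0]
    X_train = [
        [512, 140, 5.1, 0],   # benign samples
        [2048, 310, 4.8, 0],
        [96, 25, 6.2, 0],
        [340, 12, 7.8, 1],    # malicious samples
        [75, 8, 7.5, 1],
        [410, 15, 7.9, 1],
    ]
    y_train = [0, 0, 0, 1, 1, 1]  # 0 = benign, 1 = malicious

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # Score a new, unseen file: small, few imports, high entropy, packed.
    unknown_file = [[120, 10, 7.7, 1]]
    probability_malicious = model.predict_proba(unknown_file)[0][1]
    print(f"Probability malicious: {probability_malicious:.2f}")

Real products train on millions of samples and far richer features, and pair static analysis with behavioral monitoring, but the core idea is the same: learning to score files rather than matching fixed signatures, which is what lets these tools catch malware no one has seen before.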

Natural Networks, like other cyber security providers, is always working to stay ahead of the curve when it comes to protecting your data and IT infrastructure. As a Managed Services Provider, we work to always implement the best IT security practices. Natural Networks has even implemented AI-driven cyber security tools such as SentinelOne to protect our clients from malware and other online threats. If you want to learn more about how you can partner with Natural Networks to protect your business’ critical IT components, give us a call today!