Email Security Blog

The Ups and Downs of ChatGPT and Email Security

Artificial Intelligence (AI) has seen rapid growth over the past year, especially when it comes to language models. Since its recent release, all eyes have been on ChatGPT as users weigh the pros and cons of this sophisticated technology.

Understanding ChatGPT

ChatGPT is a sophisticated chatbot created by the AI research and deployment firm OpenAI that can engage in natural conversations on a wide range of subjects. To do so, it uses a combination of machine learning techniques and natural language processing (NLP) algorithms, both of which are subsets of artificial intelligence.

At its core is the GPT-3 language model, which has been trained on (i.e., fed) massive amounts of data in the form of books, articles, internet content, social media posts, and more. As a result, ChatGPT can generate text and responses that are smart, grammatically correct, coherent, and relevant. ChatGPT can also challenge incorrect premises, reject inappropriate requests, and even admit mistakes.

Since its debut, ChatGPT has been free to try, and its developers hope to gain additional insight from those using the service. However, OpenAI has also introduced a subscription tier that offers access to a newer version powered by GPT-4.

ChatGPT: Downsides from the Start

Several industries quickly found reasons to worry about the darker side of ChatGPT. Educators, for example, were among the first to voice concerns about the technology’s ability to put academic integrity at risk. An anonymous poll of Stanford University students found that 17% of respondents had used ChatGPT on their final assignments and exams for the quarter.1 Alarms also went off in the healthcare industry, especially after one study showed that ChatGPT “performed at or near the passing threshold” for the U.S. Medical Licensing Examination.2 And yet, those relying on ChatGPT’s advice in lieu of an actual medical professional are often misdiagnosed, increasing anxiety levels and crowding physicians’ offices with individuals seeking unnecessary procedures and scheduling unwarranted appointments.

ChatGPT and Email Security

If there is one industry that touches all our lives, it is cybersecurity – more specifically, email security and phishing. When it was first introduced, ChatGPT caught the attention of IT and cybersecurity professionals with its ability to compose high-quality phishing emails, write code, convert code from one programming language to another, and even help create sophisticated polymorphic malware designed to evade detection.3

These concerns were reinforced just months after ChatGPT’s release, when BlackBerry Global Research surveyed 1,500 IT decision-makers. While participants acknowledged ChatGPT’s potential for good, they also voiced concerns:4

  • 75% acknowledged ChatGPT as a potential cybersecurity threat.
  • 53% believe it will help hackers craft more believable and legitimate-sounding phishing emails.
  • 49% believe it will enable less experienced hackers to improve their technical knowledge and develop more specialized skills.

These threats prompted OpenAI to update ChatGPT so that malware requests would be rejected. But, as hackers do, they soon devised ways to “jailbreak” the system. In other words, cybercriminals found ways to bypass the security protocols so that ChatGPT would again help write phishing emails, malware, and hacking scripts. OpenAI continues to fight this battle, but it also knows the limitations of ChatGPT, admitting, “While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior.”5

Artificial Intelligence: Friend or Foe?

The release of ChatGPT has prompted plenty of discussion, often shedding such a poor light on artificial intelligence that a “good vs. evil” reminder seems to be warranted.

Friend. Artificial intelligence first came onto the scene in the early 1950s. Since then, a wide range of wonderful applications have been developed with the potential and power to improve our quality of life. Every day, AI brings us Siri, Alexa, and even Netflix recommendations. Smart home devices rely on AI, and we’d all be lost without Google Maps. At a higher level, AI can screen for cancer, predict the impacts of climate change, and determine the right crops for regions facing famine. AI can convert text to sign language so children can learn to read. And, of course, AI keeps us safe whether we’re at home or online.

Foe. When it comes to phishing, the bad guys will always look for ways to improve their game. In this case, it’s clear that cybercriminals will continue to exploit ChatGPT to craft phishing emails that sound better and deceive more easily. And, where ChatGPT’s defenses are weak, we can also expect a new wave of hackers to join the ranks, relying on the technology to write malware. But all is not lost – not by a long shot.

With AI from INKY, You Can Fight Fire with Fire

It’s no secret that while ChatGPT could give cybercriminals an upper hand, the technology is far from perfect. Even OpenAI explains that “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers,” and that “given one phrasing of a question, the model can claim to not know the answer, but given a slight rephrase, can answer correctly.”5 Even with its flaws, though, ChatGPT still offers plenty of advantages to phishers. So, how do you win this battle? You outsmart cybercriminals with an email security solution that’s smarter than ChatGPT.

INKY delivers the most powerful AI anti-phishing technology available and pairs it with a dose of good old-fashioned human decision-making, a combination that proves to be the golden ticket when it comes to battling phishing threats. INKY catches everything, including zero-day attacks, malware, Business Email Compromise (BEC), and even malicious emails generated with the help of ChatGPT.

INKY begins by automatically scanning every email and performing a high-level analysis. Once the phishing analysis is complete, INKY applies one of more than 60 different interactive banners, which notify the user of the email’s phishing threat level. Gray is neutral. Yellow advises caution. Red signals danger. The banners also explain the threat, ultimately educating and guiding users to make smarter email security decisions.
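
To make that banner model concrete, here is a minimal sketch, in Python, of how a phishing-analysis result might be mapped to the gray, yellow, and red banner levels described above. It is purely illustrative and assumes a hypothetical 0-to-1 risk score; the function names, thresholds, and messages are not INKY’s actual code or scoring scale.

    # Hypothetical sketch: map a phishing-analysis result to a banner level.
    # Illustrative only -- INKY's real analysis and banner logic are proprietary.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ScanResult:
        score: float                                       # assumed scale: 0.0 (benign) to 1.0 (clearly malicious)
        reasons: List[str] = field(default_factory=list)   # human-readable findings from the analysis

    def banner_for(result: ScanResult) -> dict:
        """Return a banner level, message, and supporting details for a scanned email."""
        if result.score >= 0.8:      # hypothetical "danger" threshold
            level, message = "red", "Danger: this message shows strong signs of phishing."
        elif result.score >= 0.4:    # hypothetical "caution" threshold
            level, message = "yellow", "Caution: this message has suspicious characteristics."
        else:
            level, message = "gray", "No threats detected; standard external-mail notice."
        return {"level": level, "message": message, "details": result.reasons}

    # Example: a newly registered sender domain plus display-name spoofing scores high.
    print(banner_for(ScanResult(0.85, ["Sender domain registered yesterday", "Display-name spoofing"])))

The point of the sketch is simply that the banner is chosen after the analysis completes and carries the findings along with it, so the user sees why an email was flagged rather than just a verdict.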

Email Security Remains a Big Win for MSPs

The BlackBerry survey also reinforced the strong demand among companies for email security platforms: 82% of IT decision-makers plan to invest in AI-driven cybersecurity in the next two years, and almost half (48%) plan to invest before the end of 2023.4 With this level of potential, now is the perfect time to become an INKY Partner. You’ll have the tools you need to win new business and the anti-phishing technology to keep customers safe and happy. INKY’s automated reporting will also help, providing your customers with regular updates on the threats you’ve helped them avoid.

Learn more about the artificial intelligence that makes INKY so powerful and explore the many advantages available to our MSP Partners. Schedule a free INKY demonstration today.

----------------------

INKY is an award-winning, behavioral email security platform that blocks phishing threats, prevents data leaks, and coaches users to make smart decisions. Like a cybersecurity coach, INKY signals suspicious behaviors with interactive email banners that guide users to take safe action on any device or email client. IT teams don’t face the burden of filtering every email themselves or maintaining multiple systems. Through powerful technology and intuitive user engagement, INKY keeps phishers out for good. Learn why so many companies trust the security of their email to INKY. Request an online demonstration today.

 

1Source: https://stanforddaily.com/2023/01/22/scores-of-stanford-students-used-chatgpt-on-final-exams-survey-suggests/

2Source: https://www.kevinmd.com/2023/03/the-pros-and-cons-of-using-chatgpt-for-your-health-care-needs.html

3Source: https://www.cyberark.com/resources/threat-research-blog/chatting-our-way-into-creating-a-polymorphic-malware

4Source: https://www.blackberry.com/us/en/company/newsroom/press-releases/2023/chatgpt-may-already-be-used-in-nation-state-cyberattacks-say-it-decision-makers-in-blackberry-global-research

5Source: https://openai.com/blog/chatgpt
