Skip to main content

We are waiving Fall 2023 application fees! Apply by July 31.

Red Teaming AI: Cybersecurity Experts Compete to Hack ChatGPT for Vulnerability Detection

The online Master of Business Administration (MBA) with a Concentration in Cybersecurity Management program from the University of Illinois Springfield (UIS) prepares technology leaders to protect and secure information against cyberthreats. This requires a vigilant approach to any vulnerabilities found in software, hardware or other applications.

When OpenAI unveiled its generative artificial intelligence platform in 2022, everyone from high school students to programmers began using it to produce everything from book reports to computer code. The platform, ChatGPT, uses Large Language Model (LLM) advanced AI technology to mimic human conversation. Typically, users prompt ChatGPT and similar platforms with questions or requests to produce text, images, code, music and other content based on patterns and data in its LLM.

Hackers also have discovered generative AI techniques to create holes in cybersecurity measures and exploit them. One method, prompt hacking, involves manipulating the LLM to produce false or misleading information. When coders use that corrupt information to write programs, they unwittingly create potential cyberattack vulnerabilities.

“The impacts of large language models and AI on cybersecurity range from the good to the bad to the ugly,” InfoWorld warns. “Any tool that can be put to good use can also be put to nefarious use.”

How Is Red Teaming Used to Identify Potential Cybersecurity Weaknesses?

Red teaming is a form of ethical hacking organizations employ to test their cybersecurity measures. Red teams simulate cybercrooks’ tactics, techniques and procedures as a proactive risk assessment process, enabling companies to identify vulnerabilities so they can be closed before an attack.

“By conducting red-teaming exercises, your organization can see how well your defenses would withstand a real-world cyberattack,” IBM advises.

What Known Vulnerabilities Have Cybersecurity Experts Detected in AI Chatbots?

The DEF CON 31 hacker convention staged an independent mass red team exercise at the federal government’s request. Over 2,000 competitors drilled into eight generative AI LLM platforms to find ways malicious actors can exploit generative AI to defeat cybersecurity measures.

Each red team selected an AI chatbot and tried to manipulate its LLM to generate content that hackers can use to defeat cybersecurity protections for the following:

  • Security measures that detect and respond to vulnerabilities (distinct from penetration testing, which exploits known vulnerabilities to assess the risk of a breach)
  • Information integrity protections that defend the accuracy of data from human error, alterations, crashes and hardware compromise
  • Internal consistency defenses to ensure that data is current and coherent across all databases, platforms, and security
  • Societal protection that safeguards data privacy and personal information against intrusion, bias, incorrect predictions and social engineering scams

DEF CON 31 did not release the results immediately, but widespread coverage of the event suggested the red teams found the platforms riddled with flaws that would take millions of dollars to fix.

“Current AI models are simply too unwieldy, brittle and malleable,” SecurityWeek included in its report on the hacker conference. “Security was an afterthought.”

How Can Information Security Professionals Address AI Vulnerabilities?

Advanced expertise gained through a curriculum designated as a National Center of Academic Excellence in Cyber Defense Education (NCAE-C) is ideal for business professionals to gain the insights and acumen to lead cybersecurity management.

One such program, the AACSB-accredited MBA in Cybersecurity Management offered online by UIS, prepares graduates for top roles through studies that:

  • Explore processes that protect information systems against cyberattacks and data breaches via applicable principles, practices such as red teaming, and tools for cybersecurity management.
  • Immerse them in vital preventative techniques with advanced knowledge of cryptography principles, architecture, operations, and firewall prevention and intrusion detection systems.

The skills acquired through UIS’s online MBA in Cybersecurity Management program position graduates to compete in high-demand, lucrative careers. Employers predict a 32% increase in hiring through 2032, with an average annual salary of $112,000.

Learn more about the University of Illinois Springfield’s online Master of Business Administration with a Concentration in Cybersecurity Management program.

Our Commitment to Content Publishing Accuracy

Articles that appear on this website are for information purposes only. The nature of the information in all of the articles is intended to provide accurate and authoritative information in regard to the subject matter covered.

The information contained within this site has been sourced and presented with reasonable care. If there are errors, please contact us by completing the form below.

Timeliness: Note that most articles published on this website remain on the website indefinitely. Only those articles that have been published within the most recent months may be considered timely. We do not remove articles regardless of the date of publication, as many, but not all, of our earlier articles may still have important relevance to some of our visitors. Use appropriate caution in acting on the information of any article.

Report inaccurate article content:

Request more information

Submit this form, and an Enrollment Specialist will contact you to answer your questions.

  • This field is for validation purposes and should be left unchanged.

Or call 888-905-1171

Begin Application Process

Start your application today!
or call 888-905-1171 888-905-1171
for help with any questions you may have.