Key GenAI cybersecurity challenges and risk mitigation strategies

Rajiv Kakar

Generative artificial intelligence (GenAI) has the capacity to understand, learn, adapt, and implement knowledge across a broad range of tasks at a level equal to or beyond human capability. Unlike Narrow AI, which is designed to perform a specific task such as voice recognition or recommendation algorithms, GenAI can apply intelligence to any problem, and be able to perform any intellectual task that a human being can do.

While it holds extraordinary promise for the future, GenAI comes shrouded in various concerns, extending from ethical dilemmas to security susceptibilities. This article will explore some of the key challenges of GenAI and risk mitigation strategies from a cybersecurity perspective.

Key challenges of GenAI 

A persistent issue of AI is the lack of transparency, frequently referred to as the black box problem. It’s difficult to understand how complex AI models make decisions, and this can create a security risk by allowing biased or malicious behavior to go unchecked.

Businesses are rapidly exploring GenAI solutions with little forethought on the security implications on the rest of the IT estate. There is currently no limit for the complexity of attack surfaces of AI systems or other security abuses enabled by AI systems. In addition, AI models heavily rely on third-party technologies, where the large language models (LLMs) like ChatGPT are outside the control of an enterprise. Consequently, the learning parameters where AI systems may be trained for decision-making outside an organization’s security controls or trained in one domain and then “fine-tuned” for another raises concerns about intended and actual usage.

Datasets used to train AI systems may become detached from their original and intended context, or may become stale or outdated relative to deployment. This introduces the problem of decisions made on incorrect data. Moreover, changes during training of models may fundamentally alter AI system performance and outcomes.

LLMs typically capture more information than they process, and considering the privacy policy of ChatGPT, the platform may regularly collect user data such as IP address, browser info and browsing activity. These may be shared with third parties, competitors, and regulators. The use of pre-trained models that can advance research and improve performance can also increase levels of statistical uncertainty and cause issues with bias management, scientific validity, and reproducibility.

On top of the computational costs for developing AI systems and their impact on the environment and planet, it is very difficult to predict failure modes for the emergent properties of large-scale pre-trained models. AI systems may require more frequent maintenance and triggers for conducting corrective maintenance. Additionally, it is challenging to perform regular AI-based software testing, or determine what to test, since AI systems are not subject to the same controls as traditional code development.

“Artificial stupidity,” the term used to describe situations where AI takes decisions that may seem illogical to humans due to its inadequate understanding of the real-world context, presents another challenge. Talks of AI singularity, a hypothetical scenario where AI outstrips human intelligence, have also started to gather momentum. Critics argue that a super-intelligent AI poses a real existential risk, as it might spin out of human control. 

The dehumanizing effects of GenAI are another cause for concern. Over-reliance on AI risks devaluing human skills and minimizing human interactions. Moreover, the widespread application of GenAI may give rise to economic disparity, as the benefits of AI may not distribute evenly across society. Finally, the misuse of GenAI, particularly for harmful purposes like illegal surveillance, spreading propaganda, or weaponization, cannot be overstated.

The already dense and complex AI landscape is further complicated by substantial hype and a multitude of diverse solutions. The resulting application environment is scattered with multiple third-party technology solution components which require thorough vetting in enterprise contexts. 

Types of GenAI attacks

There are various types of GenAI attacks manifesting across enterprises. Adversarial attacks involve manipulating an AI model's input data to make the model behave in a way that the attacker desires, without triggering an alarm. For example, an attacker could manipulate a facial recognition system to misidentify an individual, allowing unauthorized access. 

A data poisoning attack involves maliciously manipulating the data used to train AI models. By introducing false or misleading data into the training dataset, attackers can compromise the accuracy and reliability of AI systems. This can lead to biased predictions or compromised decision-making. On the other hand, a model theft or model inversion attack may attempt to steal and/or reverse-engineer AI models to obtain sensitive information. 

In a transfer learning attack, an attacker manipulates an AI model by transferring knowledge gained from one domain to another, resulting in the AI system producing incorrect or harmful outcomes when applied to new tasks. In input manipulation, interacting with a chatbot or an AI-driven system can lead to incorrect or harmful responses simply by changing words or asking tricky questions. For instance, a medical chatbot might misinterpret a health query, potentially providing inaccurate medical advice.

AI can also be used by malicious actors to automate and enhance their cyberattacks. This includes using AI to perform more sophisticated phishing attacks, automate the discovery of vulnerabilities, or conduct faster, more effective brute-force attacks.

GenAI security risk management

To mitigate attack vectors, organizations must establish comprehensive regulations and standards that can guide the responsible use and development of GenAI. A GenAI Risk and Control framework can be very helpful in highlighting areas of vulnerability and risk mitigation in some of the following areas:  

Threat recognition. Identify possible threats GenAI might enable, such as autopilot system hacking, data privacy threats, decision-making distortion, or manipulation.

Vulnerability Assessment. Evaluate weak spots in the system that might be exploited due to GenAI characteristics.

Risk Impact Analysis. Look into potential implications if any threats were actualized (financial implications, impact on company reputation, etc.)

Mitigation Strategy Development. Develop strategies to mitigate these risks, whether that means strengthening your network security system, creating backup systems, securing data privacy with improved encryption, or continuously auditing & updating the AI’s programming against potential manipulation.

Contingency Planning. Develop a plan for responding to any breaches or issues that occur, despite mitigation efforts. Include steps to fix the issue, mitigate the damage, and prevent future occurrences.

Constant Monitoring & Updating. GenAI systems should be regularly monitored and updated to patch vulnerabilities and keep up with the evolving threat landscape.

Training & Awareness. Ensure that all users of GenAI systems are properly trained on security best practices and are aware of the potential threats.

External Cooperation. Cooperate with other firms and institutions to share threat intelligence and promote a collective defense strategy.

Regulation Compliance. Ensure compliance with all applicable laws and regulations surrounding data security and AI, such as general data protection regulation (GDPR).

Incident Response Plan. Prepare a clear and concise plan to follow when a breach occurs, which includes reporting breaches, managing and controlling the situation.

Organizations must consider upgrading cloud security and moving towards zero trust principles, whereby every access request is authenticated, authorized and validated every time. Antivirus systems should be upgraded from the current norm of using a pre-programmed list of known attack vectors (signature based) to systems that can observe unusual patterns and alert on deviations (anomaly based). Embracing GenAI monitoring by introducing the appropriate tools allows organizations to monitor AI prompts and see that they do not deviate from original scenarios.

Review and strengthen security around a GenAI application stack emphasizing on integration points between systems (API’s) and identify AI systems and assets by drawing up a plan of usage. Organizations can assign a dedicated team to test AI models at base and application level, as well as introduce moderation and control on user developed applications, tools and products. Any experimental or uncontrolled work on GenAI within the enterprise must be monitored.

Applying these strategies can minimize the risks associated with GenAI and help efficiently manage cybersecurity.

Navigating AI pitfalls by mitigating risks

While the potential of GenAI is undeniable, a cautious, forward-thinking approach is crucial to navigating its potential pitfalls. It is imperative to establish comprehensive risk mitigation, standards, and frameworks that can guide the responsible use and development of GenAI.

 

Rajiv Kakar is a Technology Consulting Principal of SGV & Co.

This article is for general information only and is not a substitute for professional advice where the facts and circumstances warrant. The views and opinions expressed above are those of the author and do not necessarily represent the views of SGV & Co.

Leading the way in business

Other SGV News and Publications