Security And Governance Challenges Of GenAI

Security And Governance Challenges Of GenAI

Security And Governance Challenges Of GenAI

Author: Aparna Achanta, Forbes Councils Member
Published on: 2025-01-13 13:45:00
Source: Forbes – Innovation

Disclaimer:All rights are owned by the respective creators. No copyright infringement is intended.


Aparna Achanta is a Principal Security Architect at IBM with over a decade of experience leading secure application development projects.

Organizations are looking to rapidly seek value from generative AI (GenAI), often neglecting a vital aspect: security. A recent IBM Institute for Business Value survey involving C-suite executives revealed that only 24% of ongoing GenAI projects take security into consideration. This is despite 82% of participants emphasizing that secure and reliable AI is crucial for their business’s success. In the quest to enhance efficiency and automate routine tasks to free up time, employees often use public GenAI applications, exposing sensitive data. While employees have good intentions, they frequently overlook crucial data security and privacy issues. GenAI will revolutionize operations; therefore, enterprises must develop strong security, risk management and compliance policies to secure and govern GenAI applications.

GenAI Governance Challenges

Governance, risk and compliance (GRC) underpin GenAI reliability and safety. GenAI swiftly permeated the workplace within a few months, causing regulators, compliance and security experts to hastily develop guidelines. While many organizations have reacted by prohibiting the use of GenAI, this is not advisable as it enhances employee productivity and creativity. Preventing its use stifles innovation. Even if employers restrict GenAI tool use, employees might still use them discreetly, increasing data security and compliance risks because the IT team is unaware. Security leaders need to develop a cybersecurity strategy that safeguards their AI systems and data while aligning with business goals.

GenAI Security Challenges

Data Integrity, Regulatory And Privacy Concerns: GenAI models require vast amounts of training data, ensuring the integrity of input data is vital to avoid biased or harmful AI outputs. To sustain GenAI’s value, the inputs and outputs of GenAI models must be reliable and trustworthy, encompassing data confidentiality, integrity and availability. A leak of training data can lead to compliance violations and hefty fines. Malicious actors can misuse GenAI to create “deepfakes” that raise privacy concerns about protecting an individual’s personal attributes.

Increased Attack Surface: Attackers can target GenAI models with adversarial prompts, tricking GenAI apps into generating false, undesired or harmful results. While bias and ethics are generally top concerns when considering trustworthy AI, the entire AI development lifecycle is vulnerable to novel threats, jeopardizing trust. Securing these requires comprehensive cybersecurity strategies to protect against potential threats, which demands increased effort from security teams.

Access Control Issues: Unauthorized access to sensitive data can be disastrous for organizations. Preventing it requires managing access and identity control. It can be challenging for organizations to maintain robust authentication mechanisms, integrate them with the cloud and regularly verify and monitor access permissions. A rushed rollout of GenAI can lead to data breaches.

GenAI Vendor-Related Issues: GenAI vendors can knowingly or unknowingly access organizational data, which adds another significant layer of risk. Vulnerabilities can be introduced if cloud providers do not adequately secure their infrastructure.

Governance Recommendations For GenAI

Provide Training For Responsible GenAI Use: Organizations can offer comprehensive, role-based GenAI training to all levels of employees to promote the use of responsible AI. Understanding the GenAI model’s intended function and outcomes is crucial for identifying deviations. Unexpected business risks can result from GenAI that deviates from its operational design. Understanding these dangers helps firms assess risk tolerance.

Assess Existing Governance Frameworks: Companies can first assess the applicability of existing AI governance frameworks like the NIST AI Risk Management Framework and DHS’s Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure to analyze adoption. A nuanced understanding of the technical and legal elements can help effectively manage GenAI.

Adopt A Centralized Approach To GenAI Governance: A centralized GenAI governing body that can guide all departments to execute GenAI governance consistently would prevent redundancies from siloed governance.

Establish An Agile And Flexible Governance Framework: GenAI is a particularly dynamic field, with technological progress occurring almost every week; failing to stay current can be disastrous. An effective governance framework for GenAI must account for such rapid developments. Being adaptable at every stage of GenAI implementation can benefit organizations in incorporating new advancements.

GenAI Security Best Practices

Data Minimization: In data security, the principle of “less is more” is paramount. Supply GenAI applications with only data that is strictly necessary. Identify the minimum dataset required for every interaction to achieve the desired result. Avoid sensitive data, such as personally identifiable data (PII), whenever possible.

Data Encryption: Securing GenAI applications by implementing robust and differential data encryption solutions protects sensitive data from unauthorized access. Organizations can adopt privacy-by-design principles to integrate privacy considerations into GenAI development lifecycle phases.

Reduce Attack Surface And Complexity: Implement principles of Zero Trust, like micro-segmentation, to isolate different parts of the network and validate the user at every stage of interaction. Automated monitoring and threat detection tools identify and respond to critical security threats in real time, enhancing the overall security posture.

Limit Permissions Granted To GenAI Apps: Developers must assume breach while granting access to GenAI apps. Ensure users have least privilege access to data to perform tasks with GenAI. Audit and log privileged users’ access and search for unusual logins or behavior.

GenAI Vendor Assessment And Resilience Strategies: Vet the GenAI vendor carefully by reviewing their inherent security controls. Develop cloud resilience plans tailored for GenAI applications that include redundancy, failover mechanisms and regular backup and recovery processes to ensure quick data restoration and business continuity during service interruptions.

Conclusion

Given the vast expanse of GenAI and its multifaceted challenges, it is imperative that robust security and governance frameworks be inherent in its integration in every organization. The future lies in proactively securing AI systems against possible threats while promoting innovation. A potential governance model would require continuous collaboration among policymakers, GenAI developers and end users to ensure that GenAI solutions meet ethical and regulatory standards. Accountability will be the cornerstone of this journey of secure AI deployment. Investing in cutting-edge security technologies and encouraging continuous security awareness training can help organizations minimize risks and tap into the transformative possibilities of GenAI.


Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?



Disclaimer: All rights are owned by the respective creators. No copyright infringement is intended.

Leave a Reply

Your email address will not be published. Required fields are marked *

Secured By miniOrange