Introduction

Artificial Intelligence (AI) has quickly become a hot-button topic worldwide, with new tools for personal and professional use emerging daily. Executive leaders across every industry now face a challenge: quickly implementing and scaling AI programs. Many leaders struggle to prioritize among competing AI initiatives – and, often, lose sight of the risks associated with introducing AI tools into their technology environment. The urgency to implement AI has been met with hundreds of articles, videos, social media guides, and “how to” explainers, each clamoring for attention and purporting to have a definitive approach. For executive leaders, it can be difficult to parse this bombardment of media to distill the answer to one central question: “How can I succeed at securing Artificial Intelligence tools at my organization?”

We’ve got the answer: securing Artificial Intelligence starts with risk.

Securing Artificial Intelligence Starts with Understanding AI Risk

As with cybersecurity, you cannot secure Artificial Intelligence without understanding risk. What is risk? Simply put, risk equates to potential impact to the business. In the case of AI, this could include impacts such as the exposure of sensitive or non-public data, disruptions to business operations leveraging AI technologies, and reputational harm stemming from data privacy concerns or the incorporation of AI-generated “hallucinations” in work products.

So, what should executives planning or actively implementing AI technology consider when it comes to AI risk management?

  • Maintain visibility of data used to train or populate AI models – any data introduced to the model can later be exposed, leading to data privacy violations
    • Adhere to applicable data protection regulations (e.g., GDPR, CCPA)
    • Avoid uploading confidential or sensitive data, such as personally identifiable information (PII), health data (ePHI), secrets and protected intellectual property, and HR data (e.g., salaries, personnel records)
  • Develop and adhere to strict governance and oversight across all phases of AI use
    • Implement governance best practices (such as steering committees, roadmaps, policy creation, metrics) to provide oversight and guidance
    • Validate AI tools to assure expected performance and review AI vendors for compliance violations and security issues – models drift over time
  • Implement appropriate access controls and prevent misconfigurations – AI can be misused by almost anybody
    • Prevent AI model manipulation by adversarial actors (i.e., data poisoning, model inversion attacks) or insiders (e.g., baseline poisoning)
    • Restrict access to sensitive AI tools and ensure the use of appropriate authentication mechanisms
    • Establish safeguards to prevent the use of Generative AI and GAN for malicious purposes and train users on how to avoid risks from “deepfakes”

How can Neuvik Help?

It can be challenging to understand where to begin when securing Artificial Intelligence – we understand! That’s why we’re introducing Neuvik’s AI Risk Management services. No matter where your organization is in its AI journey, our services have been designed to increase security – without slowing the speed or scale of AI adoption.

With 25+ years of deep technical and risk management expertise, Neuvik serves clients across industries implement secure, effective AI risk management programs. Recent examples include:

  • AI Risk Assessment: Assessed the implementation of a healthcare chatbot to prevent the inadvertent disclosure of ePHI or patient data, ensuring compliance with regulatory requirements
  • AI Executive Briefings & Advisory Services: Provided a detailed briefing on risks associated with both Generative AI (GenAI) and Generative Adversarial Networks (GAN) to an executive team to increase awareness and highlight the need for secure implementation
  • AI Penetration Testing: Identified risks associated with AI tooling and the broader technology environment as part of a Merger & Acquisition due diligence, enabling the acquiring company to negotiate on deal price given costs associated with remediating risks

Read below to learn more about how Neuvik’s AI Risk Management services can help when securing Artificial Intelligence at your organization.

AI Risk Briefings & AI Risk Advisory

Want to educate your team on AI risk? Neuvik offers tailored AI risk briefings to provide an introduction to Generative AI (GenAI) and Generative Adversarial Networks (GAN) and associated risks, using recent case examples. Bespoke briefings allow Neuvik to highlight case examples and trends specific to your organization’s industry and specific questions.

For on-going support in securing Artificial Intelligence at your organization, consider Neuvik’s AI Risk Advisory services. With 25+ years of experience in advising both technical and business leaders, Neuvik offers trusted expertise in AI risk management and technical security.

AI Risk Assessment

The Neuvik Risk Assessment utilizes a proprietary methodology that includes the first comprehensive mapping of relevant AI frameworks to the industry best practice risk management framework NIST CSF 2.0. This enables our AI Risk Assessment to identify risks using the NIST AI Risk Management Framework (NIST AI 600-1 & NIST AI 100-1), the OECD AI Risk Framework, and a blend of proprietary questions designed to ensure thorough results. This framework also maps to HIPAA and COBIT for healthcare clients to provide an additional perspective on regulatory risk.

The result: AI risk insights you can easily incorporate into your organization’s current cybersecurity reporting.

Conducting an AI Risk Assessment will help your organization answer the following questions:

  • How AI contributes to technical risk across the organization?
  • What non-technical (people, process, technology organization / operating model) risks AI introduces?
  • If the organization have appropriate risk management capabilities in place for AI, such as an AI risk register, asset inventory, vendor risk program, etc.?


Each risk assessment results in prioritized, tactical recommendations to perform in the near- and long-term to increase capability and ensure secure AI implementation.

AI Penetration Testing

It’s not enough to understand AI risk – implementing AI securely requires technical validation. Neuvik’s Advanced Assessments team offers 20+ years of expertise in penetration testing, Red Teaming, and technical testing. We have a deep understanding of what makes AI different – and how to effectively test it.  

Unlike most penetration tests, Neuvik’s AI Penetration Test leverages a proprietary methodology designed to test for AI-specific vulnerabilities and configuration issues. In addition to validating the secure implementation and configuration of AI tools, Neuvik’s proprietary methodology tests the underlying generative algorithms themselves, testing for the ability to manipulate the AI into revealing sensitive data.

AI Penetration Tests ensure:

  • Secure configuration and integration of AI tools within the technology environment
  • Awareness of vulnerabilities present in AI tooling
  • Clear understanding of AI tooling’s susceptibility to unique attack types, such as prompt injection and model poisoning
  • Validation of access and data privacy controls

Technical testing complements all AI Risk Management services – more details about Neuvik’s technical offerings can be found at https://neuvik.com/our-services/advanced-assessments/.

AI Risk Governance & Program Development

Do you find yourself asking or being asked:

  • How do we get organized when it comes to AI?
  • Do we need an AI Council or Board? If so, what does one look like? Who should be on it?
  • What program milestones should we consider when it comes to AI adoption as an organization?
  • What AI security policies, procedures, and training should we implement?
  • What metrics related to AI adoption, use, and risk should we track?
  • What are “known unknowns” when it comes to AI adoption / use? How can we try to reduce risks?

If so, consider Neuvik’s AI Governance and/or AI Program Development services. These structured initiatives assist organizations in establishing an AI governance program and/or in building the foundational program capabilities needed to understand, manage and measure AI-related risk. These services can be used for organization-wide initiatives or for smaller scale implementation efforts.

AI Vendor Risk Management

As with Third-Party Risk Management (TPRM), AI vendor management is critical to ensure visibility and reduce risk.

Want to gain an understanding of risks associated with your AI vendors and asking:

  • What can we do to ensure the AI tools we’re on-boarding are secure?
  • What types of questions should we ask AI vendors when planning on-boarding?
  • What does our procurement department need to know about AI risk?

Neuvik’s AI Vendor Risk Management services assist your organization in building an AI-specific vendor risk management questionnaire, implementing review processes, and creating a vendor risk register.  

AI Asset Inventory

Do you know what your organization’s AI footprint is? Wondering:

  • How do we identify AI tools in use across our organization?
  • What can we do about “shadow AI”? Is this something we can proactively mitigate?
  • What types of tools exist to keep track of AI tools in use over time?

Consider Neuvik’s AI Asset Inventory services. This service assists in identifying AI tools in use and creating an AI-specific asset inventory, which can then be integrated with existing asset inventory tools.


Understanding your organization’s AI footprint is critical to maintain visibility, implement controls effectively, and perform governance.


This service can assist organizations beginning from scratch or looking to accelerate the speed to populate an existing AI asset inventory tool.

AI Transparency, Explainability & Functionality Review

Asking the following:

  • We know that AI doesn’t always do what it says it will, but we aren’t testing but how to test for that. Can you help?
  • I’ve read way too much documentation about AI risks, but I don’t know what actually matters – can you clarify?
  • We’re interested in technical testing but aren’t quite sure what we should include. Can you help us?

Consider Neuvik’s AI Transparency, Explainability, Fairness service. The goal of the AI Transparency, Explainability, and Functionality Review is to ensure AI tools act as expected and do not introduce unknown risks due to a lack of familiarity with underlying functionality.

For a specific AI tool, Neuvik will create a comprehensive “portfolio” of insights. These insights include technical functionality of the AI, possible types of biases present, and potential ethical issues that may arise from its use. Where possible, Neuvik will aim to address technical details such as underlying training data and any known vulnerabilities. Additionally, and where possible, Neuvik will cover non-technical insights, such as known hallucinations and/or the AI’s susceptibility to common adversarial techniques, such as flattery, intimidation, or other human-like commands intended to distract the AI and force an unexpected action.

This service would be most effectively complemented by AI penetration testing and/or AI risk assessment, or as part of a series of broader AI Risk Management service offerings.

Conclusion

Ready to secure your AI implementation? Contact us today or learn more about Neuvik’s AI Risk Management services: https://neuvik.com/our-services/cyber-risk-management/.