Research Hub > Rules Before Tools: Why Governance Is Key to Securing AI
Article
7 min

Rules Before Tools: Why Governance Is Key to Securing AI

Many organizations have begun rapidly integrating AI technology into their daily work. But, without a strong governance framework in place, you’re risking “shadow AI” vulnerabilities, project failure and even exposure to new, novel security threats.

Shot of a group of young people using computers during a late night in a modern office

As artificial intelligence (AI) technology becomes a more pervasive component of modern business strategies, promising increased productivity, innovative solutions and competitive advantages, many organizations  are eager to implement it as quickly as possible to reap the immediate benefits.

However, adopting AI without a cohesive, flexible security strategy built around it can lead to project failure or even exposure to new, novel risks introduced by the technology itself. A strong AI security strategy should address these risks while considering the broader business implications of AI use. This begins with establishing a secure environment through robust governance frameworks, ethical standards and effective risk management.

Enterprise vs. Commercial Use Cases for AI

Whether your organization is concerned about being “left behind” when it comes to AI technology or has practical applications for AI in mind, the most important question to ask before beginning an AI initiative is, “What is the return on investment I expect to see?”  With diverse opportunities for AI integration across a wide spectrum of applications, use cases tend to fall into two distinct categories: enterprise and commercial.

In an enterprise setting, organizations use AI as a feature or a product, enabling end-users with productivity tools like Chat GPT or more customizable applications like Salesforce Einstein to work faster by leveraging AI capabilities. While these tools can help streamline operations to a degree, the question becomes, where do these productivity gains go and how does that translate to ROI?

Commercial use cases, on the other hand, tend to use AI as a “factory,” leveraging large language models (LLMs) to extract business intelligence or implementing customer-facing applications like chatbots. While these use cases require more complex planning and execution, ROI gains are much more apparent as these initiatives aim to enhance customer experiences and drive business growth.

Early conversations around AI adoption should center around achieving a clear goal. Is your organization’s goal to achieve measurable ROI or to simply position itself as early adopters of AI technology? Whatever the goal, you must balance the need for innovation while accounting for security in every facet of your strategy — and it all starts with governance. 

Top 5 Vulnerabilities of LLMs and the Risks of Shadow AI

Unless your organization is planning on developing and training its own large language model LLM, it’s likely that you will leverage an existing one. While this is the simplest way to implement AI in your organization, it means that the LLM will likely have the same vulnerabilities as any of the top LLMs on the market.

According to the Open Web Application Security Project (OWASP), a few of the most common vulnerabilities for most LLMs in 2025 include:

  1. Prompt injection: Malicious inputs from end users can “jailbreak” your LLM.
  2. Insecure output handling: Outputs can become a gateway to compromising backend systems.
  3. Training data poisoning: Bad actors can contaminate your model's training data.
  4. Model denial of service: Attackers can overload your LLM, causing degraded services or inflated costs.
  5. Sensitive information disclosure: Sensitive data can inadvertently be exposed by your LLM.

Due to the accessibility and proliferation of AI technology in the workplace, the reality is that professionals in all industries have begun using AI on their own to some extent. Using “shadow AI” or AI tools that are not vetted or managed by the company amplify these risks, including:

  • Data leakage: Team members may inadvertently share sensitive company information with third-party AI tools. Because AI models are trained on the inputs of users, this information may lead to data leaks, breaches or non-compliance with data protection regulations like the General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA) .
  • Inaccurate or unreliable outputs: Unvetted AI tools may produce incorrect, biased or low-quality results, which can harm decision-making or damage the company's reputation.
  • Lack of governance: Without company oversight, there’s no way to ensure that the tools align with organizational policies, ethical standards or compliance requirements.

One of the most predominant risks of shadow AI is the unintended permissions that users can grant to AI tools, a vulnerability still present in many popular AI tools today. Essentially, any permission a user has within the company network can also be extended to the AI tool. For instance, if someone uses the tool to request access to sensitive payroll data, the AI might provide information that the user shouldn’t have access to, creating a serious security breach.

Mitigating AI Security Risks with an AI Governance Framework

The solution to mitigating these risks lies in creating a structured governance framework that ensures a secure and ethical approach to AI deployment. The first step is to establish a steering committee to oversee AI implementation. This committee will be responsible for vetting tools, managing third-party risks and aligning AI initiatives with the organization’s strategic goals.

From there, your organization can set up an acceptable use policy detailing when, where and how to use AI technology in everyday work. This acceptable use policy should be based on the answers to questions like:

  • How is your organization currently blocking third-party generative AI (GenAI) from the internet?
  • How is your organization preventing team members from subverting those blockages and using “shadow AI?”
  • Do you have a sanctioned alternative to third-party GenAI?

7 Key Elements of an AI Governance Framework

After that, it’s time to build a governance framework that involves defining seven key elements:

  1. Scope: Define the specific AI applications, systems or processes that fall under the governance framework. This helps in setting clear boundaries and focuses on areas that are crucial for organizational success.
  2. Structure: Assign clear roles and responsibilities to your AI governance board or committee. Utilize existing frameworks like the NIST AI Risk Management Framework (RMF) to guide the governance structure.
  3. Ethics and compliance: Ensure that AI practices align with ethical standards, adhere to regulatory requirements and comply with legal mandates. This protects organizations from potential reputational damage and legal penalties.
  4. Risk management: Conduct thorough risk assessments and regular penetration testing to identify and mitigate any weaknesses in the system.
  5. Access controls: Implement strict access controls and monitoring for LLM systems to prevent unauthorized use or tampering.
  6. Model oversight: Maintain clear oversight of AI/ML models to ensure they are accurate, traceable, interpretable and explainable. This also includes creating a knowledgebase that allows you to curate the information that your GenAI tool can access, avoiding granting your LLM unintended permissions.
  7. Continuous evaluation: Regularly assess AI/ML systems for compliance with established standards and identification of potential risks. This ongoing evaluation ensures that AI systems remain effective and secure over time.
  8. Data quality: Implement data governance policies to ensure high-quality data is maintained with integrity and used ethically. Quality data is the backbone of effective AI systems.

Security Solutions for AI

Like any security strategy, empowering your teams with knowledge of the greatest risks to your organization is key. When securing an LLM, for example, focusing on inputs and outputs will be the greatest challenge. For example, how do you ensure that the inputs are not malicious? How can you be certain that the outputs are ethical, valid and not an AI “hallucination?”

The best practice today is to implement effective AI security posture management (AIPSM) strategies. Addressing gaps that typically are not addressed by the traditional app security lifecycle development model, AIPSM includes data security posture management and cloud security posture management with considerations specific to AI vulnerabilities.

At a time when developers can download any LLM they choose or even develop their own AI agents without company knowledge, AIPSM is more important than ever. Informed by your AI governance framework and acceptable use policy, AIPSM forces your security team to fill gaps that aren’t addressed by the traditional application security development lifecycle model by asking questions like, “How do we scan and vet these new AI models?”

It all comes down to strategies built around discovery, visibility, vulnerability management and runtime protection. For example, one important part of most AIPSM strategies involves creating an LLM firewall. Unlike a traditional firewall, an LLM firewall is designed to protect organizations from risks associated with using AI models by monitoring and controlling the flow of data between users and the AI. This includes preventing sensitive or confidential information from being shared with the model, filtering unethical or harmful outputs and ensuring compliance with company policies and regulations. Trained by your organization, an LLM firewall can help mitigate risks like data leaks, misuse or unintended consequences of AI interactions.

The Future of AI — and its Potential Risks

As AI becomes more engrained in our daily lives and work, AI agents appear to represent the cutting edge of AI security. Since they can be developed on their own or purchased “off the shelf,” AI agents seem poised to perform the kinds of complex tasks traditionally handled by humans. As AI becomes more capable, it seems likely that agentic AI security, in which AI agents do everything from security operations center (SOC) analyst work to identity and access management (IAM) provisioning and de-provisioning, is almost inevitable.

However, their development and deployment come with a myriad of challenges. The growing trend of using multiple AI agents, each with distinct personas and tasks, may introduce more challenges than solutions. These agents can be programmed with unique personalities and points of view, allowing them to handle specific tasks like “security task A, B or C.” Their differing perspectives may lead to slightly varied answers to the same question, and they can operate simultaneously, routing tasks among themselves. While this approach offers flexibility and efficiency, it also creates significant risks.

The primary issue is the lack of visibility and control over these agents. Organizations don’t often know which agents have been created, what tasks they perform or where they operate. There is no clear way to audit their actions, understand their decision-making processes or verify their outputs. Additionally, questions arise about their access and credentials — what they have, how they’re being used and whether misuse would even be detected. Without proper oversight, this lack of transparency poses serious security and governance concerns.

It’s for reasons like this that Gartner predicts that over 40% of agentic AI projects will be canceled by 2027. While organizations won’t want to slow down innovation by stopping developers from experimenting with AI, doing manual discovery of AI tools and establishing effective AI processes and procedures is the best place to start.

Successfully adopting AI technologies means taking a comprehensive approach to security, governance and risk management. By implementing a robust AI governance framework, your organization can unlock the full potential for innovation that AI promises — without the risks that come with shadow AI.

Engaging an expert partner with vast experience in all aspects of AI integration and development as well as security can help your organization navigate this AI security journey. From data governance to the secure development of AI agents and more, a trusted partner can help integrate security into all of your AI initiatives, ensuring that innovation and security are seamlessly connected.

Walt Powell

Lead Field CISO

Walt Powell is the Lead Field CISO at CDW, specializing in providing executive guidance around risk, governance, compliance and IT security strategies.