Skip to main content

M.R. Asks 3 Questions: George Gerchow, CSO, Bedrock Data

By August 8, 2025September 4th, 2025Article

George Gerchow is Chief Security Officer at Bedrock Data and was formerly Sumo Logic’s Chief Security Officer & SVP of IT.

His background includes security, compliance, and cloud computing disciplines. George has years of practical experience in building agile security, compliance, and IT teams in rapid development organizations.

George has been on the bleeding edge of public cloud security, privacy, and modernizing IT systems since being a Co-Founder of the VMware Center for Policy and Compliance. With 20+ years experience in the industry, he is a Faculty Member for IANS (Institute of Applied Network Security), and is also a known philanthropist and CEO of a nonprofit corporation, XFoundation.

In this conversation we look at how he and his teams are navigating AI Security Through Data Protection and Governance.

M.R. Rangaswami: What drew you to Bedrock Data at this stage of your career? What makes the company’s approach to data security different from existing solutions in the market?

George Gerchow: I was introduced to Bedrock Data as a former customer when I worked at MongoDB. I was truly impressed with their technology in solving a problem we faced around data classification at scale before implementing enterprise AI-based enterprise search. During the proof of concept, we observed that the solution was effective at scale. 

What truly makes Bedrock stand out is the metadata lake. This all-in-one repository shows you where your data resides and what’s occurring throughout your entire environment, which is something the industry currently lacks. This metadata lake approach offers organizations visibility they’ve never experienced before.

Data protection needs are expected to grow significantly. For years, we’ve relied on endpoint security, perimeter security and other methods with some success, but moving forward, true defense in depth will be driven by data protection. 

Data is becoming increasingly difficult to handle. In fact, enterprises are storing more and more data, especially with the rise of AI, and data volumes are triple what they were before. Data exhaust is a real issue because people simply don’t delete data. Think about it personally: when was the last time you went through and deleted any of your emails or information from your device or the cloud? You never do. This problem will only continue to grow in the enterprise environment.

Ultimately, the combination of proven technology, the metadata lake architecture, and working with people I trust made Bedrock Security an obvious choice at this stage of my career.

M.R.: As organizations deploy AI agents that directly access sensitive enterprise data, what new categories of risk should security leaders be preparing for that go beyond traditional data protection concerns?

George: The biggest risk by far is shadow AI, and this will become a major problem if security teams don’t address it proactively. Just like with shadow IT and the cloud in the past, shadow AI will emerge when security teams slow things down or say no without offering proper alternatives. When you don’t provide a platform, process or system that people trust to bring their AI ideas forward, developers and business users will simply go around you.

The main point is to be open about what we’re genuinely doing with AI. Security teams should fairly evaluate AI requests by focusing on two key questions: What does this do for your customers, and what does it do for the company? These should be the primary considerations, or else you’ll be flooded with requests that are hard to properly assess.

There’s also a new type of risk related to entitlement and data correlation that’s alarming. With AI and enterprise search capabilities, you might have a document marked as non-critical, but when AI systems analyze and connect information across multiple documents, a single sentence or word could become critical when combined with data from other sources. This creates entirely new attack vectors that traditional data protection methods weren’t built to address.

We’re also seeing increasingly advanced AI-driven attacks that organizations must prepare for. These include data poisoning, prompt injection vulnerabilities and sophisticated social engineering techniques that utilize AI-generated content. The complexity and scale of these attacks will necessitate entirely new defensive strategies.

M.R.: How can organizations prevent shadow AI from becoming the next major security blind spot, and what governance frameworks should be in place before AI adoption accelerates beyond manual oversight capabilities?

George: Security leaders must actively use AI, dedicating the first hour daily to understand and manage it. Rapid adoption is vital, but awareness of risks is key. Many security professionals may unintentionally hinder progress due to unfamiliarity with AI and threats. Transparency about data used in AI, like creating an AI Data Bill of Materials (AI DBOM), is essential, especially since reverse engineering outputs is hard.

Security teams must adopt an adversarial mindset from the start. You can’t defend an AI system until you know how to break it. This involves continuously testing for vulnerabilities like data poisoning and prompt injection attacks, conducting red team exercises and running bug bounties against AI implementations.

To prevent shadow AI from becoming the next major security blind spot, organizations must transition from manual, form-based governance to context-aware, lifecycle-driven oversight. Traditional governance frameworks were designed for static assets and human actors. AI moves faster, adapts dynamically, and can operate beyond conventional visibility if not supported by a structured, transparent protocol. This is where the Model Context Protocol (MCP) becomes essential.

MCP enables security and governance teams to embed real-time context into every AI interaction from model input/output behavior, to identity of requestors, to the sensitivity of underlying data accessed. Instead of relying on policy documents or manual controls, MCP enables the enforcement of decisions based on real-time signals, such as data classification, risk posture and access justification.

Manual oversight won’t scale with AI adoption, so organizations need to automate security processes and build self-learning systems that adapt to new threats. Equally important is breaking down organizational silos because AI threats represent cross-functional problems that require constant communication across departments.

We need to shift from compliance-focused security to risk-based strategies using AI-powered security solutions. You can’t defend against AI-driven threats with old tools designed for on-premises environments because they are not scalable, cost-effective or fast enough for today’s data challenges. The opportunity to stay ahead of shadow AI is limited, so organizations must act proactively to build these frameworks before AI adoption outpaces their manual oversight capabilities.

M.R. Rangaswami is the Co-Founder of Sandhill.com