M.R. Asks 3 Questions: Bruno Kurtic, Co-Founder & CEO, Bedrock Security

May 9, 2025 | Article

Bruno Kurtic is an accomplished entrepreneur with 30 years of experience in building and leading high-growth technology companies. Before founding Bedrock, Bruno co-founded Sumo Logic, where he crafted the company’s product and strategy, leading it from inception to a successful IPO.

His hands-on approach in go-to-market activities and securing patents helped raise over $346 million in funding from top-tier investors, including Greylock Partners and Sequoia Capital. Following the IPO, Bruno served as Chief Strategy Officer, continuing to guide the company’s strategic direction. Bruno earned his undergraduate degree in Quantitative Methods and Computer Science from the University of Saint Thomas, followed by an MBA from MIT.

He now leads Bedrock Security’s accelerated approach to cataloging data, enabling security, governance and data teams to proactively identify risks, enforce policies and optimize data usage — without disrupting operations or driving up costs.

M.R. Rangaswami: What led you to found Bedrock Security, and how does your metadata lake approach fundamentally differ from existing security solutions?

Bruno Kurtic: After thirteen years leading Sumo Logic from inception to a public company, I took a year off for reflection. My time off coincided perfectly with the explosion of advancements in generative AI, as the technology began to solve previously unsolvable problems. During this time, I unplugged, traveled, learned and engaged with more than 100 technologists working across generative AI, security and operations. Through all of those conversations, it became clear that data security was the main blocker for enterprises trying to innovate faster, move to the cloud and adopt new technologies like large language models. That is why I embarked on a journey to build Bedrock Security and solve the data security problem for the age of AI.

What makes the Bedrock Security approach fundamentally different is our metadata lake technology: a comprehensive, continuously updated view of all enterprise data. Traditional security solutions have struggled because securing data requires first knowing where it is and what it is; only then can you properly secure it with additional context. Data Security Posture Management (DSPM) tools have historically been built with a singular focus on detecting sensitive data. At Bedrock Security, our metadata lake is built as a flexible knowledge graph, providing deep insights into what data exists, where it resides, how sensitive it is, how it moves, who has access to it and more, all in one place. With this approach, we empower security, data governance and data management teams to instantly understand whether data is sensitive, authorized for usage and compliant, without manual overhead. This foundation is required for many security use cases, including DSPM, data governance, threat detection and response, intellectual property tracking and more.
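As a rough illustration of the knowledge-graph idea described above (the classes, fields and asset names below are hypothetical, not Bedrock’s actual schema), a metadata record might tie together what a piece of data is, where it lives, how sensitive it is and who can reach it — all queryable in one place:

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    """One node in a toy metadata graph: what the data is and where it lives."""
    name: str
    location: str        # e.g. an object-store bucket or database
    classification: str  # e.g. "PII", "financial", "public"
    sensitive: bool
    accessors: set = field(default_factory=set)  # identities with access

class MetadataLake:
    """Hypothetical in-memory stand-in for a metadata lake / knowledge graph."""
    def __init__(self):
        self.assets = {}

    def register(self, asset: DataAsset):
        self.assets[asset.name] = asset

    def sensitive_assets(self):
        # Answer "what sensitive data exists?" without scanning the data itself.
        return [a for a in self.assets.values() if a.sensitive]

    def who_can_access(self, name: str):
        # Answer "who has access?" from the same unified metadata view.
        return sorted(self.assets[name].accessors)

lake = MetadataLake()
lake.register(DataAsset("customers", "s3://prod/customers", "PII", True,
                        {"analytics", "support"}))
lake.register(DataAsset("docs", "s3://prod/docs", "public", False, {"everyone"}))

print([a.name for a in lake.sensitive_assets()])  # ['customers']
print(lake.who_can_access("customers"))           # ['analytics', 'support']
```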

M.R.: How is AI governance reshaping security teams, and what new skills are required for security leaders to succeed in this environment?

Bruno: The rapid adoption of AI technologies is fundamentally changing the security landscape, creating both new challenges and opportunities for security teams. According to our recent “2025 Enterprise Data Security Confidence Index,” which surveyed over 500 cybersecurity professionals, we’re seeing a dramatic shift in responsibilities, with 86% of professionals reporting that data security duties are expanding beyond traditional boundaries. More than half of respondents have added new AI data responsibilities in the past year, and CISOs, CSOs and CTOs are particularly impacted, with nearly 70% having taken on new data discovery responsibilities specifically for AI initiatives.

There is a clear gap between AI adoption and AI security capabilities. Security teams and data teams are struggling to keep up with the exponential growth of data while security budgets and resources only grow linearly. Security leaders are now expected to provide visibility and control across an organization’s entire data ecosystem, particularly as it relates to AI systems. Fewer than half of organizations (48%) are highly confident in their ability to control sensitive data used for AI/ML training, creating significant risks of data leaks, compliance violations and reputational harm. To tackle these challenges, security teams must prioritize AI data governance starting with a comprehensive AI Data Bill of Materials (DBOM). A DBOM provides a complete, contextual inventory of all data flowing into AI systems, from training through deployment, and serves as a foundational tool for enforcing safe, compliant and trustworthy AI governance.
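In its simplest form, a DBOM like the one described above could be sketched as an inventory that is checked before any dataset enters a training pipeline. The entries and field names here are invented for illustration:

```python
# Hypothetical AI Data Bill of Materials (DBOM): a contextual inventory of
# every dataset feeding an AI system, consulted before training is allowed.
dbom = [
    {"dataset": "support_tickets", "source": "crm",
     "classification": "PII", "approved_for_training": False},
    {"dataset": "product_manuals", "source": "wiki",
     "classification": "public", "approved_for_training": True},
]

def training_violations(entries):
    """Flag datasets that would enter the training pipeline without approval."""
    return [e["dataset"] for e in entries if not e["approved_for_training"]]

print(training_violations(dbom))  # ['support_tickets']
```

The point of the sketch is the workflow, not the data model: every input to the model is enumerated with its source and classification, so a governance check becomes a lookup rather than a forensic exercise.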

For security leaders to succeed in this new environment, they need to develop a more data-centric mindset and acquire skills that span traditional security domains, data governance and AI oversight. This includes the ability to measure effectiveness through OKRs, such as time-to-access for data requests, amount of data without designated owners and proportion of classified versus unclassified data. Security leaders must shift from viewing governance as merely a defensive measure to understanding it as a business enabler. When organizations can track and understand their data flows end-to-end, governance transforms from a hindrance into a growth driver that allows companies to responsibly accelerate innovation, confidently enter new markets and create differentiated user experiences.
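The three governance OKRs named above are concrete enough to compute. A minimal sketch, assuming hypothetical records of access requests and asset ownership (none of these names come from Bedrock):

```python
from statistics import mean

# Invented sample records for the three OKRs mentioned above.
requests = [
    {"asset": "customers", "hours_to_grant": 4},
    {"asset": "docs", "hours_to_grant": 1},
]
assets = [
    {"name": "customers", "owner": "data-team", "classified": True},
    {"name": "logs", "owner": None, "classified": False},
]

# OKR 1: average time-to-access for data requests.
time_to_access = mean(r["hours_to_grant"] for r in requests)
# OKR 2: count of data assets without a designated owner.
unowned = sum(1 for a in assets if a["owner"] is None)
# OKR 3: proportion of classified versus unclassified data.
classified_ratio = sum(a["classified"] for a in assets) / len(assets)

print(time_to_access, unowned, classified_ratio)  # 2.5 1 0.5
```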

M.R.: What specific risks do organizations face as they struggle to protect sensitive data for AI/ML training? How should companies approach securing their AI data pipelines?

Bruno: The most immediate risk is inadvertently feeding sensitive or unclassified data into AI systems, leading to unintended exposure through model outputs, compliance violations and data misuse at scale. This becomes especially problematic with the exploding volume of unstructured data feeding today’s AI models. Our research shows 79% of organizations struggle to classify sensitive data in AI systems, leaving them vulnerable to both compliance violations and data breaches. With the commercialization of agentic AI, we expect to see greater volume and speed of data sharing, further increasing the urgency for scalable and accurate data governance.

Enterprise security fundamentally exists to protect data. However, most enterprise security tools and technologies are blind to data and focused instead on infrastructure, identities, networks and perimeters, because data is fluid, unstructured and difficult to contain. This is why data security is an essential lens for all security efforts. Beyond security risks, inadequate data governance creates legal exposure under regulations like GDPR, CCPA, HIPAA and the EU AI Act. Perhaps most damaging to business outcomes is when model bias and poor performance occur because training data isn’t properly vetted and secured, leading to algorithmic discrimination or unreliable outputs that undermine trust in AI initiatives.

To secure data, you first need to know where it is (discovery), then know what it is (classification), and only with that bedrock of understanding can you start securing it with additional business and usage context (entitlements, risk assessment, governance, threat detection and more). A metadata lake approach can help organizations secure AI data effectively by providing continuous visibility into their data landscape. Companies can use this to implement automated AI data discovery and classification to identify sensitive information before it has a chance to enter training pipelines, and to enforce least-privilege access controls based on data sensitivity and purpose. This approach also empowers security, data governance and data management teams to understand whether data is sensitive, authorized for usage and compliant without manual overhead.
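The discovery → classification → enforcement sequence described above can be sketched as a toy pipeline. The file contents, the regex-based classifier and the gating rule are all simplified illustrations, not a real product’s logic:

```python
import re

def discover():
    # Stand-in for scanning storage: returns (path, contents) pairs.
    return [
        ("notes.txt", "meeting notes, nothing sensitive"),
        ("users.csv", "email: jane@example.com, ssn: 123-45-6789"),
    ]

def classify(text):
    # Toy classifier: flag obvious PII patterns (SSN-like numbers, emails).
    if re.search(r"\b\d{3}-\d{2}-\d{4}\b", text) or "@" in text:
        return "sensitive"
    return "public"

def training_candidates(files):
    # Enforcement: only non-sensitive data may enter the AI training pipeline.
    return [path for path, text in files if classify(text) == "public"]

print(training_candidates(discover()))  # ['notes.txt']
```

The ordering matters: classification is meaningless without discovery (you cannot label data you have not found), and enforcement is meaningless without classification (you cannot gate data you have not labeled).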

M.R. Rangaswami is the Co-Founder of Sandhill.com