M.R. Asks 3 Questions: Aryan Poduri, Author and High School Senior

Aryan Poduri is a Bay Area high school senior who likes to build. He's the author of GOAT Coder, a beginner-friendly programming book for children that has sold over 2,000 copies, thanks to its focus on making coding approachable, practical, and enjoyable.

In his words, Aryan is especially interested in how technology can be used to teach, empower, and bring people together, rather than intimidate them. When he’s not working on projects or thinking about his next idea, Aryan can usually be found playing basketball, watching football, experimenting with design and digital art, or working out.

He sees these hobbies as a creative reset. A way to step away from screens, move his body, and come back to problem-solving with a clearer head.

As we head into the holidays, we hope this conversation with Aryan inspires you to talk with the young people in your life about access and opportunity in tech; we can learn so much from them.

M.R. Rangaswami: In your opinion, what stops kids from getting into coding?

Aryan Poduri: One of the biggest roadblocks is that most kids simply don’t get early exposure. Coding is treated like some mysterious adult-only skill, when really it’s just another form of problem-solving. A lot of schools don’t have strong computer science programs, and even when they do, they usually start way too late. By the time students finally meet coding, they’ve already built this idea in their head that it’s “too hard” or “not for them.” It’s basically like giving someone a bicycle at 17 and saying, “Here, ride this like you were six.”

The other problem is the resources themselves. Most beginner books feel like they were written by robots trying to talk to other robots. Kids look at the first page, see a paragraph about “data types” or “object-oriented paradigms,” and immediately check out. It’s not that they don’t want to learn. It’s that the door is basically locked from the start. If you want kids to walk in, you have to make the entrance a little more inviting than a wall of academic vocabulary.

M.R.: Does giving kids more opportunities to learn coding matter if they’re not already thinking about it?

Aryan: Coding is becoming a basic skill like writing emails or understanding how not to burn toast. When kids learn to code early, they’re not just memorizing random commands; they’re learning how to think. They get better at breaking big problems into smaller ones, staying patient, and trying again when things fail. Those skills spill into everything else they do, whether it’s school, sports, or figuring out how to fix the family Wi-Fi (which instantly makes you the household hero).

On a broader level, expanding access to coding matters because it opens doors that many kids wouldn’t otherwise get. Not everyone grows up surrounded by tech or has a parent who can help them learn. When more kids have access, the tech world gets more voices, more creativity, and more people solving real problems from different angles. It’s basically the difference between a playlist with one song on repeat and an entire Spotify library. You just get way more possibilities.

M.R.: How do you hope GOAT Coder will help the next generation of coders?

Aryan: GOAT Coder exists because I got tired of seeing kids run into the same boring barriers I did when I first started learning. I wanted something fun. Something that felt more like a conversation than a textbook. So I wrote the book I wish younger-me could’ve opened: one with jokes, stories, and straightforward explanations that actually make sense. The whole point is to take away the fear factor and replace it with curiosity. If a kid finishes a chapter and thinks, “Wait, that was actually fun,” then I’ve basically won.

The book is my way of giving kids an easy entry point into a world that usually looks locked away behind complicated symbols and intimidating explanations. Instead of telling them, “Here’s everything you need to know before you start,” it says, “Let’s start, and you’ll figure things out as we go.” It’s a small step, but it helps build confidence, and confidence is everything in coding. If you can get a kid to believe they can do it, they usually will.

M.R. Rangaswami is the Co-Founder of Sandhill.com

SEG's SaaS – M&A and Public Market Report: 3Q25

Our friends at Software Equity Group have released their Q3 Quarterly SaaS Report.

With SaaS deal activity surging to record levels in 3Q25, they're reporting that this past quarter was the strongest SEG has ever tracked.

Year-to-date volume has surpassed 2,000 deals, putting 2025 on pace to exceed 2,500 total transactions, a new all-time high.

Here, we've pulled two noteworthy M&A highlights and two public market highlights from the report.

(For the full report, click here.)

SAAS M&A HIGHLIGHTS

1. SaaS M&A hit an all-time high in 3Q25 with 746 transactions, up 26% year over year and up from 637 deals in 2Q25.

The acceleration reflects buyer confidence, pent-up demand, and improved financing. An expanding SaaS universe and private equity firms selling businesses to return capital have boosted deal supply, supporting record activity and a durable baseline for future SaaS dealmaking.


2.  Private equity remains the dominant force in SaaS M&A, accounting for 58% of total transactions in 3Q25.

This is consistent with long-term trends since 2020: the market remains led by private equity, sustaining deal velocity, while strategic buyers continue to pursue acquisitions of businesses with growth opportunities in AI.

SAAS PUBLIC MARKET HIGHLIGHTS

1. The SEG SaaS Index™ continued to improve in 3Q25.

The upper quartile alone gained almost 12% since April. The strong recovery over just five months highlights the resilience of leading SaaS companies, which continue to demonstrate durable revenue growth and expanding profitability despite intermittent macroeconomic shocks.

Notable outperformers this quarter included:

  • Oracle (+33%)
  • CS Disco (+20%)
  • CrowdStrike (+19%)
  • Domo (+16%)

2. Profitability leadership in 3Q25 remained concentrated in essential infrastructure and financial categories. 

The SEG SaaS Index median EBITDA margin rose from 4.6% to 9.2% year over year, led by categories such as ERP & Supply Chain (26.5%), Human Capital Management (23.8%), and Financial Applications (11.5%).

    To download the full report, click here:

    Clare Christopher is the editor at Sandhill.com

2H 2025: Sector Update on Mega Acquihires in AI: An Allied Adviser Report

Our friends at Allied Advisers have released their sector update on mega acqui-hires in AI, observing how Big Tech has increasingly turned to mega acqui-hires—high-value strategic acquisitions focused on talent and IP rather than entire companies.

    This shift enables rapid integration of elite AI teams and technologies, enhancing innovation capabilities without traditional M&A complexities.

    This sector update will review:

• How talent scarcity, rising costs, and regulatory flexibility have made mega acquihires Big Tech's competitive strategy
• Rapid talent acquisition – its approach and impact
• The effects of bypassing regulatory and transaction-related delays to shorten time-to-innovation
• The human capital impact and the reshaping of the ecosystem

Top AI professionals are now in "treat mode"—highly valued and aggressively pursued—while the overall tech talent landscape sees selective growth in emerging roles and contraction in traditional segments.

    Looking ahead, we see strong tailwinds in these types of AI acquisitions that prioritize talent and IP integration, focusing on disruptive AI startups with top technical teams. Success hinges on effectively merging teams and cultures thereby accelerating innovation. Stakeholders must navigate this evolving landscape carefully to balance consolidation with ecosystem diversity and sustained competitive advantages.

To read Allied Advisers' full sector update, click here:

    Gaurav Bhasin is the Managing Director of Allied Advisers.

Quick Answers to Quick Questions: Anand Srinivasan, Managing Partner of Kaizen Analytix

    Anand Srinivasan is a Managing Partner at Kaizen Analytix, a leading provider of AI, data analytics, and technology services and solutions.

    Headquartered in Atlanta, Kaizen is recognized for its speed, flexibility, and ability to rapidly deliver actionable insights that drive sustainable business benefits across the value chain. The company has been spotlighted by Gartner, NPR, Forbes, and Entrepreneur, and named to the Inc. 5000 list as one of America’s fastest-growing private companies.

    In this interview, Anand discusses how Kaizen combines business acumen, deep subject matter expertise, and technical know-how to help organizations unlock measurable results, and he shares his perspective on the evolving role of AI and analytics in building competitive advantage.

    M.R. Rangaswami: Why is Agentic AI so important for companies right now and what are solutions you’re seeing that are different from traditional generative AI implementations?

Anand: Research from Gartner has named Agentic AI the top tech trend for 2025, and Kaizen is seeing this trend firsthand with our clients. Agentic AI is becoming essential for companies because it addresses many of the shortcomings of traditional AI deployments, which simply produce outputs without context or actionability. Agentic systems instead provide recommendations on how to act upon the data, driving decisions and actions in real time – autonomously and at scale.

We've entered a new era where AI agents don't just analyze data–they act on it. At Kaizen, we take a value-first approach to AI. We don't chase hype, but we recognize that hype often precedes real value creation. Our agentic AI solutions intentionally combine predictive analytics, agentic frameworks, and generative AI, leveraging each where it is strongest rather than relying on any one technology in isolation.

    Predictive analytics provides the traceability and visibility enterprises depend on, while agentic and generative AI unlock new levels of automation and action. By curating solutions that balance these strengths and weaknesses, we deliver systems that are both explainable and adaptive. 

    M.R.: As enterprises adopt agentic AI at scale, what are the biggest challenges companies face in integrating these intelligent agents into existing workflows?

    Anand: Enterprises typically encounter challenges on three fronts: technology, process, and people. On the technology side, legacy systems often struggle to support the API-driven integrations that agentic AI agents require. From a process perspective, some workflows are well-suited for automation, while others need to be redesigned or adapted before AI can add real value.

    On the people side, organizations face a shortage of skilled AI engineers and change-management expertise needed to sustain adoption at scale. It’s no surprise that research from IDC indicates that 88 percent of AI pilots fail to reach production, highlighting the difficulty of scaling AI initiatives beyond initial experiments. 

    Equally important when considering Agentic AI solutions is making sure there is a human element. We find that successful AI deployments include a subject-matter expert who serves as a “human in the loop.” This role provides critical oversight to ensure responsible guardrails, contextual understanding, and alignment with business objectives. The result is agentic AI that is pragmatic, trusted, and designed to drive measurable outcomes–whether in pricing, anomaly detection, or extracting value from unstructured data. 
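To make the "human in the loop" idea concrete, here is a minimal, generic sketch of a review gate between an agent's recommendation and its execution. The function names and thresholds are hypothetical and are not drawn from Kaizen's platform.

```python
# A minimal, generic "human in the loop" gate for agent recommendations.
# Names and thresholds are illustrative only, not Kaizen's implementation.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str               # e.g. "reprice SKU-1042 to $18.99"
    confidence: float         # model-reported confidence, 0..1
    estimated_impact: float   # projected business impact in dollars

def requires_review(rec: Recommendation,
                    confidence_floor: float = 0.9,
                    impact_ceiling: float = 10_000.0) -> bool:
    """Route low-confidence or high-impact recommendations to a subject-matter expert."""
    return rec.confidence < confidence_floor or abs(rec.estimated_impact) > impact_ceiling

def execute(rec: Recommendation) -> None:
    print(f"Auto-executing: {rec.action}")

def send_to_expert(rec: Recommendation) -> None:
    print(f"Queued for human review: {rec.action} "
          f"(confidence={rec.confidence:.2f}, impact=${rec.estimated_impact:,.0f})")

for rec in [
    Recommendation("reprice SKU-1042 to $18.99", confidence=0.97, estimated_impact=2_500),
    Recommendation("pause top ad campaign", confidence=0.71, estimated_impact=40_000),
]:
    if requires_review(rec):
        send_to_expert(rec)
    else:
        execute(rec)
```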

    M.R.: How is agentic AI reshaping enterprise decision-making beyond traditional automation?

    Anand: Agentic AI is helping enterprises rediscover their “ikigai”—a renewed sense of purpose in how decisions are made and value is created. Post-COVID workforce shifts led to a loss of tribal knowledge about how things actually work inside organizations. Agentic AI fills that gap by capturing institutional wisdom and enabling employees to focus on elevated outcomes and meaningful insights, not just ‘doing the work.’

    The result is decision-making that is faster, more adaptive, and more customer-focused. Instead of relying on 5 or 10 times the number of software developers to hard-code complex processes, agentic AI finds easier, more efficient ways to add value to the end customer and scales without legacy constraints. It’s not just automation—it’s a way for companies to scale intelligence and impact across the enterprise.

    Agentic AI represents a turning point in how enterprises approach intelligence, automation, and decision-making. Unlike earlier waves of AI that focused mainly on efficiency gains, this new generation of AI agents is reshaping how organizations respond to disruption, scale knowledge, and create customer value. The companies that succeed will be those that balance automation with human judgment, build adaptability into their workflows, and embrace AI not as a standalone tool but as an embedded capability across the enterprise. In many ways, agentic AI is less about replacing work and more about redefining it—elevating the role of people while enabling organizations to compete at an entirely new level.

    Over the next five years, agentic AI will move from early adoption to enterprise necessity. Organizations that embrace it will not only streamline operations but also unlock new levels of resilience, adaptability, and customer value. Those that delay risk being left behind as decision-making itself becomes a core source of competitive advantage.

    M.R. Rangaswami is the Co-Founder of Sandhill.com

SEG's SaaS – M&A and Public Market Report: 2Q25

    Our friends at Software Equity Group have released their Q2 Quarterly SaaS Report.

    As they observed, even with the mixed economic signals, strong buyer appetite persists.

Austin Hammer, Principal at SEG, offers, "SaaS M&A has normalized at a structurally higher level, with Q2 marking another record quarter for volume." The report reveals, "The market is strong, valuations remain compressed but are stabilizing meaningfully, with momentum building as buyers grow increasingly confident and prepare for a more favorable macro environment."

    Here, Sandhill summarizes two highlights from the M&A portion of the report, and two highlights from their SEG Q2 public market update.

    2 SAAS M&A HIGHLIGHTS

    1. Aggregate software industry M&A volume remained strong in 2Q25, continuing the trend established in Q1. The broader market recorded 1,126 software M&A transactions in the quarter, matching 1Q25’s high watermark and reinforcing the post-COVID baseline of ~900+ deals per quarter.

      This sustained activity reflects growing clarity around monetary policy, easing investor concerns tied to geopolitical volatility, and renewed urgency among buyers to capitalize on scalable, efficient platforms.


2. SaaS M&A continued its record-setting pace in Q2 with 637 transactions, the highest quarterly total SEG has tracked. That brought 1H25 volume to 1,273 deals, up 30% from 1H24. With four consecutive quarters of 530+ deals, SaaS has firmly established itself as a consistent and resilient engine of software M&A.

      Recurring revenue, strong retention, and vertical specialization remain key attractors as platform strategies and sponsor activity fuel continued growth.

      SAAS PUBLIC MARKET HIGHLIGHTS

1. The SEG SaaS Index™ rose meaningfully from Q1 to Q2, narrowing its YTD loss to 9.1% after being down over 20% in March. Performance remained bifurcated, with top quartile companies rising 7.5% YTD while the bottom quartile fell over 21%, highlighting growing valuation dispersion.

        Investors rewarded consistent, efficient operators while penalizing those with unprofitable or inconsistent models. While broader equity markets posted modest gains on stable rates and easing inflation, public SaaS remained more sensitive to execution quality, with macro relief benefiting only the strongest names.


2. Valuations compressed, with the median EV/TTM revenue multiple falling to 5.1x, down from 5.9x in 1Q25 and 5.8x in 2Q24. However, dispersion remained wide: top quartile companies traded at 9.0x, compared to just 2.8x in the bottom quartile. Companies with Weighted Rule of 40 scores above 40% traded at 12.4x, nearly 2.5x the Index median, reinforcing that balanced performance still commands a premium (a brief Rule of 40 refresher appears below).

        Similarly, companies with 120% net retention traded at 11.1x, a 117% premium to the Index median, underscoring how deeply investors value durable, expanding customer relationships.
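For readers less familiar with the metric, the Rule of 40 simply adds a company's revenue growth rate to its profit margin. SEG's "Weighted" variant is not defined in this excerpt, so the sketch below uses the common unweighted form purely as an illustration.

```python
# Illustrative only: the common (unweighted) Rule of 40 check.
# SEG's "Weighted Rule of 40" may apply different weights to growth vs. margin.

def rule_of_40(revenue_growth_pct: float, profit_margin_pct: float) -> float:
    """Return the Rule of 40 score: growth rate plus profit margin, in percent."""
    return revenue_growth_pct + profit_margin_pct

# Hypothetical company: 28% TTM revenue growth and a 15% EBITDA margin.
score = rule_of_40(28.0, 15.0)   # 43.0
print(f"Rule of 40 score: {score:.1f}% -> {'passes' if score > 40 else 'misses'} the 40% bar")
```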

      To download the full report, click here:

      Clare Christopher is the editor at Sandhill.com

M.R. Asks 3 Questions: Arthur S. Hitomi, CEO and Co-founder of Numecent

      Few leaders in enterprise technology have had as broad and lasting an impact on how software is delivered and consumed as Dr. Arthur S. Hitomi. As President, CEO, and co-founder of Numecent, Dr. Hitomi has spent his career reimagining software deployment, from his early work shaping Internet standards like HTTP 1.1 and REST, to inventing the patented Cloudpaging technology that now powers seamless, on-demand application delivery across modern enterprise environments.

      With 38 patents issued and many more pending, Dr. Hitomi is a recognized pioneer in on-demand application delivery, application virtualization, streaming, and desktop transformation. At Numecent, he has led both technology development and business strategy, guiding the creation of Cloudpaging and Cloudpager, which allow enterprises to deliver even their most complex Windows applications to any desktop, virtual or physical, without reengineering.

      In this interview, Dr. Hitomi shares his perspective on the real-world pain points Numecent solves, how their platform is helping enterprises modernize faster and more cost-effectively, and how innovations like Cloudpaging AI are scaling transformation across the software ecosystem.

      M.R.: What real pain does Numecent’s Cloudpaging technology solve for enterprises managing legacy Windows applications?

      Dr. Hitomi: One of the most persistent and costly challenges facing enterprise IT today is maintaining business-critical legacy and custom Windows applications through platform upgrades and updates, particularly as Microsoft’s Windows 10 end-of-support deadline looms. Applications written years or even decades ago often contain hardcoded dependencies, outdated middleware, or components that conflict with modern environments. This makes them incompatible with Windows 11 or virtual desktop infrastructures like Azure Virtual Desktop (AVD) or Windows 365, and incredibly costly to maintain in an ever-changing environment.

      Numecent’s Cloudpaging technology addresses this problem at the root. Rather than repackaging, recoding, or reengineering applications (which can take weeks per app) Cloudpaging dynamically containerizes applications so they can run on modern platforms without modification. Its patented technology breaks applications into “pages” and streams only what’s needed to launch, with the rest delivered on demand. Applications behave as though they’re locally installed, yet remain isolated, with full support for drivers, services, middleware, and even conflicting Java versions.
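As a rough mental model only (Numecent's patented, OS-level implementation is far more sophisticated), demand paging of an application can be sketched like this, with pages fetched lazily on first access:

```python
# A toy illustration of demand paging: fetch application "pages" only when
# first accessed, instead of installing everything up front. This is a mental
# model, not Numecent's Cloudpaging implementation.

class DemandPagedApp:
    def __init__(self, name: str, page_ids: list[str], launch_pages: set[str]):
        self.name = name
        self.remote_pages = set(page_ids)            # pages available from the streaming service
        self.local_cache: dict[str, bytes] = {}
        for page_id in launch_pages:                 # prefetch only what launch needs
            self._fetch(page_id)

    def _fetch(self, page_id: str) -> bytes:
        print(f"[stream] fetching {page_id} for {self.name}")
        data = f"<bytes of {page_id}>".encode()      # stand-in for a network read
        self.local_cache[page_id] = data
        return data

    def read(self, page_id: str) -> bytes:
        """Return a page, streaming it on first access."""
        if page_id in self.local_cache:
            return self.local_cache[page_id]
        if page_id not in self.remote_pages:
            raise KeyError(f"{page_id} is not part of {self.name}")
        return self._fetch(page_id)

app = DemandPagedApp("LegacyERP", ["core.dll", "ui.dll", "reports.dll"], launch_pages={"core.dll"})
app.read("ui.dll")   # streamed on demand, after launch
app.read("ui.dll")   # served from the local cache on the second access
```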

      The result? Enterprises can migrate their desktop environments without leaving legacy apps behind or undergoing massive reengineering efforts. Cloudpaging enables seamless compatibility, accelerates digital transformation, and saves millions in development and operational costs.

M.R.: How does Numecent's Cloudpaging and Cloudpager platform deliver long-term cost and operational advantages?

      Dr. Hitomi: Traditional application deployment methods are slow, resource-intensive, and difficult to manage at scale, especially in environments where users are distributed across hybrid workforces and multiple device types. IT teams spend an inordinate amount of time troubleshooting installations, repackaging apps for different environments, and managing updates and rollbacks across multiple platforms.

      Cloudpaging changes the equation by virtualizing applications at the page or binary level. It enables them to launch within seconds, run at native speed, and operate without the need for system reboots or changes to the host OS. Applications are streamed just-in-time, meaning organizations drastically reduce bandwidth usage and storage overhead. More importantly, Cloudpaging maintains application integrity, so each app runs exactly as intended, no matter the underlying system.

      Layered on top is Cloudpager, Numecent’s cloud-native application container orchestration platform. Cloudpager provides a single pane of glass to manage application delivery, versioning, entitlements, and compliance policies across physical desktops, virtual desktop infrastructure (VDI), DaaS, and cloud workspaces. With real-time visibility and control, IT teams can dynamically update or roll back applications, enforce usage restrictions, and reduce downtime, all without disrupting end users.

      The combination of Cloudpaging and Cloudpager empowers enterprises to adopt modern desktop strategies while reducing total cost of ownership, improving end-user experience, and creating a more agile, DevOps-style approach to desktop application management.

M.R.: What growth or opportunity does Numecent see with the launch of Cloudpaging AI and its broader ecosystem impact?

      Dr. Hitomi: The recent introduction of Cloudpaging AI represents a major leap forward in accelerating application modernization at enterprise scale. Historically, even with powerful tools like Cloudpaging, creating containers for complex legacy apps still required skilled engineers to carefully sequence application components and dependencies. With Cloudpaging AI, Numecent leverages machine learning to automatically analyze and package applications in minutes, drastically reducing the effort and time required.

      This innovation unlocks a massive opportunity: enterprises can now containerize and migrate hundreds, even thousands, of legacy and homegrown applications quickly and cost-effectively. It also means partners, system integrators, and managed service providers can scale their services faster and offer end-to-end desktop transformation solutions without the typical bottlenecks.

      As organizations move toward hybrid cloud and modern digital workspaces, Numecent’s technology suite – including Cloudpaging, Cloudpager, and AI Packager – makes it possible to maintain operational continuity while accelerating transformation. Enterprises can retire legacy infrastructure, embrace AVD or Windows 365, and still run mission-critical apps without compromise. It’s a powerful enabler for digital resilience, security, and long-term IT agility.

      M.R. Rangaswami is the Co-Founder of Sandhill.com

AI at the Crossroads: Why CISOs Must Lead the Charge

      By Mike Gentile, CEO of CISOSHARE

      No matter what industry you work in, you know generative artificial intelligence (AI) is here, rewriting the rules of business in plain sight. 

      Marketing teams churn out campaigns in minutes. Developers push code in the time it takes to sip a coffee. The pace is breathtaking, but amid the rush, one question is splashed in red on the walls: 

      Who is making sure this power is used safely, ethically, and securely? For too many organizations, the answer is no one, and that’s a problem.

      According to Gartner, more than half of enterprises are piloting or using generative AI. However, fewer than 10% have established governance frameworks to manage the risks. 

      This gap indicates a fault line running beneath the future of business. Without proper oversight, efficiency gains come at the expense of security and accountability.

      Just recently, IBM’s Cost of a Data Breach Report found that one in five organizations reported a breach due to security incidents involving “shadow AI,” or AI tools that employees use without organizational knowledge or approval. These breaches were more likely to result in the compromise of personally identifiable information and intellectual property, making them extremely costly. 

In other words, organizational leaders and the Chief Information Security Officers (CISOs) they generally task with addressing this can't afford to sit back. These are the people who must step forward and take the lead in governing AI before that fault line cracks wide open.

      Let’s talk about it. 

      Why CISOs Must Step Forward

      Data leaks. Biased algorithms making decisions no one can explain. Shadow AI tools creeping into workflows without approval. Third-party integrations that open doors no one is watching. 

      These are all landmines sitting under every organization rushing to adopt AI without a plan.

      The truth is, AI is moving too fast for half-measures. You can’t kick this to a committee, and you can’t hand it off to IT and hope for the best. The stakes are too high, and the risks too immediate.

      This is exactly why CISOs have to step into the arena. They’re the ones who know where systems break, where blind spots form, and how attackers think. 

      CISOs see the fault lines before anyone else. They’re responsible for protecting trust, livelihoods, and entire enterprises. In this moment, that means governing AI in new and proactive ways. 

      The Risks Few Want to Talk About

Generative AI has dazzled leaders with its potential, but beneath the hype lies a hard truth. Some of the top risks include:

      Shadow AI

      Unauthorized AI tools slip into the workplace, operating outside the guardrails of security. This problem starts small—an employee downloads a flashy app to make their job easier. There’s no approval, no oversight. Suddenly, sensitive data is flowing into unvetted systems, compliance gaps open, and security leaders are left blind.

      Regulatory Exposure

      Lawmakers aren’t asleep. Across the globe, governments are drafting rules that will demand accountability. 

      Eventually (if not currently), companies that can’t demonstrate control over their AI use will face fines as well as public hits to their reputation.

      Long-Term Operational Risk

      AI models adapt, drift, and change in ways no one can fully predict. Without governance, today’s productivity tool can become tomorrow’s breach vector, eroding stability from the inside out.

      In general, the risk is as existential as it is technical. If AI undercuts trust, both customers and employees will lose faith in the organization itself.

      Questions Every CISO Should Be Asking

      Before greenlighting any AI initiative, security leaders must insist on clarity. At minimum, every CISO should be asking:

      1. Who owns accountability for AI outputs and errors?
      2. How is sensitive data protected from misuse or leakage?
      3. What compliance obligations apply today, and which may apply tomorrow?
      4. How will AI decisions be audited, monitored, and explained?
      5. Where does human oversight begin and end?

When a CISO raises these questions, they won't always draw nods of agreement. Sometimes they slow the conversation, and sometimes they shift the mood around new technology and innovations.

      However, that’s the weight of real leadership. Today’s CISOs need the courage to steady the AI course when everyone else is eager to sprint ahead.

      A Call to Leadership

      AI is rewriting the rules of business, but governance will determine whether those rules lead to resilience or collapse. 

This starts with executive management recognizing that AI governance is important. From there, they must assess whether they have the internal security team to address it, and hire one if they don't.

      CISOs have spent decades building programs to balance speed, progress, and security. Now is the moment to apply that discipline to artificial intelligence tools. 

      To remain silent, to allow adoption without oversight, is to abandon the very role CISOs were created for. The path forward is clear: security must lead, or organizations will stumble into risks they cannot contain.

      The story of AI won’t be defined by those who rush headlong without caution. It will be shaped by leaders who bring vision, discipline, and resolve, and I believe that mantle belongs to the CISO.


      About the Author:

      Mike Gentile, CEO of CISOSHARE, has spent his career guiding organizations through the challenge of building and scaling security programs. Known for his practical approach and deep expertise, he works with enterprises worldwide to turn security strategies into living, operational systems. 

      Today, his focus includes helping leaders confront new frontiers in risk management, including the fast-moving arena of AI governance.

Quick Answers to Quick Questions: Lokhesh Ujhoodha, Software Engineer, AI at Kurrent

Having previously served as Kurrent's Technical Support Manager, where he specialized in event-driven architectures and CQRS implementations, Lokhesh brings hands-on expertise in building complex event sourcing systems with cutting-edge AI integration.

This conversation is for those who love insights on AI-driven database solutions and want to learn what organizations can do to ready themselves for the changes coming in database automation.


      M.R. Rangaswami: How is Kurrent’s MCP Server reimagining database interactions for both technical and non-technical users? 

      Lokhesh Ujhoodha: There’s a broader shift in how people interact with complex systems using MCP Servers. Traditionally, working with a database has meant writing structured queries, managing schemas and understanding low-level details like indexing or projection logic. That’s fine if you’re a backend engineer or database administrator, but it creates steep barriers for others who often have questions they can’t easily get answered without writing code.

      With the introduction of MCP and emerging open-source servers, the way users interact with databases is starting to shift. Now users can interact with their data through natural language either directly with an AI assistant or through integrated tools. This means you can ask for a stream of events, write a new projection or debug logic through a conversational interface.

      For technical users, this accelerates development cycles by enabling faster prototyping, allowing developers to test projection logic and debug issues through conversational commands. For less technical users, it lowers the barrier to engaging with data in meaningful ways. In both cases, it’s reimagining the interface layer between humans and systems by abstracting it into intelligent workflows that are easier to reason about and iterate on.

      M.R.: Why is democratizing access to real-time data architectures so important as we look ahead to a future where agentic AI workflows become mainstream?

      Lokhesh: The next generation of AI systems requires access to data that’s timely, contextual and complete. Unlike traditional AI that relies on static training sets or batched data, agentic workflows need to be fed with live signals and historical context in real time. Without that, their actions risk being either irrelevant or outright wrong.

      This is why democratizing access to real-time architectures is such a key enabler. It’s not just about giving business users access to static dashboards or pre-built reports; it’s about giving both humans and AI systems the ability to interact with data in a feedback loop. For example, an AI agent that’s monitoring fraud can’t wait for a batch process to run overnight. It needs to see a suspicious transaction and contextualize it against previous behaviors instantly.

      Most current data infrastructure, however, isn’t designed for that. It’s either streaming but not persistent, or it’s persistent but too slow or rigid for dynamic use cases. To support democratized AI participation across technical roles and business units, you need systems that treat data as a continuous stream of context-rich events, while still preserving fidelity and traceability. 

      This is where event-native solutions become particularly relevant. These systems combine immutable event storage with real-time streaming capabilities, enabling AI systems to access both live data and historical context instantly. Their event-native design can store billions of indexed streams with consistent ordering, providing complete data lineage while supporting real-time event processing. Multi-language support and simplified integration reduce the complexity barriers that often prevent broader adoption of real-time systems.

That's what makes emerging real-time data platforms so significant. But because data streaming is not yet supported by MCP (although it is on the roadmap), the Kurrent MCP Server leverages KurrentDB's built-in real-time projection and streaming capabilities to also produce code that can get you started with live data subscriptions.
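As a generic illustration of what such a live-data subscription looks like in principle, the sketch below replays stored history and then follows live events. It is not the KurrentDB client API, just the shape of the catch-up pattern.

```python
# A generic "catch-up subscription" pattern: replay stored events first, then
# keep consuming live ones. Illustrative only; this is not the KurrentDB
# client API, just the shape of the pattern described above.
import queue
import threading
import time

def catch_up_subscribe(history: list[dict], live: "queue.Queue[dict]", handle) -> None:
    for event in history:          # 1) replay historical context
        handle(event, live_event=False)
    while True:                    # 2) then process live signals as they arrive
        event = live.get()
        if event is None:          # sentinel to stop the demo
            break
        handle(event, live_event=True)

def handle(event: dict, live_event: bool) -> None:
    tag = "LIVE" if live_event else "HIST"
    print(f"[{tag}] {event['type']} account={event['account']} amount={event['amount']}")

history = [{"type": "TxPosted", "account": "A-17", "amount": 40.0}]
live: "queue.Queue[dict]" = queue.Queue()

worker = threading.Thread(target=catch_up_subscribe, args=(history, live, handle))
worker.start()
live.put({"type": "TxPosted", "account": "A-17", "amount": 9500.0})  # suspiciously large
time.sleep(0.1)
live.put(None)
worker.join()
```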

      M.R.: What should organizations do to prepare for the shift from traditional database development practices to AI-driven automation, and what does this signal about the broader database market?

      Lokhesh: Organizations preparing for AI-driven automation need to rethink how they store and access data. Traditional databases that only capture current state are essentially blind to the decision-making processes that AI agents require. When an AI agent needs to understand why something happened, how a system evolved or what patterns led to specific outcomes, point-in-time snapshots simply aren’t enough.

      This is where storing complete state transitions becomes transformative for AI workflows. Unlike traditional approaches that discard the journey and keep only the destination, KurrentDB preserves every state change as a first-class citizen. This means AI agents have access to the full narrative of how systems evolved – not just what happened, but the sequence, timing and context of every transition.

      This comprehensive transition history enables AI agents to perform temporal reasoning, understanding causality rather than just correlation. They can replay scenarios to test different decision paths, identify patterns across time that would be invisible in static data and learn from the complete audit trail of past actions. When an AI agent needs to make a decision, it’s working with the full context of how similar situations played out historically.
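A stripped-down sketch of that idea, keeping every transition and folding the log into state on demand, might look like the following. This is generic event sourcing, not KurrentDB's storage model.

```python
# Generic event sourcing: every state change is stored as an immutable event,
# and current (or historical) state is derived by folding the log. A sketch of
# the concept only, not KurrentDB's storage model.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class Event:
    stream: str       # e.g. "order-1001"
    type: str         # e.g. "OrderPlaced"
    data: dict
    recorded_at: datetime

log: list[Event] = []   # append-only

def append(stream: str, type: str, data: dict) -> None:
    log.append(Event(stream, type, data, datetime.now(timezone.utc)))

def project(stream: str, until: Optional[datetime] = None) -> dict:
    """Fold a stream's events into state, optionally 'as of' a point in time."""
    state: dict = {}
    for e in log:
        if e.stream != stream:
            continue
        if until is not None and e.recorded_at > until:
            break
        if e.type == "OrderPlaced":
            state = {"status": "placed", "items": e.data["items"]}
        elif e.type == "OrderShipped":
            state["status"] = "shipped"
    return state

append("order-1001", "OrderPlaced", {"items": ["widget-a"]})
checkpoint = datetime.now(timezone.utc)
append("order-1001", "OrderShipped", {})

print(project("order-1001"))                    # current state: status == "shipped"
print(project("order-1001", until=checkpoint))  # temporal query: status == "placed"
```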

      The future-proofing implications are significant. As AI capabilities advance toward more sophisticated reasoning, multi-agent coordination and causal understanding, having access to rich state transition data becomes exponentially more valuable. Today’s AI might use simple pattern matching, but tomorrow’s AI will leverage complete behavioral histories to make nuanced decisions, coordinate with other agents and provide explainable reasoning chains.

      At a market level, this signals that the database layer is becoming the memory system for intelligent applications. The organizations that recognize this shift and adopt event-driven architectures with complete state transition preservation will have AI agents that can reason more effectively, learn more comprehensively and adapt more intelligently than those working with traditional state-only data models.

The competitive advantage isn't just in having AI agents; it's in having AI agents with access to the complete story of how your systems behave and evolve over time.

      M.R. Rangaswami is the Co-Founder of Sandhill.com

M.R. Asks 3 Questions: Christine Heckart, CEO of Xapa World

      Christine Heckart is the founder and CEO of Xapa World, an AI-powered platform reshaping leadership development through bite-sized, gamified daily experiences to help humans build critical skills for a post-AI world.

      A visionary leader with over three decades in the tech industry, Christine has created multiple market categories and is dedicated to democratizing leadership growth across all levels of an organization.

      M.R. Rangaswami: From your seat in the industry, what’s left for humans in an AI world?

Christine Heckart: If you work with generative AI systems you know there is a LOT left for humans – especially those who can manage and collaborate with AI teammates. AI teammates are super helpful for tasks that are narrow, specific, repeatable and take precision (i.e., reading X-rays) and/or where 'generative' is an asset not a liability (i.e., storytelling and creative writing). Generative systems don't answer the same way twice, or know the difference between correct and incorrect. They give the illusion of thinking, but they don't think.

The most recent METR report shows that AI makes coders 19% SLOWER, although they self-report that AI makes them 20% faster. And the most recent model reviews show that AI is hallucinating 48-79% of the time – the more advanced models hallucinate more frequently.

      Agents cannot make business decisions or prioritize what’s important. Agents give good results only if humans provide enough context, direction and framing. And agents don’t know when their answers are missing vital elements. In short, AI agents are great assistants, but humans are still responsible for the outcomes. 

No court in the world lets us blame the AI if problems happen. Humans, in a post-AI world, still have full responsibility for the quality and ethics of outcomes. Humans need to manage, prioritize, judge, approve, and decide. Humans resolve conflict, understand context, bring empathy, and collaborate. The AI agents can help make many outcomes easier, faster, and more creatively elevated, but the 'elevation' happens with the human-and-machine collaboration, not with a full outsource.

      So what’s left for humans? Potentially, more meaningful work (fewer repetitive tasks), deeper connection to each other, the difficult decision-making and risk taking, and helping to delight customers in new and unexpected ways. This is a post-AI world that can benefit everyone. 

      M.R.: If we can train the AI Models, why can’t we train the humans?

Christine: The biggest problem within companies isn't incorporating AI, or training the AI models, it's the mindset shift in people. The goal of companies is to drive growth and revenues, reduce costs, increase innovation, and solve customer problems. The skills humans need to contribute to these outcomes are skills like critical thinking, problem solving, curiosity, listening, influence, courage, decision-making, and adaptability. AIs can't do this even with all the training in the world; they can support the humans who do.

So why don't we put even a small percentage of the time, money and energy into training the humans that we do into training the machines? I think it's because it's so difficult to do training at scale. Humans need more than 'training' – i.e., more than cognitive understanding of a topic – we need people to build skill and muscle and then apply it in the context of a situation.

      As machines code, diagnose, optimize, and write—what’s left for humans isn’t less important – it’s more. 

      Everyone is a manager with AI. Everyone needs good business judgment, empathy, resilience, ethical decision-making, curiosity, problem solving, decision making, and the confidence to set context and deal with ambiguity. These aren’t “soft skills.” They’re core competencies for navigating a world where change is constant and uncertainty is the norm. Few companies teach them. 

      Fortunately, new solutions – including my company, Xapa – are coming to market, using AI and gaming mechanics to help people not just learn, but practice and build real muscle. 

M.R.: What is your take on AI and job disruption? Are we overreacting or not reacting enough?

      Christine: Have you EVER worked in a company or organization that said “we have too many people for the work we need to accomplish?” (I sure haven’t!)

      Big companies and small companies always want more resources than they can afford. Not only that, demographically we know that most of the developed world faces a labor shortage in the next ten years. So AI seems like a fantastic solution at just the right time.  

      I think we’re underreacting to the AI disruption in all the wrong places. Everyone’s talking about job displacement—83 million roles lost, 69 million created. But the real disruption and transformation isn’t the technology, it’s the people.  Most companies are trying to bolt AI into products and processes without addressing – and upgrading – the human operating system. That’s like putting a rocket engine on a go-kart and expecting it to fly.

      “What does it take for humans to thrive alongside AI?” The answer isn’t more tools—it’s more trust, more adaptability, and better decision hygiene. The least replaceable skills in the future are the things the AI can’t do, and the things you can’t outsource even if they could. 

AI is here to stay. The companies that win will be the ones that prepare their people—not just their systems—for change. Here's a great ebook on leading transformational change, and AI is the ultimate in transformational change. You can find it at www.xapa.com.

M.R. Asks 3 Questions: George Gerchow, CSO, Bedrock Data

      George Gerchow is Chief Security Officer at Bedrock Data and was formerly Sumo Logic’s Chief Security Officer & SVP of IT.

      His background includes security, compliance, and cloud computing disciplines. George has years of practical experience in building agile security, compliance, and IT teams in rapid development organizations.

George has been on the bleeding edge of public cloud security, privacy, and IT modernization since co-founding the VMware Center for Policy and Compliance. With 20+ years of experience in the industry, he is a Faculty Member for IANS (Institute of Applied Network Security), and is also a known philanthropist and the CEO of a nonprofit corporation, XFoundation.

In this conversation, we look at how he and his teams are navigating AI security through data protection and governance.

      M.R. Rangaswami: What drew you to Bedrock Data at this stage of your career? What makes the company’s approach to data security different from existing solutions in the market?

George Gerchow: I was introduced to Bedrock Data as a former customer when I worked at MongoDB. I was truly impressed with how their technology solved a problem we faced around data classification at scale before implementing AI-based enterprise search. During the proof of concept, we observed that the solution was effective at scale.

      What truly makes Bedrock stand out is the metadata lake. This all-in-one repository shows you where your data resides and what’s occurring throughout your entire environment, which is something the industry currently lacks. This metadata lake approach offers organizations visibility they’ve never experienced before.

      Data protection needs are expected to grow significantly. For years, we’ve relied on endpoint security, perimeter security and other methods with some success, but moving forward, true defense in depth will be driven by data protection. 

      Data is becoming increasingly difficult to handle. In fact, enterprises are storing more and more data, especially with the rise of AI, and data volumes are triple what they were before. Data exhaust is a real issue because people simply don’t delete data. Think about it personally: when was the last time you went through and deleted any of your emails or information from your device or the cloud? You never do. This problem will only continue to grow in the enterprise environment.

      Ultimately, the combination of proven technology, the metadata lake architecture, and working with people I trust made Bedrock Security an obvious choice at this stage of my career.

      M.R.: As organizations deploy AI agents that directly access sensitive enterprise data, what new categories of risk should security leaders be preparing for that go beyond traditional data protection concerns?

      George: The biggest risk by far is shadow AI, and this will become a major problem if security teams don’t address it proactively. Just like with shadow IT and the cloud in the past, shadow AI will emerge when security teams slow things down or say no without offering proper alternatives. When you don’t provide a platform, process or system that people trust to bring their AI ideas forward, developers and business users will simply go around you.

      The main point is to be open about what we’re genuinely doing with AI. Security teams should fairly evaluate AI requests by focusing on two key questions: What does this do for your customers, and what does it do for the company? These should be the primary considerations, or else you’ll be flooded with requests that are hard to properly assess.

      There’s also a new type of risk related to entitlement and data correlation that’s alarming. With AI and enterprise search capabilities, you might have a document marked as non-critical, but when AI systems analyze and connect information across multiple documents, a single sentence or word could become critical when combined with data from other sources. This creates entirely new attack vectors that traditional data protection methods weren’t built to address.

      We’re also seeing increasingly advanced AI-driven attacks that organizations must prepare for. These include data poisoning, prompt injection vulnerabilities and sophisticated social engineering techniques that utilize AI-generated content. The complexity and scale of these attacks will necessitate entirely new defensive strategies.

      M.R.: How can organizations prevent shadow AI from becoming the next major security blind spot, and what governance frameworks should be in place before AI adoption accelerates beyond manual oversight capabilities?

      George: Security leaders must actively use AI, dedicating the first hour daily to understand and manage it. Rapid adoption is vital, but awareness of risks is key. Many security professionals may unintentionally hinder progress due to unfamiliarity with AI and threats. Transparency about data used in AI, like creating an AI Data Bill of Materials (AI DBOM), is essential, especially since reverse engineering outputs is hard.
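The interview does not define what an AI DBOM contains; as one hypothetical illustration, an entry might record the datasets, classifications, and owners behind each model so that governance checks can run against it:

```python
# Hypothetical shape of an AI Data Bill of Materials (AI DBOM) entry.
# The term comes from the interview; this particular schema is an assumption,
# offered only to make the idea concrete.
ai_dbom_entry = {
    "model": "support-assistant-v3",
    "owner": "platform-ml@example.com",
    "datasets": [
        {"name": "zendesk_tickets_2023", "classification": "confidential",
         "contains_pii": True, "retention": "18 months"},
        {"name": "public_docs_corpus", "classification": "public",
         "contains_pii": False, "retention": "indefinite"},
    ],
    "downstream_consumers": ["customer-portal", "internal-search"],
    "last_reviewed": "2025-01-15",
}

# A governance check might refuse to sign off on models whose DBOM lists PII
# without a documented retention period.
def missing_retention(entry: dict) -> list[str]:
    return [d["name"] for d in entry["datasets"]
            if d["contains_pii"] and not d.get("retention")]

print(missing_retention(ai_dbom_entry))   # [] -> nothing flagged in this example
```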

      Security teams must adopt an adversarial mindset from the start. You can’t defend an AI system until you know how to break it. This involves continuously testing for vulnerabilities like data poisoning and prompt injection attacks, conducting red team exercises and running bug bounties against AI implementations.

      To prevent shadow AI from becoming the next major security blind spot, organizations must transition from manual, form-based governance to context-aware, lifecycle-driven oversight. Traditional governance frameworks were designed for static assets and human actors. AI moves faster, adapts dynamically, and can operate beyond conventional visibility if not supported by a structured, transparent protocol. This is where the Model Context Protocol (MCP) becomes essential.

      MCP enables security and governance teams to embed real-time context into every AI interaction from model input/output behavior, to identity of requestors, to the sensitivity of underlying data accessed. Instead of relying on policy documents or manual controls, MCP enables the enforcement of decisions based on real-time signals, such as data classification, risk posture and access justification.
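As a generic illustration of "decisions based on real-time signals" (this is not the Model Context Protocol specification, only a hedged sketch of context-aware enforcement):

```python
# Generic context-aware access decision for an AI data request. This is NOT
# the Model Context Protocol specification; it only sketches the idea of
# enforcing decisions from real-time signals rather than static policy documents.
from dataclasses import dataclass

@dataclass
class RequestContext:
    requester: str            # identity of the agent or user
    data_classification: str  # e.g. "public", "internal", "restricted"
    risk_posture: str         # e.g. "low", "elevated"
    justification: str        # free-text access justification

def decide(ctx: RequestContext) -> str:
    if ctx.data_classification == "restricted" and ctx.risk_posture == "elevated":
        return "deny"
    if ctx.data_classification == "restricted" and not ctx.justification.strip():
        return "escalate_to_human"
    return "allow"

print(decide(RequestContext("sales-copilot", "restricted", "elevated", "quarterly forecast")))
# -> "deny": the combination of live signals, not a static rule book, drives the outcome
```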

      Manual oversight won’t scale with AI adoption, so organizations need to automate security processes and build self-learning systems that adapt to new threats. Equally important is breaking down organizational silos because AI threats represent cross-functional problems that require constant communication across departments.

      We need to shift from compliance-focused security to risk-based strategies using AI-powered security solutions. You can’t defend against AI-driven threats with old tools designed for on-premises environments because they are not scalable, cost-effective or fast enough for today’s data challenges. The opportunity to stay ahead of shadow AI is limited, so organizations must act proactively to build these frameworks before AI adoption outpaces their manual oversight capabilities.

      M.R. Rangaswami is the Co-Founder of Sandhill.com

INTERVIEW: M.R. RANGASWAMI ON WHAT'S REALLY HAPPENING

PODCAST: M.R. Rangaswami was recently interviewed on Humans of Bombay (which he suggests should be called "Humans of Silicon Valley"), discussing what is REALLY happening to Indians in the USA with visas and deportations, and what's next for the India-America partnership.

      Host Karishma Mehta made this conversation one of the most insightful, honest and in-depth interviews M.R. has done as of late.

      We hope you enjoy.

      M.R. Rangaswami is the Co-Founder of Sandhill.com

M.R. Asks 3 Questions: Sundari Mitra, CEO and Co-Founder of Asato.AI

      Sundari Mitra, CEO and co-founder of Asato.AI, is a seasoned leader with extensive experience as a three-time CEO. At Asato she is revolutionizing enterprise IT management by empowering CIOs with AI-driven insights to optimize technology, talent, and innovation.

Previously, Sundari was the Chief Incubation Officer at Intel Corporation, leading disruptive innovation and the next $10B opportunity. Before that, she was Corporate Vice President and General Manager of the IP Engineering Group (IPG) at Intel, where she led a 7,000-person team focused on developing best-in-class IP powering $75B of Intel revenue across multiple market segments.

As Founder and CEO of NetSpeed, she built a global company that transformed SoC design for the world's leading semiconductor companies before an acquisition by Intel. She has a demonstrated track record of leading transformative strategies and building great engineering and go-to-market teams in both large and small companies, and brings her experience in vision, strategy, technology and market development to the companies and leadership teams she works with.


M.R. Rangaswami: What were you noticing happening in the market that made you create Asato?

Sundari Mitra: Asato was born out of a problem I faced firsthand at Intel, after my previous startup was acquired. I found myself responsible for thousands of engineers and vast IT assets spanning infrastructure, code, and data spread across global teams, all under tight budget constraints.

The issue wasn't lack of data. It was the lack of insights based on the data presented to me. I couldn't easily answer fundamental questions like: What do we own? How is it being used? Are we getting the expected ROI? Despite having access to tools and dashboards, I lacked a clear, connected view of the organization's assets. Decision-making was slow and often based on incomplete information.

      I envisioned a system that could help leaders like me observe their environment, orient  around goals, decide with clarity, and act – then close the loop by tracking outcomes. The  classic OODA loop. In early 2023, with the rise of GenAI and more mature tech stacks, I  saw an opportunity to build such a system for leaders of IT organizations. 

Through market research and conversations with CIOs, the hypothesis was validated. Teams were struggling with siloed systems, limited visibility into SaaS and IT sprawl, and difficult-to-interpret contracts and renewals, all of which made decision-making very complex.

      So, we created Asato: a business observability platform built for CIOs. It unifies data,  insights, decision-making, and outcome tracking in one place, all accessible through  natural language interfaces, enabling leaders to finally run their IT organization with  confidence and clarity.

      M.R.: How does the Asato approach fundamentally differ from traditional IT  management and monitoring solutions? 

      Sundari: Traditional IT tools are built to keep infrastructure running; they track uptime,  performance, compliance, and alert you when something breaks or drifts. They’re reactive  by design. What they don’t do is connect those signals in a way that helps leaders make  strategic, data-driven decisions. 

      At Asato, we take a very different approach. We start with a mindset shift: instead of just  monitoring systems, we focus on helping organizations actively manage and optimize their  entire technology landscape.

      We build a living map of a company’s digital footprint – everything from applications,  licenses, and user access to vendor relationships and cost centers. And we enrich that  map with real-time data from systems like SSO tools, billing platforms, usage analytics,  contracts, and finance systems. 

      That full picture lets us go beyond surface-level monitoring. Traditional tools stop at logs  and alerts. Asato goes further to analyze the relationships within the organization’s tech  stack to surface inefficiencies and risks that typically go unnoticed. Asato identifies hidden  inefficiencies: duplicate tools, underused contracts, misaligned licensing. What used to  be static reporting becomes a dynamic decision layer. 

      We believe IT should be managed like a portfolio, every dollar should serve a purpose and  deliver value. With Asato, CIOs and their partners in finance and procurement gain visibility  and context needed to align technology investments with business outcomes. It’s about  shifting from reactive firefighting to proactive, outcome-driven leadership.

      M.R.: Traditional IT dashboards struggle with complex business questions like “Which  teams are driving our software costs?” or “Why are we paying for licenses that aren’t  being used?” How does your platform go beyond basic monitoring to answer these  nuanced questions that CIOs need answered? 

      Sundari: This is exactly the gap we set out to close. Most dashboards tell you what is happening in  your environment. Asato is built to help you understand why it is happening, who is driving  it, and what you can do to optimize it. 

      Take licensing for example. Traditional tools might show how many licenses you’ve  purchased and how many are in use. We go deeper and map each license to a real user,  their team, the cost center funding it, and actual usage patterns. That way, leaders don’t  just see what they’ve bought, they understand how it is being used and where there is  waste. 

On the spend side, we track much more than just what was paid. Asato tracks each application's renewal timeline, the contract owner, and utilization benchmarks. We align procurement documents such as POs, contracts, and invoices, and flag anomalies like invoices without matching POs or PO overages beyond contract terms. This makes it easy to catch overspend, spot upcoming renewals and renegotiation opportunities, and benchmark cost against usage and value to expose hidden optimization opportunities.
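
To make the reconciliation idea concrete, here is a minimal sketch, assuming simplified record shapes and hypothetical field names (this is not Asato's implementation), of how invoices could be checked against purchase orders to flag the two anomalies described above:

```python
from dataclasses import dataclass

@dataclass
class PurchaseOrder:
    po_id: str
    vendor: str
    amount: float          # authorized spend on the PO

@dataclass
class Invoice:
    invoice_id: str
    vendor: str
    amount: float
    po_id: str | None = None  # reference to a PO, if one exists

def flag_procurement_anomalies(pos: list[PurchaseOrder], invoices: list[Invoice]):
    """Flag invoices without a matching PO, and PO overages beyond the authorized amount."""
    pos_by_id = {po.po_id: po for po in pos}
    billed_by_po: dict[str, float] = {}
    anomalies = []

    for inv in invoices:
        if inv.po_id is None or inv.po_id not in pos_by_id:
            anomalies.append(("invoice_without_po", inv.invoice_id))
            continue
        billed_by_po[inv.po_id] = billed_by_po.get(inv.po_id, 0.0) + inv.amount

    for po_id, billed in billed_by_po.items():
        if billed > pos_by_id[po_id].amount:
            anomalies.append(("po_overage", po_id, billed - pos_by_id[po_id].amount))

    return anomalies
```

In practice this kind of matching would run continuously against data pulled from procurement and finance systems rather than in-memory lists.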

Our users – CIOs, CFOs, and procurement leaders – all come to us with cross-functional questions: Can we consolidate redundant tools? Where is spend overlapping across departments? What's our true cost per active user by business unit? These are questions traditional dashboards can't answer because they don't connect usage, contracts, ownership, and cost in one place.

Our mission is to turn scattered operational data into clean, decision-ready intelligence that leaders can trust. Instead of getting lost in static reports and dashboards, they gain the insights they need to drive smarter, faster, and more strategic decisions about their entire digital environment.

      M.R. Rangaswami is the Co-Founder of Sandhill.com

      Read More

      SEG’s SaaS – M&A and Public Market Report: 1Q25

      By Article

Our friends at Software Equity Group have released their Q1 Public Market Report, which shows 1Q25 was the most active quarter on record with 636 deals, up 19% quarter over quarter and 31% year over year.

      As they observed, even with the mixed economic signals, strong buyer appetite persists.

      HERE ARE 6 HIGHLIGHTS FROM THE REPORT:

      1. The start of 2025 has been marked by a complex and often contradictory macro environment. While inflation is moderating and labor markets remain stable, ongoing trade tensions, geopolitical instability, and mixed economic signals have created an uncertain backdrop for decision-makers.

        In this environment, public and private investors are moving cautiously but continue seeking high-performing, resilient SaaS businesses that can weather volatility and sustain long-term value.
      2. The SEG SaaS Index™ declined 13% in 1Q25, with the bottom quartile falling more than 25% while the top quartile declined just ~5.6%, a clear sign that investors are rewarding companies with consistent execution, retention, and margin strength.
      3. SaaS M&A remained active, with 636 transactions in the quarter, up 19% QOQ and 31% YOY, the strongest quarter since 1Q22. Financial sponsors continue to lead the market, particularly in vertical SaaS and add-on acquisitions.
      4. M&A valuations held steady, with the median EV/TTM revenue multiple at 4.1x, flat QOQ, and up slightly from 3.9x in 1Q24. The average multiple of 6.0x reflects continued appetite for premium assets, though buyers remain disciplined. Top-tier companies with strong retention, vertical focus, and profitable growth continue to command a clear pricing premium.
5. U.S. GDP grew 1.6% in 1Q25, slower than expected (estimates were ~2%), and inflation was at 2.4% YOY at the end of March, cooler than expected (February YOY was 2.8%). Many industry constituents are closely watching the impacts of tariff policy on growth, inflation, and unemployment. Still, buyer focus has not wavered on high-quality software assets.
      6. Healthcare remains the most active vertical, accounting for 21% of vertical SaaS deals in 1Q25, as providers continue investing in digital infrastructure to modernize care delivery. Financial Services (15%) and Real Estate (10%) also saw elevated activity, reflecting sustained demand for industry-specific platforms in complex, regulated sectors.

      To review the full report, click here.

      Read More

      Navigating M&A in a Volatile Market: An Allied Advisers Report

      By Article

Our friends at Allied Advisers have released a report on navigating M&A in a volatile market, with practical tips for founders and management on driving an exit.

      This report offers:

• Best practices that founders, management, and investors at both early and late stages can add to their toolkit to achieve a successful outcome
• Input for sellers targeting strategic buyers: we advise them to build strong, strategic relationships and partnerships ahead of a formal sale process to get the highest value
• Views on the impact of tariffs, the decline of IPOs, the rise of PE activity, VC and PE DPI challenges, and what all this means for the M&A markets going forward
• A recommendation for sellers to run a sale process, as there is variance in the value offered by buyers and market-driven price discovery is the best way to achieve optimal value.

You can read the full report here.

      Gaurav Bhasin is the Managing Director of Allied Advisers.

      Read More

      M.R. Asks 3 Questions: Tom Hogan, Author, VC and Oracle’s first Creative Director

      By Article

      Tom Hogan was Oracle’s first Creative Director, developing the marketing programs, including ORACLE Magazine and Oracle Open World, that distinguished the company in its early days.

After Oracle he migrated to the VC world as the founder of Crowded Ocean, a small agency that specialized in launching startups (51 launched, 17 going public or being acquired). He now lives in Austin, TX, where he writes novels and screenplays full-time. His novel about startup life in Silicon Valley, The Forever Factor, has just been released.

      We hope you enjoy this conversation with Tom as much as we did.

MR: You helped launch more than 50 startups as a founder of Crowded Ocean. Now you're the author of a novel set inside the startup world that looks at the role of VCs. What was the inspiration for this book?

      Tom Hogan: Ever since I left Silicon Valley and Crowded Ocean, people would ask when I was going to write a book about the tech world—specifically about Silicon Valley and the startup world. They would say that the only people who got it right were the team that created Silicon Valley, the HBO series.

      There have been some excellent first-person accounts of life at a startup, so I didn’t think the world was desperate for another. So I decided if I did it, I’d want to take the same approach as when I wrote The Devil’s Breath, my novel about the Holocaust. Rather than hit the reader over the head with another non-fiction account of camp life by survivors and liberators, I made Auschwitz a setting for a murder mystery, rather than the star of the show. It’s the same with this latest novel, The Forever Factor: readers will learn a lot about the Valley and startups, but within the context of a suspense thriller about biohacking and the quest to live forever. 

      MR:  What are the key points you want readers to take away from this book?

      Tom: First off, it’s that we’re much further along the road to achieving major changes to how long we’ll live. It’s been said that “the first person to live to 200 has already been born.” If we’re not there already, we’re close.

Second, it's that, while the Valley continues to develop world-changing technologies and treatments, it's not that receptive to change. It's still a boys' club and points to its major successes as evidence that its models and attitudes are on target. It's comfortable with the fact that 90+ percent of its startups fail in their first two years, with the major wins covering those failures.

      Third, I was amazed, once I started my research, at how many individuals have made ‘biohacking’ a major part of their lives. It’s not just the fasts and supplements and implants—it’s also their willingness to be their own guinea pigs.

      MR:  What’s the difference between a technology startup—which is where Crowded Ocean focused—and a science startup?

      Tom: That’s a great question, and if I’d thought about it ahead of time, I might have saved myself some headaches by sticking to the more familiar tech world. But the longevity angle meant the startup needed to be a science-based biotech company.

      The differences between the two are stark, especially from a VC perspective:

      • Funding: Biotech startups require significantly more upfront capital due to the high cost of research and lab setup. While a tech company might be happy with just laptops, cloud services, and basic office space, a biotech startup needs specialized facilities—think wet labs and clean rooms—and expensive equipment costing millions of dollars. Shared lab spaces can reduce some of these capital expenses, but establishing and running a lab is expensive.
      • Time to revenue: Tech companies can create products quickly, often developing an MVP in months. Biotech companies can face development cycles of years, even decades, having to pass clinical trials and regulatory approval before they start generating revenue.
      • Culture: Tech startups’ famous “move fast and break things” ethos doesn’t cut it in the biotech world. Science doesn’t always work as planned. Failure, or at least roadblocks and dead ends, are par for the course. Plus, you can’t “break things” when human health is involved.
• Leadership: Biotech startups require specialized scientific talent, including molecular biologists, chemists, and other experts who are harder to find than the engineering and product development expertise of tech startups. Biotech companies typically have PhDs, MDs, or both in leadership roles.
      • Risk profile: Tech startups primarily face market adoption risk (will people use it?). Biotech startups face scientific risk (will the science work?), clinical risk (will it work in humans?), and regulatory risk (will it be approved?) before they even get to the market adoption risk.
      • Exit strategies: Tech startups aim for rapid growth leading to an IPO or big acquisition. Because of the research and regulatory burdens, biotech firms rarely have IPOs. Acquisitions for healthcare biotech firms often come earlier in their lifecycle, usually by a big pharma company.

To sum up, investors looking for success in the biotech realm had better come prepared with the necessary scientific expertise, deep pockets, and long-term horizons. It's a whole different world than tech startup investing.

      M.R. Rangaswami is the Co-Founder of Sandhill.com

      Read More

      M.R. Asks 3 Questions: Bruno Kurtic, Co-Founder & CEO, Bedrock Security

      By Article

      Bruno Kurtic is an accomplished entrepreneur with 30 years of experience in building and leading high-growth technology companies. Before founding Bedrock, Bruno co-founded Sumo Logic, where he crafted the company’s product and strategy, leading it from inception to a successful IPO.

      His hands-on approach in go-to-market activities and securing patents helped raise over $346 million in funding from top-tier investors, including Greylock Partners and Sequoia Capital. Following the IPO, Bruno served as Chief Strategy Officer, continuing to guide the company’s strategic direction. Bruno earned his undergraduate degree in Quantitative Methods and Computer Science from the University of Saint Thomas, followed by an MBA from MIT.

He now leads Bedrock Security's accelerated approach to cataloging data, enabling security, governance, and data teams to proactively identify risks, enforce policies, and optimize data usage — without disrupting operations or driving up costs.

      M.R. Rangaswami: What led you to found Bedrock Security, and how does your metadata lake approach fundamentally differ from existing security solutions?

        Bruno Kurtic: After thirteen years leading Sumo Logic from inception to a public company, I took a year off for reflection. My time off coincided perfectly with the explosion of advancements in generative AI as the technology began to solve previously unsolvable problems. During this time, I unplugged, traveled, learned and engaged with more than 100 technologists working across generative AI, security and operations. Through all of my conversations it became clear that data security was the main blocker for enterprises trying to innovate faster, move to the cloud and adopt new technologies like large language models. This is why I embarked on a journey to help build Bedrock Security and solve the data security problem for the age of AI.

        What makes the Bedrock Security approach fundamentally different is our metadata lake technology, a comprehensive, continuously updated view of all enterprise data. Traditional security solutions have struggled because securing data requires knowing where it is, what it is, and only then can you properly secure it with additional context. Data Security Posture Management (DSPM) tools have historically been built with a singular focus on detecting sensitive data. At Bedrock Security, our metadata lake is built as a flexible knowledge graph, providing deep insights into what data exists, where it resides, how sensitive it is, how it moves, who has access to it and more, all in one place. With this approach, we are empowering security, data governance and data management teams to instantly understand whether data is sensitive, authorized for usage and compliant without manual overhead. This foundation is required for many security use cases including DSPM, data governance, threat detection and response, intellectual property tracking and more.
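
As a rough illustration of the knowledge-graph idea, the toy sketch below shows how relationships between data stores, classifications, and identities could be traversed to answer a cross-cutting question; the node and edge names are invented for the example and are not Bedrock's schema:

```python
# Toy metadata "knowledge graph": nodes are data stores, classifications, and identities;
# edges capture relationships such as CONTAINS and HAS_ACCESS. All names are illustrative.
edges = [
    ("s3://customer-exports", "CONTAINS", "PII"),
    ("s3://customer-exports", "RESIDES_IN", "us-east-1"),
    ("analytics-team", "HAS_ACCESS", "s3://customer-exports"),
    ("contractor-group", "HAS_ACCESS", "s3://customer-exports"),
    ("s3://public-assets", "CONTAINS", "marketing-content"),
]

def who_can_reach_sensitive_data(edges, sensitive_label="PII"):
    """Answer a cross-cutting question: which identities can access stores holding sensitive data?"""
    sensitive_stores = {src for src, rel, dst in edges if rel == "CONTAINS" and dst == sensitive_label}
    return sorted({src for src, rel, dst in edges if rel == "HAS_ACCESS" and dst in sensitive_stores})

print(who_can_reach_sensitive_data(edges))  # ['analytics-team', 'contractor-group']
```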

        M.R.: How is AI governance reshaping security teams, and what new skills are required for security leaders to succeed in this environment?

          Bruno: The rapid adoption of AI technologies is fundamentally changing the security landscape, creating both new challenges and opportunities for security teams. According to our recent “2025 Enterprise Data Security Confidence Index” surveying over 500 cybersecurity professionals, we’re seeing a dramatic shift in responsibilities with 86% of professionals reporting data security duties expanding beyond traditional boundaries. More than half of respondents have added new AI data responsibilities in the past year, and CISOs, CSOs and CTOs are particularly impacted with nearly 70% having taken on new data discovery responsibilities specifically for AI initiatives.

          There is a clear gap between AI adoption and AI security capabilities. Security teams and data teams are struggling to keep up with the exponential growth of data while security budgets and resources only grow linearly. Security leaders are now expected to provide visibility and control across an organization’s entire data ecosystem, particularly as it relates to AI systems. Fewer than half of organizations (48%) are highly confident in their ability to control sensitive data used for AI/ML training, creating significant risks of data leaks, compliance violations and reputational harm. To tackle these challenges, security teams must prioritize AI data governance starting with a comprehensive AI Data Bill of Materials (DBOM). A DBOM provides a complete, contextual inventory of all data flowing into AI systems, from training through deployment, and serves as a foundational tool for enforcing safe, compliant and trustworthy AI governance.

          For security leaders to succeed in this new environment, they need to develop a more data-centric mindset and acquire skills that span traditional security domains, data governance and AI oversight. This includes the ability to measure effectiveness through OKRs, such as time-to-access for data requests, amount of data without designated owners and proportion of classified versus unclassified data. Security leaders must shift from viewing governance as merely a defensive measure to understanding it as a business enabler. When organizations can track and understand their data flows end-to-end, governance transforms from a hindrance into a growth driver that allows companies to responsibly accelerate innovation, confidently enter new markets and create differentiated user experiences.
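
To picture what a DBOM entry and those OKR-style measurements might look like in practice, here is a minimal, hypothetical sketch; the field names and helper are my own, not Bedrock's:

```python
from dataclasses import dataclass, field

@dataclass
class DBOMEntry:
    """One line of a hypothetical AI Data Bill of Materials: a dataset feeding an AI system,
    plus the context needed for governance."""
    dataset: str
    source_system: str
    owner: str | None                 # None = no designated owner
    classification: str               # e.g. "restricted", "internal", or "unclassified"
    used_in: list[str] = field(default_factory=list)  # models / pipelines consuming it

def governance_gaps(dbom: list[DBOMEntry]) -> dict[str, list[str]]:
    """Surface two of the measures mentioned above: unowned data and unclassified data feeding AI."""
    return {
        "unowned": [e.dataset for e in dbom if e.owner is None],
        "unclassified": [e.dataset for e in dbom if e.classification == "unclassified"],
    }
```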

          M.R.: What specific risks do organizations face as they struggle to protect sensitive data for AI/ML training? How should companies approach securing their AI data pipelines?

Bruno: The most immediate risk is inadvertently feeding sensitive or unclassified data into AI systems, leading to unintended exposure through model outputs, compliance violations and data misuse at scale. This becomes especially problematic with the exploding volume of unstructured data feeding today's AI models. Our research shows 79% of organizations struggle to classify sensitive data in AI systems, leaving them vulnerable to both compliance violations and data breaches. With the commercialization of agentic AI, we expect to see greater volume and speed of data sharing, further increasing the urgency for scalable and accurate data governance.

            Enterprise security fundamentally exists to protect data. However, most enterprise security tools and technologies are blind to data and focused instead on infrastructure, identities, networks and perimeters because data is fluid, unstructured and difficult to contain. This is why data security is an essential lens for all security efforts. Beyond security risks, inadequate data governance creates legal exposure under regulations like GDPR, CCPA, HIPAA and the EU AI Act. Perhaps most damaging to business outcomes is when model bias and poor performance occur because training data isn’t properly vetted and secured, leading to algorithmic discrimination or unreliable outputs that undermine trust in AI initiatives.

            To secure data, you first need to know where it is (discovery), then know what it is (classification), and only with that bedrock of understanding can you start securing it with additional business and usage context (entitlements, risk assessment, governance, threat detection and more). A metadata lake approach can help organizations secure AI data effectively by providing continuous visibility into their data landscape. Companies can use this to implement automated AI data discovery and classification to identify sensitive information before it has a chance to enter training pipelines, and enforce least-privilege access controls based on data sensitivity and purpose. This approach also empowers security, data governance and data management teams to understand whether data is sensitive, authorized for usage and compliant without manual overload. 
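
A toy sketch of that discover-then-classify-then-secure sequence might look like the following; the regex rules and the training-gate policy are deliberately simplistic placeholders, not a real classifier:

```python
import re

# Step 1: discovery — where the data lives (here, just an in-memory inventory).
documents = {
    "s3://exports/users.csv": "alice@example.com,4111 1111 1111 1111",
    "s3://public/blog.md": "How we scaled our ingestion pipeline",
}

# Step 2: classification — what the data is (toy regex rules; real systems go far beyond this).
RULES = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text: str) -> set[str]:
    return {label for label, pattern in RULES.items() if pattern.search(text)}

# Step 3: secure with context — e.g. keep sensitive sources out of an AI training pipeline.
def allowed_for_training(text: str) -> bool:
    return not classify(text)  # least-privilege default: anything flagged sensitive is excluded

for path, text in documents.items():
    print(path, classify(text), "training-allowed:", allowed_for_training(text))
```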

            M.R. Rangaswami is the Co-Founder of Sandhill.com

            Read More

Quick Answers to Quick Questions: Award-Winning Science Writer, Anil Ananthaswamy

            By Article

            Anil Ananthaswamy is an acclaimed science writer and author whose work spans some of the world’s top publications, including New Scientist, Scientific American, Quanta, and Nature. A former staff writer and editor at New Scientist, he is also a 2019–20 MIT Knight Science Journalism fellow and the author of four celebrated books, including Why Machines Learn: The Elegant Math Behind Modern AI, published last year. (The book has received critical acclaim, including from Geoff Hinton, who called it “a masterpiece.”)

            Anil’s background bridges engineering and journalism—he trained in electronics and computer engineering at IIT-Madras and the University of Washington before earning a science journalism degree from UC Santa Cruz. He has taught and mentored widely, and was recognized with the Distinguished Alumnus Award from IIT-Madras for his contributions to science communication.

            In this conversation, Anil gives Sandhill readers his input on the foundational mathematics of machine learning, the complexities of AI safety across sectors, and the uncertain but transformative future of AI for leaders and consumers.

M.R. Rangaswami: How would you describe the "fundamental math" of machine learning, and how does that math apply across industry?

Anil Ananthaswamy: We can divide the fundamental math of machine learning into two broad categories. One is the math you require to get a good conceptual understanding of ML, so that you can make good decisions about the kinds of algorithms, models, and datasets needed to solve a particular problem. This level of math involves a basic understanding of linear algebra, calculus, and probability and statistics.

              The second category is the kind of math you need to design ML algorithms/models. This requires an advanced understanding of calculus (such as multivariate calculus, vector calculus, convex optimization techniques, etc.), linear algebra (matrix decompositions and tensor operations), information theory, graph theory and so on. The list can get quite heavy.

So, depending on which side of the fence you are on in the industry, you might either have to learn the basics or the more advanced aspects of these branches of math.
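
As one concrete illustration of the first category (my example, not from the book), the gradient-descent update used to train most ML models already mixes multivariate calculus and linear algebra:

```latex
% Gradient descent on a loss L(w) with learning rate \eta:
w_{t+1} = w_t - \eta \, \nabla_w L(w_t)
% e.g. for least squares, L(w) = \tfrac{1}{2}\lVert Xw - y \rVert^2 and \nabla_w L(w) = X^{\top}(Xw - y)
```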

              M.R.: What do you feel is considered safe AI? Is it user dependent? Industry dependent? Software dependent?

Anil: I think it's all of the above. As with any technology, safe use depends on the capabilities we build into our AI systems, how we use them, and what safeguards are employed by the industries that deploy them, which of course will involve the software scaffolds that tie artificial intelligence to the other aspects of information processing.

The difference between AI and the technologies that have come before (such as the internet) is the unprecedented scale and speed at which AI is going to permeate almost everything we do. Also, the barriers to entry for bad actors are much lower than they have been in the past. Using AI for nefarious purposes is relatively easy (think about deepfakes, for example). We will have to work extra hard to ensure the safe use of AI.

                Also, the black-box nature of deep neural network-based AI is going to make it harder to ensure the kind of safe use that relies on being able to interpret the workings of these machines.

                M.R.: What is your forecast for what AI will do for leaders and consumers by 2030?

                  Anil: Given the pace at which things are changing, it’d require someone with a very clear crystal ball to foresee what’s going to happen in two years, let alone in five or ten years. But some broad trends are clear. Machine learning is here to stay. Some of the concerns we had about whether neural networks will be effective have been laid to rest. For example, computer vision and image recognition (and the consequent downstream applications that they make possible)—something that was considered an extremely hard problem a decade ago—is now a mature technology. The same goes for many natural language processing tasks—such as machine translation of text from one language to another.

The wild card is whether or not large language models will truly deliver on their promise. Big companies are betting that scaling up these models—making them bigger and spending more on training data and compute—will create systems that will be superhuman in their abilities, coming close to what many call artificial general intelligence (AGI). While scaling has delivered amazing results (as evidenced by the "emergence" of sophisticated behavior in LLMs as they have been made bigger), there are also concerns that these LLMs, based on the Transformer architecture and next-token prediction, may have in-principle limitations that might be impossible to overcome.

Regardless, both leaders and consumers need to prepare for AI/ML systems that will both empower and disrupt in equal measure, with the attendant social consequences, such as job losses, as AI/ML and robotic systems take over the low-hanging fruit (such as cognitive tasks that can be easily automated, basic coding, and even simple kinds of manual labor).

                  M.R. Rangaswami is the Co-Founder of Sandhill.com

                  Photo credit: Amit Madheshiya / TED

                  Read More

M.R. Asks 3 Questions: Chris Williams, COO of Interaction Associates

                  By Article

                  Chris Williams serves as the Chief Operating Officer for Interaction Associates. His background includes more than ten years in the professional services space in business operations, recruiting, business development, and complex research roles. Prior work includes strategy consulting for Fortune 500 clients.

Interaction Associates is best known for introducing the concept and practice of group facilitation to the business world in the early 1970s. For over 50 years, IA has provided thousands of leaders and teams with practical, simple, and effective programs, tools, and techniques for leading, meeting, and working better across functions, viewpoints, and geographies.

In this week's refreshing perspective on AI, Chris discusses the experience his firm has had working with tech industry leaders in the age of GenAI, and why Silicon Valley's leaders should consider it equally important to emphasize the people skills, including critical thinking, communication, and collaboration, that will be core differentiators in the workplace.

Employment in human-skill-intensive roles is expected to grow three times faster than in less human-skill-intensive roles. Put differently, people skills are more critical than ever in the tech industry.

                  M.R. Rangaswami: How can tech companies and tech leaders effectively integrate the development of people skills even as they rapidly adopt and implement new technologies?

                  Chris Williams: People skills are a prerequisite for building effective digital products and solutions that resonate with their target market and meaningfully engage their audience.

                  First, adopt a skills-based approach. Leaders can identify and prioritize the specific human-centered skills (like communication, collaboration, problem-solving, and emotional intelligence) that are critical for adoption and implementation of new technologies.

                  Next, map out your current capabilities and assess where gaps exist in essential people skills. This can inform a target skills development strategy.

                  Finally, consider how you can develop a culture of continuous learning and innovation. This is where the 20% policy (example: Google) provides a strong illustration – allowing employees to spend time on passion projects that encourage working together in new, creative, and productive ways.

                  M.R.: What are the key components of effective team dynamics in the tech industry, and how can leaders foster these dynamics to improve team outcomes?

Chris: Tech companies want to create a culture where people feel safe, invested, and valued. These traits don't happen by accident. They rely on people skills like group facilitation, collaboration, and alignment. Key components include:

1. Strong communication skills: teams must be able to explain concepts clearly to both technical and non-technical audiences. This communication is essential for gaining buy-in, aligning across departments, and accessing organizational resources.

2. Collaboration across functions and geographies: collaboration across boundaries can be done and is critical to operational stability.

3. Critical thinking and problem solving: while AI tools shine at large-scale data analysis, human teams need the problem-solving capability to assess various inputs, facts, and perspectives, and make wise, informed judgments.

4. Relational abilities: successful team dynamics rely on the ability to build trust, align on goals, and work cohesively together.

                  M.R.: Generative AI (GenAI) capabilities are quickly improving, and the technology is being adopted individually and company-wide. How can tech companies ensure that employees effectively use GenAI as a tool without compromising their essential skills and values?

                  Chris: For GenAI to be used effectively and widely adopted, leaders must follow a process that minimizes resistance and maximizes success. This is done by putting people at the center.

                  Start with people, not tools:  Most AI challenges stem from people and process issues, not technology. Fear of the unknown is reduced when people are informed and included.

Focus on solving real problems: Don't just look for tasks to automate – look for problems that matter to people. This is best done by having a real conversation with people about their process. Ask teams, "Where are you getting stuck?" and listen closely. You'll see the patterns emerge: repeated tasks, bottlenecks in the workflow, and recurring frustrations.

                  Make the work visible: When a key issue is identified, collaborate with the team to visually map out the process. This not only helps to clarify where the problem lies (including current state) but also reveals where process re-engineering and AI automation can help in practical, non-threatening ways.

Outcomes of collaboration and buy-in: Prioritizing human problems and co-creating solutions together will help you put in place a clear AI strategy that people support and that works for the business. Buying tools isn't enough – people need training, clarity, and alignment on how to best use these tools.

Making the work and the process visible strengthens team collaboration and buy-in, and shows how AI automation can do more than just streamline a task – it can make the teamwork itself smoother.

                  M.R. Rangaswami is the Co-Founder of Sandhill.com

                  Read More

M.R. Asks 3 Questions: Sagie Davidovich, Co-Founder and President, SparkBeyond

                  By Article

Sagie Davidovich is Co-Founder and President of SparkBeyond, pioneer of the AI-powered Always-Optimized™ platform that drives constant improvement in business operations. The Always-Optimized platform extends Generative AI's reasoning capabilities to KPI optimization, discovering complex patterns across disparate enterprise data sources like CRM, ERP, and system logs. SparkBeyond's technology solves the hardest challenges in customer and manufacturing operations.

                  In this interview, Sagie shares his insights on the emerging trends in AI taking shape in 2025 that will impact how enterprises leverage this technology long-term. Sagie also highlights why Agentic AI is central to this discussion. 

                  M.R. Rangaswami: Why is Agentic AI gaining momentum as an industry classification?

Sagie Davidovich: Agentic AI has rapidly emerged as a focal point in artificial intelligence. Unlike traditional AI tools that primarily respond to user prompts, Agentic AI is designed to autonomously analyze data, predict outcomes, and execute decisions with minimal human oversight.

                  According to Deloitte, 25 percent of companies using generative AI are expected to launch Agentic AI pilots by year end. Companies like Google, Salesforce, Microsoft, and HubSpot have already begun integrating Agentic AI.

                  These developments signal a shift from static automation to dynamic systems capable of continuous learning and adaptation, just like human knowledge workers would. At its core, this revolution revolves around two key concepts:

                  1. Defining KPIs as actionable objectives for agents: By aligning agents with measurable business goals such as customer retention or revenue growth, organizations can ensure these systems focus on what truly matters.
                  2. Enabling continuous improvement through real-time learning: This allows agents to refine their strategies dynamically based on evolving data and conditions.
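
A toy sketch of those two concepts, an agent given a measurable KPI objective that refines its strategy from observed outcomes, might look like the following; the environment and action names are invented for illustration, and this is not SparkBeyond's platform:

```python
import random

def simulate_environment(action: str) -> float:
    """Hypothetical environment: different actions have different expected KPI lift."""
    base = {"discount_offer": 0.5, "win_back_email": 0.8, "no_action": 0.1}
    return base.get(action, 0.0) + random.uniform(-0.2, 0.2)

def run_kpi_agent(kpi_target: float, actions: list[str], rounds: int = 20) -> dict[str, float]:
    """Toy agent loop: pick an action, observe the KPI impact, and shift weight toward
    whatever is working (a crude stand-in for continuous improvement through real-time learning)."""
    scores = {a: 0.0 for a in actions}
    for _ in range(rounds):
        # Explore occasionally, otherwise exploit the best-scoring action so far.
        action = random.choice(actions) if random.random() < 0.2 else max(scores, key=scores.get)
        observed_lift = simulate_environment(action)   # stand-in for real KPI telemetry
        scores[action] += observed_lift                # learn from the outcome
        if sum(scores.values()) >= kpi_target:
            break
    return scores

print(run_kpi_agent(kpi_target=5.0, actions=["discount_offer", "win_back_email", "no_action"]))
```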

                  M.R. Rangaswami: What is the missing piece in Agentic AI and how do we address it?

Sagie Davidovich: While execution capabilities are essential, future-proof autonomy in Agentic AI requires more than just task completion—it demands introspection and self-improvement.

                  At the heart of strategic intelligence lies hypothesis generation & testing—the cornerstone of the scientific method. This iterative process allows agents not only to learn what is true, but also to adjust their strategies and workflows when new data challenges existing assumptions. Hypothesis testing enables agents to explore possibilities beyond predefined guidelines, fostering innovation and adaptability.

As one industry example, we support clients with the ability to generate billions of hypotheses, which ensures agents operate with a deep understanding of their environment and gives agent builders the information they need to improve the agent and its ability to achieve its goals.

                  M.R. Rangaswami: Can you give a real-world example of Agentic AI at work?

Sagie Davidovich: In a marketing ecosystem powered by Agentic AI, optimization examples include:

                  • A Campaign Management Agent optimizes ad spend across channels 
                  • A Customer Segmentation Agent refines audience targeting based on real-time behavioral data 
                  • A Content Optimization Agent tests messaging strategies tailored for each segment 
                  • A Marketing Mix Modeling Agent reallocates budgets dynamically based on performance metrics

                  These agents work collaboratively, adjusting strategies in concert to maximize overall marketing ROI while balancing KPIs such as customer acquisition cost and lifetime value.

                  M.R. Rangaswami is the Co-Founder of Sandhill.com

                  Read More

                  Quick Answers to Quick Questions: David Winkler, EVP & CPO, Docufree

                  By Article

                  David Winkler is Executive Vice President and Chief Product Officer at Docufree, a leading provider of cloud-based document management and process automation solutions. The company is a services-led leader in digital transformation solutions including: large-volume document capture; data extraction and integration; intelligent process automation; cloud-based document management; and digital mailroom services. Today, over 1,500 enterprises and government agencies rely on Docufree to empower their workforces with the information they need and ensure processes are executed with speed, accuracy, and compliance from wherever work needs to happen. 

                  This week, I had the chance to catch up with David.  Here is our conversation. 

                  M.R. Rangaswami: What are the biggest challenges companies face in managing documents today?

                  David Winkler: One of the biggest challenges is the sheer volume of information spread across physical and digital environments. Many organizations still rely on paper-based workflows, which slow down operations and increase risk. Compliance is another major concern, as businesses must ensure documents are properly stored, accessed, and disposed of according to industry regulations. Finally, security remains a top priority, with organizations needing to safeguard sensitive data from cyber threats and unauthorized access.

Our document management and process automation solutions are engineered to benefit organizations of all types looking to achieve greater efficiency and scalability in an increasingly digital world. There is particularly strong need in highly regulated industries such as financial services, healthcare, government, insurance, and legal—all of which handle large volumes of sensitive documents and require strict compliance.

                  M.R. Rangaswami: What trends do you see shaping the future of document management and digital workflows?

                  David Winkler: AI and machine learning will continue to transform document-intensive processes, driving automation and efficiency. We’re seeing rapid adoption of intelligent document processing (IDP) to extract insights from unstructured data, as well as cloud-based document management and collaboration tools to support remote and hybrid work environments. As regulatory landscapes intensify, businesses will increasingly rely on automated compliance and governance solutions to stay ahead of new requirements. Additionally, we’re seeing large enterprises invest in transforming their corporate mail centers into digital mailrooms.

                  M.R. Rangaswami: Can you explain how a Digital Mailroom works and its benefits?

David Winkler: A Digital Mailroom is a game-changer for organizations managing large volumes of inbound physical and electronic mail communications. Instead of being handled manually, physical mail is digitized at the point of entry—classified, indexed, and its data extracted—then securely delivered electronically to the right recipients and/or business systems.
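
As a simplified sketch of that capture flow (the keyword rules and routing table here are hypothetical, not Docufree's product), a digital mailroom step might classify, index, and route an item like this:

```python
import re

ROUTING = {"invoice": "accounts-payable", "claim": "claims-team", "other": "general-inbox"}

def classify_mail(text: str) -> str:
    """Very rough document classification based on keywords (illustrative only)."""
    lowered = text.lower()
    if "invoice" in lowered:
        return "invoice"
    if "claim" in lowered:
        return "claim"
    return "other"

def extract_fields(text: str) -> dict[str, str]:
    """Pull an indexable field; real capture systems use OCR plus trained extractors."""
    amount = re.search(r"\$\s?([\d,]+\.\d{2})", text)
    return {"amount": amount.group(1) if amount else ""}

def process_mail_item(text: str) -> dict:
    doc_type = classify_mail(text)
    return {
        "type": doc_type,
        "fields": extract_fields(text),
        "route_to": ROUTING[doc_type],   # deliver electronically to the right queue/system
    }

print(process_mail_item("Invoice #1042, total due $1,250.00"))
```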

                  This eliminates paper backlogs, accelerates response times, and enables remote access to critical documents in real-time. It’s especially beneficial for enterprises with distributed workforces, ensuring employees can securely access their mail anytime, anywhere. Additionally, a Digital Mailroom enhances compliance by providing complete visibility and tracking of inbound and outbound communications.

                  M.R. Rangaswami is the Co-Founder of Sandhill.com

                  Read More

                  M.R. Asks 3 Questions: Kirk Dunn, CEO of Kurrent

                  By Article

                  Kirk Dunn brings a career of operational experience at top technology companies, including his tenure as COO of Cloudera, where he helped scale the company from its early stages to rapid growth. Now, as CEO of Kurrent, Kirk is transforming how businesses handle real-time data for AI, application development, and analytics.

                  Kurrent acts like a DVR for business data, capturing not just what happens but also the when, how, and why—while keeping infrastructure costs manageable amid rising AI expenses. Headquartered in San Francisco, Kurrent’s event-native technology is deployed in high-stakes industries worldwide, including finance, tech, oil and gas, manufacturing, retail, healthcare, automotive, and government.

In our conversation, Kirk shares his view that the old way was being "data-driven," while the future lies in being "event-driven." Instead of just collecting and analyzing isolated data points, businesses will expect to capture and understand the complete context and story behind their data.


                  M.R. Rangaswami: What are the challenges you’ve found with traditional databases and streaming solutions that inspired you to join Kurrent?

Kirk Dunn: Kurrent is solving an age-old data problem in a completely new way. What excited me about Kurrent is that we've built something fundamentally different: an event-native data platform that stores data chronologically in an append-only, immutable way. Kurrent never overwrites data like traditional databases do. Instead, our platform is designed to create a permanent, unchangeable log of all data changes that can always be read or queried back and forth in time – in real time, without having to hop between various bolt-on solutions.

                  Kurrent therefore enables businesses to maintain complete context and truly understand not just WHAT happened, but also the WHEN, HOW, and WHY it happened.
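
The underlying event-sourcing idea can be sketched in a few lines; this is an illustrative toy, not Kurrent's API:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)            # frozen = events are immutable once written
class Event:
    stream: str
    kind: str
    data: dict
    recorded_at: datetime

class AppendOnlyLog:
    """Toy append-only event store: it never overwrites, only appends; state is derived by replay."""
    def __init__(self):
        self._events: list[Event] = []

    def append(self, stream: str, kind: str, data: dict) -> None:
        self._events.append(Event(stream, kind, data, datetime.now(timezone.utc)))

    def replay(self, stream: str, until: datetime | None = None) -> list[Event]:
        """Read the history forward in time, optionally 'as of' a past moment."""
        return [e for e in self._events
                if e.stream == stream and (until is None or e.recorded_at <= until)]

log = AppendOnlyLog()
log.append("cart-42", "ItemAdded", {"sku": "A1"})
log.append("cart-42", "ItemRemoved", {"sku": "A1"})
log.append("cart-42", "ItemAdded", {"sku": "B7"})
print([e.kind for e in log.replay("cart-42")])   # full WHAT/WHEN trail, not just the final state
```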

                  This ability to maintain complete historical context while seamlessly handling real-time streams is transformative. Kurrent is the only solution that unifies event-oriented application logic and state-based data models for developers, providing a single platform to store application data in its native event format while streaming highly curated operational or analytical data products directly to downstream interfaces.


                  M.R.: Why is data with context key for businesses today, and how does that change the way businesses operate?

                  Kirk: The insights businesses need to compete and win don’t emerge from isolated data points. They come from understanding the rich, complete context of every situation at any given moment. Context requires following the data journey from origin through lifecycle.

                  For example, to better serve an ecommerce customer, a business has to understand that person’s whole customer journey: what they put in their shopping cart, what they took out, what they ended up buying on a specific day, what they ended up buying on a different day or forever abandoned in their shopping cart. 

                  Context is absolutely necessary when we look at modern business requirements. Companies may need to understand real-time competitive analysis for pricing intelligence, monitor minute fluctuations in equipment telemetry to ensure precision mining or manufacturing, accurately feed AI models for image recognition, or deliver exact insights about customer behavior. In each of these scenarios, having data without context is like trying to read a book with random pages missing — you might get some information, but you’ll miss the complete story. Operating with context means better revenue or other positive business outcomes in some cases, and could even be life-saving in others.

                  The way a business runs today should be thought of as a series of events. When you can trace data back to where it originates while maintaining its complete fidelity, you create a single source of truth that enterprises can rely on for both real-time insights and historical analysis. This transforms how businesses operate by enabling truly real-time, context-aware decision making — which is our mission here at Kurrent.

                  M.R.: What is Kurrent doing that is unique for the market, and how do you think this will shape the way businesses view their data moving forward?

                  Kirk: Kurrent is unique as it is the first and only event-native data platform that combines an event-native database with integrated streaming capabilities. We’ve built a modern data platform that allows companies to originate data, aggregate it from other sources, and maintain its integrity in an immutable, globally ordered log. Kurrent is able to stream data precisely to the exact point of need, enabling companies to serve their customers in very granular ways. 

                  Our platform has many use cases, and it’s particularly transformative for application development. Being able to build apps and services without sacrificing context, scalability or consistency is key to modern business success. Today’s application development demands reliable data synchronization across services, but developers often face significant challenges that slow down development and introduce complexity. Kurrent provides guaranteed data consistency and simplified architecture for easier microservices development and enhanced service scalability. We like to call it “microservices without the mess.”

                  Additionally, our platform is perfect for AI/ML workflows, where Kurrent can enrich processes by serving source data to large language models. Kurrent’s ability to process natural language events natively reduces data transformation overhead from the majority of project time to just a fraction. This is crucial because AI models, like analytics systems, typically lose resolution on their data as it’s moved, transformed, updated and reused.

                  I believe what we are doing at Kurrent will fundamentally change how businesses view their data in the future. The old way was being “data-driven” while the future lies in being “event-driven.” Instead of just collecting and analyzing isolated data points, businesses will expect to capture and understand the complete context and story behind their data. Businesses will demand solutions that can maintain data fidelity, delivered exactly where and when it’s needed with precision and complete context.

                  M.R. Rangaswami is the Co-Founder of Sandhill.com

                  Read More

                  Software Equity Group’s Annual SaaS Report: 2025

                  By Reports

                  Our friends at Software Equity Group have released their SEG Annual SaaS Report, delivering unparalleled insights into the SaaS landscape, combining analysis of over 118 publicly traded SaaS companies with a detailed review of more than 2,100 SaaS M&A transactions in 2024—the second-most active year on record. The SEG SaaS Index™ ended 2024 up 3% despite macroeconomic headwinds. With inflation stabilizing at 3.2% and anticipated interest rate cuts in 2025, SaaS businesses are well-positioned to capitalize on improving conditions.

                  Here are three highlights from their report:

                      2,107 TOTAL SAAS M&A DEALS IN 2024

The SaaS industry recorded the second-highest number of deals on record, with SaaS transactions making up 61% of all software M&A activity, reflecting sustained strength in the market.

                      26.6% UPPER QUARTILE STOCK PRICE INCREASE IN 2024

The upper quartile of companies in the SEG SaaS Index™ achieved an impressive average stock price increase of 26.6%, highlighting the strong performance of top-tier SaaS businesses in the public market.

                      11.7x MEDIAN EV/TTM REVENUE MULTIPLE FOR HIGH RETENTION COMPANIES

Companies with net retention rates above 120% achieved a premium valuation of 11.7x EV/TTM revenue, a 109% premium over the Index median of 5.6x, highlighting the critical role of retention in valuation performance.

Download SEG's full report here to uncover the key trends, benchmarks, and insights that can help you navigate the ever-evolving SaaS market.

                      Read More

                      Sector Update on Customer Experience Software: 2025

                      By Article

                      Our friends at Allied Advisers have released their 2025 sector update on the Customer Experience (CX) Software Industry. The demand for CX software is expanding rapidly driven by advancements in AI technologies, adoption of omnichannel strategies and the need for personalization.

                      CX software has evolved from basic CRM tools to advanced platforms powered by AI-powered chatbots, emotion AI, and speech analytics. Emphasis on real-time customer insights through emotion AI and voice-of-customer strategies is growing. Companies are enhancing loyalty with personalized experiences across digital and physical touchpoints, while omnichannel approaches ensure consistent interactions across platforms.

                      Industries such as retail, healthcare, finance, and telecom are adopting these solutions to enhance user experiences and achieve operational excellence. In financial services, AI-driven CX solutions have increased customer satisfaction by 20% and achieved revenue growth of 10-15%.

                      M&A activity in CX is undergoing a transformation, shifting toward the acquisition of emerging specialized firms driving innovations such as AI-powered loyalty programs and hyper-personalized customer journeys among others. These advancements enable predictive insights and seamless customer interactions, making such firms highly attractive targets. Despite economic challenges, investor confidence remains robust towards resilient startups with strong business models.

                      This report outlines the dynamic growth of the CX software industry and its transformative impact across sectors, offering exciting opportunities for investment and innovation.

                      At Allied Advisers, we look forward to partnering with founders and investors to navigate this exciting landscape.

                      Click here to read the full report.

                      Gaurav Bhasin is Managing Director at Allied Advisers.

                      Read More

                      M.R. Asks 3 Questions: Alex Ly, Principal Solutions Engineer, Solo.io

                      By Article

                      Alex Ly is a seasoned solutions engineer, currently serving as a Principal Solutions Engineer at Solo.io, where he specializes in cloud platforms, service mesh, and API gateway technologies. Prior to joining Solo.io, Alex served as a Senior Solutions Architect at Red Hat, focused on cloud platform solutions. Alex has also been a featured speaker and workshop practitioner at industry events like ServiceMeshCon and IstioCon.

                      In this week’s conversation with Alex, we go through the highlights and roles of AI Gateways in scaling and securing AI applications. We hope you enjoy his perspectives as much as we did.

                      M.R. Rangaswami: What is an AI Gateway, and why is it essential for managing AI services and infrastructure?

Alex Ly: An AI Gateway is a dedicated infrastructure layer for managing AI services, models, and the infrastructure behind them. It simplifies the integration and operation of AI workloads by providing essential features like control, security, and observability over AI traffic. Unlike traditional API Gateways, AI Gateways are built to address the distinct challenges of managing interactions between applications and AI models, especially at scale.

                      Technically, an AI Gateway, like Solo.io’s Gloo AI Gateway, can operate as an additional endpoint for an existing gateway proxy or as a dedicated gateway-proxy endpoint. This flexibility allows organizations to configure the Gateway to meet their specific AI infrastructure needs, supporting tasks like authentication, access control, traffic routing, and performance optimization. AI Gateways also enhance operational efficiency by reducing redundant requests to AI models, monitoring data flows, and enabling seamless failovers between model providers.
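
As a generic illustration of the request path an AI Gateway sits on (access control, routing, and provider failover), here is a minimal sketch; the route names, keys, and backends are hypothetical, and this is not Gloo AI Gateway's configuration or API:

```python
MODEL_BACKENDS = {
    "chat": ["primary-llm", "backup-llm"],   # ordered by preference; names are placeholders
}
API_KEYS = {"team-alpha": {"chat"}}          # which routes each caller may use

def call_backend(backend: str, prompt: str) -> str:
    """Stand-in for an actual HTTP call to a model provider."""
    if backend == "primary-llm":
        raise RuntimeError("provider outage")  # simulate a failure to show failover
    return f"[{backend}] response to: {prompt}"

def gateway_handle(api_key: str, route: str, prompt: str) -> str:
    # 1. Authentication / access control
    if route not in API_KEYS.get(api_key, set()):
        raise PermissionError("caller is not allowed to use this route")
    # 2. Traffic routing with failover across providers
    last_error = None
    for backend in MODEL_BACKENDS[route]:
        try:
            return call_backend(backend, prompt)
        except RuntimeError as err:
            last_error = err                 # try the next provider
    raise RuntimeError(f"all backends failed: {last_error}")

print(gateway_handle("team-alpha", "chat", "Summarize our Q1 pipeline"))
```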

                      M.R.: What benefits do AI Gateways offer to development, security, and infrastructure teams?

                      Alex: AI Gateways provide distinct advantages for development, security, and infrastructure teams, each tailored to their specific needs in AI application development and operations.

                      For development teams, AI Gateways simplify the process of building applications by reducing friction, minimizing boilerplate code, and decreasing errors when working with large language model (LLM) APIs from multiple providers.

                      Security and governance teams also benefit from enhanced security measures, as AI Gateways restrict access, enforce safe usage of AI models, and provide robust visibility through controls and audits. 

                      Lastly, infrastructure teams rely on AI Gateways to scale AI applications effectively, using advanced integration patterns and cloud-native capabilities to boost high-volume, zero-downtime connectivity. 

                      These combined benefits empower organizations to build, operate, and secure AI applications with greater effectiveness and reliability.

                      M.R.: What are the key challenges in scaling AI applications, and how do innovations like semantic caching, model failovers, and prompt guardrails address these challenges?

                      Alex: Scaling AI applications presents challenges such as ensuring reliability across multiple models, optimizing costs, safeguarding sensitive data, and improving the quality of AI outputs. Key innovations like model failovers, semantic caching, and prompt guardrails play a fundamental role in addressing these issues.

                      • Model Failovers guarantee reliability by seamlessly switching between AI systems and providers during outages or performance issues. This prevents disruptions and enhances resilience across applications. For instance, if a preferred AI model becomes unavailable, a failover mechanism can dynamically reroute requests to an alternative provider without impacting performance.
                      • Semantic Caching reduces operational costs and latency by caching responses for repetitive prompts. This minimizes redundant requests to LLM APIs, speeding up response times and optimizing resources. It’s particularly effective for high-volume applications like chatbots or virtual assistants.
                      • Prompt Guardrails protect against risks such as data infiltration, model abuse, and unauthorized access by implementing strict governance and access controls. These guardrails help prevent sensitive or confidential information from being accidentally exposed during AI interactions. For instance, policies enforced through an AI Gateway can monitor and restrict the types of prompts processed by AI models, adding a key layer of security and compliance for enterprises scaling their AI applications. 

                      Together, these innovations address the scalability challenges of AI applications, making sure that they remain reliable, secure, efficient, and capable of delivering high-quality results at scale.
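
To make the semantic-caching idea above concrete, here is a minimal sketch that uses a toy bag-of-words similarity in place of real embeddings; the threshold and structure are illustrative only, not Solo.io's implementation:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': bag-of-words counts. Real semantic caches use learned embedding models."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class SemanticCache:
    """Return a cached answer when a new prompt is close enough to one already answered,
    avoiding a redundant LLM call."""
    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.entries: list[tuple[Counter, str]] = []

    def lookup(self, prompt: str) -> str | None:
        vec = embed(prompt)
        best = max(self.entries, key=lambda e: cosine(vec, e[0]), default=None)
        if best and cosine(vec, best[0]) >= self.threshold:
            return best[1]
        return None

    def store(self, prompt: str, answer: str) -> None:
        self.entries.append((embed(prompt), answer))

cache = SemanticCache()
cache.store("what is our refund policy", "Refunds are accepted within 30 days.")
print(cache.lookup("what is our refund policy please"))  # served from cache, no LLM call
```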

                      M.R. Rangaswami is the Co-Founder of Sandhill.com

                      Read More

                      M.R. Asks 3 Questions: Vinnie Mirchandani, Analyst and Author

                      By Article

Vinnie Mirchandani is a leading technology analyst with a four-decade career at PwC, Gartner, and a firm he founded, Deal Architect. He has advised clients around the world and has worked, lived, and traveled in 75 countries. He has written ten books and countless blog posts, and records a video channel on technology-enabled innovation. His most recent book, which he co-authored, is a fictional mystery, The AI Analyst.

                      As we gradually work into a new year, we hope you’ll find entertainment in this book and conversation.

                      M.R.: Vinnie, I am not surprised to see a book from you on AI, but in the mystery genre? You have me really curious.

Vinnie Mirchandani: MR, not only is this my first fiction, but it's also my first book with a co-author. Kimberly McDonald Baker was at Oracle and I was at Gartner when we first met over 25 years ago and discussed an outline of the plot.

Barry Roman, a billionaire CEO of the Silicon Valley-based company Polestar, disappears. His former executive, Patrick Brennan, is now a technology analyst for Oxford Research in St. Petersburg, FL. He has worked with the FBI before and is embedded in the search for Barry. So much has changed in the 25 years since. AI and automation in robotics, drones, etc. have matured.

SV now has wealth created by Apple, NVIDIA, Google, Salesforce, and so many others. California and Florida have both changed dramatically. All this upheaval provided a nice canvas and many new characters for the book. The suspects in the plot include Barry's estranged wife, a disgruntled former executive, Chinese intelligence, the Italian mob, Latin gangs, and many others. And plenty of non-human copilots, drones, and bots play roles, as they do in real life.



                      M.R.: Enterprise tech is not easy to explain to the average reader. They are even less familiar with Industry Analysts. What challenges did you face in simplifying these complex subjects for your audience?

Vinnie: Actually, even folks in enterprise tech are not that familiar with contemporary tech like GPUs, LLMs, and digital agents, or with how analysts create Magic Quadrants and how that world has changed with boutique firms and bloggers.

But I am a storyteller. I have written countless case studies about corporate strategies, breakout products, and high-tech events in my technology innovation blogs and books. My interviewees are from around the world, speak with heavy accents, and use the TLAs so rampant in tech. I am used to translating that into much simpler language, and that skill definitely helped here.

The early reviews of the book have been very positive. More challenging was that this book required an intense focus on human emotion in terrifying, joyful, and humorous situations. For that I thank my wife, Margaret Newman. She always does a readability review of my business books, but here her background in psychiatry really came in handy. She is an astute observer of people, and she forced us to dwell longer in many scenes and bring out the emotional intensity of each character.

                      M.R.: What will surprise readers the most?

Vinnie: I don’t want to reveal too much of the plot, but there are many twists and turns, as any good mystery should have. The book is 300 pages long; many of my readers like that size so they can finish it on an international or cross-country flight. Reviewers have told us they could not put the book down.

Another surprise for many readers is how many settings they will recognize. In your part of the world we have Pebble Beach, Half Moon Bay, Gilroy, and several others. There are several restaurants and museums in western Florida, where Patrick and his Indian American wife, Maya, are based. Talking of India, we have scenes where we explain the rituals in a Hindu wedding. University of Michigan football shows up in several scenes.

So does western China. The book is fast-paced, but we take the reader around the globe. Another twist is how many strong female characters we have in the book. That’s where having a lady co-author really helped. However, the most surprising thing readers will find is that Polestar is a next-gen tech vendor which combines agentic AI with humanoid robots. Polestar defines its verticals as each of the 800+ occupations the Bureau of Labor Statistics tracks. Their solutions automate limbs, eyesight, and other human faculties, not just cognitive skills. Similarly, Oxford Research is a next-gen analyst firm. They have labs which test products and their security vulnerabilities, and they create copilots. Their focus is not just IT, but energy and other STEM disciplines, and they have a global reach. The book also has a next-gen technology buyer in the NYC financial institution Sheldon Freres, which knows how to monetize its mountains of unique data in our age of AI.

So, while I hope readers enjoy the suspense and humor in the book, I think it will also inspire them to evolve their own jobs and, more broadly, their employers.

                      M.R. Rangaswami is the Co-Founder of Sandhill.com

                      Read More

M.R. Asks 3 Questions: Krishna Yadappanavar, Co-Founder and CEO of Kloudfuse

                      By Article

Kloudfuse is the industry’s first unified observability platform for high-cardinality data across all observability streams; it seamlessly integrates into any environment, operates cost-effectively within your VPC, and delivers a smooth, SaaS-like experience.

In this conversation, CEO Krishna Yadappanavar addresses the paradigm shift required to build adaptable, cost-efficient, and robust observability solutions when managing high data volumes and complex datasets.

Using the “Five C’s” of Cardinality, Control, Consolidation, Causality, and Cost, Krishna breaks down (and simplifies) what’s possible with observability.

                      M.R.: Why has Observability gained such momentum in the last few years?

                      Krishna Yadappanavar: Observability has always been critical to business success, but there are a few factors that are contributing to its rising popularity. 

                      Many organizations are dealing with overwhelming volumes of metrics, logs and traces, which can be difficult to process, let alone extract meaningful insights from. To manage these different data streams, many businesses have put in place fragmented solutions, leading to silos and inefficiencies as teams have to juggle multiple tools to discover the root cause of performance issues. 

                      In recent years, applications have increasingly been built on microservices and cloud-native architectures. These architectures decompose applications into smaller, interconnected services, each generating its own data, resulting in an overwhelming volume of data points. The greater the volume of data, the higher the costs of observability solutions.

The growing importance of observability can be boiled down to what I call the “Five C’s”: Cardinality, Control, Consolidation, Causality, and Cost.

M.R.: You talk about the Five C’s; can you share more about why they are increasingly important for observability?

Krishna: The first C is Cardinality, which reflects the challenge of handling high data volumes. As we’ve discussed, the rise of microservices as the foundation of modern application stacks has increased the amount of data that needs to be monitored. Every line of code, the container it runs in, every pod, cluster, user flow, service call, and database query can contribute to performance issues. The permutations of these variables create an overwhelming number of possibilities, making it increasingly difficult to analyze the data and uncover insights. Managing such large volumes of data poses a significant challenge for observability tools, and our platform is specifically designed to tackle this issue.
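
As a rough illustration of how quickly cardinality explodes, the short Python sketch below multiplies out a handful of hypothetical label sets; the numbers are invented, but the arithmetic is the point.

# Hypothetical label sets attached to a single request-latency metric.
labels = {
    "service":  [f"svc-{i}" for i in range(50)],      # 50 microservices
    "pod":      [f"pod-{i}" for i in range(20)],      # ~20 pods per service
    "endpoint": [f"/api/v1/{i}" for i in range(30)],  # 30 endpoints
    "status":   ["200", "400", "404", "500"],         # 4 status codes
    "region":   ["us-east", "us-west", "eu-central"], # 3 regions
}

# Every distinct combination of label values becomes its own time series
# that the observability backend must ingest, store, and query.
series = 1
for values in labels.values():
    series *= len(values)

print(f"{series:,} possible time series for a single metric")  # 360,000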

                      The second C is Control. As data becomes increasingly valuable to organizations, they are becoming more cautious about vendor lock-in. Companies want to maintain control over their own data and avoid being tied to proprietary SaaS observability platforms. Our solution offers customers greater control by enabling private virtual cloud deployments, all managed through a dedicated control plane for top-notch security and simplified administration.

The third C is Consolidation, which brings together all observability data (metrics, logs, traces, real user monitoring, and continuous profiling) into a single data lake to provide unified observability. This simplifies the process for developers by eliminating the need to switch between different tools and manually piece together different data signals to find the root cause of a problem.

The fourth C is Causality, which focuses on cause-and-effect relationships in data. Troubleshooting often involves “unknown unknowns,” issues that were not anticipated. While we don’t claim to pinpoint the root cause every time, as every system and problem is unique, we provide powerful tools and insights to help you identify causal relationships more effectively and uncover deeper insights.

Finally, private deployments naturally tie into the fifth C, Cost. The sheer volume of data, coupled with fragmented observability tools, has led to high costs. Customers seek a more deterministic cost model that ensures predictability. Our virtual private cloud (VPC) approach and flat pricing model help mitigate this risk by maintaining tighter control over data volumes and expenses.

                      Together, these five Cs—Cardinality, Control, Cost, Causality, and Consolidation—are critical to customers as they integrate Observability into their workflows to understand, monitor, and optimize the performance, health, and reliability of their systems, applications, and infrastructure.

M.R.: Kloudfuse is the second company you have founded. How do you see customers behaving differently in this market, and at this point in time, compared to the past?

Krishna: That’s right. There is a lot of learning that comes from having built and sold a company before. A major lesson has been the importance of product-market fit, and we are laser-focused on that. Most of our customers are not newcomers to observability. They have already adopted legacy tools and have experienced the growing pains that come with gen-1 products. They know the challenges well.

                      This familiarity makes working in an established market an advantage. Our customers know what they need, and we collaborate closely with them to ensure we meet those needs. Our rapid release cycle allows us to quickly incorporate feedback and release new features, sometimes within a two-week window.

                      One of the most common requests from customers is consolidation. With the fragmentation in observability tools, businesses are looking for solutions that manage both frontend and backend observability in one platform. To meet this demand, we’ve integrated real user monitoring (RUM) and session replay capabilities to complement traditional observability of metrics, logs, and traces. 

                      Another trend we’re seeing is the demand for built-in AI features that help shorten diagnosis times and identify “unknown unknowns.” To address this, our platform includes algorithms such as Prophet, SARIMA, Pearson Correlation Coefficient, and others for anomaly detection and causality.
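
As a simplified illustration of these statistical building blocks (not Kloudfuse's actual implementation), the Python sketch below flags anomalies with a rolling z-score and uses the Pearson correlation coefficient to suggest which other signal moves with the anomalous one; the data and thresholds are synthetic.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Synthetic metrics: latency tracks CPU; error rate is unrelated noise.
cpu     = rng.normal(50, 5, 200)
latency = 100 + 2.0 * cpu + rng.normal(0, 5, 200)
errors  = rng.normal(1, 0.2, 200)
latency[180:] += 80  # inject an incident in the last 20 samples

def zscore_anomalies(series, window=50, threshold=3.0):
    """Flag points that deviate strongly from the trailing window's mean."""
    flags = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        z = (series[i] - past.mean()) / (past.std() + 1e-9)
        if abs(z) > threshold:
            flags.append(i)
    return flags

print("anomalous latency samples:", zscore_anomalies(latency)[:5])

# Pearson correlation as a first hint at causal candidates.
for name, signal in [("cpu", cpu), ("errors", errors)]:
    r, _ = pearsonr(latency, signal)
    print(f"latency vs {name}: r = {r:+.2f}")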

                      Cost savings are also a priority for many customers, especially those with large-scale deployments. As ARM processors become more popular for their cost-effective performance, we’ve re-engineered our platform to run efficiently on ARM instances.

                      Lastly, we’re seeing growing interest in generative AI and large language models (LLMs). We’re actively working with customers to integrate these capabilities into our platform. 

Beyond just delivering features, we focus on understanding customer workflows, analyzing and cleansing their data, and integrating these elements seamlessly into our platform. The goal is to make workflows a first-class experience within our product. When customers adopt a platform like Kloudfuse, they don’t just get an observability solution; they get a comprehensive, data-driven platform that enhances their overall business performance.

                      M.R. Rangaswami is the Co-Founder of Sandhill.com

                      Read More

M.R. Asks 3 Questions: Anurag Kamal, Co-Founder & CEO, ElectricFish

                      By Article

                      A few months ago, ElectricFish caught my attention when the company opened a new manufacturing facility and corporate headquarters to build and deploy intelligent grid-edge charging solutions to accelerate EV adoption, especially in areas with power-grid limitations. 

                      But what really stood out to me was California State Senator Josh Becker’s comment that “ElectricFish is tackling the need for more high-speed EV charging and doing it in a way that is easier on the grid and can be deployed faster, while also creating new manufacturing jobs right here on the Peninsula.”

                      I’ve since learned more about the company’s journey; to date, the company has received $1.69 million in grants from the California Energy Commission, and has also received funding from the Department of Energy’s Cradle to Commerce, Caltech Rocket Fund, Michigan Mobility Funding Platform, 2023 Oxford Seed Fund, and Third Derivative.

                      Last week, I caught up with one of the co-founders, Anurag Kamal, CEO of ElectricFish. Here is my conversation with Anurag.

M.R. Rangaswami: Can you explain ElectricFish’s “intelligent grid-edge charging solution,” and how is it different from other charging solutions, like Tesla’s, which currently dominate the EV space?

Anurag Kamal: Certainly. Our charging solution is also an energy asset that boosts grid reliability and resilience while reducing the costs of deploying extremely fast EV charging. 350 Squared™ is our plug-and-play energy storage system, designed to power on-site and community loads while also delivering extremely fast EV charging through two DC fast-charging ports (CCS and NACS). EVs can be charged at roughly 20 miles of range per minute (up to 350 kW), which is state of the art for charging speed.
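
As a rough sanity check on that “20 miles a minute” figure, assuming a typical EV efficiency of about 3.5 miles per kWh (my assumption, not an ElectricFish number):

# Back-of-the-envelope check of miles of range added per minute of charging.
power_kw = 350                  # peak charging power
efficiency_mi_per_kwh = 3.5     # assumed typical EV efficiency

energy_per_minute_kwh = power_kw / 60  # ~5.83 kWh added per minute at peak
miles_per_minute = energy_per_minute_kwh * efficiency_mi_per_kwh

print(f"{miles_per_minute:.1f} miles of range per minute")  # ~20.4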

                      This can be deployed in two to three weeks using an existing grid connection. This is critical because it allows businesses to avoid the lengthy and costly process of extensive infrastructure upgrades, which can take months or even years. In contrast, Tesla Superchargers typically require significant investments in infrastructure, such as trenching, electrical upgrades, and dedicated transformers, all of which can delay installation and create added expenses. Moreover, Tesla’s systems are entirely grid-dependent and do not provide backup power for communities during outages.

Further, the system can also provide up to 48 hours of backup energy for community use. This means the 350 Squared can power critical infrastructure during outages or serve as an energy source when grid electricity is unavailable. The market opportunity within grid services is a vital yet often overlooked aspect of the evolving EV landscape. With the electric vehicle market projected to keep accelerating, our grid-edge charging devices can also enhance grid resilience, capitalizing on the growing demand for decarbonization and grid modernization. Our robust dual revenue model, combining hardware sales with a subscription-based software platform, strategically positions ElectricFish to deliver significant returns in this burgeoning sector.

                      The National Park Service, Los Angeles Department of Water and Power, and the International Brotherhood of Electrical Workers have completed successful pilot projects with ElectricFish. These projects demonstrate a new ability to unlock DC fast-charging in grid-constrained sites in days instead of years, while also monetizing backup energy for critical onsite loads. 


M.R.: How do ElectricFish’s mission and EV charging solutions help achieve California’s goals regarding electric vehicle (EV) infrastructure?

                      Anurag:  California has set some of the nation’s most ambitious goals for reaching net-zero emissions, including transitioning to electrified transportation for everyday commuters and businesses, such as logistics. Achieving these goals would require providing access to reliable EV fast charging in high-traffic and highway off-ramp public locations, multi-family residences, truck rest stops and depots, and at or near ports. The state is targeting one million public EV chargers by 2030, a goal that faces numerous hurdles such as high installation costs, grid capacity constraints, and ensuring equitable access, particularly in underserved communities where these challenges tend to be more prevalent.

                      ElectricFish’s rapid plug-and-play deployment makes it easier to install chargers quickly in both urban and grid-constrained areas. The system combines on-site energy storage with fast chargers, allowing peak demand spikes from fast charging to be smoothed out for grid reliability. This not only provides time and cost savings for customers and the end-users, but also lowers the overall cost of achieving the transition, improving energy and transportation affordability for the community. 

                      Our mission is to build a network of energy assets that can contribute to modernizing the grid, and do so in an equitable manner. In fact, we are committed to empowering universal access to clean, affordable energy while championing fairness and equity, bridging the energy gap worldwide.

                      Our technology also gives underserved communities or regions affordable charging solutions without the need for costly grid upgrades. Historically, these communities face the most challenges due to underinvestment in grid infrastructure, which makes traditional EV chargers much more expensive to install. ElectricFish offers a solution that fully aligns with and enables California’s policy goals for an equitable energy transition.

It has been well reported that, for a number of reasons, California is lagging behind on building out the infrastructure to support its EV adoption goals. And this is exactly where we can help. Our technology helps close this gap by enabling faster infrastructure build-out for light-, medium-, and heavy-duty vehicles.


M.R. Rangaswami: Globally, what is the future of electric grids, and what can ElectricFish’s role be in that future?

Anurag: As we face the urgent need to modernize our aging electrical grids, the timing for ElectricFish couldn’t be more critical. With the extreme growth of generative AI, widespread electrification, and the accelerating adoption of EVs, future electrical grids around the world will look very different than anyone imagined just a few years ago. The whole system will become much more decentralized, allowing energy to be generated and managed at the edges of the grid, as close as possible to where it’s needed.

                      This future will need reliable, resilient, intelligent infrastructure all along the grid edge that can dynamically handle spikes in power demand without spending billions on traditionally vulnerable “poles and wires.” 

                      ElectricFish is delivering that future infrastructure today. Our battery-integrated fast chargers are designed for reliability and resilience, providing dependable performance in various conditions. And with the charging industry’s largest batteries and fastest charging, managed by cutting edge AI software, ElectricFish is future-proofing today’s grid investments.

                      M.R. Rangaswami is the Co-Founder of Sandhill.com 

                      Read More

                      A Quick Q&A with Weldon Dodd, SVP of Global Solutions at Kandji

                      By Article

In this article, Weldon Dodd, SVP of Global Solutions at Kandji, delves into Apple’s approach to Mobile Device Management (MDM), showcasing how its user-centered design has simplified IT management for administrators and users alike. He discusses the role of automation in MDM tools, such as Apple Configurator, in streamlining routine tasks and eliminating manual processes, which enhances overall efficiency in managing devices.

                      Weldon also touches on the innovative features of Declarative Device Management (DDM), a new approach that empowers devices to proactively maintain compliance standards, easing scalability challenges and reducing server loads in large enterprises. With a career that began in IT labs and progressed through wireless telecom and large-scale Apple deployments, Weldon has become a leader in enterprise device management, joining Kandji in 2020 to shape product strategy and foster community.

                      M.R. Rangaswami: How did Apple’s approach to user-centric design influence its development of Mobile Device Management (MDM) solutions?

                      Weldon Dodd: Apple’s commitment to user-centric design, rooted in simplicity and intuitive functionality, deeply influenced its approach to MDM solutions. The company applied its design philosophy to ensure that IT tasks, often complex, were made simple and seamless for administrators and end users alike. This focus on user experience is evident in how Apple designed MDM tools to streamline the device management process, making it as easy as possible to configure, secure, and maintain devices across enterprises. Apple’s MDM is an example of its design principle—products should “just work”—with minimal friction for IT teams and employees.

                      M.R.: How has automation through Apple’s MDM tools improved the efficiency of managing diverse devices in the workplace?

                      Weldon: Automation in Apple’s MDM tools has significantly enhanced the efficiency of managing a wide range of devices by removing manual configurations and streamlining administrative tasks.

                      Tools like Apple Configurator, along with remote management capabilities, enable IT teams to apply policies, update software, and enforce security standards across thousands of devices without manual intervention. This level of automation allows administrators to handle large fleets of devices more quickly and efficiently, ensuring that organizations can maintain security, compliance, and operational standards with minimal disruption to end users.

                      M.R.: How does Declarative Device Management (DDM) represent a shift from traditional automation methods, and what are its key benefits for enterprises?

Weldon: Declarative Device Management (DDM) is a big leap forward in automation, shifting from traditional command-based management to a model where devices take a more proactive role. Instead of MDM servers constantly pushing commands to devices, DDM allows the devices themselves to assess and apply policies based on a declared state set by IT administrators.

                      This results in less back-and-forth with servers and faster responses to compliance needs. For enterprises, DDM brings major benefits—greater scalability, reduced server load, and a more dynamic, self-managing system that ensures devices stay compliant without constant oversight. 
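
To illustrate the conceptual shift in generic terms (this Python sketch is not Apple's actual DDM protocol or payload format), compare a server that pushes individual commands with a device that reconciles itself against a declared state and simply reports back:

# Generic sketch; key names and logic are illustrative assumptions.

# Traditional model: the server decides each step and pushes commands one by one.
def command_based_server(device, desired):
    if device["os_version"] < desired["min_os_version"]:
        device["pending_commands"].append("InstallOSUpdate")
    if desired["require_encryption"] and not device["filevault_enabled"]:
        device["pending_commands"].append("EnableFileVault")
    return device["pending_commands"]

# Declarative model: the server publishes the desired state once;
# the device evaluates it locally and reports status, without a round-trip per step.
def declarative_device(device, declaration):
    actions = []
    if device["os_version"] < declaration["min_os_version"]:
        actions.append("update OS")
    if declaration["require_encryption"] and not device["filevault_enabled"]:
        actions.append("enable encryption")
    device["status_report"] = {"compliant": not actions, "remediations": actions}
    return device["status_report"]

device = {"os_version": 16, "filevault_enabled": False, "pending_commands": []}
desired = {"min_os_version": 17, "require_encryption": True}

print(command_based_server(device, desired))  # ['InstallOSUpdate', 'EnableFileVault']
print(declarative_device(device, desired))
# {'compliant': False, 'remediations': ['update OS', 'enable encryption']}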

                      Apple’s MDM has evolved over the years because the company has embraced this process, continuously enhancing its tools to meet changing enterprise needs.

                      Another lesson is that automation should always simplify—Apple’s solutions make IT management easier, not harder. Lastly, automation is not static. It must evolve with new challenges, just as Apple has continuously refined its approach to device management, ensuring it stays ahead of the curve.

                      M.R. Rangaswami is the Co-Founder of Sandhill.com

                      Read More

                      M.R. Asks 3 Questions: Moti Rafalin, CEO and co-founder, vFunction

                      By Article

                      With over 20 years of expertise in enterprise software, infrastructure, and security and a track record that includes leading WatchDox to a successful acquisition by BlackBerry, Moti is passionate about building scalable solutions and driving innovation in application architecture.

                      In this interview, Moti discusses how vFunction uses AI-driven tools to help organizations reduce architectural technical debt, simplify their systems, and ensure long-term scalability and resilience.

                      M.R. Rangaswami: What are the challenges with software engineering and application development that inspired you to found vFunction?

                      Moti Rafalin: Application architecture is one of the most impactful factors for improving application scalability, resiliency, and long-term competitiveness. While technical debt is a well-known issue, its most common and damaging form – architectural technical debt (ATD) – is especially enigmatic, causing severe performance issues that result in lost revenue opportunities, delayed projects, and customer churn. Despite the risks associated with technical debt, many companies continue to focus on new development at the expense of architectural sustainability. This short-term focus often results in what we refer to as the “boiling frog syndrome,” where the gradual accumulation of technical debt goes unnoticed until it’s too late. Without addressing technical debt, applications require large modernization projects every three to five years, diverting an organization’s energy and resources from new development.

                      vFunction was founded to tackle this problem by providing the first AI-driven architectural observability platform, enabling teams to proactively reduce complexity, manage ATD, and ensure their applications remain resilient and scalable as business demands evolve. vFunction’s platform initially focused on understanding and visualizing the dependencies between business domains, thereby enabling organizations to efficiently break down overly complex monolithic applications into manageable components. This not only reduces ATD but also allows teams to unlock cloud-native benefits like elasticity and scalability, ensuring their systems can evolve efficiently with market demands. vFunction’s architectural observability platform has evolved to support microservices, focusing on preventing the accumulation of technical debt in distributed environments by mapping microservices dependencies, detecting drift and architectural issues, and enabling rules-based governance.

                      M.R.: What is architectural observability? How is it unique? And how does it differ from other application observability platforms?

                      Moti: Architectural observability is the ability to analyze an application’s architecture, mainly through learning its dynamic flows, to gain deep insights into its structure, detect architectural drift, and identify and address complexity and technical debt. Unlike application observability platforms that focus on symptoms of problems, like poor performance and outages, architectural observability shifts left, focusing on the core architectural issues and drift that are the root cause of many performance and scalability issues. Architectural observability connects the dots between architectural decisions and their impact on overall business goals, offering a holistic view of how architecture evolves over time and how it influences application quality and technical debt.

                      vFunction’s AI-driven architectural observability platform is unique in its ability to analyze and visualize the architecture across a company’s entire app portfolio, using its patented dynamic code analysis and offering actionable insights that help teams continuously reduce complexity. vFunction leverages OpenTelemetry to understand and address complexity for any microservices stack. It empowers engineering leaders to quickly pinpoint mission-critical technical debt, continuously simplify their applications, and ensure their development cycles contribute directly to the organization’s high-level objectives. With intuitive visualizations and AI-powered benchmarking, vFunction allows architects and technical leads to make informed application decisions, while developers can focus on delivering new features or refactoring with the support of intelligently prioritized tasks. This level of transparency and direction ensures that all team members can track their progress and contribute to the organization’s success without introducing new risks or unknowns.
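
As a simplified illustration of what this can look like in practice (a generic sketch, not vFunction's product or APIs), the snippet below derives a service dependency graph from trace spans of the kind OpenTelemetry exports and compares it against a baseline to flag architectural drift:

from collections import defaultdict

# Minimal span records: (trace_id, span_id, parent_id, service). Data is made up.
spans = [
    ("t1", "a", None, "checkout"),
    ("t1", "b", "a",  "payments"),
    ("t1", "c", "a",  "inventory"),
    ("t1", "d", "c",  "legacy-db"),  # new, unplanned dependency
]

def dependency_graph(spans):
    """Derive caller -> callee service edges from parent/child span relationships."""
    by_id = {span_id: service for _, span_id, _, service in spans}
    edges = defaultdict(set)
    for _, span_id, parent_id, service in spans:
        if parent_id and by_id[parent_id] != service:
            edges[by_id[parent_id]].add(service)
    return edges

# Baseline architecture as designed; anything beyond it counts as drift.
baseline = {"checkout": {"payments", "inventory"}}

observed = dependency_graph(spans)
drift = {caller: callees - baseline.get(caller, set()) for caller, callees in observed.items()}
print({k: v for k, v in drift.items() if v})  # {'inventory': {'legacy-db'}}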

                      M.R.: Neglecting software complexity and technical debt threatens long-term business growth. How can companies take a strategic, proactive approach to reduce complexity and prioritize technical debt?

                      Moti: A strategic, proactive approach to reducing software complexity and prioritizing technical debt begins with consistently measuring, prioritizing, and resolving technical debt. This approach offers a clear path for organizations to manage this challenge in a phased, scalable way:

1) Measure technical debt continuously, gaining real-time insights into how close the organization is to the “boiling point” where accumulated debt starts to erode engineering velocity, compromise application security, and limit scalability. This requires dedicating engineering resources to ongoing remediation efforts, rather than waiting for a system failure or tackling technical debt only during large, often inefficient, modernization projects.
2) Once technical debt is identified and measured, the next step is to prioritize the most critical issues. This involves reviewing the data and focusing on KPIs that keep the issue from reaching a boiling point.
3) In the most mature phase of technical debt management, organizations make a strategic decision to invest more heavily in continuous remediation efforts. The entire engineering team, including architects, SREs, and application owners, fully engages in this process. Together, they work to modernize the application architecture, prevent architectural drift, and build scalable, resilient systems on an ongoing basis.

                      The platform’s architectural governance capabilities enable organizations to combat microservices sprawl by setting and enforcing rules that prevent drift and ensure alignment with design intent. Meanwhile, comprehensive user flow analysis features for microservices align design intent with actual implementation to keep architectural integrity intact while accelerating development.

                      vFunction’s platform supports organizations at every stage of this maturity model by offering AI-driven tools that identify, prioritize, and help remediate technical debt. Whether the organization is just beginning to measure the scope of its technical debt or is ready to invest fully in its resolution, vFunction provides the observability, governance, and automation necessary to guide that journey.

                      By adopting this strategic, phased approach to technical debt, companies can reduce complexity, protect long-term application resiliency and scalability, and position themselves for sustainable innovation and growth.

                      M.R. Rangaswami is the Co-Founder of Sandhill.com

                      Read More

                      M.R. Asks 3 Questions: Patrick Myles, CEO, PathPresenter

                      By Uncategorized

Patrick joined PathPresenter as CEO in April 2022 with decades of experience in digital health. PathPresenter offers a platform designed to help hospitals, labs, and pharmaceutical companies streamline pathology workflows, improve diagnostic accuracy and efficiency, and enable the use of AI tools. The company recently announced $7.5 million in Series A funding to continue its work.

                      We hope you enjoy this quick and thoughtful conversation.

                      M.R. Rangaswami: What is the biggest challenge that labs and hospitals have deploying digital pathology?

Patrick Myles: We find that the biggest challenge is interoperability. Currently, there are: a) 20+ different scanner vendors, each with its own image file format; b) a variety of laboratory information systems (the software that runs pathology labs) that contain the patient and biopsy information; c) many options for storing digital pathology images, from on-premises to the cloud to hybrid environments; and finally d) a host of new AI algorithms from many vendors coming onto the market, each with its own user interface. How do hospitals and labs get all these components to work together to create a seamless workflow?

Ensuring interoperability is the key job that PathPresenter’s image management and viewing platform addresses. We are the much-needed workflow layer that brings all these various components together into a seamless solution. We do that by being 100% vendor agnostic, meaning we partner with all third-party vendors and integrate their components into our platform. We are continuously filling in the workflow gaps so pathologists can be more productive and efficient, and institutions receive the largest return on their digital investment.
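
To make the interoperability point concrete, here is a generic adapter-pattern sketch in Python, with hypothetical class and format names rather than PathPresenter's actual code, showing how a vendor-agnostic layer can hide different scanner formats behind one interface:

from abc import ABC, abstractmethod

class SlideReader(ABC):
    """Common interface the viewer and AI algorithms code against."""
    @abstractmethod
    def read_region(self, x: int, y: int, width: int, height: int) -> bytes: ...

# One adapter per proprietary scanner format (names are hypothetical).
class VendorAReader(SlideReader):
    def __init__(self, path): self.path = path
    def read_region(self, x, y, width, height) -> bytes:
        # A real adapter would parse the vendor's proprietary container; stubbed here.
        return b"pixels-from-vendor-A"

class VendorBReader(SlideReader):
    def __init__(self, path): self.path = path
    def read_region(self, x, y, width, height) -> bytes:
        return b"pixels-from-vendor-B"

READERS = {".vnda": VendorAReader, ".vndb": VendorBReader}

def open_slide(path: str) -> SlideReader:
    """Pick the right adapter by file extension; downstream code never changes."""
    for ext, reader in READERS.items():
        if path.endswith(ext):
            return reader(path)
    raise ValueError(f"unsupported format: {path}")

tile = open_slide("biopsy_0042.vnda").read_region(0, 0, 512, 512)
print(len(tile))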

                      M.R.: How is technology and the advancements of AI changing the pathology landscape?

Patrick: We are really seeing a perfect storm. The image quality of digital images from whole-slide scanners is now visually equivalent to what a pathologist sees under the microscope. At the same time, advances in network speeds and storage technologies have made digital case signout not only practical, but scalable. Now, the real boon is coming from AI, which presents a once-in-a-generation ability to transform diagnosis and precision medicine. The promise of AI is making digital pathology investment a “must-have” at just about every institution, lab, and pharma company. For the pathologists themselves, some continue to view digital pathology and AI technology cautiously. However, many recognize the compelling benefits and see the inevitability of the new technology. Therefore, they are choosing to explore and leverage it to provide better care for patients, while enhancing their careers and their profession.

                      M.R.: What is the future of digital pathology and how will that impact hospitals, labs, pharma companies and patients?

Patrick: The immediate future of digital pathology is digitization itself. Currently, approximately 10% of the 1 billion glass biopsy slides produced each year globally are digitized. In the next five years, digitization levels will increase to 75% or more. This build-out will create many benefits for hospitals, pathologists, and patients, ultimately increasing the speed and accuracy of diagnosis. With more hospitals becoming digital, and an increasing number of second opinions and consultations happening, we expect to see a democratization of knowledge taking place, which will benefit patients worldwide, particularly in underserved areas.

                      Driving the future of digital pathology the most is the development and application of new AI technology. The rapid advancements in large language models, generative AI, and multimodal learning are enabling a convergence of anatomic pathology with molecular pathology, which promises to enhance diagnostic accuracy, streamline workflows, and reduce costs, while providing personalized patient plans, early detection and prevention, improved prognosis, and reduced anxiety for patients. 

                      Read More

                      M.R. Asks 3 Questions: Amit Patel, Senior Vice President, Consulting Solutions

                      By Article

                      Amit Patel is Senior Vice President at Consulting Solutions, a nationally recognized leader in technology solutions and services. Its key practice areas include Advanced Analytics and Data Science, Agile + Product + Design, Application Development, Cloud & Infrastructure, Cybersecurity, Delivery Leadership, Energy, and ERP (SAP & Oracle).

                      The company’s scalable engagement models—from individual technology consultants to strategic enterprise programs—enable clients to tap into world-class talent, expertise, and services to drive technology and enterprise transformation initiatives. 

                      M.R. Rangaswami: What are some of the biggest ITSM challenges that AI and automation can address?

                      Amit Patel: AI and automation can be used to transform incident management, since they can categorize and prioritize incoming tickets based on their content to ensure that the most critical issues are promptly addressed. Also, AI’s ability to predict potential issues based on historical data and current trends supports proactive incident management to minimize problems—or avoid them altogether.
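
As a toy illustration of automated ticket triage (the training examples and labels are invented, and this is not a production ITSM integration), a few lines of scikit-learn are enough to categorize incoming tickets by their text:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set; a real system would learn from historical tickets.
tickets = [
    "cannot log in to VPN from home",
    "email server is down for the whole team",
    "request new laptop for new hire",
    "database cluster responding slowly",
    "need access to the finance share",
    "website returning 500 errors for customers",
]
priorities = ["medium", "critical", "low", "high", "low", "critical"]

# TF-IDF text features feeding a simple classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(tickets, priorities)

print(model.predict(["customers report checkout errors on the website"]))
print(model.predict(["please order a monitor for my desk"]))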

                      Similarly, AI/automation can be used to help balance ITSM workloads by predicting peak usage times and adjusting staffing levels accordingly. 

                      M.R.: What role can AI and automation play in enhancing customer experience in ITSM?

                      Amit: AI chatbots and virtual assistants can provide around-the-clock customer support, addressing many routine service requests and inquiries on a 24/7 basis. This improves the overall availability of assistance and decreases wait times. For example, password resets and status updates can be handled automatically, freeing agents to focus on more complex issues.

                      Because AI can analyze historical interactions, it can also provide tailored and personalized recommendations based on the customer’s previous issues and preferences, enhancing the overall experience. It can also ensure that provided responses and resolutions adhere to best practices, reducing the variability that often comes with human agents.  

                      M.R.: What are some examples of how AI’s predictive analytics can help to identify potential issues before they become problematic?

                      Amit: As mentioned previously, AI can analyze historical performance data, enabling it to identify normal operating patterns. When deviations start to occur, AI is able to predict impending performance bottlenecks or failures so that ITSM can “head them off at the pass” before they impact users. In this same way, it can also forecast future capacity needs by analyzing current trends and usage patterns.
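
A minimal sketch of the capacity-forecasting idea, using a simple linear trend over synthetic numbers (real systems use far richer models):

import numpy as np

# Hypothetical monthly storage usage in TB over the past year.
usage_tb = np.array([41, 43, 46, 48, 52, 55, 57, 61, 64, 68, 71, 75], dtype=float)
months = np.arange(len(usage_tb))

# Fit a simple linear trend and project six months ahead.
slope, intercept = np.polyfit(months, usage_tb, 1)
future_months = np.arange(len(usage_tb), len(usage_tb) + 6)
forecast = slope * future_months + intercept

capacity_tb = 90
breach = future_months[forecast > capacity_tb]
print(f"trend: +{slope:.1f} TB/month")
if breach.size:
    print(f"projected to exceed {capacity_tb} TB around month index {breach[0]}")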

                      AI can also analyze historical patterns in network traffic and user behavior to predict possible security threats. For example, it can detect unusual login attempts or sudden spikes in data access. By identifying such anomalies earlier, organizations can respond more swiftly to potential attacks to mitigate risks.

                      M.R. Rangaswami is the Co-Founder of Sandhill.com

                      Read More

                      Quick Answers to Quick Questions: Keith Babo, Head of Product, Solo.io

                      By Article

                      Keith Babo leads the product team at Solo.io, covering the full range of application networking technologies required to build modern, cloud-native application architectures. Before joining Solo.io, Keith held product management and engineering leadership positions at Red Hat, Sun Microsystems, and Intel Corporation.

In this conversation, Keith explains how he and his team build on DevOps by shifting the focus from collaboration and automation to optimizing the entire software development lifecycle through centralized, integrated platforms. Keith also highlights the evolution from DevOps practices to more streamlined and standardized processes in platform engineering.

M.R. Rangaswami: How does platform engineering build on the foundations of DevOps, and what sets it apart in the software development landscape?


                      Keith Babo: The primary difference between DevOps and platform engineering lies in their approach and focus. DevOps was introduced to foster collaboration between development and operations teams, leveraging automation and agile methodologies to streamline the software delivery pipeline. However, as organizations scaled their DevOps practices, they encountered new challenges, such as managing diverse and distributed systems and provisioning resources efficiently.

                      Platform engineering emerged to address these challenges by building on the foundation of DevOps. It focuses on creating internal developer platforms (IDPs) that provide self-service capabilities, standardized workflows, and integrated tooling. This centralized approach not only streamlines the development and deployment process but also enhances security, observability, and governance.

                      Essentially, while DevOps emphasizes collaboration and automation, platform engineering aims to optimize and standardize the entire delivery process through a cohesive, integrated environment.


M.R.: How is platform engineering improving the efficiency and productivity of development teams?


                      Keith: Platform engineering significantly improves the efficiency and productivity of development teams by providing a streamlined and standardized environment for software development and deployment. One of the key advantages is the introduction of self-service capabilities. Developers can independently provision resources, deploy applications, and manage their environments without relying on traditional, ticket-based operations. This reduces lead times and increases agility, allowing developers to focus more on writing code and solving business problems.
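
As a highly simplified sketch of the self-service idea (hypothetical template names and policies, not any particular internal developer platform):

# Pre-approved templates encode the platform team's standards and guardrails.
TEMPLATES = {
    "postgres-small": {"cpu": 2, "memory_gb": 8, "backups": True, "encrypted": True},
    "redis-cache":    {"cpu": 1, "memory_gb": 4, "backups": False, "encrypted": True},
}

def provision(team: str, template: str, environment: str) -> dict:
    """Self-service request: no ticket, but only golden-path templates are allowed."""
    if template not in TEMPLATES:
        raise ValueError(f"{template} is not a supported golden-path template")
    if environment not in ("dev", "staging", "prod"):
        raise ValueError("unknown environment")
    spec = dict(TEMPLATES[template], owner=team, environment=environment)
    # A real IDP would hand this spec to infrastructure tooling; here we just return it.
    return spec

print(provision("payments-team", "postgres-small", "staging"))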

                      Moreover, platform engineering integrates various tools and services into a unified interface, ensuring that repetitive tasks are automated and handled consistently. This not only improves productivity but also enhances the reliability and scalability of the software delivery process. By removing the friction associated with provisioning resources and managing environments, developers can concentrate on their core tasks, experiment, iterate, and deploy new features rapidly, ultimately driving business growth and innovation.


M.R.: What role does security play in platform engineering, and how does it enhance an organization’s security posture?


Keith: Security is a critical aspect of platform engineering, and it plays a vital role in enhancing an organization’s security posture. Platform engineering incorporates security measures into the core workflows of delivering applications and services through the internal developer platform (IDP). By standardizing workflows and automating processes, platform engineering ensures that security best practices are consistently applied across the organization, creating a more robust and resilient security framework.

                      Automated policies and guardrails are also implemented to enforce security standards, reducing the risk of vulnerabilities and compliance issues. This systematic approach not only improves the overall security of applications in production, but also ensures that security considerations are integrated from the beginning of the development process.

                      This proactive integration of security measures means that potential risks are identified and mitigated early, reducing the likelihood of security breaches and operational disruptions.

                      Additionally, platform engineering fosters a shared responsibility for security between development and platform teams, making it easier to establish a comprehensive security posture. This collaboration ensures that security is not an afterthought but a fundamental component of the development lifecycle.

By embedding security into the platform, organizations can mitigate risks and maintain compliance more effectively, as demonstrated by recent incidents like the Snowflake data breach. This holistic approach to security, embedded within the platform engineering framework, enables organizations to safeguard their digital assets while continuing to innovate and deliver high-quality software solutions.

                      M.R. Rangaswami is the Co-Founder of Sandhill.com

                      Read More

                      Digital Disruption in HealthTech: Pioneering a New Era in Healthcare

                      By Article

Our friends at Allied Advisers have found that valuable business insights about an industry are best obtained from successful entrepreneurs who have grown a significant business in that industry (“been there and done that”).

Allied Advisers conducted an interview with their former client Jay Nitturkar, who bootstrapped a vertical SaaS healthcare technology company to eight-figure ARR and exited successfully to PSG Equity, a $10B+ fund. Founders and management teams will benefit from learning about some of the playbooks in the journey from inception to a successful exit highlighted in the Allied Advisers report.

Here are a few highlights from the Q&A with Jay Nitturkar:

Allied Advisers: What was the inspiration behind pVerify, and what market gap were you trying to address?

                      Jay Nitturkar:

                      • As Director of Operations in medical billing, I saw physician practices lose over 20% of revenue due to insurance claim denials and patient delinquencies
                      • Traditional verification methods were inadequate and time-consuming
                      • pVerify was created to provide comprehensive and efficient patient insurance and benefits verification, offering customizable products and excellent customer service to address the issues

                      Allied Advisers: How did you build your customer segmentation and go-to-market strategies? How did you enable customer acquisition, growth and retention?

                      Jay Nitturkar:

                      • We developed a niche SaaS product targeting physicians and hospitals, initially focusing on specialist offices for quicker sales cycles and better ROI
                      • We relied heavily on SEO and targeted Google AdWords campaigns to manage customer acquisition, despite limited funds
                      • Inbound leads from effective SEO efforts became our main revenue source, with ~20% conversion rate
                      • The strategy we used included offering a free trial to potential customers, which boosted conversions and aligned with a product-led growth (PLG) strategy
                      • Exceptional customer service led to strong word-of-mouth referrals and high client satisfaction in the healthcare industry

                      Allied Advisers: How did you compete against companies which were well funded compared to you?

                      Jay Nitturkar:

                      • We competed against well-funded companies by offering customer-centric products and exceptional 24/7 customer service from offices in the USA and India. Additionally, the quick implementation times in days outpaced competitors who often took weeks or months

To read the full HealthTech report and insights, which capture the transactional trends in healthtech and feature a few leading private and public healthtech companies that have become market leaders in their focus areas, click here.

                      Read More

                      A Quick Q&A with Mitchell Levy: Global Credibility Expert and Marshall Goldsmith Executive Coach

                      By Article

                      Mitchell Levy is a 2x TEDx speaker, Global Credibility Expert, and Executive Coach who has helped over 1,000 corporations and individuals define themselves succinctly in fewer than 10 words. He emphasizes the importance of clarity in building credibility and trust, guiding clients to articulate their purpose and value clearly.

If 98% of individuals and companies lack clarity, we hope that sharing Mitchell’s insights and his simple, powerful approach will help.

                      M.R. Rangaswami: Can you explain how clarity impacts credibility and why it is essential for building trust?

                      Mitchell Levy: Clarity is a cornerstone of credibility because it ensures that your message is understood and your intentions are transparent. When you communicate with clarity, you reduce misunderstandings and build trust, as people can easily grasp what you stand for and what you offer. In my experience, credibility is fundamentally about being known, liked, and trusted. Clarity facilitates this by allowing others to clearly see who you are, what you do, and why you do it, which helps in establishing trust and likability.

Without clarity, your message can become muddled, leading to confusion and skepticism. People are more likely to trust and engage with individuals and organizations that articulate their purpose and value succinctly and transparently. Therefore, clarity is essential for building trust and fostering credible relationships.

                      M.R.: In your experience, what are the most common barriers to achieving clarity, and how can individuals overcome them?

                      Mitchell: The most common barriers to achieving clarity include:

                      1. Overcomplexity: People often overcomplicate their messages, believing that more information equals better understanding. In reality, simplicity and brevity are key to clarity.
                      2. Lack of Focus: Not having a clear focus or trying to address too many points at once can dilute your message.
                      3. Internal Noise: Personal biases, assumptions, and lack of self-awareness can cloud your ability to communicate clearly.

                      To overcome these barriers, one can:

                      1. Simplify Their Message: Focus on the core message you want to convey. Remove unnecessary details and use simple, straightforward language.
                      2. Know Their Audience: Understand who you are communicating with and tailor your message to their needs and level of understanding.
                      3. Seek Feedback: Regularly ask for feedback to ensure your message is being understood as intended. This helps identify areas where clarity may be lacking.
                      4. Practice Consistency: Be consistent in your messaging across all platforms and interactions to reinforce your clarity and credibility.

By addressing these barriers, individuals and corporations can significantly enhance their clarity, leading to greater credibility and effectiveness in their personal and professional interactions.

                      M.R.: Can you share a success story where gaining clarity significantly transformed an individual’s or organization’s credibility and effectiveness?

                      Mitchell: One notable success story involves a client who was struggling to articulate their value proposition clearly. This individual was a highly skilled professional with a wealth of experience, but their message was lost in a sea of jargon and overly complex explanations. We worked together to define their CPoP (Customer Point of Possibilities), which is a succinct statement of the primary problem they solve for their clients.

                      Through a single clarity coaching session, we distilled their message into a clear, concise CPoP that immediately resonated with their target audience. As a result, they experienced a significant transformation in their business. Their improved clarity not only made their value proposition more compelling but also boosted their credibility. Prospective clients found it easier to understand what they offered and why it was valuable, leading to increased trust and a higher conversion rate.

This transformation underscores the power of clarity in enhancing credibility and effectiveness. When you can clearly articulate who you are, whom you serve, and the specific problems you solve, you position yourself as a credible, trustworthy expert in your field.

                      M.R. Rangaswami is the Co-Founder of Sandhill.com

                      Read More

                      10 Ways To Prevent Shadow AI Disaster

                      By Article

As contributing writer Mary Pratt shares in her CIO article, just as it was with the shadow IT of yore, there’s no one-and-done solution that can prevent the use of unsanctioned AI technologies or the possible consequences of their use.

However, CIOs can adopt various strategies to help eliminate the use of unsanctioned AI, prevent disasters, and limit the blast radius if something does go awry. Here, IT leaders share 10 ways that CIOs can do so.

                      Unsanctioned AI in the workplace is putting company data, systems, and business relationships at risk.

Here are 10 ways to pivot employees’ AI curiosity toward acceptable use and organizational value.

                      1.  SET AN ACCEPTABLE USE POLICY FOR AI

A big first step is working with other executives to create an acceptable use policy that outlines when, where, and how AI can be used, and that reiterates the organization’s overall prohibitions against using tech that has not been approved by IT, says David Kuo, executive director of privacy compliance at Wells Fargo and a member of the Emerging Trends Working Group at the nonprofit governance association ISACA. It sounds obvious, but most organizations don’t yet have one.
                      2. BUILD AWARENESS ABOUT THE RISKS AND CONSEQUENCES

                        Kuo acknowledges the limits of Step 1: “You can set an acceptable use policy but people are going to break the rules.” So warn them about what can happen.

                        “There has to be more awareness across the organization about the risks of AI, and CIOs need to be more proactive about explaining the risks and spreading awareness about them across the organization,” says Sreekanth Menon, global leader for AI/ML services at Genpact, a global professional services and solutions firm. Outline the risks associated with AI in general as well as the heightened risks that come with the unsanctioned use of the technology.

                        Kuo adds: “It can’t be one-time training, and it can’t just say ‘Don’t do this.’ You have to educate your workforce. Tell them the problems that you might have with [shadow AI] and the consequences of their bad behavior.”

                      3. MANAGE EXPECTATIONS

                        Although AI adoption is rapidly rising, research shows that confidence in harnessing the power of intelligent technologies has gone down among executives, says Fawad Bajwa, global AI practice leader at Russell Reynolds Associates, a leadership advisory firm. Bajwa believes the decline is due in part to a mismatch between expectations for AI and what it actually can deliver.

                        He advises CIOs to educate on where, when, how, and to what extent AI can deliver value.

                        “Having that alignment across the organization on what you want to achieve will allow you to calibrate the confidence,” he says. That in turn could keep workers from chasing AI solutions on their own in the hopes of finding a panacea to all their problems.

                      4. REVIEW AND BEEF UP ACCESS CONTROL

                        One of the biggest risks around AI is data leakage, says Krishna Prasad, chief strategy officer and CIO at UST, a digital transformation solutions company.

                        Sure, that risk exists with planned AI deployments, but in those cases CIOs can work with business, data and security colleagues to mitigate risks. But they don’t have the same risk review and mitigation opportunities when workers deploy AI without their involvement, thereby upping the chances that sensitive data could be exposed.

                        To help head off such scenarios, Prasad advises tech, data, and security teams to review their data access policies and controls as well as their overall data loss prevention program and data monitoring capabilities to ensure they’re robust enough to prevent leakage with unsanctioned AI deployments.

                      5. BLOCK ACCESS TO AI TOOLS

Another step that can help, Kuo says, is blacklisting AI tools, such as OpenAI’s ChatGPT, and using firewall rules to prevent employees from accessing them from company systems.

                      6. ENLIST ALLIES IN THE EFFORT

                        CIOs shouldn’t be the only ones working to prevent shadow AI, Kuo says. They should be enlisting their C-suite colleagues — who all have a stake in protecting the organization against any negative consequences — and get them onboard with educating their staffers on the risks of using AI tools that go against official IT procurement and AI use policies.

                        “Better protection takes a village,” Kuo adds.

                      7. CREATE AN IT AI ROADMAP THAT DRIVES ORGANIZATIONAL PRIORITIES, STRATEGIES

                        Employees typically bring in technologies that they think can help them do their jobs better, not because they’re trying to hurt their employers. So CIOs can reduce the demand for unsanctioned AI by delivering the AI capabilities that best help workers achieve the priorities set for their roles.

                        Bajwa says CIOs should see this as an opportunity to lead their organizations into future successes by devising AI roadmaps that not only align to business priorities but actually shape strategies. “This is a business redefining moment,” Bajwa says.

                      8. DON’T BE THE DEPARTMENT OF ‘NO’

                        Executive advisers say CIOs (and their C-suite colleagues) can’t drag their feet on AI adoption, because doing so hurts the organization’s competitiveness and ups the chances of shadow AI. Yet that’s happening to some degree in many places, according to Genpact and HFS Research. Their May 2024 report revealed that 45% of organizations have adopted a “wait and watch” stance on genAI and 23% are “deniers” who are skeptical of genAI.

                      9. EMPOWER WORKERS TO USE AI AS THEY WANT

                        ISACA’s March survey found that 80% of respondents believe many jobs will be modified because of AI. If that’s the case, give workers the tools to use AI to make the modifications that will improve their jobs, says Beatriz Sanz Sáiz, global data and AI leader at EY Consulting.

                        She advises CIOs to give workers throughout their organizations (not just in IT) the tools and training to create or co-create with IT some of their own intelligent assistants. She also advises CIOs to build a flexible technology stack so they can quickly support and enable such efforts as well as pivot to new large language models (LLMs) and other intelligent components as worker demands arise — thereby making employees more likely to turn to IT (rather than external sources) to build solutions.

                      10. BE OPEN TO NEW, INNOVATIVE USES

                        AI isn’t new, but the quickly escalating rate of adoption is showing more of its problems and potentials. CIOs who want to help their organizations harness the potentials (without all the problems) should be open-minded about new ways of using AI so employees don’t feel they need to go it alone.

                        Bajwa offers an example around AI hallucinations: Yes, hallucinations have gotten a nearly universal bad rap, but Bajwa points out that hallucinations could be useful in creative spaces such as marketing.

                        “Hallucinations can come up with ideas that none of us have thought about before,” he says.

                      Thanks to Mary K. Pratt, contributing writer at CIO, for this article and information. The full article can be read here.

                      Read More

                      M.R. Asks 3 Questions: Brett Shively, CEO of ACI Learning

                      By Article

                      Since 2019, CEO Brett Shively has led ACI, which provides audit, cyber, and IT training to more than 250,000 subscribers worldwide with an 80% content completion rate.

                      With robust leadership roles at OnCourse Learning, Everspring, DeVry, and Kaplan, Brett brings a wealth of experience to his role as CEO of ACI Learning, and his understanding of the training/L&D landscape and its future direction is deeply respected in the field.

                      We hope you enjoy this quick and thoughtful conversation.

                      M.R. Rangaswami: With so many competing business priorities, why should organizations invest in training and development?

                      Brett Shively: Today’s business landscape is rife with change. Budgets are tight, competition is fierce, the global talent shortage persists. Furthermore, new technologies like generative AI are shaking things up, creating both exciting opportunities and new skills gaps. 

                      By implementing an effective training strategy, leaders can meet this moment head on and empower their workforce to adapt. Many companies recognize the vital role employee learning plays in the success of their business, which is why the learning management system (LMS) market is expected to reach $51.9 billion USD by 2028.

                      The landscape is really changing so quickly, both on the technology side with AI, cybersecurity and other factors, and on the regulatory side with auditing. Companies that actively support continual learning rather than just leaving it to the employee get better outcomes.

                      M.R.: What methods work best to upskill employees in our modern workforce?

                      Brett: There are a few simple steps leaders can take to start.

                      First, use assessment tools to uncover employees’ needs and motivations. It’s impossible to help someone if you don’t know what they need. And too often, companies apply a one-size-fits-all approach to employee learning, which can lead to redundancies, disengaged employees, and workers that are either overqualified or underqualified for their role. Assessment tools help businesses identify opportunities for upskilling or reskilling, measure learning progress, track the application of new skills in the workplace, and guide individuals along their career paths.

                      Then, using what was learned from the assessments, companies should tailor trainings to the specific groups of employees they’re educating — even down to the individual level if necessary. For example, learning might be tailored to generational differences: Gen Z employees might get the most value from self-led programs that feature “bingeable” snippets of content — think short-form, Tik Tok-esque videos. Conversely, millennials may prefer more structured, long-form, instructor-led content. In other instances, training might need to be tailored for accessibility, neurodiversity, or some other factor.

                      Finally, don’t underestimate the power of incentivization. Many people are already overloaded in their daily roles, and asking them to learn something new — while exciting — can also seem overwhelming. Incentivization is a simple yet effective way to motivate employees and reignite their spark for learning.

                       

                      M.R.: How does AI come into play with the future of learning and development?

                      Brett: AI is a great human intelligence booster. It’s like having a calculator to help you with complex math problems – 50 years ago using a calculator was considered “cheating,” but now they’re universally recognized as learning tools that enable students to focus on the key problem (unless they’re trying to learn how to do long division!). Generative AI can be viewed through the same lens – it helps people get a kickstart on problems, it helps them summarize and organize a large body of information quickly and it can generate creative solutions that can be refined.

                      In addition, AI-infused tools can help organizations take their training efforts to the next level through personalized learning, deeper engagement, boosted efficiency, and inclusive access. AI can also tailor learning experiences to employees’ individual needs by automatically adjusting difficulty levels, recommending helpful resources, and providing customized feedback.

                      The potential benefits of AI are vast, but organizations must be mindful regarding adoption—with so many different solutions available, companies risk overwhelming employees with new information and capabilities, inadvertently diminishing the value they’re aiming to provide. The key is to start slow and offer plenty of support as employees familiarize themselves with these new tools.

                      The pitfalls are well known: when using Generative AI to answer critical questions, checking the answers is equally critical. Additionally, simply asking Generative AI to create something and using that output without any refinement can result in generic, unhelpful answers that are easily spotted as the work of an LLM.

                      M.R. Rangaswami is the Co-Founder of Sandhill.com

                      Read More

                      SEG’s 2024 SaaS M&A Public Market Report: Q1 

                      By Article

                      Software Equity Group’s Q1 M&A Report is in, revealing that the start of the year brought strong software deal volume, totaling 823 deals in 1Q24.

                      Despite a recent decline in SaaS M&A activity over the past two quarters, the 486 SaaS deals in 1Q24 far surpass pre-2022 levels, marking a 50% increase from the Q1 average of 2019-2021.

                      Here are 5 Highlights from SEG’s 1Q24 SaaS Report:

                      1. AGGREGATE SOFTWARE INDUSTRY M&A DEAL VOLUME HAS SETTLED into a steady state that remains strong relative to historical standards, recording 823 total deals in 1Q24, a similar level to 1Q23. The trend shows a steady increase compared to pre-COVID levels, with ‘21 and ‘22 representing a period of unprecedented M&A volume.

                      2. DEAL ACTIVITY FOR SAAS M&A HAS SEEN LESS VOLUME IN THE LAST TWO QUARTERS. This trend is likely caused by a myriad of compounding variables, including regional bank instability in early 2023, macroeconomic concerns, and a high-interest rate environment that have all had lagging impacts on volume.

                        With the Fed planning to cut interest rates three times this year, deal volume is expected to increase. Despite a few lighter volume quarters, the 486 SaaS deals in 1Q24 remain well above pre-2022 levels, up 50% from the 2019-2021 Q1 average.

                      3. THE MEDIAN EV/TTM REVENUE MULTIPLE FOR 1Q24 WAS 3.8X, in line with 4Q23. Over the last three quarters, both the median and average EV/TTM revenue multiples for SaaS M&A have stabilized, indicating that the M&A market has fully responded to a higher interest rate environment and caught up to the public SaaS market.

                        M&A valuations for the remainder of 2024 will be impacted by the Fed’s interest rate-cutting trajectory, but in the interim, high-performing businesses continue to receive strong outcomes in the M&A markets.

                      4. VERTICAL SAAS COMPRISED 49% OF ALL SAAS M&A DEALS in 1Q24, continuing the trend of buyers and investors seeking the types of purpose-built, mission critical applications that are the calling card of vertical software companies.

                        Healthcare and Financial Services represented the most active verticals this quarter. However, several other verticals, including Hospitality, Retail, and Manufacturing, saw increased activity YOY, indicating that buyers and investors are not just looking for deals in historically active verticals but rather widening their focus to encompass a variety of verticals.

                      5. PRIVATE EQUITY CONTINUED TO PACE SAAS M&A ACTIVITY (59% of 1Q24 deals), driven by PE-backed strategics (48%) that continue to leverage an ideal mix of product synergies and capital allocated to M&A. Strategic buyers (40.7% of Q1 deals) had their most active two-quarter stretch since 1Q22.

                      To read Software Equity Group’s full SaaS M&A and Public Market Report, click here.

                      Read More

                      M.R. Asks 3 Questions: Brian Biro, Bestselling Author & Speaker

                      By Article

                      Brian Biro holds degrees from Stanford University and UCLA, is the author of many bestselling titles, and has delivered over 1,800 presentations globally on leadership, team building, and breaking through.

                      His latest book, “Lessons from the Legends,” draws inspiration from NCAA and SEC championship-winning coaches Pat Summitt and John Wooden, offering a championship team-building formula applicable to business leaders, parents, and educators.

                      M.R. Rangaswami: How can business leaders adapt these principles to create high-performing teams in the corporate world, especially in the face of rapid changes and challenges?

                      Brian Biro: In today’s business world, with its obsession with accelerating technology and AI, it can be easy for leaders to forget that they are actually in the PEOPLE business. It is how you and your team grow that will determine how far you can go. Both Coach Summitt and Coach Wooden realized they coached people more than basketball.

                      One of the leadership practices they each used was to guide their teams to focus on controlling their controllables. They believed they did not control results, but they could powerfully impact the effort, energy, attitude, and constant drive to improve that would lead to breakthrough results. Both of these great coaches demonstrated immense humility which also had a profound impact on those they led.

                      They set the example of GIVING credit and taking responsibility. And, it’s amazing what’s accomplished when no one cares who gets the credit. They became shining examples of personal responsibility, always seeking to learn and improve rather than blame.

                      Perhaps most importantly, both coaches focused every single day on the models of personal excellence they developed: Coach Summitt on her Definite Dozen, and Coach Wooden on his Pyramid of Success. That dogged consistency on the principles and qualities they most sought to develop ignited unstoppable cultures at UCLA and the University of Tennessee. Nothing is more important to long-term success in the business arena than a powerful, unstoppable culture.

                      M.R.: How can business leaders develop resilience in themselves and their teams, particularly in the dynamic and unpredictable business environment we often face?

                      Brian: Business leaders can learn from these two Legends to develop extraordinary resilience through a few very powerful leadership practices the coaches lived by. First, they were blame-busters! If you think about blame in the context of time, blame is always about the past. You cannot change the past. So, whenever you find yourself in blame, you are in the past. Coach Summitt and Coach Wooden did not pretend that mistakes weren’t made or that their players and coaches did not make mistakes. But, when mistakes were made, rather than getting stuck in blame, they moved forward by asking what they could learn from setbacks and mistakes to get better. As a result, their players and coaches weren’t terrified of making mistakes and focused instead on constant learning and improvement.

                      Second, both coaches lived by the practice of ending every game, workout, and day on a positive note because that would create a springboard into tomorrow. This simple leadership practice created a positive energy about the future…that something good was coming tomorrow. Finally, both coaches’ constant focus on controlling controllables and letting go of comparison was especially important when dealing with challenges and setbacks. Whenever we focus on what we DO control rather than on what we don’t, we generate momentum, confidence, and resilience.

                      M.R.: How can business leaders balance passion and composure in their leadership style, and when might one approach be more effective than the other?

                      Brian: Pat Summitt was known for her passion, while John Wooden was characterized by his calm and even-keeled approach. I wrote this book in part to demonstrate that styles are far less important in the long-run than core values, humility, and character. Each coach was 100% authentic in their style and intensity. Both were passionate about teaching and giving credit. Both believed there are no over-achievers, that we have more in us than we know, and both were passionate about helping everyone they led to rise as close to their potential as possible. The great value of John Wooden’s style was his focus on listening. For Coach Summitt it was her intensity about demanding one’s best. Though they went about it differently, both were incredibly PRESENT for others. Only by being fully present do we communicate to others that they are important, significant, and that they matter. 

                      M.R. Rangaswami is the Co-Founder of Sandhill.com

                      Read More

                      The Art of Bootstrapping: Building Success From The Ground Up

                      By Article

                      Bootstrapping businesses (running a business without external capital) has been a common practice since the inception of entrepreneurship. While bootstrapping is an understated way of growing a business, it has stood the test of time in all market cycles if done correctly.

                      This Allied Advisor report profiles operational metrics for bootstrapped companies and has examples of businesses which scaled successfully using this route before taking in significant capital infusion or exiting.

                      Here are three key points from the report, along with highlights from their Netcore feature, a company that bootstrapped its way to a $110M ARR SaaS business.

                      1. Growth Fueled by Strong Operational and Capital Efficiency

                        The Rule of 40 (growth rate plus profit margin totaling at least 40%) is considered an investor’s benchmark for an investable company. Empirical data indicates that bootstrapped companies broadly score higher on the Rule of 40 compared to VC-backed businesses at most revenue levels; VC-funded companies are typically growth focused, often at the expense of profitability.

                        2. Cost Structure of Bootstrapped vs. VC-Funded SaaS Companies

                        Bootstrapped companies prioritize operational efficiency through their lean cost structure. With all spending on marketing and R&D sourced from generated income, they focus keenly on optimizing operations. For example, the report’s chart shows marketing costs for bootstrapped companies at 17%, compared to 27% for VC-funded companies.

                        3. Compelling Exit Valuations for Bootstrapped Companies

                        The selection of an exit strategy for a startup is influenced by its growth stage, market conditions, and strategic objectives. The two exit options presented at the end are an IPO or M&A, the former being more frequent. Companies opting for merger exits command a higher EV/Revenue multiple of 17.2x, compared to 8.8x for those choosing to go public.

                        ____

                        Bootstrapping a $110M ARR SaaS Company: The Netcore Story

                        Rajesh Jain, founder of Netcore, not only bootstrapped his business profitably to over $110M in ARR, but also purchased Unbxd for $100M in cash to further accelerate growth. Having grown two successful, profitable bootstrapped businesses, he shared the following highlights in his interview with Allied Advisors:

                        AA: You coined the term Proficorn – can you explain what this is for the benefit of founders and entrepreneurs?

                        RJ: A Proficorn is viewed as the antithesis of a Unicorn: the founders, rather than investors, own the complete stake in the company.

                        The trick is to combine the profitability of bootstrapped businesses with scaling.

                        AA: You have done it twice in your life – what is the magic to doing this?

                        RJ: The key is to prioritize a path to profitability from day one, steering clear of a growth-at-all-costs mindset.

                        Forging a path to profitability swiftly drives frugal operations, ensuring judicious use of limited capital.

                        Successful bootstrap businesses should seek a balance between profitability and growth, exploring various avenues for profit instead of seeing it as an exclusive choice against growth.

                        AA: How did you bootstrap Netcore to $110M ARR without outside capital?

                        RJ: High gross margins made our company profitable without external funding.

                        Establishing the initial profit engine is vital to sustain and capitalize on market opportunities, avoiding stagnation.

                        AA: How did you acquire a $100M business, Unbxd, without using outside capital?

                        RJ: Our company consistently saved and reinvested profits through incremental growth via product development and strategic acquisitions.

                        Acquisition was focused on a company with business in key markets (US, UK, Australia) with complementary offerings. Unbxd, with its B2B revenue and India-based team, was an ideal fit, funded by internal accruals.

                        The vision lies in merging customer data with the product catalog, a unique strength in B2C-Martech, shaping the company’s future.

                        To read more of the data presented in this Allied Advisors report, click here.

                        Thanks to Gaurav Bhasin, Managing Director at Allied Advisors for pulling together this report.

                        Read More

                        M.R. Asks 3 Questions: Founder and CEO of Kasada, Sam Crowther

                        By Article

                        Sam Crowther created Kasada when he was only 19 years old, in a small shipping container under the Sydney Harbour Bridge.

                        Nine years later, Sam has tripled his team, raised over $39 million USD, and protects more than $150 billion in annualized eCommerce and more than 100 million internet users daily. Last year he made the Forbes 30 Under 30 list, and Kasada’s aggressive approach to predicting and preventing bot attacks and online fraud is creating a safer, more secure digital experience for everyone.

                        M.R. Rangaswami: When it comes to bots, what are the most pressing challenges for enterprises today?

                        Sam Crowther: Attackers are driven by money, and the use of bots has proven to be a quick, effective way to acquire and resell goods (like tickets, electronics, and shoes) and commit online fraud for huge profits. Access to bots has been democratized: anybody can purchase a sophisticated bot (increasingly offered as a service) at little to no cost and use it without needing a technical understanding.

                        Another part of the challenge is that enterprises have historically relied on inadequate, costly bot defenses. Traditional tools are static, allowing time for botters to reverse engineer and get past them. Or they require human interaction (like annoying CAPTCHAs), which frustrates the user experience. Attackers are incredibly motivated to work around these defenses, constantly changing their attack methods to stay a step ahead of defenders. This is all incredibly costly for businesses, both in the costs incurred by playing whack-a-mole with ineffective defenses and in the bots themselves, as processing fake traffic is expensive.

                        There’s a huge disparity. Users of bots are able to evade defenses at little to no cost, yet many businesses spend millions of dollars in an attempt to protect against bots and are still unable to move at the increasing speed of the attacker. The bots are winning, and Kasada set out to change this paradigm.

                        M.R.: How is AI changing the bot landscape?

                        Sam: Bots are being used to exploit AI to damage brands, breach systems, and cost businesses a lot of money.

                        One of the most immediate areas is using AI to bypass CAPTCHAs. AI image recognition has gotten good enough that it can bypass even the newest forms of CAPTCHAs with very high accuracy and at a speed far quicker than a human can. That’s no good, because the only ones fooled by CAPTCHAs nowadays are humans, not the bots. The result is a horrible user experience for those that decide to use them, while doing very little, if anything, to secure the experience.

                        One of the biggest existential threats to online businesses today is that AI companies have embraced web scrapers (also known as web crawlers) to haul in huge volumes of data from other companies to train their large language models (LLMs). This has ramifications for businesses that rely on website traffic for monetization, in addition to content creators who don’t receive acknowledgement or payment for their work. These persistent web scrapers can be extremely difficult to stop and detect.

                        Bots are also being used to reverse engineer businesses’ customized LLMs and expose private data or intellectual property via prompt injection attacks. Incorporating generative AI into web applications and mobile apps is opening up a whole new attack surface that adversaries aim to exploit to extract personal information.

                        M.R.: How is Kasada addressing customers’ bot challenges?

                        Sam: One of the keys to success is to take away the ability for an attacker to be successful, impacting their ability to generate a profit. That means making attacks as costly and frustrating as possible in order to disincentivize the adversary.

                        That’s exactly what we’ve done. We have created a system with a proprietary language that dynamically changes itself to present differently every time someone tries to figure it out. This makes it very time consuming and frustrating to even begin understanding the Kasada defense techniques being applied. In addition, we study our adversaries to understand the tools and techniques they use to evade detection. We anticipate these and build layers of resilience in our system so they are forced to raise their game and constantly evolve their methods.

                        Bot detection is a game of cat and mouse. We stay ahead by making sure our dynamic platform and team of experts pivot quicker than the adversary. We make it effortless for our customers to use without any management, and never impede the user experience with visual CAPTCHAs. This is where early market entrants have fallen down — their defenses are static and don’t move fast enough — and they place all of the management overhead on the business which is not a path to success. We’ve learned from our predecessors to create something that not only works better to stop modern bots, but is incredibly simple to use so our customers can focus on growing their business, instead of defending it.

                        M.R. Rangaswami is the Co-Founder of Sandhill.com

                        Read More

                        20 Factors to Track When Valuing Your Software Business (by SEG)

                        By Article

                        For more than two decades, Software Equity Group has honed a proprietary methodology to evaluate software companies and assess their readiness to exit.

                        By considering critical aspects of the software industry, such as market demand, competitive positioning, financial performance, and more, we have not only achieved a leading first-pass success rate but consistently secured and often surpassed the valuation multiples our clients aspire to, guiding software operators to successful exits and garnering industry accolades along the way.

                        Sandhill Group has summarized the 20 Factors to Track When Valuing a Software Business; however, the full report from SEG examines the quantitative and qualitative measurements that were used.

                        Those details can be reviewed here.

                        1) GROSS REVENUE RETENTION

                        GRR acts as a reliable compass pointing toward revenue stability. A high GRR often signals a robust level of customer satisfaction and loyalty. Customers continue to find value in the product or service, leading to sustained revenues. Conversely, a low GRR could flag potential satisfaction, product fit, or service delivery issues.

                        2) ANNUAL RECURRING REVENUE (ARR) GROWTH RATE

                        The ARR growth rate highlights your business’s growth trajectory and future potential. It not only assures buyers and investors of the prevailing market demand, strong product-market fit, and unique product differentiation but also attests to your capability to capitalize on that demand. While a significant growth spike in a single year is promising, consistent growth over several years, as indicated by the Compound Annual Growth Rate (CAGR), is more telling.
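
                        As a quick illustration of the arithmetic, the short Python sketch below computes a CAGR from a starting ARR, an ending ARR, and a number of years; the figures are hypothetical and not drawn from SEG’s data.

                        ```python
                        # Illustrative only: CAGR = (end / start) ** (1 / years) - 1
                        def cagr(start_arr: float, end_arr: float, years: float) -> float:
                            """Compound annual growth rate between two ARR readings."""
                            return (end_arr / start_arr) ** (1.0 / years) - 1.0

                        # Hypothetical example: ARR grows from $10M to $20M over 3 years.
                        print(f"{cagr(10_000_000, 20_000_000, 3):.1%}")  # ~26.0%
                        ```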

                        3) EBITDA MARGIN

                        EBITDA margin is a key financial metric that provides insights into the efficiency of the company’s operations and the amount of cash flow generated by core activities. By stripping away factors that can vary greatly between companies, such as interest payments, tax strategies, and amortization, it reveals the underlying profitability of business operations and allows for apples-to-apples comparisons with other SaaS companies.

                        4) RULE OF 40

                        For SaaS businesses, balancing growth and profitability can be challenging. Rapid growth often leads to higher costs, while a sole focus on profitability might stifle expansion. The Rule of 40 provides a holistic view, helping companies determine if their growth strategies are sustainable.
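
                        As a rough sketch of the arithmetic, the example below adds a revenue growth rate to a profit margin (EBITDA margin is one common choice of profit measure) and checks the 40-point threshold; the inputs are hypothetical, not SEG benchmarks.

                        ```python
                        # Illustrative only: Rule of 40 = revenue growth rate + profit margin,
                        # both expressed in percentage points.
                        def rule_of_40(revenue_growth_pct: float, profit_margin_pct: float) -> bool:
                            return revenue_growth_pct + profit_margin_pct >= 40.0

                        print(rule_of_40(revenue_growth_pct=25.0, profit_margin_pct=18.0))  # True  (43 points)
                        print(rule_of_40(revenue_growth_pct=30.0, profit_margin_pct=5.0))   # False (35 points)
                        ```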

                        5) GROSS MARGIN

                        Gross margins highlight the profitability efficiency of software companies, which often have high multiples due to their robust profit margins. A solid gross margin indicates more profits available for business reinvestment, making it a crucial measure of a company’s financial health and long-term profitability.

                        6) LTV:CAC

                        LTV:CAC is an important unit economic metric, encompassing several levers within the business, which speaks to the efficiency of the business model. By dissecting this formula, one can derive insights into essential components like ARR, Gross Retention, sales and marketing efficiency, and ROI. This metric indicates how efficiently a company’s customer acquisition strategy affects its profitability. Ideally, the revenue a customer brings should exceed the cost to acquire them. A high ratio means the business is seeing a good return on its sales and marketing investments.
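
                        To make the ratio concrete, here is a hedged sketch using one common formulation, LTV = annual revenue per account x gross margin / annual revenue churn; SEG’s report may define the inputs differently, and every figure below is hypothetical.

                        ```python
                        # Illustrative only: one common way to approximate LTV and LTV:CAC.
                        def ltv(arpa_annual: float, gross_margin: float, annual_revenue_churn: float) -> float:
                            """Lifetime value per account under a simple churn-based model."""
                            return arpa_annual * gross_margin / annual_revenue_churn

                        customer_ltv = ltv(arpa_annual=12_000, gross_margin=0.80, annual_revenue_churn=0.10)
                        print(f"LTV = ${customer_ltv:,.0f}")              # LTV = $96,000
                        print(f"LTV:CAC = {customer_ltv / 25_000:.1f}x")  # 3.8x with a $25K CAC
                        ```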

                        7) CUSTOMER CONCENTRATION

                        Too much revenue from one customer can be perceived as a risk to potential buyers or investors. If that customer leaves, it could disproportionately impact the company’s revenues and profitability.

                        8) TOTAL ARR

                        ARR indicates the yearly recurring revenue a company expects, making it a key signal of its overall health and growth potential. Buyers and investors pay close attention to this metric, especially for software companies in the lower-middle market segment.

                        9) NET REVENUE RETENTION

                        NRR offers a glimpse into customer satisfaction, how well the product fits the market, and the company’s skill in boosting ARR from current customers. A high NRR suggests growing revenue from the existing customer base, indicating strong loyalty, effective upsell strategies, and a product that consistently meets user needs.
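
                        For illustration, the sketch below computes both GRR (factor 1 above) and NRR from the same hypothetical starting ARR; the only difference is that NRR also counts expansion revenue. These are common formulations, not necessarily SEG’s exact definitions.

                        ```python
                        # Illustrative only: retention metrics over a period, from the existing customer base.
                        def grr(start_arr: float, churn: float, downgrades: float) -> float:
                            return (start_arr - churn - downgrades) / start_arr

                        def nrr(start_arr: float, churn: float, downgrades: float, expansion: float) -> float:
                            return (start_arr - churn - downgrades + expansion) / start_arr

                        start = 10_000_000  # hypothetical starting ARR
                        print(f"GRR = {grr(start, churn=500_000, downgrades=200_000):.0%}")                       # 93%
                        print(f"NRR = {nrr(start, churn=500_000, downgrades=200_000, expansion=1_200_000):.0%}")  # 105%
                        ```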

                        10) REVENUE GROWTH

                        Revenue Growth offers an encompassing perspective on a company’s fiscal health, extending beyond the insights provided by ARR growth, which we’ve recognized as a significant metric with our “High” weightage. This insight lets us discern exactly what is propelling the company’s growth. For buyers and investors, growth primarily driven by ARR is often more attractive for the reasons we stated previously.

                        11) LOGO RETENTION

                        Logo retention provides insight into a company’s customer satisfaction levels, the effectiveness of its customer retention strategies, and the overall stickiness of its products and services. A high rate suggests that a company has a durable customer base and demonstrates that customers find value in the company’s offerings. It also implies that your company’s customer relationship management efforts are effective and capable of building long-lasting ties.

                        As such, this reflects positively on your brand reputation and market position, which can, in turn, help attract new customers, enhance market share, and positively impact the company’s enterprise value.

                        12) THE DELIVERY MODEL

                        The delivery model is the core of the business, shaping how customers interact with products and services, how updates are deployed, and how costs are structured. It dictates user experience, rollout strategies for updates, cost dynamics, and, crucially, the scalability of the software’s architecture.


                        13) THE PRICING MODEL

                        The pricing model is vital because it directly impacts revenue visibility, which is the ability to predict and anticipate future revenue streams. A clear understanding of future revenue is essential for planning, scaling operations, and making informed business decisions. Stable and predictable revenue is particularly attractive to stakeholders as it minimizes uncertainty and risk.


                        14) PRODUCT & POSITION

                        Product differentiation is the key to highly valued software businesses. Differentiated products usually lead to strong customer retention, significant growth, and increased customer loyalty. In contrast, commoditized products struggle to compete and face higher churn rates. Product differentiation can be achieved through usability, product depth and breadth, vertically focused or purpose-built solutions, and so on. Differentiated products aren’t easily duplicated overnight, which is important to buyers and investors.

                        15) MARKET ATTRACTIVENESS

                        Market attractiveness is crucial as it delineates the opportunities and limitations of a company and its offerings. In expansive, growing, and less congested markets, a company’s value is particularly enhanced when it occupies a unique position. Market attractiveness is shaped by elements such as growth potential, long-term profitability, product relevancy, and the ability to adapt to changing consumer behaviors.

                        Recognizing these factors informs potential buyers about a company’s growth trajectory and responsiveness to market shifts.

                        16) TECHNOLOGY

                        The technology behind a software product directly influences the product’s performance and user experience. A well-designed and efficient tech stack demonstrates a company’s ability to scale its operations seamlessly and accommodate a growing customer base without compromising performance.

                        17) MANAGEMENT TEAM

                        The management team is pivotal in guiding a software company’s trajectory and success. Buyers and investors prioritize leadership with a track record of executing business plans and possessing a forward-thinking vision. Beyond product and financials, the team’s past successes suggest future potential, elevating valuation. With stable leadership, succession planning, and a positive culture, the company’s value in the eyes of stakeholders rises. While these qualities are beneficial, they aren’t all mandatory for a company to be seen as valuable.

                        18) MARKET GROWTH

                        Expanding markets hint at increased adoption, making it potentially easier for businesses to make sales. Even in situations where the market might not be expanding significantly, there can still be robust growth opportunities. If the Total Addressable Market (which we will discuss next) is sizable and a company’s solution offers substantial value, it can thrive. This is often the case with many of our clients who are displacing legacy technology or automating manual processes.

                        Their strong value propositions provide solid growth prospects even without rapid market growth. Awareness of market growth rates facilitates goal setting for companies and enlightens potential buyers and investors about the prospective sales momentum.

                        19) TOTAL ADDRESSABLE MARKET (TAM)

                        A substantial TAM indicates vast growth and profit potential, but it’s crucial that the company can effectively tap into revenue from this market. Conversely, a smaller TAM might suggest a niche, potentially limiting long-term growth. For SaaS companies that have chosen not to raise a substantial amount of money through venture capital, it’s essential for a company’s TAM to be expansive enough for growth yet not so vast that it becomes a magnet for intense competition, which could hamper its ability to maintain or even capture market share. The ideal TAM is a balance: large enough for substantial growth potential while maintaining your strong differentiation but not so immense that it becomes a competitive battleground.

                        20) ASSESSMENT OF TRENDS

                        By examining the direction of KPIs, potential acquirers can ascertain whether the business is progressing favorably or veering off course. While the current metric might not represent an ideal picture, a positive trend signifies potential. An upward trend can be leveraged to paint a compelling narrative of a promising future, which is particularly vital for attracting buyers and investors. Even if today’s numbers aren’t optimal, a trajectory pointing in the right direction can bolster confidence, enabling stakeholders to envision and advocate for the business’s long-term potential. Similarly, from a market standpoint, if the market is undergoing a significant inflection point and driving demand for new innovative solutions, a company offering such solutions can have a positive impact on value.

                        To view SEG’s full report in detail, click here.

                        Read More

                        M.R. Asks 3 Questions: Gaurav Dhillon, Chairman and CEO of SnapLogic

                        By Article

                        Gaurav Dhillon is the Chairman and CEO of SnapLogic, overseeing the company’s strategy, operations, financing, and partnerships. Having previously founded and taken Informatica through IPO, Dhillon is an experienced builder of technology companies with a compelling vision and value proposition that promises simpler, faster, and more cost-effective ways to integrate data and applications for improved decision-making and better business outcomes. 

                        M.R. Rangaswami: As we’re heading into the new year, how can leaders begin to make room in budgets to take advantage of AI?  

                        Gaurav Dhillon: As generative AI continues to be the topic of conversation in every boardroom, the question board members are asking leaders is not whether they can afford to invest in generative AI but what they will lose if they don’t. As AI-driven technologies continue to expand in reach, there is a new baseline for business operations, which includes evolving customer expectations. Any sophisticated task with the potential to be automated will be automated.

                        Companies must adapt to maintain a competitive edge, and until a company strategically harnesses AI, it will struggle to meet the industry’s new productivity standards. As organizations begin to prepare for AI implementation, it’s important for them to prioritize reducing their legacy debt—or what is commonly known as technical debt. 

                        The challenge with legacy tech stacks is that they are built around older and outdated languages and libraries, which inhibit an organization’s ability to successfully integrate new applications and systems, including GenAI tools. Modernizing infrastructure is key to ensuring enterprise data is ready for widespread AI adoption and use across the business. AI adoption is increasingly becoming integral to a company’s relevance, efficiency and effectiveness. 

                        M.R.: What do you believe are the biggest inhibitors to AI adoption in the workplace? 

                        Gaurav: The biggest inhibitors of AI adoption in the enterprise are rooted in the fact that people look at consumer AI tools like ChatGPT and make comparisons to their own products. AI is fueled by data, and enterprise AI needs to have guardrails on what type of data it can access. Today, we are still at the hunter-gatherer stage with business data. 

                        Another inhibitor to AI adoption for organizations is security. Ideally, businesses want to leverage AI tools to ask questions about customers, but in order to get to this stage, organizations first need guardrails to ensure that the data is handled and accessed securely. The stakes for consumer AI are low because if you ask ChatGPT to write you a recipe for dinner and it turns out bad, you lose a meal. The bar for enterprise AI is much higher; if a customer looks to your business for answers and solutions, people’s jobs can be at stake.

                        M.R.: As a two-time founder, what key lessons have you learned that you believe every leader should be aware of, especially in the midst of today’s AI revolution?

                        Gaurav: The hardest lesson I’ve had to come to terms with is that product-market fit is a scientific art. Companies can do and build amazing things at scale, but that alone won’t determine or define their success. Closely engaging with and listening to early adopters and customers is the only way successful business leaders can discern and establish what the ideal product-market fit is. As a founder and entrepreneur, it’s critical to be a part of this exploration from the very start. While motions like scale can be delegated, product-market fit cannot.

                        Read More

                        M.R. Asks 3 Questions: CEO and Co-Founder of Rhythm Systems, Patrick Thean

                        By Article

                        Patrick Thean isn’t a boxer, but he loves to quote Mike Tyson in saying, “Everyone has a strategy until I punch them in the mouth.” Through his years as a CEO, serial entrepreneur, and coach to other company leaders, he has become an expert not only in crafting visionary strategy, but in executing with mastery.

                        Patrick is a USA Today and Wall Street Journal bestselling author. With his book Rhythm: How to Achieve Breakthrough Execution and Accelerate Growth, he shares a simple system for encouraging teams to execute better and faster. He reveals early signs of common setbacks in entrepreneurship and how to make the necessary adjustments not only to stay on track, but also to accelerate growth.

                        His work has been seen on NBC, CBS, and Fox. Patrick was named Ernst & Young Entrepreneur of the Year in 1996 for North Carolina as he grew his first company, Metasys, to #151 on the Inc 500 (now called the Inc. 5000). 

                        Currently the CEO and Co-Founder of Rhythm Systems, Patrick Thean is focused on helping CEOs and their teams experience breakthroughs to achieve their dreams and goals. 

                        M.R. Rangaswami: Crafting a compelling vision is often cited as a critical aspect of strategic leadership. How do you recommend leaders go about developing a clear and inspiring vision for their organizations, and what are the key components that should be included in a well-defined vision statement?

                        Patrick Thean: If you want to create a compelling vision, you first need to change how you approach strategic thinking. Strategic thinking should not be something you do randomly or squeeze into action-focused meetings. You need to get into a Think Rhythm. Start having regular Think sessions where you and your team reflect on your past achievements and challenges and imagine an inspiring future together. 

                        During your Think sessions, you really have to step back from daily operational work and focus on the future of your business. Make it clear to your team that this time is for thinking only – not for finalizing goals or jumping into action. Play around and have fun brainstorming! Don’t shoot any ideas down. 

                        When it comes to crafting a vision, use your Think sessions to dream big. Let your imagination run wild as you imagine what your company could look like five, ten, or even twenty years from now. Experiment with exercises like the Destination Postcard (which asks you to envision your company one year from now, but can be adapted to longer amounts of time). Be specific and include elements like the impact you want your company to make and the growth you want to achieve.

                        Once you and your team have talked through these ideas and have gotten excited about a shared vision, craft a vision statement that will inspire the rest of your employees to step boldly into the future with you. Avoid corporate buzzwords and “fluff” (marketing language). The vision should be easy to read, and it should connect with people’s hearts. You want the rest of your company to feel just as excited about the future as you are!

                        M.R.: Once a vision is crafted, what strategies do you recommend for fostering alignment across different teams and departments to achieve this vision?

                        Patrick: Alignment starts at the very top. The CEO and leadership team need to clearly and repeatedly communicate the company’s vision to all other employees. And as you’re doing this, you can enter your second Rhythm – the Plan Rhythm.

                        During the Plan Rhythm, you need to come together with your leadership team every quarter and every year to discuss, debate, and agree on priorities that move the company in the right direction. Each person on the team should know what they are responsible for accomplishing. Break each priority down into key tasks or milestones to avoid falling into the strategy execution gap.

                        Then you will cascade the company’s plan down to the departments. They will follow the same planning process to agree upon their own priorities, which align with and support the company’s goals. Teams need to talk cross-departmentally, too, to ensure alignment is horizontal as well as vertical. They need to plan for smooth project hand-offs to avoid waste, rework, and worst of all: disappointed customers.

                        Alignment isn’t just important when it comes to executing a plan with your team. Cultural alignment is important, too. Everyone on your team needs to be aligned with your company’s core values and have the right mindsets. This will ensure that they are behaving in ways that create the kind of work culture you’re trying to foster. If they’re seriously misaligned, you might see behaviors that create tension among the team or spin a priority off its track.

                        Even when you have a team of A-Players who are aligned on your core values and aligned on a plan, you need to keep realigning week after week by getting into a Do Rhythm. Hold Weekly Adjustment Meetings to discuss the progress of your top priorities. This practice will give your team thirteen opportunities to take action and reorient when your goals are veering off track. 

                        M.R.: What advice do you offer to leaders striving to cultivate a high-performance mindset within their teams, particularly during times of change or uncertainty? 

                        Patrick: If you are leaving performance conversations to once or twice a year, you are actually decreasing employee engagement. Nobody wants to wait six or twelve months to hear what they’ve been doing well and what they need to work on. A disengaged employee doesn’t perform well and is more likely to leave, which costs valuable organizational knowledge, time recruiting and training a new hire, and of course – money.

                        You need to take a proactive approach to performance instead. Make sure every person on your team, from the C-suite to the frontline employee, understands their role and responsibilities. I recommend using Job Scorecards to make this clear and easy to understand for employees and managers. When people know what is expected of them and what goals they should be working towards, they’re more engaged and they do better work. They also don’t waste time working on the wrong things that won’t really benefit the company. When performance reviews roll around, they will already understand what they’re going to be rated on, because they’ve been working on it the whole time in accordance with their Job Scorecard. This takes much of the fear of the unknown out of the process.  

                        Week to week or month to month, managers should be checking in with their employees by holding 1:1s. A regular 1:1 cadence encourages transparency and accountability. It’s a candid conversation that prompts ongoing feedback in both directions. Managers should also use these meetings to provide coaching and help employees grow their skills and careers. 

                        This is especially important during times of uncertainty, when employees may start to question their job security. If the line of communication is open between manager and employee, you help reduce your employees’ fear of being blindsided by bad news. And when managers are focused on growing and developing their people, employees will feel cared for and engaged. They will do their jobs much better than they would if they were kept in the dark about their own performance.

                        M.R. Rangaswami is the Co-Founder of Sandhill.com

                        Read More

                        M.R. Asks 3 Questions: James Harold Webb, Chairman and CEO of Paradigm Development Holdings

                        By Article

                        James Webb says the difference between success and failure often comes down to whether the person thinks big in the early stage of the business.

                        Author of Redneck Resilience: A Country Boy’s Journey To Prosperity, James is an investor, philanthropist and successful multi-business owner. He began his entrepreneurial journey in the health industry as the owner of several companies focused on outpatient medical imaging, pain management and laboratory services.

                        Following successful exits from those companies, James shifted his focus to the franchise world and developed, owned and oversaw the management of 33 Orangetheory Fitness® gyms, which he sold in 2019. Not one to stop, he currently has two additional franchise companies in various stages of growth.

                        His experience as a lifelong entrepreneur offers great insights for those looking to branch out and build businesses they own, and to connect them with their big-picture plan.

                        M.R. Rangaswami: What are the top two most common missteps a young entrepreneur makes in their first two years of business?

                        James Harold Webb: There are many mistakes an entrepreneur can make during the start-up stage of their business. Taking money “off the table” too quickly can lead to an assortment of problems, including holding back infrastructure building and expansion and creating cash shortages. Other than my “salary” (if needed), I tend to leave all the money in the business for several years. The only exception to that is determining any income tax consequences and taking what I call a “tax distribution,” solely for the purpose of paying the prior year’s income taxes or quarterly income tax payments.

                        I see too many 8-to-5ers who are not putting in the time or effort it takes to get a business off the ground and profitable. When you are ready to stop for the day, make one more phone call or send out one more email. Solve one more problem. Unbox one more package. Whatever it takes, just work harder than anyone else.

                        M.R.: How important is a leadership team in the early stages of building a business? What (if any!) budget should people allocate to that leadership team? 

                        James: Leadership is one of the key elements of a successful business. Creating a corporate culture from the beginning is crucial. Establishing relationships is also extremely high on the leadership list, whether it be with fellow corporate staff, employees, vendors, banking, or even competitors. Listen to people. Invest in people. Take the time to recognize people and to hold yourself accountable to them. Relationships will define your success.

                        M.R.: How can someone who is just starting their business beat the odds and not fail in the first five years? 

                        James: Work harder than anyone else.

                        Hope for the upside, but always plan for the downside. Stay focused on your upside and driving your business to success, but have a contingency plan for the “what ifs.”

                        Build a solid infrastructure before you reap the benefits of your venture. Find the right people who are dedicated to helping you reach your dream of success.

                        With employees, be clear in your expectations, hold them accountable, and be available to assist and direct as needed. Contrary to popular belief, you can be a boss and a “friend.” If they can’t get it done and you’ve done all of the above, then it’s time to let them go.

                        M.R. Rangaswami is the Co-Founder of Sandhill.com

                        Read More

                        M.R. Asks 3 Questions: Sunil Sanghavi, CEO of NobleAI

                        By Article

                        2023 was undoubtedly the year that AI barnstormed our tech consciousness. Trained on massive amounts of public data, AI generated cool new images, wrote up content summaries, and delivered seemingly original work in the blink of an eye. Could this also be the future of helping companies balance the need for sustainable, green innovation against resource and supply chain constraints?

                        Artificial intelligence offers promise for accelerating materials/formulation R&D. But AI for science needs to be uniquely focused, applying small, curated, use-case-specific AI models that map to multiple scientific principles at a time, to speed scientific discovery. This has the potential to be a game-changer across a wide range of fields, including medicine, agriculture, engineering, and more, which is why Sunil believes that 2024 will be the year for specialized AI.

                        Sunil Sanghavi is currently CEO of NobleAI, a pioneer in Science-based AI solutions for chemical and material informatics. He has a rich and diverse operating background in deep-tech companies over 40 years. Most recently, he was Senior Investment Director at Intel Capital, where he invested in AI/ML hardware and software companies including Motivo, Untether AI, Syntiant, and Kyndi. He attended the MSc Chemistry program at the Indian Institute of Technology Bombay and obtained a BSEE from Cal Poly, San Luis Obispo.

                        M.R. Rangaswami: You have an impressive resume leading a variety of companies. What led you to NobleAI at this time? 

                        Sunil Sanghavi: Generative AI dominated the discussion in 2023, and will certainly continue to be a fascinating area to watch. At this point most people have experimented with the many available LLM-based tools and understand how they can help us with everyday tasks. But what I find most exciting is the opportunity to apply AI to speed scientific discovery. Science-based AI (SBAI) has the potential to be a game-changer across a wide range of fields, including chemistry, materials, energy, and many others.

                        That area is very exciting to me and is what drew me to NobleAI, where we’re showing the power of Science-Based AI (SBAI) to help companies achieve their goals. As opposed to large language models (LLMs), which is what GenAI is (basically scraping massive amounts of publicly available data), SBAI uses SSMs or Smaller Science-infused models where we apply the power of AI to private, industry- or company-specific data sets, and add to that applicable scientific laws and any available simulation data. This elegant process presents incredible opportunities for advancements to develop or improve chemicals, materials and formulations while also tackling pressing issues for companies like cost, supply chain and customer satisfaction. And unlike LLM-based solutions, SBAI is an optimized ensemble of models, optimized for each specific use case. Our ability to do this for literally hundreds of use cases in 3 or so person-months each and at a deterministic cost is what allows us to offer customized solutions while being able to scale NobleAI’s business.

                        M.R.: What are the challenges to innovation using SBAI?

Sunil: As is the case with any technological advance, a change in mindset will be the most immediate challenge. Scientists and researchers are trained to advance or eliminate solutions based on empirical experimentation. This can be cost-prohibitive, and it is always, by its nature, time-consuming and limited in scope. In fact, research into chemicals and specialized materials – an industry that spends $100 billion per year on R&D – has not experienced much innovation in the past 50 years for this very reason. Developing chemicals and materials is incredibly complex, often requiring experimentation across a multitude of parameters so that researchers can understand how hundreds of different ingredients interact at scales ranging from individual molecules to full formulations. But now, AI for science is opening the door to a better approach, and NobleAI is leading the charge. The goal is to use AI to more rapidly explore a greater range of chemicals and materials in software (i.e., before going to the lab), saving potentially months or even years of R&D time.

                        M.R.: Where do you see this really taking off first? What are the emerging trends that are most exciting?

Sunil: To me the most exciting possibilities are in the area of sustainability. There’s a big push to improve the safety of material ingredients for both the environment and human health. For instance, more people, organizations and regulators are now talking about the need to replace forever chemicals. But anytime there’s a need to replace an ingredient, it can be a real challenge for companies to find substitutes. That’s why you often see a knee-jerk reaction to fight a new environmental regulation. The great thing about Science-Based AI is that we can turn that around: SBAI can not only help companies stay ahead of the shifting regulatory environment but also support them in getting behind sustainability initiatives. I call what we do “Good AI, For Good.”

                        M.R. Rangaswami is the Co-Founder of Sandhill.com

                        Read More

                        The Annual SaaS 2024 Report

                        By Article

                        Software Equity Group’s annual report is in, revealing that SaaS is here to stay.

As the report details, for many in the technology industry the story of 2023 was all about artificial intelligence, its rapidly advancing commercial applications, and the speed and extent with which it will impact the world we live in, from both a business and a personal perspective.

                        4 SaaS components of 2023 that will impact what we see in 2024.

                        1. The advancement of generative AI and its impact on software and SaaS companies, both as users and creators of AI, was a top story in 2023 and one that will be front and center in 2024 as well.

                        However, quietly and perhaps a bit behind the scenes, another storyline proved to be just as important in 2023: the resilience of the U.S. economy and subsequent cementing of software and SaaS’s place as a key pillar driving digital transformation globally.

2. Inflation decreased by nearly half (with the CPI dropping from 6.5% in December 2022 to 3.4% in December 2023), interest rates stabilized, and the labor market remained strong (unemployment at 3.7%, with 216k jobs added in December).

3. Software and SaaS companies pivoted towards operational efficiency, and fortunately for the U.S. economy, many of these companies were successful in this endeavor. The result was a fantastic year for the SEG SaaS Index™, with the Index increasing 34% YOY, outpacing the S&P 500 and Dow Jones, and trailing only the Nasdaq (43% increase) among major indices.

                        On the M&A side, there were over 2,000 SaaS transactions, making 2023 the second strongest year on record for SaaS M&A, only narrowly trailing 2022.

                        4. While AI garnered a lot of the hype in 2023, an equally important story is the strength and resilience of the software ecosystem. 2023 was another proof point that SaaS is “here to stay.”

                        4 Macroeconomic Outlooks for 2024: Inflation, interest rates, employment, growth and politics

                        1. Inflation continues to decrease, finishing 2023 at 3.4% YOY compared to December 2022. The underlying core CPI, which strips out volatile food and energy prices, measured 3.9% in December 2023, its lowest YOY change since May 2021. Though additional cooling is still needed for inflation to reach the 2% annual target the Federal Reserve sets, the progress made in 2023 is encouraging.

2. The prospect of the Federal Reserve cutting interest rates is coming into focus. The Fed will closely watch inflation and the unemployment rate (which remains solid at 3.7%) as it plots its course through this year.

                        The timing of potential cuts will greatly impact publicly traded SaaS stocks and the M&A markets, as the potential for a lower-cost borrowing environment would be a welcome sight to these markets.

3. What about a recession? 2023 growth is now expected to come in between 2% and 3%. GDP growth is expected to decline slightly in 2024 but remain positive at around 2%. This scenario avoids a recession altogether and supports a healthy economic environment.

                        A scenario in which the U.S. beats GDP estimates again provides an upside case for publicly traded SaaS stocks in 2024. This possibility is further bolstered by the recently released Q4 GDP data, in which the U.S. GDP grew 3.3%, beating consensus estimates.

                        4. The economy will be a primary focus on the 2024 campaign trail. However, the reality remains that the Federal Reserve dictates monetary policy independent of political election cycles.

Election risk is still present due to the divisive nature of the current U.S. political environment, albeit much less discussed than during the last cycle.

                        Globally, geopolitical risks include regional conflicts in the Middle East and their impact on oil prices, the ongoing Russia-Ukraine war, and tensions between China and Taiwan.

                        To read the details of Software Equity Group’s 2024 SaaS Report, click here.

                        Read More

                        5 Most Impactful Factors In Valuation of Technology Companies

                        By Article

The turbulent markets of 2022-2023 and volatility in the M&A environment have brought the topic of valuation to the forefront in many of our discussions with founders and investors.

                        Regardless of market ups and downs, the factors that are most impactful to valuation remain relatively constant, with some standards changing with market cycles as witnessed over the past decade. Safe to say, valuation continues to be both art and science.

Allied Advisers put together this article as a refresher on some of the most important valuation factors in the current market for technology companies; we hope our report also serves as broad guidance to founders, executives and investors in achieving an optimal valuation outcome for their business.

It is often said that valuing a business is more an art than a science. Another assertion is that valuation is in the eye of the beholder, akin to beauty. There is truth in both these statements since enterprise valuation is impacted by several variables, not all of which can be quantified, and perception of future prospects of a business can be quite different depending on the biases of the evaluator.

Regardless of this sense of mystery and fuzziness about valuation, there are several fundamental factors that influence the value of a technology business.

                        In this article, we cover five important elements that have a distinct bearing on the valuation of technology companies, noting that many of these factors apply to businesses in other sectors as well.

1) Scarcity in a Large Market
A business that is the only player, or one of just a few players, in a large end market is likely going to be seen as being valuable since there are limited substitutes for the scarce solution offered by that company. It is simple supply-demand dynamics – when there is clear demand for a product in short supply, the price of that product goes up.

                        (Read more)

2) Significant Differentiation from Competitors
Often referred to as “USP” or unique selling proposition, differentiation of a technology business is important to valuation since it creates scarcity and sets the business apart from its competition. Differentiation may come from unique product features, ability to address challenging use cases, performance metrics, superior UI design, ease of deployment and use, economic value to the customer (time to value, ROI), etc.

                        (Read more)

3) Growth vs. Profit Margin and Rule of 40; Capital Efficient Growth
In the frothy market prior to COVID that eventually peaked in 2021, hypergrowth was the mantra for technology companies. Businesses that grew at breakneck pace with no heed to bottom-line profitability attracted nosebleed valuations in private funding rounds. A popular performance measure of software companies called the Rule of 40 (revenue growth rate + profit margin > 40%) was highly biased towards revenue growth; companies that grew at 100% with a -50% operating margin (R40 metric = 50%) were highly valued due to their growth and, despite poor profit margins, easily attracted capital.

                        (Read more)
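To make the arithmetic of the Rule of 40 example above concrete, here is a minimal Python sketch; the function name and the second, profitable-company example are illustrative additions, not figures from the report.

    def rule_of_40(revenue_growth_pct: float, profit_margin_pct: float) -> float:
        # Rule of 40 metric: revenue growth rate plus profit margin, both in percent.
        return revenue_growth_pct + profit_margin_pct

    # The hypergrowth example from the text: 100% growth at a -50% operating margin
    # still clears the 40% bar on growth alone.
    print(rule_of_40(100, -50) >= 40)   # True (R40 metric = 50)

    # A slower-growing but profitable business can clear the same bar.
    print(rule_of_40(15, 30) >= 40)     # True (R40 metric = 45)

The point is that the metric treats a point of growth and a point of margin as interchangeable, which is why growth-heavy companies scored well even with deeply negative margins.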

4) Revenue Model and Gross and Net Revenue Retention Metrics
Business models typical to technology product/platform companies are subscription, licensing or transactional. Subscription models provide recurring revenue (monthly or annually), licensing is usually a one-time fee, and the transactional model provides revenue per transaction.

                        (Read more)
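The heading above names gross and net revenue retention without spelling out the formulas; as a hedged aside, the generally accepted definitions can be sketched as follows (the ARR figures are hypothetical and not taken from the article).

    def gross_revenue_retention(beginning_arr, churned_arr, contraction_arr):
        # GRR: share of starting recurring revenue retained, ignoring any expansion.
        return (beginning_arr - churned_arr - contraction_arr) / beginning_arr

    def net_revenue_retention(beginning_arr, churned_arr, contraction_arr, expansion_arr):
        # NRR: retention including upsell/expansion revenue; can exceed 100%.
        return (beginning_arr - churned_arr - contraction_arr + expansion_arr) / beginning_arr

    # Hypothetical cohort: $10M starting ARR, $0.5M churned, $0.3M downgraded, $1.5M expansion.
    print(f"GRR: {gross_revenue_retention(10.0, 0.5, 0.3):.0%}")     # GRR: 92%
    print(f"NRR: {net_revenue_retention(10.0, 0.5, 0.3, 1.5):.0%}")  # NRR: 107%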

5) Customer Profile and Concentration
Companies that have large enterprises as customers are more likely to be able to expand revenues from such clients given the numerous groups within large organizations and bigger budgets for vendors. In contrast, having small/medium (SMB) customers limits the opportunities for large contracts and wallet share expansion given limited budgets. For these reasons, companies with an enterprise customer base have traditionally been viewed more favorably by investors compared to businesses serving SMB clients.

                        (Read more)

                        To read the full report, click here.

                        Ravi Bhagavan is a Managing Director at Allied Advisers

                        Read More

                        M.R. Asks 3 Questions: Ofer Klein, CEO & Co-founder of Reco.AI

                        By Article

Ofer Klein spent decades as an Israeli Defense Force helicopter pilot and is an avid kitesurfer who likens the sport’s adrenaline rush to that of being the founding CEO of a thriving security startup. It’s this unique background and experience that have been key to Ofer’s leadership style and Reco’s success.

Ofer and his fellow co-founders developed the platform and AI algorithm for counterintelligence work for the Israeli government, and decided to productize the platform in 2020, which led to the birth of Reco.ai. Now, Reco.ai is a leader in safeguarding organizations with its modern, AI-driven SaaS security offering.

                        M.R. Rangaswami: What security concerns are not being talked about enough today?

                        Ofer Klein: There are a few. Security Keys Are Replacing Multi-Factor Authentication (MFA) – MFA is a common method of adding a second layer of security onto SaaS applications (in addition to a password). But, MFA is not the only security boundary, as SaaS applications are beginning to use security keys for secondary verification. Security keys are physical devices that use a unique PIN only available on that device to authenticate. 

Another is Microsoft 365 and Okta cyber attacks. A growing concern is maintaining the security of core SaaS applications such as Microsoft 365 and Okta; because they are foundational to making SaaS programs run, they attract more cyber threats and could potentially become the next SolarWinds. Despite growing security threats, these technologies have experienced an uptick in adoption. The security built into Microsoft 365 E5 and Okta isn’t enough, however, to keep the applications and the organizational data stored in them secure, prompting organizations to look for dedicated SaaS security solutions.

                        M.R.: Why is securing SaaS applications so important?

                        Ofer: During the pandemic, cloud collaboration tools fundamentally changed the way modern organizations work. Enterprises today are adding new applications to their technology stack at an unprecedented rate, using an average of 371 SaaS applications. This dramatic increase has resulted in an elevated demand for a security solution that provides full visibility into everything connected to a company’s SaaS environment, and at the same time, ensures it complies with regulations. 

                        Attempting to secure new SaaS tools with techniques that were developed for legacy on-premise systems restricts collaboration and misses a broad range of security events. Only by understanding the complete business context of an interaction can security analysts identify and interpret potential threats, and also determine the best and most efficient way to respond.

                        M.R.: What role does AI play in solving SaaS security?

Ofer: Like many sectors today, the security industry is being revolutionized by AI. Leveraging AI to identify and address security vulnerabilities is growing rapidly and is very effective. This is especially true for companies adding new generative AI applications into their technology ecosystems, as this can expose an organization to added risk due to the sharing of emails, recorded calls, and other data. Incorporating AI models, techniques, and processes like Large Language Models (LLMs), Knowledge Representation Learning, and Natural Language Processing (NLP) gives companies greater visibility and allows them to discover potentially risky events (such as the improper use of AI tools) and be alerted to data exposure, misconfigurations, and mispermissions around a user.

                        The incredibly fast adoption of generative AI tools has led to new data risks, such as privacy violations, fake AI tools, phishing and more. As a result, organizations need to establish AI safety standards to keep their customer and employee data safe. Having a SaaS security solution that can identify connected generative AI tools is critical. 

AI is foundational to our SaaS security offering and enables enhanced functionality and effectiveness. Our proprietary and patented AI algorithm powers our Identities Interaction Graph, which correlates every interaction between people, applications, and data, and then assesses potential risk from misconfigurations, over-permissioned users, compromised accounts, risky user behavior, as well as the use of generative AI applications. 
                        One-third of organizations regularly use generative AI applications in at least one function, making it critical for SaaS security platforms to have the ability to discover anomalous behavior for both humans and machines and gain even deeper proactive threat mitigation.

                        M.R. Rangaswami is the Co-Founder of Sandhill.com

                        Read More

                        M.R. Asks 3 Questions: Sanjay Sathé, Founder & CEO, SucceedSmart

                        By Article

                        Sanjay Sathé, Founder & CEO of SucceedSmart, is no stranger to disrupting established industries. Previously, Sathé spearheaded RiseSmart’s evolution from a concept based on his personal experiences into a major disruptor in the $3B outplacement industry, becoming the fastest-growing outplacement firm in the world. In September 2015, RiseSmart was acquired for $100M by Randstad.

                        Launching SucceedSmart, a modern executive recruitment platform with a unique blend of proprietary, patent-pending AI and human expertise, was a culmination of Sathé’s 15 years as a candidate of executive search and 15 years as a buyer of executive search. It was clear that the industry was living in the past and ripe for disruption.

                        While many organizations across the broader HR market were embracing technology, the executive search industry continued to operate almost entirely offline and saw a lack of innovation and technology adoption over the past 50 years.

Sathé invested time in researching both executives and corporate HR leaders to confirm his thinking, and when he received a resounding “yes” to the hypothesis, he dove in to launch SucceedSmart in 2020. SucceedSmart is now on a mission to modernize leadership recruiting for director to C-level talent and fill complex leadership roles with unmatched agility, accuracy, and affordability, while promoting diversity and transparency.

                        M.R. Rangaswami: How can artificial intelligence (AI) positively impact HR leaders and teams?

                        Sanjay Sathé: Businesses across industries have increasingly adopted AI in recent years. It’s no longer a question of whether to embrace AI technology—but when and how.

                        Contrary to the misconception that AI will eliminate jobs, AI can empower CHROs, talent partners, talent acquisition teams, hiring teams and other employees to work more strategically, and improve diversity and inclusivity. By automating routine tasks, AI also frees up time for HR professionals to focus on the “human” side of human resources and build relationships with candidates and employees.

From an HR perspective, AI automates tasks such as talent sourcing, resume screening, and interview scheduling, and helps centralize all candidate information in a streamlined platform. AI technology also unlocks insights about the hiring process and candidate experience to drive improvements over time. Leveraging AI also minimizes conscious and unconscious biases in the hiring process by matching candidates with jobs that align with their accomplishments, skills, and experience.

                        M.R.: What are some of the top challenges in executive recruiting today and how can businesses overcome them?

                        Sanjay: Leadership has an immeasurable impact on business success and executives are among the most critical employees at any organization. Yet, despite increased turnover, business velocity, and competition, executive search has remained devoid of innovation and technological advancements for half a century.

                        The traditional executive search process can take several months—leading to a poor candidate experience, as well as lost productivity and revenue as roles go unfilled. The approach is transactional, exclusionary, clubby, time-consuming, and expensive. Not only is the pricing exorbitant, but in retained search, corporations may have to pay all those fees and still not get a candidate. And the same executives are often passed around between firms, leading to a limited talent pool.

Embracing modern executive recruitment technology can help address these challenges, decreasing total time to hire and overall hiring costs and enabling organizations to build more diverse leadership teams. It can also support diversity initiatives by focusing specifically on accomplishments and removing demographic and other personally identifiable information that may lead to unconscious bias during the hiring process.

                        M.R.: How can businesses effectively build their leadership pipelines given the Silver Tsunami, meaning the wave of Baby Boomer employees retiring in the coming years? 

                        Sanjay: More than 11,000 Baby Boomers reach retirement age each day and more than 4.1 million Americans are expected to retire each year through 2027.

                        Traditional executive search primarily focuses on serving organizations—not executives. Firms often wait for executives to reach out to them and the same executives are often passed around between companies, resulting in a limited talent pool. As an increasing number of executives retire as part of the Silver Tsunami, traditional candidate networks are becoming even smaller. 

                        To improve talent sourcing across all roles amid the Silver Tsunami, organizations can turn to AI-powered candidate recruitment technology—rather than relying on personal connections. This approach enables organizations to be more proactive about succession planning by identifying and nurturing internal talent while simultaneously scouting for external candidates.

                        A modern executive recruitment platform can support the growing and urgent need to fill executive roles as more workers retire, by enabling corporations to build diverse pipelines of qualified executives and reduce total hiring time to a matter of weeks, compared to four to six months with traditional executive search firms. 

                        M.R. Rangaswami is the Co-Founder of Sandhill.com

                        Read More

                        PitchBook’s 2024 Industrial Technology Outlook

                        By Article

                        What does 2024 hold for industrial tech? PitchBook’s latest Emerging Technology Research looks ahead to what could be in store for verticals like agtech, clean energy, and more.

Here is a summary of PitchBook’s outlook on Agtech, the Internet of Things, Supply Chain Tech, Carbon & Emissions Tech, and Clean Energy.

                        AGTECH: Autonomous farm robots will see a major increase in adoption.

                        The anticipated surge in adoption of autonomous farm robotics in 2024 is driven by a convergence of compelling factors addressing critical challenges within the agriculture sector.

                        First, the persistent global labor shortages in agriculture are pushing farmers to seek alternative solutions, with farm automation offering a viable response to mitigate the impact of diminishing workforce availability.

                        Second, technological advancements, particularly in artificial intelligence, sensors, and automation, have matured to a point where the cost-effectiveness and reliability of robotic systems make them increasingly attractive for widespread adoption.

                        Third, the imperative to optimize resource use, reduce operational costs, and enhance overall farm efficiency aligns seamlessly with the capabilities of modern farm robotics, positioning them as essential tools for a more sustainable and productive agricultural future.

                        Fourth, the rise of Robotics-as-a-Service models is proving instrumental in easing upfront costs associated with adopting these technologies.

                        Fifth, pilot studies have successfully demonstrated the effectiveness of farm robotics, and companies are now transitioning to full-scale commercialization, making 2024 a pivotal year for the integration of these technologies into mainstream agricultural operations.

INTERNET OF THINGS: Private 5G startups will produce a unicorn valuation in a late-stage deal or acquisition.

                        Unicorn valuations have been rare in the Internet of Things (IoT) industry with only two VC deals for Dragos and EquipmentShare valuing companies over $1.0 billion in North America and Europe in 2023. 5G startups have not reached this threshold despite achieving rapid valuation growth for midstage companies and a $1 billion exit in the space in 2020 for Cradlepoint. Numerous technical and commercial barriers to entry will ease over the coming year and revenue growth is on pace to accelerate.

                        The fundraising timelines of private leaders align with this trend, creating investment opportunities for growth-stage and corporate VC investors, along with telecommunications acquirers.

                        SUPPLY CHAIN TECH: Drone deliveries will go commercial in the US with more funding and investor interest in the space.

The Federal Aviation Administration (FAA) regulates the drone delivery market with safety as its primary consideration. To date, drones have been subject to a restriction on flying beyond visual line of sight (BVLOS), meaning an operator must keep the drone within sight at all times while it is flying.

                        This restriction represents a significant (some might say insurmountable) hurdle for the development of a drone delivery marketplace. The cost of an operator visually tracking and monitoring every delivery via drone is prohibitive.

                        The FAA has stated that it wants to integrate drones into common airspace, and issued a number of exemptions to the BVLOS rule to startups and larger companies over the course of 2023.

                        These exemptions open the door for the market to finally develop.

                        CARBON & EMISSIONS TECH: Demand for carbon credits will recover, following uncertainty in 2022 and 2023.

                        Voluntary carbon markets (VCMs) have been under significant scrutiny in recent years, particularly carbon credits based on avoidance—rather than removal—of emissions.

Multiple different sets of standards, and the perceived risk associated with low-integrity credits, have been reducing the overall traded volumes of carbon credits and pushing buyers toward removal-based credits whose integrity is easier to prove.

                        New independent standards are emerging, and while there are no obligations for credit providers to follow them, they provide the means to show high integrity and reassure buyers.

                        CLEAN ENERGY: US clean hydrogen technology companies will become acquisition targets.

                        Low-carbon hydrogen is seen as a key component of global decarbonization efforts, particularly for certain industrial applications and heavy transportation. Earlier this year, the US Department of Energy allocated $7 billion to a program to develop seven hydrogen hubs across the US, to produce, store, and distribute hydrogen.

                        Companies involved in these hubs are varied, including energy and oil & gas companies that have experience with large-scale energy projects, but will likely look to close technology gaps through acquisitions.

                        To read PitchBook’s full report, click here.

PitchBook is a Morningstar company providing the most comprehensive, most accurate, and hard-to-find data for professionals doing business in the private markets.

                        Read More

                        M.R. Asks 3 Questions: Jason Lu, CEO and Founder of CECOCECO

                        By Article

                        Jason Lu, the founder of CECOCECO, began his journey in the LED display industry in 2006 by creating ROE Visual. His commitment to perfection and a deep understanding of product quality quickly led to ROE Visual becoming a top brand within the industry.

                        As an innovator in the field, Jason has consistently been a notable figure in the industry and is never content to rest on past achievements. In 2021 he sought new challenges and founded CECOCECO. With this venture, Jason embraced the idea that LED displays could be more than functional tools; they could integrate technology and aesthetics to create emotionally engaging experiences.

                        Jason’s reputation for producing high-quality products is built on years of experience and industry knowledge. His dedication to product development was evident in the launch of ArtMorph by CECOCECO. After two years of dedicated work and maintaining high standards, Jason and his team successfully introduced this innovative product to the market.

                        Under Jason’s leadership, CECOCECO is more than a brand; it’s a testament to ongoing innovation in how the world experiences and interacts with light and display technology.

                        M.R. Rangaswami: What were the key insights or experiences that led you from ROE Visual to creating CECOCECO, and how do these past experiences shape your current vision?

                        Jason Lu: I’ve come to recognize that traditional LED displays, while functional, are not universally applicable to every space and often clash with sophisticated designs. My ambition is to develop products that harmoniously blend functionality with aesthetic appeal. I firmly believe that innovation is fueled by pressure. ROE is currently experiencing stable growth, prompting me to initiate transformative changes.

                        Reflecting on my past experiences, I’ve gained a profound understanding of the path to success and the attitude required for it. I’ve learned that success is not an overnight phenomenon. ROE took 17 years to reach its current stature, reinforcing my belief in the ‘slow and steady wins the race’ philosophy. I don’t equate financial gain with success. While survival is crucial, it’s not the epitome of success. My vision for CECOCECO is to relentlessly pursue excellence in our products, continuously innovate, and be a source of inspiration for the industry and the world at large.

                        M.R.: How does CECOCECO innovate in the LED lighting and display industry, and what future advancements do you foresee in this space?

                        Jason: At CECOCECO, our focus is on pioneering solution-based innovation. While similar products and projects exist, we question their viability and sustainability. Our approach involves crafting systematic solutions with an unwavering commitment to quality in every aspect, from the consistent output of our products to the intricacies of our manufacturing process. This is far more than a mere mechanical production; it necessitates a blend of human creativity and precision control. Our development and manufacturing stages demand extensive manpower, embodying a level of craftsmanship of the highest order. CECOCECO’s mission is to transform previously disjointed elements into cohesive, sustainable systems.

                        Looking ahead, we aim to diversify our product range. This includes offering a wider variety of resolutions and shapes and innovating with flexible screen technologies. Our goal is to provide a more comprehensive and diverse range of solutions to meet the evolving needs of our customers.

                        M.R.: What emerging trends in LED technology and lighting design do you find most exciting, and how is CECOCECO preparing to integrate these trends into future products?

                        Jason: The landscape of LED lighting is undergoing two significant transformations. First, there’s a notable shift from point light sources to surface light sources, with Chip-On-Board (COB) technology gaining increasing popularity. This evolution marks a fundamental change in how we perceive and utilize LED lighting. Secondly, the realm of lighting design is witnessing a surge of creativity. It’s transcending beyond mere color shifts and overlays; dynamic, imaginative light effects are becoming the norm, adding a refreshing dimension to lighting.

                        In response to these trends, CECOCECO is exploring integrating COB technology into our products to harness its unique effects. Lighting design isn’t just an aspect of our product; it’s a cornerstone. We’re committed to experimenting with various surface materials and designs to unlock new potential in creative lighting. Furthermore, we’re enthusiastic about collaborating with leading lighting designers. We aim to conceive and develop even more captivating lighting projects by merging our technological prowess with their creative flair.

                        M.R. Rangaswami is the Co-Founder of Sandhill.com

                        Read More

                        M.R. Asks 3 Questions: Ran Ronen, Co-Founder & CEO of Equally AI

                        By Article

                        According to Ran Ronen, 2024 will be the year in which technology leaders innovate by example to help create more inclusive experiences and broaden the base of potential users and customers of their technology services and solutions by prioritizing digital accessibility. 

Accessible websites and online experiences offer businesses a range of benefits, from ensuring compliance with regulatory requirements and industry best practices, to more users and customers accessing the site, to improved website SEO and brand trust and credibility. Prior to the advancements made possible with AI, making a website truly accessible was a difficult goal for many website owners to achieve, due to the challenges of managing end-to-end accessibility compliance.

Ran is the Co-Founder and CEO of Equally AI, the world’s first no-code web accessibility solution designed to help businesses of all sizes meet regulatory compliance. This conversation was an enlightening one as he and I spoke about the positive shift he’s seeing in the tech field to embrace more accessibility guidelines as best practices.


                        M.R. Rangaswami: What is the state of digital accessibility; and why, in today’s tech-driven world do you think adoption is still lagging to make accessibility a priority in user/customer experience? 

Ran Ronen: The state of digital accessibility is evolving, yet its integration into mainstream tech remains slower than it should be. Although AI-driven accessibility tools are emerging, many companies still see accessibility as a complex and costly process, often overlooking or delaying it in favor of rapid development. This overlooks the opportunity to appeal to a wider, more diverse customer base and enhance product usability for everyone from the outset.

                        Slow adoption also stems from limited awareness of diverse user needs and the wider benefits of accessibility beyond legal compliance. There’s a critical need for tech leaders to see accessibility not just as a necessity for individuals with disabilities, but as a key factor in improving overall user experience and innovation, which in turn boosts brand reputation and customer satisfaction.

                        M.R.: What are some challenges faced by organizations in managing the technology implementation side of digital accessibility? 

                        Ran: Organizations implementing digital accessibility often face several challenges, including a lack of in-house expertise on accessibility standards and implementation, which makes integrating these practices into existing tech frameworks difficult. Resource allocation is another challenge, as accessibility often competes with other business priorities and can be seen as an additional cost. Also, ensuring consistent accessibility across a diverse range of products and platforms presents a scalability challenge, requiring a strategic approach to meet various tech and user needs effectively.

                        M.R.: As an innovator in the space, what is your hope for the impact of AI in making more companies and their offerings more digitally inclusive? 

                        Ran: As an innovator in the digital accessibility space, my aspiration is that AI will enable a shift in perspective, where digital accessibility becomes not just an aspiration but a practical reality for more companies, especially small and medium-sized businesses. This will help them proactively create accessible products and services, which not only enhances the user experience for all but also opens up new markets and opportunities for innovation. 

                        M.R. Rangaswami is the Co-Founder of Sandhill.com

                        Read More

                        M.R. Asks 3 Questions: Ankit Sobti, Co-Founder and CTO of Postman

                        By Article

                        Ankit Sobti is co-founder and CTO for Postman, the world’s leading API Platform. Prior to joining Postman, Ankit worked for Adobe and Yahoo!, where he served as a senior software engineer. In his current role, Ankit focuses on product and development, leading the core technology group at Postman.

A key focus of this Q&A is the findings from a recent global survey Ankit and the Postman team published, tracking the most important trends around API use in large enterprises.

                         

                        M.R. Rangaswami: APIs are critical tools for enterprise success, but should they also be considered products?

Ankit Sobti: Thinking about APIs as products helps us understand and articulate that APIs, like any other item you’d typically call a product – a website, a mobile app, a physical product – need to be built with a consumer-driven mindset.

This requires an understanding of who the consumers are, what problems they are trying to solve, why it is a problem in the first place, and what else they are doing to solve it – and then consciously and deliberately designing a solution to that problem, exposed through the interface of an API.

And like any other product, APIs also need to be packaged, positioned, priced, distributed, and iteratively improved to meet evolving consumer needs.

Postman’s 2023 State of the API Report, which surveyed over 40,000 people, found that 60% of API developers and professionals view their APIs as products – which I think is a good signal that this realization is well underway. And it makes sense that APIs are increasingly seen as products, serving both internal and external customers.

                        But how does this view vary by industry and company size? And how much revenue can APIs generate? It turns out that the larger the company, the likelier it is to view its APIs as products. At companies with over 5,000 developers, 68% of respondents said they considered their APIs to be products. At the other end of the spectrum were companies with fewer than 10 employees. There, just 49% of respondents viewed their APIs as products. 

                        M.R.: Are APIs actual revenue generators now for companies?

                        Ankit: Yes, APIs are increasingly unlocking new streams of revenue and business opportunities for companies. In some of the more traditional industries with lower margins for example, we are increasingly seeing APIs being used as a high margin revenue stream. And there are numerous examples now of companies where the primary product being sold is the API.

APIs that package insights or key capabilities can be used to drive strategic partnerships, or can allow companies to become platforms on top of which others can build. We are seeing examples of this ranging from small development shops all the way to large enterprises.

This is something we also saw in our survey: 65% of respondents affirmed that their APIs generate revenue, and almost 10% of companies with money-making APIs said their APIs generated more than three-fourths of total revenue.

                        M.R.: Does an API-first approach impact revenue?

Ankit: API-first companies are defined as those that use APIs as the building blocks of their software strategy. APIs not only bind together the internal components of an organization but also pave the way for seamless external collaboration. And thinking in terms of these building blocks, an API-first approach allows for easier externalization of the capabilities that APIs provide and subsequently creates easier paths to revenue.

                        In addition, we believe that API-first companies have superpowers that foster happier developers and a healthier business ecosystem. In our customer base, we work with companies across a broad range of industries – and APIs generate significant amounts of revenue, unlock new business opportunities, and drive ecosystem expansion through partnerships.

                        And for companies with APIs, it’s worth weighing how much to invest in them, and adopting an API-first approach. These decisions may have a tangible impact on the bottom line. 

                        M.R. Rangaswami is the Co-Founder of Sandhill.com

                        Read More

                        Innovate, Engage & Succeed: Embracing the PLG Paradigm – 2H 2023

                        By Article

                        Allied Advisers has just released its inaugural report on product led growth (PLG).

                        Product-Led Growth (PLG) is an innovative customer-centric business strategy that employs user-friendly products to acquire, retain, and expand the customer base, reducing the reliance on traditional sales and marketing.

                        As software users, we have had magical experiences with products that allow us to independently explore, test, purchase and expand usage without intervention from the product vendor’s sales team; these PLG strategies have been utilized successfully by leading SaaS companies such as Dropbox, Zoom, Klaviyo and Slack among others. This contrasts with sales led growth (SLG) that relies on direct sales teams to hunt and harvest product sales opportunities.

                        This report covers insights on how to develop a PLG strategy from Dharshan Rangegowda, a former Allied Advisers client who grew ScaleGrid via a PLG strategy before raising a growth round with a mid-market PE firm.

                        Additionally, the report provides details on transactions of PLG companies as well as profiles of certain PLG businesses in different verticals, indicating significant differences in operational efficiencies when adopting a PLG model.

                        To read the full report, click here.

                        Read More

                        M.R. Asks 3 Questions: Godard Abel, CEO of G2

                        By Article

                        A 5x SaaS entrepreneur, Godard Abel is CEO of G2, the world’s largest and most trusted software marketplace, which he co-founded in 2012. He is also Executive Chairman of ThreeKit, a leading 3D visualization technology company, and Logik.io, a next generation configuration technology.

                        Previously, Godard served as CEO of SteelBrick which was acquired by Salesforce in 2016. Prior to SteelBrick, Godard co-founded BigMachines, where he served as CEO and built it into a leading SaaS provider which was acquired by Oracle in 2013. He also served as a GM at Niku prior to its IPO in 2000 (and subsequent acquisition by CA).

                        Before entering the technology industry, Godard consulted for McKinsey & Company and advised leading manufacturers in the U.S. and Germany on strategy development and business process improvement. Godard was a Finalist for EY Entrepreneur of the Year in 2019, named to the Tech 50 list by Crain’s Business Chicago in September 2014, and to the Chicago Entrepreneur Hall of Fame in 2011. He earned an MBA from Stanford University and both a B.S. and M.S. in engineering from the Massachusetts Institute of Technology.

                        As you can tell by our conversation, Godard is not only an innovator and leader in the tech world, but he is also very skilled at sharing a lot of information in few words.

                        M.R. Rangaswami: How is software buying changing?

                        Godard Abel: B2B buyers now expect consumer-like shopping experiences, where they can conduct research and make purchases quickly, conveniently, and on their own terms. This means expensive software solutions can be bought with a credit card, and the buyer conducts research on review sites and other peer communities. In fact, G2 research finds that 67% of global B2B software buyers usually engage a salesperson once they have already made a purchasing decision. 

                        M.R.: How does AI impact this shift in software buying behavior? 

Godard: AI will only accelerate the ongoing shift to self-serve software research and buying, delivering modern digital buyer experiences. The ability of AI to provide immediate, data-driven insights is a key driver of this shift. With this in mind, software vendors have an opportunity to lean into AI to meet buyers’ preferences for speed, eliminating friction in the software buying journey.

                        M.R.: What role does G2 play in this evolving software landscape? 

                        Godard: G2 has over 2.4 million verified reviews on 150,000+ products and services. All 1 billion knowledge workers around the world need software and they’re coming to G2 to research it. With our massive dataset on B2B software and the most traffic from software buyers, G2 is uniquely positioned to power software buying and selling in the age of AI. 

                        Earlier this year, we introduced Monty, the first-ever AI-powered software business assistant built on OpenAI’s ChatGPT. Previously, a buyer would visit G2.com and search for the type of software they were looking for – CRM, for example. However, not every buyer knows exactly what they need.

With Monty, you can now describe the business challenge you’re looking to solve and have a conversation. Powered by G2’s extensive dataset, Monty can recommend the best software solutions for your particular need – making the process of researching software faster, easier, and more effective.

                        M.R. Rangaswami is the Co-Founder of Sandhill.com

                        Read More

                        M.R. Asks 3 Questions: Jay Wolcott, Co-Founder & CEO, Knowbl

                        By Article

                        What does the future of customer experience look like with generative AI?

According to Knowbl’s CEO and Co-Founder, Jay Wolcott, it’s going to be critical to understand the risks of implementing AI solutions and the requirements for what “enterprise-ready conversational AI” means.

                        In this conversation, Jay sheds light on how this innovative technology redefines customer experience, making interactions more seamless, convenient, and efficient.

                        M.R. Rangaswami: What exactly is “BrandGPT,” and how does it differ from traditional conversational AI technologies? 

Jay Wolcott: BrandGPT is a revolutionary Enterprise Platform for Conversational AI (CAI), built from the ground up on large language models (LLMs). Legacy virtual assistant platforms built upon BiLSTM and RNN frameworks lack the speed, ease, and scalability that LLMs can offer through few-shot learning.

Through the release of this all-new approach, CAI can finally meet its potential of creating an effortless self-service experience for consumers with brands. The proprietary AI approach Knowbl has designed within BrandGPT offers truly conversational and contextual interactions while restraining Generative AI from uncontrollable risks.

                        This new approach is driving tons of enterprise excitement for new levels of containment, deflection, and satisfaction across digital and telephony deployments. Beyond the improved recognition and conversational approach, Knowbl’s platform allows brands to launch quickly, leverage existing content, and improve the scalability of capabilities while reducing the technical effort to manage. 

                        M.R.: What emerging trends do you foresee shaping the future of conversational AI and customer experience, and how can businesses prepare for these developments?

                        Jay: In 2024 we plan to overcome customer frustration with brand bots and virtual assistants, ushering in a new era of effortless and conversational experiences powered by advanced language models.

                        Brands that embrace LLMs for customer automation early on will establish a competitive advantage, while those who lag will struggle to keep up. Although many organizations are still in the experimental phase of using GenAI for internal purposes due to perceived risks, leading brands are boldly venturing into direct customer automation, reimagining digital interfaces with an “always-on” brand assistant.

                        We also predict 2024 to be the year that bad bots die. New expectations of AI will lead to frustrated consumers when dealing with legacy bots, and a trend in attrition versus retention will appear.

                        M.R.: What complexities do multinational companies face when implementing AI-driven solutions, and how can they navigate the challenges to ensure successful adoption across diverse markets?

Jay: Multinational companies encounter a myriad of complexities when implementing AI-driven solutions, stemming from the diversity of the markets in which they operate. One significant challenge lies in reconciling varied regulatory landscapes and compliance requirements across different countries, necessitating a nuanced approach to AI implementation that adheres to local regulations.

                        Additionally, cultural and linguistic diversity poses a hurdle, as AI solutions must be tailored to resonate with the unique preferences and expectations of diverse consumer bases. To successfully navigate these challenges, companies must prioritize a robust localization strategy, customizing AI solutions to align with each market’s specific needs and cultural nuances. 

Collaborating with local experts, remaining vigilant about regulatory changes, and fostering open communication with stakeholders are essential for multinational companies to achieve successful AI adoption across diverse markets.

                        M.R. Rangaswami is the Co-Founder of Sandhill.com

                        Read More

                        M.R. Asks 3 Questions: John Hayes, Founder & CEO, Ghost Autonomy

                        By Article

                        John Hayes is CEO and founder of autonomous vehicle software innovator Ghost Autonomy.

Prior to Ghost, John founded Pure Storage, taking the company public (PSTG, $11 billion market cap) in 2015. As Pure’s chief architect, he harnessed the consumer industry’s transition to flash storage (including the iPhone and MacBook Air) to reimagine the enterprise data center, inventing blazing-fast flash storage solutions now run by the world’s largest cloud and ecommerce providers, financial and healthcare institutions, science and research organizations, and governments.

                        Like Pure, Ghost uses software to achieve near-perfect reliability and re-defines simplicity and efficiency with commodity consumer hardware. Ghost is headquartered in Mountain View with additional offices in Detroit, Dallas and Sydney. Investors including Mike Speiser at Sutter Hill Ventures, Keith Rabois at Founders Fund and Vinod Khosla at Khosla Ventures have invested $200 million in the company.

                        Now, let’s get into it, shall we?

                        M.R. Rangaswami: How does the expansion of LLMs to new multi-modal capabilities extend their application to new use cases?

John Hayes: Multi-modal large language models (MLLMs) can process, understand and draw conclusions from diverse inputs like video, images and sounds, expanding beyond simple text inputs and opening up an entirely new set of use cases, in everything from medicine to legal to retail applications. Training GPT models on more and more application-specific data will help improve them for their specific tasks. Fine-tuning will increase the quality of results, reduce the chances of hallucinations and provide usable, well-structured outputs.

Specifically in the autonomous vehicle space, MLLMs have the potential to reason about driving scenes holistically, combining perception and planning to generate deeper scene understanding and turn it into safe maneuver suggestions. The models offer a new way to add reasoning to navigate complex scenes or those never seen before.

For example, construction zones have unusual components that can be difficult for simpler AI models to navigate – temporary lanes, people holding signs that change, and complex negotiation with other road users. LLMs have been shown to be able to process all of these variables in concert with human-like levels of reasoning.

                        M.R.: How is this new expansion impacting autonomous driving, and what does it mean for the “autonomy stack” developed over the past 20 years?

John: I believe MLLMs present the opportunity to rethink the autonomy stack holistically. Today’s self-driving technologies have a fragility problem, struggling with the long tail of rare and unusual events. These systems are built “bottom-up,” composed of a combination of point AI networks and hand-written driving software logic to perform the various tasks of perception, sensor fusion, drive planning and drive execution – all atop a complicated stack of sensors, maps and compute.

This approach has led to an intractable “long tail” problem – where every unique situation discovered on the road requires a new special-purpose model and software integration, which only makes the total system more complex and fragile. With current autonomous systems, when the scene becomes so complex that the in-car AI can no longer drive safely, the car must fall back – either to remote drivers in a call center or by alerting the in-car driver.

                        MLLMs present the opportunity to solve these issues with a “top-down” approach by using a model that is broadly trained on the world’s knowledge and then optimized to execute the driving task. This adds complex reasoning without adding software complexity – one large model simply adds the right driving logic to the existing system for thousands (or millions) of edge cases.

                        There are challenges implementing this type of system today, as the current MLLMs are too large to run on embedded in-car processors. One solution is a hybrid architecture, where the large-scale MLLMs running in the cloud collaborate with specially trained models running in-car, splitting the autonomy task and the long-term versus short-term planning between car and cloud.
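To illustrate the kind of split John describes, here is a purely hypothetical Python sketch (not Ghost’s actual architecture; all class names, timings, and messages are invented for illustration) in which a small in-car planner runs on every control tick while a cloud-hosted MLLM is queried asynchronously for longer-horizon guidance:

    import asyncio
    import random

    class InCarPlanner:
        # Small, specially trained model: fast enough to run on embedded hardware every frame.
        def plan(self, scene, guidance):
            if guidance:
                return f"maneuver using cloud guidance: {guidance}"
            return "default maneuver: keep lane, adjust speed"

    class CloudMLLM:
        # Large multi-modal model in the cloud: slower, used for long-horizon scene reasoning.
        async def reason(self, scene):
            await asyncio.sleep(0.5)  # stand-in for network plus inference latency
            return "construction zone ahead; merge left within 200 m"

    async def drive_loop(frames=5):
        car, cloud = InCarPlanner(), CloudMLLM()
        guidance = None
        cloud_task = None

        for t in range(frames):
            scene = {"t": t, "objects": random.randint(3, 12)}  # placeholder perception output

            # Harvest finished cloud reasoning and start a fresh query without blocking control.
            if cloud_task is not None and cloud_task.done():
                guidance = cloud_task.result()
                cloud_task = None
            if cloud_task is None:
                cloud_task = asyncio.create_task(cloud.reason(scene))

            # Short-horizon planning always runs locally, at frame rate.
            print(f"frame {t}: {car.plan(scene, guidance)}")
            await asyncio.sleep(0.2)  # stand-in for a 5 Hz control tick

    asyncio.run(drive_loop())

The only point of the sketch is the division of labor: the local model never waits on the network, and the cloud model’s guidance is folded into planning whenever it arrives.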

                        M.R.: What’s the biggest hurdle to overcome in bringing these new, powerful forms of AI into our everyday lives?

John: For many use cases, the current performance of these models is already sufficient for broad commercialization. However, some of the most important use cases for AI – from medicine to legal work to autonomous driving – have an extremely high bar for commercial acceptance. In short, your calendar can be wrong, but your driver or doctor cannot.

                        We need significant improvements on reliability and performance (especially speed) to realize the full potential of this technology. This is exactly why there is a market for application-specific companies doing research and development on these general models. Making them work quickly and reliably for specific applications takes a lot of domain-specific training data and expertise. 

                        Fine-tuning models for specific applications has already proven to work well in the text-based LLMs, and I expect this exact same thing will happen with MLLMs. I think companies like Ghost, who have lots of training data and a deep understanding of the application, will dramatically improve upon the existing general models. The general models themselves will also improve over time. 

What is most exciting about this field is the trajectory – the amount of investment and the rate of improvement are astonishing – and we are going to see some incredible advances in the coming months.

                        M.R. Rangaswami is the Co-Founder of Sandhill.com

                         

                        Read More