
What does the future of customer experience look like with generative AI?
According to Knowbl’s CEO and Co-Founder, Jay Wolcott, it’s going to be critical to understand the risks of implementing AI solutions and the requirements for what “enterprise-ready conversational AI” means.
In this conversation, Jay sheds light on how this innovative technology redefines customer experience, making interactions more seamless, convenient, and efficient.
M.R. Rangaswami: What exactly is “BrandGPT,” and how does it differ from traditional conversational AI technologies?
Jay Wolcott: BrandGPT is a revolutionary Enterprise Platform for Conversational AI (CAI) built from the ground up on large language models (LLMs). Legacy virtual-assistant platforms built on BiLSTM and RNN frameworks lack the speed, ease, and scalability that LLMs can offer through few-shot learning.
Through the release of this all-new approach, CAI can finally meet its potential of creating an effortless self-service experience for consumers with brands. The proprietary AI approach Knowbl has designed within BrandGPT offers truly conversational and contextual interactions while keeping the uncontrollable risks of generative AI within strict limits.
This new approach is driving tremendous enterprise excitement about new levels of containment, deflection, and satisfaction across digital and telephony deployments. Beyond the improved recognition and conversational approach, Knowbl’s platform allows brands to launch quickly, leverage existing content, and scale capabilities while reducing the technical effort required to manage them.
M.R.: What emerging trends do you foresee shaping the future of conversational AI and customer experience, and how can businesses prepare for these developments?
Jay: In 2024 we plan to overcome customer frustration with brand bots and virtual assistants, ushering in a new era of effortless and conversational experiences powered by advanced language models.
Brands that embrace LLMs for customer automation early on will establish a competitive advantage, while those who lag will struggle to keep up. Although many organizations are still in the experimental phase of using GenAI for internal purposes due to perceived risks, leading brands are boldly venturing into direct customer automation, reimagining digital interfaces with an “always-on” brand assistant.
We also predict 2024 to be the year that bad bots die. New expectations of AI will lead to frustrated consumers when dealing with legacy bots, and a trend in attrition versus retention will appear.
M.R.: What complexities do multinational companies face when implementing AI-driven solutions, and how can they navigate the challenges to ensure successful adoption across diverse markets?
Jay: Multinational companies encounter a myriad of complexities when implementing AI-driven solutions, stemming from the diversity of the markets in which they operate. One significant challenge lies in reconciling varied regulatory landscapes and compliance requirements across different countries, necessitating a nuanced approach to AI implementation that adheres to local regulations.
Additionally, cultural and linguistic diversity poses a hurdle, as AI solutions must be tailored to resonate with the unique preferences and expectations of diverse consumer bases. To successfully navigate these challenges, companies must prioritize a robust localization strategy, customizing AI solutions to align with each market’s specific needs and cultural nuances.
Collaborating with local experts, remaining vigilant of regulatory changes, and fostering open communication with stakeholders are essential for multinational companies to achieve successful AI adoption across diverse markets.
M.R. Rangaswami is the Co-Founder of Sandhill.com

John Hayes is CEO and founder of autonomous vehicle software innovator Ghost Autonomy.
Prior to Ghost, John founded Pure Storage, taking the company public (PSTG, $11 billion market cap) in 2015. As Pure’s chief architect, he harnessed the consumer industry’s transition to flash storage (including the iPhone and MacBook Air) to reimagine the enterprise data center, inventing blazing-fast flash storage solutions now run by the world’s largest cloud and ecommerce providers, financial and healthcare institutions, science and research organizations, and governments.
Like Pure, Ghost uses software to achieve near-perfect reliability and redefines simplicity and efficiency with commodity consumer hardware. Ghost is headquartered in Mountain View with additional offices in Detroit, Dallas and Sydney. Investors including Mike Speiser at Sutter Hill Ventures, Keith Rabois at Founders Fund and Vinod Khosla at Khosla Ventures have invested $200 million in the company.
Now, let’s get into it, shall we?
M.R. Rangaswami: How does the expansion of LLMs to new multi-modal capabilities extend their application to new use cases?
John Hayes: Multi-modal large language models (MLLMs) can process, understand and draw conclusions from diverse inputs like video, images and sounds, expanding beyond simple text inputs and opening up an entirely new set of use cases in everything from medicine to legal to retail applications. Training GPT models on more and more application-specific data will help improve them for their specific tasks. Fine-tuning will increase the quality of results, reduce the chances of hallucinations and provide usable, well-structured outputs.
Specifically in the autonomous vehicle space, MLLMs have the potential to reason about driving scenes holistically, combining perception and planning to generate deeper scene understanding and turn it into safe maneuver suggestions. The models offer a new way to add reasoning when navigating complex scenes or ones never seen before.
For example, construction zones have unusual components that can be difficult for simpler AI models to navigate — temporary lanes, people holding signs that change and complex negotiation with other road users. LLMs have been shown to process all of these variables in concert with human-like levels of reasoning.
M.R.: How is this new expansion impacting autonomous driving, and what does it mean for the “autonomy stack” developed over the past 20 years?
John: I believe MLLMs present the opportunity to rethink the autonomy stack holistically. Today’s self-driving technologies have a fragility problem, struggling with the long tail of rare and unusual events. These systems are built “bottom-up,” composed of a combination of point AI networks and hand-written driving software logic to perform the various tasks of perception, sensor fusion, drive planning and drive execution – all atop a complicated stack of sensors, maps and compute.
This approach has led to an intractable “long tail” problem – where every unique situation discovered on the road requires a new special purpose model and software integration, which only makes the total system more complex and fragile. With the current autonomous systems, when the scene becomes overly complex to the point that the in-car AI can no longer safely drive, the car must “fall-back” – either to remote drivers in a call center or by alerting the in-car driver.
MLLMs present the opportunity to solve these issues with a “top-down” approach by using a model that is broadly trained on the world’s knowledge and then optimized to execute the driving task. This adds complex reasoning without adding software complexity – one large model simply adds the right driving logic to the existing system for thousands (or millions) of edge cases.
There are challenges implementing this type of system today, as the current MLLMs are too large to run on embedded in-car processors. One solution is a hybrid architecture, where the large-scale MLLMs running in the cloud collaborate with specially trained models running in-car, splitting the autonomy task and the long-term versus short-term planning between car and cloud.
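To make that split concrete, here is a minimal sketch of how such a hybrid car/cloud planner might be wired, assuming a small in-car model that never blocks on the network and a large cloud model consulted asynchronously. The class and function names are hypothetical stand-ins, not Ghost’s actual stack.

```python
import queue
import threading
import time

class InCarPlanner:
    """Small embedded model: fast, short-horizon maneuvers."""
    def plan(self, scene, hint=None):
        # Use the latest cloud hint if one exists; otherwise act
        # conservatively on local perception alone.
        return hint if hint is not None else "slow_and_follow_lane"

class CloudMLLM:
    """Large multi-modal model: slow, but reasons about rare scenes."""
    def advise(self, scene):
        time.sleep(0.5)  # stands in for network + inference latency
        return "merge_left_around_construction"

def cloud_worker(mllm, scenes, hints):
    # Runs off the critical path, feeding long-horizon advice back.
    while True:
        hints.put(mllm.advise(scenes.get()))

scenes, hints = queue.Queue(), queue.Queue()
threading.Thread(target=cloud_worker, args=(CloudMLLM(), scenes, hints),
                 daemon=True).start()

planner, latest_hint = InCarPlanner(), None
for tick in range(20):            # the real-time control loop never blocks
    scene = {"tick": tick}        # stand-in for camera/lidar inputs
    scenes.put(scene)
    try:
        latest_hint = hints.get_nowait()  # adopt cloud advice when it lands
    except queue.Empty:
        pass
    print(tick, planner.plan(scene, latest_hint))
    time.sleep(0.1)
```

The point of the pattern is that the short-horizon loop keeps driving conservatively whenever cloud guidance has not yet arrived, which is what makes the split safe to attempt at all.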
M.R.: What’s the biggest hurdle to overcome in bringing these new, powerful forms of AI into our everyday lives?
John: For many use cases, the current performance of these models is already there for broad commercialization. However, some of the most important use cases for AI – from medicine to legal work to autonomous driving – have an extremely high bar for commercial acceptance. In short, your calendar can be wrong, but your driver or doctor cannot.
We need significant improvements on reliability and performance (especially speed) to realize the full potential of this technology. This is exactly why there is a market for application-specific companies doing research and development on these general models. Making them work quickly and reliably for specific applications takes a lot of domain-specific training data and expertise.
Fine-tuning models for specific applications has already proven to work well in the text-based LLMs, and I expect this exact same thing will happen with MLLMs. I think companies like Ghost, who have lots of training data and a deep understanding of the application, will dramatically improve upon the existing general models. The general models themselves will also improve over time.
What is most exciting about this field is the trajectory — the amount of investment and the rate of improvement are astonishing — and we are going to see some incredible advances in the coming months.
M.R. Rangaswami is the Co-Founder of Sandhill.com

Gerry Fan serves as the Chief Executive Officer at XConn Technologies, a company at the forefront of innovation in next-generation interconnect technology tailored for high-performance computing and AI applications.
Established in 2020 by a team of seasoned experts in memory and processing, XConn is dedicated to making Compute Express Link™ (CXL™), an industry-endorsed Cache-Coherent Interconnect for Processors, Memory Expansion, and Accelerators, accessible to a broader market.
In pursuit of expediting the adoption of CXL, Gerry and his teams have successfully introduced the world’s inaugural hybrid CXL and PCIe switch – with a strategic approach that will make computers faster, smarter, and better for the environment.
M.R. Rangaswami: What barriers facing AI and HPC applications are you looking to solve?
Gerry Fan: Next-generation applications for artificial intelligence (AI) and high-performance computing (HPC) continue to face memory limitations. The exponential demand these applications place on memory bandwidth has become a barrier to their further innovation and widespread adoption.
The CXL specification has been developed to alleviate this challenge by offering unprecedented memory capacity and bandwidth so that critical applications, such as research for drug discovery, climate modeling or natural language processing, can be delivered without memory constraints. By applying CXL technology to break through the memory bottleneck, XConn is helping to advance next-generation applications where a universal interface can allow CPUs, GPUs, DPUs, FPGAs and other accelerators to share memory seamlessly.
M.R.: How are you looking to solve the challenge with the industry’s first and only hybrid CXL and PCIe switch?
Gerry: While CXL technology is poised to alleviate memory barriers in AI and HPC, a hybrid approach that combines CXL and PCIe on a single switch provides a more seamless pathway to CXL adoption. PCIe (Peripheral Component Interconnect Express) is a widely used interface for connecting hardware components, including GPUs and storage devices. Many traditional applications only need the interconnect capability offered by PCIe, yet, increasingly, next-generation applications need the higher bandwidth enabled by CXL. System designers can be stuck deciding which approach they will need most.
XConn is meeting this challenge by offering the industry’s first and only hybrid CXL 2.0 and PCIe Gen 5 switch. Combining both interconnect technologies on a single 256-lane SoC, the XConn switch is able to offer the industry’s lowest port-to-port latency and lowest power consumption per port in a single chip – all at a low total cost of ownership. What’s more, system designers only have to design once to achieve versatile expansion, heterogeneous integration for a mix of accelerators, and fault tolerance with the redundancy mission critical applications require for true processing availability.
M.R.: In your view, how will XConn revolutionize the future of high-performance computing and AI applications?
Gerry: Together with other leading CXL ecosystem players, XConn is delivering on CXL’s promise to support faster, more agile AI processing. This will deliver the performance gains AI and HPC applications need to accelerate research and innovation breakthroughs. It will also support greater energy efficiency and sustainability while helping to proliferate the “AI Everywhere” paradigm for smarter and more autonomous systems.
By helping to foster innovation and accelerate application use cases, XConn is delivering the missing link that will pave the way for unprecedented computing performance needed for tomorrow’s breakthroughs and technology advancements.
M.R. Rangaswami is the Co-Founder of Sandhill.com

When I sat with Razat, he was clear that digitalisation is imperative for almost every organisation in every industry today, and that this is what is driving more than $3trn of annual spending on it.
His rationale behind digitalisation is sound, but as he shared, studies show that much of that work is wasted: more than 40%, in some cases. This is largely due to the disconnect between strategy and what’s being executed by teams across the business.
In this conversation, Razat Gaurav, who leads the company known as the leader in portfolio management and value stream management, shares why bridging the strategy-execution gap is essential for organizational and leadership transformation.
Would you believe that 40% of strategy work gets wasted in execution?
M.R. Rangaswami: What is the biggest challenge orgs face when connecting strategy to execution?
Razat Gaurav: The biggest challenge between strategy and execution is change: change from technology shifts, demographic shifts, and even generational shifts. It’s not a new phenomenon. But what has changed is that the pace of change is exponentially faster. Companies must be able to quickly analyse and adapt or evolve their strategy, and how those changes are executed, while still driving important business outcomes.
M.R.: The research arm of The Economist found that 86% of executives think their organizations need to improve accountability for strategy implementation. What challenges do orgs face around measurement?
Razat: The key thing that gets in the way is data silos. Most organisations are swimming in data, yet most of that data is not usable for making decisions. Curating the relevant data to align with your priorities and objectives is critical to achieving accountability for strategy implementation.
What we find is that many organisations have three major gaps when they look at how they measure understanding of strategic goals.
First, organisations are measuring inputs or outputs, but they’re not measuring outcomes. Particularly when dealing with digital transformations, the business and technology teams must work together to focus on the outcome.
The second gap is around creating a synchronised, connected approach to objectives and key results, what some organisations call OKRs. Is leadership in alignment with the way an individual contributor gets measured? And does the individual contributor understand how they impact their leadership’s OKRs? That bidirectional synchronisation is key.
And then the last piece is how the different functions in the organisation (finance, manufacturing, sales, and so on) align their OKRs to help achieve the company’s objectives and key results.
M.R.: What should leaders do first to narrow the strategy-execution gap?
Razat: My first piece of advice would be, take a deep breath because change is constant.
As organisations, as leaders, as individuals, we all have to be ready to adapt and change. But beyond taking that deep breath, there are three things I’d advise organisations to do.
First, figure out the three initiatives that will actually move the needle. Second, define OKRs and an incentive structure for the outcomes you’re trying to achieve. Third, invest in systems that allow you to break out of those data silos to execute as one organisation, as one team.
M.R. Rangaswami is the Co-Founder of Sandhill.com

According to the recent update from Allied Advisers, SMB is the backbone of the US economy; 99.9% of all US businesses are in this segment. With rising SaaS adoption by small businesses for enhancing productivity, we remain optimistic on the long-term view of this sector.
While, not surprisingly, SMB SaaS has higher churn than Enterprise SaaS, it has significantly better operational metrics when it comes to sales and marketing expense, R&D expense and EBITDA margins, and it faces less sector competition. Our report covers the nuances of SMB SaaS, and we believe that SMB SaaS businesses continue to offer compelling opportunities for investors and buyers.
This particular Allied Advisers report updates their SMB SaaS coverage, highlighting a sector that has been growing with notable outcomes.
The report pulls from the IPOs of Freshworks ($1.03B), DigitalOcean ($780M) and Klaviyo ($576M), notable exits such as Mailchimp’s acquisition ($12B+, one of the largest bootstrapped exits), and the growth of private SMB SaaS companies like Calendly (last valued at $3B) and Notion (last valued at $10B).

To see the full summary of Allied Advisers’ update, click here:
Gaurav Bhasin is the Managing Director of Allied Advisers.

One year ago, Software Equity Group opened their 2022 report on M&A trends with a simple observation: stock market activity was not for the faint of heart. That view led to a much broader inquiry throughout the report into the myriad dynamics at play and their impact on the software M&A market.
So how are Founders and CEOs exercising caution when considering M&A and liquidity events in the face of ongoing economic uncertainty, and is their restraint warranted?
To cut to the chase: it depends. For software businesses with the right profile (more on that later), there is tremendous opportunity in the current M&A landscape.
To better assess the state of the market, SEG analyzed data from our annual survey of CEOs, private equity investors, and strategic buyers, in addition to our quarterly report and our transactions.
HERE ARE SEG’S 4 TAKEAWAYS FROM THE RESEARCH:
1. Cautious CEOs Are Holding Off On Going To Market
Not surprisingly, the macroeconomic environment has colored CEOs’ perceptions of the SaaS M&A market. Seventy-eight percent believe valuations are the same as or lower than last year, and over two-thirds believe the market will improve in the coming years.
As a result, many are waiting to see what the future holds before going to market.
2. Buyers And Investors Face Shortage Of Opportunities
In contrast to the CEOs’ viewpoint, buyers and investors are finding that the competition is holding steady or getting stronger. They are eager to do deals with high-quality businesses, but there are not as many opportunities available as in 2022.
Meanwhile, 66.7% of strategics say they have seen no change or a decrease in the volume of high-quality SaaS companies in the market over the past year. This supports the idea that high-quality M&A opportunities are scarce in 2023 and high-quality businesses that pursue a liquidity event receive outsized interest from buyers and investors.
3. Growth, Retention & Profitability Are Key
Given the uncertainty in the macro markets over the last 18 months, it is not surprising that buyers have become more risk-averse, and the profile of a highly desirable asset has shifted.
While revenue growth and retention are still weighted strongly, there is now little interest in businesses burning significant cash. In 2020 and 2021, a high-burn, growth-at-all-costs business was considered an attractive asset. In 2023, the story has changed.
4. High-Quality Assets Are Demanding Premium Valuations
The current market represents a classic supply and demand dynamic. When the supply of a good decreases, and the demand for said good stays the same or increases, its price is expected to increase.
Where is the data that supports this?
The answer is hard to find in the public markets. The share prices of public SaaS companies in the SEG SaaS Index have rebounded this year but are still down roughly 36% from COVID-level peaks.
The Nasdaq has sharply rebounded from 2022 lows, due to the “Magnificent 7” companies and excitement over artificial intelligence. Most notably, valuations in M&A deals have decreased by 36% since 2021.
There Is Good News For SaaS Companies.
It is easy to understand why CEOs are cautious right now, and many are right to be. The landscape has shifted from where it was a few years ago, with buyer and investor priorities shifting as well. It is clear, however, that the deficit of profitably growing assets on the market is working in favor of sellers.
This is due to increasing competition for highly sought-after software companies that display strong revenue growth and retention. One thing everyone agrees on: higher valuations lie ahead.
To read the full SEG review on SaaS M&A: 4 Buyers’ Perspectives, click here.

In January 2023, Leigh Segall, Chief Strategy Officer at Smart Communications – a leading technology company focused on helping businesses engage in more meaningful customer conversations – shared her predictions on what businesses would focus on in customer experience in 2023.
We’ve kept these in our back pocket, knowing that as we round out Q4 it would be useful to reflect on where customer experience strategies currently stand in this climate.
1. Ever-changing customer behaviors will require enterprises to reimagine existing business models
The accelerated shift to digital that was originally driven by the global pandemic has consumers expecting total digital freedom, with the ability to choose when, where and how they interact with brands across many industries.
Even those who were slow to adopt digital are now on board — which means businesses must adapt, not just to meet today’s expectations but also to prepare for the changes tomorrow may bring. Analysts and experts agree that businesses must focus on customer-centricity — particularly industries that have lagged in moving to digital. And they can show that they care by focusing less on one-way transactions and more on two-way customer conversations that drive trust and loyalty, and provide value.
2. Conversational experiences will make or break brand loyalty and customer trust
Consumers and businesses alike are overwhelmed with choice, making competition for attention and loyalty fiercer than ever. Add ongoing instability to the equation, and cultivating trust becomes the key to fostering lasting customer relationships.
Earning customer trust is especially challenging for industries that deal with emotionally-charged matters — such as money, health, and property loss or damage. Businesses addressing these needs should cultivate a tech ecosystem that’s interconnected and interoperable, pulling together data and processes from multiple systems of record to create easy, efficient conversations that are both sophisticated and seamless.
3. Enterprises will automate and digitize key business processes to increase operational efficiency
The pandemic-accelerated pace of digital transformation has led to an IT skills shortage that’s being felt globally. And many businesses are looking to low-code solutions to reduce the burden on IT and increase operational efficiency by empowering non-technical business users.
Shifting the mindset away from maintenance paves the path for future success by freeing IT teams from routine and repetitive tasks, allowing them to focus on more strategic initiatives. Cloud-based solutions also reduce total cost of ownership (TCO) and technical debt while bringing much needed resilience. Cultivating a tech ecosystem that brings agility and flexibility at scale will be critical to increasing operational efficiency without impacting customer experience.
4. Enterprises will mitigate risks and protect brand reputation by increasing the focus on compliance and regulatory requirements
Continuing cyberthreats are creating an increased need for business leaders to focus on compliance and regulatory requirements, which are constantly evolving — particularly for highly-regulated industries such as financial services, healthcare and insurance.
Adopting a cloud-first approach will enable highly-regulated organizations to greatly reduce risks and keep up with ever-changing regulatory requirements — which will continue to evolve in 2023 and beyond. Investing in the right tech partners enables deep visibility into the nuanced requirements of each industry, with the ability to easily make sweeping updates as the rules of engagement change. Layering on automated, digitized solutions helps to ensure communications are compliant across all customer touchpoints; legacy systems simply aren’t up to the task.
5. Technological innovation will remain a top priority as enterprises recognize the increased need for agility and scalability
Business leaders know that speed and scale are mission critical. As global markets become more interconnected and waves of change continue to rise, enterprises must be able to adapt on the fly — and at massive scale. This calls for replacing legacy systems and processes with sophisticated, cloud-first solutions that enable data interconnectivity, operational efficiency and enterprise-wide flexibility.
As customer expectations continue to evolve, businesses need to be able to access and act on customer data and deliver personalized, unique customer interactions at every touchpoint.
We’d love to hear your thoughts — so please send us an email!

As General Partner at Foundation Capital, Ashu Garg collaborates with startups throughout the enterprise stack. His career reflects his enthusiasm for machine learning and for revolutionizing established software domains to create fresh consumer interactions.
While FC’s inaugural Generative AI “Unconference” was held back in June, we still find ourselves referencing Ashu’s observations from the conference. We hope you take away as much from his highlights as we have.
1. AI natives have key advantages over AI incumbents
In AI, as in other technology waves, every aspiring founder (and investor!) wants to know: Will incumbents acquire innovation before startups can acquire distribution? Incumbents benefit from scale, distribution, and data; startups can counter with business model innovation, agility, and speed—which, with today’s supersonic pace of product evolution, may prove more strategic than ever.
To win, startups will have to lean into their strength of quickly experimenting and shipping. Other strategies for startups include focusing on a specific vertical, building network effects, and bootstrapping data moats, which can deepen over time through product usage.
2. In AI, the old rules of building software applications still apply
How can builders add value around foundation models? Does the value lie in domain-specific data and customizations? Does it accrue through the product experience and serving logic built around the model? Are there other insertion points that founders should consider?
While foundation models will likely commoditize in the future, for now, model choice matters. From there, an AI product’s value depends on the architecture that developers build around that model. This includes technical decisions like prompts (including how their outputs are chained to both each other and external systems and tools), embeddings and their storage and retrieval mechanisms, context window management, and intuitive UX design that guides users in their product journeys.
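To ground those decisions, here is a minimal sketch of that scaffolding: retrieval over stored embeddings, a context-window budget, and two chained prompts. The `embed` and `call_model` functions are toy stand-ins (not any real provider’s API) so the example runs on its own.

```python
import math

def embed(text):
    # Toy embedding: a normalized bag-of-letters vector. A real system
    # would call an embedding model here.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - 97] += 1
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

DOCS = ["Refunds are processed within 5 business days.",
        "Premium plans include phone support.",
        "Passwords must be at least 12 characters."]
INDEX = [(d, embed(d)) for d in DOCS]        # embedding storage

def retrieve(query, k=2):
    # Retrieval mechanism: rank stored docs by similarity to the query.
    q = embed(query)
    return [d for d, v in sorted(INDEX, key=lambda p: -cosine(q, p[1]))[:k]]

def call_model(prompt):
    # Stand-in for an LLM call; echoes the prompt tail so the sketch runs.
    return "ANSWER based on: " + prompt[-120:]

def answer(query, max_context_chars=200):
    # Context-window management: truncate retrieved text to a budget.
    context = " ".join(retrieve(query))[:max_context_chars]
    draft = call_model(f"Context: {context}\nQuestion: {query}\nAnswer:")
    # Prompt chaining: the first output feeds a refinement step.
    return call_model(f"Rewrite clearly for a customer: {draft}")

print(answer("How long do refunds take?"))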
3. Small is the new big
Bigger models and more data have long been the go-to ingredients for advancements in AI. Yet, as our second keynote speaker, Sean Lie, Founder and Chief Hardware Architect at Cerebras, relayed, we’re nearing a point of diminishing returns for simply supersizing models. Beyond a certain threshold, more parameters do not necessarily equate to better performance. Giant models waste valuable computational resources, causing costs for training and use to skyrocket.
To read Ashu’s full report, and his Top 5 Takeaways, click here.

Roughly 20% of new businesses fail within the first year, and 50% are gone within five years.
So what makes a startup successful? Is it mainly a combination of hard work and luck, or is there a winning formula?
Colin C. Campbell has been a serial entrepreneur for over 30 years. He has founded and scaled various internet companies that collectively have reached a valuation of almost $1 billion. In his new book, Start. Scale. Exit. Repeat.: Serial Entrepreneurs’ Secrets Revealed!, Colin shares a wealth of experience: an in-depth guide featuring interviews with industry experts that points readers in the right direction on their entrepreneurial journey and helps answer the questions they’ll encounter.
M.R. Rangaswami: What is it about what you share in Start. Scale. Exit. Repeat.: Serial Entrepreneurs’ Secrets Revealed! that you feel hasn’t been shared before?
Colin Campbell: Start. Scale. Exit. Repeat. represents 30 years of my experience as a serial entrepreneur, a decade of research and writing, and over 200 interviews with experts, authors, and fellow serial entrepreneurs. The book deconstructs the stages of building a company from inception to exit, and lays out strategies to replicate this success repeatedly.
At each stage of a company’s life cycle, it’s crucial to fine-tune your narrative, assemble the right team, secure adequate funding, and put in place effective systems. The strategies for achieving these vary dramatically, from the chaotic, founder-centric startup phase to the more structured approach needed to scale. As you near the finish line, your strategy will have to pivot once again.
The core message of Start. Scale. Exit. Repeat. is that entrepreneurship isn’t a “one and done” affair. It’s a skill—akin to any other trade—that you can master and continually refine. There’s a recipe for launching a successful startup, and this book simplifies it into actionable steps to be taken one at a time.
Furthermore, the book challenges the prevailing obsession with unicorns. We exist in a “unicorn culture,” where a valuation under a billion dollars is often frowned upon. But this mindset is perilous. The high-velocity chase for unicorn status has led to a wreckage of dreams and fortunes along the Silicon Valley highway. I’ve witnessed countless founders succumb to this “Silicon Valley disease,” sacrificing years of labor and significant capital.
There’s a more pragmatic approach to building wealth, and it’s far simpler: start, scale, exit, take some money off the table, and repeat.
M.R.: What was your biggest lesson from one of your biggest setbacks?
Colin: Let’s take a trip down memory lane to the early ’90s. My brother and I launched an Internet Service Provider (ISP) in Canada. We were pioneers on the “Information Superhighway,” connecting hundreds of thousands of Canadians to the internet. We found ourselves in the whirlwind Geoffrey Moore famously described as the “Tornado.” It was an exhilarating ride, especially for a couple of 20-somethings who had grown up on a farm.
We took the company public later in the ’90s and merged it with a wireless cable company, closing at a valuation of approximately $180 million. After receiving 50% of a wireless spectrum for fixed wireless internet from the Canadian government—yes, they handed out spectrum back then to encourage competition—our company’s valuation skyrocketed to over $1 billion. Technically, it was a stock-for-stock swap, with our shares being locked up for 18 months. At 28 years old in 1998, I owned almost 14% of the company.
We thought we were invincible. The internet was poised to change everything, and we were at the forefront.
Then, out of nowhere, the .COM crash hit.
Our company pulled its secondary offering to raise $50 million because the Nasdaq had tanked to 4,000. And it kept falling, plummeting to 1,300 and not recovering for over a decade. It was indeed the .COM crash, and the music had stopped—without enough chairs to go around.
Did we make mistakes? Absolutely. We shouldn’t have relinquished control without securing liquidity. “Liquidity or control” has since become our mantra for all future ventures. And let’s face it—stuff happens. Technologies evolve, regulations change, and market climates shift. That’s why it’s crucial to exit when times are good. When the party’s in full swing, make a discreet exit, take some money off the table, and focus on your next venture.
As for that unicorn of ours? It filed for bankruptcy protection, and our stock plummeted from a high of $19 a share to the paltry sum I sold it for: 6 cents a share.
Thankfully, we regrouped and stuck to our strengths. We launched Hostopia, a global leader in hosting and email solutions for telecoms. We took it public and eventually sold it to a Fortune 500 company—this time for an all-cash deal—just a month before the Lehman crisis in 2008.
M.R.: In your experience, once a business survives its first five years, what’s the next riskiest precipice it encounters?
Colin: The vast majority of companies in America are small businesses, and most struggle to scale. But make no mistake—there’s a formula for scaling your enterprise. Some companies might find it more challenging than others, and some may opt out due to the stress and transformative changes that come with scaling.
In the SaaS (Software as a Service) industry: if you’re not growing, you’re dying. After the .COM crash, we found ourselves running low on funds while operating our hosting and email platform. Still, we remained optimistic. Why? Because even though we were bleeding $500,000 per month, our customer base was growing. Growth is the lifeline in SaaS; losing money is acceptable as long as you’re expanding.
Hostopia, for example, adhered to the Rule of 40, maintaining a growth rate plus profit margin that exceeded 40%. We achieved 32 consecutive quarters of growth, leading to an IPO and ultimately a successful sale at a 60% premium over our trading price to a Fortune 500 company. Another venture, .CLUB Domains, also operated in the red for several years. Nevertheless, we managed to cut losses by about half a million dollars annually until we started adding the same amount to our bottom line, culminating in an exit to GoDaddy Registry.
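For readers unfamiliar with the metric Colin cites, the Rule of 40 adds a company’s revenue growth rate to its profit margin and asks whether the sum clears 40. A quick sketch with made-up figures:

```python
def rule_of_40(growth_pct, profit_margin_pct):
    # Growth rate plus profit margin should exceed 40 (both in percent).
    score = growth_pct + profit_margin_pct
    return score, score >= 40

# A company growing 55% while burning 10% of revenue still clears the bar...
print(rule_of_40(55, -10))   # (45, True)
# ...while a flat, modestly profitable one does not.
print(rule_of_40(5, 15))     # (20, False)
```

This is why, as Colin notes, losing money can be acceptable in SaaS as long as growth is strong enough to carry the combined score.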
Am I a genius entrepreneur? As much as I’d like to think so, that’s far from the truth. In 2005, our company was facing internal strife, stalled sales, and a board questioning my role as CEO. One board member even remarked, “He’s too young and way in over his head.” That’s when a friend introduced me to Patrick Thean, a coach at Rhythm Systems. Patrick taught us invaluable systems like goal setting, strategic planning, daily huddles, and KPI tracking. In addition, we partnered with other coaches to transform the organization from a tech-centric company to a sales-driven organization. The ultimate effect of all of these changes: we tripled our size within a few years.
Since then, we have incorporated these systems along with countless other insights I’ve gathered from serial entrepreneurs, experts, and authors. We’ve encapsulated these stories and lessons in the book, laying out a clear roadmap for SaaS companies aiming to scale.
M.R. Rangaswami is the Co-Founder of Sandhill.com

Organizational Optimization is what gives David Luke’s career credibility.
In this quick Q&A, David shares his insights on the major staffing and retention challenges tech leaders are facing and how IT teams can accelerate their approaches to innovation to stay competitive.
M.R. Rangaswami: What kind of staffing and retention challenges are IT leaders facing right now?
David Luke: IT leaders are experiencing a new phenomenon in today’s professionals: an influx of talent that is demanding to work in non-traditional ways. HR departments are finding it difficult to create a standard job class or role category. Executives and line managers alike are turning to firms like Consulting Solutions for a la carte solutions to address anti-patterns that are impeding their business.
Here are what I believe to be the top five challenges in our current labor market:
- Creating a safe space for employees where they can land, grow, and learn while delivering both innovative and traditional pieces of work. By partnering with HR and recruiting firms, leaders can develop a place where folks want to work, are able to grow their career to the level that they desire, and develop their knowledge / skills with a defined path forward.
- Attracting late-career people who bring knowledge and maturity to an organization. These are the gems in our workforce who can not only deliver with speed but also mentor new professionals.
- Balancing lower-cost delivery with a world-class product, and retaining the people who deliver that product.
- The decision between remote and on-site, which means ensuring that you are getting the talent that will accelerate your business by offering options for your people. There is some exceptional talent out there who would love to work remotely, and there are also folks who thrive in an in-person collaborative environment. Leaders need to weigh how they want their workforce to be shaped and potentially develop a blend.
- Although it’s an attractive practice, leaders need to understand some of the limitations of nearshoring/offshoring their workforce—fewer overlapped hours, decreased team retention due to offshore labor practices, and collaboration on a limited basis. Ensure that you weigh the cost savings versus delivering an exceptional product.
M.R.: Why is the “product owner role” so critical to delivery team success?
David Luke: Exceptional product owners use their superpowers to bring the product vision down to the team level. They focus relentlessly on prioritizing what is needed and what is wanted for their business, their stakeholders, and their customers. The best product owners can strike the right balance between being specific enough to provide clear direction to the team while still being flexible enough to accommodate changes and shifts in priorities that come from a deep and dynamic partnership with product managers.
These proverbial unicorns also have a deep knowledge of user needs and the experience that the business wants the customer to receive. They easily see the bigger picture and engage often with product managers, customer experience, and user-experience experts to define and drive the delivery of great products.
Elite product owners have an abundance of empathy in their toolkits. They’re able to read the pulse of the team, the customer, and stakeholders while balancing the push and pull to deliver great products.
What sets apart the truly outstanding product owners is the ability to effectively listen. Not just to the words but to the underlying messages and sentiments of everyone who they actively seek to communicate with as part of their rituals, ceremonies, and workdays.
Great product owners don’t just look inward; they excel at looking outward to the market, the competition, and the changing technologies that they work with every day. They know the goals and challenges and can articulate the path forward to lead their teams and their products to successful outcomes. They are storytellers, evangelists, and cheerleaders for their teams and their products. The word on the chest of their superhero suit is often “TEAM”.
M.R.: If technology is evolving faster than workplace structures can keep up, what must IT teams do to accelerate their approach to stay competitive and deliver results?
David: At the heart of any change to approach, regardless of its scope, lies the critical support of leadership. While grassroots efforts can certainly achieve success, a unified message and commitment from the top sets the tone for the entire organization.
To ensure an accelerated approach, it is also essential to establish governance and a defined way of working, while remaining open to adjusting these as you gain a deeper understanding of your company’s culture. With these foundational elements in place, you can then develop charters and set clear, measurable objectives and key results (OKRs) to guide your progress toward success. And most importantly, START THE WORK! Don’t get bogged down in planning—act and stay focused on delivering results.
Once you have established a new, accelerated way of working, you must set about streamlining your efforts and prioritizing the things that are most important to your customers. Use your product owners, UX experts, and CX experts to gain the trust and read the pulse of your customers; they are who you are building for, and they will tell you if you are getting it right. Leverage newer practices such as design thinking to understand who you are building for, what their pains are, and how you can deliver products that eliminate or alleviate those pains.
M.R. Rangaswami is the Co-Founder of Sandhill.com

While 2010-2021 were robust years for M&A and capital raises for technology companies, the markets today have changed significantly in terms of deal volume and valuation, though we are seeing improvement toward a more rational and sustainable market.
With the major indices rebounding this year from the lows of 2022, the question of “are the market conditions right” remains in the minds of investors and executives interested in exploring M&A.
This article covers several M&A trends: private equity (PE) continues to be a major driver of deal volume, larger private companies have emerged as new technology M&A buyers, and deal volume and value are stabilizing.
Also, the impending IPOs of Arm (semiconductors), Klaviyo (software) and Instacart (internet) not only provide a litmus test of what private companies are worth in public markets but also create currency, potentially opening the door for them and a slew of other companies to pursue future IPOs and M&A.
We at Allied Advisers are also sharing our own observations and perspectives on how to achieve a successful M&A outcome in the current environment. In the last 12 months, we advised clients on their exits to Activision Blizzard King, the world’s largest game network and a Fortune 500 company; Walmart, the Fortune One company; Dura Software, a software consolidator; PSG Equity, a top-tier PE fund ($22.1B AUM); and Virtana, a growing PE-backed company, among others.
Below is the full report from Allied Advisers:
Gaurav Bhasin is the Managing Director at Allied Advisers.

This conversation comes ahead of Cyber Security Month, sharing what information is available for our network of tech leaders and the cyber security solutions available to them.
Jonathan Tomek is a VP at Digital Element, a global IP geolocation and intelligence leader for over 20 years. He is a seasoned threat intelligence researcher with a background in network forensics, incident handling, malware analysis, and many other technology skills. Previously, Jonathan served as CEO of MadX LLC, Head of Threat Intelligence at White Ops, and Director of Threat Research at LookingGlass Cyber Solutions, Inc.
In this Q&A, Jonathan shares the challenges that many of the world’s largest websites, brands, security companies, ad networks, social media platforms and mobile publishers face, and the best practices his team uses to combat online fraud.
M.R. Rangaswami: With the rise of VPNs and residential proxy IP networks, many corporate security teams seem to struggle to see who is accessing their networks and data. How should they approach security as these trends accelerate?
Jonathan Tomek: IP address intelligence data can help security teams hone their best practices for establishing rules for who can access their network. For instance, IP address data reveals a great deal about masked traffic, such as whether it is coming from a VPN, darknet or residential IP proxy. With this knowledge, security teams can opt to block all darknet traffic automatically.
Likewise, knowing that many people use residential IP proxies to scrape websites for competitive research, security professionals can opt to block all residential IP proxies.
The important factor here is context. A company may not be concerned about VPN traffic in general, but if thousands of failed login attempts from a specific VPN over a short time period are observed, this would be indicative of an individual threat versus many unknown attacks.
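As an illustration, here is a hypothetical sketch of such a policy: block darknet and residential-proxy traffic outright, and treat a VPN as hostile only when context, such as a burst of failed logins tied to one provider, suggests a real threat. The field names, provider names, and threshold are invented for the example, not Digital Element’s actual schema.

```python
from collections import Counter

failed_logins = Counter()        # failed attempts observed per VPN provider
FAILED_LOGIN_THRESHOLD = 1000    # "thousands ... over a short time period"

def access_decision(ip_info):
    # Outright blocks: darknet and residential-proxy traffic.
    if ip_info["proxy_type"] in ("darknet", "residential_proxy"):
        return "block"
    # VPN traffic is fine in general; context turns it into a threat.
    if ip_info["proxy_type"] == "vpn":
        if failed_logins[ip_info["vpn_provider"]] > FAILED_LOGIN_THRESHOLD:
            return "block"       # one provider, many failures: likely attack
        return "allow"
    return "allow"

failed_logins["SomeVPN"] = 5000
print(access_decision({"proxy_type": "vpn", "vpn_provider": "SomeVPN"}))    # block
print(access_decision({"proxy_type": "vpn", "vpn_provider": "OtherVPN"}))   # allow
print(access_decision({"proxy_type": "residential_proxy", "vpn_provider": None}))  # block
```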
Digital Element also knows a great deal about the VPN market, including which providers offer features that enable nefarious players to hide their activities.
That insight can be used to set access policies based on the VPN provider. For instance, you may want, as a matter of policy, to block all traffic stemming from VPNs that are free, accept crypto payment, or offer no-logging as an option, as those are features that allow bad actors to cover their tracks.
Though blocking is a common theme, the context provided can be more important at times, especially after an incident, when it helps teams understand the characteristics of the threat and narrow down the area of focus.
M.R.: Requesting additional authentication is a safe, but costly, practice. How can IP address intelligence data help security teams drive efficiency in their access policies?
Jonathan Tomek: Asking for additional authentication is a good security measure, but it does require additional computing power, which isn’t free. It also affects the user experience, especially when a loyal customer signs into a system frequently.
IP address intelligence data is useful here, both in helping networks save resources, and ensuring a more seamless user experience. Such insights include IP stability, which tells us how long a specific IP address has been observed at a specific location.
If a customer signs into your network every day via the same IP address observed at the same geolocation, there may be no need to request a second authentication. But if one day that user attempts to sign in from an IP address geolocated on the other side of the country, or from a more local region but via a VPN, it would be a good idea to validate them.
IP address intelligence data can provide context to help security teams set policies that prioritize when to request additional authentication.
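A hypothetical sketch of that step-up decision might look like the following; the session store, field names, and stability threshold are illustrative assumptions rather than a documented API.

```python
KNOWN_SESSIONS = {
    # user -> region where the user's sign-ins are usually observed
    "alice": "us-east",
}
MIN_STABLE_DAYS = 30   # how long an IP must sit at one location to be trusted

def needs_second_factor(user, region, ip_stability_days, is_vpn):
    if is_vpn:
        return True                          # local, but masked: validate
    if region != KNOWN_SESSIONS.get(user):
        return True                          # other side of the country: validate
    # Familiar region: skip the second factor only for a stable IP.
    return ip_stability_days < MIN_STABLE_DAYS

print(needs_second_factor("alice", "us-east", 120, False))  # False: seamless sign-in
print(needs_second_factor("alice", "us-west", 1, False))    # True: new geolocation
print(needs_second_factor("alice", "us-east", 120, True))   # True: VPN in use
```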
M.R.: How can IP intelligence data help security teams understand how a breach occurred and minimize any damage done?
Jonathan Tomek: That’s a great question. Every security professional understands that, try as you might, it is simply impossible to prevent a breach.
The best approach is to be able to respond quickly and minimize the impact in the event of a breach. IP address intelligence is a critical addition to a security information and event management (SIEM) solution.
By leveraging IP intelligence, you have additional data points which can help reduce false positive alerts, while also refining other alerts for investigators.
The ability to cluster events is a huge timesaver. If a specific VPN was used during a breach, you could find related IP addresses and see how the attacker was attempting to gain entry to your infrastructure, helping you with the timeline.
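Here is a small sketch of that clustering idea, assuming each SIEM event has already been enriched with an IP-intelligence field naming the VPN provider; the event schema is invented for illustration.

```python
events = [
    {"ts": "09:02", "ip": "203.0.113.5",  "vpn": "MaskNet", "action": "login_fail"},
    {"ts": "09:07", "ip": "203.0.113.9",  "vpn": "MaskNet", "action": "login_ok"},
    {"ts": "09:15", "ip": "198.51.100.2", "vpn": None,      "action": "page_view"},
    {"ts": "09:20", "ip": "203.0.113.9",  "vpn": "MaskNet", "action": "export_data"},
]

def related_events(events, breach_ip):
    # Find the VPN provider behind the breach IP, then pull every event
    # from IPs sharing that provider to reconstruct the attacker timeline.
    provider = next(e["vpn"] for e in events if e["ip"] == breach_ip)
    cluster = [e for e in events if e["vpn"] == provider]
    return sorted(cluster, key=lambda e: e["ts"])

for e in related_events(events, "203.0.113.9"):
    print(e["ts"], e["ip"], e["action"])
```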
M.R. Rangaswami is the Co-Founder of Sandhill.com

Software Equity Group’s quarterly report is in, revealing that an improved outlook across the broader macroeconomy, industry excitement around AI, and overall investor optimism for growth businesses contributed to a solid first half for publicly traded B2B SaaS companies.
Meanwhile, continued strategic buyer and private equity interest has resulted in strong M&A outcomes for high-quality SaaS businesses exhibiting capital efficient growth, strong retention, and product differentiation.
Here are five highlights from the report:
1. Aggregate software industry M&A deal volume has seen strong momentum in recent quarters, reaching 897 total deals in 2Q23, up 5% from 855 deals in 1Q23
2. Deal activity for SaaS M&A remains high relative to historical periods (538 in 2Q23). Although deal volume in 2Q23 experienced a 5% decrease over the prior quarter, SaaS M&A is on pace for the second-highest annual total in the last ten years (only eclipsed by the bubble year of 2022). The month of May saw 192 M&A deals, the second-highest monthly deal volume for SaaS in ten months.
3. The average EV/TTM revenue multiple for 2Q23 was 5.6x. However, specific cohorts within SaaS continue to sell for premium multiples, and strong outcomes are being had by companies that fit the profile from a SaaS KPI standpoint (capital-efficient growth, strong retention, etc.) and offer product differentiation (a quick worked example of the multiple follows this list).
4. Vertical SaaS comprised 46% of all M&A deals in 2Q23. Financial Services jumped up to the pole position of the verticals, representing 18.9% of all SaaS deals.
5. Private equity appetite for SaaS M&A remains high as it represented the majority (61.3%) of deals in 2Q23. PE-backed strategics represented 52.4% of deals, and PE platform investments were 8.9%.
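To make the multiple in point 3 concrete, here is a quick worked example with made-up figures (not from the report):

```python
# EV/TTM revenue multiple: enterprise value divided by trailing-twelve-month
# revenue. At the 2Q23 average of 5.6x, a hypothetical SaaS business with
# $20M of TTM revenue implies a $112M enterprise value.
ttm_revenue = 20_000_000
avg_multiple = 5.6
enterprise_value = ttm_revenue * avg_multiple
print(f"Implied EV: ${enterprise_value:,.0f}")   # Implied EV: $112,000,000
```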
Download the full report from Software Equity Group here:

With trust declining among B2B buyers because of vendor over-promising, economic pressures, and shifting expectations, CEO Evan Huck and his co-founder Ray Rhoades have been evaluating the evolution of social proof in the buying journey.
Pulling from his experiences working at TechValidate and SurveyMonkey, Evan was inspired to create a company that could help businesses quickly and efficiently capture customer feedback and — leveraging the power of AI — automatically create on-brand content at scale, removing a significant source of friction from modern go-to-market teams’ sales motions.
M.R. Rangaswami: Trust is at an all-time low for B2B buyers. What’s causing this and why does it matter?
Evan Huck: B2B buyers are becoming increasingly skeptical of vendor marketing hype after repeatedly being burned by sales teams over promising and under delivering. Economic pressures have placed increased scrutiny on every tech purchase, upping the ante on the importance of making the right purchase the first time. Additionally, a recent Gallup poll found that greater access to information, lack of company focus on the customer lifecycle, and shifting expectations from a younger generation of buyers are all contributing factors to the breakdown of trust between vendors and buyers. As a result, peer recommendations and social proof are emerging as critical factors in the B2B buying journey.
Why does this matter? Vendors are no longer in control of the buyer journey, and they get less direct interaction with the prospect. Buyers expect to see relevant customer examples validated by real-world data before making large technology purchases. To rebuild trust with buyers, vendors need more than a handful of curated customer success stories – they need a library of authentic and relevant customer proof points that prove the product’s value across different use cases, company sizes, and industries.
M.R.: More than ever before, B2B buyers now look to their peers, not vendors, when making buying decisions. How is UserEvidence helping B2B software companies use customer feedback to address this new reality?
Evan: Historically it has been very difficult to gather enough reliable customer stories – seeking out these proof points is often labor intensive, laden with approvals, and costly. In the past, companies typically have created their own content in-house or leaned on an outside agency for support in collecting and creating these assets. These solutions have left companies scrambling to fill in the gaps as buyers demand more real-world examples they can connect to.
UserEvidence resolves these issues by providing one platform that all go-to-market functions can use to capture customer feedback and — through advanced generative AI capabilities — deliver unbiased customer stories and beautifully designed assets for companies to use in their sales initiatives. Long gone are the days of analyzing customer data manually; UserEvidence processes these datasets quickly so that go-to-market teams can start creating content that attracts buyers. Companies can now easily collect and create these customer stories at scale, taking control of their most valuable asset: real-world social proof.
Another benefit of the UserEvidence platform is the ability to continuously capture feedback and sentiment from users and customers, at important junctures in the customer journey. Surveys are delivered at key moments throughout the customer lifecycle, creating a continuous stream of learnings and insights that drives good decision making.
M.R.: Getting feedback from actual customers helps not only B2B buyers, but every internal function across GTM teams. How does UserEvidence plan to bridge this gap?
Evan: Every function in a B2B company — from the functions that sell a product (product marketing, sales enablement, customer marketing, and customer success), to the functions that build the product (product, product management, strategy) — should be guided by the voice of the customer and customer feedback.
The problem is each function’s efforts to capture feedback are siloed, and the learnings from each effort aren’t shared between functions. Positive stories from a product management survey never make it into the hands of a sales team. Negative feedback from a marketing team’s efforts to find users willing to do case studies never makes it to product management or customer success.
UserEvidence helps unify feedback collection efforts across functions, and helps each function take action on that feedback. Marketing can create on-brand sales and marketing assets, while product management can get insights on how to make the product experience better. Several goals are accomplished with one touch to the customer making for a more elegant customer experience.
M.R. Rangaswami is the Co-Founder of Sandhill.com

A slightly different conversation this week as we speak to Ivan Houlihan, Senior Vice President and Head of the West Coast of the United States for IDA Ireland, the investment and development agency of the Irish Government, which promotes foreign direct investment into Ireland.
Based in California, Ivan leads the team that works closely with existing and potential clients in technology, financial services, life sciences and engineering throughout the Western US and Mexico.
We hope you enjoy this week’s angle on cybersecurity, cyber skills and microcredentials.
M.R. Rangaswami: How Do Microcredentials Address the Cybersecurity Talent Scarcity Problem?
Ivan Houlihan: While nations pass resolutions and laws that try to prevent cybercrime, the most widespread answer is increasing the supply of expert security talent to stay ahead of the criminals.
Ivan Houlihan suggests an innovative approach involving microcredentials: small, accredited courses that allow candidates to pursue highly focused upskilling and reskilling in response to specific market needs. Besides creating qualified new candidates who can quickly come on board, this approach opens the door to workers who might otherwise not have pursued careers in cybersecurity.
As the head of the West Coast U.S. for IDA Ireland, Houlihan has seen an increasing number of American technology firms with operations in Ireland employ this strategy to address their cybersecurity talent crunch.
When it comes to microcredentials in cybersecurity, Houlihan believes that Ireland’s innovative training programs can become a model for other nations seeking to address the serious issue of cybercrime, which is predicted to cost the world $10.5 trillion by 2025. In this quick Q&A, he explains the basics of setting up a microcredentials program in the cybersecurity space – although microcredentials can be earned in other technical areas, too.
M.R. Rangaswami: What are some of the current issues impacting cybersecurity staffing and why are microcredential programs a reasonable solution?
Ivan: Technology workers in general are often in short supply, but for qualified cybersecurity personnel the problem is compounded by educational requirements and specific skills that take time and money to acquire. Technical degrees, specialized training and, often, some graduate work have discouraged many would-be candidates, particularly those put off by the prospect of student loans and related barriers. One of the biggest myths in the cybersecurity field is that it is only for people with high proficiency in math, for men, or for those with certain graduate degrees. People also assume they must study at renowned universities in order to pursue such careers. All these factors have conspired to shrink the pool of qualified candidates.
Microcredential programs short-circuit the time and cost of pursuing a lucrative cybersecurity career, although the field does require some technical training as a starting point. Being male or holding a graduate degree is not a prerequisite. Microcredentials bring down the cost and time commitments while increasing cybersecurity job opportunities for women, military veterans, minority groups, people from financially disadvantaged backgrounds, workers from other departments and others not often found in the profession. And because microcredential programs are typically online and short in duration, they can be "stacked," or combined, to form bigger accreditations, making it easier to get the right kind of training for a promising new career. The most successful microcredential programs demonstrate a collaborative effort between universities, governments, research institutions and industry, with the latter providing curriculum input based on what candidates need to know to hit the ground running.
M.R.: Describe the cybersecurity microcredential programs you’re aware of, how they operate and the results so far.
Ivan: Encouragingly, Ireland has been ahead of other nations in its efforts to increase the supply of cybersecurity talent. Last year, the International Information System Security Certification Consortium, or (ISC)², the world's largest IT security organization, released a report that found Ireland closed its cybersecurity skills gap by 19.5% while the global gap grew by 26.2%. Through a government grant in 2020, Ireland created Europe's first microcredential program, CyberSkills, a collaboration between national agencies, industry and three leading Irish universities, led by Donna O'Shea, Chair of Cybersecurity at Munster Technological University (MTU).
Sign-up and instruction are online. In addition to 30 carefully designed microcredentials, which learners can take as standalone pieces of learning or integrate into predesigned academic pathways, the program utilizes the "cyber range": a unique, cloud-based, securely sandboxed environment that simulates real-world scenarios where students can test their new skills.
When we spoke with O'Shea, she told us that CyberSkills has already trained hundreds of people, and the program is expanding. She believes that the simple but effective collaboration model of this program could be duplicated by other nations wishing to accelerate and expand their supply of cyber talent. The key underlying concept of CyberSkills is that the training is totally focused on graduates being able to walk into jobs immediately with the knowledge they need to be effective.
At a higher level, everyone should look at these microcredential programs as a major innovation in workforce development and lifelong learning. Being largely co-designed by industry makes them relevant and effective, while their ease of use and low cost create new avenues for skills development long into the future.
M.R. Rangaswami is the Co-Founder of Sandhill.com

Dr. Sahir Ali is a technology and healthcare leader, investor and board advisor with extensive experience in artificial intelligence, medical imaging, cancer research, enterprise technology and cloud computing. He has advised and led Fortune 500 companies, hedge funds and other organizations in implementing and integrating cloud technologies and artificial intelligence/data science.
Sahir is also the founder of Modi Ventures, a private investment firm focused on investing in venture capital funds and early-stage startups in disruptive and emerging AI and medical technology applications, so we thought his perspective on the current investing trends of healthcare AI and TechBio would be valuable.
M.R. Rangaswami: What types of tech bio and healthcare AI investments are gaining funding in the current economic climate?
Sahir Ali: Some of the most exciting breakthroughs in medicine today are happening at the nexus of biology and computer science, using tools such as artificial intelligence (AI). There are two major tech-enabled bio investment themes: therapeutic platforms (drug discovery companies based on novel platform technology) and transformative technologies (companies developing applications of breakthrough technological advances such as genomics and digital health).
M.R.: What advice do you have for emerging startups to succeed in the crowded healthcare technology market?
Sahir: Startups that focus on platform technologies that can yield multiple programs and shots on goal, rather than individual assets with binary outcomes, tend to be very attractive from an investment perspective as well as from a time-to-market and valuation standpoint. We also encourage our founders to establish high-quality partnerships across the ecosystem — true platforms produce many more assets than any individual company can develop.
The healthcare industry is slow to adopt new technology, so startups need to market their product effectively to reach the target audience, especially for digital health and consumer products.
M.R.: What areas of investing in healthcare AI are gaining the most traction in this economy?
Sahir: There is a great deal of traction (funding and support) for companies that combine AI technology to generate novel candidates and strong drug development expertise to validate and find the best potential drugs. Another key area is gene therapy, which offers the potential to cure—not just treat the symptoms of—many major diseases. Some of the most transformative technologies are major new applications of genomics. Next-generation sequencing has outpaced even the fabled Moore’s Law, as the cost and information content of sequencing has improved even faster than the cost and information content of computer chips.
Companies that incorporate next-gen sequencing into diagnostic applications can enable better clinical outcomes at radically reduced costs. When cancer is detected late, only 20% of patients survive for five years, but when detected early, 80% survive for five years. Early detection saves lives and billions of dollars per year in medical costs.
M.R. Rangaswami is the Co-Founder of Sandhill.com

For those of you who have followed M.R. and his illustrious career, you may know a little about his resume from four decades in Silicon Valley.
However, in M.R.'s interview with DataStax Chairman and CEO Chet Kapoor, the two offer stories, humor, reflections and lessons that take us beyond their LinkedIn profiles and into the minds of some of our industry's great builders.
We hope you enjoy this light-hearted conversation on your next commute.

M.R. Rangaswami is the Co-Founder of Sandhill.com (the domain he bought for $20 in 1997)

Rahul Ponnala is the co-founder and CEO of Granica — the world’s first AI efficiency platform — which is on a mission to make AI affordable, accessible and safe to use.
He previously served as Director of Storage and Integrations at Pure Storage, where he engineered and integrated large-scale databases and file storage systems powered by all-flash technology. As a governing board member of The FinOps Foundation under The Linux Foundation, he helps shape the future of cloud financial management. A multidisciplinary academic, Rahul’s research spans mathematics, information theory, machine learning and distributed systems. He holds a portfolio of patents in computational statistics and data compression.
M.R. Rangaswami: What are the hard business and/or technology problems that inspired you to found Granica?
Rahul Ponnala: Advancements in deep learning have been powered by ever-larger models processing ever-growing amounts of data. The performance output of an AI algorithm is primarily determined by the diversity and volume of data it can access. So, as AI becomes integral to products and services in nearly every domain, access to “high quality” data will become both a critical necessity and a fundamental constraint, ultimately dictating the pace and effectiveness of AI investments at enterprises.
To derive "high quality" data, enterprises must extract the maximum amount of information from their data stores and thereby maximize the value of their data, but the challenge here is two-fold. First, as data volume grows, so do the costs of managing, processing and storing it in the cloud.
Second, as the potential for insight from new data sources increases, so does the risk of misuse and mishandling. Enterprises that can contain the rising cloud costs associated with growing data stores, while ensuring the safe use of data in AI to preserve its analytical value, will develop formidable competitive moats.
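To make the first challenge concrete, here is a back-of-envelope sketch in Python. Every figure in it (footprint, growth rate, storage price, compression ratio) is an illustrative assumption of ours, not a Granica number or benchmark:

```python
# Back-of-envelope: how data growth compounds cloud storage spend, and
# what a lossless-compression layer could save. All figures below are
# illustrative assumptions, not vendor pricing or benchmarks.

STORAGE_PER_GB_MONTH = 0.023   # assumed object-storage list price, USD
data_pb = 5.0                  # assumed current footprint, petabytes
annual_growth = 0.40           # assumed 40% yearly data growth
compression_ratio = 0.5        # assumed: stored bytes roughly halved

for year in range(1, 4):
    data_pb *= 1 + annual_growth
    gb = data_pb * 1_000_000
    baseline = gb * STORAGE_PER_GB_MONTH * 12
    compressed = baseline * compression_ratio
    print(f"Year {year}: {data_pb:,.1f} PB -> "
          f"${baseline:,.0f}/yr baseline, ${compressed:,.0f}/yr compressed")
```

Even at these modest assumptions, the gap between the two curves widens every year, which is the cost half of the argument above.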
Since its inception, Granica has been developing cutting-edge and efficient solutions to allow enterprises to maximize the value of their data – our AI efficiency services are no exception. We are witnessing a Cambrian-like explosion in the pace of deployment of AI into various apps, products and services, marking a major technological shift in the future of computing. And while there has been meaningful progress on the computing infrastructure and algorithmic layers of AI and ML, there has been little progress in increasing the signal-to-noise ratio of the data fueling these algorithms.
This is a very difficult problem, involving deep information and computer science developments, combined with large-scale systems engineering – and this is precisely the problem Granica is focused on solving.
M.R.: How will your AI efficiency platform impact the future of enterprise AI/ML adoption? What is your advice to organizations that want to adopt a more efficient and productive cloud data architecture for their AI initiatives?
Rahul: Extracting the maximum amount of information from data stores is perhaps the most critical element in the long-term success (or lack thereof) of an organization's AI investments and strategy. So by delivering a platform capable of helping organizations do just that, Granica is democratizing access to AI by directly making AI more affordable, more accessible and safe to use.
By now, most organizations have grasped the importance and criticality of integrating an AI strategy into their corporate planning. In fact, this was the most popular question Wall Street analysts asked the management teams of big tech companies this past earnings cycle.
Yet, most organizations – large and small – are left hamstrung in determining where to start and how to do so in an efficient manner, while operating under a set of both economic and time constraints imposed by the market.
When speaking with customers about AI, the number one question that comes up is: "How can I get started and where should I get started?" And our answer, unsurprisingly, is: "Let's first evaluate the effectiveness and efficiency of your organization's data strategy."
By getting plugged into a customer’s environment and providing deep, informative analytics with respect to their cloud data stores and how their data is being used, we are able to provide direct visibility and insight into the inefficiencies present in that customer’s data architecture and gain a deep understanding of that customer’s data and workload characteristics.
This then allows Granica to quickly configure and tailor our platform to their environment and thus accelerate the time to value for the customer. By providing customers with efficient building blocks and tools for their data architecture and AI-powered applications, we can help them optimize their data access, storage and compute resources and thus maximize the value of their data.
M.R.: You’ve expressed that people are integral to your company. What are your values/philosophies as a leader with respect to growing successful teams?
Rahul: At Granica, our employees, or "ninja warriors" as we like to call them, are the backbone of our organization. We share successes as a team, we make mistakes as a team and we challenge each other.
This not only allows us to bring our best professional selves to the office but also to build long-term friendships and trust with one another. We want each of our employees to feel comfortable turning to one another for guidance, help and coaching, not just about "work" but also about personal circumstances.
By doing so, we leverage the collective intelligence of the whole to put everything we can into delivering exceptional experiences for our customers and inspiring one another along the way.
Everyone at Granica lives by the motto of “Whatever it Takes” and we actually have this signage up on our wall in the lobby of our headquarters. It doesn’t matter whether you’re an individual contributor or manager at Granica – we want everyone to be leaders and we want to provide the resources, mentorship and growth opportunities to allow each ninja to grow their careers to new heights.
M.R. Rangaswami is the Co-Founder of Sandhill.com

Allied Advisers has published their sector update on Automation Software which provides an overview of this important segment, recent exciting trends, the transactional market and active acquirers and investors in the ecosystem.
Automation technologies are becoming increasingly pervasive across industries, driven by the clear opportunity to achieve major improvements in productivity and process efficiency and to reduce human error. Tailwinds to this sector have been strengthened by rising costs of labor and operations in an inflationary environment, and by innovations in enabling technologies such as AI/machine learning, IIoT and cloud.
The increasing adoption of automation will necessitate further investment into developing technology skills. It is expected that many manually repetitive and low-skill jobs will be replaced by automation technologies, leading to higher unemployment in the economy. On the positive side, automation also opens up the opportunity for workers to be freed up from mundane tasks. Workers who retool and elevate their skill sets for the new world will be able to use their time more effectively and work well with machines to their benefit.
Download their full report here:
Gaurav Bhasin is the Managing Director of Allied Advisers

Aidan McArdle serves as the VP of Technology for Cirrus Data, a leader in block data mobility technology and services. Prior to joining Cirrus Data, Aidan worked at Hewlett Packard Enterprises (HPE) for 17 years, focusing on enterprise storage, servers and operating systems.
In his role at Cirrus Data, Aidan leads a global team to solve complex problems with great technology, develops global services programs, and leads all aspects of pre-sales, product development, and partner management for major initiatives. Aidan also serves as EMEA Partner Enablement Director, helping partners and customers deliver success with their software.
M.R.: What is the most important cloud trend today and what makes it so important?
Aidan McArdle: Top of mind for organizations continues to be cloud adoption, but there is also a strong focus on FinOps or, to put it simply, cost optimization, governance and control. The IT landscape has been awash with layoffs for more than a year now, and every enterprise is tightening purse strings as operating expenses (OPEX) come under increased scrutiny from those paying the public cloud bills.
When storage was largely on-premises, production environments were almost always overprovisioned. It was all capital expenditure planned well in advance, and it wasn't uncommon to see 30-40% utilization. In the cloud, the costs are monthly, and any wasted capacity hits the OPEX budget directly. Cost control and optimization have become the norm for enterprises, which are striving to find more cost-effective ways to deliver their desired level of performance, reliability, and security.
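As a rough illustration of why that shift changes behavior, consider this toy comparison. Every number in it is our own assumption for the sake of the sketch, not Cirrus Data figures:

```python
# Toy comparison of overprovisioned on-prem storage vs. pay-as-you-go
# cloud, to make the 30-40% utilization point concrete. All numbers
# are illustrative assumptions.

provisioned_tb = 100
utilization = 0.35              # 30-40% typical, per the discussion above
capex_per_tb = 300              # assumed amortized on-prem cost per TB/yr
cloud_per_tb_month = 25         # assumed cloud block-storage price

on_prem_annual = provisioned_tb * capex_per_tb
used_tb = provisioned_tb * utilization
cloud_annual = used_tb * cloud_per_tb_month * 12

print(f"On-prem pays for all {provisioned_tb} TB: ${on_prem_annual:,}/yr")
print(f"Cloud pays for the {used_tb:.0f} TB used: ${cloud_annual:,.0f}/yr")
# The flip side: in the cloud, every wasted TB shows up directly as OPEX,
# which is why cost optimization and FinOps discipline matter so much.
```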
M.R.: How is cloud computing today impacting CIOs and their enterprises?
Aidan: How to best benefit from the cloud will be (or at least should be) at the top of each CIO's goals for 2023. It's very hard to find an enterprise that has not seen fallout from the post-COVID slowdown.
The race to the cloud and the need to accelerate digital transformation have delivered many lessons in the last three years. In the rush to scale flexibly and deliver agile applications, many organizations created straightforward "lift and shift" plans, the idea being that they could take a database or application running on-premises and move it to the cloud themselves with little effort. What we've seen is that the organizations that managed to get pieces of their workloads into the cloud themselves are struggling with huge cost overruns, while others are stuck in delays trying to determine the best path forward.
With a renewed focus on optimization, control, and governance, we will see a positive impact. Costs should be controlled and likely reduced while teams gain a focus on the value of FinOps.
I've had a number of really interesting conversations with businesses about the cost of cloud, repatriation and the shift back to on-premises. We have helped some organizations repatriate their workloads as they realized that an on-premises or hybrid cloud strategy is ideal for their environment. And for others, we have found they can meet their goals without a lot of post-migration pain by analyzing their workloads and optimizing them ahead of moving to the cloud.
This focus and thought process has sparked several interesting debates at management meetings this year and, hopefully, resulted in plans to gain control over cloud spend at many enterprises.
M.R.: What else should organizations be thinking about when considering cloud best practices?
Aidan: I don't believe any organization is too small to look at FinOps and cost optimization. The fundamentals can help set down best practices for organizations of all sizes. For companies that are evaluating a cloud strategy in 2023 or 2024, I always recommend including the migration as part of the strategic planning. Migration is often an afterthought, and this leads to challenges. When accurate planning is not in place to connect people, process, time and budget to deliver the intended outcomes, you will always find problems. Conversely, when the migration is planned properly, it is generally executed faster and with minimal impact to the business.
M.R. Rangaswami is the Co-Founder of Sandhill.com

Dror Weiss, founder and CEO of Tabnine, and his team are the creators of the industry's first AI-powered assistant for developers. As a generative AI technology veteran, he is on a mission to help developers and teams create better software faster.
In this quick conversation, Dror discusses how developers can leverage generative AI today, covers how open source is advancing the generative AI movement and shares his thoughts on what's to come.
M.R. Rangaswami: How can developers take advantage of generative AI technology today? What can they expect in terms of benefit?
Dror Weiss: Software developers can leverage generative AI for code today; in fact, Tabnine has around 8M installs from the VS Code and JetBrains marketplaces. Developers will see the most immediate benefit if they are working in languages that have a large open source example set (JavaScript, Python, etc.). However, the value of generative AI for code is likely even higher for esoteric languages and unique code that are currently in the domain of enterprises.
Code completion numbers vary significantly (25-45%), but in detailed ROI studies our customers are seeing productivity uplifts in the mid-teens to low twenties, in percentage terms.
M.R.: How is open source helping to advance the generative AI movement?
Dror: At the moment, open source cannot compete on spending to build the largest of models (i.e., GPT-4), because these currently cost hundreds of millions of dollars and pull in as much data as possible.
However, we are already seeing open source evolve rapidly toward smaller models built specifically for particular use cases, such as code. We believe these specialized models are the way forward, and they have already significantly closed the gap with the largest models.
Much like Linux became the default for operating systems, we expect that open source will do the same for AI.
M.R.: What’s next for generative AI – for developers, the enterprise?
Dror: For developers, we believe generative AI for code will continue to expand into areas such as testing, chat and custom models. As for the enterprise, they are pushing for secure and controlled solutions, indicating they are all in on generative AI.
M.R. Rangaswami is the Co-Founder of Sandhill.com

We're long past being able to escape Generative AI as a weekly conversation topic. From keynotes at software company conferences to investment themes for VC/PE investors, it is everywhere.
We reached out to Pranay Ahlawat, Partner and Associate Director at Boston Consulting Group, after reading his article on the Generative AI trends that really matter. We were impressed and intrigued by how Pranay sees this topic from multiple angles, advising clients, advising investors and working as a practitioner, and wanted to share his insights with Sandhill's executive network.
Pranay's focus on enterprise software and AI at BCG helps him discern hype from reality, understand the trends that really matter and pinpoint what software companies, enterprises and investors must know about Generative AI.
M.R. Rangaswami: We have certainly been in hype cycles in the past, what is different about Generative AI and why does it matter?
Pranay Ahlawat: Foundation models and the problem of natural language conversation aren't new. Natural Language Processing, chatbot platforms and out-of-the-box text APIs from cloud vendors have been around for a decade now. Foundation models like ResNet-50 have been around since 2015. Two things are different about modern-day Generative AI.
First, modern language models or Large Language Models (LLMs) are architecturally different and have a significant performance advantage over traditional approaches like Recurrent Neural Networks and LSTMs (Long Short-Term Memory networks). You will often hear the words "transformers" and "attention," which, simply put, refer to the model's ability to remember the context of the conversation more effectively. The quality of comprehension and the ability to generate longer free-form text are unlike anything we have seen in the past.
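For readers curious what "attention" looks like mechanically, here is a minimal NumPy sketch of scaled dot-product attention, the core transformer operation. It is a toy illustration only; the sizes and random stand-in embeddings are our own:

```python
import numpy as np

# Minimal scaled dot-product attention (the core of a transformer).
# Each token's output is a weighted mix of every token in the sequence,
# which is the "remembering context" ability described above.

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # how much each token attends to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sequence
    return weights @ V                # context-aware representation per token

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8               # toy sizes
x = rng.normal(size=(seq_len, d_model))  # stand-in token embeddings
out = attention(x, x, x)              # self-attention: Q = K = V
print(out.shape)                      # (4, 8): one context-mixed vector per token
```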
Second, these models have a killer app unlike any other, one that is immediately consumable by non-technical users. We have had transformative technology breakthroughs in the past – internet, mobile, virtualization and cloud – but nothing has come close to the astonishing rise of ChatGPT, which reached a hundred million users in about two months. This tangibility has added to the hype, and despite the huge potential, a lot of the claims about Generative AI are unrealistic.
It matters because of the potential impact on society. We are a small step closer to general intelligence, and we can potentially solve problems we weren't able to solve before. It's disruptive for industries like media and education, and for capabilities like personalization. Time will tell how quickly this will happen.
M.R.: What are the three things people must know about Generative AI today?
Pranay: For me there are three underlying principles, or things you must know: (1) Generative AI is getting democratized, (2) the economics of Generative AI are a crucial vector of innovation and (3) the technology itself has limitations and risks.
First, the technology at the platform level is already democratized, and the barriers to entry keep falling. Look at the commercial players: model vendors like Cohere and Anthropic, platform vendors like Google and AWS, and other tooling and platform vendors such as IBM watsonx and Nvidia NeMo, all making it easier to build, test and deploy generative AI applications. There is real excitement in open source and community-driven innovation at every layer: frameworks like PyTorch, foundation models like Stable Diffusion and LLaMA, model aggregators like Hugging Face and libraries like LangChain. Today a developer can create a generative AI application in a matter of hours, because modern tooling abstracts away much of the complexity. We already have more than five hundred generative AI startups.
Second, winners will be those who get the economics right. These models are incredibly expensive to train, tune and run inference on. A 300B-parameter model costs anywhere from $2M to $5M in compute to train, and models like GPT-3 cost 1-5 cents per query. To build intuition: if Google ran a modern large LLM like GPT-4 for all search queries, it would see profits go down by roughly $10B. So understanding the task and architecting for the right price/performance is imperative. There is a ton of innovation and focus on cost engineering today, from semiconductors to newer model architectures and training and inferencing techniques aimed at getting this price/performance balance right.
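As a sanity check on that back-of-envelope, here is the arithmetic in Python. The daily query volume is our outside assumption (roughly 8-9 billion searches per day is a commonly cited figure); the per-query cost comes from the 1-5 cent range above:

```python
# Reproducing the back-of-envelope above. Query volume is an assumed,
# commonly cited figure; per-query inference cost is the low end of the
# 1-5 cent range mentioned in the interview.

queries_per_day = 8.5e9        # assumed daily search volume
cost_per_query = 0.01          # low end of the 1-5 cent range, USD
annual_cost = queries_per_day * cost_per_query * 365
print(f"~${annual_cost / 1e9:.0f}B per year in added inference cost")  # ~$31B
# Even at a fraction of this volume or cost, the profit impact lands in
# the ~$10B range cited above, hence the focus on price/performance.
```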
Third, there are well-documented risks that are still not fully understood. The problems of bias and hallucination are well documented, and there are also unknown cybersecurity risks, plus copyright and IP issues, that enterprises need to worry about. Lastly, these models are only as good as the data used to train them, and they make mistakes. Google Bard's infamous factual error on debut is a good reminder that AI is neither artificial nor intelligent.
M.R.: Where are we in the adoption curve of Generative AI and where do you believe this is all going?
Pranay: We are still in the early innings here. We are seeing a ton of enterprises experiment and run pilots and POCs, but almost no adoption at scale. Certain use cases, such as marketing, customer support and product development, are more ready and have out-of-the-box tooling, e.g. Jasper and GitHub Copilot. The reported performance gains vary significantly, however; many numbers, even from reputable sources, are conjecture without tangible evidence. Companies should evaluate these tools and assess impact before building business cases.
I believe adoption in the enterprise will be slower than most estimates, for many underlying reasons: lack of a strategy and clear business case, lack of talent, lack of curated data, unknown technology risks and so on. The biggest challenge is change management. According to BCG's well-known 70/20/10 framework, 70% of the investment in adopting AI at scale is tied to changing business processes, versus 20% in broader technology and only 10% in algorithms. These physics will remain the same.
We must also acknowledge that generative AI itself isn't a silver bullet and that we are at the very top of the hype cycle. Get your popcorn, the movie has just begun!
M.R. Rangaswami is the Co-Founder of Sandhill.com

AI is the conversation we can't get away from, so we're doing our best to bring you as many perspectives, experts and insights as we can into how enterprises are adapting to, incorporating and utilising its rapid advancements.
Molham Aref is CEO of RelationalAI, an organisation building intelligence into the core of the modern data stack. Over a career in AI spanning more than 15 years, he has been investigating and implementing how knowledge graphs benefit the building of intelligent data applications.
M.R.: Generally speaking, how do you see AI advancing enterprise?
Molham Aref: AI is an expansive concept that encompasses a wide range of predictive and analytical technologies. Gartner coined the term Composite AI to reflect the fact that AI in the enterprise is combining these technologies to help build intelligence into organizations’ decision making and applications. AI provides great opportunities to drive smarter and more insightful outcomes.
Using AI, organizations can improve their decision making and achieve more reliable outcomes. The emergence of large language models (LLMs) has driven AI to an inflection point that requires a combination of techniques to generate results that cannot be achieved by point solutions.
By leveraging AI, organisations can make accurate forecasts, anticipate customer behavior, and optimize resource allocation. This allows them to proactively address challenges, identify opportunities, and ultimately become more profitable.
M.R.: How are you incorporating knowledge graphs into your work with AI and the enterprise?
Molham: Knowledge graphs were pioneered by technology giants like Google early on to improve search results and LinkedIn to understand connections between people. The technology models business concepts, the relationships between them, and an organization’s operational rules.
Specifically, a knowledge graph organizes data designed to be human-readable, augmenting it with knowledge about the enterprise in a way that allows organizations to take their data, reason over it, and create inferences with the goal of making better decisions. This can be done in a variety of ways, including with graph analytics, which focuses on connections in the data.
Organizations can augment their predictive models with an understanding of the relationships that exist within their data, for example between inventory and profit. These enhanced models enable organisations to arrive at decisions that make them more effective, more competitive, and more successful.
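As a toy illustration of reasoning over a knowledge graph, here is a hypothetical sketch (the entities, relations and rule are all invented for this example) showing how a new relationship can be derived from existing ones:

```python
# Hypothetical knowledge-graph sketch: facts are (subject, relation,
# object) triples, and a simple rule infers a new business relationship
# from existing ones. All names and relations here are invented.

facts = {
    ("WidgetCo", "supplies", "AcmeRetail"),
    ("AcmeRetail", "sells_in", "EMEA"),
    ("WidgetCo", "located_in", "PortCity"),
}

# Rule: if X supplies Y, and Y sells in region R, then X has exposure in R.
derived = {
    (x, "exposure_in", r)
    for (x, rel1, y) in facts if rel1 == "supplies"
    for (y2, rel2, r) in facts if rel2 == "sells_in" and y2 == y
}
print(derived)   # {('WidgetCo', 'exposure_in', 'EMEA')}
```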
Knowledge graphs are proving to be one more tool in the toolbox that will significantly advance the enterprise.
M.R.: What do you see the future benefits being for organisations who build intelligent data applications?
Molham: Imagine a world where applications seamlessly adapt to your data, driven by intelligent capabilities. Where your applications can take action on your behalf, notify you to make important decisions, and dynamically make recommendations in response to sudden changes.
Once organizations understand the potential impact of AI, they start to embrace technologies like knowledge graphs and data clouds. And with the modern AI stack complete, they can start building applications that let them automate workloads.
With intelligent applications making the easy decisions, humans are freed up to work on the things that are more interesting and complex. Intelligent applications take the drudgery and tedium out of business operations, so that experts can focus more of their time and energy on decisions and tasks that will have a bigger impact, are harder to make, or require more human ingenuity than can be codified in software.
M.R. Rangaswami is the Co-Founder of Sandhill.com

Dr. Alan Baratz's career picked up momentum when he became the first president of JavaSoft at Sun Microsystems. He oversaw the growth and adoption of the Java platform from its infancy to a robust platform supporting mission-critical applications in nearly 80 percent of Fortune 1000 companies. It was that vast experience, among many, that brightly lit the path for Alan's next role with D-Wave.
First, as D-Wave's Executive Vice President of R&D, Alan was the driving force behind the development, delivery, and support of all of D-Wave's products, technologies, and applications. Now, having spent the last three years as D-Wave's CEO, Alan is hitting a new stride and taking his organization to the next level.
M.R. Rangaswami: Can you provide an overview of D-Wave's technology and the state of the quantum computing market today?
Dr. Alan Baratz: It's an incredibly exciting time in the quantum computing market, as we're starting to see companies and governments around the world increasing both interest and investment in the technology. In fact, a study from Hyperion Research found that more than 80% of responding companies plan to increase quantum commitments in the next 2-3 years, and one-third of those will spend more than $15 million annually on quantum computing efforts.
The accelerated adoption of quantum computing comes at a time when businesses are facing difficult economic headwinds and are looking for solutions that help reduce costs, drive revenue and fuel operational effectiveness. Quantum’s power and potential to tackle computationally complex problems make it an important part of any modern enterprise’s tech stack.
And the market potential is significant. According to Boston Consulting Group, quantum computing will create a total addressable market (TAM) of $450 billion to $850 billion in the next 15 to 30 years, reaching up to $5 billion in the next three to five years. Many problems, especially those relating to optimization, can be solved with today's systems.
There are two primary approaches to quantum computing – quantum annealing and gate model. While you may have heard that quantum computing won't be ready for years, that longer timeline refers only to gate-model systems.
The reality is that practical quantum solutions, those that use quantum annealing systems, are already in market now, helping organizations solve some of their biggest challenges.
D-Wave customers are using our Leap™ quantum cloud service to gain real-time access to our quantum computers and hybrid solvers to tackle some of their most complex optimization problems. We offer a full-stack quantum solution – hardware, software and professional services – to give customers support throughout their quantum journey. And given our QCaaS (quantum computing-as-a-service) approach, we make it very easy for enterprises to incorporate the technology into their compute infrastructure.
M.R.: What are some examples of commercial applications you’re seeing?
Alan: Optimization is an enterprise-wide challenge that businesses of all kinds face – whether they’re in financial services, manufacturing, logistics, life sciences, retail or more. Many common yet computationally challenging problems like employee scheduling, offer allocation, e-commerce delivery, cargo logistics, and supply chain distribution can all be represented as optimization problems, and thus solved by today’s quantum annealing technology. These problems are made more difficult by the vast amount of data generated daily, which can quickly translate into critical pain points that impact a business’ bottom line.
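To show what "represented as optimization problems" means in practice, here is a toy sketch of the QUBO (quadratic unconstrained binary optimization) form that annealers minimize. The scheduling scenario and all numbers are invented, and brute force stands in for the annealer; real workloads would use far larger problems and hybrid solvers:

```python
from itertools import product

# Toy illustration: a scheduling-style choice encoded as a QUBO, the
# form quantum annealers minimize. Scenario: pick exactly 2 of 4 workers
# for a shift, preferring lower-cost workers. All numbers are invented,
# and brute-force enumeration stands in for the annealer.

costs = [3.0, 1.0, 2.0, 2.5]   # assumed per-worker cost
P = 10.0                       # penalty weight for violating "exactly 2"

def energy(x):
    linear = sum(c * xi for c, xi in zip(costs, x))
    constraint = P * (sum(x) - 2) ** 2   # penalize picking != 2 workers
    return linear + constraint

best = min(product([0, 1], repeat=4), key=energy)
print(best, energy(best))      # (0, 1, 1, 0): the two cheapest workers
```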
We're seeing organizations increasingly turn to quantum-hybrid applications to address these optimization challenges. For example, the nation's largest facility for handling shipborne cargo used D-Wave technology to optimize port operations, resulting in a 60% increase in crane deliveries and a 12% reduction in turnaround time for trucks.
A major credit card provider is using quantum-hybrid applications to optimize offer allocations for its customer loyalty and rewards program to increase cardholder satisfaction while maximizing campaign ROIs. And a defense company created a quantum-hybrid application for missile defense that was able to consider 67 million different scenarios to find a solution in approximately 13 seconds.
The commercial value is apparent, and if you’re not currently exploring quantum in your enterprise, I believe you’re already behind.
M.R.: What’s next for quantum computing?
Alan: The pace of innovation and progress in quantum computing is remarkable. From a commercial exploration and adoption perspective, I believe we’re going to see a major uptick in the near term, as more organizations recognize the technology’s potential and increase investments. Quantum has moved out of the lab and into the boardroom.
It’s no longer just relegated to the R&D teams to play with, but rather has captured the attention of business decisionmakers faced with increasingly challenging and complex problems that require faster time-to-solution. With the increased adoption will come rapid development of proofs-of-concept and ultimately production applications that will help streamline daily enterprise operations.
From a scientific view, I expect major developments on the horizon as quantum annealing technology further scales and reaches even higher qubit counts and coherence times. Gate-model development will continue to progress, as the industry hopes to eventually find a path toward low-noise systems that can actually solve problems. Lastly, we all will continue our efforts to demonstrate quantum’s advantage over classical compute for intractable problems.
We're already seeing positive signs at D-Wave, as recent research findings contribute to a growing body of research that may lead us to the first practical quantum supremacy result.
M.R. Rangaswami is the Co-Founder of Sandhill.com

On a recent tour of healthcare organizations across the nation, Riddhiman Das began closely evaluating how different organizations are securing their data and, even more important, securely accessing and sharing it.
From developing new drugs and medical devices to allocating scarce resources amidst supply chain issues, most advancement in healthcare hinges on having access to the right data. Moreover, some of the most sensitive and highly regulated data requires technology solutions that take all of that into account to solve this complex challenge.
Riddhiman recognizes how traditional solutions have tried to tackle these data problems, and he has ideas on how the next wave of innovation can allow the healthcare industry to gain insights from health data while maintaining privacy.
M.R. Rangaswami: Data is arguably the most critical driver of innovation in healthcare today. What trends is this driving, and what are some key stats on the amount of data in healthcare?
Riddhiman Das: I believe that data is the most critical driver of innovation in healthcare, but there are limitations because the data is sensitive and, as a result, regulated. Everything in healthcare hinges on having access to the right data, from developing new drugs and medical devices to allocating scarce resources amidst supply chain issues.
It's no secret that continuous access to raw health data is invaluable. And recent advances in analytics, machine learning, and artificial intelligence have brought us to a tipping point where healthcare can no longer ignore the value of that access.
Consider this: privacy and compliance concerns have trapped two zettabytes of data in silos and removed $500B in value creation for healthcare organizations.
M.R.: If we know healthcare has a data problem, how have we traditionally been trying to tackle it?
Riddhiman: Historically, organizations have tried to get around limited access to data by using synthetic, abstracted, or pre-anonymized datasets, but that strategy just doesn’t cut it. The method tends to be expensive and can result in flawed insights if the data contains errors or is missing a key element – that doesn’t really benefit anyone.
We need access to data to drive the next wave of innovation—people’s health and well-being depend on it. We can only achieve this if the data is kept private to maintain patient privacy and the intellectual property rights of healthcare companies and their industry partners.
Over the years, initiatives have emerged to address this. Everyone has heard of HIPAA, which was enacted to protect patients’ health information from disclosure without their consent or knowledge. It also features standards designed to improve efficiency in the healthcare industry. The less-talked-about Sentinel Initiative was created to monitor the safety of medical products via direct access to patients’ electronic health records. Despite legislation and initiatives to help with this problem, the challenge remains and will only become more amplified as health data grows in volume and complexity.
Organizations have been shooting themselves in the foot by relying on manually de-identifying, abstracting, or normalizing data to get the insights they need. It’s nearly impossible to obtain meaningful, accurate, real-time insights from health data in this manner. This outdated method is hardware dependent, poses potential risks for re-identification, offers only partial security, and generally only works on structured or specific types of data.
M.R.: What are some fresh solutions to data and data privacy in healthcare you have seen?
Riddhiman: We’ve seen quite a few technology solutions developed in recent years that tackle this issue in a way that allows healthcare organizations the ability to gain insights from data and maintain privacy beyond what regulations require.
Privacy-enhancing technologies (PETs) were specifically designed to make gleaning insights from health data scalable, accurate, and secure: a true win-win. One PET we’re truly excited about? Federated analytics.
Federated analytics improves upon prior PETs and keeps health data safe in three ways. First, the data is secured at its point of residence so that external parties cannot access it in any meaningful way. Second, the data is kept secure as parties collaborate to decrease the risk of interception. Finally, the data is secured during computation, reducing the risk of sensitive information extraction. Organizations can also track how the data is used to ensure it is only leveraged for its intended purpose.
Federated analytics software lowers the risks associated with sharing health data by eliminating decryption and movement of raw data, while allowing privacy-intact computations to occur. Additionally, technology improvements driven by federated analytics minimize the computational load necessary to analyze data, which reduces hardware dependency and increases scalability.
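A minimal sketch of the underlying pattern, with invented data: each silo shares only a local aggregate, never raw records. Production federated-analytics platforms layer encryption and secure computation on top of this idea; this sketch shows only the data-stays-put principle:

```python
# Minimal federated-analytics sketch: each hospital computes a local
# aggregate, and only those aggregates (never raw patient records) are
# combined. All values are invented; real platforms add encryption and
# secure computation on top of this pattern.

silo_a = [120, 140, 135]           # e.g. local blood-pressure readings
silo_b = [150, 110, 125, 130]      # raw values never leave their silo

def local_summary(values):
    return sum(values), len(values)    # only a sum and a count are shared

total, count = 0, 0
for s, n in (local_summary(silo_a), local_summary(silo_b)):
    total, count = total + s, count + n

print(f"Federated mean: {total / count:.1f}")   # no raw record was moved
```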
Other benefits include access to raw data beyond just structured data, including video, images, and voice data; more secure internal (across regulatory boundaries) collaboration and external (between organizations) collaboration; and a lower chance of non-compliance due to simplified, more cohesive contracting processes.
Federated analytics is driving healthcare towards the future. By safely scaling access to raw health data, organizations can optimize processes for clinical trials, develop and deploy groundbreaking AI algorithms, and bolster pharmacovigilance. Thanks to the development of federated analytics solutions, there is no longer a need to choose between gaining powerful insights that will shape the future of healthcare and keeping patient data private.
M.R. Rangaswami is the Co-Founder of Sandhill.com

Does password authentication really work anymore?
Descope Co-Founder and CEO Slavik Markovich has been watching the mounting problems with traditional password authentication, such as user difficulties and security vulnerabilities, for years.
As a solution, Descope is developing sound passwordless methods, such as magic links, one-time passwords, social login, authenticator apps, and biometric authentication, that are gaining traction due to the rise of open standards and support from major companies like Google, Apple, Microsoft, and Shopify.
In this conversation, Slavik gets straight into the user experience and the solutions that we are seeing work.
M.R. Rangaswami: Why is passwordless authentication picking up steam?
Slavik: Passwords have long been a source of security vulnerabilities and breaches. They also cause friction throughout the user journey, leading to churn and a negative user experience. No one wants the cognitive load of remembering unique 16-character passwords for every site or app they access, so they reuse passwords across sites, which is a recipe for disaster when passwords get leaked.
Passwordless methods such as magic links, social login, and authenticator apps have been around for a while. Notable apps like Medium and Slack already use passwordless login, while authenticator apps are used as a common second factor in MFA.
However, the rise of open standards and mechanisms such as FIDO2, WebAuthn, and passkeys over the past few years has sent passwordless adoption into overdrive. There are a few reasons at play here:
- Passkeys are based on biometrics, which users are familiar with since they already use fingerprint scanning and facial recognition to unlock their phone or other computing devices.
- Passkeys are being adopted by Internet heavyweights such as Google, Apple, Microsoft, and Shopify, who are also taking steps to educate users about the benefits of these methods.
M.R.: What are some examples of passwordless authentication techniques?
Slavik: Passwordless methods verify users through a combination of possession (what they have) and inherence (who they are) factors. These factors are typically harder to spoof and are more reliable indicators of a user’s identity than knowledge factors are.
These examples include:
- Magic links, which are URLs with embedded tokens that – when clicked – enable users to log in without needing a password. These links are mostly delivered to the user’s email account, but can also be sent via SMS and other messaging services like WhatsApp.
- One-time passwords / passcodes, which are dynamically generated sets of numbers or letters meant to grant users one-time access to an application. Unlike passwords, an OTP is not static and changes every time the user attempts login.
- Social login, which authenticates users based on pre-established trust with an identity provider such as Google, Facebook, or GitHub. Using social login precludes users from creating another set of credentials – they can instead focus on strengthening the passwords they already have on their identity provider account.
- Authenticator apps, which operate based on time-based one-time passwords (TOTP). A TOTP code is generated with an algorithm that uses a shared secret and the current time as inputs, meaning the code changes at set intervals, usually between 30 and 90 seconds (a minimal sketch of this algorithm follows this list).
- Biometric authentication, which checks physical or behavioral traits that are unique to an individual in order to grant users access to applications. Popular biometric techniques in use today include fingerprint scanning and facial recognition. Biometrics are also used in passkey authentication, which I covered in the previous answer.
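Here is the minimal TOTP sketch promised above, following the RFC 6238 recipe using only the Python standard library. The secret is a demo value, and a production implementation would add validation and clock-drift handling that this sketch omits:

```python
import base64, hmac, struct, time

# Sketch of the TOTP scheme (RFC 6238) behind authenticator apps: the
# shared secret plus the current 30-second time window deterministically
# produce the 6-digit code, so server and app agree without exchanging
# any message at login time.

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval           # current time window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # demo secret; output changes every 30 seconds
```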
M.R.: How do you see this technology evolving over the next several years?
Slavik: I see the evolution of passwordless technologies mostly focusing on education and compatibility in the years to come. The key pillars will be:
- User education: Companies and the industry at large need to continue educating end users about the benefits of passwordless methods and the pitfalls of passwords. There are still common myths about passwordless methods like biometrics (e.g. what if someone steals my biometrics?) that need to be addressed (e.g. your biometrics never leave your device).
- Developer education: Standards and protocols such as OAuth, SAML, WebAuthn, and others that form the basis of authentication mechanisms are complex. It takes developers time to pore over these protocols and implement authentication in their apps. Developers need to be provided with tools and enablement that abstract away the complexity of these protocols and let them add passwordless methods to their apps without lots of added work.
- Compatibility: Passkeys compatibility is a work in progress. Over the coming months and years, more apps, browsers, and operating systems need to support passkeys if a passwordless future is to become reality.
All three points above are interrelated. If user education and developer enablement continue improving, more entities will be incentivized to add passwordless support, and vice versa.
M.R. Rangaswami is the Co-Founder of Sandhill.com

Horizontal SaaS vs. Vertical SaaS – Which flavor of SaaS do you prefer?
Allied Advisers has updated their previously published Flavors of SaaS report, which includes an analysis across a select group of companies comparing operational metrics across the two flavors of SaaS: Horizontal SaaS and Vertical SaaS.
As advisors who have worked across both flavors, they’re sharing some interesting differences.
While Horizontal SaaS companies generally have a larger TAM, Vertical SaaS companies can be more capital efficient and have better operational metrics, making them better suited for middle-market funds.
While there are category leaders in Horizontal SaaS, there are also a lot of opportunities in building Vertical SaaS companies which can become leaders in their own sectors. In today’s environment where capital efficient growth is being keenly measured, Vertical SaaS companies offer compelling opportunities for investors and buyers.
FOUR HIGHLIGHTS FROM THE REPORT:
I. Many SaaS firms focus on Vertical SaaS models to target a specific niche, allowing them to better serve industry-specific client demands and making them easier to market.
II. Vertical SaaS has seen rapid growth of businesses with smaller but more focused TAM (as compared with Horizontal SaaS) and generally more capital efficient business models.
III. The market downturn in 2022 and Covid impacted some Vertical SaaS markets but overall digital transformation continued to accelerate within industries, with standardized solutions not being sufficient to address vertical needs.
IV. We see continued investor interest in Vertical SaaS due to high growth prospects supported by strong business fundamentals, along with generally better performance on multiple metrics than peer Horizontal SaaS companies.
For the full Allied Advisers report, see below:
Gaurav Bhasin is the Managing Director of Allied Advisers

With a new book on the market, Business as UNusual with SAP, we have been looking forward to talking with Vinnie Mirchandani and the two SAP senior VPs behind it about the megatrends they're seeing ripple powerfully across the industry.
Co-author Peter Maier, SVP of Strategic Customer Engagements at SAP, was great to speak with, elaborating on how megatrends are changing competitive playing fields and shaping best business practices.
M.R. Rangaswami: What was the motivation for you and your co-author, Thomas Saueressig, to write the book, Business as UNusual with SAP?

Peter Maier: In our customer conversations, Thomas and I experience every day how megatrends are driving the business and technology agendas of our customers. We found it worthwhile to share their voice and perspective on how leaders successfully navigate industry megatrends using the capabilities of our intelligent suite and our intelligent technologies.
There are a few simple but deep principles that drive SAP’s product and innovation strategy for our customers in their industries: we focus on our customers’ core business, because that’s where they drive revenue, competitive differentiation and strategic transformation of business models and business operations.
Then we look at end-to-end processes that run along value chains and across industry and company boundaries (that’s why digital business networks are so important). And we use a business filter when we look at new digital technologies: which have the potential to transform our customers’ business?
Artificial intelligence is a great example here: we believe there is huge business potential, but realizing this potential requires integrated end-to-end industry processes. So each megatrend can transform the business of our customers in their industries, and digital technologies are key enablers.
M.R.: In your opinion, what makes this period of time "unusual"?
Peter: All consultants have been claiming for decades that the ongoing change requires customers to adjust their strategies and operations. However, the last three years have shown us how fundamentally and quickly our world can change and how important the ability to rapidly adapt to change has become. Multi-year corporate programs have been compressed into quarters, months, and weeks. Fundamental beliefs have gone out of the window. And we perceive a new open-mindedness of many leaders to try new things – to embrace the idea to run a “business as unusual”. So we think it makes sense to use this momentum and start customer engagements to discuss how megatrends can inspire new ways of doing new things.
Many people feel threatened by change. If you look into the root cause for this reaction, you’ll find that change is stressful if it outpaces your ability to adjust or even take advantage of it. This is a very good reason to build and run an organization so that it can easily (or at least better than their peers) cope with disruptive change. And this change comes from all directions, just look at the drivers like generative AI, sustainability, virtual reality, metaverse, geopolitical conflicts, or pandemics. “Prepping” for all eventualities is certainly not the answer, but building and running an intelligent, sustainable, resilient, and agile enterprise certainly is. And many companies and institutions look at SAP to find solutions for this transformation.
M.R.: What are the most opportunistic and problematic trends that the book covers?
Peter: We believe that every single megatrend we are discussing holds threats and promises, depending on the reader’s attitude to running a “business as unusual.” Moving from selling products to providing and monetizing the outcome of using the product (“Everything as a service”) can be viewed as a problem for a business – or it can be treated as a great opportunity to create and expand new revenue streams, develop new business models, and establish fresh customer relationships.
Moving to a “circular economy” drives change in product design, supply chain, procurement practices, and product-end-of-life management in many industries. Whether this change is a reason for optimism or pessimism depends on whether this change is viewed as an opportunity or threat. And you will find the same duality in every single megatrend.
Over the course of our research and the discussions with customers, partners, and SAP experts the opportunity/threat balance clearly shifted from seeing problems and challenges to appreciating the potential for innovation and new business relationships. And of course, we are very happy and pleased that our SAP solutions will play key roles in tackling the challenges and capturing the promised value from transforming business processes and business models.
There are many digital technology trends – most prominently artificial intelligence – which we don’t feature in Business as UNusual with SAP as megatrends.
Business as UNusual with SAP focuses on business megatrends and how they shape and change competitive playing fields and best business practices, or how they transform end-to-end business processes along value chains and across industry boundaries.
Technology has always influenced, accelerated, and sometimes triggered business megatrends, and you will find that digital and other technologies and their impact are discussed in the context of each megatrend, from Lifelong Health to New Customer Pathways and from Integrated Mobility to the Future of Capital and Risk.
M.R. Rangaswami is the Co-Founder of Sandhill.com

With 26+ patents in parallel data management and optimization, TigerGraph’s founder and CEO, Dr. Yu Xu, has extraordinary expertise in big data and database systems.
Having worked on Twitter’s data infrastructure for massive data analytics and led Teradata’s big data initiatives as a Hadoop architect, not only does Yu have an impressive resume, but his ability to explain detailed concepts in a simplified way made for easy conversation.
M.R. Rangaswami: Graph databases are gaining momentum as more organizations adopt the technology to achieve deeper business insights. What exactly is a graph database?
Yu Xu: The world is more hyper-connected than ever before, and the ability to tap into the power of rich, growing networks – whether that be financial transactions, social media networks, recommendation engines, or global supply chains – will make or break the bottom line of an organization. Given the importance of connections in the modern business environment, it’s critical for database technology to keep up.
Legacy databases (known as relational or RDBMS) were built for well-mapped, stable and predictable processes like finance and accounting. These databases use rigid rows, columns and tables that don’t require frequent modifications, but are costly and time-consuming when adjustments need to be made.
The graph database model is built to store and retrieve connections from the ground up. It’s more flexible, scalable and agile than RDBMS, and is the optimal data model for applications that harness artificial intelligence and machine learning.
A graph database stores two kinds of data: entities (vertices) and the relationships between them (edges). This network of interconnected vertices and edges is called a graph. Graph database software stores all the records of these interconnected vertices, attributes, and edges so they can be harnessed by various software applications. AI and ML applications thrive on connected data, and that’s exactly what graph technology delivers.
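To make the model concrete, here is a minimal, self-contained Python sketch of a property graph – entities stored as vertices with attributes, relationships stored as typed edges, and a two-hop query that follows connections the way a graph engine would. This is purely illustrative (it is not TigerGraph’s API, and the entity names are invented):

```python
# Minimal property-graph sketch: vertices carry attributes, edges carry a type.
from collections import defaultdict

class Graph:
    def __init__(self):
        self.vertices = {}               # vertex id -> attribute dict
        self.edges = defaultdict(list)   # vertex id -> list of (edge_type, neighbor id)

    def add_vertex(self, vid, **attrs):
        self.vertices[vid] = attrs

    def add_edge(self, src, edge_type, dst):
        self.edges[src].append((edge_type, dst))

    def neighbors(self, vid, edge_type=None):
        return [dst for etype, dst in self.edges[vid]
                if edge_type is None or etype == edge_type]

g = Graph()
g.add_vertex("alice", kind="customer")
g.add_vertex("acct1", kind="account")
g.add_vertex("acct2", kind="account")
g.add_edge("alice", "OWNS", "acct1")
g.add_edge("acct1", "TRANSFERRED_TO", "acct2")

# Two-hop query: which accounts received money from accounts Alice owns?
for acct in g.neighbors("alice", "OWNS"):
    for dst in g.neighbors(acct, "TRANSFERRED_TO"):
        print(f"alice -> {acct} -> {dst}")
```

A relational database would need joins to answer the same question; in the graph model, the connections themselves are first-class data.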
M.R.: What’s the difference between native and non-native graph databases?
Yu: As graph technology grows in popularity, more database vendors offer “graph” capabilities alongside their existing data models. The trouble with these graph add-on offerings is that they’re not optimized to store and query the connections between data entities. If an application frequently needs to store and query data relationships, it needs a native graph database.
The key difference between native and non-native graph technology is what it’s created for. A native graph database uses something called index-free adjacency to physically point between connected vertices to ensure connected data queries are highly performant. Essentially, if a database model is specifically engineered to store and query connected data, it’s a native graph database. If the database was first engineered for a different data model and added “graph” capabilities later, then it’s a non-native graph database. Non-native graph data storage is often slower because all of the relationships in the graph have to be translated into a different data model for every graph query.
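A hedged illustration of that difference, with invented data: in a native layout, each vertex record references its neighbors directly (index-free adjacency), so a hop is a constant-time list access, while a non-native layout keeps relationships in a separate table that must be filtered or index-probed on every hop.

```python
# Native layout: each vertex points directly at its neighbors.
native = {"a": ["b", "c"], "b": ["c"], "c": []}

# Non-native layout: relationships live in a flat, relational-style table.
edge_table = [("a", "b"), ("a", "c"), ("b", "c")]

def hop_native(v):
    return native[v]  # direct reference chase

def hop_relational(v):
    return [dst for src, dst in edge_table if src == v]  # scan/probe per hop

assert hop_native("a") == hop_relational("a") == ["b", "c"]
```

Both return the same answer; the difference is that the relational version pays a lookup cost on every hop, which compounds quickly in multi-hop queries.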
M.R.: What are some ways that businesses are leveraging graph databases?
Yu: The use cases for graph technology are vast, diverse, and growing. If an application frequently queries and harnesses the relationships between users, products, locations, or any other entities, it will benefit from a native graph database. The same is true if a use case leverages network effects or requires multiple-hop queries across data.
Some of the most popular use cases for graph include fraud detection, recommendation engines, supply chain management, cybersecurity, anti-money laundering, and customer 360, just to name a few. If your enterprise relies on graph analytics or graph data science, then it needs a native graph database to ensure real-time performance for mission-critical applications.
M.R. Rangaswami is the Co-Founder of Sandhill.com

Ayal Yogev is the co-founder and CEO of Anjuna, the leading multi-cloud confidential computing platform. Ayal firmly believes that the best security solutions are enablers – they open up new opportunities that wouldn’t exist without a heightened level of security and trust. To achieve this, the industry needs a new way of thinking, building, and delivering applications that keeps enterprises in the driver’s seat and keeps their data protected at all times.
Ayal is passionate about giving companies the freedom to run applications anywhere in the world with complete data security and privacy. That’s why he co-founded Anjuna.
With over two decades of experience in the enterprise security space, Ayal shares his thoughts on how confidential computing will impact the cybersecurity landscape. He explains how confidential computing will be the antidote to today’s patchwork of ineffective security solutions, and how it’s poised to make security an enabler of innovation rather than an inhibitor.
M.R. Rangaswami: Can you explain what confidential computing is and why it’s now seeing increased momentum?
Ayal Yogev: The majority of today’s cybersecurity solutions focus on detecting a breach once it’s already happened, then dealing with the repercussions. However, this approach leaves applications and data extremely vulnerable. Confidential computing addresses this vulnerability by processing data inside a hardware-isolated secure enclave, which ensures that data and code are protected during processing. Even in the event of a breach, applications running in confidential computing environments are invisible to attackers and therefore tamper-proof.
Confidential computing has seen rapidly growing support from cloud service providers and hardware manufacturers such as Intel, AMD, Nvidia, and Arm because of its massive, positive impacts on data security. However, it’s largely flown under the radar because of the engineering feat required to re-architect workloads to take advantage of it. Prior to Anjuna, it would take significant developer effort to re-code an application to work in just one of the clouds and then you’d have to repeat the work for each cloud you wanted to use. This is a daunting idea for many enterprises and a big reason why adoption has been slow. But this is changing.
Similar to what VMware did for server virtualization, Anjuna provides a new specialized software layer that allows enterprises to take advantage of the new hardware capabilities without the need to recode. Anjuna abstracts the complexity of confidential computing CPUs and democratizes access to this powerful technology, which will redefine security and the cloud.
M.R.: Which industries and companies are adopting this technology and what are the impacts they’ve seen?
Ayal: According to IDC, less than half of enterprise workloads have moved to the cloud. Regulated verticals like financial services are only 20% of the way into their cloud journeys, meaning that 80% of workloads remain on-premises. Although running applications on-premises is less scalable, more complex, and typically more expensive than running them in the cloud, security concerns prevent CIOs from moving, because in the cloud, data security and privacy become a shared responsibility between you and your cloud service provider. Confidential computing finally solves this fundamental issue by isolating code and data from anyone with access to your infrastructure.
The value of confidential computing is broadly applicable and I expect that a few years from now confidential computing will be how all enterprise workloads run. In the short term, we see most security-conscious and heavily regulated organizations as the early adopters. Anjuna, for example, works with companies in financial services, government, blockchain, and other highly sensitive industries.
M.R.: When can we expect to see this technology impact our daily lives? What will this look like?
Ayal: Confidential computing is already present in our everyday lives – we use it to protect our phones, credit cards, and more. It is now moving to the server side, and in the future it will extend to the edge, creating a world of borderless computing.
Adoption of confidential computing is at an inflection point. The ecosystem of manufacturers and cloud service providers has already moved: Intel, AMD, Arm, Nvidia, AWS, GCP, Azure, Oracle, and IBM have already shipped, or are about to ship, confidential-computing-enabled hardware and cloud services. What has been missing is the software stack that democratizes access to these powerful new capabilities, making them easy to use for all applications without modifications.
I expect that over time, confidential computing will become the de-facto standard for how we run applications. The impact on our daily life will be huge. With ensured data security and privacy, organizations will not only be able to move more applications to the cloud, but also safely adopt emerging technologies like blockchain or AI. Moreover, entire new use cases like cross-organization data sharing and analytics will now be possible with incredible benefits in a wide range of industries like healthcare, financial services, media, and advertising.
M.R. Rangaswami is the Co-Founder of Sandhill.com

Meet Jozef de Vries, the mastermind behind the cutting-edge product development at EnterpriseDB (EDB), a pioneering company revolutionizing Postgres in the enterprise domain.
Before joining EDB, Jozef spent over 15 years in various positions at IBM, including building the IBM Cloud Database development organization from the ground up.
In this quick Q&A, Jozef shares how enterprises can leverage Postgres to cater to their database needs and how this open-source platform is shaking up the market.
M.R. Rangaswami: In your opinion, how will Postgres disrupt the open-source database market?
Jozef de Vries: Postgres already has disrupted the database market. The only question that remains is how quickly Postgres will take a majority share of the enterprise database market. EDB is exclusively focused on accelerating the adoption of Postgres as the database standard in the enterprise.
Taken together, Stack Overflow’s developer surveys rank Postgres as the most loved, most used, and most wanted database in the world, and its growth has been exponential in 2022 and beyond.
Postgres is the fastest-growing database management system in what Gartner views as an approximately $80 billion market. EDB customers such as MasterCard, Nielsen, Siemens, Sony, Ericsson and others have made Postgres their database standard.
EDB builds Postgres alongside a vibrant community, disrupting the market with greater scalability and cost savings compared to any other system. With more contributors to Postgres than any other company, EDB delivers unparalleled expertise and power to enterprises looking to adopt Postgres as their database standard.
M.R.: How does Postgres (as an open-source object-relational database system) function?
Jozef: Postgres addresses a wider range of modern applications than any other database today. This means that enterprises running on Postgres can fundamentally transform their economics and build better applications with greater performance, scalability, and security.
When Postgres was designed at the University of California, Berkeley more than 30 years ago, its designers made sure that the underlying data model was inherently extensible. At the time, databases could only use very simple data types, like numbers, strings, and dates. Michael Stonebraker, one of EDB’s distinguished advisors and strategists, and his team made a fundamental design decision: make it easy to add new data types, and their associated operations, to Postgres.
For example, PostGIS is an extension of Postgres that makes it easy to work with geographic data elements – polygons, routes, etc. That alone has made Postgres one of the preferred solutions for mapping systems. Other well-known extensions support document stores (JSON) and key-value pairs (HSTORE).
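As a hedged illustration of that extensibility, the following Python sketch – assuming a local Postgres instance and the psycopg2 driver, with a placeholder connection string and an invented table – stores document-style JSONB and key-value HSTORE data side by side and queries inside the document without any schema change:

```python
# Sketch only: connection details and the table are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=demo user=demo")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS hstore;")   # enable key/value type
cur.execute("""
    CREATE TABLE IF NOT EXISTS products (
        id       serial PRIMARY KEY,
        details  jsonb,    -- document-store style data
        metadata hstore    -- key/value pairs
    );
""")
cur.execute(
    "INSERT INTO products (details, metadata) VALUES (%s::jsonb, %s::hstore);",
    ('{"name": "widget", "tags": ["new"]}', 'color => red'),
)

# JSONB operators let you query inside the document directly.
cur.execute("SELECT details->>'name' FROM products WHERE details ? 'tags';")
print(cur.fetchall())
conn.commit()
```

The same table mixes relational columns with document and key-value types – exactly the kind of flexibility the extensible data model enables.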
This extensible data model, together with the ability to run on every cloud, enables Postgres developers to be enormously productive and innovative.
Alongside a robust independent open-source community, we have made Postgres an extraordinary database, superior to legacy proprietary databases and more universally applicable for developers than specialty databases.
Open source mandates, flexible deployment options, risk mitigation and strong security will drive much broader adoption of Postgres this year and next. EDB supports this with built-in Oracle migration capabilities, unmatched Postgres expertise and 24/7 global support. We uniquely empower enterprises to accelerate strategies, move applications to the cloud and build new applications on Postgres.
M.R.: What are the factors accelerating or inhibiting the adoption rate of Postgres?
Jozef: Purpose-built for general use, Postgres powers enterprises across a wider variety and broader spectrum of applications than any other database, making it the economic game changer for data. There will always be specialty applications that require specialty databases. But for an enterprise standard, developers and IT executives rely on Postgres for the widest range of support.
Postgres technology is extraordinary and is improving faster than competing technologies, thanks to the independent nature of the community and EDB’s relentless commitment to Postgres innovation and development. Our technology approach delivers a “single database everywhere” to any platform including self-managed private clouds and self-managed public clouds, but our fully managed public cloud is the most important accelerator. The fact that we simultaneously deliver breathtaking cost reductions is the icing on the cake.
Additionally, the fact that more developers love, use and want Postgres than any other database in the world is an important “tell” on this prediction.
Developers and business leaders alike seek data ownership and control and they simply don’t have time—or money—to waste. That is why they need a Postgres acceleration strategy, and only EDB can provide that.
Inhibitors to the adoption of Postgres are primarily awareness, staff education and training — all areas that the C-Suite can play a big leadership role in changing. Great leaders recognize the need for expertise from a company that deeply understands Postgres and enables them to run data anywhere. That’s EDB.
Our business is built to remove barriers. Some of the biggest companies in the world including Apple, Daimler, Goldman Sachs, and others have already adopted Postgres as their database standard. It’s not a matter of if, but when the majority of enterprises will follow suit.
M.R. Rangaswami is the Co-Founder of Sandhill.com

Rohit Choudhary is the Founder & CEO of the market leader in data observability, Acceldata.
Having data alone isn’t enough to deliver value to an enterprise – a report by HT Mint found that over 90% of the data available in the world today was generated over the last two to three years alone. But putting it all together is what drives results. For enterprises, data comes in all shapes and sizes—but in the era of hyper-information, disinformation can be equally destructive.
Rohit credits his success to his engineering roots, continuous innovation, a humbling sequence of entrepreneurial learning from successes and failures, and cultural alignment that kept his team together for nearly 20 years. Fresh off its $50 million Series C funding round, Acceldata is leading the charge for the data observability industry, giving operational control back to understaffed data teams while maximizing ROI.
M.R. Rangaswami: What is Acceldata’s founding story and what led you to raise a significant $50 million Series C funding in the face of economic turmoil?
Rohit Choudhary: My co-founders and I started Acceldata in 2018 after recognizing that a better solution was needed to monitor, investigate, remediate, and manage the reliability of data pipelines and infrastructure. Having built complex data systems at several of the world’s largest companies, we saw clearly that enterprises were trying to build and manage data products using tools that weren’t optimized for the task. Despite significant investments, data teams still couldn’t see or understand what was happening inside mission-critical analytics and AI applications, and were failing to meet reliability, cost, and scale requirements.
Since our launch, we have seen tremendous company momentum and were fortunate to secure a significant Series C funding round in the midst of an economic downturn. As a result, I can confidently say we’ve built the world’s most comprehensive and scalable data observability platform, correlating events across data, processing, and pipelines to transform how organizations develop and operate data products. Our funding speaks to the true value that organizations across the globe are achieving with data observability, and we’re excited to push the industry even further into the limelight.
M.R.: What is the importance of having reliable and established data across the enterprise? What consequences will companies experience without it?
Rohit: While an organization’s data is among its most valuable assets, data alone isn’t enough to deliver business value to an enterprise. Being able to piece it together to provide meaningful insights is what actually drives results and ROI.
With the migration of data and analytics to the cloud, data volume and data movement are more significant than ever. There is data-at-rest, data-in-motion, and data for consumption, each having different stops in the modern data stack that make it difficult for organizations to get a good handle on their data. Data reliability ensures that data is delivered on time with the utmost quality so business teams can make consistent, timely, and accurate decisions.
In the era of hyper-information, disinformation can be extremely destructive. However, the quality and integrity of the data in hand are what define the return on investment for various analytics and intelligence tools.
M.R.: What steps can organizations take to structure a logical plan of action to manage, monitor, and demystify data quality concerns and data outages?
Rohit: Data observability is the most logical plan of action to manage, monitor, and demystify data quality concerns, misinformation, and data downtimes. Software firms rely on observability as a solution to tackle data quality challenges and pipeline issues. Observability goes above and beyond just routine monitoring. It ensures teams are on top of breakdowns and manages data across four layers: Users, Compute, Pipeline, and Reliability.
Throughout the entire data process – from ingestion to consumption – data pipelines are moving data from disparate sources in an attempt to deliver actionable insights. When that data is accurate and timely, those insights help the enterprise gain a competitive advantage, and deliver the promise of an efficient data-driven enterprise.
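To illustrate the kinds of checks an observability layer automates, here is a toy Python sketch – not Acceldata’s product – that validates one batch of pipeline output for volume, null rate, and freshness, and emits alerts when invented thresholds are breached:

```python
from datetime import datetime, timedelta, timezone

def check_batch(rows, expected_min_rows, max_null_rate, max_staleness_hours):
    """Return alert strings for one batch of pipeline output (field names invented)."""
    alerts = []
    if len(rows) < expected_min_rows:
        alerts.append(f"volume: got {len(rows)} rows, expected >= {expected_min_rows}")
    if rows:
        null_rate = sum(1 for r in rows if r["amount"] is None) / len(rows)
        if null_rate > max_null_rate:
            alerts.append(f"quality: null rate {null_rate:.0%} exceeds {max_null_rate:.0%}")
        newest = max(r["loaded_at"] for r in rows)
        if datetime.now(timezone.utc) - newest > timedelta(hours=max_staleness_hours):
            alerts.append("freshness: newest row is older than the staleness threshold")
    return alerts

batch = [
    {"amount": 12.5, "loaded_at": datetime.now(timezone.utc)},
    {"amount": None, "loaded_at": datetime.now(timezone.utc)},
]
print(check_batch(batch, expected_min_rows=2, max_null_rate=0.25, max_staleness_hours=1))
```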
M.R. Rangaswami is the Co-Founder of Sandhill.com

It’s been said that 2023 is the year hybrid evolves to multi-cloud for enterprises, pushing data migration to the forefront for IT decision-makers.
Data is the lifeblood of the enterprise and now its movements have become even more complicated. In our quick conversation, Cirrus Data’s VP of Global Marketing Strategy, Mark Greenlaw, shared his observations on what’s happening with data mobility speeds, flexible storage architecture, and multi-cloud transformations.
M.R. Rangaswami: What are companies missing about how digital transformation impacts cloud adoption?
Mark Greenlaw: The phrase “digital transformation” has become a sort of catchall to describe everything from the process of modernizing applications to creating new digital business models. In reality, digital transformation is not replicating an existing service, but using technology to transform the service into something significantly better.
Unfortunately, less than 20% of companies that embarked on digital transformation strategies have been successful. There are varying reasons for the lack of sustained improvements from transformation initiatives, but infrastructure challenges are among the top. The cloud offers relief from rigid on-premises environments and accelerates time to market.
Public cloud companies now offer flexibility, access to third-party ecosystems, automation, and the ability to truly transform services.
M.R.: What do you advise organizations consider before a multi-cloud strategy?
Mark: Companies have been moving to the cloud for several years, but not all clouds are equivalent. As cloud adoption has grown, it has become clear that different cloud services are ideal for different applications, workloads, and business processes. Today, many organizations harness a mix of private, hybrid, and public clouds. Selecting the right cloud service and understanding how it integrates into your environment is an important first step.
It can be a challenge to determine which cloud is right for each scenario, but even once you’ve made that decision, executing the migration is often a roadblock. A ‘lift and shift’ strategy without optimization often doesn’t yield the anticipated ROI. We often hear from organizations that they are surprised by the costs of the cloud. And once they have moved their workloads to the cloud, moving them between clouds can be cost-prohibitive without the right data mobility solutions in place.
As part of planning a cloud strategy, data mobility needs to be a key consideration. What is the strategy to de-duplicate and compress your workloads? Do you have a solution that will enable you to move data while it is in use? Can you move data between clouds without exorbitant egress fees? These are all questions that, when tackled at the beginning, will help ensure your program’s success.
M.R.: Is moving block data to a new environment a high-stakes move?
Mark: Block data refers to mission-critical databases and applications which are structured data owned directly by applications. The loss of block data can have a catastrophic impact on business operations. Historically, storage experts would spend months planning the migration of this data onto a new storage platform. Legacy migration processes were manual, time-consuming, and prone to human error. For one customer in the travel and leisure industry, their initial attempt to migrate their block data took 18 months and they only managed to move a quarter of the overall traffic. It had a serious impact on their digital transformation plans.
It’s also important to consider the difference between data migration and data mobility solutions. Data migration is for one-time moves from one platform to another. Data mobility allows organizations to move data between platforms accurately and without delays. Data mobility is essential to maximizing a multi-cloud strategy. Whether you need to move your data for a specific project or you want the flexibility of continuous data mobility, automation and moving data while it is in use dramatically accelerates the speed of the process.
When you can automatically throttle the migration speed around usage, you have the ability to reduce the time spent and bandwidth used by up to 6x. Designing a strategy to manage your data mobility at the beginning of your cloud journey will lead to increased ROI and a better overall experience.
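A minimal sketch of the idea in Python, with invented numbers and no claim to Cirrus Data’s actual algorithm: give production I/O priority and let the migration consume only the remaining headroom, so throughput adapts automatically as usage changes.

```python
def next_migration_rate(prod_io_utilization, max_rate_mbps=800, floor_mbps=50):
    """Migrate with whatever bandwidth production workloads are not using."""
    headroom = max(0.0, 1.0 - prod_io_utilization)
    return max(floor_mbps, max_rate_mbps * headroom)

for util in (0.10, 0.50, 0.95):  # observed production I/O utilization
    print(f"util={util:.0%} -> migrate at {next_migration_rate(util):.0f} MB/s")
```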
M.R. Rangaswami is the Co-Founder of Sandhill.com

CEO and Co-Founder of Coalesce, Armon Petrossian, launched his company from stealth in January 2022 to solve the largest bottleneck in the analytics space: data transformations.
The 29-year-old entrepreneur is focused on helping enterprises overcome the pressing challenge of converting raw data into a structure better suited for consumption, a process that can take months or even years, so they can meet daily organizational and operational data-driven demands. The company is currently going head-to-head with dbt Labs and Matillion in the data transformation space.
M.R. Rangaswami: What are the core challenges you find that are associated with operationalizing data?
Armon Petrossian: Companies have been struggling with data transformation and optimization since the early days of data warehousing, and with the enormous growth of the cloud, that challenge has only increased. Data teams, in particular, are challenged with the everyday demands of the business and the shortage of skilled data engineers and analysts to combat the growing volumes and complexity of data.
We are on a mission to radically improve the analytics landscape by making enterprise-scale data transformations as efficient and universal as possible. We see the value of Coalesce’s technology as an inevitable catalyst to support the scalability and governance needed for cloud computing.
One of the most rewarding aspects of my role at Coalesce is seeing the impact our solution has on organizations that want to drive value out of their data. This is especially true for companies that deal with complex data sets and/or are in highly regulated industries.
One of our most recent customer success stories involves partnering with an organization that helps big restaurant brand clients leverage their customer data to show that the brand knows and understands its customers. Helping its numerous clients improve their digital marketing funnel and offering customers a frictionless experience every time they visit the store, whether in person or online, relies heavily on data. This requires having the ability to glean useful insight from data quickly and easily. Coalesce, alongside Snowflake’s Snowpark, was able to help their data science team complete a high-profile transformation in one month, whereas before, the entire team spent 6 months without much progress.
M.R.: What exactly is data transformation? Why does it play such a critical role in the future of data management and the analytics space?
Armon: It’s important to look at how we consume data to understand why data transformations are so important. Initially, organizations that were adopting cloud platforms like Snowflake hit a major hurdle which was getting access to data from their source systems. As that problem has been largely solved by companies like Fivetran, and getting access to different types of data has become much easier, transforming that data to create a cohesive view is the logical next step for businesses to accomplish. This becomes dramatically more difficult as you begin to integrate data from traditional on-premises platforms, like Teradata or Oracle, along with a variety of different web sources. For example, companies may look at vast amounts of historical data to understand how their production line performs in certain scenarios or look into demographic information to target the right potential customers. Whatever the reason, the analytics are only as good as their ability to curate data from various sources and transform it into a consumable format for the analytics and data science teams.
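To ground the term, here is a deliberately simple, generic transformation in Python with pandas – not Coalesce’s product, and with invented source data: raw records from two hypothetical systems are joined, aggregated, and renamed into an analytics-ready shape.

```python
import pandas as pd

orders = pd.DataFrame({                 # e.g., extracted from an on-premises system
    "customer_id": [1, 1, 2],
    "amount": [120.0, 80.0, 300.0],
})
customers = pd.DataFrame({              # e.g., loaded from a web source
    "customer_id": [1, 2],
    "region": ["EMEA", "AMER"],
})

# Transform: join, aggregate, and rename into a consumable view.
report = (
    orders.merge(customers, on="customer_id")
          .groupby("region", as_index=False)["amount"].sum()
          .rename(columns={"amount": "total_revenue"})
)
print(report)
```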
With Coalesce, the data can be organized in an easy-to-access and read fashion while using automation to streamline the process and limit the amount of time needed by highly skilled engineers. This ensures that companies are accessing high-quality data that is easy to use for a variety of purposes, an experience that is not guaranteed with existing tools. With our column-aware architecture, enterprises have the ability to efficiently and easily manage not only existing data but also new datasets as they grow and scale.
M.R.: What are your best practices for enterprises that are looking to keep up in today’s data-rich world?
Armon: My suggestions for best practices can be broken down into four areas:
i. Be Data-Competitive: Data competitiveness is key for every business, but given the enormous amounts of data being generated by modern enterprises, IT teams are falling behind in organizing and preparing data to be made available to business teams to help guide informed decisions.
ii. Embrace the Cloud: Managing hardware or technology on-premises is expensive, time-consuming and risky. In U.S. history, cars were not nearly as impactful to daily life as a form of transportation until the infrastructure of roads was built across the country. We’re now seeing a similar economic boom with the way the cloud allows access to data for organizations that would have never been able to achieve similar use cases or value previously.
iii. Evaluate Efficiency: IT teams finally understand how important efficiency can be to help deliver a continued competitive edge for enterprises. When applicable, data automation reduces time, effort, and cost while reducing tedious and repetitive work and allowing teams to focus on additional use cases with high-value data objectives.
iv. Strive for Scalability: With more data and the proliferation of the cloud, organizations are challenged with scaling IT systems while maintaining flexibility and control. Companies should look to implement processes that offer the speed and efficiency needed to achieve digital transformation at scale and to meet increasing business and customer demands.
M.R. Rangaswami is the Co-Founder of Sandhill.com

AB Periasamy’s unicorn company, MinIO, has pioneered high-performance Kubernetes-native object storage, helping enterprises use the cloud operating model to determine where to run their workloads, depending on what they are optimizing for.
As a Series B company, MinIO has $126 million in funding raised to date, with a billion dollar valuation. Investors include Intel Capital, Softbank Vision Fund 2, Dell Technologies Capital, Nexus Venture Partners, General Catalyst and key angel investors.
As one of the leading proponents of and thinkers on open source software, AB masterfully articulates the differences between philosophy and business models – and how the two shape how the cloud functions.
M.R. Rangaswami: Can you explain all this chatter about cloud repatriation?
AB Periasamy: Simply put, the concept of “cloud repatriation” is repatriating workloads from public clouds to a private cloud. For years, the mantra of the cloud was fairly straightforward: put everything in the public cloud and keep it there forever. This model made sense as businesses optimized for elasticity, developer agility, service availability and flexibility.
Things changed when businesses reached scale, however, as the benefits were swamped by economics and lock-in. This is leading many enterprises to re-think their approach to the cloud – with a focus on the operating model of the cloud – not where it runs.
It’s important to remember the cloud operating model has a cycle. There are times to leverage the public cloud, times to leverage the private cloud, and times to leverage the colo model. Given the ecosystem that has built up around the cloud, there is certainly self-interest in driving enterprise workloads in that direction: there are consulting fees to get you there, and consulting fees to manage costs once you realize it is more expensive than forecast. Nonetheless, sophisticated enterprises are increasingly taking their own counsel on what is best for the business – and that is driving the repatriation discussion.
M.R.: What are the key principles of the cloud operating model?
AB: The cloud is not a physical location anymore. The tooling and skill set that were once the dominion of AWS, GCP, and Azure are now available everywhere. Kubernetes is not confined to the public cloud distributions of EKS, GKE, and AKS – there are dozens of distributions. MinIO, for example, works in the public cloud, in the private cloud, and at the edge. The building blocks of the cloud run anywhere.
Developers know this. It is why they have become the engine of value creation in the enterprise. They know the cloud is about engineering principles, things like containerization, orchestration, microservices, software-defined everything, RESTful APIs and automation.
Understanding these principles and understanding that they operate just as effectively outside of the public cloud creates true optionality and freedom. There is no “one” answer here – but with the cloud operating model as the guide, enterprises create optionality. Optionality is good.
M.R.: How has the cloud lifecycle changed and is repatriation the answer?
AB: Early cloud native adopters quickly learned principles of the cloud. Over time, workloads grew and costs ballooned. The workloads and principles were no longer novel – but the cost to support the workloads at scale was.
For enterprises, it has become clear that the value has been inverted by the costs of remaining on the cloud. This is the lifecycle of the cloud. You extract the agility, elasticity, and flexibility value, then you turn your attention to economics and operational acuity.
Repatriation is but one tool; there are many. It is really about optimization: what you are optimizing for should help determine where you run your workload. At MinIO, we are agnostic – you can find us in every cloud marketplace (AWS, Azure, GCP, IBM), and you can find us on every Kubernetes distribution (EKS, GKE, AKS, OpenShift, Tanzu, Rafay). That is the definition of multi-cloud.
We talk about balancing needs and optimizing for workloads. Again, some workloads are born in the public cloud. Some workloads grow out of it. Others are just better on the private cloud. It will depend.
What matters is that when your organization is committed to the principles of the cloud operating model, you have the flexibility to decide, and with that comes leverage. And who doesn’t like a little leverage, especially in today’s economy?
M.R. Rangaswami is the Co-Founder of Sandhill.com

As we round the corner on the first quarter of 2023, we thought it would be an appropriate time to check in and review Software Equity Group’s Annual Report.
According to SEG’s report, SaaS continues to be an attractive asset class for private equity and strategic buyers. M&A deal volume in 2022 surpassed 2,000 transactions for the first time, a 21% increase over 2021.
Private equity buyers with record amounts of dry powder drove volume and valuations, comprising nearly 60% of SaaS M&A deals, a record for annual activity, and accounted for some of the highest multiples in 2022.
Public market indices across the board struggled to overcome the tumultuous macroeconomic landscape of 2022. While multiples continued to decline from the unsustainable run-up in 2021 (14.7x), public SaaS companies in the SEG SaaS Index demonstrated operational resiliency. The median EV/Revenue multiple sat 15% higher than 2018’s pre-pandemic levels, which were considered healthy at the time. What’s more, recent indicators show inflation moderating and the potential easing of interest rate hikes, which should bode well for SaaS multiples going forward.
Here are 5 summary points to note:
- Private equity capital overhang and fierce strategic competition catalyzed SaaS M&A activity and buoyed EV/Revenue multiples in 2022, despite broader macroeconomic turbulence.
- SaaS M&A deal volume remains near peak levels, reaching 2,157 deals in 2022 and growing 21% over 2021.
- The median EV/Revenue multiple for SaaS deals jumped to 5.6x in 4Q22, surpassing the median SEG SaaS Index public market multiple of 5.4x. Buyers and investors paying a premium for high-quality assets bolstered valuation multiples for SaaS M&A in 2022.
- Private equity-driven deals accounted for the highest percentage of transactions to date on an annual basis (59.5%) due to the record amount of capital raised demanding deployment to worthy assets.
- Noteworthy deals include Adobe’s acquisition of Figma ($20B), Vista Equity’s acquisition of Citrix ($16.5B), and ICE’s acquisition of Black Knight ($16B).
M.R. Rangaswami is the Co-Founder of Sandhill.com

Sharing his list of what organizations must pay attention to when it comes to their security, Kandji’s VP of Security and Trust, Dom Lombardi, details how organizations can stay one step ahead of this year’s risks, threats, and potential attacks.
M.R. Rangaswami: With the higher risk of infrastructure attacks, what is the biggest thing to stay ahead of in order to avoid concerted attacks against organizations?
Dom Lombardi: Attackers will continue to become more creative in their pursuits. It has been reported that about 25% of all data breaches involve phishing and 82% involve a human element. Many of the security controls we put in place earlier are at risk of being bypassed due to human error. Financially motivated cybercriminals will concentrate on corporate entities, where they will try to obtain personally identifiable information (PII) or customer payment card information.
Further, “strategic viability” attacks against critical infrastructure systems will continue to increase. Think oil pipelines, power generation, rail systems, electricity production, or industrial manufacturing. There is still the possibility that key government or corporate services could be targeted — something tied to global tensions.
M.R.: Why is it important for companies to prioritize Zero Trust in their cybersecurity plans?
Dom: Security teams have been talking about the zero-trust cybersecurity approach for a few years. It used to be “trust, but verify.” The new zero trust — in a workplace filled with multiple teams, multiple devices, and multiple locations — is “check, check again, then trust in order to verify.”
Organizations continue to play a cat-and-mouse game with hackers, attackers, and bad actors. Only 6% of enterprise organizations have fully implemented zero trust, according to a 2022 Forrester Research study.
The complex and disparate workplace environments that are so common now make it difficult to adopt zero trust – at least all at once. If you are using AWS, Azure, and GCP alongside an on-premises instance and a private cloud where you run virtualization through VMware, it will take some time to roll everything out uniformly.
As we all continue to embark on the zero trust journey, we will see new solutions for complex problems companies are experiencing on premise and in public and private clouds. By mastering basic IT (and security) hygiene, updating and communicating your risk register (a manual that outlines current and potential security risks and how they could impact the organization), and working steadily toward a zero-trust security model, you’ll be one step ahead of most other organizations — and hopefully two steps ahead of the hackers!
M.R.: As companies continue to build their security plans, how will the role of the CISO expand at organizations?
Dom: The CISO can continuously champion the risk register to ensure they receive the resources needed to remediate and reduce risk on an ongoing basis. Keep in mind that new threats, risks, and updates will always populate your risk register. It is critical to actively work through this list to prevent risks from escalating and becoming even more complicated.
Additionally, to prevent miscommunication and promote total transparency, any CISO who does not report directly to the CEO should demand that they do — immediately. Organizations need to take a risk-conscious approach to developing their security program and risk mitigation strategies.
A CISO must report to the CEO to ensure direct lines of communication regarding risk scenarios and potential loss events. CEOs are ultimately accountable for the course of action they set the organization on, and CISOs provide the CEO with the direction and guidance to make informed, risk-conscious decisions.
To set themselves up for success, CISOs should ensure that the general counsel at their organization is in their “peer set.” This relationship with your general counsel is integral to a unified approach to legal and security risk mitigation. The organization’s general counsel and CISO share a common goal: to keep the company, their customers, and the organization’s leaders safe.
M.R. Rangaswami is the Co-Founder of Sandhill.com

Idit Levine is the founder and CEO of the unicorn service mesh platform company, Solo.io.
Some say that it was her professional basketball career that gave her the drive to develop such a successful enterprise. Others report her success should be attributed to her ability to self-teach solution-based skills. Likely, it’s both.
Idit’s dialed-in focus on how enterprises can connect hard-to-change, tested applications with more modern, flexible microservices and service mesh is worthy of her organization’s $1 billion valuation.
M.R. Rangaswami: Modern enterprise infrastructure and next-gen technologies such as cloud-native architectures and microservices have basically eliminated the perimeter. How has this affected security as a whole?
Idit Levine: Not so long ago, a perimeter separated a company’s assets from the outside world. Now, there is no “inside” versus “outside”; everything is considered outside. A larger attack surface—the number of exposed and potentially vulnerable resources within your enterprise—means more opportunities for cybercriminals. And the average cost of a data breach in the U.S.? A staggering $9.44 million. Forward-looking organizations have implemented defense-in-depth (DiD), a multi-layered cybersecurity approach with several defensive mechanisms set up to protect valuable data and information. Others are implementing zero-trust, which basically means check, check again, then trust in order to verify.
One of a modern organization’s biggest challenges is assessing exactly how many entities it must secure. Keep in mind that microservices and modern applications have exponentially more pieces than previous generations of applications. One microservice may contain 10 pieces while a previous application had only one. Once you break down these multi-part applications and services, you must factor in how all these pieces communicate over the network – a network that should be inherently untrusted.
M.R.: Service mesh has long been thought of more as a DevOps solution, but can it too help with modern security?
Idit: Service mesh tackles the prime challenges of developing and securing microservices and modern applications (different teams using different languages and frameworks) by moving authentication and authorization to a common infrastructure layer. The service mesh helps authenticate between services to ensure secure traffic flow, also enforcing service-to-service and end-user-to-service authorization. Service mesh enforces role-based access control (RBAC) and attribute-based access control (ABAC). A service mesh can validate the identity of a microservice as well as the resource (server) running the microservice.
A service mesh also acts as traffic control within the network, freeing application teams to focus on building applications that benefit the business—without taking on the additional task of securing these applications. The service mesh delivers consistent security policies for inside and outside traffic and flexible authentication of users and machines. It also enables cryptographically trusted authentication for both users (humans) and machines or applications. Cryptographic security depends on keys to encrypt and decrypt data to verify and validate users. In addition to enabling encrypted paths between applications, service mesh allows for flexible failover (and improved uptime) and known points for security logging and monitoring.
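A toy Python sketch of the layered authorization Idit describes – with invented service names and policies, and no claim to any particular mesh’s implementation: an RBAC check on the caller’s role, followed by an ABAC condition such as requiring mTLS-verified traffic. A real service mesh enforces this in the infrastructure layer rather than in application code.

```python
RBAC_POLICY = {  # role -> services that role may call (invented policy)
    "frontend": {"cart-service", "catalog-service"},
    "cart-service": {"payment-service"},
}

def authorize(caller_role, target_service, attributes):
    # RBAC: is the caller's role allowed to reach the target at all?
    if target_service not in RBAC_POLICY.get(caller_role, set()):
        return False
    # ABAC: layer attribute conditions on top, e.g. only mTLS-verified traffic.
    return attributes.get("mtls_verified", False)

print(authorize("frontend", "cart-service", {"mtls_verified": True}))     # True
print(authorize("frontend", "payment-service", {"mtls_verified": True}))  # False
```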
M.R.: Does zero trust have a play here? How should InfoSec treat a zero trust strategy?
Idit: It’s been a year since President Joe Biden issued a cybersecurity executive order spelling out the importance of adopting a zero-trust cybersecurity approach, yet only 21% of critical infrastructure organizations have adopted a zero-trust model.
The zero-trust approach is essential for fast-moving, cloud-native application environments. Many commercial organizations and government agencies are turning to service mesh to bolster their zero-trust initiatives. Government agencies, for example, always struggle to secure high-value assets (including critical infrastructure) from hackers and bad actors. And these attackers can be internal (disgruntled employees or contractor/vendor breaches) or external (foreign nation-state threat actors). As a result, there are no insiders or outsiders; everyone is outside and untrusted until proven otherwise.
Service mesh is one of the simplest ways to enable zero-trust security. A service mesh helps authenticate and cryptographically validate and authorize people, devices and personas. It can further be used to enforce policies and identify potential threats.
M.R. Rangaswami is the Co-Founder of Sandhill.com

Today the customer support experience is critical to revenue at every phase of the customer journey, from pre-sales through renewal and expanding customer relationships. Krishna Raj Raja founded SupportLogic in 2016 to help transform the role of customer support, bringing deep experience in the service and support industry.
As the first hire for VMware India, Krishna built the company’s support organization into a multi-thousand headcount global organization. Now at the helm of SupportLogic, he and the company help some of the largest B2B technology companies in the world to optimize their support experience.
M.R. Rangaswami: What are some of the trends driving the need for companies to focus on their support experience?
Krishna Raj Raja: There are several key trends that are accelerating the need for every company to invest in the customer support experience:
- Velocity of technology adoption. It took 80 years for the telephone to reach 100 million users. Mobile phones took only 16 years, WhatsApp took less than four years, and ChatGPT took only two months to reach that milestone. Not only is the rate of adoption faster, but we are also updating technology at an increasingly faster rate. Both trends stress vendors, as it’s far more challenging to handle a growing volume of support issues without compromising the brand experience.
- Focus has shifted to post-land. The rise of the “subscription economy” and SaaS put the spotlight on customer retention. Today more and more companies are transitioning to usage-based-monetization models. The focus has now shifted from landing customers to driving product adoption post-land. Support plays a crucial role in this transformation. Customers are more likely to adopt a product that is backed by a world-class support experience.
- Product-led growth models for the enterprise. This is part of a continued trend of “consumerization of the enterprise,” which vendors may falsely assume means it’s easy to design a perfect product that doesn’t need marketing, sales, and support to be successful. In fact, the opposite is true: while PLG-native companies may have an easier time, many companies transitioning from a traditional sales-led GTM motion require even more investment in the support experience to evolve successfully.
- Big Data vs. Thick Data. Big Data’s focus historically has been on metadata and machine data. This is the first time in the industry we can process unstructured data at scale. The ability to process customer sentiment and unlock the Voice of the Customer from support interactions has led to the rise of thick data. Emerging business trends can now be spotted in thick data that were previously untapped in big data analysis.
M.R. Rangaswami: AI has jumped from being in the hype cycle to being a more mainstream technology. What role does it play in support experience?
Krishna Raj Raja: ChatGPT has recently gained much media attention, and AI technologies in general have accelerated greatly to serve more real-world applications, including the support experience. Companies are using AI and natural language processing (NLP) to mine and organize raw customer sentiment signals like “frustration,” “needs attention,” and “looking at alternative solutions,” turning them into predictive scores such as “likely to escalate or churn” and into guided workflow steps that help support managers and agents coach, assign cases to the right agent, and feed a more intelligent product feedback loop.
The use of AI enables new levels of speed and precision to take the right steps to improve the customer experience at a scale of millions of customer interactions.
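As a hedged sketch of that signal-to-score idea, the following Python fragment maps detected sentiment signals to a “likely to escalate” score and a workflow action. The signal names, weights, and threshold are invented for illustration and are far simpler than a production model.

```python
SIGNAL_WEIGHTS = {  # invented weights for detected sentiment signals
    "frustration": 0.5,
    "needs_attention": 0.3,
    "evaluating_alternatives": 0.6,
}

def escalation_score(signals):
    """Combine detected signals into a 0-1 'likely to escalate' score."""
    return min(1.0, sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals))

case_signals = ["frustration", "evaluating_alternatives"]
score = escalation_score(case_signals)
if score >= 0.7:  # invented routing threshold
    print(f"score={score:.2f}: route to a senior agent and alert the manager")
```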
M.R.: How do companies make the business case for solutions like SupportLogic during an economic downturn, where all costs are significantly scrutinized?
Krishna: In light of the current economic headwinds, every purchase is under a microscope and the business case must be rock solid. A few factors that are helping to move technology purchases forward:
- An ability to consolidate and reduce other technology spend. A typical company may be spending money on hundreds or thousands of SaaS applications that get marginal use. If you can demonstrate that you cover the bulk of their use cases and deliver the same value as several of them combined, it’s an easier internal sell to finance leadership.
- Showing clear financial metrics and speaking the language of finance – e.g. calculations on how you help with Net-Dollar Retention, Margins, Customer Lifetime Value and the “Rule of 40” go a long way in getting support from finance and business decision makers.
- Demonstrating benefits across multiple functions/departments in the organization vs. being narrowly focused on one role or function.
The good news is that investing in Support Experience with solutions like SupportLogic addresses all of these areas, making it a top investment priority for organizations that may be cutting back in other areas. We have content that walks through how to make the business case in more detail.
M.R. Rangaswami is the Co-Founder of Sandhill.com

With over two decades of experience advising on tech strategy, M&A integration, and operations improvement, Shub Bhowmick has built a career on running high-impact projects in a wide range of industries.
According to Forbes, Shub’s strength lies in his ability to break down complex problems, identify risks, assess business value, and then provide recommendations on remediation and value attainment. That foundation stems from his MBA at Northwestern University’s Kellogg School of Management and a Bachelor of Technology with honors in Chemical Engineering from IIT-BHU in India.
M.R. Rangaswami: Everyone’s betting on analytics and AI. How should a company evaluate an AI vendor?
Shub Bhowmick: At a recent event, Reid Hoffman said, “You are sacrificing the future if you opt-out of AI completely.” The AI and data science industry continues to evolve at light speed, and this year will be no different. However, enterprises are adjusting their expectations as cost reduction and shareholder value realization are fast becoming a central theme.
In light of the increasing importance of AI in business today, companies worldwide are justifiably spending more time and effort evaluating AI consultants. Data science solutions are more valued than ever before because they help companies differentiate themselves from the competition and spark organic growth.
Identifying the right AI partner or solution can be challenging since everyone claims to be able to solve every problem, every time. First, it is important to know what problems your business is trying to solve; don’t go into this evaluation blindly—ensure that you have a clear list of what you need and what business goals you’re aiming to accomplish. Then, you need to take a closer look at your options: What problems are the various AI vendors solving (and how effective is their work)? What industries do they have experience with? Are they growing and innovating or standing still? Do they have a regional or global presence? Can they support a broad range of users?
Ultimately, doing things at the edge is what the future is about. A combinatorial focus on innovation, customer-centricity, business value realization and custom solutions will help you find the best AI vendor for your organization.
M.R.: What are the most effective ways for companies to use AI and ML to reduce costs and maximize profitability?
Shub: AI and machine learning technology have quickly become integral parts of digital transformation strategies for businesses, as these solutions are essential for improving efficiency, cutting costs, and maximizing profits. AI has the potential to integrate everything within an enterprise, from customer insights to hyper-personalization, order generation, warehouse inventory optimization, routing optimization, delivery, the products shown in the catalog, and POS data, all the way through to pricing. To illustrate their immense capability and potential further, let’s look at some real-world use cases.
For instance, a customer intelligence platform like COSMOS helps retailers get 360-degree visibility into the customer, both when they are with you and with the competition. The platform delivers real-time access to customer insights with seamlessly integrated first- and third-party data to run multiple experiments and perform holistic measurements.
Similarly, the role of AI in CPG and manufacturing is significant, where a solution like a supply chain control tower future-proofs the supply chain with prescriptive insights and helps companies handle future disruptions and opportunities with centralized control.
When used in collaboration, AI and ML can predict what products and services will be in greater demand so that businesses can maximize sales and growth opportunities while engaging fewer resources. AI and ML are designed to help companies decrease costs while growing profitability. This is just one of the many reasons more businesses are turning to the latest data science solutions.
M.R.: What is the last-mile problem in AI and how can it be solved?
Shub: The last-mile problem in AI is the critical gap between insight creation and value realization—it has long been one of the most challenging issues for organizations across various industries and continues to test companies today. While generating insights is certainly worthwhile, if you can’t use them to change behavior or move the dial, then that gap is both costly and unproductive for companies.
Tredence ensures insights are actionable and impactful so our clients can grow revenue, remove barriers to innovation and uncover new opportunities to create meaningful and sustainable value. Working with several Fortune 100 CDOs, we help enterprises understand the economic value of data and the importance of leading a data-driven organization. With all that in mind, our goal is to be on every CDO’s speed dial in the next 2-3 years. We excel at solving the last-mile problem and helping organizations create true value; with Tredence, you can solve vertical and horizontal issues.
M.R. Rangaswami is the Co-Founder of Sandhill.com

Josh Lee is the Co-Founder and CEO of Swit, a project work management platform. Swit was selected as the winner of the Startup Grind Global Conference 2020 from among more than 4,000 applicants, and in 2022 it was ranked the No. 1 project management software in G2’s Usability Index among 140 competitors.
Swit was also recognized by CIO Review as a Top 4 Remote Work Tech Solution alongside Slack, Asana, and Monday in 2020, and officially recommended as an Editor’s Choice on the Google Workspace Marketplace in 2021.
M.R.: Why is the collaboration space saturated, and how does Swit differentiate its offering?
Josh: Every team has different workflows and its own tooling preferences, so departments within the same organization often choose different tools from the same software category. For example, IT teams want Jira for project work management, while non-IT teams like HR, Marketing, or Sales prefer Asana or Monday for the same purpose.
This freedom of tool choice ended up creating departmental boundaries, making it hard for multiple teams to collaborate on shared goals in a project. The more projects anyone gets involved with, the more silos they suffer from, juggling too many point solutions while losing context. In this fragmented work environment, checking task dependencies with other teams across multiple projects feels unattainable. Companies are struggling against disconnected systems; we’re now in a crisis of tool overload. Streamlining workflows across teams is impossible without stitching these disconnected systems back together.
So we built Swit to provide a collaboration framework for cross-team projects, offering just the right amount of every common denominator of collaboration essentials, from chat to tasks, in one place. It’s designed for employee connection across departments, so companies can create a more connected employee experience in company-wide, cross-functional projects.
M.R.: How have strategies for digital transformation changed before and after the pandemic?
Josh: Throughout the pandemic, it’s become much harder to connect as teams. People are feeling more disconnected from each other, working in different places and different time zones. There are too many digital tools, and notification-based chat alone has failed to serve as a distraction-free hub for third-party app integrations. Digital fatigue is now at an all-time high, leading directly to distrust, disengagement, inefficiency, and low productivity. Work synchronization should not depend so heavily on video-call-based sync meetings. There’s just no question that we are digitally drained. In addition, new generations are looking for a unified work hub that enables efficient asynchronous communication and transparent, trackable collaboration to bring a more human sense of belonging to remote work.
The world is changing, and all previous Digital Transformation strategies will not work in this new world. We need a digital twin of the company, completely redesigned from the ground up as a true-to-life space that connects our work across systems and brings people closer together again.
Companies will not succeed in digital and cultural transformation by focusing on employee management; they will succeed only by focusing on employee connection. Standing still in the comfort zone built pre-pandemic is not an option to survive and, needless to say, to thrive. Swit was born to connect people and work across departments and systems so that even large organizations can drive that connection beyond barriers and evolve their employee experience strategy more sustainably.
M.R.: How do you adjust Swit’s GTM strategy during this economic downturn?
Josh: We truly understand that one good product is not good enough for scalability, because one size does NOT fit all. This market is already hugely saturated with too many single-function point solutions. So we offer a SaaS Integration Platform alongside the product, so our clients can configure and customize it to their needs, create user-defined bots, build and publish third-party app integrations, and automate all the necessary functions by themselves.
Salesforce has said the future of SaaS is SIP, and we’ve just brought that future to now. This configurable product and customizable platform offering has been optimized to help our users stay connected across teams and across tools. Internally, we call this PPLG: Product & Platform-Led Growth. We built the “product” to be industry-neutral, covering the common denominators of every team’s daily workflows, while the “platform” empowers our clients to meet their industry-specific needs themselves.
Fortunately, Swit is recession-proof because it’s essential software that companies use consistently regardless of market fluctuations. In fact, Collaborative Work Management has been the fastest-growing category in the endemic era.
Even though we’ve offered one work hub that consolidates chat and tasks in one place for the four years since launch, in July 2023 we’ll also release single-function tiers with much more affordable pricing plans, and we’ll add 11 languages and their local currencies to target global markets.
M.R. Rangaswami is the Co-Founder of Sandhill.com

Saket Modi is the Co-founder and CEO of Safe Security, a Cyber Risk Quantification and Management (CRQM) Platform company. A computer science engineer by education, he founded Safe Security in 2012 while in his final year of engineering, along with Rahul Tyagi and Vidit Baxi. Saket enjoys trying global cuisines, photography, and surprising his friends by playing the grand piano.
M.R. Rangaswami: What is it really like to work with John Chambers (an investor in your company) – what is the single most valuable advice he has given you?
Saket Modi: It’s incredible to work with John. He’s someone who has seen the economy and businesses move and reshuffle not once but multiple times. The most valuable advice he’s given us at Safe is to focus on customers. It reflects in our core value of keeping the customer first, always.
M.R.: What trends do you see in the Cyber insurance market: Who is buying, what about rates – say something the readers can take action on?
Saket: In Safe’s 2023 Cyber Insurance market outlook, we observe a trend of premium rates stabilizing. Insurance carriers are adapting to new ways of underwriting cyber risk as the threat landscape evolves, compounded by improved cybersecurity practices and investment among end-insureds.
Carriers have raised the bar for entry for cyber insurance, increasing the information security requirements for organizations to qualify to obtain coverage. Coming out of a hard market, we are now seeing more competition, with more carriers open to underwriting cyber insurance again.
2023 is the year the cyber insurance industry will introduce “inside-out” underwriting. They will leverage continuous, real-time, and precise cyber risk insights to effectively link the cyber insurance policy with the insured’s cybersecurity posture. With two-plus years of significant premium increases amidst reductions in coverage, insureds who have been investing in cyber security want to be acknowledged and rewarded by their cyber insurance partners and are more willing than ever to share “inside-out” cyber risk telemetry in a non-intrusive way.
M.R.: What are the top cyber risks you see in your customer base that are simple to mitigate for enterprises – with the highest ROI?
Saket: I don’t think there is a simple answer here. While customers are most worried about ransomware and data breaches, they increasingly want to model different possible risk scenarios dynamically. It is no longer about which risk is hypothetically the most probable.
Customers want to understand the reality of their cyber risk posture and act accordingly. Organizations we have interacted with understand that risk is subjective – varying with the industry, geography, and annual revenue. Security and risk leaders want to understand how their company is positioned in the present and compare their cybersecurity status with future cyber risk scenarios. That’s where Cyber Risk Quantification and Management (CRQM) solutions, such as the Safe Platform, help them.
SAFE allows them to build custom risk scenarios in their environment, enabling them to demonstrate and measure the likelihood of their organization being breached, the financial impact of possible breach scenarios, and a prioritized list of actions to improve security posture and reduce risk in a manner that maximizes return on security investment (ROSI).
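To make that arithmetic concrete, here is a minimal sketch of generic breach-scenario math, not SAFE’s proprietary model: expected loss is breach likelihood times financial impact, and ROSI compares the reduction in expected loss against the cost of the mitigating control. All probabilities, dollar figures, and the Scenario structure are illustrative assumptions.

```python
# Generic cyber-risk scenario math (illustrative only, not SAFE's model).
# Expected loss = breach likelihood x financial impact; ROSI compares the
# reduction in expected loss against the cost of the mitigating control.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    likelihood: float   # probability of breach over one year (assumed)
    impact_usd: float   # financial impact if the breach occurs (assumed)

    @property
    def expected_loss(self) -> float:
        return self.likelihood * self.impact_usd

def rosi(loss_before: float, loss_after: float, control_cost: float) -> float:
    """Return on security investment: (risk reduction - cost) / cost."""
    return (loss_before - loss_after - control_cost) / control_cost

ransomware = Scenario("ransomware", likelihood=0.18, impact_usd=4_000_000)
hardened   = Scenario("ransomware + MFA/EDR", likelihood=0.06, impact_usd=4_000_000)

print(f"expected loss before: ${ransomware.expected_loss:,.0f}")   # $720,000
print(f"expected loss after:  ${hardened.expected_loss:,.0f}")     # $240,000
print(f"ROSI: {rosi(ransomware.expected_loss, hardened.expected_loss, 150_000):.1%}")
```

Under these assumed numbers, a $150k control that cuts annual breach likelihood from 18% to 6% yields a 220% ROSI, which is the kind of prioritized, dollar-denominated comparison the platform is described as producing.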
M.R. Rangaswami is the Co-Founder of Sandhill.com

Prior to founding PingCAP, Max Liu was a software engineer for more than 15 years. He spent many hours working on database scaling issues to make coding faster, including creating the Codis open-source project as a distributed cache solution.
Having first-hand experience with the time-consuming and repetitive processes engineers and developers face, Max developed TiDB, an open-source distributed database. TiDB offers a more streamlined database that can handle scalability and petabytes of data so customers can focus on more important areas like data analysis and business development.
M.R. Rangaswami: What is HTAP and why is it important to the enterprise?
Max Liu: Hybrid Transactional and Analytical Processing (HTAP) is a type of database that can process both online transactional and analytical workloads within the same system, sharing a single source of truth with no data-pipeline delay in between. This simplifies technology stacks and reduces data silos, helping companies build actionable insights from real-time updates and drive faster growth. HTAP is important to the enterprise because it allows for more efficient and streamlined data processing, which can improve cost efficiency and expedite business operations and decision making.
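To illustrate the single-source-of-truth idea: TiDB speaks the MySQL wire protocol, so one connection can serve a transactional write and an analytical aggregate back-to-back with no ETL hop in between. A minimal sketch, assuming a local TiDB server on its default port 4000 and a hypothetical, pre-created orders table:

```python
# One database, two workload types: an OLTP write followed immediately by an
# OLAP-style aggregate over the same fresh data. Sketch assumes a TiDB server
# on localhost:4000 (TiDB's default port) and an existing `orders` table.
import pymysql

conn = pymysql.connect(host="127.0.0.1", port=4000, user="root",
                       password="", database="shop")
try:
    with conn.cursor() as cur:
        # Transactional workload: record a sale.
        cur.execute(
            "INSERT INTO orders (customer_id, amount, created_at) "
            "VALUES (%s, %s, NOW())", (42, 129.99))
    conn.commit()

    with conn.cursor() as cur:
        # Analytical workload: revenue by day over the same live data,
        # with no data pipeline delay in between.
        cur.execute(
            "SELECT DATE(created_at), COUNT(*), SUM(amount) "
            "FROM orders GROUP BY DATE(created_at) ORDER BY 1 DESC LIMIT 7")
        for day, n_orders, revenue in cur.fetchall():
            print(day, n_orders, revenue)
finally:
    conn.close()
```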
M.R.: What are the biggest challenges for database and analytics management today and how should they be addressed?
Max: The biggest challenges for database and analytics management today can be summarized in three Vs: volume, variety, and velocity. These map to the need to process a growing volume and variety of data, the need for real-time data processing and analysis, and the need to integrate data from multiple sources and systems. These challenges can be addressed through advanced technologies such as in-memory databases, distributed databases, and cloud-based analytics platforms, respectively. But to satisfy all three needs at once, a very complex data architecture has become the new norm.
As a side effect, the challenge extends to balancing data capability against the velocity of change. Additionally, the scarcity of data engineering talent for such a complex architecture is a high barrier for most organizations seeking to adopt a data-driven culture, invest in skilled personnel, and implement effective data governance and security practices.
M.R.: Where do you see the database market evolving in the next 5 years?
Max: In the next 5 years, I expect the database market to continue to grow and evolve, with a focus on cloud-based solutions, the integration of artificial intelligence and machine learning technologies, and the development of distributed and scalable databases to support the growing volume and complexity of data. Simplified data architecture is likely to play a key role in this evolution, as it can help to reduce complexity and improve data accessibility, enabling organizations to gain greater insights from their data and make more informed business decisions.
Additionally, there may be increased emphasis on data security and privacy, as well as the integration of databases with other technologies, which again favors a simplified, single-database architecture. Overall, I expect the database market to grow even faster with recent AI technology boosts like GPT-3.5 or the coming GPT-4. And simple-by-design technologies, for instance HTAP databases and low/no-code tools, will become more powerful.
M.R. Rangaswami is the Co-Founder of Sandhill.com

Peter Brereton is president and CEO of Tecsys, a global supply chain management software company. He joined Tecsys at its inception, initially leading the company’s software development, product management, and sales and marketing, and has now been serving as president and CEO for over 20 years.
Having been recognised with an EY Entrepreneur Of The Year® award in Quebec in 2019, Peter leads with a strong moral compass rooted in family and faith. He has guided the company through tremendous growth, not only with a sharp vision for the supply chain industry, but by adhering to his fundamental values of authenticity and honesty.
With 20+ years at Tecsys, Peter has a lot to share about growth, leadership and the future of tech in the healthcare space.
M.R. Rangaswami: Let’s break it down: who is Tecsys, and what do you do?
Peter Brereton: Coming up on our 40th anniversary, Tecsys sells and implements SaaS ERP and WMS solutions that manage agile supply chains. We’re currently at about a $150M run rate, with SaaS revenues growing at about 40% per year, and we’re profitable!
Tecsys’ end-to-end supply chain software platform includes solutions for warehouse management, distribution and transportation management, supply management at point of use, order management and fulfillment, as well as financial management and analytics.
These solutions are designed to accommodate the needs of several industry segments; our customers include organisations spanning third-party and complex distribution, converging retail, and healthcare supply chain, as well as government bodies and agencies.
For decades, organisations have been adding length and complexity to their supply chains without paying attention to the vulnerabilities that those complexities create. Layer in digital commerce, globalisation, new consumer expectations and aging systems, and what worked in the past is likely less relevant now. In today’s rapidly changing world, an agile supply chain platform that can efficiently manage change is crucial to remaining competitive.
That’s where we deliver our greatest value. Through our software solutions, we empower companies to run a modern supply chain practice with end-to-end visibility and the digital tools to adapt to change.
M.R.: I hear that Tecsys solutions are truly transforming some aspects of healthcare. Can you explain?
Peter: Over the last 10 years, Tecsys has proven that an efficient real-time digital supply chain platform improves cost, quality and outcomes for hospital networks. Tecsys is the established leader in this market, with more than 50 substantial hospital networks on its platform, and we are currently adding an additional two or three per quarter.
Tecsys championed a concept for managing a health system’s supplies following an industry best practice framework; it came to be known as the consolidated service center and is widely considered the benchmark for strategic supply chain management at the health system level. With dozens of major health system implementations under our belt, we continue to lead the industry in transforming traditional healthcare supply chain operations into modern clinically integrated supply chain networks.
Another important facet of healthcare supply chain management is facilitating collaboration between clinical and logistics teams to provide the best possible outcomes for patients. Because this is such an important part of the care delivery chain, we are highly focused on the deployment of clinically integrated point-of-use technology that connects clinical operations to the back-office supply chain activities needed to support patient care.
At this point, we are the only vendor in the market that can tie the OR, cath lab, general supplies and pharmacy together in a truly integrated supply chain, along with the off-campus warehouse or consolidated service center. It turns out that having the right product at the right time for the right patient, in the hands of the right clinician, saves lives and millions of dollars!
M.R.: What do the next three years look like for Tecsys in the healthcare space?
Peter: The healthcare industry is under pressure from both a clinical and operational perspective. With labor challenges and rising supply costs continuing to squeeze margins, this sector is facing a formidable challenge. The pandemic deepened and accelerated those challenges, exposing vulnerabilities and forcing transformation on healthcare organisations that were slower to adapt.
Supply chain transparency and traceability will continue to drive investment in the healthcare sector. Health systems will keep evolving and growing, which means higher supply chain complexity and increased challenges.
The behemoth enterprise systems that worked well at the turn of the millennium are really showing their limitations now, and the urgency to modernise is just ramping up. There are 550 hospital networks in the U.S., and Tecsys is pursuing the top 300. Tecsys fully expects to have over 100 hospital networks as clients within the next three years, on our way to more than 50% market share.
M.R. Rangaswami is the Co-Founder of Sandhill.com

There’s a new age of B2B growth, and it’s all about the product experience. Matt Gorniak, CEO of Threekit Visual, is at the forefront of this era.
Matt’s expertise comes from co-founding G2 and cloud pioneers such as BigMachines (acquired by Oracle) and SteelBrick (acquired by Salesforce and now Salesforce CPQ), and now leading Threekit Visual.
Here is what Matt and his teams are seeing as they usher in a new age of B2B growth.
M.R. Rangaswami: Why is it the new age of B2B Growth?
Matt Gorniak: The best way to view the world of B2B Commerce is in 3 stages.
First, it was a world of spreadsheets and lots of manual processes.
Second, where most B2B companies are today – a world built around making it easier for the seller to sell. Tools like CRM, CPQ, ERP etc. make sales processes faster and more efficient for the seller.
Third is the new age of B2B where new kings will be crowned. Today it’s not about just making it easier for sellers to sell. What’s changed is that now it’s about making it easy for buyers to buy.
B2B winners will make it easy for customers to buy on their terms. They will show more of their product and deliver amazing, seamless, and efficient product experiences that keep their customers coming back.
M.R.: You mention moving from the age of “Seller to Buyer” – why is that important?
Matt: It has been said but it bears repeating: everyone – and I mean everyone – wants an easier buying experience. B2B buyers really do want self-service as much as possible, whether they’re buying bulk gift cards or a forklift.
To complete a sale today most B2B companies need a salesperson to collect criteria, create a quote, send samples, do renderings and more.
To compete and win in the future, B2B companies will need a tool that allows buyers to configure, price, and visualize a product in real time.
Buyers want to be able to literally see the product, configure it, and get served all the relevant pricing, quoting, and delivery information. And they want it easily accessible, 24/7, with all of the product and customer rules baked in.
M.R.: How does Threekit Visual Commerce help B2B brands level up to the new age of B2B growth?
Matt: Threekit creates a magical product experience for your buyers: let buyers visually configure your product with a platform fully integrated with your tech stack.
It works by taking your product catalog and rules and mapping that onto 3D assets. The platform delivers visual configuration in 3D, 2D, and AR so that customers can configure, build, and buy 24/7.
Threekit integrates with all of your systems like CPQ, eCommerce, and ERP – so buyers get an accurate price, delivery estimate, and other key information in real time. You can also syndicate the experience to distributors and resellers so they can sell more on your behalf.
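As a toy illustration of what mapping a catalog and rules onto configurable products can look like at the logic level, here is a minimal sketch. The option names, rules, and prices are invented for the example and are not Threekit’s API; a real deployment would also render the 3D/2D/AR assets this logic drives.

```python
# Toy visual-configurator logic: validate a buyer's selections against product
# rules, then price the build in real time. All names and numbers are made up.

BASE_PRICE = 2_500.00  # hypothetical base price for a configurable product

OPTIONS = {
    "mast":  {"standard": 0.00, "triplex": 900.00},
    "power": {"electric": 1_200.00, "diesel": 800.00},
    "cabin": {"open": 0.00, "enclosed": 1_500.00},
}

# Product rules "baked in": some combinations are simply not buildable.
INCOMPATIBLE = {("power", "electric", "cabin", "open")}

def price(selection: dict) -> float:
    for opt, choice in selection.items():
        if choice not in OPTIONS[opt]:
            raise ValueError(f"unknown choice {choice!r} for {opt!r}")
    for o1, c1, o2, c2 in INCOMPATIBLE:
        if selection.get(o1) == c1 and selection.get(o2) == c2:
            raise ValueError(f"{c1} {o1} cannot be combined with {c2} {o2}")
    return BASE_PRICE + sum(OPTIONS[o][c] for o, c in selection.items())

build = {"mast": "triplex", "power": "diesel", "cabin": "enclosed"}
print(f"quoted price: ${price(build):,.2f}")  # -> $5,700.00, quoted instantly
```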
The future of B2B is different – it’s about the buyer. The new age of B2B winners will be the manufacturers that create a product experience which gives the buyer an accurate visual configuration along with all of the necessary information to buy now.
M.R. Rangaswami is the Co-Founder of Sandhill.com

Nick Cromydas is the Co-Founder and CEO of Hunt Club, a tech-enabled talent and recruitment company placing leadership roles across the fastest-growing companies in the tech sector.
Based in Chicago, Hunt Club has helped over 1,000 high-growth companies land incredible leaders, helping many of them scale their business from seed funding to unicorn status in a matter of years. By leveraging its technology, talent network, and the power of referrals, Hunt Club helps companies find their next great leader faster.
Prior to Hunt Club, Nick founded New Coast Ventures, a venture studio that started or invested in more than 50 early-stage startups (4 unicorns) with material exits to companies like goPuff, Compass, and more.
M.R. Rangaswami: Is now the right time to hire, given multiple retractions in the SaaS and tech space?
Nick Cromydas: The short answer is, yes.
Historically, downturns are a great time to a) build a business and b) slow down to re-evaluate your business & focus on getting product-market fit right. We are seeing this shift in the hires companies are pursuing now versus the roles they focused on in previous years, when growth for growth’s sake was driving much of the market.
It’s really a tale of two cities. The media continues to share news around reductions, creating fear in the marketplace, but our team is on the ground helping as tech companies continue to hire. For the most part, growth-stage businesses have plenty of cash.
While the period of hypergrowth and high valuations has cooled off, over $643 billion in global venture investment was deployed in 2021, meaning startups are equipped with cash in their reserves. Founders are just being much more thoughtful in how they deploy those dollars.
Looking ahead, $290 billion in venture “dry powder” dollars are sitting on the sidelines with $162 billion earmarked for new investments in 2023. Analysts predict that this capital will be deployed next year, reinvigorating tech and startup capital, thus boosting hiring volume needs in the new year.
My sense is, as inflation eases and investors start to gain confidence again, we should see a surge in hiring across the tech sector in H1 and H2 of 2023. They might not hire at the same velocity we saw in 2021, largely due to the fact that capital markets won’t be as liquid, but strong balance sheets in Q1 and Q2 2022 are helping to sustain normalcy.
M.R.: What are the top talent trends you’re seeing right now?
Nick: Companies want to hire, strategically.
The volume of businesses with intent to hire has not gone down; they are just being much more thoughtful and intentional about spinning up new roles or growing teams too fast. They are also reprioritizing where to deploy dollars, focusing on roles like engineering, product and operations.
In 2021 and the first half of 2022, the scales were flipped. It was a candidate’s market, and employers had to fight and open extra roles to build benches. With the Great Resignation followed by Quiet Quitting, it was an all-out war to secure top talent. That combination created a deep deficit in top talent over the past two years, with many key roles left unfilled. Now, with the unfortunate workforce reductions taking place across industries, there is an opportunity for that talent to be absorbed back into the workforce quickly. As a result, we are having very strategic conversations with our customers about which positions they need to fill now versus where they can hold off to maximize budgets.
This is particularly true in the tech sector, where unemployment most recently fell from 2.2% to 2% in November. There is a healthy tension between the number of open roles and the caliber of talent needed to fill those roles.
The hybrid work model has also driven the need for dramatic innovation, stumping many founders on how to transform their organizations to keep up with changed behavior in a digital-first workforce. This means both search and internal talent acquisition need to change, yet there has been very little innovation in the space that has achieved scale since LinkedIn. Without a playbook on how to build the best teams, Hunt Club has helped growth-stage companies navigate these changes, offering an effective way for them to reach top talent regardless of their own network or geography.
Another interesting point is that compensation levels have not materially changed due to layoffs and current market dynamics, and they do not show signs of coming down to pre-pandemic levels. In some cases, salaries are higher than ever; in others, they seem to be on par with 2021. Geographically, the top 4 markets (SF/Bay Area, NYC, LA, and Boston) have driven 68% of VC investments so far in 2022. These markets continue to hire across state lines due to the remote work flexibility that has persisted since the pandemic.
Demographic changes to the overall workforce are also causing a ripple effect. Aging baby boomers are increasingly retiring from legacy C-level positions. At the same time, a supply deficit of digital-first talent is making it harder for companies to reach and secure the right people to lead. Companies can’t afford to get these hires wrong, making the need for innovation and accuracy critical to how they approach talent acquisition.
M.R.: How are the best leaders handling a looming recession?
Nick: As we’ve scaled Hunt Club over the years, I’ve had the advantage of partnering with and learning from top CEOs and investors who are building the companies of tomorrow while dealing with the challenges of today. The leaders who can tactfully navigate the stages of discomfort and doubt while staying focused on what’s most important, without creating unnecessary panic, end up on top.
When we encounter a downcycle, we’re all looking for ways to reduce spending while trying to keep the focus on strengthening product-market fit, a juxtaposition that can feel daunting. Talent is not one of the areas where good leaders skimp. To withstand recession volatility, the smartest companies are focused on making sure they have strong leadership in place to guide them through and weather the storm.
We are indeed seeing some slowdown across B2C and other sectors, but there are also pockets taking a counterintuitive approach to the macro-market, where hiring is still a top priority. For instance, leadership talent is top of mind for many growth-stage companies and we haven’t seen a drop in those roles. Savvy, forward-thinking founders are actively looking for experienced leaders who have managed through turbulent markets to help sustain and optimize operations. Going back to where we started, early-stage companies recognize that the best time to build a business is often in a downturn. A boomerang market is an opportune time to build foundational teams to drive future growth and scale.
M.R. Rangaswami is the Co-Founder of Sandhill.com

Affectionately known as “Bunny,” Warren Weiss has had decades of success uncovering tech giants in their emerging stages. His work on SilverSpring Networks earned him a spot on the Forbes Midas List of Technology’s Top Investors. He’s a four-time CEO of venture-backed companies and worked with Steve Jobs at NeXT.
Bunny serves on the Board of Binarly for WestWave Capital. He remains a General Partner on past funds at Foundation Capital (where he no longer makes new investments) and serves on the boards of ForgeRock (IPO) and Visier. Bunny is also on the board of the Weiss Scholarship Foundation, a charity that provides education to children in Kenya.
With Warren’s experience and perspective, we figured this would be a quick and insightful read as we head into the new year.
M.R. Rangaswami: What is your take on investing in early-stage companies during a recession?
Warren Weiss: It’s very important to have two years of cash runway to give early-stage companies the time to build a great product and get paying customers. We hear a lot about product-led growth; however, this usually takes longer than most startups plan for.
There is historical precedent that you can build some of the best companies during a market recession. As a venture investor, I have learned you can’t time the market, so if you continue to invest in the very best companies in a down market, there is a good chance you will have good outcomes for your investors.
Customers only buy in a recession when it’s one of their top priorities to either cut cost or drive revenue, which is one of the main reasons only the strong companies survive.
M.R.: How much should early-stage companies try to grow their business versus burning cash in a recession?
Warren: Each new series of fundraising comes with milestones around ARR, SaaS metrics, product-market fit, sales motion, etc. You have to understand as a startup company what these milestones are and have a plan that gives you enough time to achieve these goals.
In a recession market, you won’t raise money from good investors if you don’t build a plan that shows 100 percent growth in the first couple of years. You should try to keep your monthly net cash burn in the $300k to $400k range during those first few years. No cash, no company!
M.R.: What areas of early stage investing are still drawing strong customer demand in the Enterprise market?
Warren: This is likely to evolve the longer this recession lasts. In the Enterprise venture early-stage market, we still see strong momentum in security, cloud infrastructure, analytics and Web3.
M.R. Rangaswami is the Co-Founder of Sandhill.com

Kubernetes usage is experiencing record growth—96% of organizations are using or evaluating Kubernetes. However, modern enterprises are challenged with having to use a disparate number of applications to run Kubernetes projects.
Haseeb Budhani is the CEO of Rafay Systems, which he co-founded in October of 2017. At his previous company, Haseeb spent just as much time wrestling with Kubernetes ops as he did developing the software product he was selling. Knowing there had to be a better way to manage operations for modern infrastructure, Haseeb and Hemanth (his Co-Founder) built their own and founded Rafay Systems.
Rafay’s cloud-native Kubernetes Operations Platform is the industry’s first and only product purpose-built for platform teams that addresses the complexity of K8s, delivering the automation developers and operations want with the right level of standardization, control and governance.
This is what Haseeb’s expertise offers us:
M.R. Rangaswami: Kubernetes has become incredibly popular since its inception. Can you please explain why it gained popularity so quickly, why you founded Rafay and what problem(s) Rafay’s approach to Kubernetes operations and management solves for modern enterprises?
Haseeb Budhani: According to Gartner, enterprises adopt Kubernetes to manage modern applications in the cloud because it automates application deployment and scalability, bolsters application stability and works across public and private clouds, among other benefits. The problem is that Kubernetes has a steep learning curve and is complicated to manage at enterprise scale. My colleagues and I started Rafay because we had previously experienced the negative impact of suboptimal infrastructure automation. We witnessed multiple first-gen companies that were founded to help developers provision Kubernetes clusters for container orchestration, but found that they did little to eliminate the complexities of Kubernetes or to provide governance-focused features to ensure clusters are enterprise-ready.
In working with Rafay’s customers, we have affirmed our foundational hypothesis around how a majority of enterprises are keen to speed up their application modernization journeys, but run into the same set of Kubernetes roadblocks. As a result, many of these enterprises begin a long and costly journey of stitching together a variety of off-the-shelf or OSS tools, along with hiring more resources over time to make Kubernetes work. However, the sheer complexity of enterprise-grade requirements and a shortage of engineers with deep Kubernetes experience makes this a doomed endeavor.
At Rafay we address these challenges with our Kubernetes Operations Platform (KOP). Our vision is to deliver a broad platform that enables IT to automate and control every aspect of Kubernetes operations for enterprise and service providers. Similar to how VMware’s vCenter enables the management and operations of virtual machines across multiple VMware hosts, enterprises need a vCenter-like experience for their Kubernetes cluster fleets: an automation and governance framework that IT teams can leverage to deploy and manage Kubernetes across on-premises and cloud environments. Rafay’s core offering is the industry’s first and only platform that brings together all the capabilities enterprises need to turn Kubernetes from a roadblock to an enabler, including multi-cluster management, security, network and application visibility, configuration management, cost management and more. With Rafay, enterprise platform teams can centralize and standardize the use and management of Kubernetes across the company. Our SaaS-first approach enables enterprises to be operational within days – not months – thus helping to accelerate their digital transformation initiatives, while keeping operating costs low.
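To ground the vCenter-for-Kubernetes analogy, here is a minimal fleet-inventory sketch. It uses the open-source kubernetes Python client rather than Rafay’s actual product APIs, and runs one standardization check, kubelet version skew, across every cluster context found in a kubeconfig:

```python
# Fleet-wide inventory across every cluster in a kubeconfig: one loop, one
# standard check per cluster. Uses the open-source `kubernetes` Python client;
# this illustrates the centralization idea, not Rafay's product API.
from kubernetes import client, config

contexts, _active = config.list_kube_config_contexts()
versions = {}
for ctx in contexts:
    name = ctx["name"]
    api = client.CoreV1Api(
        api_client=config.new_client_from_config(context=name))
    nodes = api.list_node().items
    kubelets = {n.status.node_info.kubelet_version for n in nodes}
    versions[name] = kubelets
    print(f"{name}: {len(nodes)} nodes, kubelet versions {sorted(kubelets)}")

# A simple governance check: is the whole fleet on a single version?
fleet = set().union(*versions.values())
if len(fleet) > 1:
    print(f"version skew across fleet: {sorted(fleet)}")
```

A platform product automates many such checks (policy, access, cost, drift) continuously; the point of the sketch is only that fleet operations reduce to running standardized logic against every cluster from one control point.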
M.R.: Platform teams are quickly rising as the quarterbacks of innovation for companies. What does this proliferation mean for the future of Kubernetes management?
Haseeb: The pace of innovation in cloud technologies is nothing less than astronomical. This pace exerts pressure on enterprise teams to keep up with changing automation frameworks, integrations and the multitude of tools and supporting services required for the enterprise’s cloud journey to be successful. The result is that many enterprises struggle to stay on track with their application delivery roadmaps and timelines. To reduce this complexity and ultimately streamline the deployment and management of Kubernetes and modern applications, platform teams are being instituted to take the helm – no pun intended.
By providing a shared services platform for Kubernetes management and operations, platform teams abstract the operational complexity associated with Kubernetes, with the goal of empowering developers to consume Kubernetes and deploy modern applications in a self-service fashion. Enterprises then gain operational efficiencies by standardizing and automating all the tasks from code to cloud – that is, all the steps between code complete and deploying that completed application in the cloud.
M.R.: How do you see the use of Kubernetes growing, and how should enterprises prepare now in order to leverage it in the best way possible?
Haseeb: As businesses continue to leverage cloud technologies, Kubernetes adoption will radically increase to help companies manage their modern applications more easily. 96% of organizations are already using or evaluating Kubernetes – but these businesses are starting to experience the increasing costs and complexities associated with building Kubernetes management platforms in-house.
More and more, businesses will need to rely on off-the-shelf Kubernetes management offerings to help them manage the chaos. This growing reliance on Kubernetes management and operations solutions is clearly factored into the Kubernetes solutions market size, which is estimated to reach $5.5B+ by 2028.
Many companies attempt to build Kubernetes management platforms on their own. This exercise may seem easy at first as teams get Kubernetes up and running in the lab, but in a production environment, the problem set expands far beyond basic cluster provisioning capabilities.
The complexities of managing Kubernetes intensify as it is used by multiple teams to deploy a growing number of applications across public and private clouds. What’s more, the Kubernetes skills gap is a real issue – tapping into this talent pool is very competitive and, once beginners become experienced, many can take advantage of the open job market.
To prepare for the widespread growth of Kubernetes, companies should seek experienced partners – ideally those who solve Kubernetes automation and governance requirements with a product-based approach – to guide them through their Kubernetes journey and avoid the pitfalls and wasted time many other companies have experienced. By doing so, enterprises can leapfrog their competition and gain the massive competitive advantage that faster innovation delivers.
M.R. Rangaswami is the Co-Founder of Sandhill.com

As the CEO of SlashNext, Patrick Harr directs a workforce of security professionals focused on protecting people and organizations from phishing anywhere.
Before joining SlashNext, Harr served as the CEO of Panzura. There, he transformed the company into a SaaS business, grew annual contract value by 400%, and led it to a successful acquisition in 2020.
Previously, he held senior executive and GM positions at Hewlett-Packard Enterprise, VMware, BlueCoat and was CEO of multiple security and storage start-ups, including Nirvanix (acquired by Oracle), Preventsys (acquired by McAfee), and Sanera (acquired by McDATA).
In a world where hackers and security threats are common considerations for businesses, this conversation was a helpful one to have.
M.R. Rangaswami: What are the core focus areas you see that help solve phishing attacks and other security threats for customers?
Patrick Harr: Hackers are increasingly turning their attention to mobile devices with new tactics, including non-link-based phishing and SMS/text phishing, known as smishing. The latest Verizon MSI report showed that 83% of organizations report mobile device threats are growing more quickly than other device threats.
Along those lines, we recently released the SlashNext State of Phishing Report for 2022, which analyzed billions of link-based URLs, attachments, and natural language messages in email, mobile and browser channels over six months. We found more than 255 million attacks, marking a 61% increase in the rate of phishing attacks compared to 2021. Also, SlashNext detected an 80% increase in threats from trusted services such as Microsoft, Amazon Web Services, or Google, with nearly one-third (32%) of all threats now being hosted on trusted services.
These findings show that legacy security strategies – including secure email gateways, firewalls, and proxy servers – are no longer stopping threats, especially as bad actors launch their attacks from trusted services and business and personal messaging apps.
SlashNext helps to protect the modern workforce from such malicious messages across all digital channels. SlashNext’s Integrated Cloud Messaging Security is built for email, browser, mobile, and brand to protect organizations from data theft and financial fraud breaches. The SlashNext Complete™ integrated cloud messaging security platform utilizes patented AI SEER™ technology with 99.9% accuracy to detect threats in real-time and prevent users from phishing, smishing, social engineering, ransomware, and malicious file downloads.
M.R.: What industry trends are having the greatest security impacts for the modern workforce today?
Patrick: Cybercriminals are increasingly moving their attacks to mobile and personal communication channels to reach employees. As a result, the single biggest threat to any company is no longer machine security – it is the human security factor due to the explosion of personal employee data in the newly hybrid workforce. These blind spots are becoming more apparent as organizations adopt new channels for personal messaging, communications, and collaboration.
In fact, SlashNext recorded a 50% increase in attacks on mobile devices this year, with scams and credential thefts at the top of the list for payloads. Such attacks on humans will continue to increase because humans are fallible and they get distracted, making it hard for people to easily identify many threats as being malicious.
It all comes down to the question of how do I validate that you really are the person I think I am communicating with? Or is this the trusted file or corporate website link that I assumed it was before clicking on it? This problem is growing because more people are working on the same device for their business tasks and their personal lives simultaneously. I only see this trend accelerating in the coming year.
M.R.: What are you working on to help close this mobile security gap?
Patrick: In October, we launched Mobile Security Personal and Home apps for BYOD and Family use to protect mobile device owners against the growing threat of phishing and fraud attempts on SMS/text, links, and apps. These apps provide total privacy for users’ data.
The personal BYOD edition can be purchased by a business for its employees, as either a managed app or an unmanaged option that preserves user data privacy for BYOD. The Home edition is an annual subscription covering up to five mobile devices that can be shared across family members and is not tied to any corporate business accounts.
SlashNext has the only on-device solution to block link-based and non-link-based SMS phishing attacks, which is the first stage of attack in a Business Text Compromise (BTC). As a result, SlashNext Mobile Security gives users another layer of security on their personal devices while helping businesses to protect their company data and maintain employee privacy.
M.R. Rangaswami is the Co-Founder of Sandhill.com

Edward Chiu is the CEO and Co-Founder of Catalyst, the market’s fastest growing Customer Success Platform (CSP) built by CS Leaders for CS teams. Through his leadership, Catalyst is helping enterprise companies drive revenue growth by giving them a clear line of sight into customer health and upsell opportunities. Previously, Edward built and led the Customer Success team at DigitalOcean, helping the company grow to $150M+ in revenue and one of the largest publicly traded cloud providers.
Catalyst has consistently been named a leader across multiple lists, from LinkedIn’s Top 50 Startups, Crain’s Best Places to Work, and Built In’s Best Places to Work to, most recently, Inc.’s Best in Business 2022.
M.R. Rangaswami: Why is Customer Success (CS) so important in this economy and how has it become the new revenue growth engine?
Edward Chiu: We’re experiencing one of the largest stock market drawdowns in the past couple of decades. Even some of the best companies in the world you wouldn’t expect, like Amazon and Salesforce, have done layoffs. Back in the ’07-09 recession, a study by a consulting group highlighted that companies focused on customer experience significantly outperformed the S&P 500, while those that didn’t saw their returns go into free fall.
Customer Success is important in this economy because it helps to ensure your existing customers, the ones keeping your business afloat, are satisfied with the products and services they have purchased. This ultimately leads to much-needed retention during a down market and, most importantly, organic revenue growth. There’s nothing cheaper than generating repeat business and word-of-mouth growth through your existing champions. CS has become an increasingly important part of many companies’ growth strategies, as they also leverage it as a differentiating “product” from their competitors.
Having spoken to hundreds of Chief Customer/Revenue/Executive Officers in the past couple of months, the number one focus they are all shifting to is creating “growth through their existing customers”. Leaders are frantically trying to organize all of their customer data from disparate sources and find immediate opportunities that Customer Success and Sales can tackle jointly.
M.R.: Sounds like customer success is becoming the new growth engine. How do executives go about creating a program that truly scales this process?
Edward: Existing customers have ALWAYS been one of the most important growth drivers, but because of this market downturn, they are now becoming the primary one.
Most executives think this shift is incredibly daunting, but it’s actually quite simple. To start, CS leaders have to be expeditiously focused on data-driven customer success.
They need to first identify the specific metrics and data points that are most relevant to their customers’ success. This may include data on feature adoption, level of engagement with company reps or content, successful outcome moments, etc. Next, they need to work with other stakeholders in the company to identify where this data is captured; generally, it’s in some kind of data warehouse like Snowflake or Redshift.
Most leaders tend to depend on data science teams or engineers to analyze this data using data visualization tools, but that creates a lot of dependency on other departments. That’s where Customer Success Platforms (CSPs) like Catalyst.io come in. CSPs are unique solutions that aggregate all of your data from Salesforce, your data warehouse, and ticketing software like Zendesk, and converge it into a single pane of glass. From there, you can quickly segment your customers by adoption and automatically send customized interactions to drive upsells without relying on manual labor from your reps.
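As a simplified illustration of that single-pane-of-glass idea at the data layer, the sketch below joins per-customer signals from mocked CRM, warehouse, and ticketing sources into one health score and segments the book of business. The field names, weights, and thresholds are invented for the example and are not Catalyst’s scoring model:

```python
# Join per-customer signals from mocked CRM / warehouse / ticketing sources,
# compute a simple weighted health score, then segment for CS outreach.
# Weights and thresholds are illustrative assumptions, not Catalyst's model.

crm       = {"acme": {"arr": 120_000}, "globex": {"arr": 45_000}}
warehouse = {"acme": {"feature_adoption": 0.82}, "globex": {"feature_adoption": 0.31}}
ticketing = {"acme": {"open_tickets": 1}, "globex": {"open_tickets": 9}}

def health(customer: str) -> float:
    adoption = warehouse[customer]["feature_adoption"]            # 0..1
    ticket_load = min(ticketing[customer]["open_tickets"], 10) / 10
    return round(0.7 * adoption + 0.3 * (1 - ticket_load), 2)

for name in crm:
    score = health(name)
    segment = ("expansion candidate" if score >= 0.7
               else "monitor" if score >= 0.4 else "at risk")
    print(f"{name}: health={score}, ARR=${crm[name]['arr']:,} -> {segment}")
```

Here “acme” scores 0.84 and surfaces as an upsell candidate while “globex” scores 0.25 and is flagged at risk; the point is that once the sources are converged, segmentation and automated outreach become straightforward queries rather than engineering projects.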
M.R.: What’s a trend you’re predicting in the next 6 months given the focus from new business acquisition to driving revenue growth from existing customers?
Edward: I was recently chatting with the Chief Financial Officer of a $100M+ software company, and the most illuminating thing about our conversation was how important he considered customer success as a revenue driver on their path to going public. Retention was an afterthought for him because churn isn’t a problem for their company, but effective collaboration between the go-to-market teams to generate targeted upsells is an area that requires deeper investment. Immediately following our call, he introduced me to their Chief Customer Officer and VP of Product to learn more about data aggregation and improving their collaborative process to drive upsells.
In all my time in CS, it has been incredibly rare to see finance leaders so invested in this topic, let alone be the first touch point in creating motions for change. The trend we are witnessing, and will continue to see, is customer success becoming the center of organizations. Product development will start with what your existing customers are looking for, marketing initiatives will revolve around the customers’ voice, and sales will partner closer than ever with CS to account for the majority of the business’s revenue.
It’s an exciting time for businesses to focus on Customer Success; most importantly, it will be the primary focus that leads businesses out of this economic downturn.
M.R. Rangaswami is the Co-Founder of Sandhill.com

Kashyap has a unique history of being a fourth-time entrepreneur, previously selling companies to OpenTable and Future Group. He is also a best-selling author and investor. Kashyap started his first company as a college student at IIT Bombay and sold it to a Silicon Valley firm.
In his current role, Kashyap is focused on driving technology that powers the gig economy and how the workforce is shifting to flex work (including field service, delivery, and rideshare drivers).
Our conversation with Kashyap was timely, given the impact the holidays have on tech in both traditional commerce and e-commerce.
M.R. Rangaswami: What are some of the technology innovations that will most impact the logistics market this upcoming holiday shopping season, and why should consumers care?
Kashyap Deorah: From what I am seeing, the bigger bottleneck to fulfillment is workforce supply rather than demand. Innovations that help retailers and ecommerce companies find warehouse and delivery workers will have a high impact on fulfilling consumer demand. This comes on top of the global supply chain woes and rising oil prices. In some ways, this is probably the first supply-driven recession, rather than a consumption- or demand-driven one.
More specifically, there are flex work marketplaces where retailers can post jobs, and these platforms match nearby workers with the right skills, and then manage the end-to-end experience from making sure they show up, to spending the right amount of time, charging for the right time and distance, and so on. Flex workers are being hired for warehouse, store, delivery and other jobs that facilitate last mile logistics. Technology innovations that accelerate this trend will directly impact online shopping this holiday season.
M.R.: Walk us through the past, present and future of gig work, and explain how logistics technology builders have evolved alongside. Also where do you predict the gig economy will go next?
Kashyap: Gig work is quite an irony of our times. When the Department of Labor first came up with the Fair Labor Standards Act (FLSA) in 1938, at the end of the Great Depression, President Franklin Roosevelt and team never imagined that one day there would be a class of businesses driven by workers who are working for themselves, yet integrally driving the entire business that they serve. Now, 2% of America’s GDP (half a trillion dollars) relies on the gig economy and is growing fast. Businesses, workers and the government are slowly but surely coming to terms with this reality.
With the Great Resignation, the gig economy is set to grow to an even higher portion of the GDP. The trend goes well beyond rideshares, food and grocery deliveries, and affects all industries that employ hourly, daily, weekly wage workers. The gig model brings meritocracy and the power of the Internet to a labor market that has been stuck in the archaic model of fixed hourly rate and time-and-a-half overtime paradigm since the 1930s. The post-WWII paradigm of employer-sponsored healthcare will also need to be revisited.
M.R.: You have mentioned that last mile logistics technology and operations require an overhaul – what does this overhaul entail and how will it affect both the gig workers whose job it is to supply last mile delivery and services, and the consumers who have come to expect “instant commerce”?
Kashyap: Logistics tech is long in the tooth. Commerce has focused on deploying technology to win more customers and orders on the front-end. It is a recent phenomenon that fulfillment (last mile logistics) has gone from being the back-end to the key value proposition, or the front-end. In other words, consumers put convenience ahead of product and price when they choose where to shop. Old logistics tech that was built for scheduled delivery by a rostered workforce is finding it hard to meet the moment. A few on-demand apps are gaining market share at the expense of the incumbents due to their modern logistics tech. The biggest overhaul in logistics technology would be to enable gig workers to fulfill on-demand and same-day deliveries with the help of live location and mapping tech. From route planning, to assignment, to seeing the order through to delivery, it all needs to be orchestrated in real-time.
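As a minimal illustration of real-time assignment, the sketch below matches a new delivery to the nearest available gig worker by great-circle distance. The worker data and the greedy nearest-match policy are illustrative assumptions; production dispatch systems also weigh live traffic, ETAs, capacity, and fairness:

```python
# Minimal on-demand dispatch: match a new delivery to the nearest available
# worker by haversine (great-circle) distance. Illustrative sketch only.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

workers = [  # (id, lat, lon, available) -- hypothetical live locations
    ("w1", 37.7749, -122.4194, True),
    ("w2", 37.8044, -122.2712, True),
    ("w3", 37.7858, -122.4064, False),
]

def assign(order_lat: float, order_lon: float):
    """Return (distance_km, worker_id) of the closest free worker, or None."""
    candidates = [(haversine_km(order_lat, order_lon, lat, lon), wid)
                  for wid, lat, lon, free in workers if free]
    return min(candidates, default=None)

match = assign(37.7936, -122.3965)  # a new order placed in downtown SF
if match:
    print(f"dispatch {match[1]} ({match[0]:.1f} km away)")
```

In a live system, this matching step runs continuously against streaming worker locations, which is exactly the live-location-and-mapping orchestration described above.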
M.R. Rangaswami is the Co-Founder of Sandhill.com

As CEO and Founder of BeyondView, Kul Wadhwa is revolutionizing the way commercial properties are marketed and managed. The company’s immersive, game-like experience is disrupting the commercial real estate landscape by accelerating the property leasing management process at a fraction of current costs.
We talked with Kul about how digital twin tech will help property management and the enhancements we’ll be seeing more of in the commercial leasing space.
M.R.: What direction is real estate technology headed?
Kul Wadhwa: In an industry often considered immovable, or at least slow to accept change, technology will increasingly have an impact on real estate. More specifically, the future of real estate technology will involve digital twins. According to a recent report, the digital twin market is expected to reach $72.65 billion by 2032, growing at over 22% annually.
Utilizing artificial intelligence (AI) and machine learning, real estate companies can digitize a space at a fraction of current costs, creating a live asset. The result is a building-specific database that is highly customizable, presenting easily accessible data that also allows for properties to be reimagined in no time at all. From my perspective as a technology-minded entrepreneur, this change is not only exciting but extremely necessary.
M.R.: How does digital twin technology assist with resource planning and property management?
Kul: Data that is expected to guide the corporate decision-making process is often presented in a manner that is difficult to understand, and additional digital and software solutions make the process even more bloated and complicated. By leveraging digital twin technology, decision makers can readily retrieve all relevant information about a building and its assets in a form that is both intuitive and contextualized. This database presents a visualization of data that is easily accessible in real time from a desktop or even a phone and is available for quick redeployment. Additionally, the information presented facilitates a streamlined decision-making process and eliminates physical hurdles and constraints.
M.R.: How does real estate technology change the leasing process now and into the future?
Kul: While real estate renderings and virtual walkthroughs of old have been disappointing and unrealistic, new technologies offer immersive, photorealistic, and gaming-like experiences that are proven to accelerate a space’s leasing cycle.
It may sound futuristic that there are technologies that allow stakeholders to join virtual tours of digitally reimagined spaces that look and feel like you are physically present at a remote location. But these technologies exist today. Over time, platforms that offer embedded communication tools, allowing brokers, prospective tenants, and property managers to collaborate in real time and demonstrate a space’s unmet potential, will accelerate the leasing and even the buildout process. Especially in downturn markets, technologies like these will prove essential.
M.R. Rangaswami is the Co-Founder of Sandhill.com

While the median EV/Revenue multiple for 3Q22 was 6.3x, down 61% year-over-year, the Software Equity Group’s Q3 Public Market Update finds a silver lining in this quarter’s numbers.
Even as public markets continue to experience volatility, valuations for public SaaS companies have effectively been flat month-over-month since May (hovering around 6.0x EV/Revenue).
Despite the broader market decline, the financial health of businesses in the SEG SaaS Index continues to remain strong in several areas:
- Companies continue to grow larger in scale. Total median revenue reached an impressive $602 million in Q3.
- Median revenue growth has maintained a steady pace in the mid-20% range. In 3Q22, the Index boasted a healthy TTM revenue growth rate of 26.4%, up from 24.9% in 3Q21.
- The Index’s 71.3% median gross profit margin remains strong and is generally consistent with prior quarters.
Here are three highlight updates from the 2022 Q3 Report.
I) Revenue Performance
Businesses in SEG’s SaaS Index grew larger in 3Q22, with total median revenue increasing to $602 million. The Index posted a strong median TTM revenue growth rate of 26.4%, up from 24.9% in 3Q21.
However, the median growth rate has declined modestly from 1Q22’s recent peak. Also notable is a shifting distribution of revenue growth rates among companies in the Index. Many companies growing TTM revenue faster than 40% in 3Q21 have now fallen into the lower cohorts of 20-30% and 30-40% TTM revenue growth.
II) Public Market Multiples
EV/Revenue multiples have dropped significantly over the last year, declining from 16.0x in 3Q21 to 6.3x in 3Q22. Incredibly, nearly 71% of companies in the Index traded at greater than 10x in 3Q21, which was the market peak and likely the height of unsustainable irrational exuberance.
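That headline decline is simple arithmetic on the two median multiples, as the quick check below shows:

```python
# The ~61% YoY drop falls straight out of the two median EV/Revenue multiples.
peak_3q21, current_3q22 = 16.0, 6.3
decline = (peak_3q21 - current_3q22) / peak_3q21
print(f"YoY change in median multiple: -{decline:.1%}")  # -60.6%, i.e. ~61%
```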
Companies trading above 10x EV/Revenue in 3Q22 generally outperformed on revenue growth, gross profit margin, and/or EBITDA margin.
III) Product Category Financial Performance
Communications & Collaboration, Dev Ops & IT Management, Sales & Marketing, and Security all posted median TTM revenue growth rates higher than the Index median of 26.4% in 3Q22.
Of all the product categories, Human Capital Management and BI & Analytics posted the most significant YoY increases in TTM revenue growth, increasing from 10.9% to 21.7% and from 13.8% to 22.2%, respectively.
While no product category was safe from YoY decline in 3Q22, some held up notably better than others. Security and ERP & Supply Chain maintained the highest EV/Revenue multiples compared to other categories, primarily due to their mission-critical nature and crucial customer reliance on the product offerings.
Interestingly, the Vertically Focused product category has significantly declined in median EV/Revenue, falling 19% below the Index median.
The category’s median gross profit margin of 58.8% was considerably lower than the Index median (71.3%). This notably low gross profit margin is likely the driving factor behind this cohort’s YoY decline. It should not serve as a representation of the many vertically focused companies that post stronger gross profit margins.
Generally, vertically focused companies possess more attractive operating metrics due to the highly specialized nature of their product offering, resulting in more attractive valuation multiples.

In the nascent and uncharted Web3 world of NFTs, crypto and the metaverse, Rick Farnell and his teams at Appdetex are doubling down on brand protection.
Many of the world’s largest brands, including four of the five most valuable global businesses, trust Appdetex to process and analyze massive volumes of data from across the internet to detect and address brand misuse.
Here, Rick walks us through how organizations of all sizes can prepare and protect their brand assets from nefarious digital threats.
M.R. Rangaswami: What steps can businesses take to improve how they protect their brand in a world where brand misuse is proliferating at a record pace, online and off?
Rick Farnell: Businesses today face a tsunami of digital and physical brand threats – including fraud, copyright and trademark infringement (both nefarious and well-intentioned), brand impersonation and identity theft, digital and digital-plus-physical counterfeiting, and other types of brand misuse – all of which present a clear and present danger. To address these threats, companies of all sizes should take a proactive approach to securing and protecting their brand and IP online.
As a first step, companies should implement programs and policies to actively monitor how their brand is being used across digital channels. Monitoring the brand will not only protect companies, but also provide important insights into trends and new potential marketing and engagement opportunities. Once monitoring is established and brand threats have been identified, companies must take action to address instances of brand misuse. This includes removing and remediating brand abuse via brand protection platforms, taking legal action and even turning to law enforcement.
Forward-looking brands are also going a step further by thinking beyond current brand misuse and taking measures to prevent future abuse. This can include securing relevant domain names, as well as analyzing and correlating previous instances of brand misuse to identify trends. With the right technology and processes in place, businesses can get ahead – and stay ahead – of the tidal wave of digital brand threats on the horizon.
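As one small, hedged illustration of what automated monitoring can look like in practice, the sketch below flags lookalike (typosquatted) domains by edit distance to a protected brand name. The brand, domains, and threshold are hypothetical, and this simple heuristic is not Appdetex’s actual detection pipeline.

```python
# Flag observed domains whose label is within a small edit distance of a brand.
# Illustrative heuristic only; real brand-protection platforms use far more signals.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via a single-row dynamic program."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,         # deletion
                dp[j - 1] + 1,     # insertion
                prev + (ca != cb)  # substitution (free if chars match)
            )
    return dp[len(b)]

BRAND = "acmecorp"                      # hypothetical protected brand
OWNED = {"acmecorp.com"}                # domains the brand already controls
observed = ["acmecorp.com", "acrnecorp.com", "acme-corp.shop", "example.org"]

for domain in observed:
    label = domain.split(".")[0].replace("-", "")
    if domain not in OWNED and edit_distance(label, BRAND) <= 2:
        print(f"possible lookalike: {domain}")
```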
M.R.: How should Web3 factor into considerations for brand protection in the near term and longer term?
Rick: As Web3 quickly gains in popularity and usage, it will present a host of new challenges for brand protection. In this new mixed reality of the internet, malicious actors are already adapting quickly to profit off of brands, creating a new class of nefarious behavior.
In the near term, most Web3 threats will predominantly still occur in the Web2 world, such as the theft of crypto wallets and NFTs through phishing attacks. Sophisticated fake communities and deceptively enticing promotions are leading to fraud and are showing up in highly integrated social media, mobile applications, paid internet search and imposter Web2 websites. These fraudulent promotions are intended to entice Web3 enthusiasts to engage and feel like part of “something big.” For example, illicit actors may make claims of a private release of an NFT collection or metaverse property as a ruse to walk away with credentials and assets.
Longer term, there are a handful of steps brands should take to ensure protection. To start, companies should begin actively monitoring Web3 channels to see how their brand is being used. Next, they should try to secure relevant Web3 domain names. Web3 domains are very different from the Web2 world as there is no centralized authority like ICANN. This means that once a bad actor holds a Web3 domain that infringes on trademarks, it’s extremely difficult to get that domain transferred to the rightful owner.
M.R.: As C-level executives finalize their budgets for 2023, what do you recommend they focus on?
Rick: Budgeting for 2023 is well underway for most enterprises. With looming reports of economic downturn, many businesses may be tempted to slash budgets across the board – and brand protection is no exception. What business leaders need to realize is that brand misuse is not going away – in fact, as the digital landscape continues to widen, online brand risks will become increasingly prevalent, which can lead to serious ramifications on a business’s bottom line.
My recommendation for C-level executives is to not take their foot off the pedal when it comes to protecting their brand online. In the year ahead, businesses will need to be increasingly proactive in their approach to protecting their brand, IP, copyrights and more in the evolving digital universe, or risk the consequences.
M.R. Rangaswami is the Co-Founder of Sandhill.com

What are real-time analytics, and what role do they play in a company’s profit, forecasting and market positioning?
After spending eight years at Facebook, engineering the infrastructure systems for data management, Venkat launched Rockset – and has further advanced his contribution to understanding and harnessing real-time analytics.
M.R. Rangaswami: What is real-time analytics and how does it fit into a company’s data strategy?
Venkat Venkataramani: Real-time analytics is all about using data as soon as it’s produced to answer questions, make predictions, understand relationships and automate processes. Modern data applications need to process different types of data from multiple sources to initiate specific actions in real-time, such as e-commerce personalization, IoT automation, logistics and delivery tracking, gaming leaderboards, and more.
Until recently, it has been challenging to deliver analytics at the speed and scale required by modern applications. Our company’s real-time analytics platform connects to your data, ingests and indexes any changes in real time, and provides sub-second SQL and data APIs without consuming unnecessary compute, enabling organizations to build data applications at cloud scale.
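To make the ingest-then-query pattern concrete, here is a minimal, generic sketch using an in-memory SQLite database as a stand-in for a real-time analytics backend. Rockset’s actual APIs and indexing differ, so treat this purely as an illustration of events becoming queryable via SQL the moment they arrive.

```python
# Generic ingest-and-query pattern: events are indexed as they arrive
# and are immediately queryable with SQL. SQLite is only a stand-in here.
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (user_id TEXT, action TEXT, ts REAL)")

def ingest(user_id: str, action: str) -> None:
    """Index an event the moment it is produced."""
    db.execute("INSERT INTO events VALUES (?, ?, ?)", (user_id, action, time.time()))

# Events stream in...
ingest("u1", "view_item")
ingest("u1", "add_to_cart")
ingest("u2", "view_item")

# ...and are queryable immediately, e.g. for personalization:
rows = db.execute(
    "SELECT user_id, COUNT(*) FROM events WHERE action = 'view_item' GROUP BY user_id"
).fetchall()
print(rows)  # e.g. [('u1', 1), ('u2', 1)]
```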
M.R.: How are real-time analytics helping organizations amid an economic downturn?
Venkat: The two most important recession-proofing tactics for enterprises are the ability to dial down operating costs through real-time process automation while simultaneously accelerating growth through digital customer experiences.
According to a recent study, US businesses have the opportunity to realize the largest overall impact on revenue increase – potentially $2.3 trillion – from leveraging real-time data analytics, with 73% of manufacturers reporting more efficient rollout processes and 67% of financial firms reporting greater efficiencies. Much like electricity delivers value on a pay-per-usage basis, real-time analytics is quickly becoming the latest cloud innovation to provide fast analytics on real-time data on a consumption basis.
M.R.: Where do you see real-time analytics going in the years to come?
Venkat: 2022 is the year that real-time analytics is going mainstream. We’ve seen strong growth in real-time data over the last several years as more companies are deploying modern, cloud-native data stacks. However, given the current bear market, the efficiency and performance of data systems are more important than ever.
The ballooning costs of data warehouses that were not built for real-time data have driven the shift to real-time databases that are more optimized for critical use cases, including personalized experiences, security analytics, fleet management and leaderboard gamification, just to name a few.
M.R. Rangaswami is the Co-Founder of Sandhill.com

A combat veteran and Purple Heart recipient who served in U.S. Army Special Operations prior to shifting his focus to the cyber domain, Jeffrey J. Engle has had a fascinating career path that ranges from hunting for viruses in Kazakhstan to skydiving with the British Special Air Service.
Now the chairman and president of Conquest Cyber, Jeffrey is also the inventor of a cutting-edge Cyber Resiliency Ecosystem Platform & the CEO of 1st Quadrant Services, a Managed Cybersecurity & Compliance Provider.
One of Jeffrey’s focuses has been his work with Native American tribes to improve their cybersecurity. Recent cyberattacks on Native American tribes — including a 2021 attack on the Mandan, Hidatsa and Arikara Nations that shut down their IT systems — have underscored vulnerabilities to bad actors and highlighted the need for tribes to invest in training and security.
This was a fascinating conversation we’re happy to share with you.
M.R. Rangaswami: What makes Native American tribes so vulnerable to cyberattacks?
Jeffrey J. Engle: Tribes are no more or less vulnerable to cyberattacks than any other entity across the United States. What makes them unique and, in turn, subject to increased risk is the fact that tribal nations have a broad and diverse attack surface.
Tribal nations have inherently governmental responsibilities, providing health care systems, law enforcement, community support and housing. In addition, they have business interests that are as varied as gaming and defense contracting. This broad attack surface, coupled with many instances of poor telecommunications infrastructure and access to technology during early years, results in a perfect storm of complexity and resource limitation.
M.R.: How do the complex relationships among tribes, the government and law enforcement contribute to the problem?
Jeffrey: Every tribe operates differently, but indigenous data sovereignty appears to be a universal consideration for tribal nations. When it comes to law enforcement, criminal justice information services (CJIS) requirements also come into the fold. Beyond that, any interaction of consequence that makes the news increases a tribe’s risk profile, as adversaries seek to further undermine the challenging dynamic of the shared history.
M.R.: What can tribes be doing now to improve their cybersecurity?
Jeffrey: It is critical that all tribal nations and their interests (e.g. business units or healthcare providers) understand where they are in relation to where they want to be. Using the NIST Cybersecurity Framework to determine those coordinates in a structured way is a smart approach. The NIST CSF is best overlaid with a maturity model to distinguish between achieving a desired outcome at a point in time and being able to count on that outcome being achieved consistently over time.
This allows application of industry-specific compliance requirements (e.g. HIPAA or DFARS), basic cyber hygiene, or prescriptive cyber insurance requirements to get progress started. Once you have a clear picture, we always recommend eliminating the tech or processes you do not need, simplifying the things you do, and automating everything you can.
This frees up the team to think, plan and do, rather than just react to the situations that hit them that day.
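As a rough illustration of the maturity-model overlay Jeffrey describes, the sketch below scores the NIST CSF’s five functions on a simple 1–5 scale and surfaces the largest gaps first. The scores, targets, and scale are hypothetical assumptions, not a prescribed methodology.

```python
# Overlay a simple 1-5 maturity scale on the five NIST CSF functions
# and rank the gaps between current and target state. Values are hypothetical.

current = {"Identify": 2, "Protect": 3, "Detect": 1, "Respond": 2, "Recover": 1}
target  = {"Identify": 4, "Protect": 4, "Detect": 3, "Respond": 3, "Recover": 3}

gaps = {fn: target[fn] - current[fn] for fn in current}
for fn, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{fn:<8} current={current[fn]} target={target[fn]} gap={gap}")
```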
M.R. Rangaswami is the Co-Founder of Sandhill.com

Gaurav Bhasin, Managing Director, Allied Advisers
According to Allied Advisers’ most recent report, many emerging SaaS firms are contemplating a Vertical SaaS model to target a specific niche, allowing them to focus better on client demands and making them easier to market.
Vertical SaaS is seeing the emergence of a growing number of start-ups with a smaller but more focused TAM (as compared with Horizontal SaaS) and more capital-efficient business models.
COVID-19 severely impacted some Vertical SaaS niche markets but accelerated overall digital transformation across industries, followed by the realization that standardized solutions will not suffice.
We see continued investor interest in Vertical SaaS due to high growth prospects supported by strong business fundamentals, along with better performance on multiple metrics than peer Horizontal SaaS companies.
4 DRIVERS OF VERTICAL SAAS ARE:
1. Industry-specific solutions promote growth
More robust and focused solutions that appeal to the client’s and the specific industry’s needs.
‒ Products and solutions are constantly updated in response to changing regulatory needs.
2. Higher upsell opportunities help growth
Immediate and significant value to companies looking for focused solutions; increased upsell opportunities due to the demonstrated value.
‒ Studies suggest upselling costs only ~24% of the cost of acquiring a new customer.
3. Lower S&M cost drives capital-efficient growth
i. Focused and cost-effective approach to marketing due to narrowly defined customer requirements.
‒ Fewer marketing resources required and faster customer acquisition achieved.
ii. Blossom Street Ventures estimates that vertical companies can achieve up to 8x cheaper CAC than horizontal peers (see the sketch after this list).
4. Increased customer trust drives demand
Having knowledge of the market and networking with key players acts as a distinct advantage and builds customer confidence.
‒ Working closely with experts allows them to keep up with industry requirements and technical issues, leading to greater reliability, higher customization and better performance on industry-specific metrics and KPIs.
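The sketch below works through the two efficiency claims cited above (upselling at ~24% of new-customer CAC, and vertical CAC up to 8x cheaper than horizontal). The dollar figures are illustrative assumptions, not data from the report.

```python
# Worked example of the two efficiency claims, with illustrative numbers.
horizontal_cac = 8000                  # hypothetical cost to acquire one new customer
vertical_cac   = horizontal_cac / 8    # "up to 8x cheaper CAC" -> $1,000
upsell_cost    = 0.24 * vertical_cac   # upsell at ~24% of new-customer CAC

print(f"Vertical CAC:       ${vertical_cac:,.0f}")
print(f"Cost to upsell:     ${upsell_cost:,.0f}")
print(f"Savings per upsell: ${vertical_cac - upsell_cost:,.0f}")
```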
2 CHALLENGES OF VERTICAL SAAS ARE:
1. Companies are typically focused on a smaller niche market, making it challenging to find new leads, and are exposed to adverse events impacting their target sector; e.g. COVID-19 severely impacted VSaaS companies in sectors like hospitality and travel, while helping those in collaboration.
2. Lower TAM can be a key challenge with limited options to diversify; companies overcome this by providing additional offerings to their existing customer base; e.g. Veeva expanded its product offerings into the healthcare sector, rapidly increasing its growth and available TAM.
For further insights, see Allied Advisers’ full fall report below.

This week we’re keeping it short and snappy with SEG’s SaaS Index Overview, released August 2022.
Public SaaS company valuations continue to hover above July lows, reaching 6.7x EV/Revenue in August compared to July’s 6.0x multiple.
This volatility will continue as the tug of war between fighting inflation and avoiding a recession plays out.
Here are three highlights from the overview:
- Human capital management continues to lead all other product categories, as the cohort gained 4.4% from July to August after being down 21% YTD in July.
- Communications and Collaboration posted the lowest YTD price performance, with equities in the category declining 56.7% thus far in 2022. The product category has continued to come back to earth after it experienced exorbitant highs due to the rapid acceleration of remote work experienced during COVID.
- Qualys (10.7%), PowerSchool (9.3%), and Alteryx (3%) are three of the five companies posting positive YTD returns.
Here is SEG’s full report:

Fast approaching are the days when we can identify congestive heart failure through AI and a smartphone app – and Tamir Tal, CEO of medical speech analytics company Cordio Medical, is at the forefront of this development.
The app, called HearO and still in clinical testing, has patients open it and speak the same sentence into their phones daily; AI then compares each day’s vocal signature with a baseline. If altered fluid states are detected, an alert immediately sends a message to their clinician.
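To give a rough sense of the baseline-comparison idea (and only that), the sketch below extracts a crude spectral signature from a recording and alerts when a new day’s signature drifts past a threshold. The feature extraction, signals, and threshold are placeholder assumptions; Cordio’s actual signal processing and AI models are proprietary and far more sophisticated.

```python
# Schematic baseline comparison: alert when today's vocal signature drifts
# too far from the enrolled baseline. All signals here are synthetic stand-ins.
import numpy as np

def vocal_signature(audio: np.ndarray) -> np.ndarray:
    """Placeholder features: a normalized magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(audio))
    return spectrum / (spectrum.sum() + 1e-9)

def check_daily(baseline: np.ndarray, today_audio: np.ndarray, threshold: float = 0.05) -> None:
    drift = np.linalg.norm(vocal_signature(today_audio) - baseline)
    if drift > threshold:
        print(f"ALERT clinician: vocal drift {drift:.3f} exceeds threshold")
    else:
        print(f"OK: drift {drift:.3f} within normal range")

t = np.linspace(0, 1, 8000, endpoint=False)
enrollment = np.sin(2 * np.pi * 120 * t)            # hypothetical baseline clip
baseline = vocal_signature(enrollment)

check_daily(baseline, np.sin(2 * np.pi * 120 * t))  # unchanged voice -> OK
check_daily(baseline, np.sin(2 * np.pi * 95 * t))   # shifted spectrum -> alert
```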
Before Cordio Medical, Tamir took his 20+ years of experience in operations, finance and development to serve as COO at Neovasc Medical Ltd., a cardiovascular medical device company that developed the Neovasc Reducer™ for the treatment of refractory angina.
This was an eye-opening conversation about the future of medical-grade portable devices:
M.R. Rangaswami: Do you believe voice recognition technology will be a leading at-home diagnostic tool?
Tamir Tal: Voice recognition technology will soon be a leading at-home tool because speech recognition technologies have two significant advantages:
1) They are easy to use and induce very high patient compliance – meaning patients find the app and their own smartphone easy to use and do not discontinue use. This level of compliance is significant because at-home monitoring is woven into the patient’s life for many years and needs to become part of the patient’s daily habits for true adherence.
2) Speech and voice are very personal and allow practitioners to generate clinical information that cannot be obtained through other measurement tactics. For example, a mother (and MD) can detect changes in health just by listening. This is a good analogy for our speech processing technology, which is based on signal processing and AI.
M.R.: What do investors need to understand about the impact of global medical grade portable medical devices?
Tamir: Medical grade mobile as a medical device solution is the future of healthcare. The ability to transform standard mobile devices, such as smartphones, into advanced medical diagnostic tools allows for a cost-effective, easy-to-use health monitoring device for patients. Overall, the capital expense is minimal, and the distribution and adoption are easy for patients and providers.
M.R.: Since telemedicine has rapidly picked up since the Covid-19 pandemic, how do you believe portable medical devices will impact office visits?
Tamir: Mobile-as-a-medical-device solutions have enabled better patient care from healthcare practitioners. The ability to monitor patients daily, or almost constantly, provides near-instantaneous diagnostics. Hospitalization and clinic visits are becoming less common, as practitioners can measure the severity of a patient’s condition remotely by using telemedicine and portable medical devices.
M.R. Rangaswami is the Co-Founder of Sandhill.com

As the founder and CEO of data lineage platform MANTA, Tomas Kratky is helping organizations fix blind spots to gain full control and visibility of their data pipelines. Prior to MANTA, he led Profinit, one of the most successful consulting businesses in Central Europe, and he has 20+ years of experience as an accomplished software developer and IT consultant.
Here we talk about everything from raises to data lineage and trends.
M.R. Rangaswami: Tell us about MANTA’s origin story and recent company momentum, including how you raised $35 million in Series B funding amidst an economic downturn.
Tomas Kratky: I founded MANTA in 2016 after seeing a need for a solution to help organizations navigate increasingly complex data systems. A key capability every organization must have to stay successful in a modern, fast-changing and highly competitive world is the ability to change things, and to do so quickly and safely.
Based on my own experience as a consultant and developer, it was clear that the exploding complexity of enterprise data environments was making that nearly impossible, slowing organizations down and increasing the risk of major data incidents. Organizations needed an always up-to-date, detailed, accurate, intelligent, and actionable map of data pipelines and all data dependencies to help them safely and efficiently navigate the data environment. Building that map was the first step on the MANTA journey.
With strong growth following our company launch, we were fortunate to secure our Series B funding at the cusp of the economic downturn. Our technology has proven to be a critical tool for organizations looking to cut costs and streamline processes during these uncertain economic times. The answer to improving productivity while decreasing costs is almost always automation, and we are seeing more organizations turn to data lineage to enable agile and efficient change management and to achieve accurate, high quality data that drives productivity while offering important insight into business operations.
M.R.: What is data lineage and how does it fit into a company’s data strategy?
Tomas: Data is an organization’s most critical asset, yet many are struggling with side effects of the exploding data stack that has evolved into a complex ecosystem with thousands of components. Data does not start with your data lake and does not end with your analytics or reporting. It is produced and consumed by every application in your enterprise.
The complexity of expanding, highly interconnected data environments has left many enterprises faced with the inability to deliver required changes fast enough, resulting in increased risk exposure, more material incidents and engineering resources wasted on manual, repetitive tasks.
Data lineage is a tool that solves these intricate challenges by reaching every corner of data environments to offer complete visibility into data ecosystems, no matter how complex they are. Having a complete, clear and comprehensive map of all data flows, sources, transformations and dependencies enables organizations to spend less time figuring out their data and more time putting it to good use.
M.R.: Can you share insight into the current state of the data lineage market and what trends are driving interest?
Tomas: Data lineage is very quickly evolving from a critical capability of your compliance framework (understanding data movement and provenance to protect sensitive data, or to ensure explainability for key indicators and metrics reported to internal audit or external regulators) to a foundational layer of the modern data fabric architecture design.
Understanding both technical and non-technical dependencies in your data environment and improving visibility of your key data pipelines is something we see all successful enterprises doing today. It is even more critical when you start thinking about digital transformation projects that the whole industry is going through. They are all about change, which makes visibility and data lineage essential for their success.
Another big trend is a shift toward active metadata, something we have seen, and have been doing, since our early days. Metadata is not something you should consolidate in a silo (a data catalog or metadata repository); it must be delivered to the people, machines, and places where it is needed to automate, simplify and improve productivity.
Simple examples from MANTA’s daily life include integrating automated impact analysis early in the development cycle to prevent incidents and broken dependencies, actively monitoring data pipelines to identify and flag potential material issues, and letting report users understand data lineage for the critical metrics they care about without leaving their workspace. Overall, we see our customers’ productivity going up by 30% to 40%. With an almost endless list of ways activated metadata can help, this space is very exciting.
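As a minimal sketch of what automated impact analysis involves, the example below models lineage as a directed graph and computes the downstream "blast radius" of a change via breadth-first traversal. The node names are hypothetical, and MANTA’s actual lineage model is far richer than this.

```python
# Lineage as a directed graph: each source maps to the things that consume it.
# Impact analysis = everything reachable downstream of a changed node.
from collections import deque

lineage = {
    "crm.customers": ["staging.customers"],
    "staging.customers": ["warehouse.dim_customer"],
    "warehouse.dim_customer": ["report.churn_kpi", "ml.ltv_model"],
    "ml.ltv_model": ["report.exec_dashboard"],
}

def impact_of(node: str) -> set:
    """Breadth-first traversal collecting every node downstream of `node`."""
    seen, queue = set(), deque([node])
    while queue:
        for nxt in lineage.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(impact_of("staging.customers"))
# {'warehouse.dim_customer', 'report.churn_kpi', 'ml.ltv_model', 'report.exec_dashboard'}
```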
M.R. Rangaswami is the Co-Founder of Sandhill.com