M.R. Asks 3 Questions: Pranay Ahlawat, Partner & Associate Director, Boston Consulting Group

June 29, 2023

We’re long past being able to escape Generative AI as a weekly conversation topic. From keynotes at software company conferences to investment themes for VC/PE investors, it is everywhere today.

We reached out to Partner and Associate Director at Boston Consulting Group, Pranay Ahlawat, after reading his article on Generative AI trends that really matter. We were impressed and intrigued by how Pranay sees this topic from multiple angles – advising clients, advising investors and working as a practitioner – and wanted to share his insights with our Sandhill’s executive network.

Pranay’s focus on enterprise software and AI at BCG helps him separate hype from reality, discern the trends that really matter, and identify what software companies, enterprises and investors must know about Generative AI.

M.R. Rangaswami: We have certainly been through hype cycles in the past. What is different about Generative AI, and why does it matter?

Pranay Ahlawat: Foundation models and the problem of natural language conversation aren’t new. Natural Language Processing, chatbot platforms and out-of-the-box text APIs from cloud vendors have been around for a decade now. Foundation models like ResNet-50 have been around since 2015. There are two things that are different about modern-day Generative AI.

First, modern language models, or Large Language Models (LLMs), are architecturally different and have a significant performance advantage over traditional approaches like Recurrent Neural Networks and LSTMs (Long Short-Term Memory networks). You will often hear the words “transformers” and “attention”, which, simply put, refer to the model’s ability to remember the context of a conversation more effectively. The quality of comprehension and the ability to generate longer free-form text are unlike anything we have seen in the past.
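The “attention” idea referenced above can be illustrated with a minimal sketch of scaled dot-product attention, the core operation inside transformers. This is a toy NumPy illustration with made-up sizes, not anything from BCG or a production model:

```python
# Minimal scaled dot-product attention, the core of transformers.
# Purely illustrative: toy dimensions, random inputs.
import numpy as np

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V: each output row is a context-weighted
    mix of the value vectors, which is how a transformer layer lets every
    token 'attend to' (remember) every other token in the sequence."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 tokens, 8-dim embeddings (toy sizes)
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = attention(Q, K, V)
print(out.shape)              # one context-aware vector per token
```

Unlike an RNN, which passes context through a sequential hidden state, every token here looks at every other token directly, which is one intuition for why transformers hold onto long-range context so much better.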

Second, these models have a killer app unlike any other, one that is immediately consumable by non-technical users. We have had transformative technology breakthroughs in the past, from the internet and mobile to virtualization and cloud, but nothing has come close to the astonishing rise of ChatGPT, which reached a hundred million users in about two months. This tangibility has added to the hype, and despite the huge potential, a lot of the claims about Generative AI are unrealistic.

It matters because of the potential impact on society. We are a small step closer to general intelligence, and we can potentially solve problems we weren’t able to solve before. It’s disruptive for many industries, like media, education and personalization. Time will tell how quickly this will happen.

M.R.: What are the three things people must know about Generative AI today?

Pranay: For me, the three underlying principles, or things you must know, are: (1) Generative AI is getting democratized, (2) the economics of Generative AI are a crucial vector of innovation, and (3) the technology itself has limitations and risks.

First, the technology at the platform level is already democratized and the barriers to entry are continuing to go down. If you look at the commercial players, there are model vendors like Cohere and Anthropic, platform vendors like Google and AWS, and multiple other tooling and platform vendors, e.g. IBM watsonx and NVIDIA NeMo, all making it easier to build, test and deploy generative AI applications. There is real excitement in open source and community-driven innovation at all layers, e.g. frameworks like PyTorch, foundation models like Stable Diffusion and LLaMA, model aggregators like Hugging Face and libraries like LangChain. Today, a developer can create a generative AI application in a matter of hours, because modern tooling abstracts away so much of the complexity. We already have more than five hundred generative AI startups.

Second, winners will be the ones who get the economics right. These models are incredibly expensive to train, tune and run inference on. A 300B-parameter model costs anywhere from $2-5M in compute to train, and models like GPT-3 cost 1-5 cents per query. To give you an intuition: if Google ran a modern large LLM like GPT-4 for all search queries, it would see profits go down by roughly $10B. So understanding the task and architecting for the right price/performance is imperative. There is a ton of innovation in cost engineering today, from semiconductors to newer model architectures and training and inferencing techniques, all aimed at getting this price/performance balance right.
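A quick back-of-the-envelope calculation shows why per-query cost dominates these economics. The 1-5 cent range comes from the interview; the daily query volume is an assumed order of magnitude for a large search engine, not a BCG figure:

```python
# Back-of-the-envelope LLM inference economics.
# Illustrative assumptions only: the 1-5 cent per-query range is from
# the text; the query volume is an assumed order of magnitude.

def annual_inference_cost_usd(cost_per_query_cents: float,
                              queries_per_day: int) -> float:
    """Annualized inference spend for a given per-query cost."""
    return cost_per_query_cents * queries_per_day * 365 / 100

QUERIES_PER_DAY = 8_500_000_000  # assumption: billions of searches/day

for cents in (1, 3, 5):  # low, mid, high of the quoted 1-5 cent range
    cost = annual_inference_cost_usd(cents, QUERIES_PER_DAY)
    print(f"{cents} cents/query -> ${cost / 1e9:.0f}B per year")
```

Even at the low end of the range, billions of queries a day compound into tens of billions of dollars of annual compute, which is why shaving fractions of a cent per query through better chips, architectures and inference techniques is such an active area.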

Third, there are risks that are well documented and others that are still not fully understood. Bias and hallucinations are well documented; there are also unknown cybersecurity risks, plus copyright and IP issues that enterprises need to worry about. Lastly, these models are only as good as the data used to train them, and they make mistakes. Google Bard’s infamous factual error on debut is a good reminder that AI is neither artificial nor intelligent.

M.R.: Where are we in the adoption curve of Generative AI and where do you believe this is all going?

Pranay: We are still in the early innings here. We are seeing a ton of enterprises experiment and run pilots and POCs, but almost no adoption at scale. Certain use cases, like marketing, customer support and product development, are more ready and have out-of-the-box tooling, e.g. Jasper and GitHub Copilot. The reported performance gains vary significantly, however. There are many numbers, even from reputable sources, that are conjecture without tangible evidence. Companies should evaluate these tools and assess impact before building business cases.

I believe adoption in the enterprise will be slower than most estimates. There are many underlying reasons for that: lack of a strategy and a clear business case, lack of talent, lack of curated data, unknown technology risks, and so on. The biggest challenge is change management. According to BCG’s well-known 70:20:10 framework, 70% of the investment in adopting AI at scale is tied to changing business processes, versus 20% in broader technology and only 10% in algorithms. These physics will remain the same.

We must also acknowledge that generative AI itself isn’t a silver bullet, and we are at the very top of the hype cycle. Get your popcorn, the movie has just begun!

M.R. Rangaswami is the Co-Founder of
