As General Partner at Foundation Capital, Ashu Garg works with startups across the enterprise stack. His career reflects his enthusiasm for machine learning and for reinventing established software categories to create new customer experiences.
Although FC’s inaugural Generative AI “Unconference” was held back in June, we still find ourselves returning to Ashu’s observations from the event. We hope you take away as much from his highlights as we have.
1. AI natives have key advantages over AI incumbents
In AI, as in other technology waves, every aspiring founder (and investor!) wants to know: Will incumbents acquire innovation before startups can acquire distribution? Incumbents benefit from scale, distribution, and data; startups can counter with business model innovation, agility, and speed—which, with today’s supersonic pace of product evolution, may prove more strategic than ever.
To win, startups will have to lean into their strength of quickly experimenting and shipping. Other strategies for startups include focusing on a specific vertical, building network effects, and bootstrapping data moats, which can deepen over time through product usage.
2. In AI, the old rules of building software applications still apply
How can builders add value around foundation models? Does the value lie in domain-specific data and customizations? Does it accrue through the product experience and serving logic built around the model? Are there other insertion points that founders should consider?
While foundation models will likely commoditize in the future, for now, model choice matters. From there, an AI product’s value depends on the architecture that developers build around that model. This includes technical decisions such as prompt design and chaining (how prompt outputs feed into one another and into external systems and tools), embeddings and their storage and retrieval mechanisms, context-window management, and intuitive UX design that guides users through their product journeys.
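To make this concrete, here is a minimal sketch of the "architecture around the model" idea: retrieve domain-specific context via embeddings, then assemble a prompt that stays within the context window. The `embed` function below is a deliberately toy bag-of-words stand-in for a learned embedding model, and the document texts are invented examples; a real product would use an embedding model, a vector store, and an actual LLM call at the end.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words counts (stand-in for a learned model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str], max_chars: int = 2000) -> str:
    """Inject retrieved context into the prompt sent to the model,
    truncating so the total stays within a (character-based) budget --
    a crude stand-in for context-window management."""
    context = "\n".join(retrieve(query, docs))[:max_chars]
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical domain documents for illustration.
docs = [
    "Our refund policy allows returns within 30 days.",
    "Shipping takes 5 business days on average.",
]
prompt = build_prompt("What is the refund policy?", docs)
```

The value here lies not in the model itself but in the serving logic around it: which documents are retrieved, how they are ranked, and how the final prompt is budgeted against the context window.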
3. Small is the new big
Bigger models and more data have long been the go-to ingredients for advancements in AI. Yet, as our second keynote speaker, Sean Lie, Founder and Chief Hardware Architect at Cerebras, relayed, we’re nearing a point of diminishing returns from simply supersizing models. Beyond a certain threshold, more parameters do not necessarily translate into better performance. Giant models waste valuable computational resources, causing training and inference costs to skyrocket.
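One common way to formalize this diminishing-returns intuition (not from the talk itself, but a standard framing from the scaling-laws literature, e.g. the "Chinchilla" analysis) is to model loss as a sum of power-law terms in parameter count $N$ and training tokens $D$:

$$
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
$$

where $E$ is an irreducible loss floor and $A$, $B$, $\alpha$, $\beta$ are fitted constants. Because each term decays as a power law, once $A/N^{\alpha}$ becomes small relative to the other terms, adding parameters alone barely moves $L$: under a fixed compute budget (roughly $C \approx 6ND$), compute spent on extra parameters is compute not spent on data, which is why a smaller model trained on more tokens can outperform a larger, under-trained one.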