At our latest portfolio webinar, our Executive In Residence Dan Teodosiu sat down with Mohan Mahadevan, Chief Science Officer at Tractable, for a fireside chat on the evolution of AI - including the paradigm shifts in machine learning, and how foundational models have changed the current AI landscape.
Mohan holds a PhD in Theoretical Physics and has over 24 years of experience developing ML and Computer Vision systems. Before Tractable, he led Machine Learning and Computer Vision research at Onfido, Amazon, and KLA-Tencor.
The big shift in the last few years is from representation learning to representation tuning.
Over the last 25 years, there have been four main shifts in machine learning paradigms:
Hypercharge this, and we get to GPT.
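The shift from representation learning to representation tuning can be sketched in code. The following is a minimal illustrative example, not anything from the talk: a stand-in for a frozen, pretrained backbone (here just a fixed random projection) whose weights are never updated, with only a small task-specific head trained on top of its features.

```python
# Illustrative sketch of "representation tuning": freeze a pretrained
# feature extractor and fit only a lightweight head for the downstream task.
# The "backbone" below is a toy stand-in, not a real foundation model.
import numpy as np

rng = np.random.default_rng(0)

def pretrained_features(x):
    # Stand-in for a frozen foundation-model backbone: a fixed random
    # projection plus a nonlinearity. Its weights are never updated.
    W = np.random.default_rng(42).normal(size=(x.shape[1], 16))
    return np.tanh(x @ W)

# Toy binary classification data.
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Train only a logistic-regression head on the frozen features.
F = pretrained_features(X)
w = np.zeros(F.shape[1])
b = 0.0
lr = 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid predictions
    w -= lr * (F.T @ (p - y) / len(y))      # logistic-loss gradient, head only
    b -= lr * float(np.mean(p - y))

acc = float(np.mean(((F @ w + b) > 0) == (y > 0.5)))
print(f"head-only accuracy: {acc:.2f}")
```

Because the backbone stays fixed, only the head's few parameters are learned, which is why tuning on top of a foundation model needs far less task data than learning representations from scratch.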
But throughout all this evolution, one thing has remained unchanged: the value of data. One can only build on top of these models if well-curated, “right” data in the “right” volumes is available. Foundational models have not changed the value of the data moat.
Data has always been a moat. It will remain a significant one, and foundational models have not changed its value.
For all of the above, the data moat has become even more significant. Modelling based on public data offers no moat; it is specifically private data that constitutes the moat here. The relevant questions are: How do you bootstrap to get this data? How quickly can you reach a reasonable and representative volume? How do you measure and maintain the quality of the private data and labels?
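One concrete way to measure label quality, offered here as an illustrative choice rather than anything prescribed in the talk, is to have two annotators label the same sample and compute Cohen's kappa, which corrects raw agreement for chance. The damage-category labels below are hypothetical examples.

```python
# Cohen's kappa: chance-corrected agreement between two annotators,
# a common proxy for label quality on a private dataset.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators label identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement by chance, from each annotator's label frequencies.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n)
        for c in set(counts_a) | set(counts_b)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical double-annotated sample.
a = ["dent", "dent", "scratch", "dent", "scratch", "crack"]
b = ["dent", "scratch", "scratch", "dent", "scratch", "crack"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # prints "kappa = 0.74"
```

Tracking kappa over time on double-annotated subsets is one way to "measure and maintain" label quality as the private dataset grows.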
This is a tough environment, with so many researchers and so much happening in the world of AI and foundational models. You have to assume that LLMs will evolve, and evolve rapidly, e.g. representations spanning multiple modalities (images, video, audio, text), or the latest foundational models in computer vision, such as SAM, just released by Meta.
To avoid becoming obsolete in weeks or months, there are several actions that could be very useful:
You also need your ML engineers to be systems thinkers - able to solve problems as a system rather than in isolation - and to be self-critical. Additionally, some capabilities are built very well externally, so there is no need to build everything in-house: it makes sense to adopt technologies from FAANGs and from emerging players such as Stability.ai or the Technology Innovation Institute (TII) with its Falcon-40B models.