Hundreds of micro-decisions fire each second when a rider taps a button. Behind that simple flicker sits a nervous system of payouts, surge calculations, fraud screens, and safety checks that must stay inside harsh latency budgets even as machine learning models evolve week by week. That is exactly the kind of high-stakes engineering set to take center stage when Uber CTO Praveen Neppalli Naga sits down at StrictlyVC San Francisco. The conversation promises something rare: a view into how AI works when millions depend on it in real time.
The setting adds texture as well as urgency. On April 30 at the Sentro Filipino Cultural Center, operators and investors gather to dissect AI not as spectacle but as infrastructure, where bugs become headlines and trust can be lost in a single bad release.
Why This Story Matters
AI has seeped into platform architecture, labor marketplaces, developer workflows, and the information that shapes markets. The result is a new operating reality in which product decisions are inseparable from model behavior, data pipelines, and observability.
Moreover, the frontier has shifted from pure software to “physical AI,” where robotics, sensors, and specialized compute demand disciplined capital and supply-chain fluency. That pivot favors founders who match model prowess with manufacturing judgment and regulatory clarity.
The Stakes and the Questions
Naga’s tenure at Uber, including the evolution of earnings systems for drivers and couriers, offers a test case for upgrading critical systems with AI while preserving reliability, transparency, and marketplace health. The stakes are plain: a mispriced trip or delayed payout can erode trust faster than any growth curve can replace it.
Expect a pragmatic line of inquiry: where to place models in the call path, how to instrument for fairness and drift, and when to roll back. The subtext is as important as the architecture: trade-offs between experimentation and five-nines discipline that define durable platforms.
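To make the first of those questions concrete, here is a minimal sketch of a model placed in the call path behind a hard latency budget, with a deterministic fallback. All names, prices, and the 50 ms budget are illustrative assumptions, not Uber's actual stack:

```python
import concurrent.futures

# A shared pool so a slow model call can be abandoned at the deadline
# without blocking the request path.
_POOL = concurrent.futures.ThreadPoolExecutor(max_workers=4)
LATENCY_BUDGET_S = 0.050  # assumed 50 ms budget for the pricing call

def baseline_price(trip: dict) -> float:
    # Deterministic heuristic that is always safe to serve.
    return trip["base_fare"] + 1.2 * trip["distance_km"]

def model_price(trip: dict) -> float:
    # Stand-in for a learned pricing model behind the same interface.
    return (trip["base_fare"] + 1.2 * trip["distance_km"]) * 1.07

def price_with_fallback(trip: dict) -> float:
    """Serve the model when it answers in budget, the baseline when not."""
    future = _POOL.submit(model_price, trip)
    try:
        return future.result(timeout=LATENCY_BUDGET_S)
    except concurrent.futures.TimeoutError:
        # Degrade gracefully; a spike in fallbacks is a rollback signal.
        return baseline_price(trip)

print(price_with_fallback({"base_fare": 2.50, "distance_km": 8.0}))
```

The design choice worth noticing: the fallback rate itself becomes an observability signal, and when it spikes, the model comes out of the path.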
Inside the Program
The program crosses disciplines to map AI’s company-scale reality. Replit co-founder Amjad Masad brings a developer’s vantage point on AI-assisted coding, from code generation claims to the ripple effects on testing, review culture, and security posture.
Eclipse founder and CEO Lior Susan, fresh off a $1.3 billion fund, profiles defensible hardware moats, the cadence of learning cycles, and milestones that matter before scaling assembly lines. That lens grounds timelines in manufacturing physics rather than slideware optimism.
TDK Ventures president Nicolas Sauvage adds a financing compass: how to use strategic capital to accelerate customer validation, pilots, and early manufacturing, and when strings attached can slow the march to product-market fit. Media veteran Campbell Brown, now leading Forum AI, addresses information integrity: how platforms and brands build harm-reduction frameworks before crises strike.
What Builders Stand To Learn
For systems teams, the playbook points toward layering models with crisp failure modes, tight rollback paths, and end-to-end observability that measures latency, accuracy, and perceived fairness. In practice, that means evals before launch, red-teaming adversarial inputs, and incident response that treats model regressions like production outages.
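A sketch of what such an eval gate might look like in practice, assuming a frozen eval set and an incumbent model to beat; the function names and thresholds are illustrative, not any speaker's actual tooling:

```python
import time

# Hypothetical pre-launch eval gate: a candidate model ships only if it
# beats the incumbent on accuracy without regressing tail latency.

def evaluate(model, eval_set):
    """Return (accuracy, approximate p99 latency in seconds) on a frozen set."""
    correct, latencies = 0, []
    for features, label in eval_set:
        start = time.perf_counter()
        prediction = model(features)
        latencies.append(time.perf_counter() - start)
        correct += int(prediction == label)
    latencies.sort()
    p99 = latencies[int(0.99 * (len(latencies) - 1))]
    return correct / len(eval_set), p99

def release_gate(candidate, incumbent, eval_set,
                 min_accuracy_gain=0.0, max_latency_ratio=1.10):
    """Treat a model regression like a failed build, not a judgment call."""
    cand_acc, cand_p99 = evaluate(candidate, eval_set)
    inc_acc, inc_p99 = evaluate(incumbent, eval_set)
    if cand_acc < inc_acc + min_accuracy_gain:
        return False, "accuracy regression: do not ship"
    if cand_p99 > inc_p99 * max_latency_ratio:
        return False, "tail-latency regression: do not ship"
    return True, "gate passed"
```

Wired into CI, a gate like this turns "evals before launch" from a norm into a blocking check.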
For engineering leaders, the toolkit is shifting as AI pair-programming rewires throughput and review standards. Teams are targeting higher test coverage, reproducible builds, and secure prompt and config management, while drawing hard interface contracts between research prototypes and production services to prevent “works on demo” from hitting main.
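One way such an interface contract can be enforced, sketched here with hypothetical field names: the production schema is the single source of truth, and a contract test rejects prototype output that drifts from it.

```python
from dataclasses import dataclass

# A sketch of a hard interface contract between a research prototype and
# a production service. Schema and field names are assumptions; the point
# is that the contract, not the prototype, is the source of truth.

@dataclass(frozen=True)
class RankingResponse:
    request_id: str
    item_ids: list        # ordered best-first, strings only
    model_version: str    # pins the artifact for rollback and audit

def validate(payload: dict) -> RankingResponse:
    """Reject anything that does not match the contract exactly."""
    expected = {"request_id", "item_ids", "model_version"}
    if set(payload) != expected:
        raise ValueError(f"schema mismatch: {set(payload) ^ expected}")
    if not all(isinstance(i, str) for i in payload["item_ids"]):
        raise ValueError("item_ids must be strings")
    return RankingResponse(**payload)

def test_prototype_honors_contract(prototype_fn, fixtures):
    # Run in CI so a "works on demo" change cannot reach main unvalidated.
    for request in fixtures:
        validate(prototype_fn(request))
```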
Capital, Markets, and Integrity
On the capital side, a milestone-driven map helps sequence rounds: prototype with learning-focused angels, pilot with strategic money for access to customers and supply chains, and scale with investors who underwrite manufacturing and inventory risk. Each stage prizes a different proof point, from data access to unit economics and quality yields.
On the market side, content integrity now operates as product risk. Detection pipelines, human-in-the-loop review, authenticity signals, and transparent user comms reduce downstream harm, while partnerships with researchers and civil society sharpen early warning systems as models and misuse coevolve.
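A minimal sketch of the routing core behind human-in-the-loop review, assuming an upstream classifier that emits a violation score; both thresholds are placeholders a real team would tune against its own precision-recall data:

```python
# High-confidence scores act automatically; the ambiguous middle band
# goes to human reviewers; everything else passes.

AUTO_ACTION = 0.95    # assumed near-certain violations: act and log for audit
HUMAN_REVIEW = 0.60   # assumed ambiguous band: worth an expert look

review_queue: list[str] = []

def route(content_id: str, violation_score: float) -> str:
    if violation_score >= AUTO_ACTION:
        return "removed"
    if violation_score >= HUMAN_REVIEW:
        review_queue.append(content_id)   # human-in-the-loop
        return "queued_for_review"
    return "published"

print(route("post-123", 0.72))  # -> queued_for_review
```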
Conclusion
The throughline is execution: ship AI where it strengthens reliability, instrument reality rather than narratives, and match capital to the learning curve at hand. Attendees should leave with checklists to apply the next day: how to place models in critical paths, which tests to harden before launch, which suppliers to court before scale, and what integrity safeguards to wire in before growth campaigns. The next moves favor builders who treat AI not as a demo, but as an operating system for their companies.
