I’m joined today by Simon Glairy, a distinguished expert in risk management and AI-driven assessment, to discuss a groundbreaking development in enterprise AI. We’ll be exploring the technology behind Nexus, a new foundation model designed not for language but for the vast structured datasets that power the world’s largest companies, and delving into how it attracted a staggering $255 million in funding. Our conversation will cover the unique architecture that sets this model apart from traditional LLMs, its practical applications for Fortune 100 companies, and the strategic partnerships shaping its deployment.
Raising $255 million at a $1.2 billion valuation is a significant milestone. Could you share some key proof points that attracted such strong investor confidence, and what is your step-by-step plan for deploying this capital to scale operations and development?
The investor confidence really stems from our ability to solve a massive, tangible problem that has stumped contemporary AI. We demonstrated that our Nexus model could effectively handle enormous structured datasets (think tables with billions of rows), which is a common yet unsolved challenge for large enterprises. We secured several seven-figure contracts with Fortune 100 clients even before emerging from stealth, which served as undeniable proof of market demand. With this new capital, our plan is methodical. First, we are significantly expanding our engineering team to accelerate model development and refinement. Second, we are scaling our enterprise sales and support teams to manage our growing pipeline and ensure our partners, like those using our AWS integration, have a seamless experience.
Your Nexus model is described as a deterministic “large tabular model,” not a transformer-based LLM. Can you explain the practical performance advantages of this architecture for a Fortune 100 client analyzing billions of rows of data? Please share a specific example of its impact.
The key advantage is reliability at an immense scale. Transformer-based LLMs are brilliant with unstructured data like text, but they have a fundamental limitation: the context window. They simply cannot process a spreadsheet with billions of rows in its entirety. Our Nexus model, being a large tabular model, is built specifically for this. Because it’s deterministic, a Fortune 100 client analyzing financial transactions, for example, will get the exact same fraud-detection score for the same input every single time, which is critical for compliance and auditing. This predictability, combined with the capacity to reason over the entire dataset rather than a small window, provides a level of insight that was previously impossible.
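To make that determinism point concrete, here is a minimal Python sketch of the property a compliance or audit team would check: the same transaction row must produce an identical score on every call, with no sampling or temperature involved. The model object and its `predict_score` method are hypothetical names invented for this illustration, not Nexus’s actual API.

```python
# Hypothetical sketch of the determinism guarantee described above.
# The model object and its predict_score() method are invented names;
# only the property being checked is the point.

def score_transaction(model, row: dict) -> float:
    """Score one transaction row for fraud risk.

    A deterministic tabular model maps identical input features to an
    identical score on every call -- no sampling, no temperature.
    """
    return model.predict_score(row)  # hypothetical method name


def audit_determinism(model, row: dict, runs: int = 5) -> bool:
    """Re-score the same row several times and confirm the outputs match,
    the kind of repeatability check an auditor might automate."""
    scores = [score_transaction(model, row) for _ in range(runs)]
    return all(s == scores[0] for s in scores)
```

A generative model sampled at nonzero temperature would generally fail this check; a deterministic scorer passes it by construction, which is what makes the audit trail reproducible.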
You’ve stated that Nexus can replace an “army of data scientists” and massively expand use cases. For a large enterprise, what are the top two or three use cases where this model delivers the most value, and what specific metrics best demonstrate this improved performance?
The value is most profound in areas like complex risk assessment and supply chain optimization. In risk assessment, instead of a team spending months building bespoke models for different risk factors, Nexus can serve as a single, pre-trained foundation model that gets fine-tuned for all of them. The key metric here is a dramatic reduction in model development time, from months to days, and a measurable lift in predictive accuracy. For supply chain optimization, a company can analyze its entire global logistics dataset in one go to identify inefficiencies. Performance is measured by direct cost savings, such as a percentage reduction in shipping costs or improved inventory turnover, driven by insights previously buried in the sheer scale of the data.
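One straightforward way to quantify the "measurable lift in predictive accuracy" mentioned above is to compare a bespoke baseline against the fine-tuned foundation model on the same held-out labels. The sketch below uses scikit-learn’s ROC-AUC metric; the model outputs are placeholders for whatever the two approaches actually produce, and this is an illustrative evaluation recipe rather than the company’s stated methodology.

```python
# Compare a bespoke baseline against a fine-tuned foundation model on a
# shared holdout set, reporting the absolute ROC-AUC improvement.
from sklearn.metrics import roc_auc_score


def accuracy_lift(y_true, baseline_scores, finetuned_scores) -> float:
    """Return the absolute ROC-AUC lift of the fine-tuned model over the
    bespoke baseline, evaluated on the same held-out labels."""
    auc_baseline = roc_auc_score(y_true, baseline_scores)
    auc_finetuned = roc_auc_score(y_true, finetuned_scores)
    return auc_finetuned - auc_baseline
```

Tracking that lift per risk factor, alongside the calendar time from project kickoff to a deployed model, captures both halves of the claim: faster development and better predictions.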
Your strategic partnership with AWS allows users to deploy Nexus directly. Could you walk me through how an existing AWS customer would integrate and begin using your model? Please elaborate on the onboarding process and the initial results you’ve observed from this collaboration.
The AWS partnership is designed for simplicity and speed. An existing AWS customer can find and deploy Nexus directly from their current instances, treating it much like any other AWS service. The integration is seamless; they connect their structured data sources, and our team works with them to fine-tune the base model for their specific business problems. The onboarding is very hands-on, ensuring the model is immediately put to work on their highest-priority use cases. From this collaboration, we’re seeing clients achieve meaningful results within the first few weeks, rapidly validating the model’s performance on their own data and building internal momentum for broader adoption.
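The conversation does not spell out the exact integration surface, but one plausible path, assuming a SageMaker Marketplace-style listing, would look like the sketch below. The package ARN, IAM role, endpoint name, and payload schema are all placeholders, not details confirmed in the interview.

```python
# Plausible sketch of deploying a marketplace-listed tabular model on AWS
# via the SageMaker Python SDK. All identifiers below are placeholders.
import boto3
import sagemaker
from sagemaker import ModelPackage

session = sagemaker.Session()

model = ModelPackage(
    role="arn:aws:iam::123456789012:role/NexusDeployRole",  # placeholder role
    model_package_arn=(
        "arn:aws:sagemaker:us-east-1:123456789012:model-package/nexus-tabular-v1"
    ),  # placeholder ARN
    sagemaker_session=session,
)

# Stand up a real-time inference endpoint inside the customer's own account.
model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
    endpoint_name="nexus-tabular-demo",
)

# Score a row of structured data against the deployed endpoint.
runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName="nexus-tabular-demo",
    ContentType="text/csv",
    Body="42.0,1,0,1999.99",  # illustrative feature row; real schema will differ
)
print(response["Body"].read())
```

In that flow the model runs inside the customer’s own AWS account, and the fine-tuning on their structured data happens with the vendor’s team as described above.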
Contemporary AI models often struggle with reasoning over massive structured datasets. Can you provide an anecdote or a specific business scenario where a transformer-based model failed, and then explain step-by-step how Nexus would successfully tackle that same problem to deliver a reliable insight?
Certainly. A large insurance client attempted to use a leading transformer-based model to analyze its decades-long claims history—a dataset with billions of entries—to predict fraudulent patterns. The model kept failing because it could only process small chunks of the data at a time, completely missing the subtle, long-term patterns that indicated sophisticated fraud rings. It was like trying to understand a novel by reading random paragraphs. Nexus tackles this by first ingesting the entire dataset, all billions of rows, during its pre-training phase. Then, when fine-tuned on the client’s specific fraud labels, it can draw connections across the entire historical record. It identifies the low-and-slow fraudulent activities that unfold over years, delivering a clear, actionable, and deterministic score that the transformer architecture was structurally incapable of finding.
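The toy pandas example below illustrates the kind of "low-and-slow" pattern being described: small, widely spaced claims from one ring that only look anomalous once the full history is visible at once. The data, column names, and threshold are invented for illustration; this is not the client’s claims history or the Nexus model itself.

```python
# Toy illustration of a long-range pattern that a window-limited view misses
# but a whole-history view surfaces. All values are invented.
import pandas as pd

claims = pd.DataFrame({
    "ring_id": ["A", "A", "A", "A", "B"],
    "year":    [2015, 2017, 2019, 2021, 2020],
    "amount":  [900, 950, 980, 990, 1100],
})

# Window-limited view: any single year in isolation looks unremarkable.
print(claims.groupby("year")["amount"].max())

# Whole-history view: aggregate per ring across all years, and ring "A"
# accumulates well past what any single window reveals.
per_ring = claims.groupby("ring_id")["amount"].sum()
print(per_ring[per_ring > 3_000])  # flags ring "A" only
```

A context-window model that only ever sees a year or two of claims at a time can never form the per-ring aggregate; a model that ingests the full table can.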
What is your forecast for the evolution of foundation models beyond language and into specialized domains like structured data?
I believe we are at the very beginning of a major shift. For the past few years, the world has been captivated by what foundation models can do with unstructured data like text and images. My forecast is that the next five years will see a Cambrian explosion of specialized foundation models built for specific, high-value data types. We’ll see models for genomic data, for complex chemical simulations, and, as we’re proving, for the structured tabular data that forms the backbone of the global economy. The future isn’t one giant model that does everything; it’s a landscape of powerful, purpose-built foundation models that bring this transformative AI technology to every corner of science and industry.
