Simon Glairy brings a rigorous, risk-first lens to frontier tech. With roots in insurance and Insurtech, he’s steeped in AI-driven risk assessment and commercialization strategy—exactly the mix needed to evaluate non-invasive BCI moving from labs into consumer wearables. In this conversation, he unpacks how a licensing model can scale EEG-plus-AI across headphones, hats, glasses, and headbands, how privacy and compliance frameworks like HIPAA shape data governance, and why a recent $35 million Series A signals an inflection point. We explore industrial design trade-offs, lab-to-field translation, edge-versus-cloud decisions, OEM collaboration playbooks, and the pragmatic KPIs that can make brain-sensing as ubiquitous as wrist heart-rate sensors—without losing trust.
You’re shifting to a licensing model for consumer wearables. What core components are you licensing (sensors, firmware, SDKs, analytics), how do integration timelines typically unfold with OEMs, and what milestones or metrics do you use to greenlight a partner’s go-to-market?
The core bundle is modular: EEG sensors and reference hardware, low-level firmware to stabilize sampling and filtering, SDKs that expose events and features, and analytics that turn signals into cognitive insights. With OEMs, I think in two tracks: a technical integration sprint to prove signal viability in their industrial design, and a go-to-market readiness sprint to validate UX, privacy, and support. Early gates include a working prototype in the target form factor—headphones, hats, glasses, or headbands—passing predefined signal-to-noise criteria and a UX flow that lands a first insight in minutes, not weeks. Before greenlighting, I look for a signed data-protection addendum aligned with HIPAA-grade controls, completed pilot tests with at least one customer segment, and a readiness checklist covering manufacturing test stations and post-launch telemetry.
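To make that signal-quality gate concrete, here is a minimal sketch of what a pass/fail check might look like during the technical integration sprint (a Python illustration; the alpha and noise bands and the 6 dB floor are placeholders, not actual acceptance criteria):

```python
import numpy as np
from scipy.signal import welch

def snr_gate(eeg, fs=250.0, signal_band=(8.0, 12.0), noise_band=(40.0, 60.0),
             min_snr_db=6.0):
    """Illustrative pass/fail gate: alpha-band power versus high-frequency noise.

    eeg: 1-D array of samples from one channel (assumed microvolts).
    Bands and the dB floor are placeholders, not production criteria.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))

    def band_power(lo, hi):
        mask = (freqs >= lo) & (freqs <= hi)
        return np.trapz(psd[mask], freqs[mask])

    snr_db = 10 * np.log10(band_power(*signal_band) / band_power(*noise_band))
    return snr_db, snr_db >= min_snr_db

# Example: a synthetic 10 Hz rhythm buried in broadband noise
fs = 250.0
t = np.arange(0, 10, 1 / fs)
x = 2.0 * np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)
print(snr_gate(x, fs))
```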
EEG plus AI powers your non-invasive BCI. How do you maintain signal quality in real-world environments (motion, sweat, noise), what preprocessing and artifact rejection steps matter most, and what accuracy or reliability metrics should buyers demand?
Real-world fidelity starts with mechanical stability: consistent contact pressure and electrode placement reduce motion artifacts before software ever sees the data. Preprocessing needs a disciplined chain—bandpass filtering, adaptive notch for mains hum, and robust artifact rejection for blinks, jaw clench, and cable sway. On-device quality checks should flag bad channels in seconds and prompt micro-adjustments so users don’t fight the headset. Buyers should ask for reliability metrics under movement, sweat, and ambient noise conditions, plus confusion matrices and failure-mode reporting—not just aggregate accuracy—so performance is transparent across the exact scenarios where people actually use the device.
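As a rough illustration of that chain, assuming a Python/SciPy stack (filter order, cutoffs, mains frequency, and the artifact threshold are all illustrative defaults, not production values):

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

def preprocess(eeg, fs=250.0, band=(1.0, 40.0), mains_hz=50.0, artifact_uv=150.0):
    """Minimal EEG cleanup: bandpass, notch for mains hum, and amplitude-based
    artifact rejection. All parameters are illustrative defaults."""
    # Bandpass to the frequency range of interest
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    x = filtfilt(b, a, eeg)

    # Notch out mains interference (50 or 60 Hz depending on region)
    bn, an = iirnotch(mains_hz, Q=30.0, fs=fs)
    x = filtfilt(bn, an, x)

    # Flag 1-second epochs whose peak amplitude suggests blink, jaw, or motion artifacts
    epoch = int(fs)
    n = len(x) // epoch
    keep = np.array([np.max(np.abs(x[i * epoch:(i + 1) * epoch])) < artifact_uv
                     for i in range(n)])
    return x, keep  # cleaned signal plus a per-epoch keep/reject mask
```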
Many brands want brain-sensing in headphones, hats, glasses, and headbands. How does industrial design affect electrode placement and performance, what trade-offs do you make between comfort and signal fidelity, and can you share specific design patterns that consistently work?
Industrial design dictates where you can anchor electrodes without drifting, and that alone can make or break fidelity. In headphones, earcup bands offer stable pressure on temporal sites; in hats, hidden rails maintain contact without hotspots; glasses can use temple arms but need clever counterbalance. The trade-off is always comfort versus pressure: you want enough force for reliable impedance without leaving red marks after a commute. A pattern that works is a three-point stabilization scheme—two fixed anchors and one adaptive contact—paired with a quick calibration animation so users can “feel” the right fit in under a minute.
A recent $35 million Series A was raised to scale commercialization. How are you allocating that capital across R&D, recruiting, manufacturing support, and compliance, and what concrete milestones (units shipped, OEMs onboarded, clinical validations) define success over the next 12–18 months?
That $35 million is fuel for discipline, not just speed. I’d steer a significant tranche to productized R&D—hardening signal pipelines and expanding the SDK—while building a recruiting pod focused on embedded systems, QA, and applied ML. Manufacturing support gets its own budget line for test fixtures and golden-sample calibration so yield doesn’t crater at scale, with compliance funding for HIPAA-grade processes and international readiness. In 12–18 months, success looks like multiple OEMs shipping in at least two of the four target form factors, a validated privacy audit, and repeatable pilots converting to commercial reorders, not just demos.
Gamers care about focus and reaction time. What measurable improvements have you seen in gaming use cases, how were studies or trials designed, and can you share effect sizes, sample sizes, and the specific interventions that moved the needle?
With gaming, the wins come from measuring sustained attention and micro-recovery between bouts. Trials that resonate use counterbalanced designs—baseline play, guided focus training, and real-time neurofeedback—with objective in-game telemetry to avoid self-report bias. While I won’t cite new numbers here, the interventions that consistently move the needle blend pre-session priming with in-session nudges, like micro-break prompts when cognitive fatigue crosses a threshold. The texture of the user experience matters: a headset that feels frictionless for an hour, paired with feedback that’s subtle rather than intrusive, unlocks improvements that persist beyond a single match.
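To show the shape of the in-session nudge logic, a toy sketch follows; the fatigue score, threshold, and cooldown are placeholders rather than the production algorithm:

```python
from dataclasses import dataclass, field

@dataclass
class MicroBreakNudger:
    """Illustrative in-session nudge: fire a micro-break prompt when a rolling
    fatigue score stays elevated, with a cooldown so prompts stay subtle."""
    threshold: float = 0.7        # placeholder fatigue score in [0, 1]
    sustain_samples: int = 30     # how long the score must stay elevated
    cooldown_samples: int = 600   # minimum gap between prompts
    _elevated: int = field(default=0, init=False)
    _since_prompt: int = field(default=10**9, init=False)

    def update(self, fatigue_score: float) -> bool:
        self._since_prompt += 1
        self._elevated = self._elevated + 1 if fatigue_score > self.threshold else 0
        if self._elevated >= self.sustain_samples and self._since_prompt >= self.cooldown_samples:
            self._elevated = 0
            self._since_prompt = 0
            return True  # show a micro-break prompt
        return False
```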
Human-behavior research platforms can amplify BCI insights. How do you translate lab-grade findings into consumer-ready algorithms, what validation steps ensure robustness across demographics and contexts, and where have lab-to-field gaps surprised you?
Translation starts with simplifying feature sets—fewer, more stable biomarkers that survive noisy environments—then wrapping them in SDK surfaces any developer can use. Robustness means stratified validation: hair types, skin tones, ages, and activity contexts are explicit cohorts, not afterthoughts, and we replicate results across multiple devices and two or more form factors. The lab-to-field gap that surprises teams most is context drift: music on a commute, a hat worn backward, or glasses slipping a few millimeters can change the signal landscape. Closing that gap takes continuous calibration cues and on-device quality prompts that keep performance inside guardrails without nagging the user.
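One way to keep stratified validation honest is to report metrics per cohort so gaps cannot hide inside a good aggregate; a minimal pandas sketch (cohort labels and the accuracy floor are illustrative):

```python
import pandas as pd

def stratified_report(results: pd.DataFrame, min_accuracy: float = 0.8) -> pd.DataFrame:
    """results needs columns: 'cohort' (e.g. hair type, age band, device)
    and 'correct' (bool per trial). Flags any cohort below the floor."""
    report = (results.groupby("cohort")["correct"]
                      .agg(accuracy="mean", trials="count")
                      .reset_index())
    report["passes_floor"] = report["accuracy"] >= min_accuracy
    return report

# Example with hypothetical cohorts
df = pd.DataFrame({
    "cohort": ["coily hair", "coily hair", "straight hair", "straight hair"],
    "correct": [True, False, True, True],
})
print(stratified_report(df))
```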
Non-invasive BCI is often compared with implanted approaches. Where do you see clear advantages in safety, cost, and scale, what performance ceilings remain, and how do you prioritize features that close the gap without sacrificing accessibility?
Non-invasive wins on safety—no surgery—and on scale because you can integrate into headphones or hats people already wear. Cost structure improves with volume, especially through a licensing model that lets OEMs control design, UX, and distribution. The ceiling is granularity; you won’t match the resolution of implanted arrays today, so you prioritize features that deliver actionable insights—focus, fatigue, cognitive load—rather than raw channel counts. Accessibility stays front and center: quick setup, gentle contact pressures, and algorithms resilient to real life will beat a lab-perfect system that only a few can tolerate.
Achieving “as ubiquitous as wrist heart-rate sensors” requires standardization. What core UX moments (onboarding, calibration, daily wear) must be frictionless, how do you reduce time-to-first-insight, and what drop-off metrics guide your product simplifications?
Onboarding should feel like pairing earbuds: power on, fit check, and a one-minute calibration that doubles as a tutorial. Time-to-first-insight matters—people need a tangible readout within a single session, not a multi-day ramp. Daily wear cues, like a gentle haptic or on-screen check for contact quality, keep habits sticky without nagging. I watch drop-off between first and second session and abandonment during calibration; if more than a sliver of users stall there, we simplify steps or shift work on-device so setup feels almost invisible.
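Both of those numbers fall out of ordinary session logs; a minimal sketch with hypothetical event names:

```python
from collections import defaultdict

def funnel_metrics(events):
    """events: iterable of (user_id, event) tuples using hypothetical event names
    'calibration_started', 'calibration_finished', 'session_1', 'session_2'.
    Returns calibration abandonment and first-to-second-session drop-off."""
    seen = defaultdict(set)
    for user, event in events:
        seen[event].add(user)
    cal_started = len(seen["calibration_started"]) or 1
    cal_abandonment = 1 - len(seen["calibration_finished"]) / cal_started
    first_sessions = len(seen["session_1"]) or 1
    dropoff = 1 - len(seen["session_2"]) / first_sessions
    return {"calibration_abandonment": cal_abandonment,
            "session_1_to_2_dropoff": dropoff}
```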
Privacy expectations for brain data are uniquely high. How do you operationalize data minimization, encryption, and access controls day-to-day, what independent audits or certifications matter most, and how do you prove anonymization holds up against re-identification attempts?
Data minimization starts at the edge: compute features locally and avoid shipping raw streams unless a user opts in. Everything in transit and at rest is encrypted, with role-based access and short-lived credentials so no single account becomes a skeleton key. HIPAA-grade processes are table stakes here, supported by independent audits that stress-test policy against practice. To test anonymization, we run re-identification attacks on de-identified datasets and require they fail before any research use; that’s the proof customers and regulators actually trust.
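In code terms, minimization means the payload leaving the device contains derived features only, never the raw stream, unless a user has explicitly opted in; a simplified sketch (field names are illustrative, and transport and at-rest encryption sit outside it):

```python
import json
import numpy as np

def build_upload_payload(raw_eeg: np.ndarray, features: dict,
                         raw_opt_in: bool = False) -> bytes:
    """Data minimization at the edge: by default only derived features
    (e.g. band powers, a focus index) leave the device; the raw stream is
    included only with explicit opt-in. TLS and at-rest encryption are
    handled elsewhere."""
    payload = {"features": features}
    if raw_opt_in:
        payload["raw"] = raw_eeg.tolist()  # shipped only on explicit consent
    return json.dumps(payload).encode("utf-8")
```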
Users may consent to model training on their neural data. How do you granularly scope consent, prevent scope creep, and communicate value exchange, and what techniques (federated learning, on-device training, differential privacy) do you use to limit centralization risks?
Consent should be experiment-specific—name the model, the purpose, and the retention window—rather than a blanket checkbox. Scope creep is blocked by technical gates: models can only access the whitelisted features and time windows a user approved. The value exchange is explicit—better personalization, faster calibration—and reversible if a user changes their mind. We lean on on-device or federated learning where possible, and add differential privacy to any aggregate updates so the central system never needs raw, identifiable brain data.
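For aggregate updates, the standard pattern is to clip each user's contribution and add calibrated noise before anything leaves the device; a minimal Gaussian-mechanism sketch (the clip norm and noise multiplier are illustrative, not tuned to a specific privacy budget):

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Gaussian-mechanism sketch for a single model update: clip the L2 norm,
    then add noise proportional to the clip bound. Real deployments derive
    these parameters from a target (epsilon, delta) budget."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=np.shape(update))
    return clipped + noise
```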
Health, productivity, and sports offer different ROI stories. How do you tailor KPIs for each (e.g., reduced fatigue, improved task throughput, lower false starts), what pilot designs win stakeholder trust, and which pricing models align incentives without undermining adoption?
In health, fatigue reduction and adherence drive ROI; in productivity, throughput and error rate; in sports, quicker recovery between efforts and fewer false starts. Pilots that win trust define success upfront, collect baseline data, and run A/B or crossover designs so improvements aren’t hand-wavy. I like outcome-based components layered on a modest platform fee—when results show up, everyone wins; when they don’t, customers aren’t stuck paying for promises. That balance encourages adoption while rewarding genuine, measured impact.
OEMs want control over design, UX, and distribution. How do you balance that with maintaining model performance and safety, what SLAs or reference designs reduce integration risk, and can you share a post-launch support playbook that worked well?
The compromise is a reference stack—sensor specs, firmware baselines, and SDK constraints—so OEMs can customize industrial design without breaking models. SLAs cover uptime for cloud endpoints, minimum on-device quality checks, and response times for critical bugs. Reference designs for the four supported form factors shorten integration and keep contact geometry inside proven ranges. Post-launch, the winning playbook pairs telemetry dashboards with weekly triage, a known-issues list, and firmware hotfix windows, so issues are fixed before they snowball into returns.
Edge versus cloud processing drives latency and battery life. What runs on-device versus server-side today, how do you handle intermittent connectivity, and what battery, CPU, and memory budgets do you advise partners to target for responsive performance?
On-device handles acquisition, filtering, artifact rejection, and first-pass inference; the cloud aggregates learnings, manages updates, and powers opt-in analytics. Intermittent connectivity is a design point: the system caches insights and syncs when back online, so users don’t lose history or personalization. I advise partners to plan for real-time pipelines that run entirely offline for core features, reserving the cloud for improvements and fleet health. Budgeting is collaborative—opt for efficient models and prioritize the few features that must be instant, then scale everything else asynchronously.
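The connectivity piece reduces to a store-and-forward pattern: insights land in a local queue first and drain when the network returns; a toy sketch (the upload callable and insight schema are placeholders):

```python
import json
import sqlite3
import time

class InsightQueue:
    """Store-and-forward sketch: insights are written locally first, then synced
    when connectivity returns, so history and personalization never depend on
    the network being up."""

    def __init__(self, path="insights.db"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS queue "
                        "(id INTEGER PRIMARY KEY, ts REAL, body TEXT, synced INTEGER DEFAULT 0)")

    def record(self, insight: dict):
        self.db.execute("INSERT INTO queue (ts, body) VALUES (?, ?)",
                        (time.time(), json.dumps(insight)))
        self.db.commit()

    def sync(self, upload) -> int:
        """upload: hypothetical callable taking a dict, returning True on success."""
        rows = self.db.execute("SELECT id, body FROM queue WHERE synced = 0").fetchall()
        sent = 0
        for row_id, body in rows:
            if upload(json.loads(body)):
                self.db.execute("UPDATE queue SET synced = 1 WHERE id = ?", (row_id,))
                sent += 1
        self.db.commit()
        return sent
```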
You've described scaling a real business model in neurotechnology as an inflection point. What go-to-market lessons changed your roadmap, which segments delivered repeatability, and what early assumptions proved wrong in pricing, channel strategy, or user retention?
The big shift was moving from bespoke pilots to a licensing platform—repeatability beats novelty. Segments where people already wear headgear—gaming headsets, athletic hats, everyday glasses—delivered cleaner integrations and faster adoption. We were wrong to assume premium pricing alone would signal value; users reward consistent insights and respectful privacy far more than a high sticker price. Channel-wise, OEM co-brands with tight post-sale support outperformed splashy launches without a service backbone.
Regulators are circling neurotech. How do you navigate classifications across wellness and medical use, what documentation and testing regimes keep you launch-ready globally, and where do you see policy gaps that industry should proactively close?
The line between wellness and medical is critical; your claims must match your evidence and your regulatory path. Documentation spans risk registers, HIPAA-aligned privacy controls, security test results, and human-factors studies that justify UX choices. Globally, staying launch-ready means building a compliance narrative once, then localizing claims and privacy terms rather than reinventing the stack per country. The policy gap I’d close is standardized guidance for brain-data consent and retention—industry should help craft rules that are as explicit as the stakes.
Bias and inclusivity matter for brain-sensing. How do you validate across hair types, skin tones, ages, and neurodivergent users, what adaptive calibration works best, and where have you had to retool hardware or algorithms to serve underrepresented groups?
Inclusivity starts in the test plan, not the press release. We recruit for diverse hair types and head shapes, adjust electrode materials and comb geometries, and verify performance across age bands and neurodivergent profiles. Adaptive calibration that tightens over the first few sessions—while staying under a minute each time—keeps models personal without exhausting users. When gaps appear, we retool with alternative contact geometries and algorithmic ensembles that hedge against any single-biased feature.
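Mechanically, calibration that tightens over the first few sessions can be as simple as a per-user baseline updated with a decaying step size; a small sketch (the schedule and update rule are illustrative):

```python
import numpy as np

class AdaptiveCalibration:
    """Per-user baseline that adapts quickly in early sessions and settles later,
    keeping each calibration short. The decay schedule is illustrative."""

    def __init__(self, n_features: int, initial_lr: float = 0.5, decay: float = 0.7):
        self.baseline = np.zeros(n_features)
        self.initial_lr = initial_lr
        self.decay = decay
        self.sessions = 0

    def update(self, session_features: np.ndarray) -> np.ndarray:
        lr = self.initial_lr * (self.decay ** self.sessions)  # shrink adjustments over time
        self.baseline += lr * (session_features - self.baseline)
        self.sessions += 1
        return self.baseline
```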
OEM partnerships include gaming brands and human-behavior research platforms. How do these collaborations sharpen your product roadmap and de-risk commercialization while keeping IP secure?
Partnerships with gaming brands stress-test real-time feedback loops, while research platforms pressure-test data integrity and study workflows. Together, they force us to ship features that matter: fast setup, actionable insights, and robust privacy. We protect IP through clear field-of-use licenses and clean-room collaboration—partners get what they need without exposing model internals. The roadmap tightens around what ships and scales, rather than what just dazzles in a demo.
What is your forecast for non-invasive BCI in consumer wearables?
For readers, expect a steady climb rather than an overnight leap: after that $35 million push, licensing into the four everyday form factors will make brain-sensing show up where you already live—gaming, fitness, focus. You’ll see faster onboarding, clearer value in the first session, and privacy controls that feel closer to banking than casual apps. The winning products will make insights as habitual as checking your wrist, while keeping raw brain data local unless you explicitly opt in. If you try one, measure it by how quickly it helps you make a better decision—on a game, a workout, or your workday—and how confidently it protects your data when you say no.
