Insurance Leaders Debate AI Ethics at CII Conference

As we dive into the transformative world of insurance technology, I’m thrilled to sit down with Simon Glairy, a renowned expert in insurance and insurtech whose work centers on risk management and AI-driven risk assessment. With years of experience navigating the intersection of innovation and ethics, Simon offers unparalleled insight into how artificial intelligence is reshaping the industry. In this conversation, we explore AI’s impact on insurance, the ethical dilemmas it presents, the importance of balancing technology with human expertise, and the disruption on the horizon as clients begin to harness AI tools themselves.

How do you see artificial intelligence shaping the future of the insurance industry in the coming years?

I believe AI is poised to revolutionize insurance in ways we’re only beginning to grasp. Over the next few years, we’ll see it streamline operations, enhance risk assessment with predictive analytics, and personalize customer experiences through tailored policies. Areas like underwriting and claims processing are already seeing massive improvements with faster, data-driven decisions. Beyond efficiency, AI can help detect fraud more effectively and even predict emerging risks, which is a game-changer for managing uncertainty in an increasingly complex world.

What specific areas within insurance do you think will feel the greatest impact from AI adoption?

Underwriting and claims management are at the forefront. AI can analyze vast datasets—think medical records, driving habits, or even social media activity—to assess risk with incredible precision. In claims, it’s about speed and accuracy; chatbots and automated systems can handle routine cases, freeing up human adjusters for complex issues. Also, customer service is transforming with AI-driven virtual assistants that provide 24/7 support. These areas aren’t just improving—they’re being redefined.

Can you share what ethical responsibility means to you when it comes to using AI in insurance?

Ethical responsibility in this context is about ensuring fairness, transparency, and accountability. AI systems can unintentionally perpetuate bias if the data they’re trained on reflects historical inequities—like unfairly pricing premiums for certain demographics. It’s our duty to ensure these tools don’t harm vulnerable groups or erode trust. It also means being clear with customers about how their data is used and ensuring they’re not just numbers in an algorithm but individuals whose needs matter.

What’s an example of an ethical challenge that might arise with AI in insurance, and how can it be addressed?

One big challenge is algorithmic bias in pricing or coverage decisions. For instance, if an AI model denies coverage to someone based on zip code data that correlates with socioeconomic status, it could unfairly disadvantage entire communities. To address this, companies need to regularly audit their AI models for bias, diversify the data they use, and involve ethicists or third-party reviewers to catch blind spots. Transparency with customers about decision-making processes is also key to maintaining trust.
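To make the idea of a bias audit a little more concrete, here is a minimal sketch, in Python with pandas, of the kind of check an auditing team might run: it compares approval rates across zip-code-derived cohorts and flags the model for human review when the gap breaches a common rule-of-thumb threshold. The column names, cohorts, and the four-fifths threshold are illustrative assumptions, not details from Simon’s example or any specific insurer’s process.

```python
# Illustrative sketch only: a simple fairness check on coverage decisions,
# grouped by a proxy attribute (here, hypothetical zip-code-based cohorts).
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest to the highest group approval rate (1.0 = perfectly even)."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical decision log: one row per application, approved = 1 means cover was offered.
decisions = pd.DataFrame({
    "zip_cohort": ["A", "A", "A", "B", "B", "B", "C", "C"],
    "approved":   [1,   1,   1,   0,   1,   0,   1,   1],
})

ratio = disparate_impact_ratio(decisions, "zip_cohort", "approved")
if ratio < 0.8:  # the common "four-fifths" rule of thumb, used here as an assumed threshold
    print(f"Potential disparate impact (ratio = {ratio:.2f}); escalate for human review.")
else:
    print(f"Approval rates look broadly even across cohorts (ratio = {ratio:.2f}).")
```

In practice such a check would sit alongside richer fairness metrics and a documented escalation path, but even a simple ratio like this makes the audit step itself transparent and repeatable.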

Why do you think investing in employee training is as critical as adopting AI technologies in insurance?

Technology is only as good as the people using it. AI can automate tasks, but human judgment, empathy, and strategic thinking are irreplaceable, especially in nuanced areas like complex claims or customer disputes. Training ensures employees can interpret AI outputs, spot errors, and make ethical decisions. Without skilled staff, you risk over-reliance on tech, which can lead to costly mistakes or missed opportunities. It’s about creating a synergy where AI handles the heavy lifting, and humans provide the heart and oversight.

What specific skills should insurance professionals develop to work effectively alongside AI systems?

Data literacy is non-negotiable—professionals need to understand how AI models work, what data drives them, and how to question results. Critical thinking is also vital to challenge AI outputs when they don’t align with real-world context. On the softer side, emotional intelligence remains key for customer-facing roles, as AI can’t replicate genuine empathy. Familiarity with regulatory frameworks around data privacy and AI ethics is another must, given the scrutiny this space is under.

What does using AI ‘wisely’ mean to you from a business perspective in the insurance sector?

Using AI wisely means deploying it with purpose, not just because it’s the latest trend. It’s about aligning AI initiatives with core business goals—whether that’s improving customer satisfaction, reducing costs, or mitigating risks—while avoiding over-automation that could alienate clients. It also means being mindful of long-term sustainability, ensuring the technology scales without compromising quality or ethics. Wise use prioritizes outcomes over hype.

How can insurance companies measure if their AI investments are truly profitable and not just a flashy expense?

Profitability comes down to clear metrics. Companies should track how AI impacts operational costs—say, reduced time in claims processing or lower fraud losses. Customer retention and satisfaction scores are another indicator; if AI improves service, that should reflect in loyalty. They also need to look at revenue growth from personalized offerings enabled by AI. It’s critical to set benchmarks before implementation and conduct regular reviews to ensure the tech isn’t just a sunk cost but a driver of value.
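As one way of picturing the benchmark-then-review approach Simon describes, here is a minimal sketch in Python that compares hypothetical pre- and post-deployment metrics (claims handling cost, fraud losses, retention) against the running cost of the AI system. All figures, field names, and the value-per-retention-point assumption are placeholders for the example, not real data.

```python
# Illustrative sketch only: weighing AI-driven savings against running costs,
# using benchmarks captured before deployment. All numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Benchmark:
    claims_cost_per_case: float   # average handling cost per claim
    fraud_losses: float           # annual fraud losses
    retention_rate: float         # share of customers renewing

def annual_value(before: Benchmark, after: Benchmark, claims_per_year: int,
                 revenue_per_retention_point: float) -> float:
    """Rough annual value created: claims savings + fraud reduction + retention uplift."""
    savings = (before.claims_cost_per_case - after.claims_cost_per_case) * claims_per_year
    fraud_reduction = before.fraud_losses - after.fraud_losses
    retention_points = (after.retention_rate - before.retention_rate) * 100
    return savings + fraud_reduction + retention_points * revenue_per_retention_point

before = Benchmark(claims_cost_per_case=120.0, fraud_losses=2_000_000, retention_rate=0.86)
after = Benchmark(claims_cost_per_case=95.0, fraud_losses=1_600_000, retention_rate=0.89)

value = annual_value(before, after, claims_per_year=50_000, revenue_per_retention_point=150_000)
ai_running_cost = 1_200_000  # assumed annual licence and maintenance spend
print(f"Estimated annual net value: {value - ai_running_cost:,.0f}")
```

The point is simply that the benchmarks are captured before go-live, so the later review compares like with like instead of retrofitting a justification for the spend.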

How real do you think the threat is of clients using AI tools themselves and becoming competitors to traditional insurance firms?

It’s a very real possibility, especially as AI tools become more accessible. Clients, particularly businesses, could use AI to self-assess risks, negotiate directly with reinsurers, or even pool resources for self-insurance models. Individual consumers might rely on AI advisors for policy comparisons or claims handling, bypassing brokers. While it’s not an immediate threat for most, it’s a wake-up call for firms to innovate and offer value that AI alone can’t replicate, like trust and personalized service.

Looking ahead, what’s your forecast for how AI will continue to evolve within the insurance industry over the next decade?

I foresee AI becoming even more embedded in every facet of insurance, from hyper-personalized policies using real-time data to fully automated claims systems that resolve issues in minutes. We’ll likely see greater collaboration between insurers and tech providers to build bespoke AI solutions. However, the regulatory landscape will tighten, pushing for more transparency and accountability. The challenge will be balancing innovation with trust—ensuring AI empowers both companies and clients without overstepping ethical or legal boundaries. I’m optimistic, but it’ll require careful navigation.
