What is the relationship between Artificial Intelligence and risk in high-risk industries like oil and gas?
This June at the ESREL and SRA-E 2025 conference, hosted by the University of Stavanger, Presight Solutions is proud to lead a panel discussion that brings together the regulator (HAVTIL), the operator (Equinor), the risk evaluator (DNV), and ourselves, an AI provider to high-risk industries, to discuss the role of AI in risk management.
As AI systems are integrated into critical environments, including predictive maintenance, drilling automation, and safety monitoring, there is growing urgency to ensure their deployment is safe, explainable, and aligned with existing safety regulations in sectors such as oil and gas.

Image generated by ChatGPT showing technology and AI in industry
AI Risk Management in Oil & Gas
Artificial intelligence promises unprecedented gains in operational efficiency, anomaly detection, and decision support. It can surface patterns invisible to human analysts across vast datasets, enhancing reliability, reducing downtime, and supporting predictive maintenance across platforms.
However, in safety-critical domains, the cost of error is high. Misleading outputs or overconfident predictions from black-box AI models can trigger cascading failures when such models are deployed without a clear understanding of the associated risks.
In line with points emphasised by HAVTIL, Presight believes that AI risk management in oil and gas must account for human-in-the-loop interactions: the same rigour applied to traditional safety systems must now be applied to intelligent systems.
The Importance of Explainable AI and Uncertainty Quantification
A key challenge highlighted by DNV is that artificial intelligence models can be confidently wrong. To address this, DNV is introducing new methods for uncertainty quantification and explainable AI (XAI) that show not only what a model predicts, but also how certain it is, and why.
This level of transparency is critical in high-stakes decisions, where explanations of uncertainty help human operators decide when to intervene and when to trust the model. Together with transparency about the underlying data, such explanations are key factors in building trust in a model, though they are far from the only ones to consider.
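To make the idea concrete, here is a minimal sketch of one common approach, not the specific methods DNV presents: the spread of an ensemble's predictions serves as an uncertainty signal, and permutation importance provides a simple, model-agnostic explanation. The data is synthetic and the sensor feature names are hypothetical.

```python
# Illustrative sketch only (not DNV's method): ensemble spread as an
# uncertainty estimate for a predictive-maintenance model, plus a simple
# feature-importance explanation. Feature names and cutoffs are hypothetical.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for sensor data; a real case would use condition-monitoring
# features (vibration, temperature, ...) and e.g. a remaining-useful-life target.
X, y = make_regression(n_samples=2000, n_features=5, noise=10.0, random_state=0)
feature_names = ["vibration", "temperature", "pressure", "flow_rate", "rpm"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# "What the model predicts": per-tree predictions; their spread is a crude
# proxy for how sure the ensemble is about each test point.
per_tree = np.stack([tree.predict(X_test) for tree in model.estimators_])
mean_pred, std_pred = per_tree.mean(axis=0), per_tree.std(axis=0)

# Flag the most uncertain predictions for human review (hypothetical cutoff).
uncertain = std_pred > np.percentile(std_pred, 90)
print(f"{uncertain.sum()} of {len(X_test)} predictions flagged for review")

# "Why": permutation importance as a simple, model-agnostic explanation.
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, imp.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:12s} importance: {score:.3f}")
```

In practice, the points flagged as uncertain would be routed to an operator rather than acted on automatically, a pattern we return to below.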
Some questions we will explore:
- How can AI systems be integrated as technical and operational safety barriers?
- Can uncertainty quantification enable safer use of AI for predictive maintenance?
- Can we trust human oversight ("human in the loop", HITL) to catch what AI might miss, especially when models are confidently wrong?
- How can AI act as a copilot, assisting with bowtie analysis or hazard identification to improve safety and decision-making? And what are the biggest challenges in using AI tools like OpenRisk in these contexts?
Regulation and Human Oversight in AI Safety
With the adoption of the EU AI Act, regulatory frameworks are beginning to reflect the high-risk nature of deploying AI in sectors like oil and gas. But HAVTIL and others argue that regulation alone is not enough: many developers and operators lack a shared understanding of how AI behaves in uncertain, rapidly evolving contexts.
Presight emphasises a human-in-the-loop approach, where human operators remain central and in control. Without proper alignment between technology, human capabilities, and safety procedures, “HITL” becomes a safety illusion.
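As a toy illustration of what "central and in control" can mean in software, the sketch below only lets a model recommendation proceed automatically when its uncertainty is low, and escalates everything else to an operator. The threshold, names, and logic are hypothetical simplifications, not a description of Presight's products, and, as noted above, gating on uncertainty alone would not catch a confidently wrong model.

```python
# Illustrative sketch of a human-in-the-loop gate (hypothetical threshold and
# names): a model's output only drives an automatic action when its
# uncertainty is low; every other case is escalated to a human operator.
from dataclasses import dataclass

@dataclass
class Prediction:
    value: float        # e.g. predicted remaining useful life in days
    uncertainty: float  # e.g. ensemble standard deviation, as sketched above

def route(pred: Prediction, max_uncertainty: float = 5.0) -> str:
    """Decide whether a prediction can drive an automatic action."""
    # Caveat: a confidently wrong model passes this check, which is exactly
    # why uncertainty gating alone cannot replace broader safety barriers.
    if pred.uncertainty > max_uncertainty:
        return "escalate_to_operator"
    return "proceed_with_recommendation"

print(route(Prediction(value=42.0, uncertainty=2.1)))   # proceed_with_recommendation
print(route(Prediction(value=17.0, uncertainty=12.4)))  # escalate_to_operator
```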
Join Us at ESREL 2025
Panel: AI and Risk in High-Risk Industries
📅 June 2025, University of Stavanger
🎙 Panelists from HAVTIL, Equinor, DNV, and Presight
Read more: ESREL SRA-E 2025
Together, we’ll explore how AI risk management, explainable AI, and human-in-the-loop safety design can shape a more resilient oil and gas sector.