SCSC.uk

Seminar: Frameworks for Safe AI Systems

THE SAFETY-CRITICAL SYSTEMS CLUB, Seminar:

Frameworks for Safe AI Systems

Thursday 25 April 2024 - London, IET Savoy Place

This seminar will look at the ethical, legal and regulatory frameworks and the societal aspects of AI systems. It will also consider high-level goals, design frameworks and safety objectives for AI. The event is the first of three on an AI theme in 2024.

This event will be held at the IET Savoy Place, London.

Speakers include:

  • Dai Davis, Percy Crow Davis & Co Ltd - "Demystifying AI"
  • George Mason, Frazer-Nash Consultancy - "An Introduction to Safe AI: The Motivation, Legislation, Methods and Challenges"
  • Phill Mulvana, UKAEA - "Design Frameworks For Autonomy: An exploration of the SACE process in a nuclear context"
  • Gabriel Pedroza, Ansys - "Design, Exploration and Evaluation of Safety-Critical Software for Integrating AI-based Algorithms"
  • Zoe Porter, Assuring Autonomy International Programme, University of York - "A framework for the equitable safety of AI-based systems"
  • Mark Sujan, Health Services Safety Investigations Body (HSSIB) - "Human-Centred Healthcare AI"

(The second and third seminars will look at construction, implementation and operation of AI systems.)

Talk abstracts and speaker bios:

Dai Davis, Percy Crow Davis & Co Ltd - "Demystifying AI"

Abstract: This talk covers the big questions from a legal perspective: Where are we with Artificial Intelligence? What is Artificial Intelligence? What is Machine Learning? What are the knowledge gaps in AI? What can AI models do and not do? And is AI dangerous?

Bio: Dai Davis is a Technology Lawyer. He practises as a solicitor but is also a qualified Chartered Engineer and a Member of the Institution of Engineering and Technology. Dai has been recommended consistently in the Legal 500 and Chambers guides to the legal profession for over 30 years. He has two master's degrees: one in Physics, the other in Computer Science. Having been national head of Intellectual Property Law and later national head of Information Technology Law at Eversheds for a number of years, Dai has for the past decade been a partner in his own specialist law practice, Percy Crow Davis & Co. He works primarily as a commercial contract lawyer and advises clients throughout the country on intellectual property, computer and technology law matters.

George Mason, Frazer-Nash Consultancy - "An Introduction to Safe AI: The Motivation, Legislation, Methods and Challenges"

Abstract: This presentation introduces the concept of 'safe AI'. The talk begins by outlining the motivation for safe AI and what it means for an AI system to be regarded as safe. It then gives an overview of the legislative landscape surrounding safe AI, concentrating on regulatory efforts to date. Next, methods for ensuring safety in AI systems are summarised. Lastly, the challenges impeding the development and deployment of safe AI systems are considered.

Bio: Dr George Mason is a member of the Trustworthy AI group at Frazer-Nash Consultancy, specialising in the risk assessment and safety assurance of AI for safety-critical systems. Prior to joining Frazer-Nash, he was a researcher in the Trustworthy Adaptive and Autonomous Systems and Processes team at the University of York. He holds a BSc in computer science and a PhD in machine learning safety and has experience working on AI for the defence and nuclear sectors.

Gabriel Pedroza, Ansys - "Design, Exploration and Evaluation of Safety-Critical Software for Integrating AI-based Algorithms"

Abstract: In this talk, I will review some of the specific characteristics and challenges of designing and implementing AI algorithms. I will then present a method and framework to support the design of systems integrating AI algorithms and, more importantly, their quantitative safety assessment. The overall approach is illustrated with a flight formation use case integrating a Reinforcement Learning model. Finally, some perspectives are offered on the work ahead to foster AI maturity.

Bio: Dr Gabriel Pedroza is a Principal R&D Engineer at Ansys and works on the safety of AI algorithms in embedded systems. His background includes a PhD in embedded systems security, an MSc in artificial intelligence and a BSc in physics and mathematics. Gabriel's work mainly focuses on developing methods and techniques to design and validate systems' trustworthiness, at both theoretical and practical levels. He actively participates in standardisation groups in the aerospace and automotive domains targeting certification of AI safety.

Phill Mulvana, UKAEA - "Design Frameworks For Autonomy: An exploration of the SACE process in a nuclear context"

Abstract: I will talk through the creation of a strawman safety case for an autonomous glovebox system in a nuclear environment. Building on my organisation's expertise in robotics and AI, we elected to assess the feasibility of building a safety case using the SACE framework, in preparation for a potential future deployment of an autonomous system. We will explore some of the context that drives the need for a strong development framework, the progress made, and the opportunities and limitations.

Bio: Phill is a Safety Manager and Principal Technologist for RAICo, the Robotics and AI Collaboration, part of the UK Atomic Energy Authority. Specialising in the safety and regulation of innovative technologies, he now works with industry and regulators on nuclear and robotics challenges, having previously spent time in defence, advanced manufacturing and stored energy. He holds an MSc in Systems Safety Engineering, is a Chartered Manager and is a Fellow of the IET.

Zoe Porter, Assuring Autonomy International Programme, University of York - "A framework for the equitable safety of AI-based systems"

Abstract: This talk will set out the PRAISE framework for assuring ethically acceptable AI. The framework takes the position that, to justify the risks from deploying an AI-based system in a particular context, the reasons for deploying it should be compelling, the distribution of risk and benefit should be equitable so that risk-bearers are not endangered for the advantage of others, and the personal autonomy of people affected by the system should not be unduly constrained. In this way, the PRAISE framework is intended as a framework for equitable, and not merely tolerable, safety.

Bio: Dr Zoe Porter is a Research Fellow in the Institute for Safe Autonomy at the University of York. She has a background in philosophy, and her research focuses on ethics and responsibility in the context of AI and autonomous systems.

Mark Sujan, Health Services Safety Investigations Body (HSSIB) - "Human-Centred Healthcare AI"

Abstract: This talk looks at AI from a healthcare perspective. It includes:

  • Application examples of AI in healthcare
  • An outline of a systems perspective on healthcare AI
  • A discussion of design metaphors, e.g. substitution vs augmentation
  • Reflections from a case study looking at the use of AI in an ambulance service

© SCSC 2024