What might go wrong with AI safety and security?

Link to paper

This paper was written to support the Conference on Frontier AI Safety Frameworks (FAISC).

In this paper we draw on our experience of system and software assurance and evaluation for systems important to society to summarise how safety engineering is performed in traditional critical systems, such as aircraft flight control. We analyse how this critical systems perspective might support the development and implementation of AI Safety Frameworks. We present our analysis in terms of: system engineering, safety and risk analysis, and decision analysis and support.

We consider four key questions:

  • What is the system?
  • How good does it have to be?
  • What is the impact of criticality on system development?
  • How much should we trust it?

We identify topics worthy of further discussion. In particular, we are concerned that system boundaries are not broad enough, that the tolerability and nature of the risks are not sufficiently elaborated, and that the assurance methods lack theories that would allow behaviours to be adequately assured.

We advocate the use of assurance cases based on Assurance 2.0 to support decision making in which the criticality of the decision, as well as the criticality of the system, is evaluated. We point out the orders of magnitude difference in confidence needed for critical systems compared with everyday systems, and how everyday techniques do not scale in rigour. Finally, we map our findings in detail to two of the questions posed by the FAISC organisers, and we note that the engineering of critical systems has evolved through open and diverse discussion. We hope that the topics identified here will support the post-FAISC dialogues.

