As the uses of artificial intelligence (AI) continue to expand, there is a growing need for effective risk management to deal with issues ranging from the technical, such as algorithm failures, to the ethical, such as bias in decision-making. AI has the potential to revolutionize many industries and improve our daily lives, but its risks must be carefully considered and managed.
A new ISO/IEC Standard provides essential guidance on risk management for organizations of all sizes and types that utilize AI in their systems or processes. ISO/IEC 23894 shows users how to manage AI-related risks effectively in order to achieve objectives and improve performance.
"While AI systems are similar to traditional IT systems in many ways, they also present new aspects such as their ability to learn," says Wael William Diab, who chairs the joint IEC and ISO committee that develops AI standards.
"SC 42 took the novel approach of developing a framework that employs well-established techniques around risk management. ISO/IEC 23894 provides a holistic and proactive approach to managing AI-related risks with the goal of enabling users to manage the risks effectively to harness the full potential of AI."
A framework for risk management
The new standard adapts and develops the guidelines and general principles of risk management described in ISO 31000. It describes a framework for risk management that requires users to establish context and to identify, analyze, evaluate, treat, monitor and review the risks.
Establishing the context entails defining the organization's objectives, the risks that could impact those objectives, and the needs and expectations of the stakeholders affected by those risks. Identification means pinpointing the potential risks to those objectives, including risks arising from the organization's activities, its processes and external factors.
Analysis means estimating the likelihood and potential consequences of each identified risk. Evaluation means deciding, on that basis, which risks warrant a response and what that response should be.
Treatment means implementing the chosen response: avoiding the risk, reducing its likelihood or impact, transferring it to another party, or accepting it. Monitoring and review mean tracking the risks on an ongoing basis to confirm they are being managed effectively, and examining the risk management process itself for possible improvements.
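To make the cycle concrete, here is a minimal illustrative sketch in Python of a risk register that walks through identify, analyze, evaluate, treat and review. It is not part of ISO/IEC 23894 or ISO 31000: the likelihood-times-impact scoring model, the 1-to-5 scales, the threshold and the Risk and RiskRegister names are all assumptions made for demonstration.

```python
# Illustrative only: a minimal sketch of the cycle described above
# (identify -> analyze -> evaluate -> treat -> monitor/review).
# The scoring model, scales and threshold are assumptions for this
# example, not definitions taken from ISO/IEC 23894 or ISO 31000.
from dataclasses import dataclass, field

# The four responses named in the text above.
TREATMENTS = ("avoid", "reduce", "transfer", "accept")

@dataclass
class Risk:
    description: str
    likelihood: int          # assumed scale: 1 (rare) .. 5 (almost certain)
    impact: int              # assumed scale: 1 (negligible) .. 5 (severe)
    treatment: str = "accept"

    def __post_init__(self) -> None:
        if self.treatment not in TREATMENTS:
            raise ValueError(f"unknown treatment: {self.treatment}")

    @property
    def score(self) -> int:
        # A common heuristic: risk score = likelihood x impact.
        return self.likelihood * self.impact

@dataclass
class RiskRegister:
    context: str                       # objectives and stakeholders in scope
    risks: list[Risk] = field(default_factory=list)

    def identify(self, risk: Risk) -> None:
        self.risks.append(risk)

    def evaluate(self, threshold: int = 10) -> list[Risk]:
        # Risks at or above the (assumed) threshold are worth treating.
        return [r for r in self.risks if r.score >= threshold]

    def review(self) -> None:
        # Ongoing monitoring: re-examine every risk and its treatment.
        for r in sorted(self.risks, key=lambda r: r.score, reverse=True):
            print(f"[{r.score:>2}] {r.description} -> {r.treatment}")

register = RiskRegister(context="Credit-scoring model; stakeholders: applicants, regulator")
register.identify(Risk("Training data encodes demographic bias", likelihood=4, impact=5))
register.identify(Risk("Model drift degrades accuracy", likelihood=3, impact=3))

for risk in register.evaluate():
    risk.treatment = "reduce"          # e.g. add bias testing and retraining
register.review()
```

In a real programme, the treat and review steps would feed back into the register continuously, as the standard's monitoring and review step requires; the sketch only shows a single pass.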
A structured approach
"Implementing this new international standard will not only help organizations to ensure that AI systems operate safely and fairly, but also help them to avoid potential risks and negative consequences," says David Filip, convenor of the working group that developed ISO/IEC 23894. "It can help organizations ensure that their use of AI technology is safe, ethical, and aligned with their goals and values,"
ISO/IEC 23894 provides a structured approach that helps organizations identify, assess and address risks proactively and effectively. It sets out a framework and principles for ensuring that AI systems operate safely and fairly, and that an organization's use of AI remains ethical and aligned with its goals and values.
"ISO/IEC 23894 is designed for a technology that is constantly evolving," says Peter Deussen, project leader of ISO/IEC 23894. "It emphasises the importance of constantly reviewing, identifying and preparing for potential risks."
ISO/IEC JTC 1/SC 42
Mr. Deussen presented ISO/IEC 23894 at the second bi-annual ISO/IEC AI Workshop. Topics covered at the event included AI applications, beneficial AI, novel AI standardization approaches, and emerging AI technology trends and requirements.
SC 42 develops international standards for artificial intelligence. Its unique holistic approach considers the entire AI ecosystem, looking at technology capability alongside non-technical requirements such as business, regulatory and policy requirements, application domain needs, and ethical and societal concerns.
SC 42 is currently working with IEC TC 65 on a new functional safety standard for AI. The aim is to ensure that systems, equipment and devices that rely on AI technologies function safely, even in the presence of failures or errors.
TC 65 is responsible for the IEC 61508 series of standards, which covers the design and implementation of safeguards to prevent accidents and minimize risks to people, property and the environment.
(Source: IEC)