IEEE portfolio of AIS technology and impact standards and standards projects
This standard is a logical extension to IEEE 1872-2015™ Standard for Ontologies for Robotics and Automation. The standard extends the CORA ontology by defining additional ontologies appropriate for Autonomous Robotics (AuR) relating to:
- The core design patterns specific to AuR in common R&A sub-domains;
- General ontological concepts and domain-specific axioms for AuR; and
- General use cases and/or case studies for AuR.
IEEE P2089™ – Standard for Age Appropriate Digital Services Framework – Based on the 5Rights Principles for Children
This standard provides a methodology to establish a framework for digital services when end users are children, tailoring the services provided so that they are age appropriate. This is essential to creating a digital environment that offers children safety by design and delivery, privacy by design, autonomy by design, and health by design; the standard provides a set of guidelines and best practices, thereby offering a level of validation for service design decisions.
This standard defines and classifies the components and functionality of adaptive instructional systems (AIS). It defines parameters used to describe AIS and establishes requirements and guidance for the use and measurement of these parameters.
This standard defines interactions and exchanges among the components of adaptive instructional systems (AISs). It defines the data and data structures used in these interactions and exchanges and parameters used to describe and measure them and establishes requirements and guidance for the use and measurement of the data, data structures, and parameters.
This recommended practice defines and classifies methods of evaluating adaptive instructional systems (AIS) and establishes guidance for the use of these methods. This recommended practice incorporates and promotes the principles of ethically aligned design for the use of artificial intelligence (AI) in AIS.
IEEE P2660.1™ – Recommended Practices on Industrial Agents: Integration of Software Agents and Low Level Automation Functions
This recommended practice describes integrating and deploying Multi-agent Systems (MAS) technology in industrial environments for use in building the intelligent decision-making layer on top of legacy industrial control platforms. The integration of software agents with low-level real-time control systems, mainly based on Programmable Logic Controllers (PLCs) running IEC 61131-3™ control programs (forming in this manner a new component known as industrial agents), is also identified. In addition, the integration of software agents with control applications based on the IEC 61499™ standard or executed on embedded controllers is described.
This recommended practice helps engineers leverage best practices for developing industrial agents for specific automation control problems and given application fields. To that end, corresponding rules, guidelines and design patterns are provided.
IEEE P2671™ – Standard for General Requirements of Online Detection Based on Machine Vision in Intelligent Manufacturing
This standard specifies the general requirements of online detection based on machine vision, including requirements for data format, data transmission processes, definition of application scenarios, and performance metrics for evaluating the effect of online detection deployment.
This guide provides the definitions, terminologies, operation procedures, system architectures, key technological requirements, data requirements and applications of and related to user-oriented mass customization. This guide provides reference information to be used by manufacturing enterprises for designing and implementing business models of mass customization.
This standard extends the IEEE 1873-2015™ Standard for Robot Map Data Representation from two-dimensional (2D) maps to three-dimensional (3D) maps. The standard develops a common representation and encoding for 3D map data, to be used in applications requiring robot operation, like navigation and manipulation, in all domains (space, air, ground/surface, underwater, and underground). The standard encoding is devoted to exchange map data between robot systems, while allowing robot systems to use their private internal representations for efficient map data processing. The standard places no constraints on where map data comes from nor on how maps are constructed.
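To make the exchange idea concrete, the following is a toy sketch of what a portable 3D map record, decoupled from any robot's private internal representation, might look like. All field names here are hypothetical illustrations, not taken from the standard.

```python
# Toy sketch of an exchange-oriented 3D map record: a common, portable
# encoding that a receiving robot converts into its own internal form.
# Field names ("map_id", "coordinate_frame", etc.) are hypothetical.
import json

def encode_point_cloud_map(map_id, frame, points):
    """Serialize a 3D point-cloud map to a portable JSON record."""
    return json.dumps({
        "map_id": map_id,
        "coordinate_frame": frame,      # e.g. "ENU", units in metres
        "representation": "point_cloud",
        "points": points,               # list of [x, y, z]
    })

def decode_map(blob):
    """The receiver parses the record before building its own structures."""
    record = json.loads(blob)
    return record["map_id"], record["points"]

blob = encode_point_cloud_map("lab-3f", "ENU",
                              [[0.0, 0.0, 0.0], [1.0, 0.5, 2.0]])
map_id, pts = decode_map(blob)
print(map_id, len(pts))  # lab-3f 2
```

The point is the separation of concerns the standard calls for: the wire format is shared, while each robot remains free to store the map internally as an octree, voxel grid, or mesh for efficient processing.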
IEEE P2801™ – Recommended Practice for the Quality Management of Datasets for Medical Artificial Intelligence
This recommended practice identifies best practices for establishing a quality management system for datasets used for artificial intelligence medical devices. It covers a full cycle of dataset management, including items such as but not limited to data collection, transfer, utilization, storage, maintenance and update.
This recommended practice recommends a list of critical factors that impact the quality of datasets, such as but not limited to data sources, data quality, annotation, privacy protection, personnel qualification/training/evaluation, tools, equipment, environment, process control and documentation.
IEEE P2802™ – Standard for the Performance and Safety Evaluation of Artificial Intelligence Based Medical Device: Terminology
This standard establishes terminology used in artificial intelligence medical devices, including definitions of fundamental concepts and methodology that describe the safety, effectiveness, risks and quality management of artificial intelligence medical devices.
It provides definitions in forms including, but not limited to, literal descriptions, equations, tables, figures and legends.
The standard also establishes a vocabulary for the development of future standards for artificial intelligence medical devices.
This standard defines the framework of knowledge graphs (KGs). The framework describes the input requirements of a KG; the construction process of a KG, i.e., extraction, storage, fusion and understanding; performance metrics; applications of KGs and their verticals; KG-related artificial intelligence (AI) technologies; and other required digital infrastructure.
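As a minimal sketch of the construction steps named above (extraction, storage, fusion), the following toy code builds a triple store from text and merges a second source into it. All class and function names are hypothetical, chosen for illustration, and are not defined by the standard.

```python
# Minimal illustration of KG construction: extraction of
# (subject, predicate, object) triples, storage, and fusion of sources.
from collections import defaultdict

class TripleStore:
    """Stores knowledge as (subject, predicate, object) triples."""
    def __init__(self):
        self.triples = set()
        self.index = defaultdict(set)   # subject -> triples about it

    def add(self, s, p, o):
        self.triples.add((s, p, o))
        self.index[s].add((s, p, o))

    def fuse(self, other):
        """Fusion step: merge triples from another source, deduplicating."""
        for t in other.triples:
            self.add(*t)

    def about(self, subject):
        return sorted(self.index[subject])

def extract(lines):
    """Extraction step: pull triples out of simple 's p o' text lines."""
    store = TripleStore()
    for line in lines:
        s, p, o = line.split(maxsplit=2)
        store.add(s, p, o)
    return store

kg = extract(["IEEE publishes standards", "P2807 defines KG-framework"])
kg.fuse(extract(["IEEE publishes standards"]))  # duplicate is absorbed
print(len(kg.triples))  # 2 unique triples after fusion
```

Real extraction works on unstructured text with NLP rather than pre-split lines, but the pipeline shape (extract, store, fuse, query) is the one the framework describes.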
This standard defines technical requirements, performance metrics, evaluation criteria and test cases for knowledge graphs. The mandatory test cases include data input, metadata, data extraction, data fusion, data storage and retrieval, inference and analysis, and knowledge graph display.
This standard defines guidelines for application of knowledge graphs for financial services. The standard specifies technical framework, workflows, implementation guidelines and application scenarios of financial knowledge graphs.
This guideline for Scientific Knowledge Graphs (SKG) specifies: 1) Data scope, including the actors such as authors or organizations, the documents such as journal or conference publications, and the research knowledge such as research topics or technologies; 2) SKG construction process, including knowledge acquisition, knowledge fusion, knowledge representation, or knowledge inference of scientific knowledge; 3) Applications, including academic service, intelligence mining, or scholar analysis.
The purpose of this Guide is to identify existing best practices and provide instruction sets that define valid verification processes for a range of autonomous system configurations. These best practices apply from the lowest level components and software to the highest level learning or decision making elements (specifically including verification of the inputs to any learning algorithms, such as training data). The guidelines are intended to include both robots and immobots, singly and in groups, focusing primarily on systems that can operate autonomously rather than on automated or supervised robots. They may also be applicable to systems that do not directly interact with the external world (e.g. intelligence networks).
This standard defines a framework and architectures for machine learning in which a model is trained using encrypted data that has been aggregated from multiple sources and is processed by a trusted third party. It specifies functional components, workflows, security requirements, technical requirements, and protocols.
The standard describes specifications for the factors that shall be considered in the development of a Responsible Artificial Intelligence (AI) license. Possible elements in the specification include (but are not limited to): (1) what a ‘Responsible AI License’ means and what its aims are; (2) standardized definitions for referring to components, features and other such elements of AI software, source code and services; (3) standardized references to geography-specific AI/technology-specific legislation and laws (such as the EU General Data Protection Regulation – GDPR), as well as identification of violation detection, penalties, and legal remedies; and (4) domain-specific considerations that may be applied in developing a responsible AI license. The proposed standard shall not require the use of any specific legal text or clauses, nor shall it offer legal advice.
This document defines best practices for developing and implementing deep learning algorithms and defines a framework and criteria for evaluating algorithm reliability and quality of the resulting software systems.
This standard provides a technical framework for Secure Multi-Party Computation, including:
- An overview of Secure Multi-Party Computation;
- A technical framework of Secure Multi-Party Computation;
- Security levels of Secure Multi-Party Computation; and
- Use cases based on Secure Multi-Party Computation.
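One classic building block of secure multi-party computation is additive secret sharing, sketched below as a toy: no single share reveals a secret, yet parties can compute a sum of secrets share-wise. This is an illustrative primitive only, not the framework or security levels the standard itself defines.

```python
# Toy additive secret sharing over a prime field: the secret is split
# into random shares that sum to it mod P; any proper subset of shares
# is uniformly random and reveals nothing.
import random

P = 2**61 - 1  # a prime modulus

def share(secret, n=3):
    """Split `secret` into n shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Each party holds one share of each input; adding shares pointwise
# yields shares of the sum, computed without revealing the inputs.
a_shares = share(42)
b_shares = share(100)
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 142
```

Production MPC protocols add multiplication, malicious-security checks, and communication layers on top of primitives like this, which is where the standard's notion of graded security levels comes in.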
This recommended practice specifies governance criteria such as safety, transparency, accountability, responsibility and minimizing bias, and process steps for effective implementation, performance auditing, training and compliance in the development or use of artificial intelligence within organizations.
This guide specifies an architectural framework that facilitates the adoption of explainable artificial intelligence (XAI). This guide defines an architectural framework and application guidelines for XAI, including: 1) description and definition of explainable AI; 2) the categories of explainable AI techniques; 3) the application scenarios for which explainable AI techniques are needed; and 4) performance evaluations of XAI in real application systems.
IEEE P3333.1.3™ – Standard for the Deep Learning Based Assessment of Visual Experience Based on Human Factors
This standard defines deep learning-based metrics of content analysis and quality of experience (QoE) assessment for visual contents, which is an extension of Standard for the Quality of Experience (QoE) and Visual-Comfort Assessments of Three-Dimensional (3D) Contents Based on Psychophysical Studies (IEEE 3333.1.1™) and Standard for the Perceptual Quality Assessment of Three Dimensional (3D) and Ultra High Definition (UHD) Contents (IEEE 3333.1.2™).
The scope covers the following:
- Deep learning models for QoE assessment (multilayer perceptrons, convolutional neural networks, deep generative models)
- Deep metrics of visual experience from High Definition (HD), UHD, 3D, High Dynamic Range (HDR), Virtual Reality (VR) and Mixed Reality (MR) contents
- Deep analysis of clinical (electroencephalogram (EEG), electrocardiogram (ECG), electrooculography (EOG), and so on) and psychophysical (subjective test and simulator sickness questionnaire (SSQ)) data for QoE assessment
- Deep personalized preference assessment of visual contents
- Building image and video databases for performance benchmarking purposes if necessary
Federated learning defines a machine learning framework that allows a collective model to be constructed from data that is distributed across data owners.
This guide provides a blueprint for data usage and model building across organizations while meeting applicable privacy, security and regulatory requirements. It defines the architectural framework and application guidelines for federated machine learning, including: 1) description and definition of federated learning, 2) the types of federated learning and the application scenarios to which each type applies, 3) performance evaluation of federated learning, and 4) associated regulatory requirements.
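The core mechanic of the framework described above can be sketched as federated averaging: each data owner trains on its own data and shares only model parameters, which a coordinator averages. The toy model below (a single weight fitted by gradient steps) is a hypothetical illustration, not an implementation of the guide.

```python
# Federated-averaging sketch: raw data never leaves its owner; only
# locally updated model parameters are shared and averaged.

def local_update(w, data, lr=0.1):
    """One gradient step on the model y = w*x, using only this owner's data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(w, owners):
    """Each owner trains locally; the server averages the weights."""
    local_ws = [local_update(w, data) for data in owners]
    return sum(local_ws) / len(local_ws)

# Two owners whose private data both follow y = 3x.
owners = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, owners)
print(round(w, 2))  # converges toward 3.0
```

Real deployments layer secure aggregation, differential privacy, and the regulatory controls the guide enumerates on top of this basic exchange pattern.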
This standard outlines an approach for identifying and analyzing potential ethical issues in a system or software program from the onset of the effort. The values-based system design methods address ethical considerations at each stage of development to help avoid negative unintended consequences while increasing innovation.
This standard describes measurable, testable levels of transparency, so that autonomous systems can be objectively assessed and levels of compliance determined.
A key concern over autonomous systems (AS) is that their operation must be transparent to a wide range of stakeholders, for different reasons.
For designers, the standard will provide a guide for self-assessing transparency during development and suggest mechanisms for improving transparency (for instance the need for secure storage of sensor and internal state data, comparable to a flight data recorder or black box).
This standard specifies how to manage privacy issues for systems or software that collect personal data. It will do so by defining requirements that cover corporate data collection policies and quality assurance. It also includes a use case and data model for organizations developing applications involving personal information. The standard will help designers by providing ways to identify and measure privacy controls in their systems utilizing privacy impact assessments.
This standard describes specific methodologies to help users certify how they worked to address and eliminate issues of negative bias in the creation of their algorithms, where “negative bias” refers to the use of overly subjective or uninformed data sets, or of information known to be inconsistent with legislation concerning certain protected characteristics; or to instances of bias against groups not necessarily protected explicitly by legislation, but that otherwise diminish stakeholder or user well-being and for which there are good reasons to consider such bias inappropriate.
The standard defines specific methodologies to help users certify how they approach accessing, collecting, storing, utilizing, sharing, and destroying child and student data. The standard provides specific metrics and conformance criteria regarding these types of uses from trusted global partners and how vendors and educational institutions can meet them.
The standard defines specific methodologies to help employers to certify how they approach accessing, collecting, storing, utilizing, sharing, and destroying employee data. The standard provides specific metrics and conformance criteria regarding these types of uses from trusted global partners and how vendors and employers can meet them.
The standard establishes a set of ontologies with different abstraction levels that contain concepts, definitions and axioms which are necessary to establish ethically driven methodologies for the design of Robots and Automation Systems.
“Nudges” as exhibited by robotic, intelligent or autonomous systems are defined as overt or hidden suggestions or manipulations designed to influence the behavior or emotions of a user.
This standard establishes a delineation of typical nudges (currently in use or that could be created). It contains concepts, functions and benefits necessary to establish and ensure ethically driven methodologies for the design of the robotic, intelligent and autonomous systems that incorporate them.
This standard establishes a practical, technical baseline of specific methodologies and tools for the development, implementation, and use of effective fail-safe mechanisms in autonomous and semi-autonomous systems.
The standard includes (but is not limited to): clear procedures for measuring, testing, and certifying a system’s ability to fail safely on a scale from weak to strong, and instructions for improvement in the case of unsatisfactory performance.
IEEE 7010-2020™ (Standard Now Available) – IEEE Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-being
Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems (A/IS) on Human Well-being is a recommended practice for measuring the impact of A/IS on humans. The overall intent of IEEE P7010™ is to support the outcome of A/IS having positive impacts on human well-being.
The recommended practice is grounded in scientifically valid well-being indices currently in use and based on a stakeholder engagement process. The intent of the recommended practice is to guide product development, identify areas for improvement, manage risks, assess performance and identify intended and unintended users, uses and impacts on human well-being of A/IS.
Now available at no charge in the IEEE Standards Reading Room.
IEEE P7011™ – Standard for the Process of Identifying and Rating the Trustworthiness of News Sources
This standard provides semi-autonomous processes using standards to create and maintain news purveyor ratings for purposes of public awareness. It standardizes processes to identify and rate the factual accuracy of news stories in order to produce a rating of online news purveyors and the online portion of multimedia news purveyors. This process will be used to produce truthfulness scorecards through multi-faceted and multi-sourced approaches.
The standard defines an algorithm using open source software and a scorecard rating system as the methodology for rating trustworthiness as a core tenet, in an effort to establish trust and acceptance.
The standard identifies/addresses the manner in which personal privacy terms are proffered and how they can be read and agreed to by machines.
IEEE P7014™ – Standard for Ethical considerations in Emulated Empathy in Autonomous and Intelligent Systems
This standard defines a model for ethical considerations and practices in the design, creation and use of empathic technology, incorporating systems that have the capacity to identify, quantify, respond to, or simulate affective states, such as emotions and cognitive states. This includes coverage of ‘affective computing’, ‘emotion Artificial Intelligence’ and related fields.
IEEE 1232.3-2014™ – IEEE Guide for the Use of Artificial Intelligence Exchange and Service Tie to All Test Environments (AI-ESTATE)
Guidance to developers of IEEE Std 1232-conformant applications is provided in this guide.
A new specification language, named Fuzzy Markup Language (FML), is presented in this standard, exploiting the benefits offered by eXtensible Markup Language (XML) specifications and related tools in order to model a fuzzy logic system in a human-readable and hardware independent way.
A map data representation of environments of a mobile robot performing a navigation task is specified in this standard. It provides data models and data formats for two-dimensional (2D) metric and topological maps.
Cyber analysts are becoming a bottleneck in analyzing ever-increasing amounts of data. Automating cyber analysts actions using AI can help reduce amounts of work for analysts and thereby reduce time to outcome dramatically, record actions in knowledge bases for the training of new cyber analysts, and in general, open up the field for new opportunities. As a result, the state of cybersecurity will improve. It is envisioned that this group will bring together industry stakeholders to engage in building consensus on priority issues for standardization activities on these topics, and providing a platform for IEEE thought leadership to the industry.
IEEE IC20-012 – Roadmap for the Development and Implementation of Standard Oriented Knowledge Graphs
This activity assists organizations or users who develop and apply standard-oriented knowledge graphs to gain a basic picture of the framework and general construction method. In addition, it may assist the integrators of knowledge graphs in designing a generic interface and following clarified evaluation metrics. Furthermore, standard-oriented knowledge graphs can be integrated, implemented, and applied more simply and efficiently.
The goal of the IEEE Earth Lab is to develop a Green Guide to Artificial Intelligence Systems (AIS) that will serve as a pragmatic roadmap for engineers, corporate organizations and policy makers to leverage AIS innovation for an effective transition to a green economy. We will achieve this goal by developing and supporting a global network of Living Labs that deploy ecologically aligned AIS for efficient, livable cities, low-carbon, equitable and resilient infrastructures, and thriving ecosystems for and with communities most impacted by the effects of global warming.
The goal of this Industry Connections group is to continue and proliferate the existing efforts of the IEEE Standards Association focused on the ethical issues related to Extended Reality, as outlined in the Extended Reality chapter of Ethically Aligned Design. The group invites Working Group members from the multiple standards Working Groups focused on augmented and virtual reality and the spatial web, together with additional subject matter experts from industry and policy, to create white papers, workshops, and PARs related to this work to ensure these technologies move from “perilous” to “purposeful.”
The goal of this Industry Connections Program is to strengthen IEEE Standards Association work on biosecurity and safety, aligning with and supporting IEEE's mission of “Advancing Technology for Humanity”.
Nowhere is the potential of Artificial Intelligence (AI) and autonomous intelligent systems (AIS) more apparent than in human health and human biology, where increasingly sophisticated computational data modelling methods have led to dramatic improvements in our ability to precisely diagnose and treat disease, to estimate risks, and to deliver care. Genetic information is increasingly being used in AI algorithms to guide treatment selection and even whether treatment is provided at all. The transformative impact of these technologies and the commodification of our biological and genomic data will have a significant impact on the future biological continuum and geopolitical order.