Google Putting in More Effort to Promote Explainable AI
NEWS
Recently, Google launched a new Artificial Intelligence (AI) service on its Google Cloud Platform, known as Explainable AI (XAI). Currently in beta, Explainable AI is a set of tools and frameworks that Google offers to developers to help them create Machine Learning (ML) models that are interpretable and bias-free.
This service has three main features. First, AI Explanations offers a scoring mechanism that evaluates the impact of each parameter, quantifying the relationship between input patterns and the final model outcome. Second, the What-If Tool allows developers to make adjustments to individual data points, features, and optimization strategies in their models. Finally, Explainable AI includes a continuous evaluation capability that provides feedback on AI model performance over time.
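To make the first of these features more concrete, the minimal sketch below illustrates what a parameter- or feature-impact scoring mechanism does in practice. It is a generic example built on scikit-learn's permutation importance, not Google's AI Explanations API; the dataset, model, and scoring choices here are assumptions for illustration only.

```python
# Illustrative sketch of feature-impact scoring (not Google's AI Explanations API).
# Permutation importance measures how much the model's accuracy drops when one
# feature's values are shuffled, which quantifies that feature's impact on the outcome.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Score each feature by the average drop in test accuracy when it is permuted.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

A dashboard built on scores like these lets a developer see at a glance which inputs drive a prediction, which is the core idea behind attribution-style explanation services.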
Key Players of XAI
IMPACT
The field of XAI has been growing rapidly in recent years. The call for transparent and bias-free AI, or ethical AI, has intensified in many domains, particularly in the heavily regulated automotive and public safety sectors. While symbolic or classical AI models are generally transparent and interpretable, ML-based AI models are not, and many end users are rightfully worried about being wrongly profiled or categorized by AI without any clear explanation. The “black box” nature of ML-based AI models, especially deep learning models, makes it difficult to explain the entirety of the training and inference process, as well as the role of each neural network layer, feature, and parameter, leading to anxiety and uncertainty among end users, regulators, and lawmakers. These “black box” models remain popular, however, because of their performance: generally speaking, explainable models are simpler and often inadequate for solving complex tasks. As such, cloud AI service providers have suggested tackling the explainability issue from the angle of model design.
Interestingly, Google is hardly the market leader in this space. While Google joins Microsoft and IBM as large cloud AI service providers offering such services, smaller AI vendors, such as H2O.ai and Element AI, have championed XAI as their core focus since their inception. H2O Driverless AI, for example, adopts a modular approach to XAI, combining techniques and methodologies such as Local Interpretable Model-agnostic Explanations (LIME), Shapley values, surrogate decision trees, and partial dependence plots in a dashboard to explain the results of both Driverless AI models and external models. Element AI, on the other hand, adopts hybrid explainable models, such as Deep k-Nearest Neighbors (DkNN) and Self-Explaining Neural Networks (SENN), as well as frameworks that provide prediction and explanation concurrently, such as the Teaching Explanations for Decisions (TED) framework.
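To ground one of the techniques named above, the sketch below shows a global surrogate decision tree in its simplest form: a shallow, human-readable tree is fitted to mimic a black-box model's predictions so its splits can serve as an approximate explanation of the model's behavior. This is a generic scikit-learn illustration under assumed data and model choices, not H2O Driverless AI's or Element AI's implementation.

```python
# Minimal surrogate decision tree sketch (generic illustration, not vendor code).
# A shallow, interpretable tree is trained to mimic a black-box model's predictions,
# so the tree's rules can be read as a global approximation of the black box.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# "Black box": an opaque ensemble trained on the true labels.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Surrogate: a depth-3 tree trained on the black box's predictions, not on y.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how closely the surrogate reproduces the black box's behavior.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=list(X.columns)))
```

The fidelity score matters: a simple tree that poorly reproduces the black box's outputs is not a trustworthy explanation of it, which is why commercial tools typically combine surrogates with other methods such as LIME and Shapley values.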
The Governance of XAI
RECOMMENDATIONS
Many in the industry agree that the creation of XAI is not only a design challenge, but also a governance challenge. The success of ethical AI also hinges on framework-based governance, i.e., an end-to-end, step-by-step approach that upholds core ethical values throughout the AI model formulation process over both the short and long term. All potential loopholes and pitfalls need to be scrutinized and accounted for under such a framework to limit negative impacts on future suppliers, partners, customers, and end users.
In light of this need, governments and regulators across the world, including those of the United Kingdom, Canada, and Singapore, as well as global non-government institutions such as the United Nations, Amnesty International, and the Institute of Electrical and Electronics Engineers (IEEE), have published documentation on AI-related ethical and governance frameworks. Large cloud AI companies, including Google itself, have their own AI ethics principles to ensure self-regulation. Yet even with a general consensus on the importance of such frameworks, other factors, such as geopolitics, may get in the way: AI powerhouses like the United States and China have vastly different views on how AI should be implemented and regulated.
At the same time, impartial third-party testing is equally important. In June 2019, German testing and certification company TÜV SÜD launched the openGenesis platform, which aims to validate and certify AI models used in Autonomous Vehicles (AVs), covering both classical AI and ML. The platform will help promote a common understanding of AI quality in AVs and encourage responsibility and accountability among technology suppliers. As members of the platform, automated driving developers and solution suppliers retain their intellectual property but are required to share their results and methodologies within the community. Not restricting itself to the automotive sector, TÜV SÜD aims to bring this solution to other industries, including industrial manufacturing, and offer certification and validation services to relevant AI vendors in those verticals.
Ultimately, XAI comes down to a battle between AI performance and explainability. Most, if not all, existing commercial AI solutions deliver strong accuracy and outcomes but lack transparency and interpretability. Developers who want to design AI models with explainability in mind will also need to deal with extra complexity, which means longer Time to Market (TTM) and more testing and validation. As AI becomes more ubiquitous across various facets of daily life, ABI Research believes it is the responsibility of all AI developers and implementers to strike the right balance between AI performance and explainability.