Accountancy firm EY has announced it has developed a solution designed to help enterprises quantify the impact and trustworthiness of AI systems.
The EY Trusted AI platform, enabled by Microsoft Azure, offers users an integrated approach to evaluating, monitoring and quantifying the impact and trustworthiness of AI. The platform leverages advanced analytical models to evaluate the technical design of an AI system, measuring risk drivers that include its objective, underlying technologies, technical operating environment and level of autonomy relative to human oversight, and then produces a technical score.
Keith Strier, EY global and Americas advisory leader of artificial intelligence, said: “Trust must be a front-line consideration, rather than a box to check after an AI system goes live. Unlike traditional software, which can be fixed, tested and patched, if a neural network is trained on biased data, it may be impossible to fix, and the entire investment could be lost.
“The EY Trusted AI conceptual framework was launched last year, and now this offering is being launched to help organisations worldwide build trust in and derive sustained value from AI.”
The new platform provides insights to users such as AI developers, executive sponsors and risk practitioners. The technical score it provides is also subject to a complex multiplier, based on the impact on users, taking into account unintended consequences such as social and ethical implications.
An evaluation of governance and control maturity acts as a further mitigating factor to reduce residual risk. The risk scoring model is based on the EY Trusted AI framework, which is being used to help enterprises understand and plan for these new risks that could undermine products, brands, relationships and reputations.
Cathy Cobey, EY global trusted artificial intelligence advisory leader, added: “Currently, a lack of trust is the leading barrier to the adoption of AI. If AI is to reach its full potential, we need a more granular view – the ability to predict conditions that amplify risks and then target mitigation strategies for risks that may undermine trust, while still considering traditional system risks such as reliability, performance and security.
“Sponsors and users alike want to develop AI in a transparent, accountable and therefore trusted manner. This innovative EY capability is a significant step toward reaching that goal.”
The interactive, web-based interface guides users through a series of schematic assessment tools to build the risk profile of an AI agent. Visualisations provide users with a quick snapshot of the relative risk scores across their AI portfolio, with drill-down capabilities to reveal additional details.
Steve Guggenheimer, corporate vice president of AI Business at Microsoft, said: “Helping customers focus on the ethical use of AI as they build new solutions or infuse their existing solutions with AI is one of the core principles of Microsoft’s approach.
“A key component of our Microsoft Azure cloud platform is enabling the creation of applications and services using artificial intelligence by any developer or data scientist across a wide range of scenarios. The EY Trusted AI platform, enabled by Azure, is an important step in helping enterprises build their AI systems with the trust and security that are so essential to successful deployment.”