In critical areas such as health, finance, human resources or cyber security, decisions need to be justified to ensure that they comply with ethical and legal standards.
Artificial intelligence (AI) is making its way into our lives on many levels. In the future, technology will play an increasingly important role in our daily lives, supporting us both professionally and personally. AI gives everyone access to new analytical and automation capabilities. Used properly, it can boost productivity and enable us to make better-informed decisions. But there is a 'but': the use of AI also requires us to ask, and answer, the right questions.
Because AI influences our decisions
"All businesses, large and small, are going to have to use AI. It will increasingly influence the decisions we make. In medicine, for example, AI-assisted medical imaging analysis will help to detect cancer at an early stage, which would not have been possible with the naked eye. In the banking world, it can speed up decision-making on the granting of credit, based on a risk analysis that takes into account a wide range of parameters. In human resources, the technology will speed up the processing of CVs and job applications," explains Nicolas Vivarelli, Head of Data & AI at DEEP by POST Group. Given the sensitive nature of the decisions that can be made using AI, it is vital to be able to explain why and how it arrives at the proposed result or the decision taken."
Identifying biases and correcting them
AI systems are often influenced by biases in the training data.
"Explainable AI makes it possible to identify and correct these biases built into the algorithms, guaranteeing more accurate decisions. For example, a recruitment model could be analysed to check that it does not discriminate on the basis of gender or origin," continues Nicolas Vivarelli.
In Europe, legal frameworks such as the GDPR and the European AI Act impose transparency obligations: companies are required to explain automated decisions that have an impact on individuals. Citizens, for example, have the right to ask for an explanation of automated decisions that affect them; a customer refused a bank loan has the right to understand why. Companies must therefore comply with these requirements to avoid sanctions and litigation.
Facilitating adoption, guaranteeing trust
Beyond the regulatory obligation, putting in place solutions that explain AI's results and decisions is essential to encourage employees to accept these tools and to guarantee customer confidence. "If doctors are unable to understand how the AI identifies a risk of cancer, they will certainly not be inclined to tell the patient," comments Nicolas Vivarelli. "On the contrary, by understanding what has led the AI to a diagnosis, they will be able to strengthen their analyses, improve their skills and evolve by relying on these tools."
Between explainability and performance
For DEEP's expert, guaranteeing the explainability of AI solutions is no longer an option, but a standard. These requirements must be met by design, from the moment the tools are conceived and implemented.
"To achieve this, we need to be able to rely on experienced data scientists with in-depth expertise in the field and the ability to handle advanced tools. Explainability can only be envisaged if robust data governance is implemented and if the data is properly structured. Explainable AI depends on clean, well-structured data that is free from bias. Data management therefore becomes a central pillar of this approach. Over time, we need to carry out regular checks on the solution", continues Nicolas Vivarelli.
DEEP's teams, with their solid experience in this field, support organisations in implementing solutions that guarantee explainability.
"In this area, there are no established standards. The approaches used must be considered on a case-by-case basis. Beyond data governance, we also need to be vigilant about the choice of AI models we deploy. Guaranteeing explicability often means finding a compromise between performance and transparency. The best performing models, such as deep neural networks, are often the most complex and therefore the least explainable. Making these models explainable can sometimes reduce their effectiveness, and striking a balance remains a complex task.
But we can no longer choose performance over transparency. In practice, all the decisions taken by AI must be interpretable by humans. Its operation must be documented and the processes that rely on the technology must be auditable. This is a necessity."
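The article does not prescribe a particular technique, but one widely used, model-agnostic way to inspect an otherwise opaque model is permutation importance: shuffle one input at a time on held-out data and measure how much the score degrades. Here is a minimal sketch with scikit-learn, using synthetic data as a stand-in for a real credit-scoring or recruitment dataset.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic tabular data standing in for, say, a credit-scoring dataset.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A reasonably accurate but opaque model.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle each feature on held-out data and measure
# how much the score drops, giving a model-agnostic view of which inputs
# drive the decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

Where the stakes justify it, teams sometimes prefer a simpler, inherently interpretable model (a shallow decision tree, a logistic regression), trading some raw accuracy for transparency: exactly the compromise described above.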
Keeping control
This requirement for explainability guarantees better control of AI. "Technology must remain a tool at the service of human beings. It is therefore essential to give users the means to retain control and have their say. If they are not able to understand how the AI arrived at the result provided, it is de facto out of control," concludes Nicolas Vivarelli.