Why do we need to think about Explainable AI?
(This is a placeholder for my learnings in the space of Explainable AI, updated almost daily until I complete the course.)
Most of the time, AI algorithms give us good, often accurate, results: which products to show customers as they browse the catalogue.
Or whether a particular customer is genuine, in fraud detection scenarios.
Or they can accurately predict house prices, given certain factors.
But can AI explain why it reached a particular conclusion?
To humans, AI remains a black box: we cannot readily reason about how an algorithm arrived at a particular output, given a particular input.
Explainable AI is the key to solving such problems.
Explainable AI (XAI) refers to approaches that give insights into the reasons behind an AI system's outcomes.
Some of the questions I have are as follows:
1) Do we really need models to be explainable?
2) If a model cannot explain why it reached a particular conclusion, what are the consequences?
3) In domains where machine learning plays a key role, what happens if models cannot be explained?
For example, in finance, if a customer transaction is classified as fraudulent, how did the model arrive at that conclusion? Or, in medical diagnosis, why did a model conclude that a patient has a brain tumour? Day by day, machines and systems are being trusted with more of these decisions. Sometimes blindly trusting a machine might work, but in certain domains blind trust will not.
4) What should a machine learning engineer be aware of if they need to make their models explainable? (One possible starting point is sketched just after this list.)
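One concrete way to start on question 4 is to reach for post-hoc explanation tooling. Below is a minimal sketch (my own illustration, not taken from any course material) using scikit-learn's permutation_importance on a synthetic, fraud-style dataset; the feature names, dataset, and model choice are all hypothetical and only for illustration.

```python
# Minimal sketch: permutation feature importance with scikit-learn.
# Dataset and feature names are synthetic stand-ins for a fraud-detection problem.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical fraud-like data (purely illustrative).
X, y = make_classification(n_samples=2000, n_features=5, n_informative=3,
                           random_state=0)
feature_names = ["amount", "hour_of_day", "merchant_risk",
                 "customer_age", "num_prior_txns"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops;
# a large drop suggests the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>15}: {score:.3f}")
```

Note that this gives a global view of which inputs the model relies on; per-prediction explanations (for example, why this particular transaction was flagged) typically need techniques such as SHAP or LIME, which I plan to explore as the course goes on.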
- Know the how & why behind your AI solutions by fiddler.ai
(Throughout this course of learning about Explainable AI, I am collecting interesting online resources, books, and YouTube videos so that we get a fuller, deeper picture of this field.)
Learning continues....