Towards Responsible AI
When we talk about AI, we usually mean a machine learning model used within a system to automate something. For example, a self-driving car takes images from its sensors. An ML model uses these images to make predictions (for example: the object in front of us is a tree). The car then uses these predictions to make decisions (for example: turn left to avoid the tree). We refer to this entire system as Artificial Intelligence.
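To make the split between the model's prediction and the system's decision concrete, here is a minimal Python sketch. The `Prediction` class, the labels, and the steering logic are hypothetical placeholders for illustration, not a real self-driving stack.

```python
# Minimal sketch of the prediction -> decision pipeline described above.
# All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str         # e.g. "tree", "pedestrian", "clear_road"
    confidence: float  # model's confidence in [0, 1]

def predict(image) -> Prediction:
    """Stand-in for the ML model: maps a sensor image to a prediction."""
    # A real system would run a trained detector here.
    return Prediction(label="tree", confidence=0.97)

def decide(prediction: Prediction) -> str:
    """System logic: turns the model's prediction into an action."""
    if prediction.label == "tree" and prediction.confidence > 0.9:
        return "turn_left"  # avoid the obstacle
    return "continue_straight"

action = decide(predict(image=None))  # -> "turn_left"
```

Note that the model only predicts; the surrounding system decides. Responsible AI concerns apply to both layers, since a harmful outcome can come from a bad prediction or from a bad decision rule.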
This is just one example. AI can be applied to almost anything, from insurance underwriting to cancer detection. The defining characteristic is that there is little to no human involvement in the decisions the system makes. This creates many potential problems, so companies need to define a clear approach to their use of AI. Responsible AI is a governance framework intended to do exactly that.
A Responsible AI framework can specify what data may be collected and used; how models should be implemented, evaluated, and monitored; and who is accountable when an AI implementation produces harmful results. Some of its rules can be prescriptive, while others are left open to interpretation, but they all pursue the same goal: AI systems that are interpretable, fair, secure, and respectful of user privacy.
The objective of this session is to discuss the responsible use of Artificial Intelligence in building fair, equitable, and explainable machine learning models.