PyData London 2023

Ade Idowu

A lead software engineer and data scientist with over 15 years' experience developing software and AI/ML solutions. A pragmatic, analytical problem solver who builds artificial intelligence solutions for businesses seeking efficiency and value, and a passionate advocate for the development and use of ethical AI in products and services.


Hands-on Intro to developing Explainability for Recommendation Systems
Ade Idowu

Over the last decade, the commercial use of recommendation engines/systems by businesses has grown substantially, enabling the flexible and accurate recommendation of items and services to users. Popular examples include the movie, video and book recommendation engines offered by Netflix, YouTube and Amazon respectively.

In general, most recommender systems are "black-box" algorithms trained to infer relevant items for users using techniques such as collaborative filtering, content-based filtering or hybrid models. Because these algorithms are broadly opaque, their predicted recommendations lack full interpretability/explainability. Making recommenders explainable is essential: explanations provide transparency and answer the question of why particular items were recommended to users and system designers.
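To make the collaborative-filtering idea concrete, here is a minimal sketch of item-based collaborative filtering on a toy ratings matrix. The data, function names and the choice of cosine similarity are illustrative assumptions for this sketch, not the workshop's code.

```python
import numpy as np

# Toy user-item rating matrix (rows: users, columns: items); 0 = unrated.
# All values here are made up for illustration.
ratings = np.array([
    [5, 4, 0, 0],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

def item_similarity(R):
    """Cosine similarity between item columns of the ratings matrix."""
    norms = np.linalg.norm(R, axis=0, keepdims=True)
    norms[norms == 0] = 1.0          # avoid division by zero for empty columns
    X = R / norms
    return X.T @ X

def recommend(R, user, k=2):
    """Rank unrated items by a similarity-weighted sum of the user's ratings."""
    S = item_similarity(R)
    scores = S @ R[user]
    scores[R[user] > 0] = -np.inf    # exclude items the user already rated
    return np.argsort(scores)[::-1][:k]

print(recommend(ratings, user=0, k=2))   # indices of the top-k recommended items
```

Note that this scoring rule is exactly the kind of "black box" the abstract describes: the ranking emerges from a matrix product, with no explanation attached to it.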

Over the last few years, explainable recommendation systems have become a growing area of research and development. Approaches are generally classified as post-hoc (explainability is produced after the recommendation is made) or intrinsic (explainability is integrated into the recommender model itself). This workshop will provide a hands-on implementation of some of these approaches.
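As a flavour of the post-hoc approach, the sketch below justifies a recommended item by listing the user's own rated items that contribute most to its score ("recommended because you rated X and Y"). The ratings matrix, item names and `explain` helper are all hypothetical illustrations, not material from the workshop.

```python
import numpy as np

# Illustrative data: rows are users, columns are items; 0 = unrated.
ratings = np.array([
    [5, 3, 0, 0],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

item_names = ["Alien", "Aliens", "Toy Story", "Up"]  # hypothetical catalogue

def explain(R, user, item, names, top=2):
    """Post-hoc explanation for an item-based recommender:
    return the user's rated items ranked by their contribution
    to the recommended item's similarity-weighted score."""
    norms = np.linalg.norm(R, axis=0, keepdims=True)
    norms[norms == 0] = 1.0
    X = R / norms
    S = X.T @ X                       # item-item cosine similarity
    contrib = S[item] * R[user]       # per-item contribution to the score
    rated = np.where(R[user] > 0)[0]
    best = rated[np.argsort(contrib[rated])[::-1][:top]]
    return [names[i] for i in best]

# Why was item 2 ("Toy Story") scored for user 0?
print(explain(ratings, user=0, item=2, names=item_names))
```

Because the explanation is computed from the already-trained model's outputs, it is post-hoc: the recommender itself is unchanged, and the same pattern generalises to surrogate-model explainers applied after training.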