Explainability Research Internship

Amsterdam · Internship · Product

When this research project suits you  🤖

  • You prefer a pragmatic, ambitious, small and close-knit team.
  • You are finishing your Econometrics, AI, Data Science or related studies and are looking for an internship or research project.
  • You want a great learning experience and to apply your ML & AI skills in a real-world setting.
  • You have demonstrable experience with Python and ML.
  • Experience with explainability in AI (XAI) is a plus.
  • You are passionate about the world of Machine Learning and AI and believe it is of great importance to apply them as fairly as possible.

What you will be doing  👩‍💻

Computer vision (CV) has a rich history of efforts to enable computers to perceive visual stimuli meaningfully. Machine perception spans a range of levels, from low-level tasks such as identifying edges, to high-level tasks such as understanding complete scenes. Advances in the last decade have largely been due to three factors:

  1. The maturation of deep learning (DL)
  2. Increased compute power through GPUs
  3. The open-sourcing of datasets to train algorithms

When developers create a computer vision model, they often find themselves interacting with a black box, unaware of what feature extraction is happening at each layer. With the help of explainability methods, it becomes easier to judge when enough layers have been added and what feature extraction has taken place at each layer. Without them, it is difficult to explain what is actually going on inside the models or how the outcomes are determined. This understanding is crucial in applications affecting humans, for example in medical, public and financial applications.
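
To make the per-layer intuition concrete: the snippet below exposes a model's intermediate activations so a developer can inspect what each layer extracts. It is a minimal sketch assuming a small, untrained tf.keras CNN; the architecture is purely illustrative.

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy CNN; in practice this is the model under study.
inputs = tf.keras.Input(shape=(64, 64, 3))
x = tf.keras.layers.Conv2D(8, 3, activation="relu")(inputs)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(x)
x = tf.keras.layers.Flatten()(x)
outputs = tf.keras.layers.Dense(2, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

# A second model that returns every intermediate activation.
activation_model = tf.keras.Model(
    inputs=model.inputs,
    outputs=[layer.output for layer in model.layers[1:]],  # skip the Input layer
)

image = np.random.rand(1, 64, 64, 3).astype("float32")  # stand-in for a real image
for layer, act in zip(model.layers[1:], activation_model.predict(image)):
    print(f"{layer.name}: {act.shape}")  # e.g. conv2d: (1, 62, 62, 8)
```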

There are several explainability methods available to stakeholders, among them (one is sketched after this list):

  • SHAP Gradient Explainer
  • SHAP (SHapley Additive exPlanations)
  • Visual Activation Layers
  • Occlusion Sensitivity
  • Grad-CAM
  • Integrated Gradients
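
As an illustration of the simplest of these, below is a minimal, model-agnostic sketch of Occlusion Sensitivity in plain NumPy. The `predict_fn` interface is an assumption: any callable mapping a batch of images to class probabilities (for example a tf.keras model's `predict`) would fit.

```python
import numpy as np

def occlusion_sensitivity(predict_fn, image, target_class, patch=8, stride=8):
    """Slide a gray patch over the image and record how much the
    target-class probability drops at each position."""
    h, w, _ = image.shape
    baseline = predict_fn(image[None])[0, target_class]
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heatmap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            occluded = image.copy()
            y, x = i * stride, j * stride
            occluded[y:y + patch, x:x + patch, :] = 0.5  # neutral gray patch
            drop = baseline - predict_fn(occluded[None])[0, target_class]
            heatmap[i, j] = drop  # large drop = region the prediction relies on
    return heatmap
```

Because it only queries predictions, this explainer is model-agnostic, but it needs one forward pass per patch position, which makes it a post-hoc rather than close-to-real-time method (a distinction we return to below).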

The goal of this research project is to better understand which explainability methods fit best in which situations, and whether the current explainability methods are sufficient. We differentiate between:

  • Level: global or local explainability
  • Audience: developer or end-user (civilian)
  • Computing power: close-to-real-time or post-hoc
  • Training: model-agnostic or model-specific
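
To place one of the listed methods on these axes: Grad-CAM is a local, model-specific explainer aimed mainly at developers, and cheap enough to run close to real time, since it needs only one forward and one backward pass. Below is a minimal sketch assuming a tf.keras CNN; the layer name passed in is a hypothetical example and depends on the model at hand.

```python
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index):
    """Heatmap of the spatial regions in the last conv layer that
    pushed the model towards `class_index`."""
    # Model mapping the input to the last conv activations and the prediction.
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None])
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)         # d score / d activations
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))      # average grads per channel
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)  # weighted sum of channels
    cam = tf.nn.relu(cam)                                # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()   # normalise to [0, 1]

# Hypothetical usage: heatmap = grad_cam(model, img, "conv2d_1", class_index=0),
# then upsample the heatmap to the input size and overlay it on the image.
```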

We apply this research to several real-life cases in the medical, processing and/or governmental fields. A potential outcome of the research could be that the available explainers are not directly usable in specific situations. This could lead to adapting an existing explainer to that situation, or to creating a new explainer that combines the advantages of existing ones.

How to apply  🚀

Send your CV and cover letter to jobs@deeploy.ml.
Know someone who would be a great fit? Help us by sending them a link to this page.

How we proceed  🔎

All jobs and internships at Deeploy start with an assessment of your experience and motivation based on your CV and cover letter, followed by 1-2 short interviews and a short mock project assignment.

Rather than trying to guess whether we’ll work well together based on lengthy assessments or questionnaires, we invite promising candidates to work on a real-life mock project with us. The mock project is typically 4 hours of work and gives us an opportunity to get to know each other before pursuing an offer. It’s also a chance for candidates to make sure it’s a good fit for them. You will be invited to our office for the case.

Hope to see you soon!
