Announcements | Matilde Alves | 25 April 2024

Deeploy ramps up efforts to support Explainable AI in KServe

We are proud to announce the latest big step in our open-source work: ramping up our efforts to support Explainable AI in KServe!

About Deeploy

As AI evolves and becomes more complex, ramping up research into explainability becomes paramount. Deeploy was founded to pioneer and innovate in the field of control, transparency, and explainability of AI models, especially for high-risk applications.

In line with Deeploy’s mission to enable transparent AI that people can control, we have always prioritized active participation in the open-source AI community. Early in our journey, we adopted KServe as one of the model serving frameworks we believe in and want to contribute to. We are now proud to share that we have decided to become one of its main contributors.

“Our goal is to stimulate the development and usage of XAI within the KServe community. Since we benefited from the community efforts in developing Deeploy we decided to give back and help with innovation in this important topic.”

Tim Kleinloog, CTO

About KServe

KServe is one of the largest open-source AI communities, focused on developing innovative solutions for highly scalable, standards-based model inference on Kubernetes.

Deeploy first got involved with KServe before v0.5 and immediately adopted concepts such as its inference protocol, which already supported many influential ML and AI model serving runtimes and offered novel support for explanations.
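
To make this concrete: KServe's V1 data plane exposes both a :predict and an :explain verb per model, so explanations are requested against the same service as predictions. Below is a minimal client sketch; the host, model name, and input values are hypothetical placeholders.

    # Minimal sketch of requesting a prediction and an explanation over
    # KServe's V1 inference protocol. Host, model name, and input values
    # are hypothetical placeholders.
    import requests

    BASE_URL = "http://income-classifier.models.example.com"  # hypothetical endpoint
    MODEL = "income-classifier"                                # hypothetical model name

    payload = {"instances": [[39, 7, 1, 1, 4, 1, 2174, 0, 40, 9]]}

    # A prediction is served by the predictor component of the InferenceService...
    pred = requests.post(f"{BASE_URL}/v1/models/{MODEL}:predict", json=payload)
    print(pred.json())  # {"predictions": [...]}

    # ...while the same payload sent to :explain is routed to the explainer component.
    expl = requests.post(f"{BASE_URL}/v1/models/{MODEL}:explain", json=payload)
    print(expl.json())  # {"explanations": [...]}, shape depends on the explainer used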

Our contributions

Since we benefited from the KServe community while developing Deeploy into an enterprise-ready product, we decided to give back and increase our involvement. Since making this decision earlier this year, we have been working closely with the community to lead efforts in several areas, with a clear focus on explainable AI (XAI) features.

We believe this should be a key focus in building AI that can be trusted. The most important change needed in the KServe codebase was support for pluggable explainer runtimes. By contributing to and supporting this change, we made it easier to add new explainability methods, helping the community accelerate innovation in this field (see the sketch below).
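
As an illustration of what pluggable explainer runtimes enable, here is a minimal sketch using the KServe Python SDK in which the explainer component of an InferenceService is defined as a custom container rather than one of the previously built-in explainer types. The storage URI, explainer image, namespace, and arguments are hypothetical placeholders, not part of our actual contributions.

    # Minimal sketch: an InferenceService whose explainer runs as a custom
    # container image, which pluggable explainer runtime support makes possible.
    # All names, images, and URIs below are hypothetical.
    from kubernetes import client as k8s
    from kserve import (
        KServeClient,
        V1beta1ExplainerSpec,
        V1beta1InferenceService,
        V1beta1InferenceServiceSpec,
        V1beta1PredictorSpec,
        V1beta1SKLearnSpec,
        constants,
    )

    isvc = V1beta1InferenceService(
        api_version=constants.KSERVE_GROUP + "/v1beta1",
        kind=constants.KSERVE_KIND,
        metadata=k8s.V1ObjectMeta(name="income-classifier", namespace="models"),
        spec=V1beta1InferenceServiceSpec(
            # Standard predictor serving the model itself.
            predictor=V1beta1PredictorSpec(
                sklearn=V1beta1SKLearnSpec(
                    storage_uri="gs://my-bucket/income-model"  # hypothetical model location
                )
            ),
            # Custom explainer runtime, packaged as its own container image.
            explainer=V1beta1ExplainerSpec(
                containers=[
                    k8s.V1Container(
                        name="kserve-container",
                        image="my-org/custom-shap-explainer:latest",  # hypothetical image
                        args=["--model_name", "income-classifier"],
                    )
                ]
            ),
        ),
    )

    KServeClient().create(isvc)

Requests to the model's :explain verb are then routed to this container, so a new explainability method only needs to be packaged as an image that speaks the inference protocol.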

What’s next

We see that the importance of transparency in AI is only increasing (see, for example, the adoption of the EU AI Act). As such, Deeploy intends to keep focusing on this topic and to enable organizations to deploy trustworthy AI that contributes to a sustainable AI future. One of our major focus points is explainability for open-source LLMs. We expect to focus our contributions on integrating research breakthroughs in this field (such as Inseq and CIU) and making them available in KServe.