The Deeploy Core Features
Deeploy makes ML deployments accountable by giving explainable AI (XAI) a central place in ML operations (MLOps), and by giving humans the power to understand what models are doing and to correct automated decisions. The core features below make your AI systems truly accountable.
Easily deploy new versions of models on Deeploy, without downtime.
- Deeploy includes several ways to deploy machine learning models and release new versions without downtime.
- An intuitive UI guides you through the steps to deploy a model.
- Deeploy integrates with Git to fetch new versions.
- User management controls who has access to deployments and data.
UI, API and SDK
Collaborate with your data science team through Deeploy's UI, API and Python SDK.
- Deeploy comes with an intuitive user interface to deploy and monitor models, including visual explainability methods.
- An extensive API lets you connect deployments to any service, making it easy to integrate machine learning models into your applications.
- A Python SDK is available to both deploy models and fetch information about deployments for further analysis, so you can integrate Deeploy into your local workflow.
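As a sketch of what SDK-driven access could look like, here is a minimal client that only builds the request a prediction call would send. The class, method and endpoint names are assumptions for illustration; they are not the actual Deeploy SDK or API.

```python
import json


class DeployClient:
    """Minimal illustrative client; the real Deeploy SDK differs."""

    def __init__(self, host: str, token: str) -> None:
        self.host = host
        self.token = token

    def prediction_request(self, deployment_id: str, instances: list) -> dict:
        # Build the JSON request a prediction call would send.
        return {
            "url": f"https://{self.host}/deployments/{deployment_id}/predict",
            "headers": {"Authorization": f"Bearer {self.token}"},
            "body": json.dumps({"instances": instances}),
        }


client = DeployClient("example.deeploy.ml", token="secret")
req = client.prediction_request("churn-model", [[42, 0.7, 3]])
print(req["url"])
```

The same pattern applies to fetching deployment metadata or historical predictions: authenticate once, then address a deployment by its identifier.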
Monitor drift and performance of deployed models.
- Deeploy monitors the performance of models in production using off-the-shelf drift and degradation metrics.
- Deeploy alerts users when models degrade and should be retrained.
- Performance, errors and warnings are measured over time.
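To give a concrete sense of the kind of off-the-shelf drift metric involved, here is a plain-Python sketch of the Population Stability Index (PSI), a widely used drift measure. This is an illustration of the technique, not Deeploy's internal implementation.

```python
import math


def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a reference and a live sample."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def hist(values):
        counts = [0] * bins
        for v in values:
            idx = sum(v > e for e in edges)  # bin index via edge comparison
            counts[idx] += 1
        # A small floor avoids division by zero for empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))


reference = [i / 100 for i in range(100)]   # training-time feature values
identical = list(reference)                 # no drift
shifted = [v + 0.5 for v in reference]      # distribution shifted upward
print(psi(reference, identical))  # 0.0: no drift detected
print(psi(reference, shifted))    # large: clear drift, alert and retrain
```

A common rule of thumb treats PSI below 0.1 as stable and above 0.25 as significant drift worth investigating.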
Every change, every prediction, every explanation and every deployment can be traced back. Deeploy guarantees control and reproducibility of decisions.
- Deeploy logs historical changes, predictions and explanations.
- This makes it possible to travel back in time and reproduce results.
- Changes made by users, such as new deployments and overrides, are stored.
Every deployment, every prediction and every explanation is made reproducible, so you stay in control.
- All events, ranging from deployments to predictions, are logged.
- Full reproducibility of decisions and explanations.
- Control over decisions made by machine learning models.
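As a sketch of the idea behind such an event log, here is a minimal hash-chained, append-only log in plain Python. Each entry is cryptographically linked to the previous one, so any after-the-fact change is detectable. This is illustrative only and does not reflect Deeploy's internal storage.

```python
import hashlib
import json


def append_event(log: list, event: dict) -> None:
    """Append an event, chaining it to the previous entry by hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"event": event, "prev": prev_hash, "hash": entry_hash})


def verify(log: list) -> bool:
    """Recompute every hash to prove no event was altered after the fact."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True


log = []
append_event(log, {"type": "deployment", "model": "churn-v2"})
append_event(log, {"type": "prediction", "input": [1, 2], "output": 0.87})
print(verify(log))  # True: the chain is intact
```

Because every prediction and deployment is an entry in such a chain, replaying the log reproduces exactly what the system decided and when.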
Deployments are always tied to users, with a historical change log to maintain an overview.
- Every deployment of a model comes with a deployment owner.
- Ownership makes sure models can be traced back to users.
- User management helps teams collaborate in a compliant way.
Deeploy comes with a large set of default explainers: local and global, model-specific and model-agnostic.
- Deeploy supports both local and global explainers out of the box.
- These include SHAP, LIME and Anchors.
- Explainers can be model-agnostic (applicable to any model) or trained on a specific model that you supply.
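To illustrate what a model-agnostic local explanation does, here is a much-simplified leave-one-out attribution in plain Python: it measures how much the prediction changes when each feature is replaced by a baseline value. Production explainers such as SHAP and LIME estimate this far more carefully; the code below is an illustration of the principle, not Deeploy's implementation.

```python
def local_attribution(predict, instance, baseline):
    """Leave-one-out attribution: how much does replacing each feature
    with its baseline value change the prediction? A much-simplified
    cousin of what SHAP and LIME estimate."""
    full = predict(instance)
    scores = []
    for i in range(len(instance)):
        perturbed = list(instance)
        perturbed[i] = baseline[i]
        scores.append(full - predict(perturbed))
    return scores


# A toy linear "model": the weights show which features matter.
def model(x):
    return 2.0 * x[0] + 0.5 * x[1] - 1.0 * x[2]


attrib = local_attribution(model, instance=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
print(attrib)  # [2.0, 0.5, -1.0]: attributions recover the weights
```

For a linear model the attributions recover the weights exactly; for non-linear models, methods like SHAP average such perturbations over many feature subsets.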
Custom models and explainers
Besides default explainers and model frameworks, Deeploy aids with industry-specific explainability methods.
- Besides default explainers like SHAP, LIME and Anchors, Deeploy offers a way to provide your own explainer.
- A framework is provided within which you build your own explanations.
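As a sketch of what a bring-your-own explainer could look like: a small class that wraps a model and returns a domain-specific explanation alongside the prediction. The class name and `explain` signature here are assumptions for illustration, not Deeploy's actual contract.

```python
class CustomExplainer:
    """Illustrative shape of a bring-your-own explainer; the interface
    is an assumption, not Deeploy's actual custom-explainer contract."""

    def __init__(self, model) -> None:
        self.model = model

    def explain(self, instance: list) -> dict:
        # Example of domain-specific logic: report the prediction and
        # flag the feature with the largest magnitude as dominant.
        prediction = self.model(instance)
        top = max(range(len(instance)), key=lambda i: abs(instance[i]))
        return {"prediction": prediction, "dominant_feature": top}


explainer = CustomExplainer(model=sum)  # plain sum as a stand-in model
result = explainer.explain([0.2, -3.0, 1.1])
print(result)
```

The point of such a contract is that industry-specific reasoning (credit rules, clinical guidelines) can be surfaced next to every prediction in the same way built-in explainers are.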
Provide feedback on both predictions and explainers in order to learn and continuously improve performance.
- Experts can validate and overrule certain predictions and explainers.
- This feedback loop stimulates continuous learning and improvement.
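A minimal sketch of such a feedback loop, assuming a hypothetical record type where an expert's override takes precedence over the model's output when labels are collected for retraining:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class PredictionRecord:
    """A logged prediction that an expert may later overrule."""
    features: list
    predicted: float
    override: Optional[float] = None


def training_label(rec: PredictionRecord) -> float:
    # Expert feedback wins over the model's own output when retraining.
    return rec.override if rec.override is not None else rec.predicted


records = [
    PredictionRecord([1, 2], predicted=0.9),
    PredictionRecord([3, 4], predicted=0.2, override=1.0),  # expert overruled
]
labels = [training_label(r) for r in records]
print(labels)  # [0.9, 1.0]
```

Collecting labels this way means every retraining run automatically incorporates the corrections experts made in production.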