AI Principles

Trustworthiness

Trustworthiness requires a trustworthy system: one that is ethically sound, lawful, technically robust, accurate, and repeatable. The entire lifecycle should also be trustworthy and meaningful to all stakeholders, following standardized guidelines and best practices for architecture and deployment. The system should offer a reasonable level of scalability and reliability in high-volume settings. In addition, the system's outcomes should be explainable, which is also a prerequisite for the system to be sustainable in the long term.

Explainability

Deep learning itself is a black box: we do not know how or why a given prediction was made. However, there are techniques we can use to make the predictions more explainable. For example, in diabetic retinopathy detection, an AI algorithm scans a retinal image and immediately determines whether signs of diabetic retinopathy are present, using a deep learning approach called image classification. This relies on the network's inherent feature-based learning and is the simplest approach in deep learning: the image is classified as either normal or showing diabetic retinopathy. Why the AI made that decision remains unknown; this is the "AI black box." To address it, we combine classification with another machine learning technique, lesion-level segmentation, in which the AI detects and localizes the detailed anomalies in the image, so the image-level prediction can be traced back to specific findings.
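The sketch below illustrates this two-stage idea under stated assumptions: it is not the production system, and the model architectures, names, and thresholds are illustrative only. The point is the output shape of the pipeline: an image-level label is returned together with a lesion map that grounds it, rather than the label alone.

```python
# Minimal sketch (assumed, illustrative models) of pairing image classification
# with lesion-level segmentation so a prediction comes with pixel-level evidence.
import torch
import torch.nn as nn

class RetinaClassifier(nn.Module):
    """Image classification: normal (0) vs. diabetic retinopathy (1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

class LesionSegmenter(nn.Module):
    """Lesion-level segmentation: per-pixel probability of an anomaly."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # one-channel lesion logit map
        )

    def forward(self, x):
        return self.net(x)

def explainable_prediction(image, classifier, segmenter, threshold=0.5):
    """Return the image-level label together with the lesion map that
    supports the decision, instead of the label alone."""
    with torch.no_grad():
        probs = torch.softmax(classifier(image), dim=1)
        lesion_map = torch.sigmoid(segmenter(image))
    label = "diabetic retinopathy" if probs[0, 1] > threshold else "normal"
    return label, probs[0, 1].item(), lesion_map

# Usage with a dummy retinal image tensor (batch of 1, 3 x 256 x 256)
image = torch.rand(1, 3, 256, 256)
label, confidence, lesion_map = explainable_prediction(image, RetinaClassifier(), LesionSegmenter())
print(label, round(confidence, 3), lesion_map.shape)
```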

Sustainability

Artificial intelligence works well on specific use cases with narrowly focused outcomes; generalizing to multiple populations is challenging. In the medical domain, however, AI must be robust and generalized enough to serve a wider population of patients, so sustaining such a system requires more generalized AI. To achieve this, our principle is to keep our models in a closed-loop AI process with data governance in place at all times: we monitor model drift and data drift and keep our models in a learning mode so they continue to improve, following the highest quality standards and best practices in machine learning. As a result, our models become more explainable, sustainable, and trustworthy.
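As one possible illustration of such a closed loop, the sketch below checks for data drift with the Population Stability Index and for model drift with an accuracy drop, and flags whether the model should go back into a learning cycle. The thresholds and function names are assumptions for illustration, not the governance process itself.

```python
# Hedged sketch of a closed-loop drift check: data drift via PSI,
# model drift via accuracy drop, triggering a retraining flag.
import numpy as np

def population_stability_index(reference, current, bins=10):
    """PSI between the training-time (reference) feature distribution
    and the current production distribution."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    cur_counts, _ = np.histogram(current, bins=edges)
    ref_pct = np.clip(ref_counts / ref_counts.sum(), 1e-6, None)
    cur_pct = np.clip(cur_counts / cur_counts.sum(), 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

def check_drift(reference_features, current_features,
                reference_accuracy, current_accuracy,
                psi_threshold=0.2, accuracy_drop=0.05):
    """Closed-loop check: flag data drift and model drift and decide
    whether the model should re-enter a learning cycle."""
    psi = population_stability_index(reference_features, current_features)
    data_drift = psi > psi_threshold
    model_drift = (reference_accuracy - current_accuracy) > accuracy_drop
    return {"psi": psi, "data_drift": data_drift,
            "model_drift": model_drift, "retrain": data_drift or model_drift}

# Usage with synthetic monitoring data
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5000)   # feature values seen at training time
current = rng.normal(0.4, 1.2, 5000)     # shifted production distribution
print(check_drift(reference, current, reference_accuracy=0.92, current_accuracy=0.85))
```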
