X0PA AI is one of five startups from Asia Pacific and Europe chosen by TTC Labs, Meta, and IMDA to engage start-ups and policymakers on prototyping policies and products focused on AI explainability.
With the increasing adoption of AI across various industry verticals and use cases, the conversations around greater transparency and explainability of AI systems are also growing. The overall AI ecosystem is looking to foster trust in AI. Globally, several principles, guidelines, frameworks, and laws have been created to govern the development and use of AI. Singapore has published a Model AI Governance Framework (Model Framework) which advocates transparency and explainability in the decisions made by, or with, the assistance of AI.
The Model Framework and its companion guide, the Implementation and Self-Assessment Guide for Organisations (ISAGO), provide practical tools to help developers and users of AI systems build and deploy AI responsibly. To put the framework and guide into practice, the Infocomm Media Development Authority of Singapore (IMDA) partnered with Meta’s TTC Labs and Open Loop to translate key concepts of transparency and explainability into prototypes.
X0PA, along with four other startups, was chosen to test the product framework for AI explainability, build prototypes, and share its findings, insights, and experiences. “X0PA participated so that we could work together with other start-ups, Meta, TTC Labs, and IMDA to further improve our product development and prototyping process, make our AI more explainable, and provide a better user experience. We also participated because we strongly believe in working together to push for a people-centric and explainable approach to AI,” says Lee Gang, Head of Data Science, X0PA AI.