Authors: Yacine Izza, Xuanxiang Huang, Antonio Morgado, Jordi Planes, Alexey Ignatiev, Joao Marques-Silva
Published on: May 14, 2024
Impact Score: 7.6
arXiv: 2405.08297
Summary
- What is new: Novel algorithms that scale logic-based explainers to machine learning models with a large number of inputs.
- Why this is important: Logic-based XAI relies on automated reasoning, whose cost grows quickly with model complexity and, in particular, with the number of inputs.
- What the research proposes: Enhanced algorithms that make computing formal explanations tractable for complex models with many inputs.
- Results: The new algorithms improve the scalability and performance of logic-based explainers, making it feasible to compute and enumerate explanations for models with numerous inputs (a sketch of the kind of explanation involved appears after this list).
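To make the setting concrete, the sketch below shows the classic deletion-based computation of an abductive explanation (AXp), the kind of logic-based explanation whose scalability the paper targets. This is not the paper's algorithm: the toy boolean classifier and the brute-force invariance check are hypothetical stand-ins for the logic oracle (e.g., a SAT/SMT/MILP reasoner) that a real explainer would query.

```python
"""Minimal sketch of deletion-based abductive explanation (AXp) extraction.

Assumptions: a toy boolean classifier and an exhaustive-enumeration check
stand in for the logic oracle used by real formal explainers; all names
(classifier, instance, etc.) are illustrative, not from the paper.
"""
from itertools import product


def classifier(x):
    # Toy boolean classifier over 4 features (hypothetical example model).
    return int((x[0] and x[1]) or (x[2] and not x[3]))


def prediction_invariant(fixed, instance, n_features):
    """Check that every completion of the free features keeps the prediction.

    A real logic-based explainer would pose this as an entailment query to a
    reasoning oracle; here we enumerate exhaustively, which only works for
    very small feature spaces.
    """
    target = classifier(instance)
    free = [i for i in range(n_features) if i not in fixed]
    for values in product([0, 1], repeat=len(free)):
        candidate = list(instance)
        for i, v in zip(free, values):
            candidate[i] = v
        if classifier(candidate) != target:
            return False
    return True


def deletion_based_axp(instance):
    """Greedily drop features whose values are not needed to keep the prediction."""
    n = len(instance)
    fixed = set(range(n))
    for i in range(n):
        fixed.remove(i)
        if not prediction_invariant(fixed, instance, n):
            fixed.add(i)  # feature i is necessary; restore it
    return sorted(fixed)


if __name__ == "__main__":
    instance = [1, 1, 0, 1]
    print("AXp (subset-minimal set of feature indices):", deletion_based_axp(instance))
```

Each iteration of the loop issues one invariance (entailment) query, so the cost is dominated by the oracle and by the number of inputs; the paper's contribution, as summarized above, is making this kind of computation and the enumeration of such explanations scale to models with many inputs.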
Technical Details
Technological frameworks used: not specified
Models used: Logic-based XAI models
Data used: not specified
Potential Impact
High-stakes domains such as healthcare, finance, and autonomous driving, which rely on complex ML models for critical decisions, could benefit significantly from scalable logic-based explanations.
Want to implement this idea in a business?
We have generated a startup concept here: ClarifyAI.