Abstract: The palladium-catalyzed Buchwald-Hartwig amination reaction plays an important role in drug synthesis. In the last few years, ... | XGBoost-based intelligence yield prediction and reaction factors analysis of amination ... bit.ly/3rPdWkO | #XGBoost #news #XAI
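The idea behind the linked paper can be sketched in a few lines: fit a gradient-boosted tree regressor on reaction descriptors to predict yield, then read off feature importances as the "reaction factors analysis". A minimal sketch on synthetic data, using scikit-learn's `GradientBoostingRegressor` as a stand-in for XGBoost (the descriptors, their meaning, and the yield function are all invented for illustration):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for Buchwald-Hartwig reaction descriptors
# (e.g. ligand, base, and additive features in the real dataset).
X = rng.normal(size=(500, 8))
# Hypothetical yield: a noisy nonlinear function of the descriptors.
y = 50 + 10 * X[:, 0] - 5 * X[:, 1] ** 2 + rng.normal(scale=2, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Gradient-boosted trees; XGBoost's XGBRegressor has a near-identical API.
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out reactions: {model.score(X_te, y_te):.2f}")

# Tree-based feature importances drive the "reaction factors analysis".
print("Most influential factor index:", int(np.argmax(model.feature_importances_)))
```

The same pattern (fit, score, inspect `feature_importances_`) carries over directly to real reaction descriptors.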
Ever wondered how to measure the importance of feature groups for ML models in a model-independent way? Then check out this work with an application to app usage data contributed by @ClemensStachl. #xai #interpretableML #explainableAI #FeatureImportance #MachineLearning
New paper from us on grouped feature interpretation. On arXiv + under review. Review of grouped importance techniques + guidelines + a new method to visualize grouped effects -- dubbed the "combined features effect plot". arxiv.org/abs/2104.11688 #InterpretableMachineLearning
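A model-independent grouped importance score can be computed by permuting all features in a group jointly and measuring the resulting drop in model performance. A minimal sketch (the data, the two feature groups, and the group names are hypothetical; the paper above surveys more refined variants):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=6,
                           n_informative=4, random_state=0)
# Hypothetical groups, e.g. "communication" vs "gaming" app-usage features.
groups = {"group_a": [0, 1, 2], "group_b": [3, 4, 5]}

model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = model.score(X, y)

def group_importance(model, X, y, cols, n_repeats=5):
    """Mean accuracy drop when the whole column group is permuted together."""
    drops = []
    for _ in range(n_repeats):
        perm = rng.permutation(len(X))
        Xp = X.copy()
        Xp[:, cols] = X[perm][:, cols]  # permute the group as one block
        drops.append(baseline - model.score(Xp, y))
    return float(np.mean(drops))

imps = {name: group_importance(model, X, y, cols) for name, cols in groups.items()}
for name, v in imps.items():
    print(name, round(v, 3))
```

Permuting the group as a block (rather than each column independently) preserves the within-group dependence structure, which is the point of grouped rather than per-feature importance.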
Critical Mass Tech retweeted
The AI industry is full of excitement. How do you know when the enthusiasm is justified, and when it's superficial? Here's how to cut through the hype. sbee.link/er4w68khdm By @Ditto_AI #AI #XAI #IT
.@ThePulseofAI podcast interviews @datta_cs of TruEra about how to make #AI more transparent, fair, and effective. How can companies both drive AI quality and eliminate bias? loom.ly/9qhO_es #XAI #RAI #EthicalAI #AIQM #AIQuality #ExplainableAI #MLPM #ML
💯I love interviews that take deep dives. That's why I loved doing this with @andrey_kurenkov. 🎯Check this out if you're curious about behind-the-scenes details on #XAI --the stuff you can't find in papers. 💡Spans work at the HCAI Lab w/ @mark_riedl @ICatGT @gtcomputing
Care about AI explainability? Check out our interview with @UpolEhsan, a doctoral candidate at @ICatGT focused on explainable AI (XAI). We discuss his path to AI, work on rationale generation, human-centered XAI, expanding AI explainability, and more! 👇 thegradient.pub/upol-ehsan-i…
Roman Senkerik retweeted
Tomorrow (Dec 7th at 01:24pm CET), @RistoTrajanov is going to present our work about explainable single-objective optimisation algorithm performance prediction at @IEEESSCI2021 #benchmarking #XAI arxiv.org/pdf/2110.11633.pdf
RT Tutorial on Surface Crack Classification with Visual Explanation (Part 2) dlvr.it/SDrw8M #xai #explainableai #crack #deeplearning #pytorch
Rosaria Silipo retweeted
Counterfactual explanation is an intuitive XAI technique that shows users what changes to the input would be required to alter the model's prediction. Workflow link: lnkd.in/dVtFUpzT Python library (Alibi): lnkd.in/dnjX3WrS #knime #python #XAI #Keras @paolotamag
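The core idea is easy to demonstrate without any library: starting from an instance, search for a nearby point the model classifies differently. A toy greedy sketch on synthetic 2-D data (the data, step size, and search strategy are invented for illustration; Alibi's counterfactual methods are far more principled):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

x = np.array([-1.0, -0.5])  # an instance the model assigns to class 0

def counterfactual(model, x, step=0.05, max_iter=200):
    """Greedily nudge one feature at a time until the prediction flips."""
    xc = x.astype(float).copy()
    target = 1 - model.predict([x])[0]
    for _ in range(max_iter):
        if model.predict([xc])[0] == target:
            return xc
        best = None
        for i in range(len(xc)):          # try +/- step on each feature,
            for s in (step, -step):       # keep the nudge that most raises
                cand = xc.copy()          # the target-class probability
                cand[i] += s
                p = model.predict_proba([cand])[0, target]
                if best is None or p > best[0]:
                    best = (p, cand)
        xc = best[1]
    return None

cf = counterfactual(model, x)
print("counterfactual:", cf)  # a nearby point the model assigns to class 1
```

The distance between `x` and `cf` is the "what would need to change" answer presented to the user; real methods add sparsity and plausibility constraints to keep that change actionable.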
Excella Labs retweeted
Can you name the other benefits of #XAI (Explainable #AI)? Our #DataScience experts @datanurturer and Henry Jia share what you need to know when infusing XAI into an AI project. hubs.ly/Q010fZrY0 #ML #ResponsibleAI #Federal #Commercial #DataScience