Affinity Posters
Blog Track Session 1
David Dobre · Claire Vernade · Charlie Gauthier · Gauthier Gidel · Fabian Pedregosa · Leo Schwinn
Halle B
Schedule
Tue 1:45 a.m. - 3:45 a.m. | Fairness in AI: two philosophies or just one? (Poster #3)
Poster Location: Halle B #3
The topic of fairness in AI has garnered increasing attention over the last year, most recently with the arrival of the EU's AI Act. Fairness in AI is typically pursued in one of two ways: through counterfactual fairness or through group fairness. These research strands originate from two vastly different ideologies. However, with the use of causal graphs, it is possible to show that they are related, and even that satisfying a group fairness measure can mean satisfying counterfactual fairness.
MaryBeth Defrance
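
As an illustration of the group-fairness side of this comparison, here is a minimal sketch computing the demographic parity difference, one common group fairness measure; the predictions and group labels are hypothetical, and this particular metric is our choice for illustration, not necessarily the one analyzed in the blog post.

```python
import numpy as np

# Hypothetical binary predictions and sensitive-group membership.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Demographic parity asks that positive prediction rates match across groups.
rate_a = y_pred[group == 0].mean()
rate_b = y_pred[group == 1].mean()

# A difference of 0 would satisfy this group fairness measure exactly.
print(abs(rate_a - rate_b))
```

Counterfactual fairness, by contrast, is defined on a causal graph over the features, asking whether an individual's prediction would change had their sensitive attribute been different; relating the two is the subject of the post.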
Tue 1:45 a.m. - 3:45 a.m. | Exploring Meta-learned Curiosity Algorithms (Poster #2)
Poster Location: Halle B #2
This blog post delves into Alet et al.'s ICLR 2020 paper, Meta-learning curiosity algorithms. Instead of meta-learning neural network weights, the paper meta-learns pieces of code, making the resulting curiosity algorithms interpretable by humans. The post explores the two meta-learned algorithms, Fast Action Space Transition (FAST) and Cycle-Consistency Intrinsic Motivation (CCIM).
Batsirayi Mupamhi Ziki
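
For readers unfamiliar with curiosity-driven exploration, the sketch below shows the generic prediction-error form of an intrinsic reward that such algorithms build on; it is an illustrative baseline with a hypothetical forward model, not the meta-learned FAST or CCIM algorithms from the paper.

```python
import numpy as np

def intrinsic_reward(forward_model, state, action, next_state):
    """Generic curiosity bonus: error of a learned dynamics model.

    forward_model(state, action) is assumed to predict the next state;
    the larger the prediction error, the more "novel" the transition,
    and the larger the exploration bonus added to the extrinsic reward.
    """
    predicted = forward_model(state, action)
    return float(np.sum((predicted - next_state) ** 2))

# Toy usage with a hypothetical (untrained) linear forward model.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
model = lambda s, a: W @ s + a
s, a, s_next = rng.normal(size=4), rng.normal(size=4), rng.normal(size=4)
print(intrinsic_reward(model, s, a, s_next))
```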
Tue 1:45 a.m. - 3:45 a.m. | How to compute Hessian-vector products? (Poster #1)
Poster Location: Halle B #1
The product between the Hessian of a function and a vector, the so-called Hessian-vector product (HVP), is a quantity that appears throughout optimization and machine learning. However, computing HVPs is often considered prohibitively expensive, which prevents practitioners from using algorithms that rely on them. Standard automatic differentiation theory predicts that computing an HVP has a cost of the same order of magnitude as computing a gradient. This blog post provides a practical counterpart to that theoretical result, showing that modern automatic differentiation frameworks, JAX and PyTorch, allow efficient computation of HVPs for standard deep learning cost functions.
Mathieu Dagréou · Pierre Ablin · Samuel Vaiter · Thomas Moreau
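
As a minimal sketch of the idea, one way to compute an HVP in JAX is to differentiate the gradient along a direction v (forward-over-reverse mode); the toy cost function f here is our own example, not one taken from the post.

```python
import jax
import jax.numpy as jnp

def f(x):
    # Toy scalar-valued cost function standing in for a deep learning loss.
    return jnp.sum(jnp.tanh(x) ** 2)

def hvp(f, x, v):
    # Forward-over-reverse: a JVP of the gradient function yields H(x) @ v
    # at a cost of the same order as a few gradient evaluations.
    return jax.jvp(jax.grad(f), (x,), (v,))[1]

x = jnp.arange(3.0)
v = jnp.ones(3)
print(hvp(f, x, v))  # Hessian-vector product without ever forming the Hessian
```

The key point, matching the theory cited in the abstract, is that the full Hessian matrix is never materialized: only gradient-sized intermediate quantities are computed.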