Invited talk - 20 min
in
Workshop: AI for Earth and Space Science

Model Interpretability as Key Trust Element for Onboard Science Autonomy

Lukas Mandrake


Abstract:

The extremely limited communications bandwidth between Earth and distant spacecraft is one of the greatest challenges to planetary science advancement. In contrast to the hundreds of terabytes of remote sensing data now common in terrestrial applications, a mission to distant ocean worlds such as Enceladus or Europa may have only 75 MB of total downlink for both science observations and engineering data. Onboard science capabilities, a unique new form of autonomy, can mitigate much of this bottleneck by recognizing, summarizing, and prioritizing science observations based on their utility to ground science teams. Because this requires building a new level of trust with mission scientists, these onboard systems must be constructed with reconfigurability and interpretability as primary requirements, along with the capability to provide overlapping lines of evidence for any drawn conclusions, all within very limited onboard compute power. Since these systems often incorporate machine learning and other data-driven solutions, they form a unique challenge area, advancing the definition and boundaries of model interpretability for scientific insight generation.
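To make the prioritization idea concrete, here is a minimal illustrative sketch, not taken from the talk: one simple way an onboard system could fill a fixed downlink budget is to rank candidate observations by estimated science utility per byte and select greedily. All names, sizes, and utility scores below are hypothetical.

```python
# Hypothetical sketch of greedy downlink prioritization under a byte budget.
# Observations are ranked by utility-per-byte; the budget is filled in order.

def prioritize(observations, budget_bytes):
    """Select observations to downlink, highest utility-per-byte first."""
    ranked = sorted(observations,
                    key=lambda o: o["utility"] / o["size"],
                    reverse=True)
    selected, used = [], 0
    for obs in ranked:
        if used + obs["size"] <= budget_bytes:
            selected.append(obs)
            used += obs["size"]
    return selected, used

# Example: candidates competing for a 75 MB total downlink budget.
candidates = [
    {"id": "img_001", "size": 40_000_000, "utility": 9.0},
    {"id": "img_002", "size": 60_000_000, "utility": 8.0},
    {"id": "spec_01", "size": 5_000_000,  "utility": 6.5},
    {"id": "summary", "size": 1_000_000,  "utility": 5.0},
]
chosen, total = prioritize(candidates, budget_bytes=75_000_000)
# Small, information-dense products win out: the compact summary and
# spectrum are taken first, then the best image that still fits.
```

A real onboard system would of course need the utility estimates themselves to be interpretable and reconfigurable by the science team, which is exactly the trust requirement the abstract emphasizes.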
