

5th Workshop on practical ML for limited/low resource settings (PML4LRS) @ ICLR 2024

Esube Bekele · Maysa Macedo · Matimba Shingange · Aisha Alaagib · Waheeda Saib · Meareg Hailemariam · Timnit Gebru · Kevin Compher · John Wamburu · Nyalleng Moorosi · Gilles Hacheme

Strauss 3

Fri 10 May, 11:30 p.m. PDT

The breakneck pace of progress in artificial intelligence (AI) and generative AI must be resource-optimized to achieve practical societal impact. Adapting state-of-the-art (SOTA) methods such as large language models (LLMs), diffusion models, and Neural Radiance Fields (NeRFs) to resource-constrained environments, so that even few-shot fine-tuning and inference can run under the low resources typical of developing countries and edge computing, is highly challenging in practice. This is partly due to the lack of diversity in the data and in the personnel involved in annotating and validating it, the high demand for computational resources, and variations in the choice of performance metrics. Recent breakthroughs in natural language processing (NLP), computer vision, and speech analysis rely on increasingly complex and large models (e.g., most transformer- and attention-based models such as BERT, GPT-2/GPT-3, DALL-E 2, and Stable Diffusion) that are pre-trained on large corpora of unlabeled data. Applying these models in a resource-constrained environment is a non-trivial challenge. Moreover, the potential risks associated with such large models in low-resource settings, e.g., disinformation, are virtually unexplored. Low or limited resources mean a hard path toward the adoption of these breakthroughs for most edge applications as well as in developing countries. As a result, most of these advances remain limited to giant technology companies and institutions with access to computational resources and big datasets, which unintentionally marginalizes institutions and companies with fewer resources and significantly hampers edge use cases. These challenges undermine the overall trustworthiness of such AI solutions to achieve positive societal impact worldwide.
Methods such as data augmentation, transfer learning, and synthetic data will not solve the problem either, due both to bias in the original pre-training datasets and to the prohibitive cost and resource needs of fine-tuning these large-scale models. If disparities in resources persist as models become more resource-intensive, they will exacerbate widening income inequalities across the globe and disrupt steady progress toward equity.
