Accelerating AI Systems: Let the Data Flow!
As the benefits from Moore’s Law diminish, future improvements in computing performance must rely on specialized accelerators for applications in artificial intelligence and data processing. These applications will increasingly be characterized by terabyte-sized models, data sparsity, and irregular control flow that will challenge the capabilities of conventional CPUs and GPUs.
In this talk, I explain how Reconfigurable Dataflow Accelerators (RDAs) can be used to boost the performance of a broad set of data-intensive applications with these characteristics. SambaNova Systems is using RDA technology contained in Reconfigurable Dataflow Units (RDUs) to achieve record-setting performance on challenging machine learning tasks.
The 5th ML in Korea
"We invite everyone who is part of or interested in the ML research community in Korea. Participants introduce their own ML research presented in ICLR 2022. They also casually introduce papers that are found interesting among those presented in ICLR 2022 and other venues, and discuss those with other participants. Other potential discussion topics include (but are not limited to): Korean NLP, computer vision and datasets, ML for the post-COVID19 era, and career chances in both academia and industry in Korea. We welcome everyone from anywhere in the world. Note that we have had the same social events in ICLR 2020, 2021, NeurIPS 2020, and 2021 with active participation of more than 100 people.
Our social program is 3 hours long, including a 30-minute keynote speech by Yejin Choi (Univ. of Washington) and two 1-hour sessions with three tracks: industry tech sharing, career mentoring for industrial research labs, and research discussions. As before, we will gather remotely through GatherTown.
We have 9 organizers from 3 different academic institutes (Tae-Hyun Oh
Machine learning benefits for developing nations
Advanced economies are already using machine learning to solve problems such as medical diagnosis, improving e-commerce conversion rates, traffic congestion, saving cows from bad drivers, and improving healthcare, while developing nations lag conspicuously behind. It is therefore important to discuss how machine learning could be used to address developing nations’ challenges by highlighting how some of the major challenges can be solved with particular machine learning techniques.
The main objective of the workshop is to create awareness of machine learning tools and their application to solving problems in developing nations such as Ethiopia. Panelists are expected to outline how machine learning algorithms can be applied in the health, agriculture, education, ICT, and finance sectors.
Shine in your technical presentation
Our presentations are likely the highest-impact activities we have as researchers, and they are often quite dense. In the 10 minutes of your conference oral, you have the chance to show your work to a large audience and gain world-wide recognition. This is both incredibly stressful and difficult to do. The months of research you've done, with all the ideas and all the results, have to be jam-packed into a short time interval, and your audience is tired from the long conference and the fire hose of information they are drinking from. How do you make the most of your presentation? How do you make sure that people understand your work, get excited by it, and remember you in the future?
In the first part of the session we will cover:
- How to structure your presentation and storytelling
- Captivating your audience and making them remember you
- Guiding your audience through tough, difficult-to-parse material
- Creating dynamic, easy-to-follow slides
- Preparation for the big moment
- Frequent mistakes and the psychology of insecurity
In the second part you are invited to bring your own presentations, which will be discussed in a small group. Don’t worry about getting it right; we’re all here to learn. Your teacher for the session will be Tijmen Blankevoort, Senior Staff Manager of Engineering at Qualcomm Technologies Netherlands. He is a public speaker with over 7 years of experience, a recurring radio and podcast guest, the former founder of a successful AI start-up, and a research team lead at Qualcomm working on model efficiency.
Doina Precup
Reinforcement learning has achieved great success in domains ranging from games to complex control tasks. But reinforcement learning can go beyond specific tasks and provide the foundation for building AI agents that continually learn from interaction in order to build knowledge and achieve goals.
In this talk, I will discuss the importance of rewards as a way to specify goals, and the way in which reinforcement learning can be used to learn general procedural and predictive knowledge. I will outline recent progress made in this area, and important open questions.
Better Developing Pretraining-based Models and Beyond
Pretraining techniques and models have greatly advanced research in language and vision. Many techniques have been designed to drive model pretraining and fine-tuning in a more intelligent, robust, and efficient manner. Meanwhile, with the greater power of these models come greater concerns about their social impact. Do larger models consistently become more powerful? Can larger models work reliably in larger-scale or even real-world applications? Developing models with fairness and reliability in mind therefore becomes increasingly important.
This social event is launched to collect sparks of insight and open discussions, ranging from findings to practical tips, on better developing and fine-tuning large pretrained models, as well as on the prospects of those models in terms of scale and social impact.