WASL 2024: Optimizing Human Learning
4th International Workshop eliciting Adaptive Sequences for Learning
Co-located with LAK24, The 14th International Learning Analytics and Knowledge Conference
Kyoto, Japan, March 19, 2024

Conference website: https://humanlearn.io
Submission link: https://easychair.org/conferences/?conf=wasl2024
Abstract registration deadline: December 4, 2023
Submission deadline: December 4, 2023
Description
What should we learn next? In an era where digital access to knowledge is cheap and learner attention is expensive, many online applications have been developed for learning. These platforms collect massive amounts of data across diverse learner profiles, which can be used to improve the learning experience: intelligent tutoring systems can infer which activities worked for different types of students in the past and apply this knowledge to instruct new students. For learning to be effective and efficient, the experience should be adaptive: the sequence of activities should be tailored to the abilities and needs of each learner, to keep them stimulated and avoid boredom, confusion, and dropout. Framed as reinforcement learning, the goal is to learn a policy that administers exercises or resources to individual students.
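As a toy illustration of this reinforcement-learning view, the sketch below selects the next exercise with Bernoulli Thompson sampling, treating a correct answer as the reward. It is a minimal sketch with hypothetical names, not a system presented at the workshop.

```python
import random

# Minimal Bernoulli Thompson sampling over a fixed pool of exercises.
# Illustrative assumption: the "reward" is 1 when the student answers correctly.
class ExerciseBandit:
    def __init__(self, n_exercises):
        # Beta(1, 1) prior on each exercise's success probability
        self.alpha = [1.0] * n_exercises
        self.beta = [1.0] * n_exercises

    def select(self):
        # Sample a plausible success rate per exercise; pick the highest draw
        draws = [random.betavariate(a, b)
                 for a, b in zip(self.alpha, self.beta)]
        return max(range(len(draws)), key=draws.__getitem__)

    def update(self, exercise, correct):
        # Conjugate posterior update from the observed outcome
        if correct:
            self.alpha[exercise] += 1.0
        else:
            self.beta[exercise] += 1.0
```

Note that rewarding raw correctness would steer the policy toward the easiest items; designing a reward that reflects learning progress rather than immediate success is precisely one of the open questions this workshop addresses.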
Educational research communities have proposed models that predict mistakes and dropout, in order to detect students who need further instruction. Such models are usually calibrated on data collected offline and may not generalize well to new students. There is now a need to design online systems that continuously learn as data flows in and self-assess their strategies when interacting with new learners. Similar models have already been deployed in commercial online applications (e.g., streaming, advertising, social networks) to optimize engagement, click-through rate, or profit. Can we use similar methods to enhance teaching and promote lifelong success? When optimizing human learning, which metrics should be optimized? Learner progress? Learner retention? User addiction? The diversity or coverage of the proposed activities? What issues does adapting the learning process online raise, in terms of privacy, fairness (disparate impact, inadvertent discrimination), and robustness to adversaries trying to game the system?
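To make the online setting concrete, here is a minimal sketch of a mistake predictor that keeps learning as data flows in: a logistic regression updated by stochastic gradient descent, one interaction at a time. The feature names are purely illustrative assumptions.

```python
import math

# Toy online mistake predictor: logistic regression trained by SGD one
# interaction at a time, so the model keeps adapting to new learners.
def sgd_step(w, features, correct, lr=0.1):
    z = sum(w.get(k, 0.0) * v for k, v in features.items())
    p = 1.0 / (1.0 + math.exp(-z))   # predicted P(correct answer)
    error = correct - p              # gradient of the log-likelihood w.r.t. z
    for k, v in features.items():
        w[k] = w.get(k, 0.0) + lr * error * v
    return p

w = {}  # weights grow as new students and skills appear
p = sgd_step(w, {"bias": 1.0, "student:42": 1.0, "skill:fractions": 1.0},
             correct=1)
```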
Student modeling for optimizing human learning is a rich and complex task that draws on methods from machine learning, cognitive science, educational data mining, and psychometrics. This workshop welcomes researchers and practitioners working on the following topics (the list is not exhaustive; a toy sketch of one such model follows it):
- abstract representations of learning
- additive/conjunctive factor models
- adversarial learning
- causal models
- cognitive diagnostic models
- deep generative models such as deep knowledge tracing
- item response theory
- models of learning and forgetting (spaced repetition)
- multi-armed bandits
- multi-task learning
- reinforcement learning
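As one example from the list above, the following sketch implements the classic Bayesian Knowledge Tracing update for a single skill. The slip, guess, and learn values are illustrative assumptions, not fitted parameters.

```python
# Toy Bayesian Knowledge Tracing update for one skill.
def bkt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.15):
    # Posterior probability of mastery given the observed answer
    if correct:
        num = p_know * (1.0 - slip)
        den = num + (1.0 - p_know) * guess
    else:
        num = p_know * slip
        den = num + (1.0 - p_know) * (1.0 - guess)
    posterior = num / den
    # The student may also learn the skill during this opportunity
    return posterior + (1.0 - posterior) * learn
```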
Program
Invited Talks
- Keynote by Aritra Ghosh (Meta) & Andrew Lan (University of Massachusetts Amherst)
- Tutorial by Yizhu Gao (University of Georgia) & Yong Zheng (Illinois Institute of Technology)
Submission Guidelines
All papers must be original and not simultaneously submitted to another journal or conference. The following paper categories are welcome:
- Short papers between 2 and 3 pages, LNCS format
- Full papers between 4 and 6 pages, LNCS format
Submissions should be made via EasyChair.
Workshop topics
- How can we put students in optimal conditions to learn (e.g., incentives, companion agents)?
- When optimizing human learning, which metrics should be optimized?
- The progress of the learner?
- The diversity or coverage of the proposed activities?
- Fast diagnosis of what the student does not know?
- Can a learning platform be solely based on addiction, maximizing interaction?
- What kinds of activities give enough choice and control to the learner to benefit their learning (adaptability vs. adaptivity)?
- Do strategies differ when teaching a group of students? Do we want to enhance social interaction between learners?
- What feedback should be shown to the learner to enable reflective learning, e.g., visualizations, learning maps, scores? (Should a system ever provide fake feedback to further encourage the student?)
- What student parameters are relevant, e.g., personality traits, mood, context (is the learner in class or at home?)?
- What explicit and implicit feedback does the learner provide during the interaction?
- What models of learning are relevant? E.g., cognitive models, or models of forgetting for spaced repetition (see the sketch after this list).
- What specific machine-learning challenges do these data pose?
- Do we have enough datasets? What kinds of datasets are missing? In particular, aren’t the current datasets too focused on STEM disciplines?
- How can we guarantee the fairness and trustworthiness of AI systems that learn from interaction with students? This is especially critical for systems that learn online.
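Regarding models of forgetting mentioned above, here is a minimal sketch of the exponential forgetting curve commonly used in spaced-repetition systems. The half-life value is an illustrative assumption; in practice it would be estimated from learner data.

```python
# Illustrative exponential forgetting model: recall probability halves
# every `half_life_days` elapsed since the last review.
def p_recall(delta_days, half_life_days):
    return 2.0 ** (-delta_days / half_life_days)

# With a 7-day half-life, recall after 14 days is 2 ** -2 = 0.25.
```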
Organizers
Workshop Chairs
- Samuel Girard, Inria Saclay, France
- Hisashi Kashima, Kyoto University, Japan
- Fabrice Popineau, CentraleSupélec & LISN, France
- Jill-Jênn Vie, Inria Saclay, France
- Yong Zheng, Illinois Institute of Technology, USA
Program Committee
- Hisashi Kashima, Kyoto University, Japan
- Fabrice Popineau, CentraleSupélec & LISN, France
- Jill-Jênn Vie, Inria Saclay, France
- Jacob Whitehill, Worcester Polytechnic Institute, USA
To contact us, join our Google group: optimizing-human-learning.
Webpage: https://humanlearn.io