Workshops

Introducing an Open-source Adaptive Tutoring System to Accelerate Learning Sciences Experimentation

Organizers: Ioannis Anastasopoulos (ioannisa@berkeley.edu), Shreya K. Sheel (shreya_sheel@berkeley.edu), and Zachary A. Pardos (pardos@berkeley.edu)

Description: Learning @ Scale has embraced movements that spread access to education through open and free learning platforms. In this tutorial, we introduce OATutor (presented at CHI'23 [1]), the field's first free and open-source adaptive tutoring system based on ITS principles and designed for rapid experimentation. The MIT-licensed platform can be configured and deployed to GitHub Pages in only a few clicks and supports mastery-based adaptive problem selection driven by Bayesian Knowledge Tracing (BKT). We demonstrate, with hands-on tutorials, how the system can be used to rapidly run A/B experiments, analyze the data, and publish the entire tutor, content, and analysis scripts to GitHub, facilitating unprecedented ease of replication and transparency, as demonstrated in a recent study comparing ChatGPT-generated hints to human-tutor hints [2]. Our four-part tutorial will include adding lessons to the system and linking them to assignments in a MOOC platform or LMS via LTI. The structured JSON format of the four CC BY courses' worth of content released with OATutor opens avenues for researchers to apply new and existing educational data mining and NLP techniques (e.g., KC tagging) and rapidly evaluate the impact of any subsequent changes on learners with an experiment.
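
As a concrete illustration of the mastery-based problem selection that BKT supports, here is a minimal sketch of a standard BKT posterior update. The function name, parameter values, and the 0.95 mastery threshold are illustrative assumptions for this sketch, not OATutor's actual configuration.

```python
# Minimal Bayesian Knowledge Tracing (BKT) update -- illustrative sketch;
# parameter values and mastery threshold are assumptions, not OATutor defaults.

def bkt_update(p_know: float, correct: bool,
               p_guess: float = 0.2, p_slip: float = 0.1,
               p_transit: float = 0.15) -> float:
    """Return P(skill known) after observing one graded response."""
    if correct:
        evidence = p_know * (1 - p_slip)           # knew it and didn't slip
        total = evidence + (1 - p_know) * p_guess  # or guessed correctly
    else:
        evidence = p_know * p_slip                 # knew it but slipped
        total = evidence + (1 - p_know) * (1 - p_guess)
    posterior = evidence / total
    # Account for the chance of learning at this practice opportunity.
    return posterior + (1 - posterior) * p_transit

MASTERY_THRESHOLD = 0.95  # assumed cutoff for adaptive problem selection

p_know = 0.3  # assumed prior P(known) for one knowledge component
for response in [True, False, True, True]:
    p_know = bkt_update(p_know, response)
print(f"P(known) = {p_know:.3f}, mastered: {p_know >= MASTERY_THRESHOLD}")
```

In a mastery-based tutor of this kind, problems for a knowledge component keep being selected until the estimate crosses the threshold, at which point the learner advances.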

+ info: https://cahlr.github.io/OATWeb/las.html

Workshop: Fourth Annual Workshop on A/B Testing and Platform-Enabled Learning Research 

Organizers: Steve Ritter, Neil Heffernan, Joseph Jay Williams, Derek Lomas, Klinton Bicknell, Jeremy Roschelle, Ben Motz, Danielle McNamara, Richard Baraniuk, Debshila Basu Mallick, Rene Kizilcec, Ryan Baker, Stephen Fancsali, and April Murphy

Learning engineering adds tools and processes to learning platforms to support improvement research. One such tool is A/B testing, which is common in large software companies and also represented academically at conferences like the Annual Conference on Digital Experimentation (CODE). A number of A/B testing systems focused on educational apps have arisen recently, including UpGrade and E-TRIALS. A/B testing can help improve educational platforms, yet challenging issues in education go beyond the generic paradigm (a sketch of the generic assignment mechanism follows the topic list below). This workshop will explore how A/B testing in educational contexts is different, the opportunities provided by platforms, and how these empirical approaches can be used to drive gains in student learning. We invite papers (up to 4 pages in CHI Proceedings format) addressing issues in conducting A/B testing on learning engineering platforms, including:

  • The role of A/B testing systems in complying with SEER principles (https://ies.ed.gov/seer/) 
  • The use of existing learning platforms to conduct research (http://seernet.org)
  • Ethical, data security, and privacy issues in A/B testing within adaptive software
  • Attrition and dropout
  • UX issues 
  • Educator involvement and public perceptions of experimentation 
  • Balancing practical improvements with open science
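
To ground the generic paradigm the workshop aims to move beyond, here is a minimal sketch of deterministic hash-based condition assignment, a mechanism commonly used in A/B testing systems; the function and bucket scheme are illustrative assumptions, not the actual API of UpGrade or E-TRIALS.

```python
# Deterministic A/B condition assignment -- illustrative sketch only;
# not the actual API of UpGrade, E-TRIALS, or any specific platform.
import hashlib

def assign_condition(user_id: str, experiment_id: str,
                     conditions: tuple = ("control", "treatment")) -> str:
    """Hash (experiment, user) so each student stably sees the same arm."""
    key = f"{experiment_id}:{user_id}".encode("utf-8")
    bucket = int(hashlib.sha256(key).hexdigest(), 16) % len(conditions)
    return conditions[bucket]

# The same student gets the same condition across sessions and devices:
print(assign_condition("student-42", "hint-wording-study"))
```

Stable assignment matters especially in education: a student re-randomized mid-unit would experience an incoherent mix of conditions, which is one way educational A/B testing departs from the generic web-optimization setting.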

+ info: https://sites.google.com/carnegielearning.com/4theducationalabtestingatscale/home

Tutorial: How to Open Science: Promoting Principles and Reproducibility Practices within the Learning @ Scale Community 

Organizers: Aron Haim, Stacy Shaw and Neil Heffernan

Across the past decade, open science has gained momentum, making research more openly available and reproducible. In parallel, learning at scale systems have been developed to collect data and apply models, features, and reports to better support students and teachers in reaching their goals. This tutorial will provide an overview of open science practices, their benefits, and strategies for mitigating common concerns about adopting them in research. In the second part of the tutorial, we will use the Open Science Framework (OSF) to create, collaborate on, and share projects, demonstrating how to open materials, code, and data. The final part will review mitigation strategies for releasing datasets and materials so that other researchers may easily reproduce them. Participants will learn what open science practices are, how to apply them in their own research, and how to use the Open Science Framework. All resources for the tutorial are on OSF: https://osf.io/jp6cq/
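
As a small illustration of programmatic access to shared OSF materials, the sketch below fetches public metadata for the tutorial's OSF project through the OSF API v2; it assumes the project is a public node, so no authentication is needed.

```python
# Read public metadata for an OSF project via the OSF API v2 (read-only).
# Assumes the node is public; the GUID is the tutorial's own OSF project.
import requests

resp = requests.get("https://api.osf.io/v2/nodes/jp6cq/")
resp.raise_for_status()
attrs = resp.json()["data"]["attributes"]
print(attrs["title"], "| public:", attrs["public"])
```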

+ info: https://lats2023-tutorial.howtoopenscience.com/