About Learning @ Scale
L@S investigates large-scale, technology-mediated learning environments with many learners and few experts to guide them. Large-scale learning environments are incredibly diverse: massive open online courses, intelligent tutoring systems, open learning courseware, learning games, citizen science communities, collaborative programming communities (such as Scratch), community tutorial systems (such as StackOverflow), shared critique communities (such as DeviantArt), and countless informal communities of learners (such as the Explain Like I'm Five subreddit) are all examples of learning at scale. These systems either depend upon large numbers of learners or are enriched by data generated by many prior learners. They share a common purpose, to increase human potential, and a common infrastructure of data and computation to enable learning at scale.
Investigations of learning at scale naturally bring together two different research communities. Since the purpose of these environments is the advancement of human learning, learning scientists are drawn to study established and emerging forms of knowledge production, transfer, modeling, and co-creation. Since large-scale learning environments depend upon complex infrastructures of data storage, transmission, computation, and interface, computer scientists are drawn to the field as a powerful site for the development and application of advanced computational techniques. At its very best, the L@S community supports the interdisciplinary investigation of these important sites of learning and human development.
The ultimate aim of the L@S community is the enhancement of human learning. In emerging education technology genres (such as intelligent tutors in the 1980s or MOOCs circa 2012), researchers often use a variety of proxy measures for learning, including measures of participation, persistence, completion, satisfaction, and activity. In the early stages of investigating a technological genre, it is entirely appropriate to begin lines of research by investigating these proxy outcomes. As lines of research mature, however, it is important for the community of researchers to hold each other to increasingly high standards and expectations for directly investigating thoughtfully constructed measures of learning. In the early days of research on MOOCs, for instance, many researchers documented correlations between measures of activity (videos watched, forum posts made, clicks) and other measures of activity, and between measures of activity and outcome proxies such as participation, persistence, and completion. As MOOC research matures, additional studies that merely document these kinds of correlations should give way to more direct measures of student learning and to evidence of instructional techniques, technological infrastructures, learning habits, and experimental interventions that improve learning. As a community, we believe that the very best of our early papers define a foundation to build upon, but we anticipate that future papers will take us well beyond it.
L@S 2019 Important Dates
Important dates for Learning @ Scale 2019 are as follows. Each deadline is 11:59 PM UTC-12 (Anywhere on Earth).
| Milestone | Deadline |
| --- | --- |
| Abstract for Research or Synthesis Paper Submission | February 1, 2019 |
| Full Research or Synthesis Paper Submission | February 8, 2019 |
| Work-in-Progress Posters and Demonstrations | April 1, 2019 |
| Camera-Ready Research or Synthesis Paper Submissions | April 18, 2019 |
About the 2019 Conference
The 2019 Learning at Scale conference will take place on June 24 and 25 in Chicago, Illinois, USA, at the Palmer House Hilton. The conference is co-located with and immediately precedes the 2019 International Conference on AI in Education in the same city and venue.
The Committee Members are:
- John C. Mitchell, Stanford University, Program Co-Chair
- Kaska Porayska-Pomsta, University College London, Program Co-Chair
- David Joyner, Georgia Institute of Technology, General Chair