Call for Papers – Learning at Scale 2019
L@S investigates large-scale, technology-mediated learning environments that typically have many active learners and few experts on hand to guide their progress or respond to individual needs. Modern learning at scale typically draws on data at scale, collected from current learners and previous cohorts of learners over time. Large-scale learning environments are very diverse: evolving forms of massive open online courses, intelligent tutoring systems, open learning courseware, learning games, citizen science communities, collaborative programming communities (such as Scratch), community tutorial systems (such as StackOverflow), shared critique communities (such as DeviantArt), and countless informal communities of learners (such as the Explain It Like I’m Five sub-Reddit) are all examples of learning at scale. A growing number of current campus-based courses in popular fields also involve many learners, relative to the number of course staff, and leverage varying forms of data collection and automated support. All share a common purpose to increase human potential, leveraging data collection, data analysis, human interaction, and varying forms of computational assessment, adaptation and guidance.
Research on learning at scale naturally brings together two different research communities. Learning scientists are drawn to study established and emerging forms of knowledge development, transfer, modelling, and co-creation. Computer and data scientists are drawn to the specific and challenging needs for data collection, data sharing, analysis, computation, and interaction. The cornerstone of L@S is interdisciplinary research and progressive confluence toward more effective and varied future learning.
The L@S research community has become increasingly sophisticated, interdisciplinary and diverse. In the early years, researchers began by investigating proxy outcomes for learning, such as measures of participation, persistence, completion, satisfaction, and activity. Early MOOC researchers in particular documented correlations between easily observed measures of activity – videos watched, forums posted, clicks – and these outcome proxies. As the field and tools mature, however, we have increasing expectations for new and established measures of learning. As MOOCs morph into a more varied and provocative medium and L@S research expands, we aim for more direct measures of student learning, accompanied by generalizable insight around instructional techniques, technological infrastructures, learning habits, and experimental interventions that improve learning.
The ACM Learning at Scale conference solicits original research paper submissions on methodologies, case studies, analyses, tools, or technologies for learning at scale, broadly construed. Four kinds of contributions will be accepted: Research Papers, Synthesis Papers, Work-in-Progress Posters, and Demonstrations.
Paper submission, review, and notification to authors will be handled using the EasyChair system. Full research and synthesis papers must not exceed 10 pages and must use the ACM CHI Archive Format, available in LaTeX and Word. Submissions must be in PDF format, written in English, contain original work, and not be under review for any other venue while under review for this conference.
Accepted papers must be presented at the conference and will be included in the proceedings.
We solicit empirical and theoretical papers on a diverse range of topics relevant to successful learning at scale. Consistent with past years, we welcome submissions on: (1) design of systems for learning at scale, (2) effective learning interactions at scale, and (3) understanding and supporting learners at scale. Accounts of robust methodologies from learning sciences theory, practice, and/or engineering perspectives are encouraged. Additional and illustrative topics are listed at the end of this call. All submissions on any topic will be reviewed on the basis of originality, research quality, potential impact, and value to the development of future learning at scale.
To ensure that papers are evaluated on their independent merit, the review process will be double-blind. Papers submitted for review MUST NOT contain the authors’ names, affiliations, or any information that may disclose the authors’ identity (this information is restored in the camera-ready version upon acceptance). Please replace author names and affiliations with Xs on submitted papers. In particular, in the version submitted for review, avoid explicit self-references: write “in [1] it is shown” rather than “in [1] we show”. You should still cite your own relevant previous work, so that a reviewer can access it and see the new contributions, but the text should not explicitly state that the cited work belongs to the authors.
In order to support collaboration between learning scientists, computer scientists, and contributors from other relevant fields, we invite papers that evaluate, synthesize, and contextualize existing bodies of knowledge and research that may be targeted at one or more communities. Such papers may have high value to the community but might not otherwise be accepted on the basis of original research contributions alone. Suitable papers include survey papers that provide useful perspectives on major research areas, papers that support or challenge long-held beliefs with compelling evidence, or papers that provide an extensive and realistic evaluation of competing approaches to solving specific problems. Synthesis paper submissions will be reviewed by the full program committee and held to the same standards as research papers, except that the emphasis will be on value to the community rather than on novel research contributions.
Synthesis paper submissions should follow the same guidelines for double-blind reviewing as research papers, described above.
A Work-in-Progress (WiP) concisely summarizes recent findings or other types of innovative or thought-provoking work that has not yet reached the level of completion of a full paper. Topics are the same as for full papers. At the conference, all accepted WiP submissions will be presented in poster form. Selected WiPs may be invited for oral presentation during the conference. Rejected full papers can be resubmitted as WiPs and will be evaluated accordingly.
Work-in-Progress submissions must be 4 pages or fewer, must use the ACM CHI Archive Format, available in LaTeX and Word, and must be submitted as a PDF file. References must fit within the four-page limit. WiP submissions are not anonymous and should therefore include all author names, affiliations, and contact information. If accepted, you should expect to prepare a poster to present at the conference venue.
Demonstrations show aspects of learning at scale in an interactive, hands-on form. A live demonstration is a great opportunity to communicate ideas and concepts in a powerful way that a regular presentation cannot. We invite demonstrations of learning and analytical environments and other systems that have direct relevance to learning at scale. We especially encourage authors of accepted papers and industrial partners to showcase their technologies using this format. Demonstration submissions must be 2 pages or fewer, must use the ACM CHI Archive Format, available in LaTeX and Word, and must be submitted as a PDF file. A demonstration proposal should address two components:
- The merit and nature of the demonstrated technology. If the proposed demonstration is associated with a Full Paper or a WiP submission, please point to the title of the submission instead of repeating the information here.
- Details of how the demo will be executed in practice, and how visitors will interact with it during the conference.
Important dates for Learning @ Scale 2019 are as follows. The deadline time for each day is 11:59PM UTC-12 (Anywhere on Earth).
Abstract for Research or Synthesis Paper Submission
February 1, 2019
Full Research or Synthesis Paper Submission
February 8, 2019
Work-in-Progress Posters and Demonstrations
April 1, 2019
Camera-Ready Research or Synthesis Paper Submissions
April 18, 2019
Example topics include, but are not limited to:
- Novel assessments of learning, including those drawing on computational techniques for automated, peer, or human-assisted assessment.
- New methods for validating inferences about human learning from established measures, assessments, or proxies.
- Experimental interventions that show evidence of improved learning outcomes, such as:
  - Domain-independent interventions inspired by social psychology, behavioural economics, and related fields, including those with the potential to benefit learners from diverse socio-economic and cultural backgrounds
  - Domain-specific interventions inspired by discipline-based educational research that may advance teaching and learning of specific ideas or theories within a field or redress misconceptions
- Heterogeneous treatment effects in large experiments that point the way towards personalized or adaptive interventions
- Best practices in open science, including pre-planning and pre-registration
- Alternatives to conducting and reporting null hypothesis significance testing
- Best practices in the archiving and reuse of learner data in safe, ethical ways
- Advances in differential privacy and other methods that reconcile the opportunities of open science with the challenges of privacy protection
- The blended use of large-scale learning environments in specific residential or small-scale learning communities, or the use of sub-groups or small communities within large-scale learning environments
- The application of insights from small-scale learning communities to large-scale learning environments
- Learning environments for neurodevelopmental, cultural, and socio-economic diversity
- Status indicators of student progress or instructional effectiveness
- Methods to promote community, support learning, or increase retention at scale
- Tools and pedagogy, such as open learner models, to promote self-efficacy, self-regulation, and motivation
- Assessing reasons for student outcomes by modifying tool design
- Modelling learners based on responses to variations in tool design
- Evaluation strategies such as quiz or discussion forum design
- Instrumenting systems and data representation to capture relevant indicators of learning
- Games for learning at scale
- Automated feedback tools, such as for essay writing, programming, and so on
- Automated grading tools
- Tools for interactive tutoring
- Tools for learner modelling
- Tools for increasing learner autonomy in learning and self-assessment
- Tools for representing learner models
- Interfaces for harnessing learning data at scale
- Innovations in platforms for supporting learning at scale
- Tools to support capturing and managing learning data
- Tools and techniques for managing privacy of learning data