Learning at Scale investigates large-scale, technology-mediated learning environments with many learners and few experts to guide them. Large-scale learning environments are remarkably diverse: massive open online courses, intelligent tutoring systems, open learning courseware, learning games, citizen science communities, collaborative programming communities (such as Scratch), community tutorial systems (such as StackOverflow), shared critique communities (such as DeviantArt), and the countless informal communities of learners (such as the Explain It Like I’m Five sub-Reddit) are all examples of learning at scale. These systems either depend upon large numbers of learners, or they are enriched by data generated through prior use by many learners. They share a common purpose (to increase human potential) and a common infrastructure of data and computation to enable learning at scale.
Investigations of learning at scale naturally bring together two different research communities. Since the purpose of these environments is the advancement of human learning, learning scientists are drawn to study established and emerging forms of knowledge production, transfer, modeling, and co-creation. Since large-scale learning environments depend upon complex infrastructures of data storage, transmission, computation, and interface, computer scientists are drawn to the field as a powerful site for the development and application of advanced computational techniques. At its very best, the Learning at Scale community supports the interdisciplinary investigation of these important sites of learning and human development.
The ultimate aim of the Learning at Scale community is the enhancement of human learning. In emerging education technology genres (such as intelligent tutors in the 1980s or MOOCs circa 2012), researchers often use a variety of proxy measures for learning, including measures of participation, persistence, completion, satisfaction, and activity. In the early stages of investigating a technological genre, it is entirely appropriate to begin lines of research by investigating these proxy outcomes. As lines of research mature, however, it is important for the community of researchers to hold each other to increasingly high standards and expectations for directly investigating thoughtfully constructed measures of learning. In the early days of research on MOOCs, for instance, many researchers documented correlations between measures of activity (videos watched, forum posts, clicks) and other measures of activity, and between measures of activity and outcome proxies including participation, persistence, and completion. As MOOC research matures, additional studies that document these kinds of correlations should give way to more direct measures of student learning and to evidence that instructional techniques, technological infrastructures, learning habits, and experimental interventions improve learning. As a community, we believe that the very best of our early papers define a foundation to build upon, not an established standard to aspire to.