Monday, April 25, 2016
Time | Session |
---|---|
07:30 AM | Registration & Tea/Coffee |
08:30 AM | Welcome and opening remarks. Room: Pentland E |
09:00 AM |
Session #1: Global village. Room: Pentland E. Session chair: Carolyn Penstein Rose, Carnegie Mellon University, USA
1A: The Civic Mission of MOOCs: Measuring Engagement across Political Differences in Forums by Justin Reich, Brandon Stewart, Kimia Mavon, Dustin Tingley
Keywords: MOOCs; civic education; discourse; text analysis; political ideology; structural topic model
Abstract:
In this study, we develop methods for computationally measuring the degree to which students engage in MOOC forums with other students holding different political beliefs. We examine a case study of a single MOOC about education policy, Saving Schools, where we obtain measures of student education policy preferences that correlate with political ideology. Contrary to assertions that online spaces often become echo chambers or ideological silos, we find that students in this case hold diverse political beliefs, participate equitably in forum discussions, directly engage (through replies and upvotes) with students holding opposing beliefs, and converge on a shared language rather than talking past one another. Research that focuses on the civic mission of MOOCs helps ensure that open online learning engages the same breadth of purposes that higher education aspires to serve.
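As a rough illustration of the kind of forum measurement described above, one can compare the observed rate of cross-ideology replies against a random-mixing baseline. The data layout and names below are hypothetical, not the authors' code or the paper's structural topic model:

```python
import random

def cross_rate(replies, ideology):
    """Fraction of replies connecting students with opposing beliefs.

    replies  : list of (author, replied_to) pairs
    ideology : dict mapping student -> ideology label (hypothetical schema)
    """
    cross = sum(1 for a, b in replies if ideology[a] != ideology[b])
    return cross / len(replies)

def expected_cross_rate(ideology, n_samples=10_000, seed=0):
    """Cross-ideology rate under random mixing (a null baseline)."""
    rng = random.Random(seed)
    students = list(ideology)
    pairs = [rng.sample(students, 2) for _ in range(n_samples)]
    return sum(1 for a, b in pairs if ideology[a] != ideology[b]) / n_samples

# Toy data: four students, two ideology labels.
ideology = {"s1": "left", "s2": "right", "s3": "left", "s4": "right"}
replies = [("s1", "s2"), ("s2", "s1"), ("s3", "s1"), ("s4", "s2")]
print(cross_rate(replies, ideology), expected_cross_rate(ideology))
```

An observed rate close to (or above) the baseline is evidence against the echo-chamber pattern the paper discusses.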
1B: Mobile Devices for Early Literacy Intervention and Research with Global Reach by Cynthia Breazeal, Robin Morris, Stephanie Gottwald, Tinsley Galyean, Maryanne Wolf
Keywords: Open platform for education; early literacy; reading brain; virtual preschool; pre-k learning and technology; global literacy project
Abstract:
Extensive work focuses on the uses of technology at scale for post-literate populations (e.g., MOOCs, learning games, Learning Management Systems). Little attention is afforded to non-literate populations, particularly in the developing world. This paper presents an approach using mobile devices with the ultimate goal of reaching 770 million people. We developed a novel platform with a cloud backend to deliver educational content to over a thousand marginalized children in different countries: specifically, in remote villages without schools, urban slums with overcrowded schools, and at-risk rural schools. Here we describe the theoretical basis of our system and results from case studies in three educational contexts. This model will help researchers and designers understand how mobile devices can help children acquire basic skills and aid each other’s learning when the benefit of teachers is limited or non-existent.
1C: Online Urbanism: Interest-based Subcultures as Drivers of Informal Learning in an Online Community by Ben U Gelman, Chris Beckley, Aditya Johri, Carlotta Domeniconi, Seungwon Yang
Keywords: Informal Learning; Online Communities; Interest-based Subcultures; Scratch; Programming.
Abstract:
Online communities continue to be an important resource for informal learning. Although many facets of online learning communities have been studied, we have limited understanding of how such communities grow over time to productively engage a large number of learners. In this paper we present a study of a large online community called Scratch, which was created to help users learn software programming. We analyzed 5 years of data consisting of 1 million users and their 1.9 million projects. Examination of interactional patterns among highly active members of the community uncovered a markedly temporal dimension to participation. As membership of the Scratch online community grew over time, interest-based subcultures started to emerge. This pattern was uncovered even when clustering was based solely on the social network of members. This process, which closely resembles urbanism or the growth of physically populated areas, allowed new members to combine their interests with programming.
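A minimal sketch of how subcultures might be recovered from the member social network alone, using greedy modularity community detection via networkx. This is an illustrative stand-in, not the paper's actual clustering method:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy interaction graph among members (hypothetical edges).
G = nx.Graph()
G.add_edges_from([
    ("ann", "bob"), ("bob", "cem"), ("ann", "cem"),   # one dense group
    ("dia", "eli"), ("eli", "fay"), ("dia", "fay"),   # another dense group
    ("cem", "dia"),                                    # a weak bridge
])

# Communities found purely from network structure; in Scratch these
# clusters could then be inspected for shared project themes.
for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"community {i}: {sorted(community)}")
```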
|
10:30 AM | Morning tea |
11:00 AM |
Session #2: Engagement. Room: Pentland E. Session chair: Dan Russell, Google, USA
2A: Effects of In-Video Quizzes on MOOC Lecture Viewing by Geza Kovacs
Keywords: in-video quizzes; lecture viewing; lecture navigation; seeking behaviors; MOOCs
Abstract:
Online courses on sites such as Coursera use quizzes embedded inside lecture videos (in-video quizzes) to help learners test their understanding of the video. This paper analyzes how users interact with in-video quizzes, and how in-video quizzes influence users' lecture viewing behavior. We analyze the viewing logs of users who took the Machine Learning course on Coursera. Users engage heavily with in-video quizzes -- 74% of viewers who start watching a video will attempt its corresponding in-video quiz. We observe spikes in seek activity surrounding in-video quizzes, particularly seeks from the in-video quiz to the preceding section. We show that this is likely due to users reviewing the preceding section to help them answer the quiz, as the majority of users who seek backwards from in-video quizzes have not yet submitted a correct answer, but will later attempt the quiz. Some users appear to use quiz-oriented navigation strategies, such as seeking directly from the start of the video to in-video quizzes, or skipping from one quiz to the next. We discuss implications of our findings on the design of lecture-viewing platforms.
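A toy version of the seek analysis: flag backward seeks that depart from positions near an in-video quiz. The event schema is hypothetical, not Coursera's actual log format:

```python
def backward_seeks_near_quiz(seeks, quiz_times, window=5.0):
    """Find backward seeks that start within `window` seconds of a quiz.

    seeks      : list of (from_sec, to_sec) seek events from click logs
    quiz_times : positions (sec) of in-video quizzes
    """
    return [
        (f, t) for f, t in seeks
        if t < f and any(abs(f - q) <= window for q in quiz_times)
    ]

seeks = [(120.0, 95.0), (30.0, 60.0), (122.5, 40.0)]
print(backward_seeks_near_quiz(seeks, quiz_times=[121.0]))
# -> both seeks departing backwards from ~121s, i.e. likely quiz review
```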
2B: Brain Points: A Deeper Look at a Growth Mindset Incentive Structure for an Educational Game by Eleanor O'Rourke, Erin Peach, Carol S. Dweck, Zoran Popovic
Keywords: Educational games; growth mindset; incentive structures.
Abstract:
Student retention is a central challenge in systems for learning at scale. It has been argued that educational video games could improve student retention by providing engaging experiences and informing the design of other online learning environments. However, educational games are not uniformly effective. Our recent research shows that player retention can be increased by using a brain points incentive structure that rewards behaviors associated with growth mindset, or the belief that intelligence can grow. In this paper, we expand on our prior work by providing new insights into how growth mindset behaviors can be effectively promoted in the educational game Refraction. We present results from an online study of 25,000 children who were exposed to five different versions of the brain points intervention. We find that growth mindset animations cause a large number of players to quit, while brain points encourage persistence. Most importantly, we find that awarding brain points randomly is ineffective; the incentive structure is successful specifically because it rewards desirable growth mindset behaviors. These findings have important implications that can support the future generalization of the brain points intervention to new educational contexts.
2C: Explaining Student Behavior at Scale: The Influence of Video Complexity on Student Dwelling Time by Frans Van der Sluis, Jasper Ginn, Tim Van der Zee
Keywords: MOOCs; video; information complexity; dwelling time; learning analytics; student behavior.
Abstract:
Understanding why and how students interact with educational videos is essential to further improve the quality of MOOCs. In this paper, we look at the complexity of videos to explain two related aspects of student behavior: the dwelling time (how much time students spend watching a video) and the dwelling rate (how much of the video they actually see). Building on a strong tradition of psycholinguistics, we formalize a definition for information complexity in videos. Furthermore, building on recent advancements in time-on-task measures, we formalize dwelling time and dwelling rate based on click-stream trace data. The resulting computational model of video complexity explains 22.44% of the variance in the dwelling rate for students who finish watching a paragraph of a video. Video complexity and student dwelling show a polynomial relationship, where both low and high complexity increase dwelling. These results indicate why students spend more time watching (and possibly contemplating) a video. Furthermore, they show that even fairly straightforward proxies of student behavior such as dwelling can have multiple interpretations, illustrating the challenge of sense-making from learning analytics.
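A sketch of the polynomial relationship reported above: fitting dwelling rate as a quadratic function of video complexity with numpy. The data points are invented for illustration; the paper's exact model specification is not reproduced here:

```python
import numpy as np

# Hypothetical per-paragraph observations: complexity score, dwelling rate.
complexity    = np.array([0.1, 0.3, 0.5, 0.7, 0.9, 0.2, 0.8])
dwelling_rate = np.array([1.20, 1.02, 0.95, 1.04, 1.25, 1.08, 1.15])

# Degree-2 fit; a U-shape (positive x^2 term) matches the reported pattern
# that both low and high complexity increase dwelling.
coeffs = np.polyfit(complexity, dwelling_rate, deg=2)
pred = np.polyval(coeffs, complexity)
ss_res = np.sum((dwelling_rate - pred) ** 2)
ss_tot = np.sum((dwelling_rate - dwelling_rate.mean()) ** 2)
print("coefficients:", coeffs, "R^2:", 1 - ss_res / ss_tot)
```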
|
12:30 PM | Lunch |
01:30 PM |
Keynote 1: The Future of Learning by Professor Sugata Mitra, Newcastle University, UK. Room: Pentland E. Session chair: Vincent Aleven, Carnegie Mellon University, USA. Abstract: In this talk, Sugata Mitra will take us through the origins of schooling as we know it, to the dematerialisation of institutions as we know them. Thirteen years of experiments in children's education takes us through a series of startling results: children can self-organise their own learning, achieve educational objectives on their own, and read by themselves. Finally, the most startling of them all: groups of children with access to the Internet can learn anything by themselves. From the slums of India, to the villages of India and Cambodia, to poor schools in Chile, Argentina, Uruguay, the USA and Italy, to the schools of Gateshead and the rich international schools of Washington and Hong Kong, Sugata’s experimental results show a strange new future for learning. Using the TED Prize, he has now built seven ‘Schools in the Cloud’, of which some glimpses will be provided in the talk.
[Recording]
|
02:45 PM | Afternoon tea |
03:15 PM |
Session #3: Learner modelling. Room: Pentland E. Session chair: Armando Fox, University of California at Berkeley, USA
3A: Using Multiple Accounts for Harvesting Solutions in MOOCs by Jose A. Ruiperez-Valiente, Giora Alexandron, Zhongzhou Chen, David E. Pritchard
Keywords: Academic dishonesty; educational data mining; learning analytics; MOOCs
Abstract:
The study presented in this paper deals with copying answers in MOOCs. Our findings show that a significant fraction of the certificate earners in the course that we studied have used what we call harvesting accounts to find correct answers that they later submitted in their main account, the account for which they earned a certificate. In total, around 2.5% of the users who earned a certificate in the course obtained the majority of their points by using this method, and around 10% of them used it to some extent. This paper has two main goals. The first is to define the phenomenon and demonstrate its severity. The second is to characterize key factors within the course that affect it and to suggest possible remedies that are likely to decrease the amount of cheating. The immediate implications of this study apply to MOOCs. However, we believe that the results generalize beyond MOOCs, since this strategy can be used in any learning environment that does not identify all registrants.
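One simple heuristic in the spirit of this study: flag account pairs where one account's first, immediately correct submission on a problem lands shortly after another account's correct answer. A sketch over a simplified log format (the authors' actual detection method may differ):

```python
from collections import defaultdict

def suspected_harvesting(submissions, max_gap=60.0):
    """Count, per (harvester, master) pair, suspicious answer hand-offs.

    submissions: list of (user, problem, time_sec, correct) tuples,
    a simplified stand-in for real MOOC submission logs. A hand-off is
    suspicious when the master's *first* attempt on a problem is correct
    and lands shortly after another account's correct answer.
    """
    by_problem = defaultdict(list)
    for user, problem, t, ok in submissions:
        by_problem[problem].append((t, user, ok))
    pairs = defaultdict(int)
    for events in by_problem.values():
        events.sort()
        first_attempt = {}
        for t, user, ok in events:
            if user not in first_attempt:
                first_attempt[user] = (t, ok)
        for t, user, ok in events:
            if not ok:
                continue
            for other, (t2, ok2) in first_attempt.items():
                if other != user and ok2 and 0 < t2 - t <= max_gap:
                    pairs[(user, other)] += 1
    return dict(pairs)

subs = [("acctB", "p1", 100, True), ("acctA", "p1", 130, True),
        ("acctB", "p2", 500, True), ("acctA", "p2", 520, True)]
print(suspected_harvesting(subs))  # {('acctB', 'acctA'): 2}
```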
3B: How Mastery Learning Works at Scale by Steve Ritter, Michael Yudelson, Stephen E Fancsali, Susan R Berman
Keywords: Adaptive educational systems; intelligent tutors; mastery learning; big data; longitudinal data; educational outcomes
Abstract:
Nearly every adaptive learning system aims to present students with materials personalized to their level of understanding (Enyedy, 2014). Typically, such adaptation follows some form of mastery learning (Bloom, 1968), in which students are asked to master one topic before proceeding to the next. Mastery learning programs have a long history of success (Guskey and Gates, 1986; Kulik, Kulik & Bangert-Drowns, 1990) and have been shown to be superior to alternative instructional approaches. Although there is evidence for the effectiveness of mastery learning when it is well supported by teachers, its effectiveness depends crucially on the ability and willingness of teachers to implement it properly. In particular, school environments impose time constraints and set goals for curriculum coverage that may encourage teachers to deviate from mastery-based instruction. In this paper we examine mastery learning as implemented in Carnegie Learning’s Cognitive Tutor. As in all real-world systems, teachers and students have the ability to violate mastery learning guidance. We investigate patterns associated with violating and following mastery learning over the course of the full school year at the class and student level. We find that violations of mastery learning are associated with poorer student performance, especially among struggling students, and that this poorer performance is likely attributable to the violations themselves.
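Mastery decisions in tutors such as Cognitive Tutor are commonly described in terms of Bayesian Knowledge Tracing. A minimal sketch of the mastery check follows, with illustrative parameter values rather than Carnegie Learning's actual ones:

```python
def bkt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.15):
    """One Bayesian Knowledge Tracing step: update P(known) after a response."""
    if correct:
        evidence = p_know * (1 - slip)
        posterior = evidence / (evidence + (1 - p_know) * guess)
    else:
        evidence = p_know * slip
        posterior = evidence / (evidence + (1 - p_know) * (1 - guess))
    return posterior + (1 - posterior) * learn  # learning transition

p, MASTERY = 0.3, 0.95
for outcome in [True, True, False, True, True, True]:
    p = bkt_update(p, outcome)
    print(f"P(known) = {p:.3f}", "-> mastered" if p >= MASTERY else "")
# Advancing a student to the next topic before P(known) reaches the
# threshold is one concrete form of a "mastery learning violation".
```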
3C: $1 Conversational Turn Detector: Measuring How Video Conversations Affect Student Learning in Online Classes by Adam Stankiewicz, Chinmay Kulkarni
Keywords: video discussions; turn taking; peer learning
Abstract:
Massive online classes can benefit from peer interactions such as discussion, critique, or tutoring. However, to scaffold productive peer interactions, systems must be able to detect student behavior in interactions at scale, which is challenging when interactions occur over rich media like video. This paper introduces an imprecise yet simple browser-based conversational turn detector for video conversations. Turns are detected without accessing video or audio data. We show how this turn detector can find dominance in video-based conversations. In a case study with 1,027 students using Talkabout, a video-based discussion system for online classes, we show how detected conversational turn behavior correlates with participants’ subjective experience in discussions and their final course grade.
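Once turns are detected, dominance can be summarized directly from turn durations. A minimal sketch, assuming a simple (speaker, start, end) tuple output from the detector; this schema is hypothetical, not the paper's:

```python
from collections import Counter

def dominance(turns):
    """Share of total talk time per speaker from detected turns.

    turns: list of (speaker, start_sec, end_sec) tuples, the kind of
    coarse output a browser-side turn detector could emit. No audio
    or video content is needed for this summary.
    """
    talk = Counter()
    for speaker, start, end in turns:
        talk[speaker] += end - start
    total = sum(talk.values())
    return {s: t / total for s, t in talk.items()}

turns = [("amy", 0, 30), ("ben", 30, 35), ("amy", 35, 80), ("ben", 80, 90)]
print(dominance(turns))  # amy dominates with 75/90 of the talk time
```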
|
05:00 PM | Break |
05:30 PM | Posters and Reception sponsored by Oracle Academy. Session chairs: Vincent Aleven, Carnegie Mellon University, USA, and Ido Roll, University of British Columbia, Canada. Work-in-progress papers: [link]. Demonstrations: [link] |
Tuesday, April 26, 2016
Time | Session |
---|---|
07:30 AM | Registration & Tea/Coffee |
08:45 AM | Opening remarks. Room: Pentland E |
09:00 AM |
Keynote 2: Effective Pedagogy at Scale: Social Learning and Citizen Inquiry by Professor Mike Sharples, The Open University, UK. Room: Pentland E. Session chair: Vincent Aleven, Carnegie Mellon University, USA. Abstract: For the past four years The Open University has published annual Innovating Pedagogy reports. Our aim has been to shift the focus of horizon scanning for education away from novel technologies towards new forms of teaching, learning and assessment for an interactive world, to guide teachers and policy makers in productive innovation. In the most recent report, from over thirty pedagogies, ranging from bricolage to stealth assessment, we have identified six overarching themes: scale, connectivity, reflection, extension, embodiment, and personalisation [8]. Delivering education at massive scale has been the headline innovation of the past four years. This success raises the question: “Which pedagogies can work successfully at scale?” Sports coaching is an example of teaching that does not scale. It involves monitoring and diagnosis of an individual’s performance, based on holistic observation of body movements, followed by personal tutoring and posture adjustments. Any of these elements might be deployed at scale (for example, diagnostic learning analytics [10] or AI-based personal tutoring [4]), but in combination they require the physical presence of a human coach. The major xMOOC platforms were initially based on an instructivist pedagogy of a repeated cycle of inform and test. This has the benefit of being relatively impervious to scale: a lecture can be presented to 200 students in a theatre or to 20,000 viewers online with similar impact. Delivered on personal computers, instructivist pedagogy offers elements of personalisation, by providing adaptive feedback on quiz answers and alternative routes through the content.
[Recording]
|
10:00 AM | Morning tea |
10:30 AM |
Session #4: Automated assessment. Room: Pentland E. Session chair: Jeremy Roschelle, SRI International, USA
4A: A Data-Driven Approach for Inferring Student Proficiency from Game Activity Logs by Mohammad H. Falakmasir, Jose P. Gonzalez-Brenes, Geoffrey J. Gordon, Kristen E. DiCerbo
Keywords: Educational Games; Student Modeling; Stealth Assessment; Hidden Markov Models
Abstract:
Student assessments are important because they allow collecting evidence about learning. However, time spent on evaluating students may be otherwise used for instructional activities. Computer-based learning platforms provide the opportunity to unobtrusively gather students’ digital learning footprints. This data can be used to track learning progress and make inferences about student competencies. We present a novel data analysis pipeline, Student Proficiency Inferrer from Game data (SPRING), that allows modeling game-playing behavior in educational games. Unlike prior work, SPRING is a fully data-driven method that does not require costly domain knowledge engineering. Moreover, it produces a simple interpretable model that not only fits the data but also predicts learning outcomes. We validate our framework using data collected from students playing 11 educational mini-games. Our results suggest that SPRING can predict math assessments accurately on withheld test data (correlation = 0.55, Spearman rho = 0.51).
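The core of an HMM-based pipeline such as SPRING is scoring sequences of discretized game events under a latent-proficiency model. A compact forward-algorithm sketch with toy parameters (not the paper's learned model):

```python
import numpy as np

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete event sequence under an HMM (forward algorithm)."""
    alpha = start * emit[:, obs[0]]           # joint prob of state and first event
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]  # predict, then weight by emission
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Two latent proficiency states; three discretized game events
# (0 = failed attempt, 1 = partial success, 2 = level cleared).
start = np.array([0.8, 0.2])                  # most players start unskilled
trans = np.array([[0.7, 0.3],                 # unskilled can become skilled
                  [0.05, 0.95]])              # skill is rarely lost
emit = np.array([[0.6, 0.3, 0.1],             # unskilled: mostly failures
                 [0.1, 0.3, 0.6]])            # skilled: mostly clears
print(forward_loglik([0, 0, 1, 2, 2, 2], start, trans, emit))
```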
4B: An Exploration of Automated Grading of Complex Assignments by Chase Geigle, ChengXiang Zhai, Duncan C. Ferguson
Keywords: Automatic grading; ordinal regression; supervised learning; learning to rank; active learning; text mining
Abstract:
Automated grading is essential for scaling up learning. In this paper, we conduct the first systematic study of how to automate grading of a complex assignment using a medical case assessment as a test case. We propose to solve this problem using a supervised learning approach and introduce three general complementary types of feature representations of such complex assignments for use in supervised learning. We first show with empirical experiments that it is feasible to automate grading of such assignments provided that the instructor can grade a number of examples. We further study how to integrate an automated grader with human grading and propose to frame the problem as learning to rank assignments to exploit pairwise preference judgments and use NDPM as a measure for evaluation of the accuracy of ranking. We then propose a sequential pairwise online active learning strategy to minimize the effort of human grading and optimize the collaboration of human graders and an automated grader. Experiment results show that this strategy is indeed effective and can substantially reduce human effort as compared with randomly sampling assignments for manual grading.
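NDPM (the normalized distance-based performance measure) compares a predicted ranking against reference preferences pairwise. A sketch of one common formulation; the paper's exact variant may differ:

```python
from itertools import combinations

def ndpm(reference, predicted):
    """Normalized distance-based performance measure over graded items.

    reference, predicted: dicts mapping assignment id -> score.
    Pairs the reference ranks: contradicted pairs cost 2, pairs the
    prediction ties cost 1; normalize by 2 * (ranked pairs).
    """
    contradicted = tied = ranked = 0
    for a, b in combinations(reference, 2):
        ref = reference[a] - reference[b]
        if ref == 0:
            continue                      # reference expresses no preference
        ranked += 1
        pred = predicted[a] - predicted[b]
        if pred == 0:
            tied += 1                     # prediction is indifferent
        elif (ref > 0) != (pred > 0):
            contradicted += 1             # prediction reverses the pair
    return (2 * contradicted + tied) / (2 * ranked)

human  = {"s1": 9, "s2": 7, "s3": 7, "s4": 4}
grader = {"s1": 8, "s2": 5, "s3": 6, "s4": 6}
print(ndpm(human, grader))  # 0.0 = perfect agreement, 1.0 = full reversal
```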
4C: Fuzz Testing Projects in Massive Courses by Sumukh Sridhara, Brian Hou, Jeffrey Lu, John DeNero
Keywords: automated assessment; behavioral analytics; online learning
Abstract:
Scaffolded projects with automated feedback are core instructional components of many massive courses. In subjects that include programming, feedback is typically provided by test cases constructed manually by the instructor. This paper explores the effectiveness of fuzz testing, a randomized technique for verifying the behavior of programs. In particular, we apply fuzz testing to identify when a student's solution differs in behavior from a reference implementation by randomly exploring the space of legal inputs to a program. Fuzz testing serves as a useful complement to manually constructed tests. Instructors can concentrate on designing targeted tests that focus attention on specific issues while using fuzz testing for comprehensive error checking. In the first project of a 1,400-student introductory computer science course, fuzz testing caught errors that were missed by a suite of targeted test cases for more than 48% of students. As a result, the students dedicated substantially more effort to mastering the nuances of the assignment.
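The core idea is easy to sketch: run the student's function and the reference implementation on randomly generated legal inputs and report any behavioral divergence. Function names and the input domain below are hypothetical:

```python
import random

def reference_sort(xs):          # instructor's reference implementation
    return sorted(xs)

def student_sort(xs):            # hypothetical buggy student submission
    return sorted(set(xs))       # silently drops duplicates

def fuzz(student, reference, trials=1000, seed=42):
    """Search random legal inputs for a behavioral difference."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-5, 5) for _ in range(rng.randint(0, 8))]
        if student(xs) != reference(xs):
            return xs            # concrete evidence of divergence
    return None                  # no difference found within the budget

print(fuzz(student_sort, reference_sort))  # e.g. an input with repeats
```

Targeted, instructor-written tests check known failure modes; the randomized search above covers the long tail of inputs nobody thought to write a test for.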
4D: Peer Grading in a Course on Algorithms and Data Structures: Machine Learning Algorithms do not Improve over Simple Baselines by Mehdi S. M. Sajjadi, Morteza Alamgir, Ulrike von Luxburg
Keywords: machine learning; peer grading; peer assessment; peer review; L@S; ordinal analysis; rank aggregation
Abstract:
Peer grading is the process of students reviewing each other's work, such as homework submissions, and has lately become a popular mechanism used in massive open online courses (MOOCs). Intrigued by this idea, we used it in a course on algorithms and data structures at the University of Hamburg. Throughout the whole semester, students repeatedly handed in submissions to exercises, which were then evaluated both by teaching assistants and by a peer grading mechanism, yielding a large dataset of teacher and peer grades. We applied different statistical and machine learning methods, supervised and unsupervised, based on numeric scores and on ordinal rankings, to aggregate the peer grades into accurate final grades for the submissions. Surprisingly, none of them improves over the baseline of using the mean peer grade as the final grade. We discuss a number of possible explanations for these results and present a thorough analysis of the generated dataset.
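The baseline that the machine learning methods failed to beat is straightforward to state in code. A minimal sketch over a hypothetical grade-tuple format:

```python
from collections import defaultdict
from statistics import mean

def mean_peer_grade(peer_grades):
    """The hard-to-beat baseline: average the peer grades per submission.

    peer_grades: list of (submission_id, grader_id, score) tuples.
    """
    scores = defaultdict(list)
    for submission, _grader, score in peer_grades:
        scores[submission].append(score)
    return {s: mean(v) for s, v in scores.items()}

grades = [("hw1_anna", "p1", 8), ("hw1_anna", "p2", 9),
          ("hw1_ben", "p1", 5), ("hw1_ben", "p3", 7)]
print(mean_peer_grade(grades))  # {'hw1_anna': 8.5, 'hw1_ben': 6}
```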
|
12:30 PM | Lunch |
01:30 PM |
Session #5: Flipped session - Crowdsourcing the feedback. Room: Pentland E. edX Edge course: [link]. Session chairs: Piotr Mitros, edX, USA; Nina Huntemann, edX, USA; and Ido Roll, University of British Columbia, Canada
5A: AXIS: Generating Explanations at Scale with Learnersourcing and Machine Learning by Joseph Jay Williams, Juho Kim, Anna Rafferty, Samuel Maldonado, Krzysztof Z. Gajos, Walter S. Lasecki, Neil Heffernan
Keywords: Explanation; learning at scale; crowdsourcing; learnersourcing; machine learning; adaptive learning
Abstract:
While explanations may help people learn by providing information about why an answer is correct, many problems on online platforms lack high-quality explanations. This paper presents AXIS (Adaptive eXplanation Improvement System), a system for obtaining explanations. AXIS asks learners to generate, revise, and evaluate explanations as they solve a problem, and then uses machine learning to dynamically determine which explanation to present to a future learner, based on previous learners' collective input. Results from a case study deployment and a randomized experiment demonstrate that AXIS elicits and identifies explanations that learners find helpful. Providing explanations from AXIS also objectively enhanced learning, when compared to the default practice where learners solved problems and received answers without explanations. The rated quality and learning benefit of AXIS explanations did not differ from explanations generated by an experienced instructor.
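The abstract describes dynamically choosing which explanation to present based on prior learners' input; a multi-armed bandit with Thompson sampling is one standard fit. This sketch assumes binary helpful/unhelpful ratings and is not necessarily AXIS's exact algorithm:

```python
import random

class ExplanationBandit:
    """Thompson sampling over candidate explanations rated helpful/unhelpful."""

    def __init__(self, explanation_ids):
        # One Beta(successes+1, failures+1) posterior per explanation.
        self.stats = {e: [1, 1] for e in explanation_ids}

    def choose(self):
        samples = {e: random.betavariate(a, b)
                   for e, (a, b) in self.stats.items()}
        return max(samples, key=samples.get)

    def record(self, explanation, helpful):
        self.stats[explanation][0 if helpful else 1] += 1

bandit = ExplanationBandit(["expl_a", "expl_b", "expl_c"])
for _ in range(100):                       # simulated learner feedback
    e = bandit.choose()
    bandit.record(e, helpful=random.random() < {"expl_a": 0.3,
                                                "expl_b": 0.7,
                                                "expl_c": 0.5}[e])
print(bandit.choose())                     # usually converges on expl_b
```

Thompson sampling balances showing the best-rated explanation against gathering more evidence on under-sampled ones, which matches the learnersourcing loop the paper describes.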
5B: Improving the Peer Assessment Experience on MOOC Platforms by Thomas Staubitz, Dominic Petrick, Matthias Bauer, Jan Renz, Christoph Meinel
Keywords: MOOC; Online Learning; Peer Assessment; Assessment.
Abstract:
Massive Open Online Courses (MOOCs) have revolutionized higher education by offering university-like courses to a large number of learners via the Internet. This paper takes a closer look at peer assessment as a tool for delivering individualized feedback and engaging assignments to MOOC participants. Benefits, such as scalability for MOOCs and higher-order learning, and challenges, such as grading accuracy and rogue reviewers, are described. Common practices and the state of the art for counteracting these challenges are highlighted. Based on this research, we describe a peer assessment workflow and its implementation on the openHPI and openSAP MOOC platforms. This workflow combines the best practices of existing peer assessment tools and introduces some small but crucial improvements.
5C: Graders as Meta-Reviewers: Simultaneously Scaling and Improving Expert Evaluation for Large Online Classrooms by David A. Joyner, Wade Ashby, Liam Irish, Yeeling Lam, Jacob Langson, Isabel Lupiani, Mike Lustig, Paige Pettoruto, Dana Sheahen, Angela Smiley, Amy Bruckman, Ashok Goel
Keywords: Peer review; online education.
Abstract:
Large classes, both online and residential, typically demand many graders for evaluating students’ written work. Some classes attempt to use autograding or peer grading, but both present challenges for assigning grades at for-credit institutions, such as the difficulty of using autograding to evaluate free-response answers and the lack of expert oversight in peer grading. In a large online class at Georgia Tech in Summer 2015, we experimented with a new approach to grading: framing graders as meta-reviewers, charged with evaluating the original work in the context of peer reviews. To evaluate this approach, we conducted a pair of controlled experiments and a handful of qualitative analyses. We found that having access to peer reviews improves the perceived quality of feedback provided by graders without decreasing the graders' efficiency and with only a small influence on the grades assigned.
|
03:00 PM | Afternoon tea |
03:30 PM |
Session #6: Flipped session - Outside the MOOC. Room: Pentland E. edX Edge course: [link]. Session chairs: Piotr Mitros, edX, USA; Nina Huntemann, edX, USA; and Ido Roll, University of British Columbia, Canada
6A: Learning Transfer: Does It Take Place in MOOCs? An Investigation into the Uptake of Functional Programming in Practice by Guanliang Chen, Dan Davis, Claudia Hauff, Geert-Jan Houben
Keywords: transfer learning; MOOCs; GitHub; functional programming
Abstract:
The rising number of Massive Open Online Courses (MOOCs) enables people to advance their knowledge and competencies in a wide range of fields. Learning, though, is only the first step; the transfer of the taught concepts into practice is equally important and often neglected in the investigation of MOOCs. In this paper, we consider the specific case of FP101x (a functional programming MOOC on edX) and the extent to which learners alter their programming behaviour after having taken the course. We are able to link about one third of all FP101x learners to GitHub, the most popular social coding platform to date, and contribute a first exploratory analysis of learner behaviour beyond the MOOC platform. A detailed longitudinal analysis of GitHub log traces reveals that (i) more than 8% of engaged learners transfer, and that (ii) most existing transfer learning findings from the classroom setting are indeed applicable in the MOOC setting as well.
6B: The Role of Social Media in MOOCs: How to Use Social Media to Enhance Student Retention by Saijing Zheng, Kyungsik Han, Mary Beth Rosson, John M. Carroll
Keywords: Massive Open Online Course; MOOCs; Social Media; Facebook; Coursera; Mixed Method
Abstract:
Massive Open Online Courses (MOOCs) have experienced rapid development. However, high dropout rates have become a salient issue. Many studies have attempted to understand this phenomenon; others have explored mechanisms for enhancing retention. For instance, social media has been used to improve student engagement and retention. However, there is a lack of (1) empirical studies of social media use and engagement compared to embedded MOOC forums; and (2) rationales for social media use from both instructors’ and students’ perspectives. We addressed these open issues through the collection and analysis of real usage data from three MOOC forums and their associated social media (i.e., Facebook) groups, as well as interviews with instructors and students. We found that students show higher engagement and retention in social media than in MOOC forums, and we identified the instructor and student perspectives that explain these results. We discuss design implications for future MOOC platforms.
|
04:30 PM |
Keynote 3: Practical Learning Research at Scale by Professor Ken Koedinger, Carnegie Mellon University, USA. Room: Pentland E+W. Session chair: Ido Roll, University of British Columbia, Canada. Abstract: Massive-scale education has emerged through online tools such as Wikipedia, Khan Academy, and MOOCs. The number of students being reached is high, but what about the quality of the educational experience? As we scale learning, we need to scale research to address this question. Such learning research should not just determine whether high quality has been achieved; it should provide a process for how to reliably produce high-quality learning. Scaling practical learning research is as much an opportunity as a problem. The opportunity comes from the fact that online courses are not only good for widespread delivery, but are natural vehicles for data collection and experimental instrumentation. I will provide examples of research done in the context of widely used educational technologies that both contribute interesting scientific findings and have practical implications for increasing the quality of learning at scale.
[Recording]
|
05:45 PM | Closing remarks |
06:00 PM | Joint networking reception with Learning Analytics & Knowledge 2016 conference (tickets required) |