About
The 2012 Summer Institute, cosponsored by the University of Washington Computer Science & Engineering and Microsoft Research, will be held at the Suncadia Resort in Cle Elum, Washington, July 18-20, 2012. Cle Elum is located in the Cascades, ninety minutes southeast of Seattle.
Abstract
Strong recent enthusiasm for online educational platforms has highlighted opportunities to leverage "the crowd" in novel ways. We seek to explore challenges and opportunities at the intersection of online education and crowdsourcing, at a time when ideas and methods in both areas are accelerating. In the arena of crowdsourcing and human computation, Wikipedia, Stack Overflow, Mechanical Turk, oDesk, the ESP Game, Foldit, and Tweak the Tweet illustrate the huge variety of socially mediated collaboration. More broadly, research in crowdsourcing is evolving into a mature discipline, with multiple vibrant workshops and now an international conference (HCOMP). In online education, Khan Academy, the Stanford online courses, and intelligent tutoring systems are harbingers of adaptive educational interactions that stimulate interpersonal discussions and tutoring across the cloud. To date, efforts on intelligent tutoring systems, tested with small groups of students and evaluated only in limited settings, have had mixed results; access to large populations promises to catalyze a renaissance in developing and refining intelligent automated tutoring methods. Overall, the opportunities are broad, extending to leveraging the crowd's massive skills, variance in abilities and learning styles, and efforts and insights to author and provide unique, personalized experiences for future students.

We see several reasons why now is the time to forge connections across these areas:
- Scaling education requires a crowd. Today's autograding methods handle only code and multiple-choice questions, so how can one teach creative processes (e.g., English composition or UI design) where evaluation is subjective and feedback is essential? Rather than paying graders, one could ask students to evaluate each other's work, but how can this process of providing feedback be made educational in itself?
- Online education draws a crowd. With communities of 100K-10M regular visitors, education sites make a great platform for crowdsourcing applications: they provide access to a crowd with known, advanced skills and an intrinsic motivation to learn.
- Crowdsourcing methods can greatly improve education: existing machine learning algorithms can be adapted to simultaneously estimate answer quality, human accuracy, and question difficulty (see the sketch after this list). Statistical A/B testing can uncover the best ways to present materials and might be harnessed for personalized education. Decision-theoretic methods promise to be valuable for optimizing the sequencing of educational concepts and workflows.
- Education exposes new crowdsourcing challenges: automated student evaluation, personalization of instructional materials, and incentive mechanisms that encourage students to help each other and improve course design are just a few of the open problems facing crowdsourcing researchers.
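To make the estimation idea above concrete, here is a minimal sketch (ours, not drawn from any of the systems named above) of jointly inferring answer correctness and grader accuracy from redundant peer grades, in the spirit of EM-style label aggregation. All function names and the toy data are illustrative assumptions.

```python
# Minimal sketch: alternate between estimating item correctness and grader
# accuracy, in the spirit of Dawid-Skene-style EM. Illustrative only.

from collections import defaultdict

def estimate(grades, iterations=20):
    """grades: list of (grader, item, label) with label in {0, 1}.
    Returns (item_prob, grader_accuracy)."""
    by_item = defaultdict(list)
    by_grader = defaultdict(list)
    for g, i, y in grades:
        by_item[i].append((g, y))
        by_grader[g].append((i, y))

    # Initialize item estimates with the mean of the observed labels.
    item_prob = {i: sum(y for _, y in v) / len(v) for i, v in by_item.items()}
    grader_acc = {g: 0.7 for g in by_grader}          # weak initial guess

    for _ in range(iterations):
        # M-step: a grader's accuracy is how often their labels agree
        # with the current (soft) item estimates.
        for g, votes in by_grader.items():
            agree = sum(item_prob[i] if y == 1 else 1 - item_prob[i]
                        for i, y in votes)
            grader_acc[g] = min(max(agree / len(votes), 0.01), 0.99)

        # E-step: re-estimate each item, weighting votes by grader accuracy
        # (naive-Bayes combination with a uniform prior over {0, 1}).
        for i, votes in by_item.items():
            p1 = p0 = 1.0
            for g, y in votes:
                a = grader_acc[g]
                p1 *= a if y == 1 else 1 - a
                p0 *= 1 - a if y == 1 else a
            item_prob[i] = p1 / (p1 + p0)

    return item_prob, grader_acc

# Toy usage: three peer graders, two submissions.
grades = [("ann", "hw1", 1), ("bob", "hw1", 1), ("cat", "hw1", 0),
          ("ann", "hw2", 0), ("bob", "hw2", 0), ("cat", "hw2", 1)]
print(estimate(grades))
```

In practice such models also estimate per-question difficulty and handle non-binary grades; the point here is only the alternating-estimation structure.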
The symposium will discuss many areas for innovation, including (but not limited to) the following:
- Transcending the one-to-many broadcast model of education in favor of one-to-one adaptive methods and personalized interactive learning. Creation (possibly crowdsourced) of alternative, effective ways to convey content through virtual manipulatives, visualizations, videos, simulations, group exercises, or just plain text.
- Scalable assessment models that transcend multiple choice, testing creativity and recall, not just recognition (e.g., co-grading, students proposing test questions). Explicit and implicit methods for identifying students with comprehension difficulties might be used to triage scarce human teaching resources.
- Online "hangout" environments and methods for linking students with common interests yet disparate skill sets - dynamic tutor pairings may benefit the explainer as much as the tutee.
- Long-term engagement: fewer than 15% of students completed the Norvig/Thrun online AI class, so we'll need multiple ways to engage and hook different types of learners. Online courses can be considered laboratories for studying incentive programs (including games) that enhance learning and teaching.
- Multi-dimensional student modeling underlies both personalization and assessment; statistical-relational machine learning algorithms hold promise for automatically determining students' skills and learning styles.
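As a deliberately simplified illustration of the modeling idea in the last bullet, the sketch below (ours, not from any particular system) fits a Rasch-style model, p(correct) = sigmoid(skill - difficulty), by stochastic gradient ascent, jointly estimating per-student skill and per-question difficulty from binary responses. Statistical-relational learners are far richer than this; all names and data here are invented.

```python
# Minimal Rasch-style sketch: jointly estimate student skill and question
# difficulty from binary responses. Illustrative only.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fit(responses, steps=500, lr=0.05):
    """responses: list of (student, question, correct) with correct in {0, 1}."""
    skill = {s: 0.0 for s, _, _ in responses}
    diff = {q: 0.0 for _, q, _ in responses}

    for _ in range(steps):
        for s, q, y in responses:
            p = sigmoid(skill[s] - diff[q])
            grad = y - p                 # d log-likelihood / d (skill - difficulty)
            skill[s] += lr * grad        # gradient ascent on skill
            diff[q] -= lr * grad         # and descent on difficulty
    return skill, diff

# Toy usage: two students, three questions.
data = [("ann", "q1", 1), ("ann", "q2", 1), ("ann", "q3", 0),
        ("bob", "q1", 1), ("bob", "q2", 0), ("bob", "q3", 0)]
skill, diff = fit(data)
print(skill, diff)
```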
Organizers: Dan Weld (UW CSE), Mausam (UW CSE), Eric Horvitz (MSR), and Meredith Ringel Morris (MSR)
Descriptions of past summer institutes may be viewed at:
www.cs.washington.edu/events/mssi/.