Crowd-sourcing is a recent framework in which human intelligence tasks are outsourced to a crowd of unknown people ("workers") as an open call (e.g., on Amazon's Mechanical Turk). Crowd-sourcing has become immensely popular with hordes of employers ("requesters"), who use it to accomplish a wide variety of jobs, such as dictation transcription and content screening. In order to achieve quality results, requesters often subdivide a large task into a chain of bite-sized subtasks that are combined into a complex, iterative workflow in which workers check and improve each other's results. This project raises an exciting question for AI: could an autonomous agent control these workflows without human intervention, yielding better results than today's state of the art, a fixed control program?