Crowdsourcing and Collective Intelligence (CCI) Workshop

Ninth International Conference on Complex Systems (ICCS)
Boston, July 23, 2018



The workshop is a forum for committed practitioners in Complex Systems-related areas whose scholarship aims to understand and design new expressions of collective intelligence at the intersection of computer science, artificial intelligence, human-computer interaction, economics, and the social sciences. The event intends to go beyond reporting published and ongoing work by facilitating productive discussions that uncover and elaborate novel or hitherto underrated aspects. The goal is to spend the time together productively and to formulate a shared set of opportunities and challenges for the field.

SCHEDULE


    6:00-6:05    Welcome by Ágnes Horvát, Northwestern University
    6:05-7:15    Crowdsourcing talks (12 minutes/presenter)
    • Daniel Romero, University of Michigan
      Examining the Impact of Shocks on Collaborative Crowdsourcing
    • Jim Bagrow, University of Vermont
      Hunch & Crunch: Iterative Crowdsourced Hypothesis Generation
    • Seth Cooper, Northeastern University
      Player-versus-Level Matchmaking in Human Computation Games
    • Laura Nelson, Northeastern University
      How Women Connect to the Crowd
    • David Pastor, LifeD Lab & Universidad Politécnica de Madrid, Spain
      Collective Intelligence for Resilience to Floods
    • Alex Quinn, Purdue University
      Fair Pay for Crowd-powered List-making
    7:15-7:35    Crowdsourcing Q&A (w/ all six presenters)
    7:35-7:45    Break
    7:45-8:45    Collective intelligence talks (12 minutes/presenter)
    • Joshua Becker, Northwestern University
      Can Individuals Benefit from the Wisdom of Crowds?
    • Jesse Shore, Boston University
      How Intermittent Breaks in Interaction Improve Collective Intelligence
    • Edmond Awad, MIT Media Lab
      The Moral Machine Experiment
    • Tao Jia, Southwest University, China
      Apply "User Profile" to Investigate Activities in Science
    • Aaron Shaw, Northwestern University
      Openness and Closure in Online Commons Institutions
    8:45-9:00    Collective intelligence Q&A (w/ all five presenters)


ABSTRACTS

(in alphabetical order of author names)

Edmond Awad, MIT Media Lab
The Moral Machine Experiment

I describe the Moral Machine, an internet-based serious game exploring the many-dimensional ethical dilemmas faced by autonomous vehicles. The game enabled us to gather 40 million decisions from 3 million people in 200 countries/territories. I report the various preferences estimated from this data, and document interpersonal differences in the strength of these preferences. I also report cross-cultural ethical variation and uncover major clusters of countries exhibiting substantial differences along key moral preferences. These differences correlate with modern institutions, but also with deep cultural traits. I discuss how these three layers of preferences can help progress toward global, harmonious, and socially acceptable principles for machine ethics.


Jim Bagrow, University of Vermont
Hunch & Crunch: Iterative Crowdsourced Hypothesis Generation

Crowdsourcing can be most powerful when crowd participants are asked to contribute novel ideas and out-of-the-box thinking. Here I will discuss projects that combine human intuition and creativity with computational methods to better understand problems of scientific interest. We ask the crowd to collectively construct causal attribution networks, to guide the design and use of machine learning methods, and more. New algorithms improve the scalability of the crowd to large problems, and real-world experiments test our approaches.
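
As a toy illustration of the aggregation step (an assumption for illustration, not the project's actual pipeline), crowd-contributed cause-and-effect statements can be merged into a weighted causal attribution network; the example contributions and the reliability filter below are invented:

    # Minimal sketch: aggregate crowd-contributed (cause, effect) pairs
    # into a weighted directed network. Data and threshold are illustrative.
    from collections import Counter

    contributions = [                      # (cause, effect) pairs from workers
        ("stress", "poor sleep"),
        ("poor sleep", "low productivity"),
        ("stress", "poor sleep"),
        ("caffeine", "poor sleep"),
    ]

    edges = Counter(contributions)         # weight = number of independent reports

    # Keep edges attested by more than one contributor, as a simple
    # reliability filter.
    network = {edge: w for edge, w in edges.items() if w > 1}
    print(network)                         # {('stress', 'poor sleep'): 2}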


Joshua Becker, Northwestern University
Can Individuals Benefit from the Wisdom of Crowds?

The wisdom of crowds is useful to third-party aggregators, but offers no direct benefit to individual group members. While it is commonly assumed that individuals must remain independent to preserve group accuracy, preventing social learning, we experimentally test an alternative theory predicting that social influence can improve belief accuracy in decentralized networks. Subjects made financial forecasts before and after learning the beliefs of peers in a social network. Information exchange generated a 25% decrease in individual error while preserving group accuracy, showing that properly structured social networks can allow individuals to benefit from the wisdom of crowds.
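
For intuition, here is a minimal simulation sketch of the mechanism, assuming DeGroot-style averaging on a ring network (the experiment's actual design and update rule differ); it shows individual error shrinking while the group estimate is preserved exactly:

    # Sketch under assumed conditions: agents repeatedly average their
    # belief with their two ring neighbors. Symmetric weights keep the
    # group mean fixed while pulling individuals toward it.
    import numpy as np

    rng = np.random.default_rng(0)
    truth, n, rounds = 100.0, 50, 3

    beliefs = truth + rng.normal(0, 20, n)                      # noisy, unbiased
    neighbors = [((i - 1) % n, (i + 1) % n) for i in range(n)]  # ring network

    def report(tag, b):
        print(f"{tag}: mean individual error = {np.abs(b - truth).mean():5.1f}, "
              f"group error = {abs(b.mean() - truth):5.2f}")

    report("before", beliefs)
    for _ in range(rounds):
        beliefs = np.array([(beliefs[i] + beliefs[j] + beliefs[k]) / 3
                            for i, (j, k) in enumerate(neighbors)])
    report("after ", beliefs)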


Seth Cooper, Northeastern University
Player-versus-Level Matchmaking in Human Computation Games

Human computation games (HCGs) are an approach to combining human problem solving with computational power through crowdsourcing. However, given a large number of players and levels (i.e., tasks), we would like to determine a good next level to serve each player as they progress through the game. I will discuss work applying player-versus-player matchmaking systems to player-versus-level matchmaking. This work aims to use gameplay data to improve the effectiveness and engagement of HCGs.
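
As a sketch of the general idea, assuming an Elo-style rating system (the talk's actual model may differ), each level can be treated as an opponent whose rating tracks its difficulty, and matchmaking serves the level closest to a target completion chance:

    # Illustrative Elo-style adaptation; constants, names, and the 50%
    # target are assumptions, not the system described in the talk.
    K = 32  # standard Elo learning-rate constant

    def expected_win(player_rating, level_rating):
        """Predicted probability that the player completes the level."""
        return 1.0 / (1.0 + 10 ** ((level_rating - player_rating) / 400))

    def update(player_rating, level_rating, player_won):
        """Shift both ratings toward the observed outcome (1 = completed)."""
        e = expected_win(player_rating, level_rating)
        delta = K * (player_won - e)
        return player_rating + delta, level_rating - delta

    def pick_level(player_rating, level_ratings, target=0.5):
        """Serve the level whose predicted completion chance is nearest the target."""
        return min(level_ratings, key=lambda name: abs(
            expected_win(player_rating, level_ratings[name]) - target))

    levels = {"tutorial": 1000, "medium": 1400, "hard": 1800}
    print(pick_level(1450, levels))   # -> "medium"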


Tao Jia, Southwest University, China
Apply "User Profile" to Investigate Activities in Science

User profiles are widely used in computer science to analyze social networks. A profile is usually a multi-dimensional vector generated from features of an individual's activities. Here we use the research subjects of scientific papers to generate the user profile of a scientist, revealing the research interests associated with a given set of papers. We find patterns governing how research interests change over an individual's career, which are well captured by a random walk mechanism. Correlating interest change with performance change shows that interest change is positively associated with the chance of increasing one's impact but is neutral with respect to productivity. Finally, we quantify the similarity between two collaborators' user profiles at their first collaboration. We find that collaboration is driven by homophily, yet a significant fraction of collaborations occur between two "strangers".
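
A minimal sketch of one plausible instantiation follows; the subject-frequency vectors and the choice of cosine similarity are assumptions for illustration, not necessarily the paper's exact measures:

    # Sketch: a scientist's profile as a vector of research-subject
    # frequencies, compared with a collaborator's profile at first
    # collaboration. All data are invented.
    from collections import Counter
    import math

    def profile(papers):
        """Profile vector: research subject -> frequency across a scientist's papers."""
        return Counter(subject for subjects in papers for subject in subjects)

    def cosine(p, q):
        dot = sum(p[k] * q[k] for k in p.keys() & q.keys())
        norm = math.sqrt(sum(v * v for v in p.values())) \
             * math.sqrt(sum(v * v for v in q.values()))
        return dot / norm if norm else 0.0

    a = profile([["networks", "diffusion"], ["networks", "science of science"]])
    b = profile([["networks", "crowdsourcing"]])
    print(f"similarity at first collaboration: {cosine(a, b):.2f}")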


Laura Nelson, Northeastern University
How Women Connect to the Crowd

Gender inequality has been a long-standing concern in the study of organizations and markets. Given the persistence of gender inequalities in the business world, it is surprising that women outperform men across multiple crowdfunding platforms and all categories of projects and businesses. I discuss one reason for this success: women are better at telling stories that connect to the crowd. I end by discussing how data from crowdfunding platforms provide new opportunities to better understand women's economic performance.


David Pastor, LifeD Lab & Universidad Politécnica de Madrid, Spain
Collective Intelligence for Resilience to Floods

Resilience to natural disasters depends greatly on how society responds and organizes. We can now measure the social dynamics of disaster response using big data. Quantifying disaster impact requires aligning objective assessments with subjective damage in order to promote real socio-economic resilience. Networks and blockchain allow for reliable assessments from the perspective of the affected population. Collective intelligence is key to building safety nets and effective, rapid financial systems for disasters.


Alex Quinn, Purdue University
Fair Pay for Crowd-powered List-making

Crowd-powered systems engage human workers to "automate" data collection, transformation, and aggregation tasks that AI alone cannot yet perform adequately. Most "microtask" sites (e.g., Mechanical Turk) pay workers a fixed reward per task. Even when requesters target an acceptable hourly rate (e.g., $10/hour), maximizing data quality and completeness while minimizing cost can lead to ethical dilemmas. Workers' expectations must be reconciled with the need for data quality and completeness.
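
The arithmetic behind the dilemma can be made concrete; the task-time estimates below are illustrative, with only the $10/hour target taken from the abstract:

    # Back-of-the-envelope sketch: a fixed per-task reward implies an
    # hourly rate that depends on how long the task actually takes.
    target_hourly = 10.00        # requester's target rate, $/hour
    est_minutes = 3.0            # requester's estimate of time per task

    reward = target_hourly * est_minutes / 60
    print(f"fixed per-task reward implied by the target: ${reward:.2f}")

    # The dilemma: a worker who needs twice the estimated time earns
    # half the target rate at the same fixed reward.
    for actual_minutes in (1.5, 3.0, 6.0):
        rate = reward * 60 / actual_minutes
        print(f"{actual_minutes:4.1f} min/task -> effective ${rate:5.2f}/hour")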


Daniel Romero, University of Michigan
Examining the Impact of Shocks on Collaborative Crowdsourcing

While collaborative crowdsourcing often takes place in settings where changes are substantial, frequent, and unpredictable, much of the prior research on it assumes a fixed environment. In this talk, I will discuss research on the impact of "shocks" (discrete events that disrupt a crowd) on the collaboration dynamics of Wikipedia editors. I will show how such events, from reductions in group size to sharp increases in public interest in an article, can impact important features of editors' behavior, such as centralization, conflict, and the ability to retain newcomers.


Aaron Shaw, Northwestern University
Openness and Closure in Online Commons Institutions

Recent empirical work in online communities engaged in the collective, distributed production of public information goods shows that institutionalized openness enters into tension with countervailing, emergent patterns of closure. The shift from openness to closure suggests a two-phase model of collective organization: At first, projects must solve mobilization problems in order to achieve critical mass and provision valuable public goods. In the (rare) instances when this occurs, open institutions enable effective operations and resource provision. Productivity and growth usher in the second phase, in which management and governance challenges become more salient and require adaptive closure around formalized norms, standards, and structures. These forms of closure can threaten or undermine the goals and functions of projects, resulting in an ongoing navigational challenge if projects are to sustain common pool resource production.


Jesse Shore, Boston University
How Intermittent Breaks in Interaction Improve Collective Intelligence

People influence each other when they interact to solve problems, introducing both benefits (higher average solution quality due to learning) and costs (lower maximum solution quality due to less exploration) relative to independent problem solving. Prior work has focused on how the presence of social influence affects performance; here we investigate the effects of time. We show that when social influence is intermittent, it provides the benefits of constant social influence without the costs.
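
A toy simulation in the spirit of the three conditions (the objective function, search rule, and copy-the-best behavior are all assumptions and do not reproduce the study's setup) contrasts independent, constant, and intermittent influence:

    # Sketch under invented conditions: agents hill-climb on a toy
    # multi-peaked objective; on "interaction" rounds, everyone copies
    # the group's best solution.
    import random

    random.seed(1)

    def score(x):
        # Toy multi-peaked objective standing in for a hard problem.
        return -((x - 3) ** 2) * ((x - 7) ** 2) + 10 * x

    def run(n_agents=20, steps=30, interact_every=None):
        xs = [random.uniform(0, 10) for _ in range(n_agents)]
        for t in range(steps):
            # Individual exploration: keep a random local move only if it helps.
            proposals = [(x, x + random.uniform(-1, 1)) for x in xs]
            xs = [new if score(new) > score(old) else old for old, new in proposals]
            # Social influence: on interaction rounds, copy the group's best.
            if interact_every and (t + 1) % interact_every == 0:
                best = max(xs, key=score)
                xs = [best] * len(xs)
        return max(map(score, xs)), sum(map(score, xs)) / len(xs)

    for label, k in [("independent", None), ("constant", 1), ("intermittent", 5)]:
        best, avg = run(interact_every=k)
        print(f"{label:12s}  best solution: {best:6.1f}   average: {avg:6.1f}")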