
Article Information

  • Title: On Benchmarking for Crowdsourcing and Future of Work Platforms
  • Authors: Ria Mae Borromeo; Lei Chen; Abhishek Dubey
  • Journal: Bulletin of the Technical Committee on Data Engineering
  • Year: 2019
  • Volume: 42
  • Issue: 4
  • Pages: 46-54
  • Publisher: IEEE Computer Society
  • Abstract: Online crowdsourcing platforms have proliferated over the last few years and cover a number of important domains, ranging from worker-task platforms such as Amazon Mechanical Turk and worker-for-hire platforms such as TaskRabbit to specialized platforms for specific tasks, such as ridesharing services like Uber, Lyft, and Ola. An increasing proportion of the human workforce will be employed by these platforms in the near future. The crowdsourcing community has done yeoman's work in designing effective algorithms for various key components, such as incentive design, task assignment, and quality control. Given the increasing importance of these crowdsourcing platforms, it is now time to design mechanisms that make it easier to evaluate their effectiveness. Specifically, we advocate developing benchmarks for crowdsourcing research. Benchmarks often identify important issues for the community to focus on and improve upon; they have played a key role in the development of research domains as diverse as databases and deep learning. We believe that developing appropriate benchmarks for crowdsourcing will ignite further innovations. However, crowdsourcing, and the future of work in general, is a very diverse field, which makes developing benchmarks much more challenging. Substantial effort is needed, spanning benchmarks for datasets, metrics, algorithms, platforms, and so on. In this article, we initiate a discussion of this important problem and issue a call to arms for the community to work on this initiative.