Crowd intelligence in AI 2.0 era

Wei Li1, Wenjun Wu1, Huaimin Wang2, Xueqi Cheng3, Hua-Jun Chen4, Zhi-Hua Zhou5, Rong Ding5
1State Key Laboratory of Software Development Environment, Beihang University, Beijing 100191, China
2National Laboratory for Parallel and Distributed Processing, College of Computer, National University of Defense Technology, Changsha, 410073, China
3Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
4College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
5National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China

Abstract

Keywords


References

Abraham, I., Alonso, O., Kandylas, V., et al., 2013. Adaptive crowdsourcing algorithms for the bandit survey problem. Proc. 26th Conf. on Computational Learning Theory, p.882–910.

Ballesteros, J., Carbunar, B., Rahman, M., et al., 2014. Towards safe cities: a mobile and social networking approach. IEEE Trans. Parall. Distr. Syst., 25(9):2451–2462. http://dx.doi.org/10.1109/TPDS.2013.190

Basili, V.R., Briand, L.C., Melo, W.L., 1996. A validation of object-oriented design metrics as quality indicators. IEEE Trans. Softw. Eng., 22(10):751–761. http://dx.doi.org/10.1109/32.544352

Bhattacharya, P., Neamtiu, I., 2010. Fine-grained incremental learning and multi-feature tossing graphs to improve bug triaging. IEEE Int. Conf. on Software Maintenance, p.1–10. http://dx.doi.org/10.1109/ICSM.2010.5609736

Bird, C., Gourley, A., Devanbu, P., et al., 2006. Mining email social networks. Proc. Int. Workshop on Mining Software Repositories, p.137–143. http://dx.doi.org/10.1145/1137983.1138016

Bird, C., Pattison, D., de Souza, R., et al., 2008. Latent social structure in open source projects. Proc. 16th ACM SIGSOFT Int. Symp. on Foundations of Software Engineering, p.24–35. http://dx.doi.org/10.1145/1453101.1453107

Bird, C., Nagappan, N., Murphy, B., et al., 2011. Don’t touch my code!: examining the effects of ownership on software quality. Proc. 19th ACM SIGSOFT Symp. and 13th European Conf. on Foundations of Software Engineering, p.4–14. http://dx.doi.org/10.1145/2025113.2025119

Bollen, J., Mao, H.N., Zeng, X.J., 2011. Twitter mood predicts the stock market. J. Comput. Sci., 2(1):1–8. http://dx.doi.org/10.1016/j.jocs.2010.12.007

Bonabeau, E., 2009. Decisions 2.0: the power of collective intelligence. MIT Sloan Manag. Rev., 50(2):45–52.

Borne, K.D., Zooniverse Team, 2011. The Zooniverse: a framework for knowledge discovery from citizen science data. American Geophysical Union Fall Meeting.

Burke, J.A., Estrin, D., Hansen, M., et al., 2006. Participatory sensing. Workshop on World-Sensor-Web: Mobile Device Centric Sensor Networks and Applications, p.117–134.

Cao, C.C., She, J.Y., Tong, Y.X., et al., 2012. Whom to ask? Jury selection for decision making tasks on micro-blog services. Proc. VLDB Endow., 5(11):1495–1506. http://dx.doi.org/10.14778/2350229.2350264

Cao, C.C., Tong, Y.X., Chen, L., et al., 2013. Wisemarket: a new paradigm for managing wisdom of online social users. Proc. 19th ACM Int. Conf. on Knowledge Discovery and Data Mining, p.455–463. http://dx.doi.org/10.1145/2487575.2487642

Castaneda, O.F., 2010. Hierarchy in Meritocracy: Community Building and Code Production in the Apache Software Foundation. MS Thesis, Delft University of Technology, Delft, Netherlands.

Chen, X., Lin, Q.H., Zhou, D.Y., 2013. Optimistic knowledge gradient policy for optimal budget allocation in crowdsourcing. Proc. 30th Int. Conf. on Machine Learning, p.64–72.

Chen, X., Lin, Q.H., Zhou, D.Y., 2015. Statistical decision making for optimal budget allocation in crowd labeling. J. Mach. Learn. Res., 16:1–46.

Le Dantec, C.A., Asad, M., Misra, A., et al., 2015. Planning with crowdsourced data: rhetoric and representation in transportation planning. Proc. 18th ACM Conf. on Computer Supported Cooperative Work & Social Computing, p.1717–1727. http://dx.doi.org/10.1145/2675133.2675212

Dawid, A.P., Skene, A.M., 1979. Maximum likelihood estimation of observer error-rates using the EM algorithm. Appl. Statist., 28(1):20–28. http://dx.doi.org/10.2307/2346806

de Alwis, B., Sillito, J., 2009. Why are software projects moving from centralized to decentralized version control systems? Proc. ICSE Workshop on Cooperative and Human Aspects on Software Engineering, p.36–39. http://dx.doi.org/10.1109/CHASE.2009.5071408

Dekel, O., Shamir, O., 2009. Vox Populi: collecting high-quality labels from a crowd. Proc. 22nd Conf. on Learning Theory.

Dempster, A.P., Laird, N.M., Rubin, D.B., 1977. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B, 39(1):1–38.

Difallah, D.E., Demartini, G., Cudré-Mauroux, P., 2013. Pick-a-crowd: tell me what you like, and I’ll tell you what to do. Proc. 22nd Int. Conf. on World Wide Web, p.367–374. http://dx.doi.org/10.1145/2488388.2488421

Difallah, D.E., Demartini, G., Cudré-Mauroux, P., 2016. Scheduling human intelligence tasks in multi-tenant crowd-powered systems. Proc. 25th Int. Conf. on World Wide Web, p.855–865. http://dx.doi.org/10.1145/2872427.2883030

Dong, X.L., Saha, B., Srivastava, D., 2012. Less is more: selecting sources wisely for integration. Proc. VLDB Endow., 6(2):37–48. http://dx.doi.org/10.14778/2535568.2448938

Erenkrantz, J.R., Taylor, R.N., 2003. Supporting Distributed and Decentralized Projects: Drawing Lessons from the Open Source Community. ISR Technical Report No. UCI-ISR-03-4, Institute for Software Research, University of California, Irvine, USA.

Farkas, K., Nagy, A.Z., Tomás, T., et al., 2014. Participatory sensing based real-time public transport information service. IEEE Int. Conf. on Pervasive Computing and Communications Workshops, p.141–144. http://dx.doi.org/10.1109/PerComW.2014.6815181

Feng, Z.N., Zhu, Y.M., Zhang, Q., et al., 2014. Trac: truthful auction for location-aware collaborative sensing in mobile crowdsourcing. Proc. IEEE Conf. on Computer Communications, p.1231–1239. http://dx.doi.org/10.1109/INFOCOM.2014.6848055

Fowler, G., Schectman, J., 2013. Citizen surveillance helps officials put pieces together. The Wall Street Journal, April 17. http://www.wsj.com/articles/SB10001424127887324763404578429220091342796

Gao, C., Zhou, D.Y., 2013. Minimax optimal convergence rates for estimating ground truth from crowdsourced labels. ePrint Archive, arXiv:1310.5764.

Gao, D.W., Tong, Y.X., She, J.Y., et al., 2016. Top-k team recommendation in spatial crowdsourcing. LNCS, 9658:191–204. http://dx.doi.org/10.1007/978-3-319-39937-9_15

Gao, L., Hou, F., Huang, J.W., 2015. Providing long-term participation incentive in participatory sensing. IEEE Conf. on Computer Communications, p.2803–2811. http://dx.doi.org/10.1109/INFOCOM.2015.7218673

Ghosh, R.A., 2005. Understanding Free Software Developers: Findings from the FLOSS Study. MIT Press, Cambridge, USA.

Gousios, G., Pinzger, M., van Deursen, A., 2014. An exploratory study of the pull-based software development model. Proc. 36th Int. Conf. on Software Engineering, p.345–355. http://dx.doi.org/10.1145/2568225.2568260

Gousios, G., Zaidman, A., Storey, M.A., et al., 2015. Work practices and challenges in pull-based development: the integrator’s perspective. Proc. 37th Int. Conf. on Software Engineering, p.358–368. http://dx.doi.org/10.1109/ICSE.2015.55

Han, K., Zhang, C., Luo, J., et al., 2016. Truthful scheduling mechanisms for powering mobile crowdsensing. IEEE Trans. Comput., 65(1):294–307. http://dx.doi.org/10.1109/TC.2015.2419658

Hars, A., Ou, S., 2001. Working for free? Motivations of participating in open source projects. Proc. 34th Annual Hawaii Int. Conf. on System Sciences, p.1–9. http://dx.doi.org/10.1109/HICSS.2001.927045

Hassan, A.E., 2009. Predicting faults using the complexity of code changes. Proc. 31st Int. Conf. on Software Engineering, p.78–88. http://dx.doi.org/10.1109/ICSE.2009.5070510

Hertel, G., Niedner, S., Herrmann, S., 2003. Motivation of software developers in open source projects: an Internet-based survey of contributors to the Linux kernel. Res. Polic., 32(7):1159–1177. http://dx.doi.org/10.1016/S0048-7333(03)00047-7

Ho, C.J., Vaughan, J.W., 2012. Online task assignment in crowdsourcing markets. Proc. 26th AAAI Conf. on Artificial Intelligence, p.45–51.

Ho, C.J., Jabbari, S., Vaughan, J.W., 2013. Adaptive task assignment for crowdsourced classification. Proc. 30th Int. Conf. on Machine Learning, p.534–542.

Hoffman, M.L., 1981. Is altruism part of human nature? J. Personal. Soc. Psychol., 40(1):121–137. http://dx.doi.org/10.1037/0022-3514.40.1.121

Jaimes, L.G., Vergara-Laurens, I., Labrador, M.A., 2012. A location-based incentive mechanism for participatory sensing systems with budget constraints. Proc. 10th Annual IEEE Int. Conf. on Pervasive Computing and Communications, p.103–108. http://dx.doi.org/10.1109/PerCom.2012.6199855

Jain, S., Gujar, S., Bhat, S., et al., 2014. An incentive compatible multi-armed-bandit crowdsourcing mechanism with quality assurance. ePrint Archive, arXiv:1406.7157.

Jeong, G., Kim, S., Zimmermann, T., 2009. Improving bug triage with bug tossing graphs. Proc. 7th Joint Meeting of the European Software Engineering Conf. and the ACM SIGSOFT Symp. on the Foundations of Software Engineering, p.111–120. http://dx.doi.org/10.1145/1595696.1595715

Karger, D.R., Oh, S., Shah, D., 2011. Iterative learning for reliable crowdsourcing systems. Advances in Neural Information Processing Systems, p.1953–1961.

Khetan, A., Oh, S., 2016. Achieving budget-optimality with adaptive schemes in crowdsourcing. Advances in Neural Information Processing Systems, p.4844–4852.

Kittur, A., Smus, B., Khamkar, S., et al., 2011. Crowdforge: crowdsourcing complex work. Proc. 24th Annual ACM Symp. on User Interface Software and Technology, p.43–52. http://dx.doi.org/10.1145/2047196.2047202

Krishna, V., 2009. Auction Theory. Academic Press, New York, USA.

Krontiris, I., Albers, A., 2012. Monetary incentives in participatory sensing using multi-attributive auctions. Int. J. Parall. Emerg. Distr. Syst., 27(4):317–336. http://dx.doi.org/10.1080/17445760.2012.686170

Law, E., von Ahn, L., 2011. Human computation. Synth. Lect. Artif. Intell. Mach. Learn., 5(3):1–121.

Lazer, D., Kennedy, R., King, G., et al., 2014. The parable of Google flu: traps in big data analysis. Science, 343(6167):1203–1205. http://dx.doi.org/10.1126/science.1248506

Lee, J.S., Hoh, B., 2010. Sell your experiences: a market mechanism based incentive for participatory sensing. IEEE Int. Conf. on Pervasive Computing and Communications, p.60–68. http://dx.doi.org/10.1109/PERCOM.2010.5466993

Li, G.L., Wang, J.N., Zheng, Y.D., et al., 2016. Crowdsourced data management: a survey. IEEE Trans. Knowl. Data Eng., 28(9):2296–2319. http://dx.doi.org/10.1109/TKDE.2016.2535242

Li, H.W., Yu, B., 2014. Error rate bounds and iterative weighted majority voting for crowdsourcing. ePrint Archive, arXiv:1411.4086.

Li, X., Dong, X.L., Lyons, K., et al., 2012. Truth finding on the deep Web: is the problem solved? Proc. VLDB Endow., 6(2):97–108. http://dx.doi.org/10.14778/2535568.2448943

Lintott, C.J., Schawinski, K., Slosar, A., et al., 2008. Galaxy Zoo: morphologies derived from visual inspection of galaxies from the Sloan Digital Sky Survey. Month. Not. R. Astronom. Soc., 389(3):1179–1189. http://dx.doi.org/10.1111/j.1365-2966.2008.13689.x

Liu, Q., Peng, J., Ihler, A.T., 2012. Variational inference for crowdsourcing. Advances in Neural Information Processing Systems, p.692–700.

Luo, T., Tan, H.P., Xia, L.R., 2014. Profit-maximizing incentive for participatory sensing. Proc. IEEE Conf. on Computer Communications, p.127–135. http://dx.doi.org/10.1109/INFOCOM.2014.6847932

Malone, T.W., Laubacher, R., Dellarocas, C., 2009. Harnessing Crowds: Mapping the Genome of Collective Intelligence. MIT Sloan Research Paper No. 4732–09, Sloan School of Management, Massachusetts Institute of Technology, MA, USA. http://dx.doi.org/10.2139/ssrn.1381502

Mamykina, L., Manoim, B., Mittal, M., et al., 2011. Design lessons from the fastest Q&A site in the west. Proc. SIGCHI Conf. on Human Factors in Computing Systems, p.2857–2866. http://dx.doi.org/10.1145/1978942.1979366

Maslow, A.H., Frager, R., Fadiman, J., et al., 1970. Motivation and Personality. Harper & Row, New York, USA.

Mavridis, P., Gross-Amblard, D., Miklós, Z., 2016. Using hierarchical skills for optimized task assignment in knowledge-intensive crowdsourcing. Proc. 25th Int. Conf. on World Wide Web, p.843–853.

Meng, R., Tong, Y.X., Chen, L., et al., 2015. CrowdTC: crowdsourced taxonomy construction. Proc. IEEE Int. Conf. on Data Mining, p.913–918. http://dx.doi.org/10.1109/ICDM.2015.77

Mockus, A., Fielding, R.T., Herbsleb, J.D., 2002. Two case studies of open source software development: Apache and Mozilla. ACM Trans. Softw. Eng. Meth., 11(3):309–346. http://dx.doi.org/10.1145/567793.567795

Moser, R., Pedrycz, W., Succi, G., 2008. A comparative analysis of the efficiency of change metrics and static code attributes for defect prediction. Proc. 30th Int. Conf. on Software Engineering, p.181–190. http://dx.doi.org/10.1145/1368088.1368114

Nagappan, N., Ball, T., 2005. Use of relative code churn measures to predict system defect density. Proc. 27th Int. Conf. on Software Engineering, p.284–292. http://dx.doi.org/10.1109/ICSE.2005.1553571

Nakakoji, K., Yamamoto, Y., Nishinaka, Y., et al., 2002. Evolution patterns of open-source software systems and communities. Proc. Int. Workshop on Principles of Software Evolution, p.76–85. http://dx.doi.org/10.1145/512035.512055

Ok, J., Oh, S., Shin, J., et al., 2016. Optimality of belief propagation for crowdsourced classification. ePrint Archive, arXiv:1602.03619.

Ouyang, W.R., Kaplan, L.M., Martin, P., et al., 2015. Debiasing crowdsourced quantitative characteristics in local businesses and services. Proc. 14th Int. Conf. on Information Processing in Sensor Networks, p.190–201. http://dx.doi.org/10.1145/2737095.2737116

Pan, Y.H., 2016. Heading toward artificial intelligence 2.0. Engineering, 2(4):409–413. http://dx.doi.org/10.1016/J.ENG.2016.04.018

Lévy, P., 1997. Collective Intelligence: Mankind’s Emerging World in Cyberspace. Bononno, R., translator. Perseus Books, Cambridge, USA.

Quinn, A.J., Bederson, B.B., 2011. Human computation: a survey and taxonomy of a growing field. Proc. SIGCHI Conf. on Human Factors in Computing Systems, p.1403–1412. http://dx.doi.org/10.1145/1978942.1979148

Raban, D.R., Harper, F.M., 2008. Motivations for Answering Questions Online. http://www.researchgate.net/publication/241053908

Rahman, F., Devanbu, P.T., 2011. Ownership, experience and defects: a fine-grained study of authorship. Proc. 33rd Int. Conf. on Software Engineering, p.491–500. http://dx.doi.org/10.1145/1985793.1985860

Rahman, F., Devanbu, P.T., 2013. How, and why, process metrics are better. Proc. 35th Int. Conf. on Software Engineering, p.432–441.

Rana, R.K., Chou, C.T., Kanhere, S.S., et al., 2010. Ear-Phone: an end-to-end participatory urban noise mapping system. Proc. 9th ACM/IEEE Int. Conf. on Information Processing in Sensor Networks, p.105–116. http://dx.doi.org/10.1145/1791212.1791226

Raykar, V.C., Yu, S., 2012. Eliminating spammers and ranking annotators for crowdsourced labeling tasks. J. Mach. Learn. Res., 13(Feb):491–518.

Raykar, V.C., Yu, S., Zhao, L.H., et al., 2009. Supervised learning from multiple experts: whom to trust when everyone lies a bit. Proc. 26th Int. Conf. on Machine Learning, p.889–896. http://dx.doi.org/10.1145/1553374.1553488

Raykar, V.C., Yu, S., Zhao, L.H., et al., 2010. Learning from crowds. J. Mach. Learn. Res., 11(Apr):1297–1322.

Raymond, E., 1999. The cathedral and the bazaar. Knowl. Technol. Polic., 12(3):23–49. http://dx.doi.org/10.1007/s12130-999-1026-0

Rigby, P.C., German, D.M., Cowen, L., et al., 2014. Peer review on open-source software projects: parameters, statistical models, and theory. ACM Trans. Softw. Eng. Meth., 23(4):No.35. http://dx.doi.org/10.1145/2594458

Rogers, E.M., 2010. Diffusion of Innovations. Simon and Schuster, New York, USA.

Sakaki, T., Okazaki, M., Matsuo, Y., 2010. Earthquake shakes Twitter users: real-time event detection by social sensors. Proc. 19th Int. Conf. on World Wide Web, p.851–860. http://dx.doi.org/10.1145/1772690.1772777

Shah, N.B., Zhou, D., 2015. Double or nothing: multiplicative incentive mechanisms for crowdsourcing. Advances in Neural Information Processing Systems, p.1–9.

Shah, N.B., Zhou, D., 2016. No oops, you won’t do it again: mechanisms for self-correction in crowdsourcing. Proc. 33rd Int. Conf. on Machine Learning, p.1–10.

Shah, N.B., Zhou, D., Peres, Y., 2015. Approval voting and incentives in crowdsourcing. ePrint Archive, arXiv:1502.05696.

She, J.Y., Tong, Y.X., Chen, L., 2015a. Utility-aware social event-participant planning. Proc. ACM Int. Conf. on Management of Data, p.1629–1643. http://dx.doi.org/10.1145/2723372.2749446

She, J.Y., Tong, Y.X., Chen, L., et al., 2015b. Conflict-aware event-participant arrangement. Proc. 31st IEEE Int. Conf. on Data Engineering, p.735–746. http://dx.doi.org/10.1109/ICDE.2015.7113329

She, J.Y., Tong, Y.X., Chen, L., et al., 2016. Conflict-aware event-participant arrangement and its variant for online setting. IEEE Trans. Knowl. Data Eng., 28(9):2281–2295. http://dx.doi.org/10.1109/TKDE.2016.2565468

Shen, H.W., Barabási, A.L., 2014. Collective credit allocation in science. PNAS, 111(34):12325–12330. http://dx.doi.org/10.1073/pnas.1401992111

Smith, J.B., 1994. Collective Intelligence in Computer-Based Collaboration. CRC Press, Boca Raton, USA.

Subramanian, A., Kanth, G.S., Vaze, R., 2013. Offline and online incentive mechanism design for smart-phone crowd-sourcing. ePrint Archive, arXiv:1310.1746.

Subramanyam, R., Krishnan, M.S., 2003. Empirical analysis of CK metrics for object-oriented design complexity: implications for software defects. IEEE Trans. Softw. Eng., 29(4):297–310. http://dx.doi.org/10.1109/TSE.2003.1191795

Sullivan, B.L., Wood, C.L., Iliff, M.J., et al., 2009. eBird: a citizen-based bird observation network in the biological sciences. Biol. Conserv., 142(10):2282–2292. http://dx.doi.org/10.1016/j.biocon.2009.05.006

Tamrawi, A., Nguyen, T.T., Al-Kofahi, J.M., et al., 2011. Fuzzy set and cache-based approach for bug triaging. Proc. 19th ACM SIGSOFT Symp. and 13th European Conf. on Foundations of Software Engineering, p.365–375. http://dx.doi.org/10.1145/2025113.2025163

Tang, J.C., Cebrian, M., Giacobe, N.A., et al., 2011. Reflecting on the DARPA red balloon challenge. Commun. ACM, 54(4):78–85. http://dx.doi.org/10.1145/1924421.1924441

Teodoro, R., Ozturk, P., Naaman, M., et al., 2014. The motivations and experiences of the on-demand mobile workforce. Proc. 17th ACM Conf. on Computer Supported Cooperative Work & Social Computing, p.236–247. http://dx.doi.org/10.1145/2531602.2531680

Thebault-Spieker, J., Terveen, L.G., Hecht, B., 2015. Avoiding the south side and the suburbs: the geography of mobile crowdsourcing markets. Proc. 18th ACM Conf. on Computer Supported Cooperative Work & Social Computing, p.265–275. http://dx.doi.org/10.1145/2675133.2675278

Thongtanunam, P., Tantithamthavorn, C., Kula, R.G., et al., 2015. Who should review my code? A file location-based code-reviewer recommendation approach for modern code review. IEEE 22nd Int. Conf. on Software Analysis, Evolution, and Reengineering, p.141–150. http://dx.doi.org/10.1109/SANER.2015.7081824

Tian, T., Zhu, J., 2015. Max-margin majority voting for learning from crowds. Advances in Neural Information Processing Systems, p.1621–1629.

Tong, Y.X., Chen, L., Ding, B.L., 2012a. Discovering threshold-based frequent closed itemsets over probabilistic data. Proc. IEEE 28th Int. Conf. on Data Engineering, p.270–281. http://dx.doi.org/10.1109/ICDE.2012.51

Tong, Y.X., Chen, L., Cheng, Y.R., et al., 2012b. Mining frequent itemsets over uncertain databases. Proc. VLDB Endow., 5(11):1650–1661.

Tong, Y.X., Cao, C.C., Zhang, C.J., et al., 2014a. CrowdCleaner: data cleaning for multi-version data on the web via crowdsourcing. Proc. IEEE 30th Int. Conf. on Data Engineering, p.1182–1185. http://dx.doi.org/10.1109/ICDE.2014.6816736

Tong, Y.X., Cao, C.C., Chen, L., 2014b. TCS: efficient topic discovery over crowd-oriented service data. Proc. 20th ACM Int. Conf. on Knowledge Discovery and Data Mining, p.861–870. http://dx.doi.org/10.1145/2623330.2623647

Tong, Y.X., Chen, L., She, J.Y., 2015a. Mining frequent itemsets in correlated uncertain databases. J. Comput. Sci. Technol., 30(4):696–712. http://dx.doi.org/10.1007/s11390-015-1555-9

Tong, Y.X., Meng, R., She, J.Y., 2015b. On bottleneck-aware arrangement for event-based social networks. Proc. 31st IEEE Int. Conf. on Data Engineering Workshops, p.216–223. http://dx.doi.org/10.1109/ICDEW.2015.7129579

Tong, Y.X., She, J.Y., Meng, R., 2016a. Bottleneck-aware arrangement over event-based social networks: the max-min approach. World Wide Web, 19(6):1151–1177. http://dx.doi.org/10.1007/s11280-015-0377-6

Tong, Y.X., She, J.Y., Ding, B.L., et al., 2016b. Online mobile micro-task allocation in spatial crowdsourcing. Proc. 32nd IEEE Int. Conf. on Data Engineering, p.49–60. http://dx.doi.org/10.1109/ICDE.2016.7498228

Tong, Y.X., She, J.Y., Ding, B.L., et al., 2016c. Online minimum matching in real-time spatial data: experiments and analysis. Proc. VLDB Endow., 9(12):1053–1064. http://dx.doi.org/10.14778/2994509.2994523

Tong, Y.X., Zhang, X.F., Chen, L., 2016d. Tracking frequent items over distributed probabilistic data. World Wide Web, 19(4):579–604. http://dx.doi.org/10.1007/s11280-015-0341-5

Tong, Y.X., Yuan, Y., Cheng, Y.R., et al., 2017. A survey of spatiotemporal crowdsourced data management techniques. J. Softw., 28(1):35–58 (in Chinese).

Tran-Thanh, L., Stein, S., Rogers, A., et al., 2012. Efficient crowdsourcing of unknown experts using multi-armed bandits. Proc. European Conf. on Artificial Intelligence, p.768–773. http://dx.doi.org/10.3233/978-1-61499-098-7-768

Tsay, J., Dabbish, L., Herbsleb, J.D., 2014a. Influence of social and technical factors for evaluating contribution in GitHub. 36th Int. Conf. on Software Engineering, p.356–366. http://dx.doi.org/10.1145/2568225.2568315

Tsay, J., Dabbish, L., Herbsleb, J.D., 2014b. Let’s talk about it: evaluating contributions through discussion in GitHub. Proc. 22nd ACM Int. Symp. on Foundations of Software Engineering, p.144–154. http://dx.doi.org/10.1145/2635868.2635882

Vasilescu, B., Yu, Y., Wang, H., et al., 2015. Quality and productivity outcomes relating to continuous integration in GitHub. Proc. 10th Joint Meeting on Foundations of Software Engineering, p.805–816. http://dx.doi.org/10.1145/2786805.2786850

Vickrey, W., 1961. Counterspeculation, auctions, and competitive sealed tenders. J. Finan., 16(1):8–37. http://dx.doi.org/10.1111/j.1540-6261.1961.tb02789.x

von Ahn, L., Maurer, B., McMillen, C., et al., 2008. reCAPTCHA: human-based character recognition via web security measures. Science, 321(5895):1465–1468. http://dx.doi.org/10.1126/science.1160379

Wang, D., Abdelzaher, T.F., Kaplan, L.M., et al., 2013. Recursive fact-finding: a streaming approach to truth estimation in crowdsourcing applications. IEEE 33rd Int. Conf. on Distributed Computing Systems, p.530–539. http://dx.doi.org/10.1109/ICDCS.2013.54

Wang, D., Amin, M.T.A., Li, S., et al., 2014. Using humans as sensors: an estimation-theoretic perspective. Proc. 13th Int. Conf. on Information Processing in Sensor Networks, p.35–46.

Wang, H.M., Yin, G., Li, X., et al., 2015. TRUSTIE: a software development platform for crowdsourcing. In: Li, W., Huhns, M.N., Tsai, W.T., et al. (Eds.), Crowdsourcing. Springer Berlin Heidelberg, Berlin, Germany, p.165–190.

Wang, J.N., Li, G.L., Kraska, T., et al., 2013. Leveraging transitive relations for crowdsourced joins. Proc. ACM Int. Conf. on Management of Data, p.229–240. http://dx.doi.org/10.1145/2463676.2465280

Wang, L., Zhou, Z.H., 2016. Cost-saving effect of crowdsourcing learning. Proc. 25th Int. Joint Conf. on Artificial Intelligence, p.2111–2117.

Wang, W., Zhou, Z.H., 2015. Crowdsourcing label quality: a theoretical study. Sci. China Inform. Sci., 58(11):1–12. http://dx.doi.org/10.1007/s11432-015-5391-x

Wauthier, F.L., Jordan, M.I., 2011. Bayesian bias mitigation for crowdsourcing. Advances in Neural Information Processing Systems, p.1800–1808.

Welinder, P., Branson, S., Perona, P., et al., 2010. The multidimensional wisdom of crowds. Advances in Neural Information Processing Systems, p.2424–2432.

Whitehill, J., Wu, T., Bergsma, J., et al., 2009. Whose vote should count more: optimal integration of labels from labelers of unknown expertise. Advances in Neural Information Processing Systems, p.2035–2043.

Wu, W.J., Tsai, W.T., Li, W., 2013. An evaluation framework for software crowdsourcing. Front. Comput. Sci., 7(5):694–709. http://dx.doi.org/10.1007/s11704-013-2320-2

Yan, Y., Fung, G.M., Rosales, R., et al., 2011. Active learning from crowds. Proc. 28th Int. Conf. on Machine Learning, p.1161–1168.

Yang, D.J., Fang, X., Xue, G.L., 2013. Truthful incentive mechanisms for k-anonymity location privacy. Proc. IEEE Conf. on Computer Communications, p.2994–3002. http://dx.doi.org/10.1109/INFCOM.2013.6567111

Yang, D.J., Xue, G.L., Fang, X., et al., 2012. Crowdsourcing to smartphones: incentive mechanism design for mobile phone sensing. Proc. 18th Annual Int. Conf. on Mobile Computing and Networking, p.173–184. http://dx.doi.org/10.1145/2348543.2348567

Ye, Y.W., Kishida, K., 2003. Toward an understanding of the motivation of open source software developers. Proc. 25th Int. Conf. on Software Engineering, p.419–429. http://dx.doi.org/10.1109/ICSE.2003.1201220

Yu, Y., Yin, G., Wang, H., et al., 2014. Exploring the patterns of social behavior in GitHub. Proc. 1st Int. Workshop on Crowd-Based Software Development Methods and Technologies, p.31–36. http://dx.doi.org/10.1145/2666539.2666571

Yu, Y., Yin, G., Wang, T., et al., 2016a. Determinants of pull-based development in the context of continuous integration. Sci. China Inform. Sci., 59(8):080104. http://dx.doi.org/10.1007/s11432-016-5595-8

Yu, Y., Wang, H.M., Yin, G., et al., 2016b. Reviewer recommendation for pull-requests in GitHub: what can we learn from code review and bug assignment? Inform. Softw. Technol., 74:204–218. http://dx.doi.org/10.1016/j.infsof.2016.01.004

Zhang, C.J., Chen, L., Tong, Y.X., 2014a. MaC: a probabilistic framework for query answering with machine-crowd collaboration. Proc. 23rd ACM Int. Conf. on Information and Knowledge Management, p.11–20. http://dx.doi.org/10.1145/2661829.2661880

Zhang, C.J., Tong, Y.X., Chen, L., 2014b. Where to: crowd-aided path selection. Proc. VLDB Endow., 7(11):2005–2016. http://dx.doi.org/10.14778/2733085.2733105

Zhang, C.J., Chen, L., Tong, Y., et al., 2015. Cleaning uncertain data with a noisy crowd. Proc. 31st IEEE Int. Conf. on Data Engineering, p.6–17. http://dx.doi.org/10.1109/ICDE.2015.7113268

Zhang, Y., Chen, X., Zhou, D., et al., 2014. Spectral methods meet EM: a provably optimal algorithm for crowdsourcing. Advances in Neural Information Processing Systems, p.1260–1268.

Zhao, D., Li, X.Y., Ma, H.D., 2014. How to crowdsource tasks truthfully without sacrificing utility: online incentive mechanisms with budget constraint. Proc. IEEE Conf. on Computer Communications, p.1213–1221. http://dx.doi.org/10.1109/INFOCOM.2014.6848053

Zhong, J.H., Tang, K., Zhou, Z.H., 2015. Active learning from crowds with unsure option. Proc. 24th Int. Joint Conf. on Artificial Intelligence, p.1061–1067.

Zhou, D., Basu, S., Mao, Y., et al., 2012. Learning from the wisdom of crowds by minimax entropy. Advances in Neural Information Processing Systems, p.2195–2203.

Zhou, Y., Chen, X., Li, J., 2014. Optimal PAC multiple arm identification with applications to crowdsourcing. Proc. 31st Int. Conf. on Machine Learning, p.217–225.

Zhu, Y., Zhang, Q., Zhu, H., et al., 2014. Towards truthful mechanisms for mobile crowdsourcing with dynamic smartphones. Proc. 34th Int. Conf. on Distributed Computing Systems, p.11–20. http://dx.doi.org/10.1109/ICDCS.2014.10