Robust ranking and unfairness detection in social rating systems

Access & Terms of Use
open access
Copyright: Allah Bakhsh, Mohammad
Abstract
Using Web 2.0 technologies, people can collaboratively create content, share it with others, build social networks, play online games and collaborate on the web in many other ways. They can also easily collaborate on assessing the quality of content generated by others, mainly through social rating systems. People acting as evaluators may have different levels of skill and expertise, sometimes insufficient for evaluating particular content. They may also have diverse and even biased interests and incentives, and consequently cast unfair evaluations. Moreover, considerable evidence shows that social rating systems have been widely subjected to unfair evaluations, posted individually or collaboratively, to promote or demote particular items for the evaluators' personal or group benefit. Identifying these unfair evaluations is challenging and is key to the success of social rating systems.

In this dissertation we present a set of techniques and algorithms for assessing the quality of people and content in social rating systems. We first propose methods for identifying both individual and collaborative unfair evaluators. After analyzing existing collusion detection systems and finding fundamental limitations in how they handle massive or intelligent collusion attacks, we propose a novel iterative collusion detection method that addresses these problems. The method builds on the idea of detecting the real sentiment of a community of evaluators and using it to robustly evaluate people and content in that community. To do so, we first present a novel model for reducing a rating task to an election (a voting activity) and then use this model to detect the sentiment of the community. Our model depends on neither the values of the cast evaluations nor their order; instead, it relies on the distributions of cast votes and on the community's opinion of the helpfulness of those votes and reviews. In addition to rating products, we use community sentiment, together with people's voting behavior, to assess their trustworthiness. Our people evaluation approach employs seven behavioral factors to distinguish between, for example, random voters, genuine users, honest people with low expertise and expert users. We employ a fuzzy logic model to combine these behavioral factors and obtain a single trustworthiness value for each evaluator. The approaches presented in this dissertation have been implemented in prototype tools and experimentally validated on synthetic and real-world datasets.
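The abstract only sketches how the fuzzy logic model turns behavioral factors into one trustworthiness value. The minimal sketch below illustrates that general idea, not the dissertation's actual model: the factor names (agreement_with_majority, helpfulness_of_reviews, rating_regularity), the membership functions, the rules and the output levels are all assumptions introduced here for illustration, and the thesis itself combines seven behavioral factors defined in the full text.

# Minimal sketch (hypothetical, not the dissertation's model): fuzzy-style combination
# of a few illustrative behavioral factors into a single trustworthiness score.

def tri(x, a, b, c):
    """Triangular membership function peaking at b on the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(value):
    """Degrees of membership in 'low', 'medium', 'high' for a factor in [0, 1]."""
    return {
        "low": tri(value, -0.5, 0.0, 0.5),
        "medium": tri(value, 0.0, 0.5, 1.0),
        "high": tri(value, 0.5, 1.0, 1.5),
    }

def trustworthiness(factors):
    """Combine behavioral factors (each in [0, 1]) into one score in [0, 1].

    Illustrative rules (min for AND, max to aggregate) followed by a weighted
    centroid defuzzification over assumed crisp output levels.
    """
    agree = fuzzify(factors["agreement_with_majority"])
    helpful = fuzzify(factors["helpfulness_of_reviews"])
    regular = fuzzify(factors["rating_regularity"])

    # Rule strengths for each output label (assumed rules).
    trusted = min(agree["high"], helpful["high"])
    suspicious = max(agree["low"], min(regular["low"], helpful["low"]))
    neutral = max(agree["medium"], helpful["medium"])

    # Crisp output levels assumed for the three labels.
    levels = {"suspicious": 0.1, "neutral": 0.5, "trusted": 0.9}
    weights = {"suspicious": suspicious, "neutral": neutral, "trusted": trusted}
    total = sum(weights.values())
    if total == 0.0:
        return 0.5  # no rule fired; fall back to a neutral score
    return sum(levels[k] * weights[k] for k in levels) / total

if __name__ == "__main__":
    print(trustworthiness({
        "agreement_with_majority": 0.8,
        "helpfulness_of_reviews": 0.7,
        "rating_regularity": 0.6,
    }))

For the sample factors in the example, the rules fire with strengths 0.4 (trusted), 0.6 (neutral) and 0.0 (suspicious), giving a defuzzified score of 0.66.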
Author(s)
Allah Bakhsh, Mohammad
Supervisor(s)
Ignjatovic, Aleksandar
Publication Year
2013
Resource Type
Thesis
Degree Type
PhD Doctorate
Files
whole.pdf (2.56 MB, Adobe Portable Document Format)