Companies agree to pay fines for false advertising and deceptive business practices related to false product and service reviews on review websites.
October 14, 2013
MarketWatch reports that 19 companies have agreed to pay fines for false advertising and deceptive business practices related to false online reviews of products and services on review websites such as Yelp. The Operation Clean Turf investigation by the New York Attorney General's office found that these firms paid freelance writers to write positive reviews of their products and services, which ranged from bus companies to teeth whitening services. According to a recent study by professors from Harvard University and Boston University, 20 percent of reviews on Yelp may be fraudulent — up from 5 percent in 2006. Yelp says it filters most of these reviews, although visitors can view them by clicking a link at the bottom of each business's page.
Fake user reviews — including known techniques such as "opinion spamming," "shills," and "astroturfing" — are part of a much larger and still growing worldwide trend toward deceptive and fraudulent business practices that rely on biased or falsified information to mislead consumers and boost sales and profits. The trend is fueled by the Internet and open access, and is exploited by companies and individuals alike for financial and reputational gain. Moreover, it is occurring in unexpected ways. For example, a study recently published in Science magazine revealed a wave of what some call "predatory" science publishing — shadow publications that use fake addresses and names, and publish research whose quality is not checked. These publications appear only online and are often free to anyone, but it is the authors — scientists who want their work published — who pay all the costs.
Auditors involved in a wide range of organizations and processes, including procurement, advertising, and service delivery, are already, or will be, confronted by this threat of false information and need to be armed with the facts, advice, and strategies necessary to combat it. Online reviews should be only one of several data sets used for decision-making, should always be considered in aggregate, and should be taken with a healthy grain of salt. The MarketWatch story also mentions some useful ways of detecting problem information. Beyond individual analysis and judgment, numerous fraud-detection methodologies (both statistical and artificial intelligence-based) are available to auditors, particularly quantitative, Web-based data mining such as the use of pattern discovery and relational modeling. A few of the main red flags that can be revealed include:
1. Review content:
Lexical features such as word n-grams, part-of-speech n-grams, and other stylistic attributes, including marketing-speak that most genuine reviewers do not use.
Content and style similarity of reviews from different reviewers, including work copied and pasted from other writers/reviewers. Auditors also should look out for distinctive phrasings, which can be searched for on Google.
Patterns in the use of overly positive or overly negative language.
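To illustrate the content-similarity red flag above, here is a minimal sketch of one common approach: comparing the word n-gram sets of two reviews with Jaccard similarity to surface likely copy-and-paste pairs. The sample reviews, the trigram size, and the 0.5 threshold are all invented for illustration, not taken from any named detection system.

```python
def word_ngrams(text, n=3):
    """Return the set of word n-grams in a lowercased text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a, b, n=3):
    """Jaccard similarity of two texts' word n-gram sets (0.0 to 1.0)."""
    ga, gb = word_ngrams(a, n), word_ngrams(b, n)
    if not ga and not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

# Hypothetical near-duplicate pair of the kind an auditor might flag.
review_1 = ("Absolutely amazing service, the best teeth whitening "
            "experience I have ever had, highly recommend!")
review_2 = ("Absolutely amazing service, the best teeth whitening "
            "results I have ever had, highly recommend!")

score = jaccard_similarity(review_1, review_2)
if score > 0.5:  # threshold chosen for illustration only
    print(f"possible copy-paste pair (similarity {score:.2f})")
```

In practice, a threshold would be tuned against a labeled sample, and part-of-speech n-grams or character n-grams could be substituted for word n-grams in the same framework.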
2. Reviewer behaviors:
Public data available from websites, such as user profile/reviewer IDs, time of posting, frequency of posting, first reviewers of products, and flooding of the same or similar review at other locations of the same company. For example, a username that ends in more than three digits could indicate an automated program is at work.
Website private/internal data, such as IP and MAC addresses, time taken to post a review, number of reviewers who created accounts around the same time (including at the time a domain name was registered), and physical location of the reviewer.
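Two of the behavioral red flags above — digit-heavy usernames and bursts of posting activity — lend themselves to simple automated checks. The sketch below is illustrative only; the one-hour window, five-review threshold, and sample usernames are assumptions, not parameters from any actual filtering system.

```python
import re
from datetime import timedelta

def trailing_digit_flag(username, max_digits=3):
    """Flag usernames ending in a long run of digits (e.g. 'bus_fan_84725'),
    a pattern sometimes left behind by automated account generators."""
    match = re.search(r"\d+$", username)
    return bool(match) and len(match.group()) > max_digits

def burst_flag(timestamps, window=timedelta(hours=1), threshold=5):
    """Flag a reviewer who posts `threshold` or more reviews
    inside any single sliding time window."""
    times = sorted(timestamps)
    for i, start in enumerate(times):
        count = sum(1 for t in times[i:] if t - start <= window)
        if count >= threshold:
            return True
    return False

print(trailing_digit_flag("bus_fan_84725"))  # True: five trailing digits
print(trailing_digit_flag("honest_abe"))     # False: no trailing digits
```

Checks like these only surface candidates for review; a flagged account still needs the individual analysis and judgment discussed above.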
3. Relationships among reviewers, reviews, and entities (e.g., products and stores). Much of this may be hidden and can include:
A lack of editing, review, and appropriate use policies and procedures.
The entity may be compensating some reviewers, financially or in kind.
Some sites/sponsors have been found to delay posting, alter, or suppress negative reviews.
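The relational modeling mentioned earlier can expose hidden ties among reviewers and entities. A minimal sketch of one such technique: grouping reviewers by the exact set of businesses they review, since distinct accounts sharing an identical multi-business footprint may be coordinated. The review log, reviewer names, and the minimum-footprint threshold are all invented for illustration.

```python
from collections import defaultdict

# Toy review log of (reviewer_id, business_id) pairs; data is hypothetical.
reviews = [
    ("alice", "bus_co"), ("alice", "whitening_spa"), ("alice", "cafe_x"),
    ("bob",   "bus_co"), ("bob",   "whitening_spa"), ("bob",   "cafe_x"),
    ("carol", "cafe_x"),
]

def suspicious_groups(reviews, min_shared=3):
    """Group reviewers by the exact set of entities they reviewed; reviewers
    sharing an identical footprint of min_shared+ entities may be colluding."""
    footprint = defaultdict(set)
    for reviewer, entity in reviews:
        footprint[reviewer].add(entity)
    groups = defaultdict(list)
    for reviewer, entities in footprint.items():
        if len(entities) >= min_shared:
            groups[frozenset(entities)].append(reviewer)
    return [sorted(g) for g in groups.values() if len(g) > 1]

print(suspicious_groups(reviews))  # [['alice', 'bob']]
```

Real relational analyses extend this idea to fuzzier signals — overlapping rather than identical footprints, shared IP addresses, or accounts created in the same window — but the grouping pattern is the same.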