Online reviews only partially reveal what hotel customers think
by Susan Kelley (7/20/16)
Customers in the hotel industry are writing online reviews more than ever via social networks and travel websites. But the comments are so numerous, and so hard to analyze with traditional statistical methods, that managers have found it difficult to use reviews to improve operations.
Cornell researchers have come up with a text-mining method of analyzing online customer reviews – and have found that they do not tell the full story of how guests view a hotel.
“Uncovering insights that have real service and managerial implications distinguishes this project from other text-mining and natural language processing works,” said one of the study’s authors, Shawn Mankad, assistant professor of operations, technology and information management at Johnson.
For example, negative comments carry more weight in a guest’s rating than positive ones. That unevenness means a simple average of positive and negative scores may not provide a clear view of a guest’s opinion of the hotel. And it underlines the importance of a hotel’s consistency: It’s better for hotels to provide guests with a moderately good overall experience than a hotel stay that is extremely good in some regards and terrible in others.
“The terrible service will outweigh the good feelings from the stay’s excellent aspects,” Mankad said.
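The asymmetry can be sketched with a toy scoring function (the weights here are hypothetical and illustrative, not the study's model): if below-neutral aspect scores are given extra weight, a stay that mixes excellent and terrible aspects ends up with a worse overall impression than a consistently moderate one, even though a simple average would rate the two stays the same.

```python
# Toy illustration (hypothetical weights, not the study's model):
# negative aspect scores count more than positive ones, so a stay
# mixing extremes fares worse than a consistently moderate one.

def guest_impression(aspect_scores, neg_weight=2.0):
    """Weighted mean where below-neutral scores (< 3 on a 1-5 scale)
    count neg_weight times as much as the others."""
    total, weight = 0.0, 0.0
    for s in aspect_scores:
        w = neg_weight if s < 3 else 1.0
        total += w * s
        weight += w
    return total / weight

consistent = [4, 4, 4, 4]   # moderately good across the board
mixed = [5, 5, 5, 1]        # excellent except one terrible aspect

# Both lists have the same simple average (4.0), but the weighted
# impression penalizes the mixed stay.
print(guest_impression(consistent))  # 4.0
print(guest_impression(mixed))       # (5+5+5+2*1)/5 = 3.4
```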
The study, “What Guests Really Think of Your Hotel: Text Analytics of Online Customer Reviews,” was written by Hyun Jeong “Spring” Han of the National Research University Higher School of Economics in Moscow; Mankad; Nagesh Gavirneni, associate professor of operations, technology and information management at Johnson; and Rohit Verma, the Singapore Tourism Board Distinguished Professor in Asian Hospitality Management at Cornell’s School of Hotel Administration (SHA), and now dean of external relations for the Cornell College of Business. The research was published by SHA’s Center for Hospitality Research.
The researchers analyzed 5,830 reviews from TripAdvisor, a leading hotel review website, covering 57 hotels in Moscow in 2012-13. The study demonstrated how automated software tools using natural-language-processing algorithms can analyze large volumes of text. The tools identified and sorted more than 18,000 keywords related to amenities, location, transactions, value and experience. The researchers also looked at the numerical satisfaction rating, from one to five, that each reviewer gave the hotel, and the type of traveler writing the review: business, family, couple or solo traveler.
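At its simplest, this kind of keyword-based theme tagging can be sketched as a dictionary lookup. The keyword lists below are made up for illustration; the study's actual dictionaries covered more than 18,000 keywords.

```python
# Minimal sketch of keyword-based theme tagging (illustrative keyword
# lists only; the study's dictionaries held 18,000+ keywords).

THEMES = {
    "amenities":    {"pool", "wifi", "breakfast", "gym"},
    "location":     {"metro", "center", "airport", "walk"},
    "transactions": {"booking", "check-in", "refund", "reception"},
    "value":        {"price", "expensive", "cheap", "worth"},
    "experience":   {"staff", "friendly", "comfortable", "view"},
}

def tag_themes(review_text):
    """Return the set of themes whose keywords appear in the review."""
    words = set(review_text.lower().split())
    return {theme for theme, kws in THEMES.items() if words & kws}

tags = tag_themes("Friendly staff and the breakfast was worth the price")
print(sorted(tags))  # ['amenities', 'experience', 'value']
```

Real systems would add stemming, phrase matching and sentiment scoring on top of this lookup, but the core idea of mapping review text onto a fixed set of themes is the same.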
The researchers found the reviews’ content can vary substantially – in sentiment, quality of writing and themes – from the numerical satisfaction ratings review writers assigned the hotel.
“This suggests that information from the text can potentially yield insights not indicated in the ratings for how hotels can improve their operations and better meet customer expectations,” the study said.
Reviews with higher numerical ratings tend to be shorter and discuss topics more broadly, whereas reviews with lower ratings tend to be longer and focus on a smaller number of major issues, according to the research.
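The length-versus-rating pattern is easy to check on any review corpus by grouping word counts by star rating. The reviews below are invented toy data, not the study's TripAdvisor corpus.

```python
# Illustrative check of the length-vs-rating pattern on toy data
# (hypothetical reviews, not the study's TripAdvisor corpus).
from collections import defaultdict

reviews = [
    (5, "Great stay, would return."),
    (5, "Lovely hotel, nice staff."),
    (2, "The room was never cleaned and the heating failed twice; "
        "reception promised a fix every day but nothing happened."),
    (1, "Check-in took an hour, the booking was lost, and the refund "
        "process dragged on for weeks with no apology from anyone."),
]

# Average word count per star rating.
lengths = defaultdict(list)
for rating, text in reviews:
    lengths[rating].append(len(text.split()))

for rating in sorted(lengths, reverse=True):
    avg = sum(lengths[rating]) / len(lengths[rating])
    print(f"rating {rating}: avg {avg:.1f} words")
```

On the toy data, the low-rated reviews are several times longer than the high-rated ones, mirroring the pattern the study reports.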
For the highest-rated hotels, 70 percent of reviews discussed the guest’s experience. In contrast, experience was mentioned in only 32 percent of reviews of lower-tier hotels and in 45 percent of reviews in the middle tier. Reviews of low-tier hotels focused more on transactions and value, compared with reviews of higher-tier hotels. Amenities and location came up more frequently for hotels in the middle tier.
The study’s results suggest hotel operators should pay special attention to the types of customers who stay at their hotel and note what each wants from a hotel stay. Hotels should also take advantage of positive feedback to better market themselves, the researchers said.
As a whole, the hospitality industry could benefit from developing more precise text analysis methods, the study said. “Guests’ true feelings are found in those comments – particularly if they write lengthy reviews that focus tightly on just a few issues.”

This article originally was published in the Cornell Chronicle, March 8, 2016. Reprinted with permission.