2. Informed: “I Don’t Know. I Gotta Get the Best One”
Being a consumer is like a job. You have to make sure you get the best one. If you get a Blu-ray player, you gotta do research.… You gotta go on Amazon and read a really long review written by an insane person who’s been dead for months … because he shot his wife and then himself after explaining to you that the remote is counter-intuitive. “It’s got really small buttons on the remote,” he said … before he murder-suicided his whole family. And now you’re reading it and going, “I don’t know. I don’t know which one to get, I don’t know. I gotta get the best one.” Who are you, the King of Siam, that you should get the best one ever? Who cares? They’re all the same, these machines. They’re all made from the same Asian suffering. There’s no difference.
—Louis C.K., “Late Show: Part 1,” Louie, season 3, episode 10, August 30, 2012
Despite the name, Boston’s Micro Center isn’t small—nor is it in Boston. It is, instead, an electronics supermarket on the Cambridge shore of the Charles River. It attracts customers from all over eastern Massachusetts, but I live a few minutes away. Once, while browsing the aisles, I came across an inexpensive accessory for my camera. I wasn’t happy with the included strap, and for a mere $6 this one would give me added security against dropping the camera. But was it any good? I had left my phone at home, so in a quiet corner I found a laptop that was online and quickly pulled up Amazon. The strap could be had for a few dollars less and, more importantly, it had a few decent reviews: it was not a lemon. Good, except that a sales associate looked over my shoulder and asked if I needed any help. Embarrassed, I explained that I would indeed be buying the product and was only checking the reviews.
Even when I plan to purchase something in a store, I feel uneasy about buying blind. With Amazon’s “Price Check” app on my phone, a quick photo of a bar code reveals all. As a reviewer of the application noted, “I use this app not for price (unless a huge difference), but to check the product review. It’s so handy! I can avoid buying things [the] amazon community doesn’t give a stamp of approval.” In fact, a third of surveyed adult cell phone owners said that they used their devices to look up reviews and prices while in a store during the 2011 holiday shopping season.1 Such people likely include maximizers, a term popularized by Barry Schwartz in his book The Paradox of Choice. Whereas a satisficer can settle for good enough, maximizers must be assured that every decision is optimal.2 They spend hours reading reviews and feel disappointed when an item falls short of expectations or is surpassed by a new model. They suffer from the fear that they could have made a better decision; this is the paradox of increased information and choice.
Today maximizers have an extraordinary amount of information available to them. Sites like Yelp and Amazon offer ratings in the form of stars. These ratings can be accompanied by reviews that are often haphazard but sometimes astoundingly detailed. Reviews can be professional, such as those in Consumer Reports, or amateur, such as those at Yelp, Amazon, and everywhere else on the Web. Because confidence correlates with large numbers, some sites distill the ratings, such as Rotten Tomatoes’ “freshness” percentage, Yelp’s average rating, and Amazon’s histogram. A bimodal distribution in which most ratings are either zero or five stars is a sign of controversy. Reviews themselves can be reviewed as “helpful” and commented upon. Forums and lists provide additional ways for people to discuss and perhaps even form a community. Unsure of the quality of tangible goods, reviewers can view photos and videos of products. On YouTube, these reviews serve as a way for reviewers to fashion their identity as a helpful expert (for example, with comments such as “This is my favorite mascara and here are my application tips”) or conspicuous consumer (“Let’s drop test the latest gadget”).
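The “sign of controversy” mentioned above can be made concrete. Here is a minimal Python sketch of the idea, not any site’s actual algorithm: build the kind of histogram Amazon displays, then flag a product as polarizing when most of its ratings sit at the extremes rather than in the middle. The function names and the 70 percent threshold are illustrative assumptions.

```python
# Sketch: detecting a "bimodal" ratings distribution.
# The threshold and function names are illustrative, not from any real site.
from collections import Counter

def rating_histogram(ratings):
    """Count how many times each star rating (1-5) appears."""
    counts = Counter(ratings)
    return {star: counts.get(star, 0) for star in range(1, 6)}

def looks_controversial(ratings, threshold=0.7):
    """Heuristic: a product is 'controversial' when most ratings
    cluster at the extremes (one and five stars)."""
    hist = rating_histogram(ratings)
    total = sum(hist.values())
    extremes = hist[1] + hist[5]
    return total > 0 and extremes / total >= threshold

# A polarizing product: raters either love it or hate it.
polarizing = [5, 5, 1, 5, 1, 1, 5, 1, 5, 1]
# A merely mediocre one: ratings cluster in the middle.
mediocre = [3, 3, 4, 2, 3, 4, 3, 2, 3, 3]

print(looks_controversial(polarizing))  # True
print(looks_controversial(mediocre))    # False
```

The same histogram that lets a shopper gauge consensus at a glance is thus also a crude signal of disagreement: an average of three stars can mean “uniformly mediocre” or “bitterly contested.”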
This information is varied and rich, but it is not novel simply because it appears on the screen. An essential function of comment is to inform: we express our thoughts for the benefit of others, and others seek them out to understand and make decisions. As noted in chapter 1, we have been gossiping about each other for a long time, and we can even understand gossip as a part of what makes us human. Similarly, the desire to comment on a written text is as old as writing. Ever since humans began writing, our scribbles have been a source of confusion and contention and have necessitated commentary. Perhaps the earliest instance of this is the Babylonians’ dictionaries of Sumerian. Yet we have needed help with more than just foreign languages. Because early writing lacked many of the conveniences that we take for granted today—such as vowels, punctuation, and spaces between words—the ancients needed help in deciphering their cherished texts. Hence, they developed conventions for annotating these works, and these (also ancient) annotations are known as scholia.3
Tom Standage’s 2013 book Writing on the Wall: Social Media—the First Two Thousand Years is a delightful corrective for our myopic tendency to think that the new is necessarily novel. For instance, the Romans wrote on their friends’ walls (literally), and Martin Luther “went viral” thanks to the extraordinary information technology of the day: the printing press. Pope Leo X likened the spread of Luther’s protests to a “plague and cancerous disease.”4 When considering reviews (comments that inform people), it is also worthwhile to consider the past. So I begin this journey to the bottom of the Web with a brief historical excursion. Many books have been written about criticism and review, but I suffice with a brief discussion of what is most often seen online, touching on the origins of the review, gold stars, likers, the crowd, and the critics now found on the Web. This discussion is from the perspective of a gadget addict who also is addicted to gadget reviews. For example, as a maximizer, I spend far too much time in search of the perfect product and often guiltily recall comedian Louis C.K.’s question: “Who are you, the King of Siam, that you should get the best one ever?” Reviews have been around for a while now, but never before were they so accessible that we’d regret our purchase the next day upon reading someone else’s comment.
When buying a camera, the new owner is advised to read the manual and, if overwhelmed and confused by the manual, to purchase another guide (sometimes called a “missing manual”) with more detailed instructions about how to use the gadget’s many functions. That is, an expert is needed to help the user to decipher and apply the product’s manual. Photographer Gary L. Friedman, for example, sells ebooks with “professional insights.” These texts are a hybrid of a professional review and user guide. Content seemingly calls forth even more content, which is a recurrent theme in the history of media.
During the Enlightenment, cloistered scholars slaving over the annotation of ancient authorities gave way to the likes of John Locke, Voltaire, and Isaac Newton; new thoughts and works abounded. However, the glut of work from lesser thinkers led Gottfried Leibniz, Newton’s contemporary and fellow inventor of calculus, to complain of the “horrible mass of books that keeps on growing.” He feared that there would be no end “to books continuing to increase in number,” and he was right.5 By the eighteenth century, print’s proliferation called into being new forms of commentary—ironically, more print—that we now take for granted. The seventeenth century’s comprehensive indexes and collections of abstracts were followed by more discriminating reviews in the eighteenth century. The French Encyclopédie and other reference works are illustrative of a new reading public and their desire to have the mass of knowledge made sensible and accessible. Although some might not view reference works as commentary, early reference work compilers were an opinionated bunch. In Dennis de Coetlogon’s A Universal History of Arts and Sciences (1745), the “Geography” article begins with the nation of France because it is “the first in rank” and “the most fertile, the most agreeable, and the most powerful in Europe.”6
Print’s proliferation accompanied the emergence of a new class of wealthy literates, including merchants and bankers. As chronicled by the German philosopher Jürgen Habermas, the new bourgeois, this “reading public,” constituted a “public sphere” in which all topics were discussed without deference to the authority of the ancients or of contemporary rulers. In fact, this led Charles II of England to issue a “Proclamation for the Suppression of the Coffee-Houses” in 1675.7 (Today we have check-in apps such as Foursquare that permit users to proclaim their fondness for a café in hopes of becoming its “mayor.”) Despite the proclamation, caffeinated commentary was not easily quieted. In the final months of the reign of Louis XVI of France, his government solicited lists of grievances (Cahiers de Doléances) from his subjects as suggestions for reform. This backfired on Louis because, when written down, the complaints hastened his end and the beginning of the French Revolution.
The new literates’ tastes were not limited to the civic and natural domains. As Frank Donoghue argues in The Fame Machine: Book Reviewing and Eighteenth-Century Literary Careers, all types of authors (now severed from the apron strings of patronage) sought to make careers in a marketplace that was characterized by an uneasy mix of contention and cooperation between authors, reviewers, and readers. London’s Monthly Review, established in 1749, initially conceived of itself as being “serviceable for such as would choose to have some idea of the book before they lay out their money or time on it.”8 Many competitors soon followed, especially The Critical Review, and the rivalry between these two reviews shaped the emergence of this genre as much as anything else.
By the beginning of the twentieth century, the “penny dreadful” in the United Kingdom and “dime novel” in the United States exemplified a further increase in popular literacy and a burgeoning consumer culture. Over five thousand new books and editions were published in 1896, the year in which the first “New York Times Book Review” was released (initially as the “Saturday Book Review Supplement”). At some point, the genre of the book review became so popular that it became a primary school assignment, and as early as 1885, teachers’ assignments included asking each student to “prepare a summary of some chapter” from their reading. In 1919, Wid Gunning, a twenty-nine-year-old film fan, began publishing Wid’s Films and Film Folk with reviews of over fifty silent pictures. In 1925, The New Yorker published its first issue, setting the mold for modern commentary and criticism.9
One lesson that can be drawn from this brief history of reviews is that comment leads to more comment. This is still true today, and the pace of change online can be understood as one generation of information management being overwhelmed and complemented by the latest. In the late 1990s, Web “portals” were the rage, and companies raced to dominate the collection and organization of information for users. In the new millennium, hierarchical directories have been supplanted by tags, keywords affixed to a piece of digital content by users. As David Weinberger argues in Everything Is Miscellaneous, tags are the embodiment of a new “digital disorder” in which the organization of information can be fluid, ad hoc, and disposable.10 People try to stay abreast of it all by consuming reports of trending tags and presentations of “tag clouds” in which terms are rendered in relative proportion to their popularity. Tags have even become a battleground where different factions compete to coopt the tags of their opponents. Online feminists, for instance, post ironic tweets under #INeedMasculismBecause, and masculinists do the same under #tellafeministThankYou. (I return to hash crashing in chapter 5.) In short, comment begets comment.
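The tag cloud’s rendering rule is simple enough to sketch in a few lines of Python. This is a generic illustration of the technique, not any particular site’s code; the tag names and font-size range are my own assumptions.

```python
# Sketch: a tag cloud maps each tag's popularity to a display size.
# Tag names and the pixel range here are illustrative assumptions.
def tag_cloud_sizes(tag_counts, min_px=12, max_px=36):
    """Linearly map each tag's frequency to a font size in pixels."""
    lo, hi = min(tag_counts.values()), max(tag_counts.values())
    spread = hi - lo or 1  # avoid division by zero when all counts match
    return {
        tag: round(min_px + (count - lo) / spread * (max_px - min_px))
        for tag, count in tag_counts.items()
    }

counts = {"photography": 120, "travel": 60, "recipes": 30}
print(tag_cloud_sizes(counts))
# The most popular tag renders largest, the least popular smallest.
```

The cloud, like the star histogram, is a compression: it trades the detail of individual acts of tagging for a single glanceable picture of what a community cares about.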
In a four-star review of “Bacon Flavored Toothpaste,” an Amazon reviewer wrote, “The stuff tastes horrible, but that was why I bought it as a gag gift haha. I gave four stars because besides bacon and mint, another taste was present like a manufacturing … chemical … I don’t know … bad taste that didn’t belong.”11 That is, the reviewer liked the expected surprise taste of bacon but subtracted a star for the unexpected taste of the unknown. We are fortunate that Amazon permits its commenters to explain their allotment of five stars, but in the past few years we have seemingly undergone an even greater compression of expression. If a comment is stripped of all that is superfluous, a kernel of disposition remains. Do you like this thing? Do you agree (+1)? Do you wish to share it? This concept underlies Facebook’s like button. When the button was released in April 2010, Mark Zuckerberg predicted that this newest form of (attenuated) commentary would be used over one billion times within its first twenty-four hours. Facebook never substantiated its original boast, but it noted that within the first week fifty thousand Websites implemented its new social plugins.12 Hundreds of millions of people at least saw the new button. Yet a better measure of its impact may be how quickly its competitors moved to provide alternatives. In August 2011, Google launched the +1 button for its social network. By 2012, Web pages were riddled with social buttons.
In January 2013, the synth-pop group The Knife released a nine-minute music video for the lead track from their forthcoming album. Beneath the video was a button for sharing the content on social media. Clicking the “Add This” button led users to top-level buttons for Facebook, LinkedIn, Twitter, email, and “More.” Hitting “More” revealed buttons for print, Gmail, StumbleUpon, Favorites, Blogger, Tumblr, Pinterest, and “More.” The final “More” provided buttons for over 337 different sites! The button glut prompted a prominent designer to advocate that people “Sweep the Sleaze” from their Webpages.13
Of course, stars and like buttons are not innovations of the twenty-first century. School children have long received gold stars for good spelling and handwriting. Some foodies might know that Michelin rates restaurants with stars without knowing that the company makes tires, but it has done both since 1900. The earliest significant guides appeared at the end of the nineteenth century—the Hachette in France and the Baedeker in Germany—and were designed for the railroad traveler, with suggestions for carriage excursions. But the car and its tires were the engine of the restaurant review and the source of the now ubiquitous stars.
In his book The Michelin Men: Driving an Empire, Herbert Lottman recounts the history of the company that was started by the Michelin brothers, including the synergy of the new automobile with increased literacy and leisure. In 1900, Michelin printed its first guide, and stars were used only to indicate the class or cost of hotel accommodation, a convention that was used by existing train guides. The guide also included symbols for parking garages, type of motor oil, repair shops, and darkrooms. The following year it indicated whether hotels had flush toilets, bathtubs, and showers. However, it wasn’t until 1925 that Michelin introduced the three stars that we know today, though their eventual significance was not yet appreciated. Between 1925 and 1930, different systems were used in different guides: Michelin distinguished between stand-alone and hotel restaurants, between Paris and the provinces, and used a five-star system in some of its guides. In the 1930s, the three-star system began its ascendancy with a charmingly modest meaning: a restaurant or hotel could be “Worth a detour” (two stars) or “Worth a journey” (three stars). Today, a single star denotes “A good table in its community,” two stars an “Excellent table: worth a detour,” and three stars “One of the best tables in France; worth the trip.”14
Confusion over the number and meaning of stars is a topic that I return to in chapter 7, but two other aspects of the history of the Michelin guide are relevant today. First, for people today, print guides are associated with a cost, and online guides generally are free. The Michelin guide was free, too, for its first twenty years until, as legend has it, one of the Michelin brothers discovered that the guidebooks were being used as wedges under uneven table legs. Its cost was offset in part by advertisements for Michelin products (such as brake shoes) and for automobiles, garages, and hotels. In an effort at impartiality, Michelin removed hotel advertisements in 1908 (an early lesson that is not being followed today given the recent spate of lawsuits about biased and fake reviews). Second, even Michelin, now known for its discriminating but anonymous “inspectors,” used public input. Early guides solicited information from hotels and garages about their establishments. They also welcomed feedback from the public about the accuracy of the guide, including whether the prices listed in the guide reflected actual costs and whether the hotel had bedbugs. Indeed, an early advertisement for the guide portrayed an unhappy wedding night when a hapless couple, without a Michelin guide, made the mistake of staying at a hotel that was ridden with bedbugs.15 A recent resurgence of bedbugs has made them again a (contentious) topic of (online) reviews.
Much like accusations about hotel bedbugs, digital cameras can be a surprisingly contentious subject. Digital photographers can be argumentative and loyal to their brand—the feud between Canon and Nikon “fanboys” is infamous. The site DPReview, now owned by Amazon, offers forums, ratings, rankings, and reviews. Carefully reading a camera review is not to be undertaken lightly: a review can run over twenty pages with thousands of words and dozens of charts, tables, and sample images and movies. To justify its assessment to “pixel-peeping” skeptics (those who zoom into images and compare them at the pixel level), the site provides a “side-by-side camera comparison” that compares the features and sample images of different cameras.
This approach to review coincides with the early twentieth-century movement of engineers who applied scientific methods, rigorous standards, and progressive planning to economic and social reform. The mechanical engineer Frederick Winslow Taylor famously studied the motions of men who were handling pig iron and recommended ways to minimize their wasted movements. Stuart Chase, educated as an engineer and accountant at MIT and Harvard, was cut of a similar cloth, but focused on the inefficiencies of the larger American economy. Chase’s accessible and widely read criticisms grew out of an experience that he had working at the Federal Trade Commission. During World War I, he was part of an investigation of the meat-packing industry. Frustrated by the experience, he concluded that the regulation of the industry was “nothing more than a comedy”—a sentiment that contributed to his dismissal in 1921.16 However, he did not lose his technocratic ideals. Instead, his path was set upon a socialist cant, premised upon the idea that waste in industry and government could be judiciously pruned so that consumers could enjoy a basic level of security and satisfaction.
Chase’s colleague, F. J. Schlink, was a mechanical engineer who also had worked for the U.S. government. His time at the Bureau of Standards, at the quality control departments of Firestone Tire and Bell Laboratories, and at the American Standards Association imparted a passion for planning and study. In 1927, Chase and Schlink collaborated on a publication that likened the dizzying array of product claims to the outlandishness of Alice’s Wonderland: “Why do you buy the tooth paste you are using—what do you know about its relative merit compared with other tooth pastes—do you know if it has, beyond a pleasant taste, any merit at all?” Every toothpaste and every other product claimed itself to be the best and greatest: “We are all Alices in a Wonderland of conflicting claims, bright promises, fancy packages, soaring words, and almost impenetrable ignorance.”17 Their book, Your Money’s Worth: A Study in the Waste of the Consumer’s Dollar, is a rousing argument against manipulative advertising. In its stead, they advocated that claims should be verified against quality standards. Noting that the federal Bureau of Standards spent $2 million on tests annually but likely saved the government $100 million every year, they asked why similar efforts should not be made on behalf of the American consumer. For instance, given that the tire industry had “voiced a warning that tires were being made to last too long for healthy business,” they imagined conducting experiments in which tires and automobiles could be tested for longevity and safety.18
Their book was a success and brought much attention to the small “consumers’ club” that Schlink maintained in White Plains, New York. In 1929, Schlink and Arthur Kallet, the club’s secretary (also an engineer and MIT alumnus), founded Consumers’ Research to work on a national scale. In 1933, they published One Hundred Million Guinea Pigs: Dangers in Everyday Foods, Drugs, and Cosmetics. The American consumer was no longer characterized as an “Alice in Wonderland.” The country’s 100 million residents were test subjects for misleading but well-advertised, sometimes useless, and even dangerous products. Did readers realize that the toothpaste that they brushed their teeth with “contains enough poison if eaten, to kill three people; that, in fact, a German army officer committed suicide by eating a tubeful of this particular toothpaste?” Just as the criticisms found in Your Money’s Worth are comprehensible a hundred years later, the critiques of Guinea Pigs could be taken from today’s newspapers’ discussions of sweeteners, additives, and the transformation of food into “borderline food substances.”19 Although Schlink was at the forefront of progressive consumer interests, he was not as sympathetic to socialist concerns as his former colleague Stuart Chase was. In 1936, when three of his employees formed a union, Schlink fired them and acted forcefully against the subsequent strike, which he thought was “an unholy alliance” of strikers and “capitalist advertisers” against consumers.20 The strikers eventually formed the Consumers Union, which became the publisher of the Consumer Reports that we know today.
From its outset, Consumer Reports has refused advertising and free samples from manufacturers. (This is not something all blog reviewers can say today.) It currently reports that it has 157 shoppers in 30 states with a testing budget of approximately $20 million. In 2002, it tested 1,863 products, including cars. Given its engineering legacy, Consumer Reports continues to focus on rigorous empirical testing of products. Its Website describes efforts to be as statistically accurate and sound as possible. For instance, in a 1999 study, the person responsible for gathering nine thousand condoms did not limit himself to clinics and pharmacies: “When it came time to finding condoms sold in vending machines, the only place he could find them were nightclubs. He spent many an hour self-consciously feeding coins into the machines located in the nightclubs’ men’s rooms. But he was successful in getting the sample we needed!”21 This dogged, careful, and analytic approach to commenting about products continues to this day.
Among his many accomplishments, Kevin Kelly is founding editor of Wired magazine and the cofounder of the WELL (Whole Earth ‘Lectronic Link), a seminal bulletin board system. He also began Cool Tools, a blog for tools that “really work.” At his core, Kelly is a liker: a tool guru with an enthusiasm for sharing recommendations about stuff, especially items that are “tried and true.” Almost anything is within scope, be it a literal tool, a kitchen gadget, or a useful book. Tools can be “old or new as long as they are wonderful.” The philosophy of the site is to “post things we like and ignore the rest,” and it asks readers to “tell us what you love.”22 Cool Tools started as an email list in 2000 but migrated to the Web in April 2003 with a post about a keylike miniknife that might pass through airport security. Unlike many other sites that were launched in blogging’s early days, this one is an extension of a comment culture whose roots stretch back to the 1960s.
In 1966, twenty-eight-year-old Stewart Brand was at the beach, tripping on LSD and gazing at San Francisco’s skyline, when he noted the slight curve of the horizon and mused that if he ascended, the curve of the earth would become more pronounced until he could see the whole of the earth. Such a perspective could be the jolt that people needed to appreciate that planet Earth was “complete, tiny, adrift.” After gaining this insight, people would never “perceive things the same way” and would get on with the business of “getting civilization right.” Given the frenzied activity of NASA and the Soviets, why had we not seen a picture of the earth yet? The next morning he began printing buttons and posters with that very question.23 A couple years later, the Apollo 8 moon mission delivered the photograph, and Brand’s Whole Earth Catalog, published regularly between 1968 and 1972 and intermittently thereafter, featured the image of the blue and green marble on its cover.
The stated purpose of the Whole Earth Catalog reads like a manifesto: “We are as gods and might as well get used to it.” This power arises from the ability of the “individual to conduct his own education, find his own inspiration, shape his own environment, and share his adventure with whoever is interested.” Brand’s larger take on the world was quite unlike that of the East Coast accountants and engineers. He trained as a biologist at Stanford University and became an entrepreneurial hippie who, among many other things, organized the Trips Festival, a storied rock music event. His enthusiastic vision of sharing and human progress was best represented in the Whole Earth Catalog—a few editions of which were edited by Cool Tools’ Kevin Kelly. The catalog sought “tools that aided this process” of human advancement and were useful, furthered self-sufficiency, provided good value, were little known, but easily purchased by mail. Computers, too, could be powerful tools. As they became “faster, smarter, smaller and cheaper,” they shifted the balance from the estrangement of institutional computing to the empowerment of personal computing. There was one problem: “For new computer users these days the most daunting task is not learning how to use the machine but shopping.”24 We could become gods if we made the right purchases.
Personal computers also could be networked, providing a new way for people to communicate with each other and build community around the Whole Earth ethos. In 1984, Brand began publishing the Whole Earth Software Catalog, and in 1985, with Kelly’s help, he began the WELL bulletin board system in San Francisco. Communication scholar Fred Turner has argued that much of the Internet’s culture is rooted in this West Coast movement from “counterculture to cyberculture.” In a 1995 essay for Time magazine, Brand himself wrote that “We Owe It All to the Hippies”: they provided “the philosophical foundations of not only the leaderless Internet but also the entire personal-computer revolution.”25 LSD, geodesic domes, blue boxes (for phone phreaking), and the personal computer were tools of personal empowerment. Awareness and knowledge about all of this was shared, and this exchange was further amplified and decentralized when it went online at the WELL.26
Echoes of this ethos also can be seen in the career of Mark Frauenfelder, cofounder of Boing Boing, a prominent blog that has struggled with comment (as discussed in chapter 1). The blog’s description of itself as a “directory of wonderful things” reflects that its print predecessor (a zine of the same name) was inspired by the Whole Earth Catalog. Frauenfelder first went online by way of the WELL and worked with Kelly at the launch of Wired.27 In the new millennium, the Whole Earth ethos, complemented by the do-it-yourself (DIY) ethic of zines and the skills of hobbyists and hackers, has resurfaced with the ascent of the maker movement. Frauenfelder has served as the editor-in-chief of Make magazine and in 2013 took the same position for a publishing collaboration with Kelly called Cool Tools Lab. Its first product was a printed book: a “curated selection of the best tools available for individuals and small groups” that was based on ten years of postings from Kelly’s blog. In the promotional video for the book, Kelly noted that “pages of the Whole Earth Catalog were homemade and very personal, filled with deep enthusiasm and amateur obsession about both old and new ways of doing things. You could learn how to start raising bees or begin blacksmithing.… I began carrying on this tradition in a blog called Cool Tools.”28
The Whole Earth ethos, from the original Catalog through Boing Boing and Cool Tools, exemplifies the idea that sharing one’s reviews, likes, and +1s can be a personal offering that reflects enthusiasms and experiences for the betterment of other people—even if it is about an egg timer, a dog canteen, or a pencil sharpener.
The Zagat review for Veggie Galaxy, a restaurant in my neighborhood, reads: “‘Diner classics are reimagined’ in ‘fabulous,’ ‘stereotype-defying’ vegetarian and vegan guise (‘you’ll totally forget you’re not eating meat’) at this ‘retro’ joint near Central Square Theater; ‘courteous’ service and ‘cheap’ tabs round out the ‘wonderful concept.’”29 This Frankenstein sentence is cobbled together from the disparate reviews of ordinary people. In its “About Us” page, Zagat traces the origins of this approach to a 1979 dinner party conversation about unreliable restaurant reviews: “It was at that moment Tim suggested taking a survey of their friends. This led to 200 amateur critics rating and reviewing 100 top restaurants for food, décor, service, and cost. The results, printed on legal-sized paper, were an instant success with copies being scooped up all over town.” Zagat now claims to be the “world’s leading consumer survey–based leisure information source.” As evidence of the value of such commentary, Google acquired Zagat in 2011 for $150 million (and in the summer of 2013 launched a new Zagat site and app that made ratings and reviews freely available online). A Google executive drew a connection between Zagat’s origins and user-generated content of today:
Their surveys may be one of the earliest forms of UGC (user-generated content)—gathering restaurant recommendations from friends, computing and distributing ratings before the Internet as we know it today even existed. Their iconic pocket-sized guides with paragraphs summarizing and “snippeting” sentiment were “mobile” before “mobile” involved electronics. Today, Zagat provides people with a democratized, authentic and comprehensive view of where to eat, drink, stay, shop and play worldwide based on millions of reviews and ratings.30
Although Michelin and other guides have always employed user input, with Zagat the input was the content. However, Zagat’s “user-generated content” was different from that of sites like Yelp and Amazon. Zagat staff selected and combined the pithy anonymized excerpts. At Web 2.0 sites, one can see reviews intact and reviewers are identifiable. (However, as I will discuss in the next chapter, the relative prominence of positive or negative reviews is controlled and manipulated by Websites.) This idea that aggregations of decentralized and democratic opinion can be a valuable form of information is often spoken of as the “wisdom of the crowds.” Like UGC and Web 2.0, this term and its cousins crowdsourcing and collective intelligence are frequently misunderstood. And such buzzwords are not always misunderstood in the same way. Some have little substance, and others have substance that is sometimes forgotten.
The term Web 2.0 is attributed to a 2004 conversation on the naming of a conference about the reemergence of online commerce after the collapse of the 1990s Internet bubble. Tim O’Reilly, technology publisher, wrote that chief among Web 2.0’s “rules for success” is to “Build applications that harness network effects to get better the more people use them. (This is what I’ve elsewhere called ‘harnessing collective intelligence.’)”31 Additionally, the popularization of collective intelligence can be traced back to two men who were associated with the WELL—the Whole Earth ’Lectronic Link. In the 1990s, chaos and complexity theory were hot topics that Kevin Kelly popularized with his book Out of Control: The New Biology of Machines, Social Systems, and the Economic World.32 Kelly showed how order can emerge from seeming chaos: how the beautiful midair choreography of a flock of birds arises when many individuals follow simple rules of interaction. This “new biology” was mostly gleaned from and applied to the natural world, but Kelly also posited it as a theory of social organization and intelligence via the notion of the “hive mind.” This idea persisted into the new millennium, when varied new media-related phenomena required explanation. In 2002, Howard Rheingold, another famous WELL member who had previously authored a seminal and popular treatment of virtual communities, published Smart Mobs.33 In this latter book, Rheingold argued that new forms of emergent social interaction would result from mobile telephones, pervasive computing, location-based services, and wearable computers. 
Two years later, in The Wisdom of Crowds, James Surowiecki made a similar argument, but instead of focusing on the novelty of technological trends, he engaged directly with the social science of group behavior and decision making.34 Surowiecki argued that groups of people can make good decisions when there is diversity, independence, and decentralization of opinion and when that information is appropriately aggregated. An open question (to which I will return) is whether the sites that feature user reviews (like Yelp and Google) are sufficiently impartial (with respect to their own interests) and fair (about those that they review) to qualify as providing “wise,” or at least useful, comment that informs.
This brief historical excursion has been a first step on an expedition to the bottom half of the Web. But one last archetype should not be missed: the critic. In the age of the Web, the question of who gets to be a critic has been contentious. This can be seen in James Berardinelli’s career as a film critic, which spans a historical inversion. In three moments separated by roughly seven years each, this amateur reviewer and his peers were portrayed as a novelty, an invasion, and the death of “serious” criticism.
In 1997, the Los Angeles Times profiled the then twenty-nine-year-old Berardinelli, author of over twelve hundred film reviews, in an article titled “In Online World, Everyone Can Be a Critic.”35 Berardinelli was noted as one of the best online amateur reviewers. He posted his first review to Usenet in 1992 and on his Website ReelViews in 1996. Berardinelli related his efforts explicitly to the love of movies, a passion demonstrated by the many hours he spent on reviewing in addition to his continued work as an engineer. Reviewing itself provided little monetary compensation; instead, his day job provided “enough money to pay the mortgage, keep up my home theater, finance film festival trips, and buy the 20 gallons of gasoline I need each week to attend screenings.”36 Yet what he hasn’t gained in coin, he has gained in personal satisfaction. The site Screen Junkies lists him as one of the ten most famous movie critics, along with Rex Reed, Roger Ebert, and Pauline Kael. His online reviews have been collected in print books, one of which is favorably introduced by Roger Ebert. He even met his wife through his Website.
Yet, does his passion for movies necessarily make him a “film critic”?
When I first started reviewing in 1992, I rigorously avoided the term “film critic” because it was a label I didn’t feel I had earned. I referred to myself as a “film reviewer.” It wasn’t until the late ’90s, after the website was on-line and I had 1000 reviews to my name, that I became comfortable with the “film critic” label. I am a populist critic, which means I write for the masses. That’s not to say I am incapable of writing deeper, more literate essays, but the general purpose of a 700- to 1000-word review is to provide an informed opinion about a movie. My goal with a review is threefold: provide my opinion and explain it, present enough information so that someone reading the review will be able to make a determination about whether they might like it (irrespective of whether or not I did), and offer some insight that those who have seen the movie may find interesting. I have some longer pieces on the website for older movies that can run up to 2000 words. Those typically contain more critical analysis than the “regular” reviews.37
The composer and lyricist Stephen Sondheim makes a similar, somewhat more nuanced distinction. Where Berardinelli sees the label critic as a badge of distinction to be earned with practice, Sondheim makes a functional distinction: “Reviewers are reporters; their function is to describe and evaluate, on first encounter, a specific event,” and because they often work on a deadline, some become blandly enthusiastic or cynically jaded. A critic, on the other hand, also describes and evaluates, “but from a loftier perspective” provided by time and distance: “That loftiness sometimes leads them to promote themselves rather than the object of their affection, but loftiness is what the readers look to them for, and often with rewarding results. What readers look for in a reviewer is immediate guidance.” Hence, a reviewer does not require any special knowledge. Echoing the motive of editors of the Monthly Review from the eighteenth century, Sondheim writes that “People read reviews to decide whether they should spend a considerable sum of money to see for themselves the subject under the microscope.”38 Although Berardinelli took writing classes in college, has read countless books on film and its history, and has attended many symposiums, it is his “great love and appreciation for movies” that characterizes his efforts. He believes that requiring a formal film education to write movie reviews is “the height of arrogance”: “One of the great things about movies is that almost everyone has an opinion, and it’s rare that any two will be the same. Film criticism is not surgery—you don’t need a degree to be an effective practitioner of it.”39
By 2004, the proliferation of online film reviews prompted Wired magazine to announce an “Invasion of the Web Film Critics.”40 It had become possible for those writing for online publications to be accredited by the studios, meaning that they could see advance screenings. (Before Berardinelli was accredited, he attended many opening nights to be current.) Some of these writers even earn a living by writing for online publications like Salon and Slant. Moreover, few print reviewers could long ignore the need for their reviews to be available to and engage an online audience. Aggregators like Rotten Tomatoes (1999) and Metacritic (2001) were new forces to be reckoned with and evidence of the logic of the crowd and the recurrent theme that comment often prompts new comment and new means of managing it.
What was happening in film was not unique. The arrival and “invasion” of online reviewers was manifest across media, including music and literature. The critics who evaluate “from a loftier perspective” (according to Sondheim, those who read “to learn something about the cultural landscape”), naturally question their own role in the new landscape. And they have always done so: Michel Foucault, T. S. Eliot, Oscar Wilde, and Walt Whitman each attempted to define the role of the critic. In 1960, Alfred Kazin did so in an essay in the New York Times Book Review titled “The Function of Criticism Today,” and in 2010, the Review asked six “accomplished critics” to examine current criticism by reflecting on Kazin’s essay. The Review’s editors noted that “We live in the age of opinion—offered instantly, effusively and in increasingly strident tones.” Much of this goes by the name of criticism, but “where does it leave the serious critic?”41
Stephen Burn, scholar and author, wrote that “While Kazin could complain in 1960 that ‘the audience doesn’t know what it wants,’ with the advent of Amazon reviews and other rating sites the audience is abundantly vocal.” Indeed, “the audience now talks to itself.… The age of evaluation, of the Olympian critic as cultural arbiter, is over.” Yet the critic can still provide a valuable function by exhuming a work’s context and placing it within a larger frame. Writers Katie Roiphe and Sam Anderson both argue that critics can distinguish themselves by writing well. For Anderson, the role of the critic is to amplify the conversation: “we make the whispered parts of it audible; we translate the coded parts into everyday language.” For Roiphe, critics “have always been a grandstanding, depressive and histrionic bunch,” but if they wish to compete with “the seductions of Facebook … [and with] every bright thing that flies to the surface of the iPhone,” they must write beautifully. Only by exemplifying grace in thought and writing can they have any authority to separate the talent from the transitory: “There is so much noise and screen clutter, there are so many Amazon reviewers and bloggers clamoring for attention, so many opinions and bitter misspelled rages, so much fawning ungrammatical love spewed into the ether, that the role of the true critic is actually quite simple: to write on a different level, to pay attention to the elements of style.”42
A function of comment is to inform—to share our thoughts for the benefit of others. This motive was apparent in the earliest days of digital communication. Internet pioneer Vint Cerf noted that “when e-mail showed up in 1971 on the ARPANET, we discovered instantly that e-mails were a social network phenomenon.” The evidence was the quick appearance of two email lists that were dedicated to “book reports and restaurant reviews”—the SciFi-Lovers and Yum-Yum lists.43 And many models of review that are now common on the Web precede the digital age itself. Stars arose a century ago to discern relative worth, engineers provided detailed comparative analysis, and likers shared recommendations that were rooted in love and experience. The crowd shared its particular, peculiar kind of wisdom, and the critic highlighted and connected with analysis and insight. All that has gone before is present on the Web—and more. Each of these types of informing comment existed before the twenty-first century but never in such number. Nor were they ever as easily accessible as they are now via a barcode and a smartphone. Also, there are now genres of review that include and amplify earlier forms into something new. The Wikipedia article on “Unboxing” likens it to “geek porn” and describes it as a video of the “unpacking of new products, especially high tech consumer products.” The earliest instance of the term appears to refer to a 2006 YouTube video of a Nokia E61 smartphone. Yet an unboxing is not a comparative analysis or an expert review. In the age of the Web, where both gadget lust and conspicuous consumption operate on the thin edge of time, unboxing is a novel genre and new ritual. For some, it is even a way to make money, as reviewers buy (and later return) products solely to unbox them online.44
Interestingly, these videos are somewhat gendered as well. Unboxing videos are mostly by men about gadgets received by mail; haul videos are more often the result of women’s shopping trips to local stores. Additionally, different types of products have their own subgenres of review. In 2004, a physical therapist told me that because my shoulders were askew, I should stop using a messenger-style bag over one shoulder and instead use a backpack that evenly distributes weight. Fortunately, I found one that suited me at a thrift shop. Unfortunately, almost ten years later, it was disintegrating, and novel finds at thrift stores are not repeatable. So I turned to the Web. Product reviews are numerous and popular on YouTube—high-tech unboxings are only the tip of the iceberg. One can find video reviews for silly putty, the egg genie, a pancake pen, and the double bullet (a sex toy).
There are also the reviews from the “doomsday prepping” survivalist community. There are an estimated three million “preppers” in America, and some spend hundreds of thousands of dollars on their bunkers and gear. (They even have their own dating sites.) At YouTube, prepper reviewers are typically white Christian men who are concerned with an overreaching big government, gun rights, and the collapse of civil society. Their slogan is “pray for the best, prepare for the worst.” As one blog posting noted, these folk are “completely obsessed with both gear and the idea of self-sufficiency. They prize durability and functionality in a product because their fervency makes them believe their lives will depend on it.”45
Many of these reviews are for products from Maxpedition, a reputable but expensive brand that initially sold to the military, law enforcement, and emergency responders. Its market expanded when the FR-1 medical pouch was adopted by survivalists. A rural survivalist’s bag will likely include maps, cash, flashlights, a handgun, hand sanitizer, a compass, a GPS navigator, knives, toothpaste, bandages, food bars, water filters, antibiotic ointment, parachute cord, and a battery charger for gadgets—among many other things. (Flashlights and knives are fetishized objects that garner many reviews.) Like members of other subcultures, survivalists have their own lingo. For instance, a “bug-out bag” is a prepacked bag that can be grabbed from a closet or car trunk and that is meant to help people survive for seventy-two hours after a disaster. (Discussions about the ideal contents of a bug-out bag are extensive.) An “EDC” is an “everyday carry bag.” A video “load out” is much like an unboxing video, except that as the reviewer unpacks the bag, he discusses his loading strategy and the merits of each item. Many reviewers have military experience or have adopted military vocabulary and speak of PALS webbing and MOLLE-compatible attachments.
The backpack that I purchased, the Maxpedition Pygmy Falcon-II, has dozens of YouTube reviews, some of which are fifteen minutes long. In my favorite review and “load out,” a young man begins by testing the bag’s stability while he attacks a martial arts dummy and then jumps rope. He admits that jumping was not a good idea because the pack comes down when he goes up. As he unpacks, he finds the Bible, the Declaration of Independence, and an anti-Obama tract within its pockets.46 These materials are common in prepper reviews and are reminiscent of the Cristal champagne that rappers often have chilling in their refrigerators on MTV’s Cribs. Reviewers take their task seriously, though sometimes one cannot help but laugh at the bravado. In one odd juxtaposition, an Amazon reviewer of the Falcon-II reports that “I bought this for my 5th grader [and it] works very well.” Additionally, the bag can fit an M4 assault rifle, although “it does not get a five star because the drag handle is small if you needed to drag a wounded team mate while wearing gloves and under fire.”47 (I wonder, how often will his fifth grader need to carry an M4 or drag a wounded teammate?) One can even find some humor in which rugged survivalism is replaced with domesticity. In one case, a “go-bag” became a favorite diaper bag because the adjustable straps could be quickly fit to the father or mother and the main compartment fit wipes, a full pack of diapers, and other miscellany. Additionally, “there are 2 water bottle holders, which is perfect for carrying a water bottle for you, and a sippy cup for your kid.”48
Despite the relative novelty of the unboxing and haul videos, many insights learned from the past can be applied to online comment today. For instance, comment begets more comment, as was seen in early literary reviews and the glut of social media buttons today. Also, the tensions between public input and expert opinion preceded and continue into the digital age, as do arguments about who can claim to be a critic. Most important, the historical proliferation of comment accompanied an increase in consumerism, as seen in the story of the Michelin stars. This point seems especially salient today. Although many types of online informing comment have historical antecedents, the scale and pervasiveness of comment today are remarkable, and much comment is related to the consumption of goods and services. Online comment is worth billions of dollars and subject to much manipulation—the topic of the next chapter.