It is easy for manufacturers and retailers to know how much product moves out of their doors. It is harder, but very necessary, for businesses to know what kinds of people buy what kinds of goods, with what frequency, with what degree of brand loyalty, and whether in response to advertising. Nowadays that knowledge flows directly – if imperfectly – from Google, Amazon, and Facebook data. In earlier times, it came from consumer surveys.

To make this large subject manageable, and to focus on the area I know best, this essay will tell the story of the leading firms that provided panel research. This is a survey method in which (mostly) the same people are polled at multiple points in time. Manufacturers of FMCG (fast-moving consumer goods) prefer panel research because, for statistical reasons, it yields better estimates of trends in consumer behavior.
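
Here is the standard sampling-theory result behind that claim. If the same $n$ households report in two periods, the change in (say) mean purchase volume is estimated with variance

$$ \mathrm{Var}(\bar{x}_2 - \bar{x}_1) = \frac{\sigma_1^2 + \sigma_2^2 - 2\rho\,\sigma_1\sigma_2}{n}, $$

where $\rho$ is the correlation between a given household’s behavior in the two periods. Independent samples give $\rho = 0$; a panel gives a strongly positive $\rho$, so the same sample size yields a markedly more precise estimate of the trend.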

Until the 1980s, the two firms dominating the field were the well-known A.C. Nielsen Co. and the perhaps lesser-known Market Research Corporation of America, MRCA. In a tacit division of the research market, Nielsen reported on consumer media habits and MRCA provided data on consumer purchasing. Nielsen’s data led clients to more efficient media buying (more reach and frequency of exposure per ad dollar spent). MRCA’s data gave the (mostly) same clients sharper notions of which flavors, package types, sizes, and so on were gaining favor among consumers, as well as which kinds of coupon promotions were garnering the best responses.

Who were these clients?

Both Nielsen and MRCA sold data, and some analyses of these data, to the USA’s biggest FMCG manufacturers, notably Coca-Cola, General Foods, General Mills, Keebler, PepsiCo, and the makers of many, many more top consumer brands that are familiar to you.

These companies were highly motivated to purchase both data streams. Regardless of whether advertising “works,” manufacturers accept that an advertising presence is needed, and will pay more for repeated exposure to larger and more desirable audiences. As a 30-second commercial spot during the Super Bowl now costs a manufacturer $4 million, companies want to know that this is money well spent! Similarly, gaining a single market share point in a large market, like breakfast cereals, can mean millions in added revenues.

For decades, then, these client companies peered at Nielsen and MRCA reports side by side, seeking the holy grail of market research: to know how consumers’ media behavior relates to their purchasing behavior. Retailer John Wanamaker’s famous complaint – “I know half my ad dollars are wasted, I just don’t know which half” – still bedevils advertisers today, though the Internet has changed almost – almost! – everything.

The clients used a variety of data from sources other than MRCA. In addition to their internal data on marketing costs and shipment volumes, they bought data on the movement of product through intermediate sections of the distribution pipeline; consumer attitude and preference surveys; and data on the advertising expenditures of their competitors and the magazine and newspaper readership of their target market households. They also bought results of taste tests and focus groups. Consumer panel information competed with many other services for the attention of corporate market researchers and for their budget dollars.

When market research was young and a novelty, it commanded the attention of top executives. By the 1970s, however, panel data’s point of entry into the client corporation was at a fairly low level. As research results were summarized and filtered upward through levels of client management, they lost their “brand name” identification with the originating supplier, and the supplier lost visibility. That loss of visibility among clients may have played a small part in the demise of both Nielsen and MRCA, decades later.

The consumer panel industry evolves

The story of Nielsen and MRCA offers lessons on strategy, governance, technological change, and changing markets. Supporting players in the drama are manufacturers, consumers, retailers, and media – with walk-ons by several academic scholars.

Here’s how it all started.

Modern survey research (as opposed to other kinds of market research like focus groups, test kitchens, and so on) adopted its analytic methods from statistics. These methods allow researchers to make inferences about large populations based on the observation of small samples. Sir Ronald Fisher pioneered statistical sampling theory in the 1930s. As is well known to anyone who’s seen the “accurate within 3%” disclaimer on polls, this theory is the basis of survey research.
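
That disclaimer is a one-line application of the theory. For a proportion estimated from a simple random sample of $n$ respondents, the 95% margin of error is approximately

$$ 1.96 \sqrt{\frac{p(1-p)}{n}}, $$

which in the worst case ($p = 0.5$) comes to about ±3 percentage points for a sample of roughly 1,100 people – one reason so many national polls use samples of about that size.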

It was the ways in which sample data were collected, however, and the interplay of technological and social changes, that did much more to shape the business in subsequent decades.

In 1936, Arthur C. Nielsen, Sr., licensed a device from MIT that would record the stations to which a radio had been tuned. In 1942 he launched the Nielsen Radio Index.

Oskar Morgenstern remains best known for his pioneering work in game theory with John von Neumann. In 1941, however, Morgenstern started a company called Industrial Surveys Inc., shortly to change its name to Market Research Corporation of America. MRCA’s agents conducted door-to-door interviews, asking householders about their recent purchases and sometimes, with the householder’s permission, performing cupboard inventories. Cupboard inventories showed, for example, that a “Stock up on Soup!” ad campaign could persuade consumers to carry larger inventories of canned soup in the home – relieving distributors and retailers of the cost of this inventory. The resulting data were key-entered onto punch cards for tabulation and reporting.

The 1950s brought a wave of urbanization. Wary of strangers, new city-dwellers were less responsive to door-to-door interviews. MRCA moved to mail diaries (questionnaires), using a pre-printed diary form in which consumers recorded their purchases for the week.

This post-WWII period saw the USA’s fastest-ever rate of household formation. After an apartment and a refrigerator, city-dwelling families’ next priority was a television. Nielsen commenced measuring TV viewership, using the diaries that sample members, doubtless including many readers of this essay, received in the postal mail. Nielsen’s diary practice continued well into the 1980s.

Nielsen supplemented the diary information with in-home devices that recorded the time of day and what channel the TV was tuned to. Notably, the devices could not measure who was watching the TV, their degree of attentiveness, or, indeed, whether anyone was even in the room with the TV.

The 1960s and ‘70s

These decades saw increased business use of digital computers, proprietary languages for data processing, and mathematical models for analyzing market data. Mathematical modelers took advantage of the large survey data sets that had been assembled (at that time, MRCA’s consumer purchase information constituted the largest proprietary database in the world), and created new algorithms for predicting brand shifting, purchase frequencies, and other marketing phenomena. 
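
To give the flavor of these models – as a textbook illustration, not a reconstruction of any firm’s proprietary algorithm – consider the first-order Markov model of brand shifting. From successive panel purchases one estimates a matrix $P$ whose entry $p_{ij}$ is the probability that a household whose last purchase was brand $i$ buys brand $j$ on its next purchase occasion. The vector of brand shares $s_t$ then evolves as

$$ s_{t+1} = s_t P, $$

and the model’s long-run share forecast is the steady-state vector satisfying $s = sP$.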

The 1970s brought commercial database software, quickly adopted by the market research industry and embraced by the modelers. Laser scanner technology was commercialized, and the scanners could read standardized bar codes. The codes, originally designed for distribution control, could be used to record purchases of an item at a supermarket checkout.
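
That standardization is what made checkout automation trustworthy. A UPC-A bar code carries twelve digits, the last being a check digit that lets the scanner catch most misreads. A minimal Python sketch of the (public, well-documented) check-digit rule:

```python
def upc_check_digit(body: str) -> int:
    """Compute the UPC-A check digit for an 11-digit code body.

    Digits in odd positions (1st, 3rd, ...) are weighted 3 and digits
    in even positions are weighted 1; the check digit brings the
    weighted sum up to a multiple of 10.
    """
    digits = [int(c) for c in body]
    total = 3 * sum(digits[0::2]) + sum(digits[1::2])
    return (10 - total % 10) % 10

# "012345678905" is a valid UPC-A: body "01234567890", check digit 5.
assert upc_check_digit("01234567890") == 5
```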

On the social front, more women joined the workforce. Divorce became more common (not to imply that one caused the other!). The economy expanded, and increasing affluence meant more households were being formed. These households were smaller (often single-member), rarely had a stay-at-home member, and displayed new buying patterns; in general, people became, or perceived themselves to be, busier than in more traditional times. Single-member households were often younger, and young people had never been enthusiastic survey respondents.

These influences decreased cooperation rates for market research surveys. Nonetheless, though one might have expected people to balk at filling out the very detailed MRCA and Nielsen diaries, MRCA’s data held up: they were validated by comparing their totals with factory shipments. The MRCA and manufacturer data series, while not coinciding (MRCA did not report, e.g., institutional sales to restaurants or the military), did mirror each other’s upward and downward trends in a reliable fashion.
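
Validation of this kind is simple to picture. A toy sketch (the figures and the agreement measure are invented for illustration) compares the direction of period-over-period changes in the two series:

```python
# Do panel estimates and factory shipments move up and down together?
# All numbers below are invented for illustration.

panel = [100, 104, 101, 108, 112, 110]        # indexed panel purchase volume
shipments = [230, 241, 236, 250, 262, 259]    # indexed factory shipments

def directions(series):
    """+1 for an up move, -1 for a down move, period over period."""
    return [1 if b > a else -1 for a, b in zip(series, series[1:])]

agreements = sum(d1 == d2 for d1, d2 in
                 zip(directions(panel), directions(shipments)))
print(f"trend agreement: {agreements} of {len(panel) - 1} periods")
```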

During the 1970s, David B. Learner and partners purchased MRCA. Dave Learner was a PhD experimental psychologist who had studied “top gun” psychology for the Air Force, had been an executive with the famed Madison Avenue ad firm BBDO², and had been an early advocate of math modeling in marketing. Dave soon began to gather funds to buy out his partners. He did this before the end of the decade, becoming sole owner of MRCA.

Nielsen attempted “advanced” TV viewership measurement. Next-generation set-top boxes had buttons for each household member, and members were requested to push their own button upon entering and leaving a room where a television was playing. But the cooperation rate for button boxes was not satisfactory. Medallions containing personalized radio frequency devices were then introduced – but as the styles of the 1970s passed, people were loath to wear medallions on chains. Set-top boxes with heat sensors were the next attempt – and the measured TV audience was augmented by dogs, infants, space heaters and toaster ovens!

The 1970s saw further changes that would transform the panel survey business completely.

The rise of scanner panels

At this time, few stores used checkout scanners. In 1979, a start-up with an audacious plan to revolutionize consumer panels raised enough IPO³ capital to give scanners to every supermarket in a half-dozen “pod markets” throughout the U.S., in return for rights to the checkout data. Each pod market was a small city with demographics mirroring those of the U.S., an isolated grocery shopping area, and an isolated cable TV market. In each pod market, a sample of households was recruited, asked to fill out a paper questionnaire on household demographics, and issued an I.D. card with a unique bar code, to be swiped at the checkout stand prior to scanning the grocery purchase. For the first time, supermarket purchases could be automatically recorded and linked to households with known characteristics. Because this seemed “objective” and eliminated most manual key entry tasks, manufacturers were excited about the prospect of more accurate data.
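
In modern terms, the innovation was a database join: the swiped card supplied a key that linked each basket of scanned purchases to a demographic record collected at recruitment. A minimal sketch, with hypothetical field names and values:

```python
from collections import defaultdict

# Demographics captured once, when the household joined the panel.
households = {
    "HH-0001": {"size": 4, "income_band": "middle"},
    "HH-0002": {"size": 1, "income_band": "upper"},
}

# Checkout records: the swiped I.D. links each scan to a household.
scans = [
    {"household_id": "HH-0001", "upc": "012345678905", "price": 1.89},
    {"household_id": "HH-0002", "upc": "012345678905", "price": 1.89},
    {"household_id": "HH-0001", "upc": "036000291452", "price": 2.49},
]

# The join that made scanner panels valuable: purchases by household type.
by_size = defaultdict(int)
for scan in scans:
    by_size[households[scan["household_id"]]["size"]] += 1

for size, count in sorted(by_size.items()):
    print(f"households of size {size}: {count} item(s) purchased")
```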

Nor was this the limit of excitement about this scheme. Arrangements with the cable company enabled manufacturers to air two versions of a commercial, with each version cablecast to a different sub-sample of the pod market’s panel households. It was then possible (or so went the claim) to measure the differential effect of the ad copy on subsequent purchasing! A remarkable passion had been aroused in the mature and conservative consumer goods industry; after fifty years, the “old-fashioned” paper-and-pencil diary questionnaire was to be supplanted by a high-tech solution, and the way seemed clear to answering the question of advertising efficacy. MRCA started to lose clients to the upstart, IRI (Information Resources, Inc.).

What went wrong for IRI?  At first, a lot:

  • Not all manufacturers used their allotted Universal Product [bar] Codes (UPCs) to uniquely differentiate their products. Without the supplementary information given in paper diaries, the specific product purchased often could not be identified.
  • There are regional and urban/rural differences in tastes and available brands. Even the balanced demographics of pod markets could not produce nationally projectable purchase data.
  • IRI panel members could easily forget to take their I.D. cards to the supermarket.
  • The locations of the pod markets were well known, and split-cable ad tests could hardly be kept secret. Competitors could and did issue coupons and air opposing ads in order to sabotage other companies’ ad tests. The sabotaged tests proved useless.
  • Scanner data could, in principle, show not just the price of a purchased item (paper diaries could do this just as well), but also the prices of competing products that were not bought at the time the panel member was shopping.  It could show the shelf location of products in the store, or whether they were on aisle-end display. Yet the task of processing scanner databases to extract useful information for decision-making was, initially, too difficult.  Until companies and programmers learned to turn the databases into useful reports, the consumer goods manufacturers could not get full benefit from the data.
  • Only supermarkets were given scanners. But people buy food items at convenience stores, Target stores, gas stations, and department stores. MRCA’s diaries captured purchases of foods from all retail outlets – a critical advantage in the eyes of manufacturers.
  • Scanner databases suffered from their own unique key-entry errors. Through the mid-1990s, studies reported that up to 9% of prices shown at the scanner checkout differed from the prices marked on shelves or packages, or were otherwise in error.⁴

Figure 1: Interrupted growth cycles can be due to faulty product, unfulfilled buyer expectations, or both.

It took manufacturers several months to see that these problems compromised the actionability of their market research data. They then began to re-subscribe to MRCA’s service.

Though ultimately successful, IRI’s early experience perfectly matched the “hype curve” depicted in Figure 1. Customers excited by over-hyped advertising buy, find the product does not live up to expectations, stop buying, and return to buy again after the bugs are fixed.

The 1980s: Substitution of scanner data for diary data

Nielsen, by this time an enormous and diversified company, adopted scanner technology to ease the collection of its “store audit” data service, which involved tracking the movement of product through stores (without regard to who buys it). On the television side, Nielsen used pattern recognition algorithms to identify which commercial a set was receiving. As IRI began to expand beyond its pod markets, Nielsen moved into the scanner panel business by issuing hand-held scanner wands to a sample of households. The devices used the household’s telephone to upload data to Nielsen’s computers during nighttime hours.

In the early 80s, Dave Learner asked me (I was then a Vice President of MRCA) to experiment with this kind of electronic data collection from the consumer’s home. Results were not promising. One reason was that children, getting hold of the scanner wand, would be enchanted by its beep and would scan the same box of cereal many times.

Dave was frequently heard to say he did not want to be CEO of a public company. He enjoyed running MRCA. If it were publicly traded, he maintained, he would be miserable, having to spend all day on the phone with Wall Street stock analysts. Raising money on public markets for MRCA to compete with the scanner panels, then, was out of the question.

Instead, we bet on MRCA competing on data delivery, not data collection. MRCA’s responsive strategies included market diversification (for example, adding a service tracking consumers’ use of financial products), and the aggressive technical development of new ways to make its consumer-related data accessible, timely and useful.

In 1984 MRCA underwent another name change. The new name, MRCA Information Services, conveyed the company’s greater diversity as well as its orientation to the information needs of the customer. Its continuously updated databases then included, from a panel of more than 12,000 households in the lower 48 states, records of the purchase and/or use of financial services, processed foods, personal care items, home cleaning aids, textiles and home furnishings, footwear, jewelry, and luggage. MRCA’s clientele included the largest manufacturers, retailers, trade associations and government agencies associated with these industries.

The flagship product of the renamed firm was DYANA, a fast interactive market research tool that replaced the slow, mainframe-generated reports of yesteryear.  In an instance of technology fusion, technologies from four scientific areas came together to create DYANA:

  1. From marketing theories: Demographic analysis, innovation diffusion theory, repeat-buying and customer segmentation theories.
  2. From probability and statistics: Longitudinal sampling theory, and stochastic models of purchase frequency and of brand choice.
  3. From computer science: Pattern recognition, database software, voice recognition, and interactive computer graphics.
  4. From laser science: Scanners, bar code technology.

Customers returned to MRCA, thanks to DYANA and to the shortcomings of scanner data.

Technologies that came to the industry in the ‘80s included automated random digit dialing (in response to an increase in unlisted phone numbers, and made possible by the phase-out of rotary phones); cheap microcomputers, microprocessors, and microcontrollers; survey-on-a-disk for computer industry market research (from companies like Sawtooth and Intelliquest), and automated call centers. Yet reliable voice recognition technology for phone polling was still not cost-effective.

In this decade many of scanner data’s problems were overcome by technological tweaks and price reductions, accelerating clients’ adoption of scanner data services.

I resigned from MRCA in 1988. It was a friendly departure.

The 1990s           

The 90s brought still newer technologies to the industry, and further internationalization of technology markets. Nielsen operated in Europe; in Japan, daily newspapers such as the Asahi Shinbun had operated consumer panels since the 1970s.

Wide diffusion of fax machines in the ‘90s allowed consumer surveys by fax. E-mail surveys began to appear and were quickly eclipsed by interactive questionnaires on the World Wide Web, as Internet use exploded. This gave rise to ways to track web page hits and to place “cookies” on users’ computers. The integration of TV and the WWW began.

New data mining techniques leveraged cheaper and faster computers. Embedded image recognition computers recognized individual TV viewers. Home scanners became cheaper. The US Census demonstrated successful voice recognition technology for census data collection.

Nielsen planned to use embedded codes in digital TV to identify incoming programs.

IRI and Nielsen overcame many of the technical difficulties with scanner data, and began to recapture market share from MRCA. At the same time, oddly, IRI’s high-profile $6 million syndicated advertising effects study failed, and in 1996 ABC, NBC, CBS, and Fox placed joint ads in the trade press criticizing inaccuracies in Nielsen TV measurement data.

Troubles peak for Nielsen and MRCA

It was the convergence of the technologies for store audits and consumer panels that finally drove out diary panels: IRI and Nielsen began to bundle store audit scanner data with scanner panel data, giving the latter to their clients essentially at no extra charge. This bundling drew legal scrutiny (compare the question of Microsoft bundling web browsers with its operating systems, at that time an active question in European courts), but MRCA could not afford the legal resources to fight what we saw as a righteous cause.

Clients knew scanner panel data were not really as accurate as diary panel data. But the price was irresistible. Scanner panels became the manufacturers’ data source of choice for consumer package-goods purchase information, essentially driving diary panel services from that market. 

While the scanner panel companies rode the hype curve of Figure 1 – a pattern of hype, disappointment, and renewed growth – diary companies suffered the mirror-image of that curve. See Figure 2. First they did not see the new technology coming; next they believed the new technology was no good and would have no impact; then they felt relief as clients returned to diaries. That complacency was short-lived, and was followed by rapid declines in sales.

Interpreting Figure 2 psychologically, one can imagine the incumbent diary companies going through stages of myopia, denial, alert, reassurance, and unwarranted complacency, followed by panic and resignation. This was indeed the case at MRCA.

Figure 2. The lower curve duplicates Figure 1’s depiction of scanner panels’ interrupted growth, though now the vertical axis shows market share, rather than sales. The upper line shows the “reaction curve,” i.e., the incumbent’s predicament.

During MRCA’s panic stage the company, facing a cash flow crunch, delayed depositing funds into its mandatory tax and retirement accounts. The new client that was supposed to sign on next Wednesday didn’t, and the expected inflow of funds that would have rectified the tax account did not materialize. In a case instigated by the US Department of Labor and prosecuted by an Assistant US Attorney, top MRCA executives pled guilty and in 2001 were sentenced to brief imprisonment and substantial fines.

This was the end of MRCA.

Diary panels are still in demand, however, for tracking sales of items that are not checked out using standard codes or scanners. This includes many consumer goods such as clothing, auto supplies, shoes, jewelry, and home furnishings.  Diaries are still best for tracking the consumption (as opposed to the purchase) of foods.

Nielsen was also acquired and fragmented. According to Wikipedia (https://en.wikipedia.org/wiki/Nielsen_Holdings),

Nielsen was acquired by the Dun & Bradstreet Company in 1984. In 1996, D&B divided the company into two separate companies: Nielsen Media Research, which was responsible for TV ratings, and AC Nielsen, which was responsible for consumer shopping trends and box-office data. In 1999, Nielsen Media Research was acquired by the Dutch publishing company VNU (Verenigde Nederlandse Uitgeverijen). VNU later acquired AC Nielsen and recombined the two businesses in 2001. In between, VNU sold off its newspaper properties to Wegener and its consumer magazines to Sanoma. The company’s publishing arm also owned several publications including The Hollywood Reporter and Billboard magazine. VNU combined the Nielsen properties with other research and data collection units including BASES, Claritas, HCI and Spectra.

Ironically, IRI found its future not in data collection, but in data analytics, the field that MRCA had abortively attempted. According to its 2019 Bloomberg profile, Chicago-based “Information Resources, Inc. provides big data and predictive analytics solutions” and employs five thousand workers.

Power shifts

By the early 1990s, most stores selling food items had bought their own checkout scanners, and about 60% of retail food product movement passed across checkout scanners. IRI was able to expand its store base (and enter the store audit business) by buying scanner data from stores in cities beyond its original pod markets. 

IRI and Nielsen had thus overcome a few of the difficulties that beset the startup of scanner panels, but home scanners had their own error problems, and store data purchased by IRI represented fewer than 1% of U.S. counties.

Supermarket chains rapidly learned that information is power.  They used their own scanner data to compute the profitability of every foot of shelf-facing in every store. This enabled them to negotiate with manufacturers about the shelf space allocated to each of the manufacturers’ products, and to estimate the profitability of new product offerings from given manufacturers.  In some cases, this led to the levying of “slotting allowances,” payments from the manufacturers to the store to allow the display of new products.  Scanners had indeed shifted power from the manufacturers, where it had been traditionally, to the stores.
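
The calculation itself was simple once the data existed; a toy version (all figures invented) shows why it changed negotiations:

```python
# Profit per shelf-foot, the metric scanner data put in retailers' hands.
# All numbers are invented for illustration.

facings = [
    # (product, weekly unit sales, margin per unit, shelf feet occupied)
    ("Brand A cereal", 310, 0.55, 4.0),
    ("Brand B cereal", 140, 0.70, 4.0),
    ("Store brand",    220, 0.95, 3.0),
]

for name, units, margin, feet in sorted(
        facings, key=lambda f: f[1] * f[2] / f[3], reverse=True):
    print(f"{name:15s} ${units * margin / feet:6.2f} per shelf-foot per week")
```

A product earning less per foot than a rival was a candidate for losing facings – or for a slotting allowance.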

Power was also shifting to individuals and to local advertisers, as the range of electronic entertainment options skyrocketed. Business Week noted that in the 1960s, our choices amounted to NBC, CBS, and ABC. By the 1990s we had added UHF, cable, and direct satellite options, as well as VCRs for timeshifting programs and viewing recorded tapes, for a total of about 75 “channels.” The choices expanded into the multiple thousands as HDTV and Internet-based virtual reality “channels” became widespread. The resulting audience fragmentation means that the number of people viewing (in the future we may say “participating in”) a given channel may be small, and the margin of error in measuring this number will be large.

The consumer wins because of increased entertainment choices, but advertisers and media have already begun to strike back. “Digital ad insertion” allows cable operators to send different ads to different neighborhoods during the same commercial break. Internet television allows different ads to be directed to different Internet Protocol addresses. Even in the 1990s, according to Wired News,

In traditional coaxial cable architecture, one signal [was] sent to all of the tens of  thousands of homes in a service area from a single “head end,” the industry’s term for the transmission source. But now, as the industry moves to a hybrid fiber coaxial architecture, which allows for two-way communication, its systems include many more head-ends to transmit signals to smaller nodes.  This means a different set of signals can be sent to each node, serving as few as 500 homes. 

When a cable operator pulls down a program from a satellite to distribute across its network, national advertising is already inserted into some of the commercial breaks, with spots left open for local ads. In the old days, those gaps had to be dubbed in from tapes.  Now local ads are stored digitally on servers provided by companies like SkyConnect.  And now that the hybrid fiber coaxial architecture has more head-ends, it is easy to plug different ads into the signals sent to different nodes.  

The Wired News article went on, “Targeting could be particularly attractive, for example, to a pizza delivery company that knows which neighborhoods account for the majority of its business.” Digital ad insertion can make local advertising (which is both important for cable company revenues and impossible to measure under older technologies) better targeted and more effective.
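
Mechanically, per-node insertion reduces to a lookup keyed by head-end node; a toy sketch (node IDs, neighborhoods, and ads are all invented):

```python
# Choose a local ad for each node during the same commercial break.

NODE_NEIGHBORHOOD = {
    "node-17": "campus",
    "node-42": "suburbs",
}

LOCAL_ADS = {
    "campus": "two-for-one late-night pizza spot",
    "suburbs": "family dinner bundle spot",
}

def ad_for(node_id: str) -> str:
    """Return the local ad to splice into this node's commercial break."""
    return LOCAL_ADS[NODE_NEIGHBORHOOD[node_id]]

for node in NODE_NEIGHBORHOOD:
    print(node, "->", ad_for(node))
```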

Technological change and the death of “quality research”

Interactive computing led to other advances in survey research in the 1980s and 90s. On-screen questionnaires eliminated the confusing “If you answered yes to question 9, go to question 11b” instructions often seen on paper questionnaires. Phone surveys could be completely automated using random digit dialing, voice/audio databases, and push-button telephone tone responses to questions.
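
The branching itself is trivial to automate, which was the point. A toy sketch (no vendor’s actual system) of the skip logic that on-screen questionnaires took over from the respondent:

```python
# The program, not the respondent, follows "go to question 11b."

QUESTIONS = {
    "q9": {"text": "Did you buy breakfast cereal this week? (yes/no)",
           "next": {"yes": "q11b", "no": "q10"}},
    "q10": {"text": "What kept you from buying cereal?", "next": "q11b"},
    "q11b": {"text": "How many people live in your household?", "next": None},
}

def run_survey(start: str = "q9") -> dict:
    answers, qid = {}, start
    while qid is not None:
        question = QUESTIONS[qid]
        answers[qid] = input(question["text"] + " ").strip().lower()
        nxt = question["next"]
        # Branch on the answer where the question defines per-answer routing.
        qid = nxt.get(answers[qid], "q10") if isinstance(nxt, dict) else nxt
    return answers

if __name__ == "__main__":
    print(run_survey())
```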

Fax and email offered new channels for collecting market research information. As faxes and email addresses were not uniquely identified with particular households, offices or individuals, the ideals of random, demographically representative statistical samples began to fall by the wayside. Phone survey firms had used callback protocols to maximize the probability of reaching a household that had been chosen for a sample. Now, with people using answering machines and caller I.D. to filter calls, researchers considered it lucky to reach a household at all. Cell phone “do-not-call” lists obstruct the appealing idea of reaching individuals. (Appealing to vendors, if not to consumers!)

“Opportunity samples,” rather than rigorous random samples, ruled the day. Ideal statistical sampling was particularly difficult on the World Wide Web, as a culture of “alternate personae” – that is, lying about one’s identity, age, gender and attractiveness – had already taken hold among WWW users. Data mining, the use of automated statistical tests and pattern recognition algorithms to find regularities in large databases, also violated traditional rules of statistical inference – but became common and even necessary in many businesses.

Beyond the ‘90s

Managers, rather than moaning about the demise of traditional, “high-quality” techniques for collecting and analyzing market research data, should instead think about how best to use the newer technologies to assist good decision making.  Some have done so.  One result is WWW advertising billed on a “per click” basis, indicating the prospect not only saw but responded to the ad.  Digital interactive television, combined with cameras and image-recognition systems in set-top boxes, may finally tell researchers which household members are facing the TV at any time.

But adjusting to the new is rarely easy. Many companies, mistakenly viewing the WWW as “the next television,” became concerned with measuring the audiences of websites.  As mentioned, web surfers often use the WWW under false demographic pretenses.  In addition, it was hard to tell whether a hit on a WWW page was a human or a crawler, ‘bot, or search engine.  Individuals access accounts belonging to others, and may routinely erase the “cookies” left on their hard drives. Nielsen’s early claims to have mapped the demographics of Web users were widely questioned, and finally scientifically discredited; Nielsen later introduced an improved methodology. 

Will “share and ratings” numbers for the Web be perfected? It doesn’t matter.  There is little point in answering old questions about new technologies and media. The real challenge is figuring out what new, relevant questions the new technology lets us ask and answer. What are users’ expectations regarding interactive media like the Web?  Are Web surfers as susceptible to suggestion as TV viewers?  Or do their feelings about control and creativity, as they navigate hypermedia⁵, change their attitude to advertising?  How much personal data do they wish to share, and what compensation do they expect for this?  These are a few of the new questions, actually new opportunities, that are opened by the new media.

So what?

Though the 20th-century MRCA and Nielsen databases were laughably small compared to 2019’s big data, their lessons still apply, and I see younger workers re-learning these lessons the hard way:

  • Data buyers still prefer “questionable but cheap” data to “excellent but expensive” data.
  • It’s still easy to make disastrous strategic decisions.
  • Companies will commit blunders and bloopers with new data technologies. Then, it was heat sensors and medallions; now it’s drones and social media.
  • People will still share personal data in exchange for insubstantial compensation.
  • In any data project, errors in study design, data handling, and interpretation far outnumber and outweigh errors in analysis.

Data quality is still important, and still difficult. And still a problem, as evidenced by the completely inappropriate “targeted ads” we all see on our screens. Today’s companies are good at collecting data, and perhaps good at analyzing data, but by no means are they good at cleaning and checking data.

This problem echoes IRI’s difficulty, in the 1980s, in hiring programmers who could comprehend the newly “big” body of scanner-generated data.

Artificial intelligence has not yet mastered the challenge of data preparation, cleaning and checking. These tasks are human-intensive, expensive, and not scalable. (It has been reported⁶ that many business intelligence professionals spend more than half of their work hours ‘cleaning up raw data and preparing to input it into the company’s data platforms.’ This ‘severely limits the potential of big data.’) Solutions will include expanded training and the creation of career paths for the data prep function.
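
At its very simplest, the checking work looks like the sketch below; the fields, records, and rules are invented for illustration, and real pipelines apply thousands of such rules:

```python
# Flag scanner records that fail basic consistency checks.

RECORDS = [
    {"upc": "012345678905", "shelf_price": 1.89, "scanned_price": 1.89},
    {"upc": "036000291452", "shelf_price": 2.49, "scanned_price": 3.49},
    {"upc": "012345678905", "shelf_price": 1.89, "scanned_price": -1.00},
]

def audit(records):
    """Yield (index, reason) for each record failing a validity rule."""
    for i, r in enumerate(records):
        if r["scanned_price"] <= 0:
            yield i, "non-positive price"
        elif r["scanned_price"] != r["shelf_price"]:
            yield i, "scanned price differs from shelf price"

for index, reason in audit(RECORDS):
    print(f"record {index}: {reason}")
```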


Footnotes

  1. This essay is updated and expanded from material originally published in F. Phillips, Market-Oriented Technology Management: Innovating for Profit in Entrepreneurial Times. Springer, Heidelberg, 2001.
  2. Jack Benny so loved the rhythm of the firm’s full name – Batten, Barton, Durstine, and Osborn – that he used it in his comedy routines.
  3. Initial Public Offering. 
  4. A 1998 study by the Federal Trade Commission (CNN Interactive, http://cnn.com/US/9812/16/price.scanners.01/) found that the wrong price is scanned in one out of every thirty transactions.
  5. Privacy and personal data issues arise because, unlike MRCA and Nielsen volunteer households, HDTV and WebTV viewers involuntarily reveal their web navigation history (and hence perhaps their lifestyle and product preferences), and the technology leaves “cookie” files on the viewer’s computer or phone. Internet businesses defend these practices as making “a visitor’s experience more useful and enjoyable.” One can argue the ownership of the data, but not the fact that the website owner has made unauthorized use of the user’s disk space.
  6. Hiner, J. (2016) Big Data’s Biggest Problem: It’s Too Hard to Get the Data In <http://www.zdnet.com/article/big-datas-biggest-problem-its-too-hard-to-get-the-data-in/> accessed 4 Dec 2016.