It is easy for manufacturers and retailers to know how much product moves out their doors. It is harder, but very necessary, for businesses to know what kinds of people buy what kinds of goods, with what frequency, the degree of brand loyalty, and whether it’s in response to advertising. Nowadays a great deal of that knowledge flows directly–if imperfectly–from Google, Amazon, and Facebook data. In earlier times, it came from consumer surveys via panel research. Panel research is a survey method in which (mostly) the same respondents participate over multiple time periods. Historically, manufacturers of fast-moving consumer goods (FMCGs) preferred panel research because, for statistical reasons, it yielded better estimates of trends in consumer behavior. To make this large subject manageable, and to focus on the area we know best, this article will tell the story of the two firms that created panel research.

Until the 1980s, the two firms dominating the field were the well-known A.C. Nielsen Co., and the perhaps lesser-known Market Research Corporation of America, MRCA. In a tacit division of the research market, Nielsen reported on consumer media habits, and MRCA provided data on consumer purchasing. Nielsen’s data led clients to more efficient media buying (more reach and frequency of exposure per ad dollar spent). MRCA’s data gave the (mostly) same clients sharper notions of what flavors, package types and sizes, etc., were gaining favor among consumers, as well as what kinds of coupon promotions were garnering the best responses. Nielsen offered services to a larger client base, essentially all advertisers, thereby having greater potential for revenue growth than MRCA.

Who were the clients?

MRCA and Nielsen both sold data, and analyses of this data, to the USA’s biggest FMCG manufacturers, notably Coca-Cola, General Foods, General Mills, Keebler, PepsiCo, Pillsbury, and the makers of many, many more familiar brands.

These companies were highly motivated to purchase both Nielsen’s and MRCA’s data streams. Regardless of whether advertising “works,” manufacturers accepted that an advertising presence was critical, and therefore paid for exposure (for reach and frequency) to large, desirable audiences. As the cost of advertising grew, companies increasingly wanted to ensure that their money was well spent. Similarly, gaining a single market share point in a large market, like breakfast cereals, can mean millions in added revenues.

For decades, these client companies peered at Nielsen and MRCA reports side by side, seeking the holy grail of market research: To know how consumers’ media behavior relates to their purchasing behavior. Retailer John Wanamaker’s famous complaint— “I know half my ad dollars are wasted, I just don’t know which half”—still bedevils advertisers today, though today, the Internet provides further data points.

In the beginning, clients used a variety of data from sources other than MRCA. In addition to their internal data on marketing costs and shipment volumes, they bought data on the movement of product through intermediate sections of the distribution pipeline; consumer attitude and preference surveys; and data on the advertising expenditures of their competitors and the magazine and newspaper readership of their target market households. They also bought results of taste tests and focus groups. Consumer panel information competed with many other services for the attention of corporate market researchers and for their budget dollars.

In the 1940s and ’50s, market research was young and a novelty, and it commanded the attention of top executives. By the 1970s, however, panel data’s point of entry into the client corporation was at a fairly low level, where it was collated and filtered upward in the organization. As research results are summarized and passed up through levels of client management, they lose their “brand name” identification with the originating supplier, and the supplier loses visibility. This loss of visibility among clients may have played a small part in the demise of MRCA decades later.

Marketing Research is born

The first documented instance of marketing research was in 1879 and was conducted by the advertising agency N. W. Ayer. The company surveyed state and local officials about expected levels of grain production. This information was used by a manufacturer of farm equipment in the preparation of an advertising schedule. From that beginning, marketing research slowly evolved. The basic foundation of marketing research was developed during the first 30 years of the 20th century. The first textbook on marketing research was published in 1921, and the first marketing research courses were taught on college campuses in the 1930s. The early years of marketing research focused on methods of sampling, collecting data, and analytical techniques. Researchers also focused on ways to measure concepts such as opinions, perceptions, preferences, attitudes, personalities, and lifestyles. The primary goal of marketing research at that time was to measure marketing phenomena and consumer characteristics. Raw data were converted to information, which was then passed on to managers to make decisions.

The period of the 1970s and 1980s is often referred to as the “golden age of consumer research.” It was during this time marketing research became more scientific. Computing power made collecting and analyzing data faster, easier, cheaper, and more accurate. Companies invested substantial dollars into marketing research to better understand the market, the consumer, and the decision process. Few decisions were made that were not supported by marketing research. Research study results became the support or rationale for choosing particular marketing strategies and marketing tactics.

Here’s how the consumer panel industry got started:

The story of Nielsen and MRCA offers lessons on strategy, governance, technological change, and changing markets. Supporting players in the drama are manufacturers, consumers, retailers, competitors, and media—with walk-ons by many notable academic scholars.

Modern survey research (as opposed to other kinds of market research like focus groups, test kitchens, and so on) adopted its analytic methods from statistics. These methods allow researchers to make inferences about large populations from the observation of small samples. Sir Ronald Fisher pioneered statistical sampling theory in the 1930s. As anyone who has seen the “accurate within 3%” disclaimer on polls knows, this theory is the basis of survey research.
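
As a concrete illustration of that disclaimer: for a simple random sample, the 95% margin of error for an estimated proportion is about z·√(p(1−p)/n), which is roughly ±3% at a sample of about 1,067 respondents. A minimal sketch of the arithmetic (our illustration, not the firms’ actual computation):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for an estimated proportion p from a
    simple random sample of n respondents (worst case at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of about 1,067 households yields the familiar +/-3%:
print(f"{margin_of_error(1067):.3f}")   # ~0.030
# Quadrupling the sample only halves the error:
print(f"{margin_of_error(4268):.3f}")   # ~0.015
```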

It was the ways in which the sample data were collected, however, and the interplay of people with technological and social change, that did much more to shape the business in subsequent decades.

The Nielsen Corporation was founded in 1923 by Arthur C. Nielsen, Sr. In 1936 Nielsen licensed a device from MIT that would record the stations to which a radio had been tuned. In 1942 he launched the Nielsen Radio Index.

In 1939, Samuel G. Barton, Jr. founded Industrial Surveys Company. Two years later, in August 1941, President Roosevelt created the Office of Price Administration, tasked with placing a ceiling on the prices of most goods and with limiting consumption through rationing. Sam Barton saw this as an opportunity and obtained a government contract to survey, track, and report household consumption of rationed goods. Using a national probability sample of over 4,000 households, product purchases, sizes, and prices of nearly all rationed goods were recorded in diaries that were distributed and retrieved, door-to-door, every two weeks. Barton named this service the National Consumer Panel. Following the end of WWII, as consumer goods markets soared and competitors jockeyed for market share, demand for consumer data also soared and the panel business took off. Because this method of data collection was costly, Barton syndicated the service to consumer product marketers.

MRCA’s agents conducted door-to-door interviews, asking householders about their recent purchases and, sometimes with the owner’s permission, performing cupboard inventories. Cupboard inventories determined, for example, that a “Stock up on Soup!” ad campaign would persuade consumers to carry larger inventories of canned soup in the home—relieving distributors and retailers of the cost of this inventory. The resulting data were key-entered onto punch cards for tabulation and reporting.

The 1950s brought a wave of urbanization. Wary of strangers, new city-dwellers were less responsive to door-to-door interviews. MRCA moved to mail diaries (questionnaires), using a pre-printed diary form in which consumers recorded their purchases for the week. This post-WWII period saw the USA’s fastest-ever rate of household formation. After an apartment and a refrigerator, city-dwelling families’ next priority was a television.

Nielsen commenced measuring TV viewership in 1950, when it acquired the C. E. Hooper company and began attaching recording devices to a statistical sample of about 1,200 consumer television sets in the U.S. These devices used photographic film in mail-in cartridges to record the channels viewed and thus determine audience size. Later, Nielsen developed electronic methods of data collection and transmission. Nielsen’s diary practice continued well into the 1980s.

Notably, the devices could not measure who was watching the TV, their degree of attentiveness, or, indeed, whether anyone was even in the room with the TV.

Meanwhile, Sam Barton and Oskar Morgenstern, who served for a brief period as vice president of MRCA client services, formed a unit of the company called Mathematica. The unit was tasked with developing analytic techniques beyond basic reporting, which came to be referred to as “special analysis”: customized, proprietary studies for the companies that commissioned them. During this period, Joseph Woodlock and Louis Fourt, both employees of this division, developed the first model used to estimate product sales from behavioral data. As it would turn out, panel data would become the fodder for the pioneers of consumer modeling.
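
The Fourt-Woodlock approach is commonly described as modeling cumulative trial of a new product as a geometric approach to a penetration ceiling. A minimal sketch in that spirit, with illustrative parameter values (not MRCA data):

```python
def cumulative_trial(t, ceiling=0.20, r=0.30):
    """Fourt-Woodlock-style trial curve: the cumulative share of households
    that have tried the product by period t approaches `ceiling`
    geometrically at rate `r` per period."""
    return ceiling * (1 - (1 - r) ** t)

for t in (1, 3, 6, 12):
    print(t, round(cumulative_trial(t), 4))
# Periods 1, 3, 6, 12 -> 0.06, 0.1314, 0.1765, 0.1972: the early periods
# reveal most of the eventual penetration, which is why a few months of
# panel data could support a first-year sales forecast.
```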

Competition for panel data and the birth of management science  

These decades saw increased business use of digital computers, proprietary languages for data processing, and mathematical models for analyzing market data. Mathematical modelers took advantage of the large survey data sets that had been assembled (at that time, MRCA’s consumer purchase information constituted the largest proprietary database in the world) and created new algorithms for predicting brand shifting, purchase frequencies, and other marketing phenomena.

The 1970s brought commercial database software, quickly adopted by the market research industry and embraced by the modelers. Laser scanner technology was commercialized, and the scanners could read standardized bar codes. The codes, originally designed for distribution control, could be used to record purchases of an item at a supermarket checkout.

On the social front, more women joined the workplace and divorce became more common (not to imply that one caused the other!). The economy expanded. Average household size shrank. Increasing affluence meant more households were being formed. These households were smaller (often single-member) and displayed new buying patterns. They rarely had a stay-at-home member, and in general, people became, or perceived themselves to be, busier than in more traditional times. Single-member households were often younger, and young people had never been enthusiastic survey respondents.

These influences decreased cooperation rates for research surveys. And while it might seem that people would be reluctant to fill out the very detailed MRCA and Nielsen diaries, MRCA’s data were validated by comparing their totals with factory shipments. The MRCA and manufacturer data series, while not coinciding (MRCA did not report, e.g., institutional sales to restaurants or the military), did mirror each other’s upward and downward trends in a reliable fashion.

Dr. Alfred A. Kuehn, a professor at Carnegie Institute of Technology (now Carnegie Mellon University), had by the 1950s begun using leading-edge analytics, including early artificial intelligence and advanced statistical inference, for problems posed by firms and their management consultants. He analyzed detailed data on steel shipments and consumer purchase behavior, developed market projections, and helped business harness such data long before “big data” became a buzzword. A pioneer in applying analytical methods to study consumer behavior and dynamic marketing processes, Kuehn was one of the founders of the disruptive concept of “management science,” a quantitative approach to decision-making and business analytics. Kuehn’s consumer behavior modeling generated interest at Lever Brothers—a Unilever subsidiary—and its data suppliers, MRCA and the Chicago Tribune.

In 1963, he founded Management Science Associates (MSA), the first spin-off of Carnegie’s business school (then called GSIA). MSA continued to mine panel data, and in the process the firm became an incubator of analytical talent for the panel industry. In 1968, Tod Johnson, an MSA employee, left to create the National Purchase Diary Panel Inc., NPD (more about NPD later).

Suddenly MRCA had competition for both data collection and data mining.

MRCA clients, most notably The Pillsbury Company, began to develop their own models using panel data. While employed by Pillsbury in the 1960s, Gerald J. Eskin, who would go on to become one of the founders of Information Resources, Inc. (IRI), was tasked with determining whether first-year sales of new products could be predicted from three to six months of purchase behavior data. Eskin left Pillsbury in 1969, and this work was picked up by Channing Stowell[2], an MIT grad hired to establish a full market data services department. Stowell worked out the highly predictable statistical form of repeat purchasing. His repeat behavior algorithm was combined with the PanPro trial component into a full new product forecasting model that proved extremely accurate.

Eskin later incorporated that model’s algorithms into his PanPro model. Stowell then created an entire simulated test market system by statistically linking the results of primary-research concept tests, used to predict trial, with those of the product usage tests used to predict “true repeat.” Stowell would go on to become a consumer behavior expert and an executive at NPD, MRCA, MSA, Claritas, and Nielsen.

The simulated test market/PanPro model system was eventually monetized at Burke Marketing Research in Covington, Kentucky, by Lin Lynn, whom Stowell had hired at Pillsbury. Lynn’s version would eventually become known as BASES, a market simulation model that Burke introduced in 1978. BASES would become the most widely used laboratory simulation model in the world and was purchased by Nielsen in 1998, complementing its prior acquisition of NPD’s CPG purchase panel.[1]

In 1969, Mathematica was spun off from MRCA by Oskar Morgenstern, who remains best known for his pioneering work in game theory with John von Neumann. He became the first chairman of Mathematica Inc. and headquartered the company in Princeton, New Jersey.

The departure of Morgenstern left a gap in analytic capability at MRCA that was filled when it acquired a highly regarded independent survey research firm owned by Robert Henry Hallam. Hallam, a Wharton School graduate who began his career with Scott Paper Company, became president of MRCA. In 1971, Hallam, who brought with him a blue-chip roster of clients, tapped them for talent. Among the first new hires were Dan Sherr from Quaker Oats, where he ran product management for the dog food division, and Dan Merchant from the consumer marketing research department of the Maxwell House Division of General Foods.

A pioneer in advanced statistical modeling, Sherr, who had also been at General Foods, and Merchant, both graduates of Pennsylvania State University, soon developed a consumer segmentation of the pet food market commissioned by GF. It was the first time that purchase behavior from a panel, or anywhere else for that matter, was combined with primary consumer media and psychographic data for segmentation purposes. With professors Ronald Frank and Yoram “Jerry” Wind, both of Wharton, MRCA used canonical correlation analysis to identify and measure the associations among these sets of variables and segments. Wind became internationally known for his work in market segmentation, conjoint analysis, and marketing strategy, and Frank served as director of research and PhD programs and vice dean at Wharton for two decades. The project was hugely successful: it produced new advertising copy points and new product developments for General Foods and shone a new light on MRCA’s capabilities and the analytical power of panel data.
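
Canonical correlation analysis finds paired linear combinations of two variable sets (here, purchase behavior versus media/psychographic measures) that are maximally correlated. A minimal sketch using scikit-learn with synthetic data; the variable roles are illustrative assumptions, not the original study’s specification:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 500
# X: purchase-behavior variables (e.g., volume, brand loyalty, deal use)
X = rng.normal(size=(n, 3))
# Y: media/psychographic variables, partly driven by the same latent traits
Y = 0.6 * X[:, :2] + 0.4 * rng.normal(size=(n, 2))

cca = CCA(n_components=2)
X_c, Y_c = cca.fit_transform(X, Y)
# Canonical correlations: strength of each behavior<->attitude pairing
for i in range(2):
    print(round(np.corrcoef(X_c[:, i], Y_c[:, i])[0, 1], 3))
```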

It was at MRCA that Dan Sherr would meet Sam Barton III, son of the founder. Sam III spent four years at MRCA, from 1969 to 1973, operating out of the Chicago office, focused on developing IT systems and new revenue sources through the application of emerging analytics technology.

Sherr and Barton III would both exit MRCA by 1974, eventually joining Bruce Carroll, David Miller, and Jonathan Robbins at Claritas to create PRIZM, an extremely effective tool for target marketing. Claritas would become one of the largest demographic companies in the world and continues today to offer PRIZM Premier, which combines demographics, consumer behavior, and geographic data for marketers. PRIZM Premier classifies every U.S. household into one of 68 consumer segments based on the household’s purchasing preferences.

The Menu Census  

Another innovation using diary panels was the MRCA Menu Census. A huge undertaking, the Menu Census survey tracked all foods and beverages (except table salt, table pepper, and tap water) consumed by individuals daily, at home and away, at main meals and in between, throughout a consecutive 14-day period. Conducted once every five years beginning in 1957–1958, the Menu Census became an industry standard.

The food diaries were collected from a population sample of men, women, and children in the household panel. Daily diaries were completed by homemakers who were also long-term members of MRCA’s National Consumer Panel and who were trained and experienced in reporting personal and family eating habits in great detail.

Households included in the survey were nationally representative according to such criteria as geographic location, household size and household income. The demographic characteristics of the households were comparable with U.S. census demographics. A final questionnaire included the self-reported age, gender, pregnancy status, weight, height and diet status of each household member. Other household demographic characteristics were collected separately in an annual questionnaire.

The diaries included a detailed description of each dish eaten and the items added to it at the time of preparation or of eating; whether it was eaten at home or away from home; whether it was eaten at breakfast, lunch, or dinner or as a morning, afternoon, evening, or bedtime snack; the position of the dish in the meal (i.e., first, second, or third course); and which household members ate the dish.

The successful completion of the 1974–1975 Menu Census provided a boost to MRCA revenues from the NCP business, which were lagging in large part due to competition from NPD.

NPD, led by Johnson, would go on to use diaries for The NPD Restaurant Consumer Survey in 1982, and NPD would continue to grow into one of the ten largest market research companies in the world today. NPD Group operates in 20 countries, across more than 20 industries, and continues to use diaries.

As the panel business heated up in the 1970s, MRCA faced intense competition. In 1976, Dan Merchant, then SVP sales and client service, transformed the client service team by adding highly skilled, analytical talent and by repackaging the Menu Census as a continuous service, thereby providing the company with a much needed and substantial source of on-going revenue.

The Menu Census was also notably the data resource used by the U.S. Food and Drug Administration under the supervision of Dr. I. J. “Jack” Abrams. The third Menu Census study, conducted in 1967–68, was used in the first GRAS (Generally Recognized as Safe) survey, phase II, by the National Academy of Sciences for the FDA. The FDA continued to use Menu Census data well into the 1980s.

A notable setback to MRCA’s client service team came in the 1970s, and it was a big one: John Malec, from the Chicago office, joined forces with Gerry Eskin to create Information Resources Inc., IRI. By 2019, IRI was among the top ten marketing research companies in the world, employing over 5,000 people.

David B. Learner and partners purchased MRCA in 1974. Learner, a PhD experimental psychologist who had studied “top gun” psychology for the Air Force, had been an executive with the famed Madison Avenue ad firm BBDO[3] and was an advocate of math modeling in marketing.

In 1980, Learner engineered a leveraged buyout and became sole owner of MRCA. That year, Merchant left MRCA to join Marketing Corporation of America in Westport, CT, as partner and president of its marketing research division, Information Resources.

During this same time period, Nielsen attempted “advanced” TV viewership measurement. Next-generation set-top boxes had buttons for each household member, and members were requested to push their own button upon entering and leaving a room where a television was playing. But the cooperation rate for button boxes was not satisfactory. Medallions containing personalized radio frequency devices were then introduced—but as the styles of the 1970s passed, people were loath to wear medallions on chains. Set-top boxes with heat sensors were the next attempt—and the measured TV audience was augmented by dogs, infants, space heaters and toaster ovens!

The rise of scanner panels

In 1979, a start-up with an audacious plan to revolutionize consumer panels raised enough IPO[4] capital to give scanners to every supermarket in a half-dozen “pod markets” throughout the U.S., in return for rights to the checkout data. Each pod market was a small city with demographics mirroring those of the U.S., an isolated grocery shopping area, and an isolated cable TV market. In each pod market, a sample of households was recruited, asked to fill out a paper questionnaire on household demographics, and issued an I.D. card with a unique bar code, to be swiped at the checkout stand prior to scanning the grocery purchase. For the first time, supermarket purchases could be automatically recorded and linked to households with known characteristics. Because this seemed “objective” and eliminated most manual key entry tasks, manufacturers were excited about the prospect of more accurate data.

Nor was this the limit of excitement about this scheme. Arrangements with the cable company enabled manufacturers to air two versions of a commercial, with each version cablecast to a different sub-sample of the pod market’s panel households. It was then possible (or so went the claim) to measure the differential effect of the ad copy on subsequent purchasing. A remarkable passion had been aroused in the mature and conservative consumer goods industry; after fifty years, the “old-fashioned” paper-and-pencil diary questionnaire was to be supplanted by a high-tech solution, and the way seemed clear to answering the question of advertising efficacy. MRCA started to lose clients to the upstart, IRI (Information Resources, Inc.), but it wasn’t long before IRI got into trouble. What went wrong and why?  At first, a lot:

  • Not all manufacturers used their allotted Universal Product [bar] Codes (UPCs) to uniquely differentiate their products. Without the supplementary information given in paper diaries, the specific product purchased often could not be identified.
  • There are regional and urban/rural differences in tastes and available brands. Even the balanced demographics of pod markets could not produce nationally projectable purchase data.
  • IRI panel members could easily forget to take their I.D. cards to the supermarket.
  • The location of the pod markets was well known, and split-cable ad tests could hardly be kept secret. Competitors could and did issue coupons and air opposing ads in order to sabotage other companies’ ad tests. The sabotaged tests proved useless.
  • Scanner data could, in principle, show not just the price of a purchased item (paper diaries could do this just as well), but also the prices of competing products that were not bought at the time the panel member was shopping.  It could show the shelf location of products in the store, or whether they were on aisle-end display. Yet the task of processing scanner databases to extract useful information for decision-making was, initially, too difficult.  Until companies and programmers learned to turn the databases into useful reports, the consumer goods manufacturers could not get full benefit from the data.
  • Only supermarkets were given scanners. But people buy food items at convenience stores, Target stores, gas stations and department stores. MRCA’s diaries captured purchases of foods from all retail outlets – a critical advantage in the eyes of manufacturers.
  • Scanner databases suffered from their own, unique key-entry errors. Through the mid-1990s, studies reported that up to 9% of prices shown at the scanner checkout differed from the prices marked on shelves or packages or were otherwise in error[5].
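
Analytically, a split-cable copy test reduces to comparing the purchasing of the two randomized sub-samples that saw different ad versions. A minimal sketch of that comparison with simulated numbers (not IRI data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Households' category purchases (units) after seeing ad copy A vs. copy B
copy_a = rng.poisson(lam=2.0, size=800)
copy_b = rng.poisson(lam=2.2, size=800)

t, p = stats.ttest_ind(copy_a, copy_b, equal_var=False)
print(f"mean A={copy_a.mean():.2f}, mean B={copy_b.mean():.2f}, p={p:.3f}")
# A competitor's coupon drop in the pod market shifts both groups at once,
# confounding the comparison -- which is how the sabotage described in the
# list above rendered tests useless.
```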

It took manufacturers several months to see that these problems compromised the actionability of their market research data. They then began to re-subscribe to MRCA’s service.

Though ultimately successful, IRI’s early experience perfectly matched the “hype curve” depicted in Figure 1: customers, over-hyped by advertising, buy; find the product does not live up to expectations; stop buying; and return to buy again after the bugs are fixed.

Scanner data eclipses diary data

Nielsen, by this time an enormous and diversified company, adopted scanner technology to ease the collection of its audit data service, which involved tracking the movement of product through stores (without regard to who buys it). Nielsen also used pattern recognition algorithms to recognize which TV commercial was being received. As IRI began to expand beyond its pod markets, Nielsen moved into the scanner panel business by issuing hand-held scanner wands to a sample of households. The devices used the household’s telephone to upload data to Nielsen’s computers during nighttime hours.

In the early 80s, Dave Learner tasked Fred Phillips, then a vice president of MRCA, to experiment with this kind of electronic data collection from the consumer’s home. Results were not promising. One reason was that children, getting hold of the scanner wand, would be enchanted by its beep and would scan the same box of cereal many times.

Learner was frequently heard to say he did not want to be CEO of a public company. He enjoyed running MRCA. If it were publicly traded, he maintained, he would be miserable, having to spend all day on the phone with Wall Street stock analysts. Raising money on public markets for MRCA to compete with the scanner panels, then, was out of the question.

Instead, MRCA bet on competing on data delivery, not data collection. MRCA’s responsive strategies included market diversification (for example, adding a service tracking consumers’ use of financial products, again with the help of Prof. Jerry Wind), and the aggressive technical development of new ways to make its consumer-related data accessible, timely and useful.

In 1984 MRCA underwent another name change. The new name, MRCA Information Services, conveyed the company’s greater diversity as well as its orientation to the information needs of the customer. Its continuously updated databases then included, from a panel of more than 12,000 households in the lower 48 states, records of the purchase and/or use of financial services, processed foods, personal care items, home cleaning aids, textiles and home furnishings, shoes, jewelry, and luggage. MRCA’s clientele included the largest manufacturers, retailers, trade associations and government agencies associated with these industries. The flagship product of the renamed firm was DYANA™, a fast, interactive market research tool that replaced the slow, mainframe-generated reports of yesteryear. In an instance of technology fusion, technologies from four scientific areas came together to create DYANA™:

  1. From marketing theories: demographic analysis, innovation diffusion theory, repeat-buying and customer segmentation theories.
  2. From probability and statistics: longitudinal sampling theory, and stochastic models of purchase frequency and of brand choice.
  3. From computer science: pattern recognition, database software, voice recognition, and interactive computer graphics.
  4. From laser science: scanners, bar code technology.

Customers returned to MRCA, due to DYANA™ and due to the shortcomings of scanner data. Fred Phillips devised the math models underlying DYANA, based on the ideas of Prof. A. S. C. Ehrenberg and the cumulative knowledge of his predecessors at MRCA. Profs. Abraham Charnes and William W. Cooper, pioneers of the management science field and longtime consultants to MRCA, assisted throughout the project.
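
Ehrenberg’s signature tool was the negative binomial distribution (NBD) model of purchase frequency: each household buys Poisson-wise, purchase rates vary gamma-wise across households, so a period’s purchase counts follow an NBD. A minimal sketch with illustrative parameters (not DYANA’s actual internals):

```python
from scipy import stats

m, k = 2.5, 0.8   # mean purchases per period, NBD shape (heterogeneity)
p = k / (k + m)   # scipy's success-probability parameterization

# Probability a panel household makes x purchases of the brand this period
for x in range(5):
    print(x, round(stats.nbinom.pmf(x, k, p), 3))
# Share of households buying at all this period:
print("buyers:", round(1 - stats.nbinom.pmf(0, k, p), 3))
```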

Technologies that came to the industry in the ‘80s included automated random digit dialing (in response to an increase in unlisted phone numbers, and made possible by the phase-out of rotary phones); cheap microcomputers, microprocessors, and microcontrollers; survey-on-a-disk for computer industry market research (from companies like Sawtooth and Intelliquest); and automated call centers. Yet reliable voice recognition technology for phone polling was still not cost-effective.

In this decade, many of scanner data’s problems were overcome by technological tweaks and price reductions, accelerating clients’ adoption of scanner data services.

In a friendly departure, Fred Phillips resigned from MRCA in 1988.

Trouble

The 90s brought still newer technologies to the industry and further internationalization of technology markets. Nielsen operated in Europe; in Japan, it was the daily newspapers like Asahi Shinbun that had operated consumer panels since the 1970s.

Wide diffusion of fax machines in the ‘90s allowed consumer surveys by fax. E-mail surveys began to appear and were quickly eclipsed by interactive questionnaires on the World Wide Web, as Internet use exploded. This gave rise to ways to track web page hits and insert “cookies” into users’ computers. The integration of TV and the WWW began as well.

New data mining techniques leveraged cheaper and faster computers. Embedded image recognition computers recognized individual TV viewers. Home scanners became cheaper. The US Census Bureau demonstrated successful voice recognition technology for census data collection.

Nielsen planned to use embedded codes in digital TV to identify incoming programs. IRI and Nielsen overcame many of the technical difficulties with scanner data and began to recapture market share from MRCA. At the same time, oddly, IRI’s high-profile $6 million syndicated advertising effects study failed, and in 1996 ABC, NBC, CBS, and Fox placed joint ads in the trade press criticizing inaccuracies in Nielsen’s TV measurement data.

It was the convergence of the technologies for store audits and consumer panels that finally drove out diary panels for FMCGs; IRI and Nielsen began to bundle store audit scanner data with scanner panel data, giving the latter to their clients essentially at no extra charge. This drew legal scrutiny. (Compare it to the question of Microsoft bundling web browsers with its operating systems, at that time an active question in European courts. MRCA could not afford the legal resources to fight what it saw as a righteous cause.)

Clients knew scanner panel data were not really as accurate as diary panel data. But the price was irresistible. Scanner panels became the manufacturers’ data source of choice for consumer package-goods purchase information, essentially driving diary panel services from that market. 

The scanner panel companies rode the hype curve depicted in Figure 1: a pattern of hype, disappointment, and renewed growth. The diary companies suffered the mirror image of that curve.

FIGURE 2: Diary companies at first anticipated the new technology; then believed the new tech was no good and would have no impact; then felt relief as clients returned to diaries. Complacency was short-lived and was followed by rapidly declining sales.

Interpreting Figure 2 psychologically, one can imagine the incumbent diary companies going through stages of myopia, denial, alert, reassurance, and unwarranted complacency, followed by panic and resignation. This was indeed the case at MRCA.

During MRCA’s panic stage, the company delayed depositing funds into its mandatory tax and retirement accounts, due to a cash flow crunch. A new client that was supposed to sign on didn’t, and the expected inflow of funds that would have rectified the tax account did not materialize. In a case instigated by the US Department of Labor and prosecuted by an Assistant US Attorney, top MRCA executives pled guilty and in 2001 were sentenced to brief imprisonment and substantial fines. This was the end of MRCA.

Diary panels are still in demand, however, for tracking sales of items that are not checked out using standard codes or scanners. This includes many consumer goods such as clothing, auto supplies, shoes, jewelry, and home furnishings.  Diaries are still best for tracking the consumption (as opposed to the purchase) of foods.

Nielsen was also acquired and fragmented. Nielsen was purchased by the Dun & Bradstreet Company in 1984. In 1996, D&B divided the company into two separate companies: Nielsen Media Research, which was responsible for TV ratings, and AC Nielsen, which was responsible for consumer shopping trends and box-office data. In 1999, Nielsen Media Research was acquired by the Dutch publishing company VNU (Verenigde Nederlandse Uitgeverijen). VNU later acquired AC Nielsen and recombined the two businesses in 2001. In between, VNU sold off its newspaper properties to Wegener and its consumer magazines to Sanoma. The company’s publishing arm also owned several publications including The Hollywood Reporter and Billboard magazine. VNU combined the Nielsen properties with other research and data collection units including BASES, Claritas, HCI and Spectra.

Ironically, IRI found its future not in data collection, but in data analytics, the field that MRCA had abortively attempted. According to its 2019 Bloomberg profile, Chicago-based “Information Resources, Inc. provides big data and predictive analytics solutions” and employs five thousand workers.

Power shifts continue

By the early 1990s, most stores selling food items acquired their own checkout scanners, and about 60% of retail food product movement passed across checkout scanners. IRI was able to expand its store base (and enter the store audit business) by buying scanner data from stores in cities beyond its original pod markets. 

IRI and Nielsen had thus overcome a few of the difficulties that beset the startup of scanner panels, but home scanners had their own error problems, and the store data purchased by IRI came from fewer than 1% of U.S. counties.

Supermarket chains rapidly learned that information is power.  They used their own scanner data to compute the profitability of every foot of shelf-facing in every store. This enabled them to negotiate with manufacturers about the shelf space allocated to each of the manufacturers’ products, and to estimate the profitability of new product offerings from given manufacturers.  In some cases, this led to the levying of “slotting allowances,” payments from the manufacturers to the store to allow the display of new products.  Scanners had indeed shifted power from the manufacturers, where it had been traditionally, to the stores.

Power was also shifting to consumers and to local advertisers, as the range of electronic entertainment options skyrocketed. Business Week noted that in the 1960s, our choices amounted to NBC, CBS, and ABC. By the 1990s we had added UHF, cable, and direct satellite options, as well as VCRs for time-shifting programs and viewing recorded tapes, for a total of about 75 channels. The choices expanded into the multiple thousands as HDTV and Internet-based virtual reality “channels” became widespread. The resulting audience fragmentation means that the number of people viewing (in the future we may say “participating in”) a given channel may be small, and the margin of error in measuring this number will be large.

The consumer wins because of increased entertainment choices, but advertisers and media have already begun to strike back. “Digital ad insertion” allows cable operators to send different ads to different neighborhoods during the same commercial break.  Internet television allows different ads to be directed to different Internet Protocol addresses. In traditional coaxial cable architecture, one signal [was] sent to all of the tens of thousands of homes in a service area from a single “head end,” the industry’s term for the transmission source. But now, as the industry moves to a hybrid fiber coaxial architecture, which allows for two-way communication, its systems include many more head-ends to transmit signals to smaller nodes.  This means a different set of signals can be sent to each node, serving as few as 500 homes. 

When a cable operator pulls down a program from a satellite to distribute across its network, national advertising is already inserted into some of the commercial breaks, with spots left open for local ads. In the old days, those gaps had to be dubbed in from tapes.  Now local ads are stored digitally on servers provided by companies like SkyConnect.  And now that the hybrid fiber coaxial architecture has more head-ends, it is easy to plug different ads into the signals sent to different nodes.  

Even in the 1990s, according to Wired News, “Targeting could be particularly attractive, for example, to a pizza delivery company that knows which neighborhoods account for the majority of its business.” Digital ad insertion can make local advertising (which is both important for cable company revenues and impossible to measure under older technologies) better targeted and more effective.

Technological change and the death of “quality research”

Interactive computing led to other advances in survey research in the 1980s and 90s. On-screen questionnaires eliminated the confusing “If you answered yes to question 9, go to question 11b” instructions often seen on paper questionnaires. Phone surveys could be completely automated using random digit dialing, voice/audio databases, and push-button telephone tone responses to questions.
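
For illustration, here is a minimal sketch of how an on-screen questionnaire encodes that routing per question instead of printing “go to” instructions; the question IDs echo the example above, but the survey content is hypothetical:

```python
# Each question names its successor, possibly conditional on the answer,
# so the respondent never sees routing instructions.
survey = {
    "q9":   {"text": "Did you buy cereal this week? (y/n)",
             "next": lambda a: "q11b" if a == "y" else "q10"},
    "q10":  {"text": "Why not?", "next": lambda a: None},
    "q11b": {"text": "Which brand?", "next": lambda a: None},
}

answers = {"q9": "y", "q11b": "Cheerios"}   # a simulated respondent
qid = "q9"
while qid:
    a = answers.get(qid, "")
    print(survey[qid]["text"], "->", a)
    qid = survey[qid]["next"](a)
```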

Fax and email offered new channels for collecting market research information. As faxes and email addresses were not uniquely identified with particular households, offices or individuals, the ideals of random, demographically representative statistical samples began to fall by the wayside.  Phone survey firms had used callback protocols to maximize the probability of reaching a household that had been chosen for a sample.  Now, with people using answering machines and caller I.D. to filter calls, researchers considered it lucky to reach a household at all. Cell phone “no-call lists” obstruct the appealing idea of reaching individuals. (Appealing to vendors, if not to consumers!)

“Opportunity samples,” rather than rigorous random samples, ruled the day. Ideal statistical sampling was particularly difficult on the World Wide Web, as a culture of “alternate persona,” that is, lying about one’s identity, age, gender and attractiveness, had already taken hold among WWW users. Data mining, the use of automated statistical tests and pattern recognition algorithms to find regularities in large databases, also violated traditional rules of statistical inference – but became common and even necessary in many businesses.

Marketers, rather than bemoaning the demise of traditional, “high-quality” techniques for collecting and analyzing market research data, should instead think about how best to use newer technologies for solid decision making.  Many have done so.  One result is WWW advertising billed on a “per click” basis, indicating the prospect not only saw but responded to the ad.  Digital interactive television, combined with cameras and image-recognition systems in set-top boxes, may finally tell researchers which household members are facing the TV at any time.

But adjusting to the new is rarely easy. Many companies, mistakenly viewing the WWW as “the next television,” became concerned with measuring the audiences of websites. As mentioned, web surfers often use the WWW under false demographic pretenses. In addition, it is difficult to tell whether a hit on a WWW page came from a human or a crawler, ‘bot, or search engine. Individuals access accounts belonging to others and may routinely erase the “cookies” left on their hard drives. Nielsen’s early claims to have mapped the demographics of web users were widely questioned, and finally scientifically discredited; Nielsen later introduced an improved methodology.

Will “share and ratings” numbers for the web be perfected? Does it matter? There is little point in answering old questions about new technologies and media. The real challenge is figuring out what new, relevant questions the new technology lets us ask and answer. What are users’ expectations regarding interactive media like the Web? Are Web surfers as susceptible to suggestion as TV viewers? Or do their feelings about control and creativity, as they navigate hypermedia[6], change their attitude to advertising? How much personal data do they wish to share, and what compensation do they expect for this? Will this lead to a dystopian surveillance state? In pandemic times, will information power flow to the delivery services as it did earlier to retail stores? These are a few of the new questions, actually new opportunities, that are opened by the new media.

Panel data re-invented

In 2019, Nielsen (NYSE: NLSN) and The NPD Group (a private company) released details of a new alliance that reimagines the future of “omnishopper”[7] measurement and marks a key milestone in Nielsen’s measurement of the total consumer.

According to a press release issued by the two companies:

“For many retailers and manufacturers, gaining visibility into both the total store and the shopping habits of the consumer who fluidly toggles between online and offline shopping is crucial to surviving, thriving and maintaining relevance in today’s fragmented marketplace. To that end, Nielsen and NPD are jointly offering a large-scale, comprehensive omnishopper[7] consumer panel, pairing Nielsen’s trusted consumer packaged goods (CPG) measurement with NPD’s authoritative general merchandise consumer measurement to bring insight into today’s emerging omnishoppers, including all shoppers across all channels, and all products across all categories.

“Recognizing that in today’s consumption environment, no single panel will meet all measurement needs, the Omnishopper panel will connect to multiple consumer data sources to comprehensively track shopper behavior online and offline across all products and categories. This new approach will follow the consumer across an ever-expanding landscape of digital and physical touchpoints, inclusive of growing brick and mortar and e-tail outlets, and will boost visibility into smaller trip occasions. Additionally, it will increase data granularity by leveraging Nielsen’s and NPD’s deep product reference data and retail market measurement truth sets to deliver accurate omnishopper insights that enable progress.

“’Brands and retailers need to see the total picture as purchasing options and shoppers’ habits continue to evolve,’ said Karyn Schoenbart, NPD’s Chief Executive Officer. ‘Together we are building a diverse, representative and comprehensive omnishopper panel utilizing NPD’s pioneering receipt harvesting technology that will provide previously unavailable quality measurement across all brands, all industries and all channels.’

“’We are bringing to the table a fundamentally new approach that starts with gaining a solid understanding of the omnishopper, built to capture a consumer’s share of life, and ends with a framework for clients that is truly future-proofed,’ said John Tavolieri, Nielsen’s Chief Product and Technology Officer. ‘In our pursuit to help our clients measure, predict and activate on well-informed data across established and emerging retailers, we couldn’t be more delighted to expand our alliance with NPD to broaden our measurement of the total consumer.’

“Total consumer is Nielsen’s framework to provide coverage, context and clarity into the new consumer journey—whether that’s in a grocery store, through a retailer’s digital touchpoints, at a restaurant or bar, or through the next emerging retail channel—and the Omnishopper panel is a critical component to this vision. Key milestones in the creation of Nielsen’s total consumer measurement framework have already been delivered, inclusive of e-commerce measurement and Nielsen Omnichannel View, which reveals total market performance by channel, including online, offline and non-traditional channels. Additionally, over the past year, Nielsen has made broader strategic investments in its panel business to improve the coverage, usability and quality of its panel offerings, most notably, increasing its capabilities in mobile collection, projection methodologies, segment representation and access via Nielsen Connect.”

Lessons for “Big Data”

While MRCA did not survive, the use of consumer panels did, just in a different form. NPD came late to the purchase panel business but invested heavily and evolved to become one of the top 10 market research firms in the world. Nielsen adopted new techniques and new markets; management changed hands, but the Nielsen brand survived, and is now, as it was in the 1940s, the leading global brand in market research.

The 20th-century MRCA and Nielsen databases were laughably small compared to big data today, but they were the start of big data. They were leading edge, they were disruptive, and they have lasted. The lessons from those early days still apply, and we see a new generation re-learning these same lessons the hard way:

Many data buyers still prefer “questionable but cheap and easy” data to “excellent but expensive and complicated” data; SurveyMonkey is widely used for easy, fast surveys, and consumers are flooded with post-purchase satisfaction surveys via email and text immediately following interactions with providers. So…

  • Disastrous strategic decisions will be made.
  • Companies will commit blunders and bloopers with new data technologies. Then, it was heat sensors and medallions; now it’s drones and social media.
  • People will still share personal data in exchange for insubstantial compensation.
  • In any data project, errors in study design, data handling, and interpretation far outnumber and outweigh errors in analysis.

Data quality is still important, and still difficult. And, as evidenced by the inappropriate “targeted ads” we all see on our screens, it is still a problem. Today’s companies are good at collecting data, and perhaps getting better at analyzing data, but by no means are they good at cleaning and checking data.

This problem echoes IRI’s difficulty, in the 1980s, in hiring programmers who could comprehend the newly “big” body of scanner-generated data.

Artificial intelligence has not yet mastered the challenge of data preparation, cleaning, and checking. These tasks are human-intensive, expensive, and not scalable. It has been reported that many business intelligence professionals spend more than half of their work hours cleaning up raw data and preparing it for input into the company’s data platform. This severely limits the potential of big data. Solutions will include expanded training and career paths for those in the data preparation function.

End notes:

  1. This essay is updated and expanded from material originally published in F. Phillips, Market-Oriented Technology Management: Innovating for Profit in Entrepreneurial Times. Springer, Heidelberg, 2001. Available on Amazon
  2. Channing Stowell, an early market researcher/management scientist at Pillsbury, worked at various times with NPD, MRCA, MSA, Claritas, and Nielsen
  3. Jack Benny so loved the rhythm of the firm’s full name – Batten, Barton, Durstine, and Osborn – that he used it in his comedy routines.
  4. Initial Public Offering. 
  5. A 1998 study by the Federal Trade Commission found that the wrong price is scanned in one out of every thirty transactions.
  6. Privacy and personal data issues arise because, unlike MRCA and Nielsen volunteer households, HDTV and WebTV viewers involuntarily reveal their web navigation history (and hence perhaps their lifestyle and product preferences), and the technology leaves “cookie” files on the viewer’s computer or phone. Internet businesses defend these practices as making “a visitor’s experience more useful and enjoyable.” One can argue the ownership of the data, but not the fact that the website owner has made unauthorized use of the user’s disk space.
  7. An omnishopper is someone who likes to keep their retail experiences interesting, by connecting to the things they need and love in a myriad of ways.
