April 2001
Volume 4 • Number 11

Traffic Jam: Are Internet Businesses Measuring Up?

Suppose you have a Web site. You enter into arrangements with several advertisers that agree to pay based on the number of visitors your site attracts. Advertiser A bases its count on “cookies” placed on a visitor’s computer, and determines you have 500,000 users. Advertiser B also uses cookies, but calculates 1.1 million users. Advertiser C tallies the number of registered users who visit your site, and sees 300,000 users. Advertiser D tracks users’ PC hostnames and detects 650,000 users. Advertiser E uses “hits” and observes 1.5 million. Meanwhile, your own internal computer logs show 750,000 users. Which number is accurate?

There are issues (read “problems”) with each method of calculating Web site traffic. It may well be that none of the figures cited above is accurate. Although you may have a tenuous grasp on your sanity each time you hear from the advertisers, you may believe that’s acceptable. After all, this isn’t supposed to be rocket science—just a method of estimation. The more troubling question, however, is which number you will use in your SEC filings.

The Origins and Purpose of Internet Metrics

Measuring audience size is far from a new concept. Virtually since their inception, the television and radio industries have measured the number of people in the audience in order to determine advertising rates. Today, the cornerstones of television audience measurement—reach, frequency, gross rating points—are standardized by Nielsen Media Research. As Nielsen argues (from its slightly biased position), “an independent, third-party measurement system embracing the highest standards of accuracy and integrity” is integral to the functioning of the television marketplace.1

Like their counterparts placing television ads, Internet advertisers purchase ad “real estate” on Web sites. The critical inquiry for an Internet advertiser is whether an advertisement on Web site X will maximize its advertising dollars by reaching more of its target audience than an advertisement on Web site Y. To answer this question, advertisers rely on data from three sources: the sites themselves, advertising aggregators like DoubleClick that sell Web site advertising space, and Internet marketing research firms like Nielsen/Net Ratings and Media Metrix. This data, known as “Internet metrics,” typically includes information about the number of times a given Web page is requested or loaded, the number of visitors to the Web site, and the number of times an ad is clicked on.

Just as the Nielsen ratings can “make or break a television program,” Internet metrics “have become gospel in the Internet Economy.”2 However, “[t]here are only dozens of channels to grade on TV”; there are millions of Web sites to be ranked.3 Moreover, unlike Nielsen ratings, Internet metrics are used for more than allocating advertising. As originally envisioned, audience measurement using Internet metrics was “intended [only] to provide online advertisers with an independent, third-party opinion on site audience size and quality,” as a check on the veracity of the numbers provided by self-interested dot.coms.4 Nevertheless, analysts and investors soon co-opted these metrics to fuel their valuation models.

How Internet Metrics Became a Valuation Tool

As research analysts, consumers and dot.coms themselves struggled for a method to value fledgling businesses, people began to look to Internet metrics as a viable means to assess a company’s growth potential. Indeed, traditional metrics such as price/earnings ratios were of no help in valuing dot.coms with concededly no profits or even a foreseeable horizon within which profits might emerge. Yet, no matter how the business models or earnings of dot.coms differed, advertising was the one constant that could be compared, apples to apples, and advertising is predicated on Internet metrics.


[P]eople began to look to Internet metrics as a viable means to assess a company’s growth potential.


At the same time, companies realized that their stock prices could be influenced by the Internet metrics they released. That realization prompted a practice we will refer to as “metric shopping.” Metric shopping (coined from the term “forum shopping”) refers to a company’s disclosure of the Internet metrics produced by the advertising aggregator or marketing research firm that suggests the highest audience numbers, even though those numbers may be inaccurate, or at the very least inconsistent with numbers generated by others.

As numerous articles from 1999 and 2000 illustrated, Internet metrics gained as much prestige with investors and analysts, who relied “heavily on traffic reports when buying and selling stocks,” as they had with advertisers.5 Indeed, rating figures became regular features in investment banking research on dot.coms,6 and the number of unique visitors to a company’s Web site appeared to be a “positive and significant predictor of stock price fluctuations.”7 The problem is, Internet metrics subscribe to no uniform standard of measurement.

Internet Metrics and SEC Filings

If companies are promoting a valuation measure that may have no bearing on value, where does that leave investors? In general, the answer is caveat emptor—let the buyer beware. Disclosures of metric inaccuracies in SEC filings are paltry at best. Issuers may find a prominent place in their documents to highlight the number of unique users to their Web sites, but provide only a fine-print, general, boilerplate disclaimer as to the number’s reliability.


[C]ompanies realized that their stock prices could be influenced by the Internet metrics they released.


Since the purpose of securities laws is to promote full disclosure and protect the investing public, shouldn’t dot.coms be more forthcoming about the inaccuracies of the metrics they are touting? More to the point, since there is no consensus on how to calculate Internet metrics, much less about what they mean, how can publicly traded companies feature these numbers in prospectuses or press releases at all? Finally, when companies know, or have reason to know, that two or more credible sources have produced conflicting numbers for the very same metrics, how can they selectively disclose and highlight only the highest numbers without violating securities laws that prohibit materially misleading statements and omissions? In short, why are companies permitted to use these ratings in official documents with complete impunity?

Internet Metrics Gone Bad

Two examples demonstrate the problems Internet metrics present to investors and to the companies that tout them as indications of value. First, in a secondary offering in May 1999, Theglobe.com “raised $70 million … after boasting on the cover of its stock prospectus that it had 10.2 million users.”8 That figure, which came from “market researcher DoubleClick, was more than triple the 3.2 million visitors [calculated] by [DoubleClick’s competitor] Media Metrix.”9 Nevertheless, Theglobe.com publicized only the DoubleClick number in its prospectus, without even mentioning the Media Metrix tally. Approximately one month later, however, Theglobe.com learned that the DoubleClick number it had highlighted was wrong. In contrast to the earlier fanfare, “Theglobe.com quietly retracted its statement [regarding the DoubleClick traffic figure] … declin[ing] to comment.” Interestingly, DoubleClick “blamed the miscalculation on human error and fault[ed] Theglobe.com for using the data in a securities document without permission.”10


If companies are promoting a valuation measure that may have no bearing on value, where does that leave investors?


Alta Vista encountered similar difficulties with Internet metrics. In December 1999, Alta Vista filed for a $281 million initial public offering after seeing its traffic numbers climb to more than 13.5 million users, making it the ninth-largest Web property according to Media Metrix. However, when Media Metrix reported February traffic figures in March 2000, Alta Vista appeared to have suffered a dramatic decline to 12.3 million visitors, lowering its ranking to the 13th-largest Web property. Because of this devastating traffic report (which would have been the most recent number available at the time of the planned IPO) and the downturn in the market, Alta Vista decided to pull its offering. In essence, the company’s ability to go public and secure an acceptable valuation appeared largely to hinge on positive traffic numbers from Media Metrix. Alluding to the fact that Alta Vista’s own records showed that the number of users actually increased during February, an Alta Vista official claimed “‘[Media Metrix’s] inaccuracy, from our perspective, was devastating, and we’re still trying to recover from it.’”11 The sad truth is that all Internet metrics are inherently inaccurate.

Do Investors Care About the Numbers?

The above examples demonstrate the effect Internet metrics can have on a company. The next logical question is whether investors actually rely upon the traffic numbers when making investment decisions. The correlation between Internet metrics and stock prices is best illustrated by the market responses to traffic numbers released by VitaminShoppe.com and SportsLine.com.

In November 1999, VitaminShoppe issued a press release noting that Media Metrix had listed it as one of the “Top 10 E-Commerce Gainers” for the previous week. “[T]he company’s stock soared 59 percent that day.” Similarly, “[f]rom April 1999 to May 2000, there were at least eight instances in which the reporting of ratings figures seemed to have a significant impact on the direction of SportsLine’s stock price. When Media Metrix figures were favorable, the firm often touted the numbers in a press release and its stock price rose. And when the site dropped out of Media Metrix’s top 50 sites in a November 1999 report, the stock fell nearly 22 percent in the following two weeks.”12

Stock analysts also rely on Internet metrics. For example, in 1999, Bear Stearns analyst Scott Ehrens raised his rating of About.com from “attractive” to “buy” merely because reports demonstrated that traffic to the portal had increased significantly while its price had not yet followed suit.13

Defining the Terms

To better understand the problems with Internet metrics, we first need to understand how they are calculated. One of the first metrics used by Internet firms was “hits.”14 A hit is “the retrieval of any item, like a page or a graphic, from a webserver.”15 A modern-day cousin to “hits” is the “page view,” which is “the accessing of a web page. …A page view … count[s] only the number of times a page has been accessed, whereas a hit counts the number of times that all elements in a page, including graphics, have been accessed.”16 Thus, if a user requests a Web page containing four graphics, that request would translate into only one page view, but five hits (one for the Web page itself, and one for each of the component graphics).

Although hits and page views are calculated differently, both require analyzing Web server logs. These logs itemize all requests for individual files, embedded images on a Web page, and “any other associated files that get transmitted” along with the Web page.17 Since both of these metrics are generated by Web servers, most dot.coms can calculate the page view and hit numbers on their own.18 Still, most Web sites elect to retain independent ad servicing firms in order to bolster the veracity of their internally generated numbers.19
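
To make the distinction concrete, the minimal sketch below tallies both metrics from a few log entries. It is an illustration under stated assumptions, not a real analysis tool: the Common Log Format shown, and the rule that only requests for HTML documents (or directories) count as page views, are simplifications, and actual sites and log analyzers vary.

```python
# Minimal sketch: counting hits vs. page views from a Web server access
# log. The Common Log Format and the rule that only HTML documents (or
# directory requests) count as page views are illustrative assumptions.
import re

LOG_LINE = re.compile(r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
                      r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d+)')

PAGE_SUFFIXES = ('.html', '.htm', '/')  # assumption: these mark "pages"

def count_hits_and_page_views(log_lines):
    hits = 0
    page_views = 0
    for line in log_lines:
        match = LOG_LINE.match(line)
        if not match:
            continue
        hits += 1  # every retrieved item, page or graphic, is a hit
        path = match.group('path').split('?')[0]
        if path.endswith(PAGE_SUFFIXES):
            page_views += 1  # only the page itself is a page view
    return hits, page_views

# One page plus its four graphics: five hits, but only one page view.
sample = [
    '1.2.3.4 - - [17/Apr/2000:10:00:00 -0500] "GET /index.html HTTP/1.0" 200',
    '1.2.3.4 - - [17/Apr/2000:10:00:01 -0500] "GET /logo.gif HTTP/1.0" 200',
    '1.2.3.4 - - [17/Apr/2000:10:00:01 -0500] "GET /banner.gif HTTP/1.0" 200',
    '1.2.3.4 - - [17/Apr/2000:10:00:02 -0500] "GET /photo1.jpg HTTP/1.0" 200',
    '1.2.3.4 - - [17/Apr/2000:10:00:02 -0500] "GET /photo2.jpg HTTP/1.0" 200',
]
print(count_hits_and_page_views(sample))  # (5, 1)
```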

Arguably the most oft-cited metric is “unique users.” The unique user metric (sometimes known as a unique visitor metric) represents the number of different individuals who visit a site within a given period of time.20 Simply stated, the unique user metric is a head count of the distinct individuals who visit a Web site.


[S]ince there is no consensus on how to calculate Internet metrics … how can publicly traded companies feature these numbers in prospectuses or press releases…?


There are three ways to calculate unique users. The first method is to require users to register and provide a password to gain access to the site. The second method involves counting user IP addresses. The third method involves “counting cookies.”21 Cookie technology arguably is the most popular tool for calculating unique users, but it is also the most rife with accuracy problems.
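
How far apart the three methods can land is easy to demonstrate. In the minimal sketch below, the visit records and identifiers are hypothetical, but the pattern is the one the industry complains about: the same five visits yield two, five, or three “unique users” depending on whether one counts registrations, cookies, or IP addresses.

```python
# Minimal sketch: the same traffic produces three different "unique
# user" counts depending on the identifier used. All records and
# identifiers here are hypothetical.
visits = [
    # (ip_address, cookie_id, registered_name)
    ('10.0.0.1',   'cookie-A', 'alice'),  # Alice from her home PC
    ('10.0.0.1',   'cookie-B', 'alice'),  # Alice again, cookies deleted
    ('172.16.0.9', 'cookie-C', 'alice'),  # Alice from her office PC
    ('172.16.0.9', 'cookie-D', 'bob'),    # Bob, behind the same gateway
    ('64.12.0.5',  'cookie-E', 'bob'),    # Bob dialing in from home
]

by_registration = len({name for _, _, name in visits})  # 2 users
by_cookie = len({cookie for _, cookie, _ in visits})    # 5 "users"
by_ip = len({ip for ip, _, _ in visits})                # 3 "users"

print(by_registration, by_cookie, by_ip)  # 2 5 3
```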

Why Internet Metrics are Unreliable

Three significant problems plague Internet metrics. First (and perhaps foremost) is inconsistent terminology. If each company has its own definition of a particular metric, it becomes impossible to undertake an apples-to-apples comparison, thereby vitiating one of the reasons analysts began utilizing traffic reports for valuation purposes in the first place. Internet metrics also suffer from flaws in the methodology used to calculate them and, sometimes, from purposeful manipulation by the companies whose traffic these numbers are meant to track.

Inconsistent use of terms.

To illustrate the definitional problem, let’s start with the term “unique user.” “If a consumer views the same Web page four times in one day, … Media Metrix considers it one unique visitor and unique page view. Most Web sites, conversely, interpret the data as four separate visits and four page views.”22

Indeed, SEC filings show quite a range of in-house calculation methods. Buy.com Inc.’s prospectus says unique visitor is “an industry term used to describe an individual who has visited a particular Internet site once or more during a specific period of time.”23 Because it is unclear exactly what the “specific period of time” is, that “definition” does not explain how Buy.com would count two visits by the same person.

El Sitio, Inc. takes the same vague approach: “[U]nique visitors” means “users who visit [El Sitio’s] Websites one or more times but who are counted in the relevant period as having visited the Websites only one time.”24 Again, it is hard to ascertain when multiple visits by the same person will be considered one visit or more since we aren’t told what the “relevant period” is.


If each company has its own definition of a particular metric, it becomes impossible to undertake an apples-to-apples comparison.


US Search Corp. Com, by contrast, calculates unique visitors “by counting the number of people that access our Web site … excluding multiple visits by the same person within a five minute period.”25 Using that definition, two visits by the same person within five minutes will be counted as one unique visitor, while two visits by that same person ten minutes apart would be counted as two separate unique visitors.
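
A rule like US Search’s is trivial to implement, which underscores how much the arbitrary choice of window drives the resulting number. The sketch below is one plausible reading of the prospectus language (the actual algorithm is not disclosed); halve or double the window and the “unique visitor” count moves accordingly.

```python
# Minimal sketch of a fixed-window deduplication rule: visits by the
# same person within five minutes collapse into one unique visitor.
# This is one plausible reading of the prospectus; the company's actual
# algorithm is not disclosed.
FIVE_MINUTES = 5 * 60  # seconds

def count_unique_visitors(visits, window=FIVE_MINUTES):
    """visits: iterable of (visitor_id, timestamp_in_seconds) pairs."""
    last_counted = {}  # visitor_id -> timestamp of last *counted* visit
    unique = 0
    for visitor, ts in sorted(visits, key=lambda v: v[1]):
        if visitor not in last_counted or ts - last_counted[visitor] > window:
            unique += 1
            last_counted[visitor] = ts
    return unique

# Two visits three minutes apart count once; ten minutes apart, twice.
print(count_unique_visitors([('alice', 0), ('alice', 180)]))  # 1
print(count_unique_visitors([('alice', 0), ('alice', 600)]))  # 2
```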

Flaws in methodology.

The flaws in calculation methodologies can best be illustrated by analyzing two of the most prevalent metrics found in prospectuses and press releases: unique users and hits. As discussed previously, there are three methods to track unique users. The preferred method is counting cookies, but experts have identified at least five problems with cookie counting.

1. Cookie technology is browser- and computer-specific. That means one person will be counted as a new unique user each time she logs on using a different computer or Web browser. If a user logs in from two computers within her home, or from one at home and one at work, she will necessarily use two different browser installations, and will be identified as two unique users. Similarly, if a user reinstalls a browser on his computer, he will be counted as a new unique user the next time he logs in to his favorite Web site.
2. Sophisticated users who turn cookies off, reject cookies, or delete cookies from their computers will be counted as new unique users each and every time they log in to a Web site.
3. Hackers can create fake click-through programs that trick cookies into counting unique users that do not exist.
4. Internet metric calculations often fail to exclude hits generated by robots and search engines. One auditor reports having seen as high as a “75% robot hit rate on some Web sites.”26 (This problem and the next are illustrated in the sketch following this list.)
5. Counting cookies often fails to exclude hits generated within a company by its own employees checking or working on the Web site.
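
Problems 4 and 5 are correctable in principle by filtering the raw traffic before counting, as the minimal sketch below illustrates. The robot signatures and internal address range shown are hypothetical, and real crawlers and proxies are considerably harder to identify, which is precisely why so many counts leave them in.

```python
# Minimal sketch: excluding robot traffic (problem 4) and a company's
# own employees (problem 5) from a hit count. The robot signatures and
# internal network range are hypothetical simplifications.
from ipaddress import ip_address, ip_network

ROBOT_SIGNATURES = ('bot', 'crawler', 'spider')  # hypothetical list
INTERNAL_NET = ip_network('192.168.0.0/16')      # hypothetical intranet

def external_human_hits(requests):
    """requests: iterable of (source_ip, user_agent) pairs."""
    kept = 0
    for source_ip, user_agent in requests:
        agent = user_agent.lower()
        if any(sig in agent for sig in ROBOT_SIGNATURES):
            continue  # problem 4: robot and search-engine traffic
        if ip_address(source_ip) in INTERNAL_NET:
            continue  # problem 5: the company's own employees
        kept += 1
    return kept

requests = [
    ('4.4.4.4',     'Mozilla/4.0'),    # an outside visitor
    ('5.5.5.5',     'ExampleBot/1.0'), # a search-engine robot
    ('192.168.1.7', 'Mozilla/4.0'),    # an employee checking the site
]
print(external_human_hits(requests))  # 1
```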

Calculating unique users through the registration method is problematic as well. This method might yield a conservatively low number, thereby resulting in undercounting, if friends, household members or office personnel share one user name and password. The more common problem, however, is overcounting. It is not unusual for users to register several times under different names (particularly when they forget their password or ID) or to submit bogus information for privacy reasons. The potential for multiple registrations increases when registration on a site is free.

Finally, the user hostname method for calculating unique users (the least popular method) poses problems because “many organizations use Internet gateways, which mask the real Internet hostnames, so user counts may be conservative.”27 Conversely, counting each hostname as a separate user may cause overcounting, as there is no way to discern whether 100 log-ins from a company’s gateway or proxy server constitutes 100 unique individuals or a handful of very under-utilized employees.

Intentional manipulation.

Unique user metrics can also be manipulated by a company trying to make its traffic numbers look better than they are. For example, in one interview, “Media Metrix … [claimed that one] Web site showed a sudden surge in user traffic—but no increase in the total number of pages the users were looking at. When the company investigated, it learned that a corporate directive at the Web site instructed its staff to increase the number of recorded users, one way or another.…”28

Companies can also inflate numbers for “hits” by counting a hit for each graphical element loaded when a user’s computer loads a requested Web page. A typical dispute between dot.coms and the traffic measurement firms “centers on whether a user requesting a Web page that has 10 graphical elements on it can be counted as registering 10 hits … or just one.”29 Moreover, hits are counted when a page with graphical elements is merely accessed. That means search engines contribute to inflated hit counts because they generate hits regardless of whether an actual user has asked to load the page and has seen the ads.

A recent study at USC’s Marshall School of Business confirmed how easily Internet metrics can be manipulated. “[R]esearchers studied thousands of hits on five major websites … and tabulated those hits on the basis of Internet protocol address alone. They then tabulated the hits more accurately by imposing mandatory logins and other identification methods.” The resulting disparities were striking. “Using Internet protocol addresses alone as a means of identifying website hits led to a 39% underestimation of visits, a 64% overestimation of the number of pages seen by each visitor and a 79% overestimation of the time spent on each visit.”30
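
The direction of those errors follows directly from the arithmetic of shared IP addresses. In the back-of-the-envelope sketch below, whose numbers are invented for illustration and are not the study’s data, a gateway merges several people’s visits into fewer IP-based “visits,” so counted visits fall while pages per visit rise. Notably, a 39% undercount of visits by itself implies roughly a 64% overstatement of pages per visit (1/0.61 ≈ 1.64), consistent with the study’s figures.

```python
# Invented numbers illustrating the direction of the USC findings (not
# the study's data): merging visitors behind shared IP addresses
# undercounts visits and correspondingly inflates pages per visit.
true_visits = 100        # hypothetical count with mandatory logins
ip_based_visits = 61     # hypothetical count by IP address alone
pages_served = 300       # total page views, identical either way

undercount = 1 - ip_based_visits / true_visits
inflation = (pages_served / ip_based_visits) / (pages_served / true_visits) - 1

print(f"visits undercounted by {undercount:.0%}")        # 39%
print(f"pages per visit overstated by {inflation:.0%}")  # 64%
```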

Independent Advertising and Marketing Research Firms Aren’t Much Better

Because universal standards governing the use of Internet metric terminology and measurement of Web site traffic have not yet been established, each independent firm uses its own system for collecting and calculating metrics. The lack of uniformity among companies presumably tracking the very same data is well documented and, quite often, astounding. For example, the Internet Profiles subsidiary of Engage Technologies tracked a single ad as part of a larger study. “The Internet advertising firm DoubleClick recorded that it delivered 2.1 million advertising impressions, or views of the ad, to 1.8 million Internet users. But the Nielsen/NetRatings service found that about 649,000 impressions of the ad were delivered to 514,000 Internet users—less than a third of the numbers compiled by DoubleClick.”31


[U]niversal standards governing the use of Internet metric terminology and measurement of Web site traffic have not yet been established.


Inconsistency is by no means limited to DoubleClick and Nielsen. Internet marketing research firm Media Metrix has also been the subject of criticism. One study found “the ratings services now operated by Media Metrix and Nielsen/NetRatings favored larger sites—by showing more traffic than those companies’ own records indicate—but undercounted the traffic to smaller sites in comparison to those sites’ own records.”32 Again, the core problem is that “[e]ach firm collects and computes data differently.”33

To further complicate matters, it is very rare for independently generated traffic numbers to correspond to a company’s internally calculated metrics. Generally, every company that hosts a Web site utilizes a Web server, which, in turn, automatically creates what are known as Web server access logs. “Every time a user requests a page on [a] site, that request is recorded in the access log.”34 While companies may capture different details with their logs, every log produces traffic statistics. Yet, even these internal logs present accuracy problems.

Turning again to the findings at the USC Marshall School of Business, “[e]rror rates for sites that track users through a log file of IP addresses are as great as 30 percent.…”35 Additionally, many internal logs suffer from the very same limitations plaguing the rest of the industry. For example, “[m]any programs that analyze the logs don’t weed out traffic hits that are generated within the company or by robots that are trolling the Web on behalf of Internet search engines, let alone prevent deliberate fraud.”36 Likewise, many internal log files “tally the number of different computers that access the site—which can lead to double counting if a person surfs from both home and work on different computers.”37


[Should we] permit companies to continue highlighting traffic reports in public documents and releases in light of the clear disparity and lack of agreement within the very industry that calculates the numbers[?]


To understand the problem, consider Alta Vista, which, after engaging in a lavish $120 million national marketing blitz, found that “the Holy Grail of Internet ratings—unique monthly visitors—had inexplicably slipped away … according to market research firms,” despite the fact that Alta Vista’s internal numbers showed a substantial bump in growth. Similarly, “Gay.com, an online community based in San Francisco, noted a Jordanesque jump in visitors in March [2000] (an internal audit revealed 1.8 million U.S. viewers), but Media Metrix said the site had only 530,000 unique visitors that month.”38

While the debate as to which firm provides the most accurate measurements rages on, investors continue to rely on this data for valuation purposes. That reliance, coupled with companies’ apparent ability to pick and choose among different sampling methodologies, encourages metric shopping. For the securities industry, the question is whether to permit companies to continue highlighting traffic reports in public documents and releases in light of the clear disparity and lack of agreement within the very industry that calculates the numbers.

Lack of Disclosures Designed to Actually Inform Investors

Since the range of discrepancy in Internet metrics is vast, and no single number is clearly reliable, one would expect companies to offer a great deal of cautionary language so that investors and analysts do not give undue credence to Internet metrics for valuation purposes. Unfortunately, the actual disclaimers companies publish are less than ideal. Indeed, corporate disclosures regarding the accuracy and reliability of Internet metrics are vague and overbroad at best. Dot.coms prominently display favorable Internet metrics (typically using graphics and oversized, bolded typeface) at the beginning of a document, but bury boilerplate disclaimers several pages later without providing details about the known problems with the numbers.

For example, in Theglobe.com’s secondary offering prospectus, the company highlighted, in color, on the inside front cover that it had “10.2 million users.” The document failed to mention, however, that Media Metrix (among others) had computed a vastly different number for this metric. So what disclosures did Theglobe.com provide? A few pages into the prospectus was a boilerplate statement that “[a]lthough we believe that the data are generally correct, the data are inherently imprecise. Accordingly, you should not place undue reliance on the data.”39 In light of Media Metrix’s much lower tally of unique users, it is questionable how Theglobe.com could harbor such a “belief.”

To be sure, in the “Risk Factors” section, under the heading “Internet advertising may not prove as effective as traditional media,” Theglobe.com disclosed (in typical boilerplate fashion):

“[n]o standards have been widely accepted to measure the number of members, unique users or page views related to a particular site. We cannot assure you that standards will accurately measure our users or the full range of user activity on our site … In addition, we depend on third parties to provide these measurement services. These measurements are often based on sampling techniques or other imprecise measures and may materially differ from each other and from our estimates. We cannot assure you that advertisers will accept our or other parties’ measurements.…”40

The iVillage IPO prospectus was no better. The company boasted that, “[a]s of December 31, 1998, iVillage’s membership … consisted of approximately 960,000 unique members, up from approximately 170,000 unique members as of January 31, 1998. …For the month ended December 31, 1998, iVillage.com had approximately 65 million page views and 2.7 million unique visitors.”41 But in contrast to the painstaking detail the company offered to illustrate how its traffic had grown, the disclosure offered a number of pages later in the “Risk Factors” section is much less informative: “[t]here are currently no standards for the measurement of the effectiveness of Internet advertising, and the industry may need to develop standard measurements to support and promote Internet advertising as a significant advertising medium.…”42


[C]orporate disclosures regarding the accuracy and reliability of Internet metrics are vague and overbroad at best.


Aren’t disclosures like this misleading (or downright untruthful) when the company knows of actual limitations that affect the accuracy of an Internet metric and is aware that other firms have calculated conflicting numbers for that metric? Disclosing generally that the industry has not established a uniform method for defining and measuring Internet metrics, or that Internet metrics are inherently imprecise, is inadequate when information regarding known limitations, biases, discrepancies and other factors that may affect reliability and accuracy is available.

Recommendations

Even if the Internet advertising industry implements standards for the use of Internet metrics, it is questionable whether these standards will meet the needs of the securities community. As Allan Weiner, vice president of analytical services for NetRatings, opined, “‘[m]arket research is about providing people direction, understanding of a marketplace and trends.’”43 Estimates and trends may be fine for the advertising industry, but they are inadequate for the higher standard to which the securities industry is held—a standard that legally obligates public companies to avoid making statements that lack factual support. Mere conjecture rather than hard facts is unacceptable if a company knows of limitations that might undermine the veracity of its statements.44

Additionally, in the securities realm, we review the impact of publicly released information from the perspective of the reasonable investor.45 The knowledge and expertise we attribute to such an investor would most likely differ from that of an advertising executive regularly engaged in analyzing Internet metrics. Therefore, it is unlikely that any guidelines adopted by the Internet advertising community, devoid of additional investor protections, would satisfy securities laws pertaining to the use and disclosure of material information.


Estimates and trends may be fine for the advertising industry, but they are inadequate for the higher standard to which the securities industry is held.


To truly ensure adequate investor protections in conjunction with the use of Internet metrics, the securities community should implement the following precautions, either through a self-regulatory body or through formal SEC regulations.

Disclose discrepancies.

We know publicly traded Internet companies regularly pick and choose among metrics and emphasize only the most favorable numbers in their filings and public releases. The only way to ensure that investors are not misled is to prohibit companies from promoting only part of the picture.

Whenever a company highlights or otherwise singles out an Internet metric, it should be required to disclose: (a) that there is another firm that has computed different numbers for the very same metric; (b) the name of the other firm; and (c) the actual discrepancy between the reported number and the number produced by the other firm. If there is more than one firm reporting different numbers for a particular metric, the company should be required to either disclose all the conflicting numbers, or, at its option, disclose the number that differs the most from the company’s highlighted number so investors can make an informed decision as to the veracity of the company’s claims. Similarly, if a company’s internal log files differ from the numbers the company highlights in its public documents, the company should also be required to disclose the numbers from its logs.

In the case of widely held Internet companies, including the high-profile dot.coms, it is a foregone conclusion that there will be multiple measurements for particular metrics, so this obligation will almost always attach to these companies. There can be no stronger evidence of the need for such disclosure than this statement from a spokesperson for BPA International, a nonprofit group that performs Web site traffic audits: “[w]e have had companies not release our audit figures because the numbers are lower than what they’ve said publicly.”46 It is hard to imagine how a company can withhold such information and remain in compliance with securities laws.

Disclose limitations.

Companies should be required to clearly and conspicuously disclose any and all limitations of any Internet metric they utilize in a manner reasonably expected to inform investors. For example, if a company uses cookies to track unique users, it should explain the potential problems with cookie methodology, such as the fact that one Web surfer can generate multiple cookies (and artificially inflate the unique user number) by using different browsers and different computers. Boilerplate disclaimers, such as general statements regarding the absence of any uniform method within the industry for defining and measuring Internet metrics or the inherent unreliability of Internet metrics, are patently inadequate for this purpose.

Define terms.

Because sampling methodology can have a substantial impact on the calculation of an Internet metric, companies that feature Internet metrics in their public communications should provide clear and conspicuous notice about how the company that generated the metric defines it (for example, what is a “unique user”?) and what population (both size and demographics) was sampled.

Make disclosures prominent.

Burying the foregoing information in the back of a document is not adequate, particularly if a company chooses to highlight an Internet metric in the front. In order to reasonably inform investors, disclosures regarding discrepancies in an Internet metric between different firms, limitations in the calculation of the metric, and the sampling methodology utilized should appear in close proximity to the actual metric chosen. Thus, in general, the information should be provided either in-line with the relevant text, or in a footnote that appears on the same page as the relevant text.

Conclusion

In 1998, the Future of Advertising Stakeholders (“FAST”) was assembled in response to growing divergence among Internet metrics definitions and gathering methodologies. The organization’s purpose is “to further the quality and comparability of all online media audience measurement.” In light of the need for universal standards of metric comparability and use, an organization such as FAST seems long overdue. Among other recommendations, FAST encourages the explicit disclosure of the sampling universe a company uses to compile a metric, and responsibility in disseminating metrics by requiring “precautions … against misrepresentation or distortion of results.”47

An Alta Vista official once opined that Internet metrics were “‘like wearing three watches at once—all with different times. …You don’t know what the right time is.’”48 We submit that, regardless of what time it is in dot.com land, the time is long past due for regulators to take a long hard look at this issue and work with industry organizations, such as FAST, to bring some semblance of order and responsibility to this otherwise chaotic discipline.

Notes

[Unless otherwise indicated, the URLs in these footnotes are current as of March 2001.]
1 www.nielsenmedia.com/whoweare.html (last visited Sept. 17, 2000).
2 Maryann Jones Thompson et al., “The Danger of Trading on Ratings,” THE STANDARD (July 14, 2000) www.thestandard.com/article/display/0,1151,16773,00.html.
3 Jon Swartz, “Net Ratings Vex Dot-Coms,” USA TODAY, June 20, 2000, at B1.
4 “The Danger of Trading on Ratings,” supra note 2.
5 Id.
6 Id.
7 Mark Graham Brown, “Metrics for the .Coms,” PERFORM (Summer 2000) www.pbviews.com/magazine/articles/metrics_for_the_coms.html.
8 “Net Ratings Vex Dot-Coms,” supra note 3.
9 Id.
10 Id. See also Christopher Byron, “Theglobe.com, An Internet Stock for Suckers Only,” THE STREET.COM (Sept. 16, 1999) www.thestreet.com/comment/keyhole/784263.html.
11 “The Danger of Trading on Ratings,” supra note 2.
12 Id.
13 Id.
14 “Metrics for the .Coms,” supra note 7.
15 www.webopedia.internet.com/TERM/h/hit.html.
16 www.webopedia.internet.com/TERM/p/page_view.html.
17 www.whatis.techtarget.com/WhatIs_Definition_Page/0,4152,212498,00.html.
18 Maryann Jones Thompson, “U.S., Europe to Unite on Web Traffic Standards,” THE STANDARD (July 7, 2000) www.thestandard.com/article/article_print/1,1153,16643,00.html.
19 FAST Principles of Online Media Audience Measurement, www.fastinfo.org/measurement/pages/index.cgi/audiencemeasurement.html (last visited July 23, 2000).
20 Thomas P. Novak & Donna L. Hoffman, “New Metrics for New Media: Toward the Development of Web Measurement Standards,” WORLD WIDE WEB JOURNAL (Winter 1997) www.2000.ecommerce.vanderbilt.edu/novak/web.standards/webstand.html.
21 Cookies are small data files placed on the user’s browser software after the user first gains access to a Web site. Once installed, the cookies “tag[] users so they may be recognized on future visits.” David Braun, “Tossing Your Cookies,” TECH INVESTOR (June 19, 1997) www.techweb.com/se/directlink.cgi?INV1997061912.
22 “Net Ratings Vex Dot-Coms,” supra note 3.
23 Buy.com Inc., final prospectus filed Feb. 8, 2000, at 1 (emphasis added).
24 El Sitio, Inc., final prospectus filed Dec. 13, 1999, at 5.
25 US Search Corp. Com, final prospectus filed June 25, 1999, at 1.
26 Joseph Menn, “Web Firms May Vastly Inflate Claims of ‘Hits,’” LOS ANGELES TIMES, Apr. 17, 2000, Part A1, at 1.
27 “New Metrics for New Media,” supra note 20.
28 “Web Firms May Vastly Inflate Claims of ‘Hits,’” supra note 26.
29 Id.
30 Paul Grand, “Internet Hits & Misses: Measures of Advertising Effectiveness Lack Accuracy,” www.marshall.usc.edu/main/media/news/internet.html.
31 George Mannes, “Internet Traffic Reports May be Pointing the Wrong Way,” THESTREET.COM (Nov. 5, 1999) www.thestreet.com/tech/internet/813406.html.
32 Id.
33 “Net Ratings Vex Dot-Coms,” supra note 3.
34 Todd Coopee, “Going Beyond Hit Counts” (July 14, 2000) www.infoworld.com/cgi-bin/deleteframe.pl?story=/articles/mt/xml/0.../000717mtwsa.xm.
35 Mo Krochmal, “Net Ad Measurement is Inexact Art” (Aug. 6, 1998) www.techweb.com/wire/story/TWB19980806S0016.
36 “Web Firms May Vastly Inflate Claims of ‘Hits,’” supra note 26.
37 “The Danger of Trading on Ratings,” supra note 2.
38 “Net Ratings Vex Dot-Coms,” supra note 3.
39 Theglobe.com, final prospectus filed May 20, 1999, at 3.
40 Id. at 18.
41 iVillage Inc., final prospectus filed Mar. 19, 1999, at 3.
42 Id. at 8.
43 “Internet Traffic Reports May be Pointing the Wrong Way,” supra note 31.
44 See In re Apple Computer Sec. Litig., 886 F.2d 1109, 1113 (9th Cir. 1989), cert. denied, 496 U.S. 943 (1990) (finding that soft information, such as projections, may be actionable if “the speaker is … aware of any undisclosed facts tending to seriously undermine the accuracy of the statement”).
45 See TSC Indus., Inc. v. Northway, Inc., 426 U.S. 438, 449 (1976).
46 “Web Firms May Vastly Inflate Claims of ‘Hits,’” supra note 26.
47 FAST Principles of Online Media Audience Measurement, supra note 19.
48 “Net Ratings Vex Dot-Coms,” supra note 3.

About the Authors

Joel Michael Schwarz is Counsel for MetLife on E-Commerce, and formerly was Special Counsel for Internet Matters, New York State Attorney General’s Securities Bureau. Reena Malhotra is a candidate for J.D. (May 2001) at Brooklyn Law School. The authors can be reached at Joel_WSL@joelschwarz.com and REENA2574@aol.com, respectively. The statements, opinions and observations expressed by the authors are theirs alone.

All contents © 2001 Glasser LegalWorks.