Since the publication of Susan Crawford’s book on the alleged failings of U.S. Internet policy, several mainstream outlets have run stories repeating her mantra that Internet speeds are too slow, coverage is shoddy, there is a growing “digital divide” between rich and poor, and broadband prices are too high.
Consider the barrage of “bad news” in just one week:
- The Wall Street Journal reported that six percent of Americans “lack high-speed service” in a story provocatively titled “Gaps Persist in High-Speed Web Access”;
- The Financial Times reported that the United States ranks 16th in Internet speeds, and that U.S. prices on a per-megabit-per-second (Mbps) basis are more than double those in Europe; and
- Digital Trends ran an article touting Ms. Crawford’s policies titled “Admit It: U.S. Internet Service Sucks.”
Are things as gloomy as the naysayers claim? A close look at the facts suggests otherwise. (Yes, that is a link to Need for Speed, my new e-book on Internet policy from Brookings Press; if Bob Woodward can shamelessly promote his book in the Washington Post when reporting the origins of the sequester, surely I can do the same.)
Let’s start with connection speeds. According to Akamai, a global provider of Internet services, the United States ranked ninth in average connection speeds (7.7 Mbps) in the third quarter of 2012, and seventh in percent of Internet connections with speeds above 10 Mbps (18 percent). South Korea leads both categories (average speed of 14.7 Mbps, 52 percent above 10 Mbps). It’s a bit misleading to compare our speeds with those of the fastest country in the world; a seven-minute-per-mile runner looks shoddy compared to the fastest runner in the world. And like any average, our nationwide average speed combines fast connections with slow ones. For example, the average connection speed in eight states (mostly along the densely populated Northeast corridor) exceeds 9 Mbps; any of those states would rank third fastest in the world on Akamai’s list. It’s a bit of a stretch to say that we are the tortoise among rabbits; the United States is more like Danica Patrick, who finished eighth at Daytona on Sunday.
Moving on to coverage gaps. The empirical basis for the share of Americans without “high-speed service” is the FCC’s annual report on the state of broadband deployment. There are two important caveats to keep in mind when assessing these data: The FCC counts wireline connections only, and only those wireline connections that exceed 4 Mbps. Thus, a wireline connection of, say, 3 Mbps (such as DSL) would not be counted in the FCC’s tally, and a wireless connection of, say, 10 Mbps (such as 4G LTE) would also be ignored. As of 2011, the latest year for which the FCC has reliable data, only about 7 million U.S. households did not have broadband access; if wireless broadband technologies are counted, the number of households without access to broadband at the FCC’s minimum speed is in the range of 2 to 5 million. It is hyperbole to suggest that broadband operators have ignored large swaths of the country.
And what about that growing “digital divide”? Once again, the naysayers ignore speedy wireless connections to create the appearance of a problem. It is not surprising that wealthier people have greater access to the Internet; they likely have greater access to most goods in the U.S. economy. A 2012 Pew survey shows that the same percentage of white, black, and Hispanic adults (roughly 62 percent) go online wirelessly with a laptop or a cellphone; that slightly more blacks and Hispanics own a smartphone than do whites (49 versus 45 percent); and that twice as many blacks and Hispanics go online mostly using their cell phone compared to whites (38 versus 17 percent).
The third statistic may indicate that blacks and Hispanics lack wireline access relative to whites or that blacks and Hispanics simply have stronger preferences for wireless connections relative to whites; if the latter, there is no problem to be solved. And if income differences explain the differences in broadband choices, income-based subsidies are the logical policy instrument.
Broadband price comparisons. There is a lot of casual empiricism in this area. International price comparisons of a differentiated product such as Internet connectivity should be taken with a grain of salt because the quality of Internet service might not be comparable. Moreover, if you put a gun to a provider’s head (as regulators do in Europe), and require it to make its services available to resellers at incremental costs, you are going to get cheap service—and destroy investment incentives as a nasty byproduct. Citing “harsher rules that have sapped profitability,” Reuters reported that European telco stocks were trading at roughly 9.9 times earnings compared to 17.6 times for their U.S. peers.
In large swaths of this country, the incumbent cable operator faces a fiber-based telco offering triple-play packages. Unless you think that cable operators are colluding with the telcos—a position espoused by Ms. Crawford—Internet prices are less than monopoly levels where telco-based fiber is available. And help is on the way for the rest of us in the form of wireless 4G LTE offerings, satellite broadband connections, and further telco deployment.
This is not to say that market forces and a largely hands-off Internet policy have delivered the ideal state of competition. In a market with large fixed costs, when consumers are reluctant to switch providers, and when certain must-have video programming is controlled by the incumbent cable operator, we shouldn’t expect ten broadband providers in each zip code.
The United States appears to be doing just fine in the broadband race; perhaps not in first place, but certainly deserving of a cameo on the next GoDaddy commercial. Any efforts to stimulate greater deployment should be targeted, and they should respect the incentives of broadband operators to continually upgrade their networks. The naysayers have misdiagnosed the state of broadband competition.
Before Washingtonians could fully digest the election results in early November, there was a mild tremor in the tele-cosmos that could have a significant impact on broadband deployment and hence the U.S. economy. AT&T announced that it planned to upgrade its copper network to an IP-based technology and replace some rural lines with wireless connections. It also petitioned the Federal Communications Commission to commence a proceeding in which market trials would be conducted to determine the policy implications associated with its IP transition. According to one consumer advocate, the news was the “single most important development in telecom since passage of the Telecommunications Act of 1996.”
To understand why, one needs a bit of history. A century ago, voice services were provided by a single firm (also named AT&T) based on a social compact struck in 1913 that has lost its relevance due to the advance of technology. In exchange for monopoly privileges, AT&T submitted (over the course of the next decade) to rate regulation and a universal service obligation. And the compact delivered on universality: By the early 1980s, over 90 percent of American households had basic telephone service.
But a funny thing has happened since the technological era of the Commodore 64 and the Walkman. Our nation was rewired for a second time by cable plant, a third time by wireless networks, and a fourth time by satellite networks. By 2012, high-speed Internet over a cable connection—which supports voice as one of several IP-based applications—was available to 93 percent of U.S. households. By 2010, 99.8 percent of the U.S. population was covered by at least one wireless voice network. And in September 2012, Dish Network launched a nationwide satellite broadband service, targeting underserved customers in rural areas with a $40-per-month offer that supports, among other IP-based applications, voice services.
Competitive entry puts telecom regulators in a pickle. Anyone following the recent spat between D.C. taxi drivers and Uber services, or the decade-old spat between cable operators and telco-based video providers, understands that when regulators can no longer provide monopoly protection to an incumbent, their basis for imposing monopoly-related fees or obligations washes away. Why should I pay you for the privilege of driving a cab in your city, the taxi driver asks, when my competitor is free from such obligations?
When it comes to voice services, the regulatory obligation that is now under scrutiny is the duty to provide universal telephone service over the old copper network. Based on the original social compact, that duty falls uniquely (and thus perversely) on the telcos. Cable, wireless and satellite providers are free to provide voice service (or not) over the network of their choosing, and they are free to pick and choose which homes to serve. In contrast, telcos must operate two networks at once—an outdated, copper-based legacy network that provides service to a shrinking customer base and a modern, IP-based network that supports data, video, and voice applications.
To understand how onerous these rules are, consider the decision of Google, a recent entrant to the broadband space, not to offer voice service as part of its Google Fiber offering in Kansas City. After studying state and federal regulations for voice services, the vice president of Google Access Services concluded: “We looked at doing that [VoIP]. The cost of actually delivering telephone services is almost nothing. However, in the United States, there are all of these special rules that apply.” It makes little sense to have the telcos abide by those same rules when cable operators and wireless providers (typically five in a city) are direct competitors for voice services.
If supporting two separate networks imposed trivial costs on the telcos, then consumers would be held harmless. Alas, telcos invest significant resources to maintain the legacy network. One study by the Columbia Institute for Tele-Information estimated that nearly half of telcos’ capital expenditures are tied up in maintaining these legacy networks. Freed from these obligations, telcos could redeploy those resources to higher-value services, including expanding the reach of their IP-based networks. Broadband consumers, particularly those living in areas served by a single wireline broadband provider, would benefit from the enhanced competition with cable operators.
There appears to be a growing consensus on the need for reform. Indeed, Public Knowledge, a consumer advocacy group typically averse to the telcos, acknowledged that the petition for deregulation “raises a valid point of concern if the rules for the [legacy] to IP [conversion] apply only to it and other Local Exchange Carriers (LECs) upgrading their networks.”
Of course, there are still voices who advocate continued monopoly-era obligations, regardless of how many distinct technologies cover or nearly cover the entire nation for voice service. A recent op-ed in the New York Times fantastically asserted the existence of a telco-cable “cartel.” These incessant calls for a public-utility-style approach are outliers in the policy arena, as rational voices from both the left and right seem to be coalescing around the proper idea for how to transition to the modern telecom era.
Although the elections were polarizing for many policy matters, at least broadband policy seems to be bringing folks to the middle for constructive debate and problem solving. It’s time to bring communications policy into alignment with the modern era.
In light of recent stories hinting that the Federal Trade Commission (FTC) will not pursue antitrust claims that Google discriminates in its search results, advocates for rival websites are sounding the alarm. One attorney who represents several websites that have complained about Google’s alleged favoritism in search warned: “If a settlement were to be proposed that didn’t include search, the institutional integrity of the FTC would be at issue.” Ironically, the opposite is true: By reportedly dropping search discrimination from its case, the FTC has bolstered its integrity.
This is not to say that discrimination against rival websites is a good thing. Rather, discrimination of the kind allegedly practiced by Google is generally not recognized as an antitrust violation. With the exception of extreme cases, such as when a monopolist refuses, for discriminatory reasons, to sell a product or service to a competitor that it makes available to others, a firm does not expose itself to antitrust liability by merely refusing to deal with a competitor. (By contrast, a firm may expose itself to antitrust liability by refusing to deal with customers or suppliers so long as they deal with the firm’s rival.) Because Google is not refusing to sell a product or service to a rival website that it makes available to others, but instead places its specialized search results—such as maps, images, shopping, or local results—at the top of the page when it believes they will be useful to consumers, Google arguably has no “duty to deal” under the antitrust laws.
To make a discrimination square peg fit into an antitrust round hole, the FTC would have needed to invoke an unorthodox section of the FTC Act (Section 5), thereby stretching the agency’s authority. By recognizing the incongruence between the conduct that the antitrust laws are meant to stop and the consumer-centric justifications for Google’s behavior, the FTC appears to have spared itself a tough slog. For example, one element of a duty-to-deal claim under the Sherman Act is proving that Google’s treatment of rival websites harms consumers; even the cleverest economist would be stumped with that assignment.
Google’s rivals are now seeking a do-over at the Justice Department (DOJ). They analogize the Google case to the FTC’s Microsoft investigation, where the DOJ picked up that case shortly after the FTC commissioners deadlocked in 1993. But the FTC does not appear to be deadlocked here; the agency is likely rejecting the Google case because the antitrust law does not support the complainants’ arguments.
Although regulatory relief at the FTC appears to be fleeting (and the DOJ is not the proper forum), website rivals could seek protection against search discrimination from Congress. The blueprint is already established: In 1992, Congress amended the Cable Act to protect independent cable networks against discrimination by vertically integrated cable operators. Section 616(a)(3) of the Act directs the Federal Communications Commission to establish rules governing program carriage agreements that “prevent [a cable operator] from engaging in conduct the effect of which is to unreasonably restrain the ability of an unaffiliated video programming vendor to compete fairly by discriminating in video programming distribution on the basis of affiliation or nonaffiliation of vendors in the selection, terms, or conditions for carriage of video programming provided by such vendors.”
This explains why, for example, the NFL Network brought a discrimination case against Comcast—a vertically integrated cable operator that owns a national sports network—under the Cable Act and not under the Sherman Act. Had the NFL Network pursued its discrimination claims in an antitrust court, it likely would have failed. By styling its case as a program-carriage complaint, however, the NFL Network took advantage of cognizable harms under the Cable Act such as preserving independent voices that, for better or worse, are not appreciated by the antitrust laws.
If independent websites such as Nextag want relief, then they should lobby Congress to write the analogous non-discrimination provisions covering search engines. Once an agency is designated with the authority to police Google and other vertically integrated search engines (Bing included), website rivals could pursue individual discrimination claims just like the NFL. Importantly, website rivals would have to fund these battles, not with taxpayer money (of which millions were likely spent by the FTC in its antitrust investigation of Google), but with their own resources. Self-funding ensures that only the strongest discrimination cases would come forward; when someone else is footing the bill, all bets are off.
Admittedly, the relief contemplated here would not come quickly. It took years for independent cable networks to convince Congress of their plight. But the impatience of Google’s rivals is no reason for the FTC to bend the antitrust laws. Better to keep the powder dry—and the FTC’s integrity intact—and go after a monopolist that is more blatantly violating the antitrust laws on another day.
The Federal Trade Commission (FTC) is in the final stages of conducting its Google investigation. As the agency contemplates whether Google is a monopolist in the ill-defined market for search, it may find the competitive ground has shifted beneath its feet in just the 15 months since the investigation began. While a year or two ago, Google’s main competition in search might have been Bing and Yahoo, today it’s Apple and Amazon, and tomorrow it may be Facebook. The market is almost certainly broader than general search engines as we normally think of them.
Just last week, the New York Times ran a story explaining that Google and Amazon are “at war to become the pre-eminent online mall.” The story cited survey data from two consultancies that should give the antitrust authority pause:
- Forrester Research found that a third of online users started their product searches on Amazon compared to 13 percent who started their search from a traditional search site; and
- comScore found that product searches on Amazon have grown 73 percent over the last year while shopping searches on Google have been flat.
These impressive statistics suggest that Google lacks market power in a critical segment of search—namely, product searches. Even though searches for items such as power tools or designer jeans account for only 10 to 20 percent of all searches, they are clearly some of the most important queries for search engines from a business perspective, as they are far easier to monetize than informational queries like “Kate Middleton.”
One senses that the FTC has not focused much on competition from Amazon in product search, and perhaps does not even think of Amazon as a search engine. Instead, antitrust agencies around the globe have fixated on helping middleman comparison-shopping sites such as Nextag and PriceGrabber, most of which charge retailers for listings. Google is taking heat from comparison sites for doing the same thing because Google is perceived to be the most important source for online shoppers. That regulators are willing to breathe life into these intermediaries implies they do not recognize the platform-based competition between Google and Amazon for product searches.
Amazon is not the only behemoth that competes with Google for search. Apple’s Siri can do search and a whole lot more, from helping Samuel L. Jackson design the perfect dinner to making John Malkovich laugh to helping Martin Scorsese maneuver through New York. As search evolves from links into answers, services like Siri become highly valuable. And the iTunes App Store represents the launching pad for many searches that would otherwise start on Google. A couple in Virginia that enjoys winery tours might begin their search by installing “Virginia Wine in My Pocket” or “Virginia Wineries” on their iPhone rather than search the web. In March of this year, Apple announced that more than 25 billion apps had been downloaded from its App Store by the users of the more than 315 million iPhone, iPad, and iPod touch devices worldwide. One wonders whether any of these downloads are being counted by the FTC in its calculations of Google’s market share.
And now Facebook is getting into search. At a Disrupt conference last week, Mark Zuckerberg explained that search engines are evolving into places where users go for answers, and that Facebook is uniquely positioned to compete in that market: “And when you think about it from that perspective, Facebook is pretty uniquely positioned to answer a lot of the questions that people have. So what sushi restaurants have my friends gone to in New York in the past six months and liked? . . . . These are queries that you could potentially do at Facebook if we build out this system that you just couldn’t do anywhere else.”
It may not be natural to associate Amazon (an online retailer), Apple (a device maker), and Facebook (a social media site) with search, but in the technology industry, your next competitive threat can come from anywhere. Monopoly and the kind of robust platform competition between Apple, Amazon, Google, and Facebook are mutually exclusive portraits of reality. Will the FTC turn a blind eye toward this advanced form of competition?
Last week, the FTC hired outside litigator Beth Wilkinson to lead an investigation into Google’s conduct, which some in the press have interpreted as a grave sign for the search company. The FTC is reportedly interested in pursuing Google under Section 5 of the FTC Act, which prohibits a firm from engaging in “unfair methods of competition.” Along with Bob Litan, who served as Deputy Assistant Attorney General in the Antitrust Division during the Microsoft investigation, I have penned a short paper on the FTC’s seemingly unorthodox Section 5 case against Google. (Disclosure: This paper was commissioned by Google.)
Litan and I explore a few possible theories of harm under a hypothetical Section 5 case and find them wanting, including (1) claims that specialized search results (such as flight, shopping, or map results) “unfairly” harm independent specialized search websites like Kayak (travel) or MapQuest (mapping and directions), or (2) assertions that Google allegedly has “deceived” users or websites by seemingly reneging on pledges not to favor its own sites. For the sake of brevity, I focus on the FTC’s potential deception theory here, and leave it to the interested reader to pursue the “unfairness” theory in the paper.
Deception of Users
The bases of Google’s alleged deception are generic statements that Google made, either in its initial public offering (IPO) or on its website, about its attitude toward users leaving the site. The provision of a lawful service, specialized search, launched several years after the IPO statement certainly cannot be deceptive. To conclude that it is, and more importantly, to prevent the company from offering innovations in search would establish a precedent that would surely punish innovation throughout the rest of the economy.
As for the mission statement that the company wants users to get off the site as quickly as possible, it is just that, a mission statement. Users do not go to the mission statement when they search; they go to the Google site itself. Users cannot possibly be harmed even if this particular statement in the company’s mission were untrue. Moreover, if the problem lies in that statement, then any remedy should be directed at amending that statement. There is no justification for the Commission to hamper Google’s specialized search services themselves or to dictate where Google must display them.
Deception of Rivals
An alternative theory suggests that Google deceived its rivals, reducing innovation among independent websites. In a February 2012 paper delivered to the OECD, Tim Wu explained that competition law can be used to “increase the costs of exclusion,” which if successful, would promote innovation among application providers. Wu argued that “oversight of platforms is conceptually similar” to oversight of standard-setting organizations (SSOs). He offers a hypothetical case in which a platform owner “broadly represents to the world that he maintains an open and transparent innovation platform,” gains a monopoly position based on those representations, and then begins to exclude applications “that might themselves serve as platforms.” Once the industry has committed to a private platform, Wu argues, the platform owner “earns oversight of its practices from that point onward.”
So has Google earned itself oversight due to its alleged deception? Google is not perceived by web designers as providing a platform for all companies to have equal footing. Websites’ rankings in Google’s search results vary tremendously over time; no publisher could reasonably rely on any particular ranking on Google. To the contrary, websites want their presence to be known to any and all search engines. That specialized search sites did not base their business plans on Google’s commitment to openness is what distinguishes Google’s platform from Microsoft’s platform in the 1990s. To Wu’s credit, he does not mention Google in this section of the paper; the only platforms mentioned are those of Apple, Android, and Microsoft.
It is even more of a stretch to analogize Google’s conduct to that in the FTC’s Rambus case. Unlike websites, which do not depend on a Google “standard” (a website can be accessed by users from any search engine, or through direct navigation), computer memory chips must be compatible with a variety of computers, which requires that chip producers develop a common set of standards for performance and interoperability. According to the FTC, Rambus exploited this reliance by, among other things, not disclosing to chip makers that it had additional divisional patent applications in process. That specialized search sites did not make “irreversible technological” investments based on Google’s commitment to a common standard is what distinguishes Google’s platform from SSOs.
The Freedom to Innovate
A change in a business model cannot be a legitimate basis for a Section 5 case because a firm cannot be expected to know how the world is going to unfold at its inception. A lot can change in a decade. Consumers’ taste for the product can change. Technology can change. Business models must adapt to such change or die. There should be no requirement that once a firm writes a mission statement, it be held to that statement forever. What if Google failed to anticipate the role of specialized search in 2004? Presumably, Google failed to anticipate a lot of things, but that should not be the basis for denying its entry into ancillary services or expanding its core offerings. As John Maynard Keynes famously replied to a criticism during the Great Depression of having changed his position on monetary policy: “When the facts change, I change my mind. What do you do, sir?” If Google exposes itself to increased oversight for merely changing its mind, then other technology firms might think twice before innovating. And that would be a horrible consequence of the FTC’s exploration of alternative antitrust theories.
Economists have long warned against price regulation in the context of network industries, but until now our tools have been limited to complex theoretical models. Last week, the heavens sent down a natural experiment so powerful that the theoretical models are blushing: In response to a new regulation preventing banks from charging debit-card swipe fees to merchants, Bank of America announced that it would charge its customers $5 a month for debit card purchases. And Chase and Wells Fargo are testing $3 monthly debit-card fees in certain markets. In case you haven’t been following the action, the basic details are here. What in the world does this development have to do with an “open” Internet? A lot, actually.
The D.C. Court of Appeals has been asked to consider several legal challenges to the FCC’s Open Internet Order. Passed in December 2010, the Open Internet Order was the regulatory culmination of an intense lobbying campaign by certain websites and so-called consumer groups to regulate the fees that Internet access providers such as Comcast or Verizon may charge to websites.
Although the challenges to the Open Internet Order largely concern the FCC’s authority to regulate Internet access providers and the proper scope of the regulations—for example, whether they should apply to wireline networks only or to all broadband networks including wireless—here’s hoping that the rules are also judged according to the FCC’s public-interest standard. Along that dimension, the FCC’s experiment in price regulation clearly fails.
Just as Internet access providers bring together websites and users, banks provide a two-sided platform, bringing together merchants and customers in millions of cashless transactions. Because banking networks cost money to create, banks can’t be expected to provide their services for free. If you tell a bank that it can’t charge one side of a two-sided market—particularly when that one side (the merchant side) is less price sensitive than the other (the customer side)—then expect customer fees to rise. It’s not because banks are evil; it is because the profit-maximizing price charged to customers by a bank depends on the price charged to merchants.
Ignoring this economic lesson of two-sided markets, the Durbin Amendment to the Wall Street Reform and Consumer Protection Act instructed the Federal Reserve Board to cap swipe fees charged by banks to merchants. Prodded by consumer advocates to eliminate the fees entirely, the Fed cut the fees in half, to about 24 cents per transaction from an average of 44 cents per transaction. Paradoxically, the smaller the merchant fee, the larger is the debit fee—this is the “seesaw principle” of two-sided markets in action. Say hello to $5 monthly debit fees.
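The seesaw principle described above falls out of even the simplest two-sided pricing model. The sketch below is a purely illustrative toy, not the Fed’s or any party’s actual model: a platform sets a merchant fee and a consumer fee, each side’s participation falls off linearly in its own fee (all demand parameters are invented for illustration), and transaction volume scales with both sides. Capping the merchant fee below the platform’s preferred level pushes the profit-maximizing consumer fee up.

```python
# Toy two-sided-platform model illustrating the "seesaw principle":
# capping the fee on one side raises the profit-maximizing fee on the other.
# Demand parameters (a, b, cost) are hypothetical, chosen only for illustration.

def profit(p_c, p_m, a=10.0, b=1.0, cost=1.0):
    """Platform profit: per-transaction margin times transaction volume.

    Consumers joining:  a - b * p_c   (linear demand, illustrative)
    Merchants joining:  a - b * p_m
    Transactions scale with the product of the two sides.
    """
    consumers = max(a - b * p_c, 0.0)
    merchants = max(a - b * p_m, 0.0)
    return (p_c + p_m - cost) * consumers * merchants

FEE_GRID = [i / 20 for i in range(0, 201)]  # candidate fees 0.00 to 10.00

def best_consumer_fee(p_m):
    """Consumer fee that maximizes profit, holding the merchant fee fixed."""
    return max(FEE_GRID, key=lambda p_c: profit(p_c, p_m))

# Unregulated: the platform picks the merchant fee freely as well.
unregulated_pm = max(FEE_GRID, key=lambda p_m: profit(best_consumer_fee(p_m), p_m))
unregulated_pc = best_consumer_fee(unregulated_pm)

# Regulated: merchant fee capped at roughly half, as the Durbin rule did.
capped_pm = unregulated_pm / 2
capped_pc = best_consumer_fee(capped_pm)

print(f"merchant fee {unregulated_pm:.2f} -> consumer fee {unregulated_pc:.2f}")
print(f"merchant fee {capped_pm:.2f} -> consumer fee {capped_pc:.2f}  (seesaw)")
```

Running the sketch, the consumer fee under the capped merchant fee comes out strictly higher than under the unregulated one; halving the merchant-side price makes each transaction less profitable, so the platform recoups on the consumer side, just as banks did with monthly debit fees.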
In a classic case of one regulation spawning another, now there is talk of regulating the banks’ debit-card charges. In response to the new debit fees, some members of Congress asked the Justice Department to investigate the major banks, suggesting that the higher fees resulted from a pricing conspiracy and not from their own bone-headed price regulation.
Months before the new debit fees came into effect, Bob Litan of the Brookings Institution predicted in a paper that “consumers and small business would face higher retail banking fees and lose valuable services as banks rationally seek to make up as much as they can for the debit interchange revenues they will lose under the [Federal Reserve] Board’s proposal.” As noted by Todd Zywicki in the Wall Street Journal, Litan’s prediction proved prescient.
Although both the Durbin Amendment and the FCC’s Open Internet Order are price regulations, there are important differences. Unlike the Fed’s rulemaking on swipe fees, the Open Internet Order was not directed by Congress. This shortcoming alone might be fatal at the Appeals Court. And unlike the Fed’s rulemaking, the FCC’s rulemaking regulates the merchant fee out of existence. Regulating prices below market levels (as the Fed did) is one thing—regulating them to zero (as the FCC proposes) is beyond the pale.
Under the Open Internet Order, Internet access providers are banned from charging websites a surcharge for priority delivery. Indeed, the mere offering of such a fee to one website would be “discriminatory” and thus presumptively anticompetitive, even if the same offer were extended to other websites. Self-described public interest groups advocating for the Open Internet Order believe that if the smallest website in America can’t afford a surcharge for priority delivery, then no one should be allowed to buy it.
Assuming the FCC’s Order withstands legal scrutiny, the rules will clearly retard innovation among application developers: Why develop the next killer real-time application if you can’t contract for priority delivery?
And if the Durbin Amendment is any guide, the effect of the Open Internet Order will be higher Internet access prices for consumers.
The same Bob Litan who accurately predicted price hikes in banking caused by price regulation made a similar prediction for broadband networks: “Even according to a theoretical model championed by net neutrality proponents, end users are unequivocally worse off under net neutrality regulation, as the end-user price of broadband access is always higher when ISPs are barred from raising revenues from content providers.” Will his sage advice be ignored by regulators twice in the same year?
The Appeals Court should force the FCC to defend the notion that the agency’s Open Internet Order is consistent with the public interest: If higher access prices and less innovation among application developers are the unintended consequences of an “open” Internet, then the FCC will fail on this score. With luck, the Open Internet Order will be seen as the ugly cousin of the Durbin Amendment, and the FCC’s experiment in price regulation will be curtailed.