Last week, President Obama named Tom Wheeler of Core Capital Partners to be Chairman of the Federal Communications Commission (FCC). Interested parties of all types, from hedge fund managers to Silicon Valley entrepreneurs, are pondering how Mr. Wheeler will manage the agency and what he’ll focus on.
A look back at his musings on a personal blog (aptly named Mobile Musings) and on his more formal writings as chairman of an advisory committee to the FCC may provide some insights. Out of the gate, Mr. Wheeler will be confronted with several pressing issues, ranging from the FCC’s merger-review authority to the broadcast-spectrum auctions to net neutrality to the IP transition.
When it comes to drawing the limits of the FCC’s authority, I have argued that where the conduct under scrutiny fits squarely within the four corners of antitrust (such as mergers), the FCC should take a backseat to the antitrust agencies; for conduct that is not easily recognized as an antitrust violation (such as discrimination by a vertically integrated network owner), the FCC should take the lead. Does Mr. Wheeler agree?
Before the Department of Justice (DOJ) moved to block the AT&T/T-Mobile merger, Mr. Wheeler suggested in an April 2011 blog post that the FCC could regulate the wireless industry via merger-related conditions:
The Communications Act, however, does not prohibit the regulation of the ‘terms and conditions’ of wireless offerings, nor does it prohibit the FCC from imposing merger terms and spectrum auction rules that might seem to be regulation in another guise. It is this authority which offers the Federal government the opportunity to impose on AT&T merger conditions that could define the four corners of wireless regulation going forward; rules that would ultimately impact all wireless carriers.
Shortly after the DOJ filed its complaint in September 2011, Mr. Wheeler opined:
. . . absent a new vehicle the regulation of marketplace behavior that has characterized telecom regulation for almost a century is headed towards the same fate as the dial tone – another fatality of digital zeroes and ones. This trend could have been reversed by the conditions imposed by the government on an AT&T/T-Mobile merger. Skirting the regulatory authority issue in favor of a more flexible public interest standard, AT&T and the FCC/Justice Department would simply agree via a consent decree to pseudo-regulatory behavioral standards.
Keeping the FCC relevant in the evolving telecom landscape is certainly one consideration. But so long as the FCC can impose behavioral remedies on merging parties to promote the public interest, anything goes, including regulation that is wholly disconnected from the merger. Although mergers might generate effects that are not recognized as antitrust harms, there is little chance that a merger would escape antitrust scrutiny. This suggests a more limited role for the FCC when it comes to merger review.
As explained in my new book with Robert Litan, the FCC’s discretion to hold up telecom mergers in return for behavioral remedies invites “rent seeking” activity by competitors, who use the FCC’s merger review as a basis to lobby for welfare-reducing obligations on their rivals. Unless this discretion is removed by Congress, we must hope for a magnanimous regulator at the FCC to waive his discretion—an unlikely outcome given that discretion is a regulator’s currency in Washington. Mr. Litan gently reminded me during a C-SPAN interview that one regulator, Fred Kahn, ceded his discretion while heading the Civil Aeronautics Board. Based on his blog musings, it seems unlikely that Mr. Wheeler will do the same.
Broadcast Spectrum Auction
The first order of business on the auction front is deciding who can participate in the broadcast-spectrum auction and to what extent. In April of this year, the DOJ weighed in on this debate by advocating “rules that ensure the smaller nationwide networks, which currently lack substantial low-frequency spectrum, have an opportunity to acquire such spectrum.” It’s not clear whether the DOJ would support barring AT&T and Verizon from the auction entirely, but for those contemplating that idea, consider these consequences: According to a study released last week by Georgetown’s McDonough School of Business, auction revenues would decline by as much as 40 percent as the demand for spectrum artificially contracts, and monthly wireless bills would increase by about 9 percent as capacity-constrained carriers are forced to deploy more expensive solutions.
Fortunately, the pure-exclusion option appears to have little support among policymakers. In his departing speech last week, outgoing Chairman Genachowski advocated a balanced approach in which all four major wireless carriers would have a reasonable chance to expand their spectrum holdings, noting that “even the largest cellphone carriers need access to more airwaves to meet their customers’ booming demand for mobile data.” Regulators might look to the recent UK spectrum auction, in which the regulator (Ofcom) imposed modest caps on the amount of additional low-frequency bands that the two largest providers (Vodafone and O2) were allowed to buy—they already owned significant amounts of that spectrum before the auction—rather than bar those firms from bidding entirely.
Should the FCC follow this path, Mr. Wheeler will hopefully recognize the oncoming battle between wireless and wireline Internet providers, which militates toward a slightly more concentrated wireless industry in exchange for more intense inter-modal broadband competition.
Net Neutrality
On the net neutrality front, the FCC is awaiting a decision from a court of appeals on whether the agency overstepped its jurisdiction in its 2010 Open Internet Order. The first order of business is determining whether the FCC has the power to regulate Internet access providers at all. The second is how best to regulate discrimination on the Internet when it rears its ugly head.
As Federal Trade Commission (FTC) Commissioner Josh Wright correctly explained in a recent speech at George Mason, the FCC erred in the Open Internet Order by treating discrimination by vertically integrated network owners as a per se violation, in contrast to the “rule of reason” treatment afforded to similar “vertical restraints” under the antitrust laws. Mr. Wright advocates that the FTC (and not the FCC) police such conduct under the antitrust laws, arguing that the FTC is less susceptible to political influence than the FCC and has relevant experience with case-by-case enforcement of vertical restraints.
This is a debate deserving of more attention: Mr. Litan and I argue that the FCC is the better place to police discrimination on the Internet, noting that the agency currently adjudicates discrimination complaints in the video space, and that discrimination of this sort—for example, favoring an affiliated website or application over an independent one—is not an obvious antitrust violation and may generate a harm (reduced innovation) that is not easily proven under stringent antitrust standards.
While Mr. Wheeler likely would seek to maintain the FCC’s power to regulate Internet providers, it is not clear whether he embraces the per se prohibition of discrimination in the FCC’s Open Internet Order. A blog post from November 2009, roughly one year before the Open Internet Order was adopted, suggests some moderation here, at least as to whether net neutrality should apply to wireless networks:
Rules that recognize the unique characteristics of a spectrum-based service and allow for reasonable network management would seem to be more important than the philosophical debate over whether there should be rules at all.
The IP Transition
A final hot topic in telecom circles is whether to release telcos from so-called “legacy regulations” that require them to maintain two separate networks: a copper network and an IP network. A related issue is whether to extend the FCC’s wholesale-access obligations to newly packetized IP networks.
The telcos argue that they could compete more effectively against cable operators if resources currently tied up in maintaining copper networks could be allocated to IP networks. On the other side, resellers argue that a wind-down of the telcos’ copper networks might strand these entrants’ investments in copper-based equipment, thereby raising the entrants’ costs to keep up with the IP transition. These raising-rival-cost arguments assume that resellers impose significant price-disciplining effects on the telcos’ broadband services, even in a world where cable operators compete with telcos for broadband services aimed at businesses.
On this policy debate, Mr. Wheeler’s findings as chairman of an advisory committee to the FCC provide a strong hint as to where he might land. In a June 2011 presentation of the Technical Advisory Committee, Mr. Wheeler explained that the old Public Switched Telephone Network (PSTN) would collapse under its own weight:
As the number of subscribers on the PSTN falls, the cost per remaining customer increases and the overall burden of maintaining the PSTN becomes untenable. A fast transition can generate significant economic activity and at the same time lower the total cost.
The Committee recommended that the legacy copper network should be sunset by 2018.
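The arithmetic behind the Committee’s point is simple: if the cost of keeping the PSTN running is largely fixed, the burden per remaining line grows as subscribers defect. A minimal sketch, using hypothetical figures (the fixed-cost number below is an illustrative assumption, not a figure from the Committee’s report):

```python
# Stylized illustration of the advisory committee's claim: a largely
# fixed maintenance cost spread over a shrinking subscriber base means
# the per-line burden rises as customers leave. All figures hypothetical.

FIXED_ANNUAL_COST = 10e9  # assumed fixed cost of maintaining the PSTN ($/year)

def cost_per_subscriber(subscribers):
    """Annual maintenance cost borne per remaining PSTN line."""
    return FIXED_ANNUAL_COST / subscribers

# A base shrinking from 100M to 25M lines quadruples the per-line burden.
for lines in (100e6, 50e6, 25e6):
    print(f"{lines/1e6:.0f}M lines -> ${cost_per_subscriber(lines):,.0f} per line per year")
```

The same logic explains why the Committee favored a fast transition: every year of delay spreads the fixed cost over fewer remaining customers.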
As the fine print in any investment prospectus repeatedly warns us, past performance is no guarantee of future returns. The same lesson is likely true for the Chairman of the FCC: Past writings cannot serve as a perfect predictor for future policies. But they certainly provide a clue.
With InBev Suit, Feds Fight To Keep Beer Cheap For Young Blue-Collar Men. Maybe That’s Not A Good Idea.
Last week, the Department of Justice sued to block the merger of Anheuser-Busch InBev (“ABI”) and Grupo Modelo (“Modelo”). The coming battle between the antitrust agency and the merging parties could raise several important issues for merger review, including the role of entrants (craft beer makers) and negative externalities (associated with consuming beer).
ABI, the maker of Bud, Bud Light, and Busch, already owns 35 percent of Modelo; the DOJ’s lawsuit seeks to keep ABI’s share right there. For those who haven’t carefully studied the back of their Mexican beer bottles, Modelo is the maker of popular Mexican imports such as Corona Extra, Corona Light, and Pacifico.
ABI’s “partial ownership” of Modelo is no small detail; it complicates the DOJ’s analysis relative to a garden-variety merger analysis. Writing in the Antitrust Law Journal, Salop and O’Brien explain that the “competitive effects of partial ownership depend critically on two separate and distinct elements: financial interest and corporate control.” Depending on those variables, partial mergers “can occur in ways that result in greater or lower harm to competition than a complete merger.” The implication of their finding is that a movement from a partial merger to a complete one could raise or lower prices.
The DOJ’s complaint doesn’t tell us much about the nature of ABI’s existing control over Modelo, except for noting that ABI’s annual report claims that ABI does not have “effective control” over Modelo. Despite this disclaimer, and despite the “firewalls” designed to prevent ABI members of Modelo’s board from learning about pricing information, it is possible that ABI exerts some influence over Modelo’s decision-making. Setting aside the degree of ABI’s control over Modelo’s prices, economic theory predicts that ABI’s financial interest in Modelo could affect ABI’s prices. The question is whether a full transfer of ownership would really make things worse.
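The financial-interest channel can be made concrete with a stylized sketch. Holding control fixed, a firm with a stake in its rival recaptures part of the profit on sales diverted to that rival, which softens its incentive to discount. The numbers below (diversion ratio, rival margin) are illustrative assumptions, and the one-line formula is a simplification of the Salop/O’Brien analysis, not their model:

```python
# Stylized sketch of the partial-ownership intuition: a firm holding a
# financial interest `beta` in a rival already internalizes a share of
# the profit on sales it diverts to that rival, so moving from partial
# ownership (ABI's beta = 0.35) to full ownership changes pricing
# incentives only by the remaining (1 - beta) share -- absent any
# change in control. All parameter values below are hypothetical.

def internalized_diversion_value(diversion_ratio, rival_margin, beta):
    """Per-lost-sale value the firm recaptures via its rival stake:
    share of customers who switch to the rival, times the rival's
    margin, times the fraction of rival profit the firm pockets."""
    return diversion_ratio * rival_margin * beta

diversion = 0.20   # assumed share of ABI's lost sales diverted to Modelo
margin = 0.40      # assumed Modelo margin per diverted sale ($)

partial = internalized_diversion_value(diversion, margin, beta=0.35)
full = internalized_diversion_value(diversion, margin, beta=1.0)
print(f"incremental pricing incentive from full ownership: {full - partial:.3f}")
```

The sketch captures only the financial-interest side; as Salop and O’Brien stress, a change in corporate control can push the net effect in either direction.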
The DOJ’s primary theory of harm is that the merger would facilitate coordinated pricing between ABI and MillerCoors, the second largest beer manufacturer in the United States. According to the complaint, ABI and MillerCoors have been forced to discount their prices to discourage consumers from “trading up” to Modelo brands; take away Modelo’s aggressive pricing and the industry leaders could better coordinate their price increases. Secondarily, the DOJ argues that the merger would permit ABI to unilaterally raise its prices without concern about customer defection to Modelo’s brands.
One bone of contention between the dueling antitrust experts will be the likely role of “craft beers” or microbrews in the coming years. To the extent that craft beers play a larger role in the near future—one estimate suggests that craft beers currently account for six percent of all sales but are growing at 13 percent—a merger of two “low-end” labels is not as important for consumers. According to the Brewers Association, there were 2,000 U.S. breweries in operation by the end of 2012, with another 1,000 in the planning stages; the expansion of microbreweries suggests a “shift in the palate” of U.S. beer consumers toward craft beers. Against this backdrop, the combination of two low-end brands might not generate much pricing power.
To be fair, ABI has some high-end labels, such as Stella Artois and Beck’s, and craft beers such as Goose Island and Shock Top. But these brands are drowned out in a sea of differentiated flavors, including popular brews such as Abita, Lagunitas, and Shiner. There is an exciting microbrew story in nearly every state—for example, you can’t visit the Blue Ridge region of Virginia without stopping at Devils’ Backbone (Roseland) or Blue Mountain Brewery (Afton).
The DOJ’s discussion of the proposed “relevant product market” is good reading. Apparently, ABI’s Bud Light Lime-a-Rita sits within the “premium plus” category. Where I come from, serving a margarita in an aluminum can is blasphemy. The agency asserts that all segments of the beer industry—from the “sub-premium” segment to the “high-end”—compete in the same product market. Query whether sub-premium beers, or even the “premium” segment, are not constrained by the price of water, the closest available substitute. Craft beers are mentioned only in passing.
The key demographic for low-priced beer drinkers is blue-collar males in their 20s, who might shy away from the premium prices commanded by craft beers. Presumably, the DOJ’s lawsuit aims to protect these drinkers. Given the negative externalities associated with consuming alcohol, however, the movement to higher-priced, heavier-tasting craft beers that are not guzzled like Mad Dog might not be a bad thing. Which leads one to wonder: Should the supply of beer be competitive, or should we tolerate a little market power along with reduced levels of consumption?
If the DOJ has its way and blocks this merger—and if the agency is right about the likely price effects—then we will get more alcohol consumption relative to a world in which ABI owns 100 percent of Modelo. Be careful what you wish for.
Before Washingtonians could fully digest the election results in early November, there was a mild tremor in the tele-cosmos that could have a significant impact on broadband deployment and hence the U.S. economy. AT&T announced that it planned to upgrade its copper network to an IP-based technology and replace some rural lines with wireless connections. It also petitioned the Federal Communications Commission to commence a proceeding in which market trials would be conducted to determine the policy implications associated with its IP transition. According to one consumer advocate, the news was the “single most important development in telecom since passage of the Telecommunications Act of 1996.”
To understand why, one needs a bit of history. A century ago, voice services were provided by a single firm (also named AT&T) based on a social compact struck in 1913 that has lost its relevance due to the advance of technology. In exchange for monopoly privileges, AT&T submitted (over the course of the next decade) to rate regulation and a universal service obligation. And the compact delivered on universality: By the early 1980s, over 90 percent of American households had basic telephone service.
But a funny thing has happened since the technological era of the Commodore 64 and the Walkman. Our nation was rewired for a second time by cable plant, a third time by wireless networks, and a fourth time by satellite networks. By 2012, high-speed Internet over a cable connection—which supports voice as one of several IP-based applications—was available to 93 percent of U.S. households. By 2010, 99.8 percent of the U.S. population was covered by at least one wireless voice network. And in September 2012, Dish Network launched a nationwide satellite broadband service, targeting underserved customers in rural areas with a $40-per-month offer that supports, among other IP-based applications, voice services.
Competitive entry puts telecom regulators in a pickle. Anyone following the recent spat between D.C. taxi drivers and Uber, or the decade-old spat between cable operators and telco-based video providers, understands that when regulators can no longer provide monopoly protection to an incumbent, their basis for imposing monopoly-related fees or obligations washes away. Why should I pay you for the privilege of driving a cab in your city, the taxi driver asks, when my competitor is free from such obligations?
When it comes to voice services, the regulatory obligation that is now under scrutiny is the duty to provide universal telephone service over the old copper network. Based on the original social compact, that duty falls uniquely (and thus perversely) on the telcos. Cable, wireless and satellite providers are free to provide voice service (or not) over the network of their choosing, and they are free to pick and choose which homes to serve. In contrast, telcos must operate two networks at once—an outdated, copper-based legacy network that provides service to a shrinking customer base and a modern, IP-based network that supports data, video, and voice applications.
To understand how onerous these rules are, consider the decision of Google, a recent entrant to the broadband space, not to offer voice service as part of its Google Fiber offering in Kansas City. After studying state and federal regulations for voice services, the vice president of Google Access Services concluded: “We looked at doing that [VoIP]. The cost of actually delivering telephone services is almost nothing. However, in the United States, there are all of these special rules that apply.” It makes little sense to have the telcos abide by those same rules when cable operators and wireless providers (typically five in a city) are direct competitors for voice services.
If supporting two separate networks imposed trivial costs on the telcos, then consumers would be held harmless. Alas, telcos invest significant resources to maintain the legacy network: One study by the Columbia Institute for Tele-Information estimated that nearly half of telcos’ capital expenditures are tied up in this rut. Freed from these obligations, telcos could deploy those resources to higher-value services, including expanding the reach of their IP-based networks. Broadband consumers, particularly those living in areas served by a single wireline provider of broadband services, would benefit from the enhanced competition with cable operators.
There appears to be a growing consensus on the need for reform. Indeed, Public Knowledge, a consumer advocacy group typically at odds with the telcos, acknowledged that the petition for deregulation “raises a valid point of concern if the rules for the [legacy] to IP [conversion] apply only to it and other Local Exchange Carriers (LECs) upgrading their networks.”
Of course, there are still voices who advocate continued monopoly-era obligations, regardless of how many distinct technologies cover or nearly cover the entire nation for voice service. A recent op-ed in the New York Times fantastically asserted the existence of a telco-cable “cartel.” These incessant calls for a public-utility-style approach are outliers in the policy arena, as rational voices from both the left and right seem to be coalescing around the proper idea for how to transition to the modern telecom era.
Although the elections were polarizing for many policy matters, at least broadband policy seems to be bringing folks to the middle for constructive debate and problem solving. It’s time to bring communications policy into alignment with the modern era.
In light of recent stories hinting that the Federal Trade Commission (FTC) will not pursue antitrust claims that Google discriminates in its search results, advocates for rival websites are sounding the alarms. One attorney who represents several websites that have complained about Google’s alleged favoritism in search decried: “If a settlement were to be proposed that didn’t include search, the institutional integrity of the FTC would be at issue.” Ironically, the opposite is true: By reportedly dropping search discrimination from its case, the FTC has bolstered its integrity.
This is not to say that discrimination against rival websites is a good thing. Rather, discrimination of the kind allegedly practiced by Google is generally not recognized as an antitrust violation. With the exception of extreme cases, such as when a monopolist refuses for discriminatory reasons to sell a competitor a product or service that it makes available to others, a firm does not expose itself to antitrust liability by merely refusing to deal with a competitor. (By contrast, a firm may expose itself to antitrust liability by refusing to deal with customers or suppliers so long as they deal with the firm’s rival.) Because Google is not refusing to sell a product or service to a rival website that it makes available to others, but instead places its specialized search results—such as map, image, shopping, or local results—at the top of the page when it believes they will be useful to consumers, Google arguably has no “duty to deal” under the antitrust laws.
To make a discrimination square peg fit into an antitrust round hole, the FTC would have needed to invoke an unorthodox section of the FTC Act (Section 5), thereby stretching the agency’s authority. By recognizing the incongruence between the conduct that the antitrust laws are meant to stop and the consumer-centric justifications for Google’s behavior, the FTC appears to have spared itself a tough slog. For example, one element of a duty-to-deal claim under the Sherman Act is proving that Google’s treatment of rival websites harms consumers; even the cleverest economist would be stumped with that assignment.
Google’s rivals are now seeking a do-over at the Justice Department (DOJ). They analogize the Google case to the FTC’s Microsoft investigation, which the DOJ picked up shortly after the FTC commissioners deadlocked in 1993. But the FTC does not appear to be deadlocked here; the agency is likely rejecting the Google case because the antitrust laws do not support the complainants’ arguments.
Although regulatory relief at the FTC appears to be fleeting (and the DOJ is not the proper forum), website rivals could seek protection against search discrimination from Congress. The blueprint is already established: In 1992, Congress amended the Cable Act to protect independent cable networks against discrimination by vertically integrated cable operators. Section 616(a)(3) of the Act directs the Federal Communications Commission to establish rules governing program carriage agreements that “prevent [a cable operator] from engaging in conduct the effect of which is to unreasonably restrain the ability of an unaffiliated video programming vendor to compete fairly by discriminating in video programming distribution on the basis of affiliation or nonaffiliation of vendors in the selection, terms, or conditions for carriage of video programming provided by such vendors.”
This explains why, for example, the NFL Network brought a discrimination case against Comcast—a vertically integrated cable operator that owns a national sports network—under the Cable Act and not under the Sherman Act. Had the NFL Network pursued its discrimination claims in an antitrust court, it likely would have failed. By styling its case as a program-carriage complaint, however, the NFL Network took advantage of harms cognizable under the Cable Act, such as preserving independent voices, that, for better or worse, are not appreciated by the antitrust laws.
If independent websites such as Nextag want relief, then they should lobby Congress to write analogous non-discrimination provisions covering search engines. Once an agency is designated with the authority to police Google and other vertically integrated search engines (Bing included), website rivals could pursue individual discrimination claims just as the NFL Network did. Importantly, website rivals would have to fund these battles not with taxpayer money (of which millions were likely spent by the FTC in its antitrust investigation of Google), but with their own resources. Self-funding ensures that only the strongest discrimination cases would come forward; when someone else is footing the bill, all bets are off.
Admittedly, the relief contemplated here would not come quickly. It took years for independent cable networks to convince Congress of their plight. But the impatience of Google’s rivals is no reason for the FTC to bend the antitrust laws. Better to keep the powder dry—and the FTC’s integrity intact—and go after a monopolist that is more blatantly violating the antitrust laws on another day.
The New York Times just ran a provocative story titled “Americans Paying More for LTE Service,” suggesting that prices charged by U.S. wireless operators for access to their new 4G networks are triple what they would be were our wireless markets more competitive. In support of this claim, the article compares the price per gigabyte charged by Verizon Wireless for its bundled voice-data plan ($7.50) to the “European average” LTE price for data-only plans ($2.50), as calculated by the consultancy Wireless Intelligence. Time to call in the trust busters? Hardly.
As any first-year economics student understands, prices are determined by supply and demand conditions. When performing international price comparisons, one should account for cross-country differences in those conditions before proclaiming that U.S. consumers spend “too much” on a particular service. Of course, it is much easier to generate readership (and hence advertising dollars) with fantastic claims that our wireless markets are not competitive.
Let’s start with differences in demand that could affect the value of wireless data services and thus relative prices. While it makes sense for The Economist to compute a Big Mac Index for a product that is basically the same wherever it is sold, price comparisons of services that are highly differentiated across countries are less revealing. And the quality of LTE networks varies significantly. Verizon’s LTE network covered two-thirds of the U.S. population in April 2012. In contrast, the geographic coverage of European carriers’ LTE networks is anemic, prompting European Commissioner Neelie Kroes to proclaim this month that the absence of LTE across the continent was proving to be a major problem. No wonder it is hard to get Europeans to pay dearly for LTE services!
Turning to the supply side of the equation, while the surface area of the U.S. LTE “coverage blanket” is relatively larger, the European coverage blanket is thicker than ours. U.S. wireless carriers don’t have as much spectrum, the key ingredient in delivering wireless service, as their European counterparts. As wireless analyst Roger Entner points out, U.S. carriers have only one-third of the spectrum available in Italy (on a MHz-per-million-subscribers basis), and one-fifth of the spectrum available in France, Germany, and the UK. Given this relative scarcity of spectrum, U.S. carriers must prevent overuse of their LTE networks through the price mechanism—else their data networks would be worthless. As more spectrum comes online, basic economic theory predicts that U.S. data prices will fall.
The staggered LTE offerings by U.S. carriers are another factor affecting the supply side of the equation. As the New York Times article notes, Verizon was the first to market LTE in the United States in December 2010. AT&T, Sprint, and T-Mobile unveiled LTE offerings at a later date and are playing catch-up. To compete for LTE customers, these latecomers are undercutting Verizon, which, in turn, will lead to lower prices. By offering unlimited LTE data plans, Sprint charges $0 on a per-gigabyte basis at the margin. T-Mobile also offers an “Unlimited Nationwide 4G” plan at $90 per month (including unlimited voice minutes) that sets the marginal price on a per-gigabyte basis to zero. Although AT&T does not offer unlimited data plans, one can compute the “imputed” price per gigabyte for its bundled voice-data plans by subtracting the price of a comparable unlimited voice plan and then dividing by the gigabytes permitted. The result? A lower price per gigabyte than the European average. (Interested readers can email me for the math.)
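The imputed-price calculation described above can be sketched in a few lines. The plan prices and data allowance below are hypothetical placeholders chosen for round numbers, not actual AT&T tariffs:

```python
# Sketch of the "imputed" price-per-gigabyte calculation: strip the
# value of a comparable unlimited voice plan out of a bundled
# voice+data price, then divide the remainder by the data allowance.
# The inputs below are hypothetical, not actual carrier prices.

def imputed_price_per_gb(bundle_price, voice_only_price, gb_allowance):
    """Data-only price per GB implied by a voice+data bundle ($/GB)."""
    return (bundle_price - voice_only_price) / gb_allowance

# e.g., a $90 bundle, a $70 comparable unlimited-voice plan, 10 GB of data
print(imputed_price_per_gb(90.0, 70.0, 10.0))  # -> 2.0
```

The point of the exercise is that the headline per-gigabyte figure for a bundled plan overstates the effective data price, because part of the bundle price buys voice service.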
Thus, even if you think U.S. wireless data prices are “too high” today, the competitive process should be given more than one year to work its magic. Consider the competition for wireless voice services, which has played out over a decade. According to Merrill Lynch, the United States enjoyed a lower price for voice services on a per-minute-of-use basis ($0.03) than France ($0.10), Germany ($0.08), or the UK ($0.08) in the fourth quarter of 2011. How can the New York Times say, on the one hand, that these European countries serve as a competitive benchmark for wireless data services in the United States, but that the prices for voice services in these same countries should be ignored? Are we to mimic European policies with respect to data services and shun their policies with respect to voice services?
The lesson here is that what’s happening to European prices for wireless voice, wireless data, healthcare, or any differentiated product for that matter depends on several things, none of which is controlled for when making these simplistic international price comparisons. I know, I know. We need to sell Internet advertising. Can you imagine the headline: “Difference-in-difference regression shows that U.S. data prices are just right?”
Two dominant schools of thought have emerged in the broadband policy arena. The first, represented by the views of Susan Crawford, a visiting professor at Harvard Law School, is that there is not enough competition facing cable modem service, and thus government must intervene to prevent a likely abuse of market power. A second camp believes that there is no basis for proactive policies designed to increase the number of broadband providers, even in local markets served by a single provider. The high margins enjoyed by the first provider, this camp claims, reward risk-taking and will induce further entry.
A third perspective, which is gaining some traction and to which I (and, I hope, a few others) subscribe, posits that there is still a limited role for policy so long as improving consumer welfare is the objective. After penning this blog post, I might be disinvited from the Christmas parties of camps one and two this year.
Camp three is agnostic as to the “right” number of broadband providers, but believes that “more than one” will likely increase consumer welfare. Although government should not subsidize entry by rivals—this is tantamount to appropriating the returns of first movers, which decreases consumer welfare in the long run—it should remove any barriers that prevent more robust competition. Whereas my camp has a healthy respect for investment incentives on a going-forward basis, camp one sees investments by cable operators as sunk and thus ripe for the taking.
The role of wireless 4G networks likely separates those with at least some faith in market forces and those without any (camp one). Ms. Crawford and her ilk relegate wireless to somewhere less relevant than pink elephants when it comes to broadband competition. At a Brookings event last week, she referred to wireless as a “complementary product” for most Americans, the insinuation being that wireless is not to be taken seriously as a solution to Internet connectivity.
Although wireless might be perceived as a complement to wireline connections today, the new 4G mobile connections will afford consumers downloads roughly seven times faster than the experience on prior generations (3G) of smartphones. With sufficient spectrum to provide endurance (another dimension of network quality), 4G operators could soon offer broadband consumers the full suite of services to which they have become accustomed on wireline connections.
If you don’t believe in wireless, and if you think that no amount of tinkering with the rules will get fiber deployed in more areas, then you have what Ms. Crawford refers to as a “natural monopoly” in homes served by cable modem providers but not fiber. What to do then?
In these cases, says Ms. Crawford, government “has a very important role to play.” In particular, government should “provide assistance to people who don’t have fiber access;” it should “make sure pricing is fair;” and it should provide “equal facilities to all Americans.” This is scary stuff. Although I have been critical of certain cable practices, it is a step too far to suggest that cable companies should be subject to price regulation or government-subsidized overbuilding because they invested in neighborhoods where no one else has been willing to follow.
So what policies are being peddled by camp three? When it comes to broadband competition, the FCC should remove barriers to entry for wireless broadband operators seeking to deploy 4G wireless technologies, and eliminate the disincentives facing telcos for deploying fiber beyond the 55 million U.S. homes that were served as of March 2012.
Two FCC Commissioners recently sent signals to the marketplace along these lines. In a speech at the Wharton business school, Chairman Genachowski discussed the need for additional spectrum: “In addition to promoting competition, reducing barriers to broadband build-out and driving broadband investment, we of course need to keep clearing inefficiently used spectrum and reallocating it for licensed flexible use.” Can I get an Amen?
On C-SPAN’s The Communicators, Commissioner Ajit Pai was asked how to spur additional fiber investment: “For one, we shouldn’t extend legacy regulations of copper wire telephone monopoly era to next generation networks. The Title II docket remains open to this day. To the extent we wanted to send a signal to the private sector that we weren’t going to take a heavy handed approach, we should close that docket.” Translation: The FCC should clarify its rules towards IP networks so that telcos understand the implications of making fiber investments; if those investments are subject to onerous requirements, then telcos will be less inclined to invest.
Dare I count the Chairman of the FCC and FCC Commissioner Pai as honorary members of my third camp? I’ll let you know if I get any Christmas invitations.
Last week, the FCC decided not to extend certain provisions of the “program access” protections of the 1992 Cable Act. Reading the popular press gives one the false impression that the entire program-access regime was taken apart. In reality, the ban on exclusive distribution arrangements between cable operators and cable networks will be lifted, while other protections for rival distributors will remain in force.
Although the FCC’s Sunset Order suggests that lifting the ban will mostly affect cable-affiliated networks, those networks are generally distributed by their affiliated cable owner without a contract. There is no reason to add an exclusivity provision to a contract that does not exist.
Accordingly, permitting exclusive contracts likely will have a greater impact on independent networks (such as Disney Channel), which are distributed pursuant to a contract. Under the old rules, a cable operator could not tell an independent network: “I will carry you only if you agree not to deal with DISH Network, DirecTV, Verizon, and AT&T.” With the ban on exclusive agreements lifted, a cable operator may make such a take-it-or-leave-it offer.
To ensure access to newly exclusive programming, the FCC will rely on a case-by-case review of any complaints brought by distribution rivals. This ex post approach to adjudicating access disputes is similar to the one used by the Commission for “program carriage” complaints, in which an independent cable network must persuade the agency to permit a complaint to be heard by an administrative law judge. In contrast, the case-by-case approach embraced in the Sunset Order is not consistent with the ex ante prohibition against discriminatory contracting by broadband network owners in the Commission’s Open Internet Order of 2010. When it comes to handling discrimination, the Commission is anything but consistent.
In the Sunset Order, the FCC gave special treatment to cable-affiliated sports programming, often carried on regional sports networks (RSNs). In particular, the FCC established a “rebuttable presumption” that an exclusive contract involving a cable-affiliated RSN violates the Cable Act. Because sports programming is one of the few types of “must-have” programming, this exemption implies that the competitive balance among cable operators and their competitors may not be altered significantly. This is not to say that non-sports programming is meaningless—as the FCC recognized in its Comcast-NBCU Order, the refusal to supply a collection of non-sports programming could impair a rival distributor. But exempting sports programming takes much of the bite out of the rule change.
In addition to effectively exempting the most likely basis for a program access dispute, the Sunset Order makes clear that a distribution rival still can bring a complaint under other sections of the Cable Act. For example, a rival can allege “undue influence” under Section 628(c)(2)(A); discrimination under Section 628(c)(2)(B); or a “selective refusal to deal” under Section 628(c)(2)(B). In other words, the FCC removed one of several ways a cable operator can violate the Cable Act. The agency is still watching.
The FCC also pointed out that approximately 30 cable-affiliated national networks and 14 cable-affiliated RSNs are subject to program-access merger conditions adopted in the Comcast-NBCU Order until January 2018. These conditions require Comcast to make these affiliated networks available to competitors, even after the expiration of the exclusive contract prohibition. Because these networks account for a significant share (about one third) of all cable-affiliated programming, the effect of removing the exclusivity ban will be further diminished.
The choice between an ex ante prohibition of certain conduct and an ex post, case-by-case review of complaints turns on the potential for efficiency justifications. In reaching its decision, the Commission noted one potential procompetitive benefit of permitting exclusive deals—ostensibly, to promote investment in new programming. While promoting investment in new programming is important (notwithstanding the fact that there are literally hundreds of cable networks, many of which sprouted up during the exclusivity ban), so too is promoting investment in rival distribution networks. With 55 percent of all U.S. households beholden to a single, fixed-line provider of broadband access (mostly cable modem service), the Commission should consider how each of its rules affects broadband investment. Alas, the agency disposed of this consideration in a single paragraph in the Sunset Order, arguing that the case-by-case approach was sufficient to protect the investment incentives of broadband operators.
It is no accident that the relaxation of the exclusivity ban was opposed by Google, Verizon, and AT&T—each of whom is deploying broadband networks (of both the fixed and mobile variety) in competition with incumbent cable operators. If these rival networks cannot secure access to cable programming, then convincing a cable customer to “cut the cord” will be that much harder. And if rivals cannot reach a certain level of penetration, then their investments will not generate positive returns; if that happens, we won’t see as much broadband investment as we hoped for.
To the extent that the Sunset Order is a harbinger of the FCC’s newfound embrace of case-by-case adjudication of discriminatory conduct, it is a good thing. To ensure that 4G network operators or Google do not lose their appetite to invest in broadband networks, however, the FCC must be vigilant in enforcing the new rules.
Today the commissioners of the Federal Communications Commission (FCC) are meeting to vote on two issues that will be pivotal to the future of the wireless industry: (1) whether to impose a “spectrum cap” on wireless providers, and (2) how to design the “incentive auction” of the broadcasters’ spectrum. There is a lot at stake for the U.S. economy in getting these policies right: A new analysis by Deloitte estimates that mobile broadband network investments over the period 2012–2016 could expand U.S. GDP between $73 and $151 billion, and account for up to 771,000 jobs.
A spectrum cap would prevent a single provider (say, Verizon) from acquiring more than a certain amount of the airwaves or “spectrum rights” in a given geographic area (say, Washington, D.C.). Spectrum is the most important input in the supply of wireless services—without it, a provider literally can’t compete. The objective of a spectrum cap is to prevent any single carrier from monopolizing a key input in the production process; more wireless entry means greater competition, which means lower wireless prices. So why is this idea so controversial?
The reason is that even carriers with significant spectrum holdings need more of it to survive. To make things concrete, compare the spectrum holdings of Verizon with those of Sprint and T-Mobile. According to Deutsche Bank, Verizon has about 18 percent of all available spectrum on a population-weighted basis (including the spectrum recently obtained from SpectrumCo), compared to about nine percent each for Sprint and T-Mobile. Yet Verizon is desperate for more spectrum because its subscriber base is larger than that of its rivals, and because today’s wireless customers are finding cool (and bandwidth-intensive) things to do with their new 4G phones, straining the capacity of its wireless network. According to one noted wireless analyst, the demand for mobile broadband will surpass the spectrum available to meet it in mid-2013. Even the Chairman of the FCC recognizes that the “biggest threat to the future of mobile in America is the looming spectrum crisis.”
Reinserting the spectrum cap—it was sent to the regulatory dustbin several years ago—and setting it at, say, one-fifth of all available spectrum would effectively bar Verizon from acquiring any more spectrum, whether in an auction or through the secondary markets. And that means that Verizon’s customers would suffer a serious degradation in their wireless connections relative to a world in which Verizon could augment its spectrum capacity. As one Nobel laureate economist famously said, “there’s no such thing as a free lunch.” Taking away from Verizon to give to smaller carriers entails serious tradeoffs.
And to understand those tradeoffs, the FCC must think hard about what the ideal market structure of the wireless industry should look like. A spectrum cap equal to one-fifth of all spectrum implies that the ideal market structure is five national carriers. But even five might be too many given the evolving wireless technology: With the enhanced download speeds made available by 4G networks—Verizon’s 4G network is seven times as fast as its 3G network according to PC World—wireless consumers will be streaming high-definition movies and FaceTiming with their friends, putting even greater pressure on the available spectrum. The FCC needs to come to grips with the fact that its policies are in conflict with these technological trends and the associated economies of scale in the supply of wireless services.
Five carriers might also be the wrong number when one considers the role of mobile broadband in the larger broadband market. According to the FCC’s Wireline Competition Bureau, as of mid-2011, 55 percent of all U.S. households relied on a single wireline broadband provider capable of meeting the FCC’s definition of broadband. This means that wireless 4G connections could serve as the second broadband pipeline in over half of U.S. homes. Given the competitive implications of moving from one to two broadband providers—cable modem prices have been shown to fall significantly in the face of competitive entry—the right number of wireless carriers might be closer to three.
But who really knows? The market should decide whether the optimal number of wireless carriers is three or four or five, not the regulators. If the FCC is worried about a single carrier buying up the entirety of the spectrum in the forthcoming broadcast spectrum auction, then a simple rule forbidding such an outcome in that auction is more efficacious than a clumsy spectrum cap. By micro-managing the structure of the wireless industry, the commission tasked with overseeing the communications industry risks making the wrong call.
The Federal Trade Commission (FTC) is in the final stages of conducting its Google investigation. As the agency contemplates whether Google is a monopolist in the ill-defined market for search, it may find the competitive ground has shifted beneath its feet in just the 15 months since it began investigating. While a year or two ago, Google’s main competition in search might have been Bing and Yahoo, today it’s Apple and Amazon, and tomorrow it may be Facebook. The market is almost certainly broader than general search engines as we normally think of them.
Just last week, the New York Times ran a story explaining that Google and Amazon are “at war to become the pre-eminent online mall.” The story cited survey data from two consultancies that should give the antitrust authority pause:
- Forrester Research found that a third of online users started their product searches on Amazon compared to 13 percent who started their search from a traditional search site; and
- comScore found that product searches on Amazon have grown 73 percent over the last year while shopping searches on Google have been flat.
These impressive statistics suggest that Google lacks market power in a critical segment of search—namely, product searches. Even though searches for items such as power tools or designer jeans account for only 10 to 20 percent of all searches, they are clearly some of the most important queries for search engines from a business perspective, as they are far easier to monetize than informational queries like “Kate Middleton.”
One senses that the FTC has not focused much on competition from Amazon in product search; indeed, it may not even think of Amazon as a search engine. Instead, antitrust agencies around the globe have fixated on helping middlemen comparison-shopping sites such as Nextag and PriceGrabber, most of which charge retailers for listings. Google is taking heat from comparison sites for doing the same thing because Google is perceived to be the most important source for online shoppers. That regulators are willing to breathe life into these intermediaries implies they do not recognize the platform-based competition between Google and Amazon for product searches.
Amazon is not the only behemoth that competes with Google for search. Apple’s Siri can do search and a whole lot more, from helping Samuel L. Jackson design the perfect dinner to making John Malkovich laugh to helping Martin Scorsese maneuver through New York. As search evolves from links into answers, services like Siri become highly valuable. And the iTunes App Store represents the launching pad for many searches that would otherwise start on Google. A couple in Virginia that enjoys winery tours might begin their search by installing “Virginia Wine in My Pocket” or “Virginia Wineries” on their iPhone rather than search the web. In March of this year, Apple announced that more than 25 billion apps had been downloaded from its App Store by the users of the more than 315 million iPhone, iPad, and iPod touch devices worldwide. One wonders whether any of these downloads are being counted by the FTC in its calculations of Google’s market share.
And now Facebook is getting into search. At a Disrupt conference last week, Mark Zuckerberg explained that search engines are evolving into places where users go for answers, and that Facebook is uniquely positioned to compete in that market: “And when you think about it from that perspective, Facebook is pretty uniquely positioned to answer a lot of the questions that people have. So what sushi restaurants have my friends gone to in New York in the past six months and liked? . . . . These are queries that you could potentially do at Facebook if we build out this system that you just couldn’t do anywhere else.”
It may not be natural to associate Amazon (an online retailer), Apple (a device maker), and Facebook (a social media site) with search, but in the technology industry, your next competitive threat can come from anywhere. Monopoly and the kind of robust platform competition among Apple, Amazon, Google, and Facebook are mutually exclusive portraits of reality. Will the FTC turn a blind eye toward this advanced form of competition?
Last week, the FTC hired outside litigator Beth Wilkinson to lead an investigation into Google’s conduct, which some in the press have interpreted as a grave sign for the search company. The FTC is reportedly interested in pursuing Google under Section 5 of the FTC Act, which prohibits a firm from engaging in “unfair methods of competition.” Along with Bob Litan, who served as Deputy Assistant Attorney General in the Antitrust Division during the Microsoft investigation, I have penned a short paper on the FTC’s seemingly unorthodox Section 5 case against Google. (Disclosure: This paper was commissioned by Google.)
Litan and I explore a few possible theories of harm under a hypothetical Section 5 case and find them wanting, including (1) claims that specialized search results (such as flight, shopping or map results) “unfairly” harm independent specialized search websites like Kayak (travel) or MapQuest (mapping and directions), or (2) assertions that Google has “deceived” users or websites by seemingly reneging on pledges not to favor its own sites. For the sake of brevity, I focus on the FTC’s potential deception theory here, and leave it to the interested reader to pursue the “unfairness” theory in the paper.
Deception of Users
The bases of Google’s alleged deception are generic statements that Google made, either in its initial public offering (IPO) or on its website, about its attitude toward users leaving the site. The provision of a lawful service, specialized search, launched several years after the IPO statement, certainly cannot be deceptive. To conclude that it is, and, more importantly, to prevent the company from offering innovations in search, would establish a precedent that would surely punish innovation throughout the rest of the economy.
As for the mission statement that the company wants users to get off the site as quickly as possible, it is just that, a mission statement. Users do not go to the mission statement when they search; they go to the Google site itself. Users cannot possibly be harmed even if this particular statement in the company’s mission were untrue. Moreover, if the problem lies in that statement, then any remedy should be directed at amending that statement. There is no justification for the Commission to hamper Google’s specialized search services themselves or to dictate where Google must display them.
Deception of Rivals
An alternative theory suggests that Google deceived its rivals, reducing innovation among independent websites. In a February 2012 paper delivered to the OECD, Tim Wu explained that competition law can be used to “increase the costs of exclusion,” which if successful, would promote innovation among application providers. Wu argued that “oversight of platforms is conceptually similar” to oversight of standard-setting organizations (SSOs). He offers a hypothetical case in which a platform owner “broadly represents to the world that he maintains an open and transparent innovation platform,” gains a monopoly position based on those representations, and then begins to exclude applications “that might themselves serve as platforms.” Once the industry has committed to a private platform, Wu argues, the platform owner “earns oversight of its practices from that point onward.”
So has Google earned itself oversight due to its alleged deception? Google is not perceived by web designers as providing a platform for all companies to have equal footing. Websites’ rankings in Google’s search results vary tremendously over time; no publisher could reasonably rely on any particular ranking on Google. To the contrary, websites want their presence to be known to any and all search engines. That specialized search sites did not base their business plans on Google’s commitment to openness is what distinguishes Google’s platform from Microsoft’s platform in the 1990s. To Wu’s credit, he does not mention Google in this section of the paper; the only platforms mentioned are those of Apple, Android, and Microsoft.
It is even more of a stretch to analogize Google’s conduct to that in the FTC’s Rambus case. Unlike websites, which do not depend on a Google “standard” (a website can be accessed by users from any search engine, or through direct navigation), computer memory chips must be compatible with a variety of computers, which requires that chip producers develop a common set of standards for performance and interoperability. According to the FTC, Rambus exploited this reliance by, among other things, not disclosing to chip makers that it had additional divisional patent applications in process. That specialized search sites did not make “irreversible technological” investments based on Google’s commitment to a common standard is what distinguishes Google’s platform from SSOs.
The Freedom to Innovate
A change in a business model cannot be a legitimate basis for a Section 5 case because a firm cannot be expected to know how the world is going to unfold at its inception. A lot can change in a decade. Consumers’ taste for the product can change. Technology can change. Business models must adapt to such change, or else they die. There should be no requirement that once a firm writes a mission statement, it be held to that statement forever. What if Google failed to anticipate the role of specialized search in 2004? Presumably, Google failed to anticipate a lot of things, but that should not be the basis for denying its entry into ancillary services or expanding its core offerings. As John Maynard Keynes famously replied to a criticism during the Great Depression of having changed his position on monetary policy: “When the facts change, I change my mind. What do you do, sir?” If Google exposes itself to increased oversight for merely changing its mind, then other technology firms might think twice before innovating. And that would be a horrible consequence of the FTC’s exploration of alternative antitrust theories.