Today the Senate will convene a distinguished panel of experts to discuss the state of wireless competition in America. Although it is trendy among the cognoscenti to complain about the wireless industry, the reality is that wireless competition is vibrant here, and U.S. carriers are leaving their European counterparts in the dust.
A common refrain among those calling for regulators to “level the playing field” is that two carriers—AT&T and Verizon—are running away from the pack, due to their allegedly superior spectrum holdings. The resulting imbalance in competition can be remedied, they claim, by capping the spectrum holdings of the larger carriers and steering newly available spectrum to smaller carriers. Any relative improvement in the smaller carriers’ networks would attract more customers, which would reduce wireless concentration.
One problem with this story is that wireless concentration—a very fuzzy indicator of competition when it comes to wireless services—is not climbing as predicted. In fact, U.S. wireless concentration as measured by the FCC has held steady since 2008, indicating that Sprint and T-Mobile are not losing ground. Indeed, 2012 was a particularly good year for these carriers, as both enjoyed significant subscriber gains. T-Mobile recently completed its merger with MetroPCS, giving the combined company access to more subscribers and more spectrum.
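The FCC’s concentration yardstick here is the Herfindahl-Hirschman Index (HHI), the sum of squared market shares. A back-of-the-envelope sketch, using hypothetical subscriber shares rather than the FCC’s actual figures, shows how even a modest shift toward the smaller carriers pulls the index down:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares.

    Shares are in percentage points, so a pure monopoly scores
    100**2 = 10,000 and four equal carriers score 2,500.
    """
    assert abs(sum(shares) - 100) < 1e-9, "shares must sum to 100"
    return sum(s ** 2 for s in shares)

# Hypothetical national subscriber shares (illustrative only,
# not the FCC's actual figures): two large carriers, three smaller ones.
before = [34, 31, 17, 10, 8]
after = [33, 30, 18, 11, 8]   # a one-point shift toward the smaller carriers

print(hhi(before))  # 2570
print(hhi(after))   # 2498
```

A flat HHI over time, as the FCC has reported since 2008, is consistent with the smaller carriers holding their ground.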
Perhaps the best indicator of the smaller carriers’ prospects is the bidding war for Sprint that has erupted between Softbank and Dish Network. If Sprint stood no chance to compete with AT&T and Verizon due to its allegedly inferior spectrum, then these savvy investors would not be so bullish about Sprint’s future. Put differently, Sprint’s spectrum holdings are valued dearly in the marketplace despite their “high-frequency” nature.
The same voices calling for intervention will likely cite lower wireless prices in Europe as proof that reducing concentration will bring lower prices. But a new study by GSMA, a trade association representing 800 of the world’s mobile operators, concludes that “Europe now lags far behind the United States in the deployment of next-generation mobile technologies and the advanced services made possible through mobile,” rendering any straight-up price comparison unreliable. The study found that U.S. mobile customers consume five times more voice minutes and nearly twice as much data as their European counterparts, and average mobile data connection speeds in the United States are now 75 percent faster than those in Europe.
By convening a panel on the state of wireless competition, the Senate must be careful not to miss the forest for the trees. The phrase “wireless competition” implies incorrectly that wireless carriers compete exclusively among themselves. New data suggests that wireless competes increasingly with wireline connections such as cable modem and DSL for broadband customers. According to a consumer survey by Leichtman Research Group, hundreds of thousands of Americans canceled their home Internet service in 2012, taking advantage of the proliferation of Wi-Fi hot spots and fast new wireless networks accessible to smartphones and tablets. Indeed, more U.S. households stopped paying for home Internet subscriptions (and relied on wireless access instead) than cancelled their pay-television subscriptions (and relied on video over Internet services).
How quickly will wireless overtake wireline broadband connections? Dish’s chairman is projecting that as many as a third of all Americans one day could find it more efficient to get their home Internet service wirelessly; Cisco IBSG recently projected that up to 15 percent of U.S. consumers could “cut their cord” in favor of a mobile data connection by 2016; and Samsung recently predicted that mobile networks could supplant wireline broadband by 2020.
The oncoming battle between wireless and wireline Internet providers suggests a more permissive attitude toward wireless concentration. For those who can’t (or won’t) recognize this “inter-modal competition,” any increase in wireless concentration is mistakenly perceived as bad news for consumers. The quest to promote wireless competition via spectrum policy could result in less competition where it matters most.
Last week, President Obama named Tom Wheeler of Core Capital Partners to be Chairman of the Federal Communications Commission (FCC). Interested parties of all types, from hedge fund managers to Silicon Valley entrepreneurs, are pondering how Mr. Wheeler will manage the agency and what he’ll focus on.
A look back at his musings on a personal blog (aptly named Mobile Musings) and on his more formal writings as chairman of an advisory committee to the FCC may provide some insights. Out of the gate, Mr. Wheeler will be confronted with several pressing issues, ranging from the FCC’s merger-review authority to the broadcast-spectrum auctions to net neutrality to the IP transition.
When it comes to drawing the limits of the FCC’s authority, I have argued that where the conduct under scrutiny fits squarely within the four corners of antitrust (such as mergers), the FCC should take a backseat to the antitrust agencies; for conduct that is not easily recognized as an antitrust violation (such as discrimination by a vertically integrated network owner), the FCC should take the lead. Does Mr. Wheeler agree?
Before the Department of Justice (DOJ) moved to block the AT&T/T-Mobile merger, in April 2011 Mr. Wheeler suggested in his blog that the FCC could regulate the wireless industry via merger-related conditions:
The Communications Act, however, does not prohibit the regulation of the ‘terms and conditions’ of wireless offerings, nor does it prohibit the FCC from imposing merger terms and spectrum auction rules that might seem to be regulation in another guise. It is this authority which offers the Federal government the opportunity to impose on AT&T merger conditions that could define the four corners of wireless regulation going forward; rules that would ultimately impact all wireless carriers.
Shortly after the DOJ filed its complaint in September 2011, Mr. Wheeler opined:
. . . absent a new vehicle the regulation of marketplace behavior that has characterized telecom regulation for almost a century is headed towards the same fate as the dial tone – another fatality of digital zeroes and ones. This trend could have been reversed by the conditions imposed by the government on an AT&T/T-Mobile merger. Skirting the regulatory authority issue in favor of a more flexible public interest standard, AT&T and the FCC/Justice Department would simply agree via a consent decree to pseudo-regulatory behavioral standards.
Keeping the FCC relevant in the evolving telecom landscape is certainly one consideration. But so long as the FCC can impose behavioral remedies on merging parties to promote the public interest, anything goes, including regulation that is wholly disconnected from the merger. Although mergers might generate effects that are not recognized as antitrust harms, there is little chance that a merger would escape antitrust scrutiny. This suggests a more limited role for the FCC when it comes to merger review.
As explained in my new book with Robert Litan, the FCC’s discretion to hold up telecom mergers in return for behavioral remedies invites “rent seeking” activity by competitors, who use the FCC’s merger review as a basis to lobby for welfare-reducing obligations on their rivals. Unless this discretion is removed by Congress, we must hope for a magnanimous regulator at the FCC to waive his discretion—an unlikely outcome given that discretion is a regulator’s currency in Washington. Mr. Litan gently reminded me during a C-SPAN interview that one regulator, Fred Kahn, ceded his discretion while heading the Civil Aeronautics Board. Based on his blog musings, it seems unlikely that Mr. Wheeler will do the same.
Broadcast Spectrum Auction
The first order of business on the auction front is deciding who can participate in the broadcast-spectrum auction and to what extent. In April of this year, the DOJ weighed in on this debate by advocating “rules that ensure the smaller nationwide networks, which currently lack substantial low-frequency spectrum, have an opportunity to acquire such spectrum.” It’s not clear whether the DOJ would support barring AT&T and Verizon from the auction entirely, but for those contemplating that idea, consider these consequences: According to a study released last week by Georgetown’s McDonough School of Business, auction revenues would decline by as much as 40 percent as the demand for spectrum artificially contracts, and monthly wireless bills would increase by about 9 percent as capacity-constrained carriers are forced to deploy more expensive solutions.
Fortunately, the pure-exclusion option appears to have little support among policymakers. In his departing speech last week, outgoing Chairman Genachowski advocated a balanced approach in which all four major wireless carriers would have a reasonable chance to expand their spectrum holdings, noting that “even the largest cellphone carriers need access to more airwaves to meet their customers’ booming demand for mobile data.” Regulators might look to the recent UK spectrum auction, in which the regulator (Ofcom) imposed modest caps on the amount of additional low-frequency bands that the two largest providers (Vodafone and O2) were allowed to buy—they already owned significant amounts of that spectrum before the auction—rather than bar those firms from bidding entirely.
Should the FCC follow this path, Mr. Wheeler will hopefully recognize the oncoming battle between wireless and wireline Internet providers, which militates toward a slightly more concentrated wireless industry in exchange for more intense inter-modal broadband competition.
On the net neutrality front, the FCC is awaiting a decision from a court of appeals on whether the agency overstepped its jurisdiction in its 2010 Open Internet Order. The first order of business is determining whether the FCC has the power to regulate Internet access providers. The second order of business is how best to regulate discrimination on the Internet when it rears its ugly head.
As Federal Trade Commission (FTC) Commissioner Josh Wright correctly explained in a recent speech at George Mason, the FCC erred in the Open Internet Order by treating discrimination by vertically integrated network owners as a per se violation, in contrast to the “rule of reason” treatment afforded to similar “vertical restraints” under the antitrust laws. Mr. Wright advocates that the FTC (and not the FCC) police such conduct under the antitrust laws, arguing that the FTC is less susceptible to political influence than the FCC, and that the FTC has related experience with case-by-case enforcement of vertical restraints.
This is a debate deserving of more attention: Mr. Litan and I argue that the FCC is the better place to police discrimination on the Internet, noting that the agency currently adjudicates discrimination complaints in the video space, and that discrimination of this sort—for example, favoring an affiliated website or application over an independent one—is not an obvious antitrust violation and may generate a harm (reduced innovation) that is not easily proven under stringent antitrust standards.
While Mr. Wheeler likely would seek to maintain the FCC’s power to regulate Internet providers, it is not clear whether he embraces the per se prohibition of discrimination in the FCC’s Open Internet Order. A blog from November 2009, roughly one year before the Open Internet Order was adopted, suggests some moderation here, at least as to whether net neutrality applies to wireless networks:
Rules that recognize the unique characteristics of a spectrum-based service and allow for reasonable network management would seem to be more important than the philosophical debate over whether there should be rules at all.
A final hot topic in telecom circles is whether to release telcos from so-called “legacy regulations” that require them to maintain two separate networks: a copper network and an IP network. A related issue is whether to extend the FCC’s wholesale-access obligations to newly packetized IP networks.
The telcos argue that they could compete more effectively against cable operators if resources currently tied up in maintaining copper networks could be allocated to IP networks. On the other side, resellers argue that a wind-down of the telcos’ copper networks might strand these entrants’ investments in copper-based equipment, thereby raising the entrants’ costs to keep up with the IP transition. These raising-rival-cost arguments assume that resellers impose significant price-disciplining effects on the telcos’ broadband services, even in a world where cable operators compete with telcos for broadband services aimed at businesses.
On this policy debate, Mr. Wheeler’s findings as chairman of an advisory committee to the FCC provide a strong hint as to where he might land. In a June 2011 presentation of the Technical Advisory Committee, Mr. Wheeler explained that the old Public Switched Telephone Network (PSTN) would collapse under its own weight:
As the number of subscribers on the PSTN falls, the cost per remaining customer increases and the overall burden of maintaining the PSTN becomes untenable. A fast transition can generate significant economic activity and at the same time lower the total cost.
The Committee recommended that the legacy copper network should be sunset by 2018.
As the fine print in any investment prospectus repeatedly warns us, past performance is no guarantee of future returns. The same lesson is likely true for the Chairman of the FCC: Past writings cannot serve as a perfect predictor for future policies. But they certainly provide a clue.
Since the publication of Susan Crawford’s book on the alleged failings of U.S. Internet policy, several mainstream outlets have run stories repeating her mantra that Internet speeds are too slow, coverage is shoddy, there is a growing “digital divide” between rich and poor, and broadband prices are too high.
Consider the barrage of “bad news” in just one week:
- The Wall Street Journal reported that six percent of Americans “lack high-speed service” in a story provocatively titled “Gaps Persist in High-Speed Web Access”;
- The Financial Times reported that the United States ranks 16th in Internet speeds, and that U.S. prices on a per-megabit-per-second basis (Mbps) are more than double those in Europe; and
- Digital Trends ran an article touting Ms. Crawford’s policies titled “Admit It: U.S. Internet Service Sucks.”
Are things as gloomy as the naysayers claim? A close look at the facts suggests otherwise. (Yes, that is a link to Need for Speed, my new e-book on Internet policy from Brookings Press; if Bob Woodward can shamelessly promote his book in the Washington Post when reporting the origins of the sequester, surely I can do the same.)
Let’s start with connection speeds. According to Akamai, a global provider of Internet services, the United States ranked ninth in average connection speeds (7.7 Mbps) in the third quarter of 2012, and seventh in percent of Internet connections with speeds above 10 Mbps (18 percent). South Korea leads both categories (average speed of 14.7 Mbps, 52 percent above 10 Mbps). It’s a bit misleading to compare our speeds with those of the fastest country in the world; a seven-minute-per-mile runner looks shoddy compared to the fastest runner in the world. And like any average, our nationwide average speed combines fast connections with slow ones. For example, the average connection speed in eight states (mostly along the densely populated Northeast corridor) exceeds 9 Mbps; any of those states would rank third fastest in the world on Akamai’s list. It’s a bit of a stretch to say that we are the tortoise among rabbits; the United States is more like Danica Patrick, who finished eighth at Daytona on Sunday.
Moving on to coverage gaps. The empirical basis for the share of Americans without “high-speed service” is the FCC’s annual report on the state of broadband deployment. There are two important caveats to keep in mind when assessing these data: The FCC counts wireline connections only, and only those wireline connections that exceed 4 Mbps. Thus, a wireline connection of, say, 3 Mbps (such as DSL) would not be counted in the FCC’s tally, and a wireless connection of, say, 10 Mbps (such as 4G LTE) would also be ignored. As of 2011, the latest year for which the FCC has reliable data, only about 7 million U.S. households did not have broadband access; if wireless broadband technologies are counted, the number of households without access to broadband at the FCC’s minimum speed is in the range of 2 to 5 million. It is hyperbole to suggest that broadband operators have ignored large swaths of the country.
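The counting rule described above can be made concrete with a small sketch. The connection records and the exact 4 Mbps cutoff here are illustrative assumptions, not the FCC’s actual data:

```python
# Illustrative sketch of the FCC counting rule described above:
# only wireline connections at or above 4 Mbps count as "broadband."
FCC_MIN_MBPS = 4

def counts_as_broadband(conn, include_wireless=False):
    """Return True if a connection counts under the FCC tally.

    With include_wireless=True, fast wireless (e.g. 4G LTE) also
    counts, which is the broader definition argued for in the text.
    """
    if conn["mbps"] < FCC_MIN_MBPS:
        return False
    if conn["type"] == "wireline":
        return True
    return include_wireless

# Hypothetical household connections, for illustration only.
households = [
    {"type": "wireline", "mbps": 3},   # DSL below threshold: never counted
    {"type": "wireless", "mbps": 10},  # 4G LTE: counted only in broad tally
    {"type": "wireline", "mbps": 20},  # cable modem: always counted
]

narrow = sum(counts_as_broadband(h) for h in households)
broad = sum(counts_as_broadband(h, include_wireless=True) for h in households)
print(narrow, broad)  # 1 2
```

Under the narrow rule, two of the three hypothetical households look unserved; counting wireless cuts that figure in half, which is the same direction of adjustment as the 7 million versus 2-to-5 million household estimates above.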
And what about that growing “digital divide”? Once again, the naysayers ignore speedy wireless connections to create the appearance of a problem. It is not surprising that wealthier people have greater access to the Internet; they likely have greater access to most goods in the U.S. economy. A 2012 Pew survey shows that the same percentage of white, black, and Hispanic adults (roughly 62 percent) go online wirelessly with a laptop or a cellphone; that slightly more blacks and Hispanics own a smartphone than do whites (49 versus 45 percent); and that twice as many blacks and Hispanics go online mostly using their cell phone compared to whites (38 versus 17 percent).
The third statistic may indicate that blacks and Hispanics lack wireline access relative to whites or that blacks and Hispanics simply have stronger preferences for wireless connections relative to whites; if the latter, there is no problem to be solved. And if income differences explain the differences in broadband choices, income-based subsidies are the logical policy instrument.
Broadband price comparisons. There is a lot of casual empiricism in this area. International price comparisons of a differentiated product such as Internet connectivity should be taken with a grain of salt because the quality of Internet service might not be comparable. Moreover, if you put a gun to a provider’s head (as regulators do in Europe) and require it to make its services available to resellers at incremental cost, you are going to get cheap service—and destroy investment incentives as a nasty byproduct. Citing “harsher rules that have sapped profitability,” Reuters reported that European telco stocks were trading at roughly 9.9 times earnings compared to 17.6 times for their U.S. peers.
In large swaths of this country, the incumbent cable operator faces a fiber-based telco offering triple-play packages. Unless you think that cable operators are colluding with the telcos—a position espoused by Ms. Crawford—Internet prices are less than monopoly levels where telco-based fiber is available. And help is on the way for the rest of us in the form of wireless 4G LTE offerings, satellite broadband connections, and further telco deployment.
This is not to say that market forces and a largely hands-off Internet policy have delivered the ideal state of competition. In a market with large fixed costs, when consumers are reluctant to switch providers, and when certain must-have video programming is controlled by the incumbent cable operator, we shouldn’t expect ten broadband providers in each zip code.
The United States appears to be doing just fine in the broadband race; perhaps not in first place, but certainly deserving of a cameo on the next GoDaddy commercial. Any efforts to stimulate greater deployment should be targeted, and they should respect the incentives of broadband operators to continually upgrade their networks. The naysayers have misdiagnosed the state of broadband competition.
The New York Times just ran a provocative story titled “Americans Paying More for LTE Service,” suggesting that prices charged by U.S. wireless operators for access to their new 4G networks are triple what they would be were our wireless markets more competitive. In support of this claim, the story compares the price per gigabyte charged by Verizon Wireless for its bundled voice-data plan ($7.50) to the “European average” LTE price for data-only plans ($2.50), as calculated by the consultancy Wireless Intelligence. Time to call in the trust busters? Hardly.
As any first-year economics student understands, prices are determined by supply and demand conditions. When performing international price comparisons, one should account for these differences before proclaiming that U.S. consumers spend “too much” on a particular service. Of course, it is much easier to generate readership (and hence advertising dollars) with fantastic claims that our wireless markets are not competitive.
Let’s start with differences in demand that could affect the value of wireless data services and thus relative prices. While it makes sense for The Economist to compute a Big Mac Index for a product that is basically the same wherever it is sold, price comparisons of services that are highly differentiated across countries are less revealing. And the quality of wireless LTE networks varies significantly. Verizon’s LTE network covered two-thirds of the U.S. population in April 2012. In contrast, the geographic coverage of European carriers’ LTE networks is anemic, prompting European Commissioner Neelie Kroes to proclaim this month that the absence of LTE across the continent was proving to be a major problem. No wonder it is hard to get Europeans to pay dearly for LTE services!
Turning to the supply side of the equation, while the surface area of the U.S. LTE “coverage blanket” is relatively larger, the European coverage blanket is thicker than ours. U.S. wireless carriers don’t have as much spectrum, the key ingredient in delivering wireless service, as their European counterparts. As pointed out by wireless analyst Roger Entner, U.S. carriers have only one-third of the spectrum available in Italy (on a MHz-per-million-subscribers basis), and one-fifth as much spectrum as France, Germany, and the UK. Given this relative scarcity of spectrum, U.S. carriers must prevent overuse of their LTE networks through the price mechanism—else their data networks would be worthless. As more spectrum comes online, basic economic theory predicts that U.S. data prices will fall.
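The MHz-per-million-subscribers yardstick Mr. Entner uses is simple division. The carrier figures below are hypothetical placeholders, not his actual data:

```python
def mhz_per_million_subs(mhz_licensed, subscribers):
    """Spectrum endowment normalized by subscriber base."""
    return mhz_licensed / (subscribers / 1_000_000)

# Hypothetical: a U.S. carrier with 100 MHz serving 100 million
# subscribers versus a European carrier with the same 100 MHz
# serving only 20 million. Illustration only.
us = mhz_per_million_subs(100, 100_000_000)  # 1.0 MHz per million subs
eu = mhz_per_million_subs(100, 20_000_000)   # 5.0 MHz per million subs
print(eu / us)  # 5.0
```

On this normalized measure, the hypothetical European carrier is five times better endowed even though both hold identical raw spectrum, which is why per-subscriber congestion, and hence pricing, can diverge so sharply across the Atlantic.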
The staggered LTE offerings by U.S. carriers are another factor affecting the supply side of the equation. As the New York Times article notes, Verizon was the first to market LTE in the United States in December 2010. AT&T, Sprint, and T-Mobile unveiled LTE offerings at a later date and are playing catch-up. To compete for LTE customers, these latecomers are undercutting Verizon, which, in turn, will lead to lower prices. By offering unlimited LTE data plans, Sprint charges $0 on a per-gigabyte basis at the margin. T-Mobile also offers an “Unlimited Nationwide 4G” plan at $90 per month (including unlimited voice minutes) that sets the marginal price on a per-gigabyte basis to zero. Although AT&T does not offer unlimited data plans, one can compute the “imputed” price per gigabyte for its bundled voice-data plans by subtracting the price of a comparable unlimited voice plan and then dividing by the gigabytes permitted. The result? A lower price per gigabyte than the European average. (Interested readers can email me for the math.)
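The imputed-price arithmetic described above is easy to reproduce. The plan prices in this sketch are hypothetical, not AT&T’s actual rates:

```python
def imputed_price_per_gb(bundle_price, voice_only_price, gb_allowance):
    """Price per GB implied by a bundled voice+data plan.

    bundle_price:      monthly price of the voice+data bundle
    voice_only_price:  price of a comparable unlimited voice-only plan
    gb_allowance:      gigabytes of data included in the bundle
    """
    data_component = bundle_price - voice_only_price
    return data_component / gb_allowance

# Hypothetical numbers for illustration only: a $95 bundle, a $70
# comparable voice-only plan, and a 10 GB data allowance.
print(imputed_price_per_gb(95.0, 70.0, 10))  # 2.5
```

The point of the exercise is that once the voice component is stripped out, the implied data price can land far below the headline $7.50-per-gigabyte figure the Times attributes to bundled plans.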
Thus, even if you think U.S. wireless data prices are “too high” today, the competitive process should be given more than one year to work its magic. Consider the competition for wireless voice services, which has played out over a decade. According to Merrill Lynch, the United States enjoyed a lower price for voice services on a per-minute-of-use basis ($0.03) than France ($0.10), Germany ($0.08), or the UK ($0.08) in the fourth quarter of 2011. How can the New York Times say, on the one hand, that these European countries serve as a competitive benchmark for wireless data services in the United States, but that the prices for voice services in these same countries should be ignored? Are we to mimic European policies with respect to data services and shun their policies with respect to voice services?
The lesson here is that what’s happening to European prices for wireless voice, wireless data, healthcare, or any differentiated product for that matter depends on several things, none of which is controlled for when making these simplistic international price comparisons. I know, I know. We need to sell Internet advertising. Can you imagine the headline: “Difference-in-difference regression shows that U.S. data prices are just right?”
Two dominant schools of thought have emerged in the broadband policy arena. The first, represented by the views of Susan Crawford, a visiting professor at Harvard Law School, is that there is not enough competition facing cable modem service and thus government must intervene to prevent a likely abuse of market power. A second camp believes that there is no basis for proactive policies designed to increase the number of broadband providers, even in local markets served by a single provider. The high margins enjoyed by the first provider, they claim, reward risk-taking behavior and will induce further entry.
A third perspective, gaining some traction, to which I (and hopefully a few others) subscribe, posits that there is still a limited role for policy so long as improving consumer welfare is the objective. After penning this blog, I might be disinvited to the Christmas parties of camps one and two this year.
Camp three is agnostic as to the “right” number of broadband providers, but believes that “more than one” will likely increase consumer welfare. Although government should not subsidize entry by rivals—this is tantamount to appropriating the returns of first movers, which decreases consumer welfare in the long run—it should remove any barriers that prevent more robust competition. Whereas my camp has a healthy respect for investment incentives on a going-forward basis, camp one sees investments by cable operators as sunk and thus ripe for the taking.
The role of wireless 4G networks likely separates those with at least some faith in market forces and those without any (camp one). Ms. Crawford and her ilk relegate wireless to somewhere less relevant than pink elephants when it comes to broadband competition. At a Brookings event last week, she referred to wireless as a “complementary product” for most Americans, the insinuation being that wireless is not to be taken seriously as a solution to Internet connectivity.
Although wireless might be perceived as a complement to wireline connections today, the new 4G mobile connections will afford consumers downloads roughly seven times faster than the experience on prior generations (3G) of smartphones. With sufficient spectrum to provide endurance (another dimension of network quality), 4G operators could offer broadband consumers the full suite of services to which they have become accustomed on wireline connections in the near future.
If you don’t believe in wireless, and if you think that no amount of tinkering with the rules will get fiber deployed in more areas, then you have what Ms. Crawford refers to as a “natural monopoly” in homes served by cable modem providers but not fiber. What to do then?
In these cases, says Ms. Crawford, government “has a very important role to play.” In particular, government should “provide assistance to people who don’t have fiber access;” it should “make sure pricing is fair;” and it should provide “equal facilities to all Americans.” This is scary stuff. Although I have been critical of certain cable practices, it is a step too far to suggest that cable companies should be subject to price regulation or government-subsidized overbuilding because they invested in neighborhoods where no one else has been willing to follow.
So what policies are being peddled by camp three? When it comes to broadband competition, the FCC should remove barriers to entry for wireless broadband operators seeking to deploy 4G wireless technologies, and eliminate the disincentives facing telcos for deploying fiber beyond the 55 million U.S. homes that were served as of March 2012.
Two FCC Commissioners recently sent signals to the marketplace along these lines. In a speech at the Wharton business school, Chairman Genachowski discussed the need for additional spectrum: “In addition to promoting competition, reducing barriers to broadband build-out and driving broadband investment, we of course need to keep clearing inefficiently used spectrum and reallocating it for licensed flexible use.” Can I get an Amen?
On C-SPAN’s The Communicators, Commissioner Ajit Pai was asked about how to spur additional fiber investment: “For one, we shouldn’t extend legacy regulations of copper wire telephone monopoly era to next generation networks. The Title II docket remains open to this day. To the extent we wanted to send a signal to the private sector that we weren’t going to take a heavy handed approach, we should close that docket.” Translation: The FCC should clarify its rules towards IP networks so that telcos understand the implications of making fiber investments; if those investments are subject to onerous requirements, then telcos will be less inclined to invest.
Dare I count the Chairman of the FCC and FCC Commissioner Pai as honorary members of my third camp? I’ll let you know if I get any Christmas invitations.
Last week, the FCC decided not to extend certain provisions of the “program access” protections of the 1992 Cable Act. Reading the popular press gives one the false impression that the entire program-access regime was taken apart. In reality, the ban on exclusive distribution arrangements between cable operators and cable networks will be lifted, while other protections for rival distributors will remain in force.
Although the FCC’s Sunset Order suggests that lifting the ban will mostly affect cable-affiliated networks, those networks are generally distributed by their affiliated cable owner without a contract. There is no reason to add an exclusivity provision to a contract that does not exist.
Accordingly, permitting exclusive contracts likely will have a greater impact on independent networks (such as Disney Channel), which are distributed pursuant to a contract. Under the old rules, a cable operator could not tell an independent network: “I will carry you only if you agree not to deal with DISH Network, DirecTV, Verizon, and AT&T.” With the ban on exclusive agreements lifted, a cable operator may make such a take-it-or-leave-it offer.
To ensure access to newly exclusive programming, the FCC will rely on a case-by-case review of any complaints brought by distribution rivals. This ex post approach to adjudicating access disputes is similar to the one used by the Commission for “program carriage” complaints, in which an independent cable network must persuade the agency to permit a complaint to be heard by an administrative law judge. In contrast, the case-by-case approach embraced in the Sunset Order is not consistent with the ex ante prohibition against discriminatory contracting by broadband network owners in the Commission’s Open Internet Order of 2010. When it comes to handling discrimination, the Commission is anything but consistent.
In the Sunset Order, the FCC gave special treatment to cable-affiliated sports programming, often carried on regional sports networks (RSNs). In particular, the FCC established a “rebuttable presumption” that an exclusive contract involving a cable-affiliated RSN violates the Cable Act. Because sports programming is one of the few types of “must-have” programming, this exemption implies that the competitive balance among cable operators and their competitors may not be altered significantly. This is not to say that non-sports programming is meaningless—as the FCC recognized in its Comcast-NBCU Order, the refusal to supply a collection of non-sports programming could impair a rival distributor. But exempting sports programming takes much of the bite out of the rule change.
In addition to effectively exempting the most likely basis for a program access dispute, the Sunset Order makes clear that a distribution rival still can bring a complaint under other sections of the Cable Act. For example, a rival can allege “undue influence” under Section 628(c)(2)(A); discrimination under Section 628(c)(2)(B); or a “selective refusal to deal” under Section 628(c)(2)(B). In other words, the FCC removed one of several ways a cable operator can violate the Cable Act. The agency is still watching.
The FCC also pointed out that approximately 30 cable-affiliated, national networks and 14 cable-affiliated RSNs are subject to program-access merger conditions adopted in the Comcast-NBCU Order until January 2018. These conditions require Comcast to make these affiliated networks available to competitors, even after the expiration of the exclusive contract prohibition. Because these networks account for a significant share (about one third) of all cable-affiliated programming, the effect of removing the exclusivity ban will be further diminished.
The choice between an ex ante prohibition of certain conduct and an ex post, case-by-case review of complaints turns on the potential for efficiency justifications. In reaching its decision, the Commission noted one potential procompetitive benefit of permitting exclusive deals—ostensibly, to promote investment in new programming. While promoting investment in new programming is important (notwithstanding the fact that there are literally hundreds of cable networks, many of which sprouted up during the exclusivity ban), so too is promoting investment in rival distribution networks. With 55 percent of all U.S. households beholden to a single, fixed-line provider of broadband access (mostly cable modem service), the Commission should consider how each of its rules affects broadband investment. Alas, the agency disposed of this consideration in a single paragraph in the Sunset Order, arguing that the case-by-case approach was sufficient to protect the investment incentives of broadband operators.
It is no accident that the relaxation of the exclusivity ban was opposed by Google, Verizon, and AT&T—each of whom is deploying broadband networks (of both the fixed and mobile variety) in competition with incumbent cable operators. If these rival networks cannot secure access to cable programming, then convincing a cable customer to “cut the cord” will be that much harder. And if rivals cannot reach a certain level of penetration, then their investments will not generate positive returns; if that happens, we won’t see as much broadband investment as we hoped for.
To the extent that the Sunset Order is a harbinger of the FCC’s newfound embrace of case-by-case adjudication of discriminatory conduct, then it is a good thing. To ensure that 4G network operators or Google do not lose their appetite to invest in broadband networks, however, the FCC must be vigilant in enforcing the new rules.
Today the commissioners of the Federal Communications Commission (FCC) are meeting to vote on two issues that will be pivotal to the future of the wireless industry: (1) whether to impose a “spectrum cap” on wireless providers, and (2) how to design the “incentive auction” of the broadcasters’ spectrum. There is a lot at stake for the U.S. economy in getting these policies right: A new analysis by Deloitte estimates that mobile broadband network investments over the period 2012–2016 could expand U.S. GDP by between $73 billion and $151 billion and account for up to 771,000 jobs.
A spectrum cap would prevent a single provider (say, Verizon) from acquiring more than a certain amount of the airwaves or “spectrum rights” in a given geographic area (say, Washington, D.C.). Spectrum is the most important input in the supply of wireless services—without it, a provider literally can’t compete. The objective of a spectrum cap is to prevent any single carrier from monopolizing a key input in the production process; more wireless entry means greater competition, which means lower wireless prices. So why is this idea so controversial?
The reason is that even carriers with significant spectrum holdings need more of it to survive. To make things concrete, compare the spectrum holdings of Verizon with those of Sprint and T-Mobile. According to Deutsche Bank, Verizon has about 18 percent of all available spectrum on a population-weighted basis (including the spectrum recently obtained from SpectrumCo), compared to about nine percent each for Sprint and T-Mobile. Yet Verizon is desperate for more spectrum because its subscriber base is larger than that of its rivals, and because today’s wireless customers are finding cool (and bandwidth-intensive) things to do with their new 4G phones, straining the capacity of its wireless network. According to one noted wireless analyst, the demand for mobile broadband will surpass the spectrum available to meet it in mid-2013. Even the Chairman of the FCC recognizes that the “biggest threat to the future of mobile in America is the looming spectrum crisis.”
Reinserting the spectrum cap—it was sent to the regulatory dustbin several years ago—and setting it at, say, one-fifth of all available spectrum would effectively bar Verizon from acquiring any more spectrum, whether in an auction or through the secondary markets. And that means that Verizon’s customers would suffer a serious degradation in their wireless connections relative to a world in which Verizon could augment its spectrum capacity. As one Nobel laureate economist famously said, “there’s no such thing as a free lunch.” Taking away from Verizon to give to smaller carriers entails serious tradeoffs.
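To see how quickly a one-fifth cap would bind, consider a minimal sketch of the arithmetic. The 18 percent figure for Verizon comes from the Deutsche Bank estimate above; the acquisition sizes are hypothetical illustrations, not figures from any filing.

```python
# Sketch of how a spectrum cap binds on further acquisitions.
# current_share_pct: Verizon's ~18% population-weighted holdings (per Deutsche Bank);
# acquired_share_pct: a hypothetical purchase, expressed as a share of all spectrum.

def acquisition_allowed(current_share_pct, acquired_share_pct, cap_pct=20.0):
    """Return True if the post-acquisition share stays at or under the cap."""
    return current_share_pct + acquired_share_pct <= cap_pct

print(acquisition_allowed(18.0, 1.5))  # True: a small purchase still fits under 20%
print(acquisition_allowed(18.0, 5.0))  # False: blocked by a one-fifth cap
```

The point of the sketch is that a carrier starting near the cap has almost no headroom: at 18 percent, anything beyond a token acquisition is foreclosed, in the auction and in secondary markets alike.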
And to understand those tradeoffs, the FCC must think hard about what the ideal market structure of the wireless industry should look like. A spectrum cap equal to one-fifth of all spectrum implies that the ideal market structure is five national carriers. But even five might be too many given the evolving wireless technology: With the enhanced download speeds made available by 4G networks—Verizon’s 4G network is seven times as fast as its 3G network according to PC World—wireless consumers will be streaming high-definition movies and FaceTiming with their friends, putting even greater pressure on spectrum capacity. The FCC needs to come to grips with the fact that its policies are in conflict with these technological trends and the associated economies of scale in the supply of wireless services.
Five carriers might also be the wrong number when one considers the role of mobile broadband in the larger broadband market. According to the FCC’s Wireline Competition Bureau, as of mid-2011, 55 percent of all U.S. households relied on a single wireline broadband provider capable of meeting the FCC’s definition of broadband. This means that wireless 4G connections could serve as the second broadband pipeline in over half of U.S. homes. Given the competitive implications of moving from one to two broadband providers—cable modem prices have been shown to fall significantly in the face of competitive entry—the right number of wireless carriers might be closer to three.
But who really knows? The market should decide whether the optimal number of wireless carriers is three or four or five, not the regulators. If the FCC is worried about a single carrier buying up the entirety of the spectrum in the forthcoming broadcast spectrum auction, then a simple rule forbidding such an outcome in that auction is more efficacious than a clumsy spectrum cap. By micro-managing the structure of the wireless industry, the commission tasked with overseeing the communications industry risks making the wrong call.
Economists recognize that the source of sustainable, private-sector jobs is investment. Due to measurement problems with investment data, however, it is sometimes easier to link a byproduct of investment—namely, adoption of the technology made possible by the investment—to job creation. This is precisely what economists Rob Shapiro and Kevin Hassett have done in their new study on the employment effects of wireless investments.
Shapiro and Hassett credit the nation’s upgrade of wireless broadband infrastructure from second-generation (2G) to third-generation (3G) technology with generating over one million jobs between 2006 and 2011. To demonstrate that adoption of 3G handsets “caused” job creation in an econometric sense, the authors studied the relationship between the change in a state’s employment and the cumulative penetration of cell phone technologies. According to their econometric model, every 10 percentage point increase in the penetration of a new generation of cell phones in a given quarter causes between a 0.05 and 0.07 percentage point increase in employment growth in the following three quarters.
How reasonable are these results? In 2010, Bob Crandall and I estimated that investment in second-generation broadband infrastructure of roughly $30 billion per year, including wireless infrastructure, sustained roughly 500,000 jobs between 2006 and 2009. We further estimated that spillover effects in other industries that exploit broadband technology could sustain another 500,000, bringing the total job effect close to one million jobs per year. Although Shapiro’s and Hassett’s estimates (based on wireless deployment only) significantly exceed ours (based on all broadband deployment), their estimate is not outside the realm of possibility.
Crandall, Lehr, and Litan (2007) also conducted a regression analysis using state-level broadband penetration data from 2003 to 2005 to estimate job effects. They projected that for every one percentage point increase in broadband penetration in a state, employment increases by 0.2 to 0.3 percent per year. On a national level, their results imply an increase of approximately 300,000 jobs per year per one-percentage-point increase in broadband penetration. Once again, Shapiro’s and Hassett’s estimates are consistent with this prior work.
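The national figure follows from simple scaling of the state-level estimate. Assuming (for illustration only) a U.S. employment base of about 130 million, the 0.2–0.3 percent range brackets the roughly 300,000 jobs cited above:

```python
# Scaling the Crandall-Lehr-Litan state-level estimate to the nation.
# ASSUMPTION (not from the paper): total U.S. employment of ~130 million.

US_EMPLOYMENT = 130_000_000

def implied_jobs(employment_effect_pct):
    """Jobs implied nationally by a given percent rise in employment."""
    return US_EMPLOYMENT * employment_effect_pct / 100.0

low = implied_jobs(0.2)   # 260,000 jobs per year
high = implied_jobs(0.3)  # 390,000 jobs per year
print(f"{low:,.0f} to {high:,.0f} jobs per 1-pp rise in penetration")
```

Any plausible choice of employment base yields the same order of magnitude, which is why the ~300,000 figure is a fair summary of the regression’s national implication.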
Scholars may differ on the precise way to measure the employment effects, but that debate misses the more important policy point—namely, that broadband technologies generally and wireless broadband in particular have become a vital engine of job creation. The observed correlation between wireless adoption and employment is not accidental: To induce customers to adopt the coolest handset, firms must continuously invest in the next generation of network and device technologies. And these costly investments sustain jobs.
Moreover, contrary to the FCC’s opinion in its 15th annual wireless competition report, private industry’s sustained and widespread investment in new wireless broadband technologies is consistent with the sector being intensely competitive. Industry critics have decried such evidence, arguing instead that the industry is in the death grip of monopolists. Although a monopolist may have an incentive to innovate to protect against a future threat, firms in a competitive industry have incentives to invest and innovate as a way to protect against losing market share today.
Policymakers should ask themselves this question: Why would wireless carriers continually invest billions of dollars on next-generation technologies if they could sit back and exploit their alleged monopoly rents? Experience and common sense tell us that in fact, companies in this space are not behaving like monopolists. Rather, wireless providers of all stripes are desperately trying to distinguish themselves from their rivals. Wireless tablets and phones are driving demand for more and faster wireless broadband, while spectrum-devouring apps like Siri have captured the imagination of millions. The wireless arms race is on, and the U.S. economy stands to benefit directly as wireless companies try to outmaneuver one another with the fastest networks, coolest devices, and deepest array of killer apps.
Regulated firms and their Washington lawyers study agency reports and public statements carefully to figure out the rules of the road; the clearer the rules, the easier it is for regulated firms to understand how the rules affect their businesses and to plan accordingly. So long as the regulator and the regulated firm are on the same page, resources will be put to the most valuable use allowed under the regulations.
When a regulator’s signals get blurry, resources may be squandered. For starters, take the FCC’s annual wireless competition report and the Commission’s pronouncements on spectrum policy. For several years, the competition report cited a trend of falling prices and increasing entry as evidence of robust competition while at the same time noting that industry concentration was slowly rising.
In an abrupt turnaround, the FCC’s 2010 competition report cited the slow but steady increase in concentration as evidence of a lack of competition despite the continued decline in prices and increase in new-firm entry. In other words, in the face of the same industry trends, the agency’s conclusion on competition reversed. The increased weight placed on concentration also seemed at odds with the DOJ’s revised Merger Guidelines, which deemphasized concentration in favor of direct evidence of market power.
At last week’s Consumer Electronics tradeshow, the FCC chairman suggested that the competition report’s objective was not to provide guidance on Commission policy but instead “to lay out data around the degrees of competition in the different sectors.” So much for clearing up the ambiguity. Industry participants expect more than a Wikipedia entry on something so weighty as an annual report to Congress regarding one of the economy’s most critical sectors.
The agency’s signals on spectrum policy are even murkier. On one hand, during the last few years, the current FCC has been calling for more frequencies to be made available to support and grow wireless broadband networks. The FCC has also been publicly supporting voluntary incentive auctions—a market-based tool to compensate existing spectrum licensees for returning their licenses—as the best way to reallocate unused broadcast spectrum to wireless broadband. However, in a confusing set of remarks at the same tradeshow, the FCC now seems to be saying that it only wants to see more spectrum made available if the agency can dictate who gets the spectrum and how they can use it. The very discretion that the FCC now seeks will invite rent-seeking behavior among auction contestants, who will lobby the agency to slant the rules in a way that limits competition and advances their narrow interests; better to immunize the FCC from this lobbying barrage by limiting its discretion.
The agency’s inconsistent and confusing analysis and statements in these two critical policy arenas—wireless competition and spectrum policy—created the perfect storm last year when AT&T sought to acquire T-Mobile. AT&T argued that it wanted to purchase T-Mobile and use its spectrum to augment existing spectrum and infrastructure resources, consistent with the agency’s acknowledgement that wireless carriers needed more spectrum to support surging demand for bandwidth-intensive wireless services such as streaming video. Had AT&T understood the FCC’s intentions, it would not have offered a four-billion-dollar breakup fee to T-Mobile’s parent; these resources could have been put to better use.
The singular objective that should drive the Commission in all matters wireless is getting spectrum into the hands of firms that value it the most. The last 20 years of wireless-industry growth has proven that those who value spectrum the most put it to use most quickly. To commit to this course of action, the agency needs to more clearly and consistently signal its regulatory intentions. If the agency wants to spur competition, it should support Congressional efforts to authorize incentive auctions without restrictions. It also needs to let the evidence of lower prices, growing adoption, and increasing innovation inform its understanding of the state of competition.
Yesterday, AT&T announced it was halting its plan to acquire T-Mobile. Presumably AT&T did not think it could prevail in defending the merger in two places simultaneously—one before a federal district court judge (to defend against the DOJ’s case) and another before an administrative law judge (to defend against the FCC’s case). Staff at both agencies appeared intransigent in their opposition. AT&T’s option of defending the cases sequentially, first against the DOJ and then against the FCC, was removed by the DOJ’s threat to withdraw its complaint unless AT&T resubmitted its merger application to the FCC. The FCC rarely makes a major license-transfer decision without the green light from the DOJ on antitrust issues. Instead, the FCC typically piles on conditions to transfer value created by the merger to complaining parties after the DOJ has approved a merger. Prevailing first against the DOJ would have rendered the FCC’s opposition moot.
The FCC’s case against the merger was weak. I have already blogged about the FCC’s Staff Report, but one point is worth revisiting as we digest the fate of T-Mobile’s spectrum: The FCC placed a huge bet on the cable companies’ breathing life into a floundering firm. In particular, the Staff Report cited a prospective wholesale arrangement between Cablevision and T-Mobile as evidence that some alternative suitor—whose name did not rhyme with “Amy and tea” or “her eyes on”—could preserve the number of actual competitors in the marketplace. However, within days of the FCC’s placing its bet on the cable industry, Verizon announced its intention to gobble up the spectrum of Comcast, Time Warner, and Bright House. Over the weekend, Verizon declared its purchase of spectrum from Cox. To be fair, Verizon’s acquisitions do not preclude T-Mobile and Cablevision from entering into some spectrum-sharing arrangement, but let’s not hold our breath.
This episode highlights the danger of regulators’ industrial engineering: The wireless marketplace is so dynamic that a seemingly reasonable bet by an agency was revealed to be a stunning loser in just a matter of days. By virtue of AT&T’s “winning the auction” for T-Mobile’s assets—Deutsche Telekom, T-Mobile’s parent, is leaving the American wireless industry one way or another—the marketplace selected the most efficient suitor for T-Mobile. If the cable companies or some other suitor were interested in entering the wireless industry, then presumably they would have stepped forward when T-Mobile was still on the open market.
Can you blame the cable companies for their lack of interest in wireless? Who wants to enter an industry with declining prices that requires billions in network investment that cannot be redeployed elsewhere in the event of a loss? When asked what Deutsche Telekom plans to do with its U.S. assets now that the AT&T deal has unraveled, a company spokesman said: “There’s no Plan B. We’re back at the starting point.” Such gloom is hard to reconcile with the FCC’s belief that a viable suitor is lurking in the background.
Short of Google’s or DISH Network’s or some non-communications giant’s swooping down in the coming days, the net costs of the FCC’s risky intervention will begin to mount. The ostensible benefits of intervention were to prevent a price increase and to preserve the cable companies’ play on T-Mobile’s spectrum. The second benefit has evaporated, and the first benefit was never proven in the FCC’s Staff Report. On the cost side of the ledger, AT&T’s customers will soon experience increased congestion as their demand for wireless video and other bandwidth-intensive applications outstrips the capacity of AT&T’s network. And T-Mobile’s customers will never get to experience 4G in all its glory. (Deutsche Telekom has little incentive to upgrade a network it plans to sell.) The FCC has certainly capped AT&T’s spectrum holdings in place, but has the agency advanced the public interest?