Two dominant schools of thought have emerged in the broadband policy arena. The first, represented by the views of Susan Crawford, a visiting professor at Harvard Law School, holds that there is not enough competition in cable modem service and thus government must intervene to prevent a likely abuse of market power. A second camp believes that there is no basis for proactive policies designed to increase the number of broadband providers, even in local markets served by a single provider. The high margins enjoyed by the first provider, they claim, reward risk-taking behavior and will induce further entry.
A third perspective, which is gaining some traction and to which I (and hopefully a few others) subscribe, posits that there is still a limited role for policy so long as improving consumer welfare is the objective. After penning this blog, I might be disinvited from the Christmas parties of camps one and two this year.
Camp three is agnostic as to the “right” number of broadband providers, but believes that “more than one” will likely increase consumer welfare. Although government should not subsidize entry by rivals—this is tantamount to appropriating the returns of first movers, which decreases consumer welfare in the long run—it should remove any barriers that prevent more robust competition. Whereas my camp has a healthy respect for investment incentives on a going-forward basis, camp one sees investments by cable operators as sunk and thus ripe for the taking.
The role of wireless 4G networks likely separates those with at least some faith in market forces and those without any (camp one). Ms. Crawford and her ilk relegate wireless to somewhere less relevant than pink elephants when it comes to broadband competition. At a Brookings event last week, she referred to wireless as a “complementary product” for most Americans, the insinuation being that wireless is not to be taken seriously as a solution to Internet connectivity.
Although wireless might be perceived as a complement to wireline connections today, new 4G mobile connections afford consumers download speeds roughly seven times faster than the experience on prior-generation (3G) smartphones. With sufficient spectrum to provide endurance (another dimension of network quality), 4G operators could soon offer broadband consumers the full suite of services to which they have become accustomed on wireline connections.
If you don’t believe in wireless, and if you think that no amount of tinkering with the rules will get fiber deployed in more areas, then you have what Ms. Crawford refers to as a “natural monopoly” in homes served by cable modem providers but not fiber. What to do then?
In these cases, says Ms. Crawford, government “has a very important role to play.” In particular, government should “provide assistance to people who don’t have fiber access;” it should “make sure pricing is fair;” and it should provide “equal facilities to all Americans.” This is scary stuff. Although I have been critical of certain cable practices, it is a step too far to suggest that cable companies should be subject to price regulation or government-subsidized overbuilding because they invested in neighborhoods where no one else has been willing to follow.
So what policies are being peddled by camp three? When it comes to broadband competition, the FCC should remove barriers to entry for wireless broadband operators seeking to deploy 4G wireless technologies, and eliminate the disincentives facing telcos for deploying fiber beyond the 55 million U.S. homes that were served as of March 2012.
Two FCC Commissioners recently sent signals to the marketplace along these lines. In a speech at the Wharton business school, Chairman Genachowski discussed the need for additional spectrum: “In addition to promoting competition, reducing barriers to broadband build-out and driving broadband investment, we of course need to keep clearing inefficiently used spectrum and reallocating it for licensed flexible use.” Can I get an Amen?
On C-SPAN’s The Communicators, Commissioner Ajit Pai was asked how to spur additional fiber investment: “For one, we shouldn’t extend legacy regulations of copper wire telephone monopoly era to next generation networks. The Title II docket remains open to this day. To the extent we wanted to send a signal to the private sector that we weren’t going to take a heavy handed approach, we should close that docket.” Translation: The FCC should clarify its rules towards IP networks so that telcos understand the implications of making fiber investments; if those investments are subject to onerous requirements, then telcos will be less inclined to invest.
Dare I count the Chairman of the FCC and FCC Commissioner Pai as honorary members of my third camp? I’ll let you know if I get any Christmas invitations.
Last week, the FCC decided not to extend certain provisions of the “program access” protections of the 1992 Cable Act. Reading the popular press gives one the false impression that the entire program-access regime was taken apart. In reality, the ban on exclusive distribution arrangements between cable operators and cable networks will be lifted, while other protections for rival distributors will remain in force.
Although the FCC’s Sunset Order suggests that lifting the ban will mostly affect cable-affiliated networks, those networks are generally distributed by their affiliated cable owner without a contract. There is no reason to add an exclusivity provision to a contract that does not exist.
Accordingly, permitting exclusive contracts likely will have a greater impact on independent networks (such as Disney Channel), which are distributed pursuant to a contract. Under the old rules, a cable operator could not tell an independent network: “I will carry you only if you agree not to deal with DISH Network, DirecTV, Verizon, and AT&T.” With the ban on exclusive agreements lifted, a cable operator may make such a take-it-or-leave-it offer.
To ensure access to newly exclusive programming, the FCC will rely on a case-by-case review of any complaints brought by distribution rivals. This ex post approach to adjudicating access disputes is similar to the one the Commission uses for “program carriage” complaints, in which an independent cable network must persuade the agency to permit a complaint to be heard by an administrative law judge. The case-by-case approach embraced in the Sunset Order is, however, at odds with the ex ante prohibition against discriminatory contracting by broadband network owners in the Commission’s Open Internet Order of 2010. When it comes to handling discrimination, the Commission is anything but consistent.
In the Sunset Order, the FCC gave special treatment to cable-affiliated sports programming, often carried on regional sports networks (RSNs). In particular, the FCC established a “rebuttable presumption” that an exclusive contract involving a cable-affiliated RSN violates the Cable Act. Because sports programming is one of the few types of “must-have” programming, this exemption implies that the competitive balance among cable operators and their competitors may not be altered significantly. This is not to say that non-sports programming is meaningless—as the FCC recognized in its Comcast-NBCU Order, the refusal to supply a collection of non-sports programming could impair a rival distributor. But exempting sports programming takes much of the bite out of the rule change.
In addition to effectively exempting the most likely basis for a program access dispute, the Sunset Order makes clear that a distribution rival still can bring a complaint under other sections of the Cable Act. For example, a rival can allege “undue influence” under Section 628(c)(2)(A); discrimination under Section 628(c)(2)(B); or a “selective refusal to deal” under Section 628(c)(2)(B). In other words, the FCC removed one of several ways a cable operator can violate the Cable Act. The agency is still watching.
The FCC also pointed out that approximately 30 cable-affiliated, national networks and 14 cable-affiliated RSNs are subject to program-access merger conditions adopted in the Comcast-NBCU Order until January 2018. These conditions require Comcast to make these affiliated networks available to competitors, even after the expiration of the exclusive contract prohibition. Because these networks account for a significant share (about one third) of all cable-affiliated programming, the effect of removing the exclusivity ban will be further diminished.
The choice between an ex ante prohibition of certain conduct and an ex post, case-by-case review of complaints turns on the potential for efficiency justifications. In reaching its decision, the Commission noted one potential procompetitive benefit of permitting exclusive deals—ostensibly, to promote investment in new programming. While promoting investment in new programming is important (notwithstanding the fact that there are literally hundreds of cable networks, many of which sprouted up during the exclusivity ban), so too is promoting investment in rival distribution networks. With 55 percent of all U.S. households beholden to a single, fixed-line provider of broadband access (mostly cable modem service), the Commission should consider how each of its rules affects broadband investment. Alas, the agency disposed of this consideration in a single paragraph in the Sunset Order, arguing that the case-by-case approach was sufficient to protect the investment incentives of broadband operators.
It is no accident that the relaxation of the exclusivity ban was opposed by Google, Verizon, and AT&T—each of whom is deploying broadband networks (of both the fixed and mobile variety) in competition with incumbent cable operators. If these rival networks cannot secure access to cable programming, then convincing a cable customer to “cut the cord” will be that much harder. And if rivals cannot reach a certain level of penetration, then their investments will not generate positive returns; if that happens, we won’t see as much broadband investment as we hoped for.
To the extent that the Sunset Order is a harbinger of the FCC’s newfound embrace of case-by-case adjudication of discriminatory conduct, then it is a good thing. To ensure that 4G network operators or Google do not lose their appetite to invest in broadband networks, however, the FCC must be vigilant in enforcing the new rules.
Today the commissioners of the Federal Communications Commission (FCC) are meeting to vote on two issues that will be pivotal to the future of the wireless industry: (1) whether to impose a “spectrum cap” on wireless providers, and (2) how to design the “incentive auction” of the broadcasters’ spectrum. There is a lot at stake for the U.S. economy in getting these policies right: A new analysis by Deloitte estimates that mobile broadband network investments over the period 2012–2016 could expand U.S. GDP between $73 and $151 billion, and account for up to 771,000 jobs.
A spectrum cap would prevent a single provider (say, Verizon) from acquiring more than a certain amount of the airwaves or “spectrum rights” in a given geographic area (say, Washington, D.C.). Spectrum is the most important input in the supply of wireless services—without it, a provider literally can’t compete. The objective of a spectrum cap is to prevent any single carrier from monopolizing a key input in the production process; more wireless entry means greater competition, which means lower wireless prices. So why is this idea so controversial?
The reason is that even carriers with significant spectrum holdings need more of it to survive. To make things concrete, compare the spectrum holdings of Verizon with those of Sprint and T-Mobile. According to Deutsche Bank, Verizon has about 18 percent of all available spectrum on a population-weighted basis (including the spectrum recently obtained from SpectrumCo), compared to about nine percent each for Sprint and T-Mobile. Yet Verizon is desperate for more spectrum because its subscriber base is larger than that of its rivals, and because today’s wireless customers are finding cool (and bandwidth-intensive) things to do with their new 4G phones, straining the capacity of its wireless network. According to one noted wireless analyst, the demand for mobile broadband will surpass the spectrum available to meet it in mid-2013. Even the Chairman of the FCC recognizes that the “biggest threat to the future of mobile in America is the looming spectrum crisis.”
Reinserting the spectrum cap—it was sent to the regulatory dustbin several years ago—and setting it at, say, one-fifth of all available spectrum would effectively bar Verizon from acquiring any more spectrum, whether in an auction or through the secondary markets. And that means that Verizon’s customers would suffer a serious degradation in their wireless connections relative to a world in which Verizon could augment its spectrum capacity. As one Nobel laureate economist famously said, “there’s no such thing as a free lunch.” Taking away from Verizon to give to smaller carriers entails serious tradeoffs.
And to understand those tradeoffs, the FCC must think hard about what the ideal market structure of the wireless industry should look like. A spectrum cap equal to one-fifth of all spectrum implies that the ideal market structure is five national carriers. But even five might be too many given the evolving wireless technology: With the enhanced download speeds made available by 4G networks—Verizon’s 4G network is seven times as fast as its 3G network, according to PC World—wireless consumers will be streaming high-definition movies and FaceTiming with their friends, putting even greater pressure on spectrum. The FCC needs to come to grips with the fact that its policies are in conflict with these technological trends and the associated economies of scale in the supply of wireless services.
Five carriers might also be the wrong number when one considers the role of mobile broadband in the larger broadband market. According to the FCC’s Wireline Competition Bureau, as of mid-2011, 55 percent of all U.S. households relied on a single wireline broadband provider capable of meeting the FCC’s definition of broadband. This means that wireless 4G connections could serve as the second broadband pipeline in over half of U.S. homes. Given the competitive implications of moving from one to two broadband providers—cable modem prices have been shown to fall significantly in the face of competitive entry—the right number of wireless carriers might be closer to three.
But who really knows? The market should decide whether the optimal number of wireless carriers is three or four or five, not the regulators. If the FCC is worried about a single carrier buying up the entirety of the spectrum in the forthcoming broadcast spectrum auction, then a simple rule forbidding such an outcome in that auction is more efficacious than a clumsy spectrum cap. By micro-managing the structure of the wireless industry, the commission tasked with overseeing the communications industry risks making the wrong call.
The Federal Trade Commission (FTC) is in the final stages of conducting its Google investigation. As the agency contemplates whether Google is a monopolist in the ill-defined market for search, it may find the competitive ground has shifted beneath its feet in just the 15 months since the investigation began. While a year or two ago Google’s main competition in search might have been Bing and Yahoo, today it is Apple and Amazon, and tomorrow it may be Facebook. The market is almost certainly broader than general search engines as we normally think of them.
Just last week, the New York Times ran a story explaining that Google and Amazon are “at war to become the pre-eminent online mall.” The story cited survey data from two consultancies that should give the antitrust authority pause:
- Forrester Research found that a third of online users started their product searches on Amazon compared to 13 percent who started their search from a traditional search site; and
- comScore found that product searches on Amazon have grown 73 percent over the last year while shopping searches on Google have been flat.
These impressive statistics suggest that Google lacks market power in a critical segment of search—namely, product searches. Even though searches for items such as power tools or designer jeans account for only 10 to 20 percent of all searches, they are clearly some of the most important queries for search engines from a business perspective, as they are far easier to monetize than informational queries like “Kate Middleton.”
One senses that the FTC has not focused much on competition from Amazon in product search, or even thought of Amazon as a search engine. Instead, antitrust agencies around the globe have fixated on helping middleman comparison-shopping sites such as Nextag and PriceGrabber, most of which charge retailers for listings. Google is taking heat from comparison sites for doing the same thing because Google is perceived to be the most important source for online shoppers. That regulators are willing to breathe life into these intermediaries implies they do not recognize the platform-based competition between Google and Amazon for product searches.
Amazon is not the only behemoth that competes with Google for search. Apple’s Siri can do search and a whole lot more, from helping Samuel L. Jackson design the perfect dinner to making John Malkovich laugh to helping Martin Scorsese maneuver through New York. As search evolves from links into answers, services like Siri become highly valuable. And the iTunes App Store represents the launching pad for many searches that would otherwise start on Google. A couple in Virginia that enjoys winery tours might begin their search by installing “Virginia Wine in My Pocket” or “Virginia Wineries” on their iPhone rather than searching the web. In March of this year, Apple announced that more than 25 billion apps had been downloaded from its App Store by users of the more than 315 million iPhone, iPad, and iPod touch devices worldwide. One wonders whether any of these downloads are being counted by the FTC in its calculations of Google’s market share.
And now Facebook is getting into search. At a Disrupt conference last week, Mark Zuckerberg explained that search engines are evolving into places where users go for answers, and that Facebook is uniquely positioned to compete in that market: “And when you think about it from that perspective, Facebook is pretty uniquely positioned to answer a lot of the questions that people have. So what sushi restaurants have my friends gone to in New York in the past six months and liked? . . . . These are queries that you could potentially do at Facebook if we build out this system that you just couldn’t do anywhere else.”
It may not be natural to associate Amazon (an online retailer), Apple (a device maker), and Facebook (a social media site) with search, but in the technology industry, your next competitive threat can come from anywhere. Monopoly and the kind of robust platform competition between Apple, Amazon, Google, and Facebook are mutually exclusive portraits of reality. Will the FTC turn a blind eye toward this advanced form of competition?
Last week, the FTC hired outside litigator Beth Wilkinson to lead an investigation into Google’s conduct, which some in the press have interpreted as a grave sign for the search company. The FTC is reportedly interested in pursuing Google under Section 5 of the FTC Act, which prohibits a firm from engaging in “unfair methods of competition.” Along with Bob Litan, who served as Deputy Assistant Attorney General in the Antitrust Division during the Microsoft investigation, I have penned a short paper on the FTC’s seemingly unorthodox Section 5 case against Google. (Disclosure: This paper was commissioned by Google.)
Litan and I explore a few possible theories of harm under a hypothetical Section 5 case and find them wanting, including (1) claims that specialized search results (such as flight, shopping, or map results) “unfairly” harm independent specialized search websites like Kayak (travel) or MapQuest (mapping and directions), or (2) assertions that Google allegedly has “deceived” users or websites by seemingly reneging on pledges not to favor its own sites. For the sake of brevity, I focus on the FTC’s potential deception theory here, and leave it to the interested reader to pursue the “unfairness” theory in the paper.
Deception of Users
The alleged bases of Google’s deception are generic statements that Google made, either in its initial public offering (IPO) or on its website, about Google’s attitude toward users leaving the site. The provision of a lawful service, specialized search, launched several years after the IPO statement certainly cannot be deceptive. To conclude that it is, and more importantly, to prevent the company from offering innovations in search would establish a precedent that would surely punish innovation throughout the rest of the economy.
As for the mission statement that the company wants users to get off the site as quickly as possible, it is just that, a mission statement. Users do not go to the mission statement when they search; they go to the Google site itself. Users cannot possibly be harmed even if this particular statement in the company’s mission were untrue. Moreover, if the problem lies in that statement, then any remedy should be directed at amending that statement. There is no justification for the Commission to hamper Google’s specialized search services themselves or to dictate where Google must display them.
Deception of Rivals
An alternative theory suggests that Google deceived its rivals, reducing innovation among independent websites. In a February 2012 paper delivered to the OECD, Tim Wu explained that competition law can be used to “increase the costs of exclusion,” which if successful, would promote innovation among application providers. Wu argued that “oversight of platforms is conceptually similar” to oversight of standard-setting organizations (SSOs). He offers a hypothetical case in which a platform owner “broadly represents to the world that he maintains an open and transparent innovation platform,” gains a monopoly position based on those representations, and then begins to exclude applications “that might themselves serve as platforms.” Once the industry has committed to a private platform, Wu argues, the platform owner “earns oversight of its practices from that point onward.”
So has Google earned itself oversight due to its alleged deception? Google is not perceived by web designers as providing a platform for all companies to have equal footing. Websites’ rankings in Google’s search results vary tremendously over time; no publisher could reasonably rely on any particular ranking on Google. To the contrary, websites want their presence to be known to any and all search engines. That specialized search sites did not base their business plans on Google’s commitment to openness is what distinguishes Google’s platform from Microsoft’s platform in the 1990s. To Wu’s credit, he does not mention Google in this section of the paper; the only platforms mentioned are those of Apple, Android, and Microsoft.
It is even more of a stretch to analogize Google’s conduct to that in the FTC’s Rambus case. Unlike websites, which do not depend on a Google “standard” (a website can be accessed by users from any search engine or through direct navigation), computer memory chips must be compatible with a variety of computers, which requires that chip producers develop a common set of standards for performance and interoperability. According to the FTC, Rambus exploited this reliance by, among other things, not disclosing to chip makers that it had additional divisional patent applications in process. That specialized search sites did not make “irreversible technological” investments based on Google’s commitment to a common standard is what distinguishes Google’s platform from SSOs.
The Freedom to Innovate
A change in a business model cannot be a legitimate basis for a Section 5 case because a firm cannot be expected to know how the world is going to unfold at its inception. A lot can change in a decade. Consumers’ taste for the product can change. Technology can change. Business models are required to adapt to such change; else they die. There should be no requirement that once a firm writes a mission statement, it be held to that statement forever. What if Google failed to anticipate the role of specialized search in 2004? Presumably, Google failed to anticipate a lot of things, but that should not be the basis for denying its entry into ancillary services or expanding its core offerings. As John Maynard Keynes famously replied to a criticism during the Great Depression that he had changed his position on monetary policy: “When the facts change, I change my mind. What do you do, sir?” If Google exposes itself to increased oversight merely for changing its mind, then other technology firms might think twice before innovating. And that would be a horrible consequence of the FTC’s exploration of alternative antitrust theories.
Economists recognize that the source of sustainable, private-sector jobs is investment. Due to measurement problems with investment data, however, it is sometimes easier to link a byproduct of investment—namely, adoption of the technology made possible by the investment—to job creation. This is precisely what economists Rob Shapiro and Kevin Hassett have done in their new study on the employment effects of wireless investments.
Shapiro and Hassett credit the nation’s upgrade of wireless broadband infrastructure from second-generation (2G) to third-generation (3G) technology with generating over one million jobs between 2006 and 2011. To demonstrate that adoption of 3G handsets “caused” job creation in an econometric sense, the authors studied the relationship between the change in a state’s employment and the cumulative penetration of cell phone technologies. According to their econometric model, every 10 percentage point increase in the penetration of a new generation of cell phones in a given quarter causes between a 0.05 and 0.07 percentage point increase in employment growth in the following three quarters.
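The coefficient is easier to grasp with a back-of-envelope calculation. The sketch below is illustrative only: the employment base of roughly 130 million is my assumption (an approximate mid-2000s U.S. employment level), not a figure taken from the study.

```python
# Back-of-envelope reading of the Shapiro-Hassett coefficient.
# EMPLOYMENT_BASE is an assumption (rough mid-2000s U.S. employment),
# not a number taken from the study itself.
EMPLOYMENT_BASE = 130_000_000

def implied_jobs(penetration_gain_pp: float, coeff_pp: float) -> float:
    """Jobs implied when each 10 pp of new-handset penetration raises
    employment growth by coeff_pp percentage points."""
    return EMPLOYMENT_BASE * (penetration_gain_pp / 10) * (coeff_pp / 100)

# A single 10-percentage-point jump in 3G penetration, at the low and
# high ends of the estimated coefficient range (0.05-0.07 pp):
low = implied_jobs(10, 0.05)
high = implied_jobs(10, 0.07)
print(f"{low:,.0f} to {high:,.0f} jobs")  # 65,000 to 91,000 jobs
```

On these assumed figures, a single 10-point jump in penetration translates into tens of thousands of jobs in the following quarters; the study’s million-job total reflects the cumulation of such effects over the full 2006–2011 adoption cycle.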
How reasonable are these results? In 2010, Bob Crandall and I estimated that investment in second-generation broadband infrastructure of roughly $30 billion per year, including wireless infrastructure, sustained roughly 500,000 jobs between 2006 and 2009. We further estimated that spillover effects in other industries that exploit broadband technology could sustain another 500,000 jobs, bringing the total effect close to one million jobs per year. Although Shapiro and Hassett’s estimates (based on wireless deployment only) significantly exceed ours (based on all broadband deployment), their estimate is not outside the realm of possibility.
Crandall, Lehr, and Litan (2007) also conducted a regression analysis, using state-level broadband penetration data from 2003–2005 to estimate job effects. They projected that for every one-percentage-point increase in broadband penetration in a state, employment increases by 0.2 to 0.3 percent per year. On a national level, their results imply an increase of approximately 300,000 jobs per year for each one-percentage-point increase in broadband penetration. Once again, Shapiro and Hassett’s estimates are consistent with this prior work.
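That national figure can be checked with similar arithmetic. In the sketch below, the U.S. employment base of about 135 million is my assumption (a rough 2003–2005 level), not a number from the paper.

```python
# Rough check of the national implication drawn from Crandall, Lehr,
# and Litan: a 1 pp rise in broadband penetration lifts employment by
# 0.2-0.3 percent per year. US_EMPLOYMENT is an assumed figure
# (approximate 2003-2005 level), not taken from the paper.
US_EMPLOYMENT = 135_000_000

jobs_low = US_EMPLOYMENT * 0.002   # 0.2 percent per year
jobs_high = US_EMPLOYMENT * 0.003  # 0.3 percent per year
print(f"{jobs_low:,.0f} to {jobs_high:,.0f} jobs per year")
```

The resulting range of roughly 270,000 to 405,000 jobs per year brackets the approximately 300,000 figure cited above.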
Scholars may differ on the precise way to measure the employment effects, but that debate misses the more important policy point—namely, that broadband technologies generally and wireless broadband in particular have become a vital engine of job creation. The observed correlation between wireless adoption and employment is not accidental: To induce customers to adopt the coolest handset, firms must continuously invest in the next generation of network and device technologies. And these costly investments sustain jobs.
Moreover, contrary to the FCC’s opinion in its 15th annual wireless competition report, private industry’s sustained and widespread investment in new wireless broadband technologies is consistent with the sector being intensely competitive. Industry critics have decried such evidence, arguing instead that the industry is in the death grip of monopolists. Although a monopolist may have an incentive to innovate to protect against a future threat, firms in a competitive industry have incentives to invest and innovate as a way to protect against losing market share today.
Policymakers should ask themselves this question: Why would wireless carriers continually invest billions of dollars on next-generation technologies if they could sit back and exploit their alleged monopoly rents? Experience and common sense tell us that in fact, companies in this space are not behaving like monopolists. Rather, wireless providers of all stripes are desperately trying to distinguish themselves from their rivals. Wireless tablets and phones are driving demand for more and faster wireless broadband, while spectrum-devouring apps like Siri have captured the imagination of millions. The wireless arms race is on, and the U.S. economy stands to benefit directly as wireless companies try to outmaneuver one another with the fastest networks, coolest devices, and deepest array of killer apps.
Regulated firms and their Washington lawyers study agency reports and public statements carefully to figure out the rules of the road; the clearer the rules, the easier it is for regulated firms to understand how the rules affect their businesses and to plan accordingly. So long as the regulator and the regulated firm are on the same page, resources will be put to the most valuable use allowed under the regulations.
When a regulator’s signals get blurry, resources may be squandered. For starters, take the FCC’s annual wireless competition report and the Commission’s pronouncements on spectrum policy. For several years, the competition report cited a trend of falling prices and increasing entry as evidence of robust competition while at the same time noting that industry concentration was slowly rising.
In an abrupt turnaround, the FCC’s 2010 competition report cited the slow but steady increase in concentration as evidence of a lack of competition despite the continued decline in prices and increase in new-firm entry. In other words, in the face of the same industry trends, the agency’s conclusion on competition reversed. The increased weight placed on concentration also seemed at odds with the DOJ’s revised Merger Guidelines, which deemphasized concentration in favor of direct evidence of market power.
At last week’s Consumer Electronics tradeshow, the FCC chairman suggested that the competition report’s objective was not to provide guidance on Commission policy but instead “to lay out data around the degrees of competition in the different sectors.” So much for clearing up the ambiguity. Industry participants expect more than a Wikipedia entry on something so weighty as an annual report to Congress regarding one of the economy’s most critical sectors.
The agency’s signals on spectrum policy are even murkier. On one hand, during the last few years, the current FCC has been calling for more frequencies to be made available to support and grow wireless broadband networks. The FCC has also been publicly supporting voluntary incentive auctions—a market-based tool to compensate existing spectrum licensees for returning their licenses—as the best way to reallocate unused broadcast spectrum to wireless broadband. However, in a confusing set of remarks at the same tradeshow, the FCC now seems to be saying that it only wants to see more spectrum made available if the agency can dictate who gets the spectrum and how they can use it. The very discretion that the FCC now seeks will invite rent-seeking behavior among auction contestants, who will lobby the agency to slant the rules in a way that limits competition and advances their narrow interests; better to immunize the FCC from this lobbying barrage by limiting its discretion.
The agency’s inconsistent and confusing analysis and statements in these two critical policy arenas—wireless competition and spectrum policy—created the perfect storm last year when AT&T sought to acquire T-Mobile. AT&T argued that it wanted to purchase T-Mobile and use its spectrum to augment existing spectrum and infrastructure resources, consistent with the agency’s acknowledgement that wireless carriers needed more spectrum to support surging demand for bandwidth-intensive wireless services such as streaming video. Had AT&T understood the FCC’s intentions, it would not have offered a four-billion-dollar breakup fee to T-Mobile’s parent; these resources could have been put to better use.
The singular objective that should drive the Commission in all matters wireless is getting spectrum into the hands of firms that value it the most. The last 20 years of wireless-industry growth has proven that those who value spectrum the most put it to use most quickly. To commit to this course of action, the agency needs to more clearly and consistently signal its regulatory intentions. If the agency wants to spur competition, it should support Congressional efforts to authorize incentive auctions without restrictions. It also needs to let the evidence of lower prices, growing adoption, and increasing innovation inform its understanding of the state of competition.