Last week, the FTC hired outside litigator Beth Wilkinson to lead an investigation into Google’s conduct, which some in the press have interpreted as a grave sign for the search company. The FTC is reportedly interested in pursuing Google under Section 5 of the FTC Act, which prohibits a firm from engaging in “unfair methods of competition.” Along with Bob Litan, who served as Deputy Assistant Attorney General in the Antitrust Division during the Microsoft investigation, I have penned a short paper on the FTC’s seemingly unorthodox Section 5 case against Google. (Disclosure: This paper was commissioned by Google.)
Litan and I explore a few possible theories of harm under a hypothetical Section 5 case and find them wanting, including (1) claims that specialized search results (such as flight, shopping, or map results) “unfairly” harm independent specialized search websites like Kayak (travel) or MapQuest (mapping and directions), or (2) assertions that Google has “deceived” users or websites by seemingly reneging on pledges not to favor its own sites. For the sake of brevity, I focus on the FTC’s potential deception theory here, and leave it to interested readers to pursue the “unfairness” theory in the paper.
Deception of Users
The bases of Google’s alleged deception are generic statements that Google made, either in its initial public offering (IPO) or on its website, about its attitude toward users leaving the site. The provision of a lawful service, specialized search, launched several years after the IPO statement certainly cannot be deceptive. To conclude that it is, and more importantly, to prevent the company from offering innovations in search, would establish a precedent that would surely punish innovation throughout the rest of the economy.
As for the mission statement that the company wants users to get off the site as quickly as possible, it is just that, a mission statement. Users do not go to the mission statement when they search; they go to the Google site itself. Users cannot possibly be harmed even if this particular statement in the company’s mission were untrue. Moreover, if the problem lies in that statement, then any remedy should be directed at amending that statement. There is no justification for the Commission to hamper Google’s specialized search services themselves or to dictate where Google must display them.
Deception of Rivals
An alternative theory suggests that Google deceived its rivals, reducing innovation among independent websites. In a February 2012 paper delivered to the OECD, Tim Wu explained that competition law can be used to “increase the costs of exclusion,” which, if successful, would promote innovation among application providers. Wu argued that “oversight of platforms is conceptually similar” to oversight of standard-setting organizations (SSOs). He offered a hypothetical case in which a platform owner “broadly represents to the world that he maintains an open and transparent innovation platform,” gains a monopoly position based on those representations, and then begins to exclude applications “that might themselves serve as platforms.” Once the industry has committed to a private platform, Wu argued, the platform owner “earns oversight of its practices from that point onward.”
So has Google earned itself oversight due to its alleged deception? Web designers do not perceive Google as providing a platform on which all companies compete on equal footing. Websites’ rankings in Google’s search results vary tremendously over time; no publisher could reasonably rely on any particular ranking on Google. To the contrary, websites want their presence to be known to any and all search engines. That specialized search sites did not base their business plans on any Google commitment to openness is what distinguishes Google’s platform from Microsoft’s platform in the 1990s. To Wu’s credit, he does not mention Google in this section of the paper; the only platforms mentioned are those of Apple, Android, and Microsoft.
It is even more of a stretch to analogize Google’s conduct to that in the FTC’s Rambus case. Unlike websites, which do not depend on a Google “standard” (a website can be reached by users from any search engine or through direct navigation), computer memory chips must be compatible with a variety of computers, which requires that chip producers develop a common set of standards for performance and interoperability. According to the FTC, Rambus exploited this reliance by, among other things, not disclosing to chip makers that it had additional divisional patent applications in process. That specialized search sites did not make “irreversible technological” investments based on any Google commitment to a common standard is what distinguishes Google’s platform from SSOs.
The Freedom to Innovate
A change in a business model cannot be a legitimate basis for a Section 5 case, because a firm cannot be expected to know at its inception how the world is going to unfold. A lot can change in a decade. Consumers’ taste for the product can change. Technology can change. Business models must adapt to such change, or else they die. There should be no requirement that once a firm writes a mission statement, it be held to that statement forever. What if Google failed to anticipate the role of specialized search in 2004? Presumably, Google failed to anticipate a lot of things, but that should not be the basis for denying its entry into ancillary services or expanding its core offerings. As John Maynard Keynes famously replied to a criticism during the Great Depression of having changed his position on monetary policy: “When the facts change, I change my mind. What do you do, sir?” If Google exposes itself to increased oversight merely for changing its mind, then other technology firms might think twice before innovating. And that would be a horrible consequence of the FTC’s exploration of alternative antitrust theories.
Last month, the Federal Reserve released a study, titled “The U.S. Housing Market: Current Conditions and Policy Considerations,” which offers prescriptions on how to cure the housing mess. Given the importance of this issue to the nation’s economic wellbeing—a large portion of our assets are tied up in real estate, and the associated housing-wealth effects are large—I am surprised at how little attention the housing market is getting in the Republican debates. Debate sponsors, presumably driven by ratings, seem more interested in Newt’s love life and Mitt’s finances than in economic policy.
The concluding comments of the Fed study are worth repeating here:
The significant tightening in household access to mortgage credit likely reflects not only a correction of the unsound underwriting practices that emerged over the past decade, but also a more substantial shift in lenders’ and the GSEs’ willingness to bear risk. Indeed, if the currently prevailing standards had been in place during the past few decades, a larger portion of the nation’s housing stock probably would have been designed and built for rental, rather than owner occupancy. Thus, the challenge for policymakers is to find ways to help reconcile the existing size and mix of the housing stock and the current environment for housing finance. Fundamentally, such measures involve adapting the existing housing stock to the prevailing tight mortgage lending conditions—for example, devising policies that could help facilitate the conversion of foreclosed properties to rental properties—or supporting a housing finance regime that is less restrictive than today’s, while steering clear of the lax standards that emerged during the last decade. Absent any policies to help bridge this gap, the adjustment process will take longer and incur more deadweight losses, pushing house prices lower and thereby prolonging the downward pressure on the wealth of current homeowners and the resultant drag on the economy at large.
Translation: If we can expedite the transition of our housing stock, we can turn this economy around faster. The study offers several policy prescriptions, including facilitating the conversion of foreclosed properties to rental properties, minimizing unnecessary foreclosures through a broad menu of loan-modification types, and supporting policies that facilitate deeds-in-lieu of foreclosure or short sales.
On page 14 (of a 26-page report), the study offers yet another approach: land banks, which are described as “public or nonprofit entities created to manage properties that are not dealt with adequately through the private market.” Before the free-market crowd gets worked up, it should recognize that a string of abandoned homes generates a negative externality in a neighborhood, which is precisely the occasion for intervention. Properties acquired by land banks may be rehabilitated as rental units or demolished, as market conditions dictate, which could counteract the deflationary forces caused by excess supply and neighborhood blight.
My only nit with the section is that the Fed limits the land-bank option to “low-value properties,” which it seems to define as properties below $20,000. This is too timid: if land banks are successful at revitalizing neighborhoods—imagine a park in every neighborhood—then why limit the policy to homes that are effectively worthless? Despite this limitation, the Fed calls for increased funding and technical assistance for existing land banks and for creating a national land-bank program.
Kudos to the Fed for taking such a bold stand! If only we could get the debate moderators to ask candidates how to solve the housing mess.
Economists recognize that the source of sustainable, private-sector jobs is investment. Due to measurement problems with investment data, however, it is sometimes easier to link a byproduct of investment—namely, adoption of the technology made possible by the investment—to job creation. This is precisely what economists Rob Shapiro and Kevin Hassett have done in their new study on the employment effects of wireless investments.
Shapiro and Hassett credit the nation’s upgrade of wireless broadband infrastructure from second-generation (2G) to third-generation (3G) technology with generating over one million jobs between 2006 and 2011. To demonstrate that adoption of 3G handsets “caused” job creation in an econometric sense, the authors studied the relationship between the change in a state’s employment and the cumulative penetration of cell phone technologies. According to their econometric model, every 10-percentage-point increase in the penetration of a new generation of cell phones in a given quarter causes a 0.05 to 0.07 percentage-point increase in employment growth over the following three quarters.
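To get a feel for the magnitude of that coefficient, consider a back-of-envelope calculation (a minimal sketch in Python; the employment base is my assumption for illustration, not a figure from the study):

```python
# Rough illustration of the Shapiro-Hassett coefficient (a sketch; the
# employment base is an assumption, not a figure from their paper).

US_EMPLOYMENT = 130_000_000  # assumed U.S. employment base, circa 2006-2011

# Reported range: a 10-percentage-point rise in 3G penetration in a quarter
# raises employment growth by 0.05 to 0.07 percentage points over the
# following three quarters.
for coef_pp in (0.05, 0.07):
    jobs = US_EMPLOYMENT * (coef_pp / 100.0)
    print(f"{coef_pp:.2f} pp of growth ~ {jobs:,.0f} jobs per 10-point rise")

# Output: roughly 65,000 (low end) to 91,000 (high end) jobs for each
# 10-point jump in 3G penetration.
```

Accumulated across the many 10-point jumps in 3G penetration between 2006 and 2011, effects of this size reach well into the hundreds of thousands of jobs.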
How reasonable are these results? In 2010, Bob Crandall and I estimated that investment in second-generation broadband infrastructure of roughly $30 billion per year, including wireless infrastructure, sustained roughly 500,000 jobs between 2006 and 2009. We further estimated that spillover effects in other industries that exploit broadband technology could sustain another 500,000 jobs, bringing the total effect close to one million jobs per year. Although Shapiro’s and Hassett’s estimates (based on wireless deployment only) significantly exceed ours (based on all broadband deployment), their estimate is not outside the realm of possibility.
Crandall, Lehr, and Litan (2007) also conducted a regression analysis, using state-level broadband penetration data from 2003 to 2005 to estimate job effects. They projected that for every one-percentage-point increase in broadband penetration in a state, employment increases by 0.2 to 0.3 percent per year. At the national level, their results imply an increase of approximately 300,000 jobs per year for each one-percentage-point increase in broadband penetration. Once again, Shapiro’s and Hassett’s estimates are consistent with this prior work.
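The national-level arithmetic is easy to sanity-check (again a sketch; the employment base below is my assumption, not a number from their paper):

```python
# Sanity check of the implied national job effect in Crandall, Lehr, and
# Litan (2007). The employment base is an assumption for illustration.

US_EMPLOYMENT = 137_000_000  # assumed mid-2000s U.S. employment

# Their estimate: each 1-percentage-point rise in broadband penetration
# raises employment by 0.2 to 0.3 percent per year.
for annual_growth in (0.002, 0.003):
    print(f"{annual_growth:.1%} of employment = "
          f"{US_EMPLOYMENT * annual_growth:,.0f} jobs per year")

# Output: roughly 274,000 to 411,000 jobs per year, a range that brackets
# the approximately 300,000 figure cited above.
```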
Scholars may differ on the precise way to measure the employment effects, but that debate misses the more important policy point—namely, that broadband technologies generally and wireless broadband in particular have become a vital engine of job creation. The observed correlation between wireless adoption and employment is not accidental: To induce customers to adopt the coolest handset, firms must continuously invest in the next generation of network and device technologies. And these costly investments sustain jobs.
Moreover, contrary to the FCC’s opinion in its 15th annual wireless competition report, private industry’s sustained and widespread investment in new wireless broadband technologies is consistent with the sector being intensely competitive. Industry critics have dismissed such evidence, arguing instead that the industry is in the death grip of monopolists. But although a monopolist may have some incentive to innovate to protect against a future threat, firms in a competitive industry have stronger incentives to invest and innovate as a way to protect against losing market share today.
Policymakers should ask themselves this question: Why would wireless carriers continually invest billions of dollars in next-generation technologies if they could sit back and exploit their alleged monopoly rents? Experience and common sense tell us that, in fact, companies in this space are not behaving like monopolists. Rather, wireless providers of all stripes are desperately trying to distinguish themselves from their rivals. Wireless tablets and phones are driving demand for more and faster wireless broadband, while spectrum-devouring apps like Siri have captured the imagination of millions. The wireless arms race is on, and the U.S. economy stands to benefit directly as wireless companies try to outmaneuver one another with the fastest networks, coolest devices, and deepest array of killer apps.
Regulated firms and their Washington lawyers study agency reports and public statements carefully to figure out the rules of the road; the clearer the rules, the easier it is for regulated firms to understand how the rules affect their businesses and to plan accordingly. So long as the regulator and the regulated firm are on the same page, resources will be put to the most valuable use allowed under the regulations.
When a regulator’s signals get blurry, resources may be squandered. For starters, take the FCC’s annual wireless competition report and the Commission’s pronouncements on spectrum policy. For several years, the competition report cited a trend of falling prices and increasing entry as evidence of robust competition while at the same time noting that industry concentration was slowly rising.
In an abrupt turnaround, the FCC’s 2010 competition report cited the slow but steady increase in concentration as evidence of a lack of competition, despite the continued decline in prices and increase in new-firm entry. In other words, in the face of the same industry trends, the agency’s conclusion on competition reversed. The increased weight placed on concentration also seemed at odds with the DOJ’s and FTC’s revised Horizontal Merger Guidelines, which deemphasized concentration in favor of direct evidence of market power.
At last week’s Consumer Electronics Show, the FCC chairman suggested that the competition report’s objective was not to provide guidance on Commission policy but instead “to lay out data around the degrees of competition in the different sectors.” So much for clearing up the ambiguity. Industry participants expect more than a Wikipedia entry on something as weighty as an annual report to Congress regarding one of the economy’s most critical sectors.
The agency’s signals on spectrum policy are even murkier. On the one hand, the current FCC has spent the last few years calling for more frequencies to be made available to support and grow wireless broadband networks. The FCC has also publicly supported voluntary incentive auctions—a market-based tool to compensate existing spectrum licensees for returning their licenses—as the best way to reallocate unused broadcast spectrum to wireless broadband. On the other hand, in a confusing set of remarks at the same tradeshow, the FCC now seems to be saying that it wants to see more spectrum made available only if the agency can dictate who gets the spectrum and how they can use it. The very discretion that the FCC now seeks will invite rent-seeking among auction contestants, who will lobby the agency to slant the rules in ways that limit competition and advance their narrow interests; better to immunize the FCC from this lobbying barrage by limiting its discretion.
The agency’s inconsistent and confusing analysis and statements in these two critical policy arenas—wireless competition and spectrum policy—created the perfect storm last year when AT&T sought to acquire T-Mobile. AT&T argued that it wanted to purchase T-Mobile and use its spectrum to augment existing spectrum and infrastructure resources, consistent with the agency’s acknowledgement that wireless carriers needed more spectrum to support surging demand for bandwidth-intensive wireless services such as streaming video. Had AT&T understood the FCC’s intentions, it would not have offered a four-billion-dollar breakup fee to T-Mobile’s parent; these resources could have been put to better use.
The singular objective that should drive the Commission in all matters wireless is getting spectrum into the hands of the firms that value it the most. The last 20 years of wireless-industry growth have proven that those who value spectrum the most put it to use most quickly. To commit to this course of action, the agency needs to signal its regulatory intentions more clearly and consistently. If the agency wants to spur competition, it should support Congressional efforts to authorize incentive auctions without restrictions. It also needs to let the evidence of lower prices, growing adoption, and increasing innovation inform its understanding of the state of competition.
Can Profit-Maximizing Enterprises Systematically Leave Money on the Table? The Curious Case of the BCS
For years the public has been clamoring for a playoff system to crown a champion in college football. Yet the geniuses at the BCS stubbornly defended—at least until now—their computer-knows-best system for inviting the two most worthy teams. By injecting doubt about the legitimacy of its invitees, the current system diminishes the meaning of the BCS title game, as evidenced by the abysmal Nielsen ratings for Monday night’s Alabama-LSU game (only 13.8 percent of U.S. television households tuned in to watch the television equivalent of paint drying) and last year’s Auburn-Oregon title game (15.3 percent). By comparison, the title game between Alabama and Texas just two years ago drew 17.2 percent of U.S. households; if the BCS were a publicly traded firm, its shares would be falling fast.
Even worse, the current system diminishes the importance of the other BCS games. Besides alumni, who wants to watch an exhibition game between Oregon and Wisconsin (this year’s Rose Bowl) if the winner cannot advance to the next round? This year’s Rose Bowl drew a meager 9.9 percent of U.S. television households, down about 12 percent from last year’s Rose Bowl between TCU and Wisconsin, which drew 11.3 percent—itself down 15 percent from the prior year. Can anyone spot a pattern?
In contrast, the first round of the NFL playoffs this year drew massive audiences. For example, NBC’s coverage of the Saints-Lions earned a 19.3 overnight rating, the third-best overnight for a Wild Card Saturday game since the 1999 playoff season. Along with 42.4 million of my closest friends, I found myself compelled to watch the Broncos-Steelers Wild Card game (25.9 rating), not because I care about either team, but because the investment of my time would pay off in even greater happiness next week.
It is a tragedy that the BCS would run these valuable assets into the ground. Imagine the excitement of a Cinderella team like Baylor, Boise State, or TCU sneaking into the championship. Organized as a playoff, the Rose Bowl (or any BCS non-title game) would experience a significant lift in ratings, along the lines of the lift enjoyed by NFL post-season games relative to NFL regular-season games. To be fair, the profit function of the BCS conferences is presumably much more complicated than “maximize the value of the television revenues for the BCS games.” But these television revenues must be a critical component of their joint profits. Which raises the question: Why would the BCS systematically err when so much money is at stake?
In yesterday’s Washington Post, Health and Human Services Secretary Kathleen Sebelius makes an impassioned plea for skeptics to reconsider the Affordable Care Act. Secretary Sebelius argues that the Act will bring down health care costs by, among other things, assisting those who cannot afford health insurance coverage. Although expanding health insurance coverage is a worthy goal, bringing more folks into the health care system could result in higher prices for health care services. The housing market provides a nice example: although subsidized mortgage rates allowed more people to own homes, more buyers eventually meant higher home prices.
Secretary Sebelius reminds us of the raft of new regulations designed to constrain the worst impulses of insurance providers, including requiring providers to justify premium increases above 10 percent in an online forum; to spend at least 80 percent of premium dollars on health care as opposed to salaries or advertising; to accept applicants with preexisting conditions; and to charge zero copays for so-called preventative services. This level of micro-management seems excessive, even by regulated-industry standards.
Given the raging debate over the constitutionality of the Act’s requirement that everyone buy health insurance, the other provisions of the Act have received relatively little attention. To an economist who believes in the efficacy of prices to allocate scarce resources in an economy, the zero-copay rule is perhaps the most offensive provision of the Act. Even for preventative services, a positive copay ensures that users do not abuse their privileges. Any doubters who live or work in major cities should look out the window during rush hour to see what happens when an activity (using a road) is priced at zero. It is not clear that the increase in demand for preventative services will be offset by the promised decrease in demand for treatment of chronic ailments. Moreover, insurers are likely to react to a zero-copay rule by raising deductibles; the two terms are highly interrelated. Finally, there is no limit to what constitutes preventative medicine; some men do get breast cancer, but not enough to justify free mammograms for all men.
This is not the first time the Administration has imposed a zero-price rule. The chairman of the Federal Communications Commission, who was carefully screened by President Obama on the issue of net neutrality, adopted the Open Internet Order, which banned an Internet service provider (such as AT&T) from charging a price to an Internet content provider (like Sony) in exchange for speedier delivery. Under the Commission’s rationale, if some websites could not afford the surcharge for higher quality of service, then no one should.
It seems that prices for “critical” services such as preventative medicine and Internet access are evil because they exclude certain segments of the economy. To be fair, under certain conditions, such as information asymmetries, externalities, and adverse selection (common in health insurance markets), market-based prices may result in too little or too much consumption relative to the socially optimal level. But the attacks on the price mechanism by these two pieces of regulation do not seem to be grounded in those traditional market-failure arguments. Without a limiting principle, one could oppose prices for just about any good or service, as there will always be someone who cannot afford it. Better to leave prices in place (and subsidize those who cannot afford the “critical” service) than to ban pricing altogether. In contrast to a zero-price rule, the cost of the subsidy is transparent to taxpayers.
Yesterday, AT&T announced it was halting its plan to acquire T-Mobile. Presumably AT&T did not think it could prevail in defending the merger in two places simultaneously—one before a federal district court judge (to defend against the DOJ’s case) and another before an administrative law judge (to defend against the FCC’s case). Staff at both agencies appeared intransigent in their opposition. AT&T’s option of defending the cases sequentially, first against the DOJ and then against the FCC, was removed by the DOJ’s threat to withdraw its complaint unless AT&T re-submitted its merger application to the FCC. The FCC rarely makes a major license-transfer decision without a green light from the DOJ on antitrust issues; instead, it typically piles on conditions that transfer value created by the merger to complaining parties after the DOJ has approved a deal. Prevailing first against the DOJ would therefore have rendered the FCC’s opposition moot.
The FCC’s case against the merger was weak. I have already blogged about the FCC’s Staff Report, but one point is worth revisiting as we digest the fate of T-Mobile’s spectrum: the FCC placed a huge bet on the cable companies’ breathing life into a floundering firm. In particular, the Staff Report cited a prospective wholesale arrangement between Cablevision and T-Mobile as evidence that some alternative suitor—whose name did not rhyme with “Amy and tea” or “her eyes on”—could preserve the number of actual competitors in the marketplace. Within days of the FCC’s placing its bet on the cable industry, however, Verizon announced its intention to gobble up the spectrum of Comcast, Time Warner, and Bright House. Over the weekend, Verizon disclosed its purchase of spectrum from Cox. To be fair, Verizon’s acquisitions do not preclude T-Mobile and Cablevision from entering into some spectrum-sharing arrangement, but let’s not hold our breath.
This episode highlights the danger of regulators’ industrial engineering: The wireless marketplace is so dynamic that a seemingly reasonable bet by an agency was revealed to be a stunning loser in just a matter of days. By virtue of AT&T’s “winning the auction” for T-Mobile’s assets—Deutsche Telekom, T-Mobile’s parent, is leaving the American wireless industry one way or another—the marketplace selected the most efficient suitor for T-Mobile. If the cable companies or some other suitor were interested in entering the wireless industry, then presumably they would have stepped forward when T-Mobile was still on the open market.
Can you blame the cable companies for their lack of interest in wireless? Who wants to enter an industry with declining prices that requires billions in network investment that cannot be re-deployed elsewhere in the event of a loss? When asked what Deutsche Telekom plans to do with its U.S. assets now that the AT&T deal has unraveled, a company spokesman said: “There’s no Plan B. We’re back at the starting point.” Such gloom is hard to reconcile with the FCC’s belief that a viable suitor is lurking in the background.
Short of Google’s or DISH Network’s or some non-communications giant’s swooping down in the coming days, the net costs of the FCC’s risky intervention will begin to mount. The ostensible benefits of intervention were to prevent a price increase and to preserve the cable companies’ play on T-Mobile’s spectrum. The second benefit has evaporated, and the first was never proven in the FCC’s Staff Report. On the cost side of the ledger, AT&T’s customers will soon experience increased congestion as their demand for wireless video and other bandwidth-intensive applications outstrips the capacity of AT&T’s network. And T-Mobile’s customers will never get to experience 4G in all its glory. (Deutsche Telekom has little incentive to upgrade a network it plans to sell.) The FCC has certainly frozen AT&T’s spectrum holdings in place, but has the agency advanced the public interest?