Archive for October, 2011

World Series Meltdown, Moneyball, and Ron Washington

Watching my baseball team get eliminated from the World Series was hard to swallow. Watching it happen two years in a row was worse. And watching a manager make critical mistakes when a championship was on the line in Game 6 was beyond the pale. As endearing as Ron Washington is to watch during a rally—his running in place stirs the heart—and as much as the players seem to love playing for him, it’s time to pass the baton.

(Note to TOTM readers: While the connection between a law-and-economics blog and a plea to Rangers’ management is admittedly tenuous, you have to let me vent. I am taking liberty with the “more” in “Academic commentary on law, economics, business, and more.” The alternative is doing bad things to my body or to a Cardinals fan’s body. Just remember that Moneyball has more to do with economics and arbitrage than with baseball. I’ll connect this back to Moneyball in a moment.)

Sports writers have picked up on the questionable “gut” calls by Washington during the Series, including (1) why Esteban German was added to a Game 1 lineup that was already bottom-heavy, (2) why Neftali Feliz was removed in the 10th inning of Game 6, with a two-run lead and the bottom two hitters of the Cardinals lineup due up, (3) why Derek Holland did not start Game 7, and (4) why Nelson Cruz, nursing a bad groin, was still in right field in the ninth inning of Game 6 with a two-run lead while a better defensive player, Endy Chavez, sat on the bench. I want to focus on the fourth and, in my mind, most critical error.

Even if Cruz had been at 100 percent health, the decision to replace him in the ninth inning with a defensive specialist was straightforward. Because Cruz’s bad defense in Games 1 (he slid feet-first into a catchable ball) and 6 (after failing to get into “no-doubles-defense” position, he misplayed a ball against the wall) was predictable from his regular-season performance, Washington should get the blame. Given Cruz’s groin injury, the decision to replace him at 50 percent health in the ninth inning was a no-brainer.

Thanks to Moneyball, we know that defensive statistics (beyond the simple recording of errors) are kept for each player. Fangraphs has done a good job of recreating the somewhat outdated “ultimate zone rating” (UZR) of players.

UZR assigns a run value to defense, quantifying how many runs a player saved or gave up through their fielding prowess relative to the average fielder. For an outfielder, there are three components to UZR:

Outfield Arm Runs (ARM) – the number of runs above average a fielder saves with their arm by preventing runners from advancing.

Range Runs (RngR) – the number of runs above average a fielder saves by getting to balls outside the average fielder’s range.

Error Runs (ErrR) – the number of runs above average a fielder surrenders by committing errors.

And here are the relevant stats on Cruz in right field in 2011:

ARM = -3.3, RngR = -1.9, ErrR = -1.3, UZR = -6.5, UZR per 150 games = -9.3

These data suggest that Cruz is a defensive liability in right field relative to the average player at that position. In right field, Cruz cost the Rangers 6.5 runs relative to the average fielder during the regular season in 2011. Had he played 150 games at that position (he was injured and he played a bit in left field), he would have cost the Rangers 9.3 runs relative to the average fielder.
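The arithmetic behind these figures is simple addition and scaling, which a minimal Python sketch makes explicit. (The 105-game figure below is my back-of-the-envelope inference from the -6.5 and -9.3 numbers above, not a stat reported in the post.)

```python
# UZR is the sum of its run-value components; the per-150 figure
# scales a partial season to a 150-game pace.

def uzr_total(arm: float, rng: float, err: float) -> float:
    """Total runs saved (+) or cost (-) relative to the average fielder."""
    return arm + rng + err

def uzr_per_150(uzr: float, games_played: float) -> float:
    """Scale a partial-season UZR to a 150-game pace."""
    return uzr * 150.0 / games_played

cruz = uzr_total(arm=-3.3, rng=-1.9, err=-1.3)
print(round(cruz, 1))                    # -6.5, matching the line above
print(round(uzr_per_150(cruz, 105), 1))  # -9.3 (105 games is an inference)
```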

Bottom line: Those miscues by Cruz in the World Series were predictable. Because the 10th-percentile UZR is -9.5 (the mean of this metric is, by construction, zero), Cruz (at -9.3) is actually among the worst in the league at his position—that is, roughly 90 percent of all right fielders are better than Cruz defensively. (As it turns out, Cruz is an above-average defender in left field according to UZR, and David Murphy is above-average in right field, which raises the question: Why were they playing the wrong positions the entire season?)

Does Washington follow these statistics or is he just going by his gut? If the latter, then he should be moved to player development and replaced with a more cerebral manager. Ironically, Washington, played by Brent Jennings in Moneyball, was the Oakland A’s coach who traveled with Billy Beane in search of talent that could only be seen through the lens of statistical analysis. Too bad none of it rubbed off on him.


Debunking the New York Times Editorial on Wireless Competition

Yesterday, the editorial page of the New York Times opined that wireless consumers needed “more protection” than that afforded by voluntary agreements by the carriers and existing regulation. The essay pointed to the “troublesome pricing practices that have flourished” in the industry, including Verizon’s alleged billing errors, as the basis for stepped-up enforcement. As evidence of a lack of wireless competition, the editorial cites several indicia, none of which is persuasive.

Let’s address each indicator in turn. (For readers interested in a more rigorous analysis of wireless competition, see the recent paper I co-authored with Gerry Faulhaber and Bob Hahn, forthcoming in the Federal Communications Law Journal.)

NYT: “Most customers, locked into their contracts by high early-termination penalties, have no easy way to switch providers.”

Although many wireless contracts contain early-termination fees (ETFs), the fees generally decline as the contract nears expiration. For example, AT&T’s fee begins at $325 and declines by $10 each month; one year into the contract, the fee is down to the price of a fancy dinner date and a movie (not including the popcorn). The editorial fails to recognize that ETFs are a substitute for handset subsidies by the carriers: Limit ETFs and those subsidies will decline. Moreover, customers who are highly averse to ETFs can either pay the full freight of the handset under a post-paid plan or enter into a pre-paid plan (and again pay full freight). I am happy to make a commitment—the prospect of an ETF makes my commitment credible—in exchange for a big discount on the latest iPhone.
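The declining schedule is easy to make concrete. A quick sketch, assuming the simple linear decline described above (the $325 starting point and $10 monthly credit are from the post; the floor at zero is my assumption):

```python
# Remaining ETF under a linear decline: $325 up front, minus $10 for
# each month of service, floored at zero (the floor is an assumption).

def remaining_etf(months_served: int, initial: float = 325.0,
                  monthly_credit: float = 10.0) -> float:
    return max(initial - monthly_credit * months_served, 0.0)

print(remaining_etf(12))  # 205.0 -- dinner-and-a-movie territory after a year
```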

NYT: “And two companies — Verizon and AT&T — now control 60 percent of the market nationwide.”

According to the FCC’s 14th Annual report (Chart 1), the combined shares of AT&T and Verizon are 55 percent, not 60 percent. Setting aside that rounding error, barring evidence of coordinated pricing by AT&T and Verizon, it makes no sense to add together their respective shares when assessing market power. To the extent that market shares have any meaning, Verizon’s market share alone (and not the sum of Verizon’s and AT&T’s shares) informs Verizon’s market power. The same is true for AT&T. But as demonstrated by the Faulhaber paper, market shares have no power to predict wireless prices (and hence market power), whether across time or across geography at a given point in time.

NYT: “Take cellphone text messaging. Companies typically charge from $5 for 250 texts a month (2 cents per message) to $20 for an unlimited package. Pay-as-you-go rates can be as high as 20 cents a message. But the cost of sending a text message is about a third of a penny, according to Congressional testimony from Srinivasan Keshav, a professor at the University of Waterloo. The markup is enormous.”

Like wireless voice services, the price of text messaging is falling. According to Nielsen (2011), the effective price per text message declined from six cents to a penny from 2005 to 2010. If one cent per message is not the right price, what is? Of course, the margins on any service in a network industry with steep fixed costs will be large. But they need to be large to induce the operator to take the risk of building the network in the first instance. Moreover, savvy wireless users with a data plan can download applications that replicate the functionality of text messaging for free. A search for “SMS text messaging for free” from Apple’s App Store yields myriad hits, beginning with textPLUS, Textfree, Fake-a-Message, Text-for-Free, Text Me!, Messagey, Fake Text Free, iText, and Texter. So long as carriers allow users to install these apps, we should expect text messaging prices to decline further.

NYT: “Even the most expensive monthly wireless data plans, costing about $15 for 250 megabytes or 6 cents per megabyte, are orders of magnitude cheaper than cellphone text pricing.”

For the reasons explained above, the days of surcharges for texting appear to be numbered. As these free-texting and other applications such as Viber (free Internet-based calls) increasingly cannibalize the carriers’ basic offerings, difficult policy issues emerge, like whether the carriers should be allowed to decide which applications can be used on their systems. Limiting a carrier’s ability to exclude such applications (or even charge for them) might put upward pressure on the price of the data plan.

NYT: “It is hugely profitable for companies to segregate voice, data and text into different plans and to force customers to buy a different plan for each device, like a phone or a tablet. But, on today’s networks, segregating services makes little sense technologically. This expensive segregation would be more difficult to maintain if the market were truly competitive and consumers could easily switch from one company to another that offers a better deal.”

Setting aside the inconvenient fact that voice, data and text are different services from the perspective of customers, why not look at switching data directly to see whether customers are trapped? Churn refers to the percentage of wireless customers who depart for other carriers. Assuming that most departing customers find a new wireless home, churn data are a reasonable proxy for switching data. In the first quarter of 2011, U.S. carriers reported monthly churn rates between 1.3 percent (for post-paid customers) and 4.3 percent (for pre-paid customers). Ignoring customer additions, a carrier that begins the year with 10 million customers and experiences 2 percent monthly churn ends up losing 2.15 million of those original customers (roughly 21 percent) over the year. By this measure, wireless customers are hardly locked into place.
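The arithmetic behind that 21 percent figure is just compounded retention, sketched here with the 2 percent rate and 10-million-customer starting point from the paragraph above:

```python
# With monthly churn c, the fraction of original customers remaining
# after n months is (1 - c)**n; the rest have switched carriers.

def original_customers_left(initial: float, monthly_churn: float,
                            months: int) -> float:
    return initial * (1.0 - monthly_churn) ** months

start = 10_000_000
left = original_customers_left(start, 0.02, 12)
lost_share = (start - left) / start
print(f"{lost_share:.1%}")  # 21.5% of the original base gone in a year
```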

Finally, in addition to asking the FCC to curtail ETFs on phone contracts, the editorial requests that additional spectrum be made “available to more competitors.” I interpret “more” to mean “carriers that do not rhyme with KT&T or with Spurizon.” If only there were no such thing as spectrum exhaust, the social costs of excluding AT&T and Verizon from the next auction would be trivial. But alas, as acknowledged by the FCC’s National Broadband Plan and demonstrated in a new report by Peter Rysavy, existing carriers need access to at least 500 MHz of additional spectrum to support the precipitous increase in demand for bandwidth-intensive wireless applications. Yet another new study, by the Global Information Industry Center at the University of California, San Diego, shows wireless demand outstripping capacity in the near future. When licensing any future spectrum, the FCC must carefully balance the need to ensure quality of service for existing subscribers against the speculative benefit of adding a sixth or seventh local carrier—90 percent of the population is served by at least five carriers according to the FCC’s 15th Wireless Report—to an already competitive wireless environment.


The Bulldozer Solution to the Housing Crisis

My inaugural blog on two-sided markets did not elicit much reaction from TOTM readers. Perhaps it was too boring. In a desperate attempt to generate a hostile comment from at least one housing advocate, I have decided to advocate bulldozing homes in foreclosure as one (of several) means to relieve the housing crisis. Not with families inside them, of course. In my mind, the central problem of U.S. housing markets is the misallocation of land: Thanks to the housing boom, there are too many houses and not enough greenery. And bulldozers are the fastest way to convert unwanted homes into parks.

(Before the housing advocates lose their cool, an important disclaimer: Every possible effort should be made to keep a family in their homes, including taxpayer-financed principal modifications for deserving, underwater borrowers. My proposal applies only to vacated homes that have completed the foreclosure process.)

Until the Washington Post ran an article last week, titled “Banks turn to demolition of foreclosed properties to ease housing-market pressure,” I was reluctant to admit my position in public. I had whispered my idea into the ears of several finance professors, but none was willing to stand behind it. And for good reason: How can one advocate bulldozing a home when so many families are losing their homes?

According to the Post, some of the nation’s largest banks have begun giving away abandoned properties to the state and even footing the $7,500 bill per demolition. In 2009, Ohio passed a law creating “land banks” with the power and money to acquire unwanted properties and put them to better use, like community gardens. Similar laws were passed in Georgia, Maryland, and New York. Wells Fargo donated 300 properties nationwide last year, and Fannie Mae donated 30 properties per month to the Cuyahoga (Ohio) land bank. The story even identified a “land bank expert” at Emory University. Now that the Post has given me cover of plausibility, let’s discuss the costs and benefits.

One of the first lessons in an undergraduate microeconomics class is that bulldozing homes to create construction jobs is a bad idea. Even after those new construction workers rebuild the bulldozed homes, society has the same number of homes as before but lacks whatever output those workers could have produced in the alternative. The objective of economic policy is not to maximize jobs—if that were the case, entire cities would be bulldozed and reconstructed—but rather to allocate resources efficiently. Because so many economists have this lesson in mind (and because so many are pacifists), it is hard to embrace any policy that involves a bulldozer.

But this bulldozer scheme is motivated by different reasons. Too much land has been allocated to homes, many of which were built during the bubble in the first half of the last decade. As a result, too many neighborhoods in America are afflicted with abandoned properties. A vacant house is estimated to be worth half its normal market value. Imagine trying to sell your house at market rates when a close facsimile is available across the street for half the price! To add insult to injury, the excess supply of abandoned houses invites vandalism and neighborhood blight—the textbook negative externality—further depressing home values. Using data from foreclosures in the Cleveland area, Kobie and Lee (2010) show that the length of time a home spends in foreclosure exerts a significant drag on neighboring home values.

Well-functioning markets tend to equilibrate supply and demand, but housing markets are highly inefficient in this regard because of the time lag between beginning construction and selling a home: A housing boom sends signals to builders that new construction will be profitable. By the time the housing bust comes, the new builds become permanent mistakes.

To illustrate this “market failure,” consider downtown Miami. A drive down Brickell Avenue reminds one of New York City. Whereas there used to be one row of high-rises on the bay-side, the avenue now boasts rows and rows of developments as far as the eye can see. Had the developers known that many of these complexes would stand empty—the Census Bureau estimates that a whopping 18 percent of Florida’s homes stood vacant in March 2011—they would have tempered their enthusiasm. According to the Florida Association of Realtors, the inventory overhang has sent home prices plunging: the median price for homes sold in January 2011 was seven percent less than January 2010, and prices are expected to fall by another five percent in 2011.

And why is this so troubling for the economic recovery? According to the Fed, the nation’s stock of household real estate declined by $6.5 trillion since 2006. A family spends its income based in part on its perceived wealth; when housing values decline, families spend less. Economists call this the “housing-wealth effect.” Case, Quigley and Shiller (2006) found a statistically significant and rather large effect of housing wealth upon household consumption, and weak evidence of a stock market wealth effect.

A robust stock market might offset this decline in wealth (and hence spending), but the Dow hasn’t cracked 13,000 since April 2008. In the meantime, families are hoarding their cash. The $6.5 trillion elimination in household wealth puts the President’s $300 billion jobs-stimulus program in perspective: If the housing-wealth effect is dragging down spending, then a one-time injection of $300 billion won’t have much of an impact. In contrast, a 10 percent increase in housing wealth—housing values are off 30 percent since 2006—would increase consumption between 0.4 and 1.4 percent according to Case, Quigley and Shiller.
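To see what those numbers imply, one can read the Case, Quigley and Shiller range as an elasticity of consumption with respect to housing wealth of roughly 0.04 to 0.14. A back-of-the-envelope sketch (the elasticity bounds are my inference from the 0.4-to-1.4-percent range quoted above, not figures stated in the paper):

```python
# Percent change in consumption implied by a percent change in housing
# wealth, for a given elasticity (bounds inferred from the
# Case-Quigley-Shiller range cited in the text).

def consumption_change_pct(housing_wealth_change_pct: float,
                           elasticity: float) -> float:
    return housing_wealth_change_pct * elasticity

for e in (0.04, 0.14):
    print(round(consumption_change_pct(10.0, e), 2))  # 0.4 and 1.4 percent
```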

When applied to vacated homes that have completed the foreclosure process, the bulldozer scheme would eliminate some of the excess supply of housing, which would temper the downward pressure on home values. In the place of a cluster of abandoned homes sucking the life out of a neighborhood, imagine a children’s park, a dog park, or a community garden. Now that the banks have figured out that bulldozing can be cheaper than maintaining the properties, paying taxes, and marketing the properties, the only thing stopping this idea from gaining traction is public sentiment.

My lunch crowd, composed of economists, retorts that the elimination of excess housing supply via bulldozers might be a boon to existing homeowners but would punish future homeowners. But wouldn’t a future homeowner prefer to invest in a slightly more expensive asset class with expected growth over a less expensive asset class with negative expected growth for the foreseeable future?

Finally, the bulldozing scheme need not be mutually exclusive with other schemes to relieve the housing crisis. Other ideas are worth trying, even if they wouldn’t spur much economic activity. Some are calling on Congress to eliminate the barriers keeping underwater homeowners from refinancing their mortgages. According to Macroeconomic Advisers, such a plan might boost GDP growth by only 0.1 to 0.2 percentage points, as it merely redistributes money from lenders to borrowers. Others have called for massive debt forgiveness, achieved via a federal program to purchase troubled mortgages and give homeowners better rates. As Ezra Klein of the Post points out, however, the politics of using taxpayer dollars to pay off mortgages are impossible to crack. To stabilize the housing market, Larry Summers calls on the government-sponsored enterprises to finance mass sales of foreclosed properties to those prepared to rent them out, and to drop their posture of opposition to experimentation with programs such as principal reductions.

Whichever course we take, speed is of the essence: The housing drag is not going away on its own. According to RealtyTrac, the nation’s banks, along with Fannie Mae and Freddie Mac, have an inventory of more than 816,000 foreclosed properties, with an additional 800,000 working their way through the foreclosure process. Insisting that each of those homes be paired with a family—a noble cause—is tantamount to pushing off recovery for several more years.

I modestly propose to remove a fraction of these homes from inventory. If you don’t like the ring of a bulldozer scheme, how about “The Neighborhood Parks” scheme? Even if I can’t convince any economists to get on board, environmentalists should be pleased.


The Fate of the FCC’s Open Internet Order–Lessons from Bank Fees

Economists have long warned against price regulation in the context of network industries, but until now our tools have been limited to complex theoretical models. Last week, the heavens sent down a natural experiment so powerful that the theoretical models are blushing: In response to a new regulation preventing banks from charging debit-card swipe fees to merchants, Bank of America announced that it would charge its customers $5 a month for debit card purchases. And Chase and Wells Fargo are testing $3 monthly debit-card fees in certain markets. In case you haven’t been following the action, the basic details are here. What in the world does this development have to do with an “open” Internet? A lot, actually.

The D.C. Court of Appeals has been asked to consider several legal challenges to the FCC’s Open Internet Order. Passed in December 2010, the Open Internet Order was the regulatory culmination of an intense lobbying campaign by certain websites and so-called consumer groups to regulate the fees that Internet access providers such as Comcast or Verizon may charge to websites.

Although the challenges to the Open Internet Order largely concern the FCC’s authority to regulate Internet access providers and the proper scope of the regulations—for example, whether they should apply to wireline networks only or to all broadband networks including wireless—here’s to hoping that the rules are also judged according to the FCC’s public-interest standard. Along that dimension, the FCC’s experiment in price regulation clearly fails.

Just as Internet access providers bring together websites and users, banks provide a two-sided platform, bringing together merchants and customers in millions of cashless transactions. Because banking networks cost money to create, banks can’t be expected to provide their services for free. If you tell a bank that it can’t charge one side of a two-sided market—particularly when that one side (the merchant side) is less price sensitive than the other (the customer side)—then expect customer fees to rise. It’s not because banks are evil; it is because the profit-maximizing price charged to customers by a bank depends on the price charged to merchants.

Ignoring this economic lesson of two-sided markets, the Durbin Amendment to the Wall Street Reform and Consumer Protection Act instructed the Federal Reserve Board to cap swipe fees charged by banks to merchants. Prodded by consumer advocates to eliminate the fees entirely, the Fed cut the fees in half, to about 24 cents per transaction from an average of 44 cents per transaction. Paradoxically, the smaller the merchant fee, the larger is the debit fee—this is the “seesaw principle” of two-sided markets in action. Say hello to $5 monthly debit fees.
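The seesaw principle can be illustrated with a toy model rather than anything estimated from banking data: suppose transactions fall with the consumer fee, T = a − b·p_c, and the bank earns p_m + p_c − c per transaction. Maximizing profit over p_c gives p_c = (a/b + c − p_m)/2, so every dollar shaved off the merchant fee raises the optimal consumer fee by fifty cents. A sketch (all parameter values are illustrative; only the 44-cent and 24-cent merchant fees come from the post):

```python
# Toy two-sided pricing model: profit = (p_m + p_c - c) * (a - b*p_c).
# Setting the derivative with respect to p_c to zero yields the
# profit-maximizing consumer fee below. Parameters are illustrative.

def optimal_consumer_fee(p_m: float, a: float = 100.0, b: float = 2.0,
                         c: float = 0.10) -> float:
    return (a / b + c - p_m) / 2.0

before_cap = optimal_consumer_fee(p_m=0.44)  # pre-Durbin average swipe fee
after_cap = optimal_consumer_fee(p_m=0.24)   # post-Durbin capped fee
print(after_cap > before_cap)                # True: the seesaw in action
print(round(after_cap - before_cap, 2))      # 0.1 -- half the 20-cent cut
```

The point of the toy model is not the specific numbers but the sign: whatever revenue the regulator strips from the merchant side, the platform recovers part of it from the consumer side.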

In a classic case of one regulation spawning another, there is now talk of regulating the banks’ debit-card charges. In response to the new debit fees, some members of Congress asked the Justice Department to investigate the major banks, suggesting that the higher fees resulted from a pricing conspiracy and not from Congress’s own bone-headed price regulation.

Months before the new debit fees came into effect, Bob Litan of the Brookings Institution predicted in a paper that “consumers and small business would face higher retail banking fees and lose valuable services as banks rationally seek to make up as much as they can for the debit interchange revenues they will lose under the [Federal Reserve] Board’s proposal.” As noted by Todd Zywicki in the Wall Street Journal, Litan’s prediction proved prescient.

Although both the Durbin Amendment and the FCC’s Open Internet Order are price regulations, there are important differences. Unlike the Fed’s rulemaking on swipe fees, the Open Internet Order was not directed by Congress. This shortcoming alone might prove fatal in the eyes of the Appeals Court. And unlike the Fed’s rulemaking, the FCC’s rulemaking regulates the merchant fee out of existence. Regulating prices below market levels (as the Fed did) is one thing—regulating them to zero (as the FCC proposes) is beyond the pale.

Under the Open Internet Order, Internet access providers are banned from charging websites a surcharge for priority delivery. Indeed, the mere offering of such a fee to one website would be “discriminatory” and thus presumptively anticompetitive, even if the same offer were extended to other websites. Self-described public interest groups advocating for the Open Internet Order believe that if the smallest website in America can’t afford a surcharge for priority delivery, then no one should be allowed to buy it.

Assuming the FCC’s Order withstands legal scrutiny, the rules will clearly retard innovation among application developers: Why develop the next, killer real-time application if you can’t contract for priority delivery?

And if the Durbin Amendment is any guide, the effect of the Open Internet Order will be higher Internet access prices for consumers.

The same Bob Litan who accurately predicted price hikes in banking caused by price regulation made a similar prediction for broadband networks: “Even according to a theoretical model championed by net neutrality proponents, end users are unequivocally worse off under net neutrality regulation, as the end-user price of broadband access is always higher when ISPs are barred from raising revenues from content providers.” Will his sage advice be ignored by regulators twice in the same year?

The Appeals Court should force the FCC to defend the notion that the agency’s Open Internet Order is consistent with the public interest: If higher access prices and less innovation among application developers are the unintended consequences of an “open” Internet, then the FCC will fail on this score. With luck, the Open Internet Order will be seen as the ugly cousin of the Durbin Amendment, and the FCC’s experiment in price regulation will be curtailed.
