In Defense of Some (But Not All) Behavioral Remedies: A Response to Feld’s Case for Structural Relief
I promised a rejoinder to Harold’s latest blog bashing behavioral remedies in the media space. My Forbes piece explained the success of several discrimination complaints against Comcast, but I want to spill a little more ink on this passage from Harold below:
As I noted back above, Comcast/NBCU is usually held up as exhibit “A” in “why behavioral conditions totally suck.” None of the behavioral conditions effectively stopped Comcast from using its market power in exactly the way we were worried about (such as zero-rating its own streaming service, or refusing to sell content to online streaming services, or discriminating against independent programmers in favor of its own content and other whacky shennanigans). The experience was so awful and pathetically lame that it led directly to rejecting the Comcast/TWC deal.
First, that a complainant avails itself of the protections of a complaint system is not proof that the behavioral remedy is failing. What matters is the speed (and probability) of achieving relief for meritorious complaints. And the success of Bloomberg, an unnamed OVD, NFL Network, and MASN suggests that complainants can and do achieve relief under the FCC’s case-by-case protections.
Second, as Trump tests the limits of our constitution, Comcast is a stress test of behavioral remedies. Comcast appears hardwired to discriminate in favor of its own, or at least it has acted that way in the past. Subjecting any other distributors to a nondiscrimination standard—whether it’s AT&T or Google in a soon-to-be created Net Tribunal—would likely generate fewer complaints, as the culture of any platform (other than Comcast) would be more amenable to following the rules and respecting social norms of neutrality. So if the behavioral remedy survives the Comcast stress test, it can survive anything. Ditto for our constitution.
Third, ex post adjudication under both the Comcast-NBCU protections and section 616 of the Cable Act (program-carriage rules) has warts—namely, the appeals process—but those warts should be identified and (excuse the pun) cut off. The seemingly endless appeals process of case-by-case review at the FCC is highly politicized (with FCC commissioners always voting their party lines); postpones the delivery of justice; and benefits the well-heeled distributors who can afford to extend the fight with first-rate appellate lawyers. (Yes, I’m still licking my wounds from Tennis Channel.) Rather than trash the adjudication process, however, we should press for reforms to expedite relief and to level the playing field between independent content providers and vertically integrated distributors. For example, I have advocated for injunctive relief immediately upon a finding of discrimination by the administrative law judge (ALJ) or tribunal, as well as for skipping over the appeals to the FCC commissioners and going straight to the D.C. Circuit. You wouldn’t send a Porsche 911 to the scrap heap because it needed a new muffler, now would you?
[Note: Because the FCC is not involved in the AT&T/Time Warner review, the FCC wouldn’t be tasked with adjudication here. But that does not imply that adjudication is impossible. The ALJ and her staff could be housed anywhere.]
With respect to Harold’s thesis that “The DOJ’s Case Against AT&T Is Stronger Than You Think,” that is largely an empirical question: The case will come down to whether the content in Time Warner’s portfolio is sufficiently powerful that AT&T could lure enough new video subscribers away from rival distributors to offset the loss in content-license revenues from a foreclosure strategy. In a case cited by Harold (Time Warner/Turner), the Federal Trade Commission certainly believed these networks to be must-have as of 1996 (the year in which MSNBC and Fox News were both launched), alleging that “CNN, TNT and WTBS are viewed by cable distributors as ‘marquee’ or crown jewel services.” Query whether the same is true in the (richer) media landscape of 2017. During a standoff with Turner involving the same properties in 2014, Dish Chairman Charlie Ergen reportedly “told Wall Street analysts that he wasn’t sure that CNN was still a must-carry network.” Whether Ergen was bluffing or being sincere, those words might get special attention in DOJ’s suit. Stay tuned!
Good morning. I want to thank Marshall Steinbaum and Eric Bernstein for organizing this important event. And thanks to Sally for that kind introduction. People might not know that, as her night gig, Sally hosts a podcast called Women Killing It, which celebrates women’s accomplishments in business, politics, and the arts, and is quite inspiring. And sticking with the feminist theme, I can’t help noticing the gender composition of this panel. All panels should look this way, right? Marshall called me in a panic, saying he needed a token, bald Jewish man on a DC policy panel for diversity reasons.
I want to use my opening to discuss remedies that have been floated to combat the threat to edge innovation posed by dominant platform owners. Some, like Sally, are calling for beefed-up antitrust enforcement under the current standards. Others, like Marshall and Lina Khan, are calling for a change in the standards, to accommodate concerns not captured under the consumer-welfare framework. Still others, like retired tech columnist Walt Mossberg, are calling for new protections that would operate outside of antitrust, such as a tribunal to adjudicate disputes between edge providers and dominant platforms. I affectionately call this last one the “Mossberg Plan.”
These various remedies may be complementary, in which case no one has to prove that her plan is best. But my competitive juices compel me to make the case today for the Mossberg Plan. To do that, I offer three principles that should guide us as we tackle this difficult competition problem: (1) Policy Stability, (2) Speed, and (3) Symmetry.
Policy stability is the notion that regulatory outcomes involving a particular issue are reasonably predictable, so that the relevant actors—here, in the Internet ecosystem—can make long-term plans. Confidence in equitable outcomes is the key to spurring edge innovation by independent content and app providers. The same confidence in predictable results attracts platform owner investment that spurs innovation. Of course, there is no such thing as a perfectly predictable outcome; even antitrust cases aren’t perfectly predictable. But procedural and substantive precedent from the repeated application of well-understood standards will guide the next decision, narrowing the range of outcomes and making prediction easier.
Relative to agency review, adjudication in a court or a tribunal is shielded from political influence—another force working against policy stability. A judge with a lifetime appointment or a sufficiently long tenure is not thinking about how her decision will affect her future income path, as determined by her political constituency.
For the opposite of policy stability, check out the modern FCC, which has become highly politicized. FCC Commissioners have injected politics and thus policy instability, for example, by voting perfectly along party lines when presented with an Administrative Law Judge’s findings of discrimination by a vertically integrated cable operator. Republicans have voted to overturn a finding of discrimination in both cases—an admittedly small sample size, but troubling nonetheless.
Not to pick on Republicans: Tom Wheeler, the Democratic Chairman under President Obama, injected an unhealthy dose of policy instability in his design of a case-by-case regime in the 2015 Open Internet Order. Rather than commit to a well-understood standard such as nondiscrimination, as his predecessor Julius Genachowski did in the FCC’s 2010 Open Internet Order, Chairman Wheeler opted for something more nebulous called the “General Conduct” standard and declined to use an independent factfinder to adjudicate disputes. This gave Wheeler, and future FCC Chairs, maximum flexibility to achieve whatever regulatory result they desired, enshrining an arbitrariness that undermines investment. In January 2017, Wheeler’s Wireless Telecommunications Bureau found that AT&T’s and Verizon’s zero-rating plans violated the nebulous General Conduct standard—only to be reversed by Chairman Pai shortly after he took office. This is the opposite of policy stability.
In contrast, an independent tribunal tasked with enforcing a well-understood standard such as nondiscrimination, whose decisions have binding influence on future cases and could not be reversed by agency heads, would achieve policy stability.
Let’s move to the second principle for identifying a good remedy here—speed. Speed in this context is the notion that a complainant that prevails on the merits enjoys injunctive and potentially monetary relief in a timely fashion. Proponents of the use of antitrust as a means to police discriminatory conduct on the Internet, such as Josh Wright, are silent when it comes to speed. Based on my 20-year career working on antitrust cases, I can say with some authority that antitrust moves like molasses. It’s as if antitrust procedures were designed by lawyers to ensure job security! The breakup of AT&T occurred ten years after DOJ’s complaint in 1974, and Microsoft also took a decade to resolve. That means Netscape and others operating on the edge of Microsoft’s platform were allowed to twist in the wind for ten years. And this is how edge innovation dies.
In contrast to the five-to-ten-year ordeal of antitrust litigation, a specialized tribunal tasked only with determining whether discrimination occurred and whether the complainant was materially injured as a result should be able to adjudicate cases in one to two years.
And finally, let’s briefly touch on the third principle to guide us in policy design—symmetry. Symmetry is the notion that no set of dominant firms is immunized from the regulation. Just as it made zero sense to subject some Internet firms to opt-in standards for privacy protections and others to opt-out standards, it would make zero sense to design a regime that policed ISPs (and only ISPs) for discriminatory conduct while permitting Google/Facebook/Amazon to discriminate against independent edge providers with impunity—especially since the tech platforms are a bigger threat to edge innovation. According to two dozen interviews of top tech investors and entrepreneurs by the Post’s Elizabeth Dwoskin, the threat posed by Facebook “is having a profound impact on innovation in Silicon Valley, by creating a strong disincentive for investors and start-ups to put money and effort into creating products Facebook might copy.”
A tribunal that created a forum for edge providers to bring discrimination complaints against both tech platforms and ISPs, evaluated pursuant to the same evidentiary criteria, would satisfy the symmetry principle.
When judged along these dimensions—policy stability, speed, and symmetry—the Mossberg Plan is the best option for policymakers. It would make innovation great again.
My Comment on Wright’s “Antitrust Provides a More Reasonable Framework for Net Neutrality Regulation”
Professor Josh Wright has written a piece that responds, at least in part, to my new article explaining why antitrust cannot accommodate net neutrality violations in particular or other mild forms of discrimination on the Internet in general. In his Perspective for the Free State Foundation, Wright argues that my analysis of the limits of antitrust as applied to this particular type of discriminatory conduct “reveal[s] a profound and fundamental lack of understanding of the rule of reason framework.” Ouch. As it turns out, Wright’s understanding of antitrust, as reflected in his prior writings, is exceedingly narrow—which proves my point.
For those getting up to speed, my article makes two major points: (1) a pure innovation-based case that lacks short-run price and output effects—the types of antitrust injury that are readily cognizable and more easily proven—would be exceedingly difficult to win given prevailing evidentiary requirements under antitrust law; and (2) even if one could prevail on the merits of such a case, the practical difficulties for a private litigant to bring a pure innovation-based case, in terms of time and resources, renders antitrust ineffective in policing discrimination on the Internet.
Wright ignores the second point of my article entirely, focusing instead on the claim that antitrust does in fact recognize innovation-based harms. But the practical challenges to an antitrust approach cannot be ignored: The speed of decision-making in this arena is critical, and as demonstrated below, the Federal Trade Commission (FTC) has not shown it can move fast. Wright fails to appreciate that securing a lasting solution—dare I say legislative compromise—to the net neutrality problem requires offering a good-faith alternative to Title II-based rules that provides timely relief to edge providers. Telling an edge provider that it must endure a three-to-ten year adjudication process costing multiple millions of dollars fails that requirement.
With respect to the first point of my article, the cases Wright cites as evidence of the FTC’s willingness and ability to take on pure innovation (non-price) cases actually prove my point, since in each substantive discussion, the FTC cites higher prices as at least a partial basis for its requested relief. (And even if an agency were willing to bring such a hypothetical case, Wright offers no prescription for how a private litigant could secure relief from discrimination.) None of the cases he cites serves as a counterexample to my claim that a pure innovation-based, single-firm monopolization case would not likely prevail under current antitrust standards. Deprived of any compelling counterexamples, Wright is left to cite an amicus brief by the FTC in Mylan.
In Intel, his best counterexample, Wright fails to note that the FTC pulled the plug by settling, which means the case has no precedential value for future plaintiffs. He also fails to note that the FTC’s theory of harm in Intel involved both short-run price effects and innovation harms. A quick review of the complaint reveals the FTC’s price-based theory of harm, demonstrating the case supports my theory, contrary to Wright’s claim:
- “On the one hand, Intel threatened to and did increase prices, terminate product and technology collaborations, shut off supply, and reduce marketing support to OEMs that purchased too many products from Intel’s competitors.” (paragraph 6)
- “Intel’s use of penalties, rebates, lump-sum and other payments across multiple products, differential pricing, and other conduct alleged in this Complaint maintained or is likely to maintain Intel’s monopoly power to the detriment of competition, customers, and consumers. Intel would not have been able to continue charging comparably higher prices across its product lines but for its conduct, as alleged in this Complaint, that harmed competition.” (paragraph 55)
- “To combat this competition, Intel charged those OEMs significantly higher prices because they used a non-Intel graphics chipset or GPU.” (paragraph 89)
- “Intel’s conduct adversely affects competition and consumers by, including but not limited to: causing higher prices of CPUs and GPUs and the products containing microprocessors;” (paragraph 94)
- “Absent such relief, for OEMs and consumers of the relevant products, the consequences have been and likely will continue to be supracompetitive prices, reduced quality, and less innovation.” (paragraph 95)
Economists have demonstrated that, under certain conditions, share-based loyalty discounts can be used by monopolists to extract supracompetitive prices; when a firm enjoys monopoly power over a buyer’s initial requirements (or “noncontestable units”), the firm can offer to waive a “penalty price” on the noncontestable units in exchange for higher prices on the contestable units. Because Intel allegedly employed this precise strategy to secure higher chip prices, the case does not make for a good counterexample to my thesis regarding lax antitrust enforcement of pure innovation-based cases.
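The arithmetic behind the penalty-price mechanism is easy to sketch. Here is a stylized example with invented numbers (none drawn from the Intel record), showing how waiving a penalty on the noncontestable units lets a monopolist sustain a supracompetitive price on the contestable units:

```python
# Stylized loyalty-discount arithmetic (all numbers invented for illustration).
# A buyer needs 100 noncontestable units (only the monopolist supplies them)
# and 50 contestable units (a rival also supplies them).

NONCONTESTABLE_QTY = 100
CONTESTABLE_QTY = 50
COMPETITIVE_PRICE = 10.0   # rival's price on the contestable units
PENALTY_PRICE = 15.0       # noncontestable price if the buyer defects to the rival
DISCOUNT_PRICE = 12.0      # noncontestable price if the buyer stays loyal

# Option A: buy contestable units from the rival, pay the penalty price.
cost_defect = NONCONTESTABLE_QTY * PENALTY_PRICE + CONTESTABLE_QTY * COMPETITIVE_PRICE

# Option B: stay loyal. The buyer accepts any loyal contestable price p
# satisfying: NONCONTESTABLE_QTY * DISCOUNT_PRICE + CONTESTABLE_QTY * p <= cost_defect
p_max = (cost_defect - NONCONTESTABLE_QTY * DISCOUNT_PRICE) / CONTESTABLE_QTY

print(f"Highest loyal price on contestable units: {p_max:.2f}")       # 16.00
print(f"Markup over the competitive price: {p_max - COMPETITIVE_PRICE:.2f}")  # 6.00
```

The point of the sketch: by threatening a penalty on units the rival cannot serve, the monopolist extracts a price on the contestable units (16) well above the competitive level (10), with no need to cut any price.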
Wright cites Grifols, S.A. as an example of a “conduct case where the theory of harm was decreased innovation.” But Grifols arose in a merger context, where the theories of harm that an antitrust agency may pursue are arguably more expansive than those an agency can pursue in a single-firm monopolization case; merger review is made pursuant to Section 7 of the Clayton Act, whereas monopolization cases are pursued under Section 2 of the Sherman Act. Again, these are weak counterexamples.
Lastly, by citing Microsoft (a case that is nearly two decades old), Wright inadvertently proves my second point, on which he is otherwise silent: Antitrust generally, and the antitrust agencies specifically, are currently ill-equipped to effectively pursue a platform owner that commands sufficient market power to stifle innovation. While the Department of Justice arguably prevailed over Microsoft, it was unable to do so fast enough to save Netscape, the innovative browser company that was run over by Microsoft’s unlawful support of Internet Explorer, Netscape’s rival browser. (Chipmaker AMD similarly twisted in the wind for years while Intel was resolved.) That the FTC and DOJ have not litigated a major Section 2 case since Microsoft—certainly not one involving platform technologies—is remarkable. Until the FTC demonstrates a track record and the willingness to bring Section 2 cases, Wright’s arguments are nothing more than hollow promises.
In today’s global Internet marketplace, any delay of innovation in the United States will likely be countered by deployments of innovation elsewhere, disadvantaging U.S. companies and consumers. Attempting to address inequities in the fast-moving Internet space with the hoop-jumping required by the Administrative Procedure Act of 1946 and the Federal Trade Commission Act of 1914 is as nonsensical as trying to govern the Internet with the 1934 Communications Act, which even Wright agrees is inadequate for the Information Age.
Perhaps most surprisingly, Wright appears to abandon his prior stance that modern antitrust is ineffective in combating a monopolist’s efforts to stifle innovation. In a 2010 publication titled Google and the Limits of Antitrust—the title says it all—Wright offers an exceedingly narrow view of antitrust. Wright and his co-author Geoff Manne are generally skeptical about the scope of antitrust enforcement as applied to platform providers such as Google. They conclude (page 74) “that plaintiffs cannot or should not prevail against Google in a monopolization claim based on the two types of conduct considered here: exclusive syndication agreements and use of the quality score metric to extract greater rents.” Given “the apparent lack of any concrete evidence of anticompetitive effects or harm to competition,” they argue, “an enforcement action against Google on these grounds creates substantial risk for a false positive which would chill the innovation and competition currently providing immense benefits to consumers.” It is not clear how a plaintiff could ever prove “concrete evidence of anticompetitive effects” for an innovation-based harm that has not yet materialized.
Perhaps most revealing, Manne and Wright block quote (at page 63) the evidentiary standards from the Areeda-Hovenkamp treatise for exclusive dealing cases:
In order to succeed in its claim of unlawful exclusive dealing, a plaintiff must show the requisite agreement to deal exclusively and make a sufficient showing of power to warrant the inference that the challenged agreement threatens reduced output and higher prices in a properly defined market. Then it must also show a foreclosure coverage sufficient to warrant an inference of injury to competition, depending on the existence of other factors that give significance to a given foreclosure percentage, such as contract duration, presence or absence of high entry barriers, or the existence of alternative sources or resale.
The treatise does not say that a harm to innovation can be substituted for a showing of “reduced output or higher prices.” When it comes to exclusionary conduct, it’s all about the prices!
When the FTC appeared poised to file an antitrust complaint against Google, Manne and Wright issued a statement in June 2011 that succinctly reflected their views of the limits of antitrust: “The focus of any antitrust inquiry must always be on consumer harm—not harm to certain competitors. We are skeptical that any such harm can be proven here.” On this Wright and I agree: Under the consumer-welfare standard, discriminatory or exclusionary conduct by a platform provider that does not generate a price or output effect will largely go unchecked by antitrust law.
And that’s precisely the regulatory gap that my proposed tribunal seeks to fill. Under the consumer-welfare lens, the antitrust agencies and courts take a narrow approach to antitrust; until that changes, we cannot count on the FTC to police discrimination on the Internet.
I’m not a fan of all-inclusive resorts. Having just returned from one in Punta Cana (DR), a destination that appears to offer over 100 all-inclusives, I have the indignities fresh in my mind. Allow me to share my horror and impart a bit of economics.
The basic problem with all-inclusives is the vertical integration of hotels into restaurants. The lack of synergy between the two skill sets is made worse by the big bundle, which eliminates prices for the “tied” product—in this case, resort food and drinks.
Don’t mistake this as a rant against vertical integration generally. Some skill sets nicely complement each other. For example, local breweries tend to be good at making food, presumably because someone who has a knack for making tasty brews understands the palate. Economists call these “synergies,” and they should be exploited whenever possible.
I learned this lesson firsthand at the Atlantis Resort in the Bahamas. Atlantis offers its own restaurants. It also (wisely) contracts out resort space to third-party restaurateurs such as Nobu. Trust your fearless blogger when he tells you the homegrown resort food is barely edible. In contrast to our local brewery example, the skill sets of making elaborate water parks (or just plain swimming pools) and making cuisine simply don’t overlap.
(A quick econ digression: A seminal piece by Carlton (2001) shows how bundling by an all-inclusive resort can also be bad for island natives. Before the resort bundles food with hotel stays, there is a thriving sector of independent restaurateurs catering to both island natives and tourists. Once the bundle is introduced, tourists eat all of their meals “for free” at the resort—really at no incremental charge—driving the independents out of business for lack of viable scale. Island natives are suddenly beholden to a monopoly provider of restaurants. But this piece is trying to convince you that tourists like you are harmed as well!)
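The scale logic can be put in numbers. A back-of-the-envelope sketch, with figures invented for illustration (not drawn from Carlton’s paper):

```python
# Back-of-the-envelope version of the bundling story (all numbers invented).
# An independent island restaurant needs enough covers per night to cover fixed costs.

FIXED_COST = 900.0        # nightly fixed cost (rent, staff)
MARGIN_PER_MEAL = 6.0     # contribution margin per cover
# Break-even: 900 / 6 = 150 covers per night.

natives, tourists = 80, 120

def profitable(covers):
    """Does nightly contribution margin cover fixed costs?"""
    return covers * MARGIN_PER_MEAL - FIXED_COST > 0

# Before the bundle: tourists pay a la carte, so they patronize independents.
print(profitable(natives + tourists))  # True: 200 covers clears the 150 break-even

# After the bundle: tourist meals are "free" at the resort, so tourists stay in.
print(profitable(natives))             # False: 80 covers falls short of 150
```

Once the independents exit, the natives face a restaurant monopolist—the resort—even though no price ever changed.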
When mediocre, resort-owned restaurants and bars are not allowed to charge a price due to the all-inclusive bundle, the problem of vertical integration in the absence of synergies is exacerbated. Now the interests of the resort and the guests are almost perfectly in conflict—the resort’s new profit objective is to minimize expenditure on food and drink given their zero incremental contribution to margin. And the guests can’t buy their way out of the predicament.
Even when something is “free” for guests at the margin, so long as there is a cost to provide the good (a meal or a drink), the supplier will find a way to ration supply. This is the role normally reserved for prices. But all-inclusives kill the normal market mechanism.
In the case of an all-you-can-drink poolside bar—a cool idea in theory once you get over where guests are urinating—that means understaffing the bar so that patrons tread water for long periods before getting the bartender’s attention.
Rationing can also be achieved by decreasing the quality of the spirits. Put bluntly, this entails cheating the guests. Stock the crummiest wine possible: Call one “white” and the other “red.” The all-inclusive at which I stayed offered one red, one white, and one beer (Presidente). You should have seen the bartender’s expression when I asked for an Old Fashioned with rye.
At the extreme end of the cheating spectrum, look to the Spanish chain Iberostar, an all-inclusive resort that substituted bootleg liquor for the real stuff across several properties in Mexico, and sent several guests to the hospital and some to their death. (When I fumed on Twitter the other day, Iberostar responded by saying: “We only purchase sealed bottles that satisfy all standards required. Safety and satisfaction of our guests is of utmost importance for us.” Hope they have a better defense in court!)
The same incentives apply to all-inclusive resort food. None of these outfits could survive outside of the resort. I met some New Yorkers who paid a hotel chain $1,500 on day three of their vacation to move from one resort to another (mine), because their family could not stomach the grub at the original resort. They literally upgraded from horrible to not horrible.
In addition to suffering low quality and long queues, guests at an all inclusive cannot incentivize staff via tipping. When you pay for drinks on the beach a la carte, you can add a gratuity at the end of your experience when presented with the bill. The next day your friendly server will remember the tip, and (hopefully) give you the royal treatment. Sometimes the gratuity is even already added to the bill. But at the all-inclusive resorts, because guests are not presented with a bill, the only way to tip your server and thus incentivize him is with cash. But who swims with cash in their pockets?
Are you convinced that prices are wonderful? Of course, charging a positive price at the margin for booze has a drawback, in that it causes guests to drink too little relative to the socially optimal level. Resorts make more money the more you drink, and your friends have more fun (making fun of you) the more you drink. But good resorts that offer food and drink a la carte know how to solve that problem—namely, by giving away free drinks.
“Hold on one second,” my free-market friends insist. “All inclusives are sensitive to the reputational costs of driving away repeat business. There must be at least some all-inclusives that have built up a brand name, have a lot of loyal repeat customers, care about their reputation, and treat their loyal customers well. They follow an all-inclusive model not to screw their customers, but to guarantee an all-around high quality experience.”
If only. The all-inclusives rely on a steady supply of myopic, one-shot guests. Sure, there are loyalists, but by repeatedly withstanding these indignities, they have revealed that they either don’t put a premium on food—did I mention there was free booze?—or don’t know good food from bad. And economists like Gabaix and Laibson (2006) have shown that firms find it more profitable to pursue a pricing strategy that exploits myopic consumers with higher prices than to attempt to steal customers from one another by slashing the prices of ancillary services, even in “highly competitive markets.”
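To see why exploiting myopic guests beats price competition, consider a toy calculation in the spirit of the Gabaix–Laibson shrouded-attributes logic (all numbers invented; this is a caricature for intuition, not their formal model):

```python
# Toy version of the shrouded-attributes logic (all numbers invented).
# A fraction of guests is myopic: they compare headline room rates but
# ignore the marked-up "add-on" (drinks, food).

MYOPIC_SHARE = 0.6
ADDON_COST = 10.0             # cost of providing the add-on
ADDON_SHROUDED_PRICE = 40.0   # what myopic guests end up paying for it
COMPETITIVE_MARGIN = 2.0      # thin margin on the transparent headline product

def shrouding_profit(n_guests):
    # Myopic guests overpay for the add-on; everyone contributes the thin
    # competitive margin on the headline product.
    myopic = MYOPIC_SHARE * n_guests
    return myopic * (ADDON_SHROUDED_PRICE - ADDON_COST) + n_guests * COMPETITIVE_MARGIN

def transparent_profit(n_guests):
    # A transparent rival earns only the competitive margin: advertising
    # "our add-ons are fairly priced" educates myopic guests, who then
    # simply dodge the add-on at the shrouding firm rather than switch.
    return n_guests * COMPETITIVE_MARGIN

print(shrouding_profit(100))    # 2000.0: markup on myopic guests dwarfs the margin
print(transparent_profit(100))  # 200.0
```

Unshrouding doesn’t pay, so in equilibrium even “highly competitive” resorts keep the add-on markups.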
There. I’m done with that rant. And I’m also done with all-inclusives.
I’ve been called many bad things. Global elitist (or just “telecom elite”) is a popular slight these days for economists and others with advanced degrees. But not until this morning had someone said I commute to “Econ Cloud Cuckoo Land.” That really stings.
Harold Feld’s latest blog characterizes (and mischaracterizes) some of my opinions about the harms from Title II. For example, he neglects to mention that, aside from investment effects, the harm from a bright-line ban on paid priority (secured by Title II) is that certain efficient arrangements between ISPs and content providers might never be struck, and certain real-time applications (in need of higher quality of service) might never come to market.
My head might be above the econ clouds, as Harold suggests, but my feet are firmly planted on solid empirical ground. Let’s take apart some of these zingers, which are set off below in block quotes:
So, after diligently pruning away all investment that “real” economics said doesn’t count, Hal Singer and others found that there had been a measurable drop in carrier investment, which therefore proved that Title II was bad for investment in infrastructure, which therefore proved that Title II was eventually going to cause slowerworsebroadband.
Yes, in reaching my investment figures for 2015 and 2016, I did “prune away” AT&T’s investment in Mexico, as well as its investment made through DIRECTV, both of which (unfortunately) are included in AT&T’s top-line capital expenditures. I have yet to hear a good argument for why these investments should be included in any measure of broadband capital. Recall, we are testing the hypothesis that common-carrier regulations chased investment from the core of the BROADBAND network in THE UNITED STATES OF AMERICA. If an Internet service provider (ISP) moved its investment from broadband to some other sector (say, movie making) or some other country as a result of Title II, we wouldn’t raise a glass of champagne.
The same logic dictates that one “prune away” Sprint’s short-lived investment in wireless handsets. In its financials, Sprint breaks this spending out separately from its network investment. If Sprint did not step in between the customer and the handset maker for a nanosecond, as it does now, the same amount of money would still be spent on handsets. Put differently, Sprint is not increasing aggregate spending in the broadband ecosystem via this accounting measure. And even if you could convince yourself that handset capitalization was really incremental broadband capital (it’s not), Sprint did not embrace this policy until the fourth quarter of 2014. Because 2014 is the benchmark year—against which one compares investment during the Title II regime—one cannot make an apples-to-apples comparison by including Sprint’s handsets. Period.
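The pruning itself is simple arithmetic. A sketch with hypothetical capex figures (not the actual ISP data; the numbers are chosen only to illustrate the direction of the adjustment):

```python
# Illustration of the capex "pruning" described above, with hypothetical
# (not actual) figures in $B. U.S. broadband capex is top-line capex minus
# spending that isn't U.S. broadband network investment.

def us_broadband_capex(topline, mexico=0.0, directv=0.0, handsets=0.0):
    """Strip non-U.S. and non-network items from top-line capex."""
    return topline - mexico - directv - handsets

capex_2014 = us_broadband_capex(topline=50.0)  # pre-Title II benchmark year
capex_2016 = us_broadband_capex(topline=52.0, mexico=2.0, directv=1.5, handsets=0.5)

# Top-line capex rose (50 -> 52), but adjusted U.S. broadband capex fell.
pct_change = 100.0 * (capex_2016 - capex_2014) / capex_2014
print(f"Adjusted change, 2016 vs. 2014: {pct_change:+.1f}%")  # -4.0%
```

The sketch shows why the adjustments matter: with these made-up inputs, the unadjusted series shows growth while the pruned series shows a decline.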
With respect to causation (Harold’s last point in the block quote), I’ve said repeatedly that comparing 2014 ISP investment levels to those in 2015 or 2016 is not a proof of regulatory impact. Shall I say it again? Other things may have changed during the experiment that affect capital formation in the sector. This is why I like to point to the natural experiment of the late 1990s/early aughts, in which telcos were uniquely subject to Title II, and telco capital formation was outpaced by cable capital formation.
Folks in [Econ Cloud Cuckoo Land] have responded, rather predictably, by saying that if you junk all that real world stuff and use the “real economics” favored in ECCL, you see that Singer does a much better analysis than Turner, so there! This prompts much applause and beatific smiles in Econ Cloud Cuckoo Land.
Doug Brake at ITIF recently compared my investment data to that of Free Press, and determined that the two studies arrive at nearly the same place if one controls for the things I mentioned above (AT&T in Mexico, AT&T via DIRECTV, and Sprint’s handsets). George Ford at the Phoenix Center found that Free Press inappropriately combined 2015-16 data to mask a downturn in 2016 by Free Press’s own accounting. USTelecom replicated my 2016 analysis on a larger database of ISPs (mine was limited to the 12 biggest ISPs) and reached a similar conclusion to mine (a four percent decline in ISP investment in 2016 relative to 2014). And PPI replicated my 2015 analysis and reached a similar conclusion (a modest decline in ISP investment in 2015 relative to 2014).
Do all of these folks commute to Econ Cloud Cuckoo Land? Maybe so. But it’s also possible that Free Press is the outlier. Recall that Free Press famously misled the Commission into believing that Title II caused telco investment to increase in the late 1990s/early aughts, simply because telco investment rose during that period. (See footnote 1210 of the 2015 Open Internet Order.) Never mind that (1) the dot-com boom coincided with the Title II period, contaminating the experiment, and (2) as noted above, capital accumulation by cable operators outpaced that of telcos during this period, implying that Title II actually retarded telco capital formation.
All of this finally brings us back to the question I expect readers really care about — what does all this mean for Pai’s plan to kill net neutrality? Frankly, it means that the primary pillar Pai points to as supporting roll back is going to be a big fat flop in court if he decides to go that way. Why? You may have heard of a critter called a honey badger who, thanks to this delightful Youtube video, has the tag line: “Honey badger don’t care; he don’t give a f—.” That describes the attitude of judges when reviewing the economic evidence from Econ Cloud Cuckoo Land. Judges don’t care, they don’t give a f—.
I agree with Harold that the D.C. Circuit will not likely care about what happened to broadband investment in response to Title II. To the chagrin of Judge Williams, whose dissent pressed the economics, the D.C. Circuit’s decision approving reclassification by Wheeler’s FCC had zero to do with economics. The court determined that the expert agency was owed deference in these matters. Indeed, when asked at oral argument what change in circumstances would warrant a change in classification, the FCC’s attorney was tongue-tied. By symmetry, there is no reason why Pai’s FCC should be (uniquely) compelled to demonstrate a change in circumstances now.
And this is precisely why legislation locking in a light-touch regulatory regime is needed. In its absence, we can count on over-enforcement of net neutrality rules under a Democrat-led FCC, and (potentially) under-enforcement under a Republican-led FCC. (Based on the NPRM, which preserves several options, we don’t know which way Pai is heading. It is a misstatement, however, for Harold to say that Pai’s plan is to “kill net neutrality.”) With luck, even Harold will agree that major reversals in FCC rules every four or eight years are not conducive to attracting capital to the broadband sector.
In the meantime, I encourage Harold to come visit me in Econ Cloud Cuckoo Land. The commute to and from the D.C. suburbs is a breeze. During downtime, we enjoy personal yoga instructors. And when we’ve exhausted our econ brains, we rest our heads on little pillows made from clouds, and snuggle under the 1000 thread-count sheets.
Here’s an excerpt from a new paper on the limits of antitrust, which I will present at the ABA Antitrust in the Americas Conference in Mexico City on June 1. If you are interested in reading the whole submission (a little over 5,000 words), please write me.
[Update: The full article is available here.]
* * *
Consider a hypothetical case in which an Internet service provider (ISP) offers preferential treatment for an online content supplier’s packets for a fee, but declines to make the same terms available to other content providers. To make the matter concrete, assume the preferred content supplier offers telemedicine service, a real-time application that performs better with enhanced quality of service from the ISP. Preferential treatment could also take the form of the ISP’s not counting the content provider’s packets against the customer’s data cap (known as “zero rating”). To an economist, the precise nature of the preference afforded the content provider is not critical, so long as a preference of some kind is provided for a fee. What matters from a competition perspective is that as a result of the pay-for-preference arrangement, the favored content provider operates at a competitive advantage vis-à-vis its content rivals. Because the offer of preference is, by assumption here, not extended to all comers, the arrangement is discriminatory, plain and simple.
But does it amount to an antitrust offense? This essay answers that question in the negative: Unlike traditional discriminatory-refusal-to-deal (DRTD) cases in antitrust, there is no effort by the ISP in our hypothetical to disadvantage a horizontal rival. Even if an edge provider could structure its net neutrality complaint as a DRTD, private litigants are unlikely to pursue antitrust cases where the only harm to competition is an innovation loss (in the form of less investment/innovation by edge providers in future periods). Moreover, antitrust litigation imposes significant costs on private litigants, and it does not provide timely relief; if the net neutrality concern is a loss to edge innovation, a slow-paced antitrust court is not the right venue. While public enforcement of innovation-based claims is possible, it likely would take an edge provider months if not years to motivate an antitrust agency to bring a case. Finally, competition is not the only value that net neutrality aims to address; end-to-end neutrality or non-discrimination is a principle that many believe is worth protecting on its own.
This essay also offers an alternative, ex post regime patterned loosely on the tribunal used to adjudicate discrimination complaints against cable video operators pursuant to Section 616 of the Cable Act. Like a rule-of-reason case under antitrust, the tribunal would begin with the presumption that preferential arrangements extended by ISPs to edge providers do not violate a (to-be-adopted) non-discrimination standard, but would allow complainants to rebut that presumption upon meeting certain evidentiary criteria. Importantly, the tribunal need not import the evidentiary criteria verbatim from antitrust; for example, there would be no need to establish market power, profit-sacrifice, or anticompetitive effects. Because the gaps in antitrust identified in this essay also leave search-neutrality violations unrestrained, there is no reason why the tribunal could not accommodate complaints against dominant Internet intermediaries such as Google and Facebook. In this sense, a new tribunal could provide a layer-neutral approach to dealing with neutrality issues.
Are the Antitrust Laws a Good Fit?
Monopolists are generally free to choose their suppliers and engage in price discrimination under the antitrust laws. Where such constraints exist, the source is often industry-specific regulation. For example, the obligation to deal with rivals or content suppliers on nondiscriminatory terms flows from common-carriage or program-carriage rules under Section 202 of the Communications Act and Section 616 of the Cable Act. As explained by Yoo (2013), telecom regulators ensure nondiscrimination by requiring the telephone company to offer service under the terms specified by a tariff to any requesting party that qualifies to receive the service. Importantly, these nondiscrimination obligations do not flow from the antitrust laws.
Indeed, the recent tendency in antitrust jurisprudence has been to relax nondiscrimination obligations. In Terminal Railroad, the defendant discriminated against rival railroads by refusing to grant access to its terminal facilities. The essential-facilities doctrine, which grew out of that case, has been undermined by more recent developments. In Trinko, the Supreme Court ruled that telephone companies had no antitrust obligation to deal with resellers (horizontal rivals) above and beyond the unbundling obligations in the Telecommunications Act. Trinko cast doubt on the viability of the essential-facilities doctrine, particularly as applied to regulated industries such as telecom and potentially Internet access.
The closest surviving cognizable antitrust offense for our hypothetical case of discrimination by an ISP is a discriminatory refusal to deal (“DRTD”). For example, a dominant firm may discriminate by refusing to deal with—or offering worse terms to—horizontal rivals or those buyers (or suppliers) who deal with horizontal rivals. In Aspen, the defendant discriminated against a rival by refusing to sell lift tickets to that rival at any price. In Otter Tail, the defendant discriminated against rivals by refusing to supply electric power to those municipalities that competed with the defendant in retail distribution. In Dentsply, the defendant discriminated against rivals (indirectly) by using exclusive contracts with dental-product dealers to limit rival manufacturers’ access to dental laboratories that purchase artificial teeth. And in Lorain Journal, the defendant discriminated against rivals (indirectly) by refusing to sell advertising space to those advertisers who dealt with its rival. Based on a review of these and other seminal DRTD cases, Elhauge (2003) explains that a duty to deal turns not on a prior course of dealings with the buyer or distributor, but instead on whether the dominant firm’s present dealings discriminate between rivals and non-rivals; in particular, whether the dominant firm deals only with non-rivals and excludes rivals. And even then, to prevail under the antitrust laws, the plaintiff would still need to demonstrate that the DRTD enhanced or maintained the defendant’s monopoly power and harmed competition. Kulick (2013) develops an alternative post-Chicago model in which exclusive dealing takes the form of a DRTD.
Here’s an excerpt from an introduction-to-econometrics paper written for lawyers, which I will present at the ABA Antitrust Spring meeting in DC in late March. If you are interested in reading the whole submission (a little over 4,000 words), please write me.
* * *
Well, if that’s all regression does, you might ask, why in the heck do we need it? The answer is that many factors in addition to the challenged conduct likely affect prices in this market, and we need to control for those factors in case they changed around the time that the challenged conduct ended. Prices are typically determined as a markup over the cost of serving the customer. Suppose the seller (the defendant) in this case always imposes a markup of 50 percent over costs; in the during period, average costs were $667 and average prices were $1,000. Suppose further that costs on average declined in the after period by $100 relative to the during period, bringing average costs to $567 and average prices to roughly $850 (equal to $567 plus 0.5 x $567). We now have an independent reason—unrelated in any way to the challenged conduct—for why prices would have declined in the after period!
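The markup arithmetic is easy to verify in a couple of lines (the dollar figures in the text are rounded; a 50 percent markup on $567 comes to $850.50):

```python
# Hypothetical markup pricing from the text: price = cost * (1 + markup).
markup = 0.5
cost_during, cost_after = 667.0, 567.0  # costs fall by $100 after the conduct ends

price_during = cost_during * (1 + markup)  # ~$1,000 (precisely $1,000.50)
price_after = cost_after * (1 + markup)    # ~$850 (precisely $850.50)

print(price_during, price_after)
```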
Suppose the analyst is unaware that costs changed, or suppose cost data are not available. He regresses the simple model (price on the conduct indicator alone). The estimated parameter on the conduct indicator comes back at $250, but we know that the parameter is biased. Technically, this means the expected value of the parameter in repeated samples will not be equal to the true value. The regression is attributing too much of the change in prices between the during and the after period to the challenged conduct. This problem is referred to in the econometrics literature as “omitted variable bias,” and it represents a major challenge for applied economists.
Here’s why: Remember the assumption on the error term in the simple model? It required that the error term not be correlated with the conduct indicator. By omitting cost from the regression, however, we violated it. In particular, we know that costs declined markedly right around the time that the conduct ended; hence, when the conduct was absent (present), costs were lower (higher). Without controlling for costs, B will now capture the sum of the direct effect of the conduct on prices (what we want) plus the indirect effect of the conduct operating through costs, which in this case is positive. So when we omit costs from the regression, our price predictions will be systematically worse in the presence of the conduct; that is, the error term is now correlated with the conduct indicator. In general, whenever the omitted variable (in this case, cost) is positively correlated with both the included regressor (the conduct) and the dependent variable (the price), the estimate of the included variable’s coefficient will be upwardly biased. Because this rule is hard to memorize, I’ve presented a simple table for reference below.
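A small simulation makes the bias concrete. The dollar figures here are assumptions for illustration only: a true overcharge of $100, a $100 cost decline when the conduct ends, and the 50 percent markup from the text.

```python
import numpy as np

# Simulate the omitted-variable problem: cost moves with the conduct,
# and the analyst regresses price on the conduct indicator alone.
rng = np.random.default_rng(0)
n = 5000

conduct = np.repeat([1.0, 0.0], n // 2)          # 1 = during period, 0 = after
cost = np.where(conduct == 1, 667.0, 567.0) + rng.normal(0, 20, n)
true_overcharge = 100.0                          # assumed direct effect on price
price = 1.5 * cost + true_overcharge * conduct + rng.normal(0, 10, n)

# Short regression: price on the conduct indicator only, cost omitted.
X_short = np.column_stack([np.ones(n), conduct])
beta_short = np.linalg.lstsq(X_short, price, rcond=None)[0]
print(f"estimate with cost omitted: ${beta_short[1]:.0f}")
# The estimate lands near $250: the true $100 plus 1.5 x $100 of cost
# pass-through wrongly attributed to the conduct (upward bias).
```

Because cost is positively correlated with both the conduct indicator and the price, the conduct coefficient absorbs the cost effect, exactly the pattern described in the first row of the table below.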
|Correlation between omitted variable and included regressor|Correlation between omitted variable and dependent variable|Direction of bias on included regressor|
|---|---|---|
|Positive|Positive|Upward|
|Positive|Negative|Downward|
|Negative|Positive|Downward|
|Negative|Negative|Upward|
It bears noting that most if not all regressions ever estimated have omitted at least some explanatory variables from the equation (otherwise, there would be no error term, and the R-squared would be 100 percent). But that does not imply that the resulting parameters of the imperfect model were biased. Two conditions must be present for an omitted variable to result in a biased regression estimate: (1) the omitted variable must be a factor that explains the dependent variable; and (2) the omitted variable must be correlated with an independent variable specified in the regression. The second condition is a generalization of the phenomenon we just encountered with costs and the challenged conduct. This means that it is not sufficient for an opposing economist to merely point out that a regression is missing a key variable. For the critique to be valid, the opposing economist must demonstrate that both conditions are satisfied. One way to do this is indirectly, by providing an evidentiary basis that the allegedly omitted variable is a factor in the defendant’s pricing, and that it is correlated with the conduct variable. Alternatively, the opposing economist can demonstrate omitted-variable bias directly by re-running the regression with the omitted variable included, and showing not only that it belongs in the regression (as evidenced by a statistically and economically significant effect), but also that including it materially changes the conduct estimate—for example, the revised conduct parameter loses statistical or economic significance, or no longer carries the expected sign.
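The “direct” demonstration can be sketched with simulated data. As before, the dollar figures are assumptions for illustration (a true $100 overcharge, a $100 cost decline when the conduct ends, and the 50 percent markup from the text); the point is that re-running the regression with the previously omitted cost variable restores the true conduct effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

conduct = np.repeat([1.0, 0.0], n // 2)          # 1 = during period, 0 = after
cost = np.where(conduct == 1, 667.0, 567.0) + rng.normal(0, 20, n)
price = 1.5 * cost + 100.0 * conduct + rng.normal(0, 10, n)  # true effect: $100

# Short model (cost omitted) versus long model (cost included).
X_short = np.column_stack([np.ones(n), conduct])
X_long = np.column_stack([np.ones(n), conduct, cost])
b_short = np.linalg.lstsq(X_short, price, rcond=None)[0]
b_long = np.linalg.lstsq(X_long, price, rcond=None)[0]

print(f"conduct estimate, cost omitted:  ${b_short[1]:.0f}")  # biased, near $250
print(f"conduct estimate, cost included: ${b_long[1]:.0f}")   # near the true $100
print(f"cost coefficient: {b_long[2]:.2f}")                   # near 1.5, the markup
```

Adding cost both “belongs” in the regression (its coefficient recovers the assumed markup) and materially shrinks the conduct estimate, which is precisely the two-part showing the text describes.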