For those who have not had the pleasure of seeing it, I recommend the fascinating and, honestly, fun, new study by Barton Beebe and Jeanne Fromer on the arbitrariness and unpredictability of the U.S. Patent & Trademark Office's refusals of trademarks that are deemed to be "immoral" or "scandalous."
The study, entitled Immoral or Scandalous Marks: An Empirical Analysis, has been posted on SSRN. This paper served as the basis for Professors Beebe and Fromer's amicus brief in Iancu v. Brunetti.
This study follows up on Megan Carpenter and Mary Garner's prior 2015 paper, published in the Cardozo Arts & Entertainment Law Journal and Anne Gilson LaLonde and Jerome Gilson's 2011 article, Trademarks Laid Bare: Marks That May Be Scandalous or Immoral.
All of these studies come to similar conclusions: there are serious inconsistencies in trademark examiners' application of the Section 2(a) "immoral-or-scandalous" rejection. The Beebe/Fromer study is technically 161 pages long, but it's mostly exhibits, and it's very accessible – worth at least a read to see some of the examples they give, and to ogle the bizarre interplay between Section 2(a) "immoral-or-scandalous" refusals and Section 2(d) "likely to confuse with prior registered mark" refusals.
The issue in Brunetti is whether the Section 2(a) "scandalous-or-immoral" refusal is an unconstitutional restriction on free speech under the First Amendment. The test asks
"whether a substantial composite of the general public would find the mark scandalous, defined as shocking to the sense of truth, decency, or propriety; disgraceful; offensive; disreputable; ... giving offense to the conscience or moral feelings; ... or calling out for condemnation."
(6-7) (quoting In re Brunetti, 877 F.3d 1330, 1336 (Fed. Cir. 2017) (citations omitted)).
In sum, the Beebe/Fromer study takes advantage of a large amount of empirical data (3.6 million trademark registration applications), and creatively uses the interplay between Section 2(a) "immoral-or-scandalous" refusals and Section 2(d) "likely to confuse with prior registered mark" refusals, to emphasize just how unpredictable and capricious the examiners have been in determining what the general public might find "shocking."
In particular, the authors show that
the PTO routinely refuses registration of applied-for marks on the ground that they are immoral or scandalous under § 2(a) and confusingly similar with an already registered mark under § 2(d); in other words, the PTO routinely states that it cannot register a mark because the mark is immoral or scandalous and in any case because it has already allowed someone else to register the mark on similar goods. Furthermore, the PTO arbitrarily allows some applied-for marks to overcome an immoral-or-scandalous refusal while maintaining that refusal against other similar marks. ...
For example, the mark at issue in Brunetti is FUCT for apparel. Here is what the authors say about this:
In 2009, the PTO refused to register the mark FUK!T in connection with apparel (Class 25) and the operation of an internet website (Class 42) on the bases that the applied-for mark was immoral or scandalous under § 2(a) and confusingly similar under § 2(d) to the recently-registered mark PHUKIT for apparel (Class 25). Similarly, on June 18, 2013, the PTO registered the mark PHUC for apparel (Class 25). Four days before, on June 14, 2013, the PTO sent out an office action refusing to register the mark P.H.U.C. CANCER (PLEASE HELP US CURE CANCER) in connection with apparel (Class 25) on the bases that the mark was immoral or scandalous and confusingly similar to the about-to-be-registered mark PHUC for apparel. At no time during its registration process did the earlier-filed mark PHUC for apparel receive any immoral-or-scandalous refusal....
My brilliant :) Akron Law Trademark Law 2019 students might call this a "Schrödinger's cat argument" from the USPTO examiners. On the one hand, the mark FUCT is unregistrable because it's 2(a) "scandalous"; but on the other hand, FUCT is unregistrable because we already registered a mark just like it. Doh!
In 2008, the PTO issued an immoral-or-scandalous refusal to an application for the mark CAJONES for dietary supplements (Class 5). It cited evidence from urbandictionary.com, among other sources, in support of the conclusion that: the proposed mark “CAJONES” means “TESTICLES” or “BALLS” and is thus scandalous because it is a commonly used vulgar slang term for a part of the male genitalia.
Yet in 2008 the PTO registered the mark CAJONES for party games (Class 28) without any immoral-or-scandalous objection, even though, with authorization from the applicant’s attorney, it amended the application record to include the following translation statement: “The foreign wording in the mark translates into English as drawers, and as a slang term for testicles.” Similarly, in 2005 the PTO issued no immoral-or-scandalous refusal to the mark CAJONES for beer (Class 32) and published the mark. In an office action, the PTO had asked the applicant for a translation of the mark, stating: “The following translation statement is suggested: ‘The English translation of CAJONES is drawers.’”
Beebe and Fromer assert in their SSRN paper that these sorts of inconsistencies support the conclusion that the 2(a) immoral-or-scandalous prohibition is being applied in an "arbitrary and viewpoint-discriminatory manner" that violates the First Amendment. (They have a couple of theories for how this works under First Amendment doctrine; see pp. 27-32.)
These empirical studies by IP professors are likely to be influential on the outcome of the case. It seems clear the work has already been read by some of the Supreme Court Justices or their clerks. For instance, as Megan Carpenter, Professor and Dean of the University of New Hampshire School of Law, noted on SCOTUSblog, at oral arguments on Monday, April 15, Justice Gorsuch was
particularly troubled by inconsistencies in acceptances and rejections in the PTO’s application of this provision over time, and the resultant inability to give adequate notice to trademark owners. ... He added that he himself could not “see a rational line” through the refusals and registrations, and asked “is it a flip of the coin?”
The full transcript reveals even more that suggests the Justices are reading the Beebe/Fromer amicus or other empirical studies. From Justice Gorsuch:
JUSTICE GORSUCH: But I can come up with several that are granted that ... have phonetics along the lines you've described and a couple that have been denied. And what's the rational line? How is a person -- a person who wants to get a mark supposed to tell what the PTO is going to do? Is it a flip of the coin? (p. 21)
From Justice Kavanaugh:
JUSTICE KAVANAUGH: How ...do you deal with the problem of erratic or inconsistent enforcement, which seems inevitable with a test of the kind you're articulating? (p. 16)
As someone who lacks a strong view on whether this provision of the Lanham Act should be struck down as unconstitutional, I am just enjoying hearing the examples...and seeing the Justices squirm a bit:
JUSTICE GORSUCH: I don't want to -- I don't want to go through the examples. I really don't want to do that. (Laughter.) (p. 21)
What was the "promise of the patent doctrine"? The short answer is: a controversial doctrine that originated in English law and that, until recently, was applied in Canadian patent law to invalidate patents that made a material false promise about the utility of the invention. A common example would be a claim to therapeutic efficacy in a specification that is not borne out.
Warning: the content of this doctrine may seem bizarre to those familiar with U.S. patent law.
According to Professor Siebrasse, pre-abolishment, Canadian utility doctrine effectively had "two branches": (1) the "traditional utility requirement," which is similar to U.S. law's, and requires merely a "scintilla" of utility; and (2) "the Promise Doctrine."
The basic idea of the Promise Doctrine was that
"where the specification does not promise a specific result, no particular level of utility is required; a “mere scintilla” of utility will suffice. However, where the specification sets out an explicit “promise”, utility will be measured against that promise." (quoting Lilly v Novopharm / Olanzapine)
Starting around 2005, until the Supreme Court of Canada's decision in AstraZeneca, Canadian courts applied the Promise Doctrine in the pharmaceutical context to invalidate patents. The "promise," Siebrasse explained, could be found "anywhere in the specification[.]" If there were multiple “promises," the patent had to satisfy all of them, or the entire patent would be invalidated.
Here is an example from Siebrasse's article. (35-36). In a Canadian case circa 2009, a judge construed a patent as making a "promise" of a certain utility based on the following statements, in bold italics, within the patent specification:
The compounds of this invention have useful pharmacological properties. They are useful in the treatment of high blood pressure. The compounds of the present invention can be combined with pharmaceutical carriers and administered in a variety of well-known pharmaceutical forms suitable for oral or parenteral administration to provide compositions useful in the treatment of cardiovascular disorders and particularly mammalian hypertension. (35) (citing Sanofi v Apotex/ramipril).
The patent would be invalidated under the Promise Doctrine if the promise of utility turned out to be false, or if the court deemed the promise of utility premature and unfounded at the time of filing. This is an important caveat because, Professor Siebrasse explains, in essentially all of the Canadian cases invalidating patents on the basis of the Promise Doctrine, the promise was in fact true. It's just that the heightened promise of utility was speculative at the time of filing; it was proven true only later, when validity was challenged in court. So it's not just that the applicant makes a "false" promise; it's that the applicant makes a promise on which s/he may not be able to deliver.
Professor Siebrasse was not happy about courts' use of the Promise Doctrine to invalidate patents. His view seems to have won out in AstraZeneca, where the Court's language, at least to me, suggests the Doctrine is unambiguously dead:
"...the Promise Doctrine is not the correct method of determining whether the utility requirement under s. 2 of the Patent Act is met. Given the correct approach, as set out below, the drug for which the ‘653 patent was granted is useful as a PPI; thus, it is an “invention” under s. 2 of the Act. The ‘653 patent is therefore not invalid for want of utility."
Siebrasse gleefully keeps watch on the Promise Doctrine's fate in posts with titles like "Whack the Zombies Dead Once and for All", where he discusses unsuccessful attempts by generic drug companies to revive the doctrine.
What I think is really interesting here is the history of the Promise Doctrine. According to Professor Siebrasse, the Promise Doctrine evolved in English law. To paraphrase Siebrasse, in English law "the grant of a patent was an exercise of the royal prerogative, and as such wholly within the discretion of the Crown." Patents thus could be retracted for many reasons, including a false promise of utility, as "measured by the representations made in the patent." (5-6). This was codified in the English Patent Act until 1977. (7) ("[T]he English false promise doctrine was codified by a statutory provision that a patent would be void if obtained on the basis of a 'false suggestion.' ”).
There was an important difference. In the older English cases from which the Canadian Promise Doctrine originated, the elevated promise of utility was actually false or at least misleading. For example, in the 1919 English case, Hatmaker v Joseph Nathan & Co Ltd., the invention claimed a process for producing dried milk. The specification stated that the process would produce milk solids “in a dry but otherwise unaltered condition” and that the reconstituted milk was “of excellent quality.” But it turned out the dried milk was not actually as good as real milk. (8) (citing case). This is why the older English cases referred to a "false" promise. But in the modern Canadian practice, the promise was typically not actually false in hindsight.
Our shared origins in English law mean the Promise Doctrine is a path U.S. law could have taken too. It is interesting to ask, then: how might a "Promise Doctrine" evolve in U.S. law today? There are a few analogues.
Second, there is a general prohibition on lying to the Patent Office (e.g., the duty of candor and the inequitable conduct doctrine). If statements about utility are in fact false, and the false claim is "material" in the "but for" sense that the examiner would not have granted the patent unless it believed the assertions, this could potentially make the patent vulnerable to invalidation for inequitable conduct. But a merely premature promise of utility would not be false. Moreover, even a false statement of higher-than-actual utility would not be "material" in most cases, given the currently lax utility standard.
Third, closely related to utility is Section 112's "enablement" requirement. Enablement asks: could a person of "ordinary skill in the art" make the invention work in the stated way? But this does not necessarily require assessing the veracity of therapeutic claims. So long as a PHOSITA can practice the invention as claimed without "undue experimentation," it would not strictly matter whether the invention's therapeutic benefits pan out. For example, it would not matter whether the patients treated with a claimed drug live or die. That would be the FDA's concern, not patent courts' and examiners'.
That said, cases like Brenner v. Manson have shown how U.S. law's utility requirement might be beefed up to weed out patents that are filed well before claims of efficacy have been verified. For instance, filing a patent that makes "promises" of therapeutic efficacy when testing has not even been performed in mice might be seen as a premature assertion of utility that warrants invalidation. See Brenner v. Manson, 383 U.S. 519, 534-35 (1966) ("The basic quid pro quo contemplated by the Constitution and the Congress for granting a patent monopoly is the benefit derived by the public from an invention with substantial utility. Unless and until a process is refined and developed to this point—where specific benefit exists in currently available form—there is insufficient justification for permitting an applicant to engross what may prove to be a broad field.").
But Professor Siebrasse is quick to point out that Brenner's notion of "substantial utility" corresponds to the "scintilla" branch of Canadian utility law, mentioned above, which he says already requires assessing whether the specific asserted benefit of the invention has been developed to the point where it is currently available. The Promise Doctrine, in contrast, is a separate standard that seeks out "promises" and then imposes a higher standard on them.
I suspect there is more here to uncover on the history of the so-called "promise of the patent" doctrine. I am curious as to why it was not discussed in the Oil States debates, which centered on the conditions under which patents can be retracted. I didn't catch it mentioned in the amicus briefs that I read. I checked Professor Oren Bracha's thesis (now book) on U.S. patent law history. He does mention some aspects of this issue in his discussion of "working clauses." For example, Bracha states that "working clauses," which required grant holders to practice the invention to which they sought rights,
were a clear manifestation of the two main characteristics of English patents. They expressed the understanding of patents as royal discretionary policy tools, by creating mechanisms for insuring the 'execution' of the specific consideration promised by the patentee as the basis of the patent deal. They reflected the dominant notion of the subject matter of patents as new industries or trades, by focusing on actual putting into practice rather than on mere disclosure of information.
(Bracha, 20). But I didn't see anything about invalidations based on a "false promise of the patent" doctrine.
Yesterday and today, the University of Kansas School of Law hosted the ninth annual Patent Conference—PatCon9—largely organized by Andrew Torrance. Schedule and participants are here. For those who missed it, here's a recap of my live Tweets from the conference. (For those who receive Written Description posts by email: This will look much better—with pictures and parent tweets—if you visit the website version.)
Christal Sheppard (@ipwe_, former director of 1st @uspto regional office): Why do we need a team of lawyers to determine who owns a patent? Why can't lay people figure out a patent's quality or value? Why is it so hard to negotiate a license? Can commercial patent analytics help?
Q from @IowaPatentLaw: What's the legal hook for using high creation costs & low distribution costs to decide something can be a license rather than a sale? Doesn't that apply to books? Olson: Maybe it could apply to books. See abandoned Aspen program. #PatCon9
Matthew Sipe (@gwlaw VAP): Patent Law's Latent Philosophical Schism. Argues both sides have it wrong: utilitarians overstate relevance of their theories, moral perspectives understate b/c important for understanding infringement doctrine. #PatCon9
Infringement, by contrast, involves lots of moral concerns: damage enhancement, DOE, inequitable conduct, prior use. Injunctions have greater moral differentiation post-eBay: distinction in cases between "good" trolls like universities and "bad" trolls. #PatCon9
Where, in Sipe's view, does this split come from? 1. the adjudicatory split between the USPTO and district courts 2. the influence of traditional property law 3. the mix of private-law and public-law features that patents exhibit #PatCon9
Sipe notes @wsurferesq & O'Dorisio's mock jury finding that infringers are more likely to escape liability if plaintiff is NPE: https://t.co/TI3H237dui (Plus more I can't fit in a Tweet. Read the paper!) #PatCon9
Ian Wetherbee: Examples of what you can do easily w/ Google's BigQuery patent data: What CPCs have most CAFC cases? Which Orange Book patents have gov't interest statements? How do common office action sequences change over time? #PatCon9
Bernard Chao (@wsurferesq): In product mislabeling 3x3x2 test on mTurk, found evidence that salience & anchoring can lead to overcompensation. Relevant to patent damages on multicomponent products? Paper here: https://t.co/25QMnPmt9M #PatCon9
In follow-up work, @jjonasanderson will survey patentees of these unenforceable patents to ask why. @ProfDSchwartz has useful addendum: Also survey patentees of medical method patents that were cut from sample b/c they are enforceable (e.g., tied to device) as control. #PatCon9
.@NormanSiebrasse: Utility std similar in US and Europe, but used more in Europe—why? Genus/species rules similar in both. Europe focuses on information, not possession, but likely similar results. (There are 50+ slides, so zooming through!)
Pairolero: First-action decision has omitted variable bias—patent quality becomes correlated w/ grade & experience because of prosecution delays. Also argues time crunch can be beneficial: incentivizes use of examiner's amendment. #PatCon9
Super interesting work, but Nick is looking at first-action allowances only. The greater use of examiner's amendments at higher GS levels may explain differences in first-action allowances but, as Nick points out, may not extend to overall allowance rates.
.@lexvivo & @ProfGReilly ask about costs of examiner's amendments to claim clarity & notice. @ProfDSchwartz says in his experience these often occur due to deadlines (procrastination!) and asks about kind of amendments. #PatCon9
.@ProfDSchwartz & @ccotropia: Abandoned applications are MORE valuable than issued patents on a number of dimensions, including more likely to be used as prior art when rejecting other patents. #PatCon9
A compelling new project from Lisa Ouellette (@PatentScholar) and Daniel Hemel (@DanielJHemel) argues that the U.S. opioid epidemic reflects a failure of innovation institutions. Illustrative of the argument are stories of two drugs: OxyContin (oxycodone) and Evzio (naloxone). #PatCon9
Next at #PatCon9: @PatentScholar (joint work with @DanielJHemel ) argues that innovation institutions contributed to the opioid epidemic in a variety of ways including that addictive drugs have negative externalities and drugs that treat overdose have positive externalities.
Bill Rich (Washburn) argues patents should receive protection under the Privileges or Immunities Clause, which empowers Congress to override state immunity. For his prior work on patents & state immunity, see https://t.co/LMRffYwjsT #PatCon9
Friday and Saturday I'll be at PatCon9 at the University of Kansas School of Law. I'll discuss a scholarly work-in-progress with Janet Freilich, and I've also been invited to serve on a panel on "Roles and Influence of Patent Blogs" (with Kevin Noonan from Patent Docs and Jason Rantanen from Patently-O, and Written Description's own Camilla Hrdy serving as moderator). So I thought this would be a good opportunity to reflect on why I've written over 300 blog posts throughout the past eight years. (For those who want to read some highlights, note that many individual words below link to separate posts.)
I started Written Description in February 2011 when I was a 3L at Yale and was winding down my work as a Yale Law Journal Articles Editor, which had been a great opportunity to read a lot of IP scholarship. I noted that there were already many blogs reporting on the latest patent news (like Patently-O and Patent Docs), but that it was "much harder to find information about recent academic scholarship about patent law or broader IP theory." The only similar blog I knew of was Jotwell, but it had only two patent-related posts in 2010. (In 2015, I was invited to join Jotwell as a contributing editor, for which I write one post every spring.) Written Description has grown to include guest posts and other blog authors—currently Camilla Hrdy (since 2013) and Michael Risch (since 2015).
Most of my posts have featured scholarship related to IP and innovation. Some posts simply summarize an article's core argument, but my favorite posts have attempted to situate an article (or articles) in the literature and discuss its implications and limitations. I also love using my blog to highlight the work of young scholars, particularly those not yet in faculty positions. And I enjoyed putting together my Classic Patent Scholarship project; inspired by Mike Madison's work on "lost classics" of IP scholarship, I invited scholars to share pre-2000 works that they thought young IP scholars should be aware of.
I sometimes debate whether blogging is still worth my time. I could instead just post links to recent scholarship on Twitter, or I could stop posting about new scholarship altogether. But a number of people who aren't on Twitter—including from all three branches of the federal government—have told me that they love receiving Written Description in their email or RSS feed. Condensing patent scholarship seems like a valuable service even for these non-academic readers alone. And the pressure to keep writing new posts keeps me engaged with the recent literature in a way that I think makes me a better scholar. I don't think blogging is a substitute for scholarship, or that it will be anytime soon. Rather, I view my blogging time as similar to the time I spend attending conferences or commenting on other scholars' papers over email—one of many ways of serving and participating in an intellectual community.
I still have a lot of questions about the role of law-related blogs today, and I hope we'll discuss some of them on Thursday. For example: Has the role of blogs shifted with the rise of Twitter? Should blog authors have any obligation to study or follow journalism ethics and standards? How do blog authors think about concerns of bias? For many patent policy issues, the empirical evidence base isn't strong enough to support strong policy recommendations—do blog authors have any obligation to raise counterarguments and conflicting evidence for any decisions or academic papers they are highlighting? What are the different financial models for blogs, and how might they conflict with other blogging goals? (This may be similar to the conflicts traditional media sources face: e.g. clickbait to drive readership can come at the cost of more responsible reporting.) Do the ethical norms of blog authorship differ from those of scholars? How should blogs consider issues of diversity and inclusion when making choices about people to spotlight or to invite for guest authorship?
I'll conclude by noting that for PatCon8 at the University of San Diego School of Law, I tried a new blogging approach: I live Tweeted the conference and then published a Tweet recap. (Aside: I started those Tweets by noting that 19% of PatCon8 participants (13 out of 68) were women. The PatCon9 speaker list currently has 24% (7 out of 29) women speakers, but I don't know which direction non-speaker participants will push this.) For PatCon9, should I (1) live Tweet again (#PatCon9), (2) just do a blog post with some more general reactions (as I've done for some conferences before), or (3) not blog about the conference at all?
With the Supreme Court agreeing to hear the Brunetti case on the registration of scandalous trademarks, one might wonder whether allowing such scandalous marks will open the floodgates of registrations. My former colleague Vicenç Feliú (Nova Southeastern) wondered as well. So he looked at the trademark database to find out. One nice thing about trademarks is that all applications show up, whether granted or not, abandoned or not. He's posted a draft of his findings, called FUCT® – An Early Empirical Study of Trademark Registration of Scandalous and Immoral Marks Aftermath of the In re Brunetti Decision, on SSRN:
This article seeks to create an early empirical benchmark on registrations of marks that would have failed registration as “scandalous” or “immoral” under Lanham Act Section 2(a) before the Court of Appeals for the Federal Circuit’s In re Brunetti decision of December, 2017. The Brunetti decision followed closely behind the Supreme Court’s Matal v. Tam and put an end to examiners denying registration on the basis of Section 2(a). In Tam, the Supreme Court reasoned that Section 2(a) embodied restrictions on free speech, in the case of “disparaging” marks, which were clearly unconstitutional. The Federal circuit followed that same logic and labeled those same Section 2(a) restrictions as unconstitutional in the case of “scandalous” and “immoral” marks. Before the ink was dry in Brunetti, commentators wondered how lifting the Section 2(a) restrictions would affect the volume of registrations of marks previously made unregistrable by that same section. Predictions ran the gamut from “business as usual” to scenarios where those marks would proliferate to astronomical levels. Eleven months out from Brunetti, it is hard to say with certainty what could happen, but this study has gathered the number of registrations as of October 2018 and the early signs seem to indicate a future not much altered, despite early concerns to the contrary.
The study focuses not on the Supreme Court, but on the Federal Circuit, which already allowed Brunetti to register FUCT. Did this lead to a stampede of scandalous marks? It's hard to define such marks, so he started with a close proxy: George Carlin's Seven Dirty Words. This classic comedy bit (really, truly classic) nailed the dirty words so well that a radio station that played the bit was fined and the case wound up in the Supreme Court, which ruled that the FCC could, in fact, ban these seven words as indecent. So, this study's assumption is that the filings of these words as trademarks are the tip of the spear. That said, his findings about prior registrations of such words (with claimed dual meaning) are interesting, and show some of the problems that the court was trying to avoid in Matal v. Tam.
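At its core, the proxy method described above amounts to flagging applications whose mark text contains one of a fixed list of words. Here is a minimal illustrative sketch of that kind of filter; the term list and sample marks are drawn from examples in this post, not from the study's actual word list or data:

```python
# Toy version of the word-list proxy described above. These terms and
# sample marks are illustrative only (taken from examples in this post);
# the study itself ran George Carlin's seven words against USPTO filings.
FLAGGED_TERMS = {"FUCT", "PHUC", "FUK!T", "PHUKIT"}

def count_flagged(marks: list[str]) -> int:
    """Count marks whose text contains any flagged term as a substring."""
    return sum(
        1 for mark in marks
        if any(term in mark.upper() for term in FLAGGED_TERMS)
    )

sample_marks = ["FUCT", "PHUC", "HAPPY SOCKS", "P.H.U.C. CANCER"]
print(count_flagged(sample_marks))  # prints 2
```

Note the limitation baked into any such proxy: a naive substring filter misses stylized variants like "P.H.U.C.", which is one reason word-list searches capture only the tip of the spear.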
It turns out, not so much. No huge jump in filings or registrations after Brunetti. More interesting, I thought, was the choice of words. Turns out (thankfully, I think) that some dirty words are way more acceptable than others in terms of popularity in trademark filings. You'll have to read the paper to find out which.
I've previously recommended subscribing to Jotwell to keep up with interesting recent IP scholarship, but for anyone who doesn't, my latest Jotwell post highlighted a terrific forthcoming article by Michael Frakes and Melissa Wasserman. Here are the first two paragraphs:
How much time should the U.S. Patent & Trademark Office (USPTO) spend evaluating a patent application? Patent examination is a massive business: the USPTO employs about 8,000 utility patent examiners who receive around 600,000 patent applications and approve around 300,000 patents each year. Examiners spend on average only 19 total hours throughout the prosecution of each application, including reading voluminous materials submitted by the applicant, searching for relevant prior art, writing rejections, and responding to multiple rounds of arguments from the applicant. Why not give examiners enough time for a more careful review with less likelihood of making a mistake?
In a highly-cited 2001 article, Rational Ignorance at the Patent Office, Mark Lemley argued that it doesn’t make sense to invest more resources in examination: since only a minority of patents are licensed or litigated, thorough scrutiny should be saved for only those patents that turn out to be valuable. Lemley identified the key tradeoffs, but had only rough guesses for some of the relevant parameters. A fascinating new article suggests that some of those approximations were wrong. In Irrational Ignorance at the Patent Office, Michael Frakes and Melissa Wasserman draw on their extensive empirical research with application-level USPTO data to conclude that giving examiners more time likely would be cost-justified. To allow comparison with Lemley, they focused on doubling examination time. They estimated that this extra effort would cost $660 million per year (paid for by user fees), but would save over $900 million just from reduced patent prosecution and litigation costs.
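The back-of-the-envelope comparison above can be made concrete. This sketch simply restates the numbers quoted in this post; the function name and structure are mine, not Frakes and Wasserman's:

```python
# Rough arithmetic behind the comparison described above; the dollar
# figures are the ones quoted in this post, everything else is
# illustrative scaffolding.
APPLICATIONS_PER_YEAR = 600_000   # approximate annual utility filings
AVG_EXAM_HOURS = 19               # average examiner hours per application

def net_benefit_musd(extra_cost_musd: float = 660,
                     savings_musd: float = 900) -> float:
    """Net annual benefit (in $ millions) of doubling examination time."""
    return savings_musd - extra_cost_musd

# Doubling exam time adds roughly one current exam's worth of hours per case.
added_hours = APPLICATIONS_PER_YEAR * AVG_EXAM_HOURS
print(f"Added examiner-hours per year: {added_hours:,}")   # 11,400,000
print(f"Net annual benefit: ${net_benefit_musd():.0f}M")   # $240M
```

Even on these rough figures, the extra examination effort pays for itself before counting any benefit from weeding out invalid patents that are never litigated.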
I'm a big fan of transformative use analysis in fair use law, except when I'm not. I think that it is a helpful guide for determining if the type of use is one that we'd like to allow. But I also think that it can be overused - especially when it is applied to a different message but little else.
The big question is whether transformative use is used too much...or not enough. Clark Asay (BYU) has done the research on this so you don't have to. In his forthcoming article in Boston College Law Review called, Is Transformative Use Eating the World?, Asay collects and analyzes 400+ fair use decisions since 1991. The draft is on SSRN, and the abstract is here:
Fair use is copyright law’s most important defense to claims of copyright infringement. This defense allows courts to relax copyright law’s application when courts believe doing so will promote creativity more than harm it. As the Supreme Court has said, without the fair use defense, copyright law would often “stifle the very creativity [it] is designed to foster.”
In today’s world, whether use of a copyrighted work is “transformative” has become a central question within the fair use test. The U.S. Supreme Court first endorsed the transformative use term in its 1994 Campbell decision. Since then, lower courts have increasingly made use of the transformative use doctrine in fair use case law. In fact, in response to the transformative use doctrine’s seeming hegemony, commentators and some courts have recently called for a scaling back of the transformative use concept. So far, the Supreme Court has yet to respond. But growing divergences in transformative use approaches may eventually attract its attention.
But what is the actual state of the transformative use doctrine? Some previous scholars have empirically examined the fair use defense, including the transformative use doctrine’s role in fair use case law. But none has focused specifically on empirically assessing the transformative use doctrine in as much depth as is warranted. This Article does so by collecting a number of data from all district and appellate court fair use opinions between 1991, when the transformative use term first made its appearance in the case law, and 2017. These data include how frequently courts apply the doctrine, how often they deem a use transformative, and win rates for transformative users. The data also cover which types of uses courts are most likely to find transformative, what sources courts rely on in defining and applying the doctrine, and how frequently the transformative use doctrine bleeds into and influences other parts of the fair use test. Overall, the data suggest that the transformative use doctrine is, in fact, eating the world of fair use.
The Article concludes by analyzing some possible implications of the findings, including the controversial argument that, going forward, courts should rely even more on the transformative use doctrine in their fair use opinions, not less.
In the last six years of the study, some 90% of fair use opinions consider transformative use.* This doesn't mean that the reuser won every time; quite often, courts found the use not to be transformative. Indeed, while a transformativeness finding is not 100% dispositive, it is highly predictive. This supports Asay's conclusion that transformativeness does indeed seem to be taking over fair use. And he is fine with that: based on his findings, Asay recommends that two of the fair use factors be used much less often. He arrives at that conclusion based on the types of works that receive transformative treatment. In short, while some cases seem to go too far, courts do seem to require more than a simple change in message to support transformativeness.
The paper has a lot of great detail, including transformativeness analysis over time, which precedents and articles are cited for support, which circuits see more cases and how they rule, the interaction of each factor with the others, the role of transformativeness in the other factors, and (as noted above) the types of works and uses at issue. Despite all this detail, it's a smooth and easy read. The only thing I would have liked in more depth is a time-based analysis of win rates, especially for shifting media.
*One caveat: the study omits many "incomplete" opinions that leave out discussion of multiple fair use factors, and it is unclear what those opinions look like. This decision is defensible, especially in light of the literature, but given the paper's suggestion that two of the fair use factors be eliminated, I think it would have been interesting to see the role of transformativeness in the cases where courts actually did eliminate some factors.
And so I read with great interest Jeremy Sheff's latest article, Jefferson's Taper. The article challenges the conventional understanding of Jefferson's views on intellectual property. The draft is on SSRN, and the abstract is here:
This Article reports a new discovery concerning the intellectual genealogy of one of American intellectual property law’s most important texts. The text is Thomas Jefferson’s often-cited letter to Isaac McPherson regarding the absence of a natural right of property in inventions, metaphorically illustrated by a “taper” that spreads light from one person to another without diminishing the light at its source. I demonstrate that Thomas Jefferson likely copied this Parable of the Taper from a nearly identical passage in Cicero’s De Officiis, and I show how this borrowing situates Jefferson’s thoughts on intellectual property firmly within a natural law theory that others have cited as inconsistent with Jefferson’s views. I further demonstrate how that natural law theory rests on a pre-Enlightenment Classical Tradition of distributive justice in which distribution of resources is a matter of private judgment guided by a principle of proportionality to the merit of the recipient — a view that is at odds with the post-Enlightenment Modern Tradition of distributive justice as a collective social obligation that proceeds from an initial assumption of human equality. Jefferson’s lifetime correlates with the historical pivot in the intellectual history of the West from the Classical Tradition to the Modern Tradition, but modern readings of the Parable of the Taper, being grounded in the Modern Tradition, ignore this historical context. Such readings cast Jefferson as a proto-utilitarian at odds with his Lockean contemporaries, who supposedly recognized property as a pre-political right. I argue that, to the contrary, Jefferson’s Taper should be read from the viewpoint of the Classical Tradition, in which case it not only fits comfortably within a natural law framework, but points the way toward a novel natural-law-based argument that inventors and other knowledge-creators actually have moral duties to share their knowledge with their fellow human beings.
I don't have much more to say about the article, other than that it is a great and interesting read. I'm a big fan of papers like this, and I think this one is done well.
There are few patent law topics as heatedly debated as patent holdup. Those who believe in it really believe in it. Those who don't, well, don't. I was at a conference once where a professor on one side of this divide just...couldn't...even, and walked out of a presentation taking the opposite viewpoint.
The debate is simply this. The patent holdup story is that patent holders can extract more than they otherwise would by asserting patents after the targeted infringer has invested in development and manufacturing. The "classic" holdup story in the economics literature involves incomplete contracts or other partial relationships that allow one party to take advantage of an investment by the other to extract rents.
You can see the overlap, but the "classic" folks think the patent holdup story doesn't count because there is no prior negotiation: the investing party has the opportunity to research patents, negotiate beforehand, plan its affairs, and so on.
In their new article, forthcoming in the Washington & Lee Law Review, Tom Cotter (Minnesota), Erik Hovenkamp (Harvard Law post-doc), and Norman Siebrasse (New Brunswick Law) try to resolve this debate. They have posted Demystifying Patent Holdup on SSRN. The abstract is here:
Patent holdup can arise when circumstances enable a patent owner to extract a larger royalty ex post than it could have obtained in an arm's length transaction ex ante. While the concept of patent holdup is familiar to scholars and practitioners—particularly in the context of standard-essential patent (SEP) disputes—the economic details are frequently misunderstood. For example, the popular assumption that switching costs (those required to switch from the infringing technology to an alternative) necessarily contribute to holdup is false in general, and will tend to overstate the potential for extracting excessive royalties. On the other hand, some commentaries mistakenly presume that large fixed costs are an essential ingredient of patent holdup, which understates the scope of the problem.
In this article, we clarify and distinguish the most basic economic factors that contribute to patent holdup. This casts light on various points of confusion arising in many commentaries on the subject. Path dependence—which can act to inflate the value of a technology simply because it was adopted first—is a useful concept for understanding the problem. In particular, patent holdup can be viewed as opportunistic exploitation of path dependence effects serving to inflate the value of a patented technology (relative to the alternatives) after it is adopted. This clarifies that factors contributing to holdup are not static, but rather consist in changes in economic circumstances over time. By breaking down the problem into its most basic parts, our analysis provides a useful blueprint for applying patent holdup theory in complex cases.
The core of their descriptive argument is that both "classic" holdup and patent holdup rest on path dependence: one party invests sunk costs and is thus at the mercy of the other party. In that sense, they are surely correct (if we don't ask why the party invested). And the payoff is nice, because it allows them to build a model that critically examines sunk costs (holdup) versus switching costs (not holdup). The irony, of course, is that it is theoretically irrational to worry about sunk costs when making future decisions.
But I guess I'm not entirely convinced by the normative parallel. The key in all of these cases is transaction costs. So the question is whether the transaction costs of finding patents are high enough to warrant investing without expending them. The authors recognize the problem, and note that when injunctions are unavailable, parties will refuse to take a license because it is more profitable not to (holdout). But their answer is that the existence of holdout doesn't mean that holdup isn't real and sometimes a problem. Well, sure, but holdout merely shifts the transaction costs, and if it is cheaper never to make an ex ante agreement (which it typically is these days), then it's hard for me to say that being hit with a patent lawsuit after investment is the sort of path dependence we should be worried about.
I think this is an interesting and thoughtful paper. There's a lot more to it than my brief concerns suggest: it responds to other critiques of patent holdup theory and provides a framework for debating these questions, even if I'm not convinced by the debate.