The Council on Library and Information Resources (CLIR) and the Digital Library Federation (DLF) have launched a new publication series, with the inviting name of "Ruminations."  It will feature short research papers and essays with fresh perspectives on the digital environment for scholarship and teaching.

Kicking off the launch is a new rumination from John P. Wilkin, who we interviewed not so long ago, about his work helping old titles "rise" into the public domain.

John writes us:

"I'd like to point readers to a piece I recently wrote about publication patterns and copyright status, which was just published on the CLIR website.  Based on the analysis of over 5 million books in HathiTrust and several years of copyright status analysis for US 1923-1963 works, I point out some important patterns in the dates and origin of the works.  The date distributions and the work Michigan has led on copyright determination help make clear how few of these books (proportionately) are likely to be in the public domain.  On a more speculative note, the numbers lead me to conclude that 'orphans' may represent a startlingly high percentage of published books.  If nothing else, I hope what I show here stimulates more debate and even more work to help refine our sense of what's in the public domain, what's in copyright, what's likely to be an orphan, and what the consequences of these numbers are."
A New Twist -- Securing Authors' Rights When Negotiating Content Licenses: an interview with Julia Blixrud, ARL, and Ivy Anderson, University of California

Conducted by Mary Minow and Eli Edwards, at ALA Midwinter Meeting in San Diego, California

Minow: Tell us about this major new step forward in the quest for open access.

Julia Blixrud: A part of the background for this effort was an author rights addendum that came out of work several years ago by SPARC, the Scholarly Publishing and Academic Resources Coalition. We worked with lawyers to develop a legal instrument that modifies the publisher's agreement and allows authors to keep key rights to their articles.  How could authors amend their agreements to allow them to use their own work in the way they wanted to?

Ivy Anderson: That was for an individual author, which is different from content licensing.

Blixrud: At the time, we thought the best way to be able to get our authors' content made freely accessible in libraries was for authors to say, "oh, wait I ought to retain some of my rights in order to be able to deposit and use my work in my environment."

You see, a lot of authors get an agreement from a publisher and they just automatically sign it without reading it. The agreement basically says, we the publisher have all rights to do whatever we want with this article in perpetuity.

Anderson: Which means that if you're the author, and you want to reuse your own work, you may have to get permission.

Blixrud: Get permission, or pay some fees ... and no one at your institution can do anything with your stuff either, unless they bought it and paid fees and so on.

The author addendum was the first attempt to get that content opened up and made available to the author herself as well as to the institution.

Minow: How's that been working out? SPARC put out some wonderful training tools, videos, slide shows, etc., for campuses to use to educate faculty authors.

Blixrud: Yes, but it's hard. It's a one-off. It's an added step for the author to get the publisher to understand why they have to sign the new agreements. Some of the publishers pushed back initially, and the authors would say, I don't want to go through the extra effort to push back on it -- that's one more hurdle I have to jump through to get my work published.

We don't have data. There's no reporting when they had a failure.  We get stories once in a while of someone who persisted and was successful. And a lot of publishers have created better contracts that let the author retain some rights. Some people just strike through "exclusive" and write "non-exclusive."  I've done that myself.

Sometimes it turns out that the publisher actually has a better agreement that they'll pull out if there is push-back.

Anderson: There's also a real problem of scalability and consistency. The authors and institutions have a hard time knowing what rights they've actually obtained.

Blixrud: Right - because I could adjust my agreement, but my buddy in the next office might adjust his agreement slightly differently.  So now what does the university have for its institutional website? It's hard to keep track of what the different authors have agreed to. Do the authors even know what they've signed? The burden gets put on the libraries to do one-offs on each of these articles.

Part I of Open Access Scholarship was an interview with Michelle Pearse, conducted by Executive Editor Mary Minow.  As promised, here is part II, which specifically addresses law reviews and legal scholarship.


Eli Edwards: Nearly two years ago, a group of academic law library directors promulgated the Durham Statement on Open Access to Legal Scholarship. It called for (1) open access publication of law school-published journals, and (2) an end to print publication of law journals, coupled with a commitment to keeping the electronic versions available in "stable, open, digital formats."

Recently, we talked with Richard A. Danner, Rufty Research Professor of Law and Senior Associate Dean for Information Services at Duke Law School.

Danner: Duke University adopted an open access policy in March 2010. The policy, which is available here [PDF], has not to my knowledge ever been published in final form. The policy is very similar to those adopted at schools and other entities at Harvard, the main difference being that Duke's policy is university-wide. It is also stated as a mandatory policy, but the legislative history, such as it is, indicates it is not.

Duke has had DSpace running, under the name DukeSpace, for several years, mostly for electronic theses and dissertations. In the short term it is also being used as the platform for archiving and access of faculty publications, but the longer term plans are to move toward a new repository system being developed at Duke using the Fedora Commons.

The Law School started its own Faculty Scholarship Repository in 2005, providing free access to the majority of all articles published by then-members of the Duke Law faculty. The contents of that archive are now the foundation of the Duke Law Scholarship Repository on bepress, with which we have partnered through NELLCO since 2009. Our repository policies do not include a mandate, but in practice nearly everything our faculty publishes of a scholarly nature is included. Over time, it will include: the texts of lectures delivered at Duke Law, webcasts from scholarly presentations and conferences, publications of Duke Law's research centers, Duke Law student works, and more.

Edwards: Prof. Danner, you recently presented a paper at Duke, at the workshop, "Implementing the Durham Statement: Best Practices for Open Access Law Journals" on Oct. 22, 2010. [The current draft of the paper is available here: Danner draft formatted RAD 23 Nov 2010.PDF (PDF); see below for an excerpt of the most recent draft - EE] If you were to summarize the progress made in the last two years, what would you say?

Danner: I am not sure what I can actually add to what is in the paper I sent, which my co-authors and I did revise substantially after the October conference. As Michelle [Pearse - see previous blogpost/interview] knows, student law journal editors often say that they are reluctant to move to all-electronic publishing because authors are reluctant to publish in non-print journals. [This and other issues surrounding open access legal scholarship will be part of an upcoming article in Duke Law Magazine.]

To assess these comments, my Duke colleagues, Marguerite Most and Kiril Kolev, and I have designed a brief survey to gather information about the attitudes toward electronic publication of scholars who have recently published articles in leading law reviews. After pretesting the survey in December and gaining approval from University reviewers, we will administer the survey in mid-January to authors of articles in the last two completed volumes of the lead journals published at the US News top-fifteen ranked law schools.

Faculty members who frequently publish articles in the top law reviews and journals will share their attitudes toward online publication by filling out a nine-question online questionnaire. About 500 scholars will be contacted via email and invited to fill out the survey. The questions are designed to gain insight into how important print publication is to authors who publish in leading law journals.

I think this should be an interesting exercise and look forward to the results.

Edwards: As will we! Thank you so much for talking with us, Prof. Danner.


Eli Edwards is an intern and Content Minion of the Stanford Copyright & Fair Use site.

Richard A. Danner is Senior Associate Dean for Information Services, and Archibald C. and Frances Fulk Rufty Research Professor of Law. Professor Danner has been active in the affairs of the American Association of Law Libraries, the International Association of Law Libraries, the American Bar Association Section of Legal Education and Admissions to the Bar, and the Association of American Law Schools. From 1984-94, he served as editor of AALL's Law Library Journal. He was President of AALL in 1989-90 and has chaired several AALL special committees and task forces; he served on the executive committee of the AALS from 2002-2004, and as first vice-president of the IALL from 2004-2010.



The Durham Statement Two Years Later: 

Open Access in the Law School Journal Environment

The Durham Statement calls for law schools to end print publication of law journals in a planned and coordinated effort led by the legal education community, focused on ensuring access to and preservation of the electronic journal literature. Without that effort, in an economic environment in which external factors are more than ever impacting libraries' collection decisions and law school budgets, what can we do to assure that electronically-published legal scholarship will remain available to future scholars?


1. It is time for law librarians to explore alternatives for preserving legal scholarship, working in concert with the other stakeholders, including:

  • Existing efforts to preserve legal information, such as the Legal Information Preservation Alliance (LIPA), which in 2010 established the Legal Information Archive as "a collaborative digital archive . . . to preserve and ensure permanent access to vital legal information currently published in digital formats."
  • Legal publishers holding extensive libraries of law journal content in electronic format -- LexisNexis and Westlaw, but perhaps primarily HeinOnline, with its extensive retrospective collections. Will their interests in preserving access to law journals for their commercial value mean they will now preserve digital content as libraries have traditionally preserved print content?
  • Established preservation and electronic archiving programs such as Portico and LOCKSS, which have worked mostly with libraries and publishers outside of law.
  • The Library of Congress, which already receives copies of all law journals whether published in print or electronic format under the mandatory deposit requirements of the Copyright Act, and works to establish best practices for digital preservation through the National Digital Information Infrastructure & Preservation Program (NDIIPP).
  • Institutional repositories, such as Harvard University's local Digital Access to Scholarship at Harvard (DASH), or services such as the bepress Digital Commons, which hosts repositories for a number of law schools and supports law review publication.
  • Printers of law journals, in order to forge the future role of print for preservation or print-on-demand services for legal scholarship.

2. It is also necessary to promote the use of common standards for formatting the files of the documents. Joe Hodnicki has noted ALA's and ACRL's calls for across-the-board format standardization, and the use of a standard mark-up language (e.g., XML) instead of PDF. Wayne Miller has proposed developing mutually agreed-upon law journal formats for archiving, preservation, and other uses.

3. It is time as well to take the initiative to create opportunities for dialogue with law school deans, law review editors, interested faculty, and legal information vendors on the need for concerted action regarding access to and preservation of electronically published law journals. These activities do not answer all of the concerns raised regarding the Durham Statement's call to end print publication of law journals, but they should at least provide a start for action toward meeting those concerns.


Mary Minow had a chance to talk with a colleague at Harvard Law School about Open Access.

Nearly two years ago, the Harvard University Faculty of Arts and Sciences unanimously voted to grant the university a non-exclusive, irrevocable, worldwide license to distribute faculty's scholarly articles, with an opt-out mechanism, for instance in the case of an incompatible rights assignment to a publisher.

Today, Mary talked with Michelle Pearse, Research Librarian for Open Access Initiatives and Scholarly Communication, Harvard Law School Library.

Minow: Michelle, now that the Open Access Policy has been in place for two years, how has it been working out?

Pearse: It has been an interesting journey. We are still in the process of reaching out to and educating the faculty, trying to get them to understand the policy and get it into their personal workflows. As part of our reorganization in Summer 2009, we made publication support part of library services, so we have tried to implement and educate faculty about the policy in that context (i.e. the policy is one aspect of the publication process now). The policy is often referred to as a mandate, which is a bit of a misnomer because faculty are always free to seek a waiver. (See the Director of Harvard's Office for Scholarly Communication posting about this issue on his Occasional Pamphlet blog.)

It can be challenging implementing such a policy. It is important that we make the process as simple and straightforward as possible. While the traditional mark of repository success seems to be the number of items deposited, I think the more important metric at this point is progress in educating the faculty and cultivating relationships with them so they see the library as a partner in their publishing experience---from initial research to disseminating the final product.

The open access policy itself applies only to scholarly journal articles, and our faculty actively publish books and other materials that do not even fall under the policy. We envision a "one-stop-shopping" system literally and figuratively. We are trying to develop workflows and technical systems that can truly realize that vision.

Minow: Since you have experience now with the journals, what has been the journal reaction to the policy?

Pearse: Overall, there is confusion about what these policies mean or are trying to do, so there is quite a bit of education with the publishers. The "teachable moment" often comes up when an author uses the addendum that the university has provided for faculty to send along with publication agreements. Most of the larger publishers of the peer-reviewed journals are already aware of the policy, and some have started asking their authors to show proof that they have submitted waivers. We have waiver language for faculty that states that the faculty member has granted Harvard a license with respect to his or her scholarly articles, and that a waiver is requested for a particular article.

In an odd way, it actually facilitates my outreach work with faculty as it brings the issue to the forefront.

There have been some instances where even when a waiver has been submitted, in the end the publisher agrees to budge a little bit from its routine policy as a compromise.

Minow: In what way?

Pearse: For example, the publisher may authorize self-archiving of a later version than it normally permits. With some of the bigger publishers, it can be a challenge figuring out the appropriate person with whom to discuss these issues.

Minow: Law reviews are produced by the law schools, and edited by students. Do you get a different reception from law reviews than you do from other journal publishers?

Pearse: Yes. By contrast, the law school law reviews are generally more supportive of the policy (particularly the ones that have their contents open or "gratis open access"), but they are not always comfortable with or do not always understand the terms of the Harvard license. We are trying to compile a list of law journals that are expressly supportive of the policy to facilitate workflow and educate faculty when they are publishing. At some point, if more law schools adopt open access policies, it would be great to have that information incorporated into submission systems and journal web pages.

Minow: How has it been implementing it in a university environment that has different schools enacting open access (e.g. centralized vs. local practices)?

Pearse: We were only the second school after the Faculty of Arts and Sciences (FAS) to adopt the open access policy, so it has been interesting to watch the Office for Scholarly Communication (OSC) evolve over time. We now have 6 schools at Harvard with OA policies. The growth in the number of schools has provided a fabulous opportunity to meet with colleagues working on similar issues, to share thoughts and processes for workflow, experiences with implementing the policies, etc. ... especially where scholarship has become so interdisciplinary now. Over time, the OSC has also developed rich external and internal sites where we can share tools to help with the administrative aspects of implementing the policy. It also has open access student "fellows" that we have occasionally used to help populate the repository. We are also hoping that centralized discussions and negotiating with publishers will be helpful in communicating with publishers and facilitating the deposit of content.

Some of the "advantages" of centralization, however, can also create some of the biggest challenges. For example, we are fortunate to have a central office to run the repository on a technical level (it uses DSpace), but it also means we sometimes have to wait for certain developments to take place or compromise if we have different ideas about the look and feel of the interface. In general, these issues tend to work themselves out. For example, delays in technical developments that are problematic for us often tend to be important to other schools as well, which can cause them to move up the priority list. The schools (and disciplines) have very different cultures, so it is interesting to see how these local cultural differences sometimes affect how we might approach certain aspects of implementing the policy, like outreach and workflow. It is also interesting to see how the language of the policies themselves is slightly different and has evolved with each new school adopting a policy. (At this point, each school has its own language and responsibilities in figuring out how it wants the policy to operate in its own school.) While we can share technical resources and information and harness the synergies that exist, I think we will have to think about ways to create overlays and develop underlying workflows that can be customized to accommodate our own needs.

Minow: Thank you so much for your update!


For part two of Open Access Scholarship, we will be discussing the Durham Statement and what has happened in the two years since its publication with Richard A. Danner, Rufty Research Professor of Law and Senior Associate Dean for Information Services at Duke Law School.


Mary Minow is the Executive Editor of the Stanford Copyright & Fair Use site.

Michelle Pearse is the Research Librarian for Open Access Initiatives and Scholarly Communication, Harvard Law School Library. You can follow her on Twitter at @aabibliographer.

New Copyright Videos


Copyright, and controversies over its enforcement, are by no means limited to the United States. The world’s first copyright legislation was England’s Statute of Anne, enacted in 1710. The Berne Convention for the Protection of Literary and Artistic Works, the first international copyright agreement, was adopted in 1886.

And while debates over copyright enforcement, length of protection and the extent of exemptions continue in the U.S., similar efforts and arguments are being made in Canada, the UK and Europe. Our video page has excerpts from the ongoing conversation. One highlight is a speech on copyright from Mathias Klang, a researcher and senior lecturer at the University of Göteborg in Sweden. Most of the latest videos are from a July 2010 conference called ORGCon, conducted by the Open Rights Group, a group devoted to advocating digital rights in the UK.

But for you hardcore Lawrence Lessig fans (and I am one, thank you very much), there’s also a new TED talk from him on copyright, fair use and remix culture mashed up with politics. Brief, but humorous and thought-provoking, as one would expect from Prof. Lessig.

   -- Eli Edwards, Content Minion

The Center for Internet and Society presents

Judith Finell
Invasion of the Tune Snatchers - Does Copyright Law Inhibit or Enhance Musical Creativity Today?

Thursday, October 21, 2010
Room 280A, Stanford Law School
Lunch will be served.

Music technology has radically changed the way in which music is composed, produced, performed, and obtained. Many artists openly utilize the works of others, often altering the core sonic characteristics of a sampled fragment. These developments pose new challenges to doctrines such as fair use, scènes à faire, and infringement criteria, such as access, transformative use, and prior art. Musicologist and expert witness Judith Finell will discuss these issues, and present musical examples from recent copyright cases.

Judith Finell is a musicologist who specializes in issues involving music as intellectual property. Her arena is the intersection of music, law, and technology. She formed her consulting firm Judith Finell Musicservices Inc. in New York over 20 years ago, to serve copyright and entertainment attorneys, and the music, entertainment, media, technology, and advertising industries. She has testified as an expert witness in many leading copyright cases throughout the country, and is a frequent guest speaker before attorney groups, law schools, and intellectual property organizations.

Her paper on this topic can be found at:

Rising Into the Public Domain: The Copyright Review Management System (CRMS) at the University of Michigan

Interview with John Wilkin, Associate University Librarian for Library Information Technology and Executive Director, HathiTrust and Principal Investigator for CRMS


Mary Minow: Where does CRMS fit into the scheme of other copyright tools, such as the Determinator?

John Wilkin: The Determinator is a good point of comparison for us. It serves as a resource for helping someone make a determination, and what we wanted to do is actually make determinations. The focus is on materials in our Collections, across the HathiTrust partnership. We are not so concerned about where a book comes from, because we think of [the corpus] as a "collective collection" ... materials from across the board.

I think we did have, early on, perhaps a naive sense that we might be able to make those determinations without the materials being in front of us, digitally or in print. We quickly concluded, though, that the only way to do the work was to have those works in hand. And we chose to have them in hand, digitally. And the digital flow of materials drives the prioritization process.

Minow: When you say digitally in hand, it sounds like researchers are allowed to look at the text, the preface, etc.

Wilkin: That's right. We have a strong authentication and authorization system, and it's tied into the Michigan CoSign system, but it also uses Shibboleth. So that gives us a lot of tools there. In this case, we use two-factor authentication for all reviewers. They have to authenticate [with a password], and they have to be, essentially, at their desk. They can't take their identities home and start looking at materials that are still in copyright. So it's very much justified by the work they're doing.

Minow: Doesn't Google make its own determinations of what's in the Public Domain? Do they come up with different determinations? Is there duplicative work going on?

Wilkin: We're doing the 1923-1963 work.

Minow: That is, a focus on books published between 1923 and 1963. Books published in the U.S. prior to 1923 are in the Public Domain. The Copyright Renewal Act of 1992 automatically extended the copyright terms of works published in 1964 and later.
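As an aside, the date rules just summarized reduce to a simple three-way classification. The sketch below is purely illustrative -- the function and label names are hypothetical, not CRMS code:

```python
def triage_us_publication(year: int) -> str:
    """Rough first-pass copyright triage for a book published in the US.

    Follows the date rules above: pre-1923 works are in the public
    domain; 1923-1963 works are public domain only if the copyright
    was not renewed; terms for 1964 and later were automatically
    extended by the 1992 Act.
    """
    if year < 1923:
        return "public domain"
    if year <= 1963:
        return "needs renewal review"  # the window CRMS reviewers examine
    return "in copyright"
```

For example, a 1950 imprint falls into the renewal-review bucket -- exactly the 1923-1963 material the CRMS project focuses on.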

Wilkin: Right. So far as we know, Google is not doing the 23-63 work. Both Google and HathiTrust do a layer of very automatic determinations. Ours is entirely automatic, based on elements in the MARC record. They have reviewers look at materials to do some [consultation] because occasionally the bibliographic information is not reliable. That's the point at which we'll look most similar, with some exceptions.

There are important areas where we deviate. We are opening up U.S. Federal Docs, post 1922. Google is considering that now, but they have been slow to do that. They're considering what classes of materials they'll open up. HathiTrust will say that U.S. government docs are, by and large, in the Public Domain.

Then we diverge. For example, we're going to look at U.S. pre-1923 materials as in the Public Domain, and we're going to look at users outside the U.S. differently for materials that were published outside the U.S. Does that make sense?

Minow: Help me out here.

Wilkin: For the user in the U.S. or really for anybody in the world, we deem U.S. works pre-1923 as being in the Public Domain. And for the user in the U.S., we also deem non-U.S. works pre-1923 as in the Public Domain. For users outside the U.S., we are fairly conservative with non-U.S. works. I think the date we're using now is about 1870. It's a rolling wall, and essentially a best guess. It would be that date for a young author who lived a long time who published something. We use statistical probability, and we roll that wall forward every year.
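The access rules Wilkin outlines here can be condensed into a small decision function. This is a sketch under the assumptions he states (a roughly 1870 rolling wall for non-US works viewed from outside the US), not HathiTrust's actual implementation:

```python
def deemed_viewable(pub_year: int, published_in_us: bool,
                    user_in_us: bool, rolling_wall: int = 1870) -> bool:
    """Conservative sketch of the geographic viewing rules described above."""
    if published_in_us:
        # US works pre-1923 are treated as public domain worldwide.
        return pub_year < 1923
    if user_in_us:
        # Non-US works pre-1923 are opened to users in the US.
        return pub_year < 1923
    # Non-US works for users outside the US: the rolling wall,
    # a best-guess date moved forward every year.
    return pub_year < rolling_wall
```

Note the asymmetry: under this rule a 1900 London imprint is viewable from the US but not from the UK.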

Minow: How do you figure out if the work was published first outside the country?

Wilkin: We primarily use the bib record of the publication. If the place of publication is outside the U.S., we assume that it was [first published there]. Effectively we are conservative unless we get a good look at something and make an individual determination.

We ingested 700,000 volumes one month, so that gives you a sense of the scale we're working at. We're never going to have the resources needed to do individual sorting -- this one should go here and that one should go there.
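Mechanically, the bib-record check Wilkin describes can run against the MARC 008 fixed field, whose positions 15-17 carry a place-of-publication code; codes for US jurisdictions are three characters ending in "u" (e.g. "xxu" for the United States, "nyu" for New York). The helper names below are hypothetical, and the ending-in-"u" test is a common cataloging heuristic rather than HathiTrust's actual rule:

```python
def place_code(field_008: str) -> str:
    """Positions 15-17 of the MARC 008 field hold the place-of-publication code."""
    return field_008[15:18].strip()

def assume_us_publication(field_008: str) -> bool:
    # Three-letter MARC country codes ending in "u" denote US jurisdictions
    # ("xxu" United States, "nyu" New York, "cau" California, ...);
    # two-letter codes such as "hu" (Hungary) must not match.
    code = place_code(field_008)
    return len(code) == 3 and code.endswith("u")
```

A record whose 008 reads "850101s1950    nyu ..." would be assumed US-published; anything else would stay in the conservative non-US bucket pending an individual determination.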

Minow: You mentioned that you're using the Determinator, but that's only available for Class A books. Are most of your materials Class A books?

Wilkin: They're all Class A books. The reviewers use the Determinator and other tools; they look at the book and they make an assessment. They look to see that there are no embedded rights problems in making those determinations.

Minow: Inserts - photos, stories, poems - you'd almost have to read every page.

Wilkin: Well, we look at acknowledgements, not the entire book. There are going to be some cases where the acknowledgements are not that adequate. We have an advertised takedown policy, and we've never been contacted about anything that is an insert.

Minow: It takes my breath away to look at that level.

Wilkin: The insert issue is of particular concern in Congressional materials, such as materials that are inserted into the record for hearings. We work with the assumption that these inserts are part of the public record and that they are provided or reproduced in that context.

Minow: In Section 108(h), the copyright law gives 20 years back to libraries and archives even on the web, if the work is not subject to normal commercial exploitation. Here's a chart I made showing, for example, that libraries and archives may make and distribute copies of works up through 1934 this year, instead of 1922. The catch is that the works cannot be subject to an undefined "normal commercial exploitation."

Wilkin: We're not taking advantage of that at this point.

Minow: Another thought I had, after reading Melissa Levine's article, is that many authors of older works retain their digital rights, because when they signed publisher agreements, digital rights were not yet contemplated. Are you taking advantage of that? [Opening Up Content in HathiTrust: Using HathiTrust Permissions Agreements to Make Authors' Work Available, Research Library Issues, no. 269 (April 2010): Special Issue on Strategies for Opening Up Content]

Wilkin: We're not. We're just testing the waters, taking baby steps. We're only dealing with works where the rights have reverted to the author and when the author or publisher knows they own the rights. As it turns out, we've had some fairly large lump permissions. For example, in at least one case where a journal died, the journal publisher gave us permission to open up the full run of the journal. As it turns out, a few organizations have opened up a large number of publications.

Melissa's article is an early step for us. We haven't gone out to seek permissions from authors, yet. But it's most definitely something we want to do.

Minow: The University of Michigan is a player in the OCLC pilot project, the WorldCat Copyright Evidence Registry. Does that mean your determinations of copyright for the works you examine then feed into that Registry?

Wilkin: I think that effort is in limbo right now. We did set up a mechanism that we could share our determinations with them. The Registry was set up to allow institutions to identify records that need to be enhanced or annotated with information about URLs and rights, etc. In our distribution mechanism, there's one record for every volume in the repository at this point.

We think of OCLC as a central switching point for bibliographic info, so it seemed like a natural fit for them to have a registry of copyright evidence. We were making data available to them, but in fact we now have 6 million volumes, each identified with either an automatic or a manual copyright determination from us, so that's more than what OCLC would have, I guess, aspired to do.

The CRMS process itself has covered only tens of thousands of volumes, but someone could start with our 6 million volumes and look for changes.

Minow: But it wouldn't be open in the sense that someone could put their own data in, right?

Wilkin: Exactly, and the Copyright Evidence Registry was intended to be that.

Minow: Is there anything you'd like to add?

Wilkin: Well, for us, the question is "what next?" The easiest "what next" is expanding to other partners. Anne's been busy. As we laid out in the grant, she is training staff in Indiana, Minnesota, and Wisconsin (she just finished Wisconsin), the three pilots, along with the Michigan staff. [Anne Karle-Zenith, Copyright Review Project Librarian] This winter she'll probably incorporate staff at a California partner.

And as we bring more hands in, it puts more pressure on the training and reliability piece as more people are making determinations.

Minow: Do you see members of the public as becoming able to add notes or comments in the future?

Wilkin: We have a tagging application for bib records. Probably not a day passes when someone doesn't say, "I think this is in the public domain," or ask, "Is this in the public domain?" That's what stimulates someone to look at it. So it is user-driven now. We won't take someone's assertion as fact, but it provides a good starting point for investigation.

Minow: Do you have plans to add other materials, besides "Class A" books?

Wilkin: In HathiTrust, we have much more than "Class A," but the only ones we're pushing into the workflow right now are "Class A." So that becomes the question for us, then: how would we go beyond "Class A"? How could we build a sustainable, cost-effective system? It's probably going to be something we do piece by piece, right?

Minow: I've heard that the Copyright Office is working on a retrospective conversion of the copyright registration and renewal records for the rest of the material types, beyond "Class A" books. If they make the records available in bulk, as they did with "Class A," then others can set up or build on databases like Stanford's "Determinator."

Wilkin: Did you know that between about 55% and 60% of our materials have been found to be in the public domain?

Minow: Fantastic!

Wilkin: The numbers you see out there say, like, only 15% are in copyright. Some assertions are pretty wild. There was some early work done by the Copyright Office, but the law was in flux at the time. It's best to have something statistically sound. I'm guessing that between pre-CRMS and CRMS, we've gone through 100,000 titles, and those numbers have held. I think we have another 400,000 titles to deal with in that period. One question we have is: how many titles ARE there in the 1923-63 period? There's just so much indeterminacy because of variation in cataloguing practice and ways of reporting things, and so on.

Minow: Are the other 40% works that you've determined are in copyright, or works you just can't figure out?

Wilkin: I think early on it was about 30% in copyright and 10% in UND (undetermined or undeterminable). Anne found that as staff got more experienced, they were getting stuck on complicated problems, and we often found a lower yield of public domain determinations. So Anne encouraged staff to push things to UND rather than force finality. The number of UND determinations has gone up, but the numbers in the public domain have stayed constant. That's really a workflow strategy kind of thing.

It's exciting to get those works opened up. The surprise has come in the titles. Because of the required renewal process, it's stunning to see what was not renewed. The first time I encountered this was with my 13-year-old daughter, who was doing a book report on code breakers. We found really modern materials by living mathematicians. I thought, "Oh, we're in trouble." Then, looking further, these were ones where renewal did not take place. It's interesting to learn the behavioral piece ...

But the numbers, the numbers are really very interesting, the 60/40 sort of thing.

Minow: And yet, going forward, this is not going to be the case, because now there's no renewal requirement. It's an anomaly, really, unless the law changes again in the other direction, which doesn't seem likely.

Wilkin: That's something for us to ponder as a society, as a culture, that these works are overwhelmingly not on the market. What's happening is, without this effort, no one is able to take advantage of the information that's there, or only in a limited way.

Another surprise involves the Committee on Institutional Cooperation, the CIC: by contract with Google, the non-Michigan, non-Wisconsin CIC institutions don't get back their in-copyright materials. I think what we ought to say is that they don't get back those things that are putatively in copyright. With those numbers in mind, think about what we are not able to put online because it's assumed to be in copyright, when we know that 60% or some large percentage is in the public domain.

Minow: You mean, those institutions are not getting access to the full text of their own books?

Wilkin: They stay at Google, they're embargoed. That may change with an amended agreement, but for now, Google doesn't provide them back.

Minow: I thought those were called "library copies."

Wilkin: It is important to call them "embargoed copies." Jack Bernard, our Assistant General Counsel, has asked us to use the term "rising into the public domain" instead of "falling into the public domain."

Minow: That's a good title for this interview. Thanks so much for talking with us today.

Fair Use, Free Speech and Social Value
Anthony Falzone, Esq. - Executive Director, Fair Use Project, Lecturer in Law, Stanford Law School

Fair use has been enshrined as a First Amendment safeguard. But is it doing the job? A look back at recent fair use decisions suggests we might need to recalibrate the four-factor analysis to address more explicitly the social functions of copyright and fair use.

Boston Bar Association, CLE - Recent Trends in Copyright and Trademark Fair Use - How Fair is Fair Enough?

The 2010 DVD Exemption to the DMCA: An Interview with Abigail De Kosnik, Gary Handman and Mark Kaiser of University of California, Berkeley

Guest interviewer: Eli Edwards


The latest round of Digital Millennium Copyright Act exemptions, granted by the Librarian of Congress, has received a lot of press, partly for an exemption for bypassing DRM on DVDs and partly for the two exemptions that allow "jailbreaking" of smartphone operating systems (such as the iPhone) to allow non-authorized software and applications to run on the phone, or to use the phone on a non-authorized wireless network.

The most recent DVD exemption is as follows:

(1)  Motion pictures on DVDs that are lawfully made and acquired and that are protected by the Content Scrambling System when circumvention is accomplished solely in order to accomplish the incorporation of short portions of motion pictures into new works for the purpose of criticism or comment, and where the person engaging in circumvention believes and has reasonable grounds for believing that circumvention is necessary to fulfill the purpose of the use in the following instances:

(i) Educational uses by college and university professors and by college and university film and media studies students;

(ii) Documentary filmmaking;

(iii) Noncommercial videos

To find out more about the DVD exemption and what it means for the educational community, we talked to three people who advocated for the DVD exemptions at the DMCA rulemaking hearing held at Stanford Law School by the Copyright Office last year. Professor Abigail ("Gail") De Kosnik, Gary Handman, and Mark Kaiser are all educators at the University of California, Berkeley, and all three addressed the copyright panel on the importance of being able to make high-quality film clips for their teaching and research activities (transcript of the Stanford hearing here).

Your suggestions are welcomed at any time. Please send to