Personnel

The size of the HathiTrust collection represented an incredible opportunity and an enormous task. Few institutions have the resources to accomplish over three hundred thousand copyright determinations in seven years. The willingness of nineteen institutions to work together made that achievement possible.

As CRMS grew in scale, we gained a better understanding of what remote collaboration could accomplish and what it required. Remote collaboration required significant investments in the development of tools and techniques to train and communicate with more than sixty reviewers in geographically diverse locations. Management of a large project required frequent communication with reviewers and their supervisors, maintenance of technical infrastructure, global access to the review interface, and consistent project documentation. This section offers insights on staffing, maintaining, and expanding a remote network of reviewers like those who made up CRMS.

Selecting Reviewers

The skills needed to be a successful copyright reviewer are very similar to those suited to employment in other areas of the library. Your project should seek reviewers who demonstrate fine attention to detail, facility with a computer, and an ability to think critically. A willingness to ask questions and to adapt is also an important reviewer trait. Because reviewers follow a defined decision tree, it is not necessary for them to be copyright “specialists” or to have more than a fairly basic knowledge of copyright law.

It is important to select reviewers with pattern recognition and critical thinking skills. The realm of the possible in monographic publishing is immense and varied. Often a single phrase or caption in a volume can affect a decision, and that kind of examination requires thorough attention to detail and an ability to think critically. Training will not be able to cover every eventuality. However, if done correctly, it will enable reviewers to understand why decisions are made and how they can apply their knowledge in new situations.

We do not enforce a production mind-set on our reviewers, but some reviewers exhibit this tendency and execute a high number of determinations. Others take their time on detailed searches for an elusive author death date. Either characteristic could be more or less attractive based on the desired outcomes of your project. In our experience, the accuracy of reviews is relatively consistent across reviewers regardless of individual pace and work styles. If you ask your reviewers to focus on high production numbers, you should anticipate that a greater percentage of reviews will be indeterminate, as reviewers will set more complex volumes aside. For projects with a focus on higher determinacy, reviewers will take more time or require more specialized resources.

Time Commitments

The first step in bringing new reviewers onto the CRMS project is securing a formal and documented commitment from the partner institution. A specific time commitment for each reviewer is essential, given the substantial resources the CRMS management team expends in training them. The time commitment for a reviewer must be reasonable and achievable, and it should be settled prior to the commencement of training. After several years of observations and discussions with CRMS partners, we can offer recommendations for reviewer time commitments.

Minimum Time Commitments

As with any skill that requires practice to attain proficiency, copyright review requires a minimum weekly time commitment for review skills to remain at their sharpest. One of our first observations in CRMS-World was that a majority of reviewers with a time commitment of 5 percent FTE (about two hours a week) either stopped performing reviews altogether or voluntarily increased their time. From this we concluded that two hours per week of copyright review is not a sustainable model for maintaining engagement.

Maximum Time Commitments

We noticed a decline in productivity for reviewers who had time commitments of 33 percent to 50 percent FTE (thirteen to twenty hours per week). Many of these reviewers were not reaching the numbers we would have expected given the productivity of reviewers working at lower time commitments. We sampled average productivity twice a year during the first two years and found that the decline seemed to affect those with time commitments of 33 percent FTE or greater.

Further Consideration

Discussions with our partners brought to light information that might explain these observations. Some of the reviewers assigned to higher time commitments also held managerial positions within their libraries, and those concurrent priorities competed for time with CRMS. To compound the issue, the copyright review process itself becomes very repetitive and tedious when performed at length. In our own experience, twenty hours a week or more of copyright review is unsustainable over the long term. We would caution project planners against having unrealistic expectations of reviewers.

Our current position is that a time commitment of 15 to 25 percent FTE (six to ten hours per week) is ideal. Reviewers will have sufficient time to retain their skills without the risk of overload. We recognize that, ultimately, your project team will have to allocate human resources based on the priorities of your institution. We accommodated time commitments outside of our recommended range; however, it is best to understand the staffing implications when discussing project expectations with your partners.
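
For readers translating these percentages into scheduling terms, the arithmetic is simple. The short Python sketch below shows the conversion informally; it assumes a standard forty-hour work week, which is how the hour ranges quoted above were derived.

    # Convert an FTE percentage to weekly hours, assuming a 40-hour work week.
    FULL_TIME_HOURS_PER_WEEK = 40

    def fte_to_weekly_hours(fte_percent):
        """Return the weekly hours implied by a time commitment given as % FTE."""
        return FULL_TIME_HOURS_PER_WEEK * (fte_percent / 100)

    print(fte_to_weekly_hours(5))    # 2.0  hours/week -- too little to stay sharp
    print(fte_to_weekly_hours(15))   # 6.0  hours/week -- low end of our recommendation
    print(fte_to_weekly_hours(25))   # 10.0 hours/week -- high end of our recommendation
    print(fte_to_weekly_hours(50))   # 20.0 hours/week -- where productivity declines appeared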

Security and Authorizing Reviewers for Access

A fundamental requirement of the CRMS copyright review process is access to potentially in-copyright digital scans. We gained access to scans by partnering with HathiTrust, which manages the security and authorization mechanism. Pulling physical books from the library shelves is a viable choice for copyright review, but not for a project at this scale.

HathiTrust and the University of Michigan Library impose access restrictions to protect the system infrastructure and the copyrighted material under review, which made it unnecessary for the CRMS project team to develop an access control system of its own. Access restrictions are expensive and challenging to develop, so the opportunity to rely on HathiTrust’s established and robust system was a significant advantage for CRMS.

For each individual CRMS reviewer, the CRMS project manager works with HathiTrust to authorize access to digital scans. Authorization is limited by purpose, location, and time. Reviewers may use their access only for the purposes of copyright review, and the digital scans can be viewed only from their designated IP address. After a set time (usually six months), the reviewer must sign a new “Statement for Access” form to renew access.
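
Conceptually, the three limits on each authorization (purpose, location, and time) amount to a simple allow-list check. The sketch below is purely illustrative and is not HathiTrust’s actual access control system; the record fields and function names are hypothetical, but they mirror the constraints described above.

    from datetime import date

    # Hypothetical authorization record: one per reviewer, renewed when a new
    # "Statement for Access" form is signed (typically every six months).
    AUTHORIZED = {
        "reviewer01": {"allowed_ip": "192.0.2.15", "expires": date(2015, 6, 30)},
    }

    def may_view_scans(user, source_ip, today=None):
        """Illustrative check: access only from the designated IP address and
        only while the signed authorization is still current."""
        today = today or date.today()
        record = AUTHORIZED.get(user)
        if record is None:
            return False                      # never authorized
        if source_ip != record["allowed_ip"]:
            return False                      # wrong location
        return today <= record["expires"]     # authorization not yet expired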

Training

It is our observation that a centrally run training program works better than a distributed “train the trainer” approach. If you intend to have a large group of participants on your project, your team should include someone who is familiar with instructional design and has teaching experience. This person should also keep up their skills by participating in copyright review regularly. A supervisor who knows theory but does not regularly perform copyright review will not have the practical experience necessary to reliably teach the research process. A good and responsive trainer must also be prepared to answer questions and manage personal communication, serving as a primary contact for the reviewers throughout the project.

Once a staff member has been designated and both parties agree that her time commitment is reasonable and achievable, she proceeds through a training process. We budget approximately ten hours of managerial time per person for training. The length of time a staff member needs to complete training depends on her ability and the amount of time she can devote each day to it. It can take between three weeks and three months for a new reviewer to complete training, averaging around a month and a half.

We have experimented with both one-on-one tutoring and group training methods. There are pros and cons to each approach. One-on-one tutoring does not require a time investment in the creation of online learning objects such as videos and tests, and trainers can schedule individual sessions to give demonstrations and comments via screen sharing. Essentially private tutoring, this method adds an element of personal accountability and can more quickly help confirm concept mastery. It is also the most time-intensive method for the training team and does not scale up well. No more than three trainees assigned per tutor is a good rule to follow with this method. We employ it when there are only a few people who must be trained quickly.

We expected a group class method to make training move more quickly while also saving staff time. With it we were able to scale up in a way that was not possible with individual tutoring. Hosting group classes also confined training to discrete and scheduled cycles, giving the management team a break from constant activity. We did this by creating video tutorials and online testing modules that were part of a standardized educational plan. This was intended to give all trainees as similar an experience as possible, minimizing gaps in topic coverage. We reused the course videos and documents for several subsequent cycles, but after two years, the majority were in need of updating. Overall, group training does not significantly reduce the amount of time needed from the management team but shifts it to other activities.

During the training period, the management team will engage in the following tasks:

  • Leading videoconferences to introduce the project and provide a basic foundation
  • Grading and providing feedback on comprehension tests
  • Answering daily questions
  • Adjudicating practice reviews
  • Communicating weekly progress to supervisors
  • Providing individual tutoring as needed
  • Troubleshooting system access problems

Group training does enable a higher volume of people to be trained but results in much longer training periods. Factors that may increase the length of training time include supervisors not allotting the trainee enough time to do the work, access problems in the computing environment, and environmental factors like too small a monitor. A group training class of about fifteen trainees can typically require two months or more.

Distance Learning

Early training of CRMS reviewers happened on site at the University of Michigan. This was logistically difficult, with high costs for travel and hosting. As our institutional partners and reviewers increased in number, in-person training became a barrier to making necessary personnel changes, which arose as staff retired or transferred to other jobs. Robust distance learning options helped the project adapt to these midstream staffing changes.

One of the fundamental elements of the CRMS grant was to study possible methods for sharing large-scale copyright review among institutions. Our CRMS grant explicitly pointed to online training as a vehicle for extending the work more broadly: “Online Training: We will develop and implement a web-based online training course to teach qualified librarians and similar professionals to be reviewers so they may make copyright determinations. This process will be refined and documented in the pre-grant period, reviewed, and validated by the Advisory Working Group. This will allow us to scale up the number of reviewers over the course of the grant.”[64]

Distance learning fulfilled its promise, and we now rely exclusively on remote training for bringing new reviewers into the system. We have explored a number of remote training tools, which we discuss in the following sections.

Sandbox

In order to give trainees a chance to practice, we created a static “sandbox” instance of the review interface. The sandbox is a clone of the production interface but totally separate, so any mistake a trainee makes has absolutely no impact on daily CRMS production. This offers new reviewers the opportunity to become accustomed to the tools they will be using. Hands-on practice in the sandbox makes it easier to visualize and internalize the decision-making steps.

The sandbox is populated with recently validated reviews pulled from production. A new trainee needs only to complete the second review of the pair, and their work can be checked against the first. This takes advantage of work produced by experienced reviewers and allows us to simulate pairing new recruits with veteran reviewers. In a relatively short period of time, we can gauge how quickly new reviewers are learning CRMS practices and also better understand any areas of confusion.
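
Because each sandbox volume already carries a validated determination from production, checking a trainee’s practice work reduces to comparing their answer against the stored one and tallying an agreement rate. The sketch below, with hypothetical field names rather than the actual CRMS data structures, illustrates that comparison; it is one way to compute the practice accuracy figure discussed under “Readiness for Production” below.

    # Illustrative comparison of trainee practice reviews against the validated
    # determinations already stored for the same volumes.
    def practice_accuracy(trainee_reviews, validated_reviews):
        """Return (matches, total, accuracy) for a trainee's sandbox work.

        trainee_reviews / validated_reviews: dicts mapping a volume id to a
        rights determination label such as "pd", "ic", or "und".
        """
        shared = set(trainee_reviews) & set(validated_reviews)
        matches = sum(1 for vol in shared
                      if trainee_reviews[vol] == validated_reviews[vol])
        total = len(shared)
        return matches, total, (matches / total if total else 0.0)

    # Example: agreeing with the validated review on 95 of 100 practice volumes
    # would clear the 92 percent threshold described later in this section.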

The sandbox system requires secure authorization, which may take a few days to complete. While waiting for authorization, trainees are asked to study CRMS documentation and demonstrate a basic understanding of the process. We administer two short tests of multiple-choice and short-answer questions to confirm their mastery of the process. Once they pass, trainees are free to work independently within the sandbox.

Other Training Tools

A number of additional tools have proven useful for training reviewers. Most are general library-supported products or more affordable options.

  • Qualtrics. Used to create “open-book” tests in which the answers are validated and a report is automatically e-mailed to the instructors via trigger e-mail. Qualtrics provides results in a PDF format for trainees to refer to, with instructor commentary on missed answers. (See appendices for examples of two Qualtrics tests used in the project.)
  • Adobe Acrobat Professional. Used to add instructor comments onto PDF format survey/test results.
  • Skype or BlueJeans videoconferencing. Used to connect with trainees in one-on-one sessions. Screen-sharing features allow trainees to go through several reviews while the instructor prompts them with additional questions and commentary. For a time, Skype did away with its screen-sharing ability unless you paid for a premium subscription, and we also had trouble installing the client on computers at some institutions. On the whole, BlueJeans performed better with diverse computing environments, but the interface was moderately less intuitive and required more explanation for some trainees.
  • Headset microphones. Used to allow hands-free videoconference screen sharing while demonstrating reviews. Generally trainees can borrow a headset microphone for the few days that they require it.
  • MediaWiki. Used to provide a password-protected wiki site to document common questions and reviewer scenarios. This is a good knowledge-sharing tool and allows reviewers to seek answers to commonly asked questions.
  • Camtasia Studio. Used to create screen capture videos with voice-over and captioning to demonstrate basic steps and actions taken within the interface. The videos are stored online and can be used to demonstrate features of the project to outside observers. This is immensely helpful for demonstrating features of the CRMS system and interface.
  • Flowcharting software. Used to diagram a workflow and create CRMS decision trees. Free online programs did not permit us to create charts that could be easily edited; Microsoft Word proved to be a flexible, lightweight, readily available alternative that allowed us to easily update workflow documentation as needed.
  • MediaGallery (U-M Library’s video content management system). Used to host screencast videos in a location where anyone with the link can gain access and view them. Online sites such as Screencast.com could work as well, but we ran into bandwidth limits using the free service. This was not sustainable, as videos could not be viewed until the bandwidth was reset in the next month.

Readiness for Production

Trainees were required to complete a minimum of one hundred practice reviews with an accuracy rate above 92 percent before they were approved for production. Meeting this accuracy threshold confirmed a reviewer’s ability to follow CRMS processes. If trainees did not meet this standard, we assessed their invalidated reviews and worked with them to improve their understanding of the CRMS process.

Reviewer Communication

There are a number of ways in which we communicate directly with reviewers rather than through their supervisor. These communications are intended to motivate, build community, announce policy changes, and share data about individual and group progress.

Some of the communication methods we tested were less useful than anticipated. One was the chat reference tool Zoho, which we linked to the sandbox interface. It was intended to provide real-time Q&A with experts while a trainee was in the middle of a copyright review. We stopped using Zoho chat after learning that it was difficult to maintain staffing with only three people able to provide reference. Trainees also preferred getting an answer by e-mail so they could archive the response.

Likewise, we explored the notion of displaying a personal “progress toward goal” bar. This would be able to track the number of minutes reviewers spent in the system and display a thermometer chart of their monthly progress. However, this was not an accurate metric for the time actually spent doing work on reviews. In the end, we decided not to implement this feedback tool because the inaccuracy could be demotivational.

Quite a few of the methods we tested have been effective, and we continued to use and refine them throughout the course of the project:

  • All-reviewer e-mail group. Any reviewer can post a question, share an interesting item, or report technical troubles. The way reviewers use this group has changed over time, both with their comfort level in doing reviews and as the number of people on the list has grown. At the start of the project, it was highly useful to help calibrate decision making during reviews. Toward the end of the project, it became primarily an arena for notifying others of access and outage problems.
  • Trainee-only e-mail group. A closed group available only to new people during a training period. This provides a semiprivate space to ask questions.
  • All-reviewer conference calls. Scheduled twice a year via Adobe Connect. We use the conference calls to update reviewers on CRMS practices, share helpful tips, introduce resources that will make review work easier, and provide general progress updates. The calls help everyone feel connected to the project as a whole and its goals.
  • Weekly automated data e-mail. A lightweight stats report sent Wednesday mornings to all reviewers, giving a snapshot of how each institution did the previous week. It is a friendly motivator and a convenient reminder to contribute time each week to the project.
  • Trigger e-mail following seven days of inactivity. An e-mail triggered on an individual basis when a reviewer has not been in the system for seven days (a minimal sketch of such a trigger follows this list). It reminds inactive reviewers to contact their supervisor or us if their availability has changed.
  • Personal stats display. A personal tally page within the interface that updates daily to display personal number of reviews accomplished, minutes worked, and validation statistics. Some reviewers track this information more closely than others and are motivated by it.
  • Historical reviews. An interface for searching all determinations made during the project. Searchable by user and verdict so reviewers may check and learn from their reviews or the reviews of others.
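
The seven-day inactivity trigger mentioned above is conceptually just a scheduled query over last-activity dates. A minimal sketch, using hypothetical data structures rather than the actual CRMS schema, might look like this:

    from datetime import datetime, timedelta

    INACTIVITY_THRESHOLD = timedelta(days=7)

    def find_inactive_reviewers(last_activity, now=None):
        """Return reviewers whose most recent system activity is more than
        seven days old. `last_activity` maps reviewer id -> datetime of the
        last review session (an assumed structure, not the CRMS schema)."""
        now = now or datetime.now()
        return [reviewer for reviewer, seen in last_activity.items()
                if now - seen > INACTIVITY_THRESHOLD]

    # A scheduled job could e-mail each reviewer returned here, reminding them
    # to contact their supervisor or the project team if availability has changed.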

Benchmarking and Ongoing Reviewer Management

In order to assess and manage reviewer time commitments, you should have a system for benchmarking productivity that helps to set expectations while recognizing the complexity of copyright law. This is more art than science, and we are attentive to the fact that some reviewers take longer to reach a determination and some have time-intensive research skills that others do not. From our perspective, reviewers with a diversity of research skills, speed, and persistence can complement each other to great effect. With that said, we recommend establishing reasonable baseline expectations, along with mechanisms for holding reviewers to those standards.

It is difficult to set performance goals without an idea of how many reviews can reasonably be completed within a given time period. At the beginning of a project, work with your reviewers to study the time required to perform a set number of reviews. Identify the percentage of public domain, in-copyright, and undetermined volumes in the sample and evaluate whether adjusting productivity benchmarks would improve the determinacy outcomes. From that sample, set your benchmarks for productivity. Build flexibility and room to breathe into your standards, and be sure to reassess your benchmarks as the project evolves.

We would encourage you to consider both speed and determinacy when setting standards for your project. A high-determinacy project will likely require more time per volume; a high-production project may by necessity set more volumes aside as undetermined.
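
As a concrete illustration of that sampling step, the sketch below computes a reviews-per-hour figure and the share of each determination category from a timed sample. The category labels and numbers are hypothetical, not CRMS data.

    from collections import Counter

    def benchmark_from_sample(determinations, total_hours):
        """Summarize a timed benchmarking sample.

        determinations: list of outcome labels for each reviewed volume, e.g.
        "pd" (public domain), "ic" (in copyright), "und" (undetermined).
        total_hours: reviewer time spent on the sample.
        Returns reviews per hour and the percentage of each outcome.
        """
        counts = Counter(determinations)
        total = len(determinations)
        rate = total / total_hours
        shares = {label: 100 * n / total for label, n in counts.items()}
        return rate, shares

    # Hypothetical example: 60 volumes reviewed in 10 hours.
    sample = ["pd"] * 27 + ["ic"] * 21 + ["und"] * 12
    rate, shares = benchmark_from_sample(sample, total_hours=10)
    # rate == 6.0 reviews/hour; shares == {"pd": 45.0, "ic": 35.0, "und": 20.0}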

Experts

As part of the CRMS review process, two different reviewers look at each candidate volume independently. If their results match, their shared judgment is accepted. If their results do not match, then there is a conflict, and an expert evaluates both independent reviews and adjudicates between them. An expert in CRMS is a reviewer with substantial review work experience who has demonstrated a high level of knowledge of CRMS processes. After receiving additional training, experts are qualified to examine and adjudicate mismatches in the copyright determinations of their fellow reviewers.

Having an appropriate number of experts is necessary to avoid a bottleneck in the workflow. Roughly 30 percent of reviews require an expert adjudication. We have found that an individual expert reviewer can look at approximately 200 conflicts a day. A system of similar scale to CRMS-World, which generates about 130 new conflicts per day, would ideally involve four trained experts. This number provides a margin of safety in the event of staffing changes and helps distribute the workload.
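
The staffing arithmetic behind that recommendation can be made explicit. The sketch below uses the figures quoted above (about 130 new conflicts per day, and roughly 200 adjudications per day for an expert doing nothing else) plus two illustrative assumptions of our own: that experts spend only part of each day adjudicating, and that you want at least one spare expert as a margin of safety.

    import math

    def experts_needed(conflicts_per_day, capacity_per_day,
                       fraction_of_day_on_adjudication, spare=1):
        """Rough estimate of how many trained experts a review system needs.

        capacity_per_day is what one expert could adjudicate if adjudication
        were their only task; fraction_of_day_on_adjudication scales that down
        to reflect their other duties (an assumption, not a CRMS figure).
        """
        effective_capacity = capacity_per_day * fraction_of_day_on_adjudication
        return math.ceil(conflicts_per_day / effective_capacity) + spare

    # With the CRMS-World figures and an assumed 25 percent of each expert's
    # day spent on adjudication: ceil(130 / 50) + 1 = 4 experts.
    print(experts_needed(130, 200, 0.25))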

Supervisor Communication

Clear and regular communication with partner institution supervisors is the key to helping CRMS reviewers meet their time commitments, as copyright review often competes for time and attention with other high-priority, institution-specific work. When communicating with supervisors, we work to ensure that CRMS reviewers can commit the time necessary and are not overwhelmed by competing priorities. When committing people to new work, supervisors must consider what other duties will need to be reduced.

We saw some areas where additional materials could help facilitate communication with supervisors, including the following:

  • A CRMS reviewer job description that can be placed in a personnel file and used to discuss the work with supervisors who are unfamiliar with it and might otherwise see it as “extra” rather than part of regular duties
  • An “external administrator” role that allows a supervisor to view personal statistics of reviewers at that institution
  • A weekly inactivity report that is used to discuss personnel changes and absences with supervisors

CRMS, as a cross-institutional collaboration, has benefited from thoughtful development of our modes of communication. The swift increase in the project’s scale made informal communication methods less effective with a large group. Communication requires time and human resources, but it is vital to the health of a large-scale project.

Cost-Share Reports

Cost-share partnerships have been a part of CRMS since the start of the second National Leadership Grant from the Institute for Museum and Library Services (IMLS) in 2011. Cost-share partner institutions must report their contribution toward the overall grant match required by IMLS. Tracking partners’ progress and financial reporting is a significant administrative undertaking.

It was tempting for supervisors and reviewers to think of contributions to the project solely in terms of the number of hours spent working with the interface. However, the cost-share commitments were expressed in dollar amounts, so the reviewers’ salaries were the critical factor when tracking fulfillment. This could make replacing a departing reviewer more complicated: if the incoming reviewer earned a different wage, the new arrival would have to devote a different number of hours to the project to match the predecessor’s contribution.
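
To make the wage effect concrete, the sketch below converts a dollar-denominated cost-share commitment into the hours a given reviewer must contribute; the dollar figures and wages are invented for illustration only.

    def hours_to_fulfill(commitment_dollars, hourly_rate):
        """Hours of review work needed to satisfy a dollar-denominated
        cost-share commitment at a given hourly wage."""
        return commitment_dollars / hourly_rate

    # Hypothetical example: a $5,000 commitment.
    print(hours_to_fulfill(5000, 25))  # departing reviewer at $25/hour: 200 hours
    print(hours_to_fulfill(5000, 32))  # replacement at $32/hour: 156.25 hours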

Cost-share management, therefore, also depended on education and regular updates for supervisors at the partner institutions. When a partner began falling behind on a commitment, the earlier we notified them, the easier it was for them to make up the difference.