Jisc Inform

Jisc Inform, the charity’s termly online magazine, published a look at KB+ in its spring edition. The article includes a round-up of the work so far and a look ahead at the future of KB+.

You can sign up to receive Jisc Inform and keep up to date with all things education, research and technology!

Understanding historical entitlements to journals (or not)

Alongside the work that we’ve been doing for KB+ to make sure that we have accurate data on the titles included in 2012 NESLi2 agreements, JISC Collections has been working with EDINA on a scoping study for an Entitlement Registry and PECAN2. These projects are almost at an end, with final reports due in mid-April, and we are currently running workshops with institutions to review what has been done and what institutional priorities might be.

This work is very closely aligned with KB+, providing a historical record of title coverage, institutional subscriptions and post-cancellation access rights for NESLi2 agreements.

Unfortunately, as so often seems to be the case, this is easier said than done.

As one librarian said, at any one time there seem to be at least three different records of what titles an institution subscribes to: the institution’s, the publisher’s and the subscription agent’s. Trying to reconcile these records is enormously time-consuming, and it also appears to be work that has to be repeated year in, year out, at enormous cost and effort on all sides.
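To make that reconciliation burden concrete, below is a minimal, purely illustrative Python sketch that compares three hypothetical holdings lists keyed by ISSN and flags where the record-keepers disagree. The ISSNs, data and function names are invented for illustration; a real comparison would also involve coverage dates, title transfers and licence terms, typically exchanged in formats such as KBART.

```python
# Illustrative sketch only: toy ISSNs and plain sets stand in for real
# holdings exports, which would also carry coverage dates and licence terms.

def reconcile(institution, publisher, agent):
    """Return {issn: [sources that list it]} across the three records."""
    sources = {"institution": institution, "publisher": publisher, "agent": agent}
    all_issns = set().union(*sources.values())
    return {issn: [name for name, holdings in sources.items() if issn in holdings]
            for issn in sorted(all_issns)}

# Toy data: only the first title is agreed by all three record-keepers.
institution = {"1111-1111", "2222-2222"}
publisher = {"1111-1111", "3333-3333"}
agent = {"1111-1111", "2222-2222", "3333-3333"}

for issn, listed_by in reconcile(institution, publisher, agent).items():
    status = "agreed" if len(listed_by) == 3 else "DISCREPANCY"
    print(f"{issn}: {status} - listed by {', '.join(listed_by)}")
```

Even this toy version shows why the work recurs: any one source can change independently, so the comparison has to be re-run each renewal cycle unless an agreed record is maintained somewhere central.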

Now, some may say that this isn’t important, and no one is claiming that there are huge issues with access to subscribed content, but I think there are some important reasons why, as a community, we should have a solid understanding of what we do and don’t have rights to:

  1. Institutional knowledge – at the moment many institutions have to ask third parties for information on what they have and haven’t subscribed to, yet they seldom have much faith in the answers they receive from those external partners.
  2. Best practice – at a very simple level it makes sense to understand what has been purchased and what rights one has to that content. From a licensing perspective, it should be up to institutions, publishers and those who act on their behalf, such as JISC Collections, to make sure that the licences are clear on this.
  3. Understanding an offer – assessing an offer, and any decisions you may wish to make about cancellations, renewals, substitutions etc., requires knowing what the impact on access will be.
  4. Transition to electronic and relegation of print – uncertainty about post-cancellation access rights is a barrier to institutions when considering getting rid of their print collections or fully moving to electronic.
  5. Decision making – time repeatedly spent working out what has been purchased and what rights apply to it is time that isn’t spent on more important decisions about collection development, improving the user experience or considering the nature of the library service that will be delivered.
  6. Improved services – being able to make accurate records of this information available could provide an opportunity for subscription agents, systems vendors, publishers and negotiating bodies to improve the services that they can provide to institutions.

However, we are where we are, and the amount of work involved in putting this right is considerable. Based on the work undertaken so far and the valuable feedback from institutions, we are starting to understand some priorities, and some practical ways of achieving them that could be beneficial to all.

Sustainability: What will happen beyond August 2012?

Another output from Phase I of the project will be a business plan and model for further development of the knowledgebase service. This is expected to identify the costs and workflows associated with the creation and maintenance of both the data and the software tools to manage it. Phase I will not involve costs to individual institutions, as it is HEFCE-funded.

The project’s sustainability will depend on the extent of community ownership and the extent to which it succeeds in bringing related services together (hopefully more of a hub than yet another spoke!).

In the medium to long term there is the potential to develop more radical services and initiatives using data provided by this knowledgebase system in conjunction with other national and local databases (see, for example, the Library Data Impact Project) and other emerging shared services.

Academic libraries and institutions have an interest in and responsibility for more than just their e-journals and databases – e-books, open access titles and individual articles, open educational resources and open data all pose new questions for electronic resources management (ERM). The project will also seek to identify the workflows that will allow these to be incorporated into shared ERM from Phase II onwards.

Knowledge Base+ Community Advisory Group – January 2012 Meeting

Key issues identified:

  • Involvement of the staff who actually work on e-resources at the coalface will be crucial.
  • What structures and processes are needed to enable the whole community to contribute to the development of the knowledge base? What is the scope for ‘crowdsourcing’?
  • What will the data verification process look like? Who will be involved? How will those involved feed back? How will feedback be provided in a timely fashion, so the data is still relevant by the time it is finally made available to the wider community?
  • The Project Team is advised to look at other community-owned initiatives that have worked, e.g. the Kuali OLE project, with a view to learning from their structures and underlying technologies (e.g. the use of Google workspaces). In the UK, the Journal Usage Statistics Portal (JUSP) project has also had excellent take-up.
  • Would some kind of ‘voting’ or ‘liking’ functionality be useful to support the verification process? How will changes be suggested or flagged by the community?
  • The burden of contributing must not be overly onerous: the ideal is contributing almost without knowing you are contributing.
  • The platform must be interactive, not passive.
  • What will the underlying processes be, what model will emerge and what technical infrastructure will then be needed?
  • How will the data in the knowledge base be presented so it is immediately useful to local institutional electronic resources management (ERM) processes and workflows? This will affect how the technical infrastructure is designed. See also TERMS for typical institutional ERM workflows.
  • Quality: The data must be perceived to be at least as useful as what institutions have achieved locally – accuracy must be retained or improved over what most have at the moment. How will this be measured?
  • How will expectations be managed? There is a balance to be struck between accuracy and timeliness and a point where “Good enough is good enough”. How will this be judged and agreed?
  • Timeliness: Libraries don’t want yet another system to have to update – how can we ensure updates are fed through regularly? The Community Advisory Group suggested that data should be released to the community for checking sooner rather than later.
  • How can we link more closely with key stakeholders such as UKSG, university mission groups, Library Management System (LMS) user groups, etc.?
  • How can we best build trust between the Project and libraries, and between libraries and publishers, agents and vendors?
  • Other useful foundational work to be done:
    • Open data – devise a matrix to show types of data and guidelines on what institutions will be able to do with it (rights in and rights out).
    • Devise a matrix to demonstrate potential value, impact and importance of the different types of data that will be provided and what some of their many practical uses might be at an institutional level.
  • Would an upload area for sharing unverified metadata be useful? There should perhaps be an option to present ‘unverified’ datasets for sharing; these could still save people a great deal of time, even if they have to make some local adjustments.
  • Entitlements data – it may make most sense to provide generic information at a macro level (e.g. for each big deal) and then work towards title-by-title entitlements (see the sketch after this list).
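On that last point, here is a minimal sketch of how such a two-level approach might be modelled: a generic record per big deal, populated with title-by-title records only as each title’s coverage is verified. All class and field names below are assumptions made for illustration, not a KB+ schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TitleEntitlement:
    """Title-level record, added once coverage has been verified."""
    issn: str
    title: str
    coverage_start: str            # e.g. "2009-01"
    coverage_end: Optional[str]    # None while the subscription is live
    verified: bool = False

@dataclass
class DealEntitlement:
    """Macro-level record: what an institution can claim under one big deal."""
    deal: str                      # e.g. a NESLi2 agreement and its term
    institution: str
    years_subscribed: list         # calendar years covered by the agreement
    pca_terms: str                 # post-cancellation access wording from the licence
    titles: list = field(default_factory=list)  # grows as titles are verified

# The macro-level record is useful on its own; title detail can follow later.
deal = DealEntitlement(
    deal="Example big deal, 2009-2011",
    institution="University of Anywhere",
    years_subscribed=[2009, 2010, 2011],
    pca_terms="Perpetual access to subscribed years",
)
deal.titles.append(TitleEntitlement("1234-5678", "Journal of Examples", "2009-01", None))
```

Structuring the data this way would let the generic deal-level information circulate early, while the slower title-by-title verification described above fills in the detail incrementally.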