Indicative KB+ workflows – reflections from UEA

At UEA we’re still KB+ novices, but we thought we’d share some of the KB+ workflows which we think will be relevant to our institution.

*Other institutions are warmly invited to add to the list or make corrections*

Workflow 1: Creating and recording subscriptions for all our nesli2 and related deals.

This can be done for existing and past years and includes adding titles (entitlements) and editing issue entitlement dates manually where necessary. This gives us an accurate online record of our key packages and their holdings. We can potentially do this for all packages in KB+ (and at package level for any subscriptions not in KB+). So this is helpful in gradually replacing data otherwise stored in our filing cabinets.

Workflow 2: Adding and viewing core subscriptions, including cancellation rights.

This gives us an historical record of our core subscribed titles for particular years, as when entering core titles we are able to specify if they are Print only, Print+Electronic or Electronic.  The workflow also ensures that entitlements around core individual journal subscriptions, on which “big deals” are based, are not lost in the noise of the generic entitlements for the big deal package. This functionality also makes it easy to review our list of core titles annually to claim any cancellation entitlements that may be part of the current “big deal” package agreement.

Workflow 3: Checking past entitlements.

The Title List functionality includes backfile and frontfile subscriptions, so we can search and drill down for details about individual titles we have subscribed to in the past (making sure we have left the ‘Subscriptions valid on’ field blank so as to select ‘all years’). Then we can use the ‘Full Issue Entitlement’ details to view data about any potentially missing past entitlements.

Workflow 4:  Using the Titles functionality and data to populate our link resolver.

From the Titles option, we can select packages and our local entitlements data using the filters and then get a .csv export. The export needs manual editing to match the format required for our link resolver upload (we understand the KB+ team is working on making this more automated).
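Until that automation arrives, the reshaping can be scripted. The Python sketch below is a minimal illustration only: the KB+ column names (‘Title’, ‘ISSN’, ‘Start Date’, ‘End Date’) and the target fields for the link resolver upload are assumptions, and would need checking against an actual export and the resolver’s import specification.

    import csv

    # NOTE: the KB+ export columns and the link resolver upload fields below
    # are assumptions for illustration; check them against the real files.
    KBPLUS_COLUMNS = {"title": "Title", "issn": "ISSN",
                      "start": "Start Date", "end": "End Date"}
    RESOLVER_FIELDS = ["title", "issn", "coverage_start", "coverage_end"]

    def convert(kbplus_csv, resolver_csv):
        """Reshape a KB+ entitlements export into a link resolver upload file."""
        with open(kbplus_csv, newline="", encoding="utf-8") as src, \
             open(resolver_csv, "w", newline="", encoding="utf-8") as dst:
            reader = csv.DictReader(src)
            writer = csv.DictWriter(dst, fieldnames=RESOLVER_FIELDS)
            writer.writeheader()
            for row in reader:
                writer.writerow({
                    "title": row[KBPLUS_COLUMNS["title"]].strip(),
                    "issn": row[KBPLUS_COLUMNS["issn"]].strip(),
                    "coverage_start": row[KBPLUS_COLUMNS["start"]].strip(),
                    "coverage_end": row[KBPLUS_COLUMNS["end"]].strip(),
                })

    convert("kbplus_export.csv", "resolver_upload.csv")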

Workflow 5: Add Licenses that apply to our subscriptions and view/amend the summary data.

These are simple to select and present helpful summary data about what is, and is not, permitted under our current licenses, e.g. Walk-In Users, Course Packs, Interlending, concurrent users, remote access, post-cancellation access, partner, alumni, SME and multi-site access. We can also manually edit these summary data to take into account our local circumstances.

Workflow 6: Upload our own institutional local licenses.

This is achieved through the ‘Add Documents’ functionality. This functionality is helpful if there is no generic license available from KB+ or if one of our local licenses has substantial differences from a generic one.

Workflow 7: Identifying possible overlapping subscriptions.

Filtering a Title List to the current year’s subscriptions, we can search and drill down for details about individual titles (the ‘Full Issue Entitlements’ screen), note where the same content is being delivered through several packages, and potentially cancel a title (or even a package) if the duplication is unnecessary.
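One way to spot candidate overlaps outside the KB+ interface is to run a quick script over an exported Title List. The sketch below is illustrative only: the column names (‘ISSN’, ‘Title’, ‘Package’) are assumptions about the export, and it flags candidates by ISSN without comparing coverage dates, so each hit still needs checking before anything is cancelled.

    import csv
    from collections import defaultdict

    def find_overlaps(export_csv):
        """List titles that appear in more than one package in a Title List export.

        Column names ('ISSN', 'Title', 'Package') are assumptions and should be
        checked against a real KB+ export before use.
        """
        packages_by_issn = defaultdict(set)
        titles_by_issn = {}
        with open(export_csv, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                issn = row["ISSN"].strip()
                packages_by_issn[issn].add(row["Package"].strip())
                titles_by_issn[issn] = row["Title"].strip()
        for issn, packages in sorted(packages_by_issn.items()):
            if len(packages) > 1:
                print(f"{titles_by_issn[issn]} ({issn}): {', '.join(sorted(packages))}")

    find_overlaps("title_list_current_year.csv")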

Workflow 8: Identifying alternative packages that might include this title.

Filtering a Title List to our current year’s packages, we can search and drill down for details about individual titles, note where the same content is available through several packages, and potentially choose a different, possibly cheaper, package.

Workflow 9: The annual renewal process.

This is accessed from the ‘Manage – Generate renewals’ spreadsheet option, which shows all possible subscriptions, not just our own, and allows us to compare this year’s packages with next year’s, for example. We can see which titles have been dropped for 2014, as well as any that have been added.
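For a quick local check of the same comparison, two exported title lists can also be diffed with a few lines of code. The sketch below is a minimal illustration: the file names and the ‘Title’ column are assumptions about the renewals spreadsheet export, not the documented KB+ format.

    import csv

    def load_titles(csv_path, title_column="Title"):
        """Read a package title list export into a set of title strings.

        The 'Title' column name is an assumption about the export format.
        """
        with open(csv_path, newline="", encoding="utf-8") as f:
            return {row[title_column].strip() for row in csv.DictReader(f)}

    current = load_titles("package_2013.csv")   # this year's package
    renewal = load_titles("package_2014.csv")   # next year's proposed package

    print("Dropped for 2014:")
    for title in sorted(current - renewal):
        print(f"  - {title}")

    print("Added for 2014:")
    for title in sorted(renewal - current):
        print(f"  + {title}")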

Workflow 10: Dashboard and alerts.

The alerts functionality appears on our dashboard and enables cross-institutional knowledge-sharing about known problems and issues. We can comment on any alerts to add further support to the issues that have been shared. There are also alerts for changes to generic licenses.

Workflow 11: Notes.

This functionality allows us to add local Notes to our subscriptions and licenses in case there are particular known issues or points to remember (e.g. renewal dates).

 Copyright free. This content can be edited, redistributed and reused without attribution.

 


KB+ Shared community activity

KB+ has achieved a huge amount, predominantly through central community management. The vision was always twofold, though: central community management AND shared community activity. It is worth asking whether we could, or should, leverage even more value from KB+ in terms of shared community activity.

The first challenge is how to make KB+ more comprehensive in its package and title coverage. The team has done a fantastic job with the nesli2 packages, etc.,  but a comprehensive centrally managed knowledgebase may require a different scale of operation. Rather than just building up a larger centralised team (important though that is!), is there still potential for more shared community activity here, an issue the Community Advisory Group debated extensively last year?

For example, you could upload spreadsheets you are working on to share with other KB+ community users. You could then iron out some of the main problems using your shared knowledge across institutions, according to your own timescales. A draft (non-verified) version could then be made available to the KB+ community in a timely fashion. This draft version could then be reviewed in due course by the main central KB+ community team for final clean-up, verification and formal adoption into the main KB+. Another by-product of such an approach is that a ‘virtuous circle’ may be created as contributors will naturally begin to feel more ownership of KB+ – they will talk of ‘being part of KB+’ rather than ‘subscribing to KB+’ – a change of discourse we would do well to adopt going forward.

Such an approach might help to ‘fast track’ some packages that may otherwise sit on the central community management ‘waiting list’ and not be looked at for several months. So the scalability of KB+ in terms of the comprehensiveness of its coverage is one of its challenges, and success in this area will have a significant impact on community take-up and future sustainability.

Another key challenge for the service is how to make the KB+ site really “sticky”, i.e. the first port of call for e-resource librarians and library staff involved with ERM processes. My utopian vision is to see KB+ being so attractive to library staff involved in ERM that they want to log in first thing in the morning and keep it open all day.

For example, I’d like to see the KB+ dashboard including more portal functionality to other services too. The integration with other services such as JUSP and elcat will obviously help with this, making KB+ more of a one-stop shop. But here are a few other ideas off the top of my head:

1. Ability to add an Alert (and attach a spreadsheet) without necessarily tying it to an existing KB+ package – that way we can start to discuss packages and journal titles not yet on KB+ (that would also help with prioritising)

2. An instant messaging service and/or online chat service between users with private or shared messaging

3. An RSS feed you can customise for relevant mailing lists and E-resource related blogs

4. A twitter feed – already on homepage but not on Dashboard

5. A log file to show real time ‘who else is working on’ and ‘what others are editing’ with optional display of institutions (institutions could opt-in to having their information displayed)

6. Software enhancements list with ‘Me too!’ and/or voting functionality

7. Data and packages priority list with ‘Me too!’ and/or voting functionality

8. Headline reports shown in graphical format

etc.

In other words, functionality to make KB+ feel more like an interactive, three-dimensional, living, evolving community (not just a service) – i.e. very different from other systems. Such functionality could potentially be achieved through APIs.

We have to be able to confidently answer the question: what makes KB+ different from other services? In answering that, we need to put ourselves in the position of the library staff who administer ERM activities and think about the information they need to be scanning and reviewing in ‘real time’. They may have a different vision from the one I have articulated above, so let’s hear more about their needs going forwards. What is stopping some E-resources Librarians from embracing KB+?

Although functionality to support this kind of additional community activity may not be possible until after the July 2013 release, it seems an appropriate time to be considering it if we are to make KB+ as attractive as possible to the community and ensure library directors will subscribe.

Trust, Transparency and Cultural Change

This week saw the second GOKb Steering Committee meeting with colleagues from KUALI and JISC, as well as the latest KB+ Community Advisory Group meeting.

Whilst both projects are putting in place the technical infrastructure to support shared community approaches to data management, much of the discussion at both meetings was given over to the conditions necessary for a hard-pressed librarian to make the cultural change from working with well-known and well-understood local practices to adopting new shared services.

How does one initiate such change and more importantly sustain and embed it in the daily grind of getting the job done?

From the discussions it is clear that to be successful KB+ and GOKb will need to earn the Trust of their respective communities:

  • Trust that the accuracy of the data is at least as good as what you have now, if not better.
  • Trust in the capability of new partners to do as good a job as you do at maintaining data.
  • Trust that the services themselves will be available in the longer term, that their development plans respond to community needs and are worth investing time and effort in.

It is a shared belief of us all that transparency is the best way to achieve that level of trust:

  • putting in place a governance structure and a group of test institutions that ask searching questions and have very high standards
  • providing information on who is or has been working on title lists – has it only been checked by institutions, or JISC Collections, or the publisher? Or has it been checked by all of them? We also had some interesting discussions about ‘buddying’ subject matter experts from the UK and US as a way of building trust through collaborative work.
  • providing information on the level of certainty we have in the accuracy of the information – flagging concerns and working collectively to build more accurate data sets
  • being transparent about the limitations of KB+ at launch – what is in the service, what isn’t and what level of effort an institution may need to put in.

The implication of all of this is that we won’t always be able to tell people what they want to hear, but then again maybe that isn’t such a bad thing. After all, the whole basis for KB+ is that the library community is deeply dissatisfied with the current system and has decided that the best way to achieve both the data quality and the efficiency savings required by the community as a whole is to change the culture of e-resource management from a local-level activity to a shared, community-driven approach.

KB+ Workflow Task Group – Looking to Autumn 2012

The development of library workflows and associated support (such as alerts) is a priority task for KB+ developments in Autumn 2012. In order to ensure grounded input from the start, we’ve established a task group running from June to September, with volunteer members from the library teams at Birmingham, Cambridge, Kings and Salford. Other institutions are helping with parallel reports and user interface task groups.

The workflow group has agreed a simple five-step work plan.

Step 1 – Agree what we mean by ‘workflow’ and which types of workflow support will make KB+ most useful to library operations. We listed seven types of activities ranging from coordinating publisher updates and supporting renewals decisions (both really important) to task-based messaging within the local library team (not a priority – email does that pretty well for now).

Step 2 – Meet up to detail the important workflows that will make a difference from Autumn 2012 onwards. The Cambridge team kindly hosted 9 of us on 27 June, when we focused on publisher updates and decision support around new deals and renewals. We covered approximately 75 square feet of whiteboard space in 4 hours (sounds impressive), generating just 5 iPhone photos (all that work for 5 low-quality snaps) … and a mass of important thinking. We found an old-fashioned ‘swim lane’ diagram (once it was explained to us by @owenstephens) to be a good way of systematizing workflow actions and ideas as a shared service design – each column in the diagram relates to a key actor in the envisaged process. From this annotated photo of our efforts, you can see that the write-up in Step 4 will be essential to bring this to life!

Discussions of the KB+ Workflow Task Group

 

Step 3 – Compare our ideas with GOKb partners. KB+ is collaborating with the Mellon Foundation-funded GOKb project (http://gokb.org/post/25021222983/gobkpressrelease), involving four US Higher Eds from the Kuali OLE consortium (Chicago, Duke, North Carolina State and Penn). We want to leverage their efforts with data beyond the UK deals and also share ideas about optimal workflows between local library, above-campus and vendor functions. We’ll also look at the Kuali Rice community source software that they are using to enable workflows. We have two meetings in July and August.

Step 4 – Draft and mutually agree a report. As emphasized by the KB+ Community Group, this report needs to be an accessible document that sets out the workflow priorities for KB+ development from the perspective of how they will fit with real library operations and key local systems (such as Link Resolvers). It will also provide the ‘Use Case’ requirements to inform the development team. The final report will be reviewed by the group in early September.

Step 5 – Use the report as the basis for a library update meeting in the autumn. Our group is suggesting that the report should form a good basis for a UK community update meeting for managers and practitioners to discuss how KB+ will fit and enhance their practices and lessen their local workload.

 David Kay

Knowledge Base+ Community Advisory Group – Jan 12 Meeting

Key issues identified:

  • Involvement of the staff who actually work on e-resources at the coalface will be crucial.
  • What structures and processes are needed to enable the whole community to contribute to the development of the knowledge base? What is the scope for ‘crowdsourcing’?
  • What will the data verification process look like? Who will be involved? How will those involved feed back? How will feedback be provided in a timely fashion so the data is still relevant by the time it is finally made available to the wider community?
  • The Project Team is advised to look at other community-owned initiatives which have worked, e.g. the Kuali OLE project, with a view to learning from their structures and underlying technologies (e.g. use of Google workspaces). In the UK the Journal Usage Statistics Portal (JUSP) project has also had excellent take-up.
  • Would some kind of ‘voting’ or ‘liking’ functionality be useful to support the verification process? How will changes be suggested or flagged by the community?
  • The burden of contributing must not be overly onerous: ideally, people should be able to contribute almost without realising they are contributing.
  • The platform must be interactive, not passive.
  • What will the underlying processes be, what model will emerge and what technical infrastructure will then be needed?
  • How will the data in the knowledge base be presented so it is immediately useful to local institutional electronic resources management (ERM) processes and workflows? This will affect how the technical infrastructure is designed. See also TERMS for typical institutional ERM workflows.
  • Quality: The data must be perceived to be at least as useful as what institutions have achieved locally – accuracy must be retained or improved over what most have at the moment. How will this be measured?
  • How will expectations be managed? There is a balance to be struck between accuracy and timeliness and a point where “Good enough is good enough”. How will this be judged and agreed?
  • Timeliness: Libraries don’t want yet another system to have to update – how do we ensure updates are fed through regularly? The Community Advisory Group suggested that data should be released to the community for checking sooner rather than later.
  • How can we link more closely with key stakeholders such as UKSG, university mission groups, Library Management System (LMS) user groups, etc.?
  • How can we best build trust between the Project and libraries, and between libraries and publishers, agents and vendors?
  • Other useful foundational work to be done:
    • Open data – devise a matrix to show types of data and guidelines on what institutions will be able to do with it (rights in and rights out).
    • Devise a matrix to demonstrate potential value, impact and importance of the different types of data that will be provided and what some of their many practical uses might be at an institutional level.
  • Would an upload area for sharing unverified metadata be useful? There should perhaps be an option to present ‘unverified’ datasets for sharing; these could still save people a great deal of time, even if they have to make some local adjustments.
  • Entitlements data – it may make most sense to provide generic information at a macro level (e.g. for each big deal) and then work towards title-by-title entitlements.