Knowledge Base+ Community Advisory Group – Jan 12 Meeting

Key issues identified:

  • Involvement of the staff who actually work on e-resources at the coalface will be crucial.
  • What structures and processes are needed to enable the whole community to contribute to the development of the knowledge base? What is the scope for ‘crowdsourcing’?
  • What will the data verification process look like? Who will be involved? How will those involved feed back? How will feedback be provided in a timely fashion so the data is still relevant by the time it is finally made available to the wider community?
  • The Project Team is advised to look at other community-owned initiatives which have worked, e.g. the Kuali OLE project, with a view to learning from their structures and underlying technologies (e.g. use of Google workspaces). In the UK the Journal Usage Statistics Portal (JUSP) project has also had excellent take-up.
  • Would some kind of ‘voting’ or ‘liking’ functionality be useful to support the verification process? How will changes be suggested or flagged by the community? (See the second sketch after this list.)
  • The burden of contributing must not be overly onerous: ideally, people should almost be contributing without knowing they are contributing.
  • The platform must be interactive, not passive.
  • What will the underlying processes be, what model will emerge and what technical infrastructure will then be needed?
  • How will the data in the knowledge base be presented so it is immediately useful to local institutional electronic resources management (ERM) processes and workflows? This will affect how the technical infrastructure is designed. See also TERMS for typical institutional ERM workflows.
  • Quality: The data must be perceived to be at least as useful as what institutions have achieved locally – accuracy must be retained or improved over what most have at the moment. How will this be measured?
  • How will expectations be managed? There is a balance to be struck between accuracy and timeliness and a point where “Good enough is good enough”. How will this be judged and agreed?
  • Timeliness: Libraries don’t want yet another system to have to update – how can we ensure updates are fed through regularly? The Community Advisory Group suggested that data should be released to the community for checking sooner rather than later.
  • How can we link more closely with key stakeholders such as UKSG, university mission groups, Library Management System (LMS) user groups, etc.?
  • How can we best build trust between the Project and libraries, and between libraries and publishers, agents and vendors?
  • Other useful foundational work to be done:
    • Open data – devise a matrix to show types of data and guidelines on what institutions will be able to do with it (rights in and rights out).
    • Devise a matrix to demonstrate potential value, impact and importance of the different types of data that will be provided and what some of their many practical uses might be at an institutional level.
  • Would an upload area for sharing unverified metadata be useful? Perhaps there should be an option to share ‘unverified’ datasets that could still save people a great deal of time, even if some local adjustments are needed.
  • Entitlements data – it may make most sense to provide generic information at a macro level (e.g. for each big deal) and then work towards title-by-title entitlements (see the first sketch after this list).
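
On the entitlements point above, one way to picture “macro level first, title-by-title later” is a coarse deal-level record that is refined with optional per-title detail as it is verified. The sketch below is purely illustrative: the class and field names are assumptions for discussion, not the KB+ data model.

```python
# Illustrative only: field and class names are assumptions, not the KB+ data model.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class TitleEntitlement:
    """Fine-grained entitlement for a single journal title."""
    title: str
    issn: str
    coverage_start: str           # e.g. "1997"
    coverage_end: Optional[str]   # None = coverage is ongoing


@dataclass
class DealEntitlement:
    """Coarse, macro-level entitlement for a whole 'big deal'."""
    publisher: str
    deal_name: str
    agreement_years: str                # e.g. "2012-2014"
    post_cancellation_access: bool
    titles: List[TitleEntitlement] = field(default_factory=list)  # filled in title by title later


# Start with the macro-level record; title-level detail is appended as it is verified.
deal = DealEntitlement(
    publisher="Example Publisher",
    deal_name="Example Big Deal",
    agreement_years="2012-2014",
    post_cancellation_access=True,
)
deal.titles.append(
    TitleEntitlement(title="Journal of Examples", issn="1234-5678",
                     coverage_start="1997", coverage_end=None)
)
```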
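
On the ‘voting’ or ‘liking’ question, the sketch below shows one minimal way community confirmations and disputes could drive a record’s verification status, assuming a simple confirm/dispute count and an arbitrary threshold. It is a sketch for discussion only, not an agreed design.

```python
# Illustrative only: thresholds, statuses and field names are assumptions.
from collections import Counter


class RecordVerification:
    """Tracks community confirmations and disputes for one knowledge base record."""

    def __init__(self, record_id: str, confirm_threshold: int = 3):
        self.record_id = record_id
        self.confirm_threshold = confirm_threshold
        self.votes = Counter()   # counts of "confirm" and "dispute" votes
        self.comments = []       # free-text suggestions for changes

    def vote(self, kind: str, comment: str = "") -> None:
        """Record a community vote, optionally with a suggested change."""
        if kind not in ("confirm", "dispute"):
            raise ValueError("vote must be 'confirm' or 'dispute'")
        self.votes[kind] += 1
        if comment:
            self.comments.append(comment)

    @property
    def status(self) -> str:
        if self.votes["dispute"] > 0:
            return "flagged"       # at least one suggested change to review
        if self.votes["confirm"] >= self.confirm_threshold:
            return "verified"
        return "unverified"


# Usage: three confirmations move a record from 'unverified' to 'verified'.
rec = RecordVerification("coverage:1234-5678")
for _ in range(3):
    rec.vote("confirm")
print(rec.status)  # -> "verified"
```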