Thursday, September 29, 2016

Proving Library Value Using Data

It has been a while since I have had the chance to post here; the life of an instruction librarian can be tough in the month of September! We are working on a number of projects to prove the value of the library to internal and external stakeholders. There has been a lot of discussion about this in the library world, so I likely do not need to rehash it for you. What I have come to know is that some schools are able to access student data and others are not. We only recently got access to some of this data (in two different ways), and we are looking for ways to leverage it to this end.

The first way we were able to get results in the aggregate was to link people who use the library overnight to Banner data. We did this by sending the information from scanned IDs to our campus IT unit and having them create a report. Since students need to scan their student ID card when they walk in the door after 11 p.m., we had a listing of this particular group of students. To protect student privacy, campus IT stripped out identifying information and returned only aggregate figures. We had some interesting results. For instance, we can tell that students who study overnight are often female and often in the Nursing field. This is interesting because we also have a health sciences library on campus. (Full disclosure: this project was led by my supervisor, not by me.)
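For anyone curious about the mechanics, here is a minimal sketch of the kind of aggregation we are talking about, assuming campus IT hands back a de-identified extract of the overnight swipes. The file name and columns (gender, major) are illustrations, not our actual report:

```python
# A minimal sketch, assuming a de-identified CSV of overnight swipes with
# hypothetical columns: swipe_date, gender, major. No Banner IDs involved.
import pandas as pd

swipes = pd.read_csv("overnight_swipes_deidentified.csv")

# Report counts only, so no individual student is identifiable.
by_gender = swipes["gender"].value_counts(normalize=True)
by_major = swipes["major"].value_counts().head(10)

print("Share of overnight swipes by gender:")
print(by_gender)
print("\nTop 10 majors among overnight users:")
print(by_major)
```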

This led me to think about other ways we get student information using IDs. We do not scan IDs coming into our instruction sessions, although we have talked about doing so; at one point, we passed a sheet around the room to manually collect "Pirate IDs," which can easily be used to figure out Banner IDs. This was discontinued because of the time it took to collect the IDs and then have someone (a student) manually enter them into a spreadsheet, and it was not being done in every class. What would be easier? If we knew the course code for a class, maybe we could reverse engineer a listing of the students who would have been in the library session. But what would we compare that number to? Classes that did not come over? And how can you be sure that the students in a particular class were better at research because of our one-hour session? In the same vein as ID scanning, we could also have students scan in at events, but we are currently only set up to scan IDs in the evening at the front door, and scanners are expensive at $1,000+ apiece. At the IFLA conference I attended recently in Columbus, OH, I saw a poster where librarians were scanning student IDs, so this might be something that is gaining ground in academic libraries. They reported a little bit of librarian pushback. I would be interested to hear from anyone who is doing this!
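Here is a rough, hypothetical sketch of the reverse-engineering idea: if we had an enrollment extract and a list of the course sections that came over for a session, we could build the two cohorts without ever scanning an ID. The file and field names are invented for illustration:

```python
# A hedged sketch of building instructed vs. non-instructed cohorts from a
# hypothetical enrollment extract with columns: student_id, course_code.
import pandas as pd

enrollment = pd.read_csv("enrollment_extract.csv")

# Sections that had a library instruction session (invented examples).
sessions_taught = {"ENGL1100-003", "ENGL1100-017"}

enrollment["had_session"] = enrollment["course_code"].isin(sessions_taught)

# Cohort sizes: students who would have been in a session vs. those who were not.
cohorts = enrollment.groupby("had_session")["student_id"].nunique()
print(cohorts)
```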

We now have one person in the library who is allowed to connect to Banner data. We are trying a few smaller-scale projects to create a sustainable workflow for making comparisons with this data. We collect email addresses and names from the research consultations we do through our Book a Librarian service, and since this is a smaller body of students, we tried connecting that data to GPAs. We entered the emails of people who had used the service into a spreadsheet, but the results were not as accurate as our first attempt. It looks like the GPA information we can access in-house is not as accurate as what we got from our campus IT unit; it is possible that we do not have access to the correct fields in the database. Another interesting idea would be to connect whether a student used books or reserves in the library with their GPA. Of course, we will always protect student privacy. At any rate, I will keep you posted.
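The matching step itself is simple in principle; here is a hedged sketch with invented file and field names. As a general note, email addresses usually need normalizing before a match, since case and whitespace mismatches are a common source of missed joins in this kind of work:

```python
# A minimal sketch of matching consultation emails to a GPA extract.
# File names and columns (email, gpa) are assumptions for illustration.
import pandas as pd

consults = pd.read_csv("book_a_librarian.csv")    # hypothetical: email
records = pd.read_csv("banner_gpa_extract.csv")   # hypothetical: email, gpa

# Normalize emails before matching to avoid spurious misses.
consults["email"] = consults["email"].str.strip().str.lower()
records["email"] = records["email"].str.strip().str.lower()

# Flag students who used the service, then compare aggregate GPAs only.
merged = records.assign(used_service=records["email"].isin(consults["email"]))
print(merged.groupby("used_service")["gpa"].agg(["count", "mean"]))
```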

Another recent attempt uses a rubric. We have taken steps for next year on this process, which I think allows for a more in-depth look at student research. We have hundreds of students coming through our English composition courses, which makes analyzing their papers individually for evidence of research an uphill battle. But we have the Keats Sparrow Award, a writing award for papers completed in ENGL 1100/2201. We have added a checkbox to the submission sheet asking whether students had a library instruction session as part of the ENGL 1100/2201 course they are submitting from. We normally get 30-40 submissions. What I am proposing is that we create a rubric for the readers of the award submissions and ask them to judge the papers on the new information literacy criteria built into it. This could kill two birds with one stone: the papers would be judged, and we would end up with two groups, one that received library instruction and one that did not. We could then compare the rubric scores of the two groups and see whether library instruction made a difference. The remaining questions: how many submissions will come from students who did not receive library instruction, and since these are conceivably the best of the bunch (they are applying for an award, after all), does that skew the results?
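If we get that far, the comparison itself could be as simple as the sketch below. The scores here are invented, and with only 30-40 submissions a year, any statistical test would be suggestive at best:

```python
# A sketch of the proposed comparison, assuming each submission gets a
# numeric information literacy score from the rubric. Data is invented.
from scipy import stats

instructed = [3.5, 4.0, 3.0, 4.5, 3.5]   # rubric scores, had a library session
not_instructed = [3.0, 2.5, 3.5, 3.0]    # rubric scores, no library session

# Welch's t-test (does not assume equal variances); with samples this small,
# treat the p-value as a conversation starter rather than a conclusion.
t, p = stats.ttest_ind(instructed, not_instructed, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```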

I am attending the ARL Assessment Conference this October 31 to November 2. I have been thinking about these topics a lot in the past few months. I would love to hear from anyone else who is working on assessment and proving library value!
