
Curriculum Analytics

Imagine an analytics dashboard that could diagnose the health of entire university programmes. This was the premise of a fascinating conference hosted by JISC, the Joint Information Systems Committee. Working with data from their partner universities, JISC has been able to isolate some significant correlations between the variety of online resources on a unit site and student attainment. They have even been able to model the point at which resource density starts to have an adverse effect. It is still early days, but what they are discovering so far has exciting potential for the field of curriculum design.
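To make that kind of turning point concrete, here is a minimal sketch of one way it might be modelled: fit a quadratic to unit-level data relating resource variety to attainment and locate the vertex, beyond which the fitted effect turns negative. The figures are invented for illustration, and the quadratic fit is my own assumption rather than JISC's actual method.

```python
# A minimal sketch of how a "more resources help, until they don't" turning
# point might be modelled. All data below is hypothetical.
import numpy as np

# Hypothetical unit-level observations: number of distinct online resource
# types on a unit site vs. mean attainment for that unit (%).
resources = np.array([2, 4, 6, 8, 10, 12, 14, 16, 18, 20])
attainment = np.array([58, 62, 66, 69, 71, 72, 71, 69, 66, 62])

# Fit attainment = a*r^2 + b*r + c. A negative 'a' means the curve eventually
# bends downwards, i.e. additional resources stop helping at some density.
a, b, c = np.polyfit(resources, attainment, deg=2)

# The vertex -b / 2a is the density beyond which the fitted effect turns negative.
turning_point = -b / (2 * a)
print(f"Fitted turning point: ~{turning_point:.1f} resource types per unit")
```

A quadratic is simply the smallest curve that can express "helpful up to a point, harmful beyond it"; a real analysis would need controls for subject, cohort size, and delivery mode.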

Attendees were organized into four separate groups, each with its own scenario to investigate. Our scenario involved University X, which had implemented an institutional dashboard to measure the impact of metrics like digital engagement, attendance, and policy changes on key performance indicators like progression, retention, and student satisfaction.

University X had also recently made several important policy changes, including greater standardization of the VLE, more support for blended learning and lecture capture, and a cap on the number of assessments. The Dean of Arts at University X wanted to know whether these changes were having a positive or negative impact on student pass rates.

We were given several dashboards to review to help answer this question. The first presented a top-down, programme-level report showing a 1% decrease in average grade, a 6% increase in digital engagement, and a 5% increase in ‘pass-on-first-attempt’ successes. The dashboards that followed became progressively more granular, to the point that we could see the average grade, digital engagement, and pass rate of individual courses and units relative to their adoption of specific technologies (polling, smart boards, lecture capture), their number of modules and assessments, and their attendance data.
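To illustrate the kind of drill-down those dashboards supported, here is a rough pandas sketch that aggregates unit-level records up to programme level and then splits the same outcomes by a single technology choice. The table layout, column names, and figures are all hypothetical.

```python
# A rough sketch of dashboard drill-down with an invented table layout.
import pandas as pd

units = pd.DataFrame({
    "programme":          ["History", "History", "English", "English"],
    "unit":               ["HIS101", "HIS202", "ENG101", "ENG305"],
    "avg_grade":          [61.0, 58.5, 64.2, 59.8],
    "digital_engagement": [0.42, 0.55, 0.61, 0.47],   # e.g. a VLE activity index
    "pass_first_attempt": [0.78, 0.81, 0.85, 0.74],
    "lecture_capture":    [True, False, True, True],
    "n_assessments":      [3, 5, 2, 4],
})

# Programme-level view (what the first, top-down dashboard showed).
programme_view = units.groupby("programme")[
    ["avg_grade", "digital_engagement", "pass_first_attempt"]
].mean()

# More granular view: the same outcomes split by a specific technology choice.
by_capture = units.groupby("lecture_capture")[
    ["avg_grade", "pass_first_attempt"]
].mean()

print(programme_view)
print(by_capture)
```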

Although there is still a long way to go to make this information intelligible to the average user (myself included), our discussion led to several key takeaways. First, we need more granular information. For example, what is the ratio of domestic to international students in a classroom? What impact does mandatory attendance have on participation and completion? Class size could also have a substantial effect on attainment scores, particularly when a cohort is larger or smaller than it has historically been. Second, we need to identify at what level relationships are meaningful and at what level they are not. The relationship between attainment and the number of assessments may be misleading if one of the assessments is actually a portfolio with multiple component deadlines.
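The assessment-count problem is easy to demonstrate. In the toy example below, invented for illustration, one unit's single portfolio "assessment" hides four separate deadlines, so the headline count and what students actually experience diverge.

```python
# Illustrative only: how an 'assessment count' metric can mislead when one
# assessment is a portfolio with several component deadlines.
assessments = {
    "UNIT-A": [
        {"type": "exam", "components": 1},
        {"type": "essay", "components": 1},
    ],
    "UNIT-B": [
        # Recorded as one assessment, but students face four deadlines.
        {"type": "portfolio", "components": 4},
    ],
}

for unit, items in assessments.items():
    headline_count = len(items)                            # what the dashboard counts
    deadline_count = sum(a["components"] for a in items)   # what students experience
    print(f"{unit}: {headline_count} assessment(s), {deadline_count} deadline(s)")
```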

What JISC has observed is that analytics are never as simple as a one-to-one relationship. Individual factors can have positive or adverse effects in different combinations. The challenge of identifying those relationships, and the point at which they become meaningful, may produce an infinite regression of causes, but that doesn’t diminish the value of learning analytics even at such an embryonic stage. There may be hundreds of explanations for a low course GPA, but knowing, for example, that students who participate in weekly forum activities do better than their peers can help to eliminate many of them. We are still a long way off from creating a reliable set of metrics that can predict learning outcomes and course success, but I believe the ability to ask the right questions of the data may be even more valuable.
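As a toy illustration of that forum example, a comparison as simple as the one below (with fabricated marks and group labels) is often where such a question starts, before any modelling of confounds.

```python
# Fabricated marks: do weekly forum participants outperform their peers?
import statistics

marks = {
    "weekly_forum":    [68, 72, 65, 70, 74, 66],
    "no_weekly_forum": [58, 61, 63, 55, 60, 64],
}

for group, scores in marks.items():
    print(f"{group}: mean {statistics.mean(scores):.1f}, "
          f"sd {statistics.stdev(scores):.1f}")
```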
