Recognize that rigor in learning analytics is multi-faceted

By Bodong Chen

November 11, 2020

First of all, I’d like to thank our JLA editors-in-chief and SoLAR for organizing this webinar and the events to come on the topic of rigor. As a community, learning analytics has grown substantially and to a great extent matured since it emerged around ten years ago. Thanks to tremendous work by colleagues, we have an annual conference, a professional society, an established journal, and, perhaps even more importantly, significant interest in our field from society at large. For our community to keep growing and evolving, it is time to develop a set of self-regulating norms around rigor, not only to achieve broader impacts but also to ensure those impacts contribute to the betterment of learning and education.

So, what do we mean by rigor in learning analytics? Full disclosure: I approach this question as someone primarily trained in educational technology and the learning sciences, who relies on design-based research, mixed methods, and design partnerships to come up with solutions that are piloted and evaluated in classroom settings.

Answering the question of what rigor is turns out to be quite hard in learning analytics. A primary reason is that learning analytics is a multidisciplinary area of research and practice. We have an eclectic mix of researchers and practitioners from various disciplinary backgrounds, who often bring different standards for judging rigor.

As I think about this question, I keep returning to the fundamental premise that motivated the formation of our field: if we do a good job at measuring, collecting, analyzing, and reporting learning data, we can better understand and eventually improve conditions for learning (see SoLAR’s definition of learning analytics). This is a laudable and ambitious endeavor that is truly complex. To break this work down, we can recognize different types of work going on in our research community. On one hand, there is exciting work that applies computational techniques and data science methods to investigating learning and teaching with all sorts of data. On the other hand, many of us are designing tools and interventions that help learners and educators act on certain information, in many cases the outputs of computational analysis. As we discuss rigor in learning analytics, we need to recognize that these two “buckets of work” are quite distinct and need to be judged by quite different criteria.

Today, I would like to turn more attention to the second bucket of work: designing tools and interventions to improve conditions for learning. Within this scope, in addition to the important notions of conceptual rigor, methodological rigor, measurement rigor, and reporting rigor (ideas you can find in methodology textbooks), I want to argue that we need to treat relevance to practice, and the extent to which practice is engaged, as additional criteria of rigor. Following this idea, there are at least two facets of rigor to consider:

First, we need to disclose the educational ideals and value premises that motivate a particular study or intervention, and the ways in which educational practice contributes to the formation of a solution. The work of putting learning analytics tools into practice is fundamentally interventionist. The rigor of our design solutions is not only about their data sources or computational techniques, but also about how education as a (complex) system is considered, how key stakeholders such as learners and teachers are involved, how different perspectives (and their tensions) are weighed, and so on. Are the value premises held by the researchers, designers, or technologies in line with those of key stakeholders, especially those impacted by the introduced intervention? Answering these questions may be even more important than demonstrating conceptual and methodological rigor. As we instrument learning analytics solutions in practical contexts, we need a bi-directional dialogue to improve the rigor of our solutions, in the sense that they are relevant to practice and in line with educational wisdom. Human-centered design approaches, which involve both quantitative and qualitative methods, are important for this work.

Second, as we move into evaluating an intervention, rigor requires sustained, direct, and systematic documentation of what takes place in authentic learning contexts. Learning and teaching are messy. The translation of research knowledge (embedded in tools) into professional practice is also highly complex. We should not rush to conduct randomized controlled trials in the early phase of a new learning analytics intervention. Instead, we need to gain a nuanced view of how students and teachers might react to the intervention in practice, and how the local context might dynamically respond to it. Students and teachers operate within existing systems; intended outcomes in a classroom are mediated by factors operating at school and district levels. Here, qualitative work is not merely relevant but crucial for documenting how an intervention works in an authentic setting and for illuminating the various ways it could go awry.

To conclude, as the field of learning analytics continues to evolve, we need to recognize the diverse body of work happening in our community and keep having conversations about what rigor means for different types of work. There might be a shared set of principles that cut across different studies and projects; this could be especially helpful for guiding our communication at conferences or in journals. In the meantime, we may need to stay comfortable with disagreements and healthy debates, simply because different work needs to answer to different criteria of rigor. Finally, as learning analytics makes more real-world impacts, in some cases at scale, finding ways to engage practice as we consider rigor is important for realizing the field’s potential.
