A Reflection on Digital Assessment

Introduction

Over the last eight or so weeks, I have gone from knowing almost nothing about digital assessment to having more than enough resources and data to inform my approach as a future Instructional Technologist. Throughout that time, my peers and I have dived head-first into the world of data-driven instruction and have slowly but surely gained a vital understanding of its prevalence, as well as of how that data is best applied to standards-based and competency-based assessment.

We have looked at how and why instructors conduct assessments and, through individual input, learned that we feel almost unanimously indifferent toward the current state of assessment for students at the elementary and secondary levels. Some of us have experienced these challenges personally as educators or staff, while others (hey, that’s me!) have relied on the information provided by others to construct an understanding of the challenges and complexities inherent in assessment.

For me, this course has been about discovering the importance of assessment, despite any personal reservations about its overall impact on measuring learner performance. The other important aspect of the course has revolved around data: learning how and why it is used to determine effective assessment. While I lack some of the traditional classroom experience that my fellow students have in abundance, my time working in higher-education staff settings has given me a unique perspective on how technology has directly shaped data retrieval and, ultimately, how traditional and digital instruction has changed as a result.

Important Topics

“Driven By Data”

This guidebook has been a central focus for our class throughout the semester and provides a framework that educators and learners can follow toward satisfactory outcomes. While I am unsure how effective an educator or school district might be in adopting the “Driven by Data” guide wholesale, it includes some very helpful suggestions regarding standards and assessment, as demonstrated by the following examples (Bambrick-Santoyo, 2010):

  • Standards are meaningless until you define how you will assess them.
  • Identify and work towards meaningful rigor.
  • The building blocks of effective assessment are that assessments must be the starting point, should be transparent, should be common, and should be delivered on a frequent or interim basis.
  • Design assessments to reassess material covered earlier.
  • Identify interim assessments vs. in-the-moment (real-time) assessments.

I viewed these examples as helpful guides for anyone who must consider how data is used and implemented in effective instructional design and assessment.

The Complexities of Assessment

One of the more involved and data-rich subjects we discussed was the shift away from the standards that have traditionally been the focus of educational assessment and the need to adapt those standards to a changing landscape of learners, environments, and technology. I wrote briefly on this blog about the growing popularity of frameworks such as Standards-Based Grading (SBG) and Competency-Based Education (CBE). No longer should learners be at the mercy of ineffective assessments that indicate only how much work they complete rather than what they have learned.

Standards- and competency-based approaches give instructional designers, districts, and organizations the opportunity to develop standards based not on a single measurement of achievement but on demonstrated competency against the standards set at the state and collegiate levels, including for those in elementary and secondary education. The focus lies primarily on measuring course outcomes and their overall effectiveness more accurately so that future frameworks for success can be built on them. During this process, many aspects of the instructional design process are considered, including learning goals and outcomes, educational models, summative and formative reflection, and learning environments, among other factors. While not every one of these must be addressed (and many more could be added, should a designer see the benefit), it is important to weigh all aspects of the design process in relation to the student, learning environment, budget, and culture.
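To make the contrast with a single averaged grade concrete, here is a minimal sketch of the competency-based idea: each standard is scored on its own and mapped to a mastery level, rather than being blended into one number. The standard names, scores, and thresholds below are entirely invented for illustration, not taken from any actual rubric.

```python
# Hypothetical sketch: mapping per-standard scores to competency levels,
# rather than averaging everything into a single grade.
# Standards, scores, and thresholds are invented for illustration.

MASTERY_LEVELS = [
    (0.90, "Mastered"),
    (0.75, "Proficient"),
    (0.60, "Approaching"),
    (0.00, "Beginning"),
]

def competency_level(score: float) -> str:
    """Return the first mastery label whose threshold the score meets."""
    for threshold, label in MASTERY_LEVELS:
        if score >= threshold:
            return label
    return "Beginning"

# One learner's scores on three hypothetical standards.
scores = {"STD-3.4A": 0.92, "STD-3.4B": 0.70, "STD-3.5A": 0.55}

report = {standard: competency_level(s) for standard, s in scores.items()}
print(report)
# → {'STD-3.4A': 'Mastered', 'STD-3.4B': 'Approaching', 'STD-3.5A': 'Beginning'}
```

Note how a traditional average of these three scores (about 72%) would report a single passing grade while hiding that one standard has not been approached at all; the per-standard report keeps that visible.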

Online Cheating

While our coverage of online cheating and academic dishonesty was very brief, I consider it to be an important topic of discussion and a point of emphasis and research that I plan to learn more about as I continue my courses in the Educational Technology program with Sul Ross.

The advancement of technology has not only created new opportunities for educational endeavors but has also expanded students’ ability to cheat, should they choose to. I look forward to learning more about how to thwart attempts to cheat. I found the following article online, and it offers some noteworthy suggestions for preventing cheating in online examinations. I will certainly reference it later, and I recommend that those of you unfamiliar with the landscape of cheating in online courses check it out as well:

https://www.facultyfocus.com/articles/educational-assessment/fourteen-simple-strategies-to-reduce-cheating-on-online-examinations/

Data-Analysis

In the future, I hope to see instructional design and standards-based assessment become more reliable tools for designers and instructors constructing their courses. For now, the challenge is not so much a lack of data for building successful course outcomes as a lack of data analysis to serve the expectations and decisions instructors hope to act on.

It is this aspect of the design process that I think holds the most potential for providing successful frameworks for educators and students moving forward. In my view, after reading about and analyzing the many components of instructional design, the data inherent in assessment is the most important piece in determining where effective course design and successful learner performance may occur. However, just as the process of instructional design depends on varying factors, so too does the development of standards and courses built around culture, budgets, laws, and organizational expectations.

But what exactly does data analysis help accomplish?

We recently developed a set of course outcomes and an assessment aligned with those outcomes (a copy of the assessment is attached at the end of this post). Using the Canvas platform for this was my first attempt at implementing a quiz with learning outcomes in mind. While my classmates and I had difficulty aligning our outcomes with the assessments we developed, it was helpful to get even a small preview of what our instructors and other educators experience all the time when creating tests and quizzes. As a student, you sometimes assume that the assessments in your courses come from a pre-set, already-developed bank of questions and answers, but that is often not the case; someone had to create them. Putting together even a simple quiz is almost as involved as any assignment or written submission a student completes, yet the stakes are even higher.

How well and how accurately we can use the data from these assessments is a key component of the data-analysis phase. Not only does that data inform us as instructors about learner performance, it also lets us see how well students understand what they learn, measured against the outlined outcomes. From there, adjustments can be made. Instead of relying on one heavily weighted exam at the end of a course or school year, consistent assessments allow us to address inconsistencies or gaps in learner understanding within very focused outcomes.
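The analysis step described above can be sketched in a few lines: tag each quiz question with the outcome it assesses, aggregate correctness per outcome across students, and flag outcomes falling below some mastery rate for reteaching. The question-to-outcome mapping, student responses, and the 70% threshold here are all hypothetical, invented purely to illustrate the idea.

```python
# Hypothetical sketch: aggregating quiz results by learning outcome to
# flag outcomes that may need reteaching. The mapping, responses, and
# threshold are invented for illustration.
from collections import defaultdict

# Each quiz question is tagged with the outcome it assesses.
question_outcomes = {"Q1": "LO1", "Q2": "LO1", "Q3": "LO2", "Q4": "LO2", "Q5": "LO3"}

# Per-student correctness on each question (True = correct).
responses = [
    {"Q1": True, "Q2": True,  "Q3": False, "Q4": False, "Q5": True},
    {"Q1": True, "Q2": False, "Q3": False, "Q4": True,  "Q5": True},
    {"Q1": True, "Q2": True,  "Q3": True,  "Q4": False, "Q5": False},
]

def outcome_mastery(responses, question_outcomes):
    """Return the fraction of correct answers per learning outcome."""
    correct, total = defaultdict(int), defaultdict(int)
    for student in responses:
        for q, is_correct in student.items():
            lo = question_outcomes[q]
            total[lo] += 1
            correct[lo] += int(is_correct)
    return {lo: correct[lo] / total[lo] for lo in total}

mastery = outcome_mastery(responses, question_outcomes)
weak = sorted(lo for lo, rate in mastery.items() if rate < 0.70)
print(weak)  # outcomes below the (hypothetical) 70% mastery line
```

The point of the sketch is the grain of the report: a single overall quiz average would blur LO1 and LO2 together, while the per-outcome view shows exactly where to focus a reteach.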

While these approaches do not guarantee successful course implementation or learner performance, having the data to inform our decisions is vital for building productive frameworks for education and training. The data is not just numbers or a collection of arbitrary standards; it is the result of input from an individual whose performance depends on their ability to obtain knowledge. Whether that performance is satisfactory or lacking, using the data resulting from the knowledge being measured creates opportunities to help learners collect the information they need to successfully complete their courses or training.

Do we rely on data too much?

In the article “Big Data Comes to School: Implications for Learning, Assessment, and Research,” Cope and Kalantzis (2016) make the following assertion about the future of data in the educational landscape:

However, much work still needs to be done in the nascent field of education data sciences before the affordances of computer-mediated learning can be fully realized in educational practice. For this reason, the case we have presented here is by necessity part description of an emergent reality and at the same time part agenda for future research and development. This is a journey that we have barely begun.

Another important point from that same article is the subject of data privacy and research ethics. When schools and districts adopt new online and digital initiatives, part of the agreement is that students use platforms that allow them to conduct their courses, complete assignments, store data and workflow, and maintain digital correspondence. All of these are usually controlled by a third party, so it is important to be excited by the promise of online and digital instruction, but also cautious about how the data generated from students’ and educators’ work can be used for purposes other than those intended.

Data helps in many aspects of the instructional design process, especially because it makes it simpler to apply adjustments across a large population of learners; however, it should not replace practical and reasonable design. It is always important to remember that learners are constantly changing, and while data may have proved beneficial for a particular set of learners, the shifting landscape of technology, culture, and education makes it necessary to reflect on the aspects of instructional design that data simply cannot measure.

Canvas Quiz:

https://docs.google.com/document/d/1WOljPRFhOhJdlpMpXqvD8mlBDjse7pwO/edit

Oral Presentation:

https://voicethread.com/myvoice/thread/17664696/111868418


Bambrick-Santoyo, P. (2010). Driven by Data: A Practical Guide to Improve Instruction. San Francisco, CA: Jossey-Bass.

Cope, B., & Kalantzis, M. (2016). Big Data Comes to School: Implications for Learning, Assessment, and Research. AERA Open. https://doi.org/10.1177/2332858416641907
