AUTOMATING ASSESSMENT makes collecting, analyzing, and sharing data easier and shifts the focus to the evidence itself and to the development of action plans that target low-performing outcomes.
In some systems, connecting outcomes data to self-paced supplemental instruction systems could allow students to obtain personalized, targeted instruction without instructors even knowing. According to research in the field of Supplemental Instruction, this anonymity increases the likelihood that low-performing students will seek help. If these supplemental systems work, these students continue advancing with their peers, all without additional instructor input. All of my curriculum bundles include formative and summative assessment systems designed to build strong, inclusive relationships between teachers and students. More recently, however, I have adopted the supplemental instruction model, adding in-person tutoring to support classroom instruction and build procedural knowledge, and online “smart” reading systems to support declarative knowledge-building.
Since 2010, I have been involved in designing, launching, and evaluating effective assessment models in higher education. However, I have also observed what happens when educators don’t consider outcomes: more often than not, we leave the most at-risk students behind. Ignoring gaps in knowledge, especially in essential skills like reading and writing, contributes to a downward spiral. These learning deficits compound over time, and at some point catching up may seem impossible.
Properly calibrated assessment tools allow us to ‘see’ learning.
Data creates tread marks that allow us to trace successful pathways and to identify those who have strayed off course. Carefully calibrated assessment, coupled with analysis and faculty-developed action plans, helps us build personalized instructional systems, implement targeted program improvements such as human- or machine-based supplemental instruction, and keep everyone on track.
Since 2013, I have been training faculty to develop program-wide, outcome-aligned, automated assessment systems. These trainings involve faculty in developing metrics and in collecting and analyzing data in order to advise on targeted program and outcome improvements. Involving faculty as stakeholders in the assessment process and using automated tools improves student learning outcomes.
But here is the real challenge of assessment: faculty opinion about assessment itself.
According to Entangled Solutions’ new Quality Assurance standards report, evaluating higher education should include verifiable outcomes produced by assessment metrics in each of these areas: 1) learning, 2) completion rate, 3) job placement rate, 4) earnings, and 5) stakeholder satisfaction.
But what about “soft skills”? How do we develop systems that identify, define, and measure the intra- and interpersonal skills that have become so valuable in today’s workplace?