AUTOMATING ASSESSMENT makes collecting, analyzing, and sharing data easier and shifts the focus to the evidence and to the development of action plans that target low-performing outcomes. Outcomes data should connect students to self-paced supplemental instruction systems that allow for personalized, targeted instruction without holding students back from advancing with their peers.
Since 2010, I have been involved in designing, launching, and evaluating effective assessment models in higher education. However, I have also observed what happens when educators don’t consider outcomes: more often than not, we leave the most at-risk students behind. Ignoring gaps in knowledge, especially when that knowledge includes essential skills like reading and writing, contributes to a downward spiral. These learning deficits build up over time, and at some point catching up may seem impossible. The dismal graduation rates at four-year institutions clearly document the effects of this cycle.
Properly calibrated assessment tools allow us to ‘see’ learning.
Data creates tread marks that allow us to track successful pathways and to identify students who have strayed off course. Carefully calibrated assessment, coupled with analysis and faculty-developed action plans, helps us build personalized instructional systems, implement targeted program improvements such as human- or machine-based supplemental instruction, and ensure everyone stays on track.
Since 2013, I have been developing program-wide, outcome-aligned, automated assessment systems and training faculty to use them. These trainings involve faculty in developing metrics and in collecting and analyzing data in order to advise on targeted program and outcome improvements. Involving faculty as stakeholders in the assessment process and using automated tools improves student learning outcomes.
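The core of such an automated system can be sketched in a few lines of Python: aggregate rubric scores by outcome and flag any outcome whose mean falls below a proficiency benchmark, so faculty can target an action plan there. The outcome names, scores, and threshold below are hypothetical placeholders, not data from any actual program:

```python
# Minimal sketch of an automated outcomes check. All data here is
# illustrative; a real system would pull scores from an assessment platform.
from statistics import mean

# Rubric scores (0-4 scale) collected per learning outcome.
scores = {
    "Written Communication": [3.2, 2.1, 2.4, 3.0],
    "Quantitative Reasoning": [3.5, 3.6, 3.1, 3.4],
    "Ethical Reasoning": [2.0, 1.8, 2.5, 2.2],
}

THRESHOLD = 2.5  # assumed benchmark for "proficient"

def flag_low_performing(scores, threshold=THRESHOLD):
    """Return outcomes whose mean rubric score falls below the benchmark."""
    return {
        outcome: round(mean(vals), 2)
        for outcome, vals in scores.items()
        if mean(vals) < threshold
    }

print(flag_low_performing(scores))
```

A report like this, generated each term and routed back to faculty, is what closes the feedback loop between data collection and program improvement.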
But here is the real challenge of assessment: Faculty opinion on assessment.
Please view my CV by clicking here.
Please view the following portfolio highlighting my recent assessment projects:
Collaboration with Writing Across the Curriculum Director to Develop University Written Communication Rubric
Assessment Faculty Training: Improving Written Communication Outcomes through Faculty Feedback Loops
Assessment Faculty Training: Developing Ethics Curriculum in a Writing Intensive Course through Outcomes Assessment
Course Redesign with Technology and Program Coordination Report
Example of Semi-Automated Science Assessment Report and Faculty Feedback Loop
Launching System-based Solutions to Improve Teaching and Learning
According to Entangled Solutions’ new Quality Assurance standards report, evaluating higher education should include verifiable outcomes produced by assessment metrics in each of these areas: 1) Learning, 2) Completion Rate, 3) Job Placement Rate, 4) Earnings, and 5) Stakeholder Satisfaction.
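As a rough illustration, the five areas could be captured in a simple per-program scorecard. The field names, benchmark values, and sample numbers below are my own assumptions for the sketch, not figures from the Entangled Solutions report:

```python
# Hypothetical scorecard combining the five Quality Assurance areas into
# one record per program; every value and benchmark here is illustrative.
from dataclasses import dataclass

@dataclass
class ProgramScorecard:
    learning: float                  # mean outcome attainment (0-4 rubric)
    completion_rate: float           # fraction of entering cohort completing
    job_placement_rate: float        # fraction employed within six months
    median_earnings: int             # USD, first year after graduation
    stakeholder_satisfaction: float  # survey mean (1-5 scale)

    def meets_benchmarks(self) -> bool:
        """Assumed benchmarks; each institution would set its own."""
        return (self.learning >= 2.5
                and self.completion_rate >= 0.60
                and self.job_placement_rate >= 0.70
                and self.stakeholder_satisfaction >= 3.5)

card = ProgramScorecard(2.8, 0.64, 0.75, 41000, 3.9)
print(card.meets_benchmarks())
```

Structuring the metrics this way makes each program's standing against the benchmarks verifiable rather than anecdotal.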
But what about “soft skills”? How do we develop systems that identify, define, and measure the intrapersonal and interpersonal skills that have become so valuable in today’s workplace?