Assessment is how an institution learns — not how it reports, but how it actually learns. Alta Williams formalized this into a four-form cycle. Form 1: what are we assessing and how? Form 2: what did we find? Form 3A: what will we change? Form 3B: did the change work? Each form answers one question. Together, they create a closed loop.
The critical insight — and this is Alta's — is that Form 3B closes the loop. Without it, the cycle is just data collection. With it, the data tells a story: we found a problem, we intervened, and here is what happened. This is what ICCB and HLC reviewers look for — evidence that the institution uses its data, not just collects it.
Form 1: CSLO 3 — Critically evaluate fiscal and monetary policies. Instrument: 10-question quiz + 4 essays. Benchmark: 80% meet or exceed.
Form 2: 91% of assessed students (32/35) met or exceeded. 42% non-participation rate flagged.
Form 3A: Flipped classroom model introduced. Added discussion boards, multiple assessment points, and a more equitable variety of assessment formats.
Form 3B: Success rate 75% (+7% year over year). Retention 89% (+3%). Loop closed.
One faculty member. One course. One complete loop. The institution learned something real.
The committee chair needs a single view — not of individual student data, but of institutional assessment activity. Which departments are in cycle? Which are stalled? Where is the evidence strong enough for accreditation, and where are the gaps? The Dean needs the same view, filtered for action: what requires a signature, what's overdue, what's ready for ICCB.
The coordinators are the experts in this ecosystem. They're not form completers — they are reviewers. Their value is professional judgment: does this assessment plan measure what it claims to? Is this improvement action connected to the data? They need infrastructure that frees them to do that work, not paperwork that buries them in logistics.
Assessment work serves three frameworks simultaneously. The institution needs evidence of continuous improvement. The state (ICCB) needs program review data on a five-year cycle — 15 questions for Academic Disciplines, nearly 40 for CTE. The accreditor (HLC) needs evidence that assessment is "ongoing and systematic" under Criterion 3.E. These are not three separate tasks. They are three views of the same work.
The cycle is designed to close. Assessment plans lead to data, data leads to improvement, improvement leads to evidence. But when the cycle stalls, when visibility disappears, when the same work gets written three times — the institution loses the ability to prove it learns. Three cases trace what goes wrong and what the Hub catches at each step.
The Humanities ICCB program review presents a case study in cycle failure. Across multiple disciplines, the report relies on identical language: "Data indicates that courses are meeting their academic standards and course curriculum assessments." No rubric scores, no portfolio percentages, no artifact analysis — just the claim.
The cycle broke at Form 2 — data was collected but never used. No Form 3A improvement plan was filed. No Form 3B closed the loop. The boilerplate language replaced evidence because there was no infrastructure requiring the cycle to complete.
The Hub makes the form the workspace — with rubric-quality guidance at the point of entry. A faculty member who has never written an assessment plan sees, in the form itself, what "meets expectations" looks like. The tool doesn't just collect data — it trains good assessment practice.
The design constraint: zero additional faculty burden. The improvement is the purpose. Compliance falls out naturally. The guidance at the point of entry is the professional development. One action, four functions.
The assessment chair's primary tool is email. Sixteen departments. Four forms per department per cycle. That's 64 documents to track. The chair downloads each file individually, reads it, writes feedback, sends it back. There is no aggregated view.
Sixteen departments. Color-coded. Green: cycle complete. Yellow: in progress. Red: not started. The chair opens one tab before the meeting and knows exactly which departments need attention. No spreadsheet. No email archaeology. The meeting starts with substance.
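The chair's traffic-light view reduces to one rule: which of the four forms has each department filed? A minimal sketch, assuming status is derived from filed forms (the function and the sample departments' states are hypothetical; the color meanings are from the text):

```python
def status_color(forms_filed: set[str]) -> str:
    """Map a department's filed forms to a traffic-light status.
    Green: cycle complete. Yellow: in progress. Red: not started."""
    full_cycle = {"Form 1", "Form 2", "Form 3A", "Form 3B"}
    if forms_filed >= full_cycle:
        return "green"
    if forms_filed:
        return "yellow"
    return "red"

# Hypothetical snapshot of three of the sixteen departments.
departments = {
    "Humanities": set(),
    "Economics": {"Form 1", "Form 2", "Form 3A", "Form 3B"},
    "Networking": {"Form 1", "Form 2"},
}
dashboard = {dept: status_color(filed) for dept, filed in departments.items()}
# dashboard -> {'Humanities': 'red', 'Economics': 'green', 'Networking': 'yellow'}
```

One dictionary comprehension replaces 64 downloaded documents: the aggregated view exists because the status is computable from the forms themselves.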
Deadline tracking. Supplemental question completion. Sign-off workflow. When the chair finalizes a department's cycle, it appears in the Dean's queue. Review, sign off, ready for ICCB. The question "are we on track for HLC?" is answered by opening a tab.
Form data auto-maps to the 2025 Revised HLC Criteria: 3.E (Assessment), 3.F (Program Review), 4.A (Shared Governance). When an HLC peer reviewer asks "show me evidence that assessment is ongoing and systematic," the institution hands them a timestamped record — documented as the work happened, not reconstructed after.
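Timestamped-as-it-happens evidence can be sketched as a simple stamping step on each submission. The criterion numbers (3.E, 3.F, 4.A) come from the text; the section keys and function are illustrative assumptions, not the Hub's real schema:

```python
from datetime import datetime, timezone

# Hypothetical mapping from form sections to 2025 Revised HLC Criteria.
HLC_MAP = {
    "assessment_plan": "3.E",  # Assessment
    "program_review":  "3.F",  # Program Review
    "sign_off":        "4.A",  # Shared Governance
}

def evidence_record(section: str, payload: dict) -> dict:
    """Stamp a form submission at the moment it is filed, so the evidence
    is documented as the work happened, not reconstructed after."""
    return {
        "criterion": HLC_MAP[section],
        "submitted_at": datetime.now(timezone.utc).isoformat(),
        **payload,
    }

rec = evidence_record("assessment_plan", {"department": "Economics"})
```

The design choice worth noting: the timestamp is attached at write time, which is what makes "show me evidence that assessment is ongoing" answerable with a record rather than a reconstruction.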
ICCB program review asks up to 40 questions for CTE programs, 15 for Academic Disciplines. HLC wants evidence under Criterion 3.E. The institution's own template has its own format. Three frameworks. Three vocabularies. Three deadlines. One faculty member's work — written three times.
The Assessment Hub auto-extracts 28 of 38 ICCB program review fields directly from the assessment form data. Faculty never see the ICCB questions. They answered them already — through the forms they submitted as part of their normal assessment work.
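Auto-extraction amounts to a lookup table from ICCB questions to fields already captured on the forms. A minimal sketch under stated assumptions: the three question IDs and field paths below are invented examples, not the real 38-field map.

```python
# Hypothetical map: ICCB question -> (form, field) already submitted.
ICCB_FIELD_MAP = {
    "iccb_q3_assessment_methods": ("form1", "instrument"),
    "iccb_q7_findings_summary":   ("form2", "results"),
    "iccb_q12_improvement_plan":  ("form3a", "actions"),
}

def extract_iccb(forms: dict) -> dict:
    """Pre-fill ICCB answers from assessment form data; any question
    without a source field is left for manual entry."""
    answers = {}
    for question, (form, field) in ICCB_FIELD_MAP.items():
        value = forms.get(form, {}).get(field)
        if value is not None:
            answers[question] = value
    return answers

forms = {
    "form1": {"instrument": "10-question quiz + 4 essays"},
    "form2": {"results": "91% met or exceeded (32/35)"},
}
answers = extract_iccb(forms)
# Two of the three mapped questions are pre-filled; form3a is absent,
# so its question stays open for manual entry.
```

This is why faculty never see the ICCB questions: the answers are a projection of data they already entered.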
Networking (Chyu): Pre/post mean 56% → 82%. CSLOs 2 and 8 identified as weak → all above proficiency by FA23.
Cybersecurity (Bui): CSLOs 7, 8, 11 at 68% — below benchmark. Quiz redesigned with CSLO mapping. Result: all targeted CSLOs to 80%+.
Software Development (Chen): 45%+ did not meet expectations on problem-solving. Baseline established, improvement planned.
The principle: Compliance documentation should be a byproduct of doing the work well, not a separate activity that competes with it. The people closest to the work should own the tools that document it.