Assessment Hub

From HI-C to Automation · Richard J. Daley College · Williams & Nino · 2026
You know the engine. Alta showed you HI-C in February. This is what the engine looks like when it runs — and we want your feedback.
§1. The P-ARC Assessment Cycle

Assessment is how an institution learns — not how it reports, but how it actually learns. Alta Williams formalized this into a four-form cycle. Form 1: what are we assessing and how? Form 2: what did we find? Form 3A: what will we change? Form 3B: did the change work? Each form answers one question. Together, they create a closed loop.
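To make the loop concrete, here is a minimal sketch of the cycle as data. The field names are ours, invented for illustration, not the Hub's actual schema; the point is that "loop closed" is a checkable property, not a claim.

```python
# A minimal sketch of the four-form cycle as data.
# Field names are illustrative, not the Hub's schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AssessmentCycle:
    form_1_plan: Optional[str] = None      # what are we assessing, and how?
    form_2_findings: Optional[str] = None  # what did we find?
    form_3a_action: Optional[str] = None   # what will we change?
    form_3b_result: Optional[str] = None   # did the change work?

    @property
    def loop_closed(self) -> bool:
        # Without Form 3B, the cycle is just data collection.
        return all([self.form_1_plan, self.form_2_findings,
                    self.form_3a_action, self.form_3b_result])

cycle = AssessmentCycle(form_1_plan="CSLO 3 quiz + essays",
                        form_2_findings="91% met or exceeded")
print(cycle.loop_closed)  # False: data collected, loop not closed
```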

The critical insight — and this is Alta's — is that Form 3B closes the loop. Without it, the cycle is just data collection. With it, the data tells a story: we found a problem, we intervened, and here is what happened. This is what ICCB and HLC reviewers look for — evidence that the institution uses its data, not just collects it.

HI-C in Practice — Economics 201

Kosipa's Complete Cycle (FA23–SP24)

Form 1: CSLO 3 — Critically evaluate fiscal and monetary policies. Instrument: 10-question quiz + 4 essays. Benchmark: 80% meet or exceed.

Form 2: 91% of assessed students (32/35) met or exceeded. 42% non-participation rate flagged.

Form 3A: Flipped classroom model introduced. Added discussion boards, multiple assessment points, equitable variety.

Form 3B: Success rate 75% (+7% year over year). Retention 89% (+3%). Loop closed.

One faculty member. One course. One complete loop. The institution learned something real.

Discussion

  1. Think about your own department. When was the last time someone completed the full cycle — through Form 3B? What made that possible, or what prevented it?
  2. Kosipa's loop took two semesters. What institutional conditions make a two-semester cycle normal rather than heroic?
Checkpoint · §1
Kosipa completed the cycle. The data shows +7% success, +3% retention, loop closed. But across the college, how many departments have a complete Form 3B on file? And what does the ICCB submission look like when departments claim "courses are meeting standards" without one?
→ See what happens in Stage 1
§2. The Committee View

The committee chair needs a single view — not of individual student data, but of institutional assessment activity. Which departments are in cycle? Which are stalled? Where is the evidence strong enough for accreditation, and where are the gaps? The Dean needs the same view, filtered for action: what requires a signature, what's overdue, what's ready for ICCB.

The coordinators are the experts in this ecosystem. They're not form completers — they are reviewers. Their value is professional judgment: does this assessment plan measure what it claims to? Is this improvement action connected to the data? They need infrastructure that frees them to do that work, not paperwork that buries them in logistics.

Discussion

  1. As a coordinator, where does your expertise get used most — reviewing quality, or tracking compliance?
  2. If the Dean asks "are we on track for HLC?" today, how long does it take to answer?
Checkpoint · §2
The committee structure is designed for expert review at every level — coordinator, chair, dean. What happens when the chair's primary tool for tracking sixteen departments is an email inbox and a personal spreadsheet?
→ See what happens in Stage 2
§3. The Compliance Chain

Assessment work serves three frameworks simultaneously. The institution needs evidence of continuous improvement. The state (ICCB) needs program review data on a five-year cycle — 15 questions for Academic Disciplines, nearly 40 for CTE. The accreditor (HLC) needs evidence that assessment is "ongoing and systematic" under Criterion 3.E. These are not three separate tasks. They are three views of the same work.

  • ~40 CTE program review fields
  • 15 Academic Discipline fields
  • 7 HLC Criterion 3 sub-components
  • 1 body of faculty work
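A hypothetical sketch of what "three views of the same work" means in practice. The record fields and framework keys below are invented; only the idea, one record projected three ways, comes from this section.

```python
# Illustrative only: field names and framework keys are invented for this sketch.
record = {
    "cslo": "Critically evaluate fiscal and monetary policies",
    "instrument": "10-question quiz + 4 essays",
    "finding": "91% met or exceeded (32/35)",
    "action": "Flipped classroom, discussion boards, multiple assessment points",
    "result": "+7% success, +3% retention",
}

# Each framework is a projection of the same record, not a separate write-up.
views = {
    "institutional": [record["finding"], record["action"], record["result"]],
    "iccb":          [record["instrument"], record["finding"], record["result"]],
    "hlc_3e":        [record["cslo"], record["action"], record["result"]],
}

for framework, evidence in views.items():
    print(framework, "->", evidence)
```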

Discussion

  1. What would it mean for your department if ICCB questions were answered automatically from the assessment data you already submitted?
  2. If compliance were a byproduct of doing the work well rather than a separate activity, how would that change faculty attitudes toward assessment?
Checkpoint · §3
One body of work, three compliance frameworks. That's the design. What actually happens three weeks before an ICCB deadline when the coordinator discovers the data exists but none of it is in the format ICCB wants?
→ See what happens in Stage 3

When the Cycle Breaks: Three Institutional Failures — and What Catches Them

The cycle is designed to close. Assessment plans lead to data, data leads to improvement, improvement leads to evidence. But when the cycle stalls, when visibility disappears, when the same work gets written three times — the institution loses the ability to prove it learns. Three cases trace what goes wrong and what the Hub catches at each step.

Stage 1: When the Cycle Stalls

The Humanities ICCB program review presents a case study in cycle failure. Across multiple disciplines, the report relies on identical language: "Data indicates that courses are meeting their academic standards and course curriculum assessments." No rubric scores, no portfolio percentages, no artifact analysis — just the claim.

  • Art 131: 25% success rate
  • Philosophy sections: 8% success rate
  • Humanities 201: 39% success rate
  • The claim: "Courses are meeting academic standards"
  • Forms 3A/3B: do not exist
From §1 — The P-ARC Cycle
The four-form cycle: Plan (Form 1) → Outcomes (Form 2) → Improve (Form 3A) → Close the Loop (Form 3B). Without Form 3B, the cycle is just data collection. Kosipa's Economics 201 completed the full cycle: +7% success, +3% retention, 12/15 ICCB fields addressed. The evidence writes itself when the cycle completes.
The Question
The data directly refutes the claim. Art 131 at 25%, Philosophy at 8%, Humanities at 39% — yet the report says "meeting standards." No Form 3A, no 3B. Where did the cycle break — and what would have caught this before it reached the ICCB submission?
What the Hub Catches

The cycle broke at Form 2 — data was collected but never used. No Form 3A improvement plan was filed. No Form 3B closed the loop. The boilerplate language replaced evidence because there was no infrastructure requiring the cycle to complete.

The Hub makes the form the workspace — with rubric-quality guidance at the point of entry. A faculty member who has never written an assessment plan sees, in the form itself, what "meets expectations" looks like. The tool doesn't just collect data — it trains good assessment practice.
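As a sketch of the kind of check that point-of-entry infrastructure makes possible (the rules, field names, and thresholds are our assumptions, not the Hub's actual validation logic):

```python
def validate_submission(success_rates: dict[str, float],
                        benchmark: float,
                        claims_meets_standards: bool,
                        has_form_3a: bool) -> list[str]:
    """Flag contradictions before a report reaches the ICCB submission.

    Illustrative rules only; the real Hub's checks may differ.
    """
    issues = []
    below = {c: r for c, r in success_rates.items() if r < benchmark}
    if claims_meets_standards and below:
        issues.append(f"Claim contradicts data: {below} below benchmark {benchmark:.0%}")
    if below and not has_form_3a:
        issues.append("Benchmark missed but no Form 3A improvement plan on file")
    return issues

# The Humanities case from this stage:
print(validate_submission(
    {"Art 131": 0.25, "Philosophy": 0.08, "Humanities 201": 0.39},
    benchmark=0.80, claims_meets_standards=True, has_form_3a=False))
```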

Economics 201 — 2021 ICCB
  • Zero course-specific assessment data
  • Assessment section referenced only History, Psychology, Sociology
  • Success rate declining: 85% → 65%
  • No documented intervention
After One HI-C Cycle
  • CSLO-level statistical evidence
  • 12/15 ICCB fields addressed
  • Confirmed improvement: +7% success
  • Full loop closed with Form 3B
  • 1 submission per cycle
  • 4 outputs generated
  • 0 additional faculty burden
  • $0 platform cost

The design constraint: zero additional faculty burden. The improvement is the purpose. Compliance falls out naturally. The guidance at the point of entry is the professional development. One action, four functions.

Stage 2: The Visibility Gap

The assessment chair's primary tool is email. Sixteen departments. Four forms per department per cycle. That's 64 documents to track. The chair downloads each file individually, reads it, writes feedback, sends it back. There is no aggregated view.

Committee meeting reality: First 25 minutes lost to logistics — who submitted, who hasn't, who emailed the form to the wrong person
With 20 minutes remaining: One department presents. Nobody discusses cross-department patterns because nobody has read more than one submission.
Dean asks: "Are we on track for HLC?" · Honest answer: "I think so, but I'd need a week to verify."
From §2 — The Committee View
Coordinators are reviewers — their value is professional judgment. The chair needs a single view of all 16 departments. The Dean needs the same view filtered for action. The design calls for expert review at every level, freed from logistics. Right now, the infrastructure doesn't exist to support that design.
The Question
25 minutes of logistics per meeting. 64 documents tracked manually. Zero cross-department visibility. When the Dean asks "are we on track?" — what would it take to answer that in 30 seconds instead of a week?
What the Hub Catches

Heatmap — Status at a Glance

Sixteen departments. Color-coded. Green: cycle complete. Yellow: in progress. Red: not started. The chair opens one tab before the meeting and knows exactly which departments need attention. No spreadsheet. No email archaeology. The meeting starts with substance.
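A minimal sketch of the status rule behind such a heatmap, with the green/yellow/red thresholds assumed from the description above:

```python
# Status rules are assumptions for illustration; the Hub's actual rules may differ.
def department_status(forms_on_file: set[str]) -> str:
    if {"1", "2", "3A", "3B"} <= forms_on_file:
        return "green"   # cycle complete
    if forms_on_file:
        return "yellow"  # in progress
    return "red"         # not started

departments = {
    "Economics": {"1", "2", "3A", "3B"},
    "Humanities": {"2"},
    "Philosophy": set(),
}
heatmap = {dept: department_status(forms) for dept, forms in departments.items()}
print(heatmap)  # {'Economics': 'green', 'Humanities': 'yellow', 'Philosophy': 'red'}
```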

Dean Sign-Off & Readiness

Deadline tracking. Supplemental question completion. Sign-off workflow. When the chair finalizes a department's cycle, it appears in the Dean's queue. Review, sign off, ready for ICCB. The question "are we on track for HLC?" is answered by opening a tab.
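The hand-off itself is simple state. A sketch, with invented record fields:

```python
from dataclasses import dataclass

@dataclass
class CycleRecord:
    department: str
    chair_finalized: bool = False
    dean_signed: bool = False

def dean_queue(records: list[CycleRecord]) -> list[str]:
    """Departments awaiting the Dean's signature: finalized but unsigned."""
    return [r.department for r in records if r.chair_finalized and not r.dean_signed]

records = [CycleRecord("Economics", chair_finalized=True),
           CycleRecord("Humanities")]
print(dean_queue(records))  # ['Economics']
```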

Auto-Mapped Accreditation Evidence

Form data auto-maps to the 2025 Revised HLC Criteria: 3.E (Assessment), 3.F (Program Review), 4.A (Shared Governance). When an HLC peer reviewer asks "show me evidence that assessment is ongoing and systematic," the institution hands them a timestamped record — documented as the work happened, not reconstructed after.
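A hedged sketch of auto-mapping with timestamps. Which form fields feed which criterion is our assumption; the criterion keys follow the mapping named above (3.E, 3.F, 4.A).

```python
from datetime import datetime, timezone

# Which form fields map to which criterion is assumed for this sketch.
FORM_TO_HLC = {
    "form_2_findings": "3.E",    # Assessment
    "form_3b_result": "3.E",
    "program_review": "3.F",     # Program Review
    "committee_signoff": "4.A",  # Shared Governance
}

def evidence_entry(field: str, value: str) -> dict:
    """Timestamp evidence as the work happens, not reconstructed after."""
    return {"criterion": FORM_TO_HLC[field],
            "evidence": value,
            "recorded_at": datetime.now(timezone.utc).isoformat()}

print(evidence_entry("form_3b_result", "+7% success, +3% retention"))
```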

  • 25 minutes lost to logistics per meeting
  • 64 documents tracked manually
  • 1 tab to answer "are we on track?"

Stage 3: The Triple-Write Problem

ICCB program review asks up to 40 questions for CTE programs, 15 for Academic Disciplines. HLC wants evidence under Criterion 3.E. The institution's own template has its own format. Three frameworks. Three vocabularies. Three deadlines. One faculty member's work — written three times.

The coordinator's reality: Receives ICCB template. Question 3.5 asks about performance and equity. Question 3.17 asks about assessment methods. The data exists — in forms, in Brightspace, in IR reports. None of it is in the format ICCB wants.
Result: two weeks of translating. Some data was never collected in the right format, so the coordinator improvises. Two months later, HLC wants the same evidence in a different format. Another rewrite.
The work was done once. The reporting was done three times.
From §3 — The Compliance Chain
One body of faculty work serves three compliance frameworks: institutional continuous improvement, ICCB program review (~40 CTE / 15 Academic fields), and HLC Criterion 3 (7 sub-components). These are three views of the same work — not three separate tasks. The fundamental problem: assessment documentation at most community colleges is a retrospective reconstruction of practice, not a contemporaneous record of it.
The Question
One body of work, three compliance frameworks — why is anyone rewriting? What does it look like when the form you submit is the same form that writes your ICCB report?
What the Hub Catches

The Assessment Hub auto-extracts 28 of 38 ICCB program review fields directly from the assessment form data. Faculty never see the ICCB questions. They answered them already — through the forms they submitted as part of their normal assessment work.
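A sketch of the crosswalk idea. The ICCB question numbers come from the coordinator's story in this stage; which form fields answer them is an assumption for illustration.

```python
# Crosswalk shape only; the actual ICCB field mappings are assumptions.
ICCB_CROSSWALK = {
    "3.5": "equity_disaggregation",  # performance and equity
    "3.17": "assessment_methods",    # assessment methods
    # ... one entry per auto-extractable ICCB field
}

def populate_iccb(form_data: dict, total_fields: int = 38) -> tuple[dict, str]:
    answers = {q: form_data[field]
               for q, field in ICCB_CROSSWALK.items() if field in form_data}
    return answers, f"{len(answers)}/{total_fields} fields auto-populated"

answers, coverage = populate_iccb({
    "assessment_methods": "Pre/post quiz mapped to CSLOs",
    "equity_disaggregation": "Success rates by section and modality",
})
print(coverage)  # 2/38 in this toy example; the Hub reports 28/38
```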

CIS CTE Pilot — Three Programs, One Architecture

Networking (Chyu): Pre/post mean 56% → 82%. CSLOs 2 and 8 identified as weak → all above proficiency by FA23.

Cybersecurity (Bui): CSLOs 7, 8, 11 at 68% — below benchmark. Quiz redesigned with CSLO mapping. Result: all targeted CSLOs to 80%+.

Software Development (Chen): 45%+ did not meet expectations on problem-solving. Baseline established, improvement planned.

FY22 ICCB Submission
  • "Pre and Post Assessments" for methods
  • "None" for curriculum revisions
  • No CSLO-level data anywhere
  • Advisory board engagement: gap
After One HI-C Cycle
  • CSLO-level statistical evidence
  • Specific interventions documented
  • Confirmed improvement data
  • Full CTE crosswalk validated
  • 28/38 ICCB fields auto-populated
  • 5/7 HLC criteria evidenced
  • $0 platform cost
  • 0 rewrites required

The principle: Compliance documentation should be a byproduct of doing the work well, not a separate activity that competes with it. The people closest to the work should own the tools that document it.

Try It Live

Click any role to open the live Assessment Hub.

Open the Dashboard →

"Assessment reporting should be a byproduct of doing the work well — not a separate activity that competes with it."
— The design constraint