A-Team Session Guide · The ICCB Assessment Cycle

Forms 1→2→3A→3B · Course Health Indicators · The Compilation Guide · Richard J. Daley College · March 2026
📋 The Humanities Department is preparing for ICCB review. Five years of data, six courses, one story to tell.
§1. The Assessment Cycle: Forms 1→2→3A→3B · Normal

Assessment at Daley runs on a four-form cycle. Each form answers a different question, and together they create a closed loop: plan → measure → improve → verify. When the cycle works, every ICCB program review question can be answered with existing evidence.

The Four-Form Cycle

Form | Question It Answers | When | Key Output
Form 1 | "What are we assessing and how?" | Start of cycle | CSLOs, instrument, benchmark, sample, timing
Form 2 | "What did we find?" | After data collection | Results, tier breakdown (Exceeding/Meeting/Not Meeting)
Form 3A | "What will we change?" | After analysis | Improvement plan targeting specific CSLOs
Form 3B | "Did the change work?" | Next cycle | Evidence of improvement or next iteration

The critical insight is that Form 3B closes the loop. Without it, the cycle is just data collection. With it, the data tells a story: we found a problem, we intervened, and here is what happened. This is what ICCB and HLC reviewers look for — evidence that the institution uses its data, not just collects it.

[Figure: the four-form loop. Form 1 (Plan: what and how?) → Form 2 (Measure: what did we find?) → Form 3A (Improve: what will we change?) → Form 3B (Verify: did it work?) → closed loop back to Form 1]
The four-form assessment cycle. Form 3B closes the loop by verifying whether the Form 3A intervention produced measurable improvement.
Key Principle: Assessment is not a reporting obligation — it is a feedback loop. The forms are the mechanism; the loop closure is the purpose.
ICCB Questions 3.1–3.11 map directly to data produced by Daley Forms 1–3B. The Compilation Guide shows exactly which form feeds which question — no new faculty work required.
The cycle looks clean on paper. But what happens when a department submits boilerplate instead of evidence?
→ Broken View
§2. Reading Course Health Data · Normal

The ICCB review requires five years of longitudinal data per course. Three indicators together tell you whether a course is healthy: enrollment trends (is there demand?), credit hours produced (is it fiscally efficient?), and success rates (are students mastering outcomes?).

The Health Indicator Triad

Indicator | What It Measures | Healthy Signal | Warning Signal
Enrollment | Student demand | Stable or growing | Volatile or declining
Credit Hours | Fiscal efficiency | Proportional to enrollment | High enrollment + low completion = waste
Success Rate (≥C) | Outcome mastery | ≥70% stable | <50% or volatile

A "High Health" course like Philosophy 106 shows enrollment steady around 65 students, credit hours consistently productive at 195, and success rates between 69% and 93%. The three indicators reinforce each other — steady demand, efficient output, strong outcomes.
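To make the triad concrete, here is a minimal sketch in Python (an illustration for this guide, not a committee tool; the success-rate thresholds come from this section's legend, and the yearly enrollment values are illustrative):

```python
def classify_success(rate: float) -> str:
    """Tier a success rate (% earning C or better) per this guide's legend."""
    if rate >= 70:
        return "Healthy"
    if rate >= 50:
        return "Moderate"
    if rate >= 30:
        return "Low"
    return "Critical"


def course_health(enrollment: list[int], success_rates: list[float]) -> str:
    """Crude convergence check: stable demand plus outcome mastery.

    Volatility here is a simple spread-versus-mean ratio; a real review
    would also weigh credit-hour efficiency across the five-year window.
    """
    mean = sum(enrollment) / len(enrollment)
    volatile = (max(enrollment) - min(enrollment)) > 0.5 * mean
    tier = classify_success(success_rates[-1])
    if tier == "Healthy" and not volatile:
        return "High Health"
    if tier in ("Low", "Critical") and volatile:
        return "Low Health"
    return "Mixed signals: needs root-cause review"


# PHI 106: steady demand near 65 (illustrative yearly values), recent 85%.
print(course_health([64, 66, 65, 63, 67], [69, 74, 81, 93, 85]))  # High Health
# HUM 201: volatile 40-80 enrollment, recent 39%.
print(course_health([40, 69, 44, 80, 41], [62, 45, 50, 31, 39]))  # Low Health
```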

Humanities Department: Success Rate Comparison (Most Recent Year)

Course | Success Rate | Health Tier
SPA 102 | 92% | High
PHI 106 | 85% | High
PHI 107 | 80% | Mod-Hi
SPA 101 | 75% | High
HUM 202 | 50% | Low
HUM 201 | 39% | Low
ART 131 | 25% | Critical
PHI (single section) | 8% | Critical

Legend: ≥70% Healthy · 50–69% Moderate · <50% Low · <30% Critical
Success rates (% C or better) across Humanities courses. The gap between the top cluster (≥75%) and bottom cluster (≤39%) defines the committee's intervention priorities.

Just as important, the relationship between indicators matters. High enrollment with low success means students are "staying in the seat" — consuming resources — but failing to achieve outcomes. That mismatch is the most actionable signal in a program review.

Key Principle: No single metric tells the story. Health is the convergence of demand, efficiency, and mastery over time.
Philosophy 106 is healthy. But Humanities 201 has a 39% success rate — and the ICCB report claims "courses are meeting academic standards." What does the data actually say?
→ Broken View
§3. The Compilation Guide: Connecting Forms to ICCB · Normal

The Compilation Guide is the bridge between Daley's internal assessment forms and the ICCB program review template. It answers a simple question: for each ICCB question, which Daley form provides the evidence?

The guide exists in two editions. The Academic Disciplines edition covers departments like Humanities, Social Science, and Natural Sciences — 15 ICCB fields, of which Forms 1–3B populate 12 directly. The CTE edition covers Career and Technical Education programs (Networking, Cybersecurity, Software Development) and adds fields for labor market data, Perkins indicators, and industry certification.

What the Guide Does

For each ICCB question (3.1 through 3.11), the Compilation Guide identifies: (1) which Daley form contains the relevant evidence, (2) the specific field or data point to extract, (3) what HLC criterion the same evidence supports, and (4) where a supplemental data source is needed (Business Office, HR, Institutional Research). The department chair extracts; the Assessment Committee verifies alignment.
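As a data-structure sketch only (the entries below are hypothetical placeholders; the real form and field assignments live in the Guide itself), each Compilation Guide row can be read as a small four-part record:

```python
# Hypothetical shape of one Compilation Guide row. The question number 3.9
# (Program Goals) appears later in this session; every other value here is
# a placeholder, not content from the actual Guide.
crosswalk = {
    "3.9": {
        "daley_form": "Form 1",                       # which form holds the evidence
        "extract": "CSLO-to-program-goal alignment",  # illustrative field name
        "hlc_criterion": "3.x",                       # placeholder sub-component
        "supplemental": None,                         # None = forms alone suffice
    },
}

for question, row in crosswalk.items():
    source = row["supplemental"] or row["daley_form"]
    print(f"ICCB {question}: pull '{row['extract']}' from {source}")
```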

The Economics 201 pilot proved the concept: the full assessment cycle (Forms 1–3B) populated 12 of 15 ICCB fields and provided direct evidence for 5 of 7 HLC Criterion 3 sub-components. Only 3 fields required supplemental data — exactly as designed. In the previous submission cycle, Economics had zero specific assessment data.

[Figure: Daley Forms (Form 1 Planning, Form 2 Outcomes, Form 3A Improve, Form 3B Verify) → Compilation Guide (form-to-ICCB-question map, Academic + CTE editions) → ICCB Review (12/15 fields from forms) and HLC Criterion 3 (5/7 sub-components from forms); 3 fields routed to supplemental sources (IR/HR)]
The Compilation Guide translates Daley assessment forms directly into ICCB and HLC submissions. Only 3 of 15 ICCB fields require supplemental data from other offices.
The CIS CTE pilot extended this further. Three programs (Networking, Cybersecurity, Software Development), 12 completed forms, all mapped against the CTE-specific ICCB template. The Compilation Guide identified every crosswalk point — and flagged where Perkins data and labor market analysis require Office of Research input.
The guide maps everything cleanly. But when the committee reviews the actual Humanities ICCB report, the claims don't match the data. What's missing?
→ Broken View
§1. When the Cycle Breaks: Boilerplate and Missing Evidence · Broken

The Humanities ICCB report presents a case study in cycle failure. Across multiple disciplines, the report relies on identical language: "Data indicates that courses are meeting their academic standards and course curriculum assessments." No rubric scores, no portfolio percentages, no artifact analysis — just the claim.

The Vague Statement Problem

Art 131 has a success rate of 25%–29%. Philosophy sections report an 8% success rate. The Humanities 201 rate sits at 39%. Yet the same report states that courses are "meeting academic standards." This is a contradiction — the data directly refutes the claim. Without specific evidence (mean rubric scores, performance thresholds, artifact examples), reviewers have no way to verify the assertion.

⚠ Common Mistake: Equating Submission with Assessment

✗ Submitting data = doing assessment. ✓ Using data to change instruction = doing assessment. The Humanities report includes five years of enrollment and success rate numbers. But there is no Form 3A improvement plan and no Form 3B evidence of loop closure for the flagged courses. Data was collected, but the cycle never completed.

The Assessment Committee review identified a second failure mode: autonomy without evidence of varied assessment. Course descriptions repeatedly state only that "Writing assignments, as appropriate to the discipline, are part of the course." There is no mention of oral exams, digital portfolios, or collaborative projects — a single assessment modality for a diverse student body.

[Figure: Form 1 ✓ plan submitted → Form 2 ✓ data collected (39%, 8%) → Form 3A ✗ no improvement plan → Form 3B ✗ loop never closes: an open loop]
The Humanities cycle stopped after data collection. Without Forms 3A and 3B, problematic outcomes (39%, 8%) are documented but never addressed.
The cycle is broken. How do we move departments from boilerplate to evidence-based reporting?
→ Fix View
§2. What the Data Actually Says: The Humanities Health Snapshot · Broken

Let's read the Humanities data the way the committee reads it — looking for the convergence (or divergence) of the three indicators.

Humanities 201: Low Health

Enrollment swings wildly — 40 → 69 → 44 → 80 → 41 students. Credit hours mirror the same unstable pattern: 120 → 207 → 132 → 240 → 123. Success rates sit between 31% and 62%, most recently declining to 39%. All three indicators are negative simultaneously: volatile demand, inefficient output, poor outcomes. The data signals that without instructional redesign, sustainability is limited.

Compare with Spanish 102: success rates between 86% and 95% — the highest in the department. But enrollment drops from 139 to 107, and the 101→102 progression shows nearly 50% attrition. The students who stay succeed, but half the pipeline disappears. This is an entirely different problem — not instructional quality, but structural barriers to sequence completion (scheduling, cost, advising).
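The attrition figure is one line of arithmetic. A quick check, with illustrative headcounts since the report gives only the roughly 50% figure:

```python
def sequence_attrition(completed_101: int, enrolled_102: int) -> float:
    """Share of the 101 pipeline that never reaches 102."""
    return 1 - enrolled_102 / completed_101

# Illustrative numbers only: if ~140 students finish SPA 101 and ~70
# enroll in SPA 102, half the pipeline disappears between levels.
print(f"{sequence_attrition(140, 70):.0%}")  # 50%
```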

[Figure: HUM 201, FY20–FY24. Enrollment: 40, 69, 44, 80, 41. Success rate: 62%, 45%, 50%, 31%, 39%, plotted against a 70% benchmark line]
HUM 201: Enrollment swings wildly while success rate stays persistently below benchmark. The divergence signals instructional misalignment, not demand problems.
⚠ Common Mistake: Treating All Low Numbers the Same

✗ Low success rate = bad teaching. ✓ Low success rate requires root cause analysis. Humanities 201 at 39% may reflect misalignment between writing-intensive course demands and student preparedness. Art 131 at 25% may reflect assessment tool rigor beyond introductory level. Philosophy at 8% may reflect a single section with a specific adjunct. Each requires a different intervention.

Equity Signal: An 8% success rate is a definitive indicator of a barrier that likely disproportionately affects specific student demographics. The committee's recommendation: Equity-Minded Syllabus Reviews and Early Alert Systems in which faculty must trigger a support intervention after two consecutive missed assignments.
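The trigger rule is mechanical enough to sketch. A minimal version, assuming a per-student submission history (our assumption for illustration, not a spec for Daley's Early Alert system):

```python
def needs_early_alert(submissions: list[bool], threshold: int = 2) -> bool:
    """True once a student misses `threshold` consecutive assignments.

    `submissions` is ordered oldest to newest; False means missed.
    """
    streak = 0
    for submitted in submissions:
        streak = 0 if submitted else streak + 1
        if streak >= threshold:
            return True
    return False

# Missed assignments 3 and 4 back to back: trigger the intervention.
print(needs_early_alert([True, True, False, False, True]))  # True
```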
We've diagnosed the problems. What interventions does the committee recommend — and how does the data guide the choice?
→ Fix View
§3. The Gap: What ICCB Asks vs. What Departments Submit · Broken

ICCB Question 3.5 asks about Performance and Equity — disaggregated data showing whether all student populations are achieving outcomes equitably. HLC Criterion 3.G requires Student Success Outcomes documentation. Both expect specific, evidence-backed responses.

What the Humanities report provides: "Disaggregated data was reviewed" and "no identifiable gaps." Meanwhile, the data contains an 8% success rate in Philosophy sections and 25% in Art 131. Either the disaggregation wasn't performed, or the gap analysis was not connected to the narrative.

The Stalled Sequence Problem

ICCB expects evidence of programmatic continuity. But advanced courses in Literature and Foreign Language are "often not offered due to low enrollment." If a student completes French 101 but cannot access French 102, the program goal of "Language Proficiency" is logically unachievable. The report does not address this — and the Compilation Guide's crosswalk shows exactly where this gap maps to ICCB Question 3.9 (Program Goals).

The CTE template introduces additional complexity. CIS programs must report on labor market alignment, Perkins completion indicators, and industry accreditation status — fields that live outside the assessment forms entirely. Without the CTE edition of the Compilation Guide, chairs have no roadmap for which office holds which supplemental data.

What the Report Claims:
✗ "Meeting academic standards"
✗ "No identifiable gaps"
✗ "Data reviewed" (no specifics)
✗ No Form 3A or 3B attached
✗ Single assessment modality listed

What the Data Shows:
✓ HUM 201: 39% success rate
✓ PHI section: 8% success rate
✓ ART 131: 25% with full seats
✓ SPA 101→102: 50% attrition
✓ Stalled sequences (no 102 offered)
The ICCB report narrative (left) contradicts the quantitative evidence (right). The committee's role is to close this gap before submission.
The gap is clear. How does the Compilation Guide — tested with Economics and CIS — actually solve this problem?
→ Fix View
§1. From Boilerplate to Evidence: The TILT Framework · Fix

The committee's primary recommendation for departments stuck in boilerplate reporting is the TILT framework (Transparency in Learning and Teaching). Instead of stating "courses are meeting standards," future reviews must include a Data Evidence Table showing, for example, "85% of students achieved 'Proficient' rating on the Harlem Renaissance Visual Analysis Rubric."

The Evidence Table Template

CSLO | Instrument | Benchmark | Result | Action
CSLO 3: Analyze cultural texts | Essay rubric (4-point scale) | 70% at Proficient+ | 62% Proficient | Form 3A: scaffold drafting
CSLO 5: Evaluate diverse perspectives | Discussion lead rubric | 75% at Proficient+ | 81% Proficient | Maintain current practice

This table replaces a paragraph of boilerplate with specific, verifiable claims. Each row is a self-contained Form 2 → Form 3A pipeline.
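One way to keep those rows verifiable (a sketch of a possible format, not a mandated one) is to store each as a structured record that carries its own Form 2 result and Form 3A action:

```python
from dataclasses import dataclass

@dataclass
class EvidenceRow:
    """One row of a TILT-style Data Evidence Table; fields mirror the template above."""
    cslo: str
    instrument: str
    benchmark_pct: float  # % of students expected at Proficient or above
    result_pct: float     # % actually achieving it (Form 2)
    action: str           # Form 3A follow-up, or maintain current practice

    def met_benchmark(self) -> bool:
        return self.result_pct >= self.benchmark_pct

row = EvidenceRow("CSLO 3: Analyze cultural texts",
                  "Essay rubric (4-point scale)", 70, 62,
                  "Form 3A: scaffold drafting")
print(row.met_benchmark())  # False: improvement plan required
```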

The committee also recommends Angelo & Cross's Classroom Assessment Techniques (CATs) as a framework for varied assessment. Future reports should document at least three modalities, for example: primary source analysis rubrics (Humanities), oral proficiency interviews (Foreign Language), progressive sketchbook reviews (Fine Arts), and digital discussion lead rubrics (Literature).

[Figure: Before (boilerplate: "All courses in the department are meeting established academic standards.") → TILT → After (evidence: CSLO 3 · essay rubric · 70% target, 62% achieved → Form 3A: scaffold drafts). Angelo & Cross CATs by discipline: Humanities, source analysis rubrics; Foreign Language, oral proficiency interviews; Fine Arts, sketchbook reviews; Literature, discussion lead rubrics]
TILT transforms vague claims into verifiable evidence rows. Angelo & Cross CATs ensure each discipline uses assessment tools matched to its pedagogy.
Key Principle: Evidence replaces assertion. A department that can produce one row of that evidence table has completed more assessment work than a department that writes five pages of boilerplate.
Evidence tables fix the reporting problem. But what about the courses with 25%–39% success rates? The data demands intervention at the instructional level.
→ §2 Review
§2. Intervention Mapping: Matching the Fix to the Problem · Fix

Each data pattern calls for a different intervention. The committee's recommendations are calibrated to the root cause, not just the symptom.

Intervention Map: Humanities Department

Course | Problem Pattern | Root Cause Hypothesis | Intervention
HUM 201 (39%) | Volatile enrollment + low success | Misalignment: writing-intensive demands vs. student preparedness | Scaffolded writing drafts, writing lab integration, midterm diagnostic assessments
HUM 202 (50%) | Low enrollment + low success | Insufficient demand; sustainability concern | Block scheduling with HUM 201; consider consolidation or alternate delivery
ART 131 (25%) | Full enrollment + very low success | Assessment rigor misaligned with introductory level | "Rigor vs. Relevance" audit — shift from high-stakes terminal projects to scaffolded formative assessments
PHI (8% section) | Isolated catastrophic failure | Adjunct section without support | Classroom observations, professional development, Early Alert after 2 missed assignments
SPA 101→102 | 50% attrition between levels | Structural barriers (cost, scheduling, advising) | Pathway Mapping, Co-Req model, hybrid modality for 102, 8R1/8R2 scheduling

Notice that the Art 131 intervention is fundamentally different from the Philosophy 8% intervention. Art 131 has students staying but failing — suggesting the assessment tools are too demanding for an introductory course. The fix is formative assessment redesign. The Philosophy section has a single adjunct producing catastrophic results — the fix is observation and professional development. Same department, completely different interventions.

[Figure: signal → root cause → intervention, four lanes. ART 131 (25%, full seats): assessment rigor too high for intro level → scaffolded formative assessments (NILOA). PHI (8%, isolated section): adjunct without support structure → observations, PD, Early Alert. SPA 101→102 (50% attrition): structural barriers (scheduling, cost, advising) → pathway mapping, co-req, hybrid modality. HUM 201 (39%, volatile): writing demands vs. preparedness → scaffolded writing, writing lab, diagnostics]
Each data signal maps to a distinct root cause and specific intervention. One-size-fits-all responses miss the diagnostic precision the data provides.
NILOA Best Practice: When foundational course success rates drop below 50%, research from the National Institute for Learning Outcomes Assessment recommends shifting to scaffolded "low-stakes" formative assessments that build student confidence incrementally — rather than high-stakes terminal evaluations.
Interventions are mapped. Now how does all of this translate into the actual ICCB submission? The Compilation Guide connects every intervention to a specific review question.
→ §3 Review
§3. The Pilot Proof: Economics, CIS, and What Comes Next · Fix

The Economics 201 pilot demonstrated that the full assessment cycle populates almost all ICCB fields without additional faculty work. The key finding: in the 2021 ICCB submission, Economics had zero course-specific assessment data. After one complete Daley Forms cycle, it had evidence for 12 of 15 ICCB fields and 5 of 7 HLC sub-components.

Pilot Results Comparison

Metric | Economics (Academic) | CIS (CTE)
ICCB fields addressed | 12 of 15 | 12 of 15 + CTE supplement
HLC criteria evidenced | 5 of 7 | 5 of 7
Forms completed | 4 (1 course) | 12 (3 programs)
Supplemental sources needed | 3 (Business Office, HR, IR) | 3 + Perkins + Labor Market
Loop closure evidence | +7% success rate improvement (Form 3B) | NET-121: all CSLOs above proficiency after cycle

The CIS pilot added a critical validation: the CTE edition of the Compilation Guide correctly identified where Perkins indicators and labor market analysis require supplemental data from Institutional Research — fields the assessment forms cannot and should not try to address. This prevents chairs from either fabricating data or leaving fields blank.

[Figure: Economics 201 before and after one assessment cycle. Before (2021 submission): 0 of 15 ICCB fields, zero course-specific data. After: 12 of 15 fields from Daley Forms, 3 supplemental (IR, HR, Business Office), +7% success rate improvement, 5/7 HLC]
One complete Daley Forms cycle transformed Economics from zero data to near-complete ICCB coverage. The three remaining fields are correctly flagged as supplemental.
Next Steps (March Agenda): CIS and English are the next ICCB rotation. Every unit needs at least one course in the cycle, covering all sections regardless of modality or adjunct/full-time status. Outreach begins in April once courses are assigned to instructors. Chairs should identify their courses by the March Pop-Up (March 30–April 3).
The A-Team workflow: Chair identifies course → Assessment outreach assigns instructor → Form 1 (planning) → Data collection → Form 2 (outcomes) → Form 3A (improvement plan) → Implement → Form 3B (close the loop) → Compilation Guide extracts → ICCB submission. One cycle, no gaps, no boilerplate.
Three sections, three problems, three fixes. See how the full session connects in the Resolution Map.
→ Resolution Map

Resolution Map · The ICCB Assessment Cycle

Three problems, three interventions. Follow the thread from how the cycle should work, through where it breaks, to how we fix it.

§1. The Assessment Cycle → Boilerplate Reporting → TILT Evidence Tables

Normal
Four-form cycle: Form 1 (plan) → Form 2 (measure) → Form 3A (improve) → Form 3B (close loop). Each feeds specific ICCB fields.
Broken
Humanities report: identical boilerplate across disciplines. "Meeting standards" with 8%, 25%, 39% success rates. No Form 3A/3B for flagged courses.
Fix
TILT framework: Data Evidence Tables replace boilerplate. One row = one CSLO + instrument + benchmark + result + action. CATs for assessment variety.

§2. Course Health Indicators → Misread Data → Root-Cause Interventions

Normal
Health = convergence of enrollment + credit hours + success rates over five years. Three indicators reinforce or contradict each other.
Broken
HUM 201 (39%): volatile demand + low success. SPA 101→102: 50% pipeline attrition. ART 131 (25%): full seats, no mastery. Each a different problem.
Fix
Intervention matched to cause: scaffolded formative assessment (ART 131), pathway mapping + co-req model (SPA sequence), equity review + early alerts (PHI 8% section).

§3. ICCB Compilation → Claim-Data Gaps → Pilot-Tested Crosswalk

Normal
Compilation Guide maps every Daley form field to every ICCB question. Academic + CTE editions. 12/15 fields from forms alone.
Broken
Claim vs. data disconnect. "No identifiable gaps" alongside 8% success. Stalled sequences. CTE fields with no data source identified.
Fix
Economics pilot: 0 → 12/15 ICCB fields in one cycle. CIS pilot: 12 forms across 3 CTE programs. Loop closure with measured improvement. Template works.
[Figure: open loop (boilerplate, gaps, 0 data) → TILT + CATs → closed loop (plan, measure, improve, verify) → Compilation Guide → ICCB + HLC submission (12/15 fields, evidence-based, writes itself from the completed cycle)]
The Big Picture: Assessment is a closed loop, not a reporting obligation. When the cycle completes — plan, measure, improve, verify — the ICCB submission writes itself. The Compilation Guide is not a new burden; it's a translator between work already done and questions already asked. The Humanities review exposed what breaks when the loop stays open. The Economics and CIS pilots proved what happens when it closes. Next rotation: CIS and English. March Pop-Up date: March 30–April 3. Outreach begins April.