Assessment at Daley runs on a four-form cycle. Each form answers a different question, and together they create a closed loop: plan → measure → improve → verify. When the cycle works, every ICCB program review question can be answered with existing evidence.
| Form | Question It Answers | When | Key Output |
|---|---|---|---|
| Form 1 | "What are we assessing and how?" | Start of cycle | CSLOs, instrument, benchmark, sample, timing |
| Form 2 | "What did we find?" | After data collection | Results, tier breakdown (Exceeding/Meeting/Not Meeting) |
| Form 3A | "What will we change?" | After analysis | Improvement plan targeting specific CSLOs |
| Form 3B | "Did the change work?" | Next cycle | Evidence of improvement or next iteration |
The critical insight is that Form 3B closes the loop. Without it, the cycle is just data collection. With it, the data tells a story: we found a problem, we intervened, and here is what happened. This is what ICCB and HLC reviewers look for — evidence that the institution uses its data, not just collects it.
The ICCB review requires five years of longitudinal data per course. Three indicators together tell you whether a course is healthy: enrollment trends (is there demand?), credit hours produced (is it fiscally efficient?), and success rates (are students mastering outcomes?).
| Indicator | What It Measures | Healthy Signal | Warning Signal |
|---|---|---|---|
| Enrollment | Student demand | Stable or growing | Volatile or declining |
| Credit Hours | Fiscal efficiency | Proportional to enrollment | High enrollment + low completion = waste |
| Success Rate (≥C) | Outcome mastery | ≥70% stable | <50% or volatile |
A "High Health" course like Philosophy 106 shows enrollment steady around 65 students, credit hours consistently productive at 195, and success rates between 69%–93%. The three indicators reinforce each other — steady demand, efficient output, strong outcomes.
The relationship between indicators matters as much as their individual levels. High enrollment with low success means students are "staying in the seat," consuming resources but failing to achieve outcomes. That mismatch is the most actionable signal in a program review.
The Compilation Guide is the bridge between Daley's internal assessment forms and the ICCB program review template. It answers a simple question: for each ICCB question, which Daley form provides the evidence?
The guide exists in two editions. The Academic Disciplines edition covers departments like Humanities, Social Science, and Natural Sciences — 15 ICCB fields, of which Forms 1–3B populate 12 directly. The CTE edition covers Career and Technical Education programs (Networking, Cybersecurity, Software Development) and adds fields for labor market data, Perkins indicators, and industry certification.
For each ICCB question (3.1 through 3.11), the Compilation Guide identifies: (1) which Daley form contains the relevant evidence, (2) the specific field or data point to extract, (3) which HLC criterion the same evidence supports, and (4) which office supplies any supplemental data (Business Office, HR, Institutional Research). The department chair extracts; the Assessment Committee verifies alignment.
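As an illustration of the guide's structure, the crosswalk can be thought of as a lookup table. The entry below is a hypothetical placeholder (the field names and the 3.1 mapping are invented for illustration), not the actual guide contents:

```python
# Hypothetical crosswalk entry; the real Compilation Guide covers
# questions 3.1-3.11 in both editions. The mapping below is invented.
CROSSWALK = {
    "3.1": {
        "daley_form": "Form 2",
        "field": "tier breakdown (Exceeding/Meeting/Not Meeting)",
        "hlc_criterion": "Criterion 3 sub-component",
        "supplemental": None,  # e.g. "Business Office" when needed
    },
}

def evidence_source(question):
    entry = CROSSWALK[question]
    # A supplemental office wins when the forms cannot supply the field.
    return entry["supplemental"] or entry["daley_form"]

print(evidence_source("3.1"))  # -> Form 2
```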
The Economics 201 pilot proved the concept: the full assessment cycle (Forms 1–3B) populated 12 of 15 ICCB fields and provided direct evidence for 5 of 7 HLC Criterion 3 sub-components. Only 3 fields required supplemental data — exactly as designed. In the previous submission cycle, Economics had zero specific assessment data.
The Humanities ICCB report presents a case study in cycle failure. Across multiple disciplines, the report relies on identical language: "Data indicates that courses are meeting their academic standards and course curriculum assessments." No rubric scores, no portfolio percentages, no artifact analysis — just the claim.
Art 131 has a success rate of 25%–29%. Philosophy sections report an 8% success rate. The Humanities 201 rate sits at 39%. Yet the same report states that courses are "meeting academic standards." This is a contradiction — the data directly refutes the claim. Without specific evidence (mean rubric scores, performance thresholds, artifact examples), reviewers have no way to verify the assertion.
Misconception: submitting data = doing assessment. Correction: using data to change instruction = doing assessment. The Humanities report includes five years of enrollment and success rate numbers, but there is no Form 3A improvement plan and no Form 3B evidence of loop closure for the flagged courses. Data was collected; the cycle never completed.
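A loop-closure audit is easy to mechanize once forms are tracked per course. The record shape below is a hypothetical sketch; the point is the rule, not the storage:

```python
def stalled_cycles(records):
    """Return courses that collected data (Form 2) but never moved to
    an improvement plan (Form 3A) or verification (Form 3B)."""
    return [course for course, forms in records.items()
            if "Form 2" in forms and not forms & {"Form 3A", "Form 3B"}]

print(stalled_cycles({
    "ECON 201": {"Form 1", "Form 2", "Form 3A", "Form 3B"},
    "HUM 201":  {"Form 1", "Form 2"},  # data collected, loop open
}))  # -> ['HUM 201']
```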
The Assessment Committee review identified a second failure mode: autonomy without evidence of varied assessment. Course descriptions repeatedly state only that "Writing assignments, as appropriate to the discipline, are part of the course." There is no mention of oral exams, digital portfolios, or collaborative projects — a single assessment modality for a diverse student body.
Let's read the Humanities data the way the committee reads it — looking for the convergence (or divergence) of the three indicators.
Humanities 201 enrollment swings wildly: 40 → 69 → 44 → 80 → 41 students. Credit hours mirror the same unstable pattern: 120 → 207 → 132 → 240 → 123. Success rates sit between 31% and 62%, most recently declining to 39%. All three indicators are negative simultaneously: volatile demand, inefficient output, poor outcomes. The data signals that without instructional redesign, sustainability is limited.
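Running this series through the classify_course sketch above reproduces the committee's read (the two mid-year success rates are invented to fill the reported 31%–62% range):

```python
print(classify_course([40, 69, 44, 80, 41], [0.31, 0.55, 0.62, 0.48, 0.39]))
# -> ['enrollment volatile', 'success rate below 50% or volatile']
```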
Compare with Spanish 102: success rates between 86% and 95%, the highest in the department. But enrollment drops from 139 to 107, and the 101→102 progression shows nearly 50% attrition. The students who stay succeed, but half the pipeline disappears. This is an entirely different problem: not instructional quality, but structural barriers to sequence completion (scheduling, cost, advising).
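The attrition arithmetic itself is one line; the cohort counts below are hypothetical, chosen only to match the reported "nearly 50%" figure:

```python
def sequence_attrition(level1_completers, level2_enrollees):
    """Share of the Level 1 pipeline lost before Level 2 enrollment."""
    return 1 - level2_enrollees / level1_completers

# Hypothetical counts: 210 completing SPA 101, 107 enrolling in SPA 102.
print(f"{sequence_attrition(210, 107):.0%}")  # -> 49%
```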
Misconception: low success rate = bad teaching. Correction: a low success rate requires root cause analysis. Humanities 201 at 39% may reflect misalignment between writing-intensive course demands and student preparedness. Art 131 at 25% may reflect assessment tool rigor beyond introductory level. Philosophy at 8% may reflect a single section with a specific adjunct. Each requires a different intervention.
ICCB Question 3.5 asks about Performance and Equity — disaggregated data showing whether all student populations are achieving outcomes equitably. HLC Criterion 3.G requires Student Success Outcomes documentation. Both expect specific, evidence-backed responses.
What the Humanities report provides: "Disaggregated data was reviewed" and "no identifiable gaps." Meanwhile, the data contains an 8% success rate in Philosophy sections and 25% in Art 131. Either the disaggregation wasn't performed, or the gap analysis was not connected to the narrative.
ICCB expects evidence of programmatic continuity. But advanced courses in Literature and Foreign Language are "often not offered due to low enrollment." If a student completes French 101 but cannot access French 102, the program goal of "Language Proficiency" is logically unachievable. The report does not address this — and the Compilation Guide's crosswalk shows exactly where this gap maps to ICCB Question 3.9 (Program Goals).
The CTE template introduces additional complexity. CIS programs must report on labor market alignment, Perkins completion indicators, and industry accreditation status — fields that live outside the assessment forms entirely. Without the CTE edition of the Compilation Guide, chairs have no roadmap for which office holds which supplemental data.
The committee's primary recommendation for departments stuck in boilerplate reporting is the TILT framework (Transparency in Learning and Teaching). Instead of stating "courses are meeting standards," future reviews must include a Data Evidence Table showing, for example, "85% of students achieved 'Proficient' rating on the Harlem Renaissance Visual Analysis Rubric."
| CSLO | Instrument | Benchmark | Result | Action |
|---|---|---|---|---|
| CSLO 3: Analyze cultural texts | Essay rubric (4-point scale) | 70% at Proficient+ | 62% at Proficient+ | Form 3A: scaffold drafting |
| CSLO 5: Evaluate diverse perspectives | Discussion lead rubric | 75% at Proficient+ | 81% at Proficient+ | Maintain current practice |
This table replaces a paragraph of boilerplate with specific, verifiable claims. Each row is a self-contained Form 2 → Form 3A pipeline.
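The routing rule each row encodes is simple enough to state as code. A minimal sketch, assuming benchmarks and results expressed as fractions; the function name is hypothetical:

```python
def form2_to_form3a(cslo, benchmark, result):
    """Route a Form 2 result: below benchmark opens a Form 3A plan."""
    if result < benchmark:
        return f"{cslo}: {result:.0%} < {benchmark:.0%} -> open Form 3A"
    return f"{cslo}: {result:.0%} meets {benchmark:.0%} -> maintain practice"

print(form2_to_form3a("CSLO 3", 0.70, 0.62))  # matches the first row
print(form2_to_form3a("CSLO 5", 0.75, 0.81))  # matches the second row
```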
The committee also recommends Angelo & Cross's Classroom Assessment Techniques (CATs) as a framework for varied assessment. Future reports should document at least three modalities, for example: primary source analysis rubrics (Humanities), oral proficiency interviews (Foreign Language), progressive sketchbook reviews (Fine Arts), and digital discussion lead rubrics (Literature).
Each data pattern calls for a different intervention. The committee's recommendations are calibrated to the root cause, not just the symptom.
| Course | Problem Pattern | Root Cause Hypothesis | Intervention |
|---|---|---|---|
| HUM 201 (39%) | Volatile enrollment + low success | Misalignment: writing-intensive demands vs. student preparedness | Scaffolded writing drafts, writing lab integration, midterm diagnostic assessments |
| HUM 202 (50%) | Low enrollment + low success | Insufficient demand; sustainability concern | Block scheduling with HUM 201; consider consolidation or alternate delivery |
| ART 131 (25%) | Full enrollment + very low success | Assessment rigor misaligned with introductory level | "Rigor vs. Relevance" audit — shift from high-stakes terminal projects to scaffolded formative assessments |
| PHI (8% section) | Isolated catastrophic failure | Adjunct section without support | Classroom observations, professional development, Early Alert after 2 missed assignments |
| SPA 101→102 | 50% attrition between levels | Structural barriers (cost, scheduling, advising) | Pathway Mapping, Co-Req model, hybrid modality for 102, 8R1/8R2 scheduling |
Notice that the Art 131 intervention is fundamentally different from the Philosophy 8% intervention. Art 131 has students staying but failing — suggesting the assessment tools are too demanding for an introductory course. The fix is formative assessment redesign. The Philosophy section has a single adjunct producing catastrophic results — the fix is observation and professional development. Same department, completely different interventions.
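As a summary of the table, the routing logic can be sketched as a lookup from diagnosed pattern to intervention family. This is illustrative only; real recommendations come from the committee's root cause analysis, not a dictionary:

```python
# Pattern keys paraphrase the "Problem Pattern" column above;
# values name the intervention family, not the full plan.
INTERVENTIONS = {
    ("volatile enrollment", "low success"): "scaffolding + diagnostics",
    ("low enrollment", "low success"): "block scheduling / consolidation",
    ("full enrollment", "very low success"): "formative assessment redesign",
    ("single section", "catastrophic"): "observation + professional development",
    ("sequence attrition", "high success"): "pathway mapping + modality changes",
}

def recommend(pattern):
    return INTERVENTIONS.get(pattern, "root cause analysis needed first")

print(recommend(("full enrollment", "very low success")))  # ART 131 pattern
```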
The Economics 201 pilot demonstrated that the full assessment cycle populates almost all ICCB fields without additional faculty work. The key finding: in the 2021 ICCB submission, Economics had zero course-specific assessment data. After one complete Daley Forms cycle, it had evidence for 12 of 15 ICCB fields and 5 of 7 HLC sub-components.
| Metric | Economics (Academic) | CIS (CTE) |
|---|---|---|
| ICCB fields addressed | 12 of 15 | 12 of 15 + CTE supplement |
| HLC criteria evidenced | 5 of 7 | 5 of 7 |
| Forms completed | 4 (1 course) | 12 (3 programs) |
| Supplemental sources needed | 3 (Business Office, HR, IR) | 3 + Perkins + Labor Market |
| Loop closure evidence | +7% success rate improvement (Form 3B) | NET-121 all CSLOs above proficiency after cycle |
The CIS pilot added a critical validation: the CTE edition of the Compilation Guide correctly identified where Perkins indicators and labor market analysis require supplemental data from Institutional Research — fields the assessment forms cannot and should not try to address. This prevents chairs from either fabricating data or leaving fields blank.
Three problems, three interventions. The thread runs from how the cycle should work, through where it breaks, to how it gets fixed.