Resources for Academic Programs

Components of Annual Assessment Reports

Every academic program (generally defined as degree and major) should:

  • identify expected student learning outcomes,
  • assess the extent to which it achieves these outcomes, and
  • provide evidence of seeking improvement based on analysis of its results.

Mercer currently uses an online platform, Anthology Planning, for completing and submitting assessment reports that satisfy the three requirements listed above. Within Anthology Planning, two plan items must be included in an academic program's assessment cycle. The first is the "Student Learning Outcome" template, in which programs provide their assessment methods and results for each SLO in the program. The second is the "Update and Improvement Plan" template, in which programs reflect on how their assessment data informs efforts to improve student learning; it also gives programs the opportunity to include a narrative about upcoming changes or strategies they plan to implement in order to improve their program. Successful completion of these two plan items within the annual assessment report fulfills the three required components listed above.

Please note that some academic programs may also use a third type of plan item, Program/Departmental Outcomes (PDOs). These outcomes focus on programmatic operations rather than on student learning. PDOs are not required in a program's assessment plan, but they can be useful tools for programmatic improvement.

Instructions for Accessing Plan Items in Anthology Planning

Instructions for Using Anthology Planning to Manage Outcomes

Instructions for Using Anthology Planning to Complete an Update and Improvement Plan

How to Run and Export Academic Assessment Reports in Anthology Planning

Curriculum Mapping

Each academic program should create a curriculum map for their particular student learning outcomes, and the curriculum map should be included in the SLO plan in Anthology. In essence, a curriculum map is a description of how the program helps students successfully achieve each SLO and specifies where (in the curriculum) each SLO is being assessed. While curriculum maps can take several different formats, they are most often represented as a matrix with the SLOs along one axis and required courses/experiences within the program along the other axis. Below is an example of a curriculum map for a hypothetical program in art collection:

Example Curriculum Map

As you scan each SLO from top to bottom, the curriculum map indicates where students were first introduced to that skill and where that skill was subsequently reinforced and mastered. The curriculum map also specifies the course(s) in which each SLO is being assessed. It is also worth noting that programmatic assessment takes place at the program level: each course in a curriculum does not need to be an opportunity for assessment. In this example, courses 0053-423, 0553-426, and 0553-438 are not assessment opportunities for any of the SLOs, and that is fine. They are still an important part of the curriculum and help students reinforce one of the programmatic SLOs.

It is probably worthwhile to note some ways in which this curriculum map could be improved, too. The first SLO (on the far left) is only assessed at one point in the curriculum, in the course in which it is first introduced. That same SLO is reinforced and mastered in later courses, but there is no way to assess the effectiveness of that reinforcement. If you are only assessing an SLO at one point in the curriculum, it is preferable to assess it in a later course, after that skill has been reinforced and/or mastered. Please keep in mind that this is just an example; the curriculum map for your program will likely be different.
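To make the matrix format described above concrete, here is a hypothetical sketch of a curriculum map; the SLOs and course numbers are invented for illustration, and "I", "R", and "M" mark where a skill is introduced, reinforced, and mastered, following the conventions discussed above:

```text
                         SLO 1          SLO 2          SLO 3
Course 101 (intro)       I              I
Course 210               R (assessed)   R              I
Course 350                              M (assessed)   R
Course 480 (capstone)    M                             M (assessed)
```

Note that each SLO is assessed at least once, that assessment does not have to occur in every course, and that (ideally) assessment happens after a skill has been reinforced or mastered rather than only where it is introduced.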

Review of Annual Assessment Reports

Every year, the University Assessment Council (UAC) reviews and provides feedback on the assessment reports from selected academic programs. The intent of this review is to provide constructive feedback that will aid programs in using appropriate assessment data for continuous programmatic improvement. The UAC uses a rubric for reviewing the academic assessment reports (rubric modified from Fulcher 2018, James Madison University). Note that the rubric has four performance levels (beginning, developing, good, and exemplary), with descriptions of the criteria at each performance level. To illustrate what would qualify a report as beginning, developing, good, or exemplary, please see the example reports below for a fictional BA in '80s pop culture program (modified from Fulcher and Alahmadi, 2023, Assessment 101). Each of the example reports is followed by descriptions of notable aspects of the report.

Beginning Example

Notes on Beginning Example

  • The student learning outcome could be improved by focusing on what students should be able to do, rather than on what faculty are going to teach
  • A curriculum map should be included for each SLO that documents how the program is preparing students to be successful
  • The assessment method could be improved by providing some context for deployment of the exit survey (e.g., Who is taking the survey? Who is administering the survey? How and when is the survey being completed?). A target or benchmark for the assessment method is included, which is good. This SLO is only being assessed by an indirect method, though. SLOs should always be assessed with at least one direct assessment.
  • The assessment results provide a summary statistic (average survey response), which is good. Some additional information (e.g., sample size, response rate) would strengthen this data.
  • The Update and Improvement Plan could be improved by including a thorough "backward-looking" narrative that compares this year's assessment results to those from previous years and discusses possible reasons that the target was missed. Likewise, the Update and Improvement Plan could benefit from a "forward-looking" action plan section that does more than make vague reference to investigating this issue further.
  • Finally, the Update and Improvement Plan could use a more thorough description of how the faculty in this program review and discuss the assessment results to drive improvement efforts.

Developing Example

Notes on Developing Example

  • The student learning outcome in this example is improved over the "beginning example", as it focuses on students rather than faculty. The SLO could be improved by replacing "understand" with a more specific action or skill that students should be able to perform. That is, what will students actually be asked to do in order to show that they understand ethical reasoning?
  • This example has some courses listed in the curriculum map, but it isn't clear how or when these courses contribute to students' development in ethical reasoning. It may also be notable that all of the courses listed in the curriculum map are introductory (100/1000) level courses.
  • The assessment used in this example is a direct assessment, which is an improvement over the first example. There is also a specific target or benchmark for the assessment method. It should be noted, though, that final exam grades are often not good measures for a specific SLO, as final exams typically cover multiple topics or skills. As such, there isn't good alignment between the SLO and the measurement. This method could be improved by identifying a particular question or questions on the final exam that focus specifically on ethical reasoning and using students' performance on those questions as the measurement. It could also be improved by providing some additional context about this assessment and its target population.
  • The assessment results provide a summary statistic (average exam score), which is good. Some additional information (e.g., sample size) would strengthen this data.
  • The Update and Improvement Plan includes some "backward-looking" narrative indicating that students have performed well on this assessment in the past, which is a good addition to this assessment report. This example, though, could be improved by including a more thoughtful narrative in the "forward-looking" improvement plan section. Just because a target or benchmark has been achieved doesn't mean that no action is needed or that nothing can be done to improve student learning related to this SLO. Still, 16.6% of the student population did not meet the benchmark, so it may be worthwhile to investigate the demographics of these students. That type of data may be useful for creating and implementing interventions that will allow those students to be successful in the future.
  • The Update and Improvement Plan indicates that the assessment results were shared with some faculty, but there isn't any indication of who these faculty members were or how the faculty are using this information for programmatic improvement.

Good Example

Notes on Good Example

  • The student learning outcome in this example is much improved over the "developing example". This SLO contains clear and specific action verbs that describe what students should be able to do - "analyze" and "provide a recommendation". It should be noted that having two distinct verbs makes this a "bundled" SLO, which can potentially cause some alignment issues if the assessment method cannot differentiate between the two skills (e.g., students can successfully analyze the ethical situation but are unable to provide a recommendation). Bundled SLOs can be OK, as long as each skill is measured independently.
  • The curriculum map for this SLO is also improved over the previous example. It contains some description of activities in a variety of courses, but it could be improved by including additional details about how these activities specifically develop students' ability in the two skills listed in the SLO.
  • The assessment method in the "good example" is a direct measurement of an assignment that is assessed using a rubric containing criteria that specifically align with both skills in the SLO. The rubric is described, but it would be better if the rubric were uploaded/attached. The assessment method has a specific target or benchmark and also includes a narrative related to how the artifacts were collected.
  • The assessment results and analysis provide disaggregated data for each skill contained in the outcome, which is key for any bundled SLO.
  • The Update and Improvement Plan includes a brief "backward-looking" update that provides some context for this year's assessment data. This update narrative could be strengthened by providing a fuller context for the program's assessment efforts, including any relevant data from previous assessment cycles. The "forward-looking" improvement plan is thorough, and it is clearly based on the assessment data collected in this assessment cycle.
  • The Update and Improvement Plan indicates that the assessment results were emailed to all faculty, but there isn't any indication of how the faculty are using this information for programmatic improvement.

Exemplary Example

Notes on Exemplary Example

  • The SLO in this example is similar to the one in the "good example", and it has the additional benefit of including some context. That is, it specifies that this SLO is relevant to students who are graduating from this particular program.
  • The curriculum map in this example is also improved by including some details about activities in these courses that specifically help students develop these skills.
  • The assessment method is described in a very similar fashion to the "good example". Additionally, it contains a description of how faculty members were trained in using the rubric to assess student work and includes information related to inter-rater reliability measures.
  • The assessment results and analysis provide disaggregated data for each skill contained in the outcome, which is key for any bundled SLO.
  • The Update and Improvement Plan includes a thorough "backward-looking" update and a "forward-looking" improvement plan. There is a clear cycle in which previous assessment data was used to drive decision making and implement programmatic improvement. The benefit of that programmatic improvement is clearly evident in subsequent iterations of the assessment cycle.
  • It is also clear how the information from programmatic assessment was distributed and used by the faculty to drive their improvement efforts.

Have Questions or Need Some Help?

Appointment Calendar