I highly recommend aligning your institutional service activities with your academic mission. If you don’t have one, then align your service with your passion, interests, and skills. My interest in creating excellent ASL-English interpreter education programs in the US is why I’m the point person for degree program assessment at my current institution. I served in a similar capacity at my previous institution.

Today, I thought I’d give a glimpse into the process of creating a program assessment plan.

Start with the End in Mind

As Stephen Covey says, “Start with the end in mind.” While this mantra is one of the habits of highly effective people, it’s also an excellent way to begin developing a program assessment process.

Wiggins & McTighe (2005) call this curricular process backward design, but it’s the same idea as Covey’s. Before you know what to do on a daily basis, you have to know your intended outcome.

Before you can assess program outcomes, you first have to know what the intended outcomes are. Some faculty development folks I know refer to this as defining the “ideal graduate.”

So, you need to identify the skills, knowledge, and dispositions that you expect graduates to achieve. Of course, we expect our students to develop many skills, gain knowledge in various areas, and hold dispositions that are aligned with professional expectations.

We can’t list all of the different things we want students to learn…well, we could, but that might get overwhelming – especially when we turn to assessing whether our program is helping students achieve the outcomes. Programs may group similar skills, knowledge, or dispositions into larger categories. Some examples are below.

Sample Program Outcomes

A program such as Ecology and Environmental Sciences at the University of Maine will have a different profile for an “ideal graduate.” Their program-level outcomes include items like the ones listed below, for example.

The BA in Deaf Studies at the University of Arizona includes these program outcomes (these are just a sample).

Determine How to Assess the Outcomes

Once you know what you want students to achieve by the time they graduate, the program needs to come up with a way to determine whether each student has met the criteria. If you have three program-level outcomes, you’ll need at least three measures of success – one for each outcome.

So, for example, a language program may have a program-level outcome related to students’ language proficiency – students will achieve Advanced-Low proficiency across the domains of interpersonal, interpretive, and presentational communication skills according to ACTFL standards. For this outcome, there are actually three different skills rolled into one, so it would take at least three assessments to determine whether students met this one program-level outcome.
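Because a single outcome can bundle several skills, it can help to track outcomes and their measures explicitly – even a simple mapping makes it easy to spot an outcome with no direct measure. Here is a minimal sketch of that idea; the outcome and assessment names are invented for illustration, not taken from any actual program:

```python
# Hypothetical mapping of program-level outcomes to the assessments
# that measure them. Names are illustrative only.
outcomes = {
    "Language proficiency (ACTFL Advanced-Low)": [
        "Interpersonal communication assessment",
        "Interpretive communication assessment",
        "Presentational communication assessment",
    ],
    "Professional dispositions": [
        "Internship supervisor evaluation",
    ],
}

# Every outcome needs at least one direct measure.
for outcome, measures in outcomes.items():
    assert measures, f"No measure defined for: {outcome}"
    print(f"{outcome}: {len(measures)} measure(s)")
```

A check like this scales as the plan grows: the three-skills-in-one outcome above correctly shows three measures, while any newly added outcome with an empty list is flagged immediately.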

Another important consideration is that the assessment measures the outcome itself, not a conflation of that outcome and something else. For example, course grades are rarely an accurate measure of a specific program outcome. By contrast, a dispositional outcome may be measured by an evaluation with relevant questions, completed by an internship supervisor with direct knowledge of the student’s disposition in the workplace.

This part of the process is where I currently am…determining options for how to assess student achievement of the program-level outcomes, and creating a rubric or another way to assess and report the data. We could use assessments that are part of a course (or multiple courses), or we could use an outside assessment measure (for example, ASLPI scores, site supervisor evaluations, etc.).

Scheduling Data Collection

Of course, data collection is easiest if it is embedded within a specific course in the required course sequence. This ensures that each student who majors in our program takes the course where the data is collected, and if the assessment is a required assignment for the course, there is a high likelihood they’ll complete it.

When program level assessments are not embedded into a course, the program is at the mercy of others to report the data. For example, if students take a statewide or national exam, then we depend on them to report their scores to us. If we ask site supervisors to complete surveys about our students, we depend on them to submit them.

These outside sources of information can provide important measures of success with program outcomes; however, it may not be best to rely solely on external measures.

Because the program I’m currently creating the assessment plan for is new, I am beginning the process by relying on internal data and measures that our faculty can collect as part of coursework.

During this first phase, I asked each faculty member to let me know the types of assessments and activities they were already doing in their courses that might be an acceptable measure of a program-level outcome. For some outcomes, there were more assessments already in use than for others.

Now, we’re in the process of creating a set of assessments that we can use to gauge student learning on the program-level outcomes, and we’ll begin incorporating those assessments into the relevant courses.

Instructors will not have to grade the assignment using the same criteria or rubric that the program assessment report will use. If they choose to do so, though, that will save some time in the data analysis phase.

Updates to follow

Today I wanted to give a glimpse into the first steps of creating a program assessment plan. I’ll report back in the coming months with what we decided and how we plan to move forward with analyzing the data, reflecting on practice, and then reporting the results to our institution.