
Program Assessment

Program assessment should resonate with what faculty do in their scholarly research: they try
creative new approaches, then conduct careful analyses and apply
appropriate metrics to evaluate whether those approaches are really working and achieving
the outcomes they anticipated and want to see.


Program Assessment – Getting Started


The program assessment process is very similar to the teaching-learning-assessment cycle that instructors follow at the course level.




The teaching/learning/assessment cycle depicts instructor activities at the course level. The instructor begins by determining student learning objectives. Next, she provides students with opportunities to learn what they need to know to meet those objectives. Following the learning opportunities, tests or assignments are administered to determine the extent to which students have met the objectives. Finally, results are used to make improvements in the course.


The program assessment process involves almost the same steps at the program level. It differs because it provides the opportunity for faculty members to come together to discuss student learning within the program and to see how courses fit together.


The formal program assessment process begins with the determination of the mission and goals of the program, and is ideally linked to the mission and goals of the institution. However, in practical terms, the process often begins with the program objectives. The following pages describe the basic program assessment process, which involves the following six steps.


 

 

  1. Identify program goals
  2. Identify program objectives
  3. Align courses with program goals and objectives
  4. Provide evidence of how well students are meeting your goals and objectives
  5. Conduct assessments and use results for improvement
  6. Develop a plan for ongoing student outcomes assessment

 

 


Identifying Program Goals


 

At the beginning of every semester, you probably think about your courses in terms of what you want your students to be able to accomplish by the end of the semester. These are usually broad statements (learning goals) that relate to the integration of several concepts and skills. The program assessment process begins the same way. One of the first stages of the process is to revisit the goals identified by your department when it became a degree-granting program. These are the statements that identify the concepts and skills that students should attain by the time they graduate. Like the broad goals for each of your courses, these program goals specify what the overall curriculum intends to accomplish. Goal statements usually put into words the overarching knowledge, skills, and attitudes that relate to your program.


Although goal statements are usually written when the degree program is established, you might need to revise the goals from time to time to ensure they still meet the current needs of graduates in your discipline. If program learning goals do not exist, you will need to develop them.


Use the following questions to guide your writing: 


  • What are the needs of our graduates upon completion of a degree in our discipline?
  • Are there specific accreditation or certification requirements for our department and/or college?
  • Are there any recommendations for goals that have been developed by professional organizations in our field that align with the goals we want graduates to achieve?
  • Are there any recommendations made by business and industry that could translate into goals for our program?
  • Have peer institutions published goals that might be appropriate for our graduates?
 


It is not necessary to achieve complete consensus among all program faculty, but most groups should be able to reach a functional consensus (i.e., sufficient agreement to prevent paralysis and allow progress). All important goals should be included, even if documenting students’ progress and achievement on some of them is difficult.


 

 

Identifying Desired Measurable Learning Objectives


 

In the previous step, you developed or modified your program goals. Although these statements define what we want our students to attain, they aren’t usually specific enough to be measurable. When you write learning objectives, you are describing the specific knowledge, skills, or attitudes that students should have when they complete your program. Objectives should be measurable so you can produce evidence that the graduates of your program are meeting the intended goals. Objectives can specify student actions, expected perceptions of faculty, students, or employers, or expected student performance on assignments.


Each program goal developed in the previous step should be associated with one or more program objectives, and the faculty determines the number of objectives per goal. Because program goals are typically phrased in general terms, they can often be met in a variety of ways; such goals will have multiple objectives.


Faculty may choose to identify program objectives concurrently with or subsequent to identification of program goals. Some programs will focus on more general goals before delving into specific expectations for students in a program. Other programs will begin by identifying how students demonstrate their skills and knowledge, and later cluster these objectives under a few more general goals.


There are three different types of learning objectives - knowledge, skills, and attitudes - and the statements written for each directly relate to the goals you have set.


When writing knowledge objectives, you are trying to define the main concepts (e.g., theoretical principles) that students know when they graduate.


When writing skill objectives, you are trying to describe the broader skills (e.g., problem solving) that students will be able to apply when they graduate.


Finally, attitudinal objectives usually describe beliefs about the nature of the field or perceptions about interdisciplinary connections (e.g., ethics) that you want students to attain before they graduate.


When writing learning objectives, it helps to think in terms of the level of knowledge, skill, or attitude you expect your graduates to attain. Bloom (1956) developed a taxonomy of educational objectives that is useful for this purpose. Using a verb to describe the student action makes the statement measurable and helps you later define the type of assessments needed to show the extent to which the objectives were achieved. Sample Verbs for Learning Objectives (pdf) provides a list of verbs associated with each level.


See Program Assessment: Options for Getting Started (pdf) for suggestions about how to approach the development of learning objectives at the program level.



 

Aligning Courses with Program Goals and Objectives


After your program goals and objectives are written, you are ready to determine which courses contribute to the attainment of each. The best method for making these connections is to use a mapping matrix (curriculum matrix) like the one below.

 
 
 

Course/Objective Matrix Template

Goal           Course 1   Course 2   Course 3   Course x
Objective 1
Objective 2
Objective 3
Objective x


Begin by creating a table that lists the program objectives in the left column and the courses in your program across the top. The courses should be listed in the order in which students take them, with elective courses last. Once the chart is complete, give a copy to each faculty member in your department and have them indicate which courses contribute to the attainment of each objective, and the depth of coverage, using the following scale (or one like it):


  • 0 or blank: no emphasis or coverage
  • 1: Topics are only introduced to produce “awareness”
  • 2: Topics are introduced and further developed or reinforced
  • 3: Topics are fully introduced, developed, or reinforced throughout the course


This strategy helps you determine which courses address your objectives and identifies the courses to target for data collection, which is the next step in the program assessment process. Courses rated at level 3 are usually the ones on which to focus your data collection efforts, as in the sketch below.
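
To illustrate, once faculty ratings are collected, even a small script can tally the matrix, flag target courses, and surface gaps. The Python sketch below uses hypothetical course names, objectives, and ratings; it is one convenient way to automate the tally, not a required part of the process.

```python
# A minimal sketch of tallying curriculum-matrix ratings.
# Course names, objectives, and ratings are hypothetical.

# Coverage scale: 0 = none, 1 = introduced, 2 = reinforced, 3 = fully developed
matrix = {
    "Objective 1": {"BIO 101": 1, "BIO 210": 2, "BIO 350": 3, "BIO 490": 3},
    "Objective 2": {"BIO 101": 0, "BIO 210": 1, "BIO 350": 2, "BIO 490": 3},
    "Objective 3": {"BIO 101": 0, "BIO 210": 0, "BIO 350": 1, "BIO 490": 1},
}

for objective, ratings in matrix.items():
    # Courses rated 3 are natural sites for collecting evidence.
    targets = [course for course, level in ratings.items() if level == 3]
    if targets:
        print(f"{objective}: collect evidence in {', '.join(targets)}")
    else:
        # No course develops this objective fully -- a possible curriculum gap.
        print(f"{objective}: no level-3 coverage; review for a curriculum gap")
```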


This alignment activity also allows program faculty to demonstrate the role and importance of their courses in the curriculum and helps to verify that the curriculum is appropriately structured and balanced to attain the program objectives. The process offers faculty an opportunity to identify gaps and redundancies and to make deliberate decisions about whether those are acceptable.


Each course in the curriculum should be linked to at least one program objective. Some courses will be associated with more than one objective. Since objectives are directly linked to goals, this alignment will, by default, ensure that each course is linked to at least one program goal.


If specific general education courses are required as prerequisites for courses in the degree program, they should be identified and included in the alignment.


This information should be presented in a way that reflects the traditions of the field. Some degree programs will present the alignment in a table or grid format or as a list of courses under each objective, but your program is not limited to these two formats.

 


Providing Evidence of How Well Students Are Meeting Your Goals and Objectives


After goals and objectives are mapped to the courses in the curriculum, it is time to select the evidence that will best demonstrate the extent to which students have met the stated goals/objectives. The curriculum map will help you determine which courses are likely to provide the most appropriate evidence. Evidence may be drawn from a wide variety of sources, including answers to specific test questions, student writing samples, team project reports, and survey questionnaires. Once this array of material, information, and evidence is gathered, the faculty will have a better sense of the program’s outcomes, i.e., the overall level of achievement.


Together the program faculty will need to decide:


  • How to present existing information and evidence of student learning
  • Whether existing information provides sufficient evidence and a compelling case that students are achieving as expected
  • Whether additional sources need to be tapped or a different method used to gather information
  • Whether evidence needs to be collected more systematically in order to present a comprehensive picture of students’ learning
  • What curricular, instructional, or program changes would improve students' learning, if any

 

Assessment experts recommend a mix of direct and indirect evidence of student learning (see below). If assignments will be used, it is not necessary to collect them from all students; it is acceptable to sample student work for the purposes of program assessment. However, it is important for faculty to determine a benchmark level of performance that is deemed acceptable. Evidence of student learning and achievement should be presented in aggregate form for the program, and it should be anonymous: students’ names should be removed, and course-level data should not be linked to or used in evaluations of individual faculty members.
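
As a concrete illustration of sampling and de-identifying student work, the sketch below uses Python’s standard library; the file names and sample size are hypothetical.

```python
# A minimal sketch of sampling and de-identifying student work for
# program assessment. File names and the sample size are hypothetical.
import random

submissions = [f"essay_student_{i:03d}.pdf" for i in range(1, 61)]

random.seed(42)                            # fixed seed for a reproducible sample
sample = random.sample(submissions, k=15)  # score a subset rather than all 60

# Assign anonymous codes so scorers never see student identities;
# one coordinator keeps the code-to-file key separately.
anonymized = {f"paper_{n:02d}": original for n, original in enumerate(sample, start=1)}

for code in sorted(anonymized):
    print(code)
```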


Be careful not to collect more information than you will be able to analyze, because doing so can divert attention from critical areas and result in overwork. Over time, each program will refine its process and more accurately target areas of concern or interest.


Direct Evidence


Direct evidence typically includes samples of students’ work from courses or other scholarly activities, such as papers, essays, or presentations. If faculty members evaluate students at the program level using specific criteria, such as performance on professional, national, or state exams, or using rubrics, those instruments and their outcomes can also provide direct evidence of student achievement. You may have heard that student course grades are not considered useful evidence for program assessment. See Can grades be used for assessment? (pdf) for reasons why and for appropriate ways to use course assignment scores as evidence of student learning.


Scoring Student Performance


Scoring a multiple-choice test is straightforward. However, scoring assignments such as projects, papers, performances, or presentations is much more complex. In these instances, a scoring guide (rubric) provides the best option. For a description of how to create a good rubric for program assessment purposes, see Using Rubrics to Facilitate Program Assessment. You can find additional rubric resources on our website. Once you have developed the rubric, multiple faculty members can use it to score student work consistently.


Setting Benchmarks


The next decision is to set benchmarks. What score on an assignment or set of multiple-choice questions indicates that a student has “met” the objective? Would you be satisfied with an individual student score of 75%, or would you expect 90%? According to Linda Suskie (2012), the benchmark depends on the objective. In some cases, 75% might be enough. In others, for example nurses learning to give injections, you probably want your benchmark closer to 100%.


The next question to address is the level of overall performance you will accept as evidence that your students, collectively, have “met the objective.” That is, what performance level will you accept as evidence of success, and what level will lead you to make changes? Suskie suggests further investigation of any rubric criteria or test questions for which fewer than 50% of students reached your benchmark; a sketch of this check follows.
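
The sketch below illustrates this two-level check. The scores and the 75% individual benchmark are hypothetical; the 50% investigation threshold follows Suskie’s suggestion above.

```python
# A minimal sketch of applying an individual benchmark and a 50%
# investigation threshold. Scores and benchmark values are hypothetical.

scores = [82, 91, 68, 77, 95, 60, 88, 73, 79, 85]  # percent scores on one assignment

individual_benchmark = 75   # score each student should reach
investigate_below = 0.50    # share of students that triggers further investigation

met = sum(score >= individual_benchmark for score in scores)
share = met / len(scores)

print(f"{met} of {len(scores)} students ({share:.0%}) met the {individual_benchmark}% benchmark")
if share < investigate_below:
    print("Fewer than half reached the benchmark -- investigate this objective further")
```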


Indirect Evidence


Perceptions and viewpoints that can be obtained through focus groups or surveys of students, alumni, and employers serve as indirect evidence. Retention, graduation, and job placement statistics, as well as data on participation in university or department programs that can be linked to program goals (e.g., internships, study abroad, undergraduate research participation rates) are also considered indirect evidence.




Interpreting Evidence and Making Changes


Displaying Data


The way you display your evidence affects how easily you can interpret it. For example, percentages are helpful for large numbers of students (roughly more than 20 or 25); with fewer students, raw counts are easier to interpret. You may also want to sort the data in a meaningful way, perhaps from highest to lowest scores, to highlight areas that need attention. The key to displaying your data is to make it easy to determine the strengths and weaknesses of your students so that you know where to focus your attention.
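
The sketch below illustrates these display choices with hypothetical rubric results: percentages versus raw counts depending on group size, and sorting so the weakest criteria appear first.

```python
# A minimal sketch of the display advice above. Criteria and counts are hypothetical.

results = {  # criterion -> (students meeting benchmark, students assessed)
    "Thesis and argument": (18, 22),
    "Use of evidence": (11, 22),
    "Organization": (20, 22),
    "Citation mechanics": (9, 22),
}

def fmt(met, total):
    # Percentages read well for roughly 20-25 students or more; raw counts below that.
    return f"{met / total:.0%}" if total > 20 else f"{met} of {total}"

# Sort ascending by success rate so trouble spots surface first.
for criterion, (met, total) in sorted(results.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{criterion}: {fmt(met, total)} met the benchmark")
```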


Decisions for Action


If you determine that too few students have met your benchmark, you will need to explore why so that you can make appropriate improvements. Sometimes the cause is obvious. Other times you may need to work harder to find it. Consider the possibility that negative assessment results can be rooted in any of the elements of the teaching/learning/assessment cycle (Suskie, 2012).





For example, you may determine that the program objective was not written well, or perhaps isn’t really important after all. In this case, your strategy would be to rewrite, replace, or delete the objective. Or you may hypothesize that the objective isn’t being properly addressed in the course(s): perhaps students aren’t getting enough practice, or there isn’t enough emphasis by the instructor. Alternatively, it may be that the assessment itself is not constructed in a way that best addresses the objective; perhaps the assignment directions and/or rubric need to be revised. For multiple-choice questions, it isn’t uncommon for a test question to be worded in a way that results in many students missing it even when they know the material. Test questions can be analyzed using a statistical procedure called item analysis, sketched below.
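
The sketch below shows a basic item analysis under common conventions: item difficulty as the share of students answering correctly, and a simple discrimination index comparing top- and bottom-scoring students. The response data and the flagging cutoffs are hypothetical.

```python
# A minimal sketch of a basic item analysis. Response data are hypothetical.
# rows = students, columns = items; 1 = correct, 0 = incorrect
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
]

totals = [sum(row) for row in responses]
order = sorted(range(len(responses)), key=lambda i: totals[i])
k = max(1, len(responses) // 3)      # compare bottom and top thirds of scorers
bottom, top = order[:k], order[-k:]

for item in range(len(responses[0])):
    correct = [row[item] for row in responses]
    difficulty = sum(correct) / len(correct)   # share answering correctly
    discrimination = (sum(correct[i] for i in top) - sum(correct[i] for i in bottom)) / k
    # Very low difficulty or non-positive discrimination suggests a wording problem.
    flag = "  <-- review wording" if difficulty < 0.3 or discrimination <= 0 else ""
    print(f"Item {item + 1}: difficulty {difficulty:.2f}, "
          f"discrimination {discrimination:+.2f}{flag}")
```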


After you have determined what is underlying the students’ poor performance, you can make the necessary changes. Be sure to repeat the assessment so that you will know if your changes made a difference.


Reference


Suskie, L. (2012). Summarizing, Understanding and Using Assessment Results.


 
