After developing program learning outcomes and identifying where those outcomes are taught and delivered, the next step is to consider ways to measure them. Assessment plans should include a combination of direct and indirect methods; at least fifty percent of the measures should be direct.
For additional information, Stanford University offers a handy guide.
According to Maki (2004), direct methods prompt students to represent or demonstrate their learning or produce work so that observers can assess how well students’ texts, responses and skills fit program-level expectations.
Direct measures explicitly demonstrate students’ knowledge and/or skill; attainment of the objective is obvious and does not need to be inferred.
- Capstone course/experience – program learning outcomes are integrated into assignments with the intent of directly measuring student performance, knowledge, and skills (see course-embedded grading below). For a wealth of examples, search for "senior capstone" together with your discipline's name.
- Scores and pass rates on standardized tests – entry-to-next level program exams, comprehensive exams, national standardized subject matter exams, certification or licensure exams, professional exams, or locally developed tests
- Clinical, internship or practicum observation – evaluations of specific student knowledge or skills from internship supervisors or faculty overseers based on stated program objectives and structured observation of student performance
- Evaluation of student work, such as writing samples, presentations, performances, projects, research, and portfolios
- Pre- and post-tests/assessments – score gains indicating the “value added” to the students’ learning experiences
- Course-embedded grading – course letter grades (A–F or S/N) alone are insufficient for program assessment. If the criteria are made explicit through a rubric and the feedback loop includes the department, a course-embedded grading process can be used for direct assessment (Walvoord, 2004). While grading typically focuses on the strengths and weaknesses in each individual student's learning, for that student's use, scoring for assessment focuses on patterns of strengths and weaknesses across a group of students, for use by program-level decision makers. When grading is used for assessment, a second step of identifying patterns among students is therefore necessary.
- Embedded questions – questions related to program learning outcomes are included within course exams. For example, all sections of a particular course include an essay question tied to a program learning outcome. The exam is graded as usual; analysis of the identified question for program learning assessment happens separately.
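The shift from individual grading to program-level pattern analysis described above can be sketched in code. The following is a minimal illustration, not part of any UMD process: the rubric criteria names, the 1–4 score scale, and the 3.0 target threshold are all invented for the example.

```python
from statistics import mean

# Hypothetical rubric scores (1-4 scale) for one program learning outcome,
# one dict per student. Criteria names are invented for illustration.
scores = [
    {"thesis": 4, "evidence": 2, "citation": 3},
    {"thesis": 3, "evidence": 2, "citation": 4},
    {"thesis": 4, "evidence": 1, "citation": 3},
    {"thesis": 3, "evidence": 2, "citation": 4},
]

# Grading reads each student's row; assessment aggregates by criterion
# to surface group-level patterns of strength and weakness.
averages = {
    criterion: mean(s[criterion] for s in scores)
    for criterion in scores[0]
}

# Flag criteria where the group average falls below an assumed target of 3.0.
needs_attention = [c for c, avg in averages.items() if avg < 3.0]

print(averages)          # per-criterion group averages
print(needs_attention)   # criteria to bring to program-level decision makers
```

In this sketch, individual students may earn acceptable overall grades while the aggregation still reveals a shared weakness (here, "evidence"), which is exactly the pattern-finding step that grading alone does not provide.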
According to Maki (2004), indirect methods capture students’ perceptions of their learning and the educational environment that supports that learning, such as access to and the quality of services, programs, or educational offerings that support their learning.
- Alumni, employer, or student surveys (see NSSE and SES information below), questionnaires, focus groups, or interviews, including any student self-judgment of learning
- Retention and graduation rates
- Length of time to degree
- Job placement rates of graduates
- Analysis of course grade distributions
- Reputation of graduate or post-graduate programs accepting graduating students
- Quantitative data, such as enrollment numbers
- Honors, awards, scholarships, and other forms of public recognition earned by students and alumni
- Matrices used to summarize the relationship between learning outcomes and courses, course assignments, or course syllabi. For example, mapping program courses to program learning outcomes (Step #2 of UMD’s planning templates)
Programs are encouraged to utilize institutional sources for indirect student learning data, such as the National Survey of Student Engagement (NSSE) and the U of MN Student Experiences/Senior Exit Survey (SES). See NSSE and SES SLO Mapping for details on how these surveys relate to UMD Student Learning Outcomes.
Maki, P. L. (2004). Assessing for Learning: Building a Sustainable Commitment Across the Institution. Sterling, VA: AAHE.
Walvoord, B. E. (2004). Assessment Clear and Simple. San Francisco: Jossey-Bass.