Posted by: Gregory Linton | 02/01/2019

Designing methods to assess core competencies

In previous posts, I have described how to identify core competencies for the undergraduate curriculum, how to define the levels and criteria for each competency, and how to map the core competencies to the curriculum. The next step in the process is to design procedures to assess each core competency. The purpose of these procedures is to determine whether each student has achieved the expected level of each core competency.

Faculty members can investigate two available sources of assessment procedures. First, they can scan the requirements of each course to identify ways in which the core competencies are already assessed. Using course-embedded assessments that already exist is much easier than developing assessments from scratch. Second, they can examine commercially published instruments that assess the core competencies. For example, several commercial instruments focus on assessing critical thinking skills. To determine the usefulness of these instruments, the faculty will have to weigh the expense of each instrument against its congruence with the faculty's aims.

The faculty can also develop assessment procedures from scratch. The criteria for each competency provide a ready-made set of rubrics for evaluating a student's performance. The criteria for the three levels can be placed along a rating scale from 1 to 5, which observers can then use to evaluate student performances. Evaluation by an expert who observes a student performance is widely accepted as a legitimate direct measure of student learning.

Assessment methods are categorized in different ways. Practitioners distinguish between direct and indirect methods. Direct methods require students to demonstrate a competency so that an expert can evaluate their level of achievement. Any student activity related to a core competency that a faculty member can observe and evaluate can become a direct assessment. Such activities include published tests, locally developed tests, papers, projects, homework assignments, in-class presentations, musical performances, competence interviews, internship supervisor evaluations, clinical practicums, and portfolios.

Indirect methods rely on students' own reports and evaluations of their learning. Such methods include commercial surveys such as the National Survey of Student Engagement (NSSE), locally developed surveys of students or alumni, focus groups, reflective essays, journals, course evaluations, and interviews. Indirect methods are considered less reliable because people generally cannot evaluate themselves objectively, and students lack the expert knowledge to determine how well they have achieved a competency. These methods can be helpful in identifying whether students have experienced high-impact learning activities that, according to research, produce better learning outcomes; this is the primary focus of the NSSE, for example. But faculty members should not put too much weight on students' evaluations of how well they have achieved outcomes.

Assessment methods also differ in whether they gather quantitative or qualitative evidence. Quantitative evidence generally consists of numerical scores that indicate how much or how well students have learned. Rating scales and rubrics provide this kind of quantitative data, but it can also be gathered from exams, papers, projects, and other evidence of student learning.

Qualitative evidence consists of verbal descriptions that detail the strengths and weaknesses of a student performance. This kind of evidence can be gathered from various types of student performances or products, such as recitals, oral presentations, papers, interviews, focus groups, reflective essays, and responses to open-ended survey questions. Qualitative evidence often requires content analysis to identify the themes and threads that run through the student performances.

Quantitative evidence is most useful with larger samples, but it must be used with some caution because it may not provide as much actionable detail as qualitative evidence. For example, if the average score on a rating scale is 3.8 out of 5.0, what does that mean? Is that good or bad? What guidance does it provide faculty for improving learning outcomes? Consequently, I prefer gathering qualitative evidence when possible. Even rating scales can include a comment section so that faculty members can record their observations for later analysis.
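The point that an average score conceals as much as it reveals can be shown with a minimal sketch. The score sets below are hypothetical rubric ratings on the 1-to-5 scale described above; they are invented for illustration, not drawn from any actual assessment data.

```python
# Two hypothetical sets of rubric scores (1-5 scale) from five student
# papers each. Both sets produce the same mean, yet they describe very
# different cohorts, so the mean alone gives faculty little guidance.
scores_a = [4, 4, 4, 3, 4]  # consistent, near-target performance
scores_b = [5, 5, 5, 2, 2]  # polarized: some excel, some struggle badly

mean_a = sum(scores_a) / len(scores_a)
mean_b = sum(scores_b) / len(scores_b)

print(mean_a, mean_b)  # both means are 3.8
```

A faculty discussion of the two sets of papers would reach very different conclusions, which is why the comment section on a rating scale matters as much as the number.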

I believe some of the best assessments occur when several faculty members get together in a room and discuss the strengths and weaknesses they observed in a set of student papers or performances. Such a process may seem informal, but it involves direct assessment by experts, and it can yield very helpful strategies for improving learning outcomes. Because of my preference for qualitative assessments, I tend to use the term assessment methods instead of assessment measures. Measures aren't always helpful.

When possible, faculty members should aim for a minimum of three different procedures or measures for each core competency. Suskie (2018) advises: "Using a variety of assessments acknowledges the variety of experiences that students bring, gives all students their best chance to shine, and lets us infer more confidently how well students have achieved key learning goals" (p. 29). Preferably, only one of the assessment methods will be an indirect method; the emphasis should fall on direct methods. This combination of methods will provide adequate data for determining the effectiveness of the curriculum in developing the core competencies. As an example, the faculty at Great Lakes Christian College adopted three summative measures of critical thinking: the California Critical Thinking Skills Test (CCTST), evaluation of research papers in the senior capstone course, and the College BASE Science Test. By triangulating the results of these different methods, the faculty could identify where the students were excelling and where they needed to improve.

I would offer a couple more tips about assessment. Assessment works best when it focuses on student performances or products that occur at the end of a student’s program. The goal of assessment is to see whether students have reached the goals we have set for them; consequently, assessing them when they are sophomores or juniors will not provide that kind of information.

I would also caution against relying on pretest-posttest approaches, such as administering a test to incoming freshmen and then administering the same test to them when they graduate. This approach is logistically demanding because it requires recording each student's score and comparing it to the same student's score four years later. To obtain accurate results, the examiner has to calculate the gain or increase for each individual student (rather than the cohort as a whole) and then average it for the group. Not only are the logistics complicated, but this approach also does not allow any changes or improvements to be made to the assessment in the intervening years.
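The per-student gain calculation described above can be sketched as follows. The student IDs and scores are hypothetical, invented purely to illustrate the bookkeeping; note that only students who took both the pretest and the posttest can contribute a gain score, which is one of the logistical complications mentioned.

```python
# Hypothetical pretest and posttest scores keyed by student ID.
pretest  = {"s1": 60, "s2": 70, "s3": 55}
posttest = {"s1": 75, "s2": 72, "s4": 80}  # s3 left; s4 transferred in

# Per-student gains, restricted to students present at both administrations.
gains = {sid: posttest[sid] - pretest[sid]
         for sid in pretest if sid in posttest}

# Average of the individual gains, not the difference of cohort averages.
average_gain = sum(gains.values()) / len(gains)

print(gains)         # {'s1': 15, 's2': 2}
print(average_gain)  # 8.5
```

Comparing cohort averages instead (mean posttest minus mean pretest) would mix in students who took only one of the two tests, which is exactly the error the per-student approach avoids.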

Recommended Resources

Allen, M. J. (2004). Assessing academic programs in higher education. Bolton, MA: Anker Publishing.

Diamond, R. M. (2008). Designing and assessing courses and curricula (3rd ed.). San Francisco: Jossey-Bass.

Huba, M. E., & Freed, J. E. (2000). Learner-centered assessment on college campuses. Boston: Allyn and Bacon.

Palomba, C. A., & Banta, T. W. (2014). Assessment essentials: Planning, implementing, and improving assessment in higher education (2nd ed.). San Francisco: Jossey-Bass.

Suskie, L. (2018). Assessing student learning: A common sense guide (3rd ed.). San Francisco: Jossey-Bass.

Walvoord, B. E. (2010). Assessment clear and simple: A practical guide for institutions, departments, and general education (2nd ed.). San Francisco: Jossey-Bass.
