Posted by: Gregory Linton | 04/13/2012

How student mobility complicates assessment of student learning

In earlier posts, I have shown how student mobility creates difficulties for curriculum design and reporting of statistics such as graduation rates. Student mobility also complicates efforts to assess student learning. Assessment efforts assume that colleges and universities can provide evidence that their curriculum improved the knowledge and skills of their graduates. But they can demonstrate this convincingly only for those students who enter their institutions as first-time freshmen and never take a course at another institution.

If students take any courses at another institution, new variables enter the data that make it difficult to attribute what students learned to their degree-granting institution. If one also considers the vast numbers of students who take courses as dual-enrolled high school students, earn credit by examination, or participate in study abroad programs, the results of assessment become even more questionable.

These facts are largely ignored by the assessment community. The standard books on assessment, such as those by M. J. Allen (2004), P. L. Maki (2004), C. A. Palomba and T. W. Banta (1999), and L. Suskie (2004), never address how to account for transferred courses when assessing student learning. In fact, the word “transfer” does not appear in the index of any of these books. Suskie (2004) discusses accurate reporting of assessment results, including qualifiers and caveats regarding the conclusions to be drawn, but inexplicably she does not mention the possible contamination of the results by transfer courses.

Assessment experts are familiar with the dictum of A. W. Astin (1991) that assessment must consider not just the outcomes that students display but also where they started and the effect of the environment and their experience on the improvement that they demonstrate. Nevertheless, they fail to discuss student swirl as a dominant aspect of the environment that students experience. B. Pusser and J. K. Turner (2004) rightly observe: “In a world where students may attend several institutions prior to graduation, it is difficult to know how to measure the effectiveness and contribution of the various institutions to student progress and success” (p. 40).

Many assessment procedures are designed in blissful, if not willful, ignorance of student flow. For example, the approach to assessment that involves pre-testing freshmen and post-testing seniors is not applicable to students who transfer into an institution. Another approach is to use capstone assessments that can be compared with national benchmarks or evaluated against standards set by the faculty, but this approach is rendered invalid for students who have taken courses elsewhere.
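
To make the problem concrete, here is a minimal sketch in Python of why a pre-test/post-test gain can be attributed to one institution only when the student has no outside coursework. The record layout and field names are entirely hypothetical, not drawn from any real assessment system:

    # Minimal sketch with hypothetical data: a pre/post gain is attributable
    # to the home institution only for students with no outside coursework.
    records = [
        {"id": 1, "pre": 62, "post": 81, "transfer_credits": 0},
        {"id": 2, "pre": 58, "post": 80, "transfer_credits": 15},  # swirler
        {"id": 3, "pre": 65, "post": 70, "transfer_credits": 0},
    ]

    # Excluding students with transfer credit shrinks the usable sample,
    # while a gain computed over everyone conflates multiple institutions.
    native = [r for r in records if r["transfer_credits"] == 0]
    mean_gain = sum(r["post"] - r["pre"] for r in native) / len(native)
    print(f"Mean gain attributable to this institution: {mean_gain:.1f}")

Either way the institution loses: restrict the sample and the results describe only a fraction of the student body; include everyone and the results describe no single institution.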

To illustrate this point, let’s imagine that we want to evaluate how well our students have achieved the ability to think critically. So we have them take an assessment such as the California Critical Thinking Skills Test to evaluate their level of critical thinking. Perhaps a student scores very high on that test, but that same student took a class at a community college and transferred the credit to his degree-granting institution. How do we know whether he learned critical thinking from that one course at another institution rather than from the courses at his degree-granting institution?

Or let’s say the student receives a low score on that test. Could it be that he failed to learn critical thinking because he took that one course at another institution where critical thinking was not emphasized as much?

Because assessment procedures at most institutions do not take into account what students may have learned from courses at other institutions, they cannot actually prove that the degree-granting institution has accomplished its objectives effectively. They can show what students have learned from the amalgam of courses that they have patched together from different institutions, but they cannot decisively show what students have learned from a particular institution, unless those students have taken classes at only that one institution.

Student mobility is the elephant in the room that assessment theorists pretend is not there. At the very least, institutional researchers could disaggregate their data into categories that relate to students who have taken courses at only one institution, students who transferred into the institution, and students who started and finished at the institution but also took courses elsewhere.
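
As a rough illustration of such disaggregation, here is a sketch in Python; the field names, categories, and records are hypothetical assumptions, not any institution’s actual data:

    from collections import defaultdict

    # Hypothetical student records; all field names are illustrative.
    students = [
        {"id": 1, "score": 88, "transfer_credits": 0,  "started_elsewhere": False},
        {"id": 2, "score": 91, "transfer_credits": 24, "started_elsewhere": True},
        {"id": 3, "score": 76, "transfer_credits": 6,  "started_elsewhere": False},
    ]

    def mobility_category(s):
        # Classify each student by where their credits were earned.
        if s["started_elsewhere"]:
            return "transferred in"
        if s["transfer_credits"] > 0:
            return "started here, took courses elsewhere"
        return "single institution"

    # Report assessment results separately for each mobility category
    # instead of one aggregate figure that conflates them.
    by_category = defaultdict(list)
    for s in students:
        by_category[mobility_category(s)].append(s["score"])

    for category, scores in sorted(by_category.items()):
        print(category, sum(scores) / len(scores))

Even this simple three-way split would let an institution say something defensible about the single-institution group while flagging the other categories as mixed evidence.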

Sources:

Allen, M. J. (2004). Assessing academic programs in higher education. Bolton, MA: Anker.

Astin, A. W. (1991). Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education. New York: American Council on Education/Macmillan.

Maki, P. L. (2004). Assessing for learning: Building a sustainable commitment across the institution. Sterling, VA: Stylus.

Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass.

Pusser, B., & Turner, J. K. (2004, March/April). Student mobility: Changing patterns challenging policymakers. Change, 36(2), 36-43.

Suskie, L. (2004). Assessing student learning: A common sense guide. Bolton, MA: Anker.
