Posted by: Gregory Linton | 03/22/2012

Student mobility renders graduation rates meaningless

Although student flow is a common phenomenon, as I demonstrated in the previous post, it is often ignored by faculty, administrators, and policymakers when they plan and design programs and policies. A prime example is how it affects statistical reporting. The federal government, state governments, accrediting agencies, and other organizations such as the College Board require colleges and universities to report certain statistics that they have judged to indicate effectiveness and quality. Unfortunately, the formulae behind these statistics often yield simplistic data that ignore the reality of student flow. Consequently, they produce meaningless bits of information that reveal very little about the effectiveness and quality of an institution.

Because of the recent report by the National Student Clearinghouse Research Center, the effect of student mobility on graduation rates is starting to receive some attention. In 2006, I wrote a paper on this topic for a class on “Public Policy in Higher Education” at Michigan State University, and I am sharing that material below with some updated references. Because I researched and wrote it in 2006, some of the statistics may be outdated.

A prime example of a meaningless statistic is the “program completion rate” (Selingo, 2012). In 1990, Congress passed legislation requiring colleges to publicize their graduation rates. As a result, some accrediting agencies require their member institutions to publish this statistic in their catalogs. It took the U.S. Department of Education five years to determine how graduation rates would be calculated. The formula reports the percentage of first-time, full-time freshmen who enroll in the fall semester and complete their degree at the same institution within 150 percent of the normal time to degree. The Department of Education decided to exclude transfer students because they would complicate the calculations (Burd, 2004).
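To make the narrowness of this cohort definition concrete, here is a minimal sketch in Python of how such a rate might be computed. The Student fields, the sample data, and the federal_grad_rate function are my own illustrative assumptions, not the Department of Education’s actual methodology; the 150 percent window for a four-year degree works out to six years.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Student:
    first_time: bool                   # first entry into postsecondary education
    full_time: bool                    # full-time load in the first term
    entered_fall: bool                 # began in the fall semester
    years_to_degree: Optional[float]   # years to finish at THIS institution; None if no degree here

def federal_grad_rate(students, normal_time=4.0):
    """Percent of first-time, full-time, fall-entering students who finish
    at the same institution within 150 percent of normal time."""
    cohort = [s for s in students if s.first_time and s.full_time and s.entered_fall]
    if not cohort:
        return 0.0
    window = 1.5 * normal_time  # six years for a four-year degree
    completers = [s for s in cohort
                  if s.years_to_degree is not None and s.years_to_degree <= window]
    return 100.0 * len(completers) / len(cohort)

# Four hypothetical students: only two even enter the cohort.
students = [
    Student(True, True, True, 4.0),    # counted; completes on time
    Student(True, False, True, 4.0),   # part-time first term: excluded entirely
    Student(True, True, False, 4.5),   # spring starter: excluded entirely
    Student(True, True, True, None),   # transfers out and graduates elsewhere: counted as a non-completer
]
print(federal_grad_rate(students))     # 50.0
```

Notice that in this hypothetical cohort the part-time and spring starters never enter the denominator at all, and the student who transfers out and succeeds elsewhere still counts against the institution. Those are exactly the distortions discussed below.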

Every year the Department of Education collects graduation-rate data from every four-year college and university in America through its Graduation Rate Survey. These graduation rates are based on the formula described above. Similarly, retention rates are based on first-time, full-time freshmen who enroll in the fall semester. According to C. Adelman (2006), these formulae exclude half of traditional-age students from the calculation.

The narrow focus of this formula ignores a number of realities. First, it does not include part-time students, who make up 45 percent of the undergraduate population (Lipka, 2012). Consequently, a student could take a part-time load in the first semester, carry a full-time load in every semester after that, and graduate within four years, yet never be counted in the program completion rate.

Second, it does not include students who begin their college careers in the spring semester. Adelman (2006) has shown that only 82.1 percent of 1992 12th-graders entered postsecondary education in the fall semester, whereas 5.8 percent began in the summer and 12.1 percent began in the winter or spring. Adelman (2006) concludes that “any measure of retention or completion that confines its universe to students who began their postsecondary careers in the fall term is, to put it gently, grossly incomplete” (p. 46).

Third, graduation rates do not include students who transfer into an institution and graduate, or students who transfer out of one institution and graduate from another. This is especially unfair to community colleges, as J. Sbrega (2012) has argued. According to K. Carey (2005), including the latter group of students would add an average of eight percentage points to an institution’s graduation rate. J. Marcus (2012) reports that including students who transfer to a four-year institution before completing an associate degree would more than double the completion rate for community colleges.

I am not aware of any agency that requires a college or university to report on the retention and graduation of transfer students. Apparently, those who determined what data are essential to report never considered the reality of transfer students. When one considers these realities, one is inclined to agree with Adelman that graduation rates are “anachronistic formulas that do not track students through increasingly complex paths to degrees” (2006, p. xvi).

Fourth, program completion rates do not account for students who attend an institution with the full intention of transferring after a year or two. This has long been the case at the institution where I teach. Some students intend to come here for a year or two to be grounded in their faith by studying the Bible and theology and then transfer to another institution because we do not offer a program in their chosen field of work (business, for example). These intentional decisions by students lower our graduation rate even though the institution has done nothing wrong, which shows that the graduation rate is not a clear indicator of quality. Most or all of those students could go on to earn a degree elsewhere in four or five years, but that would make no difference in our graduation rate.

To keep this post from becoming too long, I will discuss additional ways that student mobility complicates statistical reporting in the next post.

Sources:

Adelman, C. (2006, February). The toolbox revisited: Paths to degree completion from high school through college. Washington, DC: U.S. Department of Education. Retrieved May 29, 2006, from http://www.ed.gov/rschstat/research/pubs/toolboxrevisit/index.html.

Burd, S. (2004, April 2). Graduation rates called a poor measure of colleges. The Chronicle of Higher Education, 50(30), A1.

Carey, K. (2005, January). One step from the finish line: Higher college graduation rates are within our reach. Washington, DC: The Education Trust.

Lipka, S. (2012, March 2). Students who don’t count. The Chronicle of Higher Education. Retrieved March 22, 2012, from http://chronicle.com/article/The-Students-Who-Dont-Count/131050/

Marcus, J. (2012, March 8). Community colleges want to boost grad rates–by changing the formula. The Hechinger Report. Retrieved March 19, 2012, from http://hechingerreport.org/content/community-colleges-want-to-boost-grad-rate-by-changing-the-formula_8076/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+HechingerReport+%28Hechinger+Report%29

Sbrega, J. (2012, March 13). Let’s change how we measure success–now. Community College Times. Retrieved March 19, 2012, from http://www.communitycollegetimes.com/Pages/Campus-Issues/Lets-change-how-we-measure-success-now.aspx

Selingo, J. (2012, March 2). The rise and fall of the graduation rate. The Chronicle of Higher Education. Retrieved March 22, 2012, from http://chronicle.com/article/The-RiseFall-of-the/131036/
