Posted by: Gregory Linton | 03/23/2012

Solutions to complications for graduation rates caused by student mobility

In the last post, I described how transfer students are ignored in the calculations of program completion rates. Even experts in statistical analysis of education data fail to take student swirl into consideration. An oft-cited study by P. T. Ewell, D. P. Jones, and P. J. Kelly (2003) found that, of every 100 students who entered the ninth grade, only 18 would complete an associate’s degree or a bachelor’s degree within 10 years of their first year in high school. V. M. H. Borden (2004), for example, cites this study but represents it as referring only to bachelor’s degrees. The statistic is often used to lament the “leaky pipeline” of education in America, as was the case in a “policy alert” published by the National Center for Public Policy and Higher Education in April 2004.

This study, however, excludes transfer students, which makes the statistics look more dismal than the reality. Adelman, who was a senior research analyst at the U.S. Department of Education, publicly called attention to the fact that analysts who use these data overlook that they include only students who graduate from the institution where they first enrolled (Glenn, 2006). He argued that the National Education Longitudinal Study shows the figure is actually 35 percent, not 18 percent. Also, the Census Bureau’s Current Population Survey of 2003 found that 28.4 percent of Americans between the ages of 25 and 29 had earned at least a bachelor’s degree, which would be impossible if only 18 percent earned an associate’s or bachelor’s degree within 10 years of entering the ninth grade.

A proposed solution to this complication is the development of a national tracking system (Ewell, Schild, & Paulson, 2003). Such a system would keep track of where students attend college and when they complete their program. This system may prove useful for nationwide analysis of higher education trends, but it would assist little in determining the effectiveness and quality of any individual institution, as retention rates and graduation rates are assumed to do. At this point, however, “the technology for a full census of completion does not yet exist” (Adelman, 2006, p. 85). Also, efforts to implement a national tracking system were thwarted because of privacy concerns (Selingo, 2012).

Another solution is to cease trying to calculate institutional graduation rates and instead track student graduation rates. As Adelman (2006) observes, “it is the student’s success that matters to families—and to the nation,” not the institution’s (p. xvi). Up to now, statistical reporting has been institution-centered rather than student-centered. In contrast to the low graduation rates reported elsewhere, Adelman (2006) has shown that when the graduation rate includes students who earn a degree from a four-year college other than the one in which they originally enrolled, the six-year completion rates are in the 62-67 percent range. This statistic is confirmed by The Condition of Education 2003, which reported that, of students who intended to earn a bachelor’s degree and began their postsecondary education at a 4-year institution in 1995-96, 63 percent had obtained a bachelor’s degree within six years.
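The difference between the two ways of counting can be made concrete with a small sketch. The data and function names below are invented for illustration; they are not drawn from Adelman's study:

```python
# Illustrative contrast between an institution-centered and a
# student-centered six-year completion rate. The records are hypothetical.

def institution_rate(students, school):
    """Share of a school's entering cohort who graduated from that same school."""
    cohort = [s for s in students if s["entered"] == school]
    grads = [s for s in cohort if s["degree_from"] == school]
    return len(grads) / len(cohort)

def student_rate(students, school):
    """Share of the same cohort who earned a degree anywhere."""
    cohort = [s for s in students if s["entered"] == school]
    grads = [s for s in cohort if s["degree_from"] is not None]
    return len(grads) / len(cohort)

# Four entrants at College A; one transfers and finishes at College B.
cohort = [
    {"entered": "A", "degree_from": "A"},
    {"entered": "A", "degree_from": "A"},
    {"entered": "A", "degree_from": "B"},   # transfer: invisible to A's rate
    {"entered": "A", "degree_from": None},  # non-completer
]
print(institution_rate(cohort, "A"))  # 0.5
print(student_rate(cohort, "A"))      # 0.75
```

The transfer student drags down College A's institutional rate even though, from the student's (and the nation's) point of view, the outcome was a success.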

Adelman (2006) also modifies the traditional retention rate, or “persistence rate” as he prefers to call it, by including any student who earns any credit at a postsecondary institution in a calendar year (July 1-June 30) and earns credits at any time and at any institution during the next calendar year. In contrast to claims that a fourth of four-year college entrants do not return for their second year, Adelman’s persistence rate shows that 96 percent actually take credits in the second year.
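Adelman's persistence rule amounts to a simple check over a student's credit history: map each credit to its July 1-June 30 year, then ask whether credits appear in both the entry year and the following year, at any institution. The record format and function names below are a sketch of that logic, not code from the study:

```python
from datetime import date

def academic_year(d: date) -> int:
    """Map a date to its July 1-June 30 year, labeled by the starting year."""
    return d.year if d.month >= 7 else d.year - 1

def persisted(credit_dates, start_year: int) -> bool:
    """True if the student earned credit in start_year and again in the next year,
    regardless of which institution awarded the credits."""
    years = {academic_year(d) for d in credit_dates}
    return start_year in years and (start_year + 1) in years

# A student who earned credit in Fall 2004 at one college and in
# Spring 2006 at another still counts as persisting for 2004-05:
records = [date(2004, 12, 15), date(2006, 5, 10)]
print(persisted(records, 2004))  # True
```

Because the rule ignores where the second-year credits were earned, a transfer student counts as persisting, which is exactly what drives the gap between the 96 percent figure and the "one in four don't return" claim.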

More recently, a national commission under U.S. Education Secretary Arne Duncan has proposed modifications to the criteria for measuring community college performance that take student mobility into consideration. Transfer students will now be included in the criteria for student success and persistence.

Another step in the right direction has been taken by more than 300 public four-year colleges that have joined together to form the Voluntary System of Accountability Program. The completion metric used by that program includes transfer students. We can only hope that, in the future, higher education policy wonks will recognize the reality of transfer students and build that reality into their reporting and planning requirements.

In future posts, I will discuss how student mobility also complicates curriculum design and assessment of student learning.

Sources:

Adelman, C. (2006, February). The toolbox revisited: Paths to degree completion from high school through college. Washington, DC: U.S. Department of Education. Retrieved March 23, 2012, from http://www2.ed.gov/rschstat/research/pubs/toolboxrevisit/toolbox.pdf

Borden, V. M. H. (2004, March/April). Accommodating student swirl: When traditional students are no longer the tradition. Change, 36(2), 10-17.

Ewell, P. T., Jones, D. P., & Kelly, P. J. (2003). Conceptualizing and researching the college student pipeline. Boulder, CO: National Center for Higher Education Management Systems.

Ewell, P. T., Schild, P. R., & Paulson, K. (2003, April). Following the mobile student: Can we develop the capacity for a comprehensive database to assess student progression? Lumina Foundation for Education Research Report.

Glenn, D. (2006, April 21). Government analyst says shoddy statistics tell a false tale about higher education. The Chronicle of Higher Education, 52(33).

Selingo, J. (2012, March 2). The rise and fall of the graduation rate. The Chronicle of Higher Education. Retrieved March 22, 2012, from http://chronicle.com/article/The-RiseFall-of-the/131036/
