Achieve recently released a survey of parents of recent high school graduates that was conducted in August 2015. Of the 917 parents who participated in the survey, 568 were parents of children enrolled in a two- or four-year college, and 349 were parents of children not currently enrolled.

One of the key findings was that, compared to employers and educators, parents feel that their high school did a better job of preparing their child for college or the workplace. The survey found that 84% of parents are at least somewhat satisfied with the job their child’s high school did preparing them for success after high school, but only 35% of college instructors are satisfied with the job U.S. high schools are doing to prepare recent graduates for work or college.

This finding is similar to the ACT National Curriculum Survey 2012, which found that 89 percent of high school teachers reported that their students were “well” or “very well” prepared for college work in their content area. In contrast, only 26 percent of college instructors viewed their incoming students as well or very well prepared for first-year credit-bearing courses in their content area. These differences suggest that parents and high school educators are not totally aware of the higher expectations of college instructors.

Another interesting finding that seems to contradict the earlier one is that only 1 in 3 parents agreed that their child’s high school set high expectations so that their child was academically challenged. If only 33 percent think that their child had a rigorous education, one wonders why 84 percent feel that their high school has prepared their child for success. Perhaps this is similar to the phenomenon where voters rate their own congressman highly but rate congressmen in general very low. In the same way, parents may have a low opinion of American high schools in general, but they feel that their own high school is an exception.

Source: Achieve, “Rising to the Challenge: Are Recent High School Graduates Prepared for College and Work?”

Posted by: Gregory Linton | 10/21/2015

NCES releases data on nontraditional students

In September, the National Center for Education Statistics released demographic information on nontraditional undergraduates as part of its Web Tables series. The data were culled from students who applied for financial aid in 2011-12, so we should recognize that they do not reflect the college student population as a whole.

As the report notes, nontraditional students are generally thought to have the following characteristics: being independent for financial aid purposes, having one or more dependents, being a single caregiver, not having a traditional high school diploma, delaying postsecondary enrollment, attending school part-time, and being employed full-time. However, the term “nontraditional” is a misnomer because 74 percent of 2011-12 undergraduates possessed at least one of these characteristics. Actually, the nontraditional student now is the recent high school graduate who enters college full-time in the fall semester after graduation, is dependent on his or her parents, has no dependents of his or her own, and works part-time or less.

Meris Stansbury of eCampus News has provided a helpful summary of key findings in this report. These findings show that nontraditional students differ from traditional students in significant ways. Academic leaders seeking to expand their offerings to this growing segment of the population should consider these differences. For example, online courses are more appealing to nontraditional students than to traditional students.

Source: NCES, “Demographic and Enrollment Characteristics of Nontraditional Undergraduates: 2011–12”

What key experiences in college prepare graduates for successful engagement in the workplace? The Gallup-Purdue Index, a study of 30,000 college graduates, has identified six key experiences (“the Big Six”) that contribute to success in the workplace. Four of the six highlight the key role that professors play in the development and future success of students. Perhaps this evidence supports the higher quality of education at smaller colleges, where that kind of faculty-student interaction is more likely to occur. Unfortunately, the study found that 25% of all college graduates did not experience any of the Big Six while they were in college. It would be interesting to see how many of them attended a large university versus a small school.

Source: Many College Graduates Not Equipped for Workplace Success

Posted by: Gregory Linton | 09/15/2015

Gender Gap

This story from Community College Week provides more distressing data about the widening gap between men and women in higher education. Their analysis shows that, in 2013-2014, 61% of associate degrees were earned by women.

Source: Gender Gap

Posted by: Gregory Linton | 08/13/2015

Students learn from failure

This Mindshift article by Holly Korbey offers interesting insights into the role that failure plays in the development of students.

Source: MindShift, “What do Students Lose by Being Perfect? Valuable Failure”

Gallup’s survey of 140,000 adults found that college graduates do not answer “yes” to the question “Do you learn something new or interesting every day?” any more frequently than non-graduates. Hmm, then all that time on Facebook and Instagram must be a total waste!

Source: Busteed, “No Evidence That Bachelor’s Degrees Lead to Lifelong Learning”

Posted by: Gregory Linton | 08/13/2015

Lukianoff & Haidt, “The Coddling of the American Mind”

In a lengthy and insightful article in The Atlantic, Greg Lukianoff and Jonathan Haidt analyze the increasing sensitivity of college students to words and ideas they don’t like. This trend is captured by the rise in popularity of the terms “microaggression” and “trigger warning.” They show why these trends are harmful to the intellectual and personal development of students.

Mindshift reports on a study by Harvard researchers in Oakland, California, to test the theory that kids learn best when they’re actively engaged in designing and creating projects to explore concepts. The article does not mention whether the study will investigate differences between boys and girls, but I hope the researchers include that in their study. This could confirm the view of those who propose that boys would be more engaged in learning if they were given hands-on projects to complete.

Wan Hulaimi, “The impact of violent video games on children”

This article in the New Straits Times summarizes research on the negative effects of video games on cognition and brain development.

Recently, I have seen several articles about the connection between video game usage and aggression. I have provided the links below with brief descriptions of each article.

Mark Ellis, “Video nastiness: Kids as young as four act out violence they see in computer games, teachers reveal”: This article in The Mirror in the UK is based on anecdotal evidence rather than research. It describes violent acts committed by children who act out the violent video games they play.

Ulrika Bennerstedt, Jonas Ivarsson, & Jonas Linderoth, “How gamers manage aggression: Situating skills in collaborative computer games”: This study from researchers at the University of Gothenburg was published in the International Journal of Computer-Supported Collaborative Learning. It has received some attention in the media because the authors argue that portrayals of violence and aggressive action in video games force gamers to develop skills in collaboration.

Mike Tuttle, “Do violent video games really cause aggression? Or do they foster cooperation?”: This WebProNews post summarizes the results of the study mentioned above.

Sofi Papamarko, “Video games foster cooperation, new study says”: This is another report on the study mentioned above.

ScienceDaily, “Link between violent computer games and aggressiveness questioned”: A ScienceDaily report on the same study.

Andrew Keen, “Does the internet breed killers?”: In this editorial, Keen reflects on the role that social media and violent video games such as World of Warcraft may have played in the actions of Anders Breivik, who committed the massacre in Norway.

Paul Tassi, “The idiocy of blaming video games for the Norway massacre”: In this Forbes blog post, Tassi reacts to Keen’s (and others’) attempt to blame the Norway massacre on video game usage by the killer.

Erik Kain, “As video game sales climb year over year, violent crime continues to fall”: This Forbes editorial follows up on Tassi’s post by defending video games against accusations that they cause violence. Interestingly, Kain notes that correlation is not the same as causation, but then he shows that as video game sales have increased, violent crimes have gone down in frequency. One might wonder if violent crimes might have decreased even more if video games were not so popular.

In earlier posts, I have shown how student mobility creates difficulties for curriculum design and reporting of statistics such as graduation rates. Student mobility also complicates efforts to assess student learning. Assessment efforts assume that colleges and universities can provide evidence that their curriculum improved the knowledge and skills of their graduates. But they can demonstrate this convincingly only for those students who enter their institutions as first-time freshmen and never take a course at another institution.

If students take any courses at another institution, then new variables are added to the data that make it difficult to attribute what students learned to their degree-granting institution. If one also considers the vast numbers of students who take courses as dually enrolled high school students, earn credit by examination, or participate in study abroad programs, the results of assessment are even more questionable.

These facts are ignored by the assessment community. The standard books on assessment, such as M. J. Allen (2004), P. L. Maki (2004), C. A. Palomba and T. W. Banta (1999), and L. Suskie (2004), never address the issue of how to account for transferred courses when assessing student learning. In fact, the word “transfer” does not appear in the index of any of these books. Suskie (2004) discusses accurate reporting of assessment results, including qualifiers and caveats regarding the conclusions to be drawn, but inexplicably she does not mention the possible contamination of the results by transfer courses.

Assessment experts are familiar with the dictum of A. W. Astin (1991) that assessment must consider not just the outcomes that students display but also where they started and the effect of the environment and their experience on the improvement that they demonstrate. Nevertheless, they fail to discuss student swirl as a dominant aspect of the environment that students experience. B. Pusser and J. K. Turner (2004) rightly observe: “In a world where students may attend several institutions prior to graduation, it is difficult to know how to measure the effectiveness and contribution of the various institutions to student progress and success” (p. 40).

Many assessment procedures are designed in blissful, if not willful, ignorance of student flow. For example, the approach to assessment that involves pre-testing freshmen and post-testing seniors is not applicable to students who transfer into an institution. Another approach is to use capstone assessments that can be compared with national benchmarks or evaluated against standards set by the faculty, but this approach is rendered invalid for students who have taken courses elsewhere.

To illustrate this point, let’s imagine that we want to evaluate how well our students have achieved the ability to think critically. So we have them take an assessment such as the California Critical Thinking Skills Test to evaluate their level of critical thinking. Perhaps a student scores very high on that test, but that same student took a class at a community college and transferred it to his degree-granting institution. How do we know whether he learned critical thinking from that one course taken at another institution rather than from the courses at his degree-granting institution?

Or let’s say that student receives a low score on that test. Could it be that he failed to learn critical thinking because he took that one course at another institution where critical thinking was not emphasized as much?

Because assessment procedures at most institutions do not take into account what students may have learned from courses at other institutions, they cannot actually prove that the degree-granting institution has accomplished its objectives effectively. They can show what students have learned from the amalgam of courses that they have patched together from different institutions, but they cannot decisively show what students have learned from a particular institution, unless those students have taken classes at only that one institution.

Student mobility is the elephant in the room that assessment theorists pretend is not there. At the very least, institutional researchers could disaggregate their data into categories that relate to students who have taken courses at only one institution, students who transferred into the institution, and students who started and finished at the institution but also took courses elsewhere.
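As a minimal sketch of what such disaggregation might look like in practice (the record fields, category labels, and sample scores below are purely hypothetical, not drawn from any real institutional data system), one could classify each student by mobility pattern before averaging assessment results:

```python
from collections import defaultdict

def mobility_category(record):
    """Classify a student record into one of three mobility groups.

    `record` is a dict with hypothetical keys:
      'transfer_credits' - credits transferred in at admission
      'external_credits' - credits earned elsewhere after enrolling
    """
    if record["transfer_credits"] > 0:
        return "transferred in"
    if record["external_credits"] > 0:
        return "native with outside coursework"
    return "single-institution"

def mean_scores_by_category(records):
    """Average an assessment score within each mobility category."""
    totals = defaultdict(lambda: [0.0, 0])
    for r in records:
        cat = mobility_category(r)
        totals[cat][0] += r["score"]
        totals[cat][1] += 1
    return {cat: total / count for cat, (total, count) in totals.items()}

# Illustrative records, not real assessment data
students = [
    {"transfer_credits": 0, "external_credits": 0, "score": 78},
    {"transfer_credits": 30, "external_credits": 0, "score": 82},
    {"transfer_credits": 0, "external_credits": 6, "score": 75},
    {"transfer_credits": 0, "external_credits": 0, "score": 80},
]

print(mean_scores_by_category(students))
```

Only the “single-institution” average can plausibly be attributed to the degree-granting institution alone; reporting the three groups separately at least makes the contamination visible.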


Allen, M. J. (2004). Assessing academic programs in higher education. Bolton, MA: Anker.

Astin, A. W. (1991). Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education. New York: American Council on Education/Macmillan.

Maki, P. L. (2004). Assessing for learning: Building a sustainable commitment across the institution. Sterling, VA: Stylus.

Palomba, C. A., &amp; Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass.

Pusser, B., & Turner, J. K. (2004, March/April). Student mobility: Changing patterns challenging policymakers. Change, 36(2), 36-43.

Suskie, L. (2004). Assessing student learning: A common sense guide. Bolton, MA: Anker.
