Wan Hulaimi, “The impact of violent video games on children”

This article in the New Straits Times summarizes research on the negative effects of video games on cognition and brain development.

Recently, I have seen several articles about the connection between video game usage and aggression. I have provided the links below with brief descriptions of each article.

Mark Ellis, “Video nastiness: Kids as young as four act out violence they see in computer games, teachers reveal”: This article in The Mirror in the UK is based on anecdotal evidence rather than research. It describes violent acts committed by teens who were addicted to violent video games.

Ulrika Bennerstedt, Jonas Ivarsson, & Jonas Linderoth, “How gamers manage aggression: Situating skills in collaborative computer games”: This study from researchers at the University of Gothenburg was published in the International Journal of Computer-Supported Collaborative Learning. It has received some attention in the media because the authors argue that portrayals of violence and aggressive action in video games force gamers to develop skills in collaboration.

Mike Tuttle, “Do violent video games really cause aggression? Or do they foster cooperation?”: This WebProNews post summarizes the results of the study mentioned above.

Sofi Papamarko, “Video games foster cooperation, new study says”: This is another report on the study mentioned above.

ScienceDaily, “Link between violent computer games and aggressiveness questioned”: Here is another report on this study by ScienceDaily.

Andrew Keen, “Does the internet breed killers?”: In this editorial on CNN.com, Keen reflects on the role that social media and violent video games such as World of Warcraft may have played in the actions of Anders Breivik, who committed the massacre in Norway.

Paul Tassi, “The idiocy of blaming video games for the Norway massacre”: In this Forbes blog post, Tassi reacts to Keen’s (and others’) attempt to blame the Norway massacre on video game usage by the killer.

Erik Kain, “As video game sales climb year over year, violent crime continues to fall”: This Forbes editorial follows up on Tassi’s post by defending video games against accusations that they cause violence. Interestingly, Kain notes that correlation is not the same as causation, but then he shows that as video game sales have increased, violent crimes have gone down in frequency. One might wonder if violent crimes might have decreased even more if video games were not so popular.

In earlier posts, I have shown how student mobility creates difficulties for curriculum design and reporting of statistics such as graduation rates. Student mobility also complicates efforts to assess student learning. Assessment efforts assume that colleges and universities can provide evidence that their curriculum improved the knowledge and skills of their graduates. But they can demonstrate this convincingly only for those students who enter their institutions as first-time freshmen and never take a course at another institution.

If students take any courses at another institution, then new variables enter the data, making it difficult to attribute what students learned to their degree-granting institution. If one also considers the vast numbers of students who take courses as dually enrolled high school students, earn credit by examination, or participate in study abroad programs, the results of assessment become even more questionable.

These facts are ignored by the assessment community. The standard books on assessment such as M. J. Allen (2004), P. L. Maki (2004), C. A. Palomba and T. W. Banta (1999), and L. Suskie (2004) never address the issue of how to consider transferred courses when assessing student learning. In fact, the word “transfer” does not appear in the index of any of these books. Suskie (2004) discusses accurate reporting of assessment results, including qualifiers and caveats regarding the conclusions to be drawn, but inexplicably she does not mention the possible contamination of the results by transfer courses.

Assessment experts are familiar with the dictum of A. W. Astin (1991) that assessment must consider not just the outcomes that students display but also where they started and the effect of the environment and their experience on the improvement that they demonstrate. Nevertheless, they fail to discuss student swirl as a dominant aspect of the environment that students experience. B. Pusser and J. K. Turner (2004) rightly observe: “In a world where students may attend several institutions prior to graduation, it is difficult to know how to measure the effectiveness and contribution of the various institutions to student progress and success” (p. 40).

Many assessment procedures are designed in blissful, if not willful, ignorance of student flow. For example, the approach to assessment that involves pre-tests of freshmen and post-tests of seniors is not applicable to students who transfer into an institution. Another approach is to use capstone assessments that can be compared with national benchmarks or evaluated against standards set by the faculty, but this approach is rendered invalid for students who have taken courses elsewhere.

To illustrate this point, let’s imagine that we want to evaluate our students to see how well they have achieved the ability to think critically. So we have them take an assessment such as the California Critical Thinking Skills Test to evaluate their level of critical thinking. Perhaps a student scores very high on that test, but that same student took a class at a community college and transferred the credit to his degree-granting institution. How do we know whether he learned critical thinking from that one course he took at another institution rather than from any of the courses at his degree-granting institution?

Or let’s say that student receives a low score on that test. Could it be that he failed to learn critical thinking because he took that one course at another institution where critical thinking was not emphasized as much?

Because assessment procedures at most institutions do not take into account what students may have learned from courses at other institutions, they cannot actually prove that the degree-granting institution has accomplished its objectives effectively. They can show what students have learned from the amalgam of courses that they have patched together from different institutions, but they cannot decisively show what students have learned from a particular institution, unless those students have taken classes at only that one institution.

Student mobility is the elephant in the room that assessment theorists pretend is not there. At the very least, institutional researchers could disaggregate their data into categories that relate to students who have taken courses at only one institution, students who transferred into the institution, and students who started and finished at the institution but also took courses elsewhere.
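
To make that suggestion concrete, here is a minimal sketch of such disaggregation, assuming a hypothetical table of assessment records; the column names (transferred_in, took_courses_elsewhere) and scores are invented for illustration and do not come from any particular institution’s data system.

```python
import pandas as pd

# Hypothetical assessment records; the column names and scores are invented
# purely to illustrate the disaggregation, not drawn from any real data set.
records = pd.DataFrame({
    "student_id":              [1, 2, 3, 4, 5, 6],
    "critical_thinking_score": [82, 74, 91, 68, 88, 79],
    "transferred_in":          [False, True, False, False, True, False],
    "took_courses_elsewhere":  [False, False, True, False, False, True],
})

def mobility_category(row):
    """Assign each student to one of the three groups suggested above."""
    if row["transferred_in"]:
        return "transferred in"
    if row["took_courses_elsewhere"]:
        return "native, but took courses elsewhere"
    return "single-institution"

records["mobility"] = records.apply(mobility_category, axis=1)

# Report counts and mean scores separately for each mobility group
# instead of a single undifferentiated institutional average.
print(records.groupby("mobility")["critical_thinking_score"].agg(["count", "mean"]))
```

Even a simple breakdown like this would let an institution say something defensible about the students it actually taught from start to finish.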

Sources:

Allen, M. J. (2004). Assessing academic programs in higher education. Bolton, MA: Anker.

Astin, A. W. (1991). Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education. New York: American Council on Education/Macmillan.

Maki, P. L. (2004). Assessing for learning: Building a sustainable commitment across the institution. Sterling, VA: Stylus.

Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass.

Pusser, B., & Turner, J. K. (2004, March/April). Student mobility: Changing patterns challenging policymakers. Change, 36(2), 36-43.

Suskie, L. (2004). Assessing student learning: A common sense guide. Bolton, MA: Anker.

Posted by: Gregory Linton | 04/04/2012

Student Mobility Complicates Curriculum Design

In an earlier post, I showed that, even though student flow is such a common phenomenon, it is ignored when program completion rates are calculated. In this post, I want to show how student mobility complicates curriculum design in colleges and universities.

Let’s focus first on the design of the general education curriculum. A major concern of those who design general education programs is coherence. Professors design an intentional sequence of required courses that build on one another in order to develop in students the knowledge and skills they need to succeed.

To achieve coherence, many institutions have initiated interdisciplinary courses, clustered courses, and themes that tie together sequences of courses. As a result, student choice and distribution requirements are becoming less common in higher education. A survey of chief academic officers in 2000 found that 54 percent revised their general education programs in order to achieve greater coherence. In spite of these efforts, only 38 percent thought that the changes resulted in increased coherence (Johnson, Ratcliff, & Gaff, 2004).

D. K. Johnson and J. L. Ratcliff (2004) argued that academicians have misplaced the focus on coherence. Coherence has been viewed as a structural issue, involving the design of courses and requirements. In contrast, they recommend viewing it as something that takes place in the minds of the students. Faculty should avoid designing programs that appear irrelevant to the students, offer too much or too little information, appear obscure or indirect, or appear inaccurate or incorrect.

Nowhere in their discussion, however, do Johnson and Ratcliff acknowledge that even the most coherently designed curriculum is actually experienced by less than half the students at most institutions. Many students transfer in from other institutions, or they take courses somewhere else and transfer them to their home institution.

Of course, coherence in general education could be ensured if all institutions of higher learning agreed on the required courses for general education, but such agreement would deny the distinctive priorities and emphases of various institutions. In order to circumvent the complication of student mobility, some state systems, such as Missouri, have tried to make general education requirements consistent across all their campuses. Such statewide approaches are promoted in the AAC&U report entitled General Education in an Age of Student Mobility, but as of the publication date, only twenty-two states had implemented statewide core curricula (Association of American Colleges and Universities, 2001).

As long as a student takes courses only within one state system, achieving the intended outcomes of the general education program is a reasonable possibility. Unfortunately, Adelman found that 20 percent of all undergraduates who earned more than 10 credits attended institutions in more than one state as undergraduates, and 24 percent of bachelor’s degree recipients attended institutions in more than one state as undergraduates (Adelman, 2003).

The recent report by the National Student Clearinghouse found similar statistics: “Within the public sector, more than one fifth (21.6 percent) of students who transferred from two-year institutions and more than one quarter (25.5 percent) of those from four-year institutions moved to an institution in a different state” (National Student Clearinghouse Research Center, 2012, p. 29). These statistics show that attempts by states to standardize their general education curriculum are frustrated by the mobility of students who take courses in different states.

Many students also earn general education credits through Advanced Placement courses in high school or CLEP exams, or by taking courses at a community college and transferring in the credits. It would be very interesting to know the percentage of students who actually take all of their general education credits at one institution, thereby fully experiencing the designed purpose of the curriculum.

I see this complication in my own program. The Bible major at my institution consists of twelve courses that are intentionally designed to build on each other in a logical progression so that students will gain the desired knowledge and skills by the time they graduate. But how many actually take all twelve courses within our program? Some students transfer courses in from elsewhere, and some take distance learning courses or summer courses at other institutions. I don’t know how many actually experience our curriculum the way that we have designed it.

Another complication follows from this. If students are not experiencing our wisely designed curriculum as we intend, then what are we actually assessing at the end of their program? In the next post, I will discuss how student mobility complicates assessment of student learning.

Sources:

Adelman, C. (2003, September). Postsecondary attainment, attendance, curriculum, and performance: Selected results from the NELS:88/2000 postsecondary education transcript study (PETS), 2000. Washington, DC: U.S. Department of Education, Institute of Education Sciences.

Association of American Colleges and Universities. (2001). General education in an age of student mobility: An invitation to discuss systemic curricular planning. Washington, DC: Association of American Colleges and Universities.

Johnson, D. K., & Ratcliff, J. L. (2004, Spring). Creating coherence: The unfinished agenda. New Directions for Higher Education, 125, 85-95.

Johnson, D. K., Ratcliff, J. L., & Gaff, J. G. (2004, Spring). A decade of change in general education. New Directions for Higher Education, 125, 9-28.

National Student Clearinghouse Research Center. (2012). Transfer & mobility: A national view of pre-degree student movement in postsecondary institutions.

The Neurology of Gaming | Online Universities.

This infographic by Onlineuniversities.com reveals the effects of playing video games on the brain.

In the last post, I described how transfer students are ignored in the calculations of program completion rates. Even experts in statistical analysis of education data fail to take student swirl into consideration. An oft-cited study by P. T. Ewell, D. P. Jones, and P. J. Kelly (2003) found that, of every 100 students who entered the ninth grade, only 18 would complete an associate’s degree or a bachelor’s degree within 10 years of their first year in high school. V. M. H. Borden (2004), for example, cites this study but represents it as referring only to bachelor’s degrees. The statistic is often used to lament the “leaky pipeline” of education in America, as was the case in a “policy alert” published by the National Center for Public Policy and Higher Education in April 2004.

This study, however, excludes transfer students, which makes the statistics look more dismal than the reality. Adelman, who was a senior research analyst at the U.S. Department of Education, publicly called attention to the fact that analysts who use this data overlook that it includes only students who graduate from the institution where they first enrolled (Glenn, 2006). He argued that the National Education Longitudinal Study has shown that the statistic is actually 35 percent, not 18 percent. Also, the Census Bureau’s Current Population Survey of 2003 found that 28.4 percent of Americans between the ages of 25 and 29 had earned at least a bachelor’s degree, which would be impossible if it were true that only 18 percent earned an associate’s or bachelor’s degree within 10 years of the ninth grade.

A proposed solution to this complication is the development of a national tracking system (Ewell, Schild, & Paulson, 2003). Such a system would keep track of where students attend college and when they complete their program. This system may prove useful for nationwide analysis of higher education trends, but it would assist little in determining the effectiveness and quality of any individual institution, as retention rates and graduation rates are assumed to do. At this point, however, “the technology for a full census of completion does not yet exist” (Adelman, 2006, p. 85). Also, efforts to implement a national tracking system were thwarted because of privacy concerns (Selingo, 2012).

Another solution is to cease trying to calculate institutional graduation rates and instead track student graduation rates. As Adelman (2006) observes, “it is the student’s success that matters to families—and to the nation,” not the institution’s (p. xvi). Up to now, statistical reporting has been institution-centered rather than student-centered. In contrast to the low graduation rates reported elsewhere, Adelman (2006) has shown that when the graduation rate includes students who earn a degree from a different four-year college than the one in which they originally enrolled, the six-year completion rates are in the 62-67 percent range. This statistic is confirmed by The Condition of Education 2003, which reported that, of students who intended to earn a bachelor’s degree and began their postsecondary education at a 4-year institution in 1995-96, 63 percent had obtained a bachelor’s degree within six years.

Adelman (2006) also modifies the traditional retention rate, or “persistence rate” as he prefers to call it, by including any student who earns any credit at a postsecondary institution in a calendar year (July 1-June 30) and earns credits at any time and at any institution during the next calendar year. In contrast to claims that a fourth of four-year college entrants do not return for their second year, Adelman’s persistence rate shows that 96 percent actually take credits in the second year.
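
As a rough illustration of Adelman’s definition (not his actual method or data), the sketch below treats each July 1–June 30 span as one credit year and counts a student as persisting if credit was earned, at any institution, in the year following the year of first credit; the dates and field names are hypothetical.

```python
from datetime import date

def credit_year(d: date) -> int:
    """Map a credit-earning date to its July 1 - June 30 year,
    labeled by the calendar year in which that span begins."""
    return d.year if d.month >= 7 else d.year - 1

def persisted(credit_dates: list[date], start_year: int) -> bool:
    """A student persists if he or she earned credit, at any institution,
    in the year following the one in which credit was first earned."""
    years = {credit_year(d) for d in credit_dates}
    return start_year in years and (start_year + 1) in years

# Hypothetical student: credit in December 2004 at one school and in
# June 2006 at another, i.e., in the 2004-05 and 2005-06 credit years.
credits = [date(2004, 12, 15), date(2006, 6, 10)]
print(persisted(credits, start_year=2004))  # True
```

The key difference from the traditional retention rate is that the second year of credit can be earned anywhere, not only at the original institution.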

More recently, a national commission under U.S. Education Secretary Arne Duncan has proposed modifications to the criteria for measuring community college performance that take student mobility into consideration. Transfer students will now be included in the criteria for student success and persistence.

Another step in the right direction has been taken by more than 300 public four-year colleges that have joined together to form the Voluntary System of Accountability Program. The completion metric used by that program includes transfer students. We can only hope that, in the future, higher education policy wonks will recognize the reality of transfer students and build that reality into their reporting and planning requirements.

In future posts, I will discuss how student mobility also complicates curriculum design and assessment of student learning.

Sources:

Adelman, C. (2006, February). The toolbox revisited: Paths to degree completion from high school through college. Washington, DC: U.S. Department of Education. Retrieved March 23, 2012, from http://www2.ed.gov/rschstat/research/pubs/toolboxrevisit/toolbox.pdf.

Borden, V. M. H. (2004, March/April). Accommodating student swirl: When traditional students are no longer the tradition. Change, 36(2), 10-17.

Ewell, P. T., Jones, D. P., & Kelly, P. J. (2003). Conceptualizing and researching the college student pipeline. Boulder, CO: National Center for Higher Education Management Systems.

Ewell, P. T., Schild, P. R., & Paulson, K. (2003, April). Following the mobile student: Can we develop the capacity for a comprehensive database to assess student progression? Lumina Foundation for Education Research Report.

Glenn, D. (2006, April 21). Government analyst says shoddy statistics tell a false tale about higher education. Chronicle of Higher Education, 52(33).

Selingo, J. (2012, March 2). The rise and fall of the graduation rate. The Chronicle of Higher Education. Retrieved March 22, 2012, from http://chronicle.com/article/The-RiseFall-of-the/131036/

Posted by: Gregory Linton | 03/22/2012

Student Mobility Renders Graduation Rates Meaningless

Although student flow is a common phenomenon, as I demonstrated in the previous post, it is often ignored by faculty, administrators, and policymakers when they plan and design programs and policies. A prime example is how it affects statistical reporting. The federal government, state governments, accrediting agencies, and other organizations such as the College Board require colleges and universities to report statistics based on formulae that they have judged to indicate effectiveness and quality. Unfortunately, these formulae often provide simplistic data that ignore the reality of student flow. Consequently, they yield meaningless bits of information that reveal very little about the effectiveness and quality of an institution.

Because of the recent report by the National Student Clearinghouse Research Center, the effect of student mobility on graduation rates is starting to receive some attention. In 2006, I wrote a paper on this topic for a class on “Public Policy in Higher Education” at Michigan State University, and I am sharing that information below with some updated references. Because I researched and wrote this in 2006, some of the statistics may be outdated.

A prime example of a meaningless statistic is the “program completion rate” (Selingo, 2012). In 1990, Congress passed legislation requiring colleges to publicize their graduation rates. As a result, some accrediting agencies require that their member institutions publish this statistic in their catalogs. It took the U.S. Department of Education five years to determine how graduation rates would be calculated. The formula reports the percentage of first-time, full-time freshmen who enroll in the fall semester and complete their degree at the same institution within 150 percent of the allotted time for the degree. The Department of Education decided to exclude transfer students since they would complicate the calculations (Burd, 2004).
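
A minimal sketch of that calculation, as I understand it from the description above, might look like the following; the student fields and the six-year window for a bachelor’s degree are illustrative, and the official IPEDS specification contains many more rules.

```python
from dataclasses import dataclass

@dataclass
class Student:
    first_time: bool        # no prior postsecondary enrollment
    full_time: bool         # enrolled full time in the entry term
    entered_fall: bool      # began in a fall semester
    graduated_here: bool    # completed the degree at this same institution
    years_to_degree: float  # time from entry to completion, if graduated

def federal_grad_rate(students: list[Student], program_years: int = 4) -> float:
    """Share of first-time, full-time, fall-entry students who finish at the
    same institution within 150 percent of the program length. Transfer-in
    students and students who transfer out and finish elsewhere are excluded."""
    cohort = [s for s in students if s.first_time and s.full_time and s.entered_fall]
    limit = 1.5 * program_years  # e.g., six years for a four-year degree
    completers = [s for s in cohort
                  if s.graduated_here and s.years_to_degree <= limit]
    return len(completers) / len(cohort) if cohort else 0.0

# Illustrative cohort: the middle student transferred out and graduated
# elsewhere, yet still counts against this institution's rate.
cohort = [
    Student(True, True, True, True, 4.0),
    Student(True, True, True, False, 0.0),
    Student(True, True, True, True, 5.5),
]
print(f"{federal_grad_rate(cohort):.0%}")  # 67%
```

Even this toy example shows how a successful transfer-out student drags down the rate of an institution that served him well.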

Every year the Department of Education collects graduation-rate data from every four-year college and university in America through its Graduation Rate Survey. These graduation rates are based on the formula described above. Similarly, retention rates are based on first-time, full-time freshmen who enroll in the fall semester. According to C. Adelman (2006), these formulae exclude half of traditional-age students from the calculation.

The narrow focus of this formula ignores a number of realities. First, it does not include part-time students, who make up 45 percent of the undergraduate population (Lipka, 2012). Consequently, an institution could have students who take a part-time load their first semester, switch to a full-time load thereafter, and graduate within four years, yet those students would never be included in the program completion rate.

Second, it does not include students who begin their college careers in the spring semester. Adelman (2006) has shown that only 82.1 percent of 1992 12th-graders entered postsecondary education in the fall semester, whereas 5.8 percent began in the summer and 12.1 percent began in the winter or spring. Adelman (2006) concludes that “any measure of retention or completion that confines its universe to students who began their postsecondary careers in the fall term is, to put it gently, grossly incomplete” (p. 46).

Third, graduation rates do not include students who transfer into an institution and graduate or students who transfer out of one institution to another institution and graduate from the second institution. This is especially unfair to community colleges, as J. Sbrega (2012) has argued. According to K. Carey (2005), including the latter group of students would add an average of eight percentage points to an institution’s graduation rate. J. Marcus (2012) reports that including students who transfer to a four-year institution before completing an associate degree would more than double the completion rate for community colleges.

I am not aware of any reporting on the retention and graduation of transfer students that a college or university must make to any agency. Apparently, those who determined what data are essential to report never considered the reality of transfer students. When one considers these realities, one is inclined to agree with Adelman that graduation rates are “anachronistic formulas that do not track students through increasingly complex paths to degrees” (2006, p. xvi).

Fourth, program completion rates do not consider that some students attend an institution fully intending to transfer after a year or two. This has long been the case at the institution where I teach. Some students intend to come here for a year or two to be grounded in their faith by studying the Bible and theology but then transfer to another institution because we do not offer a program in their chosen field of work (business, for example). These intentional decisions by students lower our graduation rate even though the institution has done nothing wrong. The graduation rate is not a clear indicator of quality. Most or all of those students could go on to earn a degree elsewhere in four or five years, but that would make no difference in our graduation rate.

To keep this post from being too long, I will discuss in the next post more ways that student mobility complicates statistical reporting.

Sources:

Adelman, C. (2006, February). The toolbox revisited: Paths to degree completion from high school through college. Washington, DC: U.S. Department of Education. Retrieved May 29, 2006, from http://www.ed.gov/rschstat/research/pubs/toolboxrevisit/index.html.

Burd, S. (2004, April 2). Graduation rates called a poor measure of colleges. Chronicle of Higher Education, 50(30), A.1.

Carey, K. (2005, January). One step from the finish line: Higher college graduation rates are within our reach. Washington, DC: The Education Trust.

Lipka, S. (2012, March 2). Students who don’t count. The Chronicle of Higher Education. Retrieved March 22, 2012, from http://chronicle.com/article/The-Students-Who-Dont-Count/131050/

Marcus, J. (2012, March 8). Community colleges want to boost grad rates–by changing the formula. The Hechinger Report. Retrieved March 19, 2012, from http://hechingerreport.org/content/community-colleges-want-to-boost-grad-rate-by-changing-the-formula_8076/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+HechingerReport+%28Hechinger+Report%29

Sbrega, J. (2012, March 13). Let’s change how we measure success–now. Community College Times. Retrieved March 19, 2012, from http://www.communitycollegetimes.com/Pages/Campus-Issues/Lets-change-how-we-measure-success-now.aspx

Selingo, J. (2012, March 2). The rise and fall of the graduation rate. The Chronicle of Higher Education. Retrieved March 22, 2012, from http://chronicle.com/article/The-RiseFall-of-the/131036/

Posted by: Gregory Linton | 03/13/2012

The Frequency of Student Mobility in Higher Education

In the last post, I defined terms related to the phenomenon of student mobility. In this post, I will summarize statistics that reveal how frequently students earn credits from more than one institution.

Several studies have revealed the frequency of student mobility. A. C. McCormick (1997) found that, of students who began their postsecondary education in 1989-90, 45 percent had enrolled as undergraduates at more than one institution by 1994. Of these students, 35 percent had actually transferred to another institution by 1994. He also found that 28 percent of students who began at a four-year institution transferred and that 43 percent of students who began at a two-year institution transferred.

The 2005 National Survey of Student Engagement (NSSE) found that “almost half (45%) of all seniors completed at least one course at another postsecondary institution since graduating from high school but prior to enrolling at their current institution.” Also, it found that “one-third of all seniors took at least one course at another postsecondary institution since first enrolling at their current institution” (p. 19).

The National Education Longitudinal Study of 1988 found that 57 percent of students who were high school seniors in 1992 and earned more than 10 college credits attended more than one school as undergraduates. This statistic was 51 percent for the class of 1982 and 47 percent for the class of 1972. Nearly 60 percent of the class of 1992 who earned bachelor’s degrees attended more than one school as undergraduates. These trends show that student mobility has increased in frequency over the decades. The study also found that one out of five of those who started in a four-year college and earned a bachelor’s degree earned the degree from an institution other than the one in which they began their college education (Adelman, 2004).

The recent report by the National Student Clearinghouse Research Center (2012) builds on these earlier studies with additional data. This study focused on all students in the U.S. who began postsecondary education in the fall of 2006, a total of 2.8 million students. It found that one-third of all students change institutions at some time before earning a degree. Of those who transfer, 37 percent do so in their second year, but 13 percent do not transfer until the fourth year and an additional 9 percent until the fifth year. Over one-fourth of all transfers crossed state lines. Transfer rates were similar across all types of institutions except that the rate for private for-profit institutions was much lower.

The mobility of students in higher education correlates with general geographic mobility of young adults. According to Schachter, “in 2003, about one-third of 20- to 29-year olds had moved in the previous year, more than twice the moving rate of all people 1 year and older” (2004, p. 3). These trends suggest that student mobility will become even more common in the future.

Although student flow is such a common phenomenon, it is often ignored by faculty, administrators, and policymakers when they plan and design programs and policies. The recent NSC report is beginning to bring attention to the complications caused by student mobility. In future posts, I will discuss how student mobility affects areas of higher education such as curriculum design, assessment of student learning, and statistical reporting.

Sources:

Adelman, C. (2004, January). Principal indicators of student academic histories in postsecondary education, 1972-2000. Washington, DC: U.S. Department of Education, Institute of Education Sciences.

McCormick, A. C. (1997, June). Transfer behavior among beginning postsecondary students: 1989-94. Postsecondary Education Descriptive Analysis Reports, NCES 97-266. Washington, DC: U.S. Department of Education, National Center for Education Statistics.

National Survey of Student Engagement. (2005). Exploring different dimensions of student engagement: 2005 annual survey results. Retrieved May 23, 2006, from http://nsse.iub.edu/pdf/NSSE_2005_Annual_Report.pdf.

Schachter, J. P. (2004, March). Geographic mobility: 2002 to 2003. Washington, DC: U.S. Department of Commerce, U.S. Census Bureau. Current Population Reports P20-549.

Posted by: Gregory Linton | 03/08/2012

A Glossary of Terms about Student Mobility

Research continues to show the increasing incidence of student mobility in higher education. The most recent report on the phenomenon was just released by the National Student Clearinghouse Research Center with the title Transfer & mobility: A national view of pre-degree student movement in postsecondary institutions. Other terms used to describe this phenomenon are “student swirl” and “student flow.” These terms refer to students taking classes from an institution other than the one that grants them their degree.

Although the facts about student mobility have been publicized thoroughly in the past and now in this latest report, academic policymakers and administrators often fail to take it into consideration. Instead, they generally design programs and policies based on the model of the traditional student who enters a four-year college immediately after finishing high school, lives on campus, and graduates from the same college without ever taking a course elsewhere (Borden, 2004).

C. Adelman (2006) has shown, however, that only “a third of traditional-age students who started in a four-year college earned a bachelor’s degree from the same school in the ‘traditional’ four-year period” (p. xxiv). Such students are a minority among the undergraduate population, and their numbers are decreasing. A minority of students experience a linear progression through their education; however, this linear progression is the default assumption of most academic policies and measures of effectiveness. Consequently, policies, programs, and assessment measures based on this model are flawed.

In future posts, I am going to discuss what the research shows us about how common it is for students to receive credits from more than one institution. And then I will discuss the overlooked impact of this on academic areas such as curriculum design, assessment of student learning, program completion rates, transfer of credit policies, and first-year assimilation programs. But first, I want to lay the groundwork by defining terms related to student mobility, relying mainly on the NSC report mentioned above. These terms will illustrate the variety of ways in which this phenomenon manifests itself.

Vertical, upward, or forward transfer: student movement from a community college to a four-year institution.

Reverse transfer: student movement from a four-year institution to a two-year institution.

Lateral transfer: student movement from a four-year institution to another four-year institution (or from a two-year to a two-year).

Student swirl: students who leave their original institution, take classes at another institution, and then return to their original institution.

Concurrent enrollment: students taking courses at more than one institution at the same time.

Serial transfers: students taking courses successively from a number of institutions.

These terms show that student mobility takes a variety of forms. In the next post, I will examine the statistics about the prevalence of student mobility.

Sources:

Adelman, C. (2006, February). The toolbox revisited: Paths to degree completion from high school through college. Washington, DC: U.S. Department of Education. Retrieved May 29, 2006, from http://www.ed.gov/rschstat/research/pubs/toolboxrevisit/index.html.

Borden, V. M. H. (2004, March/April). Accommodating student swirl: When traditional students are no longer the tradition. Change, 36(2), 10-17.

Robert Lee Hotz, “When Gaming Is Good for You,” WSJ.com.

This article in the Wall Street Journal summarizes the results of recent research on the positive effects of gaming on creativity, decision-making, perception, hand-eye coordination, and vision. It balances this information by also noting the negative effects of gaming, especially of games that contain violence.

Fili Bogdanic, “Generation Y: The Internet’s Effects on Cognition and Education | Triple Helix Online.”

This blog post summarizes the research on the positive and negative effects of internet use on cognition and education.
