Category Archives: For Schools

I can advise on program management, curriculum, faculty issues, and business development.

Unintended Consequences? Effects of the 2010 Accreditation Act on Intensive English Programs

The Accreditation Act, passed in 2010, required that F-1 students pursuing an English language training program attend a program accredited by a Department of Education-recognized accrediting agency. University-governed programs were covered by their university’s regional accreditor, which meant that for them, an additional specialized accreditation was optional. All proprietary programs – mostly for-profit language schools – were required to seek and gain accreditation.

The Accreditation Act was supported, and its passage celebrated, by program directors and leaders at university-governed programs and at well-established, already-accredited for-profit language school companies. They were motivated by a strong desire to bring greater professionalism to the field and to weed out a significant number of unscrupulous and fly-by-night operators who had cleared the relatively low bar for entry into the industry and whose low standards were tainting the field as a whole. Since the passage of the Act, the two specialized accrediting agencies for intensive English programs, CEA and ACCET, have added hundreds of intensive English programs to their rolls. Plenty of programs that sought accreditation have been denied, and the weeding-out process has been largely successful.

But some consequences are not so unequivocally positive for the field:

  • The accreditation process costs up to $10,000, plus annual sustaining fees. This is a significant financial burden on programs, especially during a time of enrollment challenges. While university-governed programs have the option of sheltering under their institution’s accreditation and avoiding these costs, proprietary programs have no choice but to pay up or cease doing business.
  • The requirement for an IEP to be accredited creates a Catch-22 for potential new entrants into the market. A proprietary program has to be in business for two years (ACCET) or one year (CEA) before it can apply for accreditation. The accreditation process itself takes around 18 months, and if it succeeds, the program must then wait for F-1 issuing approval from the federal government. In the words of one IEP administrator in this situation, “It felt like being choked to death for four years.” During this time, the program has to survive on non-F-1 students. The near-impossibility of this makes the price of entry extremely high for those wanting to enter the field. While there were always requirements to become an F-1 school, the Accreditation Act raised almost insurmountable barriers to new proprietary players.
  • A consequence of this is greater consolidation in the proprietary IEP market. If you cannot start a new school, you have to purchase an existing one. Inevitably, those with the resources to do this are large companies seeking to develop branded chains of English schools. Further, accrediting agencies make it relatively easy for existing schools to open new branches through a simplified accreditation process for the new branch, thus allowing existing companies to expand while new entrants continue to struggle to gain entry.
  • Accreditation likely has the effect of curbing innovation in the field. Adherence to accreditation standards tends to result in institutional isomorphism (the phenomenon of institutions of a certain type looking the same), and programs are reluctant to launch anything radically different for fear of not complying with accreditation standards. Aside from surface details (number of levels, number of weeks per session, etc.), IEPs can be quite difficult to tell apart. This, combined with the lengthy SEVP approval process for new programs, in turn leads to commodification in the industry: potential students have difficulty telling one program apart from another, and use price, location, and established brand reputation to make their choice rather than any specific features of a program.

Overall, the effect on the field has been positive. Students can apply to U.S. IEPs with the knowledge that their chosen program has been verified by an accreditor to meet high standards. The price to the industry as a whole is high, though, and we should look for ways to mitigate the downsides – in particular, to find ways to foster innovation and to be open to new models – as we continue to face challenging market conditions in the years to come.

How valid is that speaking test really?


Language learners who take an online language test usually expect to receive an evaluation of their speaking ability in the results. But online tests don’t do a very good job of assessing speaking ability because they lack construct validity: they cannot create the type of conditions the learner will be speaking the language in, such as a conversation or presentation. The iBT TOEFL has speaking components, but the test taker has no interlocutor, creating a highly unrealistic speaking situation – a monologue spoken into a microphone with no audience – on which speaking ability will be evaluated. Some online tests contain no speaking component at all; their claims about the test taker’s speaking ability are even more inferential than those of the iBT. None of this prevents test makers from making confident claims about their test’s ability to measure learners’ speaking ability.

Speaking is a particularly difficult skill to test properly, especially the ‘spoken interaction’ described in the Common European Framework of Reference. Research has shown that learners perform differently under different conditions. For example, a test taker scored more highly when paired with another learner in a conversation than when assessed by interview with an examiner (Swain, Kinnear, & Steinman, 2011). Conversation is co-constructed by participants, who build on and scaffold each other’s utterances. Conversation requires cooperation, the successful negotiation of meaning, strategies to understand the other person, asking questions, requesting clarification, affirming, and paraphrasing. Is it likely that any of this can be evaluated by an assessment that does not require the learner to do any of these things?

Online tests have emerged from the psychometric testing tradition, which assumes that an ability is stable in an individual, and therefore requires isolation of the individual in order to avoid extraneous influences. This is the opposite of most spoken language in use. We should question the usefulness of tests that make confident claims despite this lack of validity.

The best way for spoken language to be assessed is by an expert interlocutor interacting with and observing learners in interactions with others over a period of time. Language teachers – trained and experienced in assessment and evaluation techniques, and in many cases able to assess learners over the course of a session or semester – are best placed to offer this kind of assessment.

Reference
Swain, M., Kinnear, P., & Steinman, L., Sociocultural Theory in Second Language Education, Multilingual Matters, 2011


How SWBATs and can-do statements shortchange language learners

“Can keep up with an animated discussion, identifying accurately arguments supporting and opposing points of view.” “Can tell a story or describe something in a simple list of points.” If your program is using Common European Framework of Reference (CEFR) descriptors as its outcomes statements, you’ll be familiar with ‘can-do’ statements like these.

The CEFR was developed as a means to assess and describe language proficiency. It was built on the European tradition of communicative language teaching (CLT), which emphasized the performance of language tasks. Since language performance can be observed, the CEFR’s can-do statements were a perfect match for the measurable-outcomes-based accountability initiatives that came in the wake of No Child Left Behind. Many teachers have been trained, encouraged, or badgered to plan their lessons and courses around SWBAT (‘students will be able to’) or can-do statements.

There is a persuasive case to be made that CEFR (and similar) performance statements are a useful way to describe language proficiency. Employers, for example, want to know what a potential employee can do in a language – what practical uses the employee can put the language to. Language educators are not employers, though. What language educators need to know is whether and to what extent learning has taken place, and here’s the problem.

Broadly speaking, two educational traditions have informed language teaching: the behavioral, and the cognitive. Behaviorists see learning as a change in behavior, one that can be observed or measured. Cognitivists see learning as acquiring and understanding knowledge. The cognitivist tradition fell out of fashion with the demise of the grammar-translation method and the rise of behavior-based approaches to language teaching. These days, we can probably all agree that in language learning, we need to refer to both traditions: the acquisition or construction of a mental representation of the language, and the skill required to be able to use it in practice. When our outcomes are can-do statements, we focus on observable or measurable behaviors, but tend to pay less attention to acquired or constructed knowledge. We want to know if the learner ‘can tell a story,’ or ‘keep up with an animated discussion,’ for example.

If you have taught students from various countries, you know that some are great performers even if they lack a solid language base – somehow, they manage to draw on sparse linguistic resources to communicate. And on the other hand, you know that some learners have extensive language knowledge, especially grammar and vocabulary knowledge, but have a great deal of difficulty ‘performing.’ Hence, Chomsky wrote of language proficiency, “behavior is only one kind of evidence, sometimes not the best, and surely no criterion for knowledge” (as cited in Widdowson, 1990). The one is not necessarily indicative of the other.

If you are an educator (as opposed to an employer), you are interested in student learning in any form. You want to know what progress a learner has made. From a cognitive point of view, that includes changes in the learner’s mental representation of the language – a clearer understanding of the form, meaning, and use of the present perfect, for example – even if that has not yet resulted in a change in behavior, such as the ability to use that tense easily in a conversation. A learner who has made great strides in his or her mental representation of the language but is still speaking in telegraphic speech may be of little interest to an employer, but should be of great interest to an educator, because learning has taken place that is a basis for future teaching. Assessment and description of the learner’s language should address this type of progress. The behavioral tradition, with its can-do outcomes statements, has no interest in such cognitive development – it is not interested until there is a change of behavior, an observable, measurable performance.

This approach to assessment shortchanges learners who may have made real progress on the cognitive side. So, I’m calling on language educators not to accept uncritically the use of CEFR and similar performance-based descriptors as measures of language learning.

Reference
Widdowson, H.G., Aspects of Language Teaching, Oxford University Press, 1990

The Accreditation-Ready Program

Few obligations for faculty and staff cause more knots in the stomach and more departmental wrangling than preparing the accreditation self-study. It is often viewed as a burden, a distraction from everyone’s ‘real’ work, and a process of bureaucratic box-checking or of trying to fit the round peg of the program into the square hole of accreditation requirements.

In Five Dimensions of Quality, Linda Suskie draws on years of experience with accreditation, institutional and program assessment, and accountability to re-frame the role of accreditors as “low-cost consultants who can offer excellent collegial advice” (p. 245) to schools and programs seeking to demonstrate their value to stakeholders in an increasingly competitive market. Accreditation should be viewed not as an imposition of alien practices on an established program, but as a way for a school or program to gain external affirmation of already-existing quality. The challenge is not to make the program ‘fit’ accreditation standards, but actually to be a quality program and demonstrate that quality.

Accreditation success, then, flows naturally from the pursuit of quality, and is not an end in itself. But what is quality? Suskie breaks it down into five dimensions or ‘cultures’:

A Culture of Relevance
Deploying resources effectively to put students first, and understanding and meeting stakeholders’ needs.

A Culture of Community
Fostering trust among faculty, students, and staff, communicating openly and honestly, and encouraging collaboration.

A Culture of Focus and Aspiration
Being clear about school or program purpose, values, and goals.

A Culture of Evidence
Collecting evidence to gauge student learning and program or school effectiveness.

A Culture of Betterment
Using evidence to make improvements and deploy resources effectively.

Fostering these cultures is the work of leadership, since they require widespread buy-in from all stakeholders. The challenge in many institutions is institutional inertia, as Suskie points out in her chapter, “Why is this so hard?” Faculty, staff, and governing boards may feel satisfied that the school’s reputation is sufficient for future success; resources – especially money and people’s time – may not be forthcoming; faculty and staff may live in comfortable isolation from the real-world needs of students; there may be an ingrained reluctance to communicate successes; there is frequently resistance to change; and siloed departments in programs and institutions make across-the-board cultural change difficult to pull off.

The question administrators and faculty should ask themselves is, “Do we put our efforts into pursuing quality, or into maintaining our accreditation?” Suskie’s book presents a convincing case that working on the former will make the latter much easier and will result in quality rather than box-checking. For its straightforward prose (including jargon alerts scattered throughout), its sound advice, and its call for schools to demonstrate quality in a highly competitive environment, Five Dimensions of Quality should be a go-to resource on the reference bookshelf of decision-makers and leaders in higher education programs.

Suskie, L., Five Dimensions of Quality, Jossey-Bass, 2015

More of my education-related book reviews are at Amazon.

Challenge and change in intensive English programs

From left: Bill Hellriegel, Carol Swett, Michelle Bell, Amy Fenning, Alan Broomhead

Challenges over the past few years have deeply impacted intensive English programs, forcing irreversible changes in their organizational cultures that result in anxiety and tension, but also innovation and adaptation. That was the theme of a panel session, “Organizational Culture in University and Proprietary IEPs: Challenges and Changes,” presented by Michelle Bell (University of Southern California), Amy Fenning (University of Tennessee at Martin), Bill Hellriegel (Southern Illinois University), Carol Swett (ELS Language Centers at Benedictine University, Illinois) and myself at the TESOL International Convention on March 28. Recognizing the cultural types of IEPs and how they are affected by changes is the first step in adapting and surviving in an increasingly competitive field.

IEP cultures can roughly be divided into collegial and managerial types, following Bergquist and Pawlak’s (2007) typology of academic cultures. A collegial culture, more likely to be found in a university-governed IEP, is faculty-focused, with faculty scholarship and teaching, academic autonomy and freedom, and faculty ownership of the curriculum as the organizing principle. A managerial culture is administration-driven, motivated by considerations of fiscal responsibility and effective supervision, and organized by systems, processes, and standards.

The massive shift to accreditation in IEPs has moved collegially-oriented programs in a managerial direction. Faculty are required to plan, teach, and assess in compliance with program-wide student learning outcomes; policies and procedures have to be written and followed; and program success is measured by data, which has to be systematically collected, analyzed, and evaluated. Proprietary IEPs are seeing a shift in the other direction: faculty standards require minimum levels of certification, experience, and ongoing professional development, and these are affecting faculty hiring and employment practices in many proprietary programs.

The severe enrollment challenge of the past two years has also affected both types of program. University IEPs are becoming more revenue-driven and entrepreneurial, actively seeking new recruitment partnerships and designing new programs – such as short-term high school programs – to respond to changing demand. Faculty may have little say in these initiatives. Meanwhile, proprietary IEPs are increasingly developing conditional-admit and TOEFL-waiver agreements with partner universities, requiring them to make programs more academically-focused and hire masters-level teachers who are qualified to teach English for academic purposes.

These are ground-shifting developments, and program leaders who recognize the need to address profound cultural change in their organizations – and not just surface-level adjustments – will be in the strongest position to navigate these challenging times.

Reference
Bergquist, W.H., & Pawlak, K., Engaging the Six Cultures of the Academy, Jossey-Bass, 2007