The Accreditation-Ready Program

Few obligations for faculty and staff cause more knots in the stomach and departmental wrangling than preparing the accreditation self-study. It is often viewed as a burden, a distraction from everyone’s ‘real’ work, and a process of bureaucratic box-checking or of trying to fit the round peg of the program into the square hole of accreditation requirements.

In Five Dimensions of Quality, Linda Suskie draws on years of experience with accreditation, institutional and program assessment, and accountability to re-frame the role of accreditors as “low-cost consultants who can offer excellent collegial advice” (p. 245) to schools and programs seeking to demonstrate their value to stakeholders in an increasingly competitive market. Accreditation should be viewed not as an imposition of alien practices on an established program, but as a way for a school or program to gain external affirmation of already-existing quality. The challenge is not to make the program ‘fit’ accreditation standards, but actually to be a quality program and demonstrate that quality.

Accreditation success, then, flows naturally from the pursuit of quality, and is not an end in itself. But what is quality? Suskie breaks it down into five dimensions or ‘cultures’:

A Culture of Relevance
Deploying resources effectively to put students first and to understand and meet stakeholders’ needs.

A Culture of Community
Fostering trust among faculty, students, and staff, communicating openly and honestly, and encouraging collaboration.

A Culture of Focus and Aspiration
Being clear about school or program purpose, values, and goals.

A Culture of Evidence
Collecting evidence to gauge student learning and program or school effectiveness.

A Culture of Betterment
Using evidence to make improvements and deploy resources effectively.

Fostering these cultures is the work of leadership, since they require widespread buy-in from all stakeholders. The challenge in many institutions is institutional inertia, as Suskie points out in her chapter, “Why is this so hard?” Faculty, staff, and governing boards may feel satisfied that the school’s reputation is sufficient for future success; resources – especially money and people’s time – may not be forthcoming; faculty and staff may live in comfortable isolation from the real-world needs of students; there may be an ingrained reluctance to communicate successes; there is frequently resistance to change; and siloed departments in programs and institutions make across-the-board cultural change difficult to pull off.

The question administrators and faculty should ask themselves is, “Do we put our efforts into pursuing quality, or into maintaining our accreditation?” Suskie’s book presents a convincing case that working on the former will make the latter much easier and will result in quality rather than box-checking. For its straightforward prose (including jargon alerts scattered throughout), its sound advice, and its call for schools to demonstrate quality in a highly competitive environment, Five Dimensions of Quality should be a go-to resource on the reference bookshelf of decision-makers and leaders in higher education programs.

Suskie, L. (2015). Five Dimensions of Quality. Jossey-Bass.

More of my education-related book reviews are at Amazon.

Why language is best assessed by real people


“Classroom decoration 18” by Cal America is licensed under CC BY 2.0

What is the most effective way to assess English learners’ proficiency?

It has become accepted in the field to rely on psychometric tests such as the TOEFL iBT (internet-based test) and the IELTS for college and university admissions. Yet these and most other language tests are an artifice, a device placed between the student’s actual proficiency and direct observation of that proficiency by a real human being. Students complete the limited set of tasks on the test, and based on the results, an algorithm makes an extrapolation as to their broader language abilities.

When you look at a TOEFL score report, it does not tell you that student’s English language ability; what it tells you is what a learner with that set of scores can typically do. And in the case of the TOEFL, this description is an evaluation that is based largely on multiple-choice answers and involves not one single encounter with an actual human being. Based on this, university admissions officers are expected to make an assumption about the student’s ability to handle the demands of extensive academic reading and writing, classroom participation, social interaction, written and spoken communications with university faculty and staff, SEVIS regulations, and multiple other demands of the U.S. college environment. (Although the IELTS includes interaction with the examiner and another student, these interactions are highly structured and not very natural. TOEFL writing and speaking tasks are limited, artificial, and assessed by a grader who has only a text or sound file to work with.)

Contrast that with regular, direct observation of students’ language proficiency by a trained and experienced instructor, over a period of time. The instructor can set up a variety of language situations involving variation in interlocutors, contexts, vocabulary, levels of formality, and communication goals. In an ACCET or CEA accredited intensive English program, such tasks are linked to documented learning objectives. By directly observing students’ performance, instructors are able to obtain a rich picture of each student’s proficiency, and are able to comment specifically on each student’s strengths and weaknesses.

Consider this a call, then, for colleges and universities to enter into agreements with accredited intensive English programs to waive the need for a standardized test such as the TOEFL. Just as those colleges and universities don’t use a standardized test to measure the learning of their graduates, they should be open to accepting the good judgment of teachers in intensive English programs – judgment based on direct observation of individual learners rather than the proxy scores obtained by impersonal, artificial tests.

Aligning assessment and IEP culture

Since the passage of the Accreditation Act of 2010, intensive English programs (IEPs) have been under pressure to justify their quality claims by recording and reporting on student achievement. This has meant devising program-wide systems for assessing and evaluating students, and has been a challenge for many IEPs.

The type of system a program develops is influenced by its culture. A more managerial (top-down, administratively driven) culture typical of proprietary English schools tends to favor standardization of assessment that includes program-wide level-end tests. Many university IEPs have more of a collegial (faculty-driven with a degree of shared governance) culture in which individual faculty decision-making and autonomy are valued. In the latter, attempts to introduce or impose standardized testing can grate against the culture. It may be more agreeable to retain faculty autonomy in assessment while introducing checks to ensure that assessments are aligned with course objectives and outcomes.

Both approaches (and blends of the two) are used by CEA-accredited programs and are able to meet the CEA standards. There is no need to create standard assessments across a program if they do not fit the culture. On the other hand, the imperative to assess students in a more consistent way can be a catalyst for culture change. This will need leadership, persuasion, and buy-in from faculty.

I’ve designed and overseen assessment and evaluation systems in proprietary and university programs, and can support programs in determining and developing the right approach. Get in touch if I can help!

Have a great weekend!

(Learn more about academic cultures in Engaging the Six Cultures of the Academy by William Bergquist and Kenneth Pawlak. I highly recommend it.)