The CLA at FSU: A Model for Incorporation

First, a bit of stage setting: several years ago, my institution, Fayetteville State University, deliberately became involved with a powerful and flexible assessment and teaching/learning tool, the Collegiate Learning Assessment (CLA). Over the last two years, through faculty development, ongoing course redesign by individual faculty, a variety of faculty-driven efforts, administrative support and leadership, and deliberate shifts in assessment, the CLA has quickly put down deep roots in the soil of FSU.

When I first arrived at FSU two years ago, transitioning from six years of teaching for Ball State University in a maximum-security prison to a small southern historically black college where we Philosophy profs teach 3-4 Critical Thinking classes per semester, I had never heard of the CLA. Nor had nearly all of the other professors, staff, and administrators. A few professors had been shipped off to a CLA Academy and were ready to lead a workshop on it that fall. Our new Provost glimpsed the potential of this new mode of assessment, based on performance tasks and rubrics and involving "authentic assessment" of Critical Thinking, Problem Solving, and Writing skills. He decided not only to support the CLA but to invest his long-earned institutional capital in advocating for its adoption at FSU.

Since then, we have. . . well, for a moment, let me put that off (and suspend the obsessive tendency to systematically examine, outline, and explain that got me into the Philosophy racket). In the last week, two articles focused on the CLA, both by David Glen, appeared in the Chronicle of Higher Education. The shorter, "An Assessment Test Inspires Tools for Teaching," zeroed in on FSU and on two of our English faculty who have used the CLA in classes in innovative ways. The longer, "A Measure of Education Is Put to the Test," looked at CLA efforts nationwide, noted some of the difficulties in getting good data on precise scores and degrees of improvement, and concluded by raising a seeming dilemma posed by Richard Shavelson's Measuring College Learning Responsibly: Accountability in a New Era (which builds from his paper "A Brief History of Student Learning Assessment").

Here's how Glen sets the dilemma:
Mr. Shavelson believes the CLA's essays and "performance tasks" offer an unusually sophisticated way of measuring what colleges do, without relying too heavily on factual knowledge from any one academic field. But in his book he also notes the tension between the two basic uses of nationally normed tests: Sometimes they're used for internal improvements, and sometimes they're used as benchmarks for external comparisons. Those two uses don't always sit easily together. Politicians and consumers want easily interpretable scores, while colleges need subtler and more detailed data to make internal improvements. Can the CLA fill both of those roles? That is the experiment that will play out as more colleges unveil their scores.
So, there's the problem: can the CLA be used both in a simplified way to satisfy external stakeholders who demand accountability and a clear demonstration of the value added by a university education, and also in a much more sophisticated way to guide ongoing internal improvement within the university? Is there a real dilemma here? Does pursuing one of these preclude pursuing the other?

Let's raise the stakes. Let's turn this into a trilemma: either the CLA can be a PR tool providing easily interpretable scores to outside stakeholders, or it can be an assessment tool providing much more sophisticated assessment to guide internal improvement, or it can be a pedagogical tool integrated into the curriculum and used in the classroom to improve students' academic skills.

As my poker buddies know, I am a terrible bluffer. When they see me raise, they know I've got something good in my hand. And FSU does too. The CLA turns out to be a very powerful, multi-function approach to education. But you can only find this out when you invest the time and resources in it. The full potential of the CLA approach only starts coming to light when faculty study and apply it, when administrators get behind it, and when students are, as the motto goes, being "taught to the right test."

Since this is starting to sound a little bit like a sales pitch for the CLA company, let me shift back to the three prongs of the trilemma. FSU is simultaneously using the CLA to address all three of these.

Classroom assignments, course design, CLA performance tasks as in-class tests -- these are what CLA in the Classroom focuses on. It's a bit unfortunate that David Glen went with older, cruder, but easily accessible published work in his shorter article focusing specifically on FSU, instead of researching what we FSU faculty have done with the CLA in courses since 2008-9. But his lapse actually makes my point for me.

That was the CLA's level of involvement in our classes two years ago. Now, after a year of a CLA Workgroup working to interest and involve faculty, after another CLA Academy in early August, and after the mid-August announcement that our Quality Enhancement Plan for the next ten years would use the CLA as a central tool, even more instructors are using the CLA in their courses. And it's not all -- or just -- grass-roots, ground-up. More and more institutional support is being committed to faculty development. Some departments are deliberately adopting the CLA. Mine (Government and History) has actually made CLA scores part of its Operational Plan for 2010-11.

That brings us back to the trilemma's second prong. The CLA does provide a sophisticated tool for assessing levels of student skill in three main, high-demand skill sets: Critical Thinking (sometimes called Analytic Reasoning), Problem Solving, and Written Communication. I'll mention just three ways we are doing this.

First, my department has committed itself not only to giving and scoring CLAs in a representative proportion of classes, but also to certain levels of student scores on them (or, if those aren't met, to figuring out why the targets were missed -- a key part of continual improvement). Second, I have been working on how to use the CLA as a teaching strategy, and I've been experimenting and gathering data for the last three course cycles on how students' scores can be improved. By the end of the semester, that data will be available to other FSU professors to guide their own CLA pedagogy. Third, FSU has started using -- and reporting the results of -- faculty-generated CLAs for the Entering Freshmen and Rising Junior Examinations.

That brings us to the trilemma's remaining prong. Is the CLA being used to show value added and accountability to outside stakeholders? The answer right now has to be a qualified yes, simply because we don't yet have years of CLA score data with which to demonstrate improvement to legislators, alumni, and North Carolina taxpayers in general. But we've put the means in place to get those numbers year after year from now on, and we've got one year of data to provide a baseline.

Interestingly enough, FSU originally began its involvement with the CLA because of pressure to provide accountability from higher up the educational and legislative food chain. Most of the UNC system schools balked, dragged their feet, or simply ignored what legislators, taxpayers, and the Board of Governors want -- a clear demonstration that, after the 4-5 years we have them in our classrooms, students are coming out stronger in key workplace skills than they came in. They chose a particularly bad time to do so, and FSU picked a particularly opportune time to accept the task.

We are now two years into one of the worst budget crises North Carolina has seen, and we are expecting another two years at least of budget cuts for the UNC system. A bad time overall, but not such a bad time if you've got the means to show stakeholders that your university is actually committed to measurable student learning and can provide tangible results of improvement in absolutely key skills.