LTAC Speaker Spotlight: A Pathway for An Institution-Wide Assessment Program

Are We There Yet?

Assessment seems to be everywhere. Over the last ten years, associations, conferences, funding agencies, presentations, journals, articles, and books have paid increasing attention to assessment and assessment-related issues. Most of us know that interest in assessment has been around for as long as teachers have been interested in knowing whether and how much their students were learning. Every institution with a mission that includes student preparation can provide multiple examples of good assessment, and most of these will emanate from the classroom level.

However, fueled by stakeholders within and outside the institution, assessment has become more formalized. Institution-wide assessment programs and activities are now widespread, as recent surveys of institutions of higher education indicate. Assessment work is prevalent, yet the use of assessment information for program improvement, and the study of assessment's impact, remain very limited (Peterson & Vaughan, 2002).

What is it that renders some institutional assessment programs dynamic, useful, and constructive, while others seem to gather data endlessly, without use or purpose?

Drawing from my experience of working for well over a decade as an assessment practitioner at an institution that I believe models excellence, I can hazard some educated guesses. I hope the model will generate discussion.

Let me begin with two caveats. First, and in all honesty, my presence (or that of someone like me) as an assessment practitioner is not the missing ingredient; I have met many talented and competent assessment professionals who do not enjoy the same fruits from very similar labors. In fact, I have sent newly minted and highly talented Assessment and Measurement PhDs to institutions that expressed a true desire to engage in assessment, only to watch them encounter many of the obstacles and barriers I will try to describe in this paper. Second, my institution-wide model is not presented to suggest that classroom- or program-level assessment endeavors cannot be robust and have impact. As indicated above, many are, but their influence does not extend beyond the source; life-giving creative energy and institutional knowledge are both lost. This essay provides a model for institution-wide assessment programs that are sustainable and have documented success.

My ideal model for the development of institution-wide assessment looks something like Figure 1, which lists six sequential components. Each component leads logically to the next, and, as with many developmental models, no component can be skipped or assumed. Each component is described below. Table 1 follows Figure 1 and provides sample negative and positive diagnostic indicators to foster additional conversation. I'm sure many participants will be able to contribute additional indicators.

Figure 1. Model for institution-wide assessment development.

Vision > High Standards > Commitment > Resources > Structure > Integration

Table 1. Sample negative and positive diagnostic indicators for each model component.

Vision
  Negative indicators:
  • Assessment 'gears up' for an accreditation visit
  • One individual responsible for all assessment
  • Talk, no action
  • Compliance mode
  Positive indicators:
  • Assessment tied to institutional mission
  • All divisions & departments engaged
  • Leaders share vision & findings

High Standards
  Negative indicators:
  • Flawed sampling plan
  • Poor instrumentation
  • Only surveys used
  • No data analysis or interpretation
  Positive indicators:
  • Quality data collected & maintained
  • Measure what matters
  • Faculty across campus contribute & interpret

Commitment
  Negative indicators:
  • Program dies when a key individual departs
  • Budget cuts hit
  • Administrators 'warm the bench' and delegate
  Positive indicators:
  • Sustained communication of vision
  • Data collection is systematic
  • Methods improve
  • Time to evolve and 'get it right'

Resources
  Negative indicators:
  • No one has time
  • Costs too much
  • No institutional rewards or recognition for participants
  • People involved 'burn out'
  • Findings not used for improvement
  Positive indicators:
  • Time & money invested
  • Personnel lauded for contributions
  • Budget allocation & reallocation informed by quality assessment data
  • Programs use data

Structure
  Negative indicators:
  • No reporting process developed
  • No one knows what happened anywhere else on campus
  • Budget decisions made 'across the board'
  • Starts and stops observed across individual assessment programs
  Positive indicators:
  • Programs report problem areas & improvements
  • Departmental & school working committees report up, down & across
  • Institution-wide working committees that work
  • Assessment data required for institutional reporting & program review

Integration
  Negative indicators:
  • Negative assessment findings concealed out of fear
  • Campus divisions remain demarcated, 'divided'
  • No sense of institutional identity or mission
  • Task forces work hard with no outcome impact
  Positive indicators:
  • Data & processes can be aggregated to tell the institutional story
  • Use of data is expected
  • Institution can provide external stakeholders with clear evidence of caring & quality
Vision

The model begins with a vision of what the institution wants assessment to achieve and how it can serve the institution in fulfilling its mission. Note that this is an institutional vision, not a classroom- or program-level vision. For an assessment program to truly have an impact on the quality of education, an institutional perspective is a prerequisite. This vision must be shared both across and within each division of the institution.

The full support of the Administration, Finance, and University Advancement divisions is necessary to use the findings that good assessment can bear. Many institutions perceive only the division of Academic Affairs as a key player in assessment, with a few also including Student Affairs as a partial or second-tier participant. The active participation of all components of an institution is required to achieve the shared vision of a dynamic and influential assessment program.

A shared notion of what the institution is and how all components fit together builds community. Indeed, accrediting bodies now require involvement by all university divisions as a demonstration of institutional effectiveness. I can hear an early death knell tolling when I see a single individual hired to 'take care of assessment' for a complex institution.

High Standards

As with all quality endeavors, high standards are expected for both personnel and practice. To earn credibility for assessment activities, data, and the results that will be forthcoming, a scientific orientation is required, one that can withstand careful scrutiny by skeptics both within and outside the institution. We need to measure what matters, not what is easy to count.

Academe is populated by intellectually demanding individuals; they will require solid assessment data-collection designs, reliable and valid instrumentation, and sound data analysis. The individuals engaged in these activities will need to provide such evidence, and no institution could responsibly use information from a set of procedures that does not fulfill these expectations (Sundre, 1994).

Fortunately, all educational institutions have many talented critical thinkers from a variety of academic disciplines to draw on. No single division or department has a monopoly on clear thinking or high standards. There is no excuse for not demanding and attaining this component over time.

Commitment

If we have a shared vision and have established high standards for practice, an unswerving commitment must be made, one that will withstand the ebb and flow of economic tides as well as changes in leadership at any institutional level. It is relatively easy to make a real commitment to quality assessment when it aids the achievement of the institutional mission and is conducted in a manner that welcomes scrutiny and engagement.

Moreover, making such a commitment clearly communicates, both within the institution and outside it, that we assume responsibility for stewardship of the institution toward public goals. Unfortunately, 'fear of commitment' is not experienced only by those seeking meaningful romantic relationships; it is all too common in other contexts, and assessment practice is one of them. All too often, this 'fear of commitment' is a legitimate response to a lack of vision and quality in the assessment plan and process. These are the assessment programs that we hear faculty lament as 'a waste of time and energy.'

Resources

Many institutions point to a lack of fiscal resources (an economic downturn, budget cuts, or reallocations) as the primary reason they have not developed a strong assessment program. This argument is flawed, because the most important assessment resources are not monetary.

Vision, high standards, and commitment cost nothing, but they mean everything in the development of a quality institution of higher education. A quality assessment program can and should be a natural byproduct of these components. Time is a limited resource, and it can only be expended once; misspending any resource as dear as time represents an opportunity cost.

If we are to spend a precious resource, we must ensure that the expenditure is directly linked to the achievement of the institution's mission and most important objectives. What could possibly be more important than ensuring that student growth and development are monitored with the intention of continuous improvement? Expending these resources is an investment worth making; it will reap rich rewards across campus domains and over time.

Structure

The development of an institutional structure is critical to using assessment information in a timely fashion. Institutional committees at several levels are an important means by which faculty and administrators can keep apprised of assessment findings and how those findings can inform program, curriculum, and instructional-delivery decisions.

While this sounds labor-intensive, it is not. For some institutions, it would simply mean setting committee priorities and working smarter.

Here are a few examples: 1) eliminating or restructuring committees to pursue more meaningful missions; 2) conducting selected committee business via email, reserving meeting time for the most important issues; or 3) breaking into subcommittees to work on tasks independently, then reporting back to the full committee.

Many other examples could be provided; the point is that careful structure creates time and maximizes its use for what matters most.

My experience tells me that faculty and administrators truly enjoy interdisciplinary opportunities to talk about what they care most about: student growth and development. These discussions are intellectually stimulating and professionally enriching, but only when the first four components of the model are evident.

Further, program assessment needs a common reporting structure, one that eliminates guesswork about what is wanted and expected and fosters the aggregation of information for broader knowledge and data use.

While several excellent examples may be available elsewhere, a good model of solid reporting structure is the Academic Program Review Guidelines from James Madison University, available for review at https://www.jmu.edu/academic-affairs/apr/index.shtml. This document makes clear what assessment information is expected and how it relates to other institutional data that can and should be used when evaluating programs.

Integration

If the above five components are in place, achieving an integrated assessment program is highly likely. The successes of one area will be used to promote positive change in others, and a sense of community begins to develop around the identity and unique nature of the institution. This information helps the institution credibly demonstrate to many external stakeholders the vitality and professionalism of individual programs as well as of the institution as a whole.

Assessment helps to build a 'culture of evidence' that informs and strengthens many decisions, as well as the commitment to them.

The benefits of strong data-collection designs, and of the quality of the data obtained, far outweigh the costs. Remember that these 'costs' were once considered insurmountable. For institutions that have made careful investments over time, the benefits are multifaceted and worthwhile.

Conclusion

It can be intimidating to begin this process, but many successful and very diverse institutions have demonstrated multiple pathways toward achievement (see Banta, 2002, for examples). We have all learned from the experiences of others. I encourage you to continue your quest. If we support one another, we will make progress on the pathway, and we will be able to provide a meaningful answer to the question, "Are we there yet?"

References

Banta, T. W., & Associates. (2002). Characteristics of effective outcomes assessment: Foundations and examples. In T. W. Banta & Associates (Eds.), Building a scholarship of assessment (pp. 261-283). San Francisco: Jossey-Bass.

Peterson, M. W., & Vaughan, D. S. (2002). Promoting academic improvement: Organizational and administrative dynamics that support student assessment. In T. W. Banta & Associates (Eds.), Building a scholarship of assessment (pp. 26-46). San Francisco: Jossey-Bass.

Sundre, D. L. (1994). The practice of student and program assessment: Evolution through engagement. Assessment Update, 6(1), 4-5.

Written by: Donna L. Sundre

Emeritus Professor of Graduate Psychology & Emeritus Executive Director
Center for Assessment and Research Studies, James Madison University

Dr. Sundre is a featured presenter at this year’s Conference.

To learn more about this year's Conference, visit www.livetextconference.com.
