
Growth Mindset…Meet Ed-Tech…You two should be very happy together!

July 21, 2016

Ever since Stanford University psychologist Carol Dweck’s Mindset: The New Psychology of Success came out in 2006, terms like fixed and growth mindset have become buzzwords in education. However, according to Dweck, parents and teachers often misuse growth mindset research, failing to instill the mentality most conducive to students’ success.

Based on decades of research on achievement and success, Dr. Dweck explained mindset in the following way. In a fixed mindset, people believe their basic qualities, like intelligence or talent, are simply fixed traits. They spend their time documenting their intelligence or talent instead of developing them. They also believe that talent alone creates success—without effort.

In a growth mindset, people believe that their most basic abilities can be developed through dedication and hard work—brains and talent are just the starting point. This view creates a love of learning and a resilience that is essential for great accomplishment. Teaching a growth mindset creates motivation and productivity in the worlds of business, education, and sports. It enhances relationships.

Dweck also explains how the term growth mindset has been conflated with another buzzword: grit. Encouragement to simply try harder isn’t productive, according to Dweck. What’s essential is cultivating the problem-solving and critical-analysis skills that help students think smarter and then passionately capitalize on those abilities.

As an educator, you may be hearing a lot about grit these days—and with good reason. Studies show that kids who demonstrate grit persist at hard tasks and outperform their competitors. Grit is a critical strength of most people who are successful. It is especially complex because it is related to other skills such as optimism, purpose, growth mindset, bravery, and even self-control. There are a lot of misconceptions about grit. Grit is much more than just encouraging kids to “try harder” or not give up—it’s also about helping kids find their passion. Having grit does not mean never quitting—it means quitting responsibly and sticking to the things to which you are truly dedicated.

In this sense, cultivating a growth mindset and grit involves more than a shift in attitude; it involves developing the right tools and strategies to make that shift sustainable for any teacher. This is where Ed-Tech can play a valuable role in changing students’ attitudes toward education, supporting the development of those critical thinking competencies, and engaging students in their learning journeys.

Shift Focus from GPA

Comparing students’ GPAs is an antiquated evaluation system. Because an A doesn’t mean the same thing from school to school, or from region to region, universities often need a complicated leveling metric to compare applicants. Given that the Common Core is based on skills acquired, couldn’t evaluation mirror those standards? And would we need standardized testing if evaluation and instruction became the feedback loop they should be?
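
To make the idea of a leveling metric concrete, here is a minimal sketch of one common approach: standardizing each GPA against its own school’s distribution. The school names, numbers, and method are invented for illustration; real admissions offices use far more elaborate models.

```python
from statistics import mean, stdev

# Hypothetical GPAs grouped by school (illustrative data only).
gpas_by_school = {
    "North High": [3.9, 3.7, 3.8, 3.95, 3.6],  # inflated grading: GPAs cluster high
    "South High": [3.1, 2.8, 3.4, 2.9, 3.0],   # stricter grading: lower average
}

def leveled_gpa(school: str, gpa: float) -> float:
    """Express a GPA as standard deviations above or below the school's mean."""
    cohort = gpas_by_school[school]
    return (gpa - mean(cohort)) / stdev(cohort)

# A 3.8 at North High is merely average there (~0.07 standard deviations),
# while a 3.4 at South High is exceptional (~1.56).
print(round(leveled_gpa("North High", 3.8), 2))
print(round(leveled_gpa("South High", 3.4), 2))
```

The point of the sketch is simply that the same letter grade carries different information in different contexts, which is exactly why skills-based evaluation is appealing.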

Tap into Actionable Data

In my experience as a high school and college writing tutor, what separates a great tutor from a good one is the ability to identify exactly which challenge a student is facing on a given problem. It’s a level of understanding that comes with experience, and this capacity can accelerate learning in ways that might seem impossible in a classroom of 25 or more students. With technology, we have the potential to aggregate user experience in ways that expedite insights that are specific and actionable, empowering teachers to cut to the chase when working with students who often lack the conceptual context to articulate what’s stumping them in the first place.
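
As one sketch of what that aggregation could look like, the snippet below flags the exercises most of a class missed, so the teacher knows exactly what to reteach. The student names, question ids, and 50% threshold are all hypothetical.

```python
# Hypothetical per-question outcomes from an online exercise set:
# question id -> list of (student, answered correctly?) pairs.
responses = {
    "factoring-quadratics": [("ana", False), ("ben", False), ("carla", True), ("dev", False)],
    "slope-intercept":      [("ana", True),  ("ben", True),  ("carla", True), ("dev", False)],
}

def trouble_spots(responses: dict, threshold: float = 0.5) -> list[str]:
    """Return question ids that at least `threshold` of the class missed, worst first."""
    miss_rates = {
        question: sum(1 for _, correct in outcomes if not correct) / len(outcomes)
        for question, outcomes in responses.items()
    }
    flagged = [q for q, rate in miss_rates.items() if rate >= threshold]
    return sorted(flagged, key=lambda q: -miss_rates[q])

# Rather than 25 students each struggling to articulate what's stumping them,
# the teacher sees one concept to revisit tomorrow.
print(trouble_spots(responses))  # ['factoring-quadratics']
```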

Rethink Student Feedback

At LiveText, we have worked closely with faculty and administrators to learn firsthand the importance of feedback on student performance. When students receive immediate and ongoing feedback over a term in any given subject, which is possible with technology today, there is a significant boost in engagement and participation, which can lead to a shift in mindset.
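
Mechanically, “immediate” feedback can be as simple as attaching rubric-based comments to a submission the moment it arrives. The rubric criteria and messages below are invented for illustration; this is a sketch of the general pattern, not LiveText’s actual implementation.

```python
# Hypothetical rubric: each criterion pairs a praise message with a revision
# suggestion that the student sees as soon as a draft is submitted.
RUBRIC = {
    "has_thesis":    ("States a clear thesis.", "Try stating your main claim in one sentence up front."),
    "cites_sources": ("Supports claims with sources.", "Add at least one citation to back up your evidence."),
}

def immediate_feedback(checks: dict[str, bool]) -> list[str]:
    """Turn pass/fail rubric checks into comments returned right away."""
    notes = []
    for criterion, passed in checks.items():
        praise, suggestion = RUBRIC[criterion]
        notes.append(f"[ok] {praise}" if passed else f"[revise] {suggestion}")
    return notes

# A draft with a thesis but no citations gets one targeted suggestion while
# the term is still in progress, when the student can still act on it.
for note in immediate_feedback({"has_thesis": True, "cites_sources": False}):
    print(note)
```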


Making the Most of Attending the Conference!

July 5, 2016

I know taking days away from daily work can be expensive in many ways. After attending several professional development seminars in my field over the years, in addition to eight three- and four-day LiveText-sponsored Assessment Conferences, I feel like I have a good idea of the most important things to do to get the most out of my time at a conference.

So if you’re attending or thinking about attending our 2016 Assessment Conference this July 11-13 in Chicago, let me share some tried-and-true tips I have used for maximizing time spent at the Conference.

Separate the knowledge from action. Ever taken notes furiously at a conference because every thought uttered felt like a new insight, or at least something you’d never considered before? Been there! Whether you’re an avid note taker or simply like to jot down big ideas, remember to separate the knowledge from the takeaway.

I recently came across my notes from a few different seminars I attended. Looking back at them, I realized I had never re-read them after the seminars. Quotes, observations, and advice were scribbled amidst many pages of notes. However, I kept the action steps that I had come up with during the seminars in a separate notebook – my “work notebook.” This let me put into practice what I wanted to after those seminars.

I’ve seen some people use a different color pen for their action items. Regardless of how you do it, recognize that any conference will probably overwhelm you with notes. So don’t let information overload keep you from taking action after the Conference. Make sure that for every piece of knowledge you find important enough to write down, you associate an action item with it. Applying the knowledge is key and will make your time at any conference more worthwhile.

Plan your agenda ahead of time. Use the Conference app, available in advance, to map out your schedule. Pick out at least two session options for each breakout. That way, if the first one is full, you already know where you want to head without wasting time, having to consult the registration table, or missing out on any valuable information. Planning your agenda ahead of time and knowing where you want to go ensures that you get into as many sessions as possible that are applicable to your interests.

Reflect for the Takeaway. Every presenter at a conference has his or her own style. Some tell a story; some show a video or a set of images; some give a formal presentation; others hold a casual conversation. Regardless of the presenters’ styles, reflect at the end of each session and distill the presentation into one or two key takeaways for yourself. Ask yourself: What did I learn? What struck me? And do I want to do anything about it when I return to work?

Embrace opportunity and talk to your fellow attendees. Some of the greatest benefits of a conference are often found in the seams of the experience. That chance conversation in the coffee line could make all the difference. A great conference is especially fertile ground for collaboration. I’ve often received the most useful information or even made a contact through casual conversation – especially in those beginning moments before a presentation. Those who know me know that I am not one for small talk. But at events like this, it really is worth it!

Talk to the people in line with you; talk to the people sitting next to you in the sessions; talk to the people in the elevator with the same Conference lanyards. Find out what sessions they are going to and why. We’ve seen how valuable this can actually be, so at LiveText, we build this into our own Assessment Conference, leaving enough time between sessions and strategically scheduling breaks as well as formal social and networking events to provide just such opportunities.

Use social media and conference hashtags to share interesting information, photos, and observations on Twitter. Hashtags can also make it easier to find and share key quotes or ideas from session speakers. For this year’s Conference, you can find us at @LiveTextConf or use the hashtags #LiveTextConf2016 and #LTAC16. Our goal is to retweet and share the Conference experience with you.

And finally, don’t jet out early. I know it’s tempting, and some days can seem long, but the later-in-the-day and final-day sessions are often the most valuable. And when they’re done, presenters usually linger around the session venue, which provides an excellent opportunity for a one-on-one or small-group discussion with the presenter or other attendees.

While these tips are just a taste of what you can do to make the most of your conference experience, I hope you found them helpful – especially if you’re attending our July Conference: www.livetextconference.com.


LTAC Speaker Spotlight: What Students Can Bring to and Learn from Assessment

June 13, 2016

Faculty engagement has long been recognized as an essential element in successful assessment programs.  What has received less attention is the value of involving students.  It’s easy, alas, to think of students as the objects of instruction and assessment. But in truth, they are critical partners in the teaching-learning process, uniquely positioned to bring personal experience and voices to the assessment conversation and the process of improvement. Their involvement is good for assessment and also good for students themselves.

It is worth asking, then, how have students been involved in learning outcomes assessment? And what are the possibilities? The ideas and examples that follow here are adapted from “Faculty and Students: Assessment at the Intersection of Teaching and Learning,” by Timothy Reese Cain and me in NILOA’s 2015 volume, Using Evidence of Student Learning to Improve Higher Education.

Historically, the picture has been mixed. In the early years of assessment, students were seen primarily as the source of assessment data.  With policymakers arguing that it was “time for results” (National Governors’ Association, 1986) and with grades held in low regard, the trend was toward externally devised instruments that yielded scores and numbers. Sampling techniques were the order of the day, and attention to improving the experience of individual students was barely in view as campuses scrambled to respond to state and system mandates.

But there were exceptions. At Alverno College beginning in the 1970s, for instance, assessment meant a requirement that every student demonstrate proficiency on a carefully specified set of outcomes. Assessment, as Alverno leaders often put it, was “for learning.”  In that spirit, all students were (and are today) expected to develop the capacity for self‐assessment — the ability “of a student to observe, analyze, and judge her performance on the basis of criteria and determine how she can improve it” (http://lampout1.alverno.edu/saal/terms.html#sa). In this sense, self-assessment gave students a central role in the process. In fact, one might argue that the assessment students do of themselves is the most important form of assessment, as it fosters a capacity to monitor and direct their own learning in life beyond college.

In the early 1990s, the classroom assessment movement—which offered a set of tools and techniques that faculty could use to explore their students’ learning—opened up additional opportunities for students to reflect on and articulate their experience as learners. The one-minute paper, for example, asks students to identify what they understand or take away from the day’s lecture or discussion and also what is as yet unclear to them—an exercise that promotes metacognition, and one that students get better at over time. The purpose of techniques like these is to generate evidence the instructor can use for immediate improvements, but a corollary benefit is increased thoughtfulness by students about their learning experience and themselves as learners.

Today, the use of portfolios clearly puts the student at the center of assessment. Portfolios provide an occasion for students to pull together their work over time, step back from that work, and reflect on its meaning and trajectory.  The majority of undergraduate students today attend two or more institutions on their way to the baccalaureate degree—often starting, stopping out for a time, and starting again, perhaps on a different campus.  These swirling patterns of enrollment underscore the need for experiences that help students connect the various elements of their learning—across courses and disciplines, between the learning they do in the classroom and their lives outside, and over time. Assessment approaches that address this need—as portfolios do—are understandably gaining ground (Kuh, Jankowski, Ikenberry, & Kinzie, 2014).

Some institutions have recently taken the notion of student involvement to another level, finding additional roles for students in the assessment process and seeing them as “an untapped resource as institutions seek ways to prove their value to both students and society” (Welch, 2013, para.1).

One successful model of this approach can be seen at North Carolina A&T State University (NCA&T) as part of its participation in the Wabash National Study of Liberal Education, a multiyear effort to determine how much students change during their time in college. Looking for ways to translate the study’s assessment results into real improvement, the project’s leaders proposed that students be invited into the process.

At NCA&T, this idea led to the creation, in 2008, of the Wabash‐Provost Scholars Program. Each semester, program leaders train a group of undergraduate students to conduct focus group sessions with their peers, to gather and analyze various kinds of assessment data, to develop written reports and recommendations, and to lead scholarly presentations on their work and experiences. In brief, the Scholars’ task is to “dig deeper” (Baker, 2012, p. 6) into the institution’s assessment results—helping to make evidence more actionable. Their insights are shared with other students, faculty, and administrators at a public presentation.

The University of California (UC)–Merced provides a different example of how students can be involved in assessment.  The SATAL program—Students Assessing Teaching and Learning—involves students in designing, collecting, and analyzing various forms of evidence to help faculty and programs improve their work through formative assessment. This might mean running a focus group with other students and producing a report on the results, interviewing a class and sharing what is learned with the instructor, or administering mid‐ or end‐of‐course evaluations and then tabulating and writing a summary report for the instructor (see http://crte.ucmerced.edu/satal).

In exit surveys taken when they leave the program, SATAL students report high levels of impact on their research skills and their capacities for teamwork and leadership. According to one campus leader, “There’s a real cohesiveness among these students. They spend a lot of time together and gain insights about education and themselves as learners.”

Leaders of student assessment activities at both NCA&T and UC–Merced would probably agree with Josie Welch, director of assessment at Arkansas State University (ASU), who notes that “the key to effectively involving students in outcomes assessment is to intentionally match faculty need with student interest” (2013, para. 1).

Accordingly, ASU students have been involved in a variety of assessment projects. Often they are enrolled in research methods courses, so assessment provides real‐world applications of methods they learn in class. One group of students, drawing on data from NSSE and FSSE, “conducted an experiment that resulted in an evidence‐based report to deans on just how much faculty [could] expect of first‐year students if they ‘saw us as we see ourselves’” (Welch, 2013, para. 3).

In short, Welch says, “When students serve as statisticians, interns, and researchers, this is a 3‐way win for faculty, students, and directors of assessment” (para. 1).

The theme running through all of these examples is that students’ most important involvement in assessment is as learners. Whether they are creating portfolios of their work over their college career, helping to interpret survey data at the campus level, or consulting with a faculty member about his or her classroom, they are bringing their perspective as learners—and also deepening that learning by reflecting on it, talking about it with other students and with faculty, documenting it, and in some cases developing new strategies for studying and understanding it.

At times this can be unsettling, both for students and for educators, as students take on new roles and authority. But as these examples suggest, the process is powerful, giving students a greater sense of agency and an opportunity to contribute to changes that will improve the learning experience both now and in the future.

Written by: Pat Hutchings, Senior Scholar, National Institute for Learning Outcomes Assessment

Dr. Hutchings is a featured presenter at this year’s Conference who will present on the topic of Assessment and Integrative Learning. 

To learn more about this year’s Conference, visit www.livetextconference.com. Hear what your peers are saying about the LiveText Conference… click here.

References

Baker, G.R. (2012).  North Carolina A&T State University: A culture of inquiry. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Kuh, G. D., Jankowski, N., Ikenberry, S., & Kinzie, J. (2014). Knowing what students know and can do: The current state of learning outcomes assessment at U.S. colleges and universities. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

National Governors’ Association. (1986). Time for results: The governors’ 1991 report on education. Washington, DC: Author.

Welch, J. (2013, October 14). Student involvement in assessment: A 3-way win [Blog post]. Retrieved from http://illinois.edu/blog/view/915/98229

******

This essay is adapted from “Faculty and Students: Assessment at the Intersection of Teaching and Learning,” by Timothy Reese Cain and Pat Hutchings in Kuh, G.D., Ikenberry, S.O., Jankowski, N., Cain, T.R., Ewell, P.T., Hutchings, P., & Kinzie, J. (2015). Using Evidence of Student Learning to Improve Higher Education. San Francisco: Jossey-Bass.  It is reproduced with permission of John Wiley & Sons, Inc.

Copyright 2015 by John Wiley & Sons, Inc.  All rights reserved.

 


LTAC Speaker Spotlight: Making Connections to Support Student Learning

June 6, 2016


In a fast-paced, technologically driven world, we desire connections and shared understanding. Yet those of us who work in assessment oftentimes feel disconnected and alone. We strive to be part of teaching and learning; to be embedded in program and department structures; to capture learning in all the places it might happen; and to address the needs of external stakeholders while protecting the interests of those within institutions. All of these are worthwhile tasks, but our colleagues see few of them as meaningful. Sometimes we are considered part of the faculty, other times part of the administration, and at times part of neither. Finding a place to exist within colleges and universities can be just as difficult as helping our institutions uncover connected, student-centered data to help us improve. Our institutions are siloed and at times disjointed. So how can we, as assessment professionals, help make connections to support student learning?

One place we can be useful is in considering the sources of collective evidence we have about our students and where we might find additional points of connection. For instance, NILOA was working with an institution on its assessment processes, and on the way into the assessment committee meeting someone from the career services office stopped me. They wanted to know if they could join the meeting. They had submitted all of their reports and had aligned student learning outcomes with the larger institutional goals, so in the eyes of this campus’s assessment committee they had achieved everything that was needed of them. Yet they had meaningful data they wanted to share.

Students who come to the career services office have the option to participate in mock interviews. It may have been a while since your last mock interview, so as a refresher, many of the questions ask students to provide examples: Can you provide an example of a time when you did X? When have you undertaken Y in the past? In other words, career services staff had evidence of where students said they learned something and were trying to get to the assessment committee table with that information. They had data that could inform curricular decisions and help explore learning experiences through the eyes of students, but they had no mechanism to include it as part of a conversation on student learning.

Of course we brought career services into the assessment committee meeting and spent our time together mapping out various possible sources of evidence and points of connection. But for assessment professionals moving around campuses all across the US, what are the lessons from this story for building connections?

One major takeaway aligns well with the message of the 2015 NILOA book, Using Evidence of Student Learning to Improve Higher Education: when assessment is undertaken for reporting or compliance purposes, it is not focused on students and their learning. If data collected for purposes of assessment are about completing a report, then we limit who is part of possible conversations about the students we have and their needs. In this instance, a focus on reporting left little space for conversations beyond the submitting of reports. Such a compliance-driven approach adds to a feeling that assessment is disconnected from teaching and learning, as well as from improving student learning. As outlined in a recent NILOA policy statement on principles for assessment (2016), if we focus on improvement, accreditation and other reporting needs will take care of themselves.

Another takeaway is that what counts as evidence of student learning widens if we view assessment as being about our students as well as improving their learning. Assessment isn’t something we do to our students; it is something we do with them. If we don’t have student voices at the table in a variety of forms, we limit our ability to communicate curricular coherence as well as to heighten educational experiences for our students. Operating with a limited conception of what counts as evidence blocks points of entry for different conversations on how best to meet our students’ needs. Coupled with limited views of evidence, a lack of focus on students makes our conceptions of possible partners equally limited. Focusing on learning, in all the places it can happen, as our students experience it, builds connections across campus on how students move through and experience the various parts of our institutions.

As professionals who care deeply about student learning, it is part of our job to foster connections, whether in the form of conversations, broader views of evidence, or possible partnerships in support of students. One of our roles is to help people who are caught up in the business of getting assessment off their list of things to do to see the various points of connection that a focus on student learning can bring to our institutions. The majority of the assessment-related criticism I hear on campuses stems from a lack of understanding of the value of and purpose for engaging with assessment in the first place. Faculty will resist if the approach is one of reporting, and the value of critically examining how we foster or hinder student learning will be lost. Assessment professionals are uniquely poised to make meaningful connections to support student learning, in all the forms it takes, in all the places it happens. And it’s past time that we leveraged that position.
By: Dr. Natasha Jankowski, Associate Director, National Institute for Learning Outcomes Assessment (NILOA)

Want to hear more from Natasha? She’s a featured speaker at our 2016 Assessment & Collaboration Conference. Click here to read more about our Conference!

References:

Kuh, G. D., Ikenberry, S. O., Jankowski, N., Cain, T. R., Ewell, P. T., Hutchings, P., & Kinzie, J. (2015). Using evidence of student learning to improve higher education. San Francisco, CA: Jossey-Bass.

National Institute for Learning Outcomes Assessment. (2016, May). Higher education quality: Why documenting learning matters. Urbana, IL: University of Illinois and Indiana University, Author.


LTAC Speaker Spotlight: A Pathway for An Institution-Wide Assessment Program

May 31, 2016


Are We There Yet?

Assessment seems to be everywhere. Over the last ten years, many more associations, conferences, funding agencies, presentations, journals, articles, and books have paid increasing attention to assessment and assessment-related issues. Most of us know that interest in assessment has been around for as long as teachers have been interested in knowing if and how much their students were learning. Every institution with a mission that includes student preparation will be able to provide multiple examples of good assessment, and most of these will emanate from the classroom level.

However, fueled by stakeholders within and outside the institution, assessment has become more formalized. Institution-wide assessment programs and activities are now widespread, as recent surveys of institutions of higher education indicate. Assessment work is prevalent, yet the use of assessment information for program improvement and study of the impact of assessment is very limited (Peterson & Vaughan, 2002).

What is it that renders some institutional assessment programs dynamic, useful and constructive, while others seem to gather data endlessly without use or purpose?

Drawing from my experiences working for well over a decade as an assessment practitioner at an institution that I believe models excellence, I can hazard some pretty good guesses. Hopefully, my model will generate discussion.

Let me begin with two caveats. First, and in all honesty, my presence (or someone like me) as an assessment practitioner is not the missing ingredient; I have met many very talented and competent assessment professionals who do not enjoy the same fruits from very similar labors. In fact, I have sent out newly minted and highly talented Assessment and Measurement PhDs to institutions that express a true desire to engage in assessment, only to watch them encounter many of the obstacles and barriers I will try to describe in this essay. Second, my institution-wide model is not presented to suggest that classroom or program level assessment endeavors cannot be robust and have impact. As indicated above, many are, but their influence does not extend beyond the source. Life-giving creative energy and institutional knowledge are both lost. This essay provides a model for institution-wide assessment programs that are sustainable and have documented success.

My ideal model for the development of institution-wide assessment would look something like Figure 1, where six sequential components are listed. Each component leads logically to the next, and, as with many developmental models, no component can be skipped or assumed. Each component is described below. Table 1 then provides sample negative and positive diagnostic indicators to foster additional conversations. I’m sure many participants will be able to contribute additional indicators.

Figure 1. Model for institution-wide assessment development.

Vision > High Standards > Commitment > Resources > Structure > Integration

                                                                                   

Table 1. Sample negative and positive diagnostic indicators for each model component.

Vision

  Negative indicators:
  • Assessment ‘gears up’ for accreditation visit
  • One individual responsible for all assessment
  • Talk – no action
  • Compliance mode

  Positive indicators:
  • Assessment tied to institutional mission
  • All divisions & departments engaged
  • Leaders share vision & findings

High Standards

  Negative indicators:
  • Flawed sampling plan
  • Poor instrumentation
  • Only surveys used
  • No data analysis or interpretation

  Positive indicators:
  • Quality data collected & maintained
  • Measure what matters
  • Faculty across campus contribute & interpret

Commitment

  Negative indicators:
  • Program dies when key individual departs
  • Budget cuts hit
  • Administrators ‘warm the bench,’ delegate

  Positive indicators:
  • Sustained communication of vision
  • Data collection is systematic
  • Methods improve
  • Time to evolve, ‘get it right’

Resources

  Negative indicators:
  • No one has time
  • Costs too much
  • No institutional rewards or recognition for participants
  • People involved ‘burn out’
  • Findings not used for improvement

  Positive indicators:
  • Time & money invested
  • Personnel lauded for contributions
  • Budget allocation & reallocation informed by quality assessment data
  • Programs use data

Structure

  Negative indicators:
  • No reporting process developed
  • No one knows what happened anywhere else on campus
  • Budget decisions made ‘across the board’
  • Starts and stops observed across individual assessment programs

  Positive indicators:
  • Programs report problem areas & improvements
  • Departmental & school working committees report up, down & across
  • Institution-wide working committees that work
  • Assessment data required for institutional reporting & program review

Integration

  Negative indicators:
  • Negative assessment findings concealed in fear
  • Campus divisions remain demarcated, ‘divided’
  • No sense of institutional identity or mission
  • Task forces work hard with no outcome impact

  Positive indicators:
  • Data & processes can be aggregated to tell the institutional story
  • Use of data expected
  • Institution can provide external stakeholders with clear evidence of caring & quality
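
Because the model is strictly sequential, one way to read Table 1 is as a gating checklist: find the first component whose positive indicators are absent and work there before moving on. A minimal sketch of that logic, with hypothetical status flags, might look like this:

```python
# The six components in Figure 1's order; each presupposes the ones before it.
COMPONENTS = ["Vision", "High Standards", "Commitment", "Resources", "Structure", "Integration"]

def next_gap(status: dict[str, bool]) -> str | None:
    """Return the first unmet component, since later ones can't be skipped to."""
    for component in COMPONENTS:
        if not status.get(component, False):
            return component
    return None  # all six in place: an integrated program is highly likely

# A campus with vision and standards but wavering commitment should shore
# that up before investing in committee structures.
print(next_gap({"Vision": True, "High Standards": True, "Commitment": False}))  # Commitment
```
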
Vision

The model begins with a vision of what the institution wants assessment to achieve and how it can serve the institution in fulfilling its mission. Note this is an institutional vision, not a classroom or program level vision. For an assessment program to truly have impact on the quality of education, an institutional perspective is prerequisite. This vision must be shared both across and within each division of the institution.

The full support of Administration, Finance, and University Advancement divisions is necessary to use the findings that good assessment can bear. Many institutions perceive only the division of Academic Affairs as a key player in assessment, with a few others including Student Affairs as a partial or second-tier participant.  The active participation of all components of an institution is required to achieve the shared vision of a dynamic and influential assessment program.

A shared notion of what the institution is and how all components fit together builds community. Indeed, accrediting bodies now require involvement by all university divisions as demonstration of institutional effectiveness. I can hear an early death knell tolling when I see a single individual hired to ‘take care of assessment’ for a complex institution.

High Standards

As with all quality endeavors, high standards for both personnel and practice are expected. In order to earn credibility for assessment activities, data, and the results that will be forthcoming, a scientific orientation that can withstand careful scrutiny by skeptics both within and outside the institution is required. We need to measure what matters, not what is easy to count.

Academe is populated by intellectually demanding individuals; they will require solid assessment data collection designs, reliable and valid instrumentation, and sound data analysis. The individuals engaged in these activities will need to provide such evidence, and no institution could responsibly use information from a set of procedures that does not fulfill these expectations (Sundre, 1994).

Fortunately, all educational institutions have many, many talented critical thinkers from a variety of academic disciplines to draw from. No single division or department has a monopoly on clear thinking or high standards. There is no excuse for not demanding and attaining this component over time.

Commitment

If we have a shared vision and have established high standards for practice, an unswerving commitment must be made that will withstand the ebb and flow of economic tides, as well as changes in leadership at any institutional level. It’s relatively easy to make a real commitment to quality assessment when it aids achievement of institutional mission and is conducted in a manner that welcomes scrutiny and engagement.

Moreover, making such a commitment clearly communicates, both within the institution and outside, that we assume responsibility for stewardship of the institution toward public goals. Unfortunately, ‘fear of commitment’ is not experienced only by those seeking meaningful romantic relationships; it is all too common in other contexts. Assessment practice is but one of those contexts. All too often, this ‘fear of commitment’ is a legitimate response to a lack of vision and quality in the assessment plan and process. These are the assessment programs that we hear faculty lament as ‘a waste of time and energy.’

Resources

Many institutions point to a lack of fiscal resources (economic downturn, budget cuts or reallocations) as a primary reason they have not developed a strong assessment program. This is a flawed argument, because the most important assessment resources are not monetary.

Vision, high standards, and commitment cost nothing, but they mean everything in the development of a quality institution of higher education. A quality assessment program can and should be a natural byproduct of these components. Time is a limited resource, and it can only be expended once. Misspending any dear resource such as time represents an opportunity cost.

If we are to spend a precious resource, we must assure that it is directly linked to the achievement of the institution’s mission and most important objectives. What could possibly be more important than ensuring that student growth and development are monitored with the intention of continuous improvement? Expending these resources is an investment worth making; it will reap rich rewards across campus domains and over time.

Structure

The development of an institutional structure is critical to being in a position to use assessment information in a timely fashion. Institutional committees at several levels are important means by which faculty and administrators can keep apprised of assessment findings and how those findings can inform program, curriculum, and instructional delivery decisions.

While this sounds labor intensive, it is not. For some institutions, this would mean setting committee priorities and working smarter.

Here are a few examples: 1) eliminating or restructuring committees to pursue more meaningful missions; 2) conducting selected committee business via email, reserving meeting time for the most important issues; or 3) breaking into subcommittees to work independently on tasks, then reporting back to the full committee.

Many other examples can be provided; the point is that careful structure creates time and maximizes its use for what is most important.

My experience tells me that faculty and administrators truly enjoy interdisciplinary opportunities to talk about what they care most about: student growth and development. These discussions are intellectually stimulating and professionally developing, but only when the first four components of the model are evident.

Further, program assessment needs a common structure for reporting, one that will eliminate guesswork about what is wanted and expected as well as foster aggregation of information for broader knowledge and data use.

While several excellent examples may be available elsewhere, a good example of solid reporting structure is the Academic Program Review Guidelines from James Madison University, available for review at https://www.jmu.edu/academic-affairs/apr/index.shtml. This document makes clear what assessment information is expected and how it relates to other institutional data that can and should be used when evaluating programs.

Integration

If the above five components are in place, achieving an integrated assessment program is highly likely. The successes of one area will be used to promote positive change in others. A sense of community begins to develop about the identity and unique nature of the institution. This information helps to credibly promote to many external stakeholders the vitality and professionalism of individual programs as well as the institution as a whole.

Assessment helps to build a ‘culture of evidence’ that serves to inform and strengthen many decisions and commitment to them.

The benefits of strong data collection designs and the quality of the data obtained far outweigh the costs. Remember that these ‘costs’ were once considered insurmountable. For institutions that have made careful investments over time, the benefits are multifaceted and worthwhile.

Conclusion

It can be intimidating to begin this process, but there are many successful and very diverse institutions that have provided multiple pathways toward achievement (see Banta, 2002, for examples). We have all learned from the experiences of others. I encourage you to continue your quest. If we support one another, we will make progress on the pathway. We will also be able to provide a meaningful answer to the question, “Are we there yet?”

References

Banta, T. W., & Associates. (2002). Characteristics of effective outcomes assessment: Foundations and examples. In T. W. Banta & Associates (Eds.), Building a scholarship of assessment (pp. 261-283). San Francisco, CA: Jossey-Bass.

Peterson, M. W., & Vaughan, D. S. (2002). Promoting academic improvement: Organizational and administrative dynamics that support student assessment. In T. W. Banta & Associates (Eds.), Building a scholarship of assessment (pp. 26-46). San Francisco, CA: Jossey-Bass.

Sundre, D. L. (1994). The practice of student and program assessment: Evolution through engagement. Assessment Update, 6(1), 4-5.

Written by: Donna L. Sundre

Emeritus Professor of Graduate Psychology & Emeritus Executive Director
Center for Assessment and Research Studies, James Madison University

Dr. Sundre is a featured presenter at this year’s Conference.

To learn more about this year’s Conference, visit www.livetextconference.com. Hear what your peers are saying about the LiveText Conference… click here.


LTAC Speaker Spotlight: Three Helpful Resources on Building a Culture of Assessment

May 24, 2016


Many years ago, a trusted colleague at a public university told me a story about her plan to set up a series of facilitated assessment workshops for her faculty to calibrate rubrics. She placed a request for the resources through which participating faculty would be compensated for their time. In denying the request, her Department Chair laughed dismissively and suggested that she instead hold a bake sale to raise money for the events.

When we speak of building a culture of assessment, we should ask ourselves important questions such as, “To what extent do we embrace student learning and make it a priority to gather evidence?” (Hersh & Keeling, 2013). Are our conversations about teaching inclusive and empowering, or instead isolating? Are we integrating meaningful experiences that take place outside of the classroom? Are we cultivating collaborative relationships across departments and maintaining them? And essentially, are we playing nice with one another? The literature is full of wonderful collective wisdom that tells us the essential ingredients for success.



LTAC Speaker Spotlight: Building a Culture of Assessment – An Unlikely Partnership between Mahatma Gandhi and CAEP

May 16, 2016


Mahatma Gandhi once described culture as something that “resides in the hearts and in the soul” of a group. Gandhi’s implication was this: Culture is an integral force within a group, a subconscious pulse of its existence. Although Gandhi was referring to the culture of a nation, we can draw parallels to the culture of assessment within higher education.



Speaker Spotlight: A Personal Perspective on LiveText’s Assessment Conference

May 2, 2016


In the summer of 2002, LiveText hosted its first annual assessment conference. At that time, my institution was considering adopting LiveText so I attended the conference with a primary goal of getting some early clients’ input regarding both the technology and the company itself. I recall a total conference attendance of about 35 people and perhaps 7 or 8 LiveText employees, and I left with a clear picture that the company’s clients had very high regard for both the technology and the quality of customer service.



Successful edTPA Implementation with Technology

April 25, 2016

Join our complimentary webinar!

Successful edTPA Implementation with Technology

Tuesday, June 7 from 12:00pm to 1:00pm CT

Implementing the edTPA can be a challenging process for a teacher education program. Knowing what resources are available, embedding edTPA into a curriculum, engaging faculty, providing necessary supports for candidates, making decisions about scoring options, and establishing retake procedures are all important considerations. LiveText, as an authorized edTPA Platform Provider, can help in many of these tasks. This webinar will present strategies for successful edTPA implementation, using LiveText as a Platform Provider.

To read more and register for any of our webinars: click here.

There are no registration fees associated with our webinars.


LiveText Conference Registration Open! It’s a Who’s Who of Assessment!

April 19, 2016

I wish to extend a personal invitation to you for LiveText’s 15th Annual Assessment & Collaboration Conference, taking place at the Renaissance Downtown Hotel in Chicago, IL on July 11, 12 & 13. Attendees of our Conference leave with the kind of insights into assessment best practices that can be put to use immediately. This year, we are featuring sessions led by:

Read More
