Interpreting Student Evaluations for Assessing Teaching Effectiveness

Presenter Information

Meca Williams-Johnson

Location

Ballroom

Session Format

Presentation

Abstract

This presentation examines how student evaluations are interpreted in higher education. Student evaluations of teaching (SETs) have become a ubiquitous tool for assessing teaching effectiveness, faculty development, and program improvement. However, interpreting student evaluations remains a complex and often contentious task, as it involves decoding the multifaceted voices of students and separating valid feedback from potential biases and misinterpretations.

For this study, I use a mixed-methods approach, combining quantitative data analysis and qualitative content analysis, to explore the nuances of student evaluations. I examine the many factors that can influence student feedback, including instructor characteristics, course content, and the student's own background and motivations. By analyzing a multiyear dataset of student evaluations, this research offers a deeper understanding of the intricacies of student feedback.

The presentation also describes the unintended consequences of relying solely on numerical ratings for instructor evaluation, such as grade inflation and concerns about instructor bias. It addresses the importance of a balanced approach that combines quantitative and qualitative insights to provide a more accurate and meaningful assessment of teaching effectiveness.

The research suggests that promoting a culture of open communication and constructive feedback between instructors and students can lead to improved teaching practices and student learning outcomes.

Keywords

Student Evaluations, Teaching Effectiveness

Professional Bio

Meca Williams-Johnson is a Professor of Educational Research at Georgia Southern University. Her research interests are Emotions in Teaching and Learning, Self-Efficacy Beliefs, Critical Race Theory, Black Feminist Thought, Parental Involvement, Homeschooling, and Undergraduate Research.

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.

Session Time

Feb 2nd, 10:30 AM to 12:00 PM
