Interpreting and Using
Student Ratings of Teaching Effectiveness
Data collected from student ratings of teaching effectiveness may come from forced-choice questions, open-ended
inquiries, or a combination of these. Careful interpretation of the data in combination with reflective practices will
provide you with valuable information about your teaching effectiveness.
Common Statistical Terms – What they mean and how to use them
The following are common statistical terms often found on data analysis reports.
N
The letter “N” represents the sample size (number of students who responded overall or to a particular item).
It is important to compare the number of student responses (N) to the total class enrollment. Additionally,
check the N for each question since students may not answer every item. Results should be interpreted with
caution if less than 75% of the enrolled students completed the survey.
Reliability (i.e., consistency) of the data is compromised with a sample size of fewer than ten students.
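If your results are available electronically, the two guidelines above can be applied directly to the response counts. The sketch below is only an illustration, assuming a hypothetical enrollment and hypothetical per-item counts; the 75% and ten-student thresholds come from this handout.

    # A minimal sketch of the response-rate check described above.
    # Enrollment and per-item counts are hypothetical.
    enrolled = 32
    responses_per_item = {"Q1": 29, "Q2": 27, "Q3": 8}

    for item, n in responses_per_item.items():
        rate = n / enrolled
        cautions = []
        if rate < 0.75:
            cautions.append("under 75% response rate - interpret with caution")
        if n < 10:
            cautions.append("fewer than 10 respondents - reliability compromised")
        status = "; ".join(cautions) if cautions else "adequate response"
        print(f"{item}: N = {n} of {enrolled} ({rate:.0%}) - {status}")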
Mean
The mean score represents the numerical average for a set of responses. The following points assume a
scale in which a low score is assigned to negative responses (i.e., poor) and a high score to positive responses
(i.e., excellent).
Generally, the higher the mean score, the better the evaluation.
It may be helpful to rank order the mean scores for a set of evaluations to identify those that are
higher or lower than the majority of your scores. For example, on a 5-point scale, if most of your mean
scores tend to cluster around 3.8, then those falling below that value (approximately 3.4 or lower) may
signal areas of teaching that need attention.
On a 5-point scale, items with mean scores above 4.0 generally reflect teaching aspects that are particularly
effective.
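Rank ordering can be done by hand or with a few lines of code. The sketch below uses hypothetical item labels and means on a 5-point scale; the 4.0 and 3.4 reference points are taken from the illustrative guidelines above, not from any standard.

    # A minimal sketch of rank-ordering item means on a 5-point scale.
    # Item labels and values are hypothetical; the 4.0 and 3.4 cutoffs
    # reflect the illustrative guidelines above.
    item_means = {
        "Course organization": 4.2,
        "Clarity of explanations": 3.9,
        "Availability outside class": 3.7,
        "Usefulness of feedback": 3.3,
    }

    for item, mean in sorted(item_means.items(), key=lambda kv: kv[1], reverse=True):
        if mean > 4.0:
            note = "particularly effective"
        elif mean <= 3.4:
            note = "may need attention"
        else:
            note = ""
        print(f"{mean:.1f}  {item}  {note}".rstrip())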
Standard Deviation
The standard deviation describes the spread of the responses around the mean. It indicates the degree of
consistency among student responses.
The standard deviation is often abbreviated in data tables as s, sd, SD, std, or StD.
The standard deviation in conjunction with the mean provides a better understanding of your data. Begin
by adding the standard deviation to the mean. Next subtract the standard deviation from the mean. The
range between the two calculated values represents where approximately 2/3 of your students’ responses
fall. For example, if the mean score is 3.3 with a std of 0.4, then 2/3 of the students’ responses lie between
2.9 (3.3 - 0.4) and 3.7 (3.3 + 0.4).
The standard deviation represents the degree of similarity among the students’ responses. A small standard
deviation (as in the example above) reflects a high degree of consensus among the students. Since there is a
small numerical range (2.9 - 3.7) in which 2/3 of the ratings fall, the response pattern among the students is
very consistent.
A large standard deviation indicates that there was considerable disagreement among the students’
responses. For example, if the mean score is 3.3 with a std of 1.0, then 2/3 of the students’ responses lie
between 2.3 and 4.3. This indicates a wide disparity among the responses to this item, with the mean
simply representing a numerical average of the responses and not a consensus rating by the class. In such a
situation, carefully examine the survey question. Might a particular activity or teaching approach not be
equally effective for all students? Perhaps the class is not in agreement on course organization or grading.
Why might this be? How can you make adjustments to your teaching that would more readily meet the
needs of all students?
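The mean-plus-and-minus-one-standard-deviation range described above is easy to compute for any item. The sketch below uses hypothetical ratings; the rough “two-thirds” interpretation assumes the responses are approximately bell-shaped, and the 0.7 cutoff used to label consensus is an arbitrary illustrative choice.

    # A minimal sketch of combining the mean and standard deviation.
    # Ratings are hypothetical responses to one item on a 5-point scale.
    import statistics

    ratings = [3, 3, 3, 4, 3, 3, 4, 3, 4, 3]

    mean = statistics.mean(ratings)
    sd = statistics.stdev(ratings)          # sample standard deviation
    low, high = mean - sd, mean + sd

    print(f"mean = {mean:.1f}, sd = {sd:.1f}")
    print(f"roughly 2/3 of responses fall between {low:.1f} and {high:.1f}")
    # 0.7 is an arbitrary illustrative cutoff between consensus and disagreement.
    print("high consensus" if sd < 0.7 else "considerable disagreement")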
Students’ Written Comments – What they mean and how to use them
Including open-ended questions on student rating forms can help to clarify statistical data and provide valuable
suggestions for course improvement. By asking students to write comments and allowing sufficient time for
completion of the evaluation, you can increase the number and quality of written comments.
Making Sense of the Comments
It is often difficult to interpret written comments because they lack the built-in structure found in the
forced-choice component of the evaluation. Initially, the comments appear to be a collection of random,
unconnected statements. Bringing a sense of organization to the written comments can provide important insights
into your teaching.
Strategies for Organizing Student Comments
Read, organize, and compare the comments from each question separately. That is, if you have more than
one open-ended question on your evaluation, begin by reading the responses to just the first question on all
the evaluations. Then go back and read the responses to the second question, then the third question, etc.
This approach permits you to focus on one topic at a time.
Group the comments by teaching components (e.g., course organization, communication, faculty/student
interaction, feedback) or by categories that are most meaningful to you. The grouping of comments by
similar topics allows you to focus on comments related to one teaching component at a time.
Depending on how your data were analyzed and the type of student information collected, you may be able
to correlate the students’ comments with items such as their major, year of study, expected grade, and
overall instructor and/or course rating. This type of analysis may provide insight into how various
subpopulations of students view the course.
Once the students’ comments have been organized and analyzed, compare them with your quantitative
feedback. Look to the written comments for explanations of the ratings on the forced-choice responses. For
example, students may have given lower overall ratings to the questions addressing “faculty/student
interaction.” Written comments may reveal consistent student concerns about your helpfulness and
availability outside of class. By exploring this link between the responses to open-ended and forced-choice
questions, you may be able to identify specific changes that could enhance your teaching effectiveness.
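If the written comments are available electronically, the grouping and the comparison with forced-choice ratings can be sketched in a few lines. Everything in the sketch below (the component keywords, the comments, and the means) is hypothetical and would need to be adapted to your own form.

    # A minimal sketch of grouping comments by teaching component with
    # simple keyword matching, then listing them beside the related
    # forced-choice mean. Keywords, comments, and means are hypothetical.
    component_keywords = {
        "course organization": ["organized", "syllabus", "schedule"],
        "faculty/student interaction": ["office hours", "available", "helpful"],
        "feedback": ["feedback", "graded", "comments on"],
    }

    comments = [
        "The instructor was not available outside of class.",
        "The syllabus kept us on schedule all semester.",
        "I never received feedback on my drafts.",
        "Office hours were often cancelled.",
    ]

    forced_choice_means = {
        "course organization": 4.3,
        "faculty/student interaction": 3.1,
        "feedback": 3.4,
    }

    grouped = {component: [] for component in component_keywords}
    for comment in comments:
        lowered = comment.lower()
        for component, keywords in component_keywords.items():
            if any(keyword in lowered for keyword in keywords):
                grouped[component].append(comment)

    for component, matched in grouped.items():
        print(f"{component} (mean {forced_choice_means[component]}): {len(matched)} comment(s)")
        for comment in matched:
            print(f"  - {comment}")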
Positive and Negative Comments
Research indicates that written comments generally come from students who are either very satisfied or very
dissatisfied.
For interpretive purposes, it is helpful to determine the proportion of negative to positive comments. This
will help you decide whether the comments are representative of the entire class or of only a small minority
of students.
Comments that reflect positively on your teaching effectiveness can usually be considered genuine. Since
the course evaluation is anonymous, students do not usually write positive comments unless they mean
them.
Since students’ anonymity is protected on student ratings, students may write negative comments that
range from sarcastic to vicious. Obviously, not all of these comments are constructive. Pressures unrelated
to you or your course may also underlie some of these comments. Keeping this in mind may help to limit
overreaction to certain comments.
Negative comments need to be interpreted with caution. It is only human to take negative comments to
heart, regardless of how few there are.
Reference any questions from the forced-choice portion of the evaluation that relate to the issues raised in
the negative comments. If these questions received lower ratings, the negative comments may reflect a
concern with that particular dimension of your teaching.
Inconsistent or Contradictory Comments
When students’ comments primarily reflect satisfaction (or dissatisfaction) with various aspects of your teaching
effectiveness, you can be fairly confident in your response. Occasionally, student comments can be contradictory.
For example, one subset of students may indicate they found the interactive group activities enhanced their
learning. Another group of students may have found the same activities a waste of their time. These
inconsistencies are often due to variations in student development and/or preferred student learning style.
Large introductory level classes, which often attract students with a wide range of developmental levels, may be
especially prone to such inconsistencies. There may be students who are not yet developmentally capable of
accepting the challenges of your course. They may not be comfortable thinking independently, accepting a high
degree of individual responsibility, or reasoning at higher cognitive levels. If this is the case, develop strategies to
help these students succeed in your course.
Another explanation for contradictory comments may be that students are not familiar or comfortable with a
particular teaching method or assignment model. Students who prefer a teacher-centered, theory-based
learning environment may be uncomfortable with an inquiry-based approach that requires group discussions and
student interaction. If learning style preferences appear to be the source of the discrepancy, help students to
understand the advantages of a particular teaching method and to expand their learning style repertoire.
Additionally, varying the teaching style that you use will accommodate the range of learning styles represented
among the students in your class.
Overall Reflection
As one of your final exercises in analyzing your student rating data, take time to step back and reflect on the
data as a whole. By addressing the following questions, or others you feel are appropriate, you can begin to
view the data from a perspective that yields insights to improve your teaching effectiveness.
Is the overall evaluation of the course relatively consistent, or are there clear differences in how students
are experiencing the course? What may account for these differences?
What overall responses stand out? Why? How representative of the total response group are they?
How do the students’ responses compare with the other types of feedback you already have (e.g., nonverbal
cues, student questions or concerns raised either in class or during office hours, attendance patterns)?
What are you doing in the classroom that would result in the type of feedback you received? For example,
if students feel that their contributions in class discussions are valued, consider what you are doing to
encourage them. Conversely, if students express a problem with knowing when assignments and exams are
due, reflect on how you communicate these expectations to your students.
What aspects of your teaching appear to be effective?
What areas of your teaching could be improved?
What new understandings/insights have you gained about your teaching?
Overall, does the pattern of responses tell a story? That is, are there underlying factors affecting scores in a
number of areas, or are strengths and weaknesses confined to specific aspects of your teaching?
Does your assessment of your teaching match that of your students? If not, why not?
Which aspects of your teaching do you want to work on improving? How will you go about this? What
resources are available to you?