Course Evaluations

Statistical Survey Reports

There are two standard reports that the ESCI Office produces regularly: end-of-quarter reports and instructor/TA summary reports. We can also produce other standard reports or customized secondary analyses upon request. The reports provide faculty with sets of student-generated data about the students’ experience in a course, some of which is official documentation required by the university for faculty reviews, teaching awards, or job applications. The information below describes the statistics, vocabulary, and format of the reports as well as guidelines for interpreting and analyzing ESCI. Consider using some of the teaching evaluation methods elsewhere on our website to gather other feedback and data about your courses and teaching.


When requesting a report, please include:
Name, Employee ID, type of report, and details of any special requests
Request a Summary or Custom Report

The end-of-quarter ESCI report is designed to provide anonymized, clear, and concise summaries of student ratings data for an individual course in a given quarter, including all scalar and text responses (dating back to when the department began using ESCI Online). The report also provides comparative information between the surveyed course and the academic department in the current quarter, the academic department over time (the most recent five years), and the campus over time. The reports are produced as PDFs within two weeks after grades are submitted and made available to departmental ESCI contacts, who then distribute them to instructors and TAs.

Sample End-of-Quarter Report

The five- and ten-year summary reports are for individual instructors and TAs, and are required for faculty cases, some teaching award nominations, job applications, and the like. They provide summarized responses for each question asked on an instructor’s or TA’s ESCI questionnaires over the previous five or ten years, which facilitates comparisons across iterations of a course and across all courses taught by that instructor.

Sample Summary Report
Explanation of Instructor Reports
Explanation of TA Reports

Departments can request a custom report and/or request secondary analysis to review curricula and courses for program review. Secondary analysis can help departments answer questions about their curricula, such as how student evaluations of specific subsets of courses have changed over time (e.g., large intro courses), or how required courses are doing compared to non-major courses. We work closely with departments to ensure instructor confidentiality in all secondary analysis reports. In some circumstances, the ESCI Office can supply datasets with raw scores, aggregated as necessary to fit the confidentiality guidelines, rather than reports, to enable departments to carry out their own analyses.

Interpreting and Analyzing your Reports

ESCI reports are intended to summarize students’ experience of teaching and can help you identify strengths of the course, potential areas for improvement, and trends in students’ feedback over time. Be aware that no single metric of the course is likely to be unbiased, and end-of-quarter course evaluations are no exception. Therefore, instructors should use multiple metrics to evaluate their teaching in a quarter and over time. Use the suggestions and resources below as guidelines to help you analyze and interpret your ESCI data, and/or reach out to us to talk through the data together.

Handout for Analyzing ESCI Data

Analyzing and Interpreting ESCI Reports

  1. Begin by looking at the numerical results of the closed-ended (multiple choice) questions. Ignore the median and the mean at first, and concentrate instead on the percentage distribution of responses for each question. Looking at the entire distribution lets you see how the responses cluster or are spread across the response categories. 

  2. Next, look at the distribution of responses for survey questions that you care the most about and/or that align with the teaching values and goals in your teaching statement or job/case materials. Check to see how your response distribution compares to the norms for your department and the campus, or how they change over time in the summary reports. If your distributions are more than 20% different from the norms, then it’s worth investigating further to determine what might be unique to your class or teaching style. 

  3. Also scan the distributions and means/medians for all the questions for anything that seems inordinately higher or lower than you were expecting, or that jumps noticeably up or down between quarters on the summary reports. Consider graphing the results of your summary reports to highlight changes or show consistency.

As you read through student comments, tally up all the superficial positive and negative comments. Also try to categorize students’ comments by finding a word or phrase to capture the main topic or suggestions. Tally up the number of times each of those categories is mentioned in a comment, then list them in order from least to greatest to get an overall picture. Pay attention to important outliers, like really insightful suggestions or unique circumstances. 
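The tallying step above can be sketched with Python’s `collections.Counter`. The topic labels below are hypothetical examples for illustration only; in practice the categories come from your own reading of the comments.

```python
from collections import Counter

# Hypothetical hand-coded topic labels, one per student comment.
# (These labels are illustrative assumptions, not real ESCI data.)
comment_topics = [
    "clear lectures", "more examples", "clear lectures", "exam pacing",
    "more examples", "clear lectures", "office hours helpful",
]

tally = Counter(comment_topics)

# List categories in order from least to most frequently mentioned,
# as suggested above, to get an overall picture.
for topic, count in sorted(tally.items(), key=lambda kv: kv[1]):
    print(f"{topic}: {count}")
```

The same tally can then be set side by side with your scalar-response analysis in the comparison step below.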

Compare your tallied student comments with the analysis of scalar responses to see if they complement each other or illustrate specific themes, values, or skills that you can expand on in your application or merit case materials with other forms of teaching evidence.

Questions A and B are overall “summary” questions and appear on all instructor surveys at UCSB, but not on TA surveys. The results of A and B are reported to the Committee on Academic Personnel for faculty merit cases along with student comments. There are two ways to look at the data from A and B: 

  1. How well students think you are doing in “absolute” terms, as defined by the “anchor points” of the scale: “Excellent”, “Very Good”, “Good”, “Fair”, “Poor”. 

  2. How well students in your course think you are doing relative to what students say about other courses in the department and campuswide. (See information on Norms below.)

In either case, results of Questions A and B should not be overinterpreted; corroborate (or question) them with other evidence about the course before reaching important conclusions.

What are ESCI Norms and Norm Pools?

A useful feature of ESCI is its norming capability, which allows you to see how your students’ feedback compares to students’ feedback in other courses that ask the same question. Each question has calculated “departmental norms” (based on students’ feedback on that question for all courses in the department) and “campus norms” (which summarizes results for all courses on campus that ask that particular question). Norms are calculated both for the current quarter and over the most recent five-year period. They are calculated separately for faculty, graduate student teaching associates, and teaching assistants so that instructors in each rank are only compared to others in the same rank.


Which Norms should I use to analyze my report?

  • Department norms compare your students’ responses to all other courses in your department in the current quarter and over the past 5 years.

  • Student-weighted norms are calculated with each student’s response contributing equally to the norms. That is, every student’s response to that question is put into the norm pool regardless of the number of courses that had that question on their survey.

  • Course-weighted norms are calculated with each course contributing equally to the norms, regardless of the number of students in it. That is, only the average score of all student responses in each course is put into the norm pool, regardless of class size.

When data are aggregated for norming (either for the current quarter or over time), the results for large courses will “swamp” the results for a smaller course in “student-weighted norms”. That is, the larger courses will tend to set the norms, because of their larger enrollments. 

For small courses, use the course-weighted norms, since they are calculated with each course contributing equally to the norms (rather than with every student from every class, regardless of size). Importantly, each student’s response carries significant weight in a small course: if 20 students respond, each response is worth 5% of the course average; if 10 respond, each is worth 10%. High response rates are therefore particularly important in small courses. In addition, in very small courses (fewer than 10 students) there is a risk that students’ responses will be recognizable by the instructor or TA rather than remaining fully anonymous, and a different type of evaluation may be more appropriate.
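The difference between the two weightings, and the weight each response carries in a small course, can be sketched with made-up data (course names, sizes, and scores below are illustrative assumptions, not real norms):

```python
# Hypothetical response data: scores on a 1-5 scale for three
# courses of very different sizes.
courses = {
    "large intro (300 students)":  [4] * 200 + [3] * 100,
    "mid-size (30 students)":      [5] * 20 + [4] * 10,
    "small seminar (10 students)": [5] * 9 + [4] * 1,
}

# Student-weighted norm: every individual response enters the pool,
# so the large course tends to "swamp" the result.
all_responses = [r for rs in courses.values() for r in rs]
student_weighted = sum(all_responses) / len(all_responses)

# Course-weighted norm: each course contributes only its own mean,
# so every course counts equally regardless of enrollment.
course_means = [sum(rs) / len(rs) for rs in courses.values()]
course_weighted = sum(course_means) / len(course_means)

# In a small course each response carries a large share of the mean:
# with 10 respondents, one response is worth 10% of the course score.
weight_per_response = 100 / len(courses["small seminar (10 students)"])

print(f"student-weighted: {student_weighted:.2f}")
print(f"course-weighted:  {course_weighted:.2f}")
print(f"each small-seminar response: {weight_per_response:.0f}%")
```

With these made-up numbers the student-weighted norm sits close to the large course’s average, while the course-weighted norm reflects all three courses equally, which is why course-weighted norms are the fairer comparison for small courses.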


Hours: Mon - Fri; 8:00am-12:00pm & 1:00pm-5:00pm

Instructional Consultation
1130 Kerr Hall

ESCI Office
1124 Kerr Hall

TA Development Program
1130 Kerr Hall

The OIC Team
George Michaels
Lisa Berry
Mindy Colin
Olga Faccani
Mark Rosenberg
Inna Slyutova