
Nearly everyone who has taught a college course is familiar with the significance of student evaluations of teaching. They are an important component of job applications, promotion and tenure portfolios, and the self-efficacy of many professors. While these evaluations receive a great deal of attention and are certainly a key measure of teaching effectiveness, a second (and often overlooked) form of evaluation is crucial for success in the classroom: self-evaluation. By taking time to reflect on how our own goals are being achieved throughout the semester, we can identify strengths and areas for improvement early on, find ways to be creative in the classroom, and ultimately provide the best possible learning experience for our students. There are tangible benefits for the teacher, too. Self-evaluations may help us detect ways to save time and energy in our teaching, form the basis of a teaching philosophy, and even raise the scores of student evaluations at the end of the semester. Here are a few tips for evaluating your own teaching:

  1. Return to the learning goals you established at the beginning of the semester. What progress have your students made in achieving these goals? How have they demonstrated this progress during class or through assigned work? In your upcoming classes, determine at least one thing you can do to help your students meet the goals you have for them. (For more on how to use learning goals, see this past blog post.)
  2. Reflect on your teaching while it is fresh in your mind. After each class, ask yourself: what is one component of the class that worked really well? What is one aspect that did not work as well as you had hoped? Be sure to write down these reflections for the next time you teach the class.
  3. Evaluate your teaching methods by reviewing current pedagogical research. Consider trying a new method in an upcoming class and comparing it to how you previously taught that particular subject. Even well-designed courses can benefit from updating and finessing.
  4. Design and administer mid-semester student evaluations. Fill the evaluation out for yourself and write brief comments on why you answered the way you did.
  5. Contact the Kaneb Center to arrange an individual consultation or a collaborative teaching reflection. These are excellent opportunities to enhance your teaching by assessing your strengths and areas for improvement.

Self-evaluations, if utilized throughout the semester, allow us to make adjustments to better serve our students and to make our teaching more effective and enjoyable. For additional resources on self-evaluations of teaching, check out:

Becoming a Critically Reflective Teacher by Stephen Brookfield
Am I Teaching Well? Self-Evaluation Strategies for Effective Teachers by Vesna Nikolic and Hanna Cabaj
“Evaluating Your Own Teaching” by L. Dee Fink

Happy Spring Break!  Are you looking for some great readings on teaching and learning to enrich your break?  Then look no further than the Kaneb Center’s library!  Here are some of our favorites that we think you may enjoy as well:

Wishing you a restful and productive break!

Kaneb Center for Teaching and Learning

A Selection of Education Research

Teaching is tough. The students change every semester, as does the field, and while you may be filled with ideas to improve or update a class, you may be unsure whether the time invested is worth it. There are never enough hours in a day, and it can be easy to fall back on a previous year’s assignments instead of innovating and incorporating recent research and current events. What follows is a brief look at two recent teaching articles on very different topics that may help spark some creativity in your approach to teaching and mentoring.


The Role of Emotional Competencies in Faculty-Doctoral Student Relationships
Kerry Ann O’Meara, Katrina Knudsen, and Jill Jones

Using categories from the framework for emotional intelligence (see Emotional Intelligence) developed at the 1998 Consortium for Research on Emotional Intelligence in Organizations meeting, O’Meara, Knudsen, and Jones examine the faculty-doctoral student relationship and the role that emotional competencies play in this connection. Specifically, they conducted semi-structured interviews with 11 faculty members and 10 graduate students (all volunteers for the study) in an Anthropology department at an undisclosed university, probing the personal competency areas displayed by each group. The interviews focused on:

  • Self-awareness: emotional awareness, accurate self-assessment, and self-confidence
  • Self-regulation: self-control, trustworthiness, and adaptability
  • Self-motivation: achievement drive, commitment, and initiative
  • Social awareness: empathy, service orientation, developing others, leveraging diversity, and political awareness
  • Social skills: influence, communication, leadership, conflict management, building bonds, collaboration and cooperation

They found that the faculty members tended to display much higher levels of social awareness and social skills than the graduate students, but that both groups displayed similar levels of self-awareness, self-regulation, and self-motivation. The researchers concluded by suggesting that future work might focus on how to increase doctoral students’ social competencies.


The Impact of Grading on the Curve: A Simulation Analysis
George Kulick and Ronald Wright

Using Monte Carlo (see Monte Carlo) techniques, Kulick and Wright construct a number of testing scenarios to examine how ‘grading on the curve’ can lead to unexpected results. They identify that the scenarios most likely to produce such results are those in which the distribution of the class is far from a standard normal distribution centered on a C, i.e. “…situations in which all students are highly qualified and well prepared”. To establish their simulation methodology, they set up fictitious tests covering only a portion of the material in a course, reflecting the real-world consequence of limited testing time. They then created different distributions of students centered on various “ability” levels. Each “student” takes the test, and the Monte Carlo method assigns some probability of answering each question correctly based on the student’s ability: the higher the ability, the more likely a correct answer. Final scores were then correlated against assigned ability. While a normally distributed class showed a strong correlation, in some of the testing scenarios there was no correlation between ability and score at all. The authors acknowledge that this is a simple examination of the situation and that a more in-depth analysis could change the results, but they are confident that, within their assumptions, they have identified a problem with grading on the curve.
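The core of such a simulation is easy to reproduce. The sketch below is not the authors’ actual code; the class sizes, question counts, and ability distributions are illustrative assumptions. It shows how the score-ability correlation weakens dramatically when every student is strong:

```python
import random

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def simulate_exam(abilities, num_questions=50):
    """Each 'student' answers each question correctly with probability
    equal to their assigned ability; return the fraction correct."""
    return [sum(random.random() < a for _ in range(num_questions)) / num_questions
            for a in abilities]

random.seed(1)

# Class 1: abilities spread widely (roughly bell-shaped around 0.65)
spread = [min(0.95, max(0.3, random.gauss(0.65, 0.15))) for _ in range(100)]
# Class 2: "all students are highly qualified and well prepared"
uniform_high = [random.uniform(0.88, 0.95) for _ in range(100)]

r_spread = pearson_r(spread, simulate_exam(spread))
r_uniform = pearson_r(uniform_high, simulate_exam(uniform_high))
print(round(r_spread, 2), round(r_uniform, 2))
```

In the spread-out class, curved grades track ability closely; in the uniformly strong class, random question-by-question luck swamps the tiny ability differences, so a forced curve largely ranks noise.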

Planning ahead

If you are interested in reading and discussing current research in higher education and would like to join an education literature club that the Kaneb Center is considering starting this summer, please leave a comment or send an email to jmichalk @ nd.edu or kaneb @ nd.edu so that we can gauge the level of interest.



O’Meara, Kerry Ann, Knudsen, Katrina, and Jones, Jill (2013). “The Role of Emotional Competencies in Faculty-Doctoral Student Relationships,” The Review of Higher Education: Vol. 36, No. 3, pp. 315-347.

Kulick, George and Wright, Ronald (2008). “The Impact of Grading on the Curve: A Simulation Analysis,” International Journal for the Scholarship of Teaching and Learning: Vol. 2, No. 2, Article 5.



The following entry from the 2014-2015 Teaching Issues Writing Consortium: Teaching Tips was contributed by Rachel Winter, Eastern Kentucky University

“Lecturing to 15 students is much the same as lecturing to 90”

(Dr. A, Professor of Biology, personal communication, 20 March 2014).

Dr. A made the above remark during a classroom observation of his course, which applies Flipped Classroom strategies to engage the higher levels of Bloom’s Revised Taxonomy during course meetings. While a class of 15 students is arguably more amenable than a 90-member course to the construction and maintenance of student-instructor relationships, the lecture method precludes the advancement of this rapport in either setting. One of the most instrumental components of enabling active learning is establishing a relationship between an instructor and his or her students, an achievement much more easily realized through effective use of the classroom space.

There are four differently defined spaces in the contemporary classroom: Authoritative, Supervisory, Surveillance, and Interactional. The Authoritative Space refers to the position (generally located at the front center of the classroom) from which the instructor conducts formal teaching and facilitates student activity. This space is also the furthest from students, which is one of the primary hindrances to student-instructor interaction (Lim, O’Halloran, & Podlasov, 2012). The Authoritative Space is typically utilized for the dissemination of information via lecture, an activity that fails to stimulate the higher cognitive levels of analysis, evaluation, and creation, while also preventing students from establishing a personal connection with their instructor.

When departing from the Authoritative Space, an instructor may choose to “patrol” the space between and around the class members, observing, but not interacting with, student activity. When the instructor “pace[s] alongside the rows of students’ desks as well as up and down the side of the classroom,” this activity transforms these sites into the Supervisory Space (Lim, O’Halloran, & Podlasov, 2012, p. 238). While the Supervisory Space physically locates instructors nearer students, the purely observational function of this space does not facilitate the construction and maintenance of student-instructor relationships.

Within the Supervisory Space is the Surveillance Space. This space serves roughly the same function as the Supervisory Space, but involves stationary observation. Similar to Foucault’s Panopticon, the utilization of this space involves the implicit assertion of authority over the observed individuals through an “all-seeing” monitor, in this case positioned at the rear of the classroom (Lim, O’Halloran, & Podlasov, 2012). The function of this space unfortunately precludes the development of a community of peers, as instructors constantly exercise their authority over the members of their class rather than actively facilitating interactions within and among student groups.

The most helpful for the purposes of establishing instructor-student rapport is the Interactional Space. This space can be utilized by the stationary positioning of the instructor “alongside the students’ desks or between the rows of students’ desks” (Lim, O’Halloran, & Podlasov, 2012, p. 238). Interactional Space is most commonly used during student activities, whether individual or in groups. This space represents the closest proximity between instructor and students and “facilitates interaction and reduces interpersonal distance” (Lim, O’Halloran, & Podlasov, 2012, p. 238). This interaction may include personal consultation regarding classroom topics, clarification of previously disseminated material, or even personal interaction, developing student-instructor rapport.

In order to effectively engage students in their learning processes, instructors must take care to utilize their classroom space to enhance student-instructor rapport. An awareness of one’s activity in the classroom can contribute to an enhanced learning environment and can mean the difference between reserved, withdrawn students, and students who actively apply material and participate in a community of peers. When determining the most beneficial use of one’s classroom space, instructors must consider the impact of their use of physical space on the interpersonal distance between students and instructor.



Berrett, D. (2012). How ‘flipping’ the classroom can improve the traditional lecture. Education Digest, 78(1), 36-41.

Lim, F. V., O’Halloran, K. L., & Podlasov, A. A. (2012). Spatial pedagogy: mapping meanings in the use of classroom space. Cambridge Journal of Education, 42(2), 235-251. doi:10.1080/0305764X.2012.676629


The following entry from the 2014-2015 Teaching Issues Writing Consortium: Teaching Tips was contributed by Jodie Hemerda and Julie Frese, Ph.D., Director of Assessment and Academic Quality, University of the Rockies

“To be effective, feedback needs to be clear, purposeful, meaningful, and compatible with students’ prior knowledge and to provide logical connections” (Hattie & Timperley, 2007, p. 104).

The following considerations, drawn largely from Hattie and Timperley’s review, can guide effective feedback:

Task specific – feedback requires learning context and therefore needs to be task specific. There is no advantage to tangential conversations when providing feedback.

Self-regulation – feedback should encourage the learner’s self-regulation by enhancing self-efficacy and self-esteem. This concept corresponds with teaching learners how to learn.

Low task complexity – feedback should address tasks of low complexity. Goals should be broken down into manageable tasks, as this increases the effectiveness of feedback.

Timing – the timing of feedback is not as straightforward as some may think. Quick turnaround on the correctness of simple tasks benefits students. While students may prefer instantaneous feedback, the literature supports delaying feedback on task process: students benefit from time to think about difficult tasks before receiving it.

Praise – the most prevalent and least effective form of feedback, praise can disrupt feedback’s positive effects. Use it cautiously: although students tend to enjoy private praise, it fails the requirement of task specificity.

Technology enhanced – used appropriately, technology has the ability to provide timely feedback, improve collaboration, increase social presence, increase dialogue, improve reflection, support learning principles, and increase student satisfaction. Consider using the technologies available at your school to optimize technology in providing students feedback.



Hattie, J. & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112. doi:10.3102/003465430298487. Retrieved from http://rer.sagepub.com/content/77/1/81.full.pdf+html


The following entry was contributed by Joseph Michalka, Graduate Associate, Kaneb Center for Teaching and Learning.


You have just finished grading the first test, and while there were a couple of high B’s, you were shocked that the average was a 52. Your syllabus says that the first exam counts for a third of the final grade, so if you enter the grades as-is, almost no one will be able to earn an A in the course. You are faced with a difficult decision: to curve or not to curve.

As a teacher, you do not want to be stuck in a situation like this, and while there are many ways to prevent such a scenario (see Using Grading Rubrics, Grading Student Work, Fundamentals of Course Design III: Assessment and Exam Design), dealing with it after the fact requires a flexible grading policy. Grading policies vary drastically, but most fall into one of two categories: criterion-based or reference-based assessment. The rest of this post examines the specifics of each and the situations where each might be appropriate.


Criterion Assessment

Generally, the criterion method measures a student’s progress against stated learning goals or outcomes. As the name suggests, this grading strategy typically depends on rubrics or criteria that establish how assignments, projects, and exams will be graded. Depending on the implementation, this can make the grading process extremely transparent, which helps prevent questions and complaints. The lack of direct competition has also been touted as increasing student cooperation, since students are not fighting for a limited number of high grades. Finally, there is no guarantee of a bell-curve distribution of grades: all of the students could end up failing or acing the class, depending on how their developed skills match up against the course’s learning goals.


Reference Assessment

The reference or comparative assessment is often consistent with the idea of grading on the curve. This method ranks students against each other, typically by normalizing the grade distribution so that the assigned grades match a bell curve, with most grades falling around a C, or ‘average’ grade. This can be a very effective strategy when students need to be ranked or compared against each other, whether for a fellowship opportunity or entrance into a selective program of study. Additionally, this method has been touted as a possible solution to grade inflation (see Princeton’s report on a decade of institution-wide curving). However, a strict application of this method could award high grades to students who have not mastered the material, simply because they happened to know slightly more than their peers. In the inverse situation, students who missed only a couple of points on an exam could receive B’s or C’s if a larger number of students earned perfect scores. Work by Kulick and Wright convincingly argues that a strict policy of grading on the curve can result in false assessments of a student’s capabilities.
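To make the mechanics concrete, here is a minimal sketch of one common way to normalize raw scores to a curve, using each score’s z-score within the class. The letter-grade cutoffs are illustrative assumptions, not a recommended policy:

```python
import statistics

def curve_to_letters(raw_scores,
                     cutoffs=((1.0, "A"), (0.3, "B"), (-0.3, "C"), (-1.0, "D"))):
    """Map each raw score to a letter grade based on its z-score
    within the class; anything below the last cutoff becomes an F."""
    mean = statistics.mean(raw_scores)
    sd = statistics.pstdev(raw_scores)
    letters = []
    for s in raw_scores:
        z = (s - mean) / sd if sd else 0.0
        letter = "F"
        for bound, grade in cutoffs:
            if z >= bound:
                letter = grade
                break
        letters.append(letter)
    return letters

# The exam from the opening scenario: a class average of about 52
scores = [52, 48, 55, 60, 45, 50, 58, 40, 53, 49]
letters = curve_to_letters(scores)
print(list(zip(scores, letters)))
```

With a class average of 52, curving rescues the top scorers (60 and 58 earn A’s) but also hands out D’s and F’s to students only a few points below the mean, illustrating both the appeal of curving after a hard exam and the ranking behavior described above.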


Combination Assessments

A real class is complicated, and a combination of both assessment methods may be appropriate for your situation. For example, perhaps your department does not require a particular grade distribution, so you are fine assigning everyone an A, assuming they earn it; you may still prefer an internal ranking for your own records to help when writing recommendation letters. Another common situation that might call for a combination of policies is an extremely difficult course, e.g. organic chemistry. The students might score only moderately well against the course’s stated criteria, but when the difficulty of the class is compared against the rest of the university, a curve might be appropriate so as not to punish them for taking a difficult course.


These brief discussions will hopefully help you start formulating your own grading scheme, which will likely include aspects of both approaches. Whatever you choose, keep in mind what the assessment process is supposed to provide and aim to accomplish those goals: feedback, motivation, and a report of the student’s learning. A related topic, the philosophy of assessment and the perennial problem of grade inflation, will be examined in a future blog post.


For more thoughts on grading policies


With Super Bowl XLIX in recent memory, it is easy to see how being a student is like being part of a sports team.  You work hard for years and finally, you make the team (get into college).  You’re excited, because this will develop you into the player you always knew you could be.  But let’s say that once you get there, you’re benched and you don’t even get the chance to practice your skills.  Then, one day, your coach throws you in the game and you struggle.  Now, any good coach knows that players need to practice their skills in order to improve and to perform better during the big game.  Why would learning in the classroom be any different?  If our goal is for students to learn concepts and skills for use outside of the classroom and on our exams, why would we not give them ample opportunities to practice these skills in the classroom first?  The best teachers carefully design their assignments and activities to help students practice their skills throughout the semester and incorporate active learning.  A classroom without active learning is like an athlete without practice.

Active learning gives students the chance to apply what they’re learning and to achieve deeper understanding.  Therefore, as instructors, we need to give our students time to practice in the classroom to be better prepared for “the big game.”  Here are some active learning strategies to get your students practicing:

  • Debate.  Have students debate two sides of an issue.  If they work in teams to form their cases (and to anticipate the other side’s arguments), they will learn a great deal about the topic. [Also, check out the Kaneb Center’s upcoming workshop about using debate in the classroom]
  • Practice Teaching a Topic.  Have students select a topic on which they will have to teach the rest of the class.  After researching their topics, have students figure out creative ways to teach the material to their classmates.  This will give them experience explaining tough topics to others, as well as increase their own understanding of the topic.
  • One-Minute Paper.  After teaching a particularly difficult topic, give students one minute to write down their understanding of the topic and formulate any questions they may have.  By giving students time to think about the subject in class, they may be able to see the gaps in their understanding.  Students can even discuss what they wrote with other students to see if they can explain their understanding of the topic and work through those gaps together.  The gaps or questions that remain, then, can be addressed by the instructor.

For more active learning strategies, check out these blog posts:

And for books with more active learning activities, check out these selections from the Kaneb Center Library:

Do you have a great active learning activity to share?

The following entry from the 2014-2015 Teaching Issues Writing Consortium: Teaching Tips was contributed by Rachel A Rogers, Ph.D., Assistant Professor, Psychology Department, Community College of Rhode Island

“But I studied for hours! I don’t understand why I got such a low test grade!”

I am sure that most faculty have heard these words spoken at least once during their teaching careers. What some students do not yet realize is that the quality of study strategies matters almost as much as the amount of time they spend using them. What advice can be given to these motivated students who struggle to study effectively?

In a recent monograph, Dunlosky and colleagues¹ reviewed research from educational and cognitive psychology surrounding ten popular learning strategies. Their findings suggest that some very popular study strategies are actually detrimental to learning and understanding (and were rated ‘low utility’), some are somewhat helpful or are only helpful under certain circumstances (and were rated ‘moderate utility’), and some are helpful in virtually any learning setting (and were rated ‘high utility’).

High utility strategies include Practice Testing and Distributed Practice. Practice Testing, also known as retrieval practice, supports both recall and comprehension of course material for students of all ages, all abilities, and in many subject areas. Practice testing can be aided with practice questions from faculty, or could be as simple as using flashcards to check memory of key terms. The key component to practice testing is that students must retrieve the answer from their long term memories. There are no benefits to looking up the answer in the book, or flipping the flashcard over immediately. Distributed Practice is about spacing out study sessions over time instead of “cramming” the night before a test. Encourage your students to use these two strategies. If possible, make them required parts of your courses so that everyone can benefit from them.

Moderate utility strategies include Elaborative Interrogation, Self-Explanation, and Interleaved Practice. Elaborative Interrogation involves the student generating an explanation for why a fact or concept is true. Self-Explanation is similar. Students explain how new information is related to known information, or explain steps taken during problem solving. Both of these strategies help students connect new and already-known information, which aids in memory encoding. Both work best if the student, not the instructor, generates the explanation. Interleaved Practice is a schedule of practice that mixes different kinds of problems within a single study session. This strategy shows the best results in math classes. Switching between different kinds of computations may result in lower performance during class, but in the long run, learning to identify which types of problems need which type of computations is quite helpful. Help students understand how and when to use these strategies when they come to you for help.

Low utility strategies include Summarization, Highlighting/Underlining, Keyword Mnemonic, Rereading, and Imagery for Text. Rereading and Highlighting/Underlining are two of the most frequently reported student study strategies, but unfortunately, are two of the least effective. Some research on highlighting/underlining shows that it may even harm the student’s ability to make inferences about that topic. The Keyword Mnemonic not only requires excessive instructor support, it also is not helpful in many subject areas, and may lead to accelerated forgetting. Imagery for Text and Summarization do not actually harm learning like other strategies in this category, but they are not as helpful as the high or moderate utility strategies in improving learning. When discussing learning strategies with students, encourage them to use those that have proven to be more efficient and effective.

¹Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4-58. doi:10.1177/1529100612453266

As a new TA or a young professor, the task of establishing your credibility as an instructor can be an intimidating one.  In Teaching What You Don’t Know, Therese Huston provides some advice on how to establish this credibility early in the semester.  Research suggests that the following common pitfalls can actually cause an instructor to lose credibility: showing up late to class, being unable to explain difficult concepts, failing to ask students if they understand the instructor’s explanations, displaying a lack of familiarity with the text, not making attempts to answer students’ questions, failing to follow course policies outlined in the syllabus (particularly with regard to grading), and not reminding students of upcoming deadlines and due dates.

The good news is there are many things instructors can do to establish their credibility!

  • When designing your course, begin with material within your wheelhouse.  If you can start the semester with material you’re more comfortable teaching (because it is your area of expertise), you’ll be more likely to speak with authority and you’ll gain confidence early.
  • Show up to class early to set up and spend time connecting with students and finding out if they have any questions.
  • Stop periodically throughout each class period to gauge student understanding and to answer questions.
  • Provide clear reminders before assignments are due to help keep students on-track.

Keep all these elements in mind as you begin teaching and you’ll be sure to establish your credibility as an instructor and create the best possible classroom environment for yourself and your students.

Additional Resources:

If you’re looking for some great books to read over the break, please check out our library for a wealth of teaching resources to help you make next semester your best yet.  From all of us at the Kaneb Center for Teaching and Learning, we wish you a happy and safe holiday season!

Happy Holidays




Copyright © 2010 | Kaneb Center for Teaching & Learning | kaneb@nd.edu | 574-631-9146