
Are you interested in learning more about discipline-specific teaching and learning in the university setting? The university offers short credit-bearing summer graduate courses on university teaching and learning in various fields!  Consider taking one of this year’s courses:

GRED 60501: Teaching Engineering Tutorials and Laboratories

GRED 60601: Preparing for an Academic Career in Physics, Math, and Engineering

GRED 60610: How to Teach Effectively and Prepare for an Academic Career in the Humanities and Social Sciences

GRED 60612: Effective and Exciting Teaching in Social Sciences and Humanities

GRED 60640: Designing and Teaching Your First Biology or Chemistry Course

GRED 64600: Teaching and Learning Online

For more information visit: http://kaneb.nd.edu/programs/gred/  and see the 2014 brochure at http://kaneb.nd.edu/assets/127381/2014_gred_brochure.pdf

Forty-four Notre Dame Graduate Student Teaching Assistants (TAs) have been named as the recipients of the 2014 Outstanding Graduate Student Teaching Award.

Created to recognize graduate student instructors and TAs who demonstrate a commitment to exceptional teaching in lectures, seminars, labs, and across the academic profession, the award is presented annually by the Graduate School and the Kaneb Center for Teaching and Learning to TAs who are nominated by their departments.  Departments may nominate up to 5% of their TAs for the honor.

An awards dinner honoring the recipients took place Wednesday, April 16, in the McKenna Hall Conference Center.  Laura Carlson, dean of the Graduate School, and Kevin Barry, director of the Kaneb Center, presented recipients with their awards following a keynote address by Philippe Collon, associate professor of physics.

Laura Carlson, Dean of the Graduate School

Kevin Barry, Director of the Kaneb Center for Teaching and Learning

Philippe Collon, Associate Professor of Physics

A full listing of the award recipients follows:


Art, Art History, & Design
Katelyn Seprish

Clare Brogan

Creative Writing Program
Jayme Russell

Robert Lester

Rachel Banke
Benjamin Wetzel

International Peace Studies
Kyle Lambelet

Alexander Erik Larsen

Medieval Literature
Christopher Scheirer

Jeffrey Tolly

Political Science
Michael Hartney
Soul Park

Xin Tong

Romance Languages & Literature
James Cotton

Ana Velitchkova

Stephen Gaetano
Justus Gnormley
Brian Hamilton

University Writing Center
Kara Donnelly

University Writing Program
Damian Zurro


Aerospace & Mechanical Engineering
Tyler Kreipke
Melinda Lake
Gaojin Li
Matthew Meagher
Arman Mirhashemi
Matthew Mosby

Chemical & Biomolecular Engineering
Fernando Garcia
Raymond Seekell

Civil & Environmental Engineering & Earth Sciences
Ryan Alberdi
Tori Tomiczek

Computer Science & Engineering
Paige Rodeghero

Electrical Engineering
Kaijun Feng


Applied & Computational Mathematics & Statistics
Wenzhao Sun

Biological Sciences
Erin Franks
Shayna Sura
Lindsey Turnbull

Chemistry and Biochemistry
Eric Hansen
Jared Lamp
Joseph Michalka
Brandon Tutkowski

Victor Ocasio Gonzalez
Ryan Thompson

Allison Showalter


Metaphorically, if each student has a bucket that he or she progressively fills with knowledge throughout the semester in your course, is it not important for that student to keep the contents of that bucket even after the course is over?  Undoubtedly, most instructors would shudder at the idea of students walking out of the final exam, tipping over their buckets, and pouring their contents onto the sidewalk, never to be used again.  Therefore, we should aim for student learning to have a lasting impact and for those buckets to stay filled for as long as possible.


After sharing a semester together, both you and your students have undoubtedly learned a great deal from one another.  Although the last day of class is often filled with student presentations, final exam review, or maybe even some last-minute housekeeping details, you may wish to incorporate some end-of-semester activities that allow you to assess (1) to what degree students have filled their metaphorical buckets, (2) with what, and (3) how long they think they will retain the contents.


According to Fink (2003), “For learning to occur, there has to be some kind of change in the learner.  No change, no learning.  And significant learning requires that there be some kind of lasting change that is important in terms of the learner’s life” (p. 34).  Teaching What You Don’t Know (Huston, 2009) and Teaching with Your Mouth Shut (Finkel, 2000) suggest that one last-day-of-class activity to consider is asking students what they will remember from your class in five years.  Interestingly, students and instructors may not see eye to eye on the course’s primary take-away messages.  Thus, this activity is another way to gauge whether your learning goals for students were achieved and whether students perceived them as having a lasting impact.


There are other interesting questions you could ask students on the last day to facilitate discussion and to assess student learning, such as:

  • Have you changed your opinions or views as a result of this course?  Why or why not?
  • Complete the following sentences: One thing I was surprised to learn in this course is __________________.  I was surprised to learn this because __________________.
  • If you could share one idea from this course with others, what would it be, and why?


And finally, some additional resources worth checking out:

In her recent two-part workshop series, Amy Buchmann, a Graduate Associate of the Kaneb Center for Teaching and Learning, discussed some of the fundamentals of course design.  An integrated course design (Fink, 2003) has three primary elements: (1) learning goals, (2) feedback and assessment, and (3) teaching and learning activities.

Integrated Course Design

Figure 1. Integrated course design model. Fink, L. D. (2003). Creating significant learning experiences: An integrated approach to designing college courses. San Francisco, CA: Jossey-Bass.

The first installment of the workshop focused on crafting learning goals and effective feedback and assessment strategies.  Using a technique Bain (2004) describes as planning backwards, a series of activities can incorporate these primary elements into a cohesive course design.  First, draft “big questions” (Bain, 2004; Huston, 2012) about what you want students to gain by the end of the course: What questions will students be able to answer?  What skills, abilities, or qualities will they develop?

Using those “big questions,” you can construct a list of learning goals for students (e.g., “students will be able to X by the end of this course”), which can subsequently be revised to use more specific language.  For example, rather than saying “students will understand X,” more active and specific phrasing might be “students will predict X,” “students will differentiate X and Y,” or “students will be able to generate X” (more helpful examples of how to do this are available on p. 7 of this handout).  Once the learning goals are revised, each one can be transformed into a graded assessment or a chance to provide students with feedback.  Angelo and Cross (1993) provide a list of classroom assessment techniques (CATs) that can serve as a guide.  Using your learning goals to create course assessments ensures that what you hope students will take away from your class is what students are actually doing in the course.

The second installment of this workshop focused on crafting a learner-centered syllabus, which includes formulating teaching and learning activities.  For many instructors, the easiest way to develop a course is to first decide what texts or topics to cover in lectures, then to arrange exams and assignments around these.  However, these “text/lecture-focused courses” often fail to incorporate learning goals.  Therefore, designing your course with an “assignment focus” may allow for a more integrated course design.  Using those learning goals that you previously established and revised, you can build your course one learner-centered assignment at a time.  While doing so, there are three questions to keep in mind: (1) “Are the assignments likely to result in the learning you want?”, (2) “Is the assignment aligned with the learning goals?”, and (3) “Is the workload appropriate?”  Once you have figured out which assessments to include, it becomes easier to craft your syllabus.

The syllabus is where all three primary elements of the integrated course design come together.  In general, the syllabus serves a variety of functions, not the least of which is that it acts as a “contract” between the instructor and the student (Slattery and Carlson, 2005).  As instructors, the syllabus serves as a planning tool for the semester (and helps us to meet course goals in a timely manner) and helps to set the class tone.  For students, the syllabus helps to structure their workload and inform them about course policies.  Thus, the best syllabi will communicate your expectations, emphasize student responsibility, and answer student questions before they ask.

To accomplish these tasks, well-designed syllabi often include some common elements.  Though not an exhaustive list, these include contact information, a course description and goals, student learning goals, materials, a schedule or calendar, requirements and responsibilities, policies, and grading information.  It may also be a good idea to build flexibility into the schedule, and Huston (2012) has some suggestions for how to do so.  For instance, a phrase like “The instructor reserves the right to change the syllabus at any time” may appear disorganized or suggest to students that you may not honor the “contract” between the two of you.  Instead, more diplomatic wording might be “your learning is my principal concern, so I may modify the schedule if it will facilitate your learning” or “we may discover that we want to spend more time on certain topics and less time on others.  I’ll consider changing the schedule if such a change would benefit most students’ learning in this course.”  These phrasings inform students that any changes are for their benefit and that the instructor truly cares about student learning.

Taking all of these fundamentals of course design into account, you can construct an organized, assessment-focused course with student learning in mind.  To learn about this and other Kaneb Center events visit our workshop series page.

The following entry from the 2013-2014 Teaching Issues Writing Consortium: Teaching Tips was contributed by Freya Kinner, Instructional Developer, Western Carolina University.


You turn a test back to your students.  They look at their papers, and you scan the room.  Your students’ visages are telling: some look shocked, others proud, and still others hurt or even bored.  Perhaps one or two students ask to meet with you after class to “talk about their grade” or ask for the dreaded extra-credit assignment.  But how often do they ask themselves how their studying approach (beyond perhaps the amount of time spent studying) affected their performance?  Do they analyze their feedback to see whether there were particular content areas they struggled with?  Particular test item types?

In other words, do your students ever stop and take stock, whether of a test, an in-class activity, an assignment, or a conversation?

We work in a world of quick transitions and immediate gratification, and we seldom take the time to stop, look inward, and take stock. If we do, we often don’t use that “stock” to make changes or plans for the future. This is where metacognition plays a key role. Simply put, metacognition is thinking about thinking. It includes:

  • becoming aware of how we learn (cognitive awareness),
  • monitoring our learning strategies and evaluating how well those learning strategies work (self-regulation), and
  • adapting our learning strategies when and if needed (Flavell, 1979).

In general, students who use metacognitive strategies (i.e., plans or techniques used to help students become more aware of what and how they know) tend to have higher performance than students who do not use metacognitive strategies (e.g., Ertmer & Newby, 1996; Lovett, 2008; Nett, Goetz, Hall, & Frenzel, 2012). One way to help students take stock and learn about metacognitive strategies is through a variation on the gallery walk, wherein you ask students to reflect on both their academic successes and failures.

First, introduce the concept of metacognition (including awareness, monitoring, and adaptation), and ask students to think about their academic successes and failures. Ask students to write responses to the following prompts on sticky notes:

Think about a time when…

  • you learned a lot. What did you do?
  • a writing assignment was particularly successful. What did you do to make it successful?
  • you performed particularly well on a test. How did you prepare?
  • you just didn’t “get it.” What were you doing at that moment?
  • a writing assignment failed. How did you work through the assignment?
  • you failed a test. How did you prepare?

Students place their responses to each prompt on separate charts (one chart per prompt) placed around the room. You (the instructor) facilitate a whole group conversation, walking from chart to chart (in essence, you’re taking a “gallery walk” with each chart a work of art). What are common characteristics across students’ successes? Their failures? What were the students doing in each of those situations? How are the characteristics related to awareness, monitoring, and adaptation? Through this process, students see a pattern in their collective academic successes and struggles.

Then, ask students, “Based on the gallery walk and what we’ve learned about metacognition, how will you plan differently for your next assignment/project/exam?” This final question could be addressed through a minute paper, a take-home assignment, or another chart in the gallery walk.


Ertmer, P. A., & Newby, T. J., (1996). The expert learner: Strategic, self-regulated, and reflective. Instructional Science, 24, 1–24.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906-911.

Lovett, M. C. (2008). Teaching metacognition. Paper presented at the annual EDUCAUSE meeting, Orlando, FL.

Nett, U. E., Goetz, T., Hall, N. C., & Frenzel, A. C. (2012). Metacognition and test performance: An experience sampling analysis of students’ learning behavior. Education Research International, 1-16.


Submitted by:
Freya Kinner, Instructional Developer
Coulter Faculty Commons
Western Carolina University

The following entry from the 2013-2014 Teaching Issues Writing Consortium: Teaching Tips was contributed by Emma Bourassa, Experiential Learning and Field Test Instructor, Vancouver Community College.


Here are a few ideas for encouraging critical thinking and self-reflection on learning that can be used during the semester as feedback on learning for the instructor.

At the beginning of the semester,

  1. Do a silent discussion.
  • Prepare single pieces of paper, each with a provocative statement or question about the content, concepts, or focus of the course (one per student, plus 5 extra).
  • Students each take one paper and write a response: agree, disagree, add to the idea, or ask a question.
  • Students return their paper to the pile, pick up a different one, read it, and respond.  Allow for 4-5 rounds.

Then ask: “Why did I ask you to do that?”  The purposes could be to have students start considering the course, to model giving each student time to process and consider answers (especially if there are group discussions or assignments), and to show that you want to know where students are so you can plan for their learning.

  2. During or after introducing a concept, use images to probe students’ articulation of their understanding.

e.g. 1. Using Escher’s two-headed Möbius image, ask: How does this relate to … (e.g., intercultural communication)?

e.g. 2. Using a variety of abstract images, ask students to: choose one or two images and relate them to … (e.g. theme of the story, group theory, complexity theory etc.)

Midterm, pre-final review

e.g. 1. Have students write possible quiz questions based on Bloom’s analysis, synthesis, and evaluation levels (these may need to be pre-taught).  I have done this, and it provided immediate feedback on students’ level of knowledge.  I can then review what’s necessary, or reteach if the outcome wasn’t what I wanted.

e.g. 2. Have students draw a picture of their learning and then explain it, in writing or orally depending on their preference.

The biggest challenge with this kind of student activity is waiting: students need time to process!

With Spring Break rapidly approaching, what better time than now to check out the Kaneb Center’s library?  Housed at the Kaneb Center are hundreds of books, periodicals, and other materials on a variety of topics related to teaching and learning in higher education.  Whether you’ll be traveling or staying in town, borrowing and reading one of our exceptional books on teaching and learning can be a great way to unwind while also getting great ideas for your classroom when classes resume.  Have a book or topic in mind?  You can search our collection through ND’s library website.  Having trouble deciding what to borrow?  Here are a few of our favorites:

Brain Rules: 12 Principles for Surviving and Thriving at Work, Home, and School
John Medina; Pear Press; 2008. 301 pages.

The Courage to Teach: Exploring the Inner Landscape of a Teacher’s Life
Parker Palmer; Jossey-Bass; 2007. 272 pages.

Discussion as a Way of Teaching: Tools and Techniques for Democratic Classrooms, 2nd Ed.
Stephen Brookfield and Stephen Preskill; Jossey-Bass; 2005. 336 pages.

How Learning Works: Seven Research-Based Principles for Smart Teaching
Susan Ambrose, et al.; Wiley; 2010. 336 Pages

The Last Lecture
Randy Pausch; Hyperion; 2008. 206 pages.

Peer Review of Teaching: A Sourcebook, 2nd. Ed.
Nancy Van Note Chism; Jossey-Bass; 2007. 228 pages.

Presentation Zen: Simple Ideas on Presentation Design and Delivery, 2nd Ed.
Garr Reynolds; New Riders; 2011. 312 pages.

Teaching in Eden: Lessons from Cedar Point
John Janovy; Routledge; 2003. 208 pages.

Teaching for Critical Thinking: Tools & Techniques to Help Students Question Their Assumptions
Stephen Brookfield; Jossey-Bass; 2011. 304 pages.

Teaching What You Don’t Know
Therese Huston; Harvard University Press; 2009. 320 pages.

Teaching with Your Mouth Shut
Donald Finkel; Heineman; 2000. 208 pages.

What the Best College Teachers Do
Ken Bain; Harvard University Press; 2004. 224 pages.

What Video Games Have to Teach Us about Learning and Literacy, 2nd Ed.
James Paul Gee; Palgrave Macmillan; 2008. 256 pages.

To borrow an item, visit us, call, or send an email request.  Happy reading!

With midterm season quickly approaching, many instructors are concerned with how to curb cheating in the classroom.  In his talk at Notre Dame in November 2013, Dr. James Lang (Associate Professor of English at Assumption College) noted that when students engage in academically dishonest behaviors, they are often reacting inappropriately to a learning environment that has not sufficiently captured their attention or motivated them to learn.  Using his intriguing research as a guide, he presented “Five Features of a Learning Environment that Induce Cheating” from his book Cheating Lessons: Learning from Academic Dishonesty.  Those five features are:

  1. Motivation is extrinsic: If students are driven by factors outside themselves (e.g., grades, parental approval), they may be more likely to cheat.  As instructors, then, we should try to foster intrinsic motivation in students.
  2. Orientation toward performance: If the emphasis is on performance (tests, exams, papers) rather than on mastery-oriented goals, students may be more likely to cheat; if instructors design mastery-oriented courses, students may be less inclined to cheat.
  3. Infrequent, high-stakes assessments: If students’ grades are based on only a few assessments, cheating may be more likely to occur.  With this in mind, creating multiple low-stakes assessments (smaller grades spread throughout the semester) instead of placing emphasis on only a few larger grades gives students several chances to succeed and reduces the pressure to perform well on any single assignment.
  4. Low self-efficacy: When students believe they cannot do the work (whether they fear they will not be graded fairly or that they may not be able to do the work at all), they may be more likely to cheat.  Therefore, it is important that we, as instructors, promote self-efficacy in students by practicing fair and well-defined grading procedures, holding reasonable expectations, and scaffolding learning experiences to help students rise to the occasion.
  5. Cheating perceived as common and approved by peers: When students believe cheating is a common classroom occurrence and that their peers also engage in or condone cheating, cheating is more likely to occur.  Discourage these beliefs by improving the classroom environment and getting to know your students.

Eliminating or minimizing these features in your classes and assessment strategies will help to minimize cheating among your students.  Beyond curbing cheating, there are numerous advantages to promoting a positive learning environment, and we should always consider the learning environment we provide for students.

Resources: Lang, J. M. (2013). Cheating lessons: Learning from academic dishonesty. Cambridge, MA: Harvard University Press.

[This resource is available through the Kaneb Center Library]

The following entry from the 2013-2014 Teaching Issues Writing Consortium: Teaching Tips was contributed by Ken Sagendorf, Ph.D., Director, Center for Excellence in Teaching and Learning (CETL), Regis University


In the last couple of weeks, multiple faculty members have approached me about the multiple-choice tests they have given in their classes, specifically asking when to discard a question based on student responses.  This week’s teaching tip focuses on some resources to help us create and use better multiple-choice exams, but the information applies to all types of assessment.

Multiple-choice exams are often part of the assessment repertoire of many faculty because they are easy to grade.  But writing good multiple-choice tests is hard to do.  I think there are a couple of reasons this is so:

1. Most of us have had no training whatsoever in creating these kinds of assessments.

When I was in grad school, we had a joint doctoral program between Exercise Science and Science Education.  My Exercise Science department head gathered all of the doctoral students together to ask what we thought the value of the education side was.  One of the few to speak up, I asked him how he knew whether he was asking good multiple-choice questions.  He responded that he kept asking the same questions for three years and threw out those that students couldn’t answer correctly.  He said it wasn’t hard.  He was right: asking questions and getting answers is not hard.  Asking good questions that get students to think the way you intend, now that is hard.  Needless to say, I finished my Ph.D. in Science Education.

There are many, many resources about MC tests out there, from quick, applied papers (e.g., http://www.theideacenter.org/sites/default/files/Idea_Paper_16.pdf) to full books and research articles (e.g., http://web.ebscohost.com/ehost/pdfviewer/pdfviewer?sid=81790701-e732-4a68-9e0c-993437437ef1%40sessionmgr111&vid=4&hid=122).

2. Students have developed really good test-taking skills.

As a native New Yorker, I grew up taking Regents exams, tests at the end of the year in science, math, foreign language, English, social studies, etc.  In four years of high school, we took 11 or 12 of these tests, and we bought books teaching us how to take and pass them.  Our students today have likely taken many more tests than you or I did, and may even have been privy to the prep courses that prepare people for the SAT, ACT, GRE, LSAT, MCAT, or any of the plethora of multiple-choice-laden tests.  They know the drill.  Read the choices.  Eliminate the choices that make no sense with the others.  You can probably narrow it down to two.  This is not what we envision when we give a test!  We want students to think!  So, we need to remove students’ ability to do well on test-taking skills alone.  The BYU guide for writing MC questions has been around a long time, but I think it is still one of the best guides out there for how to construct good questions: http://testing.byu.edu/info/handbooks/betteritems.pdf

3. It is easy to forget what we are measuring when we use multiple-choice tests.

In the last couple of weeks, faculty have told me that they heard they should throw out MC test questions if 50% of the students get the question wrong (I will explain in the next paragraph where this comes from).  Another faculty member told me the value was 65% (I believe this is being confused with an accepted value for a question’s reliability, one way of analyzing your tests).  Now, these numbers are not incorrect, but they need the proper context around them.

For instance, if you are using an MC test to identify the top performers in your class (also known as norm-referenced testing), then it may be proper to write a test where 5% of the items are answered correctly by 90% of the students (to boost confidence), 5% of the items are answered correctly by only 10% of the students, and the remaining items are answered correctly by an average of 50% of the students (Davis, 2009).  This is where I believe the 50% number comes from.

Certainly, there are many ways to quantitatively evaluate your tests, but it is important to recognize that quantitative analysis is not the only way.

If you are using an MC test to measure whether students have acquired the information, skills, and competencies (like critical thinking) that you want all students to have, you are testing for something different: how well the test questions represent the things you want students to do.  In this case, when students perform poorly on test questions, there are multiple possibilities: Was the test item unclear or poorly written?  Was the content of the question too challenging?  Were the students insufficiently prepared?  Looking at the choices students made, in a bar-graph format, will give you some insight into how students were thinking when they answered.  Here, if a good number of your students chose the same answer, whether it was the right one or a wrong one, that would indicate that the thinking students used was similar and that the question was good at measuring that way of thinking.  It is your call whether that was the kind of thinking you wanted them to do.
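The quantitative checks discussed above (what proportion of students answer an item correctly, and whether an item separates stronger from weaker students) can be sketched in a few lines of code.  This is an illustrative example, not part of the original tip: the function names are invented, and the upper/lower 27% split in the discrimination index is one common convention, not the only one.

```python
def item_difficulty(responses):
    """Proportion of students answering each item correctly (the item's
    'p-value'). Rows are students, columns are items; 1 = correct."""
    n_students = len(responses)
    n_items = len(responses[0])
    return [sum(row[i] for row in responses) / n_students
            for i in range(n_items)]

def item_discrimination(responses):
    """Upper-lower discrimination index: the difference in an item's
    difficulty between the top and bottom 27% of students, ranked by
    total score. Higher values mean the item better separates strong
    from weak test-takers."""
    ranked = sorted(responses, key=sum, reverse=True)
    k = max(1, round(0.27 * len(ranked)))
    upper, lower = ranked[:k], ranked[-k:]
    n_items = len(responses[0])
    return [sum(row[i] for row in upper) / k - sum(row[i] for row in lower) / k
            for i in range(n_items)]

# Example: 5 students, 3 items (1 = correct, 0 = incorrect)
scores = [
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
    [0, 0, 0],
]
print(item_difficulty(scores))      # → [0.6, 0.6, 0.4]
print(item_discrimination(scores))  # → [1.0, 1.0, 1.0]
```

An item with difficulty near 0.5 fits the norm-referenced design described above, while an item with low or negative discrimination is a candidate for the kind of closer inspection the paragraph recommends before simply throwing it out.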

There are many resources on campus and online to assist you with these questions and the quest to write better multiple-choice tests.


Clegg, V.L. and Cashin, W.E. (1986)“Improving Multiple Choice Tests.” Idea Paper. No. 16. Found online at: http://www.theideacenter.org/sites/default/files/Idea_Paper_16.pdf

Davis, B.G. (2009). Tools for Teaching. 2nd Edition. San Francisco: Jossey-Bass. Available in the CETL.

Jacobs, L.C. and Chase, C.I. (1992). Developing and Using Tests Effectively: A Guide For Faculty. San Francisco: Jossey-Bass. AVAILABLE IN THE LIBRARY AT: http://lumen.regis.edu/search~S3/? searchtype=t&searcharg=Developing+and+Using+Tests+Effectively%3A+A+Guide+For+Faculty&searchscope=3&SORT=D&extended=0&searchlimits=&searchorigarg=ttips+for+improving

Kehoe, J. (1995). “Writing Multiple-Choice Test Items.” Practical Assessment, Research and Evaluation. 4 (9). Full text available through the library: http://pareonline.net/getvn.asp?v=4&n=9

Lowman, J. (1995). Mastering the Techniques of Teaching. San Francisco: Jossey-Bass. Available in the CETL.

Sechrest, L., Kihlstrom, J.F., and Bootzin, R. (1999). How to Develop Multiple-Choice Tests. In B. Perlman, L.I. McCann, and S.H. McFadden (Eds.), Lessons Learned: Practical Advice for the Teaching of Psychology. Washington, D.C.: American Psychological Society.

Wergin, J.F. (1988). “Basic Issues and Principles in Classroom Assessment.” In J.H. McMillan (Ed.), Assessing Students’ Learning. New Directions for Teaching and Learning, No. 34. San Francisco: Jossey-Bass. Available through Prospector.

Early Semester Evaluations

Were you unable to attend last week’s workshop on early semester evaluations? It’s not too late to start thinking about early semester evaluations. This is a great way to get feedback from your students to help you improve your teaching, and it shows that you are invested in their learning.

One of the most popular forms of early semester evaluation is the Teacher Designed Feedback Form.  This form contains questions that you design about specific strategies, methods, and activities in your class; the questions can be either open-ended or Likert-style.  The form can be administered in class or electronically outside of class.

Here is an example of an early semester evaluation I gave in the Applied Math Methods course that I taught last fall.  I had made a few changes to the course, and this gave me the opportunity to get student feedback on how those aspects of the course were going.  It also gave me an idea of how the course was going overall.

Things to consider when reading the early semester evaluation results:

  1. Look for general trends in the responses. Disregard extreme outliers (both positive and negative).
  2. Discuss the results with your students. Address the general trends that you observed, and let your students know of any changes that will be made as a result of the feedback. If you choose not to make changes that were suggested, use this time to explain why.
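The first piece of advice, looking for general trends while disregarding extreme outliers, can be approximated numerically with a trimmed mean of Likert-style ratings.  This is a hypothetical sketch, not part of the original post; the questions and ratings are invented for illustration.

```python
def trimmed_mean(ratings, trim=1):
    """Mean of Likert ratings (e.g., 1-5) after dropping the `trim`
    lowest and highest responses, so a single extreme outlier
    cannot skew the overall trend."""
    if len(ratings) <= 2 * trim:
        return sum(ratings) / len(ratings)
    core = sorted(ratings)[trim:-trim]
    return sum(core) / len(core)

# Invented feedback data: ratings on a 1-5 scale per question
feedback = {
    "Group work helps me learn": [4, 5, 4, 3, 4, 1],  # one harsh outlier
    "Pace of lectures is right": [2, 3, 2, 2, 3, 5],  # one glowing outlier
}

for question, ratings in feedback.items():
    print(f"{question}: {trimmed_mean(ratings):.2f}")
```

Here the first question trends clearly positive (3.75) despite one very low rating, and the second trends low (2.50) despite one very high rating, which is exactly the kind of general trend, stripped of outliers, worth discussing with students.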

Check out the workshop page for more information related to early semester evaluations. 



Copyright © 2010 | Kaneb Center for Teaching & Learning | kaneb@nd.edu | 574-631-9146