05 August, 2010


By L. Dee Fink
Published in Improving College Teaching by Peter Seldin (ed.).
Reprinted here with permission of the University of Oklahoma Instructional Development Program, July 20, 1999.

Each year faculty members in institutions of higher education take on the task of teaching others. For most of these people, this is a recurring task. In fact, for the majority, this is the central task of a life-long career.

 Assuming that no one is perfect and therefore everyone has room for improvement, evaluation is the means by which we try to identify which aspects of our teaching are good and which need to be changed. The question then arises as to who should take responsibility for doing this evaluation. My belief is that evaluation is an inherent part of good teaching. Therefore it is the teacher himself or herself who should take primary responsibility for doing the evaluation.

In this chapter, I will offer a basic definition of evaluation, state a few reasons why one should invest time and effort into evaluation, describe five techniques for evaluation, and identify resources for helping us evaluate and improve our teaching.

A Definition of "Evaluation"

Doing good evaluation is like doing good research. In both cases, you are trying to answer some important questions about an important topic. The key to doing both activities well is (a) identifying the right questions to ask and (b) figuring out how to answer them.

What are the key questions in the evaluation of teaching? Basically they are: "How well am I teaching? Which aspects of my teaching are good and which need to be improved?" The first question attempts to provide a global assessment, while the second is analytical and diagnostic in character.

Before moving to the task of figuring out how to answer these questions, we should look at the reasons for taking time to evaluate.

Why Evaluate?

It takes a certain amount of time and effort to effectively evaluate our own teaching. Is this a wise use of time? I would argue that it is, for three reasons.

1. First, consider the following diagram:

Figure 1
The Effect of Evaluation on Our Teaching

Regardless of how good or how poor we are as teachers, we all have the potential to get better over time (see the arrow in Figure 1). Yet some teachers continually improve and approach their potential (see arrow) while others experience a modest improvement early in their career and then seem to level off in quality or sometimes even decline (see arrow). Why? I would argue that the primary difference between those who do and those who do not improve is that only the former gather information about their teaching and make an effort to improve some aspect of it -- every time they teach.

2. A second reason to evaluate is to document the quality of one's teaching for others. All career professionals have other people who need to know about the quality of their teaching. It may be the person's current department or institution head, or it may be a potential employer. But once people teach, they have a track record, and others need and want to know how well they taught. The only way a teacher can provide them with that information is to gather it, and that means evaluation. Teaching portfolios are becoming a common way of communicating this information to others. As it turns out, putting a portfolio together also helps the teacher understand his or her own teaching better. (See Zubizarreta, this volume.)

3. Third, there is a very personal and human need to evaluate. This is for our own mental and psychological satisfaction. It is one thing to do a good job and think that it went well; it is quite another, and a far more enjoyable experience, to have solid information and thereby know we did a good job. That knowledge, that certainty, is possible only if we do a thorough job of evaluation.

If evaluation is worth doing, then how do we do it?

Five Sources of Information

There are five basic sources of information that teachers can use to evaluate their teaching. All evaluation efforts use one or more of these basic sources. Each of these five sources has a unique value as well as an inherent limitation.

In the following portion of this chapter, I will discuss the unique value, recommended frequency, limitation, and appropriate response to that limitation, for each of the five sources of information.

Figure 2


1. Self-monitoring

Self-monitoring is what people do semi-automatically and semi-consciously whenever they teach. Most of their mental activity is concerned with making the presentation or leading the discussion. But one portion of their mental attention is concerned with "How is it going?" "Are they with me?" "Am I losing them?" "Are they interested or bored?"

Unique Value. The first value of this is that it is immediate and constant. You do not have to wait a week or a day or even an hour to get the results. It happens right away. Hence adjustments are possible right away.

The second value is that this information is automatically created in terms that are meaningful to the teacher because it is the teacher who creates the information. It is the teacher, not someone else, who looks at the situation and says "This is what is happening." This does not mean that we always know why it is happening, or what to do about it if it is something we do not like. But we do have our own sense of what is happening.

Frequency. This does and should happen all the time. We may only take a mental pause every few minutes to size up the situation. But by comparison with the other sources of information discussed below, this takes place continuously.

Limitation. The very strength of this source is also its weakness. Because this information is created by us for us, it is also subject to our own biases and misinterpretations. I thought they were understanding the material. I thought they looked interested --when in fact they weren't. We all have our own blind spots and lack complete objectivity. This means that, at times, we are going to misread the responses of students to our teaching.

Appropriate Response. What can be done about the subjectivity of self-monitoring? Turn to an objective source of information, one without subjective bias.

  2. Audiotape and Videotape Recordings:
Modern technology has given us relatively inexpensive and easy access to audio and video recordings of what we do as teachers. We can put a small audio recorder on the teacher's desk or a video recorder at the side of the classroom and let it run during a class session. Then later we can listen to or view the recording.

Special value. The value of this kind of information is that it gives us totally objective information. It tells us exactly what we really said, what we really did, not what we thought we said or did. How much time did I spend on this topic? How many times did I ask questions? How often did I move around? These are questions the audio and video recordings can answer with complete accuracy and objectivity.

Frequency. I had the experience of giving a workshop once that was recorded. Listening to the recording later, I discovered to my surprise that I had some disruptive speech patterns of which I was completely unaware. And I am an experienced observer of teachers! The lesson from this was that, no matter how good we are at monitoring others, we can only devote a certain amount of our mental attention to monitoring our own teaching; hence we miss things.

As a result of that experience, I now try to do an audio recording at least once or preferably twice in each full-semester course I teach. This gives me a chance to see if any speech problems are still there or if new ones have cropped up. If they have, the second recording tells me if I have gotten them under control.

Video recordings are probably useful once every year or two. What do we look like to others? As we grow older, we change, and we need to know what the continually changing "me" looks like to others.

Limitation. What could be more valuable than the objective truth of audio and video recordings? Unfortunately the unavoidable problem with this information is that it is true but meaningless -- by itself. The recordings can tell me whether I spoke at the rate of 20 words per minute or 60, but they can't tell me whether that was too slow or too fast for the students. They can tell me whether I moved and gestured and smiled, but they can't tell me whether those movements and facial expressions helped or hindered student learning.

Appropriate response. To determine the effect of my teaching behavior, rather than the behavior itself, I need to find another source of information. (Are you starting to see the pattern here?)

3. Information from Students

As the intended beneficiaries of all teaching, students are in a unique position to help their teachers in the evaluation process.

Special value. If we want to know whether students find our explanations of a topic clear, or whether students find our teaching exciting or dull, who else could possibly answer these kinds of questions better than the students themselves? Of the five sources of information described here, students are the best source for understanding the immediate effects of our teaching, i.e., the process of teaching and learning.

This information can be obtained in two distinct ways: questionnaires and interviews, each with its own relative values.

a. Questionnaires. The most common method of obtaining student reactions to our teaching is to use a questionnaire. Lots of different questionnaires exist, but most in fact ask similar kinds of questions: student characteristics (e.g., major, GPA, reasons for taking the course), the students' characterization of the teaching (e.g., clear, organized, interesting), amount learned, overall assessment of the course and/or the teacher (e.g., compared to other courses or other teachers, this one is ...), and sometimes, anticipated grade.

The special value of questionnaires, compared to interviews, is that they obtain responses from the whole class and they allow for an anonymous (and therefore probably more candid) response. The limitation of questionnaires is that they can only ask a question once, i.e., they cannot probe for further clarification, and they can only ask questions that the writer anticipates as possibly important.

Questionnaires can be given at three different times: the beginning, middle and end of a course. Some teachers use questionnaires at the beginning of a course to get information about the students, e.g., prior course work or experience with the subject, preferred modes of teaching and learning, and special problems a student might have (e.g., dyslexia). Many use mid-term questionnaires to get an early warning of any existing problems so that changes can be made in time to benefit this set of students. The advantage of end-of-term questionnaires is that all the learning activities have been completed. Consequently, students can respond meaningfully to questions about the overall effectiveness of the course.

b. Interviews. The other well-established way of finding out about student reactions is to talk to them. Either the teacher (if sufficient trust and rapport exist) or an outside person (if more anonymity and objectivity are desired) can talk with students for 15-30 minutes about the course and the teacher. As an instructional consultant, I have often done this for other teachers, but I have also done it in some of my own courses. I try to get 6-8 students, preferably a random sample, and visit with them in a focused interview format immediately after class. I have some general topics I want to discuss, such as the quality of the learning thus far, reactions to the lectures, labs, tests, and so forth. But within these topics, I will probe for clarification and examples of perceived strength and weakness. I also note when there is divergence of reactions and when most students seem to agree.

The special value of interviews is that students often identify unanticipated strengths and weaknesses, and the interviewer can probe and follow up on topics that need clarification. The limitation, of course, is that a professor can usually only interview a subset of the class, not the whole class. This leaves some uncertainty as to whether their reactions represent the whole class or not.

As for the frequency of interviews, I would probably only use a formal interview once or at most twice during a term. Of course, a teacher can informally visit with students about the course many times, and directly or indirectly obtain a sense of their reaction to the course.

General limitation. Returning to the general issue of information from students, regardless of how such information is collected, one needs to remember that this is information from students. Although they know better than anyone what their own reactions are, they can also be biased and limited in their own perspectives. They occasionally have negative feelings, often unconsciously, about women, people who are ethnically different from themselves, and international teachers. Perhaps more significantly, students usually do not have a full understanding of how a course might be taught, either in terms of pedagogy or content. Hence they can effectively address what is, but not what might be.

Appropriate response. As with the other limitations, the appropriate response here is to seek another kind of information. In this case, we need information from someone with a professional understanding of the possibilities of good teaching.

4. Students' test results.

Teachers almost always give students some form of graded exercise, whether it is an in-class test or an out-of-class project. Usually, though, the intent of the test is to assess the quality of student learning. We can also use this same information to assess the quality of our teaching.

Special value. The whole reason for teaching is to help someone else learn. Assuming we can devise a test or graded exercise that effectively measures whether or not students are learning what we want them to learn, the test results basically tell us whether or not we are succeeding in our whole teaching effort. This is critical information for all teachers. Although the other sources of information identified here can partially address this question (I think they are learning, The students think they are learning.), none address it so directly as test results: I know they are learning because they responded with a high level of sophisticated knowledge and thinking to a challenging test.

Frequency. How often should we give tests? Many teachers follow the tradition of two mid-terms and a final. In my view this is inadequate feedback, both for the students and for the teacher. Weekly or even daily feedback is much more effective in letting students and the teacher know whether they are learning what they need to learn as the course goes along. If the teacher's goal is to help the students learn, this is important information for both parties. And remember, not all tests need to be graded and recorded!

Limitation. It might be hard to imagine that this information has a limitation. After all, this is what it's all about, right? Did they learn it or not?

The problem with this information is its lack of a causal connection: we don't know why they did or did not learn. Did they learn because of, or in spite of, our teaching? Some students work very hard in a course, not because the teacher inspires or motivates them but because their major requires a good grade in the course and the teacher is NOT effective. Therefore they work hard to learn it on their own.

Appropriate response. If we need to know whether our actions as teachers are helpful or useless in promoting student learning, we need a different source of information, such as the students themselves.

5. Outside observer

In addition to the two parties directly involved in a course, the teacher and the students, valuable information can be obtained from the observations of a third party, someone who brings both an outsider's perspective and professional expertise to the task.

Special value. Part of the value of an outside observer is that they do not have a personal stake in the particular course, hence they are free to reach positive and negative conclusions without any cost to themselves. Also, as a professional, they can bring an expertise either in content and/or in pedagogy that is likely to supplement that of both the teacher and the students.

A variety of kinds of observers exist: a peer colleague, a senior colleague, or an instructional specialist.

  1. Peer colleagues, e.g., two TA's or two junior professors, can visit each other's classes and share observations. Here the political risk is low and each one can empathize with the situation and challenges facing the other. Interestingly, the person doing the observing in these exchanges often finds that they learn as much as the person who gets the feedback.
  2. Senior colleagues can be of value because of their accumulated experience. Although one has to be selective and choose someone who is respected and with whom the political risk is low, experienced colleagues can offer ideas on alternative ways of dealing with particular topics, additional examples to illustrate the material, etc.
  3. A third kind of outside observer, an instructional consultant, is available on many campuses. They may or may not be able to give feedback on the clarity and significance of the content material, but their expertise in teaching allows them to comment on presentation techniques, discussion procedures, and ideas for more active learning.

Frequency. Beginning TA's and beginning faculty members should consider inviting one or more outside observers to their classes at least once a semester for two or three years. They need to get as many new perspectives on teaching as soon as possible. After that, more experienced teachers would probably benefit from such feedback at least once every year or two. We change as teachers; as we do, we need all the feedback and fresh ideas we can find.

Limitations. Again, the strength of being an outsider is also its weakness. Outside observers can usually only visit one or two class sessions and therefore do not know what happens in the rest of the course.

Apart from this general problem, each kind of observer has its own limitation. The peer colleague may also have limited experience and perspectives; the senior colleague may be someone who makes departmental decisions about annual evaluations and tenure; and the instructional consultant may have limited knowledge of the subject matter.

Appropriate response. As with the other sources, the response to these limitations is to use a different source, either a different kind of outside observer or one of the other sources described above.

A Comprehensive Evaluation Scenario

The thesis of this chapter is that a comprehensive plan of evaluation for improvement requires all five sources of information. Each one offers a special kind of information that none of the others do. How would this work out in action?

To answer this question, I will describe a hypothetical professor who is not a perfect teacher and therefore has some yet-to-be identified weaknesses in his teaching, but he also wants to improve his teaching. What steps should he take to evaluate his teaching as a way of identifying those aspects that need changing?


By Barbara Gross Davis, University of California, Berkeley.
From Tools for Teaching, copyright by Jossey-Bass. For purchase or reprint information,
contact Jossey-Bass. Reprinted here with permission, September 1, 1999.


There are no hard-and-fast rules about the best ways to grade. In fact, as Erickson and Strommer (1991) point out, how you grade depends a great deal on your values, assumptions, and educational philosophy:

if you view introductory courses as "weeder" classes -- to separate out students who lack potential for future success in the field -- you are likely to take a different grading approach than someone who views introductory courses as teaching important skills that all students need to master.

All faculty agree, however, that grades provide information on how well students are learning (Erickson and Strommer, 1991). But grades also serve other purposes. Scriven (1974) has identified at least six functions of grading:

  1. To describe unambiguously the worth, merit, or value of the work accomplished
  2. To improve the capacity of students to identify good work, that is, to improve their self-evaluation or discrimination skills with respect to work submitted
  3. To stimulate and encourage good work by students
  4. To communicate the teacher's judgment of the student's progress 
  5. To inform the teacher about what students have and haven't learned
  6. To select people for rewards or continued education

For some students, grades are also a sign of approval or disapproval; they take them very personally. Because of the importance of grades, faculty need to communicate to students a clear rationale and policy on grading.

If you devise clear guidelines from which to assess performance, you will find the grading process more efficient, and the essential function of grades -- communicating the student's level of knowledge -- will be easier. Further, if you grade carefully and consistently, you can reduce the number of students who complain and ask you to defend a grade. The suggestions below are designed to help you develop clear and fair grading policies. For tips on calculating final grades, see "Calculating and Assigning Grades."

General Strategies

Grade on the basis of students' mastery of knowledge and skills. Restrict your evaluations to academic performance. Eliminate other considerations, such as classroom behavior, effort, classroom participation, attendance, punctuality, attitude, personality traits, or student interest in the course material, as the basis of course grades. If you count these non-academic factors, you obscure the primary meaning of the grade, as an indicator of what students have learned. For a discussion on why not to count class participation, see "Encouraging Student Participation in Discussion." (Source: Jacobs and Chase, 1992)

Avoid grading systems that put students in competition with their classmates and limit the number of high grades. These normative systems, such as grading on the curve, work against collaborative learning strategies that have been shown to be effective in promoting student learning. Normative grading produces undesirable consequences for many students, such as reduced motivation to learn, debilitating evaluation anxiety, decreased ability to use feedback to improve learning, and poor social relationships. (Sources: Crooks, 1988; McKeachie, 1986)

Try not to overemphasize grades. Explain to your class the meaning of and basis for grades and the procedures you use in grading. At the beginning of the term, inform students, in writing (see "The Course Syllabus") how much tests, papers, homework, and the final exam will count toward their final grade. Once you have explained your policies, avoid stressing grades or excessive talk about grades, which only increases students' anxieties and decreases their motivation to do something for its own sake rather than to obtain an external reward such as a grade. (Sources: Allen and Rueter, 1990; Fuhrmann and Grasha, 1983)

Keep students informed of their progress throughout the term. For each paper, assignment, midterm, or project that you grade, give students a sense of what their score means. Try to give a point total rather than a letter grade. Letter grades tend to have emotional associations that point totals lack. Do show the range and distribution of point scores, and indicate what level of performance is satisfactory. Such information can motivate students to improve if they are doing poorly or to maintain their performance if they are doing well. By keeping students informed throughout the term, you also prevent unpleasant surprises at the end. (Sources: Lowman, 1984; Shea, 1990)

Minimizing Students' Complaints About Grading

Clearly state grading procedures in your course syllabus, and go over this information in class. Students want to know how their grades will be determined, the weights of various tests and assignments, and the model of grading you will be using to calculate their grades: will the class be graded on a curve or by absolute standards? If you intend to make allowances for extra credit, late assignments, or revision of papers, clearly state your policies.

Set policies on late work. Will you refuse to accept any late work? Deduct points according to how late the work is submitted? Handle late work on a case-by-case basis? Offer a grace period? See "Preparing or Revising a Course."

Avoid modifying your grading policies during the term. Midcourse changes may erode students' confidence in your fairness, consistency, objectivity, and organizational skills. If you must make a change, give your students a complete explanation. (Source: Frisbie, Diamond, and Ory, 1979)

Provide enough opportunities for students to show you what they know. By giving students many opportunities to show you what they know, you will have a more accurate picture of their abilities and will avoid penalizing a student who has an off day at the time of a test. So in addition to a final exam, give one or two midterms and one or two short papers. For lower-division courses, Erickson and Strommer (1991) recommend giving shorter tests or written assignments and scheduling some form of evaluation every two or three weeks.

Consider allowing students to choose among alternative assignments. One instructor presents a list of activities with assigned points for each that take into account the assignments' educational and motivational value, difficulty, and probable amount of effort required. Students are told how many points are needed for an A, a B, or a C, and they choose a combination of assignments that meets the grade they desire for that portion of the course.
Here are some possible activities:
  • Writing a case study
  • Engaging in and reporting on a fieldwork experience
  • Leading a discussion panel
  • Serving on a discussion panel
  • Keeping a journal or log of course-related ideas
  • Writing up thoughtful evaluations of several lectures
  • Creating instructional materials for the course (study guides, exam questions, or audiovisual materials) on a particular concept or theme
  • Undertaking an original research project or research paper
  • Reviewing the current research literature on a course-related topic
  • Keeping a reading log that includes brief abstracts of the readings and comments, applications, and critiques
  • Completing problem-solving assignments (such as designing an experiment to test a hypothesis or creating a test to measure something)
(Source: Davis, Wood, and Wilson, 1983)
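To picture how such a point-menu scheme works in practice, here is a minimal Python sketch. The activities, point values, and grade cutoffs below are hypothetical illustrations, not figures from Davis, Wood, and Wilson: the idea is simply that each student's chosen combination of activities must total enough points to reach the cutoff for the grade they want.

```python
# Hypothetical point menu for the optional-activities scheme described above.
MENU = {
    "case study": 30,
    "fieldwork report": 40,
    "discussion panel (leader)": 25,
    "reading log": 20,
    "research paper": 50,
}
CUTOFFS = {"A": 90, "B": 75, "C": 60}  # points needed for each grade (hypothetical)

def grade_for(choices):
    """Sum the points for a student's chosen activities and return the
    highest grade whose cutoff the total meets, else None."""
    total = sum(MENU[c] for c in choices)
    for letter in ("A", "B", "C"):
        if total >= CUTOFFS[letter]:
            return total, letter
    return total, None

print(grade_for(["case study", "fieldwork report", "reading log"]))  # (90, 'A')
```

A scheme like this makes the grading contract explicit up front: the student, not the instructor, decides which mix of work will earn the target grade.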

Stress to students that grades reflect work on a specific task and are not judgments about people. Remind students that a teacher grades only a piece of paper. You might also let students know, if appropriate, that research shows that grades bear little or no relationship to measures of adult accomplishment (Eble, 1988, p. 156).

Give encouragement to students who are performing poorly. If students are having difficulty, do what you can to help them improve on the next assignment or exam. If they do perform well, take this into account when averaging the early low score with the later higher one. (Source: Lowman, 1984)

Deal directly with students who are angry or upset about their grade. Ask an upset student to take a day or more to cool off. It is also helpful to ask the student to prepare in writing the complaint or justification for a grade change. When you meet with the student in your office, have all the relevant materials at hand: the test questions, answer key or criteria, and examples of good answers. Listen to the student's concerns or read the memo with an open mind and respond in a calm manner. Don't allow yourself to become antagonized, and don't antagonize the student. Describe the key elements of a good answer, and point out how the student's response was incomplete or incorrect. Help the student understand your reasons for assigning the grade that you did. Take time to think about the student's request or to reread the exam if you need to, but resist pressures to change a grade because of a student's personal needs (to get into graduate school or maintain status on the dean's list). If appropriate, for final course grades, offer to write a letter to the student's adviser or to others, describing the student's work in detail and indicating any extenuating circumstances that may have hurt the grade. (Sources: Allen and Rueter, 1990; McKeachie, 1986)

Keep accurate records of students' grades. Your department may keep copies of final grade reports, but it is important for you to keep a record of all grades assigned throughout the semester, in case a student wishes to contest a grade, finish an incomplete, or ask for a letter of recommendation.

Making Effective Use of Grading Tactics

Return the first graded assignment or test before the add/drop deadline. Early assignments help students decide whether they are prepared to take the class (Shea, 1990). Some faculty members give students the option of throwing out this first test (Johnson, 1988). Students may receive a low score because they did not know what the instructor required or because they underestimated the level of preparation needed to succeed.

Record results numerically rather than as letter grades, whenever possible. Tests, problem sets, homework, and so on are best recorded by their point value to assure greater accuracy when calculating final grades. (Source: Jacobs and Chase, 1992)
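One reason numeric records pay off at the end of the term can be sketched in a few lines of Python. The category weights, cutoffs, and scores here are hypothetical, not drawn from the text: the point is that percentages are combined arithmetically all semester and converted to a letter only once, at the end.

```python
def final_grade(scores, weights,
                cutoffs=((90, "A"), (80, "B"), (70, "C"), (60, "D"))):
    """Combine per-category percentage scores into one weighted total,
    then convert that total to a letter grade only at the very end."""
    total = round(sum(scores[cat] * w for cat, w in weights.items()), 2)
    for cutoff, letter in cutoffs:
        if total >= cutoff:
            return total, letter
    return total, "F"

# Hypothetical weighting scheme; the weights must sum to 1.0.
weights = {"homework": 0.2, "midterms": 0.4, "final": 0.4}
scores = {"homework": 95.0, "midterms": 82.0, "final": 88.0}  # percentages

print(final_grade(scores, weights))  # weighted total of 87.0 -> letter grade "B"
```

Recording letter grades instead would force awkward letter-to-number conversions at averaging time and lose the precision the point scores carried all along.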

Give students a chance to improve their grades by rewriting their papers. Many faculty encourage rewriting but do not count the grades on rewritten papers as equivalent to those of papers that have not been rewritten. See "Helping Students Write Better in All Courses."

If many students do poorly on an exam, schedule another one on the same material a week or so later. Devote one or more classes to reviewing the troublesome material. Provide in-class exercises, homework problems or questions, practice quizzes, study group opportunities, and extra office hours before you administer the new exam. Though reviewing and retesting may seem burdensome and time-consuming, there is usually little point in proceeding to new topics when many of your students are still struggling. (Source: Erickson and Strommer, 1991)

Evaluating Your Grading Policies

Compare your grade distributions with those for similar courses in your department. Differences between your grade distributions and those of your colleagues do not necessarily mean that your methods are faulty. But glaring discrepancies should prompt you to reexamine your practices. (Source: Frisbie, Diamond, and Ory, 1979)
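As an illustration of how such a comparison might be tabulated (the rosters below are invented, not data from the text), a few lines of Python turn each section's letter grades into percentage shares that can be set side by side:

```python
from collections import Counter

def distribution(grades):
    """Return each letter grade's share of the class, as a percentage."""
    counts = Counter(grades)
    n = len(grades)
    return {g: round(100 * c / n, 1) for g, c in sorted(counts.items())}

# Invented rosters for two sections of a similar course.
mine = ["A"] * 12 + ["B"] * 10 + ["C"] * 3
colleague = ["A"] * 5 + ["B"] * 10 + ["C"] * 8 + ["D"] * 2

print(distribution(mine))       # {'A': 48.0, 'B': 40.0, 'C': 12.0}
print(distribution(colleague))  # {'A': 20.0, 'B': 40.0, 'C': 32.0, 'D': 8.0}
```

Percentages rather than raw counts make sections of different sizes comparable; as the text cautions, a gap like the one above is a prompt to reexamine your practices, not proof that either distribution is wrong.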

Ask students about your grading policies on end-of-course questionnaires. Here are some sample questions (adapted from Frisbie, Diamond, and Ory, 1979, p. 22):

To what extent:

  • Were the grading procedures for the course fair?
  • Were the grading procedures for the course clearly explained?
  • Did you receive adequate feedback on your performance?
  • Were requests for regrading or review handled fairly?
  • Did the instructor evaluate your work in a meaningful and conscientious manner?


Allen, R. R., and Rueter, T. Teaching Assistant Strategies. Dubuque, Iowa: Kendall/Hunt, 1990.

Crooks, T. J. "The Impact of Classroom Evaluation Practices on Students." Review of Educational Research, 1988, 58(4), 438-481.

Davis, B. G., Wood, L., and Wilson, R. The ABCs of Teaching Excellence. Berkeley: Office of Educational Development, University of California, 1983.

Eble, K. E. The Craft of Teaching. (2nd ed.) San Francisco: Jossey-Bass, 1988.

Erickson, B. L., and Strommer, D. W. Teaching College Freshmen. San Francisco: Jossey-Bass, 1991.

Frisbie, D. A., Diamond, N. A., and Ory, J. C. Assigning Course Grades. Urbana: Office of Instructional Resources, University of Illinois, 1979.

Fuhrmann, B. S., and Grasha, A. F. A Practical Handbook for College Teachers. Boston: Little, Brown, 1983.

Jacobs, L. C., and Chase, C. I. Developing and Using Tests Effectively: A Guide for Faculty. San Francisco: Jossey-Bass, 1992.

Johnson, G. R. Taking Teaching Seriously. College Station: Center for Teaching Excellence, Texas A & M University, 1988.

Lowman, J. Mastering the Techniques of Teaching. San Francisco: Jossey-Bass, 1984.

McKeachie, W. J. Teaching Tips. (8th ed.) Lexington, Mass.: Heath, 1986.

Scriven, M. "Evaluation of Students." Unpublished manuscript, 1974.

Shea, M. A. Compendium of Good Ideas on Teaching and Learning. Boulder: Faculty Teaching Excellence Program, University of Colorado, 1990.



There are five basic types of questions:

  1. Factual;
  2. Convergent;
  3. Divergent;
  4. Evaluative; and
  5. Combination

The art of asking questions is one of the basic skills of good teaching. Socrates believed that knowledge and awareness were an intrinsic part of each learner. Thus, in exercising the craft of good teaching an educator must reach into the learner's hidden levels of knowing and awareness in order to help the learner reach new levels of thinking.

Through the art of thoughtful questioning, teachers can not only extract factual information but also aid learners in connecting concepts, making inferences, increasing awareness, encouraging creative and imaginative thought, aiding critical thinking processes, and generally exploring deeper levels of knowing, thinking, and understanding.

As you examine the categories below, reflect on your own educational experiences and see if you can ascertain which types of questions were used most often by different teachers. Hone your questioning skills by practicing asking different types of questions, and try to monitor your teaching so that you include varied levels of questions. Specifically in the area of Socratic questioning techniques, there are a number of sites on the Web which might prove helpful. Simply use Socratic-questioning as a descriptor; don't forget to hyphenate the term.

1. Factual - Soliciting reasonably simple, straightforward answers based on obvious facts or awareness. These are usually at the lowest level of cognitive or affective processes, and answers are frequently either right or wrong.
Example: What is the name of the Shakespeare play about the Prince of Denmark?

2. Convergent - Answers to these types of questions are usually within a very finite range of acceptable accuracy. These may be at several different levels of cognition -- comprehension, application, analysis, or ones where the answerer makes inferences or conjectures based on personal awareness, or on material read, presented or known.
Example: On reflecting over the entirety of the play Hamlet, what were the main reasons why Ophelia went mad? (This is not stated directly in the text of Hamlet; the reader must make simple inferences as to why she committed suicide.)

3. Divergent - These questions allow students to explore different avenues and create many different variations and alternative answers or scenarios. Correctness may be based on logical projections, may be contextual, or may be arrived at through basic knowledge, conjecture, inference, projection, creation, intuition, or imagination. These types of questions often require students to analyze, synthesize, or evaluate a knowledge base and then project or predict different outcomes. Answering divergent questions may be aided by higher levels of affective functioning. Answers generally fall into a wide range of acceptability, and correctness is often determined subjectively, based on possibility or probability. Frequently the intention of divergent questions is to stimulate imaginative and creative thought, investigate cause-and-effect relationships, or provoke deeper thought or extensive investigation. Be prepared for the fact that there may be no single right or definitively correct answer. Divergent questions may also serve as larger contexts for directing inquiries, and as such may become what are known as "essential" questions that frame the content of an entire course.
Example: In the love relationship of Hamlet and Ophelia, what might have happened to their relationship and their lives if Hamlet had not been so obsessed with the revenge of his father's death?
Example of a divergent question that is also essential: Like many authors throughout time, Shakespeare dwells partly on the pain of love in Hamlet. Why is painful love so often intertwined with good literature? What is its never-ending appeal to readers?

4. Evaluative - These types of questions usually require sophisticated levels of cognitive and/or emotional judgment. In attempting to answer evaluative questions, students may be combining multiple logical and/or affective thinking processes, or comparative frameworks. Often an answer is analyzed at multiple levels and from different perspectives before the answerer arrives at newly synthesized information or conclusions.

a. What are the similarities and differences between the death of Ophelia and that of Juliet?

b. What are the similarities and differences between Roman gladiatorial games and modern football?

c. Why and how might the concept of Piagetian schema be related to the concepts presented in Jungian personality theory, and why might this be important to consider in teaching and learning?

5. Combinations - These are questions that blend any combination of the above.

*For more details and suggestions on this topic, see This Rough Magic (Lindley, 1993).

Lindley, D. (1993). This Rough Magic. Westport, CT: Bergin & Garvey.

Erickson, H. L. (2007). Concept-Based Curriculum and Instruction for the Thinking Classroom. Thousand Oaks, CA: Corwin Press.



  • Plan key questions to provide structure and direction to the lesson. Spontaneous questions that emerge are fine, but the overall direction of the discussion should be largely planned. Example: a "predicting discussion" (Hyman, 1980)
  1. What are the essential features and conditions of this situation?
  2. Given this situation, what do you think will happen as a result of it?
  3. What facts and generalizations support your prediction?
  4. What other things might happen as a result of this situation?
  5. If the predicted situation occurs, what will happen next?
  6. Based on the information and predictions before us, what are the probable consequences you now see?
  7. What will lead us from the current situation to the one you predicted?
  • Phrase the questions clearly and specifically. Avoid vague and ambiguous questions.
  • Adapt questions to the level of the students' abilities.
  • Ask questions logically and sequentially.
  • Ask questions at various levels.
  • Follow up on students' responses. Elicit longer, more meaningful, and more frequent responses from students after an initial response by:
  1. Maintaining a deliberate silence
  2. Making a declarative statement
  3. Making a reflective statement giving a sense of what the students said
  4. Declaring perplexity over the response
  5. Inviting elaboration
  6. Encouraging other students to comment

  • Give students time to think after they are questioned (Wait Time)
  • The three most productive types of questions are variants of divergent thinking questions (Andrews, 1980):
1. The Playground Question

  • Structured by the instructor's designating a carefully chosen aspect of the material (the "playground")
  • "Let's see if we can make any generalizations about the play as a whole from the nature of the opening lines."

2. The Brainstorm Question

  • Structure is thematic
  • Generate as many ideas on a single topic as possible within a short period of time
  • "What kinds of things is Hamlet questioning - not just in his soliloquy, but throughout the play?"

3. The Focal Question
  • Focuses on a well articulated issue
  • Choose among a limited number of positions or viewpoints and support your views
  • "Is Ivan Ilyich a victim of his society or did he create his problems by his own choices?"


By Barbara Gross Davis, University of California, Berkeley.
From Tools for Teaching, copyright by Jossey-Bass. For purchase or reprint information,
contact Jossey-Bass. Reprinted here with permission, September 1, 1999.

Many teachers dislike preparing and grading exams, and most students dread taking them. Yet tests are powerful educational tools that serve at least four functions. First, tests help you evaluate students and assess whether they are learning what you are expecting them to learn. Second, well-designed tests serve to motivate and help students structure their academic efforts. Crooks (1988), McKeachie (1986), and Wergin (1988) report that students study in ways that reflect how they think they will be tested. If they expect an exam focused on facts, they will memorize details; if they expect a test that will require problem solving or integrating knowledge, they will work toward understanding and applying information. Third, tests can help you understand how successfully you are presenting the material. Finally, tests can reinforce learning by providing students with indicators of what topics or skills they have not yet mastered and should concentrate on. Despite these benefits, testing is also emotionally charged and anxiety producing. The following suggestions can enhance your ability to design tests that are effective in motivating, measuring, and reinforcing learning.

A note on terminology: instructors often use the terms tests, exams, and even quizzes interchangeably. Test experts Jacobs and Chase (1992), however, make distinctions among them based on the scope of content covered and their weight or importance in calculating the final grade for the course. An examination is the most comprehensive form of testing, typically given at the end of the term (as a final) and one or two times during the semester (as midterms). A test is more limited in scope, focusing on particular aspects of the course material. A course might have three or four tests. A quiz is even more limited and usually is administered in fifteen minutes or less. Though these distinctions are useful, the terms test and exam will be used interchangeably throughout the rest of this section because the principles in planning, constructing, and administering them are similar.

General Strategies

Spend adequate amounts of time developing your tests. As you prepare a test, think carefully about the learning outcomes you wish to measure, the type of items best suited to those outcomes, the range of difficulty of items, the length and time limits for the test, the format and layout of the exam, and your scoring procedures.

Match your tests to the content you are teaching. Ideally, the tests you give will measure students' achievement of your educational goals for the course. Test items should be based on the content and skills that are most important for your students to learn. To keep track of how well your tests reflect your objectives, you can construct a grid, listing your course objectives along the side of the page and content areas along the top. For each test item, check off the objective and content it covers. (Sources: Ericksen, 1969; Jacobs and Chase, 1992; Svinicki and Woodward, 1982)
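The check-off grid described above can of course be kept on paper, but a minimal sketch in Python (the objectives, content areas, and item tags below are hypothetical examples, not from the source) shows the idea of tallying items per cell and flagging objectives a draft exam never touches:

```python
from collections import Counter

# Hypothetical course objectives and content areas for the blueprint grid.
objectives = ["recall terminology", "apply concepts", "analyze arguments"]
content_areas = ["unit 1", "unit 2", "unit 3"]

# Each test item is tagged with the objective and content area it covers.
items = [
    ("recall terminology", "unit 1"),
    ("apply concepts", "unit 2"),
    ("apply concepts", "unit 3"),
]

# Tally the grid: how many items fall in each (objective, content area) cell.
grid = Counter(items)

# Flag objectives that no item on the draft exam covers.
untested = [obj for obj in objectives
            if not any(obj == o for (o, _) in grid)]
print(untested)  # → ['analyze arguments']
```

Reviewing the empty cells before finalizing the exam is a quick way to see whether the test samples the course in proportion to what was emphasized.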

Try to make your tests valid, reliable, and balanced. A test is valid if its results are appropriate and useful for making decisions about an aspect of students' achievement (Gronlund and Linn, 1990). Technically, validity refers to the appropriateness of the interpretation of the results and not to the test itself, though colloquially we speak about a test being valid. Validity is a matter of degree and considered in relation to specific use or interpretation (Gronlund and Linn, 1990). For example, the results of a writing test may have a high degree of validity for indicating the level of a student's composition skills, a moderate degree of validity for predicting success in later composition courses, and essentially no validity for predicting success in mathematics or physics. Validity can be difficult to determine. A practical approach is to focus on content validity, the extent to which the content of the test represents an adequate sampling of the knowledge and skills taught in the course. If you design the test to cover information in lectures and readings in proportion to their importance in the course, then the interpretations of test scores are likely to have greater validity. An exam that consists of only a few difficult items, however, will not yield valid interpretations of what students know.

A test is reliable if it accurately and consistently evaluates a student's performance. The purest measure of reliability would entail having a group of students take the same test twice and get the same scores (assuming that we could erase their memories of test items from the first administration). This is impractical, of course, but there are technical procedures for determining reliability. In general, ambiguous questions, unclear directions, and vague scoring criteria threaten reliability. Very short tests are also unlikely to be highly reliable. It is also important for a test to be balanced: to cover most of the main ideas and important concepts in proportion to the emphasis they received in class. If you are interested in learning more about psychometric concepts and the technical properties of tests, here are some books you might review:

Ebel, R. L., and Frisbie, D. A. Essentials of Educational Measurement. (5th ed.) Englewood Cliffs, N.J.: Prentice-Hall, 1990.

Gronlund, N. E., and Linn, R. Measurement and Evaluation in Teaching. (6th ed.) New York: Macmillan, 1990.

Mehrens, W. A., and Lehmann, I. J. Measurement and Evaluation in Education and Psychology. (4th ed.) New York: Holt, Rinehart & Winston, 1991.

Use a variety of testing methods. Research shows that students vary in their preferences for different formats, so using a variety of methods will help students do their best (Jacobs and Chase, 1992). Multiple-choice or short-answer questions are appropriate for assessing students' mastery of details and specific knowledge, while essay questions assess comprehension, the ability to integrate and synthesize, and the ability to apply information to new situations. A single test can have several formats. Try to avoid introducing a new format on the final exam: if you have given all multiple-choice quizzes or midterms, don't ask students to write an all-essay final. (Sources: Jacobs and Chase, 1992; Lowman, 1984; McKeachie, 1986; Svinicki, 1987)

Write questions that test skills other than recall. Research shows that most tests administered by faculty rely too heavily on students' recall of information (Milton, Pollio, and Eison, 1986). Bloom (1956) argues that it is important for tests to measure higher-level learning as well. Fuhrmann and Grasha (1983, p. 170) have adapted Bloom's taxonomy for test development. Here is a condensation of their list:

  • To measure knowledge (common terms, facts, principles, procedures), ask these kinds of questions: Define, Describe, Identify, Label, List, Match, Name, Outline, Reproduce, Select, State. Example: "List the steps involved in titration."

  • To measure comprehension (understanding of facts and principles, interpretation of material), ask these kinds of questions: Convert, Defend, Distinguish, Estimate, Explain, Extend, Generalize, Give examples, Infer, Predict, Summarize. Example: "Summarize the basic tenets of deconstructionism."

  • To measure application (solving problems, applying concepts and principles to new situations), ask these kinds of questions: Demonstrate, Modify, Operate, Prepare, Produce, Relate, Show, Solve, Use. Example: "Calculate the deflection of a beam under uniform loading."

  • To measure analysis (recognition of unstated assumptions or logical fallacies, ability to distinguish between facts and inferences), ask these kinds of questions: Diagram, Differentiate, Distinguish, Illustrate, Infer, Point out, Relate, Select, Separate, Subdivide. Example: "In the president's State of the Union Address, which statements are based on facts and which are based on assumptions?"

  • To measure synthesis (integrate learning from different areas or solve problems by creative thinking), ask these kinds of questions: Categorize, Combine, Compile, Devise, Design, Explain, Generate, Organize, Plan, Rearrange, Reconstruct, Revise, Tell. Example: "How would you restructure the school day to reflect children's developmental needs?"

  • To measure evaluation (judging and assessing), ask these kinds of questions: Appraise, Compare, Conclude, Contrast, Criticize, Describe, Discriminate, Explain, Justify, Interpret, Support. Example: "Why is Bach's Mass in B Minor acknowledged as a classic?"

Many faculty members have found it difficult to apply this six-level taxonomy, and some educators have simplified and collapsed the taxonomy into three general levels (Crooks, 1988): the first category is knowledge (recall or recognition of specific information); the second combines comprehension and application; the third, described as "problem solving," involves transferring existing knowledge and skills to new situations.

If your course has graduate student instructors (GSIs), involve them in designing exams. At the least, ask your GSIs to read your draft of the exam and comment on it. Better still, involve them in creating the exam. Not only will they have useful suggestions, but their participation in designing an exam will help them grade it.

Take precautions to avoid cheating. See "Preventing Academic Dishonesty."

Types of Tests

  • Multiple-choice tests. Multiple-choice items can be used to measure both simple knowledge and complex concepts. Since multiple-choice questions can be answered quickly, you can assess students' mastery of many topics on an hour exam. In addition, the items can be easily and reliably scored. Good multiple-choice questions are difficult to write; see "Multiple-Choice and Matching Tests" for guidance on how to develop and administer this type of test.

  • True-false tests. Because random guessing will produce the correct answer half the time, true-false tests are less reliable than other types of exams. However, these items are appropriate for occasional use. Some faculty who use true-false questions add an "explain" column in which students write one or two sentences justifying their response.

  • Matching tests. The matching format is an effective way to test students' recognition of the relationships between words and definitions, events and dates, categories and examples, and so on. See "Multiple-Choice and Matching Tests" for suggestions about developing this type of test.

  • Essay tests. Essay tests enable you to judge students' abilities to organize, integrate, and interpret material and to express themselves in their own words. Research indicates that students study more efficiently for essay-type examinations than for selection (multiple-choice) tests: students preparing for essay tests focus on broad issues, general concepts, and interrelationships rather than on specific details, and this studying results in somewhat better student performance regardless of the type of exam they are given (McKeachie, 1986). Essay tests also give you an opportunity to comment on students' progress, the quality of their thinking, the depth of their understanding, and the difficulties they may be having. However, because essay tests pose only a few questions, their content validity may be low. In addition, the reliability of essay tests is compromised by subjectivity or inconsistencies in grading. For specific advice, see "Short-Answer and Essay Tests." (Sources: Ericksen, 1969; McKeachie, 1986)

A variation of an essay test asks students to correct mock answers. One faculty member prepares a test that requires students to correct, expand, or refute mock essays. Two weeks before the exam date, he distributes ten to twelve essay questions, which he discusses with students in class. For the actual exam, he selects four of the questions and prepares well-written but intellectually flawed answers for the students to edit, correct, expand, and refute. The mock essays contain common misunderstandings, correct but incomplete responses, or absurd notions; in some cases the answer has only one or two flaws. He reports that students seem to enjoy this type of test more than traditional examinations.

  • Short-answer tests. Depending on your objectives, short-answer questions can call for one or two sentences or a long paragraph. Short-answer tests are easier to write, though they take longer to score, than multiple-choice tests. They also give you some opportunity to see how well students can express their thoughts, though they are not as useful as longer essay responses for this purpose. See "Short-Answer and Essay Tests" for detailed guidelines.

  • Problem sets. In courses in mathematics and the sciences, your tests can include problem sets. As a rule of thumb, allow students ten minutes to solve a problem you can do in two minutes. See "Homework: Problem Sets" for advice on creating and grading problem sets.

  • Oral exams. Though common at the graduate level, oral exams are rarely used for undergraduates except in foreign language classes. In other classes they are usually time-consuming, too anxiety-provoking for students, and difficult to score unless the instructor tape-records the answers. However, a math professor has experimented with individual thirty-minute oral tests in a small seminar class. Students receive the questions in advance and are allowed to drop one of their choosing. During the oral exam, the professor probes students' level of understanding of the theory and principles behind the theorems. He reports that about eight students per day can be tested.

  • Performance tests. Performance tests ask students to demonstrate proficiency in conducting an experiment, executing a series of steps in a reasonable amount of time, following instructions, creating drawings, manipulating materials or equipment, or reacting to real or simulated situations. Performance tests can be administered individually or in groups. They are seldom used in colleges and universities because they are logistically difficult to set up and hard to score, and the content of most courses does not necessarily lend itself to this type of testing. However, performance tests can be useful in classes that require students to demonstrate their skills (for example, health fields, the sciences, education). If you use performance tests, Anderson (1987, p. 43) recommends that you do the following (I have slightly modified her list):
A) Specify the criteria to be used for rating or scoring (for example, the level of accuracy in performing the steps in sequence or completing the task within a specified time limit).
B) State the problem so that students know exactly what they are supposed to do (if possible, conditions of a performance test should mirror a real-life situation).
C) Give students a chance to perform the task more than once or to perform several task samples.

  • "Create-a-game" exams. For one midterm, ask students to create either a board game, word game, or trivia game that covers the range of information relevant to your course. Students must include the rules, game board, game pieces, and whatever else is needed to play. For example, students in a history of psychology class created "Freud's Inner Circle," in which students move tokens such as small cigars and toilet seats around a board each time they answer a question correctly, and "Psychogories," a card game in which players select and discard cards until they have a full hand of theoretically compatible psychological theories, beliefs, or assumptions. (Source: Berrenberg and Prosser, 1991)

Alternative Testing Modes

Take-home tests. Take-home tests allow students to work at their own pace with access to books and materials. Take-home tests also permit longer and more involved questions, without sacrificing valuable class time for exams. Problem sets, short answers, and essays are the most appropriate kinds of take-home exams. Be wary, though, of designing a take-home exam that is too difficult or an exam that does not include limits on the number of words or time spent (Jedrey, 1984). Also, be sure to give students explicit instructions on what they can and cannot do: for example, are they allowed to talk to other students about their answers? A variation of a take-home test is to give the topics in advance but ask the students to write their answers in class. Some faculty hand out ten or twelve questions the week before an exam and announce that three of those questions will appear on the exam.

Open-book tests. Open-book tests simulate the situations professionals face every day, when they use resources to solve problems, prepare reports, or write memos. Open-book tests tend to be inappropriate in introductory courses in which facts must be learned or skills thoroughly mastered if the student is to progress to more complicated concepts and techniques in advanced courses. On an open-book test, students who are lacking basic knowledge may waste too much of their time consulting their references rather than writing. Open-book tests appear to reduce stress (Boniface, 1985; Liska and Simonson, 1991), but research shows that students do not necessarily perform significantly better on open-book tests (Clift and Imrie, 1981; Crooks, 1988). Further, open-book tests seem to reduce students' motivation to study. A compromise between open- and closed-book testing is to let students bring an index card or one page of notes to the exam or to distribute appropriate reference material such as equations or formulas as part of the test.

Group exams. Some faculty have successfully experimented with group exams, either in class or as take-home projects. Faculty report that groups outperform individuals and that students respond positively to group exams (Geiger, 1991; Hendrickson, 1990; Keyworth, 1989; Toppins, 1989). For example, for a fifty-minute in-class exam, use a multiple-choice test of about twenty to twenty-five items. For the first test, the groups can be randomly divided. Groups of three to five students seem to work best. For subsequent tests, you may want to assign students to groups in ways that minimize differences between group scores and balance talkative and quiet students. Or you might want to group students who are performing at or near the same level (based on students' performance on individual tests). Some faculty have students complete the test individually before meeting as a group. Others just let the groups discuss the test, item by item. In the first case, if the group score is higher than the individual score of any member, bonus points are added to each individual's score. In the second case, each student receives the score of the group. Faculty who use group exams offer the following tips:

  • Ask students to discuss each question fully and weigh the merits of each answer rather than simply vote on an answer.

  • If you assign problems, have each student work a problem and then compare results.

  • If you want students to take the exam individually first, consider devoting two class periods to tests: one for individual work and the other for group work.

  • Show students the distribution of their scores as individuals and as groups; in most cases group scores will be higher than any single individual score.

A variation of this idea is to have students first work on an exam in groups outside of class. Students then complete the exam individually during class time and receive their own score. A portion of the test items is derived from the group exam; the rest are new questions. Or let students know in advance you will be asking them to justify a few of their responses; this will keep students from blithely relying on their work group for all the answers. (Sources: Geiger, 1991; Hendrickson, 1990; Keyworth, 1989; Murray, 1990; Toppins, 1989)
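The bonus-point scheme described above (add points to each member's individual score when the group score beats every individual score) can be sketched as follows; the two-point bonus and the scores are hypothetical choices, not from the source:

```python
def apply_group_bonus(individual_scores, group_score, bonus=2):
    """If the group outperforms every member's individual score,
    add bonus points to each member's individual score;
    otherwise leave the individual scores unchanged."""
    if group_score > max(individual_scores.values()):
        return {name: score + bonus
                for name, score in individual_scores.items()}
    return dict(individual_scores)

# Hypothetical three-person group whose discussion beat everyone's own score.
scores = {"Ana": 18, "Ben": 20, "Cy": 16}
print(apply_group_bonus(scores, group_score=22))  # each member gains 2 points
```

Making the rule explicit (and announcing it in advance) helps students see that the group discussion can only help, never hurt, their individual grades.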

Paired testing. For paired exams, pairs of students work on a single essay exam, and the two students turn in one paper. Some students may be reluctant to share a grade, but good students will most likely earn the same grade they would have working alone. Pairs can be self-selected or assigned. For example, pairing a student who is doing well in the course with one not doing well allows for some peer teaching. A variation is to have students work in teams but submit individual answer sheets. (Source: Murray, 1990)

Portfolios. A portfolio is not a specific test but rather a cumulative collection of a student's work. Students decide what examples to include that characterize their growth and accomplishment over the term. While most common in composition classes, portfolios are beginning to be used in other disciplines to provide a fuller picture of students' achievements. A student's portfolio might include sample papers (first drafts and revisions), journal entries, essay exams, and other work representative of the student's progress. You can assign portfolios a letter grade or a pass/not pass. If you do grade portfolios, you will need to establish clear criteria. (Source: Jacobs and Chase, 1992)

Construction of Effective Exams

Prepare new exams each time you teach a course. Though it is time-consuming to develop tests, a past exam may not reflect changes in how you have presented the material or which topics you have emphasized in the course. If you do write a new exam, you can make copies of the old exam available to students.

Make up test items throughout the term. Don't wait until a week or so before the exam. One way to make sure the exam reflects the topics emphasized in the course is to write test questions at the end of each class session and place them on index cards or computer files for later sorting. Software that allows you to create test banks of items and generate exams from the pool is now available.

Ask students to submit test questions. Faculty who use this technique limit the number of items a student can submit and receive credit for. Here is an example (adapted from Buchanan and Rogers, 1990, p. 72):

You can submit up to two questions per exam. Each question must be typed or legibly printed on a separate 5" x 8" card. The correct answer and the source (that is, page of the text, date of lecture, and so on) must be provided for each question. Questions can be of the short-answer, multiple-choice, or essay type.

Students receive a few points of additional credit for each question they submit that is judged appropriate. Not all students will take advantage of this opportunity. You can select or adapt students' test items for the exam. If you have a large lecture class, tell your students that you might not review all items but will draw randomly from the pool until you have enough questions for the exam. (Sources: Buchanan and Rogers, 1990; Fuhrmann and Grasha, 1983)

Cull items from colleagues' exams. Ask colleagues at other institutions for copies of their exams. Be careful, though, about using items from tests given by colleagues on your own campus. Some of your students may have previously seen those tests.

Consider making your tests cumulative. Cumulative tests require students to review material they have already studied, thus reinforcing what they have learned. Cumulative tests also give students a chance to integrate and synthesize course content. (Sources: Crooks, 1988; Jacobs and Chase, 1992; Svinicki, 1987)

Prepare clear instructions. Test your instructions by asking a colleague (or one of your graduate student instructors) to read them. Include a few words of advice and encouragement on the exam. For example, suggest how much time to spend on each section, offer a hint at the beginning of an essay question, or wish students good luck. (Source: "Exams: Alternative Ideas and Approaches," 1989)

Put some easy items first. Place several questions all your students can answer near the beginning of the exam. Answering easier questions helps students overcome their nervousness and may help them feel confident that they can succeed on the exam. You can also use the first few questions to identify students in serious academic difficulty. (Source: Savitz, 1985)

Challenge your best students. Some instructors like to include at least one very difficult question -- though not a trick question or a trivial one -- to challenge the interest of the best students. They place that question at or near the end of the exam.

Try out the timing. No purpose is served by creating a test too long for even well-prepared students to finish and review before turning it in. As a rule of thumb, allow about one-half minute per item for true-false tests, one minute per item for multiple-choice tests, two minutes per short-answer item requiring a few sentences, ten or fifteen minutes for a limited essay question, and about thirty minutes for a broader essay question. Allow another five or ten minutes for students to review their work, and factor in time to distribute and collect the tests. Another rule of thumb is to allow students about four times as long as it takes you (or a graduate student instructor) to complete the test. (Source: McKeachie, 1986)
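The per-item guideline above reduces to simple arithmetic. The sketch below encodes those figures (the item mix in the example is hypothetical, and the limited-essay value uses the midpoint of the ten-to-fifteen-minute range):

```python
# Per-item minutes from the rule of thumb in the text (McKeachie, 1986).
MINUTES_PER_ITEM = {
    "true_false": 0.5,
    "multiple_choice": 1.0,
    "short_answer": 2.0,
    "limited_essay": 12.5,   # midpoint of the 10-15 minute guideline
    "broad_essay": 30.0,
}

def estimated_minutes(item_counts, review_minutes=10):
    """Estimate total exam time from counts of each item type,
    plus the recommended review time at the end."""
    working = sum(MINUTES_PER_ITEM[kind] * n for kind, n in item_counts.items())
    return working + review_minutes

# Hypothetical exam: 20 true-false, 15 multiple-choice,
# 5 short-answer, and 1 limited essay question.
total = estimated_minutes({"true_false": 20, "multiple_choice": 15,
                           "short_answer": 5, "limited_essay": 1})
print(total)  # 57.5 (47.5 minutes of work plus 10 minutes of review)
```

If the estimate exceeds the class period, cut items or substitute quicker formats before administering the test.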

Give some thought to the layout of the test. Use margins and line spacing that make the test easy to read. If items are worth different numbers of points, indicate the point value next to each item. Group similar types of items, such as all true-false questions, together. Keep in mind that the amount of space you leave for short-answer questions often signifies to the students the length of the answer expected of them. If students are to write on the exam rather than in a blue book, leave space at the top of each page for the student's name (and section, if appropriate). In courses with graduate student instructors, identifying each page this way lets you separate the exams so that each GSI grades the same questions on every test paper.

References


Anderson, S. B. "The Role of the Teacher-Made Test in Higher Education." In D. Bray and M. J. Blecher (eds.), Issues in Student Assessment. New Directions for Community Colleges, no. 59. San Francisco: Jossey-Bass, 1987.

Berrenberg, J. L., and Prosser, A. "The Create-a-Game Exam: A Method to Facilitate Student Interest and Learning." Teaching of Psychology, 1991, 18(3), 167-169.

Bloom, B. S. (ed.). Taxonomy of Educational Objectives. Vol. 1: Cognitive Domain. New York: McKay, 1956.

Boniface, D. "Candidates' Use of Notes and Textbooks During an Open Book Examination." Educational Research, 1985, 27(3), 201-209.

Brown, I. W. "To Learn Is to Teach Is to Create the Final Exam." College Teaching, 1991, 39(4), 150-153.

Buchanan, R. W., and Rogers, M. "Innovative Assessment in Large Classes." College Teaching, 1990, 38(2), 69-73.

Clift, J. C., and Imrie, B. W. Assessing Students, Appraising Teaching. New York: Wiley, 1981.

Crooks, T. J. "The Impact of Classroom Evaluation Practices on Students." Review of Educational Research, 1988, 58(4), 438-481.

Ericksen, S. C. "The Teacher-Made Test." Memo to the Faculty, no. 35. Ann Arbor: Center for Research on Learning and Teaching, University of Michigan, 1969.

"Exams: Alternative Ideas and Approaches." Teaching Professor, 1989, 3(8), 3-4.

Fuhrmann, B. S., and Grasha, A. F. A Practical Handbook for College Teachers. Boston: Little, Brown, 1983.

Geiger, T. "Test Partners: A Formula for Success." Innovation Abstracts, 1991, 13(11). (Newsletter published by College of Education, University of Texas at Austin)

Gronlund, N. E., and Linn, R. Measurement and Evaluation in Teaching. (6th ed.) New York: Macmillan, 1990.

Hendrickson, A. D. "Cooperative Group Test-Taking." Focus, 1990, 5(2), 6. (Publication of the Office of Educational Development Programs, University of Minnesota)

Jacobs, L. C., and Chase, C. I. Developing and Using Tests Effectively: A Guide for Faculty. San Francisco: Jossey-Bass, 1992.

Keyworth, D. R. "The Group Exam." Teaching Professor, 1989, 3(8), 5.

Liska, T., and Simonson, J. "Open-Text and Open-Note Exams." Teaching Professor, 1991, 5(5), 1-2.

Lowman, J. Mastering the Techniques of Teaching. San Francisco: Jossey-Bass, 1984.

McKeachie, W. J. Teaching Tips. (8th ed.) Lexington, Mass.: Heath, 1986.

Milton, O., Pollio, H. R., and Eison, J. A. Making Sense of College Grades: Why the Grading System Does Not Work and What Can Be Done About It. San Francisco: Jossey-Bass, 1986.

Murray, J. P. "Better Testing for Better Learning." College Teaching, 1990, 38(4), 148-152.

Savitz, F. "Effects of Easy Examination Questions Placed at the Beginning of Science Multiple-Choice Examinations." Journal of Instructional Psychology, 1985, 12(1), 6-10.

Svinicki, M. D. "Comprehensive Finals." Newsletter, 1987, 9(2), 1-2. (Publication of the Center for Teaching Effectiveness, University of Texas at Austin)

Svinicki, M. D., and Woodward, P. J. "Writing Higher-Level Objective Test Items." In K. G. Lewis (ed.), Taming the Pedagogical Monster. Austin: Center for Teaching Effectiveness, University of Texas, 1982.

Toppins, A. D. "Teaching by Testing: A Group Consensus Approach." College Teaching, 1989, 37(3), 96-99.

Wergin, J. F. "Basic Issues and Principles in Classroom Assessment." In J. H. McMillan (ed.), Assessing Students' Learning. New Directions for Teaching and Learning, no. 34. San Francisco: Jossey-Bass, 1988.

Syllabus Subdivisions

Course information
  • What do students need and/or want to know about the course?
  • What pre-requisites exist?
Course description

  • What content will the course address? How does the course fit in with other courses in the discipline? Why is the course valuable to the students?
  • How is the course structured? Large lecture with discussion sessions? Large lecture with laboratory and discussion sessions? Seminars?
  • How are the major topics organized?
Course objectives

  • What will the students know and be able to do as a result of having taken this course?
  • What levels of cognitive thinking do I want my students to engage in?
  • What learning skills will the students develop in the course?
Instructional approaches

  • Given the kind of learning the teacher would like to encourage and foster, what kinds of instructional interactions need to occur? Teacher-student, student-student, student-peer tutor?
  • What kinds of instructional approaches are most conducive to helping students accomplish set learning objectives?
  • How will classroom interactions be facilitated? In-class? Out-of-class? Online? Electronic discussion? Newsgroups? Chatroom?
Course requirements & assignments
  • What will students be expected to do in the course?
  • What kinds of assignments and tests most appropriately reflect the course objectives?
  • Do assignments and tests elicit the kind of learning the teacher wants to foster? Assignments (frequency, timing, sequence)? Tests? Quizzes? Exams? Papers? Special projects? Laboratories? Field trips? Learning logs? Journals? Oral presentations? Research on the web? Web publishing? Electronic databases?
  • What kinds of skills do the students need to have in order to be successful in the course? Computer literacy? Research skills? Writing skills? Communication skills? Familiarity with software?
Course policies

  • What is expected of the student? Attendance? Participation? Student responsibility in their learning? Contribution to group work? Missed assignments? Late work? Extra credit? Academic dishonesty? Makeup policy? Classroom management issues? Laboratory safety?
Grading and evaluation
  • How will the students' work be graded and evaluated? Number of tests? In-class? Take-home? Point value? Proportion of each test toward final grade? Grading scale?
  • How is the final grade determined?
  • How do students receive timely feedback on their performance? Instructor? Self-assessment? Peer review? Peer tutors? Opportunity for improvement? Upgraded assignment?
Course materials

  • What kinds of materials will be used during the course? Electronic databases? Course Webpage? Software? Laboratory equipment?
  • What kind of instructional technologies will be used?
Course calendar

  • In what sequence will the content be taught? When are major assignments due? Field trips? Guest speakers?
Study tips/learning resources

  • How will the student be most successful in the course?
  • What resources are available? Online quiz generator? Study guides? Lecture notes online? Lecture notes on reserve in library? Guest speaker to explain/demonstrate online resources? Study group? Academic Services Center? Writing Center? Evaluation of online resources? Citation of web resources?
Student feedback on instruction

  • Anonymous suggestion box on the web? Email?
  • Student feedback at midterm for instructional improvement purposes?
  • End-of-term student feedback? Supplement to departmental student feedback form?

Designing a Syllabus

Your syllabus can be an important point of interaction between you and your students, both in and out of class. The traditional syllabus is primarily a source of information for your students. A learning-centered syllabus still includes that basic information, but it can also be a learning tool that reinforces the intentions, roles, attitudes, and strategies you will use to promote active, purposeful, effective learning.

Suggested Steps for Planning Your Syllabus:

  • Develop a well-grounded rationale for your course
  • Decide what you want students to be able to do as a result of taking your course, and how their work will be appropriately assessed
  • Define and delimit course content
  • Structure your students’ active involvement in learning
  • Identify and develop resources
  • Compose your syllabus with a focus on student learning
Suggested Principles for Designing a Course that Fosters Critical Thinking:*

  • Critical thinking is a learnable skill; the instructor and peers are resources in developing critical thinking skills.
  • Problems, questions, or issues are the point of entry into the subject and a source of motivation for sustained inquiry.
  • Successful courses balance the challenge to think critically with support tailored to students' developmental needs.
  • Courses are assignment centered rather than text and lecture centered. Goals, methods and evaluation emphasize using content rather than simply acquiring it.
  • Students are required to formulate their ideas in writing or other appropriate means.
  • Students collaborate to learn and to stretch their thinking, for example, in pair problem solving and small group work.
  • Courses that teach problem-solving skills nurture students’ metacognitive abilities.
  • The developmental needs of students are acknowledged and used as information in the design of the course. Teachers in these courses make standards explicit and then help students learn how to achieve them.
Syllabus Functions:

  • Establishes an early point of contact and connection between student and instructor
  • Helps set the tone for your course
  • Describes your beliefs about educational purposes
  • Acquaints students with the logistics of the course
  • Contains collected handouts
  • Defines student responsibilities for successful course work
  • Describes active learning
  • Helps students to assess their readiness for your course 
  • Sets the course in a broader context for learning
  • Provides a conceptual framework
  • Describes available learning resources
  • Communicates the role of technology in the course
  • Can expand to provide difficult-to-obtain reading materials
  • Can improve the effectiveness of student note-taking
  • Can include material that supports learning outside the classroom
  • Can serve as a learning contract
Checklist for a learning-centered syllabus:

• Title Page

• Table of Contents

• Instructor Information

• Letter to the Student

• Purpose of the Course

• Course Description

• Course and Unit Objectives

• Resources

• Readings

• Course Calendar

• Course Requirements

• Evaluation

• Grading Procedures

• How to Use the Syllabus

• How to Study for This Course

• Content Information

• Learning Tools

*Cited in Kurfiss, J. G. Critical Thinking: Theory, Research, Practice, and Possibilities. ASHE-ERIC Higher Education Report No. 2. Washington, D.C.: Association for the Study of Higher Education, 1988.