
Assessment methods should help the instructor answer the questions, “How do I know the required learning has taken place? What might I need to modify about the course to best support student learning?”  

Information about student learning can be assessed through both direct and indirect measures. Direct measures may include homework, quizzes, exams, reports, essays, research projects, case study analysis, and rubrics for oral and other performances. Examples of indirect measures include course evaluations, student surveys, course enrollment information, retention in the major, alumni surveys, and graduate school placement rates. 

Approaches to measuring student learning 

Methods of measuring student learning are often characterized as summative or formative assessments: 

  • Summative assessments: tests, quizzes, and other graded course activities used to measure student performance. They are cumulative and often reveal what students have learned at the end of a unit or course. Within a course, summative assessment includes the system for calculating individual student grades. 
  • Formative assessments: any means by which students receive input and guiding feedback on their performance to help them improve. Feedback can be provided face-to-face in office hours, in written comments on assignments, through rubrics, and by email. 

Formative assessments can be used to measure student learning on a daily, ongoing basis. These assessments reveal how and what students are learning during the course and often inform next steps in teaching and learning. Rather than asking students if they understand or have any questions, you can be more systematic and intentional by asking students at the end of the class period to write the most important points or the most confusing aspect of the lecture on index cards. Collecting and reviewing the responses provides insight into what themes students have retained and what your next teaching steps might be. Providing feedback on these themes to students gives them insight into their own learning. 
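The index-card routine described above lends itself to a quick tally. A minimal sketch (the function name and data are hypothetical, not a prescribed tool):

```python
from collections import Counter

def tally_exit_tickets(responses):
    """Count how often each topic appears on end-of-class index cards.

    `responses` is a list of topic strings, one per student card
    (e.g., the 'most confusing point' each student wrote down).
    """
    counts = Counter(topic.strip().lower() for topic in responses)
    # Most frequently cited topics first, to guide the next class session.
    return counts.most_common()

# Example: cards from one class session (hypothetical data).
cards = ["recursion", "Recursion", "base cases", "recursion", "stack frames"]
print(tally_exit_tickets(cards))
# [('recursion', 3), ('base cases', 1), ('stack frames', 1)]
```

Even this rough count makes the dominant points of confusion visible before the next lecture.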

You can also ask students to reflect and report on their own learning. For example, ask students to rate their knowledge of a topic after taking your course compared with what they believe they knew before taking it.  
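The retrospective pre/post rating described above reduces to simple arithmetic. A minimal sketch, with hypothetical ratings on a 1-5 scale:

```python
def average_gain(ratings):
    """Average self-reported learning gain from retrospective pre/post ratings.

    `ratings` is a list of (before, after) pairs: each student rates what
    they knew before the course and what they know now, and we average
    the per-student difference.
    """
    gains = [after - before for before, after in ratings]
    return sum(gains) / len(gains)

# Hypothetical survey results from four students.
survey = [(2, 4), (1, 3), (3, 5), (2, 3)]
print(average_gain(survey))  # 1.75
```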

Considerations for Measuring Student Learning

As you develop methods for assessing your students, consider:

  • including indirect and direct assessments as well as formative and summative assessments
  • evaluating whether the assessment aligns directly with a learning outcome
  • ensuring the measurement is sustainable and reasonable in terms of time and resources, both for the students and the instructors (e.g., grading, response time, and methods). To estimate the time that students need to complete different assignments, see the Rice University workload calculator
  • using a mid-semester student survey, such as the CTI's Mid-Semester Feedback Program, to gather feedback on what students are learning and what is helping them learn
  • using the results of the assessments to improve the course. Examples include revising course content in terms of depth vs. breadth, realignment between goals and teaching methods, employment of more appropriate assessment methods, or effective incorporation of learning technologies

Getting started with measuring student learning

At the course level, it is helpful to review course assignments and assessments by asking: 

  • What are the students supposed to get out of each assessment? 
  • How are the assessments aligned with learning outcomes? 
  • What is each assessment's intrinsic value in terms of: 
    • Knowledge acquired? 
    • Skill development? 
    • Values clarification? 
    • Performance attainment? 
  • How are homework and problem sets related to exams? 
  • How are the exams related to each other? 
  • What other forms of assessment (besides exams) can be used as indicators of student learning? 
  • If writing assignments are used, are there enough of them for students to develop the requisite skills embedded in them? 
  • How is feedback on student work provided to help students improve? 
  • Are the assessments structured in a way to help students assess their own work and progress? 
  • Does the assignment provide evidence of an outcome that was communicated? Is the evidence direct or indirect? 

Formative and Summative Assessment

Formative Assessments: Formative assessments (interactive classroom discussions, self-assessments, warm-up quizzes, mid-semester evaluations, exit quizzes, etc.) monitor student learning.

  • These are short term, as they are most applicable when students are in the process of making sense of new content and applying it to what they already know.
  • The most striking feature of these types of assessments is the immediate feedback, which helps students make changes to their understanding of the material and allows the teacher to gauge student understanding and adapt to the needs of the students.
  • These types of assessments often carry no credit toward the student's grade.

Interim Assessments: Interim assessments (concept tests, quizzes, written essays, etc.) may be more formal and can occur throughout the semester.

  • Typically, students are given the opportunity to revisit and perhaps revise these assessments after they have received feedback.
  • This type of assessment can be particularly useful in addressing the knowledge gaps in student understanding and can help you formulate better lesson plans during the course.
  • The feedback to students is quick but not necessarily immediate.
  • These types of assessment may count toward a small percentage of the student grade.

Summative Assessments: Summative assessments (typically midterm or final exams) evaluate student learning at the end of an instructional unit by comparing it against some standard or benchmark.

  • These assessments are formal and have a direct impact on student grades.
  • The feedback to the student may be extremely limited.
  • Generally students do not have the opportunity to re-take the assessment.
  • The results of these assessments can help students understand where they stand in the class by comparing grades and, if applicable, by looking at descriptive statistics such as the average, median, and standard deviation.
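The descriptive statistics mentioned in the last point can be computed directly with Python's standard library; the scores below are hypothetical:

```python
import statistics

def grade_summary(scores):
    """Descriptive statistics an instructor might report alongside exam grades."""
    return {
        "average": statistics.mean(scores),
        "median": statistics.median(scores),
        "std_dev": statistics.stdev(scores),  # sample standard deviation
    }

# Hypothetical midterm scores for a small class.
midterm = [62, 71, 74, 80, 80, 85, 91, 97]
print(grade_summary(midterm))
```

Sharing these figures with the class lets each student place their own grade in context without revealing anyone else's score.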

For an explanation of specific techniques you can use for formative and interim assessment, please see Classroom Assessment Techniques (Angelo and Cross) [hyperlink to PDF].

Authentic Assessment

Authentic assessment is a form of assessment in which students demonstrate meaningful application of knowledge and skills by performing real-world tasks. These tasks involve effectively and creatively addressing problems faced by professionals, consumers, and citizens in that field. Student performance is evaluated utilizing a rubric.

Authentic assessment is a form of direct assessment because it provides direct evidence of application of knowledge, skills, and attitudes. It is often referred to as performance assessment or alternative assessment.

With traditional assessments, instructors are often discouraged from “teaching to the test.” With authentic assessment, instructors are encouraged to “teach to the test” because students need to learn how to perform the meaningful tasks associated with real-world experience. To develop the knowledge, skills, and attitudes students need to perform well, the instructor should show them models of both strong and weak performance. Sharing the scoring rubric with students is also encouraged: the instructor is not providing the answers to the assessment but helping students understand the key focus areas and what counts as a strong performance.
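A shared scoring rubric is, in the end, a weighted scoring scheme. A minimal sketch of how one performance might be scored (the criteria, weights, and 1-4 scale are all illustrative, not prescribed):

```python
def rubric_score(ratings, weights):
    """Weighted rubric score for one student performance.

    `ratings` maps each criterion to the level earned (here a 1-4 scale);
    `weights` maps each criterion to its share of the total score.
    Criteria, weights, and scale are illustrative, not prescribed.
    """
    return round(sum(ratings[c] * weights[c] for c in weights), 2)

# Hypothetical rubric for an oral presentation.
weights = {"analysis": 0.5, "evidence": 0.3, "delivery": 0.2}
ratings = {"analysis": 4, "evidence": 3, "delivery": 2}
print(rubric_score(ratings, weights))  # 3.3
```

Making the weights explicit is one way to show students which focus areas matter most before they perform the task.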

Examples of authentic assessments

  • Oral interviews
  • Writing samples
  • Exhibitions
  • Experiments
  • Observation
  • Producing a commercial
  • Composing a song
  • Creating a flyer
  • Debating
  • Portfolios

Authentic versus traditional

Authentic and traditional assessments differ from each other in key ways:

  • Authentic: perform a task. Traditional: select a response.
  • Authentic: real-life experience or scenario. Traditional: scenario contrived by the instructor.
  • Authentic: focuses on inquiry (higher-level Bloom's). Traditional: focuses on bits of information (lower-level Bloom's).
  • Authentic: assumes knowledge has multiple meanings. Traditional: assumes knowledge has a single meaning.
  • Authentic: treats learning as active (student-structured). Traditional: treats learning as passive (teacher-structured).
  • Authentic: direct evidence of learning. Traditional: indirect evidence of learning.

Combining traditional and authentic assessments

Traditional and authentic assessments complement each other when utilized in combination. Instructors do not need to limit themselves to only traditional assessments or authentic assessments in their course. The combination of both traditional and authentic assessments may prove a stronger approach than either alone. Student knowledge can be evaluated through the use of a traditional assessment, such as multiple choice questions or essays, but their ability to apply that knowledge in real-life scenarios that require skill demonstration can additionally be evaluated with an authentic assessment. For example, a medical student’s knowledge of a medical condition can be tested with a traditional assessment, followed by the student’s ability to appropriately treat a patient with that same condition by going on medical rounds.
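Combining the two kinds of evidence into one grade is a simple weighted average. A sketch (the 40/60 split and the scores are hypothetical; actual weights are a course-design decision):

```python
def final_score(traditional, authentic, traditional_weight=0.4):
    """Combine a traditional exam score with an authentic-assessment score.

    Both scores are on a 0-100 scale; the 40/60 split is illustrative,
    not a recommendation.
    """
    combined = traditional * traditional_weight + authentic * (1 - traditional_weight)
    return round(combined, 1)

# Hypothetical medical-student example: written exam plus a rounds evaluation.
print(final_score(traditional=88, authentic=75))  # 80.2
```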

Tips:

  • Design backwards.  As with all teaching, instructors should start with intended learning objectives. By knowing what the student should be able to do when learning is complete, the instructor can easily plan the assessment and the learning experience.
  • Break the real-world experience down into small steps. To avoid overwhelming students, instructors can break the steps necessary to complete the experience into smaller chunks.
  • Don’t get frustrated. Developing a strong authentic assessment can be challenging but very rewarding. Rubric development, in particular, can be challenging for instructors. Expect challenges and work through them. Repeated experience with authentic assessments will improve the experience, the rubric itself, and the comfort of both instructors and students with the process and tools.
  • Never underestimate the power of student reflection. By reflecting on the experience and assessment, students will further evaluate and recognize what they have learned. The reflections will also assist the instructor in identifying challenges experienced by the students.

Additional resources:

  • Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. New York: Longman.
  • Meyer, C. A. (1992). What's the difference between authentic and performance assessment? Educational Leadership, 49, 39-40.
  • Newmann, F. M. & Wehlage, G. G. (1993). Five standards of authentic instruction. Educational Leadership, 50, 8-12.
  • Rolheiser, C., Bower, B. & Stevahn, L. (2000). The portfolio organizer: Succeeding with portfolios in your classroom. Alexandria, VA: Association for Supervision and Curriculum Development.
  • Steffe, L. P., & Gale, J. (Eds.). (1995). Constructivism in education. Hillsdale, NJ: Erlbaum.
  • Stiggins, R. J. (1987). The design and development of performance assessments. Educational Measurement: Issues and Practice, 6, 33-42.
  • Wiggins, G. P. (1993). Assessing student performance. San Francisco: Jossey-Bass Publishers.
  • Wiggins, G. P. (1998). Educative assessment: Designing assessments to inform and improve student performance. San Francisco: Jossey-Bass Publishers.
  • Wiggins, G. P., & McTighe, J. (1998). Understanding by design. Alexandria, VA: Association for Supervision and Curriculum Development.
  • Worthen, B. R., White, K. R., Fan, X., & Sudweeks, R. R. (1999). Measurement and assessment in schools. New York: Longman.

Summary of Indirect Assessment Techniques

(Assessing Academic Programs in Higher Education by Allen 2004)

Surveys
  • Potential strengths:
    • Are flexible in format and can include questions about many issues
    • Can be administered to large groups of respondents
    • Can easily assess the views of various stakeholders
    • Usually have face validity – the questions generally have a clear relationship to the objectives being assessed
    • Tend to be inexpensive to administer
    • Can be conducted relatively quickly
    • Responses to closed-ended questions are easy to tabulate and to report in tables or graphs
    • Open-ended questions allow faculty to uncover unanticipated results
    • Can be used to track opinions across time to explore trends
    • Are amenable to different formats, such as paper-and-pencil or online formats
    • Can be used to collect opinions from respondents at distant sites
  • Potential limitations:
    • Provide indirect evidence about student learning
    • Their validity depends on the quality of the questions and response options
    • Conclusions can be inaccurate if biased samples are obtained
    • Results might not include the full array of opinions if the sample is small
    • What people say they do or know may be inconsistent with what they actually do or know
    • Open-ended responses can be difficult and time-consuming to analyze
Interviews
  • Potential strengths:
    • Are flexible in format and can include questions about many issues
    • Can assess the views of various stakeholders
    • Usually have face validity – the questions generally have a clear relationship to the objectives being assessed
    • Can provide insights into the reasons for the participants’ beliefs, attitudes, and experiences
    • Interviewers can prompt respondents to provide more detailed responses
    • Interviewers can respond to questions and clarify misunderstandings
    • Telephone interviews can be used to reach distant respondents
    • Can provide a sense of immediacy and personal attention for respondents
    • Open-ended questions allow faculty to uncover unanticipated results
  • Potential limitations:
    • Generally provide indirect evidence about student learning
    • Their validity depends on the quality of the questions
    • Poor interviewer skills can generate limited or useless information
    • Can be difficult to obtain a representative sample of respondents
    • What people say they do or know may be inconsistent with what they actually do or know
    • Can be relatively time-consuming and expensive to conduct, especially if interviewers and interviewees are paid or if the no-show rate for scheduled interviews is high
    • The process can intimidate some respondents, especially if asked about sensitive information and their identity is known to the interviewer
    • Results can be difficult and time-consuming to analyze
    • Transcriptions of interviews can be time-consuming and costly
Focus Groups
  • Potential strengths:
    • Are flexible in format and can include questions about many issues
    • Can provide in-depth exploration of issues
    • Usually have face validity – the questions generally have a clear relationship to the objectives being assessed
    • Can be combined with other techniques, such as surveys
    • The process allows faculty to uncover unanticipated results
    • Can provide insights into the reasons for the participants’ beliefs, attitudes, and experiences
    • Can be conducted within courses
    • Participants have the opportunity to react to each other’s ideas, providing an opportunity to uncover the degree of consensus on ideas that emerge during the discussion
  • Potential limitations:
    • Generally provide indirect evidence about student learning
    • Require a skilled, unbiased facilitator
    • Their validity depends on the quality of the questions
    • Results might not include the full array of opinions if only one focus group is conducted
    • What people say they do or know may be inconsistent with what they actually do or know
    • Recruiting and scheduling the groups can be difficult
    • Time-consuming to collect and analyze data
Reflective Essays
  • Potential strengths:
    • Are flexible in format and can include questions about many issues
    • Can be administered to large groups of respondents
    • Usually have face validity – the writing assignment generally has a clear relationship to the objectives being assessed
    • Can be conducted relatively quickly
    • Allow faculty to uncover unanticipated results
    • Can provide insights into the reasons for the participants’ beliefs, attitudes, and experiences
    • Can provide direct assessment of some learning objectives
  • Potential limitations:
    • Generally provide indirect evidence about student learning
    • Their validity depends on the quality of the questions
    • Conclusions can be inaccurate if biased samples are obtained
    • Results might not include the full array of opinions if the sample is small
    • What people say they do or know may be inconsistent with what they actually do or know
    • Responses can be difficult and time-consuming to analyze

Which type of assessment compares students within the same school year?

A benchmark or interim assessment compares student understanding or performance against a set of uniform standards within the same school year.

What kind of assessment compares a student's performance to other students of the same age or grade?

Norm-referenced assessments are standardized assessments that are designed to compare a student's performance against a national sample of students who are the same age or in the same grade.

What are the 4 types of assessment?

Four commonly identified types of assessment are diagnostic, formative, interim, and summative.

What type of assessment is a performance assessment?

Performance assessment is an approach to educational assessment that requires students to directly demonstrate what they know and are able to do through open-ended tasks, such as constructing an answer, producing a project, or performing an activity.