


Question

3. Many instructors like to see the distribution of scores on an exam before they assign grades. You're working for an art history professor who has asked you to develop a program that will read all of the scores from an exam and print out a bar chart that shows their distribution. There are at most 250 students in the class. Use or modify the SortedList class from this chapter as necessary to help you do this task. The integer scores are entered into a file called exams.dat in random order. Your program's job is to read in the data, sort it, and output a bar chart with one * (star) for each exam that has a particular score. The first bar in the chart should be the highest score, and the last bar in the chart should be the lowest score. Each line of output should start with the score value, followed by the appropriate number of stars. When there is a score value that didn't appear on any exams, just output the value and no stars, then go to the next line.

Explanation / Answer
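A minimal sketch of the requested program, written here in C++ on the assumption that the course's textbook is C++-based. The chapter's SortedList class is not reproduced in the source, so a std::map, which keeps its keys sorted, stands in for it:

    // Sketch only (not the textbook's solution): reads the integer
    // scores from exams.dat and prints one line per score value, from
    // the highest score seen down to the lowest, with one * per exam.
    // A std::map (sorted by key) stands in for the chapter's SortedList.
    #include <fstream>
    #include <iostream>
    #include <map>

    int main() {
        std::ifstream in("exams.dat");
        if (!in) {
            std::cerr << "Cannot open exams.dat\n";
            return 1;
        }

        std::map<int, int> counts;  // score -> how many exams got it
        int score;
        while (in >> score)
            ++counts[score];
        if (counts.empty())
            return 0;

        int high = counts.rbegin()->first;  // highest score read
        int low = counts.begin()->first;    // lowest score read
        for (int s = high; s >= low; --s) {
            std::cout << s << ' ';
            // A score value that never occurred prints with no stars.
            for (int i = 0; i < counts[s]; ++i)
                std::cout << '*';
            std::cout << '\n';
        }
        return 0;
    }

With at most 250 scores, efficiency is not a concern; the same high-to-low walk works unchanged if the scores are first inserted into the chapter's SortedList instead.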


Comparisons Based on Learning Relative to Improvement and Ability


The following two comparisons--with improvement and ability--are sometimes used by instructors in grading students. There are such serious philosophical and methodological problems related to these comparisons that their use is highly questionable for most educational situations.

Relative to Improvement . . .


Students' grades may be based on the knowledge and skill they possess at the end of a course compared to their level of achievement at the beginning of the course. Large gains are assigned high grades and small gains are assigned low grades. Students who enter a course with some pre-course knowledge are obviously penalized; they have less to gain from a course than does a relatively naive student. The posttest-pretest gain score is more error-laden, from a measurement perspective, than either of the scores from which it is derived. Though growth is certainly important when assessing the impact of instruction, it is less useful as a basis for determining course grades than end-of-course competence. The value of grades that reflect growth in a college-level course is probably minimal.

Relative to Ability . . .


Course grades might represent the amount students learned in a course relative to how much they could be expected to learn as predicted from their measured academic ability. Students with high ability scores (e.g., scores on the Scholastic Aptitude Test or American College Test) would be expected to achieve higher final examination scores than those with lower ability scores. When grades are based on comparisons with predicted ability, an "overachiever" and an "underachiever" may receive the same grade in a particular course, yet their levels of competence with respect to the course content may be vastly different. The first student may not be prepared to take a more advanced course, but the second student may be. A course grade may, in part, reflect the amount of effort the instructor believes a student has put into a course. The high ability students who can satisfy course requirements with minimal effort are penalized for their apparent "lack" of effort. Since the letter grade alone does not communicate such information, the value of ability-based grading does not warrant its use.

A single course grade should represent only one of the several grading comparisons noted above. Expecting a course grade to reflect more than one of these comparisons places too great a communication burden on it. Instructors who wish to communicate more than relative group standing, subject-matter competence, or level of effort must find additional ways to provide such information to each student. Suggestions for doing so are noted near the end of Section V of this booklet.



III. BASIC GRADING GUIDELINES

Two common complaints found on students' post-course evaluations are that grading procedures stated at the beginning of the course were either inconsistently followed or were changed without explanation or even advance notice. Altering or inconsistently following the grading plan is analogous to playing a game whose rules change arbitrarily, sometimes without the players' knowledge. Participating becomes an extremely difficult and frustrating experience. Students are placed in the unreasonable position of never knowing for sure what the instructor considers important. When the rules need to be changed, all of the players must be informed (and hopefully be in agreement).


IV. SOME METHODS OF ASSIGNING COURSE GRADES


Various grading practices are used by college and university faculty. What follows is an examination of the more widely used methods, with a discussion of the advantages, disadvantages, and fallacies associated with each.

Weighting Grading Components and Combining Them to Obtain a Final Grade

Grades are typically based on a number of graded components (e.g., exams, papers, projects, quizzes). Instructors often wish to weight some components more heavily than others. For example, four combined quiz scores may be valued at the same weight as each of four hourly exam grades. When assigning weights, the instructor should consider the extent to which:

Once it has been decided what weight each grading component should have, the instructor should ensure that the desired weights are actually used. This task is not as simple as it first appears. An extreme example of weighting will illustrate the problem. Suppose that a 40-item exam and an 80-item exam are to be combined so they have equal weight (50 percent-50 percent in the total). We must know something about the spread of scores, or variability (e.g., standard deviation), on each exam before adding the scores together. For example, assume that scores on the shorter exam are quite evenly spread throughout the range 10-40, and the scores on the other are in the range 75-80. Because there is so little variability on the 80-item exam, if we merely add each student's scores together, the spread of scores in the total will be very much like the spread of scores observed on the first exam. The second exam will have very little weight in the total score. The net effect is like adding a constant value to each student's score on the 40-item exam; the students maintain essentially the same relative standing.
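A standard remedy is to standardize each component, converting raw scores to z-scores, before weighting and adding, so that each exam contributes its intended share of the composite's variability. The sketch below is illustrative only; the scores are invented to mimic the ranges in the example above and do not come from the booklet:

    // Illustrative sketch: equalizes two exams' weights by converting
    // each to z-scores before averaging. Scores below are invented.
    #include <cmath>
    #include <iostream>
    #include <numeric>
    #include <vector>

    // Compute (x - mean) / standard deviation for every raw score.
    std::vector<double> standardize(const std::vector<double>& raw) {
        double mean = std::accumulate(raw.begin(), raw.end(), 0.0) / raw.size();
        double var = 0.0;
        for (double x : raw)
            var += (x - mean) * (x - mean);
        double sd = std::sqrt(var / raw.size());
        std::vector<double> z;
        for (double x : raw)
            z.push_back((x - mean) / sd);
        return z;
    }

    int main() {
        std::vector<double> exam1 = {12, 25, 31, 40};  // wide spread, range 10-40
        std::vector<double> exam2 = {75, 77, 78, 80};  // narrow spread, range 75-80

        std::vector<double> z1 = standardize(exam1);
        std::vector<double> z2 = standardize(exam2);

        // 50-50 composite: after standardizing, each exam contributes
        // equally to the spread of the total.
        for (std::size_t i = 0; i < exam1.size(); ++i)
            std::cout << "student " << i + 1 << ": composite z = "
                      << 0.5 * z1[i] + 0.5 * z2[i] << '\n';
        return 0;
    }

Adding the raw totals instead would let the 40-item exam's much larger spread dominate the composite, exactly as described above.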