
What are exams supposed to measure?

Started by September 30, 2011 03:24 PM
7 comments, last by nilkn 13 years, 1 month ago
Hi,

So, just to make this clear from the beginning: this isn't supposed to be a whine thread; I'm interested in other people's experiences and thoughts on the matter.

I've been studying for quite a while now, especially because I switched rather late from physics to CSE. This obviously means I've gone through lots of exams of all kinds. Now, the more I get to actually "do stuff" for projects, internships, etc., the more annoyed I get with exams, which up to now I had just considered a part of the education. More precisely, I'm starting to question what they are trying to measure and how they do it. In the last exam session there were two exams that were, in my opinion, especially badly "designed".

One was in a course on fluid dynamics. The problem there was that whoever wrote the questions thought it was a good idea to put "interesting questions" into the exam; by that I mean questions that were very different from the ones in the exercises and exams of earlier years. Now, you might say that the whole point of doing science and engineering is to find solutions to new problems. Sure, but if you have people solve five "new" problems in an exam that lasts 90 minutes, that's going to be an issue. We (as in the people who took the exam) discussed the questions afterwards and mostly came to the conclusion that we would have been able to solve them given a reasonable time frame. But during a 90-minute exam, where solving five "regular" problems is already a close call, you usually just skip the "new-looking" problems on the first pass and try to come back to them later; if you then spend 30 minutes solving one of them, you are already out of time... What basically happened was that literally everyone partially solved the exact same two questions out of five, and in the end they had to MASSIVELY distort the scale just so they wouldn't have to fail everyone. There were 30 points total to be had, the grade of 3.25 was given to people who got 1 (!) point, every additional point amounted to roughly 0.25 grades, and pretty much everyone ended up in the range of 3.75 to 4.75.

Sidenote: I guess I should explain the Swiss grading system. We get grades from 1 to 6, with 6 being the highest and 1 the worst. You usually pass if you get at least a 4. So the grades are something like: 6 = perfect, 5 = good, 4 = sufficient, 3 = insufficient, 2 = bad, 1 = WTF? (1s are so bad that you usually have to hand in empty sheets to get one.)

Now I don't think this exam turned out to be a real problem for anyone when it came to failing. But the actual grades we got are mostly noise and don't assess our actual capabilities.

The other example was a course called "Technische Informatik" (I'm not sure the literal translation, "technical informatics", makes sense in English). It's basically about the general things you should know about computer hardware and operating systems; the lecture covered topics like virtual memory, caching, thread/process management and synchronization, etc. So it's not a particularly hard class, since you know most of the material anyway when you are a "programmer". They also allowed us to bring lecture notes and the like into the exam. Again, the only thing that made this exam "hard" was a draconian time limit: a total of 22 questions, most with parts a)-c), in 120 minutes. I haven't spoken to anyone who actually managed to finish, which is sad, since by flipping through the remaining questions when time was called, most of us concluded that we knew the answers to those as well. So this exam was mostly a measure of writing speed and not of actual ability. Working fast is of course also relevant to actual ability, but I don't think that argument applies when most of the questions are things like enumerating network layers or calculating the completion time of a disk scheduling algorithm by hand.
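To illustrate how mechanical that last kind of question is, here is a minimal sketch. The actual exam question isn't given above, so the choice of first-come-first-served (FCFS) scheduling and all of the numbers below are made up:

[code]
# Illustrative only -- hypothetical request queue and seek time.
# Under FCFS the head services requests in arrival order, so the "completion
# time" is just the sum of absolute head movements times the seek cost.

def fcfs_seek_time(start, requests, ms_per_cylinder=0.1):
    """Return (total cylinders moved, total seek time in ms) under FCFS."""
    head = start
    moved = 0
    for r in requests:
        moved += abs(r - head)   # distance to the next requested cylinder
        head = r                 # head is now positioned there
    return moved, moved * ms_per_cylinder

# Made-up numbers, just to show the by-hand nature of the calculation:
print(fcfs_seek_time(53, [98, 183, 37, 122, 14, 124, 65, 67]))
# -> (640, 64.0): 640 cylinders traversed, 64 ms of seek time
[/code]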

Especially these "speed writing contest"-type exams are something that I encountered multiple times so far... I generally feel a lot of "examination technique" cater to auxiliary abilities like speed and the ability to memorize large amounts of data in a short time, even if that is not the primary skill required to master the subject matter in question.

What are your experiences and opinions on this?

[quote]
What are your experiences and opinions on this?
[/quote]

My experience is that, on longer exams where time is an issue (like your 20-question one), the teacher generally expects to curve the test because not every question gets answered. I found this more true in university than in high school. Though I'm not sure that 20 questions in 120 minutes is that unreasonable: 6 minutes per question is still 2 minutes per question part. Most questions with parts a-c usually have two short answers and one long answer, so you typically have 1 minute for each of the first two and 4 minutes for the last one. That's actually a fairly decent amount of time.

On the short test, the teacher probably just fucked up. If you got a curve, it's not worth complaining about.

There is also something to be said for the fact that students doing universally badly on some tests sometimes just means the students weren't adequately prepared. Most often it's a sign of bad teaching, but I've been in plenty of classes where the students did universally badly through no fault of the teacher.
Frequently teachers just royally screw up an exam's design. One introductory combined CS/EE course I took had a final exam with a 3-hour limit. I scored the highest in the class with 60-some percent of the questions complete; I never even saw the third (of three) sections of the test, and I was moving really fast. The class average on the exam was somewhere around 35%. As the earlier poster indicated, as long as the exam is curved, it's not worth fighting over (I think I wound up around 117% after the curve, with the class average around 80%). Move on, and don't take classes from the same professor again if possible.
I've had teachers/professors employ vastly varying approaches.

One that stood out to me was my assembly language professor. He always made his tests open-materials (textbook, notes, cheat sheet, etc.), so you could use whatever you brought into class (no neighbors, and no computers/internet). His mentality was that in the real world, your boss isn't going to care if you look up the answer; it was more important that you could narrow down the topic so you could look up the answer in a timely manner. For that class and that approach, I felt a time limit was very reasonable and fit well with the format (your boss doesn't care if you look up the answer, but he does want the project done in a reasonable amount of time). Most questions were things you could get directly from the book, provided you had read the chapters and/or knew the subject matter. And the tests usually had a problem to solve in handwritten code as well. If you attended the classes, we usually covered a very similar problem, so you would have it available in your notes.

I guess that style didn't push us to solve new problems on the fly, but it felt like it gauged your knowledge of the particular subject well, and it included both answering questions and writing code (although I'd usually question the "usefulness" of having to write accurate code by hand).


Especially these "speed writing contest"-type exams are something that I encountered multiple times so far... I generally feel a lot of "examination technique" cater to auxiliary abilities like speed and the ability to memorize large amounts of data in a short time, even if that is not the primary skill required to master the subject matter in question.

What are your experiences and opinions on this?


An exam just measures your ability to take an exam--nothing more. If you get good grades on your exams, it means that you are good at taking exams; it doesn't say anything about other skills that may be required in the real world. If they wanted to test your ability to program, they'd have you write a program. If they wanted to test your understanding of a concept, they'd have you explain it in detail in a presentation or give a lecture on the topic. It may seem like they are testing your comprehension of concepts, but they are not. Like you said, they are testing your ability to absorb information in a very short time. I've had many exams where I solved a problem completely differently from the way we did it in class, and the teacher even admitted that it was correct, but it still got marked wrong because I didn't do it the "right way." (One striking example is where I used real analysis to prove a lemma in asymptotic analysis in order to get a trivial proof of the running time of some complicated algorithm by applying L'Hopital's rule.)
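The post doesn't state the actual lemma, so the following is only a minimal sketch of the general technique being described: L'Hopital's rule turns a growth-rate comparison into a limit of derivatives.

[code]
% Minimal illustrative example (not the lemma from the post): ln n grows
% slower than any positive power of n, by one application of L'Hopital's rule.
\[
  \lim_{n \to \infty} \frac{\ln n}{n^{\varepsilon}}
    = \lim_{n \to \infty} \frac{1/n}{\varepsilon\, n^{\varepsilon - 1}}
    = \lim_{n \to \infty} \frac{1}{\varepsilon\, n^{\varepsilon}}
    = 0
  \qquad (\varepsilon > 0),
\]
% so \ln n \in o(n^{\varepsilon}), and any polynomial term dominates a
% logarithmic factor in a running-time bound.
[/code]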

As such, there isn't much difference between being good at exams and being good at trivia. I approach it as a kind of silly game/sport where the students are the players and the teacher is the referee keeping score. Having high grades doesn't mean anything other than implying that you wasted a lot of time on something that no one will care about a couple of years after you graduate. I've met plenty of people with high GPAs who can't program. (For the record, I have good grades, but not because I put in extra study time--I have a good memory.) In the end, school is largely a waste of time, but it seems to be a necessary evil for obtaining a decent-paying job.
Good judgment comes from experience; experience comes from bad judgment.

[quote]
I generally feel a lot of "examination techniques" cater to auxiliary abilities like speed and the ability to memorize large amounts of data in a short time, even if those are not the primary skills required to master the subject matter in question.

What are your experiences and opinions on this?
[/quote]


My experience and opinions on exams? They serve their purpose. In general, exams can show whether one student has a better understanding of the subject than the others.

If it takes you 30 minutes to solve a problem, I guarantee there is somebody out there who could solve the same problem in less than 5 minutes. Why did it take you 30 minutes? Because you didn't spend enough time studying. And even if you had studied hard, one quarter/semester is not enough to cover the diverse range of subjects taught. Here's an example of a course description for a freshman physics class from Stanford:

[quote name='Stanford']
Advanced freshman physics. For students with a strong high school mathematics and physics background contemplating a major in Physics or interested in a rigorous treatment of physics. Special theory of relativity and Newtonian mechanics with multivariable calculus. Postulates of special relativity, simultaneity, time dilation, length contraction, the Lorentz transformation, causality, and relativistic mechanics. Central forces, contact forces, linear restoring forces. Momentum transport, work, energy, collisions. Angular momentum, torque, moment of inertia in three dimensions. Damped and forced harmonic oscillators. Recommended prerequisites: Mastery of mechanics at the level of AP Physics C and AP Calculus B/C or equivalent.
[/quote]
So if you take this class, you are learning Newtonian physics, Einstein's special relativity, the Lorentz transformation, time dilation, etc. And at the end of the semester, you have a final exam that tests your knowledge of all of these subjects. Do you think an average student will be able to put all of this into his head within a period of four months?
For somebody who has been studying these subjects for years, would the final exam be as difficult? No. He'd have a much better chance of scoring a perfect score on the exam.

This is where education fails, not the exams.
On 9/30/2011 at 4:22 PM, alnite said:

This is where education fails, not the exams.

Definitely true.

But that doesn't mean the exams aren't hopelessly broken as well. When you get an exam with three sections that essentially contains only three problems, because all 30 or so questions in each section are the exact same problem with different numbers... it's not that the exam doesn't test your ability to solve those three problems. It's that it seems deliberately designed so that the information about whether or not you understand how to solve the problems is completely overwhelmed in your score by the "noise" of your rote execution ability.

But yeah, it's a double whammy, because the professors are generally just as bad at teaching as they are at testing.

You learn pretty quickly in school that the "system" is just running a grading meta-game which is separate from education, and that extracting an education is entirely your responsibility. You can focus on one or both, but they're relatively independent.

[That being said, I still think that earning a relevant degree is the most efficient way to become well educated in a subject and (aside from applicable experience) to prove to interested parties that you are.]


Especially these "speed writing contest"-type exams are something that I encountered multiple times so far... I generally feel a lot of "examination technique" cater to auxiliary abilities like speed and the ability to memorize large amounts of data in a short time, even if that is not the primary skill required to master the subject matter in question.

What are your experiences and opinions on this?


Speed-contest exams definitely have their place. I will be eternally grateful to the math teacher I had in the last two years of high school for pushing me to my limits in how fast I could solve that type of exam problem (differentiating and integrating real functions, analytic geometry problems, probability and statistics problems). Obviously, that doesn't mean all teaching should be about rote exercise. Learning about high-level concepts is important too, but you really do need both.

I'm now nearing the end of my PhD in math (hopefully ;) ), and the thing is, when you're thinking about something high-level, there is often a need to check something low-level as well. If doing the low-level computation takes you too much time, it breaks the flow of the high-level thought. Furthermore, a lot of high-level thinking is about recognizing patterns, which implicitly requires doing many, many low-level things confidently and quickly.

I also see this when I'm TA'ing students (including engineering students) who are taking, e.g., linear algebra or analysis. It is not unusual for them to arrive at completely nonsensical solutions without realizing that the solutions don't make sense. For example, they might solve some system of equations, make a mistake along the way, and obtain a result that is obviously wrong and could easily be seen to be wrong with some basic arithmetic. If they were able to do this simple arithmetic in their heads rapidly, and had a habit of doing so, they would make fewer mistakes. And it's not just about making fewer mistakes, either: when you often make mistakes, you might fail to understand the big picture properly, because things don't fall into place nicely, and you just end up confused.
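As a minimal sketch of the kind of check being described (the system and the numbers are made up for illustration): after solving, substitute the answer back into the original equations and look at the residual.

[code]
# Illustrative only -- a hypothetical 2x2 system, just to show the habit of
# plugging a solution back into the original equations as a sanity check.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])      # coefficients of the system A x = b
b = np.array([5.0, 10.0])

x = np.linalg.solve(A, b)       # the "solved" answer: [1.0, 3.0]
residual = A @ x - b            # plug it back in; should be (numerically) zero

print(x, residual)              # anything far from zero means a mistake slipped in
[/code]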

Bottom line: Being able to perform stupid and basic skills quickly and confidently is really important, not because it's a goal in itself (after all, you could use a computer to do the computations), but because it is an important tool when learning the higher level concepts that really matter.
Widelands - laid back, free software strategy
In my opinion, exams are not meant to "measure" anything at all. They are meant to sort out students. Or, put differently, to adjust the number of students to the available training capacity.

When I started uni, my year had 400 people (and that was after going through numerus clausus and an aptitude test). The first two years were good for absolutely nothing, except to comb out 50 people every semester in exams, and another 100 at the (totally pointless) preliminary examination. That meant that, after those two years, only 50 people were left, which was incidentally exactly the number of training positions available. The actual education took place after that. Ironically, I passed the preliminary examination having no clue whatsoever about biochemistry. I thought I might as well try, and got through. The joy of multiple-choice tests...
The reality is that exams like this are just a consequence of how many students take the classes and how few professors are available to teach them. Academia doesn't get enough funding to hire enough full-time professors to reduce the class sizes to reasonable numbers.

You're right that these exams are nowhere close to accurately and fairly measuring students' mastery of the material, but when there are so many students there aren't many other options.


However, if you are getting exams like this in small and manageable classes, then your professor is just being lazy. There are much better examination techniques. My school is "renowned" for its Honor Code, which basically makes a lot of the tests take-home. The problems are much harder and more open-ended as a result, but they pose a true challenge that you cannot overcome with mere speed or memorization.

