Quizzes have been widely used to capture our ideas and thoughts. Multiple-choice questions often complement other written or oral forms of examination. Filling them in and evaluating them is easy and relatively fast, which matters especially with a high number of participants. Quizzes can be a valuable learning tool when the right answer immediately follows each choice, a technique actively used in some MOOCs. Carefully selected questions enable a teacher to identify knowledge gaps and seek ways to close them. The more questions are answered, the better the profile of a person that can be built. Some of the most revealing quizzes have well over 30 questions, even though many people may not take the time required to complete them. At the same time, the accuracy of the results is what drives more people to try. We can expect that a rise in the number of questions will lead to decreasing engagement and a higher percentage of people who give up. The same effect can be seen with multi-page web forms, which hide how many fields remain to be filled. This lack of visible progress at each step is what makes us perceive them as uncertain and not deserving of our time. I frequently abandon such forms.
The more participants a quiz has, the better its questions can become. Adjusting their difficulty often happens in light of what is already known. Thus, if all students pass a test but their results are fairly homogeneous, then it is the teacher who may not be able to learn from it. With every data-gathering tool, we need to know what our goals are and how we intend to use the data later. A potential problem is that this is often not communicated to the people whose results are collected. This way, the data may be shared with other parties without the agreement of the person who generated it. We hear that everything we say will be used against us, and we see it daily in the form of online advertisements, which know exactly what we need based on the current day of the week. In short, I think that a quiz should be targeted at improving a person, not a company. The second must be viewed as a result of the first, not vice versa. Yet companies often tend to see people only as mere consumers who fill their Christmas and New Year accounts. It is still rare to find collected data that hasn’t been dedicated to supporting this. It is as if human contact has somehow been lost in this sea of coins; as if our senses have been clouded to anything but the falling coins at the end of a “Fort Boyard” episode, given the time limit to collect them.
The goals of quizzes and commerce don’t always have to intersect. In web design, questionnaires given to clients can sharpen our understanding of the areas we need to improve. They can be used to obtain valuable feedback rather than merely to collect data. Feedback is what helps us learn and grow, especially when it comes from another human being and not from an automatic inference system. A quiz can be used to evaluate different aspects of our work on a scale from 1 to 10. Another useful scheme is to assign different weights to different criteria and look at the result from the combined perspective. This can sometimes be more useful than a performance review where colleagues may be unwilling to say something offensive. This doesn’t mean that we should avoid good manners; just that we should be aware of how they may interfere with our work.
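The weighted-criteria scheme can be sketched in a few lines. This is a minimal illustration, not a prescribed method; the criteria names and weights below are invented for the example.

```python
# Sketch of combining 1-10 ratings with per-criterion weights.
# Criteria and weights are hypothetical, chosen only for illustration.

def weighted_score(ratings, weights):
    """Return the weighted average of ratings (each on a 1-10 scale)."""
    total_weight = sum(weights.values())
    return sum(ratings[c] * weights[c] for c in ratings) / total_weight

ratings = {"design": 8, "usability": 6, "performance": 9}
weights = {"design": 0.5, "usability": 0.3, "performance": 0.2}

print(round(weighted_score(ratings, weights), 2))
```

Shifting the weights changes which aspect of the work dominates the combined perspective, which is exactly what makes the scheme more informative than a single undifferentiated score.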
It may sound strange, but quizzes can help us discover interesting things about ourselves that we are unaware of. You may want to try a quiz by Dr. Martin Seligman to understand what I mean here. When I took it, I found the results strikingly accurate, which isn’t surprising considering all the research behind the questions. When I shared this with other people, their first reaction was disbelief. After all, why waste your time on silly questionnaires? To be honest, I also didn’t expect much, not knowing what I was missing. But I still think that the more we know about who we are, the better we can direct our choices in life. If we expect every quiz to be alike, it will be.
With quizzes, the way we present results to participants can be important. Consider a person who made many mistakes. I remember one (online) case of trying to move a small object through a maze of tiny paths. Any time I hit the boundaries of the path, a skull appeared in full screen, accompanied by a loud and creepy laugh. Concentrating on the small pixels while being surprised so suddenly scared me quite a bit. The results of a quiz shouldn’t be scary or in any way discouraging, and we should carefully evaluate how we present them. When I occasionally take quizzes on zeit.de, I answer around 30% of the questions correctly (not exactly something to be proud of). At the end, they show you, for each question, the chosen and the correct answers, which isn’t the case with many other quizzes. Very often you may be given only a grade, without knowing how it was formed, which isn’t always helpful for improvement. By letting others know the right answers, if there are such, we can improve the quality of the quiz, although this will make it effective only once. Because good quizzes are made for a single, specific use, we often underappreciate the effort of the people who create and fine-tune the questions. Once the answers are well known, a quiz is rarely useful to the same people again, but at least they have learned something. Still, radical openness doesn’t mean that a person scanning the results of his choices to dozens of questions is able to draw conclusions from them quickly enough. We can present the repetitive sequence of question, chosen answer, and correct answer on a single, long page of alternating texts, but this will require constant switches in thinking. Another way is to present the results in a table, where correctness itself can draw selective attention to particular questions and their answers. This allows people to learn which questions were problematic and why, instead of examining every single case.
Every column of the table holds the same type of information, so going through it can be faster. If there are hundreds of questions, we see a colorful mosaic of reds and greens that gives us a more visual way to perceive everything at once. Instead of giving individual explanations to people, as in analog quizzes, we can store the correct answers only once and let everyone discover them for themselves. The questions that arise afterwards can then have a more conceptual nature rather than simply requiring a justification of correctness. At the same time, we should be aware that it is a mistake to always require others to have searched online before they ask. Not only can this deteriorate our explanatory skills, but we neglect other people’s personality based on the assumption that the best information is always online and that others need not bother us when this is the case. An individual will always require personal attention, and we should think carefully before saying otherwise.
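Such a red-and-green results table is straightforward to generate. The sketch below is one possible approach, assuming inline styles for simplicity; the sample questions are invented for illustration.

```python
# Sketch: render quiz results as an HTML table where each row's colour
# reflects correctness, so wrong answers draw selective attention.
# The questions and answers here are hypothetical examples.

def results_table(rows):
    """rows: list of (question, chosen_answer, correct_answer) tuples."""
    html = ["<table>",
            "<tr><th>Question</th><th>Chosen</th><th>Correct</th></tr>"]
    for question, chosen, correct in rows:
        colour = "green" if chosen == correct else "red"
        html.append(
            f'<tr style="background:{colour}">'
            f"<td>{question}</td><td>{chosen}</td><td>{correct}</td></tr>"
        )
    html.append("</table>")
    return "\n".join(html)

rows = [("2 + 2 = ?", "4", "4"),
        ("Capital of France?", "Lyon", "Paris")]
print(results_table(rows))
```

With hundreds of rows, the eye skips the green and lands on the red, which is the selective attention the table form is meant to exploit.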
Quizzes can have time limits, where finding the right answers in a given time more naturally resembles an everyday working environment. In this case, the combined time needed to answer all available questions is seen as more important than the time required for individual ones. Many companies use such quizzes to assess candidates on a variety of tasks in an attempt to improve their hiring process. Yet hiring based entirely on quizzes is no different from hiring based entirely on experience, a one-hour practical test, or the available portfolio. None is better than the others; different people are best at different tasks. It is only the combined perspective that matters. Hiring people because they are particularly good at working in a well-defined system today is a recipe for trouble later, when that system is turned on its head by environmental change that requires fast adaptation. It’s not clear how future behavior can be examined through questions, yet this is where an employee and a company will spend most of their time together.
Let’s say that we want to present the results of a sample quiz online. We have already decided that a table might be a relatively good way to do this. If it has six columns to present everything we know about a single question, we can expect this number to multiply with the number of rows in the table. For each row we would need 1 <tr> and 6 <td> elements, each of which, combined with its closing tag, takes 9 bytes when every character is represented by a single byte. This gives 63 bytes of per-row overhead, plus the sum of the bytes needed to fill all columns with content. Usually we want a high content-to-metadata ratio, so let’s choose 10:1 in our case. This decision leads us to 11 x 63 = 693 bytes average row size. Today we tend to load web pages of around 1.5MB. The data in a table can be extracted at once, so we don’t need as many HTTP requests. If we assume that a megabyte has a million bytes, then 1,500,000 / 693 ≈ 2164 rows. We can store more than 2000 questions with their answers (14,000 elements is still acceptable) before people start to notice that we are abusing their bandwidth. Imagine what knowledge hides behind the answers to 2000 questions. Of course, the more questions we have, the harder it becomes to distinguish them. The fact that we are capable of showing everything at once doesn’t mean that it will be digestible, even in byte-sized chunks.
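The arithmetic above can be checked with a few lines of code. The page budget and the 10:1 ratio are the same assumptions made in the text, not measured values.

```python
# Back-of-envelope check of the estimate above: 7 elements per row
# (1 <tr> + 6 <td>), each open/close tag pair costing 9 bytes, and an
# assumed 10:1 content-to-metadata ratio.

TAG_PAIR = len("<td></td>")           # 9 bytes per element
overhead = 7 * TAG_PAIR               # 63 bytes of markup per row
row_size = overhead + 10 * overhead   # content is 10x the metadata -> 693

page_budget = 1_500_000               # ~1.5 MB, a typical page weight
max_rows = page_budget // row_size

print(overhead, row_size, max_rows)   # 63 693 2164
```

Changing the content-to-metadata ratio is the main lever here: a denser 5:1 table would fit far fewer bytes of content per row but roughly twice as many rows in the same budget.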
Let’s consider another example. We want to present alphabetically the number of points that 5000 people received individually on 20 different tasks, assuming that each task is worth up to 20 points and that the list contains many long names like Radoslav Kostadinov Kostadinov. Then we need:
- 2 x average bytes of the person’s identification number (1-5000) (we decide to place it at the start and the end of the table). Since most people will have a 4-digit ID (the average comes to ≈ 3.77 digits), we will need approx. 8 bytes
- 1 x average bytes for the name (say 30 bytes, although even three names can’t identify a person uniquely)
- 20 x average bytes to represent each task score (0-20), say 20 x 1.5 bytes = 30 bytes
- 1 x average bytes for the total score (0-400), say 3 bytes
- 1 x metadata (1 <tr> + 20 <td>s for tasks + 2 <td>s for ID + 1 <td> for name + 1 <td> for total score): 25 x 9 = 225 bytes
As we can see, we require around 8 + 30 + 30 + 3 + 225 = 296 bytes per row, which is less than half the size in the previous example (we are mostly storing digits), but here we use 25 elements per row, almost 4 times more than the 7 elements before. If a machine is capable of showing, say, 25,000 elements without noticeable performance decrease, this means that we can only show the data of the first 1000 people on the list (and maybe not even that, due to the extreme load!) before the number of elements grows beyond what the machine can present. Although the data itself would take only 5000 x 296 = 1,480,000 bytes ≈ 1.48MB, we may not be able to present everything at once due to the number of elements our machine can handle. Thus, even situations that look similar on the surface may exhibit different bottlenecks that call for different approaches. Any quiz should remain only temporary if we want to keep our perspective fresh.
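The second estimate can be verified the same way. The per-field averages and the 25,000-element rendering limit are the assumptions stated above, not measurements.

```python
# Check of the 5000-person score table estimate, using the per-field
# averages assumed in the list above.

id_bytes    = 2 * 4      # ID at both ends of the row, ~4 digits each -> 8
name_bytes  = 30         # assumed average name length
task_bytes  = 20 * 1.5   # 20 scores of 0-20, ~1.5 digits each -> 30
total_bytes = 3          # total score, 0-400
meta_bytes  = 25 * 9     # 1 <tr> + 24 <td> tag pairs -> 225

row_bytes = id_bytes + name_bytes + task_bytes + total_bytes + meta_bytes
table_bytes = 5000 * row_bytes

ELEMENT_BUDGET = 25_000            # assumed rendering limit of the machine
rows_shown = ELEMENT_BUDGET // 25  # 25 elements per row

print(row_bytes, table_bytes, rows_shown)  # 296.0 1480000.0 1000
```

Here the bottleneck is the element count, not the byte count: the full table fits comfortably in the 1.5MB page budget of the previous example, yet only a fifth of its rows fit under the assumed rendering limit.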