The origin of the phrase ‘rule of thumb’ is uncertain. It might have evolved from the use of the thumb as a measurement device ("rule"). Some claim that it comes from beer brewing before the invention of thermometers, when brewers would use their thumbs to measure the temperature of batches of beer.
The Greek-derived term for a rule of thumb is "heuristic", a term used in philosophy and psychology. In cognitive psychology it refers to experience-based techniques applied to cognitive processes such as problem solving, learning, and logical thinking. In situations where an exhaustive search is impractical, rules of thumb, or heuristic methods, are used to speed up the search for a satisfactory solution. Terms such as educated guess, intuitive judgment, and common sense are considered equivalent to heuristics. In more precise terms, heuristics are strategies that use readily accessible information to solve problems.
In psychology, heuristics are simple, efficient rules built into the mental processes of the individual through evolution or learning. These rules have been proposed to explain how people make decisions, come to judgments, and solve problems, typically when facing complex problems or incomplete information. They work well under most circumstances, but in certain cases they lead to systematic errors. These errors are due to cognitive biases.
A cognitive bias is a deviation in judgment that occurs in particular situations. This deviation from the normally expected thinking process leads to distorted or erroneous perception, inaccurate judgment, illogical interpretation, or what is broadly called irrationality. The concept of "deviation" refers to a standard of comparison with what is normally expected from an individual in a particular situation. That comparison may be with the judgment of people outside the particular situation, or with a set of facts that can be objectively verified by others. A long and ever-growing list of cognitive biases has been identified over the last six decades of research on human judgment and decision making in cognitive science, social psychology, and behavioral economics.
Cognitive biases are examples of mental behavior that develop over the course of an individual's life or are built in through the evolution of the human species. Many cognitive biases are evidently adaptive, because they lead to more effective actions in given contexts or enable faster decisions when speed is of greater value for survival. These are called adaptive biases.
Adaptive bias is the idea that the human brain has evolved to reason adaptively rather than rationally, and that cognitive biases may have evolved as mechanisms to reduce the overall cost of cognitive errors, rather than merely the number of errors, when decisions must be made under conditions of uncertainty.
Example of adaptive bias
The ambiguity effect is an example of adaptive bias. A situation of ambiguity is created by a lack of sufficient information, and decision making in such situations is affected by that lack of information, or "ambiguity". People tend to select options for which the probability of a favorable outcome is known over options for which the probability of a favorable outcome is unknown.
The 30-ball experiment demonstrates the ambiguity effect in human thinking. The test is as follows: a bucket contains 30 balls, colored red, black, and white. There is definite information that ten of the 30 balls are red. The remaining 20 are some mix of black and white, with every combination equally likely: anywhere from 0 black and 20 white to 20 black and 0 white. After choosing an option, the participant draws a ball blindly from the bucket. In option X, drawing a red ball wins the person $100; in option Y, drawing a black ball wins them $100. The expected probability of picking a winning ball is the same for both options. In option X, the probability of drawing a winning ball is known to be 1 in 3 (10 red balls out of 30). In option Y the outcome is uncertain, because there is no information about how many balls are black and how many are white. The difference between the two options is that in option X the probability of a favorable outcome is known, while in option Y it is unknown ("ambiguous"). There is even a chance that option Y offers better odds than option X: if the mix were 20 black and 0 white, the chance of winning would be 2 in 3. Despite this possibility, people show a strong tendency to select option X, where the probability of winning is perceived as certain. The uncertainty about the number of black balls means that option Y is viewed less favorably: although there could be twice as many black balls as red balls, people tend not to take the opposing risk that there may be fewer than 10 black balls. The "ambiguity" behind option Y means that people favor option X, even though the expected probabilities are equivalent.
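The equal expected probabilities can be checked with a short calculation. The sketch below is an illustration, not part of the original experiment; it assumes a uniform prior over the number of black balls, as the setup describes.

```python
from fractions import Fraction

# Urn: 30 balls, 10 of them red; the other 20 are black or white.
# Assumption from the setup: each black count from 0 to 20 is equally likely.
TOTAL = 30
RED = 10

# Option X: win on red. The probability is fixed and known.
p_x = Fraction(RED, TOTAL)

# Option Y: win on black. Average the winning probability over
# all 21 equally likely black counts (0 through 20).
p_y = sum(Fraction(black, TOTAL) for black in range(21)) / 21

print(p_x)                  # 1/3
print(p_y)                  # 1/3: the same expected chance, despite the ambiguity
print(Fraction(20, TOTAL))  # 2/3: best case for option Y (all 20 balls black)
```

Exact rational arithmetic via `fractions.Fraction` avoids any floating-point rounding, so the equality of the two expected probabilities is exact, not approximate.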
One possible explanation of the effect is that people have a rule of thumb (heuristic) to avoid options where information is missing or ambiguous. This will often lead them to seek out the missing information; in many cases, though, the information cannot be obtained. The effect is often the result of calling some particular missing piece of information to the person's attention. The ambiguity effect evidently aided survival in natural conditions.
Assessment of probability by rule of thumb
In daily life people make decisions after assessing the probability of outcomes in situations of uncertainty and ambiguity. In assessing probability, people use rules of thumb. Example: a coin is to be tossed six times. Before the tosses, two possible sequences of outcomes are presented. Sequence one: Head, Head, Head, Tail, Tail, Tail. Sequence two: Head, Tail, Tail, Head, Tail, Head. Which outcome is more probable? A trained statistician would say both are equally probable. But almost all non-professionals would say the second outcome is more probable, because it looks more representative of the outcomes of random events. People judge outcomes that look representative as more probable. This rule of thumb is called the representativeness heuristic.
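A quick calculation confirms the statistician's answer. The sketch below is illustrative: it enumerates all sequences of six fair-coin tosses and shows that any specific sequence, whether it looks "patterned" like HHHTTT or "random" like HTTHTH, has the same probability, (1/2)^6 = 1/64.

```python
from itertools import product

# Two specific orderings of six tosses: one "patterned", one "random-looking".
seq_patterned = ('H', 'H', 'H', 'T', 'T', 'T')
seq_random    = ('H', 'T', 'T', 'H', 'T', 'H')

# A fair coin makes every one of the 2**6 = 64 orderings equally likely,
# so each specific sequence has probability 1/64.
sequences = list(product('HT', repeat=6))
p = 1 / len(sequences)

assert seq_patterned in sequences and seq_random in sequences
print(len(sequences))  # 64
print(p)               # 0.015625, identical for both sequences
```

The representativeness heuristic confuses the probability of a *specific ordering* (identical for all orderings) with the probability of a *mix of heads and tails* (where balanced mixes are indeed more common, because more orderings produce them).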
Availability in mind
Another rule of thumb in assessing probability is that whatever comes readily to mind is judged more probable. This is called the availability heuristic: a mental shortcut that uses the ease with which examples come to mind to judge the probability of events. The availability heuristic operates on the notion that if you can think of it, it must be important. The availability of consequences associated with an act is positively related to perceptions of the magnitude of those consequences. Sometimes this heuristic is beneficial, but the ease with which events come to mind is usually not an accurate reflection of their actual probability.
How media mislead the assessment of probability
Media coverage can fuel this bias through widespread and extensive coverage of unusual events, such as homicides or airline accidents, and lighter coverage of more routine, less sensational events, such as common diseases or car accidents. For example, when asked to rate the probability of various causes of death, people tend to rate more "newsworthy" events as more likely because they can more readily recall an example from memory. In the USA, people rate the chance of death by homicide higher than the chance of death by stomach cancer, even though deaths from stomach cancer are five times more frequent than homicides. Unusual and vivid events like homicides, shark attacks, or lightning strikes are reported in the mass media far more often than common, unsensational causes of death like ordinary diseases. Another instance of biased rating is the relative overestimation of plane-crash deaths compared with car-accident deaths.