Understanding Popper's View on Scientific Discovery and Probability
Chapter 1: Introduction to Karl Popper's Philosophy
Karl Popper, a prominent philosopher of science, challenged several widely accepted positivist views that appeared to falter due to the rapid evolution of scientific thought in the 20th century. His criticisms of positivism can be summarized in three main points.
The positivist viewpoint can be outlined briefly as follows. First, regarding the demarcation problem (defining what constitutes scientific inquiry), only statements devoid of meaning are nonscientific. Disciplines like logic, mathematics, and metaphysics therefore fall within the realm of science because their statements are meaningful, and empirical research is scientific because it rests on observable phenomena and can be confirmed probabilistically. Second, positivists hold that the most probable theory is the one that has received the greatest confirmation. Third, they assert that scientific inquiry begins with observations, from which theories are subsequently developed, and they emphasize a strong logical connection between theories and the observations they aim to explain. A classic example is the correspondence between a triangle in theoretical geometry and a physical triangle, whose definitions overlap.
Refutations of Positivism
Popper proposed that the essence of science lies in the testability and falsifiability of theories, which disqualifies metaphysics, logic, and mathematics from being classified as scientific. This stands in stark contrast to positivists, who argue that what is not meaningful is nonscientific. To Popper, metaphysics may have meaning but lacks testability, while logic and mathematics are inherently analytic and true, thus not subject to testing.
Another critical issue involves the invalidation of inductive universal principles through Modus Tollens, a long-standing dilemma in the philosophy of science. Inductive reasoning fails to justify the acceptance of universal principles, leading Popper to assert that induction does not drive scientific progress. This issue highlights the asymmetry between validation and invalidation.
For instance, from the universal statement "all swans are white" (A), it follows that any particular swan we observe will be white (B). Upon encountering a black swan, we can invalidate the universal statement by Modus Tollens: if A, then B; not B; therefore, not A.
The challenge of arriving at a universal statement—“all A’s are B”—through induction is evident; a single observation can effectively refute it.
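The asymmetry can be made concrete with a small sketch. The swan data below is invented for illustration; the point is that no finite number of confirming observations proves a universal statement, while a single counterexample refutes it:

```python
# Popper's asymmetry between verification and falsification, illustrated
# with the swan example. A universal claim "all A's are B" cannot be
# verified by any finite sample, but one counterexample refutes it.

def is_refuted(universal_predicate, observations):
    """Return True if any observation falsifies the universal claim (Modus Tollens)."""
    return any(not universal_predicate(obs) for obs in observations)

def all_swans_white(swan):
    return swan["color"] == "white"

# Many confirming instances leave the claim unrefuted, yet still unproven...
confirming = [{"color": "white"} for _ in range(1000)]
print(is_refuted(all_swans_white, confirming))  # False

# ...while a single black swan refutes it.
print(is_refuted(all_swans_white, confirming + [{"color": "black"}]))  # True
```

Note that `is_refuted` returning `False` tells us only that the claim has survived testing so far, which is exactly the corroboration-without-verification distinction Popper draws.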
The Role of Probability in Theory Development
Popper also critiqued the positivist approach to probability in theory formation. In his work, "The Logic of Scientific Discovery," he argues that assigning probabilities to theories is practically infeasible. No foundational statement has a probability of 1, and since a theory is in effect the conjunction of its foundational statements, each additional statement lowers the theory's overall probability. Consequently, any theory with multiple foundational statements becomes exceedingly improbable, especially in comparison to simpler theories. This concept can be illustrated through the conjunction fallacy presented by Amos Tversky and Daniel Kahneman:
Consider Linda, a 31-year-old single woman with a philosophy background, active in social justice movements. Which statement is more probable?

1. Linda is a bank teller.
2. Linda is a bank teller who is involved in the feminist movement.
A conjunction can never be more probable than either of its conjuncts: P(A and B) ≤ P(A). Despite our instincts suggesting the second statement is more likely given the context, conjoining two probabilistic statements can only diminish or, at best, preserve the overall probability. Hence, Popper dismisses the use of probability in scientific theory development.
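The arithmetic behind both points, the Linda problem and Popper's worry about theories with many foundational statements, can be sketched numerically. The probability values below are arbitrary illustrative assumptions, not figures from the text:

```python
# Conjunction rule: P(A and B) = P(A) * P(B|A) <= P(A).
# All probabilities here are invented for illustration.

p_teller = 0.05                 # assumed P(Linda is a bank teller)
p_feminist_given_teller = 0.3   # assumed P(feminist | bank teller)

p_conjunction = p_teller * p_feminist_given_teller
print(p_conjunction <= p_teller)  # True: the conjunction is never more probable

# Popper's point: a theory conjoining n independent foundational statements,
# each with probability 0.9, rapidly becomes improbable as n grows.
p_each = 0.9
for n in (1, 5, 10, 20):
    print(n, round(p_each ** n, 4))
```

Even with each individual statement at 0.9, a twenty-statement theory ends up with a probability near 0.12, which is why, on the positivist metric, richer theories look worse than trivial ones.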
Instead, he suggests assessing theories by their testability, their falsifiability, and their ability to endure rigorous testing as the metric for determining which theories warrant consideration. Critics argue that this yields a non-quantitative approach to theory discovery and introduces subjectivity into defining what constitutes a "severe" test. Without probability as a guiding metric, it becomes difficult to compare two equally testable and falsifiable theories (Popper might advocate testing both). While Popper has faced criticism for seeming to disregard rationality in theory discovery, he contends that a non-quantitative approach does not imply an irrational one. Nevertheless, the vagueness surrounding the concept of a "severe" test remains a point of contention.
Popper viewed positivists as facing a dilemma: to propel scientific advancement, more universal theories with additional foundational statements must supplant simpler theories with fewer statements. However, why not utilize probability differently?
It is puzzling why Popper didn’t propose a modification to the positivist stance instead of outright rejection. For example, in scenarios where a theory is more testable and falsifiable, one could avoid comparing it to theories with fewer foundational statements. Probabilities could be employed to differentiate between equally testable theories with similar foundational statements, based on their performance in tests. However, such probabilities might depend excessively on empirical evidence, which is likely more abundant for one theory than another. What about cases where the testability and falsifiability differ?
It seems intuitive that theories regarded as less testable should receive lower probabilities, and those regarded as more testable higher ones, especially when both withstand rigorous examination. Could we not adjust a theory's probability based on its test results relative to its level of testability and falsifiability? For instance, if the theory of gravity undergoes extensive testing and succeeds, is there no way to compare its probability against a less testable theory, such as Aristotle's physics? If two theories differ in testability and falsifiability, the more testable one should, after surviving its tests, be considered more probable, reversing the initial probability assignment on which the richer, more testable theory started out less probable. How we might revise a theory's probability after rigorous testing remains an open question.
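One standard way to formalize the "revision after testing" the paragraph asks about is Bayes' rule. To be clear, this is not Popper's proposal (he rejected assigning probabilities to theories); it is offered only as a sketch of the arithmetic, with all numbers being illustrative assumptions:

```python
# Hedged sketch: Bayesian updating after a theory passes a test.
# NOT Popper's method; all probability values are illustrative assumptions.

def bayes_update(prior, p_pass_if_true, p_pass_if_false):
    """Posterior probability of a theory after it passes one test."""
    p_pass = prior * p_pass_if_true + (1 - prior) * p_pass_if_false
    return prior * p_pass_if_true / p_pass

# A highly testable theory makes a risky prediction: a false theory would
# rarely pass (p_pass_if_false is low), so passing is strongly confirming.
posterior_risky = bayes_update(prior=0.5, p_pass_if_true=0.99, p_pass_if_false=0.05)

# A barely testable theory passes almost regardless of its truth, so
# passing barely moves its probability.
posterior_safe = bayes_update(prior=0.5, p_pass_if_true=0.99, p_pass_if_false=0.95)

print(round(posterior_risky, 3))  # large update toward the theory
print(round(posterior_safe, 3))   # almost no update
```

On this sketch, surviving a severe test (one a false theory would likely fail) raises a theory's probability far more than surviving an easy one, which mirrors the intuition in the paragraph above.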
Popper suggests in "The Logic of Scientific Discovery" that "the degree of corroboration of two statements may not be comparable in all cases, any more than the degree of falsifiability; we cannot define a numerically calculable degree of corroboration but can only speak in terms of positive and negative degrees of corroboration." It remains ambiguous whether Popper believes such a method for establishing a probability system exists or could ever exist. Nevertheless, he proposes that logical probabilities independent of empirical evidence may be used to weigh competing theories.
Finally, Popper posits that theory discovery begins with theoretical problems, leading to hypothesis formation and subsequent testing through observation. For Popper, observations serve solely to test theories, not to confirm them, as he completely abandons the notion of theory verification in favor of falsification.
Conclusions
In summary, Popper made several significant contributions to the philosophy of science. He redefined the criteria for what constitutes science, rejected induction as a means of justifying scientific claims, and refuted the notion that probability can effectively aid in theory discovery. Although many scientific practices do not aim to falsify theories, those theories that endure rigorous testing continue to garner our trust. Rather than describing how science operates, Popper can be seen as establishing a standard for how science can approach truth without requiring absolute verification.
In the first video, titled "Karl Popper vs Induction," the discussion revolves around Popper’s criticisms of induction and his alternative approaches to scientific reasoning.
The second video, "Popper on Demarcation Science vs Pseudoscience (Lecture 6, Video 2 of 3)," explores Popper's views on distinguishing between science and pseudoscience, emphasizing his demarcation criteria.