
Hypothesis Testing – A Deep Dive into Hypothesis Testing, The Backbone of Statistical Inference

  • September 21, 2023

Explore the intricacies of hypothesis testing, a cornerstone of statistical analysis. Dive into methods, interpretations, and applications for making data-driven decisions.


In this blog post we will learn:

  • What is Hypothesis Testing?
  • Steps in Hypothesis Testing
      ◦ 2.1. Set up Hypotheses: Null and Alternative
      ◦ 2.2. Choose a Significance Level (α)
      ◦ 2.3. Calculate a test statistic and P-Value
      ◦ 2.4. Make a Decision
  • Example: Testing a new drug
  • Example in Python

1. What is Hypothesis Testing?

In simple terms, hypothesis testing is a method used to make decisions or inferences about population parameters based on sample data. Imagine being handed a die and asked whether it's biased. By rolling it a few times and analyzing the outcomes, you'd be engaging in the essence of hypothesis testing.

Think of hypothesis testing as the scientific method of the statistics world. Suppose you hear claims like “This new drug works wonders!” or “Our new website design boosts sales.” How do you know if these statements hold water? Enter hypothesis testing.

2. Steps in Hypothesis Testing

  • Set up Hypotheses: Begin with a null hypothesis (H0) and an alternative hypothesis (H1).
  • Choose a Significance Level (α): Typically 0.05, this is the probability of rejecting the null hypothesis when it's actually true. Think of it as the chance of accusing an innocent person.
  • Calculate a Test Statistic and P-Value: Gather evidence (data) and calculate a test statistic.
  • P-value: This is the probability of observing the data (or something more extreme), given that the null hypothesis is true. A small p-value (typically ≤ 0.05) suggests the data is inconsistent with the null hypothesis.
  • Decision Rule: If the p-value is less than or equal to α, you reject the null hypothesis in favor of the alternative.

2.1. Set up Hypotheses: Null and Alternative

Before diving into testing, we must formulate hypotheses. The null hypothesis (H0) represents the default assumption, while the alternative hypothesis (H1) challenges it.

For instance, in drug testing, H0: "The new drug is no better than the existing one," H1: "The new drug is superior."

2.2. Choose a Significance Level (α)

You collect and analyze data to test H0 against H1. Based on your analysis, you decide whether to reject the null hypothesis in favor of the alternative, or fail to reject it (sometimes loosely described as "accepting" the null hypothesis).

The significance level, often denoted by $α$, represents the probability of rejecting the null hypothesis when it is actually true.

In other words, it’s the risk you’re willing to take of making a Type I error (false positive).

Type I Error (False Positive):

  • Symbolized by the Greek letter alpha (α).
  • Occurs when you incorrectly reject a true null hypothesis. In other words, you conclude that there is an effect or difference when, in reality, there isn't.
  • The probability of making a Type I error is the significance level of the test. Commonly, tests are conducted at the 0.05 significance level, which means there's a 5% chance of making a Type I error.
  • Commonly used significance levels are 0.01, 0.05, and 0.10, but the choice depends on the context of the study and the level of risk one is willing to accept.

Example: If a drug is not effective (truth), but a clinical trial incorrectly concludes that it is effective (based on the sample data), then a Type I error has occurred.

Type II Error (False Negative):

  • Symbolized by the Greek letter beta (β).
  • Occurs when you fail to reject a false null hypothesis. This means you conclude there is no effect or difference when, in reality, there is.
  • The probability of making a Type II error is denoted by β. The power of a test (1 – β) represents the probability of correctly rejecting a false null hypothesis.

Example: If a drug is effective (truth), but a clinical trial incorrectly concludes that it is not effective (based on the sample data), then a Type II error has occurred.

Balancing the Errors:


In practice, there’s a trade-off between Type I and Type II errors. Reducing the risk of one typically increases the risk of the other. For example, if you want to decrease the probability of a Type I error (by setting a lower significance level), you might increase the probability of a Type II error unless you compensate by collecting more data or making other adjustments.

It’s essential to understand the consequences of both types of errors in any given context. In some situations, a Type I error might be more severe, while in others, a Type II error might be of greater concern. This understanding guides researchers in designing their experiments and choosing appropriate significance levels.

2.3. Calculate a test statistic and P-Value

Test statistic: A test statistic is a single number that helps us understand how far our sample data is from what we'd expect under a null hypothesis (a basic assumption we're trying to test against). Generally, the larger the test statistic, the more evidence we have against our null hypothesis. It helps us decide whether the differences we observe in our data are due to random chance or if there's an actual effect.

P-value: The P-value tells us how likely we would be to get our observed results (or something more extreme) if the null hypothesis were true. It is a value between 0 and 1.

  • A smaller P-value (typically below 0.05) means the observation would be rare under the null hypothesis, so we might reject the null hypothesis.
  • A larger P-value suggests that what we observed could easily happen by random chance, so we might not reject the null hypothesis.
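
To make these definitions concrete, here is the standard two-sample (Welch) t-statistic and the corresponding two-sided p-value. This is a textbook formulation rather than something specific to this post; the symbols $\bar{x}_i$, $s_i^2$, and $n_i$ (sample mean, variance, and size of group $i$) are introduced purely for illustration.

$$
t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}},
\qquad
p\text{-value} = P\big(\,|T| \ge |t_{\text{obs}}| \;\big|\; H_0\,\big)
$$

The farther the observed statistic $t_{\text{obs}}$ lies in the tails of the null distribution, the smaller the p-value.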

2.4. Make a Decision

Relationship between $α$ and P-Value

When conducting a hypothesis test:

  • We first choose a significance level ($α$), which sets a threshold for making decisions.
  • We then calculate the p-value from our sample data and the test statistic.
  • Finally, we compare the p-value to our chosen $α$:
      ◦ If the p-value ≤ $α$: We reject the null hypothesis in favor of the alternative hypothesis. The result is said to be statistically significant.
      ◦ If the p-value > $α$: We fail to reject the null hypothesis. There isn't enough statistical evidence to support the alternative hypothesis.

3. Example: Testing a new drug

Imagine we are investigating whether a new drug is effective at treating headaches faster than a placebo.

Setting Up the Experiment: You gather 100 people who suffer from headaches. Half of them (50 people) are given the new drug (the 'Drug Group'), and the other half are given a sugar pill containing no medication (the 'Placebo Group').

  • Set up Hypotheses: Before starting, you make a prediction:
      ◦ Null Hypothesis (H0): The new drug has no effect. Any difference in healing time between the two groups is just due to random chance.
      ◦ Alternative Hypothesis (H1): The new drug does have an effect. The difference in healing time between the two groups is significant and not just by chance.
  • Choose a Significance Level (α): Typically 0.05, this is the probability of rejecting the null hypothesis when it's actually true.

Calculate Test Statistic and P-Value: After the experiment, you analyze the data. The "test statistic" is a number that helps you understand the difference between the two groups in terms of standard units.

For instance, let’s say:

  • The average healing time in the Drug Group is 2 hours.
  • The average healing time in the Placebo Group is 3 hours.

The test statistic helps you understand how significant this 1-hour difference is. If the groups are large and the spread of healing times in each group is small, then this difference might be significant. But if there’s a huge variation in healing times, the 1-hour difference might not be so special.

Imagine the P-value as answering this question: “If the new drug had NO real effect, what’s the probability that I’d see a difference as extreme (or more extreme) as the one I found, just by random chance?”

For instance:

  • A P-value of 0.01 means there's a 1% chance that the observed difference (or a more extreme difference) would occur if the drug had no effect. That's pretty rare, so we might consider the drug effective.
  • A P-value of 0.5 means there's a 50% chance you'd see this difference just by chance. That's pretty high, so we might not be convinced the drug is doing much.
  • If the P-value is less than $α$ (0.05): the results are "statistically significant," and we reject the null hypothesis, believing the new drug has an effect.
  • If the P-value is greater than $α$ (0.05): the results are not statistically significant, and we don't reject the null hypothesis, remaining unsure whether the drug has a genuine effect.

4. Example in Python

For simplicity, let’s say we’re using a t-test (common for comparing means). Let’s dive into Python:

Making a Decision: If the p-value < 0.05, we'd say, "The results are statistically significant! The drug seems to have an effect!" If not, we'd say, "Looks like the drug isn't as miraculous as we thought."

5. Conclusion

Hypothesis testing is an indispensable tool in data science, allowing us to make data-driven decisions with confidence. By understanding its principles, conducting tests properly, and considering real-world applications, you can harness the power of hypothesis testing to unlock valuable insights from your data.



Clinical problem solving and diagnostic decision making: selective review of the cognitive literature

Arthur S Elstein, Alan Schwartz


Correspondence to: A S Elstein [email protected]

Series information

Evidence base of clinical diagnosis

This article reviews our current understanding of the cognitive processes involved in diagnostic reasoning in clinical medicine. It describes and analyses the psychological processes employed in identifying and solving diagnostic problems and reviews errors and pitfalls in diagnostic reasoning in the light of two particularly influential approaches: problem solving 1 – 3 and decision making. 4 – 8 Problem solving research was initially aimed at describing reasoning by expert physicians, to improve instruction of medical students and house officers. Psychological decision research has been influenced from the start by statistical models of reasoning under uncertainty, and has concentrated on identifying departures from these standards.

Summary points

Problem solving and decision making are two paradigms for psychological research on clinical reasoning, each with its own assumptions and methods

The choice of strategy for diagnostic problem solving depends on the perceived difficulty of the case and on knowledge of content as well as strategy

Final conclusions should depend both on prior belief and strength of the evidence

Conclusions reached by Bayes's theorem and clinical intuition may conflict

Because of cognitive limitations, systematic biases and errors result from employing simpler rather than more complex cognitive strategies

Evidence based medicine applies decision theory to clinical diagnosis

Problem solving

Diagnosis as selecting a hypothesis.

The earliest psychological formulation viewed diagnostic reasoning as a process of testing hypotheses. Solutions to difficult diagnostic problems were found by generating a limited number of hypotheses early in the diagnostic process and using them to guide subsequent collection of data. 1 Each hypothesis can be used to predict what additional findings ought to be present if it were true, and the diagnostic process is a guided search for these findings. Experienced physicians form hypotheses and their diagnostic plan rapidly, and the quality of their hypotheses is higher than that of novices. Novices struggle to develop a plan and some have difficulty moving beyond collection of data to considering possibilities.

It is possible to collect data thoroughly but nevertheless to ignore, to misunderstand, or to misinterpret some findings, but also possible for a clinician to be too economical in collecting data and yet to interpret accurately what is available. Accuracy and thoroughness are analytically separable.

Pattern recognition or categorisation

Expertise in problem solving varies greatly between individual clinicians and is highly dependent on the clinician's mastery of the particular domain. 9 This finding challenges the hypothetico-deductive model of clinical reasoning, since both successful and unsuccessful diagnosticians use hypothesis testing. It appears that diagnostic accuracy does not depend as much on strategy as on mastery of content. Further, the clinical reasoning of experts in familiar situations frequently does not involve explicit testing of hypotheses. 3 , 10 – 12 Their speed, efficiency, and accuracy suggest that they may not even use the same reasoning processes as novices. 11 It is likely that experienced physicians use a hypothetico-deductive strategy only with difficult cases and that clinical reasoning is more a matter of pattern recognition or direct automatic retrieval. What are the patterns? What is retrieved? These questions signal a shift from the study of judgment to the study of the organisation and retrieval of memories.

Problem solving strategies

  • Hypothesis testing
  • Pattern recognition (categorisation)
      ◦ By specific instances
      ◦ By general prototypes

Viewing the process of diagnosis as assigning a case to a category brings some other issues into clearer view. How is a new case categorised? Two competing answers to this question have been put forward and research evidence supports both. Category assignment can be based on matching the case to a specific instance ("instance based" or "exemplar based" recognition) or to a more abstract prototype. In the former, a new case is categorised by its resemblance to memories of instances previously seen. 3 , 11 This model is supported by the fact that clinical diagnosis is strongly affected by context—for example, the location of a skin rash on the body—even when the context ought to be irrelevant. 12

The prototype model holds that clinical experience facilitates the construction of mental models, abstractions, or prototypes. 2 , 13 Several characteristics of experts support this view—for instance, they can better identify the additional findings needed to complete a clinical picture and relate the findings to an overall concept of the case. These features suggest that better diagnosticians have constructed more diversified and abstract sets of semantic relations, a network of links between clinical features and diagnostic categories. 14

The controversy about the methods used in diagnostic reasoning can be resolved by recognising that clinicians approach problems flexibly; the method they select depends upon the perceived characteristics of the problem. Easy cases can be solved by pattern recognition: difficult cases need systematic generation and testing of hypotheses. Whether a diagnostic problem is easy or difficult is a function of the knowledge and experience of the clinician.

The strategies reviewed are neither proof against error nor always consistent with statistical rules of inference. Errors that can occur in difficult cases in internal medicine include failure to generate the correct hypothesis; misperception or misreading the evidence, especially visual cues; and misinterpretations of the evidence. 15 , 16 Many diagnostic problems are so complex that the correct solution is not contained in the initial set of hypotheses. Restructuring and reformulating should occur as data are obtained and the clinical picture evolves. However, a clinician may quickly become psychologically committed to a particular hypothesis, making it more difficult to restructure the problem.

Decision making

Diagnosis as opinion revision.

From the point of view of decision theory, reaching a diagnosis means updating opinion with imperfect information (the clinical evidence). 8 , 17 The standard rule for this task is Bayes's theorem. The pretest probability is either the known prevalence of the disease or the clinician's subjective impression of the probability of disease before new information is acquired. The post-test probability, the probability of disease given new information, is a function of two variables, pretest probability and the strength of the evidence, measured by a “likelihood ratio.”
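
Written out (a standard odds form restatement of Bayes's theorem, not spelled out in the article itself), the updating rule is:

$$
\text{post-test odds} = \text{pretest odds} \times \text{likelihood ratio},
\qquad
\text{odds} = \frac{p}{1-p}
$$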

Bayes's theorem tells us how we should reason, but it does not claim to describe how opinions are revised. In our experience, clinicians trained in methods of evidence based medicine are more likely than untrained clinicians to use a Bayesian approach to interpreting findings. 18 Nevertheless, probably only a minority of clinicians use it in daily practice and informal methods of opinion revision still predominate. Bayes's theorem directs attention to two major classes of errors in clinical reasoning: in the assessment of either pretest probability or the strength of the evidence. The psychological study of diagnostic reasoning from this viewpoint has focused on errors in both components, and on the simplifying rules or heuristics that replace more complex procedures. Consequently, this approach has become widely known as “heuristics and biases.” 4 , 19

Errors in estimation of probability

Availability —People are apt to overestimate the frequency of vivid or easily recalled events and to underestimate the frequency of events that are either very ordinary or difficult to recall. Diseases or injuries that receive considerable media attention are often thought of as occurring more commonly than they actually do. This psychological principle is exemplified clinically in the overemphasis of rare conditions, because unusual cases are more memorable than routine problems.

Representativeness —Representativeness refers to estimating the probability of disease by judging how similar a case is to a diagnostic category or prototype. It can lead to overestimation of probability either by causing confusion of post-test probability with test sensitivity or by leading to neglect of base rates and implicitly considering all hypotheses equally likely. This is an error, because if a case resembles disease A and disease B equally, and A is much more common than B, then the case is more likely to be an instance of A. Representativeness is associated with the “conjunction fallacy”—incorrectly concluding that the probability of a joint event (such as the combination of findings to form a typical clinical picture) is greater than the probability of any one of these events alone.
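
In Bayesian terms (an illustrative restatement, not part of the original text): if the evidence E is equally typical of diseases A and B, the likelihood ratio is 1 and the posterior ratio simply equals the prior ratio, so the more common disease remains the more probable one:

$$
\frac{P(A \mid E)}{P(B \mid E)} = \frac{P(E \mid A)}{P(E \mid B)} \times \frac{P(A)}{P(B)} = 1 \times \frac{P(A)}{P(B)}
$$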

Heuristics and biases

  • Availability
  • Representativeness
  • Probability transformations
  • Effect of description detail
  • Conservatism
  • Anchoring and adjustment
  • Order effects

Decision theory assumes that probabilities are processed as they are and are not psychologically transformed from the ordinary probability scale. Prospect theory was formulated as a descriptive account of choices involving gambling on two outcomes, 20 and cumulative prospect theory extends the theory to cases with multiple outcomes. 21 Both prospect theory and cumulative prospect theory propose that, in decision making, small probabilities are overweighted and large probabilities underweighted, contrary to the assumption of standard decision theory. This "compression" of the probability scale explains why the difference between 99% and 100% is psychologically much greater than the difference between, say, 60% and 61%. 22

Support theory

Support theory proposes that the subjective probability of an event is inappropriately influenced by how detailed the description is. More explicit descriptions yield higher probability estimates than compact, condensed descriptions, even when the two refer to exactly the same events. Clinically, support theory predicts that a longer, more detailed case description will be assigned a higher subjective probability of the index disease than a brief abstract of the same case, even if they contain the same information about that disease. Thus, subjective assessments of events, while often necessary in clinical practice, can be affected by factors unrelated to true prevalence. 23

Errors in revision of probability

In clinical case discussions, data are presented sequentially, and diagnostic probabilities are not revised as much as is implied by Bayes's theorem 8 ; this phenomenon is called conservatism. One explanation is that diagnostic opinions are revised up or down from an initial anchor, which is either given in the problem or subjectively formed. Final opinions are sensitive to the starting point (the “anchor”), and the shift (“adjustment”) from it is typically insufficient. 4 Both biases will lead to collecting more information than is necessary to reach a desired level of diagnostic certainty.

It is difficult for everyday judgment to keep separate accounts of the probability of a disease and the benefits that accrue from detecting it. Probability revision errors that are systematically linked to the perceived cost of mistakes show the difficulties experienced in separating assessments of probability from values, as required by standard decision theory. There is a tendency to overestimate the probability of more serious but treatable diseases, because a clinician would hate to miss one. 24

Bayes's theorem implies that clinicians given identical information should reach the same diagnostic opinion, regardless of the order in which information is presented. However, final opinions are also affected by the order of presentation of information. Information presented later in a case is given more weight than information presented earlier. 25

Other errors identified in data interpretation include simplifying a diagnostic problem by interpreting findings as consistent with a single hypothesis, forgetting facts inconsistent with a favoured hypothesis, overemphasising positive findings, and discounting negative findings. From a Bayesian standpoint, these are all errors in assessing the diagnostic value of clinical evidence—that is, errors in implicit likelihood ratios.

Educational implications

Two recent innovations in medical education, problem based learning and evidence based medicine, are consistent with the educational implications of this research. Problem based learning can be understood as an effort to introduce the formulation and testing of clinical hypotheses into the preclinical curriculum. 26 The theory of cognition and instruction underlying this reform is that since experienced physicians use this strategy with difficult problems, and since practically any clinical situation selected for instructional purposes will be difficult for students, it makes sense to provide opportunities for students to practise problem solving with cases graded in difficulty. The finding of case specificity showed the limits of teaching a general problem solving strategy. Expertise in problem solving can be separated from content analytically, but not in practice. This realisation shifted the emphasis towards helping students acquire a functional organisation of content with clinically usable schemas. This goal became the new rationale for problem based learning. 27

Evidence based medicine is the most recent, and by most standards the most successful, effort to date to apply statistical decision theory in clinical medicine. 18 It teaches Bayes's theorem, and residents and medical students quickly learn how to interpret diagnostic studies and how to use a computer based nomogram to compute post-test probabilities and to understand the output. 28
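
The calculation such a nomogram performs can be sketched in a few lines of Python (an illustrative sketch of the underlying arithmetic, not the tool cited in reference 28):

    def post_test_probability(pretest_prob: float, likelihood_ratio: float) -> float:
        """Apply Bayes's theorem in odds form: post-test odds = pretest odds * LR."""
        pretest_odds = pretest_prob / (1 - pretest_prob)
        post_test_odds = pretest_odds * likelihood_ratio
        return post_test_odds / (1 + post_test_odds)

    # Example: 10% pretest probability and a positive test with likelihood ratio 8
    print(round(post_test_probability(0.10, 8), 3))  # 0.471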

We have selectively reviewed 30 years of psychological research on clinical diagnostic reasoning. The problem solving approach has focused on diagnosis as hypothesis testing, pattern matching, or categorisation. The errors in reasoning identified from this perspective include failure to generate the correct hypothesis; misperceiving or misreading the evidence, especially visual cues; and misinterpreting the evidence. The decision making approach views diagnosis as opinion revision with imperfect information. Heuristics and biases in estimation and revision of probability have been the subject of intense scrutiny within this research tradition. Both research paradigms understand judgment errors as a natural consequence of limitations in our cognitive capacities and of the human tendency to adopt short cuts in reasoning.

Both approaches have focused more on the mistakes made by both experts and novices than on what they get right, possibly leading to overestimation of the frequency of the mistakes catalogued in this article. The reason for this focus seems clear enough: from the standpoint of basic research, errors tell us a great deal about fundamental cognitive processes, just as optical illusions teach us about the functioning of the visual system. From the educational standpoint, clinical instruction and training should focus more on what needs improvement than on what learners do correctly; to improve performance requires identifying errors. But, in conclusion, we emphasise, firstly, that the prevalence of these errors has not been established; secondly, we believe that expert clinical reasoning is very likely to be right in the majority of cases; and, thirdly, despite the expansion of statistically grounded decision supports, expert judgment will still be needed to apply general principles to specific cases.

This is the fourth in a series of five articles

Series editor: J A Knottnerus

  Preparation of this review was supported in part by grant RO1 LM5630 from the National Library of Medicine.

Competing interests: None declared.


  • 1. Elstein AS, Shulman LS, Sprafka SA. Medical problem solving: an analysis of clinical reasoning. Cambridge, MA: Harvard University Press; 1978.
  • 2. Bordage G, Zacks R. The structure of medical knowledge in the memories of medical students and general practitioners: categories and prototypes. Med Educ. 1984;18:406–416. doi:10.1111/j.1365-2923.1984.tb01295.x
  • 3. Schmidt HG, Norman GR, Boshuizen HPA. A cognitive perspective on medical expertise: theory and implications. Acad Med. 1990;65:611–621. doi:10.1097/00001888-199010000-00001
  • 4. Kahneman D, Slovic P, Tversky A, editors. Judgment under uncertainty: heuristics and biases. New York: Cambridge University Press; 1982.
  • 5. Sox HC Jr, Blatt MA, Higgins MC, Marton KI. Medical decision making. Stoneham, MA: Butterworths; 1988.
  • 6. Mellers BA, Schwartz A, Cooke ADJ. Judgment and decision making. Ann Rev Psychol. 1998;49:447–477. doi:10.1146/annurev.psych.49.1.447
  • 7. Chapman GB, Sonnenberg F, editors. Decision making in health care: theory, psychology, and applications. New York: Cambridge University Press; 2000.
  • 8. Hunink M, Glasziou P, Siegel J, Weeks J, Pliskin J, Elstein AS, et al. Decision making in health and medicine: integrating evidence and values. New York: Cambridge University Press; 2001.
  • 9. Patel VL, Groen G. Knowledge-based solution strategies in medical reasoning. Cogn Sci. 1986;10:91–116.
  • 10. Groen GJ, Patel VL. Medical problem-solving: some questionable assumptions. Med Educ. 1985;19:95–100. doi:10.1111/j.1365-2923.1985.tb01148.x
  • 11. Brooks LR, Norman GR, Allen SW. Role of specific similarity in a medical diagnostic task. J Exp Psychol Gen. 1991;120:278–287. doi:10.1037//0096-3445.120.3.278
  • 12. Norman GR, Coblentz CL, Brooks LR, Babcock CJ. Expertise in visual diagnosis: a review of the literature. Acad Med. 1992;66(suppl):S78–S83. doi:10.1097/00001888-199210000-00045
  • 13. Rosch E, Mervis CB. Family resemblances: studies in the internal structure of categories. Cogn Psychol. 1975;7:573–605.
  • 14. Lemieux M, Bordage G. Propositional versus structural semantic analyses of medical diagnostic thinking. Cogn Sci. 1992;16:185–204.
  • 15. Kassirer JP, Kopelman RI. Learning clinical reasoning. Baltimore: Williams and Wilkins; 1991.
  • 16. Bordage G. Why did I miss the diagnosis? Some cognitive explanations and educational implications. Acad Med. 1999;74(suppl):S138–S142. doi:10.1097/00001888-199910000-00065
  • 17. Sackett DL, Haynes RB, Guyatt GH, Tugwell P. Clinical epidemiology: a basic science for clinical medicine. 2nd ed. Boston: Little, Brown; 1991.
  • 18. Sackett DL, Richardson WS, Rosenberg W, Haynes RB. Evidence-based medicine: how to practice and teach EBM. New York: Churchill Livingstone; 1997.
  • 19. Elstein AS. Heuristics and biases: selected errors in clinical reasoning. Acad Med. 1999;74:791–794. doi:10.1097/00001888-199907000-00012
  • 20. Tversky A, Kahneman D. The framing of decisions and the psychology of choice. Science. 1982;211:453–458. doi:10.1126/science.7455683
  • 21. Tversky A, Kahneman D. Advances in prospect theory: cumulative representation of uncertainty. J Risk Uncertain. 1992;5:297–323.
  • 22. Fischhoff B, Bostrom A, Quadrell MJ. Risk perception and communication. Annu Rev Pub Health. 1993;4:183–203. doi:10.1146/annurev.pu.14.050193.001151
  • 23. Redelmeier DA, Koehler DJ, Liberman V, Tversky A. Probability judgment in medicine: discounting unspecified probabilities. Med Decis Making. 1995;15:227–230. doi:10.1177/0272989X9501500305
  • 24. Wallsten TS. Physician and medical student bias in evaluating information. Med Decis Making. 1981;1:145–164. doi:10.1177/0272989X8100100205
  • 25. Bergus GR, Chapman GB, Gjerde C, Elstein AS. Clinical reasoning about new symptoms in the face of pre-existing disease: sources of error and order effects. Fam Med. 1995;27:314–320.
  • 26. Barrows HS. Problem-based, self-directed learning. JAMA. 1983;250:3077–3080.
  • 27. Gruppen LD. Implications of cognitive research for ambulatory care education. Acad Med. 1997;72:117–120. doi:10.1097/00001888-199702000-00012
  • 28. Schwartz A. Nomogram for Bayes's theorem. http://araw.mede.uic.edu/cgi-bin/testcalc.pl (accessed 28 December 2001).

