J Am Med Inform Assoc. 2006 Nov-Dec; 13(6): 653–659.
PMCID: PMC1656967
PMID: 16929042

Effectiveness of Clinician-selected Electronic Information Resources for Answering Primary Care Physicians’ Information Needs

K. Ann McKibbon, MLS, PhD, and Douglas B. Fridsma, MD, PhD

Abstract

Objective

To determine if clinician-selected electronic information resources improve primary care physicians’ abilities to answer simulated clinical questions.

Design

Observational study using hour-long interviews in physician offices and think-aloud protocols. Participants answered 23 multiple-choice questions and chose 2 of them for which to seek further information using their own information resources. We established which resources physicians chose, the processes they used, and the results obtained when looking for information to support their answers.

Measurements

Correctness of answers before and after searching, resources used, and searching techniques.

Results

23 physicians sought answers to 46 questions using their own information resources. They spent a mean of 13.0 (SD 5.5) minutes searching for information for the two questions, using an average of 1.8 resources per question and a wide variety of searching techniques. On average, 43.5% of the answers to the original 23 questions were correct. For the questions that were searched, 18 (39.1%) of the 46 answers were correct before searching. After searching, the number of correct answers was 19 (41.3%). This net difference of 1 correct answer arose from 6 questions (13.0%) going from an incorrect to a correct answer and 5 (10.9%) going from a correct to an incorrect answer. We found differences in the ability of various resources to provide correct answers.

Conclusion

For the primary care physicians studied, the electronic information resources of their choice did not always provide support for finding correct answers to simulated clinical questions, and in some instances individual resources may have contributed to an initially correct answer becoming incorrect.

Introduction

Clinicians use information resources to supplement their knowledge and clinical experience and to keep themselves up to date. 1 They have traditionally used colleagues, books, and journals to find information, 2 but with the advent of computer capabilities and new information products and services, many electronic information resources have been developed and made available in an array of formats. When available, these new clinical information resources are being used in clinical care. Self-reports indicate that specific electronic resources, in addition to the traditional information resources, improve the care process and may improve outcomes. 3–10

Developers of electronic information resources and healthcare informaticians seek to provide effective and efficient systems that integrate electronic information resources with clinical information systems to support patient- and situation-specific information needs. 11 Their goal is to move toward automation of the information seeking and retrieval processes to provide high-quality and appropriate answers to clinical questions as, or before, they arise. To improve the healthcare process, information systems or resources need to be fully integrated into electronic medical records systems, fast, and above all, accurate and helpful.

Studies show that specific stand-alone (unintegrated) systems can be effective in providing information resources for clinical care. 12 A study by Westbrook et al. 9 showed that providing a collection of information resources that included MEDLINE, a textbook (Merck Manual), and pharmaceutical and therapy databases improved the correctness of answers to simulated clinical questions from 29% to 50%. Little research has been done, however, to show the effectiveness of the information resources that clinicians actually use in practice. To provide truly useful and integrated information support for clinical care, designers and implementers need to know which resources are most effective at providing correct answers to clinical questions, which resources clinicians prefer, and which features of a resource clinicians desire. This study was done to determine whether the electronic information resources chosen by primary care physicians improve their ability to answer simulated clinical questions. Because one of the goals of information resources is to support decision making and influence physician behavior when appropriate, special emphasis was placed on examining the correctness of answers before and after searching. Electronic information aids can provide correct answers, no answers, or incorrect answers to questions: positive, neutral, or negative effects, as described by various research groups including Ash and colleagues. 13 Researchers in health care have reported inappropriately changed answers and decisions across systems and resources: diagnostic decision support systems, 14 MEDLINE searched by medical and nurse practitioner students, 15 physicians and nurses using a collection of online resources, 9 and an automated electrocardiograph system that provided suggested diagnoses along with the electrocardiogram tracings. 16 Studies in other settings, such as aviation, have shown that use of automated information systems can both lead to errors and improve outcomes. 17,18

Methods

We obtained ethics approval from the University of Pittsburgh and McMaster University, as study participants were located in both the United States and Canada. Inclusion criteria were family physicians in Canada or the United States and general internists in the United States. Participants had to be seeing patients in a clinic on a regular basis; hospitalists and emergency physicians were excluded. The major source of primary care physicians was clinicians registered as raters in the MORE system (the McMaster Online Rating of Evidence, http://hiru.mcmaster.ca/more/AboutMORE.htm). Participants in this system are practicing clinicians who rate high-quality, recently published studies and systematic review articles on clinical relevance and newsworthiness scales. These ratings are used in the production of ACP Journal Club, Evidence Based Medicine, and Evidence Based Nursing, as well as bmjupdates+. Approximately 2400 physicians are MORE raters. The editor of ACP Journal Club sent a request for volunteers for the dissertation study to primary care physicians in southern and western Ontario, western New York, and western Pennsylvania. In addition, both authors and other members of the dissertation committee approached peers for suggestions of participants, and KAM attended a meeting of a Canadian academic department of family medicine to ask for volunteers. A senior member of the University of Pittsburgh Department of Family Medicine e-mailed a request to all faculty members. The selection of physicians was based on availability rather than directed, purposeful sampling.

After potential participants filled out screening questionnaires, a researcher (KAM) came to each physician’s office or clinic for a 1-hour interview. Each physician completed several tasks. First, they reported on their experience with and level of use of computers, the Internet, and electronic information resources. Second, they answered 23 multiple-choice clinical questions previously employed by Hersh and colleagues 15 in another study of information resource use. Third, each physician chose 2 questions from the list and looked for the answers using their own information resources. Physicians were allowed to use any resource, including people, and in any format. The chosen questions were placed on cards and given to the physicians to help direct their searching. The interviewer placed the initial answers in a file folder so that the physician could not easily go back to check previous answers. Data on process and outcomes were collected using think-aloud protocols (Ericsson and Simon 19 ), including a training exercise in thinking aloud. We used a mental arithmetic problem and moved to a more visual task (counting the windows in the house in which the participant grew up) if the participant expressed dislike of the first task. The physician was asked to provide an answer to the question after completing each searching session.

We analyzed data on the correctness of the answers, and the certainty related to each answer, before and after searching. Each question had 1 of 3 possible answers: yes, no, or can’t tell (evidence is insufficient to decide). We also collected data on the time taken to search, the resources used, and the features of the resources employed.

Results

Invitations to join the study were sent to 260 physicians (102 MORE raters, 26 peer requests, approximately 40 members of the Canadian department of family medicine, and approximately 100 members of the US department of family medicine). Fifty-two physicians completed the screening, and interviews were done with 25 of them (22 MORE raters, 1 recruited through a peer invitation, and 2 through department contacts). Of these 25 physicians, 21 were men, 17 were from Canada, 22 were family physicians, 24 were board certified, and the most common decade of graduation from medical school was the 1980s. All participants regularly used computers and considered themselves sophisticated in their computer skills. They reported daily use of the Internet, monthly use of MEDLINE, and varying use of a range of electronic information resources (specifically Clinical Evidence from the BMJ Publishing Group, Physician Information and Education Resource [PIER] from the American College of Physicians, and UpToDate). Use and familiarity were related: the systems that were used heavily were the ones the study physicians reported as familiar.

One physician chose not to seek information to answer the questions, and partial data from another physician were missing because of equipment failure. The analyses that follow therefore include data from 23 physicians who searched for information to answer 46 simulated clinical questions. The physicians used only electronic information resources. The mean time to complete the searching and question-answering tasks (2 clinical questions) was 13.0 (SD 5.5) minutes, with an average of 7.3 (SD 4.0) minutes for the first question and 5.8 (SD 2.2) minutes for the second.

Each search included an average of 2.0 (SD 0.9) searching cycles per question. A searching cycle was defined as a session of using similar terms in a single database, Internet site, or search engine; a new cycle was counted if the set of searching terms changed dramatically or if the searching entity (database, etc.) changed. Table 1 lists the resources that were consulted. Ten different, broadly defined categories of searching resources were used a total of 82 times, an average of 1.8 resources per question. Combining the information resources into the categories described by Haynes, 11 the more summarized information resources such as the Cochrane Database of Systematic Reviews and Clinical Evidence were used 39.2% of the time, followed by MEDLINE (Ovid MEDLINE and PubMed) at 35.7% and Internet resources, including Google, at 22.6%.

Table 1. Electronic Information Resources Used by 23 Primary Care Physicians Answering 46 Simulated Clinical Questions

Electronic Information Resource | Number of Times Resource Was Used | % of Total
Ovid MEDLINE | 19 | 22.6%
Google and other Internet sites | 19 | 22.6%
PubMed | 11 | 13.1%
Cochrane Database of Systematic Reviews | 8 | 9.5%
MD Consult | 7 | 8.3%
Ovid EBMR | 7 | 8.3%
UpToDate | 5 | 6.0%
InfoPOEMs | 4 | 4.8%
Lancet | 2 | 2.4%
Clinical Evidence | 2 | 2.4%
Average per question | 1.8 |
Total resources | 84 | 100%

Abbreviations: Ovid EBMR=Ovid Evidence Based Medicine Reviews databases of summarized clinical information or systematic reviews (4 possible).
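The category percentages quoted above can be recomputed from the Table 1 counts. The mapping of individual resources to Haynes’s categories in the sketch below is our own assumption; the summarized-resource share comes out at 39.3% versus the reported 39.2%, presumably a rounding difference.

```python
# Minimal sketch: regroup the Table 1 usage counts into the Haynes-style
# categories quoted in the text (the grouping itself is our assumption).
counts = {
    "Ovid MEDLINE": 19, "Google and other Internet sites": 19, "PubMed": 11,
    "Cochrane Database of Systematic Reviews": 8, "MD Consult": 7,
    "Ovid EBMR": 7, "UpToDate": 5, "InfoPOEMs": 4, "Lancet": 2,
    "Clinical Evidence": 2,
}
categories = {
    "summarized resources": ["Cochrane Database of Systematic Reviews",
                             "MD Consult", "Ovid EBMR", "UpToDate",
                             "InfoPOEMs", "Clinical Evidence"],
    "MEDLINE (Ovid + PubMed)": ["Ovid MEDLINE", "PubMed"],
    "Internet resources": ["Google and other Internet sites"],
}
total = sum(counts.values())  # 84 resource uses in Table 1
for name, members in categories.items():
    share = sum(counts[m] for m in members) / total
    print(f"{name}: {share:.1%}")  # 39.3%, 35.7%, 22.6%
```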

The physicians used many search and retrieval techniques, ranging from sophisticated limiting strategies in Ovid MEDLINE to simple approaches such as entering only textwords or typing the complete question into PubMed. They also showed a wide variety of approaches when using the Internet. For example, searchers often went to known sites such as the US Centers for Disease Control and Prevention to determine sexually transmitted disease drug regimens. They stayed away from PDF and PowerPoint presentations because of the time needed to download them on slow machines. One family physician switched to the clustering meta-search engine Vivisimo (http://vivisimo.com/) when her usual PubMed and Google search methods did not produce the results she sought on a question of bladder cancer associated with petroleum product exposure. Almost half of the searching techniques used were keyword based; many of the more sophisticated techniques (e.g., MeSH terms, subheadings, and limits) were used less often and only in the larger resources such as MEDLINE.

On average, the primary care physicians answered 10 of 23 (43.5%) multiple-choice questions correctly (SD 2.4, range 5 to 14). This rate was low but statistically higher than would be expected by chance alone (33.3%; p < 0.001). Physicians were more certain of their answer when it was correct than when it was incorrect (70.5% [SD 11.8] vs. 60.2% [SD 23.3], p = 0.0004). Their willingness to look up the answer, had the issue arisen in routine clinical care, was not associated with the correctness of the answer (41.2% for a correct answer vs. 39.7% for an incorrect answer, p = n.s.).
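The paper does not state which statistical test produced the p value above. As a rough check, a pooled exact binomial test against the 1-in-3 chance level for yes/no/can’t-tell questions reaches the same conclusion; the pooled-answer framing (rather than a per-physician test) is our assumption.

```python
# Hedged sketch: is the observed 43.5% correct-answer rate higher than the
# 33.3% expected by chance for three-option questions? Pooling all answers
# is an assumption; the paper does not say how p < 0.001 was computed.
from scipy.stats import binomtest

n_answers = 25 * 23                    # 25 physicians x 23 questions
n_correct = round(0.435 * n_answers)   # ~250 correct answers (43.5%)

result = binomtest(n_correct, n_answers, p=1/3, alternative="greater")
print(f"{n_correct}/{n_answers} correct, p = {result.pvalue:.1e}")
```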

Table 2 shows the number of correct and incorrect answers before and after searching using the clinicians’ own resources (2 questions of choice from the 23). The data in Table 2 are striking for several reasons. First, the questions on which physicians chose to seek information were answered incorrectly as often as the questions they chose not to search, adding further weight to the observation that physicians are not always able to readily identify those questions for which they should be seeking information. Second, the questions were difficult to search, as seen by the low scores after searching.

Table 2. Number and Percent Correct Answers before and after Searching Using the Clinicians’ Own Resources

Correctness after Searching (23 searchers, 46 questions) | Correct before Searching | Incorrect before Searching | Totals
Correct | 13 (28.2%) | 6 (13.0%) | 19 (41.3%)
Incorrect | 5 (10.9%) | 22 (47.8%) | 27 (58.7%)
Totals | 18 (39.1%) | 28 (60.9%) | 46 (100%)

The most interesting, and disturbing, feature of the data, however, is seen when examining the changed answers. The absolute difference in the rate of being correct was 2.2% (39.1% correct before searching and 41.3% correct after searching), a difference of a single answer. On further examination, this 1-answer difference occurred because 6 questions went from incorrect to correct and 5 from correct to incorrect. These 11 instances of changed answers are few in number, but they are of concern. Three other studies of the correctness of answers with electronic information resources have also found a substantial number of changed answers, with many going from an initially correct answer to an incorrect one. 9,20,21 These changed answers (both appropriate and inappropriate changes) deserve further study.
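The before/after arithmetic can be reproduced directly from the Table 2 cells. The exact McNemar test on the discordant answers below is purely our illustration; the authors applied no significance test to these changes.

```python
# Sketch: reproduce the Table 2 before/after rates and, for illustration,
# test whether improvements (incorrect -> correct) outnumber worsenings.
from scipy.stats import binomtest

cc = 13  # correct before -> correct after
ic = 6   # incorrect before -> correct after (improved)
ci = 5   # correct before -> incorrect after (worsened)
ii = 22  # incorrect before -> incorrect after
n = cc + ic + ci + ii                           # 46 searched answers

print(f"correct before: {(cc + ci) / n:.1%}")   # 18/46 = 39.1%
print(f"correct after:  {(cc + ic) / n:.1%}")   # 19/46 = 41.3%

# Exact McNemar test on the 11 discordant answers (6 improved, 5 worsened).
p = binomtest(ic, ic + ci, p=0.5).pvalue
print(f"improved {ic} vs worsened {ci}: exact p = {p:.2f}")  # p = 1.00
```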

Table 3 lists the resources associated with correct, incorrect, and changed answers after searching. Because the numbers are so few and the analysis was not planned a priori, no statistical tests were applied. The lists, however, provide direction for further research. Note that the totals add to more than the number of answers because multiple resources were often associated with one searching session and answer. Of special note is that Google and Cochrane were associated with correct answers about half the time. The various forms of MEDLINE (PubMed and Ovid), the Ovid Evidence Based Medicine Reviews databases, InfoPOEMs, and UpToDate were associated more often with incorrect answers than with correct answers.

Table 3. Information Resources Associated with Correct, Incorrect, and Changed Answers after Searching

Resource | Correct Answer after Searching | Incorrect Answer after Searching | Correct to Incorrect | Incorrect to Correct
Google and other Web resources | 11 | 10 | 4 | 1
MD Consult | 5 | 0 | 0 | 2
Ovid MEDLINE | 5 | 14 | 1 | 1
Cochrane reviews | 4 | 4 | 1 | 0
PubMed | 4 | 7 | 1 | 1
Clinical Evidence | 2 | 0 | 0 | 0
Ovid EBMR | 2 | 5 | 0 | 0
InfoPOEMs | 1 | 4 | 0 | 0
Lancet | 1 | 0 | 0 | 1
UpToDate | 0 | 5 | 0 | 0
Epocrates | 1 | 0 | 0 | 1
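The claims in the text can be checked against the Table 3 counts. The per-resource shares below are simple arithmetic on the transcribed table, not figures reported in the paper.

```python
# Sketch: share of post-search answers that were correct, per resource,
# from the Table 3 "correct after" and "incorrect after" columns.
rows = {
    "Google and other Web resources": (11, 10),
    "MD Consult": (5, 0),
    "Ovid MEDLINE": (5, 14),
    "Cochrane reviews": (4, 4),
    "PubMed": (4, 7),
    "Clinical Evidence": (2, 0),
    "Ovid EBMR": (2, 5),
    "InfoPOEMs": (1, 4),
    "Lancet": (1, 0),
    "UpToDate": (0, 5),
    "Epocrates": (1, 0),
}
for resource, (correct, incorrect) in rows.items():
    share = correct / (correct + incorrect)
    print(f"{resource}: {correct}/{correct + incorrect} correct ({share:.0%})")
```

Google (52%) and Cochrane (50%) land near one half, while Ovid MEDLINE (26%), PubMed (36%), Ovid EBMR (29%), InfoPOEMs (20%), and UpToDate (0%) fall below it, matching the pattern described in the text.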

The 5 decisions that were changed from correct to incorrect follow. The Appendix lists all 23 questions, how often they were correctly answered before searching, and how many times each was chosen for searching. The questions with changed answers are also listed following the Appendix table.

  • Question 2. Decision was based on a web page of the American Academy of Family Physicians. No other searching was done.
  • Question 3. (Same person as for Question 2.) Decision was based on a BMJ web page of an editorial found using Google.
  • Question 3. Canadian drug site (Rx Files from Saskatchewan) and PubMed. Drug was hard to find as it is not readily available in Canada.
  • Question 7. UK Fact Sheet with very little data to support recommendations.
  • Question 11. Cochrane and MEDLINE. The searcher did not find a good match to the content of the question in either resource.

These few instances indicate that the physicians did not necessarily choose resources wisely, may not have used effective searching methods, or may have been unable or unwilling to deal quickly and easily with the content that was identified.

Discussion

This study is one of the few that includes observations of primary care physicians using their own information resources to answer simulated clinical questions. The participants used multiple information resources (an average of 1.8 resources per question), did considerable searching within systems, and used a multiplicity of searching techniques. They spent approximately 6 minutes per question searching and providing an answer.

Physicians were marginally less certain of their answer when it was incorrect, but they did not state that they would seek information more often when their answer was incorrect. They also did not choose the questions they had answered incorrectly for searching more often than the questions they had answered correctly. Clinicians, like all of us, may not always or readily know when they need to seek information. The findings of this study suggest that any integration of information systems into electronic medical records systems must allow clinicians to easily and effectively seek the information they know they need, as well as alert them to potential opportunities to seek information and improve care.

When physicians used their own information resources, they correctly answered 41.3% of the questions, compared with 39.1% correct before searching: a very small gain and a relatively low rate of being correct. This rate of correctness seems low for established physicians using information resources they consider helpful in their practices. In addition, the rate falls between the rates of correctness observed in medical students (51.6%) and nurse practitioner students (34.7%) who answered the same questions after being taught how to use MEDLINE. 15 The students, however, spent considerably longer searching for the correct answer (just over 30 minutes per question) than did the clinicians in this study, far longer than can reasonably be allocated to seeking information during the care process or for a system to be productive in clinical care.

Westbrook and colleagues 9 supplied an information resource collection of PubMed, textbooks, and guidelines to Australian physicians and nurses. Use of these resources to address simulated clinical questions improved the rate of the answers being correct from 29% to 50%, a large improvement although 50% correct is still not sufficient to support good health care delivery. This 21% improvement is much higher than the 2.2% absolute improvement seen in this study. We feel that the prescribed collection of resources was more effective than letting the physicians choose their own. Some of the information resource choices made by the physicians in this study could be considered poor ones. For example, Web pages with little supporting evidence and reliance on a 15-year-old World Health Organization technical report likely contributed to incorrect answers. This suggests that researchers must help identify which information resources should be integrated into electronic medical records systems, provided by hospital and academic libraries, and advocated for physicians to use. Individual choice may not be the most effective method of identifying useful information resources. This is also supported by the fact that many answers started incorrect and stayed incorrect (22 of 46, or 47.8%), possibly indicating that the chosen resources were not appropriate to answer the questions, perhaps because correct information was absent or unclear.

A study of the ability of peers and experts to provide answers to clinical questions had similar findings of low rates of correct answers. Schaafsma and colleagues 22 showed that when occupational physicians asked colleagues for advice, the advice was correct only 47% of the time. If the expert or peer supplied evidence to back his or her answer, the rate of correctness increased to 83%. In this study, several instances of poor answers were likewise linked to a lack of evidence supporting recommendations. This suggests that information resources must be strongly based on evidence to improve the quality of decisions.

Of considerable interest are the questions for which the answer was correct before searching but, after searching, the clinician changed his or her mind and chose an incorrect answer. In this study, the rate at which subjects went from correct to incorrect answers after searching was 10.9%. Hersh et al. found rates of moving from correct to incorrect answers of 4.5% and 13% in 2 separate studies. 20,21 The correct to incorrect rate was 7% in the study by Westbrook et al. 9 Studies of other electronic decision support systems show similar findings of clinicians being unduly (incorrectly) influenced by electronic advice or suggestions. 16,23

Clinician changes from correct to incorrect answers can be attributed to many factors, some related to the resources themselves: inappropriate choice of resources, inefficient use of resources, a resource with incorrect or outdated information, or the clinician’s inability to quickly analyze and apply the information. In addition, the electronic format of the resources may also influence the decision through mechanisms of automation bias 17,18,24 and social conformity. 25 Automation bias refers to the biases that influence performance when an automated system (i.e., computerization) is placed into an individual’s workflow. Automation bias has several components and can include the individual being less vigilant with the computer system in place, diffusion of responsibility with the system working, belief in the infallibility of the system, and unwillingness to pursue evidence contradicting that provided by the computer. Berns et al. 25 found that a discordant opinion provided by a computer, as well as one provided by peers, influenced study participants to change an answer in a simulated situation (mental rotation) from being correct to incorrect. The magnitude of the influenced change was similar for peers and a computer. All of these factors should be taken into account in studying the implementation of information resources into electronic medical records systems.

The study described has several limitations. First, only 23 primary care physicians were studied. Second, because only primary care physicians were studied, we do not have data on specialists or healthcare professionals from other disciplines. Third, the study used simulated questions rather than observing the physicians seeking information in actual practice; however, the questions were collected from primary care physicians and the answers carefully developed and checked by Hersh and colleagues. 15 Fourth, the participant physicians were volunteers. They knew the broad reason for the study (their access to and use of information resources). In addition, many of the physicians were recruited from a group with a strong interest in evidence-based medicine. All participants were sophisticated computer users, and many teach in residency and undergraduate medical programs. Therefore, the study participants are probably some of the most efficient and effective practitioners at keeping up to date with advances in health care and information technology. The low numbers of correct answers and the detrimental influence of their information resources are troubling in this group, who would be expected to perform optimally using electronic information resources.

Conclusion

Substantial variation existed in the information resources the physicians in the study used and in the searching techniques they employed. The information resources used did not substantially improve the participants’ abilities to retrieve good answers to simulated clinical questions. Research has shown that we are on the verge of successful integration of information resources into electronic medical record systems. This study highlights certain areas for consideration:

Scientific study of existing resources must be done to determine which are the most successful at providing information support. Westbrook et al. 9 and Hersh et al. 20,21 reported much higher rates of correct answers when they provided a collection of resources than did the physicians in this study, who chose which resources to use.

Information resources that can be searched must be available when the clinician has identified that he or she has an information need. The resource must also be sophisticated enough to prompt the clinician when it has identified a potential area where improvement in care may take place.

Accuracy of the system is paramount: incorrect answers, especially in situations where a clinician is unsure of an answer, can contribute to incorrect decisions. Social conformity and automation bias add pressure on an individual to conform to the computer output.

The evidence base of the resources must be strong and current.

Because primary care physicians’ information needs are often considered to be greater than those of specialists, these findings on primary care physicians can likely be applied to other disciplines. Harrold et al. 26 provide summary evidence in a review article that, at least in certain disease conditions, specialists have a stronger knowledge base than generalists, as well as different patterns of care.

The information age has allowed us to develop important information tools and implement them in exciting and unprecedented ways. We need to evaluate them well to determine how best to harness the resources to support good clinical decision making.

Appendix

Clinical Questions with Answers, Percentage Correct, and Number of Times Chosen by Clinicians to Seek Additional Information. The first 20 questions in this set were collected and the answers validated by Dr. William Hersh, Oregon Health Sciences Center. He used this question set in a study of medical students and nurse practitioner students, 21 provided the questions for use in the study described in this paper, and agreed to their publication here.


No | Question Type | Question | Proportion Correct (SD), 25 Physicians | Answer | Times Looked Up
1 | Diagnosis | Is there any benefit of routine Pap smear in persons who have had a hysterectomy for benign disease? | 0.64 (0.49) | No | 2
2 | Prognosis | Is ultrasound the best diagnostic test available to exclude the presence of lower extremity deep vein thrombosis? | 0.44 (0.51) | Yes | 3
3 | Treatment | Are non-acetylated salicylates really safer, e.g., have less incidence of acid-peptic problems, in patients with NSAID GI intolerance (who benefit from anti-inflammatory effect)? | 0.32 (0.48) | Yes | 4
4 | Prognosis | Is the elevation of alkaline phosphatase a better indicator of recurring prostate cancer than a rising PSA? | 0.08 (0.28) | Can’t tell | 4
5 | Diagnosis | Is the cytobrush superior to a spatula in obtaining cells for Pap smears in terms of technical quality (e.g., percent of interpretable smears)? | 0.88 (0.33) | Yes | 2
6 | Treatment | For institutionalized adults, to prevent influenza and reduce mortality from it, is it more effective to vaccinate health care workers for influenza than patients/residents? | 0.44 (0.51) | Yes | 4
7 | Treatment | Is there any benefit for ultrasound as physical therapy for a sprained ankle? | 0.60 (0.50) | No | 4
8 | Treatment | Is penicillin superior to ciprofloxacin for the outpatient treatment of pelvic inflammatory disease? | 0.16 (0.37) | Can’t tell | 3
9 | Treatment | Is anti-inflammatory therapy (NSAIDs) better than Tylenol for elderly patients with degenerative joint disease? | 0.80 (0.41) | No | 0
10 | Etiology | Is there evidence of an association between petroleum product exposure and bladder cancer? | 0.00 (0.00) | No | 3
11 | Treatment | Is a high dose (1200 to 1500 mg daily) regimen of zidovudine therapeutically superior to a low dose (500 to 600 mg daily) one in patients with positive HIV antibody for reducing the progression to AIDS? | 0.38 (0.49) | No | 3
12 | Diagnosis | Will prostate specific antigen screening lower the mortality rate of prostate cancer in low risk men after they reach the age of 50? | 0.52 (0.51) | Can’t tell | 0
13 | Treatment | Is there good evidence that an antibiotic can prevent endocarditis in an 18-year-old woman with rheumatic heart disease (mild mitral regurgitation) who is to have a dental root canal? | 0.16 (0.37) | Can’t tell | 1
14 | Prognosis | A 52-year-old woman recently had a modified radical mastectomy for infiltrating ductal carcinoma of the breast. Her axillary lymph nodes are negative for tumor. Would estrogen receptor negativity be more likely to indicate a relatively poor prognosis for this patient, rather than thyroid hormone receptor positivity? | 0.16 (0.37) | Can’t tell | 4
15 | Etiology | A 40-year-old premenopausal woman consults you about her risk of breast cancer. Does prior use of birth control pills increase her risk? | 0.84 (0.37) | No | 2
16 | Etiology | Does anti-reflux surgery in patients with Barrett’s esophagus reduce the risk of developing adenocarcinoma? | 0.24 (0.44) | Can’t tell | 2
17 | Etiology | Is long-distance running associated with intervertebral disc narrowing in men? | 0.20 (0.41) | Can’t tell | 3
18 | Prognosis | Would plasma norepinephrine levels indicate poor prognosis in congestive heart failure better than hyponatremia? | 0.20 (0.41) | Can’t tell | 3
19 | Treatment | Is Trental (pentoxifylline) the best drug available to improve symptoms of peripheral vascular disease? | 0.36 (0.49) | No | 0
20 | Prognosis | Do the majority (>50%) of terminal AIDS patients have clinical symptoms of cardiac involvement? | 0.40 (0.50) | No | 0
21 | Treatment | Are antidepressants effective for reducing pain and nausea in patients with IBS? | 0.60 (0.50) | Yes | 0
22 | Treatment | Is advocacy a useful approach for helping women leave an abusive situation? | 0.60 (0.50) | Yes | 0
23 | Treatment | Is acetazolamide effective for both preventing and treating high altitude sickness? | 0.75 (0.44) | Yes | 0

* This question replaced the question “Does dietary protein affect the level of proteinuria in patients with protein-losing nephropathy?”, which was no longer valid in the fall of 2004.

Answers are correct to the best of our abilities as of September 2004.

Questions that went from being correct to becoming incorrect after searching: 2, 3 (twice), 7, 11, 15.

Questions that went from being incorrect to becoming correct after searching: 1, 4, 8, 11, 16, 18.

Footnotes

This study was done in fulfillment of PhD studies by KAM. DBF was the Committee Chair. Other members of the PhD Committee were Dr. Rebecca Crowley, School of Medicine and Center for Biomedical Informatics, University of Pittsburgh; Dr. Ellen Detlefsen, School of Information Sciences and Center for Biomedical Informatics; and Dr. Charles Friedman, School of Medicine and Center for Biomedical Informatics, University of Pittsburgh (now Associate Director of the US National Heart, Lung, and Blood Institute); and Dr. Brian Haynes, Department of Clinical Epidemiology and Biostatistics, Faculty of Health Sciences, McMaster University. Electronic access to the dissertation is available at http://etd.library.pitt.edu/ETD/available/etd-08052005-075912/ (accessed February 17, 2006).

Dr. Bill Hersh, Professor and Chair, Department of Medical Informatics and Clinical Epidemiology, Oregon Health and Science University, Portland, OR provided the clinical questions and answers used in the dissertation study and described in this paper. He kindly agreed to allow publication of his questions and answers as an appendix to this report.

References

1. Dawes M, Sampson U. Knowledge management in clinical practice: a systematic review of information seeking behavior in physicians. Int J Med Inform 2003;71(1):9-15.
2. National Institute of Clinical Studies. Information Finding and Assessment Methods that Different Groups of Clinicians Find Most Useful. Melbourne, Australia: NICS; 2003. Prepared by the Centre for Clinical Effectiveness.
3. Westbrook JI, Gosling AS, Coiera EW. The impact of an online evidence system on confidence in decision making in a controlled setting. Med Decis Making 2005;25(2):178-185.
4. Lindberg D, Siegel E, Rapp B, Wallingford K, Wilson S. Use of MEDLINE by physicians for clinical problem solving. JAMA 1993;269(24):3124-3129.
5. Haynes RB, Johnson M, McKibbon KA, Walker C, Willan A. A program to enhance clinical use of MEDLINE. Curr Clin Trials 1993:544-546.
6. Magrabi F, Coiera EW, Westbrook JI, Vickland V. General practitioners’ use of online evidence during consultation. Int J Med Inform 2005;74(1):1-12.
7. Klein MS, Ross FV, Adams DL, Gilbert CM. Effect of online literature searching on length of stay and patient care costs. Acad Med 1994;69(6):489-495.
8. Westbrook JI, Gosling AS, Westbrook MT. Use of point-of-care online clinical evidence by junior and senior doctors in New South Wales public hospitals. Intern Med J 2005;35(7):399-404.
9. Westbrook JI, Coiera EW, Gosling AS. Do online information retrieval systems help experienced clinicians answer clinical questions? J Am Med Inform Assoc 2005;12(3):315-321.
10. Pluye P, Grad RM, Dunikowski LG, Stephenson R. Impact of clinical information-retrieval technology on physicians: a literature review of quantitative, qualitative and mixed methods studies. Int J Med Inform 2005;74:745-768.
11. Haynes RB. Of studies, syntheses, synopses, and systems: the “4S” evolution of services for finding current best evidence. ACP J Club 2001;134(2):A11.
12. Garg AX, Adhikari NK, McDonald H, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA 2005;293(10):1223-1238.
13. Ash JS, Berg M, Coiera E. Some unintended consequences of information technology in health care: the nature of patient care information system-related errors. J Am Med Inform Assoc 2004;11(2):104-112.
14. Friedman CP, Gatti GG, Franz TM, et al. Do physicians know when their diagnoses are correct? Implications for decision support and error reduction. J Gen Intern Med 2005;20(4):334-339.
15. Hersh WR, Crabtree MK, Hickam DH, et al. Factors associated with success in searching MEDLINE and applying evidence to answer clinical questions. J Am Med Inform Assoc 2002;9(3):283-293.
16. Tsai TL, Fridsma DB, Gatti G. Computer decision support as a source of interpretation error: the case of electrocardiograms. J Am Med Inform Assoc 2003;10:478-483.
17. Cummings ML. Automation bias in intelligent time critical decision support systems. AIAA 1st Intelligent Systems Technical Conference; 2004. Paper AIAA 2004-6313.
18. Skitka LJ. Does automation bias decision-making? Int J Hum Comput Stud 1999;51:991-1006.
19. Ericsson KA, Simon HA. Protocol Analysis. Cambridge, MA: MIT Press; 1992.
20. Hersh W, Crabtree M, Hickam D, Sacherek L, Rose L, Friedman CP. Factors associated with successful answering of clinical questions using an information retrieval system. Bull Med Libr Assoc 2000;88(4):323-331.
21. Hersh W, Crabtree M, Hickam D, et al. Factors associated with success in searching MEDLINE and applying evidence to answer clinical questions. J Am Med Inform Assoc 2002;9(3):283-293.
22. Schaafsma F, Verbeek J, Hulshof C, van Dijk F. Caution required when relying on a colleague’s advice; a comparison between professional advice and evidence from the literature. BMC Health Serv Res 2005;5:59.
23. Friedman CP, Elstein AS, Wolf FM. Enhancement of clinicians’ diagnostic reasoning by computer-based consultation: a multisite study of 2 systems. JAMA 1999;282:1851-1856.
24. Parasuraman R, Sheridan TB, Wickens CD. A model for types and levels of human interaction with automation. IEEE Trans Syst Man Cybern A Syst Hum 2000;30(3):286-297.
25. Berns GS, Chappelow J, Zink CF, Pagnoni G, Martin-Skurski ME. Neurobiological correlates of social conformity and independence during mental rotation. Biol Psychiatry 2005;58:245-253.
26. Harrold LR, Field TS, Gurwitz JH. Knowledge, patterns of care, and outcomes of care for generalists and specialists. J Gen Intern Med 1999;14:499-511.
