Journal of Social Work Education

Vol. 40, No. 1 (Winter 2004)



Evidence-Based Practice and the Borders of Data in the Global Information Era

Beth R. Crisp
University of Glasgow

Increasing demands that social work be a profession committed to evidence-based practice have coincided with innovations in information technology, which potentially give social workers unprecedented access to a plethora of sources and types of evidence. Because these innovations can enable access to evidence beyond traditional boundaries, the question of how we establish the borders of acceptability warrants consideration. This article explores the range of boundaries that contemporary social workers may encounter as they attempt to negotiate the demands of evidence-based practice. Recommendations for a critical, but not insular, approach to selecting evidence bases for social work interventions are provided.

During the 1990s, an estimated 2 million articles were published annually in more than 20,000 biomedical journals (Mulrow & Lohr, 2001). The number of articles published in social work was far smaller; yet, time pressures were cited by 98.3% of social workers and social care staff in the southwest of England as an obstacle to keeping up with professional literature (Sheldon & Chilvers, 2000). Furthermore, despite increasing demands that social work be a profession committed to evidence-based practice, many social workers reportedly have no access to departmental libraries or to professional journals through their workplaces (Sheldon & Chilvers, 2000). Not surprisingly, it has been found that social workers rarely use research evidence to underpin decision making for client interventions (Rosen, 1994; Rosen, Proctor, Morrow-Howell, & Staudt, 1995).

In recent years, innovations in information technology have resulted in the development of the World Wide Web and less costly computer hardware, giving increasing numbers of social workers unprecedented access to a plethora of sources and types of evidence on which professional practice can be based. These new sources of evidence can appeal to a range of stakeholders:

The idea that good practice is ultimately to be delivered by research informed evidence which is underpinned by rigorous and effective methodologies is deeply appealing to our contemporary technocratic culture. Indeed, evidence-based approaches are likely to gain even more salience in organizations, such as social services, where fiscal and resource crises are forcing human resource rationalizations, ever new restructuring strategies and increased monitoring of accountability through quality audits and control mechanisms. (Webb, 2001, p. 58)

The sheer amount of evidence being made available will often necessitate some degree of selectiveness (Silverman, 1998). However, within social work, there is no consensus as to what is appropriate evidence (MacDonald, 1997). Different stakeholders apply different standards to evidence that they are using to determine whether an intervention is effective (Giacomini, Hurley, & Stoddart, 2000) or even needed in the first place (Bradshaw, 1972; Learmonth & Watson, 1999). Hence, this article explores a range of borders that social workers may apply in selecting evidence for practice.

Borders of Evidence

Place

Not long after arriving in Scotland from Australia, the author suggested to a student that she needed to conduct a literature review to inform her proposed dissertation and was stunned when the student asked whether this could be confined to searching British literature. Such a response would have been unlikely among Australian students, who are instilled with the belief that they should locate their research within an international context. Yet this Scottish student is far from alone in wanting to limit her use of evidence to that produced within her own country. As one American has observed, there is “the unfortunate tendency among too many American medical academicians to think that all good ideas arise from within its [United States] boundaries” (White, 1997, p. 3). While formative research on some issues has indeed been carried out in the United States, American research has often been regarded as the “gold standard,” with non-American research scantily reported, particularly when it reaches similar findings (Crisp & Ross, in press). Such biases may be reinforced when the compilers of research reviews select only journals “published in the United States expressly for or by social workers” on the basis that these were the journals “most likely to be perused by social workers” (Rosen, Proctor, & Staudt, 1999, p. 7). Such an approach rules out examining the effectiveness of alternative interventions tested in other countries—for example, HIV prevention programs aimed at injecting drug users in countries that have adopted a harm-reduction rather than use-reduction approach (MacCoun, Saiger, Kahan, & Reuter, 1993).

Notwithstanding cultural differences, which may limit the effectiveness of interventions when implemented in a different country, differences in findings from carefully controlled trials can emerge between cities within the same country, even when there is no readily identifiable reason why an intervention should succeed in one city and fail in a seemingly comparable one (Sox, 1993). Consequently, practitioners may regard local research, especially community-based research, as preferable for guiding interventions to research conducted elsewhere, even when the latter is academically rigorous (Learmonth & Watson, 1999). Conversely, there may be strong biases against local research. As one respondent in a study of English health promotion workers noted, “the Health Authority says base the decisions on an evidence base from elsewhere, not ‘Noddy’1 research” (Learmonth & Watson, 1999, p. 330).

Who

Concerns about locally produced evidence may be overturned if it is produced by eminent researchers, particularly those associated with a prestigious university or research institute. Similarly, the imprimatur of those held in high esteem may be critical in determining what evidence is privileged. Just as a charismatic teacher may have tremendous influence on students, admired colleagues and peers may also influence what we read and who we consider to be credible (Saleebey, 1999). This can produce what Gambrill (1999) has described as “authority-based practice,” which she considers to be all too common in social work:

If there is no means by which to tell what is accurate and what is not, if all methods are equally effective, the vacuum is filled by an “elite” who are powerful enough to say what is and what is not. (p. 343)

Consequently, it has been argued that practice guidelines are too often based on the consensus of “experts” rather than actual evidence (Swan, 1999), although this may not be readily apparent:

Practice guidelines and consensus statement programs have sprung up under the sponsorship of numerous groups, and perhaps the pressing issue now is how to distinguish between the ethical and the unethical, the valid and the invalid, messages that are being disseminated by this variety of sponsors. It is unfortunate that validity of the disseminated message and credibility of the disseminating agent are not always positively related. (Lomas, 1993, p. 233)

In recent years, probably the most prominent example of the discrepancy between the seeming validity of the message and the credibility of the source is what has become known as “Sokal’s hoax.” Alan Sokal, a professor of physics at New York University, published what purported to be a serious article linking developments in quantum physics with postmodern thought in the prestigious cultural studies journal Social Text (Sokal, 1996a). When it was published, he announced in another journal that the Social Text article was an elaborate hoax and claimed to be surprised that the editors had failed to realize this (Sokal, 1996b). Sokal (1996b) describes his intention:

To test the prevailing intellectual standards, I decided to try a modest (though admittedly uncontrolled) experiment: Would a leading North American journal of cultural studies. . . publish an article salted with nonsense if (a) it sounded good and (b) it flattered the editors’ ideological preconceptions? (p. 62)

If nothing else, Sokal’s hoax is a pertinent reminder that purveyors of research evidence need to ensure they are not overawed by the credentials of a seemingly notable scholar from a prestigious institution.

Biography and Context

Even when evaluators are aware of evidence promulgated by esteemed authorities, evidence that coheres with their own experiences or beliefs about normality may seem more credible (Freud, 1999). Indeed, when making decisions for themselves, practitioners may demand levels of evidence quite different from what they would demand when making decisions for their clients. Similarly, clients may be quite willing to accept interventions for which practitioners consider there to be little or no evidence. Silverman (1998), an advocate of evidence-based medicine, notes,

It is remarkable how many sensible patients can be hoodwinked by the overblown claims for implausible therapies like imagery, spiritual healing, megavitamin therapy, energy healing, massage and a colourful variety of empirical folk remedies. (p. 68)

Practitioners may regard new knowledge as credible only because it fits with the existing canon of accepted scientific thought to which they subscribe (Vandenbroucke, 1998) or with their political, religious, or other personal beliefs. Hence, training, professional knowledge, or personal beliefs can prevent a practitioner from being open to reviewing whatever evidence is available on alternative therapies.

Whether it is always possible to evaluate research evidence without being influenced by one’s background is questionable. Therefore, awareness of biases that may, if unchecked, hinder one’s critical appraisal of research evidence is important. For example, having been a female teenager in the 1970s when discussions of rape and sexual abuse entered the public discourse in Australia, and having been privy to the experiences of many women and some men who have been sexually assaulted, I have come to the conclusion that the effects of sexual abuse are often profound and wide ranging (Kilpatrick et al., 1989) and that most survivors regard the experience as life changing. Consequently, I realize that I am more likely to want to privilege research findings on sexual assault that concur with this view than those that minimize the impact of sexual assault, irrespective of where these findings have been published or how eminent the researcher (Crisp & Ross, in press).

Language

Linguistic barriers are one key factor that may limit most people’s ability to review available sources of evidence, and, like many, this author is guilty of discarding articles not published in English when conducting a literature review. While restricting reviews of published literature to books and articles written in English may reflect a lack of capacity to have other material translated, it may also suggest “an underlying prejudice that articles from other languages may be of inferior quality” (Moher & Berlin, 1997, p. 256), although limited evidence suggests that this is not so (Moher et al., 1996).

An interesting study of the reporting of clinical trials in the medical literature suggests that the language of publication may be related to the type of evidence reported (Egger et al., 1997). German investigators were more likely to submit successful clinical trial findings to English-language journals and to publish unsuccessful ones in German-language journals. This study also found that placebo-controlled trials were more frequent in the English-language sample, whereas control groups receiving standard treatments were more common among the reports published in German.

The development of Web-based translation programs such as Babel Fish (http://babel.altavista.com), which provide almost instantaneous translation of text between English and a number of other languages, would seem to provide one answer to the problem of reading texts available only in foreign languages. However, as these programs require documents to be available in an online format, the effort required to transform lengthy print documents into a machine-readable format may severely limit the number of documents that can be translated. Furthermore, the extent to which a computer program can ensure that any essential cultural nuancing is incorporated into translations is questionable (Bettelheim, 1983). Nevertheless, the continuing development of programs such as Babel Fish potentially has an important role in breaking down what has often been one of the most intransigent barriers to accessing research evidence.
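To make concrete the workflow such programs imply, consider the following sketch in Python. It is purely illustrative: translate_to_english is a toy stand-in for a Web-based translation service (it maps only a few German words), and the screening step assumes abstracts are already in machine-readable form, which, as noted above, is precisely the bottleneck for print documents.

    # Illustrative sketch of screening foreign-language abstracts with
    # machine translation. The "translation service" here is a toy lexicon
    # standing in for a real program such as Babel Fish.
    TOY_LEXICON = {"studie": "study", "wirksamkeit": "effectiveness",
                   "intervention": "intervention"}

    def translate_to_english(text):
        # Toy word-for-word lookup; a real service would handle grammar
        # and, imperfectly at best, cultural nuance.
        return " ".join(TOY_LEXICON.get(word.lower(), word)
                        for word in text.split())

    def screen_abstracts(abstracts, keywords):
        """Return abstracts whose English rendering mentions any keyword."""
        relevant = []
        for lang, text in abstracts:
            english = text if lang == "en" else translate_to_english(text)
            if any(k.lower() in english.lower() for k in keywords):
                relevant.append((lang, text))
        return relevant

    hits = screen_abstracts(
        [("en", "A study of intervention effectiveness"),
         ("de", "Eine Studie zur Wirksamkeit der Intervention")],
        keywords=["effectiveness"])
    print(len(hits))  # 2: the German abstract is caught only via translation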

Temporal

In reviewing evidence, a further factor social workers should consider is the age of the evidence. Ideas as to what is “normal” and therefore credible change over time (Freud, 1999), so that “yesterday’s precedent may be today’s anachronism” (Mulrow & Lohr, 2001, p. 262), and even texts that are little more than a year old run the risk of being considered outdated in some fields of inquiry (Sackett, Strauss, Richardson, Rosenberg, & Haynes, 2000). Yet old evidence should not necessarily be discounted. For example, although it was demonstrated in 1601 that lemon juice prevented scurvy, it was not until 1795 that the British Navy began to supply lemon juice for its ships, and the merchant marine did not adopt this measure until 1865 (Silverman, 1998). Likewise, ideas in some early social work texts are still considered relevant; at the same time, ideas in more recent texts are now disregarded.

Pragmatic factors often force researchers to set temporal limits when conducting literature reviews. Bibliographic search engines can identify hundreds or even thousands of potentially relevant pieces of evidence in relation to an inquiry, even though electronic databases index only a restricted range of journals (McManus et al., 1998). For one recent study, the author read over a thousand abstracts, and that was after limiting the search to a 5-year period.

Pragmatic Factors

Time pressures can make it difficult to find and evaluate appropriate evidence and then apply it (Haynes, 1993). Lacking the time academics have to conduct a detailed search of the literature, professional practitioners may use data to which they have ready access, however imperfect it may be (Brook, 1993). Yet even academics often take shortcuts when using evidence to support their claims, and they may cite the same references in several pieces of writing, especially references to work produced by themselves or colleagues.

Payne (2001) notes that when time to seek out research evidence is limited, there may be the temptation

to suggest that the more social workers call on a piece of knowledge, the more basic it may be considered. . . [but] the problem with this approach is that knowledge may not be called upon unless it is known. (p. 138)

Yet there may be little encouragement for social workers to discover what other research evidence is available. Even if seminars are provided locally and at no cost to the individual or the agency, attendance is rarely among the highest of social workers’ competing priorities (Randall, 2002).

The research that is available may be limited to topics for which there are funds. As such, there may be little current evidence on topics that are not funding priorities (Garattini & Liberati, 2000). Where little evidence is available, one may be tempted to use anything that even partly supports one’s argument, despite major limitations. Alternatively, available evidence is often ignored if it derives from a representative community sample rather than from a specific client group (Richardson, Moreland, & Fox, 2001).

Publication

While many academics privilege evidence that emanates from articles in peer-refereed journals, social work practitioners make little distinction between academic and practice journals carrying empirical content, although the latter are often not refereed. Indeed, summaries of research in professional magazines, rather than original articles in refereed journals, may be the closest many social workers get to research results (Sheldon & Chilvers, 2000).

Absolute privileging of evidence from refereed journals is not unproblematic. First, the time lag between preparation and publication can result in the newest evidence not being widely available (Haynes, 1993). Second, despite peer review, many articles appearing in journals are methodologically flawed (Silverman, 1998). Third, much research never appears in journals. For example, the report of the development of the smallpox vaccine was rejected by a peer-reviewed journal because it involved a single case study; yet the vaccine ultimately led to smallpox becoming the only natural disease to have been eradicated, demonstrating the importance of recognizing an original idea or situation and documenting it (Altman, 1993). Much research, however, is never even submitted for publication in a journal. For example, more than half (55%) of the papers submitted to the Society for Academic Emergency Medicine meeting in 1991 were not published in peer-reviewed journals within the following 5 years. Of these unpublished papers, only 20% had actually been submitted to a journal and rejected. Interestingly, there was no association between efforts to publish and factors such as quality of study, originality, sample size, design, or results (Weber, Callaham, Wears, Barton, & Young, 1998).

There has long been concern that journals disproportionately publish findings from successful studies, especially those in which a statistically significant difference was found between study groups (Dickersin & Min, 1993; Easterbrook, Berlin, Gopalan, & Matthews, 1991). Furthermore, it has been observed that studies with statistically significant differences are more likely to be published in more widely read journals (Easterbrook et al., 1991; Simes, 1987). This pattern may reflect researchers’ beliefs that statistically nonsignificant results lack interest (Dickersin & Min, 1993) or editorial decisions that favor significant results in both papers submitted for publication (Coursol & Wagner, 1986) and presentations at scientific meetings (Koren, Shear, Graham, & Einarson, 1989). Consequently, it has been suggested that the exclusion of “grey literature” (unpublished studies with limited distribution) from meta-analyses results in significantly larger estimates of the effect of interventions (McAuley, Pham, Tugwell, & Moher, 2000).
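The grey-literature finding of McAuley et al. (2000) can be illustrated with a small worked example. The following Python sketch, using entirely invented study labels, effect sizes, and standard errors, computes a fixed-effect (inverse-variance) pooled estimate of an intervention effect with and without two unpublished null studies; excluding them inflates the apparent effect.

    # Fixed-effect (inverse-variance) meta-analysis sketch. All study
    # labels, effect sizes (d), and standard errors are invented solely
    # to illustrate the direction of grey-literature exclusion bias.

    def pooled_effect(studies):
        """Inverse-variance weighted mean of the effect sizes."""
        weights = [1 / se ** 2 for _, _, se in studies]
        effects = [d for _, d, _ in studies]
        return sum(w * d for w, d in zip(weights, effects)) / sum(weights)

    published = [("Trial A", 0.45, 0.10),  # (label, d, standard error)
                 ("Trial B", 0.38, 0.12),
                 ("Trial C", 0.50, 0.15)]
    grey = [("Report D", 0.05, 0.20),      # unpublished, essentially null
            ("Thesis E", -0.02, 0.25)]

    print(f"Published only: d = {pooled_effect(published):.2f}")               # 0.44
    print(f"With grey literature: d = {pooled_effect(published + grey):.2f}")  # 0.37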

Profession

For some social workers, the evidence they access will be primarily derived from within social work; others, especially those working in multidisciplinary teams, may prefer to gather their evidence from other disciplines. For them, social work is just one of many fields from which evidence can come. This view may also be influenced by non–social work colleagues who perceive evidence from social work sources as less credible. The extent to which social workers privilege evidence originating from within or beyond social work is unknown.

Methodology

Whether or not social workers prefer evidence that originates from within the profession, there are nevertheless differences between professions as to what is considered evidence. For example, in clinical medicine, the randomized controlled trial, when conducted under the appropriate conditions, is generally considered the “gold” standard of evidence (e.g., Sackett et al., 2000). Thus, a systematic review that examines only results from such trials will be favored over other reviews of the literature (Sackett et al., 2000). However, within social work, despite a few strident advocates (e.g., MacDonald, 1997), there has been considerable skepticism (e.g., Cheetham, Fuller, McIvor, & Petch, 1992), if not outright rejection (Webb, 2001), of randomized controlled trials. MacDonald (1997) notes,

Anyone who acknowledges randomised controlled trials as the benchmark of rigorous evaluative research and tries to persuade social work practitioners, managers, researchers or funders of their crucial role in the evaluation of social interventions will know what it is like to be a voice calling from the wilderness, markedly less popular than John the Baptist, and with a keen eye for Herod family look alikes. (p. 122)

Yet even within medicine, there is increasing recognition of the limits of randomized controlled trials. Although randomized controlled trials can determine the effectiveness of an intervention in an experimental setting, different methods of research may be required to determine if any harmful effects exist or to examine how participants experience any interventions they receive (Sackett et al., 2000).

Within social work, a wide range of methodologies have been proposed, each with its own limitations. For example, surveys that collect information on demographics, service use, or service requests may be appropriate evidence of demand, but they are not necessarily an appropriate methodology to ascertain the effectiveness of interventions. As Gambrill (1999) comments on social work research,

The terms science and scientific are sometimes used to refer to any systematic effort to acquire information about a subject, including case studies, correlational studies, and naturalistic studies. Each method is subject to certain types of error, which must be considered in evaluating data they generate. (p. 343)

Acceptance of a range of methods entails a wider acceptance of qualitative research than may be found in other disciplines. Models of research in which academics adopt a collaborative strategy with research participants have been advocated by some on the basis that they not only empower the participants but lead to richer and more meaningful data than can be obtained using traditional research paradigms (e.g., Gibbs, 2001). However, this may place other limitations on the research design, which may reduce the extent to which others consider the research findings legitimate. For example, action research methodologies would seem to be much more compatible with a collaborative approach than randomized controlled trials.

Theories and Concepts

Often closely aligned with methodological issues are theoretical and conceptual matters. Evaluators, for example, need to know whether the evidence they are expected to collect should be based substantially or even wholly on the implementation of a program or on its outcomes. Each of these choices in turn raises its own measurement issues. With respect to outcomes, how they are defined can affect whether or not a program appears effective. For example, when determining the effectiveness of treatment for substance abuse, relapse can be considered either a failure of the treatment or a temporary setback (Prendergast & Podus, 2000). Which definition one adopts depends on the theories of addiction to which one subscribes.
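A small worked example, with invented follow-up data, shows how much turns on this definition: in the Python sketch below, the same eight hypothetical clients yield two quite different “success rates” depending on whether any relapse counts as treatment failure or only continued use at follow-up does.

    # Invented follow-up records for eight clients of a substance abuse
    # program: (relapsed_during_treatment, still_using_at_12_months).
    clients = [(False, False), (True, False), (True, False), (False, False),
               (True, True), (False, True), (True, False), (False, False)]

    # Definition 1: any relapse counts as a treatment failure.
    strict = sum(not during and not later for during, later in clients)

    # Definition 2: relapse is a temporary setback; only the 12-month
    # status counts.
    lenient = sum(not later for _, later in clients)

    print(f"Relapse as failure: {strict}/{len(clients)} successes")   # 3/8
    print(f"Relapse as setback: {lenient}/{len(clients)} successes")  # 6/8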

Navigating the Borders of Evidence

This article has identified several borders that social workers may apply when selecting evidence on which to base their practice, but it makes no claims as to the completeness of the identified set. The reader can, no doubt, identify situations in which the adoption of strict boundaries has led to other pertinent evidence being ignored, as well as situations in which evidence has been inappropriately applied from one context to another, where arguably stricter border controls should have been maintained. Hence, this article will now propose some guidelines for evaluating potential research evidence, which social work educators may find equally relevant when conducting their own research or when advising students on how to use the professional literature to support their own writing projects.

While there may be some research studies that are so flawed that any student who passes an introductory research methods course would reject them, hard and fast rules do not necessarily apply to many of the delineations identified in this article. Nevertheless, readers are encouraged to consider the following questions when selecting which research evidence to use in their professional practice:

1. Why am I using this evidence?

If one is using research evidence to support an argument or practice decision, then there ought to be a rationale for doing so. If no rationale is forthcoming, there is no justification for using that evidence.

2. Am I only using this evidence because it is readily available to me or because I believe it to be credible?

It is tempting, especially when one is busy, to use whatever research evidence one can obtain with relatively little effort and not to consider whether it is credible evidence for the particular situation.

3. Is the basis of this evidence methodologically sound?

Sometimes claims made in the research literature are supported by questionable research methodology that requires critical appraisal. Furthermore, secondary sources of research evidence tend to emphasize the findings and may provide scant, if any, details of the research methodology. Thus, users of research evidence are forced to rely on the interpretations of their sources.

4. Am I using this evidence without considering how apt it is for the context because it comes from an eminent source?

Just because evidence comes from a respected and well-known researcher, prestigious institution, or reputable journal does not necessarily make this evidence better than research produced from less well-known sources. What should be of greater importance is whether the evidence is appropriate for the context.

5. To what extent do personal factors impinge on my evaluation of this evidence?

Experiences affect how one understands the world and thus can influence how one interprets research findings. Factors that might affect one’s interpretation of research findings include nationality, education, professional allegiances, linguistic skills, personal beliefs and ideologies, social networks, professional heroes, age, and gender.

6. Will others be convinced by this evidence?

If one is using research evidence to support an argument, then the evidence must be perceived as credible not only by those who develop the argument but also by the audience.

7. Is it possible that there is more appropriate evidence? If so, do I have the resources (including time) to search for other evidence?

Evidence at hand may lend support to an argument, but other, more definitive evidence may exist. Searching for more appropriate evidence may take considerable effort, and the resources required to do this may be disproportionately high compared to the benefit.

8. Are there reasons why this evidence cannot be applied?

Research evidence may be quite credible in the context in which it was collected, but fundamental differences (e.g., in cultural, social, or legal contexts) may make generalizations to other situations invalid.

9. Is it possible that this evidence has been superseded?

New research evidence is constantly being produced and it may confirm, contradict, or replace earlier evidence. Using evidence which is widely regarded as having been superseded may lead the audience to consider an argument to be lacking in credibility.

Expanding the Borders of Evidence

While it is essential that social workers understand the limits of the evidence they use and the implications of their choices, a continuing concern is access to information that will facilitate evidence-based social work practice. However, as many social workers are employed in settings where neither research utilization (Randall, 2002) nor research skills (Dalton & Wright, 1999; Forte & Matthews, 1994) are valued, changing organizational culture and practices may be just as crucial as developing the expertise of individual workers (Crisp, Swerissen, & Duckett, 2000).

Historically, it has not been uncommon for social workers to justify their existence in terms of their outputs rather than their outcomes (Crisp, 2000), and the author’s conversations with recent social work graduates in Scotland suggest that the pressure to maintain high caseloads leaves little, if any, time for seeking out research evidence on which to base their practice. In such contexts, initiatives that promote collecting and distributing information about evidence-based interventions (Card, 2001) can play an important role. Nevertheless, even when evidence has already been collected for them, social workers still need to be able to critically appraise the applicability of particular evidence for specific situations (Gibbs & Gambrill, 2002).

One promising innovation for widening the distribution of research evidence is the development of online versions of many academic journals in social work and other cognate disciplines. However, few social work agencies subscribe to these journals, and access tends to be restricted to agencies affiliated with a university library (Sandell & Hayes, 2002). Although it may be feasible for some larger social work agencies to maintain their own libraries and subscribe to relevant journals, a more cost-efficient method of ensuring that social workers have access to research evidence is for agencies to fund library memberships for employees. Many universities will even provide annual library memberships for free to social workers who supervise student placements. However, library memberships will only facilitate access to research evidence if social workers are able to access libraries on a regular basis and are given time to read and contemplate pertinent research findings.

There will always be limits to the research evidence that social workers can access and choices to be made as to which competing bits of research evidence are acceptable for supporting an argument or practice decision. The challenge for readers now is to critically assess the factors that limit their use of evidence.

References

Altman, L. K. (1993). Bringing the news to the public: The role of the media. In K. S. Warren & F. Mosteller (Eds.), Doing more good than harm: The evaluation of health care interventions (pp. 200–209). New York: New York Academy of Sciences.

Bettelheim, B. (1983). Freud and man’s soul. London: Flamingo.

Bradshaw, J. (1972). The concept of social need. New Society, 19, 640–643.

Brook, R. H. (1993). Using scientific information to improve quality of health care. In K. S. Warren & F. Mosteller (Eds.), Doing more good than harm: The evaluation of health care interventions (pp. 74–85). New York: New York Academy of Sciences.

Card, J. J. (2001). The Sociometrics Program Archives: Promoting the dissemination of evidence-based practices through replication kits. Research on Social Work Practice, 11, 521–526.

Cheetham, J., Fuller, R., McIvor, G., & Petch, A. (1992). Evaluating social work effectiveness. Buckingham, UK: Open University Press.

Coursol, A., & Wagner, E. E. (1986). Effect of positive findings on submission and acceptance rates: A note on meta-analysis bias. Professional Psychology, 17, 136–137.

Crisp, B. R. (2000). A history of Australian social work practice research. Research on Social Work Practice, 10, 179–194.

Crisp, B. R., & Ross, M. W. (in press). Borders of evidence: A critical reflection. Radical Statistics.

Crisp, B. R., Swerissen, H., & Duckett, S. J. (2000). Four approaches to capacity building in health: Consequences for measurement and accountability. Health Promotion International, 15, 99–107.

Dalton, B., & Wright, L. (1999). Using community input for the curriculum review process. Journal of Social Work Education, 35, 275–288.

Dickersin, K., & Min, Y.-I. (1993). Publication bias: The problem that won’t go away. In K. S. Warren & F. Mosteller (Eds.), Doing more good than harm: The evaluation of health care interventions (pp. 135–148). New York: New York Academy of Sciences.

Easterbrook, P. J., Berlin, J. A., Gopalan, R., & Matthews, D. R. (1991). Publication bias in clinical research. Lancet, 337, 867–872.

Egger, M., Zellweger-Zahner, T., Schneider, M., Junker, C., Lengeler, C., & Antes, G. (1997). Language bias in randomised controlled trials published in English and German. Lancet, 350, 326–329.

Forte, J., & Matthews, C. (1994). Potential employers’ views of the ideal undergraduate social work curriculum. Journal of Social Work Education, 30, 228–240.

Freud, S. (1999). The social construction of normality. Families in Society, 80, 333–339.

Gambrill, E. (1999). Evidence-based practice: An alternative to authority-based practice. Families in Society, 80, 341–350.

Garattini, S., & Liberati, A. (2000). The risk of bias from omitted research: Evidence must be independently sought and free from economic interests. British Medical Journal, 321, 845–846.

Giacomini, M., Hurley, J., & Stoddart, G. (2000). The many meanings of deinsuring a health service: The case of in vitro fertilization in Ontario. Social Science and Medicine, 50, 1485–1500.

Gibbs, A. (2001). Social work and empowerment-based research: Possibilities, process and questions. Australian Social Work, 54(1), 29–39.

Gibbs, L., & Gambrill, E. (2002). Evidence-based practice: Counterarguments to objections. Research on Social Work Practice, 12, 452–476.

Haynes, R. B. (1993). Some problems in applying evidence in clinical practice. In K. S. Warren & F. Mosteller (Eds.), Doing more good than harm: The evaluation of health care interventions (pp. 210–225). New York: New York Academy of Sciences.

Kilpatrick, D. G., Saunders, B. E., Amick-McMullan, A., Best, C. L., Veronen, L. J., & Resnick, H. S. (1989). Victim and crime factors associated with the development of crime-related post-traumatic stress disorder. Behavior Therapy, 20, 199–214.

Koren, G., Shear, H., Graham, K., & Einarson, T. (1989). Bias against the null hypothesis: The reproductive hazards of cocaine. Lancet, 335, 1440–1442.

Learmonth, A. M., & Watson, N. J. (1999). Constructing evidence-based health promotion: Perspectives from the field. Critical Public Health, 9, 317–333.

Lomas, J. (1993). Diffusion, dissemination and implementation: What should do what? In K. S. Warren & F. Mosteller (Eds.), Doing more good than harm: The evaluation of health care interventions (pp. 226–244). New York: New York Academy of Sciences.

MacCoun, R. J., Saiger, A. J., Kahan, J. P., & Reuter, P. (1993). Drug policies and problems: The promise and pitfalls of cross-national comparison. In N. Heather, A. Wodak, E. Nadelmann, & P. O’Hare (Eds.), Psychoactive drugs and harm reduction: From faith to science (pp. 103–117). London: Whurr.

MacDonald, G. (1997). Social work: Beyond control? In A. Maynard & I. Chalmers (Eds.), Non-random reflections on health services research: On the 25th anniversary of Archie Cochrane’s effectiveness and efficiency (pp. 122–146). London: BMJ Books.

McAuley, L., Pham, B., Tugwell, P., & Moher, D. (2000). Does the inclusion of grey literature influence estimates of intervention effectiveness reported in meta-analyses? Lancet, 356, 1228–1231.

McManus, R. J., Wilson, S., Delaney, B. C., Fitzmaurice, D. A., Hyde, C. J., Tobias, R. S., et al. (1998). Review of the usefulness of contacting other experts when conducting a literature search for systematic reviews. British Medical Journal, 317, 1562–1563.

Moher, D., & Berlin, J. (1997). Improving the reporting of randomised controlled trials. In A. Maynard & I. Chalmers (Eds.), Non-random reflections on health services research: On the 25th anniversary of Archie Cochrane’s effectiveness and efficiency (pp. 250–271). London: BMJ Books.

Moher, D., Fortin, P., Jadad, A. R., Juni, P., Klassen, T., Le Lorier, J., et al. (1996). Completeness of reporting of trials published in languages other than English: Implications for conduct and reporting of systematic reviews. Lancet, 347, 363–366.

Mulrow, C. D., & Lohr, K. N. (2001). Proof and policy from medical research evidence. Journal of Health Politics, Policy and Law, 26, 249–266.

Payne, M. (2001). Knowledge bases and knowledge biases in social work. Journal of Social Work, 1, 133–146.

Prendergast, M. L., & Podus, D. (2000). Drug treatment effectiveness: An examination of conceptual and policy issues. Substance Use and Misuse, 35, 1629–1657.

Randall, J. (2002). The practice–research relationship: A case of ambivalent attachment? Journal of Social Work, 2, 105–122.

Richardson, J., Moreland, J., & Fox, P. (2001). The state of evidence-based care in long-term institutions: A provincial survey. Canadian Journal on Aging, 20, 357–372.

Rosen, A. (1994). Knowledge use in direct practice. Social Service Review, 68, 561–577.

Rosen, A., Proctor, E. K., Morrow-Howell, N., & Staudt, M. (1995). Rationales for practice decisions: Variations in knowledge use by decision task and social work service. Research on Social Work Practice, 5, 501–523.

Rosen, A., Proctor, E. K., & Staudt, M. M. (1999). Social work research and the quest for effective practice. Social Work Research, 23, 4–14.

Sackett, D. L., Strauss, S. E., Richardson, W. S., Rosenberg, W., & Haynes, R. B. (2000). Evidence-based medicine: How to practice and teach EBM (2nd ed.). Edinburgh, Scotland: Churchill Livingstone.

Saleebey, D. (1999). Building a knowledge base: A personal account. Families in Society, 80, 652–661.

Sandell, K. S., & Hayes, S. (2002). The Web’s impact on social work education: Opportunities, challenges, and future directions. Journal of Social Work Education, 38, 85–99.

Sheldon, B., & Chilvers, R. (2000). Evidence-based social care: A study of prospects and problems. Lyme Regis, UK: Russell House.

Silverman, W. A. (1998). Where’s the evidence? Debates in modern medicine. Oxford, UK: Oxford University Press.

Simes, R. J. (1987). Confronting publication bias: A cohort design for meta-analysis. Statistics in Medicine, 6, 11–29.

Sokal, A. (1996a). Transgressing the boundaries: Toward a transformative hermeneutics of quantum gravity. Social Text, 46/47, 217–252.

Sokal, A. (1996b). A physicist experiments with cultural studies. Lingua Franca, May/June, 62–64.

Sox, H. C. (1993). Using evidence to teach effective use of health interventions. In K. S. Warren & F. Mosteller (Eds.), Doing more good than harm: The evaluation of health care interventions (pp. 245–249). New York: New York Academy of Sciences.

Swan, N. (1999). Applying the evidence in Australia. Journal of the American Medical Association, 281, 1073–1074.

Vandenbroucke, J. P. (1998). Medical journals and the shaping of medical knowledge. Lancet, 352, 2001–2006.

Weber, E., Callaham, M. L., Wears, R. L., Barton, C., & Young, G. (1998). Unpublished research from a medical specialty meeting: Why investigators fail to publish. Journal of the American Medical Association, 280, 257–259.

Webb, S. (2001). Some considerations on the validity of evidence-based practice in social work. British Journal of Social Work, 31, 57–79.

White, K. L. (1997). Archie Cochrane’s legacy: An American perspective. In A. Maynard & I. Chalmers (Eds.), Non-random reflections on health services research: On the 25th anniversary of Archie Cochrane’s effectiveness and efficiency (pp. 3–6). London: BMJ Books.

 

Accepted: 09/03.

Beth R. Crisp is senior lecturer, Department of Social Work, University of Glasgow.

Address correspondence to Beth R. Crisp, Department of Social Work, University of Glasgow, Lilybank House, Bute Gardens, Glasgow G12 8RT Scotland; email: b.crisp@socsci.gla.ac.uk.

© by the Council on Social Work Education, Inc. All rights reserved.
