The Efficacy of Career Development Interventions:
A Synthesis of Research
Kris Magnusson, Ph.D.
The University of Lethbridge
The Efficacy of Career Development Interventions: A Synthesis of Research
Two recent symposia have highlighted the need for public policy to be guided by evidence pertaining to the efficacy of career development practice. The first was an international symposium, "Career Guidance and Public Policy: Bridging the Gap", held in Toronto in October of 2003 with 43 countries represented. The second was "Working Connections: A Pan-Canadian Symposium on Career Development, Lifelong Learning and Workforce Development", held in Toronto in November of 2003. A consistent theme that emerged from both symposia was the need to develop effective systems of gathering data that attest to the impact that career development/career guidance services have on a number of levels, such as individual well-being, social inclusion, and economic development. Furthermore, discussants at both symposia indicated that data were needed to inform and influence public policy related to the provision of career services.
The experiences of the participants at these symposia echo a growing call among researchers for more comprehensive efficacy assessment of career practices. Herr (2003) calls for the development of cost-benefit analyses to document the results of career services and for the creation of national research databases to collect and distribute such information. Watts (2000, 2003) has called for efficacy research to link career practices to economic efficiency, social equity and sustainability. In Canada, Hiebert (1994) has been making similar calls for increased and more precise efficacy assessment in career counselling. Currently, a number of Canadian researchers, including Bryan Hiebert and Vivian Lalonde at the University of Calgary and Bill Borgen and Norm Amundson at the University of British Columbia, among others, have been working on the problem of accountability and efficacy measurement in career services.
Despite an increased awareness of the need to better understand how and why career services are effective, the number of outcome research studies has actually decreased in the last 20 years (Whiston, Brecheisen and Stephens, 2003). This decline may be attributable, in part, to the growing recognition of the complexity of career planning. Hughes (2004), for example, commenting on the difficulties associated with assessing the impact of career interventions, notes three major challenges to efficacy research: the range of factors influencing individual choice; the wide variance in client groups, issues and concerns that makes comparison of evidence difficult; and the lack of common outcome measures in the field of career development.
It is clear that a framework for creating, collecting and evaluating career services efficacy is needed. An initial step in that process was taken with the compilation of the Annotated Bibliography of Current Research on the Efficacy of Career Development Interventions and Programs (Roest and Magnusson, 2004). The primary focus of the annotated bibliography was on articles examining the efficacy of career development services and interventions that had been published in English-language career journals over the past 10 years. A parallel initiative, led by Michel Turcotte, is examining articles published in French-language journals. Time constraints did not permit a comprehensive review; however, the articles included provide a representative sampling of research in the field. The central themes and observations from the review of 53 English-language articles are presented here under the following categories: target audience, populations and samples, research methods, general efficacy findings, and divergent theoretical assumptions.
Target Audience
The majority of articles reviewed spoke to an academic or research audience, and to a lesser extent, practitioners. Given that the review was focused on academic journals, this is hardly surprising. However, in the context of providing evidence to better inform practice, it does pose a few problems. For the most part, descriptions and results are not presented in a manner that would be accessible to many practitioners. Thus, even when specific positive results are found, they may not find their way into general practice. This in turn creates a situation in which there may be frequent replication of efficacy research efforts, and little systematic building upon known data.
The academic nature of the articles reviewed poses a secondary problem for practitioners. Even when positive treatment effects are found, very little description of the nature of the program, service or intervention is provided. Practitioners are thus left on their own to locate more detailed descriptions of what exactly proved to be effective. Furthermore, the majority of the reports focus on holistic program or intervention effects; there is very little analysis of the impact or efficacy of specific treatment or program components.
Practitioners are not the only ones who may not be deriving the full benefit of extant efficacy research. Research, as published in academic journals, rarely makes reference to the implications of the research for public policy. This is somewhat surprising because, as Herr (2003) noted, "career counseling, in its many manifestations, is largely a creature of public policy" (p. 8). It would seem reasonable that increased attention would be paid to the constituency upon which most of the funding for career services depends. Herr's cautions regarding too close a linkage between career services and public policy are well worth noting; however, the fact remains that little focused research that would support or better inform policy is available.
Populations and Samples
The primary participants in career efficacy research have been students of educational institutions. It may be said that the articles contained in the Annotated Bibliography illustrate the principle of "convenience sampling". A total of 34 of the 41 specific research studies described intervention effects on students, mostly Caucasian, within educational settings. Of these, 20 studies were conducted with university or college students, 9 utilized high school students, and 5 were conducted with middle school students. This pattern is common in psychological research in general; most studies are done where it is convenient to gain access to participants. Although in one sense this is a reasonable and understandable approach, it still leaves large gaps in our knowledge about the differential effects that career services may have on other groups, such as women, members of varying ethnic or cultural groups, or people from differing educational or socio-economic backgrounds. Based on the findings of this review, it is clear that the focus of research needs to be expanded to include a much broader spectrum of human experience.
Research Methods
The majority of the studies employed quantitative methodology, and some used mixed method designs (i.e., quantitative analysis supplemented by qualitative analysis). The most commonly employed research designs were variations on pretest-posttest, treatment group to control group experimental designs. In some cases, treatment group/control group post-test only designs were employed. Depending on the sophistication of the study, one or more predictor variables were related to one or two criterion variables. In general, the studies attempted to isolate specific treatment effects (e.g., computer assisted guidance systems) on specific outcome measures (e.g., occupational decision-making).
A major concern with the interpretation of the efficacy data is the imprecision of the outcome measures. Often, instruments with questionable standards of reliability and validity serve as the specific outcome measure. For example, studies of youth often employ measures of career maturity, despite the difficulties associated with measuring the career maturity construct. It is quite possible that even stronger efficacy results would be obtained with more accurate outcome measures.
A second concern with efficacy interpretation pertains to the assumptions related to the outcome measures. Often, specific outcomes are used under the assumption that they are linked to positive career planning. For example, increases in occupational exploration behaviours are commonly used as outcome measures, even though we have little evidence to support the assumption that such increases are related to making sound occupational decisions. An equally plausible hypothesis could be that increasing engagement in meaningful activities, regardless of occupational context, will lead to the discovery of satisfying opportunities. It would seem that the majority of the efficacy research published is rooted in what Weinrach (1979) calls the "structural approach" to career development. However, the underlying assumptions governing the selection, and subsequent measurement, of appropriate outcomes are rarely made explicit.
Methods of establishing experimental conditions, and of measuring aggregate outcomes, are problematic for career efficacy research. Very little attention has been paid to the differential effects interventions may have on sub-groups within the sample or on diverse samples. Furthermore, there are few studies that compare interventions and their treatment effects; one of the most commonly reported types of study is an assessment of a specific intervention or treatment (e.g., "the effects of treatment program A on outcome measure X"). Such studies usually reveal positive, but modest, support for the intervention; however, there are few studies that compare the efficacy of interventions with similar goals (e.g., "is treatment program A any more effective than treatment program B"). An exception to this pattern can be found in studies that attempt to assess the effects of computerized systems of guidance; the impact of these types of programs is frequently compared to individual counselling and/or to combined counselling and computerized interventions. More comparisons of this kind are needed. Furthermore, as Brown and Ryan Krane (2000) note, more attention needs to be paid to the combined effect of interventions.
Methods of data aggregation are problematic for career efficacy research, particularly in the analysis of the efficacy of programs of intervention. While many program evaluation studies provide multiple outcome measures, there are very few that analyze the differential impact of specific program components. The focus on global outcome measures does not help us understand what components, and in what combination, contributed to the outcome. Furthermore, unless process variables are specifically attended to, there is no way of knowing if poor results are related to actual program content or simply to the lack of adherence to program design. Although Hiebert (1994) called for both process and outcome assessment components in program evaluations nearly a decade ago, it would seem that few such comprehensive evaluations are making their way into academic publications.
There have been a few attempts to conduct meta-analyses of career efficacy research (e.g., Sexton, 1996; Whiston, Sexton and Lasoff, 1998; Whiston, Brecheisen and Stephens, 2003). Most of these attempts were hampered by questionable research methodology, insufficient information, or lack of integrity in the reporting of the data in the original studies. Furthermore, there is very little consistency in the choice of outcome measures, even when measuring identical constructs. Therefore, it is very difficult to draw conclusions pertaining to career intervention efficacy across studies. Despite these problems, most of the authors of the meta-analyses and literature reviews agreed that career development interventions are indeed effective. The problem is that little is known about why, how, or for whom they work. Overall, research in career intervention efficacy is piecemeal, fragmented and unsystematic.
Efficacy of Career Interventions
Given the limitations of audience, population samples and research methodology, one might be led to wonder what, if anything, we can conclude about career efficacy research. Despite these limitations, a few trends did emerge among the studies reported. The most common finding in the efficacy research was that career interventions or programs had a positive effect on participant satisfaction. For example, even in studies that demonstrated no specific treatment effects, the authors would report that clients were satisfied with the processes or interventions, or that they "reacted positively" to the different treatments. It can be concluded that participants generally express satisfaction with career interventions.
Much of the evidence for the efficacy of career interventions pertains to changes in client competence (37 of 41 studies) or client behaviour (8 of 41 studies). Even though a broad spectrum of interventions is represented in the studies reviewed, career interventions in general have been shown to have a significant effect in two main areas. First, career interventions increase client exploratory behaviours. Participants are more likely to engage in activities that broaden their range of information and knowledge of career options after engaging in some form of career intervention. Second, participants in the studies presented are more likely to make career decisions after engaging in a career intervention. Unfortunately, there is little evidence to suggest whether the range of interventions has differential effects; we do not know if one form of intervention is more effective than another for producing these effects.
Very little attention has been paid to aspects of career planning or career development processes other than exploration and decision-making behaviours. Examples of gaps include the role that engagement plays in career planning (e.g., the use of personal meaning in career planning, or the identification of sources of personal meaning as a motivator/guide for career exploration), the development of prerequisite and planning skills needed to actualize a decision, and the development of systems of social support and/or feedback when implementing career decisions. Overall, the research may be characterized by a central assumption that career planning is largely a cognitive process, and that once a decision is made, it can and will be implemented.
This cursory review of the literature reveals another problem with the assessment of career intervention efficacy: scant attention has been paid to broader outcomes of career interventions. There is little follow-up data on whether clients who use career services attain greater levels of later job satisfaction, work performance or life satisfaction compared with those who do not access the services; more longitudinal studies are necessary. Given that most agencies and services find themselves in an era of fiscal restraint, research into global outcomes is essential for sustaining existing programs and for providing evidence of the need for the development of new ones.
Finally, the global impact of career interventions remains virtually unknown. For example, it is very difficult to determine the economic benefits of career interventions. As Hughes (2004) reported, "research findings highlighted that measuring the economic benefits of guidance is problematic, mainly because guidance effectiveness research in the United Kingdom is usually short-term and focused on immediate results" (p. 2). The same observation could be applied to studies conducted in North America. Even less is known about the social impact of career interventions. While it may be reasonable to speculate that good occupational decisions would lead to stronger, more stable families, increased connection with community, and decreased isolation or alienation, no studies have been found that address such possibilities. Longitudinal research that can build upon multiple sources of research evidence and address multiple factors is clearly needed.
Divergent Assumptions About the Nature of Career Planning
One observation that can be made from a reading of representative research into the efficacy of career interventions is that results are often presented as if there is agreement regarding the "true aims" of career planning. Theoretical assumptions are rarely made explicit, even though there may be a variety of perspectives about what constitutes "effective" career planning. As noted earlier, most efficacy research seems to have been conducted from a structural perspective (e.g., linking specific individual attributes and occupational choice). Typically, this results in the selection of outcome measures such as increased knowledge of self (e.g., through standardized or informal career assessment measures), increased knowledge of the world of work (e.g., increases in occupational information or occupational information seeking behaviours) or the selection of a specific occupational goal (occupational decision making). The relationship between these variables and broader outcomes such as career satisfaction or career stability is not known. Furthermore, there is little evidence to even suggest that these are the most relevant factors to consider in career planning.
The lack of description of process variables associated with career planning is also problematic. In most studies, there is little differentiation made regarding the process of career planning. Although process approaches to career planning have been described (e.g., Magnusson, 1992; Miller-Tiedeman and Tiedeman, 1990; Super, 1990), research studies rarely identify the process of career planning that interventions are intending to address. A dominant although covert assumption seems to be that attending to one component of career planning improves overall career planning. Examples of such covert assumptions include a belief that increasing exploratory behaviours is a desired result, or that the general goal of interventions is decision making.
In addition to problems with outcome and process measures in career efficacy research, there is also a need to identify client or career problem characteristics that may moderate treatment effects of interventions. Three specific areas of concern are apparent. First, few if any attempts have been made to link interventions to client presenting problems. In most analyses of psychotherapeutic interventions and their efficacy, the client presenting issue or problem is clearly identified, and the subsequent intervention for that presenting problem described and tested. However, in career research, the nature of the client presenting concern is usually not named, or if it is, it is given a generic label such as "undecided". Clients who are "undecided" about their career paths may span the gamut from those trying to make a choice between two or more preferred futures to those who perceive that they really have no options available to them. In the former group, information strategies may be more relevant, whereas in the latter group, it may be issues of self-efficacy that need attention. It would be very difficult to measure the true impact of a general intervention if the intervention is not appropriate for the presenting problem.
The second concern related to the impact of treatment effects pertains to the role of intrapersonal processes in career planning. Perhaps due to the dominance of the assumption of the cognitive nature of career decision-making, very little attention has been paid to affective factors in career planning. The role that emotional states such as anxiety, depression, and anger, or even more positive emotions such as anticipation, hope, and confidence, play in career planning processes and outcomes has not received sufficient attention in efficacy research.
The third concern related to the differential impact of interventions that needs further attention in efficacy research is the role that interpersonal processes play in career planning. Decision conflict may arise when an occupational aspiration of a client does not fit with family values, cultural mores, parental aspiration or spousal demands. In a purely structural sense – the matching of individual potential with occupational demand – a decision may be a very good one, but on an interpersonal level, the decision may be problematic. It will be important to devise research programs that identify and attend to the multiple variables that are related to career planning, and to discern the extent to which interventions may have differential effects.
The conclusions to be drawn from this brief review echo the conclusions of recent works in the field (Heppner and Heppner, 2003; Hughes, 2004; Sexton, 1996; Whiston, Brecheisen and Stephens, 2003; Whiston, Sexton and Lasoff, 1998). As Hughes (2004) noted, "much of the research that is conducted has been one-off and fragmented, rather than strategic, and not disseminated widely or effectively" (p. 2). Heppner and Heppner (2003) call for increased research into the career counselling process, so that we can better understand what happens in career counselling and how those processes account for positive outcomes.
The obvious conclusion is that there is a need for a comprehensive research strategy for assessing the efficacy of career interventions. Given the complexity of career processes and factors, this may seem like a daunting proposition. However, there is a clear need to identify the multiple facets, targets, processes and outcomes of career development. We need to deepen our understanding of the presenting issues that clients face, of the differential treatment modalities that may be brought to bear on those issues, of the combined effect of those treatments on specific client outcomes, and of the general and cumulative impact of client change on individual, social and economic well-being.
Confounding the problems associated with the creation of a comprehensive research framework are the problems of relevance and practicality. The field of career development is different from its psychotherapeutic cousin in that its practitioners are often not specifically trained in the theory and practice of career counselling. The presentation of findings, and the means of data collection, must speak to the practical realities facing practitioners, policy makers, employers and researchers.
Finally, the general methodologies that would be included within a comprehensive framework must allow for consistency of data interpretation. Increased attention needs to be paid to the means by which data may be aggregated across context and client concern. The broad range of issues and factors associated with career planning demands a robust means of data aggregation across impact studies. Furthermore, the long-term effects of career interventions can only be determined by conducting longitudinal research.
Whether or not it is possible – or even desirable – to create such a framework remains to be seen. An important forum to discuss the possibility of creating a comprehensive research framework is being held in Ottawa in March, 2004. Researchers from across Canada will be invited to share their perspectives on the state of career efficacy research, and to discuss what steps can be taken to improve our understanding of the "what, how, why and for whom" of career planning. To help with that process, it would be useful to have access to program evaluation research that has been conducted at agency, municipal, provincial and federal levels.
It will also be important to link the work of Canadian researchers with similar work being conducted in international contexts, particularly Great Britain and the United States. Perhaps the newly created International Centre for Career Guidance and Public Policy would be a useful mechanism for coordinating such international cooperation.
References
Brown, S. D., & Ryan Krane, N. (2000). Four (or five) sessions and a cloud of dust: Old assumptions and new observations about career counseling. In S. D. Brown & R. W. Lent (Eds.), Handbook of counseling psychology (3rd ed., pp. 740-766). Toronto, ON: John Wiley & Sons.
Heppner, M. J., & Heppner, P. P. (2003). Identifying process variables in career counseling: A research agenda. Journal of Vocational Behavior, 62(3), 429-452.
Herr, E. L. (2003). The future of career counseling as an instrument of public policy. The Career Development Quarterly, 52(1), 8-17.
Hiebert, B. (1994). A framework for quality control, accountability and evaluation: Being clear about the legitimate outcomes of career counselling. Canadian Journal of Counselling, 28(4), 334-345.
Hughes, D. (2004). Creating evidence: Building the case for career development. The Career Counsellor, 16, 2, 7.
Magnusson, K. C. (1992). Career counseling techniques. Edmonton, AB: Life-Role Development Group.
Miller-Tiedeman, A., & Tiedeman, D. (1990). Career decision-making: An individualistic perspective. In D. Brown, L. Brooks, & Associates (Eds.), Career choice and development (2nd ed., pp. 308-337). San Francisco: Jossey-Bass.
Sexton, T. L. (1996). The relevance of counseling outcome research: Current trends and practical implications. Journal of Counseling and Development, 74, 590-600.
Super, D. E. (1990). A life-span, life-space approach to career development. In D. Brown, L. Brooks, & Associates (Eds.), Career choice and development (2nd ed., pp. 197-261). San Francisco: Jossey-Bass.
Watts, A. G. (2000). Career development and public policy. Journal of Employment Counseling, 37, 62-75.
Watts, A. G. (2003). Career development and public policy. Retrieved March 2, 2004, from http://www.ccdf.ca/pdf/chapter21.pdf
Weinrach, S. (1979). Career counseling: Theoretical and practical perspectives. New York: McGraw-Hill.
Whiston, S. C., Sexton, T. L., & Lasoff, D. L. (1998). Career-intervention outcome: A replication and extension of Oliver and Spokane (1988). Journal of Counseling Psychology, 45(2), 150-165.
Whiston, S. C., Brecheisen, B. K., & Stephens, J. (2003). Does treatment modality affect career counseling effectiveness? Journal of Vocational Behavior, 62(3), 390-410.