Program Evaluation Paper
Impact of an educational tool on young women’s knowledge of cervical cancer screening recommendations (Thiel de Bocanegra et al., 2022)
Summary
Health Issue
Cervical cancer screening has decreased the burden of the disease by 80% (Cervical cancer screening (PDQ®), 2024). The recommendations for cervical cancer screening via pap smears are revised and updated as new evidence emerges. In 2016, the American College of Obstetricians and Gynecologists endorsed the US Preventive Services Task Force’s recommendation that the interval for pap smears transition from annual to every three to five years. New data suggest that this change better balances the potential benefit of finding precancerous lesions against the possible harms of overly frequent screening (Warren et al., 2016). Although these updates have been in place for several years, changes in patient attitudes and clinical practice have lagged. Many women still prefer annual pap smear testing, irrespective of the new recommendations, because this is the standard to which they are accustomed. Some may be concerned that the longer screening interval in the new guidelines could miss a cancer diagnosis.
Program Development
Community health education interventions are often employed to increase patient knowledge of specific topics and enhance their ability to have an informed conversation with their doctor regarding screening decisions. Previous research demonstrates that tailored educational interventions can significantly increase women’s knowledge and ability to make informed decisions about their reproductive health (Foley et al., 2015). Thiel de Bocanegra et al. (2022) created one such educational tool to increase adherence to the new cervical cancer screening guidelines among their priority population of women under 30. This intervention is based on the program theory that increasing women’s understanding of cervical cancer and the new screening guidelines will make them more likely to accept the updated screening intervals.
The authors began with a needs assessment, engaging community stakeholders and experts in the field of gynecology in California. The researchers also held focus groups asking women to identify their concerns regarding the new guidelines. The women expressed concerns about missing a cancer diagnosis and worried that the shift from annual screening was a cost-cutting measure (Thiel de Bocanegra et al., 2022). They also suggested topics they thought would be most relevant, including the details of the pap smear procedure, the correlation between HPV and cervical cancer, and the reasons for the update in screening intervals (Thiel de Bocanegra et al., 2022). Using the information from this needs assessment, the authors generated an interactive online educational program to administer to patients in the office before their appointment. They piloted the tool with a small group in the target age demographic and made revisions such as lowering the reading level and featuring more diverse representation in the materials (Thiel de Bocanegra et al., 2022).
Program Evaluation Process
The evaluation procedure was planned prior to the implementation of the educational intervention. The evaluation is summative, as it aims to assess the effectiveness of the intervention in achieving the program’s goals of improving women’s knowledge of cervical cancer and increasing their confidence in making an informed decision with their physician about the new guidelines (Thiel de Bocanegra et al., 2022). Thiel de Bocanegra et al. conducted a cross-sectional study in which fourteen clinics were randomized to deliver the educational tool to their patients under 30 between February 2016 and June 2016. All clinics were part of the California Family PACT program, which provides free health services to low-income women in California. Patients in the intervention group interacted with the educational program prior to their appointment; the control group did not. After their appointment, patients in both groups completed a 15-item survey that collected sociodemographic information and included 5-point Likert scale questions about their perception of their knowledge of cervical cancer screening. The women who viewed the educational program also answered nine items about their experience with the tool. The results were compiled into composite scores on the themes of “knowledge of cervical cancer prevention, understanding of screening guidelines, and comfort with patient-provider communication” (Thiel de Bocanegra et al., 2022). Statistical analyses were then performed to compare these composite scores between the intervention and control groups.
Results
Ninety-six women received the intervention, and 133 patients were in the control arm. The groups were statistically similar in age, place of birth, primary language, and education level, demonstrating that the random assignment of the clinics was successful (Thiel de Bocanegra et al., 2022). After viewing the program, women in the intervention group had a higher level of confidence in their knowledge of cervical cancer screening and greater comfort communicating with their provider compared to the control group (Thiel de Bocanegra et al., 2022). The majority of women in the intervention arm indicated that the tool was clear and helpful and that they would recommend it to a friend. However, nearly 30% of women in the intervention group still preferred annual screening (Thiel de Bocanegra et al., 2022). Aside from being born outside the United States, no demographic variable was a statistically significant predictor of this preference. In sum, the evaluation demonstrated that the cervical cancer educational tool increased women’s perceptions of their understanding of the new cervical cancer screening guidelines. Evidence suggests that women who used the tool may be better able to communicate with their doctors; nonetheless, many still expressed a preference for annual pap smears contrary to the new guidelines (Thiel de Bocanegra et al., 2022).
The authors state that a strength of the intervention is that it is self-paced, easy to use, and available in both English and Spanish, all of which make it highly accessible (Thiel de Bocanegra et al., 2022). However, they acknowledge several weaknesses in the evaluation design. One limitation was the small sample size and the fact that randomization occurred at the clinic level rather than the individual level. Another weakness was that women were surveyed during the same visit at which they received the intervention, so it is unclear whether they retained the information or whether their attitudes changed over more extended periods (Thiel de Bocanegra et al., 2022).
Evaluation of the Evaluation
Limitations of the Evaluation Design
Several limitations of this evaluation design fall under the category of issues with its sample. I agree with the authors that the sample size of 229 total participants could be improved. While this is not disastrously small, a larger sample size would give the evaluation more statistical power, yielding greater confidence in the results. Some research recommends a sample size of 400 participants for health educational interventions (McConnell et al., 2019). Additionally, the study population only included low-income women without insurance in California, so these results may not be generalizable to all women for whom the updated screening recommendations apply. Previous research has found that age and income are predictors of better knowledge of cervical cancer screening (Bansal et al., 2015). Perhaps older women have higher baseline knowledge, so the educational program may not have a significant effect on this population. Additionally, women in other states or women with different types of healthcare coverage may respond differently to the educational intervention. The narrow study population is a significant shortcoming because a much more diverse population may benefit from the cervical cancer screening educational tool.
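The link between sample size and statistical power can be illustrated with a standard two-proportion sample-size calculation. The sketch below uses only the Python standard library; the proportions chosen are hypothetical illustrations, not values reported in the study.

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per arm for a two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2                           # average proportion under H0
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Hypothetical effect: 50% of controls vs. 65% of intervention participants
# report confidence in their knowledge of the updated guidelines.
print(n_per_group(0.50, 0.65))  # 170 women per arm, i.e., ~340 total
```

Detecting a smaller difference between groups requires a substantially larger sample, which is why the 229 participants enrolled here limit the conclusions that can be drawn about modest effects.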
Another shortcoming of this design, on which I also agree with the authors, is that the evaluation took place only immediately after the intervention. The results indicate that the educational tool affected women’s responses in the hour or so after they first interacted with it, but the evaluation did not assess whether it changed their attitudes long term. Previous educational intervention studies demonstrate that the effects of an intervention may either persist over time or “fade out” (Bailey et al., 2020). This study’s design does not allow conclusions as to whether the increase in knowledge and confidence among women who received the intervention is transient or lasting.
A final shortcoming of this evaluation process was that the survey assessed women’s perception of their knowledge rather than their actual knowledge. The post-intervention survey asked women to rate their confidence in their knowledge of cervical cancer screening as a proxy for a change in their knowledge. This approach cannot account for discrepancies between individuals’ perceived knowledge and objective measures of it. The literature supports that perceived knowledge is often an overestimation of true knowledge (Kaim et al., 2020). Therefore, this evaluation design cannot distinguish whether participants’ knowledge actually increased or whether they merely believed it had as a result of the intervention.
Enhancements of the Evaluation
The evaluation design used in this study could be enhanced in several ways based on its shortcomings. First, the sample size and sample composition could be improved. The intervention should be tested in communities outside of California, preferably in clinics across a diverse array of states, to yield a larger sample that is more representative of US women under 30 eligible for pap smears. The intervention could also be extended to women outside of this younger demographic to determine whether the methods used to inform women under 30 are as effective in older women. The intervention should also be tested on women with different types of health insurance coverage, adding privately insured and Medicare/Medicaid-covered women to this sample of women receiving free services from the California Family PACT clinics. Expanding the study sample in number and diversity will help determine whether this educational tool works as a one-size-fits-all solution or must be tailored to specific populations for optimal success. Additionally, better randomization would improve this study. Participants were not randomly selected, and random assignment was performed at the clinic level. There would be greater control of confounders if participants were randomly selected from clinics and if random assignment took place at the individual level.
Another way to enhance this evaluation is to use a pre-test and post-test design. Both the intervention and control groups could take a survey before the intervention to establish a baseline against which to compare outcomes after the intervention. The researchers could also employ a staged design in which women are given an additional survey at a subsequent visit to determine whether the attitude effects of the intervention endured and whether women retained the knowledge they gained. A final change to enhance this evaluation of the program’s effectiveness would be to measure participants’ true knowledge rather than their perception of understanding. The surveys could include specific knowledge-based questions about cervical cancer screening in a true/false or multiple-choice format to assess women’s understanding. This objective measure of the program’s effectiveness could determine whether perceptions of knowledge about cervical cancer screening and actual knowledge correlate.
Alternative Study Design
An alternative design to evaluate the cervical cancer screening educational tool should measure a more objective outcome, one more closely aligned with the goal of decreasing the number of unnecessary pap smears. For instance, an evaluation could measure the proportion of nonindicated pap smears performed in the intervention group compared with a control group. An outside evaluator would determine whether each patient is eligible for a pap smear based on her personal history and the new screening guidelines. This assessment could then be compared with whether she received a pap smear at her visit. If the proportion of nonindicated pap smears performed in the intervention group is lower than in the control group, the educational tool would have successfully decreased unnecessary screening exams. Combining this design with survey data about women’s knowledge would allow for a more complete assessment of the program’s impact and outcome objectives.
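The comparison described above could be analyzed with a standard two-proportion z-test. The sketch below uses only the Python standard library, and the counts of nonindicated pap smears are invented purely for illustration, not drawn from the study.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided z-test comparing two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                  # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical chart-review counts: nonindicated pap smears out of all visits,
# intervention arm (12 of 96) vs. control arm (40 of 133).
z, p = two_proportion_z(12, 96, 40, 133)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A significantly lower proportion in the intervention arm would suggest that the tool reduced unnecessary screening in actual practice, not merely in self-reported attitudes.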
One advantage of this design is that it uses objective data with little room for bias; a woman is either eligible or ineligible for a pap smear based on the guidelines, and she either received a pap smear at her visit or she did not. Another advantage is that this study would be easy and cost-efficient to perform. Data could be easily obtained from the medical record, and this approach is simple enough to be retroactively combined with the subjective survey results. This design will help determine if changes in knowledge about cervical cancer screening guidelines are concordant with actual practice.
A potential drawback of this study design is that it would not account for the influence of the physicians. For instance, one doctor may recommend annual pap smears outside the updated guidelines because that is how they have always practiced. They may influence patients to acquiesce to a pap smear that they would otherwise refuse. Other doctors may strictly adhere to the new guidelines and influence women in that direction. This poses a challenge in determining whether the evaluation results are due to the intervention or to confounding factors. However, if the proportion of patients who get unnecessary pap smears differs by physician, this would be valuable information because those doctors could be targeted for intervention themselves. Another limitation of this study design is that it would not account for women who chose to delay their pap smears to another date. Women might defer the screening to another visit if they are menstruating, if they are pregnant, if they have a vaginal infection, if they are uncomfortable or anxious, if they recently had intercourse, or if their doctor chooses to prioritize other health complaints. Though this alternative design, tracking rates of nonindicated pap smears in the intervention and control groups, has limitations, it would better assess the program’s outcome objective of decreasing unnecessary pap smears.
References
Bailey, D. H., Duncan, G. J., Cunha, F., Foorman, B. R., & Yeager, D. S. (2020). Persistence and fade-out of educational-intervention effects: Mechanisms and potential solutions. Psychological Science in the Public Interest, 21(2), 55–97. https://doi.org/10.1177/1529100620915848
Bansal, A. B., Pakhare, A. P., Kapoor, N., Mehrotra, R., & Kokane, A. M. (2015). Knowledge, attitude, and practices related to cervical cancer among adult women: A hospital-based cross-sectional study. Journal of Natural Science, Biology, and Medicine, 6(2), 324–328. https://doi.org/10.4103/0976-9668.159993
Cervical cancer screening (PDQ®). (2024, June 21). National Cancer Institute. https://www.cancer.gov/types/cervical/hp/cervical-screening-pdq
Foley, O. W., Birrer, N., Rauh-Hain, J. A., Clark, R. M., DiTavi, E., & Del Carmen, M. G. (2015). Effect of educational intervention on cervical cancer prevention and screening in Hispanic women. Journal of Community Health, 40(6), 1178–1184. https://doi.org/10.1007/s10900-015-0045-x
Kaim, A., Jaffe, E., Siman-Tov, M., Khairish, E., & Adini, B. (2020). Impact of a brief educational intervention on knowledge, perceived knowledge, perceived safety, and resilience of the public during COVID-19 crisis. International Journal of Environmental Research and Public Health, 17(16), 5971. https://doi.org/10.3390/ijerph17165971
McConnell, M. M., Monteiro, S., & Bryson, G. L. (2019). Sample size calculations for educational interventions: Principles and methods. Canadian Journal of Anesthesia/Journal canadien d’anesthésie, 66(8), 864–873. https://doi.org/10.1007/s12630-019-01405-9
Thiel de Bocanegra, H., Dehlendorf, C., Kuppermann, M., Vangala, S. S., & Moscicki, A.-B. (2022). Impact of an educational tool on young women’s knowledge of cervical cancer screening recommendations. Cancer Causes & Control: CCC, 33(6), 813–821. https://doi.org/10.1007/s10552-022-01569-8
Warren, J. B., Gullett, H., & King, V. J. (2016). Cervical cancer screening and updated Pap guidelines. Primary Care, 36(1), 131–149. https://doi.org/10.1016/j.pop.2008.10.008