Hans Veerman1,2,3* • Marinus J. Hagens1,2,3* • André N. Vis2,3 • R. Jeroen A. van Moorselaar2,3 • Pim J. van Leeuwen1,3 • Michel W.J.M. Wouters4,5,6 • Henk G. van der Poel1,2,3
1Department of Urology, Netherlands Cancer Institute – Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands; 2Department of Urology, Amsterdam University Medical Centers location Boelelaan, Amsterdam, The Netherlands; 3Prostate Cancer Network the Netherlands, Amsterdam, The Netherlands; 4Scientific Bureau, Dutch Institute for Clinical Auditing, Leiden, The Netherlands; 5Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, The Netherlands; 6Department of Medical Oncology, Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
Abstract: Continuous quality assessment and control in healthcare is essential to provide patients with the best possible care. Quality assurance programs have been developed to improve future healthcare by thoroughly studying patient outcomes at the physician or institutional level. Through the continuous and cyclical process of data registration, evaluation, and adaptation, opportunities are sought to improve (individual) patient outcomes. Over the past decade, quality assurance programs have been initiated within urological clinical practice, mainly focusing on the diagnosis and surgical treatment of prostate cancer. While they all share the same philosophy of improving healthcare, existing quality assurance programs differ greatly. To date, little is known about their effects on the outcomes of prostate cancer care. In this chapter, we summarize the current knowledge regarding quality assurance programs within prostate cancer care. We provide insights into how quality assurance programs can improve and assure future diagnosis and treatment of prostate cancer.
Keywords: cyclical quality assurance for prostate cancer; improving prostate cancer care; quality assurance for prostate cancer; requirements for quality assurance programs; statistical quality assurance for prostate cancer
Author for correspondence: Hans Veerman, Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NCI-AVL), Plesmanlaan 121, 1066 CX Amsterdam, The Netherlands. Email: h.veerman@nki.nl
Cite this chapter as: Veerman H, Hagens MJ, Vis AN, van Moorselaar RJA, van Leeuwen PJ, Wouters MWJM, van der Poel HG. Improving Prostate Cancer Care through Quality Assurance Programs. In: Barber N and Ali A, editors. Urologic Cancers. Brisbane (AU): Exon Publications. ISBN: 978-0-6453320-5-6. Online first 26 May 2022.
Doi: https://doi.org/10.36255/exon-publications-urologic-cancers-prostate-cancer-care
In: Barber N, Ali A (Editors). Urologic Cancers. Exon Publications, Brisbane, Australia. ISBN: 978-0-6453320-5-6. Doi: https://doi.org/10.36255/exon-publications-urologic-cancers
Copyright: The Authors.
License: This open access article is licenced under Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
*These authors contributed equally to the creation of this manuscript
Quality assurance and control have rapidly become part of the everyday vocabulary in healthcare. In the early 1900s, with his landmark publication, Ernest Amory Codman pioneered this field (1, 2). With the End Results System, Codman advocated tracking patient outcomes in order to improve the quality of healthcare. It was his belief that high-quality care did not derive from fancy equipment, but rather from self-assessment by healthcare professionals. Although Codman was ostracized by his colleagues for this idea, it forms the basis of many contemporary initiatives to improve the quality of healthcare worldwide.
Following in Codman’s footsteps, physicians have sought a more scholarly approach to quality assurance in healthcare by acquiring knowledge and expertise from the industrial sector (3, 4). Prospective registries, the contemporary, more advanced equivalent to Codman’s End Results Cards, have been implemented and play an important role in present-day quality assurance programs (QAPs). QAPs are structured programs in which healthcare employees critically review the outcomes of their patients and continuously analyze and discuss these results in order to improve the outcomes.
Healthcare QAPs originated in general surgery, but over the past decade they have also been initiated in urological practice, with a particular focus on the diagnosis and surgical treatment of prostate cancer (PCa) (5). The formation and structure of these QAPs have previously been described, but little is known about their effects on the outcomes of PCa care (6). Therefore, this chapter reviews the available literature on QAPs in PCa care, answering the following questions: (i) what is the theory behind QAPs; (ii) which organizational requirements are necessary; and (iii) what is the available evidence on the effect of QAPs on PCa care?
QAPs use continuous and short-cycled processes of data registration, evaluation, and adaptation to improve outcomes. This ideology did not originate in healthcare, but in the production industry. In the 1950s, Dr. William Edwards Deming, building on the work of Dr. Walter Andrew Shewhart, developed a cyclical technique to address and solve problems in production lines and thereby continuously improve the quality of industrial and organizational processes (the Plan, Do, Check, Act (PDCA)-cycle; Figure 1) (7). Deming’s philosophy revolutionized industrial output in post-war Japan, where it is known as “kaizen”—the continuous search for opportunities for all processes to get better (3). Although Deming’s PDCA-cycle was intended for the industrial sector, it can also be applied in healthcare systems (8). The PDCA-cycle comprises four steps: (Plan) identifying clinical steps that require improvement and designing an intervention; (Do) implementing the intervention; (Check) evaluating clinical outcomes after the intervention; and (Act) adopting the intervention into standard clinical practice if outcomes are favorable.
Figure 1. The Plan, Do, Check, Act (PDCA)-cycle (or Deming’s cycle) – a continuous and cyclical technique to improve outcomes. Figure from: https://www.praxisframework.org/en/library/shewhart-cycle
After the completion of one full cycle, a new period of data collection, data analysis, and evaluation ensues. The length of each cycle depends on what is being investigated at the time; a sufficient number of events must occur to detect a change in outcome. Cycle lengths can therefore range from three cycles in 1 day to one cycle in 16 months (9). Depending on the objective to be achieved, the duration of a cycle chain (first to last cycle of one chain) may also vary enormously (1 day to 4 years). To manage the duration and analyze the quality, a predetermined end date of the cycle is imperative. Therefore, before starting the PDCA-cycle, a well-designed statistical power calculation is essential.
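As a minimal illustration of such a power calculation, the sketch below (Python; the function name and the example rates are hypothetical) estimates the per-group number of patients needed to detect a change in an outcome proportion with a standard two-sided z-test:

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-group sample size to detect a change from
    proportion p1 to proportion p2 (two-sided z-test, normal
    approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Hypothetical example: detecting a drop in an event rate from 10.6%
# to 7.4% requires well over a thousand patients per period.
n_per_group = sample_size_two_proportions(0.106, 0.074)
```

An anticipated effect of this size in a low-volume setting would dictate a cycle length of years rather than months, which illustrates why the cycle duration must be matched to the expected event rate and patient volume.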
It is essential to analyze outcomes in a correct manner to carefully target improvement efforts and assess the success of implemented pathways, protocols, and improvement plans (10). The use of statistical process control (SPC), developed by the aforementioned Dr. Shewhart, aids in testing the effectiveness of an intervention. SPC has found its way into healthcare systems over the last two decades (11).
In every (production) process, two types of variation can be distinguished: common cause variation and special cause variation, both of which affect (product) quality (10, 12). Common cause variation is defined as variance inherent to the process itself; similar to many population characteristics that follow a Gaussian distribution, approximately 5% of measurements fall outside the 2-standard-deviation limits. Special cause variation, on the other hand, is defined as variance that can be attributed to a specific cause (e.g., an intervention). The presence of special cause variation signals that the process has changed (for better or for worse). Shewhart developed run and control charts to distinguish these types of variation within a production line process. For illustrative purposes, a control chart was created indicating the number of prostate biopsies for a hypothetical cohort of men suspected of having prostate cancer (Figure 2).
Figure 2. Control chart representing the average number of transperineal prostate biopsies per patient over time in a teaching hospital. Common cause variation is present due to differences between physicians and in baseline characteristics of patients; however, special cause variation was observed between July and October 2021. Special cause variation was defined as any outcome more than 3 standard deviations (SDs) above or below the mean, or 4 out of the last 5 outcomes more than 1 SD above or below the mean. This rule violation coincided with the introduction of a new physician; a physician-in-training took significantly more prostate biopsies per patient. Through performance feedback and discussion, the number of biopsies normalized again as of November 2021.
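The two signalling rules used in Figure 2 can be sketched as follows (Python; a simplified illustration with hypothetical data, in which the control limits are estimated from a stable baseline period and subsequent periods are monitored against them):

```python
import statistics

def control_limits(baseline):
    """Estimate the centre line and standard deviation from a stable
    baseline period (phase I of a Shewhart control chart)."""
    return statistics.fmean(baseline), statistics.stdev(baseline)

def shewhart_signals(samples, mean, sd):
    """Flag special cause variation in monitored samples using the two
    rules from Figure 2: any point beyond 3 SD, or 4 of the last 5
    points on the same side beyond 1 SD."""
    signals = []
    for i, x in enumerate(samples):
        if abs(x - mean) > 3 * sd:
            signals.append((i, "beyond 3 SD"))
        window = samples[max(0, i - 4):i + 1]
        if len(window) == 5:
            above = sum(1 for w in window if w > mean + sd)
            below = sum(1 for w in window if w < mean - sd)
            if above >= 4 or below >= 4:
                signals.append((i, "4 of 5 beyond 1 SD"))
    return signals

# Hypothetical monthly averages of biopsy cores per patient: a stable
# baseline, then a monitored series containing one abrupt increase.
mean, sd = control_limits([12.1, 11.8, 12.3, 12.0, 11.9, 12.2])
print(shewhart_signals([12.0, 12.1, 13.0], mean, sd))
```

Separating the baseline (limit-setting) phase from the monitoring phase is what keeps a single outlier from inflating the limits and masking itself.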
Although effective in detecting large shifts in a production line, Shewhart control charts are unable to detect moderate or small shifts. This reduced sensitivity can be compensated for by augmenting Shewhart control charts with cumulative sum (CUSUM) control charts (13, 14). Unlike Shewhart control charts, CUSUM control charts represent information from current and previous samples at each point. Plotting the cumulative sums of deviations from the target value for current and previous samples results in greater sensitivity for detecting shifts or trends than traditional Shewhart control charts. By way of example, a CUSUM chart was created indicating the number of positive surgical margins of one surgeon’s consecutively treated patients (Figure 3).
Figure 3. Risk-adjusted cumulative sum (CUSUM) plot of hypothetical data. The plot represents the surgical margin status of one surgeon’s consecutively treated patients. First, a prediction model is created using logistic regression with clinical variables as input and positive surgical margin (PSM) status as output. This model predicts the probability of a PSM for each individual patient; the probability ranges from 0 to 1. If a patient had a positive surgical margin (undesired outcome), 1 minus the predicted probability is added to the cumulative sum (the line goes up). If a patient had a negative surgical margin (desired outcome), the predicted probability is subtracted from the cumulative sum (the line goes down). The control limit is calculated using the standard deviation of the mean PSM proportion, the group size, and a weighting parameter (usually 4). To detect a small difference, or when there are few data points, the weighting parameter can be decreased. For illustrative purposes, the control limit is set at 3. The control limit is reached at the 94th patient, which indicates that the surgeon had more PSMs than predicted based on the patients’ clinical variables. This could be a reason to evaluate the surgeon’s technique to improve the PSM rate.
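The construction described in the caption of Figure 3 can be sketched as follows (Python; a simplified illustration with hypothetical inputs, where the supplied predicted probabilities stand in for the output of the fitted logistic regression model):

```python
def risk_adjusted_cusum(outcomes, predicted, control_limit=3.0):
    """Risk-adjusted CUSUM in the style of Figure 3.

    outcomes:  1 for a positive surgical margin (PSM), 0 for negative.
    predicted: model-predicted probability of a PSM for each patient.
    Returns the running sums and the (0-based) index of the first
    patient at which the control limit is crossed, or None.
    """
    s, sums, signal_at = 0.0, [], None
    for i, (y, p) in enumerate(zip(outcomes, predicted)):
        s += (1 - p) if y == 1 else -p  # line up for a PSM, down otherwise
        sums.append(s)
        if signal_at is None and s >= control_limit:
            signal_at = i
    return sums, signal_at

# Hypothetical surgeon with a PSM in every case despite a predicted
# risk of only 50% per patient: the sum climbs by 0.5 per case.
sums, signal_at = risk_adjusted_cusum([1] * 8, [0.5] * 8)
```

For a surgeon performing exactly as predicted by the case-mix model, the additions and subtractions cancel out on average and the line hovers around zero.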
In addition to assessing the success of implemented interventions, comparing outcomes of physicians/hospitals is an important feature of QAPs; not to stimulate competition, but to identify variation in care processes that may be associated with outcomes (outcomes research). However, given that patient populations may differ between physicians and hospitals, assessment of and adjustment for case-mix variation is warranted. Methods have been proposed to analyze whether physicians/hospitals differ in outcome when compared with the mean risk of the case-mix subgroup (15, 16). Using multivariable regression models, the observed/expected ratio (O/E ratio) can be calculated for each physician/hospital and outcome. This is the case-mix-adjusted ratio indicating the quality of a physician/hospital. As a visual aid, case-mix-adjusted funnel plots can be constructed; any physician/hospital that falls outside the 95% confidence intervals has outcomes that deviate significantly from the group average. Figure 4 shows a funnel plot of hypothetical data illustrating the O/E ratios of positive surgical margins of 9 different surgeons.
Figure 4. A funnel plot was constructed of a hypothetical cohort of prostate cancer patients who underwent robot-assisted radical prostatectomy. The funnel plot displays the observed/expected (O/E) ratio of positive surgical margins (PSM) of 9 different surgeons. One surgeon, highlighted as 1, is depicted above the upper 95% confidence limit, which indicates a higher PSM rate than expected based on the patients’ clinical characteristics when compared with the other surgeons. This could be a reason to evaluate the technique of surgeon 1 in order to improve the PSM rate.
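The O/E ratio and the funnel limits can be sketched as follows (Python; a simplified illustration in which the expected probabilities are assumed to come from a previously fitted case-mix model, and the 95% limits use a normal approximation rather than exact binomial limits):

```python
import math

def oe_ratio(observed_events, expected_probs):
    """Case-mix-adjusted observed/expected ratio for one surgeon or
    hospital: the observed event count divided by the sum of the
    model-predicted probabilities of that provider's patients."""
    return observed_events / sum(expected_probs)

def funnel_limits(expected_events, z=1.96):
    """Approximate 95% control limits around O/E = 1 for a given
    expected event count (normal approximation; the limits narrow as
    volume grows, giving the funnel its characteristic shape)."""
    half_width = z / math.sqrt(expected_events)
    return 1 - half_width, 1 + half_width

# Hypothetical surgeon: 18 PSMs observed where the case-mix model
# expected 10 (50 patients at 20% predicted risk each).
ratio = oe_ratio(18, [0.2] * 50)
lower, upper = funnel_limits(10)
```

Here the ratio of 1.8 lies above the upper limit, so this surgeon would be flagged as an outlier, as surgeon 1 is in Figure 4.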
Properly constructing a QAP is essential for its success. Given the continuous nature of QAPs, the first and foremost requirement is motivated physicians who form the steering committee of the program. Their indispensable input identifies clinical steps or outcomes that require improvement. Subsequently, these outcomes must be properly registered in prospective (institutional) databases. Quality assurance can be performed at either the physician or the institutional level. Multicenter collaborations and/or hospital networks facilitate inter-institutional comparison. However, the accompanying data transfers between hospitals can be problematic because of technical or patient-privacy issues. These problems may be solved by using secure, internet-based, multicentric electronic data capture (EDC) systems managed by an independent data processor, who pseudonymizes patient-level data before analyses are performed (17).
Before starting data collection, consensus must be reached on relevant outcomes indicative of quality of care. The International Consortium for Health Outcomes Measurement (ICHOM) has developed specific sets of relevant patient outcome measures for both localized and advanced PCa, which can be registered in a standardized way (18, 19). Confidence in data accuracy and completeness is fundamental for quality assurance and for the provision of sufficiently robust evidence on which to base changes in practice recommendations. Physician-reported data are reliable for benchmarking outcomes (20). However, the data collection process should be well described and monitored in order to provide accurate and complete data that can be used for multiple purposes (e.g., research or hospital management). To prevent bias, both analyses and interpretations should be performed by independent parties. Additionally, participants put themselves in a vulnerable position in which they receive feedback (criticism) on their professional functioning; data should therefore be handled confidentially. Presenting the data in a safe environment in which participants can discuss freely and without repercussions is a prerequisite. This can be achieved by anonymizing data or by limiting data access to participants of the QAPs only (21). Participating physicians/hospitals are expected to trust the data and the data collection process; if they perceive the feedback as non-credible, they may not be motivated to change their practice.
The last requirement is backing from hospital management. Drafting, implementing, and maintaining a QAP requires monetary investment. Therefore, the functioning of the QAP itself must also be evaluated: is it cost-effective? Depending on the subject, calculating quality-adjusted life years (QALYs) or the incremental cost-effectiveness ratio (ICER), defined as the difference in cost between two interventions divided by the difference in their effect, can provide insight into its cost-effectiveness. Improvement in the quality of care is associated with less comorbidity and less frequent follow-up treatment. Therefore, QAPs can lead to long-term cost reduction (22–24).
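As a minimal illustration (Python; the function name and the cost and QALY figures are hypothetical), the ICER is a one-line calculation once the costs and effects of the compared pathways are known:

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of effect (e.g. cost per QALY gained) of the new pathway versus
    the old one."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical: a QAP-optimized care pathway costs 2,000 more per
# patient but yields 0.2 additional QALYs, i.e. 10,000 per QALY gained.
cost_per_qaly = icer(12_000, 10_000, 8.2, 8.0)
```

Whether such a ratio counts as cost-effective depends on the willingness-to-pay threshold used in the local healthcare system.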
We reviewed the literature for studies assessing the effect of QAPs in PCa care. Specifically, we searched for studies that mention a QAP or improvement cycle according to the definitions given above.
The first attempts to develop QAPs for PCa care were made in Sweden. The merger of several regional databases created the National Prostate Cancer Registry (NPCR) (25). Using this database, surgeons were able to compare the outcomes of their hospital with historical data from other hospitals and with the national average. In 2017, the NPCR opted for full transparency: all outcomes were made publicly available through an online dashboard in order to stimulate national quality control. With the help of this online dashboard, physicians can compare hospital-specific outcomes between Swedish hospitals. To preserve the privacy of physicians, individual surgeon-specific outcomes are accessible only to colleagues within their own department. The NPCR has already proven its worth. In 2014, an increased rate of readmissions after prostatectomy was observed, mainly due to anastomotic leaks. Videos of these patients’ procedures were reviewed, and the literature was searched to identify critical surgical steps in the apical dissection and the suturing of the anastomosis. As a result, the surgical technique was changed, which led to a decrease in the readmission rate (from 10.6% to 5%) (25).
The Michigan Urological Surgery Improvement Collaborative (MUSIC) is a group of 46 urology practices with over 250 participating urologists (26). The MUSIC QAP aims to improve the quality and cost-efficiency of PCa care by reducing variance in practice. Within MUSIC, participating urologists submit data to a web-based clinical registry and subsequently receive quarterly reports in which their performance is compared with the statewide average and with other physicians. To date, MUSIC has published several papers on quality assurance in both PCa diagnosis and treatment, underscoring the positive changes that can be achieved with a collaborative QAP in PCa diagnostics. In an effort to improve data completeness, MUSIC has shown that QAPs can improve documentation of key variables, such as the clinical TNM-classification. By educating a dedicated urologist in each participating center on the importance of the clinical TNM-classification for clinical decision-making, and having them share this knowledge and their performance data with other members of their practice, documentation ultimately improved (27). Through performance feedback and educational interventions, imaging appropriateness has been improved and biopsy-related complications have been reduced (27–29). Additionally, MUSIC has focused on the variation in surgeon-specific outcomes, such as erectile dysfunction, urinary incontinence, and complicated postoperative recovery. In accordance with the NPCR, MUSIC argues that objective identification of surgeons who achieve better outcomes will provide insight into the specific techniques associated with those better outcomes (30–32). It has been suggested that peer review of surgical videos and coaching may improve surgical skills and, hopefully, patient outcomes (33, 34).
Participation in a nationwide QAP is mandatory in Germany. In 2008, the German Cancer Society (Deutsche Krebsgesellschaft, DKG) initiated a certification program to increase the quality of PCa care in Germany (35). To qualify for certification, centers must have established a quality management system and must meet quality indicators yearly. Fifteen quality indicators (both treatment- and process-related) were established based on expert opinion and clinical guidelines. Despite the efforts made, improvements in functional and oncological outcomes could not be demonstrated (36). In the meantime, the German Martini Clinic implemented a physician-initiated QAP. The clinic realized that its institutional prospective database, initially started for scientific purposes, could also be utilized in a QAP. This data collection contributed significantly to constant quality improvements over the years (37). For example, anesthetic regimens were adapted, which led to a decrease in intraoperative blood loss. In addition, nomograms were implemented and the NeuroSAFE technique was developed, which increased the number of nerve-sparing procedures while keeping biochemical recurrence rates steady (38). They also noticed that one of their urologists had better urinary continence outcomes than the others. After watching surgical videos, they found that this surgeon used a specific technique when dissecting the prostatic apex and urethra. Implementation of this technique by all other surgeons improved the urinary continence rate of all surgeons (39).
The London Cancer Network noticed poorer results compared with those of international colleagues, which motivated them to initiate a QAP. Through image-based surgical planning and monthly peer review of individual surgeons’ outcomes, a high quality of care for patients undergoing radical prostatectomy was pursued (40). The implementation of this QAP substantially improved quality of care, in terms of both oncological and functional outcomes: nerve-sparing surgery increased significantly while margin status remained static, and postoperative urinary continence and erectile function improved.
Similar to the MUSIC approach, Veerman et al. aimed to reduce catheter-related bladder discomfort (CRBD) after robot-assisted radical prostatectomy by applying a QAP to the intraoperative anesthesia regimen (41). After 8 cycles of different treatments and adaptation of the treatment protocol, the optimal treatment regimen was identified. This regimen reduced the incidence of CRBD from 70% to 36%, a relative reduction of 49%. Matulewicz et al. sought to determine the efficacy of comparative quality performance review in improving a surgeon-level measure of surgical oncologic quality. Participating surgeons were provided with confidential report cards detailing information about their patients’ clinical characteristics and positive surgical margin rates (42). These report cards also contained their historical data, the institutional average, and the blinded results of peers. Before the implementation of report cards, the positive surgical margin rate was 10.6%; during and after implementation, it dropped significantly to 7.4%.
QAPs are increasingly implemented to achieve and maintain high-quality PCa care. Currently existing QAPs differ in focus, execution, motivation, and subject; however, they all share the same philosophy: to improve future PCa care by thoroughly studying their own retrospective data and identifying outliers. The existing literature has already described promising effects of QAPs on PCa care, such as improved functional outcomes, improved oncological outcomes, and reduced variability between physicians/hospitals.
QAPs are not stand-alone “research” projects. They are continuous cycles, incorporated into daily practice, striving for the best possible care. While the quality cycles safeguard the quality of care, participating physicians must safeguard the quality of the cycle. Quality assurance is achieved through the collection and analysis of reliable data and the willingness of physicians to act on the findings. Maintaining prospective registry databases alone is insufficient. Comparing one’s results with peers or with the hospital average gives a good indication of performance, but to improve outcomes, discussion between peers and the identification of improvement steps are essential. The willingness to improve must come from the physicians themselves; physician-initiated programs have been shown to improve both functional and oncological outcomes, whereas programs in which physicians were compelled to participate did not improve outcomes (35).
No consensus exists on the level of data transparency; opinions differ on the accessibility of quality assurance data. The Swedes have opted for full transparency and have made their data publicly available through an interactive online dashboard. Some, however, believe that patients may misinterpret the outcomes owing to a lack of context and of medical or epidemiological knowledge. Consequently, patients may avoid physicians and/or hospitals with ‘worse’ performance. Moreover, full data transparency may evoke risk-averse behavior in physicians, who may avoid treating high-risk patients; it may also induce registration bias and limit physicians’ motivation. These effects are counterproductive to the progressive nature of the QAP (43, 44). On the other hand, full or partial access to quality assurance data for professionals is more accepted and even beneficial. Shared insight into the data improves physicians’ confidence in its accuracy. Moreover, physicians can benchmark their results against the average and against their peers. Consequently, participants can identify points of improvement and find solutions through a joint approach. In this way, the participants can learn from each other. Transparency of data within selected groups is therefore recommended to maximize the positive effects of QAPs.
A criticism of the current literature on QAPs is that it focuses on improving one outcome at a time. Associations with multiple outcomes are not always taken into account. PCa care is dynamic, and important outcomes (urinary incontinence, erectile function, positive surgical margins, etc.) are related to each other. An improvement in a specific outcome does not necessarily mean an improvement of the whole; other outcomes may be adversely affected by the intervention. The London Cancer Network and the German Martini Clinic should be commended in this respect, as they reported all relevant outcomes instead of focusing on a single quality indicator.
In order to obtain reliable results, a high volume of treated cases is desirable. High-volume centres generally have better outcomes than low-volume centres (45, 46). Additionally, improving patient outcomes through short cycles of quality improvement is easier with a higher volume of treated patients. After all, when a low volume of patients is treated, it may take years to measure a difference in outcomes after a change of practice. Collaborations between hospitals, such as networks and hub-and-spoke models, can achieve a high patient volume. The formation of hospital networks offers several other advantages. For example, centralisation is associated with increased guideline compliance and reduced costs. Centralisation also offers new surgeons the opportunity to learn from expert surgeons, which may increase the quality of care in the entire network (47, 48).
Randomised controlled trials (RCTs) are the purest way to demonstrate a causal relationship between intervention and effect. However, this research method has a major drawback: because of the highly selected populations used in RCTs, outcomes in daily practice may differ from RCT results. ‘Real-world’ insights, gained through QAPs, can therefore be of great value in complementing evidence from RCTs (49). Because QAPs draw on an unselected population, their outcomes better reflect daily practice and are therefore more relevant to physicians. In addition, carrying out RCTs involves high costs, whereas QAPs are associated with minimal costs (50).
Many papers published on quality improvement initiatives in PCa care report positive outcomes, which may indicate publication bias. Centres with less appealing results may be reluctant to publish or may struggle to find journals that accept their research. Consequently, it is harder to attribute the trend of improved results to the QAP alone. Furthermore, all centres that have published on the effects of QAPs on PCa care are high-volume centres that are actively involved in the scientific community. It is possible that the effect is caused by applying the latest scientific insights rather than by learning from the best surgeon in the group. However, the results of the Dutch Institute for Clinical Auditing (DICA) counter this argument. DICA performs quality cycles for the treatment of several (non-urological) oncological and non-oncological diseases. All hospitals that treat a specific condition participate in the corresponding quality cycle, making it a nationwide, population-based QAP. Results of the DICA QAPs have shown several improvements in quality of care, i.e., improved reporting, decreased between-hospital variation, decreased complication rates, and reduced mortality (51).
In conclusion, despite differences in organizational characteristics, the available literature shows positive effects of QAPs, provided that motivated participants are involved. The use of QAPs should therefore be recommended in urological practice. The key to success is a group of motivated physicians who lead the QAP.
Conflict of Interest: The authors declare no potential conflicts of interest with respect to research, authorship and/or publication of this chapter.
Copyright and Permission Statement: The authors confirm that the materials included in this chapter do not violate copyright laws. Where relevant, appropriate permissions have been obtained from the original copyright holder(s), and all original sources have been appropriately acknowledged or referenced.