Patient Safety and Healthcare Quality Research Methods Seminar (病人安全與醫療品質研究方法研討會) – Conference Materials

(1)

Measuring the Quality of Hospital Care

Min Hua Jen

(2)

Contents

Background

English Hospital Statistics

Case-mix adjustment

Presentation of performance data

• League tables
• Bayesian ranking

(3)
(4)

• Heart operations at the BRI
“Inadequate care for one third of children”

• Harold Shipman
Murdered more than 200 patients

(5)

Bristol (Kennedy) Inquiry Report
Data were available all the time

“From the start of the 1990s a national database existed at the Department of Health (the Hospital Episode Statistics database) which among other things held information about deaths in hospital. It was not recognised as a valuable tool for analysing the […]
(6)

Figure: Mortality from open procedures in children aged under one year for 11 centres in three epochs; data derived from Hospital Episode Statistics (HES).

Epoch 3 – April 1991 to March 1995 (deaths/operations, mortality rate by unit): 58/581 (10%), 53/482 (11%), 42/405 (10%), 56/478 (12%), 24/323 (7%), 24/239 (10%), 25/164 (15%), 41/143 (29%), 26/195 (13%), 25/187 (13%), 23/122 (19%).

(7)

Following the Bristol Royal Infirmary Inquiry

• Commission for Health Improvement (now Healthcare Commission) – regularly inspects Britain's hospitals and publishes some limited performance figures.

• National Clinical Assessment Authority – investigates any brewing crisis.

• National Patient Safety Agency collates information on medical errors.

• Annual appraisals for hospital consultants

• Revalidation, a system in which doctors have to prove they are still fit to practise every five years

(8)

Hospital Episode Statistics

Electronic record of every inpatient or day case episode of patient care in every NHS (public) hospital

14 million records a year

300 fields of information including
• Patient details such as age, sex, address
• Diagnosis using ICD10
• Procedures using OPCS4
• Admission method
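
As a rough illustration only, a HES-style episode could be represented in code along the following lines; the field names and the handful of fields shown are assumptions for this sketch, not actual HES field names (a real record carries around 300 fields).

```python
# Sketch: a minimal HES-style inpatient episode record.
# Field names are illustrative; a real HES record holds ~300 fields.
from dataclasses import dataclass, field

@dataclass
class Episode:
    age: int
    sex: str
    postcode_area: str                              # patient address (area level)
    diagnoses: list = field(default_factory=list)   # ICD10 codes, primary first
    procedures: list = field(default_factory=list)  # OPCS4 codes
    admission_method: str = "elective"              # e.g. elective / emergency
    died_in_hospital: bool = False

# Hypothetical example record
episode = Episode(age=67, sex="M", postcode_area="SW7",
                  diagnoses=["I21.9"], procedures=["K40.1"],
                  admission_method="emergency")
```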

(9)

Why use Hospital Episode Statistics?

• Comprehensive – collected by all NHS trusts across the country on all patients
• Coding of data separate from clinician
• Access
• Updated monthly from SUS (previously NHS-Wide Clearing Service)

(10)

Case-mix adjustment

Limited within HES?

• Age
• Sex

(11)
(12)

Risk adjustment models using HES on 3 index procedures

• CABG

• AAA

(13)

Risk factors

• Age
• Sex
• Method of admission
• Revision of CABG
• Year
• Deprivation quintile
• Previous emergency admissions
• Previous IHD admissions
• Recent MI admission
• Charlson comorbidity score (capped at 6)
• Number of arteries replaced
• Part of aorta repaired
• Part of colon/rectum removed
• Previous heart operation
• Previous abdominal surgery

(14)

ROC curve areas comparing ‘simple’, ‘intermediate’ and ‘complex’ models derived from HES with models derived from clinical databases for four index procedures

Aylin P, Bottle A, Majeed A. Use of administrative data or clinical databases as predictors of risk of death in hospital: comparison of models. BMJ 2007;334:1044.

(15)

Calibration plots for ‘complex’ HES-based risk prediction models for four index procedures showing observed number of deaths against predicted based on validation set

Aylin P, Bottle A, Majeed A. Use of administrative data or clinical databases as predictors of risk of death in hospital: comparison of models. BMJ 2007;334:1044.

(16)

Current casemix adjustment model for each diagnosis and procedure group

Adjusts for

• age
• sex
• elective status
• socio-economic deprivation
• diagnosis subgroups (3-digit ICD10) or procedure subgroups
• co-morbidity – Charlson index
• number of prior emergency admissions
• palliative care
• year
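
As a rough sketch of the kind of model this implies (a logistic regression per diagnosis or procedure group, with observed deaths compared against casemix-expected deaths per provider, as in the calibration plots above), assuming a pandas DataFrame whose column names (age_band, sex, elective, imd_quintile, charlson, prior_emerg, palliative, year, died, provider) are illustrative rather than actual HES fields:

```python
# Sketch: case-mix adjustment via logistic regression on administrative data.
# Column names are illustrative, not actual HES field names.
import pandas as pd
import statsmodels.formula.api as smf

def fit_casemix_model(df: pd.DataFrame):
    """Fit an in-hospital mortality risk model for one diagnosis/procedure group."""
    formula = ("died ~ C(age_band) + C(sex) + C(elective) + C(imd_quintile)"
               " + C(charlson) + C(prior_emerg) + C(palliative) + C(year)")
    return smf.logit(formula, data=df).fit(disp=0)

def observed_vs_expected(df: pd.DataFrame, model) -> pd.DataFrame:
    """Observed and casemix-expected deaths per provider (basis for an SMR-type ratio)."""
    df = df.assign(expected=model.predict(df))
    out = df.groupby("provider").agg(observed=("died", "sum"),
                                     expected=("expected", "sum"))
    out["ratio"] = out["observed"] / out["expected"]
    return out
```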

(17)

Current performance of risk models

ROC (based on 1996/7-2007/8 HES data) for in-hospital mortality

56 Clinical Classification System diagnostic groups leading to 80% of all in-hospital deaths

7 CCS groups 0.90 or above

• Includes cancer of breast (0.94) and biliary tract disease (0.91)

28 CCS groups 0.80 to 0.89

• Includes aortic, peripheral and visceral aneurysms (0.87) and cancer of colon (0.83)

18 CCS groups 0.70 to 0.79

• Includes septicaemia (0.77) and acute myocardial infarction (0.74)

3 CCS groups 0.60 to 0.69

(18)

Presentation of clinical outcomes

“Even if all surgeons are equally good, about half will have below average results, one will have the worst results, and the worst results will be a long way below average”

(19)
(20)

Criticisms of ‘league tables’

• Spurious ranking – ‘someone’s got to be bottom’
• Encourages comparison when perhaps not justified

• 95% intervals arbitrary

• No consideration of multiple comparisons

(21)

Bayesian ranking

Bayesian approach using Monte Carlo simulations can provide confidence intervals around ranks

Can also provide probability that a unit is in the top 10%, 5% or even is at the top of the table

• See Marshall et al. (1998). League tables of in vitro fertilisation clinics: how confident can we be about the rankings? British Medical Journal, 316, 1701-4.
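
As a rough illustration of the simulation idea (not the exact method in Marshall et al.), the sketch below draws posterior samples of each unit's mortality rate under a simple Beta-Binomial model, ranks the units within each draw, and summarises the resulting distribution of ranks; the flat Beta(1, 1) prior and the function/variable names are assumptions of this sketch.

```python
# Sketch: Monte Carlo intervals around ranks and P(unit in top 10%).
# Assumes binomial deaths per unit with a flat Beta(1, 1) prior.
import numpy as np

def rank_intervals(deaths, cases, n_sims=10_000, top_fraction=0.10, seed=0):
    rng = np.random.default_rng(seed)
    deaths, cases = np.asarray(deaths), np.asarray(cases)
    # Posterior draws of each unit's underlying mortality rate.
    rates = rng.beta(deaths + 1, cases - deaths + 1, size=(n_sims, len(deaths)))
    # Rank within each simulated "league table": 1 = lowest mortality rate.
    ranks = rates.argsort(axis=1).argsort(axis=1) + 1
    lo, hi = np.percentile(ranks, [2.5, 97.5], axis=0)   # 95% interval for each rank
    prob_top = (ranks <= top_fraction * len(deaths)).mean(axis=0)
    return lo, hi, prob_top

# Example with the 11 units' deaths/operations from the HES figure above.
lo, hi, p_top = rank_intervals(
    [58, 53, 42, 56, 24, 24, 25, 41, 26, 25, 23],
    [581, 482, 405, 478, 323, 239, 164, 143, 195, 187, 122])
```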

(22)
(23)

Statistical Process Control (SPC) charts

Shipman:

• Aylin et al, Lancet (2003)

• Mohammed et al, Lancet (2001)

• Spiegelhalter et al, J Qual Health Care (2003)

Surgical mortality:

• Poloniecki et al, BMJ (1998)

• Lovegrove et al, CHI report into St George’s
• Steiner et al, Biostatistics (2000)

Public health:

• Terje et al, Stats in Med (1993)

• Vanbrackle & Williamson, Stats in Med (1999)
• Rossi et al, Stats in Med (1999)

(24)

Common features of SPC charts

Need to define:

• in-control process (acceptable/benchmark performance)
• out-of-control process (that is cause for concern)

Test statistic

• Function of the difference between observed and benchmark performance

(25)
(26)

Funnel plots

No ranking

Visual relationship with volume

Takes account of increased variability of smaller centres
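
A minimal sketch of how the control limits of such a plot might be computed for a proportion outcome, assuming a benchmark rate p0 and exact binomial limits (the 95%/99.8% levels are the conventional choices, assumed here rather than taken from the slides):

```python
# Sketch: binomial funnel-plot control limits around a benchmark rate p0.
import numpy as np
from scipy.stats import binom

def funnel_limits(p0: float, volumes: np.ndarray, level: float = 0.95):
    """Lower/upper limits on the observed proportion for each case volume."""
    alpha = (1 - level) / 2
    lower = binom.ppf(alpha, volumes, p0) / volumes
    upper = binom.ppf(1 - alpha, volumes, p0) / volumes
    return lower, upper

# Example: 12% benchmark mortality, centres treating 50 to 600 cases.
n = np.arange(50, 601)
lo95, hi95 = funnel_limits(0.12, n, level=0.95)
lo998, hi998 = funnel_limits(0.12, n, level=0.998)
# Plot each centre's observed rate against its volume and overlay these limits;
# points above the upper limits are the ones flagged for further scrutiny.
```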

(27)

Risk-adjusted Log-likelihood CUSUM charts

• STEP 1: Estimate pre-op risk for each patient, given their age, sex etc. This may be national average or other benchmark
• STEP 2: Order patients chronologically by date of operation
• STEP 3: Choose chart threshold(s) of acceptable “sensitivity” and “specificity” (via simulation)
• STEP 4: Plot function of patient’s actual outcome v pre-op risk for every patient, and see if – and why – threshold(s) is crossed

(28)

More details

• Based on log-likelihood CUSUM to detect a predetermined increase in risk of interest
• Taken from Steiner et al (2000); pre-op risks derived from logistic regression of national data
• The CUSUM statistic is the log-likelihood test statistic for binomial data based on the predicted risk of outcome and the actual outcome
• Model uses administrative data and adjusts for age, sex, emergency status, socio-economic deprivation etc.
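
A minimal sketch of such a chart, in the spirit of Steiner et al (2000): each patient contributes a log-likelihood ratio weight comparing the benchmark risk p against an alternative in which the odds of death are multiplied by a predetermined factor R, and the cumulative sum resets at zero. The odds ratio R = 2 and the threshold h below are illustrative; in practice they are tuned by simulation, as in STEP 3 above.

```python
# Sketch: risk-adjusted log-likelihood CUSUM (illustrative parameters).
import numpy as np

def risk_adjusted_cusum(outcomes, predicted_risks, odds_ratio=2.0, threshold=5.0):
    """Return the CUSUM path and the patient indices at which it crosses the threshold.

    outcomes        : 0/1 actual outcome per patient, in date-of-operation order
    predicted_risks : pre-op predicted risk per patient (e.g. from the
                      logistic regression casemix model)
    """
    c, path, signals = 0.0, [], []
    for t, (y, p) in enumerate(zip(outcomes, predicted_risks)):
        # Log-likelihood ratio weight for a shift from odds(p) to odds_ratio * odds(p).
        denom = 1 - p + odds_ratio * p
        w = np.log(odds_ratio / denom) if y else np.log(1.0 / denom)
        c = max(0.0, c + w)   # accumulate excess risk, reset at zero
        path.append(c)
        if c >= threshold:
            signals.append(t)
    return np.array(path), signals
```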

(29)
(30)

Currently monitoring

• 78 diagnoses

• 128 procedures

• 90% of deaths

• Outcomes

• Mortality
• Emergency readmissions
• Day case rates

(31)
(32)
(33)
(34)
(35)

What to do with a signal

• Check the data

• Difference in casemix

• Examine organisational or procedural differences

(36)

Future

• Patient Reported Outcomes (PROMs)

• Patient satisfaction/experience

• Safety/adverse events
