Evaluation and Validation of Clinical Prediction Models

Xianxiong Ma

March 16, 2020

Background

Over the past decade, articles that build and validate prediction models have become increasingly common in clinical research. In short, a prediction model uses known parameters to predict an unknown clinical outcome, and the model itself is a mathematical formula: feed the known parameters into this so-called model and it returns the probability that the unknown outcome will occur. That is the prediction.

Statistically, a clinical prediction model is a regression model, and the essence of regression is to describe the mathematical relationship between a dependent variable Y and several independent variables X. The three regression methods most often used in clinical research are multiple linear regression, logistic regression, and Cox regression. Once we have fitted a regression model on a training set, how do we judge scientifically whether its predictions are accurate? An analogy: two fortune tellers have set up stalls on a street corner; which one should Miss Wang, who wants her marriage prospects read, choose? A simple rule: go to whoever tells fortunes accurately. Clinical prediction models are much the same; the basic requirement is that they "predict accurately". Overall, the quality of a prediction model can be measured from three aspects, which this article covers in turn: discrimination (C-statistics/C-index), reclassification improvement (NRI/IDI), and calibration.

Calculating C-statistics and the C-index

Calculating the C-statistic for logistic regression

There are three ways to compute the C-statistic for a logistic regression model:

Example

Hosmer and Lemeshow (1989) studied risk factors for delivering a low-birth-weight infant. The outcome variable is whether the infant was of low birth weight (variable LOW: 1 = low birth weight, i.e. birth weight < 2500 g; 0 = normal birth weight). The candidate predictors are: mother's pre-pregnancy weight (lwt, pounds); mother's age (age, years); smoking during pregnancy (smoke, 0 = no, 1 = yes); number of previous premature deliveries (ptl); history of hypertension (ht, 0 = no, 1 = yes); uterine irritability, i.e. contraction in response to massage, oxytocin, and similar stimuli (ui, 0 = no, 1 = yes); number of physician visits during the first trimester (ftv); and race (race, 1 = white, 2 = black, 3 = other).

# Computing the C-statistic
library(foreign)
library(rms)

mydata <- read.spss("lweight.sav")
mydata <- as.data.frame(mydata)
head(mydata)
##   id      low age lwt     race  smoke ptl       ht ui ftv  bwt
## 1 85 正常体重  19 182   黑种人 不吸烟   0 无妊高症 有   0 2523
## 2 86 正常体重  33 155 其他种族 不吸烟   0 无妊高症 无   3 2551
## 3 87 正常体重  20 105   白种人   吸烟   0 无妊高症 无   1 2557
## 4 88 正常体重  21 108   白种人   吸烟   0 无妊高症 有   2 2594
## 5 89 正常体重  18 107   白种人   吸烟   0 无妊高症 有   0 2600
## 6 91 正常体重  21 124 其他种族 不吸烟   0 无妊高症 无   0 2622
mydata$low <- ifelse(mydata$low =="低出生体重",1,0)
mydata$race1 <- ifelse(mydata$race =="白种人",1,0)
mydata$race2 <- ifelse(mydata$race =="黑种人",1,0)
mydata$race3 <- ifelse(mydata$race =="其他种族",1,0)
dd <- datadist(mydata)
options(datadist = "dd")
# Method 1: rms::lrm reports the C-statistic directly
fit1 <- lrm(low ~ age + ftv + ht + lwt + ptl + smoke + ui + race1 + race2, data = mydata, x = T, 
    y = T)
fit1  # read C from the "Rank Discrim. Indexes" block of the printout
mydata$predvalue <- predict(fit1)

# Method 2: ROC analysis with the ROCR package; the AUC equals the C-statistic
library(ROCR)
pred <- prediction(mydata$predvalue, mydata$low)
perf <- performance(pred, "tpr", "fpr")
plot(perf)
abline(0, 1)
auc <- performance(pred, "auc")
auc  # the AUC is the C-statistic; SPSS can additionally report the 95% CI of the AUC

# The same ROC analysis also works on predicted probabilities from glm():
# fit2 <- glm(low ~ age + ftv + ht + lwt + ptl + smoke + ui + race1 + race2, data = mydata, family = binomial())
# mydata$predvalue2 <- predict(fit2, type = "response")

# Method 3: Hmisc::somers2
library(Hmisc)
somers2(mydata$predvalue, mydata$low)  # the element "C" is the C-statistic

This completes the demonstration of the three ways to compute the C-statistic for logistic regression. None of them reports a standard error, so computing a confidence interval is awkward; if you must report a confidence interval for the C-statistic, consider running the ROC analysis in SPSS, which reports the confidence interval of the AUC directly.
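
All three methods estimate the same quantity: the C-statistic is the probability that, out of a randomly chosen case/non-case pair, the case receives the higher predicted risk. A minimal sketch of that pairwise definition, written in Python with made-up risk scores for illustration:

```python
def c_statistic(pred, outcome):
    """Pairwise concordance: the fraction of case/non-case pairs in which
    the case has the higher predicted risk (ties count as 0.5)."""
    cases = [p for p, y in zip(pred, outcome) if y == 1]
    controls = [p for p, y in zip(pred, outcome) if y == 0]
    pairs = concordant = 0.0
    for pc in cases:
        for pn in controls:
            pairs += 1
            if pc > pn:
                concordant += 1
            elif pc == pn:
                concordant += 0.5
    return concordant / pairs

# hypothetical predicted risks and observed outcomes (1 = event, 0 = no event)
pred = [0.9, 0.8, 0.35, 0.6, 0.2, 0.1]
outcome = [1, 1, 1, 0, 0, 0]
print(c_statistic(pred, outcome))  # 8 of 9 pairs concordant -> 0.888...
```

With these six hypothetical subjects, 8 of the 9 case/non-case pairs are concordant, so the C-statistic is 8/9 ≈ 0.889; the ROC AUC and somers2 compute the same quantity on real data.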

Calculating the C-index

Many clinical papers state in their methods that the model's discrimination ability was measured with the C-statistic; below we demonstrate in R how this so-called C-statistic is computed. This section covers the C-statistic for Cox regression, usually called the C-index (the logistic regression case was covered above). Strictly speaking the C-index comes in several variants; we introduce only the first, which is the most common in clinical work.

1. Harrell's C

2. C-statistic by Begg et al. (survAUC::BeggC)

3. C-statistic by Uno et al. (survC1::Inf.Cval; survAUC::UnoC)

4. Gonen and Heller Concordance Index for Cox models (survAUC::GHCI, CPE::phcpe, clinfun::coxphCPE)

# Simulate a dataset and store it as a data frame
age <- rnorm(200, 50, 5)
bp <- rnorm(200, 120, 10)
d.time <- rexp(200)
cens <- runif(200, 0.5, 2)
death <- d.time <= cens
os <- pmin(d.time, cens)
sample.data <- data.frame(age = age, bp = bp, os = os, death = death)
head(sample.data)  # show the first 6 rows of sample.data
##        age       bp        os death
## 1 52.40831 100.0022 0.6960055  TRUE
## 2 61.09281 101.7401 0.7949362 FALSE
## 3 57.34448 138.5148 0.3946270  TRUE
## 4 51.61444 102.4800 1.4266687 FALSE
## 5 49.87809 113.5964 0.9913382 FALSE
## 6 47.09066 124.6025 1.1224710 FALSE
# Method 1: the survival package
library(survival)  # load the survival package
fit <- coxph(Surv(os, death) ~ age + bp, data = sample.data)  # fit the Cox model with coxph()
sum.surv <- summary(fit)  # summarise the model and store the result in sum.surv
c_index <- sum.surv$concordance  # the concordance element is the c-index
c_index
##          C      se(C) 
## 0.53077419 0.02719402
# Method 2: the rms package
library(rms)
set.seed(1)  # set the seed: validate() resamples, so this makes the corrected results reproducible
dd <- datadist(sample.data)
options(datadist = "dd")
fit.cph <- cph(Surv(os, death) ~ age + bp, data = sample.data, x = TRUE, y = TRUE, surv = TRUE)
fit.cph  # Dxy * 0.5 + 0.5 in the printout is the c-index
## Cox Proportional Hazards Model
##  
##  cph(formula = Surv(os, death) ~ age + bp, data = sample.data, 
##      x = TRUE, y = TRUE, surv = TRUE)
##  
##                      Model Tests       Discrimination    
##                                           Indexes        
##  Obs       200    LR chi2      0.69    R2       0.003    
##  Events    130    d.f.            2    Dxy      0.062    
##  Center 0.5638    Pr(> chi2) 0.7070    g        0.079    
##                   Score chi2   0.69    gr       1.082    
##                   Pr(> chi2) 0.7067                      
##  
##      Coef    S.E.   Wald Z Pr(>|Z|)
##  age  0.0143 0.0180  0.79  0.4269  
##  bp  -0.0013 0.0082 -0.15  0.8779  
## 
# Get the Dxy
v <- validate(fit.cph, dxy = TRUE, B = 1000)
v
##       index.orig training    test optimism index.corrected    n
## Dxy       0.0615   0.0752  0.0276   0.0476          0.0139 1000
## R2        0.0035   0.0125  0.0020   0.0105         -0.0070 1000
## Slope     1.0000   1.0000  0.3784   0.6216          0.3784 1000
## D        -0.0003   0.0013 -0.0005   0.0018         -0.0020 1000
## U        -0.0017  -0.0017  0.0014  -0.0030          0.0014 1000
## Q         0.0014   0.0030 -0.0019   0.0048         -0.0034 1000
## g         0.0792   0.1399  0.0575   0.0824         -0.0032 1000
Dxy = v[rownames(v) == "Dxy", colnames(v) == "index.corrected"]
orig_Dxy = v[rownames(v) == "Dxy", colnames(v) == "index.orig"]

# The c-statistic according to Dxy=2(c-0.5)
orig_c_index <- abs(orig_Dxy)/2 + 0.5  # uncorrected c-index
orig_c_index
## [1] 0.5307742
bias_corrected_c_index <- abs(Dxy)/2 + 0.5  # bias-corrected c-index
bias_corrected_c_index
## [1] 0.5069641

The C-index is the most important parameter for evaluating a Cox regression model: it reflects how well the model predicts, but it cannot be computed in SPSS. Of the two R approaches shown here, mastering either is enough; the first is recommended because it also reports the standard error of the C-index, from which a confidence interval is easily obtained.
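
Since method 1 returns both the concordance and its standard error, the normal-approximation 95% confidence interval is simply C ± 1.96 · se(C). A quick arithmetic check (in Python for brevity) using the values printed above (C = 0.53077419, se = 0.02719402):

```python
# concordance and its standard error, as printed by sum.surv$concordance above
c, se = 0.53077419, 0.02719402

# normal-approximation 95% confidence interval
lower, upper = c - 1.96 * se, c + 1.96 * se
print(f"C-index {c:.4f}, 95% CI ({lower:.4f}, {upper:.4f})")
```

For this simulated model the interval covers 0.5, consistent with the near-null discrimination seen in the printout.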

Calculating the NRI

NRI and IDI are meant for comparing two models; to evaluate a single model, use the other metrics described in this article.

Background

The net reclassification index (NRI) was originally used in diagnostic-test research to quantify how a new diagnostic marker changes the number of correctly classified subjects relative to an old marker. Since it can assess the accuracy of a diagnostic test, it can naturally also judge the accuracy of a prediction model, and in the published clinical literature the NRI is most often used to compare the accuracy of two prediction models. As discussed earlier, the accuracy or discrimination of two models is usually compared with the C-statistic, i.e. the area under the ROC curve (AUC), but the C-statistic/AUC has limitations: it is insensitive to the modest improvement a new marker typically brings, and it says nothing about performance at clinically meaningful risk cutoffs.

Calculation principle

We first explain the idea behind the NRI using a binary diagnostic marker, then extend it to the quantitative comparison of prediction models; the former can be computed by hand, the latter needs statistical software. In short, the old marker classifies subjects as diseased or not, and the new marker reclassifies them. Comparing the two classifications, some subjects misclassified by the old marker are correctly classified by the new one, while others correctly classified by the old marker are misclassified by the new one. The NRI is computed from these reclassification changes [1,2]. The text is admittedly a little convoluted; the worked calculation below should help it sink in.

First, split the subjects into diseased and non-diseased groups according to the gold standard. Then, within each group, cross-tabulate the classifications of the new and old markers into two paired fourfold tables, as in Tables 1 and 2 below.

Table 1
Diseased group (N1)     New marker
Old marker        Positive   Negative
 Positive            a1         b1
 Negative            c1         d1

Table 2
Non-diseased group (N2) New marker
Old marker        Positive   Negative
 Positive            a2         b2
 Negative            c2         d2

We care mainly about the reclassified subjects. From the tables, in the diseased group (total N1), c1 subjects are correctly classified by the new marker but misclassified by the old one, while b1 subjects are misclassified by the new marker but correctly classified by the old one, so the net proportion of improved classification is (c1-b1)/N1. Likewise, in the non-diseased group (total N2), b2 subjects are correctly classified by the new marker but misclassified by the old one, and c2 subjects the reverse, so the net improvement there is (b2-c2)/N2. Combining the two groups, the net reclassification index of the new model versus the old is NRI = (c1-b1)/N1 + (b2-c2)/N2, usually called the absolute NRI.

If NRI > 0, the change is a positive improvement: the new marker predicts better than the old one. If NRI < 0, it is a negative improvement: predictive ability has declined. If NRI = 0, the new model brings no improvement. Whether the NRI differs significantly from 0 can be tested with a Z statistic, which approximately follows a normal distribution; the formula is given below, and a P value is obtained from Z.

\[Z = \frac{NRI}{\sqrt{{\frac{b1+c1}{N1^{2}}}+\frac{b2+c2}{N2^{2}}}}\]
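To make the formula concrete, here is a small worked example in Python; b1, c1, b2, c2 are the off-diagonal cells of Tables 1 and 2, and the counts are invented purely for illustration:

```python
from math import sqrt, erf

# hypothetical reclassification counts
b1, c1, N1 = 10, 30, 100   # diseased group: b1 worsened, c1 improved
b2, c2, N2 = 15, 10, 200   # non-diseased group: b2 improved, c2 worsened

nri = (c1 - b1) / N1 + (b2 - c2) / N2
z = nri / sqrt((b1 + c1) / N1**2 + (b2 + c2) / N2**2)
p = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))   # two-sided P from the normal CDF
print(f"NRI = {nri:.3f}, Z = {z:.2f}, P = {p:.4f}")
```

Here NRI = 0.225 and Z ≈ 3.31, so with these invented counts the improvement would be statistically significant (P < 0.001).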

The example above used a binary diagnostic marker; prediction-model studies are usually more complex, but the principle is the same. Declaring subjects diseased or not directly from a binary marker is sometimes too crude: researchers often care about the magnitude of future risk rather than a simple yes/no, and a prediction model gives exactly that, the probability of disease or of the endpoint event. For example, subjects can be divided into low-, medium-, and high-risk groups by predicted probability, so that interventions can be targeted more precisely. When the risk outcome has three or more categories, ROC analysis no longer fits well: its outcome is normally binary, and extending it to three or more categories would turn the ROC curve into something like a surface, very hard to draw, hard to use for comparing the AUCs of two models, and hard to interpret clinically. The NRI handles all of these situations nicely. How does it do so?

Take the methodological paper on the categorical NRI published in Stat Med [2]. Based on the famous Framingham Heart Study, the authors examined how adding HDL-C to a classic model improved prediction of subjects' 10-year risk of coronary heart disease. They first compared the ROC curves: the AUCs of the new and old models were 0.774 and 0.762, so adding HDL-C increased the AUC by 0.012, a difference without statistical significance (P = 0.092), suggesting no meaningful improvement (Figure 2). They then divided subjects into low-, medium-, and high-risk groups using 10-year risk cutoffs of <6%, 6-20%, and >20% (the original table is reproduced in Figure 3) and obtained NRI = 12.1%, Z = 3.616, P < 0.001. This is statistically significant: with the new biomarker added, the model's predictive ability improved, and the proportion correctly classified rose by 12.1%.

With the theory covered, we now compute the NRI in R. Three situations arise: 1. simply quantifying how much a new binary diagnostic marker improves on an old one, which can be done with the formula above (experienced users have also shared self-written R functions for this kind of NRI); 2. comparing two prediction models built with logistic regression; 3. comparing two prediction models built with Cox regression [3]. The available R functions are summarised in Table 3; we focus on the nricens package [4-6], which is the recommended first choice for NRI calculations.

Binary outcome

Example

The example data ship with the survival package: Mayo Clinic records relating clinical measurements to primary biliary cirrhosis (PBC) in 418 patients. The first 312 patients came from an RCT, the rest from a cohort study. We use the first 312 patients to predict whether death occurs by the 2000-day time point. Note that the raw data are survival data; here we redefine a binary outcome, dead or alive, ignoring time. First load the data (Figure 4). The outcome variable in this table is status: 0 = censored, 1 = liver transplant, 2 = death. Since our outcome of interest, death or not, is binary, the variable must be recoded. Looking at the time column, some records are shorter than 2000 days: those subjects either died before 2000 days or were censored, and the records censored within 2000 days must be dropped. For the meaning of the other variables, run: ?pbc.

library(nricens)
library(survival)
dat = pbc[1:312, ]
dat$sex = ifelse(dat$sex == "f", 1, 0)

## subjects censored before 2000 days are excluded
dat = dat[dat$time > 2000 | (dat$time < 2000 & dat$status == 2), ]

## predicting the event of 'death' before 2000 days
event = ifelse(dat$time < 2000 & dat$status == 2, 1, 0)

## standard prediction model: age, bilirubin, and albumin
z.std = as.matrix(subset(dat, select = c(age, bili, albumin)))

## new prediction model: age, bilirubin, albumin, and protime
z.new = as.matrix(subset(dat, select = c(age, bili, albumin, protime)))

## glm fit (logistic model)
mstd = glm(event ~ ., binomial(logit), data.frame(event, z.std), x = TRUE)
mnew = glm(event ~ ., binomial(logit), data.frame(event, z.new), x = TRUE)

## predicted risk
p.std = mstd$fitted.values
p.new = mnew$fitted.values

## Calculation of risk difference NRI using ('mdl.std', 'mdl.new').
nribin(mdl.std = mstd, mdl.new = mnew, cut = 0.02, niter = 0, updown = "diff")
##                 Estimate
## NRI           0.08712121
## NRI+          0.04545455
## NRI-          0.04166667
## Pr(Up|Case)   0.19318182
## Pr(Down|Case) 0.14772727
## Pr(Down|Ctrl) 0.15972222
## Pr(Up|Ctrl)   0.11805556
## Calculation of risk difference NRI using ('event', 'z.std', 'z.new').
nribin(event = event, z.std = z.std, z.new = z.new, cut = 0.02, niter = 0, updown = "diff")
##                Estimate Std. Error    z value     Pr(>|z|)
## (Intercept)  0.98927136 2.20809035  0.4480212 6.541379e-01
## age          0.07128234 0.01988079  3.5854876 3.364490e-04
## bili         0.61686651 0.10992947  5.6114755 2.006087e-08
## albumin     -1.95859156 0.53031693 -3.6932473 2.214085e-04
##                Estimate Std. Error    z value     Pr(>|z|)
## (Intercept) -1.16682234 2.92204889 -0.3993165 6.896600e-01
## age          0.06659224 0.02032242  3.2767864 1.049958e-03
## bili         0.59995139 0.11022521  5.4429600 5.240243e-08
## albumin     -1.88620553 0.53144647 -3.5491919 3.864153e-04
## protime      0.20127560 0.18388726  1.0945598 2.737095e-01


##                 Estimate
## NRI           0.08712121
## NRI+          0.04545455
## NRI-          0.04166667
## Pr(Up|Case)   0.19318182
## Pr(Down|Case) 0.14772727
## Pr(Down|Ctrl) 0.15972222
## Pr(Up|Ctrl)   0.11805556
## Calculation of risk difference NRI using ('event', 'p.std', 'p.new').
nribin(event = event, p.std = p.std, p.new = p.new, cut = 0.02, niter = 0, updown = "diff")
##                 Estimate
## NRI           0.08712121
## NRI+          0.04545455
## NRI-          0.04166667
## Pr(Up|Case)   0.19318182
## Pr(Down|Case) 0.14772727
## Pr(Down|Ctrl) 0.15972222
## Pr(Up|Ctrl)   0.11805556
## Calculation of risk category NRI using ('mdl.std', 'mdl.new').
nribin(mdl.std = mstd, mdl.new = mnew, cut = c(0.2, 0.4), niter = 0, updown = "category")
##         New
## Standard < 0.2 < 0.4 >= 0.4
##   < 0.2    110     3      0
##   < 0.4      3    30      0
##   >= 0.4     0     2     84
##         New
## Standard < 0.2 < 0.4 >= 0.4
##   < 0.2      7     0      0
##   < 0.4      0     8      0
##   >= 0.4     0     2     71
##         New
## Standard < 0.2 < 0.4 >= 0.4
##   < 0.2    103     3      0
##   < 0.4      3    22      0
##   >= 0.4     0     0     13


##                  Estimate
## NRI           -0.02272727
## NRI+          -0.02272727
## NRI-           0.00000000
## Pr(Up|Case)    0.00000000
## Pr(Down|Case)  0.02272727
## Pr(Down|Ctrl)  0.02083333
## Pr(Up|Ctrl)    0.02083333
## Using PredictABEL package
library(PredictABEL)
pstd <- mstd$fitted.values
pnew <- mnew$fitted.values
dat_new = cbind(dat, event)
reclassification(data = dat_new, cOutcome = 21, predrisk1 = pstd, predrisk2 = pnew, cutoff = c(0, 
    0.2, 0.4, 1))
##  _________________________________________
##  
##      Reclassification table    
##  _________________________________________
## 
##  Outcome: absent 
##   
##              Updated Model
## Initial Model [0,0.2) [0.2,0.4) [0.4,1]  % reclassified
##     [0,0.2)       103         3       0               3
##     [0.2,0.4)       3        22       0              12
##     [0.4,1]         0         0      13               0
## 
##  
##  Outcome: present 
##   
##              Updated Model
## Initial Model [0,0.2) [0.2,0.4) [0.4,1]  % reclassified
##     [0,0.2)         7         0       0               0
##     [0.2,0.4)       0         8       0               0
##     [0.4,1]         0         2      71               3
## 
##  
##  Combined Data 
##   
##              Updated Model
## Initial Model [0,0.2) [0.2,0.4) [0.4,1]  % reclassified
##     [0,0.2)       110         3       0               3
##     [0.2,0.4)       3        30       0               9
##     [0.4,1]         0         2      84               2
##  _________________________________________
## 
##  NRI(Categorical) [95% CI]: -0.0227 [ -0.0683 - 0.0229 ] ; p-value: 0.32884 
##  NRI(Continuous) [95% CI]: 0.0391 [ -0.2238 - 0.3021 ] ; p-value: 0.77048 
##  IDI [95% CI]: 0.0044 [ -0.0037 - 0.0126 ] ; p-value: 0.28396

Survival outcome

library(nricens)
## here consider pbc dataset in survival package as an example
library(survival)
dat = pbc[1:312, ]
dat$sex = ifelse(dat$sex == "f", 1, 0)

## predicting the event of 'death'
time = dat$time
event = ifelse(dat$status == 2, 1, 0)

## standard prediction model: age, bilirubin, and albumin
z.std = as.matrix(subset(dat, select = c(age, bili, albumin)))

## new prediction model: age, bilirubin, albumin, and protime
z.new = as.matrix(subset(dat, select = c(age, bili, albumin, protime)))

## coxph fit
mstd = coxph(Surv(time, event) ~ ., data.frame(time, event, z.std), x = TRUE)
mnew = coxph(Surv(time, event) ~ ., data.frame(time, event, z.new), x = TRUE)

## predicted risk at t0=2000
p.std = get.risk.coxph(mstd, t0 = 2000)
p.new = get.risk.coxph(mnew, t0 = 2000)

## Calculation of risk category NRI by the KM estimator using ('mdl.std', 'mdl.new').
nricens(mdl.std = mstd, mdl.new = mnew, t0 = 2000, cut = c(0.2, 0.4), niter = 0)
##         New
## Standard < 0.2 < 0.4 >= 0.4
##   < 0.2    139     7      1
##   < 0.4     17    72      6
##   >= 0.4     0     5     65
##         New
## Standard < 0.2 < 0.4 >= 0.4
##   < 0.2      9     2      0
##   < 0.4      1    21      4
##   >= 0.4     0     0     51
##         New
## Standard < 0.2 < 0.4 >= 0.4
##   < 0.2     92     4      1
##   < 0.4      9    29      2
##   >= 0.4     0     3      4
##                 Estimate
## NRI           0.11028068
## NRI+          0.05123381
## NRI-          0.05904686
## Pr(Up|Case)   0.06348538
## Pr(Down|Case) 0.01225156
## Pr(Down|Ctrl) 0.09583016
## Pr(Up|Ctrl)   0.03678329
## by the KM estimator using ('time', 'event', 'z.std', 'z.new').
nricens(time = time, event = event, z.std = z.std, z.new = z.new, t0 = 2000, cut = c(0.2, 0.4), 
    niter = 0)
##                coef exp(coef)    se(coef)         z     Pr(>|z|)
## age      0.03726683 1.0379699 0.009048925  4.118371 3.815600e-05
## bili     0.13531179 1.1448937 0.013711323  9.868617 5.694436e-23
## albumin -1.44611854 0.2354825 0.221997986 -6.514107 7.312356e-11
##                coef exp(coef)    se(coef)         z     Pr(>|z|)
## age      0.03362675 1.0341985 0.009214173  3.649460 2.627925e-04
## bili     0.12517886 1.1333511 0.014406820  8.688861 3.660902e-18
## albumin -1.39395237 0.2480928 0.217046959 -6.422354 1.341831e-10
## protime  0.28602917 1.3311313 0.070536400  4.055058 5.012193e-05
##         New
## Standard < 0.2 < 0.4 >= 0.4
##   < 0.2    139     7      1
##   < 0.4     17    72      6
##   >= 0.4     0     5     65
##         New
## Standard < 0.2 < 0.4 >= 0.4
##   < 0.2      9     2      0
##   < 0.4      1    21      4
##   >= 0.4     0     0     51
##         New
## Standard < 0.2 < 0.4 >= 0.4
##   < 0.2     92     4      1
##   < 0.4      9    29      2
##   >= 0.4     0     3      4


##                 Estimate
## NRI           0.11028068
## NRI+          0.05123381
## NRI-          0.05904686
## Pr(Up|Case)   0.06348538
## Pr(Down|Case) 0.01225156
## Pr(Down|Ctrl) 0.09583016
## Pr(Up|Ctrl)   0.03678329
## by the KM estimator using ('time','event','p.std','p.new').
nricens(time = time, event = event, p.std = p.std, p.new = p.new, t0 = 2000, cut = c(0.2, 0.4), 
    niter = 0)
##         New
## Standard < 0.2 < 0.4 >= 0.4
##   < 0.2    139     7      1
##   < 0.4     17    72      6
##   >= 0.4     0     5     65
##         New
## Standard < 0.2 < 0.4 >= 0.4
##   < 0.2      9     2      0
##   < 0.4      1    21      4
##   >= 0.4     0     0     51
##         New
## Standard < 0.2 < 0.4 >= 0.4
##   < 0.2     92     4      1
##   < 0.4      9    29      2
##   >= 0.4     0     3      4
##                 Estimate
## NRI           0.11028068
## NRI+          0.05123381
## NRI-          0.05904686
## Pr(Up|Case)   0.06348538
## Pr(Down|Case) 0.01225156
## Pr(Down|Ctrl) 0.09583016
## Pr(Up|Ctrl)   0.03678329
## Calculation of risk difference NRI by the KM estimator
nricens(mdl.std = mstd, mdl.new = mnew, t0 = 2000, updown = "diff", cut = 0.05, niter = 0)


##                 Estimate
## NRI           0.10070960
## NRI+          0.05097223
## NRI-          0.04973737
## Pr(Up|Case)   0.22431499
## Pr(Down|Case) 0.17334277
## Pr(Down|Ctrl) 0.10859064
## Pr(Up|Ctrl)   0.05885327
## Calculation of risk difference NRI by the IPW estimator
nricens(mdl.std = mstd, mdl.new = mnew, t0 = 2000, updown = "diff", cut = 0.05, point.method = "ipw", 
    niter = 0)
##                  Estimate
## NRI            0.06361038
## NRI+           0.08444371
## NRI-          -0.02083333
## Pr(Up|Case)    0.22905909
## Pr(Down|Case)  0.14461537
## Pr(Down|Ctrl)  0.05555556
## Pr(Up|Ctrl)    0.07638889

Calculating the IDI

Background

In the previous chapter on the net reclassification index (NRI) we compared the ROC area under the curve with the NRI. The NRI has two advantages: 1. it is more sensitive than the C-statistic/AUC from ROC analysis; 2. it compares the predictive ability of the new and old models at specified cutoffs, such as a diagnostic threshold or the boundaries between low, medium, and high risk, which is easier to understand in clinical practice. But the NRI also has a shortcoming: it considers only the improvement at the chosen cutoffs and cannot assess the overall improvement of the model. For that we can turn to another index: the Integrated Discrimination Improvement (IDI). Some readers may object: isn't the C-statistic/AUC exactly the measure of overall improvement? That brings us back to the two limitations of the C-statistic/AUC raised in the previous chapter; if the IDI must be compared with the C-statistic/AUC, the IDI is more sensitive and easier to interpret clinically.

Calculation principle

I cannot judge from a reader's perspective whether the paragraph above is plain enough to follow, but there is a simple way out: set the question aside. Rather than arguing over the relative merits of the C-statistic/AUC, NRI, and IDI, when comparing two diagnostic markers or two prediction models you can report the NRI and IDI alongside the traditional ROC-based AUC; like multi-point perspective in drawing, this presents the improvement of the new model from every angle. The IDI reflects the change in the gap between the two models' predicted probabilities, so it is computed from each individual's predicted probability under the disease model. Its formula is: \[\hat{IDI} = (\bar{\hat{p}}_{new,events}-\bar{\hat{p}}_{old,events})- (\bar{\hat{p}}_{new,nonevents}-\bar{\hat{p}}_{old,nonevents})\]

Here Pnew,events and Pold,events are the means, over the group in which the outcome event occurred (the diseased group), of the disease probabilities predicted for each individual by the new and old models; their difference is the gain in predicted probability. For the diseased group, the higher the predicted probability of disease, the more accurate the model, so the larger the difference between Pnew,events and Pold,events, the better the new model.

Similarly, Pnew,non-events and Pold,non-events are the corresponding means in the group in which the outcome event did not occur (the non-diseased group); their difference is the reduction in predicted probability. For the non-diseased group, the lower the predicted probability of disease, the more accurate the model, so the smaller the difference between Pnew,non-events and Pold,non-events, the better the new model.

Finally, subtracting the second part from the first gives the IDI; in general, the larger the IDI, the better the new model's predictive ability. As with the NRI, IDI > 0 is a positive improvement, meaning the new model predicts better than the old; IDI < 0 is a negative improvement, meaning predictive ability has declined; and IDI = 0 means the new model brings no improvement.

Whether the IDI differs significantly from 0 can again be tested with a Z statistic that approximately follows a normal distribution: \[z = \frac{\hat{IDI}}{\sqrt{(\hat{se}_{events})^{2}+(\hat{se}_{nonevents})^{2}}}\]

Here SEevents is the standard error of Pnew,events - Pold,events: within the diseased group, compute each individual's predicted probability under both models, take the differences, and compute the standard error of those differences. SEnon-events is the analogous standard error of Pnew,non-events - Pold,non-events, computed the same way within the non-diseased group.
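
Putting the two formulas together, here is a short worked example in Python; the group means and standard errors are invented purely for illustration:

```python
from math import sqrt

# hypothetical mean predicted risks under the new and old models
p_new_ev, p_old_ev = 0.62, 0.55    # events (diseased) group
p_new_ne, p_old_ne = 0.18, 0.20    # non-events group
se_ev, se_ne = 0.02, 0.015         # SEs of the two mean differences

idi = (p_new_ev - p_old_ev) - (p_new_ne - p_old_ne)
z = idi / sqrt(se_ev**2 + se_ne**2)
print(f"IDI = {idi:.2f}, Z = {z:.2f}")
```

The predicted risk rises by 0.07 among events and falls by 0.02 among non-events, so both terms favour the new model and IDI = 0.09, with Z = 3.6.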

R packages for computing the IDI
Package        Source   Binary outcome           Survival outcome
PredictABEL    CRAN     reclassification()       not supported
survIDINRI     CRAN     not supported            IDI.INF()

The example data are again the pbc data used above:

Binary outcome

## here consider pbc dataset in survival package as an example
library(survival)
dat = pbc[1:312, ]
dat$sex = ifelse(dat$sex == "f", 1, 0)
## subjects censored before 2000 days are excluded
dat = dat[dat$time > 2000 | (dat$time < 2000 & dat$status == 2), ]
## predicting the event of 'death' before 2000 days
event = ifelse(dat$time < 2000 & dat$status == 2, 1, 0)
## standard prediction model: age, bilirubin, and albumin
z.std = as.matrix(subset(dat, select = c(age, bili, albumin)))
## new prediction model: age, bilirubin, albumin, and protime
z.new = as.matrix(subset(dat, select = c(age, bili, albumin, protime)))
## glm fit (logistic model)
mstd = glm(event ~ ., binomial(logit), data.frame(event, z.std), x = TRUE)
mnew = glm(event ~ ., binomial(logit), data.frame(event, z.new), x = TRUE)
## Using PredictABEL package
library(PredictABEL)
pstd <- mstd$fitted.values
pnew <- mnew$fitted.values
## add the event variable defined above to the data set with cbind and call it dat_new
dat_new = cbind(dat, event)
## compute the NRI; the IDI is reported at the same time and does not depend on the
## cutoff settings. cOutcome gives the column number of the outcome variable;
## predrisk1 and predrisk2 are the predicted risks from the old and new logistic models
reclassification(data = dat_new, cOutcome = 21, predrisk1 = pstd, predrisk2 = pnew, cutoff = c(0, 
    0.2, 0.4, 1))
##  _________________________________________
##  
##      Reclassification table    
##  _________________________________________
## 
##  Outcome: absent 
##   
##              Updated Model
## Initial Model [0,0.2) [0.2,0.4) [0.4,1]  % reclassified
##     [0,0.2)       103         3       0               3
##     [0.2,0.4)       3        22       0              12
##     [0.4,1]         0         0      13               0
## 
##  
##  Outcome: present 
##   
##              Updated Model
## Initial Model [0,0.2) [0.2,0.4) [0.4,1]  % reclassified
##     [0,0.2)         7         0       0               0
##     [0.2,0.4)       0         8       0               0
##     [0.4,1]         0         2      71               3
## 
##  
##  Combined Data 
##   
##              Updated Model
## Initial Model [0,0.2) [0.2,0.4) [0.4,1]  % reclassified
##     [0,0.2)       110         3       0               3
##     [0.2,0.4)       3        30       0               9
##     [0.4,1]         0         2      84               2
##  _________________________________________
## 
##  NRI(Categorical) [95% CI]: -0.0227 [ -0.0683 - 0.0229 ] ; p-value: 0.32884 
##  NRI(Continuous) [95% CI]: 0.0391 [ -0.2238 - 0.3021 ] ; p-value: 0.77048 
##  IDI [95% CI]: 0.0044 [ -0.0037 - 0.0126 ] ; p-value: 0.28396

Survival outcome

## here consider pbc dataset in survival package as an example
library(survival)
dat = pbc[1:312, ]
dat$time = as.numeric(dat$time)
## define the survival outcome
dat$status = ifelse(dat$status == 2, 1, 0)
## define the evaluation time point
t0 = 365 * 5
## variable matrix of the baseline model
indata0 = as.matrix(subset(dat, select = c(time, status, age, bili, albumin)))
## the new model adds one predictor
indata1 = as.matrix(subset(dat, select = c(time, status, age, bili, albumin, protime)))
## covariate matrix of the old model
covs0 <- as.matrix(indata0[, c(-1, -2)])
## covariate matrix of the new model
covs1 <- as.matrix(indata1[, c(-1, -2)])
library(survIDINRI)
x <- IDI.INF(dat[, 2:3], covs0, covs1, t0, npert = 1000)
## dat[, 2:3] holds the survival outcome: columns 2 and 3 of dat are the survival time
## and the endpoint. covs0 and covs1 are the covariate matrices of the old and new
## models; t0 is the evaluation time point; npert sets the number of perturbations.
IDI.INF.OUT(x)
##     Est.  Lower Upper p-value
## M1 0.025 -0.002 0.066   0.084
## M2 0.226 -0.004 0.391   0.056
## M3 0.012 -0.002 0.041   0.068
## Interpretation of the IDI.INF output: M1: result for the IDI. M2: result for the
## continuous NRI. M3: result for the median improvement in risk score.
IDI.INF.GRAPH(x)


Hand-written functions for the NRI and IDI
# NRIcalculate function
NRIcalculate = function(datanri, m1 = "dia1", m2 = "dia2", gold = "gold") {
  datanri = datanri[complete.cases(datanri), ]
  # locate the columns of the two markers and the gold standard
  nm1 = match(m1, names(datanri))
  nm2 = match(m2, names(datanri))
  ngold = match(gold, names(datanri))
  if (names(table(datanri[, nm1]))[1] != "0" ||
      names(table(datanri[, nm1]))[2] != "1") stop("marker 1 is not coded 0/1")
  if (names(table(datanri[, nm2]))[1] != "0" ||
      names(table(datanri[, nm2]))[2] != "1") stop("marker 2 is not coded 0/1")
  if (names(table(datanri[, ngold]))[1] != "0" ||
      names(table(datanri[, ngold]))[2] != "1") stop("gold standard is not coded 0/1")
  datanri1 = datanri[datanri[, ngold] == 1, ]  # diseased group
  table1 = table(datanri1[, nm1], datanri1[, nm2])
  datanri2 = datanri[datanri[, ngold] == 0, ]  # non-diseased group
  table2 = table(datanri2[, nm1], datanri2[, nm2])
  # proportions of reclassified (off-diagonal) subjects in each group
  p1 = as.numeric(table1[2, 1]/table(datanri[, ngold])[2])
  p2 = as.numeric(table1[1, 2]/table(datanri[, ngold])[2])
  p3 = as.numeric(table2[2, 1]/table(datanri[, ngold])[1])
  p4 = as.numeric(table2[1, 2]/table(datanri[, ngold])[1])
  NRI = round(p1 - p2 - p3 + p4, 3)
  z = NRI/sqrt((p1 + p2)/table(datanri[, ngold])[2] + (p3 + p4)/table(datanri[, ngold])[1])
  z = round(as.numeric(z), 3)
  pvalue = round((1 - pnorm(abs(z))) * 2, 3)
  if (pvalue < 0.001) pvalue = "<0.001"
  result = paste("NRI=", NRI, ",z=", z, ",p=", pvalue, sep = "")
  return(result)
}

# Nothing above needs changing
# library(foreign)
datanri = as.data.frame(read.csv("dignosisdata.csv"))  # change to your own file path
NRIcalculate(datanri, m1 = "v1", m2 = "v2", gold = "gold")  # m1/m2: the two diagnostic variables, gold: the gold standard

# IDIcalculate function
IDIcalculate = function(dataidi, m1 = "v1", m2 = "v2", gold = "gold") {
  dataidi = dataidi[complete.cases(dataidi), ]
  # locate the columns of the two markers and the gold standard
  nm1 = match(m1, names(dataidi))
  nm2 = match(m2, names(dataidi))
  ngold = match(gold, names(dataidi))
  if (names(table(dataidi[, ngold]))[1] != "0" ||
      names(table(dataidi[, ngold]))[2] != "1") stop("gold standard is not coded 0/1")
  # one-predictor logistic model for each marker; fitted values are the predicted risks
  logit1 = glm(dataidi[, ngold] ~ dataidi[, nm1], family = binomial(link = "logit"), data = dataidi)
  dataidi$pre1 = logit1$fitted.values
  logit2 = glm(dataidi[, ngold] ~ dataidi[, nm2], family = binomial(link = "logit"), data = dataidi)
  dataidi$pre2 = logit2$fitted.values
  dataidi$predif = dataidi$pre1 - dataidi$pre2
  dataidi1 = dataidi[dataidi[, ngold] == 1, ]  # events group
  dataidi2 = dataidi[dataidi[, ngold] == 0, ]  # non-events group
  p1 = mean(dataidi1$pre1)
  p2 = mean(dataidi1$pre2)
  p3 = mean(dataidi2$pre1)
  p4 = mean(dataidi2$pre2)
  IDI = round(p1 - p2 - p3 + p4, 3)
  # the standard error of each mean difference is sqrt(var/n)
  z = IDI/sqrt(sd(dataidi1$predif)^2/length(dataidi1$predif) +
               sd(dataidi2$predif)^2/length(dataidi2$predif))
  z = round(as.numeric(z), 3)
  pvalue = round((1 - pnorm(abs(z))) * 2, 3)
  if (pvalue < 0.001) pvalue = "<0.001"
  result = paste("IDI=", IDI, ",z=", z, ",p=", pvalue, sep = "")
  return(result)
}

# Nothing above needs changing
# library(foreign)
dataidi = as.data.frame(read.csv("dignosisdata.csv"))  # change to your own file path
IDIcalculate(dataidi, m1 = "v1", m2 = "v2", gold = "gold")  # m1/m2: the two diagnostic variables, gold: the gold standard

Plotting the Calibration Curve

Building a logistic regression model

library(rms)
#library(Hmisc)
#library(grid)
#library(lattice)
#library(Formula)
#library(ggplot2) 
## Step 2: read the data; we demonstrate with the lung dataset from the survival package
## list the datasets available in the survival package
#library(survival)
#data(package = "survival")

## load the lung dataset
data(lung)

## show the first 6 rows of the lung dataset
head(lung)
##   inst time status age sex ph.ecog ph.karno pat.karno meal.cal wt.loss
## 1    3  306      2  74   1       1       90       100     1175      NA
## 2    3  455      2  68   1       0       90        90     1225      15
## 3    3 1010      1  56   1       0       90        90       NA      15
## 4    5  210      2  57   1       1       90        60     1150      11
## 5    1  883      2  60   1       0      100        90       NA       0
## 6   12 1022      1  74   1       1       50        80      513       0
## show the documentation of the lung dataset's variables
help(lung)

lung$status <- ifelse(lung$status==2,1,0)

## label the sex variable for later display
lung$sex <- 
  factor(lung$sex,
         levels = c(1,2),
         labels = c("male", "female"))

## Step 3: "package" the data as required by nomogram; this is the key step for drawing the nomogram, see ??datadist for details
dd=datadist(lung)
options(datadist="dd") 

## Step 4: build the model
## build the logistic regression model
f1 <- lrm(status ~ age + sex, data = lung) 

## draw the nomogram of predicted risk from the logistic regression
nom <- nomogram(f1, fun = function(x) 1/(1 + exp(-x)),  # or fun = plogis
                lp = F, funlabel = "Risk")
plot(nom)


Building a Cox regression model

# f2 <- psm(Surv(time,status) ~ age + sex, data = lung, dist='lognormal')
f2 <- cph(Surv(time, status) ~ age + sex, data = lung, x = T, y = T, surv = TRUE)
f2
## Cox Proportional Hazards Model
##  
##  cph(formula = Surv(time, status) ~ age + sex, data = lung, x = T, 
##      y = T, surv = TRUE)
##  
##                      Model Tests       Discrimination    
##                                           Indexes        
##  Obs       228    LR chi2     14.12    R2       0.060    
##  Events    165    d.f.            2    Dxy      0.206    
##  Center 0.8618    Pr(> chi2) 0.0009    g        0.356    
##                   Score chi2  13.72    gr       1.428    
##                   Pr(> chi2) 0.0010                      
##  
##             Coef    S.E.   Wald Z Pr(>|Z|)
##  age         0.0170 0.0092  1.85  0.0646  
##  sex=female -0.5131 0.1675 -3.06  0.0022  
## 
medi <- Quantile(f2)  # returns a function for survival-time quantiles (the median by default)
medi
## (prints the definition of the quantile step function, including every event time
## and the corresponding survival probabilities; lengthy output omitted)
## )) 
## {
##     type <- match.arg(type)
##     if (length(stratum) > 1) 
##         stop("does not handle vector stratum")
##     if (is.list(time)) {
##         time <- time[[stratum]]
##         surv <- surv[[stratum]]
##     }
##     Q <- matrix(NA, nrow = length(lp), ncol = length(q), dimnames = list(names(lp), 
##         format(q)))
##     for (j in 1:length(lp)) {
##         s <- surv^exp(lp[j])
##         if (type == "polygon") 
##             Q[j, ] <- approx(s, time, q, ties = mean)$y
##         else for (i in 1:length(q)) if (any(s <= q[i])) 
##             Q[j, i] <- min(time[s <= q[i]])
##     }
##     drop(Q)
## }
## <environment: 0x000001615fd4d550>
surv <- Survival(f2)  # build the survival-probability function

## Nomogram of median survival time from the Cox model
nom2m <- nomogram(f2, fun = function(x) medi(lp = x), funlabel = "Median Survival Time")
plot(nom2m)

[Figure: nomogram of median survival time]

## Nomogram of Cox survival probabilities. Note: time in the lung data is in days
nom2p <- nomogram(f2, fun = list(function(x) surv(365, x), function(x) surv(730, x)), funlabel = c("1-year Survival Probability", 
    "2-year Survival Probability"))
plot(nom2p, xfrac = 0.4)  # spacing between the axes can be adjusted with xfrac
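The `surv` closure built by `Survival(f2)` evaluates Cox survival probabilities through the relation S(t | x) = S0(t)^exp(lp), where S0 is the baseline survival and lp the patient's linear predictor (the same `s <- surv^exp(lp)` step that appears in the printed function source). A toy illustration with made-up numbers (a sketch of the arithmetic, not rms code):

```r
# Sketch: Cox survival probability from a baseline survival and a linear
# predictor, S(t | x) = S0(t) ^ exp(lp). Both values below are hypothetical.
s0 <- 0.80          # assumed baseline survival at t = 365 days
lp <- 0.5           # assumed linear predictor for one patient
s  <- s0 ^ exp(lp)  # patient-specific survival probability
round(s, 3)         # 0.692
```

A higher linear predictor shrinks the survival probability, which is why the nomogram axes for 1- and 2-year survival run in the opposite direction from the points axis.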

[Figure: nomogram of 1- and 2-year survival probabilities]

## Evaluating the Cox model's predictions. Step 1: compute the C-index
rcorrcens(Surv(time, status) ~ predict(f2), data = lung)
## 
## Somers' Rank Correlation for Censored Data    Response variable:Surv(time, status)
## 
##                 C    Dxy  aDxy    SD    Z     P   n
## predict(f2) 0.397 -0.206 0.206 0.051 4.03 1e-04 228
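Note that `predict(f2)` returns the linear predictor, for which larger values mean higher risk (shorter survival), so the C of 0.397 reported above is below 0.5 by construction; the working C-index is its complement, 1 − 0.397 = 0.603. The same number falls out of Somers' Dxy via C = (Dxy + 1)/2 (plain arithmetic, no rms objects needed):

```r
# C and Somers' Dxy are linked by C = (Dxy + 1) / 2. rcorrcens() reports
# |Dxy| = 0.206 above; after orienting the predictor so that higher values
# mean higher risk, the concordance index is:
dxy     <- 0.206
c_index <- (dxy + 1) / 2
round(c_index, 3)  # 0.603, identical to 1 - 0.397
```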
## Step 2: plot the calibration curve. Argument notes: 1. the model must be fit with
## x=T, y=T before the calibration curve can be drawn (see the help page for details);
## 2. u must equal the time.inc defined in the model, i.e. 365 or 730;
## 3. m is chosen from the sample size: the calibration curve usually splits the sample
## into 3 groups (3 points on the plot) and m is the number of subjects per group,
## so m*3 should equal or approximate the sample size; 4. B is the maximum number of
## bootstrap resamples

## Refit the model f2 with x=T, y=T added; a psm() alternative would be:
## f2 <- psm(Surv(time,status) ~ age+sex, data = lung, x=T, y=T, dist='lognormal')
f2 <- cph(Surv(time, status) ~ age + sex, data = lung, x = T, y = T, surv = TRUE, time.inc = 365)
## Build the calibration curve
cal <- calibrate(f2, cmethod = "KM", method = "boot", u = 365, m = 50, B = 100)
## Using Cox survival estimates at 120 Days
## Plot the calibration curve; run ??rms::calibrate for detailed argument
## descriptions. par(mar=c(8,5,3,2), cex = 1.0)
plot(cal, lwd = 2, lty = 1, errbar.col = c(rgb(0, 118, 192, maxColorValue = 255)), xlim = c(0.1, 
    1), ylim = c(0.1, 1), xlab = "Nomogram-Predicted Probability of 1-Year OS", ylab = "Actual 1-Year OS (proportion)", 
    col = c(rgb(192, 98, 83, maxColorValue = 255)))

[Figure: 1-year calibration curve]

Plotting the Decision Curve (Decision Curve Analysis)

Background

In earlier sections we discussed the C-statistic (the area under the ROC curve) as a measure of a prediction model's discrimination. But is the C-statistic good enough? There is no best, only better. Suppose we use a continuous marker to predict whether a patient has a disease: whatever cutoff we choose, some false positives and false negatives are unavoidable. Since neither can be eliminated, we can instead return to the original motivation for building the model and look for the model with the greatest net benefit. How is this predicted net benefit calculated?

Andrew Vickers and colleagues at Memorial Sloan Kettering Cancer Center introduced such a method in 2006: decision curve analysis (DCA). Compared with ROC analysis, which dates back to World War II, DCA is obviously still young, but the student has arguably surpassed the master: top clinical journals such as Ann Intern Med, JAMA, BMJ, and J Clin Oncol have all published articles endorsing DCA. So the question becomes: how do we draw a presentable decision curve?

Statisticians tend to implement new algorithms in R first, and that was the case here: the DCA algorithm was released for R first, with SAS and Stata implementations following; SPSS still lacks this feature (one of its weaknesses). Kerr and colleagues built a dedicated R package named DecisionCurve for decision curves, but note: DecisionCurve is no longer available for download on CRAN, and all of its functions have been merged into the rmda package, so installing and loading rmda in R is all you need to draw decision curves. Online tutorials on how to install the DecisionCurve package under recent versions of R describe an outdated workaround; the correct approach is to install rmda directly.

Binary outcomes

Example: this data set is a subset of the NHLBI's (the US National Heart, Lung, and Blood Institute) famous Framingham Heart Study, with more than 4,000 observations. The independent variables are sex (sex), systolic blood pressure (sbp), diastolic blood pressure (dbp), serum cholesterol (scl), age (age), and body mass index (bmi); the dependent variable is coronary-heart-disease death (chdfate), a binary variable coded 1 if the patient died during follow-up and 0 otherwise. The data structure is as follows:

library(rmda)
Data <- read.csv("2.20.Framingham.csv", sep = ",")
head(Data)
##   sex sbp dbp scl chdfate followup age  bmi month   id
## 1   1 120  80 267       1       18  55 25.0     8 2642
## 2   1 130  78 192       1       35  53 28.4    12 4627
## 3   1 144  90 207       1      109  61 25.1     8 2568
## 4   1  92  66 231       1      147  48 26.2    11 4192
## 5   1 162  98 271       1      169  39 28.4    11 3977
## 6   2 212 118 182       1      199  61 33.3     2  659
## Run the DCA
simple <- decision_curve(chdfate ~ scl, data = Data, family = binomial(link = "logit"), thresholds = seq(0, 
    1, by = 0.01), confidence.intervals = 0.95, study.design = "case-control", population.prevalence = 0.3)
# In decision_curve(), family = binomial(link = 'logit') fits the model by
# logistic regression.
# thresholds sets the range of the threshold probability on the x-axis, usually
# 0 to 1; if, however, it is agreed that intervention is mandatory once Pt
# exceeds some value, say 40%, then thresholds above 0.4 are of no interest and
# the range can be set to 0 to 0.4. by is the spacing between computed points.
# study.design sets the study type, cohort or case-control; for a case-control
# design the prevalence must also be supplied via population.prevalence.

complex <- decision_curve(chdfate ~ scl + sbp + dbp + age + bmi + sex, data = Data, family = binomial(link = "logit"), 
    thresholds = seq(0, 1, by = 0.01), confidence.intervals = 0.95, study.design = "case-control", 
    population.prevalence = 0.3)
# Essentially the same as simple; the jointly used variables are connected with + signs.

List <- list(simple, complex)
# Combine the simple and complex objects computed above into one list named List.

## Plot the DCA curves
plot_decision_curve(List, curve.names = c("simple", "complex"), cost.benefit.axis = FALSE, col = c("red", 
    "blue"), confidence.intervals = FALSE, standardize = FALSE)

[Figure: decision curves for the simple and complex models]

# plot_decision_curve() takes the List created above; to plot only one curve,
# skip the list step and pass simple or complex directly.
# curve.names labels each curve in the legend, in the same order as in the list.
# cost.benefit.axis is an additional x-axis showing the cost:benefit ratio; it
# defaults to TRUE, so remember to set it to FALSE when it is not wanted.
# col sets the colors. confidence.intervals controls whether confidence bands
# are drawn, and standardize controls whether the net benefit (NB) is
# standardized by the prevalence.

summary(complex, measure = "sNB")
## 
## Standardized Net Benefit (95% Confidence Intervals):
## ---------------------------------------------------------------------------------------------------------
##    risk      cost:benefit       percent                All            chdfate ~ scl + sbp + dbp +   None 
##  threshold      ratio          high risk                                    age + bmi + sex              
## ----------- -------------- ------------------ ---------------------- ----------------------------- ------
##      0           0:1              100                   1                          1                 0   
##                                (100, 100)             (1, 1)                    (1, 1)                   
## 
##    0.01          1:99             100                 0.976                      0.976               0   
##                                (100, 100)         (0.976, 0.976)            (0.976, 0.976)               
## 
##    0.02          1:49             100                 0.952                      0.952               0   
##                                (100, 100)         (0.952, 0.952)            (0.952, 0.952)               
## 
##    0.03          3:97             100                 0.928                      0.928               0   
##                                (100, 100)         (0.928, 0.928)            (0.928, 0.928)               
## 
##    0.04          1:24             100                 0.903                      0.903               0   
##                                (100, 100)         (0.903, 0.903)            (0.903, 0.903)               
## 
##    0.05          1:19             100                 0.877                      0.877               0   
##                              (99.912, 100)        (0.877, 0.877)            (0.877, 0.877)               
## 
##    0.06          3:47            99.978               0.851                      0.851               0   
##                              (99.562, 100)        (0.851, 0.851)            (0.851, 0.852)               
## 
##    0.07          7:93            99.693               0.824                      0.825               0   
##                              (98.622, 100)        (0.824, 0.824)            (0.824, 0.827)               
## 
##    0.08          2:23            98.949               0.797                      0.799               0   
##                             (97.186, 99.868)      (0.797, 0.797)            (0.796, 0.802)               
## 
##    0.09          9:91            97.86                0.769                      0.772               0   
##                             (95.262, 99.233)      (0.769, 0.769)            (0.769, 0.776)               
## 
##     0.1          1:9             96.157               0.741                      0.747               0   
##                             (92.875, 98.278)      (0.741, 0.741)             (0.74, 0.751)               
## 
##    0.11         11:89            93.614               0.712                      0.718               0   
##                             (90.396, 96.754)      (0.712, 0.712)            (0.709, 0.724)               
## 
##    0.12          3:22            91.313               0.682                      0.689               0   
##                             (87.944, 94.528)      (0.682, 0.682)             (0.68, 0.696)               
## 
##    0.13         13:87            88.988               0.651                      0.662               0   
##                             (85.233, 92.246)      (0.651, 0.651)             (0.65, 0.672)               
## 
##    0.14          7:43            86.193                0.62                      0.632               0   
##                             (82.706, 90.049)       (0.62, 0.62)             (0.622, 0.648)               
## 
##    0.15          3:17            83.689               0.588                      0.611               0   
##                             (80.442, 87.317)      (0.588, 0.588)            (0.596, 0.626)               
## 
##    0.16          4:21            81.212               0.556                      0.589               0   
##                             (77.895, 84.512)      (0.556, 0.556)            (0.572, 0.603)               
## 
##    0.17         17:83            78.835               0.522                      0.56                0   
##                             (75.659, 81.791)      (0.522, 0.522)            (0.546, 0.579)               
## 
##    0.18          9:41            76.293               0.488                      0.54                0   
##                             (73.273, 79.287)      (0.488, 0.488)            (0.521, 0.555)               
## 
##    0.19         19:81            74.008               0.453                      0.51                0   
##                             (71.357, 76.516)      (0.453, 0.453)            (0.494, 0.533)               
## 
##     0.2          1:4             71.688               0.417                      0.49                0   
##                             (69.29, 74.271)       (0.417, 0.417)            (0.469, 0.509)               
## 
##    0.21         21:79            69.788                0.38                      0.465               0   
##                             (67.05, 71.787)        (0.38, 0.38)             (0.445, 0.486)               
## 
##    0.22         11:39            67.306               0.342                      0.441               0   
##                             (64.712, 69.408)      (0.342, 0.342)             (0.42, 0.466)               
## 
##    0.23         23:77            64.688               0.303                      0.422               0   
##                             (62.387, 67.127)      (0.303, 0.303)            (0.396, 0.447)               
## 
##    0.24          6:19            62.144               0.263                      0.399               0   
##                             (60.059, 64.284)      (0.263, 0.263)            (0.377, 0.424)               
## 
##    0.25          1:3             59.955               0.222                      0.372               0   
##                             (57.89, 61.628)       (0.222, 0.222)            (0.348, 0.401)               
## 
##    0.26         13:37            57.347                0.18                      0.352               0   
##                             (55.499, 59.053)       (0.18, 0.18)             (0.324, 0.383)               
## 
##    0.27         27:73            54.863               0.137                      0.337               0   
##                             (53.206, 56.321)      (0.137, 0.137)            (0.305, 0.365)               
## 
##    0.28          7:18            52.376               0.093                      0.312               0   
##                             (50.784, 53.783)      (0.093, 0.093)            (0.284, 0.347)               
## 
##    0.29         29:71            49.858               0.047                      0.295               0   
##                             (48.439, 50.986)      (0.047, 0.047)             (0.267, 0.33)               
## 
##     0.3          3:7             47.593                 0                        0.283               0   
##                             (46.191, 48.498)          (0, 0)                (0.249, 0.313)               
## 
##    0.31         31:69            45.269               -0.048                     0.269               0   
##                             (43.865, 46.086)     (-0.048, -0.048)           (0.232, 0.296)               
## 
##    0.32          8:17            42.553               -0.098                     0.246               0   
##                             (41.175, 43.743)     (-0.098, -0.098)           (0.217, 0.281)               
## 
##    0.33         33:67            40.116               -0.149                     0.225               0   
##                             (38.624, 41.427)     (-0.149, -0.149)           (0.198, 0.266)               
## 
##    0.34         17:33            37.606               -0.202                     0.212               0   
##                             (35.871, 39.012)     (-0.202, -0.202)            (0.18, 0.251)               
## 
##    0.35          7:13            35.265               -0.256                     0.202               0   
##                             (33.564, 36.659)     (-0.256, -0.256)           (0.159, 0.233)               
## 
##    0.36          9:16            32.631               -0.312                     0.174               0   
##                             (31.239, 34.231)     (-0.312, -0.312)           (0.141, 0.213)               
## 
##    0.37         37:63            30.369               -0.37                      0.157               0   
##                             (28.853, 32.021)      (-0.37, -0.37)            (0.125, 0.193)               
## 
##    0.38         19:31            28.493               -0.43                      0.141               0   
##                             (26.55, 30.021)       (-0.43, -0.43)             (0.11, 0.181)               
## 
##    0.39         39:61            26.291               -0.492                     0.131               0   
##                             (24.235, 27.957)     (-0.492, -0.492)           (0.098, 0.169)               
## 
##     0.4          2:3             23.955               -0.556                     0.112               0   
##                             (21.893, 26.005)     (-0.556, -0.556)           (0.084, 0.154)               
## 
##    0.41         41:59            21.882               -0.621                     0.105               0   
##                             (19.585, 24.144)     (-0.621, -0.621)           (0.074, 0.143)               
## 
##    0.42         21:29            19.857               -0.69                      0.096               0   
##                             (17.518, 22.205)      (-0.69, -0.69)            (0.062, 0.132)               
## 
##    0.43         43:57            18.012               -0.76                      0.081               0   
##                             (15.849, 20.35)       (-0.76, -0.76)             (0.054, 0.12)               
## 
##    0.44         11:14            16.516               -0.833                     0.078               0   
##                             (14.025, 18.77)      (-0.833, -0.833)           (0.045, 0.111)               
## 
##    0.45          9:11            14.627               -0.909                     0.068               0   
##                             (12.519, 17.189)     (-0.909, -0.909)            (0.037, 0.1)                
## 
##    0.46         23:27            13.231               -0.988                     0.055               0   
##                             (11.03, 15.765)      (-0.988, -0.988)           (0.027, 0.092)               
## 
##    0.47         47:53            11.773               -1.069                     0.042               0   
##                              (9.716, 14.32)      (-1.069, -1.069)            (0.02, 0.083)               
## 
##    0.48         12:13            10.539               -1.154                     0.039               0   
##                             (8.657, 13.049)      (-1.154, -1.154)           (0.014, 0.074)               
## 
##    0.49         49:51            9.527                -1.242                     0.028               0   
##                              (7.72, 11.917)      (-1.242, -1.242)           (0.007, 0.068)               
## 
##     0.5          1:1             8.659                -1.333                     0.023               0   
##                             (6.738, 10.873)      (-1.333, -1.333)           (0.002, 0.059)               
## 
##    0.51         51:49             7.72                -1.429                     0.025               0   
##                              (5.945, 9.857)      (-1.429, -1.429)           (-0.003, 0.053)              
## 
##    0.52         13:12            6.913                -1.528                     0.023               0   
##                              (5.239, 8.823)      (-1.528, -1.528)           (-0.003, 0.047)              
## 
##    0.53         53:47             6.26                -1.631                     0.017               0   
##                              (4.612, 8.067)      (-1.631, -1.631)           (-0.008, 0.045)              
## 
##    0.54         27:23            5.457                -1.739                     0.014               0   
##                              (4.042, 7.34)       (-1.739, -1.739)            (-0.01, 0.04)               
## 
##    0.55          11:9            4.796                -1.852                     0.017               0   
##                              (3.507, 6.726)      (-1.852, -1.852)           (-0.011, 0.037)              
## 
##    0.56         14:11            4.207                -1.97                      0.012               0   
##                              (3.024, 6.08)        (-1.97, -1.97)            (-0.012, 0.036)              
## 
##    0.57         57:43             3.85                -2.093                     0.008               0   
##                               (2.391, 5.4)       (-2.093, -2.093)           (-0.014, 0.032)              
## 
##    0.58         29:21            3.348                -2.222                       0                 0   
##                              (1.933, 4.86)       (-2.222, -2.222)           (-0.017, 0.027)              
## 
##    0.59         59:41            2.759                -2.358                    -0.002               0   
##                              (1.529, 4.35)       (-2.358, -2.358)           (-0.018, 0.024)              
## 
##     0.6          3:2             2.272                 -2.5                     -0.001               0   
##                              (1.221, 3.907)        (-2.5, -2.5)             (-0.018, 0.02)               
## 
##    0.61         61:39            1.829                -2.65                     -0.001               0   
##                              (1.006, 3.384)       (-2.65, -2.65)            (-0.019, 0.018)              
## 
##    0.62         31:19            1.619                -2.807                    -0.002               0   
##                              (0.844, 2.91)       (-2.807, -2.807)           (-0.018, 0.015)              
## 
##    0.63         63:37            1.241                -2.973                    -0.004               0   
##                              (0.633, 2.494)      (-2.973, -2.973)           (-0.018, 0.013)              
## 
##    0.64          16:9             1.05                -3.148                    -0.002               0   
##                              (0.518, 2.163)      (-3.148, -3.148)           (-0.016, 0.012)              
## 
##    0.65          13:7            0.924                -3.333                    -0.003               0   
##                              (0.401, 1.886)      (-3.333, -3.333)           (-0.016, 0.013)              
## 
##    0.66         33:17            0.775                -3.529                       0                 0   
##                              (0.314, 1.534)      (-3.529, -3.529)           (-0.014, 0.011)              
## 
##    0.67         67:33            0.566                -3.737                    -0.001               0   
##                              (0.254, 1.284)      (-3.737, -3.737)           (-0.013, 0.01)               
## 
##    0.68          17:8             0.44                -3.958                    -0.001               0   
##                              (0.189, 1.085)      (-3.958, -3.958)           (-0.013, 0.01)               
## 
##    0.69         69:31            0.399                -4.194                    -0.003               0   
##                              (0.127, 0.863)      (-4.194, -4.194)           (-0.013, 0.008)              
## 
##     0.7          7:3             0.358                -4.444                    -0.005               0   
##                              (0.104, 0.706)      (-4.444, -4.444)           (-0.014, 0.006)              
## 
##    0.71         71:29            0.295                -4.713                    -0.005               0   
##                              (0.086, 0.626)      (-4.713, -4.713)           (-0.012, 0.006)              
## 
##    0.72          18:7            0.211                  -5                      -0.003               0   
##                              (0.064, 0.526)          (-5, -5)               (-0.012, 0.005)              
## 
##    0.73         73:27            0.168                -5.309                    -0.003               0   
##                              (0.042, 0.459)      (-5.309, -5.309)           (-0.012, 0.005)              
## 
##    0.74         37:13            0.146                -5.641                    -0.001               0   
##                              (0.022, 0.382)      (-5.641, -5.641)           (-0.012, 0.005)              
## 
##    0.75          3:1             0.146                  -6                      -0.001               0   
##                              (0.02, 0.314)           (-6, -6)               (-0.012, 0.005)              
## 
##    0.76          19:6            0.105                -6.389                    -0.003               0   
##                              (0.02, 0.273)       (-6.389, -6.389)           (-0.01, 0.005)               
## 
##    0.77         77:23            0.105                -6.812                    -0.003               0   
##                                (0, 0.247)        (-6.812, -6.812)           (-0.011, 0.005)              
## 
##    0.78         39:11            0.083                -7.273                    -0.001               0   
##                                (0, 0.208)        (-7.273, -7.273)           (-0.009, 0.005)              
## 
##    0.79         79:21            0.083                -7.778                    -0.001               0   
##                                (0, 0.189)        (-7.778, -7.778)           (-0.008, 0.005)              
## 
##     0.8          4:1             0.061                -8.333                     0.002               0   
##                                (0, 0.168)        (-8.333, -8.333)           (-0.007, 0.005)              
## 
##    0.81         81:19            0.061                -8.947                     0.002               0   
##                                (0, 0.146)        (-8.947, -8.947)           (-0.005, 0.005)              
## 
##    0.82          41:9            0.061                -9.63                      0.002               0   
##                                (0, 0.143)         (-9.63, -9.63)            (-0.003, 0.004)              
## 
##    0.83         83:17            0.061               -10.392                     0.002               0   
##                                (0, 0.123)       (-10.392, -10.392)          (-0.002, 0.004)              
## 
##    0.84          21:4            0.061                -11.25                     0.002               0   
##                                (0, 0.123)        (-11.25, -11.25)           (-0.001, 0.004)              
## 
##    0.85          17:3            0.041               -12.222                     0.001               0   
##                                (0, 0.123)       (-12.222, -12.222)            (0, 0.004)                 
## 
##    0.86          43:7            0.041               -13.333                     0.001               0   
##                                (0, 0.123)       (-13.333, -13.333)            (0, 0.004)                 
## 
##    0.87         87:13              0                 -14.615                       0                 0   
##                                (0, 0.104)       (-14.615, -14.615)            (0, 0.003)                 
## 
##    0.88          22:3              0                 -16.111                       0                 0   
##                                (0, 0.102)       (-16.111, -16.111)            (0, 0.003)                 
## 
##    0.89         89:11              0                 -17.879                       0                 0   
##                                (0, 0.082)       (-17.879, -17.879)            (0, 0.003)                 
## 
##     0.9          9:1               0                   -20                         0                 0   
##                                (0, 0.061)           (-20, -20)                (0, 0.002)                 
## 
##    0.91          91:9              0                 -22.593                       0                 0   
##                                (0, 0.041)       (-22.593, -22.593)            (0, 0.001)                 
## 
##    0.92          23:2              0                 -25.833                       0                 0   
##                                  (0, 0)         (-25.833, -25.833)              (0, 0)                   
## 
##    0.93          93:7              0                   -30                         0                 0   
##                                  (0, 0)             (-30, -30)                  (0, 0)                   
## 
##    0.94          47:3              0                 -35.556                       0                 0   
##                                  (0, 0)         (-35.556, -35.556)              (0, 0)                   
## 
##    0.95          19:1              0                 -43.333                       0                 0   
##                                  (0, 0)         (-43.333, -43.333)              (0, 0)                   
## 
##    0.96          24:1              0                   -55                         0                 0   
##                                  (0, 0)             (-55, -55)                  (0, 0)                   
## 
##    0.97          97:3              0                 -74.444                       0                 0   
##                                  (0, 0)         (-74.444, -74.444)              (0, 0)                   
## 
##    0.98          49:1              0                 -113.333                      0                 0   
##                                  (0, 0)        (-113.333, -113.333)             (0, 0)                   
## 
##    0.99          99:1              0                   -230                        0                 0   
##                                  (0, 0)            (-230, -230)                 (0, 0)                   
## 
##      1          Inf:1              0                    NA                        NA                 NA  
##                                  (0, 0)              (NA, NA)                  (NA, NA)                  
## ---------------------------------------------------------------------------------------------------------
# summary(complex,measure= 'NB')

## Plot the clinical impact curve
plot_clinical_impact(simple, population.size = 1000, cost.benefit.axis = T, n.cost.benefits = 8, 
    col = c("red", "blue"), confidence.intervals = T, ylim = c(0, 1000), legend.position = "topright")

[Figure: clinical impact curve for the simple model]

plot_clinical_impact(complex, population.size = 1000, cost.benefit.axis = T, n.cost.benefits = 8, 
    col = c("red", "blue"), confidence.intervals = T, ylim = c(0, 1000), legend.position = "topright")

[Figure: clinical impact curve for the complex model]

# Use the simple model to predict risk stratification for 1000 people, show the
# cost:benefit axis with 8 tick marks, and draw confidence intervals, producing
# the figure shown.
# The red curve (Number high risk) is the number classified as positive (high
# risk) by the simple or complex model at each threshold probability; the blue
# curve (Number high risk with outcome) is the number of true positives at each
# threshold. The meaning is self-evident.

# How DCA works: it adds a loss function on top of the regression prediction.
# First define a few quantities:
# P: the benefit of intervening on a true positive (e.g. a marker predicts that
#    a patient has cancer, the patient really does, and a biopsy confirms it);
# L: the loss of intervening on a false positive (e.g. cancer is predicted, a
#    biopsy is done, but it is only hyperplasia - the patient suffered the
#    knife for nothing);
# Pi: patient i's probability of having cancer; when Pi > Pt the patient is
#    called positive and receives the intervention.
# Intervention is therefore justified if and only if Pi * P > (1 - Pi) * L,
# i.e. the expected benefit exceeds the expected loss. Rearranging gives
# Pi > L / (P + L), so L / (P + L) is defined as the threshold for Pi, namely Pt.
# For a binary predictor, a positive result forces Pi = 1 and a negative one
# Pi = 0; this makes binary and other predictor types comparable.
# These quantities define true positives (A), false positives (B), false
# negatives (C) and true negatives (D):
# A: Pi >= Pt and diseased; B: Pi >= Pt and not diseased;
# C: Pi < Pt and diseased;  D: Pi < Pt and not diseased.
# In a random sample, A, B, C and D are the proportions of these four groups,
# so A + B + C + D = 1, and the prevalence pi is A + C.
# If everyone in the sample with Pi >= Pt is biopsied, some are diagnosed and
# some are cut open for nothing, so the net benefit is NB = A * P - B * L.
# Vickers argued that the exact values of P and L matter less than the ratio
# L/P, so the formula is divided through by P: NB = A - B * L/P. Substituting
# the definition of Pt gives NB = A - B * Pt / (1 - Pt). Plotting NB on the
# y-axis against Pt on the x-axis gives the decision curve.
# With prevalence standardization, NB = A * pi - B * (1 - pi) * Pt / (1 - Pt).
# The two extreme curves follow directly. If all samples are negative (Pi < Pt),
# no one is treated, so A = B = 0 and NB = 0. If all samples are positive,
# everyone is treated, so C = D = 0, A = pi, B = 1 - pi, and
# NB = pi - (1 - pi) * Pt / (1 - Pt), a line with negative slope.
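The formula NB = A − B × Pt/(1 − Pt) above can be checked by hand on simulated data. The sketch below uses made-up data and is only an illustration of the arithmetic, not a replacement for decision_curve():

```r
# Sketch: net benefit at a single threshold, computed from first principles on
# simulated data. A and B are the true- and false-positive proportions of the
# whole sample, so NB = A - B * Pt / (1 - Pt).
set.seed(1)
n    <- 1000
risk <- runif(n)              # hypothetical predicted probabilities
y    <- rbinom(n, 1, risk)    # simulated outcomes consistent with those risks
Pt   <- 0.2                   # threshold probability

A  <- mean(risk >= Pt & y == 1)   # true-positive proportion
B  <- mean(risk >= Pt & y == 0)   # false-positive proportion
NB <- A - B * Pt / (1 - Pt)       # net benefit of the model at Pt

# "Treat all" reference line at the same threshold:
pi_hat <- mean(y)
NB_all <- pi_hat - (1 - pi_hat) * Pt / (1 - Pt)
```

Sweeping Pt over a grid and repeating this computation reproduces the model's decision curve, the treat-all curve, and the treat-none line (NB = 0).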

# References:
# 1. Decision curve analysis: a novel method for evaluating prediction models
# 2. Decision curve analysis revisited: overall net benefit, relationships to
#    ROC curve analysis, and application to case-control studies
# 3. Assessing the Clinical Impact of Risk Prediction Models With Decision
#    Curves: Guidance for Correct Interpretation and Appropriate Use

DCA for survival outcomes

# Basic data set-up: load the survival library first (Surv() needs it), then source
# the file that defines the stdca() command
library(survival)
source("stdca.R")
# Read in the data and create a survival object; time to event is ttcancer and the
# event indicator is cancer
data.set = read.delim("dca.txt", header = TRUE, sep = "\t")
Srv = Surv(data.set$ttcancer, data.set$cancer)

# Fit the Cox model
coxmod = coxph(Srv ~ age + famhistory + marker, data = data.set)
# The probability of failure by 1.5 years is 1 minus the predicted survival probability
data.set$pr_failure18 = c(1 - (summary(survfit(coxmod, newdata = data.set), times = 1.5)$surv))

# Run the decision curve analysis (with a smoother)
stdca(data = data.set, outcome = "cancer", ttoutcome = "ttcancer", timepoint = 1.5, predictors = "pr_failure18", 
    xstop = 0.5, smooth = TRUE)

(Figure: decision curve for pr_failure18 at the 1.5-year time point)

## $N
## [1] 750
## 
## $predictors
##      predictor harm.applied probability
## 1 pr_failure18            0        TRUE
## 
## $interventions.avoided.per
## [1] 100
## 
## $net.benefit
##    threshold           all none pr_failure18 pr_failure18_sm
## 1       0.01  0.2122351058    0   0.21181105      0.21174613
## 2       0.02  0.2041966885    0   0.20361189      0.20385712
## 3       0.03  0.1959925306    0   0.20030295      0.20030295
## 4       0.04  0.1876174528    0   0.19999857      0.19999857
## 5       0.05  0.1790660576    0   0.18877249      0.18877249
## 6       0.06  0.1703327178    0   0.18397250      0.18397250
## 7       0.07  0.1614115642    0   0.18451216      0.18451216
## 8       0.08  0.1522964725    0   0.17599183      0.17599183
## 9       0.09  0.1429810491    0   0.17050884      0.17050884
## 10      0.10  0.1334586163    0   0.16958715      0.16958715
## 11      0.11  0.1237221963    0   0.16366952      0.16366952
## 12      0.12  0.1137644940    0   0.16432269      0.16432269
## 13      0.13  0.1035778790    0   0.16149156      0.16149156
## 14      0.14  0.0931543659    0   0.15465439      0.15465439
## 15      0.15  0.0824855938    0   0.14784631      0.14784631
## 16      0.16  0.0715628032    0   0.14257246      0.14257246
## 17      0.17  0.0603768129    0   0.14110378      0.14110378
## 18      0.18  0.0489179935    0   0.13798791      0.13798791
## 19      0.19  0.0371762404    0   0.13763724      0.13763724
## 20      0.20  0.0251409434    0   0.12895335      0.12895335
## 21      0.21  0.0128009553    0   0.12448413      0.12448413
## 22      0.22  0.0001445573    0   0.12833148      0.12833148
## 23      0.23 -0.0128405783    0   0.12638034      0.12638034
## 24      0.24 -0.0261674280    0   0.12723684      0.12723684
## 25      0.25 -0.0398496604    0   0.12215471      0.12215471
## 26      0.26 -0.0539016828    0   0.11416302      0.11416302
## 27      0.27 -0.0683386922    0   0.10999741      0.10999741
## 28      0.28 -0.0831767296    0   0.10939791      0.10939791
## 29      0.29 -0.0984327399    0   0.10822856      0.10822856
## 30      0.30 -0.1141246361    0   0.10585886      0.10585886
## 31      0.31 -0.1302713700    0   0.10300367      0.10300367
## 32      0.32 -0.1468930078    0   0.10181986      0.10181986
## 33      0.33 -0.1640108139    0   0.10224584      0.10224584
## 34      0.34 -0.1816473414    0   0.10113914      0.10113914
## 35      0.35 -0.1998265312    0   0.09905699      0.09905699
## 36      0.36 -0.2185738208    0   0.09633101      0.09633101
## 37      0.37 -0.2379162624    0   0.09534742      0.09534742
## 38      0.38 -0.2578826537    0   0.09187725      0.09187725
## 39      0.39 -0.2785036808    0   0.08964716      0.08964716
## 40      0.40 -0.2998120755    0   0.09002751      0.09002751
## 41      0.41 -0.3218427887    0   0.08197475      0.08197475
## 42      0.42 -0.3446331816    0   0.08169947      0.08169947
## 43      0.43 -0.3682232374    0   0.08143814      0.08143814
## 44      0.44 -0.3926557952    0   0.07927021      0.07927021
## 45      0.45 -0.4179768096    0   0.08356079      0.08356079
## 46      0.46 -0.4442356395    0   0.07803458      0.07803458
## 47      0.47 -0.4714853685    0   0.07986688      0.07986688
## 48      0.48 -0.4997831640    0   0.07858227      0.07858227
## 49      0.49 -0.5291906771    0   0.07768162      0.07751346
## 50      0.50 -0.5597744906    0   0.07587186      0.07591062
## 
## $interventions.avoided
##    threshold pr_failure18 pr_failure18_sm
## 1       0.01    -4.198140      -4.7660406
## 2       0.02    -2.865499      -0.7203969
## 3       0.03    13.937020      13.9370196
## 4       0.04    29.714687      29.7146874
## 5       0.05    18.442215      18.4422153
## 6       0.06    21.368990      21.3689903
## 7       0.07    30.690796      30.6907960
## 8       0.08    27.249660      27.2496595
## 9       0.09    27.833653      27.8336527
## 10      0.10    32.515680      32.5156803
## 11      0.11    32.321013      32.3210130
## 12      0.12    37.076008      37.0760083
## 13      0.13    38.757620      38.7576205
## 14      0.14    37.778585      37.7785849
## 15      0.15    37.037739      37.0377387
## 16      0.16    37.280070      37.2800702
## 17      0.17    39.413756      39.4137560
## 18      0.18    40.576297      40.5762971
## 19      0.19    42.828111      42.8281111
## 20      0.20    41.524962      41.5249619
## 21      0.21    42.014147      42.0141471
## 22      0.22    45.448092      45.4480917
## 23      0.23    46.608741      46.6087406
## 24      0.24    48.578018      48.5780178
## 25      0.25    48.601310      48.6013099
## 26      0.26    47.833800      47.8337999
## 27      0.27    48.216799      48.2167988
## 28      0.28    49.519193      49.5191931
## 29      0.29    50.596386      50.5963863
## 30      0.30    51.329482      51.3294816
## 31      0.31    51.922508      51.9225082
## 32      0.32    52.851484      52.8514842
## 33      0.33    54.058169      54.0581691
## 34      0.34    54.893846      54.8938461
## 35      0.35    55.506940      55.5069400
## 36      0.36    55.983082      55.9830819
## 37      0.37    56.744897      56.7448975
## 38      0.38    57.066090      57.0660899
## 39      0.39    57.582567      57.5825669
## 40      0.40    58.475938      58.4759382
## 41      0.41    58.110328      58.1103281
## 42      0.42    58.874509      58.8745086
## 43      0.43    59.606275      59.6062750
## 44      0.44    60.063309      60.0633094
## 45      0.45    61.299040      61.2990397
## 46      0.46    61.309982      61.3099824
## 47      0.47    62.173764      62.1737642
## 48      0.48    62.656255      62.6562552
## 49      0.49    63.164259      63.1469446
## 50      0.50    63.564635      63.5686257
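For survival data, stdca() cannot count true and false positives directly because of censoring; as described in the DCA literature, it replaces observed status with a Kaplan-Meier estimate: at time point t and threshold Pt, the true-positive rate is (1 − S(t | Pi ≥ Pt)) × P(Pi ≥ Pt) and the false-positive rate is S(t | Pi ≥ Pt) × P(Pi ≥ Pt). A minimal numeric sketch of that arithmetic (with hypothetical survival estimates, not the actual stdca source):

```python
# Net benefit for a survival outcome at time t and threshold pt (hypothetical numbers).
# s_high: Kaplan-Meier survival at t among patients with predicted risk >= pt.
# p_high: proportion of patients with predicted risk >= pt.

def survival_net_benefit(s_high, p_high, pt):
    tp = (1 - s_high) * p_high   # had the event by t among those flagged high risk
    fp = s_high * p_high         # event-free at t among those flagged high risk
    return tp - fp * pt / (1 - pt)

# Suppose at pt = 0.20, 30% of patients are flagged high risk and their Kaplan-Meier
# survival at 1.5 years is 0.60.
nb = survival_net_benefit(s_high=0.60, p_high=0.30, pt=0.20)
print(round(nb, 4))  # 0.12 - 0.18 * 0.25 = 0.075
```

This is why the coxph/survfit step above computes pr_failure18 = 1 − S(1.5): the predicted failure probability plays the role of Pi when stdca() sweeps the threshold.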