
Pharmacological interventions for acute pancreatitis: a network meta‐analysis


Appendices

Appendix 1. Glossary of terms

Acute: sudden.

Analogues: substances that are similar to another substance.

Antioxidants: substances that inhibit oxidation.

Autodigestion: breakdown of an organ by the digestive substances (enzymes) that it secretes.

Bacterial colonisation: growth and multiplication of bacteria.

Endoscopic: with the help of an endoscope, a tube inserted into the body (in this context, through the mouth into the stomach and the upper part of the small intestine).

Endoscopic sphincterotomy: endoscopic operation to cut the muscle surrounding the common bile duct and the pancreatic duct.

Enzyme: a substance that enables and speeds up chemical reactions necessary for the normal functioning of the body.

Epigastric pain: upper central abdominal pain.

Insulin: substance which helps regulate blood sugar.

Morbidity: illness (in this context, complications).

Mortality: death.

Necrosectomy: removal of dead tissue.

Necrosis: death and decomposition of living tissue, usually caused by lack of blood supply, but it can be caused by other pathological insults.

Pancreatic pseudocysts: fluid collections in the pancreas or the tissues surrounding the pancreas that are enclosed by a well‐defined wall and contain fluid with little or no solid material.

Pathological insult: a substance or mechanism that causes the condition.

Peripancreatic tissues: tissues surrounding the pancreas.

Pharmacological: relating to medicinal drugs.

Platelet activating factor: substance that causes platelets (cells responsible for clotting of blood) to clump together and is an intermediary substance in the inflammatory pathway.

Probiotics: microorganisms that are believed to provide health benefits when consumed.

Protease inhibitors: substances that inhibit proteases.

Protease: an enzyme that digests protein.

Radiology‐guided percutaneous treatments: treatments carried out by inserting a needle through the skin, guided by a scan (usually an ultrasound or CT (computed tomography) scan).

Serum: clear fluid that separates out when blood clots.

Transient: temporary.

Tumour necrosis factor‐alpha antibody: antibody to tumour necrosis factor‐alpha, an intermediary substance in the inflammatory pathway.

Appendix 2. CENTRAL search strategy

#1 MeSH descriptor: [Pancreatitis, Acute Necrotizing] this term only

#2 MeSH descriptor: [Pancreatitis] this term only and with qualifier(s): [Etiology ‐ ET]

#3 MeSH descriptor: [Pancreas] this term only and with qualifier(s): [Abnormalities ‐ AB, Pathology ‐ PA, Physiopathology ‐ PP]

#4 (acute near/3 pancrea*)

#5 (necro* near/3 pancrea*)

#6 (inflam* near/3 pancrea*)

#7 ((interstitial or edema* or oedema*) near/2 pancrea*)

#8 #1 or #2 or #3 or #4 or #5 or #6 or #7

Appendix 3. MEDLINE search strategy

1. Pancreatitis, Acute Necrotizing/

2. Pancreatitis/et

3. Pancreas/ab, pa, pp

4. (acute adj3 pancrea*).mp.

5. (necro* adj3 pancrea*).mp.

6. (inflam* adj3 pancrea$).mp.

7. ((interstitial or edema* or oedema*) adj2 pancrea*).mp.

8. 1 or 2 or 3 or 4 or 5 or 6 or 7

9. randomized controlled trial.pt.

10. controlled clinical trial.pt.

11. randomized.ab.

12. placebo.ab.

13. drug therapy.fs.

14. randomly.ab.

15. trial.ab.

16. groups.ab.

17. 9 or 10 or 11 or 12 or 13 or 14 or 15 or 16

18. exp animals/ not humans.sh.

19. 17 not 18

20. 8 and 19

Appendix 4. EMBASE search strategy

1. acute hemorrhagic pancreatitis/

2. Pancreatitis/et

3. acute pancreatitis/

4. (acute adj3 pancrea*).mp.

5. (necro* adj3 pancrea*).mp.

6. (inflam* adj3 pancrea*).mp.

7. ((interstitial or edema* or oedema*) adj2 pancrea*).mp.

8. 1 or 2 or 3 or 4 or 5 or 6 or 7

9. Clinical trial/

10. Randomized controlled trial/

11. Randomization/

12. Single‐Blind Method/

13. Double‐Blind Method/

14. Cross‐Over Studies/

15. Random Allocation/

16. Placebo/

17. Randomi?ed controlled trial*.tw.

18. Rct.tw.

19. Random allocation.tw.

20. Randomly allocated.tw.

21. Allocated randomly.tw.

22. (allocated adj2 random).tw.

23. Single blind*.tw.

24. Double blind*.tw.

25. ((treble or triple) adj blind*).tw.

26. Placebo*.tw.

27. Prospective study/

28. or/9‐27

29. Case study/

30. Case report.tw.

31. Abstract report/ or letter/

32. or/29‐31

33. 28 not 32

34. 8 and 33

Appendix 5. Science Citation Index search strategy

# 1 TS=((acute or necro* or inflam* or interstitial or edema* or oedema*) near/3 pancrea*)

# 2 TS=(random* OR rct* OR crossover OR masked OR blind* OR placebo* OR meta‐analysis OR systematic review* OR meta‐analys*)

# 3 #2 AND #1

Appendix 6. ClinicalTrials.gov search strategy

"Interventional" [STUDY‐TYPES] AND acute pancreatitis [DISEASE] AND ( "Phase 2" OR "Phase 3" OR "Phase 4" ) [PHASE]

Appendix 7. WHO ICTRP search strategy

Acute pancreatitis

Appendix 8. Stata code for network plot

networkplot t1 t2, labels(T1 T2 T3 ...)
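Here t1 and t2 are assumed to be variables holding the numeric codes of the two treatments in each direct comparison (one record per pairwise comparison), and labels() supplies the treatment names in the order of their numeric codes; networkplot refers to the user‐written Stata command for drawing network plots (part of the 'network graphs' routines for network meta‐analysis).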

Appendix 9. WinBUGS code

Binary outcome

Binary outcome ‐ fixed‐effect model

# Binomial likelihood, logit link
# Fixed effects model
model{ # *** PROGRAM STARTS
for(i in 1:ns){ # LOOP THROUGH STUDIES
mu[i] ˜ dnorm(0,.0001) # vague priors for all trial baselines
for (k in 1:na[i]) { # LOOP THROUGH ARMS
r[i,k] ˜ dbin(p[i,k],n[i,k]) # binomial likelihood
# model for linear predictor
logit(p[i,k]) <‐ mu[i] + d[t[i,k]] ‐ d[t[i,1]]
# expected value of the numerators
rhat[i,k] <‐ p[i,k] * n[i,k]
#Deviance contribution
dev[i,k] <‐ 2 * (r[i,k] * (log(r[i,k])‐log(rhat[i,k]))
+ (n[i,k]‐r[i,k]) * (log(n[i,k]‐r[i,k]) ‐ log(n[i,k]‐rhat[i,k])))
}
# summed residual deviance contribution for this trial
resdev[i] <‐ sum(dev[i,1:na[i]])
}
totresdev <‐ sum(resdev[]) # Total Residual Deviance
d[1]<‐0 # treatment effect is zero for reference treatment
# vague priors for treatment effects
for (k in 2:nt){ d[k] ˜ dnorm(0,.0001) }

# pairwise ORs and LORs for all possible pair‐wise comparisons, if nt>2
for (c in 1:(nt‐1)) {
for (k in (c+1):nt) {
or[c,k] <‐ exp(d[k] ‐ d[c])
lor[c,k] <‐ (d[k]‐d[c])
}
}
# ranking on relative scale
for (k in 1:nt) {
# rk[k] <‐ nt+1‐rank(d[],k) # assumes events are “good”
rk[k] <‐ rank(d[],k) # assumes events are “bad”
best[k] <‐ equals(rk[k],1) #calculate probability that treat k is best
for (h in 1:nt){ prob[h,k] <‐ equals(rk[k],h) } # calculates probability that treat k is h‐th best
}
} # *** PROGRAM ENDS

Binary outcome ‐ random‐effects model

# Binomial likelihood, logit link
# Random effects model
model{ # *** PROGRAM STARTS
for(i in 1:ns){ # LOOP THROUGH STUDIES
w[i,1] <‐ 0 # adjustment for multi‐arm trials is zero for control arm
delta[i,1] <‐ 0 # treatment effect is zero for control arm
mu[i] ˜ dnorm(0,.0001) # vague priors for all trial baselines
for (k in 1:na[i]) { # LOOP THROUGH ARMS
r[i,k] ˜ dbin(p[i,k],n[i,k]) # binomial likelihood
logit(p[i,k]) <‐ mu[i] + delta[i,k] # model for linear predictor
rhat[i,k] <‐ p[i,k] * n[i,k] # expected value of the numerators
#Deviance contribution
dev[i,k] <‐ 2 * (r[i,k] * (log(r[i,k])‐log(rhat[i,k]))
+ (n[i,k]‐r[i,k]) * (log(n[i,k]‐r[i,k]) ‐ log(n[i,k]‐rhat[i,k]))) }
# summed residual deviance contribution for this trial
resdev[i] <‐ sum(dev[i,1:na[i]])
for (k in 2:na[i]) { # LOOP THROUGH ARMS
# trial‐specific LOR distributions
delta[i,k] ˜ dnorm(md[i,k],taud[i,k])
# mean of LOR distributions (with multi‐arm trial correction)
md[i,k] <‐ d[t[i,k]] ‐ d[t[i,1]] + sw[i,k]
# precision of LOR distributions (with multi‐arm trial correction)
taud[i,k] <‐ tau *2*(k‐1)/k
# adjustment for multi‐arm RCTs
w[i,k] <‐ (delta[i,k] ‐ d[t[i,k]] + d[t[i,1]])
# cumulative adjustment for multi‐arm trials
sw[i,k] <‐ sum(w[i,1:k‐1])/(k‐1)
}
}
totresdev <‐ sum(resdev[]) # Total Residual Deviance
d[1]<‐0 # treatment effect is zero for reference treatment
# vague priors for treatment effects
for (k in 2:nt){ d[k] ˜ dnorm(0,.0001) }
sd ˜ dunif(0,5) # vague prior for between‐trial SD
tau <‐ pow(sd,‐2) # between‐trial precision = (1/between‐trial variance)

# pairwise ORs and LORs for all possible pair‐wise comparisons, if nt>2
for (c in 1:(nt‐1)) {
for (k in (c+1):nt) {
or[c,k] <‐ exp(d[k] ‐ d[c])
lor[c,k] <‐ (d[k]‐d[c])
}
}
# ranking on relative scale
for (k in 1:nt) {
# rk[k] <‐ nt+1‐rank(d[],k) # assumes events are “good”
rk[k] <‐ rank(d[],k) # assumes events are “bad”
best[k] <‐ equals(rk[k],1) #calculate probability that treat k is best
for (h in 1:nt){ prob[h,k] <‐ equals(rk[k],h) } # calculates probability that treat k is h‐th best
}

} # *** PROGRAM ENDS

Binary outcome ‐ inconsistency model (random‐effects)

# Binomial likelihood, logit link, inconsistency model
# Random effects model
# Treatment by design interactions
# ns = number of studies, nt = number of treatments, A = total number of treatment arms in all trials, and D = number of designs; these must be supplied as data.
# The main data are arranged with one record per arm: d and study indicate which design and study that arm belongs to, t indicates its treatment, and b indicates the first treatment in that design. r and n are the numbers of events and individuals in the arm. The supplementary data offset and offset.design list the rows in which the first arm of each trial and of each design is found.
model {
for(i in 1:ns) {
eff.study[i, b[offset[i]], b[offset[i]]] <‐0
for(k in (offset[i] + 1):(offset[i + 1]‐1)) {
eff.study[i,t[k],b[k]] <‐eff.des[d[k],t[k]] + RE[i,t[k]] ‐ RE[i,b[k]]
}
}
# Random effects for heterogeneity
for(i in 1:ns) {
RE[i,1] <‐0
RE[i,2:nt] ˜ dmnorm(zero[], Prec[,])
}
# Prec is the inverse of the structured heterogeneity matrix
for(i in 1:(nt‐1)) {
for(j in 1:(nt‐1)){
Prec[i,j] <‐2*(equals(i,j)‐1/nt)/(tau*tau)
}
}
for(i in 1:A) {
logit(p[i]) <‐mu[study[i]] + eff.study[study[i],t[i],b[i]]
r[i] ˜ dbin(p[i],n[i])}
# For computing DIC
for(i in 1:A) {
rhat[i] <‐p[i] * n[i]
dev[i] <‐2 * (r[i] * (log(r[i])‐log(rhat[i])) + (n[i]‐r[i]) * (log(n[i]‐r[i]) ‐ log(n[i]‐
rhat[i])))
}
devs <‐sum(dev[])
# Priors
for(i in 1:ns) {
mu[i] ˜ dnorm(0,0.01)
}
tau ˜ dunif(0,2)
for(i in 1:D) {
for(k in (offset.design[i] + 1):(offset.design[i] + num.ests[i])) {
eff.des[i,t[k]] ˜ dnorm(0,0.01) # vague priors for design‐specific treatment effects
}
}
} # *** PROGRAM ENDS
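As a minimal sketch of the arm‐level data layout this inconsistency model expects (all values invented purely for illustration): four studies of three treatments covering three designs (1 vs 2, 1 vs 3, and 1 vs 2 vs 3) give A = 9 arm records grouped by design; offset marks the row of the first arm of each study (with a final entry of A + 1), offset.design marks the row of the first arm of each design, and num.ests gives the number of non‐baseline arms in each design.

# hypothetical data for the binary inconsistency model (illustration only)
list(ns = 4, nt = 3, A = 9, D = 3,
study = c(1,1, 2,2, 3,3, 4,4,4),
d = c(1,1, 1,1, 2,2, 3,3,3),
t = c(1,2, 1,2, 1,3, 1,2,3),
b = c(1,1, 1,1, 1,1, 1,1,1),
r = c(5,3, 6,4, 7,5, 8,6,5),
n = c(50,50, 60,60, 70,70, 80,80,80),
offset = c(1, 3, 5, 7, 10),
offset.design = c(1, 5, 7),
num.ests = c(1, 1, 2),
zero = c(0, 0))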

Continuous outcome (mean difference)

Continuous outcome (mean difference) ‐ fixed‐effect model

# Normal likelihood, identity link
# Fixed effect model
model{ # *** PROGRAM STARTS
for(i in 1:ns){ # LOOP THROUGH STUDIES
mu[i] ˜ dnorm(0,.0001) # vague priors for all trial baselines
for (k in 1:na[i]) { # LOOP THROUGH ARMS
var[i,k] <‐ pow(se[i,k],2) # calculate variances
prec[i,k] <‐ 1/var[i,k] # set precisions
y[i,k] ˜ dnorm(theta[i,k],prec[i,k])
# model for linear predictor
theta[i,k] <‐ mu[i] + d[t[i,k]] ‐ d[t[i,1]]
#Deviance contribution
dev[i,k] <‐ (y[i,k]‐theta[i,k])*(y[i,k]‐theta[i,k])*prec[i,k]
}
# summed residual deviance contribution for this trial
resdev[i] <‐ sum(dev[i,1:na[i]])
}
totresdev <‐ sum(resdev[]) #Total Residual Deviance
d[1]<‐0 # treatment effect is zero for control arm
# vague priors for treatment effects
for (k in 2:nt){ d[k] ˜ dnorm(0,.0001) }
# ranking on relative scale
for (k in 1:nt) {
rk[k] <‐ rank(d[],k) # assumes lower is better
# rk[k] <‐ nt+1‐rank(d[],k) # assumes lower outcome is worse
best[k] <‐ equals(rk[k],1) #calculate probability that treat k is best
for (h in 1:nt){ prob[h,k] <‐ equals(rk[k],h) } # calculates probability that treat k is h‐th best
}
} # *** PROGRAM ENDS

Continuous outcome (mean difference) ‐ random‐effects model

# Normal likelihood, identity link
# Random effects model for multi‐arm trials
model{ # *** PROGRAM STARTS
for(i in 1:ns){ # LOOP THROUGH STUDIES
w[i,1] <‐ 0 # adjustment for multi‐arm trials is zero for control arm
delta[i,1] <‐ 0 # treatment effect is zero for control arm
mu[i] ˜ dnorm(0,.0001) # vague priors for all trial baselines
for (k in 1:na[i]) { # LOOP THROUGH ARMS
var[i,k] <‐ pow(se[i,k],2) # calculate variances
prec[i,k] <‐ 1/var[i,k] # set precisions
y[i,k] ˜ dnorm(theta[i,k],prec[i,k])
theta[i,k] <‐ mu[i] + delta[i,k] # model for linear predictor
#Deviance contribution
dev[i,k] <‐ (y[i,k]‐theta[i,k])*(y[i,k]‐theta[i,k])*prec[i,k]
}
# summed residual deviance contribution for this trial
resdev[i] <‐ sum(dev[i,1:na[i]])
for (k in 2:na[i]) { # LOOP THROUGH ARMS
# trial‐specific MD distributions
delta[i,k] ˜ dnorm(md[i,k],taud[i,k])
# mean of MD distributions, with multi‐arm trial correction
md[i,k] <‐ d[t[i,k]] ‐ d[t[i,1]] + sw[i,k]
# precision of MD distributions (with multi‐arm trial correction)
taud[i,k] <‐ tau *2*(k‐1)/k
# adjustment, multi‐arm RCTs
w[i,k] <‐ (delta[i,k] ‐ d[t[i,k]] + d[t[i,1]])
# cumulative adjustment for multi‐arm trials
sw[i,k] <‐ sum(w[i,1:k‐1])/(k‐1)
}
}
totresdev <‐ sum(resdev[]) #Total Residual Deviance
d[1]<‐0 # treatment effect is zero for control arm
# vague priors for treatment effects
for (k in 2:nt){ d[k] ˜ dnorm(0,.0001) }
sd ˜ dunif(0,5) # vague prior for between‐trial SD
tau <‐ pow(sd,‐2) # between‐trial precision = (1/between‐trial variance)
# ranking on relative scale
for (k in 1:nt) {
rk[k] <‐ rank(d[],k) # assumes lower is better
# rk[k] <‐ nt+1‐rank(d[],k) # assumes lower outcome is worse
best[k] <‐ equals(rk[k],1) #calculate probability that treat k is best
for (h in 1:nt){ prob[h,k] <‐ equals(rk[k],h) } # calculates probability that treat k is h‐th best
}
} # *** PROGRAM ENDS

Continuous outcome (mean difference) ‐ inconsistency model (random‐effects)

# Normal likelihood, identity link, inconsistency model
# Random effects model
# Treatment by design interactions
# ns = number of studies, nt = number of treatments, A = total number of treatment arms in all trials, and D = number of designs; these must be supplied as data.
# The main data are arranged with one record per arm: d and study indicate which design and study that arm belongs to, t indicates its treatment, and b indicates the first treatment in that design. y, se, and n are the mean, standard error, and number of individuals in the arm. The supplementary data offset and offset.design list the rows in which the first arm of each trial and of each design is found.
model {
for(i in 1:ns) {
eff.study[i, b[offset[i]], b[offset[i]]] <‐0
for(k in (offset[i] + 1):(offset[i + 1]‐1)) {
eff.study[i,t[k],b[k]] <‐eff.des[d[k],t[k]] + RE[i,t[k]] ‐ RE[i,b[k]]
}
}
# Random effects for heterogeneity
for(i in 1:ns) {
RE[i,1] <‐0
RE[i,2:nt] ˜ dmnorm(zero[], Prec[,])
}
# Prec is the inverse of the structured heterogeneity matrix
for(i in 1:(nt‐1)) {
for(j in 1:(nt‐1)){
Prec[i,j] <‐2*(equals(i,j)‐1/nt)/(tau*tau)
}
}
for(i in 1:A) {
var[i] <‐ pow(se[i],2) # calculate variances
prec[i] <‐ 1/var[i] # set precisions
y[i] ˜ dnorm(theta[i],prec[i]) # normal likelihood
theta[i] <‐mu[study[i]] + eff.study[study[i],t[i],b[i]] # model for linear predictor
}
# For computing DIC
for(i in 1:A) {
dev[i] <‐ (y[i]‐theta[i])*(y[i]‐theta[i])*prec[i]
}
devs <‐sum(dev[])
# Priors
for(i in 1:ns) {
mu[i] ˜ dnorm(0,0.01)
}
tau ˜ dunif(0,2)
for(i in 1:D) {
for(k in (offset.design[i] + 1):(offset.design[i] + num.ests[i])) {
eff.des[i,t[k]] ˜ dnorm(0,0.01)
}
}
} # *** PROGRAM ENDS

Continuous outcome (standardised mean difference)

The standardised mean difference and its standard error for each treatment comparison will be calculated using the statistical algorithms used by RevMan (RevMan 2012).
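For reference, a brief sketch of the calculation, assuming the Hedges' adjusted g form described in the Cochrane Handbook: with group means m1 and m2, pooled standard deviation s, group sizes n1 and n2, and N = n1 + n2, SMD = ((m1 − m2)/s) × (1 − 3/(4N − 9)), with SE(SMD) = sqrt(N/(n1 n2) + SMD^2/(2(N − 3.94))).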

Continuous outcome (standardised mean difference) ‐ fixed‐effect model

# Normal likelihood, identity link
# Trial‐level data given as treatment differences
# Fixed effects model
model{ # *** PROGRAM STARTS
for(i in 1:ns2) { # LOOP THROUGH 2‐ARM STUDIES
y[i,2] ˜ dnorm(delta[i,2],prec[i,2]) # normal likelihood for 2‐arm trials
#Deviance contribution for trial i
resdev[i] <‐ (y[i,2]‐delta[i,2])*(y[i,2]‐delta[i,2])*prec[i,2]
}
for(i in (ns2+1):(ns2+ns3)) { # LOOP THROUGH THREE‐ARM STUDIES
for (k in 1:(na[i]‐1)) { # set variance‐covariance matrix
for (j in 1:(na[i]‐1)) {
Sigma[i,j,k] <‐ V[i]*(1‐equals(j,k)) + var[i,k+1]*equals(j,k)
}
}
Omega[i,1:(na[i]‐1),1:(na[i]‐1)] <‐ inverse(Sigma[i,,]) #Precision matrix
# multivariate normal likelihood for 3‐arm trials
y[i,2:na[i]] ˜ dmnorm(delta[i,2:na[i]],Omega[i,1:(na[i]‐1),1:(na[i]‐1)])
#Deviance contribution for trial i
for (k in 1:(na[i]‐1)){ # multiply vector & matrix
ydiff[i,k]<‐ y[i,(k+1)] ‐ delta[i,(k+1)]
z[i,k]<‐ inprod2(Omega[i,k,1:(na[i]‐1)], ydiff[i,1:(na[i]‐1)])
}
resdev[i]<‐ inprod2(ydiff[i,1:(na[i]‐1)], z[i,1:(na[i]‐1)])
}
for(i in 1:(ns2+ns3)){ # LOOP THROUGH ALL STUDIES
for (k in 2:na[i]) { # LOOP THROUGH ARMS
var[i,k] <‐ pow(se[i,k],2) # calculate variances
prec[i,k] <‐ 1/var[i,k] # set precisions
delta[i,k] <‐ d[t[i,k]] ‐ d[t[i,1]]
}
}
totresdev <‐ sum(resdev[]) #Total Residual Deviance
d[1]<‐0 # treatment effect is zero for reference treatment
# vague priors for treatment effects
for (k in 2:nt){ d[k] ˜ dnorm(0,.0001) }
# ranking on relative scale
for (k in 1:nt) {
rk[k] <‐ nt+1‐rank(d[],k) # assumes higher HRQoL is “good”
#rk[k] <‐ rank(d[],k) # assumes higher outcome is “bad”
best[k] <‐ equals(rk[k],1) #calculate probability that treat k is best
for (h in 1:nt){ prob[h,k] <‐ equals(rk[k],h) } # calculates probability that treat k is h‐th best
}
} # *** PROGRAM ENDS

Continuous outcome (standardised mean difference) ‐ random‐effects model

# Normal likelihood, identity link
# Trial‐level data given as treatment differences
# Random effects model
model{ # *** PROGRAM STARTS
for(i in 1:ns2) { # LOOP THROUGH 2‐ARM STUDIES
y[i,2] ˜ dnorm(delta[i,2],prec[i,2]) # normal likelihood for 2‐arm trials
#Deviance contribution for trial i
resdev[i] <‐ (y[i,2]‐delta[i,2])*(y[i,2]‐delta[i,2])*prec[i,2]
}
for(i in (ns2+1):(ns2+ns3)) { # LOOP THROUGH THREE‐ARM STUDIES
for (k in 1:(na[i]‐1)) { # set variance‐covariance matrix
for (j in 1:(na[i]‐1)) {
Sigma[i,j,k] <‐ V[i]*(1‐equals(j,k)) + var[i,k+1]*equals(j,k)
}
}
Omega[i,1:(na[i]‐1),1:(na[i]‐1)] <‐ inverse(Sigma[i,,]) #Precision matrix
# multivariate normal likelihood for 3‐arm trials
y[i,2:na[i]] ˜ dmnorm(delta[i,2:na[i]],Omega[i,1:(na[i]‐1),1:(na[i]‐1)])
#Deviance contribution for trial i
for (k in 1:(na[i]‐1)){ # multiply vector & matrix
ydiff[i,k]<‐ y[i,(k+1)] ‐ delta[i,(k+1)]
z[i,k]<‐ inprod2(Omega[i,k,1:(na[i]‐1)], ydiff[i,1:(na[i]‐1)])
}
resdev[i]<‐ inprod2(ydiff[i,1:(na[i]‐1)], z[i,1:(na[i]‐1)])
}
for(i in 1:(ns2+ns3)){ # LOOP THROUGH ALL STUDIES
w[i,1] <‐ 0 # adjustment for multi‐arm trials is zero for control arm
delta[i,1] <‐ 0 # treatment effect is zero for control arm
for (k in 2:na[i]) { # LOOP THROUGH ARMS
var[i,k] <‐ pow(se[i,k],2) # calculate variances
prec[i,k] <‐ 1/var[i,k] # set precisions
}
for (k in 2:na[i]) { # LOOP THROUGH ARMS
# trial‐specific SMD distributions
delta[i,k] ˜ dnorm(md[i,k],taud[i,k])
# mean of random effects distributions, with multi‐arm trial correction
md[i,k] <‐ d[t[i,k]] ‐ d[t[i,1]] + sw[i,k]
# precision of random effects distributions (with multi‐arm trial correction)
taud[i,k] <‐ tau *2*(k‐1)/k
# adjustment, multi‐arm RCTs
w[i,k] <‐ (delta[i,k] ‐ d[t[i,k]] + d[t[i,1]])
# cumulative adjustment for multi‐arm trials
sw[i,k] <‐ sum(w[i,1:k‐1])/(k‐1)
}
}
totresdev <‐ sum(resdev[]) #Total Residual Deviance
d[1]<‐0 # treatment effect is zero for reference treatment
# vague priors for treatment effects
for (k in 2:nt){ d[k] ˜ dnorm(0,.0001) }
sd ˜ dunif(0,5) # vague prior for between‐trial SD
tau <‐ pow(sd,‐2) # between‐trial precision = (1/between‐trial variance)
# ranking on relative scale
for (k in 1:nt) {
rk[k] <‐ nt+1‐rank(d[],k) # assumes higher HRQoL is “good”
# rk[k] <‐ rank(d[],k) # assumes higher outcome is “bad”
best[k] <‐ equals(rk[k],1) #calculate probability that treat k is best
for (h in 1:nt){ prob[h,k] <‐ equals(rk[k],h) } # calculates probability that treat k is h‐th best
}
} # *** PROGRAM ENDS

Continuous outcome (standardised mean difference) ‐ inconsistency model (random‐effects)

# Normal likelihood, identity link
# Trial‐level data given as treatment differences
# Random effects model
model {
for(i in 1:ns) {
eff.study[i, t[i,1], t[i,1]] <‐0
for(k in 2:na[i]) {
eff.study[i,t[i,k],t[i,1]] <‐eff.des[design[k],t[i,k]] + RE[i,t[i,k]] ‐ RE[i, t[i,1]]
}
}
# Random effects for heterogeneity
for(i in 1:ns) {
RE[i,1] <‐0
RE[i,2:nt] ˜ dmnorm(zero[], Prec[,])
}
# Prec is the inverse of the structured heterogeneity matrix
for(i in 1:(nt‐1)) {
for(j in 1:(nt‐1)){
Prec[i,j] <‐2*(equals(i,j)‐1/nt)/(tau*tau)
}
}


for(i in 1:ns2) { # LOOP THROUGH 2‐ARM STUDIES
y[i,2] ˜ dnorm(delta[i,2],prec[i,2]) # normal likelihood for 2‐arm trials
#Deviance contribution for trial i
resdev[i] <‐ (y[i,2]‐delta[i,2])*(y[i,2]‐delta[i,2])*prec[i,2]
}
for(i in (ns2+1):(ns2+ns3)) { # LOOP THROUGH THREE‐ARM STUDIES
for (k in 1:(na[i]‐1)) { # set variance‐covariance matrix
for (j in 1:(na[i]‐1)) {
Sigma[i,j,k] <‐ V[i]*(1‐equals(j,k)) + var[i,k+1]*equals(j,k)
}
}
Omega[i,1:(na[i]‐1),1:(na[i]‐1)] <‐ inverse(Sigma[i,,]) #Precision matrix
# multivariate normal likelihood for 3‐arm trials
y[i,2:na[i]] ˜ dmnorm(delta[i,2:na[i]],Omega[i,1:(na[i]‐1),1:(na[i]‐1)])
#Deviance contribution for trial i
for (k in 1:(na[i]‐1)){ # multiply vector & matrix
ydiff[i,k]<‐ y[i,(k+1)] ‐ delta[i,(k+1)] + eff.study[i,t[i,k],t[i,1]]
z[i,k]<‐ inprod2(Omega[i,k,1:(na[i]‐1)], ydiff[i,1:(na[i]‐1)])
}
resdev[i]<‐ inprod2(ydiff[i,1:(na[i]‐1)], z[i,1:(na[i]‐1)])
}

for(i in 1:(ns2+ns3)){ # LOOP THROUGH ALL STUDIES
w[i,1] <‐ 0 # adjustment for multi‐arm trials is zero for control arm
delta[i,1] <‐ 0 # treatment effect is zero for control arm
for (k in 2:na[i]) { # LOOP THROUGH ARMS
var[i,k] <‐ pow(se[i,k],2) # calculate variances
prec[i,k] <‐ 1/var[i,k] # set precisions
}
for (k in 2:na[i]) { # LOOP THROUGH ARMS
# trial‐specific SMD distributions
delta[i,k] ˜ dnorm(md[i,k],taud[i,k])
# mean of random effects distributions, with multi‐arm trial correction
md[i,k] <‐ d[t[i,k]] ‐ d[t[i,1]] + sw[i,k]
# precision of random effects distributions (with multi‐arm trial correction)
taud[i,k] <‐ tau *2*(k‐1)/k
# adjustment, multi‐arm RCTs
w[i,k] <‐ (delta[i,k] ‐ d[t[i,k]] + d[t[i,1]])
# cumulative adjustment for multi‐arm trials
sw[i,k] <‐ sum(w[i,1:k‐1])/(k‐1)
}
}
totresdev <‐ sum(resdev[]) #Total Residual Deviance
d[1]<‐0 # treatment effect is zero for reference treatment
# vague priors for treatment effects
for (k in 2:nt){ d[k] ˜ dnorm(0,.0001) }
sd ˜ dunif(0,5) # vague prior for between‐trial SD
tau <‐ pow(sd,‐2) # between‐trial precision = (1/between‐trial variance)
for(i in 1:D) {
for(k in (offset.design[i] + 1):(offset.design[i] + num.ests[i])) {
eff.des[i,t[i,k]] ˜ dnorm(0,0.01)
}
}
} # *** PROGRAM ENDS

Count outcome

Count outcome ‐ fixed‐effect model

# Poisson likelihood, log link
# Fixed effects model
model{ # *** PROGRAM STARTS
for(i in 1:ns){ # LOOP THROUGH STUDIES
mu[i] ˜ dnorm(0,.0001) # vague priors for all trial baselines
for (k in 1:na[i]) { # LOOP THROUGH ARMS
r[i,k] ˜ dpois(theta[i,k]) # Poisson likelihood
theta[i,k] <‐ lambda[i,k]*E[i,k] # failure rate * exposure
# model for linear predictor
log(lambda[i,k]) <‐ mu[i] + d[t[i,k]] ‐ d[t[i,1]]
#Deviance contribution
dev[i,k] <‐ 2*((theta[i,k]‐r[i,k]) + r[i,k]*log(r[i,k]/theta[i,k])) }
# summed residual deviance contribution for this trial
resdev[i] <‐ sum(dev[i,1:na[i]])
}
totresdev <‐ sum(resdev[]) #Total Residual Deviance
d[1]<‐0 # treatment effect is zero for reference treatment
# vague priors for treatment effects
for (k in 2:nt){ d[k] ˜ dnorm(0,.0001) }

# pairwise RRs and LRRs for all possible pair‐wise comparisons, if nt>2
for (c in 1:(nt‐1)) {
for (k in (c+1):nt) {
rater[c,k] <‐ exp(d[k] ‐ d[c])
lrater[c,k] <‐ (d[k]‐d[c])
}
}
# ranking on relative scale
for (k in 1:nt) {
# rk[k] <‐ nt+1‐rank(d[],k) # assumes events are “good”
rk[k] <‐ rank(d[],k) # assumes events are “bad”
best[k] <‐ equals(rk[k],1) #calculate probability that treat k is best
for (h in 1:nt){ prob[h,k] <‐ equals(rk[k],h) } # calculates probability that treat k is h‐th best
}
} # *** PROGRAM ENDS

Count outcome ‐ random‐effects model

# Poisson likelihood, log link
# Random effects model
model{ # *** PROGRAM STARTS
for(i in 1:ns){ # LOOP THROUGH STUDIES
w[i,1] <‐ 0 # adjustment for multi‐arm trials is zero for control arm
delta[i,1] <‐ 0 # treatment effect is zero for control arm
mu[i] ˜ dnorm(0,.0001) # vague priors for all trial baselines
for (k in 1:na[i]) { # LOOP THROUGH ARMS
r[i,k] ˜ dpois(theta[i,k]) # Poisson likelihood
theta[i,k] <‐ lambda[i,k]*E[i,k] # failure rate * exposure
# model for linear predictor
log(lambda[i,k]) <‐ mu[i] + d[t[i,k]] ‐ d[t[i,1]]
#Deviance contribution
dev[i,k] <‐ 2*((theta[i,k]‐r[i,k]) + r[i,k]*log(r[i,k]/theta[i,k])) }
# summed residual deviance contribution for this trial
resdev[i] <‐ sum(dev[i,1:na[i]])
for (k in 2:na[i]) { # LOOP THROUGH ARMS
# trial‐specific log rate ratio (LRR) distributions
delta[i,k] ˜ dnorm(md[i,k],taud[i,k])
# mean of LRR distributions (with multi‐arm trial correction)
md[i,k] <‐ d[t[i,k]] ‐ d[t[i,1]] + sw[i,k]
# precision of LRR distributions (with multi‐arm trial correction)
taud[i,k] <‐ tau *2*(k‐1)/k
# adjustment for multi‐arm RCTs
w[i,k] <‐ (delta[i,k] ‐ d[t[i,k]] + d[t[i,1]])
# cumulative adjustment for multi‐arm trials
sw[i,k] <‐ sum(w[i,1:k‐1])/(k‐1)
}
}
totresdev <‐ sum(resdev[]) # Total Residual Deviance
d[1]<‐0 # treatment effect is zero for reference treatment
# vague priors for treatment effects
for (k in 2:nt){ d[k] ˜ dnorm(0,.0001) }
sd ˜ dunif(0,5) # vague prior for between‐trial SD
tau <‐ pow(sd,‐2) # between‐trial precision = (1/between‐trial variance)

# pairwise rate ratios (RRs) and log rate ratios (LRRs) for all possible pair‐wise comparisons, if nt>2
for (c in 1:(nt‐1)) {
for (k in (c+1):nt) {
rater[c,k] <‐ exp(d[k] ‐ d[c])
lrater[c,k] <‐ (d[k]‐d[c])
}
}
# ranking on relative scale
for (k in 1:nt) {
# rk[k] <‐ nt+1‐rank(d[],k) # assumes events are “good”
rk[k] <‐ rank(d[],k) # assumes events are “bad”
best[k] <‐ equals(rk[k],1) #calculate probability that treat k is best
for (h in 1:nt){ prob[h,k] <‐ equals(rk[k],h) } # calculates probability that treat k is h‐th best
}

} # *** PROGRAM ENDS

Count outcome ‐ inconsistency model (random‐effects)

# Poisson likelihood, log link, inconsistency model
# Random effects model
# Treatment by design interactions
# ns = number of studies, nt = number of treatments, A = total number of treatment arms in all trials, and D = number of designs; these must be supplied as data.
# The main data are arranged with one record per arm: d and study indicate which design and study that arm belongs to, t indicates its treatment, and b indicates the first treatment in that design. r and E are the number of events and the exposure (e.g. person‐time) in the arm. The supplementary data offset and offset.design list the rows in which the first arm of each trial and of each design is found.
model {
for(i in 1:ns) {
eff.study[i, b[offset[i]], b[offset[i]]] <‐0
for(k in (offset[i] + 1):(offset[i + 1]‐1)) {
eff.study[i,t[k],b[k]] <‐eff.des[d[k],t[k]] + RE[i,t[k]] ‐ RE[i,b[k]]
}
}
# Random effects for heterogeneity
for(i in 1:ns) {
RE[i,1] <‐0
RE[i,2:nt] ˜ dmnorm(zero[], Prec[,])
}
# Prec is the inverse of the structured heterogeneity matrix
for(i in 1:(nt‐1)) {
for(j in 1:(nt‐1)){
Prec[i,j] <‐2*(equals(i,j)‐1/nt)/(tau*tau)
}
}
for(i in 1:A) {

r[i] ˜ dpois(theta[i]) # Poisson likelihood
theta[i] <‐ lambda[i]*E[i] # failure rate * exposure

log(lambda[i]) <‐mu[study[i]] + eff.study[study[i],t[i],b[i]] # model for linear predictor
}
# For computing DIC
for(i in 1:A) {

dev[i] <‐ 2*((theta[i]‐r[i]) + r[i]*log(r[i]/theta[i]))
}
devs <‐sum(dev[])
# Priors
for(i in 1:ns) {
mu[i] ˜ dnorm(0,0.01)
}
tau ˜ dunif(0,2)
for(i in 1:D) {
for(k in (offset.design[i] + 1):(offset.design[i] + num.ests[i])) {
eff.des[i,t[k]] ˜ dnorm(0,0.01)
}
}
} # *** PROGRAM ENDS

Time‐to‐event outcome

Time‐to‐event outcome ‐ fixed‐effect model

# Binomial likelihood, cloglog link
# Fixed effects model
model{ # *** PROGRAM STARTS
for(i in 1:ns){ # LOOP THROUGH STUDIES
mu[i] ˜ dnorm(0,.0001) # vague priors for all trial baselines
for (k in 1:na[i]) { # LOOP THROUGH ARMS
r[i,k] ˜ dbin(p[i,k],n[i,k]) # Binomial likelihood
# model for linear predictor
cloglog(p[i,k]) <‐ log(time[i]) + mu[i] + d[t[i,k]] ‐ d[t[i,1]]
rhat[i,k] <‐ p[i,k] * n[i,k] # expected value of the numerators
#Deviance contribution
dev[i,k] <‐ 2 * (r[i,k] * (log(r[i,k])‐log(rhat[i,k]))
+ (n[i,k]‐r[i,k]) * (log(n[i,k]‐r[i,k]) ‐ log(n[i,k]‐rhat[i,k]))) }
# summed residual deviance contribution for this trial
resdev[i] <‐ sum(dev[i,1:na[i]])
}
totresdev <‐ sum(resdev[]) #Total Residual Deviance
d[1]<‐0 # treatment effect is zero for control arm
# vague priors for treatment effects
for (k in 2:nt){ d[k] ˜ dnorm(0,.0001) }
# ranking on relative scale
for (k in 1:nt) {
# rk[k] <‐ rank(d[],k) # assumes lower is better
rk[k] <‐ nt+1‐rank(d[],k) # assumes lower outcome is worse
best[k] <‐ equals(rk[k],1) #calculate probability that treat k is best
for (h in 1:nt){ prob[h,k] <‐ equals(rk[k],h) } # calculates probability that treat k is h‐th best
}
} # *** PROGRAM ENDS

Time‐to‐event outcome ‐ random‐effects model

# Binomial likelihood, cloglog link
# Random effects model
model{ # *** PROGRAM STARTS
for(i in 1:ns){ # LOOP THROUGH STUDIES
w[i,1] <‐ 0 # adjustment for multi‐arm trials is zero for control arm
delta[i,1] <‐ 0 # treatment effect is zero for control arm
mu[i] ˜ dnorm(0,.0001) # vague priors for all trial baselines
for (k in 1:na[i]) { # LOOP THROUGH ARMS
r[i,k] ˜ dbin(p[i,k],n[i,k]) # Binomial likelihood
# model for linear predictor
cloglog(p[i,k]) <‐ log(time[i]) + mu[i] + delta[i,k]
rhat[i,k] <‐ p[i,k] * n[i,k] # expected value of the numerators
#Deviance contribution
dev[i,k] <‐ 2 * (r[i,k] * (log(r[i,k])‐log(rhat[i,k]))
+ (n[i,k]‐r[i,k]) * (log(n[i,k]‐r[i,k]) ‐ log(n[i,k]‐rhat[i,k]))) }
# summed residual deviance contribution for this trial
resdev[i] <‐ sum(dev[i,1:na[i]])
for (k in 2:na[i]) { # LOOP THROUGH ARMS
# trial‐specific log hazard ratio (LHR) distributions
delta[i,k] ˜ dnorm(md[i,k],taud[i,k])
# mean of LHR distributions, with multi‐arm trial correction
md[i,k] <‐ d[t[i,k]] ‐ d[t[i,1]] + sw[i,k]
# precision of LHR distributions (with multi‐arm trial correction)
taud[i,k] <‐ tau *2*(k‐1)/k
# adjustment, multi‐arm RCTs
w[i,k] <‐ (delta[i,k] ‐ d[t[i,k]] + d[t[i,1]])
# cumulative adjustment for multi‐arm trials
sw[i,k] <‐ sum(w[i,1:k‐1])/(k‐1)
}
}
totresdev <‐ sum(resdev[]) #Total Residual Deviance
d[1]<‐0 # treatment effect is zero for reference treatment
# vague priors for treatment effects
for (k in 2:nt){ d[k] ˜ dnorm(0,.0001) }
sd ˜ dunif(0,5) # vague prior for between‐trial SD
tau <‐ pow(sd,‐2) # between‐trial precision = (1/between‐trial variance)
# ranking on relative scale
for (k in 1:nt) {
# rk[k] <‐ rank(d[],k) # assumes lower is better
rk[k] <‐ nt+1‐rank(d[],k) # assumes lower outcome is worse
best[k] <‐ equals(rk[k],1) #calculate probability that treat k is best
for (h in 1:nt){ prob[h,k] <‐ equals(rk[k],h) } # calculates probability that treat k is h‐th best
}
} # *** PROGRAM ENDS

Time‐to‐event outcome ‐ inconsistency model (random‐effects)

# Binomial likelihood, cloglog link, inconsistency model
# Random effects model
# Treatment by design interactions
# ns = number of studies, nt = number of treatments, A = total number of treatment arms in all trials, and D = number of designs; these must be supplied as data.
# The main data are arranged with one record per arm: d and study indicate which design and study that arm belongs to, t indicates its treatment, and b indicates the first treatment in that design. r, n, and time are the number of events, the number of individuals, and the follow‐up time in the arm. The supplementary data offset and offset.design list the rows in which the first arm of each trial and of each design is found.
model {
for(i in 1:ns) {
eff.study[i, b[offset[i]], b[offset[i]]] <‐0
for(k in (offset[i] + 1):(offset[i + 1]‐1)) {
eff.study[i,t[k],b[k]] <‐eff.des[d[k],t[k]] + RE[i,t[k]] ‐ RE[i,b[k]]
}
}
# Random effects for heterogeneity
for(i in 1:ns) {
RE[i,1] <‐0
RE[i,2:nt] ˜ dmnorm(zero[], Prec[,])
}
# Prec is the inverse of the structured heterogeneity matrix
for(i in 1:(nt‐1)) {
for(j in 1:(nt‐1)){
Prec[i,j] <‐2*(equals(i,j)‐1/nt)/(tau*tau)
}
}
for(i in 1:A) {
r[i] ˜ dbin(p[i],n[i]) # Binomial likelihood
cloglog(p[i]) <‐ log(time[i]) + mu[study[i]] + eff.study[study[i],t[i],b[i]] # model for linear predictor
}
# For computing DIC
for(i in 1:A) {
rhat[i] <‐ p[i] * n[i] # expected value of the numerators
dev[i] <‐ 2 * (r[i] * (log(r[i])‐log(rhat[i]))+ (n[i]‐r[i]) * (log(n[i]‐r[i]) ‐ log(n[i]‐rhat[i])))
}
devs <‐sum(dev[])
# Priors
for(i in 1:ns) {
mu[i] ˜ dnorm(0,0.01)
}
tau ˜ dunif(0,2)
for(i in 1:D) {
for(k in (offset.design[i] + 1):(offset.design[i] + num.ests[i])) {
eff.des[i,t[k]] ˜ dnorm(0,0.01)
}
}
} # *** PROGRAM ENDS

Appendix 10. Technical details of network meta‐analysis

The posterior estimates of the treatment contrasts (i.e. log odds ratio, mean difference, standardised mean difference, rate ratio, or hazard ratio) may vary depending upon the initial values used to start the simulations. In order to control for random error due to the choice of initial values, we will run the network meta‐analysis with three chains, each starting from a different set of initial values, as per the guidance in the National Institute for Health and Care Excellence (NICE) Decision Support Unit (DSU) documents (Dias 2013). If the results from the three chains are similar (i.e. the chains converge), the results are considered reliable. It is important to discard the results of the initial simulations, which can be strongly affected by the choice of initial values, and to include only the simulations obtained after convergence; discarding the initial simulations is called 'burn‐in'. For all outcomes we will run the models for 30,000 burn‐in simulations in each of the three chains and then for a further 100,000 simulations to obtain the effect estimates. We will obtain the effect estimates from the results of all three chains and will check that the three chains give similar results, in addition to visually inspecting convergence at the end of the burn‐in, in order to control for random error due to the choice of initial values.
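As a minimal sketch (values invented purely for illustration), three sets of initial values for the binary random‐effects model with nt = 3 treatments and ns = 3 studies might look as follows; d[1] is fixed at zero in the model and is therefore given as NA, and initial values for the remaining stochastic quantities (e.g. delta) can be generated by WinBUGS:

# chain 1
list(d = c(NA, 0, 0), sd = 1, mu = c(0, 0, 0))
# chain 2
list(d = c(NA, -1, 1), sd = 0.5, mu = c(1, -1, 0))
# chain 3
list(d = c(NA, 1, -1), sd = 2, mu = c(-1, 1, 1))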

We will run three different models for each outcome. The fixed‐effect model assumes that the treatment effect is the same across studies. The random‐effects consistency model assumes that the treatment effects are normally distributed across studies and that the transitivity assumption is satisfied (i.e. the populations studied, the definitions of the outcomes, and the methods used were similar across studies, so that the direct and indirect comparisons are consistent). The random‐effects inconsistency model does not make the transitivity assumption. If the inconsistency model results in a better model fit than the consistency model, the results of the network meta‐analysis may be unreliable and should be interpreted with extreme caution. If there is evidence of inconsistency, we will identify areas of the network where substantial inconsistency might be present, in terms of clinical and methodological diversity between trials, and, when appropriate, limit the network meta‐analysis to a more compatible subset of trials.

The choice between the fixed‐effect and random‐effects models will be based on model fit, as per the guidance in the NICE DSU documents (Dias 2013). Model fit will be assessed using the residual deviance and the deviance information criterion (DIC) (Dias 2013). A difference of less than three to five in the DIC is generally not considered important (Dias 2012c). We will use the simpler model, i.e. the fixed‐effect model, if the DIC is similar between the fixed‐effect and random‐effects models. We will use the random‐effects model if it results in a better model fit, as indicated by a DIC at least three lower than that of the fixed‐effect model.
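For example (hypothetical numbers), if the fixed‐effect model gave a DIC of 120 and the random‐effects model a DIC of 112, the difference of 8 would favour the random‐effects model; a difference of only 1 or 2 would lead us to retain the simpler fixed‐effect model.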

We will calculate the effect estimates for the treatment comparisons and their 95% credible intervals using the following additional code.
# pairwise ORs and MD for all possible pair‐wise comparisons, if nt>2
for (c in 1:(nt‐1)) {
for (k in (c+1):nt) {
OR[c,k] <‐ exp(d[k] ‐ d[c])
#MD[c,k] <‐ (d[k]‐d[c])
}
}

where c indicates the control (comparator) treatment, k indicates the intervention treatment, OR indicates the odds ratio (or other ratio measures), and MD indicates the mean difference (or other difference measures).
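In WinBUGS, the 95% credible interval for each comparison is then obtained from the 2.5% and 97.5% posterior quantiles of the monitored node (e.g. OR[c,k] or MD[c,k]) in the sample statistics.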

Appendix 11. WinBUGS code for subgroup analysis

Categorical covariate

Only the code for the random‐effects model for a binary outcome is shown; the lines that differ from the standard random‐effects code are those involving the covariate (the beta and x terms). Similar changes will be made for the other outcomes.

# Binomial likelihood, logit link, subgroup
# Random effects model for multi‐arm trials
model{ # *** PROGRAM STARTS
for(i in 1:ns){ # LOOP THROUGH STUDIES
w[i,1] <‐ 0 # adjustment for multi‐arm trials is zero for control arm
delta[i,1] <‐ 0 # treatment effect is zero for control arm
mu[i] ˜ dnorm(0,.0001) # vague priors for all trial baselines
for (k in 1:na[i]) { # LOOP THROUGH ARMS
r[i,k] ˜ dbin(p[i,k],n[i,k]) # binomial likelihood
# model for linear predictor, covariate effect relative to treat in arm 1
logit(p[i,k]) <‐ mu[i] + delta[i,k] + (beta[t[i,k]]‐beta[t[i,1]]) * x[i]
rhat[i,k] <‐ p[i,k] * n[i,k] # expected value of the numerators
#Deviance contribution
dev[i,k] <‐ 2 * (r[i,k] * (log(r[i,k])‐log(rhat[i,k]))
+ (n[i,k]‐r[i,k]) * (log(n[i,k]‐r[i,k]) ‐ log(n[i,k]‐rhat[i,k]))) }
# summed residual deviance contribution for this trial
resdev[i] <‐ sum(dev[i,1:na[i]])
for (k in 2:na[i]) { # LOOP THROUGH ARMS
# trial‐specific LOR distributions
delta[i,k] ˜ dnorm(md[i,k],taud[i,k])
# mean of LOR distributions (with multi‐arm trial correction)
md[i,k] <‐ d[t[i,k]] ‐ d[t[i,1]] + sw[i,k]
# precision of LOR distributions (with multi‐arm trial correction)
taud[i,k] <‐ tau *2*(k‐1)/k
# adjustment for multi‐arm RCTs
w[i,k] <‐ (delta[i,k] ‐ d[t[i,k]] + d[t[i,1]])
# cumulative adjustment for multi‐arm trials
sw[i,k] <‐ sum(w[i,1:k‐1])/(k‐1)
}
}
totresdev <‐ sum(resdev[]) # Total Residual Deviance
d[1]<‐0 # treatment effect is zero for reference treatment
beta[1] <‐ 0 # covariate effect is zero for reference treatment
for (k in 2:nt){ # LOOP THROUGH TREATMENTS
d[k] ˜ dnorm(0,.0001) # vague priors for treatment effects
beta[k] <‐ B[k] # independent, treatment‐specific covariate effect
B[k] ˜ dnorm(0,.0001) # vague prior for covariate effect
}
sd ˜ dunif(0,5) # vague prior for between‐trial SD
tau <‐ pow(sd,‐2) # between‐trial precision = (1/between‐trial variance)
# treatment effect when covariate = z[j]
for (k in 1:nt){ # LOOP THROUGH TREATMENTS
for (j in 1:nz) { dz[j,k] <‐ d[k] + (beta[k]‐beta[1])*z[j] }
}
} # *** PROGRAM ENDS
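As a minimal sketch of the additional data items this subgroup model expects (all values invented purely for illustration): x[i] is the study‐level covariate (here a 0/1 subgroup indicator) and z[1], ..., z[nz] are the covariate values at which the subgroup‐specific treatment effects dz[j,k] are reported (WinBUGS reads the .Data values row by row).

# hypothetical data for the subgroup model (binary outcome, 4 studies, 3 treatments)
list(ns = 4, nt = 3, nz = 2,
x = c(0, 1, 1, 0),
z = c(0, 1),
na = c(2, 2, 3, 2),
t = structure(.Data = c(1,2,NA, 1,3,NA, 1,2,3, 2,3,NA), .Dim = c(4,3)),
r = structure(.Data = c(4,2,NA, 6,3,NA, 5,4,3, 7,6,NA), .Dim = c(4,3)),
n = structure(.Data = c(40,40,NA, 50,50,NA, 60,60,60, 45,45,NA), .Dim = c(4,3)))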

Continuous covariate

# Binomial likelihood, logit link, continuous covariate
# Random effects model for multi‐arm trials
model{ # *** PROGRAM STARTS
for(i in 1:ns){ # LOOP THROUGH STUDIES
w[i,1] <‐ 0 # adjustment for multi‐arm trials is zero for control arm
delta[i,1] <‐ 0 # treatment effect is zero for control arm
mu[i] ˜ dnorm(0,.0001) # vague priors for all trial baselines
for (k in 1:na[i]) { # LOOP THROUGH ARMS
r[i,k] ˜ dbin(p[i,k],n[i,k]) # binomial likelihood
# model for linear predictor, covariate effect relative to treat in arm 1
logit(p[i,k]) <‐ mu[i] + delta[i,k] + (beta[t[i,k]]‐beta[t[i,1]]) * (x[i]‐mx)
rhat[i,k] <‐ p[i,k] * n[i,k] # expected value of the numerators
#Deviance contribution
dev[i,k] <‐ 2 * (r[i,k] * (log(r[i,k])‐log(rhat[i,k]))
+ (n[i,k]‐r[i,k]) * (log(n[i,k]‐r[i,k]) ‐ log(n[i,k]‐rhat[i,k]))) }
# summed residual deviance contribution for this trial
resdev[i] <‐ sum(dev[i,1:na[i]])
for (k in 2:na[i]) { # LOOP THROUGH ARMS
# trial‐specific LOR distributions
delta[i,k] ˜ dnorm(md[i,k],taud[i,k])
# mean of LOR distributions (with multi‐arm trial correction)
md[i,k] <‐ d[t[i,k]] ‐ d[t[i,1]] + sw[i,k]
# precision of LOR distributions (with multi‐arm trial correction)
taud[i,k] <‐ tau *2*(k‐1)/k
# adjustment for multi‐arm RCTs
w[i,k] <‐ (delta[i,k] ‐ d[t[i,k]] + d[t[i,1]])
# cumulative adjustment for multi‐arm trials
sw[i,k] <‐ sum(w[i,1:k‐1])/(k‐1)
}
}
totresdev <‐ sum(resdev[]) # Total Residual Deviance
d[1]<‐0 # treatment effect is zero for reference treatment
beta[1] <‐ 0 # covariate effect is zero for reference treatment
for (k in 2:nt){ # LOOP THROUGH TREATMENTS
d[k] ˜ dnorm(0,.0001) # vague priors for treatment effects
beta[k] <‐ B[k] # independent, treatment‐specific covariate effect
B[k] ˜ dnorm(0,.0001) # vague prior for covariate effect
}
sd ˜ dunif(0,5) # vague prior for between‐trial SD
tau <‐ pow(sd,‐2) # between‐trial precision = (1/between‐trial variance)
# treatment effect when covariate = z[j] (un‐centring treatment effects)
for (k in 1:nt){
for (j in 1:nz) { dz[j,k] <‐ d[k] ‐ (beta[k]‐beta[1])*(mx‐z[j]) }
}
# pairwise ORs and LORs for all possible pair‐wise comparisons, if nt>2
for (c in 1:(nt‐1)) {
for (k in (c+1):nt) {
# at mean value of covariate
or[c,k] <‐ exp(d[k] ‐ d[c])
lor[c,k] <‐ (d[k]‐d[c])
# at covariate=z[j]
for (j in 1:nz) {
orz[j,c,k] <‐ exp(dz[j,k] ‐ dz[j,c])
lorz[j,c,k] <‐ (dz[j,k]‐dz[j,c])
}
}
}
} # *** PROGRAM ENDS