33rd Annual Conference
of the
International Society for Clinical Biostatistics

Programme & abstract book

19-23 August 2012 – Bergen, Norway
www.iscb2012.info
Welcome to Bergen 2012
We are delighted to invite you to the 33rd Annual Conference of the ISCB, held for the first time in Norway. Clinical biostatistics is a field that is crucial to medical and health-related research, providing the quality knowledge needed as a basis for treatment and prevention, etiology and health care. The conference is meant to build a bridge between researchers in the medical and health-related fields and developers of new relevant statistical methodology and software.
The Scientific Programme Committee has set up a broad range of interesting topics in both the invited and contributed sessions. The traditional mini-symposia on Thursday 23 August, which can be attended separately, are devoted to register-based epidemiology and to novel statistical approaches used in post-marketing safety surveillance systems, giving the conference a Nordic flavour as well as a global perspective.
The conference is also a unique opportunity to experience Norwegian nature and culture, be it before, during, or after the official programme. We hope you find the programme attractive and will take this opportunity to participate and visit Bergen, Norway, in August 2012.
Mark your calendar now!
Geir Egil Eide
Chair, Local Organising Committee

Odd O. Aalen
Chair, Scientific Programme Committee

ISBN: 978-82-8045-026-5
Content
Welcome to Bergen 2012
Content
International Society for Clinical Biostatistics (ISCB)
Programme Overview
Scientific Programme
Abstracts – Oral Presentations
  Sunday, 19 August – Pre-Conference Courses
  Monday, 20 August
    Morning sessions (IP1, I1, C1-C4)
    Afternoon sessions (IP2, I2, C5-C13)
  Tuesday, 21 August
    Morning sessions (I3-I4, C14-C21)
  Wednesday, 22 August
    Morning sessions (IP3, I5, C22-C25)
    Afternoon sessions (I6, C26-C34)
  Thursday, 23 August – Mini-symposia
Abstracts – Posters
Author's Index
Information for Presenters
Statistics in Medicine Special Issue
ISCB Awards
Acknowledgements
Conference Venue
General Information
Vocabulary – Ordbok
Social Events
Map of Bergen
Posters and Exhibitors Placement
Plan Grieghallen
International Society for Clinical Biostatistics (ISCB)
The International Society for Clinical Biostatistics (ISCB) was founded in 1978 to stimulate research into the principles and
methodology used in the design and analysis of clinical research and to increase the relevance of statistical theory to the real world
of clinical medicine.
Membership is open to all interested individuals who share the Aims of the Society. ISCB's membership includes clinicians, statisticians and members of other disciplines, such as epidemiologists, clinical chemists and clinical pharmacologists, working or interested in the field of clinical biostatistics.
President:
Harbajan Chadha-Boreham (Switzerland)
Vice-president:
Koos Zwinderman (The Netherlands)
Secretary:
David W. Warne (Switzerland)
Treasurer:
KyungMann Kim (USA)
Webmaster:
Ingrid Sofie Harboe (Denmark)
Ordinary members:
Michal Abrahamowicz (Canada)
Lucinda Billingham (UK)
Krisztina Boda (Hungary)
Tomasz Burzykowski (Belgium)
Lutz Edler (Germany)
Catherine Legrand (Belgium)
Saskia le Cessie (The Netherlands)
Giota Touloumi (Greece)
Zdeněk Valenta (Czech Republic)
Scientific Programme Committee
Chair:
Odd O. Aalen (Norway)
Members:
Per Kragh Andersen (Denmark)
Jan Beyersmann (Germany)
Ørnulf Borgan (Norway)
Michael Campbell (UK)
David Clayton (UK)
Daniel Commenges (France)
Vanessa Didelez (UK)
Clelia Di Serio (Italy)
Jan Terje Kvaløy (Norway)
Sophia Rabe-Hesketh (USA)
Marie Reilly (Sweden)
Terry Therneau (USA)
Stein Emil Vollset (Norway)
Local Organising Committee
Chair:
Geir Egil Eide (Centre for Clinical Research, Bergen)
Vice-chairman:
Stein Atle Lie (Uni Health, Bergen)
Secretary:
Anne Marie Fenstad (Arthroplasty Registry, Bergen)
Treasurer:
Jan Harald Aarseth (MS Registry, Bergen)
Scientific coordinator:
Jan Terje Kvaløy (University of Stavanger, Stavanger)
Course coordinator:
Tore Wentzel-Larsen (RBUP/NKVTS, Oslo)
Symposium coordinator:
Roy Miodini Nilsen (Centre for Clinical Research, Bergen)
Social program coordinator:
Ågot Irgens (Dep. of Occupational Medicine, Bergen)
Sponsorship acquisition:
Stein Atle Lie (Uni Health, Bergen)
Milada Cvancarova Småstuen (Cancer Registry, Oslo)
Webmaster/editor:
Jörg Aßmus (Centre for Clinical Research, Bergen)
Associate member:
Ivar Heuch (Dep. of Mathematics, University of Bergen)
Mini-symposium Committee
Members:
Stein Emil Vollset (University of Bergen, Bergen)
Rolv A. Skjærven (University of Bergen, Bergen)
Tone Bjørge (University of Bergen, Bergen)
Marjolein Iversen (Bergen University College, Bergen)
Roy Miodini Nilsen (Centre for Clinical Research, Bergen)
Congress Secretariat
Kongress & Kultur AS
Torgalmenningen 1A
Postboks 947 Sentrum
N-5808 Bergen
Phone: +47 55 55 36 55
Fax: +47 55 55 36 56
e-mail: mail@kongress.no
Programme Overview
Sunday 19 August (registration 08:00-21:00)
Morning: Pre-conference courses 1, 2, 3
Lunch
Afternoon: Pre-conference courses 1, 4, 5 (with refreshment break)
19:00-20:30: Welcome reception

Monday 20 August (registration 07:30-16:30)
08:30-09:00: Welcome to ISCB33
09:00-10:00: Plenary session: Stephen Senn (IP1)
Refreshments
10:30-12:00: Invited session: Infectious diseases (I1); Contributed sessions (C1-C4)
Lunch
13:30-15:00: Invited session: Evaluating hospital performance (I2); Contributed sessions (C5-C8)
Refreshments
15:30-16:30: Plenary session: Terry Speed (IP2)
Break
16:45-18:00: Contributed sessions (C9-C13)
Poster session (all day)

Tuesday 21 August (registration 07:30-13:00)
08:30-10:00: Invited session: Functional data analysis (I3); Contributed sessions (C14-C17)
10:00-11:00: Poster session and refreshments
11:00-12:30: Invited session: Extensions to epidemiological designs (I4); Contributed sessions (C18-C21)
Lunch
Afternoon: Conference excursions
Poster session (all day)
Wednesday 22 August (registration 07:30-16:30)
08:30-10:00: Invited session: Causal inference (I5); Contributed sessions (C22-C25)
Refreshments
10:30-11:30: President's Invited Speaker: Deborah Ashby (IP3)
11:30-13:30: Annual General Meeting (AGM) and lunch
13:30-15:00: Invited session: Genomics and systems biology (I6); Contributed sessions (C26-C29)
Refreshments
15:30-17:00: Contributed sessions (C30-C34)
19:00-23:00: Conference dinner
Poster session (all day)

Thursday 23 August (registration 08:00-13:00)
09:00-13:00: Mini-symposium: Register-based epidemiology (MS1); Mini-symposium: Statistics in vaccine research (MS2), with mid-morning refreshment break
Closure of ISCB33
Scientific Programme
SUNDAY, 19 AUGUST 2012
Pre-conference courses
Course 1  09:00-17:30  Småtroll
Michael Nothnagel and Bettina Kulle Andreassen: Analysis of rare genetic variants in common diseases

Course 2  09:00-12:45  Troldtog
Thiago Guerra Martins: Bayesian computing with INLA

Course 3  09:00-12:45  Klokkeklang
Miguel Hernán: Estimating treatment effects using longitudinal data

Course 4  13:45-17:30  Klokkeklang
Philip Hougaard: Analysis of interval-censored survival data

Course 5  13:45-17:30  Troldtog
Hélène Jacqmin-Gadda and Cécile Proust-Lima: Latent class mixed models for longitudinal data and time-to-event data
MONDAY, 20 AUGUST 2012

Peer Gynt  08:30-10:00
Welcome and Opening of the Conference, Opening Plenary Session
Chair: Geir Egil Eide (LOC Chair)
08:30-09:00  Welcome and opening: LOC Chair, ISCB President, Director Haukeland University Hospital
IP1  09:00-10:00  Plenary session. Chair: Odd O. Aalen (SPC Chair)
Stephen Senn: Concurrent control: key or sacred cow?

I1  Klokkeklang  10:30-12:00
Invited session: Modeling infectious disease
Chair: Birgitte de Blasio
I1.1  10:30-11:00  Gianpaolo Scalia Tomba: Modeling disease spread for insight, hindsight, foresight...
I1.2  11:00-11:30  Michiel van Boven: Estimation of vaccine efficacy in a disease
I1.3  11:30-12:00  Christopher Fraser: Epidemiological and evolutionary dynamics of HIV-1 virulence

C1  Peer Gynt  10:30-12:00
Contributed session: Causal inference I
Chair: Miguel Hernán
C1.1  10:30-10:48  Vanessa Didelez: Covariates and Confounding in Instrumental Variable Analyses
C1.2  10:48-11:06  Kjetil Røysland: Natural effect propagation
C1.3  11:06-11:24  Machteld Varewyck: A comparison of statistical methods for benchmarking clinical centers in terms of quality of care
C1.4  11:24-11:42  Saskia le Cessie: Comparing population effects of different intervention policies, using a combination of inverse probability weighting and G-computation
C1.5  11:42-12:00  Marinus J. C. Eijkemans: Implementation of G-computation with complex longitudinal data

C2  Troldtog  10:30-12:00
Contributed session: Adaptive clinical trials
Chair: KyungMann Kim
C2.1  10:30-10:48  Tim Friede: Treatment Selection in Seamless Phase II/III Trials Incorporating Information on Short-term Endpoints
C2.2  10:48-11:06  Emmanuel Lesaffre: Comparative Bayesian escalation designs
C2.3  11:06-11:24  Karen Pye: A Bayesian Approach to Dose-Finding Studies for Cancer Therapies: Incorporating Later Cycles of Therapy
C2.4  11:24-11:42  J. Jack Lee: Bayesian Outcome-Adaptive Randomization in Clinical Trials
C2.5  11:42-12:00  Adelaide Doussau: Phase I dose finding methods using longitudinal data and proportional odds model in oncology

C3  Gjendine  10:30-12:00
Contributed session: Bioinformatics
Chair: Tomasz Burzykowski
C3.1  10:30-10:48  Shu Mei Teo: Challenges associated with detecting copy number variants using depth of coverage with next-generation sequencing technology
C3.2  10:48-11:06  Stefanie Hieke: Integration of multiple genome wide data sets in clinical risk prediction models
C3.3  11:06-11:24  Veronika Rockova: Incorporation of Prior Biological Knowledge in Bayesian Variable Selection of Genomic Features
C3.4  11:24-11:42  Nuala A. Sheehan: Participant Identification in Genetic Association Studies: Methods and Practical Implications
C3.5  11:42-12:00  Bart Van Rompaye: Variant detection in D pooled DNA samples

C4  Småtroll  10:30-12:00
Contributed session: Multistate models
Chair: Zdenek Valenta
C4.1  10:30-10:48  Bendix Carstensen: Multistate models with multiple time scales
C4.2  10:48-11:06  Jiri Jarkovsky: Risk factors of rehospitalisation and death for acute heart failure using multistate survival models
C4.3  11:06-11:24  Biswabrata Pradhan: Semi-parametric Estimation of Quality Adjusted Lifetime Distribution in Semi-Markov Illness-Death Models
C4.4  11:24-11:42  Micha Mandel: Estimating time to disease progression comparing transition models and survival methods
C4.5  11:42-12:00  Liesbeth C. de Wreede: Modelling Graft-versus-Host-Disease: statistical approaches incorporating clinical aspects

I2  Klokkeklang  13:30-15:00
Invited session: Evaluating hospital performance
Chair: Michael J. Campbell
I2.1  13:30-14:00  Michael J. Campbell: Developing a summary hospital mortality index: how can we compare hospitals? A retrospective analysis of all English hospitals over 5 years
I2.2  14:00-14:30  Hayley Jones: Some statistical issues in identifying 'unusual' healthcare providers: multiple testing, regression-to-the-mean and outliers versus extremes
I2.3  14:30-15:00  Alex Bottle: Traditional and machine learning methods for comorbidity adjustment in mortality risk models

C5  Troldtog  13:30-15:00
Contributed session: Clinical trials I
Chair: Nicole Close
C5.1  13:30-13:48  Emmanuel Aris: Linear Categorical Marginal Modeling of Solicited Symptoms in Vaccine Clinical Trials
C5.2  13:48-14:06  Elizabeth Williamson: Variance estimation for propensity scores in randomised trials
C5.3  14:06-14:24  Elasma Milanzi: Properties of Estimators in Exponential Family Settings With Observation-based Stopping Rules
C5.4  14:24-14:42  Suzanne Lloyd: Use of record linkage to conduct long-term follow-up of a clinical trial and to investigate generalising cohorts to the underlying population
C5.5  14:42-15:00  Chris Metcalfe: Estimating the optimal treatment effect when the randomised controlled trial design incorporates variable exposure to active intervention

C6  Gjendine  13:30-15:00
Contributed session: Epidemiological designs
Chair: Ørnulf Borgan
C6.1  13:30-13:48  Eiliv Lund: Methodological challenges by the globolomic design - the Norwegian Women and Cancer postgenome cohort
C6.2  13:48-14:06  Anna Johansson: Analysis of case-cohort studies using flexible parametric models
C6.3  14:06-14:24  Aksel Jensen: The case-time-control design with multiple reference periods
C6.4  14:24-14:42  Sven Ove Samuelsen: Inverse probability weighting for nested case-control studies: Application to a study of vitamin-D and prostate cancer
C6.5  14:42-15:00  Nathalie C. Støer: Inverse probability weighting for nested case-control studies: A simulation study related to a nested case-control study of vitamin-D and prostate cancer

C7  Peer Gynt  13:30-15:00
Contributed session: Survival analysis I
Chair: Maja Pohar Perme
C7.1  13:30-13:48  Michal Abrahamowicz: New Method for Controlling for Unobserved Confounding in Time to Event Analyses of Comparative Effectiveness and Safety of Drugs
C7.2  13:48-14:06  Jennifer Rogers: Analysis of repeat event outcome data in clinical trials: examples in heart failure
C7.3  14:06-14:24  Katy Trébern-Launay: A multiplicative-regression model to compare the effect of factors associated with the time to graft failure between first and second renal transplant
C7.4  14:24-14:42  Peggy Sekula: Risk assessment of time-varying factors on an acute event using the case-crossover method: A simulation study
C7.5  14:42-15:00  Khangelani Zuma: Analysis of complex correlated interval-censored HIV data from population based survey

C8  Småtroll  13:30-15:00
Contributed session: Modeling infectious disease
Chair: Michael Nothnagel
C8.1  13:30-13:48  Steffen Unkel: Time-varying frailty models and the estimation of heterogeneities in transmission of infectious diseases
C8.2  13:48-14:06  Mélanie Prague: Toward information synthesis with mechanistic models of HIV dynamics
C8.3  14:06-14:24  Cédric Laouénan: Modeling hepatitis C viral kinetics to compare antiviral potencies of two protease inhibitors: a simulation study under real conditions of use
C8.4  14:24-14:42  Paddy Farrington: Estimation of the basic reproduction number for infectious diseases with age-varying individual heterogeneity in contact rates
C8.5  14:42-15:00  Vana Sypsa: A mathematical model used as a tool to estimate carbapenemase-producing Klebsiella pneumoniae transmissibility and to assess the impact of potential interventions in the hospital setting

IP2  Peer Gynt  15:30-16:30
Plenary session
Chair: Ivar Heuch
Terry Speed: Epigenetics: A new frontier

C9  Klokkeklang  16:45-18:00
Contributed session: Causal inference II
Chair: Vanessa Didelez
C9.1  16:45-17:03  Stijn Vansteelandt: Simple estimation strategies for natural direct and indirect effects
C9.2  17:03-17:21  Jack Bowden: Effective use of RPSFTM's in late stage cancer trials with substantial treatment cross-over
C9.3  17:21-17:39  Jenny Häggström: Targeted Smoothing Parameter Selection for Estimating Average Causal Effects
C9.4  17:39-17:57  Jozefin Buyyze: Estimating random center effects using instrumental variables

C10  Peer Gynt  16:45-18:00
Contributed session: Meta-analyses
Chair: Willi Sauerbrei
C10.1  16:45-17:03  Gerta Rücker: Graph theory meets network meta-analysis
C10.2  17:03-17:21  Ralf Bender: Impact of Network Size and Inconsistency on the Results of MTC Meta-Analyses
C10.3  17:21-17:39  Ulrike Krahn: Model Selection for Locating of Incoherence in Network Meta-Analysis
C10.4  17:39-17:57  Jochem König: Adapting Cochran's Q for Network Meta-Analysis

C11  Troldtog  16:45-18:00
Contributed session: Evaluating hospital performance
Chair: Alex Bottle
C11.1  16:45-17:03  Jessica Kasza: Evaluation and comparison of the performance of Australian and New Zealand intensive care units
C11.2  17:03-17:21  Shalini Santhakumaran: Evaluating mortality rates for neonatal units using multiple membership models
C11.3  17:21-17:39  Erik van Zwet: Confidence intervals for ranks with application to performance indicators
C11.4  17:39-17:57  Yew Yoong Ding: Assessing Hospital Performance for Pneumonia Using Administrative Data With and Without Clinical Data: Does the Difference Matter?

C12  Gjendine  16:45-18:00
Contributed session: Latent variable models
Chair: Hélène Jacqmin-Gadda
C12.1  16:45-17:03  Youngjo Lee: Extended likelihood approach to large-scale multiple testing
C12.2  17:03-17:21  Peter Congdon: Interpolation between spatial frameworks: an application of process convolution to estimating neighbourhood disease prevalence
C12.3  17:21-17:39  Sophie Ancelet: Bayesian shared spatial-component models to combine sparse and heterogeneous epidemiological data informing about a rare disease and detect spatial biases
C12.4  17:39-17:57  Danielle Belgrave: An Investigation of Latent Class Trajectory Models of Prescribing to Define a Phenotypic Marker of Disease Susceptibility

C13  Småtroll  16:45-18:00
Contributed session: Functional data analysis/longitudinal data
Chair: Emmanuel Lesaffre
C13.1  16:45-17:03  Daniela Adolf: Parametric and non-parametric multivariate analysis of functional MRI data
C13.2  17:03-17:21  Kathrine Frey Frøslie: Path analysis with multilevel functional data: Change in glucose curves during pregnancy and its impact on birth weight
C13.3  17:21-17:39  Stanislav Katina: Automatic identification and analysis of anatomical curves across human face
C13.4  17:39-17:57  Susan Bryan: Prediction of Visual Prognosis to Optimize Frequency of Perimetric Testing in Glaucoma
TUESDAY, 21 AUGUST 2012
I3  Klokkeklang  08:30-10:00
Invited session: Functional data analysis
Chair: Per Kragh Andersen
I3.1  08:30-09:00  Helle Sørensen: Functional data – an introduction towards applications
I3.2  09:00-09:30  Jeff Goldsmith: A Modular Framework for Scalar-on-Function Regression
I3.3  09:30-10:00  Laura Sangalli: A study of cerebral aneurysms pathogenesis: functional data analysis of three-dimensional geometries of the inner carotid artery

C14  Peer Gynt  08:30-10:00
Contributed session: Clinical trials II
Chair: Elizabeth Williamson
C14.1  08:30-08:48  Gerard van Breukelen: Efficient design of cluster randomized trials with treatment-dependent sampling costs and treatment-dependent unknown outcome variances
C14.2  08:48-09:06  Daniel Lorand: Bayesian Phase II randomized design for time-to-event endpoint using historical control - Application to Oncology
C14.3  09:06-09:24  Andrew Forbes: Evaluation of methods for design and analysis of cluster randomised crossover trials with binary outcomes with application to intensive care research
C14.4  09:24-09:42  Atanu Biswas: Optimal target allocation proportion for correlated binary responses in a two-treatment clinical trial
C14.5  09:42-10:00  Nicole Close: Statisticians Implementing Change and Cost Effectiveness in Clinical Trials through Risk Based Prioritization Monitoring

C15  Gjendine  08:30-10:00
Contributed session: Statistics for epidemiology I
Chair: Paddy Farrington
C15.1  08:30-08:48  Nadine Binder: Bias of relative risk estimates in cohort studies as induced by missing information due to death
C15.2  08:48-09:06  Michael Schemper: Explained variation versus attributable risk
C15.3  09:06-09:24  Luwis Diya: Quantifying bias in register based research
C15.4  09:24-09:42  Michael Johnson Mahande: Recurrence risk of perinatal mortality in Northern Tanzania: A registry-based prospective cohort study
C15.5  09:42-10:00  Oliver Collignon: Luxemburg acUte myoCardial Infarction registry (LUCKY): estimation of the effect of clinical and biochemical variables on the New-York Heart Association score using penalized ordinal logistic regression

C16  Troldtog  08:30-10:00
Contributed session: Joint modeling of outcome and time-to-event
Chair: Clelia Di Serio
C16.1  08:30-08:48  Magdalena Murawska (Student award winner): Dynamic Prediction Based on Joint Model for Categorical Response and Time-to-Event
C16.2  08:48-09:06  Michael Crowther (Student award winner): Adjusting for measurement error in baseline prognostic biomarkers: A joint modelling approach
C16.3  09:06-09:24  Cécile Proust-Lima: Dynamic predictions from joint models for longitudinal biomarker trajectory and time to clinical event: development and validation
C16.4  09:24-09:42  Jessica Barrett: A closed form likelihood for joint modelling of repeated measurements and survival outcomes, with an application to cystic fibrosis data
C16.5  09:42-10:00  Ralitza Gueorguieva: Joint Modeling of Repeatedly Measured Continuous Outcome and Interval-censored Competing Risk Data

C17  Småtroll  08:30-10:00
Contributed session: Genomics / systems biology
Chair: Koos Zwinderman
C17.1  08:30-08:48  Tomasz Burzykowski: High resolution QTL-mapping with whole-genome sequencing data
C17.2  08:48-09:06  Boris Hejblum: Application of Gene Set Analysis of Time-Course gene expression in a HIV vaccine trial
C17.3  09:06-09:24  Setia Pramana: Detecting genetic differences between monozygous twins by next-generation sequencing
C17.4  09:24-09:42  Mikel Esnaola: Modeling count data in RNA-seq experiments using the Poisson-Tweedie family of distributions
C17.5  09:42-10:00  Knut Wittkowski: From single-SNP to wide-locus GWAS: A Computational Biostatistics Approach Identifies Pathways in Small Sample Studies
Poster session  10:00-11:00  (P1-P22, all poster presenters)

I4  Klokkeklang  11:00-12:30
Invited session: Extensions to Epidemiological Designs
Chair: Marie Reilly
I4.1  11:00-11:30  Bryan Langholz: Conditional likelihoods for case-cohort data: Do they exist?
I4.2  11:30-12:00  Paola Rebora: Estimating cumulative incidence adjusting for competing risk using an optimal two-phase stratified design
I4.3  12:00-12:30  Agus Salim: A semiparametric approach to secondary analysis of nested case-control data

C18  Peer Gynt  11:00-12:30
Contributed session: Clinical trials III
Chair: Lucinda Billingham
C18.1  11:00-11:18  Lehana Thabane: Dealing with Criticisms and Controversies of Pragmatic Trials
C18.2  11:18-11:36  Jeremy Taylor: Finding and validating subgroups of enhanced treatment effect in randomized clinical trials
C18.3  11:36-11:54  David Oakes: Monitoring a Long-Term Efficacy Study for Futility: an Application in Huntington's Disease
C18.4  11:54-12:12  Olympia Papachristofi: Assessment of surgical interventions through clinical trials: accounting for the impact of learning curves
C18.5  12:12-12:30  Oke Gerke: Interim analyses in diagnostic versus treatment studies: differences and similarities

C19  Peer Gynt  11:00-12:30
Contributed session: Model selection I
Chair: Mette Langaas
C19.1  11:00-11:18  Yunzhi Lin (Student award winner): Advanced Colorectal Neoplasia Risk Stratification by Penalized Logistic Regression
C19.2  11:18-11:36  Daniel Commenges: A universal cross-validation criterion and its asymptotic distribution
C19.3  11:36-11:54  Carolin Jenker: Modeling continuous predictors with a 'spike' at zero: multivariable extensions and handling of related spike variables
C19.4  11:54-12:12  Willi Sauerbrei: Stability investigations of multivariable regression models derived from low and high dimensional data
C19.5  12:12-12:30  Yanzhong Wang: Learning Mixtures through merging components
C20  Troldtog  11:00-12:30
Contributed session: Prediction in survival analysis
Chair: Philip Hougaard
C20.1  11:00-11:18  Heiko Götte: Sample size planning for survival prediction with focus on high dimensional data
C20.2  11:18-11:36  Robin Van Oirbeek: Exploring the discriminatory ability of frailty models
C20.3  11:36-11:54  Audrey Mauguen: Prediction tool for risk of death using history of cancer recurrences in joint frailty models
C20.4  11:54-12:12  Marcel Wolbers: Concordance for prognostic models with competing risks
C20.5  12:12-12:30  Paul Blanche: Comparing areas under time-dependent ROC curves under competing risk

C21  Småtroll  11:00-12:30
Contributed session: Statistical methodology I
Chair: Jan Terje Kvaløy
C21.1  11:00-11:18  Yuanzhang Li: High dimensional regression using decomposition-gradient-nuisance method and its application in epidemiological case control studies
C21.2  11:18-11:36  Stian Lydersen: Choice of the Berger and Boos confidence coefficient in an unconditional test for equality of two binomial probabilities
C21.3  11:36-11:54  Christoph Schürmann: Surrogate endpoints in breast and colon cancer: An evaluation of validation studies
C21.4  11:54-12:12  Buddhananda Banerjee: Use of surrogate endpoints for improving efficiency, reduction of sample size and modification of Mantel-Haenszel estimator for odds ratio
C21.5  12:12-12:30  Lyle Gurrin: Estimation of between- and within-pair regression effects in logistic regression with shared measurement error
WEDNESDAY, 22 AUGUST 2012
I5  Peer Gynt  08:30-10:00
Invited session: Causal inference
Chair: Stijn Vansteelandt
I5.1  08:30-09:00  Tyler VanderWeele: Causal mediation analysis with applications to perinatal epidemiology
I5.2  09:00-09:30  Andrea Rotnitzky: Estimation and extrapolation of treatment effects
I5.3  09:30-10:00  Els Goetghebeur: Protecting against errors: causal effect estimates for the evaluation of quality of care over many (cancer) centers

C22  Klokkeklang  08:30-10:00
Contributed session: Survival analysis II
Chair: Sven Ove Samuelsen
C22.1  08:30-08:48  Maja Pohar Perme: Properties of net survival estimation
C22.2  08:48-09:06  Therese Andersson: Estimating the loss in expectation of life due to cancer using flexible parametric survival models
C22.3  09:06-09:24  Katharina Ingel: Sample Size Calculation and Re-estimation for Recurrent Event Data
C22.4  09:24-09:42  Terry Therneau: Mixed Effects Cox Models and the Laplace Transform
C22.5  09:42-10:00  Janez Stare: On using simulations to study explained variation in survival analysis

C23  Småtroll  08:30-10:00
Contributed session: Measurement error
Chair: Saskia le Cessie
C23.1  08:30-08:48  Péter Vargha (Scientist award winner): Regression toward the mean and ANCOVA in observational studies
C23.2  08:48-09:06  Timothy Mutsvari: A multilevel misclassification model for spatially correlated binary data: An application in oral health research
C23.3  09:06-09:24  Kristoffer Herland Hellton: Projecting error: Understanding measurement error in principal components
C23.4  09:24-09:42  Øystein Sørensen: Variable Selection by Lasso in Regression with Measurement Error
C23.5  09:42-10:00  Nicholas de Klerk: Adjustment for genotyping measurement error in a case-control study

C24  Gjendine  08:30-10:00
Contributed session: Prediction
Chair: Ulrich Mansmann
C24.1  08:30-08:48  Marine Lorent: Relative ROC curves: a novel approach for evaluating the accuracy of a marker to predict the cause-specific mortality
C24.2  08:48-09:06  Thomas Debray: A framework for developing and implementing clinical prediction models across multiple studies with binary outcomes
C24.3  09:06-09:24  Daniel Stahl: Using Machine learning methods for event related potential (ERP) brain activity analysis
C24.4  09:24-09:42  Khadijeh Taiyari: How much data are required to develop a reliable risk model?
C24.5  09:42-10:00  Z. J. Musoro: Dynamic Predictions of Repeated Events of Different Types by Landmarking

C25  Troldtog  08:30-10:00
Contributed session: Multiple imputation methods
Chair: Stian Lydersen
C25.1  08:30-08:48  Shaun Seaman: Multiple Imputation of Missing Covariates with Non-Linear Effects and Interactions: an Evaluation of Statistical Methods
C25.2  08:48-09:06  Oya Kalaycioglu: Comparison of multiple imputation methods for repeated measurements studies
C25.3  09:06-09:24  Georg Heinze: Confidence intervals after multiple imputation: combining profile likelihood information from logistic regressions
C25.4  09:24-09:42  Jonathan Bartlett: Congenial multiple imputation of partially observed covariates within the full conditional specification framework
C25.5  09:42-10:00  John Carlin: Diagnosing the goodness-of-fit of models used for multiple imputation

IP3  Peer Gynt  10:30-11:30
President's Invited Speaker
Chair: Harbajan Chadha-Boreham
Deborah Ashby: A Benefit–Risk Analysis of using Formal Benefit-Risk Approaches for Decision-Making in Drug Regulation

11:30-13:30
Annual General Meeting (AGM)
Chair: Harbajan Chadha-Boreham

I6  Klokkeklang  13:30-15:00
Invited session: Genomics and systems biology
Chair: Arnoldo Frigessi
I6.1  13:30-14:00  Katja Ickstadt: Nonparametric Bayesian Modelling in Systems Biology
I6.2  14:00-14:30  Linn Cecilie Bergersen: Reliable Preselection of Variables in High-dimensional Penalized Regression Problems by Freezing
I6.3  14:30-15:00  Doug Speed: Using Heritability Analysis to Devise a Prediction Model for Epilepsy

C26  Peer Gynt  13:30-15:00
Contributed session: Competing risks
Chair: Terry Therneau
C26.1  13:30-13:48  Per Kragh Andersen: Decomposing number of life years lost according to causes of death
C26.2  13:48-14:06  Paul Lambert: Parametric modelling of the cumulative incidence function in competing risks models
C26.3  14:06-14:24  Martin Wolkewitz: Nested case-controls studies in cohorts with competing events
C26.4  14:24-14:42  Giorgios Bakoyannis: Late entry bias in cohort studies with competing endpoints
C26.5  14:42-15:00  Ronald Geskus: Inverse probability weighted estimators in survival analysis

C27  Troldtog  13:30-15:00
Contributed session: Statistics for epidemiology II
Chair: Stein Emil Vollset
C27.1  13:30-13:48  Marie Reilly: Modeling changes in cancer risk with time from diagnosis of a family member
C27.2  13:48-14:06  Myeongjee Lee: A comprehensive model for jointly estimating familial risk in all first-degree relatives
C27.3  14:06-14:24  Josué Almansa: Multinomial multi-latent-class model. Application to multiple exposures in occupational setting and the risk of several histological subtypes of lung cancer
C27.4  14:24-14:42  Katherine Lee: Modelling the age-dependence of risk in a self-controlled case series analysis
C27.5  14:42-15:00  Albert Sanchez-Niubo: A parametric approach to the reporting delay adjustment method applied to drug use data

C28  Gjendine  13:30-15:00
Contributed session: Model selection II
Chair: Daniel Commenges
C28.1  13:30-13:48  Jan Kalina (Scientist award winner): Robust Gene Selection Based on Minimal Shrinkage Redundancy
C28.2  13:48-14:06  Mar Rodriguez-Girondo: Boosting for variable selection in structured survival models
C28.3  14:06-14:24  Leon Bobrowski: Feature subset selection linked to linear separability
C28.4  14:24-14:42  Axel Benner: Predictive genomic signatures: Biomarker discovery in high-dimensional data
C28.5  14:42-15:00  Rosa Meijer: A multiple testing method for ordered data

C29  Småtroll  13:30-15:00
Contributed session: Statistical design and methodology I
Chair: Hayley Jones
C29.1  13:30-13:48  Carla Moreira: Goodness-of-fit tests for a semiparametric model under a random double truncation
C29.2  13:48-14:06  Edmund Njeru Njagi: A Framework for Characterizing Missingness at Random in Generalized Shared-parameter Joint Modeling Framework for Longitudinal and Time-to-Event Data
C29.3  14:06-14:24  Ikuko Funatogawa: Likelihood based estimation for an effect of a time-varying covariate
C29.4  14:24-14:42  Bruce Tabor: Cost-Sensitive Maximum Likelihood Classification: Finding Optimal Biomarker Combinations in Screening and Diagnosis
C29.5  14:42-15:00  Toby Prevost: Designing a preliminary adaptive study to develop biomarker combinations for trial

C30  Klokkeklang  15:30-17:00
Contributed session: Statistical design and methodology II
Chair: John Carlin
C30.1  15:30-15:48  Stephen Senn: Predicting Patient Recruitment in Multi-Centre Clinical Trials
C30.2  15:48-16:06  Hanhua Liu: Evaluation and validation of social and psychological markers: identification and assumptions for instrumental variables estimation
C30.3  16:06-16:24  Esther de Hoop: Sample size calculation for cluster randomized stepped wedge designs
C30.4  16:24-16:42  Are Hugo Pripp: Lifestyle, socioeconomic factors and consumption of dairy foods analysed with structural equation modeling
C30.5  16:42-17:00  Luwis Diya: Bayesian multilevel factor analytic model for assessing the relationship between nurse-reported adverse events and patient safety

C31  Peer Gynt  15:30-17:00
Contributed session: Longitudinal data
Chair: Cécile Proust-Lima
C31.1  15:30-15:48  Karolina Sikorska: Fast linear mixed model computations for GWAS with longitudinal data
C31.2  15:48-16:06  Sten Willemsen: A Bayesian Model for Multivariate Human Growth Data
C31.3  16:06-16:24  Riccardo Marioni: Cognitive lifestyle and cognitive decline: the characteristics of two longitudinal models
C31.4  16:24-16:42  Sandra Plancade: A statistical model to explore carcinogenic processes by transcriptomics in prospective studies
C31.5  16:42-17:00  Andrew Copas: Analysis of Change Over Time When Measurements are Obtained Only After an Unknown Delay

C32  Gjendine  15:30-17:00
Contributed session: Survival analysis III
Chair: Michal Abrahamowicz
C32.1  15:30-15:48  Anika Buchholz: High-dimensional survival studies - comparison of approaches to assess time-varying effects
C32.2  15:48-16:06  Morten Valberg: Frailty modeling of age-incidence curves of osteosarcoma and Ewing sarcoma among individuals younger than 40 years
C32.3  16:06-16:24  Audrey Mauguen: Multivariate frailty models for two types of recurrent events with a dependent terminal event: Application to breast cancer data
C32.4  16:24-16:42  Donghwan Lee: Sparse partial least-squares regression for high-throughput survival data analysis
C32.5  16:42-17:00  Arthur Allignol: A Regression Model for the Extra Length of Stay Associated with a Nosocomial Infection

C33  Småtroll  15:30-17:00
Contributed session: Statistical methodology II
Chair: Michael Schemper
C33.1  15:30-15:48  Kerry Leask (Scientist award winner): Modelling Overdispersion in Wadley's Problem with a Beta-Poisson Distribution
C33.2  15:48-16:06  Ulrich Mansmann: Global testing for complex ordinal data
C33.3  16:06-16:24  Eva Andersson: On-line surveillance of air pollution
C33.4  16:24-16:42  Angela Noufaily: An Improved Algorithm for Outbreak Detection in Multiple Surveillance Systems
C33.5  16:42-17:00  Siegfried Kropf: The use of symmetric and asymmetric distance measures for high-dimensional tests of inferiority, equivalence and non-inferiority

C34  Troldtog  15:30-17:00
Contributed session: Meta-analyses/Combined data sources
Chair: Sada Nand Dwivedi
C34.1  15:30-15:48  Miriam Gjerdevik: Improving the error rates of the Begg and Mazumdar test for publication bias in meta-analysis
C34.2  15:48-16:06  Anissa Elfakir: Dealing with missing binary outcome data in meta-analysis: application to randomized clinical trials in nutrition
C34.3  16:06-16:24  Antonio Gasparrini: Multivariate meta-analysis for non-linear and other multi-parameter associations
C34.4  16:24-16:42  Cono Ariti: Developing a predictive risk model for mortality from multiple cohort studies: an example in heart failure
C34.5  16:42-17:00  Brunilda Balliu: Combining Family and Twin Data in Association Studies to Estimate the Non-inherited Maternal Antigens Effect
THURSDAY, 23 AUGUST 2012

MS1  Peer Gynt  09:00-13:00
Mini-symposium on Register-based Epidemiology
Organiser: Roy Miodini Nilsen
Chair: Stein Atle Lie
MS1.1  09:00-09:30  Rolv Skjærven: A woman's reproductive history is related to diseases later in life
MS1.2  09:30-10:00  Timo Hakulinen: What can be achieved with a good population-based cancer registry?
MS1.3  10:00-10:30  Giske Ursin: How can cancer registries improve our biological understanding of cancer and cancer care?
10:30-11:00  Refreshments
MS1.4  11:00-11:30  Nancy L. Pedersen: Double delights through twin registry research
MS1.5  11:30-12:00  Kaare Christensen: Register-based research on the epidemiology of aging
MS1.6  12:00-12:30  Sven Cnattingius: The Birth Register – how do we find the most beautiful flowers in the garden?
MS1.7  12:30-13:00  Allen J. Wilcox: Heterogeneity of risk and selective fertility – Subtle biases produce serious confusions

MS2  Klokkeklang  09:00-13:00
Mini-symposium on Statistics in Vaccines Research
Topic: Novel Statistical Approaches Used in Post-marketing Safety Surveillance Systems – A Global Perspective
Organisers: Jennifer Nelson and Allen Izu
Chair: Allen Izu
MS2.1  09:00-09:30  Michael Nguyen: FDA's Sentinel Initiative: Active Vaccine Safety Surveillance and Pharmacovigilance
MS2.2  09:30-10:00  Jennifer C. Nelson: Methodological challenges for sequential vaccine safety surveillance using observational health care data
MS2.3  10:00-10:30  Lingling Li: Drug and Vaccine Safety Surveillance: some existing methods and the unmet analytic needs
10:30-11:00  Refreshments
Chair: Jennifer Nelson
MS2.4  11:00-11:30  Stanley Xu: Signal Detection of Adverse Events Using Electronic Data with Outcome Misclassification
MS2.5  11:30-12:00  Paddy Farrington: Paediatric vaccine pharmacoepidemiology: classification bias in case series analysis and application to febrile convulsions
MS2.6  12:00-12:30  Yonas G. Weldeselassie: Self controlled case series method with smooth age effect
12:30-13:00  Discussion
Posters
P1
P1.1
P1.2
P1.3
P1.4
Wednesday, 22/7
P1.6
P1.7
Thursday, 23/7
P1.5
P2.2
Posters
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
P1.8
P1.9
P2
P2.1
P3
P3.1
P3.2
P3.3
P3.4
P3.5
P3.6
P3.7
P3.8
P3.9
Adaptive clinical trials
Wenle Zhao: Response Adaptive Randomization - Cost, Benefit and Implementation with Covariate Balancing
Alexandra Graf: Maximum type 1 error rate inflation in multi-armed clinical trials with interim sample size
modifications.
Yunchan Chi: Adaptive two-stage designs for comparing two binomial proportions in phase II clinical trials
Babak Choodari-Oskooei: Impact of lack-of-benefit stopping rules on treatment effect estimates of two-arm multistage (TAMS) trials with time to event outcome
Ruud Boessen: Optimizing trial design in pharmacogenetics research; comparing a fixed parallel group, group
sequential and adaptive selection design on sample size requirements.
Emma McCallum: In a two-stage dose-finding study, how big should the first stage be?
Graham Wheeler: Incorporating prior information into dual-agent Phase I dose-escalation studies from single-agent
trials
Simon Schneider: Blinded and unblinded internal pilot study designs for clinical trials with overdispersed count
data
Eunsik Park: Group sequential testing in covariate-adjusted response-adaptive designs
Bioinformatics
Chen Suo: Joint estimation of isoform expression and isoform-specific read distribution using RNA-Seq data
across samples
Woojoo Lee: Unequal group covariances in microarray data analyses
Causal inference
Kimberley Goldsmith: Exploration of instrumental variable methods for estimation of causal mediation effects in the
PACE trial of complex treatments for chronic fatigue syndrome
Richard Emsley: Principal trajectories: extending principal stratification for repeated measures
Toby Prevost: Improving the detection of causal mediation effects in complex intervention trials
Michel Hof: Estimating the effect of insulin treatment in diabetic type-II patients on cardiovascular disease rates
with marginal structural models
Sabine Landau: Causal inference from trials of complex interventions
Silvana Romio: Marginal Structural Models in Epidemiology: Why not?
Eléonore Herquelot: Average treatment effect estimation with a rare binary outcome: an example and simulations
Roseanne McNamee: G-estimation from an RCT comparing 2 active treatments and placebo given postrandomisation crossover and simultaneous treatments
Georgia Vourli: Direct and indirect effects in the presence of time-dependent confounding
P4.6
P4.7
P4.8
P4.9
P4.10
P4.11
P4.12
P4.13
P4.14
P4.15
P4.16
P4.17
P4.18
P4.19
P4.20
P4.21
P4.22
P4.23
P4.24
P4.25
P4.26
P4.27
P4.28
P4.29
P4.30
P5
P5.1
Consulting
Lehana Thabane: 10 tips for enhancing biostatistical consultations or collaborations in clinical research: lessons
from the trenches
Sunday, 19/7
Monday, 20/7
P4.4
P4.5
Clinical trials
Jen-pei Liu: Application of the Parallel Line Assay to Assessment of Biosimilar Drug Products
Christoph Gerlinger: Statistical derivation of a responder definition for the reduction of hot flushes
Stephen D Walter: Optimisation of the two-stage randomised trial design when some participants have no preferred
treatment.
Yuko Palesch: Revisiting baseline covariate adjustment in randomization and analysis of large clinical trials
Wilhelm Gaus: Is a Controlled Randomised Trial the Non-plus-ultra Design? An Advocacy for Comparative,
Controlled, Non-randomised Trials
Natalja Strelkowa: A biomarker-based designs for a controlled phase II trial in oncology.
Mathai A.K.: Regression model to analyze the continuous primary end point in RCTs when the treatment effect
depends on baseline values of the outcome variable
Thomas Jaki: Confidence intervals for the ratio of AUCs in cross-over bioequivalence trials
Willi Sauerbrei: Interaction of treatment with a continuous variable: simulation study of significance level for
several methods of analysis
John-Philip Lawo: Comparison of groups in the presence of bimodality
Brennan C Kahan: Analysis of multicentre trials with continuous or binary outcomes
Suzie Cro: Are Appropriate Outcome Measures Being Used in Open-label Randomised Trials?
Math Candel: Sample size corrections for varying cluster sizes when testing treatment effects in two-armed
randomized trials with heterogeneous clustering
Hong Sun: Within-center imbalance after balanced allocation using minimization method: Center as a stratification
factor in multi-center clinical trials?
Gang Li: A Semiparametric Accelerated Failure Time Mixture Model for Latent Subgroup Analysis of a Randomized
Clinical Trial
Katrin Roth: Analysis of recurrent event data - an applied comparison of methods using clinical data
Primrose Beryl Gladstone: Probability of Inferiority in Current Non-Inferiority Trials
Rachid el Galta: Blinded estimation of within subject variance
Naohiro Yonemoto: Analysis of case scenario cross-over trial: an application of medical devices manikin study
Gloria Crispino O’Connel: Investigating the Strength of the Association between the Amplitude of the Impedance
Cardiogram (ICG), Thrust and Depth during CPR compressions.
Emmanuel Bouillaud: New approaches for design and analysis of pediatric pharmacokinetic and
pharmacokinetic/pharmacodynamic studies
Alberto Morabito: Propensity score and area under a ROC curves in repeated measures clinical studies
Lisa Belin: Optimization of managing the lost to follow up patients in a Phase II oncology trials.
Robert Parker: Blocking in Unblinded Randomized Clinical Trials
Karen Smith: Non-Inferiority Trials of Non-Pharmaceutical Interventions
Jacob Agris: How to Select Readers for Clinical Trials When There is No Gold Standard
Wenle Zhao: Cost and Prevention Strategies of Randomization Errors in Emergency Treatment Clinical Trials
Andrew Forbes: Short interrupted time series designs in clinical practice and policy research: an analysis
approach using restricted maximum likelihood
Gang Li: A Bayesian approach to assess the active control treatment effect for the design of non-inferiority trials
Ly-Mee Yu: Sample size calculation for time-to-event outcomes in randomized controlled trials: A review of
published trials
Tuesday, 21/7
P4
P4.1
P4.2
P4.3
Wednesday, 22/7
P3.12
Unfortunately, this poster has been withdrawn.
Fabiola Del Greco M.: Investigation of pleiotropy in Mendelian randomisation studies that use aggregate genetic
data
Emmanuel Caruana: A new performance measure of propensity score model
Thursday, 23/7
P3.10
P3.11
23/156
Posters
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
24/156
Posters
Thursday, 23/7
Wednesday, 22/7
Tuesday, 21/7
Monday, 20/7
Sunday, 19/7
P6
P6.1
P6.2
P6.3
P6.4
P7
P7.1
P7.2
P7.3
P7.4
P7.5
P8
P8.1
P8.2
P8.3
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
Diagnostic methodology
Jose Antonio Roldán Nofuentes: Average kappa coefficient: a new measure of accuracy of a binary diagnostic test
Antoine Regnault: Applying Partial Least Squares Discriminant Analysis (PLS-DA) for optimisation of decision rules
based on complex patient-reported data: creation of the FibroDetect® scoring method
Harbajan Chadha-Boreham: Refined nomograms to enhance the interpretation of clinical risk prediction models
Jean- Christophe Thalabard: Comparative assessement of a new imaging technique versus an imperfect invasive
gold standard : early detection of coronary stenosis after arterial switch surgery in children
Epidemiological designs
Mohammad Reza Maracy: Cancer incidence and prevalence: application of mortality data to estimates and projects
for the period 2001-2015, Iran
Ralph Rippe: Selection bias in obesity research: when do sampling weights solve the problem?
Sarah Barry: The Parenting Support Framework in Glasgow: mapping variability in behavioural difficulties
Riccardo Pertile: Pesticides exposure in an apples growing valley (Trentino - Italy): epidemiological study
Rémi Sitta: Use of linked registries in the design of cohort studies, a tool against selection bias: the Constances
example.
P8.7
P8.8
P8.9
P8.10
Evaluating hospital performance
Francesca Ieva: Mixed effect models for provider profiling in cardiovascular healthcare context
Doris Tove Kristoffersen: The use of Kaplan-Meier plots when comparing hospital mortality.
Doris Tove Kristoffersen: Accounting for patients transferred between hospitals when using mortality as a quality
indicator for the comparison of hospitals.
Ondrej Majek: Performance of Screening Colonoscopy Centres in a Nationwide Colorectal Cancer Screening
Programme: Evaluation Using Hierarchical Bayesian Model
Elisa Carretta: Hospital volume and survival from cancer surgery: the experience of a local area with an high
incidence of gastric cancer
Sarah Seaton: Tolerance intervals for identification of outlier healthcare providers: the incorporation of benchmark
uncertainty.
Maria Weyermann: Geographic variations in avoidable hospital admissions for asthma across Germany
Federico Ambrogi: The comparative evaluation of Italian Regional Health System through PLS-SEM
Richard Jacques: Casemix adjustment for comparing standardised event rates
Silke Knorr: Probability of in-hospital mortality: analysis of administrative data in Germany
P9
P9.1
Functional data analysis
Stanislav Katina: Visualisation and spatiotemporal smoothing of single trial EEG data
P10
P10.1
P10.2
Health economics and regulatory affairs
Jasdeep. K Bhambra: Bayesian Evidence Synthesis in a Health Economic Model for Dementia
Antoine Regnault: Analysis of time to patient-reported outcome meaningful change: Illustration from a clinical trial
with catumaxomab in patients with malignant ascites
Dorota Doherty: Prediction of pregnancy outcomes in planned homebirth
Juan R Gonzalez: Multivariate latent class model for non-supervised classification in RNAseq experiments
Werner Vach: The (little) need for and the (large) impact of post hoc application of formal criteria to check clinical
relevance in well conducted RCTs
P8.4
P8.5
P8.6
P10.3
P10.4
P10.5
P11
P11.1
P11.2
P11.3
P11.4
P11.5
P11.6
Incomplete data
Danice Ng: Can the repeated attempts model help to fit MNAR selection models?
Christele Augard: Using planned missing values in longitudinal trials to relieve patient burden and reduce costs
Bola Coker: Regression models for repeated binary measures under different missingness assumptions
Ákos Ferenc Pap: Drop out in randomized controlled non-inferiority trials with time to event outcome: a worst case
sensitivity analysis using a Bayesian method
Rory Wolfe: Dealing with missing data in the development and validation of clinical risk prediction models: is
missing as normal ever a sensible strategy?
Menelaos Pavlou: Contrasting Informative Cluster Size with Missing Data
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
P13 Latent variable models
P13.1 Baoyue Li: A Bayesian multivariate multilevel probit model applied to nursing burnout data
P13.2 Klaus Groes Larsen: Using mixture models for identification of typical trajectories of recovery in patients with Major Depressive Disorder
P13.3 Salma Ayis: Assessing Item Properties of the Hospital Anxiety and Depression Scale (HADS) for the Detection of Depression in Stroke Patients using Item Response Theory (IRT)
P13.4 Lena Herich: Application of a Latent-Class-Survival Model for data of a cardiological trial
P13.5 Peter Brønnum: A Dynamic Prediction Model for Anticoagulant Therapy
P14 Longitudinal data
P14.1 R. Alonso: A new criterion for choosing the best working correlation structure in GEE analysis
P14.2 Marek Molas: Joint Hierarchical Generalized Linear Models Using H-likelihood
P14.3 Maria Josefsson: Causal inference with longitudinal outcomes and non-ignorable drop-out
P14.4 Duolao Wang: Assessment of Agreement between Digital 12-Lead ECG and continuous Holter ECG Recordings: A Heterogeneous Mixed Model Approach
P14.5 Ronald Geskus: A random effects model fitted to dichotomous outcome data with latent classes
P14.6 Eleni Rapsomaniki: Prognostic biomarkers across the patient journey: Systolic blood pressure before, during and after myocardial infarction
P14.7 Malihe Nasiri: Discriminant analysis to predicting pre-eclampsia based on bivariate longitudinal biomarkers profiles
P14.8 Renata Majewska: Neonatal exposure to thimerosal from vaccines and child development in the first 3 years of life - application of generalized estimating equations
P15 Meta-analyses
P15.1 Yinghui Wei: A Bayesian approach for multivariate meta-analysis with many outcomes
P15.2 Joris Menten: Bayesian Meta-analysis of Diagnostic Tests Allowing for Imperfect Reference Standards
P15.3 Mercy Ofuya: Dichotomisation of continuous outcomes: A systematic review of meta-analyses using birthweight
P15.4 Catherine Klersy: Meta-analysis of high quality observational studies: a surrogate for clinical trials?
P15.5 Ingeborg van der Tweel: Estimation of between-trial variance in sequential meta-analyses
P15.6 Sylwia Bujkiewicz: Multivariate meta-analysis of surrogate endpoints in health technology assessment: a Bayesian approach
P15.7 Elinor Jones: Individual-level statistical analysis without pooling the data
P15.8 Gang Li: Meta-analysis to estimate the treatment effect of Doripenem, Levofloxacin, and Imipenem-cilastin in
complicated urinary tract infections
P15.9 Eleni Rapsomaniki: An age-adjusted metric for risk discrimination, with application to age-specific cardiovascular
disease prediction
P15.10 Pablo Verde: Meta-analysis of paired-comparison studies of diagnostic test data: A Bayesian modelling approach
P16 Model selection
P16.1 Abdel Douiri: Inverse problem within a regression framework
P16.2 Salma Ayis: Quantifying Bias due to Unobserved Heterogeneity at Individual and Cluster Levels when Using Binary Response Regression Models: A Simulation study
P12 Joint modelling of outcome and time-to-event
P12.1 Z.J. Musoro: A Simulation Study to Investigate The Performance of Frailty Variance Estimates in Repeated Events Data
P12.2 Chiara Brombin: Bayesian methods for joint modeling of longitudinal and survival data to assess validity of biomarkers in AIDS data
P12.3 Unfortunately, this poster has been withdrawn.
P12.4 Keith Abrams: Bayesian Modelling of Biomarker Data to Predict Clinical Outcomes
P12.5 Daniela Mariosa: Causal effects of Total Antioxidant Capacity intake on risk of postmenopausal breast cancer in a cohort study
P11.7 Helena Romaniuk: Multiple imputation in a longitudinal cohort study: a case study of sensitivity to imputation methods
P11.8 Tim Morris: Multiple imputation for an incomplete covariate which is a ratio
P16.3 Sophie Swinkels: A map for the jungle of choices in Mixed Models for Repeated Measures
P16.4 Laure Wynants: Variable selection for prediction models based on multicenter data
P16.5 Javier Roca-Pardiñas: Generalized Additive Models and Computerized Breast Cancer Detection: Clinical Application
P17 Modelling in drug and device development
P17.1 Yuh-Ing Chen: Bioequivalence test based on a nonlinear mixed effect model for pharmacokinetic data in a 2x2 cross over design
P17.2 Ruediger Paul Laubender: Sample size and power considerations for estimating subject-treatment interactions in parallel-group designs using covariates
P17.3 Leslie Pibouleau: Development of a tool to elicit experts' beliefs for medical device evaluation
P18 Modelling infectious disease
P18.1 Fabian Tibaldi: Model Based Estimates of Long-Term Persistence of Induced HPV Antibodies: A Flexible Subject-Specific Approach
P18.2 Doyo Enki: Statistical models for biosurveillance: an empirical investigation
P18.3 Krisztina Boda: Epidemiological modelling of risk factors of human papilloma virus in women with positive cytology in the county of Csongrád
P18.4 Achilleas Tsoumanis: An alternative to incorporate the number of persons tested when looking for time trends in STDs
P19 Observational studies
P19.1 Geir Egil Eide: Attributable fractions of one year mortality after diagnosis of lung cancer
P19.2 Klaus Jung: Power Approximation for Logistic Regression Models with Multiple Risk Factors in Observational Studies
P19.3 Elizabeth McKinnon: Analysis of clustered binary data with extreme proportions
P19.4 Michel Hof: Multistage dynamic sampling design for observational studies
P19.5 RHH Groenwold: Impact of measurement error and unmeasured confounding: a simulation study based on the
example of ascorbic acid intake and mortality
P19.6 Simona Littnerova: Propensity score: alternatives to logistic regression - real example
P19.7 Edwin Amalraj Raja: A comparison of different multilevel models to analyse the effect of maternal obesity on
pregnancy induced hypertension
P19.8 Heather Murray: Use of Scottish Electronic Medical Record Linkage Systems: Illustrated by WOSCOPS 15 Year
Follow-up Data
P19.9 Paul HJ Donachie: Posterior Capsule Rupture complication rates for Cataract surgery from 1,173 ophthalmic
surgeons in 28 UK NHS trusts.
P19.10 Erik Berg: Paternal age - and the risk of oral cleft
P20 Prediction
P20.1 Babak Choodari-Oskooei: A new measure of predictive ability for survival models
P20.2 Thomas Debray: Aggregating published prediction models with individual patient data
P20.3 Laura Bonnett: External Validation of a Prognostic Model in Epilepsy: simulation study and case study
P20.4 Siti Haslinda Mohd Din: Multiple longitudinal profiles of patients reported outcomes as predictors to clinical status of rheumatoid arthritis patients: A joint modeling approach
P20.5 Kazem Nasserinejad: Using Dynamic Regression and Random Effects Models for Predicting Hemoglobin Levels in
Novel Blood Donors
P20.6 Jose A. Vilar: Time series clustering based on nonparametric multidimensional forecast densities: An application
to clustering of mortality rates
P20.7 Yohann Foucher: Time-dependent ROC curves for the estimation of true prognostic capacity of microarray data
P20.8 David van Klaveren: Assessing discriminative ability in clustered data
P20.9 Eric Ohuma: Modelling crown-rump length (CRL) data used for prediction of gestational age in early pregnancy
when the data is truncated at both ends: The case study of INTERGROWTH-21st Project.
P20.10 Jerome Lambert: Temporal profile of time-dependent discrimination measures in survival analysis
P20.11 Rumana Z Omar: Validation of risk prediction models for clustered data: A simulation study and practical
recommendations
P22 Survival and multistate models
P22.1 Carlos Martinez: Bonferroni's method to compare k survival curves with recurrent events
P22.2 Pierre-Jérôme Bergeron: One-Sample Test for Goodness-of-Fit for Length-Biased Right-Censored Survival Data
P22.3 Geraldine Rauch: Planning and evaluating clinical trials with composite time-to-first-event endpoints in a competing risk framework
P21 Statistics for epidemiology
P21.1 Abdel Douiri: Re-sampling methods in prevalence and incidence studies
P21.2 Andreas Gleiss: Development of new Austrian height and weight references
P21.3 J Zhang: Exploring the Quality of Life in Patients with Suspected Heart Failure
P21.4 Wei-Chu Chie: Cultural vs. Clinical characteristics and health-related quality of life of patients with primary liver cancer by using the EORTC QLQ-C30 and the EORTC QLQ-HCC18
P21.5 David Culliford: Exploring the use of body mass index as a covariate in survival models of total knee replacement
P21.6 Claus Dethlefsen: Assessing the effect of smoking legislation on incidence of cardiovascular diseases
P21.7 Mark Clements: New tricks for an old dog: using the delta method for non-linear estimators, with an application to competing risks in continuous time
P21.8 Charlotte Rietbergen: Evaluation of the hierarchical power prior distribution
P21.9 Christina Bamia: Exploring the estimator associated with the impact of a composite score of multiple binary exposures on continuous outcomes: An illustration using the Mediterranean Diet Score
P21.10 Lesley-Anne Carter: Removal of bias from incidence trend estimation using excess zero models
P21.11 Anders Gorst-Rasmussen: Using the whole cohort when analyzing case-cohort data - some practical experiences
P21.12 Alexia Savignoni: Analysis methods comparison for censored paired survival data. A study based on survival data simulations with application on breast cancer
P21.13 Sandra Waaijenborg: Identifying risk behavior for varicella infection using current status survival analysis
P21.14 Michal Abrahamowicz: When are interaction estimates confounded?
P21.15 Albert Sanchez-Niubo: Composite retrospective estimates of Drug Use Incidence from Periodic General Population Surveys in Spain
P21.16 Giota Touloumi: Piecewise linear Poisson regression models with unknown break-points
P21.17 Antonio Gasparrini: A general conceptual and statistical framework for exposure-time-response relationships based on distributed lag non-linear models
P21.18 Jenö Reiczigel: On the validity of power simulation based on Fleishman distributions
P21.19 Agnieszka Kieltyka: Prenatal, perinatal and neonatal risk factors for autism. A case-control study in Poland
P20.12 Kirsten Van Hoorde: Updating of polytomous risk prediction models based on sequential dichotomous modeling
improves the performance
P20.13 Sarah Seaton: Probability of survival for very preterm births: production and validation of a prognostic model.
P20.14 Sophie Ancelet: Using Bayesian model averaging to improve radiation-induced cancer risks predictions
P20.15 Hatef Darabi: Assessment of risk prediction and individualised screening of breast cancer among Swedish
postmenopausal women.
P20.16 Eva Janousova: Outcome prediction in schizophrenia patients based on image data
P20.17 Ikhlaaq Ahmed: Meta-analysis methods for examining the performance of a predictive test: going beyond the
average
P20.18 Ben Van Calster: How to assess discrimination performance of polytomous prediction models: review and
recommendations
P20.19 Urko Aguirre: Comparison of two different modelling techniques to determine parameters related to changes in
quality of life in colorectal cancer patients.
P20.20 Ulla B Mogensen: Predictive performance of random forest based on pseudo-values
P20.21 Verena Sophia Hoffmann: Finding cut-offs for continuous prediction models: an overview of methods and pitfalls
P20.22 Gareth Ambler: Validating Prediction Models In Small Datasets
P20.23 Paola M.V. Rancoita: Improving prognostic model development and assessment for survival data
P20.24 Urko Aguirre: Comparison of Logistic Regression and Machine Learning methods: an application to the Colorectal
Cancer stage prognosis.
P20.25 Daan Nieboer: Dynamic updating of prediction models: how to deal with heterogeneity between settings
P22.4 Rajvir Singh: Breast Feeding as a Time Varying - Time Dependent Factor for Birth Spacing: Multivariate Models with Validations and Predictions
P22.5 Pablo Martínez-Camblor: Expanded renal transplant: A multi-state model approach
P22.6 Christine Eulenburg: Fine and Gray approach versus cause-specific hazards: competing models or just two views of the same story?
P22.7 Zdenek Valenta: Autologous Stem Cell Transplant Study in Lymphoma Patients: Statistical Analysis of Multi-State Models
P22.8 Ian James: Exploratory survival analysis using longitudinal mixed-models
P22.9 Tomas Pavlik: Estimation of current cumulative incidence of leukaemia-free patients in chronic myeloid leukaemia
P22.10 Matthieu Resche-Rigon: Imputing missing covariate values in presence of competing risk
P22.11 David Dejardin: Frequentist Evaluation of Bayesian Methods for Survival Data
P22.12 Marie Vigan: Evaluation of estimation methods and tests of covariates in repeated time to event parametric models
P22.13 Kristin Ohneberg: The Cumulative Proportional Odds Model for Competing Risks
P22.14 Catherine Quantin: Flexible modeling in Relative Survival: additive vs multiplicative model
P22.15 Sarah Seaton: Modelling discharge from a neonatal unit: an application of competing risks
P22.16 Mark Rutherford: Using restricted cubic splines to approximate complex hazard functions in the analysis of time-to-event data
P22.17 Nan van Geloven: Correcting for a dependent competing risk in the estimation of natural conception chances
P22.18 Michael Lauseker: Models for the Subdistribution Hazard of a Competing Risk under Left Truncation - a Comparison of two Approaches
P22.19 Pierre Joly: Predictions and life expectancies in Illness-death model
P22.20 Arun Pokhrel: Estimation of avoidable deaths based on the theory of competing risks
P22.21 Sally R. Hinchliffe: The Impact of Under and Over-recording of Cancer on Death Certificates in a Competing Risks Analysis: A Simulation Study
P22.22 Kym Snell: Modelling and utilising the baseline hazard in prediction models of clinical outcomes: a missed opportunity
P22.23 Markus Pfirrmann: Explaining differences in post-transplant survival between two studies in chronic myeloid leukaemia through identification of predictive factors by a Cox proportional hazard cure model
P22.24 Mathieu Bastard: The Use of Latent Trajectories in Survival Models to Explore the Effect of Longitudinal Data on Mortality
P22.25 Lucie Biard: Telling curative from palliative effects of covariates in prognostic analysis in a population with a cured fraction: Application of a biological cure model to metastasis-free survival in uveal melanoma patients
P22.26 Nikos Pantazis: Performance of parametric survival models under non-random interval censoring: a simulation study
P22.27 Julie Boucquemont: The illness-death model to study progression of chronic kidney disease
P22.28 Maria Kohl: Proportional and non-proportional subdistribution hazards regression with SAS
P22.29 Mathilda Bongers: Multistate modeling in the analysis of cost-effectiveness of NSCLC treatments
P22.30 Rebecca Betensky: Computationally simple estimation and improved efficiency for special cases of double truncation
P22.31 Mike Bradburn: Sample size calculation for time-to-event outcomes in randomized controlled trials: An evaluation of standard methods
Abstracts – Oral Presentations
Sunday, 19 August – Pre Conference Courses
Course 1
Analysis of rare genetic variants in common diseases (Full-day)
Michael Nothnagel (1) and Bettina Kulle Andreassen (2)
(1) Institute of Medical Informatics and Statistics, Christian-Albrechts University, Kiel, Germany
(2) Institute of Basic Medical Sciences (IMB), Department of Biostatistics, University of Oslo, Norway
Technical progress has replaced decades-long scarcity by a close to
overwhelming mass of genetic data. This wealth of data offers unprecedented
chances for elucidating the role of genetic factors in the etiology of common
diseases, but it also causes novel issues in the analysis. Genome-wide
association studies (GWAS) of single-nucleotide polymorphisms (SNPs) in the
DNA have been a standard approach to detect common genetic factors, i.e.
those with a population frequency of at least one percent. Due to some
inherent limitations of GWAS and with the advent of next-generation DNA
sequencing (NGS) technologies, the focus has recently shifted to the analysis
of rare variants with population frequencies less than one percent.
The course will start with an introduction to basic ideas of genetic
epidemiology, including relevant biological and genetic fundamentals.
Concepts of and tools for GWAS will be covered in some detail in the second
part, including a discussion of the pros and cons of this type of study. In a third
part, we will focus on the analysis of rare variants, now routinely obtained from
sequencing whole genomes or parts thereof. We will present different methods
of variant collapsing and weighting in order to achieve reasonable power and
ways of utilizing family pedigree information.
Recent examples will be used for the motivation and illustration of these
topics.
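As a minimal illustration of the variant-collapsing idea mentioned above (not part of the course materials): a simple burden-type test in R, where rare variants in a gene are collapsed into one per-subject count and tested by logistic regression. All data and object names are simulated and hypothetical.
# Burden (collapsing) test sketch for rare variants in one gene
set.seed(42)
n <- 1000; m <- 20                               # subjects, rare variants
maf <- runif(m, 0.001, 0.01)                     # rare allele frequencies
G <- sapply(maf, function(p) rbinom(n, 2, p))    # genotype matrix (0/1/2)
y <- rbinom(n, 1, plogis(-1 + 0.5 * scale(rowSums(G))))  # simulated case-control status
burden <- rowSums(G)                             # unweighted collapsing score
fit <- glm(y ~ burden, family = binomial)        # single-df association test
summary(fit)$coefficients["burden", ]
Weighted versions simply replace rowSums(G) by a weighted sum, with weights chosen e.g. from allele frequencies.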
Course 2
Bayesian computing with INLA (Half-day)
Thiago Guerra Martins
Department of Mathematical Sciences, Norwegian University of Science and
Technology, Trondheim, Norway
In these lectures, I will discuss approximate Bayesian inference for a class
of models named 'latent Gaussian models' (LGM). LGMs are perhaps the
most commonly used class of models in statistical applications. It includes,
among others, most of (generalized) linear models, (generalized) additive
models, smoothing spline models, state space models, semiparametric
regression, spatial and spatiotemporal models, log-Gaussian Cox processes
and geostatistical and geoadditive models.
The concept of LGM is intended for the modeling stage, but turns out to be
extremely useful when doing inference as we can treat models listed above in
a unified way and using the same algorithm and software tool. Our approach
to (approximate) Bayesian inference is to use integrated nested Laplace
approximations (INLA). Using this new tool, we can directly compute very
accurate approximations to the posterior marginals. The main benefit of these
approximations is computational: where Markov chain Monte Carlo algorithms
need hours or days to run, our approximations provide more precise estimates
in seconds or minutes. Another advantage with our approach is its generality,
which makes it possible to perform Bayesian analysis in an automatic,
streamlined way, and to compute model comparison criteria and various
predictive measures so that models can be compared and the model under
study can be challenged.
In these lectures I will introduce the required background and theory for
understanding INLA, including details on Gaussian Markov random fields and
fast computations of those using sparse matrix algorithms. I will end these
lectures illustrating INLA on a range of examples in R (see www.r-inla.org).
Required background: Basic knowledge of Bayesian statistics. Optional:
Your own laptop with INLA preinstalled (www.r-inla.org).
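For orientation only (not part of the course): a minimal sketch of the kind of call used to fit a latent Gaussian model with the INLA package from www.r-inla.org. The data, model and all names below are illustrative assumptions, not material from the lecture.
# Fit a simple latent Gaussian model (Poisson counts, iid latent effect) with INLA
library(INLA)                         # install from www.r-inla.org
set.seed(1)
n <- 200
x <- rnorm(n)                         # fixed-effect covariate
id <- 1:n                             # index for an iid latent effect
eta <- 0.5 + 0.8 * x + rnorm(n, sd = 0.3)
dat <- data.frame(y = rpois(n, exp(eta)), x = x, id = id)
fit <- inla(y ~ x + f(id, model = "iid"),
            family = "poisson", data = dat,
            control.predictor = list(compute = TRUE))
summary(fit)                          # posterior marginals via nested Laplace approximations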
Course 3
Estimating treatment effects using longitudinal data (Half-day)
Miguel Hernán
Department of Epidemiology, Harvard School of Public Health, Boston, USA
The availability and use of observational data (electronic medical records, claims databases, registries, etc.) are increasing in medical research. However, a valid estimation of the causal effects of treatment from observational data requires strong assumptions regarding confounding and other potential biases. Estimating the effects of time-varying treatments in the presence of time-varying confounding factors also requires the use of appropriate analytic methods. The goal of this short course is to describe techniques for the estimation of causal treatment effects in longitudinal observational data.
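As a rough illustration of one such technique (a deliberate simplification of the time-varying setting the course addresses): stabilized inverse probability of treatment weights for a single treatment decision, in base R, with simulated data and hypothetical names. In the longitudinal case the weights become products of such terms over visits.
# Stabilized IPW for a point treatment (illustrative sketch only)
set.seed(7)
n <- 2000
L <- rnorm(n)                                  # measured confounder
A <- rbinom(n, 1, plogis(-0.3 + 0.8 * L))      # treatment depends on L
Y <- 1 + 0.5 * A + 0.7 * L + rnorm(n)          # outcome (true effect 0.5)
ps <- fitted(glm(A ~ L, family = binomial))    # propensity score
num <- ifelse(A == 1, mean(A), 1 - mean(A))    # numerator: marginal treatment probability
den <- ifelse(A == 1, ps, 1 - ps)
sw <- num / den                                # stabilized weights
msm <- lm(Y ~ A, weights = sw)                 # weighted (marginal structural) outcome model
coef(msm)["A"]                                 # in practice use robust/bootstrap standard errors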
Course 4
Analysis of interval-censored survival data (Half-day)
Philip Hougaard
Lundbeck, Copenhagen, Denmark
Interval-censored survival data occur when the time to an event is assessed
by means of blood samples, urine samples, X-ray or other screening methods
that cannot tell the exact time of change for the disease, but only that the
change has happened since the previous examination. This is in contrast to
standard thinking that assumes that the change happens at the time of the first
positive examination. Even though this screening setup is very common and
methods to handle such data non-parametrically in the one-sample case have
been suggested more than 25 years ago, it is still not a standard method.
However, interval-censored methods are needed in order to consider onset and
diagnosis as two different things, like when we consider screening in order to
diagnose a disease earlier. The reason for the low use of interval-censored
methods is that in the non-parametric case, analysis is technically more
complicated than standard survival methods based on exact times. The same
applies to proportional hazards models. The talk will give an introduction to this
type of data, including a discussion of the issues. The statistical theory will not
be dealt with in detail, but high-level differences to results for standard right-censored survival data will be presented. Parametric, non-parametric and semi-parametric (proportional hazards) models will all be covered. The talk will
emphasize the applications, using examples from the literature as well as from
my own experience regarding development of microalbuminuria among Type 2
diabetic patients.
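A minimal sketch (not from the course) of how interval-censored data of the kind described above can be analysed parametrically with the R survival package; the screening schedule and all names are assumed for illustration.
# Parametric Weibull model for interval-censored onset times
library(survival)
set.seed(123)
n <- 300
true_t <- rweibull(n, shape = 1.5, scale = 5)   # latent onset times
visits <- 1:10                                  # scheduled examinations
left  <- sapply(true_t, function(t) if (any(visits < t)) max(visits[visits < t]) else NA)
right <- sapply(true_t, function(t) if (any(visits >= t)) min(visits[visits >= t]) else NA)
# NA left = onset before first visit (left-censored); NA right = no positive visit (right-censored)
fit <- survreg(Surv(left, right, type = "interval2") ~ 1, dist = "weibull")
summary(fit)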
Course 5
Latent class mixed models for longitudinal data and time-to-event data
(Half-day)
Hélène Jacqmin-Gadda (1,2) and Cécile Proust-Lima (1,2)
(1) INSERM U897, Bordeaux, France
(2) Bordeaux Segalen University, Bordeaux, France
Latent class mixed models explore latent profiles of trajectories in a heterogeneous population. They combine random-effects model theory, to account for the individual correlation in the repeated measures, with latent class model theory, to discriminate homogeneous latent groups when modelling trajectories of a longitudinal outcome. Extended to jointly model a longitudinal outcome and a time-to-event, they also provide a computationally attractive alternative to the standard joint modelling approach, the shared random-effect model.
The first part of this course will introduce the latent class mixed models, the
estimation methods and the research questions they may address.
The second part of this course will be dedicated to the joint latent class
models. In addition to the theory, the estimation and the predictive dynamic
tools that can be derived from them, a specific interest will be on methods to
evaluate their goodness-of-fit and their predictive ability.
Each concept will be illustrated through examples from cognitive aging
studies as well as cancer studies. Finally, implementation and estimation of
these models will be described within functions of the R package lcmm.
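A minimal sketch of the type of call involved, assuming a long-format data frame with outcome Y, measurement time and subject ID (all simulated and hypothetical); the course itself will describe the lcmm functions in detail.
# Latent class linear mixed model with the R package lcmm (illustrative only)
library(lcmm)
set.seed(9)
d <- data.frame(ID = rep(1:100, each = 5), time = rep(0:4, 100))
d$Y <- 10 - 0.5 * d$time + rep(rnorm(100, sd = 1.5), each = 5) + rnorm(nrow(d), sd = 2)
m1 <- hlme(Y ~ time, random = ~ time, subject = "ID", data = d)        # one class
m2 <- hlme(Y ~ time, mixture = ~ time, random = ~ time,
           subject = "ID", ng = 2, data = d, B = m1)                   # two latent classes
summarytable(m1, m2)   # compare fits and class sizes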
Monday, 20 August
Morning sessions (IP1, I1 , C1 – C4)
IP1 Plenary session
IP1
Concurrent control: key condition or sacred cow?
Stephen Senn
Competence Center for Methodology and Statistics, CRP Santé, Luxembourg
Recent dramatic retractions in the field of genome-wide association studies
have underlined the dangers of ignoring concurrent control and indeed the
difficulties in establishing exactly what such control would be. On the other
hand an often voiced criticism of clinical trials is that the obsession with
concurrent control is wasteful of resources and delays progress in medical
research.
For the applied statistician, a key relevant concept in maintaining a balance
between an obsessive and wasteful pessimism that maintains that no use can
be made of historical controls, and a reckless optimism that courts mistaking
noise for signal, is that of the bias-variance trade-off. The problem is that this is
a concept that solves all difficulties in theory but gives no sure guide for how to
proceed in practice.
I shall consider the value of control using some simple frequentist and
Bayesian models but also make liberal use of case studies. Amongst topics I
aim to cover are how one should approach using historical controls and
whether this should involve random effect models and if so how, network metaanalysis, the vexed question of versions of the treatment and the implications
for blinding, add-on trials and medical decision- making in general. I shall also
consider the ethical criticisms of concurrent control and will suggest that a
Rawlsian approach to treatment entitlement suggests equipoise is irrelevant.
I conclude that concurrent control is not essential for making some sort of an
inference but is usually valuable if you want to make a good one.
I1 Modelling infectious disease
I1.1
Modeling disease spread for insight, foresight, hindsight...
Gianpaolo Scalia Tomba (1), Birgitte Freiesleben de Blasio (2)
(1) Dept of Mathematics, University of Rome Tor Vergata, Italy, (2) Department of Biostatistics, Institute of Basic Medical Sciences, University of Oslo, Norway
Mathematical models for disease spread can be simple or incorporate many details, build on realistic population structures and data or assume homogeneity, be stochastic or deterministic, admit analytical treatment or only execution in computer simulations...
All these variants may be studied for their own sake, but when some practical interpretation of results is required, it is desirable that a model be, in Einstein's words, "as simple as possible, but not simpler...". It is then useful to distinguish between possible purposes of modelling. Some models are constructed to study qualitative behaviour, to further insight into the factors that most influence disease spread. Other models are formulated to predict e.g. how fast a new pandemic may spread over the world. But models may also be used to study what really happened during an epidemic and how successful interventions really were.
In the talk, a brief introduction to modelling disease spread will be given, followed by an account of how modelling has been used to estimate how large the effect of antivirals and vaccination was during the 2009 A(H1N1) pandemic in Norway and to show the importance of timing of interventions.
Reference:
de Blasio BF, Iversen BG, Scalia Tomba G (2012) Effect of Vaccines and Antivirals during the Major 2009 A(H1N1) Pandemic Wave in Norway – And the Influence of Vaccination Timing. PLoS ONE 7(1): e30018. doi:10.1371/journal.pone.0030018
I1.2
Estimation of vaccine efficacy in a disease transmission framework using outbreak data
Michiel van Boven, Susan Hahné, Helma Ruijs, Jacco Wallinga, Phil O'Neill
Centre for Infectious Disease Control, National Institute for Public Health and the Environment, Bilthoven, The Netherlands
Vaccine efficacy is usually estimated by a comparison of the infection attack rates in vaccinated and unvaccinated persons (the cohort method), or by a comparison of the vaccination status of infected persons with the vaccination coverage in the population (the screening method).
These methods are easy to apply but make the unrealistic assumption that every person has had a fixed amount of exposure, independent of the infection states of others. Here we present a method, based on infectious disease transmission models, to estimate vaccine efficacy together with the levels of exposure. We use a Bayesian framework which estimates the epidemiological parameters together with the (unobserved) infection graphs. The methodology is applied to ten outbreaks of mumps virus in primary schools in the Netherlands. The analyses show that
(i) mumps virus is moderately transmissible in the setting of primary schools (R0hat=2.49; 95% CrI: 2.36-2.63),
(ii) the vaccine is highly effective in preventing infection in a contact that would have resulted in transmission if the contacted individual was unvaccinated (VEhat=0.933; 95% CrI: 0.908-0.954),
(iii) missing vaccination and infection information can be imputed effectively, and
(iv) schools with only a handful of infections do not allow one to estimate vaccine efficacy with any precision because escape from infection in vaccinated persons may have been caused by lack of exposure.
I will discuss a number of open problems and directions for future developments.
I1.3
Epidemiological and evolutionary dynamics of HIV-1 virulence
Christophe Fraser
Department of Infectious Disease Epidemiology, School of Public Health, Imperial College London, UK
Mathematical modelling has proven a powerful tool for integrating knowledge about the complex dynamics of HIV-1 at multiple scales. Here, I will set out a hypothesis about the evolution of HIV-1 virulence that emerged from an epidemiological calculation. I will then describe a series of recent studies that tested this hypothesis, and which the hypothesis passed. Several proposed mechanisms of virulence evolution consistent with this hypothesis will be explored. I will also speculate about consequences for public health, and more specifically the evolutionary fate of virulence.
References:
Fraser et al, PNAS 2007; Hollingsworth et al, JID 2008; Hollingsworth et al, PLoS Pathogens 2010; Shirreff et al, PLoS Computational Biology 2011; Lythgoe et al, in press.
C1 Causal inference I
C1.1
Covariates and Confounding in Instrumental Variable Analyses
Vanessa Didelez
School of Mathematics, University of Bristol, Bristol, UK
Instrumental variables (IVs) provide an approach for consistent inference on
causal effects even in the presence of unmeasured confounding. Such
methods have been used in the context of Mendelian randomisation, where the
instrument is one (or several) genotype(s); as well as in pharmacoepidemiological contexts, where the physician's (or hospital's) preference is
used as an instrument. In both situations it is common that covariates are
available, even if these are deemed insufficient to adjust for all confounding.
In this talk I will address the question of when available covariates must, can,
or should better not be used in an IV analysis. Relevant issues are whether
these covariates are effect modifiers, whether they are prior to or potentially
affected by the instrumental variable, and whether they affect the strength of
the instrument. The resulting biases or the loss/gain in efficiency will be
illustrated by simulation studies.
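As a generic illustration of where covariates enter an instrumental variable analysis (not the speaker's analysis): a two-stage least squares sketch with and without adjustment for a measured baseline covariate, using AER::ivreg on simulated data; all variable names are hypothetical.
# IV (two-stage least squares) with and without a covariate
library(AER)                      # provides ivreg()
set.seed(1)
n <- 5000
z <- rbinom(n, 2, 0.3)            # instrument (e.g. genotype)
c1 <- rnorm(n)                    # measured baseline covariate
u <- rnorm(n)                     # unmeasured confounder
x <- 0.4 * z + 0.3 * c1 + 0.8 * u + rnorm(n)    # exposure
y <- 0.5 * x + 0.4 * c1 + 0.8 * u + rnorm(n)    # outcome (true effect 0.5)
d <- data.frame(y, x, z, c1)
fit_no_cov   <- ivreg(y ~ x | z, data = d)
fit_with_cov <- ivreg(y ~ x + c1 | z + c1, data = d)
cbind(no_cov = coef(fit_no_cov)["x"], with_cov = coef(fit_with_cov)["x"])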
C1.2
Natural effect propagation
Kjetil Røysland
University of Oslo, Oslo, Norway
The major role of statistics in clinical trials and epidemiological effect studies
has been to carry out formal evaluations to judge whether we can "prove'' a
treatment effect, and not so much "why'' there would be an effect. The latter
question would involve a closer look at underlying mechanisms, or even chains
of such. Much statistical analysis, however, treats biology as just a "black box''.
In real life situations, for instance clinical trials, there is often more data
available that could be used for the purpose of opening this black box. This
could be through analyses of causal pathways and mediators, i.e. examination
of how treatment effects propagate through the underlying system. In order to
learn as much as possible from available data, there should be more attention
in statistics to such causal exploration, not only to confirmatory analyses. The
meaning of a mechanistic understanding, however, depends on the particular
scientific setting. It is difficult to provide a general notion of pathway effects that
fits to most scientific contexts. Judea Pearl's natural direct and indirect effects,
based on nested counterfactuals, do provide a fairly transparent such notion.
We will discuss the more general path specific natural effects in a formal way.
These could be understood in terms of interventions or as signal sensing. We
will furthermore consider examples with longitudinal data that are motivated by
medical applications.
C1.3
A comparison of statistical methods for benchmarking clinical centers in terms of quality of care
Machteld Varewyck, Stijn Vansteelandt, Els Goetghebeur
Ghent University, Ghent, Belgium
Inspired by a study on quality assurance of rectal cancer (Goetghebeur et al., 2011), we will discuss various statistical methods for benchmarking centers on a dichotomous quality indicator. Adjustment for differential case-mix between centers will be either based on standardization under a fixed or random center effects model which incorporates patient characteristics, on inverse weighting by the propensity to belong to the observed center (on the basis of patient characteristics), or using doubly robust estimation procedures which can be viewed as a compromise of both sets of approaches. We will discuss the relative advantages and disadvantages of the different approaches. Moreover, we will use simulation studies to evaluate their performance in terms of their ability to correctly classify centers of different quality, thereby focussing on realistic settings where some centers may contribute low numbers of patients.
C1.4
Comparing population effects of different intervention policies, using a
combination of inverse probability weighting and G-computation
Saskia le Cessie (1), Kim Boers (1), Ben Willem Mol (2), Sicco Scherjon (1)
(1) Leiden University Medical Center, Leiden, The Netherlands, (2) Academic Medical Center, Amsterdam, The Netherlands
In clinical trials, the effect of an intervention is often compared to a ‘wait and
see' policy. After the results of such a study are known, still questions can
remain about the optimal strategy for different subgroups of patients, and the
optimal timing to switch from ‘wait and see' to intervention. We encountered
this in a clinical trial (DIGITAT) (Boers, BMJ, 2010) where induction of labor for
pregnancies with suspected intrauterine growth restriction beyond 36 weeks
gestation was compared with an expectant approach with careful surveillance.
The study showed no significant differences in adverse neonatal outcome.
However neonatal admissions were higher in the intervention group while
infants were more growth restricted in the expectant group. This yielded the
question whether there is an optimal timing of induction for these pregnancies.
Here we discuss how population effects of different induction strategies can be
estimated and compared (for example the effect of induction after a certain
gestational age, or induction if the expected weight percentile is below a certain
limit). This is done using causal methods for deriving optimal treatment regimes
(Cain Int J of Biostat, 2010). Data of the DIGITAT trial and data of the women
who refused to be randomized are used. We censor subjects when they are
not following the proposed treatment regime and use a combination of inverse
probability weighting and G-computation to estimate the outcome for women
who are induced after a certain period of expectant management. Robust
confidence intervals are calculated and compared with bootstrapped intervals.
C1.5
Implementation of G-computation with complex longitudinal data
Willem M. van der Wal, Rutger M. van den Bor, Kit C.B. Roes, Marinus J.C.
Eijkemans
Julius Center, University Medical Center Utrecht, Utrecht, The Netherlands
When estimating causal effects using observational data, a correction for
measured covariates that cause bias either through confounding, or by
inducing missing values or dropout, can be made by fitting a marginal structural
model (MSM). The most widely used method to fit an MSM is inverse
probability weighting (IPW). As we will illustrate, IPW suffers from substantial
small sample bias. An alternative to IPW that does not have this drawback is
G-computation. However, the literature is lacking in describing how G-computation could be performed in a realistic complex longitudinal setting. We describe how G-computation can be performed in practice using readily available standard software, extending the method that Snowden & Mortimer (2011) presented for a simple point treatment setting. We will demonstrate the use of this method using a real data example from the field of nephrology, in which both confounding and informative censoring is present.
J. M. Snowden, K. M. Mortimer (2011). Implementation of G-computation on a simulated data set: demonstration of a causal inference technique. American Journal of Epidemiology 173(7): 731-738.
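A point-treatment G-computation sketch in the spirit of Snowden & Mortimer (2011), using only standard R; the complex longitudinal extension discussed in the abstract is more involved. Data are simulated and all names are hypothetical.
# G-computation for a point treatment: model the outcome, then predict it for
# every subject under treatment and under control and average the difference.
set.seed(2)
n <- 5000
L <- rnorm(n)                                       # confounder
A <- rbinom(n, 1, plogis(-0.2 + L))                 # treatment
Y <- rbinom(n, 1, plogis(-1 + 0.7 * A + 0.9 * L))   # binary outcome
dat <- data.frame(Y, A, L)
qmod <- glm(Y ~ A + L, family = binomial, data = dat)   # outcome (Q) model
p1 <- predict(qmod, newdata = transform(dat, A = 1), type = "response")
p0 <- predict(qmod, newdata = transform(dat, A = 0), type = "response")
mean(p1) - mean(p0)    # marginal risk difference; bootstrap for confidence intervals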
C2 Adaptive clinical trials
C2.1
Treatment Selection in Seamless Phase II/III Trials Incorporating Information on Short-term Endpoints
Tim Friede (1), Cornelia Ursula Kunz (2), Nicholas Parsons (2), Susan Todd (3), Nigel Stallard (2)
(1) University Medical Center Göttingen, Göttingen, Germany, (2) The University of Warwick, Coventry, UK, (3) University of Reading, Reading, UK
Due to their potential to save development costs and to shorten time-to-market of a new treatment, adaptive seamless phase II/III designs with treatment selection at an interim analysis have become increasingly more attractive in recent years. If the primary endpoint is observed only after long-term follow-up it may be desirable to use short-term endpoint data at the interim analysis to select a treatment. Different methods have been proposed for selection of the treatment that will then continue along with the control group. If at least some long-term endpoint data are available at the time of the interim analysis, they might be used together with the short-term endpoint data to obtain an estimate of the treatment effect upon which treatment selection can be based. Otherwise the treatment selection may be based on the short-term endpoint data alone. While appropriate methods to combine pre and post adaptation data ensure control of the family-wise type I error rate in the strong sense, the power of the different approaches to selection of the best treatment differs depending on several assumptions. In this talk we present the results of a formal comparison of different methods for treatment selection based on analytical results and a simulation study, together with a summary of the strengths and weaknesses of the approaches. Based on these results, we show how existing methods can be improved, increasing both the probability of selecting the most effective treatment and the power.
C2.2
Comparative Bayesian escalation designs
Emmanuel Lesaffre (1,2), David Dejardin (2), Paul Hamberg (3), Jaap Verweij (1)
(1) Erasmus MC, Rotterdam, The Netherlands, (2) KU Leuven, Leuven, Belgium, (3) Sint Franciscus Gasthuis, Rotterdam, The Netherlands
The primary objective of a Phase I dose escalation cancer study is to find the dose at which the drug will be tested in the subsequent phase II and III trials. In order to maximize the effectiveness of the treatment, drugs to treat cancer are combined with each other. The combination usually associates drugs that have different mechanisms of action against the disease.
The vast majority of the phase I dose escalation studies for single agents and combinations of agents implement a 3+3 dose escalation scheme to find the MTD. This phase I design has been criticized for treating too many subjects at suboptimal doses and providing a poor MTD estimate. This design produces unreliable estimation of the true rate of toxicity at the optimal dose. One cause of unreliability is the nature of Phase I subjects: these subjects usually have multiple tumor types and are in advanced stages of disease. Hence, the toxicity levels observed in this population cannot be generalized to the general population.
We propose a randomized Bayesian dose escalation design for combinations of drugs that takes advantage of the fact that a drug is added to a standard treatment to obtain, via Bayesian estimation, an improved estimation of the MTD and the toxicity level at the MTD.
The proposed design implements a randomization between standard treatment and the combination regimen for which we want the dose. We estimate the difference between the toxicity of the control and the combination to search for the MTD.
C2.3
A Bayesian Approach to Dose-Finding Studies for Cancer Therapies: Incorporating Later Cycles of Therapy
Karen Pye, Anne Whitehead
Lancaster University, Lancaster, UK
Existing Bayesian designs for Phase 1 dose-finding studies can be based on fitting a smooth parametric curve to the relationship between dose and the risk of a DLT (Dose Limiting Toxicity). Prior information on model parameters is combined with binary observations of DLTs during the first cycle of therapy to update the model parameters and to choose a safe dose to allocate to the next cohort. Results from just one cycle are used and later observations are ignored.
To incorporate data from later cycles, a new approach based on interval-censored survival methods has been developed within a Bayesian decision procedure. This considers the relationship between the risk of a DLT during a particular cycle conditional on having no DLT in any previous cycle at that dose level, allowing for different risks of DLT in each cycle. The first cohort is assigned doses according to prior belief and the second cohort according to prior belief plus responses of the first cohort in the first cycle. Allocation of doses is then based on DLTs observed across all completed cycles for all subjects.
A simulation study has been conducted to compare this new method with the conventional approach for dose-finding. Results show that the interval-censored survival model induces faster updating of the current estimate of the MTD (Maximum Tolerated Dose) so that trials are generally shorter with fewer patients whilst keeping the same level of accuracy.
C2.4
Bayesian Outcome-Adaptive Randomization in Clinical Trials
J. Jack Lee
University of Texas MD Anderson Cancer Center, Houston, Texas, USA
Outcome-adaptive randomization (AR) has been proposed in clinical trials to assign more patients to better treatments based on the interim data. Bayesian framework provides a platform for continuous learning and, hence, is ideal for implementing AR. However, different views are still prevalent in medical and statistical communities on how useful AR really is. Clinical trials should be designed with the goals of maintaining the type I error rate, achieving a specified power, and providing better treatments to patients both inside and outside the trial. Generally speaking, equal randomization (ER) requires a smaller sample size and yields a smaller number of non-responders than AR to achieve the same type I and type II errors. Conversely, AR produces a higher overall response than ER by assigning more patients to the better treatments as the information accumulates in the trial. ER is preferred when the patient population outside the trial is large. AR is preferred when the difference in efficacy between treatments is large or when limited patients are available outside the trial. The equivalence ratio of outside versus inside trial populations can be computed when comparing the two randomization approaches. Dynamic graphics and simulations will be presented to evaluate the relative merits of AR versus ER. A biomarker-based Bayesian adaptive design for selecting treatments, biomarkers, and patients for targeted agent development will be illustrated in the BATTLE trial for patients with non-small cell lung cancer.
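A toy sketch of the outcome-adaptive principle described above for two arms with a binary response, using Beta posteriors in base R; this is purely illustrative and is not the BATTLE design or any design proposed in the abstract.
# Toy Bayesian outcome-adaptive randomization: update Beta posteriors after each
# patient and allocate the next patient by a posterior (Thompson-type) draw.
set.seed(3)
true_p <- c(A = 0.25, B = 0.45)
succ <- fail <- c(A = 0, B = 0)            # Beta(1,1) priors
alloc <- character(200)
for (i in seq_along(alloc)) {
  draws <- rbeta(2, 1 + succ, 1 + fail)    # one posterior draw per arm
  arm <- which.max(draws)                  # allocate to the arm that looks better
  alloc[i] <- names(true_p)[arm]
  y <- rbinom(1, 1, true_p[arm])
  succ[arm] <- succ[arm] + y
  fail[arm] <- fail[arm] + (1 - y)
}
table(alloc)                               # allocation drifts towards the better arm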
C2.5
Phase I dose finding methods using longitudinal data and proportional odds model in oncology
Adelaide Doussau (1), Rodolphe Thiebaut (1), Xavier Paoletti (2)
(1) Bordeaux University Hospital, Univ. Bordeaux, Bordeaux, France, (2) Curie Institute, Inserm U900, Paris, France
Introduction: In oncology, the optimal dose is typically defined as the dose associated with some level of severe toxicity (DLT) during the first cycle of treatment, although toxicity is repeatedly measured, on an ordinal scale.
Material and methods: We propose a new dose finding method using longitudinal measurements of ordinal toxic side events using a proportional odds model (POM) to identify the optimal dose and to detect late drug effects. Optimal dose is then the dose producing a target rate of DLT per cycle. We compare the performance of our approach to a parallel-group design analyzed with a POM and to the continual reassessment method (CRM), and mimic real trials by introducing a censoring process. Operating characteristics are mainly evaluated in terms of correct identification of the target dose and power to detect a time effect.
Results: After a mean sample size of 28 patients, estimates of the POM model can be obtained in more than 95% of the simulations with substantial gains: in a scenario without time effect, the target dose is recommended in 52% of the simulations with the usual CRM and between 66% and 76% of the simulations with the longitudinal POM model. In the presence of a strong time trend for the risk of toxicity (OR=1.6 per extra cycle), the power was greater than 85%.
Conclusions: Using the longitudinal POM is feasible in phase I dose finding trials in oncology, increases the ability of picking up the right dose and provides a robust tool to detect late effects.
C3 Bioinformatics
C3.1
Challenges associated with detecting copy number variants using depth of coverage with next-generation sequencing technology
Shu Mei Teo (1), Yudi Pawitan (3), Agus Salim (2)
(1) NUS Graduate School for Integrative Sciences and Engineering, Singapore, Singapore, (2) Saw Swee Hock School of Public Health, National University of Singapore, Singapore, Singapore, (3) Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
Analyzing next generation sequencing (NGS) data for copy number variations (CNVs) is a new and challenging field, with no standard protocols or quality control measures. Depth of coverage (DOC) is one of the methods used to detect CNVs with NGS data, where a lower than expected DOC indicates deletion and a higher than expected DOC indicates duplication. The algorithm relies heavily on the assumption that the sequencing process is uniform, but this has been shown to be unrealistic due to factors such as GC-content. Also, the majority of current DOC algorithms require pre-binning the number of reads into non-overlapping windows of a fixed size. One problem is that the choice of bin affects all downstream analysis and it is not clear if there is an optimal bin size. Using real data from the 1000 genomes project, we investigate whether GC-content correction and pre-filtering reads using the PHRED score will improve the sensitivity of the DOC algorithm. We also introduce a novel concept of estimating DOC using per-base fragment counts that avoids the need for data binning.
The results show that GC-correction does not have much effect on sensitivity but it decreases the overall variance of the data and should improve specificity. Filtering based on the PHRED score does not seem to be crucial. Last but not least, the fragment method has higher sensitivity as compared to the binning method used by current DOC algorithms, especially for smaller CNVs.
C3.2
Integration of multiple genome wide data sets in clinical risk prediction models
Stefanie Hieke (1), Thomas Hielscher (2), Richard F. Schlenk (3), Martin Schumacher (1), Axel Benner (2), Lars Bullinger (3), Harald Binder (4)
(1) Institute of Medical Biometry and Medical Informatics, University Medical Center Freiburg, Freiburg, Germany, (2) Division of Biostatistics, German Cancer Research Center, Heidelberg, Germany, (3) Department of Internal Medicine III, University Hospital of Ulm, Ulm, Germany, (4) Institute of Medical Biostatistics, Epidemiology and Informatics, University Medical Center Johannes Gutenberg, Mainz, Germany
High-throughput microarray technology allows various molecular features to be measured in parallel. Integration of such multiple genome wide data sets in risk prediction models with regard to clinical endpoints could potentially help to improve therapy management for future patients. We systematically investigate statistical strategies to connect several molecular sources with partial overlap in the biological samples. To take biological hierarchies into account, we adapt an approach which considers first one molecular source. We keep the information from this source fixed in the model when incorporating the second source. We illustrate this strategy in an application to survival data from acute myeloid leukemia patients, considering microarray-based gene expression profiling and single nucleotide polymorphism microarrays with relatively small overlap in the biological samples. While each of the molecular sources could be considered as first or second source statistically, we will highlight how a particular combination corresponds to relevant biological questions. Specifically, certain molecular entities are seen to only emerge in clinical risk prediction signatures after taking the other molecular levels explicitly into account. These results indicate how in general statistical procedures can be adapted for connecting different molecular sources according to the underlying biology, resulting in a potentially improved basis for individual therapy management.
C3.3
Incorporation of Prior Biological Knowledge in Bayesian Variable Selection of Genomic Features
Veronika Rockova (1), Emmanuel Lesaffre (1,2)
(1) Department of Biostatistics, Erasmus University, Rotterdam, The Netherlands, (2) L-BioStat, Catholic University Leuven, Leuven, Belgium
Bayesian variable selection methods have now been customarily used to select features relevant/predictive for disease phenotype, primarily due to their flexibility in handling the "large p, small n" problem. In genomic applications we are very often provided with complementary biological information regarding: (a) the likelihood of association between the feature and the outcome (implied e.g. by DNA characteristics when inferring regulatory mechanisms), (b) evidence from previous studies, (c) grouping of functionally and biochemically related predictors that belong to the same pathway. Here we set out to investigate how this prior information can be incorporated in Bayesian variable selection in a flexible manner.
We propose a hierarchical prior construction, which extends the Normal Exponential Gamma prior by letting the scale parameter depend on a stochastic linear combination of prior "association scores". The purpose is to achieve sufficient adaptability, where coefficients of the features with a higher "likelihood of association" are shrunk to a lesser extent. Furthermore, variables within one pathway are allowed to share one common shrinkage parameter, which encourages grouped selection and similarity of the estimated coefficients.
We have developed a generalized EM algorithm for maximum a posteriori estimation, which offers huge computational savings as compared to the MCMC alternative. We have considered the benefits of our proposed method in detecting genes predictive of achieving complete remission in probit regression, using information on pathway membership from online databases. A second application deals with predicting functional targets of microRNAs, where prior information on the association is available from multiple online sources.
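This is not the authors' NEG-prior EM algorithm, but the idea of letting prior "association scores" modulate shrinkage has a simple frequentist analogue: feature-specific penalty factors in a penalized regression. A hedged sketch with glmnet on simulated data (all names and scores hypothetical):
# Differential shrinkage: higher prior association score -> weaker penalty
library(glmnet)
set.seed(4)
n <- 200; p <- 500
X <- matrix(rnorm(n * p), n, p)
beta <- c(rep(0.8, 10), rep(0, p - 10))              # 10 truly relevant features
y <- rbinom(n, 1, plogis(X %*% beta))
score <- c(runif(10, 0.6, 1), runif(p - 10, 0, 0.4)) # prior association scores in [0,1]
pf <- 1 - 0.8 * score                                # map scores to penalty factors
fit <- cv.glmnet(X, y, family = "binomial", penalty.factor = pf)
sum(coef(fit, s = "lambda.min")[-1] != 0)            # number of selected features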
C4 Multistate models
C4.1
C3.4
Multistate models with multiple time scales
Participant Identification in Genetic Association Studies: Methods and Practical Bendix Carstensen1, Simona Iacobelli2
Implications
1
Steno Diabetes Center, Gentofte, Denmark, 2Universita Tor Vergata, Roma,
Nuala A. Sheehan, Nicholas Masca, Paul R. Burton
Italy
University of Leicester, Leicester, UK
The normal approach to multistate modelling is to assume a common
A recent method [1] was proposed that purported to detect whether a given underlying time-scale (Markov property) for all transitions between states. This
individual contributed to a particular genomic mixture. This prompted grave greatly simplifies calculation of transition probabilities, because they all reduce
concern about the public dissemination of aggregate statistics from genome- to multiplication of infinitesimal transition probability matrices, which are easily
wide association studies and led to big changes in policy about general derived from fitted models for the transition intensities. This approach is
accessibility to such statistics. It is of clear scientific importance that these data implemented in e.g. R-packages mstate and etm.
be shared widely, but the confidentiality of study participants must not be However, in studies with progression through disease states it is untenable
compromised. The issue of what summary genomic data can safely be posted from a clinical point of view to assume that transitions rates do not
on the web is only addressed satisfactorily when the theoretical underpinnings (additionally) depend on either time of entry into a state or time spent in a state
of the proposed method are clarified and its performance evaluated in terms of or even both.
dependence on underlying assumptions.
In this paper we will argue that whether this is the case or not, choice of
The original method raised a number of concerns and several alternatives timescale(s) is an empirical question, not something to be decided a priori.
have since been proposed including a simple linear regression approach [2]. Thus it should be routinely checked in multistate modelling which timescale(s)
Here we suggest a generalised estimating equation (GEE) approach [3] provide the best description of the transition rates.
enabling inferences that are more robust to approximation of the We will use an example from bone marrow transplant in leukaemia treatment to
variance/covariance structure and can accommodate linkage disequilibrium.
illustrate how this can be checked using simple parametric (spline) models for
We affirm that, in principle, it is possible to determine that a 'candidate' individual has participated in a study, given a subset of aggregate statistics from that study. However, the methods depend critically on certain key factors including: the ancestry of participants in the study; the absolute and relative numbers of cases and controls; and the number of SNPs.
1. Homer N et al (2008) PLoS Genetics 4(8):e1000167.
2. Visscher PM & Hill WG (2009) PLoS Genetics 5(9):e1000628.
3. Masca N et al (2011) International Journal of Epidemiology 40:1629-1642.
C3.5
Variant detection in 3D pooled DNA samples
Bart Van Rompaye, Els Goetghebeur
Ghent University, Ghent, Belgium
In genetics, massively parallel sequencing allows researchers to generate large amounts of data in a fast and inexpensive manner. One attempt to make optimal use of this powerful technique pools DNA samples before amplification and combines them into 3-dimensional designs, hoping to gain power for detecting rare variants. These variants are important disease-causing alleles under the 'rare variants, common disease' hypothesis. The correct analysis of such data is not trivial, however, as it ideally translates all steps of the pooling and amplification process into a statistical model.
We present the most common data complications in such pooled samples, and ways of incorporating them in the data likelihood. Basic challenges originate from the varying amplification factors, problems in base calling and the need to efficiently combine different sources of information (such as forward and reverse reads of DNA strands, different pools, ...). We show how a variation on maximum likelihood estimation leads to identification of the model parameters, allowing one to estimate the frequency of the variant in the population. We discuss the power of the design to detect variants, and compare it to the power
the rates, and how these models allow standard likelihood-ratio tests for the relevant hypotheses.
We will demonstrate the necessary practicalities as well as the final graphs needed to convey the relevant clinical message. Moreover, we will show how this type of modelling facilitates both the reporting of estimated rates and the computation of transition probabilities in the general case with multiple timescales influencing rates.
The emphasis of the talk will be on the practicalities of fitting the models and transforming the results into sensible reporting.
C4.2
Risk factors of rehospitalisation and death for acute heart failure using multistate survival models
Jiri Jarkovsky1, Simona Littnerova1, Jiri Parenica2, Marian Felsoci2, Roman Miklik2, Jindrich Spinar2
1Masaryk University, Brno, Czech Republic, 2University Hospital, Brno, Czech Republic
Acute heart failure (AHF) is a serious syndrome with both a high risk of in-hospital mortality and low survival during follow-up. In addition to these terminal states, there is also, after AHF, a high risk of rehospitalisation for AHF, stroke and other serious but reversible events that influence patients' quality of life and the financial demands of AHF treatment. The aim of our work was to analyse factors influencing the risk of these multistate events and to provide risk stratification of patients after a first hospitalization for AHF, using data from the Czech national AHF registry AHEAD.
The AHEAD Main registry includes 4,153 first hospitalizations of patients for AHF from 7 centers in the Czech Republic with 24-hour cathlab service. A fully consecutive group of 623 patients discharged after hospitalization for AHF from the University Hospital Brno, with a median follow-up of 37.5 months, was used for the analysis of risk factors for rehospitalisation and death using multistate survival models. In the available data on subsequent hospitalization
for cardiovascular events, we found from 1 to 9 rehospitalisations in 22% of patients; the data were also verified against the Czech Database of Death Records, a database administered by the Czech Statistical Office, and 45.9% of patients were found to have died during follow-up.
Markov models with rehospitalisation for a cardiovascular event as a recurrent state and death as the terminal state were adopted for the analysis.
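A minimal sketch, in Python and with purely illustrative transition intensities (not the AHEAD estimates), of how transition probabilities in a time-homogeneous Markov multistate model follow from an intensity matrix:
import numpy as np
from scipy.linalg import expm
# States: 0 = alive post-discharge, 1 = rehospitalised (recurrent), 2 = dead (absorbing)
# Illustrative yearly transition intensities; each row sums to zero.
Q = np.array([[-0.40,  0.30,  0.10],
              [ 0.50, -0.70,  0.20],
              [ 0.00,  0.00,  0.00]])
for t in (1.0, 3.0):                 # years since first discharge
    P = expm(Q * t)                  # P(t) = exp(Qt) for a time-homogeneous model
    print(t, P[0])                   # state-occupation probabilities starting from state 0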
C4.3
Semi-parametric Estimation of Quality Adjusted Lifetime Distribution in Semi-Markov Illness-Death Models
Biswabrata Pradhan, Anup Dewanji
Indian Statistical Institute, Kolkata, India
Quality adjusted lifetime (QAL) is an important measure, which incorporates
both quality and duration of life, used for comparison of different treatment
choices in many clinical trials. Estimation of QAL distribution is an important
issue in such situations. In this work, we consider semi-parametric estimation
of QAL distribution in different illness-death models. Hazard rates for the
sojourn times are modelled using Cox's proportional hazards regression model.
In the proposed approach, we write down the theoretical expression for the
QAL distribution in terms of sojourn time distributions in each health state. The
regression coefficients are estimated by maximizing the corresponding partial
likelihood and the baseline cumulative hazards are estimated by using the
method of Breslow. The estimate of QAL distribution is obtained by using these
estimates in the theoretical expression of QAL distribution. By construction, this
method gives a monotonic estimate of the QAL distribution. The asymptotic
normality of the proposed estimator has been established. The performance of
the estimator is judged by a simulation study. The proposed methodology is illustrated with the analysis of two data sets: the Stanford Heart Transplant data and the International Breast Cancer Study Group (IBCSG) Trial V data.
C4.4
Estimating time to disease progression comparing transition models and survival methods
Micha Mandel1, Francois Mercier2, Benjamin Eckert2, Peter Chin3, Rebecca Betensky4
1The Hebrew University of Jerusalem, Jerusalem, Israel, 2Harvard School of Public Health, Boston, MA, USA, 3Novartis Pharma AG, Basel, Switzerland, 4Novartis Pharmaceuticals Corporation, East Hanover, NJ, USA
Standard approaches to estimate the time to progression of multiple sclerosis (MS) utilize survival analysis methods. These may be problematic for the typical data obtained in MS studies, which record disability at only a few time points and assess change in only one direction (worsening). An alternative approach is to fit a Markov transition model and to use its special probabilistic properties to estimate survival curves for time to disability progression. In this talk, I present the transition model approach and discuss its advantages and drawbacks compared to survival analysis methods. I apply both methods to data from recent phase 3 studies that aim at quantifying the effect of fingolimod, the first approved oral treatment for relapsing-remitting MS, on disability progression. Although the approaches are quite different in the outcome and data they use and in the interpretation of the estimated parameters, both showed a positive effect for the new treatment compared to placebo, and moreover, survival curves obtained by the two methods were almost identical. Statistical issues such as robustness to model assumptions, dealing with missing values and more will be discussed.
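A minimal sketch of the probabilistic idea, assuming an illustrative per-visit transition matrix over three disability states (not the trial data or the authors' parameterisation): the probability of remaining free of confirmed progression after k visits is read off the k-step transition matrix.
import numpy as np
# States: 0 = stable, 1 = worse but unconfirmed, 2 = confirmed progression (absorbing)
P = np.array([[0.85, 0.10, 0.05],
              [0.30, 0.55, 0.15],
              [0.00, 0.00, 1.00]])   # illustrative per-visit transition probabilities
Pk = np.eye(3)
curve = []
for k in range(1, 13):               # 12 scheduled visits
    Pk = Pk @ P                      # k-step transition probabilities
    curve.append(1.0 - Pk[0, 2])     # P(not yet progressed | started in state 0)
print(np.round(curve, 3))            # survival-type curve implied by the transition model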
C4.5
Modelling Graft-versus-Host-Disease: statistical approaches incorporating clinical aspects
Liesbeth C. de Wreede1, Johannes Schetelig2, Hein Putter3
1Dept of Medical Statistics and Bioinformatics, Leiden University Medical Center/European Group for Blood and Marrow Transplantation, Leiden, The Netherlands, 2Medical Department I, University Hospital Carl Gustav Carus, Dresden, Germany, 3Dept of Medical Statistics and Bioinformatics, Leiden University Medical Center, Leiden, The Netherlands
Graft-versus-Host-Disease (GvHD) is a serious complication after allogeneic
hematopoietic stem cell transplantation (SCT). However, it is also positively
associated with the ‘Graft-versus-Leukemia' effect, which represents the
immunotherapeutic potential of SCT to prevent relapse of the malignant
disease. Among statisticians, GvHD is a well-known example of an
intermediate event, which is both analysed as an outcome in itself and as a
time-dependent predictor of the competing events relapse (Rel) and non-relapse mortality (NRM). We will present several issues to take into account in
the analysis of these data.
Firstly, we will explain how clinical considerations lead to different modelling
choices and interpretations.
Secondly, we will compare two statistical models and several refinements. The
first of these is the Cox model for the cause-specific hazards for Rel and NRM
in which GvHD is entered as time-dependent variable. This can be extended to
a model with time-dependent effects of the occurrence of GvHD or to a
dynamic landmarking approach.
The second model is the multi-state model, in which, e.g., the probability of being alive with GvHD can be estimated and compared for patients with different baseline characteristics. Such a model also allows one to study new outcomes, such as "time spent free from GvHD treatment and without relapse", as a measure of treatment success. In addition, the correlation between the occurrence of GvHD and Rel/NRM can be modelled by means of frailties.
The models will be illustrated on two real datasets.
Afternoon sessions (IP2, I2, C5 – C13)
I2 Evaluating hospital performance
I2.1
Developing a summary hospital mortality index: how can we compare
hospitals? A retrospective analysis of all English hospitals over 5 years
Michael J Campbell, Richard Jacques, James Fotheringham, Jon Nichol, Ravi
Maheswaran
School of Health and Related Research, University of Sheffield, UK
The Sheffield School of Health and Related Research were responsible for
developing the Summary Hospital Mortality Index (SHMI) which is now in use
for the Department of Health in the UK. For the first time an index was to be
used that included all deaths in hospital, and deaths that occurred up to 30 days after discharge from hospital. The index was to be used to identify
hospitals which have unexplained high mortality and perhaps should be
investigated to see if the care they deliver is adequate. The model was
developed from data over 5 years from Hospital Episode Statistics (HES) in
England. There are a number of challenges to be overcome when developing
the SHMI. The first is the size of the data set (initially 96 million records). The next
is what admissions to include (for example zero length of stay, maternity,
palliative care). We then have to decide which type of model and which
covariates to include to allow for case mix. We decided upon a logistic model
nested within admission diagnosis. We also have to decide between direct and
indirect standardisation and consider model fit. Then comes the question as to
whether a hospital really is an outlier or merely extreme, for which we used
funnel plots and random effects models. Finally we consider some of the
limitations of mortality indices and whether they can ever reflect the quality of
care in a hospital.
Reference:
Campbell MJ, Jacques RM, Fotheringham J, Maheswaran R, Nicholl J (2012) Developing a summary hospital mortality index: retrospective analysis in English hospitals over five years. BMJ doi:10.1136/bmj.e1001
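A minimal sketch of the indirect-standardisation idea behind such an index, using toy counts and crude Poisson funnel limits; the published SHMI methodology (case-mix models nested within admission diagnosis, overdispersion adjustment) is not reproduced here.
import numpy as np
from scipy.stats import poisson
observed = np.array([430, 512, 120, 610])       # observed deaths per hospital (toy data)
expected = np.array([400, 450, 150, 480])       # expected deaths from a case-mix model
ratio = observed / expected                      # SHMI-style standardised ratio
lower = poisson.ppf(0.025, expected) / expected  # crude 95% funnel limits around 1
upper = poisson.ppf(0.975, expected) / expected
flag = (ratio < lower) | (ratio > upper)         # hospitals falling outside the funnel
print(np.round(ratio, 2), flag)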
I2.2
Some statistical issues in identifying 'unusual' healthcare providers: multiple testing, regression-to-the-mean and outliers versus extremes
Hayley E Jones1, David J Spiegelhalter2
1School of Social and Community Medicine, University of Bristol, UK, 2Statistical Laboratory, University of Cambridge, UK
Measures of the performance of healthcare providers are now commonly collected routinely at regular intervals over time. As well as monitoring overall trends and planning resources, it is of interest to identify individual providers that are potentially unusual, for example any with notably high or low event rates during a particular time period, or that have experienced recent changes. As there are often large numbers of providers in such data sets, this procedure should be automated. I will outline our approaches to dealing with three statistical challenges that this presents.
Firstly, I will demonstrate how control of the false discovery rate (FDR) can be used to handle the intrinsic multiple testing problem. I will then review the motivation for fitting hierarchical models to performance data of this type, which provide shrinkage estimates of the performance of each healthcare provider. Hierarchical modelling has become fairly well established in performance monitoring, but there is confusion in the literature as to how to identify unusual providers from such a model. I will examine two possible strategies, carefully distinguishing between statistical 'outliers' and 'extremes', and highlighting a commonly used approach which we believe to be inappropriate. Finally, I will demonstrate how tests for recent changes in the performance of individual units based on hierarchical models appropriately account for regression-to-the-mean, and consider the implications for statistical power.
Two case studies, mortality following heart surgery in New York State and Methicillin-resistant Staphylococcus aureus (MRSA) bacteraemia rates in English National Health Service (NHS) Trusts, will be presented.
I2.3
Traditional and machine learning methods for comorbidity adjustment in mortality risk models
Alex Bottle
Dept of Primary Care and Social Medicine, Dr Foster Unit at Imperial College London, UK
Background: Using outcome measures such as mortality for comparing hospitals requires adjustment for patient factors, such as age and comorbidity, known as case-mix. Some comorbidity indices such as those by Charlson and Elixhauser are in common use but tend to simplify the relation with the outcome and generally need recalibrating for a new data set. For binary outcomes, case-mix adjustment is usually done using logistic regression; machine learning methods such as artificial neural networks and support vector machines have shown promise and in principle are well equipped to explore interactions between different comorbidities.
Methods: For all hospital admissions in England for several conditions (e.g. colorectal excision, AMI, pneumonia) we compared the model performance of logistic regression and support vector machines for 30-day total mortality. For SVMs, we tried different kernels (linear, low-order polynomial, Gaussian radial basis function, and sigmoid). For comorbidity adjustment, we used i) the 17 Charlson components and ii) the 30 Elixhauser components in models also including age, sex and deprivation.
Results: Elixhauser outperformed Charlson in all models. Discrimination (area under the ROC curve: c statistic) for Elixhauser for colorectal excision validated on data from a second year was 0.817 using regression and 0.817 (0.812) using SVMs with a linear (Gaussian) kernel. Using a second year of admissions data for internal validation was vital, as all models showed some over-fitting when validated against the same year (generally two to four percentage points). This was a particular issue, however, for unbalanced data using the nonlinear kernels, where differences in c-statistic could become much larger. In these cases dropping most of the "survivors" turned out to be the best option. The pattern was similar for pneumonia and AMI.
In this talk I will discuss comorbidity scores and the differences between logistic regression and SVMs, before giving some results and the next steps, which include custom-built kernels that take into account the heterogeneity and nature of the data, and applying neural networks.
events, and in particular of solicited symptoms, following vaccination is often
needed for the safety and benefit-risk evaluation of any candidate vaccine. In
this presentation, it will be shown that Linear Categorical Marginal Models are
I2.3
well-suited to take the dependencies in the data arising from the repeated
Traditional and machine learning methods for comorbidity adjustment in measurements into account and provide detailed and useful information for
mortality risk models
comparing safety profiles of different products while remaining relatively easy
Alex Bottle
to interpret. Linear Categorical Marginal Models will be presented and applied
Dept of Primary Care and Social Medicine, Dr Foster Unit at Imperial College to a Phase III clinical trial of a candidate meningococcal pediatric vaccine.
London, UK
Background: Using outcome measures such as mortality for comparing C5.2
hospitals requires adjustment for patient factors, such as age and comorbidity, Variance estimation for propensity scores in randomised trials
known as case-mix. Some comorbidity indices such as those by Charlson and
Elizabeth Williamson1, Andrew Forbes2, Ian White3
Elixhauser are in common use but tend to simplify the relation with the 1
outcome and generally need recalibrating for a new data set. For binary Department of Epidemiology & Preventive Medicine, Monash University, and
outcomes, case-mix adjustment is usually done using logistic regression; Melbourne School of2 Population Health, University of Melbourne, Melbourne,
Preventive Medicine,
machine learning methods such as artificial neural networks and support vector Victoria, Australia, Department of Epidemiology &
3
machines have shown promise and in principle are well equipped to explore Monash University, Melbourne, Victoria, Australia, MRC Biostatistics Unit,
Cambridge, UK
interactions between different comorbidities.
Methods: For all hospital admissions in England for several conditions (e.g. Propensity scores were introduced as a tool for estimating effects of binary
colorectal excision, AMI, pneumonia) we compared the model performance of treatments in non-randomised studies. The propensity score is the probability
logistic regression and support vector machines for 30-day total mortality. For of receiving treatment conditional on measured confounding variables, and is
SVMs, we tried different kernels (linear, low-order polynomial, Gaussian radial typically estimated from the data. We consider inverse probability weighting by
basis function, and sigmoid). For comorbidity adjustment, we used i) the 17 the propensity score (IPW). Weights defined by the estimated propensity score
Charlson components and ii) the 30 Elixhauser components in models also are used to create a population in which included confounders are balanced
between treatment groups. The IPW estimator is the regression coefficient for
treatment from a regression model for the outcome on treatment, applying
these weights as probability weights.
In randomised trials confounding does not occur but IPW estimators can be
used to adjust for chance imbalances of baseline prognostic variables. We
consider a randomised trial with a continuous outcome measured at baseline
and follow-up. We demonstrate that variances of IPW estimators that ignore
the estimation of the weights, including those calculated by standard software,
are typically far too large. We show that the variance can be correctly
estimated using an extended sandwich estimator allowing for the uncertainty in
the weights. Using the correct formula, the variance of the IPW estimator is
comparable to the ANCOVA estimator. We derive similar results for binary
outcomes. While non-convergence of outcome regression models estimating
risk differences/ratios is commonplace, the IPW estimator can always be
calculated.
IPW estimators provide an attractive alternative to traditional analysis
approaches for randomised trials. However, it is important to estimate the
variance correctly; failure to do so may lead to over-inflated standard errors.
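A minimal sketch, with simulated data, of IPW adjustment for a chance baseline imbalance in a randomised trial. A bootstrap that re-estimates the weights in every resample stands in here for the extended sandwich estimator described in the abstract, and the model-based SE from the weighted regression stands in for standard-software output that ignores weight estimation; variable names and data are illustrative assumptions only.
import numpy as np
import statsmodels.api as sm
rng = np.random.default_rng(1)
n = 500
baseline = rng.normal(size=n)                    # baseline measurement
treat = rng.binomial(1, 0.5, size=n)             # randomised treatment
outcome = 1.0 * treat + 0.8 * baseline + rng.normal(size=n)
def ipw_estimate(baseline, treat, outcome):
    X = sm.add_constant(baseline)
    ps = sm.Logit(treat, X).fit(disp=0).predict(X)        # "propensity score" from baseline
    w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))        # inverse probability weights
    fit = sm.WLS(outcome, sm.add_constant(treat), weights=w).fit()
    return fit.params[1], fit.bse[1]                      # effect estimate and naive SE
est, naive_se = ipw_estimate(baseline, treat, outcome)
boot = [ipw_estimate(baseline[i], treat[i], outcome[i])[0]
        for i in (rng.integers(0, n, n) for _ in range(500))]
print(est, naive_se, np.std(boot, ddof=1))       # bootstrap SE accounts for estimated weights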
C5.3
Properties of Estimators in Exponential Family Settings With Observation-based Stopping Rules
Elasma Milanzi1, Geert Molenberghs5, Ariel Alonso2, Michael G. Kenward3,
Geert Verbeke6, Anastasios A. Tsiatis4, Marie Davidian4
1I-BioStat, Universiteit Hasselt, Diepenbeek, Belgium, 2Department of Methodology and Statistics, Maastricht University, Maastricht, The Netherlands, 3Department of Medical Statistics, London School of Hygiene and Tropical Medicine, London WC1E 7HT, UK, 4Department of Statistics, North Carolina State University, Raleigh, NC, USA, 5I-BioStat, Universiteit Hasselt and I-BioStat, Katholieke Universiteit, Diepenbeek and Leuven, Belgium, 6I-BioStat, Katholieke Universiteit and I-BioStat, Universiteit Hasselt, Leuven and Diepenbeek, Belgium
Often, sample size is not fixed by design. A typical example is a sequential trial
with a stopping rule, where stopping is based on what has been observed at an
interim look. While such designs are used for time and cost efficiency, and
hypothesis testing theory has been well developed, estimation following a
sequential trial is a challenging, controversial problem. Progress has been
made in the literature, predominantly for normal outcomes and/or for a
deterministic stopping rule. Here, we place these settings in the broader
context of outcomes following an exponential family distribution and, with a
stochastic stopping rule that includes a deterministic rule and completely
random sample size as special cases. We study (1) the so-called
incompleteness property of the sufficient statistics, (2) a general class of linear
estimators, and (3) joint and conditional likelihood estimation. Apart from the
general exponential family setting, normal and binary outcomes are considered
as key examples. While our results hold for a general number of looks, for ease
of exposition we focus on the simple yet generic setting of two possible sample
sizes, N=n or N=2n.
C5.4
Use of record linkage to conduct long-term follow-up of a clinical trial and to investigate generalising cohorts to the underlying population
Suzanne Lloyd1, Ian Ford1, David Stott2
1Robertson Centre for Biostatistics, Institute of Health and Wellbeing, University of Glasgow, Glasgow, UK, 2Institute of Cardiovascular and Medical Sciences, University of Glasgow, Glasgow, UK
Routinely collected health data are increasingly used in medical research. The PROspective Study of Pravastatin in the Elderly at Risk Trial (PROSPER) is
used to illustrate the potential uses of record linkage of trial to routine data.
PROSPER was a randomised double-blind trial of pravastatin vs. placebo in
5,804 men and women, aged 70-82 years, with a history of, or at risk of, vascular
disease. Participants were recruited in Scotland, Ireland and the Netherlands.
Within-trial average follow-up was 3.2 years. Statin treatment reduced the
primary endpoint (coronary death, non-fatal myocardial infarction (MI), or fatal or non-fatal stroke) by 15% (hazard ratio (HR) 0.85, 95% CI 0.74-0.97, p = 0.014), but did not reduce deaths or stroke and suggested an increased risk of
cancer in the statin arm.
In Scotland, 11,770 subjects were screened and 2,520 randomised. Record
linkage follow-up was achieved for all subjects for mortality, incident cancers
and all hospital discharge summaries. This provided a total of approximately 10
years follow-up.
Analysis of the Scottish cohort over the entire follow-up found no evidence of a
reduction in terms of all cause mortality (HR 0.99, 95% CI 0.88-1.11, p=0.87).
However, coronary heart disease death or coronary hospitalisation was
reduced (HR 0.81, 95% CI 0.69-0.94, p<0.0001) and overall there was no
evidence of increased cancer risk.
The presentation will describe the record linkage process, the results, a
comparison of outcomes in randomised and non-randomised participants and
an illustration of the use of these data to construct risk models for elderly
patients.
C5.5
Estimating the optimal treatment effect when the randomised controlled trial
design incorporates variable exposure to active intervention
Chris Metcalfe
University of Bristol, Bristol, UK
In a randomised controlled trial where not all individuals have complied with
their allocated intervention the unbiased intention to treat analysis will
underestimate the treatment effect in the subset who do comply, and a per
protocol analysis is not based on randomly allocated groups and is likely to be
biased. This has motivated the development of unbiased estimators of the
treatment effect, which compare compliers with the active intervention to a
comparable sub-group in the control arm.
There are trials where exposure to the active intervention varies by design.
Randomised trials of screening are one example, with two or more treatment options being randomly allocated in a nested trial amongst individuals found to have the disease. In such studies there is often a desire to conduct a
secondary analysis which, continuing the example, estimates the optimal effect
of screening in the situation when all those found to have the disease receive
the treatment found to be most effective in the nested trial.
This presentation will include a systematic review of the analytic approaches
used in the published clinical literature, when the aim has been to estimate the
optimal treatment effect in trials where exposure to the active treatment has
varied by design. In addition the appropriateness and utility of extending
estimators, developed to deal with non-compliance with randomly allocated
active intervention, to trials where exposure to active intervention varies by
design, will be evaluated in a simulation study and a real data example.
C6 Epidemiological designs
C6.1
Methodological challenges by the globolomic design - the Norwegian Women
and Cancer postgenome cohort
Eiliv Lund
University of Tromsø, Tromsø, Norway
The description of the genome in 2001 was anticipated to give a paradigmatic
change in cancer research. In cancer epidemiology this led to major efforts
with gene-environment studies searching for risk related to single nucleotide
polymorphisms, SNPs, and lifestyle. After a decade the findings have been
scarce and the shift is towards functional studies i.e. transcriptomics or gene
expression, epigenetics or microRNA and methylation, and finally proteomics.
This all-omics approach is gaining momentum also in prospective studies
using blood and information from time of enrolment in a multilevel design. The
globolomic design of the NOWAC study adds to this complexity by the
repeated measurements at time of diagnosis and from tumour tissues, all
based on a nested case-control design. The design is dependent on the
general hypothesis of blood as a communication channel for the carcinogenic
process.
The challenges for the analysis of the globolomic design are many:
1. Define a carcinogenic model
2. Improve existing statistical methods for the multilevel, longitudinal design
3. Statistical testing of weak associations relevant to the carcinogenic process in the presence of stronger associations due to lifestyle factors
The challenges will be illustrated, but with no definite answers.
C6.2
Analysis of case-cohort studies using flexible parametric models
Anna Johansson1, Paul Dickman1, Therese Andersson1, Mats Lambe1, Paul Lambert1,2
1Karolinska Institutet, Stockholm, Sweden, 2University of Leicester, Leicester, UK, 3Regional Cancer Center of Central Sweden, Uppsala, Sweden
The aim of this work is to develop flexible parametric survival models (FPM)
using weighted likelihood for the analysis of case-cohort data. The case-cohort
design was proposed by Prentice in 1986, and is particularly useful in studies
where exposure information is difficult or expensive to obtain on a full cohort,
e.g. biomarkers or medical records. To date, the main analysis tool for case-cohort data has been weighted Cox regression, with weights accounting for the
case-cohort sampling, yielding estimates of hazard ratios. Case-cohort data
enables estimation of hazard rates since the design preserves information
about the underlying cohort (person-time at risk). However, with a Cox model, estimation of the baseline hazard requires kernel smoothing post-estimation.
Using weighted likelihood, we propose to use FPM for the analysis of case-cohort data. The FPM uses restricted cubic splines to model the log cumulative hazard, so the hazard rate is obtainable directly from the model parameter estimates. One advantage of FPM is that it is easy to model time-dependent effects (non-proportional hazards). Measures such as time-dependent hazard ratios and
rate differences can be constructed to quantify effects between groups.
We show results from an analysis of education level and breast cancer
incidence in Sweden. We compared weighted Cox regression to weighted
FPM, and the results were similar for proportional models. However, the hazard
ratio was strongly time-dependent (non-proportional), and the FPM fitted these
models without requiring splitting of the timescale.
In conclusion, the FPM provides a useful tool for the analysis of case-cohort
data, particularly in large studies.
C6.3
The case-time-control design with multiple reference periods
Aksel Jensen, Per Kragh Andersen, Thomas Gerds
University of Copenhagen, Copenhagen, Denmark
We suggest an extension of the case-time-control design, which is itself an extension of the case-crossover design. The basic idea in the case-crossover design is to use cases as their own controls by introducing a reference period back in time and assessing exposure status (e.g., drug use) in this period. Hereby one can address the problem of confounding by indication inherent in many epidemiological studies where drug prescription is dictated by the severity of the disease. However, a general increase in drug usage in a population over time can bias the results. A possible solution is to use the case-time-control design where one, besides the cases, includes 'real' controls, still introducing a reference period for both the cases and the 'real' controls. It is then possible to model a linear increase in drug usage over time.
We suggest a design where one introduces several reference periods for both cases and the 'real' controls. Besides the power gained, this allows us to model change in drug usage over time in a much more flexible way, e.g., by a polynomial or a spline function. In addition, our model allows for individual random effects reflecting different personal disease severity.
An underlying assumption is that the probability of exposure in a given period is independent of previous exposure conditionally on individual random effects. How violations of this assumption affect our estimates is examined through large-scale simulation studies, inspired by an example concerning antidepressant use and risk of out-of-hospital cardiac arrest (Weeke et al., Clinical Pharmacology & Therapeutics, in press).
C6.4
Inverse probability weighting for nested case-control studies: Application to a study of vitamin-D and prostate cancer.
Sven Ove Samuelsen, Nathalie C Støer, Haakon E Meyer
University of Oslo, Oslo, Norway
In nested case-control (NCC) studies controls are matched to cases of a disease on the basis of at-risk status and possibly other variables. Some expensive
exposure information is then obtained only for cases and controls. Traditionally
such data are analyzed with Cox-regression stratified on matched case-control
sets. If, however, we would like to analyse a subtype of the disease, we would, with the traditional method and due to the matching, have to discard the information from controls (and cases) for the other subtypes.
An alternative is to estimate the probability of being selected as a control and
include all available information using inverse probability weighting (IPW).
Although this technique has been around for 15 years it has not been widely
adopted. One reason may be that the estimation of inclusion probabilities
becomes more complicated when there are more matching variables than just
at-risk status.
We apply IPW to a NCC study. Vitamin-D was obtained from stored blood
samples for 700 incident cases of prostate cancer with one control per case,
matched on at-risk status, age (± 6 months) and date of blood sampling (± 2
months). An important objective is also to investigate how vitamin-D is
associated with death from prostate cancer and there were 160 such cases.
We thus aim at making use of all collected exposure information, thereby
increasing efficiency.
The purpose of this talk is to present methods for including all matching
variables when estimating weights in order to apply IPW for matched NCC
studies.
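A minimal sketch of inclusion-probability weights for reusing NCC controls in the simplest setting, where controls are matched on at-risk status only (the additional matching variables that motivate this talk are not handled here); the Kaplan-Meier-type formula is of the kind proposed by Samuelsen, and the toy data are assumptions for illustration.
import numpy as np
def ncc_inclusion_prob(time, event, m=1):
    # time: exit (event or censoring) time; event: 1 = case, 0 = non-case
    # m: number of controls sampled per case, matched on at-risk status only
    time = np.asarray(time, float)
    event = np.asarray(event, int)
    p_never = np.ones(len(time))
    for tj in time[event == 1]:
        at_risk = time >= tj                       # risk set at the case's event time
        n_risk = at_risk.sum()
        if n_risk > 1:
            p_never[at_risk] *= max(0.0, 1.0 - m / (n_risk - 1.0))
    return np.where(event == 1, 1.0, 1.0 - p_never)    # cases are included with certainty
time = [4.0, 3.4, 3.6, 5.0, 6.2, 7.5]
event = [0, 1, 0, 0, 1, 0]
p = ncc_inclusion_prob(time, event, m=1)
print(1.0 / p)     # weights (1 for cases, 1/p for controls) for a weighted Cox analysis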
C6.5
Inverse probability weighting for nested case-control studies: A simulation study
related to a nested case-control study of vitamin-D and prostate cancer.
Nathalie C Støer, Sven Ove Samuelsen
University of Oslo, Oslo, Norway
In a related talk we presented methods for reusing controls in nested case-
control (NCC) studies using inverse probability weighting (IPW) and applied
these methods to a specific NCC with vitamin-D obtained from stored blood
samples as exposure. Both incidence and death from prostate cancer were
endpoints, but controls were matched to incident cases. Controls were also
matched on age and calendar time of blood sampling.
Since sampling of controls is only carried out once it is difficult to infer from this
how IPW in general will work with additional matching. Thus we set up a
simulation study based on the same cohort with respect to age and time of
blood sampling. Levels of vitamin-D and times until cancer diagnosis and death
of cancer were simulated in accordance with the real study and controls were
sampled with the same matching criteria. IPW appeared to work well in the
simulation mimicking the real study with respect to analyses towards both the
incidence and death endpoints, and showed efficiency improvements for the
death endpoint.
In the real study age at blood sampling had an association with incidence, but
was not correlated with vitamin-D. Calendar month of blood sampling was
correlated with vitamin-D, but not the endpoints. It is important to investigate
the robustness of IPW with respect to degree of dependency between
matching variables, exposure and outcomes.
Furthermore, we studied how closer matching and also a batch effect (blood samples were analysed in batches of 50) influenced the performance of IPW.
C7 Survival analysis I
C7.1
New Method for Controlling for Unobserved Confounding in Time to Event Analyses of Comparative Effectiveness and Safety of Drugs
Michal Abrahamowicz1, Lise Bjerre2, Yongling Xiao1
1McGill University, Montreal, Quebec, Canada, 2University of Ottawa, Ottawa, Ontario, Canada
Unobserved confounding is the main source of bias in observational studies of medication effects. The instrumental variables (IV) approach, which uses physicians' prescribing preferences as the IV [1], can remove bias due to unobserved confounding under certain assumptions [2]. However, the IV method is limited to linear regression [1] and is not applicable in time-to-event analyses of prospective studies.
We propose a new, more general method to detect unobserved confounding of the estimated treatment effects and to reduce its impact. Similar to the IV methodology, we assume that the treatment of individual patients depends on both (1) (observed and unobserved) patients' characteristics and (2) subjective physicians' 'prescribing preferences' [1;2]. We then propose a new conceptual framework, and derive (i) a test of the bias of the treatment effect estimated in the conventional multivariable model, and (ii) a corrected estimator of the treatment effect that corrects for unobserved confounding.
In simulations with strong unobserved confounding, the proposed test detected the bias with > 90% 'sensitivity', while ensuring an accurate type I error rate (4.6%) when there was no confounding. The proposed corrected estimator of the treatment effect reduced the relative bias from 43% to 9% and improved the coverage rate from 11% to 91%.
In conclusion, the proposed method may help detect unobserved confounding of the treatment effect estimates in observational studies of medications and largely reduce its impact.
[1] Brookhart MA et al. Epidemiology 2006;17:268-275.
[2] Abrahamowicz et al. Am J Epidemiology (AJE) 2011;174(4):494-502.
C7.2
Analysis of repeat event outcome data in clinical trials: examples in heart failure
Jennifer Rogers, Stuart Pocock
London School of Hygiene and Tropical Medicine, London, UK
A composite endpoint (i.e. time to the first of several types of disease event) is commonly used as a primary outcome in clinical trials, as it increases event rates and combines multiple outcomes into one, avoiding issues of insufficient power and multiplicity. Heart failure is characterised by repeat hospitalisations for worsening condition, rendering such a composite endpoint suboptimal, as recurrent hospitalisations within individuals are ignored. Utilising all of the hospitalisations within individuals gives a more meaningful treatment effect on the true burden of disease.
This talk presents statistical analyses of two heart failure clinical trials: 'Eplerenone in Mild Patients Hospitalization and Survival Study in Heart Failure' (EMPHASIS-HF) and 'Candesartan in Heart failure Assessment of Reduction in Mortality and morbidity' (CHARM). The datasets will be analysed using the Cox proportional-hazards model for the composite of first hospitalisation and cardiovascular death, and using the Andersen-Gill model and a Negative Binomial generalised linear model for the repeat hospitalisations. The results of these analyses are compared, and bootstrap simulations are used to investigate the statistical power of each method. We observe that analysing all hospitalisations within an individual gives significant improvements in statistical power.
A comparison of hospitalisation rates in heart failure can be confounded by the competing risk of death. An increase in heart failure hospitalisations is associated with a worsening condition and a subsequent elevated risk of death. Analyses of recurrent events should take such informative censoring into consideration. Statistical methodology being developed to jointly model hospitalisations and death will briefly be presented.
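A minimal sketch contrasting an analysis that uses all hospitalisations (a negative binomial rate model with a follow-up offset) with one that collapses the data to "any event"; the simulated data are assumptions for illustration, not the EMPHASIS-HF or CHARM analyses, and the Andersen-Gill model is not shown.
import numpy as np
import statsmodels.api as sm
rng = np.random.default_rng(2)
n = 1000
treat = rng.binomial(1, 0.5, n)
follow = rng.uniform(1, 3, n)                              # years of follow-up
frailty = rng.gamma(2.0, 0.5, n)                           # patient heterogeneity
events = rng.poisson(frailty * follow * np.exp(-0.3 * treat))   # repeat hospitalisations
X = sm.add_constant(treat)
nb = sm.GLM(events, X, family=sm.families.NegativeBinomial(),   # dispersion fixed for simplicity
            offset=np.log(follow)).fit()
print(np.exp(nb.params[1]))      # treatment rate ratio using all hospitalisations
any_event = (events > 0).astype(int)                       # reduce to "any hospitalisation"
logit = sm.GLM(any_event, X, family=sm.families.Binomial()).fit()
print(np.exp(logit.params[1]))   # discards the information in repeat admissions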
C7.3
A multiplicative-regression model to compare the effect of factors associated with the time to graft failure between first and second renal transplant
Katy Trébern-Launay1, Magali Giral2, Yohann Foucher1
1EA 4275 Biostatistics, Clinical Research and Subjective Measures in Health Sciences, Nantes University and Transplantation, Urology and Nephrology Institute (ITUN), INSERM U1064, Centaure, Nantes, France, 2Transplantation, Urology and Nephrology Institute (ITUN), INSERM U1064, Centaure, Nantes, France
Additive-regression models for relative survival are traditionally used in the evaluation of mortality related to chronic diseases and are usually based on the expected mortality of the general population (life tables by sex, calendar year and age). To our knowledge, we propose for the first time to apply a multiplicative-regression model for relative survival (M-RS model) to analyse an outcome other than mortality. The clinical objective was to study the factors associated with the time to graft failure (return to dialysis or patient death) of second kidney transplant recipients (STR) compared to first kidney transplant recipients (FTR). 641 STR from the French DIVAT database between 1996 and 2010 were analysed. The expected graft failure hazard was estimated using a parametric proportional hazards (PH) model with a stepwise baseline function based on 2462 FTR. We carried out a multiplicative PH model with a stepwise baseline function to estimate the STR relative risk of graft failure. Both models were estimated by maximizing their respective likelihood functions.
The hazard ratio (HR) for recipient ≥ 55 years versus < 55 years was 1.6-fold higher for STR compared to FTR (p=0.0387). Conversely, the HR for donor ≥ 55 years versus < 55 years was 1.7-fold lower (p=0.0294) and the HR for deceased versus living donor 3-fold lower (p=0.0332) for STR compared to FTR. While the Cox model applied to STR and FTR did not offer original results relative to the transplantation literature, the innovative use of the M-RS model to study the time to graft failure leads to new results that are useful for clinicians.
C7.4
Risk assessment of time-varying factors on an acute event using the case-crossover method: A simulation study
Peggy Sekula1, Martin Schumacher1, Maja Mockenhaupt2
1Institute of Medical Biometry and Medical Informatics, University Medical
Center Freiburg, Freiburg, Germany, 2Dokumentationszentrum schwerer
Hautreaktionen (dZh), Dept. of Dermatology, University Medical Center
Freiburg, Freiburg, Germany
There are certain circumstances in which the analysis of a case series might be beneficial. For several reasons, such a situation arose when studying the risk of drugs in severe cutaneous adverse reactions.
The case-crossover method is one method to assess the risk of a time-varying
factor on an acute event. In order to evaluate it in the practical setting, a reanalysis of existing data from a case-control study was done. Although it
revealed reasonable estimates for several drugs, it also provided conspicuous
results or no risk estimates for few drugs. While some discrepancies could be
explained, the fact that case-crossover estimates were lower on average than
case-control estimates could not fully be clarified. The objective was therefore
to develop a simulation model for further assessment of the method to address
questions raised by the application and beyond.
Based on the practical setting, the simulation model comprised a time-varying
binary factor (=drug exposure) and a terminal event (=reaction). By assuming a
Markov chain as the underlying stochastic model, simulation of the course of a
population over time was possible from which subjects were drawn for case-crossover and case-control analyses. Several relevant scenarios and
extensions such as the incorporation of a time-varying confounder were
considered.
We conclude that the risk of a time-varying factor using the case-crossover
method could be reliably estimated when necessary requirements were
correctly defined and exposure information was sufficient. Only in the settings of non-stationary exposure and of insufficiently informative exposure history did the case-crossover estimates reveal unacceptably high bias.
C7.5
Analysis of complex correlated interval-censored HIV data from population
based survey
Khangelani Zuma
Human Sciences Research Council, GAUTENG, South Africa
In epidemiological studies of HIV, interval-censored data occur naturally. The HIV infection time is not usually known exactly, only that it occurred before the survey, within some time interval, or has not occurred by the time of the survey. Infections are often clustered within geographical areas such as enumerator areas, thus inducing unobserved frailty. In this paper we consider an approach for estimating parameters when the infection time is unknown and assumed correlated within an enumerator area. Dependency is modeled through frailties, assuming a
gamma distribution for frailties and a Weibull distribution for baseline hazards.
Data from a household-based multi-stage stratified sample design, in which 23 275 (96.0%) individuals from 10 584 households were interviewed and 15 851 (65.4%) of these were tested for HIV, are analyzed. BED capture EIA assays were used to test for recent HIV infection, leading to interval-censored data.
Results show a high degree of heterogeneity between enumerator areas
indicating clustering of HIV infection and risk determinants by geographical
areas.
C8 Modelling infectious disease
C8.1
Time-varying frailty models and the estimation of heterogeneities in transmission of infectious diseases
Steffen Unkel1, Paddy Farrington1, Heather J. Whitaker1, Richard Pebody2
1The Open University, Milton Keynes, UK, 2Health Protection Agency, London, UK
In this talk, a unified frailty modelling framework is developed for representing
and making inference on individual heterogeneities relevant to the transmission
of infectious diseases, including heterogeneities that evolve over time. Central
to this framework is the use of multivariate data on several infections. We
propose new simple but flexible families of time-dependent frailty models, in
which the frailty is modulated over time in a deterministic fashion. Methods of
estimation, issues of identifiability and model choice are discussed. Results
from such models are interpreted in the light of concomitant information on
routes of transmission. Applications to paired serological survey data on a
range of infections with same and different routes of transmission are
presented.
C8.2
Toward information synthesis with mechanistic models of HIV dynamics
Prague Mélanie, Commenges Daniel, Thiébaut Rodolphe
Univ. Bordeaux 2 ISPED INSERM U897, Bordeaux, France
Parameters in mechanistic models based on ODE (Ordinary Differential
Equations) have an intrinsic meaning. Thus, HIV modelling should lead to
similar estimated values among clinical trials for some parameters such as the
virus proliferation rate even if patients' histories and treatments differ. In the
perspective of optimizing treatment, we aim to build a model which forecasts
the patient treatment response in several studies. To validate it, we will present,
in a Bayesian framework, a methodology for combined estimation of
parameters over several clinical trials.
We use the "Activated T cell model" with random effects on parameters. A
pharmacodynamic function links the treatment dose to the effect of several
antiretroviral drugs. To account for non-identifiability, a Bayesian approach
allows introducing prior information using data from the literature. In view of the
numerical complexity, we use a Maximum a Posteriori (MAP) estimator instead
of classical MCMC. The EMRODE algorithm (Estimation in Models with
Random effects based on Ordinary Differential Equations) allows computing
the MAP. We analyse sequentially the different studies by taking as prior the
updated posterior of previous analyses.
We applied the methodology on two clinical trials (Albi ANRS 070: n=150
untreated patients starting dual nucleosides therapy and Puzzle ANRS 104:
n=40 heavily pre-treated patients starting salvage therapy). Initial separate
analyses show good prediction abilities of the model and fair agreement
between parameters posterior distributions among studies. Combined analyses
improve the fits and the predictions.
C8.3
Modeling hepatitis C viral kinetics to compare antiviral potencies of two
protease inhibitors: a simulation study under real conditions of use
Cédric Laouénan1, Jérémie Guedj2, France Mentré1
1Univ Paris Diderot, Sorbonne Paris Cité, INSERM, UMR 738, Paris, France
and AP-HP, Hosp Bichat, Service de Biostatistique, Paris, France, 2Los Alamos
National Laboratory, New Mexico, USA and Univ Paris Diderot, Sorbonne Paris
Cité, INSERM, UMR 738, Paris, France
2011 marked a milestone in HCV therapy with the approval of two protease inhibitors (PI), telaprevir (TVR) and boceprevir (BOC), in addition to the current treatment. The ongoing MODCUPIC-ANRS trial aims at providing estimation and comparison of the potency of these drugs under real conditions of use. The objectives were to evaluate estimation performance for the chosen MODCUPIC design and the power to detect a difference in potency between the two PIs using an HCV dynamic model.
The biphasic viral kinetic model was used to characterize the changes in viral load levels, in which the drug potency, ε, represents the percentage of
blockage of virion production. We assumed εTVR = 99.9% and εBOC = 99%, 99.5% or 99.8%. We simulated 500 datasets using MODCUPIC's design (N=30 patients per PI and sampling at 0, 0.33, 1, 2, 3, 7, 14 days). Parameters were estimated by nonlinear mixed-effects models using the extended SAEM algorithm in MONOLIX v.4.1, which takes into account below-limit-of-detection data. For all parameters, relative bias was <2% for fixed effects, and relative root mean square errors were <5% for fixed effects and <35% for variances. Power to detect a difference between the ε values was 100%, 100% and 94% with εBOC = 99%, 99.5% and 99.8%, respectively. These powers remained very high in the absence of datapoints at 0.33 and 1 day (100%, 100%, 89.4%, respectively) or when N=10 per PI (100%, 99.8%, 62.2%, respectively).
Compared with the standard approach, the modeling approach provides a more powerful tool to compare the antiviral potencies of TVR and BOC, even with sparse initial sampling or a small number of patients.
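A minimal sketch of a biphasic viral kinetic model of the kind referred to above, with assumed parameter values purely for illustration (not the MODCUPIC design analysis or its estimates):
import numpy as np
from scipy.integrate import solve_ivp
# Assumed rates per day: delta = infected-cell loss, c = virion clearance,
# p = production per infected cell, eps = fraction of virion production blocked.
delta, c, p, eps = 0.14, 6.0, 10.0, 0.999
V0 = 1e6                       # baseline viral load
I0 = c * V0 / p                # pre-treatment steady state
beta_T = delta * c / p         # infection rate x target cells, held at its steady-state value
def rhs(t, y):
    I, V = y
    return [beta_T * V - delta * I, (1 - eps) * p * I - c * V]
sol = solve_ivp(rhs, (0, 14), [I0, V0], t_eval=[0, 0.33, 1, 2, 3, 7, 14])
print(np.log10(sol.y[1]))      # biphasic log10 viral load decline at the design's sampling times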
C8.4
Estimation of the basic reproduction number for infectious diseases with age-varying individual heterogeneity in contact rates
Paddy Farrington, Steffen Unkel, Heather Whitaker
Open University, Milton Keynes, UK
The basic reproduction number of an infectious disease, denoted R0, is the average number of secondary cases produced by a typical infectious individual in a susceptible population. Generally, the greater the value of R0, the greater the overall vaccine coverage required to eliminate the infection.
In populations comprising subgroups of individuals with different contact rates, for example age groups, R0 is the dominant eigenvalue of the next generation matrix, which describes contacts occurring within and between the different subgroups. This definition has been extended to incorporate constant individual heterogeneity. We present a further extension to the more realistic situation where the individual heterogeneity is age-dependent.
The model is formulated as an age-dependent frailty model for the effective contact rate. This gives rise to an age-dependent frailty model for the force of infection, which can be evaluated by confronting it with paired serological survey data on infections transmitted by the same route.
For infections in endemic equilibrium, we obtain a simple algebraic expression for R0, which involves the left eigenvector of a 'population' next generation matrix and the equilibrium force of infection. We discuss the robustness of this estimator and the standard estimator, based more directly on the next generation matrix, to bias or misspecification of the contact matrix.
We illustrate the methods using serological survey data on parvovirus B19 and varicella zoster infection. We find that ignoring individual age-specific heterogeneity can severely bias the estimated value of R0.
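A minimal sketch of the next-generation-matrix calculation mentioned above, with an assumed two-age-group matrix used only for illustration:
import numpy as np
# Assumed next generation matrix K: K[i, j] = expected number of secondary cases
# in age group i generated by one typical infective in age group j.
K = np.array([[2.0, 0.5],
              [0.8, 1.2]])
R0 = max(np.linalg.eigvals(K).real)   # dominant eigenvalue = basic reproduction number
print(R0, 1 - 1 / R0)                 # R0 and the crude elimination coverage threshold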
C8.5
A mathematical model used as a tool to estimate carbapenemase-producing Klebsiella pneumoniae transmissibility and to assess the impact of potential interventions in the hospital setting
V Sypsa1, M Psichogiou2, GA Bouzala2, L Hadjihannas2, A Hatzakis1, G Daikos2
1Dept. of Hygiene, Epidemiology and Medical Statistics, Athens University Medical School, Athens, Greece, 21st Department of Internal Medicine-Propaedeutic, Laikon General Hospital, University of Athens, Athens, Greece
Multiresistant pathogens in healthcare settings are emerging as a major public health threat. Microbiological surveillance data collected in an observational study conducted during May 2009-June 2010 in a surgical unit were combined with mathematical modeling to obtain estimates of carbapenemase-producing Klebsiella pneumoniae (CPKP) transmissibility and to assess the impact of potential interventions. The Ross-Macdonald model for vector-borne diseases was used, where health-care workers were the vectors transmitting CPKP from patient to patient. The model was simulated stochastically assuming Poisson rates over small time steps for the events included in the model and was fit to the cumulative number of CPKP cases over time to obtain monthly estimates of R0 (the average number of secondary cases per primary case in the absence of infection control). It was then modified to account for the effect of infection control strategies. R0 was estimated equal to 2 in the peak months, and the minimum hand hygiene compliance level necessary to control transmission was 50%. Simulations allowed us to assess the impact of potential infection control measures on colonization prevalence and indicated that hand hygiene compliance rates higher than 50% should be coupled with measures such as CPKP screening and isolation/cohorting of colonized admissions. The present study is one of the few that have employed mathematical modeling of surveillance data to estimate the R0 of a pathogen and to assess the impact of infection control strategies. Mathematical modeling may improve our ability to identify key parameters of the transmission process and design more effective infection control programs.
IP2 Plenary session
IP2
Epigenetics: A new frontier
Terry Speed
Walter & Eliza Hall Institute of Medical Research, Melbourne, Australia
Apart from a few exceptions, the DNA sequence of an organism, that is, its genome, is the same no matter which cell you consider. If we view the genome as a universal code for an organism, then how do we obtain cellular specificity? The answer seems to be via epigenetics, where the Greek prefix epi denotes above or on top of; that is, epigenetics is on top of genetics. If we think of the genome sequence as the text, some people have likened the epigenome to the punctuation: the epigenetic marks on DNA help decide how the DNA text is read. Epigenetics controls the spatial and temporal expression of genes, and is also associated with disease states. It involves no change in the underlying DNA sequence, and epigenetic marks are typically preserved during cell division. Epigenetic control occurs through different mechanisms, with DNA methylation and histone modification being the principal ones. There is one major kind of DNA methylation in mammals, and several minor kinds, and dozens of types of histone modification.
Epigenetics has been studied in a low-throughput way for over 30 years, using a wide variety of tools and techniques from molecular biology, including DNA sequencing and mass spectrometry. With the advent of microarrays 15 years ago, these platforms began to be used to give high-throughput information on epigenetics. Methylation microarrays are now very widely used. In the last 5 years, second (also called next-) generation DNA sequencing has been used to study epigenetics, in particular using bisulphite-treated DNA or chromatin immunoprecipitation (ChIP) assays, each followed by massively parallel DNA sequencing. There are now large national and international consortia compiling DNA sequence data relevant to epigenetics, and many statistical challenges are arising. If we think of the single (reference) human genome, there will be literally hundreds of reference epigenomes, and their analysis will occupy biologists, bioinformaticians and biostatisticians for some time to come. This talk will introduce the topic, outline the data becoming available, summarize some of the progress made so far, and point to future biostatistical challenges.
C9 Causal inference II
C9.1
Simple estimation strategies for natural direct and indirect effects
Stijn Vansteelandt1, Maarten Bekaert1, Theis Lange2
1Ghent University, Gent, Belgium, 2University of Copenhagen, Copenhagen, Denmark
The mediation formula has triggered enormous progress in mediation analysis by enabling decomposition of a total effect into a (natural) direct
and indirect effect (mediated through a specific mediator), regardless of the
underlying statistical model. Current procedures calculate these effects through
a combination of parameter estimates from a model for the mediator and
outcome. However, their practical utility remains limited because of
computational difficulties and the high dimensionality of typical results.
van der Laan and Petersen addressed these concerns via parsimonious
models for the natural direct effect. Tchetgen Tchetgen and Shpitser proposed
estimators with desirable properties for the parameters indexing models with
identity or log link, but their relative complexity continues to be a barrier for
practical application. The focus of this talk will therefore be on richer model
structures (e.g. logistic models) for natural direct and indirect effects. Easy-to-calculate estimators, obtainable via standard software, will be proposed and
evaluated. Perspectives will be given on how to deal with exposure-induced
mediator-outcome confounding.
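A minimal sketch of the mediation formula underlying this decomposition, for a binary exposure A and binary mediator M, using illustrative fitted values in place of estimated models (covariates omitted for brevity; this is not the estimator proposed in the talk):
# Illustrative model outputs:
EY = {(0, 0): 0.10, (0, 1): 0.25,    # E(Y | A=a, M=m)
      (1, 0): 0.18, (1, 1): 0.40}
pM1 = {0: 0.30, 1: 0.60}             # P(M=1 | A=a)
def mean_Y(a, a_star):
    # E[ Y(a, M(a_star)) ] via the mediation formula
    return sum(EY[(a, m)] * (pM1[a_star] if m == 1 else 1 - pM1[a_star]) for m in (0, 1))
nde = mean_Y(1, 0) - mean_Y(0, 0)    # natural direct effect
nie = mean_Y(1, 1) - mean_Y(1, 0)    # natural indirect effect
print(nde, nie, nde + nie)           # the total effect decomposes as NDE + NIE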
C9.2
Effective use of RPSFTM's in late stage cancer trials with substantial treatment
cross-over.
Jack Bowden, Ian White, Shaun Seaman
MRC Biostatistics Unit, Cambridge, UK
In late stage randomised controlled cancer trials, it is common to give the
experimental treatment to control arm patients at the point of disease
progression. This treatment switching (also called cross-over or contamination)
can dilute the estimated treatment effect on overall survival. This can, in turn,
impact the assessment of a treatment's benefit in future health economic
evaluations.
The rank-preserving structural failure time model (RPSFTM) of Robins and
Tsiatis (1991) offers a potential solution to this problem; it can be used to
estimate the causal effect of a treatment in an RCT allowing for treatment
switching. The method requires specification of an ITT test and the log rank
test is typically used. However, in the presence of substantial switching, it can
have a low power since the hazard ratio is not constant over time.
Schoenfeld (1981) showed that when the hazard ratio is not constant, a
weighted version of the log rank test is more powerful than the standard
version. This motivated us to develop a weighted log rank test statistic for the
late stage cancer trial context, given working assumptions about the underlying
hazard function in the population. We then explored the use of the weighted
statistic within an RPSFTM analysis to estimate the causal effect of treatment.
In simulations we found that this gave more efficient estimates of the causal
effect. Furthermore, violation of the working assumptions was seen to only
affect the efficiency of the estimates but not to induce bias.
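A minimal sketch of a two-sample weighted log-rank statistic of the kind motivating this work, with toy data and an arbitrary late-emphasis weight function (working assumptions only; not the weights derived in the talk):
import numpy as np
def weighted_logrank(time, event, group, weight_fn=lambda t: 1.0):
    # Two-sample weighted log-rank Z statistic; group is 0/1, event 1 = observed.
    time, event, group = map(np.asarray, (time, event, group))
    U, V = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        w = weight_fn(t)
        U += w * (d1 - d * n1 / n)                                    # weighted O - E
        if n > 1:
            V += w**2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return U / np.sqrt(V)
z = weighted_logrank([2, 3, 5, 7, 8, 11, 12, 15], [1, 1, 0, 1, 1, 1, 0, 1],
                     [0, 1, 0, 0, 1, 0, 1, 1], weight_fn=lambda t: t)  # up-weight late times
print(z)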
C9.3
Targeted Smoothing Parameter Selection for Estimating Average Causal Effects
Jenny Häggström, Xavier de Luna
Umeå University, Umeå, Sweden
The non-parametric estimation of average causal effects in observational studies relies on controlling for confounding covariates through smoothing regression methods such as kernel, splines or local polynomial regression. Such regression methods are tuned via smoothing parameters which regulate the amount of degrees of freedom used in the fit. In this paper we propose data-driven methods for selecting smoothing parameters when the targeted parameter is an average causal effect. For this purpose, we propose to estimate the exact expression of the mean squared error of the estimators. Asymptotic approximations indicate that the smoothing parameters minimizing this mean squared error converge to zero faster than the optimal smoothing parameter for the estimation of the regression functions. In a simulation study we show that the proposed data-driven methods for selecting the smoothing parameters can yield lower empirical mean squared error than other methods available, such as, e.g., cross-validation.
C9.4
Estimating random center effects using instrumental variables
Jozefien Buyze, Els Goetghebeur
Ghent University, Gent, Belgium
Instrumental variables allow one to estimate causal effects of exposure on outcome
in observational studies where it is unrealistic to assume all confounders have
been measured. We utilize them here to study estimation of causal effects of
care centers on outcome, accounting for different patient mixes over the
centers. Efficiency however decreases substantially with increasing numbers of
causal parameters. In line with association models in this field we therefore
introduce random causal center effects, with a parametric assumption on their
distribution. The two-stage model with standard mixed effects estimation in the
second stage regression is not suitable. Since the level of clustering coincides
with exposure here, i.e. care center, conditioning on the random effects
introduces an association between the instrument and unmeasured
confounders. Instead, we propose two approaches to estimate the random
effects and their variance: 1) calculate the posterior distribution of the random
effects conditional on the data and 2) maximize the joint density of the data and
the random effects. Simulation results confirm that both approaches allow the distribution of the causal center effects to be estimated correctly, with smaller
standard errors compared to the fixed causal effects model. Power to detect a
deviant center may however decrease because of the shrinkage of random
effects, especially for small centers. We discuss the practical relevance of the
different balances sought for type I and type II errors in this setting.
C10 Meta-analyses
C10.1
Graph theory meets network meta-analysis
Gerta Rücker
University Medical Center Freiburg, Freiburg, Germany
Network meta-analysis is an active field of research in clinical biostatistics,
aiming at combining information from all randomised comparisons among a set
of treatments for a medical condition.
We show how graph-theoretical methods can be applied to network meta-analysis. A meta-analytic graph consists of vertices (treatments) and edges (randomised comparisons). First, we illustrate the full analogy between meta-analytic networks and electrical networks, where variance corresponds to resistance, treatment effects to voltage, and weighted treatment effects to current flows. Based on this, we then show that graph-theoretical methods that have been routinely applied to electrical networks also work well in network meta-analysis.
In more detail, denote the edges by k, and let x=(x_k)_k and w=(w_k)_k be the vectors of observed effects and their inverse variances, respectively. Let y be the pointwise product vector of inverse-variance-weighted observed effects (w_k x_k)_k. Using the edge-vertex incidence matrix B, we compute the Laplacian matrix L, which plays a central role in spectral graph theory:
L = B^T diag(w) B
The resulting consistent treatment effects v induced in the edges can be estimated via the Moore-Penrose pseudoinverse L^+ of the Laplacian:
v = B L^+ B^T y
For each pair (i,j) of treatments, the variance of the treatment effect is estimated in analogy to electrical resistance by
V_ij = L^+_ii + L^+_jj - 2 L^+_ij.
We show that this method, being computationally simple, leads to the usual
fixed effect model estimate when applied to pairwise meta-analysis and is
consistent with published results when applied to network meta-analysis
examples from the literature. Moreover, problems of heterogeneity and
inconsistency, random effects modelling and including multi-arm trials are
addressed.
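For readers who want to see the algebra in action, here is a minimal numpy sketch of the estimator described above, applied to a hypothetical three-treatment network; the effect values and weights are invented for illustration only.

```python
import numpy as np

# Hypothetical three-treatment network (A, B, C) with one direct comparison per edge:
# edges k = (A-B), (A-C), (B-C); x_k = observed effect, w_k = inverse variance.
x = np.array([0.5, 0.8, 0.2])          # observed edge effects
w = np.array([10.0, 4.0, 6.0])         # inverse variances (1 / se^2)

# Edge-vertex incidence matrix B: one row per edge, +1 / -1 for its two vertices.
B = np.array([[ 1, -1,  0],            # A vs B
              [ 1,  0, -1],            # A vs C
              [ 0,  1, -1]])           # B vs C

y = w * x                              # inverse-variance-weighted observed effects
L = B.T @ np.diag(w) @ B               # Laplacian of the weighted comparison graph
L_plus = np.linalg.pinv(L)             # Moore-Penrose pseudoinverse

v = B @ L_plus @ B.T @ y               # consistent (network) edge effects
V = np.diag(L_plus)[:, None] + np.diag(L_plus)[None, :] - 2 * L_plus
                                       # V[i, j] = variance of the i-vs-j network estimate

print("consistent edge effects:", v)
print("variance of A vs B estimate:", V[0, 1])
```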
C10.2
Impact of Network Size and Inconsistency on the Results of MTC Meta-Analyses
Ralf Bender1, Sibylle Sturtz1
1Institute for Quality and Efficiency in Health Care (IQWiG), Cologne, Germany,
2Medical Faculty of the University of Cologne, Cologne, Germany
Mixed treatment comparison (MTC) meta-analyses, also called multiple
treatment or network meta-analyses, are increasingly used in medical
research. These methods allow a simultaneous analysis of all relevant
interventions in a connected network even if direct evidence regarding two
interventions is missing. The framework of MTC meta-analysis provides a
flexible approach for complex networks. However, this method still has some
unsolved problems, in particular the choice of the network size and the
assessment of inconsistency. We describe the practical application of MTC
meta-analysis by means of examples. We focus on the impact of the size of the
chosen network and the assumption of consistency. A larger network is based
on more evidence but may show inconsistencies whereas a smaller network
contains less evidence but may show no clear inconsistencies. A choice is
required as to which network should be used in practice. In summary, MTC meta-analysis represents a promising approach; however, clear application standards are still lacking. In particular, standards for identifying inconsistency and for dealing with potential inconsistency are required.
C10.3
Model Selection for Locating Incoherence in Network Meta-Analysis
Ulrike Krahn, Jochem König, Harald Binder
Institute of Medical Biostatistics, Epidemiology and Informatics (IMBEI), University Medical Center Johannes Gutenberg University, Mainz, Germany
In a network meta-analysis, several treatments are compared simultaneously by connecting evidence from different randomized trials and allowing for indirect comparisons. While estimation assumes a coherent network of treatment effects, there might be some edges that lead to incoherence. We investigate how such edges can be identified in the context of fixed effect meta-analysis with known variances, using the inverse variance weighting method. The latter assumes normally distributed effect estimates for all studies and is generalized to network meta-analysis within the framework of general linear models. Analysis can be equivalently performed in two stages, first summarizing evidence for each possible treatment comparison, resulting in a direct edge effect with known variance, and secondly, fitting a linear model to the direct edge effects. We propose to explore the family of models that results from subsequently allowing more and more edges to have deviating direct effects. Both the amount of edge specific incoherence and the model fit after loosening edges can be assessed by chi-square tests. A further aspect is the detectable amount of local incoherence in a given network (with given variances of direct edge effects). It is shown to be a simple function of the hat matrix, the latter being itself a function of the observed known direct edge effect variances. Different methods to graphically display the results are explored and illustrated for several published network analyses. This provides a starting point for more generally identifying conditions under which single edges can be identified as a source of incoherence.
C10.4
Adapting Cochran's Q for Network Meta-Analysis
Jochem König, Ulrike Krahn, Harald Binder
Institute of Medical Biostatistics, Epidemiology and Informatics, Mainz,
Germany
When synthesizing results from clinical trials that compare two treatments by a
meta-analysis, Cochran’s Q provides a well accepted tool for assessing
heterogeneity between studies. In network meta-analysis, several treatments
can be evaluated, where each individual study might consider only some of
them. So far, Cochran’s Q has hardly been used in this setting. For
investigating how Cochran’s Q could be useful for network meta-analysis, we
consider a two-step approach. First, a set of simple meta-analyses is
performed for each pair of treatments where a direct comparison is available.
Second, a reduced network-meta-analysis is based on effect estimates and
standard errors of the first step. Cochran’s Q statistic for the whole network is
seen to be the sum of squared Pearson residuals, and we furthermore show
that it can be decomposed into a sum of within-edge Q statistics and a
between-edges Q statistic. The latter allows for investigating potential
incoherence of the network by inspecting its components. We illustrate the use
of standard regression diagnostic tools for this. Specifically, all mixed treatment
comparisons are shown to be a weighted mean between a direct and an
indirect estimate. The weight of the direct estimate is identical to the hat matrix
diagonal element, which is known as leverage in regression diagnostics. This is
illustrated for a large network meta-analysis of antidepressants. There, the
leverage diagnostic provides important insight into the network structure, which
more generally highlights the usefulness of Cochran’s Q and its decomposition
for network meta-analysis.
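A hedged sketch of this two-step decomposition, using an invented five-study, three-treatment network; the numbers and the simple fixed-effect weighting are illustrative and not the authors' code.

```python
import numpy as np

# Hypothetical studies: each row is (treat_a, treat_b, effect of b vs a, variance).
studies = np.array([
    (0, 1, 0.45, 0.04), (0, 1, 0.60, 0.09),   # two A-vs-B trials
    (0, 2, 0.90, 0.05), (0, 2, 0.70, 0.08),   # two A-vs-C trials
    (1, 2, 0.10, 0.06),                       # one B-vs-C trial
])
a, b = studies[:, 0].astype(int), studies[:, 1].astype(int)
x, w = studies[:, 2], 1.0 / studies[:, 3]

# Step 1: fixed-effect pooling within each edge (direct comparison).
edges = sorted(set(zip(a, b)))
d, W, Q_within = [], [], 0.0
for e in edges:
    idx = np.array([(ai, bi) == e for ai, bi in zip(a, b)])
    xe, we = x[idx], w[idx]
    pooled = np.sum(we * xe) / np.sum(we)
    Q_within += np.sum(we * (xe - pooled) ** 2)   # within-edge heterogeneity
    d.append(pooled)
    W.append(np.sum(we))
d, W = np.array(d), np.array(W)

# Step 2: consistency (network) model fitted to the pooled edge effects by
# weighted least squares; treatment 0 is the reference.
n_trt = 3
X = np.zeros((len(edges), n_trt - 1))
for i, (ai, bi) in enumerate(edges):              # effect of "b vs a" = theta_b - theta_a
    if bi > 0:
        X[i, bi - 1] += 1
    if ai > 0:
        X[i, ai - 1] -= 1
theta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * d))
Q_between = np.sum(W * (d - X @ theta) ** 2)      # between-edge (incoherence) part

print("Q_within  =", round(Q_within, 3))
print("Q_between =", round(Q_between, 3))
print("Q_total   =", round(Q_within + Q_between, 3))
```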
C11 Evaluating hospital performance
C11.1
Evaluation and comparison of the performance of Australian and New Zealand intensive care units
Jessica Kasza1, John L. Moran2, Patricia J. Solomon1
1University of Adelaide, Adelaide, South Australia, Australia, 2The Queen Elizabeth Hospital, Woodville, South Australia, Australia
Recently, the Australian Government has emphasised the need for monitoring
and comparing the performance of Australian hospitals. Evaluating the
performance of intensive care units (ICUs) is of particular importance, given
that the most severe cases are treated in these units. Indeed, ICU performance
can be thought of as a proxy for the overall performance of a hospital. We
compare the performance of the ICUs contributing to the Australian and New
Zealand Intensive Care Society (ANZICS) Adult Patient Database, and identify
those ICUs with unusual performance.
It is well-known that there are many statistical issues that must be accounted
for in the evaluation of healthcare provider performance. Indicators of
performance must be appropriately selected and estimated, investigators must
adequately adjust for casemix, statistical variation must be fully accounted for,
and adjustment for multiple comparisons must be made. Our basis for dealing
with these issues is the estimation of a hierarchical logistic model for the in-hospital death of each patient, with patients clustered within ICUs. Both
patient- and ICU-level covariates are adjusted for, with a random intercept and
random coefficient for the APACHE III severity score. Given that we expect
most ICUs to have similar performance after adjustment for these covariates,
we follow Ohlssen et al., JRSS A (2007), and estimate a null model that we
expect the majority of ICUs to follow. This methodology allows us to rigorously account for the aforementioned statistical issues, and accurately identify those ICUs contributing to the ANZICS database that have comparatively unusual performance.
C11.2
Evaluating mortality rates for neonatal units using multiple membership models
Shalini Santhakumaran, Neena Modi, Deborah Ashby
Imperial College, London, UK
In England, neonatal care is delivered via networks which are designed such that all services are provided within each network. Neonatal units are classified depending on the services they provide, and babies are often transferred for specialist care. The frequency of transfers and variation in case-mix make comparison of outcomes difficult. Previously, mortality has been compared across networks rather than units to circumvent this problem, though results are less useful for evaluating performance. Although statistical methods exist to tackle these problems, they are rarely used for benchmarking in clinical practice. We compared a variety of hierarchical multiple membership models to compare case-mix adjusted mortality whilst allowing for neonatal transfers.
Data were obtained from the National Neonatal Research Database, formed from anonymised routine clinical data. Three-level Bayesian hierarchical multiple membership models were fitted to the data. A range of assumptions for dependence and distribution of the unit and network parameters were used. Weights were assigned to the unit random effects based on the proportion of time each baby spent in each unit. We adjusted for gestational age and birth weight, sex, use of antenatal steroids, maternal age and a multiple birth indicator to control for differences in case-mix. Models were compared using DIC with consideration given to suitability of assumptions and interpretability of results. The posterior distributions of the unit and network effects and their ranks were used to allow a neonatal unit to compare their performance with the rest of the country, within network and within unit classification.
C11.3
Confidence intervals for ranks with application to performance indicators
Erik van Zwet1, Jelle Goeman1, Aldo Solari2
1Leiden University Medical Center, Leiden, The Netherlands, 2University Milano-Bicocca, Milano, Italy
Ranks are notoriously difficult to estimate, yet there is a growing demand to rank health care providers on the basis of certain "performance indicators". It is therefore very important to convey the uncertainty in a ranking to decision makers by providing them with confidence intervals. We propose a method to construct confidence intervals for ranks that have either simultaneous or individual coverage of 95%. Simultaneous confidence intervals are appropriate when, for instance, one is interested in the ten worst performing centres. Individual intervals are appropriate as feed-back to a particular medical centre. We contrast our approach with the Empirical Bayes (EB) method which is often employed. Our approach is based on a fixed effects model and on testing multiple comparisons, while EB is based on a random effects model. Indeed, EB assumes latent centre effects which follow a normal distribution. This is a problematic assumption, which does not hold if there are centres that are truly under or over performing. Application of the EB approach leads to shrinkage of the centre effects to a common mean. This has definite advantages, but also produces some difficulties. Shrinkage introduces bias which is more severe for centres with fewer patients and this may not be fair. Also, since the amount of shrinkage varies from year to year, it is difficult to compare EB results across years. Our approach is targeted to avoid these problems. We demonstrate on a large data set from the Netherlands.
C11.4
Assessing Hospital Performance for Pneumonia Using Administrative Data With and Without Clinical Data: Does the Difference Matter?
Yew Yoong Ding1, John Abisheganaden1, Wai Fung Chong2, Bee Hoon Heng2, T K Lim3
1Tan Tock Seng Hospital, Singapore, Singapore, 2National Healthcare Group, Singapore, Singapore, 3National University Hospital System, Singapore, Singapore
Introduction: With growing interest in hospital performance, the validity of outcomes evaluation across providers has never been more crucial. While administrative data have typically been used for this purpose, uncertainty remains on whether incorporation of clinical data for risk adjustment can lead to important differences in comparisons. We sought to examine this issue in the context of older persons hospitalized for pneumonia.
Methods: Using a retrospective cohort study design, we identified hospital episodes for pneumonia among adults aged 55 years or older at 3 acute care public hospitals over one year through their DRG and primary ICD-9-CM (480 to 486) codes. From these, 480 episodes from each hospital were randomly selected. Logistic regression models predicting 30-day mortality were constructed using: 1) Administrative data (demographics, admission information, and comorbidity), and 2) Administrative and Clinical Data (severity of illness, pneumonia sub-type, and pre-morbid function). Corresponding expected mortality, observed to expected ratio (O/E), and risk-adjusted mortality were computed for each hospital.
Results: Overall 30-day mortality was 23.5%, with unadjusted figures for the 3 hospitals being 22.1%, 26.3%, and 22.1%. Using administrative data alone, the corresponding risk-adjusted 30-day mortality rates were 22.8%, 25.8%, and 21.6%, while with administrative and clinical data combined, these were 25.1%, 22.5%, and 23.0%, respectively. Hospital performance ranking reversed when both data were combined.
Conclusion: Addition of clinical data to administrative data for risk adjustment led to important differences in hospital performance evaluation for older persons admitted for pneumonia. This has implications for the practice of using administrative data alone.
C12 Latent variable models
C12.1
Extended likelihood approach to large-scale multiple testing
Youngjo Lee1, Jan Bjørnstad2
1Seoul National University, Seoul, Republic of Korea, 2Statistics Norway, Oslo, Norway
To date, only frequentist, Bayesian and empirical Bayes approaches have been studied for the large-scale inference problem of testing simultaneously hundreds or thousands of hypotheses. Their derivations start with some summarizing statistics without modeling the basic responses. As a consequence, testing procedures have been developed without necessarily checking model assumptions, and empirical null distributions are needed in order to avoid the problem of rejecting all null hypotheses when the sample sizes are large. Nevertheless these procedures may not be statistically efficient. In this paper we present the multiple testing problem as a multiple prediction problem of whether a null hypothesis is true or not. We introduce hierarchical random-effect models for basic responses and show how the extended likelihood is built. It is shown that the likelihood prediction has a certain oracle property. The extended likelihood leads to new testing procedures, which are optimal for the usual loss function in hypothesis testing. The new tests are based on certain shrinkage t-statistics and control the local probability of false discovery for individual tests to maintain the global frequentist false discovery rate, and have no need to consider an empirical null distribution for the shrinkage t-statistics. Conditions are given when these false rates vanish. Three examples illustrate how to use the likelihood method in practice. A numerical study shows that the likelihood approach can greatly improve existing methods and finding the best fitting model is crucial for the behaviour of test procedures.
C12.2
Interpolation between spatial frameworks: an application of process convolution to estimating neighbourhood disease prevalence
Peter Congdon
Queen Mary University of London, London, UK
Health data may be collected across one spatial framework (e.g. health provider agencies), but contrasts in health over another spatial framework (neighbourhoods) may be of policy interest. In the UK, population prevalence totals for chronic diseases are provided for populations served by general practitioner (GP) practices, but not for neighbourhoods (small areas of circa 1500 people), raising the question whether data for one framework can be used to provide spatially interpolated estimates of disease prevalence for the other. A discrete process convolution is applied to this end, and has advantages when there are a relatively large number of area units in one or other framework. Additionally the interpolation is modified to take account of observed neighbourhood indicators (e.g. hospitalization rates) of neighbourhood disease prevalence. These are reflective indicators of neighbourhood prevalence viewed as a latent construct. An illustrative application is to prevalence of psychosis in north east London, containing 190 GP practices and 562 neighbourhoods, including an assessment of sensitivity to kernel choice (e.g. normal vs. exponential). This application illustrates how a zero-inflated Poisson can be used as the likelihood model for a reflective indicator.
C12.3
Bayesian shared spatial-component models to combine sparse and heterogeneous epidemiological data informing about a rare disease and detect spatial biases
Sophie Ancelet1, Juan José Abellan2, Sylvia Richardson3, Victor Del Rio Vilas4, Colin Birch5
1IRSN/LEPID, Fontenay-aux-Roses, France, 2Centre for Public Health Research & CIBER Epidemiologia y Salud Publica, Valencia, Spain, 3Imperial College / Department of Epidemiology and Public Health, London, UK, 4Department of Food, Environment and Rural Affairs, London, UK, 5Animal Health and Veterinary Laboratories Agency, New Haw, Addlestone, UK
We propose several Bayesian shared spatial component models for the analysis of geographical disease risk informed by multiple, sparse and heterogeneous disease surveillance sources. We do so first for one disease and then for two, possibly sharing environmental risk factors. Specifically, our work is motivated by the analysis of the spatial variations of risk of two distinct forms of scrapie infection affecting sheep in Wales using three heterogeneous surveillance data sources. The aim is to hypothesize about the infectious or sporadic nature of each form of scrapie and to detect and discuss possible differences in the evidence provided by each surveillance source. We also consider the problem of comparing the competing Bayesian hierarchical models. In particular, we apply a mixed posterior predictive approach, compare different predictive scores and plot non-randomized PIT histograms to select the best predictive model. Our case study shows that using the proposed shared spatial component models improves the estimation of disease risks compared to a separate analysis of each surveillance source. This also allows detection of spatially structured biases and their correction to return the true underlying spatial risk surface. Methodologically, we observe that discrepancies between the observed data and the model assumption lead to under-prediction of the disease risk. Finally, when the data are very sparse, as in our case, adding a non-spatial residual random effect may not be an optimal modeling choice.
C12.4
An Investigation of Latent Class Trajectory Models of Prescribing to Define a Phenotypic Marker of Disease Susceptibility
Danielle Belgrave1, Iain Buchan1, Christopher Bishop2, Angela Simpson1, Adnan Custovic1
1The University of Manchester, Manchester, UK, 2Microsoft Research Cambridge, Cambridge, UK
Background: The use of non-respiratory prescription drugs in early life may be a prognostic indicator of a child's susceptibility to asthma.
Aim: To define the developmental trajectory of susceptibility in early life based on patterns of prescriptions.
Methods: We used data from the Manchester Asthma and Allergy Study (N=916), a prospective population-based birth cohort study designed to investigate disease development. We fit a taxonomy of longitudinal latent class models hypothesising subgroups of children who have changing levels of immune response over time under varying modelling assumptions. We assume that each child belongs to one of N latent classes, with the number of classes and their size not known a priori. Models were fit using Bayesian machine learning in Infer.NET. The models were compared for goodness-of-fit based on the model evidence, which considers both model accuracy and generalizability. Survival models were used to address whether susceptibility, determined by patterns of prescription use, represented a higher risk of asthma severity.
Results: The "best" model was a Hidden Markov Model which identified three latent classes of susceptibility: "Normal Response" (73.7%); "Medium Susceptibility" (22.6%) and "High Susceptibility" (3.7%). Children with "High Susceptibility" had a significantly higher hazard of experiencing asthma or wheeze symptoms within the first 3 years of life compared to those with "Normal Response" (HR=4.23, 95% CI 2.46-7.28, p<0.001) and those with "Medium Susceptibility" (HR=3.00, 95% CI 2.39-3.77, p<0.01).
Conclusion: By analysing trajectories of prescription use in early life, we obtain a phenotypic definition of susceptibility and subsequent development of asthma.
C13 Functional data analysis/longitudinal data
C13.1
Parametric and non-parametric multivariate analysis of functional MRI data
Daniela Adolf, Siegfried Kropf
Otto-von-Guericke University, Department for Biometrics and Medical Informatics, Magdeburg, Germany
Functional magnetic resonance imaging (fMRI) performs an indirect measurement of neuronal activation in the human brain. The response signal is a temporal series in single three-dimensional regions (voxels). These data are high-dimensional, because there are generally a few hundred scans (sample elements) but hundreds of thousands of voxels (variables). Furthermore, the measurements are correlated in space as well as in time.
Analyses of these high-dimensional functional imaging data go beyond the scope of classical multivariate statistics. By default, fMRI data are analyzed voxel-wise on the basis of univariate linear models, using a pre-whitening method to eliminate the temporal correlation. We adapt this strategy in a multivariate context applying so-called stabilized multivariate tests, which are designed to cope with high-dimensional data and are based on the theory of left-spherical distributions.
There is always a need for approximation of the temporal correlation in this procedure. Usually, a first order autoregressive process is assumed for fMRI measurements and its correlation coefficient is estimated. We propose a non-parametric approach that renders an estimation of the temporal correlation structure unnecessary. Based on the stabilized multivariate test procedure, we use a block-wise permutation method including a random shift.
A comparison of these methods using simulated data shows in detail the pros and cons of the test procedures. An application to real fMRI data illustrates the practicability of the multivariate approach, particularly when using the non-parametric proposal.
C13.2
Path analysis with multilevel functional data: Change in glucose curves during
pregnancy and its impact on birth weight.
Kathrine Frey Frøslie1, Jo Røislien1, Elisabeth Qvigstad2, Kristin Godang3, Jens Bollerslev4, Tore Henriksen5, Marit Bragelien Veierød1
1Department of Biostatistics, University of Oslo (UiO), Oslo, Norway, 2Norwegian Resource Centre for Women's Health, Oslo University Hospital (OUH), Oslo, Norway, 3Section of Specialised Endocrinology, OUH, Oslo, Norway, 4Faculty of Clinical Medicine, UiO, Oslo, Norway, 5Division of Obstetrics and Gynaecology, OUH, Oslo, Norway
Pregnancy is associated with increased insulin resistance, resulting in elevated
blood glucose levels. High maternal glucose levels are known to increase the
risk of several adverse pregnancy outcomes. Most studies of such effects are
based on simple glucose measurements like the fasting glucose or the two-hour value from oral glucose tolerance tests (OGTTs). However, simple
measures might miss physiologically important information in OGTT glucose
profiles. Our aim was to capture important information from entire OGTT
curves during pregnancy and to use it in the analysis of neonatal outcomes.
Functional data analysis includes methods for the analysis of multilevel curve
data. We used multilevel functional principal component analysis (MFPCA) to
analyse glucose curves from two visits during pregnancy in a Norwegian
prospective cohort study of 974 healthy pregnant women. Almost all the
variability in the curves was captured by two functional principal components
(FPCs) at the visit specific level, and three FPCs at the visit/subject specific
level. At both levels the physiologically useful interpretations of FPC1 and
FPC2 were "General level" and "Time-to-peak", respectively. At the visit/subject
level FPC3 represented oscillations.
We further performed a Bayesian path analysis of the impact of early
pregnancy body mass index (BMI) and OGTT curve features on birth weight. A
simplified path model with BMI as an exogenous variable and FPC scores as
intermediate variables was used. The implementation and interpretation of path
models with functional data can be challenging, but add to the physiological
understanding of mechanisms of obesity, glucose metabolism and neonatal
outcomes.
C13.3
Automatic identification and analysis of anatomical curves across the human face
Stanislav Katina, Adrian Bowman
The University of Glasgow, Glasgow, Scotland, UK
Identification of anatomical landmarks is a natural starting point for facial shape analysis, but landmarks contain only a very small proportion of the data available from captured images. Anatomically defined curves have the advantage of providing a much richer expression of facial shape.
This is explored in the context of identifying 24 ridge, valley or observed
curves, which are automatically identified by (1) local extremes of surface
curvature, (2) detection of surface slope discontinuities or (3) direct surface
cuts in the normal direction. The P-spline approach to smoothing is then used
to construct a set of semilandmarks on curves at any desired resolution. The
penalty function in this type of smoothing can be adapted to the shapes of the
curves to be identified.
The shape of 57 3D stereophotogrammetric scans of human faces was
determined by means of 30 anatomical landmarks and approximately 1000
equidistantly spaced semilandmarks on curves. The semilandmarks on the
target shape were iteratively adjusted by bending energy to create
geometrically homologous points with respect to a symmetrised reference
shape. The bending energy between the reference and target shape was
minimized and artificial deformation removed. In each step of the algorithm,
Generalized Procrustes Superimposition optimized position, orientation, and
scale of all shape coordinates. This resulted in Procrustes shape coordinates
used in further analyses of multivariate variability and sexual dimorphism.
The research was supported by Wellcome Trust grant WT086901MA.
C13.4
Prediction of Visual Prognosis to Optimize Frequency of Perimetric Testing in
Glaucoma
Susan Bryan1, Koen Vermeer2, Hans Lemij3, Paul Eilers1, Emmanuel Lesaffre1,4
1Erasmus Medical Center, Rotterdam, The Netherlands, 2Rotterdam Ophthalmic Institute, Rotterdam, The Netherlands, 3Rotterdam Eye Hospital, Rotterdam, The Netherlands, 4L-Biostat, Catholic University of Leuven, Leuven, Belgium
Glaucoma is a leading cause of blindness in the world. Treatment slows the
disease, possibly even halting disease progression. Our ultimate aim is to
predict future field loss and gain knowledge about the individual's rate of
progression in order to determine optimal treatment strategies. Our immediate aim, however, is to investigate the point-specific evolutions over time.
We use part of a unique database from the Rotterdam Eye Hospital in The
Netherlands, focusing on 139 glaucoma patients followed for at least 10 years.
The response variables are the 54 threshold points which describe the level of
(differential light) sensitivity in each eye, measured from 0dB (blind) to 34dB
(normal).
Literature suggests modelling the response at each position over time and for
every patient separately using linear, quadratic, exponential or tobit models.
Some conclude that the exponential model performs the best, while others
argue to use tobit models since there is censoring (at 0dB). Based on our data,
we find that a linear model for the median fits best for 48.7% of the positions x
subjects, while the exponential model was the best choice for only 8.8% of the cases. Besides exploring other models, such as quantile models and various models that take censoring into account, we are currently developing a hierarchical model that combines the evolutions of all positions. This is a challenging exercise that needs to take into account that glaucoma might not be present in both eyes at the start of the study but may develop over time.
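The model comparison step described above can be illustrated with a small sketch: for one position-by-subject series of sensitivities, fit a linear and an exponential trend by least squares and compare residual error. The data and the simple selection rule below are invented for illustration; the authors' actual criteria (median fits, tobit and quantile variants) are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

# One hypothetical series: years of follow-up and sensitivity (dB) at one visual-field position.
t = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
y = np.array([30, 29, 29, 27, 26, 26, 24, 23, 22, 22, 21], dtype=float)

# Linear trend: y = a + b * t
(a_lin, b_lin), _ = curve_fit(lambda t, a, b: a + b * t, t, y)
rss_lin = np.sum((y - (a_lin + b_lin * t)) ** 2)

# Exponential decay towards 0 dB: y = c * exp(-r * t)
(c_exp, r_exp), _ = curve_fit(lambda t, c, r: c * np.exp(-r * t), t, y, p0=(30.0, 0.05))
rss_exp = np.sum((y - c_exp * np.exp(-r_exp * t)) ** 2)

# Both models have two parameters, so comparing residual sums of squares is enough here.
best = "linear" if rss_lin < rss_exp else "exponential"
print(f"RSS linear = {rss_lin:.2f}, RSS exponential = {rss_exp:.2f} -> {best} fits better")
```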
Tuesday, 21 August
Morning sessions (I3 - I4 , C14 – C21)
I3 Functional data analysis
I3.1
Functional data - an introduction towards applications
Helle Sørensen
Department of Mathematical Sciences, University of Copenhagen, Denmark
Functional data are samples consisting of curves (or surfaces), usually with an
implicit assumption of smoothness. Each curve is viewed as a single sample
element rather than as a collection of sample elements. The talk will provide
an introduction to the analysis of such data. Chromatography data and
acceleration data from a study on horse gait will be used for illustration.
The first obstacle with functional data is the fact that the curves are in practice
only recorded discretely and with noise. This calls for smoothing, where the
discrete data are converted to actual functional objects. Regularization
methods, where smoothness is imposed on a basis expansion through
penalization, are popular and will be reviewed in the talk.
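As a minimal illustration of the penalization idea (smoothness imposed through a roughness penalty), the sketch below smooths a noisy curve observed on a grid with a discrete second-difference penalty (a Whittaker-type smoother); it stands in for, but is simpler than, the basis-expansion methods reviewed in the talk, and the data are simulated.

```python
import numpy as np

def roughness_penalty_smoother(y, lam):
    """Minimise ||y - z||^2 + lam * ||D2 z||^2 over z, where D2 takes second differences.

    Larger lam gives a smoother fitted curve; lam -> 0 reproduces the data.
    """
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)          # (n-2) x n second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

# Noisy discrete observations of a smooth curve.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * t) + rng.normal(scale=0.3, size=t.size)

z = roughness_penalty_smoother(y, lam=50.0)       # the smoothed "functional object" on the grid
print("residual std:", np.std(y - z).round(3))
```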
A second question is how to model relationships among several variables,
when one or more of them are functional. In the talk we will give examples of
regression models for functional data. The functional variables may enter as
outcome, as predictor, or both, and the objects of interest are regression
functions relating the predictor and the outcome.
I3.2
A Modular Approach to Scalar-on-Function Regression
Jeff Goldsmith1, Bruce Swihart2, Ciprian Crainiceanu2
1Department of Biostatistics, Mailman School of Public Health, Columbia University, USA, 2Department of Biostatistics, Bloomberg School of Public Health, Johns Hopkins University, USA
We develop modular, computationally efficient methods for generalized
functional linear models. The functional predictors are projected onto a large
number of smooth eigenvectors allowing application to many real-data
scenarios, and the coefficient function is estimated using penalized spline
regression in a mixed model framework. The mixed model approach allows
direct extension to functions observed longitudinally and facilitates the inclusion
of non-linear effects of scalar covariates. Inferential techniques for functional
covariates are developed and provide tools for model selection. We are
motivated by a study of white matter demyelination via diffusion tensor imaging
in which various cerebral white matter tract properties are used to predict
cognitive and motor function in multiple sclerosis patients. All methods are
implemented in the `refund' package available on CRAN.
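A stripped-down Python sketch of the projection idea behind scalar-on-function regression: functional predictors are reduced to principal component scores, and the scalar outcome is regressed on the scores. The penalized-spline, mixed-model machinery of the actual method and the `refund' implementation are not reproduced here, and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 150, 100                         # n curves observed on a grid of p points
t = np.linspace(0, 1, p)

# Simulated functional predictors (random combinations of smooth terms plus noise)
# and a scalar outcome driven by a smooth coefficient function beta(t).
basis = np.array([np.sin((j + 1) * np.pi * t) for j in range(5)])       # 5 x p
X = rng.normal(size=(n, 5)) @ basis + rng.normal(0, 0.2, size=(n, p))
beta_true = np.exp(-(t - 0.3) ** 2 / 0.02)
y = X @ beta_true / p + rng.normal(0, 0.1, n)

# Functional PCA via SVD of the centred curves; keep the first K eigenfunctions.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
K = 10
scores = Xc @ Vt[:K].T                  # FPC scores used as ordinary regression covariates

# Least-squares regression of the scalar outcome on the scores (plus intercept).
Z = np.column_stack([np.ones(n), scores])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)

# Map score coefficients back to an estimated coefficient function beta(t).
beta_hat = Vt[:K].T @ coef[1:] * p      # rescale for the 1/p quadrature weight above
print("correlation with true beta(t):", np.corrcoef(beta_hat, beta_true)[0, 1].round(2))
```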
I3.3
A study of cerebral aneurysms pathogenesis: functional data analysis of
three-dimensional geometries of the inner carotid artery
Laura Sangalli
MOX - Department of Mathematics, Politecnico di Milano, Italy
I will describe some exploratory statistical analyses performed within the
AneuRisk Project, a scientific endeavor that investigated the pathogenesis of
cerebral aneurysms, in an interdisciplinary effort combining the experience of
practitioners from neurosurgery and neuroradiology with that of researchers
from statistics, numerical analysis and bio-engineering. The data analyzed consist of reconstructions of three-dimensional cerebral vascular geometries obtained from angiographic images. Advanced techniques are developed for
the statistical analysis of these complex functional data, including methods for
multidimensional curve fitting, dimension reduction, registration and
classification.
The seminar is based on joint work with Piercesare Secchi, Simone Vantini,
Alessandro Veneziani and Valeria Vitelli.
C14 Clinical trials II
C14.1
Efficient design of cluster randomized trials with treatment-dependent sampling
costs and treatment-dependent unknown outcome variances
Gerard van Breukelen, Math Candel
Maastricht University, Maastricht, The Netherlands
Cluster randomized trials are randomized experiments where clusters of
persons are randomized to treatment, e.g. schools or general practices, and all
persons sampled within a given cluster are given the same treatment.
Published work on optimal design of cluster randomized trials provides sample
size formulae (how many clusters, how many persons per cluster) as a function
of sampling cost per cluster and per person, outcome variance, and intraclass
correlation. These formulae are based on three restrictive assumptions:
1) an equal sample size per cluster, 2) homogeneous outcome variance and
sampling costs, and 3) a known intraclass correlation (ICC), as the optimal
design depends on this ICC and is thus a locally optimal design (LOD) only.
The assumptions of equal cluster sizes and a known ICC were overcome in
recent work. This presentation relaxes the assumptions of homogeneous
outcome variance and sampling costs, by presenting Maximin designs (MMD)
for treatment-dependent costs and treatment-dependent unknown outcome
variances and ICC. MMD maximizes either the minimum efficiency or the
minimum relative efficiency (relative to the LOD) over the plausible range of
variance and ICC values. By choosing a larger or smaller range MMD allows
balancing between efficiency and robustness. Graphs based on our equations
will show how the optimal budget split between the two treatment arms, and
thereby also the optimal sample size per arm, depends on the sampling costs
and variance range per treatment arm at each design level (cluster, person).
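To make the idea of a maximin budget split concrete, here is a small hedged sketch: for an assumed total budget, arm-specific sampling costs, and a plausible range of outcome variances and ICCs per arm, it grid-searches the budget split that maximizes the minimum relative efficiency (variance of the locally optimal design divided by variance of the candidate design). The variance expression is the standard cluster-randomized design-effect formula; the exact formulation in the presentation may differ, and all numbers are invented.

```python
import numpy as np

def arm_variance(budget, c_cluster, c_person, sigma2, icc):
    """Variance contribution of one arm, spending `budget` on clusters of (locally) optimal size."""
    n = max(1.0, np.sqrt(c_cluster * (1 - icc) / (c_person * icc)))   # optimal cluster size
    k = budget / (c_cluster + c_person * n)                           # number of clusters affordable
    return sigma2 * (1 + (n - 1) * icc) / (k * n)

def effect_variance(split, total_budget, costs, sigma2s, iccs):
    b1 = split * total_budget
    return (arm_variance(b1, *costs[0], sigma2s[0], iccs[0]) +
            arm_variance(total_budget - b1, *costs[1], sigma2s[1], iccs[1]))

# Assumed inputs (illustrative): arm 0 = control, arm 1 = intervention (more expensive).
total_budget = 100000.0
costs = [(500.0, 50.0), (800.0, 80.0)]          # (cost per cluster, cost per person) per arm
sigma2_range = [(1.0, 1.5), (1.0, 2.0)]         # plausible outcome-variance range per arm
icc_range = [(0.02, 0.10), (0.02, 0.10)]        # plausible ICC range per arm

splits = np.linspace(0.2, 0.8, 61)
scenarios = [(s0, s1, r0, r1)
             for s0 in sigma2_range[0] for s1 in sigma2_range[1]
             for r0 in icc_range[0] for r1 in icc_range[1]]

def min_relative_efficiency(split):
    rel_effs = []
    for s0, s1, r0, r1 in scenarios:            # corners of the plausible region
        v = effect_variance(split, total_budget, costs, (s0, s1), (r0, r1))
        v_lod = min(effect_variance(x, total_budget, costs, (s0, s1), (r0, r1))
                    for x in splits)            # locally optimal split for this scenario
        rel_effs.append(v_lod / v)
    return min(rel_effs)

best = max(splits, key=min_relative_efficiency)
print(f"maximin budget share for the control arm: {best:.2f}")
```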
C14.2
Bayesian Phase II randomized design for time-to-event endpoint using
historical control - Application to Oncology
Daniel Lorand, Beat Neuenschwander, Rupam Ranjan Pal, Lanjia Lin
Novartis Pharma AG, Basel, Switzerland
Although single-arm Phase II clinical trials are still widely used in the development of oncology drugs, there are situations when randomized controlled trials are preferable. The use of such designs is especially adequate when a new drug is being combined with an approved drug or when the primary
endpoint is a time-to-event endpoint such as progression-free survival.
Although randomized Phase II trials are typically larger than uncontrolled trials,
they should remain moderate in size so that reliable evidence on the efficacy of
the investigated treatment can be obtained quickly at reasonable cost.
Adequate use of historical control data, which can be done formally within the Bayesian framework, can lead to a highly efficient design.
We describe such a design where clinical benefit is assessed by estimating the
hazard ratio between a new treatment and a control using a Bayesian Cox
model. A key feature of this approach is the use of an informative prior for the
baseline hazard function in the control group through the implementation of a
novel meta-analysis approach leading to suitable downweighting of the
historical information derived from published data. Operating characteristics
simulated under a wide range of scenarios and compared to the standard
frequentist approach show improved false positive and/or negative rates. This
illustrates that relevant historical data can be robustly incorporated into the
design of randomized Phase II oncology clinical trials in a way that may either
lead to a smaller control arm or superior decision making when the sample size
is fixed.
C14.3
Evaluation of methods for design and analysis of cluster randomised crossover
trials with binary outcomes with application to intensive care research
Andrew Forbes1, Muhammad Akram1, Rinaldo Bellomo2, Michael Bailey2
1Monash University, Melbourne, Victoria, Australia, 2Monash University and the Australia and New Zealand Intensive Care Society Centre for Outcome and Resource Evaluation, Victoria, Australia
The assessment of interventions in intensive care research in Australia is
hampered by the need for interventions to be applied at intensive care unit
(ICU) level together with the limited number of intensive care units in Australia.
As such, parallel-arm cluster randomised trials cannot be sized appropriately to
detect small effects of universal ICU interventions on mortality such as
intravenous caloric delivery in patients receiving nutrition or oxygen level
targeting in mechanically ventilated patients. The increased efficiency of
cluster randomised crossover designs presents an opportunity to remedy this
situation, however the development and assessment of such designs in the
literature has primarily been with continuous outcomes with associated linear
mixed models rather than for binary outcomes.
In this presentation we report on methods for design and analysis of cluster
crossover trials with binary data. In particular we report on results from a
simulation study based on the cluster size variability, size of period effects, and
within- and between-period correlations observed in the Australian adult patient
intensive care database. Using data generated from a marginal model, we
report on appropriateness and modification of existing sample size formulae for
Gaussian outcomes, as well as size, power and confidence interval coverage
using a variety of cluster-summary and model-based methods. We also
discuss the potential for extension to multi-period designs and their feasibility in
the intensive care research setting.
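One way to generate binary cluster-crossover data with separate within- and between-period correlations is via nested cluster and cluster-period random effects on the logit scale; this is a conditional stand-in for the marginal data-generating model used in the abstract, and all parameter values below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_cluster_crossover(n_clusters=20, n_per_period=100,
                               p_control=0.20, odds_ratio=0.9, period_effect=0.05,
                               sd_cluster=0.30, sd_cluster_period=0.15):
    """Two-period, two-treatment cluster crossover trial with binary outcomes.

    Cluster random effects induce between-period correlation; extra cluster-period
    effects make the within-period correlation larger than the between-period one.
    """
    records = []
    base = np.log(p_control / (1 - p_control))
    for c in range(n_clusters):
        u_c = rng.normal(0, sd_cluster)                     # cluster effect (both periods)
        seq = c % 2                                         # 0: control->treat, 1: treat->control
        for period in (0, 1):
            treated = int(period != seq)
            u_cp = rng.normal(0, sd_cluster_period)         # extra cluster-period effect
            logit = base + period * period_effect + treated * np.log(odds_ratio) + u_c + u_cp
            p = 1 / (1 + np.exp(-logit))
            y = rng.binomial(1, p, size=n_per_period)
            records.append((c, period, treated, y.mean()))
    return records

# Simple cluster-period summary analysis: difference of treated and control period means.
records = simulate_cluster_crossover()
treated_means = [m for _, _, trt, m in records if trt == 1]
control_means = [m for _, _, trt, m in records if trt == 0]
print("estimated risk difference:", round(np.mean(treated_means) - np.mean(control_means), 4))
```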
C14.4
Optimal target allocation proportion for correlated binary responses in a two-treatment clinical trial
Atanu Biswas1, Saumen Mandal2, Camelia Trandafir3
1Indian Statistical Institute, Kolkata, India, 2University of Manitoba, Winnipeg, Canada, 3Public University of Navarra, Pamplona, Spain
Optimal allocation designs for the allocation proportion are obtained in the
present paper for a two-treatment clinical trial, in the presence of possible
correlation between the proportion of successes for the two treatments. The possibility of such correlation is motivated by real data. It is observed that the optimal allocation proportions highly depend on the correlation. We also discuss a completely correlated set-up where the binomial assumption cannot be made.
C14.5
Statisticians Implementing Change and Cost Effectiveness in Clinical Trials
Through Risk Based Prioritization Monitoring
Nicole Close
EmpiriStat, Inc., Mt Airy, MD, USA
In August 2011, FDA withdrew its 1988 guidance on "Guidance for the
Monitoring of Clinical Investigations" and issued its draft guidance "Oversight of
Clinical Investigations - A Risk-based Approach to Monitoring". The new
guidance suggests risk-based approaches: source data verification should be focused on critical fields (key efficacy and safety variables), and less than 100% source data verification (SDV) on less important fields may be acceptable. The guidance gives a clear signal that Sponsors are encouraged to explore cost-effective ways to conduct clinical monitoring instead of relying solely on on-site monitoring.
Statisticians now have an increasing role in centralized monitoring and should
be prepared to tackle this area of expertise. A template for a Risk Based
Statistical Monitoring Plan for Phase II/III clinical trials will be reviewed.
Targeted key statistical metrics which identify areas for querying remotely and
targeted onsite monitoring and additional training will be reviewed. Those
metrics include defining outliers within sites and across sites, recruitment and
follow-up rates, adverse event rates, and even modeling Investigator and staff resources, timelines and responsiveness for monitoring trial success. With EDC systems enabling centralized access to
both trial and source data and the growing appreciation of the ability of
statistical assessments, risk based plans are easier to write and execute.
The amount of resources and time that can be saved by shifting away from
100% SDV and onsite monitoring to a risk-based approach, or even a hybrid approach, is large for a single clinical trial. The amount saved over a clinical program is substantial.
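As one concrete example of a centralized statistical metric, the sketch below flags sites whose adverse-event rates are outliers relative to the pooled rate across sites, using a simple exact binomial screen; the threshold, site names and counts are invented for illustration and are not part of the template described above.

```python
from scipy.stats import binomtest

# Hypothetical site-level counts: (site id, number of subjects, number with >=1 adverse event).
sites = [("S01", 40, 6), ("S02", 35, 5), ("S03", 50, 22), ("S04", 45, 7), ("S05", 30, 1)]

total_n = sum(n for _, n, _ in sites)
total_ae = sum(a for _, _, a in sites)
pooled_rate = total_ae / total_n

# Flag sites whose AE rate differs from the pooled rate at the 1% level (two-sided).
for site, n, ae in sites:
    p_value = binomtest(ae, n, pooled_rate).pvalue
    flag = "FLAG" if p_value < 0.01 else "ok"
    print(f"{site}: rate={ae / n:.2f} (pooled {pooled_rate:.2f}), p={p_value:.3f} {flag}")
```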
C15 Statistics for epidemiology I
C15.1
Bias of relative risk estimates in cohort studies as induced by missing
information due to death
Nadine Binder1, Martin Schumacher2
1Freiburg Center for Data Analysis and Modeling, Freiburg, Germany, 2University Medical Center Freiburg, Freiburg, Germany
In most clinical and epidemiological studies information on disease status is
usually collected at regular follow-up times. Often, this information can only be
retrieved in individuals who are alive at follow-up, but will be missing for those
who died before. This is of particular relevance in long-term studies or when
studying elderly populations. Frequently, individuals with missing information
because of death are excluded and analysis is restricted to the surviving ones.
Such naive analyses can lead to serious bias in incidence estimates,
translating into bias in estimates of hazard ratios that correspond to potential
risk or prognostic factors.
We investigate this bias in hazard ratio estimates by simulating data from an
illness-death multi-state model with different transition hazards, and
considering the influence of risk factors following a proportional hazards model.
We furthermore extend an approximate formula for the bias of the resulting
incidence estimate by Joly et al. (Biostatistics, 2002) for a binary risk factor.
Analytical and simulation results closely agree and reveal that the bias can be
substantial and in either direction, predominantly depending on the differential
mortality.
In an application to a Danish long-term study on nephropathy for diabetics,
where complete status information is available, we artificially induce missing
intermediate disease status. The naive risk factor analyses differ significantly
from those obtained in the original analysis and even change sign, giving
further indication that the bias is relevant. This supports the analytical and
simulation results and underlines that missing intermediate disease status
cannot be ignored.
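The mechanism can be mimicked with a small simulation: generate an illness-death process with a binary risk factor, then compare the crude incidence rate ratio estimated from the full data with a naive analysis restricted to subjects still alive at the follow-up visit. All transition rates below are invented; this is only a sketch of the design, not the authors' simulation study.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200000
x = rng.binomial(1, 0.5, n)                      # binary risk factor

# Illness-death model with constant (exponential) transition hazards.
h_disease = 0.02 * np.exp(0.7 * x)               # healthy -> diseased, true log HR = 0.7
h_death_healthy = 0.03                           # healthy -> dead
h_death_diseased = 0.10                          # diseased -> dead (differential mortality)

t_disease = rng.exponential(1 / h_disease)
t_death0 = rng.exponential(1 / h_death_healthy, n)
follow_up = 10.0

# Disease status is only ascertained at the follow-up visit, and only for survivors.
diseased_by_fu = (t_disease < follow_up) & (t_disease < t_death0)
t_death = np.where(diseased_by_fu,
                   t_disease + rng.exponential(1 / h_death_diseased, n),
                   t_death0)
alive_at_fu = t_death > follow_up

def log_rate_ratio(mask):
    """Crude log incidence rate ratio of disease (exposed vs unexposed) among `mask` subjects."""
    rates = []
    for g in (0, 1):
        sel = mask & (x == g)
        events = (diseased_by_fu & sel).sum()
        persontime = np.minimum(t_disease, np.minimum(t_death0, follow_up))[sel].sum()
        rates.append(events / persontime)
    return np.log(rates[1] / rates[0])

print("full data      log rate ratio ~", round(log_rate_ratio(np.ones(n, dtype=bool)), 3))
print("survivors only log rate ratio ~", round(log_rate_ratio(alive_at_fu), 3))
```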
C15.2
Explained variation versus attributable risk
Michael Schemper
Medical University of Vienna, Vienna, Austria
Both the proportion of variation in outcome attributable to or explained by a prognostic factor (PEV; cf., e.g., Schemper, Stat. Med. 2003), and the attributable fraction or attributable risk (AR; cf., e.g., SMMR 2001, Issue 3), popular in epidemiology, take into account the prevalence of an exposure, or, more generally, the distribution of a prognostic factor. They both aim at quantifying 'importance' or 'relative importance' on a 0-1 scale. 'Importance' (Healy, Stat. Med. 1990; Nelson and O'Brien, JAIDS 2006) is seen as a function of the exposed fraction in a population and the strength of the exposure effect, as quantified by the relative risk. In this presentation we provide for the first time a systematic comparison of both concepts, analytically and empirically. In particular we see that AR measures the degree to which an exposition (factor) is necessary for disease or death, while PEV quantifies the degree to which this exposition (factor) is necessary and sufficient. Both measures tend to 0 for an underlying relative risk tending to 1 or an exposure probability tending to 0. However, as will be demonstrated, they are affected differently by changes in the distributions of outcomes and exposures. A large Swedish survey of smoking and respiratory tract cancers is used to illustrate differences of AR and PEV.
C15.3
Quantifying bias in register based research
Luwis Diya, Kamila Czene, Marie Reilly
Karolinska Institutet, Stockholm, Sweden
Aims: Register based research is important in understanding the burden of disease in individuals, families and society at large. When registers are linked, their different start-up dates lead to truncation of the individual's or family members' history. Despite a large volume of published research using these registers, the potential for bias in these results due to left truncation at register start-up has received very little attention. The aim of this study is to assess the bias in familial risk estimates that use the Swedish Hospital Discharge Register.
Methods: As an illustrative example, we will study familial risk of acute appendicitis, a well-defined diagnosis that is unlikely to be affected by changing patterns of diagnosis and treatment. Cases will be identified from the Hospital Discharge Register and family relationships from the Multigenerational Register. Using the observed incidence rates of acute appendicitis and the Standardized Incidence Rate (SIR) in family members of patients, we will simulate a virtual but complete population of Sweden using available vital statistics. We will investigate the magnitude of bias in various measures of familial relative risk: SIR, Incidence Rate Ratios and Hazard Ratios. We will also investigate how the bias is affected by age at disease onset and the magnitude of the familial relative risk.
Conclusion: The proposed simulation tool can be used to assess the potential for bias in linked register data, and thus facilitate sensitivity analyses. This tool is especially important for short life-span registers where the degree of truncation is more pronounced.
C15.4
Recurrence risk of perinatal mortality in Northern Tanzania: A registry-based prospective cohort study
Michael Johnson Mahande1, Anne Kjersti Dalveit1, Gunnar Kvaale4, Blandina Theophil Mmbaga1, Joseph Obure6, Gileard Masenga6, Rachel Manongi2, Rolv Terje Lie1
1University of Bergen, Bergen/Hordaland, Norway, 2Kilimanjaro Christian Medical College, Moshi/Kilimanjaro, Tanzania, 3University of Bergen, Bergen/Hordaland, Norway, 4Centre for International Health, Bergen/Hordaland, Norway, 5University of Bergen, Bergen/Hordaland, Norway, 6Department of Obstetrics and Gynaecology, Kilimanjaro Christian Medical Centre, Moshi/Kilimanjaro, Tanzania, 7Department of Obstetrics and Gynaecology, Kilimanjaro Christian Medical Centre, Moshi/Kilimanjaro, Tanzania, 8Kilimanjaro Christian Medical College, Moshi/Kilimanjaro, Tanzania, 9University of Bergen, Bergen/Hordaland, Norway
Objective: Perinatal mortality is as high as 5% in many countries in sub-Saharan Africa. We compared the risk of a perinatal loss between women who did, and women who did not, lose their baby in a previous pregnancy.
Methods: A total of 19,811 women who delivered singletons for the first time at KCMC hospital between 2000 and 2008 were followed for a total of 4503 subsequent deliveries up to 2010. Women who had a multiple birth, or who were referred from rural areas for various medical reasons, were excluded. We estimated perinatal mortality in a subsequent delivery depending on the outcome of the previous delivery.
Results: A perinatal loss increased a woman's likelihood to be recorded with a next pregnancy in our data from 19% to 31%. The recurrence risk of perinatal death for women who had already lost one baby was 9.1% compared with a much lower risk of 2.8% for women who already had a surviving child, for a relative risk of 3.2 (95% CI: 2.2-4.7). Recurrence contributed 15% of perinatal deaths in all subsequent pregnancies. Preeclampsia, placental abruption, placenta previa, induced labour, preterm delivery and low birth weight in a previous pregnancy were also associated with increased perinatal mortality in the next pregnancy.
Conclusions: Some women in Africa carry a very high risk of losing their child in a pregnancy. Strategies of perinatal death prevention may attempt to target pregnant women who are particularly vulnerable or already have experienced a perinatal loss.
C15.5
Luxemburg acUte myoCardial Infarction registry (LUCKY): estimation of the effect of clinical and biochemical variables on the New-York Heart Association score using penalized ordinal logistic regression
Olivier Collignon1, Stephen Senn1, Michel Vaillant1, Yvan Devaux1, Marie-Lise Lair1, Daniel Wagner2
1CRP Santé, Strassen, Luxembourg, 2Institut National de Chirurgie Cardiaque et de Cardiologie Interventionnelle, Luxembourg, Luxembourg
Since 2006, every Luxembourgish patient with acute myocardial infarction has been provided care at the INCCI (Institut National de Chirurgie Cardiaque et de Cardiologie Interventionnelle) and recorded in the LUCKY registry (Luxemburg acUte myoCardial Infarction). After discharge, patients are followed up for several years by measuring some variables, like ischemic time, medication, biomarkers, the occurrence of major adverse cardiovascular events and different indexes of remission of infarction. In particular, the NYHA (New-York Heart Association) score, which ranges from 1 to 4 (from mild to severe), measures the limitation of patients' physical activity. We also focused on determining which features in the registry were linked to an elevated NYHA score at the end of follow-up. Proportional odds ordinal logistic regression was performed to estimate the effect of each predictor on the NYHA score. To do this, continuous variables were approached using restricted cubic splines and the equal slopes assumption was relaxed using a forward continuation ratio model. In order to propose an alternative to backward stepwise variable selection and to avoid overlearning due to high dimensionality, several penalization techniques were used. The final model was first validated by bootstrap and then simplified by determining a parsimonious submodel whose predicted probabilities correlated over 0.95 with those obtained with the full model.
C16 Joint modelling of outcome and time-to-event
C16.1 (Student award winner)
Dynamic Prediction Based on Joint Model for Categorical Response and Time-to-Event
Magdalena Murawska1, Dimitris Rizopoulos1, Emmanuel Lesaffre1,2
1Erasmus University Medical Center, Rotterdam, The Netherlands, 2I-Biostat, Catholic University of Leuven, Leuven, Belgium
In transplantation studies often categorical longitudinal measurements
reflecting the status of the patient are collected for patients waiting for an organ
transplant.
In this setting it is often of primary interest to assess whether available history
of the patient can be used for predicting patient survival as well as further
performance on the waiting list.
In this work we use a Bayesian approach to jointly model the performance of
patients described by their categorical status that changes in time while waiting
for the new organ together with the survival time on the waiting list. The model
accounts also for the presence of competing risks due to the fact that patients
are delisted from the list because of death, after transplantation or because of
other reasons. In particular, the submodel for longitudinal categorical
responses is a multinomial logit mixed-effects model, whereas for the event
process we postulate the cause-specific hazard models that share the same
random effects with the multinomial logit model. We illustrate how the fitted
joint model can be used for the dynamic prediction of the cumulative incidence
functions as well as the longitudinal response for patients based on their
available longitudinal measurements of that response.
C16.2 (Student award winner)
Adjusting for measurement error in baseline prognostic biomarkers: A joint
modelling approach
Michael Crowther, Keith Abrams
University of Leicester, Leicester, UK
Methodological development of joint models of longitudinal and survival data has been rapid in recent years; however, their full potential in applied settings is yet to be fully explored. We describe a novel use of a specific association structure, linking the two component models, and thus extend joint models to account for measurement error in a biomarker, even when only the baseline value of the biomarker is of interest. This is a common occurrence in registry data sources, where often repeated measurements exist but are simply ignored. The proposed specification is evaluated through simulation and applied to data from the General Practice Research Database (GPRD), investigating the effect of baseline systolic blood pressure (SBP) on the time to stroke in a cohort of obese patients with type 2 diabetes mellitus. By directly modelling the longitudinal component we reduce bias in the hazard ratio for the effect of SBP on the time to stroke, showing the large potential to improve prognostic models which use only observed baseline biomarker values.
C16.3
Dynamic predictions from joint models for longitudinal biomarker trajectory and time to clinical event: development and validation
Cécile Proust-Lima, Mbéry Séne
INSERM, Centre INSERM U897, ISPED, Bordeaux, France
In clinical studies, joint models can be used to describe correlated biomarker trajectory and time to a clinical event. Recently, dynamic predictive tools were derived from these models. They consist of the predicted probability of event in a window (s, s+t) given information on the biomarker up to the time of prediction s. These dynamic predictive tools have two main advantages which make them potentially very powerful tools for clinical decision making: they use all the biomarker information available up to time s, and they can be updated at each new measurement.
In this presentation, we present two joint modelling approaches: the shared random-effect models that include characteristics of the longitudinal biomarker as predictors in the model for the time-to-event; and joint latent class models which assume that a latent class structure entirely captures the correlation between the longitudinal biomarker trajectory and the risk of event.
We show how individual dynamic predictions can be computed from these two approaches and we detail methods to evaluate their predictive accuracy. Both approaches are illustrated and compared on datasets from prostate cancer studies where repeated measures of prostate specific antigen (PSA) and occurrence of clinical recurrence were routinely collected after the initial radiation therapy treatment. The objective of this study was to provide tools for early detection of prostate cancer clinical recurrence based on the PSA trajectory.
C16.4
A closed form likelihood for joint modelling of repeated measurements and
survival outcomes, with an application to cystic fibrosis data.
Jessica Barrett1, Robin Henderson2, David Taylor-Robinson3, Peter Diggle4
1MRC Biostatistics Unit, Cambridge, UK, 2Newcastle University, Newcastle, UK, 3University of Liverpool, Liverpool, UK, 4Lancaster University, Lancaster, UK
We propose a joint model for repeated measurements and survival outcomes,
which we then use to analyse data from the UK cystic fibrosis (CF) register.
Cystic fibrosis is the most common serious inherited disease in Caucasian
populations, and most people with CF die prematurely due to lung disease.
The aim is to investigate the relationship between the longitudinal trajectory of
lung function, measured as the forced expiratory volume (%FEV1), and
survival in the UK Cystic Fibrosis Population.
Our model is similar to the one proposed by Diggle and Kenward (Applied
Statistics 43(1), 49-93), with a discretised time scale and a parametric probit
model assumed for the survival distribution. We use recently developed
distribution theory to calculate a closed form for the likelihood. Our method
does not constrain the random effects part of the model, enabling us to
compare models with different forms for the random effects. Estimates and
confidence intervals for covariate effects and random effects parameters are
calculated using maximum likelihood.
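A minimal sketch of the kind of discrete-time probit survival component referred to above (our notation and assumptions, not the authors' exact model):

\[
\Pr\bigl(T_i = t_k \mid T_i \ge t_k, b_i\bigr) = \Phi\bigl(\alpha_k + x_i^\top \beta + \gamma^\top b_i\bigr),
\]

with \Phi the standard normal distribution function, interval-specific intercepts \alpha_k on the discretised time scale, and random effects b_i shared with the longitudinal %FEV1 submodel.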
C16.5
Joint Modeling of Repeatedly Measured Continuous Outcome and Interval-censored Competing Risk Data
Ralitza Gueorguieva1, Robert Rosenheck2, Haiqun Lin1
1Yale University, New Haven, CT, USA, 2VA New England Mental Illness Research and Education Center, West Haven, CT, USA
In joint estimation of longitudinal data and dropout, using information regarding the cause of dropout is likely to improve inferences for repeatedly measured outcomes and provide information about the association between cause-specific dropout and the outcome process. A joint model that incorporates cause-specific dropouts and allows for interval-censored dropout times is proposed. This model includes a linear mixed model component for the longitudinal outcome and competing risks models for the interval-censored cause-specific dropouts. We illustrate the model on data from the Clinical Antipsychotic Trials in Intervention Effectiveness (CATIE) study in schizophrenia. The results largely confirm the original CATIE findings, but there is indication that our proposed approach improves power for detecting treatment differences over time. A limited simulation study demonstrates that our method reduces bias in estimation of treatment effects compared to treating all dropout as the same and to ignoring dropout. Important additional advantages of our approach are that it allows estimation of the hazard function for each specific dropout cause and can be fit in commercially available software.
C17 Genomics / system biology
C17.1
High resolution QTL-mapping with whole-genome sequencing data
Tomasz Burzykowski, Jurgen Claesen
Hasselt University, Diepenbeek, Belgium
Combining high-throughput sequencing technologies with pooling of segregants, as performed in bulked segregant analysis (BSA), can allow the simultaneous mapping of multiple quantitative trait loci (QTL) present throughout the genome. In general terms, BSA consists of three main steps:
1. Controlled crossing of parents with and without a trait
2. Selection based on phenotypic screening of the offspring
3. Mapping of short offspring sequences against the parental reference
The final step allows detecting SNPs, insertions, and deletions. The occurrence of such polymorphisms is a useful feature in order to identify regions which are possibly related to the differences in traits.
BSA produces data in the form of binomial counts indicating how many offspring have got the same/different nucleotide as the reference parent at a particular location in the genome. By analyzing trends in the counts, as a function of the location, regions which might contain a gene related to the trait of interest might be discovered.
We propose the use of a Hidden Markov Model (HMM) to identify the regions of interest in the genome. The model includes several states, each associated with a different probability of observing the same/different nucleotide in an offspring as compared to the parent. After estimating the model, the most probable state for each SNP can be selected. The most probable states can then be used to indicate regions in the genome with a high probability of nucleotide (dis)similarity, i.e., which may be likely to contain trait-related genes.

C17.2
Application of Gene Set Analysis of Time-Course gene expression in a HIV vaccine trial
Boris Hejblum1, Jason Skinner2, Rodolphe Thiebaut1
1Univ. Bordeaux, ISPED; INSERM, Centre INSERM U897, F-33000 Bordeaux, France, 2Baylor Institute for Immunology Research, Dallas, TX, USA
Transcriptional profiling of human immune responses during the course of vaccination yields highly dimensional, information rich data sets requiring analysis tools capable of both high sensitivity and the ability to bring functional meaning to statistically derived results. Gene Set Analysis has emerged as a standard method to accomplish such analyses. Here we extend this tool to longitudinal gene expression analysis, accounting for repeated measurements, to attain a systems-level interpretation of dynamic immune responses in the course of a therapeutic HIV vaccine trial.
Through the use of generalized additive mixed modeling and of the maxmean statistic (Efron et al, 2007), we assess the significance over time of predefined gene sets and estimate their respective dynamics. Our modelling accounts for changes inside a significant genetic pathway, which can be heterogeneous, either in direction or in time.
The DALIA-1 trial is a phase I/II trial evaluating the safety, the immunogenicity and the impact on viral dynamics of a dendritic cell based vaccine in 19 HIV infected patients. Vaccination was performed during the first 16 weeks and antiretroviral treatments were interrupted at 24 weeks for 24 weeks, or less if patients reached 350 CD4 cells/mm3. Gene expression was evaluated with Illumina microarrays, including 47000 probes by patient and time point. Before Antiretroviral Treatment Interruption (ATI), 8 repeated measures were available during vaccination and 10 repeated measures were available after ATI. Several immune-related gene sets, such as the Interferon-gamma mediated signalling pathway, varied significantly after ATI and presented various dynamics over time.

C17.3
Detecting genetic differences between monozygous twins by next-generation sequencing
Setia Pramana1, Patrik K.E. Magnusson1, Anna C.V. Johansson2, Lars Feuk2, Yudi Pawitan1
1Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden, 2Dept. of Immunology, Genetics and Pathology, Rudbeck Laboratory, Uppsala, Sweden
Monozygous (MZ) twins are derived from one fertilized egg, which is believed to result in an identical DNA setup for the pair members. However, recent research shows that twins may not be 100% genetically identical. In this study the main aim is to investigate genetic differences between twins by identifying discordant single-nucleotide variants (SNVs) of MZ twins. SNVs can be detected using next-generation sequencing (NGS) methods, which are now extensively used in genomics studies since they provide cost efficient and reliable large scale DNA sequencing. However, NGS data are known to suffer from high error rates due to many factors, e.g., base-calling and alignment errors. This makes identifying true SNVs in whole-genome sequencing a challenging task.
Here we propose procedures to filter out noise in NGS data and to detect true discordant SNVs of monozygotic twins. The procedures start with filtering on coverage depth and phred quality scores in order to reduce base-calling errors. Then, to discover discordant SNVs between twins, Yates's chi-squared test is implemented. To selectively remove false discordances, additional filtering procedures based on proximity to indels (insertions and deletions) and neighboring SNVs, as well as minor allele frequency (MAF), are applied. The detected discordant SNVs are then validated using independent methodology. These procedures are applied to NGS data of 6.9 million variants from two identical twins using SOLiD and Illumina short-read sequencing technologies.

C17.4
Modeling count data in RNA-seq experiments using the Poisson-Tweedie family of distributions
Mikel Esnaola, Juan R Gonzalez
Center for Research in Environmental Epidemiology (CREAL), Barcelona, Spain
High-throughput RNA sequencing (RNA-seq) is used to detect genes that are differentially expressed among conditions. After some pre-processing steps that include alignment of the sequenced reads against a reference genome and their posterior summarization into features of interest (e.g., genes), raw RNA-seq data is transformed into an initial table of counts. Typically, the Poisson and negative binomial distributions have been used to analyse RNA-seq experiments. We show, however, that the rich diversity of expression profiles produced by extensively-replicated RNA-seq experiments requires additional count data distributions in order to capture the gene expression dynamics revealed by this technology. We provide a new method for differential expression analysis implemented in a package for R called tweeDEseq, available at http://www.bioconductor.org/packages/release/bioc/html/tweeDEseq.html. This is based on a broader class of count-data models that permits different distributional assumptions for different genes and groups of samples. We demonstrate that this results in shorter and more accurate lists of differentially expressed genes.
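As a small illustration of why Poisson or negative binomial assumptions can be too restrictive for replicated RNA-seq counts, the following R sketch (ours, not the tweeDEseq code) compares the two fits for a single overdispersed gene; tweeDEseq extends this idea to the wider Poisson-Tweedie family.

  ## Illustrative sketch only: contrast Poisson and negative binomial fits
  ## for one gene across 30 replicated samples in two conditions.
  library(MASS)                              # provides glm.nb()
  set.seed(1)
  counts <- rnbinom(30, mu = 50, size = 0.8) # a heavily overdispersed gene
  group  <- gl(2, 15)                        # two experimental conditions
  pois <- glm(counts ~ group, family = poisson)
  nb   <- glm.nb(counts ~ group)
  c(AIC_poisson = AIC(pois), AIC_negbin = AIC(nb))
  ## The negative binomial fits far better here; genes with even heavier tails
  ## or zero inflation motivate the broader Poisson-Tweedie family.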
C17.5
From single-SNP to wide-locus GWAS: A Computational Biostatistics Approach
Identifies Pathways in Small Sample Studies
Knut Wittkowski
The Rockefeller University, New York, NY, USA
Genome Wide Association Studies (GWAS) have had limited success when
applied to complex diseases. Analyzing SNPs individually requires several
large studies to integrate the often divergent results. In the presence of
epistasis between SNPs, intragenic regions, or genes, multivariate approaches
based on the linear model (including stepwise logistic regression) often have
low sensitivity and generate an abundance of artifacts. Recent advances in
distributed and parallel processing spurred methodological advances in nonparametric statistics. GWAS results based on u-statistics for multivariate data
(µGWAS) are not confounded by unrealistic assumptions such as linearity or
independence and can now also incorporate information about hierarchical
data structures. µGWAS draws on two novel concepts. First, utilizing
information about the sequence of neighboring SNPs and information from the
HapMap project about recombination hotspots. Second, "information content of
multivariate data" is used to assess the reliability of data and results. Taken
together, this computational biostatistics approach increases power and guards
against artifacts, paving the way to comparative effectiveness research and
personalized diagnostics. µGWAS typically identifies clusters of genes around
biologically relevant pathways and pinpoints functionally relevant regions within
these genes. A study of only 185 cases and 370 controls sufficed to integrate
previous findings and to generate novel biologically plausible hypotheses about
the interplay of genetic risk factors. While most drugs target regulatory
processes at the level of the nucleus or cell membrane, µGWAS identified a
cluster of genes controlling functional processes in the cytoplasm (cytoskeleton
dynamics), suggesting novel indications for drugs currently in clinical tests.
I4 Extensions to Epidemiological Designs
I4.1
Conditional likelihoods for case-cohort data: Do they exist?
Bryan Langholz
University of Southern California, Los Angeles, California, USA
Since the introduction by Prentice (1986) of the continuous time pseudo-likelihood analysis of case-based sampled cohort data, the focus of methodological work on the analysis of case-cohort studies has been on modifications of Prentice's basic approach. The pseudo-likelihood approach has some unappealing features, including that there is no clear connection to the analysis of case-based binary data, that the variance of the pseudo-score is not the expected pseudo-information, and that the method does not easily extend to accommodate more complex sampling designs. While these features are to some degree aesthetic, they are motivation to search for other approaches that are more closely related to likelihood methods for sampled data. Observations that suggest that other, more efficient, approaches exist will be discussed, and possible ways to develop likelihoods based on appropriately specified intensities and sampling theory will be presented.

I4.2
Estimating cumulative incidence adjusting for competing risk using an optimal two-phase stratified design
Paola Rebora, Laura Antolini
University of Milano-Bicocca, Milan, Italy
Research on the role of genetic and biological factors is challenged by the need to investigate large cohorts with limited resources. In stratified two-phase designs, a convenient subcohort is sampled and investigated to gain additional information on the cohort.
In this context, the estimation of the survival function through the Kaplan-Meier method is based on counts of events and numbers at risk in time, where the contribution of each subject is weighted according to the sampling probability. This makes it possible to recover the representativeness of the subcohort.
Here, we extend this approach to deal with the presence of competing risks. The starting point is the estimator of the crude incidence of a specific cause. This is written in a Kaplan-Meier form, based on counts of events due to that cause and counts of subjects at risk, after extending the time to competing events to infinity and applying a suitable weighting to induce independent censoring. The inverse of the sampling probabilities is applied to the number of events of the cause of interest and to the weighted number of subjects at risk. This approach will be applied in the context of a two-phase study on childhood acute lymphoblastic leukaemia, which was planned in order to evaluate the role of genetic polymorphism in treatment failure due to relapse. A subsample was selected for genotyping in a large cohort of patients from a clinical trial.
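For orientation only (our notation; the abstract's construction via extended competing-event times and censoring weights is algebraically different but targets the same quantity), an inverse-probability-of-sampling-weighted crude incidence of cause k can be written in product-limit form as

\[
\hat F_k(t) = \sum_{t_j \le t} \hat S(t_j-)\,\frac{\sum_i w_i\, dN_{ik}(t_j)}{\sum_i w_i\, Y_i(t_j)},
\qquad
\hat S(t) = \prod_{t_j \le t}\Bigl(1 - \frac{\sum_i w_i\, dN_i(t_j)}{\sum_i w_i\, Y_i(t_j)}\Bigr),
\]

where w_i is the inverse of the phase-two sampling probability, dN_{ik}(t_j) counts events of cause k, dN_i(t_j) events of any cause, and Y_i(t_j) indicates being at risk just before t_j.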
I4.3
A semiparametric approach to secondary analysis of nested case-control data
Agus Salim
Saw Swee Hock School of Public Health, National University of Singapore
Many epidemiological studies use nested case-control (NCC) design to reduce
cost while maintaining study power. Because NCC sampling was conditional on
the primary outcome and matching variables, routine application of logistic
regression to analyze a secondary outcome will generally produce biased odds ratios. Recently, several methods have been proposed to analyze secondary outcomes. These methods are based on either weighted likelihood or maximum likelihood. A common feature of all current methods is that they require the availability of survival time for the secondary outcome for cohort members not selected into the NCC study. This requirement may not be easily satisfied when the cohort is a hospital cohort where often we only have survival data for those selected into the study. An additional limitation specific to the current maximum-likelihood method is that it assumes the hazards of the two outcomes are conditionally independent given covariates. This assumption may not be plausible when individuals have different levels of frailty not captured by the covariates. We provide a maximum-likelihood method that explicitly models the individual frailties and avoids the need to have access to the full cohort data.
The likelihood contribution is derived by respecting the original sampling
procedure with respect to the primary outcome. The proportional hazard
models are used to model the marginal hazards and Clayton’s copula is used
to model dependence between outcomes. We show that the proposed method
is more efficient than the weighted likelihood method and is unbiased in the
presence of frailties. We apply the methodology to study risk factors of
diabetes in a cohort of Swedish twins.
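For reference, the Clayton copula mentioned above ties the marginal survival functions S_1 and S_2 of the two outcomes together as

\[
\Pr(T_1 > t_1, T_2 > t_2) = \bigl\{ S_1(t_1)^{-\theta} + S_2(t_2)^{-\theta} - 1 \bigr\}^{-1/\theta}, \qquad \theta > 0,
\]

with larger \theta corresponding to stronger positive dependence of the kind induced by a shared unobserved frailty.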
C18 Clinical trials III

C18.1
Dealing with Criticisms and Controversies of Pragmatic Trials
Lehana Thabane1, Janusz Kaczorowski2, Lisa Dolovich1, Larry Chambers3, on behalf of CHAP Investigators
1McMaster University, Hamilton, Ontario, Canada, 2Universite de Montreal, Montreal, Quebec, Canada, 3University of Ottawa, Ottawa, Ontario, Canada
Pragmatic trials are designed to answer the practical question of whether offering an intervention compared with some alternative (e.g. usual care) in routine health care does more good than harm. In this presentation, we will review the similarities and differences between pragmatic and explanatory trials, and common criticisms and controversies of pragmatic trials. We will use the CHAP (Cardiovascular Health Awareness Program: www.CHAPprogram.ca) trial, a community randomized pragmatic trial designed to assess whether offering a highly organized, community-based CHAP intervention compared to usual care can reduce cardiovascular related outcomes, to illustrate how we addressed some of the criticisms.

C18.2
Finding and validating subgroups of enhanced treatment effect in randomized clinical trials
Jeremy Taylor1, Jared Foster1, Stephen Ruberg2
1University of Michigan, Ann Arbor, Michigan, USA, 2Eli Lilly, Indianapolis, Indiana, USA
We consider the problem of identifying a subgroup of patients who may have an enhanced treatment effect in a randomized clinical trial, and it is desirable that the subgroup be defined by a limited number of covariates. For this problem, the development of a standard, pre-determined strategy may help to avoid the well-known dangers of subgroup analysis. We present a method developed to find subgroups of enhanced treatment effect. This method involves predicting response probabilities of both potential outcomes, for treatment and control, for each subject. The difference in these probabilities is then used as the outcome in a classification or regression tree, which can potentially include any set of the covariates. We define a measure Q(A) to be the difference between the treatment effect in estimated subgroup A and the marginal treatment effect. We present several methods developed to obtain an estimate of Q(A), including estimation of Q(A) using estimated probabilities in the original data, using estimated probabilities in newly simulated data, cross-validation-based approaches and a bootstrap-based bias corrected approach. Results of a simulation study indicate that the method noticeably outperforms logistic regression with forward selection when a true subgroup of enhanced treatment effect exists. Generally, large sample sizes or strong enhanced treatment effects are needed for subgroup estimation. Additionally, simulation results suggest that the method is fairly insensitive to moderate variations in the true model for the observations.

C18.3
Monitoring a Long-Term Efficacy Study for Futility: an Application in Huntington's Disease
David Oakes
University of Rochester, Rochester, NY, USA
Futility monitoring of efficacy studies with long-term follow-up poses a major challenge: to enable useful savings of time and costs, it may be necessary to conduct a futility analysis before the primary outcome data can be obtained for the majority of subjects. The challenge, and some possible approaches to addressing it, are described in the context of an ongoing study in Huntington's disease.

C18.4
Assessment of surgical interventions through clinical trials: accounting for the impact of learning curves
Olympia Papachristofi, Linda Sharples
MRC Biostatistics Unit, University of Cambridge, Cambridge, UK
There has been growing interest in the rigorous assessment of surgical interventions through the conduct of clinical trials. However, for novel interventions it is expected that the performance of the operators will change over time as experience increases; this learning effect may complicate the evaluation of the intervention by either delaying the start of the trial, or by masking aspects of its true impact if surgical performance has not stabilised before evaluation. Consequently, it is of prime interest to determine parameters describing the learning effect, such as the rate of change and the duration of the learning period, as well as the final skill level achieved, in order to adjust for its presence during assessment. In published studies learning curve methodology has provided estimates of the rate of learning and final skill level, but not of the time taken to reach it. We demonstrate that the pairwise logistic model incorporates a parameter for estimating the duration of the learning period and has easily interpretable parameters. This is achieved by breaking down the model into a branch describing the learning phase and one describing cases after the final skill level is reached, with the break-point representing the length of learning. This extension is useful as it provides a measure of the potential cost of learning the intervention and would enable statisticians to discard cases undertaken during the operator's learning phase and assess the intervention after the optimal skill level is reached. We illustrate the method using data from cardiovascular surgery.

C18.5
Interim analyses in diagnostic versus treatment studies: differences and similarities
Oke Gerke1, Werner Vach2, Poul Flemming Høilund-Carlsen1
1Odense University Hospital, 5000 Odense C, Denmark, 2University Medical Center Freiburg, 79104 Freiburg, Germany
Purpose: To contrast interim analyses in paired diagnostic studies of accuracy with interim analyses in (randomized controlled) treatment studies with respect to differences in planning and conduct.
Materials and Methods: The term 'treatment study' refers to (randomized) clinical trials aiming to demonstrate superiority or non-inferiority of one treatment over another, and the term 'diagnostic study' to clinical studies comparing two diagnostic procedures using a third as gold standard. We compare the design and purpose of interim analyses in treatment and diagnostic studies and point to some important differences between them, using simulations to exemplify points regarding sample sizes.
Results: Though interim analyses in paired diagnostic and treatment studies have similarities regarding a priori planning of timing, decision rules, and consequences of the analyses, they differ with respect to: (A) sample size adjustments; (B) early decision without early stopping; (C) handling of emerging trends during study conduct. These differences are due to the dependence of sample size on the agreement rate between the modalities in paired diagnostic studies, the possibility to continue a study despite clear evidence of the superiority of one of the modalities, and the restricted (or even impossible) long-term blinding of imaging techniques, respectively.
Conclusion: In diagnostic studies, interim analyses may reveal efficacy early without the need to stop the trial. In addition, they allow sample size adjustments when reliable initial estimates of agreement rates cannot be obtained. Finally, by providing common understanding they prevent inappropriate actions as results gradually become known to project team members.

C19 Model selection I

C19.1 (Student award winner)
Advanced Colorectal Neoplasia Risk Stratification by Penalized Logistic Regression
Yunzhi Lin, Menggang Yu, Sijian Wang, Richard Chappell
University of Wisconsin-Madison, Madison, WI, USA
Colorectal cancer (CRC) is the second leading cause of death from cancer in the United States. To facilitate a tailored screening recommendation, there is a need for rules that stratify risk for CRC among the 90% of U.S. residents who are considered "average risk". In this article, we investigate such risk
stratification rules for advanced colorectal neoplasia. We use a recently
completed large cohort study collected from two clinical programs of screening
colonoscopy. Logistic regression models have been used in the literature to
estimate the risks of CRC based on quantifiable risk factors. However, logistic
regression may be prone to overfitting and instability in variable selection.
Since most of the risk factors in our study have several categories, it is also
tempting to collapse these categories into fewer risk groups. We propose an L1
penalized logistic regression method that can automatically and simultaneously
select variables, group categories, and estimate their coefficients. The model
penalizes the L1-norm of both the coefficients and their difference. Thus it
encourages sparsity in the categories, i.e. grouping of the categories, and also
sparsity in the variables, i.e. variable selection. We apply the penalized logistic
regression method to our data. Six variables are selected, with close
categories grouped simultaneously, by the penalized regression models. The
models are validated with 10-fold cross-validation. The ROC curves of the
penalized regression models dominate the ROC curve of naive logistic
regression at all cutoff thresholds.
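The penalty structure described above can be written compactly (our notation, a sketch rather than the authors' exact criterion): for predictor j with ordered categories and dummy coefficients \beta_{j1},\dots,\beta_{jK_j}, the fit minimises

\[
-\ell(\beta) + \lambda_1 \sum_j \sum_k |\beta_{jk}| + \lambda_2 \sum_j \sum_k |\beta_{j,k+1} - \beta_{jk}|,
\]

where \ell(\beta) is the logistic log-likelihood; the first penalty removes whole variables and the second fuses adjacent categories into common risk groups.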
C19.2
A universal cross-validation criterion and its asymptotic distribution
Daniel Commenges, Cécile Proust-Lima, Benoit Liquet
INSERM, Bordeaux, France
We consider inference problems where several estimators are available. A
general framework is that the estimators are obtained by minimizing an
estimating function and they are assessed through another function, that we
call the assessment function. The estimating and assessment functions
generally estimate risks. A classical case is that both estimate an information risk, specifically the cross-entropy. In that case the Akaike information criterion (AIC) is relevant. In more general cases, the assessment risk can be estimated by leave-one-out cross-validation. Since this is computationally demanding, an approximation formula is very useful. A universal approximate cross-validation
criterion (UACV) is given. This criterion can be applied to different estimators
including penalized likelihood and maximum a posteriori estimators and
different assessment risks such as information risks and continuous rank
probability score. The asymptotic distribution of UACV can be derived. An
illustration for comparing estimators of a distribution for ordered categorical
data derived from threshold models and models based on continuous
approximations will be given.
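In generic notation (ours), the leave-one-out assessment risk that UACV approximates is

\[
\mathrm{CV}(\hat\theta) = \frac{1}{n}\sum_{i=1}^{n} \psi\bigl(\hat\theta_{-i}; O_i\bigr),
\]

where \hat\theta_{-i} minimises the estimating function with observation O_i removed and \psi is the assessment function, for example minus the log-density, giving the cross-entropy risk.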
C19.3
Modeling continuous predictors with a ‘spike’ at zero: multivariable extensions
and handling of related spike variables
Carolin Jenkner1, Eva Lorenz2, Heiko Becher2, Willi Sauerbrei1
1University Medical Center, Institute of Medical Biometry and Medical Informatics, Freiburg, Germany, 2Medical Faculty University of Heidelberg, Epidemiology and Biostatistics Unit, Heidelberg, Germany
In epidemiology and clinical research, predictors often have a proportion of
individuals with value zero and the distribution of the others is continuous
(variables with a spike at zero). Examples in epidemiology are smoking or
alcohol consumption, and in clinical research laboratory measures, sometimes caused by a lower detection limit of the measurement. Recently, an extension of the fractional polynomial (FP) procedure was proposed to deal with such situations (Royston and Sauerbrei; Royston et al.). To indicate whether a value
is zero or not, a binary variable is added to the model. In a two-stage
procedure, it is assessed whether the binary variable and/or the continuous FP
function for the positive part is required (FP-spike).
A study investigating the prognostic effect of hormonal values (estrogen
receptor and progesterone receptor) in patients with breast cancer will be used
to illustrate the procedure and to compare the results with those of usual FP
survival models. As there are several correlated prognostic factors, it is
important to assess the effects of the individual factors in a multivariable
framework. We will present several possibilities of handling a multivariable
spike situation. Furthermore, methods for related spike variables are discussed
with the aim of creating a comprehensive index for these variables.
Royston et al. Stat.Med. 2010; 29: 1219-27.
Becher et al. Analysing covariates with spike at zero: a modified FP procedure
and conceptual issues [submitted]
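As a schematic, single-covariate version of the FP-spike model (our notation): with Z = I(X = 0) and X_+ the positive part of X, the linear predictor is

\[
\eta = \beta_0 + \beta_1 Z + f_{\mathrm{FP}}(X_+)\,(1 - Z),
\]

and the two-stage procedure assesses whether the binary term \beta_1 Z, the FP function f_FP, or both are needed.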
C19.4
Stability investigations of multivariable regression models derived from low and
high dimensional data
Willi Sauerbrei1, Anne-Laure Boulesteix2, Harald Binder3
1Institute of Medical Biometry and Informatics, University Medical Center
Freiburg, Freiburg, Germany, 2Department of Medical Informatics, Biometry
and Epidemiology, University of Munich, Munich, Germany, 3Institute of Medical
Biostatistics, Epidemiology and Informatics, University Medical Center Mainz,
Mainz, Germany
Multivariable regression models can link a potentially large number of variables
to various kinds of outcomes, such as continuous, binary or time-to-event
endpoints. Selection of important variables and selection of the functional form
for continuous covariates is a key part of building such models but is
notoriously difficult for several reasons. Owing to multicollinearity between predictors and a limited amount of information in the data, (in)stability can be a serious issue for the selected models. For applications with a moderate
number of variables, resampling-based techniques have been developed for
diagnosing and improving multivariable regression models. Deriving models for
high-dimensional molecular data has led to the need for adapting these
techniques to settings where the number of variables is much larger than the
number of observations. Three studies with a time-to-event outcome, of which
one has high-dimensional data, will be used to illustrate several techniques.
Investigations at the covariate level and at the predictor level are seen to
provide considerable insight into model stability and performance. While some
areas are indicated where resampling techniques for model building still need
further refinement, our case studies illustrate that these techniques can already
be recommended for wider use.
Sauerbrei W., Boulesteix A.-L., Binder H. (2011): Stability investigations of
multivariable regression models derived for low and high dimensional data.
Journal of Biopharmaceutical Statistics, 21:1206-1231.
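A minimal sketch of the kind of resampling diagnostic referred to above (our illustration, not the authors' software): bootstrap inclusion frequencies for backward selection in a logistic regression with a moderate number of candidate variables.

  ## Bootstrap inclusion frequencies as a simple model-stability diagnostic.
  set.seed(1)
  n <- 200; p <- 6
  x <- matrix(rnorm(n * p), n, p, dimnames = list(NULL, paste0("x", 1:p)))
  y <- rbinom(n, 1, plogis(0.8 * x[, 1] - 0.6 * x[, 2]))  # only x1, x2 matter
  dat <- data.frame(y, x)
  B <- 100
  selected <- matrix(0, B, p, dimnames = list(NULL, colnames(x)))
  for (b in 1:B) {
    boot <- dat[sample(n, replace = TRUE), ]
    full <- glm(y ~ ., data = boot, family = binomial)
    fit  <- step(full, direction = "backward", trace = 0)  # AIC-based selection
    selected[b, names(coef(fit))[-1]] <- 1
  }
  colMeans(selected)  # how often each covariate is retained across resamples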
C19.5
Learning Mixtures through merging components
Yanzhong Wang1, Mike Titterington2
1King's College London, London, UK, 2University of Glasgow, Glasgow, UK
Among techniques for learning mixtures, the most popular appears to be the
EM algorithm. But as a local search algorithm, EM has a number of limitations, such as slow convergence, sensitivity to initialization, and the risk of getting stuck in one of
many local maxima of the likelihood function. As an alternative, the IPRA
(Iterative Pair-wise Replacement Algorithm), a components merging method
capable of fitting mixture models with a large number of components, was
proposed by Scott and Szewczyk (2001). The IPRA uses a kernel density
estimate as an initial estimate of the large mixture model, and then simplifies
the large model by iteratively merging pairs of similar components based on a
similarity measure. However, it only applies to one-dimensional data. We extended the IPRA to multidimensional problems by proposing a multivariate IPRA, which uses the minimal spanning tree (MST) to limit searches, thereby reducing the number of comparisons from O(n^2) to O(n log n). With the help of
the L2E value and information criteria such as AIC and BIC, the best fitting
model is then selected from a sequence of resulting mixture models. We
compared our multivariate IPRA with the standard EM+BIC approach
(MCLUST package in R) on both simulated data and real data from the South
London Stroke Register (SLSR) and found that the multivariate IPRA was able to determine the right number of components in the mixture, was more robust to outliers, less computationally demanding, and more efficient at fitting many
clusters in large datasets.
C20 Prediction in survival analysis
C20.1
Sample size planning for survival prediction with focus on high dimensional
data
Heiko Götte1, Isabella Zwiener2
1Merck KGaA, Darmstadt, Germany, 2IMBEI, University Mainz, Mainz, Germany
Frequently, researchers try to predict the survival outcome based on high dimensional data. In this case an established procedure is to reduce the number of covariates by applying a variable selection tool and then fitting a model based on the selected variables. Although this model building process is complex, usually no proper study planning is performed and it is often unknown whether the sample size is sufficient to determine a prediction model which leads to an appropriate prediction accuracy when the model is applied to independent validation data. We present formulas for the determination of the training set sample size for survival prediction. Censoring is considered. The sample size is chosen to control the difference between an optimal and an expected prediction error. Prediction is done by Cox models. In the high dimensional setting the sample size has not only an impact on the standard errors of the estimates but also on the number of correctly selected variables. In the case that not all informative variables are included in the final model, the effect estimates are biased towards zero. Omission of informative variables leads to a misspecified Cox model, which is known to produce biased estimates. For univariable selection, the magnitude of bias as well as the number of correctly identified variables can be calculated analytically. An example illustrates the application of the method.

C20.2
Exploring the discriminatory ability of frailty models
Robin Van Oirbeek1, Emmanuel Lesaffre1,2
1I-Biostat, KU Leuven, Leuven, Belgium, 2Department of Biostatistics, Erasmus Medical Center, Rotterdam, The Netherlands
The concordance probability is a very popular tool to measure the predictive ability of a survival model. It typically checks if the ordering of the predicted and the observed survival times is the same, or concordant, for a randomly selected pair. While useful, we believe that this measure is too rough and that it may also be of interest to investigate how the concordance probability evolves as the difference in survival time of the randomly selected pair increases. We therefore propose a concordance measure that measures both ordering and distance in survival time for a randomly selected pair. Moreover, three specific adaptations of the concordance probability were proposed for the proportional hazard frailty model (Van Oirbeek and Lesaffre, 2010). In this presentation, we generalize these 3 adaptations to every type of frailty model and we develop a procedure to calculate a credible/confidence interval within the Bayesian/likelihood framework, as well as an internal validation and an outlier detection scheme. The properties of all these developments are investigated in an extensive simulation study and illustrated on a real clinical study. In this study, the effect of different treatment modalities as well as the effect of patient, tooth and operator characteristics on the longevity of amalgam restorations is examined. The sample consists of 174 patients contributing 1347 amalgam restorations (censoring percentage of 86.2%). We have found a strong inter-cluster predictive ability of both covariate and clustering effects, but a poor intra-cluster predictive ability of the covariate effects.
C20.3
Prediction tool for risk of death using history of cancer recurrences in joint
frailty models
Audrey Mauguen1, Simone Mathoulin-Pelissier2, Virginie Rondeau1
1INSERM, Centre INSERM U897; Univ Bordeaux, ISPED, Bordeaux, France, 2Institut Bergonie, Bordeaux, France
Evaluating the prognosis of patients according to their demographic, biological or
disease characteristics is a major issue. It may be used for guiding treatment
decisions. In cancer studies, typically, more than one endpoint can be
observed before death. Patients may undergo several types of events, such as
local recurrences, distant metastases and second cancers, death being
considered as terminal event. Accuracy of clinical decisions may be improved
when the history of these different events is considered. Thus we want to
dynamically assess a patient’s prognosis of death using recurrence
information. As previously done in the framework of joint models for longitudinal
and time to event data (Proust-Lima and Taylor 2009; Rizopoulos 2011), we
propose a dynamic prediction tool based on joint frailty models. Joint modelling
accounts for the dependence between recurrent events and death, by the
introduction of a random effect shared by the two processes (Rondeau 2007).
We aim at producing an accurate estimate of the probability to survive beyond
t+w, conditional on information available at the prediction time point t.
Prediction is updated with the occurrence of a new event. We will compare the
performance of the proposed prediction tool with the performance of a simple
prediction model, not taking into account the correlation between intermediate
events and death. The proposed tool will be applied on breast cancer data from
a French comprehensive cancer centre. Patients with a primary invasive breast
cancer and treated with breast-conserving surgery and followed up during
more than 10 years will be analyzed.
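In generic notation (ours, not the authors' exact expression), the dynamically updated quantity is the conditional probability of surviving a further window of width w,

\[
\pi(t+w \mid t) = \Pr\bigl(D > t+w \mid D > t, \mathcal{H}(t); \hat\theta\bigr),
\]

where D is the death time, \mathcal{H}(t) is the history of recurrences observed up to the prediction time t, and the shared frailty links the recurrence and death processes; the probability is recomputed whenever a new recurrence enters \mathcal{H}(t).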
C20.4
Concordance for prognostic models with competing risks
Marcel Wolbers1, Michael T. Koller2, Jacqueline C. M. Witteman3, Thomas A. Gerds4
1Oxford University Clinical Research Unit and Wellcome Trust Major Overseas Programme, Ho Chi Minh City, Viet Nam, 2Basel Institute for Clinical Epidemiology & Biostatistics, University Hospital Basel, Basel, Switzerland, 3Department of Epidemiology, Erasmus MC – University Medical Center Rotterdam, Rotterdam, The Netherlands, 4Department of Biostatistics, University of Copenhagen, Copenhagen, Denmark
The concordance probability is a widely used measure to assess the discrimination of prognostic models with binary and survival endpoints. Here, we formalize an earlier definition of the concordance probability for predicting an event of interest based on a regression model with a competing risks endpoint (Wolbers et al, Epidemiology 2009;20:555-561). We illustrate the properties of the concordance probability for varying effects of a single numeric covariate on the cause-specific hazards of both the event of interest and the competing event, respectively. In this setting, the concordance probability for a covariate which is positively associated with the hazard of the event of interest decreases if the covariate is also positively associated with the hazard of the competing event, whereas concordance increases if the association with the competing event is negative.
For right censored data, we investigate inverse probability of censoring weighted (IPCW) estimates of a truncated concordance index. The estimates are based on a working model for the censoring distribution, and the simplest model assumes that the censoring distribution is independent of the predictor variables. We show that if the working model is correctly specified then the IPCW estimate consistently estimates the concordance probability. The small sample properties of the estimates are assessed in a simulation study. We further illustrate the methods by computing the concordance probability for a prognostic model of coronary heart disease (CHD) in the presence of the competing risk of non-CHD death.
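For orientation (generic notation, not necessarily the exact estimator of the paper), an IPCW estimate of a concordance index truncated at \tau takes the form

\[
\hat C(\tau) =
\frac{\sum_{i \ne j} \Delta_i\, \hat G(T_i)^{-2}\, I(T_i < T_j,\, T_i \le \tau)\, I(M_i > M_j)}
     {\sum_{i \ne j} \Delta_i\, \hat G(T_i)^{-2}\, I(T_i < T_j,\, T_i \le \tau)},
\]

where T_i are the observed times, \Delta_i indicates an observed event of interest, M_i is the model-based risk prediction, and \hat G is the working model for the censoring distribution (possibly depending on covariates).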
C20.5
Comparing areas under time-dependent ROC curves under competing risk
Paul Blanche, Hélène Jacqmin-Gadda
Univ. Bordeaux, ISPED, INSERM U897, Bordeaux, France
For many diseases, it is relevant to evaluate and compare the ability of
biomarkers to predict the onset of the disease. To do so, estimators of the area
under the time-dependent ROC curve (AUC) have been developed accounting
for censored data due to loss to follow-up (Heagerty et al, 2000; Uno et al,
2007).
Under the competing risk setting, two definitions of ROC curve and AUC can
be considered according to how one defines a case and a control (Zheng et al,
2011). Accounting for censoring, we propose simple Inverse Probability of
Censoring Weighting (IPCW) estimators for both AUC definitions. We establish
large sample theory of these estimators and derive a test statistic to compare
the AUCs of two markers, even when markers are measured on the same
subjects. Two weightings are considered: one based on Kaplan-Meier
estimator leading to a fully non-parametric testing procedure and another
based on a semi-parametric Cox model to deal with marker-dependent
censoring. A simulation study highlights the finite sample behaviour of the test.
We apply the methodology to compare several psychometric tests for
predicting dementia in the elderly, accounting for death competing risk. Data
come from the French PAQUID cohort including 3777 subjects aged 65 years
and older at baseline.
C21 Statistical methodology I
C21.1
High dimensional regression using decomposition-gradient-nuisance method
and its application in epidemiological case control studies
Yuanzhang Li, Tianqing Liu, David Niebuhr
Walter Reed Army Institute of Research, Silver Spring, MD, USA
Regression of high dimensional data is difficult when the sample size is small.
The traditional ordinary least squares estimation performs poorly in this
situation. Even though the sample size is not small, the high correlation among
some of the high dimensional predictors cannot be avoided. Multiple linear
regressions are very sensitive to near-collinearity among predictors. When this happens, the model parameters become unstable with large variance. These phenomena often occur in epidemiological studies, especially in the
analyses involving high dimensional predictors. We propose a combined linear
approach which uses a space-decomposition method to reduce the collinearity,
and then a gradient-nuisance method to select "better" predictors as well as
nuisance factors to control for the variance heterogeneity. We apply the
proposed method to military schizophrenia data including 294 cases and 344 matched controls with 48 biomarkers and seven antibody agents. We identify a small subgroup of biomarkers and antibody agents which provide important
insights on schizophrenia. Using a longitudinal general linear regression on the
combinations of those selected biomarkers and agents, numerical results
demonstrate that the proposed approach can significantly improve predictive
efficiency with a substantial dimension reduction. Simulations based on randomly split data sets show that the selection is robust and stable.
C21.2
Choice of the Berger and Boos confidence coefficient in an unconditional test
for equality of two binomial probabilities
Stian Lydersen, Mette Langaas, Øyvind Bakke
Norwegian University of Science and Technology, Trondheim, Norway
Exact unconditional tests for comparing two binomial probabilities are generally
more powerful than conditional tests like Fisher’s exact test. Such tests can be
further improved by the Berger and Boos confidence interval method, where a
p-value is found by restricting the common binomial probability under H0 to a 1-γ confidence interval. Different default values of γ, such as 10^-3, 10^-4, and 10^-6,
have been used in software. We studied average test power for the exact
unconditional z-pooled test for a wide range of cases with balanced and
unbalanced sample sizes, and significance levels 0.05 and 0.01. Among the
values 10^-3, 10^-4, ..., 10^-10, the value γ = 10^-4 was optimal or approximately
optimal in all the cases we looked at, and can be given as a general
recommendation.
Reference
Lydersen, S., Langaas, M., & Bakke, Ø. The Exact Unconditional z-pooled Test
for Equality of Two Binomial Probabilities: Optimal Choice of the Berger and
Boos Confidence Coefficient. Journal of Statistical Computation and
Simulation. Available online: 04 Jul 2011.
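For reference, the Berger and Boos construction restricts the maximisation over the nuisance parameter to a 100(1-\gamma)% confidence set C_\gamma for the common success probability p under H_0:

\[
p_{\gamma} = \sup_{p \in C_{\gamma}} \Pr_{p}\bigl(T \ge t_{\mathrm{obs}}\bigr) + \gamma,
\]

where T is the test statistic (here the z-pooled statistic) and the added \gamma keeps the test valid; the study above concerns the choice of \gamma in this formula.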
C21.3
Surrogate endpoints in breast and colon cancer: An evaluation of validation
studies
Christoph Schürmann, Ralf Bender, Thomas Kaiser, Elke Vervölgyi, Volker
Vervölgyi, Beate Wieseler
Institute for Quality and Efficiency in Health Care (IQWiG), Cologne,
Germany
In oncology, endpoints like disease free survival or progression free survival
are frequently used as surrogates for overall survival. Yet, validation is
necessary to infer meaningful conclusions from the surrogate to the actual
endpoint. We present the results of a systematic review in which we searched
the literature for validation studies in breast and colon cancer. We developed
and assessed a set of reliability criteria for the included studies, i.e. whether the studies applied a suitable method of validation, whether there was some analysis confirming robustness of the results, whether the database used was complete, whether the analysis was specific with respect to indication and intervention, and whether there was a
consistent definition of endpoints. Apart from these qualitative aspects, we
analysed quantitatively how close the relationship of the surrogate endpoints
with the endpoints of interest was in terms of trial-level correlation. We found 6
validation studies in breast cancer and 15 studies in colon cancer. In each
case, we considered the results not to be reliable, because always at least two
of the given aspects remained unclear or were not satisfactorily fulfilled.
Besides, few of the studies gave evidence of strong correlation.
We propose how to draw general conclusions about the validity based on the
reliability aspects and the size of the correlation. Our analysis shows that, in
principle, validation studies could address our concerns about reliability. But
unless this is properly considered, the validity of surrogate endpoints is not
clearly proven.
C21.4
Use of surrogate endpoints for improving efficiency, reduction of sample size and modification of Mantel-Haenszel estimator for odds ratio
Buddhananda Banerjee, Atanu Biswas
Indian Statistical Institute, Kolkata -- 700 108, India
Surrogate end-points are used when the true end-points are costly or time-consuming. In a typical set up we observe a fixed proportion of true-and-surrogate responses, and the remaining proportion are only-surrogate responses. It is obvious that the inclusion of such only-surrogate end-points increases the efficiency of the associated estimation. In this paper we quantify the gain in efficiency as a function of the proportion of available true responses. We also obtain the expression for the gain in true sample size at the expense of surrogates to achieve a fixed power, as a function of the proportion of true responses. We present our discussion in the two-treatment set up in the context of the odds ratio. We illustrate the procedure using a real data set.

C21.5
Estimation of between- and within-pair regression effects in logistic regression with shared measurement error
Lyle Gurrin1, Elizabeth Williamson2, Martin Hazelton3
1Melbourne School of Population Health, Melbourne, Victoria, Australia, 2Department of Epidemiology and Preventive Medicine, Monash University, Melbourne, Victoria, Australia, 3Institute of Fundamental Sciences - Statistics, Massey University, Palmerston North, New Zealand
Twin studies provide naturally matched pairs that can exploit within-pair comparisons of data to avoid confounding of exposure-outcome associations by shared factors. For binary outcomes, paired data can be analysed using the logistic regression model of Neuhaus and Kalbfleisch (1998) with a linear predictor that includes terms for both the mean and difference in exposure, with between- and within-pair regression coefficients B and W respectively. When estimates of B and W differ, the former may not provide useful information about the latter.
If B = W, one scenario where the issue of differing estimates of B and W has a straightforward resolution is when the pair exposure mean is measured with error, but the within-twin difference is subject to negligible error. For instance, siblings reporting nutritional or alcohol intake may be accurate in comparison to each other, but less good on an absolute scale. Failure to account for the measurement errors leads to attenuation in the estimates of B, generating an apparent discrepancy with W. By using the SIMEX method of Cook and Stefanski (1994) with shared within-pair measurement error, it is possible to adjust for this and generate an estimate of B that is considerably more efficient than estimating B alone or using conditional logistic regression. We examine the efficacy of this approach through a simulation study, and an application exploring the association between low birthweight and cord blood erythropoietin as a marker of hypoxic stress in utero and possible growth restriction (Carlin, Gurrin and Sterne (2005)).
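As a sketch of the model structure described above (generic notation): for twin j in pair i with exposure x_{ij} and pair mean \bar{x}_i, the between-within linear predictor in the spirit of Neuhaus and Kalbfleisch (1998) is

\[
\operatorname{logit} \Pr(Y_{ij} = 1) = \alpha + B\,\bar{x}_i + W\,(x_{ij} - \bar{x}_i),
\]

so that B captures the between-pair and W the within-pair association; the SIMEX adjustment targets the attenuation of B that arises when \bar{x}_i is measured with shared error.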
Wednesday, 22 August
Morning sessions (IP3, I5 , C22 – C25)
I5 Causal inference

I5.1
Causal mediation analysis with applications to perinatal epidemiology
Tyler VanderWeele
Department of Epidemiology, Department of Biostatistics, Harvard School of Public Health, Harvard University, USA
Mediation analysis concerns assessing the mechanisms and pathways by which causal effects operate. Statistical techniques to address these questions have been used in the social science and epidemiologic literature for some time. More recently these techniques have come under critique for inadequately dealing with issues of confounding and causal interpretation. The talk will focus on the relationship between traditional methods for mediation and those that have been developed within the causal inference literature. For dichotomous and continuous outcomes, we discuss when the standard approaches to mediation analysis employed in epidemiology and the social sciences are valid. Using ideas from causal inference and natural direct and indirect effects, we provide alternative mediation analysis techniques when the standard approaches will not work. We discuss the no-confounding assumptions needed for these and sensitivity analysis techniques when those assumptions fail. Further discussion is given as to how such mediation analysis approaches can be extended to settings in which data come from a case-control study design. The methods are illustrated by various applications to perinatal epidemiology.

I5.2
Estimation and extrapolation of treatment effects
Andrea Rotnitzky1,2, James Robins3, Liliana Orellana4
1Department of Economics, Di Tella University, Buenos Aires, Argentina, 2Department of Biostatistics, Harvard School of Public Health, Harvard University, USA, 3Department of Epidemiology, Harvard School of Public Health, Harvard University, USA, 4FCEyN, Universidad de Buenos Aires, Buenos Aires, Argentina
In this talk I discuss methods for using the data obtained from an observational database in one health care system to determine the optimal treatment regime for biologically similar subjects in a second health care system when, for cultural, logistical, and financial reasons, the two health care systems differ (and will continue to differ) in the frequency of, and reasons for, both laboratory tests and physician visits. I also describe methods for estimating the optimal timing of expensive and/or painful diagnostic or prognostic tests. Diagnostic or prognostic tests are only useful in so far as they help a physician to determine the optimal dosing strategy, by providing information on both the current health state and the prognosis of a patient because, in contrast to drug therapies, these tests have no direct causal effect on disease progression. The proposed methods explicitly incorporate this no direct effect restriction.

I5.3
Protecting against errors: causal effect estimates for the evaluation of quality of care over many (cancer) centers
Els Goetghebeur, Jozefien Buyze, Machteld Varewyck, Stijn Vansteelandt
Department of Applied Mathematics and Computer Science, Ghent University, Belgium
To evaluate quality of (rectal) cancer care over Belgian centers based on registered patient-specific data, we ask what outcome difference is to be expected when patients are treated in center A rather than B. Following the causal question of what difference center A makes to the patients' outcome, we evaluate the center's impact on its own typical patient mix. Statistical approaches under either the `no unmeasured confounders' or the `instrumental variables' assumption encounter serious challenges with many imbalanced and sometimes small centers. We consider what can be achieved by (doubly robust) (structural) regression models and how penalization is best exploited.
In performing center evaluations we aim to strike a balance between type I and type II errors as appropriate for the study goal. Because Belgians focus on confidential feedback to provide the center with a tool for self correction, we view this as an early warning system and place greater emphasis on the type II error. After characterizing the center's global performance, we are concerned with direct and indirect effects and wonder, for instance, to what extent a lack of imaging facilities has contributed to a center's weaker performance.

C22 Survival analysis II

C22.1
Properties of net survival estimation
Maja Pohar Perme
Department of Biostatistics and Medical Informatics, University of Ljubljana, Ljubljana, Slovenia
The survival analysis of long term studies is often interested in the mortality due to the disease in question but faced with the problem of many deaths occurring due to other causes. Furthermore, the cause of death is often unknown or unreliable. A solution to this problem is to assume that the hazard due to other causes can be described by the general population mortality. The methodology based on this assumption is referred to as relative survival; its most important field of usage is cancer registry data. One of the basic aims of the analysis of cancer registry data is to estimate quantities which are comparable between different countries or time periods and thus not affected by the differences in other cause mortality. We have recently shown that the methods in standard use provide biased estimates and proposed a new measure of net survival that satisfies this aim. In this work, we study its properties and behaviour in practice and discuss its assumptions and interpretation.
The results are illustrated using Slovene cancer registry data.

C22.2
Estimating the loss in expectation of life due to cancer using flexible parametric survival models
Therese Andersson1, Paul Dickman1, Sandra Eloranta1, Paul Lambert1,2
1Karolinska Institutet, Stockholm, Sweden, 2Leicester University, Leicester, UK
A useful summary measure for survival data is the expectation of life, which can be calculated by obtaining the area under a survival curve. The loss in the expectation of life is the difference between the expectation of life in the general population and the expectation of life in a diseased population. This measure is used very little in practice as its estimation generally requires extrapolation of both the expected and observed survival.
The extrapolation of the expected survival is fairly straightforward, but
assumptions have to be made for the observed survival. A parametric
distribution can be used, but it is difficult to find a distribution that captures the
underlying shape of the survival function. Extrapolation using relative survival is
more stable and reliable. Relative survival is defined as the observed survival
divided by the expected survival, and the mortality analogue is excess
mortality. Hakama and Hakulinen showed how extrapolation of relative survival
can be done for life-table data, by assuming that the excess mortality has
reached zero (statistical cure) or has stabilized to a constant. By instead using
a flexible parametric approach, introduced by Royston and Parmar, for
estimating the excess mortality we can estimate the loss in expectation of life
for individual level data.
We have evaluated the extrapolation from flexible parametric models, and the
results agree very well with observed data. We have developed user friendly
software to enable estimation of the loss in expectation of life. Results will be
presented for a variety of cancer sites.
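As a purely illustrative sketch of the quantity discussed above (not the authors' flexible parametric implementation), the loss in expectation of life is the area between the expected and the observed survival curves; the survival functions below are hypothetical exponential curves and the integration is restricted to 50 years of follow-up.

import numpy as np

# Illustrative only: hypothetical expected and relative survival curves.
t = np.linspace(0, 50, 1001)                    # follow-up time in years
expected_surv = np.exp(-0.02 * t)               # hypothetical general-population survival
relative_surv = np.exp(-0.05 * t)               # hypothetical relative survival of patients
observed_surv = expected_surv * relative_surv   # observed = expected x relative survival

# Expectation of life (restricted to 50 years) = area under each survival curve.
life_exp_general = np.trapz(expected_surv, t)
life_exp_patients = np.trapz(observed_surv, t)
print("loss in expectation of life (years):", round(life_exp_general - life_exp_patients, 2))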
C22.3
Sample Size Calculation and Re-estimation for Recurrent Event Data
Katharina Ingel, Antje Jahn-Eimermacher
Institute for Medical Biostatistics, Epidemiology and Informatics, University Medical Center of the Johannes Gutenberg-University, Mainz, Germany
Some clinical trials compare the repeated occurrence of the same type of event, e.g. epileptic seizures, between two or more treatment groups. The Andersen-Gill model has been proposed to analyse recurrent event data (Andersen and Gill, The Annals of Statistics, 1982).
For sample size calculation Bernardo and Harrington (Statistics in Medicine, 2001) suggest a formula which relies on the independence of inter-event times within individuals conditional on the covariate values. This assumption is violated if individuals are heterogeneous in their baseline hazard. To control the type I error in these situations, a robust variance estimate is required for calculating the test statistic, which will decrease the actual power of the trial.
We propose an adjusted sample size formula to achieve the desired power even in the presence of patient heterogeneity in baseline hazard. Adjustment is performed by the use of a nuisance parameter which accounts for the heterogeneity and is derived from characteristics of the robust variance estimate (Al-Khalidi et al, Biometrics, 2011).
In the planning phase of a trial there will usually be some uncertainty about the nuisance parameter. We explore how blinded or unblinded internal pilot data can be used to estimate the nuisance parameter and to adjust the sample size based on that estimate. The performance of this internal sample size re-estimation design with respect to type I error and power is evaluated through simulations. We illustrate our results with clinical data on the repeated occurrence of epileptic seizures.

C22.4
Mixed Effects Cox Models and the Laplace Transform
Terry Therneau
Mayo Clinic, Rochester, Minnesota, USA
Random effects Cox models can be divided into two broad classes: special cases where the algebra can be worked out explicitly and general programs that allow a range of models. The latter assume a Gaussian distribution for the random effects because of its flexibility, but that leaves an intractable integral. The Cox partial likelihood, however, is normally very quadratic: the Newton-Raphson converges in 2-4 steps without recourse to alternate starting points, step halving, or other sophistication. This makes the Laplace transform attractive, which replaces the integrand with a quadratic approximation, and this is the computational approach of most software packages. We find that the approximation is excellent when the data set has at least 10-20 events for each effective degree of freedom (edf) of the model, where the edf is computed in a penalized regression sense. In models with a random effect per subject the edf can easily exceed the number of events and a solution based on the Laplace approximation will seriously underestimate the MLE variance. This can occur for any data set where a non-negligible fraction of the random effects coefficients b correspond to subgroups with no events. Interestingly, this is exactly the case where the MLE is biased large. When the approximation is wrong, it may be better than the truth. We will try to give some insight into how this occurs along with practical guidance for users.

C22.5
On using simulations to study explained variation in survival analysis
Janez Stare1, Nataša Kejžar1, Delphine Maucort-Boulch2
1University of Ljubljana, Ljubljana, Slovenia, 2University of Lyon, Lyon, France
Simulations usually accompany proposals, or comparisons, of measures of explained variation (other names may be used) in the statistical literature. Simulations are supposed to illustrate, or often even 'prove', certain claims about the measures, or their relative merits. An important aspect of such simulations is censoring; other aspects, usually less important, are distributions of covariates, number of covariates, and effects of covariates. A standard, almost universal, approach to censoring is such that life times are generated using an exponential distribution, and censoring times are generated using a uniform distribution. Thus, by changing the supporting interval of the censoring distribution, different proportions of censoring are obtained.
We argue that such a setting is more or less useless for studying properties of measures of explained variation. The reason is simple: under such a setting, we can never see life times greater than the upper value of the censoring interval, so that studying consistency or bias, unless limited to that value, is impossible. And from this quite some misunderstanding of the properties of the measures follows.
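To make the criticized design concrete, here is a minimal sketch (assumptions and numbers are ours, not the authors') of the standard simulation set-up: exponential life times with uniform censoring, where no observed time can ever exceed the upper end of the censoring support.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000
life = rng.exponential(scale=5.0, size=n)   # true life times
cens = rng.uniform(0.0, 8.0, size=n)        # censoring times supported on (0, 8)
obs = np.minimum(life, cens)                # observed follow-up times
event = life <= cens                        # event indicator

print("proportion censored:", round(1 - event.mean(), 3))
print("largest observed time:", round(obs.max(), 2))          # bounded by 8
print("P(true life time > 8):", round((life > 8).mean(), 3))  # mass that is never observed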
C23 Measurement error
C23.1 (Scientist award winner)
Regression toward the mean and ANCOVA in observational studies
Péter Vargha
Semmelweis University, Budapest, Hungary
ANCOVA can produce unbiased estimates of group differences in observational studies as well, provided that either the covariate distributions in the groups can be regarded as identical, or the covariate can be assumed to be free of an error term, i.e., there is no regression toward the mean (RTM). However, in the case of different baseline distributions, RTM inevitably results in biased estimation of the group effect.
Three cases are discussed. In the first one the goal is group comparison of changes, where using ANCOVA with the baseline value as covariate is a possible alternative to the t-test. Lord's paradox is presented with a critical evaluation of Holland and Rubin's corresponding results. Their explanation of the paradox introduces a term of spontaneous change not included in the original example. Their results can be mistakenly interpreted (as some do) as proving that the choice between the t-test and ANCOVA in general simply depends on untestable assumptions about spontaneous changes.
An example of the second case, where comparing means of ratios of variables and ANCOVA are the alternatives, was published by Sir Ronald Fisher himself. The two approaches resulted in different conclusions, too.
In the general case of group comparison with a covariate there is no simple alternative to ANCOVA. To avoid bias, an assumption has to be made on the share of the covariate in the scatter around the regression lines.
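A minimal simulation sketch of the bias described above (our own illustration with hypothetical numbers): the true group effect is zero, the groups differ at baseline, and the baseline covariate is measured with error, so ANCOVA on the observed baseline is biased by regression toward the mean.

import numpy as np

rng = np.random.default_rng(2)
n = 50_000
group = np.repeat([0.0, 1.0], n)
true_baseline = np.concatenate([rng.normal(0, 1, n), rng.normal(1, 1, n)])
observed_baseline = true_baseline + rng.normal(0, 1, 2 * n)   # measurement error
outcome = 0.8 * true_baseline + rng.normal(0, 1, 2 * n)       # true group effect is 0

# ANCOVA: outcome ~ group + observed baseline (ordinary least squares)
X = np.column_stack([np.ones(2 * n), group, observed_baseline])
beta = np.linalg.lstsq(X, outcome, rcond=None)[0]
print("estimated group effect (true value 0):", round(beta[1], 3))   # roughly 0.4 here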
C23.2
A multilevel misclassification model for spatially correlated binary data: An
application in oral health research
Timothy Mutsvari1, Dipankar Bandyopadhyay2, Dominique Declerck3,
Emmanuel Lesaffre1,4
1L-BioStat, Katholieke Universiteit Leuven and Hasselt Universiteit, Leuven,
Belgium, 2Division of Biostatistics, School of Public Health, University of
Minnesota, Minnesota, USA, 3Department of Oral Health Sciences, Katholieke
Universiteit Leuven, Leuven, Belgium, 4Department of Biostatistics, Erasmus
Medical Center, Rotterdam, The Netherlands
Dental caries is a highly prevalent disease affecting the tooth's hard tissues by acid-forming bacteria. The past and present caries status of a tooth is characterized by a response called caries experience (CE). Several epidemiological studies have explored risk factors for CE. However, the detection of CE is prone to misclassification, which needs to be incorporated into the epidemiological models on CE. From a dentist's point of view, it is most appealing to analyze CE on the tooth's surface, implying that the multilevel structure of the data (surface-tooth-mouth) needs to be taken into account. In addition, CE data are spatially correlated, i.e. a carious surface may influence the decay process of the neighboring surfaces. While in the literature it is assumed that misclassifications occur in an independent manner, the nature of scoring CE rather suggests that the misclassification process has a spatial structure. To examine this hypothesis we developed a Bayesian multilevel logistic regression model with a spatial association structure via a conditional autoregressive (CAR) prior distribution. This model assumes dependent misclassification and aims to give better insight into how CE is scored. The model is applied to validation data of the well-known Signal Tandmobiel study, whereby 148 children were examined by a benchmark scorer and 16 dental examiners. Our results indicate a substantial spatial dependency in (wrongly) scoring CE, that there is also considerable clustering, and that some covariates affect the scoring behavior, i.e. dentition type, tooth type and position of the tooth in the mouth.

C23.3
Projecting error: Understanding measurement error in principal components
Kristoffer Herland Hellton, Magne Thoresen
University of Oslo, Oslo, Norway
Principal component analysis (PCA) is one of the most widely used dimension reduction techniques, especially for high-dimensional data such as microarray expression data. With PCA it is possible to visualize the genetic information and obtain a basis for classification and clustering. The components are based on different loadings of the variables, which are used to reparametrise the data as component scores. The loadings are thought to represent the weight of each gene in a physiological process explaining the gene variability.
However, there is inherent noise in microarray expressions, which underlines the importance of understanding the effect of measurement error on the loadings and scores. When representing the noise by an additive model, we can characterise the bias caused by measurement error. We propagate the distributional assumption on the errors through the principal component analysis, using analytical expressions obtained by perturbing the eigenvalue problem.
Finally, we use the resulting statistical properties to discuss and interpret the effects of measurement error on loadings and scores. It is also possible to find the effects on different dimension reduction criteria, such as the Kaiser rule and the Scree plot. Especially the role of the projected error, which represents the relationship between the data and the error structure, will be important in identifying situations where measurement error will have little impact.

C23.4
Variable Selection by Lasso in Regression with Measurement Error
Øystein Sørensen, Arnoldo Frigessi, Magne Thoresen
Department of Biostatistics, Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
Regression with the Lasso penalty is widely used for variable selection in high-dimensional (p>>n) statistical problems. When covariates relevant to the outcome and covariates irrelevant to the outcome are correlated, Lasso can select false positives and discard true positives. The Strong Irrepresentable Condition (SIC) is an upper bound for such correlations, where the bound depends on the sparsity of true non-zero coefficients. When data satisfy the SIC, the probability of correct model selection is bounded from below by a probability tending to 1 asymptotically, under suitable conditions.
In this paper we investigate the effect of measurement error on the variable selection performance of the Lasso. This is highly relevant, e.g., in the context of microarray experiments, where the presence of measurement error is well documented, and the Lasso is a popular option. We introduce the SIC with Measurement Error (SIC-ME). Furthermore, we prove that SIC-ME implies a lower bound on the probability for Lasso with error-prone covariates to select the true model.
Next, we ask: if the true error-free data come from a distribution whose covariance matrix satisfies the SIC, which constraints must the distribution of measurement errors satisfy for the SIC-ME also to hold? The answer is illustrated for some commonly assumed covariance structures, and constraints on the measurement errors are derived.
Finally, we demonstrate through simulations that when the distribution of the error-prone covariates satisfies the SIC-ME relatively far from its upper limit, using Lasso with covariates from this distribution gives a good chance of recovering the true model.

C23.5
Adjustment for genotyping measurement error in a case-control study
Matthew Cooper1, Elizabeth Milne1, Kathryn Greenop1, Sarra Jamieson1, Denise Anderson1, Frank van Bockxmeer1, Bruce Armstrong2, Nicholas de Klerk1
1University of Western Australia, Perth, Australia, 2University of Sydney, Sydney, Australia
Background
Genotyping has become more cost-effective and less invasive with the use of buccal cell sampling. However, low or fragmented DNA yields from buccal cells sometimes require additional whole genome amplification to produce sufficient DNA for typing. In our case-control study, discordance was found between genotypes derived from blood and whole genome amplified buccal DNA samples.
Aims
To develop a user-friendly method to correct for this genotype misclassification, as existing methods were not suitable for our purposes.
Methods
Discordance between the results of blood and buccal-derived DNA could be assessed, but only in disease cases, some of whom had both blood and buccal samples. Using the misclassification matrices as probability distributions, we sampled likely values for corrected genotypes for controls with only buccal samples, creating multiple datasets for analysis. Each dataset was analysed separately, adjusting for multiple covariates using logistic regression. Regression coefficients were then combined using standard methods for multiply imputed datasets.
Results
Application to synthetic datasets was effective in producing correct odds ratios (ORs) from data with known misclassification. Moreover, when applied to each of six bi-allelic loci, correction altered the ORs in the expected way given the type of misclassification shown. Increasing the size of the misclassification data set increased the precision of the effect estimates.
Conclusions
Bias arising from differential genotype misclassification can be reduced by correcting results using this method whenever data on concordance of genotyping results with those from a different and probably better DNA source are available.
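The correction described above can be sketched generically as follows (our own simplified illustration, not the authors' code): corrected genotypes are drawn from a hypothetical misclassification matrix, an odds ratio is estimated in each completed data set, and the estimates are pooled with Rubin's rules.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, m = 2000, 20
# Hypothetical P(true genotype | observed genotype) for a bi-allelic locus coded 0/1/2.
P = np.array([[0.95, 0.05, 0.00],
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])
observed_g = rng.integers(0, 3, size=n)
case = rng.integers(0, 2, size=n)            # hypothetical case-control status

betas, variances = [], []
for _ in range(m):
    corrected = np.array([rng.choice(3, p=P[g]) for g in observed_g])
    fit = sm.Logit(case, sm.add_constant(corrected.astype(float))).fit(disp=0)
    betas.append(fit.params[1])
    variances.append(fit.bse[1] ** 2)

# Rubin's rules: pooled log-odds ratio and total variance.
beta_bar = float(np.mean(betas))
total_var = np.mean(variances) + (1 + 1 / m) * np.var(betas, ddof=1)
print("pooled log-OR:", round(beta_bar, 3), " SE:", round(float(np.sqrt(total_var)), 3))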
C24 Prediction

C24.1
Relative ROC curves: a novel approach for evaluating the accuracy of a marker to predict the cause-specific mortality
Marine Lorent1, Magali Giral2, Yohann Foucher1
1Department of Biostatistics EA 4275, Clinical Research and Subjective Measures in Health Sciences, Nantes University, Nantes, France, 2Transplantation, Urology and Nephrology Institute (ITUN), Nantes Hospital and University, Inserm U1064, Nantes, France
Determining prognostic markers of mortality for patients with chronic disease is important for identifying high-risk subjects for death and optimizing medical management. The usual approach for this purpose is the use of time-dependent ROC curves which are adapted for censored data. Nevertheless, an important part of the mortality may not be due to the chronic disease and it is often impossible to individually identify whether or not the deaths are related to the disease. In survival regression, a solution is to distinguish between the expected mortality of a general population (estimated on the basis of mortality tables) and the excess of mortality related to the disease, by using an "additive relative survival model".
We propose a new estimator of the time-dependent ROC curve that includes the relative survival concept in order to evaluate the capacity of a marker to predict the disease-specific mortality.
We illustrate the utility of such relative ROC curves by two different applications: 1) predicting the mortality related to kidney transplant in end-stage renal disease patients and 2) predicting the mortality due to primary biliary cirrhosis (PBC) in patients with diffuse large cell lymphoma (DLCL). In these applications, the predictive capacities of already established scores are evaluated.
The results demonstrate the interest of the proposed estimator of relative ROC curves.

C24.2
A framework for developing and implementing clinical prediction models across multiple studies with binary outcomes
Thomas Debray1, Karel Moons1, Hendrik Koffijberg2, Richard Riley2
1UMC Utrecht, Utrecht, The Netherlands, 2University of Birmingham, Birmingham, UK
The availability of participant-level data from multiple sources is an increasingly prevalent phenomenon in prediction research. However, the corresponding populations typically differ in important aspects, such as baseline risk. This has driven the adoption of meta-analytical approaches for appropriately dealing with heterogeneity when combining such data. Unfortunately, these meta-analytical approaches do not provide a single prediction model that can readily be applied to new populations. Instead, they reveal the variability of baseline risk and predictor effects across studies, and provide little guidance about how then to proceed in integrating these findings.
We propose several approaches to account for heterogeneity in baseline risk and to obtain an appropriate model intercept when applying the model in a new population. We evaluate the consistency of model performance using the existing internal-external cross-validation approach, using an empirical Deep Vein Thrombosis dataset as an example.
We found that the resulting prediction models are most generalizable when predictor effects are homogeneous. This can be achieved by excluding heterogeneous variables from the model or by including additional variables that explain heterogeneity of predictor effects. When baseline risks are heterogeneous, stratified estimation of the model intercept appears to be the best approach. An appropriate model intercept can then be derived from the outcome proportion in the population of interest, yielding superior calibration.
The approaches we propose can be used to develop a single, integrated prediction model from multiple IPDs that has superior generalizability. With minimal demographic information, the resulting model can be calibrated to new populations.

C24.3
Using Machine learning methods for event related potential (ERP) brain activity analysis
Daniel Stahl
King's College London, London, UK
Machine learning and other computer-intensive pattern recognition methods are successfully applied to a variety of fields that deal with high-dimensional data and often small sample sizes, such as genetic microarray or fMRI data. The aim of this presentation is to assess the usefulness of machine learning methods for the analysis of event-related potential (ERP) data. Event-related brain potentials (ERPs) are a non-invasive method of measuring brain activity during cognitive processing with high temporal resolution. The analysis of averaged ERP measurements usually involves a large number of univariate mean group comparisons, such as comparing responses at different electrodes, between a clinical and a control group. This approach typically results in a multiple testing problem. Machine learning methods allow the analysis of datasets with a large number of variables relative to sample size. Cross-validation methods are used to assess the predictive performance of a derived model, thereby avoiding multiple testing problems. The usefulness of two methods, regularized discriminant function analyses and support vector machines, will be demonstrated by reanalysing an ERP dataset from infants (Elsabbagh et al., 2009). Using cross-validation, both methods successfully discriminated above chance between groups of infants at high and low risk of a later diagnosis of autism.

C24.4
How much data are required to develop a reliable risk model?
Khadijeh Taiyari1, Gareth Ambler2, Rumana Z. Omar3
1Department of Statistical Science, University College London, London, UK, 2Department of Statistical Science, University College London, & UCL Research Support Centre, London, UK, 3Department of Statistical Science, University College London, & UCL Research Support Centre, London, UK
The 'rule of 10' is often used to determine how many predictors should be considered for inclusion in a risk prediction model. This rule suggests that the number of parameters in the model should not exceed the number of events in the dataset divided by ten. This was originally proposed to ensure the unbiased estimation of regression coefficients with confidence intervals that have correct coverage. However, little research has been done to assess the adequacy of the rule regarding predictive performance. This study evaluates this rule using simulation based on real cardiac surgery datasets.
Different scenarios were investigated by changing the number of events, the outcome prevalence, the number of noise variables and the amount of prognostic information. Logistic regression models were fitted to simulation data and validated using measures that assess calibration, predictive accuracy and discrimination.
The results suggest that model calibration deteriorates with decreasing events, increasing outcome prevalence and decreasing prognostic information. Additionally, calibration can be poor even when there are 10 events per variable (EPV). This problem can often be fixed by applying post-estimation shrinkage, though this may not work if there is little prognostic information. Discrimination deteriorates with decreasing events in the models with low prognostic information. Strictly adhering to the "Rule of 10" may produce poorly calibrated risk models, and scholars need to consider the outcome prevalence when designing studies to develop risk prediction models.
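A stripped-down version of the kind of simulation described above (our own hypothetical set-up, not the authors' cardiac surgery data): develop a logistic model at a given number of events per variable and estimate the calibration slope in a large validation sample.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)

def simulate(n, beta, intercept=-2.0):
    X = rng.normal(size=(n, beta.size))
    y = rng.binomial(1, 1 / (1 + np.exp(-(intercept + X @ beta))))
    return X, y

beta = np.full(10, 0.3)                      # 10 hypothetical predictors
X_dev, y_dev = simulate(1200, beta)          # development data
print("events per variable:", round(y_dev.sum() / beta.size, 1))

fit = sm.Logit(y_dev, sm.add_constant(X_dev)).fit(disp=0)

# Calibration slope in validation data: regress the outcome on the linear predictor.
X_val, y_val = simulate(100_000, beta)
lp_val = sm.add_constant(X_val) @ fit.params
slope = sm.Logit(y_val, sm.add_constant(lp_val)).fit(disp=0).params[1]
print("calibration slope (1 = ideal, <1 suggests overfitting):", round(slope, 2))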
C24.5
Comparison of multiple imputation methods for repeated measurements studies
Oya Kalaycioglu, Andrew Copas, Rumana Omar
University College London, Department of Statistical Science, London, UK
Missing data is a common problem in medical research. Various statistical methods exist to handle missingness; however, limited work has been done on developing imputation methods for longitudinal studies. The aim of this study is to evaluate existing methods that have been proposed to impute missing data for repeated measurements studies via simulations based on real data with 50% of observations missing. These include: (1) a random intercept extension of multiple imputation using a multivariate normal imputation model (MVNI), (2) multiple imputation by chained equations (ICE), (3) a random intercept extension of ICE, (4) Bayesian multiple imputation with univariate hierarchical imputation models. Comparisons were made with the likelihood analysis of all available cases (AC). AC analysis is unbiased after adjusting for predictors of missingness in the analyses when missingness is at random. The main gain in using MI is the efficiency, especially if missingness is non-monotone. Amongst the imputation methods, MVNI provided unbiased estimates for the regression coefficients of continuous variables, even if these variables showed departures from normality. However, imputing incomplete binary variables assuming multivariate normality resulted in some bias. ICE approaches do not guarantee unbiased estimates for incomplete non-normal continuous variables and transformations did not help. Bayesian imputation using hierarchical Gamma and Half-Normal imputation models for skewed variables reduced bias and improved efficiency. For incomplete variables satisfying multivariate normality, MVNI and ICE provided similar and unbiased results. Multiple imputation methods for repeated measurements data should be used after careful investigation of the distribution of the incomplete variables, assumptions regarding correlations and the missingness pattern.

C25 Multiple imputation methods

C25.1
Multiple Imputation of Missing Covariates with Non-Linear Effects and Interactions: an Evaluation of Statistical Methods
Shaun Seaman1, Jonathan Bartlett2, Ian White1
1MRC Biostatistics Unit, Cambridge, UK, 2London School of Hygiene and Tropical Medicine, London, UK
Multiple imputation is often used for missing data. When a model contains as covariates more than one function of a variable, it is not obvious how to impute missing values. Consider regression with outcome Y and covariates X and X^2. In `passive imputation' a value X* is imputed for X and then X^2 is imputed as (X*)^2. A recent proposal is to treat X^2 as `just another variable' (JAV) and impute X and X^2 under multivariate normality.
We investigate three methods: 1) linear regression of X on Y to impute X and passive imputation of X^2; 2) the same regression but with predictive mean matching (PMM); and 3) JAV. We also investigate analogous methods when the analysis involves an interaction. We describe when JAV gives consistent estimation, explore by simulation the bias of all three methods, and illustrate the methods using the EPIC study.
JAV gives consistent estimation when the analysis is linear regression with a quadratic or interaction term and X is missing completely at random. JAV may be biased when X is missing at random, but this bias is generally less than for passive imputation and PMM. Coverage for JAV was usually good when bias was small. However, in some scenarios with a more pronounced quadratic effect, bias was large and coverage poor. When the analysis was logistic regression, JAV's performance was sometimes very poor. PMM generally improved on passive imputation, in terms of bias and coverage, but did not eliminate the bias.

C25.2
Dynamic Predictions of Repeated Events of Different Types by Landmarking
Z.J. Musoro, R.B. Geskus, A.H. Zwinderman
Academic Medical Center, Amsterdam, The Netherlands
The landmarking paradigm offers a flexible way to characterize the association between a longitudinal biomarker process and the time until a clinical event. By facilitating direct prediction of survival probabilities in the presence of time-dependent covariates and time-variant coefficients, landmark models present an alternative to full probability joint models. We studied post kidney transplantation records of 467 patients (at the Academic Medical Center, Amsterdam) who had repeated events of different types, and were repeatedly measured for multiple biomarkers. Landmark points were defined at the 20th to 80th percentile of unique infection times (years). Infection-specific Cox proportional hazards models with landmark-dependent frailties were considered. The baseline hazard was allowed to vary by landmark. Dependency of the infection-specific hazards on the biomarker history was assumed to be via current biomarker values at the landmark points only. Patients' baseline covariates (age, gender, type of immune suppressive treatment, and duration of dialysis prior to transplant) were allowed to have landmark-dependent coefficients. We adopted options to smooth the coefficients over the landmarks. Models assuming landmark-invariant baseline hazard and coefficients were also evaluated. Our findings revealed that the prognostic effect of natural killer cells on both viral and upper respiratory infections was fairly constant over landmarks (approximated hazard ratio of -1.7 and -1.5 respectively), while the effect of CD3+ cells on upper respiratory infection was slightly larger for early landmark points (hazard ratio of -1.4 versus -0.75). Also, the effect of CD3+ cells on viral infections seemed to increase linearly with landmark numbers.

C25.3
Confidence intervals after multiple imputation: combining profile likelihood information from logistic regressions
Georg Heinze1, Meinhard Ploner2, Jan Beyea3
1Medical University of Vienna, Vienna, Austria, 2data-ploner.com, Brunico, Italy, 3CIPI Consulting in the Public Interest, Lambertville, NJ, USA
In the logistic regression analysis of a small-sized, case-control study on Alzheimer's disease, some of the risk factors exhibited missing values, motivating the use of multiple imputation. Usually, Rubin's rules (RR) for combining point estimates and variances would then be used to estimate (symmetric) confidence intervals (CI), on the assumption that the regression coefficients were distributed normally. Yet, rarely is this assumption tested, with or without transformation. In analyses of small, sparse, or nearly separated data sets, such symmetric CI may not be reliable. Thus, RR alternatives have been considered, e.g., Bayesian sampling methods, but not yet those that combine profile likelihoods, particularly penalized profile likelihoods, which can remove first order biases and handle separation of data sets.
To fill the gap, we consider the combination of penalized likelihood profiles by expressing them as posterior distribution functions (PDF) obtained via a chi-squared approximation to the penalized likelihood ratio statistic. PDFs from multiple imputations can then easily be averaged into a combined PDFc, allowing confidence limits for a parameter β at level 1-α to be identified as those β* and β** that satisfy PDFc(β*)=α/2 and PDFc(β**)=1-α/2.
We demonstrate that this "CLP" method outperforms RR in analyzing both simulated data and data from our motivating example. CLP can also be useful as a confirmatory tool, should it show that the simpler RR are adequate for extended analysis. We also compare the performance of CLP to Bayesian sampling methods using Markov chain Monte Carlo. CLP is available in the R package logistf.

C25.4
Congenial multiple imputation of partially observed covariates within the full conditional specification framework
Jonathan Bartlett1, Shaun Seaman2, Ian White2, James Carpenter1
1London School of Hygiene & Tropical Medicine, London, UK, 2MRC Biostatistics Unit, Cambridge, UK
Missing covariate data is a common issue in epidemiological and clinical research, and is often dealt with using multiple imputation (MI). When the analysis model is non-linear, or contains non-linear (e.g. squared) or interaction terms, this complicates the imputation of covariates. Standard software implementations of MI typically impute covariates from models that are uncongenial with such analysis models. We show how imputation by full conditional specification, a popular approach for performing MI, can be modified so that covariates are imputed from a model which is congenial with the analysis model. We investigate through simulation the performance of this proposal, and compare it to passive imputation of non-linear or interaction terms and the `just another variable' approach. Our proposed approach provides consistent estimates provided the imputation models and analysis models are correctly specified and data are missing at random. In contrast, passive imputation of non-linear or interaction terms generally results in inconsistent estimates of the parameters of the model of interest, while the `just another variable' approach gives consistent results only for linear models and only if data are missing completely at random. Furthermore, simulation results suggest that even under imputation model mis-specification our proposed approach gives estimates which are substantially less biased than estimates based on passive imputation. The proposed approach is illustrated using data from the National Child Development Survey in which the analysis model contains both non-linear and interaction terms.

C25.5
Diagnosing the goodness-of-fit of models used for multiple imputation
Cattram Nguyen1, Katherine Lee1,2, John Carlin1,2
1University of Melbourne, Melbourne, Australia, 2Murdoch Children's Research Institute, Melbourne, Australia
Multiple imputation (MI) is an increasingly popular tool for handling missing data, but there is a scarcity of tools for checking the adequacy of imputation models. The models used for imputation are statistical models similar to those used in other contexts and so it seems important to consider whether they adequately fit the data to which they are applied. The Kolmogorov-Smirnov test has been identified as a potentially useful diagnostic tool for flagging instances where the distribution of imputed values deviates from that of the observed values of a variable with missing data. Although this test is gaining some recognition in the MI setting, its usefulness as an imputation diagnostic has not been formally evaluated. We assessed its performance in the simple simulation setting of a univariate regression model in which the single covariate was subject to missing data, inducing missingness under MCAR and MAR mechanisms. Although the test was clearly able to flag differences between the observed and imputed distributions, these differences were very weakly associated with the validity of estimation of the regression coefficient. Indeed, with data MAR one expects distributions to differ while MI should correct bias in estimation. We conclude that simple automated flagging of distributional differences is not a useful approach to imputation diagnostics. More focused approaches are needed, such as the method of posterior predictive checking (He & Zaslavsky, 2012), which directly assesses the extent to which inferences of interest are consistent with the imputation model.
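A minimal sketch of the diagnostic being evaluated (our own toy example, assuming a correctly specified normal imputation model under MAR): the Kolmogorov-Smirnov test compares observed and imputed values and, as the abstract argues, flags a difference even though the imputation is valid.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(5)
n = 2000
y = rng.normal(size=n)
x = 0.5 * y + rng.normal(size=n)
missing = rng.uniform(size=n) < 1 / (1 + np.exp(-y))   # MAR: missingness depends on y
obs = ~missing

# Proper imputation of x from the (correct) regression of x on y.
slope, intercept = np.polyfit(y[obs], x[obs], 1)
resid_sd = np.std(x[obs] - (intercept + slope * y[obs]))
x_imputed = intercept + slope * y[missing] + rng.normal(0, resid_sd, missing.sum())

# The two distributions differ under MAR, so the test rejects despite valid imputation.
stat, pval = ks_2samp(x[obs], x_imputed)
print("KS statistic:", round(stat, 3), " p-value:", round(pval, 4))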
Afternoon sessions (I6 , C26 - 34)
I6 Genomics and systems biology
I6.1
Nonparametric Bayesian Modelling in Systems Biology
Katja Ickstadt
Faculty of Statistics, TU Dortmund University, Germany
This contribution will begin with a short introduction to nonparametric Bayesian modelling, including generalized Dirichlet process mixture models as well as Poisson/gamma models. We will then introduce two specific applications from systems biology in more detail.
The first application centers around the problem of spatial modelling of protein structures on the cellular membrane. Spatial effects such as clustering are supposed to influence signal transmission. Here, a variation of the Dirichlet process mixture model with mixtures of multivariate normals will be employed in order to understand the cluster structure of Ras, a small protein adherent to the plasma membrane.
The main goal of the second example is to understand how several components of a biological network are connected. In particular, we will study cell-matrix adhesion sites. Bayesian networks are a main model class for such problems; however, they have the drawback of making parametric assumptions. We will employ a nonparametric Bayesian network approach to overcome this limitation.
In both examples, generalizations of the Dirichlet process prior, like the Pitman-Yor prior, for nonparametric Bayesian inference will be discussed. Also, biological prior knowledge will be incorporated into the nonparametric Bayesian models.
I6.2
Reliable Preselection of Variables in High-dimensional Penalized
Regression Problems by Freezing
Linn Cecilie Bergersen1, Ismaïl Ahmed2, Arnoldo Frigessi3, Ingrid K. Glad1 and Sylvia Richardson4
1Department of Mathematics, University of Oslo, Norway, 2Inserm, CESP Centre for Research in Epidemiology and Population Health, France, 3Department of Biostatistics, University of Oslo, Norway, 4Department of Epidemiology and Biostatistics, Imperial College London, UK
Relating genomic measurements such as gene expressions or SNPs to a specific phenotype of interest often involves having a large number P of covariates
compared to the sample size n. While P>>n problems can be solved by
penalized regression methods like the lasso, challenges still remain if P is so
large that the design matrix cannot be treated by standard statistical software.
For example in genome-wide association studies, the number of SNPs can be
more than 1 million and it is often necessary to reduce the number of
covariates prior to the analysis. This is often called preselection. We introduce
the concept of freezing which enables reliable preselection of covariates in
lasso-type problems. Our rule works in combination with cross-validation to
choose the optimal amount of tuning with respect to prediction performance
and finds the solution of the full problem with P covariates using only a subset
P'<<P of them. By investigating freezing patterns, we are able to avoid
preselection bias, even if variables are preselected based on univariate
relevance measures connected to the response. We demonstrate the concept
in simulation experiments and observe impressive data reduction rates, without losing variables that are actually selected in the full problem. We also apply
our rule to genomic data, including an ultra high-dimensional regression setting
where we are not able to fit the full regression model.
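The preselection problem can be sketched as follows (our own toy illustration; this shows only naive univariate preselection followed by the lasso, not the authors' freezing rule, which is about performing such a reduction without preselection bias):

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)
n, p = 100, 20_000                      # P >> n, as in genomic applications
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:10] = 1.0                         # ten truly associated covariates
y = X @ beta + rng.normal(size=n)

# Naive preselection: keep the 500 covariates most correlated with the response,
# then fit the lasso on the reduced design only.
score = np.abs(X.T @ (y - y.mean()))
keep = np.argsort(score)[-500:]
fit = Lasso(alpha=0.1).fit(X[:, keep], y)
selected = keep[np.flatnonzero(fit.coef_)]
print("selected:", selected.size, "covariates;",
      "true positives:", int(np.isin(selected, np.arange(10)).sum()))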
I6.3
Using Heritability Analysis to Devise a Prediction Model for Epilepsy
Doug Speed
University College London Genetics Institute, UK
There is continued discussion regarding the so-called "missing heritability" problem. By applying a linear mixed model to whole-genome SNP data, a series of papers headed by Yang et al. have presented strong evidence that many complex traits are highly polygenic, so that while common variants can explain most of the heritability, each on average makes such a small contribution that their detection by standard-size GWAS is almost impossible.
The weight of evidence offered by Yang et al.'s findings hinges on the reliability of the linear mixed model for heritability estimation. By testing the method under a range of scenarios, we have found the technique to be highly robust in general; the exception is when it is applied to rare-variant traits, but we have developed a fix which leads to dramatically improved performance in this case.
By applying the linear mixed model to our epilepsy data, we have determined that, even though the condition is almost certain to be highly polygenic, useful prediction models should nonetheless be feasible. In this talk, we discuss our work on heritability estimation. We show how this allows us to narrow down the search for variants and pathways which influence an individual's susceptibility to epilepsy, and facilitates a practical model for predicting whether single-seizure individuals will subsequently develop the condition.
Reference:
Common SNPs explain a large proportion of the heritability for human height; J. Yang, P. Visscher et al., Nature Genetics 2010
C26 Competing risk
C26.1
Decomposing number of life years lost according to causes of death
Per Kragh Andersen
Department of Biostatistics, University of Copenhagen, Copenhagen, Denmark
The standard competing risks model is studied and we show that the cause j
cumulative incidence function integrated from 0 to t has a natural interpretation
as the expected number of life years lost due to cause j before time t. This is
analogous to the t-restricted mean life time which is the survival function
integrated from 0 to t. The large sample properties of a non-parametric
estimator are outlined, and the method is exemplified using a standard data set
on survival with malignant melanoma. It is discussed how the number of years
lost may be related to subject-specific explanatory variables in a regression model based on pseudo-observations. The method is contrasted with cause-specific measures of life years lost used in demography.
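In symbols (notation introduced here for illustration, with F_j the cause-j cumulative incidence function and S the overall survival function), the quantities described above are

L_j(t) = \int_0^t F_j(u)\,du ,
\qquad
\sum_j L_j(t) = \int_0^t \{1 - S(u)\}\,du = t - \int_0^t S(u)\,du ,

so that the cause-specific numbers of life years lost before time t add up to the difference between t and the t-restricted mean life time.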
C26.2
Parametric modelling of the cumulative incidence function in competing risks
models
Paul Lambert, Sally Hinchliffe, Michael Crowther
University of Leicester, Leicester, UK
Competing risks occur when an individual is at risk of more than one type of
event. With such data interest often lies in estimation and modelling of the
cause-specific cumulative incidence function (CIF), i.e. the cumulative
probability of a particular event occurring in the presence of other competing
events. Geskus (Biometrics 2011;67:39-49) showed that by simple data
expansion together with a time-dependent weighted Kaplan-Meier method it is
possible to directly estimate the CIF. In addition, Geskus showed that fitting a
Cox model with time dependent weights to the expanded data is equivalent to
the competing risks model of Fine and Gray, thus giving estimates of
subdistribution hazard ratios. The aim of this work is to demonstrate that similar
ideas of data expansion with a weighted likelihood can be used for parametric
survival models. This opens up many more opportunities for modelling CIFs
using standard parametric survival analysis tools.
We illustrate the approach by fitting a flexible parametric survival model with
time-dependent weights. The model uses restricted cubic splines to model the
log baseline cumulative subdistribution hazard function, providing smooth
estimates of the CIF. One important advantage of the approach is that it is
easily extended to model time-dependent sub-hazard ratios. In addition,
models on other scales, such as a proportional odds model that incorporates
splines for the baseline, are easily implemented. Simulation studies show that
these models have good statistical properties in terms of bias and coverage.
C26.3
Nested case-control studies in cohorts with competing events
Martin Wolkewitz1, Mercedes Palomar2, Ben Cooper3, Martin Schumacher1
1Institute of Medical Biometry and Medical Informatics, Freiburg, Germany, 2Universitat Autonoma de Barcelona, Barcelona, Spain, 3Mahidol University, Bangkok, Thailand
The nested case-control design is the most widely used method of sampling from epidemiological cohorts. Incidence density sampling is the time-dependent matching procedure used to create such a nested case-control study. For each case, controls must be disease free at the time of diagnosis of the case to which they are matched. The potential impact of exposures on disease occurrence can then be studied in a reduced data set via conditional logistic regression. This method allows estimation of the incidence rate ratio (or hazard ratio) which would be obtained from a Cox regression model applied to the full cohort.
However, often the observation of the disease of interest is preceded by other 'competing' events which prevent us from observing the disease of interest. Two approaches deal with competing event data: the event-specific hazard approach, which addresses the aetiological point of view, and the subdistribution hazard approach, which is linked to the cumulative incidence function; the latter is suitable for prediction.
We extend previous work by Lubin (Biometrics. 1985;41:49-54), who studied nested case-control studies in a competing event setting by focussing on the event-specific hazard approach. Here, we propose a sampling method for the sub-distribution hazard approach and suggest a time-dependent version of cumulative incidence sampling by combining two sampling procedures: cumulative incidence and incidence density sampling.
The methodology is illustrated by a hospital infection example.
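For orientation, classical incidence density sampling on a hypothetical cohort can be sketched as follows (our own illustration of the standard event-specific risk set; the abstract's proposal modifies this risk set for the subdistribution hazard approach):

import numpy as np

rng = np.random.default_rng(7)
n = 5000
time = rng.exponential(10.0, n)                              # hypothetical follow-up times
cause = rng.choice([0, 1, 2], size=n, p=[0.6, 0.15, 0.25])   # 0=censored, 1=event of interest, 2=competing event

m_controls = 2
matched_sets = []
for i in np.flatnonzero(cause == 1):
    # Event-specific risk set at the case's event time: subjects still under
    # observation, i.e. no event of any type and no censoring before that time.
    at_risk = np.flatnonzero(time > time[i])
    if at_risk.size == 0:
        continue
    k = min(m_controls, at_risk.size)
    controls = rng.choice(at_risk, size=k, replace=False)
    matched_sets.append((i, controls.tolist()))

print("number of matched case-control sets:", len(matched_sets))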
C26.4
Late entry bias in cohort studies with competing endpoints
Giorgos Bakoyannis, Giota Touloumi, on behalf of CASCADE Collaboration in EuroCoord
Athens University Medical School, Athens, Greece
In many cohort studies with competing endpoints, individuals are recruited after the onset of the risks under study. For example, when studying the incidence of AIDS and non-AIDS related death in HIV infected individuals, subjects are recruited at some time after their corresponding seroconversion dates. This phenomenon is known as left truncation in survival analysis. In such settings, individuals are under follow-up conditional on having survived at least until their recruitment to the study. Such a conditioning may induce late entry bias. In this work we define the basic structure of late entry bias in competing risks studies through Directed Acyclic Graphs (DAGs) and investigate the extent of bias in the covariates' effect estimates, under the Fine-Gray model, through simulation experiments.
It can be shown that late entry bias is a form of selection bias induced through conditioning on survival status at recruitment. In the simple case of a unique covariate and two competing endpoints, late entry bias in the covariate's effect estimate is induced if the covariate affects entry time or the hazard of the competing event. Simulation studies showed that there is substantial bias in the effect estimates under a Fine-Gray model, as well as low empirical coverage probabilities, if the covariate affects entry time or the hazard of the competing event. The magnitude of the bias is analogous to the magnitude of the effect of the covariate on entry time or on the occurrence of the competing risk.

C26.5
Inverse probability weighted estimators in survival analysis
Ronald Geskus1,2
1Academic Medical Center, Amsterdam, The Netherlands, 2Amsterdam Health Service, Amsterdam, The Netherlands
In the analysis of right censored and/or left truncated time-to-event data, the hazard plays a predominant role. The reason is that estimation of the hazard is straightforward: the numerator is the observed number of events and the denominator is the observed number at risk. In a competing risks setting, estimation of the subdistribution hazard is more complicated, since individuals who experience a competing event remain in the risk set until their (usually unobserved) censoring time.
In both settings, the standard nonparametric maximum likelihood estimator (NPMLE) of cumulative incidence has an algebraically equivalent representation as a weighted empirical cumulative distribution function (WECDF). Weights are determined by a redistribution of probability mass over future event times for censored cases. Furthermore, left truncation causes everything that is observed to be reweighted in order to compensate for missed individuals (Geskus, 2011). The equivalence allows for an alternative approach to estimation with some pleasant characteristics. For example, the estimator of the subdistribution hazard that is used in the Fine and Gray model (1999) for competing risks follows immediately.
More complicated schemes occur with doubly truncated, doubly censored and interval censored data. Here, the hazard plays a much less important role. With doubly truncated data, we show that the expression for the NPMLE as derived by Shen (2010) is of the weighted ECDF form and a one-step estimator is easily derived. Whether the NPMLE has a weighted ECDF form in case of doubly or interval censored data is unclear. Some ideas in this direction are discussed.

C27 Statistics for epidemiology II

C27.1
Modeling changes in cancer risk with time from diagnosis of a family member
Marie Reilly1, Myeongjee Lee1, Paola Rebora2, Kamila Czene1
1Karolinska Institutet, Stockholm, Sweden, 2University of Milano-Bicocca, Monza, Italy
It is well accepted that the diagnosis of cancer confers an increased risk on family members, but there are many unanswered questions concerning how this increased risk depends on the age at diagnosis of the index patient, the age of the relative(s) at risk, and the time since the index diagnosis. Using the Swedish cancer register to identify cancer patients, and the Swedish Multi-Generation register to link relatives, we extracted data on families with any one of five major cancers (colorectal, lung, breast, prostate, melanoma) and matched controls from families who were free of cancer on the date of the index diagnosis in the case family. The increased risk of cancer in the case families compared to control families was estimated as IRRs from Poisson regression models with time since the index diagnosis as smoothed splines, and from flexible parametric survival models of the time from the index date to cancer in relatives. The overall familial risk estimates confirmed the published values for the five cancers. The risk profile for family members was found to be approximately constant for up to 20 years for colorectal, breast, and lung cancer, but there was evidence of a small decline in risk in the first 5 years for melanoma and a sharp decline for prostate cancer, consistent with a lead-in bias from screening of family members. These results can contribute to the genetic counseling and optimal screening of family members of cancer patients.

C27.2
A comprehensive model for jointly estimating familial risk in all first-degree relatives
Myeongjee Lee1, Paola Rebora2, Kamila Czene1, Maria Grazia Valsecchi2, Marie Reilly1
1Karolinska Institutet, Stockholm, Sweden, 2University of Milano-Bicocca, Monza, Italy
Familial aggregation is usually evaluated by means of standardised incidence ratios of the disease of interest in relatives of affected individuals. This approach has the advantage of being simple to implement and interpret but it has several limitations: it does not account for familial correlation and does not provide a formal statistical test to compare the risk in different relatives. An alternative method has been proposed (Pfeiffer 2004) where the familial risk is estimated by the relative risk of first degree relatives of diseased individuals (cases) compared to relatives of a random sample of unaffected controls who may be matched with the cases. The Cox model can be applied and, where the study is based on population registers, bootstrapping can be used to account for the matching and the possible relatedness of cases. We have extended this approach using interaction terms that enable the formal comparison of the risks for different relationships within an affected family. We present these risks graphically on a pedigree plot. We applied the method to a study of the aggregation of adult leukaemia in the Swedish population. No overall aggregation was found for myeloid leukemia, while for lymphatic leukemia the familial aggregation was high (hazard ratio 5.42). Focusing on chronic lymphatic leukemia, we found evidence that the familial risks for different family members were significantly different, which can help provide insight into the contribution of genes and environmental factors to the risk of this disease.
C27.3
Multinomial multi-latent-class model. Application to multiple exposures in an occupational setting and the risk of several histological subtypes of lung cancer
Josué Almansa, Lützen Portengen, Roel Vermeulen
IRAS. Utrecht University, Utrecht, The Netherlands
Given the individual heterogeneity in large epidemiologic samples, it could be
expected that the effect of a certain exposure varies across the entire
population, maybe largely caused by unobserved information (e.g. genetic
predisposition or lifestyle). Latent class modeling allows classifying individuals
according to observed and unobserved heterogeneity.
This study assesses the effect of different lifetime-cumulative exposures on the risk of lung cancer in an occupational environment. A multinomial logistic regression estimates the probability of each histological cancer subtype (squamous cell, small cell, adenocarcinoma and others). The model includes two types of latent class variables: first, a class-exposure variable summarizes the most relevant combination patterns of the multivariate exposure measurements; second, each histological cancer subtype has an associated latent class variable (cancer-subtype class) so that the effect of class-exposure and covariates on the cancer subtype can vary across the population.
It was found that all combinations of the exposure measurements were summarized by 8 (latent) patterns. Moreover, each of the cancer-subtype-class variables defined two subpopulations: with and without risk of its cancer subtype. These cancer-subtype latent classes could also be understood as discrete frailty variables. Among those in the risk class, there was a significant effect of lifetime-cumulative exposures only for squamous and small cell.
Compared to a model without cancer-subtype latent classes, our results showed that there is a part of the population that, even when exposed, has no risk of lung cancer. Moreover, the ORs of cancer associated with the exposures within the risk classes were larger than the ones obtained from a (one-class) model for the entire population.
C27.5
A parametric approach to the reporting delay adjustment method applied to
drug use data
Albert Sanchez-Niubo1, Alessandra Nardi2, Antònia Domingo-Salvany1,
Gianpaolo Scalia-Tomba2
1
Drug Abuse Epidemiology Research Group, IMIM-Hospital del Mar,
Barcelona, Spain, 2Dept. of Mathematics. University of Rome Tor Vergata,
Rome, Italy
In its classical form, the reporting delay adjustment method (Brookmeyer R et
al. Am J Epidemiol 1990;132:355-65) allows simultaneous estimation of cohort
sizes and a non-parametric lag-time distribution based on reported data,
classifiable as to onset period (cohorts), during a sequence of reporting
periods. Since counts for more recent cohorts are truncated at the moment of
analysis, the estimated lag-time distribution is used to estimate the as yet
missing part of each cohort, thus "adjusting" for the delay between onset
(cohort) and reporting. This method has been used with drug use data, where
the onset refers to the start of drug use and the reporting time to first contact
with a treatment centre. In the classical approach, if one wishes to have the
lag-time distribution as a proper distribution, one must assume that the longest
observed delay corresponds to the right end of the support of the distribution
and that this distribution then stays the same for all cohorts. We have opted for
a parametric approach, where the delays are modelled as e.g. truncated
Weibull distributions, discretized to fit the reporting periods. This approach
allows estimation of parameters for each cohort and thus changes in, say,
average lag-time can be monitored. Smoothing of the time series of
parameters has been considered, as well as the possible role of the choice of
parametric distribution. The goodness of fit of the proposed models to the
observed data will be evaluated via residual analysis. Examples of application
to Spanish data will be presented.
C27.4
Modelling the age-dependence of risk in a self-controlled case series analysis
Katherine Lee1,2, John Carlin1,2
1Murdoch Childrens Research Institute, Melbourne, Australia, 2The University of Melbourne, Melbourne, Australia
Since the withdrawal of an earlier vaccine against rotavirus (the leading cause of gastroenteritis), several studies have examined evidence for an association between current rotavirus vaccines and risk of intussusception (a rare bowel obstruction) in infants. The self-controlled case series (SCCS) method is a statistical approach to investigate associations between acute outcomes and transient exposures, and has been widely applied to assess potential vaccine side-effects. The method uses cases only, and compares exposed time at risk (e.g. 21 days following a vaccine) with time at risk outside this window within an individual, using conditional Poisson regression. The risk of outcome generally varies with age and it is important to allow for this within the analysis. The standard approach is to split the observation period into age categories, and allow a separate risk in each category using indicator variables. Alternatively, we propose using a fractional polynomial to fit a smooth curve across the age categories. We compare these two approaches fitted to varying numbers of age categories using simulation.
We demonstrate that fractional polynomials are more efficient than indicators, and can lead to more reliable inference, particularly when there are few cases. In contrast, if there is an abundance of cases fractional polynomials may bias the estimated exposure-outcome relationship if the age categorisation is coarse.
We conclude that fractional polynomials provide a viable alternative for modelling age in an SCCS analysis, but highlight the importance of exploring the sensitivity of results to the adjustment method and the number of categories used.

C28 Model selection II

C28.1 (Scientist award winner)
Robust Gene Selection Based on Minimal Shrinkage Redundancy
Jan Kalina, Zdenek Valenta
Institute of Computer Science AS CR, Prague, Czech Republic
Dimension reduction is a common procedure in the analysis of gene expression measurements. However, usual gene selection methods have a tendency to pick gene sets with an undesirable redundancy, which weakens the performance of consequent classification methods. The Minimum Redundancy Maximum Relevance (MRMR) criterion was proposed to minimize the gene set redundancy in the process of gene selection. Usual relevance and redundancy criteria are either too sensitive to noise or presence of outlying measurements (mutual information, F test statistic) or inefficient (Spearman rank correlation coefficient). Therefore alternative approaches to the MRMR dimension reduction are highly desirable.
We investigate novel measures of relevance and redundancy, which are based on modern statistical estimation methods. We propose a shrinkage version of the coefficient of multiple correlation and use it as a measure of redundancy of a gene set. Another proposal is a highly robust correlation coefficient based on the least weighted squares regression with adaptive weights. This method has a high breakdown point, which is a crucial statistical measure of sensitivity against noise or influential outliers in the data. Our MRMR criterion combining these approaches is called Minimum Shrinkage Redundancy Maximum Robust Relevance (MSRMRR). The method is illustrated on gene expression measurements in a study on patients with cerebrovascular stroke. The new criterion outperforms standard relevance and redundancy measures, particularly for gene expression measurements contaminated by noise.
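The sketch below shows the general minimum-redundancy/maximum-relevance idea with a crude shrinkage of the pairwise correlations towards zero; it is only a caricature, not the MSRMRR criterion, whose shrinkage multiple correlation and robust least-weighted-squares correlation are not reproduced here. The data, shrinkage intensity and greedy search are all illustrative assumptions.

```python
import numpy as np

def greedy_mrmr(X, y, n_select=10, shrink=0.2):
    """Greedy minimum-redundancy / maximum-relevance selection.
    Relevance  : |correlation| between each gene and the class label.
    Redundancy : mean |shrunken correlation| with already-selected genes,
                 with correlations shrunk towards 0 by `shrink`."""
    n, p = X.shape
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (y - y.mean()) / y.std()
    relevance = np.abs(Xs.T @ ys) / n

    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        cand = [j for j in range(p) if j not in selected]
        red = np.array([np.mean(np.abs((1 - shrink) * (Xs[:, selected].T @ Xs[:, j]) / n))
                        for j in cand])
        score = relevance[cand] - red
        selected.append(cand[int(np.argmax(score))])
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))                       # 60 samples, 500 simulated genes
y = (X[:, :3].sum(1) + rng.normal(size=60) > 0).astype(float)
print(greedy_mrmr(X, y, n_select=5))
```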
C28.2
Boosting for variable selection in structured survival models
Mar Rodriguez-Girondo1, Thomas Kneib2, Carmen Cadarso-Suárez3, Emad Abu-Assi4
1University of Vigo, Vigo, Spain, 2Georg-August-Universität Göttingen, Göttingen, Germany, 3University of Santiago de Compostela, Santiago de Compostela, Spain, 4Hospital Clínico Universitario de Santiago, Santiago de Compostela, Spain
To improve both prognosis and clinical management of acute myocardial infarction patients, an accurate assessment of the different prognostic factors is required.
Recent developments of flexible methods for survival analysis, such as structured hazard regression models based on penalized splines, allow for a flexible modeling of the variables affecting survival. Moreover, these models enable the inspection of possible interactions between prognostic factors and the assessment of time-dependent associations. Despite their immediate appeal in terms of flexibility, these models introduce additional difficulties when a subset of covariates and the corresponding modeling alternatives have to be chosen.
We propose a boosting algorithm for model selection in the structured survival regression framework. Our proposal allows for data-driven determination of the amount of smoothness required for the nonlinear effects and combines model selection with an automatic variable selection property. For computational convenience, we propose to use the piecewise exponential representation for censored survival times, which enables the use of a Poisson-likelihood boosting approach. The performance of our approach was assessed via an intensive simulation study. Finally, we apply this method to propose a prognostic model to predict mortality after discharge for patients who suffered an acute myocardial infarction. We analyze previously established cardiovascular risk factors, jointly with other less investigated clinical factors such as bleeding, an increasingly incident consequence of the widespread use of aggressive management in these patients in recent years.

C28.3
Feature subset selection linked to linear separability
Bobrowski Leon1,2
1Computer Science Department, Białystok University of Technology, Bialystok, Poland, 2Institute of Biocybernetics and Biomedical Engineering, PAS, Warsaw, Poland
Feature selection procedures are aimed at neglecting the largest possible number of those features (measurements) which are irrelevant or redundant for a given classification or prediction problem. The feature selection problem is particularly challenging in the exploration of genetic data sets.
Here we consider the relaxed linear separability (RLS) method of feature subset selection, based on minimization of convex and piecewise-linear (CPL) criterion functions. This approach refers to the concept of linear separability of the learning sets and is considered in the framework of pattern recognition and data mining methods.
The RLS procedure was applied, among others, to the Breast Cancer data set which contains descriptions of 46 cancer and 51 non-cancer patients (van't Veer, L. J., et al., 2002). Each patient was characterized in this set by n = 24481 genes. The RLS method allowed selection of an optimal subset of n1 = 12 genes and identification of a linear combination (the linear key) of these genes which correctly distinguishes cancer from non-cancer patients in this set, with 100% accuracy. This example demonstrates the ability to use data mining techniques based on the CPL criterion functions also when the number of features is many times greater than the number of objects (Bobrowski, Lukaszuk, INTECH 2011).
The RLS method of feature selection was also used, among others, in designing regression (prognostic) models on the basis of genetic data sets combined with censored survival times.
This work was supported by the NCBiR project N R13 0014 04.

C28.4
Predictive genomic signatures: Biomarker discovery in high-dimensional data
Wiebke Werft, Martina Fischer, Axel Benner
German Cancer Research Center, Heidelberg, Germany
No treatment works the same for every patient. Few therapies will benefit all patients, and some may even cause harm. Hence, biological markers ("biomarkers") are required that can guide patient-tailored therapy. Using omics technologies the challenge is to derive a predictive genomic signature from a large number of candidates.
Commonly the identification of potentially predictive biomarkers is addressed by inference of regression models including interaction terms between the (continuous) biomarkers and the treatment assignment. To derive a prediction model based on a list of potentially predictive biomarkers we propose to combine componentwise screening with a final modelling step comprising a forward stepwise selection of interactions.
To screen for predictive biomarkers we investigated several extensions to standard approaches including multivariable fractional polynomials, concordance regression, and the application of the permutation of regressor residuals test. In the modelling step grouped penalization was applied.
We used simulation studies to assess the utility of the proposed procedures. Applications to two prospectively planned, randomized clinical trials will illustrate our findings.

C28.5
A multiple testing method for ordered data
Rosa Meijer, Jelle Goeman
LUMC, Leiden, The Netherlands
We present a multiple testing method for hypotheses that are ordered in space or time. This method combines tests of individual hypotheses with global tests for intervals of consecutive hypotheses. Although one usually aims at rejecting individual hypotheses, these rejections cannot always be made because of too small individual effects. Assuming that consecutive hypotheses will provide similar information, it can be beneficial to perform global tests on interval hypotheses in which the individual effects in a particular interval are combined. These interval hypotheses might capture enough information to get rejected, even if this does not hold for the elements they consist of.
To be able to test all individual hypotheses as well as all interval hypotheses while still controlling the familywise error rate, we apply the sequential rejection principle and make use of logical relationships present in the set of hypotheses. We start by testing the global null hypothesis and, when this hypothesis can be rejected, we continue with further specifying the exact location(s) of the effect present. The final results enable us to derive statements on how many hypotheses in a certain interval have to be false.
The method is best applied to data in which neighboring covariates are expected to behave similarly and where intervals of covariates are of intrinsic interest, for example SNP (single nucleotide polymorphism) data, where intervals of SNPs might indicate genes or, on a higher level, full chromosomes. The method is implemented in R and can be used on various data types.
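The sketch below only conveys the general idea of testing interval hypotheses top-down, combining consecutive p-values with Fisher's method and descending into sub-intervals when an interval is rejected. It is a naive illustration under assumed independence of the p-values; it does not implement the sequential rejection principle and therefore does not carry the familywise error rate guarantee of the method in the abstract.

```python
import numpy as np
from scipy.stats import chi2

def fisher_p(pvals):
    """Fisher combination p-value for a set of (assumed independent) p-values."""
    stat = -2.0 * np.sum(np.log(pvals))
    return chi2.sf(stat, df=2 * len(pvals))

def test_intervals(pvals, lo, hi, alpha=0.05, rejected=None):
    """Naive top-down search: test an interval hypothesis and, if rejected,
    recurse into its two halves. Illustration only (no formal FWER control)."""
    if rejected is None:
        rejected = []
    if fisher_p(pvals[lo:hi]) <= alpha:
        rejected.append((lo, hi))
        if hi - lo > 1:
            mid = (lo + hi) // 2
            test_intervals(pvals, lo, mid, alpha, rejected)
            test_intervals(pvals, mid, hi, alpha, rejected)
    return rejected

rng = np.random.default_rng(1)
p = rng.uniform(size=40)
p[10:15] = rng.uniform(0, 0.01, size=5)   # a small cluster of true effects
print(test_intervals(p, 0, len(p)))
```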
C29 Statistical design and methodology I

C29.1
Goodness-of-fit tests for a semiparametric model under random double truncation
Carla Moreira1,3, Jacobo de Uña-Álvarez1, Ingrid Van Keilegom2
1University of Vigo, Department of Statistics and O.R., Lagoas-Marcosende, 36 310 Vigo, Spain, 2Institute of Statistics, Biostatistics and Actuarial Sciences, Université catholique de Louvain, Voie du Roman Pays 20, B-1348 Louvain-la-Neuve, Belgium, 3University of Minho, Campus Azurém, Guimarães, Portugal
Randomly truncated data frequently appear in Epidemiology and Survival Analysis. This happens for instance when the lifetimes or inter-event times at hand correspond to events falling in some observational window. When the observational window is bounded at both ends one gets doubly truncated data. AIDS incubation times are a typical example of such data, because in practice AIDS diagnosis is often restricted to a certain interval of calendar time. Another example of double truncation is found in the analysis of age at diagnosis of a disease, e.g. childhood cancer. In these settings with doubly truncated data, the lifetime distribution may be estimated through the Efron-Petrosian NPMLE or on the basis of some parametric model for the truncation times, which leads to the Moreira-de Uña-Álvarez semiparametric maximum-likelihood estimator (SPMLE). The SPMLE outperforms the NPMLE when the parametric information is correct; however, it may be largely biased when the parametric family is misspecified. In this work we propose goodness-of-fit tests for this semiparametric model. Several testing methods are introduced and compared through Monte-Carlo simulations. The main conclusion is that the proposed methods respect the significance level well, while being able to detect misspecifications in the parametric model as the sample size increases. Real data illustrations are provided.

C29.2
A Framework for Characterizing Missingness at Random in a Generalized Shared-parameter Joint Modeling Framework for Longitudinal and Time-to-Event Data
Edmund Njeru Njagi1, Geert Molenberghs2, Geert Verbeke3, Mike G. Kenward4, Dimitris Rizopoulos5
1I-BioStat, Universiteit Hasselt, B-3590 Diepenbeek, Belgium, 2I-BioStat, Universiteit Hasselt and I-BioStat, Katholieke Universiteit Leuven, B-3590 Diepenbeek and B-3000 Leuven, Belgium, 3I-BioStat, Katholieke Universiteit Leuven and I-BioStat, Universiteit Hasselt, B-3000 Leuven and B-3590 Diepenbeek, Belgium, 4Department of Medical Statistics, London School of Hygiene and Tropical Medicine, London WC1E7HT, UK, 5Department of Biostatistics, Erasmus University Medical Center, NL-3000 CA Rotterdam, The Netherlands
Models for the analysis of incomplete data are often classified into selection, pattern-mixture, and shared-parameter frameworks, and, in each of these, a taxonomy that characterizes the missingness mechanism has been developed, leading to the classes of missing-value mechanisms: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR). Joint modeling of longitudinal and time-to-event data has largely focussed on a shared-parameter framework, in which a sub-model for the time-to-event process is linked to one for the longitudinal process through a common latent structure, conditional on which independence is assumed. Though inference under this framework has customarily made an assumption which mimics MAR, the so-called assumption of non-informativeness of censoring and the visiting process, nevertheless, the current framework defies an elegant characterization of MAR. An unambiguous taxonomy has not been developed. Such a taxonomy, analogous to the missing data context, would classify these models based on the assumptions that they make about the coarsening mechanism, where coarsening combines missingness in the longitudinal outcome and censoring in the time-to-event. The absence of such a taxonomy persists despite the fact that, as illustrated here, the problem of joint modeling of longitudinal and time-to-event data is conceptually similar to that of missing data. An extended shared-parameter joint model is proposed. Under this framework and in contrast to the conventional one, a characterization of MAR is proposed. An intuitively interpretable and hence appealing sub-class is derived. The developments are illustrated using data collected from a study on liver cirrhosis.

C29.3
Likelihood-based estimation for an effect of a time-varying covariate
Ikuko Funatogawa1, Takashi Funatogawa2
1Teikyo University Graduate School of Public Health, Tokyo, Japan, 2Chugai Pharmaceutical Co., Ltd., Tokyo, Japan
In some clinical studies, the therapeutic agent is administered repeatedly, and doses are adjusted in each patient, based on repeatedly measured continuous responses, to maintain the response levels in a target range. Under response-dependent dose-modification, it was unknown whether the maximum likelihood estimators for the dose-response relationship are consistent or not. Estimation methods for an effect of a time-varying treatment have been studied in the area of causal modeling in epidemiology. In this area, mixed effects models have not been used for the measurement process and non-likelihood based estimation methods have been used. In this study, we show that the maximum likelihood estimators of mixed effects models with dose as a time-dependent covariate are consistent when the selection of the dose depends on the observed, but not on the unobserved, responses. By simulation studies, we confirm the property of the maximum likelihood estimators in an autoregressive linear mixed effects model (Funatogawa I et al. Statistics in Medicine 2007, 2008, 2012; Funatogawa T et al. Statistics in Medicine 2008). This model is an extension of transition models and linear mixed effects models and it can express profiles approaching asymptotes in each subject. We also confirm the property of the maximum likelihood estimators in a linear mixed effects model under the assumption that all responses are measured at steady state.

C29.4
Cost-Sensitive Maximum Likelihood Classification: Finding Optimal Biomarker Combinations in Screening and Diagnosis
Bruce Tabor, Michael Buckley
CSIRO Mathematics Informatics and Statistics, Sydney, NSW, Australia
Building a screening or diagnostic test involves estimating an optimal "decision boundary" to minimise overall misclassification cost, usually with unequal false positive and false negative costs. When a single biomarker provides inadequate discrimination, statistical algorithms such as logistic regression may be used to estimate a linear combination of variables. Systematically adjusting the prior class probabilities (the intercept) will trade off sensitivity for specificity, forming an ROC curve. Minimum cost in screening and diagnosis is often near the extremes of one class (high specificity or sensitivity), implicitly assuming model validity under such extrapolation, but suboptimal otherwise.
We present an approach that achieves near-optimal multi-marker cost-sensitive classification among linear classifiers. The method uses binomial regression where the link function is chosen so that the implied loss functions (the "deviance" of each class with respect to the linear predictor) are sigmoid approximations of the optimal Bayes loss functions with appropriate cost weightings. These sigmoid loss functions belong to a single-parameter family that includes the logistic and exponential losses, a property that is exploited to solve this non-convex problem. A serendipitous result is a form of logistic classifier that is robust to outliers, a general feature of this approach.
The method is illustrated using simulated multivariate normal distributions with
unequal variances, where the optimal linear classifier is accessible via analytic
methods, and also with a real dataset. Near-optimal cost-sensitive
classification can have a profound effect on both variable selection and
weighting in the predicted classification boundary, notably when the
assumptions underlying traditional methods are violated.
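For orientation only, the following sketch fits a linear classifier by minimising a cost-weighted version of the ordinary logistic loss with scipy. It is not the sigmoid Bayes-loss approximation described in the abstract; it merely shows how unequal false positive and false negative costs shift a fitted decision boundary. The simulated data and cost values are invented.

```python
import numpy as np
from scipy.optimize import minimize

def fit_cost_weighted_logistic(X, y, cost_fn=1.0, cost_fp=1.0):
    """Linear classifier minimising a cost-weighted logistic loss.
    y is coded 0/1; cost_fn weights errors on class 1, cost_fp on class 0."""
    Xd = np.column_stack([np.ones(len(y)), X])        # add an intercept
    w = np.where(y == 1, cost_fn, cost_fp)
    s = 2.0 * y - 1.0                                  # recode labels to -1/+1

    def loss(beta):
        margin = s * (Xd @ beta)
        return np.sum(w * np.logaddexp(0.0, -margin))  # weighted logistic loss

    return minimize(loss, x0=np.zeros(Xd.shape[1]), method="BFGS").x

rng = np.random.default_rng(2)
n = 400
X = rng.normal(size=(n, 2))
y = (X @ np.array([1.5, 1.0]) + rng.normal(size=n) > 0).astype(float)

# Penalising false negatives ten times more mainly shifts the intercept,
# i.e. the operating point on the ROC curve.
print(fit_cost_weighted_logistic(X, y, cost_fn=10.0, cost_fp=1.0))
```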
C29.5
Designing a preliminary adaptive study to develop biomarker combinations for trial
Toby Prevost1, Jack Bowden2
1King's College London, London, UK, 2MRC Biostatistics Unit, Cambridge, UK
Effective biomarkers carry the potential to stratify treatments for patients, thereby providing personalised medicine. Prior to testing in late phase trials, biomarkers are typically identified through preliminary studies to inform their predictive potential and required trial parameters. However, biomarker studies can be prohibitively small for purpose, partly due to cost.
Here we considered the design of a preliminary study of potentially predictive biomarkers in patients treated for psoriasis. The objective was to provide a design allowing multiple biomarkers and their combination to be assessed to inform any subsequent trial. We compared a non-adaptive prospective design with one having two stages of patient recruitment, where poorly performing biomarkers can be discontinued after the first stage.
Power was assessed through simulation in R using Fisher's method, involving the product of stage p-values. Effect size was defined in terms of the correlation between treatment response over time and a biomarker. Under a non-adaptive design, an R-squared of 20% could be detected with 90% power and a 5% significance level, with all 17 expensive biomarkers measured in 49 patients. The adaptive design offered an interesting alternative, employing p>0.3 to discontinue biomarkers a quarter of the way through recruitment, requiring 24+72=96 patients. This offered more patients to develop a combination from an enriched biomarker set, and improved current practice of overly small studies when developing combinations in this clinical area.
The proportion of biomarkers expected to be discontinued, conditional on underlying effect size, was presented to the study team, who accepted and implemented the design.

C30 Statistical design and methodology II

C30.1
Predicting Patient Recruitment in Multi-Centre Clinical Trials
Andisheh Bakshi2, Stephen Senn1, Alan Phillips3
1CRP-Sante, Strassen, Luxembourg, 2University of Glasgow, Glasgow, UK, 3ICON plc, Leopardstown, Ireland
In recent work Anisimov has made impressive progress in modelling patient recruitment in multi-centre clinical trials. He assumes that the distribution of the number of patients in a given centre in a completed trial follows a Poisson distribution. In a second stage the unknown parameter is assumed to come from a gamma distribution. As is well known, the overall gamma-Poisson mixture is a negative binomial.
For forecasting time to completion, however, it is not the frequency domain that is important but the time domain, and Anisimov has also illustrated clearly the links between the two and the way in which a negative binomial in one corresponds to a type VI Pearson distribution in the other. Anisimov has also shown how one may use this to forecast time to completion in a trial in progress.
However, it is not just necessary to forecast time to completion for trials in progress but also for trials that have yet to start. This suggests that what would be useful would be to add a higher level of the hierarchy: over all trials.
We present one possible approach to doing this using an orthogonal parameterization of the gamma with parameters on the real line. The two parameters are modeled separately. We illustrate this approach using data from 18 trials. We make suggestions as to how this method could be applied in practice and conclude that the key to successful implementation rests with careful analysis of data from a reasonable number of previous clinical trials.

C30.2
Evaluation and validation of social and psychological markers: identification and assumptions for instrumental variables estimation
Hanhua Liu, Richard Emsley, Graham Dunn
Health Sciences Research Group, The University of Manchester, Manchester, UK
Complex intervention trials involve evaluating social and psychological markers as potential prognostic factors, moderators, mediators or candidate surrogate outcomes. We focus on using such markers to assess treatment-effect mediation in the presence of measurement errors, hidden confounding (selection effects) between post-randomisation markers and outcomes, and missing data.
Instrumental variable methods provide unbiased estimates at the expense of precision, but model identification using more-informative and more-realistic models requires potentially invalid assumptions. In particular, aside from imposing parametric structure, they require: moderation of treatment effects on the markers by covariates but no moderation of the direct effect of treatment on outcome, equivalent to using covariate by treatment interactions as instrumental variables; no moderation of effects of marker on outcome by covariates; and no marker by treatment interactions on the direct effect of treatment on outcome. Further, considering the unmeasured confounder as a post-randomisation rather than baseline variable, we consider the effect on estimation procedures if the confounder is directly influenced by treatment.
We perform Monte Carlo simulation studies under a variety of scenarios involving selection effects, measurement error and imperfect prediction of markers. We weaken these identifying assumptions in turn, and allow treatment to predict the post-randomisation confounder in two ways: a mean change between the treatment groups, and by independently introducing heteroskedasticity. Using these results we provide recommendations concerning informative designs and corresponding sample size requirements for marker evaluation in complex intervention trials. We illustrate how the methods can be implemented using a randomised trial of cognitive behaviour therapy in psychosis.

C30.3
Sample size calculation for cluster randomized stepped wedge designs
Esther de Hoop, Willem Woertman, Steven Teerenstra
Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands
In a stepped wedge design, all clusters start in the control condition after which they switch to the intervention at consecutive time points. Eventually, all clusters will have switched to the intervention. This design is especially useful when the intervention is thought to do more good than harm.
The stepped wedge design is increasingly being used in cluster randomized trials. However, there is not much information available about the design and analysis strategies for these kinds of trials. Approaches to sample size and power calculations have been provided, but a simple sample size formula is lacking. Therefore, we will present a sample size approach using a design effect (sample size correction factor).
We derived a design effect that corrects for clustering as well as the stepped
wedge design. Furthermore, we compared the required sample size for the
stepped wedge design with a parallel group design. For the design effect of the
stepped wedge design, choices of the number of baseline measurements, the
number of measurements between switches and the number of steps have to
be made. Furthermore, estimates of the cluster size and intracluster correlation
are needed.
Increasing the number of measurements and steps decreases the required
sample size. However, increasing the cluster size increases the total required
sample size. In comparison to a parallel group design, the stepped wedge
design is always more efficient in terms of sample size.
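For context, the sketch below applies the familiar design effect of a parallel cluster-randomised trial, 1 + (m - 1) * ICC, and leaves a placeholder for a further multiplicative correction such as the stepped wedge factor derived in this talk, which is not reproduced here. All numbers are illustrative.

```python
import math

def design_effect_parallel(cluster_size, icc):
    """Standard design effect for a parallel cluster-randomised trial."""
    return 1.0 + (cluster_size - 1.0) * icc

def clusters_needed(n_individual, cluster_size, icc, extra_correction=1.0):
    """Inflate an individually randomised sample size by a design effect.
    `extra_correction` stands in for an additional factor such as the
    stepped wedge correction presented in the talk (not given here)."""
    deff = design_effect_parallel(cluster_size, icc) * extra_correction
    return math.ceil(n_individual * deff / cluster_size)

# Example: 300 individuals needed under individual randomisation,
# clusters of 20 and an intracluster correlation of 0.05.
print(design_effect_parallel(20, 0.05))   # 1.95
print(clusters_needed(300, 20, 0.05))     # 30 clusters
```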
C30.4
Lifestyle, socioeconomic factors and consumption of dairy foods analysed with structural equation modelling
Are Hugo Pripp
Unit of Biostatistics and Epidemiology, Oslo University Hospital, Oslo, Norway
Dairy foods are important in the Nordic diet and therefore relevant from a health perspective. Observed relationships between food preference and health could be due to nutritional properties and/or strongly confounded by other factors. Large cross-sectional studies such as The Oslo Health Study (HUBRO) provide important epidemiological data to study the relationship between lifestyle, socio-demographic factors, food preference and health. However, the presence of missing data, invalidated items or variables that do not specifically describe the factor of interest might limit the use of population-based cross-sectional studies for this purpose. Structural equation modelling as a statistical technique may improve both the handling of measurement uncertainty and the interpretation of items, and model complex relationships. It combines elements from linear regression, path analysis and factor analysis and is extensively used in sociological and psychological research. Especially, it has gained interest as a method to test and estimate causal relations using statistical data and causal assumptions. Structural equation modelling was explored on selected variables describing lifestyle, socio-demographic factors and health in relation to preference of dairy foods using data from The Oslo Health Study. Physical activity and an overall healthy lifestyle were usually related to increased consumption of milk products, but the association with economic income was not so clear. The gained statistical information using structural equation modelling compared to classical generalized linear models is assessed on data from this large cross-sectional study.

C30.5
Bayesian multilevel factor analytic model for assessing the relationship between nurse-reported adverse events and patient safety
Luwis Diya1, Baoyue Li3, Koen Van den Heede2, Walter Sermeus2, Emmanuel Lesaffre2,3
1Karolinska Institutet, Stockholm, Sweden, 2Katholieke Universiteit Leuven, Leuven, Belgium, 3Erasmus MC, Rotterdam, The Netherlands
Aims: Adverse Events (AEs) are considered as proxies of patient safety and are often analyzed separately. However the totality or multivariate vector of AEs is a better proxy. Data on AEs are usually multilevel in structure. The aim of this study is to explore the relationship between nurse-reported AEs using multilevel factor analysis.
Methods: Data from the Belgian chapter of the Europe Nurse Forecasting nurse survey were used to establish the relationship between 6 AEs and patient safety. As there was no a priori factor structure suggested, the data set was split into a learning and a validation data set. We used the learning data set to explore a plausible factor structure using a frequentist Multilevel Exploratory Factor Analysis (MEFA) and we validated this factor structure by using a Bayesian Multilevel Confirmatory Factor Analysis (MCFA) on the validation data set. In the MCFA we used the Data Augmentation approach since the multivariate vector of responses consists of binary responses. Model assessment was carried out on the learning data set to avoid double usage of the data which might lead to conservative goodness of fit inferences. The Bayesian approach is more flexible compared to the frequentist approach. We also proposed the use of Multivariate Analysis of Variance statistics as discrepancy measures to assess the need for multilevel modeling.
Conclusion: Neglecting the multilevel structure of the data in statistical analysis can lead to invalid inferences. In factor analysis the interpretation and the number of the factors are also level dependent.

C31 Longitudinal data

C31.1
Fast linear mixed model computations for GWAS with longitudinal data
Karolina Sikorska1, Fernando Rivadeneira1, Patrick Groenen2, Paul Eilers1, Emmanuel Lesaffre1,3
1Erasmus Medical Center, Rotterdam, The Netherlands, 2Erasmus University, Rotterdam, The Netherlands, 3L-Biostat, Catholic University of Leuven, Leuven, Belgium
Recent genome-wide association studies are directed to identify single nucleotide polymorphisms (SNPs) associated with longitudinally measured traits. In our motivating data set, the bone mineral density (BMD) of more than 5000 elderly individuals was measured at 4 occasions over a period of 12 years. We are interested in SNPs that influence the change of BMD over time. This could be done by fitting a linear mixed model with covariates age, gender, etc., but also including each of the SNPs at a time. However, fitting 2.5 million linear mixed models (1 model per SNP) on a single desktop would take more than a month.
Dealing with such prohibitively large computational time, it is desirable to develop a fast technique. We explored a variety of fast computational procedures. The best approximating procedure is based on a conditional two-step (CTS) approach. This approach approximates the P-value for the SNP-time interaction term from the linear mixed model analysis. Our method is based on the concept of a conditional linear mixed model proposed by Verbeke et al. (2001). A simulation study shows that this method has the highest accuracy of all considered approximations. Applying the CTS approach reduced the computational time needed to analyze the BMD data to 5 hours.
We are now exploring the robustness of the CTS against different simulation parameters such as sample size, number of measurements, variance-covariance parameters, etc. We are also exploring the performance of the CTS in case of more complicated residual error structures (autocorrelation, heteroscedasticity).

C31.2
A Bayesian Model for Multivariate Human Growth Data
Sten Willemsen1, Regine Steegers-Theunissen2, Paul Eilers1, Emmanuel Lesaffre1,3
1Department of Biostatistics, Erasmus Medical Centre, Rotterdam, The Netherlands, 2Division of Obstetrics and Prenatal Medicine, Department of Obstetrics and Gynaecology, Erasmus Medical Centre, Rotterdam, The Netherlands, 3L-Biostat, Catholic University of Leuven, Leuven, Belgium
Having good models of human growth during gestation and childhood is important for distinguishing between normal and abnormal growth. In the SuperImposition by Translation And Rotation (SITAR) approach, individual growth curves are shifted horizontally and vertically and are stretched so they overlap and form an 'average' growth curve. This 'average' profile is estimated by means of a natural cubic spline.
In various growth studies different attributes are measured simultaneously and
repeatedly (for example both height and weight). Usually, they are analyzed
one outcome at a time. In doing so, the relationship between different types of
measurements is ignored. However, it might be useful to model the different
growth variables jointly.
We have extended the SITAR approach to the multivariate case allowing us to
consider the relation between the various measurement types. The model is
estimated using an MCMC algorithm. We will apply the method to two sets of
growth data. The first is the Jimma Data set which contains data on the height,
weight and arm circumference of 495 children in the Jimma region of Ethiopia.
The second data set is from the PREDICT study containing crown rump length,
embryonic volume and curvature of the embryo measured in early pregnancy.
Using our methodology we relate growth to several outcomes (later in life) and
determinants. We also use our model to develop multivariate reference curves.
As an extension we show how our Bayesian methodology allows us to easily
vary some of the underlying assumptions of the model thereby performing a
sensitivity analysis.
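As a very reduced caricature of the curve-superimposition idea, the sketch below fits only a subject-specific vertical shift around a common curve (a cubic polynomial standing in for the natural cubic spline) by alternating least squares. The multivariate extension and the MCMC estimation of the abstract are not attempted, and the simulated data are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated growth data: 30 subjects, noisy observations of a common curve
# shifted vertically by a subject-specific random amount.
n_subj, n_obs = 30, 8
t = rng.uniform(0, 10, size=(n_subj, n_obs))
true_curve = lambda x: 80 + 12 * np.log1p(x)
shifts = rng.normal(0, 5, size=n_subj)
y = true_curve(t) + shifts[:, None] + rng.normal(0, 1, size=t.shape)

# Alternating least squares: (1) fit the common curve to de-shifted data,
# (2) update each subject's vertical shift; repeat.
a = np.zeros(n_subj)                      # subject-specific vertical shifts
for _ in range(20):
    coefs = np.polyfit(t.ravel(), (y - a[:, None]).ravel(), deg=3)
    fitted = np.polyval(coefs, t)
    a = (y - fitted).mean(axis=1)
    a -= a.mean()                         # identifiability: shifts sum to zero

print(np.round(np.corrcoef(a, shifts - shifts.mean())[0, 1], 3))  # close to 1
```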
C31.3
Cognitive lifestyle and cognitive decline: the characteristics of two longitudinal
models
Riccardo Marioni1, Cécile Proust-Lima2, Helene Amieva3, Carol Brayne1, Fiona
Matthews4, Jean-Francois Dartigues3, Hélène Jacqmin-Gadda2
1University of Cambridge, Cambridge, UK, 2INSERM U897, Univ. Bordeaux
ISPED, Bordeaux, France, 3INSERM U897, Univ. Bordeaux ISPED,
Department of Clinical Neurosciences CHU Pellegrin, Bordeaux, France, 4MRC
Biostatistics Unit, Cambridge, UK
It is rare for longitudinal analyses of cognitive decline in the elderly to account
for death, measurement error of the cognitive phenotype, unequal sensitivity to
cognitive change, cognitive recovery, and differing covariates effects at
different stages of decline. This study applies two models (a multi-state model
and a joint latent class mixed model) that account for these issues to a large,
population-based cohort, Paquid, to model more realistically the relationship
between cognitive lifestyle and cognitive decline. Cognition was assessed over
a 20 year period using the Mini Mental State Examination. Three cognitive
lifestyle variables were assessed: education, mid-life occupation, and late-life
social engagement.
Both approaches found increased education to be associated with a more
favourable cognitive trajectory over time. Late-life social engagement, a
potentially modifiable factor, was strongly associated with mortality irrespective
of cognitive trajectory. Interpretation of parameters from the multi-state model
is easy and this approach explicitly models cognitive recovery. However, it
requires a priori definition of clinically meaningful cognitive states. By contrast,
the mixed model approach enables study of minor changes by using the
quantitative cognitive measure and handles a heterogeneous population.
C31.4
A statistical model to explore carcinogenic processes by transcriptomics in
prospective studies
Sandra Plancade1, Gregory Nuel2, Eiliv Lund1
1University of Tromsø, Tromsø, Norway, 2University Paris Descartes, Paris, France
The development of cohorts including genomic and transcriptomic (mRNA or gene expression) data, as well as lifestyle information, opens new perspectives for the study of carcinogenic dynamics. In particular these designs enable exploration of the time-dependent distributions of the gene expression conditionally on exposures. Further on, they give the opportunity to connect epidemiological studies with biological models of carcinogenesis, including the multistage model.
Most epidemiological prospective studies collect lifestyle factors and/or genomic data (SNPs), and aim at the estimation of relative risks and prediction sets. In such contexts, survival analysis models - in particular the Cox model - which parametrize the failure time given the covariates, have proven to be
efficient, and have been extended to include time-dependent covariates.
Nevertheless, their implementation on gene expression covariates, whose
distribution might be affected by the carcinogenic process, does not enable a
direct biological interpretation, which makes the incorporation of complex
models of carcinogenesis difficult. Alternatively, we consider a direct modelling
of the gene expression as a function of time to diagnosis conditionally to
exposures, which allows more flexibility to build complex statistical models
incorporating biological assumptions.
We propose a latent variable model, based on the multistage model, which
might simultaneously estimate the last-stage length distribution and detect the
genes whose expression changes over time. The parameters are estimated by
a Stochastic EM algorithm, which shows good results on simulated data. This
model constitutes a structure on which correlations between exposures,
genomic and transcriptomic data could be integrated with the carcinogenic
process.
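The latent variable model itself is not specified in the abstract, so the sketch below only illustrates the estimation machinery it mentions, a stochastic EM algorithm, on a deliberately simple two-component Gaussian mixture standing in for a "gene changed / gene unchanged" latent class. All parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data: mixture of a null component and a shifted component.
n = 1000
z_true = rng.uniform(size=n) < 0.3
x = np.where(z_true, rng.normal(2.0, 1.0, n), rng.normal(0.0, 1.0, n))

def normal_pdf(v, mu, sd):
    return np.exp(-0.5 * ((v - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Stochastic EM: impute the latent labels by simulation (S-step), then update
# the parameters from the completed data (M-step).
pi, mu0, mu1, sd = 0.5, -1.0, 1.0, 1.0
for _ in range(200):
    p1 = pi * normal_pdf(x, mu1, sd)
    p0 = (1 - pi) * normal_pdf(x, mu0, sd)
    z = rng.uniform(size=n) < p1 / (p0 + p1)      # S-step: sample latent labels
    pi = z.mean()                                 # M-step: complete-data updates
    mu1, mu0 = x[z].mean(), x[~z].mean()
    sd = np.sqrt(np.mean((x - np.where(z, mu1, mu0)) ** 2))

print(round(pi, 2), round(mu0, 2), round(mu1, 2), round(sd, 2))
```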
C31.5
Analysis of Change Over Time When Measurements are Obtained Only After
an Unknown Delay
Andrew Copas, David Dolling, David Dunn
MRC Clinical Trials Unit, London, UK
In longitudinal studies repeated measurements of a biomarker, Y, may be
obtained from participants to assess changes in health, in relation to a
participant characteristic X. In HIV disease interest may be in changes after
infection. However patients are only observed after an unknown ‘delay' from
infection to diagnosis, possibly related to X, but we initially assume
conditionally independent of Y values. Right censoring may also arise from
starting treatment if interest is in health without treatment. Some authors have
proposed the use of external data in combination with biomarker values at
diagnosis to deduce the date of infection, but this requires very strong
assumptions. When a linear mixed model for Y can be assumed in terms of X
and time from infection, T, other authors have naïvely ignored the delays and
fitted models based on X and time from diagnosis, T*. Where delay is related to
X then such naïve models can be correctly specified for Y and the fixed effect
parameters relating to T* and XT* coincide with those for T and XT. However
additional fixed and random effects relating to X are induced by the delay.
Correctly incorporating these effects is important to provide robustness against
right censoring which is often ‘missing-at-random'. Simulations based on real
HIV datasets and changes in CD4 count illustrate this point. We conclude that
under assumptions the naïve approach ignoring delay can be used for
inference concerning change from infection but a flexible fixed and random
effects structure should be specified.
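The simulation below illustrates the stated point in a simplified form (ordinary least squares, no random effects): when the delay from infection to diagnosis depends on X, the slopes for T* and X·T* still match those for T and X·T, but an extra X effect is induced. The parameter values and the delay mechanism are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n_subj, n_visits = 2000, 4

# True model on the time-since-infection scale T:  Y = 20 - 1.5*T - 0.8*X*T + e
X = rng.integers(0, 2, n_subj).astype(float)
delay = 1.0 + 2.0 * X + rng.exponential(1.0, n_subj)   # delay depends on X
b0, bT, bXT = 20.0, -1.5, -0.8

rows = []
for i in range(n_subj):
    t_star = np.arange(n_visits, dtype=float)           # time since diagnosis
    t = t_star + delay[i]                                # time since infection
    y = b0 + bT * t + bXT * X[i] * t + rng.normal(0, 1, n_visits)
    for ts, yy in zip(t_star, y):
        rows.append((1.0, X[i], ts, X[i] * ts, yy))

d = np.array(rows)
beta = np.linalg.lstsq(d[:, :4], d[:, 4], rcond=None)[0]
print(np.round(beta, 2))   # slopes for T* and X*T* stay near -1.5 and -0.8,
                           # but a spurious X main effect is induced by the delay
```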
C32 Survival analysis III
C32.1
High-dimensional survival studies - comparison of approaches to assess time-varying effects
Anika Buchholz1, Willi Sauerbrei1, Harald Binder2
1Institute of Medical Biometry and Medical Informatics, University Medical Center Freiburg, Freiburg, Germany, 2Institute of Medical Biostatistics, Epidemiology and Informatics, University Medical Center Johannes Gutenberg University Mainz, Mainz, Germany
Survival studies with microarray data often focus on identifying a set of genes with significant influence on a time-to-event outcome for building a gene
expression signature (i.e. predictor). Most of these predictors are usually
derived using the Cox proportional hazards (PH) model assuming that effects
are constant over time. However, there might be time-varying effects, i.e.
violation of the PH assumption, for the predictor and some of the genes.
Ignoring this may lead to false conclusions about their influence. Hence, it is
important to investigate for time-varying effects.
Recently we have compared several strategies for identifying, selecting and
modelling time-varying effects in low-dimensional settings [1]. Some of them
can also be applied to high-dimensional data and will be illustrated and
compared using publicly available gene expression data with time-to-event
outcome from cancer patients, for which predictors have been derived [2,3]. In
addition, we will investigate whether the time-varying effect of a predictor is
mainly caused by some of its components.
References:
[1] A. Buchholz, W. Sauerbrei. Comparison of procedures to assess non-linear and time-varying effects in multivariable models for survival data. Biometrical Journal, 53:308-331, 2011.
[2] C. Desmedt, F. Piette, S. Loi, et al. Strong time dependence of the 76-gene prognostic signature for node-negative breast cancer patients in the TRANSBIG multicenter independent validation series. Clinical Cancer Research, 13:3207-3214, 2007.
[3] H. Binder and M. Schumacher. Incorporating pathway information into boosting estimation of high-dimensional risk prediction models. BMC Bioinformatics, 10:18, 2009.
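To make the notion of a time-varying (non-proportional-hazards) effect concrete, the self-contained sketch below maximises a one-covariate Cox partial likelihood (distinct event times assumed, no ties correction) on the full follow-up and on the first year only; the difference between the two estimates exposes the time-varying effect. This crude landmark-type check is not one of the strategies compared in the talk, and the simulated data are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cox_beta(time, event, x):
    """Maximum partial likelihood estimate of a single log hazard ratio."""
    order = np.argsort(time)
    time, event, x = time[order], event[order], x[order]

    def neg_loglik(beta):
        ll = 0.0
        for i in np.where(event == 1)[0]:
            at_risk = x[i:]                  # sorted, so risk set = indices >= i
            ll += beta * x[i] - np.log(np.sum(np.exp(beta * at_risk)))
        return -ll

    return minimize_scalar(neg_loglik, bounds=(-5, 5), method="bounded").x

rng = np.random.default_rng(6)
n = 600
x = rng.integers(0, 2, n).astype(float)
# Hazard ratio exp(1.2) for x = 1 during the first year, no effect afterwards.
early = rng.exponential(1.0 / np.exp(1.2 * x))
t = np.where(early < 1.0, early, 1.0 + rng.exponential(1.0, n))
e = np.ones(n)

print(round(cox_beta(t, e, x), 2))                                   # attenuated average
print(round(cox_beta(np.minimum(t, 1.0), (t < 1.0).astype(float), x), 2))  # first year only
```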
C32.2
Frailty modeling of age-incidence curves of osteosarcoma and Ewing sarcoma
among individuals younger than 40 years
Morten Valberg1, Tom Grotmol2, Steinar Tretli2, Marit B. Veierød2, Susan S.
Devesa3, Odd O. Aalen1
1Department of Biostatistics, Institute of Basic Medical Sciences, University of
Oslo, Oslo, Norway, 2Cancer Registry of Norway, Institute of Population-Based
Cancer Research, Oslo, Norway, 3Division of Cancer Epidemiology and
Genetics, National Cancer Institute, NIH, Bethesda, MD, USA
The Armitage-Doll model with random frailty can fail to describe incidence rates
of rare cancers influenced by an accelerated biological mechanism at some,
possibly short, period of life. We propose a new model to account for this
influence. Osteosarcoma and Ewing sarcoma are primary bone cancers with
characteristic age-incidence patterns that peak in adolescence. We analyze
SEER incidence data for whites younger than 40 years diagnosed during the
period 1975-2005, with an Armitage-Doll model with compound Poisson frailty.
A new model treating the adolescent growth spurt as the accelerated
mechanism affecting cancer development is a significant improvement over
that model. Our results support existing evidence of an underlying susceptibility
for the two cancers among a very small proportion of the population. In
addition, the modeling results suggest that susceptible individuals with a rapid
growth spurt acquire the cancers sooner than they otherwise would have, if
their growth had been slower. The new model is suitable for modeling
incidence rates of rare diseases influenced by an accelerated biological
mechanism.
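The simulation below conveys the flavour of a compound Poisson frailty acting on an Armitage-Doll-type hazard: a Poisson number of gamma-distributed susceptibility jumps leaves most individuals with zero frailty, and the marginal age-specific incidence rises, peaks and then falls as the small susceptible pool is depleted. The parameter values are arbitrary and the growth-spurt acceleration of the new model is not included.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Compound Poisson frailty: Z = sum of N gamma jumps, N ~ Poisson(nu), P(N=0) > 0.
nu, jump_shape, jump_scale = 0.02, 2.0, 1.0
N = rng.poisson(nu, n)
Z = np.array([rng.gamma(jump_shape * k, jump_scale) if k > 0 else 0.0 for k in N])

# Armitage-Doll-type conditional hazard: lambda(t | Z) = Z * c * t**(k - 1),
# so the conditional cumulative hazard is Z * c * t**k / k.
k, c = 4.0, 1e-5
u = rng.exponential(1.0, n)
t_event = np.where(Z > 0, (k * u / (np.maximum(Z, 1e-12) * c)) ** (1 / k), np.inf)

# Empirical age-specific incidence per 100,000 person-years in 5-year bands.
for lo in range(0, 40, 5):
    hi = lo + 5
    at_risk = (t_event >= lo).sum()
    events = ((t_event >= lo) & (t_event < hi)).sum()
    print(f"{lo:2d}-{hi:2d} y: {1e5 * events / (at_risk * (hi - lo)):7.1f}")
```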
C32.3
Multivariate frailty models for two types of recurrent events with a dependent terminal event: Application to breast cancer data
Yassin Mazroui1,2, Audrey Mauguen1,2, Simone Mathoulin-Pelissier1,3,4, Gaëtan Macgrogan3, Véronique Brouste3, Virginie Rondeau1,2
1Université Bordeaux Segalen, Bordeaux, F-33000, France, 2INSERM, ISPED, Centre INSERM U-897-Epidémiologie-Biostatistique, Bordeaux, F-33000, France, 3Unité de recherche et d'épidémiologie cliniques - Institut Bergonié, Bordeaux, F-33000, France, 4INSERM CIC-EC7, Bordeaux, F-33000, France
For many diseases, individuals may experience several and related types of
relapses or recurrent events during the life course of the disease. For instance,
a breast cancer patient could have locoregional and metastatic relapses. In
addition, follow-up may be interrupted for several reasons, including the end of
a study or patients being lost to follow-up, which are non-informative censoring
events. Death could also stop follow-up, hence, it is considered as a dependent
terminal event.
Frailty models (Vaupel et al. 1979), which are extensions of proportional
hazards survival models for recurrent events, aim to account for potential
heterogeneity caused by unmeasured prognostic factors and inter-recurrence
dependency through a random effect. Rondeau et al. (2007) and Liu et al.
(2004) showed that death process has to be included in a joint modelling
framework with the recurrent event process to avoid biases on regression
parameters.
We propose a multivariate frailty model with possibly time-dependent
regression coefficients that jointly analyzes two types of recurrent events with a
dependent terminal event. Two estimation methods are proposed: a semi-parametric approach using penalized likelihood estimation where baseline hazard functions are approximated by M-splines, and another one with parametric baseline hazard functions. We derived martingale residuals to
check the goodness-of-fit of the proposed models. We illustrate our proposals
with a real data set on breast cancer. The main objective was to measure
potential dependency between the two types of recurrent events (locoregional
and metastatic) and the terminal event (death) after a breast cancer and to
estimate the influence of prognostic factors.
C32.4
Sparse partial least-squares regression for high-throughput survival data
analysis
Donghwan Lee1, Youngjo Lee1, Woojoo Lee2, Yudi Pawitan2
1Seoul National University, Seoul, Republic of Korea, 2Karolinska Institutet,
Stockholm, Sweden
The partial least-squares (PLS) method has been adapted to Cox's
proportional hazards model for analyzing high-dimensional survival data. But
since latent components constructed in PLS employ all predictors regardless of
their relevance, it is difficult to interpret the result. In this paper, we propose a
new formulation of the sparse PLS (SPLS) procedure for survival data to allow
both sparse variable selection and dimension reduction. We develop a
computing algorithm for SPLS by modifying an iteratively reweighted PLS
algorithm, and illustrate the method with Swedish breast cancer data. Through
the numerical studies we find that our SPLS method generally performs better
than the standard PLS and sparse Cox regression methods in variable
selection and prediction.
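For a continuous outcome, the first sparse PLS direction can be sketched by soft-thresholding the predictor-response covariances, as below. This is only a generic illustration of the sparsity idea; the iteratively reweighted algorithm for censored survival outcomes proposed in the paper is not reproduced, and the threshold and simulated data are invented.

```python
import numpy as np

def sparse_pls_direction(X, y, threshold=0.6):
    """First sparse PLS direction: soft-threshold the predictor-response
    covariances, normalise, and return the direction and the latent score."""
    Xc = X - X.mean(0)
    yc = y - y.mean()
    cov = Xc.T @ yc / len(y)
    w = np.sign(cov) * np.maximum(np.abs(cov) - threshold, 0.0)   # soft threshold
    if not np.any(w):
        raise ValueError("threshold too large: all loadings are zero")
    w /= np.linalg.norm(w)
    return w, Xc @ w

rng = np.random.default_rng(8)
n, p = 100, 200
X = rng.normal(size=(n, p))
y = X[:, :5].sum(axis=1) + rng.normal(size=n)     # only 5 informative predictors

w, score = sparse_pls_direction(X, y)
print(np.nonzero(w)[0])    # predictors retained in the direction (mostly 0-4)
```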
C32.5
A Regression Model for the Extra Length of Stay Associated with a Nosocomial Infection
Arthur Allignol1, Martin Schumacher2, Jan Beyersmann1,2
1Freiburg Center for Data Analysis and Modeling, University of Freiburg, Freiburg, Germany, 2Institute of Medical Biometry and Medical Informatics, University Medical Center Freiburg, Freiburg, Germany
The occurrence of a nosocomial infection (NI) constitutes a major complication that can be severe in terms of mortality/morbidity as well as in terms of prolonged length of stay (LoS) in the hospital, which is one of the main drivers of extra costs due to NI. Information on extra LoS is used in cost-benefit studies which weigh the costs of infection control measures like isolation rooms against the costs raised by NI. Estimation of extra LoS is complicated by the
fact that the occurrence of NI is time-dependent. Cox proportional hazards
models including NI as a time-dependent covariate could be used but do not
allow quantification of the number of extra days following an infection. Using the
multistate model framework, Schulgen and Schumacher (1996) devised a way
to quantify extra LoS comparing the mean LoS given current NI status and
averaging this quantity over time. This quantity has foundations in landmarking
(Anderson et al., 1983). It has also been extended to the competing risks
setting in order to distinguish between discharge alive and death (Allignol et al.,
2011).
However, a way of studying the impact of covariates on this extra LoS is still
lacking. We propose to use the pseudo value regression technique (Andersen
et al., 2003). The idea is to use a generalized estimating equation model on the
pseudo-values of the extra LoS. Motivated by a recent study on hospital-acquired infection, we investigate the use of pseudo-values for identifying additional risk factors that influence the extra LoS.
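The sketch below shows the mechanics of pseudo-value regression for a plain mean length of stay rather than the multistate extra-LoS functional of the abstract, and uses ordinary least squares in place of the GEE step. Note that for the simple mean the jackknife pseudo-observations reduce to the raw values; for survival-type functionals they do not. Covariates and parameters are invented.

```python
import numpy as np

def pseudo_observations(values, estimator=np.mean):
    """Jackknife pseudo-observations: n*theta_hat - (n-1)*theta_hat(-i)."""
    n = len(values)
    theta_full = estimator(values)
    loo = np.array([estimator(np.delete(values, i)) for i in range(n)])
    return n * theta_full - (n - 1) * loo

rng = np.random.default_rng(9)
n = 300
age = rng.normal(60, 10, n)
infected = rng.integers(0, 2, n).astype(float)
los = rng.gamma(shape=2.0, scale=3.0 + 2.5 * infected + 0.05 * (age - 60), size=n)

# Pseudo-values of the mean LoS, regressed on covariates (OLS in place of GEE).
pv = pseudo_observations(los)
design = np.column_stack([np.ones(n), infected, age])
coef = np.linalg.lstsq(design, pv, rcond=None)[0]
print(np.round(coef, 2))   # extra mean LoS associated with infection and age
```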
C33 Statistical methodology II
C33.1 (Scientist award winner)
Modelling Overdispersion in Wadley's Problem with a Beta-Poisson Distribution
Kerry Leask1, Linda Haines2
1CAPRISA, Durban, KwaZulu-Natal, South Africa, 2University of Cape Town, Cape Town, Western Cape, South Africa
Wadley's problem frequently emerges in dose-response studies and occurs when the number of organisms surviving exposure to a particular dose of a drug is observed but the number initially treated is unobserved and is estimated from control samples. Data which arise from this problem setting are frequently overdispersed.
Historically, Wadley (1949) modelled the number of organisms initially treated using a Poisson distribution. The resulting distribution for the number of survivors is a Poisson model with parameter proportional to the probability of survival. This model cannot accommodate overdispersion.
As a means of accommodating overdispersion, Anscombe (1949) used a negative binomial model to model the number of organisms initially treated with the drug. The result is a negative binomial model for the number of surviving organisms.
The present study considers an antimalarial drug study conducted by the Medical Research Council in Durban as part of its Malaria National Program. Blood samples were collected from suspected malaria sufferers who reported to clinics in KwaZulu-Natal during April 1989 and March 1990. The samples were treated with varying concentrations of an antimalarial drug and the number of surviving malaria parasites was recorded.
The beta-Poisson model is considered for modelling this data set because the traditional Poisson and negative binomial models provided very poor fits. The model is derived from the Poisson by modelling the probability of survival using a beta distribution. Some properties of the model are explored and its fit is compared with those of the Poisson and negative binomial models.

C33.2
Global testing for complex ordinal data
Ulrich Mansmann, Monika Jelizarow
IBE, LMU, Munich, Germany
Over the past decade, a plethora of methods have been developed to detect global effects in groups of complex (i.e. multivariate and possibly high-dimensional) metric variables such as gene expression or metabolomic data. Less attention, however, has been devoted to the case when the variables of interest are categorical. For the former, the construction of a global test statistic as the sum of univariate test statistics has amply been discussed (Ackermann and Strimmer (2009)). The advantages of these so-called sum statistics lie in their applicability in small sample settings and their straightforward interpretation. Here, inference is usually based on resampling methods to account for possible dependencies between the marginal test statistics.
Motivated by recent work on global tests for multiple endpoints underlying a multivariate discrete distribution (Agresti and Klingenberg, 2005; Klingenberg et al., 2009), the aim of this talk is to provide two-sample sum statistics for testing against marginal inhomogeneity in complex ordinal data. Particular emphasis is put on the discussion of valid inference methods. By means of simulated data we investigate the proposed sum statistics and illustrate some limitations of the popular permutation approach. In case of tree-structured data we show how these sum statistics might be used to reveal significant subsets.
We apply the proposed methodology to International Classification of Functioning, Disability and Health (ICF) data where a lot of ordinal variables are usually collected (World Health Organization, 2001).

C33.3
On-line surveillance of air pollution
Eva Andersson
Occupational and environmental medicine, Sahlgrenska University hospital and Sahlgrenska Academy, Goteborg, Sweden
Air pollution can cause respiratory problems in both children and adults. Nitrogen oxides (NOx) are formed in combustion, and road traffic is the largest source of emissions in the larger urban areas. Sensitive groups include people with previous heart disease, asthma or COPD. Airborne particulate matter (PM) is another much discussed air pollutant. There are many sources: transports, small-scale wood burning, use of studded tires and long-distance transport of air pollutants from other countries. Association with cardio-vascular diseases and mortality has been shown.
It would be beneficial to have a system for early detection of increased levels of air pollution. By continually monitoring e.g. the daily levels, we may detect changes early. The methodology of statistical surveillance is appropriate, in which an alarm system (alarm statistic and alarm limit) is constructed. The alarm system should, ideally, produce few false alarms and quick motivated alarms. Examples of methods are Shewhart, EWMA and CUSUM.
Many surveillance systems are based on the assumption of a process which is independent over time, but many data display autocorrelation. One way of handling serially dependent data is to monitor the residuals of the AR-process.
Air pollution data, measured hourly and daily, are used to develop monitoring systems for NOx and PM. Preliminary results show that the NOx measurements during the first hours of a day can be used to predict days with high levels of NOx. For PM, preliminary results show that the residual approach has high detection ability for signaling days with increased levels of particles.
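The residual-monitoring idea mentioned above can be sketched as follows: fit an AR(1) to an alarm-free training window, then run an EWMA chart on the one-step-ahead residuals. The AR coefficient, smoothing constant, control limit and simulated series are illustrative choices, not those of the monitoring systems presented.

```python
import numpy as np

rng = np.random.default_rng(10)

# Simulated daily pollutant series: AR(1) baseline with a level shift at day 150.
n, phi = 250, 0.6
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(0, 1)
x[150:] += 3.0                                     # sustained increase to detect

# Fit the AR(1) coefficient on a training window, then monitor the residuals
# e_t = x_t - phi_hat * x_{t-1} with an EWMA chart.
train = x[:100]
phi_hat = np.sum(train[1:] * train[:-1]) / np.sum(train[:-1] ** 2)
resid = x[1:] - phi_hat * x[:-1]
sigma = resid[:99].std()

lam, L = 0.2, 3.0
limit = L * sigma * np.sqrt(lam / (2 - lam))       # asymptotic EWMA control limit
ewma = np.zeros(len(resid))
for t, e in enumerate(resid):
    ewma[t] = lam * e + (1 - lam) * (ewma[t - 1] if t > 0 else 0.0)

alarm_days = np.where(ewma > limit)[0] + 1
print(alarm_days[:5])        # the level shift is introduced at day 150
```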
C33.4
An Improved Algorithm for Outbreak Detection in Multiple Surveillance Systems
Angela Noufaily, Doyo Gragn, Paddy Farrington, Paul Garthwaite
The Open University, Milton Keynes, UK
There has been a great interest in the use of statistical surveillance systems
over the last decade, prompted by concerns over bio-terrorism, the emergence
of new pathogens such as SARS and swine flu, and the persistent public
health problems of infectious disease outbreaks, for example the recent E. coli
epidemic in Germany. It is important to detect these outbreaks early in order to
take suitable control measures.
In England & Wales, an automated laboratory-based outbreak detection
system has been in operation since the early 1990s, based on a quasi-Poisson
regression model (Farrington et al 1996).
We propose an improved version of this algorithm for outbreak detection of
infectious diseases in large multiple surveillance systems. For a better
treatment of trend and seasonality, we extend the existing quasi-Poisson
regression model into a 10-level factor. For a more appropriate computation of
the prediction intervals, we propose using negative binomial quantiles rather
than the normal approximation involving scaled Anscombe residuals. Weaker
down-weighting (than in the existing algorithm) of baseline data using the scaled Anscombe
residuals is suggested to reduce the influence of past outbreaks. A new
adaptive scheme for re-weighting based on past exceedances is also
proposed.
Extensive simulations show that the mentioned modelling choices reduce the
high proportion of false alarms without impairing the detection of genuine
outbreaks. Applications to data sets obtained from the United Kingdom's Health
Protection Agency are given.
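In the spirit of the proposed use of negative binomial quantiles, the sketch below derives an exceedance threshold for a weekly count from historical baseline counts via a moment-matched negative binomial. The baseline data, quantile level and dispersion handling are invented and do not reproduce the quasi-Poisson regression, re-weighting or trend/seasonality modelling of the algorithm.

```python
import numpy as np
from scipy.stats import nbinom, poisson

def nb_threshold(baseline_counts, prob=0.995):
    """Outbreak threshold from historical counts via a moment-matched
    negative binomial quantile (Poisson quantile if no overdispersion)."""
    m = np.mean(baseline_counts)
    v = np.var(baseline_counts, ddof=1)
    if v <= m:                       # no overdispersion: fall back to Poisson
        return poisson.ppf(prob, m)
    p = m / v                        # scipy parameterisation: mean = n(1-p)/p
    n = m * p / (1 - p)
    return nbinom.ppf(prob, n, p)

# Hypothetical weekly organism counts from comparable weeks in past years.
baseline = np.array([4, 7, 3, 9, 6, 12, 5, 8, 10, 6])
threshold = nb_threshold(baseline)
current_week = 18
print(threshold, current_week > threshold)    # flag an exceedance
```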
C33.5
The use of symmetric and asymmetric distance measures for high-dimensional tests of inferiority, equivalence and non-inferiority
Siegfried Kropf1, Kai Antweiler1, Ekkehard Glimm2
1Otto von Guericke University Magdeburg, Magdeburg, Germany, 2Novartis Pharma AG, Basel, Switzerland
Tests based on multivariate measures of distance or similarity between sample elements have proven as powerful instrument for small samples from high-dimensional data in many medical and biological applications. In the usual situation of tests for differences, permutation tests can easily be applied as nonparametric versions. Under the assumption of multivariate normal distributions, parametric rotation tests enable the application even in very small samples, where the number of possible permutations would not be sufficient for effective tests. Both permutation and rotation tests are exact tests and perform well with surprisingly small samples. The application will be demonstrated by an example.
New attempts are directed at high-dimensional equivalence tests. In these situations, strict control of equivalence in each component is a very demanding condition for equivalence. With realistic sample sizes it is almost impossible to declare equivalence with such a criterion. Therefore we suggest a multivariate approach based on pairwise distance measures between sample elements as an alternative strategy. However, due to the change in hypotheses, type I error control is more difficult than for tests of differences because the null hypothesis is no longer a single point in the parameter space. We present proposals for appropriate criteria and how to test them based on resampling techniques and show simulation results. Possible applications are comparisons of multi-species microbial communities.
Finally, we consider asymmetric distance measures which are necessary to construct one-sided tests as needed for non-inferiority investigations. Such problems arise in safety analyses with a large number of variables.

C34 Meta-analyses/Combined data sources

C34.1
Improving the error rates of the Begg and Mazumdar test for publication bias in meta-analysis
Miriam Gjerdevik, Ivar Heuch
Department of Mathematics, University of Bergen, Bergen, Norway
The rank correlation test introduced by Begg and Mazumdar (1994) is widely used in meta-analysis to test for publication bias in clinical and epidemiological studies. It correlates the standardized treatment effect and the variance of the treatment effect. However, it can be shown in simulations that the significance levels often deviate considerably from the nominal level. The assumptions for using the rank correlation test are not strictly satisfied. The pairs of observations fail to be independent, but the main cause of the poor significance level is a correlation between standardized effect sizes and sampling variances under the null hypothesis.
We propose alternative rank correlation tests to improve error rates. An unstandardized test directly correlates estimated effect sizes and sampling variances. This test reduces the Type II error rate, unfortunately at the expense of the Type I error rate. Simulations show that the standardized and unstandardized test statistics contain about the same amount of information. In tests for publication bias, it is essential to control the Type II error rate. If the significance level cannot be fixed, the unstandardized test is preferable.
Another test is based on the simulated distribution of the estimated measure of association, conditional on sampling variances. Its significance level equals the nominal level and the Type II error rate is reduced compared to the Begg and Mazumdar test. Although more computer intensive, this test attains the best significance levels.
Begg CB, Mazumdar M. (1994). Operating characteristics of a rank correlation test for publication bias. Biometrics 50, 1088-1102.

C34.2
Dealing with missing binary outcome data in meta-analysis: application to randomized clinical trials in nutrition
Anissa Elfakir, Sebastien Marque
Danone Research, Palaiseau, France
Meta-analysis of randomised clinical trials is considered as the gold standard to demonstrate the efficacy of a health product. The existing literature describes missing data as a critical issue in meta-analysis. Its impact on result validity has been extensively studied and general recommendations have been published. The analysis strategy should be pre-specified; explicit assumptions on missingness mechanism should be made in the specific context of the study and research area; the primary analysis should be based on a method, valid under the most plausible assumption; sensitivity analyses should be planned and the potential influence on the results should be discussed.
Clinical studies in nutrition may require daily reporting of outcomes and on-site visits over several months. Missing data can arise due to premature withdrawals, intermittent missing daily reporting or visit, or other non-specified reasons. Current reporting of meta-analysis in nutrition often inadequately handle the missing data issue. For binary outcomes, "complete/available case" analyses are commonly used, and alternatives are based on simple imputations. The latter method is inefficient and likely misleading in the nutritional setting and the formers underestimate the uncertainty of the estimates. As well, robust methods are rarely used.
In the context of a meta-analysis of clinical trials investigating the effect of a nutritional product on a binary outcome, the most plausible missingness mechanism assumption was discussed. Simulations were performed to examine the relative merits of naïve and robust methods, and define a valid strategy. This strategy was then applied and its results discussed on a concrete example.

C34.3
Multivariate meta-analysis for non-linear and other multi-parameter associations
Antonio Gasparrini, Ben Armstrong, Michael G. Kenward
London School of Hygiene and Tropical Medicine, London, UK
In this contribution we formalize the application of multivariate meta-analysis and meta-regression to synthesize estimates of multi-parameter associations
obtained in different studies. This modelling approach extends the standard
two-stage analysis used to combine results across different sub-groups or
populations, and can be applied in multi-site randomized controlled trials or
observational studies including data from multiple locations. The most
straightforward application is for the meta-analysis of non-linear relationships,
described for example by regression coefficients of splines or other functions,
but the methodology easily generalizes to any setting where complex
associations are described by multiple correlated parameters. The modelling
framework of multivariate meta-analysis is implemented in the package
mvmeta within the statistical environment R. As an illustrative example, we
propose a two-stage analysis for investigating the non-linear exposure-response relationship between temperature and non-accidental mortality using
time series data from multiple cities. Multivariate meta-analysis represents a
useful analytical tool for studying complex associations through a two-stage
procedure.
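As an illustration of the two-stage approach described above, the following R sketch fits a hypothetical city-specific spline model and pools the spline coefficients with mvmeta. The list city_data, the quasi-Poisson first stage and the choice of 5 degrees of freedom are assumptions for the example, not the authors' specification.

## First stage: city-specific spline models for the temperature-mortality curve
library(splines)
library(mvmeta)

first_stage <- lapply(city_data, function(d) {
  m <- glm(deaths ~ ns(temp, df = 5), family = quasipoisson, data = d)
  list(coef = coef(m)[-1], vcov = vcov(m)[-1, -1])   # drop the intercept
})

theta <- t(sapply(first_stage, `[[`, "coef"))        # cities x spline coefficients
S     <- lapply(first_stage, `[[`, "vcov")           # within-city covariance matrices

## Second stage: random-effects multivariate meta-analysis of the curves
mv <- mvmeta(theta, S = S, method = "reml")
summary(mv)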
C34.4
Developing a predictive risk model for mortality from multiple cohort studies: an example in heart failure
J Dobson1, S.J. Pocock1, C Ariti1, K Poppe2, R.N. Doughty2
1London School of Hygiene and Tropical Medicine, London, UK, 2University of Auckland, Auckland, New Zealand
Developing risk scores based on combining data from multiple studies requires dealing with a number of methodological issues including 1) complex patterns of missing data and 2) heterogeneity among studies in terms of overall patient risk and differences in risk prediction.
This talk shows the statistical analysis of data from the MAGGIC meta-analysis which incorporates patient-level data from 30 studies, both observational registries and RCTs, totaling 39372 patients with heart failure and over 20 baseline variables and subsequent mortality. This dataset has complex missing value problems including different amounts of missing data including studies where variables are completely missing.
The model was developed using multivariable piecewise Poisson regression models with stepwise variable selection. Shared frailty models were used as an alternative method to fixed effects to model study heterogeneity. Multiple imputation using chained equations was used to impute missing values of covariates including the implementation of recent developments to allow for time to event data and interactions to be correctly modeled.
The model developed captured the multifactorial influences on mortality risk in an overall risk score based on multiple studies. In a final model there were 13 independent predictors of mortality including demographic attributes, disease status, biomarkers, prior medical history and medication usage. Classifying patients into deciles of risk produced a marked gradient in 3-year % mortality rate, from 8% in bottom decile to 71% in top decile. This study provides a template as to how multiple studies can be meaningfully combined to provide a generalisable risk score.

C34.5
Combining Family and Twin Data in Association Studies to Estimate the Non-inherited Maternal Antigens Effect
Brunilda Balliu1, Roula Tsonaka1, Diane van der Woude2, Jane Worthington3, Ann Morgan4, Stefan Boehringer1, Jeanine J. Houwing-Duistermaat1
1Department of Medical Statistics and Bioinformatics, Leiden University Medical Center, Leiden, The Netherlands, 2Department of Rheumatology, Leiden University Medical Center, Leiden, The Netherlands, 3Arthritis Research Campaign Epidemiology Unit, University of Manchester, Manchester, UK, 4Leeds Institute of Molecular Medicine, University of Leeds, Leeds, UK
It is hypothesized that certain alleles can have a protective effect not only when inherited by the offspring but also as non-inherited maternal antigens (NIMA). To estimate the NIMA effect, large samples of families are needed. When large samples are not available, we propose a combined approach to estimate the NIMA effect from ascertained nuclear families and twin pairs. We develop a likelihood-based approach allowing for several ascertainment schemes, to accommodate for the outcome-dependent sampling scheme, and a family-specific random term, to take into account the correlation between family members. We estimate the parameters using maximum likelihood based on the combined joint likelihood (CJL) approach. Our method has two main advantages over the existing methods. First, the joint likelihood approach, which models the joint genotype and phenotype distribution, can be more efficient than the prospective likelihood (PL), used in existing methods, for estimating the genetic odds ratios. Secondly, by using twins, as compared to case-controls used in existing methods, we can infer more accurately their parental genotypes, needed to estimate indirect effects, by assuming Mendelian transmission and random mating. Simulations show that the CJL is more efficient for estimating the NIMA odds ratios as compared to a families-only approach. To illustrate our approach, we use data from a family and a twin study from the National Repository of Family Material of the Arthritis and Rheumatism Council, and confirmed the protective NIMA effect, with an odds ratio of 0.477 (95% C.I. 0.264-0.864).
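A hedged R sketch of the general strategy described in C34.4 above: chained-equation imputation followed by a Poisson model for deaths with a log follow-up offset. The data frame hf_data, the covariates and the simple fixed study term (standing in for the frailty and piecewise components of the actual analysis) are invented for illustration.

## Multiple imputation by chained equations, then a pooled Poisson risk model
library(mice)

imp  <- mice(hf_data, m = 10, seed = 2012)          # 'hf_data' is hypothetical

fits <- with(imp, glm(died ~ age + sex + nyha_class + egfr + factor(study) +
                        offset(log(followup_years)),
                      family = poisson(link = "log")))

summary(pool(fits))   # Rubin's rules across the 10 imputations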
Thursday, 23 August – Mini-symposia
MS1 Mini-symposium on Register-based Epidemiology

MS1.1
A woman’s reproductive history is related to diseases later in life
Rolv Skjærven
Department of Public Health and Primary Health Care, University of Bergen, Norway
Studies have shown that there is a relation between preterm birth, stillbirth, fetal growth restriction and preeclampsia on the risk of cardiovascular diseases later in life for women. Thus pregnancy represents a unique opportunity to identify women who may be at increased risk for serious disease and early death.
Adverse events in reproduction, especially stillbirths, are most often compensated by additional pregnancies. In Norway, most women (85%) who have a first pregnancy, will have a second pregnancy, and even more so if the first pregnancy ends with a perinatal death. Studies on long term health consequences related to pregnancy outcome have so far focused on pregnancy outcome of first pregnancy and not evaluated modifying effects of later reproduction.
We find that negative effects of adverse events during pregnancy on long term health are reduced by the presence of successive pregnancies. We therefore suggest that studies on health related to pregnancy outcomes are based on the women’s complete reproduction, rather than only the outcome of first pregnancy.
Our studies are based on The Medical Birth Registry of Norway, covering 43 years (1967-2009), linked to maternal (and paternal) death, and to the registry of education. Successive pregnancies to a woman are organized into reproductive histories through linkage.

MS1.2
What can be achieved with a good population-based cancer registry?
Timo Hakulinen
Finnish Cancer Registry, Institute for Statistical and Epidemiological Cancer Research, Finland
A good cancer registry is an active institution for cancer research basing its activity on a cancer register - a database of all cancer cases occurring in a defined population. Its activities encompass the estimation and prediction of cancer burden, uncovering the causes of cancer, contributing towards cancer prevention, early detection, and evaluation on the outcome - both in survival and in a broader sense - of patients in that defined population. Its data and analyses are also used in resource planning and evaluation of various programmes and functions of different organisations. They thus provide the basis of cancer policy and foundations for cost estimates. The target groups of these activities are the mankind, society, population, authorities, patients and scientists. In order to achieve all of this, guarantees in legislature and funding are needed, as well as a good coexistence of registration and the scientific use of the data. The Finnish Cancer Registry that has been in existence for 60 years is used as an example to elucidate these principles and issues in practice.

MS1.3
How can cancer registries improve our biological understanding of cancer and cancer care?
Giske Ursin
Cancer Registry of Norway and Institute of Basic Medical Sciences, University of Oslo, Norway
Cancer is becoming an increasingly complex disease. New biomarkers are identified that not only predict prognosis, but also suggest different etiological pathways for various cancer subtypes. Breast cancer is one example where there is mounting evidence that risk factors for the disease vary by subtype. This rapid molecular development offers many new challenges to cancer registries. For new biomarker information to be useful the laboratories that report to the registry must conduct similar assays, quality control procedures need to be adequate, and the results must be reported in a standardized and uniform manner. The cancer registries that record this information must pay attention to new developments in the molecular arena, be flexible enough to include new markers as they emerge, yet require sufficient stringent proof that these markers are worthwhile before including them in the database.
It is clear that detailed data with up-to-date molecular markers would benefit cancer epidemiologists and basic scientists trying to understand the etiology of disease or specific mechanisms in cancer subtypes. Clinicians should, in their clinical care, appreciate ardent requirements on their laboratories for quality control and systematic reporting of results on all new markers. Further, detailed, high-quality and systematic recording of various markers in cancer registries open up possibilities for large scale clinical trials or observational studies of the role of cancer medications in a real-life setting. Given the demand for rapid approvals of new cancer medications that have not been tested outside of clinical trials, this opens the possibilities for international multisite evaluation of effects of advanced cancer therapy.

MS1.4
Double delights through twin registry research
Nancy L. Pedersen
Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Sweden
Studies of twins were long regarded as the epitome of an epidemiological design that could address the role of “environmental” risk factors without the confounding of genetic or other familial factors. Indeed, twin studies have allowed researchers to address a number of questions concerning the role of purported environmental risk factors, and at the same time evaluate whether these effects reflect familial and genetic confounding. During the 1980’s – 1990’s, twin study designs were widely used to estimate heritability (in the absence of measured genotypes). Despite being genetically informative, twin studies have been less popular as a design of choice for studying genetic linkage or association. Nevertheless, twin studies may be particularly efficient for evaluating potential gene-by-environment interactions (and correlations), especially when the outcome is a continuously distributed measure. The double delights of twin registry research will be described with examples covering a variety of phenotypes, predominantly those within aging and psychiatry.

MS1.5
Register-based research on the epidemiology of aging
Kaare Christensen
The Danish Aging Research Center and the Danish Twin Registry, Denmark
The Nordic countries have a long tradition for register-based epidemiological studies on reproduction and diseases taking into account socioeconomic factors. Mainly due to truncation, register-based research on the epidemiology of aging has been less extensive. A combination of existing disease-specific registers and national health registers and demographic databases in Denmark have opened up new possibilities. Examples of recent uses addressing the male-female health survival paradox, the cancer-longevity trade-off, and the change of hospitalization use within and across cohorts of the oldest old citizens in Denmark will be provided to illustrate some of the potentials.

MS1.6
The Birth Register – how do we find the most beautiful flowers in the garden?
Sven Cnattingius
Department of Medicine, Clinical Epidemiology Unit, Karolinska Institutet, Sweden
The success of the Scandinavian birth registers is to some extent due to the specific characteristics of pregnancy and childbirth. The window of exposure is short (9 months), and data are commonly prospectively collected. In contrast to chronic diseases, pregnancies often occur more than once. This makes it not only possible to study risks of recurrent or isolated events, but also to study if change of exposure from one pregnancy to another (i.e., change in smoking habits, weight gain or change of partner) influences risks. The Birth Registers also include information on both exposures and outcomes, which makes it possible to perform analytic studies without other data sources. As the Birth Registers in Norway and Sweden started in 1967 and 1973, respectively, it is possible to perform studies of birth outcomes across generations, and further information about family relationships (siblings, half siblings on maternal or paternal side, etc.) provides additional possibilities to study hereditary patterns. In studies of long-term effects of prenatal exposures, studies within disease-discordant sibling pairs provide control for otherwise unmeasured familial (shared genetic and environmental) factors. Limitations in the Birth Registers include no direct access to stored biological samples, lack of information on early pregnancy losses, and limited information on prenatal diagnostic procedures and the delivery process. In future, these limitations may partly be overcome by adding information from computer-based standardized antenatal and obstetrical records. However, given the short time of exposure, pregnancy will always be well suited for prospective cohort studies with more detailed information.

MS1.7
Heterogeneity of risk and selective fertility – Subtle biases produce serious confusions
Allen J. Wilcox1 and Rolv Skjærven2
1National Institute of Environmental Health Sciences, Research Triangle Park, NC, USA and 2Department of Public Health and Primary Health Care, University of Bergen, Norway
Measures of perinatal risk (neonatal mortality, for example) may appear relatively steady for given populations, but in fact they summarize a mix of people at highly heterogeneous risks. While such heterogeneity is usually invisible, it can easily be confused with real biological effect when it becomes visible. This can happen when there is selective over- or under-representation of high-risk couples. For example, women who suffer a pregnancy loss often have additional pregnancies in order to achieve their desired family size, which leads to more high-risk women delivering at older ages. The overrepresentation of high-risk women at older maternal ages then produces an association that can be mistaken for a direct effect of maternal “aging.”
High-risk people can also be underrepresented. Adults who have survived a perinatal problem (such as a birth defect) are usually at risk of passing on similar outcomes to their offspring - but may also be less likely than other adults to contribute offspring to the next generation. Similarly, infertility is associated with a range of pregnancy problems, but infertile couples are less likely to contribute those risks to the pool of observed pregnancies - unless interventions are done to make such pregnancies possible (for example through assisted reproductive techniques or ART). Infertile couples’ underlying risk can make ART itself appear harmful. Some of the best evidence for such risk heterogeneity (and the misleading distortions that can result) comes from the linked Scandinavian birth registries. Examples will be discussed.
MS2 Mini-symposium on Statistics in Vaccines
Research
MS2.1
FDA’s Sentinel Initiative: Active Vaccine Safety Surveillance and
Pharmacovigilance
Michael Nguyen
LCDR, U.S. Public Health Service, Acting Chief, Vaccine Safety Branch, FDA
Center for Biologics Evaluation and Research, USA
The ability to monitor the safety of vaccines after licensure is as important as
the ability to evaluate and demonstrate their safety before licensure. In 2008,
the United States Food and Drug Administration (FDA) launched the Sentinel
Initiative to expand its capability to routinely, rapidly and continually monitor a
product’s benefit-risk balance postlicensure. Capitalizing on the growing
availability of electronic health data, the five year Mini-Sentinel pilot was
launched to inform the creation of the fully operational Sentinel System, a
national electronic postmarket risk identification system. As of 2012, Mini-Sentinel has created a distributed database of more than 126 million
individuals and has already enabled FDA to conduct rapid, population-based
risk assessments of FDA regulated medical products.
The Postlicensure Rapid Immunization Safety Monitoring program (PRISM) is
the Mini-Sentinel program dedicated to vaccine safety. PRISM is the largest
active surveillance vaccine safety system worldwide and uniquely strengthened
by linkages to immunization registries. PRISM is currently focused on
developing new statistical and epidemiologic methods to detect, quantify and
characterize vaccine safety concerns in near real-time. This presentation will
discuss the PRISM program’s current status as well as progress towards
integrating PRISM into FDA’s vaccine regulation processes.
MS2.2
Methodological challenges for sequential vaccine safety surveillance
using observational health care data
Jennifer C. Nelson1,2, Andrea Cook1,2, Onchee Yu1, Lisa Jackson 1,3
1Group Health Research Institute, 2Department of Biostatistics, University of Washington, 3Departments of Epidemiology and Medicine, University of Washington, USA
In order to improve post-licensure vaccine safety surveillance, new systems
have been developed that prospectively monitor observational health care data
from large health plans. Such systems, which include the Centers for Disease
Control and Prevention’s Vaccine Safety Datalink project and the Food and
Drug Administration’s Sentinel System, involve capturing and prospectively
analyzing vaccine and adverse event data among enrollees of multiple large
health plans as new vaccines are received (and often in near-real time). Their
goal has been to evaluate pre-specified suspected safety issues and potentially
prompt a more formal confirmatory study. They offer promise to provide a
safety monitoring framework that is rapid, statistically powerful, and cost-effective. Continuous sequential testing has been proposed in this setting to
facilitate the rapid detection of increased risks of adverse events for newly
licensed vaccines. Group sequential methods, commonly used in randomized
clinical trials, have also been considered. In this talk, we describe the key
methodological challenges that arise when applying sequential methods to
observational safety surveillance that uses data from large health plans.
Challenges primarily derive from the lack of a controlled experiment and include confounding, outcome misclassification, and unpredictable changes in the data over time, such as differential vaccine uptake.

MS2.3
Drug and Vaccine Safety Surveillance: some existing methods and the unmet analytic needs
Lingling Li
Department of Population Medicine, Harvard Medical School and Harvard Pilgrim Health Care Institute, USA
The importance of post-marketing surveillance using electronic healthcare databases for drug and vaccine safety is well recognized as rare but serious adverse events (AE) may not be detected in pre-approval clinical trials. In such surveillance, a sequential test is preferable, in order to detect potential problems as soon as possible. In this talk, we will briefly introduce several sequential analytic methods that have been developed for this purpose, i.e., the Poisson and Binomial maximized sequential probability ratio (MaxSPRT) tests, the conditional maximized sequential probability ratio test (CMaxSPRT), the conditional sequential sampling procedure (CSSP), and the propensity score (PS)-enhanced CSSP test. The Poisson MaxSPRT and CMaxSPRT are both extensions of Wald’s classical SPRT, and apply to settings with historical controls. The Poisson MaxSPRT requires rich historical data to provide stable estimates of the baseline AE counts, while the CMaxSPRT adjusts for uncertainty in both historical controls and surveillance population. The Binomial MaxSPRT and its variants apply to settings with matched con-current controls. The CSSP and the PS-enhanced CSSP apply to settings with con-current controls, but do not require matching. The CSSP adjusts for confounding by standard stratification while the PS-enhanced CSSP adjusts for confounding by PS stratification. We will discuss the respective advantages and disadvantages of these tests. More importantly, in this talk, we will discuss the remaining knowledge gaps and unmet analytic needs for post-marketing drug and vaccine safety surveillance to motivate more methodology research to benefit vaccine safety research.

MS2.4
Signal Detection of Adverse Events Using Electronic Data with Outcome Misclassification
Stanley Xu
Institute for Health Research, Kaiser Colorado, USA
Availability of large amount of electronic health care data makes it possible to study the association of rare adverse events with certain vaccines. For example, the Vaccine Safety Datalink (VSD) project was established a decade ago in USA. It collects electronic data including vaccinations and medically attended adverse events on 8.8 million managed care organizations (MCO) enrollees annually. However, vaccine safety studies using these electronic data bases are limited by the quality of the MCO databases, which are not originally created for research purposes. While the automated vaccination data are of high quality, studies have demonstrated that the accuracy of the outcome data is often inadequate. In this study, we demonstrated that outcome misclassification could result in both false positive and false negative signals in screening studies and near-real time surveillances. We developed a joint statistical model that accommodated misclassification of adverse events for both cohort and self-controlled case series designs. The joint statistical model consisted of two components: an incidence rate model for the observed count of adverse events and a misclassification model for modeling the likelihood of misclassification of observed adverse events. Simulation studies showed that the newly proposed model reduced the rates of false positive and false negative signals in vaccine safety studies.

MS2.5
Paediatric vaccine pharmacoepidemiology: classification bias in case series analysis and application to febrile convulsions
C. P. Farrington1, C. Quantin2, E. Benzenine2, M. Velten3, F. Huet4, P. Tubert-Bitter5
1Department of Mathematics and Statistics, Open University, Milton Keynes, UK, 2Départment de l’Information Médicale, CHU Dijon, Université de Bourgogne, France, 3Laboratoire d’Epidémiologie et de Santé Publique, Université de Strasbourg, France, 4Pôle Pédiatrie, CHU Dijon, Université de Bourgogne, France, 5Equipe Biostatistique, Inserm-UPS UMRS 1018, Villejuif, France
The self-controlled case series (SCCS) method was developed in order to study adverse events of vaccines and was originally applied to hospital discharge data linked to vaccination data in the UK.
In the first part of the talk, an ongoing project to link data on cases and vaccinations in France will be described. The objective of this study is to assess the validity of identifying hospitalizations for simple febrile convulsion from discharge summaries recorded in the French hospital automated database system. Preliminary data from case reviews of 451 children aged between 29 days and 36 months from four hospitals in the Bas-Rhin and the Côte-d’Or indicate that the French hospital automated databases can be used effectively to identify cases of hospitalization for febrile convulsion.
In the second part of the talk, the implications of these results for SCCS studies, and potential bias from different methods of case ascertainment, including purely automated data extraction, will be discussed.
More generally, the impact of classification biases on the SCCS method will be assessed, in the specific context of data linkage studies. The relative impact of sensitivity and specificity of case ascertainment and the accuracy of vaccination data on the bias and the power of the method will be described.

MS2.6
Self controlled case series method with smooth age effect
Yonas G. Weldeselassie, Heather Whitaker, Paddy Farrington
The Open University, Walton Hall, Milton Keynes, MK7 6AA, UK
The self-controlled case-series method, commonly used to investigate potential associations between vaccines and adverse events, requires information on cases only and automatically controls all age-independent multiplicative confounders, while allowing for an age dependent baseline incidence.
In the parametric version of the method, the age specific relative incidence is modelled using piecewise constant functions, while in the semiparametric version it is left unspecified. However, misspecification of age groups in the parametric version leads to biased estimates of the vaccine effect, and the semiparametric approach runs into computational problems when the number of cases in the study is large. We propose to use a penalized likelihood approach where age effect is modelled using splines, piecewise polynomial functions that are combined linearly to approximate a function on an interval with specified continuity constraints.
We use M-splines to approximate the age specific relative incidence and integrated splines (I-splines) for the cumulative relative incidence. A simulation study was conducted to evaluate the performance of the new approach and its efficiency relative to the semiparametric approach. Results show that the new approach performs better and works well for large data sets. The new spline-based approach will be applied to data on febrile convulsions and paediatric vaccines.
Key words: Case series, Penalized likelihood, Poisson process, Semiparametric model, Smoothing, Splines
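The following R sketch illustrates the self-controlled case series model underlying the spline approach above, in a deliberately simplified form: a Poisson regression with per-case fixed effects (equivalent to the SCCS conditional likelihood) and an unpenalized natural cubic spline for age standing in for the penalized M-spline formulation. The long-format data frame sccs_long and its columns are hypothetical.

## Simplified SCCS fit: one row per case-interval, with columns case, events,
## days (interval length), agemid (mid-interval age) and postvac (risk window).
library(splines)

fit <- glm(events ~ factor(case) + ns(agemid, df = 6) + postvac,
           family = poisson, offset = log(days), data = sccs_long)

## Relative incidence in the post-vaccination risk window, with a Wald 95% CI
exp(c(RI = unname(coef(fit)["postvac"]), confint.default(fit)["postvac", ]))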
Abstracts – Posters
P1 Adaptive clinical trials

P1.1
Response Adaptive Randomization - Cost, Benefit and Implementation with Covariate Balancing
Wenle Zhao, Yuko Palesch
Medical University of South Carolina, Charleston, SC, USA
Response adaptive randomization (RAR) in clinical trials has been advocated for its presumed benefits in ethical considerations and statistical efficiencies. However, we found that the potential benefit in statistical efficiency is trivial, and the ethical benefit may be associated with a loss in test power. Furthermore, consideration for baseline covariate balance in randomization via stratification makes the implementation of RAR even more complex and less effective. To overcome this drawback, we propose a RAR algorithm with a minimal sufficient balancing for important baseline covariates and an adjustable response adaptation based on clinical considerations. We demonstrate that prevention of serious imbalances in important baseline covariates is superior to the simple RAR with respect to statistical operating characteristics. The covariate-adjusted RAR algorithm aims to achieve a target allocation ratio which is sufficient to demonstrate ethical advantage and also is capped in order to contain the power loss of the final statistical analysis. The operation characteristics and statistical properties of this RAR are studied with computer simulation. This design has been implemented in a large multicenter acute ischemic stroke trial funded by NIH.

P1.2
Maximum type 1 error rate inflation in multi-armed clinical trials with interim sample size modifications
Alexandra Graf, Peter Bauer, Franz Koenig
Medical University Vienna, Vienna, Austria
Sample size modifications in an adaptive interim analysis based on the observed interim effects can considerably inflate the type 1 error rate if the pre-planned conventional fixed sample-size tests are applied in the final analysis, ignoring the adaptive character of the study. We investigate scenarios where more than one treatment arms are compared to a single control as well as scenarios with interim treatment selection by carrying on only the treatment with the largest observed interim effect and the control to the second stage. It is assumed that either a naive testing procedure with a conventional fixed sample-size test or a multiplicity adjusted Dunnett test is performed in the final analysis. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for "worst case" scenarios, i.e. sample size adaptation rules that lead to the largest conditional type 1 error rate in any point of the sample space.
If allocation rates to treatment arms are modified after an interim analysis, it can be shown that the maximum inflation of the type 1 error rate may be substantially larger than in the case of sample size reassessment with stage-wise balanced sample sizes. To achieve the maximum type 1 error rate, we first assume unconstrained second-stage-sample-sizes. To see how the numbers will change in more realistic scenarios, we put constraints on the second-stage-sample-size, which may lead to scenarios not inflating the type 1 error rate.

P1.3
Adaptive two-stage designs for comparing two binomial proportions in phase II clinical trials
Chia-Min Chen2, Yunchan Chi1
1Department of Statistics, National Cheng-Kung University, Tainan, Taiwan, 2Graduate Institute of Natural Healing Sciences, Nanhua University, Chiayi, Taiwan
In a randomized two-arm phase II clinical trial, the goal is to determine whether the superior therapy is better than the inferior therapy. Most often, the trial uses a two-stage design for ethical considerations. The sample size required in the trial is often determined by the expected difference in response rates between the two therapies based on desired power. In practice, however, it is difficult to know the expected difference, especially when there is no available information about the two therapies in the literature. Therefore, this paper follows the idea of Lin and Shih (2004) to develop an adaptive two-stage design which allows the expected difference in the alternative hypothesis at the second stage to be changed according to the outcome observed at the first stage. Consequently, a promising therapy may be rejected based on the adapted alternative hypothesis. The Fisher's exact test is employed and its exact distribution is used for generating sample sizes required by the design. Because of the discrete nature of the exact distribution, the Mid-P-value is applied to overcome conservativeness of Fisher's exact test.

P1.4
Impact of lack-of-benefit stopping rules on treatment effect estimates of two-arm multi-stage (TAMS) trials with time to event outcome
Babak Choodari-Oskooei1, Mahesh KB Parmar1, Patrick Royston1, Jack Bowden2
1MRC Clinical Trials Unit, London, UK, 2MRC Biostatistics Unit, Cambridge, UK
Background
In 2011, Royston et al described technical details of a two-arm, multi-stage (TAMS) design. The design enables a trial to be stopped part-way through recruitment if the accumulating data suggests a lack of benefit of the experimental arm. Crucially, such interim decisions can be made using data on an available 'intermediate' outcome. At the conclusion of the trial, the definitive outcome is analysed. Typical intermediate and definitive outcomes in cancer trials might be progression-free and overall survival, respectively. Despite this framework being used in practice, concern has been raised about possible bias induced in the final estimate of the treatment effect.
Methods
We explore the issue of bias empirically through simulation and bootstrap-based reanalyses of cancer trials run by the Medical Research Council.
Results
In trials with a true lack of benefit of the experimental arm that are stopped at an interim stage, the treatment effect has a small bias at the time of the interim assessment. This small bias is markedly reduced by further follow-up and reanalysis at the planned end of the trial. In trials with a truly efficacious experimental arm that continue to the planned end, the bias is of no practical importance, being less than 3% of the treatment effect in general.
Conclusions
The bias in the estimated treatment effect in a TAMS trial is of no practical importance, provided that all patients are followed up to the planned end of the trial. Bias correction is unnecessary.

P1.5
Optimizing trial design in pharmacogenetics research; comparing a fixed parallel group, group sequential and adaptive selection design on sample size requirements
Ruud Boessen1, Frederieke H van der Baan1, Rolf HH Groenwold1, Antoine CG
Egberts3, Olaf H Klungel2, Diederick E Grobbee1, Mirjam J Knol1, Kit CB Roes1
1Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands, 2Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht University, Utrecht, The Netherlands, 3Department of Clinical Pharmacy, University Medical Center Utrecht, Utrecht, The Netherlands
Background: Two-stage clinical trial designs may be efficient in
pharmacogenetics research when there is inconclusive evidence of effect
modification by a genomic marker. Two-stage designs allow to stop early for
efficacy/futility (i.e. group sequential), and could offer the additional opportunity
to enrich the study population to the most promising patient subgroup (i.e.
adaptive selection).
Methods: This study compared sample size requirements for a fixed parallel
group, a group sequential and an adaptive selection design with equal overall
power and control of the family-wise type-I error rate. Designs were evaluated
across scenarios that defined the effect size in the marker positive and marker
negative subgroups, and the genotype distribution in the total study population.
Also considered were scenarios where the actual subgroup effects were
different from those assumed at the planning stage.
Results: Two-stage designs were generally more efficient than the fixed parallel
group design. The largest sample size reduction was associated with the
option to stop early for efficacy/ futility. The possibility to enrich had an
additional advantage when the difference in subgroup effects was large. When
the actual difference in subgroup effects was larger than assumed, an adaptive
selection trial more often concluded significance only for the most responsive
subgroup.
Conclusion: A group sequential design is generally more efficient than a
parallel group design when patient subgroups respond differentially. An
adaptive selection design only adds to the advantage when the difference in
subgroup effects is large. Adaptive selection provides flexibility which may be
desirable when a priori assumptions are tentative.
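A toy R simulation in the spirit of the design comparison above, assuming a two-stage group sequential z-test with O'Brien-Fleming-type efficacy bounds and a simple futility rule; the sample sizes, effect size and boundaries are illustrative choices, not those of the study.

## Empirical power and expected sample size of a simple two-stage design
simulate_gs <- function(nsim = 10000, n_per_stage = 100, delta = 0.25,
                        eff_bounds = c(2.797, 1.977), fut_bound = 0) {
  set.seed(1)
  n_used <- power <- numeric(nsim)
  for (i in seq_len(nsim)) {
    x1 <- rnorm(n_per_stage, delta); y1 <- rnorm(n_per_stage, 0)
    z1 <- (mean(x1) - mean(y1)) / sqrt(2 / n_per_stage)
    if (z1 >= eff_bounds[1] || z1 < fut_bound) {      # stop at the interim look
      n_used[i] <- 2 * n_per_stage
      power[i]  <- z1 >= eff_bounds[1]
    } else {                                          # continue to stage 2
      x2 <- c(x1, rnorm(n_per_stage, delta)); y2 <- c(y1, rnorm(n_per_stage, 0))
      z2 <- (mean(x2) - mean(y2)) / sqrt(2 / (2 * n_per_stage))
      n_used[i] <- 4 * n_per_stage
      power[i]  <- z2 >= eff_bounds[2]
    }
  }
  c(power = mean(power), expected_n = mean(n_used))
}
simulate_gs()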
P1.6
In a two-stage dose-finding study, how big should the first stage be?
Emma McCallum, Adrian Mander, James Wason
MRC Biostatistics Unit, Cambridge, UK
In Phase II of the drug development process, trials are used to establish the
efficacy of a drug and to estimate a dose-response relationship. Response is
usually an efficacy endpoint such as blood pressure or surrogate marker such
as white blood cell count. When the aim is to estimate the full dose-response
relationship, nonlinear monotonic response models, such as the Emax model,
are often used to estimate clinical parameters.
Adaptive optimal designs split a trial into multiple stages; at each stage,
parameters of the model are estimated and future dose choices are based on a
locally D-optimal design. The main problem with these designs is: how big
should the first stage be when there is no information about the model
parameters? We have examined a two-stage design that is partitioned
according to some function. In the first stage, participants are allocated to a set
of pre-defined doses in order to estimate the parameter values of the dose-response curve. These parameter estimates are then used to find the optimal
design for the second stage. We shall examine what fraction of the participants
should be assigned to the first stage so that the efficiency of the design is
maximised and how this is affected by the initial guesses of the model
parameters. The problem is explored in the context of a one parameter
exponential mean function and also a multi-parameter Emax model. With these
models, some analytic results are obtained about how big the first stage of
these trials should be.
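A rough R sketch of the two-stage idea discussed above: an Emax model is fitted to simulated first-stage data and candidate second-stage dose sets are scored by the determinant of the approximate Fisher information at the interim estimates. All doses, parameter values and candidate designs are invented for illustration.

emax_mean <- function(dose, e0, emax, ed50) e0 + emax * dose / (ed50 + dose)

set.seed(42)
stage1   <- data.frame(dose = rep(c(0, 10, 50, 150), each = 10))
stage1$y <- emax_mean(stage1$dose, 2, 8, 25) + rnorm(nrow(stage1), sd = 2)

fit <- nls(y ~ emax_mean(dose, e0, emax, ed50), data = stage1,
           start = list(e0 = 1, emax = 5, ed50 = 20))
th  <- coef(fit)

d_crit <- function(doses) {                          # locally D-optimal criterion
  g <- cbind(1,                                      # d mu / d e0
             doses / (th["ed50"] + doses),           # d mu / d emax
             -th["emax"] * doses / (th["ed50"] + doses)^2)   # d mu / d ed50
  det(crossprod(g))
}

candidates <- list(c(0, 25, 150), c(0, 10, 50, 150), c(0, 50, 100, 150))
sapply(candidates, d_crit)   # larger is better: pick the best set for stage 2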
P1.7
Incorporating prior information into dual-agent Phase I dose-escalation studies
from single-agent trials
Graham Wheeler
MRC Biostatistics Unit, Cambridge, UK
In oncology, there is increasing interest in studying combinations of drugs to
improve treatment efficacy and/or reduce harmful side-effects. Dual-agent
Phase I clinical trials are primarily concerned with drug safety, with the aim to
discover a maximum tolerated combination dose via dose-escalation; small
cohorts of patients are given set doses of both drugs and monitored to see if
any particular toxic reactions occur. Whether to escalate, de-escalate or
maintain the current dose for either drug for subsequent cohorts is based on
the number and severity of observed toxic reactions, and a decision rule.
We investigate the use of Bayesian adaptive model-based designs for dual-agent Phase I trials, where prior information from single-agent trials can be
accommodated. We use a meta-analytic method to incorporate information
from past single-agent trials in order to construct prior predictive distributions
for the margins of the dose-toxicity surface. Priors for parameters relating to
drug-drug interactions are initially kept vague.
These methods are used to design a dual-agent dose-escalation study in
pancreatic cancer; the intervention is a combination of Paclitaxel and a novel
Aurora Kinase Inhibitor. Using data obtained from a systematic search of
Paclitaxel toxicities we construct an appropriate prior distribution for one
margin of the dose-toxicity surface. We then consider several prior distributions
for the other margin and interaction terms under various scenarios and show
how they affect the operating characteristics of the design.
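An illustrative R sketch of how single-agent information might be pooled into a prior for one margin of the dose-toxicity surface, using a random-effects logistic model as a rough stand-in for the meta-analytic approach described above; the trial counts and doses are invented, not the Paclitaxel data.

library(lme4)

single_agent <- data.frame(
  trial = factor(rep(1:4, each = 3)),
  dose  = rep(c(50, 75, 100), 4),                 # hypothetical dose levels
  dlt   = c(0, 1, 3, 1, 2, 4, 0, 0, 2, 1, 1, 3),  # dose-limiting toxicities
  n     = c(6, 6, 9, 6, 9, 12, 3, 6, 6, 6, 6, 9))

fit <- glmer(cbind(dlt, n - dlt) ~ log(dose / 75) + (1 | trial),
             family = binomial, data = single_agent)

## Prior predictive toxicity probabilities at the candidate single-agent doses
plogis(predict(fit, newdata = data.frame(dose = c(50, 75, 100)), re.form = NA))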
P1.8
Blinded and unblinded internal pilot study designs for clinical trials with
overdispersed count data
Simon Schneider1, Heinz Schmidli2, Tim Friede1
1Department of Medical Statistics, Göttingen, Germany, 2Statistical Methodology, Novartis Pharma AG, Basel, Switzerland
In the planning phase of a clinical trial with counts as primary outcomes, such
as relapses in Multiple Sclerosis (MS), there is uncertainty with regard to the
nuisance parameters (e.g. overall event rate, the dispersion parameter) which
need to be specified for sample size estimation. For this reason the application
of adaptive designs with blinded sample size reestimation (BSSR) is
attractive (Cook et al. 2009, Friede and Schmidli 2010a). After a comparison of
existing methods we consider in this presentation a modified version of the
maximum likelihood method for BSSR for negative binomial data proposed by
Friede and Schmidli (2010b). The method works well in terms of sample size
distribution and power, if the assumed clinical effect is equal to the true effect.
We compare the BSSR approach to an unblinded procedure in situations
where an uncertainty about the assumed effect size exists. For practically
relevant scenarios we make recommendations when application of the blinded
or unblinded procedures are indicated. In addition, results for unbalanced
designs previously not considered are shown in a simulation study. The
methods are illustrated by a study in Relapsing Remitting MS.
Cook RJ et al. Two-stage design of clinical trials involving recurrent events.
Statistics in Medicine 2009; 28: 2617-2638.
Friede T, Schmidli H. Blinded sample size reestimation with count data:
Methods and applications in multiple sclerosis. Statistics in Medicine 2010a;
29:
1145-1156.
Friede T, Schmidli H. Blinded Sample Size Reestimation with Negative
Binomial Counts in Superiority and Non-inferiority Trials. Methods of Information in Medicine 2010b; 49: 618-624.

P1.9
Group sequential testing in covariate-adjusted response-adaptive designs
Eunsik Park
Chonnam National University, Gwangju, Republic of Korea
Response adaptive design in clinical trials aims to minimize the number of subjects assigned to the inferior treatment while maintaining significant statistical inference at a certain level. Recently, molecular studies and genetic and proteomic research have accumulated more and more evidence suggesting that personalized medicine is possible. Toward this end, clinical designs that incorporate covariate information become important and ethical. In this paper, we study the application of group sequential methods to some covariate-adjusted and response-adaptive designs.
The group sequential methods with covariate adjustment have been studied by many authors (for example, Jennison and Turnbull, 1997). Recently, in the literature, there are some discussions about application of group sequential methods to response adaptive design (for example, Karrison, Huo, Chappell, Control Clinical Trials, 2003, pp 506-22). When the response adaptive design is applied, the subjects included in the study are adaptive to the previous history of response and therefore they are no longer independent. That is, the group sequential method is built on the theory of Jennison and Turnbull (1997, 2000), which relies on the asymptotic theory.
By basing treatment allocation on the better treatment, participation of patients in clinical trials for serious diseases will be improved. This will stimulate research in serious diseases. The research period can be shortened by stopping the study earlier when stopping criteria are satisfied. This will reduce study costs, and time to access the better treatment will be saved.

P2 Bioinformatics

P2.1
Joint estimation of isoform expression and isoform-specific read distribution using RNA-Seq data across samples
Chen Suo1, Stefano Calza2, Agus Salim3, Yudi Pawitan1
1Department of Medical Epidemiology, Karolinska Institutet, Stockholm, Sweden, 2Department of Biomedical Sciences and Biotechnology, University of Brescia, Brescia, Italy, 3Department of Epidemiology and Public Health, National University of Singapore, Singapore, Singapore
RNA-sequencing technologies provide a powerful tool for expression analysis at isoform level, but accurate estimation of isoform abundance is still a challenge. The first step of transcript quantification is to count the number of reads falling into an exon because expression level is expected to be proportional to the read counts. Standard methods of estimation typically assume uniform read intensity along a transcript; these methods would produce biased estimates when the read intensity is in fact non-uniform due to, for example, the 5' or 3' bias, certain nucleotide composition effect - such as GC content - or other technical biases. The problem is that the read intensity pattern is not identifiable from data observed in a single sample. In this study, we propose a joint statistical model that accounts for non-uniform isoform-specific read distribution and gene isoform expression estimation. The main challenge is in dealing with the large number of isoform-specific read distributions, which are as many as the number of splice variants in the genome. A statistical regularization with L1 smoothing penalty is imposed to control the estimation. Also, for estimability reasons, the method uses information across samples from the same gene. We apply the model to simulated data and a mouse RNA-Seq data set. Empirical tests show that our model provides substantial improvement on the quality of model fitting and improves the sensitivity in isoform-level differential expression analysis when read distribution deviates from uniform. Some of the expression levels are validated using real-time PCR data.

P2.2
Unequal group covariances in microarray data analyses
Woojoo Lee1, Yudi Pawitan2
1Inha University, Incheon, Republic of Korea, 2Karolinska Institutet, Stockholm, Sweden
In testing for differential expression for gene sets, such as genetic pathways, we often face the problem of singular covariance matrices (if n < p) and of unequal covariance matrices between groups. To deal with this singularity problem, we can apply a shrinkage covariance estimation method and this leads to a regularized version of Hotelling's T2. However, the Welch-type regularized Hotelling's T2 lacks sensitivity when the covariances are equal. To overcome this problem, we introduce a moderated regularized Hotelling's T2 that is based on weighing by the probability that the covariances are equal given data. When a non-trivial proportion of gene sets has unequal covariance matrices, the false discovery rate (FDR) estimate based on the proposed statistic is shown to perform better than the existing methods over a wider range of data conditions.

P3 Causal inference

P3.1
Exploration of instrumental variable methods for estimation of causal mediation effects in the PACE trial of complex treatments for chronic fatigue syndrome
Kimberley Goldsmith1, Trudie Chalder1, Peter White2, Michael Sharpe3, Andrew Pickles1
1Institute of Psychiatry, King's College London, London, UK, 2Wolfson Institute of Preventive Medicine, Bart's and the London School of Medicine, Queen Mary University of London, London, UK, 3University Department of Psychiatry, University of Oxford, Oxford, UK
Background
Chronic fatigue syndrome (CFS) is characterised by chronic disabling fatigue. The PACE trial compared four treatments for CFS and found cognitive behaviour therapy (plus specialist medical care, CBT+SMC) and graded exercise therapy (GET+SMC) to be more effective than adaptive pacing therapy (APT+SMC) and SMC alone in improving physical function and fatigue. Estimates of causal mediation effects are of interest, for example, fear avoidance and activity avoidance as mediators of the effect of CBT and GET respectively. Traditional Baron, Judd and Kenny (BJK) methods can be subject to bias; instrumental variable methods (IV) can address this problem. The aims were to explore causal analyses using IVs in PACE and to compare IV and BJK estimates.
Methods
BJK methods were applied using ordinary least squares regressions. IV methods were applied by assessing several baseline variables in interaction terms with treatment arm. Instrument strength was assessed using the R2 change between models with main effects only and with the interaction term. Different IV estimators were compared. Collective instrument strength was assessed using an F test and partial R2.
Results
The IVs were weak, with small R2 changes. The IV-derived estimators were different in magnitude and less precise than the BJK estimators. The relative precision of different IV estimators varied 10-18%. There is scope for modelling a common effect of mediators on outcomes across trial arms.
Conclusions
Potential IVs for the study of PACE treatment mechanisms can be found, however, these were weak. Combining trial arms may allow for more efficient IV analysis.
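A minimal R sketch of the instrumental variable strategy outlined above, using baseline-by-arm interactions as instruments in a hand-rolled two-stage least squares fit. The data frame pace_like and all variable names are invented (these are not the PACE data), and proper 2SLS standard errors would need, for example, AER::ivreg or a delta-method correction.

## Stage 1: predict the mediator from arm and arm-by-baseline interactions
stage1 <- lm(fear_avoidance ~ arm * (baseline_fatigue + baseline_function),
             data = pace_like)
pace_like$med_hat <- fitted(stage1)

## Stage 2: outcome on the predicted mediator and arm (naive SEs only)
stage2 <- lm(fatigue_52wk ~ med_hat + arm, data = pace_like)
summary(stage2)

## Instrument strength: R^2 gain from the arm-by-baseline interaction terms
r2_main <- summary(lm(fear_avoidance ~ arm + baseline_fatigue + baseline_function,
                      data = pace_like))$r.squared
summary(stage1)$r.squared - r2_main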
P3.2
Principal trajectories: extending principal stratification for repeated measures
Richard Emsley, Graham Dunn, PRP (Psychosis Research Partnership) Group
Health Sciences Research Group, The University of Manchester, Manchester, UK
Exploring treatment effect heterogeneity is an important aspect of complex intervention trials, and process variables describing the intervention content are crucial components of this. Frequently these variables can only be measured in intervention groups. Principal stratification, whereby control group participants are assigned to the latent class they would have been in had they been randomised to intervention, has been proposed for this setting but generally makes use of a single observation of the process variable, which often has repeated measures collected.
We propose a new method making efficient use of all the observed data, termed principal trajectories. We estimate general growth mixture models on the repeated measures of process variables in the intervention group using maximum likelihood, assigning participants to hypothesised latent trajectory classes by estimated posterior probabilities. Using baseline covariates which predict class membership, we assign which class control group participants would have been in, had they been randomised to intervention. We then examine the effect of random allocation on outcome within each class. If required, an exclusion restriction can be imposed to aid identification.
We illustrate this method using a randomised trial of cognitive behaviour therapy for prevention of relapse in psychosis. A participant-reported measure of therapist empathy was taken at each session of therapy in the intervention group, with no corresponding measure available in the control group. Applying the principal trajectories approach, we find differential effects of randomisation on several psychosis-specific outcomes between the latent classes.

P3.3
Improving the detection of causal mediation effects in complex intervention trials
Neil Casey1, Simon Thompson1, Toby Prevost2
1University of Cambridge, Cambridge, UK, 2King's College London, London, UK
In a physical activity trial, there was no evidence of effect on the primary outcome, though large significant effects amongst eight related SF36 measures of general health. Could such differences be: true effects mediated through receiving the intervention, other systematic effects such as self-reporting bias, and/or chance? The aim was to develop reliable methods to investigate whether the intervention effects on these outcomes were mediated through truly receiving the intervention delivered in sessions.
We adopted a structural mean modelling approach with a two-stage least-squares estimation algorithm, to estimate mediation effects free from confounding bias. The mediator (number of sessions attended), and outcomes (SF36), are predicted from baseline covariates. Each individual has a personal predicted 'counterfactual' treatment-effect difference, regressed on the predicted mediator using a dose-response model structure. However, these methods do not typically provide sufficiently precise estimates except for such a simple model structure.
A simulation study established the factors driving the lack of precision to be the intervention effect size and the degree to which baseline covariates predict the mediator. The latter is modifiable at the design stage. We extended the two-stage approach, using linear mixed effects and GEE modelling to enable multiple SF36 outcomes to be modelled multivariately, thereby contributing more precision to estimate a common mediation effect, and achieving reductions of 20-30% in the standard error. This offers welcome power gains. In the trial, the significant mediation effect through sessions indicates that some of the effect may indeed be genuinely connected with receipt of intervention material.

P3.4
Estimating the effect of insulin treatment in diabetic type-II patients on cardiovascular disease rates with marginal structural models
Michel Hof, Aeilko Zwinderman
Academic Medical Center, Amsterdam, The Netherlands
The first pharmacological treatment of diabetes mellitus (DM) type-II is commonly a variety of oral anti-diabetics. However, at some point in their treatment, most DM type-II patients require exogenous insulin to control glucose-metabolism. Unfortunately, there are suspicions that the use of exogenous insulin is associated with increased risk of cardiovascular events independent of the risk for such events that is associated with having DM type-II.
To investigate this hypothesis, baseline biomarker measurements and complete medication histories from 25883 diabetic type-II patients were available. This data was extracted from the PHARMO record linkage system containing the data from Dutch community pharmacies.
In this epidemiological cohort the health of patients treated with exogenous insulin is usually worse than the health of patients treated with oral anti-diabetics. To account for this indication-bias we used marginal structural Cox proportional hazards models to quantify the causal effect of starting with exogenous insulin therapy during follow-up. Different types of models were considered, based on (double-robust) inverse probability of treatment weighted estimating equations or targeted maximum likelihood.
Preliminary results showed that the increased risk that is associated with starting exogenous insulin treatment is largely explained by the dynamic selection of patients who were started on insulin treatment.

P3.5
Causal inference from trials of complex interventions
Sabine Landau
King's College London, London, UK
P3.5
Causal inference from trials of complex interventions
Sabine Landau
King's College London, London, UK
Complex interventions are characterised by multiple components which, when given in combination, are thought to improve health outcomes. Outcome improvements are often hypothesized to be achieved indirectly, by an intervention component inducing change in an intermediate process variable which is then translated into change in a distal outcome (mediation). In clinical trials, covariates of the linear models employed to estimate average causal treatment component effects and their indirect (mediated) and direct (non-mediated) portions may be endogenous due to non-receipt of assigned intervention components, unmeasured confounding or measurement error. I will describe a Monte Carlo simulation study to assess the impact of these issues on the statistical properties of ordinary least squares (OLS) estimators and two instrumental variables (IV) estimators (two-stage least squares, 2SLS, and three-stage least squares, 3SLS). The results show that while IV estimators can correct for hidden confounding and/or measurement error bias, they do so at the expense of increased imprecision. The standard error inflation relative to that of the OLS estimator is largely driven by the strength of the instruments: the stronger the instruments, the less inflation. The mediation portions were found to be most susceptible to bias by naïve OLS estimation. At the same time, the standard errors of their IV estimators suffered the worst inflation. There was no efficiency gain from implementing the extra computations required by the 3SLS procedure. The findings highlight the need to design (series of) trials of complex interventions such that consistent and precise estimation of causal parameters of interest is possible.
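For readers unfamiliar with the instrumental variables estimators discussed in the two abstracts above, the following sketch computes a manual two-stage least squares estimate using random allocation as the instrument for a received intervention component. The data and variable names are simulated placeholders, and the naive second-stage standard errors are not the correct 2SLS standard errors.

```python
# Minimal 2SLS sketch: randomisation Z instruments the (endogenous) mediator M.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
z = rng.binomial(1, 0.5, n)                     # random allocation (instrument)
u = rng.normal(size=n)                          # unmeasured confounder
m = 0.8 * z + 0.9 * u + rng.normal(size=n)      # mediator / received dose
y = 0.5 * m + 1.2 * u + rng.normal(size=n)      # outcome; true causal effect = 0.5

# Naive OLS of y on m is biased by the hidden confounder u.
ols = sm.OLS(y, sm.add_constant(m)).fit()

# Stage 1: predict the mediator from the instrument.
stage1 = sm.OLS(m, sm.add_constant(z)).fit()
m_hat = stage1.fittedvalues

# Stage 2: regress the outcome on the predicted mediator.
stage2 = sm.OLS(y, sm.add_constant(m_hat)).fit()

print("naive OLS estimate:", round(ols.params[1], 3))
print("2SLS estimate:     ", round(stage2.params[1], 3))
# Note: stage2's reported standard errors ignore the first-stage estimation and
# should be replaced by proper IV standard errors in a real analysis.
```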
P3.6
Marginal Structural Models in Epidemiology: Why not?
Silvana Romio1, Maria Rosaria Galanti2, Maria Paola Caria2, Rino Bellocco3
1Erasmus University Medical Center, Rotterdam, The Netherlands, 2Karolinska Institutet, Stockholm, Sweden, 3University of Milano-Bicocca, Milan, Italy
Background: Although the study of the determinants of disease is a priority in epidemiology, causal models to study the relation between an exposure and an outcome are seldom used in epidemiologic studies. Case studies may be important in order to understand the advantages and limitations of applying such models.
Objectives: To compare methods and results from studies where marginal structural models have been applied, in order to highlight aspects of importance in the application of these models in epidemiology.
Methods: Objectives, study design, causal graphs, and the nature of confounding and feasibility of the application were compared in four studies based on causal models in different contexts. The main focus of this comparison concerned model assumptions and the strength of the relationship between confounder and exposure of interest.
Results: All studies compared classic and marginal structural models. A substantial difference in the estimation of the effect of exposure on outcome was observed in only one of the considered works. The strength of the association between time-dependent confounders and exposure was a peculiarity of this study.
Conclusions: The application of marginal structural models in epidemiology requires further exploration. Assumptions to be fulfilled and the construction of causal diagrams are crucial aspects of this exploration, as well as of wider potential application of the models in epidemiologic studies.
P3.7
Average treatment effect estimation with a rare binary outcome: an example and simulations
Eléonore Herquelot1, Julie Bodin2, Catherine Ha3, Yves Roquelaure2, Rémi Sitta1, Alice Guéguen1, Alexis Descatha4
1Versailles Saint-Quentin-en-Yvelines University, UMRS 1018, Centre for Research in Epidemiology and Population Health, Population-Based Epidemiological Cohorts Research Platform, Villejuif, France, 2LUNAM University, Laboratory of Ergonomics and Epidemiology in Occupational Health, University of Angers, Angers, France, 3Department of Occupational Health, French Institute for Public Health Surveillance (InVS), Saint Maurice, France, 4AP-HP, Poincaré University Hospital, Occupational Health Unit, Garches, France
Standardization (STD), inverse probability weighting (IPW) and doubly robust estimation (DR) are recommended to estimate the Average Treatment Effect (ATE). Their asymptotic properties and theoretical variances are already well known. However, the properties of such ATE estimators in small samples have not been fully explored and few applications are available.
Our aim was to compare the performance of ATE estimation methods in a specific example using simulations, in order to estimate the effect of a balanced exposure on a rare binary outcome using theoretical variances.
In the present work, we consider a cross-sectional study conducted among 2161 men and 1549 women, with 15 and 14 cases respectively, a balanced exposure and three confounding factors. We simulated several datasets reproducing the original sample to compare the ATE estimation methods (STD, IPW and DR) in terms of bias, size of confidence interval and power to detect a non-zero effect. We also conceived new scenarios by varying the number of cases, the strength of the ATE and the prevalence of the outcome among non-exposed subjects.
In the original sample, the odds ratios among men and women were 0.89 [0.33;2.36] and 4.44 [1.17;16.74] for DR, 0.87 [0.32;2.35] and 4.23 [1.16;15.38] for IPW, and 0.89 [0.34;2.33] and 4.45 [1.27;15.61] for STD, respectively. The simulations are currently running and their results will be presented at the conference.
In the case of a rare outcome, the estimation of the ATE is complex, and simulations are needed to accurately study power and variance.
P3.8
G-estimation from an RCT comparing 2 active treatments and placebo given post-randomisation crossover and simultaneous treatments
Roseanne McNamee, Matthew Carr
University of Manchester, Manchester, UK
Background and objectives. In a 2-arm placebo-controlled randomised trial (RCT), where cross-over from one arm to the other occurs non-randomly, g-estimation provides an unbiased compliance-adjusted estimate of the active treatment effect on time to event. In the 3-arm RCT with two anti-hypertensive arms and placebo which gave rise to this work, patients were allowed to switch from one treatment to another, or to receive both treatments simultaneously. Our objectives were to extend g-estimation to 3 arms, and to estimate the efficacy of the active treatments (T1 and T2) on time to death, MI and stroke.
Method. A structural accelerated life time model - with a re-censoring rule based on an empirical minimum time (Joffe 2011) - was used to relate the (sometimes) unobserved treatment-free event time (T0) to the observed event times. Effects of the active treatments were assumed to act multiplicatively if given together, and the placebo to have no effect; therefore the model had two parameters. Parameter estimates were found as the values for which, in a three-arm comparison, a two degrees of freedom log-rank test of the null hypothesis of no difference in the distribution of T0 yields p=1. Issues around confidence interval estimation, competing risks and run-time will also be discussed.
Results: Estimated acceleration factors were converted to Hazard Ratios (HR). Whereas the ITT HRs for mortality, for example, for T1 and T2 were 0.86 and 1.07, the g-estimation efficacy estimates were 0.71 (95% CI: 0.49, 1.11) and 1.33 (95% CI: 0.9, 2.36).
P3.9
Direct and indirect effects in the presence of time-dependent confounding
Georgia Vourli, Giota Touloumi
Athens University Medical School, Dpt. of Hygiene, Epidemiology & Medical Statistics, Athens, Greece
Causal methodology is mainly focused on the estimation of the total effect of an exposure. Our aim is to develop a method to generate data appropriate to be analyzed with Cox marginal structural models (MSM), prespecifying the direct (immediate) and indirect (mediated via other variable(s)) effects, and to explore how the total effect decomposes into these components.
Data were simulated assuming a time-dependent confounder, binary or continuous. The exposure's direct and indirect hazard ratios were prespecified. Scenarios where the proportional hazards (PH) assumption is either met or violated were explored. Deviation of the total effect from the sum of the direct and indirect effects was assessed. For each scenario, 1000 datasets with 1000 individuals and 10-year follow-up were generated.
Assuming a binary marker, and under PH, the relative bias was -1.6%, with 95% empirical coverage probability 94.7%. When the PH assumption was violated, the total effect deviated from the direct plus indirect effects by 3.3%-111.5%, depending on the marker's trends both on and off therapy, as expected. In such cases, we explored ways to adequately model the time-dependent indirect treatment effect and the relation of the estimated total effect with the prespecified parameters.
The proposed simulation method is approximate, as we prespecify the direct and indirect effects instead of the marginal one. However, it is shown that if the PH assumption holds, the total effect can be obtained by adding the direct and indirect ones. The proposed simulation method could be useful when direct and indirect effects are available, even in non-PH cases.
P3.10
Unfortunately, this poster has been withdrawn.
P3.11
Investigation of pleiotropy in Mendelian randomisation studies that use aggregate genetic data
Fabiola Del Greco M.1, Elinor Jones2, Victoria Jackson2, Irene Pichler1, Andrew Hicks1, Peter P. Pramstaller1, Nuala Sheehan2, John R. Thompson2, Cosetta Minelli1
1EURAC research - Center for Biomedicine, Bolzano, Italy, 2Department of Health Sciences, Centre for Biostatistics and Genetic Epidemiology, University of Leicester, Leicester, UK
Genes can be used as instruments to provide estimates of the association between modifiable intermediate phenotypes and disease risk ("Mendelian randomisation", MR). Unlike direct estimates from observational studies, MR estimates are free of confounding or reverse causation, provided that some assumptions are met. The main assumption is the absence of pleiotropy, that is, the gene influences disease only through the given phenotype. Assessing pleiotropy may be difficult even for well-studied genes, and the use of multiple genes can indirectly address the issue: if all genes are valid instruments, their MR estimates should vary only by chance. This can be tested using the over-identification test, but the test requires fitting all genes in the same model and cannot be used when only aggregate results of gene-phenotype and gene-disease associations are available. We present a simple approach based on the use of meta-analysis that combines MR estimates from multiple genes, where pleiotropy is assessed through investigation of the presence and magnitude of between-instrument heterogeneity, using the heterogeneity test and the I2 statistic.
We illustrate the approach with an example, a Mendelian randomisation study of the effect of iron blood levels on Parkinson's disease that uses four genes. Through simulations mimicking our example, we investigate the performance of the approach under different scenarios, where the presence and magnitude of pleiotropy in one or more genes are varied.
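The meta-analytic check for pleiotropy can be sketched as follows: per-gene Wald ratio estimates (gene-disease log odds ratio divided by gene-phenotype effect) are combined with fixed-effect inverse-variance weights, and Cochran's Q and I² quantify between-instrument heterogeneity. The numbers below are invented aggregate results for four hypothetical genes, and the uncertainty in the gene-phenotype effects is ignored for simplicity.

```python
# Combine per-gene Mendelian randomisation estimates and test heterogeneity.
import numpy as np
from scipy import stats

# Hypothetical aggregate data: gene-phenotype effect (beta_gp),
# gene-disease log odds ratio (beta_gd) and its standard error (se_gd).
beta_gp = np.array([0.30, 0.22, 0.15, 0.40])
beta_gd = np.array([0.09, 0.08, 0.02, 0.16])
se_gd = np.array([0.03, 0.04, 0.03, 0.05])

mr_est = beta_gd / beta_gp               # Wald ratio estimate per instrument
mr_se = se_gd / np.abs(beta_gp)          # first-order delta-method SE

w = 1 / mr_se**2                         # fixed-effect inverse-variance weights
pooled = np.sum(w * mr_est) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))

q = np.sum(w * (mr_est - pooled) ** 2)   # Cochran's Q across instruments
dfree = len(mr_est) - 1
p_het = stats.chi2.sf(q, dfree)
i2 = max(0.0, (q - dfree) / q) * 100     # I^2: % of variation beyond chance

print(f"pooled MR estimate {pooled:.3f} (SE {pooled_se:.3f})")
print(f"Q = {q:.2f}, p = {p_het:.3f}, I2 = {i2:.1f}%")
```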
P3.12
A new performance measure of propensity score model
Emmanuel Caruana, Romain Pirracchio, Matthieu Resche-Rigon, Sylvie Chevret
Hôpital Saint Louis, APHP, Paris, France
The propensity score (PS) is increasingly used even in situations with small sample sizes or low prevalence of treatment. In such situations, one may need to select the variables to include in the PS model, to avoid model overparametrization. There is a need to choose between different PS models, including different covariates. Moreover, different approaches such as PS-matching or Inverse Probability of Treatment Weighting (IPTW) may be considered.
To assess whether the effects of confounding have been reduced, one can examine the distribution of measured baseline covariates between treated and untreated subjects in the matched sample, or in the sample weighted by the IPTW, using the standardized differences (SDs). Then, the sample where the SDs are minimized should be selected. Because discrepancies according to the covariates may exist, one would need to choose based on a summarized value of the SDs, such as the sum or the mean of the SDs. Better still, such a summary could take into account the relative importance, in terms of selection bias, of each covariate X.
The objective of this study is to develop and assess a summary measure (Z) of the SDs that would enable choosing between different samples, and that would take into account, for each covariate X1,...,n, the strength of its association with the outcome. We will present the results of a series of Monte Carlo simulations in which we study the ability of different Z measures to select the sample associated with the least biased estimation.
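One way to make the idea of an outcome-weighted summary of standardized differences concrete is sketched below: absolute SDs are computed in an IPTW-weighted sample and combined with weights proportional to each covariate's univariable association with the outcome. The specific weighting scheme and all data are illustrative assumptions, not the measure studied by the authors.

```python
# Illustrative summary of standardized differences (SDs), weighting each
# covariate by the strength of its association with the outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression

def weighted_std_diff(x, treated, w):
    """Absolute standardized difference of one covariate in a weighted sample."""
    def wmean_var(v, wt):
        m = np.average(v, weights=wt)
        return m, np.average((v - m) ** 2, weights=wt)
    m1, v1 = wmean_var(x[treated == 1], w[treated == 1])
    m0, v0 = wmean_var(x[treated == 0], w[treated == 0])
    return abs(m1 - m0) / np.sqrt((v1 + v0) / 2)

rng = np.random.default_rng(4)
n, p = 1500, 4
X = rng.normal(size=(n, p))
treated = rng.binomial(1, 1 / (1 + np.exp(-X @ [0.6, 0.3, 0.0, 0.0])))
y = rng.binomial(1, 1 / (1 + np.exp(-(-1 + X @ [1.0, 0.0, 0.5, 0.0]))))

ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
iptw = np.where(treated == 1, 1 / ps, 1 / (1 - ps))

sds = np.array([weighted_std_diff(X[:, j], treated, iptw) for j in range(p)])
# Outcome-association weights: absolute univariable log odds ratios for Y.
assoc = np.abs([LogisticRegression(max_iter=1000)
                .fit(X[:, [j]], y).coef_[0][0] for j in range(p)])
Z = np.sum(assoc / assoc.sum() * sds)
print("per-covariate SDs:", np.round(sds, 3), " summary Z:", round(Z, 3))
```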
P4 Clinical trials
P4.1
Application of the Parallel Line Assay to Assessment of Biosimilar Drug
Products
Jen-pei Liu1,2
1National Taiwan University, Taipei, Taiwan, 2National Health Research Institute,
Zhunan, Taiwan
Biological drug products are therapeutic moieties manufactured by a living system or organism. These are important life-saving drug products for patients with unmet medical needs. Because of their high cost, however, only a few patients have access to these life-saving biological products. Most early biological products will lose patent protection in the next few years. This provides the
opportunity for the generic versions of the biological products, referred to as
biosimilar drug products. The US Biologic Price Competition and Innovation
(BPCI) Act passed in 2009 provides an abbreviated approval pathway for
biological products shown to be biosimilar to, or interchangeable with, an FDA-licensed reference biological product. Hence, cost reduction and affordability of
the biosimilar products to the average patients may become possible.
However, the complexity and heterogeneity of the molecular structures,
complicated manufacturing processes, different analytical methods, and
possibility of severe immunogenicity reactions make the evaluation of equivalence
between the biosimilar products and their corresponding innovator product a
great challenge for statisticians and regulatory agencies. We propose to apply
the parallel assay to evaluate the extrapolation of the similarity in product
characteristics such as pharmacokinetic responses to the similarity in efficacy
endpoints with respect to continuous, binary, and censored endpoints. We also
report the results of a large simulation study to evaluate the performance, in
terms of size and power, of our proposed methods. Numerical examples are
presented to illustrate the suggested procedures.
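The parallel line assay underlying the proposal can be illustrated for a continuous response: the two products are assumed to have parallel linear log-dose-response lines, and the log relative potency is the horizontal shift between them, estimated as the ratio of the product coefficient to the common slope. The data below are simulated and purely illustrative of the classical assay, not the authors' extrapolation procedure.

```python
# Parallel line assay sketch: estimate the relative potency of a biosimilar (T)
# versus the reference product (R) from a common-slope log-dose model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
doses = np.array([1.0, 2.0, 4.0, 8.0])
rows = []
for product in ("R", "T"):
    potency = 1.0 if product == "R" else 0.8       # true relative potency 0.8
    for d in doses:
        for _ in range(10):
            resp = 2.0 + 1.5 * np.log(d * potency) + rng.normal(0, 0.3)
            rows.append(dict(product=product, logdose=np.log(d), response=resp))
df = pd.DataFrame(rows)

# Common slope for log-dose, separate intercepts per product (parallel lines).
fit = smf.ols("response ~ logdose + C(product)", data=df).fit()
shift = fit.params["C(product)[T.T]"]              # vertical offset of product T
slope = fit.params["logdose"]
log_rel_potency = shift / slope                    # horizontal shift between lines
print("estimated relative potency:", round(np.exp(log_rel_potency), 3))
```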
P4.2
Statistical derivation of a responder definition for the reduction of hot flushes
Christoph Gerlinger, Florian Hiemeyer, Thomas Schmelter
Bayer Pharma AG, Berlin, Germany
The clinical relevance of drug effects is often assessed considering the proportion of responders to treatment. Typically, the definition of who is a responder is given somewhat arbitrarily, e.g. as a 50% reduction of the outcome measure from baseline. However, for patient-reported outcomes, empirically validated definitions of treatment responders can be derived using statistical methods.
We performed a blinded data analysis from a placebo-controlled study to
investigate the efficacy of a treatment for moderate to severe hot flushes in
postmenopausal women. Patients recorded the number of moderate to severe
hot flushes each day in a diary and assessed their satisfaction with treatment
on a Clinical Global Impression scale. The primary outcome was the absolute
changes in the weekly number of moderate to severe hot flushes.
A non-parametric discriminant analysis with normal kernels and unequal
bandwidths was performed to determine the cut-off values between the
patients that felt minimally improved and those that felt unchanged or
worsened. Similarly, the cut-off value between (very) much improved and
minimally improved was calculated. Standard deviations for the cut-off values
were assessed by bootstrapping.
As a result, a responder was defined as having at least a (minimal)
improvement of 19.1 hot flushes per week at week 4 and a (substantial)
improvement of 40.3 hot flushes per week at week 12. This definition was
agreed with the FDA in a type A meeting and subsequently used as an
endpoint in a pivotal clinical trial.
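A simple way to picture a kernel-based cut-off derivation is sketched below: kernel density estimates of the change score are formed separately for patients who felt minimally improved and those who felt unchanged or worse, and the cut-off is taken where the prevalence-weighted densities cross. This is a generic KDE sketch on simulated data, not the exact discriminant procedure used in the study.

```python
# Find a responder cut-off as the crossing point of two kernel density estimates.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)
# Simulated weekly reductions in hot flushes for two anchor groups.
unchanged = rng.normal(5, 8, 300)       # felt unchanged or worse
improved = rng.normal(25, 10, 200)      # felt minimally improved

kde_u = gaussian_kde(unchanged)         # separate bandwidth per group
kde_i = gaussian_kde(improved)
prev_u = len(unchanged) / (len(unchanged) + len(improved))
prev_i = 1 - prev_u

grid = np.linspace(-20, 60, 2001)
diff = prev_i * kde_i(grid) - prev_u * kde_u(grid)
# Cut-off = smallest grid value where the 'improved' density starts to dominate.
crossings = grid[1:][(diff[:-1] < 0) & (diff[1:] >= 0)]
print("estimated cut-off (reduction per week):", round(float(crossings[0]), 1))
```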
P4.3
Optimisation of the two-stage randomised trial design when some participants have no preferred treatment
Stephen D Walter1, Robin M Turner2
1McMaster University, Hamilton, Ontario, Canada, 2University of Sydney, Sydney, New South Wales, Australia
In a two-stage randomised trial, participants are randomly divided into two subgroups. In one subgroup, treatments are randomly assigned, while in the other participants choose their treatment. One can then estimate the treatment effect, and the potentially important effects of patients' preferences between treatments (selection effects) and interactions between preferences and treatment received (preference effects).
We previously determined the optimum proportion of participants to randomise to the choice subgroup when all participants have a preference (Walter et al., Stat Med, in press). Here we generalise this, so that some participants have no treatment preference and so are re-randomised to treatment. The optimum allocation to the choice group now also depends on the proportion of undecided participants.
The optimum proportion in the choice group ranges between 40% and 50% for most reasonable scenarios. It is lower if preferences for one treatment dominate, or if the proportion of undecideds is low; otherwise the optimum is typically slightly below 50%. However, the variances of the selection and preference effects increase if preference for one treatment dominates, or if many participants are undecided.
These ideas will be illustrated using data from a two-stage randomised trial comparing medical vs. surgical management strategies for women with heavy menstrual bleeding, where approximately 70% of participants in the choice group had no treatment preference.
We conclude that the two-stage design can be optimised even when some participants have no treatment preference. However, the absolute variation in these estimates becomes large if the proportion of undecided participants is large.
P4.4
Revisiting baseline covariate adjustment in randomization and analysis of large clinical trials
Yuko Palesch, Wenle Zhao, Yanqiu Weng
Medical University of South Carolina, Charleston, SC, USA
Baseline covariate adjustments in randomization and/or in analysis have yielded a vast literature of pros and cons of adopting these approaches in clinical trials. The contentious issues involve statistical operating characteristics, implementation, and clinical interpretation. Through simulation using a bootstrapping technique on data from a real large clinical trial with two treatment arms, we assess the operating characteristics of three randomization approaches that are easily implementable -- simple (SR), stratified permuted block (SPB), and a recently proposed minimal sufficient balance algorithm (MSB), with the latter two adjusting for three covariates with strong prognostic values -- combined with two analysis approaches -- with and without adjustment for the three covariates. The results show that, as expected, the type I error probability for the adjusted analysis is slightly conservative for both SPB (0.049) and MSB (0.048), and more so if the analysis is not adjusted for the covariates (0.035 and 0.033, respectively). Power is similar among the three randomization methods for adjusted analyses, but the unadjusted analyses yield substantially reduced power. For other randomization performance metrics, SR and MSB are ideal in the probabilities of deterministic assignments (0) and correct guesses (0.5 and 0.54, respectively), but MSB best controls the probability of significant imbalances in the covariates between the treatment arms. We conclude that SR with adjusted analysis may be the best approach, with a caveat that it may be more vulnerable to challenges in trial result interpretation due to covariate imbalances. Simulation results under additional scenarios will also be presented.
P4.5
Is a Controlled Randomised Trial the Non-plus-ultra Design? An Advocacy for Comparative, Controlled, Non-randomised Trials
Wilhelm Gaus, Rainer Muche
University of Ulm, Ulm, Germany
Background. Many people consider a controlled randomised trial (CRT; identical to a randomised controlled trial, RCT) to be the non-plus-ultra design. However, CRTs also have disadvantages. The problem is not randomisation itself. Today, patients are educated, self-determined, and self-responsible. The problem is to obtain informed consent for randomisation and masking of therapies from today's patients according to current legal and ethical standards. We do not want to de-rate CRTs, but we would like to contribute to the discussion on clinical research methodology.
Situation. Informed consent to a CRT and masking of therapies plainly selects patients. The excellent internal validity of CRTs can be counterbalanced by poor external validity, because internal and external validity are antagonists. In a CRT, patients may feel like guinea pigs; this can decrease compliance, cause protocol violations, reduce self-healing properties, suppress unspecific therapeutic effects and possibly even modify specific efficacy.
Discussion. A control group (comparative study) is most important for the degree of evidence achieved by a trial. Study control by a detailed protocol and good clinical practice (controlled study) is second in importance, and randomisation and masking is third (thus the sequence CRT). Controlled non-randomised trials (CnRTs) are just as ambitious and detailed as CRTs. Four examples are given.
Recommendation. We recommend that clinicians and biometricians perform more high-quality CnRTs. They combine good internal and external validity, better suit daily medical practice, show better patient compliance and fewer protocol violations, deliver estimators unbiased by alienated patients, and provide a clearer explanation of the achieved success.
P4.6
A biomarker-based design for a controlled phase II trial in oncology
Natalja Strelkowa, Martin Stefanic, Thomas Bogenrieder, Frank Fleischer
Boehringer-Ingelheim Pharma GmbH & Co KG, Biberach, Germany
We developed a statistical analysis framework for threshold determination of a
potentially predictive biomarker based on the outcomes of early phase clinical
trials. For this approach we assume the knowledge of the distributional
behavior for biomarker levels within the patient population originating from
previous trials or in vitro data. We simulate biomarker-stratified patient
recruitment and the conduct of an exploratory controlled phase II clinical study
using progression-free survival as the primary endpoint. In our simulations the
treatment success, measured on a hazard ratio scale, is assumed to be
dependent on the biomarker level. Dependence is modelled by either a change
point model or a piece-wise linear dependency. The results of our simulations
provide estimates for a reasonable sample size and duration of the study as
well as the precision of estimated cut off values for the biomarker level at
which the investigated treatment becomes more effective than the standard
therapy. These results can afterwards be used in the planning of e.g. a phase
III enrichment design.
P4.7
Regression model to analyze the continuous primary end point in RCTs when
the treatment effect depends on baseline values of the outcome variable
Mathai A.K.1, Murthy B.N.2
1MakroCare Clinical Research, Hyderabad, Andhra Pradesh, India, 2National
Institute of Epidemiology, Chennai, Tamil Nadu, India
The primary aim of conventional randomized controlled clinical trials is to
investigate the efficacy of test drug in comparison with standard drug / placebo
for a particular disease. However, even after randomization, sometimes, the
patients differ substantially with respect to the baseline value of the outcome
variable and there could be a possibility that the response to treatment
depends on the baseline values of the outcome variable. When there are
baseline-dependent treatment effects, differences among treatments vary as a
function of baseline level. Usually, in this situation, Analysis of Covariance
(ANCOVA) is the analytical option for statisticians by considering the baseline
value of the outcome variable as the covariate; treatment groups would be the
factor and the final value of the outcome variable as the dependent variable.
Although variation in outcome variable associated with baseline value is
accounted for in ANCOVA, analysis of individual differences in treatment effect
is precluded by the homogeneity of regression assumption. This assumption
requires that expected differences in outcome among treatments are constant
across all baseline levels. To overcome this difficulty, we propose a general
approach for ‘n' treatment groups under two situations, namely when the data follow a normal distribution or otherwise, which will be demonstrated with real-life data from clinical trials.
P4.8
Confidence intervals for the ratio of AUCs in cross-over bioequivalence trials
Thomas Jaki1, Martin Wolfsegger2
1Lancaster University, Lancaster, UK, 2Baxter Innovations GmbH, Vienna, Austria
Cross-over designs are recommended by regulatory agencies for the evaluation of average bioequivalence using the non-compartmental approach for estimation of pharmacokinetic parameters. Bioequivalence is typically assessed based on the confidence interval inclusion approach for the ratio of product averages, with the traditional bounds of 0.8 to 1.25.
In this talk we will introduce a non-compartmental approach for obtaining confidence intervals for the ratio of AUCs in two-sequence, two-period cross-over studies. The estimator and corresponding confidence interval are constructed to allow sparse sampling, making the approach applicable in paediatric studies and when missing data are present. We evaluate the performance of the intervals in a simulation study and illustrate it in an example.
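A non-compartmental AUC ratio with a simple confidence interval can be sketched as below for a complete-sampling design; the published approach additionally handles sparse or batch sampling, which this toy example does not attempt. All concentrations are simulated, and a subject-level bootstrap percentile interval stands in for the authors' interval construction.

```python
# Ratio of AUCs (test/reference) with a subject-level bootstrap percentile CI.
import numpy as np

rng = np.random.default_rng(7)
t = np.array([0.0, 0.5, 1, 2, 4, 8, 12, 24])         # sampling times (h)
n = 24

def profile(dose_scale):
    ka, ke = 1.2, 0.15
    c = dose_scale * (np.exp(-ke * t) - np.exp(-ka * t))
    return c * rng.lognormal(0, 0.15, size=(n, t.size))

conc_ref = profile(100.0)                             # reference product
conc_test = profile(95.0)                             # test product (~5% lower)

auc_ref = np.trapz(conc_ref, t, axis=1)               # per-subject AUC(0-24h)
auc_test = np.trapz(conc_test, t, axis=1)
ratio = auc_test.mean() / auc_ref.mean()

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                       # resample subjects
    boot.append(auc_test[idx].mean() / auc_ref[idx].mean())
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC ratio {ratio:.3f}, 95% bootstrap CI ({lo:.3f}, {hi:.3f})")
```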
P4.9
Interaction of treatment with a continuous variable: simulation study of
significance level for several methods of analysis
Willi Sauerbrei1, Patrick Royston2
1Institute of Medical Biometry and Informatics, University Medical Center
Freiburg, Freiburg, Germany, 2MRC Clinical Trials Unit and University College
London, London, UK
Standard methods for modelling treatment-covariate interactions with
continuous covariates are categorisation or an assumption of linearity. Both
approaches are easily criticized, but for different reasons (1). To retain all
information in the data and model the interaction flexibly, we have proposed
MFPI, a parametric approach based on fractional polynomials (FPs).
Essentially, MFPI extends the linear interaction model by allowing non-linear
functions from the FP class. Four variants were suggested, providing greater or
lesser flexibility. In general, the approach appears promising but further
evaluation is still needed.
We conducted a large simulation study featuring different scenarios with
varying true functions and ‘well behaved' and ‘badly behaved' covariate
distributions. We investigated type 1 error rates for the four MFPI variants. We
also considered an approach in which FPs are replaced with regression splines
including differing numbers of knots, a linear function, and models based on
categorisation. Simulations were conducted in a univariate setting, but
extensions to a multivariable setting are straightforward.
The estimated type 1 error rates are close to nominal for most of the
procedures, but for badly behaved data, increased error rates are seen in
several scenarios.
The importance of checking for interactions between treatment and a
continuous covariate is illustrated by reanalysis of data from an RCT
comparing two treatments in cancer patients.
1. Royston, P., Sauerbrei, W. (2004): A new approach to modelling interactions
between treatment and continuous covariates in clinical trials by using
fractional polynomials. Statistics in Medicine, 23:2509-2525.
P4.10
Comparison of groups in the presence of bimodality
John-Philip Lawo1, Tina Müller2
1CSL Behring, Marburg, Germany, 2Bayer Pharma, Berlin, Germany
Recently, there has been an increasing number of published papers in various research areas [e.g. cancer, eye diseases, pain, RNA/gene expression or animal studies] that are based on bimodally distributed data. If the bimodality is
ignored during data analysis and standard statistical tools such as t-test and
Wilcoxon test (both based on unimodality of the data distribution) are
employed, this incorrect choice of methods can result in misleading outcomes
and a loss of power.
As a remedy, we propose two strategies for a more appropriate way of dealing
with bimodal data: the first is based on an adaptation of the Kolmogoroff-Smirnov test. The second uses clustering to create subsamples for
comparison. The two approaches are compared with the t-test and Wilcoxon
test with regard to power by means of simulation studies, assuming small to
moderate sample sizes. Results demonstrate better properties of the two
proposed approaches in the presence of at least one bimodally distributed
group.
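One of the two proposed strategies, clustering to create subsamples before comparison, can be sketched generically as follows: a two-component Gaussian mixture assigns observations to modes within each group, and the groups are then compared within the corresponding modes. This is a schematic stand-in on simulated data, not the authors' exact procedure.

```python
# Cluster-then-compare sketch for bimodally distributed groups.
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
# Group A: bimodal; group B: same lower mode, shifted upper mode.
a = np.concatenate([rng.normal(0, 1, 150), rng.normal(6, 1, 150)])
b = np.concatenate([rng.normal(0, 1, 150), rng.normal(7, 1, 150)])

def split_modes(x):
    gm = GaussianMixture(n_components=2, random_state=0).fit(x.reshape(-1, 1))
    labels = gm.predict(x.reshape(-1, 1))
    order = np.argsort(gm.means_.ravel())            # 0 = lower mode, 1 = upper
    return [x[labels == k] for k in order]

a_low, a_high = split_modes(a)
b_low, b_high = split_modes(b)

print("overall t-test p:", round(stats.ttest_ind(a, b).pvalue, 4))
print("lower-mode p:    ", round(stats.ttest_ind(a_low, b_low).pvalue, 4))
print("upper-mode p:    ", round(stats.ttest_ind(a_high, b_high).pvalue, 4))
```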
P4.11
Analysis of multicentre trials with continuous or binary outcomes
Brennan C Kahan, Tim P Morris
MRC Clinical Trials Unit, London, UK
Many multicentre trials randomise patients using permuted blocks stratified by
centre. It has previously been shown that stratification variables used in the
randomisation process should be adjusted for in the analysis in order to obtain
correct inference. Centre effects can be accounted for using either fixed effects or random effects for continuous outcomes, and using either fixed effects, random effects, generalised estimating equations, or Mantel-Haenszel procedures for binary outcomes. These analysis methods are compared in a large simulation study. For our
simulations, we varied the numbers of centres, the number of patients per
centre, the intraclass correlation coefficient, the method of randomisation, and
the distribution of patients across centres. Results were compared in terms of
type I error rate, power, and precision of treatment effect estimates.
P4.12
Are Appropriate Outcome Measures Being Used in Open-label Randomised
Trials?
Suzie Cro, Sunita Rehal, Brennan Kahan
Medical Research Council Clinical Trials Unit, London, UK
Randomised controlled trials (RCTs) are the gold standard for estimating
treatment efficacy. Blinding is an important design feature known to limit bias in
RCTs, however in some situations blinding is difficult or impossible to achieve
and consequently an open-label trial is necessary. Subjective outcomes could
lead to bias if those in charge of assessing the outcomes are not blinded, as
they may rate the outcome differently depending on treatment arm. Therefore,
in open-label trials it is necessary to ensure that outcomes can be either
objectively measured, or that those in charge of assessing the outcomes are
blinded. We define different types of outcome measures, and discuss whether
they are appropriate for use in open-label trials. We then review RCTs
published in 2010 from four general medical journals (BMJ, The Lancet, JAMA,
and NEJM) to investigate whether trialists are using appropriate outcome
measures in open-label studies, and whether inappropriate outcome measures
could lead to biased estimates of treatment effect.
P4.14
Within-center imbalance after balanced allocation using minimization method:
Center as a stratification factor in multi-center clinical trials?
Shu-Fang Hsu Schmitz1,3, Hong Sun2, Qiyu Li3, Stefan Fankhauser3
1Institute of Mathematical Statistics and Actuarial Science, University of Bern,
Bern, Switzerland, 2Institut für Medizinische Biometrie und Medizinische
Informatik, Universitätsklinikums Freiburg, Freiburg, Germany, 3Swiss Group
for Clinical Cancer Research (SAKK), Coordinating Center, Bern, Switzerland
Background: If the number of strata within a factor is large and/or the
distribution among strata is uneven, treatment allocation might be unbalanced
in some strata, even using minimization. Such imbalance is likely when center
is a stratification factor in multi-center trials. It is recommended to perform
analyses adjusted for stratification factors after balanced allocation.
Objective: To compare within-center allocation imbalance between including
and excluding center as a stratification factor.
Method: Seven randomized phase II-III trials conducted by our group were
identified. Hypothetical treatment allocation was generated using minimization
for the given patient data and for simulated data from 54 scenarios, with or
without center as a stratification factor. Within-center imbalance was compared.
Results: Sample sizes of the seven trials are 33-319, each with 9-35 centers,
and the results of including/excluding center as a stratification factor are:
- Number of combinations: 36-1120 / 4-32
- Patients per combination: 0.1-1.9 / 2.1-39.9
- Centers with imbalance ≥2 patients (%): 0-31 / 10-59
- Centers with only one treatment (%): 16-67 / 28-67
The proportion of centers with imbalance by ≥2 patients is higher when
excluding center for minimization. A similar pattern is observed for the
proportion of centers receiving one treatment, but the difference is small. The
simulations confirmed the patterns and the difference in imbalance by ≥2
patients decreases when the number of patients per combination decreases.
Conclusions: The advantage of including center in minimization for preventing
≥2 patients imbalance is weak for small trials with many combinations. The
advantage for proportion of centers with only one treatment is generally small.
Caution should be given to one-treatment centers due to difficulty for adjusted
analyses afterwards.
P4.13
Sample size corrections for varying cluster sizes when testing treatment effects in two-armed randomized trials with heterogeneous clustering
Math Candel, Gerard Van Breukelen
Department of Methodology and Statistics, Maastricht University, Maastricht, The Netherlands
When comparing two different kinds of group therapy, or two individual treatments where patients within each arm are nested within care providers, clustering of observations may occur in each of the arms. For such designs the efficiency loss due to varying cluster sizes is studied, where other studies are extended in that allowance is made for between-arm heterogeneity in terms of (a) the intraclass correlation, (b) the outcome variance, (c) the average cluster size and (d) the number of clusters. In case of a linear mixed model analysis, employing maximum likelihood estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. The asymptotic relative efficiency is a weighted harmonic mean of the efficiency losses of two trials resulting from duplicating each of the two treatment arms. From this expression Taylor approximations of the relative efficiency are derived. An extensive Monte Carlo simulation for small sample sizes shows when both approaches are accurate and allows for deriving general guidelines, based on both approaches, to compensate for the efficiency loss due to varying cluster sizes when planning the sample size of a two-armed trial with heterogeneous clustering.
P4.15
A Semiparametric Accelerated Failure Time Mixture Model for Latent Subgroup Analysis of a Randomized Clinical Trial
Lily Altstein1, Gang Li2
1Novartis Institutes for Biomedical Research, Inc., Boston, MA, USA, 2University of California at Los Angeles, Los Angeles, CA, USA
We study a semiparametric accelerated failure time mixture model for estimation of a biological treatment effect on a latent subgroup of interest with a time-to-event outcome in randomized clinical trials. Latency is induced because membership is observable in one arm of the trial and unidentified in the other. This method is useful in randomized clinical trials with all-or-none noncompliance when patients in the control arm have no access to active treatment and in, for example, oncology trials when a biopsy used to identify the latent subgroup is performed only on subjects randomized to active treatment. We derive a computational method to estimate model parameters by iterating between an expectation step and a weighted Buckley-James optimization step. The bootstrap method is used for variance estimation. The performance of our method is corroborated in simulation. We illustrate our method through an analysis of a multicenter selective lymphadenectomy trial for melanoma.
P4.16
Analysis of recurrent event data - an applied comparison of methods using
clinical data
Katrin Roth1, Vivian Lanius1, Peter Reimnitz2
1Bayer Pharma AG, Berlin, Germany, 2Bayer Pharma AG, Wuppertal, Germany
Clinical data is often observed as recurrent events, e.g. time to / number of
exacerbations of chronic diseases in a given time period. In general, the aim is
to reduce the number of unfavourable events through a certain intervention.
There are several methods available for analyzing such data, either focussing
on the time to (recurrent) event(s) or the number of events.
We compared the results of analysing real clinical recurrent event data by a
Cox proportional hazards model for time to first event, an Andersen-Gill model
for time to recurrent event, and generalized linear models using a cumulative
logit distribution, Poisson distribution and negative binomial distribution for
modelling the number of events.
While all methods gave similar results in terms of parameter estimates and p-values, a closer look showed differences between the methods. Considering
the time to event, there was no gain in information in estimating the treatment
effect when analyzing recurrent events as compared to the simple time to first
event analysis. Regarding the different generalized linear models, the Poisson
model needs to be adjusted for overdispersion to provide at least a reasonable
fit. The cumulative logit model intuitively seems less appropriate for count data,
but provides a reasonable fit as long as higher numbers of events are combined
as one category. The best fit was observed for the model based on the
negative binomial distribution, which was also observed for other clinical count
data.
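For the count-model part of this comparison, a minimal sketch of Poisson versus negative binomial GLMs with a log follow-up-time offset is shown below using statsmodels; the time-to-event models mentioned in the abstract are not reproduced, and the overdispersed data are simulated for illustration only.

```python
# Poisson vs negative binomial GLM for numbers of recurrent events,
# with log follow-up time as an offset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 600
treat = rng.binomial(1, 0.5, n)
follow_up = rng.uniform(0.5, 2.0, n)                 # years at risk
frailty = rng.gamma(shape=2.0, scale=0.5, size=n)    # source of overdispersion
rate = 1.5 * np.exp(-0.3 * treat) * frailty
counts = rng.poisson(rate * follow_up)

X = sm.add_constant(treat.astype(float))
offset = np.log(follow_up)

pois = sm.GLM(counts, X, family=sm.families.Poisson(), offset=offset).fit()
negb = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.5),
              offset=offset).fit()

print("Poisson  treatment rate ratio:", round(np.exp(pois.params[1]), 3))
print("Neg. bin treatment rate ratio:", round(np.exp(negb.params[1]), 3))
# With overdispersed data the Poisson standard errors are too small;
# compare pois.bse[1] with negb.bse[1].
```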
P4.17
Probability of Inferiority in Current Non-Inferiority Trials
Primrose Beryl Gladstone, Werner Vach
Institute of Medical Biometry and Medical Informatics, Freiburg, Germany
Background: The risk of accepting too many inferior new treatments has been posed as an inherent disadvantage of non-inferiority trials. One of the major determinants of this risk is the non-inferiority margin. There has been some evidence of wide variability of the non-inferiority margin in practice. The aim of our study is to quantify the risk of accepting inferior treatments among current non-inferiority trials and to assess the impact of the non-inferiority margin on this risk.
Methods: We collected two datasets of current non-inferiority trials - registered and published trials. For each NI trial, we calculated its probability of inferiority and the true treatment effect based on the margin used and the sample size used, for three empirical pre-trial distributions - optimistic, moderate and pessimistic scenarios, where the average true treatment effect is 0, slightly negative and distinctly negative respectively, and the rate of superiority of the new treatments is 50%, 16% and 2.5% respectively. Distributions of the inferiority probabilities and the treatment effects among these current NI trials were examined to reflect the current risk of having accepted inferior treatments in successful NI trials.
Results: Standardised margins were obtained from the extracted data belonging to 49 registered NI trials and 104 published NI trials. The corresponding inferiority probabilities suggested a substantial amount of inferiority among the new treatments accepted in these trials, even under the assumption of an optimistic pre-trial probability. Under the moderate scenario, we could observe a distinct trend towards degradation of treatment effects in successful trials.
P4.18
Blinded estimation of within subject variance
Rachid el Galta
Merck Research Laboratories, MSD, Oss, The Netherlands
For a randomized, double-blind, placebo-controlled, 4-period cross-over non-inferiority trial, an initial sample size was estimated using an expected treatment effect and a presumed within-subject variance of the normally distributed primary endpoint. Since there was uncertainty regarding the within-subject variability, a blinded interim analysis was planned to estimate the within-subject variance and thus re-estimate the sample size.
Different methods for blinded estimation of the within-group variance in randomized clinical trials are available. Gould and Shih (1992) have proposed an EM algorithm based procedure to account for missing information on treatment status. Chen and Kianifard (2003) and van der Meulen (2005) considered using a randomization block of size 2 or 4 and used the maximum likelihood estimate of the mixture of normally distributed data.
In contrast to a parallel design, power calculations for cross-over trials are usually based on the within-subject variance. This presentation shows how these existing methods were applied to estimate the within-subject variance of the primary endpoint of the trial above at the blinded interim look.
References
Gould AL, Shih WJ. Sample size re-estimation without unblinding for normally distributed outcomes with unknown variance. Communications in Statistics (A) - Theory and Methods 1992; 21:2833-2853.
Chen M, Kianifard F. Estimating treatment difference and standard deviation with blinded data in clinical trials. Biometrical Journal 2003; 45:135-142.
van der Meulen E. Are we really that blind? Journal of Biopharmaceutical Statistics 2005; 15:479-485.
P4.19
Analysis of a case scenario cross-over trial: an application to a medical devices manikin study
Naohiro Yonemoto, Haruyuki Yuasa, Hiroyuki Yokoyama, Hiroshi Nonogi
National Center of Neurology and Psychiatry, Tokyo, Japan
Background: Analysis of a cross-over trial with sequence scenario (intervention) in medical devices training using a manikin motivated our work. We illustrate how a scenario directly and indirectly affects the outcomes.
Methods: We consider strategies for modeling repeated sequential scenarios and focus on some model-based approaches. The issues are addressed using the example of a comparison of two medical devices for the first intubation attempt in a manikin setting. The outcome is binary data (success or failure). We fit several models (mixed-effect, GEE and MSMs), compare them, and check some assumptions.
Results and Conclusion: The model-based approach appears to be more suitable for the analysis of this study design. We need to select an appropriate model for the study design and for the interpretation of results.
P4.20
Investigating the Strength of the Association between the Amplitude of the Impedance Cardiogram (ICG), Thrust and Depth during CPR compressions
Paul McCanny1, Gloria Crispino-O'Connell1, Rebecca DiMaio1, Andrew Howe2, David McEneaney3, Paul Crawford1, David Brody1, John Anderson1
1Heartsine Technologies, Belfast, N. Ireland, UK, 2Queen's University Belfast, Belfast, N. Ireland, UK, 3Craigavon Area Hospital, Belfast, N. Ireland, UK
Background: Distinctive changes in the morphology of the Impedance
Cardiogram (ICG) related to chest compressions can be used to assess
Cardiopulmonary Resuscitation (CPR) efficacy. Following retrospective
analysis of the ICG traces, it was observed that the changes in impedance
cardiogram during compressions are of higher amplitude when compared to
the changes in normal perfusing rhythms. There was evidence that the
amplitude of the ICG varied according to the force applied to compressions.
Design: The objective of the study is to establish the strength of the
association between thrust (force) and ICG amplitude during compressions and
between depth and ICG amplitude in a porcine model and a human pilot study.
The primary variable is the relationship between thrust and ICG amplitude, in
terms of correlation coefficients. The method used for the computation of the
primary variable is the Bland-Altman approach which computes correlation
coefficients with repeated measures [1], where the subject is modelled as a
fixed effect term.
Results: This study found some evidence that the relationship between thrust
and ICG (in terms of ADU) could be modeled using a quadratic expression.
The relationship between Depth and ICG was also investigated and it was
found that the depth was highly correlated with the ICG (in terms of ADU).
Those results suggest that ICG has the potential to be used as a measure of
the quality of compression during CPR.
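The repeated-measures correlation referred to above (correlation with the subject modelled as a fixed effect) can be sketched as follows: subject means are removed from both variables by fitting subject indicators, and the residuals are then correlated. This is a generic re-implementation on simulated data, not the study's analysis, and the inflated degrees of freedom of the naive correlation p-value are not corrected here.

```python
# Within-subject (repeated measures) correlation between thrust and ICG
# amplitude: remove subject effects, then correlate the residuals.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
subjects, reps = 10, 30
rows = []
for s in range(subjects):
    base_thrust = rng.normal(300, 40)            # subject-specific offsets
    base_icg = rng.normal(50, 10)
    thrust = base_thrust + rng.normal(0, 25, reps)
    icg = base_icg + 0.12 * (thrust - base_thrust) + rng.normal(0, 3, reps)
    rows += [dict(subject=s, thrust=t, icg=i) for t, i in zip(thrust, icg)]
df = pd.DataFrame(rows)

# Residualize both variables on subject (fixed effect), then correlate.
res_thrust = smf.ols("thrust ~ C(subject)", data=df).fit().resid
res_icg = smf.ols("icg ~ C(subject)", data=df).fit().resid
r = np.corrcoef(res_thrust, res_icg)[0, 1]
print("within-subject correlation:", round(r, 3))
```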
P4.21
New approaches for design and analysis of pediatric pharmacokinetic and pharmacokinetic/pharmacodynamic studies
Jixian Wang, Kai Grosch, Emmanuel Bouillaud
Novartis Pharma AG, Basel, Switzerland
Ethical and practical constraints in pediatric clinical pharmacology studies restrict pharmacokinetic (PK) sampling in young children. Frequently, only sparse blood sampling is feasible, preventing the application of non-parametric calculation of individual PK parameters; hence population PK modeling is the common PK analysis and the basis of PK-pharmacodynamic (PD) analysis. The application of population PK models to children's data requires a structural model which may not have been established or is only applicable to adults. Applying inadequate popPK models may lead to inadequate model fitting, and to uncertainty in model selection and parameter estimation. As a model-independent approach we propose a non-parametric estimation of PK parameters and the time profile based on linear mixed models and interpolation using a properly designed sparse sampling scheme. To determine PK sampling schemes we propose using D-optimal designs for nonlinear mixed population PK models as the basis, but we also consider their performance in non-parametric PK parameter and time profile estimation as well as robustness to parameter misspecification. We also consider optimal designs for non-parametric estimation based on the mixed model approach and the practical implementation of multi-objective designs for model and non-model based approaches, modeling of the PK-PD relationship as well as model validation. To confirm small sample properties of these designs, we propose using simulation for fitting population PK models and non-parametric PK parameter estimation. The proposed approaches were applied to the design of a pediatric trial under a real scenario, and practical issues and considerations were examined.
P4.22
Propensity score and area under ROC curves in repeated measures clinical studies
Alberto Morabito
Università degli Studi di Milano, Milano, Italy
The receiver operating characteristic (ROC) curve is widely used for the statistical evaluation of medical tests for classification and prediction of treatment effects, as well as for judging the discrimination ability of different statistical models. The logistic model as well as the generalized linear mixed model (GLMM) are widely used for predicting probabilities of the positivity of a disease or condition, and the estimated probability is then used as a biological marker for constructing the ROC curve and computing the area under the curve. The propensity score is widely used for the estimation of average predicted treatment effects in non-randomized studies. Furthermore, calculated summary statistics of the area under a ROC curve have already been proposed in the context of repeated measures designs. We compare two approaches to computing the prediction through the area under the ROC curve in the case of a therapeutic decision in a critical condition.
An example deals with the need for sedation in the ICU. The aim is to provide comfort and minimize anxiety. However, the adverse effects of deep sedation are noteworthy, and the optimal end point of sedation in intensive care unit patients is still debated. We analyzed whether a level 2 on the Ramsay Scale (i.e., awake, cooperative, oriented, tranquil patient) is suitable for an invasive therapeutic approach. Along with the need to optimize the use of ICU resources, "conscious" sedation is becoming increasingly attractive in the ICU. Guidelines suggest that sedation should be individualized and administered for the shortest time at the lowest effective dose.
P4.23
Optimization of managing the lost to follow-up patients in Phase II oncology trials
Lisa Belin1, Philippe Broët2, Yann De Rycke1
1Institut Curie, Paris, France, 2JE2492, Paris Sud University, Villejuif, France
Phase II oncology trials currently evaluate a binary endpoint, usually the response to treatment. This endpoint is evaluated at a given point in time, which is the same for all subjects. But a problem occurs when some patients are lost to follow-up. Classical study designs such as Fleming or Simon's plans are not adapted to this situation. The decision rule is based on a binary criterion. On which patients should the response rate be evaluated?
The following approaches are usually used:
- Consider patients lost to follow-up as treatment failures.
- Exclude patients lost to follow-up.
- Estimate the response rate by the Kaplan-Meier method, censoring patients lost to follow-up at the date of their last known contact. The stopping boundary defined in the protocol then needs to be converted to a percentage.
- Replace patients lost to follow-up by newly included patients. If the number of patients analysed is different from the number of patients planned to be included, the stopping boundary will also be converted to a percentage.
Our objective is to study how censoring can interfere with classical phase II analysis plans in terms of bias, type I and type II errors and the increase of sample size required for the analysis, in order to make recommendations on the use of the strategies mentioned above. This study will be carried out on simulated data. Simulations are done according to Simon's design parameters and the rate of lost to follow-up patients. The strategies were also applied to real data.
P4.24
Blocking in Unblinded Randomized Clinical Trials
Robert Parker
University of Michigan, Ann Arbor, MI, USA
Blocking is routinely used in randomized clinical trials. It ensures treatment groups are balanced for temporal and seasonal factors. In multi-center studies, randomization is frequently stratified by site to ensure that site-specific factors
are balanced across treatment groups. When a study is double blind, blocking would appear to have substantial potential advantages with minimal disadvantages.
In some studies, however, the treatment received is known by study staff. This
is especially common in clinical effectiveness trials which often forgo any
attempt to mask treatment received. As an example, in a study assessing the
use of 6 months of continuous positive airway pressure (CPAP) in children with
sleep problems, no sham treatment would be used in the control (standard of
care; SOC) group. If blocking is used, study staff will be able to predict to a
greater or lesser extent whether the next potential participant will be allocated
to CPAP or SOC. This can have substantial impact on how actively study staff
attempt to enroll a potential participant. Because CPAP requires considerable
effort to be administered successfully, if staff expect the next participant to be
allocated to CPAP, staff may (consciously or unconsciously) preferentially enroll
"compliant" "dedicated" and "committed" parents while attempting to avoid
enrolling parents expected to be less diligent in administering CPAP. In this
presentation we assess the ability of staff to correctly guess the treatment
assignment with common blocking schemes. More importantly, we also suggest
variations from perfect blocking which will reduce such problems.
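The "correct guess" problem described above is easy to quantify by simulation. The sketch below computes how often a staff member who always guesses the under-represented arm within the current permuted block (breaking ties at random) guesses the next allocation correctly, for a few block sizes; this optimal-guessing strategy is an assumption of the illustration, not a result from the abstract.

```python
# Probability of correctly guessing the next assignment under permuted blocks,
# assuming the guesser always picks the arm used less so far in the block.
import numpy as np

rng = np.random.default_rng(12)

def correct_guess_rate(block_size, n_blocks=20000):
    correct = total = 0
    for _ in range(n_blocks):
        block = np.array([0] * (block_size // 2) + [1] * (block_size // 2))
        rng.shuffle(block)
        counts = [0, 0]
        for assignment in block:
            if counts[0] == counts[1]:
                guess = rng.integers(0, 2)          # no information: coin flip
            else:
                guess = int(counts[0] > counts[1])  # arm with fewer allocations
            correct += int(guess == assignment)
            total += 1
            counts[assignment] += 1
    return correct / total

for b in (2, 4, 6, 8):
    print(f"block size {b}: P(correct guess) = {correct_guess_rate(b):.3f}")
```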
P4.25
Non-Inferiority Trials of Non-Pharmaceutical Interventions
Karen Smith
University of Leicester, Leicester, UK
Both the FDA and EMEA have produced guidelines for pre-authorisation (pre-approval) non-inferiority trials of pharmaceuticals. Ideally such trials have two
aims; to establish, indirectly, the effect of a new intervention relative to placebo
and to estimate the relative effect of the new drug compared to a reference
drug.
There has been a recent increase in the number of non-inferiority trials
conducted to evaluate non-pharmaceutical interventions and it's not clear
whether, and in what ways, the regulatory guidelines might be useful in the
design of such trials. Particular issues of concern include the circumstances in
which a non-inferiority trial would be the most appropriate design, the choice of
primary and secondary outcomes, choice between a single primary outcome or
co-primary outcomes and determination of the non-inferiority margin.
Reporting of non-inferiority trials is acknowledged to be poor and in 2006 an
extension to the CONSORT statement was published. A review of non-inferiority trials reported since 2007 has been conducted, focussing on
adherence to the reporting guidelines. The particular design issues highlighted
by this review will be presented, with particular reference to non-pharmaceutical interventions.
P4.26
How to Select Readers for Clinical Trials When There is No Gold Standard
Jacob Agris1, Dan Haverstock1, Aida Aydemir1, Julie Agris2
1Bayer HealthCare, Pine Brook, NJ, USA, 2Hofstra University, Hempstead, NY, USA
A preponderance of clinical trials rely on imaging endpoints for primary efficacy, which require blinded reader interpretation to eliminate any clinical bias. This blinded reader selection has become increasingly critical to the success of trials, so that the reader effect is minimized.
Reader selection is often performed by review of resumes, which is weak evidence. Readers are sometimes selected based on their inter-reader consistency. We evaluated whether reader selection can be optimized for the pooled outcome variables, test and comparator, when there is no gold standard, like pathology. Many trials have an internal, image-based, standard of reference (SoR), where optimal reader selection is dependent on how the readers interpret the SoR.
Optimization with an internal SoR requires that reader interpretations of the test and comparator are correlated, in aggregate, with the interpretation of the SoR:
R = r1/(σ2σ3) + r2/(σ1σ3) + r3/(σ1σ2),
where σ denotes a standard deviation and r a correlation.
This was used to select 3 readers from a pool of 9 readers for a Phase III trial. The aggregate correlation values varied from -0.7 to 0.6, demonstrating that there are better and worse combinations of readers for optimal outcome measures. This metric was not identical to the cluster of readers with the highest intra-reader correlation, because high inter-reader correlation may be consistently wrong when compared to the internal SoR, artificially reducing sensitivity/specificity and increasing variance. Alternatively, selecting readers that correlate with the internal SoR will decrease the reader effect. This is independent of the test versus comparator since they were pooled, eliminating bias. This minimizes the reader impact so that only pure differences between the efficacy of the compound versus the comparator remain.
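The aggregate metric R can be computed directly from three readers' scores against the internal SoR; the sketch below does this for one hypothetical combination of readers with simulated scores, interpreting the formula as printed (each reader's correlation with the SoR divided by the product of the other two readers' standard deviations). The interpretation of the symbols is an assumption.

```python
# Aggregate correlation metric for a candidate combination of three readers:
# R = r1/(s2*s3) + r2/(s1*s3) + r3/(s1*s2), where r_i is reader i's correlation
# with the internal standard of reference (SoR) and s_i the SD of reader i.
import numpy as np

rng = np.random.default_rng(13)
n_cases = 60
sor = rng.normal(0, 1, n_cases)                       # internal SoR reads
readers = np.vstack([
    0.8 * sor + rng.normal(0, 0.6, n_cases),          # reader 1
    0.6 * sor + rng.normal(0, 0.8, n_cases),          # reader 2
    0.4 * sor + rng.normal(0, 1.0, n_cases),          # reader 3
])

r = np.array([np.corrcoef(reader, sor)[0, 1] for reader in readers])
s = readers.std(axis=1, ddof=1)

R = r[0] / (s[1] * s[2]) + r[1] / (s[0] * s[2]) + r[2] / (s[0] * s[1])
print("per-reader correlations with SoR:", np.round(r, 2))
print("aggregate metric R:", round(R, 3))
```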
P4.27
Cost and Prevention Strategies of Randomization Errors in Emergency
Treatment Clinical Trials
Wenle Zhao, Valerie Durkalski, Jordan Elm, Catherine Dillon, Yuko Palesch
Medical University of South Carolina, Charleston, SC, USA
Errors occurring in the subject randomization process affect the results of
clinical trial analysis. Trials treating emergency conditions, such as stroke and
traumatic brain injury, are especially vulnerable to randomization errors due to
the short time window between injury onset and randomization. A
randomization error may result in a subject receiving a treatment other than the
one assigned by the randomization algorithm. This presentation evaluates the
impact of randomization errors on type I error, power and the potential biases
for two-arm clinical trials with a binary outcome testing for superiority or non-inferiority with intent-to-treat analysis and per-protocol analysis. For studies testing superiority and analyzed under the intent-to-treat principle, two
additional subjects are required to recover the power reduction caused by each
cross-over. For non-inferiority studies, the impact of these errors could result in
an increase to the type I error rate. Randomization errors may further reduce
the credibility of the trial result as these errors may be associated with
suspicion of selection bias. Case reviews of randomization errors in several
large NIH-funded multicenter emergency treatment trials are presented. Many
factors may contribute to randomization errors, including eligibility errors, study
drug inventory management, user permission errors and technical difficulties
for web-based central randomization systems, and human mistakes at various
levels. Our lessons learned may help to protect the quality and efficiency of
future clinical trials.
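As a hedged illustration of why cross-overs are costly under intent-to-treat analysis, the following sketch recomputes a required sample size after diluting the assumed effect; all rates are invented and the abstract's "two additional subjects per cross-over" figure is not re-derived here:

```r
# Hypothetical illustration: treatment cross-overs dilute the observed effect and
# inflate the required per-arm sample size for a binary-outcome superiority trial.
p_control <- 0.40; p_treat <- 0.50          # assumed true response rates
n_clean <- power.prop.test(p1 = p_control, p2 = p_treat,
                           sig.level = 0.05, power = 0.80)$n

crossover <- 0.05                            # 5% of each arm receives the other treatment
p_treat_itt   <- (1 - crossover) * p_treat   + crossover * p_control
p_control_itt <- (1 - crossover) * p_control + crossover * p_treat
n_diluted <- power.prop.test(p1 = p_control_itt, p2 = p_treat_itt,
                             sig.level = 0.05, power = 0.80)$n

c(per_arm_no_errors = ceiling(n_clean), per_arm_with_crossovers = ceiling(n_diluted))
```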
P4.28
Short interrupted time series designs in clinical practice and policy research: an analysis approach using restricted maximum likelihood
Andrew Forbes1, Muhammad Akram1, Catherine Forbes1, Craig Ramsay2
1Monash University, Melbourne, Victoria, Australia, 2University of Aberdeen, Aberdeen, Scotland, UK
Interrupted time series designs are often utilised in medication use research, hospital infection control, health technology assessment, population health and numerous other areas. The data consist of the repeated observation of a variable in a defined population before and after a population level intervention. These time series are often very short in length, and as such they pose challenges to the use of routine statistical methods for time series analysis, largely due to their poor estimation of the required autocorrelation parameters.
In this talk we consider a regression model with AR(1) errors for short
interrupted time series. Prior work by others with this model has not produced estimators with desirable properties, apart from a proposed doubly-bootstrapped bias-and-variance-corrected estimator. In this talk we evaluate restricted maximum likelihood (REML) estimation, not previously applied in the interrupted
time series literature. Using simulations we compare the performance of
regression model parameter estimators using REML with that of a variety of
existing estimators, both with and without double application of the bootstrap.
We further evaluate a Satterthwaite degrees of freedom estimation approach
both with and without an expected information matrix modification. Our results
indicate that the performance (bias, size, power, CI coverage) of the REML
estimator with corrected degrees of freedom matches or exceeds that of
double-bootstrapped estimators. This finding has the potential to enable simpler and more efficient analyses of short interrupted time series, as well as providing an opportunity for a detailed study of design aspects of such series, currently lacking in the literature.
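A minimal sketch of REML estimation for a level-change model with AR(1) errors using nlme::gls, under invented data; the authors' degrees-of-freedom corrections and simulation design are not reproduced:

```r
library(nlme)

# Hypothetical short interrupted time series: 10 points before and 10 after
# a population-level intervention, with AR(1) noise.
set.seed(2)
d <- data.frame(time = 1:20, post = rep(c(0, 1), each = 10))
d$y <- 5 + 0.1 * d$time - 1.5 * d$post + as.numeric(arima.sim(list(ar = 0.4), n = 20))

# Level-change regression with AR(1) errors, fitted by REML.
fit <- gls(y ~ time + post,
           correlation = corAR1(form = ~ time),
           method = "REML", data = d)
summary(fit)
```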
P4.29
A Bayesian approach to assess the active control treatment effect for the design of non-inferiority trials
Gang Li, Krishan Singh, Jeffrey Wetherington, Linda Mundy
GlaxoSmithKline Pharmaceuticals, Collegeville, Pennsylvania, USA
Interpretation of a non-inferiority (NI) trial requires compelling historical evidence of sensitivity to drug effect for the selected active control treatment. The active control treatment effect is usually obtained from historical trials in order to determine a clinically acceptable NI margin. In the absence of placebo-controlled historical trials in the target population, the effect size has traditionally been estimated indirectly by first calculating the difference between the lower bound of the confidence interval for the active control and the upper bound of the confidence interval for the placebo proxy, followed by additional discounting. The discounting factor is subjective, which often has significant implications on the NI margin and sample size. We propose a Bayesian approach to derive the active control effect which eliminates the additional step of subjective discounting. In this approach the control and placebo responses are considered as random variables for which the distributions are estimated from historical data. The active control effect is then determined by comparing these two probability distributions. This quantitative approach provides an objective assessment of the control effect size by using Bayesian probability to rule out extreme or improbable treatment effects. An appropriate NI margin can be determined from this estimate using clinical and statistical reasoning. This proposed approach will be illustrated for the efficacy endpoint in complicated urinary tract infections and the mortality endpoint in nosocomial pneumonia.
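A hedged sketch of the general idea (not the authors' actual model): treat the historical control and placebo-proxy response rates as posterior distributions and summarise the distribution of their difference; all counts are hypothetical:

```r
# Illustrative sketch: Beta posteriors for the active control and a placebo proxy,
# compared by simulation rather than by subtracting confidence-interval bounds.
set.seed(2012)
control_post <- rbeta(1e5, 1 + 180, 1 + 70)   # e.g. 180/250 historical responders
placebo_post <- rbeta(1e5, 1 + 90,  1 + 110)  # e.g. 90/200 historical responders

effect_draws <- control_post - placebo_post    # distribution of the control effect
quantile(effect_draws, 0.025)                  # a conservative (lower-tail) effect estimate
mean(effect_draws > 0.10)                      # Bayesian probability the effect exceeds 10%
```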
P4.30
Sample size calculation for time-to-event outcomes in randomized controlled trials: A review of published trials
Ly-Mee Yu1, Mike Bradburn2, Milensu Shanyinde1, Sally Hopewell1, Gary Collins1, Merryn Voysey1, Omar Omar1, Rose Wharton1
1University of Oxford, Oxford, UK, 2University of Sheffield, Sheffield, UK
Background: Systematic reviews on sample size calculation are well documented. However, formulae for estimating the sample size for time-to-event outcomes are more complicated, since the sample size depends primarily on the number of events expected during the study. In addition, common methods are based on the proportional hazards assumption, i.e. a constant hazard ratio (HR) over the study period.
Objective: To critically evaluate the methods used to calculate sample sizes for time-to-event outcomes in a representative sample of randomized controlled trials (RCTs).
Method: Using a cohort of 469 RCTs published between 1 July and 31 December 2009 indexed on PubMed, we will identify RCTs with parallel arms reporting time-to-event data as the primary outcome (PO). Studies of crossover, cluster or factorial trials will be excluded. Data on whether studies reported a sample size calculation for the PO and, if so, how the sample size was derived will be extracted. In particular, we will record the nature of the PO (e.g. overall survival, time to death, time to relapse), the overall number of participants (and events), the method used to calculate the sample size, how the effect size for the sample size calculation was derived and the length of follow-up used in the calculation. We will also document the actual length of follow-up for the PO, the actual HR and whether the final analysis was adjusted or unadjusted.
Results and Discussion: We will present a summary of the findings from this review of published trials at the time of the meeting.
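For orientation, the events-driven calculation most such trials report is Schoenfeld's formula; a brief sketch with assumed design values (not taken from the review):

```r
# Required number of events under the proportional hazards assumption
# (Schoenfeld, 1:1 allocation), followed by a rough conversion to sample size.
hr    <- 0.75                 # assumed target hazard ratio
alpha <- 0.05; power <- 0.90
events <- 4 * (qnorm(1 - alpha/2) + qnorm(power))^2 / log(hr)^2

p_event <- 0.60                      # assumed overall probability of an event during follow-up
n_total <- ceiling(events / p_event) # total participants needed
c(events = ceiling(events), n_total = n_total)
```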
P5 Consulting
P5.1
10 tips for enhancing biostatistical consultations or collaborations in clinical research: lessons from the trenches
Lehana Thabane
McMaster University, Hamilton, Ontario, Canada
The Biostatistics and Methodological Innovation Working (BMIW) Group is one of several working groups within the CANadian Network and Centre for Trials INternationally (CANNeCTIN). In addition to advancing biostatistical and methodological research, and building biostatistical capacity in cardiovascular diseases (CVD) and diabetes mellitus (DM), we aim to enhance CVD/DM research through collaborating with clinician investigators on their studies. Providing effective statistical consultation or collaboration in clinical research often requires skills not often taught in most graduate statistical programmes. After numerous collaborations - with multiple errors and others that worked well, I share 10 tips based on the lessons I learnt through working and interacting with clinician investigators on more than 100 research projects. These are by no means the only issues to worry about - I learned a lot more through trial-and-error, but hopefully these provide a good starting point for things to consider in enhancing your own collaborations.
P6 Diagnostic methodology
P6.1
Average kappa coefficient: a new measure of accuracy of a binary diagnostic test
Jose Antonio Roldán Nofuentes, Juan de Dios Luna del Castillo
Biostatistics, School of Medicine, University of Granada, Granada, Spain
A diagnostic test is a medical test that is applied to a patient in order to confirm or discard the presence of a particular disease. When the result of a diagnostic test is positive or negative, in which case the diagnostic test is called a binary diagnostic test, its discriminatory accuracy is measured in terms of sensitivity and specificity. Another parameter used to assess the performance of a binary diagnostic test is the weighted kappa coefficient, defined as a measure of the beyond-chance classificatory agreement between the diagnostic test and the gold standard. The weighted kappa coefficient of a binary diagnostic test depends on the sensitivity and specificity of the diagnostic test, on the disease prevalence and on the relative loss between the false positives and the false negatives (called the weighting index). The problem in the use of the weighted kappa coefficient as a measure of the effectiveness of a diagnostic test is the assignation of values to the weighting index. In this work, we propose a new measure of classificatory agreement between the diagnostic test and the gold standard: the average kappa coefficient. The average kappa coefficient depends on the sensitivity and the specificity of the diagnostic test and on the disease prevalence, but does not depend on the weighting index, and it is a measure of the beyond-chance average classificatory agreement between the diagnostic test and the gold standard.
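A hedged numerical sketch of the quantities involved, assuming Kraemer's form of the weighted kappa for a binary test and reading the average coefficient simply as the weighted kappa averaged over the weighting index (the authors' exact definition, and the direction in which c weights false positives versus false negatives, may differ):

```r
# Weighted kappa of a binary diagnostic test as a function of the weighting
# index c, and its numerical average over c (one plausible reading of the abstract).
se <- 0.85; sp <- 0.90; p <- 0.20          # assumed sensitivity, specificity, prevalence
q  <- 1 - p
Q  <- p * se + q * (1 - sp)                # probability of a positive test result
Y  <- se + sp - 1                          # Youden index

kappa_c <- function(c) p * q * Y / (c * p * (1 - Q) + (1 - c) * q * Q)

kappa_c(0.5)                                  # reduces to Cohen's kappa at c = 0.5
mean(kappa_c(seq(0.001, 0.999, by = 0.001)))  # kappa averaged over the weighting index
```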
P6.2
Applying Partial Least Squares Discriminant Analysis (PLS-DA) for optimisation of decision rules based on complex patient-reported data: creation of the FibroDetect® scoring method
Hélène Gilet, Benoit Arnould, Antoine Regnault
Mapi Consultancy, Lyon, France
Background: Information from patients can be unique and essential to medical decision. However, heterogeneity of this useful information may lead to complex data that are challenging to comprehend. Partial Least Squares (PLS) analysis offers an attractive solution to deal with issues associated with such data, as it is able to account for multicollinear variables, incomplete data, and large numbers of variables. PLS Discriminant Analysis (PLS-DA) was applied to data collected with the 80-item draft version of the FibroDetect® (a questionnaire designed to help primary care physicians (PCPs) detect potential fibromyalgia patients) to create the optimal scoring rule for the tool.
Methods: An observational, prospective, non-drug study was conducted to validate the FibroDetect questionnaire as a screening tool in 276 patients with undiagnosed chronic widespread pain. PLS-DA was applied to patients' responses to FibroDetect items, to select the most relevant items for separation of potential fibromyalgia from non-fibromyalgia patients and create a simple scoring method that allows for its quick use by PCPs. Resulting discriminant models were evaluated using the Area Under the ROC Curve (AUC).
Results: The first PLS-DA enabled a first set of 35 relevant items to be identified. Further PLS-DA were performed after item coding simplification and different scoring methods were tested (AUC from 0.71 to 0.75). The final model included 9 items (AUC=0.74), resulting in a 0-9 score with a cut-off of 6 for fibromyalgia suspicion.
Conclusion: PLS-DA is a powerful statistical method to contribute to the creation of PRO screening tools in the context of complex data.
P6.3
Refined nomograms to enhance the interpretation of clinical risk prediction models
Juan V. Torres-Martin1, Harald Heinzl2, Jordi Cortés3, Harbajan Chadha-Boreham1
1Biostatistics Department, Actelion Pharmaceuticals Ltd., Basel, Switzerland, 2Center for Medical Statistics, Informatics, and Intelligent Systems, Medical University of Vienna, Vienna, Austria, 3Department of Statistics and Operations Research, Technical University of Catalonia, Barcelona, Spain
A risk prediction score can straightforwardly be derived from a multivariable regression model. For ease of clinical use, the regression formula is often coarsened into a point score. Alternatively, it can be translated into a nomogram (e.g. with R software), avoiding loss of precision and improving interpretation by means of a graphical presentation. This enables visualization of the contribution of individual covariates to the overall prediction score. These features improve clinical acceptance of a risk prediction tool, as clinicians do not have to base their decisions on a black box.
The risk score in conventional nomograms is left-aligned, starting from a subject with the smallest possible value of each covariate. In such a nomogram, highly specific covariates (primarily identifying non-cases) can contribute a lot of points for clinically normal findings, compared with sensitive covariates that primarily identify cases. This can cause problems with clinical interpretation, even though the final risk prediction based on the contribution of all the covariates is correct.
In order to overcome these problems we have innovated refinements to nomograms by centring on a reference non-case. Furthermore, distributions of the covariates are visually represented in the centred nomogram. We exemplify this by using simulated and Framingham data sets. Special emphasis will be put on non-linear relationships and leverage points.
We conclude that our refinements can increase acceptance and practical use of nomograms by the clinical community.
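A minimal sketch of the conventional (left-aligned) nomogram that the abstract takes as its starting point, using the rms package; the centred refinement itself is not reproduced, and data and variable names are invented:

```r
library(rms)

# Hypothetical risk model on simulated data.
set.seed(3)
d <- data.frame(age = rnorm(500, 60, 10), marker = rnorm(500, 1, 0.3))
d$y <- rbinom(500, 1, plogis(-8 + 0.1 * d$age + 1.5 * d$marker))

dd <- datadist(d); options(datadist = "dd")
fit <- lrm(y ~ age + marker, data = d)

# Conventional (left-aligned) nomogram on the predicted-risk scale.
plot(nomogram(fit, fun = plogis, funlabel = "Predicted risk"))
```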
P6.4
Comparative assessment of a new imaging technique versus an imperfect invasive gold standard: early detection of coronary stenosis after arterial switch surgery in children
Phalla Ou3, Kevin Yauy1, Kim-Hanh Le Quan Sang4, Gregory Nuel2, Jean-Christophe Thalabard5
1Paris Descartes University, Paris, France, 2MAP5, UMR CNRS 8145, Paris, France, 3Pediatric Radiology, Necker Hospital, APHP, Paris, France, 4Inserm U781, Paris, France, 5Diagnostic Center, Hotel-Dieu, APHP, Paris, France, 6Sorbonne Paris Cité, Paris, France
The arterial switch operation for transposition of the great arteries requires transfer of the coronary arteries from the aorta to the proximal pulmonary artery (neo-aorta). In 8-10% of cases, there is evidence of late coronary stenosis, often asymptomatic but with clinical consequences. Surviving children are generally proposed routine coronary angiography under general anesthesia as it remains, though imperfect, the gold standard for early detection of stenosis. This invasive procedure is now challenged by non-invasive high resolution multislice CT (Ou P et al, JACC Cardiovasc Imaging, 2008).
In order to compare the performances of the new procedure to the reference one, data from three studies involving 379 children exposed to both procedures have been collected in two cardiac centers, using the same protocol.
The coronary vessels were considered as a set of M = 6 correlated sites (m=1,...,M). For each child (j=1,...,ni) within each study (i=1,...,3) and for each technique (k=1,2), each site was rated 1/0 according to the presence of stenosis. The observations Yijkm were modelled using a generalized mixed model
logit(P(Yijkm = 1 | D, Tk)) = µ + a1.D + a2.Tk + b1.Zj + b2.Zm,
where D is the unobserved true disease status, Tk the imaging technique, and b1.Zj and b2.Zm a subject random effect and a site random effect, respectively, in order to take into account different sources of correlation between observations.
Data were analysed using an EM algorithm taking advantage of the standard R function glmer {lme4} via adapted weights. The performances were studied using simulated data sets of correlated binary variables (Qaqish, Biometrika, 2003).
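The EM-with-adapted-weights scheme is not reproduced here; as a hedged sketch, the complete-data mixed model that such a scheme would weight could be fitted with lme4::glmer roughly as follows (all data are simulated, and the latent disease status is treated as known only for illustration):

```r
library(lme4)

# Hypothetical simulated data: 100 children x 2 techniques x 6 coronary sites.
set.seed(4)
dat <- expand.grid(child = factor(1:100), technique = factor(1:2), site = factor(1:6))
dat$D   <- rbinom(100, 1, 0.1)[dat$child]              # latent disease status per child
u_child <- rnorm(100, sd = 1); u_site <- rnorm(6, sd = 0.5)
eta <- -3 + 2.5 * dat$D + 0.4 * (dat$technique == "2") +
       u_child[dat$child] + u_site[dat$site]
dat$y <- rbinom(nrow(dat), 1, plogis(eta))

# Complete-data logistic mixed model with crossed random effects for child and site;
# in the abstract's approach D is latent and this model is embedded in an EM
# algorithm via adapted weights (not shown here).
fit <- glmer(y ~ D + technique + (1 | child) + (1 | site),
             family = binomial, data = dat)
summary(fit)
```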
P7 Epidemiological designs
P7.1
Cancer incidence and prevalence: application of mortality data to estimates and projections for the period 2001-2015, Iran
Mohammad Reza Maracy, Farhad Moradpour, Sayed Mohsen Hosseini
Isfahan University of Medical Sciences, Isfahan, Iran
Background: The aim of this study was to show up-to-date estimates of
incidence and prevalence in Isfahan for all cancers except non melanoma skin
cancer over the period 2001-2010 to provide projections up to 2015, based on
a statistical method that uses mortality and cancer patient survival data.
Methods: Mortality data in Isfahan is collected from various sources such as
hospitals, medical forensic, cemetery and health centers. In addition population
data by sex, age, location and calendar year in the period of 2001-2010 were
acquired from the Statistical Center of Iran. Relative survival probabilities for all
cancers combined and for selected specific cancers were estimated based on
observed cancer death and expected mortality data. Incidence and prevalence
estimates were computed with Mortality-incidence Analysis Model (MIAMOD)
method.
Results: The estimated age-standardized cancer incidence rate had higher
increase rate for urban females than for males. Also the number of prevalent
cancers was higher among females, which was mostly due to better cancer
survival rates in women. Age adjusted incidence was estimated to increase by
6.9 and 8.7 per 100000 annually between 2001 and 2015 in males and
females, respectively. The prevalence is projected to increase by 24 and 40, and mortality by 2.8 and 2.5, per 100000 between 2001 and 2015.
Conclusion: The present study not only shows the incidence and prevalence estimates of all cancers combined, but also gives information about cancer burden which can be used as a basis for planning healthcare management and allocating resources in public health.
P7.2
Selection bias in obesity research: when do sampling weights solve the problem?
Ralph Rippe, Saskia le Cessie, Martin den Heijer, Frits Rosendaal
Leiden University Medical Center, Leiden, The Netherlands
Selection bias is a well-known phenomenon in (epidemiological) research designs. It can obscure relationships and causal pathways, while this effect is not trivially detected (Hernan et al, 2004). Index-event bias (Dahabreh & Kent, 2011) can be seen as a special case of selection bias, in which subjects are included based on an index event.
The inclusion of reference observations from the general population can provide more insight in the effect of selection bias. However, if the study population and reference observations are sampled with different probabilities, the need of accounting for sampling weights strongly depends on the type of model and type of selection (Lumley, 2010). For example, some confounder corrections implicitly correct for selection imbalance.
Here, we elaborate on circumstances under which and in what way selection bias influences the relation between exposures and outcomes. This will be done in a simulation study and in data from the NEO (Netherlands Epidemiology of Obesity) project. In the NEO study more than 5000 participants with a BMI above 27 kg/m2 are included, together with a reference group of 800 subjects with a normal BMI. We consider situations such as:
1) BMI is a cause of the exposure
2) BMI is a risk factor for the outcome
3) BMI is affected by the outcome of interest
4) BMI is a causal intermediate
5) and combinations of (1), (2), (3) and (4),
and we discuss when the use of the reference group with appropriate sample weights is needed.
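A small, hedged sketch of the kind of weighted analysis at issue, using the survey package; sampling fractions, variable names and the model are invented:

```r
library(survey)

# Hypothetical data: an oversampled high-BMI group plus a population-based
# reference group, with weights equal to inverse inclusion probabilities.
set.seed(5)
neo <- data.frame(bmi = c(rnorm(500, 31, 2), rnorm(100, 24, 2)),
                  exposure = rnorm(600),
                  w = c(rep(1 / 0.8, 500), rep(1 / 0.05, 100)))
neo$outcome <- rbinom(600, 1, plogis(-2 + 0.05 * neo$bmi + 0.3 * neo$exposure))

des <- svydesign(ids = ~1, weights = ~w, data = neo)
summary(svyglm(outcome ~ exposure + bmi, design = des, family = quasibinomial()))
```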
P7.3
The Parenting Support Framework in Glasgow: mapping variability in behavioural difficulties
Sarah Barry, Lucy Thompson, Louise Marryat, Jane White, Philip Wilson
University of Glasgow, Glasgow, UK
Parenting is the single major factor implicated in health outcomes for children. This ongoing study aims to establish the variability in behavioural difficulties across the city after the recent introduction of the whole population Parenting Support Framework (PSF) in Glasgow.
Data was available from the 2010 and 2011 cohorts of children transferring from local authority or public-private partnership preschools to primary school at approximately five years of age. The main outcome was total strengths and difficulties questionnaire (SDQ) score. There was complete data on total SDQ, postcode and demographics for around 3000 children in each of the two years.
A mixed-effects zero-inflated negative binomial model of SDQ score on age, gender, deprivation, and looked after status was fitted with year, nursery establishment attended and area as random effects. The results were superimposed onto a map of the city. A further map of smoothed SDQ score was produced from a model incorporating spatial correlation.
The results show that there was considerable variability between areas in average SDQ score, with many having scores that were substantially better or worse than would be expected based on their average demographics. Areas in the least deprived quintile had on average better scores than those in the other four. Younger children, boys and those looked after in care tended to have more difficulties.
As the PSF progresses, data will become available on all children in Glasgow across many years. Evaluation of the areas containing children that continue to struggle will enable more targeted interventions.
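One way such a mixed-effects zero-inflated negative binomial model could be specified is sketched below with glmmTMB; the abstract does not state which software was used, and all data are simulated placeholders:

```r
library(glmmTMB)

# Hypothetical simulated data standing in for the analysis data set.
set.seed(6)
n <- 3000
sdq_data <- data.frame(age = runif(n, 4.5, 5.5),
                       gender = factor(sample(c("M", "F"), n, TRUE)),
                       deprivation = factor(sample(1:5, n, TRUE)),
                       looked_after = rbinom(n, 1, 0.02),
                       year = factor(sample(2010:2011, n, TRUE)),
                       nursery = factor(sample(1:150, n, TRUE)),
                       area = factor(sample(1:50, n, TRUE)))
sdq_data$sdq_total <- rnbinom(n, mu = 8, size = 2) * rbinom(n, 1, 0.9)

# Mixed-effects zero-inflated negative binomial model of total SDQ score.
fit <- glmmTMB(sdq_total ~ age + gender + deprivation + looked_after +
                 (1 | year) + (1 | nursery) + (1 | area),
               ziformula = ~1, family = nbinom2, data = sdq_data)
summary(fit)
```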
P7.4
Pesticides exposure in an apples growing valley (Trentino - Italy): epidemiological study
Riccardo Pertile, Martina De Nisi, Silvano Piffer
Department of Clinical and Evaluative Epidemiology - Centre for Health Services of Trento, Trento, Italy
Previous epidemiological studies have indicated that pesticide exposure is possibly associated with many diseases and health problems. However, considerable heterogeneity has been observed in results. This epidemiological surveillance study aims to test possible associations between pesticide exposure and health problems, such as tumours, non-Hodgkin lymphoma, leukemia, congenital anomalies, miscarriages, stillborn babies, preterm and low weight births, asthma, rhinitis and Parkinson disease, during the period 2000-2009. The area covered by the study consists of 38 municipalities in a mountain valley (Valle di Non) in Trentino region (north east of Italy), very famous for apples growing. Not the whole valley is dedicated to apples production, so that it has been divided in two areas, according to some criteria such as the percentage of hectares cultivated with apples within each municipality, or the altitude. The municipalities supposed to be at risk were 24, all placed in the low valley, with a more temperate climate. Using existing informative flows and specific health registers, incidence and prevalence rate ratios (with Fay-Feuer 95% confidence intervals) have been calculated for each year comparing the two areas in the following health fields: cancers (2000-2006), miscarriages, asthma, rhinitis and Parkinson disease (2000-2009). As regards congenital anomalies, stillborn babies, preterm and low weight births, a logistic regression analysis has been carried out, using as covariate the permanent address of the mother (at risk or not at risk area) to check the possible association with the dependent variables (presence of congenital anomaly in newborns, stillbirth, etc.).
P7.5
Use of linked registries in the design of cohort studies, a tool against selection bias: the Constances example
Rémi Sitta, Alice Guéguen, Julie Gourmelen, Diane Cyr, Marie Zins, and the Constances team
Versailles Saint-Quentin-en-Yvelines University, UMRS 1018, Centre for Research in Epidemiology and Population Health, “Population-Based Epidemiological Cohorts” Research Platform, Villejuif, France
Data from linked registries can contain information concerning health outcomes
as well as risk factors. They are thus increasingly used in epidemiologic and
clinical research, either as a stand-alone data source, or linked with data from
pre-existing study populations. A more intensive use of these registries,
especially for cohort studies, consists in linking registries prospectively for all
eligible subjects, participants as well as non-participants. A cohort of non-participants has a high potential for handling possible selection bias that could
occur among the cohort of participants.
We will illustrate this design with the French "general-purpose" cohort
Constances (www.constances.fr), which aims to recruit 200 000 volunteers
starting January 2012, and is intended to serve as an "open epidemiologic
laboratory" accessible to the wider epidemiologic research community. Through
the social security number, databases from the national pension fund, from
health insurance plans and from causes of death registries will be linked for
both participants and non-participants. The large amount of information
collected includes lifelong occupational history, recourse to general healthcare
and hospitalization, medical procedures, etc.
The optimal approach for handling selection bias consists in analysis-specific
use of all available information, by doubly-robust methods for example.
However, these may be too complex for routine use, due mainly to high-dimensionality issues. A simpler alternative approach consists in using
standard weights that correct for differential participation, but may in turn
induce variance inflation or even bias amplification. We will present different
weighting schemes we intend to develop, and discuss their potential
advantages and disadvantages.
P8 Evaluating hospital performance
P8.1
Mixed effect models for provider profiling in cardiovascular healthcare context
Francesca Ieva, Anna Paganoni
Politecnico di Milano, Milano, Italy
The purpose of this work is to highlight how advanced statistical methods can be used to identify suitable models for complex data coming from clinical registries, in order to assess hospitals performances in treating patients affected by STEMI (ST segment Elevation Myocardial Infarction). We fit different models, trying to enhance the grouping structure of data for profiling aims, where the hospital of admission is the grouping factor for the statistical units (the patients). In these models we introduce performance indicators in order to adjust for different patterns of care as well as different observed case-mix. In particular, we propose three methods to profile hospitals: in the first one we compare the in-hospital survival rates after fitting a Generalized Linear Model according to the so called Statewide Survival Rate (SSR); in the second one we fit a parametric Generalized Linear Mixed Effects Model on in-hospital survival outcome and a clustering procedure is then applied to the point estimates of hospital effects. Finally, in the third case we classify hospitals according to the variance components analysis of the random effect estimates, where nonparametric assumptions have been considered.
The survey we consider for the case study is a clinical observational registry concerning patients admitted with STEMI diagnosis in any hospital of our regional district. The nearly unanimous agreement of results obtained implementing the three methods on data supports the idea that a real clustering structure in groups exists. Such methods provide also a useful decisional support to people in charge with healthcare planning.
P8.2
The use of Kaplan-Meier plots when comparing hospital mortality
Doris Tove Kristoffersen, Katrine Damgaard, Ole Tjomsland, Jon Helgeland
Norwegian Knowledge Centre for the Health Services, Oslo, Norway
Mortality following hospitalization is commonly used for the comparison of hospital quality. We define 30 days mortality for hospitals to be the number of all-cause-deaths occurring in- or out-of-hospital within 30 days, counting from first day of admission, among all patients.
In June 2011, the Norwegian Knowledge Centre for the Health Services published results from a study assessing 30 days survival after admission for acute first time myocardial infarction (AMI) and cerebral stroke (CS) based on data from all Norwegian hospitals. A logistic regression model was used for the estimation of 30 days adjusted mortality. In addition to hospital, the model included the case-mix variables: age, sex, seriousness of the medical condition, number of previous hospital admissions, and the Charlson comorbidity index. As mortality is considered a negative framing, the survival rates were presented rather than the mortality rates. The analyses identified statistically significant lower survival for some hospitals. One hospital turned out to have low survival rates for both AMI and CS. Further data analyses were undertaken including Kaplan-Meier curves for the survival of the 30-days period. The Kaplan-Meier curves revealed different survival patterns for the two patient groups. For AMI, the outlier hospital showed a drop in survival the first two days after admission compared to the pooled results of the remaining hospitals. For CS, the drop was observed about 10 days after admission versus the pooled results. This information was used to inform a quality improvement project.
P8.3
Accounting for patients transferred between hospitals when using mortality as
a quality indicator for the comparison of hospitals
Doris Tove Kristoffersen, Katrine Damgaard, Jon Helgeland
Norwegian Knowledge Centre for the Health Services, Oslo, Norway
Mortality is commonly used as a quality indicator for hospital comparisons. We
define 30 days mortality for hospitals to be the number of all-cause-deaths
occurring in- or out-of-hospital within 30 days, counting from first day of
admission, among all patients. For patients transferred between hospitals a
major challenge is to attribute the outcome (alive or dead) to each hospital.
The objective of the present work was to evaluate a weighting method based
on all hospital stays for transferred patients when calculating 30 days mortality.
Consider a patient who stayed five days in one hospital, was transferred and
stayed for two days in a second hospital, was transferred to a third hospital in
which the patient died 10 days after admission.
One approach is to include patients with a single hospital stay only (XM). We
propose to use weights proportional to the length of stay in each hospital (WM);
i.e. 5/17 for hospital no. 1, 2/17 for hospital no. 2 and 10/17 for hospital no. 3.
This weighting provides mortality based on all admissions and all hospitals.
The weighted outcomes add up to the total number of patients. Alternatively,
the weights may be used in the model specification of a logistic regression.
To compare XM and WM, we used data from a Norwegian nationwide all-hospital sample of patients admitted for acute myocardial infarction, stroke and hip fracture. Spearman rank correlations were high for AMI (r=0.88) and stroke (r=0.80), but lower for hip fracture (r=0.67).
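A short base-R sketch of the length-of-stay weighting in the worked example; the logistic-regression variant mentioned in the abstract is not shown:

```r
# One transferred patient: 5 days in hospital 1, 2 days in hospital 2 and
# 10 days in hospital 3, who died within 30 days of first admission.
los   <- c(h1 = 5, h2 = 2, h3 = 10)
w     <- los / sum(los)        # 5/17, 2/17, 10/17
death <- 1                     # outcome attributed across all three stays

# Each hospital receives w[i] * death deaths and w[i] "patients" for this episode,
# so the weighted outcomes still add up to one patient in total.
data.frame(hospital = names(w), weighted_deaths = w * death, weighted_patients = w)
```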
P8.4
Performance of Screening Colonoscopy Centres in a Nationwide Colorectal Cancer Screening Programme: Evaluation Using Hierarchical Bayesian Model
Ondrej Majek1, Stepan Suchanek2, Miroslav Zavoral2, Ladislav Dusek1
1Masaryk University, Institute of Biostatistics and Analyses, Brno, Czech Republic, 2Charles University, 1st Faculty of Medicine, Central Military Hospital, Department of Medicine, Prague, Czech Republic
The Czech National Colorectal Cancer Screening Programme was initiated in year 2000. Patients can undergo faecal occult blood test (FOBT) or primary
screening colonoscopy. Proportion of patients detected with adenoma at the
FOBT+ follow-up colonoscopy - estimate of positive predictive value for
detection of adenoma (PPV) - was previously proposed as an early
performance indicator in colorectal cancer screening programmes using FOBT.
Our objective was to assess the possibility of identifying underperforming providers using hierarchical modelling with a Markov chain Monte Carlo (MCMC) approach.
The model was specified with a centre-specific random effect. To adjust for the case-mix of patients examined at each centre, sex and five-year age group were added as patient-level covariates. PPV was modelled using a logistic regression model in the WinBUGS package. The MCMC approach allows us to infer the probability that the true value of PPV estimated at a particular centre reaches the nationwide mean value within all centres.
Our study included 143 centres that recorded more than 50 colonoscopies in
FOBT+ subjects performed in 2010. In total, 16,722 individuals underwent
colonoscopy; adenoma was detected in 5,563 subjects. Overall PPV was
33.3%. Centre-specific ratio of odds for adenoma detection in comparison with
nationwide mean ranges between 0.4 and 2.9. In 22 centres, the estimated
probability of actually reaching the nationwide mean is below 5%, showing
potential ineffectiveness in adenoma detection. The MCMC approach enables
us to detect potentially underperforming centres while adjusting for known
covariates, which may help programme management to address corrective
actions and achieve continuous improvement in quality of screening services.
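As a hedged illustration of the final step only (not the authors' WinBUGS model), the probability that a centre reaches the nationwide mean can be read off posterior draws of its random effect:

```r
# Suppose 'draws' is a matrix of MCMC samples with one column per centre,
# containing posterior draws of the centre-specific log-odds deviation from
# the nationwide mean (simulated here purely for illustration).
set.seed(7)
draws <- sapply(rnorm(143, 0, 0.3), function(m) rnorm(4000, m, 0.15))

# Posterior probability that each centre's true PPV reaches the nationwide mean.
p_reach <- colMeans(draws >= 0)
underperforming <- which(p_reach < 0.05)   # flagged centres (cf. 22 in the abstract)
length(underperforming)
```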
P8.5
Hospital volume and survival from cancer surgery: the experience of a local
area with a high incidence of gastric cancer
Elisa Carretta1, Mattia Altini1, Paolo Morgagni2, Emanuele Ciotti3, Domenico
Garcea2, Oriana Nanni1
1
Istituto Scientifico Romagnolo per lo Studio e la Cura dei Tumori (IRST),
Meldola (Forlì-Cesena), Emilia-Romagna, Italy, 2Department of General
Surgery, GB Morgagni General Hospital, Forlì, Emilia-Romagna, Italy, 3Local
Health Authority, Bologna, Emilia-Romagna, Italy
Many studies have shown conflicting results on the relationships between
hospital volume and survival of patients undergoing cancer surgical
procedures. The objective of this study is to explore the influence of hospital
volume on survival in patients undergoing gastric cancer surgery in the
hospitals of a local area (Romagna) of the Emilia-Romagna Region (Italy).
Hospital discharge records of patients admitted from January 2004 to
December 2008, were linked with histological reports and regional mortality
registry. Follow-up data were available until December 2011. Adjusted HRs of
operative mortality and long-term survival in high and medium volume hospitals
compared to low volume hospitals were estimated using a shared frailty Cox regression model.
Hospital volume ranged from 15-82 cases/year. The 3-year and 5-year overall
survival for the 1096 patients identified in our cohort was 50% (95%CI:47-53)
and 41% (95%CI:38-44) with a median follow-up of 69 months. Operative
mortality decreases with increasing volume categories (p=0.047). Moreover,
patients undergoing surgery in high and medium volume hospitals had a significant
improvement in long-term survival compared to those treated in low volume
centers (HR=0.82, 95%CI:0.67-0.99; HR=0.77, 95%CI:0.62-0.95). Patient and
tumor characteristics associated with long-term survival included: gender, age,
comorbidity index, procedure type, T staging, number of positive lymph nodes
and number of lymph nodes removed.
The use of linked administrative databases and clinical records enables the
examination of the relationship between hospital volume and survival after
controlling for potential confounding. This methodological approach may help
with planning organizational tools to improve patient outcomes and the quality
of surgical services.
P8.6
Tolerance intervals for identification of outlier healthcare providers: the
incorporation of benchmark uncertainty.
Sarah Seaton, Bradley Manktelow
Department of Health Sciences, University of Leicester, Leicester, UK
Emphasis is increasingly being placed on the reporting of healthcare provider
outcomes with funnel plots, a standard graphical technique to identify providers
with potentially outlying performance. The control limits for such plots are
generally obtained by constructing prediction intervals around an underlying
‘benchmark' and providers that fall outside of these limits are seen as potential
outliers. However, such benchmarks are usually obtained from observed data,
for example the ‘average' outcome across all providers. The conventional use
of prediction intervals ignores any statistical uncertainty associated with the
estimation of the benchmark. An alternative method derived from Statistical
Process Control (SPC), tolerance intervals, allows the inclusion of this
uncertainty.
The construction of tolerance intervals comprises 2 steps: first, a confidence
interval is calculated for the benchmark; second, prediction intervals are then
calculated using the upper and lower bounds of the confidence interval as the
benchmarks for the upper and lower limits respectively.
Tolerance intervals are always wider than the corresponding prediction
intervals due to the additional uncertainty incorporated from the confidence
interval created in the first step. Therefore the use of tolerance intervals will
reduce the probability of falsely identifying a provider which has an underlying
performance equal to the benchmark as an outlier.
Examples of tolerance intervals will be provided using simulated and real data.
Tolerance intervals should be used on funnel plots when the benchmark is
estimated with uncertainty. These limits incorporate the statistical uncertainty
from the estimated benchmark and provide a more robust method for the
identification of outliers.
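A hedged numerical sketch of the two-step construction for a proportion-based funnel plot, using a normal approximation and invented benchmark values:

```r
# Step 1: confidence interval for the benchmark proportion, here a pooled
# 'average' outcome across providers.
events <- 460; total <- 5000
p_hat  <- events / total
se_b   <- sqrt(p_hat * (1 - p_hat) / total)
bench_ci <- p_hat + c(-1, 1) * qnorm(0.975) * se_b

# Step 2: for a provider of size n, control limits are computed at the lower and
# upper CI bounds respectively, giving tolerance limits that are always wider
# than prediction limits drawn around the point estimate alone.
n <- 200
tol_lower   <- bench_ci[1] - qnorm(0.975) * sqrt(bench_ci[1] * (1 - bench_ci[1]) / n)
tol_upper   <- bench_ci[2] + qnorm(0.975) * sqrt(bench_ci[2] * (1 - bench_ci[2]) / n)
pred_limits <- p_hat + c(-1, 1) * qnorm(0.975) * sqrt(p_hat * (1 - p_hat) / n)
rbind(prediction = pred_limits, tolerance = c(tol_lower, tol_upper))
```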
P8.7
Geographic variations in avoidable hospital admissions for asthma across
Germany
Maria Weyermann, Saskia E. Drösler, Ann-Kathrin Weschenfelder, Silke Knorr
Niederrhein University of Applied Sciences, Krefeld, Germany
Background
Ambulatory care for chronic conditions was assessed by adult hospital
admission rates across 22 countries within the OECD Health Care Quality
Indicators (HCQI) Project.
We calculated asthma admission rates across all 16 Federal States of
Germany and investigated possible associations with variations in health care
supply expressed as density of hospital beds and rates of ambulatory care
providers (general practitioners and internists per population).
Methods
Using the 2009 nationwide Diagnosis Related Groups statistic provided by the
National Statistical Office we calculated age-sex standardized asthma
admission rates according to the OECD HCQI Data Collection Guidelines.
Results
Standardized asthma admission rates ranged from 11.5 (Berlin) to 26.3
(Saxony-Anhalt) admissions per 100,000 population. Pearson's correlation
coefficients indicated only weak associations with provider rate (r: -0.48028; p:
0.0597) and hospital beds (r: 0.57145; p: 0.0208). After mutual adjustment both
associations became stronger (provider rate adjusted for hospital beds density:
partially adjusted r:-0.74872; p: 0.0013; hospital beds density adjusted for
provider rate: partially adjusted r: 0.78444; p: 0.0005). Further adjustment for
school education and asthma prevalence decreased correlation between
asthma admission rate and provider rate (partially adjusted r: -0.67011; p:
0.0122), but further increased correlation between asthma admission rate and
hospital beds density (partially adjusted r: 0.88359; p: <.0001). Sex-specific
analyses showed similar patterns.
Conclusions
Admission rates calculated on hospital administrative data might be a useful
tool to identify regional variations of asthma hospitalizations. To some extent
variations in health care supply explain divergences.
P8.8
The comparative evaluation of Italian Regional Health System through PLS-SEM
Federico Ambrogi, Monica Ferraroni, Adriano Decarli
University of Milan, Milan, Italy
The National Agency for Regional Health Services (AGENAS) promoted a
project with the goal of evaluating the performances of the Italian Regional
Health Systems using 102 health indicators selected by a group of experts.
In this work Structural Equation Modeling (SEM) estimated by Partial Least
Squares (PLS), was used to analyze the cause-effect relationship between the
latent variables summarizing the health system functioning and composed by
the different indicators. Each regional indicator is supposed to enter only one of
the defined 10 latent constructs, namely: availability of the resources; socio-demographic conditions; average health status; exploitation; expenditure;
costs; quality of care; hospital use; effectiveness; efficiency.
In the structural reflective outer model, the first three dimensions were
considered as exogenous explaining differences in the exploitation of the
resources, which directly influence costs and indirectly through expenditure.
The costs influence the quality of care and hospital use, also affected by the
average health status. At the end of the causal path, quality of care and
hospital use affect the effectiveness and efficiency of the system.
A negative association between costs and hospital use (r = -0.41) and quality
of care (r = -0.662) was estimated. Hospital use and quality of care are then
negatively associated with effectiveness and efficiency which were finally used
for the comparative analysis of the regional systems.
The approach seems to be useful in conceptualizing a framework for the health
system, although created starting from the considered indicators, producing a
synthesis of the available information incorporating a causal model.
P8.9
Casemix adjustment for comparing standardised event rates
Richard Jacques, James Fotheringham, Michael Campbell, Jon Nicholl
School of Health and Related Research, University of Sheffield, Sheffield, UK
In all branches of the health and social sciences we need to be able to
compare outcomes of groups of patients or people managed in different ways
to understand the impact of different interventions, services and policies. Fair
comparison of outcomes can be difficult to achieve because of differences in
the characteristics of the patients and populations served. Distribution of these
characteristics is known as casemix, and when the casemix is associated with
the outcomes, comparisons of outcomes are confounded with any differences
in casemix. In theory this problem can be solved by adjusting the comparison
for casemix. This can be done by calculating a standardised event rate.
The two most commonly described methods of adjusting for casemix are direct
and indirect standardisation. However, comparisons made using direct
standardisation may not reflect the true performance of the comparators
because of different patterns of random zeros (e.g. for some rare conditions
there may be no cases in some hospitals in some years) or organisational
zeros (e.g. some hospitals don't treat children), and comparisons made using indirect standardisation may not be correct because different weights are used for each comparator.
We propose an alternative method of standardisation where event rates are calculated in risk groups rather than casemix groups. The complex multidimensional casemix is converted into a simple one-dimensional risk distribution using a logistic regression model and then the events are directly standardised across the risk distribution. This method is illustrated using the Summary Hospital-Level Mortality Index (SHMI) for English hospital trusts.
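A hedged sketch of the described idea: derive a one-dimensional risk score from a logistic model, group it, and directly standardise event rates across the risk distribution (data, risk groups and providers are invented):

```r
# Hypothetical data: one row per admission with outcome, casemix covariates
# and provider. A single predicted risk replaces the multidimensional casemix.
set.seed(8)
n <- 20000
d <- data.frame(age = rnorm(n, 70, 12),
                emergency = rbinom(n, 1, 0.4),
                provider = factor(sample(1:20, n, TRUE)))
d$death <- rbinom(n, 1, plogis(-5 + 0.04 * d$age + 0.8 * d$emergency))

risk_model <- glm(death ~ age + emergency, family = binomial, data = d)
risk <- predict(risk_model, type = "response")
d$risk_grp <- cut(risk, breaks = quantile(risk, probs = seq(0, 1, 0.1)),
                  include.lowest = TRUE)

# Directly standardised rate per provider: provider-specific rates within each
# risk group, weighted by the overall risk-group distribution.
w <- prop.table(table(d$risk_grp))
rates <- tapply(d$death, list(d$provider, d$risk_grp), mean)
standardised <- rates %*% as.numeric(w)    # NA if a provider has empty risk groups
head(standardised)
```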
P8.10
Probability of in-hospital mortality: analysis of administrative data in Germany
Silke Knorr, Saskia Drösler, Maria Weyermann
Niederrhein University of Applied Sciences, Krefeld, Germany
Background
In-hospital mortality is the basis of established quality indicators for hospital
care such as Hospital Standardized Mortality Ratios (HSMR). As administrative
hospital data include information on discharge status, we aimed to calculate
the probability of in-hospital mortality using various measurements for
comorbidity.
Methods
According to the HSMR methodology of the Canadian Institute for Health
Information (CIHI) we investigated the probability of in-hospital mortality using
the 2008 nationwide G-DRG statistics (17 million hospitalizations). Case selection in accordance with CIHI reduced the population to 5.2 million hospitalizations. In
logistic regression models mortality risk was calculated using age, sex, hospital
transfer, length of stay, admission category, primary diagnosis and comorbidity
measurements (Charlson-, Elixhauser-Index, mean number of secondary
diagnoses) as independent variables.
Results
Adjusted odds ratio for in-hospital mortality was 0.96 (95%CI: 0.95-0.96) for
men compared to women, 1.53 (95%CI: 1.51-1.55) for patients transferred vs.
non-transferred, and 1.94 (95%CI: 1.93-1.96) for emergency compared to
elective admission. Mortality risk increased with age (OR: 1.05 per year;
95%CI: 1.05-1.05) and comorbidity (OR: 1.21 per Charlson-Index unit; 95%CI:
1.21-1.21). Compared to length of stay 3-9 days, odds ratios ranged from 1.02
(95%CI: 1.02-1.04 - 10-15 days) to 2.62 (95%CI: 2.6-2.65 - 1 day). Different
comorbidity measurements did not reveal substantial changes in regression
coefficients or c-statistics.
Conclusions
The HSMR methodology of the CIHI can be applied to German DRG data and the results are comparable to Canada. The choice of comorbidity measurement model seems to have a negligible impact.
P9 Functional data analysis
P9.1
Visualisation and spatiotemporal smoothing of single trial EEG data
Stanislav Katina2, Igor Riecansky2
1
The University of Glasgow, Glasgow, Scotland, UK, 2Laboratory of Cognitive
Neuroscience, Institute of Normal and Pathological Physiology, SAS,
Bratislava, Slovakia, 3SCAN Unit, Institute of Clinical, Biological and Differential
Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
For electroencephalographic (EEG) spatial data (scalp voltage measurements),
currently applied methods partially or completely ignore spatial relationship of
3D coordinates of the electrode location called (semi)landmarks. The
coordinates are often projected to 2D and used only for visualisation in the
plane and their covariance structure is not incorporated into subsequent
multivariate statistical modelling. On the other hand, the landmarks contain
only a very small proportion of data available. However, curves and surface
patches have the advantage of providing a much richer expression of head
shape.
In the talk, this is explored in the context of using principal curves and the P-spline approach in smoothing to construct a set of semilandmarks on curves in between electrode locations at any desired resolution. An interpolating thin-plate spline model is then used to estimate the EEG signal in these artificial electrode positions. The resulting signal is spatially smoothed and visualised as coloured
animated 4D spatiotemporal maps.
The methods are illustrated on an application with real data, where the EEG signal was recorded from 61 scalp sites using sintered Ag-AgCl electrodes
mounted on an elastic cap (EASYCAP GmbH; Herrsching, Germany) and
electrode positions were determined using photogrammetric head digitizer.
P10 Health economics and regulatory affairs
P10.1
Bayesian Evidence Synthesis in a Health Economic Model for Dementia
Jasdeep K. Bhambra, Fiona E. Matthews, Christopher H. Jackson
MRC Biostatistics, Cambridge, UK
Dementia is a group of disorders that describe a pathological state of cognitive
impairment. Current drug treatments for dementia are intended to improve
cognitive function. To compare long-term effects and costs of different
treatments, models are typically built which describe patients' transitions
between states of health and cost-incurring events. At present, a publicly
accessible, fully comprehensive model for patients with dementia is not
available. We are currently building such a model, an important part of which is
estimating progression in cognitive function from diverse sources of published
data.
Evidence synthesis methods are used to estimate progression rates between
stages of cognition in a Markov model. We examine how these vary with age,
sex, education, comorbidities and settings of care. Different published studies
have used a variety of models to estimate cognitive progression, which is then
expressed in a variety of ways. Such models include survival-type models for
times until cognition declines to a particular threshold, and Markov models for
progression through disease stages. Our methods will need to combine these
diverse estimates of disease process. Bayesian methods are used to model
between-study heterogeneity in progression probabilities and the effects of
treatment and other covariates on them. We discuss uncertainties about the
model structure, prior distributions and the relevance of each source of data,
and examine these using sensitivity analyses.
Directions for future research will be discussed, including the importance of ensuring that the model is flexible enough to include not only the patients' changes but also the impact on their carers.
P10.2
Analysis of time to patient-reported outcome meaningful change: Illustration from a clinical trial with catumaxomab in patients with malignant ascites
Hélène Gilet, Antoine Regnault
Mapi Consultancy, Lyon, France
Background: In many Patient-Reported Outcome (PRO) application contexts, the meaningful question is not whether a change in the PRO can be observed but how fast the change occurs. In such cases, it is possible to build on the approach currently recommended to support PRO interpretation, analysis of PRO response (i.e. an individual patient PRO score change that can be interpreted as meaningful). Indeed, survival analysis techniques can be applied with PRO response as the event of interest.
Methods: PRO data collected in a 2-arm, randomized, open-label, multicenter, phase II/III clinical trial in patients with symptomatic malignant ascites
comparing paracentesis plus catumaxomab (N=170) with paracentesis alone
(N=88) were analysed using this approach. Health-Related Quality of Life
(HRQoL) was assessed using the EORTC QLQ-C30, a PRO instrument
specifically designed to measure HRQoL of cancer patients. Meaningful
deterioration in the 15 QLQ-C30 scores was defined as a decrease in score of
at least 5 points. Kaplan-Meier estimates with log-rank test and Cox models
adjusted for baseline score, country, and primary tumor type were used to
analyse time-to-HRQoL-deterioration.
Results: Meaningful deterioration in HRQoL scores appeared more rapidly in
control than in the catumaxomab group (medians: 16-28 days vs. 45-49 days).
The difference in time to first deterioration in HRQoL between groups was
statistically significant for all 15 QLQ-C30 scores (p<0.05) and results were confirmed using Cox models (p<0.05 for all scores).
Conclusion: Time to PRO response is a useful approach to obtain meaningful
results from PRO data with respect to HRQoL outcomes.
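A minimal sketch of a time-to-deterioration analysis of this kind with the survival package; the data are simulated and only indicative of the trial's adjustment variables:

```r
library(survival)

# Hypothetical per-patient data: time to first meaningful HRQoL deterioration
# (>= 5-point decrease), an event indicator, treatment group and baseline score.
set.seed(9)
n <- 258
d <- data.frame(group = factor(rep(c("catumaxomab", "control"), c(170, 88))),
                baseline = rnorm(n, 60, 15))
d$time  <- rexp(n, rate = ifelse(d$group == "catumaxomab", 1/45, 1/20))
d$event <- rbinom(n, 1, 0.8)

survdiff(Surv(time, event) ~ group, data = d)              # log-rank test
coxph(Surv(time, event) ~ group + baseline, data = d)      # adjusted Cox model
plot(survfit(Surv(time, event) ~ group, data = d),
     xlab = "Days", ylab = "Free of HRQoL deterioration")  # Kaplan-Meier estimates
```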
P10.3
Prediction of pregnancy outcomes in planned homebirth
Dorota Doherty1, Jeff Cannon1, Janet Hornbuckle2
1Women and Infants Research Foundation, Perth, Western Australia, Australia, 2King Edward Memorial Hospital for Women, Perth, Western Australia, Australia, 3School of Women's and Infants' Health, University of Western Australia, Perth, Western Australia, Australia
The debate on safety of planned home birth continues in literature, policy and
practice across the developed world. Main concerns include increased
perinatal mortality and excess morbidity for women and their babies who
require intrapartum or postpartum transfer with planned homebirth. The
difficulty of evaluating the outcomes in planned homebirth relates mainly to the
low numbers of homebirths that occur in the local obstetric population. This
difficulty is confounded by the demographic characteristics of women who elect
homebirth that differ from the characteristics of women with planned hospital
birth. We have developed a model simulating a pregnancy cohort that reflects
maternal characteristics, pregnancy complications and pregnancy outcomes in
our local pregnancy population. The model is constructed as an individual
sampling model that uses Monte Carlo simulations to generate the events and
outcomes for individual pregnancies. Up to two pregnancy complications are
allowed to occur at any pregnancy week. Transition probabilities for
complications and/or labour are estimated using logistic regression function
including maternal characteristics, pregnancy complications and current
pregnancy week. When complications occur, transfers of care are modelled
according to the current clinical management guidelines. Our pregnancy
model is used to estimate the rates of adverse outcomes associated with
intrapartum and postpartum transfers into hospital care, maternal morbidity and
perinatal mortality. The method of pregnancy simulation offers distinct
advantages over alternative evaluation strategies because it converts the
cross-sectional pregnancy data into a longitudinal cohort. This method is particularly useful for estimation of transfers into hospital care without conducting actual clinical studies.
P10.4
Multivariate latent class model for non-supervised classification in RNAseq
experiments
Juan R Gonzalez, Mikel Esnaola
Center for Research in Environmental Epidemiology (CREAL), Barcelona,
Spain
High-throughput RNA sequencing (RNA-seq) offers unprecedented power to
capture the real dynamics of gene expression. So far, statistical methods to
analyze this data are based on detecting genes that are differentially
expressed among different conditions. However, large consortia like TCGA or
International Cancer Genome Consortium are generating a vast amount of
RNA-seq data to address other biological questions. Some researchers have
paid attention to detecting sub-groups of individuals with similar genetic profiles. Others are interested in elucidating whether patients show patterns due to non-biological differences like batch effects. Those problems can be addressed
using non-supervised methods (NSM). The main difficulty when addressing this
problem is that existing NSM are based on normality assumptions. However,
RNA-seq data are counts, and hence, other distributions like Poisson, Negative
Binomial or Poisson-Tweedie should be used instead. In this talk, we will
present how to perform non-supervised classification using a multivariate latent
class model based on count data distributions. We will also show how to
incorporate correlation among genes using genomic information (derived from gene ontology databases) in order to improve accuracy. Through simulation
studies and using real data sets belonging to TCGA consortium we will
compare our proposed method with other existing ones. Finally, an R package
implementing our proposed model will be presented.
in weight loss, models a hypothetical two arm trial. Measurements are taken at
baseline and end of the trial. Final measurements are generated in 3 ways:
from a Normal distribution, from a t-distribution and from a Normal distribution
with extreme outliers added. The parameters in the selection process are
chosen to create missing data at different levels of non-ignorability. The RAMs
used to analyse the simulated data assume all measurements are normally
distributed.
RAMs are found to perform reasonably well when data are simulated under a
Normal distribution or a t-distribution. Although RAM models appear to handle
model misspecification better than selection models, they also seem to be
sensitive to outliers, which introduce severe bias to parameter estimates.
RAMs capture more information about non-respondents and help to fit MNAR
selection models.
P11.1
Can the repeated attempts model help to fit MNAR selection models?
Danice Ng, Dan Jackson, Ian R. White
MRC Biostatistics Unit, Cambridge, UK
MNAR models are useful when the assumption that data are MAR does not seem plausible. However, MNAR selection models are very sensitive to possible outliers and/or modelling assumptions. One way to gain information about the non-respondents is to make repeated attempts to obtain outcome data. Non-respondents are believed to behave more similarly to late respondents. Since details on the number of attempts might be able to provide more information about the ignorability of non-response, they might be profitable in overcoming difficulties encountered when estimating MNAR selection models.
A simulation study is performed to investigate the value of MNAR models in conjunction with information on the number of attempts, namely the repeated attempts models (RAMs). The simulation study, motivated by a randomised trial in weight loss, models a hypothetical two-arm trial. Measurements are taken at baseline and at the end of the trial. Final measurements are generated in three ways: from a Normal distribution, from a t-distribution, and from a Normal distribution with extreme outliers added. The parameters in the selection process are chosen to create missing data at different levels of non-ignorability. The RAMs used to analyse the simulated data assume all measurements are normally distributed.
RAMs are found to perform reasonably well when data are simulated under a Normal distribution or a t-distribution. Although RAM models appear to handle model misspecification better than selection models, they also seem to be sensitive to outliers, which introduce severe bias to parameter estimates. RAMs capture more information about non-respondents and help to fit MNAR selection models.
P11.2
Using planned missing values in longitudinal trials to relieve patient burden and reduce costs
Christele Augard1, Ayca Ozol-Godfrey2, Robert D. Small2, Dominika Wisniewska2
1Sanofi Pasteur, Marcy l'Etoile, France, 2Sanofi Pasteur, Swiftwater, PA, USA
Some longitudinal trials require subjects to commit to frequent blood draws at visits over time. Often the primary endpoint does not require the observations from every visit. This is true, for example, in vaccine immunogenicity trials and diabetes trials. Taking samples at every visit can be burdensome to both the subject and the sponsor. Subjects often do not like many blood draws, and the cost of assaying every sample can be high. These facts contribute to increased cost and subject drop-out. In this paper we investigate the idea of bleeding random subsets of subjects at each visit and using the (frequently) high correlation within subjects between visits to build imputation models that implement an MI approach to analyzing the data. We use the observations present, as well as other pertinent continuous and categorical variables, to build the models. We estimate the imputation models using the method of Raghunathan, Lepkowski et al. (sequential regression procedure), which is very general and can handle many variable types. We give examples using data from some vaccine trials and show how various patterns can reduce cost and possibly drop-outs.
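In R, an analysis of this kind could be sketched with the mice package, whose chained-equations approach is in the same spirit as the sequential regression procedure cited above. The data set, variable names and settings below are hypothetical, and the sketch is not the authors' implementation.

  # Multiple imputation of titres that are missing by design at some visits,
  # followed by a pooled analysis (hypothetical data and variable names)
  library(mice)

  imp <- mice(trial_data,          # one row per subject, one column per visit
              m = 20,              # number of imputed data sets
              method = "pmm",      # predictive mean matching for continuous titres
              seed = 2012)

  # Analyse each completed data set and pool with Rubin's rules
  fit <- with(imp, lm(log_titre_visit3 ~ group + log_titre_visit1))
  summary(pool(fit))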
P11.3
Regression models for repeated binary measures under different missingness assumptions
Bola Coker
King's College London, London, UK
Background: Inference from longitudinal studies with unit missingness is prone to bias if the underlying missingness process is not accounted for during the analysis. A dataset with a baseline and one repeated measure was derived from the 1958 National Child Development Study. Various regression methods were applied to the data under the assumptions of MCAR, MAR and MNAR.
Method: The inference of interest was the odds of cases having headaches in adulthood given that they had headaches in childhood. The dataset had 6,753 subjects at baseline and 5,953 at the second time point. A GEE (independence) model was applied when the missingness process was assumed to be MCAR; a weighted GEE (independence) model with inverse probability weights when S-MAR; a marginalised transition model (MTM) when MAR; and a modified GEE model (Fitzmaurice et al., Biostatistics 2000) and a regression model analysed with MCMC Gibbs sampling when MNAR.
Results: The odds ratios (95% CI) of cases having headaches in adulthood if they had headaches in childhood at ages 7 or 11, under the different missingness assumptions, were: MCAR, 1.780 (1.499, 2.114); S-MAR, 1.818 (1.566, 2.109); MAR, 1.774 (1.540, 2.042); MNAR pattern mixture model, 1.784 (1.547, 2.052); MNAR selection model, 1.7916 (1.584, 2.026).
Conclusion: All models applied were relatively unbiased in providing odds ratio estimates. The different methods required different assumptions: the S-MAR assumption required the predictive probabilities; the MTM model required the serial dependence of the variables; while the MNAR models required the distribution of the dropout pattern. Further work is required to optimise the predictive and missingness models with more repeated readings and different percentages of dropouts.
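A rough sketch of the weighted independence GEE used for the S-MAR analysis, written with the geepack package in R; the data frame, variable names and the model for the probability of being observed are all hypothetical.

  # Inverse-probability-weighted independence GEE (illustrative sketch)
  library(geepack)

  # 1. Model the probability that the adult outcome is observed
  obs_fit <- glm(observed ~ headache_child + sex + social_class,
                 family = binomial, data = dat)
  dat$ipw <- 1 / fitted(obs_fit)

  # 2. Independence GEE on the observed records, weighted by the inverse
  #    probability of being observed (with one record per subject this is
  #    simply a weighted logistic regression)
  gee_fit <- geeglm(headache_adult ~ headache_child,
                    id = id, family = binomial,
                    corstr = "independence", weights = ipw,
                    data = subset(dat, observed == 1))
  summary(gee_fit)
  exp(coef(gee_fit))   # odds ratio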
P11.4
Drop out in randomized controlled non-inferiority trials with time to event outcome: a worst case sensitivity analysis using a Bayesian method
Ákos Ferenc Pap
Bayer Pharma AG, Wuppertal, Germany
Motivation
The EMA Guideline on Missing Data in Confirmatory Clinical Trials suggests a worst case analysis: assigning the best outcome to missing values (drop-outs) in the control group and the worst outcome to those of the experimental group.
Background
- Study design: randomised controlled trial, parallel group
- Study outcome: occurrence of a prespecified event
- Scientific hypothesis: the experimental treatment is non-inferior to the active reference treatment
- Non-inferiority margin: determined as preservation of a specific fraction of the effect (hazard ratio) of the reference treatment versus placebo/no treatment, as determined by meta-analysis.
- The primary analysis is done by fitting a Cox proportional hazards regression model assuming that all censoring is non-informative.
Issue
Some patients drop out before the end of the study, so the assumption of non-informative censoring may be questionable. Sensitivity analysis is needed to assess the impact of the extent of drop-out on the primary analysis.
Sensitivity analysis
The drop-out status * treatment group interaction term is included in the Bayesian proportional hazards model as a binary variable using Proc Phreg in SAS version 9.2 (by the MCMC method). For all regression coefficients flat normal priors were defined except for the drop-out status * treatment group interaction term. For this term, the effect of the reference treatment over placebo from the meta-analysis was defined as prior, assuming that among drop-outs the experimental treatment was equivalent to placebo/no treatment. Worst case analysis scenarios to obtain posterior estimates of the hazard ratio for the overall treatment effect from the above model will be presented.
P11.5
Dealing with missing data in the development and validation of clinical risk prediction models: is missing as normal ever a sensible strategy?
Rory Wolfe1, Roman Ahmed1, Gareth Ambler2
1Monash University, Melbourne, Australia, 2University College London, London, UK
In the development of risk prediction models, missing risk factor data are sometimes assumed "missing as normal" (MAN). For example, if a laboratory test has not been ordered it might be assumed that the true test result would have been in the "normal" range. If the true missing data mechanism is missing at random (MAR) then employing MAN when developing risk prediction models can be shown to introduce severe bias in estimates of model coefficients and measures of discrimination and calibration, with either complete case (CC) or multiple imputation (MI) being preferable approaches. However, it is unclear whether MAN is inferior to CC and MI when the true missing data mechanism is close to MAN.
2000 datasets with n=1500 were simulated using a logistic relationship between a binary outcome and four binary risk factors. Five, 15, 30 and 50 percent of the values of risk factor X1 were set to missing under three different scenarios: MAR, "very nearly" MAN, "nearly" MAN.
Under a nearly MAN missing data mechanism, using MAN to deal with missing data led to bias in estimates of X1's odds ratio. Further, MAN under-estimated the model's discrimination ability. CC led to bias in the assessment of calibration performance. MI exhibited least bias for model coefficients, discrimination and calibration.
Even when the true missing data mechanism is nearly MAN, using MAN as the method for dealing with missing data in the development of risk prediction models is not preferable to using MI. Additionally, CC is not preferable to MI in this situation.
P11.6
Contrasting Informative Cluster Size with Missing Data
Menelaos Pavlou1, Shaun Seaman2, Andrew Copas3
1University College London, London, UK, 2MRC Clinical Trials Unit, London Hub for Trials Methodology Research, London, UK, 3MRC Biostatistics Unit, Cambridge, UK
When making marginal inference for clustered data with varying cluster size, three populations are of potential interest. Firstly, there is the population of all members of clusters. Secondly, there is the population of typical members of clusters. Inference for these first two populations is different if cluster size is informative. Thirdly, if the variation in cluster size has arisen because of missing data, we may view the observed clusters as incomplete and seek inference for the population of all members of complete clusters. Alternatively, cluster-specific inference can be sought, using random effects models. We clarify that if the random-effects model is correctly specified, there is no distinction between cluster-specific inference for all members and cluster-specific inference for typical members.
Missing data methods are well known by statisticians; methods for informative cluster size (ICS) are less well known. Previous authors have vaguely referred to the relation between ICS and missing data mechanisms (MDMs). We clarify this relation and investigate which MDMs may lead to ICS. We show that when within each complete cluster each member has the same probability of being missing, inference for typical members of clusters and inference for all members of complete clusters are equivalent. We survey the methods for inference concerning the observed and complete clusters, and explain why different methods are needed for the two. We describe how the methods which view the observed clusters as complete can nevertheless be seen as special cases of methods for (hypothetical) missing data.
P11.7
Multiple imputation in a longitudinal cohort study: a case study of sensitivity to imputation methods
Helena Romaniuk1,2,3, John B. Carlin1,2
1Clinical Epidemiology & Biostatistics Unit, Murdoch Childrens Research Institute, Melbourne, Victoria, Australia, 2Department of Paediatrics and School of Population Health, University of Melbourne, Victoria, Australia, 3Centre for Adolescent Health, Murdoch Childrens Research Institute, Royal Children’s Hospital, University of Melbourne, Victoria, Australia
Multiple imputation for handling analysis of incomplete data has achieved widespread use over the past decade, and has been used extensively by the
authors on a large longitudinal cohort study, the Victorian Adolescent Health
Cohort Study. Although we have endeavoured to follow best practice according
to current literature, we have to date performed limited examination of the
extent to which variations in our approach might have led to different results in
the final substantive analyses. In applying multiple imputation the user makes a
large number of decisions about the method of imputation, whether explicitly or
implicitly (by accepting default options in software) and there is little published
advice to guide practice. We have examined sensitivity of analytic results to
decisions about imputation method in the context of an analysis on the history
of illicit substance use in the cohort that has been published elsewhere 1. Key
factors investigated included: impact of imputation method (mi impute mvn and
ice in Stata 11); inclusion of auxiliary variables; omission of cases with too
much missing data; approaches for imputing highly skewed continuous
distributions that are analysed as dichotomous variables. We found that MVNI-based estimates were consistent across the different approaches, while ICE
estimates were more susceptible to decisions made. Importantly, estimates of
association parameters were less sensitive to imputation model settings than
estimates of prevalence parameters.
1 Swift W et al. Cannabis and progression to other substance use in young
adults: findings from a 13-year prospective population-based study. JECH
2011, doi:10.1136/jech.2010.129056.
P11.8
Multiple imputation for an incomplete covariate which is a ratio
Tim Morris1, Ian White2, Patrick Royston1
1MRC Clinical Trials Unit, London, UK, 2MRC Biostatistics Unit, Cambridge, UK
In medical applications, regression analyses often include the ratio of two variables as a covariate. Common examples include body mass index (BMI=weight÷height^2) and the ratio of total to HDL cholesterol. If one component is missing, the ratio is missing. Incomplete covariates are often dealt with by multiple imputation. One strategy for imputing ratios is to impute the components and then calculate the ratio from the two imputed values (passive imputation). Alternatively, one might impute the ratio directly, ignoring observed values if either component is missing (active imputation).
`Congeniality' describes the relationship between the model for analysis and the model for imputation; they are congenial if a full probability model exists which accommodates both conditional models. When considering the imputation of ratios, the passive strategy outlined above is uncongenial; the QRISK cardiovascular risk score is a high-profile example of passive imputation going wrong. Meanwhile the active strategy is congenial, as are strategies which also impute one/both component/s alongside the ratio. It is unclear what the impact of uncongeniality on parameters of interest is in real datasets.
Using two example datasets, one with incomplete BMI and one with incomplete cholesterol ratio, various approaches to dealing with the incomplete ratio values via multiple imputation and fully Bayesian methods are illustrated. Differences between passive and active approaches can be surprisingly small for BMI but are fairly large for cholesterol ratio. We recommend active imputation because it is more difficult to get wrong, no less efficient, and congenial.
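Both strategies contrasted above can be expressed with the mice package in R, which supports passive imputation through "~ I()" method strings; the data set and variable names below are hypothetical and the sketch is not the authors' analysis.

  # Passive versus active multiple imputation of BMI (illustrative sketch)
  library(mice)

  # Passive: impute weight and height, derive BMI deterministically
  ini  <- mice(dat, maxit = 0)
  meth <- ini$method
  meth["bmi"] <- "~ I(weight / height^2)"
  pred <- ini$predictorMatrix
  pred[c("weight", "height"), "bmi"] <- 0      # avoid feedback from the derived ratio
  imp_passive <- mice(dat, method = meth, predictorMatrix = pred, m = 20)

  # Active: impute BMI directly as just another incomplete variable
  imp_active <- mice(dat, m = 20)

  # Compare the substantive model under the two strategies
  summary(pool(with(imp_passive, glm(chd ~ bmi + age, family = binomial))))
  summary(pool(with(imp_active,  glm(chd ~ bmi + age, family = binomial))))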
P12 Joint modelling of outcome and time-to-event
P12.1
A Simulation Study to Investigate The Performance of Frailty Variance Estimates in Repeated Events Data
Z.J. Musoro, R.B. Geskus, A.H. Zwinderman
Academic Medical Center, Amsterdam, The Netherlands
In the analysis of event history data with multiple events of the same type, the dependency between associated events is often quantified via the use of frailties. A wide variety of frailty models and several numerical techniques have been mentioned in the literature. Herein, we investigate the performance of frailty variance estimates in a repeated events data scenario. Our study was motivated by post kidney transplantation records of 467 patients (at the Academic Medical Center, Amsterdam) who experienced repeated infections of different types, and were also repeatedly measured for five biomarkers. In a simulation study, we postulated the association between the biomarkers and the infection times to be according to a Cox proportional hazards model. Two biomarkers were generated by assuming a bivariate random effects model. An infection-specific Cox proportional hazards model with log-normal frailties was fitted to each generated data set. A sample size of 300 patients was considered. We allowed the infection-specific hazards to be influenced by time-dependent true biomarker values. Findings revealed that although the estimated variances were on average close to the true values, the true and the estimated variances had only little correlation (0.08) across 250 simulations. This disparity reduced with increasing sample and cluster sizes. Also, estimated variance parameters were more skewed and less spread out. The true frailty standard deviation was better estimated by the square root of the mean of the simulated variances than by their standard deviations. These findings had no significant effect on the inference of the other parameter estimates.
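A shared-frailty Cox model of the general type described above can be approximated in R with the survival package, using a Gaussian (log-normal) frailty per patient; the counting-process data layout and variable names are hypothetical.

  # Cox model for repeated infections with a log-normal patient frailty
  library(survival)

  fit <- coxph(Surv(tstart, tstop, infection) ~ biomarker1 + biomarker2 +
                 strata(infection_type) +
                 frailty(patient_id, distribution = "gaussian"),
               data = infections)

  summary(fit)
  fit$history   # penalised-term history, including the estimated frailty variance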
P12.2
Bayesian methods for joint modelling of longitudinal and survival data to assess validity of biomarkers in AIDS data
Chiara Brombin, Clelia Di Serio, Paola M. V. Rancoita
University Centre for Statistics in the Biomedical Sciences (CUSSB), Vita-Salute San Raffaele University, Milano, Italy
The statistical analysis of observational data arising from HIV/AIDS research is generally tricky since both longitudinal (repeated measurement) and survival (time-to-event) data are available. These outcomes are often analyzed separately. However, in many instances, a joint modeling approach is required to produce better insight into the mechanisms underlying the phenomenon under study.
Following the approach proposed in Guo and Carlin (2004), in this paper we fit a fully Bayesian joint model to analyse epidemiological data on HIV patients enrolled in the CASCADE (Concerted Action on Sero-Conversion to AIDS and Death in Europe) Study. CASCADE is one of the largest AIDS multicentre studies and represents a collaboration between the investigators of 22 cohorts in 10 European countries, Australia and Canada, aiming at monitoring newly infected individuals and those already enrolled in studies, covering the entire duration of HIV infection. The focus will be on the Italian cohort. We take advantage of the quality of these data to explore alternative models for the joint distribution of the longitudinal data on CD4 cell count and RNA viral load and on the event/survival data. Instead of survival time, we will concentrate on time to seroconversion.
The WinBUGS package will be used and our results will be compared to those obtained using the R package JM. The joint Bayesian approach appears to offer significantly improved and enhanced estimation of median survival times and other parameters of interest, as well as simpler coding and comparable runtimes. Extension to competing risks is also considered.
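The comparison with the R package JM mentioned above would follow that package's usual workflow, roughly as sketched below; the data frames, variable names and the choice of baseline risk function are hypothetical.

  # Joint model: linear mixed model for the biomarker + Cox model for the event,
  # linked through shared random effects (illustrative sketch)
  library(nlme)
  library(survival)
  library(JM)

  lme_fit <- lme(sqrt_cd4 ~ obstime, random = ~ obstime | patient_id,
                 data = long_data)
  cox_fit <- coxph(Surv(time_to_event, event) ~ treatment,
                   data = surv_data, x = TRUE)    # x = TRUE is required by JM

  joint_fit <- jointModel(lme_fit, cox_fit, timeVar = "obstime",
                          method = "piecewise-PH-aGH")
  summary(joint_fit)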
P12.3
Unfortunately, this poster has been withdrawn.
P12.4
Bayesian Modelling of Biomarker Data to Predict Clinical Outcomes
Keith Abrams1, Michael Crowther1, Paul Lambert2
1University of Leicester, Leicester, UK, 2Karolinska Institutet, Stockholm, Sweden
Biomarkers are increasingly being used both to detect disease earlier than would otherwise be clinically possible, and to monitor treatment efficacy. In both situations the key aim is to be able to predict, with associated uncertainty, a future clinical event based upon longitudinal biomarker profiles. Thus, a joint modelling approach is often adopted in which a multivariate longitudinal model for the biomarker data and a time-to-event model for the clinical outcome are linked using shared random effects. The use of a Bayesian approach, implemented using Markov Chain Monte Carlo (MCMC), in such a joint modelling framework has a number of advantages, including the flexibility that such an approach offers in terms of the complexity of both the longitudinal and time-to-event models, the ability to make/obtain (conditional) predictive statements/distributions, and the inclusion of external evidence, especially regarding the correlation over time of multiple biomarkers. These particular features of the Bayesian approach will be illustrated using data on a cohort of 4,834 diabetic patients who had Body Mass Index (BMI), serum cholesterol and diastolic/systolic blood pressure measured repeatedly whilst being followed up for the development of cardiovascular disease (myocardial infarction and/or stroke) and/or death.
P12.5
Causal effects of Total Antioxidant Capacity intake on risk of postmenopausal breast cancer in a cohort study
Daniela Mariosa1, Weiwu Wang1, Weimin Ye1, Rino Bellocco2
1Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden, 2Department of Statistics, University of Milano Bicocca, Milano, Italy
Introduction: Established risk factors for postmenopausal breast cancer include alcohol consumption and body fatness, but the existence of additional mechanisms between diet and onset of breast cancer cannot be ruled out. The associations between different antioxidants and postmenopausal breast cancer risk are inconsistent, and the hypothesis of a synergistic effect encourages consideration of total antioxidants.
Objectives: To apply causal inference methods to assess potential effects of dietary total antioxidant capacity (TAC) intake on postmenopausal breast cancer risk.
Methods: 19 051 women who were recruited into the Swedish National March Cohort in 1997 filled out a 106-item food frequency questionnaire, from which TAC, in terms of total radical-trapping antioxidant parameter (TRAP) or ferric reducing antioxidant power (FRAP), was calculated using published databases. Occurrence of cancer was ascertained through the Swedish Cancer Register. Follow-up was between October 1, 1997 or onset of menopause (if premenopausal at enrollment), and occurrence of any malignant cancer, loss to follow-up, or end of the study (December 31, 2010), whichever came first. Cox proportional hazards regression models were employed to estimate adjusted hazard ratios (HRs) and 95% confidence intervals (95%CIs).
Results: During 193 008 women-years, 606 cases were reported. TAC intake, in terms of FRAP or TRAP, was not associated with risk of postmenopausal breast cancer (highest vs lowest quintile, FRAP, adjusted HR=0.90, 95%CI 0.67-1.20, p-value for trend = 0.260; TRAP, adjusted HR=0.94, 95%CI 0.70-1.25, p-value for trend = 0.332).
Conclusions: This study does not report an effect of dietary TAC on risk of postmenopausal breast cancer.
P13 Latent variable models
P13.1
A Bayesian multivariate multilevel probit model applied to nursing burnout data
Baoyue Li1, Luk Bruyneel2, Walter Sermeus2, Koen van den Heede3, Kenan Matawie4, Emmanuel Lesaffre1,5
1ErasmusMC, Rotterdam, The Netherlands, 2University of Leuven, Leuven, Belgium, 3Belgian Health Care Knowledge Centre, Brussel, Belgium, 4University of Western Sydney, Sydney, Australia, 5Katholieke Universiteit Leuven, Leuven, Belgium
BACKGROUND: Burnout among nurses is a major problem in hospitals and negatively affects patient care. In the multi-country RN4CAST study, burnout was assessed in more than 30,000 nurses, from about 2,000 nursing units in about 400 hospitals from 12 countries. Three dichotomized measurements of burnout were captured, as well as three nurse working environment measurements. Previous research showed a significant association between the two kinds of measurements based on simple regressions. We questioned here whether these associations remained the same across all levels, and similarly for the relationship among the three burnout outcomes. Therefore, on top of the mixed effects mean structure, we added a mixed effects structure in the correlation matrix.
OBJECTIVES: We propose a Bayesian tri-variate four-level probit factor model to estimate the relationship between the three burnout outcomes and the working environment variables at each level, as well as a flexible correlation structure via a common latent factor with structured loadings. MCMC computations were done using WinBUGS 1.4.3.
RESULTS: Despite the complex structure of the data, all parameters were well estimated. We obtained significant negative relationships between the working environment and the burnout variables at each level, with different magnitudes. Further, we found a positive correlation structure varying across countries but remaining quite stable across hospitals and nursing units within a country.
CONCLUSIONS: The multivariate multilevel probit factor model provides an elegant way to flexibly model multivariate binary data in a multilevel context. The extension to categorical, ordinal and mixture outcomes presents no difficulties.
P13.2
Using mixture models for identification of typical trajectories of recovery in patients with Major Depressive Disorder
Klaus Groes Larsen
H. Lundbeck A/S, Copenhagen, Denmark
The clinical relevance of the effect size of antidepressants in clinical Phase III-IV trials is under endless discussion. While there is general agreement that many antidepressants have an effect on depression symptoms, a combination of commonly seen large placebo effects and relatively modest drug-placebo differences makes it difficult to give a clinically meaningful quantification of the true effect of the drug.
We use data from a large database on placebo-controlled studies in depression including the antidepressant escitalopram and estimate subgroups that benefit differently from treatment. We obtain an interpretation that is completely different from the traditional ‘average effect' from linear regression analysis. The mixture model allows for the identification of the proportion of patients in each of two or more subgroups, as well as their mean improvement in symptom score during the trial. Baseline variables may be incorporated into the model, either as covariates predicting subgroup membership, or as direct effects on the observed variable (the symptom score).
Mixture models have only recently started to appear in analyses of CNS trial data, and they are not without difficulties or even controversies. One notable issue is the estimation of the number of subgroups, which is central for the
interpretation of the results. We compare findings from our analysis with results
and conclusions in the sparse literature on mixture models in depression and
discuss the usefulness of the model framework for learning and understanding
the effects of antidepressants.
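As a deliberately simplified illustration of the subgroup idea (the models discussed above are typically growth mixture models fitted to whole trajectories), a normal mixture on end-of-trial change in symptom score can be fitted in R with mclust; the data and variable names are hypothetical.

  # Normal mixture on change in symptom score, number of subgroups by BIC
  library(mclust)

  change <- trial$score_week8 - trial$score_baseline   # hypothetical scores
  fit <- Mclust(change, G = 1:4)                       # 1 to 4 latent subgroups

  summary(fit)               # BIC-selected number of subgroups
  fit$parameters$mean        # mean improvement within each subgroup
  fit$parameters$pro         # estimated subgroup proportions
  table(fit$classification)  # most likely subgroup per patient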
P13.3
Assessing Item Properties of the Hospital Anxiety and Depression Scale
(HADS) for the Detection of Depression in Stroke Patients using Item
Response Theory (IRT)
Salma Ayis, Luis Ayerbe Garcia-Mozon
King's College London, London, UK
The Hospital Anxiety and Depression Scale (HADS) is a screening
questionnaire for the detection of anxiety (7 items) and depression (7 items)
among hospital patients but widely used for general populations. Studies
including systematic reviews that investigated the scale highlighted a range of
cut off points to identify caseness based on a summation score that treats
items equally. To understand the reasons behind the lack of consistency of cut
off points and factor structures across studies we used Item Response Theory
(IRT). Data were obtained from a population-based South London Stroke
Register. 1132 stroke patients were assessed with the HADS 3 months after
stroke (1995 - 2009). A two-parameter IRT model was fitted using Mplus to examine item difficulty (an item property describing where the item functions along the ability scale; for example, more difficult items function among people with severe depression) and discrimination, and to provide an underlying latent
variable of psychological wellbeing.
The most difficult item was "I can enjoy a good book radio or TV programme"
with difficulty 1.85 (se: 0.15), the least difficult was "I feel I am slowed down".
-0.19 (se: 0.06). Discrimination showed a less pronounced variance across
items.
Cut off points and scores that weight items equally are unlikely to reflect the
correct underlying psychological status. A latent variable that takes into account
response patterns and items properties would be more appropriate. The study
extends the evidence on the superiority of Latent Class Models (LCM) for the
assessment of health outcomes.
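The two-parameter model above was fitted in Mplus; an equivalent 2PL fit could be sketched in R with the ltm package, assuming a data frame of dichotomised (0/1) HADS depression items, which is hypothetical here.

  # Two-parameter logistic IRT model for seven binary depression items
  library(ltm)

  fit <- ltm(hads_dep ~ z1)   # one latent trait, item-specific parameters
  coef(fit)                   # columns: Dffclt (difficulty), Dscrmn (discrimination)
  summary(fit)

  # Person-level scores on the underlying latent dimension
  scores <- factor.scores(fit)
  head(scores$score.dat)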
P13.4
Application of a Latent-Class-Survival Model for data of a cardiological trial
Lena Herich, Christine zu Eulenburg, Karl Wegscheider
University Medical Center Hamburg-Eppendorf, Hamburg, Germany
Structural equation models are a powerful tool for modelling associations
between variables. Corresponding extensions are also possible for survival
analysis problems [1], but are uncommonly used, especially in medical
research [2,3].
The present research demonstrates the application of a latent class model
based on Cox's proportional hazards assumption for risk prediction in patients
post myocardial infarction.
Numerous parameters describing different or similar aspects of Heart rate
variability have been recorded at baseline. To handle multicollinearity, it seems
plausible to assume a categorical latent variable to measure the different signs
and symptoms observed. The effect of latent class membership, reflecting
different patterns of disturbances, on time to event is then evaluated. The
reliability of the model is investigated using bootstrapping techniques.
Advantages and disadvantages compared to a standard Cox approach are
discussed. Finally, the predictive accuracy of the two models will be compared.
References
[1] T. Asparouhov, K. Masyn, B. Muthén. Continuous time survival in latent variable models. Proceedings of the Joint Statistical Meeting in Seattle, August 2006. ASA section on Biometrics, 180-187.
[2] K. Larsen. Joint analysis of time-to-event and Multiple binary indicators of
latent classes. Biometrics, 60:85-92.
[3] T. Asparouhov, M. Boye, M. Hackshaw, A. Naegeli, B. Muthén. Applications
of continuous-time survival in latent variable models for the analysis of
oncology randomized clinical trial data using Mplus. Technical Report.
P13.5
A Dynamic Prediction Model for Anticoagulant Therapy
Peter Brønnum1, Søren Lundbye-Christensen2, Torben Bjerregaard Larsen
1Department of Cardiology, Aalborg Hospital, Aalborg, Denmark, 2Department
of Cardiology, Aalborg AF Study Group, Aalborg Hospital, Aalborg, Denmark
Patients with an increased risk of thrombosis require treatment with vitamin K-antagonists such as warfarin. Treatment with warfarin has been reported to be difficult, mainly due to high inter- and intra-individual response to the drug.
This poster reports the outcome of the development of a dynamic prediction
model. It takes warfarin intake and International Normalized Ratio (INR) values
as input, and uses an individual sensitivity parameter to model response to
warfarin intake. The model is formulated in state-space form and uses the Kalman filter to optimize individual parameters.
Retrospective testing of the model demonstrated robustness to the choice of initial parameters, and gave feasible predictions of both INR values and suggested warfarin dosage. Further studies to evaluate the impact on clinical outcome are currently under preparation.
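A minimal sketch of the Kalman filtering machinery on which such a state-space model rests - a univariate local-level filter written directly in R - is given below. The INR series and variance values are hypothetical, and the authors' model, with its individual sensitivity parameter and dosing input, is considerably richer.

  # Univariate local-level Kalman filter (illustrative sketch)
  kalman_filter <- function(y, var_obs, var_state, m0 = y[1], C0 = 1) {
    n <- length(y); m <- numeric(n); C <- numeric(n)
    m_prev <- m0; C_prev <- C0
    for (t in seq_len(n)) {
      a <- m_prev                       # predicted state
      R <- C_prev + var_state           # predicted state variance
      K <- R / (R + var_obs)            # Kalman gain
      m[t] <- a + K * (y[t] - a)        # updated state after observing y[t]
      C[t] <- (1 - K) * R
      m_prev <- m[t]; C_prev <- C[t]
    }
    list(mean = m, var = C)
  }

  inr <- c(2.1, 2.4, 2.8, 3.1, 2.9, 2.6)          # hypothetical INR series
  kf  <- kalman_filter(inr, var_obs = 0.09, var_state = 0.04)
  kf$mean                                          # filtered INR level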
P14 Longitudinal data
P14.1
A new criterion for choosing the best working correlation structure in GEE analysis
M.C. Pardo1, R. Alonso2
1Complutense University of Madrid, Madrid, Spain, 2Complutense University of
Madrid, Madrid, Spain
The method of generalized estimating equations (GEE) models the association
between the repeated observations on a subject. An appropriate working
correlation structure for the repeated measures of the outcome variable of a subject needs to be specified by this method, since the efficiency of the analysis is enhanced when the intracluster correlation structure is accurately modelled. Some
existing selection criteria choose the structure for which the covariance matrix
estimator and the specified working covariance matrix are closest. We define a
new criterion based on this idea for selecting a working correlation structure.
We compare our criterion with some existing criteria to identify the true
correlation structure via simulations with Poisson or binomial response, and
exchangeable or AR(1) intracluster correlation structure. Our approach
performs better when the true intracluster correlation structure is exchangeable
as well as for an AR(1) structure since the proportion of selecting a true
correlation structure was higher than that when other criteria were used.
P14.2
Joint Hierarchical Generalized Linear Models Using H-likelihood
Marek Molas1, Maengseok Noh3, Youngjo Lee4, Emmanuel Lesaffre1,2
1Erasmus MC, Rotterdam, The Netherlands, 2Katholieke Universiteit Leuven,
Leuven, Belgium, 3Pukyong National University, Busan, Republic of Korea,
4Seoul National University, Seoul, Republic of Korea
H-likelihood offers an interesting framework to estimate models for longitudinal
data (Lee and Nelder 1996, 2001). Solutions are found by maximizing the joint
likelihood and the appropriate adjusted profile likelihoods. Obtained estimates
are equivalent to REML estimates in the linear mixed model case, but here
they extend to mean and scale mixtures of the whole exponential family of distributions. The whole procedure relies on a neat numerical algorithm. Here
we investigate the use of the h-likelihood methodology to estimate multivariate
longitudinal models with response of exponential family. These are called Joint
Hierarchical Generalized Linear Models (JHGLM).
The numerical procedure uses the approach of h-likelihood method to estimate
the fixed and random effects as well as residual and random-effect variance
components. Random effects are allowed to be correlated within a model or
between the models. Correlation between random components is estimated by
using Newton - Raphson procedure. To estimate the correlation matrix the
Cholesky decomposition is used and the score and hessian matrices are
computed with respect to the parameters in this decomposition.
We apply the method to a rheumatoid arthritis dataset, where two endpoints
are measured: (1) a Health Assessment Questionnaire, which consists of
binary data indicating remission of the disease or a good status, and (2) a
Disease Activity Score (DAS) which evaluates the patient's disease status by a
physician. The setting requires the estimation of a binomial-normal model,
where correlations between latent trajectories of the two outcomes are of
interest.
P14.3
Causal inference with longitudinal outcomes and non-ignorable drop-out
Maria Josefsson1, Xavier de Luna1, Lars Nyberg2
1Umeå University, Umeå, Sweden, 2Umeå center for Functional Brain Imaging, Umeå, Sweden
In many evaluation studies of a causal agent (treatment), analysts use observational data in which the treatment and control conditions are not randomly assigned to participants. By using propensity score matching the analysts can balance observed covariates between treatment and control groups and hence reduce potential bias in estimated causal effects. Incomplete data is common in longitudinal studies due, e.g., to participants’ death or withdrawal. Such drop-out is said to be non-ignorable when it depends on the participant’s underlying rate of change in the outcome. Not taking into account non-ignorable drop-out may yield biased estimates of causal effects. In this paper we propose a method for estimating the average causal effect of a treatment, at baseline, on the trajectories of some longitudinal outcome under the presence of non-ignorable drop-out. We illustrate this method with an analysis of the causal effect of living alone on memory performance based on data from a large longitudinal study conducted in Umeå, Sweden.
P14.4
Assessment of Agreement between Digital 12-Lead ECG and continuous Holter ECG Recordings: A Heterogeneous Mixed Model Approach
Duolao Wang1, Arne Ring2, Jorg Taubel3
1London School of Hygiene and Tropical Medicine, London, UK, 2University of Oxford, Oxford, UK, 3Richmond Pharmacology, London, UK
Digital standard bedside 12-Lead ECG and continuous 12-Lead Holter ECG recordings are the two most commonly used ECG recording methods in thorough QT (TQT) studies, and the assessment of agreement in ECG measurements between the two methods has important implications for the design and analysis of thorough QT studies.
In this study, we propose a heterogeneous linear mixed model approach to evaluating the agreement in ECG parameters between the Holter and standard ECG recording methods in terms of central locations and variations.
The ECG data from a first-into-human trial are used for illustration of the proposed method. Standard 12-lead triplicate digital ECGs were recorded using MAC1200 machines at specified time points and, in addition, continuous 12-lead Holter ECG recordings were made in 34 healthy male subjects (25.7±7.5 years). ECG extractions were made from the Holter recordings and subsequently analyzed using the same process as the standard 12-lead triplicate digital ECG. Seven ECG parameters were included in this study: PR interval (ms), Heart Rate (bpm), RR interval (ms), QRS interval (ms), QT interval (ms), QTcB interval (ms), and QTcF interval.
The heterogeneous mixed model provides a simple approach to assessing the agreement in mean and variance of repeated measurements of an outcome variable between different methods. The ECG data analysis shows that although there is good agreement between the two methods in terms of mean values, the Holter method produces consistently larger variance and lower intersubject correlation coefficients than the standard method for the 7 ECG parameters.
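One way to fit a heterogeneous mixed model of this general type in R is with nlme, letting the residual variance differ by recording method through varIdent; the data frame and variable names are hypothetical and the authors' analysis may differ in detail.

  # Mixed model for one ECG parameter with method-specific residual variances
  library(nlme)

  fit <- lme(qtcf ~ method + timepoint,
             random  = ~ 1 | subject,
             weights = varIdent(form = ~ 1 | method),  # separate variance per method
             data = ecg_long)

  summary(fit)                 # fixed effect of 'method' compares mean levels
  VarCorr(fit)                 # between-subject and residual variance components
  fit$modelStruct$varStruct    # ratio of residual SDs: Holter vs standard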
P14.5
A random effects model fitted to dichotomous outcome data with latent classes
Ronald Geskus1,2
1Academic Medical Center, Amsterdam, The Netherlands, 2Amsterdam Health Service, Amsterdam, The Netherlands
The aim of the study was to investigate whether men who have sex with men (MSM) changed their sexual risk behaviour after notification of HIV-positive status. We used data from the Amsterdam Cohort Studies on HIV infection and AIDS. At every visit, participants were asked about their sexual risk behaviour over the preceding six months. We modeled trends in (un)protected anal intercourse from four years before the first HIV-positive test until four years after the first HIV-positive test.
We fitted a random effects logistic regression and a marginal model using generalized estimating equations (GEE). A problem when fitting a random effects logistic regression model is that some individuals show consistent sexual behaviour. For example, 74% of the MSM reported having practiced anal intercourse during every period before they became HIV infected. This violates the assumption that the random effects have a normal distribution on the logit scale.
Therefore, we also fitted time trends using a latent class model, in which individuals could belong to three classes: (i) always "0", (ii) always "1" and (iii) switchers. Only if they belonged to class (iii) were time trends fitted using a random effects logistic regression. For parameter estimation, we used a Bayesian approach with MCMC sampling to summarize the posterior distributions. We compare results with the other two approaches.
Also in the latent class model, estimated random effects for individuals in group (iii) did not have a normal distribution. However, marginal effects were in correspondence with the raw data.
P14.6
Prognostic biomarkers across the patient journey: Systolic blood pressure before, during and after myocardial infarction
Eleni Rapsomaniki1, Harry Hemingway1, Liam Smeeth2, Adam Timmis3, Spiros Denaxas1, Julie George1, Anoop Shah1
1University College London, London, UK, 2London School of Hygiene and Tropical Medicine, London, UK, 3Queen Mary University London, London, UK
Background
Systolic blood pressure (SBP) is one of the most widely recorded biomarkers but, to our knowledge, previous studies have not evaluated the prognostic impact of measurements taken before, during and after an acute myocardial infarction.
Objective
To characterize the relationship between SBP and prognosis of acute
myocardial infarction, in relation to shape, longitudinal trends and variations in
SBP.
Methods
We studied repeat SBP measurements from 24,300 patients who had suffered
an MI from CALIBER, an electronic health records collaboration linking data
from primary care, hospital admissions, a myocardial registry and cause-specific mortality records. The study period was 2000-2010 with 4 years mean
follow-up and the primary endpoint was subsequent (repeat) MI or fatal CHD.
Associations were adjusted for usual confounders and medication use.
Results
SBP measured prior to MI has a positive linear association with subsequent
events; in contrast, SBP measured after MI is associated inversely. Acute
(admission) blood pressure, known to influence short term (e.g. 30 day)
survival inversely, had no effect on the long term outcomes reported here.
Variation in SBP measurements both before and after an MI was a predictor of
future events. In particular, a standard deviation > 15 mmHg, estimated from repeat readings within a 3-year period, was associated with the worst prognosis.
Conclusion
The shape and direction of association of SBP with further coronary events
after MI depend on the timing of SBP measurement. This may reflect impaired left ventricular function resulting from myocardial infarction, and it persists after adjusting for blood pressure medications.
P14.7
Discriminant analysis for predicting pre-eclampsia based on bivariate longitudinal biomarker profiles
Malihe Nasiri1, Soghrat Faghihzadeh1, Hamid Alavi Majd2, Farid Zayeri2,
Noorosadat Kariman2, Nastaran Safavi2, Nasrin Boroomandnia2
1Tarbiat Modares University, Tehran, Iran, 2Shahid Beheshti University of
Medical, Tehran, Iran
Discriminant analysis encompasses procedures for classifying observations
into groups. In recent years, a number of developments have occurred in
discriminant analysis procedures for longitudinal data. In different clinical
studies, biomarkers are needed for early detection of a specific disease, taking
into account any longitudinal information that becomes available. In
longitudinal studies there are situations in which researchers want to analyze two biomarkers simultaneously for prediction of the outcome in discriminant analysis, because of their interrelation. One such example concerns pregnant women: the levels of hemoglobin and hematocrit can help to predict pre-eclampsia, and levels that are abnormally high may be a sign that pre-eclampsia is present.
In a prospective cohort study, 650 pregnant women, who were referred to
prenatal clinic of Milad Hospital were evaluated. Hemoglobin and hematocrit
levels in the first, second and third trimesters of pregnancy were measured for each woman. The subjects were followed up to record pre-eclampsia. Covariance pattern and linear mixed effects models are the methods used in discriminant analysis for longitudinal data. The accuracy of the classification rule is described by the misclassification error rate (MER); the MER is estimated by the apparent error rate (APER). For the comparison of different models, we used the APER and the Akaike Information Criterion (AIC). The main
objective of this article is to predict Pre-eclampsia by the longitudinal
hemoglobin and hematocrit data and comparison of the models in univariate
and bivariate longitudinal discriminant analysis by APER, AIC and ROC curve.
The analysis was performed by SAS software.
P14.8
Neonatal exposure to thimerosal from vaccines and child development in the first 3 years of life - application of generalized estimating equations
Dorota Mrozek-Budzyn, Renata Majewska, Agnieszka Kieltyka, Malgorzata Augustyniak
Jagiellonian University Medical College, Krakow, Malopolska, Poland
This study was designed to examine the relationship between neonatal
exposure to thimerosal-containing vaccine (TCV) and child development. The
cohort was recruited prenatally in Krakow. Child development was assessed
using the Bayley Scales of Infant Development (BSID-II) measured at one-year intervals over 3 years. Generalized Estimating Equation models adjusted
for potential confounders were used to assess the association.
Children exposed to TCV in neonatal period had significantly lower
psychomotor BSID-II scores over the first three years of life than those not
exposed (β=-3.13; 95%CI: -5.73; -0.53). No association was found between TCV
exposure and BSID-II mental tests scores.
TCV exposure in neonatal period was associated with significantly lower
psychomotor development scores during the first three years of life.
P15 Meta-analyses
P15.1
A Bayesian approach for multivariate meta-analysis with many outcomes
Yinghui Wei1, Julian Higgins2
1MRC Clinical Trials Unit, London, UK, 2MRC Biostatistics Unit, Cambridge, UK
Multivariate meta-analytic models account for the dependency between effect size estimates and provide natural refinements over a univariate setting.
However, the difficulties in estimation of parameters arise when there are
missing outcome data. It becomes particularly challenging when there are
many outcomes reported by a small number of studies.
We propose a method based on the marginal distribution of the reported data
and modelling of the heterogeneity parameters and correlation matrix
separately. This facilitates incorporating informative empirically based prior
distributions. We further consider reducing model complexity in terms of
parameters in variance-covariance matrix. Several different covariance
structures which can be common in practice are tested.
We applied the methods to an example dataset from medical research practice. We observed an increase in the precision of parameter estimates when the multivariate approach is used. Comparing DIC values from different models confirms the better fit of the multivariate model using the separation strategy. The analysis suggests benefits of reducing model complexity, as it provides more precise parameter estimates.
P15.2
Bayesian Meta-analysis of Diagnostic Tests Allowing for Imperfect Reference
Standards
Joris Menten1, Marleen Boelaert1, Emmanuel Lesaffre2,3
1Institute of Tropical Medicine, Antwerp, Belgium, 2L-Biostat KULeuven,
Leuven, Belgium, 3Dep. of Biostatistics, Erasmus University, Rotterdam, The
Netherlands
This work is motivated by a meta-analysis of the accuracy (Sensitivity Se and
Specificity Sp) of rapid diagnostic tests (index test) for visceral leishmaniasis
(VL). This meta-analysis is hampered by the lack of a perfect reference test for
VL. This has two consequences: (1) in some studies Latent Class Analysis
(LCA) is used instead of a reference standard; (2) in other studies a less than
perfect reference standard may have been used. The statistical model used in
the meta-analysis must combine results from studies analyzed with a reference
standard with those from LCA while allowing for imperfect reference tests.
We extend the hierarchical bivariate logistic normal model as follows. (1) The logit Se's and Sp's of index and reference tests for each study are modelled using bivariate normal distributions. (2) For studies using a reference standard, the 2x2 contingency table of index vs. reference test result is modelled using a
multinomial distribution for the cell counts. (3) For studies estimating Se and Sp
through LCA, the logits of the Se and Sp and the standard errors of these logits
are derived from the estimates and CIs in the source publication. The model is
estimated using Bayesian methods with non-informative priors for prevalences
and Se and Sp of the index test. We classify studies according to the reference
test and use informative priors for the diagnostic accuracy of the reference
tests elicited from Leishmania experts.
The performance of the model is studied through simulation and applied to the
VL data.
P15.3
Dichotomisation of continuous outcomes: A systematic review of meta-analyses using birthweight
Mercy Ofuya1, Odile Sauzet2, Janet Peacock1
1King's College London, London, UK, 2Universität Bielefeld, Bielefeld, Germany
Background
Power and precision are greater in meta-analyses than in individual analyses.
However, dichotomisation of continuous outcomes poses a problem as studies
can only be pooled if they have a common outcome. Meta-analyses may
include analyses of the continuous and dichotomous form, with a different
combination of studies for each. This may lead to loss of power and/or
selection bias.
Objectives
(a) To perform a systematic review of meta-analyses for the outcome birthweight, when analysed as continuous (in grams) or as dichotomous (usually low birthweight: birthweight <2500g), and to assess the consequences of the duality of outcomes. (b) When dichotomised and continuous data were reported, to use the distributional method [1] to demonstrate how this could
improve the precision of dichotomised outcomes.
Methods
Pubmed, Embase, Web of Science, and Cochrane Database of Systematic
Reviews were searched for reviews published 2010-2011 with the outcome
birthweight.
Results
Seventy-five meta-analyses were obtained. Of these, 61% (46) analyzed
birthweight as continuous, and 73% (55) as dichotomous. Thirty-five percent
(26) presented both dichotomous and continuous birthweight summaries.
Among these, 6/26 reported results that were statistically significant for one
outcome and not for the other.
Conclusion
Researchers commonly dichotomise continuous outcomes but do not always report results for means as well as proportions, leading to problems for meta-analyses. Presentation of both outcomes is needed. The distributional method
would allow for more precise and unbiased estimates for dichotomised
outcomes.
1. Peacock JL et al., Dichotomizing continuous data while retaining statistical
power. Statistics in Medicine 2012. In press.
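The core of the distributional idea can be sketched in a few lines of R: the proportion below a clinical cut-point is derived from the group mean and SD under approximate normality (the published method additionally provides standard errors and comparisons); the numbers below are hypothetical.

  # Proportion of infants below 2500 g derived from group means and SDs
  p_low <- function(mean_bw, sd_bw, cut = 2500) pnorm((cut - mean_bw) / sd_bw)

  p_control   <- p_low(mean_bw = 3300, sd_bw = 550)
  p_treatment <- p_low(mean_bw = 3420, sd_bw = 540)

  c(control = p_control, treatment = p_treatment,
    risk_difference = p_treatment - p_control)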
P15.4
Meta-analysis of high quality observational studies: a surrogate for clinical trials?
Catherine Klersy, Marco Ferlini, Arturo Raisaro, Valeria Scotti, Anna Balduini,
Luigia Scudelle, Carmine Tinelli, Annalisa De Silvestri
IRCCS Fondazione Policlinico San Matteo, Pavia, Italy
Often, when evidence from large and/or high quality randomized clinical trials (RCT) is not available, there are numerous large, high quality cohort studies presenting adjusted risk ratios (RR). We considered studies evaluating the efficacy of coronary intravascular ultrasound (IVUS) guidance in drug-eluting stent positioning and compared evidence derived from RCTs and longitudinal observational studies.
We performed an exhaustive literature search for full-text articles in peer-reviewed journals (2003-2011), excluding uncontrolled studies. We meta-analysed major adverse cardiac events (MACE: death, acute myocardial infarction and/or revascularization). Pooled fixed or random effect RRs and 95% confidence intervals (95%CI) were computed. Twenty-six full texts out of 217 abstracts were retrieved; 17 of them were excluded (abstract, review, meta-analysis, MACE missing, no pertinence). Of the 9 included articles, 1 was a
RCT and 8 were observational (5 with adjusted estimates). A total of 17541
patients were enrolled. Median follow-up duration was 18 months (25th-75th
percentiles 12-24). The RR for the RCT was 0.92 (0.39-2.19, N=210 pts); for
the ‘adjusted' cohort studies it was 0.79 (0.69-0.91, N=15405 pts) and for the
‘unadjusted' cohort studies it was 0.89 (0.70-1.13, N=2286 pts).
The small RCT does not provide sound evidence, as shown by wide 95%CIs,
while the large well-conducted cohort studies with adjusted estimates indicate
a protective effect of IVUS against MACE, with a high level of confidence. Since IVUS patients tend to be more critical, the effect would be diluted in unadjusted
studies. Although large RCTs are needed to confirm IVUS role, good quality
cohort studies might better reflect real life.
P15.5
Estimation of between-trial variance in sequential meta-analyses
Putri Novianti, Kit Roes, Ingeborg van der Tweel
Biostatistics and Research Support, Julius Center for Health Sciences and
Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands
Various estimators of the variance between treatment effects from randomized
clinical trials in a meta-analysis are known to be divergent or even conflicting.
In a sequential meta-analysis (SMA), their properties are even more important,
as they influence the point in time at which definite conclusions are drawn. To
study their properties in an SMA, we conducted an extensive simulation study
and applied the estimators in real life examples with dichotomous and
continuous outcome data. Bias and variance of the estimators were used as
primary evaluation criteria, as well as the number of patients from the
accumulating trials needed to arrive at stable estimates. Results of simulation
studies showed that the well-known DerSimonian-Laird estimator behaves differently for dichotomous as compared to continuous outcomes. The DerSimonian-Laird, the REML and the Paule-Mandel estimators all perform well
in an SMA with continuous outcome data. For an SMA with dichotomous
outcome data the Paule-Mandel estimator is to be preferred.
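In R, the estimators compared above correspond to different choices of the method argument in metafor's rma(); a sketch with hypothetical trial-level effect estimates (yi) and variances (vi), ordered by time of trial completion, might look as follows.

  # Between-trial variance (tau^2) under three estimators (illustrative sketch)
  library(metafor)

  fit_dl   <- rma(yi, vi, data = trials, method = "DL")    # DerSimonian-Laird
  fit_reml <- rma(yi, vi, data = trials, method = "REML")  # restricted ML
  fit_pm   <- rma(yi, vi, data = trials, method = "PM")    # Paule-Mandel

  c(DL = fit_dl$tau2, REML = fit_reml$tau2, PM = fit_pm$tau2)

  # Sequential flavour: re-estimate tau^2 as trials accumulate
  sapply(3:nrow(trials), function(k)
    rma(yi, vi, data = trials[1:k, ], method = "PM")$tau2)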
P15.6
Multivariate meta-analysis of surrogate endpoints in health technology
assessment: a Bayesian approach
Sylwia Bujkiewicz1, Alex J. Sutton1, Nicola J. Cooper 1, John R. Thompson1,
Mark J. Harrison2, Deborah P. M. Symmons2, Keith R. Abrams1
1University of Leicester, Leicester, UK, 2University of Manchester, Manchester,
UK
Surrogate endpoints play an increasingly important role in health technology
assessment, in which clinical effectiveness outcomes are used to derive quality
of life estimates (e.g. EQ5D), as part of the economic evaluation of the new
medical technologies. These effectiveness measures are usually estimated by
meta-analysing outcomes from randomised controlled trials. However when
few clinical trials report the clinical outcomes appropriate for economic models,
the surrogates can be considered through multivariate meta-analysis of studies
that are mixture of those reporting the primary outcome of interest, a surrogate
endpoint or both (preventing valuable data from clinical trials from being
discarded). In a Bayesian framework, multivariate random effects meta-
analysis has the advantage of borrowing strength between studies and
between the outcomes. This can lead to reduced uncertainty around the
effectiveness estimate and, in consequence, of the estimates of cost-effectiveness.
This framework will be presented based on an example from rheumatoid
arthritis, in which EQ5D is derived from change in HAQ score (a self-reported
measure of physical function). Multivariate evidence synthesis is used to take
into account data from studies reporting HAQ as well as studies reporting other
measures of response such as ACR, DAS28 or EULAR with the aim of
decreasing uncertainty around HAQ. Multivariate meta-analysis in a product
normal formulation (PNF) (of conditionally independent univariate normal
distributions) will be discussed with the prior for within-study correlation
obtained from individual patients' data. We will discuss sensitivity to prior
distribution for between-study correlation and implied priors for the parameters
of the PNF model.
for placebo in cUTI. The primary efficacy endpoint was microbiological
eradication rate at the test of cure visit in the microbiological intent-to-treat
population. Adjustments were made to account for inconsistency in time of
primary endpoint assessment among some studies. To account for inter-study
variability, a weighted non-iterative random-effects model was used to obtain
an estimate of the eradication rate using a meta-analysis package in R
(metafor). The estimated eradication rates and 95% confidence intervals (CI)
were 32.2% (26.7%-37.8%) for placebo, 81% (77.7%-84.2%) for Doripenem,
79% (75.9%-82.2%) for Levofloxacin, and 80.5% (71.9%-89.1%) for Imipenem-cilastatin. The treatment effect was estimated as the difference between the
lower bound of the CI for each comparator and the upper bound of the CI for
placebo. The treatment effect estimates were 39.9% for Doripenem, 38.1% for
Levofloxacin, 34.1% for Imipenem-cilastatin, and 40.1% for overall. These
estimates will facilitate the design of future cUTI trials and help inform
appropriate NI margins.
P15.9
An age-adjusted metric for risk discrimination, with application to age-specific
cardiovascular disease prediction
Eleni Rapsomaniki1, Ian White2, Simon Thompson3, Angela Wood3, John
Danesh3
Very large sample sizes are required for estimating effects which are known to 1University College London, London, UK, 2MRC Biostatistics Unit, Cambridge,
be small, and for addressing intricate or complex statistical questions. Such UK, 3Strangeways Research Laboratory, Cambridge, UK
sample sizes are often only achievable by pooling data from multiple studies;
effects of interest can then be investigated through an individual-level meta- Discrimination statistics such as the C-index describe the ability of a survival
analysis (ILMA) on the pooled data, or via a study-level meta-analysis (SLMA). model to assign higher risks to individuals who experience earlier events. Their
value depends on the sample distribution of prognostic variables, but this - in
Ethico-legal constraints that govern the agreements and consents for individual
particular the age distribution - is partly controlled by study design. This makes
studies frequently prohibit pooling experimental data. By conducting a SLMA in
discrimination statistics difficult to compare across studies that include different
place of an ILMA, the data are not pooled and only non-disclosing summary
age-groups and may obscure the predictive power of additional risk factors.
statistics are shared between studies.
To overcome this limitation we propose two new discrimination measures: a
It is possible in certain cases to conduct an ILMA without pooling data from
stratified C-index computed within age-bands, and an adjusted C-index based
multiple studies, and without disclosing sensitive information: the original
on the age-band-specific measures. We present results based on
individual-level analysis, in some cases, can be pieced back together using
cardiovascular disease (CVD) risk prediction and associated risk factors, using
only non-disclosing summary statistics from each study. When the data are
baseline measurements from 350,000 participants, aged 40 to 89, from 90
horizontally partitioned between studies, that is, data are collected on the same
cohort studies from the Emerging Risk Factors Collaboration. We show that
variables in each study and a particular study participant can be found in one
between-study heterogeneity in the adjusted C-index is almost 50% less than
study only, it is possible to build, for example, an individual level generalised
in the ordinary C-index in a model adjusted for age and sex, and 13% less in a
linear model, or conduct a generalized estimating equations analysis, without
multiply adjusted model. Further, age modifies the prognostic significance of
pooling the data.
CVD risk factors, most of which were found to be significantly more
We introduce DataSHIELD (Data Aggregation Through Anonymous Summary- discriminative at younger ages. In fact, in those aged 80-89 we observed no
statistics from Harmonised Individual levEL Databases) as a tool to coordinate improvement in discrimination when risk factors were added to a model
such an analysis, and explain why this approach yields identical results to an adjusted for age and sex alone.
ILMA for a range of statistical analyses. IT requirements will also be discussed,
We conclude that meta-analyses of concordance statistics and their
along with the ethical and legal challenges which must be addressed.
incremental values should pay attention to study-specific distributions of age
P15.8
and other design variables, and that stratified and adjusted C-indices avoid
Meta-analysis to estimate the treatment effect of Doripenem, Levofloxacin, and spurious heterogeneity.
Imipenem-cilastin in complicated urinary tract infections
Krishan Singh, Gang Li, Linda Mundy, Fanny Mitrani-Gold, Jeffrey P15.10
Wetherington, Milena Kurtinecz
Meta-analysis of paired-comparison studies of diagnostics test data: A
GlaxoSmithKline Pharmaceuticals, Collegeville, Pennsylvania, USA
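As a hedged illustration of the type of weighted non-iterative random-effects calculation described above (not the authors' actual analysis), the following R sketch pools hypothetical study-level eradication proportions with the metafor package; the study counts are invented placeholders.

```r
# Minimal sketch: DerSimonian-Laird random-effects pooling of eradication
# proportions with metafor; the study counts below are hypothetical.
library(metafor)

dat <- data.frame(study  = c("A", "B", "C"),
                  events = c(150, 160, 45),   # eradications (invented)
                  n      = c(185, 200, 56))   # patients (invented)

# Raw proportions and their sampling variances
dat <- escalc(measure = "PR", xi = events, ni = n, data = dat)

# Weighted, non-iterative random-effects model (DerSimonian-Laird)
res <- rma(yi, vi, data = dat, method = "DL")
predict(res)   # pooled eradication rate with 95% CI
```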
P15.9
An age-adjusted metric for risk discrimination, with application to age-specific cardiovascular disease prediction
Eleni Rapsomaniki1, Ian White2, Simon Thompson3, Angela Wood3, John Danesh3
1University College London, London, UK, 2MRC Biostatistics Unit, Cambridge, UK, 3Strangeways Research Laboratory, Cambridge, UK
Discrimination statistics such as the C-index describe the ability of a survival model to assign higher risks to individuals who experience earlier events. Their value depends on the sample distribution of prognostic variables, but this - in particular the age distribution - is partly controlled by study design. This makes discrimination statistics difficult to compare across studies that include different age-groups and may obscure the predictive power of additional risk factors.
To overcome this limitation we propose two new discrimination measures: a stratified C-index computed within age-bands, and an adjusted C-index based on the age-band-specific measures. We present results based on cardiovascular disease (CVD) risk prediction and associated risk factors, using baseline measurements from 350,000 participants, aged 40 to 89, from 90 cohort studies from the Emerging Risk Factors Collaboration. We show that between-study heterogeneity in the adjusted C-index is almost 50% less than in the ordinary C-index in a model adjusted for age and sex, and 13% less in a multiply adjusted model. Further, age modifies the prognostic significance of CVD risk factors, most of which were found to be significantly more discriminative at younger ages. In fact, in those aged 80-89 we observed no improvement in discrimination when risk factors were added to a model adjusted for age and sex alone.
We conclude that meta-analyses of concordance statistics and their incremental values should pay attention to study-specific distributions of age and other design variables, and that stratified and adjusted C-indices avoid spurious heterogeneity.
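The following R sketch illustrates the general idea of computing C-indices within age-bands and combining them, assuming a hypothetical data frame d with survival time, event indicator, a risk score and age; the event-count weighting is one illustrative choice, not necessarily the authors' exact adjustment.

```r
# Sketch: age-band-stratified C-index and a simple weighted combination.
# 'd' is a hypothetical data frame with columns time, event, score, age.
library(survival)

d$ageband <- cut(d$age, breaks = c(40, 50, 60, 70, 80, 90), right = FALSE)

band_c <- lapply(split(d, d$ageband), function(sub) {
  # higher score = higher risk, hence reverse = TRUE
  cc <- concordance(Surv(time, event) ~ score, data = sub, reverse = TRUE)
  c(cindex = cc$concordance, events = sum(sub$event))
})
band_c <- do.call(rbind, band_c)

# Event-weighted average across age-bands (one possible "adjusted" summary)
weighted.mean(band_c[, "cindex"], w = band_c[, "events"])
```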
P15.10
Meta-analysis of paired-comparison studies of diagnostic test data: A Bayesian modelling approach
Pablo Verde
University of Duesseldorf, Duesseldorf, NRW, Germany
Diagnostic paired-comparison studies arise when two diagnostic tests are
applied to the same group of patients. In such studies accuracy characteristics
(e.g. sensitivities and specificities) are correlated between tests. The main
problem in meta-analysis of this type of data is the lack of published
information to account for intra-study correlation and to directly analyze
between tests agreement. In this work we borrow ideas of ecological inference
for 2x2 tables (Wakefield, 2004) to model indirectly the intra-study correlation of
accuracy characteristics and to infer on tests agreement. Variability between
studies is modeled by extending a Bayesian hierarchical model for meta-analysis of diagnostic tests (Verde, 2010). Statistical methods are illustrated
with two systematic reviews. The first one investigates the diagnostic accuracy
of automatic decision tools (e.g. neural networks) compared with unaided
doctors in patients with acute abdominal pain (Liu, et al. 2006). The second
one compares the diagnostic accuracy of positron emission tomography with
computed tomography in the detection of lung cancer (Birim, et al. 2005). A
simulation experiment is presented to assess the influence of ignoring intra-study correlation in this meta-analytic problem. Statistical computations are
implemented in public domain statistical software (JAGS and R).
Keywords: multivariate meta-analysis, diagnostic test, ecological inference,
MCMC
P16 Model selection
P16.1
Inverse problem within a regression framework
Abdel Douiri1,2
1King's College London, London, UK, 2NIHR c-BRC, Guy's and St. Thomas' NHS Foundation Trust, London, UK
Regression modelling is a powerful statistical tool often used in clinical trials and epidemiological studies. The regression problem could be formulated as an inverse problem that measures the discrepancy between the target outcome and the data produced by representation of the modelled predictors. This approach could simultaneously perform variable selection and coefficient estimation. We focus particularly on the linear regression problem, Y ~ N(Xβ*, σI_n), where β* ∈ R^p is the parameter of interest and its components are the regression coefficients. The inverse problem finds an estimate for the parameter β*, which is mapped by the linear operator (L: β → Xβ) to the observed outcome data Y = Xβ + ε. This problem could be conveyed by finding a solution in the affine subspace L^-1({Y}). However, in the presence of multicollinearity and/or high dimensional data, the solution may not be unique, so the introduction of prior information to reduce the subset L^-1({Y}) and regularize the inverse problem is needed. Inspired by Huber's robust statistics framework, we propose a new extension to the l1-penalized regression problem: an adaptive Huber regularizer. A simple method for selection of the regularization parameter that minimizes the l1-penalized likelihood will be given. We compare results of the adaptive Huber and two other l1-penalized regression regularization methods, adaptive lasso and elastic net, on both simulated data and real data from the South London Stroke Register. The proposed approach can be extended to general linear regression models.
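For context, a minimal R sketch of the two comparator penalized-regression methods named above (elastic net and adaptive lasso) is given below, using glmnet on simulated data; it does not implement the adaptive Huber regularizer proposed in the abstract.

```r
# Sketch: elastic net and adaptive lasso comparators via glmnet (simulated data).
library(glmnet)
set.seed(1)
n <- 200; p <- 20
x <- matrix(rnorm(n * p), n, p)
y <- as.numeric(x[, 1:3] %*% c(2, -1.5, 1) + rnorm(n))

# Elastic net (alpha between 0 = ridge and 1 = lasso), lambda by cross-validation
enet <- cv.glmnet(x, y, alpha = 0.5)

# Adaptive lasso: data-driven penalty weights from an initial ridge fit
ridge <- cv.glmnet(x, y, alpha = 0)
w <- 1 / abs(as.numeric(coef(ridge, s = "lambda.min"))[-1])
alasso <- cv.glmnet(x, y, alpha = 1, penalty.factor = w)

coef(enet, s = "lambda.min")
coef(alasso, s = "lambda.min")
```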
P16.2
Quantifying Bias due to Unobserved Heterogeneity at Individual and Cluster Levels when Using Binary Response Regression Models: A Simulation study
Salma Ayis, Bola Coker
King's College London, London, UK
Unobserved factors that affect outcomes of interest are common in medical and behavioural data. For example, differential response to treatments might be attributed to unknown characteristics (unobserved heterogeneity) and these may seriously impact on inference from binary regression models. Simulation was used to study factors that influence the bias of the estimated parameters, including the variance of unobserved heterogeneity, the clustering level that was missing, and models used for estimation. This investigation extends an earlier work{1} using the standard and mixed effect logistic models to adjust for unobserved heterogeneity at cluster levels 1, 2 and 3. Situations where the clustering level was correctly specified or missing were examined. Findings suggest that misspecification of lower cluster levels has a worse impact on estimation than misspecification of higher levels. The variance of unobserved heterogeneity and the location of the outcome probability are important determinants of the size of bias, and the largest bias occurred where the response was rare. The impact on structural parameters is less serious than that on random parameters. Three levels of unobserved heterogeneity with mean zero and a variance between 0.01-1.0 were examined, using a single or two binary explanatory variables that were time varying or time invariant. The findings may appeal to researchers aware of or in doubt about the availability of clustering at one or more levels but intending to use a simple approach.
{1} Ayis, S. Testing Inference from Logistic Regression Models in Data with Unobserved Heterogeneity at Cluster Levels. Communications in Statistics, 2009, Vol. 38 (6), 1202-1211.
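A compact R sketch of the kind of simulation described above is shown below: a binary outcome is generated with an unobserved cluster-level effect, and a standard logistic model (ignoring clustering) is compared with a mixed-effects logistic model; this is a simplified two-level illustration, not the authors' full three-level design.

```r
# Sketch: bias from ignoring cluster-level unobserved heterogeneity (two levels).
library(lme4)
set.seed(2)
n_clust <- 100; m <- 20
cluster <- rep(seq_len(n_clust), each = m)
u <- rnorm(n_clust, 0, 1)              # unobserved cluster heterogeneity
x <- rbinom(n_clust * m, 1, 0.5)       # binary explanatory variable
y <- rbinom(n_clust * m, 1, plogis(-2 + 1 * x + u[cluster]))  # rare-ish outcome
dat <- data.frame(y, x, cluster)

fit_std <- glm(y ~ x, family = binomial, data = dat)                    # clustering ignored
fit_mix <- glmer(y ~ x + (1 | cluster), family = binomial, data = dat)  # random intercept

c(true = 1, standard = coef(fit_std)["x"], mixed = fixef(fit_mix)["x"])
```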
P16.3
A map for the jungle of choices in Mixed Models for Repeated Measures
Sophie Swinkels, John Hendrickx
Danone Research, Wageningen, The Netherlands
Mixed models for repeated measures offer different options for modeling the mean, modeling the covariance, modeling time, handling baseline responses and for defining the intervention effect. For modeling the mean, options include analyzing response profiles and analyzing parametric curves. Covariance can be modeled using various covariance pattern models or random effects covariance structures. Time can be modeled as fixed occasions or as a continuous variable. Baseline can be included as a covariate or in the response vector together with the post-baseline values. The intervention effect can be defined at a specific time point or considering the whole intervention period. Theoretically, each choice can be combined with all the options for the other choices, resulting in a huge number of possibilities for the total model. One can easily get lost in this wilderness of possibilities.
During this presentation, a scheme will be introduced that summarizes all these options and provides guidance in selecting the most suitable choice. Contrasts will be used to compare common elements of different models and to show they may not be as different as they appear at first sight. The impact of the different model options will be illustrated using data from a multi-centre, multi-country, randomized, controlled, double-blind, parallel-group trial, designed to assess the effect of a medical food on memory performance in patients with mild AD.
P16.4
Variable selection for prediction models based on multicenter data
Laure Wynants1,2, Dirk Timmerman3, Sabine Van Huffel1,2, Ben Van Calster1
1Department of Electrical Engineering (ESAT-SCD), KU Leuven, Leuven, Belgium, 2IBBT Future Health Department, KU Leuven, Leuven, Belgium, 3Department of Development and Regeneration, KU Leuven, Leuven, Belgium
Ignoring the clustered nature of multicenter data biases multivariable model building. Therefore, we investigated how this affected model coefficients and variable selection for the development of a prediction model for ovarian tumor diagnosis.
To predict malignancy of ovarian masses, two models were developed on data for 19 predictor variables from 3510 patients recruited at 21 centers. Method 1 ignored the clustered nature of the data whereas method 2 included a random intercept for center. We first fitted the full models for both methods to compare model coefficients. Then we performed backward variable elimination (with p=0.01 as selection criterion) using bootstrapping to compare selection frequencies. Finally, we demonstrated the combination of random intercepts logistic regression with multivariable model building using the method of multivariable fractional polynomials (MFP).
For the full models, model coefficients differed by at least 10% for seven
variables, with a mean difference of 12% and a maximum of 74%. Variable
selection frequencies also depended on whether clustering by center was
accounted for. The selection percentage differed by more than 10 percentage
points for five variables, with the highest difference being 64 percentage points.
Finally, the MFP method selected fewer variables when a random intercept was
used compared to when clustering by center was ignored.
Inter-center differences in predictor effects resulted in more conservative
results when using a random intercept, with higher p-values and lower
selection frequencies. Whereas multicenter data are beneficial for increasing
model generalizability, the clustering by center must be accounted for when
developing prediction models.
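As a hedged illustration of "method 2" above (a random intercept for center), the following R sketch fits a standard logistic model and its random-intercept counterpart on a hypothetical data frame d; the variable names are invented.

```r
# Sketch: prediction model with vs without a random intercept for center.
# 'd' is a hypothetical data frame with outcome 'malignant', predictors
# 'age' and 'ca125', and a clustering variable 'center'.
library(lme4)

fit_ignore  <- glm(malignant ~ age + log(ca125), family = binomial, data = d)
fit_cluster <- glmer(malignant ~ age + log(ca125) + (1 | center),
                     family = binomial, data = d)

cbind(ignoring_center  = coef(fit_ignore),
      random_intercept = fixef(fit_cluster))
```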
P16.5
Generalized Additive Models and Computerized Breast Cancer Detection:
Clinical Application
Javier Roca-Pardiñas1, María J. Lado1, Pablo G. Tahoces2, Carmen Cadarso-Suárez2
1University of Vigo, Vigo, Spain, 2University of Santiago, Santiago de Compostela, Spain
Generalized Additive Models (GAMs) are mathematical models that can be used to predict the mean of a response variable, depending on the values of other explicative covariates. They allow for the introduction of categorical variables, called factors, which can be essential in diverse classification problems.
In this work, we have studied one specific classification problem, derived from the development of a Computer-Aided Diagnosis (CAD) system, dedicated to the early detection of one of the primary signs of breast cancer: clustered microcalcifications.
Nowadays, breast cancer is one of the most common cancers. Clustered microcalcifications are relevant radiologic signs of irregular shape, varying size, and located in an inhomogeneous background of parenchymal tissues, which appear in 30%-50% of breast cancers.
To help radiologists diagnose breast cancer early, CAD schemes have been developed during the last decades. These systems produce, as a result, suspicious areas that should be analyzed to determine if they correspond either to a lesion or a false detection.
In this work, we have developed an approach, based on GAMs and ROC (Receiver Operating Characteristic) analysis, for reducing the number of false positives in a CAD system for detecting microcalcifications. A factor was also included, considering the breast tissue as the categorical covariate. Different sets of properties of the detected clusters were selected and analyzed employing GAMs. The software was developed employing R, and our results yielded a high performance of the system (high sensitivity and reduced number of false detections), when the appropriate group of covariates was selected.
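A minimal R sketch of the general GAM-plus-ROC idea is given below, assuming a hypothetical data frame cad with a true lesion indicator, two continuous cluster features and a breast-tissue factor; it is illustrative only and not the authors' software.

```r
# Sketch: GAM with a categorical factor for false-positive reduction, plus ROC/AUC.
# 'cad' is hypothetical: lesion (0/1), size, contrast (features), tissue (factor).
library(mgcv)
library(pROC)

fit <- gam(lesion ~ s(size) + s(contrast) + tissue,
           family = binomial, data = cad)

scores  <- predict(fit, type = "response")
roc_obj <- roc(cad$lesion, scores)   # ROC analysis of the GAM scores
auc(roc_obj)
```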
P17 Modelling in drug and device development
P17.1
Model-based bioequivalence test in a 2x2 cross over design
Yuh-Ing Chen, Chi-Shen Huang
National Central University, Jhongli, Taiwan
In a pharmacokinetic (PK) study, to investigate if the test and reference drugs under consideration are bioequivalent, two sequences of volunteers are usually assigned to take different drugs in two periods under a 2x2 cross over design. The drug concentrations in blood or plasma are then repeatedly measured from the same volunteer at certain time points after the drug administration, which is referred to as the drug concentration-time profile or curve. In this paper, we are concerned with testing for the equivalence between the areas under the drug concentration-time curves (AUCs) for two oral-administered drugs. To do so, we construct a nonlinear mixed effect model for the drug concentration-time profiles in the 2x2 cross over design. In the statistical model, one-compartment PK models are employed to describe the drug concentration-time curves which incorporate both the between-subject and within-subject variations. Moreover, a multivariate generalized gamma (MGG) distribution is introduced for the drug concentrations repeatedly measured from the same volunteer at different periods in each of the two sequences. We then propose a test for the bioequivalence between the two drugs based on the discrepancy of the two AUCs estimated from the proposed model fitted to the observed drug concentration-time profiles. The results of a simulation study investigating the level and power of the model-based test relative to the conventional non-compartmental test under different configurations of MGG distributions, sample sizes and measured time points are also presented. Finally, a real data set is illustrated based on the proposed model and test.
P17.2
Sample size and power considerations for estimating subject-treatment interactions in parallel-group designs using covariates
Ruediger Paul Laubender
Institute of Medical Informatics, Biometry, and Epidemiology (IBE), Munich, Germany
When the aim of a clinical trial is to estimate treatment-subject interactions (e.g. for identifying responders), it is usually necessary to rely on repeated crossover designs. However, such designs are not well suited for non-chronic diseases (e.g. cancer) and treatments which irreversibly alter the patients for a long time (e.g. vaccines), so that only parallel-group based designs can be used. Laubender (2011) recently proposed a maximum likelihood based model for estimating subject-treatment interactions for normally distributed outcomes based on the parallel-group design by redefining a predictive covariate as an indicator for the individual treatment effect and by relying on a multivariate normal distribution. The planning of parallel-group based trials using this model depends on three conditions concerning whether subject-treatment interactions are present at all. Therefore, methods for sample size and power calculations are derived which are based on three partially correlated test statistics. Further, simulation studies were performed to evaluate the performance of these sample size and power calculations for different scenarios. Finally, the model of Laubender (2011) makes some crucial assumptions about how nature is supposed to generate data. These assumptions are presented and statistical methods are offered for checking these assumptions using data from clinical trials.
Reference:
Laubender RP. A statistical approach for identifying responders in vaccine studies. Medical Corps International Forum 2011; 4./4-2011.
P17.3
Development of a tool to elicit experts' beliefs for medical device evaluation
Leslie Pibouleau, Sylvie Chevret
INSERM UMR S717; Hôpital Saint Louis; Université Paris Diderot - Paris 7,
Paris, France
RATIONALE: Bayesian methods appear particularly interesting in implantable
medical device (IMD) evaluation because high quality clinical information on
IMDs is rare whereas substantial prior information often exists. In this context
of uncertainty due to the lack of sound data, the use of experts' beliefs to
inform prior distributions is a key component. OBJECTIVE: To develop a
computer-based tool available on-line for expert opinions elicitation about the
distribution of a parameter involving a Bernoulli process. METHODS:
Information to be elicited was the distribution of success rate of an intracranial
stent. It was encoded using the fixed interval method. The elicitation
questionnaire consisted of five parts: 1) an identification form, 2) a question on
the predictive factors of success, 3) a training exercise, 4) the elicitation
question itself and 5) a feedback form on the ease of use of the questionnaire.
All corresponding authors of clinical articles on intracranial aneurysms
published in the last six years (n=356) were invited by e-mail to participate in
the survey. RESULTS: Twenty experts (5.6%) completed the survey. Individual
prior distribution was elicited for each participant. Feasibility was judged in view
of the experts' feedback and time to completion. Validity and reliability were
assessed using data on comprehensiveness, internal coherence and test-retest reliability. CONCLUSION: This method of belief elicitation is feasible,
valid and reliable. It should be useful to promote the use of Bayesian methods
in IMD evaluation.
P18 Modelling infectious disease
P18.1
Model Based Estimates of Long-Term Persistence of Induced HPV Antibodies:
A Flexible Subject-Specific Approach
Mehreteab Aregay1, Ziv Shkedy2, Geert Molenberghs2, Marie-Pierre David3,
Fabian Tibaldi3
1I-BioStat, Katholieke Universiteit, Leuven, Belgium, 2I-BioStat, Universiteit Hasselt, Hasselt, Belgium, 3GlaxoSmithKline Biologicals, Rixensart, Belgium
In infectious diseases, it is important to predict the long-term persistence of
vaccine-induced antibodies and to estimate the time points where the individual
titers are below the threshold value for protection. This poster focuses on HPV-16/18, and uses a so-called fractional-polynomial model to this effect, derived
in a data-driven fashion. Initially, model selection was done from among the
second- and first-order fractional polynomials on the one hand, and the linear
mixed model on the other. According to a functional selection procedure, the
first-order fractional polynomial was selected. Apart from the fractional
polynomial model, we also applied a power law model, which is a special case
of the fractional polynomial model. Both models were compared using Akaike's
information criterion. Over the observation period, the fractional polynomials
fitted the data better than the power-law model; this, of course, does not imply
that it fits best over the long run and hence caution ought to be used when
prediction is of interest. Therefore, we point out that the persistence of the anti-HPV responses induced by these vaccines can only be ascertained empirically
by long-term follow-up analysis.
P18.2
Statistical models for biosurveillance: an empirical investigation
Doyo Enki1, Angela Noufaily1, Paddy Farrington1, Paul Garthwaite1, Nick
Andrews2, André Charlett2
1The Open University, Milton Keynes, UK, 2Health Protection Agency, London, UK
We consider the problem of choosing appropriate statistical models for large
multiple surveillance systems to detect outbreaks of infectious disease. In such
systems, simple, robust algorithms are required, as detailed model selection
and model checking is not practical. One common approach is to use quasilikelihood methods, with detection thresholds based on normal approximations.
However, such thresholds may be inappropriate for uncommon infections, as
they are found to inflate the false positive detection rate.
We use twenty years' data from a large laboratory surveillance database used
for outbreak detection in England and Wales, involving over 3300 distinct
organism types, to study empirically which parametric families of distributions
are likely to be appropriate. In particular we focus on the negative binomial
distribution with mean μ and variance of the form Φμ, with Φ > 1.
We investigate the mean-variance and mean-skewness relationships for over
1000 organisms with evidence of extra-Poisson dispersion. We find strong
evidence of systematic patterns, which suggests that parametric modelling of
very diverse organisms within a single parametric family is feasible. However,
there is also evidence that the negative binomial allows for insufficient
variability and skewness, though the discrepancy is generally small. We
discuss alternative approaches in the light of these results.
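The following R sketch illustrates the kind of empirical mean-variance check described above, assuming a hypothetical matrix counts of weekly organism counts (one row per organism type); it is not the authors' analysis of the surveillance database.

```r
# Sketch: empirical mean-variance relationship across organisms.
# 'counts' is a hypothetical matrix: one row per organism, one column per week.
m <- rowMeans(counts)
v <- apply(counts, 1, var)

# Under a negative binomial with variance phi * mu, log(var) vs log(mean)
# should scatter around a line with slope 1 and intercept log(phi).
fit <- lm(log(v) ~ log(m), subset = m > 0 & v > 0)
summary(fit)
plot(log(m), log(v)); abline(fit)
```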
P18.3
Epidemiological modelling of risk factors of human papilloma virus in women with positive cytology in the county of Csongrád
Tibor Nyári, Krisztina Boda
Department of Medical Physics and Informatics, University of Szeged, Szeged,
Hungary
Cervical carcinoma, one of the most frequent malignancies in women worldwide, is a major human cancer for which a viral etiology can be considered.
This study was carried out to determine the prevalence and risk factors of
genital HPV infection in women diagnosed with non-negative cytology in southeastern
Hungary.
Cervical samples were collected for cytology and HPV testing from women
seen at gynaecological outpatient clinics and diagnosed with non-negative
cytology. Logistic regression analysis was applied to obtain an overview of the
risk of HPV infection.
A total of 72 women diagnosed with positive cytology were examined for the
prevalence of HPV. The observed overall average HPV infection rate was
found to be 61% (44/72). High-risk HPV positivity was detected in 38 samples
(86%). There were 5 condyloma cases in the HPV-infected group (11%) and 1
condyloma case (3%) in the HPV-negative group.
The difference between the mean ages of the HPV-infected patients (n=44;
mean age: 30.2 years, SD: 9.3 years) and the non-infected women (n=28;
mean age: 31.2 SD: 7.5 years) was statistically non-significant.
The smoking habit proved to be the only risk factor in the logistic regression
analysis that related significantly to the exposure to HPV infection (p=0.02).
Thus, prevention strategies should focus on the regular clinical cytological
screening of HPV-infected patients and on the reduction of smoking.
This study was supported by TÁMOP 4.2.1./B-09/KONV-2010-005 project.
P18.4
An alternative to incorporate the number of persons tested when looking for
time trends in STDs
Achilleas Tsoumanis, Inga Velicko, Sharon Kuhlmann-Berenzon
Swedish Institute for Communicable Disease Control (SMI), Stockholm,
Sweden
When studying time trends of sexually transmitted diseases (STD) based on
surveillance data most often the incidence (cases per population) is modeled
with time as the only covariate. It has been observed, however, that the
number of cases is highly correlated to the number of persons tested, and thus
a trend in incidence might be strongly affected by changes in the population's
testing behavior. An alternative has been to include the number of persons
tested as a covariate in the model. While these models seem to describe
incidence quite accurately, they often lead to misinterpretations regarding the
trend when their results are presented.
To circumvent the high correlation between number of cases and persons
tested we propose to model the number of cases per person tested, either as the
actual ratio (case ratio or CR) with a Gamma GLM, or by modeling the number
of cases using the number tested as an offset in a negative binomial GLM. If
the number of cases and persons tested change with similar rates, their ratio
will remain constant and thus no trend is found. We compared the two
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
suggested models to a negative binomial GLM of incidence. Data describe
annual number of Chlamydia cases in Sweden obtained from the surveillance
system for 1998 to 2004.
The three models gave similar results in terms of predictive values and time
trend. The model with CR gives a more realistic image of the trend in STDs
since it reflects the number of tested simultaneously.
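As a hedged sketch of the two proposed formulations (not the Swedish Chlamydia analysis itself), the R code below models a hypothetical data frame std with annual cases and numbers tested, once as a Gamma GLM of the case ratio and once as a negative binomial GLM with the number tested as an offset.

```r
# Sketch: case ratio as Gamma GLM, and cases with offset(log(tested)) as NB GLM.
# 'std' is hypothetical, with columns year, cases, tested.
library(MASS)

fit_cr <- glm(cases / tested ~ year, family = Gamma(link = "log"), data = std)

fit_nb <- glm.nb(cases ~ year + offset(log(tested)), data = std)

# A time trend in the case ratio corresponds to the 'year' coefficient;
# if cases and tested change at similar rates, it should be near zero.
summary(fit_cr)$coefficients["year", ]
summary(fit_nb)$coefficients["year", ]
```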
P19 Observational studies
P19.1
Attributable fractions of one year mortality after diagnosis of lung cancer
Geir Egil Eide1, Knut Skaug2, Amund Gulsvik3
1Centre for Clinical Research, Haukeland University Hospital, Bergen, Norway, 2Helse Fonna, Haugesund, Norway, 3Lung Depa, Haukeland University Hospital, Bergen, Norway
Background: Lung cancer has high short-term mortality rates determined by severity of disease at diagnosis. Early detection is crucial for survival. Few studies have estimated the potential gain in short-term mortality from possible diagnosis at earlier stages.
Aim: To quantify the potential reduction in one-year mortality from early diagnosis.
Method: This is part of a long-term follow-up study of all 271 incident lung cancer patients in the Haugaland area, Norway, from 1st January 1990 to 31st December 1996. By estimating attributable fractions (AF) based on a logistic regression analysis of first year mortality on demographic variables, anatomical stage, functional performance status and treatment, the potential effect of early detection of lung cancer is quantified. The results are illustrated by a directed acyclic graph and an ordered stepwise strategies diagram.
Results: The first year 192 patients (71%) died. Age (<65, 65-75, >75), stage (I+II, III, IV), performance status (0+1, 2, 3+4) and treatment (surgery, chemo/radio-therapy, supportive only) significantly influenced survival. Adjusted AF %s and confidence intervals (CI) were estimated for age 65+: 11.0, 95% CI: (1.8, 19.4); stage III+IV: 30.5, 95% CI: (15.4, 42.9); performance status 3+4: 3.1, 95% CI: (0.8, 5.4); and supportive treatment: 12.2, 95% CI: (1.2, 22.0). The combined AF was 79.1, 95% CI: (59.1, 89.3). The combined AF due to the three clinical variables adjusted for age was 64.2 (47.0, 75.9).
Conclusions: This study confirms that there is a large potential gain in one-year survival and that the method of attributable fractions is useful for discussing risk-reduction strategies.
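A minimal R sketch of one common way to obtain model-based adjusted attributable fractions from a logistic regression is shown below (set the exposure of interest to its reference level and compare predicted risks); it is an illustration on a hypothetical data frame lung, not necessarily the exact estimator used by the authors.

```r
# Sketch: model-based adjusted attributable fraction for advanced stage.
# 'lung' is hypothetical, with death1yr (0/1), age, stage, perfstatus, treatment.
fit <- glm(death1yr ~ age + stage + perfstatus + treatment,
           family = binomial, data = lung)

p_obs <- predict(fit, type = "response")           # fitted risks as observed

lung0 <- lung
lung0$stage <- factor("I+II", levels = levels(lung$stage))  # everyone at early stage
p_ref <- predict(fit, newdata = lung0, type = "response")

AF_stage <- 1 - sum(p_ref) / sum(p_obs)            # adjusted AF for stage III+IV
AF_stage
```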
P19.2
Power Approximation for Logistic Regression Models with Multiple Risk Factors in Observational Studies
Klaus Jung, Tim Friede
Department of Medical Statistics, University Medical Center Göttingen, Göttingen, Germany
Logistic regression is one of the most frequently used tools for the analysis of data from observational studies. In contrast, sample size methods for planning such studies are rather rare and focus mostly on simplistic situations. For instance, Demidenko (2007) presents sample size formulas for models with one continuous risk factor that is accompanied by several confounding covariates. In this presentation we focus on studies in which several risk factors are of equal interest. Based on an approximation of the joint distribution of the regression coefficients for all risk factors we propose a new approach for power calculations. More specifically, we regard the concept of 'disjunctive power', i.e. the probability that at least one of the risk factors is significantly associated with the response (Senn and Bretz, 2007). Since the power results of our approach are initially conditioned by the randomness of some design matrix, we also pick up the concept of 'unconditional power' (Glueck and Muller, 2003), which is obtained by integrating over the distribution of the conditional power. We evaluate our methods in a simulation study that covers some specific cases with either normally or binomially distributed risk factors.
Demidenko, E. (2007) Sample size determination for logistic regression revisited. Statistics in Medicine, 26, 3385-3397.
Glueck, D.H. and Muller, K.E. (2003) Adjusting power for a baseline covariate in linear models. Statistics in Medicine, 22, 2535-2551.
Senn, S. and Bretz, F. (2007) Power and sample size when multiple endpoints are considered. Pharmaceutical Statistics, 6, 161-170.
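By way of illustration, the R sketch below estimates disjunctive power by simulation for a logistic model with three risk factors (the probability that at least one Wald test is significant); the effect sizes and sample size are invented and the approach is simulation-based rather than the authors' analytical approximation.

```r
# Sketch: simulation-based 'disjunctive power' for a logistic model
# with three risk factors (hypothetical effect sizes).
set.seed(3)
disjunctive_hit <- replicate(1000, {
  n <- 400
  x <- matrix(rnorm(3 * n), n, 3)
  eta <- -1 + x %*% c(0.3, 0.25, 0.2)
  y <- rbinom(n, 1, plogis(eta))
  p <- summary(glm(y ~ x, family = binomial))$coefficients[-1, 4]
  any(p < 0.05)        # at least one risk factor significant
})
mean(disjunctive_hit)  # estimated disjunctive power
```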
P19.3
Analysis of clustered binary data with extreme proportions
Elizabeth McKinnon
Murdoch University, Perth, WA, Australia
Clustered data arise frequently in observational studies, for example when collected from patients at intermittent clinic visits. Here we consider the analysis of clustered binary data, motivated by a study of predictors of aviremia among HIV positive patients successfully treated with suppressive antiretroviral therapy. Whilst the analysis of clustered binary data has received considerable attention over the years, there has been less focus on covariate assessment or estimation difficulties when the underlying cluster-specific probabilities being estimated may be close to zero or one. A common issue arising in logistic regression analyses of non-clustered data with extreme proportions is that of potential separation, when the likelihood converges without concomitant parameter convergence. We therefore investigate similar issues in the application of standard analytic methods for clustered data based on generalized estimating equations or mixed effect models, and compare their performance to that of alternative approaches including a simple method which utilizes weighted estimating equations. To address the estimation and convergence problems that arise due to the extreme parameter estimates, we also investigate the utility of a Firth-like penalized likelihood.
P19.4
Multistage dynamic sampling design for observational studies
Michel Hof, Anita Ravelli, Aeilko Zwinderman
Academic Medical Center, Amsterdam, The Netherlands
In population cohort studies the main objective of the sampling design is to have a sample that is representative for the population. With a representative sample, the prevalence of certain diseases or features of the population can be accurately estimated.
A complication that arises in the recruitment of a representative sample is heterogeneity in participation willingness. For instance, it is known that males and females have a substantial difference in participation willingness. To deal with this heterogeneity, multistage sampling methods have been proposed. In each stage, a small sample of individuals is invited to the study with a sampling method that can deal with different inclusion probabilities.
In this study we consider two sampling methods for each stage: the cube method developed by Deville and Tillé and a new dynamic sampling method. This dynamic sampling method sequentially invites individuals that minimize the difference between the joint distributions of the relevant variables, which are used to check whether the sample is representative for the population. In addition, these -to be invited- individuals are weighted with their estimated probability to participate.
The performance of the dynamic sampling designs was evaluated with simulations. We considered scenarios of cohort studies with and without heterogeneous response rates, in which an estimation of the prevalence of a particular disease was desired. In both scenarios, the proposed multistage dynamic sampling design was considerably more accurate than the multistage cube method.
P19.5
Impact of measurement error and unmeasured confounding: a simulation study based on the example of ascorbic acid intake and mortality
RHH Groenwold1, K Tilling2, DA Lawlor2, KGM Moons1, AW Hoes1, JAC Sterne2
1Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands, 2School of Social and Community Medicine, University of Bristol, Bristol, UK
Results from observational studies are potentially biased due to unmeasured or misclassified confounders. For example, the substantial reduction in mortality by ascorbic acid (OR 0.48, 95% CI 0.33-0.70) found in a nonrandomized study was suggested to be biased for these reasons. We assessed to what extent observed associations, such as between ascorbic acid and mortality, can be explained by unmeasured or misclassified confounders.
We simulated datasets of 100,000 subjects using parameters from the British Women's Heart and Health Study, including 28 confounders (6 continuous, 22 dichotomous). Unmeasured confounding was evaluated by including only 7 strong confounders in the adjustment model. Misclassification of confounders was assessed by adding a random value (drawn from a normal distribution ~N(0,σ²)) to each confounder value. The true effect of ascorbic acid on mortality was zero (OR=1). Simulated scenarios differed on the mutual correlation between confounders (ρ) and outcome incidence (py).
Unmeasured confounding alone yielded little bias: ORs ranged between 1.00-1.03. Misclassification of confounders (without unmeasured confounding) resulted in bias, e.g. OR 0.78 (for σ=2, ρ=0.3, and py=0.2). When misclassification was present, unmeasured confounding resulted in additional bias, e.g. OR 0.73 (for σ=2, ρ=0.3, and py=0.2).
Although misclassification of confounders can result in substantial bias, our example shows that it is implausible that this can fully explain the effects of ascorbic acid on mortality as observed in nonrandomized studies. When confounders are mutually related, the impact of unmeasured confounding is larger if unmeasured confounding is present in combination with misclassification of confounders.
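The R sketch below mimics the basic mechanism described above on a much smaller, hypothetical scale: a null exposure effect, a handful of confounders, and normally distributed measurement error added to the confounders before adjustment; the parameters are invented and do not reproduce the authors' 28-confounder setup.

```r
# Sketch: bias from adjusting for mismeasured confounders (null exposure effect).
set.seed(4)
n <- 10000; sigma <- 2
z <- matrix(rnorm(3 * n), n, 3)                          # true confounders
exposure <- rbinom(n, 1, plogis(0.5 * rowSums(z)))       # depends on confounders
death    <- rbinom(n, 1, plogis(-2 + 0.5 * rowSums(z)))  # true exposure OR = 1

z_obs <- z + matrix(rnorm(3 * n, 0, sigma), n, 3)        # confounders with error

fit <- glm(death ~ exposure + z_obs, family = binomial)
exp(coef(fit)["exposure"])   # biased away from 1 despite a null true effect
```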
P19.6
Propensity score: alternatives to logistic regression - real example
Simona Littnerova1, Jiri Jarkovsky1, Jindrich Spinar2, Jiri Parenica2, Marian Felsoci2
1Institute of biostatistics and analyses, Brno, Czech Republic, 2Department of cardiology, Hospital Brno, Brno, Czech Republic
The analysis using propensity scores is one of the available methods to adjust for the non-random distribution of affecting factors in non-randomized studies. Propensity scores are typically estimated by logistic regression, but alternative methods to create propensity scores are available. From the public health, biostatistics, discrete mathematics, and computer science literature we identified alternative methods for propensity score estimation, which could have some advantages over logistic regression. We chose two techniques as alternatives to logistic regression: decision trees (classification and regression trees - CART) and random forests.
The aim of this work is to compare these methods for propensity score estimation; an example of their potential use is given in the subsequent application to data from the Czech database of heart failure and myocardial infarction - Registry BRNO. Using the propensity score helped to obtain a balanced dataset.
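A hedged R sketch of the comparison is below: propensity scores for a hypothetical treatment indicator estimated by logistic regression, by a single classification tree and by a random forest; the data frame and variable names are invented.

```r
# Sketch: propensity score estimation by logistic regression, CART and random forest.
# 'reg' is a hypothetical registry data frame with treatment (0/1) and covariates.
library(rpart)
library(randomForest)

ps_logit <- predict(glm(treatment ~ age + sex + ef + creatinine,
                        family = binomial, data = reg), type = "response")

ps_cart <- predict(rpart(factor(treatment) ~ age + sex + ef + creatinine,
                         data = reg, method = "class"), type = "prob")[, "1"]

rf <- randomForest(factor(treatment) ~ age + sex + ef + creatinine, data = reg)
ps_rf <- predict(rf, type = "prob")[, "1"]   # out-of-bag probabilities

summary(data.frame(ps_logit, ps_cart, ps_rf))
```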
P19.7
A comparison of different multilevel models to analyse the effect of maternal obesity on pregnancy induced hypertension
Edwin Amalraj Raja, Amanda J Lee, Fiona Denison
University of Aberdeen, Scotland, UK
Background: Maternal body mass index (BMI) is an important determinant of pregnancy outcome, with underweight and obesity being associated with an increased risk of perinatal complications. The aim of this study was to compare the use of three multilevel models (population averaged (PA), random effect (RE) and fixed effect (FE)) to quantify the effect of maternal BMI on the risk of pregnancy induced hypertension (PIH).
Methods: Routinely collected data from all women who delivered in Scottish maternity units between 2003 and Dec 2009 were included. Maternal BMI at ≤16 weeks gestation was categorised as underweight, normal, overweight, obese and very severely obese. Models were adjusted for potential confounders.
Results: The final dataset consisted of 124 280 deliveries nested within 109 592 women. The PA model odds ratio (OR) is interpreted as the average odds of PIH among obese women compared with normal BMI women (OR 2.35; 95% CI 2.18, 2.54). The RE model estimates the odds of PIH (OR 2.98; 95% CI 2.65, 3.36) for obese women among all women with the same risk. The FE model shows the odds of PIH (OR 1.09; 95% CI 0.60, 2.00) for an obese woman who subsequently changed her BMI category from normal since her last pregnancy.
Conclusion: The interpretation of the OR depends on which model is considered. The PA model provides a between women comparison, the FE model a within woman between pregnancies estimate, and the RE model both within and between women. The choice of which model to use will depend on the objective of the individual study.
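For orientation, the R sketch below fits the three model types to a hypothetical data frame preg (one row per delivery, clustered by woman): a population-averaged GEE model, a random-intercept logistic model, and a fixed-effects (conditional) logistic model; the variable names are invented.

```r
# Sketch: population-averaged (GEE), random-effects and fixed-effects logistic
# models for a binary outcome clustered within women ('preg' is hypothetical).
library(geepack)
library(lme4)
library(survival)

preg <- preg[order(preg$woman_id), ]   # geeglm expects data sorted by cluster

fit_pa <- geeglm(pih ~ bmi_cat + age + parity, id = woman_id,
                 family = binomial, corstr = "exchangeable", data = preg)

fit_re <- glmer(pih ~ bmi_cat + age + parity + (1 | woman_id),
                family = binomial, data = preg)

fit_fe <- clogit(pih ~ bmi_cat + age + parity + strata(woman_id), data = preg)
```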
P19.8
Use of Scottish Electronic Medical Record Linkage Systems: Illustrated by WOSCOPS 15 Year Follow-up Data
Heather Murray, Ian Ford
University of Glasgow, Glasgow, UK
In 2007 the Long-term follow-up of the West of Scotland Coronary Prevention
Study (WOSCOPS) was published in the New England Journal of Medicine
showing that the risk of coronary heart disease or non-fatal myocardial
infarction was 10.3% in placebo group and 8.6% in the pravastatin group
(p=0.02) in the period approximately 10 years after completion of the study.
Similar percentage reductions were seen for other cardiovascular outcomes,
confirming the safety of pravastatin and suggesting an ongoing benefit in
reducing CHD events. Monitoring the long term safety of outcome trials is one
important use of computerised medical record linkage systems. Here we will
illustrate other uses of record linkage systems using WOSCOPS as an
example.
We will show there is strong correlation between the number of adverse events
/ deaths reported within WOSCOPS trial with number of hospitalisations and
deaths extracted from the electronic hospital discharge and death records.
We will discuss the benefits of increased follow-up such as additional power
achieved by increased number of events and the potential to remove early
events which may be associated with pre-existing diseases unknown at
baseline, such as cancer and diabetes, still allowing enough events to show
significant association with outcomes. We will give examples of strong
associations found between cardiovascular and fatal outcomes with baseline
biomarkers and other risk factors such as BMI and heart rate.
Time to event analysis and Cox proportional hazard models as well as
competing risk models will be used for analysis.
P19.9
Posterior Capsule Rupture complication rates for Cataract surgery from 1,173
ophthalmic surgeons in 28 UK NHS trusts.
Paul HJ Donachie1, Robert L Johnston1, John M Sparrow2, Irene M Stratton1
1Gloucestershire Hospitals NHS Foundation Trust, Gloucestershire, UK, 2University Hospitals Bristol NHS Foundation Trust, Bristol, UK
The National Ophthalmology Database (NOD) has collated ophthalmological
surgery data from 28 UK hospital NHS trusts that use Electronic Medical
Record systems and contains data on 248,764 cataract operations performed
by 1,185 surgeons since 01/04/2000. Posterior Capsule Rupture (PCR) is the
most common complication of cataract surgery and is of interest as a measure
of surgical performance for revalidation.
NOD has data on 244,442 cataract operations from 169,779 patients that
underwent planned cataract operations performed by 1,173 surgeons and
which are eligible for PCR rate comparisons. The mean PCR rate was 1.6%
from 276 consultants, 2.1% from 510 independent surgeons, 2.5% from 134
experienced trainees, 4.2% from 253 inexperienced trainees and 1.9% overall.
Funnel plots for all surgeons and by surgical grade, unadjusted for case mix, will
be shown.
P19.10
Paternal age and the risk of oral cleft
Erik Berg, Øystein Haaland, Rolv Terje Lie
Department of public health and primary healthcare, University of Bergen, Norway
This study is a prospective, national cohort study, using information from the Medical Birth Registry of Norway (MBRN) in the period from 1967 to 2010.
Orofacial clefts, including cleft lip (CL+/-P) and palate only (CPO), are together the most common craniofacial congenital anomaly. With a worldwide prevalence of 1 per 1200 live births, Norway has among the highest rates of clefts in the western world.
Earlier studies of paternal age and the risk of orofacial clefts are not
unambiguous. The aim of this study is to test the hypothesis that increased
paternal age is associated with a higher risk of having a child with oral clefts.
Since 1967 all births in Norway after 16 weeks of gestation have been
compulsorily reported to the Medical Birth Registry of Norway (MBRN), and
registered with a unique personal identification number. Congenital defects
detected in this period are recorded according to the International Classification
of Diseases, Eighth Revision (ICD-8), on a standard form to the MBRN as a part
of the registry.
We investigate cleft lip and palate (CLP) separately from the cleft lip only (CLO)
cases. The proportion of clefts registered in the MBRN is known: CLO 83%,
CLP 94%, CPO 57%.
We used various regression techniques to estimate the effect of father's age
on the risk of subgroups of oral clefts.
Preliminary results using generalized additive models indicate that an effect of
high paternal age may be present for subgroups of cleft lip, but with a much
weaker effect for uncomplicated isolated cases.
P20 Prediction
P20.1
A new measure of predictive ability for survival models
Babak Choodari-Oskooei, Patrick Royston, Mahesh KB Parmar
MRC Clinical Trials Unit, London, UK
Predictive ability measures provide important information about the practical significance of prognostic factors. In 2011, Choodari-Oskooei and colleagues studied 17 R2-type predictive ability measures proposed for survival models. They showed that all the measures have some shortcomings, but their studies overall singled out two measures, R2PM and R2D, which are recommendable for use in (censored) survival data. Both measures depend on the variance of the prognostic index (PI) of the model fitted to the data.
Here, we propose a new measure and compare it to the ones recommended above. Our proposed measure is an extension of the total gain (TG) statistic, proposed for a logistic regression model, to survival models. It is based on the binary regression quantile plot, otherwise known as the predictiveness curve, and ranges from 0 (no predictive ability) to 1 ('perfect' predictive ability).
We explored the properties of the TG in censored survival data using simulations and real data. We investigated the impact of censoring, covariate distribution, and influential observations. The results of our simulations show that, similar to R2PM and R2D, TG is independent of censoring. But unlike R2D, it is affected by the follow-up time and outlying observations. Finally, we applied TG to quantify the predictive ability of prognostic models developed in several disease areas. On balance, TG performs satisfactorily in our empirical studies and can be recommended as an alternative measure to quantify the predictive ability in survival models.
Keywords: Survival analysis, predictive ability, total gain (TG) measure, predictiveness curve, prognostic models
P20.2
Aggregating published prediction models with individual patient data
Thomas Debray1, Erik Koffijberg1, Daan Nieboer2, Yvonne Vergouwe2, Karl Moons1, Ewout Steyerberg2
1University Medical Center Utrecht, Utrecht, The Netherlands, 2Erasmus Medical Center Rotterdam, Rotterdam, The Netherlands
Background: Previously published prediction models are often ignored during
the development of a novel prediction model. Consequently, numerous
prediction models generalize poorly across patient populations, and might have
been improved by incorporating such evidence. Unfortunately, aggregation of
prediction models is not straightforward, and methods to combine differently
specified models are currently lacking.
Methods: We propose two approaches for aggregating the previously
published prediction models with observed individual participant data. These
approaches yield a new explicit prediction model that, once derived, no longer
requires the original models. The first approach is based on model averaging
and estimates an overall prediction model that weighs the predictions of the
literature models. The second approach is based on stacked regressions, and
combines the predictions of the literature models in a logistic regression
analysis. We illustrate an implementation in two empirical datasets for
predicting Deep Venous Thrombosis and Traumatic Brain Injury, where we
compare the approaches to established methods for prediction modeling.
Results: Results from the case studies demonstrate that aggregation yields
prediction models with an improved discrimination and calibration in a vast
majority of scenarios, and result in equivalent performance (compared to the
standard approach) in a small minority of situations.
Conclusions: The proposed aggregation approaches may considerably
improve the quality of novel prediction models, and are particularly useful when
few participant data are at hand.
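As a rough illustration of the second approach (stacked regressions), the R sketch below combines the predicted risks of two previously published models in a logistic regression on hypothetical individual participant data; the published-model predictions p1 and p2 and the outcome y are placeholders, and the authors' actual implementation may constrain or weight the models differently.

```r
# Sketch: aggregating two published prediction models by stacked regression.
# 'ipd' is a hypothetical data frame with outcome y (0/1) and p1, p2, the
# predicted risks from two previously published models applied to these patients.
stacked <- glm(y ~ qlogis(p1) + qlogis(p2), family = binomial, data = ipd)

# The aggregated model: a new explicit prediction combining the literature models
ipd$p_aggregated <- predict(stacked, type = "response")
coef(stacked)
```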
P20.3
External Validation of a Prognostic Model in Epilepsy: simulation study and case study
Laura Bonnett, Anthony Marson, Paula Williamson, Catrin Tudur Smith
University of Liverpool, Liverpool, UK
Background: Before a prognostic model can be implemented in practice, it
should be externally validated. We have published a prognostic model in
epilepsy. Now we consider how to externally validate it.
Methods: A simulation study was undertaken assessing the performance of statistical methods for externally validating prognostic models, and for handling covariates missing from the validation dataset.
An initial systematic review suggested the most common external validation
methods were discrimination and calibration. In our simulation study deviance,
concordance and Royston's measure of prognostic separation were tested.
These methods were also tested on three validation datasets.
Our simulation study tested five adaptations of standard methods for handling
missing data within covariates to entirely missing covariates: random selection
with replacement, hot deck imputation, single imputation via estimation,
random selection with replacement multiple times, and only using covariates
common to both the development and validation dataset. These methods were
then applied to one validation dataset with a missing covariate.
Results: In the simulation study concordance showed almost perfect
agreement between the similarly simulated development and validation
datasets. Variable matching performed poorly. In our case study all five
methods of handling missing covariates performed equally. Via the
concordance method, the prognostic model for epilepsy generalised well to the
validation datasets.
Conclusions: Concordance may be a suitable method of external validation
together with selected methods of imputation for a missing covariate. Further
work is required to determine how these methods perform in alternative
settings and how external validation results should be presented in general
practice.
P20.4
Multiple longitudinal profiles of patients reported outcomes as predictors to
clinical status of rheumatoid arthritis patients: A joint modeling approach
Siti Haslinda Mohd Din1, Marek Molas1, Jolanda Luime1, Emmanuel Lesaffre1,3
1Erasmus University Medical Centre, Rotterdam, The Netherlands, 2Department of Statistics, Wilayah Persekutuan Putrajaya, Malaysia, 3Catholic University of Leuven, L-Biostat, Leuven, Belgium
Longitudinal profiles are most often analyzed as a response. However,
longitudinal profiles can themselves also be used as predictors for a response.
The response can be continuous, categorical, a survival outcome but also
univariate and multivariate. In addition more than one longitudinal profile may
be used as predictor. In this project, longitudinal profiles of multiple patients
reported outcomes (PROs) are used in predicting the clinical status of
rheumatoid arthritis patients. Patients completed a monthly PROs
questionnaire online from month 0 till month 12. The clinical status at month 3
is measured by the DAS28, which is a physical exam performed by a
rheumatologist or nurse. The nature of the PROs data requires assuming a
longitudinal model for bounded outcome scores as the first component of the
model. Each PRO yields information for estimates of a random intercept and
slope, which is used in the second stage to predict the value of DAS28. The
second stage assumes a normally distributed outcome; therefore a classical
normal regression model is used. The aim of this project is to demonstrate the
joint model of multiple longitudinal profiles of bounded outcome score as a
better prediction for rheumatoid arthritis patients' clinical status. We use a
maximum likelihood as well as a Bayesian approach. All results are validated
using K-fold cross validation technique.
P20.5
Using Dynamic Regression and Random Effects Models for Predicting
Hemoglobin Levels in Novel Blood Donors
Kazem Nasserinejad1, Wim de Kort3, Mireille Baart3, Arnost Komarek4,
Emmanuel Lesaffre1,2
1Department of Biostatistics, Erasmus MC, Rotterdam, The Netherlands, 2L-Biostat, Catholic University of Leuven, Leuven, Belgium, 3Sanquin Blood
Supply, Nijmegen, The Netherlands, 4Faculty of Mathematics and Physics,
Department of Probability and Mathematical Statistics, Charles University,
Prague, Czech Republic
In several Western European countries blood donation is done on a voluntary
basis. In order to optimize the planning of blood donation but also to continue
motivating the volunteers it is important to streamline the practical organization
of the timing of donation. Donation may, however, be declined and this for a
variety of reasons. A common reason is a too low hemoglobin level of the
donor which means that the hemoglobin level is below 8.4 mmol/l for men and
below 7.8 mmol/l for women. We wish to predict the future hemoglobin value in
order to better decide when donors can present themselves for donation.
The development of the hemoglobin prediction rule is based on longitudinal
data from blood donations collected by the Sanquin blood supply in the
Netherlands. We explored and contrasted two popular statistical models, i.e.
the transition (Markov) model and the random intercept model as plausible
models to account for the dependence among subsequent hemoglobin levels
within a donor. The aim of this exercise is to ascertain which of the two models is better in predicting future hemoglobin values.
We showed that the transition (Markov) model and the random intercept model
have almost the same prediction accuracy at the first donation but for longer
series the transition model offers a better prediction. At this moment we assumed equal time-intervals between subsequent visits. We are currently exploring the extension to unequal time-intervals and the comparison to alternative prediction models.
P20.6
Time series clustering based on nonparametric multidimensional forecast
densities: An application to clustering of mortality rates
Jose A. Vilar, Juan M. Vilar
University of A Coruña, A Coruña, Spain
Time series clustering is nowadays an active research field with a broad range
of applications. The choice of a suitable dissimilarity measure between two
series is a key issue. However, the notion of similarity between series is nontrivial and can be established in different ways.
We propose a notion of dissimilarity governed by the performance of future
forecasts. Specifically, the dissimilarity between two series is measured by
means of the L1 distance between their corresponding forecast densities for a
sequence of k pre-specified time horizons. Comparing the forecast densities
instead of the point forecasts allows us to separate into different clusters time
series having similar predictions but different generating models. Our clustering
algorithm is outlined as follows.
Step 1. Obtaining samples of bootstrap prediction vectors of length k for each
series.
Step 2. Determining a low-dimensional space where the bootstrap predictions
are projected.
Step 3. Estimating the multidimensional density associated with each set of
projections.
Step 4. Obtaining a pairwise dissimilarity matrix by computing the distance
between each pair of densities.
Step 5. Performing a standard hierarchical clustering algorithm based on the
dissimilarity matrix constructed in Step 4.
Our clustering methodology is applied to a collection of series representing annual mortality rates for different countries.
Our objective is to group these countries according to their mortality predictions on different future time periods. Note that the
prediction of future mortality rates is a problem of fundamental importance for the insurance and pensions industry.
P20.7
Time-dependent ROC curves for the estimation of true prognostic capacity of microarray data
Yohann Foucher1, Richard Danger2
1EA 4275 Biostatistics, Clinical Research and Subjective Measures in Health Sciences, Nantes University and Transplantation, Urology
and Nephrology Institute (ITUN), INSERM U1064, Nantes, France, 2Transplantation, Urology and Nephrology Institute (ITUN), INSERM
U1064, Nantes, France
Microarray data can be used to identify prognostic signatures based on time-to-event data. The inherent problem of dimension
(number of features, i.e. genes >> number of individuals) causes overoptimistic results.
We propose a bootstrap 0.632+ estimator of the area under the time-dependent Receiver Operating Characteristic (ROC) curve.
A Cox model with lasso penalty is used for the selection of features. The complete re-estimation of the model is performed at each
iteration, including the value of the tuning parameter. We propose an R package entitled ROCt632 available at
www.divat.fr/en/softwares.
We validated the proposed methodology by simulations and comparisons with other methods: cross-validation, bootstrap, bootstrap
cross-validation and bootstrap 0.632. We applied the methodology to a public microarray dataset, which includes 240 patients with
diffuse large b-cell lymphoma and 7,399 features. Depending on the prognostic time, the areas under the curve (AUC) obtained by
using the 0.632+ estimator were between 0.70 and 0.65. This illustrates the utility of this signature to predict mortality up to 15 years,
but it also illustrates that this signature alone is not sufficient for medical decision-making.
ROC-based interpretations are well-accepted in the community of biologists and clinicians. Thus, the 0.632+ estimator of the
time-dependent ROC curve constitutes a useful method to establish and communicate the predictive accuracy of a prognostic
signature based on high-dimensional data.
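For readers unfamiliar with the 0.632+ construction, the following hedged R sketch shows only the weighting rule applied to an AUC-type measure (it is not the ROCt632 package; auc_app, auc_oob and the no-information value are assumed inputs supplied by the user):

# Sketch of the 0.632+ weighting rule on the error scale (Efron & Tibshirani),
# applied to an AUC-type accuracy measure.
auc632plus <- function(auc_app, auc_oob, auc_noinfo = 0.5) {
  err_app <- 1 - auc_app        # apparent error
  err_oob <- 1 - auc_oob        # average out-of-bag bootstrap error
  gamma   <- 1 - auc_noinfo     # no-information error rate
  err_oob_c <- min(err_oob, gamma)                     # cap at no-information error
  R <- if (err_oob_c > err_app && gamma > err_app)
         (err_oob_c - err_app) / (gamma - err_app) else 0
  w <- 0.632 / (1 - 0.368 * R)                         # relative overfitting rate
  err632p <- (1 - w) * err_app + w * err_oob_c
  1 - err632p                                          # back to the AUC scale
}
auc632plus(auc_app = 0.82, auc_oob = 0.66)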
P20.9
Modelling crown-rump length (CRL) data used for prediction of gestational age in early pregnancy when the data is truncated at
both ends: The case study of the INTERGROWTH-21st Project
Eric Ohuma1, Doug Altman2
1Nuffield Department of Obstetrics & Gynaecology, and Oxford Maternal & Perinatal Health Institute (OMPHI), Green Templeton
College, University of Oxford, Oxford, OX3 9DU, UK: for the International Fet, Oxford, UK, 2Centre for Statistics in Medicine,
University of Oxford, Wolfson College Annexe, Linton Road, Oxford OX2 6UD, UK, Oxford, UK
Fetal growth charts are important for monitoring a child's growth and development over time. Fetal crown-rump length (CRL) is
measured in early pregnancy primarily to determine the gestational age (GA) of a fetus. An estimate of gestational age can be
reliably obtained from women with a regular 28-32 day menstrual cycle who know the first day of their last menstrual period (LMP).
The CRL is most reliable for estimating GA between 9 and 14 weeks gestation, using a formula developed in 1975.
The INTERGROWTH-21st Project includes 4000 women with a regular cycle (28-32 days) whose LMP and CRL estimates of GA
agreed within 7 days. We aim to develop a new CRL centile chart for estimating GA. The main statistical challenge is modelling data
with the outcome variable (GA) truncated at both ends, i.e. at 9 and 14 weeks. The data span only 5 weeks, so using only CRL data
unaffected by truncation leads to a large loss of data and limited clinical usefulness.
One method is first to create a model for predicting CRL from GA. We assume that the CRL values are conditionally normal at any
given GA and apply fractional polynomial regression to the mean and SD. We can use the model to predict GA from CRL or as a
basis for imputing the missing values. We will present analyses using these approaches. We will also consider the consequences of
including only women whose estimated GA was within 7 days between CRL and LMP.
P20.8
Assessing discriminative ability in clustered data
David van Klaveren, Yvonne Vergouwe, Ewout W. Steyerberg
Dept of Public Health, Erasmus University Medical Centre, Rotterdam, The Netherlands
For clustered data little effort has been put into assessing the discriminative ability of prognostic models. Van Oirbeek and Lesaffre
recently proposed an adaptation of the concordance probability to multilevel regression models. We aimed to study the practical use
of the concordance probability as a measure of discriminative ability in clustered data.
We view the within cluster (e.g. center) concordance probability, representing concordance of pairs of patients belonging to the same
cluster, as an appropriate discrimination measure in clinical practice, because decisions are taken at the single center (cluster) level.
In public health applications we prefer the overall concordance probability, also taking between cluster concordance into account,
because decisions are taken at the population level. We illustrate our findings with clinical survival outcome data after traumatic brain
injury (6,691 patients clustered in 224 centers) and with binary public health outcome data of a Chlamydia screening program
(80,380 participants clustered in 183 neighbourhoods).
The TBI case study shows that concordance probability estimates can vary substantially among clusters (IQR 0.69-0.85). Because of
the observed heterogeneity (I² = 0.72) we used random effects meta-analysis to derive means and prediction intervals for the within
cluster concordance probability. Additionally, we found that poor within cluster concordance was strongly related to poor between
cluster concordance.
When assessing discriminative ability in clustered data, the within cluster concordance probability is useful to evaluate clinical
models, whereas the overall concordance probability is favourable to evaluate public health models. Furthermore, heterogeneity in
concordance among clusters should be taken into account.
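One way to implement the random effects summary of cluster-specific concordance mentioned above is sketched below in R with the metafor package (hypothetical per-centre c-statistics and standard errors, not the study data):

# Random-effects meta-analysis of cluster-specific concordance estimates,
# giving a pooled mean, I^2 and a prediction interval for a new centre.
library(metafor)

centre_c  <- c(0.66, 0.72, 0.81, 0.69, 0.85, 0.74)   # hypothetical c-statistics
centre_se <- c(0.05, 0.04, 0.06, 0.05, 0.07, 0.04)   # and their standard errors

fit <- rma(yi = centre_c, sei = centre_se, method = "REML")
summary(fit)    # pooled mean and heterogeneity (I^2)
predict(fit)    # includes a prediction interval for a new centre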
P20.10
Temporal profile of time-dependent discrimination measures in survival analysis
Jerome Lambert, Sylvie Chevret
Univ Paris Diderot, Sorbonne Paris Cité; INSERM, UMR 717; AP-HP, Hôpital Saint Louis, Service de Biostatistique et Information
Médicale, Paris, France
Introduction
Prognostic studies are usually constructed with the use of survival models. To evaluate the performances of these models, the
concept of discrimination, which is well described in the diagnostic framework with logistic models (C-statistic), has been extended to
survival analysis. Thus, several time-dependent C-statistics have been proposed which can be computed at a given time point of the
follow-up. The temporal profile of these discrimination measures needs to be explored to disentangle the complex relationships
between the evolution of predictive ability over time and the statistical properties of the time-dependent C-statistics.
Methods
Simulation studies were conducted to mimic several scenarios with different
prognostic multiplicative impact of covariates on baseline hazard, and constant,
increasing, or decreasing hazard ratios over time, and different censoring
rates. Temporal profile of the various C-statistics computed as a function of
time was assessed. Finally, an example on real data was conducted to assess
the prognostic value of three biomarkers in a cohort of 5306 patients
hospitalized for acute heart failure.
Results
Our simulations show that when the prognostic value is non-null, even when
the hazard is constant, the discriminative ability across time improves. Results
of the simulations also illustrate the impact of the violation of the proportional
hazard hypothesis on the shape of discrimination over time.
Conclusion
Various C-statistics have been proposed to assess discriminative ability of a
prognostic model over time. Our study suggests that the shape of
discriminative ability over time can be counter-intuitive and should be
cautiously interpreted.
P20.11
Validation of risk prediction models for clustered data: A simulation study and practical recommendations
Rumana Z Omar, Shafiqur Rahman, Gareth Ambler
University College London, London, UK
Clustered binary and survival outcomes commonly arise in health research, for example, in multicentre studies. Although clustering
is accounted for in explanatory models, it is usually ignored in risk prediction models, particularly in their validation. This work
extends some of the existing validation measures for use with random effects logistic and Cox frailty models. These are: Harrell's
C-index, the D statistic, Gonen and Heller's K index and the calibration slope (CS). Two approaches are proposed, producing an
overall validation measure across all clusters and a weighted average of cluster-specific measures. To calculate the overall measure,
predictions can include the fixed predictors and the random effects (P(re)), or assume random effects as zero (P), or consider a
marginal prediction by integrating out the random effects (P(pa)). Their performances were assessed through simulations by
estimating bias and mean squared errors of the validation measures and the coverage of confidence intervals. The simulation
scenarios were varied by number of clusters, cluster size, intra-cluster correlation coefficient (ICC) and proportion of censoring for
survival outcomes. Validation measures using the P(re) approach showed reasonable performance when cluster size was large, and
those using the P and P(pa) approaches performed poorly for moderate values of the ICC as they ignore clustering. The D-statistic
and CS performed well for all simulation scenarios, except for small cluster sizes. The K index generally performed well. The C-index
did not perform well, particularly in the presence of censoring. Care is needed when validation data are from clusters different from
those used in model development.
P20.12
Updating of polytomous risk prediction models based on sequential dichotomous modeling improves the performance
Kirsten Van Hoorde1, Yvonne Vergouwe2, Dirk Timmerman1, Sabine Van Huffel1, Ewout W Steyerberg2, Ben Van Calster1
1KU Leuven, Leuven, Belgium, 2Erasmus Medical Center, Rotterdam, The Netherlands
When a dichotomous prediction model performs poorly in a different setting, model updating is often a sensible approach. This may
be particularly relevant for polytomous outcomes. We demonstrate the use of dichotomous updating methods for polytomous models
developed using sequential dichotomous modeling.
We consider a case study on ovarian tumor diagnosis. The original model was developed on 2037 patients from oncology referral
centers (65% benign, 7% borderline, 29% invasive tumors) and validated on 1107 patients from public hospitals (prevalences 83%,
4%, and 13%, respectively). The validation data were split into updating (n=945) and testing (n=162) sets. Sequential dichotomous
modeling is used to obtain a polytomous prediction model: the first model predicts benignity, the second distinguishes borderline
from invasive tumors. To obtain the original model, backward variable selection was performed on seven variables preselected
based on expert knowledge. We considered recalibration and revision updating methods, and two reference methods (no updating
and redevelopment). Discrimination and calibration were studied based on 500 splits of the validation data.
No updating method improved the discrimination on the testing data, with redevelopment even showing a decrease. This decrease
was stronger when smaller updating set sizes were used. Calibration was greatly improved even by simple recalibration methods.
Under redevelopment, variable selection results were unstable and coefficients were overfit.
We conclude that simple dichotomous updating methods behaved well when applied to polytomous models. When sample size in a
new setting is small, simple updating may be preferred over model redevelopment.
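The dichotomous updating methods referred to above can be sketched in R as follows (illustrative only, not the authors' code; lp denotes the linear predictor of the original model evaluated in the new data, and all variables are simulated):

# Recalibration updating of a dichotomous logistic model in a new setting.
update_models <- function(y, lp) {
  list(
    recalib_intercept = glm(y ~ offset(lp), family = binomial),   # intercept only
    recalibration     = glm(y ~ lp,          family = binomial)   # intercept + slope
    # revision would re-estimate all coefficients: glm(y ~ x1 + x2 + ..., binomial)
  )
}

set.seed(1)
x  <- rnorm(500)
lp <- -1 + 1.2 * x                               # original model's linear predictor
y  <- rbinom(500, 1, plogis(-0.5 + 0.8 * x))     # new setting, different calibration
fits <- update_models(y, lp)
coef(fits$recalibration)                         # calibration intercept and slope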
P20.13
Probability of survival for very preterm births: production and validation of a prognostic model
Bradley Manktelow, Sarah Seaton, David Field, Elizabeth Draper
Department of Health Sciences, University of Leicester, Leicester, UK
Accurate estimates of the probability of survival of very preterm infants admitted to neonatal care are vital for counselling parents,
informing care and planning services. In 1999, a prediction model for the probability of survival by gestation, birthweight and gender
was published using UK data from The Neonatal Survey (TNS). This model is widely used in clinical care but improvements in
survival required that it be updated.
2,995 white singleton infants born at 23+0 to 32+6 weeks gestation from 2008-2010 were identified from TNS. A logistic model was
fitted with gestation, birthweight and gender as predictors. Non-linear functions were estimated by fractional polynomials. Bootstrap
methods, with 500 repetitions, were used to investigate the need for interactions and to assess the internal validity of the final model
by monitoring the c-statistic and Cox regression coefficients. Discrimination and calibration of the final model were assessed through
the c-statistic, Cox regression coefficients, Farrington's statistic and the Brier score on the entire dataset and clinically relevant
subsets.
A final prediction model was obtained: c-statistic=0.86; Farrington p=0.44. Predicted survival ranged from 4% to 99%. The bootstrap
estimates indicated excellent overall discrimination (mean c-statistic: 0.86, range: 0.85, 0.86). The bootstrap Cox calibration
coefficients were found to have a mean intercept of 0.014 (range: 0.44, 0.43) and a mean slope of 0.99 (range: 0.84, 1.21), indicating
good calibration.
These internationally validated survival charts have been updated using robust methodologies to ensure adequate predictive
performance. Use of fractional polynomials offers a straightforward methodology to produce a simple and clinically plausible model.
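A hedged R sketch of bootstrap internal validation of a logistic prognostic model, in the spirit of the approach above (simulated data; the fractional polynomial terms of the actual TNS model are not reproduced):

# Optimism-corrected c-statistic via the bootstrap for a logistic model.
set.seed(1)
n <- 1000
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- rbinom(n, 1, plogis(-0.5 + 0.9 * x1 + 0.5 * x2))
d  <- data.frame(y, x1, x2)

cstat <- function(y, p) {                 # concordance via the Mann-Whitney statistic
  r <- rank(p)
  n1 <- sum(y); n0 <- sum(1 - y)
  (sum(r[y == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

fit <- glm(y ~ x1 + x2, family = binomial, data = d)
c_app <- cstat(d$y, fitted(fit))

B <- 200; optimism <- numeric(B)
for (b in 1:B) {
  i  <- sample(n, replace = TRUE)
  fb <- glm(y ~ x1 + x2, family = binomial, data = d[i, ])
  optimism[b] <- cstat(d$y[i], fitted(fb)) -
                 cstat(d$y, predict(fb, newdata = d, type = "response"))
}
c(c_apparent = c_app, c_corrected = c_app - mean(optimism))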
P20.14
Using Bayesian model averaging to improve radiation-induced cancer risk predictions
Sophie Ancelet, Olivier Laurent, Neige Journy, Dominique Laurier
IRSN/LEPID, Fontenay-aux-Roses, France
The prediction of cancer risks arising from exposure to ionizing radiation at low doses and dose rates has been an endeavour that has increased in magnitude
and effort over the last century. The Japanese atomic bomb survivor Life Span
Study (LSS) cohort, even if exposed at middle-to-high doses and high dose
rates, is the radiation epidemiology dataset most currently used to assess
radiation-induced cancer risks and extrapolate cancer risks to other
populations. Among the most problematic sources of uncertainty when predicting risks from this approach are model and parameter
uncertainties. In particular, the choices of link function and of explanatory variables all contribute to uncertainty in the construction of
relevant dose-response relationships. Neglecting such uncertainties is a serious shortcoming of most previous risk assessments.
We therefore investigate the possibility of using
Bayesian model averaging (BMA) to simultaneously reflect the impact of these
two sources of uncertainty in risk predictions. Our motivating case study
involves the prediction of the proportions of childhood leukaemia incidence that
may be due to naturally occurring radon gas in France. We used six recently
published risk models for radiation-induced leukaemia. We applied MCMC
algorithms as implemented in WinBUGS to re-fit the competing models to the
LSS cohort data. We computed the posterior model probabilities and carried
out BMA in this context. Our first predictive results suggest that a small but still
sizeable percentage of childhood leukaemia cases might be attributable to
radon in France and that uncertainty in the predictions might decrease with
attained age.
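A minimal sketch of how Bayesian model averaging weights can be formed and applied, assuming a BIC approximation to the marginal likelihood rather than the MCMC/WinBUGS analysis described above (toy data; not the authors' risk models):

# BMA weights from approximate marginal likelihoods and a model-averaged prediction.
bma_weights <- function(fits, prior = rep(1 / length(fits), length(fits))) {
  logml <- sapply(fits, function(f) -0.5 * BIC(f))   # BIC approximation to log marginal likelihood
  w <- exp(logml - max(logml)) * prior
  w / sum(w)
}

set.seed(1)
dose <- runif(300, 0, 2)
y <- rbinom(300, 1, plogis(-2 + 0.8 * dose))
fits <- list(
  logit   = glm(y ~ dose, family = binomial("logit")),
  probit  = glm(y ~ dose, family = binomial("probit")),
  cloglog = glm(y ~ dose, family = binomial("cloglog"))
)
w <- bma_weights(fits)
w
# model-averaged predicted risk at dose = 1
sum(w * sapply(fits, function(f) predict(f, data.frame(dose = 1), type = "response")))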
P20.16
Outcome prediction in schizophrenia patients based on image data
Eva Janousova1, Daniel Schwarz1, Tomas Kasparek2
1Institute of Biostatistics and Analyses, Masaryk University, Brno, Czech Republic, 2Department of Psychiatry, Faculty of Medicine,
Masaryk University, Brno, Czech Republic
INTRODUCTION. Recently, there is an effort to use image data for support of schizophrenia diagnostics. Improved detection of
schizophrenia and early intervention with individual treatment strategies would increase patient recovery rates. Our aim is to use
images for selection of highly discriminative brain regions between schizophrenia patients and healthy controls and predict an
outcome of patients one year after the first schizophrenia episode based on the selected brain areas.
METHODS. Deformations of 3-D magnetic resonance images were used as an
input into an algorithm for selection of brain regions which are highly
discriminative between 52 schizophrenia patients and 52 controls. The
algorithm consists of a combination of penalised regression and a resampling
method. This procedure aims to calculate selection probabilities of each image
voxel by repeatedly fitting the penalised regression model on random subsets
of the data set, while keeping track of voxels selected in each iteration. The
final set of discriminative brain regions contains voxels with selection
probability higher than 0.5. The brain regions are then used in prediction of a
good or poor outcome of schizophrenia patients (Global Assessment of
Functioning Scale >70 and <70, respectively) using linear discriminant
functions.
RESULTS. A total of 30,461 highly discriminative voxels were selected. Cross-validated accuracy of discrimination between patients and controls based on
the voxels was 85.6%. Outcome prediction accuracy was equal to 63.5%.
CONCLUSION. The results compare favourably to those obtained by current
state-of-the-art methodologies. Further work will aim to increase the prediction
accuracy by using nonlinear prediction methods.
P20.15
Assessment of risk prediction and individualised screening of breast cancer among Swedish postmenopausal women
Hatef Darabi, Keith Humphreys
Karolinska Institutet, Stockholm, Sweden
Over the last decade several breast cancer risk alleles have been identified, which has led to an increased interest in individualised
risk prediction for clinical purposes. In the present work we examine several models for predicting absolute risk; in particular we
examine the performance of an up-to-date set of 18 breast cancer risk single-nucleotide polymorphisms (SNPs), together
with mammographic percentage density (PD), body mass index (BMI) and
clinical risk factors in predicting absolute risk of breast cancer, utilising the Gail
approach. Adding mammographic PD, BMI and all 18 SNPs to a baseline
Swedish Gail model improved the discriminatory accuracy (the AUC statistic)
from 55% to 62%. The net reclassification improvement (NRI) was used to
assess improvement in classification of women into low, intermediate, and high
categories of 5-year risk, where significant positive reclassification was
observed (NRI= 0.170). Using published effect estimates for the 18 markers
and the clinical variables we evaluate several approaches to individualised
screening, against age only-based screening, in women aged 40 to 75 years.
We show, for the Swedish female population, that a personalised screening
approach based on a risk prediction model incorporating age, Gail model
variables, PD, BMI and 18 SNPs captures significantly more breast cancer
cases than screening approaches using equal resources based on age and
Gail model variables and on age alone. Taken together, genetic risk factors and
mammographic density offer moderate improvements to clinical risk factor
models for predicting breast cancer.
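The net reclassification improvement used above can be computed with a short R function such as the following sketch (hypothetical risk categories and simulated risks, not the study data):

# Categorical NRI for predefined 5-year risk categories.
nri <- function(event, old_risk, new_risk, cuts = c(0.01, 0.03)) {
  oldcat <- cut(old_risk, c(-Inf, cuts, Inf))
  newcat <- cut(new_risk, c(-Inf, cuts, Inf))
  up   <- as.numeric(newcat) > as.numeric(oldcat)
  down <- as.numeric(newcat) < as.numeric(oldcat)
  nri_events    <- mean(up[event == 1]) - mean(down[event == 1])
  nri_nonevents <- mean(down[event == 0]) - mean(up[event == 0])
  c(events = nri_events, nonevents = nri_nonevents,
    overall = nri_events + nri_nonevents)
}

set.seed(1)
lp <- rnorm(2000)
event <- rbinom(2000, 1, plogis(-4 + lp))
old_risk <- plogis(-4 + 0.7 * lp)     # baseline (e.g. Gail-type) model
new_risk <- plogis(-4 + lp)           # model with added markers
nri(event, old_risk, new_risk)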
P20.17
Meta-analysis methods for examining the performance of a predictive test: going beyond the average
Ikhlaaq Ahmed1, JP Noordzij1, Jon Deeks2, Lucinda Billingham1, Richard Riley2
1Based in MRC Midland Hub for Trials Methodology Research at the University of Birmingham, Birmingham, UK, 2Affiliated with MRC
Midland Hub for Trials Methodology Research at the University of Birmingham, Birmingham, UK
A predictive test is a single factor that accurately predicts individual outcome
risk for patients with a particular condition. For example, in patients with a
thyroidectomy, parathyroid hormone (PTH) measured between 1 and 6 hours
post-surgery predicts which patients will become hypocalcemic within 48 hours.
Often multiple studies examine the predictive accuracy of a particular test, and
meta-analysis is then required. In this talk, using data for 9 studies examining
the predictive accuracy of PTH, we describe the use and clinical interpretation
of a bivariate random-effects meta-analysis that synthesises the evidence
about PTH’s sensitivity and specificity. This model accounts for between-study
heterogeneity and correlation in sensitivity and specificity, and leads to
estimates of the average sensitivity and specificity. Though such results are
informative, they do not easily translate to clinical practice, as test performance
in a single clinical setting may differ substantially from the average. To better
understand the impact of using the average sensitivity and specificity in clinical
practice, we discuss three possible approaches: (i) calculate prediction
intervals to reveal how the test’s true sensitivity, specificity and c-statistic may
vary from their average in individual settings; (ii) take the average sensitivity
and specificity estimates, then in each study combine with the observed
prevalence to obtain positive and negative predictive values, and assess study
calibrations (predicted events (E) vs. observed events (O)); (iii) assume a new
setting's true prevalence is unknown, and repeat (ii) but use the average study
prevalence across studies, and check calibration.
P20.18
How to assess discrimination performance of polytomous prediction models: review and recommendations
Ben Van Calster1, Yvonne Vergouwe2, Vanya Van Belle1, Caspar Looman2, Ewout W. Steyerberg2
1KU Leuven, Leuven, Belgium, 2Erasmus MC, Rotterdam, The Netherlands
Discrimination of risk prediction models for dichotomous outcomes is typically
evaluated with the c-statistic. The evaluation of prediction models for
polytomous outcomes is less straightforward. Several extensions of the c-statistic
to nominal and ordinal outcomes have been suggested in the literature. We
review and suggest tools to assess discrimination of polytomous prediction
models, with illustration in a case study of 1094 men with testicular cancer. For
a first assessment, a "discrimination plot" is attractive, which shows box plots
of estimated risks for each outcome category by actually observed category.
Outcome category prevalences are indicated with horizontal lines. Then, we
suggest to summarize the overall performance with a polytomous c-statistic,
such as the Polytomous Discrimination Index (PDI) for nominal outcomes.
Although some measures weight by prevalence, we advise unweighted
discrimination measures. Misclassification costs and prevalence should be
considered in a later phase, when focusing on clinical usefulness of the model.
If the polytomous index suggests discriminatory ability, this may further be
investigated with pairwise c-statistics. Given that the risks for a pair of
categories, P(A) and P(B), do not sum to one for polytomous outcomes,
pairwise c-statistics for nominal outcomes can be obtained in several ways. We
suggest the "conditional risk" method, which computes the standard c-statistic
using P(A) divided by P(A)+P(B). This method is consistent with the standard
multinomial logistic regression model. In conclusion, several tools can be
recommended for the assessment of polytomous discrimination. Further
research is needed on other performance aspects such as calibration and
clinical usefulness.
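A small R sketch of the "conditional risk" pairwise c-statistic suggested above (toy multinomial data; the category names are hypothetical):

# Pairwise c-statistic for categories A vs B using P(A)/(P(A)+P(B)) as the marker.
pairwise_c <- function(outcome, probs, catA, catB) {
  keep <- outcome %in% c(catA, catB)
  y <- as.integer(outcome[keep] == catA)
  marker <- probs[keep, catA] / (probs[keep, catA] + probs[keep, catB])
  r <- rank(marker)
  n1 <- sum(y); n0 <- sum(1 - y)
  (sum(r[y == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

set.seed(1)
p <- matrix(rgamma(300 * 3, 2), ncol = 3,
            dimnames = list(NULL, c("benign", "borderline", "invasive")))
p <- p / rowSums(p)                                   # estimated risks per category
outcome <- apply(p, 1, function(pr) sample(colnames(p), 1, prob = pr))
pairwise_c(outcome, p, "borderline", "invasive")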
P20.19
Comparison of two different modelling techniques to determine parameters
related to changes in quality of life in colorectal cancer patients
Jose M. Quintana1, Urko Aguirre1, Nerea Gonzalez1, Marisa Bare2, Santiago Lazaro1, Cristina Sarasqueta3, Eduardo Briones4, Antonio Escobar5
1Hospital Galdakao-Usansolo, Galdakao, Bizkaia, Spain, 2Corporacio Parc Tauli, Barcelona, Spain, 3Hospital Donostia, Donostia,
Gipuzkoa, Spain, 4Hospital de Valme, Sevilla, Spain, 5Hospital Basurto, Bilbao, Bizkaia, Spain
Objectives: to develop and compare different statistical models to predict short
term change in health related quality of life in patients undergoing curative
surgery for colorectal cancer.
Methods: Prospective cohort study of patients diagnosed with colorectal cancer
who underwent curative surgery in any of the 7 participant hospitals. Patients
were followed from the moment they got in touch with the index hospital at
admission and with a follow-up at 30 days after discharge. Patient
sociodemographic (age, gender), clinical (TNM stage, localization,
complications) variables were retrieved from the patient medical record.
Additionally, patients fulfilled the EuroQol-5D questionnaire before surgery and
at 30 days after discharge. Two different multivariate models were developed:
a general linear model (GLM) and hierarchical linear models (HLM).
They were compared in terms of beta estimates, standard errors, and statistical
significance of the considered variables in the final model.
Results: Based on the multivariate GLM model, the presence of complications during the admission, the length of stay in the hospital,
and the baseline EuroQol-5D score were correlated with changes in the EuroQol-5D. The multivariate
multilevel model found also that those three variables were related to the
changes in the EuroQol-5D scores but found that gender was related as well.
Conclusions: Though both models provided broadly similar information, the HLM model seems to identify additional statistically
significant variables (gender) of great importance from a clinical and health services research point of view.
Determining which model is more appropriate and accurate is essential in this
study.
P20.20
Predictive performance of random forest based on pseudo-values
Ulla B Mogensen, Thomas A Gerds
University of Copenhagen, Copenhagen, Denmark
Random forest is a supervised machine learning method that combines many
classification or regression trees for prediction. Here we describe an extension
of the random forest method for building event risk prediction models in
survival analysis with competing risks. With right-censored data the event
status at the prediction horizon will be unknown for some subjects. We propose
to replace the censored event status by a jackknife pseudo-value, and then to
apply an implementation of random forests for uncensored data. In a
simulation study the predictive performance of the resulting pseudo random
forest is compared to that of random survival forests and of combined cause-specific Cox regression models. Performance is measured with adaptations of
Brier scores, AUC and the C-index to survival data. We apply the pseudo
random forest to predict the risk of death caused by
cardiovascular diseases for stroke patients in the Copenhagen stroke study.
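A hedged R sketch of the pseudo-value idea, shown here for the simpler single-event survival setting (the competing-risks version described above would use cumulative incidences instead of 1 - S(t)):

# Jackknife pseudo-values for the event probability at a fixed horizon,
# then a standard regression forest on the pseudo-values.
library(survival)
library(randomForest)

set.seed(1)
n <- 300
x1 <- rnorm(n); x2 <- rbinom(n, 1, 0.5)
time   <- rexp(n, rate = exp(-2 + 0.7 * x1 + 0.5 * x2))
cens   <- rexp(n, rate = 0.05)
status <- as.integer(time <= cens)
obs    <- pmin(time, cens)
t0     <- 10                                             # prediction horizon

risk_at <- function(tt, ss, t0) {                        # 1 - S(t0) from Kaplan-Meier
  fit <- survfit(Surv(tt, ss) ~ 1)
  1 - summary(fit, times = t0, extend = TRUE)$surv
}

theta_all <- risk_at(obs, status, t0)
pseudo <- sapply(seq_len(n), function(i)                 # jackknife pseudo-values
  n * theta_all - (n - 1) * risk_at(obs[-i], status[-i], t0))

rf <- randomForest(x = data.frame(x1, x2), y = pseudo)
head(predict(rf))                                        # predicted event risks at t0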
P20.21
Finding cut-offs for continuous prediction models: an overview of methods and
pitfalls
Verena Sophia Hoffmann
Ludwig-Maximilians-Universität, München, Germany
The prediction of outcomes is a challenge occurring in all areas of medicine. It
is important for therapy selection, stratification in clinical trials, economic
evaluations and patient information. Depending on the number and properties of predictive factors and the statistics used, the result
either has many values or is continuous.
Often this is not useful in clinical practice and classification of predictions is
asked for to enable decision making. A wide range of methods can be used to
find appropriate cut-offs: A simple way can be to use mean or median
predictions, the minimal p-value approach maximizes an appropriate test statistic, tangents of different slopes to the ROC curve
incorporate different costs of outcomes, and tree-based models can be grouped by pruning and pooling. Assets and drawbacks of
these methods will be discussed using appropriate examples.
Most of the approaches to find a cut-off are data driven. For this reason it is
likely that the classification works better in the data used to find the cut off than
in new data. Therefore the model itself together with the cut-off needs to be
validated. Usage of methods of internal validation, like cross validation and
bootstrap validation, and external validation, as validation in different study
groups or settings, are presented and discussed. Also different approaches to
internal and external validation via the split-sample approach are shown.
The methods are illustrated with applications to data of patients from the
European Treatment and Outcome Study for CML (EUTOS).
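As an example of one of the data-driven approaches discussed, here is a minimal R sketch of the minimal p-value cut-off search (simulated data; the resulting optimism is exactly why the validation steps above are needed):

# Scan candidate cut-offs and keep the one with the smallest chi-squared p-value.
set.seed(1)
score   <- rnorm(400)
outcome <- rbinom(400, 1, plogis(-0.5 + 0.8 * score))

candidates <- quantile(score, probs = seq(0.1, 0.9, by = 0.05))
pvals <- sapply(candidates, function(cut) {
  tab <- table(score > cut, outcome)
  chisq.test(tab)$p.value
})
best_cut <- candidates[which.min(pvals)]
c(best_cut = unname(best_cut), min_p = min(pvals))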
P20.22
Validating Prediction Models in Small Datasets
Gareth Ambler, Bridget Candy, Michael King, Rumana Omar
University College London, London, UK
Logistic regression is often used to investigate the relationship between a
binary outcome and a number of predictors. However such models can have
overfitting problems in small datasets. This is the case with data derived from a
systematic review of 67 trials, where the binary outcome was treatment
compliance and there were 25 factors. The aim of the study was to investigate
whether some or all of these factors could be used to design an 'optimal' new
trial.
To alleviate overfitting, estimation may be performed using a shrinkage method
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
such as ridge or lasso estimation; backwards elimination is often used in
practice. However validating the resulting models can be difficult in such small
datasets. For example, data-splitting further reduces the size of the dataset
used for model development, and K-fold cross-validation produces K values of
the performance measures, which may not be straightforward to interpret. In addition, the use of several performance measures,
including the ROC area, D-statistic, and calibration slope, can produce contradictory findings in practice.
In this research we used simulation based on the review data to evaluate these
approaches to validation. Specifically, we simulated new outcomes based on a
range of scenarios; these include scenarios with a) no true predictors and b)
just a small number of true predictors. Different models (ridge, lasso, full,
backward elimination) were fitted then validated, and comparisons made
between their true performance and their apparent performance in the
validation exercise. Recommendations are made regarding the use of such
validation methods in practice.
P20.23
Improving prognostic model development and assessment for survival data
Paola M.V. Rancoita1,2,3, Cassio P. De Campos2,3, Francesco Bertoni3,4
1University Centre of Statistics for Biomedical Sciences (CUSSB), Vita Salute
San Raffaele University, Milan, Italy, 2Dalle Molle Institute for Artificial
Intelligence (IDSIA), Manno, Switzerland, 3Institute of Oncology Research
(IOR), Bellinzona, Switzerland, 4Oncology Institute of Southern Switzerland
(IOSI), Bellinzona, Switzerland
Prognostic models are often developed and used in medicine to stratify
patients for survival prediction based on clinical and/or biologic variables. One
of the most important aims is to identify patients with very poor outcome, who might benefit from experimental therapies, or with very good prognosis
who might be treated with less intensive regimens. Survival trees and their
generalizations are powerful in survival prediction, but they do not account for a
core property of prognostic indices: different combinations of causes (values of
clinical covariates) can lead to similar phenotype/survival outcome. Few
procedures have been suggested for merging the groups corresponding to
leaves (combinations of causes) in survival trees, but they show limitations. We
propose to apply a clustering algorithm on the survival curves of leaves'
groups, using a proper dissimilarity measure. We study several choices for
both the clustering algorithm and the dissimilarity measure and we compare
them with standard procedures in the literature, using simulated and public real
data.
In order to enhance the evaluation of the performance of the prognostic
models, we also derive a new index of separation to be used together with an
error measure of the prediction. This new separation index takes into account
three important characteristics of risk groups (besides the prediction itself): the
retention of their order, the reliability/robustness in terms of size, and the
goodness of separation among all corresponding survival curves. We discuss
the new separation index and how other widely used indices fail to capture all
these characteristics.
P20.24
Comparison of Logistic Regression and Machine Learning methods: an application to the Colorectal Cancer stage prognosis
Urko Aguirre1, Jose Maria Quintana1, Nerea Gonzalez1, Antonio Escobar2, Cristina Sarasqueta3
1Hospital Galdakao-Usansolo, Galdakao, Spain, 2Hospital Basurto, Bilbao, Spain, 3Hospital Donostia, Donostia, Spain
Introduction
The development of accurate prediction models is important in choosing adequate treatment plans. Logistic regression (LR) models
are the most popular methods in a variety of medical domains for performing diagnostic and prognosis assignments. Nevertheless,
in recent decades, Machine Learning methods - like Neural Networks (NN) or Support Vector Machines (SVM) - have been
introduced to improve the accuracy of the predictive models. This study compares the predictive ability of the developed LR, NN and
SVM models for the prognosis of the colorectal cancer stage.
Methods
A set of 749 patients with colorectal cancer who underwent surgical intervention was divided into training and test groups (n = 501
versus n = 248). The patient's colorectal cancer stage was considered as the output variable. From the data in the training group, an
optimal model for prognosis of advanced cancer stage (III-IV versus 0-I-II) was developed with the LR approach. Patients' symptoms,
laboratory tests and cancer-specific antigens were considered as input variables. From the data of the test group, the areas under
the receiver operating characteristic (ROC) curve (AUC), specificity and sensitivity of the LR, NN and SVM methods were evaluated.
Results
Potassium, abdominal pain, hemoglobin and cancer-specific antigen were found to be significant predictors. The AUC value from NN
was the highest (0.77 vs 0.73 for SVM and 0.71 for LR, p<0.001) and NN showed the best screening parameters (sensitivity = 61%,
specificity = 78%).
Conclusions
The performance of NN is superior to SVM and LR in the prognosis of advanced colorectal cancer stage.
P20.25
Dynamic updating of prediction models: how to deal with heterogeneity
between settings
Daan Nieboer1, Yvonne Vergouwe1, Ruud G. Nijman2, Rianne Oostenbrink2,
Ewout W. Steyerberg1
1Department of Public Health, Erasmus MC, Rotterdam, The Netherlands, 2Department of General Paediatrics, Erasmus MC-Sophia
Children's Hospital, Rotterdam, The Netherlands
With the implementation of prediction models in clinical decision support
systems, data on new patients can be collected continuously. This makes it
feasible to continuously update prediction models for new settings for which
dynamic update methods are required.
Various degrees of heterogeneity between the development and the new
setting can exist. We consider three different scenarios: coefficients are
identical in the development and new setting, only the intercept is different in
the new setting, or the strength of individual predictor effects are different.
We consider three dynamic updating strategies when new data becomes
available: updating the intercept, logistic recalibration (re-estimating the
intercept and slope of the linear predictor) and re-estimating the individual
regression coefficients. All strategies were applied with Bayesian and
Frequentist approaches.
Empirical datasets formed the motivating example for generating simulated
datasets. A prognostic model was developed and updated for children admitted
to the emergency care with fever and suspected of a serious bacterial infection. Updating is performed until 10,000 patients are
included.
Simulation results show that the adjustments in model parameters and model performance with increasing sample size occur more
smoothly in the Bayesian approach, while the Frequentist approach adjusts the model faster in heterogeneous settings. Hence, there
is a trade-off between speed of adjustment and smooth changes, which requires subject knowledge to decide between a Frequentist
or Bayesian approach.
P21 Statistics for epidemiology
P21.1
Re-sampling methods in prevalence and incidence studies
Abdel Douiri1
1King's College London, London, UK, 2NIHR c-BRC, Guy's and St. Thomas' NHS Foundation Trust, London, UK
Age is one of the major determinants of the prevalence and the incidence of most diseases. There are major discrepancies in the
underlying age structures of the registry populations of disease occurrence in each country, and it is therefore essential that
prevalence and incidence studies be assessed independent of the age profile of each population. Direct standardization is a
fundamental tool used to control for the effect of the age structures between populations when making a comparison of different
rates. Traditional statistical methods of computing standardized rates and rate ratios in these studies rely on a number of
assumptions to estimate confidence intervals for the obtained estimates. A common assumption is that the rates are distributed as a
weighted sum of independent Poisson random variables, which may not be fully justified for a series of periodic overlapping studies.
To overcome this problem, we recommend a resampling technique based on nonparametric bootstrapping to calculate confidence
intervals of directly standardized rates, with at least 5000 replications. These methods are distribution-free and no assumptions are
required regarding the unknown population probability distribution. The proposed method and other common confidence interval
procedures for directly standardized incidence or prevalence, including methods based on binomial, Poisson and gamma
distributions, were investigated through Monte Carlo simulations and on real data from the South London Stroke Register.
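A minimal R sketch of the recommended nonparametric bootstrap for a directly standardized rate, using the boot package and hypothetical individual-level data with assumed standard-population weights:

# Percentile bootstrap confidence interval for a directly standardized rate.
library(boot)

set.seed(1)
d <- data.frame(age   = sample(c("45-64", "65-74", "75+"), 5000, replace = TRUE,
                               prob = c(0.6, 0.25, 0.15)),
                event = rbinom(5000, 1, 0.03))
std_w <- c("45-64" = 0.5, "65-74" = 0.3, "75+" = 0.2)   # standard population weights

dsr <- function(data, i) {
  db <- data[i, ]
  rates <- tapply(db$event, db$age, mean)               # age-specific rates
  sum(std_w[names(rates)] * rates)                      # directly standardized rate
}

b <- boot(d, dsr, R = 5000)
boot.ci(b, type = "perc")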
P21.3
Exploring the Quality of Life in Patients with Suspected Heart Failure
J Zhang1, J Hobkirk1, S Carroll2, P Pellicori1, AL Clark1, JGF Cleland1
1The University of Hull, Hull, UK, 2The University of Leeds, Leeds, UK
Background: The First EuroHeart Failure Survey Questionnaire (EHFSQ-1) has 39 questions on symptoms and quality of life (QoL).
Many questionnaire items are likely to be highly related.
Aims: To identify underlying symptom factors among EHFSQ-1 questions and to observe if the symptom factors affect the
construction of an overall "quality of life score" from the questionnaire.
Methods: Patients referred with symptoms suggestive of heart failure (HF) between 2000 and 2009 were asked to complete the
EHFSQ-1 as a standard component of clinical assessment. Two patient groups were examined: those with HF and those without HF.
Exploratory factor analysis based on the principal component technique with a Varimax rotation method was used to identify patterns
of QoL questions, and the factor scores were calculated. 10-fold cross-validation was used to assess the stability of the analysis.
Results: Of 1031 patients, 64% were men and the median age was 71 (IQR: 63-77) years. 626 had HF and 405 did not. For patients
with HF, seven symptom factors were identified: "breathlessness", "psychological distress", "sleep quality", "frailty",
"cognitive/psychomotor function", "cough" and "chest pain", which accounted for 65% of the total variance. Patients with HF or high
NT-proBNP were more symptomatic. A weighted symptom factor score was tightly correlated with a summary QoL score (r=0.99,
p<0.001; shown on the scatter plot).
Conclusions: Using the EHFSQ-1, we found 7 symptom factors in patients with HF. Either a weighted symptom factor score or a
summary score can be used as a QoL outcome measure.
P21.2
Development of new Austrian height and weight references
Andreas Gleiss1, Gabriele Häusler2, Michael Schemper1
1Center for Medical Statistics, Informatics, and Intelligent Systems, Medical University of Vienna, Vienna, Austria, 2Department of
Paediatrics and Adolescent Medicine, Medical University of Vienna, Vienna, Austria, 3on behalf of APEDÖ (Working Group on
Paediatric Endocrinology and Diabetology, Austria), Austria
In the clinical evaluation of growth disorders and growth monitoring, sex and
age dependent height reference curves derived from a relatively small and
special sample of Swiss children between 1954 and 1970 have been in use in
Austria so far. Due to the secular trend and potential national differences new
reference curves needed to be developed on a broad national basis.
For this purpose a sample of nearly 14,500 children and adolescents between
4 and 19 years of age was drawn via schooling institutions. Sampling was
stratified by provinces according to age and sex specific population
proportions. An existing R implementation for Generalized Additive Models for
Location, Scale and Shape (GAMLSS) was employed for estimating percentile
curves. The flexible Box-Cox Power Exponential distribution was used to
describe the height or weight distribution, respectively, at each age while the
dependence of the four distributional parameters on age was modelled by
cubic spline functions. The degrees of freedom for these splines were selected
by an optimization procedure using information criteria. The functional
dependence was further adapted, if necessary, according to paediatric or
goodness-of-fit considerations.
Various points in the basic estimation process of reference curves are
considered and two novel refinements are presented: first, we introduce a
correction to appropriately take into account the removal of extreme
observations before a GAMLSS estimation. Second, we demonstrate the
quantification of the uncertainty in extremely low percentiles (such as 0.5%,
crucial in diagnostics) based on the bootstrap. This may be valuable for future study planning.
P21.4
Cultural vs. clinical characteristics and health-related quality of life of patients with primary liver cancer by using the EORTC QLQ-C30
and the EORTC QLQ-HCC18
Wei-Chu Chie1, Jane Blazeby2, Chin-Fu Hsiao3, Herng-Chia Chiu4, Ronnie Poon5, Naoko Mikoshiba6, Gillian Al-Kadhimi7,
Nigel Heaton7, Jozer Calara7, Peter Collins8, Katharine Caddick8, Anna Costantini9, Valerie Vilgrain10, Ludovic Trinquart11, Chieh Chiang3
1Institute of Epidemiology and Preventive Medicine, Department of Public Health, College of Public Health, National Taiwan
University, Taipei, Taiwan, 2School of Social & Community Medicine, University of Bristol, Bristol, UK, 3Division of Clinical Trial
Statistics, Institute of Population Health Sciences, National Health Research Institutes, Chunan, Taiwan, 4Department of Healthcare
Administration and Medical Informatics, Kaohsiung Medical University, Kaohsiung, Taiwan, 5Department of Surgery, The University of
Hong Kong, Queen Mary Hospital, Hong Kong, China, 6Department of Adult Nursing / Palliative Care Nursing, Graduate School of
Medicine, the University of Tokyo, Tokyo, Japan, 7Institute of Liver Studies, King's College Hospital, London, UK, 8Department of
Hepatology, University Hospitals Bristol NHS Foundation Trust, Bristol, UK, 9Psychoncology Unit, Sant'Andrea Hospital - Faculty of
Medicine and Psychology, Sapienza University of Rome, Rome, Italy, 10Assistance Publique - Hôpitaux de Paris, APHP, Hôpital
Beaujon, Department of Radiology, Clichy, France; Université Paris Diderot, Sorbonne Paris Cité, INSERM Centre de recherche
Biomédicale Bichat Bea, Paris, France, 11Assistance Publique - Hôpitaux de Paris, Hôpital Hôtel-Dieu, Centre d'Epidémiologie
Clinique, Paris, France
The EORTC QLQ-HCC18 was administered with the core questionnaire, the
EORTC QLQ-C30 to 272 patients from seven centres in six countries (Taiwan,
Hong Kong-China, Japan, the United Kingdom, France, and Italy) in a cross-cultural field validation study which was published elsewhere. We found Asian
patients had significantly better quality of life scores than European patients in
many subscales. Significantly better clinical characteristics were also found in
Asian patients than European patients because of early detection of their
illness. To clarify the effects of cultural vs. clinical characteristics, we conducted
multi-way ANOVA, model selection, and corrected for multiple comparisons using the false discovery rate method. After adjusting for clinical characteristics,
cultural effects existed in physical functioning, role functioning, and fatigue. In
model selection, we found cultural effects existed in role functioning, insomnia,
fatigue, and sexual interest. After correction for multiple comparisons, all
cultural differences became non-significant. The difference in results may be
due to differences in methods and criteria for significance. We concluded that both cultural and clinical characteristics affected the
quality of life of patients with primary liver cancer. Early detection and prompt treatment through active surveillance in Asian countries
may explain part of the results.
Keywords: primary liver cancer, quality of life, the EORTC questionnaires,
cultural differences
P21.5
Exploring the use of body mass index as a covariate in survival models of total knee replacement
David Culliford1, Joe Maskell1, Nigel Arden2
1University of Southampton, Southampton, UK, 2University of Oxford, Oxford, UK
Total knee replacement (TKR) is an effective intervention in patients with end-stage knee osteoarthritis. The proportion of those
undergoing TKR in the United Kingdom who are obese has been increasing recently. Time to event models for knee replacement
failure often incorporate body mass index (BMI) as an explanatory variable, but its use as a time-varying covariate, particularly within
large databases, has not been widely explored.
We describe and compare a range of options for using BMI as a covariate in survival analysis when the availability is variable and
the timing irregular. This situation is common within large, population-based general practice databases. We use data from the UK
General Practice Research Database (GPRD), identifying all subjects (N=24,738) undergoing TKR between 1991 and 2006,
including all routinely recorded BMI values with measurement dates. Using Cox regression, we model time to failure using different
scenarios depending on the timing of BMI (e.g. pre-TKR, post-TKR, multiply imputed annualised post-TKR), presenting hazard ratio
(HR) estimates for BMI under different scenarios. For example, standard Cox regression showed a significantly higher risk of revision
surgery for each unit of pre-operative BMI after adjusting for gender, age and number of comorbidities: HR 1.04 (CI 1.01, 1.06;
p=0.035). Furthermore, we compare results using time-varying, post-operative BMI, with and without multiple imputation.
Although this work does not attempt to assess the theoretical validity of our exploratory scenarios, it is hoped that developments may
follow which make fuller use of certain types of repeated, irregularly observed covariates present within medical research databases.
P21.6
Assessing the effect of smoking legislation on the incidence of cardiovascular diseases
Claus Dethlefsen1, Søren Lundbye-Christensen1, Anette Luther Christensen1, Kim Overvad1,2
1Department of Cardiology, Center for Cardiovascular Research, Aalborg Hospital, Aarhus University Hospital, Aalborg, Denmark,
2Department of Epidemiology, School of Public Health, Aarhus University, Aarhus, Denmark
Since August 15th 2007, it has been prohibited by law to smoke in public buildings, bars and restaurants in Denmark. It is expected
that this will encourage smokers to stop smoking, discourage youngsters from starting to smoke, and protect people from
second-hand smoke. Many beneficial effects are expected from the introduction of this legislation. We will focus on the incidence of
cardiovascular diseases in Denmark, since we expect smoking to have an acute effect on the incidence of cardiovascular diseases.
Other studies (Sargent et al, 2004; Bartecchi et al, 2006) have shown up to a 40% reduction in heart attacks after six months.
From the Danish national patient registry, we identified all incident cases from August 15th 2002 until August 15th 2011 of the
following diseases: acute myocardial infarction (N=74,861), stroke (N=125,815) and venous thromboembolism (N=57,746).
We used Poisson regression to estimate the effect of the smoking legislation, taking into account both trend and seasonal effects.
Furthermore, we stratified by gender and age.
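A hedged sketch of the type of Poisson regression described above, with a linear trend, harmonic seasonal terms and a post-legislation indicator (simulated monthly counts; the variable names are assumptions and the registry analysis itself is not reproduced):

# Interrupted time-series Poisson regression with trend and seasonality.
set.seed(1)
months <- 1:108                                   # e.g. Aug 2002 - Aug 2011
ban    <- as.integer(months > 60)                 # legislation in force
season <- cbind(sin(2 * pi * months / 12), cos(2 * pi * months / 12))
mu     <- exp(6 - 0.002 * months + 0.08 * season[, 1] - 0.05 * ban)
count  <- rpois(length(months), mu)

fit <- glm(count ~ months + season + ban, family = poisson)
exp(cbind(RR = coef(fit), confint.default(fit)))["ban", ]   # rate ratio for the ban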
P21.7
New tricks for an old dog: using the delta method for non-linear estimators, with an application to competing risks in continuous time
Mark Clements, Robert Karlsson, Fredrik Wiklund, Henrik Grönberg
Karolinska Institutet, Stockholm, Sweden
The delta method is an important tool in theoretical statistics; however, computer-intensive methods are often preferred for variance
estimation for non-linear estimators. This is partly due to the linearity and normality assumptions of the delta method not being
satisfied, but also due to software implementations being inflexible. This latter concern has recently been addressed, with
implementations of the delta method using numerical partial derivatives in SAS (PROC NLMIXED) and Stata (predictnl). We describe
an R implementation which has considerable flexibility.
As an application, we used data from a prostate cancer case-control study to estimate the increase in cause-specific cumulative
incidence (with competing risks) of prostate cancer associated with a set of risk SNPs. Incidence rates were modelled using logistic
regression, adjusting for population-level incidence rates. Cumulative incidence to age 80 years was calculated in continuous time
using an ordinary differential equation solver, adjusting for death due to other causes. Variance estimates for the log of cumulative
incidence were calculated using the delta method. For comparison, we modelled incidence using Bayesian methods and
re-calculated cumulative incidence. The confidence intervals using the delta method were similar to the Bayesian credible bounds:
the width from the lower bound of the confidence interval to the point estimate was 2% less wide, and the width from the upper bound
to the point estimate was 3% wider.
We found the delta method to be rapid and flexible. From the application, we found that estimation of competing risks in continuous
time is a feasible alternative to non-parametric estimation.
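The numerical delta method can be sketched in a few lines of R (this is not the authors' implementation; it simply mirrors the idea of numerical partial derivatives used by predictnl):

# Delta-method variance for a non-linear function g of regression coefficients.
delta_method <- function(fit, g, eps = 1e-6) {
  beta <- coef(fit); V <- vcov(fit)
  grad <- sapply(seq_along(beta), function(j) {
    b1 <- beta; b1[j] <- b1[j] + eps
    (g(b1) - g(beta)) / eps                      # forward-difference derivative
  })
  est <- g(beta)
  se  <- sqrt(drop(t(grad) %*% V %*% grad))
  c(estimate = est, se = se,
    lower = est - 1.96 * se, upper = est + 1.96 * se)
}

# toy example: predicted risk at x = 1 from a logistic regression
set.seed(1)
x <- rnorm(500); y <- rbinom(500, 1, plogis(-1 + 0.8 * x))
fit <- glm(y ~ x, family = binomial)
delta_method(fit, function(b) plogis(b[1] + b[2] * 1))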
P21.8
Evaluation of the hierarchical power prior distribution
Charlotte Rietbergen1, Ming-Hui Chen3, Irene Klugkist1
1Utrecht University, Utrecht, The Netherlands, 2UMC Utrecht, Utrecht, The Netherlands, 3University of Connecticut, Storrs, USA
The power prior distribution in Bayesian statistics allows for the inclusion of data and results from previous studies into the analysis
of new data. It enables the researcher to control the influence of the historical data on the current data, by specifying a prior
parameter that determines the amount of historical data to
be included. This parameter can either be a fixed user-specified value, or can
be estimated from the data. For the latter, current literature states that the size
of the weight parameter should depend on the commensurability of the
historical and current study outcomes. In this research we question whether
this is desirable, since differences between study results might be caused by
sampling variability.
Illustrated with a simulation study, we found on average only a marginal effect
of sampling variability on the posterior estimate for the parameter of interest, at
least for binomial data, and normal data with known variance. In some cases,
however, the joint power prior provides posterior estimates farther from the true
value than the estimates provided by a power prior with a fixed, self-chosen
weight parameter. This supports our supposition that coincidental differences
between current and historical data can affect the posterior distribution for the
power parameter, and consequently the parameter of interest, especially for
smaller samples.
We advocate the inclusion of additional (expert) knowledge on the
commensurability of the current and historical study populations, since only the
commensurability of sample results can never fully justify the value of the
weight parameter.
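For binomial data with a conjugate initial prior, the fixed-weight power prior has a closed form, as in the following R sketch (hypothetical counts; a0 is the user-specified weight parameter):

# Power prior posterior for a binomial proportion with a Beta(1, 1) initial prior:
# the historical likelihood is raised to the power a0, giving
# Beta(1 + y + a0*y0, 1 + (n - y) + a0*(n0 - y0)).
power_prior_posterior <- function(y, n, y0, n0, a0) {
  shape1 <- 1 + y + a0 * y0
  shape2 <- 1 + (n - y) + a0 * (n0 - y0)
  c(mean  = shape1 / (shape1 + shape2),
    lower = qbeta(0.025, shape1, shape2),
    upper = qbeta(0.975, shape1, shape2))
}

# current study: 30/100 events; historical study: 45/100 events
rbind(a0_0   = power_prior_posterior(30, 100, 45, 100, a0 = 0),    # ignore history
      a0_0.5 = power_prior_posterior(30, 100, 45, 100, a0 = 0.5),
      a0_1   = power_prior_posterior(30, 100, 45, 100, a0 = 1))    # pool fully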
P21.10
Removal of bias from incidence trend estimation using excess zero models
Lesley-Anne Carter, Fiona Holland, Roseanne McNamee
University of Manchester, Manchester, UK
Background: Extra zeros relative to the Poisson distribution are a common problem in longitudinal count data and, if not properly
accounted for, could lead to bias. In a disease surveillance system with monthly physician reporting, false zero reports were
suspected. Furthermore, their extent might increase over time, thus biasing estimates of time trends in disease incidence. This
work aimed to (i) estimate the extent and pattern of false zeros and thereby
estimate the true trend in disease incidence, and (ii) demonstrate the sensitivity
of results to choice of modelling approach.
Methods: Poisson models are commonly used for count data, although
negative binomial models are better suited when there is general overdispersion. Excess zeros can be accounted for using ‘hurdle’ or ‘zero inflation’
versions of these models. Both approaches combine a Bernoulli process with
the basic model; the Bernoulli probability itself can be made dependent on
covariates. Random effects (RE) or GEE models can be used to allow for
within-reporter correlation of counts. We hypothesised that a RE zero inflated
negative binomial model, with time as a covariate in both the negative Binomial
and the Bernoulli processes, was most appropriate for our data, but other
models were fitted for comparison.
Results: Estimates of the overall proportion of false zeros given by the various
models will be presented and also the evidence for variation in the proportion
over time. Estimates of time trends in true incidence with and without
adjustment will also be presented. Software for these analyses will be
discussed.
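A hedged R sketch of the hypothesised random-effects zero-inflated negative binomial model, using glmmTMB and simulated reporter-level counts (variable names are assumptions, not the study data):

# Random-effects zero-inflated negative binomial with time in both parts.
library(glmmTMB)

set.seed(1)
reporters <- 50; months <- 36
d <- expand.grid(reporter = factor(1:reporters), month = 1:months)
u <- rnorm(reporters, 0, 0.3)[as.integer(d$reporter)]      # reporter random effects
mu <- exp(1 + 0.01 * d$month + u)
p_false_zero <- plogis(-2 + 0.03 * d$month)                # false zeros increase over time
d$count <- ifelse(rbinom(nrow(d), 1, p_false_zero), 0,
                  rnbinom(nrow(d), mu = mu, size = 2))

fit <- glmmTMB(count ~ month + (1 | reporter),
               ziformula = ~ month,
               family = nbinom2, data = d)
summary(fit)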
P21.9
Exploring the estimator associated with the impact of a composite score of
multiple binary exposures on continuous outcomes: An illustration using the
Mediterranean Diet Score.
Christina Bamia1, Marina Zangogianni2, Nikolas Pantazis1, Fotios Siannis3
1University of Athens, Medical School, Dept. of Hygiene, Epidemiology & Medical Statistics, Athens, Greece, 2National Hellenic Research Foundation,
Athens, Greece, 3University of Athens, Dept. of Mathematics, Athens, Greece
In epidemiology it is often important to estimate the association of multiple
correlated exposures with an endpoint of interest. An example is nutritional
epidemiology where focus lies on the relation of nutrition (multiple exposures)
with health. Evaluating this association through the coefficients estimated from
regression models with all exposures included simultaneously presents
difficulties in the interpretation and in evaluating complex interactions between
the exposures. Using a composite score based on a function of the exposures
is an alternative. The association of interest is then assessed through the
estimator from regression models, of the relation of the composite score with
the outcome. However, it is not clear what is measured by this estimator and
how it relates to the regression coefficients associated with the individual
exposures.
We consider binary exposures Xi (i = 1, ..., n), coded 0/1, and a variable Z representing a
composite score estimated as the summation of the n exposures with and
without applying weights to each exposure. We first explore the properties of Z
in relation to those of the Xi. Secondly, we consider linear regressions on a
continuous outcome Y of Z, as well as, of Xi. We derive formulae for the
regression coefficient b associated with Z and we show that this is a weighted
average of the regression coefficients bi, associated with Xi. The weights are
functions of the variances and the covariances of Xi. We illustrate the above
with an example using Mediterranean Diet Score as composite score and Body
Mass Index as the outcome of interest.
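A simple way to see the weighted-average relationship numerically is sketched below in base R: when Y follows a linear model in the individual exposures and Z is their unweighted sum, the coefficient from regressing Y on Z equals a weighted average of the individual coefficients with weights cov(Xi, Z)/var(Z), which sum to one. All parameter values and names here are illustrative, not taken from the Mediterranean Diet Score example.

set.seed(1)
n <- 5e4
p <- 4
X <- sapply(1:p, function(j) rbinom(n, 1, 0.5))
X[, 2] <- ifelse(runif(n) < 0.3, X[, 1], X[, 2])  # induce some correlation between exposures
b <- c(0.5, -0.2, 0.8, 0.1)                       # 'true' individual exposure effects
Y <- drop(1 + X %*% b + rnorm(n))
Z <- rowSums(X)                                   # unweighted composite score

bZ <- coef(lm(Y ~ Z))["Z"]
w  <- apply(X, 2, function(x) cov(x, Z)) / var(Z)  # weights; they sum to 1
c(b_Z = bZ, weighted_average = sum(w * b), sum_of_weights = sum(w))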
The case-cohort design collects incident cases of a disease alongside a
random sample of the full cohort, providing a cost-efficient way to study
expensive biomarkers in large cohorts. In the Diet, Cancer & Health
prospective cohort of 57,053 Danish participants, case-cohort designs are used
extensively to investigate associations between adipose fatty acids and
incidence of various diseases. Our in-house standard for statistical analysis is
to use Horvitz-Thompson weighted Cox proportional hazards models with
robust variance estimation. This model is easily understood and implemented
by non-statisticians, but disregards valuable information about the exposure-disease association available from variables measured in the full cohort.
Survey statistics provides a framework for incorporating such information in
order to improve efficiency of estimators, and seems an interesting alternative
to our current practice. However, a promise of theoretical superiority is but one
aspect of a method in daily statistical practice. How much narrower confidence
intervals can we really expect from survey estimators; are they simple enough
to implement; and how do we communicate their rationale to clients? We report
here a case study, from a consultant's perspective, of the performance and
prospects of survey statistical methods in some typical case-cohort research
problems encountered in the Diet, Cancer & Health cohort.
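For orientation, a minimal R sketch of a weighted case-cohort Cox analysis of the kind discussed above is given below, using cch() from the survival package as one readily available implementation; the survey-calibration alternatives would instead build a two-phase design and calibrate on whole-cohort variables. The data frame 'cc' and its variables are hypothetical; only the cohort size (57,053) is taken from the abstract.

library(survival)

fit_ht <- cch(Surv(futime, event) ~ fattyacid + age + sex,
              data        = cc,
              subcoh      = ~subcohort,   # indicator of subcohort membership
              id          = ~id,
              cohort.size = 57053,
              method      = "LinYing")    # weighted estimator with robust variance
summary(fit_ht)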
P21.12
Analysis methods comparison for censored paired survival data. A study based
on survival data simulations with application on breast cancer.
Alexia Savignoni1, Caroline Giard4, Pascale Tubert-Bitter2, Yann De Rycke1
1
Institut Curie, Hôpital, Paris, France, 2INSERM UMRS 1018, CESP, Villejuif,
France, 3Université Paris Sud 11, Paris, France, 4Institut Curie, Hôpital René
Huguenin, Saint Cloud, France
Censored paired survival data, whether natural (e.g. studies on twins) or experimental, are often encountered in clinical research. Exposed-Non-Exposed studies with patients paired on several prognostic factors are conducted to compare
the evolution between these two groups. Specific analysis methods have to be
used with such correlated data. In our study we aim at evaluating the
performances of four main methods using simulations: the Prentice Wilcoxon
rank test, the frailty Cox model, the Wei Lin Weissfeld paired Cox model and
the Holt paired stratified Cox model. The motivating application is a cohort
study, which aims at the assessment of breast cancer prognosis related to a
subsequent pregnancy occurring over time after treatment in young women.
The exposed subject corresponds to a pregnant woman matched on several
prognostic factors to a non-exposed one, i.e. a non-pregnant woman.
Two thousand samples of pairs are generated. Each time to event, T1 and T2, is simulated according to an exponential survival model with parameter equal to 0.01. Clayton and Marshall-Olkin copulas are used to model their correlation. Four parameters are varied across the simulation scenarios: the number of pairs, the percentage of censoring, the regression prognostic parameter and the Spearman correlation coefficient between T1 and T2. For a nominal 5% risk, power and type I error will be assessed.
These four approaches are applied to real data of women cohort treated and
followed in 8 different French hospitals for their breast cancer.
P21.13
Identifying risk behavior for varicella infection using current status survival
analysis
Sandra Waaijenborg, Susan Hahné, Mirjam Knol, Liesbeth Mollema, Gaby
Smits, Fiona van der Klis, Hester de Melker, Jacco Wallinga
RIVM, Bilthoven, The Netherlands
The median age of infection with the varicella zoster virus - a herpes-virus that
can cause chickenpox in children and shingles in adults - is considerably lower
in the Netherlands compared to other European countries. To gain insight into this deviation, we used data from a large cross-sectional population-based serological study in which data on contact patterns were also collected. We
aimed to study determinants of infection; with a special focus on contact
pattern data.
Due to the nature of the data collection, it could only be determined whether at
the time of the study a person of a certain age had been infected with varicella.
However, the exact age at infection remains unknown. Using current status
survival analysis risk factors are identified.
Our results show that contact patterns are clearly associated with the chance
of infection. We noticed that young children who - on a more than average
basis - attend daycare centers are at higher risk of being infected at a younger
age.
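One simple way to analyse such current-status data is sketched below: under a proportional-hazards assumption, the probability of having been infected by the age at survey can be modelled by a binary regression with a complementary log-log link and a flexible function of age. This is only a generic sketch in base R; the data set 'serosurvey' and the covariates are hypothetical, and the authors' exact estimation approach may differ.

library(splines)

fit <- glm(seropos ~ ns(log(age), df = 3) + daycare + contacts,
           family = binomial(link = "cloglog"),
           data   = serosurvey)
summary(fit)
exp(coef(fit))   # covariate effects act like hazard ratios under the PH interpretation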
P21.14
When are interaction estimates confounded?
Aihua Liu1, Michal Abrahamowicz1, Jack Siemiatycki2
1McGill University, Montreal, Quebec, Canada, 2University of Montreal, Montreal, Quebec, Canada
Interaction analyses have become an integral part of modern epidemiology but pose specific challenges. Observational studies of interactions may be affected by unmeasured confounding, but the conditions under which an interaction effect estimate will be biased remain unclear. We first investigate such conditions through analytical derivations. Then, we report the results of simulations, which allow us to quantify the impact of selected parameters on bias in point estimates and coverage rates of confidence intervals for the interaction effect. We identify two different situations where the failure to adjust for the effect of a risk factor U results in a biased estimate of the interaction between exposures E1 and E2 on a binary outcome Y: (1) U is associated with E1 and has an interaction with E2 for Y; (2) the association between U and E1 varies depending on the value of E2. In simulations, where we assume U meets the latter condition, the magnitude of the confounding bias increased with increasing (i) prevalence of the confounder U, (ii) strength of its association with the outcome, and (iii) strength of the E1*E2 interaction for U. We also show that a variable that is not a confounder for the main effects of E1 and E2 on Y may still act as a confounder for the E1*E2 interaction for Y.
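A small base R sketch of situation (1) is given below: U is associated with E1 and interacts with E2 on Y, and is omitted from the analysis model. All parameter values are arbitrary illustrations, not the values used in the authors' simulations.

set.seed(2)
expit <- function(x) 1 / (1 + exp(-x))
one_rep <- function(n = 5000) {
  U  <- rbinom(n, 1, 0.3)
  E1 <- rbinom(n, 1, expit(-0.5 + 1.2 * U))   # U associated with E1
  E2 <- rbinom(n, 1, 0.5)
  Y  <- rbinom(n, 1, expit(-1 + 0.3 * E1 + 0.3 * E2 + 0.5 * E1 * E2 +
                           0.4 * U + 0.8 * U * E2))   # U interacts with E2 for Y
  c(unadjusted = coef(glm(Y ~ E1 * E2, family = binomial))["E1:E2"],
    adjusted   = coef(glm(Y ~ E1 * E2 + U + U:E2, family = binomial))["E1:E2"])
}
res <- replicate(500, one_rep())
rowMeans(res)   # compare with the true interaction log-odds ratio of 0.5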
P21.15
Composite retrospective estimates of Drug Use Incidence from Periodic
General Population Surveys in Spain.
Albert Sanchez-Niubo1, Josep Fortiana2, Antònia Domingo-Salvany1
1
Drug Abuse Epidemiology, IMIM-Hospital del Mar, Barcelona, Spain, 2Dept. of
Probability, Logic and Statistics. University of Barcelona, Barcelona, Spain
In the United States in the nineties, due to a tradition of periodic General
Population Surveys on Drug Use (GPS-DU), Gfroerer et al (Addiction
1992;87:1345-51) developed the retrospective method to estimate lifetime
incidence of drug use. The method was little used as few countries kept long
series of periodic GPS-DU and because of its poor reliability for less prevalent
substances like heroin. Recently, other substances are taking the lead, with
certain characteristics that allow reconsidering this method. The aim of our
study was to adapt the retrospective method to exploit eight Spanish biannual
GPS-DU (1995-2009) to estimate yearly lifetime incidence of cannabis and
cocaine use and their age of onset cumulative incidence by birth cohort.
Composite retrospective estimates were weighted means of the estimates of
each survey. The weights consisted of estimates of variances within each
survey. Five birth cohorts (1930-,1945-,1960-,1975-,1985-94) were considered.
Yearly incidence estimates exhibit an increasing trend up to year 2000 (11.5
and 3.9 per 1,000 inhabitants aged 10-64 for cannabis and cocaine,
respectively) and a smooth decrease since then (9.7 and 2.7 respectively in
2008). For both cannabis and cocaine in younger birth cohorts, cumulative
incidences increased while age of onset decreased. Long series of periodic
GPS-DU provide composite estimates which are more robust and have a wider
coverage of retrospective ages of drug use onset. Despite inherent sources of
bias from surveys on illicit substances, periodic GPS-DU can provide incidence
measures, desirable to adequately plan, evaluate and improve prevention
policies.
P21.16
Piecewise linear Poisson regression models with unknown break-points
Giota Touloumi, Nikos Pantazis, Evi Samoli
Athens University Medical School, Dpt of Hygiene, Epidemiology & Medical
Statistics, Athens, Greece
Regression models with piecewise linear terms where the number and the
position of the break-points are unknown are useful in many areas, like for
example when studying time trends in cancer mortality.
We propose a maximum likelihood based methodology to fit such models. Data are assumed to follow a Poisson distribution and all model parameters, including the break-points, can depend on other covariates. Transition functions are introduced in order to approximate the segmented relationship through a continuous differentiable function, and the break-point positions are re-parameterized in order to constrain them within a specific range. Maximization of the likelihood function is performed through numerical techniques. The variance-covariance matrix of the parameters can be obtained from the inverse of the observed information matrix or from the outer product of gradients estimator. The performance of the proposed method is assessed through simulations where, besides the usual measures of bias and precision, we estimate type I and type II error rates when comparing nested models with different numbers of break-points. The method is also applied to Greek cancer mortality data from 1968 to 2006 and compared to two alternative methods.
Simulation results showed that the proposed method is asymptotically
unbiased with empirical coverage probabilities and type I error rates being very
close to their nominal levels. Power to detect more complex trends was
depending on the sample size, the difference between subsequent slopes and
the position of the break-points.
In general, the proposed method is flexible, expandable and easy to implement
using modern optimization software.
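The basic idea can be sketched in base R as below for a single break-point: the break is approximated by a smooth logistic transition and its position is re-parameterised to stay inside the observed time range, with standard errors from the inverse observed information. The data (counts y at times t), the transition scale and the starting values are all illustrative.

nll <- function(par, y, t, eps = 0.5) {
  b0 <- par[1]; b1 <- par[2]; b2 <- par[3]
  k  <- min(t) + (max(t) - min(t)) * plogis(par[4])   # break-point constrained to the time range
  trans  <- plogis((t - k) / eps)                     # smooth transition function
  log.mu <- b0 + b1 * t + b2 * (t - k) * trans        # slope changes by b2 after the break
  -sum(dpois(y, exp(log.mu), log = TRUE))
}

fit <- optim(c(0, 0, 0, 0), nll, y = y, t = t, method = "BFGS", hessian = TRUE)
se  <- sqrt(diag(solve(fit$hessian)))                 # inverse of the observed information
cbind(estimate = fit$par, se = se)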
P21.17
A general conceptual and statistical framework for exposure-time-response
relationships based on distributed lag non-linear models
Antonio Gasparrini
London School of Hygiene and Tropical Medicine, London, UK
In biomedical research, a health effect is frequently the result of protracted
exposures of varying intensity sustained in the past, a phenomenon sometimes
described as an exposure-time-response association. This issue is common to
various research fields, such as environmental, cancer or pharmacoepidemiology. The main complexity of modelling and interpreting such
dependency lies in the additional time dimension needed to express the
association, as the risk depends on both intensity and timing of past
exposures.
This contribution illustrates a general modelling framework for exposure-time-response associations based on the extension of distributed lag non-linear
models (DLNMs), an approach originally developed in time series data. This
modelling framework is based on the definition of a cross-basis, a bi-dimensional space of functions describing the dependency simultaneously in
the spaces of the predictor and lag. A cross-basis is specified through a tensor
product between two independent set of basis functions for the two
dimensions, chosen among alternative options such as step, threshold or
spline functions. The estimated association is represented as a risk surface for
specific exposure and lag values, and predicted risks for specific exposure experiences can be computed.
As an illustrative application, extended DLNMs are applied to investigate the
relationship between occupational exposure to radon and lung cancer mortality,
through Cox proportional hazard models. This flexible modelling framework,
fully implemented in the R package dlnm, generalizes to various study designs
and regression models, and can be applied to study the health effects of
environmental factors, drugs intake or carcinogenic agents, among others.
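A minimal sketch with the dlnm package named in the abstract is shown below, assuming a version of dlnm that accepts a matrix of individual exposure histories. 'Qx' (an n x 41 matrix of exposures at lags 0-40), the Cox model variables and the spline choices are all assumptions for illustration, not the authors' specification.

library(dlnm)
library(survival)

cb <- crossbasis(Qx, lag = 40,
                 argvar = list(fun = "bs", degree = 2, df = 4),   # exposure-response dimension
                 arglag = list(fun = "ns", df = 4))               # lag-response dimension
fit  <- coxph(Surv(entry, exit, event) ~ cb, data = dat)
pred <- crosspred(cb, fit, at = 0:20)
plot(pred)   # estimated risk surface over exposure and lag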
P21.18
On the validity of power simulation based on Fleishman distributions
Jenö Reiczigel1, Tamás Ferenci2
1
Szent István University, Faculty of Veterinary Science, Budapest, Hungary,
2
Budapest University of Technology and Economics, Faculty of Electrical
Engineering and Informatics, Budapest, Hungary
Properties of statistical methods are often examined by simulation. Typical
examples are checking robustness of a method against violation of its
applicability conditions, or determining the power of a test under various
distributional assumptions. For such simulations several distribution systems
are available. One of them, the Fleishman system is fairly popular because of
the computational ease and flexibility of the method. Fleishman distributions
are defined as linear combinations of powers of standard normal variates.
Mean, variance, skewness and kurtosis of a Fleishman distribution can be set
to desired values.
To examine the performance of a statistical test when applied to non-normal
data, one can generate data from Fleishman distributions with given skewness
and kurtosis, and apply the test to them. It is questionable however, whether
these results are always valid. We found that for certain distributions (e.g. two-component normal mixtures), the fitted Fleishman distribution has a visibly
different density function, despite having identical first four moments. The
difference persists even if more moments are fitted. Since the performance of a
statistical method may depend on the distribution of data even beyond the first
few moments, simulation with Fleishman distributions may be misleading. We
present examples with simulated power of nonparametric tests.
Our conclusion is that if one wants to investigate the performance of a
statistical method by simulation, it is better to generate distributions from
observed data typical on that particular research field than to base the
simulation on arbitrary distributions like the Fleishman system.
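For reference, generating Fleishman variates and running a simple power simulation can be sketched in base R as below. The coefficients b, c, d must first be obtained by solving Fleishman's moment equations for the target skewness and kurtosis; the values used here are placeholders, not a solved set, and the test settings are illustrative.

rfleishman <- function(n, b, c, d) {
  z <- rnorm(n)
  a <- -c                         # ensures zero mean
  a + b * z + c * z^2 + d * z^3
}

power_sim <- function(nrep = 2000, n = 30, shift = 0.5, b = 1, c = 0.2, d = 0.05) {
  mean(replicate(nrep, {
    x <- rfleishman(n, b, c, d)
    y <- rfleishman(n, b, c, d) + shift
    wilcox.test(x, y)$p.value < 0.05
  }))
}
power_sim()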
P21.19
Prenatal, perinatal and neonatal risk factors for autism. A case-control study in
Poland
Dorota Mrozek-Budzyn, Agnieszka Kieltyka, Renata Majewska
Jagiellonian University Medical College, Krakow, Malopolska, Poland
The objective was to determine a relationship between pre-, peri-, and neonatal
factors and autism.
A case-control study was conducted among 288 children (96 cases with
childhood or atypical autism and 192 controls individually matched to cases by
the year of birth, sex, and general practitioners). Data on autism diagnosis and
other medical conditions were acquired from physicians. All other information
on potential autism risk factors were collected from mothers. The odds ratios
for autism diagnosis were calculated using conditional logistic regression.
Autism risk was significantly higher when mothers were taking medications
and smoked during pregnancy. It was also significantly associated with
neonatal dyspnea and congenital anomalies. In the gender-specific analysis, only congenital anomalies were significantly associated with autism for girls, but all of the mentioned factors remained independent risk factors for boys.
P22 Survival and multistate models
P22.1
Bonferroni’s method to compare k survival curves with recurrent events
Carlos Martinez1, Guillermo Ramirez2, Aleida Aular1
1
Carabobo University, Valencia, Carabobo, Venezuela, 2Central University of
Venezuela, Caracas, Venezuela
Survival analysis provides useful statistical tools to study events such as viral
diseases, malignant tumors, machines failures, among others. This paper
offers a statistical method of interest to epidemiologists, biostatisticians and
other researchers interested in methodology to apply in medicine and other
health research areas. Nonparametric tests for comparing recurrent event time data were proposed by Martinez et al. in 2009 and 2011. The main
objective is to propose a new procedure based on Bonferroni's method for
comparing survival curves of population groups with recurrent events. The idea
is to apply a methodology with a sequential procedure on multiple contrasts
with control of the type I error. The total number of tests (q) increases as the total number of groups (k) builds up, where q = k(k-1)/2. The hypothesis test is the following:
H0: S1(t) = S2(t) = ... = Sk(t)
H1: At least one Sr(t) is different with r=1,2,…,k.
Sr(t) is the survival curve of the rth group and its estimation is made by the
Generalized Product-Limit Estimator (GPLE or Kaplan-Meier estimator)
proposed by Peña et al. in 2001. The survival functions will be estimated using
R-language programs and counting processes. To illustrate Bonferroni's method for comparing survival curves, the Byar database will be used, in which the time (in months) to tumour recurrence was measured in 116 patients with bladder cancer who were randomly assigned to the following treatments:
placebo, pyridoxine and thiotepa.
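A simplified sketch of Bonferroni-adjusted pairwise comparisons of k survival curves is given below using standard log-rank tests on first events only; the full proposal relies on the generalized product-limit estimator for recurrent events (Peña et al. 2001), which this sketch does not implement. The data frame 'dat' and its variables are illustrative.

library(survival)

pairs <- combn(levels(dat$treatment), 2, simplify = FALSE)   # q = k(k-1)/2 comparisons
pvals <- sapply(pairs, function(g) {
  sub <- droplevels(subset(dat, treatment %in% g))
  sd  <- survdiff(Surv(time, status) ~ treatment, data = sub)
  pchisq(sd$chisq, df = 1, lower.tail = FALSE)
})
names(pvals) <- sapply(pairs, paste, collapse = " vs ")
p.adjust(pvals, method = "bonferroni")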
P22.2
One-Sample Test for Goodness-of-Fit for Length-Biased Right-Censored Survival Data
Jaime Younger, Pierre-Jérôme Bergeron
University of Ottawa, Ottawa, Ontario, Canada
Cross-sectional surveys are often used in epidemiological studies to identify
subjects with a disease. When estimating the survival function from onset of
disease, this sampling mechanism introduces bias, which must be accounted
for. If the onset times of the disease are assumed to be coming from a
stationary Poisson process, this bias, which is caused by the sampling of
prevalent rather than incident cases, is termed length-bias. A one-sample
Kolmogorov-Smirnov type of goodness-of-fit test for right-censored length-biased data is proposed and investigated with Weibull, log-normal and log-logistic models. Algorithms detailing how to efficiently generate right-censored
length-biased survival data of these parametric forms are given. Simulation is
employed to assess the effects of sample size and censoring on the power of
the test. The method is illustrated using length-biased survival data of patients
with dementia from the Canadian Study of Health and Aging.
P22.3
Planning and evaluating clinical trials with composite time-to-first-event
endpoints in a competing risk framework
Geraldine Rauch1, Jan Beyersmann2
1
Institute of Medical Biometry and Informatics, University of Heidelberg,
Heidelberg, Germany, 2Institute of Medical Biometry and Medical Informatics,
University of Freiburg, Germany
Composite endpoints combine several events of interest within a single
variable. These are often time-to-first-event data which are analyzed via
survival analysis techniques. To demonstrate the significance of an overall
clinical benefit, it is sufficient to assess the test problem formulated for the
composite. However, the effect observed for the composite
does not necessarily reflect the effects for the components. Therefore, it would
be desirable that the sample size for clinical trials using composite endpoints
provides enough power not only to detect a clinically relevant superiority for the
composite but also to address the components in an adequate way. The single
components of a composite endpoint assessed as time-to-first-event define
competing risks. We consider multiple test problems based on the
cause-specific hazards of competing events to address the problem of
analyzing both a composite endpoint and its components. Thereby, we use
sequentially rejective test procedures to reduce the power loss to a minimum.
We show how to calculate the sample size for the given multiple test problem
using a simply applicable simulation tool in SAS. Our ideas are illustrated by a clinical study example.
P22.5
Expanded renal transplant: A multi-state model approach
Pablo Martínez-Camblor1, Jacobo de Uña-Álvarez2, Carmen Díaz Corte3
1
CAIBER, Oficina de Investigación Biosanitaria de Asturias, Oviedo, Asturies,
Spain, 2Departamento de Estadística e IO, Universidad de Vigo, Vigo, Galicia,
Spain, 3Hospital Universitario Central de Asturias, Oviedo, Asturies, Spain
Statistical methods must be adapted to answer increasingly sophisticated questions. In the analysis of time-to-event data, multistate models are a valuable tool which allows complex problems to be faced from a more realistic approach than the usual proportional hazards Cox model. They let us take into account the different steps that patients follow before the final resolution, i.e. before reaching the main event under study. In addition, they are flexible and let us avoid most of the usual conditions such as, for instance, the proportional hazards assumption. In this work we are interested in the effects of the quality of the transplanted organ on the final survival of transplanted patients. In particular, we study the lifetime of people with kidney disease who, after a period of dialysis, have received a renal transplant. In order to increase the potential number of transplantation organs and reduce the waiting time, the usual quality standards for transplanted kidneys are sometimes relaxed (the new criteria are labelled expanded criteria) and these "expanded kidneys" are usually implanted in non-suitable candidates (in the current data set, they were older with...). The organ failure and the effect of poor renal function on other vital activities must also be considered. Results suggest that the expanded kidneys are only slightly worse than the usual ones.
P22.6
Fine and Gray approach versus cause-specific hazards: competing models or
just two views of the same story?
Christine Eulenburg, Linn Woelber, Karl Wegscheider
2Department of Medical Biometry and Epidemiology, University Medical Centre
Hamburg-Eppendorf, Hamburg-Eppendorf, Germany
In a situation with competing risks, the cause-specific hazards model as well as
the Fine and Gray model for the subdistribution hazard are established and
recommended tools. While the cause-specific hazards model deals with
instantaneous risks, the Fine and Gray method corresponds to marginal event
probabilities. As the two models focus on different aspects of a multistate
process, they are better understood as complements of one another than as
substitutes. An application to a dataset of vulvar cancer patients being exposed
to disease recurrence at different sites and to death demonstrates this. With
age, lymph node affection and depth of invasion, the three most important factors for recurrence-free survival, are included as covariates. The joint interpretation of the differing results from the two approaches and, moreover, of their differences enables a deeper insight into the process history than each model separately.
P22.4
Breast Feeding as a Time Varying - Time Dependent Factor for Birth Spacing: Multivariate Models with Validations and Predictions
Rajvir Singh
HMC, Doha, Qatar
Data used in the present study are from the National Family Health Survey
(NFHS) (1992-93), India. The present study has developed Cox model
analyses to see the effect of breastfeeding as a time varying and time
dependent factor on birth spacing. While it is acknowledged that breastfeeding
has a protected effect on birth spacing, such analysis of breastfeeding allows
for a more nuanced understanding of the same. Multivariate analysis revealed
breastfeeding, ever experience of fetal loss, education of women, employment
status of women, education of husband, media exposure, survival status of
index child and place of residence play an important part in extending birth space in at least one of the birth spacings (Ist to Vth). However, the variables varied from the first birth spacing to the fifth birth spacing. Breastfeeding is the only covariate which is noticed to be a significant protective factor associated with each birth spacing. Furthermore, this study validates the developed models with their prediction utilities for birth spacing.
P22.7
Autologous Stem Cell Transplant Study in Lymphoma Patients: Statistical
Analysis of Multi-State Models
Jana Furstova1,2, Zdenek Valenta3
1
Department of Mathematical Analysis and Application of Mathematics, Faculty
of Science, Palacky University Olomouc, Olomouc, Czech Republic, 2First
Faculty of Medicine, Charles University, Prague, Czech Republic, 3Department
of Medical Informatics, Institute of Computer Science of the ASCR, Prague, Czech Republic
Survival analysis is a collection of statistical methods for inference on time-to-event data. If several different events occur, specific methods have to be used in order to capture the complex structure of the data. A multi-state model is used for modeling a stochastic process which at any time occupies one out of a discrete set of possible states (e.g. healthy, ill, dead). Transitions between the states represent the events. The multi-state model is most commonly used in the form of a Markov model, where the transitions follow a Markov process. Recent papers have shown that this approach is very flexible and offers broad applications in biomedical research.
The aim of this paper is to recall the Markov multi-state model and to apply the multi-state methods to a real data set obtained from an Autologous Stem Cell Transplant Study in lymphoma patients carried out at the Clinic of Haemato-oncology of the University Hospital in Olomouc (Czech Republic). States observed in this study are healthy (i.e. in remission), relapsed and dead. The influence of several clinically relevant risk factors (especially quality of the transplanted graft) is analyzed with respect to their effect on the occurrence of the events of interest. The results obtained by means of multi-state models are compared to those of standard survival analysis methods. The work on this project was partially supported by institutional grant RVO:67985807 and ESF project CZ.1.07/2.4.00/174.0117.
P22.8
Exploratory survival analysis using longitudinal mixed-models
Ian James, Elizabeth McKinnon
Murdoch University, Perth, WA, Australia
In situations where interest focuses on time to achievement of an event which is assessed for each individual at a series of discrete time points, one can estimate and compare approximate survival curves by formulating the data as a sequence of longitudinal binary outcomes, with pseudo-time points added post-event for comparison as necessary, and utilizing standard mixed model inference. Whilst this appears to be an inefficient approach, choice of a flexible form for the distribution function and reasonable autocorrelation provides a useful and powerful general tool for exploratory survival analysis. It makes few distributional or structural assumptions, readily accommodates different censoring, observation and entry patterns and uses standard software. The analyses can highlight where survival curves differ, can incorporate changing covariate effects over time, particularly changing effects of baseline differences, and can accommodate such things as multiple weighted assessments of occurrence of the event where this may be predicted with uncertainty at each time point. We demonstrate the utility of the approach using flexible piecewise linear models, corresponding to piecewise constant densities, based on both simulated data and in application to the analysis of times to event among HIV positive patients whose status is assessed at non-uniform visit times post-therapy.
P22.10
Imputing missing covariate values in presence of competing risk
Matthieu Resche-Rigon, Sylvie Chevret
Université Paris Diderot; UMR-S 717; Hôpital Saint-Louis, AP-HP, Paris, France
Due to its flexibility, its practicability and its efficiency compared to the complete case analysis, multiple imputation by chained equations is widely used to impute missing data. Imputation models are built using regression models and it is well known that, to avoid bias in the analysis model, the imputation model must include all the analysis model variables, including the outcome.
In survival analyses, the outcome is defined by a binary event indicator D and the observed event or censoring time T. Unfortunately, estimates obtained by direct inclusion of D and T in the imputation model are biased. Using a Cox model, I. White and P. Royston showed that the imputation model should include the event indicator and the cumulative baseline hazard, and therefore recommended including the Nelson-Aalen estimator.
In the competing-risks setting, subjects may experience one out of K distinct and exclusive events. Two main approaches have been proposed. The most common approach models the cause-specific hazard of the event of interest while the second approach models the sub-distribution hazard associated with the cumulative incidence. We propose to extend the work of I. White and P. Royston to the competing-risks setting by including in the imputation model the cumulative hazards of the competing events. Moreover, we will show that the cumulative hazards of all the events that compete with each other should be included.
Performance of our approach will be evaluated by a simulation study, then applied to a sample of 278 adult patients with acute myeloid leukaemia.
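A minimal sketch of this imputation strategy with the mice package is shown below: Nelson-Aalen type cumulative hazards are computed for each competing event and added to the predictors, while the raw event time is excluded. The data set 'd', its variables and the analysis model are hypothetical illustrations, not the authors' own.

library(mice)
library(survival)

# ev1 = 1 for the event of interest, ev2 = 1 for the competing event (0 otherwise)
d$cumhaz1 <- nelsonaalen(d, time, ev1)
d$cumhaz2 <- nelsonaalen(d, time, ev2)

vars <- c("x_incomplete", "age", "sex", "time", "ev1", "ev2", "cumhaz1", "cumhaz2")
ini  <- mice(d[, vars], maxit = 0)
pred <- ini$predictorMatrix
pred[, "time"] <- 0                      # impute using the cumulative hazards, not the raw time
imp  <- mice(d[, vars], m = 10, predictorMatrix = pred, printFlag = FALSE)

# Analysis model: cause-specific Cox model for the event of interest, pooled over imputations
fit <- with(imp, coxph(Surv(time, ev1) ~ x_incomplete + age + sex))
summary(pool(fit))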
P22.9
Estimation of current cumulative incidence of leukaemia-free patients in chronic
myeloid leukaemia
Tomas Pavlik1, Eva Janousova1, Ladislav Dusek1, Jiri Mayer1, Marek Trneny2
1
Masaryk University, Brno, Czech Republic, 2Institute of Haematology and
Blood Transfusion, Prague, Czech Republic
INTRODUCTION: Traditional measures of treatment efficacy such as
cumulative incidence are unable to cope with multiple events in time, e.g.
disease remissions or progressions, and as such are inappropriate for the
efficacy assessment of the recent chronic myeloid leukaemia (CML) treatment.
OBJECTIVES: To estimate the probability that a CML patient will be in first or in any subsequent complete cytogenetic remission at time t after the initiation of CML therapy.
METHODS: Standard nonparametric statistical methods were used for estimating a principal characteristic of the current CML treatment: the probability of being alive and leukaemia-free in time after CML therapy initiation, denoted as the current cumulative incidence of leukaemia-free patients.
RESULTS: The results have shown a difference between the estimates of the current cumulative incidence function and the common cumulative incidence of leukaemia-free patients. Regarding the currently available follow-up period, the difference has reached its maximum (12.8%) at 3 years after the start of follow-up, i.e. after the CML therapy initiation.
CONCLUSION: A new quantity for the evaluation of the efficacy of current CML therapy that may be estimated with standard nonparametric methods has been proposed in this work. It reliably illustrates a patient's disease status in time because it accounts for the proportion of patients in the second and subsequent disease remissions. Moreover, the underlying model is also applicable in the future, regardless of what the progress in the CML treatment will be and how many treatment options will be available, respectively.
P22.11
Frequentist Evaluation of Bayesian Methods for Survival Data
David Dejardin1, Emmanuel Lesaffre1,2
1
I-Biostat KULeuven, Leuven, Belgium, 2Department of Biostatistics, Erasmus
MC, Rotterdam, The Netherlands, 3Global Biometric Sciences, Bristol-Myers
Squibb, Braine l'Alleud, Belgium
Bayesian methods are becoming increasingly popular, especially for the
modelling of complex models, even to non-Bayesians. In addition, standard
errors and confidence intervals (CIs) are easily obtained as a by-product from a
Bayesian analysis. Although not explicitly stated, the posterior summary
measures are often assumed to have frequentist properties.
The coverage of a Bayesian 95% CI obtained with an objective prior has been
studied by many. In some simple cases, correct coverage can be proven
analytically. In other cases, their favourable properties have been shown by
simulations (Agresti and Min, 2005). In this study, we explored the coverage
properties of the CIs of several Bayesian survival approaches for the analysis
of various types of censored data proposed by Ibrahim et al. (2001). The
motivation for this study comes from a recently suggested non-Bayesian
technique for the analysis of doubly-interval censored data and the difficulties
we encountered in deriving CIs for the model parameters.
The coverage and length of the Bayesian CIs were compared to their
frequentist counterparts for parametric and non-parametric approaches for
right, interval and doubly-interval censored data. Our tentative conclusions are
that the Bayesian approaches compare favourably to the frequentist
approaches, although their properties are not uniformly better.
The variety of approaches will also be illustrated on data from an ovarian
cancer RCT.
P22.13
The Cumulative Proportional Odds Model for Competing Risks
Kristin Ohneberg, Jan Beyersmann, Martin Schumacher
Institute of Medical Biometry and Medical Informatics, Freiburg, Germany
P22.15
Modelling discharge from a neonatal unit: an application of competing risks.
Sarah Seaton, Sally Hinchliffe, Paul Lambert, Bradley Manktelow
Department of Health Sciences, University of Leicester, Leicester, UK
Although the proportional odds model has been used within many different
fields of research, it rarely has been used for analyzing competing risks data.
Rajicic, Finkelstein and Schoenfeld [1] proposed a test for ordered categorical
event data using the cumulative proportional odds model and derived a one-degree-of-freedom hypothesis test for detecting an opposing effect of a predictor variable on the two extreme outcome categories. Instead of hypothesis testing, this talk focuses on parameter estimation within the proportional odds model in order to compute model-based cumulative incidence functions. The approach presented in this talk uses stratified Nelson-Aalen estimates in order to derive maximum-likelihood estimates of the regression coefficient. In a data example, we study the occurrence of bloodstream infections in neutropenic patients, where the competing risk is recovery. The cumulative proportional odds model revealed, as a possibly restrictive assumption, an opposing effect on the categorical outcome variable, leading to rather extreme results with respect to the cumulative incidence functions. We also compared the relative merits of the proportional odds model with proportional cause-specific or proportional subdistribution hazards modelling.
[1] Rajicic, N., Finkelstein, D.M., and Schoenfeld, D.A. (2009). Analysis of the relationship between longitudinal gene expressions and ordered categorical event data. Statistics in Medicine, 28(22):2817-2832.
Estimates of length of stay for babies in acute neonatal care are vital for clinical
decision making, counselling parents and planning services. Previous work has
predominately focused only on the length of stay of babies who survive to
discharge, removing those babies who die on the neonatal unit from the
analysis. However, it is highly likely that the babies that die have underlying
conditions that make them fundamentally different to those that survive to
discharge.
Competing risks methodology allows estimation of the probability of discharge
whilst accounting for the competing risk of death. We used data from The
P22.14
Flexible modeling in Relative Survival: additive vs multiplicative model
Amel Mahboubi1, Laurent Remontet2, Christine Binquet1, Jonathan Cottenet1,
Michal Abrahamowicz3, Catherine Quantin1
P22.12
1
Evaluation of estimation methods and tests of covariates in repeated time to Département de l’information médicale, CHU Dijon, INSERM U866, Dijon,
France, 2Hospices Civils de Lyon, Lyon, France, 3McGill University, Montreal,
event parametric models
Canada
1
2
1
Marie Vigan , Jérôme Stirnemann , France Mentré
1
INSERM, UMR 738, Univ Paris Diderot, Sorbonne Paris Cité, Paris, France, Accurate assessment of the effects of continuous prognostic factors requires
2
flexible modeling of both time-dependent (TD) and non-linear (NL) effects [1].
Hospital Jean-Verdier, Univ Paris XIII, Bondy, France
To address this complex issue, in Relative Survival (RS), two flexible
The analysis of repeated time to event (RTTE) data requires frailty models and extensions of the seminal Estèveet al model [2] have been developed. These
specific estimation algorithms. The aims of this simulation study were 1) to extensions differ in that the TD and NL effects of the covariate on the log
assess the estimation performance of the Stochastic Approximation excess hazard are assumed to be (i) additive in [3] but (ii) multiplicative in [4].
Expectation Maximization (SAEM) algorithm in MONOLIX and the Adaptative Specifically, the disease-specific hazard are written, respectively as:
Gaussian Quadrature in PROC NLMIXED of SAS, 2) to evaluate the properties
lc(t|z)=exp(g(t))*exp(ai(zi)+bi(t)*zi) andlc(t|z)=exp(g(t))*exp(ai(zi)*bi(t))
of the tests of a dichotomous covariate on the occurrence of events.
where: g(t) represents the baseline log hazard and, for a continuous covariate
The simulation settings are inspired from a real clinical study. We assumed an
z : a (z ) and bi(t) represent, respectively, its NL and TD effects.
exponential model with random effect and covariate effect additive on log i i i
lambda. We simulated 500 datasets with 200 subjects. We defined the fixed We carried out a systematic comparison of theadditive vs multiplicative RS
effect lambda=0.002 month-1, its variance omega2=1 and a maximum follow-up models. The two models were applied to both simulated datasets (generated
of 144 months. Various values for the effect of the covariate were studied and assuming different relationships between the two effects) and real-life datasets
we also varied omega and the number of subjects. We evaluated estimation (where the relationshipsare unknown).
performance through the relative bias and the relative root mean square error Results of the two models will be evaluated and compared, in terms of the
(RMSE). We studied the properties of both the Wald and the likelihood ratio shape of estimated TD and NL effects, their significance, models' fit to real-life
test (LRT). Estimations were performed with SAS v.9.3 (with 5 quadrature data, and ability to identify the ‘true' model (in simulations).
points) and MONOLIX v.4.0 (with 3 Markov chains).
The two algorithms showed similar estimation performances with small biases [1] Abrahamowicz& McKenzie, Stat Med2007;26:392-408
and RMSE, which decrease as the number of subject increases. Type I errors
[2] Estève et al, StatMed1990;9:529-538
were closed to 5% and power varied as expected.
Despite the small number of repeated events, both algorithms provided good [3] Remontetetal, StatMed2007;26:2214-2228
[4] Mahboubietal, StatMed2011;30:1351-1365.
estimates of the parameters and tests with adequate properties.
Neonatal Survey (TNS), which collects data on babies admitted to 29 neonatal
units in the East Midlands and Yorkshire regions of England. As there was a
non-proportional effect of gender in the study, we chose to model the cause-specific hazards for death and discharge using a flexible parametric survival
model as it can easily incorporate time-dependent effects. The cumulative
incidence function was then estimated for both death and discharge through a
transformation of the cause-specific hazards using numerical integration.
2176 babies were identified from TNS as being born at 24-28 weeks
gestational age. As gestational age increased the probability of survival to
discharge increased. For both male and female babies born at 24 weeks
gestation with a birthweight equal to the tenth centile, approximately 60% had
died within the first 40 days of life. The analysis has produced a useful tool for
clinicians to plan neonatal services and counsel parents.
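A simplified sketch of such a competing-risks analysis is given below, using nonparametric cumulative incidence functions and cause-specific Cox models rather than the flexible parametric models used by the authors. The data frame 'tns' and its variables are illustrative placeholders for the neonatal data described above.

library(survival)
library(cmprsk)

# status: 0 = still in unit (censored), 1 = discharged alive, 2 = died
ci <- cuminc(ftime = tns$days, fstatus = tns$status, group = tns$gestage)
plot(ci)   # cumulative incidence of discharge and death by gestational age

# Cause-specific hazards: one Cox model per event type
fit_discharge <- coxph(Surv(days, status == 1) ~ gestage + sex + bw_centile, data = tns)
fit_death     <- coxph(Surv(days, status == 2) ~ gestage + sex + bw_centile, data = tns)
summary(fit_discharge)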
dependent competing risk of treatment can be eliminated and where we want
to estimate the marginal survival distribution of natural conception. Without
assumptions on the correlation between the two competing events, this
marginal distribution is not identifiable. After determining the upper and lower
bounds of the marginal distribution, we use inverse probability of censoring
weighting techniques to correct for the dependent competing risk using various
patient characteristics measured during the diagnostic workup. To account for
the possibility of residual dependence conditional on the measured covariates,
we perform a sensitivity analysis modelling the assumed negative dependence
between the two competing events in a multivariable survival model with a
copula. Results will be illustrated using a dataset of 5630 couples with
unfulfilled child wish. The three year pregnancy rate in this cohort is 41% when
assuming treatment censoring is independent of natural conception, whereas
assuming natural conception chances were zero after treatment start leads to
an estimate of only 22%.
P22.16
Using restricted cubic splines to approximate complex hazard functions in the
P22.18
analysis of time-to-event data
Models for the Subdistribution Hazard of a Competing Risk under Left
Mark Rutherford, Michael Crowther, Paul Lambert
Truncation - a Comparison of two Approaches
University of Leicester, Leicester, UK
Michael Lauseker1, Andrea Kuendgen2
If interest lies in reporting absolute measures of risk from time-to-event data 1Institut für medizinische Informationsverarbeitung, Biometrie und
then obtaining an appropriate approximation to the shape of the underlying Epidemiologie, Ludwigs-Maximilians-Universität, München, Germany, 2Klinik für
hazard function is vital. Real-life hazard functions can be complex with, for Hämatologie, Onkologie und Klinische Immunologie, Heinrich-Heineexample, multiple turning points and standard parametric models may fit Universität, Düsseldorf, Germany
poorly. It has previously been shown that restricted cubic splines can be used
to approximate complex hazard functions in the context of time-to-event data. When analyzing the subdistribution hazard of a competing risk, the Fine and
The degree of complexity for the spline functions is dictated by the number of Gray model is usually applied. But when left truncation is present, this model
knots used to model the hazard function. Through the use of simulation, we does no longer work. The problem was recently solved by Geskus (Biometrics,
show that provided a sufficient number of knots are used, the approximated 2011) as well as by Zhang, Zhang and Fine (Stat. Med., 2011) proposing
hazard functions given by restricted cubic splines fit closely to the true function different weights.
for a range of complex hazard shapes.
The objective was to compare both approaches in the case of left truncation
The simulation study is motivated by a dataset of breast cancer patients in depending on one covariate.
England and Wales, which has a hazard function with two turning points. In Similar to the setting that Fine and Gray had used, we performed a simulation
the simulation, complex hazard shapes were generated using a two- experiment for the situation described above, varying the sample size, the
component mixture Weibull distribution. The flexible parametric modelling balance of the competing events and the regression coefficients. For the
approach was fitted to the simulated data using a range of values for the estimator of Zhang et al. we used the variant with the stratified nonparametric
degrees of freedom. The fit of the functions is assessed by using area weights.
differences between the true and fitted functions. Selection criteria (AIC and We evaluated both approaches on a real data example concerning the
BIC) are also compared to assess whether they select the degrees of freedom competing events „disease progression" and „death without prior progression"
for the "best-fitting" model. The results show that provided appropriate care is in patients with myelodysplastic syndromes, where two different treatments
taken, restricted cubic splines provide a good approximation to complex hazard were compared. In this particular data set, only patients from one treatment
functions.
group were left truncated, while patients from the other one were observed without delay. Both approaches seem to be equal when weights do not depend on covariates.
P22.17
When weights depend on one covariate we found visible differences in the
Correcting for a dependent competing risk in the estimation of natural estimated coefficients of that variable, but only slight differences in the
conception chances
coefficients of the others. So in the case of left truncation depending on one
covariate, Zhang's approach should be preferred. The choice of weights did not
Nan van Geloven1, Ronald Geskus2, Ben Willem Mol3, Koos Zwinderman2
1
Clinical Research Unit, Academic Medical Centre, University of Amsterdam, have any influence on the standard error of the regression coefficients.
Amsterdam, The Netherlands, 2Department of Clinical Epidemiology,
Biostatistics and Bio-informatics, Academic Medical Centre, University of P22.19
Amsterdam, Amsterdam, The Netherlands, 3Centre for Reproductive Medicine, Predictions and life expectancies in Illness-death model
Department of Obstetrics and Gynaecology, Academic Medical Centre,
Pierre Joly, Célia Touraine, Mélanie Le Goff
University of Amsterdam, Amsterdam, The Netherlands
When estimating the probability of natural conception from observational data 1) INSERM, ISPED, INSERM U897, Bordeaux, France, 2) Univ. Bordeaux,
on couples with an unfulfilled child wish, the start of assisted reproductive ISPED, INSERM U897, Bordeaux, France
therapy (ART) is a competing event that cannot be assumed to be independent
of natural conception. In clinical practice, interest lies in the probability of
natural conception in the absence of ART, as this probability determines the
need for therapy. We are thus faced with a situation where for new patients the
In longitudinal studies, several events can occur on the same individual. One
approach to model such data is the multi-state model, which allows subjects to
move over time within a finite number of states. We will focus our attention in
this work to the illness-death model, which is widely used in the medical
literature. In this model, individuals are allowed to move from the "health" state
to the "illness" state or the "death" state and from the "illness" state to the
"death" state.
In this model, we can model the effects of covariates on the three transition
intensities. The estimated regression parameters are useful to determine risk
factors involved in each transition. Transition intensities can also be estimated
and for example, these estimations can be useful to compare the rate of death
among diseased and non-diseased subjects. However, we may estimate other
quantities which provide additional information, sometimes more relevant for
clinicians or epidemiologists. For example, we can give life expectancies and
transition probabilities for given value of covariates. We can also give
probabilities for the status of subjects for a given time window, for example five
years after a visit, in order to find prognostic factors. The aim of this work is to
show how to estimate these quantities using the estimated regression
parameters and the estimated transition intensities. As an illustration, we give
some examples of predictions using data from a large cohort study on cognitive
aging (age and sex-specific life expectancies without dementia,... ).
With linked register and cause of death data becoming more accessible than
ever, competing risks methodology is being increasingly used as a way of
obtaining "real world"' probabilities of death broken down by specific causes.
As this type of analysis relies on the use of cause of death data, it is important,
in terms of the validity of these studies, to have accurate cause of death
information. However, it is well documented that cause of death information
taken from death certificates is often lacking in accuracy and completeness.
We assess, through use of a simulation study, the effect of under- and over-reporting of cancer on death certificates in a competing risks analysis
consisting of three competing causes of death: cancer, heart disease and other
causes. Using realistic levels of misclassification, we consider 24 scenarios.
The bias was highest in the older age groups on both the absolute (cumulative
incidence function) and relative scale (cause-specific hazard ratio).
Considering that misclassification is most likely to occur in these age groups,
caution should be taken when making conclusive remarks about the probability
of death from different causes. In the younger age groups, however, the bias
resulting from misclassification of cause of death was reasonably small.
P22.20
Estimation of avoidable deaths based on the theory of competing risks
Arun Pokhrel
Finnish Cancer Registry, Helsinki, Finland
P22.22
Modelling and utilising the baseline hazard in prediction models of clinical
outcomes: a missed opportunity
Kym Snell1, Deborah Stocken2, Lucinda Billingham1, Thomas Debray3, Karel
Moons3, Richard Riley1
1
MRC Midland Hub for Trials Methodology Research, University of
Birmingham, Birmingham, UK, 2Cancer Research UK Clinical Trials Unit,
University of Birmingham, Birmingham, UK, 3Julius Centre for Health Sciences
and Primary Care, Utrecht University, Utrecht, The Netherlands
Background/objective
The concept of 'avoidable deaths' has been of growing interest in population-based cancer survival studies. In a recent study (Pokhrel et al. 2010), the
theory of competing risks by Chiang (1968) was used to estimate avoidable
deaths. The aim of this study is to present how the Chiang's method can be
used to estimate the number of avoidable deaths.
Material
Patients diagnosed in Finland with cancer at 27 sites in 1971-2005 were linked
with population censuses made every five years in 1970-2000 to obtain
patient's educational level.
Results
By assuming the cancer mortality of high education group (13 years of
education or more) for all, 6% of the cancer deaths in patients diagnosed at
ages 25-89 years during first five years after diagnosis in 1971-1985 would be
theoretically avoidable. For periods 1986-1995 and 1996-2005, these
proportions were even higher, 7 and 9% respectively.
Conclusion
The crude death probabilities derived using Chiang’s method can be used to
estimate the avoidable deaths by eliminating the mortality differences between
different groups of cancer patients. Deaths saved from one cause will not be
actually saved because of competing risk mortality due to other causes. As the
deaths will not be saved for a long time, person-years savings are more
important.
References:
Chiang CL. Introduction to Stochastic Processes in Biostatistics 1968; Wiley:
New York.
Pokhrel A, Martikainen P, Pukkala E, Rautalahti M, Seppä K and Hakulinen T.
Education, survival and avoidable deaths in cancer patients in Finland. Br J of
Cancer 2010: 1109-1114.
In medical research, there is a major interest in developing statistical models
that predict the risk of a future clinical outcome for individuals. Such prediction
models utilise multiple prognostic factors, and aim to predict risk for individuals
based on their own characteristics, to inform therapeutic decisions and patient
counselling. When prediction models are developed using time-to-event data,
one approach is to use Cox regression. This produces a risk score for
individuals, based on the parameters in the model. However, this score is
difficult to interpret directly. Clinically, it is more informative to know individual
risk probabilities over time, but Cox regression does not provide this as the
baseline hazard is not estimated. Alternative approaches, such as flexible
parametric modelling overcome this issue. In this talk, we present a systematic
review of prediction models published in the general medical literature since
2006, and describe how researchers assess and utilise the baseline hazard.
Our findings reveal that the majority of articles use a Cox regression model and
ignore the baseline hazard, with the model's risk score often simply
categorised to define risk groups whose average prognosis differs over time.
Using a dataset in pancreatic cancer, we show why this is a wasted opportunity
and potentially misleading, as an individual's risk may vary considerably from
their group's average. We show that approaches such as the Royston-Parmar
model, which flexibly models the baseline hazard using cubic splines, should
rather be used to enable individual survival probabilities to be predicted.
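For illustration, the kind of flexible parametric (Royston-Parmar type) model advocated above, in which the baseline is modelled with restricted cubic splines so that individual survival probabilities can be predicted directly, can be sketched as below. The flexsurv package is used as one possible implementation; the data set 'pancreatic', its covariates and the number of knots are assumptions for illustration only.

library(flexsurv)

fit <- flexsurvspline(Surv(time, status) ~ age + stage + ca19_9,
                      data = pancreatic, k = 2, scale = "hazard")

# Predicted survival probabilities over time for one new individual
newpat <- data.frame(age = 65, stage = "III", ca19_9 = 300)
summary(fit, newdata = newpat, type = "survival", t = c(6, 12, 24))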
P22.23
Explaining differences in post-transplant survival between two studies in
chronic myeloid leukaemia through identification of predictive factors by a Cox
proportional hazard cure model
Markus Pfirrmann1, Ruediger Hehlmann2
P22.21
1
Ludwig-Maximilians-Universitaet Muenchen, Institut für Medizinische
The Impact of Under and Over-recording of Cancer on Death Certificates in a Informationsverarbeitung, Biometrie und Epidemiologie (IBE), Muenchen,
Competing Risks Analysis: A Simulation Study
Germany, 2Medizinische Fakultaet Mannheim der Universitaet Heidelberg,
Sally R. Hinchliffe, Keith R. Abrams, Paul C. Lambert
Mannheim, Germany
University of Leicester, Leicester, Leicestershire, UK
Background: In two consecutive German studies on chronic myeloid
leukaemia, patients were randomised to receive haematopoetic stem cell
transplantation (HSCT) with a related donor. Despite supposedly comparable
transplantation procedures, the survival probabilities after transplantation in
first chronic phase of the patients treated in study III were significantly lower
than the probabilities of the patients in study IIIA.
Objective: To investigate whether predictive factors for post-transplant survival
can explain the outcome differences.
Methods: For the identification of predictive factors, the Cox proportional
hazard cure model was used [Sy and Taylor, Biometrics 2000]. Parameter
estimation was performed by application of the SAS macro of Corbière and
Joly [Comput Meth Prog Bio, 2007].
Results: Donor matching, age, and time of HSCT after diagnosis were
identified as predictive factors. In an independent dataset of transplantation
registries, the influence of calendar time was determined. Including the
predictive variables as a combined risk score in the cure model, the four
resulting risk groups had a significant influence on latency (survival of the
uncured patients) as well as on the incidence of cure. Added as a further factor,
study origin had no significant impact any longer.
Conclusions: Differences in influential predictive factors contributed to
significantly different post-transplant survival probabilities between the studies.
Apart from the explained variation in survival in dependence on the predictive
risk factors, also random variation might have a share in the outcome
discrepancy.
P22.24
The Use of Latent Trajectories in Survival Models to Explore the Effect of Longitudinal Data on Mortality
Mathieu Bastard1, Jean-François Etard2
1Epicentre, Paris, France, 2UMI 233 TransVIHMI, Institut de Recherche pour le Développement, Université Montpellier 1, Montpellier, France
In survival models, time-dependent covariates increase the complexity of both methodology and interpretation of results. Using latent trajectories is an alternative method to take into account time-dependent data in survival models which leads to easier model building and interpretation of results.
We describe the different steps of the method and we apply it to explore the effect of adherence to antiretroviral treatment (ART) and CD4 cell count over time on mortality in HIV-infected patients.
First, we identify the latent trajectories using a generalized linear latent and mixed model. Then, we assign to each patient a unique latent trajectory based on the maximal membership probability. We thus obtain a new discrete variable which summarizes the whole trajectory of longitudinal data for a given patient. Finally, we include this new variable as a covariate in a classical survival model to estimate its effect on mortality.
In a first example, we identify three latent trajectories of adherence and we show that, compared to patients with a low adherence trajectory, patients with intermediate and high adherence trajectories are at lower risk of death (Hazard Ratio (HR), 95% CI: 0.38 (0.21;0.69) and 0.12 (0.04;0.34), respectively). In a second example, two latent trajectories of CD4 are identified, and patients with a high CD4 trajectory were at lower risk of death on ART (HR: 0.19 (0.08;0.47)).
This method should be considered with interest in clinical and observational research as it provides an easy way to explore and interpret the link between longitudinal data and mortality or other time-to-event outcomes.
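A minimal two-stage sketch of this idea on simulated data follows; k-means clustering of patient-level summaries is used here as a crude stand-in for the generalized linear latent and mixed model described above, so all names and settings are illustrative and this is not the authors' implementation.

# Two-stage sketch: (1) summarise each patient's longitudinal profile,
# (2) group patients into trajectory classes (k-means as a crude stand-in
# for a latent-class mixed model), (3) use the class as a baseline
# covariate in a Cox model for mortality.
library(survival)
set.seed(1)

n <- 300
slope     <- rnorm(n, mean = 0.5, sd = 1)   # patient-specific CD4 slope
intercept <- rnorm(n, mean = 5, sd = 1)     # patient-specific CD4 level

# Stages 1-2: cluster patient-level summaries into two trajectory classes
cl   <- kmeans(cbind(intercept, slope), centers = 2)$cluster
traj <- factor(cl, labels = c("class1", "class2"))

# Simulate survival times that depend on the (unobserved) slope
true_t   <- rexp(n, rate = exp(-1 - 0.8 * slope))
cens     <- runif(n, 0, 10)
status   <- as.numeric(true_t <= cens)
obs_time <- pmin(true_t, cens)

# Stage 3: Cox model with the trajectory class as covariate
coxph(Surv(obs_time, status) ~ traj)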
P22.25
Telling curative from palliative effects of covariates in prognostic analysis in a population with a cured fraction: Application of a biological cure model to metastasis-free survival in uveal melanoma patients
Lucie Biard, Laurence Desjardins, Sophie Piperno-Neumann, Pascale Mariani, Corine Plancher, Yann De Rycke
Institut Curie, Paris, France
Therapeutic advances allow for prolonged disease-free survival and sometimes even cure of certain malignancies. In these cases, standard survival analysis methods, such as the Cox proportional hazards model, with or without time-dependent covariates, may not be applicable: they will not adequately account for the effect of covariates on the event. Cure models, either parametric or semiparametric, have been specifically developed to account for a cured fraction in a studied population.
We performed a prognostic analysis of survival of uveal melanoma patients treated by first-line enucleation at the Curie Institute, Paris, between 1982 and 2009. We studied specific metastasis-free survival to allow for a cure rate in the sample.
We implemented a so-called biological parametric cure model, developed by AY Yakovlev at the Curie Institute, on the basis of tumor latency. We performed univariate and multivariate analyses. The effect of covariates on the cure rate is modelled by a logistic function, while the specific survival of the uncured fraction follows a log-logistic function, under the proportional hazards assumption for the effect of covariates on the time to event.
The analysis procedure with the cure model addresses the specific issues of a cure-rate situation: numerous statistical hypotheses are tested (palliative and/or curative effect of a covariate). The model provides estimates of cure rates, depending on covariates (OR), as well as estimates of the probability of metastasis-free survival of the uncured patients (HR), characterizing the distinct prognostic effects of covariates.
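The structure of a parametric mixture cure model of this general kind (logistic model for the cured fraction, log-logistic latency with proportional hazards for the uncured) can be sketched in a few lines of R. The code below fits such a model to simulated data by maximum likelihood; it only illustrates the model class and is not the Yakovlev biological cure model used by the authors.

# Self-contained sketch of a parametric mixture cure model fitted by
# maximum likelihood on simulated data (illustration only).
set.seed(2)
n <- 500
x <- rbinom(n, 1, 0.5)                     # one binary covariate

p_cure <- plogis(-0.5 + 1.0 * x)           # true cure probabilities
cured  <- rbinom(n, 1, p_cure)

# Latency for the uncured: log-logistic baseline, PH effect of x
shape <- 2; scale <- 3; beta <- 0.7
u <- runif(n)
t_uncured <- scale * (u^(-1 / exp(beta * x)) - 1)^(1 / shape)
time_true <- ifelse(cured == 1, Inf, t_uncured)
cens   <- runif(n, 0, 8)
time   <- pmin(time_true, cens)
status <- as.numeric(time_true <= cens)

negloglik <- function(par) {
  g0 <- par[1]; g1 <- par[2]               # incidence (cure) part
  b  <- par[3]                             # log-HR for the uncured
  la <- exp(par[4]); sh <- exp(par[5])     # log-logistic scale, shape
  pc <- plogis(g0 + g1 * x)                # P(cured)
  S0 <- 1 / (1 + (time / la)^sh)
  h0 <- (sh / la) * (time / la)^(sh - 1) * S0
  Su <- S0^exp(b * x)
  fu <- exp(b * x) * h0 * Su
  ll <- status * log((1 - pc) * fu) +
        (1 - status) * log(pc + (1 - pc) * Su)
  -sum(ll)
}
fit <- optim(c(0, 0, 0, log(3), log(2)), negloglik, control = list(maxit = 2000))
round(fit$par, 2)   # g1 = log-OR for cure, b = log-HR for the uncured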
P22.26
Performance of parametric survival models under non-random interval censoring: a simulation study
Nikos Pantazis1, Michael G Kenward2, Giota Touloumi1, on behalf of CASCADE Collaboration in EuroCoord
1Athens University Medical School, Dpt of Hygiene, Epidemiology & Medical Statistics, Athens, Greece, 2Medical Statistics Department, London School of Hygiene and Tropical Medicine, London, UK
In many medical studies, individuals are seen at a set of pre-scheduled visits. In such cases, when the outcome of interest is the occurrence of an event, the corresponding times are only known to fall within an interval formed by two consecutive visits. These data are called interval-censored. All available methods for the analysis of interval-censored event times rely on a simplified likelihood function assuming that the only information provided by the censoring intervals is that they contain the actual event time (i.e. non-informative censoring).
In this simulation study we assess the performance of parametric models for interval-censored data when individuals miss some of their pre-scheduled visits completely at random (MCAR), at random (MAR) or not at random (MNAR), while making a comparison with a simpler approach often used in practice. A sample of HIV-1 infected individuals from the CASCADE study is used for illustration in an analysis of the time between antiretroviral treatment initiation and viral load suppression.
Results suggest that parametric models based on flexible distributions (e.g. generalised Gamma) can fit such data reasonably well and are robust to MCAR or MAR mechanisms. Violating the non-informative censoring assumption, though, leads to biased estimators, with the direction and magnitude of the bias depending on the direction and strength of the MNAR mechanism. Simplifying the data by assuming that event times coincide with the end of the interval and applying standard survival analysis techniques can yield misleading results even when missingness depends only on a baseline covariate.
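A minimal sketch of the contrast examined above, assuming the R survival package: an interval-censored parametric fit (a Weibull model standing in for the more flexible generalised Gamma) versus the naive analysis that places each event at the end of its censoring interval. The data are simulated and purely illustrative.

# Interval-censored parametric model vs. naive "event at interval end"
library(survival)
set.seed(3)

n <- 400
x <- rbinom(n, 1, 0.5)
true_t <- rweibull(n, shape = 1.5, scale = exp(1.0 - 0.5 * x))

# Pre-scheduled visits every 0.5 time units up to 5; events are only known
# to lie between the last visit before and the first visit after them.
visits <- seq(0, 5, by = 0.5)
lower <- sapply(true_t, function(t) {
  before <- visits[visits < t & visits > 0]
  if (length(before)) max(before) else NA   # NA = left censored before first visit
})
upper <- sapply(true_t, function(t) {
  after <- visits[visits >= t]
  if (length(after)) min(after) else NA     # NA = right censored after last visit
})

# Correct analysis: interval-censored likelihood
fit_ic <- survreg(Surv(lower, upper, type = "interval2") ~ x, dist = "weibull")

# Naive analysis: pretend the event happened exactly at the interval's end
end_t <- ifelse(is.na(upper), 5, upper)
event <- as.numeric(!is.na(upper))
fit_naive <- survreg(Surv(end_t, event) ~ x, dist = "weibull")

rbind(interval = coef(fit_ic), naive = coef(fit_naive))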
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
131/156
P22.27
The illness-death model to study progression of chronic kidney disease
Julie Boucquemont1, Marie Metzger2, Georg Heinze3, Karen Leffondré1
1Univ. Bordeaux, ISPED, Centre INSERM U897-Epidemiologie-Biostatistique, Bordeaux, France, 2Inserm U1018 Centre for Research in Epidemiology and Population Health, Univ. Paris Sud 11, Villejuif, France, 3Medical University of Vienna, Center for Medical Statistics, Informatics and Intelligent Systems, Vienna, Austria
Chronic kidney disease (CKD) progression can be defined by the occurrence of different types of events such as progression in CKD markers or "hard" outcomes like dialysis. Death is a competing event for all these events of interest. While the time-to-event is exactly known for dialysis, it is systematically interval-censored for progression defined by repeated measures of quantitative CKD markers. This interval censoring produces uncertainty on the disease stage just before death for subjects who die during follow-up. Such uncertainty can be accounted for using an illness-death model (IDM) for interval-censored data. In a simulation study, we have shown that this approach leads to less biased estimates of the effects on the risk of 'illness' of risk factors that also affect death, as compared to standard cause-specific hazards models. However, to our knowledge, the IDM for interval-censored data has never been used to investigate CKD progression.
The objective of this study is to elucidate when the IDM should be considered for estimating the effect of risk factors on CKD progression. To this aim, we compare the estimates from the IDM and standard cause-specific and subdistribution hazards models, for selected risk factors of CKD progression, using cohort data. Different events of interest, defined by quantitative CKD markers or dialysis, are considered. Our results enable us to distinguish practical situations where all estimates are similar from those where they substantially differ. These results, combined with our simulation results, lead to some recommendations on when the IDM should be used for investigating CKD progression.
P22.28
Proportional and non-proportional subdistribution hazards regression with SAS
Maria Kohl1, Karen Leffondré2, Georg Heinze1
1Medical University of Vienna, Vienna, Austria, 2Université Bordeaux 2, ISPED, Bordeaux, France
We consider a study on determinants of progression of chronic kidney disease, where the outcome is time to dialysis, with death as competing event. Some of the risk factors show time-dependent effects on the subdistribution hazard, causing misspecification of a proportional subdistribution hazards (PSH) regression model. We present a new SAS macro %PSHREG that can be used to fit a PSH model but also accommodates the possibility of non-PSH. Our macro first modifies the input data set appropriately and then applies SAS's standard Cox regression procedure, PROC PHREG, using weights and counting-process format. With the modified data set, standard methods can then be used to estimate cumulative incidence functions for an event of interest. In general, proportional cause-specific hazards do not ensure PSH. In case of non-PSH, random censoring usually distorts the estimate of the time-averaged subdistribution hazard ratio of a misspecified PSH model, as later event times are underrepresented due to earlier censoring. To address this issue, we can optionally weight the summands of the estimating equations, i.e., the risk sets at each event time, by inverse probability of censoring or by the number at risk expected had censoring not occurred. While the former weights make time-averaged effect estimates independent from the observed follow-up distribution, the latter allow an appealing interpretation of the average subdistribution hazard ratio as 'odds of concordance' of time-to-dialysis with the risk factor. We illustrate application of these extended methods for competing risks regression using our macro, which is freely available at http://cemsiis.meduniwien.ac.at/en/kb/science-research/software/statistical-software/pshreg/, by means of an analysis of our motivating example.
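An R analogue of a proportional subdistribution hazards (Fine-Gray) analysis of this kind is sketched below, assuming the cmprsk package; this is not the %PSHREG SAS macro itself, and the data and variable names are simulated and illustrative.

# Fine-Gray subdistribution hazards model with simulated competing risks:
# event 1 = dialysis, event 2 = death (competing), 0 = censored.
library(cmprsk)
set.seed(4)

n    <- 500
age  <- rnorm(n, 60, 10)
male <- rbinom(n, 1, 0.5)

t1 <- rexp(n, rate = exp(-4 + 0.03 * age))                 # latent time to dialysis
t2 <- rexp(n, rate = exp(-4 + 0.02 * age + 0.3 * male))    # latent time to death
cens <- runif(n, 0, 60)

time   <- pmin(t1, t2, cens)
status <- ifelse(t1 <= pmin(t2, cens), 1,
                 ifelse(t2 <= cens, 2, 0))

covs <- cbind(age = age, male = male)

# Proportional subdistribution hazards model for dialysis (failcode = 1).
# Time-varying covariate effects (non-PSH) could be explored via the
# cov2/tf arguments of crr().
fit <- crr(ftime = time, fstatus = status, cov1 = covs,
           failcode = 1, cencode = 0)
summary(fit)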
P22.29
Multistate modeling in the analysis of cost-effectiveness of NSCLC treatments
Mathilda Bongers1, Veerle Coupe1, Cary Oberije2, Carin Uyl-de Groot3, Dirk de Ruysscher2
1Epidemiology and Biostatistics, VU University Medical Center, Amsterdam, The Netherlands, 2Maastricht University Medical Centre, Department of Radiotherapy, MAASTRO Clinic, Maastricht, The Netherlands, 3Institute for Medical Technology Assessment, Erasmus University Medical Center, Rotterdam, The Netherlands
Traditionally, the measurement of effectiveness as part of a cost-effectiveness analysis of cancer treatments has involved estimation of mean or median survival based on one or more studies with a single endpoint only. To establish the cost-effectiveness of new, individualized treatment strategies in cancer, more advanced tools for the estimation of effectiveness are required because clinical and molecular characteristics need to be taken into account. In addition, time to and occurrence of intermediate events such as tumor recurrence and metastasis are important to establish the main outcomes in cost-effectiveness studies, namely costs and quality of life. In this study we introduce multistate modeling to obtain the transition probabilities in a Markov model, for different patient profiles. The aim of the model is to evaluate the costs and effects of a new treatment strategy in radiotherapy compared to current care.
We used data from the Maastro Clinic in Maastricht, The Netherlands. The dataset included 322 patients with data on patient and tumor characteristics, and the time of occurrence of local recurrence, distant metastases and death. All patients received radiotherapy with curative intent; a subgroup of patients was included in a dose-boosting study. Using multi-state regression models, we developed a micro-simulation Markov model consisting of 6 health states, which are: treatment (initial state), local recurrence, distant metastases, local recurrence and distant metastases, and death. The R package version 2.11.0 and the mstate package version 2.6.0 were used for the multistate modeling.
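A small sketch of multistate modelling with the mstate package (which the abstract reports using) follows, on simulated data and with a simplified three-state treatment-metastasis-death structure; the states, rates and covariates are illustrative only and do not reproduce the authors' six-state model.

# Multistate Cox model with 'mstate' on simulated data.
library(survival)
library(mstate)
set.seed(5)

tmat <- transMat(x = list(c(2, 3), c(3), c()),
                 names = c("treatment", "metastasis", "death"))

n   <- 300
age <- rnorm(n, 65, 8)
tm  <- rexp(n, 0.10)                 # latent time to metastasis
td  <- rexp(n, 0.06)                 # latent time to death without metastasis
death_time <- ifelse(tm < td, tm + rexp(n, 0.20), td)
met_time   <- ifelse(tm < td, tm, Inf)
cens <- runif(n, 0, 25)

dat <- data.frame(
  id    = 1:n,
  mtime = pmin(met_time, death_time, cens),
  mstat = as.numeric(met_time <= pmin(death_time, cens)),
  dtime = pmin(death_time, cens),
  dstat = as.numeric(death_time <= cens),
  age   = age
)

# Long format: one row per transition at risk, per patient
msdat <- msprep(time = c(NA, "mtime", "dtime"),
                status = c(NA, "mstat", "dstat"),
                data = dat, trans = tmat, keep = "age")

# Cox model with transition-specific baseline hazards (strata) and a common
# age effect; transition-specific effects could be added via expand.covs().
cfit <- coxph(Surv(Tstart, Tstop, status) ~ age + strata(trans),
              data = msdat, method = "breslow")
summary(cfit)

# Cumulative transition hazards and transition probabilities for a given
# patient profile can then be obtained with msfit() and probtrans().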
P22.30
Computationally simple estimation and improved efficiency for special cases of double truncation
Rebecca Betensky1, Matthew Austin1, David Simon2
1Harvard School of Public Health, Boston, MA, USA, 2Beth Israel Deaconess Medical Center, Boston, MA, USA
Doubly truncated survival data arise when event times are observed only if they occur within subject-specific intervals of time. Existing iterative estimation procedures for doubly truncated data are computationally intensive (Turnbull, 1976; Efron & Petrosian, 1999; Shen, 2008). These procedures assume that the event time is independent of the truncation times in the sample space that conforms to their requisite ordering. This type of independence is referred to as quasi-independence. We identify and consider two special cases of quasi-independence: complete quasi-independence and complete truncation dependence. For the case of complete quasi-independence, we derive the nonparametric maximum likelihood estimator in closed form. For the case of complete truncation dependence, we derive a closed-form nonparametric estimator that requires some external information, and a semi-parametric maximum likelihood estimator that achieves improved efficiency relative to the standard nonparametric maximum likelihood estimator, in the absence of external information. We demonstrate the consistency and improved efficiency of the estimators in simulation studies and through asymptotic derivations, and illustrate their use in application to studies of AIDS incubation and Parkinson's disease age of onset.
132/156
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
P22.31
Sample size calculation for time-to-event outcomes in randomized controlled trials: an evaluation of standard methods
Mike Bradburn1, Ly-Mee Yu2
1University of Sheffield, Sheffield, UK, 2University of Oxford, Oxford, UK
Background
Sample size calculations for time-to-event outcomes are usually based on the
number of events needed to detect a postulated hazard ratio (HR). The
derivation of this HR often incorporates data from previous studies. Although
this may be a previously reported HR (e.g. from a Cox regression model), it
may equally be derived from either the survival proportions at a fixed point in
time or from the ratio of the median survival times.
Objective
This work is intended to review the performance of different estimators of the
HR, and to serve as a guide (and a caution) to practising statisticians when
undertaking sample size calculations.
Method
This evaluation is primarily simulation based, comparing how estimators perform in different scenarios, varying the survival distribution (including exponential, Weibull and non-proportional hazards), the type of censoring, and the extent of small departures from proportionality of hazards.
Results
Unsurprisingly, the HR (and hence sample size) is highly inaccurate when
evaluated on small samples, or evaluated from the ratio of survival probabilities
early in the study duration. The overall model-based estimator of the HR is the
preferred method.
Discussion
Although performance of the methods varies by scenario, we advocate deriving
an overall measure of the HR; methods proposed by Parmar (Statistics in
Medicine, 1998) can be used to derive these. We also propose that a minimum of 100 events per arm is needed if results from previous studies are used to calculate a plausible hazard ratio.
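For reference, the event count underlying such calculations is commonly obtained from Schoenfeld's formula; a minimal sketch in R, assuming equal allocation and using illustrative design values:

# Number of events needed to detect a postulated hazard ratio with a
# log-rank / Cox test (Schoenfeld's formula), assuming 1:1 allocation.
events_needed <- function(hr, alpha = 0.05, power = 0.80, p = 0.5) {
  (qnorm(1 - alpha / 2) + qnorm(power))^2 / (p * (1 - p) * log(hr)^2)
}

# Example: HR = 0.75, two-sided alpha = 0.05, 80% power
ceiling(events_needed(0.75))   # about 380 events in total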
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
133/156
Author's Index
Aalen, Odd O
C32.2
Aris, Emmanuel
C5.1
Abellan, Juan José
C12.3
Ariti, Cono
C34.4
Abisheganaden, John
C11.4
Armstrong, Ben
C34.4
Abrahamowicz, Michal
C7.1, P21.14, P22.14
Armstrong, Bruce
C23.5
Abrams, Keith
C16.2, P12.4, P15.6, P22.21
Arnould, Benoit
P6.2
Abu-Assi, Emad
C28.2
Ashby, Deborah
IP3, C11.2
Adolf, Daniela
C13.1
Augard, Christele
P11.2
Agris, Jacob
P4.26
Augustyniak, Malgorzata
P14.8
Agris, Julie
P4.26
Aular, Aleida
P22.1
Aguirre, Urko
P20.19, P20.24
Austin, Matthew
P22.30
Ahmed, Ikhlaaq
P20.17
Aydemir, Aida
P4.26
Ahmed, Ismaïl
I6.2
Ayerbe Garcia-Mozon, Luis
P13.3
Ahmed, Roman
P11.5
Ayis, Salma
P13.3, P16.2
A.K., Mathai
P4.7
Baart, Mireille
P20.5
Akram, Muhammad
C14.3, P4.28
Bailey, Michael
C14.3
Alavi Majd, Hamid
P14.7
Bakke, Øyvind
C21.1
Al-Kadhimi, Gillian
P21.4
Bakoyannis, Giorgos
C26.4
Allignol, Arthur
C32.5
Bakshi, Andisheh
C30.1
Almansa, Josué
C27.3
Balduini, Anna
P15.4
Alonso, Ariel
C5.3
Balliu, Brunilda
C34.5
Alonso, R
P14.1
Bamia, Christina
P21.9
Altini, Mattia
P8.5
Bandyopadhyay, Dipankar
C23.3
Altman, Doug
P20.9
Banerjee, Buddhananda
C21.4
Altstein, Lily
P4.15
Bare, Marisa
P20.19
Ambler, Gareth
C24.4, P11.5, P20.11, P20.22
Barrett, Jessica
C16.4
Ambrogi, Federico
P8.8
Barry, Sarah
P7.3
Amieva, Helene
C31.3
Bartlett, Jonathan
C25.1, C25.4
Ancelet, Sophie
C12.3, P20.14
Bastard, Mathieu
P22.24
Andersen, Per Kragh
C26.1, C6.3
Bauer, Peter
P1.2
Anderson, Denise
C23.5
Becher, Heiko
C19.3
Anderson, John
P4.20
Bekaert, Maarten
C9.1
Andersson, Eva
C33.3
Belgrave, Danielle
C12.4
Andersson, Therese
C6.2, C22.2
Belin, Lisa
P4.23
Andrews, Nick
P18.2
Bellocco, Rino
P3.6, P12.5
Antolini, Laura
I4.2
Bellomo, Rinaldo
C14.3
Antweiler, Kai
C33.5
Bender, Ralf
C10.2, C21.3
Arden, Nigel
P21.5
Benner, Axel
C3.2, C28.4
Aregay, Mehreteab
P18.1
Benzenine, E
MS2.5
134/156
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
Berg, Erik
P19.10
Bowman, Adrian
C13.3
Bergeron, Pierre-Jérôme
P22.2
Bradburn, Mike
P4.30, P22.31
Bergersen, Linn Cecilie
I6.2
Brayne, Carol
C31.3
Bergsma, Wicher
C5.1
Briones, Eduardo
P20.19
Bertoni, Francesco
P20.23
Brody, David
P4.20
Beryl, Primrose
P10.5
Broët, Philippe
P4.23
Betensky, Rebecca
C4.4, P22.30
Brombin, Chiara
P12.2
Beyea, Jan
C25.3
Brønnum, Peter
P13.5
Beyersmann, Jan
C32.5, P22.3, P22.13
Brouste, Véronique
C32.3
Bhambra, Jasdeep K
P10.1
Bruyneel, Luk
P13.1
Biard, Lucie
P22.25
Bryan, Susan
C13.4
Billingham, Lucinda
P20.17, P22.22
Buchan, Iain
C12.4
Binder, Harald
C3.2, C10.3, C10.4, C19.4, C32.1
Buchholz, Anika
C32.1
Binder, Nadine
C15.1
Buckley, Michael
C29.4
Binquet, Christine
P22.14
Bujkiewicz, Sylwia
P15.6
Birch, Colin
C12.3
Bullinger, Lars
C3.2
Bishop, Christopher
C12.4
Burton, Paul R
C3.4, P15.7
Biswas, Atanu
C14.4, C21.4
Burzykowski, Tomasz
C17.1
Bjerre, Lise
C7.1
Buyze, Jozefien
I5.3, C9.4
Bjornstad, Jan
C12.1
Cadarso-Suárez, Carmen
C28.2, P16.5
Blanche, Paul
C20.5
Caddick, Katharine
P21.4
Blazeby, Jane
P21.4
Calara, Jozer
P21.4
B.N., Murthy
P4.7
Calza, Stefano
P2.1
Boda, Krisztina
P18.3
Campbell, Michael
I2.1, P8.9
Bodin, Julie
P3.7
Candel, Math
C14.1, P4.13
Boehringer, Stefan
C34.5
Candy, Bridget
P20.22
Boelaert, Marleen
P15.2
Cannon, Jeff
P10.3
Boers, Kim
C1.4
Caria, Maria Paola
P3.6
Boessen, Ruud
P1.5
Carlin, John
C25.5, C27.4, P11.7
Bogenrieder, Thomas
P4.6
Carpenter, James
C25.4
Bollerslev, Jens
C13.2
Carretta, Elisa
P8.5
Bongers, Mathilda
P22.29
Carr, Matthew
P3.8
Bonnett, Laura
P20.3
Carroll, S
P21.3
Boroomandnia, Nasrin
P14.7
Carstensen, Bendix
C4.1
Bottle, Alex
I2.3
Carter, Lesley-Anne
P21.10
Boucquemont, Julie
P22.27
Caruana, Emmanuel
P3.12
Bouillaud, Emmanuel
P4.21
Casey, Neil
P3.3
Boulesteix, Anne-Laure
C19.4
Chadha-Boreham, Harbajan
P6.3
Bouzala, GA
C8.5
Chalder, Trudie
P3.1
Bowden, Jack
C9.2, C29.5, P1.4
Chambers, Larry
C18.1
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
135/156
Chappell, Richard
C19.1
Cro, Suzie
P4.12
Charlett, André
P18.2
Crowther, Michael
C16.2, C26.2, P12.4, P22.16
Chen, Chia-Min
P1.3
Culliford, David
P21.5
Chen, Ming-Hui
P21.8
Custovic, Adnan
C12.4
Chen, Yuh-Ing
P17.1
Cyr, Diane
P7.5
Chevret, Sylvie
P3.12, P17.3, P20.10, P22.10
Czene, Kamila
C15.3, C27.1, C27.2
Chiang, Chieh
P21.4
Daikos, G
C8.5
Chie, Wei-Chu
P21.4
Dalveit, Anne Kjersti
C15.4
Chin, Peter
C4.4
Damgaard, Katrine
P8.2, P8.3
Chiu, Herng-Chia
P21.
Danesh, John
P15.9
Chi, Yunchan
P1.3
Danger, Richard
P20.7
Chong, Wai Fung
C11.4
Darabi, Hatef
P20.15
Choodari-Oskooei, Babak
P1.4, P20.1
Dartigues, Jean-Francois
C31.3
Christensen, Anette Luther
P21.6
Davidian, Marie
C5.3
Christensen, Kaare
MS1.5
David, Marie-Pierre
P18.1
Ciotti, Emanuele
P8.5
de Blasio, Birgitte Freiesleben
I1.1
Claesen, Jurgen
C17.1
Debray, Thomas
C24.2, P20.2, P22.22
Clark, AL
P21.3
De Campos, Cassio P
P20.23
Cleland, JGF
P21.3
Decarli, Adriano
P8.8
Clements, Mark
P21.7
Declerck, Dominique
C23.3
Close, Nicole
C14.5
Deeks, Jon
P20.17
Cnattingius, Sven
MS1.6
de Hoop, Esther
C30.3
Coker, Bola
P11.3, P16.2
Dejardin, David
C2.2, P22.11
Collignon, Olivier
C15.5
de Klerk, Nicholas
C23.5
Collins, Gary
P4.30
de Kort, Wim
P20.5
Collins, Peter
P21.4
Del Greco M., Fabiola
P3.11
Commenges, Daniel
C8.2, C19.2
Del Rio Vilas, Victor
C12.3
Congdon, Peter
C12.2
de Luna, Xavier
C9.3, P14.3
Cook, Andrea
MS2.2
de Melker, Hester
P21.13
Cooper, Ben
C26.3
Denaxas, Spiros
P14.6
Cooper, Matthew
C23.5
den Heijer, Martin
P7.2
Cooper, Nicola J
P15.6
De Nisi, Martina
P7.4
Copas, Andrew
C25.2, C31.5, P11.6
Denison, Fiona
P19.7
Cortés, Jordi
P6.3
de Ruysscher, Dirk
P22.29
Costantini, Anna
P21.4
De Rycke, Yann
P4.23, P21.12, P22.25
Cottenet, Jonathan
P22.14
Descatha, Alexis
P3.7
Coupe, Veerle
P22.29
De Silvestri, Annalisa
P15.4
Crainiceanu, Ciprian
I3.2
Desjardins, Laurence
P22.25
Crawford, Paul
P4.20
Dethlefsen, Claus
P21.6
Crispino-O'Connell, Gloria
P4.20
de Uña-Álvarez, Jacobo
C29.1, P22.5
136/156
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
Devaux, Yvan
C15.5
Esnaola, Mikel
C17.4, P10.4
Devesa, Susan S
C32.2
Etard, Jean-François
P22.24
Dewanji, Anup
C4.3
Eulenburg, Christine
P22.6
de Wreede, Liesbeth C
C4.5
Faghihzadeh, Soghrat
P14.7
Díaz Corte, Carmen
P22.5
Fankhauser, Stefan
P4.14
Dickman, Paul
C6.2, C22.2
Didelez, Vanessa
C1.1
Farrington, Paddy
MS2.5, MS2.6, C8.1, C8.4,
C33.4, P18.2
Diggle, Peter
C16.4
Felsoci, Marian
C4.2, P19.6
Dillon, Catherine
P4.27
Ferenci, Tamás
P21.18
DiMaio, Rebecca
P4.20
Ferlini, Marco
P15.4
Ding, Yew Yoong
C11.4
Ferraroni, Monica
P8.8
Di Serio, Clelia
P12.2
Feuk, Lars
C17.3
Diya, Luwis
C15.3, C30.5
Field, David
P20.13
Dobson, J
C34.4
Fischer, Martina
C28.4
Doherty, Dorota
P10.3
Fleischer, Frank
P4.6
Dolling, David
C31.5
Forbes, Andrew
C5.2, C14.3, P4.28
Dolovich, Lisa
C18.1
Forbes, Catherine
P4.28
Domingo-Salvany, Antònia
C27.5, P21.15
Ford, Ian
C5.4, P19.8
Donachie, Paul HJ
P19.9
Fortiana, Josep
P21.15
Doughty, RN
C34.4
Foster, Jared
C18.2
Douiri, Abdel
P16.1, P21.1
Fotheringham, James
I2.1, P8.9
Doussau, Adelaide
C2.5
Foucher, Yohann
C7.3, C24.1, P20.7
Draper, Elizabeth
P20.13
Fraser, Christophe
I1.3
Drösler, Saskia
P8.7, P8.10
Friede, Tim
C2.1, P1.8, P19.2
Dunn, David
C31.5
Frigessi, Arnoldo
I6.2, C23.4
Dunn, Graham
C30.2, P3.2
Frøslie, Kathrine Frey
C13.2
Durkalski, Valerie
P4.27
Funatogawa, Ikuko
C29.3
Dusek, Ladislav
P8.4, P22.9
Funatogawa, Takashi
C29.3
Eckert, Benjamin
C4.4
Furstova, Jana
P22.7
Egberts, Antoine CG
P1.5
Galanti, Maria Rosaria
P3.6
Eide, Geir Egil
P19.1
Garcea, Domenico
P8.5
Eijkemans, Marinus JC
C1.5
Garthwaite, Paul
C33.4, P18.2
Eilers, Paul
C13.4, C31.1, C31.2
Gasparrini, Antonio
C34.4, P21.17
Elfakir, Anissa
C34.2
Gaus, Wilhelm
P4.5
el Galta, Rachid
P4.18
George, Julie
P14.6
Elm, Jordan
P4.27
Gerds, Thomas A
C6.3, C20.4, P20.20
Eloranta, Sandra
C22.2
Gerke, Oke
C18.5
Emsley, Richard
C30.2, P3.2
Gerlinger, Christoph
P4.2
Enki, Doyo
P18.2
Geskus, Ronald
Escobar, Antonio
P20.19, P20.24
C24.5, C26.5, P12.1, P14.5, P22.17
Giard, Caroline
P21.12
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
Gilet, Hélène
P6.2, P10.2
Harrison, Mark J
P15.6
Giral, Magali
C7.3, C24.1
Hatzakis, A
C8.5
Gjerdevik, Miriam
C34.1
Häusler, Gabriele
P21.2
Glad, Ingrid K
I6.2
Haverstock, Dan
P4.26
Gladstone, Primrose Beryl
P4.17
Hazelton, Martin
C21.5
Gleiss, Andreas
P21.2
Heaton, Nigel
P21.4
Glimm, Ekkehard
C33.5
Hehlmann, Ruediger
P22.23
Godang, Kristin
C13.2
Heinze, Georg
C25.3, P22.27, P22.28
Goeman, Jelle
C11.2, C28.5
Heinzl, Harald
P6.3
Goetghebeur, Els
I5.3, C1.3, C3.5, C9.4
Hejblum, Boris
C17.2
Goldsmith, Jeff
I3.2
Helgeland, Jon
P8.2, P8.3
Goldsmith, Kimberley
P3.1
Hellton, Kristoffer Herland
C23.3
Gonzalez, Juan R
C17.4, P10.4
Hemingway, Harry
P14.6
Gonzalez, Nerea
P20.19, P20.24
Henderson, Robin
C16.4
Gorst-Rasmussen, Anders
P21.11
Hendrickx, John
P16.3
Götte, Heiko
C20.1
Heng, Bee Hoon
C11.4
Gourmelen, Julie
P7.5
Henriksen, Tore
C13.2
Graf, Alexandra
P1.2
Herich, Lena
P13.4
Gragn, Doyo
C33.4
Herquelot, Eléonore
P3.7
Greenop, Kathryn
C23.5
Heuch, Ivar
C34.1
Grobbee, Diederick E
P1.5
Hicks, Andrew
P3.11
Groenen, Patrick
C31.1
Hieke, Stefanie
C3.2
Groenwold, Rolf HH
P1.5, P19.5
Hielscher, Thomas
C3.2
Grönberg, Henrik
P21.7
Hiemeyer, Florian
P4.2
Grosch, Kai
P4.21
Higgins, Julian
P15.1
Grotmol, Tom
C32.2
Hinchliffe, Sally R
C26.2, P22.15, P22.21
G. Tahoces, Pablo
P16.5
Hobkirk, J
P21.3
Guedj, Jérémie
C8.3
Hoes, AW
P19.5
Guéguen, Alice
P3.7, P7.5
Hoffmann, Verena Sophia
P20.21
Gueorguieva, Ralitza
C16.5
Hof, Michel
P3.4, P19.4
Gulsvik, Amund
P19.1
Høilund-Carlsen, Poul Flemming
C18.5
Gurrin, Lyle
C21.5
Holland, Fiona
P21.10
Haaland, Øystein
P19.10
Hopewell, Sally
P4.30
Ha, Catherine
P3.7
Hornbuckle, Janet
P10.3
Hadjihannas, L
C8.5
Hosseini, Sayed Mohsen
P7.1
Häggström, Jenny
C9.3
Houwing-Duistermaat, Jeanine J
C34.5
Hahné, Susan
I1.2, P21.13
Howe, Andrew
P4.20
Haines, Linda
C33.1
Hsiao, Chin-Fu
P21.4
Hakulinen, Timo
MS1.2
Hsu Schmitz, Shu-Fang
P4.14
Hamberg, Paul
C2.2
Huang, Chi-Shen
P17.1
137/156
138/156
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
Huet, F
MS2.5
Kejžar, Nataša
C22.5
Humphreys, Keith
P20.15
Kenward, Michael G
C5.3, C29.2, C34.4, P22.26
Iacobelli, Simona
C4.1
Kieltyka, Agnieszka
P14.8, P21.19
Ickstadt, Katja
I6.1
King, Michael
P20.22
Ieva, Francesca
P8.1
Klersy, Catherine
P15.4
Ingel, Katharina
C22.3
Klugkist, Irene
P21.8
Jackson, Christohper H
P10.1
Klungel, Olaf H
P1.5
Jackson, Dan
P11.1
Kneib, Thomas
C28.2
Jackson, Lisa
MS2.2
Knol, Mirjam
P21.13
Jackson, Victoria
P3.11
Knol, Mirjam J
P1.5
Jacqmin-Gadda, Hélène
C20.5, C31.3
Knorr, Silke
P8.7, P8.10
Jacques, Richard
I2.1, P8.9
Koenig, Franz
P1.2
Jahn-Eimermacher, Antje
C22.3
Koffijberg, Erik
P20.2
Jaki, Thomas
P4.8
Koffijberg, Hendrik
C24.2
James, Ian
P22.8
Kohl, Maria
P22.28
Jamieson, Sarra
C23.5
Koller, Michael T
C20.4
Janousova, Eva
P20.16, P22.9
Komarek, Arnost
P20.5
Jarkovsky, Jiri
C4.2, P19.6
König, Jochem
C10.3, C10.4
Jelizarow, Monika
C33.2
Krahn, Ulrike
C10.3, C10.4
Jenkner, Carolin
C19.3
Kristoffersen, Doris Tove
P8.2, P8.3
Jensen, Aksel
C6.3
Kropf, Siegfried
C13.1, C33.5
Johansson, Anna
C6.2
Kuendgen, Andrea
P22.18
Johansson, Anna CV
C17.3
Kuhlmann-Berenzon, Sharon
P18.4
Johnston, Robert L
P19.9
Kunz, Cornelia Ursula
C2.1
Joly, Pierre
P22.19
Kurtinecz, Milena
P15.8
Jones, Elinor
P3.11, P15.7
Kvaale, Gunnar
C15.4
Jones, Hayley E
I2.2
Lado, María J
P16.5
Josefsson, Maria
P14.3
Lair, Marie-Lise
C15.5
Journy, Neige
P20.14
Lambe, Mats
C6.2
Jung, Klaus
P19.2
Lambert, Jerome
P20.10
Kaczorowski, Janusz
C18.1
Kahan, Brennan C
P4.11, P4.12
Lambert, Paul
C6.2, C22.2, C26.2, P12.4,
P22.15, P22.16, P22.21
Kaiser, Thomas
C21.3
Landau, Sabine
P3.5
Kalaycioglu, Oya
C25.2
Langaas, Mette
C21.1
Kalina, Jan
C28.1
Lange, Theis
C9.1
Kariman, Noorosadat
P14.7
Langholz, Bryan
I4.1
Karlsson, Robert
P21.7
Lanius, Vivian
P4.16
Kasparek, Tomas
P20.16
Laouénan, Cédric
C8.3
Kasza, Jessica
C11.1
Larsen, Klaus Groes
P13.2
Katina, Stanislav
C13.3, P9.1
Larsen, Torben Bjerregaard
P13.5
Laubender, Ruediger Paul
P17.2
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
Laurent, Olivier
P20.14
Looman, Caspar
P20.18
Laurier, Dominique
P20.14
Lorand, Daniel
C14.2
Lauseker, Michael
P22.18
Lorent, Marine
C24.1
Lawlor, DA
P19.5
Lorenz, Eva
C19.3
Lawo, John-Philip
P4.10
Luime, Jolanda
P20.4
Lazaro, Santiago
P20.19
Luna del Castillo, Juan de Dios
P6.1
Leask, Kerry
C33.1
Lundbye-Christensen, Søren
P13.5, P21.11, P21.6
le Cessie, Saskia
C1.4, P7.2
Lund, Eiliv
C6.1, C31.4
Lee, Amanda J
P19.7
Lydersen, Stian
C21.1
Lee, Donghwan
C32.4
Macgrogan, Gaëtan
C32.3
Lee, J Jack
C2.4
Magnusson, Patrik KE
C17.3
Lee, Katherine
C25.5, C27.4
Mahande, Michael Johnson
C15.4
Lee, Myeongjee
C27.1, C27.2
Maheswaran, Ravi
I2.1
Lee, Woojoo
C32.4, P2.2
Mahboubi, Amel
P22.14
Lee, Youngjo
C12.1, C32.4, P14.2
Majek, Ondrej
P8.4
Leffondré, Karen
P22.27, P22.28
Majewska, Renata
P14.8, P21.19
Le Goff, Mélanie
P22.19
Mandal, Saumen
C14.4
Lemij, Hans
C13.4
Mandel, Micha
C4.4
Leon, Bobrowski
C28.3
Mander, Adrian
P1.6
Le Quan Sang, Kim-Hanh
P6.4
Manktelow, Bradley
P8.6, P20.13, P22.15
Manongi, Rachel
C15.4
Lesaffre, Emmanuel
C2.2, C3.3, C13.4, C16.1, C20.2,
C23.3, C30.5, C31.1, C31.2,
P13.1, P14.2, P15.2, P20.4,
P20.5, P22.11
Mansmann, Ulrich
C33.2
Maracy, Mohammad Reza
P7.1
Li, Baoyue
C30.5, P13.1
Mariani, Pascale
P22.25
Li, Lingling
MS2.3
Marioni, Riccardo
C31.3
Lie, Rolv Terje
C15.4, P19.10
Mariosa, Daniela
P12.5
Li, Gang
P4.15, P4.29, P15.8,
Marque, Sebastien
C34.2
Lim, T K
C11.4
Marryat, Louise
P7.3
Lin, Haiqun
C16.5
Marson, Anthony
P20.3
Lin, Lanjia
C14.2
Martínez-Camblor, Pablo
P22.5
Lin, Yunzhi
C19.1
Martinez, Carlos
P22.1
Li, Qiyu
P4.14
Masca, Nicholas
C3.4
Liquet, Benoit
C19.2
Masenga, Gileard
C15.4
Littnerova, Simona
C4.2, P19.6
Maskell, Joe
P21.5
Liu, Aihua
P21.14
Matawie, Kenan
P13.1
Liu, Hanhua
C30.2
Mathoulin-Pelissier, Simone
C20.3, C32.3
Liu, Jen-pei
P4.1
Matthews, Fiona E
C31.3, P10.1
Liu, Tianqing
C21.1
Maucort-Boulch, Delphine
C22.5
Li, Yuanzhang
C21.1
Mauguen, Audrey
C20.3, C32.3
Lloyd, Suzanne
C5.4
Mayer, Jiri
P22.9
139/156
140/156
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
Mazroui, Yassin
C32.3
Murawska, Magdalena
C16.1
McCallum, Emma
P1.6
Murray, Heather
P19.8
McCanny, Paul
P4.20
Musoro, ZJ
C24.5, P12.1
McEneaney, David
P4.20
Mutsvari, Timothy
C23.3
McKinnon, Elizabeth
P19.3, P22.8
Nanni, Oriana
P8.5
McNamee, Roseanne
P3.8, P21.10
Nardi, Alessandra
C27.5
Meijer, Rosa
C28.5
Nasiri, Malihe
P14.7
Menten, Joris
P15.2
Nasserinejad, Kazem
P20.5
Mentré, France
C8.3, P22.12
Nelson, Jennifer C
MS2.2
Mercier, Francois
C4.4
Neuenschwander, Beat
C14.2
Metcalfe, Chris
C5.5
Ng, Danice
P11.1
Metzger, Marie
P22.27
Nguyen, Cattram
C25.5
Meyer, Haakon E
C6.4
Nguyen, Michael
MS2.1
Miklik, Roman
C4.2
Nicholl, Jon
I2.1, P8.9
Mikoshiba, Naoko
P21.4
Nieboer, Daan
P20.2, P20.25
Milanzi, Elasma
C5.3
Niebuhr, David
C21.1
Milne, Elizabeth
C23.5
Nijman, Ruud G
P20.25
Minelli, Cosetta
P3.11
Njagi, Edmund Njeru
C29.2
Mitrani-Gold, Fanny
P15.8
Noh, Maengseok
P14.2
Mmbaga, Blandina Theophil
C15.4
Nonogi, Hiroshi
P4.19
Mockenhaupt, Maja
C7.4
Noordzij, JP
P20.17
Modi, Neena
C11.2
Noufaily, Angela
C33.4, P18.2
Mogensen, Ulla B
P20.20
Novianti, Putri
P15.5
Mohd Din, Siti Haslinda
P20.4
Nuel, Gregory
C31.4, P6.4
Molas, Marek
P14.2, P20.4
Nyári, Tibor
P18.3
Mol, Ben Willem
C1.4, P22.17
Nyberg, Lars
P14.3
Molenberghs, Geert
C29.2, C5.3, P18.1
Oakes, David
C18.3
Mollema, Liesbeth
P21.13
Oberije, Cary
P22.29
Moons, Karel
C24.2, P19.5, P20.2, P22.22
Obure, Joseph
C15.4
Morabito, Alberto
P4.22
Ofuya, Mercy
P15.3
Moradpour, Farhad
P7.1
Ohneberg, Kristin
P22.13
Moran, John L
C11.1
Ohuma, Eric
P20.9
Moreira, Carla
C29.1
Omar, Omar
P4.30
Morgagni, Paolo
P8.5
Omar, Rumana Z
C24.4, C25.2, P20.11, P20.22
Morgan, Ann
C34.5
Oostenbrink, Rianne
P20.25
Morris, Tim
P4.11, P11.8
O'Neill, Phil
I1.2
Mrozek-Budzyn, Dorota
P14.8, P21.19
Orellana, Liliana
I5.2
Muche, Rainer
P4.5
Ou, Phalla
P6.4
Müller, Tina
P4.10
Overvad, Kim
P21.6, P21.11,
Mundy, Linda
P4.29, P15.8
Ozol-Godfrey, Ayca
P11.2
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
Paganoni, Anna
P8.1
Prague, Mélanie
C8.2
Palesch, Yuko
P1.1, P4.4, P4.27
Pramana, Setia
C17.3
Palomar, Mercedes
C26.3
Pramstaller, Peter P
P3.11
Pal, Rupam Ranjan
C14.2
Prevost, Toby
C29.5, P3.3
Pantazis, Nikos
P21.9, P21.16, P22.26
Pripp, Are Hugo
C30.4
Paoletti, Xavier
C2.5
Proust-Lima, Cécile
C16.3, C19.2, C31.3
Papachristofi, Olympia
C18.4
Psichogiou, M
C8.5
Pap, Ákos Ferenc
P11.4
Putter, Hein
C4.5
Pardo, MC
P14.1
Pye, Karen
C2.3
Parenica, Jiri
C4.2, P19.6
Quantin, Catherine
MS2.5, P22.14
Parker, Robert
P4.24
Quintana, Jose Maria
P20.19, P20.24
Park, Eunsik
P1.9
Qvigstad, Elisabeth
C13.2
Parmar, Mahesh KB
P1.4, P20.1
Rahman, Shafiqur
P20.11
Parsons, Nicholas
C2.1
Raisaro, Arturo
P15.4
Pavlik, Tomas
P22.9
Raja, Edwin Amalraj
P19.7
Pavlou, Menelaos
P11.6
Ramirez, Guillermo
P22.1
Pawitan, Yudi
C3.1, C17.3, C32.4, P2.1, P2.2
Ramsay, Craig
P4.28
Peacock, Janet
P15.3
Rancoita, Paola M V
P12.2, P20.23
Pebody, Richard
C8.1
Rapsomaniki, Eleni
P14.6, P15.9
Pedersen, Nancy L
MS1.4
Rauch, Geraldine
P22.3
Pellicori, P
P21.3
Ravelli, Anita
P19.4
Pertile, Riccardo
P7.4
Rebora, Paola
I4.2, C27.1, C27.2
Pfirrmann, Markus
P22.23
Regnault, Antoine
P6.2, P10.2
Phillips, Alan
C30.1
Rehal, Sunita
P4.12
Pibouleau, Leslie
P17.3
Reiczigel, Jenö
P21.18
Pichler, Irene
P3.11
Reilly, Marie
C15.3, C27.1, C27.2
Pickles, Andrew
P3.1
Reimnitz, Peter
P4.16
Piffer, Silvano
P7.4
Remontet, Laurent
P22.14
Piperno-Neumann, Sophie
P22.25
Resche-Rigon, Matthieu
P3.12, P22.10
Pirracchio, Romain
P3.12
Richardson, Sylvia
I6.2, C12.3
Plancade, Sandra
C31.4
Riecansky, Igor
P9.1
Plancher, Corine
P22.25
Rietbergen, Charlotte
P21.8
Ploner, Meinhard
C25.3
Riley, Richard
C24.2, P20.17, P22.22
Pocock, Stuart
C7.2, C34.4
Ring, Arne
P14.4
Pohar Perme, Maja
C22.1
Rippe, Ralph
P7.2
Pokhrel, Arun
P22.20
Rivadeneira, Fernando
C31.1
Poon, Ronnie
P21.4
Rizopoulos, Dimitris
C16.1, C29.2
Poppe, K
C34.4
Robins, James
I5.2
Portengen, Lützen
C27.3
Roca-Pardiñas, Javier
P16.5
Pradhan, Biswabrata
C4.3
Rockova, Veronika
C3.3
141/156
142/156
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
Rodriguez-Girondo, Mar
C28.2
Schwarz, Daniel
P20.16
Roes, Kit CB
C1.5, P1.5, P15.5
Scotti, Valeria
P15.4
Rogers, Jennifer
C7.2
Scudelle, Luigia
P15.4
Røislien, Jo
C13.2
Seaman, Shaun
C9.2, C25.1, C25.4, P11.6
Roldán Nofuentes, Jose Antonio
P6.1
Seaton, Sarah
P8.6, P20.13, P22.15
Romaniuk, Helena
P11.7
Sekula, Peggy
C7.4
Romio, Silvana
P3.6
Séne, Mbéry
C16.3
Rondeau, Virginie
C20.3, C32.3
Senn, Stephen
IP1, C15.5, C30.1
Roquelaure, Yves
P3.7
Sermeus, Walter
C30.5, P13.1
Rosendaal, Frits
P7.2
Shah, Anoop
P14.6
Rosenheck, Robert
C16.5
Shanyinde, Milensu
P4.30
Roth, Katrin
P4.16
Sharpe, Michael
P3.1
Rotnitzky, Andrea
I5.2
Sharples, Linda
C18.4
Røysland, Kjetil
C1.2
Sheehan, Nuala A
C3.4, P3.11, P15.7
Royston, Patrick
P1.4, P4.9, P11.8, P20.1
Shkedy, Ziv
P18.1
Ruberg, Stephen
C18.2
Siannis, Fotios
P21.9
Rücker, Gerta
C10.1
Siemiatycki, Jack
P21.14
Ruijs, Helma
I1.2
Sikorska, Karolina
C31.1
Rutherford, Mark
P22.16
Simon, David
P22.30
Safavi, Nastaran
P14.7
Simpson, Angela
C12.4
Salim, Agus
I4.3, C3.1, P2.1
Singh, Krishan
P4.29, P15.8
Samoli, Evi
P21.16
Singh, Rajvir
P22.4
Samuelsen, Sven Ove
C6.4, C6.5
Sitta, Rémi
P3.7, P7.5
Sanchez-Niubo, Albert
C27.5, P21.15
Skaug, Knut
P19.1
Sangalli, Laura
I3.3
Skinner, Jason
C17.2
Santhakumaran, Shalini
C11.2
Skjærven, Rolf
MS1.1, MS1.7
Sarasqueta, Cristina
P20.19, P20.24
Small, Robert D
P11.2
Sauerbrei, Willi
C19.3, C19.4, C32.1, P4.9
Smeeth, Liam
P14.6
Sauzet, Odile
P15.3
Smith, Karen
P4.25
Savignoni, Alexia
P21.12
Smits, Gaby
P21.13
Scalia-Tomba, Gianpaolo
I1.1, C27.5
Snell, Kym
P22.22
Schemper, Michael
C15.2, P21.2
Solari, Aldo
C11.2
Scherjon, Sicco
C1.4
Solomon, Patricia J
C11.1
Schetelig, Johannes
C4.5
Sørensen, Helle
I3.1
Schlenk, Richard F
C3.2
Sørensen, Øystein
C23.4
Schmelter, Thomas
P4.2
Sparrow, John M
P19.9
Schmidli, Heinz
P1.8
Speed, Doug
I6.3
Schneider, Simon
P1.8
Speed, Terry
IP2
Schumacher, Martin
C3.2, C7.4, C15.1, C26.3, C32.5,
P22.13
Spinar, Jindrich
C4.2, P19.6
Spiegelhalter, David J
I2.2
Schürmann, Christoph
C21.3
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
143/156
Stahl, Daniel
C24.3
Timmis, Adam
P14.6
Stallard, Nigel
C2.1
Tinelli, Carmine
P15.4
Stare, Janez
C22.5
Titterington, Mike
C19.5
Steegers-Theunissen, Regine
C31.2
Tjomsland, Ole
P8.2
Stefanic, Martin
P4.6
Tjønneland, Anne
P21.11
Sterne, JAC
P19.5
Todd, Susan
C2.1
Steyerberg, Ewout W
P20.2, P20.8, P20.12, P20.18,
P20.25
Torres-Martin, Juan V
P6.3
Touloumi, Giota
C26.4, P3.9, P21.16, P22.26
Stirnemann, Jérôme
P22.12
Touraine, Célia
P22.19
Stocken, Deborah
P22.22
Trandafir, Camelia
C14.4
Støer, Nathalie C
C6.4, C6.5
Trébern-Launay, Katy
C7.3
Stott, David
C5.4
Tretli, Steinar
C32.2
Stratton, Irene M
P19.9
Trinquart, Ludovic
P21.4
Strelkowa, Natalja
P4.6
Trneny, Marek
P22.9
Sturtz, Sibylle
C10.2
Tsiatis, Anastasios A
C5.3
Suchanek, Stepan
P8.4
Tsonaka, Roula
C34.5
Sun, Hong
P4.14
Tsoumanis, Achilleas
P18.4
Suo, Chen
P2.1
Tubert-Bitter, Pascale
MS2.5, P21.12
Sutton, Alex J
P15.6
Tudur Smith, Catrin
P20.3
Swihart, Bruce
I3.2
Turner, Robin M
P4.3
Swinkels, Sophie
P16.3
Unkel, Steffen
C8.1, C8.4
Symmons, Deborah P M
P15.6
Ursin, Giske
MS1.3
Sypsa, Vana
C8.5
Uyl-de Groot, Carin
P22.29
Tabor, Bruce
C29.4
Vach, Werner
C18.5, P4.17, P10.5
Taiyari, Khadijeh
C24.4
Vaillant, Michel
C15.5
Taubel, Jorg
P14.4
Valberg, Morten
C32.2
Taylor, Jeremy
C18.2
Valenta, Zdenek
C28.1, P22.7
Taylor-Robinson, David
C16.4
Valsecchi, Maria Grazia
C27.2
Teerenstra, Steven
C30.3
Van Belle, Vanya
P20.18
Teo, Shu Mei
C3.1
van Boven, Michiel
I1.2
Thabane, Lehana
C18.1, P5.1
van Bockxmeer, Frank
C23.5
Thalabard, Jean-Christophe
P6.4
van Breukelen, Gerard
C14.1, P4.13
Therneau, Terry
C22.4
Van Calster, Ben
P16.4, P20.12, P20.18
Thiebaut, Rodolphe
C2.5, C8.2, C17.2
van den Bor, Rutger M
C1.5
Thompson, John R
P3.11, P15.6
Van den Heede, Koen
C30.5, P13.1
Thompson, Lucy
P7.3
van der Baan, Frederieke H
P1.5
Thompson, Simon
P3.3, P15.9
van der Klis, Fiona
P21.13
Thoresen, Magne
C23.3, C23.4
van der Tweel, Ingeborg
P15.5
Tibaldi, Fabian
C5.1, P18.1
van der Wal, Willem M
C1.5
Tilling, K
P19.5
VanderWeele, Tyler
I5.1
Timmerman, Dirk
P16.4, P20.12
144/156
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
van der Woude, Diane
C34.5
Wei, Yinghui
P15.1
van Geloven, Nan
P22.17
Weldeselassie, Yonas G
MS2.6
Van Hoorde, Kirsten
P20.12
Weng, Yanqiu
P4.4
Van Huffel, Sabine
P16.4, P20.12
Werft, Wiebke
C28.4
Van Keilegom, Ingrid
C29.1
Weschenfelder, Ann-Kathrin
P8.7
van Klaveren, David
P20.8
Wetherington, Jeffrey
P4.29, P15.8
Van Oirbeek, Robin
C20.2
Weyermann, Maria
P8.7, P8.10
Van Rompaye, Bart
C3.5
Wharton, Rose
P4.30
Vansteelandt, Stijn
I5.3, C1.3, C9.1
Wheeler, Graham
P1.7
van Zwet, Erik
C11.2
Whitaker, Heather J
MS2.6, C8.1, C8.4
Varewyck, Machteld
I5.3, C1.3
Whitehead, Anne
C2.3
Vargha, Péter
C23.1
Veierød, Marit Bragelien
C13.2, C32.2
White, Ian
C5.2, C9.2, C25.1, C25.4, P11.1,
P11.8, P15.9
Velicko, Inga
P18.4
White, Jane
P7.3
Velten, M
MS2.5
White, Peter
P3.1
Verbeke, Geert
C5.3, C29.2
Wieseler, Beate
C21.3
Verde, Pablo
P15.10
Wiklund, Fredrik
P21.7
Vergouwe, Yvonne
P20.2, P20.8, P20.12, P20.18,
P20.25
Willemsen, Sten
C31.2
Williamson, Elizabeth
C5.2, C21.5
Vermeer, Koen
C13.4
Williamson, Paula
P20.3
Vermeulen, Roel
C27.3
Wilson, Philip
P7.3
Vervölgyi, Elke
C21.3
Wilcox, Allen J
MS1.7
Vervölgyi, Volker
C21.3
Wisniewska, Dominika
P11.2
Verweij, Jaap
C2.2
Witteman, Jacqueline C M
C20.4
Vigan, Marie
P22.12
Wittkowski, Knut
C17.5
Vilar, Jose A
P20.6
Woelber, Linn
P22.6
Vilar, Juan M
P20.6
Woertman, Willem
C30.3
Vilgrain, Valerie
P21.4
Wolbers, Marcel
C20.4
Vourli, Georgia
P3.9
Wolfe, Rory
P11.5
Voysey, Merryn
P4.30
Wolfsegger, Martin
P4.8
Waaijenborg, Sandra
P21.13
Wolkewitz, Martin
C26.3
Wagner, Daniel
C15.5
Wood, Angela
P15.9
Wallinga, Jacco
I1.2, P21.13
Worthington, Jane
C34.5
Walter, Stephen D
P4.3
Wynants, Laure
P16.4
Wang, Duolao
P14.4
Xiao, Yongling
C7.1
Wang, Jixian
P4.21
Xu, Stanley
MS2.4
Wang, Sijian
C19.1
Yauy, Kevin
P6.4
Wang, Weiwu
P12.5
Ye, Weimin
P12.5
Wang, Yanzhong
C19.5
Yokoyama, Hiroyuki
P4.19
Wason, James
P1.6
Yonemoto, Naohiro
P4.19
Wegscheider, Karl
P13.4, P22.6
Younger, Jaime
P22.2
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
145/156
Yuasa, Haruyuki
P4.19
Zhang, J
P21.3
Yu, Ly-Mee
P4.30, P22.31
Zhao, Wenle
P1.1, P4.4, P4.27
Yu, Menggang
C19.1
Zins, Marie
P7.5
Yu, Onchee
MS2.2
zu Eulenburg, Christine
P13.4
Zangogianni, Marina
P21.9
Zuma, Khangelani
C7.5
Zavoral, Miroslav
P8.4
Zwiener, Isabella
C20.1
Zayeri, Farid
P14.7
Zwinderman, Aeilko
C24.5, P3.4, P12.1, P19.4, P22.17
146/156
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
Information for Presenters
Instruction for oral presentations
You can find the session in which your presentation is scheduled on the website of the conference and in the conference program.
Please make your presence known to the session chair at least 10 minutes before your session starts and be present during the
entire session in which your presentation is scheduled.
Length of presentation
– Invited presentations have 30 minutes available, including time for questions – i.e. the talk should be around 25 minutes.
– Contributed presentations have 18 minutes available, including time for questions – i.e. the talk should be around 15 minutes.
– Plenary speakers have specific time limits.
Preparation of presentation
The PCs used for the presentations are running Windows Vista or Windows 7. To ensure optimal software compatibility and support,
presentations must be in one of the following formats:
– Microsoft Office PowerPoint (version 97-2003, 2007 or 2010) for PC.
– Any Adobe Acrobat PDF documents (newer versions preferred)
We strongly recommend that you use standard fonts provided by Microsoft Office as this will guarantee full support on our computer
systems. For PowerPoint presentations we recommend that you also bring a PDF version as a backup if possible. If you have video files in your PowerPoint presentation, they must be in a format supported natively under Windows Vista/Windows 7. The preferred format is Windows Media Video (.WMV). QuickTime (.MOV) is not supported!
Transfer of presentation
All invited and contributed oral presentations will use computers provided by the conference organization. No personal laptop or notebook computers will be allowed for invited and contributed oral presentations (exceptions may be made in “emergency cases”).
Course holders and plenary session speakers may use their own computers.
Speakers should visit the Speaker Ready Room at least two hours (preferably one day) before their scheduled presentation time.
Look for direction signs to the Speaker Ready Room in the Grieg Hall. Hours of operation are as follows:
Sunday, 19 August: 16:00 – 20:00
Monday, 20 August: 08:00 – 17:00
Tuesday, 21 August: 08:00 – 13:00
Wednesday, 22 August: 08:00 – 17:00
Thursday, 23 August: 08:00 – 12:00
Authors should clearly identify themselves and specify the room, date and time of presentation. We suggest that you use the
following name convention for your presentation: Day_Session_Number_Family name. For instance Mon_C06_3_Smith (Monday,
contributed session 6, talk number 3, by Smith).
If your presentation has supporting video and audio files, please remember to include these files along with your presentation. Gather
all your files in one folder and make a test run of your presentation from this folder before you submit it. We recommend that you
bring a backup copy of your presentation on a memory stick at your presentation.
If you would like to submit your presentation before the conference starts, you can send it by e-mail to konferanse@avabcac.com no later than Friday 17 August 2012. Use as subject of the e-mail: "ISCB33 Day_Session_Number_Family name" (e.g. "ISCB33 Mon_C06_3_Smith").
Equipment in the presentation room
All conference rooms are equipped with the following:
– One computer fed from the central server
– One large screen
– One LCD projector
– One podium microphone
– One wireless lavalier microphone
– One laser pointer
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
147/156
Instructions for poster presentations
Posters will be on display in the poster area on level 2 in the Grieg Hall during the entire conference. You should install your poster as
soon as possible upon arrival - the poster boards will be up from Sunday at 13.00. Please put your poster on the poster board marked with your poster number; you will find your poster number in the conference program. A map of the poster placement can be found on
page 155.
During the breaks refreshments are served in the poster area and we encourage all poster presenters to be near their posters during
these breaks. In particular all poster presenters have to be at their poster during the poster session Tuesday, August 21, 10.00-11.00.
Deadline for removal of posters is Thursday 23 August at 13.00. The conference organizer is not responsible for posters not collected
at the end of the conference.
Poster board dimensions (including frames) will be 100 cm in width and 250 cm in height. To ensure good legibility of the poster, we recommend a maximum size of 95 cm (37.4 inches) in width and 100 cm (39.4 inches) in length. Tape for fastening the posters will
be available in the poster area.
Statistics in Medicine Special Issue
People who give invited or contributed presentations (oral as well as poster) are invited to submit a paper for a special issue of
Statistics in Medicine. Deadline for the submission is October 31st 2012. The standard rules and procedures for submission will hold
with clear emphasis on the quality of the paper. See the Author’s guide for further information.
ISCB Awards
Student Conference awards (SCA)
Name – Country – Title – Session
Magdalena Murawska – The Netherlands – Dynamic Prediction Based on Joint Model for Categorical Response and Time-to-Event – C16.1
Michael Crowther – UK – Adjusting for measurement error in baseline prognostic biomarkers: A joint modelling approach – C16.2
Yunzhi Lin – USA – Advanced Colorectal Neoplasia Risk Stratification by Penalized Logistic Regression – C19.1
Conference Awards for Scientists (CAS)
Name – Country – Title – Session
1. Jan Kalina – Czech Republic – Robust Gene Selection Based on Minimal Shrinkage Redundancy – C28.1
2. Kerry Leask – South Africa – Modelling Overdispersion in Wadley's Problem with a Beta-Poisson Distribution – C33.1
3. Péter Vargha – Hungary – Regression toward the mean and ANCOVA in observational studies – C23.1
148/156
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
Acknowledgements
Bergen Tourist Board
Kongress & Kultur
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
149/156
Conference Venue
The conference will be held in Bergen, Norway in the well-known conference and concert hall Grieghallen, Edvard Griegs Plass 1,
5015 Bergen, Norway. The Grieghallen is easily accessible in the city centre, within 10 minutes' walking distance from both the railway station and the central bus station. From most of the recommended hotels you will not have to walk more than 15-20 minutes.
How to get there from the airport
By Bus. There are airport coaches departing every 15 minutes to the city centre during most of the day, and corresponding to flight
arrivals in the evening. The closest airport bus stop is the central bus station. Additionally, there are less expensive local buses,
which do not go directly to the city centre, i.e. they require bus changes. Their timetables can be found at skyss.no.
By Taxi. The price for taxi from the Flesland airport to the conference venue should be about 350 NOK (~45€).
How to get there from the train or bus station
From both the train station and the central bus station it is not more than a 10-minute walk.
150/156
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
General Information
Getting there
By Air
Bergen Airport Flesland (BGO) handles the flights of SAS, KLM, Norwegian Air Shuttle, Widerøe as well as a number of other airlines. It is located 20 km south of the city centre. There is a good connection between the airport and the city centre by buses and taxis. The airport shuttle bus departs every 15 minutes, and the ride is 30 minutes. A one-way ticket is NOK 95 (~12€) and a return ticket NOK 150 (~20€). The return ticket is valid for a month. The bus driver will announce the most important destinations in the city centre. If you ask, your stop will be announced as well. The taxi fare from the airport to the city centre is approximately NOK 350 (~45€). The airport bus and taxis can be paid in the vehicle with a VISA card.
By Land
Train. There is one train line connecting Bergen with Eastern Norway and Oslo. All trains are operated by the Norwegian State Railways (NSB). At nsb.no you can find timetables and book domestic train tickets.
Bus. There are various express bus connections to most places in southern Norway. Operators are Nor-way Bussekspress, Skyss and Fjord1.
By Sea
After years of busy ferry traffic to and from Bergen, these services have unfortunately been reduced to one international connection between Bergen and Hirtshals (Denmark), operated by Fjordline.
Getting around in Bergen
In the city centre of Bergen most distances do not require more than a 20-minute walk.
Otherwise there are good local bus connections from the city centre to most areas in the greater city, as well as a new light rail, whose first line was opened in June 2010 and which will expand during the following years. Maps and timetables for all connections are found at skyss.no. A 90-minute ticket in Bergen costs 27 NOK (~3€). The bus station may be somewhat crowded due to modification work.
Practical Information
Accommodation
Please use the accommodation information on our website (www.iscb2012.info) or contact Gabriele Zenisek (office@kongress.no) at the conference secretariat.
Certificate of Attendance
Certificates of attendance will be available at the registration desk for all participants.
Climate
Due to its coastal location, Bergen has a maritime climate, i.e. mild winters and rather cool summers, as well as quite high precipitation, mostly in the autumn and winter months. Usually July and August are the warmest months in Bergen and not among the most humid. Nevertheless, in Bergen you always have to be prepared for some rain, even in summer.
Currency and Banking
The official currency in Norway is the Norwegian krone (norske kroner, 8 NOK ~ 1€). International credit cards are accepted in most hotels and restaurants, though not necessarily in all shops, but you can take out cash at all bank offices and at ATMs almost everywhere.
Electricity
Norway uses a 230 volt, 50 Hz system. Sockets are the standard European type; two-prong round-pin plugs, with a hole for a male grounding pin, are standard. To use electric appliances from your country you may need a special voltage converter with an adapter plug.
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
151/156
Health Services
Norway has a well-organized health care system. More detailed information about access and costs is found at the website of the Agency for Public Management and eGovernment.
Language
The official language of the Congress is English; no translation will be provided.
In hotels, shops, buses and taxis most people are able to speak English.
Lunches and Refreshments
Lunches and refreshments (Monday-Wednesday) are included in the registration fee. Participants requiring a special diet are
requested to mention this on their registration form.
Passport and Visa
All information about visa rules in Norway is found on the pages of the Norwegian immigration authorities, including a list of the countries without a visa requirement.
Time Zone
Norway is in the Central European time zone, one hour ahead of GMT. At the time of the congress this will be GMT+2 due to daylight saving time.
Vocabulary - Ordbok
Norsk – English – Français – Deutsch
Hei – Hello – Bonjour – Hallo
Ha det (bra) – Goodbye – Au revoir – Auf Wiedersehen
Hvordan går det? – How are you? – Comment allez-vous? – Wie geht es?
Takk – Thank you – Merci – Danke
Vær så god – Please – S'il vous plaît – Bitte
Ja – Yes – Oui – Ja
Nei – No – Non – Nein
Hvor er …? – Where is …? – Où est …? – Wo ist …?
Snakker du …? – Do you speak …? – Parlez-vous …? – Sprechen Sie …?
Engelsk, Fransk, Tysk – English, French, German – Anglais, Français, Allemand – Englisch, Französisch, Deutsch
Jeg forstår ikke – I don't understand – Je ne comprends pas – Ich verstehe nicht
Jeg trenger en lege – I need a doctor – J'ai besoin d'un docteur – Ich brauche einen Arzt
En, to, tre, fire, fem – One, two, three, four, five – Un, deux, trois, quatre, cinq – Eins, zwei, drei, vier, fünf
Det regner. – It rains. – Il pleut. – Es regnet.
Har du en paraply? – Do you have an umbrella? – Avez-vous un parapluie? – Haben Sie einen Regenschirm?
152/156
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
Social Events
Reception: Monday, 20 August 2012, 19:00 – 20:30, Håkonshallen, Bergen
Conference dinner: Wednesday, 22 August 2012, 19:00 – 23:00, Grieghallen
Conference Trips, Tuesday, 21 August 2012
Short trip 1: City walk in Bergen
We invite you on a classical round trip, first by bus and then on foot, through picturesque streets and past well-known and famous sites, including a walk through the famous Bryggen quarter.
Short trip 2: Troldhaugen – the home of Edvard Grieg
Edvard Grieg, the most famous Norwegian composer, was born and lived in Bergen. His home comprises an exhibition centre with a shop and café, a concert hall, the composer's cabin and Grieg's villa in idyllic surroundings.
Unfortunately, we had to cancel this trip. Sorry.
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
153/156
Short trip 3: Bergen – City of Arts
Bergen is rich in culture and the fine arts, ranging from giants such as Picasso and Munch to exciting local artists with their own
ateliers.
Short trip 4: Bergen – Gateway to the Fjords
The Norwegian fjords are a spectacular, breathtaking experience, and Bergen is indeed the gateway to them! This trip will allow you to discover at least a small part of these world-famous attractions.
Short trip 5: On the roof of Bergen
Join our walk over the roof of Bergen, enjoying spectacular views over the fjords as well as the mountains around Bergen.
Pre and Post Conference Trips
Norway in a nutshell
Rosendal
Hardanger
Hurtigruten
These trips are not included in the conference trip programme, but we can recommend these attractions while you are in Norway.
154/156
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
Map of Bergen
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
155/156
Posters and Exhibitors Placement
All Exhibitions are placed at level 2
156/156
ISCB 33 – 19-23 August 2012, Bergen, Norway – www.iscb2012.info
Plan Grieghallen
Foyer
Session rooms
Meeting rooms
FPG – Foyer Peer Gynt (Level 2)
STT – Trolltog (Level 3)
SN – Salon Nina (Level 3)
F1PG – Foyer 1 Peer Gynt (Level 1)
SPG – Peer Gynt (Level 2)
SE – Salon Edvard (Level 3)
FUPG – Foyer U Peer Gynt (Level U)
SKK – Klokkeklang (Level U)
SBK – Bukken (Level 3)
FS – Foyer Spissen
SG – Gjendine (Level U)
SH – Halling (Level 3)
SST – Småtroll (Level U)
SS – Svane (Level 3)
Preview room
SBG – Bøygen (Level 2)
ISBN: 978-82-8045-026-5