The Art of Semiparametrics
Berlin, 18.-20. October 2003
Contents
1 Welcome to Berlin
2 Organization
3 Useful Information
4 Social Programme
5 Conference Schedule
6 List of Abstracts
7 List of Participants
8 CASE - Center for Applied Statistics and Economics
9 E-Books
1 Welcome to Berlin
We are delighted to welcome you to The Art of Semiparametrics Conference at the
Humboldt-Universität zu Berlin. This conference is organized by the Center for Applied Statistics and Economics (CASE) and the Sonderforschungsbereich 373.
The aim of this conference is to present important contributions which describe the state
of the art of smoothing and in particular of semiparametrics. The concept of smoothing
is a central idea in statistics. Its role is to extract structural elements of variable complexity from patterns of random variation.
The nonparametric smoothing concept is designed to simultaneously estimate and model
the underlying structure. This involves high dimensional objects, like density functions,
regression surfaces or conditional quantiles. Such objects are difficult to estimate for
data sets with mixed, high dimensional and partially unobservable variables.
The semiparametric modeling technique balances the two aims of flexibility and simplicity of statistical procedures by introducing partial parametric components. These (low dimensional) components allow one to match structural conditions such as linearity in some variables and may be used to model the influence of discrete variables. The flexibility of semiparametric modeling has made it a widely accepted statistical technology.
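As an illustration of such a partial parametric component, the following minimal sketch fits a partially linear model y = x'beta + m(z) + error with a Robinson-type two-step kernel estimator. It is only meant to make the modeling idea above concrete; the data, bandwidth and helper functions are hypothetical and are not part of the conference material.

```python
import numpy as np

def nw_smooth(z, target, h):
    # Nadaraya-Watson estimate of E[target | z], evaluated at the observed z values
    w = np.exp(-0.5 * ((z[:, None] - z[None, :]) / h) ** 2)
    return (w * target[None, :]).sum(axis=1) / w.sum(axis=1)

def partially_linear_fit(y, x, z, h=0.3):
    # Robinson-type two-step estimator for y = x @ beta + m(z) + eps:
    # regress kernel-residualized y on kernel-residualized x, then recover m(z)
    y_res = y - nw_smooth(z, y, h)
    x_res = x - np.column_stack([nw_smooth(z, x[:, j], h) for j in range(x.shape[1])])
    beta = np.linalg.lstsq(x_res, y_res, rcond=None)[0]
    m_hat = nw_smooth(z, y - x @ beta, h)
    return beta, m_hat

# hypothetical toy data: two linear covariates plus a smooth effect of z
rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=(n, 2))
z = rng.uniform(-2, 2, size=n)
y = x @ np.array([1.0, -0.5]) + np.sin(2 * z) + rng.normal(0, 0.3, size=n)
beta_hat, m_hat = partially_linear_fit(y, x, z)
```

Residualizing y and x on the smooth variable z lets the parametric part be estimated at the usual parametric rate while m(·) remains fully nonparametric.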
I am happy to announce that selected papers will be published by Springer Verlag as Lecture Notes in Statistics. Editors are Gökhan Aydınlı, Hizir Sofyan and Danilo Mercurio. All other papers submitted so far are also published as SFB 373 discussion papers and are compiled on the accompanying CD-ROM. Furthermore we will make all papers
available on the web at
http://ise.wiwi.hu-berlin.de/statistik/semiparametrics/.
I wish to thank a number of people who have helped to bring this conference together.
These are my colleagues of the Scientific Programme Committee (SPC) who dedicated
their time in selecting and judging the quality of the submissions, my doctoral students of the Local Organizing Committee (LOC) who managed the logistics
of the conference and took care of the social events, and finally the session chairs whose
participation is vital to the success of our conference.
Berlin, 16th October 2003
Wolfgang Härdle
2 Organization
Scientific Programme Committee
The Scientific Programme Committee (SPC) was responsible for the scientific content
of The Art of Semiparametrics. It prepared the final list of conference topics and invited
speakers, selected contributed papers from amongst the submitted abstracts and refereed
contributed papers. The SPC consists of:
Wolfgang Härdle (Humboldt-Universität zu Berlin)
Joel Horowitz (Northwestern University)
Enno Mammen (Heidelberg University)
Vladimir Spokoiny (Weierstraß-Institut, Berlin)
Local Organizing Committee
The preparation of the conference was only possible through the combined effort of
the Institute for Statistics from Humboldt-Universität zu Berlin and the Weierstrass
Institute for Applied Analysis and Stochastics.
The Local Organizing Committee (LOC) was responsible for functional organization,
including the selection of the most suitable locations, preparation of the internet site and
conference software, arrangement of the social programme, production and publication of
the proceedings volume and coordinating the contact between invited speakers, chairs,
contributing authors, participants, sponsors and publishers. The LOC consists of:
Dipl.-Vw. Gökhan Aydınlı
MSc. Stat. Hizir Sofyan
Dipl.-Vw. Danilo Mercurio
3 Useful Information
Address for Correspondence
Humboldt-Universität zu Berlin
CASE - Center for Applied Statistics and Economics
Institut für Statistik und Ökonometrie
Gökhan Aydınlı
Hizir Sofyan
Danilo Mercurio
Spandauer Strasse 1
10178 Berlin
Germany
aydinli@wiwi.hu-berlin.de
hizir@wiwi.hu-berlin.de
mercurio@wias-berlin.de
Telephone: +49-(0)30-2093-5623
Telefax: +49-(0)30-2093-5959
Registration desk
The registration desk in the foyer of the Business Administration and Economics building
(Spandauer Straße 1, Economics building) is open:
Saturday, October 18: from 8:00 until 12:00
Sunday, October 19: from 8:00 until 12:00
Monday, October 20: from 8:00 until 10:00
The telephone number of the desk is +49(0)163-755-9999.
Participation identification
Meeting badges are essential for admission to the Meeting venues and to the academic
sessions and social events. Therefore we would like to ask you to wear your badge
at all times.
Accompanying persons
For the boat tour with the conference dinner on Monday, accompanying persons are asked to buy a ticket at € 50 each. Dinner tickets can be purchased at the conference desk in the Economics building.
Coffee breaks
Coffee and Tea will be served during the session breaks in front of room 125 (Economics
Building, 1st floor).
Internet access
Access to the internet and email will be available via workstations from the Institute of Statistics. The computers are in the Computer Lab (4th floor). Assistance will be given by a representative. Furthermore there are several internet cafes near Humboldt-Universität zu Berlin.
Liability
The Humboldt-Universität zu Berlin will not assume any responsibility for accident,
loss or damage, or for delays or modifications in the programme, caused by unforeseen
circumstances. We will not assume indemnities requested by contractors or participants
in the case of cancellation of the Meeting due to unforeseen circumstances.
Bank and exchange
Official opening hours of banks in Germany vary. Some exchange offices located at the
larger train stations (Friedrichstrasse and Alexanderplatz) are open on weekends. It
is also possible to change foreign currency into Euro in many hotels, but for a higher
transaction fee.
Electricity
Electric sockets in Germany carry 220V/50Hz and conform to the standard continental
type. Travel adaptors may be useful for electric appliances with other standards and
can be bought, e.g. in the Saturn-department store at Alexanderplatz.
Shopping hours
At the bigger shopping centers and malls, shops are open until 8 p.m. Mondays through Saturdays. Other shops close at 6:30 p.m. Generally, shops are not closed for lunch breaks.
The high temple of the consumer religion is the KaDeWe department store on Wittenbergplatz (U-Bahn station), the largest department store in Europe. The newly opened
complex on the Potsdamer Platz offers more than 100 new shops and a multitude of
restaurants, cafes, and pubs.
Markets
Each weekend many flea markets invite you to look for great bargains. The most popular one is the "Kunst- und Trödelmarkt" on Straße des 17. Juni, but there are also markets along the Kupfergraben near Museum Island and Humboldt-Universität. There are many stalls with art objects, books and records.
Phone Numbers of Taxi-Companies
Funk-Taxi Berlin: (030) 26 10 26
Spree-Funk: (030) 44 33 22
Cab Call: (0800) CAB CALL
Pharmacies
In Berlin pharmacies can be found all over the town. For overnight service there are
always one or two pharmacies open in every district. All pharmacies post signs directing
you to the nearest open one.
Hotline: 0 11 89
Emergency Numbers
(English speaker available at all numbers)
Police: 110
Fire Brigade and Ambulance: 112
German Red Cross: (030) 85 00 55
Ambulance: (030) 31 00 31
After-hour doctor: (030) 31 00 33 99
4 Social Programme
Bicycle tour City Center Berlin
We are happy to invite our participants to a relaxed bike tour through the heart of Berlin. With the friendly support of DBRent, the provider of the so-called "Call-a-Bike" service, we have arranged 40 bikes free of charge for our conference participants. If you are interested in this unique sightseeing opportunity, please sign up for the bike tour at the registration desk or contact one of our LOC members.
The bikes are available on Monday, 20 October 2003, from 14:00 in front of the faculty building at Spandauer Str. 1. Please remember to arrive on time at 17:00 at our meeting point (the "MS Philippa" at the landing stage at Urbanhafen; see also the maps at the end of this chapter). The boat will depart at 17:15.
Boat tour with conference dinner
The boat trip through Berlin will take place on the afternoon of October 20, 2003.
We will board the "MS Philippa" at the Urbanhafen, which is within riding distance of Humboldt-Universität, and start the cruise through the historical center of Berlin: Museumsinsel, theatre quarter, station Friedrichstrasse, the former border between the Eastern and the Western part of Berlin. After visiting the Reichstag (parliament), the seat of the German Chancellor and the construction site for the new Central Station "Lehrter Stadtbahnhof", we turn around at Humboldt harbor and go back to the historical center, where we pass the St. Nicolas quarter, the Alexanderplatz and the Berlin Town Hall. After passing the remains of the Berlin Wall at the East Side Gallery and the industrial districts of East Berlin, we cruise back to our starting point at Urbanhafen.
During this three hour boat trip we will have the conference dinner. The menu is as
follows:
Fish appetizer platter "Philippa": salmon cured and smoked on board, pepper mackerel, Kiel sprats and herring variations
Chicken breast medallions and pork tenderloin on onion-punch confit
Game pâté from the Rangsdorf forest on Waldorf salad with lingonberry-orange sauce
Vegetable cake on sesame-herb sauce
Mixed leaf salads with various dressings; savoy cabbage salad with diced bacon and red onions
Salmon and pike-perch fillet on leaf spinach in prosecco foam sauce with basmati rice; oven-roasted leg of Prignitz country duck with red wine sauce, pear wedges, red cabbage and Spätzle
Large cheese selection, bread basket and butter
Cinnamon parfait with red wine figs
Warm baked apple with almond filling and vanilla sauce
Soft drinks, wine (Pinot Gr., Cabernet Sauvignon) and beer will be served.
Remark: For insurance reasons, smoking in the dining compartment of the ship is prohibited. Smokers are asked to use the assigned areas of the boat.
We thank you for your cooperation.
Sightseeing
Brandenburger Tor, Unter den Linden, Friedrichstraße, Alexanderplatz ... you could continue the list of first-class sights in Berlin's old and new centre endlessly; an excursion through this historical as well as lively district belongs to every Berlin tour.
In the avenues Unter den Linden and Karl-Liebknecht-Straße every building has its own story to tell: the government and embassy buildings, Staatsoper, Komische Oper (national and comic opera), Maxim-Gorki-Theater at the Lustgarten park, the Humboldt University, Museum Island with exhibitions of world rank, Berlin's cathedral, the former "Palast der Republik" which replaced the demolished city castle, the Zeughaus and the Neue Wache, to name only the most significant attractions.
The Brandenburger Tor, symbol of Berlin and of German separation and reunification, is surely the city's most famous building. In one of its wings you will find an office of Berlin's tourist information. Alexanderplatz square with its cool but impressive tower blocks is surmounted by the 365 meter high Fernsehturm (television tower), the city's highest building.
The shopping and strolling avenue Friedrichstraße heads south towards the former border
crossing Checkpoint Charlie (see Kreuzberg) and north towards Oranienburger Straße.
This former Jewish quarter has developed into a vibrant clubbing area, towered over by the New Synagogue's golden dome. Exclusive and eccentric shops, chic cocktail bars and scruffy backyard romance meet in and around the Hackesche Höfe. Around Schiffbauerdamm, near the Deutsches Theater, the Charité clinic and the new government quarter, you will often meet prominent politicians.
If you prefer to avoid the bustle, you might wish to visit the "Dorotheenstädtischer Friedhof" cemetery, where outstanding personalities like Hegel, Brecht or the architect Schinkel are buried.
Mitte has got a lot to offer south of Unter den Linden / Karl-Liebknecht-Straße, too: Next to the majestic Rotes Rathaus town hall, the Nikolaiviertel quarter has preserved the charm of an 18th century small town. Not far from the conspicuous dome of St. Hedwig's cathedral you will encounter one of Europe's most beautiful squares: the Schinkel-designed Gendarmenmarkt with the Schauspielhaus theatre and concert hall and the German and French cathedrals. Between the districts of Mitte and Tiergarten spreads Potsdamer Platz, one of Berlin's centres.
Public transport and tickets
Berlin has three different fare zones (A, B, C):
Zone A: This is the area within the Berlin urban rail (S-Bahn) ring line.
Zone B: The area outside the ring up to the city border.
Zone C: The area surrounding Berlin (3 honeycombs); this sub-area is divided into 8 parts, each belonging to an administrative district.
With the AB ticket, you can always be sure of having the right fare when travelling in
Berlin. Single fare tickets are valid for 2 hours whereas short trip tickets can be used for
at most three stops only.
For your convenience we have added a map from the local transportation authority BVG to this conference book.
Map: Inner City
Map: Call-a-Bike (meeting points: MS Philippa at Urbanhafen, Econ Building)
Map: Boat Tour
5 Conference Schedule
Oct 18 2003
08:50-09:00 Opening Session (R. 125)
Opening Address
W. Härdle
09:00-10:30 Smoothing Session I (R. 125)
Chair: M. Benko
09:00 A Simple Deconvolving Kernel for Gaussian Noise
I. Proença
09:30 Statistical Modelling and Estimation Procedures for Functional
Data
P. Sarda
10:00 Nonparametric and Semiparametric Estimation of Additive Models
with both Discrete and Continuous Variables under Dependence
R. Poo
10:30 Coffee Break
11:00-12:30 Smoothing Session II (R. 125)
Chair: R. Moro
11:00 Penalized Logistic Regression in Gene Expression Analysis
M. Schimek
11:30 Smoothing Techniques When Data Are Curves
P. Vieu
12:00 Does male age influence the risk of spontaneous abortion? An approach using semiparametric regression
R. Slama
12:30-14:00 Lunch Break
14:00-15:30 Econometrics I (R.125)
Chair: M. Bianchi
14:00 Productivity Effects of IT-Outsourcing: Semiparametric Evidence
for German Companies
I. Bertscheck, M. Müller
14:30 On Estimating the Mixed Effects Model
A. Kneip
15:00 Some Evidence on Sense and Nonsense of Non- and Semiparametric
Analysis of Econometric Models
S. Sperlich
15:30 Coffee Break
16:00-17:30 Econometrics II (R.125)
Chair: B. Rönz
16:00 Smoothing Berlin - Using location nonparametrically to predict
house prices
A. Werwatz, R. Schulz
16:30 How to Improve the Performances of DEA/FDH Estimators in the
Presence of Noise?
L. Simar
17:00 Some Convergence Problems on Heavy Tail Estimation Using Upper Order Statistics for Generalized Pareto and Lognormal Distributions
R. Molinar
17:30-20:00 Welcome Mixer
location: Library Lounge
Oct 19 2003
09:00-10:30 Mathematical Statistics I (R. 125)
Chair: U. Ziegenhagen
09:00 Nonlinear regression estimate of state price density
Z. Hlavka
09:30 Modeling the Learning from Repeated Samples: A Generalized
Cross Entropy Approach
R. Bernardini
10:00 Nonparametric estimation of scalar diffusions based on low frequency data
M. Reiss
10:30 Coffee Break
11:00-12:30 Mathematical Statistics II (R. 125)
Chair: T. Kleinow
11:00 Asymptotic theory for M-estimators of boundaries
K. Knight
11:30 Estimating Semi-parametric Models with Constraints
Y. Xia
12:00 Testing Linear Process in Stationary Time Series
Z. Mohdeb
12:30-14:00 Lunch Break
14:00-15:30 Finance I (R.125)
Chair: O. Blaskowitz
14:00 Implied Volatility String Dynamics
M. Fengler, E. Mammen
14:30 Semiparametric Multivariate Garch Models
C. Hafner
15:00 Autoregressive aided periodogram bootstrap for time series
J.P. Kreiss
15:30 Coffee Break
16:00-17:00 Finance II (R.125)
Chair: Y. Chen
16:00 Consistent Testing for Stochastic Dominance under General Sampling Schemes
O. Linton, W. Whang
16:30 Estimation of Models with Additive Structure via Local Quasi-Differencing
S. Hoderlein
Oct 20 2003
09:00-10:30 Computing I (R.125)
Chair: R. Witzel
09:00 MD*Book and XQC/XQS - an Architecture for Reproducible Research
H. Lehmann, S. Klinke
09:30 Efficient Estimation in Conditional Index Regression
M. Delecroix
10:00 Additive Nonparametric Models in the Presence of Measurement
Errors
D. Ioannides
10:30 Coffee Break
11:00-12:00 Computing II (R.125)
Chair: S. Borak
11:00 Local Modeling by Structural Adaptation
J. Polzehl
11:30 Immigration and International Trade: a Semiparametric Empirical
Investigation
K. Mundra
12:00-14:00 Lunch
14:00-17:00 Bike tour through Mitte
17:00-22:00 Ship tour with Conference Dinner
6 List of Abstracts
A Simple Deconvolving Kernel for Gaussian Noise
Isabel Proença, Univ. Técnica de Lisboa, Portugal isabelp@iseg.utl.pt
Deconvolving kernel estimators when the noise is Gaussian entail heavy computations. They need an adequate choice of the damping kernel in order to assure the existence of the respective integral. This work proposes an approximation to the deconvolving kernel which considerably simplifies the calculations by avoiding the typical numerical integration, and allows the use of popular kernels like the Gaussian. It is shown that this approximation is consistent. The simulations included indicate that the loss in performance relative to the true deconvolving kernel is almost negligible in finite samples of moderate size.
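For readers unfamiliar with the setting, here is a minimal sketch of the standard deconvolving kernel density estimator that the abstract takes as its starting point, with the kernel transform integrated numerically; the compactly supported Fourier kernel, bandwidth and toy data are assumptions of this illustration, not the approximation proposed in the talk.

```python
import numpy as np

def deconvolving_kernel(u, h, sigma, n_grid=801):
    # K*(u) = (1/2pi) int exp(-i t u) phi_K(t) / phi_eps(t/h) dt, by numerical integration.
    # phi_K(t) = (1 - t^2)^3 on [-1, 1] is a compactly supported kernel transform and
    # phi_eps is the characteristic function of the N(0, sigma^2) measurement error.
    t = np.linspace(-1.0, 1.0, n_grid)
    phi_K = (1.0 - t ** 2) ** 3
    inv_phi_eps = np.exp(0.5 * (sigma * t / h) ** 2)
    integrand = np.cos(np.outer(u, t)) * phi_K * inv_phi_eps   # real part suffices by symmetry
    return np.trapz(integrand, t, axis=-1) / (2.0 * np.pi)

def deconvolution_kde(x_eval, w_obs, h, sigma):
    # density estimate of X from contaminated observations W = X + eps, eps ~ N(0, sigma^2)
    u = (x_eval[:, None] - w_obs[None, :]) / h
    K = deconvolving_kernel(u.ravel(), h, sigma).reshape(u.shape)
    return K.mean(axis=1) / h

# hypothetical toy data: X ~ N(0, 1) observed with Gaussian noise of standard deviation 0.3
rng = np.random.default_rng(1)
w = rng.normal(size=200) + rng.normal(scale=0.3, size=200)
grid = np.linspace(-3.0, 3.0, 41)
f_hat = deconvolution_kde(grid, w, h=0.4, sigma=0.3)
```

The repeated numerical integration over the Fourier grid is exactly the computational burden that the proposed approximation is meant to avoid.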
Statistical Modelling and Estimation Procedures for Functional
Data
Pascal Sarda, Université Paul Sabatier - Toulouse III, France
pascal.Sarda@math.ups-tlse.fr
In this paper, we first present a general framework for functional data, i.e. data which are curves or surfaces. Different statistical models and estimation procedures introduced in the literature are discussed. In a second step, we concentrate on regression problems where
the predictor is a (random) function. The models studied are the functional linear model and
the functional generalized linear model. For both models spline estimators of the functional
coefficient are defined. We then discuss asymptotic properties as well as computational aspects
for these estimators.
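As a rough illustration of the type of estimator discussed (not the authors' implementation), the sketch below fits the functional linear model y_i = ∫ X_i(t) β(t) dt + ε_i by a penalized basis expansion; the truncated-power basis, ridge penalty and toy data are assumptions made only for this example.

```python
import numpy as np

def functional_linear_fit(X_curves, y, t_grid, n_basis=15, lam=1e-2):
    # Penalized basis-expansion estimator of beta(t) in y_i = int X_i(t) beta(t) dt + e_i.
    # A truncated-power cubic spline basis stands in for the B-splines used in practice.
    knots = np.linspace(t_grid.min(), t_grid.max(), n_basis - 2)[1:-1]
    B = np.column_stack([np.ones_like(t_grid), t_grid, t_grid ** 2, t_grid ** 3] +
                        [np.clip(t_grid - k, 0.0, None) ** 3 for k in knots])
    dt = t_grid[1] - t_grid[0]
    Z = X_curves @ B * dt                                   # Z[i, j] = int X_i(t) B_j(t) dt
    coef = np.linalg.solve(Z.T @ Z + lam * np.eye(B.shape[1]), Z.T @ y)
    return B @ coef                                         # estimated beta(t) on t_grid

# hypothetical toy data: the scalar response depends on int X(t) sin(pi t) dt
t = np.linspace(0.0, 1.0, 101)
rng = np.random.default_rng(2)
Xc = rng.normal(size=(200, 101)).cumsum(axis=1) * 0.1       # rough random curves
y = (Xc * np.sin(np.pi * t)).sum(axis=1) * (t[1] - t[0]) + rng.normal(0, 0.05, size=200)
beta_hat = functional_linear_fit(Xc, y, t)
```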
Does male age influence the risk of spontaneous abortion? An
approach using semiparametric regression
Remy Slama, INSERM Paris, France slama@vjf.inserm.fr
Background: Couples in industrialised countries tend to delay attempting to have children, which may lower their chances of livebirth. Aim: We assessed the association between male age and the risk of spontaneous abortion between weeks 5 and 20 of pregnancy, controlling for female age. Methods: We interviewed by telephone a random cross-sectional population of 1,151 French women who had been pregnant between 1985 and 2000 (participation rate, 73%). Results: Our final model predicted that the risk (rate-ratio, RR) of spontaneous abortion was 2.13-fold higher in 25-year-old women whose partner was over 35 years than in 25-year-old women whose partner was younger than 35 years (95% CI). Conclusion: Increasing male age could increase the risk of spontaneous abortion when the female partner is below 30 years of age. The fact that there was no deleterious effect of male age when the female partner was 35 years was unexpected and might be a chance finding.
Penalized Logistic Regression in Gene Expression Analysis
Michael G. Schimek, Karl-Franzens-University Graz, Austria
michael.schimek@uni-graz.at
In gene expression analysis we typically have biological samples which belong to one of two alternative classes. A statistical procedure is needed which, based on the measured expression profiles, allows one to compute the probability that a new sample belongs to a certain class. Such a procedure is logistic regression. The problem is that, in contrast to conventional classification tasks, there are far more variables (genes) than observations. Hence we have to cope with multicollinearity and oversmoothing (overfitting). How can we overcome these obstacles? We impose a penalty on large fluctuations of the estimated parameters and on the fitted curves. Quadratic regularization is known as ridge regression. Other penalties lead to the lasso (Tibshirani, 1996) or to bridge regression (Frank and Friedman, 1993). Antoniadis and Fan (2001), applying wavelet techniques, provide criteria for how to choose them. Here we discuss how logistic regression should be penalized. Further we address the problem of regularization (smoothing) parameter choice in this context. Eilers et al. (2002) propose, for instance, Akaike's Information Criterion. Last but not least, there is more than one computational approach to penalized logistic regression. A state-of-the-art overview is given, emphasizing the data analytic requirements of the modern bio-sciences.
Antoniadis, A. and Fan, J. (2001) Regularization of wavelet approximations. JASA, 96, 939-967 (with discussion).
Eilers, P. H. C. et al. (2002) Classification of microarray data with penalized logistic regression. Preprint.
Frank, I. E. and Friedman, J. H. (1993) A statistical view of some chemometric regression tools. Technometrics, 35, 109-148.
Tibshirani, R. (1996) Regression shrinkage and selection via the lasso. JRSS, B, 58, 267-288.
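To make the penalization idea concrete, here is a minimal ridge (quadratically penalized) logistic regression fitted by Newton-Raphson/IRLS in a p >> n setting; the data and the fixed penalty value are hypothetical, and the sketch does not reproduce the authors' procedure or their smoothing parameter choice.

```python
import numpy as np

def ridge_logistic(X, y, lam, n_iter=50, tol=1e-8):
    # Maximizes the log-likelihood minus 0.5*lam*||beta||^2 (intercept unpenalized)
    # by Newton-Raphson, i.e. iteratively reweighted least squares.
    n, p = X.shape
    Z = np.hstack([np.ones((n, 1)), X])          # add intercept column
    beta = np.zeros(p + 1)
    pen = lam * np.eye(p + 1)
    pen[0, 0] = 0.0                              # do not penalize the intercept
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(Z @ beta)))   # fitted class probabilities
        W = mu * (1.0 - mu)                      # IRLS weights
        grad = Z.T @ (y - mu) - pen @ beta
        hess = (Z * W[:, None]).T @ Z + pen
        step = np.linalg.solve(hess, grad)
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# hypothetical 'more genes than samples' setting: p >> n, the ridge keeps the fit stable
rng = np.random.default_rng(3)
X = rng.normal(size=(40, 200))
y = (X[:, 0] - X[:, 1] + rng.normal(size=40) > 0).astype(float)
beta_hat = ridge_logistic(X, y, lam=10.0)
```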
Smoothing Techniques When Data Are Curves
Philippe Vieu, Université Paul Sabatier - Toulouse III, France
philippe.vieu@math.ups-tlse.fr
This talk will present recent advances linked with the use of nonparametric techniques when the observed data are of a functional nature (for instance when the data are curves). This will be a joint talk with Frederic FERRATY (Toulouse 2), and special attention will be paid to the double curse of dimensionality. It will be discussed how the infinite dimensionality of the observed data can be dealt with by means of some concentration model (for instance by means of some fractal type model). The presentation will be centered on several real data sets for which both the nonparametric model and the functional setting are of crucial importance. These data will be related to many different fields of applied sciences (chemometrics, econometrics, environmetrics, ...) and they will be concerned with different statistical problems (regression, supervised classification, time series prediction, ...).
Productivity Effects of IT-Outsourcing: Semiparametric Evidence
for German Companies
Irene Bertschek, ZEW - Centre for European Economic Research Mannheim,
Germany bertschek@zew.de
Marlene Müller, Fraunhofer ITWM Kaiserslautern, Germany
marlene.mueller@itwm.fhg.de
This paper analyzes the impact of IT-outsourcing on the labor productivity of 1142 firms from German manufacturing and service industries surveyed in 2000. An endogenous switching regression model takes into account that firms might follow different productivity regimes depending on whether or not they source out IT-tasks. A semiparametric approach allows the outsourcing decision to depend nonlinearly on firm size. First empirical results show that IT-outsourcing does not significantly increase the partial production elasticities of the input factors and that IT-tasks tend to be completely sourced out by firms with lower multifactor productivity.
Keywords: information technology, IT-outsourcing, labor productivity, endogenous switching,
semiparametric, partial linear
On Estimating the Mixed Effects Model
Alois Kneip, Universität Mainz, Germany kneip@wiwi.uni-mainz.de
The paper introduces a new estimation method for time-varying individual effects in a panel
data model. An important application is the estimation of time-varying technical inefficiencies
of individual firms using the fixed effects model. Most models of the stochastic frontier production function require rather strong assumptions about the distribution of technical inefficiency
(e.g. half-normal) and random noise (e.g. normal) and/or impose explicit restrictions on the
temporal pattern of technical inefficiency. This paper drops the assumption of a prespecified
model of inefficiency, and provides a semiparametric method for estimation of time-varying
effects. The methods proposed in the paper are related to functional principal component analysis, and estimate the time-varying effects using a small number of common functions calculated from the data. Finite sample performance of the estimators is examined via Monte Carlo
simulations. We apply our methods to the analysis of technical efficiency of the U.S. banking
industry.
Efficient estimation in conditional single index regression
Michel Delecroix, ENSAI Bruz, France delecroi@ensai.fr
Semiparametric single-index regression involves an unknown finite-dimensional parameter and
an unknown (link) function. We consider estimation of the parameter via the pseudo maximum
likelihood method. We show that the proposed technique of estimation yields an asymptotically
efficient estimator.
Smoothing Berlin - Using location nonparametrically to predict
house prices
Axel Werwatz, DIW Berlin, Germany awerwatz@diw.de
Rainer Schulz, University of Aberdeen Business School, UK r.schulz@abdn.ac.uk
In the popular discourse, location is frequently singled out as the single most important determinant of house prices. On the other hand, the ”location” of an observation is a central ingredient
of nonparametric regression which calculates local averages. Hence, nonparametric regression
appears to be ideally suited to the analysis of house prices, in particular their dependence on
location. In this paper, we use geocoded measures of the location of single-family houses sold in
Berlin to nonparametrically estimate local house price averages. These local averages are used
to predict the price of newly sold homes and are compared to rival predictions from a standard
parametric hedonic regression model.
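The local-averaging idea the abstract builds on can be sketched in a few lines as a Nadaraya-Watson estimator over geographic coordinates; the Gaussian kernel, bandwidth and toy "city centre" data below are purely illustrative assumptions, not the Berlin data or the authors' estimator.

```python
import numpy as np

def local_average_price(coords, prices, query, bandwidth):
    # Nadaraya-Watson local average: kernel-weighted mean price around each query location.
    # coords: (n, 2) geocoded locations of sold houses; query: (m, 2) locations to predict.
    d2 = ((query[:, None, :] - coords[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)          # Gaussian kernel in geographic space
    return (w * prices[None, :]).sum(axis=1) / w.sum(axis=1)

# hypothetical toy data: prices fall with distance from a 'city centre' at (0, 0)
rng = np.random.default_rng(4)
xy = rng.uniform(-10, 10, size=(500, 2))
price = 300_000 - 8_000 * np.hypot(xy[:, 0], xy[:, 1]) + rng.normal(0, 20_000, size=500)
pred = local_average_price(xy, price, np.array([[0.0, 0.0], [5.0, 5.0]]), bandwidth=2.0)
```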
How to Improve the Performances of DEA/FDH Estimators in the
Presence of Noise?
Leopold Simar, UCL, Belgium simar@stat.ucl.ac.be
In frontier analysis, most of the nonparametric approaches (DEA, FDH) are based on envelopment ideas which suppose that, with probability one, all the observed units belong to the attainable set. In these "deterministic" frontier models, statistical theory is now mostly available (Simar and Wilson, 2000). In the presence of noise this is no longer true, and envelopment estimators can behave poorly, since they are very sensitive to extreme observations that may result only from noise. DEA/FDH techniques would provide estimators with an error of the order of the standard deviation of the noise. Some recent results from Hall and Simar (2002) on detecting change points may be used in order to improve the performance of the DEA/FDH estimators in the presence of noise. The problem is approached from a nonparametric "stochastic" frontier perspective, introducing noise into the model. The problem is difficult since, in essence, the model is not identified. In this paper we summarize the basic tools used and describe their statistical properties: they typically rely on the "size" of the noise. We then show, through simulated examples, how we can improve the performance of the classical DEA/FDH estimators in the presence of noise of moderate size, where moderate is in terms of the noise-to-signal ratio. It turns out that our procedure is also robust to outliers of moderate size.
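As background, an FDH (free disposal hull) efficiency score is computed directly from the observed data by envelopment, which is exactly why noise is harmful: a single extreme observation shifts the estimated frontier. The sketch below is a generic input-oriented FDH score on hypothetical noisy data, not the correction procedure described in the talk.

```python
import numpy as np

def fdh_input_efficiency(X, Y, x0, y0):
    # Input-oriented FDH score of a unit (x0, y0): the smallest proportional input
    # contraction theta such that some observed unit produces at least y0 with
    # inputs no larger than theta * x0.
    dominating = np.all(Y >= y0, axis=1)            # units producing at least y0
    if not dominating.any():
        return np.inf                               # y0 is not attainable in the FDH set
    ratios = np.max(X[dominating] / x0, axis=1)     # per-unit required contraction
    return ratios.min()

# hypothetical toy data: one input, one output, with noise creating spurious 'frontier' points
rng = np.random.default_rng(5)
x = rng.uniform(1, 10, size=(100, 1))
y = np.sqrt(x) + rng.normal(0, 0.2, size=(100, 1))
scores = np.array([fdh_input_efficiency(x, y, x[i], y[i]) for i in range(100)])
```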
Nonlinear regression estimate of state price density
Zdenek Hlavka, Humboldt-Universität zu Berlin, Germany
hlavka@wiwi.hu-berlin.de
The aim of the paper is to estimate the state price density, i.e., the second derivative of the discounted European option prices with respect to the strike price. We use the maximum likelihood method to derive a simple estimator of the curve such that it is decreasing and convex and its second derivative integrates to one. Confidence intervals for this estimator can be constructed using standard maximum likelihood theory. The method works well in practice, as illustrated on the DAX option price data.
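The quantity being estimated here is the Breeden-Litzenberger state price density, i.e. the discounted second derivative of the call price in the strike direction; the sketch below approximates it by simple finite differences on Black-Scholes toy prices with made-up parameters, and is not the constrained maximum likelihood estimator of the paper.

```python
import numpy as np
from scipy.stats import norm

def spd_finite_difference(strikes, call_prices, r, tau):
    # Breeden-Litzenberger: f(K) = exp(r*tau) * d^2 C / dK^2, approximated by
    # central second differences on an equally spaced strike grid.
    k = np.asarray(strikes, dtype=float)
    c = np.asarray(call_prices, dtype=float)
    dk = k[1] - k[0]
    d2c = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dk ** 2
    return k[1:-1], np.exp(r * tau) * d2c

# hypothetical toy usage with Black-Scholes call prices
S, r, tau, sigma = 100.0, 0.03, 0.5, 0.2
K = np.arange(60.0, 141.0, 1.0)
d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * np.sqrt(tau))
d2 = d1 - sigma * np.sqrt(tau)
C = S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)
grid, spd = spd_finite_difference(K, C, r, tau)   # approximately recovers the lognormal density
```

With noisy market prices the raw second differences become unstable, which is one reason the paper constrains the estimated call price curve to be decreasing and convex.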
Modeling the Learning from Repeated Samples: A Generalized
Cross Entropy Approach
Rosa P. Bernardini, Università di Perugia, Italy bernard@stat.unipg.it
In this study we illustrate a Generalized Cross Entropy (GCE) methodology for modeling incomplete information and learning from repeated samples. The basis for this method has its roots in information theory and builds on the classical maximum entropy work of Jaynes (1957). We illustrate the use of this approach, describe how to impose restrictions on the estimator, and how to examine the sensitivity of GCE estimates to the parameter and error bounds. The GCE approach proceeds by minimizing the entropy between a prior estimate and the reconstructed probability. If the generalized cross entropy measure is greater than zero, we have gained information on the prior and thus learning has occurred. Specifically, in the presence of repeated samples, cross entropy acts as a shrinkage rule, so that the reconstructed probability approaches the true probability as the sample size approaches infinity (Golan et al., 1996). As would be expected, if correct prior information is available and is employed within the estimation process, this improves the accuracy of the estimation. In addition, incorrect prior information does not significantly impact the accuracy of the estimation. The reason is that, to achieve an interior solution to the problem, the constraints must be satisfied, and since the entropy method needs to satisfy the sample information, the estimates will not stray too far. The variance of the GCE is less than the variance of sample-based rules like least squares or maximum likelihood, but the use of prior information introduces bias. Nevertheless, this bias is typically offset by variance reductions, and the resulting mean squared error of the estimator is smaller than the sample-based mean squared error. Within this framework, minimal distributional assumptions are necessary, and a dual loss function is used to take into account both the estimation precision and prediction objectives. The GCE formulation is designed to introduce sample information in either data or moment form, and it permits the use of all available information. However, the GCE estimator is a shrinkage estimator where the parameter estimates are shrunk towards the prior mean, which is based on non-sample information; thus, as we increase the degree of shrinkage towards the prior mean, we need to make sure that the prior mean is based on good non-sample information.
References: Golan, A., Judge, G. and Miller, D. (1996), Maximum Entropy Econometrics: Robust Estimation with Limited Data, Wiley. Jaynes, E. T. (1957), Information theory and statistical mechanics, Physical Review, 106, 620-630.
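A minimal sketch of the core cross-entropy step, assuming a discrete support and a single moment condition: probabilities are reconstructed by minimizing the Kullback-Leibler cross entropy against a prior, subject to the observed moments. The solver choice and toy numbers are assumptions of this illustration, not the GCE estimator of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def cross_entropy_reconstruction(A, b, prior, eps=1e-12):
    # Find probabilities p minimizing sum_k p_k log(p_k / q_k) against the prior q,
    # subject to the moment conditions A p = b and sum(p) = 1.
    K = len(prior)
    objective = lambda p: np.sum(p * np.log((p + eps) / prior))
    cons = [{"type": "eq", "fun": lambda p: A @ p - b},
            {"type": "eq", "fun": lambda p: p.sum() - 1.0}]
    res = minimize(objective, x0=np.full(K, 1.0 / K), bounds=[(0.0, 1.0)] * K,
                   constraints=cons, method="SLSQP")
    return res.x

# toy usage: recover a distribution on the support {1,...,5} from its observed mean
support = np.arange(1, 6, dtype=float)
prior = np.full(5, 0.2)                             # uniform prior
p_hat = cross_entropy_reconstruction(A=support[None, :], b=np.array([3.4]), prior=prior)
```

If the observed moment already matches the prior, the reconstruction returns the prior itself; otherwise the estimate moves away from the prior only as far as the sample information requires, which is the shrinkage behaviour described above.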
Nonparametric estimation of scalar diffusions based on low
frequency data
Markus Reiss, Humboldt-Universität zu Berlin, Germany
reiss@mathematik.hu-berlin.de
Suppose we observe a one-dimensional diffusion process X satisfying
dX(t) = b(X(t)) dt + σ(X(t)) dW(t)
at discrete time points X(0), X(∆), . . . , X(N∆), with ∆ > 0. We consider the problem of estimating the functions b(·) and σ(·) nonparametrically in the case where the observation distance ∆ is not small. Using a spectral estimation method, we obtain optimal minimax rates under ergodicity conditions for long-time asymptotics, as ∆ > 0 is fixed. The procedure relies
on estimating an eigenfunction-eigenvalue pair of the Markov transition operator and has to
deal with an inherent ill-posed inverse problem. Numerical simulations show that for finite
samples our method is superior to high-frequency methods already for moderate observation
distances ∆, but gets comparatively worse for very small ∆.
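To fix ideas about the observation scheme, the following sketch simulates such a diffusion with an Euler-Maruyama scheme on a fine internal grid and records it only at the coarse times 0, ∆, 2∆, ...; the Ornstein-Uhlenbeck drift and all parameter values are hypothetical, and the sketch does not implement the spectral estimator of the abstract.

```python
import numpy as np

def simulate_low_frequency(b, sigma, x0, delta, n_obs, n_substeps=100, seed=0):
    # Euler-Maruyama simulation of dX = b(X) dt + sigma(X) dW, recorded only at the
    # coarse observation times 0, delta, 2*delta, ..., n_obs*delta.
    rng = np.random.default_rng(seed)
    dt = delta / n_substeps
    x = x0
    path = [x0]
    for _ in range(n_obs):
        for _ in range(n_substeps):
            x = x + b(x) * dt + sigma(x) * np.sqrt(dt) * rng.normal()
        path.append(x)
    return np.array(path)

# toy example: an ergodic Ornstein-Uhlenbeck process observed at distance delta = 1.0
obs = simulate_low_frequency(b=lambda x: -0.5 * x, sigma=lambda x: 1.0,
                             x0=0.0, delta=1.0, n_obs=500)
```

With ∆ not small, increment-based high-frequency approximations of the dynamics become poor, which is why the abstract works with an eigenpair of the Markov transition operator instead.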
Asymptotic theory for M-estimators of boundaries
Keith Knight, University of Toronto, Canada keith@utstat.toronto.edu
We consider the asymptotic theory for M-estimators of the parameters of a linear model whose
errors are non-negative; these estimators are the solutions of constrained optimization problems
and their asymptotic theory is non-standard. Under weak conditions on the design and on the
distribution of the errors, we show that a large class of estimators have the same asymptotic
distributions. We also examine the second order properties of these estimators.
Estimating Semi-parametric Models with Constraints
Yingcun Xia, National University of Singapore, Singapore
yx202@hermes.cam.ac.uk
There are growing demands to use prior and sample information in semi-parametric models. This information can be imposed on the unknown nonparametric functions, on the parameters, or on both. In this paper, we propose a method to incorporate the information into the model by "globalising" the local smoothing method. Implementation of the approach is a simple quadratic programming problem. An ad hoc approach to check the constraints is proposed. Some real data sets
are analysed.
Implied Volatility String Dynamics
Matthias Fengler, Humboldt-Universität zu Berlin, Germany
fengler@wiwi.hu-berlin.de
Enno Mammen, Universität Heidelberg, Germany
mammen@statlab.uni-heidelberg.de
A primary goal in modeling implied volatility surfaces (IVS) is the complexity reduction of IVS
dynamics. For this purpose it is common practice to fit the IVS each day and apply a principal
component analysis using a functional norm. These approaches, however, neglect the degenerated string structure of the implied volatility data and are likely to result in a modeling bias.
Using transaction based German DAX option data from 1998 to May 2001, we approximate
the IVS in a finite dimensional function space by only fitting in the local neighborhood of the
design points. Our approach is a combination of methods from functional principal component
analysis and backfitting techniques for additive models. The basis functions recovered have intuitive financial interpretations. We study the time series properties of the parameter weights
and complete the modeling approach by proposing a vector autoregressive model for the IVS.
Semiparametric Multivariate Garch Models
Christian Hafner, Universiteit Rotterdam, Netherlands chafner@few.eur.nl
Estimation of multivariate GARCH models is usually carried out by quasi maximum likelihood
(QMLE), for which recently consistency and asymptotic normality have been proven under
quite general conditions. However, there are to date no results on the efficiency loss of QMLE
if the true innovation distribution is not multinormal. We investigate this issue by suggesting
a nonparametric estimation of the multivariate innovation distribution, based on consistent
parameter estimates obtained by QMLE. We give conditions under which the semiparametric
efficiency bound can be attained. A simulation experiment demonstrates the efficiency gain
of our procedure compared with QMLE, and an application to a bivariate stock index series
illustrates the results.
Autoregressive aided periodogram bootstrap for time series
Jens P. Kreiss, TU Braunschweig, Germany j.kreiss@tu-bs.de
A bootstrap methodology for the periodogram of a stationary process is proposed which is based
on a combination of a time domain parametric and a frequency domain nonparametric bootstrap. The parametric fit is used to generate periodogram ordinates that imitate the essential
features of the data and the weak dependence structure of the periodogram while a nonparametric (kernel based) correction is applied in order to catch features not represented by the
parametric fit. The asymptotic theory developed shows validity of the proposed bootstrap procedure for a large class of periodogram statistics. For important classes of stochastic processes,
validity of the new procedure is established also for periodogram statistics not captured by existing frequency domain bootstrap methods based on independent periodogram replicates.
Consistent Testing for Stochastic Dominance under General
Sampling Schemes
Oliver Linton, London School of Economics and Political Science, UK
lintono@lse.ac.uk
Yoon-Jae Whang, Korea University, Korea whang@korea.ac.kr
We propose a procedure for estimating the critical values of the extended Kolmogorov-Smirnov
tests of First and Second Order Stochastic Dominance in the general K-prospect case. We
allow for the observations to be serially dependent and, for the first time, we can accommodate
general dependence amongst the prospects which are to be ranked. Also, the prospects may be
the residuals from certain conditional models, opening the way for conditional ranking. We
also propose a test of Prospect Stochastic Dominance. Our method is subsampling; we show
that the resulting tests are consistent and powerful against some N^{-1/2} local alternatives even
when computed with a data-based subsample size. We also propose some heuristic methods
for selecting subsample size and demonstrate in simulations that they perform reasonably. We
show that our test is asymptotically similar on the entire boundary of the null hypothesis, and is
unbiased. In comparison, any method based on resampling or simulating from the least favorable
distribution does not have these properties and consequently will have less power against some
alternatives.
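For intuition, here is a bare-bones version of two ingredients: a Kolmogorov-Smirnov-type statistic for first-order stochastic dominance and a subsampling approximation to its critical value. The sketch assumes i.i.d. paired observations of two prospects, whereas the paper covers serial dependence, K prospects, residual-based prospects and second-order and prospect dominance; all data and the subsample size are hypothetical.

```python
import numpy as np

def fsd_statistic(x, y, grid):
    # sup_z sqrt(n) * (F_x(z) - F_y(z)); large positive values speak against
    # 'prospect x first-order stochastically dominates prospect y'
    n = len(x)
    Fx = (x[:, None] <= grid[None, :]).mean(axis=0)
    Fy = (y[:, None] <= grid[None, :]).mean(axis=0)
    return np.sqrt(n) * np.max(Fx - Fy)

def subsampling_critical_value(x, y, grid, b, alpha=0.05, n_sub=500, seed=0):
    # Recompute the statistic on many subsamples of size b, drawn without replacement
    # and using the same indices for both prospects so their dependence is preserved.
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_sub):
        idx = rng.choice(len(x), size=b, replace=False)
        stats.append(fsd_statistic(x[idx], y[idx], grid))
    return np.quantile(stats, 1.0 - alpha)

# hypothetical toy data: two dependent prospects with identical marginals (null boundary)
rng = np.random.default_rng(6)
common = rng.normal(size=1000)
x = common + 0.5 * rng.normal(size=1000)
y = common + 0.5 * rng.normal(size=1000)
grid = np.linspace(-4.0, 4.0, 201)
reject = fsd_statistic(x, y, grid) > subsampling_critical_value(x, y, grid, b=100)
```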
MD*Book and XQC/XQS - an Architecture for Reproducible
Research
Sigbert Klinke, Humboldt-Universität zu Berlin, Germany
sigbert@wiwi.hu-berlin.de
Heiko Lehmann, SAP, Germany mail@hlehmann.de
Juan M. Rodriguez Poo, Universidad de Cantabria, Spain rodrigjm@unican.es
Statistical software has become an important part of scientific research, and this is reflected in the publication of research results. Publishing a mathematical theorem also requires the publication of the proof of this theorem. The result of a computation can be seen as the equivalent of a mathematical theorem. Reproducibility of published results fulfils this demand and offers the possibility to prove computational results. Our MD*Book tool together with the XQC/XQS architecture presents a potential solution to the challenge stated above.
Some Evidence on Sense and Nonsense of Non- and
Semiparametric Analysis of Econometric Models
Stefan Sperlich, Universidad Carlos III de Madrid, Spain,
stefan@est-econ.uc3m.es
The discussion about the use of semiparametric analysis in empirical research in economics is
as old as the methods themselves. This article can certainly not be more than a small contribution to the polemic and still open question of how useful non- or semiparametric econometric research is. The goal of this contribution is twofold: to highlight that the use of these methods in economics has its justification, a point that is categorically denied by many economists; and to highlight possible reasons for the lack of application of these methods in empirical research.
We do not give a survey of available methods and procedures. Since we discuss the question of
the use of non- or semiparametric methods (in economics) in general, we believe that it is fair
enough to stick to kernel smoothing methods. It might be that we will face some deficiencies that
are more typical in the context of kernel smoothing than for other methods. However, the different smoothing methods share mainly the same advantages and disadvantages we will discuss.
Even though many points of this discussion hold also true for other fields, all our examples
are either based on economic data sets or concentrate on models that are typically motivated
from econometric theory. The interest is directed towards the following problems: feasibility,
implementation and computational expense, parameter choice and econometric modeling.
Additive Nonparametric Models in the Presence of Measurement
Errors
Dimitris Ioannides, University of Macedonia, Greece dimioan@uom.gr
We study the estimation of the additive components in additive regression models in the presence of measurement errors. A deconvoluted kernel procedure based on marginal integration is used to estimate the unknown nonlinear components. Formulas for the asymptotic bias and normality of our estimator are established.
Local Modeling by Structural Adaptation
Jörg Polzehl, Weierstraß-Institut für Angewandte Analysis und Stochastik Berlin,
Germany polzehl@wias-berlin.de
Structural adaptive smoothing provides a new approach to nonparametric modelling. Emphasizing local homogeneity, we are able to obtain iterative procedures with remarkable properties. Procedures like Adaptive Weights Smoothing (AWS) allow one to achieve an almost parametric behaviour if a specified local model is valid globally or in a large homogeneous region. The
method is fully adaptive and dimension free. First applications included imaging problems,
where the underlying image function is piecewise constant or piecewise smooth. Generalizations
allow for a wide class of probabilistic models for image gray values including binary images,
Poisson counts and exponential models. Applications in time series and biosignal analysis focus
on local stationarity.
Immigration and International Trade: a Semiparametric Empirical
Investigation
Kusum Mundra, San Diego State University, San Diego, USA
kmundra@mail.sdsu.edu
This paper examines the effect of immigration on the US trade flows. The model hypothesizes
that immigration facilitates international trade with home countries by lowering transaction
costs. Immigrants also demand products from their country of origin, and thus stimulate trade.
Using a panel data set I estimate a dynamic, fixed-effect model. The immigrant stock, a
proxy for transaction costs, enters the model non-parametrically, whereas other variables enter
the model log-linearly, as implied by the gravity model of international trade. To estimate
this semiparametric model, I develop a new instrumental variable estimator with desirable
asymptotic properties. The results indicate that the immigration effect on imports is positive
for both finished and intermediate goods, but the effect on exports is positive only for finished
goods.
7 List of Participants
1 Taleb Ahmad, Germany
2 Gökhan Aydınlı, Germany
3 Michal Benko, Germany
4 Rosella Bernardini, Italy
5 Irene Bertschek, Germany
6 Marco Bianchi, UK
7 Oliver Blaskowitz, Germany
8 Szymon Borak, Germany
9 Snigdhansu Chatterjee, USA
10 Ying Chen, Germany
11 Michel Delecroix, France
12 Kai Detlefsen, Germany
13 Zdenek Fabian, Czech Republic
14 Matthias Fengler, Germany
15 Enzo Giacomini, Germany
16 Wolfgang Härdle, Germany
17 Christian Hafner, Netherlands
18 Zdenek Hlavka, Germany
19 Stefan Hoderlein, Germany
20 Joel Horowitz, USA
21 Dimitris Ioannides, Greece
22 Torsten Kleinow, Germany
23 Sigbert Klinke, Germany
24 Alois Kneip, Germany
25 Keith Knight, Canada
26 Jens Peter Kreiss, Germany
27 Heiko Lehmann, Germany
28 Oliver Linton, UK
29 Enno Mammen, Germany
30 Danilo Mercurio, Germany
31 Zaher Mohdeb, Algeria
32 Raul Molinar, Mexico
33 Rouslan Moro, Germany
34 Marlene Müller, Germany
35 Kusum Mundra, USA
36 Jörg Polzehl, Germany
37 Juan Poo, Spain
38 Isabel Proença, Portugal
39 Markus Reiß, Germany
40 Bernd Rönz, Germany
41 Pascal Sarda, France
42 Michael Schimek, Austria
43 Rainer Schulz, UK
44 Leopold Simar, Belgium
45 Remy Slama, France
46 Hizir Sofyan, Germany
47 Stefan Sperlich, Spain
48 Vladimir Spokoiny, Germany
49 Joseph Tadjuidje, Germany
50 Marc Tisserand, Germany
51 Philippe Vieu, France
52 Michael Werner, Germany
53 Axel Werwatz, Germany
54 Yoon-Jae Whang, Korea
55 Rodrigo Witzel, Germany
56 Yingcun Xia, Singapore
57 Uwe Ziegenhagen, Germany
8 CASE - Center for Applied Statistics and Economics
The Research Program
The large number of complex tasks and problems in economics can only be solved through the combination of economic expertise with the application of sophisticated quantitative methods and cutting-edge computing power.
The Center for Applied Statistics and Economics forms the institutional building block that makes the reservoir of highly qualified statisticians, mathematicians and economists at Berlin's universities and scientific institutions aware of these upcoming problems and recruits them as members.
The Research Team
Prof. Dr. Peter Bank Peter Bank, born 1971, has been Junior Professor of Stochastic Analysis and Financial Mathematics at the Institute of Mathematics of Humboldt-Universität zu Berlin since May 2002 and is also a member of the DFG Research Center "Mathematik für Schlüsseltechnologien". His research interests cover applications of stochastic analysis in general, in particular stochastic optimization problems motivated by financial mathematics.
Prof. Dr. Wolfgang Härdle Wolfgang Härdle, born 1953, has been Professor of Statistics at the School of Business and Economics of Humboldt-Universität zu Berlin since 1992. He is the speaker of the Sonderforschungsbereich 373 - Quantification and Simulation of Economic Processes. His research deals with smoothing methods, discrete choice models, the statistical modelling of financial markets and computer-aided statistics. His most recent work concerns the modelling of implied volatilities and the statistical analysis of financial risk.
Prof. Dr. Kurt Helmes Kurt Helmes, born 1949, has been Professor of Operations Research at Humboldt-Universität zu Berlin since 1995. He studied mathematics (Dipl.-Math.) and obtained his doctorate (Dr. rer. nat., 1976) and habilitation (1982). He was Visiting Associate Professor (1985-86) and Associate Professor (1986-95) at the University of Kentucky. His research focuses on stochastic models of operations research and adaptive stochastic control theory.
Prof. Dr. Lutz Hildebrandt Lutz Hildebrandt, born 1945, has been Professor of Marketing at Humboldt-Universität zu Berlin since 1994. From 2000 to 2002 he was Dean of the School of Business and Economics of the HU. His scientific activities lie in particular in the areas of quantitative strategic success factor research, marketing mix management, methods of marketing research and international marketing.
Dr. Ulrich Horst Ulrich Horst, born 1970, has been a research associate at the DFG Research Center "Mathematik für Schlüsseltechnologien" since October 2002. His research interests lie generally in the fields of financial mathematics and mathematical economics. A particular focus is the mathematical modelling and analysis of microstructure models for price formation in financial markets.
Prof. Dr. Uwe Küchler Uwe Küchler, born 1944, has been Professor of Probability Theory and Mathematical Statistics at the Institute of Mathematics of Humboldt-Universität zu Berlin since 1982. He is a subproject leader in the Sonderforschungsbereich 373 - Quantification and Simulation of Economic Processes. His research is mainly in the statistics of stochastic processes and their applications. His current special interest is in stochastic differential equations with memory.
Prof. Dr. Marcel Paulssen Marcel Paulssen, born 1966, has been Junior Professor of Industrial Marketing at Humboldt-Universität zu Berlin since December 2002. His research activities focus on goals and goal-directed behaviour, the dynamics of customer relationships, and the influence of attachment styles on customer relationships.
Prof. Dr. Christian Schade Christian Schade, born 1962, has been head of the Institute of Entrepreneurship and Innovation Management at Humboldt-Universität zu Berlin since May 2000. In May 2001 he was co-opted into the SFB 373. His scientific work focuses on the analysis of risk aspects and bounded rationality in game- and decision-theoretic situations, in entrepreneurial behaviour and in markets, using among other tools the methods of experimental economics.
Prof. Dr. Vladimir Spokoiny Vladimir Spokoiny is Professor of Statistics at Humboldt-Universität zu Berlin and head of the research group "Stochastic Algorithms and Nonparametric Statistics" at the Weierstrass Institute for Applied Analysis and Stochastics. His research focuses on applied probability theory and statistics, both theoretical and applied, with an emphasis on algorithms, numerics and complexity. Application areas of his research are economics, engineering, medicine and the social sciences. Specific applications include the modelling of complex structures with nonparametric methods, risk management in financial markets and the development of efficient stochastic algorithms.
Prof. Dr. Richard Stehle Prof. Richard Stehle, born 1946, has been head of the Institute of Banking, Stock Exchanges and Insurance since October 1992. Since 1995 he has been a subproject leader in the Sonderforschungsbereich 373 - Quantification and Simulation of Economic Processes. Prof. Stehle obtained his doctorate at the Graduate School of Business of Stanford University and his habilitation in Mannheim. His main field of research is the functioning of capital and foreign exchange markets, in particular the stock exchange.
Prof. Dr. Harald Uhlig Prof. Harald Uhlig, born 1961, has been Professor of Macroeconomics at the School of Business and Economics of Humboldt-Universität zu Berlin since 2000 and is a visiting professor at the CentER for Economic Research, Tilburg. His research focuses on applied quantitative theory and applied dynamic stochastic equilibrium theory in the areas of business cycles, growth, dynamic contracts, psychological foundations of dynamic decision theory and economic policy.
Prof. Dr. Bengt-Arne Wickström Bengt-Arne Wickström, born 1948, has been Professor of Public Economics at the School of Business and Economics of Humboldt-Universität zu Berlin since 1992. After studying mathematics and physics at Bowdoin College in Brunswick, Maine (B.A., 1969) and at the State University of New York at Stony Brook (M.A., 1970), he completed his graduate work in economics at the State University of New York at Stony Brook (M.A., 1973, Ph.D., 1975). His general research interests lie in the field of welfare theory. Particular emphases are the economic theory of justice and the theory of politics, as well as social evolution, environmental economics and the theory of old-age security.
University Staff Associated with CASE
Dipl.-Vw. Gökhan Aydınlı His major interests are computational statistics and quantitative
finance. Furthermore he works in the area of e-learning and e-teaching of statistics in
distributed environments. Another topic of his research is the application of spreadsheets
in statistics and Client-Server based statistical computing.
MA Michal Benko He is interested in nonparametric regression problems and functional data methods in financial applications, and is also working on projects related to computer-intensive methods and e-learning.
Dipl.-Vw. Oliver Blaskowitz In his research he is focusing in particular on trading strategies and applied quantitative finance. He follows with great interest issues related to credit risk, yield curve and volatility modeling. Currently, jointly with W. Härdle, he is extending his former work on "Trading on Deviations of Historical and Implied Densities" and "Skewness and Kurtosis Trades" in his new paper "Probability Trading".
MA Szymon Borak His major interests are Levy processes and financial markets.
MA Ying Chen Her major interests are dynamics of interest rate curves, stochastic modeling
of interest rate processes.
Dipl.-Vw. Matthias Fengler His research interests are semiparametric implied volatility modeling and state price density estimation.
Dr. Zdenek Hlavka He focuses on robust bootstrap methods in sequential statistics.
Dipl.-Vw. Danilo Mercurio His research interests are twofold. Primary: financial econometrics, continuous time econometric modelling, change point estimation, volatility modelling. Secondary: mathematical and computational finance.
PD Dr. Marlene Müller She concentrates, among other topics, on semiparametric modeling in economics.
Dipl.-Kfm. Rodrigo Witzel He is dealing with the development of internet based environments for teaching and learning applied statistics as well as ontology based knowledge
engineering, especially automatic generation of ontological knowledge from teaching material.
Dipl.-Kfm. Uwe Ziegenhagen He is dealing with design and implementation of statistical
algorithms and software in Java, XploRe and C++. Further interest is Client-server
based computing.
PhD Students Associated with CASE
Taleb Ahmad His major interest is classification and regression trees and secondary interest
is computational statistics.
Kai Detlefsen His major interests are dynamics of risk measures, calibration of jump diffusion
processes, calibration of stochastic volatility option pricing models.
Enzo Giacomini His major interests are quantitative finance, neural network applications in
quantitative finance, credit risk modeling and credit scoring.
Rouslan Moro Currently he is interested in the application of statistical methods such as support vector machines to the problem of assessing company competitiveness.
Marc Tisserand His research interests are option theory (pricing and hedging), quantitative finance, Monte Carlo simulation and interest rate theory.
Hizir Sofyan His primary research is on data mining, cluster analysis and fuzzy techniques.
Furthermore he is interested in computational statistics.
CASE Advisory Board
Prof. Dr. Jörg Breitung Universität Bonn, Institut für Ökonometrie und Operations Research
Prof. Dr. Günter Franke Universität Konstanz, Fachbereich Wirtschaftswissenschaften
Prof. Dr. Ursula Gather Universität Dortmund, Fachbereich Statistik, Mathematische Statistik und industrielle Anwendung
Prof. Dr. Joel Horowitz Northwestern University, Department of Economics
Prof. Boris A. Portnov, D.Sc. University of Haifa, Department of Natural Resources and Environmental Management
Dipl.-Math. Gerhard Stahl Bundesanstalt für Finanzdienstleistungsaufsicht, Bonn
Prof. Dr. Klaus Zimmermann Deutsches Institut für Wirtschaftsforschung, Berlin
9 E-Books
All books are available online at http://www.xplore-stat.de/ebooks/ebooks.html
XploRe Learning Guide
XploRe Application Guide
COMPSTAT 2002
Applied Quantitative Finance
Einführung in die Statistik der Finanzmärkte
Applied Multivariate Statistical Analysis
Partially Linear Models
Computer-Aided Introduction to Econometrics