Research Related to Strong-Motion Real-Time Earthquake Warning Systems

Ta-liang Teng (1), William H. K. Lee (2)
1. Southern California Earthquake Center, University of Southern California
2. U.S. Geological Survey

Tzay-Chyn Shin (1), Nai-Chi Hsiao (2)
1. Central Weather Bureau
2. Seismological Center, Central Weather Bureau
FINAL REPORT TO THE CENTRAL WEATHER BUREAU
ON
Development of Strong-Motion Rapid Reporting
and Early Warning System – with Assessment on
International Earthquake Prediction Research
Submitted by
Ta-liang Teng
Southern California Earthquake Center
University of Southern California
Los Angeles, California 90089-0740
William H. K. Lee
U.S. Geological Survey
Menlo Park, California 94025
Also at 862 Richardson Court
Palo Alto, California 94303
November 15, 2005
Executive Summary
This contract performs work that assists the Central Weather Bureau (CWB) in
improving and executing the on-going large seismological observation and research
programs of its Seismological Center. The work further helps expand the capability of
the Earthquake Rapid Reporting System (RRS) and the Earthquake Early Warning
System (EWS) to include more realistic moment magnitude information and near
real-time seismic intensity, damage, and casualty assessment, so that governmental
emergency response agencies can more effectively dispatch rescue resources. A good
part of this contract work also helps in defining instrumentation specifications during
acquisition activities, instrument calibration, data quality control, and database
construction – all directly related to the successful Taiwan Strong Motion
Instrumentation Program (TSMIP). A number of scientific papers have also been
published that, together with earlier published works, have made the Taiwan Central
Weather Bureau a well-known governmental agency in the world.
Section A:
(1) A published scientific paper:
A High Frequency View of 1999 Chi-Chi, Taiwan,
Source Rupture and Fault Mechanics
Youlin Chen1, Charles G. Sammis2, and Ta-liang Teng2
Abstract
High-frequency band-pass filtering of broadband strong-motion seismograms
recorded immediately adjacent to the fault plane of the 1999 Chi-Chi, Taiwan
earthquake reveals a sequence of distinct bursts, many of which contain quasi-periodic
sub-bursts with repeat times on the order of a few tenths of a second. The sub-events
that produce these bursts do not appear in conventional slip-maps, presumably because
of the low-pass filtering used in the waveform inversions. The origin times, locations
and magnitudes of about 500 of these sub-events were determined. Those closest to the
hypocenter appear to have been triggered by the P wave, while the earliest sub-events at
greater distances are consistent with a rupture front propagating at about 2.0 km/s. Later
sub-events at a given distance follow Omori’s law and may be interpreted as aftershocks
that begin before the Chi-Chi rupture has terminated. The frequency-magnitude
distribution of these sub-events has b-value equal to 1.1. The sub-events are clustered in
space, with most clusters located at shallow depth along the Chelungpu surface rupture.
The larger sub-events tend to occur at greater depth, while the small sub-events are only
located at shallower depths. Cluster locations generally coincide with the large
amplitude slip-patches found by source inversions at lower frequencies. Relocation of
the deeper sub-events suggests a non-planar rupture surface where dip increases with
depth at the southern end.
Introduction
The 1999 Chi-Chi, Taiwan, earthquake (Mw = 7.6) was the largest earthquake to
strike Taiwan in the 20th century. Geological field observations and geophysical studies
have shown that this earthquake ruptured about 85 km of the Chelungpu fault producing
a complicated surface trace. For most of its length, the fault rupture strikes nearly northsouth, dips to the east at a shallow angle (20º - 30º), and is dominated by thrust motion.
At the northern end, the fault bends toward the east and exhibits significant strike-slip
displacement (Lee et al., 2000).
In addition to teleseismic and GPS data, a dense network of broad-band
accelerometers operated by the Central Weather Bureau of Taiwan (CWB) recorded
unprecedented high-quality near-field strong motion data. A number of studies have
used this data to characterize the ground motion and measure source parameters.
Generally, the fault motion was very different on the southern and northern segments of
the Chelungpu fault. The southern segment showed large accelerations but small
displacement and slip velocity, while the northern segment showed large displacement
(over 9m) and an unusually large slip velocity of over 4m/sec (Chung and Shin, 1999;
Huang et al., 2001; Wu et al., 2001). The mainshock released a total moment of 3.38 × 10^20 Nm (Harvard CMT solution) over about 30 – 40 sec with an average rupture
velocity of 2.5 – 2.6 km/sec (Wu et al., 2001; Ma et al., 2001; Zeng and Chen, 2001).
The moment release rate peaked between 19 and 25 sec (Chi et al., 2001). The inversion
of strong-motion velocity waveforms by Chi et al. (2001) found that the source was
composed mainly of three large sub-events. The first was primarily a thrust event
located near the hypocenter and extending 30 km to the north. The second was located
at shallow depth near the northern end of the rupture. Slip in this sub-event was oblique
with a temporal rotation of the rake from pure thrust to strike slip producing large
ground velocities. The third was located at the southern end of the rupture. The first two
sub-events were also found by other studies, but they were not clearly separable (e.g.
Ma et al., 2001; Zeng and Chen, 2001). All three sub-events were shallower than 10 – 15 km (Wu et al., 2001).
The Chi-Chi mainshock triggered over 700 strong-motion stations across the island
with an average station spacing of 5km. Shin and Teng (2001) used more than 400
records around the Chelungpu fault to construct a movie of the time averaged (~1sec)
and spatially interpolated (<5km) surface acceleration during the rupture. These direct
observations of surface motion did not require a crustal velocity model or wave
propagation theory. Some surface patches had accelerations exceeding 600 gal,
probably indicating large sub-events located directly below. Many more such sub-events appear in the movie than are imaged by the slip inversions. Moreover, the movie
events do not appear to follow an orderly progression. Many occurred at times
significantly later than would be expected if they were triggered by a propagating
rupture front. These late sub-events may indicate discontinuous rupture propagation on
the Chelungpu fault, or they may be early aftershocks that cannot be distinguished from
the Chi-Chi mainshock. It is not possible to decide between these two hypotheses using
teleseismic broadband and GPS data (Ma et al., 2001; Wu et al., 2001; Zeng and Chen,
2001; Chi et al., 2001). Current waveform inversion studies use low-pass filtered
velocity or displacement time series numerically integrated from strong-motion
accelerograms. The low-pass corner is usually significantly below 1 Hz due to the low
resolution of the earth model and poorly known site responses. The absence of sub-events in waveform inversion studies, and to a lesser extent in the surface movie, is
probably due to the loss of high-frequency information in the low-passed input
waveforms.
In this study, we image the Chi-Chi rupture at high-frequency (20-50Hz) using the
near-field strong-motion accelerograms. After band-pass filtering, each accelerogram of
the Chi-Chi mainshock appears as a sequence of high-frequency bursts. The largest
were assumed to originate at subevents on the Chelungpu fault and were located using a
sub-array of stations nearest to the hypocenter. Once the bursts corresponding to each
sub-event were identified, the sources were relocated using a standard algorithm. The
relative sizes of these sub-events were then estimated and their distribution in space and
time analyzed.
High Frequency Seismic Records
A total of 441 digital strong-motion records of the Chi-Chi earthquake were
processed and disseminated on CDROM by Lee et al. (2001). From these we have
selected 49 stations around the Chelungpu surface trace using the criterion that the
distance from the station to the Chelungpu fault plane is less than 20 km (Figure 1).
These locations and other station information are given in Table 1, including whether
the station was equipped with Teledyne Geotech A900 (or A900A) or a Terra Tech IDS
(or IDSA) accelerometer. Both instruments have a flat response from DC to 50 Hz and a
full-scale range of ±2g. The 16-bit output was digitized at 200 or 250 samples/sec.
Figure 2 shows the original and filtered accelerograms from the E-W component
of station T084. The top trace shows the first 40 seconds of the original broadband
accelerogram. Successive traces show band-pass filtered versions of this trace in
progressively higher frequency bands beginning at 10-20 Hz and ending at 40-50 Hz.
The time axis of each trace is referenced to the origin time of the Chi-Chi mainshock,
and a few seconds of pre-P wave recording are included. The filtered waveforms were
produced using a 4th-order Butterworth filter and zero-phase shift. Some intriguing
characteristics of these high-frequency waveforms are summarized as follows:
1. High frequency band-pass filtering resolves the continuous 40-second long
broadband record into a sequence of distinct high frequency bursts. We assume that
each originates at a sub-event lying on or near the fault plane.
2. Some high-frequency bursts appear in all band-pass filtered accelerograms of a
given record, while others appear only in the low pass or in the high pass bands. These
differences may reflect differences in source size and distance. For example, a burst that
only appears in a low frequency band may correspond to a large sub-event far from the
recording station, its high-frequency components having completely decayed away. On
the other hand, a burst that only appears in higher frequency bands may come from a
small sub-event that is located very close to the recording station, its peak acceleration
frequency being well above the lower frequency bands. This interpretation was useful in
correlating sub-events recorded at different stations; a necessary first step to location.
The short duration of the bursts (less than one second) and the fact that some are only
seen in the highest frequency bands imply that they are individual sub-events and not
the high frequency spectral tail of a larger event.
3. Some high-frequency bursts recorded on the E-W component do not appear on
the other two components at a given station. This observation may be a directivity effect
since these differences appear to be consistent with dip-slip displacement on the sub-events.
4. The bursts consist mostly of S-waves. In the unfiltered accelerograms, the
amplitude of the initial P-wave is usually 5 to 10 times smaller than that of the initial S-wave. Even though the wave amplitudes are greatly reduced by the high-frequency
band-pass filtering, the resultant waves still have amplitude levels comparable to the
initial P-waves, indicating that they are S-waves.
5. Many of the individual bursts appear to be composed of multiple sub-bursts with
similar waveforms, as shown in the time-expanded burst in Figure 2. These multiple sub-bursts often have systematically decreasing or increasing amplitudes and are nearly
evenly spaced in time. We interpret these sub-bursts as corresponding to a series of
stick-slip instabilities on a given slip patch while it is being rapidly loaded by fault slip
during the earthquake. These multiple bursts appear in all high-frequency bands and on
all directional components.
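The band-pass decomposition described above (a 4th-order Butterworth filter applied forward and backward for zero phase shift, as used to produce Figure 2) can be sketched as follows. The trace here is a synthetic stand-in, not a Chi-Chi record: a 1 Hz "broadband" motion plus a short 25 Hz burst, with invented amplitudes and timing purely to illustrate the processing.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(trace, fs, f_lo, f_hi, order=4):
    """Zero-phase band-pass: 4th-order Butterworth run forward and backward."""
    nyq = 0.5 * fs
    b, a = butter(order, [f_lo / nyq, f_hi / nyq], btype="band")
    return filtfilt(b, a, trace)  # forward-backward pass -> zero phase shift

fs = 200.0                               # samples/sec, as for the A900-class recorders
t = np.arange(0.0, 40.0, 1.0 / fs)       # 40-s record
# Synthetic stand-in for an accelerogram: slow 1 Hz motion plus a short
# 25 Hz burst at t = 10 s of the kind discussed in the text.
trace = np.sin(2 * np.pi * 1.0 * t)
trace += np.exp(-((t - 10.0) / 0.1) ** 2) * np.sin(2 * np.pi * 25.0 * t)

bands = [(10, 20), (20, 30), (30, 40), (40, 50)]
filtered = {f"{lo}-{hi} Hz": bandpass(trace, fs, lo, hi) for lo, hi in bands}
# The 25 Hz burst survives only in the 20-30 Hz band; the 1 Hz motion is
# suppressed in all four bands.
```

Applied to a real record, each band-passed trace would then be scanned for distinct bursts, whose peak times serve as the arrival picks used in the location procedure below.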
Locating the High-Frequency Sub-Events
The challenge in locating the sub-events was to identify the bursts produced by
any given sub-event on the different records, especially since bursts from different
sub-events can arrive in a different time order at different stations. This problem
is similar to that of locating members of a dense cluster of aftershocks immediately
following a large mainshock. Such aftershocks are usually sorted out using brute force to compute all possible origin times and locations in the aftershock volume.
We simplified the problem by assuming, at first, that all seismic bursts originated
from sub-events lying on the Chelungpu fault plane. Once the bursts
corresponding to a given sub-event were identified on different records, we relaxed
this assumption by using a conventional location algorithm to refine the locations.
Chelungpu Fault Model
The Chelungpu surface rupture has a complicated geometry, especially at the
northern end where it bends toward the east and at the southern end where it bends
toward southwest. A simple calculation shows that differences between a single fault
plane and actual fault geometry can lead to a 0.3 sec error in travel time from sources on
the fault to stations in the array. Since this uncertainty is too large to sort out the bursts,
the fault was represented by the five rectangular planes shown in Figure 1, with
orientations, strikes, dips, and dimensions given in Table 2. Each plane was divided into
1km x 1km patches, each of which was regarded as a potential source. The origin time
(1999/09/20 17:47:15.85), epicenter (23º51.15’N 120º48.93’E), and focal depth (8.0km
as determined by CWB) of the Chi-Chi hypocenter were also used in the location
algorithm. There is some overlap of these planes such that there can be two hypocenters
for the same epicenter. The location algorithm defined below picks the hypocenter that
gives the best fit to the data. In fact, very few events occurred in the overlaps, and
those that did are shallow, where the difference between overlapping planes is small.
Data Processing
Data files on CDROM from Lee et al. (2001) were organized into four quality
groups (A to D from the best to the worst) based on whether the pre- and post-event
records were long enough, whether any component was unrecorded, and whether they
contain other defects such as noise spikes. Most of the records from our 49 closest
stations were A and B quality as detailed in Table 1.
All A quality accelerographs had accurate absolute timing synchronized by their
own GPS clocks. Most of the remaining accelerographs were not equipped with GPS
timing devices, but the relative times were based on their internal clocks. The apparent
timing errors have been corrected, so that the near-field Chi-Chi mainshock records are
good to 1 second at epicenter distance less than 50km (Lee et al., 2001). These time
corrections are listed in Table 1. A recalibration was necessary because the local
velocity model used in this study is different from the Taiwan regional model used by
Lee et al. (2001). We used a 1-D velocity model from the tomography study of Ma et al.
(1996) for the southwestern Taiwan region (Table 3). It is also the velocity model used
in Ma et al. (2001). We first generated a P-wave reference travel-time curve using the
local P-wave velocity model, and checked the curve against the P-wave arrivals at all
selected stations. For the stations with GPS timing, the mean difference between the
observed and calculated P-wave travel time is 0.13sec, with a standard deviation of
0.15sec. The scatter of non-GPS stations is larger, probably due to clock errors but also
possibly reflecting lateral heterogeneity, differences in station elevations, and errors in
identifying the onset of the emergent P wave. To make the wave propagation consistent
with the local velocity model, we applied a time shift to the records by lining up the P-wave arrival with the P-wave reference travel-time curve. The time shifts are given in Table 1.
In order to use amplitude data to help locate the sub-events, we modeled the
attenuation using a standard relation for the decay of amplitude A with distance r from
Aki and Richards (1980):
A(r) = Ao e^(-πfr/(Qv))                                  (1)
where Ao is the amplitude at the source, f is frequency, Q is the quality factor, and v is
velocity. We first ignored geometrical spreading, site- and source-effects. In Figure 3,
the maximum amplitude of the horizontal component during the first 2 seconds after the
S-wave arrival is plotted as a function of hypocentral distance for the pass-band
intervals 10-20Hz, 20-30Hz, 30-40Hz and 40-50Hz. We assume that this short initial
part of the S wave is generated at the Chi-Chi hypocenter, and does not contain
radiation from other sub-events (Chen et al., 2001). The theoretical decay calculated
from eqn. (1) for each pass-band is also plotted in Figure 3. For these theoretical curves
we use v = 3.21 km/s, the average S-wave velocity of the crust in Taiwan, Q = 250,
taken from seismic source inversion studies (e.g. Wu et al., 2001; Chi et al., 2001), and
the mean frequency in each band-pass window. At distances less than about 40km, the
observational data are well represented by eqn. (1), with the exception of data at the
four closest stations, where significant deviations are probably introduced by the
radiation pattern. Because the lower frequency data show less scatter than the higher
frequency data, we chose to use the 20-30 Hz frequency band in the location algorithm.
In this band, the amplitude is attenuated by about one order of magnitude for every
20km of propagation. Since the range between the largest and smallest burst on each
record is about an order of magnitude, each station is probably seeing sub-event sources
out to distances of about 20 km.
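The decay rate quoted above can be checked by evaluating eqn. (1) directly with the stated values (f = 25 Hz for the 20-30 Hz band, Q = 250, v = 3.21 km/s). This is a plain numerical evaluation of the formula with an arbitrary source amplitude, not part of the authors' processing.

```python
import math

def amplitude(r_km, f_hz, q=250.0, v_kms=3.21, a0=1.0):
    """Anelastic amplitude decay with hypocentral distance, eqn (1)."""
    return a0 * math.exp(-math.pi * f_hz * r_km / (q * v_kms))

# Drop over 20 km of propagation in the 20-30 Hz band (f taken as 25 Hz):
ratio = amplitude(0.0, 25.0) / amplitude(20.0, 25.0)
# ratio comes out near 7 from anelastic loss alone; adding the 1/r
# geometrical-spreading factor pushes the total drop past an order of
# magnitude, consistent with the statement in the text.
```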
A typical burst is comprised of a few oscillations as shown in the expanded
accelerograms in Figure 2. For simplicity, we used the peak of the oscillations as the
burst arrival time, because it was easier and more accurate to pick than the initial time.
Since the duration of each burst is less than 0.4 seconds, replacing initial time with peak
time did not significantly affect the final results. We used individual directional
components to locate bursts, but all three components were used to determine the final
location.
Location Algorithm
The locations and origin times of the sub-events were determined by the brute force
approach sketched in Figure 4. We calculated the travel time from each 1km x 1km
patch on the composite fault plane in Fig. 1 to each recording station. Beginning with
the origin time to for the Chi-Chi mainshock, we calculated the predicted arrival time at
each station for potential sub-events with origin times to + j∆t on each patch. The time
interval ∆t between potential sub-events on each patch was set to 0.1 sec. The
calculated arrival times formed the matrix (t_calc)_ij^k, where i indexes the asperity patch, j
indexes the origin time, and k indexes the station. At each station k, we also formed a
vector of the observed arrival times of the bursts, where (t_obs)_n^k is the observed arrival
time of the nth burst at station k. For each fault patch i and origin time j, we compared
its predicted arrival times with the nearest observed arrival (t_obs)_min^k at station k by
calculating the time difference ∆t_ij^k = (t_obs)_min^k − (t_calc)_ij^k.
We then formed a functional W_ij that was minimized to find the optimal origin time
and location for each event:

W_ij = ( Σ_{k=1}^{Ns} ∆t_ij^k ξ_k ) / ( Σ_{k=1}^{Ns} ξ_k )        (2)
where ξ_k is a weighting factor and the summation is over all Ns stations. Because of
the high frequencies, we do not expect the energy from a patch to be recorded at more
distant stations. We thus define the weighting factor to include an attenuation factor
from eqn. (1) and a factor of 1/r for geometrical spreading.
ξ_k = (1/r) e^(-πfr/(Qv))                                (3)
where we used f = 25 Hz and v = 3.21 km/sec. We obtained our best results when we
reduced Q from 250 to 100, thereby increasing the relative weight of the nearest stations.
In fact Q = 100 is still a reasonable value for shallow crust in the source area.
Minimizing the functional Wij alone did not give enough constraint to locate all the
sub-events. For example, if a source was close to a single station but far away from all
other stations, the algorithm yielded multiple source locations of equal weight all lying
about the same distances from the closest station. To eliminate such pathological
geometries, we introduced the following additional criteria: 1) we required at least 4
stations within 20 km of a potential source, 2) for all stations closer than 20 km, we
required the time difference between the observed and calculated arrivals to be less than
a prescribed T in the range 0.1 < T < 0.2 sec. This criterion was based on the observed
0.13 sec mean difference between observed and calculated travel times for stations with
GPS timing (and the other stations which were time corrected using this curve).
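The brute-force scheme above can be sketched as a grid search over (patch, origin time) candidates, each scored by the weighted mean misfit W_ij of eqn (2) with the distance weighting ξ_k of eqn (3). The geometry, the reduced Q = 100, and the observed burst times below are toy values invented for illustration; the real implementation works over ~4000 patches, 400 origin times, and picked burst arrivals.

```python
import numpy as np

V_S, F, Q = 3.21, 25.0, 100.0            # S velocity (km/s), band center (Hz), reduced Q

def weight(r):
    """xi_k = (1/r) exp(-pi f r / (Q v)), eqn (3)."""
    return np.exp(-np.pi * F * r / (Q * V_S)) / r

def locate(patches, stations, t_obs, t_grid):
    """Return the (patch index, origin time) minimizing the functional W_ij."""
    best, best_w = None, np.inf
    for i, p in enumerate(patches):
        dists = np.linalg.norm(stations - p, axis=1)   # patch-station distance (km)
        tt = dists / V_S                               # straight-ray S travel times
        for t0 in t_grid:
            pred = t0 + tt                             # predicted arrivals
            # time difference to the nearest observed burst at each station
            dt = np.array([np.min(np.abs(t_obs[k] - pred[k]))
                           for k in range(len(stations))])
            xi = weight(dists)
            w = np.sum(dt * xi) / np.sum(xi)           # eqn (2)
            if w < best_w:
                best, best_w = (i, t0), w
    return best, best_w

# Toy example: 2 candidate patches, 4 stations, one burst radiated from
# patch 1 at origin time t0 = 3.0 s after the mainshock.
patches = np.array([[0.0, 0.0, 8.0], [10.0, 5.0, 2.0]])       # x, y, depth (km)
stations = np.array([[5.0, 0.0, 0.0], [12.0, 8.0, 0.0],
                     [8.0, -2.0, 0.0], [15.0, 2.0, 0.0]])
true_tt = np.linalg.norm(stations - patches[1], axis=1) / V_S
t_obs = [np.array([3.0 + tt]) for tt in true_tt]              # one burst per station
t_grid = np.round(np.arange(0.0, 6.0, 0.1), 1)                # 0.1 s increments
best, w = locate(patches, stations, t_obs, t_grid)
# best recovers the true source: patch 1 with origin time 3.0 s.
```

The additional criteria in the text (at least 4 stations within 20 km, misfit below a prescribed T at every close station) would be applied as acceptance tests on each candidate before it is kept.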
Sub-event Locations
The five planes used to model the geometry of the Chelungpu fault plane were
divided into approximately 4000 patches, each with area 1 km2. Travel times from each
patch to all stations were computed for origin times between to and to + 40 sec. in 0.1
sec increments. This resulted in a total of 1.6 million potential sources for which the
functional Wij was calculated. Of these, only about 500 solutions had small values of Wij
and satisfied the two additional constraints. Epicenters of the sub-events located using
each component of the seismograms independently are shown in Figs. 5-7. In these
figures, the time following to of the mainshock was divided into a sequence of five
second intervals and the sub-events belonging to each were assigned a different symbol.
Note that all sub-events are concentrated in the first 25 sec, and that almost all are less
than 12 km deep, consistent with other broadband seismic source studies of the Chi-Chi
earthquake (e.g. Ma et al. 2001; Zeng and Chen, 2001; Chi et al. 2001). The temporal
and spatial distribution of the sub-events approximately follows the Chelungpu rupture
history, initiating at the hypocenter and propagating bi-laterally to the north and south.
Spatially, the sub-events cluster in the shaded areas labeled A–G in Figs. 5-7. Individual
clusters, or combined nearby clusters, map onto areas of maximum slip found by the
inversion of low-frequency seismic data. However, the smaller sources of high-frequency bursts located here give a much more detailed picture of the rupture evolution.
Figure 8 compares the sub-event locations determined from the E-W horizontal
components with the locations of major energy release read from the movie of surface
accelerations from Shin and Teng (2001). The origin times of the major surface
acceleration events in the movie were estimated by subtracting the travel time to the
fault plane directly below. Each panel in Fig. 8 shows the activity in a five second time
interval. The epicenter of the Chi-Chi event is indicated by the solid star. The symbols
for the subevents in each panel are the same as in Figs. 5-7.
During the first interval (0 – 5 sec), a surface acceleration event was observed at the
epicenter, while the sub-events occurred at shallow depths up-dip and slightly to the
north of the hypocenter. As we shall show later, these first sub-events nucleate so soon
after the origin time that they appear to be triggered by the P-wave from the hypocenter.
During the second interval (5-10 sec) a surface acceleration event occurs near the
surface trace and the shallow sub-events spread north and south.
During the third interval (10 to 15 sec) the rupture propagated in all directions on
the Chelungpu fault plane, as evidenced by subevents at distances from 10 to 20 km
around the hypocenter. In this time interval, the rupture extended to a depth of about 12
km. All major surface acceleration events occurred to the north of the hypocenter, and
each corresponds to a cluster of subevents. On the other hand, many clusters of
subevents did not have a corresponding surface acceleration event. Clusters of
subevents to the north of the hypocenter, which were active about 12 sec after the origin
time, correspond to local maxima in some slip inversions (e.g. Ma et al., 2001; Wu et al.,
2001).
In the fourth and fifth intervals (15 to 25 sec) the sub-events propagate mainly at
shallow depth towards the north and south following the surface trace, but propagation
to the east stops at depths greater than 12 km. Delayed sub-events are still observed at
depths near 12 km around the hypocenter in the interval 15 to 20 sec. Again major
surface acceleration events are observed near some sub-event clusters. Most sub-event
clusters are located at shallow depth from the north to south, but only one major
acceleration event is observed to the north (at 27 sec). No peak in ground acceleration
was reported to the north where the Chelungpu surface trace curves to the east.
Although the temporal and spatial distributions of sub-events identified using N-S
and vertical components are similar to those found using the E-W component, there are
some differences, particularly in cluster G to the north of the hypocenter. While most
sub-events in this cluster that were located using the E-W components occurred between
10-15 sec (Figure 8c), those located using the N-S components occurred between 15-20
sec. This suggests crustal anisotropy where S waves with E-W polarization travel faster.
Cluster F was not observed on the vertical components implying that slip in these
deeper sub-events is mainly an E-W thrust. This also explains why more sub-events
were located using the E-W components than using either the N-S or vertical
components. Similarly Shin and Teng (2001) identified more major surface acceleration
events on the E-W components than on the N-S components. Moreover, the major
acceleration events that they observed after 25 sec on the E-W components did not
appear on the N-S components. Neither did we locate any sub-events after 25 sec using
N-S components.
Relocating the Sub-events
Once arrivals from the different sub-events were identified and preliminary
locations obtained, these locations were refined using a standard location algorithm. We
found the optimal location and origin time for each sub-event by searching a domain
centered at its preliminary location and origin time. The final location uncertainty was
less than 1 km in the horizontal direction, 0.5 km in depth, and 0.1 sec in
origin time.
The locations and origin times of most sub-events did not change significantly
except those in cluster G. The origin times of events in this deep cluster did not change
much, but their locations moved deeper and more to the south relative to their
preliminary locations. Since cluster G is located on segment B of the multi-plane fault
model, this suggests that the dip of the Chelungpu fault increases with depth at its
southern end.
Estimating Sizes of the Sub-Events
It is impossible to determine the magnitudes of the sub-events by conventional methods
using only the 20-30 Hz narrow band seismograms that we used to determine their
locations. We can, however, estimate the relative magnitudes of the individual sub-events by using the ratios of their peak accelerations (corrected to a common distance).
All magnitude scales are based on the logarithm of displacement amplitude ratios of the
form
M_j = log(A_j / Ao)                                      (4)
In this expression, the amplitudes are corrected to a common distance and instrument
response, A_j is the amplitude of a magnitude M_j event, and Ao is the amplitude of an
arbitrarily defined M = 0 earthquake. If M_max is the magnitude of the sub-event having
the largest observed amplitude A_max, then the magnitude of any other sub-event M_j
can be found from its amplitude A_j using
M_j = M_max + log(A_j / A_max)                           (5)
Since our data are all in the same narrow frequency band, we can replace the ratio of
displacement amplitudes in (5) with the corresponding measured ratio of observed
accelerations.
We normalized all amplitudes back to the source using

A_k^j(r) = A_k^j(0) e^(-γ r_j) / (r_j S_k)               (6)

where A_k^j(r) is the peak acceleration from event j measured by station k at distance r_j
from the event, the attenuation coefficient is γ = πf/(Qv), and S_k is the station
correction at station k. For each event, the acceleration amplitude at the source, A^j(0),
was calculated as the geometric average of the A_k^j(0) over the stations k that were
used to locate the event. The largest A^j(0) was defined as A_max(0) and eqn. (5) was
used to calculate the relative magnitudes of the other sub-events.
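The normalization and relative-magnitude steps of eqns (5)-(6) can be sketched as follows. Station corrections S_k are set to 1 for simplicity, and the amplitudes and distances are invented; only the arithmetic of the correction and the geometric averaging follows the text.

```python
import numpy as np

F, Q, V = 25.0, 250.0, 3.21
GAMMA = np.pi * F / (Q * V)          # attenuation coefficient (1/km), eqn (6)

def source_amplitude(a_obs, r_km, s_k=1.0):
    """Invert eqn (6) for the source amplitude: A(0) = A(r) * r * S_k * exp(gamma r)."""
    return a_obs * r_km * s_k * np.exp(GAMMA * r_km)

def relative_magnitudes(events):
    """events: list of (peak amplitudes, distances) per sub-event.
    Returns magnitudes relative to the largest event (eqn 5 with M_max = 0)."""
    a0 = [np.exp(np.mean(np.log(source_amplitude(a, r))))   # geometric average
          for a, r in events]
    a_max = max(a0)
    return [np.log10(a / a_max) for a in a0]

# Two invented sub-events seen at two stations each; the second event is
# ten times smaller at the same distances.
events = [
    (np.array([50.0, 20.0]), np.array([5.0, 10.0])),    # (gal, km)
    (np.array([5.0, 2.0]), np.array([5.0, 10.0])),
]
mags = relative_magnitudes(events)
# The second event comes out exactly one magnitude unit below the first.
```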
Data from the E-W components were used to estimate relative local magnitudes of
the sub-events, which ranged from M_max down to M_max − 2.2. Figure 9 shows the
magnitude distribution in space, where the sub-events have been binned in 0.5
magnitude intervals. Note that the sub-event magnitudes tend to increase with depth.
Specifically, almost all of the largest sub-events are deeper than about 12 km, the
approximate lower boundary of the seismogenic layer. Smaller events with magnitudes
less than M_max − 1.5 are all located above 2 km. This is not an artifact of attenuation since the
small events were observed to distances in excess of 20 km and would have been
located, even if their hypocenters extended to depths below 2 km. Figure 10 shows the
frequency-magnitude distribution of these sub-events. The logarithm of the cumulative
number is linearly distributed with magnitude except at the high and low magnitude
ends. At the low magnitude end, the curve probably flattens because the sub-event
catalog is incomplete. Not all small sub-events were identified in this study because the
high-frequency bursts with amplitudes less than 1/10 of the maximum on each record
were discarded. At the large magnitude end, the cumulative number also departs from
the linear trend due to the limited sample time. Using the maximum likelihood estimate
based on the modified Gutenberg-Richter law derived by Kagan (2002), we found a =
8.1, b = 1.07, and Mt = Mmax – 0.2. In terms of the geometrical interpretation of the b-value
by King (1983), a b-value of 1.0 implies a planar spatial distribution with fractal
dimension D = 2b = 2.
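A maximum-likelihood b-value estimate can be illustrated as follows. The paper uses Kagan's (2002) modified (tapered) Gutenberg-Richter form; as a simpler sketch this applies the classical Aki (1965) estimator b = log10(e) / (mean(M) − Mc) to synthetic magnitudes drawn from a Gutenberg-Richter distribution with b = 1.1, the value reported for the sub-events. The sample size and seed are arbitrary.

```python
import numpy as np

def b_value_mle(mags, m_c, dm=0.0):
    """Aki maximum-likelihood b-value for magnitudes above completeness m_c.
    dm is the magnitude binning width (0 for unbinned magnitudes)."""
    m = np.asarray(mags)
    m = m[m >= m_c]
    return np.log10(np.e) / (np.mean(m) - (m_c - dm / 2.0))

rng = np.random.default_rng(0)
b_true, m_c = 1.1, 0.0
# Gutenberg-Richter magnitudes above m_c: exponential with rate b * ln(10).
mags = rng.exponential(scale=1.0 / (b_true * np.log(10)), size=20000) + m_c
b_est = b_value_mle(mags, m_c)
# b_est recovers a value near 1.1; D = 2b then gives a fractal dimension
# near 2, consistent with a roughly planar sub-event distribution.
```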
Evolution of the Sub-events in Time and Space
The origin times of sub-events are plotted as a function of their distance to the Chi-Chi hypocenter in Figure 11. While there is a range of arrival times at each distance, the
earliest origin times (indicated by plus signs) increase with hypocentral distance. A
linear fit to these first origin times has an inverse slope of about 2.06 km/s. Note
however that this line does not intersect the origin. As indicated in Figure 11,
connecting the earliest sub-event origin time to the hypocenter yields an inverse slope of
5.5 km/s, the P-wave velocity of the Taiwan crust. This suggests that the first sub-events
were triggered by the P-wave from the hypocenter, which then nucleated a rupture front
propagating at about 2 km/s that triggered later sub-events.
It is interesting that the closest events were located near the surface, directly upslope from the hypocenter (vertical triangles in Fig. 5). Our observation that they appear
to be triggered by the P wave in advance of the upwardly propagating rupture front is
consistent with similar observations by King et al. (1985) and Vita-Finzi and King
(1985) who reported the triggering of shallow events by deep fault-slip during large
events in the Gulf of Corinth. They proposed that such premature nucleation of shallow
seismicity may occur on larger flaws under lower normal stress near the free surface.
Sub-events that occurred later than the arrival of the rupture at each distance may
be delayed events and/or repeated slips of previously ruptured sources. We calculated
the time delay of their origin times relative to the arrival time of the rupture at the same
distance to test the hypothesis that they are aftershocks triggered by stress changes at the
rupture front. We used only sub-events larger than M max −1.5 based on the
completeness in Figure 10. The time delays were binned every 1 sec, and the binned
data were fitted to the modified Omori’s law:
dN/dt = A / (t + c)^p                                    (10)
A plot of log(dN/dt) as a function of log(t) in Figure 12 gives A = 40 events/s, c = 6.6
sec and p = 1.67.
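As a sketch of this fitting step, the block below generates synthetic rates from the quoted values (A = 40 events/s, c = 6.6 s, p = 1.67; not the actual binned sub-event counts) and recovers them with a nonlinear least-squares fit to Eq. (10):

```python
import numpy as np
from scipy.optimize import curve_fit

# Modified Omori law of Eq. (10): dN/dt = A / (t + c)^p.
def omori_rate(t, A, c, p):
    return A / (t + c) ** p

t = np.arange(0.5, 30.5, 1.0)             # bin centers for 1-s bins
rate = omori_rate(t, 40.0, 6.6, 1.67)     # synthetic, noiseless rates

# Positivity bounds keep (t + c)^p well defined during the iterations.
params, _ = curve_fit(omori_rate, t, rate, p0=(10.0, 1.0, 1.0),
                      bounds=(0.0, np.inf))
A_fit, c_fit, p_fit = params
print(f"A = {A_fit:.1f} events/s, c = {c_fit:.1f} s, p = {p_fit:.2f}")
```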
Also plotted on Figure 12 are the aftershock rates in the days and months
following the Chi-Chi earthquake. To effect a direct comparison with our sub-events,
we only included aftershocks that occurred within 2 km of the fault plane. Each line in
the figure represents a range of magnitudes from an assumed Mmax to Mmax − 1.5, the same range spanned by our sub-event analysis. For all three curves, Mmax = 6.6, 5, and
4, the rates are significantly higher than the extension of the sub-event curve. One
possibility is that the sub-events we located were significantly larger than magnitude 6.6,
the largest Chi-Chi aftershock, but this seems to be ruled out by their short duration. It
seems more likely that they are localized on the fault plane, and are not part of the
normal, more regional aftershock sequence.
Discussion
Recordings of the Mw = 7.6 Chi-Chi earthquake by a dense array of broad-band accelerometers located within a few kilometers of the Chelungpu fault plane have yielded
a unique view of this large event at high frequencies. When observed in the frequency
band 20-50 Hz, the Chi-Chi earthquake appears as a sequence of short (less than one
second) bursts, which we interpret as being generated by small sub-events on the fault
plane.
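A minimal sketch of this band-pass view, using a synthetic trace and an assumed 200-Hz sampling rate (the actual sampling rate is not stated in this excerpt):

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Zero-phase Butterworth band-pass, illustrating how a short high-frequency
# burst stands out once the large low-frequency rupture signal is removed.
fs = 200.0                                  # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)

# Low-frequency "main rupture" signal plus a small 30-Hz burst at t = 5 s.
trace = np.sin(2 * np.pi * 0.5 * t)
burst = np.where(np.abs(t - 5.0) < 0.25, np.sin(2 * np.pi * 30.0 * t), 0.0)
trace = trace + 0.1 * burst

# 20-50 Hz band-pass (corner frequencies normalized to the Nyquist, fs/2).
b, a = butter(4, [20.0 / (fs / 2), 50.0 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, trace)

# The burst dominates the filtered trace even though it is small in the raw one.
peak_time = t[np.argmax(np.abs(filtered))]
print(f"largest filtered amplitude near t = {peak_time:.2f} s")
```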
It is not surprising that a large earthquake like Chi-Chi is composed of many
smaller sub-events. Chi et al. (2001) found that the source was composed mainly of
three large sub-events. It is not much of a stretch to imagine that these sub-events are
made up of a collection of smaller events, which are themselves comprised of still
smaller events, and so on. This type of hierarchical event structure was proposed by
Sammis et al. (1999) to explain the fractal “Cantor Dust” structure of seismicity
observed on the San Andreas Fault near Parkfield CA. In fact, Das and Aki (1977) and
Aki (1979) noted that a large earthquake must be comprised of a collection of smaller
sources in order to explain the high-frequency seismic energy observed in the near field.
They quantified these ideas in the “barrier model” for large earthquakes. One
interpretation of the high frequency bursts observed here is that we are imaging Aki’s
barriers.
The short duration of our observed bursts suggests that they are small, distinct sub-events occurring on discrete slip patches, and are not the high-frequency spectral tails of larger events. The observation of some events in the highest frequency band (40-50 Hz) but not in the lower bands supports this interpretation. The observation that our sub-events tend to occur in clusters that are correlated in space and time with the larger sub-events found by conventional slip inversion and the movie of surface accelerations suggests that they may be structural components of these larger events.
Two intriguing observations in this study are that the sub-events follow the
Gutenberg-Richter frequency-magnitude distribution with b-value near 1, and that they
follow Omori’s aftershock distribution. The geometrical interpretation of b=1 is that this
distribution of events produces uniform slip on the fault plane (King, 1983). It is
important to again point out that in constructing the Omori’s law plot in Fig. 12, the
origin time at each distance was set at the arrival of the rupture front. The implication is
that the delayed response of local seismicity to the stress change at the rupture front at
very short times seems to follow the same physics which governs the delayed response
of regional seismicity to the stress change produced by the entire earthquake over very
much longer times. In a sense, local aftershocks begin before the earthquake has ended.
It appears that the Gutenberg-Richter and Omori Laws, the two most robust descriptors
of the spatial and temporal distribution of regional seismicity over time scales from
hours to years, also play a fundamental role in the spatial and temporal evolution of
individual earthquakes on time scales from seconds to minutes.
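The b-value estimation step can be sketched with Aki's maximum-likelihood estimator applied to a synthetic Gutenberg-Richter catalog with b = 1 (a simplification of the tapered-distribution fit actually used for Figure 10):

```python
import numpy as np

# Aki's maximum-likelihood b-value: b = log10(e) / (mean(M) - M_min).
# Under Gutenberg-Richter, magnitudes above M_min are exponentially
# distributed with rate beta = b * ln(10).
rng = np.random.default_rng(0)
b_true = 1.0
m_min = 0.0
mags = m_min + rng.exponential(scale=1.0 / (b_true * np.log(10.0)),
                               size=20000)

b_est = np.log10(np.e) / (mags.mean() - m_min)
print(f"estimated b = {b_est:.2f}")
```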
Acknowledgements:
The authors thank the Taiwan Central Weather Bureau for recording and providing
the magnificent strong-motion data set. This research would not have been possible without it.
References:
Aki, K., Characterization of barriers on an earthquake fault, J. Geophys. Res., 84, 6140-6148, 1979.
Aki, K and P. G. Richards, Quantitative seismology, Theory and methods, Vol. 1, W. H.
Freeman and Company, Chapter 5, 1980.
Chen, K. C., B. S. Huang, J. H. Wang, W. G. Huang, T. M. Chang, R. D. Hwang, H. C.
Chiu, and C. C. P. Tsai, An observation of rupture pulses of the 20 September 1999
Chi-Chi, Taiwan, earthquake from near-field seismograms, Bull. Seis. Soc. Am., 91,
1247-1254, 2001.
Chi, W. C., D. Dreger, and A. Kaverina, Finite source model of the 1999 Taiwan (Chi-Chi) earthquake derived from a dense strong-motion network, Bull. Seis. Soc. Am., 91, 1144-1157, 2001.
Chung, J. K. and T. C. Shin, Implications of rupture processes from the displacement
distribution of strong motions recorded during the 21 September 1999 Chi-Chi,
Taiwan earthquake, TAO 10, 777-786, 1999.
Das, S. and K. Aki, Fault planes with barriers: A versatile earthquake model, J. Geophys.
Res., 82, 5658-5670, 1977.
Huang, W. G. H., J. H. Wang, B. S. Huang, K. C. Chen, T. M. Chang, R. D. Hwang, H.
C. Chiu, and C. C. P. Tsai, Estimates of source parameters for the 1999 Chi-Chi,
Taiwan, earthquake based on Brune’s source model, Bull. Seis. Soc. Am., 91, 1190-1198, 2001.
Kagan, Y. Y., Seismic moment distribution revisited: I. Statistical results, Geophys. J.
Int., 148, 520-541, 2002.
King, G., The accommodation of large strains in the upper lithosphere of the Earth and
other solids by self-similar fault systems: the geometrical origin of b-value,
PAGEOPH, 121, 761-814, 1983.
King, G.C.P., Ouyang, Z.X., Papadimitriou, P., Jackson, J.A., Virieux, J., Soufleris, C.,
and Deschamps, A., 1985, The evolution of the Gulf of Corinth (Greece): an
aftershock study of the 1981 earthquakes, Geophysical Journal of the Royal
Astronomical Society, 80, pp. 667-693.
Lee, C. T., K. H. Kang, C.T. Cheng, and C. W. Liao, Surface rupture and ground
deformation associated with the Chi-Chi, Taiwan earthquake, Sino-Geotechnics, 81,
5-18, 2000.
Lee, W. H. K., T. C. Shin, K. W. Kuo, K. C. Chen, and C. F. Wu, CWB free-field
strong-motion data from the 921 Chi-Chi Earthquake: Processed acceleration files
on CD-ROM, Seismology Center, Central Weather Bureau, Taipei, Taiwan, 2001.
Ma, K. F., J. H. Wang, and D. Zhao, Three-dimensional seismic velocity structure of the
crust and uppermost mantle beneath Taiwan, J. Phys. Earth, 44, 85-105, 1996.
Ma, K.F., J. Mori, S.J. Lee, and S.B. Yu, Spatial and temporal distribution of slip for the
Chi-Chi, Taiwan earthquake, Bull. Seis. Soc. Am., 91, 1069-1087, 2001.
Sammis, C.G., R.M. Nadeau, and L.R. Johnson, How strong is an asperity?, J. Geophys.
Res., 104, 10,609-10,619, 1999.
Shin, T. C. and T. Teng, An overview of the 1999 Chi-Chi, Taiwan, Earthquake, Bull.
Seis. Soc. Am., 91, 895-913, 2001.
Vita-Finzi, C. and G.C.P. King, The seismicity, geomorphology and structural evolution
of the Corinth area of Greece, Phil. Trans. Roy. Soc. A, 314, 379-407, 1985.
Wu, C., M. Takeo, and S. Ide, Source process of the Chi-Chi earthquake: A joint
inversion of strong-motion data and global positioning system data with a
multifault model, Bull. Seis. Soc. Am., 91, 1128-1143, 2001.
Zeng, Y. and C. H. Chen, Fault rupture process of the 20 September 1999 Chi-Chi,
Taiwan earthquake, Bull. Seis. Soc. Am., 91, 1088-1098, 2001.
Figure Captions
Figure 1. The 5-plane fault model for the Chelungpu fault. The parameters specifying each plane are listed in Table 2. Each fault plane is divided into 1 km x 1 km patches. Triangles indicate the 49 stations within 20 km of the fault plane used in this study.
Figure 2. a) The E-W accelerogram for station T084. The top trace is the original accelerogram, while subsequent traces are processed accelerograms band-pass filtered at 10-20 Hz, 20-30 Hz, 30-40 Hz, and 40-50 Hz as indicated. The quasi-periodic multiple burst that occurs between about 25 and 28 seconds is expanded and plotted in the lower inset.
Figure 3. Amplitudes of S-waves from the Chi-Chi hypocenter as a function of
hypocentral distance. The amplitudes are for horizontal components and have been
band-pass filtered as indicated. The solid lines are the theoretical amplitudes for the four
frequency bands calculated using eqn. (1).
Figure 4. Illustration of the brute-force algorithm used to locate sub-events by
searching the modeled surface of the Chelungpu fault.
Figure 5. Sub-event epicenters obtained from the E-W components. Progressive 5 sec
time intervals are indicated by the different symbols as detailed in the legend. Seven
clusters (A to G) are indicated by the shaded areas.
Figure 6. Same as Figure 5, but for the N-S components.
Figure 7. Same as Figure 5, but for the vertical components.
Figure 8. The sub-event epicenters from Fig.5 are plotted for a progression of 5 second
time intervals. These hypocenters in each panel are compared with the locations of
major surface acceleration events (> 600 gal) in the Chi-Chi movie from Shin and Teng
(2001), which are plotted as solid circles and labeled with their arrival times.
Figure 9. Spatial distribution of the sub-events showing that the magnitudes tend to
increase with depth.
Figure 10. Frequency-magnitude distribution of the sub-events. The maximum-likelihood estimation based on the tapered Gutenberg-Richter model of Kagan (2002) was fit to the data, yielding b = 1.1. All magnitudes are scaled relative to the largest event, which is assigned magnitude Mmax.
Figure 11. The origin time of the sub-events as a function of their distance from the
hypocenter of the Chi-Chi mainshock. A linear fit to the earliest arrivals (plus signs)
gives an apparent rupture speed of 2 km/sec. The inverse of the slope of a line
connecting the hypocenter to the closest sub-event is approximately the regional P-wave
velocity.
Figure 12. Data in Figure 11 are fitted by the modified Omori's law, where the time axis is the delay time between the origin time of the sub-event and the arrival time of the rupture front at that sub-event's distance from the hypocenter. The triangular and square symbols are the rates of the normal regional aftershock sequence at later times. The upward-pointing triangles are for magnitudes between 2.5 and 4.0, the squares for events between 3.5 and 5.0, and the downward-pointing triangles for events between 4.5 and 6.0. These rates are all significantly higher than those extrapolated from the sub-events in this study.
Table 1. Locations and Station Parameters for the Accelerometers Used in this Study.

Code  Latitude  Longitude  Elevation  Accelerometer  Quality  Time_Cor  Time_Sft
      (º)       (º)        (km)       Type           Group    (sec)     (sec)
C006  23.5815   120.5520   0.200      IDSA           B          4.501    0.2597
C010  23.4653   120.5440   0.205      IDSA           B         -3.654    0.0841
C024  23.7570   120.6062   0.085      A900           B          2.053    0.5051
C028  23.6320   120.6052   0.295      A900           B          0.000    0.0112
C029  23.6135   120.5282   0.105      A900           B          0.000   -0.5084
C034  23.5212   120.5443   0.140      IDSA           B          8.674    0.1569
C035  23.5200   120.5840   0.230      A900A          B          1.817    0.1649
C041  23.4388   120.5957   0.230      A900           B          0.000    0.9921
C074  23.5103   120.8052   2.413      A900A          B          0.000    0.2482
C080  23.5972   120.6777   0.840      A900A          B          1.304    0.4136
C101  23.6862   120.5622   0.075      A900A          B          0.000    0.1497
T048  24.1800   120.5888   0.160      A900           B         -1.593    0.4548
T050  24.1815   120.6338   0.089      A900           B          0.000   -0.1831
T051  24.1603   120.6518   0.068      A900           B        154.996    0.4762
T052  24.1980   120.7393   0.170      A900           B          0.000   -0.5509
T053  24.1935   120.6688   0.127      A900           B         -1.114    0.4294
T054  24.1612   120.6750   0.097      A900           B         73.735    0.4445
T055  24.1392   120.6643   0.090      A900           C         -2.153    0.5084
T056  24.1588   120.6238   0.062      A900           B         -1.623    0.4584
T057  24.1732   120.6107   0.049      A900           B         -2.551    0.4320
T060  24.2247   120.6440   0.138      A900           B         -1.270    0.3995
T061  24.1355   120.5490   0.030      A900           B          0.000    0.9045
T063  24.1083   120.6158   0.039      A900           B         63.718    0.5235
T064  24.3457   120.6100   0.037      A900           B          0.000    0.8022
T065  24.0588   120.6912   0.048      A900           B          0.000    0.0478
T067  24.0912   120.7200   0.073      A900           B          0.000   -0.1382
T068  24.2772   120.7658   0.276      A900           B         -1.050    0.2985
T071  23.9855   120.7883   0.187      A900           B          0.000    0.5048
T072  24.0407   120.8488   0.363      A900           A          0.000    0.3911
T074  23.9622   120.9618   0.450      A900           A          0.000    0.2792
T075  23.9827   120.6778   0.096      A900           A          0.000    0.0866
T076  23.9077   120.6757   0.103      A900           B          0.000    0.0014
T078  23.8120   120.8455   0.272      A900           A          0.000   -0.0920
T079  23.8395   120.8942   0.681      A900           A          0.000   -0.0164
T082  24.1475   120.6760   0.084      A900A          B         -1.522    0.4845
T084  23.8830   120.8998   1.015      A900A          A          0.000    0.1029
T087  24.3482   120.7733   0.260      A900A          B          0.000    1.2349
T088  24.2533   121.1758   1.510      A900A          B          0.000    0.9905
T089  23.9037   120.8565   0.020      A900A          A          0.000    0.0041
T100  24.1858   120.6153   0.100      A900           B         -1.581    0.4074
T102  24.2493   120.7208   0.188      A900           B          0.000   -0.5355
T103  24.3098   120.7072   0.222      A900           B          0.000    0.4694
T104  24.2455   120.6018   0.213      A900           B          6.203    0.3771
T109  24.0848   120.5713   0.023      A900           B         -1.795    0.5306
T116  23.8568   120.5803   0.049      A900           B         -2.382    0.6037
T122  23.8128   120.6097   0.075      A900           B         -3.061    0.5250
T129  23.8783   120.6843   0.110      A900A          A          0.000    0.1263
T136  24.2603   120.6518   0.173      IDS            B          2.771    0.3266
T138  23.9223   120.5955   0.034      IDSA           B         -8.589    0.6334
Table 2. Strike, Dip, and Dimension of the Fault Planes for the Chi-Chi Earthquake.

Single-plane model
  Strike (º)   Dip (º)   Dimension (km x km)
  3.0          29.0      80.0 x 50.0

Multi-plane model (fault planes from south to north)
  Plane   Strike (º)   Dip (º)   Dimension (km x km)
  A       45.0         29.0      11.5 x 30.0
  B        3.0         29.0      31.9 x 50.0
  C        5.0         25.0      15.0 x 53.0
  D        3.0         29.0      23.1 x 50.0
  E       65.0         25.0      15.0 x 20.0
Table 3: 1-D velocity model from Ma et al. (2001)

  Thickness (km)   Vp (km/s)   Vs (km/s)
  1.0              3.50        2.00
  3.0              3.78        2.20
  5.0              5.04        3.03
  4.0              5.71        3.26
  4.0              6.05        3.47
  8.0              6.44        3.71
  5.0              6.83        3.95
  5.0              7.06        3.99
  15.0             7.28        4.21
  Half Space       7.87        4.45
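As a quick consistency check on the layered model, the vertical one-way P travel time through the crust can be summed directly (a simple illustration, not a calculation from the paper):

```python
# Vertical one-way P travel time through the layered model of Table 3,
# summing thickness / Vp over the nine crustal layers (half-space excluded).
layers = [  # (thickness_km, vp_km_s)
    (1.0, 3.50), (3.0, 3.78), (5.0, 5.04), (4.0, 5.71), (4.0, 6.05),
    (8.0, 6.44), (5.0, 6.83), (5.0, 7.06), (15.0, 7.28),
]
t_vertical = sum(h / vp for h, vp in layers)
print(f"vertical P time through 50 km of crust: {t_vertical:.2f} s")
```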
Figure 1
Figure 2
Figure 3
Figure 4
Figure 5
Figure 6
Figure 7
Figure 8
Figure 9
Figure 10
Figure 11
Figure 12
(2) A submitted manuscript of a scientific paper:
A Proposed Plan for Integrating Earthquake and
Tsunami Warning at CWB in Taiwan*
by
W.H.K. Lee1, K.F. Ma2, T.L. Teng3, and Y.M. Wu4
Presented as an invited paper at
The Workshop on Earthquake Early Warning (EEW),
California Institute of Technology in Pasadena, California,
July 13 -15, 2005
1
MS 977, U. S. Geological Survey (Retired), Menlo Park, CA 94025.
2
Dept. of Geophysics, National Central University, Chung-li, Taiwan.
3
Dept. of Earth Sciences, University of Southern California, Los Angeles, CA 90089.
4
Dept. of Geosciences, National Taiwan University, Taipei 106, Taiwan.
* For completeness, an Appendix distributed at the Tsunami Workshop organized by
Prof. K. F. Ma at the National Central University, Chung-li, Taiwan on March 23-24,
2005 is added in the present version.
Abstract
A project for implementing an earthquake early warning system in Taiwan (in
collaboration with USGS) first proposed by W.H.K. Lee in December 1990 was
approved in June 1992. Two plans were put forward in January 1993. Plan A was to
implement a prototype system in Hualien using modern technology supplied by a
commercial company. Although the results were encouraging (Chung et al., 1995; Lee
et al., 1996), this plan was abandoned after a brief testing period due to its high cost.
Plan B, suggested by T.L. Teng, was to make use of the existing Central Weather Bureau (CWB) seismic telemetry for transmitting data streams from about 10% of the 600 free-field accelerographs that were deployed at that time. The
necessary software was developed in house based on the realtime seismic software by
Lee (1994). The goal was for rapid earthquake reporting to government officials with
emergency management responsibility (Shin et al., 1996; Teng et al., 1997; Wu et al.,
1997). This plan resulted in an operational rapid earthquake information release system,
which performed well during the 1999 Chi-Chi (Mw=7.6) earthquake (Wu et al., 2000),
and subsequently improved with earthquake warning capabilities (Wu and Teng, 2002;
Wu and Kanamori, 2005).
In response to the disastrous Sumatra tsunami, Teng and Lee (2005) proposed to the
Central Weather Bureau (CWB) an implementation of a tsunami warning system in
Taiwan. Independently, the CWB’s tsunami group (Chen et al., 2005) showed that the
expected tsunami arrival time to Taiwan ports for the March 31, 2002 offshore
earthquake (Mw=7) could be obtained by numerical simulation in about 6 minutes.
Their results agreed well with the tide gauge data.
This paper reviews the current CWB capabilities and proposes to integrate earthquake
and tsunami warning using the existing CWB realtime seismic system with numerical
simulations. We propose an action plan for CWB to consider:
(1) Rapid estimates of earthquake source parameters,
(2) Tapping into existing global information,
(3) Monitoring tides in real time,
(4) Constructing an online tsunami data bank, and
(5) Developing an integrated earthquake and tsunami early warning system.
This poster is intended to supplement an oral paper by Teng et al. (2005), and to serve
as a companion to the poster by Hsiao et al. (2005) in this Workshop.
We also discuss the limitations of an early warning system (EWS), and conclude that its
response time is unlikely to be less than 10 seconds for earthquakes. Thus, its best use is
to activate automated systems to prepare for strong ground shaking. The expected
tsunami arrival times could be estimated in about 10 minutes in an EWS, providing tens
of minutes or more for people living in the coastal area to take proper actions, except
very near the tsunami source.
Introduction
Taiwan is a very seismically active region, with more than 10 deadly earthquakes
having occurred in the past 100 years as shown in Figure 1. A project for implementing
an earthquake early warning system in Taiwan was proposed by W.H.K. Lee in
December 1990, and was approved in June 1992. Two plans were put forward in
January 1993.
Plan A was to implement a prototype system in Hualien using modern technology by a
commercial company. Although the results were encouraging (Chung et al., 1995; Lee
et al., 1996), this plan was abandoned after a brief testing period due to its high cost.
Plan B (suggested by T. L. Teng) was to make use of the existing seismic telemetry of
the Central Weather Bureau (CWB) for transmitting data streams from 10% of the 600
free-field accelerographs that were being deployed at that time. The necessary software
was developed in house based on the realtime seismic software by Lee (1994). The goal
was for rapid earthquake reporting to government officials with emergency management
responsibility (Shin et al., 1996; Teng et al., 1997; Wu et al., 1997).
This plan resulted in an operational rapid earthquake information release system, which
performed well during the 1999 Chi-Chi (Mw=7.6) earthquake (Wu et al., 2000), and
was subsequently improved with earthquake warning capabilities (Wu and Teng, 2002;
Wu and Kanamori, 2005).
Historical Tsunami Records in Taiwan
Many reports, books, and papers have been published containing information about
historical tsunamis in Taiwan and neighboring regions, and several official websites
contain online information. However, the existing literature and online databases are
often confusing, because:
• Many Chinese words have been used to describe phenomena of the sea that may or may not be related to tsunamis (Keimatsu, 1963),
• Interpreting historical records is difficult and subject to bias,
• There is a lack of financial support and modern tools to gather the literature (written in many languages and archived in many geographical centers) and analyze the data adequately, and
• There is little progress in constructing reliable online databases due to lack of adequate support.
Historical “Tsunami” events in Taiwan are shown in Figure 2. No damaging tsunamis
have been reported since 1868. For the 1960 great Chilean earthquake, tide gauges measured maximum tsunami amplitudes of 66 cm at Keelung and 30 cm at Hualien.
CWB Rapid Earthquake Information Release System
Taiwan is in a good position to integrate a tsunami early warning system with the
existing CWB Rapid Earthquake Information Release System, which also has the
capability to serve as an earthquake early warning system. The foundation of the
existing system was established by a series of scientific publications (Lee et al., 1996;
Shin et al., 1996; Teng et al., 1997; Wu et al., 1997). More details about the system are
given in the companion paper (Hsiao et al., 2005) presented in this Workshop.
Figure 3 is a time-line flowchart showing the data processing and results of the CWB
rapid report system (RRS) and early warning system (EWS) in Taiwan. For a typical
earthquake, the location, magnitude, and a shake map are available for dissemination in
about 1 minute, and an earthquake report is routinely released in about 3 minutes.
Recent work by Wu and Teng (2002) and Wu and Kanamori (2005) showed that early
warning capability could be achieved in about 20 seconds after the seismic waves
arrived at the nearest station. Chen et al. (2005) showed that the expected tsunami
arrival times could be computed by numerical simulation and released in about 6
minutes for the March 31, 2002, offshore Taiwan earthquake (Mw = 7), as shown in
Figure 4. Except for the two nearest ports, from a few minutes up to more than 2 hours would be available for warning purposes.
A Proposed Action Plan for CWB
We propose the following action plan to CWB for development and implementation:
(1) Rapid Estimates of Earthquake Source Parameters
The present CWB realtime seismic system can estimate the location and magnitude of a
local earthquake in about 1 minute by using telemetered data from about 100 digital
accelerographs in the field. It does not perform well, however, for earthquakes that are
considerably outside the telemetered accelerograph network. The seismic response of
accelerometers to teleseismic events is poor due to rapid amplitude drop off below 1 Hz.
Tokyo-Sokushin has developed a broadband sensor (G3) that is capable of recording up
to 2g in acceleration (a standard broadband sensor will clip at about 0.1g or less). In
2004, we tested a Tokyo-Sokushin 24-bit accelerograph equipped with a G3 sensor and
it worked well. We propose that CWB purchase several G3 accelerographs and incorporate them into its telemetered network.
We propose that CWB develop a database of large (M>7) earthquakes for all potential
source regions, so that the incoming broadband waveforms can be rapidly compared
with those stored in the database for estimating location and magnitude (see e.g., Menke
and Levin, 2005).
(2) Tapping into Existing Global Information
We recommend that CWB participate in the global earthquake and tsunami activities by:
• Incorporating tsunami warning information from the Pacific Tsunami Warning
Center and the International Tsunami Information Center in their Rapid
Earthquake Information Release System.
• Exploring ways and means to share seismic and tide gauge data in near realtime
with neighbors: mainland China, Japan, Ryukyu, and Philippines.
• Participating in the Deep Ocean Assessment and Reporting of Tsunamis (DART)
program of NOAA (Gonzalez, et al. 1998) by funding one or more DART
stations to be deployed near Taiwan.
• Incorporating USGS global earthquake information in their Rapid Earthquake
Information Release System.
(3) Monitoring Tides in Real-time
Tides in Taiwan are monitored by CWB’s Marine Meteorology Center (MMC). We
propose that data from selected tide gauge stations be incorporated with the realtime
seismic system.
The following information was kindly supplied by Dr. Yueh-jiuan Hsu of MMC:
• CWB is responsible for 20 tide stations.
• Tide data from other organizations are also collected.
• CWB’s tide gauges are being upgraded to a system similar to NOAA/NOS.
• Most stations make a measurement every 6 minutes, and some stations record every 15 seconds.
• All the tide data are transmitted to Taipei via telephone line, GSM, or GPRS in near realtime.
• More information is available online at: http://mmc.cwb.gov.tw/.
(4) Constructing an Online Tsunami Databank
We propose that CWB perform numerical simulations to build up a “tsunami scenario
databank” on the damage potentials from tsunami sources, both teleseismic and nearby,
as follows.
(A) Tsunami-Generating Source Modeling:
For a given earthquake source, it is possible to model how tsunami waves are
excited. Realistic modeling with a detailed propagating rupture is not required because
the rupture time is short compared with the excited tsunami periods. However, the
tsunami’s source radiation pattern as a consequence of the rupture geometry should be
carefully evaluated.
(B) Tsunami Propagation in Open Ocean:
The “shallow water” wave theory used in the open ocean is well understood (Satake,
2002). This is a linear problem and many numerical codes have been developed. With
known bathymetry, the scenario path and amplification can be computed for an assumed
source. We only need to compute the paths from offshore Taiwan to all points of the
potential tsunami-generating sources and then invoke the source-receiver reciprocity theorem to obtain both the tsunami ray path and the amplitude amplification of the wave arriving offshore Taiwan.
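The travel-time side of such a scenario calculation reduces, in the linear long-wave limit, to integrating ds / sqrt(g·h) along the path. A minimal sketch with invented segment depths (not real bathymetry):

```python
import math

# Linear shallow-water approximation: tsunami phase speed c = sqrt(g * h),
# so the travel time over a path is the sum of segment_length / c.
g = 9.81  # m/s^2
segments = [  # (length_km, depth_m) along an assumed source-to-coast path
    (200.0, 4000.0),   # deep open ocean
    (150.0, 2000.0),   # continental slope
    (50.0, 200.0),     # shelf approaching the coast
]
t_total = sum(1000.0 * L / math.sqrt(g * h) for L, h in segments)
print(f"approximate travel time: {t_total / 60.0:.1f} minutes")
```

Note how the shallow shelf segment, though short, contributes a large share of the total time because the wave slows as the depth decreases.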
(C) Tsunami Inundation and Runup Calculations:
High-precision near-coast and harbor bathymetry and coastal topography data
should be gathered for use in the finite-element code of J. J. Lee et al. (1998) to
compute potential inundation/runup for an incoming tsunami resulting from Item (B)
above.
(5) Developing an Integrated Earthquake and Tsunami Early
Warning System
We would like to propose that the CWB Rapid Earthquake Information Release System
be modified to include:
• Reporting teleseismic events of M > 7, and any potential for tsunami waves that will arrive in Taiwan;
• Searching an online database of earthquake waveforms for reliable estimates of hypocenter location and magnitude;
• Processing tide gauge and DART data for rapid changes of water amplitudes that may indicate an arriving tsunami.
We recommend that CWB’s seismology, tsunami, and tide groups jointly
develop this integrated system and explore possibilities for collaboration with
other countries to share data and results in near real time.
NOAA proposed a global TsunamiReady program by networking regional “Emergency
Operations Centers” (EOCs). We urge CWB to implement emergency operations for
earthquakes and tsunamis within its Seismology Center. CWB should have the
capability of issuing a timely earthquake and/or tsunami warning not only for events in
the Taiwan region, but also for significant events in the world that may affect Taiwan.
This will be a challenging problem as shown by Titov et al. (2005).
Limitations of an Earthquake Early Warning System
The response time, Tr, of an earthquake early warning system may be defined as:
Tr = Tt + Tc                                  (1)
where Tt is the time required to have received seismic signals from enough stations (e.g.,
8), and Tc is the processing time to have a reliable determination of location and
magnitude. To a first approximation for a seismic network of uniform station spacing
(s), Tt can be estimated from focal depth (h), P-velocity (Vp), and the epicenter location
with respect to the stations in a seismic network. For example, if ∆ is the epicentral
distance to the nearest station, then
T1 = [∆² + h²]½ / Vp                          (2)
The present CWB realtime seismic system has about 100 telemetered accelerographs at
station spacing of s ≈ 20 km. If we assume an earthquake occurs midway between two
stations well inside the seismic network (i.e., ∆ = 10 km), at h = 10 km deep, and Vp = 5
km/sec, then Tt for having reached 8 stations is given by:
Tt = [(s + ∆)² + h²]½ / Vp ≈ 6.3 sec.         (3)
For an event 20 km outside the network, ∆ becomes 30 km, and Tt ≈ 10.2 sec. In
practice, it is difficult to have uniform station spacing and not all stations may be in
operation when an earthquake occurs. Therefore, the CWB experience indicated that
Tt ≥ 8 seconds.
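The estimates in Equations (2)-(3) are easy to reproduce; the sketch below uses the same assumed values (s = 20 km, h = 10 km, Vp = 5 km/s):

```python
import math

# P travel time to a station at epicentral distance d from a source at
# depth h, as in Eqs. (2)-(3) of the text.
def p_travel_time(d_km, h_km, vp=5.0):
    return math.sqrt(d_km ** 2 + h_km ** 2) / vp

s, h = 20.0, 10.0   # station spacing and focal depth, km
# Event midway between stations inside the network (delta = 10 km):
t_inside = p_travel_time(s + 10.0, h)      # time to reach the 8th station
# Event 20 km outside the network (delta = 30 km):
t_outside = p_travel_time(s + 30.0, h)
print(f"Tt inside ~ {t_inside:.1f} s, outside ~ {t_outside:.1f} s")
```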
Computer time for picking P-arrivals and locating the earthquake is small compared with the time needed to receive enough seismic-wave data for a good estimate of the earthquake magnitude. Wu and Kanamori (2005) used a minimum of 3 seconds of P waves to estimate magnitude, and achieved a response time of Tr ≈ 20 seconds.
Figure 5 shows the P- and S- travel times (left axis) and the average horizontal peak
ground acceleration (PGA, right axis) versus epicentral distance from the 1999 Chi-Chi
earthquake (Lee et al., 2001). We also plot the CWB’s EWS response time at 20 sec. Since the P-wave arrives at about 10 sec at an epicentral distance of 50 km, and at about 18 sec at 100 km, the present CWB Earthquake Early Warning System (EWS) cannot respond fast enough; people living within 100 km will already know an earthquake has occurred.
At epicentral distance less than 50 km, the CWB’s Earthquake Early Warning System
cannot provide information prior to the onset of strong ground shaking, because the S-waves will arrive at about 18 seconds. At 100-km distance, about 13 seconds will be
available before the S-waves arrive. However, at this distance and beyond, the PGA
values will have dropped to below 0.2g, and thus an early warning message will be
useful for only very big earthquakes.
It is clear from Figure 6 that CWB needs to shorten the response time to Tr ≈ 10 seconds in order to be useful at epicentral distances from 30 km to 100 km, where strong ground shaking is still to be expected.
This means that we need to decrease the station spacing and to develop a much faster
processing scheme. For example, from Equation (3), Tt can be reduced approximately
by half if the station spacing is also reduced by half. This means that CWB needs to
increase the number of realtime stations from 100 to 400, which is not a very practical
solution.
Another physical limit is that the rupture time for a big earthquake is more than 10
seconds (e.g., the rupture time for the Mw=7.6 Chi-Chi earthquake is about 30 sec).
Since no reliable magnitude estimate can be made until the rupture stops, it severely
limits how anyone can speed up the processing time (and thus the response time), and
how useful an early warning system can be for a big quake that matters most to the
people.
In summary, our above discussion using the case for Taiwan is generally applicable
everywhere. Major limitations of any earthquake warning system are:
(1) Station spacing is the primary factor determining the time required to have seismic data received at enough stations for processing, and the present practical limit is Tt ≈ 5 seconds at best.
(2) Earthquake rupture time increases with increasing magnitude, and we need at least 3 seconds or more of P-waves to have a reasonable lower limit of the earthquake magnitude, i.e., Tc ≈ 5 seconds at best.
(3) Combining (1) and (2), the response time of an earthquake early warning system is limited to Tr ≈ 10 seconds in practice.
(4) Ground motion is most severe at epicentral distances of less than 50 km, and drops rapidly at 100 km and beyond.
Therefore, an effective use of an earthquake early warning signal is likely to be limited to activating automated systems to prepare for incoming strong ground shaking (Lee and Espinosa-Aranda, 2003). Consequently, a single-station approach (e.g., Saita and
Nakamura, 2003) will be more practical than the network approach developed, for
example, by CWB.
Nevertheless, any rapid earthquake information release will help in emergency response.
This is the primary reason for driving regional seismic networks around the world for
realtime operations and for releasing earthquake information as rapidly as possible,
typically within 1 to 5 minutes (Lee, 2002).
Conclusions
CWB has demonstrated that an earthquake warning response time can be about 20
seconds, and a local tsunami warning response time can be about 10 minutes. The action
plan we propose in this paper is technically feasible to implement, although a
considerable amount of work will be involved. However, transmitting warning
messages to the users will require establishing an infrastructure and considering the
social impacts.
We do not recommend releasing any earthquake early warning message to the general
public. As pointed out by Lomnitz (2003), most people cannot react effectively to an
earthquake warning in a short time that ranges from a few seconds to a few tens of
seconds at best.
On the other hand, tsunami warning messages will be beneficial to people living in the
coastal areas. The expected tsunami arrival times from an offshore Taiwan earthquake
could be estimated in about 10 minutes, thus providing tens of minutes or more for
people to take proper actions, except in areas that are very near the tsunami source.
Acknowledgements. We thank Jr-hung Chen, Po-fei Chen, Yueh-jiuan Hsu, Kai-wen Kuo, and Tzay-chyn Shin for providing valuable data and information. We are
grateful to Walter Mooney, Jim Savage, and Woody Savage for their comments.
References
Bryant, E. (2001). Tsunami: The Underrated Hazard. Cambridge University Press.
Chen, J.H., P.F. Chen, N.C. Hsiao, and C.H. Chang (2005). A database of simulated
arrival times of tsunami waves around Taiwan, 2005 Ann. Meeting Geol. Soc.
(Taipei), Chung-Li, Taiwan.
Cheng, S.N., Y.T. Yeh, M.T. Hsu, and T.C. Shin (1999). Photo Album of Ten
Disastrous Earthquakes in Taiwan, Central Weather Bureau and Academia Sinica,
Taipei, Taiwan.
Chung, J.K., W.H.K. Lee, and T.C. Shin (1995). A prototype earthquake warning
system in Taiwan: operation and results. IUGG XXI General Assembly, Boulder,
Colorado (Abstr. A:406).
González, F.I., H.B. Milburn, E.N. Bernard, and J. Newman (1998). Deep-ocean
Assessment and Reporting of Tsunamis (DART): Brief Overview and Status Report,
at: http://www.ndbc.noaa.gov/Dart/
Holt, H.F. (1868). Report of recent earthquakes in northern Formosa. Proc. Geol. Soc.
(London), 24, 510.
Hsiao, N.C., W.H.K. Lee, T.C. Shin, T.L. Teng, and Y.M. Wu (2005). Earthquake
rapid reporting and early warning systems at CWB in Taiwan. Poster presentation,
this Workshop.
Iida, K., D.C. Cox, and G. Pararas-Carayannis (1967). Preliminary catalog of tsunamis
occurring in the Pacific Ocean. HIG-67-10, Hawaii Inst. Geophys., Univ. Hawaii,
Honolulu, Hawaii, USA.
Keimatsu, M. (1963). On the historical tidal waves in China. Zisin (J. Seism. Soc.
Japan), Second Series, 16, 149-160 [in Japanese].
KBTEQ (2005). Knowledge Base of Taiwan’s Earthquake, at:
http://kbteq.ascc.net/history.html
Lee, J.J., C.P. Lai, and Y.G. Li (1998). Application of computer modeling for harbor
resonance studies of Long Beach and Los Angeles Harbor Basin, ASCE Coastal
Engin., 2, 1196-1209.
Lee, W.H.K. (Editor), (1994). Realtime Seismic Data Acquisition and Processing,
IASPEI Software Library, Volume 1 (2nd Edition), Seism. Soc. Am., El Cerrito, CA.
Lee, W.H.K. (2002). Challenges in observational seismology. In: International
Handbook of Earthquake and Engineering Seismology, edited by W. H. K. Lee, H.
Kanamori, P. C. Jennings, and C. Kisslinger, Part A, p. 269-281, Academic Press,
San Diego.
Lee, W.H.K., T.C. Shin, and T.L. Teng (1996). Design and implementation of
earthquake early warning systems in Taiwan, Paper No. 2133, 11th World Conf.
Earthq. Engin., Elsevier, Amsterdam.
Lee, W.H.K., T.C. Shin, K.W. Kuo, K.C. Chen, and C.F. Wu (2001). Data files from
“CWB free-field strong-motion data from the 21 September Chi-Chi, Taiwan,
earthquake”. Bull. Seism. Soc. Am., 91, 1390 and CD-ROM.
Lomnitz, C. (2003). Earthquake disasters: prediction, prevention and early warning. In:
Early Warning Systems for Natural Disaster Reduction, edited by J. Zschau and A.N.
Kuppers, p. 425-431, Springer, Berlin.
Mallet, R. (1852, 1853, and 1854). Catalogue of recorded earthquakes from 1606 B.C.
to A.D. 1850. Being the Third Report on the facts of earthquake phenomena,
Transactions British Assoc. Advancement Sci. for Annual Meetings in 1852, 1853,
and 1854.
Menke, W. and V. Levin (2005). A strategy to rapidly determine the magnitude of great
earthquakes, EOS, 86(19), p. 185 and 189.
Perrey, A. (1862). Documents sur les tremblements de terre et les phénomènes
volcaniques au Japon. Mémoires de l'Académie impériale des sciences, belles-lettres
et arts de Lyon - Classe des sciences, 12, 281-390 [in French].
Saita, J. and Y. Nakamura (2003). UrEDAS: the early warning system for mitigation of
disasters caused by earthquakes and tsunamis. In: Early Warning Systems for Natural
Disaster Reduction edited by J. Zschau and A.N. Kuppers, p. 453-460, Springer,
Berlin.
Satake, K. (2002). Tsunamis. In: International Handbook of Earthquake and
Engineering Seismology, edited by W.H.K. Lee, H. Kanamori, P.C. Jennings, and C.
Kisslinger, Part A, p. 437-451, Academic Press, San Diego.
Shin, T.C., Y.B. Tsai and Y.M. Wu (1996). Rapid response of large earthquakes in
Taiwan using a realtime telemetered network of digital accelerographs. Paper No.
2137, 11th World Conf. Earthq. Engin., Elsevier, Amsterdam.
Soloviev, S.L., and Ch.N. Go (1974). A catalogue of tsunamis on the western shore of
the Pacific Ocean. Acad. Sci. USSR, Nauka Pub. House, Moscow, 310 pp. [in
Russian; English Translation by Canadian Fisheries and Aquatic Sciences No. 5077,
1984.]
Teng, T.L. and W.H.K. Lee (2005). A proposal for establishing a Taiwan tsunami
warning system, unpublished report submitted to Central Weather Bureau, Taiwan in
March, 2005.
Teng, T.L., L. Wu, T.C. Shin, Y.B. Tsai, and W.H.K. Lee (1997). One Minute after:
strong-motion map, effective epicenter, and effective magnitude, Bull. Seism. Soc.
Am., 87, 1209-1219.
Teng, T.L., Y.M. Wu, T.C. Shin, W.H.K. Lee, Y.B. Tsai, C.C. Liu, and N.C. Hsiao
(2005). Development of earthquake rapid reporting and early warning systems in
Taiwan. Oral presentation, this Workshop.
Titov, V.V., F.I. Gonzalez, E.N. Bernard, M.C. Eble, H.O. Mofjeld, J.C. Newman, and
A.J. Venturato (2005). Real-time tsunami forecasting: Challenges and solutions,
Natural Hazards, 35, 41-58.
Utsu, T. (2002). A list of deadly earthquakes in the world: 1500-2000. In: International
Handbook of Earthquake and Engineering Seismology, edited by W.H.K. Lee, H.
Kanamori, P.C. Jennings, and C. Kisslinger, Part A, p. 691-717, Academic Press,
San Diego.
Wu, Y.M. and H. Kanamori (2005). Experiment on an onsite early warning method for
the Taiwan early warning system, Bull. Seism. Soc. Am., 95, 347-353.
Wu, Y.M., and T.L. Teng (2002). A virtual subnetwork approach to earthquake early
warning. Bull. Seism. Soc. Am., 92, 2008-2018.
Wu, Y.M., C.C. Chen, T.C. Shin, Y.B. Tsai, W.H.K. Lee, and T.L. Teng (1997).
Taiwan Rapid Earthquake Information Release System, Seism. Res. Lett., 68, 931-943.
Wu, Y.M., W.H.K. Lee, C.C. Chen, T.C. Shin, T.L. Teng, and Y.B. Tsai (2000).
Performance of the Taiwan Rapid Earthquake Information Release System (RTD)
during the 1999 Chi-Chi (Taiwan) earthquake. Seism. Res. Lett., 71, 338-343.
APPENDIX. A Proposal for Establishing a
Taiwan Tsunami Warning System *
By
Ta-liang Teng and William H.K. Lee
(March 22, 2005)
* This proposal was distributed at the Tsunami Workshop organized by Prof. K. F. Ma
at the National Central University, Chung-li, Taiwan on March 23-24, 2005.
Introduction
In the aftermath of the Sumatra disaster, it is clear that even though a tsunami is a very
infrequent event and difficult to predict, it can nonetheless happen any time and inflict
severe damage to coastal areas of many countries. Fortunately, because of its slow
propagation speed in the open ocean (~800 km/hour, like that of a jet airplane), an early
warning is possible for tsunamis, except for those generated locally off the coast. As
Taiwan is very seismically active, and a few tsunamis might have caused considerable
damage and fatalities in the past, it is important that we prepare for tsunamis through a
better monitoring and warning system.
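The ~800 km/hour figure follows from the long-wave ("shallow-water") approximation, in which tsunami speed depends only on water depth: v = sqrt(g·h). A minimal sketch (the depth and distance values in the example are illustrative):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tsunami_speed_kmh(depth_m: float) -> float:
    """Long-wave (shallow-water) propagation speed, sqrt(g*h), in km/hour."""
    return math.sqrt(G * depth_m) * 3.6

def travel_time_hours(distance_km: float, depth_m: float) -> float:
    """Travel time over a path of roughly constant depth (illustration only;
    real paths cross variable bathymetry)."""
    return distance_km / tsunami_speed_kmh(depth_m)

# At a typical open-ocean depth of ~5000 m the speed is ~800 km/h,
# matching the figure quoted in the text.
```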
This proposal is intended to serve as a starting point for discussions at the Workshop on
Taiwan Tsunami Research Program to be held on March 23-25, 2005 at the National
Central University (NCU). We hope that a much better proposal will be developed by
the Workshop participants and invited experts.
Taiwan is in a good position to work on a tsunami early warning program. It currently
leads the world in its accomplishments in earthquake rapid reporting and earthquake
early warning, which is much more demanding than a tsunami early warning. The
foundation was established in a series of scientific publications (Lee et al., 1996; Teng
et al., 1997; Wu et al., 1997).
Furthermore, some preliminary tsunami research has already been carried out by some
recent research projects of NSC and CWB. It is timely for a more comprehensive
tsunami research and development program to take place in order to achieve:
(1) Reduction in the loss of life and property in coastal areas of Taiwan and its
offshore islands.
(2) Elimination of false alarms, which can result in high economic costs for
unnecessary evacuations.
We hereby propose that Taiwan boost its instrumental monitoring and numerical
modeling efforts on tsunamis and expand research towards a reliable tsunami warning
system. In order for this proposal to be successfully developed and implemented, we
need active participation of everyone in Taiwan who is interested in mitigating tsunami
hazards. In other words, we need to prepare a well thought-out proposal, raise some
funding, and carry out the necessary tasks. Since Taiwan has only a few tsunami
researchers, we must encourage international collaborations and develop an
education/training program in tsunami and related topics.
Tapping into the Pacific Tsunami Warning System
Some initial actions should be taken so that the Central Weather Bureau (CWB), which
is responsible for monitoring tsunami in the Taiwan region, may participate in the
global tsunami warning activities.
The Pacific Tsunami Warning Center (PTWC) in Ewa Beach, Hawaii, was established in
1949 by the U.S. Government to provide warnings for tele-tsunamis to Hawaii and most
countries in the Pacific Rim. PTWC is also the warning center for Hawaii's local and
regional tsunamis. As a United States government center, tsunami information issued
by the PTWC is generally available to the public with some time delays. However,
PTWC does not formally interact with other countries with respect to tsunami
monitoring and warning. This function is being carried out by the Intergovernmental
Oceanographic Commission (IOC, see Appendix 1) of the UNESCO, a part of the
United Nations.
Under IOC, the International Coordination Group (ICG) for the Tsunami Warning
System in the Pacific (ITSU) has been in place since the 1960s, with 26 member
states. IOC also maintains the International Tsunami Information Center (ITIC) as the
operational unit, which in turn relies on the PTWC for actual tsunami monitoring and
warning.
In a January 13, 2005 press release, UNESCO Director-General Koichiro Matsuura
announced that UNESCO is working towards the establishment of a global tsunami
warning system that would be operational by June 2007. In other words, the
present IOC/ITIC will be truly international soon.
The statutes of IOC clearly state that only members of UNESCO (i.e., UN) can join its
various programs (i.e., including ICG on tsunamis). Since Taiwan is not a member of
UN or UNESCO, joining ICG/ITSU will be difficult politically. Appendix 1 contains
the IOC statutes.
Mike Blackford (Former Director of PTWC and of ITIC) made the following suggestion
in an e-mail to Willie Lee on January 14, 2005:
If Taiwan is interested in pursuing a more formal participation in ITSU the primary
person to contact is Peter Pissierssens of UNESCO's Intergovernmental Oceanographic
Commission (IOC), mail: IOC of UNESCO, 1 rue Miollis, 75732 Paris Cedex 15,
France; email: p.pissierssens@unesco.org.
In the past, UN and UNESCO have granted “observer” status to some non-UN countries.
Observers can participate in meetings, but have no voting power.
Getting Global Tsunami Warning Information
According to Mike Blackford, member states of ICG/ITSU automatically
receive tsunami warning information. However, since the tsunami warning information
is actually provided by PTWC, it is possible to obtain the same information from its
“bulletin board”, which is open for subscription. Obviously, anyone can visit the
PTWC website for its latest bulletins. These two methods of obtaining tsunami warning
information have the obvious drawback that there will be some time delays.
PTWC also maintains an e-mail list for sending tsunami warning bulletins right away.
Mike Blackford has agreed to help us explore this possibility by putting a CWB
seismologist on this list.
Tsunami Warning Practice in Taiwan
If Taiwan decides to establish its tsunami monitoring and warning system, it is best to
“join” ICG/ITSU to be a part of the international tsunami warning system. However,
establishing an effective system requires, in addition, substantial funding (in millions of
US dollars per year) as PTWC history indicates. Furthermore, how to deal with tsunami
warnings in practice in order to save lives and property also requires a substantial
effort in education and public outreach, especially in establishing communication and
evacuation plans with governmental officials dealing with emergencies. This last aspect
of “Communication, Education, and Outreach” (or CEO) is more administrative than
scientific; we shall leave it to CWB or other cognizant governmental agencies to work
out the details.
We may, as Mike Blackford nicely did, emphasize that:
“No matter how elaborate a tsunami warning system may be, it will not be effective
unless emergency managers, operators of potentially affected coastal facilities, and the
public in general, know what to do when a tsunami warning is issued. Much needs to
be done to communicate the warning to the public, perhaps by sirens and messages on
the media, and if necessary, evacuation plans need to be developed. Perhaps some of
this may be in place [in Taiwan] already with respect to typhoons.”
In summary, we visualize that an adequate Tsunami Warning Program in Taiwan would
include at least the following elements:
A. Tsunami Monitoring – Validation of generation and approaching tsunamis
B. Tsunami Modeling – Construction of Taiwan Tsunami Databank, and run-up and
inundation computation
C. Tsunami History in Taiwan – Database necessary to study the tsunami potential.
D. Communication, Education, and Outreach – Important administrative measures
E. Collaborations.
Proposed Work
A. Tsunami Monitoring - Validation of Generation and Approaching
Tsunamis
Taiwan currently receives information indirectly from the Pacific Tsunami Warning
Center whenever a tsunami bulletin is issued. Since this information is relayed twice
(via Japan) before reaching CWB, there are some time delays and also a potential
problem in reliability. Therefore, we believe that CWB should explore possible means
to obtain the tsunami bulletins directly from PTWC as discussed above.
In addition, tsunami information from Japan, the Ryukyus, and the Philippines will also
be very valuable, if available in near real-time. Again, CWB should explore possible
means to share and exchange data with them. Obviously, CWB must first have some
relevant tsunami data in real-time or near real-time to start with. The cost to deploy
modern real-time tide gauge stations along the coast of Taiwan and nearby islands
(Jinmen, Matsu, Penghu, Lanyu, Dongsha, and Badan) is relatively low. The output,
together with those from the southern Ryukyu Islands (such as Yunaguni, Ishigaki, and
Hirara etc.), should be integrated with the telemetered seismic data at CWB in real-time.
Instruments deployed in the open ocean such as the DART-type (see Appendix 2) are
expensive. However, we may have to join the U.S. effort by deploying a few of them to
ensure adequate coverage and to tap into the real-time international tsunami
information.
More importantly, since tsunami bulletins from PTWC will arrive too late if
tsunamigenic earthquakes were to occur about several hundred kilometers off the
Taiwan coast, tide gauge data from the above islands and DART data in the open ocean
near Taiwan would allow Taiwan to react to the tsunami waves in a timely manner.
The above provisions, in conjunction with a tsunami modeling effort discussed in
Section B, are aimed at reducing false alarms. Japanese experience shows that a
false-alarm-induced tsunami evacuation would cost about US$60 million, which is not
socially or economically acceptable.
Following the lead of Japan and U.S., Taiwan should make the general tsunami warning
bulletins issued by PTWC more reliable for the coastal areas of Taiwan and nearby
islands. In this regard, the monitoring program should be carried out as a collaborative
effort with the U.S. DART program (see Appendix 2) in addition to securing the tide
gauge data from those islands mentioned above. The participation in the DART
program can be similar to the approach of the Institute of Astronomy and Astrophysics
(IAA) of Academia Sinica, which “buys into” the Smithsonian project of building a
large telescope in Hawaii, i.e., IAA contributes 20% of the funds and gets 20% of
observation time. Thus, participation in the DART program is also an economical way
to “buy into” the PTWC of the U.S. and allows Taiwan to directly obtain global
tsunami warning data in real-time.
As suggested by Moh-jiann Huang, we should also consider the Japanese approach in
observing offshore waves, tsunamis and tides using GPS buoys (Nagai et al., 2003).
B. Tsunami Modeling: I. Construction of a Taiwan Tsunami Databank
Since time is of the essence in the warning of a fast-approaching event, preparatory
construction of a Taiwan Tsunami Databank (TTD) for the expected arriving tsunami
heights and periods should be carried out:
1. For tsunamis generated and propagated across the open ocean, and
2. For tsunamis after having impinged upon the coast and harbor for run-up and
inundation estimates
The construction of the TTD will be crucial to an operational tsunami warning system
(see Geist, 2000, and references cited therein).
When an earthquake that can potentially generate a tsunami occurs, we would look up
in the TTD the computed event nearest to the source and obtain the documented tsunami
heights off the coast of Taiwan. Since tsunami propagation in the open ocean is
linear, we can quickly scale our documented results using the source parameters of the
earthquake that has just occurred. This part of the computation is the easiest to
implement, as the theories are well known and the necessary software exists. However,
to validate our computational results in the databank, we need to compare with
historical tide gauge data and run-up observations wherever possible; and to minimize
false alarms we need to deploy real-time monitoring instruments and/or obtain real-time tsunami heights
somehow. One possibility for the latter is to obtain real-time tidal gauge data from
nearby islands (Jinmen, Matsu, and Penghu, Lanyu, Dongsha and Badan) and nearby
Japanese islands (the southern Ryukyu Islands, such as Yunaguni, Ishigaki, and Hirara,
etc.) through international collaborations.
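Because open-ocean propagation is linear, a precomputed scenario can be rescaled to match an actual event. The sketch below illustrates the lookup-and-scale idea with a hypothetical databank record format; scaling coastal heights by the ratio of seismic moments is one simple choice for illustration, not necessarily the scheme an operational TTD would adopt.

```python
import math
from dataclasses import dataclass

@dataclass
class Scenario:
    """One precomputed entry of a hypothetical Taiwan Tsunami Databank."""
    lat: float
    lon: float
    moment: float            # seismic moment of the scenario source, N*m
    coastal_heights_m: dict  # site name -> computed wave height, m

def nearest_scenario(ttd, lat, lon):
    """Pick the stored scenario closest to the actual epicenter
    (flat-earth distance is good enough for a nearest-neighbor lookup)."""
    return min(ttd, key=lambda s: math.hypot(s.lat - lat, s.lon - lon))

def scaled_heights(ttd, lat, lon, moment):
    """Scale the stored linear solution by the moment ratio -- exploiting
    the linearity of open-ocean propagation noted in the text."""
    s = nearest_scenario(ttd, lat, lon)
    k = moment / s.moment
    return {site: h * k for site, h in s.coastal_heights_m.items()}
```

For example, with a stored scenario of moment 1.0e20 N·m giving 0.5 m at a coastal site, an actual event of 2.0e20 N·m near the same source would be estimated at 1.0 m there.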
Task B1. Tsunami generation and propagation across the open ocean
Taiwan should carry out the Okada-Satake type calculations (Okada, 1992; Satake,
2002) to accumulate and finally to build up a TTD for different sources from all
important tsunami-generating events in the Pacific Rim and the Taiwan Strait. Here, we
need the bathymetry data for a “shallow-water” type of calculation to estimate the
arriving tsunami heights off the coast.
Tsunami Generation: With a given earthquake magnitude (usually M>7), focal depth
(shallow) and moment tensor solution, it is possible to model how tsunami waves get
excited. However, realistic modeling with a detailed propagating rupture is difficult,
but probably not required, since the earthquake rupture time is short compared with the
excited tsunami periods. Kuo-Fong Ma of NCU has ample experience in this part of the
source modeling.
Tsunami Propagation in Open Ocean: This is the easier part of tsunami modeling
because “shallow water” wave theory can be used in the open ocean. This is a linear
problem and many numerical codes have been developed by independent researchers
(e.g., E. Geist, and K. Satake). With known bathymetry data (usually with resolution of
2 minutes in latitude and longitude) of the ocean basin, it is relatively easy to carry out
scenario-type calculations for a known initial displacement of the sea floor.
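As an illustration of the kind of scenario calculation described above, the toy sketch below integrates the 1-D linear shallow-water equations on a staggered grid with constant depth and reflective ends. It is only a minimal stand-in for the 2-D Okada/Satake codes with real bathymetry.

```python
# Toy 1-D linear shallow-water propagation on a staggered grid -- a sketch of
# the "scenario" calculation described in the text, NOT the Okada or Satake
# codes (those are 2-D and handle real bathymetry).

def propagate(eta, depth_m, dx_m, dt_s, nsteps):
    """Advance surface elevation eta (list, meters) with the linear SWE:
       d(eta)/dt = -h * du/dx,  du/dt = -g * d(eta)/dx.
    Ends are reflective (zero velocity), so total mass is conserved."""
    g = 9.81
    n = len(eta)
    eta = list(eta)              # work on a copy
    u = [0.0] * (n + 1)          # velocities live on cell faces
    for _ in range(nsteps):
        for i in range(1, n):    # update interior face velocities
            u[i] -= g * dt_s / dx_m * (eta[i] - eta[i - 1])
        for i in range(n):       # update cell elevations
            eta[i] -= depth_m * dt_s / dx_m * (u[i + 1] - u[i])
    return eta
```

With depth 4000 m, dx = 10 km, and dt = 20 s, the scheme satisfies the CFL stability condition (sqrt(g·h)·dt/dx ≈ 0.4), and an initial hump splits and travels outward at the shallow-water speed.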
The initial conditions can be computed using the Okada code, and the subsequent
tsunami propagation to the coast of Taiwan can be computed by the Satake code. Drs. Y.
Okada and K. Satake kindly made their computer source codes available to Willie
Lee on January 3, 2005. Willie Lee (with the help of Doug Dodge of the Lawrence
Livermore National Laboratory) successfully implemented an Okada-type computer
program by using the subroutines provided by Okada and the input/output and graphics
display code written by Doug Dodge. After investigating the source code supplied by
Satake, however, they concluded that it would be difficult to implement without the
help of Dr. Satake, because of the lack of documentation.
An extensive computation is needed to construct the TTD of expected tsunami heights
off the coast of Taiwan; this is the initial step in building the TTD of the damage
potentials of tsunamis that could reach Taiwan.
Before computing a large number of cases in constructing the TTD of expected tsunami
heights, we need to proceed carefully in using software written by other scientists.
Experience has indicated that it is best to follow the steps below:
(1) Obtaining support from the author(s);
(2) Compiling the source code, and repeating the author's test case(s);
(3) Performing some tests with known answers; and
(4) Checking results using a similar program written by different author(s).
After we are comfortable with the software, we need to identify potential earthquake
sources. Fortunately, a modern global earthquake catalog has been published by
Engdahl and Villasenor (2002), and the Harvard Group's moment tensor solutions for
many large earthquakes since the 1970s are available at:
http://www.seismology.harvard.edu/.
Notes: In a recent e-mail from Kuo-Fong Ma of the National Central University, we
learned that she (and possibly several others in Taiwan) has been running the Okada-Satake code for tsunami modeling, and in particular, Ma has extensive collaboration
experience with Satake. Chi-Ching Liu of the Institute of Earth Sciences of Academia
Sinica has been working on tide gauge monitoring in Taiwan. His research can
provide observed run-up data of arriving tsunamis in recent decades.
For this tsunami project to be successful, we must encourage collaborations with
tsunami experts (e.g., E. Geist, K.F. Ma, P.L.-F. Liu, J.J. Lee, and K. Satake).
B. Tsunami Modeling: II. Run-up and Inundation Computation
We suggest that this part of the computation will basically follow the approach that has
been developed by Professor Jiin-Jen Lee of the University of Southern California – a finite
element method successfully applied to harbor oscillation computation (Lee and Chang,
1990; Lee et al., 1998; Mei, 1983). All the computation discussed here should be
carried out alongside the generation of the TTD discussed in Task B1. In fact, results
of this computation should also be made a part of the TTD.
Referring to Figure 1, assume that a tsunami is approaching Taiwan with unit
amplitude and a dominant wavelength of, say, 300 km. Since the wavelength of the
incoming wave is comparable to the dimension of Taiwan, the island will serve as an
effective diffraction object, and a diffracted wave-field will be generated all around
Taiwan.
59
Figure 1. Map of Taiwan and neighboring regions with the 300 km limit to the Taiwan
coast drawn. The solid dots show seismic stations (retrieved from the LLNL database
of world-wide seismic stations).
Step 1. The finite element code is run with an input model that incorporates the effects
of diffraction by the island, reflections from the boundaries, including the boundaries of
the mainland China coast of Fujian (with partial or full reflection, depending on the
nature of the absorbing boundaries), and refraction (due to the variable bathymetry
surrounding the island and the Taiwan Strait), as well as energy dissipation at
boundaries. It is quite possible that for a tsunami approaching from the Pacific side, the
diffracted wave field on the west coast of Taiwan will travel both from the north and the
south as the waves round the corners of the island. Interference of these two wave trains
may actually cause substantial amplification on the west coast. As the west coast of
Taiwan has undergone massive recent development, this condition could make these
areas vulnerable to severe losses. As waves climb up the continental shelf, they
encounter a continental slope with the sea bottom rapidly and progressively becoming
shallower until the waves hit the coast. In this region, the “shallow water” linear wave
theory no longer holds. The finite element code of Professor Lee will handle the
non-linear wave propagation with complex bottom topography and sea-bottom friction
variations.
Computation is normally carried out in the frequency domain.
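Before the full finite element computation, a first-order feel for shoaling amplification is given by Green's law, which for slowly varying depth scales linear long-wave amplitude as h^(-1/4). A sketch (the depth values in the example are illustrative):

```python
def greens_law_amplitude(a_deep_m, h_deep_m, h_shallow_m):
    """Green's law: for slowly varying depth, linear long-wave amplitude
    scales as depth**(-1/4). A rough first-order shoaling estimate only;
    the full finite element model is needed for the nonlinear regime,
    harbor geometry, and friction."""
    return a_deep_m * (h_deep_m / h_shallow_m) ** 0.25

# e.g. a 0.5 m open-ocean wave at 4000 m depth reaching 10 m depth grows to
# 0.5 * (4000/10)**0.25, roughly 2.2 m, before nonlinear effects take over.
```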
Step 2. Computation of realistic estimates of harbor oscillation, run-up, and
inundation. It takes as input the result of Step 1: waves that have climbed the
continental shelf or passed through the shallow waters of the Taiwan Strait and
present themselves at the entrance of a harbor or an estuary. The finite element code
will compute on a detailed 3D geometry (water depth and harbor or estuary shorelines)
to give the response to the input waves for a definitive prediction of run-up and
inundation. Again, effects of diffraction and reflections from the harbor boundaries will
be modeled by the finite element code with a mesh size small enough to yield the
desired details. In actual scenario computations, both Step 1 and Step 2 can be carried
out simultaneously. Moreover, both Step 1 and Step 2 computations can be carried out
alongside the scenario computations outlined in Task B1. All Task B1 and Task B2
results will form an integral database of the TTD, such that once a tsunami is identified,
the total scenario of its impact anywhere on the Taiwan coast can be readily estimated
from the TTD.
Task B2. Computation of Tsunami Run-Up and Inundation
This task is to predict tsunami run-up and inundation on the Taiwan coast and in
harbors and estuaries. The input is a tsunami (from Task B1) presenting itself off the
Taiwan coast.
There are three possible cases:
Case I: Teleseismic Source: Long waves (wavelengths of several hundred km) that
have propagated over a long distance across the open ocean (4~5 km average
depth) and arrive with a certain amplitude and azimuth a few hundred
km off the Taiwan coast.
Case II: Regional Source: Long waves have been generated by a large local, shallow,
dip-slip event a few hundred km off the coast; and
Case III: Local Source: A tsunami has been generated by an event much closer than
a few hundred km off the Taiwan coast.
Case III above does not offer enough time for an analysis of the tsunami generation
and an estimate of the wave amplitude. At a distance much closer than a few hundred
km, the generated tsunami would strike the coast in less than 15 minutes. Within 15
minutes, Taiwan cannot rely on ITIC information. Neither can Taiwan depend on the
Global Seismic Network (IRIS/GSN) for the event location and a moment tensor
solution. The location and magnitude of such a large event will have to be obtained by
the Taiwan CWB Earthquake Early Warning System (CWB/EWS), which can deliver
information on an earthquake event in about one minute (presently, Taiwan leads the
world in this regard; see Teng et al., 1997; Wu and Kanamori, 2005a; 2005b). But this
carries no information as to whether a tsunami has been generated, let alone an estimate
of the tsunami wave amplitude. Thus, Case III is the worst scenario that can happen.
Fortunately, tsunamis generated by near-shore structures have not been a problem,
judging from Taiwan’s recent tsunami history; they probably have a very low likelihood
of future occurrence.
Cases I and II: We shall discuss possible measures Taiwan can take for these two
cases in order to ensure that an adequate run-up and inundation estimate can be obtained
in time to issue a warning and an evacuation order.
1. We assume that Taiwan does have ITIC tsunami warning information, and can
access the IRIS/GSN broadband seismic data. Based on that, together with
Taiwan’s own broadband data, we can obtain the earthquake location, focal
depth, and moment tensor solution, as well as an estimate of the size and
displacement of the rupture plane. This leads to an assessment of the
tsunami-generated wave amplitudes independent of the ITIC information. From
the TTD generated in Task B1 above, we should have a first-order estimate of
the input tsunami wave amplitude, its dominant period, and its approaching azimuth
as it presents itself at the periphery (~300 km offshore) of Taiwan.
2. Over off-shore waters in the periphery of Taiwan, we need real-time ground-truth confirmation of the approaching tsunami. This can be furnished by:
a. DART-type open-ocean wave amplitude and period measurements.
b. Tide gauge measurements of wave amplitude and period from southern
Ryukyu Islands (Yunaguni, Ishigaki, Hirara, etc. See Figure 1).
c. Tide gauge measurements of wave amplitude and period from CWB
seismic network stations on Jinmen, Matsu, Penghu, Dongsha, Lanyu,
Badan, etc. (See Figure 1).
Information from the above sources a, b, and c will allow a “ground-truth” validation
verifying that a tsunami of a certain wave amplitude and period has indeed been
generated and is approaching the peripheral waters of Taiwan. This will then trigger
additional tsunami analysis leading to warning and evacuation operations, offering
accurate predictions of tsunami run-up and inundation as obtained from the
computations discussed above.
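The validation based on sources a, b, and c above can be sketched as a simple confirmation rule: escalate only when enough independent stations report an anomaly consistent with the TTD-predicted arrival. The record format and thresholds below are hypothetical illustrations, not operational values.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """One sensor report (DART buoy or island tide gauge) -- hypothetical format."""
    station: str
    anomaly_m: float               # observed sea-level anomaly, m
    minutes_off_predicted: float   # |observed - predicted| arrival time, minutes

def confirmed(readings, min_stations=2, min_anomaly_m=0.05, max_minutes_off=10.0):
    """Ground-truth rule: treat the tsunami as verified only if at least
    min_stations independent stations see an anomaly consistent with the
    TTD-predicted arrival. Thresholds here are illustrative assumptions."""
    hits = [r for r in readings
            if r.anomaly_m >= min_anomaly_m
            and r.minutes_off_predicted <= max_minutes_off]
    return len({r.station for r in hits}) >= min_stations
```

Requiring agreement between at least two independent stations is one simple way to address the false-alarm cost discussed earlier, at the price of a small additional delay.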
C. Historical Tsunami Records of Taiwan
Many reports, books, and papers have been published containing information about
historical tsunamis in Taiwan and neighboring regions, and several “official” websites
contain online information. On closer examination, however, we found that the existing
literature and online databases are often confusing and contain errors. There are
many reasons: (1) existing literature and relevant data were written in many languages
(Chinese, Dutch, English, French, German, Japanese, Portuguese, Spanish, etc.) and
“scattered” in many geographical centers, such that it is not easy for authors to have full
access; (2) many Chinese words have been used to describe phenomena of the sea that
may or may not be related to tsunamis; (3) interpreting historical records is difficult and
subject to bias; (4) lack of sufficient financial support and modern tools to gather the
literature and analyze the data adequately; and (5) lack of scholarship in setting up
online databases due to lack of adequate support.
“Language” Problem
Keimatsu (1963) appears to be the first author to consider the “language” problem as a
source of confusion in studying historical tsunamis. Dr. Liang-Chi Hsu has kindly
translated Keimatsu (1963) into English. The following is an excerpt:
The Chinese words corresponding to the Japanese word [tsunami] may include:
[sea flowing over], [sea water flowing over], [tide flowing over], [sea tide flowing out],
[sea tide rising], [sea tide suddenly rising], [sea water boiling], [sea water boiling and
gushing], [sea roaring], [sea gnawing], and [sea howling]; there may be more similar words.
Tsunami [津波] in Japanese literally means “port waves”. Although tsunami in Japanese
is written in Chinese characters [kanji], no such word is used in the Chinese literature.
It is now common to equate 海嘯 [sea howling] with “tsunami”. This can be a major
cause of confusion if one interprets 海嘯 [sea howling] in historical records as
“tsunami”, because 海嘯 [sea howling] may be generated more frequently by
meteorological effects than by tsunamis. In English, “tsunami” is often translated as
“tidal waves”, a practice that has now been abandoned by scientists. We believe that the
word “tsunami” should be used as defined by the Japanese, to describe a specific
phenomenon caused mostly by shallow submarine earthquakes.
As noted by Keimatsu (1963), tsunami is actually written in Chinese characters: 津波.
Perhaps it would be less confusing to use these characters when writing in Chinese. On
the other hand, one may argue that 海嘯 [sea howling] is now commonly accepted for
“tsunami” when writing in Chinese, and should not be changed.
Some Important Historical Dates for Taiwan
Although Taiwan was settled by several native tribes thousands of years ago, they did
not have written languages. The Chinese probably did not know about Taiwan until
about the Sung Dynasty (about 1,000 years ago), and Chinese settlers did not migrate to
the Penghu Islands until the late Ming Dynasty (about 500 years ago). The large Chinese
migration to Taiwan did not start until about 1600 (Wu, 1994).
In 1590, Portuguese navigators visited Taiwan and introduced the island to the west as
“Formosa”. After establishing a settlement in northern Taiwan, they left. In 1622, the
Dutch established a military base on the Penghu Islands, and by the end of 1624, the
Dutch occupied southern Taiwan. In 1626, the Spaniards occupied northern Taiwan but
were driven away by the Dutch in 1642. When the Ming Dynasty was being taken over
by the Manchurians starting in 1644, hundreds of thousands of Chinese began to
migrate to Penghu and Taiwan. The Dutch surrendered to the new immigrants under
Cheng Ch’eng-Kung, and left in 1661. When the Manchu (or Qing) Dynasty was firmly
established in the mainland, Taiwan became part of China in 1683 (Hsieh, 1964; Wu,
1994).
Using the 1782 Event as an Illustration of Difficulties
Studying the earthquakes and tsunamis of Taiwan is very difficult because many Chinese
records have been lost to numerous wars and a lack of good archiving practice. For
example, the first earthquake noted in Taiwan was in 1655, and the source was traced
back to a Dutch publication (Xie and Tsai, 1987, p. 78). However, many missionaries
(especially Jesuits) went to China (and Taiwan) beginning about 1500. They might
provide good sources of information about earthquakes and tsunamis, because they sent
letters and reports to their friends, relatives, and superiors in their home countries.
Searching for these materials will not be easy, but with some modest funding it can be
done by commissioning local historians in Europe.
Before the Sumatra earthquake/tsunami, the deadliest tsunami on record was an event in
the Taiwan Strait in 1782, in which 50,000 people died, as listed in Bryant (2001, p. 21,
Table 1.5) and shown online in two major databases (NOAA and the Russian Tsunami
Laboratory; see the References section below for websites). Bryant (2001) credited the
NOAA National Geophysical Data Center as the source, and an online search at
http://www.ngdc.noaa.gov/seg/hazard/tsevsrch_idb.shtml returns the corresponding entry.
On a closer examination, we discovered there are serious problems in the interpretation
of historical records for this event. The two references cited online above [Iida et al.
(1967) and Soloviev and Go (1974)] credited Mallet (1853) and Perrey (1862) as their
sources. Mallet (1853) has the following entry on p. 202:
On the 22nd [May, 1782] the sea rose with great violence on the coast of Formosa and
the adjacent part of China, and remained eight hours above its ordinary level; having
swept away all the villages along the coast, and drowned immense numbers of people.
No shock is mentioned.
Unfortunately, Mallet (1853) did not give any reference source. Since Mallet was in
close contact with Perrey during that period, we suspect that he obtained the above
information from Perrey. Perrey (1862) gave two references; the earliest is a
paragraph published in Gazette de France, No. 64, as shown below:
De Paris, le 12 Août 1783.
Une lettre de la Chine fait mention d'un évènement arrivé l'année dernière, & peut-être
plus terrible encore que ceux qu'ont éprouvés la Sicile & la Calabre dans le
commencement de celle-ci féconde en désastres. En attendant une relation plus détaillée,
voici ce que l'on en raconte : Le 22 Mai de l'année dernière, la mer s'éleva sur les côtes
de Fo-Kien à une hauteur prodigieuse, & couvrit presqu'entièrement pendant huit
heures l'île de Formose qui en est à 30 lieues. Les eaux, en se retirant, n'ont laissé à la
place de la plupart des habitations que des amas de décombres sous lesquels une partie
de la population immense de cette Isle est restée ensevelie. L'Empereur de la Chine,
voulant juger par lui-même des effets de ce désastre, est sorti de sa capitale ; en
parcourant ses provinces, les cris de son peuple excités par les vexations de quelques
Mandarins, ont frappé ses oreilles ; & on dit qu'il en a fait justice en faisant couper plus
de 300 têtes.
We obtained an English translation (by Robyn Fréchet) from Julien Fréchet (in an email
to Willie Lee, dated March 1, 2005):
Paris, August 12th ,1783.
A letter from China mentions an event which occurred last year, a disaster perhaps
even more frightful than those comparable events experienced at the beginning of this
calamitous year in Sicily and Calabria. In anticipation of a more detailed account, here
is what the letter reports: On May 22nd last year, the sea on the coast of Fukien rose to
an exceptional level, and for eight hours covered almost the whole island of Formosa
30 leagues from the coast. When the waters withdrew most of the dwellings had been
reduced to heaps of debris under which part of the immense population of this island
remained buried. The Chinese Emperor himself left the capital to take stock of the
effects of this disaster; in the course of his journey through the provinces he was
assailed by the outcry of his people against the harassments of certain mandarins; and
it is reported that he delivered justice in having over 300 persons beheaded.
When we asked Julien Fréchet for the source of the “letter from China” cited above,
Fréchet replied (in an email to Willie Lee, dated March 9, 2005):
I found out some information about the Minister Bertin who received the letters from
China (Perrey): Henri Leonard Bertin (1720-1792) was a sinology scholar and
politician who developed a long correspondence with the French Jesuits in China
between 1744 and 1792. The manuscripts of this correspondence have been preserved
and are located in a Paris library (Institut de France). I hope Robyn will take a look
there; she may be able to find the original letters.
The Qing Dynasty was at its peak in 1782, but so far we have not found any records of
this disaster in the Chinese literature. In their reviews of historical tsunamis in China,
neither Keimatsu (1963) nor Lee (1981) mentioned this event. Therefore, further
investigation is needed before accepting the 1782 event as the deadliest tsunami in
history (until the 2004 Sumatra tsunami).
Task C. Investigating Historical Tsunamis in Taiwan and Neighboring Regions
We propose a systematic investigation of historical tsunamis in Taiwan and neighboring
regions using modern tools and international collaborations. With modern imaging
technology (digital cameras and scanners) it is straightforward to put the relevant
literature online. However, we need to locate the “original” sources, image and
translate these materials, and interpret them within their historical context. This also
means that we need to include information about the social conditions of the time when
evaluating how reliable the “original” sources are.
In addition to building up a “historical tsunamis in Taiwan” database, we also need to
build up a “reference sources” database, an “earthquake” database, and a “tide-gauge”
database. In the References section of this proposal, we include both “online” and
“published” sources. The quality of the online sources varies greatly, and even the
“official” websites contain numerous errors. Some websites are up-to-date, while many
others are not.
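A common way to make the four databases above interoperable is to fix a record layout for each. The sketch below shows one possible layout for a historical-tsunami entry, with cross-references into the proposed “reference sources” database; all field names are assumptions for discussion, not an agreed schema:

```python
# Illustrative record layout for one entry in a "historical tsunamis in
# Taiwan" database. Field names are assumptions, not an agreed schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TsunamiEvent:
    date: str                      # historical date, possibly approximate
    region: str                    # e.g., "Taiwan Strait"
    max_runup_m: Optional[float]   # None when the records give no number
    deaths: Optional[int]
    source_refs: List[str] = field(default_factory=list)  # keys into the
                                   # companion "reference sources" database

event_1782 = TsunamiEvent(
    date="1782-05-22",
    region="Taiwan Strait",
    max_runup_m=None,   # the historical accounts give no run-up figure
    deaths=50000,       # as listed by Bryant (2001); disputed in this report
    source_refs=["Mallet1853", "Perrey1862", "GazetteDeFrance1783"],
)
print(event_1782.region)  # -> Taiwan Strait
```

Keeping the source citations as keys rather than free text is what lets a re-interpretation (such as the 1782 analysis above) be traced back to the original documents.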
D. Communication, Education and Outreach
A necessary action item is to establish society infrastructure for quick response in
response to tsunamis. This includes establishing rapid communication between CWB
and agencies responsible for emergency services, and educating the public on tsunami
hazards. Since we have no experience in these areas, we refer the readers to a well
prepared report on this subject by Good (1995).
E. Collaborations
A good tsunami research program must involve experts in many disciplines. With rapid
advances in science, engineering, and technology, it is not possible for anyone to master
more than a few topics at a given time. Therefore, it is logical to have collaborations
among researchers, especially when expertise in different disciplines is required.
Tsunami research in Taiwan is in its infancy, and it is therefore important to encourage
Taiwan researchers to collaborate with well-established experts.
As noted in the Introduction, this proposal is intended to start discussions. We hope that
small groups of the Workshop participants will be formed to consider each element of
our proposal, and spend the necessary time to develop well-thought-out plans for
execution.
Concluding Remarks
While Taiwan should prepare an effective tsunami warning system composed of both
empirical monitoring and scenario computations, we must note that, based on the
historical records studied in published papers so far, the likelihood of a major tsunami
attack on Taiwan is rather remote. On the other hand, had the 1999 Chi-Chi earthquake
occurred 50 km east or west of its actual hypocenter, the tsunami damage could have
been unthinkable. Of course, one can argue that this is the worst-case scenario.
As we discussed in Section C above, the historical records are “scattered” in many
locations and have not been carefully examined and interpreted in the published papers
we have studied so far. In fact, there are many contradictions and obvious errors in the
literature and in the online databases, as discussed in Section C. Therefore, all existing
reports on tsunamis, together with the related earthquake and tide-gauge data for Taiwan
and neighboring regions, should be systematically collected and carefully re-interpreted.
Finally, we wish to point out that in the early 1990s, tsunami workshops were held in
the United States, resulting in three important reports (Bernard and Gonzales, 1994;
Blackford and Kanamori, 1995; Good, 1995). We should carefully consider their
recommendations in developing a more complete tsunami program in Taiwan.
Acknowledgements
We wish to thank Y. Okada and K. Satake for providing their computer codes to us. We
are grateful to Mike Blackford, Eric Geist, Moh-jiann Huang, Hiroo Kanamori, C.C.
Mei, J.J. Lee and Ted Wu for their informative discussions and helpful suggestions. We
wish to thank Julien and Robyn Fréchet for their kindness in tracking down some
critical information concerning the 1782 tsunami (?) event, and Liang-Chi Hsu for
translating Keimatsu (1963) from Japanese into English.
References
Online Information
A Google search for “tsunami” reports that about 16,800,000 websites contain
information on “tsunami”. During the past 2.5 months, Willie Lee visited about 100
websites and found the following useful in preparing this proposal. Please note that
many popular tsunami sites are not included. Websites are grouped by “Institution”
and “People”.
I. “Institution” Websites
International Tsunami Information Center (ITIC)
http://www.prh.noaa.gov/itic/more_about/itsu/itsu.html
NOAA, Pacific Marine Environmental Laboratory, tsunami research program
http://www.pmel.noaa.gov/tsunami/
NOAA, tsunami data
http://www.ngdc.noaa.gov/seg/hazard/tsu.shtml
Pacific Tsunami Warning Center (PTWC)
http://www.prh.noaa.gov/ptwc/
Russian Tsunami Laboratory, historical tsunami database
http://tsun.sscc.ru/htdbpac/
II. “People” Websites
Geist, Eric
http://walrus.wr.usgs.gov/staff/egeist/
Kanamori, Hiroo
http://www.gps.caltech.edu/faculty/kanamori/kanamori.html
Lee, Jiin-Jen
http://www.usc.edu/dept/civil_eng/dept/faculty/profiles/lee_j.htm
Liu, Philip L.-F.
http://www.cee.cornell.edu/faculty/info.cfm?abbrev=faculty&shorttitle=bio&netid=pll3
Okal, Emile
http://www.earth.northwestern.edu/research/okal/
Satake, Kenji
http://staff.aist.go.jp/kenji.satake/
Synolakis, Costas
http://www.usc.edu/dept/tsunamis/staff/costassynolakis.htm
Published Literature
Bernard, E. N., and F. I. Gonzales (1994). “Tsunami Inundation Modeling Workshop
Report (November 16-18, 1993)”, NOAA Technical Memorandum ERL PMEL-100,
Pacific Marine Environmental Laboratory, Seattle, Washington, 139 pp.
Blackford, M., and H. Kanamori (1995). “Tsunami Warning System Workshop Report
(September 14-15, 1994)”, NOAA Technical Memorandum ERL PMEL-105, Pacific
Marine Environmental Laboratory, Seattle, Washington, 95 pp.
Bryant, E. (2001). “Tsunami: The Underrated Hazard”. Cambridge University Press.
Engdahl, E. R., and A. Villasenor (2002). Global seismicity: 1900-1999. In
“International Handbook of Earthquake and Engineering Seismology”, edited by W.
H. K. Lee, H. Kanamori, P. C. Jennings, and C. Kisslinger, Part A, p. 665-690,
Academic Press, San Diego.
Gazette de France (1783). An unsigned paragraph on p. 288, No. 64, 12 August 1783
[in French].
Geist, E. L. (1998). Local tsunamis and earthquake source parameters. In Dmowska, R.
and B. Saltzman, eds., Tsunamigenic Earthquakes and Their Consequences,
Advances in Geophysics, 39, 117-209.
Geist, E. L. (2000). Rapid tsunami models and earthquake source parameters,
unpublished manuscript.
Good, J. W. (1995). “Tsunami Education Planning Workshop Report: Findings and
Recommendations”, NOAA Technical Memorandum ERL PMEL-106, Pacific
Marine Environmental Laboratory, Seattle, Washington, 41 pp.
Hsieh, C. M. (1964). “Taiwan – ilha Formosa”. Butterworths, Washington, D.C., 372 pp.
Iida, K., D. C. Cox, and G. Pararas-Carayannis (1967). Preliminary catalog of tsunamis
occurring in the Pacific Ocean. HIG-67-10, Hawaii Institute of Geophysics,
University of Hawaii, Honolulu, Hawaii, USA, 275 pp.
Keimatsu, M. (1963). On the historical tidal waves in China. Zisin (J. Seism. Soc.
Japan), Second Series, 16, 149-160. [English translation by L. C. Hsu is available
from Willie Lee].
Lee, Jiin-Jen, Chin-Piau Lai, and Yigong Li (1998). Application of computer modeling
for harbor resonance studies of Long Beach and Los Angeles Harbor Basin, ASCE
Coastal Engineering, Volume 2, 1196-1209.
Lee, Jiin-Jen, and J. J. Chang (1990). Nearfield tsunami generated by three dimensional
bed motions, Twenty-Second Coastal Engineering Conference, Coastal Eng. Res.
Council/ASCE, 1172-1185.
Lee, S. P. (1981). “Chinese Earthquakes”. Seismological Press, Beijing, China, 612 pp.
[in Chinese].
Lee, W. H. K., T. C. Shin, and T. L. Teng (1996). Design and implementation of
earthquake early warning systems in Taiwan, Paper No. 2133, 11th World Conf.
Earthq. Engin., Elsevier, Amsterdam.
Mallet, R. (1852, 1853, and 1854). Catalogue of recorded earthquakes from 1606 B.C. to
A.D. 1850, being the Third Report on the facts of earthquake phenomena,
Transactions of the British Association for the Advancement of Science for the
Annual Meetings in 1852, 1853, and 1854.
Mei, C. C. (1983). “The Applied Dynamics of Ocean Surface Waves”. Wiley, New York.
Nagai, T., H. Ogawa, Y. Terada, T. Kanto, and M. Kudaka (2003). Offshore wave,
tsunami and tide observation using GPS buoy. Conference paper in a PDF file from
Moh-jiann Huang.
Okada, Y. (1992). Internal deformation due to shear and tensile faults in a half-space.
Bull. Seism. Soc. Am., 82, 1018-1040.
Perrey, A. (1862). Documents sur les tremblements de terre et les phénomènes
volcaniques au Japon. Mémoires de l'Académie impériale des sciences, belles-lettres
et arts de Lyon - Classe des sciences, v.12, p.281-390 [in French].
Satake, K. (2002). Tsunamis. In “International Handbook of Earthquake and
Engineering Seismology”, edited by W. H. K. Lee, H. Kanamori, P. C. Jennings,
and C. Kisslinger, Part A, p. 437-451, Academic Press, San Diego.
Soloviev, S. L., and Ch. N. Go (1974). “A catalogue of tsunamis on the western shore of
the Pacific Ocean”. Academy of Sciences of the USSR, Nauka Publishing House,
Moscow, 310 pp. [in Russian; English translation: Canadian Translation of Fisheries
and Aquatic Sciences No. 5077, 1984, 447 pp.]
Teng, T. L., L. Wu, T. C. Shin, Y. B. Tsai, and W. H. K. Lee (1997). One minute after:
strong-motion map, effective epicenter, and effective magnitude, Bull. Seism. Soc.
Am., 87, 1209-1219.
Wu, M. C. (1994). “History of Taiwan”, 2nd edition, Time Post Press, Taipei, 289 pp.
[in Chinese].
Wu, Y. M., C. C. Chen, T. C. Shin, Y. B. Tsai, W. H. K. Lee, and T. L. Teng (1997).
Taiwan Rapid Earthquake Information Release System, Seism. Res. Lett., 68, 931-943.
Wu, Y. M. and H. Kanamori (2005a). Experiment on an onsite early warning method
for the Taiwan early warning system, Bull. Seism. Soc. Am. 95, 347-353.
Wu, Y. M. and H. Kanamori (2005b). Rapid Assessment of Damaging Potential of
Earthquakes in Taiwan from the Beginning of P Waves, Bull. Seism. Soc. Am. in
press.
Xie, Y. S., and M. P. Tsai (Editors) (1987). “Compilations of Historical Chinese
Earthquake Data”, Volume 3, Part A (A.D. 1644-1736), Seismological Press,
Beijing, 540 pp [in Chinese].
Appendix 1. Status of IOC
Please see Status_of_IOC.pdf, which is the English portion that can be downloaded
from http://ioc.unesco.org/iocweb/about.php
Appendix 2. Deep Ocean Assessment and Reporting of
Tsunamis (DART)
As part of the U.S. National Tsunami Hazard Mitigation Program (NTHMP), the Deep
Ocean Assessment and Reporting of Tsunamis (DART) Project is an ongoing effort to
maintain and improve the capability for the early detection and real-time reporting of
tsunamis in the open ocean. Developed by NOAA's Pacific Marine Environmental
Laboratory (PMEL) and operated by NOAA's National Data Buoy Center (NDBC),
DART is essential to fulfilling NOAA's national responsibility for tsunami hazard
mitigation and warnings.
DART stations have been sited in regions with a history of generating destructive
tsunamis to ensure early detection and to acquire data critical to real-time forecasts.
The six-buoy operational array shown on the following map was completed in 2001.
A DART system consists of an anchored seafloor bottom pressure recorder (BPR) and a
companion moored surface buoy for real-time communications (Gonzalez et al., 1998).
An acoustic link transmits data from the BPR on the seafloor to the surface buoy. The
data are then relayed via a GOES satellite link to ground stations (Milburn et al., 1996),
which demodulate the signals for immediate dissemination to NOAA's Tsunami
Warning Centers, NDBC, and PMEL. The moored system is shown in the
accompanying figure below.
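As a rough illustration of what the BPR actually senses, the sea-surface height change of a passing tsunami can be estimated from the bottom-pressure change via hydrostatic balance, Δh ≈ Δp / (ρg). This is a simplified sketch of the physics, not the actual DART processing algorithm, which also removes tides and filters the pressure record:

```python
# Hydrostatic conversion of a bottom-pressure change to an equivalent
# sea-surface height change. Simplified sketch only; real DART
# processing also removes tidal signals and filters the record.

RHO_SEAWATER = 1025.0  # kg/m^3, nominal density of seawater
G = 9.81               # m/s^2, gravitational acceleration

def pressure_to_height(delta_p_pa):
    """Sea-surface height change (m) implied by a bottom-pressure
    change (Pa), assuming hydrostatic balance: dh = dp / (rho * g)."""
    return delta_p_pa / (RHO_SEAWATER * G)

# A 1 kPa pressure increase corresponds to roughly 0.1 m of extra water.
print(round(pressure_to_height(1000.0), 3))  # -> 0.099
```

The centimeter-level resolution needed to see a deep-ocean tsunami against this background is what makes the BPR the demanding component of the system.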
So far, DART is still in an experimental stage. When the remaining instrumental
problems (mainly long-term reliability) are solved, NOAA may install a few dozen
DART stations in all oceans. Taiwan could then participate in the DART program by
“buying in” three stations to be installed in the offshore waters of Taiwan. This “buy-in”
should allow Taiwan to share all NOAA tsunami warning information and meet
Taiwan's tsunami monitoring needs.
Section B: Specifications and Evaluations
of Strong-Motion Instruments
W. H. K. Lee
November 16, 2005
Contents
I. Introduction ..................................................................................................................76
II. Instrument Specifications............................................................................................76
III. Instrument Evaluation ...............................................................................................76
Appendix B1. 2005 CWB Specifications for Digital Earthquake Strong-motion
Accelerographs ................................................................................................................77
Appendix B2. A Preliminary Evaluation of an ES&S Model Kelunji Echo
Accelerograph................................................................................................................101
Appendix B3. A Preliminary Evaluation of a Geotech Model SMART-24A
Accelerograph................................................................................................................103
Appendix B4. A Preliminary Evaluation of a Reftek Model 130-SMA/01 Accelerograph ......113
Appendix B5. A Preliminary Evaluation of Tests on a Geotech Model SMART-24A
Accelerograph under CWB Monitoring ........................................................................118
I. Introduction
Instrumentation specifications and evaluation were performed in 2005 in support of
the CWB 2005 procurements of free-field digital accelerographs.
II. Instrument Specifications
In support of the CWB procurements in 2005, instrument specifications were written
for 24-bit digital accelerographs. These specifications are given in Appendix B1. 2005
CWB Specifications for Digital Earthquake Strong-motion Accelerographs.
III. Instrument Evaluation
I received three technical proposals submitted for bidding on the CWB 2005 digital
accelerographs: ES&S, Geotech, and Reftek. Evaluation reports were sent to CWB on
March 16, 2005 (see Appendix B2, A Preliminary Evaluation of an ES&S Model
Kelunji Echo Accelerograph; Appendix B3, A Preliminary Evaluation of a Geotech
Model SMART-24A Accelerograph; and Appendix B4, A Preliminary Evaluation of a
Reftek Model 130-SMA/01 Accelerograph). Geotech won the 2005 CWB bid and was
required to repeat the technical tests witnessed by a CWB observer. Patricia Wang, a
graduate student living near Dallas, Texas, agreed to serve as the CWB observer. The
results of the monitored tests are given in Appendix B5, A Preliminary Evaluation of
Tests on a Geotech Model SMART-24A Accelerograph under CWB Monitoring.
Appendix B1. 2005 CWB Specifications for
Digital Earthquake Strong-motion
Accelerographs
January 2, 2005
2005 CWB Specifications for
24-bit Digital Earthquake Strong-motion
Accelerographs
I. Introduction
In this fiscal year, CWB would like to purchase 45 units of 24-bit digital
accelerographs. By “24-bit” we mean that a 24-bit A/D chip is used to digitize the
accelerometer signals and that the accelerograph achieves 20 bits (120 dB dynamic
range) or better in overall system performance for seismic signals in the earthquake
frequency band.
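The relation between effective bits and dynamic range quoted above follows from dynamic range (dB) = 20 · log10(2^n). A quick check of the figures in the text:

```python
# Dynamic range of an ideal n-bit digitizer in decibels.
import math

def dynamic_range_db(effective_bits):
    """Dynamic range in dB of an n-bit digitizer: 20 * log10(2**n)."""
    return 20.0 * math.log10(2 ** effective_bits)

print(round(dynamic_range_db(20)))  # -> 120 (the 20-bit figure quoted above)
print(round(dynamic_range_db(24)))  # -> 144 (an ideal 24-bit A/D chip)
```

The gap between the ideal 24-bit figure and the required 20-bit system performance reflects the noise contributed by the sensor, electronics, and installation.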
II. Required Items
For 2005, the following items are required:
(1) 45 units of 24-bit digital earthquake strong-motion accelerographs. Each unit must
be able to maintain absolute time to within +/- 0.005 sec of UTC when a GPS timing
device is connected to it, and must be ready for Internet access from anywhere in the
world when the unit is deployed in the field and connected to the Internet. [See
Section IV below for specifications.]
(2) 45 GPS timing devices (each with a 50-foot receiver cable) that can be connected to
the accelerograph to maintain absolute time to within +/- 0.005 sec of UTC at all
times and to provide the geographic location of the accelerograph. [See Item 11 of
Sub-section 4 of Section IV].
(3) Recommended spare parts for Items (1) and (2) for three years of operation, and a
listing of their prices. [See Section V below].
(4) A training program for installation, operation, and maintenance of Items (1) and (2).
[See Section VI].
(5) The required accelerographs and GPS timing devices must carry a 3-year full
warranty and maintenance service (see Note 5 below).
NOTE 1: All new bidders must arrange with Mr. Chien-Fu Wu (phone: 02-2-709-5603;
fax: 02-3-707-3220) for the Internet access test (see Section IV.10) during the following
time period: from _______________ to _______________. Previous bidders who had
passed the Internet access test are exempted.
NOTE 2: A bidder must submit a report of the test results (including computer readable
data files and the required software [see Section IV.6] on floppy disks or CD-ROM) in
their proposal in support of their claims that the proposed model meets the CWB 2005
specifications (see Appendix 1). [See Note 6 for exemption, and see Note 8 for a new
requirement in 2006].
NOTE 3: A bidder must submit the proposed model for test at the CWB Headquarters
and at the CWB Hualien Station for a field test during the following time period: from
_______________ to _______________. Details are specified in Appendix 2. [See Note
7 for exemption].
NOTE 4: All delivered units from the awarded bidder will be subjected to performance
acceptance tests as specified in Appendix 3.
NOTE 5: Full warranty for three years after final acceptance by CWB or its
designated agent is required. This warranty must include parts and labor for fixing any
breakdown of accelerographs and GPS timing devices under normal operating
conditions in the field (i.e., anywhere in Taiwan) free of charge. Repair or replacement
must be completed within 5 working days after notification by CWB, except that if any
replacement parts must be imported from outside Taiwan, an additional 10 working
days will be granted by CWB upon request.
NOTE 6: Accelerographs that were qualified in the CWB 2002 bidding of the 24-bit
digital accelerographs [Model K2 by Kinemetrics, and Model CV575C by Tokyo
Sokushin] are exempted from the requirements specified in Note 2 and Note 3 above.
Model 130-SMA/01 by Refraction Technology was conditionally approved in 2004, but
must be subjected to tests under CWB monitoring. In addition, Refraction Technology
must address CWB's technical comments on the Model 130-SMA/01 accelerograph.
NOTE 7: If the bidder wins the bid with a new accelerograph, then the same tests as
specified in Appendix 1 must be repeated and witnessed by a CWB appointed observer.
In this case, the bidder is required to give CWB a two-week advance notice for the time
and place for the repeated testing.
NOTE 8: Starting from next year, the 2006 CWB Specifications will require the
“technical tests” specified in Appendix 1 to be witnessed by a CWB appointed observer.
Because there is normally not sufficient time to schedule a monitored test after the
CWB bid announcement, all manufacturers intending to bid new accelerographs in
2006 should perform the necessary “technical tests” under CWB monitoring
before the 2006 CWB bid announcement.
III. Technical Evaluation
Each bidder is required to bid an accelerograph model that is in production and
meets all the specifications listed below. The bidder should prepare in their bid proposal
a clause by clause commentary indicating compliance with each specification. The bid
proposal must contain a report of the “technical tests” as specified in Appendix 1. This
technical test report must contain a written account of the technical tests (including the
specs of the shaking table system used), and the recorded data files and the required
software (see Section IV.6) on floppy disks or CD-ROM. The “technical tests” must be
conducted in an appropriate test laboratory by the bidder at their own expense. In
addition, the bidder must submit their recorded data at the CWB Headquarters test and
at the Hualien field test to CWB immediately after the tests, as specified in Appendix 2.
As indicated in Note 6 in Section II, accelerographs that were qualified in the CWB
2004 bidding of the 24-bit digital accelerographs are exempted from the above test
requirements. All bidders with new accelerographs must also arrange with Mr. Chien-Fu Wu for the Internet access test [see Section IV.10].
The CWB's Instrumentation Advisory Subcommittee will analyze all the recorded
data files from the proposed accelerograph (and the reference unit if applicable) to
determine if the new accelerograph meets the specifications. A bid of an accelerograph
will be automatically rejected if its technical test report (with data files and required
software [see Section IV.6] on floppy disks or CD-ROM for personal computers) is not
included in the bid proposal, or if the bidder failed to provide the test data recorded at
the CWB Headquarters and at the Hualien field test, or if the bidder failed in the
Internet access test. In addition, the bidder of a new accelerograph must provide the
specifications of the shaking table system used in the “technical tests” (see Appendix 1).
If the specifications do not meet the CWB required specs for the shaking table system,
then the bid will be automatically disqualified. However, accelerographs that were
qualified in the CWB 2004 bidding of the 24-bit digital accelerographs are exempted
from these requirements.
Technical evaluation will be carried out in the following steps.
(1) Technical evaluation will be based on the bidders' proposals, their technical test
report (including using a shaking table system that meets the CWB specs), test data
recorded at the CWB Headquarters and at the Hualien field test, the Internet access
test, and their reputation with respect to customers' satisfaction with their
accelerograph products. Any bidder whose accelerograph failed the Internet access
test will be automatically disqualified, and any bidder who used a shaking table
system that does not meet the CWB shaking table system specs will also be
disqualified.
(2) Based on results of the technical evaluation in (1), the CWB's Instrumentation
Advisory Subcommittee will decide whether or not a given bid proposal is
technically acceptable.
NOTE 1: The exact bidding and instrument evaluation procedures are given in the
Chinese version of the “CWB (2005) 24-bit Free-Field Accelerograph Specifications”.
NOTE 2: Bidders whose accelerographs were qualified in the CWB 2004 bidding of
the 24-bit digital accelerographs are exempted from technical evaluation. See Note 6 in
Section II for details.
IV. Specifications for Earthquake Strong-Motion Accelerographs
1. General Features
The accelerograph must be rugged, compact, weighing less than 25 kilograms,
transportable over rough terrain by vehicle, and then capable of being installed and field
calibrated with a minimum amount of adjustments. The accelerograph will be installed
in all types of environments and should be designed to withstand extremes of humidity,
dust, and temperature, and to be waterproof [see 2.1(5) below].
After installation, the accelerograph shall remain in a standby condition until
actuated manually for test purposes or triggered by ground motions satisfying the trigger
criteria. After actuation, it shall record data for a prescribed time period, and return to
the standby condition ready to record the next event without servicing or attention.
The accelerograph must be designed for quick trouble-shooting by performing
functional tests so that a technician can locate faulty component(s) or circuit board(s)
under field conditions. A field installation site may be a simple instrument shelter in a
remote region with extreme environment conditions.
2. System Operation
The accelerograph is normally packaged in a single unit and consists of four
components: the transducers (triaxial accelerometer), a solid-state digital recorder, a
GPS receiver, and a battery power supply. It must be capable of connecting by means of
a user-supplied modem to telephone lines for remote interrogation and data
downloading, and for Internet access of its recorded data files when it is connected to
the Internet [see subsection 10 below]. The case enclosing the accelerograph shall be
rugged enough to permit the accelerograph to operate after typical non-structural,
earthquake-caused debris, such as plaster, ceiling panels, and light fixtures, has fallen
on the unit from a height of 2.5 meters. The accelerograph must have handle(s) for ease of
carrying and facility for leveling adjustment. If necessary, the triaxial accelerometer can
be packaged separately from the recording unit.
System operation shall be such that it will automatically start recording when the
ground acceleration exceeds a preset triggering criterion. The trigger may actuate from
any selected combination of the three transducer signals.
A scheme for protected and externally visible indicator(s) must be provided to show
the event status. The memory status must be displayed upon user's interrogation via a
PC, and optionally by visible indicator(s).
2.1 System Characteristics
(1) System Accuracy: A "static" system accuracy of +/- 0.03 g for any sensitive axis
aligned with gravity from a tilt test is required, and a "dynamic" system accuracy of
+/- 3% on a RMS basis at room temperature from a shaking table test is required.
(2) System Response: nominally flat (+/- 3 dB) from DC to 50 Hz.
(3) System Noise: The overall system noise must be less than the equivalent of 1 digital
count of a 20-bit system on a RMS basis in the seismic frequency range of 0.01 to 50
Hz.
(4) Temperature Stability: Sensitivity change due to temperature effect must be less than
+/- 0.06% per degree C for the operating temperature range (-10 degree C to 60 degree
C). Similarly, zero-level change due to temperature effect must be less than +/- 0.06%
per degree C.
(5) Humidity and Waterproof: Must be able to handle high humidity (up to 100%), and
must be waterproof according to the NEMA (US National Electrical Manufacturers
Association) Standards Publication 250 for NEMA Type 6P enclosures (i.e., protection
against the entry of water during prolonged submersion at a limited depth), or the IEC
standard IP67.
(6) Auto-zeroing of DC level: If the accelerograph has the software feature of
auto-zeroing of DC level, the user must be able to turn it off if necessary.
(7) System DC-Level Drift in Field Operation: After removing the temperature effects
(see Item 4 above), a daily drift of less than +/- 240 digital counts (of a 20-bit system)
and a cumulative drift of less than +/- 720 digital counts (of a 20-bit system) over a
period of 5 days are required in a typical field environment (for a 2g full-scale
accelerograph when auto-zeroing of DC level is turned off).
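For orientation, the count-based drift limits above can be converted to acceleration. The calculation below is illustrative only, not part of the specification; it assumes the stated 2g full-scale, 20-bit configuration.

```python
# Illustrative only: express the drift limits of item (7), quoted in digital
# counts of a 20-bit system, as accelerations for a 2g full-scale unit.
FULL_SCALE_SPAN_G = 4.0   # +/- 2g full scale
COUNTS = 2 ** 20          # 20-bit digitizer

def counts_to_g(counts: float) -> float:
    """Acceleration represented by the given number of digital counts."""
    return counts * FULL_SCALE_SPAN_G / COUNTS

daily_limit_g = counts_to_g(240)      # roughly 0.0009 g per day
five_day_limit_g = counts_to_g(720)   # roughly 0.0027 g over 5 days
```

Under these assumptions, the daily limit corresponds to a DC-level drift of a little under 1 mg per day.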
2.2. Trigger Operation
(1) Trigger Level: Selectable from 0.0001g to 0.1g of any one or more of the 3
accelerometer channels.
(2) Trigger Frequency Response: Triggering criterion is applied only in the frequency
range from 0.1 to 12 Hz. The trigger filter's parameters must be given by the
manufacturer.
(3) Trigger Accuracy: Must be within +/-10% at 1% full-scale trigger level in the
frequency range from 0.1 to 12 Hz.
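As a sketch of how the trigger criterion in (1)-(3) might be applied, the fragment below checks a threshold against band-limited samples. It assumes the samples have already been filtered to the 0.1 to 12 Hz trigger band; the filter itself and any channel-combination logic are the manufacturer's design.

```python
# Minimal sketch of amplitude-threshold triggering on three channels.
# Assumes samples (in g) are already band-pass filtered to 0.1-12 Hz.
from typing import Sequence

def triggered(sample_g: Sequence[float], level_g: float = 0.0001) -> bool:
    """True when any channel's filtered amplitude reaches the trigger level."""
    return any(abs(a) >= level_g for a in sample_g)

# Example: one channel at 0.0002 g exceeds the default 0.0001 g level.
fired = triggered((0.0, 0.00005, 0.0002))
```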
3. Transducer Sub-Unit
Orthogonally oriented, triaxial (two horizontal and one vertical) accelerometers must
be mounted internally to the recording unit.
(1) Type: Force-balance or force-feedback accelerometers.
(2) Full scale: +/-2g standard.
(3) Dynamic Range: at least 120 dB.
(4) Frequency Response: nominally flat (+/- 3 dB) from DC to 50 Hz.
(5) Damping: between 0.6 and 0.7 of critical damping.
(6) Accuracy: The relationship between output signal and input acceleration is to be
within +/- 1% of full scale for all frequencies from DC to 50 Hz at room temperature.
(7) Cross-axis Sensitivity: 0.03 g/g maximum; 0.02 g/g desirable.
(8) Output: Nominally +/- 2.5 volts full scale, or must match the input requirement of
the recording unit.
(9) Noise: less than 3 dB (on a RMS basis) with respect to a 120 dB system.
(10) The unit itself or its transducer unit must have the facility for tilt testing. There
must also be an adjustment so that each axis's zero-level may be reset to compensate for
non-level mounting surface (< 2 degrees) by either one of the following methods: (i) by
individual axis, or (ii) simultaneously on all three axes. A reference line indicating each
sensor's orientation and polarity shall also be provided.
(11) The unit itself or its transducer unit must have an indicator for leveling the
transducer.
(12) Calibration data (voltage per g and accurate to better than +/- 1%) for the three
internal transducers must be provided with the accelerograph.
4. Digital Recording Sub-Unit
The recording sub-unit shall record three channels with appropriate signal
conditioning, A-D conversion, and solid-state memory. The retrieved digital data must
contain sufficient coded information to enable proper and complete decoding of the data
by the retrieval system using supplied program(s). The format of this recorded digital
data shall be in a form suitable for rapid data reduction by modern computer methods
and existing standard computer systems. Absolute timing to within +/- 5 msec of UTC
must be maintained at all times by the accelerograph if the GPS timing device is used.
In the event of losing the external GPS timing signal, the accelerograph must be capable
of maintaining absolute timing with a drift of less than +/- 26 milliseconds per day.
(1) Filtering: Anti-aliasing filter must be provided suitable for the maximum sampling
rate (see item 3).
(2) Analog Channel-to-Channel Sampling Skew: The channel-to-channel sampling must
be completed within 10% of the sampling interval in a known fixed manner so that
corrections can be applied.
(3) Sample Rate: 200 samples/sec/channel.
(4) Pre-event Data Storage: 0-30 seconds, selectable in steps of 1 second by software.
(5) Recording Type: Digital, solid-state memory and/or IC memory card.
(6) Resolution: 20 bits or better.
(7) Noise: less than 3 dB with respect to a 120 dB system (on a RMS basis) when the
signal input is shorted.
(8) Full Scale: Matching that of the output of the accelerometer.
(9) Total Recording Capacity: At least 180 minutes of recording time at 200 samples per
second for 3 channels.
(10) Removable Recording Device: A removable recording device (e.g., a PC-standard
removable memory card) of at least 20 megabytes must be provided for ease of data
transfer to a PC for data processing.
(11) Absolute Time and Location: A GPS device is required to provide geographical
location and absolute time to within +/- 0.005 sec of UTC at all times by the
accelerograph. Data acquisition must not be interrupted by GPS timing adjustments. In
the event of losing the external GPS timing signal, the accelerograph must be capable of
maintaining absolute timing with a drift of less than +/- 26 milliseconds per day.
(12) Coded Information: In addition to the recorded acceleration data, all relevant
instrument parameters are to be recorded in a header for each event. These items
include (but are not limited to): (a) the instrument's serial number, (b) the day and time
as synchronized by a servicing technician or as received from an external time code, and
(c) coded indicators for any options (gain, etc.) that are preset at the factory, and would
be required for processing the data.
(13) IASPEI Software Compatibility: Recorded data must be either written directly in
the PC-SUDS format, or a format conversion routine must be provided for conversion to
the PC-SUDS format. The PC-SUDS format is required so that the recorded data are
compatible with the IASPEI Software Library (jointly published by the International
Association of Seismology and Physics of the Earth's Interior and the Seismological
Society of America; see Sub-Section 6. “Required Software” below).
(14) Post Event Shut Off Delay: The system shall continue to record for 10 to 60
seconds (selectable in steps), after the signal drops below the trigger level.
(15) Facility for field calibration must be provided and described.
(16) At least 2 serial ports must be provided: Port #1 provides direct or external modem
(supplied by the user) communications for setup and/or download data; Port #2 is
dedicated to realtime digitized data stream output as specified in Section VII.
(17) Realtime digitized data stream in 16-bit data format: The system must be able to
provide (on a dedicated serial port) a serial stream of digitized 3-component ground
acceleration data at 50, 100, or 200 (user selectable) samples per second per channel for
transmission by hardwire or a suitable modem (supplied by the user) to a receiving
station of the USGS Digital Telemetry System for realtime operation at all times. The
digitized data at 50 or 100 samples per second per channel may be derived from
decimation of the 200 sampling rate data. Suitable anti-aliasing filtering to 50 or 100
samples per second is required. A mating connector to the realtime digitized data stream
must be provided (see Section VII below). Please note that the 16-bit realtime data
stream format is required in order to be compatible with the existing CWB telemetry
system.
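As a rough consistency check (not taken from the specification), the pairing of sample rates and baud rates above leaves serial bandwidth headroom, assuming a common 8-N-1 framing of 10 bits per byte and the 8-byte sample packet defined in Section VII.

```python
# Back-of-envelope check: 8-byte packets at each selectable sample rate fit
# within the corresponding baud rate, assuming 10 bits per byte (8-N-1).
RATE_TO_BAUD = {50: 4800, 100: 9600, 200: 19200}

for rate, baud in RATE_TO_BAUD.items():
    bits_per_second = rate * 8 * 10   # samples/s * bytes/packet * bits/byte
    assert bits_per_second <= baud, (rate, baud)
```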
5. Power Supply
The accelerograph shall operate from an internal battery that can be charged either
from solar cells or from a 110V +/- 20% AC power source. The accelerograph must
meet the following requirements:
(1) Internal Battery: 12 volt rechargeable, sufficient to operate the system on standby for
a minimum of 36 hours with the GPS timing device (or for a minimum of 48 hours
without the GPS timing device) and then record for 90 minutes without external power
source for charging.
(2) If the external power source for the accelerograph is cut off for more than 36
hours, the accelerograph must be able to restart automatically and function
properly after the external power source is restored.
(3) Supplemental Power: The accelerograph shall be configured so that an auxiliary
external 12 Vdc power source may be connected in such a way as to add to the
Amp-hour capacity of the internal battery.
(4) Because a rechargeable battery can create a safety hazard in a waterproof
accelerograph as hydrogen gas can accumulate and cause an explosion, the
accelerograph must have a safety device (e.g., breather valves) to guard against this
safety hazard.
6. Required Software
There are two main categories of required software.
(1) Instrument Firmware: The instrument's firmware program consists of the code
(normally embedded in EPROMs) to perform the basic functions of recording and
retrieval of earthquake records. The internal data recording format must be able to store
24-bit data samples and should be clearly described. Other important functions are event
triggering and pre-event memory control. Also, the programs normally allow the user to
examine and set the instrument's operating parameters, and perform important
diagnostic functions. They should be upgradeable. In addition, a user must be able to
select either the required 16-bit data stream output, or the manufacturer’s 24-bit data
stream output of its internal recorded data.
(2) External Support and Communications Programs: These programs must run on a
typical personal computer (running under either Microsoft Windows or DOS), and
provide the user interface to the instrument. They must support remote communications
via telephone, including Internet access of the recorded data either via anonymous FTP
or by the TCP/IP based software provided by the manufacturer. They are also used to
retrieve the data and display it. The display of earthquake records should be able to be
accomplished with a minimum of processing. A stand-alone utility program to convert
the 24-bit recorded data (if it is not written directly in the PC-SUDS format) to the
standard PC-SUDS format for IASPEI software compatibility must be provided.
IASPEI Software (executable code and source code) packages are published jointly by
the International Association of Seismology and Physics of the Earth's Interior and the
Seismological Society of America. They are available for sale from the Seismological
Society of America, 201 Plaza Professional Building, El Cerrito, CA 94530, USA
(Phone: 1-510-525-5474; Fax: 1-510-525-7204).
7. Interconnection with Other Identical Accelerographs
The accelerograph shall be capable of being interconnected for common timing and
common triggering with identical accelerographs. When interconnected, a trigger signal
from any one accelerograph shall cause simultaneous triggering in all interconnected
accelerographs.
8. Ancillary Requirements
A convenient means for system calibration and checkout shall be provided. The
calibration of the total system for sensitivity shall be possible by a physical tilt test.
Operability of the total system shall be possible by application of functional test
voltages under software control which stimulate the accelerometer mass, permitting the
determination of the damping and frequency response of the system. In addition, testing
and data retrieval shall be performed with a typical personal computer (running under
either Microsoft Windows or DOS).
Remote interrogation shall be possible so that parameters of the data, including event
count, battery voltages, amount of memory used, and accelerogram parameters (such as
peak value and trigger-time) shall be available via telephone.
A manual shall be provided with a complete and detailed description of all operational
characteristics and of all adjustments or options capable of being made in the factory, in
the shop, and in the field. The manual must be written clearly enough that a trained
electronic technician in a shop along with the manufacturer's recommended test
equipment could thoroughly test out every operating feature of the system and therefore
be in a position to judge whether (1) repairs or adjustments are necessary to bring the
system up to the required specifications or (2) a return to the factory is necessary. The
manual must contain a complete and detailed description of the format of the recorded
data. The factory calibration data for individual components, including those for the
transducers, filters, and clocks, shall be provided.
9. Training and Support
The seller must provide a training course at CWB, Taipei, Taiwan. The training
program must provide sufficient instruction on the installation, operation, maintenance
and repair of the accelerograph. The course must also include sufficient instruction on
the installation and operation of all provided software and timing systems. The maker
must supply a copy of their course outline within one month after signing of the contract.
10. Internet Access Capability
The proposed accelerograph must have the Internet access capability; i.e., when the
unit is deployed in the field and is connected to the Internet, data recorded by the
accelerograph must be accessible from anywhere on the Internet for downloading the
recorded data files in near real time either via anonymous FTP or by the TCP/IP based
software provided by the bidder. The test for the Internet access capability must be
performed by all bidders by arrangement with Mr. Chien-Fu Wu (phone: 02-2-7095603; fax: 02-3-707-3220) within the specified time period given above [see Note 1 of
Section II]. A bidder must first set up the proposed accelerograph and connect it to the
Internet at a site with telephone communication. He then arranges with Mr. Chien-Fu
Wu to set up the necessary software (if necessary) in a PC at CWB that is connected to
the Internet. When the bidder is satisfied with the connection (both Internet and telephone
communication), he requests a formal test. Mr. Wu will then instruct the bidder to tell
the person at the accelerograph site to start recording and to tap the accelerograph at
certain time intervals to generate sudden “pulses”. The recorded file (typically 1 minute
in length) should appear for download either via anonymous FTP or by the bidder’s
TCP/IP based software in near real time (i.e., within 2 minutes after the recording
ended). The downloaded file should be plotted by the bidder using his software to show
that “pulses” did occur at the specific times given over the telephone. A bidder will be
automatically disqualified if the trigger-recorded data files cannot be downloaded and
shown to have the specified “pulses” after 3 formally requested trials. We realize that
there can be Internet problems beyond the bidder’s control. Therefore, a bidder should
check out everything at CWB first before requesting a formal test.
V. Recommended Spare Parts for Three Years Operation
The bidder must quote the recommended spare parts with an itemized price list, valid
and firm for one year after the contract is signed, needed for the 3-year operation of the
delivered accelerographs.
VI. Specifications for Training and Support
CWB specifications for training at CWB have been given in Subsection 9 of Section
IV above. The seller must provide the training free of charge as follows:
On-site training of CWB staff (20 maximum) and demonstration of installation,
operation, and maintenance for the accelerographs and related items in Taiwan are
required during the period in which the Post Award Performance Acceptance Tests are
conducted.
VII. Specifications for Realtime Digital Data Stream Output
The proposed accelerograph must have two user selectable realtime digital data
stream output formats: (1) a 24-bit format with time tag as designed by the manufacturer,
and (2) the 16-bit data format as specified below. A bidder must provide technical
details on their 24-bit data stream format and the software to read these data.
In order to be compatible with existing accelerographs in CWB, digital data are to
be streamed out in packets immediately upon completion of a sample scan of all three
channels by the accelerograph. The output rate is 50, 100, or 200
samples/channel/second (user selectable by either hardware jumpers or software
commands) at 4800, 9600, or 19200 baud, respectively, and each sample packet consists
of eight bytes with the following format:
Byte No.  Description
1         Sync character (user programmable)
2         Most significant byte (MSB) of first channel (16-bit) data
3         Least significant byte (LSB) of first channel (16-bit) data
4         Most significant byte (MSB) of second channel (16-bit) data
5         Least significant byte (LSB) of second channel (16-bit) data
6         Most significant byte (MSB) of third channel (16-bit) data
7         Least significant byte (LSB) of third channel (16-bit) data
8         Auxiliary data byte for timing and error checking
This realtime digital data stream output must be 100% compatible with the USGS
Realtime Digital Telemetry System when the XRTPDB program (published in the
IASPEI Software Library Volume 1; See Sub-Section 6. “Required Software” of
Section IV above) is used for realtime data acquisition of the accelerograph.
NOTE 1: The Auxiliary data byte (8 bits) should be used as follows: (1) bits 0 to 5
are used for parity error checking of the six data bytes, (2) bit 6 may be used
for a message if necessary, and (3) bit 7 may be used for timing if necessary.
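A hypothetical encoder/decoder for this packet layout is sketched below. The sync value, the signed 16-bit interpretation of each channel, and the even-parity convention for bits 0-5 of the auxiliary byte are illustrative assumptions, not requirements of the specification.

```python
# Hypothetical 8-byte packet codec for the format above; the sync value and
# parity convention (one even-parity bit per data byte) are assumptions.
SYNC = 0x9C  # "user programmable" sync character; arbitrary choice here

def _parity(byte: int) -> int:
    """Parity bit: 1 if the byte has an odd number of set bits."""
    return bin(byte).count("1") & 1

def encode(ch1: int, ch2: int, ch3: int) -> bytes:
    data = b""
    for v in (ch1, ch2, ch3):
        data += (v & 0xFFFF).to_bytes(2, "big")   # MSB first, per the table
    aux = 0
    for i, b in enumerate(data):   # bits 0-5: parity of the six data bytes
        aux |= _parity(b) << i
    return bytes([SYNC]) + data + bytes([aux])

def decode(packet: bytes) -> tuple:
    assert len(packet) == 8 and packet[0] == SYNC, "bad packet"
    for i, b in enumerate(packet[1:7]):
        assert (packet[7] >> i) & 1 == _parity(b), "parity error"
    return tuple(int.from_bytes(packet[1 + 2 * i:3 + 2 * i], "big", signed=True)
                 for i in range(3))
```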
NOTE 2: The realtime digital stream output must not be interrupted when the
accelerograph is performing its normal functions.
NOTE 3: IASPEI Software (executable code and source code) packages are published
jointly by the International Association of Seismology and Physics of the Earth's
Interior and the Seismological Society of America. They are available for sale from the
Seismological Society of America, 201 Plaza Professional Building, El Cerrito, CA
94530, USA (Phone: 1-510-525-5474; Fax: 1-510-525-7204).
Appendix 1. Technical Tests to be Conducted by a Bidder for
a Proposed Accelerograph
“Technical Tests” for a proposed accelerograph must be conducted in an appropriate
laboratory by the bidder at their own expense and must include the following tests. The
shaking table system used for the Section 1 tests must meet or exceed the CWB
specifications [see Note 1 below]. Otherwise, the bidder will be automatically
disqualified.
A report describing the “technical tests” and results must be included in the bidder's
proposal. In addition, the recorded acceleration data, the recorded displacement data if
applicable, and the required software [see Section IV.6] must be provided as computer
readable files on floppy disks or CD-ROM for personal computers running under
Microsoft Windows or DOS. Failure to submit the technical test report (including the
specified data files on floppy disks or CD-ROM) with the bid proposal will lead to
automatic rejection of the bidder's proposal. However, bidders whose proposed
accelerographs had been qualified in the 2004 CWB bidding of the 24-bit digital
accelerographs are exempted from these required technical tests (See Note 6 of Section
II for details).
1. System Response to Vibration
An accelerograph must be subjected to the shaking table tests using a proper shaking
table system [see Note 1 below]. The accelerometers used to monitor the shake table
(which must be separate from those in the accelerograph) may be used as the reference.
The bidder must also record the time history of the shake-table displacement with a
suitable displacement sensor (+/- 1% accuracy or better) for test (7) below. The
recorded data must be submitted as computer readable files on floppy disks or
CD-ROM, with software to convert the recorded files to the standard PC-SUDS format.
Input signals for the shake table are:
(1) 1 Hz, 0.1 g sine waves for 60 seconds in x-direction,
(2) 1 Hz, 0.1 g sine waves for 60 seconds in y-direction,
(3) 1 Hz, 0.1 g sine waves for 60 seconds in z-direction,
(4) 10 Hz, 0.1 g sine waves for 60 seconds in x-direction,
(5) 10 Hz, 0.1 g sine waves for 60 seconds in y-direction,
(6) 10 Hz, 0.1 g sine waves for 60 seconds in z-direction,
(7) 1 Hz, 3 mm displacement "steps" in one direction (with a 25 to 30 msec
rise time for "rounding" the step corners) for 60 seconds.
2. System Static Accuracy
The static accuracy of an accelerograph can be determined by a tilt test of the
accelerograph on a tilt table. A precision tilt table (with better than 0.1 degree tilt
control) must be used. Data must be recorded for 60 seconds each for the following tilt
angles: 0, 30, 60, 90, 120, 150, 180, 210, 240, 270, 300, 330, and 360 degrees, and
submitted as computer readable files on floppy disks or CD-ROM.
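For reference, the static input during such a tilt test follows directly from the geometry. The sketch below is an illustration rather than part of the specification: it assumes a sensing axis that starts aligned with gravity (expected output g·cos θ) and one that starts horizontal in the tilt plane (expected output g·sin θ).

```python
# Expected gravity component (in g) along a sensing axis at each tilt angle,
# assuming the geometry described in the lead-in.
import math

ANGLES_DEG = [0, 30, 60, 90, 120, 150, 180, 210, 240, 270, 300, 330, 360]

expected_vertical_g = {a: math.cos(math.radians(a)) for a in ANGLES_DEG}
expected_horizontal_g = {a: math.sin(math.radians(a)) for a in ANGLES_DEG}
```

Comparing the recorded average at each angle against these values, within the +/- 0.03 g static accuracy of Section IV.2.1, is one way to reduce the test data.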
3. Digitizer Performance
A bidder may choose one of the following two choices for testing digitizer
performance: either 3A) Sandia Test, or 3B) the CWB 2002 Test.
3A. Sandia Test
Digitizer performance is to be tested according to the Modified Noise Power Ratio
test as described in Sandia National Laboratories technical report SAND 94-0221,
"Modified Noise Power Ratio Testing of High Resolution Digitizers", by T. S.
McDonald, 1994. This report is available as SANDIA94.PDF from Mr. Chien-Fu Wu
upon request.
The test involves driving two identical digitizer channels with pseudo random, band
limited, Gaussian noise and measuring the noise power ratio (NPR), defined as the ratio
of the RMS input noise to the RMS non-coherent noise floor (both averaged over the
digitizer pass band). The resolution is estimated indirectly by comparing the NPR as a
function of RMS input noise against ideal digitizers.
Vendors are required to provide a plot of NPR in decibels versus loading factor in
decibels compared with theoretical curves for ideal digitizers of varying dynamic ranges
(i.e., number of bits). The loading factor is the ratio of the digitizer clip level to the
RMS input noise. The NPR must be determined at RMS input levels between the RMS
shorted input and clipping in 10 dB steps. Vendors are also required to provide a plot of
shorted input power in decibels versus frequency and at least one plot of the phase of
the non-coherent noise in degrees versus frequency. Both plots must include at least
the frequency band 0 < f < 50 Hz.
3B. The CWB 2002 Test
(1) Noise Test: The inputs to the digitizer are shorted and the system noise is recorded
for 300 seconds by the accelerograph as a computer readable file, to be submitted
with the report. The recorded system noise should be less than the equivalent of 1 digital
count of a 20-bit system on a RMS basis in the frequency range of 0.01 to 50 Hz.
(2) Full-Scale Clip: A voltage calibrator is connected to the inputs and the full-scale clip
level of the digitizer on each channel is recorded for 10 seconds each by the
accelerograph as a computer readable file (to be submitted with the report). This allows
the full-scale accuracy to be verified.
(3) Filter Performance Verification: A swept sine is applied to the inputs of the digitizer
to test the amplitude and phase response of the digitizer and is recorded for 60 seconds
by the accelerograph as a computer readable file (to be submitted with the report).
Accelerographs using oversampling techniques will demonstrate the performance of the
DSP filter, while more classical digitizers will demonstrate the performance of the
analog anti-aliasing filter.
(4) Frequency Response Spot Tests: Apply a sine wave of very high spectral purity and
record 60 seconds by the accelerograph as a computer readable file (to be submitted
with the report). CWB will examine the recorded data for noise that should not degrade
a 20-bit system to less than 114-dB dynamic range.
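The 114-dB figure is consistent with the one-count noise requirement: a 20-bit digitizer whose RMS noise equals one count has 2^19 counts between that noise level and either clip level. A quick check of the arithmetic:

```python
# Consistency check (illustrative): dynamic range of a 20-bit system with
# one count of RMS noise, measured to +/- full scale.
import math

dynamic_range_db = 20 * math.log10(2 ** 19)   # about 114.4 dB
```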
4. Utility Software
The manufacturers must provide, with their bid proposal, utility software that performs
the following functions for their proposed accelerograph:
(1) Operate the unit and set the instrument parameters, including the timing system.
(2) Retrieve data from the accelerograph.
(3) Display the retrieved data.
(4) If the accelerograph does not write data in the PC-SUDS format directly, then
conversion software must be supplied to convert the data to the PC-SUDS format
for test of IASPEI software compatibility.
NOTE 1: The bidder must perform the test for “system response to vibration” using a
proper shaking table system that must meet the following specifications:
(1) The shaking table system must be able to carry the load of the 24-bit accelerograph
to be tested plus the weight of all other monitoring sensors on the shake table, and
must be capable of shaking up to +/- 0.2g at 1 Hz.
(2) The shaking table system must be equipped with a reference 3-component
accelerometer and at least one displacement gauge (e.g., a LVDT displacement
transducer) along the active shaking axis to monitor the shake table motion.
(3) The shaking table system must have a data logger of 24-bit resolution and capable of
sampling at 200 samples per second. We recommend that the bidder use another
unit of the bid 24-bit accelerograph as the data logger to record the output of the
reference accelerometer and displacement gauge.
(4) The shaking table system must be capable of faithfully carrying out the 7 specified
input signals as specified in Section 1 of this Appendix.
(5) The shaking table system must be able to faithfully record the time history of the
displacement of the shake table using a proper displacement gauge with an accuracy
of better than +/-1% for small displacements in the millimeter range.
(6) The time history of the reference accelerometers and of the displacement gauge must
be recorded with a data logger that is time synchronized with the accelerograph
under test. If a 3-channel data logger is used for the reference 3-component
accelerometer and the displacement gauge, the bidder may substitute one channel of
the accelerometer output (in the direction that is not active in shaking) by the output
of the displacement gauge.
(7) If a uni-axial shaking table system is used, then the accelerograph must be mounted
so that every axis (i.e., x, y, or z) can be tested along the active axis in turn.
(8) A detailed description of the shaking table systems used for the technical tests must
be provided by the bidder in their technical report, including specs of major sub-systems
(i.e., the manufacturer, the model number, and their technical performance
specifications). Failure to include this information will lead to automatic rejection of
the bid.
Please note that any bidder not using a proper shaking table system (i.e., whose
performance does not meet the above CWB specifications) will be automatically
disqualified.
NOTE 2: IASPEI Software (executable code and source code) packages are published
jointly by the International Association of Seismology and Physics of the Earth's
Interior and the Seismological Society of America. They are available for sale from the
Seismological Society of America, 201 Plaza Professional Building, El Cerrito, CA
94530,
USA
(Phone:
1-510-525-5474;
Fax:
1-510-525-7204).
Appendix 2. Test at the CWB Headquarters and at the CWB
Hualien Station
Bidder must contact CWB 10 days before the closing date for bidding to arrange a
schedule for testing at the CWB Headquarters (64 Kung Yuan Road, Taipei), and at the
CWB Hualien Station (24 Hua Kang Street, Hualien).
Bidder must transport the proposed accelerograph to the CWB Headquarters and to
the CWB Hualien Station at their own expense. All accelerograph operations must be
conducted by the bidder, under monitoring by CWB. A copy of all the recorded data
must be provided to CWB immediately after the test.
However, bidders whose proposed accelerographs had been qualified in the CWB
2002/2004 bidding of the 24-bit digital accelerographs are exempted from these
required technical tests.
I. Test at the CWB Headquarters (1 day):
(1) Tilt table test: at tilt angles of 0, 30, 60, 90, 135, 180, 210, 270, 315, and 360
degrees. Record at least one minute when the accelerograph is at each tilt angle.
(2) RTD (16-bit realtime data stream output) test: Bidder must provide a 2-meter or
longer RS-232 output cable with a 25-pin connector for connecting to CWB's
realtime system. Record at least 5 minutes with occasional shaking of the
accelerograph to simulate an earthquake.
II. Test at the CWB Hualien Station (14 days):
(1) The bidder must set up their proposed accelerograph with GPS timing on the
seismic pier at the CWB Hualien Station for a field test of approximately 14 days.
(2) Continuous recording for at least 3 hours or until the memory is full. Provide a copy
of the recorded data to CWB immediately.
(3) Set trigger level at 0.0005g for all 3 seismic channels, and set trigger recording
whenever any one of the 3 seismic channels exceeds 0.0005g, with 30 seconds of
pre-event and 30 seconds of post-event recording, and synchronize the
accelerograph clock with GPS timing. Leave the accelerograph at the Hualien
seismic pier for approximately 14 days, and provide a copy of the recorded data to
CWB at the end of the test.
Appendix 3. Post Award Performance Acceptance Tests
I. Criteria for Acceptance
The basic question is: how does one know that an accelerograph is functioning
properly and meets the technical specifications? By shaking an accelerograph on a
shake table, one can find out if it is functioning correctly and by analyzing the recorded
data, one can determine if it meets the important technical specifications.
II. Tests for All Accelerographs
If an accelerograph fails to meet any one of the following tests, then, besides any
applicable penalty clause in the contract, it will be returned to the supplier for repair or
replacement until it passes all the tests.
1. Visual Inspection
All accelerographs will be visually inspected for damage and other imperfections: (i)
Verify that there is no damage to the case, with particular attention to the connectors
and latches; (ii) Generally inspect the visible portions of the accelerograph for evidence
of damage; and (iii) Verify that all items on the packing list are included in the shipment.
An acceptable unit must not have any obvious imperfections. Report any damage or
discrepancies to the supplier's representative. Make notes of any damage during
shipment for use in preparing possible claims against the shipping carrier.
2. Power/Charger Test
Each accelerograph will be connected to its AC power charger and allowed to charge
the internal backup battery for a period of 24 hours with the accelerograph turned off.
After the charging period, the charger will be disconnected and the battery cable
unplugged. For an acceptable unit, the open-circuit voltage measured 24 hours later
should be 12.9 Vdc +/- 1.3 V.
3. Tilt Test
The accelerograph to be tested will be mounted flat on a precision tilt table, and the
accelerograph will be tilted to various angles. An error of not more than +/- 0.03 g for
the sensitive axis aligned with gravity is required. An additional error of +/- 0.03 g is
allowed for cross-axis effects if applicable.
4. Shake Table Test
All accelerographs (after charging 24 hours) will be placed on a shake table for
shaking tests. The CWB shake table can accommodate 1 accelerograph at a time and
shake along one horizontal direction. Input signals for the shake table are: (1) 1 Hz, 0.1
g sine waves for 60 seconds, (2) 10 Hz, 0.1 g sine waves for 60 seconds, and (3) 1 Hz, 3
mm rounded displacement "steps" (with 25 msec to 30 msec rise time).
An acceptable accelerograph must be able to record all the input test signals, and
must record a time history for any test signal that is within +/- 3% of the signal recorded
by the reference accelerometer for the sine waves (on an RMS basis and adjusted for
sampling time difference), and within +/- 10% of the displacement measured by the
displacement gauge.
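The ±3% RMS acceptance criterion for the sine-wave tests can be expressed as a short computation. The Python sketch below is an illustration only, not part of the official procedure; the function names, the 200-sps sample rate, and the synthetic records are our assumptions, and the records are assumed to be mean-removed and already adjusted for sampling-time difference, as the specification requires.

```python
import numpy as np

def rms(x):
    """RMS amplitude of a record after mean removal."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.sqrt(np.mean(x * x))

def sine_test_passes(test_rec, ref_rec, tol=0.03):
    """True if the test record's RMS is within +/- tol (3%) of the
    reference accelerometer's RMS."""
    return abs(rms(test_rec) - rms(ref_rec)) / rms(ref_rec) <= tol

# Illustration: a 1 Hz, 0.1 g sine wave sampled at an assumed 200 sps
t = np.arange(0, 60, 1.0 / 200)
ref = 0.1 * np.sin(2 * np.pi * 1.0 * t)
print(sine_test_passes(1.02 * ref, ref))   # 2% gain error -> True
print(sine_test_passes(1.05 * ref, ref))   # 5% gain error -> False
```

The same comparison, applied to displacement records with a 10% tolerance, covers the "step" criterion.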
III. Tests for Randomly Selected Accelerographs
If any randomly selected accelerograph fails to meet any one of the following tests
then, in addition to any applicable penalty clause in the contract, the supplier is
required to correct the problem(s) for all accelerographs.
1. Power Consumption
Three randomly selected accelerographs will be charged for 24 hours with the units
turned OFF. The units will then be disconnected from their AC power chargers and
placed in their acquisition mode. After being allowed to operate for a period of 48 hours
(with GPS timing device off) from the backup battery, the accelerographs will be
triggered to record for 90 minutes.
An acceptable accelerograph (with GPS timing device off) must be able to operate
for 48 hours off the backup battery and then record for 90 minutes. A similar test may
be performed on selected accelerographs with the GPS timing device on. In this case, these
accelerographs must be able to operate for 36 hours off the backup battery and then
record for 90 minutes.
2. GPS Timing
Three randomly selected accelerographs will have their GPS timing checked against an
external UTC timing device several times during a day according to the supplier's
procedure. An acceptable accelerograph must maintain time within +/- 5
milliseconds of UTC at all times. If the external GPS timing signal is lost, the
accelerograph must be capable of maintaining absolute timing with a drift of
less than +/- 26 milliseconds per day.
3. DC-Level Drift
Three randomly selected accelerographs will be set up for DC-level drift test. The
auto-zeroing feature will be turned off and data will be collected several times every day
for 5 days in an outdoor environment. After temperature effects are removed, an
acceptable accelerograph must have an average DC-level drift (with respect to a 20-bit
system) of less than +/- 240 counts per day and a cumulative drift of less than +/- 720
digital counts over a period of 5 days in a typical field environment for the 2g full-scale
accelerograph when auto-zeroing of DC level is turned off.
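A minimal sketch of how the drift criterion might be checked, assuming one temperature-corrected DC level per day over the 5-day test; the function name and the sample values are hypothetical, not CWB test code.

```python
import numpy as np

def dc_drift_ok(daily_dc_counts, per_day_limit=240, cumulative_limit=720):
    """Check the DC-level drift criterion (20-bit system, 2g full scale).

    daily_dc_counts: mean DC level (digital counts) for each day of the
    5-day test, after temperature effects have been removed.
    """
    dc = np.asarray(daily_dc_counts, dtype=float)
    daily_drift = np.diff(dc)        # day-to-day change in counts
    cumulative = dc[-1] - dc[0]      # net drift over the whole test
    return (np.all(np.abs(daily_drift) <= per_day_limit)
            and abs(cumulative) <= cumulative_limit)

# Hypothetical 5-day series of temperature-corrected DC levels (counts)
print(dc_drift_ok([10, 150, 60, -80, 120]))       # within limits -> True
print(dc_drift_ok([0, 300, 600, 900, 1200]))      # 300 counts/day -> False
```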
4. Trigger Level
Three randomly selected accelerographs will be placed on the CWB's small shake
table. Verify that the trigger level is within +/-10% of the technical specifications.
5. Interconnection
The supplier will demonstrate that data can be downloaded via direct-wire and
telephone/modem (supplied by the user) connections, and that the software performs as
specified in the technical specifications.
6. Recording Sub-Unit Noise
The technical specifications for the recording sub-unit call for “noise less than 3 dB
with respect to a 120 dB system (on a RMS basis) when the signal input is shorted”. To
test this requirement, three randomly selected accelerographs will be subjected to the
following test. By disconnecting the sensors from the analog input board and shorting
the input pins together, the noise of the recording unit will be recorded for 10 minutes.
The noise should be less than 1 LSB as measured on a RMS basis in the frequency
range 0.01 to 50 Hz for a 20-bit system.
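One way to evaluate this criterion numerically is to integrate a Welch PSD estimate of the shorted-input record over the 0.01 to 50 Hz band. The sketch below is an illustration under assumed parameters (200-sps sampling, `nperseg=4096`), not the official test code.

```python
import numpy as np
from scipy.signal import welch

def band_rms_counts(x, fs, f_lo=0.01, f_hi=50.0):
    """RMS (in digital counts) of a noise record restricted to the
    0.01 to 50 Hz band, estimated by integrating the Welch PSD."""
    f, pxx = welch(np.asarray(x, dtype=float), fs=fs, nperseg=4096)
    band = (f >= f_lo) & (f <= f_hi)
    df = f[1] - f[0]
    return np.sqrt(np.sum(pxx[band]) * df)

# Synthetic 10-minute shorted-input record at an assumed 200 sps:
# white noise with a standard deviation of 0.3 LSB easily passes (< 1 LSB)
rng = np.random.default_rng(0)
noise = 0.3 * rng.standard_normal(10 * 60 * 200)
print(band_rms_counts(noise, fs=200.0) < 1.0)   # True
```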
7. Other Tests
CWB may choose to perform additional tests for some randomly selected
accelerographs to verify that the units meet the technical specifications.
Appendix B2. A Preliminary Evaluation of an
ES&S Model Kelunji Echo Accelerograph
by
The CWB Instrumentation Committee
March 15, 2005
1. Introduction
The CWB Instrumentation Committee has been charged to evaluate an ES&S model
Kelunji Echo accelerograph with respect to the “2005 CWB Specifications of Procuring
and Installing 24-bit strong-motion accelerographs”.
2. Submitted Technical Proposal
Each member of the CWB Instrumentation Committee received a copy of the submitted
technical proposal with an attached CD-ROM. It is obvious that the submitted materials
were hastily prepared and lacked the required report on the results of the technical tests.
The 2005 CWB Specifications clearly state in Note 2 on Page 6:
A bidder must submit a report of the test results (including computer readable data files
and the required software [see Section IV.6] on floppy disks or CD-ROM) in their
proposal in support of their claims that the proposed model meets the CWB 2005
specifications (see Appendix 1).
The second paragraph of Appendix 1 (p. 17) further specifies:
A report describing the “technical tests” and results must be included in the bidder's
proposal. . . . Failure to submit the technical test report (including the specified data
files on floppy disks or CD-ROM) with the bid proposal will lead to automatic rejection
of the bidder's proposal.
3. The Submitted Accelerograph and Test at the CWB
Headquarters
The submitted accelerograph has a 3.5 g sensor, which does not meet the CWB
Specifications' requirement of a 2 g sensor. During the test at CWB Headquarters, the
bidder failed to complete the required tests, and also did not participate in the 2-week
Hualien field test.
4. Conclusion
The CWB Instrumentation Committee concluded that the ES&S model Kelunji Echo
accelerograph does not meet the CWB 2005 Specifications for reasons summarized in
the two previous sections.
CWB welcomes all reputable seismic instrument manufacturers to join the bidding
process. However, CWB expects a bidder to be well prepared and to have a
production model that meets the CWB Specifications.
Appendix B3. A Preliminary Evaluation of a
Geotech Model SMART-24A Accelerograph
by
The CWB Instrumentation Committee
March 15, 2005
1. Introduction
The CWB Instrumentation Committee has been charged to evaluate a Geotech model
SMART-24A accelerograph with respect to the “2005 CWB Specifications of Procuring
and Installing 24-bit strong-motion accelerographs”.
Due to a tight procurement schedule, the Committee decided to present a preliminary
evaluation report, including its recommendation. A more technical evaluation report
with detailed data analyses will be prepared later.
2. The Submitted Technical Report
Geotech submitted a detailed technical report describing the technical tests that they
conducted and presented their results based on their analysis of the recorded data. Since
they also included all the recorded data on a CD-ROM, members of the CWB
Instrumentation Committee have conducted “spot” checks of the recorded data in order
to verify some of the key results presented by Geotech in their technical report.
Geotech conducted the required shake table tests at a commercial testing laboratory, i.e.,
the National Technical Systems (NTS) located in Plano, Texas. However, these tests
were not witnessed by a designated CWB monitor.
On a spot check of the recorded sine-wave tests, we were able to obtain results similar
to those presented by Geotech, but we noted that the records contained excessive 60-Hz
environmental noise.
The “step” test, however, was not clearly presented in Geotech’s technical report. In
response to the Committee’s request for clarification, Geotech stated that the “step” test
was probably not conducted properly due to time constraints. Subsequently, Geotech
repeated the “step” test at NTS and presented the results and data to the Committee on
March 2, 2005. Because Geotech’s addendum was submitted within the evaluation
period, the Committee accepted it for spot checking.
We were able to verify that for the z-component “step” test the double integration of
acceleration (with mean removal only) gave essentially the same result as that measured
directly by an LVDT displacement gauge, within the 10% error permitted by the CWB
Specifications.
3. Tests at the CWB Headquarters
According to C. F. Wu, both the RTD and Internet Access tests at CWB Headquarters
went well for the Geotech SMART-24A accelerograph on February 15, 2005. For the
RTD test, the Geotech accelerograph was placed on a tilt table at different tilt angles
and the resulting signals were transmitted in the required 16-bit format. Unfortunately,
due to an oversight, the data recorded by the accelerograph in 24-bit format were not
saved. Therefore, if Geotech wins the bid, it must perform the tilt table test under CWB
monitoring.
4. Hualien Field Tests
The Geotech Smart-24A accelerograph was deployed at the CWB Hualien Station on
February 17, 2005. On March 5, a “double” earthquake of magnitude about 6 near Ilan
(two shocks of about the same size occurred about one minute apart) was recorded by
three 24-bit accelerographs co-located at Hualien. The Geotech record of this “double”
earthquake is shown in Figure 1.
We performed a spectral analysis of both the Geotech and Kinmetrics records for
comparison as shown in Figure 2.
The two spectra were computed by concatenating the data from the three components of
each instrument and using the concatenated series as input to the Matlab “pwelch”
function. By combining
components instead of computing separate spectra, the spectral resolution is improved
and errors due to spectral leakage are reduced. The top plot shown in Figure 2 is
computed using no averaging. This allows estimation of frequencies to nearly 0.01 Hz
but increases the random variations in the spectra.
The bottom plot shown in Figure 2 is computed using Hanning windows of length 2048
with 50% overlap. Although low frequencies are not resolved in the plot, the estimates
at individual frequencies are stabilized.
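The concatenate-then-pwelch procedure can be sketched in Python with SciPy's equivalent `welch` function, using the Hann window of length 2048 with 50% overlap described above. The sample rate and the synthetic records are our assumptions; this is an illustration, not the analysis code actually used.

```python
import numpy as np
from scipy.signal import welch

def concatenated_spectrum(z, ns, ew, fs, nperseg=2048):
    """Welch PSD of the three components concatenated into one series.
    scipy's welch defaults to 50% overlap (noverlap = nperseg // 2)."""
    x = np.concatenate([np.asarray(c, dtype=float) for c in (z, ns, ew)])
    return welch(x - x.mean(), fs=fs, window="hann", nperseg=nperseg)

# Hypothetical comparison of two co-located recordings (digital counts)
fs = 200.0
t = np.arange(0, 120, 1 / fs)
sig = np.sin(2 * np.pi * 2.0 * t)          # common 2-Hz ground motion
f, p_a = concatenated_spectrum(sig, sig, sig, fs)
f, p_b = concatenated_spectrum(0.7 * sig, 0.7 * sig, 0.7 * sig, fs)
# The lower-gain record sits systematically below the other when the
# spectra are left in digital counts, as with Figure 2 in the text.
print(round(p_b.max() / p_a.max(), 2))     # prints 0.49 (= 0.7 squared)
```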
We did not convert digital counts into physical units of acceleration. Since the 2g
full-scale for the Geotech accelerograph corresponds to less than the full scale of the
digitization (about 6.1 million counts vs. about 8.4 million counts), the spectral values
for the Geotech record are lower than those for the Kinemetrics. Plotting Figure 2 in
digital counts therefore has the advantage of systematically “displacing” the
Geotech spectrum below the Kinemetrics spectrum, allowing a better visual comparison.
In general, the spectral “curves” of these two accelerographs are very similar, as
shown in Figure 2. This result is to be expected if the new Geotech accelerograph
performs well against a well-known and tested accelerograph like the Kinemetrics’.
Figure 1. Accelerograms of the “double earthquake” at Ilan on March 5, 2005 (19:06
UT) recorded by the Geotech SMART-24A -- Vertical component (top), North-South
component (middle), and East-West component (bottom).
Figure 2. Two types of spectral comparison between the Geotech and Kinemetrics
records of the March 5, 2005 “double” earthquake near Ilan (see text for explanation).
We performed a coherence analysis between data recorded by the Geotech and
Kinemetrics accelerographs. Coherence analysis requires that we use data that are
common to both time series. Among the three 24-bit accelerographs installed at
the CWB Hualien Station, the Geotech recorded the longest time series, and the
Kinemetrics recorded the next longest. Unfortunately, the Tokyo Sokushin
accelerograph recorded the shortest series, with an approximately 10-second data gap
between the two shocks of the “double” earthquake. The Tokyo Sokushin record was
derived from two individually triggered records, and the data gap is simply due to the
trigger setting of the Tokyo Sokushin accelerograph, which needs to be adjusted for
future recording.
Results of the coherence analysis are shown in Figures 3 to 5.
Coherence between the Geotech and Kinemetrics accelerographs for the Vertical
component (Figure 3) is very good to about 30 Hz. Coherence for the North-South
component (Figure 4) and for the East-West component (Figure 5) is nearly perfect to
about 40 Hz.
The excellent coherence results between these two co-located accelerographs for the
March 5, 2005 “double” earthquake near Ilan are encouraging, and we are now
continuing the Hualien field tests in order to obtain more earthquake records for
comparison.
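As an illustration of the coherence computation on the common data window, here is a Python sketch using SciPy's `coherence` (magnitude-squared coherence); this is not the Committee's actual code, and the synthetic co-located records are assumptions.

```python
import numpy as np
from scipy.signal import coherence

def band_coherence(x, y, fs, nperseg=1024):
    """Magnitude-squared coherence between two co-located records.
    x and y must cover a common time window at the same sample rate."""
    n = min(len(x), len(y))          # keep only the common samples
    return coherence(x[:n], y[:n], fs=fs, nperseg=nperseg)

# Hypothetical pair: same ground motion, different gains, a little noise
rng = np.random.default_rng(0)
fs = 200.0
motion = rng.standard_normal(60 * int(fs))
a = 1.0 * motion + 0.01 * rng.standard_normal(motion.size)
b = 0.7 * motion + 0.01 * rng.standard_normal(motion.size)
f, coh = band_coherence(a, b, fs)
print(np.median(coh[f <= 40.0]) > 0.99)   # nearly perfect, as in Figs. 4-5
```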
5. Conclusion
Based on the results discussed in the above sections, the CWB Instrumentation
Committee concluded that the Geotech Model SMART-24A accelerograph meets the
2005 CWB Specifications for the items that we have checked. From spectral and
coherence analyses of the records of both the Geotech and the Kinemetrics
accelerographs (for the March 5, 2005 “double” earthquake near Ilan), we tentatively
concluded that the Geotech accelerograph performed well during the Hualien field test.
Because we had time to analyze only one recorded earthquake, more earthquake
records must be analyzed to confirm this result.
This approval recommendation is conditional, because the technical tests performed by
Geotech were not witnessed by a CWB-designated monitor, and the Instrumentation
Committee did not have sufficient time to analyze all the recorded data submitted by
Geotech. If Geotech wins the bid, it must repeat the technical tests under CWB
monitoring. The CWB Instrumentation Committee will then systematically verify the
new results of Geotech based on the monitored tests, and will also complete the analysis
of the Geotech records of earthquakes obtained during the Hualien field test.
Figure 3. Observed vertical component data that are common to Geotech (top) and
Kinemetrics (middle), and their coherence as a function of frequency. Please note that
the Geotech accelerograph was triggered 60 seconds earlier than the Kinemetrics
accelerograph.
Figure 4. Observed North-South component data that are common to Geotech (top)
and Kinemetrics (middle), and their coherence as a function of frequency. Please note
that the Geotech accelerograph was triggered 60 seconds earlier than the Kinemetrics
accelerograph.
Figure 5. Observed East-West component data that are common to Geotech (top) and
Kinemetrics (middle), and their coherence as a function of frequency. Please note that
the Geotech accelerograph was triggered 60 seconds earlier than the Kinemetrics
accelerograph.
The CWB Instrumentation Committee noted the following problems that Geotech must
address:
1. The log file suggests that the GPS may not have worked during the Hualien field
tests.
2. Station and sensor information (location, sensitivity, orientation, etc.) should be
recorded in the header of the trace data file.
3. The minimum and maximum data values in the converted PC-SUDS data file should
match the instrument (± 6,100,000 counts?).
4. The full-scale digital count for a 24-bit accelerograph should be ±2^23, or
±8,388,608. However, the full-scale clip test (p. 26-27 of Geotech’s Technical
Proposal) indicated that the full-scale digital count was about ±6,104,000.
5. Data recorded for the first shake table test indicate excessive 60-Hz environmental
noise.
Final approval can only be recommended if the Committee members are fully satisfied
with the results of the monitored tests and the above noted problems (plus any problems
that may be found in a more thorough examination of the Geotech technical test and
Hualien field test data) have been adequately addressed by Geotech.
Appendix B4. A Preliminary Evaluation of a
Reftek Model 130-SMA/01 Accelerograph
by
The CWB Instrumentation Committee
March 15, 2005
1. Introduction
The CWB Instrumentation Committee has been charged to evaluate a Reftek model
130-SMA/01 accelerograph according to the “2005 CWB Specifications of Procuring
and Installing 24-bit strong-motion accelerographs” (abbreviated as “2005 CWB
Specifications” below).
In the 2005 CWB Specs (p. 7), the following statements were given under NOTE 6:
Model 130-SMA/01 by Refraction Technology was conditionally approved in 2004, but
must be subjected to tests under CWB monitoring. In addition, Refraction Technology
must address the technical comments on Model 130-SMA/01 accelerograph by CWB.
Due to the tight procurement schedule, the Committee decided to present a preliminary
evaluation report, including its recommendation. A more technical evaluation report
with detailed data analyses will be prepared later.
2. Monitored Compliance Tests
Ms Patricia Wang, a graduate student, was appointed by CWB to serve as the “monitor”
to witness the compliance tests conducted on two afternoons (February 3 and 15, 2005)
at Refraction Technology, Dallas, Texas. Immediately after the tests, Refraction
Technology downloaded the recorded data on CD-ROMs. Ms Wang sent them to a
Committee member (Willie Lee), who then made copies for other members of the
Committee.
The shake table used by Refraction Technology could not perform the 1 Hz, 3mm
displacement “steps” with 25-msec to 30-msec rise time. It was unfortunate that one of
the Committee members (Willie Lee) might have confused Refraction Technology about the
CWB requirement regarding the input signal for the step test. However, the Committee
considers that it is the responsibility of the bidder to follow the “2005 CWB
Specifications” on the technical compliance tests, especially for using an appropriate
shake table to conduct the vibration tests that are specified under NOTE 1 on p. 19.
3. CWB’s Concerns on the 2004 Delivered Units
Refraction Technology did not win the first 2004 CWB bid. However, due to additional
available funds, CWB conducted a second bid for 10 accelerographs, which was won by
Refraction Technology. Unfortunately, due to the need to conclude procurement within
the calendar year, CWB accepted the 10 delivered units without requiring Refraction
Technology to carry out the monitored technical compliance tests. Some problems
were noted after the 10 units were delivered, as described below:
(1) The sensor input sensitivities (which were recorded on the Reftek “Sensor
Calibration Reports” accompanying the delivered units) are not correct. This type
of error suggests that Reftek did not perform adequate quality control in the final
manufacturing process before shipping the 10 accelerographs to CWB. Table 1
shows the sensitivity values (in micro_g/count) in the “Sensor Calibration Reports”
that accompanied the 10 delivered units. After CWB pointed out the error,
Reftek’s field engineer, Mr. Ian Billings, re-calibrated these 10 units during his
recent trip to Taiwan. The new sensitivity values are shown in Table 2, which
show that the original values were in error by a factor of about 3.
Table 1. Sensitivity values (in micro_g/count) in the “Sensor Calibration Reports”
-------------------------------
 S/N   z-axis  y-axis  x-axis
-------------------------------
 446   1.526   1.503   1.580
 464   1.543   1.517   1.506
 465   1.491   1.442   1.452
 466   1.384   1.373   1.502
 467   1.498   1.509   1.519
 468   1.428   1.462   1.419
 469   1.379   1.422   1.467
 471   1.433   1.554   1.301
 472   1.554   1.490   1.432
 473   1.480   1.493   1.470
-------------------------------
Table 2. Re-calibrated Sensitivity values (in micro_g/count)
-------------------------------
 S/N   z-axis  y-axis  x-axis
-------------------------------
 446   0.5069  0.5153  0.4921
 464   0.5102  0.5182  0.5230
 465   0.5095  0.5269  0.5130
 466   0.4578  0.4615  0.5217
 467   0.4951  0.4907  0.4972
 468   0.5004  0.4893  0.4917
 469   0.4904  0.5284  0.5096
 471   0.5327  0.4912  0.4373
 472   0.4944  0.5166  0.4819
 473   0.4820  0.4793  0.4749
-------------------------------
(2) The PGA information is very important for the CWB's earthquake early warning
system. An accuracy of one percent or better is implicitly required by the “2005 CWB
Specifications”, but from Table 2 the sensitivity values generally deviate from each
other by more than 5%. For example, sensitivity (in micro_g/count) among
the 10 accelerographs differs by up to about 16% for the z-axis, 14% for the y-axis, and
20% for the x-axis. Sensitivity for a given accelerograph also differs among the 3 axes,
by up to about 18% for the S/N 471 unit. Consequently, the delivered Reftek
accelerographs could not be used for the CWB warning system. Furthermore, users of
the acceleration data files recorded by these Reftek accelerographs must perform
instrument corrections -- an extra step in data analysis that should not be necessary.
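To illustrate how a spread figure of this kind is obtained, here is a small Python sketch using the z-axis values from Table 2. The spread measure, (max - min)/min, is our assumption about the calculation; with these values it reproduces the roughly 16% z-axis figure quoted above.

```python
import numpy as np

def percent_spread(values):
    """Spread of sensitivities as a percentage of the smallest value."""
    v = np.asarray(values, dtype=float)
    return 100.0 * (v.max() - v.min()) / v.min()

# z-axis sensitivities from Table 2 (micro_g/count) for the 10 units
z = [0.5069, 0.5102, 0.5095, 0.4578, 0.4951,
     0.5004, 0.4904, 0.5327, 0.4944, 0.4820]
print(round(percent_spread(z), 1))   # prints 16.4, the "about 16%" spread
```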
(3) The bit-weight of the 2004 tested unit (S/N 9081) is 0.795E-06 volt, the bit-weight
of all purchased units is 1.161E-06 volt, and that of the 2005 tested unit is 1.639E-06
volt. The bidder should have provided identically configured instruments to CWB.
CWB is also very concerned that the Taiwan agent of Refraction Technology does not
have a professional engineer/technician on their staff, and that the sales people appear
to lack sufficient knowledge about the Refraction Technology accelerographs.
4. Conclusion
The CWB Instrumentation Committee regretfully cannot recommend the Model
130-SMA/01 accelerograph to CWB for purchase in 2005, because Refraction Technology
did not meet the “2005 CWB Specifications” in conducting the step test with an
appropriate shake table, and because of the serious concerns noted by CWB for the 10
delivered Reftek accelerographs.
In hindsight, the Committee should have been stricter on at least two issues during
the 2004 evaluation of the submitted Reftek accelerograph for testing: (1) the “2004
CWB Specifications” called for a 2g full-scale unit, but a 3.5g full-scale unit was
submitted, and (2) during the 2004 Hualien field test, the Reftek unit failed to complete
the 2-week field test, because the internal battery was not charged properly by the AC
power source due to an error in the installation at Hualien by Reftek.
After more than a decade of instrument evaluations, the CWB Instrumentation
Committee recognizes that the following procedural changes, which have occurred over
the years, are not desirable:
1. Due to budget cuts, CWB abandoned testing the instruments of all
interested manufacturers at the same time in a commercial testing facility.
Without testing all the submitted instruments at the same time in the same
facility, it is difficult for the Instrumentation Committee to compare test results
from different manufacturers.
2. Due to “pressure” from new bidders, CWB allowed bidders to submit
instruments based on their own technical tests for evaluation. Because it is not
possible to write perfect test procedures, misunderstandings arise easily.
The subsequent monitored tests also add extra expenses for the “successful”
bidders, and extra work for the CWB Instrumentation Committee.
3. Due to the tight CWB procurement schedule, the Instrumentation Committee has
only about 2 weeks to evaluate the submitted test reports and data. This short
period of time is simply not enough for the Committee members to evaluate the
proposed instruments properly, especially when there are 3 or more bids
submitted by new manufacturers. It is also obvious that some new bidders are
“rushing” to bid, and the “hastily” prepared technical reports/data are often
difficult to understand.
The CWB management now realizes the danger of accepting conditionally approved
instruments for purchase, and has established an Internal Committee to develop new
procurement procedures and selection criteria. The CWB Instrumentation Committee,
which consists of external advisors, is now charged with making recommendations
based on evaluating the technical reports and data submitted by the bidders with
respect to the CWB Specifications as written.
The CWB Instrumentation Committee is now conducting extensive field tests of
multiple co-located accelerometers (recorded by the same model of data loggers in
continuous recording mode) and co-located accelerographs (recorded in triggered mode)
in Taiwan. The results from these field tests will help the Committee evaluate the
actual field performance of accelerometers and accelerographs manufactured by
different vendors, and develop “performance-based” technical specifications for use by
CWB in future procurements.
Finally, the CWB Instrumentation Committee acknowledges the excellent cooperation
provided by Refraction Technology in clarifying technical issues raised by some
Committee members. Refraction Technology has a good reputation for customer
satisfaction, and we look forward to their support in maintaining the 10 Reftek
accelerographs that CWB purchased last year and that are now being installed in the field.
Appendix B5. A Preliminary Evaluation of Tests
on a Geotech Model SMART-24A Accelerograph
under CWB Monitoring
by
The CWB Instrumentation Committee
April 24, 2005
1. Introduction
The CWB Instrumentation Committee has been charged to evaluate the test data of a
Geotech model SMART-24A accelerograph under CWB monitoring with respect to the
“2005 CWB Specifications of Procuring and Installing 24-bit strong-motion
accelerographs”.
Due to a tight procurement schedule, the Committee decided to present a preliminary
evaluation report, including its recommendation. A more technical evaluation report
with detailed data analyses will be prepared later.
2. Evaluation Procedure
The CWB-appointed monitor (Ms Patricia Wang) sent the test data on CD-ROMs to
Willie Lee immediately after each set of tests on 3 occasions (April 12, 14, and 18,
2005). Lee then sent the data to Mr. Chien-Fu Wu of CWB and Mr. Chun-Chi Liu of
the Academia Sinica. Subsequently, Dr. Lani Oncescu submitted a technical report
(electronically on April 20, 2005) describing the technical tests that Geotech conducted
and presenting the results of their analysis of the recorded data.
Members of the CWB Instrumentation Committee conducted checks of the recorded
data in order to verify the key results presented by Geotech in their April 20 Technical
Report. In particular, Willie Lee performed the data analysis using software (written by
Doug Dodge and himself) in order to systematically verify that the results meet the 2005
CWB Specifications.
3. System Response to Vibration
According to the 2005 CWB Specifications, an accelerograph is required to be
subjected to the shaking table tests using a proper shaking table system. The
accelerometer(s) monitoring the shake table motion can be used as the “Reference”.
Recording the time history of the shake-table displacement with a suitable displacement
sensor (+/- 1% accuracy or better) is also required.
Geotech conducted the shake-table tests at the National Technical Systems, Plano, Texas
on April 18, 2005, witnessed by the CWB monitor. Six sets of sine-wave shake tests
were conducted at 1 Hz and at 10 Hz, along the NS, EW, and Vertical
directions. We analyzed all the records, but present only the two shake tests along
the Vertical direction, because the results from the NS and EW shaking directions are
similar to those along the Vertical shaking direction.
3.1. Sine-wave Shake-test at 1 Hz
The result is presented in Figure 1, where the upper plot shows the recorded data of the
vertical component of the Geotech accelerograph in digital counts, and the lower plot
shows the corresponding amplitude spectrum. The 1-Hz peak in the spectrum is over
120 dB above the background noise, although there are many smaller spectral peaks at
10, 20, … Hz. The corresponding result for the Reference accelerometer provided by
the National Technical Systems on the shake table is shown in Figure 2.
Because similar spectral peaks at higher frequencies were sensed by both the Geotech
accelerograph and the Reference NTS accelerometer, we concluded that these spectral
peaks were due to the shake-table’s vibration noises. This is obvious from the plots of
the observed acceleration from the Geotech accelerograph and from the NTS
accelerometer. Hence, the Geotech accelerograph meets the 2005 CWB Specification
for the sine-wave shake-test at 1 Hz.
3.2. Sine-wave Shake-test at 10 Hz
The result is presented in Figure 3, where the upper plot shows the recorded data of the
vertical component of the Geotech accelerograph in digital counts, and the lower plot
shows the corresponding amplitude spectrum. The 10-Hz peak in the spectrum is over
120 dB above the background noise, although there are many smaller peaks at
10, 20, … Hz. The corresponding result for the Reference accelerometer provided by
the National Technical Systems on the shake table is shown in Figure 4.
Because similar spectral peaks at higher frequencies were sensed by both the Geotech
accelerograph and the Reference NTS accelerometer, we concluded that these spectral
peaks were due to the shake-table’s vibration noises. Hence, the Geotech accelerograph
meets the 2005 CWB Specification for the sine-wave shake-test at 10 Hz.
Figure 1. 1-Hz vertical sine-wave shake test recorded by the vertical component of the
Geotech Accelerograph.
Figure 2. 1-Hz vertical sine-wave shake test recorded by the vertical component of the
NTS Accelerometer.
Figure 3. 10-Hz vertical sine-wave shake test recorded by the vertical component of
the Geotech Accelerograph.
Figure 4. 10-Hz vertical sine-wave shake test recorded by the vertical component of
the NTS Accelerometer.
3.3. Step Shake-test at 1 Hz
As required by the 2005 CWB Specifications, “steps” at 1 Hz (of amplitude about 3 mm
and rise time of about 30 milliseconds) were applied to the shake table in the vertical
direction (with the Geotech accelerograph and LVDT sensor mounted on it). We took a
2-second section of data from the middle of the recorded data file, removed the mean,
and plotted the acceleration data (in digital counts) as shown in Figure 5.
We applied integration using the SMQC1 program written by Doug Dodge and obtained
the velocity (in counts*seconds). After the mean was removed, we plotted the velocity
as shown in Figure 6. We applied integration again and obtained the displacement (in
counts*seconds*seconds) as shown in Figure 7. The maximum peak-to-peak
displacement from Figure 7 is about 960 counts*sec*sec.
Using the conversion factor given by Geotech for the vertical component (3.2118
µm/sec2/count), we obtained a maximum peak-to-peak displacement of about 3.08 mm
from the double integration of the acceleration data.
The LVDT displacement sensor also recorded the shake table’s displacement directly as
shown in Figure 8 for the same time interval. The maximum peak-to-peak displacement
from Figure 8 is about 4.8×105 counts. Using the conversion factor given by Geotech
for the LVDT sensor (0.006386 µm/count), we obtained a maximum peak-to-peak
displacement of about 3.07 mm from the displacement sensor. The difference is about
0.01 mm, well within the 10% accuracy required. Hence, the Geotech accelerograph
meets the 2005 CWB Specification for the step shake-test.
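The unit arithmetic above can be sketched in a few lines of Java. The class and method names below are illustrative only (not part of any CWB or Geotech software); the conversion factors are those quoted in the text.

```java
// Sketch of the step-test unit conversions described above.
public class StepTestCheck {
    // Convert a doubly integrated peak-to-peak value in counts*s^2 to mm,
    // given the accelerograph sensitivity in micrometers/s^2 per count.
    public static double accelCountsToMm(double countsSecSq, double umPerSec2PerCount) {
        return countsSecSq * umPerSec2PerCount / 1000.0; // micrometers -> mm
    }

    // Convert an LVDT peak-to-peak reading in counts to mm,
    // given the sensor scale in micrometers per count.
    public static double lvdtCountsToMm(double counts, double umPerCount) {
        return counts * umPerCount / 1000.0; // micrometers -> mm
    }

    public static void main(String[] args) {
        double fromAccel = accelCountsToMm(960.0, 3.2118);  // about 3.08 mm
        double fromLvdt = lvdtCountsToMm(4.8e5, 0.006386);  // about 3.07 mm
        System.out.printf("accel: %.2f mm, LVDT: %.2f mm%n", fromAccel, fromLvdt);
    }
}
```

The two independently derived displacements agree to about 0.01 mm, as stated above.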
Figure 5. Recorded vertical acceleration data for the Geotech accelerograph (with
mean removed).
Figure 6. Velocity from integrating the acceleration data for the Geotech accelerograph
(with mean removed).
Figure 7. Displacement from integrating the velocity shown in Figure 6 for the
Geotech accelerograph.
Figure 8. Displacement recorded by the LVDT displacement sensor (in digital counts)
for the same time interval as Figures 5, 6, and 7.
4. System Static Accuracy
According to the 2005 CWB Specifications, the static accuracy of an accelerograph can
be determined by a tilt test of the accelerograph on a precision tilt table (with better than
0.1 degree tilt control). Data must be recorded for 60 seconds each for the following tilt
angles: 0, 30, 60, 90, 120, 150, 180, 210, 240, 270, 300, 330, and 360 degrees.
Geotech conducted the tilt test on April 12, 2005 witnessed by the CWB monitor. A
Geotech Smart-24A Accelerograph (S/N 1050) was mounted on a tilt table and rotated
along the direction of the North-South component of the accelerograph from 0° to 360°
in 30° increments. The same test was repeated along the direction of the East-West
component of the accelerograph. Since the theoretical value of the expected acceleration
at a given tilt angle is known, we can compute the difference between the observed
(“Acc”) and the theoretical value (“TrueAcc”).
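For a component tilted by angle θ from the horizontal, the theoretical acceleration sensed along that component is g·sin θ. A small Java sketch (class and method names illustrative, not part of any CWB software) reproduces a row of Table 1:

```java
// Sketch of the tilt-test comparison: observed vs. theoretical acceleration in g.
public class TiltTestCheck {
    // Theoretical acceleration (in g) along a component tilted by the given angle.
    public static double trueAccG(double tiltDegrees) {
        return Math.sin(Math.toRadians(tiltDegrees));
    }

    // Difference between an observed value (in g) and the theoretical value.
    public static double diffG(double observedG, double tiltDegrees) {
        return observedG - trueAccG(tiltDegrees);
    }

    public static void main(String[] args) {
        // Example row from Table 1: 30 degrees tilt, observed 0.49512 g.
        System.out.printf("TrueAcc = %.5f g, Diff = %.5f g%n",
                trueAccG(30.0), diffG(0.49512, 30.0));
    }
}
```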
4.1. Tilt test along the North-South Direction
Table 1 summarizes the results of the tilt test along the North-South direction. The
observed value (Acc) is taken as the mean of the recorded acceleration in a 60-second
data file.
Table 1. Results of Tilt Test along the NS-Component Direction
--------------------------------------------------------------
Tilt    Comp   Acc(counts)    Acc(g)     TrueAcc(g)   Diff(g)
--------------------------------------------------------------
0°      NS         -6172     -0.00202     0.00000    -0.00202
30°     NS       1512270      0.49512     0.50000    -0.00488
60°     NS       2624284      0.85919     0.86603    -0.00684
90°     NS       3026963      0.99103     1.00000    -0.00897
120°    NS       2617408      0.85694     0.86603    -0.00909
150°    NS       1500510      0.49127     0.50000    -0.00873
180°    NS        -13393     -0.00438     0.00000    -0.00438
210°    NS      -1535256     -0.50264    -0.50000    -0.00264
240°    NS      -2641277     -0.86475    -0.86602     0.00127
270°    NS      -3043497     -0.99644    -1.00000     0.00356
300°    NS      -2635794     -0.86296    -0.86603     0.00307
330°    NS      -1529642     -0.50080    -0.50000    -0.00080
360°    NS         -3957     -0.00130    -0.00001    -0.00129
--------------------------------------------------------------
The values in the difference column (Diff, in g) are all less than 0.01 g and thus meet
the 2005 CWB Specification for static accuracy. Our results also agree to 3 or more
significant figures with those presented by Geotech.
4.2. Tilt test along the East-West Direction
Table 2 summarizes the results of the tilt test along the East-West direction. The
observed value (Acc) is taken as the mean of the recorded acceleration in a 60-second
data file. The values in the difference column (Diff, in g) are less than 0.01 g (except
one value of 0.0108 g at the 270° tilt) and thus meet the 2005 CWB Specification on
static accuracy. Our results also agree to 3 or more significant figures with those
presented by Geotech.
Table 2. Results of Tilt Test along the East-West Component
--------------------------------------------------------------
Tilt    Comp   Acc(counts)    Acc(g)     TrueAcc(g)   Diff(g)
--------------------------------------------------------------
0°      EW          -782     -0.00026     0.00000    -0.00026
30°     EW       1517434      0.49666     0.50000    -0.00334
60°     EW       2620362      0.85764     0.86603    -0.00839
90°     EW       3027240      0.99082     1.00000    -0.00918
120°    EW       2623271      0.85860     0.86603    -0.00743
150°    EW       1512695      0.49511     0.50000    -0.00489
180°    EW         -1024     -0.00034     0.00000    -0.00034
210°    EW      -1504256     -0.49234    -0.50000     0.00766
240°    EW      -2618796     -0.85713    -0.86602     0.00889
270°    EW      -3022304     -0.98920    -1.00000     0.01080
300°    EW      -2619351     -0.85731    -0.86603     0.00872
330°    EW      -1509398     -0.49403    -0.50000     0.00597
360°    EW          2046      0.00067    -0.00001     0.00068
--------------------------------------------------------------
5. Digitizer Performance
According to the 2005 CWB Specifications, the bidder may choose one of two options
for testing digitizer performance: either (3A) the Sandia Test or (3B) the CWB
2002 Test. Geotech chose the 3B test.
5.1. Noise test
In this test, the inputs to the digitizer are shorted and the system noise is recorded
for 300 seconds by the accelerograph. The recorded system noise should be less than the
equivalent of 1 digital count of a 20-bit system on an RMS basis in the frequency range
of 0.01 to 50 Hz.
The result for the vertical component is shown in Figure 9, where the upper plot
displays the recorded acceleration data (in digital counts) and the lower plot displays
the amplitude spectrum.
Figure 9. Plot of the recorded acceleration for the noise test (top), and the
corresponding amplitude spectrum (bottom).
The recorded noise has an amplitude of less than 4 counts for almost all samples in
the 24-bit system, much less than the equivalent of 1 digital count of a 20-bit system
on an RMS basis in the frequency range of 0.01 to 50 Hz. Hence, the Geotech
accelerograph meets the 2005 CWB Specification for the noise test. In its Test Report,
Geotech computed the RMS noise level at 2.03 counts.
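The criterion can be checked numerically: for the same full-scale voltage, one count of a 20-bit digitizer equals 2^(24−20) = 16 counts of a 24-bit digitizer, so a 24-bit RMS noise of 2.03 counts is well under the limit. A small sketch (class name illustrative, not the analysis code actually used):

```java
// Sketch of the RMS-noise check for a shorted-input digitizer test.
public class NoiseTestCheck {
    // RMS of a demeaned record, in counts.
    public static double rms(double[] samples) {
        double mean = 0;
        for (double s : samples) mean += s;
        mean /= samples.length;
        double sumSq = 0;
        for (double s : samples) {
            double d = s - mean;
            sumSq += d * d;
        }
        return Math.sqrt(sumSq / samples.length);
    }

    // One count of a 20-bit system expressed in counts of a 24-bit system
    // covering the same full-scale voltage: 2^(24-20) = 16 counts.
    public static double twentyBitCountIn24BitCounts() {
        return Math.pow(2, 24 - 20);
    }
}
```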
5.2. Full-scale clip test
According to the 2005 CWB Specifications, a voltage calibrator is connected to the
inputs and the full-scale clip level of the digitizer on each channel is recorded for 10
seconds by the accelerograph. Geotech reported an RMS value of 6,115,694 counts for an
applied 2.50 volts, or about 3.06×10^6 counts for 1.25 volts. Since the full scale of
the Geotech accelerograph is ±2 g, we can check this against the results of the tilt
test, for which the expected theoretical value is ±1 g. In Tables 1 and 2, the observed
values range from 3.02×10^6 to 3.04×10^6 counts, consistent with the result obtained
in the full-scale clip test.
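The consistency check above is simple proportional scaling; a one-method sketch (class name illustrative):

```java
// Sketch of the full-scale clip consistency check: counts scale linearly
// with applied voltage, so 1.25 V (i.e., 1 g on a +/-2 g, 2.50 V system)
// should give half the counts reported at 2.50 V.
public class ClipTestCheck {
    public static double countsAtVolts(double volts, double countsAtFullVolts, double fullVolts) {
        return volts / fullVolts * countsAtFullVolts;
    }

    public static void main(String[] args) {
        double oneG = countsAtVolts(1.25, 6115694, 2.50); // about 3.06e6 counts for 1 g
        System.out.printf("%.3e counts%n", oneG);
    }
}
```

The predicted ~3.06×10^6 counts per g compares well with the 3.02–3.04×10^6 counts observed in the tilt tests.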
5.3. Filter performance verification
According to the 2005 CWB Specifications, a swept sine-wave is applied to the inputs of
the digitizer to test its amplitude and phase response and is recorded for 60 seconds by
the accelerograph.
The result for the vertical component is shown in Figure 10, where the upper plot
displays the recorded acceleration data (in digital counts) and the lower plot displays
the power spectrum. The PSD estimates from the applied swept sine-waves are essentially
flat, and the frequency roll-off is at about 65 Hz, higher than the required 50-Hz
roll-off. Hence, the Geotech accelerograph meets the 2005 CWB Specification for filter
performance.
Figure 10. Plot of the recorded acceleration data for the swept sine-wave test (top),
and the corresponding amplitude spectrum (bottom).
5.4. Frequency Response Spot Tests
According to the 2005 CWB Specifications, a sine-wave of very high spectral purity is
applied as input and recorded for 60 seconds by the accelerograph. CWB will examine
the recorded data for noise that should not degrade a 20-bit system to less than 114-dB
dynamic range. Geotech applied a sine-wave at 1 Hz, and the result for the vertical
component is shown in Figure 11, where the upper plot displays the recorded data (in
digital counts) and the lower plot displays the amplitude spectrum.
Figure 11. Plot of the recorded data for the spot sine-wave test at 1 Hz (top), and the
corresponding amplitude spectrum (bottom).
Geotech also applied a sine-wave at 10 Hz, and the result for the vertical component is
shown in Figure 12, where the upper plot displays the recorded acceleration (in digital
counts) and the lower plot displays the amplitude spectrum.
Figure 12. Plot of the recorded data for the spot sine-wave test at 10 Hz (top), and the
corresponding amplitude spectrum (bottom).
Figures 11 and 12 indicate that the spectral peak is more than 10^6, or 120 dB, above
the noise level. Hence, the Geotech accelerograph meets the 2005 CWB Specifications for
the frequency response spot test.
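The decibel figure quoted above is 20·log10 of the amplitude ratio; a one-line sketch (class name illustrative):

```java
// Sketch of the amplitude-ratio-to-decibel conversion used in the spot test:
// an amplitude ratio of 10^6 corresponds to 20 * log10(10^6) = 120 dB.
public class DbCheck {
    public static double ratioToDb(double amplitudeRatio) {
        return 20.0 * Math.log10(amplitudeRatio);
    }
}
```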
6. Utility Software Demonstration and Water Immersion Test
On April 12, Geotech conducted a water immersion test on the Geotech accelerograph,
witnessed by the CWB monitor. On April 14, Geotech demonstrated the utility software to
the CWB monitor. In both cases, the CWB monitor reported that the demonstration went
well and the water immersion test was successful.
7. Conclusion and Recommendation
Based on the submitted Geotech Test Report of April 20, 2005 and the data analyses
performed by members of the Instrumentation Committee on the recorded data sent
directly by the CWB monitor, we conclude that the Geotech accelerograph meets the
2005 CWB Technical Specifications. Therefore, we recommend that CWB complete the
procurement of the accelerographs made by Geotech Instruments, LLC.
Section C. Strong-Motion Data Processing and
Software Development
Willie Lee and Doug Dodge
November 15, 2005
Contents
I. Introduction
II. Software Development
  Some background information about earthquake location
  Other software written
  Code to Support Relocation of Historic Earthquakes Using Direct Search Method
  Code to Support Joint Inversion for Hypocenters and Velocity
  Code to Compare spectral response of co-located seismometers using earthquake seismograms
  Code to plot comparison of step responses of different seismometers
  Code to plot Acceleration Spectra from shake table tests
  Code to Plot Sumatra quake and aftershocks on bathymetry
  Code to plot oriented focal mechanisms on bathymetry
  Code for plots of tsunami waves superimposed on tide data
I. Introduction
During 2005, we made slow but steady progress in the systematic processing of the
strong-motion data recorded by the Central Weather Bureau (CWB). The basic computer
program for performing quality assurance, SmBrowser, was described in the 2003
Annual Report (Dodge and Lee, 2004) and will not be described again here. Further
enhancements have been made to SmBrowser to improve processing efficiency, and
considerable effort has been devoted to verifying station coordinates, which is still
underway.
II. Software Development
One of the major problems facing CWB is how to locate offshore earthquakes with
reasonable accuracy, especially for the Real Time Rapid Information System. Willie
Lee has been thinking about this problem for over 30 years, and present-day PCs are
finally fast enough to attack it.
Some background information about earthquake location
Introduction
So far, all commonly used algorithms for locating earthquakes on computers are based
on an inverse formulation, first published by L. C. Geiger (1912). Numerous software
implementations have been made using the Geiger method, which applies the Gauss-Newton nonlinear optimization technique to find the origin time and hypocenter by
iterative linearization steps starting from a trial solution.
The travel time residuals (i.e., observed minus predicted from a given velocity model)
of the first P-wave (and sometimes the S-wave and later phases) are minimized, usually
in the least squares sense (L2 norm). Waldhauser and Ellsworth (2000) introduced the
“double-difference” algorithm, which minimizes the residuals of travel-time
differences for pairs of earthquakes observed at common stations by iteratively
adjusting the vector connecting the hypocenters. Similar to joint hypocentral
determination (JHD; Pujol, 2003), the double-difference algorithm improves “relative”
earthquake locations and works well for a good set of arrival times with large numbers
of stations. For a comprehensive review of recent earthquake location developments,
see Richards et al. (2006). In brief, they are all variants of the Geiger method based on
an inverse formulation.
The mathematics of the inverse formulation are elegant as shown next, and it works
well for a good seismic network with stations surrounding the epicenters. However, all
existing location programs work poorly for earthquakes outside a seismic network,
because the available arrival times are not sufficient to solve the problem
mathematically, as shown below (greatly condensed from Lee and Stewart, 1981, p.
105-139). We will use boldface symbols to denote vectors or matrices.
The Least Squares Method and Nonlinear Optimization
In the least squares method, we attempt to minimize the errors of fit (or residuals) at a
set of m data points, where the residual at the kth data point is rk(x), k = 1, 2, …, m,
and x is a vector of n independent variables, i.e.,
x = (x1, x2, …, xn)^T
(1)
where the superscript T denotes a vector transpose. The objective function for the least
squares problem is
F(x) = ∑ [rk(x)]^2
(2)
where the summation is from k = 1 to m. We may consider these residuals as components
of a vector r in the m-dimensional Euclidean space, and Equation (2) becomes
F(x) = r^T r
(3)
Taylor expansion of this objective function is
F(x + δx) = F(x) + g^T δx + ½ δx^T H δx + …
(4)
where g is the gradient vector and H the Hessian matrix, and it can be shown that
δx = - H^(-1) g
(5)
To find the gradient vector g, we perform partial differentiation on Equation (2) with
respect to xi, i = 1, 2, …, n, and obtain
∂F(x)/∂xi = ∑ 2 rk(x) [∂rk(x)/∂xi], i = 1, 2, …, n
(6)
where the summation is from k = 1 to m. In matrix notation,
g = 2 A^T r
(7)
where A is the Jacobian matrix, whose elements are defined by
Aki = ∂rk/∂xi, k = 1, 2, …, m, and i = 1, 2, …, n
(8)
To find the Hessian matrix H, we perform partial differentiation on the set of n
equations in Equation (6) with respect to xj, for j = 1, 2, …, n, assuming that rk(x),
k = 1 to m, have continuous second derivatives, and obtain in matrix notation
H ≅ 2 A^T A
(9)
by ignoring the cross-derivative terms. Hence,
δx = - [A^T A]^(-1) A^T r
(10)
Geiger's method essentially applies the least squares method, using the above
Gauss-Newton algorithm to solve the earthquake location problem. Starting from a trial
origin time and hypocenter, the adjustment vector δx given in Equation (10) is solved
for. A new origin time and hypocenter are then obtained, and the procedure is repeated
until some cutoff criterion is met. However, the Jacobian matrix given in Equation (8)
is often too ill-conditioned to yield a meaningful inverse, and if the trial solution is
not chosen appropriately, the iterative procedure may converge to a local minimum
rather than the global minimum.
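A single Gauss-Newton step, δx = -(AᵀA)⁻¹Aᵀr, can be sketched in Java as follows. This is a minimal illustration of the algebra only (not part of any CWB software), forming the normal equations explicitly and solving them by Gaussian elimination with partial pivoting; production codes would use a more careful linear-algebra library.

```java
// Sketch of one Gauss-Newton step: dx = -(A^T A)^{-1} A^T r
// for an m x n Jacobian A and an m-vector of residuals r.
public class GaussNewtonStep {
    public static double[] step(double[][] A, double[] r) {
        int m = A.length, n = A[0].length;
        // Build the augmented system [A^T A | -A^T r].
        double[][] aug = new double[n][n + 1];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++)
                for (int k = 0; k < m; k++) aug[i][j] += A[k][i] * A[k][j];
            for (int k = 0; k < m; k++) aug[i][n] -= A[k][i] * r[k];
        }
        // Gaussian elimination with partial pivoting.
        for (int col = 0; col < n; col++) {
            int piv = col;
            for (int row = col + 1; row < n; row++)
                if (Math.abs(aug[row][col]) > Math.abs(aug[piv][col])) piv = row;
            double[] tmp = aug[col]; aug[col] = aug[piv]; aug[piv] = tmp;
            for (int row = col + 1; row < n; row++) {
                double f = aug[row][col] / aug[col][col];
                for (int j = col; j <= n; j++) aug[row][j] -= f * aug[col][j];
            }
        }
        // Back substitution.
        double[] dx = new double[n];
        for (int i = n - 1; i >= 0; i--) {
            double s = aug[i][n];
            for (int j = i + 1; j < n; j++) s -= aug[i][j] * dx[j];
            dx[i] = s / aug[i][i];
        }
        return dx;
    }
}
```

When AᵀA is nearly singular (the ill-conditioned case discussed above), the pivots become tiny and the computed δx is meaningless, which is exactly the failure mode for events outside the network.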
For the earthquake location problem, the 4 independent variables are: time t, and
coordinates x, y, and z. The Jacobian matrix A may be written with column vectors as
its elements:
A = ( V1 V2 V3 V4 )
(11)
where
V1 = ( 1 1 • • • 1 )^T
(12)
V2 = ( ∂t1/∂x ∂t2/∂x • • • ∂tm/∂x )^T
(13)
V3 = ( ∂t1/∂y ∂t2/∂y • • • ∂tm/∂y )^T
(14)
V4 = ( ∂t1/∂z ∂t2/∂z • • • ∂tm/∂z )^T
(15)
where the travel times for m stations are denoted by t1, t2, …, tm. Since the travel
time derivatives with respect to origin time are 1, vector V1 is a unit vector. Let us
recall that the determinant of a matrix is zero if any of its columns is a multiple of
another column. Since the first column of the Jacobian matrix is all 1's, it is easy
for other columns of A to be nearly a multiple of it. For example, if an earthquake
occurs outside the seismic network, it is likely that the elements of the ∂t/∂x column
and the corresponding elements of the ∂t/∂y column will be nearly proportional to each
other. In other words, we do not have adequate observed data to solve the matrix
equation for meaningful adjustments.
Although the Geiger method was published in 1911/1912, the computations were too
laborious to be done without electronic computers until the early 1960s. There are many
pitfalls in solving a problem by the inverse approach, primarily because no one has yet
found a fool-proof technique to guarantee a true solution in nonlinear optimization
since Gauss's time (nearly two hundred years ago). Unlike Fermat's last theorem (which
was solved recently after more than 300 years of effort), many experts in optimization
consider guaranteeing a global minimum in an inverse problem to be unsolvable.
Almost all physical problems involving observations are formulated as inverse problems,
simply because solving problems by the method of least squares became so standard
(since Gauss first popularized it) that few scientists (even fewer seismologists) ever
question it. After electronic computers became available in the late 1950s, a few
visionary scientists realized that it is much easier to solve a physical problem
involving observations by a forward formulation. Unfortunately, the forward formulation
involves large amounts of computation, and computers were far too slow at that time,
although they were adequate for solving most inverse problems. When Lee examined the
earthquake location problem from both the mathematical and computational points of view
in the late 1960s, he realized that computers were about 5 orders of magnitude too slow
for the forward formulation, and thus had to wait.
By the early 2000s, computer speed had increased about 10,000-fold over the 1960s, and
Lee began laying out a plan to attack the earthquake location problem by the forward
formulation. The least squares method of Gauss assumes that the observational errors
have a normal (or Gaussian) distribution. However, this assumption is not appropriate
for earthquake arrival times, which often contain large outliers.
Lee and Doug Dodge then began investigating the simplex algorithm for minimizing the
L1 norm (rather than the L2 norm of least squares). They have been developing forward
simplex-search software for earthquake location, and have shown that it is practical to
use it for relocating large numbers of earthquakes, provided a fast multi-processor PC
is available.
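The robustness of the L1 norm against outliers can be seen in the simplest possible case: fitting a constant to a set of residuals, the L2-optimal value is the mean and the L1-optimal value is the median. A small sketch (illustrative only, not the JLOC misfit code):

```java
import java.util.Arrays;

// Demonstration of L1 vs. L2 robustness: one gross pick error
// drags the mean (L2 optimum) far off, but barely moves the median (L1 optimum).
public class L1VsL2 {
    public static double mean(double[] x) {
        double s = 0;
        for (double v : x) s += v;
        return s / x.length;
    }

    public static double median(double[] x) {
        double[] y = x.clone();
        Arrays.sort(y);
        int n = y.length;
        return n % 2 == 1 ? y[n / 2] : 0.5 * (y[n / 2 - 1] + y[n / 2]);
    }

    public static void main(String[] args) {
        double[] residuals = {0.1, -0.2, 0.0, 0.2, 30.0}; // one gross pick error (seconds)
        System.out.printf("mean = %.2f s, median = %.2f s%n",
                mean(residuals), median(residuals));
    }
}
```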
The forward formulation can solve many more seismological problems, including seismic
tomography. Many seismological problems involve the Green's function, and the Green's
function is an inverse operator, as pointed out by Lanczos (1961). The forward
formulation will usher in a new era in seismological research, because computers are
now fast enough to make the forward approach practical.
References:
Geiger, L. C. (1912). Probability method for the determination of earthquake epicenters
from the arrival time only, Bull. St. Louis Univ., 8, 60-71.
Lanczos, C. (1961). Linear Differential Operators, Van Nostrand, London.
Lee, W. H. K., and S. W. Stewart (1981). Principles and Applications of
Microearthquake Networks, Academic Press, New York.
Press, W. H., B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling (1986). Numerical
Recipes: The Art of Scientific Computing, Cambridge University Press, Cambridge.
Pujol, J. (2003). Software for joint hypocentral determination using local events, in:
International Handbook of Earthquake and Engineering Seismology, edited by W. H.
K. Lee, H. Kanamori, P. C. Jennings, and C. Kisslinger, Part B, p. 1621-1623,
Academic Press, San Diego.
Richards, P. G., F. Waldhauser, D. Schaff, and W. Y. Kim (2006). The applicability of
modern methods of earthquake location, in: Advances on Studies of Heterogeneities
in the Earth's Lithosphere: The Keiiti Aki Volume II, edited by Y. Ben-Zion and W.
H. K. Lee, Pageoph Topical Volumes, Birkhauser Verlag, Basel, in press.
Waldhauser, F., and W. L. Ellsworth (2000). A double-difference earthquake location
algorithm: Method and application to the northern Hayward fault, Bull. Seism. Soc.
Am., 90, 1353-1368.
Other software written
Much of the software development work done this year consisted of small Matlab
programming projects by Doug Dodge. These programs were mostly intended to
produce graphics that could be inserted into other reports. The code, along with
representative output, is included here.
Under the direction of Willie Lee, Doug Dodge wrote code that supported an
experiment whose goal was to investigate the feasibility of combining regional bulletins
to produce joint locations of offshore seismicity. That code is included here with a short
summary of the results.
Doug Dodge also devoted some effort to porting the Seismic Analysis Code and the
NonLinLoc code to run under Windows. Both efforts were successful and provide
additional tools that can be used to support our ongoing work. We have not included
the code from these two ports in this report, since only a small fraction of it was
modified during the port.
Code to Support Relocation of Historic Earthquakes Using
Direct Search Method
The JLOC code, currently under development, is an earthquake location program intended
for relocating earthquakes in the Taiwan region using data from bulletins dating back
to the early 1900s. The purpose of relocating these old earthquakes is to improve our
knowledge of which faults the earthquakes occurred on, and thus our knowledge of the
earthquake hazard on specific faults.
For many of these events, we expect poor station distribution relative to the probable
locations of the hypocenters. Also, we expect large numbers of discrepant observations
because of the relatively primitive instrumentation available for much of the period.
Therefore, we expect that locators in common use will perform poorly with this data set.
The JLOC code is intended to be a locator that is stable in the presence of poorly
conditioned problems, and relatively insensitive to errors in the input data. It will also
be able to make use of data from stations with very inaccurate clocks if the stations
reported more than one phase, e.g. S-P times.
JLOC finds the best-fitting hypocenter by forward modeling, using a combination of
grid-search and simplex-search algorithms. The misfit function used by JLOC is an L1
norm, and is thus robust in the presence of the outlier observations expected to be
prevalent in much of the old data. Origin times are estimated separately from the
hypocentral latitude, longitude, and depth. This allows the use of bulletins in which
some stations have only S-P times available. Also, removing this dimension from the
search improves performance.
JLOC is implemented in Java and uses the TauP travel time code by Crotwell and the
IASPEI91 Earth model.
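Estimating origin time separately, as described above, can be sketched as follows: with the hypocenter held fixed, the L1-optimal origin time is the median of (observed arrival time − predicted travel time) over the defining phases. This is an illustration of the idea only, not the actual JLOC implementation.

```java
import java.util.Arrays;

// Sketch: with a fixed hypocenter, the L1-optimal origin time is the
// median of (observed arrival time - predicted travel time).
public class OriginTimeEstimate {
    public static double originTime(double[] arrivals, double[] travelTimes) {
        double[] d = new double[arrivals.length];
        for (int i = 0; i < d.length; i++)
            d[i] = arrivals[i] - travelTimes[i];
        Arrays.sort(d);
        int n = d.length;
        return n % 2 == 1 ? d[n / 2] : 0.5 * (d[n / 2 - 1] + d[n / 2]);
    }
}
```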
Work completed so far
• Classes for representing basic seismic objects such as sites, observations,
  epicenters, etc. completed
• Interface and implementations for ISS bulletin parsing
• Single-stage grid-search inverter completed
• Adaptive depth grid-search algorithm written
• Simplex search implemented
Work planned or in progress
• Add code to handle S-P phases. The issue here is that currently the median
  of the arrival times is being removed. Need to investigate with a test case
  whether an S-P phase is any more stable in the presence of clock errors than
  two mean-removed phases.
• Add azimuthal gap to output. Use the method in the EModel class to do the
  calculation given the collection of observations. Will need to exclude any
  observations removed during inversion. Report the information on the
  hypocenter line.
• Add elevations to the travel time calculation.
• Add file handling to main.
• In output, report on phases that were removed because of large residuals or
  that could not be calculated.
• Track processing time and iterations in each step and report them in output.
• Create a program launcher.
• Add an option to output a user-specified number of residual values as a
  function of position over a distance and depth range around the hypocenter.
• Add Bondar et al. criteria in output and use them as an aid in removing
  discrepant observations.
• Add automatic phase removal.
• Output Monte Carlo-derived confidence regions.
Code Listing
The remaining pages of this report are a listing of the code written so far for this
locator. JLOC reuses a large number of general-purpose classes in other packages, and
these are not shown here. Rather, this is a listing only of the jloc package and its
two sub-packages, parsing and simplex.
package dodge.apps.jloc;
import java.util.Collection;
import java.util.Vector;
public class DepthLimits {
Vector<Double> values;
int numSteps;
public DepthLimits( double min, double max, double stepSize )
{
values = new Vector<Double>();
double value = min;
while( value <= max ){
values.add(value);
value += stepSize;
}
numSteps = values.size()-1;
}
public DepthLimits()
{
values = new Vector<Double>();
values.add( 0.0 );
values.add( 5.0 );
values.add( 10.0 );
values.add( 20.0 );
values.add( 30.0 );
values.add( 100.0 );
values.add( 200.0 );
values.add( 400.0 );
values.add( 600.0 );
numSteps = values.size()-1;
}
public DepthLimits getBracketingLimits( double depth )
{
int idxCurrent = values.indexOf( depth );
if( idxCurrent < 0 )
{
double minError = Double.MAX_VALUE;
for( int j = 0; j < values.size(); ++j ){
double error = Math.abs( depth - values.get(j) );
if( error < minError){
minError = error;
idxCurrent = j;
}
}
}
if( idxCurrent == 0 ){
double dz = ( values.get(1) - values.get(0) ) / numSteps;
return new DepthLimits( values.get(0), values.get(1), dz );
}
else if( idxCurrent == numSteps ){
double dz = ( values.get(numSteps) - values.get(numSteps-1) ) / numSteps;
return new DepthLimits( values.get(numSteps-1), values.get(numSteps), dz );
}
else{
double dz = ( values.get(idxCurrent+1) - values.get(idxCurrent-1) ) /
numSteps;
return new DepthLimits( values.get(idxCurrent-1), values.get(idxCurrent+1),
dz );
}
}
public double getSearchRange()
{
return values.get(numSteps) - values.get(0);
}
public String toString()
{
int n = values.size()-1;
StringBuffer sb = new StringBuffer( "MinDepth = " + values.get(0) + ", MaxDepth
= " + values.get(n) + " over " + (n+1) + " increments" );
return sb.toString();
}
public Collection<Double> getValues()
{
return values;
}
}
package dodge.apps.jloc;
import llnl.gnem.util.Vertex;
import llnl.gnem.util.EModel;
import java.util.Vector;
import java.util.Collection;
public class EpicenterGenerator {

    private Vector<Vertex> epicenters;
    private static final int EAST_AZIMUTH = 90;
    private static final int WEST_AZIMUTH = 270;
    private static final double NORTH_AZIMUTH = 0.0;
    private static final double SOUTH_AZIMUTH = 180.0;

    public EpicenterGenerator(Vertex center, double radiusStep, double maxRadius) {
        epicenters = new Vector<Vertex>();
        epicenters.add(center);
        int numSteps = (int) (maxRadius / radiusStep);
        numSteps += numSteps * radiusStep < maxRadius ? 1 : 0;

        // Add vertices along line N-S of current vertex.
        addNorthSouthVertices(numSteps, radiusStep, center);
        for (int j = 1; j <= numSteps; ++j) {
            double delta = j * radiusStep;
            Vertex vertex = EModel.reckon(center.getLat(), center.getLon(), delta, EAST_AZIMUTH);
            epicenters.add(vertex);
            addNorthSouthVertices(numSteps, radiusStep, vertex);
            vertex = EModel.reckon(center.getLat(), center.getLon(), delta, WEST_AZIMUTH);
            epicenters.add(vertex);
            addNorthSouthVertices(numSteps, radiusStep, vertex);
        }
    }
    private void addNorthSouthVertices(int numSteps, double radiusStep, Vertex vertex) {
        for (int k = 1; k <= numSteps; ++k) {
            double delta = k * radiusStep;
            Vertex v = EModel.reckon(vertex.getLat(), vertex.getLon(), delta, NORTH_AZIMUTH);
            epicenters.add(v);
            v = EModel.reckon(vertex.getLat(), vertex.getLon(), delta, SOUTH_AZIMUTH);
            epicenters.add(v);
        }
    }
    public int size() {
        return epicenters.size();
    }

    public Collection<Vertex> getEpicenters() {
        return epicenters;
    }
}
package dodge.apps.jloc;
import llnl.gnem.util.Vertex;
public class GridSearchHypothesisTester {

    public static HypothesisTestCollection createHypothesisTestCollection(ObservationCollection observations,
            double searchRadius, double radiusStep, DepthLimits depthLimits,
            HypothesisOrigin initialHypothesis) {
        HypothesisTestCollection hypothesisTestCollection = new HypothesisTestCollection();
        Vertex hypothesisEpicenter = new Vertex(initialHypothesis.getVertex());
        EpicenterGenerator eg = new EpicenterGenerator(hypothesisEpicenter, radiusStep, searchRadius);
        HypothesisTestResult testResult = new HypothesisTestResult(initialHypothesis, observations,
                HypothesisEvaluator.getInstance().evaluateHypothesis(initialHypothesis, observations));
        System.out.println("Looking for improvement to initial hypothesis: " + testResult.toString());
        int numEpicenters = eg.size();
        System.out.println("Performing initial grid search using " + numEpicenters + " epicenters.");
        System.out.println("Epicenters are centered at (" + hypothesisEpicenter.toString() +
                ") and extend for " + searchRadius + " degrees at " + radiusStep +
                " degree intervals...");
        for (Double depth : depthLimits.getValues()) {
            System.out.println("Evaluating depth " + depth + " ...");
            for (Vertex vertex : eg.getEpicenters()) {
                HypothesisOrigin origin = new HypothesisOrigin(vertex, depth);
                testResult = new HypothesisTestResult(origin, observations,
                        HypothesisEvaluator.getInstance().evaluateHypothesis(origin, observations));
                hypothesisTestCollection.addHypothesisTest(testResult);
            }
        }
        return hypothesisTestCollection;
    }

    public static HypothesisTestCollection buildEpicenterDepthTestCollection(ObservationCollection observations,
            DepthLimits depthLimits, HypothesisOrigin currentHypothesis) {
        HypothesisTestCollection hypothesisTestCollection = new HypothesisTestCollection();
        for (Double depth : depthLimits.getValues()) {
            Vertex vertex = currentHypothesis.getVertex();
            HypothesisOrigin origin = new HypothesisOrigin(vertex, depth);
            HypothesisTestResult testResult = new HypothesisTestResult(origin, observations,
                    HypothesisEvaluator.getInstance().evaluateHypothesis(origin, observations));
            hypothesisTestCollection.addHypothesisTest(testResult);
        }
        return hypothesisTestCollection;
    }
}
package dodge.apps.jloc;
import llnl.gnem.util.EModel;
import llnl.gnem.traveltime.TaupTraveltime;
import java.util.Vector;
public class HypothesisEvaluator {
TaupTraveltime calculator;
private static HypothesisEvaluator instance = null;
public static HypothesisEvaluator getInstance() {
if (instance == null)
instance = new HypothesisEvaluator();
return instance;
}
private HypothesisEvaluator() {
try {
calculator = new TaupTraveltime();
} catch (Exception e) {
e.printStackTrace();
}
}
public double evaluateHypothesis(HypothesisOrigin origin, ObservationCollection
observations) {
Vector<Observation> predicted = new Vector<Observation>();
for (Observation observation : observations.getObservations()) {
if (observation.isDefining()) {
Observation predObs = getHypothesisObservation(observation, origin);
if (predObs != null)
predicted.add(predObs);
}
}
ObservationCollection predictedCollection = new ObservationCollection(predicted);
return
ObservationCollection.compareObservedToPredicted(predictedCollection,
observations);
}
private
Observation
getHypothesisObservation(Observation
observation,
HypothesisOrigin origin) {
String phase = observation.getPhase();
double
delta
=
EModel.getDeltaWGS84(observation.getSite().getVertex(),
origin.getVertex());
try {
double predicted = calculator.getTraveltime(phase, delta, origin.getDepth()).time;
if (predicted != -999)
return new Observation(observation.getSite(), phase, predicted);
else
return null;
} catch (Exception e) {
return null;
}
}
public double getPredictedTraveltime(Observation observation,
        HypothesisOrigin origin) {
String phase = observation.getPhase();
double delta = EModel.getDeltaWGS84(observation.getSite().getVertex(), origin.getVertex());
try {
return calculator.getTraveltime(phase, delta, origin.getDepth()).time;
} catch (Exception e) {
return -999;
}
}
}
package dodge.apps.jloc;
import llnl.gnem.util.Vertex;
public class HypothesisOrigin {
private Vertex epicenter;
private double depth;
public HypothesisOrigin( Vertex epicenter, double depth )
{
this.epicenter = new Vertex( epicenter );
this.depth = depth;
}
public Vertex getVertex() {
return epicenter;
}
public double getDepth() {
return depth;
}
public String toString()
{
StringBuffer sb = new StringBuffer();
sb.append(epicenter.toString() );
sb.append( ", Depth = " );
sb.append( depth );
return sb.toString();
}
}
package dodge.apps.jloc;
import java.util.TreeMap;
import java.util.Collection;
import java.util.Vector;
public class HypothesisTestCollection {
private TreeMap<Double, HypothesisTestResult> hypotheses;
public HypothesisTestCollection() {
hypotheses = new TreeMap<Double, HypothesisTestResult>();
}
public void addHypothesisTest(HypothesisTestResult testResult) {
hypotheses.put(testResult.getResidual(), testResult);
}
public void addCollection( HypothesisTestCollection newData )
{
for( Double residual : newData.hypotheses.keySet() ){
hypotheses.put( residual, newData.hypotheses.get( residual ) );
}
}
Collection<HypothesisTestResult> getBestSolutions( int number )
{
Vector <HypothesisTestResult> result = new Vector <HypothesisTestResult>();
for( Double key : hypotheses.keySet() ){
result.add( hypotheses.get( key ) );
}
return result;
}
public HypothesisTestResult getBestHypothesisTest() {
Double bestKey = hypotheses.firstKey();
if (bestKey != null)
return hypotheses.get(bestKey);
else
return null;
}
}
package dodge.apps.jloc;
import llnl.gnem.util.Vertex;
import java.util.Collection;
import java.text.NumberFormat;
public class HypothesisTestResult {
private HypothesisOrigin hypothesisOrigin;
private ObservationCollection observations;
private double residual;
public HypothesisTestResult(HypothesisOrigin hypothesisOrigin,
ObservationCollection observations,
double residual) {
this.hypothesisOrigin = hypothesisOrigin;
this.observations = observations;
this.residual = residual;
}
public HypothesisOrigin getHypothesis() {
return hypothesisOrigin;
}
public ObservationCollection getObservations() {
return observations;
}
public double getResidual() {
return residual;
}
public String toString() {
NumberFormat f = NumberFormat.getInstance();
f.setMaximumFractionDigits( 4 );
Vertex v = hypothesisOrigin.getVertex();
StringBuffer sb = new StringBuffer("Lat = " +
        f.format( v.getLat() ) +
        ", Lon = " +
        f.format( v.getLon() ) +
        ", Depth = " +
        f.format( hypothesisOrigin.getDepth() ) +
        ", Residual = " + f.format( residual ) );
return sb.toString();
}
}
package dodge.apps.jloc;
import llnl.gnem.util.Vertex;
import java.util.Vector;
import java.io.IOException;
import java.io.FileOutputStream;
import java.io.PrintStream;
import dodge.apps.jloc.parsing.SingleDataFileParser;
import dodge.apps.jloc.parsing.DbaseDumpParser;
import dodge.apps.jloc.parsing.ParseResults;
import dodge.apps.jloc.parsing.IssParser;
import gnu.jargs.CmdLineParser;
public class Jloc {
private static void printUsage() {
System.err.println("Usage: jloc inputFileName outputFileName controlFileName");
System.err.println("All arguments are required and must be in the order shown.");
}
public static void main(String[] args) {
CmdLineParser parser = new CmdLineParser();
try {
parser.parse(args);
} catch (CmdLineParser.IllegalOptionValueException e) {
printUsage();
System.exit(1);
} catch (CmdLineParser.UnknownOptionException e) {
printUsage();
System.exit(1);
}
String [] otherArgs = parser.getRemainingArgs();
if (otherArgs.length != 3) {
printUsage();
System.exit(1);
}
String inputFile = otherArgs[0];
String outputFile = otherArgs[1];
String controlFile = otherArgs[2];
Locator locator = new Locator();
SingleDataFileParser fileParser = new IssParser();
try {
ProgramData.getInstance().parseControlFile( controlFile );
ParseResults pr = fileParser.parseInputFile(inputFile);
FileOutputStream out = new FileOutputStream( outputFile );
PrintStream p = new PrintStream( out );
locator.findBestHypocenter(pr.startingOrigin, pr.observations, p);
p.close();
out.close();
} catch (IOException e) {
e.printStackTrace(); //To change body of catch statement use File | Settings | File Templates.
}
}
}
package dodge.apps.jloc;
import dodge.apps.jloc.simplex.NelderMead;
import java.util.*;
import java.io.PrintStream;
import llnl.gnem.util.EModel;
import llnl.gnem.util.SeriesMath;
import llnl.gnem.util.TimeT;
public class Locator {
public void findBestHypocenter(HypothesisOrigin startingOrigin,
        ObservationCollection observations, PrintStream out ) {
ProgramData pd = ProgramData.getInstance();
double searchRange = pd.getInitialSearchRange();
double stepSize = pd.getInitialStepSize();
DepthLimits depths = new DepthLimits();
HypothesisTestCollection results = new HypothesisTestCollection();
results.addCollection(GridSearchHypothesisTester.createHypothesisTestCollection(observations,
        searchRange, stepSize, depths, startingOrigin));
HypothesisTestResult best = results.getBestHypothesisTest();
String msg = "Best result from grid search: " + best.toString();
System.out.println(msg);
out.println( msg );
double originTime = getOriginTime(best, observations);
suppressDiscrepantObservations( originTime, best );
// Take the best solution from that and use it as the starting point for a simplex solution
best = NelderMead.searchForMinimum(best, observations);
originTime = getOriginTime(best, observations);
printOutput(originTime, best, observations, System.out);
printOutput(originTime, best, observations, out );
}
private void suppressDiscrepantObservations(double originTime, HypothesisTestResult last )
{
Vector<Double> residuals = new Vector<Double>();
for (Observation obs : last.getObservations().getObservations() ) {
if( obs.isDefining() ){
double ttime = HypothesisEvaluator.getInstance().getPredictedTraveltime(obs,
last.getHypothesis());
double residual = obs.getTime() - originTime -ttime;
residuals.add( residual );
}
}
double std = Math.sqrt( SeriesMath.getVariance( residuals ) );
double residualCutoff = ProgramData.getInstance().getResidualCutoff();
for (Observation obs : last.getObservations().getObservations() ) {
if( obs.isDefining() ){
double ttime = HypothesisEvaluator.getInstance().getPredictedTraveltime(obs,
last.getHypothesis());
double residual = obs.getTime() - originTime -ttime;
if( Math.abs( residual ) > std * residualCutoff ){
System.out.println("Removing " + obs + " because residual is > cutoff...");
obs.setDefining( false );
}
}
}
}
private void printOutput(double originTime,
        HypothesisTestResult last,
        ObservationCollection observations, PrintStream out) {
TimeT otime = new TimeT(originTime);
otime.setFormat("yyyy MM dd HH mm ss.SSS");
out.printf("Origin time (%s) Lat = %7.4f Lon = %8.4f Depth = %6.2f Residual = %6.3f\n",
otime.toString(),
last.getHypothesis().getVertex().getLat(),
last.getHypothesis().getVertex().getLon(),
last.getHypothesis().getDepth(),
last.getResidual());
System.out.println();
OutputStaPhaseData.outputHeader(out);
// Sort observations by delta, ttime before outputting
TreeMap<Double, TreeMap<Double, Observation>> sortedObs =
        new TreeMap<Double, TreeMap<Double, Observation>>();
for (Observation observation : observations.getObservations()) {
double ttime = HypothesisEvaluator.getInstance().getPredictedTraveltime(observation,
        last.getHypothesis());
double delta = EModel.getDeltaWGS84(last.getHypothesis().getVertex(),
        observation.getSite().getVertex());
TreeMap<Double, Observation> distObsMap = sortedObs.get(delta);
if (distObsMap != null) {
distObsMap.put(ttime, observation);
} else {
distObsMap = new TreeMap<Double, Observation>();
distObsMap.put(ttime, observation);
sortedObs.put(delta, distObsMap);
}
}
for (Double delta : sortedObs.keySet()) {
TreeMap<Double, Observation> distObsMap = sortedObs.get(delta);
if (distObsMap != null) {
for (Double ttime : distObsMap.keySet()) {
Observation observation = distObsMap.get(ttime);
if (observation != null) {
double azimuth = EModel.getAzimuthWGS84(last.getHypothesis().getVertex().getLat(),
        last.getHypothesis().getVertex().getLon(),
        observation.getSite().getVertex().getLat(),
        observation.getSite().getVertex().getLon());
OutputStaPhaseData ospd = new OutputStaPhaseData(observation.getSite().getSta(),
        observation.getPhase(),
        observation.getTime() - originTime,
        ttime,
        observation.isDefining(), azimuth, delta);
ospd.outputLine(out);
}
}
}
}
}
double getOriginTime(HypothesisTestResult solution,
        ObservationCollection observations) {
Vector<Double> otime = new Vector<Double>();
HypothesisOrigin origin = solution.getHypothesis();
Collection<Observation> obs = observations.getObservations();
for (Observation observation : obs) {
double ttime = HypothesisEvaluator.getInstance().getPredictedTraveltime(observation, origin);
if (ttime != -999)
otime.add(observation.getTime() - ttime);
}
return SeriesMath.getMedian(otime);
}
}
package dodge.apps.jloc;
public class Observation {
private Site site;
private String phase;
private double time;
private double timeCorrection;
private boolean defining;
public Observation( Site site, String phase, double time )
{
this.site = site;
this.phase = phase;
this.time = time;
setDefining(true);
}
public String toString()
{
StringBuffer sb = new StringBuffer( "Sta = " );
sb.append( site.getSta() );
sb.append(", Phase = " );
sb.append( phase );
return sb.toString();
}
public Site getSite() {
return site;
}
public String getPhase() {
return phase;
}
public double getTime() {
return time;
}
public double getCorrectedTime()
{
return time + timeCorrection;
}
public void setTimeCorrection(double timeCorrection) {
this.timeCorrection = timeCorrection;
}
public boolean isDefining() {
return defining;
}
public void setDefining(boolean defining) {
this.defining = defining;
}
}
package dodge.apps.jloc;
import llnl.gnem.util.SeriesMath;
import java.util.Collection;
import java.util.Vector;
public class ObservationCollection {
private Vector<Observation> observations;
public ObservationCollection(Collection<Observation> obs) {
observations = new Vector<Observation>();
observations.addAll(obs);
double medianObservationTime = getMedianObservationTime(observations);
for (Observation obs2 : observations) {
obs2.setTimeCorrection(-medianObservationTime);
}
}
public Collection<Observation> getObservations() {
return observations;
}
private static Observation findMatchingObservation(Observation obs,
        ObservationCollection oc) {
for (Observation observation : oc.getObservations()) {
if (observation.getSite().equals(obs.getSite()) &&
        observation.getPhase().equals(obs.getPhase()))
return observation;
}
return null;
}
private double getMedianObservationTime(Collection<Observation> observations) {
Vector<Double> obsTimes = new Vector<Double>();
for (Observation obs : observations)
if (obs.isDefining())
obsTimes.add(obs.getTime());
return SeriesMath.getMedian(obsTimes);
}
public static double compareObservedToPredicted(ObservationCollection predicted,
ObservationCollection observed) {
double sum = 0;
int nobs = 0;
for (Observation obs : observed.getObservations()) {
if (obs.isDefining()) {
Observation pred = findMatchingObservation(obs, predicted);
if (pred != null) {
sum += Math.abs(obs.getCorrectedTime() - pred.getCorrectedTime());
++nobs;
}
}
}
if (nobs > 0)
return sum / nobs;
else
return sum;
}
}
package dodge.apps.jloc;
import java.io.PrintStream;
public class OutputStaPhaseData {
private String sta;
private String phase;
private double predictedTravelTime;
private double measuredTravelTime;
private double azimuth;
private double delta;
private boolean defining;
public OutputStaPhaseData(String sta,
String phase,
double measuredTravelTime,
double predictedTravelTime,
boolean defining,
double azimuth,
double delta) {
this.sta = sta;
this.phase = phase;
this.predictedTravelTime = predictedTravelTime;
this.measuredTravelTime = measuredTravelTime;
this.azimuth = azimuth;
this.delta = delta;
this.defining = defining;
}
public void outputLine(PrintStream stream) {
stream.printf("%-6s %-8s %10.3f %10.3f %10.3f %1s %5.2f %6.1f\n",
        sta, phase, measuredTravelTime,
        predictedTravelTime,
        measuredTravelTime - predictedTravelTime,
        (defining ? "d" : "n"),
        delta,
        azimuth);
}
public static void outputHeader( PrintStream stream )
{
stream.println( "STA    PHASE      OBSERVED       PRED        RES DEF  DELTA AZIMUTH" );
}
}
package dodge.apps.jloc.parsing;
import llnl.gnem.util.FileInputArrayLoader;
import llnl.gnem.util.Vertex;
import llnl.gnem.util.SeriesMath;
import java.io.IOException;
import java.util.Vector;
import java.util.StringTokenizer;
import dodge.apps.jloc.Observation;
import dodge.apps.jloc.Site;
import dodge.apps.jloc.HypothesisOrigin;
import dodge.apps.jloc.ObservationCollection;
public class DbaseDumpParser implements SingleDataFileParser{
public ParseResults parseInputFile( String filename ) throws IOException
{
Vector<Double> olatVec = new Vector<Double>();
Vector<Double> olonVec = new Vector<Double>();
Vector<Double> otimeVec = new Vector<Double>();
Vector<Observation> observations = new Vector<Observation>();
String[] lines = FileInputArrayLoader.fillStringsFromFile(
filename );
for( int j = 0; j < lines.length; ++j ){
StringTokenizer st = new StringTokenizer( lines[j] );
olatVec.add( Double.parseDouble( st.nextToken()));
olonVec.add( Double.parseDouble( st.nextToken()));
otimeVec.add( Double.parseDouble( st.nextToken()));
String sta = st.nextToken();
String phase = st.nextToken();
double time = Double.parseDouble(st.nextToken() );
double stla = Double.parseDouble( st.nextToken() );
double stlo = Double.parseDouble( st.nextToken() );
Site site = new Site(sta, new Vertex(stla, stlo), 0.0 );
Observation obs = new Observation( site, phase, time );
observations.add( obs );
}
double olat = SeriesMath.getMedian( olatVec );
double olon = SeriesMath.getMedian( olonVec );
// double otime = SeriesMath.getMedian( otimeVec );
HypothesisOrigin origin = new HypothesisOrigin(new Vertex( olat, olon ),15.0 );
return new ParseResults(origin, new ObservationCollection( observations ) );
}
}
package dodge.apps.jloc.parsing;
import dodge.apps.jloc.Observation;
import dodge.apps.jloc.Site;
import dodge.apps.jloc.HypothesisOrigin;
import dodge.apps.jloc.ObservationCollection;
import java.io.IOException;
import java.util.Vector;
import java.util.StringTokenizer;
import llnl.gnem.util.FileInputArrayLoader;
import llnl.gnem.util.Vertex;
import llnl.gnem.util.SeriesMath;
import llnl.gnem.util.TimeT;
public class IssParser implements SingleDataFileParser {
private OriginSolution getOriginLine(String line) {
StringTokenizer st = new StringTokenizer(line);
st.nextToken(); // throw away first token which is the author...
int year = Integer.parseInt(st.nextToken());
int month = Integer.parseInt(st.nextToken());
int day = Integer.parseInt(st.nextToken());
int hour = Integer.parseInt(st.nextToken());
int minute = Integer.parseInt(st.nextToken());
double second = Double.parseDouble(st.nextToken());
TimeT time = new TimeT(year, month, day, hour, minute, second);
double lat = Double.parseDouble(st.nextToken());
double lon = Double.parseDouble(st.nextToken());
double depth = Double.parseDouble(st.nextToken());
return new OriginSolution(lat, lon, depth, time.getEpochTime());
}
public ParseResults parseInputFile(String filename) throws IOException {
Vector<Observation> observations = new Vector<Observation>();
String[] lines = FileInputArrayLoader.fillStringsFromFile(filename);
boolean gotOrigin = false;
OriginSolution origin = null;
double originTime = 0;
for (int j = 0; j < lines.length; ++j) {
if (lines[j].trim().charAt(0) != '*') {
if (!gotOrigin) {
origin = getOriginLine(lines[j]);
gotOrigin = true;
originTime = origin.time;
} else {
StringTokenizer st = new StringTokenizer(lines[j]);
String sta = st.nextToken();
String phase = st.nextToken();
if (phase.length() > 1)
phase = phase.substring(1);
double time = Double.parseDouble(st.nextToken()) + originTime;
double stla = Double.parseDouble(st.nextToken());
double stlo = Double.parseDouble(st.nextToken());
double elev = Double.parseDouble(st.nextToken());
Site site = new Site(sta, new Vertex(stla, stlo), elev);
Observation obs = new Observation(site, phase, time);
observations.add(obs);
}
}
}
return new ParseResults(origin.origin, new ObservationCollection(observations));
}
class OriginSolution {
public HypothesisOrigin origin;
public double time;
public OriginSolution(double lat, double lon, double depth, double time) {
origin = new HypothesisOrigin(new Vertex(lat, lon), depth);
this.time = time;
}
}
}
package dodge.apps.jloc.parsing;
import dodge.apps.jloc.HypothesisOrigin;
import dodge.apps.jloc.ObservationCollection;
public class ParseResults {
public HypothesisOrigin startingOrigin;
public ObservationCollection observations;
public ParseResults( HypothesisOrigin startingOrigin,
        ObservationCollection observations )
{
this.startingOrigin = startingOrigin;
this.observations = observations;
}
}
package dodge.apps.jloc.parsing;
import java.io.IOException;
public interface SingleDataFileParser {
ParseResults parseInputFile( String filename ) throws IOException;
}
package dodge.apps.jloc;
import llnl.gnem.util.FileInputArrayLoader;
import java.io.IOException;
import java.util.StringTokenizer;
public class ProgramData {
private static ProgramData ourInstance = new ProgramData();
private double residualCutoff = 2.5; // observations with residuals more than residualCutoff times the std will not be used.
private boolean restartSimplex = false;
private int numberOfSimplexRestarts = 0;
private int maxSimplexIterations = 200;
private double simplexConvergenceTol = 0.000001;
private double initialSearchRange = 6; // Range in degrees around starting point to search in grid search
private double initialStepSize = 1; // step size in degrees for grid search
public static ProgramData getInstance() {
return ourInstance;
}
private ProgramData() {
}
public double getResidualCutoff() {
return residualCutoff;
}
public boolean isRestartSimplex() {
return restartSimplex;
}
public int getNumberOfSimplexRestarts() {
return numberOfSimplexRestarts;
}
public int getMaxSimplexIterations() {
return maxSimplexIterations;
}
public double getSimplexConvergenceTol() {
return simplexConvergenceTol;
}
public double getInitialSearchRange() {
return initialSearchRange;
}
public double getInitialStepSize() {
return initialStepSize;
}
public void parseControlFile(String filename) throws IOException {
String[] lines = FileInputArrayLoader.fillStringsFromFile(filename);
for (String line : lines) {
if (line.trim().charAt(0) != '*') { // Don't process lines that start with *
int starIndex = line.indexOf('*');
//Remove trailing comments as well
if (starIndex > 0) {
line = line.substring(0, starIndex);
}
StringTokenizer st = new StringTokenizer(line, " \t\n=");
if (st.countTokens() == 2) {
processTokenPair(st.nextToken(), st.nextToken());
}
}
}
}
private void processTokenPair(String token1, String token2) {
if (token1.equalsIgnoreCase("residualCutoff")) {
residualCutoff = Double.parseDouble(token2);
} else if (token1.equalsIgnoreCase("restartSimplex")) {
restartSimplex = Boolean.parseBoolean(token2);
}
else if (token1.equalsIgnoreCase("numberOfSimplexRestarts")) {
numberOfSimplexRestarts = Integer.parseInt(token2);
} else if (token1.equalsIgnoreCase("maxSimplexIterations")) {
maxSimplexIterations = Integer.parseInt(token2);
} else if (token1.equalsIgnoreCase("simplexConvergenceTol")) {
simplexConvergenceTol = Double.parseDouble(token2);
} else if (token1.equalsIgnoreCase("initialSearchRange")) {
initialSearchRange = Double.parseDouble(token2);
} else if (token1.equalsIgnoreCase("initialStepSize")) {
initialStepSize = Double.parseDouble(token2);
}
}
}
package dodge.apps.jloc.simplex;
import dodge.apps.jloc.*;
import llnl.gnem.util.Vertex;
import llnl.gnem.util.EModel;
import java.util.Random;
public class NelderMead {
static Random random = new Random();
public static HypothesisTestResult searchForMinimum(HypothesisTestResult best,
        ObservationCollection observations) {
System.out.println("Attempting to refine solution around current point...");
ProgramData pd = ProgramData.getInstance();
boolean perturbAll = false;
SimplexVertex trial = NelderMead.descend(best.getHypothesis(), observations,
perturbAll);
if (pd.isRestartSimplex()) {
int iterations = 0;
while (iterations++ < pd.getNumberOfSimplexRestarts()) {
SimplexVertex thisTrial = NelderMead.descend(trial.origin, observations,
perturbAll);
System.out.println(thisTrial);
trial = thisTrial;
}
}
System.out.println("Simplex iterations finished.");
System.out.println();
return new HypothesisTestResult(trial.origin, observations, trial.residual);
}
public static SimplexVertex descend(HypothesisOrigin startingOrigin,
        ObservationCollection observations, boolean perturbAll) {
HypothesisOrigin[] initialVertices = getStartingSimplex(startingOrigin, perturbAll);
return descend(initialVertices, observations);
}
public static SimplexVertex descend(HypothesisOrigin[] initialVertices,
        ObservationCollection observations) {
SimplexVertex[] simplex = new SimplexVertex[initialVertices.length];
int nvertex = simplex.length;
for (int j = 0; j < nvertex; ++j) {
simplex[j] = new SimplexVertex(initialVertices[j]);
simplex[j].residual = HypothesisEvaluator.getInstance().evaluateHypothesis(simplex[j].origin, observations);
}
ProgramData pd = ProgramData.getInstance();
int maxSimplexIterations = pd.getMaxSimplexIterations();
int ilo = 0;
for (int iter = 1; iter < maxSimplexIterations; iter++) {
/////////// identify lo, nhi, hi points //////////////
double flo = simplex[0].residual;
double fhi = flo;
ilo = 0;
int ihi = 0;
for (int i = 1; i < nvertex; i++) {
if (simplex[i].residual < flo) {
flo = simplex[i].residual;
ilo = i;
}
if (simplex[i].residual > fhi) {
fhi = simplex[i].residual;
ihi = i;
}
}
double fnhi = flo;
for (int i = 0; i < nvertex; i++) {
if ((i != ihi) && (simplex[i].residual > fnhi)) {
fnhi = simplex[i].residual;
}
}
////////// exit criterion //////////////
if (isConverged(simplex, ihi, ilo)) {
return simplex[ilo];
}
///// compute ave[] vector excluding highest vertex
// This is the centroid of the face opposite the high point
SimplexVertex ave = VertexOperations.getOppositeFaceCentroidVertex(simplex,
ihi);
///////// try reflect e.g. extrapolate by factor -1 through centroid
SimplexVertex ytry = amotry(ave, simplex, ihi, -1.0, observations);
if (ytry.residual <= flo) {
// try additional extrapolation by a factor of 2
amotry(ave, simplex, ihi, 2.0, observations);
} else if (ytry.residual >= fnhi) {
//Reflected point is worse than the second highest, so look for an intermediate lower point.
//i.e. do a 1-dimensional contraction
double ysave = simplex[ihi].residual;
ytry = amotry(ave, simplex, ihi, 0.5, observations);
if (ytry.residual >= ysave) {
contractAroundBestVertex(simplex, ilo, observations);
++iter;
}
} else {
//
--iter;
}
}
return simplex[ilo];
}
private static SimplexVertex amotry(SimplexVertex centroid,
SimplexVertex[] simplex,
int ihi,
double scale,
ObservationCollection observations) {
SimplexVertex result = VertexOperations.getScaledVertex(centroid, simplex[ihi],
scale);
result.residual = HypothesisEvaluator.getInstance().evaluateHypothesis(result.origin, observations);
if (result.residual < simplex[ihi].residual)
simplex[ihi] = result;
return result;
}
private static boolean isConverged(SimplexVertex[] simplex, int ihi, int ilo) {
ProgramData pd = ProgramData.getInstance();
double convergenceTol = pd.getSimplexConvergenceTol();
double rtol = 2 * Math.abs(simplex[ihi].residual - simplex[ilo].residual) /
Math.abs(simplex[ihi].residual + simplex[ilo].residual);
return rtol < convergenceTol;
}
private static void contractAroundBestVertex(SimplexVertex[] simplex, int ilo,
ObservationCollection observations) {
VertexOperations.contractAroundBestVertex(simplex, ilo);
for (int j = 0; j < simplex.length; ++j) {
if (j != ilo) {
simplex[j].residual = HypothesisEvaluator.getInstance().evaluateHypothesis(simplex[j].origin, observations);
}
}
}
static HypothesisOrigin[] getStartingSimplex(HypothesisOrigin startingOrigin,
        boolean perturbAll) {
HypothesisOrigin[] result = new HypothesisOrigin[4];
if (perturbAll)
result[0] = getPerturbedOrigin(startingOrigin);
else
result[0] = startingOrigin;
for (int j = 1; j < 4; ++j) {
result[j] = getPerturbedOrigin(startingOrigin);
}
return result;
}
private static HypothesisOrigin getPerturbedOrigin(HypothesisOrigin startingOrigin)
{
double depth = random.nextFloat() * 50;
double delta = random.nextFloat() * 5;
double azimuth = random.nextFloat() * 360;
Vertex vertex = EModel.reckon(startingOrigin.getVertex().getLat(),
        startingOrigin.getVertex().getLon(),
        delta,
        azimuth);
return new HypothesisOrigin(vertex, depth);
}
}
package dodge.apps.jloc.simplex;
import dodge.apps.jloc.HypothesisOrigin;
import llnl.gnem.util.EModel;
import java.text.NumberFormat;
public class SimplexVertex {
public double residual;
public HypothesisOrigin origin;
public SimplexVertex(HypothesisOrigin origin) {
this.origin = origin;
}
public String toString() {
NumberFormat f = NumberFormat.getInstance();
f.setMaximumFractionDigits(4);
StringBuffer sb = new StringBuffer("Lat = " + f.format(origin.getVertex().getLat()));
sb.append(", Lon = " + f.format(origin.getVertex().getLon()));
sb.append(", Depth = " + f.format(origin.getDepth()));
sb.append(", Residual = " + f.format(residual));
return sb.toString();
}
public boolean equals(Object that) {
if (this == that) return true;
if (!(that instanceof SimplexVertex)) return false;
SimplexVertex thatVertex = (SimplexVertex) that;
return thatVertex.origin.getVertex().equals(origin.getVertex()) &&
thatVertex.origin.getDepth() == origin.getDepth();
}
}
package dodge.apps.jloc.simplex;
import llnl.gnem.util.GeocentricCoordinate;
import llnl.gnem.util.EModel;
import llnl.gnem.util.Vertex;
import javax.vecmath.Vector3d;
import dodge.apps.jloc.HypothesisOrigin;
public class VertexOperations {
private static GeocentricCoordinate getGeocentricCoordinate(SimplexVertex vertex) {
return new GeocentricCoordinate(vertex.origin.getVertex().getLat(),
vertex.origin.getVertex().getLon(), vertex.origin.getDepth());
}
/**
* Returns centroid Simplex vertex. The centroid must be calculated in local coordinates;
* otherwise, if the vertices span +-180 degrees longitude, the calculation fails.
*
* @param vertices
* @return A new SimplexVertex at the centroid (lat, lon, depth) of input vertices.
*/
public static SimplexVertex getCentroidVertex(SimplexVertex[] vertices) {
GeocentricCoordinate coordOrigin = getGeocentricCoordinate(vertices[0]);
double x = 0;
double y = 0;
double z = 0;
for (SimplexVertex vert : vertices) {
GeocentricCoordinate thisCoord = getGeocentricCoordinate(vert);
Vector3d vertex = EModel.getLocalCoords(coordOrigin, thisCoord);
x += vertex.x;
y += vertex.y;
z += vertex.z;
}
x /= vertices.length;
y /= vertices.length;
z /= vertices.length;
Vector3d centroid = new Vector3d(x, y, z);
GeocentricCoordinate gc = EModel.getGeocentricCoords(coordOrigin, centroid);
HypothesisOrigin centroidOrigin = new HypothesisOrigin(new Vertex(gc.getLat(),
gc.getLon()), gc.getDepth());
return new SimplexVertex(centroidOrigin);
}
public static SimplexVertex getOppositeFaceCentroidVertex(SimplexVertex[] vertices,
int highPointIndex) {
SimplexVertex[] face = new SimplexVertex[vertices.length - 1];
int k = 0;
for (int j = 0; j < vertices.length; ++j) {
if (j != highPointIndex) {
face[k++] = vertices[j];
}
}
return getCentroidVertex(face);
}
public static SimplexVertex getScaledVertex(SimplexVertex centroid, SimplexVertex
highPoint, double scale) {
GeocentricCoordinate coordOrigin = getGeocentricCoordinate(centroid);
GeocentricCoordinate thisCoord = getGeocentricCoordinate(highPoint);
Vector3d vertex = EModel.getLocalCoords(coordOrigin, thisCoord);
vertex.scale(scale);
GeocentricCoordinate reflected = EModel.getGeocentricCoords(coordOrigin, vertex);
if (reflected.getDepth() < 0)
reflected.setDepth(0);
HypothesisOrigin reflectedOrigin = new HypothesisOrigin(new Vertex(reflected.getLat(),
        reflected.getLon()), reflected.getDepth());
return new SimplexVertex(reflectedOrigin);
}
public static void contractAroundBestVertex(SimplexVertex[] vertices,
        int bestPointIndex) {
GeocentricCoordinate coordOrigin = getGeocentricCoordinate(vertices[bestPointIndex]);
for (int j = 0; j < vertices.length; ++j) {
if (j != bestPointIndex) {
GeocentricCoordinate thisCoord = getGeocentricCoordinate(vertices[j]);
Vector3d cartesianVertex = EModel.getLocalCoords(coordOrigin, thisCoord);
cartesianVertex.scale(0.5);
GeocentricCoordinate gc = EModel.getGeocentricCoords(coordOrigin, cartesianVertex);
HypothesisOrigin vertex = new HypothesisOrigin(new Vertex(gc.getLat(),
gc.getLon()), gc.getDepth());
vertices[j] = new SimplexVertex(vertex);
}
}
}
}
package dodge.apps.jloc;
import llnl.gnem.util.Vertex;
public class Site {
private String sta;
private Vertex vertex;
private double elev;
public Site(String sta, Vertex vertex, double elev)
{
this.sta = sta;
this.vertex = new Vertex( vertex );
this.elev = elev;
}
public String getSta() {
return sta;
}
public Vertex getVertex() {
return vertex;
}
public double getElev() {
return elev;
}
}
Code to Support Joint Inversion for Hypocenters and Velocity
Much of the seismicity off the eastern coast of Taiwan is poorly located by the Taiwan
short-period network. This is because the events are outside the network, resulting in a
poorly constrained inverse problem. Many of these events are large enough to be located
by various Japanese networks, although the expectation is that such locations would
also be of poor quality since the events are outside those networks as well. However, in
principle, combining data from both the Taiwan and Japanese networks should provide
much better constraints on the locations. We decided to investigate this possibility.
We used two types of JMA bulletins and two types of CWB bulletins for this
experiment. All four bulletin types needed parsers. In addition, we needed to be able to
correlate events between bulletins so that sets of arrivals could be properly combined.
All bulletins were converted into CSS format and loaded into an Oracle 10g database.
There, PL/SQL trigger code (not shown here) categorized events by Ground-Truth level
using the method of Bondar et al. (2004). This allowed selection for joint relocation of
only those events for which the network configuration was optimal.
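The geometric part of such a Ground-Truth screen can be sketched as follows. This is an illustrative stand-in, not the PL/SQL trigger code or the published GT20 thresholds; the class name, the station-count minimum, and the azimuthal-gap limit are all hypothetical placeholders.

```java
import java.util.Arrays;

public class GtCandidateCheck {
    /**
     * Crude network-geometry test in the spirit of Bondar et al. (2004):
     * require enough recording stations and a small maximum azimuthal gap.
     * The numeric thresholds passed in are illustrative, not the GT20 criteria.
     */
    public static boolean isGtCandidate(double[] stationAzimuthsDeg, int minStations,
                                        double maxAzimuthalGapDeg) {
        if (stationAzimuthsDeg.length < minStations)
            return false;
        double[] az = stationAzimuthsDeg.clone();
        Arrays.sort(az);
        // Largest gap between consecutive station azimuths, including the wrap-around gap.
        double maxGap = 360.0 - az[az.length - 1] + az[0];
        for (int i = 1; i < az.length; ++i)
            maxGap = Math.max(maxGap, az[i] - az[i - 1]);
        return maxGap <= maxAzimuthalGapDeg;
    }

    public static void main(String[] args) {
        double[] wellSurrounded = {10, 55, 100, 150, 200, 250, 300, 340};
        double[] oneSided = {10, 20, 30, 40, 50, 60, 70, 80};
        System.out.println(isGtCandidate(wellSurrounded, 8, 110.0)); // true
        System.out.println(isGtCandidate(oneSided, 8, 110.0));       // false
    }
}
```

An event recorded only to one side (the one-sided azimuth set above) fails the gap test, which is exactly the situation for Taiwan-only recordings of events off the eastern coast.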
The data meeting GT20 criteria were then extracted from the database and formatted for
input into the Velest program. Analysis of several Velest runs revealed that the
networks had inconsistent timing and that the timing differences were not stable over
time. This factor destabilized the inversion to such a degree that we decided to abandon
the approach, at least with the older data. The code used in this effort follows.
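The kind of inter-network timing comparison described above can be illustrated with a short sketch. The class and method names here are hypothetical and do not appear in the listings that follow; it simply contrasts the median travel-time residual of each network against a common set of predicted times.

```java
import java.util.Arrays;

public class NetworkTimingCheck {
    /**
     * Median of (observed - predicted) travel-time residuals for one network.
     * A large difference between two networks' medians, changing from epoch to
     * epoch, is the signature of the timing inconsistency that destabilized
     * the joint inversion.
     */
    public static double medianResidual(double[] observedTimes, double[] predictedTimes) {
        double[] residuals = new double[observedTimes.length];
        for (int i = 0; i < residuals.length; ++i)
            residuals[i] = observedTimes[i] - predictedTimes[i];
        Arrays.sort(residuals);
        int n = residuals.length;
        return (n % 2 == 1) ? residuals[n / 2]
                            : 0.5 * (residuals[n / 2 - 1] + residuals[n / 2]);
    }

    public static void main(String[] args) {
        double[] obsA = {10.2, 20.1, 30.3};   // network A picks (illustrative)
        double[] obsB = {11.1, 21.0, 31.2};   // network B picks (illustrative)
        double[] pred = {10.0, 20.0, 30.0};   // predicted times from one model
        double offset = medianResidual(obsB, pred) - medianResidual(obsA, pred);
        System.out.printf("Apparent inter-network offset: %.2f s%n", offset);
    }
}
```

If this offset were constant it could be absorbed as a station or network correction; because it drifted over time in the older bulletins, no single correction stabilized the Velest runs.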
package dodge.apps.loadCwbNew;
import java.util.Vector;
import llnl.gnem.util.TimeT;
public class CwbNewObservation {
private String sta;
private String phase;
private double time;
private String chan = "-";
private String auth = "CwbNew";
private String onset = "-";
public CwbNewObservation( String sta, String phase, double time ) {
this.setSta(sta);
this.setPhase(phase);
this.setTime(time);
}
public String toString() {
return getSta();
}
public void adjustTime( double adjust ) {
setTime(getTime() + adjust);
}
public static Vector parseObsLine( String str, CwbNewOrigin origin ) {
Vector result = new Vector();
String sta = str.substring(1,5).trim();
TimeT otime = new TimeT( origin.getTime());
double otimeSec = otime.getSecond();
String phase = "P";
double pSecond = Double.parseDouble( str.substring(23,29).trim() );
if( pSecond > 0 ){
double arrivalOffset = pSecond - otimeSec;
if( arrivalOffset < 0 )
arrivalOffset += 86400;
CwbNewObservation obs = new CwbNewObservation( sta, phase,
        otime.getEpochTime() + arrivalOffset );
result.add( obs );
}
phase = "S";
double sSecond = Double.parseDouble( str.substring(39,45).trim() );
if( sSecond > 0 ){
double arrivalOffset = sSecond - otimeSec;
if( arrivalOffset < 0 )
arrivalOffset += 86400;
CwbNewObservation obs = new CwbNewObservation( sta, phase,
        otime.getEpochTime() + arrivalOffset );
result.add( obs );
}
return result;
}
public String getSta() {
return sta;
}
public void setSta(String sta) {
this.sta = sta;
}
public String getPhase() {
return phase;
}
public void setPhase(String phase) {
this.phase = phase;
}
public double getTime() {
return time;
}
public void setTime(double time) {
this.time = time;
}
public String getChan() {
return chan;
}
public String getAuth() {
return auth;
}
public String getOnset() {
return onset;
}
}
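The parseObsLine methods in these parsers rebuild absolute arrival times from seconds fields; when an arrival's seconds value is smaller than the origin's, the offset goes negative and is wrapped forward by one day (86400 s). A minimal, self-contained sketch of that wrap logic:

```java
public class ArrivalWrapDemo {
    // Wrap a negative arrival-minus-origin offset across the day boundary,
    // mirroring the "+= 86400" correction used in the bulletin parsers.
    static double wrapOffset(double arrivalSec, double originSec) {
        double offset = arrivalSec - originSec;
        if (offset < 0)
            offset += 86400; // the arrival belongs to the next day
        return offset;
    }

    public static void main(String[] args) {
        // origin at 86390 s of day (23:59:50), arrival at 5 s of day (00:00:05)
        System.out.println(wrapOffset(5, 86390)); // 15.0
        System.out.println(wrapOffset(120, 100)); // 20.0
    }
}
```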
package dodge.apps.loadCwbNew;
import llnl.gnem.util.TimeT;
public class CwbNewOrigin {
private double time;
private double lat;
private double lon;
private double depth;
private double magnitude;
public CwbNewOrigin( String str ) {
int year = Integer.parseInt( str.substring(1,5).trim());
int month = Integer.parseInt( str.substring(5,7).trim());
int day = Integer.parseInt( str.substring(7,9).trim());
int hour = Integer.parseInt( str.substring(9,11).trim());
int minute = Integer.parseInt( str.substring(11,13).trim());
double second = Double.parseDouble(str.substring(14, 19).trim() );
TimeT tmp = new TimeT(year, month, day, hour, minute, second );
setTime(tmp.getEpochTime() );
setLat(Double.parseDouble(str.substring(19,21).trim()) +
        Double.parseDouble(str.substring(21,26).trim())/60);
setLon(Double.parseDouble(str.substring(26,29).trim()) +
        Double.parseDouble(str.substring(29, 34).trim())/60);
setDepth(Double.parseDouble(str.substring( 34, 40).trim() ));
double mag = -999;
String mag1Str = str.substring( 40,44);
if( mag1Str.trim().length() > 0)
setMagnitude(Double.parseDouble( mag1Str.trim() ));
}
public double getTime() {
return time;
}
public void adjustTime( double adjust ) {
time += adjust;
}
public void setTime(double time) {
this.time = time;
}
public double getLat() {
return lat;
}
public void setLat(double lat) {
this.lat = lat;
}
public double getLon() {
return lon;
}
public void setLon(double lon) {
this.lon = lon;
}
public double getDepth() {
return depth;
}
public void setDepth(double depth) {
this.depth = depth;
}
public double getMagnitude() {
return magnitude;
}
public void setMagnitude(double magnitude) {
this.magnitude = magnitude;
}
public String toString() {
    StringBuffer sb = new StringBuffer( "Time = " );
    sb.append( time );
    sb.append( " lat = " + lat );
    sb.append( " lon = " + lon );
    sb.append( " depth = " + depth );
    sb.append( " mag = " + magnitude );
    return sb.toString();
}
}
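CwbNewOrigin combines a whole-degrees field with a decimal-minutes field, so latitude and longitude come out as degrees + minutes/60. A small sketch of that conversion:

```java
public class DegMinDemo {
    // CWB bulletin coordinates are whole degrees plus decimal minutes;
    // CwbNewOrigin combines them as degrees + minutes/60.
    static double toDecimalDegrees(double degrees, double minutes) {
        return degrees + minutes / 60.0;
    }

    public static void main(String[] args) {
        System.out.println(toDecimalDegrees(23, 30.0));  // 23.5
        System.out.println(toDecimalDegrees(121, 45.0)); // 121.75
    }
}
```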
package dodge.apps.loadCwbNew;
import dodge.apps.loadjma2.loadjma2;
import java.io.BufferedInputStream;
import java.io.BufferedReader;
import java.io.File;
import java.io.FilenameFilter;
import java.io.InputStreamReader;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Types;
import java.util.Enumeration;
import java.util.Vector;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import llnl.gnem.util.TimeT;
public class loadCwbNew {
private Connection conn;
class ZIPFilter implements FilenameFilter {
public boolean accept(File dir, String name) {
return name.endsWith(".zip") || name.endsWith(".ZIP");
}
}
public Connection getConnection() throws SQLException{
final String connect_string =
"jdbc:oracle:thin:@localhost:1521:orcl";
DriverManager.registerDriver(new oracle.jdbc.OracleDriver());
return DriverManager.getConnection(connect_string, "dodge", "hp15c");
}
public void addEvent( CwbNewOrigin origin, Vector obs ) throws SQLException {
    int orid = -1;
    CallableStatement stmt = conn.prepareCall("{? = call add_new_origin(?, ?, ?, ?, ?, ?, ?)}");
    stmt.setDouble(2, origin.getLat() );
    stmt.setDouble(3, origin.getLon() );
    stmt.setDouble(4, origin.getTime() );
stmt.setDouble(5, origin.getDepth() );
stmt.setDouble(6, origin.getMagnitude() );
stmt.setString(7, "CwbNew" );
stmt.setInt(8, orid );
stmt.registerOutParameter(1, Types.INTEGER );
stmt.registerOutParameter(8, Types.INTEGER );
stmt.execute();
int evid = stmt.getInt(1);
orid = stmt.getInt(8);
if( obs != null ){
for( int j = 0; j < obs.size(); ++j ){
CwbNewObservation jobs = (CwbNewObservation) obs.get(j);
CallableStatement cs = conn.prepareCall("{call addArrival(?, ?, ?, ?, ?, ?, ?, ?)}");
cs.setString(1, jobs.getSta() );
cs.setString(2, jobs.getPhase() );
double time = jobs.getTime();
cs.setDouble(3, time );
TimeT otime = new TimeT( time );
cs.setInt(4, otime.getJdate() );
cs.setString(5, jobs.getOnset() );
cs.setString(6, "CwbNew" );
cs.setInt(7, evid );
cs.setInt(8, orid );
cs.execute();
cs.close();
}
}
stmt.close();
}
public void processSingleEvent( ZipFile zipFile, ZipEntry entry ) throws Exception
{
boolean isFirstLine = true;
System.out.println( "\t" + entry.getName() );
BufferedInputStream is = new BufferedInputStream(zipFile.getInputStream(entry));
InputStreamReader isr = new InputStreamReader( is );
BufferedReader br= new BufferedReader( isr );
String str;
CwbNewOrigin cno = null;
Vector observations = new Vector();
while ((str = br.readLine()) != null) {
if( isFirstLine ){
try{
cno = new CwbNewOrigin( str );
}
catch(Exception e){
br.close();
isr.close();
is.close();
return;
}
isFirstLine = false;
}
else{
if( cno != null )
try{
observations.addAll( CwbNewObservation.parseObsLine( str, cno ) );
}
catch( Exception e ){
}
}
}
br.close();
isr.close();
is.close();
if( cno != null )
addEvent( cno, observations );
}
public void processSingleZipFile( String filename ) throws Exception {
File sourceZipFile = new File(filename);
System.out.println( "Processing " + filename + " ..." );
// Open Zip file for reading
ZipFile zipFile = new ZipFile(sourceZipFile, ZipFile.OPEN_READ);
// Create an enumeration of the entries in the zip file
Enumeration zipFileEntries = zipFile.entries();
// Process each entry
while (zipFileEntries.hasMoreElements()) {
// grab a zip file entry
ZipEntry entry = (ZipEntry) zipFileEntries.nextElement();
String currentEntry = entry.getName();
// extract file if not a directory
if (!entry.isDirectory()) {
processSingleEvent( zipFile, entry );
}
}
zipFile.close();
}
public void processDirectory(String directory) throws Exception {
File dir = new File(directory);
FilenameFilter filter = new ZIPFilter();
if (!dir.isDirectory())
throw new IllegalArgumentException("FileLister: no such directory");
String[] entries = dir.list( filter );
for(int i = 0; i < entries.length; i++){
processSingleZipFile(directory + "\\" + entries[i]);
}
}
/** Creates a new instance of loadCwbNew */
public loadCwbNew( Connection conn) {
this.conn = conn;
}
public static void main( String[] args ) {
try{
Connection conn = loadjma2.getConnection();
System.out.println( "Connected" );
loadCwbNew lcn = new loadCwbNew( conn );
lcn.processDirectory("C:\\dodge\\taiwan_3d\\CWB_more_recent\\2000" );
lcn.processDirectory("C:\\dodge\\taiwan_3d\\CWB_more_recent\\2001" );
lcn.processDirectory("C:\\dodge\\taiwan_3d\\CWB_more_recent\\2002" );
}
catch( Exception e ){
e.printStackTrace();
}
}
}
package dodge.apps.loadCwbOld;
import java.util.StringTokenizer;
import java.util.Vector;
import llnl.gnem.util.TimeT;
public class CwbOldObservation {
private String sta;
private String phase;
private double time;
private String chan = "-";
private String auth = "CwbOld";
private String onset = "-";
public CwbOldObservation( String sta, String phase, double time ) {
this.setSta(sta);
this.setPhase(phase);
this.setTime(time);
}
public String toString() {
return getSta();
}
public void adjustTime( double adjust ) {
setTime(getTime() + adjust);
}
public static Vector parseObsLine( String str, CwbOldOrigin origin ) {
    Vector result = new Vector();
    String sta = str.substring(1,5).trim();
    TimeT otime = new TimeT( origin.getTime());
    double otimeSec = otime.getSecond();
    String phase = "P";
    double pSecond = Double.parseDouble( str.substring(23,29).trim() );
    if( pSecond > 0 ){
        double arrivalOffset = pSecond - otimeSec;
        if( arrivalOffset < 0 )
            arrivalOffset += 86400;
        CwbOldObservation obs = new CwbOldObservation( sta, phase,
                otime.getEpochTime() + arrivalOffset );
        result.add( obs );
    }
    phase = "S";
    double sSecond = Double.parseDouble( str.substring(39,45).trim() );
    if( sSecond > 0 ){
        double arrivalOffset = sSecond - otimeSec;
        if( arrivalOffset < 0 )
            arrivalOffset += 86400;
        CwbOldObservation obs = new CwbOldObservation( sta, phase,
                otime.getEpochTime() + arrivalOffset );
        result.add( obs );
    }
    return result;
}
public String getSta() {
    return sta;
}
public void setSta(String sta) {
this.sta = sta;
}
public String getPhase() {
return phase;
}
public void setPhase(String phase) {
this.phase = phase;
}
public double getTime() {
return time;
}
public void setTime(double time) {
this.time = time;
}
public String getChan() {
return chan;
}
public String getAuth() {
return auth;
}
public String getOnset() {
return onset;
}
}
package dodge.apps.loadCwbOld;
import llnl.gnem.util.TimeT;
public class CwbOldOrigin {
private double time;
private double lat;
private double lon;
private double depth;
private double magnitude;
public CwbOldOrigin( String strIn ) {
String str = " " + strIn;
int year = Integer.parseInt( str.substring(1,5).trim()) + 1900;
int month = Integer.parseInt( str.substring(5,7).trim());
int day = Integer.parseInt( str.substring(7,9).trim());
int hour = Integer.parseInt( str.substring(9,11).trim());
int minute = Integer.parseInt( str.substring(11,13).trim());
double second = Double.parseDouble(str.substring(14, 19).trim() );
TimeT tmp = new TimeT(year, month, day, hour, minute, second );
setTime(tmp.getEpochTime() );
setLat(Double.parseDouble(str.substring(19,21).trim()) +
        Double.parseDouble(str.substring(21,26).trim())/60);
setLon(Double.parseDouble(str.substring(26,29).trim()) +
        Double.parseDouble(str.substring(29, 34).trim())/60);
setDepth(Double.parseDouble(str.substring( 34, 40).trim() ));
double mag = -999;
String mag1Str = str.substring( 40,44);
if( mag1Str.trim().length() > 0)
setMagnitude(Double.parseDouble( mag1Str.trim() ));
}
public double getTime() {
return time;
}
public void adjustTime( double adjust ) {
time += adjust;
}
public void setTime(double time) {
this.time = time;
}
public double getLat() {
return lat;
}
public void setLat(double lat) {
this.lat = lat;
}
public double getLon() {
return lon;
}
public void setLon(double lon) {
this.lon = lon;
}
public double getDepth() {
return depth;
}
public void setDepth(double depth) {
this.depth = depth;
}
public double getMagnitude() {
return magnitude;
}
public void setMagnitude(double magnitude) {
this.magnitude = magnitude;
}
public String toString() {
    StringBuffer sb = new StringBuffer( "Time = " );
    sb.append( time );
    sb.append( " lat = " + lat );
    sb.append( " lon = " + lon );
    sb.append( " depth = " + depth );
    sb.append( " mag = " + magnitude );
    return sb.toString();
}
}
package dodge.apps.loadCwbOld;
import dodge.apps.loadjma2.Jma2Observation;
import dodge.apps.loadjma2.loadjma2;
import java.io.BufferedInputStream;
import java.io.BufferedReader;
import java.io.File;
import java.io.FilenameFilter;
import java.io.InputStreamReader;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Types;
import java.util.Enumeration;
import java.util.Vector;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import llnl.gnem.util.TimeT;
public class loadCwbOld {
private Connection conn;
class ZIPFilter implements FilenameFilter {
public boolean accept(File dir, String name) {
return name.endsWith(".zip") || name.endsWith(".ZIP");
}
}
public Connection getConnection() throws SQLException{
final String connect_string =
"jdbc:oracle:thin:@localhost:1521:orcl";
DriverManager.registerDriver(new oracle.jdbc.OracleDriver());
return DriverManager.getConnection(connect_string, "dodge", "hp15c");
}
public void addEvent( CwbOldOrigin origin, Vector obs ) throws SQLException {
    int orid = -1;
    CallableStatement stmt = conn.prepareCall("{? = call add_new_origin(?, ?, ?, ?, ?, ?, ?)}");
stmt.setDouble(2, origin.getLat() );
stmt.setDouble(3, origin.getLon() );
stmt.setDouble(4, origin.getTime() );
stmt.setDouble(5, origin.getDepth() );
stmt.setDouble(6, origin.getMagnitude() );
stmt.setString(7, "CwbOld" );
stmt.setInt(8, orid );
stmt.registerOutParameter(1, Types.INTEGER );
stmt.registerOutParameter(8, Types.INTEGER );
stmt.execute();
int evid = stmt.getInt(1);
orid = stmt.getInt(8);
if( obs != null ){
for( int j = 0; j < obs.size(); ++j ){
CwbOldObservation jobs = (CwbOldObservation) obs.get(j);
CallableStatement cs = conn.prepareCall("{call addArrival(?, ?, ?, ?, ?, ?, ?, ?)}");
cs.setString(1, jobs.getSta() );
cs.setString(2, jobs.getPhase() );
double time = jobs.getTime();
cs.setDouble(3, time );
TimeT otime = new TimeT( time );
cs.setInt(4, otime.getJdate() );
cs.setString(5, jobs.getOnset() );
cs.setString(6, "CwbOld" );
cs.setInt(7, evid );
cs.setInt(8, orid );
cs.execute();
cs.close();
}
}
stmt.close();
}
public void processSingleEvent( ZipFile zipFile, ZipEntry entry ) throws Exception
{
boolean isFirstLine = true;
System.out.println( "\t" + entry.getName() );
BufferedInputStream is = new BufferedInputStream(zipFile.getInputStream(entry));
InputStreamReader isr = new InputStreamReader( is );
BufferedReader br= new BufferedReader( isr );
String str;
CwbOldOrigin coo = null;
Vector observations = new Vector();
while ((str = br.readLine()) != null) {
if( isFirstLine ){
try{
coo = new CwbOldOrigin( str );
}
catch(Exception e){
br.close();
isr.close();
is.close();
return;
}
isFirstLine = false;
}
else{
if( coo != null )
try{
observations.addAll( CwbOldObservation.parseObsLine( str, coo ) );
}
catch( Exception e ){
}
}
}
br.close();
isr.close();
is.close();
if( coo != null )
addEvent( coo, observations );
}
public void processSingleZipFile( String filename ) throws Exception {
File sourceZipFile = new File(filename);
System.out.println( "Processing " + filename + " ..." );
// Open Zip file for reading
ZipFile zipFile = new ZipFile(sourceZipFile, ZipFile.OPEN_READ);
// Create an enumeration of the entries in the zip file
Enumeration zipFileEntries = zipFile.entries();
// Process each entry
while (zipFileEntries.hasMoreElements()) {
// grab a zip file entry
ZipEntry entry = (ZipEntry) zipFileEntries.nextElement();
String currentEntry = entry.getName();
// extract file if not a directory
if (!entry.isDirectory()) {
processSingleEvent( zipFile, entry );
}
}
zipFile.close();
}
public void processDirectory(String directory) throws Exception {
File dir = new File(directory);
FilenameFilter filter = new ZIPFilter();
if (!dir.isDirectory())
throw new IllegalArgumentException("FileLister: no such directory");
String[] entries = dir.list( filter );
for(int i = 0; i < entries.length; i++){
processSingleZipFile(directory + "\\" + entries[i]);
}
}
/** Creates a new instance of loadCwbOld */
public loadCwbOld( Connection conn) {
this.conn = conn;
}
public static void main( String[] args ) {
try{
Connection conn = loadjma2.getConnection();
System.out.println( "Connected" );
loadCwbOld lco = new loadCwbOld( conn );
lco.processDirectory("C:\\dodge\\taiwan_3d\\CWB_OlderData" );
}
catch( Exception e ){
e.printStackTrace();
}
}
}
package dodge.apps.loadJma;
import dodge.apps.loadJma.JmaOrigin;
import java.util.StringTokenizer;
import java.util.Vector;
import llnl.gnem.util.TimeT;
public class JmaObservation {
private String sta;
private String phase;
private String onset;
private double time;
public JmaObservation( String sta, String phase, String onset, double time ) {
this.setSta(sta);
this.setPhase(phase);
this.setOnset(onset);
this.setTime(time);
}
public String toString() {
return getSta();
}
public void adjustTime( double adjust )
{
setTime(getTime() + adjust);
}
public static Vector parseObsLine( String str, JmaOrigin origin ) {
Vector result = new Vector();
StringTokenizer st = new StringTokenizer( str );
boolean hasTwo = st.countTokens() == 8;
String sta = st.nextToken();
String onset = "-";
String phase = "-";
String tmpPhase = st.nextToken();
if( tmpPhase.length() ==2 ){
onset = tmpPhase.substring(0,1);
phase = tmpPhase.substring(1,2);
}
else
phase = tmpPhase;
int hour = Integer.parseInt(st.nextToken() );
int min = Integer.parseInt(st.nextToken() );
double sec = Double.parseDouble(st.nextToken() );
TimeT otime = new TimeT( origin.getTime());
int otimeHour = otime.getHour();
int otimeMin = otime.getMinute();
double otimeSec = otime.getSecond();
double arrivalOffset = (hour - otimeHour) * 3600 + (min - otimeMin) * 60 + sec - otimeSec;
if( arrivalOffset < 0 )
    arrivalOffset += 86400;
JmaObservation obs = new JmaObservation( sta, phase, onset,
        otime.getEpochTime() + arrivalOffset );
result.add( obs );
if( hasTwo ){
    onset = "-";
    phase = "-";
    tmpPhase = st.nextToken();
    if( tmpPhase.length() ==2 ){
        onset = tmpPhase.substring(0,1);
        phase = tmpPhase.substring(1,2);
    }
    else
        phase = tmpPhase;
    min = Integer.parseInt(st.nextToken() );
    sec = Double.parseDouble(st.nextToken() );
    otime = new TimeT( origin.getTime());
    otimeHour = otime.getHour();
    otimeMin = otime.getMinute();
    otimeSec = otime.getSecond();
    arrivalOffset = (hour - otimeHour) * 3600 + (min - otimeMin) * 60 + sec - otimeSec;
    if( arrivalOffset < 0 )
        arrivalOffset += 86400;
    obs = new JmaObservation( sta, phase, onset,
            otime.getEpochTime() + arrivalOffset );
    result.add( obs );
}
return result;
}
public String getSta() {
    return sta;
}
public void setSta(String sta) {
this.sta = sta;
}
public String getPhase() {
return phase;
}
public void setPhase(String phase) {
this.phase = phase;
}
public String getOnset() {
return onset;
}
public void setOnset(String onset) {
this.onset = onset;
}
public double getTime() {
return time;
}
public void setTime(double time) {
this.time = time;
}
}
package dodge.apps.loadJma;
import java.util.StringTokenizer;
import llnl.gnem.util.TimeT;
public class JmaOrigin{
private double time;
private double lat;
private double lon;
private double depth;
private double magnitude;
public JmaOrigin( String str ) {
StringTokenizer st = new StringTokenizer( str );
int year = Integer.parseInt( st.nextToken());
int month = Integer.parseInt( st.nextToken());
int day = Integer.parseInt( st.nextToken());
int hour = Integer.parseInt( st.nextToken());
int minute = Integer.parseInt( st.nextToken());
double second = Double.parseDouble(st.nextToken() );
TimeT tmp = new TimeT(year, month, day, hour, minute, second );
setTime(tmp.getEpochTime() );
setLat(Double.parseDouble(st.nextToken() ));
setLon(Double.parseDouble(st.nextToken() ));
setDepth(Double.parseDouble(st.nextToken() ));
setMagnitude(Double.parseDouble(st.nextToken() ));
}
public double getTime() {
return time;
}
public void adjustTime( double adjust ) {
time += adjust;
}
public void setTime(double time) {
this.time = time;
}
public double getLat() {
return lat;
}
public void setLat(double lat) {
this.lat = lat;
}
public double getLon() {
return lon;
}
public void setLon(double lon) {
this.lon = lon;
}
public double getDepth() {
return depth;
}
public void setDepth(double depth) {
this.depth = depth;
}
public double getMagnitude() {
return magnitude;
}
public void setMagnitude(double magnitude) {
this.magnitude = magnitude;
}
public String toString() {
    StringBuffer sb = new StringBuffer( "Time = " );
    sb.append( time );
    sb.append( " lat = " + lat );
    sb.append( " lon = " + lon );
    return sb.toString();
}
}
package dodge.apps.loadJma;
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Types;
import java.util.Vector;
import llnl.gnem.util.TimeT;
public class loadJma {
private Vector events;
public loadJma( String filename ) {
boolean eventStart = true;
events = new Vector();
JmaEvent event = null;
try {
BufferedReader in = new BufferedReader(new FileReader(filename));
String str;
while ((str = in.readLine()) != null) {
if( str.trim().length() < 2 && event != null ){
events.add( event );
event.adjustTime( -9.0 * 3600 );
eventStart = true;
}
else if( eventStart ){
event = new JmaEvent( new JmaOrigin( str ) );
eventStart = false;
}
else
event.addObservations( str );
}
in.close();
System.out.println( events.size());
}
catch (IOException e) {
e.printStackTrace();
}
}
public Connection getConnection() throws SQLException{
final String connect_string =
"jdbc:oracle:thin:@localhost:1521:orcl";
DriverManager.registerDriver(new oracle.jdbc.OracleDriver());
return DriverManager.getConnection(connect_string, "dodge", "hp15c");
}
public Vector getEvents() {
return events;
}
public void addEvent( JmaEvent event, Connection conn ) throws SQLException {
    JmaOrigin origin = event.getOrigin();
    int orid = -1;
    CallableStatement stmt = conn.prepareCall("{? = call add_new_origin(?, ?, ?, ?, ?, ?, ?)}");
stmt.setDouble(2, origin.getLat() );
stmt.setDouble(3, origin.getLon() );
stmt.setDouble(4, origin.getTime() );
stmt.setDouble(5, origin.getDepth() );
stmt.setDouble(6, origin.getMagnitude() );
stmt.setString(7, "jma" );
stmt.setInt(8, orid );
stmt.registerOutParameter(1, Types.INTEGER );
stmt.registerOutParameter(8, Types.INTEGER );
stmt.execute();
int evid = stmt.getInt(1);
orid = stmt.getInt(8);
Vector obs = event.getObservations();
for( int j = 0; j < obs.size(); ++j ){
JmaObservation jobs = (JmaObservation) obs.get(j);
CallableStatement cs = conn.prepareCall("{call addArrival(?, ?, ?, ?, ?, ?, ?, ?)}");
cs.setString(1, jobs.getSta() );
cs.setString(2, jobs.getPhase() );
double time = jobs.getTime();
cs.setDouble(3, time );
TimeT otime = new TimeT( time );
cs.setInt(4, otime.getJdate() );
cs.setString(5, jobs.getOnset() );
cs.setString(6, "jma" );
cs.setInt(7, evid );
cs.setInt(8, orid );
cs.execute();
cs.close();
}
stmt.close();
}
public static void main( String[] args ) {
try{
loadJma jma = new loadJma("C:\\dodge\\taiwan_3d\\LudanDatafromJMA\\jma.dat" );
Connection conn = jma.getConnection();
Vector events = jma.getEvents();
for( int j = 0; j < events.size(); ++j ){
JmaEvent event = (JmaEvent) events.get(j);
jma.addEvent( event, conn );
}
conn.close();
}
catch( Exception e ) {
    e.printStackTrace();
}
}
class JmaEvent {
private JmaOrigin origin;
private Vector observations;
public JmaEvent( JmaOrigin origin ) {
this.origin = origin;
observations = new Vector();
}
public void addObservation( JmaObservation observation ) {
observations.add( observation );
}
public void addObservations( String obsLine ) {
Vector newObs = JmaObservation.parseObsLine( obsLine, origin );
observations.addAll(newObs);
}
public Vector getObservations() {
return observations;
}
public JmaOrigin getOrigin() {
return origin;
}
public int getNumObs() {
return observations.size();
}
public String toString() {
    return origin.toString() + " " + getNumObs();
}
public void adjustTime( double adjust ) {
origin.adjustTime( adjust );
for( int j = 0; j < observations.size(); ++j ){
JmaObservation obs = (JmaObservation) observations.get(j);
obs.adjustTime( adjust );
}
}
}
}
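loadJma shifts every event by -9 x 3600 s (event.adjustTime( -9.0 * 3600 )) because JMA bulletin times are given in Japan Standard Time (UTC+9), while the database stores epoch times in UTC. A one-line sketch of the conversion:

```java
public class JstToUtcDemo {
    // JMA bulletins are stamped in Japan Standard Time (UTC+9);
    // subtracting nine hours moves an epoch value onto the UTC timeline,
    // as loadJma does with event.adjustTime(-9.0 * 3600).
    static double jstToUtc(double jstEpochSeconds) {
        return jstEpochSeconds - 9.0 * 3600;
    }

    public static void main(String[] args) {
        // 1970-01-01 09:00:00 JST corresponds to the UTC epoch origin
        System.out.println(jstToUtc(32400.0)); // 0.0
    }
}
```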
package dodge.apps.loadjma2;
import java.util.Vector;
public class Jma2Event {
private Vector observations;
private Jma2Origin origin;
public Jma2Event( Jma2Origin origin ) {
this.origin = origin;
observations = new Vector();
}
public void addObservation( Jma2Observation observation ) {
observations.add( observation );
}
public void addObservations( String obsLine ) {
Vector newObs = Jma2Observation.parseObsLine( obsLine, origin );
observations.addAll(newObs);
}
public Vector getObservations() {
return observations;
}
public Jma2Origin getOrigin() {
return origin;
}
public int getNumObs() {
return observations.size();
}
public String toString() {
    return origin.toString() + " " + getNumObs();
}
public void adjustTime( double adjust ) {
origin.adjustTime( adjust );
for( int j = 0; j < observations.size(); ++j ){
Jma2Observation obs = (Jma2Observation) observations.get(j);
obs.adjustTime( adjust );
}
}
}
package dodge.apps.loadjma2;
import java.util.StringTokenizer;
import java.util.Vector;
import llnl.gnem.util.TimeT;
public class Jma2Observation {
private String sta;
private String phase;
private String onset;
private double time;
public Jma2Observation( String sta, String phase, String onset, double time ) {
this.setSta(sta);
this.setPhase(phase);
this.setOnset(onset);
this.setTime(time);
}
public String toString() {
return getSta();
}
public void adjustTime( double adjust )
{
setTime(getTime() + adjust);
}
public static Vector parseObsLine( String str, Jma2Origin origin ) {
System.out.println( str );
Vector result = new Vector();
String sta = str.substring(1,7);
String onset = "-";
String phase = "-";
String tmpPhase = str.substring(27,31).trim();
if( tmpPhase.length() > 1 ){
onset = tmpPhase.substring(0,1);
phase = tmpPhase.substring(1);
}
else
phase = tmpPhase;
String hourStr = str.substring(19,21);
String minStr = str.substring(21,23);
String secStr = str.substring(23,27);
if( hourStr.trim().length() < 1 || minStr.trim().length() < 1 ||
        secStr.trim().length() < 1 || phase.length() < 1 )
    return result;
int hour = Integer.parseInt( hourStr );
int min = Integer.parseInt( minStr );
double sec = Double.parseDouble( secStr ) / 100;
TimeT otime = new TimeT( origin.getTime());
int otimeHour = otime.getHour();
int otimeMin = otime.getMinute();
double otimeSec = otime.getSecond();
double arrivalOffset = (hour - otimeHour) * 3600 + (min - otimeMin) * 60 + sec - otimeSec;
if( arrivalOffset < 0 )
    arrivalOffset += 86400;
Jma2Observation obs = new Jma2Observation( sta, phase, onset,
        otime.getEpochTime() + arrivalOffset );
result.add( obs );
String phase2 = str.substring(27,31).trim();
String min2Str = str.substring(31,33);
String sec2Str = str.substring( 33,37);
if( phase2.trim().length() > 0 && min2Str.trim().length() > 0 &&
        sec2Str.trim().length() > 0 ){
    onset = "-";
    phase = "-";
    tmpPhase = phase2;
    if( tmpPhase.length() > 1 ){
        onset = tmpPhase.substring(0,1);
        phase = tmpPhase.substring(1);
    }
    else
        phase = tmpPhase;
    min = Integer.parseInt(min2Str );
    sec = Double.parseDouble(sec2Str ) / 100;
    otime = new TimeT( origin.getTime());
    otimeHour = otime.getHour();
    otimeMin = otime.getMinute();
    otimeSec = otime.getSecond();
    arrivalOffset = (hour - otimeHour) * 3600 + (min - otimeMin) * 60 + sec - otimeSec;
    if( arrivalOffset < 0 )
        arrivalOffset += 86400;
    obs = new Jma2Observation( sta, phase, onset,
            otime.getEpochTime() + arrivalOffset );
    result.add( obs );
}
return result;
}
public String getSta() {
return sta;
}
public void setSta(String sta) {
this.sta = sta;
}
public String getPhase() {
return phase;
}
public void setPhase(String phase) {
this.phase = phase;
}
public String getOnset() {
return onset;
}
public void setOnset(String onset) {
this.onset = onset;
}
public double getTime() {
return time;
}
public void setTime(double time) {
this.time = time;
}
}
package dodge.apps.loadjma2;
import java.util.StringTokenizer;
import llnl.gnem.util.TimeT;
public class Jma2Origin{
private double time;
private double lat;
private double lon;
private double depth;
private double magnitude;
public Jma2Origin( String str ) {
int year = Integer.parseInt( str.substring(1,5));
int month = Integer.parseInt( str.substring(5,7));
int day = Integer.parseInt( str.substring(7,9));
int hour = Integer.parseInt( str.substring(9,11));
int minute = Integer.parseInt( str.substring(11,13));
double second = Double.parseDouble(str.substring(13, 17) )/100;
TimeT tmp = new TimeT(year, month, day, hour, minute, second );
setTime(tmp.getEpochTime() );
setLat(Double.parseDouble(str.substring(21,24)) +
        Double.parseDouble(str.substring(24,28))/6000);
setLon(Double.parseDouble(str.substring(32,36)) +
        Double.parseDouble(str.substring(36, 40))/6000);
setDepth(Double.parseDouble(str.substring( 44, 49) ));
double mag = -999;
String mag1Str = str.substring( 52,54);
if( mag1Str.trim().length() > 0)
setMagnitude(Double.parseDouble( mag1Str ) / 10);
}
public double getTime() {
return time;
}
public void adjustTime( double adjust ) {
time += adjust;
}
public void setTime(double time) {
this.time = time;
}
public double getLat() {
return lat;
}
public void setLat(double lat) {
this.lat = lat;
}
public double getLon() {
return lon;
}
public void setLon(double lon) {
this.lon = lon;
}
public double getDepth() {
return depth;
}
public void setDepth(double depth) {
this.depth = depth;
}
public double getMagnitude() {
return magnitude;
}
public void setMagnitude(double magnitude) {
this.magnitude = magnitude;
}
public String toString() {
    StringBuffer sb = new StringBuffer( "Time = " );
    sb.append( time );
    sb.append( " lat = " + lat );
    sb.append( " lon = " + lon );
    return sb.toString();
}
}
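The loadjma2 reader that follows splits the fixed-format JMA file into events by the first character of each line: 'C' and 'M' lines are skipped, a 'J' line opens a new event, and a short 'E' line terminates it. A simplified, self-contained sketch of that grouping (the line contents below are made up):

```java
import java.util.ArrayList;
import java.util.List;

public class BulletinSplitDemo {
    // Group raw bulletin lines into per-event blocks using the same
    // leading-character convention as loadjma2.loadFile:
    // 'C'/'M' = comment, 'J' = origin line opening an event, 'E' = terminator.
    static List<List<String>> groupEvents(List<String> lines) {
        List<List<String>> events = new ArrayList<>();
        List<String> current = null;
        for (String line : lines) {
            char c = line.charAt(0);
            if (c == 'C' || c == 'M')
                continue;            // skip comment/metadata lines
            if (c == 'E') {
                current = null;      // event terminator
                continue;
            }
            if (c == 'J') {
                current = new ArrayList<>();
                events.add(current); // a 'J' line opens a new event
            }
            if (current != null)
                current.add(line);   // origin or observation line
        }
        return events;
    }

    public static void main(String[] args) {
        List<String> lines = List.of("C header", "J origin-1", "obs a", "obs b", "E",
                                     "J origin-2", "obs c", "E");
        List<List<String>> events = groupEvents(lines);
        System.out.println(events.size());        // 2
        System.out.println(events.get(0).size()); // 3 (origin line + two picks)
    }
}
```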
package dodge.apps.loadjma2;
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Types;
import java.util.Vector;
import llnl.gnem.util.TimeT;
public class loadjma2 {
private String filename;
private Vector events;
public loadjma2( String filename ) {
this.filename = filename;
}
public void loadFile(Connection conn) throws SQLException{
boolean eventStart = true;
events = new Vector();
Jma2Event event = null;
try {
System.out.println( filename );
BufferedReader in = new BufferedReader(new FileReader(filename));
String str;
while ((str = in.readLine()) != null) {
if( str.charAt(0) == 'C' || str.charAt(0) == 'M' )
continue;
if( str.charAt(0) == 'J' )
eventStart = true;
if( str.trim().length() < 2 && str.charAt(0) == 'E' && event != null ){
events.add( event );
event.adjustTime( -9.0 * 3600 );
}
else if( eventStart ){
event = new Jma2Event( new Jma2Origin( str ) );
eventStart = false;
}
else
event.addObservations( str );
}
in.close();
for( int j = 0; j < events.size(); ++j ){
event = (Jma2Event) events.get(j);
addEvent( event, conn );
}
conn.commit();
}
catch (IOException e) {
e.printStackTrace();
}
}
public static Connection getConnection() throws SQLException{
final String connect_string =
"jdbc:oracle:thin:@localhost:1521:orcl";
DriverManager.registerDriver(new oracle.jdbc.OracleDriver());
return DriverManager.getConnection(connect_string, "dodge", "hp15c");
}
public void addEvent( Jma2Event event, Connection conn ) throws SQLException {
Jma2Origin origin = event.getOrigin();
int orid = -1;
CallableStatement stmt = conn.prepareCall("{? = call add_new_origin(?, ?, ?, ?, ?, ?, ?)}");
stmt.setDouble(2, origin.getLat() );
stmt.setDouble(3, origin.getLon() );
stmt.setDouble(4, origin.getTime() );
stmt.setDouble(5, origin.getDepth() );
stmt.setDouble(6, origin.getMagnitude() );
stmt.setString(7, "jma2" );
stmt.setInt(8, orid );
stmt.registerOutParameter(1, Types.INTEGER );
stmt.registerOutParameter(8, Types.INTEGER );
stmt.execute();
int evid = stmt.getInt(1);
orid = stmt.getInt(8);
Vector obs = event.getObservations();
for( int j = 0; j < obs.size(); ++j ){
Jma2Observation jobs = (Jma2Observation) obs.get(j);
CallableStatement cs = conn.prepareCall("{call addArrival(?, ?, ?, ?, ?, ?, ?, ?)}");
cs.setString(1, jobs.getSta() );
cs.setString(2, jobs.getPhase() );
double time = jobs.getTime();
cs.setDouble(3, time );
TimeT otime = new TimeT( time );
cs.setInt(4, otime.getJdate() );
cs.setString(5, jobs.getOnset() );
cs.setString(6, "jma2" );
cs.setInt(7, evid );
cs.setInt(8, orid );
cs.execute();
cs.close();
}
stmt.close();
}
public static void main( String[] args ) {
try{
Connection conn = loadjma2.getConnection();
String[] files = {
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1975.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1976.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1977.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1978.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1979.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1980.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1981.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1982.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1983.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1984.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1985.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1986.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1987.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1988.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1989.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1990.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1991.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1992.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1993.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1994.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1995.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1996.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1997.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1998.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d1999.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d2000.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d2001.taiwan",
"C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\d200201_03.taiwan"};
for( int j = 0; j < files.length; ++j ){
loadjma2 lj = new loadjma2( files[j] );
lj.loadFile(conn);
}
}
catch(Exception e ) {
e.printStackTrace();
}
}
}
package dodge.apps.loadjma2;
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
public class loadSites {
/** Creates a new instance of loadSites */
public loadSites() {
}
public static void main( String[] args )
{
try {
BufferedReader in = new BufferedReader(new FileReader("C:\\dodge\\taiwan_3d\\JMAdata4TaiwanQuakes\\kstation"));
String str;
while ((str = in.readLine()) != null) {
String sta = str.substring(0,6);
String londeg = str.substring(6,9);
String lonmin = str.substring( 9, 13 );
String latdeg = str.substring( 13, 15 );
String latmin = str.substring( 15, 19 );
double lat = Integer.parseInt(latdeg) + Double.parseDouble(latmin)/100/60.0;
double lon = Integer.parseInt(londeg) + Double.parseDouble(lonmin)/100/60.0;
StringBuffer sql = new StringBuffer( "insert into Site select siteid.nextval, '" );
sql.append( sta + "', " );
sql.append( lat + ", " );
sql.append( lon + ", 0, -1, 9999999, 'jma2' from dual;" );
System.out.println( sql );
}
in.close();
}
catch (IOException e) {
e.printStackTrace();
}
}
}
package dodge.apps.tovelest;
import java.io.IOException;
import java.util.StringTokenizer;
import llnl.gnem.util.FileInputArrayLoader;
public class MakeStaFile {
public static void toVelestSta() throws IOException
{
String[] strings = FileInputArrayLoader.fillStringsFromFile("C:\\dodge\\taiwan_3d\\velest\\velest.sta.tmp");
for( int j = 0; j < strings.length; ++j ){
StringTokenizer st = new StringTokenizer( strings[j] );
String sta = (String) st.nextElement();
if( sta.length() > 4 )
sta = sta.substring(0,4);
String slat = (String) st.nextElement();
String slon = (String) st.nextElement();
String selev = (String) st.nextElement();
double lat = Double.parseDouble(slat);
double lon = Double.parseDouble(slon);
double elev = Double.parseDouble(selev) * 1000;
System.out.printf("%-4s%7.4fN %8.4fE %4d 1 %3d %5.2f %5.2f\n", sta, lat, lon, (int)elev, j, 0.0, 0.0 );
}
}
/** Creates a new instance of MakeStaFile */
public MakeStaFile() {
}
public static void main( String[] args )
{
try{
MakeStaFile.toVelestSta();
}
catch( Exception ex )
{
ex.printStackTrace();
}
}
}
package dodge.apps.tovelest;
public class ToVelest {
/** Creates a new instance of ToVelest */
public ToVelest() {
}
/*
public static void getEvents( double gtlevel, Connection conn ) throws Exception {
Set<String> stations = new HashSet<String>();
String sql = "select a.evid, b.lat, b.lon, b.depth, b.time, b.mw from potential_gt a, origin b where " +
"gtlevel = " + gtlevel + " and a.evid = b.evid and auth like 'Cwb%'";
Vector columns = new Vector();
columns.add( "Evid" );
columns.add( "Lat" );
columns.add( "Lon" );
columns.add( "Depth" );
columns.add( "Time" );
columns.add( "Mw" );
Vector result = Database.getColumnSetVectorQueryResult(sql, columns, conn, false );
for( int j = 0; j < result.size(); ++j ){
System.out.printf( "\n" );
ColumnSet cs = (ColumnSet) result.get(j);
int evid = cs.getValue( "Evid" ).intValue();
double lat = cs.getValue( "Lat" ).doubleValue();
double lon = cs.getValue( "Lon" ).doubleValue();
double depth = cs.getValue( "Depth" ).doubleValue();
double time = cs.getValue( "Time" ).doubleValue();
double mw = cs.getValue( "Mw" ).doubleValue();
TimeT tmp = new TimeT( time );
int year = tmp.getYear();
if( year >= 2000 )
year -= 2000;
else
year -= 1900;
int month = tmp.getMonth();
int dom = tmp.getDayOfMonth();
int hour = tmp.getHour();
int minute = tmp.getMin();
double second = tmp.getSecond();
double latmin = (lat - (int)lat) * 60;
double lonmin = (lon - (int)lon) * 60;
System.out.printf(
" %02d%02d%02d %02d%02d %5.2f %02d %5.2f %3d %5.2f%7.1f%6.1f%7d\n",
year,month,dom,hour,minute,second, (int)lat, latmin, (int) lon,
lonmin,depth, mw,evid/100 );
// System.out.println( "" + evid + "," + lat + ", " + lon + ", " + depth + ", " + time + ", " + mw );
Vector arrivals = getArrivals( evid, lat, lon, conn );
int count = 0;
for( int k = 0; k < arrivals.size(); ++k ){
ColumnSet cs2 = (ColumnSet) arrivals.get(k);
String sta = cs2.getValue( "Sta" ).toString();
if( sta.length() > 4 )
sta = sta.substring(0,4 );
String phase = cs2.getValue( "Iphase" ).toString().toLowerCase();
double atime = cs2.getValue( "Time" ).doubleValue() - time;
// System.out.println( "\t" + sta + " " + phase + " " + atime );
if( atime < 100 && atime > 0 ){
stations.add( sta );
System.out.printf( "%-4s%1s0%7.4f ", sta, phase.substring(0,1), atime );
++count;
if( count > 5 ){
System.out.printf( "\n" );
count = 0;
}
}
}
if( count > 0 )
System.out.printf( "\n" );
}
Iterator it = stations.iterator();
while( it.hasNext() ){
String sta = (String)it.next();
sql = "insert into tmpSta values('" + sta + "')";
Database.ExecuteDML(conn, sql,false);
}
}
public static void floober( Connection conn ) throws DatabaseException
{
StringBuffer sql = new StringBuffer( "select b.sta, avg(lat) lat, avg(lon) lon, avg(elev) elev from tmpsta a, Site b where a.sta = b.sta group by b.sta");
Vector columns = new Vector();
columns.add( "Sta" );
columns.add( "Lat" );
columns.add( "Lon" );
columns.add( "Elev" );
Vector result = Database.getColumnSetVectorQueryResult(sql.toString(), columns, conn, false );
for( int j = 0; j < result.size(); ++j ){
ColumnSet c = (ColumnSet) result.get(j);
String sta = c.getValue("Sta").toString();
double lat = c.getValue("Lat").doubleValue();
double lon = c.getValue("Lon" ).doubleValue();
double elev = c.getValue("Elev" ).doubleValue() * 1000;
System.out.printf("%-4s%7.4fN %8.4fW %4d 1 %3d %5.2f %5.2f\n", sta, lat, lon, (int)elev, j, 0.0, 0.0 );
}
}
public static Vector getArrivals( int evid, double elat, double elon, Connection conn ) throws DatabaseException{
String sql = "select b.sta, b.iphase, avg(b.time) time from event_arrival_assoc a, arrival b, Site c where a.evid = " + evid +
" and a.arid = b.arid and a.siteid = c.siteid and iphase in ( 'P','S' ) and distance( " + elat + ", " + elon + ", c.lat, c.lon) < 650 group by b.sta, iphase order by b.sta, iphase";
Vector columns = new Vector();
columns.add( "Sta" );
columns.add( "Iphase" );
columns.add( "Time" );
return Database.getColumnSetVectorQueryResult(sql, columns, conn, false );
}
public static void main( String[] args ) {
try{
Connection conn = ConnectionManager.getInstance( "dodge", "hp15c" ).getConnection();
ToVelest.getEvents( 20.0, conn );
ToVelest.floober(conn );
} catch( Exception e ) {
e.printStackTrace();
}
}
*/
}
Code to Compare spectral response of co-located seismometers
using earthquake seismograms
An experiment was conducted in which accelerometers from several different
manufacturers were co-located at the same site for a period of time long enough to
record a number of strong-motion events. This provided a way to directly compare the
response of the instruments. The report on this experiment required a number of plots
showing the comparisons. The code in this section was used to do pairwise comparisons
of all the instrument-channels over all the earthquakes in common.
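The bookkeeping behind these comparisons — accumulating a coherence sum and event count for every channel pair, then reporting the average — can be sketched in Java. All names here are illustrative, not taken from the Matlab sources below:

```java
// Sketch of the pairwise-coherence bookkeeping used by the comparison code.
// Class and method names are hypothetical.
public class CoherenceAccumulator {
    private final double[][] sum = new double[3][3]; // per channel-pair coherence sums
    private final int[][] count = new int[3][3];     // number of events contributing

    // Record one event's coherence estimate for (channel1, channel2); channels are 1-based.
    public void add(int channel1, int channel2, double coherence) {
        sum[channel1 - 1][channel2 - 1] += coherence;
        count[channel1 - 1][channel2 - 1]++;
    }

    // Average coherence over all contributing events, or NaN if none were recorded.
    public double average(int channel1, int channel2) {
        int n = count[channel1 - 1][channel2 - 1];
        return n > 0 ? sum[channel1 - 1][channel2 - 1] / n : Double.NaN;
    }
}
```

The Matlab code below keeps the same running sums in the Instruments structure (AddCorrelationData) and reports the averages in OutputMatches.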
function OutputMatches( Instruments, idx, ThisName, SimilarityThreshold, frequency,
SavePlot2File )
N = length(Instruments(idx).Matches);
for j = 1 : N
for k = 1 : 3
for m = 1 : 3
if Instruments(idx).Matches(j).NumCorrelations(k,m) > 0
C = Instruments(idx).Matches(j).Correlations(k,m) / Instruments(idx).Matches(j).NumCorrelations(k,m);
if C > SimilarityThreshold
str = sprintf( '\t\t%s ( channel %d to channel %d ) Average coherence = %4.2f', Instruments(idx).Matches(j).Name, k,m, C );
disp(str )
NC = Instruments(idx).Matches(j).NumCorrelations(k,m);
averageCoherence = Instruments(idx).Matches(j).CoherenceSum{k,m} / NC;
if ~isempty( averageCoherence ) & ( max( averageCoherence) > min(averageCoherence) + .1 )
plot( frequency, averageCoherence )
str = sprintf( 'Average coherence of Station %s channel %d to station %s channel %d over %d events ', ThisName, k, Instruments(idx).Matches(j).Name, m, NC );
title( str )
ylabel('Coherence' )
xlabel( 'Frequency (Hz)' )
set(gca,'xscale','log' )
set(gcf, 'paperposition',[.25,.5,8,10])
if SavePlot2File
str = sprintf( 'print -djpeg %s_ch-%d_to_%s_ch-%d_%d-events', ThisName, k, Instruments(idx).Matches(j).Name, m, NC );
eval( str );
end
end
end
end
end
end
end
function Instruments = AddCorrelationData( Instruments, idx, idx2, channel1, channel2, coherence, coherenceVector )
if ~isfield( Instruments(idx).Matches(idx2),'Correlations' )
Instruments(idx).Matches(idx2).Correlations = zeros(3);
Instruments(idx).Matches(idx2).NumCorrelations = zeros(3);
Instruments(idx).Matches(idx2).CoherenceSum = cell(3,3);
end;
if isempty( Instruments(idx).Matches(idx2).Correlations )
Instruments(idx).Matches(idx2).Correlations = zeros(3);
Instruments(idx).Matches(idx2).NumCorrelations = zeros(3);
Instruments(idx).Matches(idx2).CoherenceSum = cell(3,3);
end;
Instruments(idx).Matches(idx2).Correlations(channel1, channel2) = Instruments(idx).Matches(idx2).Correlations(channel1, channel2) + coherence;
Instruments(idx).Matches(idx2).NumCorrelations(channel1, channel2) = Instruments(idx).Matches(idx2).NumCorrelations(channel1, channel2) + 1;
coherenceSum = Instruments(idx).Matches(idx2).CoherenceSum{channel1, channel2};
str = sprintf( '%d %d %d %d', idx, idx2, channel1, channel2 );
disp(str)
if isempty( coherenceVector )
return
end
if isempty( coherenceSum )
coherenceSum = coherenceVector;
else
coherenceSum = coherenceSum + coherenceVector;
end
Instruments(idx).Matches(idx2).CoherenceSum{channel1, channel2} = coherenceSum;
function [Instruments, idx] = addInstrument( Instruments, instrumentName )
L = length( Instruments );
idx = L + 1;
Instruments(idx).Name = instrumentName;
function [Instruments, idx2] = addMatchingInstrument( Instruments, idx, instrumentName )
if ~isfield(Instruments(idx),'Matches' )
L = 0;
else
L = length( Instruments(idx).Matches );
end;
idx2 = L + 1;
if idx2 == 1
Instruments(idx).Matches.Name = instrumentName;
else
Instruments(idx).Matches(idx2).Name = instrumentName;
end;
% A Program to read a file containing the names of Suds files that have
% recorded the same event and build a correlation matrix comparing all
% possible pairs.
%
% For each Event we read in every file recording that event, clip the data
% buffer to start at PrePSeconds before the (already picked) P-arrival and
% extending for BufferLength seconds. For every resulting pair of files we
% call a function that calculates the coherence between the two buffers and
% returns the average of the coherence between MinFrequency and MaxFrequency.
% These will be used to construct a matrix containing the average coherence
% between every possible station-channel pair.
PrePSeconds = 1;
BufferLength = 20;
MinFrequency = 1.0;
MaxFrequency = 5.0;
SimilarityThreshold = 0.75;
SavePlot2File = 0;
SaveFinalPlotFiles = 1;
%
Instruments = [];
Data = [];
fid=fopen('correlation.driver.txt');
fgetl(fid);
while 1
j = 0;
tline = fgetl(fid);
if ~ischar(tline), break, end
[token,remainder] = strtok( tline );
[EventTime,remainder] = strtok( remainder );
% For this event get all data from instruments that recorded the event...
% This information goes into the 'Data' structure
[remainder, Data, j] = getInstrumentData( remainder, Data, j, 'A900Perm', PrePSeconds, BufferLength );
[remainder, Data, j] = getInstrumentData( remainder, Data, j, 'A900Temp', PrePSeconds, BufferLength );
[remainder, Data, j] = getInstrumentData( remainder, Data, j, 'K2', PrePSeconds, BufferLength );
[remainder, Data, j] = getInstrumentData( remainder, Data, j, 'Reftek', PrePSeconds, BufferLength );
[remainder, Data, j] = getInstrumentData( remainder, Data, j, 'TS575', PrePSeconds, BufferLength );
[remainder, Data, j] = getInstrumentData( remainder, Data, j, 'TSG3', PrePSeconds, BufferLength );
% Cross correlate all combinations and add as appropriate to the 'Instruments' structure
[Instruments, F] = doEventCorrelations( Data, MinFrequency, MaxFrequency, Instruments, SimilarityThreshold, EventTime, SavePlot2File );
end
fclose(fid);
clf
for j = 1 : length( Instruments )
str = sprintf( 'For instrument %s the following matches have been determined:', Instruments(j).Name );
disp( str );
OutputMatches( Instruments, j, Instruments(j).Name, SimilarityThreshold, F, SaveFinalPlotFiles );
end
function [C, D, F] = doCorrelation( Data, MinFrequency, MaxFrequency, EventTime, SavePlot )
% Compute all pair-wise coherence estimates between seismograms in the
% Data structure. Return the result in the Coherence matrix C.
% Return raw coherence in D. Upper half only.
% For these data restricting the array length to 3200 ensures that all pairs are the same length.
L = 3200;
WindowLength = 256;
N = length(Data);
for j = 1 : N
for k = j : N
d1 = Data(j);
d2 = Data(k);
f1 = d1.samprate;
f2 = d2.samprate;
if f1 ~= f2, continue; end;
% if f1 and f2 not equal, skip computation.
% Calculate coherence/frequency for data 1 vs data 2
[cxyraw, F] = cohere( d1.data(1:L), d2.data(1:L), WindowLength, f1 );
M = length(cxyraw);
Time = linspace(0, (L-1)/f1, L);
cxy = smooth(cxyraw, 2 );
if j~=k & ~strcmp(Data(j).instrument, Data(k).instrument) & SavePlot
subplot(3,1,1)
plot(Time, d1.data(1:L))
str = sprintf('For Event at time %s Instrument %s, channel %d ', EventTime, Data(j).instrument, Data(j).channel);
title(str);
ylabel('Amplitude (counts)');
xlabel('Time (sec)');
subplot(3,1,2)
plot(Time, d2.data(1:L))
str = sprintf('For Event at time %s Instrument %s, channel %d ', EventTime, Data(k).instrument, Data(k).channel);
title(str);
ylabel('Amplitude (counts)');
xlabel('Time (sec)');
subplot(3,1,3)
str = sprintf('For Event at time %s Comparison of %s, channel %d to %s, channel %d', EventTime, Data(j).instrument, Data(j).channel, Data(k).instrument, Data(k).channel);
plot(F, cxy);
title(str);
set(gca, 'xscale', 'log');
xlabel('Frequency (Hz)');
ylabel('Coherence');
set(gcf,'paperposition',[1,.5,7,10]);
str = sprintf('print -djpeg %s_%s-%d_%s-%d', EventTime, Data(j).instrument, Data(j).channel, Data(k).instrument, Data(k).channel);
eval(str);
end;
[idx1, idx2] = getFreqLimits( F, MinFrequency, MaxFrequency );
C(j,k) = mean( cxy(idx1: idx2 ) );
C(k,j) = C(j,k);
D{j, k} = cxyraw;
end
end
function c = smooth( c, ns )
% Applies a ns points smoothing operator to vector c
M = length(c);
for j = ns + 1 : M - ns
c(j) = mean( c(j-ns:j+ns) );
end
function [idx1, idx2] = getFreqLimits( F, MinFrequency, MaxFrequency )
% Gets the indices into the frequency array based on the supplied frequency limits.
Z = abs( F-MinFrequency );
[Y,I] = sort(Z);
idx1 = I(1);
Z = abs( F-MaxFrequency );
[Y,I] = sort(Z);
idx2 = I(1);
function [Instruments, F] = doEventCorrelations( Data, MinFrequency, MaxFrequency, Instruments, SimilarityThreshold, EventTime, SavePlot2File )
% All recordings for this event have been read and stored in the 'Data' structure.
% Now go through all possible pairings and compute the coherence between pairs.
% Add results to the Instrument structure.
[C, D, F] = doCorrelation( Data, MinFrequency, MaxFrequency, EventTime, SavePlot2File );
for j = 1 : length( Data )
idx = getInstrumentIndex( Instruments, Data(j).instrument );
if idx < 1
[Instruments, idx] = addInstrument( Instruments, Data(j).instrument );
end;
for k = 1 : length(Data )
idx2 = getMatchingInstrumentIndex( Instruments(idx), Data(k).instrument );
if idx2 < 1
[Instruments, idx2] = addMatchingInstrument( Instruments, idx, Data(k).instrument );
end
Instruments = AddCorrelationData( Instruments, idx, idx2, Data(j).channel, Data(k).channel, C(j,k), D{j,k} );
end
end
function [data, samprate] = getDataBuffer( fname, PrePSeconds, BufferLength )
% Read the file (assumed to have a P-pick set), get the pick time and use that
% to cut the file from PrePSeconds in front of P to a length of BufferLength.
% If BufferLength points are not available, then buffer goes to end of traces.
[ waveforms, stations, origins, picks ] = readsuds(fname);
picktime = picks.Time - waveforms(1).Time;
samprate = waveforms(1).Rate;
idx1 = round( (picktime - PrePSeconds) * samprate ) + 1; % start PrePSeconds before the pick, per the comment above
idx2 = round( (picktime + BufferLength ) * samprate ) + 1;
[nchannels,m] = size( waveforms );
for l = 1 : nchannels
D = waveforms(l).Data;
D = D - mean(D);
if idx2 > length(D)
idx2 = length(D);
end;
tmp = D(idx1:idx2);
data(:,l) = tmp;
end
function [remainder, Data, j] = getInstrumentData( remainder, Data, j, label, PrePSeconds, BufferLength )
% For the current instrument token in the control file, open the suds file,
% get the relevant data and add it to the Data structure.
[instname,remainder] = strtok( remainder );
if ~strcmp( instname, '-' )
[data, samprate] = getDataBuffer( instname, PrePSeconds, BufferLength );
if strcmp(label, 'TSG3' )
data= diff(data);
end
[rows,cols] = size(data);
for m = 1 : cols
j = j + 1;
Data(j).instrument = label;
Data(j).channel = m;
Data(j).data = data(:,m);
Data(j).samprate = samprate;
end
end
function idx = getInstrumentIndex( Instruments, instrumentName )
idx = 0;
if isempty( Instruments )
return;
else
for j = 1 : length( Instruments )
if strcmp( Instruments(j).Name, instrumentName )
idx = j;
return;
end;
end;
end;
function idx2 = getMatchingInstrumentIndex( Instrument, instrumentName )
idx2 = 0;
if ~isfield( Instrument, 'Matches' )
return;
else
for j = 1 : length( Instrument.Matches )
if strcmp( Instrument.Matches(j).Name, instrumentName )
idx2 = j;
return;
end;
end;
end;
Code to plot comparison of step responses of different
seismometers
As part of a series of seismometer acceptance tests, it was required to compare an
LVDT-type velocity seismometer to accelerometers currently in use by CWB. Testing
was done using an apparatus that supplies a step function to sensors bolted to the
device. During the testing, data were collected continuously into SUDS files that were
analyzed later. Doug Dodge was asked to supply a Matlab code that would produce
comparison plots of the seismometer outputs.
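The script below differentiates the LVDT velocity trace so it can be compared directly with the accelerometer records. That step can be sketched as a first difference scaled by the sample rate (a minimal illustration; the Matlab script applies diff to the raw counts and its own gain constants separately):

```java
// Minimal sketch: first-difference differentiation of an evenly sampled
// velocity trace, giving an acceleration estimate. Names are illustrative.
public class Differentiate {
    // Returns d[i] = (v[i+1] - v[i]) * sampleRate, length n-1,
    // i.e. MATLAB's diff(v) scaled by 1/dt.
    public static double[] diff(double[] v, double sampleRate) {
        double[] d = new double[v.length - 1];
        for (int i = 0; i < d.length; i++) {
            d[i] = (v[i + 1] - v[i]) * sampleRate;
        }
        return d;
    }
}
```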
[ waveforms_lvdt, stations_lvdt, origins_lvdt, picks_lvdt ] = readsuds( 'lvdt_acc.dmx' );
[ waveforms_ref, stations_ref, origins_ref, picks_ref ] = readsuds( 'reftek_v_acc.dmx' );
[ waveforms_geo, stations_geo, origins_geo, picks_geo ] = readsuds( 'geotech_v_acc.dmx' );
N = length(waveforms_lvdt.Data);
f = waveforms_lvdt.Rate;
T = linspace( 0, (N-1) / f, N );
lvdt = waveforms_lvdt.Data;
lvdt = lvdt / 1565925;
reftek = waveforms_ref.Data;
reftek = -reftek;
reftek = reftek / 2497.81;
geotech = waveforms_geo.Data;
geotech = geotech / 3113.52;
subplot( 3,1,1)
plot(T,lvdt, T, reftek, T, geotech );
legend( 'LVDT diff to acc', 'Reftek', 'Geotech' );
set(gca,'xlim',[20, 22] )
title('Time-domain (2-sec) comparison of differentiated LVDT data to Reftek and Geotech accelerograms')
xlabel( 'Time (s)')
ylabel('Acceleration (cm/s^2)')
subplot(3,1,2)
WindowLength = 512;
[coh, F] = cohere( lvdt, reftek, WindowLength, f );
plot( F, coh,'g' );
set( gca, 'xscale','log','yscale','log')
xlabel( 'Frequency (Hz)' );
ylabel('Coherence');
set(gca,'ylim',[0.001, 1.1])
title( 'Coherence of differentiated LVDT data with Reftek measurement')
subplot(3,1,3)
[coh, F] = cohere( lvdt, geotech, WindowLength, f );
plot( F, coh,'r' );
set( gca, 'xscale','log','yscale','log')
xlabel( 'Frequency (Hz)' );
ylabel('Coherence');
set(gca,'ylim',[0.001, 1.1])
title( 'Coherence of differentiated LVDT data with Geotech measurement')
set(gcf,'paperunits','inches')
set(gcf,'units','inches')
set(gcf, 'position',[.5,.5,7.5,10])
set(gcf, 'paperposition',[.5,.5,7.5,10])
print -dill comparison;
Code to plot Acceleration Spectra from shake table tests
As part of the seismometer acceptance test process, sample seismometers are bolted
to a shake table and a vibratory input is supplied. Analysis of the resulting
seismograms provides insight into the instrument characteristics. Doug Dodge was
asked to provide a Matlab code that produces plots showing the recorded time series
along with the power spectrum estimate.
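The spectrum panels below are produced with Matlab's pwelch. As a minimal illustration of what such an estimate contains, a direct DFT magnitude spectrum of the mean-removed record can be sketched in Java (this is a plain periodogram, not Welch's averaged method):

```java
// Plain periodogram sketch: remove the mean, then compute the DFT magnitude
// at each frequency bin from 0 to the Nyquist bin. Illustrative only.
public class Periodogram {
    public static double[] magnitudeSpectrum(double[] data) {
        int n = data.length;
        double mean = 0;
        for (double d : data) mean += d;
        mean /= n;
        double[] mag = new double[n / 2 + 1]; // bins 0 .. Nyquist
        for (int k = 0; k < mag.length; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double x = data[t] - mean;   // mean-removed sample
                double w = -2.0 * Math.PI * k * t / n;
                re += x * Math.cos(w);
                im += x * Math.sin(w);
            }
            mag[k] = Math.hypot(re, im);
        }
        return mag;
    }
}
```

A pure sine at an exact bin frequency peaks at that bin; pwelch adds windowing and segment averaging on top of this idea to reduce variance.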
[ waveforms, stations, origins, picks ] = readsuds('gmt1_1n2.sud' );
for j = 1 : length( waveforms )
clf
set(gcf,'paperunits','inches')
set(gcf,'units','inches')
set(gcf, 'position',[.5,.5,7.5,10])
set(gcf, 'paperposition',[.5,.5,7.5,10])
waveform = waveforms(j);
rate = waveform.Rate;
dt = 1 / rate;
data = waveform.Data;
data = data - mean( data );
N = length( data );
time = linspace( 0, (N-1) * dt, N );
subplot( 2,1,1)
plot( time, data );
xlabel( 'Seconds from file start' );
title( ['Time Series: Sta = ' waveform.Sta ' Chan = ' waveform.Chan ] )
set(gca,'position',[.13,.7,.775,.230])
subplot(2,1,2)
[s,f] = pwelch(data,N,1,[],rate );
plot( f, s)
set(gca,'xscale','log', 'yscale','log')
set(gca,'position',[.13,.11,.775,.458])
% grid
set(gca,'ylim',[1, 100])
% (3) How to control the range of the X-axis,
% e.g., how to change the lower X-limit from 10**-2 to 10**-1.
set(gca,'xlim',[.01, 0.1])
title( ['Amplitude Spectrum: Sta = ' waveform.Sta ' Chan = ' waveform.Chan ] )
ylabel('Amplitude');
xlabel('Frequency (Hz)');
cmd = sprintf( 'print -dill %s_%s', waveform.Sta, waveform.Chan );
eval( cmd )
end
[ waveforms, stations, origins, picks ] = readsuds('Tst1_1N2.dmx' );
for j = 1 : length( waveforms )
clf
set(gcf,'paperunits','inches')
set(gcf,'units','inches')
set(gcf, 'position',[.5,.5,7.5,10])
set(gcf, 'paperposition',[.5,.5,7.5,10])
waveform = waveforms(j);
rate = waveform.Rate;
dt = 1 / rate;
data = waveform.Data;
data = data - mean( data );
N = length( data );
time = linspace( 0, (N-1) * dt, N );
[s,f] = pwelch(data,N,1,[],rate );
plot( f, s)
set(gca,'xscale','log', 'yscale','log')
set(gca,'ylim',[1.0e-4, 1.0e+9])
grid
set(gca, 'xminorgrid','off', 'yminorgrid','off' )
title( ['Unsmoothed Amplitude Spectrum: Sta = ' waveform.Sta ' Chan = ' waveform.Chan ] )
ylabel('Amplitude');
xlabel('Frequency (Hz)');
cmd = sprintf( 'print -dill %s_%s', waveform.Sta, waveform.Chan );
eval( cmd )
end
[ waveforms, stations, origins, picks ] = readsuds('Tst1_1N21.cut' );
for j = 1 : length( waveforms )
clf
set(gcf,'paperunits','inches')
set(gcf,'units','inches')
set(gcf, 'position',[.5,.5,7.5,10])
set(gcf, 'paperposition',[.5,.5,7.5,10])
waveform = waveforms(j);
rate = waveform.Rate;
dt = 1 / rate;
data = waveform.Data;
data = data - mean( data );
N = length( data );
time = linspace( 0, (N-1) * dt, N );
subplot( 2,1,1)
plot( time, data );
xlabel( 'Seconds from file start' );
title( ['Time Series: Sta = ' waveform.Sta ' Chan = ' waveform.Chan ] )
set(gca,'position',[.13,.7,.775,.230])
subplot(2,1,2)
[s,f] = pwelch(data,N,1,[],rate );
plot( f, s)
set(gca,'xscale','log', 'yscale','log')
set(gca,'position',[.13,.11,.775,.458])
grid
title( ['Amplitude Spectrum: Sta = ' waveform.Sta ' Chan = ' waveform.Chan ] )
ylabel('Amplitude');
xlabel('Frequency (Hz)');
cmd = sprintf( 'print -dill %s_%s', waveform.Sta, waveform.Chan );
eval( cmd )
end
Code to Plot Sumatra quake and aftershocks on bathymetry
Soon after the Sumatra earthquake and resulting tsunami, there was great interest in
developing a system that could provide early warning for tsunamis that might threaten
the coastal areas of Taiwan. For one of the proposals, it was desired to have a plot
showing the source region of the Sumatra earthquake along with the main shock
epicenter and the epicenters of the larger aftershocks. This was to be used to show the
region that would need to be modeled in order to reproduce the sea floor displacement
as a first step in modeling the entire process. I was asked to produce the plot using
Matlab.
load epi.txt
latlim = [0 15];
lonlim = [90 100];
[latgrat,longrat,map] = SATBATH(1, latlim,lonlim );
worldmap('hi', latlim, lonlim );
surfm(latgrat,longrat,map,map)
demcmap(map); daspectm('m',30)
%camlight(-80,0); lighting phong; material([.3 5 0])
h1=scatterm( 3.09, 94.26,400,'y','o','filled');
set(h1,'marker','pentagram', 'edgecolor','k')
scatterm( epi(:,1), epi(:,2), epi(:,4), 'r', 'filled')
axis square
hc = colorbar
scaleruler( 'units','km');
setm(handlem('scaleruler1'),'color','w')
setm(handlem('scaleruler1'),'yloc',.01)
h(1) = linem( [-1 0], [-1 0]);
set(h(1), 'marker','o','linestyle','none', 'markersize',2, 'MarkerFaceColor','r','MarkerEdgeColor','r');
h(2) = linem( [-1 0], [-1 0]);
set(h(2), 'marker','o','linestyle','none', 'markersize',4, 'MarkerFaceColor','r','MarkerEdgeColor','r');
h(3) = linem( [-1 0], [-1 0]);
set(h(3), 'marker','o','linestyle','none', 'markersize',5, 'MarkerFaceColor','r','MarkerEdgeColor','r');
h(4) = linem( [-1 0], [-1 0]);
set(h(4), 'marker','o','linestyle','none', 'markersize',6, 'MarkerFaceColor','r','MarkerEdgeColor','r');
h(5) = linem( [-1 0], [-1 0]);
set(h(5), 'marker','o','linestyle','none', 'markersize',8, 'MarkerFaceColor','r','MarkerEdgeColor','r');
h(6) = linem( [-1 0], [-1 0]);
set(h(6),'marker','pentagram','linestyle','none', 'markersize',80, 'MarkerFaceColor','y','MarkerEdgeColor','k');
[legh,objh,outh,outm] = legend(h, '4.0 - 4.9','5.0 - 5.9','6.0 - 6.9','7.0 - 7.9','8.0 - 8.9','9.0 - 9.9' );
Code to plot oriented focal mechanisms on bathymetry
A project was proposed to create a tsunami warning system for the western Pacific
region. One component of the project was to identify the likely source regions and the
most probable mechanisms with their associated uncertainties. This information would
be used as input to a code that can calculate probable wave height as a function of
position and source characteristics. As part of this effort, Doug Dodge was asked to
provide a Matlab code that could plot Harvard CMT solutions on bathymetry over
selected regions of the western Pacific.
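The beachball routine below builds the focal-sphere patches by projecting rotated fault-plane points onto an equal-area lower hemisphere (see GetProjection). The projection itself can be sketched as follows, assuming azimuth measured clockwise from north and inclination measured from the downward vertical (a standalone illustration, not code from the report):

```java
// Equal-area (Schmidt net) lower-hemisphere projection sketch. A direction with
// azimuth az (radians, clockwise from north) and inclination inc (radians from
// the downward vertical) maps to a point inside the unit circle.
public class EqualArea {
    // Returns {x, y} with x east and y north; r = sqrt(2) * sin(inc / 2).
    public static double[] project(double az, double inc) {
        double r = Math.sqrt(2.0) * Math.sin(inc / 2.0);
        return new double[] { r * Math.sin(az), r * Math.cos(az) };
    }
}
```

With r = sqrt(2)·sin(inc/2), a horizontal direction (inc = 90°) lands exactly on the unit circle and a vertical one at the origin, which keeps the projected quadrants inside the beachball's equator.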
function plotEvents( lat, lon, depth, mw, strike, dip, rake, latlim, lonlim, depthlim, mwlim, scaleFactor )
I = find( mw < mwlim(1) | mw > mwlim(2) | lat < latlim(1) | lat > latlim(2) | lon < lonlim(1) | lon > lonlim(2) | depth < depthlim(1) | depth > depthlim(2) );
lat(I) = [];
lon(I) = [];
depth(I) = [];
strike(I) = [];
dip(I) = [];
rake(I) = [];
mw(I) = [];
beachballsize = ( mw - min(mw) + .1) * scaleFactor;
clf
[latgrat,longrat,map] = SATBATH(1, latlim,lonlim );
hmap = worldmap('hi', latlim, lonlim );
%setm(gca,'MapProjection','stereo')
surfm(latgrat,longrat,map,map)
demcmap(map); daspectm('m',30)
hChild = get(hmap, 'children');
for j = 1 : length( hChild )
hitem = hChild(j);
if strcmp( get(hitem, 'Type' ), 'text' )
tag = get(hitem,'Tag' );
k = findstr( tag, 'ames' );
if k > 0
set(hitem, 'visible','off' )
end;
end;
end;
%hc = colorbar;
%scaleruler( 'units','km');
%setm(handlem('scaleruler1'),'color','w')
hidem(gca)
for j = 1 : length(lat)
%
[latc,lonc] = scircle1(lat(j),lon(j),log10(mw(j)),[30, 120]);
%
fillm(latc, lonc,'r' );
beachball(strike(j),dip(j),rake(j),lon(j),lat(j),beachballsize(j),'r');
end
%h=plotm(lat,lon,'r.');
% Set plot dimensions so that it fills an 8.5X11 sheet when printed.
set(gcf,'paperunits','inches')
set(gcf,'units','inches')
set(gcf, 'position',[.5,.5,7.5,10])
set(gcf, 'paperposition',[.5,.5,7.5,10])
function handle = beachball(strike,dip,rake,x0,y0,radius,color,handle)
% Usage: handle = beachball(strike,dip,rake,x0,y0,radius,color,handle)
%
% Plot a lower-hemisphere focal mechanism centered at x0,y0
% with radius radius.
% handle is an optional argument. If specified, and if handle
% points to an existing beachball, then the beachball is updated.
% Otherwise a new beachball is created and its handle is returned.
nargs = nargin;
if nargs < 3
error('Not enough args for beachball!');
end
if nargs == 3
x0 = 0;
y0 = 0;
radius = 1;
color = 'k';
handle = [];
end
if nargs == 4
error('Must specify either both x0 and y0 or neither of them!');
end
if nargs == 5
radius = 1;
color = 'k';
handle = [];
end
if nargs == 6
color = 'k';
handle = [];
end
if nargs == 7
handle = [];
end
if radius <= 0
error('Radius must be positive!')
end
if dip < 0, dip = 0;end
if dip > 90, dip = 90; end
%
if isempty(handle)
handle = CreateBeachBall(strike,dip,rake,x0,y0,radius,color);
else
ModifyBeachBall(strike,dip,rake,x0,y0,radius,color,handle);
end
% ------------------------------------------------------------------------------
function ModifyBeachBall(strike,dip,rake,x0,y0,radius,color,handle)
[x1,y1,x2,y2, Xp, Yp] = Boundaries(strike,dip,rake,x0,y0,radius);
set(handle.patch1,'Xdata',x1,'Ydata',y1);
set(handle.patch1,'FaceColor',color);
set(handle.patch2,'Xdata',x2,'Ydata',y2);
set(handle.patch2,'FaceColor',color);
azimuth = (0:360) *pi / 180;
x = x0 + cos(azimuth) * radius;
y = y0 + sin(azimuth) * radius;
set(handle.Equator,'Xdata',x,'Ydata',y);
set(handle.Paxis1,'Position',[Xp(1), Yp(1)]);
set(handle.Paxis2,'Position',[Xp(2), Yp(2)]);
% --------------------------------------------------------------------------
function handle = CreateBeachBall(strike,dip,rake,x0,y0,radius,color)
% Draw focal mechanism plot for current strike, dip, rake
[x1,y1,x2,y2,Xp, Yp] = Boundaries(strike,dip,rake,x0,y0,radius);
handle.patch1 = patch(x1,y1,color,'erasemode','background',...
'Tag','Patch1');
project( handle.patch1 );
handle.patch2 = patch(x2,y2,color,'erasemode','background',...
'Tag','Patch2');
project( handle.patch2 );
azimuth = (0:360) *pi / 180;
x = x0 + cos(azimuth) * radius;
y = y0 + sin(azimuth) * radius;
hBdr = line(x,y,'color','k','erasemode','background', 'Tag','Equator');
project( hBdr );
handle.Equator = hBdr;
poleText = 'p';
poleText = '';
handle.Paxis1 = text(Xp(1), Yp(1), poleText, 'color','k','erasemode','background',...
'Tag', 'Paxis1','HorizontalAlignment','center',...
'VerticalAlignment', 'middle','fontsize',8);
project( handle.Paxis1 );
handle.Paxis2 = text(Xp(2), Yp(2), poleText, 'color','k','erasemode','background',...
'Tag', 'Paxis2','HorizontalAlignment','center',...
'VerticalAlignment', 'middle','fontsize',8);
project( handle.Paxis2 );
% ----------------------------------------------------------------------------
function [x1,y1,x2,y2, Xp, Yp] = Boundaries(strike,dip,rake,x0,y0,radius)
% Get the boundaries of the compressional quadrants by starting with a
% normalized fault (strike = 0, dip = 90, rake = 0).
% Rotate the 1st and third quadrants of this system to the actual fault
% orientation and then project onto equal-area lower hemisphere.
R = rotationMatrix(strike,dip,rake);
conv = pi/180;
% Handle special case of dip = 0;
if dip > 90,dip = 90;end
if dip < .001, dip = 0;end
if dip == 0
rot = rake - strike;
angle = ( (0:180) + rot + 180) * conv;
angle = angle(:)';
x1 = cos(angle) * radius + x0;
y1 = sin(angle) * radius + y0;
x2 = [];
y2 = [];
% Get projection of P-axis
Paxis = [-1 1;1 -1;0 0] /sqrt(2);
[Xpaxis, Ypaxis] = GetProjection(Paxis, R);
Xp = Xpaxis * radius + x0;
Yp = Ypaxis * radius + y0;
% This must always be a 2-element vector even when only one pole is displayed
if length(Xp) == 1
Xp(2) = 1000;
Yp(2) = 1000;
end
return;
end
angle = (0:180) * conv;
angle = angle(:)';
SI = sin(angle);
ZE = zeros(size(angle));
CS = cos(angle);
% get projection of equatorial plane on normalized system
th2 = (0:360)*conv;
xb = cos(th2);
yb = sin(th2);
VV = [xb;yb;zeros(size(xb))];
EqPlane = inv(R) * VV;
% plane 1
V = [SI; ZE; CS];
%create 1/2 circle in +x-z plane
[xp1,yp1] = GetProjection(V, R);
% plane 2
V = [ZE; SI; CS];
%create 1/2 circle in y-z plane
[xp2,yp2] = GetProjection(V, R);
% compressional part of equatorial plane connecting plane1 and plane2
II = find(EqPlane(1,:) >=0 &EqPlane(2,:) >=0);
VV=EqPlane(:,II);
[xxe,yye] = GetProjection2(VV,R);
[xp,yp] = Join(xp1,yp1,xp2,yp2,xxe, yye);
x1 = radius * xp + x0;
y1 = radius * yp + y0;
% plane 3
V = [-SI; ZE; CS];
%create 1/2 circle in -x-z plane
[xp3,yp3] = GetProjection(V, R);
% plane 4
V = [ZE; -SI; CS];
%create 1/2 circle in -y-z plane
[xp4,yp4] = GetProjection(V, R);
% compressional part of equatorial plane connecting plane3 and plane4
II = find(EqPlane(1,:) <=0 &EqPlane(2,:) <=0);
VV=EqPlane(:,II);
[xxe,yye] = GetProjection2(VV,R);
[xxp,yxp] = Join(xp3,yp3,xp4,yp4,xxe,yye);
x2 = radius * xxp + x0;
y2 = radius * yxp + y0;
% Get projection of P-axis
Paxis = [-1 1;1 -1;0 0] /sqrt(2);
[Xpaxis, Ypaxis] = GetProjection(Paxis, R);
Xp = Xpaxis * radius + x0;
Yp = Ypaxis * radius + y0;
% This must always be a 2-element vector even when only one pole is displayed
if length(Xp) == 1
Xp(2) = 1000;
Yp(2) = 1000;
end
% ----------------------------------------------------------------
function [xp,yp] = Join(xp1,yp1,xp2,yp2,eqx,eqy)
xp = [];
yp = [];
N = length(xp1);
M = length(xp2);
L = length(eqx);
% First join the two fault planes forcing the joint at the
% endpoints of smallest radius
r = sqrt(xp1.^2 + yp1.^2);
if r(1) > r(N)
xp = xp1(:);
yp = yp1(:);
else
xp = flipud(xp1(:));
yp = flipud(yp1(:));
end
r = sqrt(xp2.^2 + yp2.^2);
if ~isempty(r)
if r(1) > r(M)
xp = [xp; flipud(xp2(:))];
yp = [yp; flipud(yp2(:))];
else
xp = [xp; xp2(:)];
yp = [yp; yp2(:)];
end
end
if isempty(eqx)
return
end
% sometimes eqx-eqy comes in as a closed curve, so check endpoints and
% remove last if necessary
az = atan2(eqy,eqx);
II1 = find(az >=0 & az < pi/2);
II2 = find(az >= pi/2 & az < pi);
II3 = find(az < -pi/2 & az >= -pi);
II4 = find(az < 0 & az >= -pi/2);
if isempty(II1) | isempty(II4)
az(II3) = 2*pi + az(II3);
az(II4) = 2*pi + az(II4);
end
[az,II] = sort(az);
eqx = cos(az);
eqy = sin(az);
r = sqrt( (eqx - xp(1)).^2 + (eqy - yp(1)).^2);
if r(1) > r(L)
xp = [xp; eqx(:)];
yp = [yp; eqy(:)];
else
xp = [xp; flipud(eqx(:))];
yp = [yp; flipud(eqy(:))];
end
% ----------------------------------------------------------------
function [xp,yp] = GetProjection(V, R)
xp = [];
yp = [];
VP = R * V;
%rotate to strike-dip-rake
I = find(VP(3,:) >= 0); %select part of rotated plane with + z
VPP = VP(:,I);
if isempty(VPP),return;end
r = sqrt(VPP(1,:).^2 + VPP(2,:).^2);
inc = ones(size(r)) * pi/2;
II = find(VPP(3,:) ~= 0);
if ~isempty(II)
inc(II) = atan(r(II) ./ VPP(3,II) );
end
thet = atan2(VPP(2,:) , VPP(1,:));
R0 = sqrt(2) * sin(inc/2);
xp = R0 .* sin(thet);
yp = R0 .* cos(thet);
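The mapping in GetProjection, R0 = sqrt(2)*sin(inc/2), is the standard equal-area (Schmidt) lower-hemisphere projection of a unit vector. As a cross-language sanity check, here is a minimal Python sketch of the same mapping; the function name `equal_area_project` is ours, not part of the report code, and the sin/cos swap in the return reproduces the MATLAB convention of measuring azimuth clockwise from north.

```python
import math

def equal_area_project(v):
    """Equal-area (Schmidt) projection of a unit vector with z >= 0
    (the hemisphere selected by the MATLAB code) onto a unit disk."""
    x, y, z = v
    r = math.hypot(x, y)
    inc = math.atan2(r, z)                 # inclination from the +z axis
    R0 = math.sqrt(2.0) * math.sin(inc / 2.0)
    theta = math.atan2(y, x)
    # Mirror the MATLAB convention: xp = R0*sin(theta), yp = R0*cos(theta),
    # so azimuth runs clockwise from the +y (north) direction.
    return R0 * math.sin(theta), R0 * math.cos(theta)
```

A vertical vector projects to the center of the disk, and any horizontal vector lands exactly on the unit-radius equator, as the MATLAB equator circle assumes.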
% ----------------------------------------------------------------
function [xp,yp] = GetProjection2(V, R)
% These points are guaranteed to be on the equator...
xp = [];
yp = [];
VP = R * V;
%rotate to strike-dip-rake
if isempty(VP),return;end
thet = atan2(VP(2,:) , VP(1,:));
R0 = 1;
xp = R0 .* sin(thet);
yp = R0 .* cos(thet);
% ----------------------------------------------------------------
function R = rotationMatrix(strike,dip,rake)
conv = pi/180;
phi = strike * conv;
delta = -(90 - dip) * conv;
lambda = rake * conv;
cp = cos(phi);
sp = sin(phi);
cd = cos(delta);
sd = sin(delta);
cl = cos(lambda);
sl = sin(lambda);
R3 = [cp -sp 0;sp cp 0; 0 0 1]; % rotation around Z for strike
R2 = [1 0 0 ; 0 cd -sd; 0 sd cd]; % rotation around X for dip
R1 = [cl 0 sl; 0 1 0; -sl 0 cl]; % rotation around Y for rake
R = R3*R2*R1;
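rotationMatrix composes three elementary rotations: about Z by the strike, about X by -(90 - dip), and about Y by the rake. A small Python sketch of the same composition (the name `rotation_matrix` is ours) makes it easy to verify that the product is orthogonal, as any composition of rotations must be:

```python
import math

def rotation_matrix(strike, dip, rake):
    """Compose the three elementary rotations used by the MATLAB
    rotationMatrix function. Angles in degrees; returns a 3x3 nested list."""
    conv = math.pi / 180.0
    phi = strike * conv
    delta = -(90.0 - dip) * conv
    lam = rake * conv
    cp, sp = math.cos(phi), math.sin(phi)
    cd, sd = math.cos(delta), math.sin(delta)
    cl, sl = math.cos(lam), math.sin(lam)
    R3 = [[cp, -sp, 0], [sp, cp, 0], [0, 0, 1]]   # about Z, for strike
    R2 = [[1, 0, 0], [0, cd, -sd], [0, sd, cd]]   # about X, for dip
    R1 = [[cl, 0, sl], [0, 1, 0], [-sl, 0, cl]]   # about Y, for rake
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3))
                 for j in range(3)] for i in range(3)]
    return matmul(matmul(R3, R2), R1)
```

For the normalized fault (strike = 0, dip = 90, rake = 0) all three angles vanish and the matrix reduces to the identity, which is exactly the starting configuration the Boundaries function rotates away from.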
% -----------------------------------------------------------------
% load data from the supplied events file and store into vectors...
s = load('events');
lat = s(:,1);
lon = s(:,2);
depth = s(:,3);
moment = s(:,4);
strike = s(:,5);
dip = s(:,6);
rake = s(:,7);
mw = 2/3 * ( log10(moment) - 16.1 );
% set up ranges of data to plot
minLat = -10;
maxLat = 30;
minLon = 120;
maxLon = 160;
minDepth = 0;
maxDepth = 100;
minMw = 6.5;
maxMw = 8;
% Use this to control the absolute size of beach balls in plot (may need to experiment).
beachBallScaleFactor = .5;
% pack limits into arrays and call plotting routine.
latlim = [minLat maxLat];
lonlim=[minLon,maxLon];
depthlim = [minDepth maxDepth];
mwlim = [minMw maxMw];
plotEvents( lat, lon, depth, mw, strike, dip, rake, latlim, lonlim, depthlim, mwlim, ...
beachBallScaleFactor );
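The conversion mw = 2/3 * (log10(moment) - 16.1) in the script above is the usual moment-magnitude definition with the seismic moment in dyne-cm. A one-line Python equivalent (the function name `moment_to_mw` is ours):

```python
import math

def moment_to_mw(m0_dyne_cm):
    """Moment magnitude from seismic moment in dyne-cm,
    Mw = (2/3) * (log10(M0) - 16.1), as in the MATLAB script."""
    return (2.0 / 3.0) * (math.log10(m0_dyne_cm) - 16.1)
```

Inverting the relation, an Mw 7.0 event corresponds to a moment of 10^(1.5*7.0 + 16.1) dyne-cm, a convenient round-trip check.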
% load data from the supplied events file and store into vectors...
s = load('events');
lat = s(:,1);
lon = s(:,2);
depth = s(:,3);
moment = s(:,4);
strike = s(:,5);
dip = s(:,6);
rake = s(:,7);
mw = 2/3 * ( log10(moment) - 16.1 );
% set up ranges of data to plot
minLat = -0;
maxLat = 15;
minLon = 105;
maxLon = 135;
minDepth = 0;
maxDepth = 600;
minMw = 4;
maxMw = 9;
originLat = ( minLat + maxLat ) / 2;
originLon = (minLon + maxLon ) / 2;
I = find( mw < minMw | mw > maxMw | lat < minLat | lat > maxLat ...
| lon < minLon | lon > maxLon | depth < minDepth | depth > maxDepth );
lat(I) = [];
lon(I) = [];
depth(I) = [];
strike(I) = [];
dip(I) = [];
rake(I) = [];
mw(I) = [];
clf
% Set plot dimensions so that it fills an 8.5X11 sheet when printed.
set(gcf,'paperunits','inches')
set(gcf,'units','inches')
set(gcf, 'position',[.5,.5,7.5,9.5])
set(gcf, 'paperposition',[.5,.5,7.5,9.5])
x = [];
y = [];
for j = 1 : length( lat )
az = azimuth( 'gc', originLat, originLon, lat(j), lon(j) ) * pi / 180;
dist = deg2km( distance( originLat, originLon, lat(j), lon(j) ) );
x(j) = sin( az ) * dist;
y(j) = cos( az ) * dist;
end;
plot3( x,y, -depth , 'r.')
xlabel( 'km east of origin' );
ylabel( 'km North of origin' );
zlabel( 'Depth (km) ' )
axis('equal')
box on
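The loop above relies on the Mapping Toolbox `azimuth` and `distance` functions to place each epicenter in a local east/north frame centered on the origin. Where that toolbox is unavailable, the same conversion can be sketched with spherical-earth great-circle formulas; the function name `to_local_xy` and the 6371 km mean radius are our assumptions:

```python
import math

def to_local_xy(origin_lat, origin_lon, lat, lon, R_km=6371.0):
    """Great-circle distance and azimuth from the origin, converted to
    km east (x) and km north (y), mirroring the MATLAB loop.
    Spherical-earth approximation."""
    p1, p2 = math.radians(origin_lat), math.radians(lat)
    dlon = math.radians(lon - origin_lon)
    # central angle via the spherical law of cosines (clamped for safety)
    c = math.acos(min(1.0, max(-1.0,
        math.sin(p1) * math.sin(p2) +
        math.cos(p1) * math.cos(p2) * math.cos(dlon))))
    dist = R_km * c
    # initial bearing from the origin, measured clockwise from north
    az = math.atan2(math.sin(dlon) * math.cos(p2),
                    math.cos(p1) * math.sin(p2) -
                    math.sin(p1) * math.cos(p2) * math.cos(dlon))
    return math.sin(az) * dist, math.cos(az) * dist
```

One degree of great-circle arc is about 111.2 km on this sphere, so a point one degree due north of the origin should come back as roughly (0, 111.2).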
Code for plots of tsunami waves superimposed on tide data
During preliminary work on the feasibility of implementing a tsunami warning
system for Taiwan, a plot was needed showing the response of a particular tide
gauge to the waves generated by the Sumatra earthquake.
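The band-pass step used in the functions below (butter followed by filtfilt) has a direct counterpart in Python's scipy.signal. Here is a minimal sketch with the report's corner frequencies (10 and 100 cycles per day) and filter order; the function name `bandpass_tide` is ours:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_tide(y, dt_days, low=10.0, high=100.0, order=3):
    """Zero-phase Butterworth band-pass, the same butter + filtfilt recipe
    as the MATLAB code: corner frequencies in cycles per day, normalized
    by the Nyquist frequency. 10-100 cpd are the report's values."""
    nyquist = 0.5 / dt_days                  # cycles per day
    b, a = butter(order, [low / nyquist, high / nyquist], btype='bandpass')
    # filtfilt runs the filter forward and backward, giving zero phase shift
    return filtfilt(b, a, np.asarray(y, dtype=float))
```

With 240 samples per day, a 1 cpd tidal component (a decade below the low corner) is strongly suppressed while a 50 cpd component inside the passband passes essentially unchanged.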
function makeTidePlots( filename, station )
% load the X-Y data file
s = load( filename );
% for convenience, extract 2 vectors from the automatically-assigned 2-column vector variable.
x = s(:,1);
y = s(:,2);
subplot(3,1,1)
% Create a basic plot assigning result to a handle-graphics variable.
hLine = plot(x,y);
% 2. specify a title, e.g., "Sea Level at Station SALALAH"
htitle = title( ['Sea Level at Station ' station] );
%3. specify x-axis label
xlabel( 'Time (Day)' );
%4. specify y-axis label
ylabel( 'Sea Level (mm)' );
%5. specify x-limit (xmin, xmax, x- tick_mark)
%7. options to plot -- data point plot, line plot, line plot with data point
% First option sets the plot to just show data points using a '.' as the marker...
% possibilities for the marker are: + | o | * | . | x | square | diamond | v | ^ | > | < | pentagram | hexagram | {none}
% set(hLine,'linestyle','none')
% set( hLine,'marker', '.' )
% Second option sets the plot to just show a line
% Possibilities for linestyle are: {-} | -- | : | -. | none
set(hLine,'linestyle','-')
set( hLine,'marker', 'none' )
% Third option sets the plot to show a line with data points. Symbol is a '+'
% set(hLine,'linestyle','-')
% set( hLine,'marker', '+' )
%8. option to set color, e.g., Title in RED color, plot in Blue color, axes and label in BLACK color.
set(htitle,'color','r' )
set(hLine,'color','b')
subplot(3,1,2)
dt = x(2) - x(1);
I = find( isnan(y));
y(I) = [];
y = y - mean(y);
[Pxx,F] = pwelch( y, [], [], [], 1/dt );
plot( F,Pxx)
set(gca,'xscale','log', 'yscale','log')
xlabel( 'Frequency (Samples Per Day)')
ylabel( 'cm^2 / (Samples Per Day)')
title( ['Power Spectrum of Sea Level at Station ' station] );
% Set plot dimensions so that it fills an 8.5X11 sheet when printed.
set(gcf,'paperunits','inches')
set(gcf,'units','inches')
set(gcf, 'position',[.5,.5,7.5,10])
set(gcf, 'paperposition',[.5,.5,7.5,10])
subplot(3,1,3)
% filter data from 10 cycles per day to 100 cycles per day using a 3rd-order Butterworth filter.
order = 3;
Nyquist = 1/2/dt; % cycles per day
lowpass = 10; % cycles per day
highpass = 100; % cycles per day
Wn = [lowpass/Nyquist, highpass/Nyquist];
% Design the digital Butterworth band-pass filter...
[b,a] = butter( order, Wn );
% Apply the filter forward and backward (zero-phase) to the data...
Y = filtfilt( b, a, y );
% Create a basic plot assigning result to a handle-graphics variable.
hLine = plot(x,Y);
htitle = title( ['Sea Level at Station ' station ...
' filtered from 10 - 100 cycles per day (3rd-order, 2-Pass Butterworth)'] );
%3. specify x-axis label
xlabel( 'Time (Day)' );
%4. specify y-axis label
ylabel( 'Sea Level (mm)' );
%7. options to plot -- data point plot, line plot, line plot with data point
% First option sets the plot to just show data points using a '.' as the marker...
% possibilities for the marker are: + | o | * | . | x | square | diamond | v | ^ | > | < | pentagram | hexagram | {none}
% set(hLine,'linestyle','none')
% set( hLine,'marker', '.' )
% Second option sets the plot to just show a line
% Possibilities for linestyle are: {-} | -- | : | -. | none
set(hLine,'linestyle','-')
set( hLine,'marker', 'none' )
% Third option sets the plot to show a line with data points. Symbol is a '+'
% set(hLine,'linestyle','-')
% set( hLine,'marker', '+' )
%8. option to set color, e.g., Title in RED color, plot in Blue color, axes and label in BLACK color.
set(htitle,'color','r' )
set(hLine,'color','b')
% Set plot dimensions so that it fills an 8.5X11 sheet when printed.
set(gcf,'paperunits','inches')
set(gcf,'units','inches')
set(gcf, 'position',[.5,.5,7.5,10])
set(gcf, 'paperposition',[.5,.5,7.5,10])
function makeFilteredPlot( filename, station )
% load the X-Y data file
s = load( filename );
% for convenience, extract 2 vectors from the automatically-assigned 2-column vector variable.
x = s(:,1);
y = s(:,2);
y = y - mean(y);
% filter data from 10 cycles per day to 100 cycles per day using a 3rd-order Butterworth filter.
order = 3;
dt = x(2) - x(1);
Nyquist = 1/2/dt; % cycles per day
lowpass = 10; % cycles per day
highpass = 100; % cycles per day
Wn = [lowpass/Nyquist, highpass/Nyquist];
% Design the digital Butterworth band-pass filter...
[b,a] = butter( order, Wn );
% Apply the filter forward and backward (zero-phase) to the data...
Y = filtfilt( b, a, y );
% Create a basic plot assigning result to a handle-graphics variable.
hLine = plot(x,Y);
htitle = title( ['Sea Level at Station ' station ...
' filtered from 10 - 100 cycles per day (3rd-order, 2-Pass Butterworth)'] );
%3. specify x-axis label
xlabel( 'Time (Day)' );
%4. specify y-axis label
ylabel( 'Sea Level (mm)' );
%7. options to plot -- data point plot, line plot, line plot with data point
% First option sets the plot to just show data points using a '.' as the marker...
% possibilities for the marker are: + | o | * | . | x | square | diamond | v | ^ | > | < | pentagram | hexagram | {none}
% set(hLine,'linestyle','none')
% set( hLine,'marker', '.' )
% Second option sets the plot to just show a line
% Possibilities for linestyle are: {-} | -- | : | -. | none
set(hLine,'linestyle','-')
set( hLine,'marker', 'none' )
% Third option sets the plot to show a line with data points. Symbol is a '+'
% set(hLine,'linestyle','-')
% set( hLine,'marker', '+' )
%8. option to set color, e.g., Title in RED color, plot in Blue color, axes and label in BLACK color.
set(htitle,'color','r' )
set(hLine,'color','b')
% Set plot dimensions so that it fills an 8.5X11 sheet when printed.
set(gcf,'paperunits','inches')
set(gcf,'units','inches')
set(gcf, 'position',[.5,.5,7.5,10])
set(gcf, 'paperposition',[.5,.5,7.5,10])
function makeSpectralPlot( filename, station )
% load the X-Y data file
s = load( filename );
% for convenience, extract 2 vectors from the automatically-assigned 2-column vector variable.
% May need to run the 'whos' command to determine the name of the 2-column vector.
x = s(:,1);
y = s(:,2);
dt = x(2) - x(1);
I = find( isnan(y));
y(I) = [];
y = y - mean(y);
[Pxx,F] = pwelch( y, [], [], [], 1/dt );
plot( F,Pxx)
set(gca,'xscale','log', 'yscale','log')
xlabel( 'Frequency (Samples Per Day)')
ylabel( 'cm^2 / (Samples Per Day)')
title( ['Power Spectrum of Sea Level at Station ' station] );
% Set plot dimensions so that it fills an 8.5X11 sheet when printed.
set(gcf,'paperunits','inches')
set(gcf,'units','inches')
set(gcf, 'position',[.5,.5,7.5,10])
set(gcf, 'paperposition',[.5,.5,7.5,10])
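makeSpectralPlot above estimates the power spectrum with MATLAB's pwelch using default segmenting. An equivalent sketch with scipy.signal.welch (the function name `sea_level_spectrum` is ours), including the same NaN removal and mean removal as the MATLAB code:

```python
import numpy as np
from scipy.signal import welch

def sea_level_spectrum(y, dt_days):
    """Welch power spectral density with default segmenting, the Python
    counterpart of pwelch(y, [], [], [], 1/dt). Frequencies are returned
    in cycles per day."""
    y = np.asarray(y, dtype=float)
    y = y[~np.isnan(y)]          # drop gaps, as the MATLAB code does
    y = y - y.mean()             # remove the mean before estimating the PSD
    f, pxx = welch(y, fs=1.0 / dt_days)
    return f, pxx
```

Feeding in a pure sinusoid at a known frequency and checking that the spectral peak lands on it (within the resolution set by the default segment length) is a quick way to validate the frequency axis.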
function makeTidePlot( filename, station )
% load the X-Y data file
s = load( filename );
% for convenience, extract 2 vectors from the automatically-assigned 2-column vector variable.
x = s(:,1);
y = s(:,2);
% Create a basic plot assigning result to a handle-graphics variable.
hLine = plot(x,y);
% 2. specify a title: "Residual of Sea Level at Station SALALAH",
htitle = title( ['Residual of Sea Level at Station ' station] );
%3. specify x-axis label
xlabel( 'Time (Day)' );
%4. specify y-axis label
ylabel( 'Sea Level (mm)' );
%5. specify x-limit (xmin, xmax, x- tick_mark)
%7. options to plot -- data point plot, line plot, line plot with data point
% First option sets the plot to just show data points using a '.' as the marker...
% possibilities for the marker are: + | o | * | . | x | square | diamond | v | ^ | > | < | pentagram | hexagram | {none}
% set(hLine,'linestyle','none')
% set( hLine,'marker', '.' )
% Second option sets the plot to just show a line
% Possibilities for linestyle are: {-} | -- | : | -. | none
set(hLine,'linestyle','-')
set( hLine,'marker', 'none' )
% Third option sets the plot to show a line with data points. Symbol is a '+'
% set(hLine,'linestyle','-')
% set( hLine,'marker', '+' )
%8. option to set color, e.g., Title in RED color, plot in Blue color, axes and label in BLACK color.
set(htitle,'color','r' )
set(hLine,'color','b')
% Set plot dimensions so that it fills an 8.5X11 sheet when printed.
set(gcf,'paperunits','inches')
set(gcf,'units','inches')
set(gcf, 'position',[.5,.5,7.5,10])
set(gcf, 'paperposition',[.5,.5,7.5,10])