PLATFORM
Volume 6, Number 2
July - December 2008
Advisor: Datuk Dr. Zainal Abidin Haji Kasim
PLATFORM Editorial
Editor-in-Chief:
Prof. Ir. Dr. Ahmad Fadzil Mohd. Hani
Co-Editors:
Assoc. Prof. Dr. Isa Mohd Tan
Assoc. Prof. Dr. Victor Macam Jr.
Assoc. Prof. Dr. Patthi Hussin
Dr. Baharum Baharuddin
Dr. Nor Hisham Hamid
Dr. Shahrina Mohd. Nordin
Subarna Sivapalan
Sub-Editor:
Haslina Noor Hasni
UTP Publication Committee
Chairman: Dr. Puteri Sri Melor
Members:
Prof. Ir. Dr. Ahmad Fadzil Mohamad Hani
Assoc. Prof. Dr. Madzlan Napiah
Assoc. Prof. Dr. M. Azmi Bustam
Dr. Nidal Kamel
Dr. Ismail M. Saaid
Dr. M. Fadzil Hassan
Dr. Rohani Salleh
Rahmat Iskandar Khairul Shazi Shaarani
Shamsina Shaharun
Anas M. Yusof
Haslina Noor Hasni
Roslina Nordin Ali
Secretary:
Mohd. Zairee Shah Mohd. Shah
zairee@petronas.com.my
Address:
PLATFORM Editor-in-Chief
Universiti Teknologi PETRONAS
Bandar Seri Iskandar, 31750 Tronoh
Perak Darul Ridzuan, Malaysia
http://www.utp.edu.my
fadzmo@petronas.com.my
haslinn@petronas.com.my
Telephone +(60)5 368 8239
Facsimile +(60)5 365 4088
Copyright © 2008
Universiti Teknologi PETRONAS
ISSN 1511-6794
Contents
Mission-Oriented Research: CARBON DIOXIDE MANAGEMENT
2    Separation Of Nitrogen From Natural Gas By Nano-Porous Membrane Using Capillary Condensation
     Farooq Ahmad, Hilmi Mukhtar, Zakaria Man, Binay K. Dutta

Mission-Oriented Research: DEEPWATER TECHNOLOGY
6    Recent Developments In Autonomous Underwater Vehicle (AUV) Control Systems
     Kamarudin Shehabuddeen, Fakhruldin Mohd Hashim

Mission-Oriented Research: GREEN TECHNOLOGY
13   Enhancement Of Heat Transfer Of A Liquid Refrigerant In Transition Flow In The Annulus Of A Double-Tube Condenser
     R. Tiruselvam, Chin Wai Meng, Vijay R. Raghavan

Mission-Oriented Research: PETROCHEMICAL CATALYSIS TECHNOLOGY
21   Fenton And Photo-Fenton Oxidation Of Diisopropanolamine
     Abdul Aziz Omar, Putri Nadzrul Faizura Megat Khamaruddin, Raihan Mahirah Ramli
27   Synthesis Of Well-Defined Iron Nanoparticles On A Spherical Model Support
     Noor Asmawati Mohd Zabidi, P. Moodley, P. C. Thüne, J. W. Niemantsverdriet

Technology Platform: FUEL COMBUSTION
31   Performance And Emission Comparison Of A Direct-Injection (DI) Internal Combustion Engine Using Hydrogen And Compressed Natural Gas As Fuels
     Abdul Rashid Abdul Aziz, M. Adlan A., M. Faisal A. Mutalib
38   The Effect Of Droplets On Buoyancy In Very Rich Iso-Octane-Air Flames
     Shaharin Anwar Sulaiman, Malcolm Lawes

Technology Platform: SYSTEM OPTIMISATION
47   Anaerobic Co-Digestion Of Kitchen Waste And Sewage Sludge For Producing Biogas
     Amirhossein Malakahmad, Noor Ezlin Ahmad Basri, Sharom Md Zain
52   On-Line At-Risk Behaviour Analysis And Improvement System (E-ARBAIS)
     Azmi Mohd Shariff, Tan Sew Keng
65   Bayesian Inversion Of Proof Pile Test: Monte Carlo Simulation Approach
     Indra Sati Hamonangan Harahap, Wong Chun Wah
77   Element Optimisation Techniques In Multiple DB Bridge Projects
     Narayanan Sambu Potty, C. T. Ramanathan
85   A Simulation Study On Dynamics And Control Of A Refrigerated Gas Plant
     Nooryusmiza Yusoff, M. Ramasamy, Suzana Yusup

Technology Platform: APPLICATION OF INTELLIGENT IT SYSTEM
91   An Interactive Approach To Curve Framing
     Abas Md Said
96   Student Industrial Internship Web Portal
     Aliza Sarlan, Wan Fatimah Wan Ahmad, Dismas Bismo
105  Hand Gesture Recognition: Sign To Voice System (S2V)
     Foong Oi Mean, Tan Jung Low, Satrio Wibowo
111  Parallelization Of Prime Number Generation Using Message Passing Interface
     Izzatdin A Aziz, Nazleeni Haron, Low Tan Jung, Wan Rahaya Wan Dagang
116  Evaluation Of Lossless Image Compression For Ultrasound Images
     Boshara M. Arshin, P. A. Venkatachalam, Ahmad Fadzil Mohd Hani
122  Learning Style Inventory System: A Study To Improve Learning Programming Subject
     Saipudnizam Mahamad, Syarifah Bahiyah Rahayu Syed Mansor, Hasiah Mohamed
129  Performance Measurement – A Balanced Score Card Approach
     P. D. D. Dominic, M. Punniyamoorthy, Savita K Sugathan, Noreen I. A.
137  A Conceptual Framework For Teaching Technical Writing Using 3D Virtual Reality Technology
     Shahrina Md Nordin, Suziah Sulaiman, Dayang Rohaya Awang Rambli, Wan Fatimah Wan Ahmad, Ahmad Kamil Mahmood
145  Multi-Scale Color Image Enhancement Using Contourlet Transform
     Melkamu H. Asmare, Vijanth Sagayan Asirvadam, Lila Iznita
152  Automated Personality Inventory System
     Wan Fatimah Wan Ahmad, Aliza Sarlan, Mohd Azizie Sidek
158  A Fuzzy Neural Based Data Classification System
     Yong Suet Peng, Luong Trung Tuan

Other Areas
166  Research In Education: Taking Subjective Based Research Seriously
     Sumathi Renganathan, Satirenjit Kaur
Mission-Oriented Research: CARBON DIOXIDE MANAGEMENT
Separation of Nitrogen from Natural Gas
by Nano-porous Membrane
Using Capillary Condensation
Farooq Ahmad*, Hilmi Mukhtar, Zakaria Man and Binay K. Dutta
Universiti Teknologi PETRONAS, 31750 Tronoh, Perak Darul Ridzuan, Malaysia
*farooq_ahmad@petronas.com.my
ABSTRACT
In the present work we have explored the potential of a nano-porous membrane to separate a binary methane/nitrogen mixture by capillary condensation. For the methane/nitrogen system, a permeation rate of up to 700 gmol/m².s.bar was achieved at a temperature lower than the critical temperature of the permeating species and higher than the critical temperature of the non-permeating species. The results have the potential to be used for further refining and optimising the process conditions to exploit this strategy for large-scale removal of nitrogen from methane at low cost.
Keywords: capillary condensation, nano-porous membrane, natural gas, nitrogen, permeability
INTRODUCTION

Raw natural gas contains many impurities such as acid gases (carbon dioxide and hydrogen sulfide), lower hydrocarbons (propane and butane) and nitrogen. Studies performed by the Gas Research Institute reveal that 14% of known reserves in the United States are sub-quality due to high nitrogen content [Hugman et al. (1993)]. The conventional cryogenic route is not favoured as it requires a lot of energy. Gas permeation through a nano-porous membrane occurs primarily by Knudsen diffusion, although the interaction between the permeating molecules and the pore wall may cause other mechanisms to prevail, such as surface diffusion [Jaguste et al. (1995); Uhlhorn et al. (1998); Wijmans et al. (1995)]. Multi-layer adsorption occurs and is followed by capillary condensation. In an earlier paper [Ahmad et al. (2005)] we reported an analysis of the separation of lower hydrocarbons from natural gas by capillary condensation. It was established that practically acceptable flux and selectivity could be achieved by this technique.
RESULTS AND DISCUSSIONS
The technique presented by Lee and Hwang is widely used to describe the transport of condensable gases through the small pores of membranes [Lee and Hwang (1986)]. They investigated the transport of condensable vapours through a micro-porous membrane and predicted six flow regimes depending on the pressure distribution and the thickness of the adsorbed layer. Here we consider the case of complete filling of a pore, with condensate at both the upstream and downstream faces. For condensation to occur in the pore at both the upstream and downstream faces of the membrane, the condensation pressure (Pcon) should be less than both the upstream pressure (feed pressure P1) and the downstream pressure (permeate pressure P2) across the membrane, at a feed temperature greater than the critical temperature of nitrogen and less than the critical temperature of methane.

This paper was presented at the 15th Regional Symposium on Chemical Engineering in Conjunction with the 22nd Symposium of Malaysian Chemical Engineers, Kuala Lumpur, 2 - 3 December 2008
This situation is depicted in Figure 1. For this case,
the entire pore is filled with the capillary condensate
and apparent permeability is given by the following
equation [Lee and Hwang (1986)]
P_t = [K_d ρ R T / (µ M (P_1 − P_2))] × [((r − t_2)²/r²) ln(P_1/P_0) − ((r − t_1)²/r²) ln(P_2/P_0)]    (1)
where K_d = ρπr⁴/8M, and t_1 and t_2 are the thicknesses of the adsorbed layer at the upstream and downstream faces of the pore, respectively. The thickness of the adsorbed layer at upstream and downstream is assumed to be 10 times the molecular diameter of methane. Since methane is the more permeating component, the selectivity of methane over nitrogen is given by
α = (x_CH4 / y_CH4) / (x_N2 / y_N2)    (2)

where x is the mole fraction in the pore, and y is the mole fraction in the bulk.
The permeability of condensed methane in the methane/nitrogen binary mixture has been calculated using equation (1) at different temperatures for various pore sizes. Since the selected pore diameters are small, condensation occurs at temperatures well above the normal condensation temperature at the prevailing pressure. This makes pore-flow separation more attractive than the cryogenic separation process. A wide range of pore sizes and temperatures was selected for computation of permeability and separation factors. The computed results are presented below. Figure 2 gives the permeability of methane with temperature for different pore sizes, with pore lengths equal to ten times the molecular diameter of methane. With increasing temperature the permeation rate increases, because at a higher temperature more pressure is required to cause capillary condensation inside the pore. Figure 2 shows that even at moderate pressure and a temperature slightly below the critical temperature of methane, an appreciable permeability can be achieved. The permeation rate is reduced at small pore sizes, but at small pore sizes the condensation pressure is also reduced, and less pressure is required at the feed side to cause condensation inside the pore. Based on the solubility of nitrogen in condensed methane, computed using the Peng-Robinson equation of state, the separation factor of methane/nitrogen binary mixtures has been calculated using equation (2) and is shown in Figure 3. From Figure 3 it can be seen that the separation factor decreases with increasing temperature: as stated earlier, at a higher temperature more feed pressure is required to cause capillary condensation, so the solubility of nitrogen in liquid methane increases and the separation factor decreases.
Figure 1. Schematic of condensation flow through a nano pore.
Figure 2. Effect of pore size (5 nm to 50 nm) on the permeability of methane, P (gmol/s.m².bar), at temperatures of 100-190 K.

Figure 3. Separation factor of N2/CH4 binary mixtures (4%, 8%, 12% and 16% N2 in CH4) at temperatures of 100-190 K.
Figure 4. Comparison of experimental separation factors for methanol-hydrogen separation [Sperry et al. (1991)] with model-predicted data [Ahmad et al. (2008)], over 360-480 K.

Figure 5. Comparison of experimental capillary condensation pressures, Pcon (bar), for methanol-hydrogen separation [Sperry et al. (1991)] with data predicted using the Kelvin equation [Ahmad et al. (2008)], over 360-480 K.
From Figure 3 it is concluded that a reasonable separation factor can be achieved using a nano-porous membrane by capillary condensation, which confirms the potential of this approach. The model compares reasonably well with the experimental results, which supports the validity of the model. The comparison is shown in Figure 4. Figure 4 shows that the separation factor obtained theoretically is less than the experimentally determined separation factor. The reason is that the theoretical separation factor is calculated on the basis of the equilibrium solubility of hydrogen in the condensed phase of methanol. In reality such a system operating at steady state will be away from equilibrium, and a higher separation factor will be achieved. The permeability increases with temperature although the separation factor decreases. A balance should be struck between the two to decide upon the optimum operating temperature. For tortuous pores the increased path length will cause the permeability to decrease. The comparison between the experimental values of capillary condensation pressure given by Sperry et al. and the values predicted by the Kelvin equation is shown in Figure 5. The computed results and experimental data compare well at least up to a temperature of 420 K (147°C). This establishes the validity of the model up to a reasonably high temperature for gas separation applications.
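For reference, the Kelvin equation used here to predict the capillary condensation pressure is not written out in the text above; its standard form (our notation) is

ln(P_con / P_sat) = −(2 σ V_m cos θ) / (r_K R T)

where σ is the surface tension of the condensate, V_m its molar volume, θ the contact angle, and r_K the Kelvin (pore) radius. Because the right-hand side is negative for a wetting condensate, P_con < P_sat, which is why condensation in nano-pores occurs above the normal dew point at a given pressure.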
CONCLUSIONS
The Kelvin equation was used to calculate the
condition for capillary condensation for predicting
the separation of nitrogen from methane. The
separation factor of methane/nitrogen was analysed
based on the principle that methane will condense
preferentially. High separation factor of up to 439 was
achieved, suggesting that the removal of nitrogen from
natural gas by nano-porous membrane is promising.
Permeation rates were also calculated and are in agreement with those for other condensed-gas systems. Also,
condensation occurred at a temperature much lower
than the normal saturation temperature.
REFERENCES
[1] R. H. Hugman, E. H. Vidas, P. S. Springer (1993), "Chemical Composition of Discovered and Undiscovered Natural Gas in the United States", Update, GRI-93/0456
[2] J. G. Wijmans, R. W. Baker (1995), "The Solution Diffusion Model: A Review", J. Membr. Sci., (107) 1-21
[3] R. J. R. Uhlhorn, K. Keizer and A. J. Burggraaf (1998), "Gas Separation Mechanism in Micro-porous Modified γ-Alumina Membrane", J. Membr. Sci., (39) 285-300
[4] D. N. Jaguste and S. K. Bhatia (1995), "Combined Surface and Viscous Flow of Condensable Vapor in Porous Media", Chemical Engineering Science, (50) 167-182
[5] F. Ahmad, H. Mukhtar, Z. Man, Binay K. Dutta (2007), "Separation of Lower Hydrocarbon from Natural Gas through Nano-porous Membrane using Capillary Condensation", Chem. Eng. Technol., 30 (9) 1266-1273
[6] K. H. Lee and S. T. Hwang (1986), "The Transport of Condensable Vapors through a Micro-porous Vycor Glass Membrane", J. Colloid Interface Sci., 110 (2) 544-554
[7] F. Ahmad, H. Mukhtar, Z. Man, Binay K. Dutta (2008), "A Theoretical Analysis for Non-chemical Separation of Hydrogen Sulfide from Natural Gas through Nano-porous Membrane using Capillary Condensation", Chem. Eng. Processing, (47) 2203-2208
Hilmi Mukhtar is currently the Director
of Undergraduate Studies, Universiti
Teknologi PETRONAS (UTP). Before joining
UTP, he served as a faculty member of
Universiti Sains Malaysia (USM) for about 6
years. He was a former Deputy Dean of the
School of Chemical Engineering at USM.
He obtained his BEng in Chemical
Engineering from the University of
Swansea, Wales, United Kingdom in 1990. He completed his MSc
in 1991 and later on his PhD in 1995 from the same university. His
doctoral research focused on the “Characterisation and
Performance of Nanofiltration Membrane”.
He has deep research interests in the area of natural gas
purification using membrane processes. Currently, he is leading
a research project under the Separation & Utilisation of Carbon
Dioxide research group. In this project, the removal of impurities
from natural gas, in particular, carbon dioxide, is the key focus of
the study. In addition, he has research interests in environmental
issues particularly wastewater treatment and carbon trading.
Mission-Oriented Research: DEEPWATER TECHNOLOGY
Recent Developments in Autonomous
Underwater Vehicle (AUV) Control Systems
Kamarudin Shehabuddeen* and Fakhruldin Mohd Hashim
Universiti Teknologi PETRONAS, 31750 Tronoh, Perak Darul Ridzuan, Malaysia
*kamashe@petronas.com.my
Abstract
Autonomous Underwater Vehicles (AUVs) are tether-free and unmanned. AUVs are powered by onboard energy sources such as fuel cells and batteries. They are also equipped with devices such as an electronic compass, GPS, sonar sensors, laser rangers, pressure sensors, inclination sensors and roll sensors, and are controlled by onboard computers to execute complex preprogrammed missions. In the oil and gas sector, two separate categories of AUVs are classified for application in Exploration and Production (E&P): a survey class vehicle for inspection of offshore structures and data acquisition, and a work class vehicle for underwater manipulation and installation. AUV dynamics involve six degrees of freedom (DOF). Most AUV applications require very stringent positioning precision. However, AUV dynamics are highly nonlinear, and the hydrodynamic coefficients of vehicles are difficult to estimate accurately due to the variables associated with different operating conditions. Experimental class AUVs provide an excellent platform for the development and testing of new control methodologies and algorithms to be implemented in advanced AUVs. The control performance requirements of an AUV are most likely to be achieved with control concepts based on nonlinear theory. Recently developed advanced control methodologies have focused on improving the capability of tracking predetermined reference positions and trajectories. Due to the increasing depth of operation expected from future AUVs and the limited onboard power supply, future research in AUV control is most likely to expand into incorporating intelligent control of the propulsion system in order to improve power consumption efficiency. This paper presents a survey of some experimental AUVs and of the past, recent and future directions of AUV control methodologies and technologies.

Keywords: Autonomous Underwater Vehicles (AUV), experimental AUVs, past, recent and future control methodologies
Introduction
Oceans cover about two-thirds of the earth's surface, and the living and non-living resources in the oceans undoubtedly play an important role in human life. The deep oceans are hazardous, particularly due to the high pressure environment. However, the offshore oil industry is now forced to deal with increasing depths of offshore oil wells. In recent years, oil well depths (surface of the sea to sea bed) have increased far beyond the limit of a human diver. This has resulted in the increasing deployment of unmanned underwater vehicles (UUVs) for operations and maintenance of deepwater oil facilities.
UUVs cover both remotely operated vehicles (ROVs) and autonomous underwater vehicles (AUVs). ROVs have a tethered umbilical cable to enable a remote
operator to control the operation of the vehicle. The tether influences the dynamics of the vehicle, greatly reducing maneuverability.

This paper was presented at the 5th PetroMin Deepwater, Subsea & Underwater Technology Conference and Exhibition 2007, Kuala Lumpur, 29 - 30 October 2007

AUVs are tether-free and unmanned, powered by onboard energy sources such as fuel cells and batteries. AUVs are also equipped with devices such as an electronic compass, GPS, sonar sensors, laser rangers, pressure sensors, inclination sensors and roll sensors, and are controlled by onboard computers to execute complex preprogrammed missions.

With the technology advancement in batteries, fuel cells, materials, computers, artificial intelligence and communication, AUVs have become more popular in exploring oceanic resources. In the oil and gas sector, two separate classes of AUVs are available for application in Exploration and Production (E&P). A survey class is for inspection of offshore structures and data acquisition, and a work class vehicle is for the underwater manipulation required for installation of underwater facilities and equipment.

Two classes of UUVs are normally used in the E&P sector. A survey class is for ocean floor mapping, inspection of offshore structures and data acquisition, and a work class vehicle is for manipulation, installation and repair operations. Recently, in the UUV market, ROVs have been gradually replaced by AUVs.

AUVs performing survey, manipulation or inspection tasks need to be controlled in six degrees of freedom [1]. Although the control problem is kinematically similar to the control of a rigid body in six-dimensional space, the effects of hydrodynamic forces and uncertain hydrodynamic coefficients give rise to greater challenges in developing a control system for an AUV. In order for an AUV to achieve autonomy in the ocean environment, the control system must have adaptability and robustness to the non-linearity and time-variance of AUV dynamics, to unpredictable environmental uncertainties such as sea current fluctuation and reaction forces from collisions with sea creatures, and to the modeling difficulty of hydrodynamic forces.

The well-proven linear controller based on linear theory may fail to satisfy the control performance requirements of an AUV. Therefore, the interest of many AUV researchers has centred on non-linear control schemes based on non-linear theory.

Several recently developed advanced control methodologies have focused on improving the capability of tracking given reference position and attitude trajectories. The objective of this paper is to address the recent and future directions of AUV control methodologies.

Market Drivers

The offshore oil industry is currently pursuing offshore oil production at well depths (surface of the sea to sea bed) that previously would have been considered technically unfeasible or uneconomical [2]. A study [3], Figure 1, shows the maximum well depth over the past years. In 1949, the offshore industry was producing in about 5 m of water depth, and it took 20 years to reach about 100 m. However, in recent years, the maximum well depth has increased dramatically, and it indicates that the maximum well depth will continue to increase in the future. The maximum acceptable depth limit for a human diver is about 300 m. At depths beyond this, ocean floor mapping, inspection and repair operations of facilities must be executed by either UUVs or inhabited submersibles.

Figure 1. Offshore Oil Fields – Maximum well depth (0-2500 m) versus year (1949-2004). Data and figure source [3]
Future of ROVs and AUVs
Due to increasing offshore oil well depths, research is currently being undertaken to enhance the capabilities of ROVs so that they can become autonomous, hence the emergence of AUVs. AUVs are free from the constraints of an umbilical cable and are fully autonomous underwater robots designed to carry out specific pre-programmed tasks such as ocean floor mapping, manipulation, installation and repair operations of underwater facilities.
AUV Control

AUVs performing survey, manipulation or inspection tasks need to be controlled in six degrees of freedom. In order for an AUV to achieve autonomy in the ocean environment, the control system must have adaptability and robustness to the non-linearity and time-variance of AUV dynamics and to unpredictable environmental uncertainties such as sea current fluctuation and reaction forces from collisions with sea creatures. Experimental AUVs provide an excellent platform for the development and testing of various new control methodologies and algorithms to be implemented in developing advanced AUVs.

Highlights on Some of the Experimental AUVs

In 1995, with the intention to contribute to AUV development, the Autonomous Systems Laboratory (ASL) of the University of Hawaii designed and developed the Omni-directional Intelligent Navigator (ODIN-I). In 1995, ODIN-I was refurbished and ODIN-II was born. ODIN-I and ODIN-II have made precious contributions to the development and testing of various new control methodologies and algorithms.

In 2003, ODIN-III was developed. It has the same external configuration and major sensors as ODIN-II, which is a closed-framed spherical shaped vehicle with eight refurbished thruster assemblies and a one-DOF manipulator [4]. ODIN-III represents the experimental robotic class of AUV. Eight thrusters provide instantaneous, omni-directional (six DOF) capabilities. The on-board computer system used in ODIN-III is a PC104+ (Pentium computer CPU 300 MHz, 128 MB RAM). The vehicle can be controlled either by an on-board computer in the autonomous mode or by operator command using a ground station computer, with or without orientation control, via tether.

The main challenges in developing high performance experimental AUVs are the calibration of sensors, the design of an appropriate candidate control system, the formulation of control algorithms, and the implementation of appropriate actuation amplitudes based on inputs from sensors.
A miniature cylindrical shaped AUV called REMUS (Remote Environmental Monitoring Units) is designed to conduct underwater scientific experiments and oceanographic surveys in shallow water [5]. REMUS is equipped with physical, biological and optical sensors. A standard REMUS is 19 cm in diameter and 160 cm long. REMUS represents the experimental survey class of AUV. The vehicle is controlled by a PC104 IBM-compatible computer. REMUS has three motors for propulsion, yaw control and pitch control. A 68HC11 micro-controller is assigned to control the motors. The communication between the micro-controller and the CPU is achieved through an RS232 interface.
An even smaller experimental class AUV was developed
to demonstrate that the basic components of an AUV
can be packaged in a 3-inch diameter tube [6]. The
hull was made from standard 3-inch Schedule 40
PVC pipe. The nose cone was machined from a solid
piece of PVC. The tail was made from high density
foam bonded to a piece of aluminum tubing. The
AUV control system is governed by a Rabbit 3000 8-bit
microcontroller (RCM3200 module). The processor
runs at 44.2 MHz and is programmed in C language.
AUV Navigation and Sensor Deployments
An inertial navigation system (INS) is primarily used in AUVs. The INS is usually used to measure the angular displacements of yaw, pitch and roll. A pressure sensor is used to measure depth. For an experimental AUV, sonar sensors are used to measure the horizontal translational displacements. For the commercial scale
AUVs, the navigation suite usually consists of an INS, a
velocity sensor and a Global Positioning System (GPS)
receiver. The GPS is used to initialise the navigation
system, and for position determination when the AUV
surfaces at intervals.
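As a concrete illustration of how such a navigation suite is typically combined (a generic sketch, not a description of any specific vehicle above), between GPS fixes the position estimate is propagated by dead reckoning from the velocity sensor and the INS heading:

import numpy as np

def dead_reckon_step(pos_ne, vel_body, yaw, dt):
    """Propagate a north-east position estimate one step from body-frame
    velocity and INS yaw; a GPS fix on surfacing simply resets pos_ne."""
    c, s = np.cos(yaw), np.sin(yaw)
    vel_ne = np.array([c * vel_body[0] - s * vel_body[1],
                       s * vel_body[0] + c * vel_body[1]])
    return pos_ne + vel_ne * dt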
Some of the Past and Recent AUV Control Methodologies

Sliding Mode Control (SMC), Neural Net Control (NNC), Proportional Integral Derivative (PID) and Adaptive Controls were among the major control methodologies explored for the position and trajectory control of an AUV.
Sliding Mode Control (SMC)
SMC is a non-linear feedback control scheme. Yoerger [7], [8] developed an SMC methodology for the trajectory control of AUVs. The method can control non-linear dynamics directly, without the need for linearisation. This attribute is crucial for an AUV, which exhibits highly non-linear dynamics. The control methodology requires only a single design for the whole operating range of the vehicle. Therefore, it is easier to design and implement than a stack of linearised controllers.
The method was examined in simulation using a
planar model of the Experimental Autonomous
Vehicle (EAVE) developed by the University of New
Hampshire (UNH).
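For readers unfamiliar with the technique, the sketch below shows a generic single-axis sliding mode law of the kind this family of work builds on. It is illustrative only (the gains and boundary-layer width are arbitrary assumptions) and is not the specific controller of [7]-[9]; the saturation term is the standard remedy for the chattering problem noted later.

import numpy as np

def smc_depth_control(z, z_dot, z_ref, z_ref_dot, lam=1.0, K=50.0, phi=0.1):
    """Generic sliding mode law for one axis (e.g. depth): drive the state
    onto the sliding surface s = e_dot + lam*e, then slide to zero error."""
    e = z - z_ref
    e_dot = z_dot - z_ref_dot
    s = e_dot + lam * e
    # Saturated switching inside a boundary layer of width phi softens chatter
    return -K * np.clip(s / phi, -1.0, 1.0)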
Yoerger in 1991 [9] developed an adaptive SMC for the control of an experimental underwater vehicle.
Song in 2000 [10] proposed a Sliding Mode Fuzzy Controller (SMFC). Song determined the parameters of the SMFC using a method based on Pontryagin's maximum principle. Sliding mode control is robust to external disturbance and system modeling error. The SMFC takes advantage of the robustness property of the SMC and the interpolating property of fuzzy logic in such a way that the non-linear switching curve can be estimated and the robustness sustained. The effectiveness of the control philosophy was tested on the Ocean Explorer series AUVs developed by Florida Atlantic University. The problem with SMC is the chattering.
Neural Net Control (NNC)

NNC is an adaptive control scheme. NNC possesses the parallel information processing features of the human brain, with highly interconnected processing elements. The main attractive features of neural networks include self-learning and distributed memory capabilities [11].
J. Yuh in 1990 [12] proposed a multilayered neural network for AUV control, in which the error back-propagation method is used. The development of this scheme was motivated by [13], which showed that the teaching error signal, the discrepancy between the actual output signal and the teaching signal, can be approximated by the output error of the control system.
Tamaki Ura in 1990 [14] proposed a Self-Organizing Neural-net Controller System (SONCS) for AUVs. SONCS had a controller, a forward model and an evaluation mechanism, which were linked with initiation and modification tools. The dynamics of the vehicle were represented by the forward model. The difference between the actual motion of the vehicle and the pre-programmed mission is calculated by the evaluation mechanism. The fundamental concept of SONCS is to use backward-propagated signals to adjust the controller. The backward-propagated signals were obtained by evaluation of the controlled vehicle's motion.
K. Ishii in 1995 [15] proposed an on-line adaptation method, "Imaginary Training", to improve the time-consuming adaptation process of the original SONCS. In this method, SONCS tunes the controller network through an on-line process in parallel with the actual operation.
J. S. Wang in 2001 [16] proposed neuro-fuzzy control systems. Wang investigated the strengths and weaknesses of the rule formulation algorithms, using static adaptation and dynamic adaptation methods based on clustering techniques to create the internal structure for the generic types of fuzzy models.

Figure 2. AUV range (10-1000 km, log scale) as a function of hotel load (10 W, 40 W, 160 W) and speed (0.1-10 m/s). Calculations were made for a low drag 2.2 m long, 0.6 m diameter vehicle with high efficiency propulsion and batteries providing 3.3 kW.h of energy. Data and figure source: [21]
Proportional Integral Derivative (PID)
PID control addresses steady state and transient errors. PID has been widely implemented in the process industries. It is also used as a benchmark against which other control schemes are compared [17].

B. Jalving in 1994 [18] proposed three separate PID technique-based controllers for steering, diving and speed control. The roll dynamics were neglected. Jalving defended this by designing the vertical distance between the centre of gravity and the centre of buoyancy to be sufficiently large to suppress the moment from the propeller. The concept was tested on the Norwegian Defense Research Establishment (NDRE) AUV. The performance of the flight control system was reported to be satisfactory during extensive sea testing.
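For comparison with the non-linear schemes above, a textbook discrete PID loop of the kind used as a benchmark is sketched below; the structure is standard, and any gains or sample time used with it would be illustrative assumptions, not values from the NDRE AUV.

class PIDController:
    """Discrete PID: u = kp*e + ki*integral(e) + kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, error):
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv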
Adaptive Controls
J. Yuh in 2001 [19] proposed a non-regressor based adaptive controller. The adaptive law approximated and revised the parameters defined by the unknown bounds of the system parameter matrices and then tuned the controller gains based on the performance of the system, instead of relying on knowledge of the dynamic model. The controller was tested on ODIN in the pool of the University of Hawaii.

S. Zhao in 2004 [20] proposed an adaptive plus disturbance observer (DOB) controller. Zhao used a non-regressor based adaptive controller as an outer-loop controller and a DOB as an inner-loop compensator. Zhao carried out experimental work on the proposed adaptive DOB controller using ODIN. The experimental work involved determining the tracking errors on predetermined trajectories. The performance of the proposed adaptive plus DOB controller was compared with other controllers such as the PID controller, the PID plus DOB controller and the adaptive controller. The proposed adaptive DOB controller was reported to be effective in compensating the errors arising from external disturbances.
Energy Storage, Power Consumption and Efficiency
Limited onboard power supply gives rise to the principal challenge of AUV system design. Limited onboard energy storage primarily restricts the range of a vehicle. Key design parameters are specific energy (energy storage per unit mass) and energy density (energy per unit volume). Fairly new battery technologies such as lithium ion and lithium polymer have higher specific energy and energy density than conventional lead acid batteries and are now available for AUV applications.
Two types of power loads are typically identified by
the AUV designers. One is propulsion load (propulsion
and control surface actuator loads) and the other is
hotel load (on-board computers, depth sensor and
navigation sensors).
Propulsion load typically constitutes a large portion of
the power required to operate an AUV. The amount
of energy required to move the vehicle through the
water is a function of both the drag of the vehicle and
efficiency of the propulsion system [21]. Overall loss
of efficiency of the propulsion system is contributed
by electric motor efficiency losses, hydrodynamic
inefficiency of the propeller, shaft seal frictional losses
and viscous losses.
Reduction in hotel load is obviously beneficial,
and is aided by continuing advances in low power
consumption computers and electronic components.
As the hotel power consumption decreases, the
vehicle speed for best range also decreases. This was
shown by a study on an existing AUV [21], in Figure
2. Currently, this research has found only one source
of reference on the relationship between hotel load,
range and speed.
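The trade-off in Figure 2 can be reproduced with a simple energy balance: propulsion power grows roughly with the cube of speed while hotel power is constant, so range R(v) = v·E/(c·v³ + P_hotel) peaks at a speed that falls as the hotel load falls. A minimal sketch follows; the drag constant c is an assumed illustrative value, not taken from [21].

def auv_range_km(speed_ms, battery_kwh=3.3, hotel_w=40.0, c_drag=8.0):
    """Range from an energy balance: endurance = E / (c*v^3 + P_hotel)."""
    energy_j = battery_kwh * 3.6e6            # 3.3 kW.h as cited for Figure 2
    p_total_w = c_drag * speed_ms ** 3 + hotel_w
    return speed_ms * (energy_j / p_total_w) / 1000.0

# Sweeping speed shows the best-range speed dropping with the hotel load,
# the behaviour reported in [21]:
best_range, best_speed = max((auv_range_km(v / 10.0, hotel_w=10.0), v / 10.0)
                             for v in range(1, 40))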
Conclusion
Recently developed advanced control methodologies
focused on improving the capability of tracking predetermined reference positions and trajectories.
Control performance requirements of an AUV have
been achieved from control concepts based on nonlinear adaptive theory rather than the linear theory
based on fixed parameters. Due to the non-linear (time
variant) hydrodynamic coefficients and unforeseen
disturbances that the AUV control has to deal with,
future research work on AUV position and trajectory
control is likely to investigate the effectiveness of
newly developed non-linear DOB based control
concepts.
Due to the increasing oil well depth, future control methodologies of AUVs will most likely involve broadening the scope of control from the sole position and trajectory control method to incorporating intelligent systems for power-efficiency oriented mission planning, and will also involve intelligent control based on actuation of control surfaces and thrusters to maximise efficiency in consumption of the limited onboard power supply.
Due to increasing depth of operations expected
from future AUVs and the varying water pressures
and densities with depth, a key area for future
research and development in AUV control is likely to
investigate the possibility of incorporating innovative
power train technologies (between the electric motor
and propeller) with intelligent control in order that
the limited onboard power supply is consumed with
maximum efficiency. Variable propeller pitch with
intelligent control is also likely to optimise power
consumption.
Due to the deep ocean environment, which a human diver cannot reach, future control methodologies of work class AUVs are also likely to focus on autonomous coordination-based control of cooperating manipulators or humanoid end effectors between multiple AUVs performing underwater equipment installation or repair tasks. While the adaptive control concepts are better suited for position control of AUVs, the learning capability of the NNC system may be considered for coordination of manipulators. If the newly developed coordination-based control methodologies were to be represented in a virtual reality environment, it would be easier to evaluate the responsiveness and behavior of cooperating manipulators.
References
[1] Gianluca Antonelli, Stefano Chiaverini, Nilanjan Sarkar, and Michael West, "Adaptive Control of an Autonomous Underwater Vehicle: Experimental Results on ODIN", IEEE Transactions on Control Systems Technology, Vol. 9, No. 5, pp. 756–765, September 2001
[2] Louis L. Whitcomb, "Underwater Robotics: Out of the Research Laboratory and Into the Field", International Conference on Robotics and Automation, IEEE 2000
[3] D. Harbinson and J. Westwood, "Deepwater Oil & Gas – An Overview Of The World Market", Deep Ocean Technology Conference, New Orleans, 1998
[4] H. T. Choi, A. Hanai, S. K. Choi, and J. Yuh, "Development of an Underwater Robot, ODIN-III", Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 836–841, Las Vegas, Nevada, October 2003
[5] Mike Purcell, Chris von Alt, Ben Allen, Tom Austin, Ned Forrester, Rob Goldsborough and Roger Stokey, "New Capabilities of the REMUS Autonomous Underwater Vehicle", IEEE 2000
[6] Aditya S. Gadre, Jared J. Mach, Daniel J. Stilwell, Carl E. Eick, "Design of a Prototype Miniature Autonomous Underwater Vehicle", Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 842–846, Las Vegas, Nevada, October 2003
[7] D. Yoerger, J. Newman, "Demonstration of Closed-loop Trajectory Control of an Underwater Vehicle", Proceedings of OCEANS 1985, Vol. 17, pp. 1028–1033, November 1985
[8] D. Yoerger, J. Slotine, "Robust Trajectory Control of Underwater Vehicles", IEEE Journal of Oceanic Engineering, Vol. OE-10, No. 4, pp. 462–470, October 1985
[9] D. Yoerger, J. Slotine, "Adaptive Sliding Control of an Experimental Underwater Vehicle", Proceedings of the 1991 IEEE International Conference on Robotics and Automation, pp. 2746–2751, Sacramento, California, April 1991
[10] F. Song and S. Smith, "Design of Sliding Mode Fuzzy Controllers for Autonomous Underwater Vehicle without System Model", OCEANS 2000 IEEE/MTS, pp. 835–840, 2000
[11] Arthur G. O. Mutambara, "Design and Analysis of Control Systems", CRC Press, 1999
[12] J. Yuh, "A Neural Net Controller for Underwater Robotic Vehicles", IEEE Journal of Ocean Engineering, Vol. 15, No. 3, pp. 161–166, July 1990
[13] J. Yuh, R. Lakshmi, S. J. Lee, and J. Oh, "An Adaptive Neural-Net Controller for Robotic Manipulators", in Robotics and Manufacturing, M. Jamshidi and M. Saif, Eds. New York: ASME, 1990
[14] Tamaki Ura, Teruo Fujii, Yoshiaki Nose and Yoji Kuroda, "Self-Organizing Control System for Underwater Vehicles", IEEE 1990
[15] Kazuo Ishii, Teruo Fujii, and Tamaki Ura, "An On-line Adaptation Method in a Neural Network Based Control System for AUVs", IEEE Journal of Ocean Engineering, Vol. 20, No. 3, pp. 221–228, July 1995
[16] Jeen-Shing Wang and C. S. George Lee, "Efficient Neuro-Fuzzy Control Systems for Autonomous Underwater Vehicle Control", Proceedings of the 2001 IEEE International Conference on Robotics and Automation, pp. 2986–2991, Seoul, Korea, 2001
[17] Ahmad M. Ibrahim, "Fuzzy Logic for Embedded Systems Applications", Elsevier Science, 2004
[18] Bjørn Jalving, "The NDRE-AUV Flight Control System", IEEE Journal of Oceanic Engineering, Vol. 19, No. 4, pp. 497–501, October 1994
[19] J. Yuh, Michael E. West, P. M. Lee, "An Autonomous Underwater Vehicle Control with a Non-regressor Based Algorithm", Proceedings of the 2001 IEEE International Conference on Robotics and Automation, pp. 2363–2368, Seoul, Korea, May 2001
[20] S. Zhao, J. Yuh, and S. K. Choi, "Adaptive DOB Control for AUVs", Proceedings of the 2004 IEEE International Conference on Robotics and Automation, pp. 4899–4904, New Orleans, LA, April 2004
[21] James Bellingham, MIT Sea Grant, Cambridge, MA, USA (doi:10.1006/rwos.2001.0303)
Kamarudin Shehabuddeen received the
B. Eng. degree in mechanical engineering
from University of Sunderland, UK, in 1996,
and M.S. degree in engineering design
from Loughborough University, UK, in
1998. He is registered with the Board of
Engineers Malaysia (BEM) as a Professional
Engineer (P.Eng.) in the mechanical
engineering discipline.
He worked in Wembley I.B.A.E. Sdn. Bhd., Shah Alam, Malaysia, as
a mechanical design engineer from 1996 – 1997. After receiving
the M.S. degree, he worked with Sanyco Grand Sdn Bhd., Shah
Alam, Malaysia, as a test rig design engineer for automotive brake
master pumps. In 1999, he joined Universiti Teknologi PETRONAS
(UTP), Malaysia, where he is currently a lecturer in the department
of mechanical engineering. His current research interests include
neuro-fuzzy based control, adaptive controls, Global Positioning
System (GPS) based navigation of autonomous underwater
vehicle and unmanned air vehicle. Currently he is pursuing his
PhD degree in mechanical engineering at UTP.
Fakhruldin Mohd. Hashim is currently an Associate Professor at the Department of Mechanical Engineering, Universiti Teknologi PETRONAS (UTP). Formerly the head of the department and now the Deepwater Technology R&D cluster leader, he holds a BEng in Mechanical Engineering (RMIT), an MSc (Eng) in Advanced Manufacturing Systems & Technology (Liverpool) and a PhD in Computer Aided Engineering (Leeds).

Currently he is a UTP Senate and Research Strategy Committee member, and the Chairman of the Research Proposal Evaluation Committee for UTP. Dr. Fakhruldin's areas of interest include Engineering Systems Design, Sub-sea Facilities and Underwater Robotics. He has been engaged as a consultant in over 20 engineering projects and has secured a number of major R&D grants during his 20-year career as an academician. He is one of the assessors for the Malaysian Qualifications Agency (MQA) and an external examiner for one of the local public universities. He is also a trained facilitator in strategic planning and organisational management.
Mission-Oriented Research: GREEN TECHNOLOGY
ENHANCEMENT OF HEAT TRANSFER OF A
LIQUID REFRIGERANT IN TRANSITION FLOW
IN THE ANNULUS OF A DOUBLE-TUBE CONDENSER
R. Tiruselvam1, W. M. Chin1 and Vijay R. Raghavan*
*Universiti Teknologi PETRONAS, 31750 Tronoh, Perak Darul Ridzuan, Malaysia
1OYL R&D Centre, Selangor, Malaysia
Abstract
The basis of the present study is that augmentation can reduce the temperature difference across the condenser and evaporator of a refrigeration system and increase its energy efficiency. This research is conducted on an inner tube having a 3D corrugated outer surface. The annulus-side coefficients are determined using the Wilson Plot technique. It was found that the form of correlation by Hausen for the transitional annular flow region, with adjusted empirical constants, predicts the subcooled transitional liquid flow to good accuracy. For the single phase heat transfer correlation proposed, all predicted data lie within ±5% of the experimental values.
Keywords: Double-Tube Condenser, Transition Flow in Annulus, Augmented Surface, Heat Transfer Enhancement.
Introduction
Condensation heat transfer, both inside and outside
horizontal tubes, plays a key role in refrigeration, air-conditioning and heat pump applications. In recent
years, the adoption of substitute working fluids and
new enhanced surfaces for heat exchangers has
thrown up new challenges in condensation heat
transfer research. Well-known and widely established
correlations to predict heat transfer during
condensation may prove to be inaccurate in some
new applications, and consequently a renewed effort
is now being dedicated to the characterization of flow
conditions and associated predictive procedures of
heat transfer with condensing vapour. Much research
effort has been directed at miniaturizing thermal
systems and identifying innovative techniques
for heat transfer enhancement. These techniques
are classified as: passive enhancement techniques
and active enhancement techniques. The double-
tube condenser is an example of the use of passive
enhancement technique. For the given arrangement
of double tube, the refrigerant flows in the annulus
and the cooling water flows in the inner tube, shown
in Figure 1.
Figure 1. Concentric Tube Configuration (Double Tube Heat
Exchanger)
This paper was presented at the 5th European Thermal Sciences Conference, Netherlands
18 - 22 May 2008
Objectives
The purpose of the study is to obtain the single
phase heat transfer coefficient for the subcooled
liquid refrigerant flow in the annulus of the double
tube exchanger. This research is conducted for a 3D
corrugated outer surface enhancement (Turbo-C
Copper Top Cross) on the inner tube used in a double-tube design. It is difficult to generate a mathematical
model for such a complex geometry, and no standard
correlation for the heat transfer coefficient is available
for the Turbo-C copper top cross. Modification
of equations from previous literature will require
extensive measurements of pressure and temperature
of the refrigerant and water sides, and the surface
temperature of the fin and fin root surfaces. With the
above mentioned motivation, the current research
aims to characterize the heat transfer performance
of the Turbo-C Copper Top Cross tube for subcooled
liquid and the enhancement in comparison with a
plain tube. Experiments are conducted to obtain the
necessary data using R-22 (Chlorodifluoromethane) as
test fluid and plain tube and Copper Top Cross as test
surfaces.
Experiments
The commissioning of the test section was carried
out using R-22 refrigerant in the annulus and water
as coolant in the inner tube. The primary objective of
these tests is to determine the correlation for overall
heat transfer of the condensation process. The overall
energy flow in the test section can be determined
using three independent routes. These routes use:
• temperature increase in the coolant flow
• the mass of condensate collected in the test
section
• circumferentially averaged temperature drop
across the tube wall
Deans et al. (1999) have reported that the maximum
difference in the calculated overall energy flows
using these three routes to analyse the condensation
process was less than 5%. The temperature increase
in the coolant flow is chosen in the present case due
to its ease in experimental measurements as well as
in the Wilson Plot. The test facility was designed and
assembled so as to cater for this need. The test facility
is capable of testing either plain or enhanced straight
double-tube condenser at conditions typical of a
vapour compression refrigeration system.
Setup
The double-tube heat exchanger configuration used
in this study is a one-pass, single-track system. A single-track system means that only one test section
and one refrigerant compressor may be installed in
the system for individual testing. The compressed
refrigerant is initially condensed into subcooled liquid
using a pre-condenser before it passes through the
double-tube condenser only once as it travels from
the high side (compressor discharge line) to the low side (compressor suction line), as shown in Figure
2. The single-track and single-pass system makes it
possible to obtain one successful data point for every
setting. If the operating conditions such as refrigerant
mass flow rate or compressor discharge pressure
are varied (non-geometric variables), it is possible to
obtain additional data points without changing the
test section or compressor. The use of the electronic
expansion valve (EXV) permits such operation. Use
of a conventional expansion device such as the
capillary tube will involve repetition of work where
the refrigerant circuit has to be vacuumed, leak tested
and recharged for the individual length of capillary
tube needed to throttle the refrigerant flow. The
two main variable components in this test facility
are the test section and the refrigerant compressor.
The facility is designed and installed with valves
and fittings for both the refrigerant medium and
the cooling water. This is to allow for quick and easy
replacement of the test section and/or refrigerant
compressor. Each refrigerant compressor has an
individual range of refrigerant flow rate, depending
on the amount of refrigerant charge and compressor
suction and discharge pressure. Three different
refrigerant compressors were chosen (1HP, 2HP, and
3HP) to provide a sufficient range of refrigerant mass
flow rates. A straight horizontal test section was used
in this study, with two types of inner tube, i.e. Plain
Tube (Plain Annulus) and Turbo-C (Enhanced Annulus),
Figure 2. Schematic Diagram of Experimental facility
Table 1. Description of the Test Section Inner Tubes

Description                       Plain Tube        Turbo-C
Inner Tube Outer Diameter, do     22.2 mm           22.2 mm
Inner Tube Inner Diameter, di     17.82 mm          17.82 mm
Length                            2980 mm           2980 mm
Outer Surface                     Smooth Surface    3-D Integral Fin
Inner Surface                     Smooth Surface    Smooth Surface
Other Information                 N.A.              Pitch of Fin = 0.75 mm;
                                                    Pitch of Corrugation = 5 mm;
                                                    Fin height = 0.8 mm
with data as shown in Table 1. An illustration of the
enhanced annulus is shown in Figure 3.
Data Reduction & Experimental Uncertainties
In the data run the subcooled liquid from the precooler enters and exits in the same phase. The heat
transferred to the cooling water is kept sufficiently low
to allow the subcooled liquid temperature at the test
section exit to be below the saturation temperature;
hence no phase change occurs. This will allow us to
obtain the single phase heat transfer coefficient for
subcooled liquid.
Q SC = M R ( ∆h) R = M C (Cp )C (∆T )C = U SC (AC )o ( LMTD )
(1)
(2)
1
1
1
=
+ RW +
U SC (AC )o hSC (AC )o
hC (AC )i From equation (1) and (2) the heat transfer coefficient
for single phase liquid is calculated using the Wilson
Plot technique by subtracting the thermal resistance
of the tube wall and cooling water from the overall
thermal resistance. This gives us the heat transfer
coefficient for subcooled liquid given in equation (3).
hSC
 (A ) LMTD 
 (AC )o
 − ((AC )o RW )− 
=  C o
 (A ) h
QSC

 C i C

−1
 

 (3)
Figure 3. Illustration of enhanced surface (Turbo-C)
RW
 kC
hC = 
 Di

 fC 
 Re C Pr C


 2 


0 .5


0 . 63
 f 
 1 . 07 +  900  − 
 + 12 . 7  2  (Pr C − 1 )(Pr C
 Re 
(
)
1
10
Pr
+



C
C

 

(4)
)
−1
3






(5)
Uncertainties in the experimental data were calculated
based on the propagation of error method, described
by Kline and McClintock (1953). Accuracy for various
measurement devices, refrigerant properties and
water properties are given in Table 3 and 4.
The heat transfer coefficient hi for cooling water in
the tube is got from Petukhov and Popov(1963), for
the application range, 0.5 < Pr < 106 and 4000 < Re <
5x106, with a reported uncertainty of 8%.
16
d 
ln  O 
 di 
=
2π k C u L PLATFORM VOLUME Six NUMBER TWO july - december 2008
Mission-Oriented Research: GREEN TECHNOLOGY
Table 2. Uncertainties of Various Measurement Devices

Parameter (Make, Type)                                  Uncertainty
Water Volume Flow (YOKOGAWA, Magnetic Flow Meter)       ±0.08% of reading
Refrigerant Mass Flow Meter (YOKOGAWA, Coriolis)        ±0.05% of reading
Refrigerant Pressure (HAWK, Pressure Transducer)        ±0.18 psig
Water Temperature (CHINO, Pt-100 RTD)                   ±0.1 °C
Refrigerant Temperature (VANCO, Type-T Thermocouple)    ±0.6 °C

Data provided by OYL R&D Centre Malaysia
Table 3. Uncertainties of Properties

Predicted Properties (R-22)     Uncertainty     Source
Density                         ±0.1%           Kamei et al. (1995)
Isobaric Heat Capacity          ±1.0%           Kamei et al. (1995)
Viscosity                       ±2.1%           Klein et al. (1997)
Thermal Conductivity            ±3.7%           McLinden et al. (2000)

Predicted Properties (Water)    Uncertainty     Source
Density                         ±0.02%          Wagner and Pruß (2002)
Isobaric Heat Capacity          ±0.3%           Wagner and Pruß (2002)
Viscosity                       ±0.5%           Kestin et al. (1984)
Thermal Conductivity            ±0.5%           Kestin et al. (1984)

Predicted Properties (Copper)   Uncertainty     Source
Thermal Conductivity            ±0.5%           Touloukian et al. (1970)

Property data obtained from ASHRAE (2001)
Table 4. Uncertainty Analysis for Experimental Data

                Plain Tube, hsc (W/m².K)     Turbo-C, hsc (W/m².K)
Compressor      Highest      Lowest          Highest      Lowest
1 HP            ±5.40%       ±4.81%          ±5.32%       ±5.04%
2 HP            ±4.90%       ±3.86%          ±4.24%       ±3.95%
3 HP            ±4.42%       ±3.72%          ±3.99%       ±3.89%
Uncertainties in the single phase heat transfer coefficient (subcooled liquid) were calculated for the various test runs in the smooth and enhanced annulus using the root-sum-square (RSS) method. Experimental results and the associated uncertainties are listed in Table 4. The uncertainties are dominated by those associated with the refrigerant and water properties, and higher uncertainties were found at higher refrigerant mass flow rates.
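As a minimal illustration (not the authors' code), the RSS combination of independent error contributions described by Kline and McClintock (1953) can be written as:

```python
import math

def rss_uncertainty(contributions):
    """Propagation of error by root-sum-square (Kline and McClintock, 1953).
    `contributions` holds (sensitivity, uncertainty) pairs: the partial
    derivative of the result with respect to each input, times that
    input's uncertainty."""
    return math.sqrt(sum((s * u) ** 2 for s, u in contributions))

# Hypothetical example: three independent fractional contributions to h_sc
print(rss_uncertainty([(1.0, 0.021), (1.0, 0.037), (0.5, 0.008)]))
```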
Results & Analysis

This experimental work was conducted to develop heat transfer correlations so that the results of the study could be used directly by the HVAC community for the design and development of double tube condenser systems. The first step in this effort was a comparison of the current data with available correlations, which range from purely theoretical to purely empirical. All fluid properties required in the data reduction and the correlations were evaluated using property tables in ASHRAE (2001). All subsequent analysis of correlations is given in non-dimensional form, as shown in Figure 4.

A comprehensive review of the literature led to the selection of a few correlations which best represent the annulus geometries and the flow characteristics of the fluids in the test section. The selected correlation was compared with experimental work on the plain tube, in order to standardise the apparatus and assess the applicability of the selected correlations. The one that most accurately represents the range and conditions of interest was used as the starting point. These correlations give the basic representation of the average heat transfer coefficients for a given set of parametric conditions. Next, it was assumed that the presence of the fluid inlet and exit fittings (both refrigerant and cooling water) and of the surface enhancement did not alter the form of the heat transfer coefficient substantially, and that any difference present can be handled by adjusting the empirical constants.

Figure 4. Comparison of augmented single phase Nusselt number

Based on the above conditions, the correlation by Hausen (1934) was found to be the most suitable for the following analysis; it applies from developing transitional flow up to fully developed turbulent flow, and is given in equation (6). The use of Hausen (1934)'s correlation was reviewed and presented by Knudsen and Katz (1950) and by Knudsen et al. (1999).
$$\mathrm{Nu} = C\,\mathrm{Re}^{2/3}\,\mathrm{Pr}^{1/3}\left[1+\frac{D_i}{D_O}\right]^{2/3}, \quad \text{for } 2{,}000 < \mathrm{Re} < 35{,}000 \qquad (6)$$
Considering that the Reynolds number for the tests is below 10,000, the flow of subcooled liquid in both the plain and enhanced annulus is taken to be in the transition region. Equation (6) was used to evaluate the Nusselt type correlation given by Hausen (1934). Thus, the single phase heat transfer coefficient for subcooled liquid in the plain annulus is:

$$(\mathrm{Nu})_{\mathrm{PLAIN\ TUBE}}^{\mathrm{LIQUID}} = 0.0055\,(\mathrm{Re}_h)^{0.8058}\,(\mathrm{Pr}_R)^{1/3}\left[1+\frac{D_i}{D_O}\right]^{2/3} \qquad (7)$$

Equation (6) was also used to evaluate the Nusselt type correlation for the enhanced annulus, since applicable correlations for transition flow in the enhanced annulus are lacking. Thus, the single phase heat transfer coefficient for subcooled liquid flow in the enhanced annulus is:

$$(\mathrm{Nu})_{\mathrm{TURBO\text{-}C}}^{\mathrm{LIQUID}} = 0.0086\,(\mathrm{Re}_h)^{0.8175}\,(\mathrm{Pr}_R)^{1/3}\left[1+\frac{D_i}{D_O}\right]^{2/3} \qquad (8)$$
Comparison of the experimental Nusselt values from (7) and (8) against the predicted Nusselt values is illustrated in Figure 5.
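A minimal sketch of evaluating these fitted correlations follows; it is illustrative only, and the Re_h, Pr_R and outer-tube diameter values shown are hypothetical placeholders rather than data from the study.

```python
def nu_transition(Re_h, Pr_R, D_i, D_O, C, n):
    """Hausen-type correlation of Eqs. (7)-(8):
    Nu = C * Re_h**n * Pr_R**(1/3) * (1 + D_i/D_O)**(2/3)."""
    return C * Re_h ** n * Pr_R ** (1.0 / 3.0) * (1.0 + D_i / D_O) ** (2.0 / 3.0)

# Fitted constants from the study; the operating values below are placeholders.
nu_plain = nu_transition(8000.0, 3.5, 0.0222, 0.0400, C=0.0055, n=0.8058)
nu_turbo = nu_transition(8000.0, 3.5, 0.0222, 0.0400, C=0.0086, n=0.8175)
print(nu_plain, nu_turbo)
```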
Overview Remarks
The overall objective of the present study was to develop the single phase heat transfer coefficient for subcooled liquid in transition flow. The correlation by Hausen (1934) was used for both the plain annulus and the enhanced annulus in the transition region. The new empirical constants resulted in good predictions for transitional subcooled liquid flow in both the plain and the enhanced annulus. Examining the accuracy of the proposed single phase heat transfer correlations, all predicted data lie within ±5% of the experimental values: the subcooled liquid flow in the plain annulus has an absolute deviation of 2.52%, and similar results are observed for the enhanced annulus, with 2.46%.

Figure 5. Experimental Nu vs. Predicted Nu for Subcooled Liquid
Nomenclature

C        Coefficient in heat transfer correlation
(∆h)R    Enthalpy change of liquid R-22 (J/kg)
LMTD     Log mean temperature difference
Reh      Reynolds number of liquid R-22 based on annulus hydraulic diameter
U        Average overall heat transfer coefficient (W/m²K)
di       Inner tube inner diameter
do       Inner tube outer diameter
Di       Outer tube inner diameter

Subscripts:
C        cooling water
cu       copper
R        refrigerant R-22
SC       subcooled liquid
W        wall
Acknowledgement
The authors wish to acknowledge the support provided for this
research by OYL Research and Development Sdn. Bhd.
References

[1] ASHRAE, 2001, "Fundamentals Handbook", Appendix E, Thermophysical Properties
[2] Deans, J., Sinderman, A., Morrison, J. N. A., 1999, "Use of the Wilson Plot Method to Design and Commission a Condensation Heat Transfer Test Facility", Two-Phase Flow Modelling and Experimentation, Edizioni ETS, pp. 351-357
[3] Hausen, H., 1934, Z. Ver. Dtsch. Ing. Beih. Verfahrenstech., Vol. 91, No. 4
[4] Kamei, A., Beyerlein, S. W., and Jacobsen, R. T., 1995, "Application of Nonlinear Regression in the Development of a Wide Range Formulation for HCFC-22", International Journal of Thermophysics, Vol. 16, No. 5, pp. 1155-1164
[5] Kestin, J., Sengers, J. V., Kamgar-Parsi, B., and Levelt Sengers, J. M. H., 1984, "Thermophysical Properties of Fluid H2O", Journal of Physical and Chemical Reference Data, Vol. 13, No. 1, pp. 175-183
[6] Klein, S. A., McLinden, M. O., 1997, "An Improved Extended Corresponding States Method for Estimation of Viscosity of Pure Refrigerants and Mixtures", International Journal of Refrigeration, Vol. 20, No. 3, pp. 208-217
[7] Kline, S., and McClintock, F., 1953, "Describing Uncertainties in Single-Sample Experiments", Mechanical Engineering, Vol. 75, pp. 3-8
[8] Knudsen, J. G., and Katz, D. L., 1950, Chemical Engineering Progress, Vol. 46, p. 490
[9] Knudsen, J. G., Hottel, H. C., Sarofim, A. F., Wankat, P. C., Knaebel, K. S., 1999, "Heat and Mass Transfer", Ch. 5, McGraw-Hill, New York
[10] McLinden, M. O., Klein, S. A., Perkins, R. A., 2000, "An Improved Extended Corresponding States Model of Thermal Conductivity for Refrigerants and Refrigerant Mixtures", International Journal of Refrigeration, Vol. 23, pp. 43-63
[11] Petukhov, B. S., and Popov, V. N., 1963, "Theoretical Calculation of Heat Exchange in Turbulent Flow in Tubes of an Incompressible Fluid with Variable Physical Properties", High Temp., Vol. 1, No. 1, pp. 69-83
[12] Tiruselvam, R., 2007, "Condensation in the Annulus of a Condenser with an Enhanced Inner Tube", M.S. thesis (research), University Tun Hussein Onn Malaysia
[13] Tiruselvam, R., Vijay R. Raghavan, and Mohd. Zainal B. Md. Yusof, 2007, "Refrigeration Efficiency Improvement Via Heat Transfer Enhancement", Paper No. 030, Engineering Conference "EnCon", Kuching, Sarawak, Malaysia, Dec. 27-29
[14] Touloukian, Y. S., Powell, R. W., Ho, C. Y., and Klemens, P. G., 1970, "Thermophysical Properties of Matter, Vol. 1, Thermal Conductivity, Metallic Elements and Alloys", IFI/Plenum, New York
[15] Wagner, W., and Pruß, A., 2002, "New International Formulation for the Thermodynamic Properties of Ordinary Water Substance for General and Scientific Use", Journal of Physical and Chemical Reference Data, Vol. 31, No. 2, pp. 387-535
R. Tiruselvam is currently a PhD candidate
at the Faculty of Mechanical Engineering,
Universiti Teknologi PETRONAS. He received his Master of Engineering (Research) in Mechanical Engineering from University Tun Hussein Onn Malaysia (UTHM) in 2007. Previously, he was
conferred the B. Eng. (Hons) (Mechanical)
from UTHM and Dip. Mechanical Eng. from
POLIMAS. His research interest is mainly in heat transfer
enhancement in thermal systems. He has been collaborating with
OYL R&D Centre for a period of 4 years. Currently he holds a
position as Research Engineer in OYL R&D Centre.
Chin Wai Meng is the Research Manager of
OYL Research & Development Centre Sdn
Bhd, the research and design centre for
OYL Group, whose primary business is in Heating, Ventilation and Air-Conditioning (HVAC). He has been with the
company for the past 19 years where his
expertise is in the testing of air-conditioning
units to determine performance and
reliability. He also has experience in the design and construction
of psychrometric test facilities. For the past 3 years, he has
established and led the Research Department which specialises in
the research on Heat Transfer and Refrigeration Systems. Mr. Chin
holds a Bachelor’s degree in Mechanical Engineering from
Universiti Malaya and he is currently pursuing a Master of Science
in Universiti Teknologi Petronas.
Vijay R. Raghavan is a professor of
Mechanical Engineering at Universiti
Teknologi Petronas. Earlier he was a
professor of Mechanical Engineering at
Universiti Teknologi Tun Hussein Onn
Malaysia (UTHM) and at the Indian Institute
of Technology Madras. His areas of interest
are Thermofluids and Energy. He obtained
his PhD in Mechanical Engineering in the
year 1980 from the Indian Institute of Technology. In addition to
teaching and research, he is an active consultant for industries in
Research and Development, Design and Troubleshooting.
Mission-Oriented Research: PETROCHEMICAL CATALYSIS TECHNOLOGY
Fenton and photo-Fenton Oxidation of
Diisopropanolamine
Abdul Aziz Omar*, Putri Nadzrul Faizura Megat Khamaruddin and Raihan Mahirah Ramli
Universiti Teknologi PETRONAS, 31750 Tronoh, Perak Darul Ridzuan, Malaysia.
*aaziz_omar@petronas.com.my
ABSTRACT
Diisopropanolamine has been widely used as an additive in cosmetic and personal care products, as a corrosion inhibitor and lubricity agent in metalworking fluids, as a drug intermediate in the pharmaceutical industry, and as a solvent to remove acid gas from raw natural gas. Although it is widely applied in industry, an in-situ treatment method for diisopropanolamine-contaminated wastewater has not yet been developed. The applicability of Fenton's reagent and photo-Fenton for the degradation of diisopropanolamine was investigated. The effect of H2O2 concentration on the degradation was studied by varying the concentration of H2O2 while keeping the initial COD, concentration of FeSO4, pH and temperature constant at 50,000 mg/L, 4.98 g/L, 3 and 30 °C, respectively. Degradation of 31% and 24% was achieved for Fenton and photo-Fenton, respectively, at an Fe:H2O2 ratio of 1:50. Further research will be conducted to increase the degradation efficiency and determine other optimum parameters.
Keywords: Diisopropanolamine, wastewater, Fenton oxidation, photo-Fenton oxidation

This paper was presented at the UK-Malaysian Engineering Conference 2008, London, 14-16 July 2008.
Introduction
Diisopropanolamine (DIPA) is a secondary amine of the aliphatic amine group. It has been widely used as an additive in cosmetic and personal care products, as a corrosion inhibitor and lubricity agent in metalworking fluids, as a drug intermediate in the pharmaceutical industry, and as a solvent to remove acid gas from raw natural gas. Wastewater contaminated with DIPA has an extremely high chemical oxygen demand (COD), which exceeds the limits set by local authorities. High influent COD levels make biological treatment of the wastewater infeasible. Typical practice is to store the wastewater in a buffer tank prior to pick-up by a licensed scheduled waste contractor. The cost of wastewater disposal is substantial because of the large volume generated.
DIPA is highly water soluble; the removal of this organic pollutant is therefore difficult, and very little literature is available on the topic. Extracting the pollutant may be one possible way to solve the problem, but the production of secondary waste may still be an issue. In the past several years, advanced oxidation processes (AOPs) have attracted many researchers, and numerous reports are available on organic pollutants that are degradable through this technique. According to Huang et al. [1], Fenton's reagent was discovered over 100 years ago, but its application as a destroying agent for organic pollutants was only explored several decades later. Reported applications of Fenton's reagent include the degradation of the azo dye Amido black [2], textile effluents [3], cork cooking wastewater [4], and pharmaceutical waste [5]. The Fenton system is an attractive oxidant for wastewater treatment due
to the fact that iron is a highly abundant and non-toxic element, and that H2O2 is easy to handle and environmentally benign [6].
Theory
Glaze et al. [7] defined AOPs as “near ambient
temperature and pressure water treatment processes
which involve the generation of hydroxyl radicals in
sufficient quantity to effect water purification”. The
main feature of AOP is the hydroxyl radical, ·OH, which
has a high oxidation potential and acts rapidly with
most organic compounds to oxidize them into CO2
and water. Hydroxyl radicals have the second largest
standard redox potential after fluorine, which is 2.8 V
[8].
Hydrogen peroxide (H2O2) is a strong oxidant, but alone it is not effective against high concentrations of refractory contaminants such as amines at a reasonable dose. Thus, a relatively non-toxic catalyst like iron is introduced to increase the rate of ·OH radical production. Fenton's reagent consists of ferrous ion (Fe2+) and H2O2, which generate the ·OH radical according to

Fe2+ + H2O2 → Fe3+ + ·OH + OH−   (1)

The ·OH radical may also be scavenged by reaction with another Fe2+:

·OH + Fe2+ → Fe3+ + OH−   (2)

The reaction between Fe3+ and H2O2 slowly regenerates Fe2+. In their report, Walling and Goosen [12] simplified the overall Fenton chemistry by considering the dissociation of water as in Equation (3), which suggests that an acidic environment is needed in order to dissociate H2O2 by the presence of H+:

2Fe2+ + H2O2 + 2H+ → 2Fe3+ + 2H2O   (3)

Another way to initiate the generation of the ·OH radical is by supplying UV radiation to a system containing H2O2. The direct photolysis of H2O2 leads to the formation of the ·OH radical:

H2O2 + hν → 2·OH   (4)

The combination of UV irradiation with the Fenton system, known as photo-Fenton, has also been a promising technique in wastewater treatment, and research has been conducted on its application to some organic compounds. Based on the literature, the presence of light increases the production rate of the ·OH radical through an additional reaction, as in (5):

Fe(OH)2+ + hν → Fe2+ + ·OH   (5)

According to Matthew [8], although reactions (4) and (5) are important, the most vital aspect of photo-Fenton is the photochemical cycling of Fe3+ back to Fe2+.

There are a few main factors affecting the process efficiency: the concentration of H2O2, the concentration of FeSO4, the UV power dosage, temperature and pH are among the main contributing factors.

Effect of H2O2 concentration

H2O2 plays an important role as the oxidising agent in the AOP. It is vital to optimise the H2O2 concentration because the main cost of these methods is the cost of H2O2, and an excessive dose of H2O2 triggers side-effects [9] due to self-scavenging of the ·OH radical by H2O2 itself (Eqn. 6). Jian et al. [2] reported in their study that the degradation of Amido black 10B decreased at high H2O2 concentrations.

·OH + H2O2 → H2O + HO2·   (6)

Effect of the initial Fe2+ concentration

An optimum concentration of Fe2+ is vital for the reaction: too low a concentration slows the generation of the ·OH radical, reducing COD removal, while too high a concentration leads to self-scavenging of the ·OH radical by Fe2+ (Eqn. 2) [2].
Effect of UV power dosage

Hung et al. [9] reported that COD removal could be increased by increasing the UV power dosage, owing to the faster formation rate of the ·OH radical. The UV dosage can be controlled by the number of lamps inside the reactor.
Effect of Temperature

Temperature is a critical factor for reaction rate, product yield and distribution. The reaction rate is expected to increase with temperature; however, in these processes no optimal temperature was detected [10, 11]. Some researchers, though, reported otherwise: Anabela et al. [4] found an optimal temperature of 30 °C in the degradation of cork cooking wastewater.
Effect of pH

Previous studies have shown that an acidic level near pH 3 is usually optimal. High pH values (>4) decrease the generation of the ·OH radical because of the formation of ferric hydroxo complexes, while at too low a pH (<2) the reaction slows down due to the formation of [Fe(H2O)6]2+ [11].
Research Methodology
Materials
DIPA was obtained from Malaysia Liquefied Natural Gas (MLNG); H2O2 and NaOH were from Systerm; FeSO4·7H2O was from Hamburg Chemicals; and H2SO4 was from Mallinckrodt.
Experimental procedure

A stirred jacketed glass reactor was used to monitor the progress of the reaction. Simulated DIPA waste of the desired concentration was prepared, and concentrated H2SO4 and 1M NaOH were used to adjust the solution to the desired pH. The ferrous sulfate catalyst was added to the amine solution at the beginning of the experiment and stirred to obtain a homogeneous solution, followed by the addition of H2O2. The reaction started immediately, and the temperature was maintained by circulating cooling water through the jacket. Samples were taken at regular intervals for COD analysis; COD was measured with a Hach 5000.
For photo-Fenton oxidation, the experimental procedure was similar to the Fenton process, except for the additional UV irradiation: a quartz tube containing a 4 W UV lamp was inserted into the reactor.
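As a minimal illustration of the dosing used in these experiments (not the authors' procedure code), the H2O2 mass corresponding to a target Fe:H2O2 ratio can be computed as below. A molar ratio is assumed here, which is common in the Fenton literature but not stated explicitly in this paper.

```python
M_FESO4_7H2O = 278.01  # g/mol, FeSO4·7H2O
M_H2O2 = 34.01         # g/mol

def h2o2_dose(feso4_g_per_L, fe_to_h2o2):
    """H2O2 (g/L of pure peroxide) for a target Fe:H2O2 molar ratio of 1:fe_to_h2o2."""
    mol_fe = feso4_g_per_L / M_FESO4_7H2O  # one Fe2+ per formula unit
    return mol_fe * fe_to_h2o2 * M_H2O2

print(h2o2_dose(4.98, 50))  # about 30.5 g/L at the 1:50 ratio studied
```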
Analysis

Samples of 3 ml were taken at regular intervals and put into test tubes containing 4 ml of 1M NaOH for COD analysis. The NaOH raises the sample pH to 12, at which hydrogen peroxide becomes unstable and decomposes into oxygen and water; this method also precipitates iron as ferric hydroxide, Fe(OH)3 [12]. The Fe(OH)3 precipitate was then separated from the solution using a microfilter. To further ensure that H2O2 did not interfere with the COD measurement, the test tubes containing the samples were heated in boiling water for 10 minutes to remove the remaining H2O2 [13-15], as the peroxide is also unstable at temperatures higher than 40 °C. The reproducibility of this system is high: the COD percentage removal varies by only about 5 percent between runs with the same parameters.
Results and Discussion
Effect of H2O2 Concentration
The effect of H2O2 concentration on COD removals
was examined by changing the H2O2 concentration
while keeping the concentration of FeSO4, pH and
temperature constant at 4.98 g/L, 3 and 30 °C,
respectively (Fig. 1).
Figure 1(a) shows the percentage of COD removal versus time for different H2O2 concentrations at an initial COD of 50,000 mg/L. From the figure, the COD removal increases with increasing H2O2 concentration.
Figure 1. Effect of H2O2 concentration on COD removal for Fenton oxidation (initial COD = 50,000 mg/L; FeSO4 = 4.98 g/L; temperature = 30 °C; pH = 3): (a) COD removal vs. time for Fe:H2O2 ratios of 1:20 to 1:60; (b) COD removal vs. Fe:H2O2 ratio.
However, when the Fe:H2O2 ratio is more than 1:50,
the percentage removal decreases as in Figure 1(b).
This may be due to the scavenging effect of H2O2 as
in Eqn. (6). When too much H2O2 was in the solution,
it reacted with the ·OH radical and subsequently
reduced the concentration of ·OH radical available to
attack the organic compound.
Figure 2(a) shows the percentage of COD removal using the photo-Fenton oxidation method. It followed the same trend as the Fenton system: the percentage of COD removal increased as the hydrogen peroxide concentration increased, but when the Fe:H2O2 ratio exceeded 1:30, the percentage removal decreased, as in Figure 2(b).
Figure 2. Effect of H2O2 concentration on COD removal for photo-Fenton oxidation (initial COD = 50,000 mg/L; FeSO4 = 4.98 g/L; temperature = 30 °C; pH = 3): (a) COD removal vs. time for Fe:H2O2 ratios of 1:20 to 1:60; (b) COD removal vs. Fe:H2O2 ratio.
Figure 3. Comparison of degradation efficiency between Fenton and photo-Fenton oxidation of DIPA (initial COD = 50,000 mg/L; FeSO4 = 4.98 g/L; temperature = 28 °C; pH = 3)

The efficiency of the Fenton and photo-Fenton systems is compared in Figure 3. From the figure, the percentage COD removal for photo-Fenton is slightly higher than for Fenton. This could be due to the additional UV irradiation in the system, which led to additional production of the ·OH radical and thus increased its concentration. Besides, the UV irradiation was able to regenerate the ferrous catalyst by the reduction of Fe(III) to Fe(II), as in Eqn. (7).

Fe(III)OH2+ + hν → Fe(II) + ·OH   (7)

However, at a Fe:H2O2 ratio of 1:50, the result obtained was different: the percentage COD removal for Fenton was higher. The problem could be due to the stirring method. The stirrer used was small and the volume of the solution inside the reactor was large; besides, the presence of the quartz tube inside the reactor might have reduced the stirring efficiency. One way to overcome the problem is to use aeration to mix the solution, and this is recommended for further research.

Conclusion

The applicability of Fenton and photo-Fenton for the degradation of diisopropanolamine was investigated. Keeping the initial COD, concentration of FeSO4, pH and temperature constant at 50,000 mg/L, 4.98 g/L, 3 and 30 °C respectively, the optimum Fe:H2O2 ratio for both Fenton and photo-Fenton was 1:50. COD removal of 31% and 24% was achieved for Fenton and photo-Fenton, respectively. Initial comparison between Fenton and photo-Fenton showed that photo-Fenton gave better degradation. However, this is not conclusive, and further research will have to be conducted to increase the percentage of COD removal and reduce sludge formation.

References

[1] C. P. Huang, C. Dong, Z. Tang, "Advanced chemical oxidation: its present role and potential future in hazardous waste treatment", Waste Management, 13 (1993) 361-377
[2] J. H. Sun, S. P. Sun, G. L. Wang, and L. P. Qiao (2006) "Degradation of azo dye Amido black 10B in aqueous solution by Fenton oxidation process", Dyes and Pigments, B136, 258-265
[3] M. Perez, F. Torrades, X. Domenech, J. Peral (2001) "Fenton and photo-Fenton oxidation of textile effluents", Water Research, 36, 2703-2710
[4] A. M. F. M. Guedes, L. M. P. Madeira, R. A. R. Boaventura and C. A. V. Costa (2003) "Fenton oxidation of cork cooking wastewater - overall kinetic analysis", Water Research, 37, 3061-3069
[5] Huseyin, T., Okan, B., Selale, S. A., Tolga, H. B., I. Haluk Ceribasi, F. Dilek Sanin, Filiz, B. D. and Ulku, Y. (2006) "Use of Fenton oxidation to improve the biodegradability of a pharmaceutical wastewater", Hazard. Mater., B136, 258-265
[6] Rein, M. (2001) "Advanced oxidation processes - current status and prospects", Proc. Estonian Acad. Sci. Chem., 50, 2, 59-80
[7] Glaze, W. H., Kang, J. W. and Chapin, D. H. (1987) "The chemistry of water treatment processes involving ozone, hydrogen peroxide and ultraviolet radiation", Ozone Science Engineering
[8] Matthew, A. T. (2003) "Chemical degradation methods for wastes and pollutants, environmental and industrial applications", pp. 164-194, Marcel Dekker Inc.
[9] Hung, Y. S., Ming, C. C. and Wen, P. H. (2006) "Remedy of dye manufacturing process effluent by UV/H2O2 process", Hazard. Mater., B128, 60-66
[10] Dutta, K., Subrata, M., Sekhar, B., and Basab, C. (2001) "Chemical oxidation of methylene blue using a Fenton-like reaction", Hazard. Mater., B84, 57-71
[11] Ipek, G., Gulerman, A. S. and Filiz, B. D. (2006) "Importance of H2O2/Fe2+ ratio in Fenton's treatment of a carpet dyeing wastewater", Hazard. Mater., B136, 763-769
[12] C. Walling and A. Goosen (1973) "Mechanism of the ferric ion catalysed decomposition of hydrogen peroxide: effects of organic substrate", J. Am. Chem. Soc., 95 (9), 2987-2991
[13] Kavitha, V. and Palanivelu, K. (2005) "The role of ferrous ion in Fenton and photo-Fenton processes for the degradation of phenol", Chemosphere, 55, 1235-1243
[14] Lou, J. C. and Lee, S. S. (1995) "Chemical oxidation of BTX using Fenton's reagent", Hazard. Mater., 12, No. 2, 185-193
[15] Jones, C. W. (1999) "Introduction to the preparation and properties of hydrogen peroxide". In: Clark, J. H. (Ed.), Application of Hydrogen Peroxide and Derivatives, Royal Society of Chemistry, Cambridge, UK, p. 30
Abdul Aziz Omar is currently Head of the Geosciences and Petroleum Engineering Department, Universiti Teknologi PETRONAS.
Associate Professor Aziz completed his
tertiary education in the United States,
where he obtained his Master’s and
Bachelor’s degrees from Ohio University
in 1982. While his undergraduate training was in Chemistry
and Chemical Engineering, his graduate specialisation was in
Environmental Studies. He has over 15 years of experience as an
academician, 6 years as a process/project engineer and 4 years as
a senior manager. He has also worked on many projects related
to EIA (Environmental Impact Assessment), safety studies and
process engineering design. Among his many experiences, one
significant one would be the setting up of the School of Chemical
Engineering at Universiti Sains Malaysia. He was appointed the
founding Dean of the School. In March 2001, he joined Universiti
Teknologi PETRONAS (UTP) as a lecturer, where he continues
to teach and pursue his research interests. Assoc. Prof. Aziz is a
Professional Engineer in Chemical Engineering, registered with
the Malaysian Board of Engineers since 1989, and a Chartered
Engineer in the United Kingdom since 2006. He is a Fellow of the
Institution of Chemical Engineers (IChemE), UK and is a Board
member of IChemE in Malaysia.
Mission-Oriented Research: PETROCHEMICAL CATALYSIS TECHNOLOGY
Synthesis of well-defined iron nanoparticles
on a spherical model support
Noor Asmawati Mohd Zabidi*, P. Moodley1, P. C. Thüne1, J. W. Niemantsverdriet1
*Universiti Teknologi PETRONAS, 31750 Tronoh, Perak Darul Ridzuan, Malaysia
1Schuit Institute of Catalysis, Eindhoven University of Technology
*noorasmawati_mzabidi@petronas.com.my
Abstract
Spherical model catalysts consisting of SiO2-sphere-supported iron nanoparticles were prepared using the
colloidal synthesis approach, the reverse microemulsion and the ammonia deposition methods. Amongst these
synthesis methods, the colloidal synthesis approach was found to be the most promising synthesis route for the
Fe/SiO2 model catalysts. The modified colloidal synthesis method produced nearly monodisperse spherical-shaped iron oxide nanoparticles with average diameters of 6.2 ± 0.9 nm on the SiO2 spheres. X-ray photoelectron
spectroscopy (XPS) analyses revealed that the catalyst contained Fe2O3 (hematite). Morphological changes were
observed on the spherical Fe/SiO2 model catalysts during the Fischer-Tropsch synthesis (FTS) reaction.
Keywords: Nanoparticles, iron, spherical model catalyst, Fischer-Tropsch reaction
Introduction
Iron has been the traditional catalyst of choice for
the Fischer-Tropsch synthesis due to its favorable
economics. However, knowledge of the relation between the rate of the reaction and the composition and morphology of the catalyst is still lacking [1]. The use of a spherical model catalyst system enables
investigation on the fundamental aspects of the
catalyst, such as influence of particle size, phase and
composition on the catalytic activity [2]. The objective
of the present work is to prepare and characterize
spherical model SiO2-supported iron catalysts. The
catalysts were prepared using the colloidal synthesis
approach [3], the reverse microemulsion method
[4] and the ammonia deposition method [5]. The
colloidal synthesis approach was adapted from a
method described by Sun and Zeng [3] which involved
homogeneous nucleation process. However, our aim is
to stabilise the iron nanoparticles on the SiO2 spheres
through a heterogeneous nucleation process.
The usage of spherical model silica support allows
for viewing of iron nano-particles in profile with
transmission electron microscopy. Supported
iron nano-particles in combination with electron
microscopy are well suited to study morphological
changes that occur during the Fischer-Tropsch
synthesis. The spherical model catalyst enables
investigation on the fundamental aspects of the
catalyst, such as influence of particle size, phase and
composition on the catalytic activities. This paper
presents the results of three synthesis approaches for
the spherical Fe/SiO2 model catalysts as well as their
morphological changes upon exposure to syngas.
This paper was presented at the International Conference on Nanoscience and Nanotechnology 2008, Shah Alam,
18 - 21 November 2008
Experimental
For the colloidal synthesis method [3], non-porous
silica spheres (BET surface area = 23 m2g-1, pore
volume = 0.1 cm3g-1 and diameter = 100 - 150 nm)
were sonicated in a mixture of oleylamine, oleic
acid and cyclohexane for 1 hour and then heated
and stirred in a multi-neck quartz reaction vessel. A
liquid mixture of iron(III) acetyl acetonate, oleylamine,
oleic acid, 1,2-hexadecanediol, and phenyl ether was
slowly added to the stirred SiO2 suspension once the
reaction temperature reached 150 °C. The reaction
mixture was refluxed under nitrogen atmosphere at
265 °C for 30 minutes.
The reverse microemulsion method [4] involved
preparing two reverse microemulsions. The first
reverse microemulsion consisted of Fe(NO3)3.9H2O
(aq) and sodium bis-(2-ethylhexyl) sulfosuccinate
(AOT, ionic surfactant) in hexanol. The second reverse
microemulsion was prepared by mixing an aqueous
hydrazine solution (reducing agent) with the AOT
solution. SiO2 spheres were added to the mixture
and the slurry was stirred for 3 hours under nitrogen
environment. The ammonia deposition method
utilised Fe(NO3)3.9H2O and 25 wt% aqueous ammonia
[5].
The calcined catalyst samples were placed on carbon-coated Cu grids for characterisation by transmission electron microscopy. TEM studies were carried out on a Tecnai 20 (FEI Co) transmission electron microscope operated at 200 kV. XPS was measured with a Kratos AXIS Ultra spectrometer, equipped with a monochromatic Al Kα X-ray source and a delay-line detector (DLD). Spectra were obtained using the aluminium anode (Al Kα = 1486.6 eV) operating at 150 W and were recorded at a background pressure of 2 × 10^-9 mbar. Binding energies were calibrated to the Si2s peak at 154.25 eV.
The activities of the spherical model catalysts for
Fischer-Tropsch synthesis were evaluated in a fixed-bed quartz tubular microreactor equipped with an
on-line mass spectrometer (Balzers). Catalyst samples
were pre-reduced in H2 at 450 °C for 2 h and then
exposed to syngas (H2:CO = 5:1) at 270 °C. The samples
were regenerated via heating in a flow of 20% oxygen
in helium up to 600 °C.
Results and discussion
Figures 1(a), (b) and (c) show the TEM images of
spherical model catalysts comprising iron oxide nanoparticles anchored on SiO2 spheres prepared via the
modified colloidal synthesis approach, the reverse
microemulsion method and the ammonia deposition
method, respectively. Spherical-shaped iron oxide
nano-particles with average diameters of 6.2 ± 0.9
nm were formed via the modified colloidal synthesis
method and the nano-particles were almost evenly
dispersed on the SiO2 surfaces.

Figure 1. TEM images of calcined catalysts of iron oxide nanoparticles on SiO2 spheres prepared via the (a) colloidal synthesis approach, (b) reverse microemulsion method, (c) ammonia deposition method.

Figure 2. TEM image of iron oxide nanoparticles on SiO2 spheres after the Fischer-Tropsch reaction at 270 °C for 2 hours at a H2:CO ratio of 5:1.

Figure 3. XPS spectra of the carbon region (binding energy 275-300 eV) for (a) fresh, (b) regenerated, (c) spent Fe/SiO2.
An equimolar mixture of oleylamine and oleic acid was used in the colloidal
synthesis approach and these surfactants were able
to prevent the agglomeration of the iron oxide nanoparticles. Iron oxide nano-particles were anchored
on the SiO2 surfaces and did not lie in between the
SiO2 spheres, as shown in Figure 1(a), thus suggesting
that nucleation occurred heterogeneously. The iron
loading was kept at 6 wt% as we have discovered
that increasing the iron loading resulted in highly
agglomerated nano-particles. The size of the nanoparticle is influenced by temperature, time, surfactant,
amounts of metal precursor as well as the ratio of the
metal precursor to the surfactant [6]. The reverse
microemulsion method also produced spherical-shaped iron oxide nano-particles with average
diameters of 6.3 ± 1.7 nm, however, the coverage
of the SiO2 surfaces was found to be less than that
obtained using the colloidal synthesis approach. The
result of the synthesis via the ammonia deposition
method showed extensive agglomeration of the iron
nanoparticles, as depicted in Figure 1(c).
The spherical model catalysts synthesised by
the colloidal synthesis method and the reverse
microemulsion method were tested in a Fischer-Tropsch reaction. However, only the catalyst synthesised via the colloidal method showed some activity in the Fischer-Tropsch reaction. Changes in morphology upon exposure to the syngas were investigated. Figure
2 shows the TEM image of the catalyst after two hours
exposure to the syngas. It shows ~ 50% increase in the
size of the nano-particles and formation of an outer
rim of thickness 3.2 ± 0.6 nm, following exposure to
the syngas. A darker color at the centre of the used
catalyst nano-particles suggests that the iron oxide
remained in the core whereas the outer rim consists
of amorphous carbon (EB = 284.5 eV), as confirmed by
the XPS analyses (Figure 3). Table 1 shows the atomic
ratios obtained from XPS analyses. Figure 4 shows the
presence of Fe3p peak at EB= 56.0 eV for the fresh and
the used catalyst, thus suggesting that the catalyst
remained as Fe2O3 (hematite). Upon contact with H2/
CO, oxygen-deficient Fe2O3 was observed, however,
the oxygen vacancies did not reach a critical value
that can lead to nucleation of Fe3O4.
Table 1. Atomic ratios based on XPS analyses

Sample         O1s Fe / O1s Si    Fe3p / O Fe2O3    C1s / Fe3p
Fresh          0.248              0.186             2.02
Spent          0.199              0.066             7.64
Regenerated    0.231              0.076             6.24
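As a minimal sketch (not the authors' code) of how XPS atomic ratios such as those in Table 1 are typically derived: each background-subtracted peak area is divided by its relative sensitivity factor (RSF) before ratios are taken. The RSF and area values below are hypothetical placeholders, not the Kratos library values.

```python
def atomic_ratio(area_a, rsf_a, area_b, rsf_b):
    """Ratio of atomic concentrations A/B from background-subtracted
    XPS peak areas, each scaled by its relative sensitivity factor."""
    return (area_a / rsf_a) / (area_b / rsf_b)

# Hypothetical peak areas and RSFs for a C1s/Fe3p ratio:
print(atomic_ratio(area_a=1520.0, rsf_a=0.278, area_b=80.0, rsf_b=0.4))
```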
Figure 4. XPS spectra of the Fe3p region (binding energy 45-70 eV) for the fresh, regenerated and spent Fe/SiO2 catalysts

Conclusions

Spherical model catalysts consisting of SiO2-supported iron nano-particles have been prepared and characterised using TEM and XPS. The modified colloidal synthesis approach resulted in spherical-shaped iron oxide nano-particles with average diameters of 6.2 ± 0.9 nm, and produced better dispersion of the iron oxide nano-particles than the reverse microemulsion method. The spherical Fe/SiO2 model catalyst prepared via the modified colloidal synthesis method exhibited some activity towards Fischer-Tropsch synthesis, whereas the one synthesised via the reverse microemulsion method showed negligible activity for FTS. Morphological changes were observed on the spherical Fe/SiO2 model catalysts upon exposure to syngas and during the re-oxidation step: TEM results show a ~50% increase in the size of the iron oxide nano-particles and the formation of a carbon rim around them (confirmed by XPS) upon a 2-hour exposure to the syngas. XPS measurements confirmed the presence of Fe2O3 nano-particles in the fresh and used catalyst samples. The results of this investigation show that spherical Fe/SiO2 model nano-catalysts of well-defined size can be prepared and characterised, which can facilitate size-dependent studies of iron-based catalysts in FTS.

Acknowledgements

The authors thank Mr. Tiny Verhoeven for the TEM and XPS measurements. The authors would also like to thank Mr. Denzil Moodley for his assistance in the activity studies. We acknowledge the financial support for this project from Sasol South Africa. Noor Asmawati Mohd Zabidi acknowledges the support given by Universiti Teknologi PETRONAS under the sabbatical leave programme.

References
[1] A. Sarkar, D. Seth, A. K. Dozier, J. K. Neathery, H. H. Hamdeh, and B. H. Davis, "Fischer-Tropsch synthesis: morphology, phase transformation and particle size growth of nano-scale particles", Catal. Lett. 117 (2007) 1
[2] A. M. Saib, A. Borgna, J. van de Loosdrecht, P. J. van Berge, J. W. Geus, and J. W. Niemantsverdriet, "Preparation and characterization of spherical Co/SiO2 model catalysts with well-defined nano-sized cobalt crystallites and a comparison of their stability against oxidation with water", J. Catal. 239 (2006) 326
[3] S. Sun and H. Zeng, "Size-controlled synthesis of magnetite nanoparticles", J. Am. Chem. Soc. 124 (2002) 8204
[4] A. Martinez and G. Prieto, "The key role of support surface tuning during the preparation of catalysts from reverse micellar-synthesized metal nanoparticles", Catal. Comm. 8 (2007) 1479
[5] A. Barbier, A. Hanif, J. A. Dalmon, G. A. Martin, "Preparation and characterization of well-dispersed and stable Co/SiO2 catalysts using the ammonia method", Appl. Catal. A 168 (1998) 333
[6] T. Hyeon, "Chemical synthesis of magnetic nanoparticles", Chem. Commun. (2003) 927
Noor Asmawati Mohd Zabidi obtained
her PhD in 1995 from the University of Missouri-Kansas City, USA. She joined Universiti
Teknologi PETRONAS in 2001 and was
promoted to an Associate Professor in
2005. She was on a sabbatical leave at
Eindhoven University of Technology, The
Netherlands from March–December 2007.
During the sabbatical leave, she carried out
a research project on the synthesis of SiO2-supported iron
nanocatalysts for the Fischer Tropsch reaction. Her research
interests are photocatalysis and catalytic conversion of gas to
liquid (GTL) fuels and chemicals.
Technology Platform: FUEL COMBUSTION
Performance and Emission Comparison of a Direct-Injection (DI) Internal Combustion Engine using Hydrogen and Compressed Natural Gas as Fuels
A. Rashid A. Aziz*, M. Adlan A., M. Faisal A. Muthalib
Universiti Teknologi PETRONAS, 31750 Tronoh, Perak Darul Ridzuan, Malaysia
*rashid@petronas.com.my
Abstract
The hydrogen internal combustion engine is considered a suitable pathway to the hydrogen economy until fuel cell technologies become more mature and cost-effective. In this study, combustion of hydrogen and compressed natural gas (CNG) in a direct-injection single cylinder research engine was investigated. Engine performance parameters such as power, torque, BMEP and CoV of hydrogen DI operation were measured and compared with CNG-DI operation. Stoichiometric combustion of CNG at part load (50% throttle opening) is presented and compared with hydrogen at 0.2 and 0.8 equivalence ratios. The slightly-lean hydrogen (0.8 equivalence ratio) resulted in better overall engine performance and emissions.

This paper was presented at the International Gas Research Union Conference 2008, Paris, 8-10 October 2008.
Introduction
Global warming and air pollution issues have brought international efforts to scale down the use of hydrocarbon fuels, which are among the biggest contributors of greenhouse gases. At the same time, fossil fuel reserves, especially petroleum, are depleting, and production is expected to peak within a couple of decades while demand from industry and transportation is increasing [1].
Hydrogen has been introduced as an alternative energy carrier. It can be made from both renewable and fossil energy. By using renewable energy or nuclear power plants to produce hydrogen, greenhouse gases can be totally eliminated. Hydrogen is better than electricity in terms of distribution efficiency, refuelling speed and energy capacity, and hydrogen vehicle performance is comparable to that of hydrocarbon-fuelled vehicles. In addition, the only major emission of hydrogen combustion is water vapour.
However, current hydrogen engines still face practical problems that mask the actual capability of hydrogen as a fuel for transportation. Hydrogen engine power and speed are limited by knock, backfire [2], low volumetric efficiency [3] and a number of other problems. Some researchers suggested the use of a direct-injection system, and a number of experiments have provided proof that using hydrogen as fuel produces better output than its gasoline counterparts [2].

In this study, the main objective was to examine the performance of a hydrogen-fuelled direct-injection spark ignition internal combustion engine. The aspects observed were power and torque output, emissions, fuel consumption and operating characteristics.
Experiments
Experiments were done using a four-stroke Hydra
Single Cylinder Optical Research Engine at the
Centre for Automotive Research, Universiti Teknologi
PETRONAS, Malaysia. The specification of the engine
is listed in Table 1. Engine control parameters such
as injection timing, ignition timing and air-fuel ratio
were controlled by an ECU that was connected to a
computer. All output parameters of the engine were obtained from a high-speed data acquisition system as well as a low-speed data acquisition from the engine dynamometer control interface.
Two experiments were conducted in this study: "ultra-lean combustion" (equivalence ratio of 0.2) and "slightly-lean combustion" (0.8, which is near stoichiometric). The first experiment used a low-pressure injector (7.5 bar). However, because the flow rate of that injector was too low, a high-pressure injector had to be used for the second experiment. Both experiments used the stratification method, also known as stratified-lean combustion. Figure 1 illustrates the piston with a 35 mm bowl that was used in the experiments, and Figure 2 shows the position of the injector relative to the spark plug in the direct injection system. Table 2 lists the main operating parameters.
In addition to comparing the results of the two hydrogen experiments, the results of an experiment on natural gas fuel were also included for comparison (Table 3).

Figure 1. Stratified-lean piston

Figure 2. Location of injector and spark-plug relative to the piston at TDC

Table 1. Engine details

Engine type             4-stroke spark ignition
No. of cylinders        One
Displacement volume     399.25 cm³
Cylinder bore           76 mm
Cylinder stroke         88 mm
Compression ratio       14:1
Exhaust valve open      45° BBDC
Exhaust valve closed    10° ATDC
Inlet valve open        12° BTDC
Inlet valve closed      48° ABDC
Fuel induction          Direct-injection
Rail pressure           7.5 and 18 bar
Injector position       Centre
Injector nozzle type    Wide angle

Table 2. Main parameters of the two experiments

Parameter                Ultra-lean       Slightly-lean
Equivalence ratio        0.2              0.8
Injector rail pressure   7.5 bar          18 bar
Injection timing         130 deg BTDC     130 deg BTDC
Load                     Part throttle    Part throttle
Ignition timing          MBT              MBT

Table 3. Main parameters of the CNG-DI experiment

Parameter                Value
Equivalence ratio        Stoichiometric
Injector rail pressure   18 bar
Injection timing         130 deg BTDC
Load                     Part throttle
Ignition timing          MBT
Figure 3. Pressure developed during compression and expansion stroke at various engine speeds for 0.8 equivalence ratio

Figure 4. Coefficient of variation at different engine speeds with the corresponding spark advance
Results and Discussion

Engine Stability

The engine was unstable in ultra-lean mode. It alternated between producing positive and negative torque, showing that the engine was not producing net positive work. Peak pressure varied by as much as 13 percent (Figure 3).

A possible cause of the variation is misfiring. The injector used for this experiment was a low-pressure injector with a low mass flow rate; about 2.7 ms was needed to fully inject the fuel. At 3500 rpm, this corresponded to 56.7 CA degrees, which is a long duration for hydrogen injection. A study showed that, for equivalence ratios between 0.7 and 1.4, the coefficient of variation (CoV) is low but increases significantly when the ratio moves far outside this range. The study also concluded that cycle variation is caused by variation in the early combustion period [4]. Running the engine slightly lean resulted in lower CoV.

A study showed that MBT ignition advance increases as the equivalence ratio decreases [5]. For a mixture of 0.2 equivalence ratio, the MBT ignition timing range is between 30 and 50 degrees BTDC. This is consistent with the current results.

The combustion produced very high peak pressures, common for hydrogen engines because of hydrogen's high flame temperature and high flame speed. The peak pressure reached up to 60 bar (Figure 3) but was lower than what the engine could actually produce; the ignition timing was set near TDC (Figure 4) to achieve MBT. Advancing the ignition resulted in peak pressures reaching up to 90 bar, with knock occurring in the cylinder.

Peak pressures of natural gas combustion were nearly double those of slightly-lean hydrogen combustion (Figure 6). Despite the high peak pressure, natural gas still ran without knock, as opposed to hydrogen.
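The conversion quoted above from injection duration to crank angle can be checked with a one-line calculation (illustrative only, not from the paper):

```python
def injection_duration_ca(duration_ms, rpm):
    """Convert an injection duration in milliseconds to crank-angle degrees."""
    deg_per_ms = rpm * 360.0 / 60.0 / 1000.0  # crank degrees swept per millisecond
    return duration_ms * deg_per_ms

print(injection_duration_ca(2.7, 3500))  # 56.7 CA degrees, matching the text
```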
Performance

Combustion of the ultra-lean mixture faced consistent misfiring, which resulted in very low BMEP (Figure 5). Judging from the CoV, misfiring occurred more frequently as the speed was increased (Figure 4). BMEP decreased until the engine could no longer produce any work above 4500 rpm.

Figure 5. BMEP comparison of ultra-lean H2, slightly-lean H2 and stoichiometric natural gas
Usage of a wide angle injector caused only a small amount of fuel to be concentrated inside the piston bowl, less than an ignitable air-fuel ratio. A possible solution is to utilise a narrow angle injector with a stoichiometric-lean piston, which has a smaller bowl and could concentrate the mixture to around the stoichiometric ratio [6].

Figure 6. Pressure map comparison of ultra-lean H2, slightly-lean H2 and stoichiometric natural gas at 3500 rpm
For the slightly-lean mixture, the use of late ignition hampered its potential performance. At TDC, when the pressure is around 30 to 40 bar, the pressure diagram (Figure 3) clearly shows that, rather than a smooth line, the pressure line dropped and ended up with a lower peak pressure. In Figure 8, it can be seen that near TDC the pressure dropped slightly after ignition, which was caused by the ignition delay.

In comparison with natural gas, hydrogen produced more torque and power. The torque, like the BMEP curve, decreased with increasing engine speed (Figures 5 & 7). However, the power output of slightly-lean hydrogen did not show a drop with increasing speed (Figure 9).

Figure 7. Brake-torque comparison of ultra-lean H2, slightly-lean H2 and stoichiometric natural gas

Figure 8. P-V diagram of a cycle at 4500 rpm and 0.8 equivalence ratio

Figure 9. Brake-power comparison between ultra-lean and slightly-lean hydrogen with stoichiometric natural gas
Fuel Economy

The engine's fuel consumption at ultra-lean mixture was fairly constant below 4000 rpm (Figure 10). The sharp increase after that point could be linked to the misfiring, which became worse at higher speeds: as the speed increases, the fuel has less time to mix with the air, producing a heterogeneous mixture. At slightly-lean mixture, fuel consumption was also low, primarily because of stable combustion without misfire.

Figure 10. Brake specific fuel consumption (BSFC) at various engine speeds
Figure 11. NOx comparison between ultra-lean and slightly-lean hydrogen with stoichiometric natural gas

Figure 12. Comparison of BSFC in terms of equivalent gasoline usage

Figure 12 shows the fuel consumption comparison between hydrogen and natural gas. The graph shows the BSFC in equivalent gasoline consumption; the conversion was based on the fuels' lower heating values. From the graph, slightly-lean hydrogen had a lower BSFC than natural gas, while ultra-lean hydrogen performed poorly at higher engine speeds.
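A minimal sketch of that conversion follows. This is not the authors' code, and the lower heating values are typical literature figures rather than values taken from the paper.

```python
# Typical lower heating values in MJ/kg (assumed, not from the paper)
LHV = {"hydrogen": 120.0, "cng": 48.0, "gasoline": 44.0}

def gasoline_equivalent_bsfc(bsfc_kg_per_kwh, fuel):
    """Scale a fuel's BSFC by the ratio of its LHV to gasoline's, so that
    engines running on different fuels can be compared on an energy basis."""
    return bsfc_kg_per_kwh * LHV[fuel] / LHV["gasoline"]

print(gasoline_equivalent_bsfc(0.10, "hydrogen"))  # ~0.27 kg/kWh gasoline-equivalent
```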
Emissions

The only significant emission of the hydrogen engine is nitrogen oxides (NOx). However, the comparison shows that natural gas emits more NOx than hydrogen (Figure 11). At ultra-lean conditions, the amount of emitted NOx is considerably high, especially at higher engine speeds. The existence of NOx is usually related to high combustion temperature in the cylinder: as speed increases, there is less time for heat to transfer from the cylinder to the atmosphere, which increases the temperature (Figure 14).

The test-bed did not have an in-cylinder thermocouple that could be used to determine temperatures inside the cylinder. However, the system did record the exhaust gas temperature, which was used as a rough estimate of the combustion cylinder temperature.
Comparison in Figure 13 shows that the amount of carbon dioxide emitted by the hydrogen engine is insignificant compared with hydrocarbon fuel. The large percentage of carbon dioxide in the natural gas exhaust is mainly caused by the carbon in the structure of methane, the major constituent of natural gas. On the other hand, the carbon dioxide from the hydrogen-fuelled engine is mainly contributed by combustion of engine oil on the cylinder wall [7].

The negative values of CO2 on the graph occurred because the emitted CO2 was lower than the calibration value. The system calculated CO2 as a percentage of the total exhaust gas, not in ppm. Supposedly, there should be no CO2 in the exhaust line while calibrating the gas analyser, but a very small amount of CO2 could have been present. This also shows how little CO2 is emitted by a hydrogen-fuelled engine.

Figure 13. CO2 comparison between ultra-lean and slightly-lean hydrogen with stoichiometric natural gas

Figure 14. Temperature of exhaust gas and the amount of emitted NOx at various speeds for slightly-lean combustion

Conclusions

Based on the above results and discussion, the following conclusions were derived:
• Direct-injection could avoid the backfire phenomenon and reduce the likelihood of pre-ignition.
• At part load, the power output of hydrogen was better than that of natural gas. This showed that the actual power output of hydrogen is higher than that of current commercial fuels.
• Hydrogen at slightly-lean mixture has better fuel economy than at ultra-lean mixture.
• The only significant emission of the hydrogen engine is NOx, but it is still lower than the amount emitted by natural gas.
• The amount of CO2 from the hydrogen engine is much less than from natural gas.

REFERENCES

[1] Rifkin, J. (2002). "When There is No More Oil…: The Hydrogen Economy". Cambridge, UK: Polity Press
[2] Das, L. M. (1990). "Fuel induction techniques for a hydrogen operated engine". International Journal of Hydrogen Energy, 15, 833-842
[3] Yi, H. S., Lee, S. J., & Kim, E. S. (1996). "Performance evaluation and emission characteristics of in-cylinder injection type hydrogen fueled engine". International Journal of Hydrogen Energy, 21, 617-624
[4] Kim, Y. Y., Lee, J. T., & Choi, G. H. (2005). "An investigation on the causes of cycle variation in direct injection hydrogen fueled engines". International Journal of Hydrogen Energy, 30, 69-76
[5] Mohammadi, A., Shioji, M., Nakai, Y., Ishikura, W., & Tabo, E. (2007). "Performance and combustion characteristics of a direct injection SI hydrogen engine". International Journal of Hydrogen Energy, 32, 296-304
[6] Zhao, F. F., Harrington, D. L., & Lai, M. C. (2002). "Automotive Gasoline Direct-Injection Engines". Warrendale, PA: Society of Automotive Engineers
[7] Norbeck, J. M., et al. (1996). "Hydrogen Fuel for Surface Transportation". Warrendale, PA: Society of Automotive Engineers
ACKNOWLEDGEMENT
The authors would like to thank the Ministry of Science, Technology and Innovation for the initial grant under the IRPA project on the CNG-DI Engine, as well as UTP for general support.
Abd Rashid Abd Aziz graduated with a
PhD in Mechanical Engineering (Thermofluid) from the University of Miami in 1995.
He is currently an Associate Professor and
the Research Head for Green Technology
(Solar, Hydrogen and Bio-fuels). He is also
the group leader for the Hybrid Vehicle
Cluster. He leads the Centre for Automotive
Research (CAR), which carried out several
research projects with funds from the Ministry of Science,
Technology and Innovation (MOSTI) and PETRONAS. His areas of
interest are in internal combustion engines, laser diagnostics, flow
visualisation, CFD, alternative fuels and hybrid powertrain.
Technology Platform: fuel combustion
THE EFFECT OF DROPLETS ON BUOYANCY IN
VERY RICH ISO-OCTANE-AIR FLAMES
Shaharin A. Sulaiman and Malcolm Lawes1
Universiti Teknologi PETRONAS, 31750 Tronoh, Perak, Malaysia
1School of Mechanical Engineering, University of Leeds, UK
shaharin@petronas.com.my
ABSTRACT
An experimental study is performed with the aim of investigating the effect of the presence of droplets in flames
of very rich iso-octane-air mixture under normal gravity. Experiments are conducted for initial pressures in the
range 100-160 kPa and initial temperatures 287-303 K at an equivalence ratio of 2.0. Iso-octane-air aerosols are
generated by expansion of the gaseous pre-mixture (condensation technique) to produce a homogeneously
distributed suspension of near mono-disperse fuel droplets. The droplet size varies with time during expansion; hence the effect of droplet size in relation to the cellular structure of the flame was investigated by varying the ignition timing. Flame propagation behaviour was observed in a cylindrical vessel equipped with optical windows by using schlieren photography. Local flame speeds were measured to assess the effect of buoyancy in gaseous and aerosol flames. It was found that the presence of droplets resulted in a much earlier onset of instabilities, which developed faster than the buoyancy effect could become established. Flame instabilities, characterised by wrinkling and a cellular surface structure, increase the burning rate due to the associated increase in surface area. Consequently, the presence of droplets resulted in a faster flame propagation rate than that displayed by a gaseous flame. The mechanism of flame instabilities that caused a significant reduction of the buoyancy effect is discussed.
Keywords: buoyancy, combustion, droplets, flame, instabilities
INTRODUCTION
The combustion of clouds of fuel droplets is of
practical importance in engines, furnaces and also for
prevention of explosion and fire in the storage and
use of fuels. Theoretical [1] and experimental [2-4]
evidence suggests that flame propagation through
clouds of droplets, under certain circumstances, is
higher than that in a fully vapourised homogeneous
mixture. Even though this may be advantageous in
giving more rapid burning, its effects on emissions are
uncertain.
Most combustion in engineering applications takes place under turbulent conditions. Nevertheless, it is well established that the laminar burning rate plays an important role in turbulent combustion [5]. Information on laminar burning velocity is sparse, even for gaseous mixtures at conditions pertaining to engines, which range from sub-atmospheric to high pressure and temperature. Such data for fuel sprays and for gas-liquid co-burning [6-8] are even sparser than for gases. As a consequence, there is little experimental data of a fundamental nature that clearly demonstrates the similarities and differences in burning rate, either laminar or turbulent, between single and two-phase combustion.

In the present work, the influence of droplets in iso-octane-air mixtures within the upper flammability limit (rich) was investigated.
This paper was presented at the 13th International Conference on Applied Mechanics and Mechanical Engineering, Egypt, 27-29 May 2008.
Figure 1. Illustration of the buoyancy effect [14] on spherical flames, one without and one with the buoyancy effect. The arrows indicate the local velocities resulting from gas expansion and buoyancy-driven convection; the dashed lines show the resulting flame front.
Such gaseous mixtures
are well known, from a number of previous works, for
example in [9-11], to experience the effect of buoyancy
or natural convection during combustion. Some of the
previous studies were performed in tubes. However,
Andrews and Bradley [12] suggested that the limit of
flammability obtained by the tube method would
be subject to the same sources of error [13] as would
be the burning velocity measurements using tubes,
mainly due to wall quenching. Thus studies in larger
tube diameters or in large explosion vessels have been
recommended.
Figure 1 shows an illustration of two centrally ignited
spherical flames to describe the effect of buoyancy
[14]. The open arrow represents the local velocity
which resulted from gas expansion during flame
propagation. The solid arrow, which points upward,
represents the local velocity caused by the buoyancy
or natural convection effect. The dashed lines show
the resulting flame front accounting for the net
velocity. In the absence of the buoyancy effect, the resulting flame would be spherical, or circular when viewed from the side. However, with the presence of the buoyancy effect, the resulting flame has a flatter bottom surface, as illustrated in Figure 1.
Figure 2. Aerosol combustion apparatus: (a) photograph, (b) schematic. CV: combustion vessel (23 litres); EV: expansion vessel (28 litres); SL: supply line; DL: discharge line; VP: vacuum pump. The schematic also indicates the orifice, pipes and valves.
EXPERIMENTAL APPARATUS
Figure 2 shows the photograph and schematic of
the aerosol combustion apparatus. The combustion
vessel, which essentially resembled a Wilson cloud
chamber [15], was a cylindrical vessel of 305 mm
diameter by 305 mm long (internal dimensions), with
a working volume of 23.2 litres. On both end plates of
the combustion vessel circular optical access windows
of 150 mm diameter were provided for characterisation
of aerosol and photography of flame propagation. To
initially mix the reactants four fans, driven by electric
motors, were mounted adjacent to the wall of the
vessel. Two electrical heaters were attached to the
wall of the vessel to preheat the vessel and mixture
to 303 K. The expansion vessel, which has a volume
of 28 litres, was connected to the combustion vessel
by an interconnecting pipe through a port. The
vacuum pump, indicated in Figure 2 (a), was used to
evacuate the system and to remove particulates prior
to preparation of the mixture.
The aerosol mixtures were prepared by a condensation
technique, which generated near mono-dispersed
droplet suspensions. This was achieved by controlled
expansion of a gaseous fuel-air mixture from the
combustion vessel into the expansion vessel that
was pre-evacuated to less than 1 kPa. The expansion
caused a reduction in the pressure and temperature
of the mixture, which took it into the condensation
regime and caused droplets to be formed.
The characteristics of the generated aerosol were
calibrated by in-situ measurements of the temporal
distribution of pressure, temperature, and droplet
size and number, without combustion, with reference
to the time from start of expansion. The diameters
of individual droplets were measured using a Phase
Doppler Anemometer (PDA) system, from which the
droplet mean diameter, D10, was obtained. Since the
expansion took place over a period of several seconds
while combustion took place over less than 100 ms,
the far field values of D10 were assumed to be constant
during combustion.
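For readers unfamiliar with the notation, D10 denotes the arithmetic (number-mean) diameter of the measured droplets. The short sketch below illustrates this averaging using hypothetical PDA readings rather than data from the study:

```python
# Minimal illustration (hypothetical data): the arithmetic mean droplet
# diameter D10 is the number-average of the individual diameters d_i
# measured by the PDA system: D10 = (1/N) * sum(d_i).

diameters_um = [4.8, 5.3, 5.1, 4.9, 5.2]   # individual droplet diameters (µm)

d10 = sum(diameters_um) / len(diameters_um)
print(f"D10 = {d10:.2f} µm")
```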
The mean droplet diameter varied with time during
expansion; hence the effect of droplet size in relation
to the cellular structure of the flame was investigated
by varying the ignition timing. The iso-octane-air
mixture was ignited at the centre of the combustion
vessel by an electric spark of approximately 500 mJ.
The flame front was monitored through the vessel’s
windows by schlieren movies, which were recorded
using a high-speed digital camera at a rate of
1000 frames per second and with a resolution of 512 ×
512 pixels. The flame image was processed digitally by
using image-processing software to obtain the flame
radius. The velocity of the flame front, also known as
the stretched flame speed, Sn, was obtained directly from the measured flame front radius, r, by

    Sn = dr/dt        (1)
Similarly, the local flame speed is given by

    SL = dL/dt        (2)
where L is the distance between the local flame
front and the spark electrode as measured from the
schlieren image of the flame.
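As a simple illustration of how Equations (1) and (2) can be applied to the digitised measurements, the sketch below estimates the stretched flame speed by finite differences from a radius history (the data here are hypothetical, not measurements from this study):

```python
# Minimal sketch: estimating the stretched flame speed Sn = dr/dt of
# Equation (1) by finite differences from a flame-radius history.
# The radius samples below are made-up placeholders, not study data.
import numpy as np

t_ms = np.arange(0.0, 10.0, 1.0)         # time from ignition (ms)
r_mm = 0.5 * t_ms + 0.02 * t_ms ** 2     # flame radius (mm), made-up history

# Central differences in the interior, one-sided at the ends;
# mm/ms is numerically equal to m/s.
sn = np.gradient(r_mm, t_ms)

for t, s in zip(t_ms, sn):
    print(f"t = {t:4.1f} ms   Sn = {s:5.3f} m/s")
```

The local speed SL of Equation (2) follows the same way, with the local front-to-electrode distance L in place of r.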
RESULTS AND DISCUSSION
Figure 3 shows the schlieren photographs of flames
at maximum viewable radius and the corresponding
contour plots at 2 ms intervals for gaseous and also
aerosol mixtures. The mixtures were initially at
equivalence ratio, φov, of 2.0, temperatures between
287 and 303 K, and pressures between 100 and
159 kPa. For the aerosol mixtures, the droplet mean
diameters, D10, were 5 µm and 13 µm. It must be
noted that the relatively small differences in pressure
and temperature between the conditions in the three
images have been shown, for gaseous flames, to have
little effect on the flame structure [16]. Hence, it was
assumed that the difference in the flame structure is
entirely due to the effects of droplets. In Figure 3(a),
the image is slightly darker than the others due to the
low exposure setting of the camera. The circular black
areas at the corners of the photographs represent
the region beyond the window of the combustion
vessel. The electrode holder and thermocouple are
clearly shown on the middle-right section in each
photograph.

Figure 3. Schlieren images and contour plots (superimposition of edges) throughout propagation for iso-octane-air flames at φov = 2.0 and various droplet sizes: (a) D10 = 0 µm (gaseous); (b) D10 = 5 µm; (c) D10 = 13 µm. The time interval between each contour is 2 ms.
It is shown in the schlieren image in Figure 3(a) that
for a gaseous flame, the upward distance of the flame
propagation is greater than the downward one, which
is a sign of the buoyancy effect as described for Figure 1. The upper surface of the gaseous flame is relatively
smoother and has fewer cells as compared to the lower
surface. In the contour plot for the gaseous flame in
Figure 3(a), larger spacing between flame contours
is shown for the upper propagation as compared to
the lower one. This suggests faster upward flame
propagation than the downward propagation.
Conversely, the difference between the leftward and
rightward propagations is shown to be small, the
rightward propagation being very close to the flame
radius. Hence it is shown that significant difference in
propagation rate occurs only in the vertical direction.
With the presence of droplets of 5 µm, it is shown in Figure 3(b) that the trend of vertical flame propagation is very similar to that for the gaseous flame in Figure 3(a). However, the aerosol flame has more cells on its surface than the gaseous one. Interestingly, with
bigger droplets (D10 = 13 µm), it is shown in Figure
3(c) that the resulting flame is highly cellular. The
contour plot of the flame shows faster burning rate
(large spacing between contours) and also smaller
difference between the upward and downward
propagation rate (more uniform contour spacing in
all directions) as compared to those in Figures 3 (a)
and (b). Hence it is suggested that with the presence
of large diameter droplets, the buoyancy effect
demonstrated in gaseous flames of rich mixtures is
overcome by instabilities and consequently faster
burning rate, such that there was less time available
for natural convection to be significant.
Figure 4 shows graphs displaying the temporal
variations of the flame radius and local vertical and
horizontal propagation distances (from the electrode)
for the gaseous iso-octane-air flame depicted in Figure
3(a) and for the aerosol flame in Figure 3(c), both at φov
= 2.0. Here the effect of the presence of large droplets
(D10 = 13 µm) is presented. The upward, downward,
leftward and rightward propagation distances
between the spark gap and the corresponding flame
edge were measured at 1 ms intervals.

Figure 4. Flame propagation distance from spark electrode as a function of time for iso-octane-air mixtures at φov = 2.0: (a) gaseous, D10 = 0 µm; (b) aerosol, D10 = 13 µm. Also shown is the corresponding direction of the flame propagation with respect to the spark electrode (U: upward; D: downward; L: leftward; R: rightward; r: schlieren radius); axes are distance (mm) versus time (ms).
It is shown in the graph in Figure 4(a) that for the
gaseous flame, the upward propagation distance
of the flame propagation is always greater than
that of the downward one. In addition, the upward
distance of the flame propagation increases at a
steady rate, whereas the downward one decelerates;
these indicate the effect of buoyancy force acting
on the hot flame kernel. The difference between the
leftward and rightward propagations is shown to be
small, although the rightward propagation distance
seems to be slightly different from the flame radius.
As expected, the flame radius and horizontal propagation distances lie approximately midway between the upward and downward components.
The deceleration in the downward propagation is
only experienced by the gaseous flame, as shown in
Figure 4(a). Hence, the cellularity on the bottom half
of the gaseous flame in Figure 3(a) is probably caused
by hydrodynamic instabilities, as a result of an upward
flow of unburned gas whose velocity exceeded the flame speed at the base of the flame, as illustrated in Figure 1. Conversely, the smoother upper surface
of the flame is probably due to flame stabilisation
as a result of high stretch rate. This occurs when the
expanding flame front propagates through a velocity
gradient in the unburned gas that is induced by the
upward, buoyant acceleration of hot products as
explained by Andrews and Bradley [12].
With the presence of large enough droplets (D10 = 13
µm), the aerosol flame burned faster than the gaseous
flame. This is shown in Figure 4(b), in which the aerosol
flame took approximately 60 ms to reach a radius of 50
mm, as compared to about 90 ms for the gaseous flame
to reach the same radius. This is very likely caused by
earlier instabilities in the aerosol flames, as depicted in Figure 3, which promote a faster burning rate due to the increase in flame surface area. In relation to the
buoyancy, it is shown in Figure 4(b) that such an effect is largely absent: the gap between the upward and downward flame propagation is significantly and consistently smaller than that for the gaseous flame. Furthermore, the aerosol flame
exhibits acceleration in the downward propagation as
compared to deceleration in the gaseous flame.
Figure 5 shows the vertical components of the flame
propagation distance from the spark electrode for
the gaseous flame and also the aerosol flames (D10
values of 5, 13, and 16 µm) at initial conditions similar
to those described for Figure 3. The negative values
indicate the downward flame propagation distances.

Figure 5. Comparison of vertical flame propagation distance from spark electrode as a function of time for iso-octane-air mixtures at φov = 2.0, for D10 = 0 µm (gaseous), 5 µm, 13 µm and 16 µm. Negative distances indicate downward flame propagation.
The propagation rates for the fine aerosol (D10 = 5 µm)
flames are shown in Figure 5 to be similar to those
for the gaseous flames, as indicated by their nearly
identical curve plots, particularly for the downward
propagation of the flame. In addition, the buoyancy
effect is evident by the greater values of the positive
distance as compared to the negative distance. It is
clear in Figure 5 that the flames within aerosols of large
droplets (13 and 16 µm) propagate at a faster rate than
those of fine droplets (5 µm) and gaseous mixtures. For these aerosol flames, the effect of buoyancy is not obvious.
Interestingly, it is shown that the curves for upward
propagation for all values of D10 are coincident for
approximately the first 25 ms of propagation; a similar
trend is observed for downward propagation up to
about 16 ms. This was probably because buoyancy was not yet in effect during those periods of initial flame kernel growth.
Figure 6 shows graphs of variation in the local flame
speed (deduced by time derivative of the graphs
in Figure 4) with time from the start of ignition.
The speed for the upward propagating flame is
represented by the diamond markers, and the downward one by the square markers. The circle and triangle
markers represent the rightward and leftward flames
respectively. The gaseous flame is shown in Figure
6(a) to propagate faster in the upward component by
about 0.4 m/s, as compared to that of the downward
component, which also decreases towards a near zero
value throughout propagation. A negative downward component, if it were to occur, would imply an upward flow of unburned gas at the central base of the flame; such cases were reported elsewhere, e.g. in [9], but this is beyond the scope of the present work. The sideways components of the flame speed are shown in Figure 6(a) to be similar, which suggests that these components are independent of the natural convection effect in the gaseous flame. With droplets (D10 = 13 µm), it is shown in Figure 6(b) that all the components of flame speed nearly coincide with each other, indicating a more uniform distribution of flame speed throughout the flame surface, and hence the absence of the natural convection effect.
However, after about 35 ms from the start of ignition, the upward component of the flame speed became significantly faster than the other components. The reason for this is not clear and thus further investigation is required.
Figure 6. Local flame speed (m/s) as a function of time (ms) for iso-octane-air mixtures at φov = 2.0 for (a) D10 = 0 µm (gaseous) and (b) D10 = 13 µm. Markers distinguish the upward, downward, leftward and rightward components.
The mechanism of flame instabilities in aerosol flames, which caused increased cellularity and rendered the buoyancy effect insignificant, is probably related to the heat loss from the flame and local rapid
expansion through droplet evaporation. Although
droplet evaporation can also cause high gradients in
the mixture strength (variations in local equivalence
ratio), which might have an effect on the flame, this
was negated experimentally [17] using water aerosol
in propane-air mixtures. In another study [18] using a
rig similar to that of the present work, it was shown
that the presence of 30 µm diameter hollow spherical
glass beads (no evaporation) in a gaseous iso-octane-air mixture did not alter the smooth characteristics of the flame structure or the burning rate. This suggests that droplet evaporation plays an important role in the introduction of instabilities.
CONCLUSION

The effects of the presence of near mono-dispersed droplets in flames of very rich iso-octane-air mixture (φov = 2.0) were investigated experimentally in a cylindrical explosion vessel at near-atmospheric conditions. The fuel droplets, in the form of an aerosol coexisting with vapour, were generated by condensation of the gaseous pre-mixture through expansion, and this resulted in a homogeneously distributed suspension of near mono-disperse fuel droplets. The effects of droplet size in relation to the structure of the flame surface and to the burning rate were investigated by varying the ignition timing, as the droplet size varied with time during expansion. Observations of the gaseous flame using schlieren photography through the vessel's windows revealed the buoyancy effect, with distinct differences in flame surface structure and local burning rates between the upper and lower halves of the flame, similar to those described in previous studies. The presence of fine droplets (5 µm) did not cause significant change with respect to the gaseous flame in terms of the buoyancy effect, flame structure and burning rate. However, with larger droplets (13 µm) the flame became fully cellular at a faster rate and, more importantly, the effect of buoyancy was significantly reduced. The increased propensity to instability results in the burning rate of aerosol mixtures being faster than that of the gaseous phase at similar conditions, even though the fundamental unstretched laminar burning velocity is probably unchanged by the presence of droplets.
REFERENCES

[1] J. B. Greenberg, “Propagation and Extinction of an Unsteady Spherical Spray Flame Front”, Combust. Theory Modelling, vol. 7, pp. 163-174, 2003.
[2] D. R. Ballal and A. H. Lefebvre, “Flame Propagation in Heterogeneous Mixtures of Fuel Droplets, Fuel Vapor and Air”, Proc. Combust. Inst., 1981.
[3] G. D. Myers and A. H. Lefebvre, “Propagation in Heterogeneous Mixtures of Fuel Drops and Air”, Combustion and Flame, vol. 66, pp. 193-210, 1986.
[4] G. A. Richards and A. H. Lefebvre, “Turbulent Flame Speeds of Hydrocarbon Fuel Droplets in Air”, Combustion and Flame, vol. 78, pp. 299-307, 1989.
[5] D. Bradley, A. K. C. Lau, and M. Lawes, “Flame Stretch Rate as a Determinant of Turbulent Burning Velocity”, Phil. Trans. R. Soc. Series A, vol. 338, p. 359, 1992.
[6] Y. Mizutani and A. Nakajima, “Combustion of Fuel Vapour-Drop-Air Systems: Part I, Open Burner Flames”, Combustion and Flame, vol. 20, pp. 343-350, 1973.
[7] Y. Mizutani and A. Nakajima, “Combustion of Fuel Vapour-Drop-Air Systems: Part II, Spherical Flames in a Vessel”, Combustion and Flame, vol. 21, pp. 351-357, 1973.
[8] F. Akamatsu, K. Nakabe, Y. Mizutani, M. Katsuki, and T. Tabata, “Structure of Spark-Ignited Spherical Flames Propagating in a Droplet Cloud”, in Developments in Laser Techniques and Applications to Fluid Mechanics, R. J. Adrian, Ed. Berlin: Springer-Verlag, 1996, pp. 212-223.
[9] H. F. Coward and F. Brinsley, “Dilution Limits of Inflammability of Gaseous Mixtures”, Journal of the Chemical Society, Transactions (London), vol. 105, pp. 1859-1885, 1914.
[10] O. C. d. C. Ellis, “Flame Movements in Gaseous Mixtures”,
Fuel, vol. 7, pp. 195-205, 1928
[11] I. Liebman, J. Corry, and H. E. Perlee, “Dynamics of Flame
Propagation through Layered Methane-Air Mixtures”,
Combustion Science and Technology, vol. 2, pp. 365, 1971
[12] G. E. Andrews and D. Bradley, “Limits of Flammability
and Natural Convection for Methane-Air Mixtures”, 14th
Symposium (International) on Combustion, 1973
[13] G. E. Andrews and D. Bradley, “Determination of Burning
Velocities, A Critical Review”, Combustion and Flame, vol.
18, pp. 133-153, 1972
[14] D. Bradley, Personal Communications, 2006
[15] C. T. R. Wilson, “Condensation of water vapour in the
presence of dust-free air and other gases”, in Proceedings of
the Royal Society of London, 1897
[16] D. Bradley, P. H. Gaskell, and X. J. Gu, “Burning Velocities,
Markstein Lengths and Flame Quenching for Spherical
Methane-Air Flames: A Computational Study”, Combustion
and Flame, vol. 104, pp. 176-198, 1996
[17] F. Atzler, M. Lawes, S. A. Sulaiman, and R. Woolley, “Effects of
Droplets on the Flame speeds of Laminar Iso-Octane and
Air Aerosols”, ICLASS 2006, Kyoto, Japan, 2006
[18] F. Atzler, “Fundamental Studies of Aerosol Combustion”,
Department of Mechanical Engineering, University of
Leeds, 1999
Shaharin Anwar Sulaiman graduated in 1993 with a BSc in Mechanical Engineering
from Iowa State University. He earned his
MSc in Thermal Power and Fluids
Engineering from UMIST in 2000, and PhD
in Combustion from the University of Leeds
in 2006. During his early years as a graduate,
he worked as a Mechanical and Electrical
(M&E) Project Engineer in Syarikat
Pembenaan Yeoh Tiong Lay (YTL) for five years. His research
interests include combustion, sprays and atomisation, air-conditioning and ventilation, and biomass energy. He joined UTP
in 1998 as a tutor. At present he is a Senior Lecturer in the
Mechanical Engineering programme and also the Programme
Manager for the MSc in Asset Management and Maintenance. He is certified as a Professional Engineer with the Board of Engineers Malaysia and is also a Corporate Member of the Institution of Engineers Malaysia.
Technology Platform: SYSTEM OPTIMISATION
Anaerobic Co-Digestion of kitchen waste
and sewage sludge for producing biogas
Amirhossein Malakahmad*, Noor Ezlin Ahmad Basri1, Sharom Md Zain1
*Universiti Teknologi PETRONAS, 31750 Tronoh, Perak Darul Ridzuan, Malaysia.
1Universiti Kebangsaan Malaysia, 43600 UKM, Bangi, Selangor, Malaysia.
*amirhossein@petronas.com.my
ABSTRACT
In this paper, the results of experiments conducted on anaerobic digesters are presented to compare the biogas generation capacity of mixtures, in different compositions, of the organic fraction of municipal solid waste from kitchen waste and sewage sludge. Batch digestion of samples with various percentages of kitchen waste and sewage sludge was carried out under controlled conditions of temperature (35 °C) and pH (7) for 15 days for each experiment. In all experiments the content of total solid and volatile solid, pH, Kjeldahl nitrogen, chemical oxygen demand, biogas productivity and the content of biogas were measured. The results showed that biogas productivity varied between 4.6 and 59.7 ml, depending on the composition of the sample added to the digesters. The bioprocess efficiency was observed to be 7.5% - 70.5% for total solid, 51.2% - 81.0% for volatile solid and 8.3% - 43.8% for COD. The overall effluent chemical oxygen demand concentration indicated that the effluent should be treated before use in other applications. From the results, it is evident that the second bioreactor, with 75% kitchen waste and 25% sewage sludge, produced the maximum quantity of methane gas as compared to the other bioreactors.
Keywords: kitchen waste, sewage sludge, biogas production
INTRODUCTION
Malaysia, with a population of over 25 million, generates
16 000 tonnes of domestic waste daily. At present, the
per capita generation of solid waste in Malaysia varies
from 0.45 to 1.44 kg/day depending on the economic
status of an area. There are now 168 disposal sites but
only 7 are sanitary landfills. The rest are open dumps, and about 80% of these have been filled to the brim and will have to be closed in the near future. The
Malaysian government is introducing a new law on
solid waste management and also drafting a Strategic
Plan for Solid Waste Management in Peninsular
Malaysia. The principal process options available, recognised as the hierarchy for integrated waste management, are: waste minimisation, reuse, material recycling, energy recovery and landfill [1].
Municipal solid waste (MSW) contains an easily
biodegradable organic fraction (OF) of up to 40%.
Conventional MSW management has primarily been disposal by landfilling. Sewage sludge is characterised
by high content of organic compounds and this is the
cause of its putrescibility. Therefore, sludge before
landfill disposal or agricultural application should
undergo chemical and hygienic stabilisation. One
possible method of stabilisation and hygienisation
involves methane fermentation [2].
The anaerobic co-digestion of sewage sludge with
organic fraction of municipal solid waste (OFMSW)
seems to be especially attractive [3]. The feasibility of
This paper was presented at the 2nd International Conference on Environmental Management, Bangi,
13 -14 September 2004
anaerobic co-digestion of waste activated sludge and
a simulated OFMSW was examined by Poggi-Varaldo
and Oleszkiewicz [4]. The benefits of co-digestion
include: dilution of potential toxic compounds,
improved balance of nutrients, synergistic effects of
microorganisms, increased load of biodegradable
organic matter and better biogas yield. Additional
advantages include hygienic stabilisation, increased
digestion rate, etc. During methane fermentation, the two main processes that occur are:
i. acidogenic digestion, with the production of volatile fatty acids; and
ii. conversion of the volatile fatty acids into CH4 and CO2.
In batch operation the digester is filled completely
with organic matter and seed inoculums, sealed, and
the process of decomposition is allowed to proceed
for a long time until gas production is decreased to a
low rate (duration of process varies based on regional
variation of temperature, type of substrate, etc.).
Then it is unloaded, leaving 10-20 percent as seed,
then reloaded and the operation continues. In this
type of operation the gas production is expected to
be unsteady and the production rate is expected to
vary from high to low. Digestion failures due to shock
load are not uncommon. This mode of operation,
however, is suitable for handling large quantities of
organic matter in remote areas. It may need separate
gasholders if a steady supply of gas is desired [5].
Callaghan et al. worked on co-digestion of waste
organic solids which gave high cumulative production
of methane [6].
Experimental

Batch digestion of samples was carried out under controlled conditions of temperature (35 °C) and pH (7) for 15 days for each experiment. All five samples were fed into a 1 L bioreactor operated under mesophilic conditions.

In the first experiment, 100% kitchen waste was used, while the second experiment was conducted with a mixture of kitchen waste (75%) and sewage sludge (25%). The third experiment was conducted with 50% kitchen waste and 50% sewage sludge; in the fourth run, a mixture of 25% kitchen waste and 75% sewage sludge was used. The fifth experiment was conducted with sewage sludge only.
In all experiments total solid, volatile solid, pH, Kjeldahl
nitrogen and chemical oxygen demand for initial and
final properties of samples were determined. Biogas
productivity and the content of biogas were also
measured. All analytical procedures were performed
in accordance with Standard Methods [7].
Results and discussion
i. Variation in pH value throughout the
experiments
As shown in Figure 1, the pH variation could be categorised into three main zones. The first zone extended from the first day until the fourth day and showed a drastic drop in pH; this was due to the high rate of formation of volatile fatty acids by microorganisms. The pH was maintained at neutral by the addition of sodium hydroxide solution. The second zone extended from the fifth until the twelfth day of the experiment, during which the pH was in the range of 6.9 to 7.3. This was due to the formation of CO3HNH4 from the CO2 and NH3 produced during the anaerobic process.

Figure 1. Variation in pH value throughout the experiments (pH versus time in days) for samples 1-5.
The formation of CO3HNH4 increased the alkalinity of the samples. Due to this, any differences
in the volatile fatty acid content did not affect the pH
value. The third zone extended from the thirteenth day until the last day of the experiment. In this zone, it was found that the pH value of the samples started to increase. This was because the formation of CO3HNH4 still continued while no more volatile fatty acids were produced.
ii. Biogas production
• The production of cumulative biogas
Figure 2 shows the production of cumulative biogas
for all the samples. It was found that the second
sample with the composition of 75% kitchen waste
and 25% activated sludge produced the highest
quantity of biogas, which was 59.7 ml. This was
followed by the first sample (100% kitchen waste),
then fourth sample (25% kitchen waste and 75%
activated sludge), then the third sample (50% kitchen
waste and 50% activated sludge), and lastly the fifth
sample (100% activated sludge). The biogas production of these samples was 47.1 ml, 22.3 ml, 8.4 ml and 4.6 ml respectively. The fifth sample produced the least biogas; this is consistent with the literature data of Cecchi et al. (1998) and Hamzawi et al. (1998), which showed that cumulative biogas production is higher when the sample contains more easily biodegradable organic components. According to Schmidell (1986), the anaerobic digestion process for MSW alone is possible but will produce less biogas as compared to a mixture of MSW and activated sludge. This is because the volatile fatty acids produced by microorganisms tend to accumulate rather than be released as biogas.
An increment of 5% of activated sludge is enough to reduce the accumulation of volatile fatty acids and release more biogas. In Figure 2, the results for the first and second samples comply with Schmidell's theory. Samples three and four, which had different compositions of kitchen waste and activated sludge, produced less biogas because the composition of the activated sludge has an unsuitable C:N ratio for the anaerobic digestion process.
• The biogas production rate
The rate of biogas production for every sample is shown in Figure 3. It was found that biogas production for the first sample was the slowest, starting on the fourth day of the experiment and reaching its highest quantity on the eighth day. On the other hand, the fifth sample started to produce biogas the earliest, on the second day, and reached its highest amount on the fifth day. The highest production rate of biogas for samples two and three occurred on the seventh day of the experiment, while for the fourth sample it occurred on the sixth day. Therefore, it can be concluded that the results obtained are consistent with the research done by Cecchi et al. [8], which stated that the production of biogas is slower for a high organic loading as compared to a lower organic loading.

Figure 2. Production of cumulative biogas (ml) as a function of time (day) for samples 1-5.

Figure 3. Biogas production rate (ml/day) as a function of time (day) for samples 1-5.
Table 1 summarises the amount of total suspended
solid (SS), volatile suspended solid (VSS), alkalinity,
Kjeldahl nitrogen, pH, and COD before and after treatment in all five experiments. According to the results, the bioprocess efficiency was observed to be 7.5% - 70.5% for total solid, 51.2% - 81% for volatile solid and 8.3% - 43.8% for COD. The first bioreactor was the most efficient in treating the volatile solid component, achieving an efficiency of 81.0%.
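The quoted efficiencies follow from the before-and-after values in Table 1 as the percentage removal, (before - after) / before x 100. The sketch below reproduces two of the quoted figures from a subset of Table 1 (an illustration only, not part of the original analysis):

```python
# Minimal sketch: bioprocess (removal) efficiency computed from the
# before/after values in Table 1 as (before - after) / before * 100.
# Only a subset of Table 1 is reproduced here.

table1 = {
    # sample: {parameter: (before, after)}
    "Sample I": {"VSS (%)": (42.14, 8.00), "COD (mg/L)": (4864, 3520)},
    "Sample V": {"VSS (%)": (0.41, 0.20), "COD (mg/L)": (320, 180)},
}

for sample, params in table1.items():
    for name, (before, after) in params.items():
        removal = (before - after) / before * 100.0
        print(f"{sample}  {name}: removal efficiency = {removal:.1f}%")

# Sample I VSS gives 81.0% and Sample V COD gives 43.8%, matching the
# extremes of the ranges quoted in the text.
```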
Conclusions
Five experiments were conducted under mesophilic conditions in a batch bioreactor for 75 days in total. Five different kinds of feedstock were loaded into the reactor. It was found that the cumulative biogas production increased when a mixture of kitchen waste and activated sludge was used.

The highest methane production was for sample 2 (75% kitchen waste and 25% activated sludge), which produced 59.7 ml. For the rate of biogas production the situation was the same: the best result was for sample 2, followed by samples 1, 4 and 3 respectively. The fifth sample produced the least biogas. The anaerobic co-digestion of kitchen waste and activated sludge was demonstrated to be an attractive method for environmental protection and energy savings, but it is clear that better equipment and adjustment of conditions could give even better results.
Table 1. The value of measured parameters before and after treatment

Parameter           Before treatment    After treatment

Sample I
SS (%)              5.00                3.29
VSS (%)             42.14               8.00
N (mg/L N-NH3)      0.40                0.55
pH                  7.00                7.36
COD (mg/L)          4864                3520

Sample II
SS (%)              5.10                4.72
VSS (%)             38.30               7.50
N (mg/L N-NH3)      0.29                0.47
pH                  7.00                7.31
COD (mg/L)          4000                3600

Sample III
SS (%)              6.44                1.90
VSS (%)             32.55               8.80
N (mg/L N-NH3)      0.24                0.45
pH                  7.01                7.27
COD (mg/L)          2800                2560

Sample IV
SS (%)              2.45                1.40
VSS (%)             27.52               6.69
N (mg/L N-NH3)      0.23                0.30
pH                  7.00                7.50
COD (mg/L)          2400                2200

Sample V
SS (%)              0.27                0.19
VSS (%)             0.41                0.20
N (mg/L N-NH3)      0.02                0.10
pH                  7.00                7.59
COD (mg/L)          320                 180

Reference

[1] Mageswari, S., “GIAI global meeting”. Penang, Malaysia, 17-21 March 2003.
[2] Sosnowski, P., Wieczorek, A., Ledakowicz, S., 2003. “Anaerobic co-digestion of sewage sludge and organic fraction of municipal solid wastes”. Advances in Environmental Research 7, 609-616.
[3] Hamzawi, N., Kennedy, K. J., McLean, D. D., 1998. “Technical feasibility of anaerobic co-digestion of sewage sludge and municipal solid waste”. Environ. Technol. 19, 993-1003.
[4] Poggi-Varaldo, H. M., Oleszkiewicz, J. A., 1992. “Anaerobic co-composting of municipal solid waste and waste sludge at high total solid level”. Environ. Technol. 13, 409-421.
[5] Polprasert, C., 1996. “Organic Waste Recycling: Technology and Management”, 2nd edition. Chichester: John Wiley and Sons.
[6] Callaghan, F. J., Wase, D. A. J., Thayanithy, K., Forster, C. F., 1998. “Co-digestion of waste organic solids: batch study”. Bioresource Technology 67, 117-122.
[7] “Standard Methods for the Examination of Water and Wastewater”, 18th edition, 1992.
[8] Cecchi, F., Pavan, P., Musacco, A., Mata-Alvarez, J., Sans, C., Ballin, E., 1992. “Comparison between thermophilic and mesophilic digestion of sewage sludge coming from urban wastewater plants”. Ingegneria Sanitaria Ambientale 40, 25-32.
Amirhossein Malakahmad is an academic staff member at Universiti Teknologi PETRONAS. He
graduated with BEng in Chemical
Engineering in 1999 from Islamic Azad
University, Tehran, Iran. He completed his
MSc in 2002 in Environmental Engineering
from the same university. In 2006, he
received his PhD from the National
University of Malaysia, UKM for his research
on an application of zero-waste anaerobic baffled reactor to
produce biogas from solid waste. He joined Universiti Teknologi
PETRONAS in August 2007. His research interests are in water and
wastewater treatment and solid waste engineering.
Technology Platform: SYSTEM OPTIMISATION
On-line At-Risk Behaviour Analysis
and Improvement System (e-ARBAIS)
Azmi Mohd Shariff* and Tan Sew Keng
Universiti Teknologi PETRONAS, 31750 Tronoh, Perak Darul Ridzuan, Malaysia.
*azmish@petronas.com.my
Abstract
Behaviour Based Safety (BBS) is a programme that has been implemented in many organisations to identify at-risk behaviour and to reduce the injury rate of their workers. The effectiveness of BBS has been proven, with many companies recording a high percentage reduction in injury rate, especially during the first year of implementation. However, the BBS process can be very labour intensive. It requires many observers to make the process effective, and considerable effort is required to train employees to become observers. Many organisations which attempted to obtain the benefits of BBS did not sustain the comprehensive participation required in BBS-related activities. This drawback calls for a simplified process that could achieve the same result as BBS. This study was intended to establish an alternative to BBS, termed the On-line At-Risk Behaviour Analysis and Improvement System (e-ARBAIS). The e-ARBAIS utilises computer technology to make the routine observation process more sustainable and hence to instil habitual awareness through the cognitive psychology effect. A database was set up with pre-programmed questions regarding at-risk behaviours in the organisation. The employees then utilised the database to feed back their observations. Through this process, the traditionally tedious observations by trained observers required in BBS were done naturally by all respondents. From the collective feedback, at-risk behaviours in the organisation can be easily identified. The HSE committee within the organisation can thus take appropriate action by reinforcing the safety regulations or safe practices to correct all the unsafe behaviours, whether by changing the design (“hard-ware”), the system (“soft-ware”) or the training of the employee (“human-ware”). This paper introduces the concept, framework and methodology of e-ARBAIS. A case study was conducted in Company X (not its true name, as permission to use the real name was not given). A prototype computer program based on e-ARBAIS, named the “1-min Observation” programme, was developed using Microsoft Excel. A preliminary result based on one month of data collection is highlighted in this paper. Based on this preliminary result, the “1-min Observation” programme has received positive feedback from the management and employees of Company X. It was implemented with very small resources and thus saved time and money compared to the traditional BBS technique. The e-ARBAIS concept is workable and practical since it is easy to implement, to collect data and to correct unsafe behaviour. Some recommendations by the employees of Company X are presented in this paper to further improve the “1-min Observation” programme. The project at Company X is still in progress in order to see the long-term impact of the programme. The e-ARBAIS has shown its potential to reduce injury in an organisation if implemented with a thorough plan and strong commitment from all levels.
Keywords: behaviour based safety, at-risk behaviour, human factor, safety programme, injury rate reduction
This paper was published in the Journal of Loss Prevention in the Process Industries, doi:10.1016/j.jlp.2007.11.007 (2008).
Introduction
Behaviour based safety (BBS) was first established
by B. F. Skinner in the 1930s (Skinner, 1938). He was
a psychologist who developed a systematic approach
called behaviour analysis to increase safe behaviours,
reduce risky behaviours and prevent accidental
injury at work and on the road. This approach was
later known as applied behaviour analysis (Hayes,
2000). Behaviour study was important because H. W.
Heinrich, a workplace safety pioneer, reported that
out of 550 000 accidents, only 10% were caused by unsafe working conditions, while 88% were caused by workers' unsafe actions (SCF, 2004). A “Workplace
Attitude Study” conducted by Missouri Employers
Mutual Insurance (MEM) which was published in
Occupational Hazards (September 2003) revealed that
64.1% of Americans thought that a workplace accident
would never happen to them. 53.4% believed that the
probability was very low for a work injury that could
cause them to become permanently disabled (SCF,
2004). This showed that people generally perceived the risk of injury in a workplace to be low, and that accidents could happen if workers continued to work with at-risk behaviour while perceiving it safe to do so. The human toll of unsafe
behaviour was high. According to the U.S. Bureau of
Labour Statistics, unintentional injury was the leading
cause of death to people aged 44 and under. In 2001,
private industry had more than 5.2 million non-fatal
accidents and injuries, with more than 5 000 fatal
injuries. Behaviour-based safety programmes that
target and document behaviour changes indeed save
lives, money and productivity (APA, 2003).
Behaviour-based safety is an “upstream” approach to safety. It
focuses on the “at-risk behaviour” that might produce
an accident or near miss rather than trying to correct
a problem after an accident or occurrence. The
behaviour-based aim then, is to change the mindset
of an employee by hopefully making safety a priority
in the employee’s mind (Schatz, 2003).
However, not all organisations have had the successful experience in implementing BBS that others did (Geller, 2002). Over the years, some safety professionals started to develop alternatives to the BBS programme, i.e. people-based safety, ProAct Safety and value-based safety. It was desirable to develop another alternative to the BBS programme with the help of computer technology.
The e-ARBAIS Concept
The e-ARBAIS programme is meant to provide
alternative solutions to certain limitations of BBS as
mentioned below.
a. Prevent coyness in direct feedback with computer interface
Problems arise when employees dare not approach their peers to give feedback directly (Gilmore et al., 2002). The e-ARBAIS programme provides another
channel for the peers to communicate. The peers
may now give feedback on their observations to the
database and publish the feedback via the computer.
This helps to reduce the problem of coyness through
direct feedback with peers.
b. Inculcate safety culture with e-ARBAIS
The e-ARBAIS utilises computer software to prompt the employees on whether they have observed any unsafe behaviour relating to the topic asked. For instance, the
e-ARBAIS database could have a question like, “Did
you see anybody drive faster than 20 km/h today?”
The employees would be reminded to observe occurrences around them naturally, without carrying a checklist. The e-ARBAIS questions that would be
prompted regularly may also serve as the checklist
in the ordinary BBS process. However, instead of
focusing on many items during one observation, the
questions would require the employees to respond
to certain particular areas in a day. Different topics
would be asked every day. This will eventually instil a psychological effect in which people are “reminded” of the safety rules and regulations. The cultivation of habitual awareness is always at the heart of the e-ARBAIS design; only when observation becomes habitual can the safety culture be inculcated.
As employees perform observations, they come
to recognise any discrepancies between their own
behaviour and what is considered safe, and they
begin to adopt safe practices more consistently.
McSween said “We have created a process where they
raise their personal standards” (Minter, 2004). This is
the objective of the whole e-ARBAIS: to inculcate the safety culture in organisations.
c. Tackling slow data collection and analysis with IT
Questions on observations are repeated randomly. All feedback is collected in the database and analysed by the HSE committee
on a regular basis. When the HSE committee analyse
the data, they would be able to identify the high
risk issues faced by the employees. For example, if
the speeding behaviour of the employees remains high in the statistics after the question has been asked several times through the software, it implies that many people
are speeding in the plant and refuse to change their
behaviour. From there, the HSE committee could
provide some recommendations such as building a
few road humps in the plant, doing spot checks and
issuing warnings to those who speed. The action item
could also be derived from an established method
such as ABC analysis or ABC model (Minshall, 1977).
The analysed data would highlight the areas on which the HSE committee needs to focus and for which it needs to provide solutions for improvement. This would also help
the committee to identify if the unsafe behaviours are
due to
• "Hard-ware" problem like inadequate structural
safety design,
• "Soft-ware" problem like poor system
implementation or obsolete operating procedure,
or
d. Reduce intensive labour and high cost with e-ARBAIS 24-hour functionality

The advantage of e-ARBAIS is that the observation can be done 24 hours a day and 7 days a week with minimum man-hours needed. It is all operated by the computer, resulting in higher efficiency and cost saving.

Framework of e-ARBAIS

The framework of e-ARBAIS is given in Figure 1.

Figure 1. Framework of e-ARBAIS: set up the e-ARBAIS database; brief the end users on e-ARBAIS; observe at-risk behaviour at the workplace (with optional direct feedback to peers); give on-line feedback to the database; analyse the at-risk behaviour (optionally with ABC analysis); review by the HSE committee (optionally with interviews); establish an action plan to correct the at-risk behaviour.
The differences between e-ARBAIS and the ordinary BBS programme are given in Table 1.

Table 1. The comparison of ordinary BBS and e-ARBAIS

Training
  Ordinary BBS: Training given to observers on how to define unsafe behaviour and how to provide feedback.
  e-ARBAIS: No training needed. All will participate in the observations; only a briefing on what e-ARBAIS is and how it works.

Checklist
  Ordinary BBS: A checklist must be used to go through all items and see which does not comply.
  e-ARBAIS: No checklist is used. The checklist is built into the database as pre-programmed questions.

Observation frequency
  Ordinary BBS: Observers are required to make certain observations in a certain period, i.e. 1 observation a week.
  e-ARBAIS: Observation is done on a daily basis, or flexible adjustment of the frequency can be made.

Cost
  Ordinary BBS: Additional cost for training and printing checklists.
  e-ARBAIS: Minimum cost since training and checklists are not required.

Feedback
  Ordinary BBS: Feedback is given directly when the observation completes, whether it is positive or negative. Results of observation need to be reported to the HSE committee for further analysis.
  e-ARBAIS: Feedback can be given either face to face or through the database to prevent “sick feeling” with peers. Feedback is displayed directly in the database. The HSE committee uses the same set of data for further action.

Communication
  Ordinary BBS: The analysed results normally can only be accessed by the HSE committee. Employees are generally not informed of the overall performance of the observation.
  e-ARBAIS: Feedback is recorded in a database which everyone can access to see what unsafe behaviour is observed.

Involvement
  Ordinary BBS: Only those who are trained will be involved in the observations. To involve all, much training is required.
  e-ARBAIS: All will be involved in the observation since the observation is done naturally without focusing on only one activity.

Management commitment
  Ordinary BBS: Management commitment determines if the process will be successful. Most BBS fail due to poor management commitment.
  e-ARBAIS: The database displays the management participation and thus motivates management to further commit and improve the programme.
To start the e-ARBAIS program, a database first
needs to be set up. A systematic approach can be
established to develop the questions in the database.
The database consists of pre-programmed questions
that can be based on the past incident records in the
organization to focus on the problematic area, or it can
be the near-miss cases. It can also be questions derived purely from the BBS checklist alone. Basically, to use the e-ARBAIS program effectively, the questions must be custom designed for each company. The development and selection of the questions need to be discussed with the HSE committee and agreed by the management of the company.
After that, the end users needed to be briefed
about the e-ARBAIS and how the program will be
implemented. The end users were not required to
observe any specific activities to ensure that e-ARBAIS
is naturally done according to the pre-programmed
questions in the database. The end users only needed
Table 2. The questions in “1-min Observation” database
1. Did you see people NOT using handrail when
travelling up and down stairs?
2. Did you see employee driving faster than
20km/h in the plant area?
3. Did you experience back pain after work?
4. Did you see any reckless forklift driver?
5. Did you see people wearing glasses instead of
safety glasses in the plant?
6. Did you see people lifting thing in improper
position?
7. Did you see people NOT wearing ear plug in
noisy area?
8. Did you see anyone working at height with
falling hazard due to improper PPE/position?
9. Did you see people working (in the plant/lab/
packaging) NOT wearing safety glass?
10. Did you see any leak in the plant but NOT
barricaded?
11. Did you see any area (office or plant) unclean
and expose to tripping hazard?
12. Did you see people using hand phone at
restricted area?
13. Did you see people NOT looking in the direction
that they are walking (eyes NOT on path)?
14. Did you see people NOT following 100% of the
procedure when doing work?
15. Did you see people working without hand
glove in the plant area?
16. Did you see people walking/working inside
the maintenance workshop yellow line area
without minimum PPE required?
to be more aware of at-risk behaviour by their colleagues when doing their daily work.
On any occasion, they could also give their feedback directly to their peers. Those who are afraid that their feedback could cause ill feeling have the option of giving their feedback to the database. The database
would perform the analysis automatically and publish
the result online based on the feedback received.
The employees are indirectly reminded by the online
analysis and feedback on the unsafe behaviours from
this exercise and this could change their behaviour
eventually to avoid similar observations in the
future. This could be achieved through the cognitive
psychology effect.
With the analysis done by the database, the HSE
committee could use those data and discuss them
in their regular meeting directly to correct the at-risk
behaviours that frequently occur. The committee could
apply the ABC analysis (Minshall, 1977) to understand
the behaviours. Optionally, the HSE committee could
also conduct some interviews with the identified
employees to further understand why they take risk
when performing their tasks. With that, the necessary
action plan could be established to rectify the at-risk
behaviours that are contributed by factors such as
“hard-ware”, “soft-ware” or “human-ware” mentioned
above.
The questions in the pre-programmed database need to be revisited from time to time and updated with any new potential at-risk behaviours as necessary.
Case Study
A case study using the e-ARBAIS concept was implemented in Company X for a month. The program was termed “1-min Observation”. One of the important tools of this study was the use of IT (information technology) to help with the observation processes. The database was developed as a Microsoft Excel spreadsheet. One master list consisting of 16 questions was generated in the database, based on the frequent at-risk behaviours previously identified and reported, as shown in Table 2. The case study was run for a month and each question was repeated four times to observe any possible trend in the data collected.
[Figure 2 summarises the flow of the "1-min Observation" programme: the user opens the database; the database shows today's date, searches for the questions matched with today's date, and prompts the two matching questions on the front page; the user selects "yes" or "no" for both questions, selects a department name, and clicks "submit"; the data captured in the database consist of the date, user ID, the answers to both questions, and the department. The database then counts the day's participation by department and charts it on the "participation" page; it calculates the percentage of total unsafe behaviour observed in a day (total unsafe behaviour = Σ unsafe behaviour / Σ participations) and charts it on the "unsafe behavior" page; and it displays yesterday's responses for both questions, with the percentage of unsafe behaviour, on the "statistics" page. The cycle repeats whenever new data are submitted.]
Figure 2. The flowchart for “1-min Observation” programme
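A minimal sketch of this daily flow (Python; the schedule, names and in-memory storage are illustrative assumptions, not the actual Excel implementation):

```python
import datetime
from collections import defaultdict

# Hypothetical schedule: each of the 16 master-list questions is asked
# four times over the month, two questions per working day.
SCHEDULE = {
    datetime.date(2008, 7, 1): [12, 16],   # question numbers for that date
    datetime.date(2008, 7, 2): [13, 14],
    # ... remaining dates of the month
}

records = []  # each record: (date, user_id, answer_q1, answer_q2, department)

def questions_for_today(today):
    """Return the two questions matched with today's date (front page)."""
    return SCHEDULE.get(today, [])

def submit(today, user_id, answer_q1, answer_q2, department):
    """Capture one response: date, user ID, both answers and department."""
    records.append((today, user_id, answer_q1, answer_q2, department))

def participation_by_department(today):
    """Count the day's participation per department ('participation' page)."""
    counts = defaultdict(int)
    for date, _, _, _, dept in records:
        if date == today:
            counts[dept] += 1
    return dict(counts)

def unsafe_percentage(today):
    """Sum of unsafe observations over sum of participations, as a percentage.
    Here "yes" is assumed to mean an unsafe behaviour was observed, and each
    submission carries two answers."""
    day = [r for r in records if r[0] == today]
    if not day:
        return 0.0
    unsafe = sum((r[2] == "yes") + (r[3] == "yes") for r in day)
    return unsafe / (2 * len(day)) * 100
```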
Figure 3. The main page of the “1-min Observation” program
Figure 4. The data sharing page with all employees of Company X
At the front page of the on-line database, only two questions would be prompted. Two questions per day were adopted after consideration of the human factor: the programme was intended to make participation feel so simple that people would ask "why not take part?" The HSE Committee felt that three questions might cause irritation, while one question was too little and not efficient for data collection. After much consideration, two questions were considered the most appropriate. The questions covered areas such as Personal Protection Equipment (PPE) usage, ergonomics, safety rules, safe practice and housekeeping. The questions were custom designed to suit Company X's interest in reducing the at-risk behaviour of its employees. The flowchart of the "1-min Observation" programme is shown in Figure 2. The main page of the "1-min Observation" programme and the data sharing page with all the employees are given in Figure 3 and Figure 4 respectively.
Participation
Participation from the employees at the early stage of the launch was poor. This was primarily due to unfamiliarity with the programme and with the routine of going into the web page for the "1-min Observation" file every day. After much explanation and encouragement of the employees by the safety manager, the response started to increase. The responses of the employees from the respective departments are shown in Figure 5.
The responses were expected to be low during weekends and public holidays. Figure 6 shows the responses received from all the employees; the responses were indeed on the lower side during weekends. The trend showed that there was improvement in participation over time. However, there were some occasions when feedback was lower due to visitors' plant visits or a corrupted master file. The master file, which was compiled using Microsoft Excel, was easily corrupted due to being shared with many people and the huge size of the file. The problem was then fixed by using a standby master file, consistently backed up with double password protection. Also, the file size was reduced by removing some unnecessary decorative pictures from the file.
Based on records from the Human Resource Department, the daily attendance of the employees was used for comparison against the participation rate. Figure 7 shows the percentage of participation relative to attendance. The highest participation received was 86%, whereas at times it went below 10%. This depended heavily on plant activities. If the plant was experiencing some problems, the response
would be lower, as most of the employees were tied up with the rectification of plant problems.

Figure 5. The responses received from all the respective departments.
Figure 6. The responses received from the employees.
Figure 7. The percentage of participation based on daily attendance.
Data Analysis

The feedback from the employees was analysed by the database. Each question was prompted four times. The calculation is shown below.
Percent of Unsafe Behaviours = (Total Unsafe Behaviours Observed / Total Responses) × 100%

where

Total Unsafe Behaviours Observed = Unsafe Behaviours Observed at Time 1 + Time 2 + Time 3 + Time 4
Total Responses = Number of Responses at Time 1 + Time 2 + Time 3 + Time 4
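As an illustration of the calculation (with hypothetical numbers): if a question received 40, 55, 60 and 45 responses over its four repetitions, with 8, 11, 9 and 7 unsafe observations respectively, then Percent of Unsafe Behaviours = (8 + 11 + 9 + 7) / (40 + 55 + 60 + 45) × 100% = 35/200 × 100% = 17.5%.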
From this, the percentages of unsafe behaviours were sorted accordingly. However, it was noted that even the topmost unsafe behaviour appeared in only 35% of the total responses, as shown in Figure 8. Generally, most of the respondents were practising safe behaviour.
From Figure 9, the highest unsafe behaviour observed was the usage of hand phones in restricted areas, which contributed 35% of the responses. There was no occasion on which anyone was observed lifting goods in an improper position.
Discussion

The employees of Company X gave good support and responses to the "1-min Observation" programme. Using the e-ARBAIS programme, it was rather easy to identify at-risk behaviours that needed improvement, and gathering the useful data did not require many additional resources. On these preliminary results, the programme was considered quite successful. A longer time is needed to see the long-term impact of e-ARBAIS.
Figure 8. The percentage of unsafe behaviours observed based on the total feedback
Figure 9. The top five unsafe behaviours observed
Some challenges in the implementation of e-ARBAIS in Company X and its limitations are shared below.

The Challenges of Implementing e-ARBAIS in an Organisation

a. Ensure clear communication

As e-ARBAIS was a new concept, it was very important to communicate the implementation clearly to the employees. During the case study, unclear communication and explanation from the safety department were among the feedback quoted from the employees.
b. Ensure management commitment

Management commitment undeniably played an important role. Management commitment to participation would lead others to follow. Consistent management commitment at each level was imperative.
c. Ensure follow-up action

The e-ARBAIS programme could be more effective if the employees could see follow-up action by the Safety Department or HSE Committee. Employees who participated in the programme would be eager to report their observations and would want to see the changes. Thus, if the HSE committee were not able to take appropriate action to make the changes, employees would begin to feel disappointed with the management, and with no follow-up action and no motivation to sustain their interest, the programme might eventually cease.
The Limitations of the "1-min Observation" Programme

As the case study was rather short, it was difficult to measure whether there was any improvement in safety behaviour in the long term.

Also, the "1-min Observation" programme was an IT-based programme, so it was vital that the database worked properly. During the case study, the database was created using Microsoft Excel; the file was corrupted several times, interrupting the programme. Additionally, the file was shared among 79 employees and was accessible to only one user at a time. A lot of time was wasted in waiting, and some employees gave up when they could not open the file on a particular day. The file malfunctions were occasionally due to the huge size of the database. Improvement in this area was required to make the programme more successful.
The questions in the database were developed based on the recommendations of the company and were focused on the observation of unsafe acts. However, one of the questions, "Did you experience back pain after work?", addresses the result of unsafe acts and might not be suitable for inclusion in the database for observation purposes. Nevertheless, it shows the flexibility of e-ARBAIS, which can also be extended to cover general safety issues resulting from an unsafe act.
Some of the questions may not be directly understood by the employees. For instance, in "Did you see people lift things in improper positions?", the employees might not fully understand what "improper position" means, and therefore inaccurate responses might be given. Refinement of these questions is required to ensure accurate feedback from the observers.
d. Honest participation

The data collected would be very useful if everybody participated and responded honestly. However, there was a possibility that people did not answer based on actual observations. Generally, this should be rare, and the overall analysis would remain useful.
Additionally, there was a possibility that one unsafe act was observed by multiple observers, which would inflate the statistics in the database. In this case, a "feedback" column that enables the observer to write down and describe the observation would definitely help to minimise the problem.
Conclusion
In conclusion, the concept of e-ARBAIS is to serve as an alternative to current BBS programmes. It required fewer resources, offered more involvement, and had low cost, all of which added to the sustainability of the programme. The e-ARBAIS was easy to implement and made data collection easy. It also emphasised and reminded employees about conducting tasks with the correct behaviour. This was intended to provide a psychological effect on the employees regarding safe behaviour and to inculcate habitual awareness. Thus, it can encourage a safety culture in an organisation.
The case study of implementing e-ARBAIS in Company X, named "1-min Observation", received positive support and feedback. There were some constraints in fully implementing the "1-min Observation", such as the need for a well-designed database, effective communication between the safety department and the employees, and efficient follow-up action on the data analysed. All these would add to the success of the case study. It can be further
fine-tuned and used in many organisations. Some recommendations are given below; the programme could be more effective if they were considered and adopted.

Overall, the e-ARBAIS concept was feasible and practical. Given a longer time and an improved implementation stage, e-ARBAIS would definitely benefit the organisation.
Recommendations

a. Appropriate planning for the e-ARBAIS programme

Most of the employees would like to be informed about the analysis after their participation. They wanted to know more about the findings and which unsafe behaviours were observed most frequently. Therefore, more thorough planning by the safety department was required. The HSE committee should take immediate action once the employees have completed the feedback, and the timely analysis should be shared with everybody. Also, appropriate action must be taken to show the employees that their feedback is valuable; no one would like to waste their time knowing that nothing was going to happen with their feedback. It is very important to note that the sustainability of the programme depends on the employees' confidence in the programme.

The flow chart in Figure 10 shows the appropriate cycle of the e-ARBAIS programme.
b. Improvement on database and feedback
column
As mentioned earlier, one of the weaknesses during
the case study was the frequent corruptions and
malfunctions of the database.
To overcome this problem, a more stable programme
should be used. A web-based database is much more
user-friendly in this case.
Figure 10. The flow chart of how the e-ARBAIS programme should be implemented (cycle: Get ready new questions → Update e-ARBAIS → Feedback → Team Review → Execute Action Item → Share data & action)
Additionally, it would add value if there were some feedback columns in addition to the questions
posted so that the respondents could more accurately
describe the problems and give feedback to the safety
department.
c. Sustainability of the programme

It is important to ensure that the programme is sustainable. One recommendation is to give rewards to the employees for the feedback they give to the programme. Rewards may help to encourage participation and continuous feedback. In the long term, the safety programme would be sustained and the unsafe behaviour of the employees could be improved.
Acknowledgement

The authors would like to thank Tuan Haji Mashal Ahmad and his staff for their valuable contribution to this work.
References
[1] American Psychological Association (APA) (2003). "Behaviour analyses help people work safer", Washington. www.apa.org
[2] Geller, E. S. (2002). "How to get people involved in Behaviour-Based Safety – selling an effective process", Cambridge, MA: Cambridge Center for Behavioural Studies.
[3] Gilmore, Michael R., Perdue, Sherry R. & Wu, Peter (2002). "Behaviour Based Safety: The next step in injury prevention". SPE International Conference on Health, Safety & Environment in Oil and Gas Exploration and Production, Kuala Lumpur, Malaysia, 20-22 Mar 2002.
[4] Hayes, S. C. (2000). "The greatest dangers facing behaviour analysis today". The Behaviour Analyst Today, Volume 2, Issue Number 2. Cambridge Center for Behavioural Studies.
[5] Minshall, S. (1997). "An opportunity to get ahead of the accident curve". Mine Safety and Health News, Vol 4, No. 9.
[6] Minter, S. G. (2004). "Love the process, hate the name", Occupational Hazards, 3rd June 2004.
[7] SCF Arizona Loss Control (2004). "Behavioural Safety: The right choice – Listen to your conscience and eliminate dangerous behaviours", SCF Arizona, 2nd July 2004, http://www.scfaz.com/publish/printer_549.shtml
[8] Schatz, J. R. (2003). "Behaviour-Based Safety: An introductory look at Behaviour-Based Safety", Air Mobility Command's Magazine, Jan/Feb 2003.
[9] Skinner, B. F. (1938). "The behaviour of organisms: An experimental analysis". Acton, Mass.: Copley Publishing Group.
Azmi Mohd Shariff received his MSc in Process Integration from UMIST, United Kingdom, in 1992. He furthered his studies at the University of Leeds, United Kingdom, and received his PhD in Chemical Engineering in 1995. He joined Universiti Kebangsaan Malaysia in 1989 as a tutor and, upon his return from Leeds, was appointed as a lecturer in 1996.

He joined Universiti Teknologi PETRONAS (UTP) in 1997 and was
appointed as the Head of Industrial Internship in 1998. He was later appointed as the Head of the Chemical Engineering Programme from 1999 to 2003. He is currently an Associate Professor in the
Department of Chemical Engineering and a Leader of the Process
Safety Research Group.
He teaches Process Safety and Loss Prevention at undergraduate
level and Chemical Process Safety at graduate level. He started
research work in the area of Process Safety in 2002. He has
successfully supervised and is currently supervising a few PhD, MSc
and final year undergraduate students in the area of Quantitative
Risk Assessment (QRA), Inherent Safety and Behaviour Based
Safety. He has presented and published more than 15 articles
relating to process safety in conferences and journals. He currently
leads an E-Science research project under Ministry of Science and
Technology entitled ‘Development of Inherent Safety Software for
Process Plant Design'. He also has experience in conducting QRA for a few local companies in Malaysia. Recently, he successfully conducted a short course on 'Applied QRA in the Process Industry' for OPU and non-OPU staff.
Bayesian Inversion of Proof Pile Test:
Monte Carlo Simulation Approach
I. S. H. Harahap*, C. W. Wong1
*Universiti Teknologi PETRONAS, 31750 Tronoh, Perak Darul Ridzuan, Malaysia.
1Malaysia LNG Sdn. Bhd., Malaysia.
ABSTRACT

Pile load tests are commonly conducted during both the design and construction stages. The objective of a pile load test during the design stage is to obtain the actual soil parameters and ultimate load in situ. On the other hand, for a test conducted during the construction stage, the objective is to prove that the actual pile capacity conforms to the design loads. This paper presents a probabilistic interpretation of the proof pile test to obtain the ultimate pile capacity. As the first step, the "actual" field parameters are back calculated using the ultimate pile capacity from proof pile load tests. The probabilistic inverse method is used for the back calculation of parameters. Soil parameters obtained from back calculation are then sampled using the Monte Carlo simulation technique to generate a histogram of ultimate pile capacity. From the synthetic histogram, other statistical metrics such as the mean, standard deviation and cumulative probability density of the ultimate pile capacity can be obtained.

Keywords: Socketed drilled shaft, Monte Carlo simulation, probabilistic inverse analysis, proof pile load test
Introduction

During construction, proof tests are conducted to verify the pile design. Specifications usually call for the absolute and permanent pile displacements during a proof test to be less than a specified amount. The number of piles subjected to proof tests is proportional to the total number of piles being constructed, and the number of piles that 'failed' relative to the number of proof tests should not exceed a certain prescribed number. The load applied in a proof test is usually twice the design load, applied at a constant loading rate using one or two load cycles.
This paper attempts to interpret the pile proof test to obtain soil parameters (soil and rock unit skin resistance as well as base resistance). The probabilistic inverse analysis method as given in [1] was used to obtain the actual soil parameters in the form of their joint probability density. The parameters obtained were then utilised to generate histograms of ultimate pile load capacity using the Monte Carlo simulation technique.

The arrangement of this paper is as follows: Section 2 outlines the geotechnical aspects of the drilled shaft, particularly its design methodology and the interpretation of pile-load-test results at a project site near Kuala Lumpur. The soil conditions and pile-load-test results at the project site are explained in Section 3. In Section 4, the salient features of the probabilistic inverse method are given, followed by their application to the interpretation of pile-load-test results in Section 5. Section 6 concludes with results from this work.
This paper was presented at the International Conference on Science & Technology: Application in Industry & Education 2008 (ICSTIE 2008), Penang,
12 - 13 December 2008
Geotechnical Aspects
Design of Socketed Drilled Shaft
The ultimate capacity of a socketed drilled shaft can be determined using the following equation:
Q_u = Q_fu + Q_bu = (f_S C_S + f_R C_R) + q_b A_b   (1)

where Q_u is the ultimate pile capacity, Q_fu is the ultimate shaft capacity and Q_bu is the ultimate base capacity. The ultimate skin resistance consists of contributions from the soil part and the rock part, where f_S and f_R are the unit shaft resistances, C_S and C_R are the circumferential areas of the pile embedded in each layer (soil and rock), q_b is the unit base resistance of the bearing layer (rock) and A_b is the pile base area. Figure 1 shows the components of the ultimate capacity of a drilled shaft socketed into rock.
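Eq. (1) translates directly into code. The sketch below assumes a circular pile of diameter d; the variable names and the example values in the comment are ours, not the paper's:

```python
import math

def ultimate_capacity(f_s, f_r, q_b, d, l_s, l_r):
    """Ultimate capacity of a socketed drilled shaft per Eq. (1).

    f_s, f_r : unit skin resistance of the soil and rock layers (kPa)
    q_b      : unit base resistance of the bearing rock (kPa)
    d        : pile diameter (m); l_s, l_r : embedded lengths in soil and rock (m)
    Returns Q_u in kN when resistances are in kPa and lengths in m.
    """
    c_s = math.pi * d * l_s          # circumferential area in the soil layer
    c_r = math.pi * d * l_r          # circumferential area in the rock socket
    a_b = math.pi * d ** 2 / 4.0     # pile base area
    return f_s * c_s + f_r * c_r + q_b * a_b   # Q_u = Q_fu + Q_bu

# Illustrative call for a 600 mm pile with a 3.0 m rock socket (cf. PSB-P183):
# q_u = ultimate_capacity(f_s=50.0, f_r=600.0, q_b=5000.0, d=0.6, l_s=7.775, l_r=3.0)
```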
The unit skin resistance of cohesionless material usually has the form f_S = K_S σ_O tan φ, where K_S is the coefficient of lateral pressure, σ_O is the vertical overburden pressure and φ is the friction angle [2]. For cohesive material the unit skin resistance is commonly taken as proportional to the undrained shear strength, f_S = α S_u, where α is a proportionality coefficient and S_u is the undrained shear strength [3]. For rock, the unit skin resistance is empirically determined from the rock unconfined compressive strength, q_u [4], or the Rock Quality Designation, RQD. Table 1 shows available empirical correlations to determine f_R from q_u, and Table 2 shows correlations to determine f_R from RQD. However, the unit skin resistance has a limiting value that depends on the unconfined compressive strength of the rock [5]. The proposed empirical correlations have either a linear or a power-curve relation to q_u. Evaluation by [6] indicated that the SPT N-value may not be a good indicator of f_R because its sampling rate is too infrequent and it suffers too much variability.
From an evaluation of pile load test results [7], there is a significant difference in load-settlement behaviour among sedimentary, granitic and decomposed rocks, and hence in the range of q_u for these rocks. Generally, granitic rock has a softer response compared to sedimentary rock. The ultimate unit resistance ranges between 6 and 50 MPa for granitic rock, compared to between 1 and 16 MPa for sedimentary rock. The empirical correlation by [8] in Table 2 is the lower bound for sedimentary rock [7]. The relationship between the rock unit skin resistance f_R and RQD as in Table 2 is commonly used in Malaysia to design drilled shafts socketed into rock [9]. As a further note, the unit resistance for uplift load should be adjusted for the contraction of the pile under uplift load, which reduces the confinement stress [10].
The design approach for socketed drilled shafts varies from place to place [11]. For example, the ultimate capacity could be determined by considering all resistances (soil and rock skin resistances, and base resistance) or by totally omitting the base resistance. This is due to the fact that less displacement is required to mobilise the skin resistance than is required to mobilise the base resistance. On the practical side, the length of the rock socket, and hence the total pile length, is determined in situ during construction based on the observed rock condition, i.e. the RQD at that particular location.
Discussions of various construction methods and constructability issues of drilled shafts can be found in Ref. [12], and the effect of the construction method on skin and base resistance in Refs. [13, 14]. However, pull-out test results [6] indicate that the drilled shaft construction method has no significant effect on the ultimate resistance.
Figure 1. Ultimate capacity of socketed drilled shaft
Table 1. Empirical values of unit skin resistance for socketed drilled shafts determined from rock unconfined compressive strength and SPT N-value. Complete references are given in [6]

No. | Empirical Correlation | Reference
1 | f_R (tsf) = 1.842 qu^0.367 | Williams et al. (1980)
2 | f_R (tsf) = 1.45 √qu for clean sockets; f_R (tsf) = 1.94 √qu for rough sockets | Rowe and Armitage (1987)
3 | f_R (tsf) = 0.67 √qu | Horvath and Kenney (1979)
4 | f_R (tsf) = 0.63 √qu | Carter and Kulhawy (1988)
5 | f_R (tsf) = 0.3 qu | Reynolds and Kaderabek (1980)
6 | f_R (tsf) = 0.2 qu | Gupton and Logan (1984)
7 | f_R (tsf) = 0.15 qu | Reese and O'Neill (1987)
8 | f_R = 0.017V (tsf), or f_R = −5.54 + 0.4 N (tsf) | Crapps (1986)
9 | N-value = {10, 15, 20, 25, 30, >30}; f_R (tsf) = {0.36, 0.77, 1.1, 1.8, 2.6, 2.6} | Hobbs and Healy (1979)
10 | N-range = 10-20, 20-50, 50-50/3 in., >50/3 in.; f_R (tsf) = 1.5, 2.5, 3.8, 5 | McMahan (1988)
Pile Load Test

A pile-load-test generally serves two purposes. When conducted during the design stage, it helps to establish the parameters to be used in design; when conducted during the construction stage, it proves the working assumptions made during design. To obtain the "actual soil parameters" for design purposes, the test pile is instrumented [4] and the parameters are back calculated from the data obtained during testing. The number of tests of this type is limited by cost; therefore the site variability of the pile ultimate capacity cannot be established.
Table 2. Empirical value of unit skin resistance for socketed drilled shaft determined from RQD ratio

RQD Ratio (%) | Working Rock Socket Resistance f_R (kPa)
Below 25 | 300
25 - 70 | 600
Above 70 | 1000
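Table 2 amounts to a simple step function of RQD; a small sketch of its use follows (the handling of the exact boundary values is our assumption, as Table 2 leaves it open):

```python
def rock_socket_resistance(rqd_percent):
    """Working rock socket resistance f_R (kPa) from the RQD ratio, per Table 2."""
    if rqd_percent < 25:
        return 300.0
    elif rqd_percent <= 70:
        return 600.0
    return 1000.0

# The site described later has an average RQD of 25%, which this sketch maps
# to f_R = 600 kPa; compare the 300-800 kPa prior means used in Cases 1-4.
```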
The load settlement curves do not always show a sign of failure as specified by the various methods, for example Davisson's, Terzaghi's and Chin's methods, among others. As such, it is difficult to ascertain the validity of the design assumption, i.e. the ultimate skin resistance, based on the information obtained from this test. Other elaborate methods to interpret pile load tests in sand and clay can be found in [15, 16]. A new and novel approach that utilises a database of pile load tests was recently proposed by [17]. In their method, design parameters are extracted from the database using a Bayesian neural network that intelligently updates its knowledge when new information is added to the database.
Apart from the instrumented piles previously cited, [18] proposed a method to derive soil parameters from the load settlement curve. The approach uses a "projected load settlement curve" to obtain the ultimate capacity. The projected load settlement curve is an analytical function for the load settlement relation whose parameters are obtained from regression of the actual load settlement curve. For this purpose, failure is defined as corresponding to a settlement of 10% of the pile diameter.
As a proof test, pile-load-test results are interpreted based on criteria established to achieve the design objectives and elucidated in the technical specification. As an example, the settlement of the tested pile should not exceed a specified settlement at the working load, and the permanent settlement should not exceed a specified settlement at twice the working load. The pile is considered to "pass" if both criteria are satisfied. The number of proof pile tests is prescribed based on the total length of pile constructed; as such, more than one proof pile-load-test is conducted within one project. Furthermore, it is common for a proof pile-load-test to "fail to reach failure", in other words the applied load is less than the ultimate capacity of the pile.
While the interpretation of the proof test based on the settlement criteria laid out in the technical specification is sufficient for practical purposes, there have also been attempts to further exploit the information from proof pile-load-tests. For example, from a pile-load-test that reaches failure, information can be obtained to update the reliability of the pile [19-22]. For a pile-load-test that fails to reach failure, information can be obtained to update the probability distribution of pile capacity [23]. These approaches follow the trend of migration of geotechnical analysis from factor-of-safety based to reliability based [24]. It is worth noting that, besides the ultimate load limit state approach previously cited, the serviceability limit state for drilled shafts, establishing the probability of failure and the reliability index from the load settlement curve of the pile load test, has also been attempted in [25, 26]. In their approach, the pile load settlement curves are calculated using the "t-z" approach and the finite difference method. Probabilistic load-settlement curves are developed using Monte Carlo simulation. From the histogram generated, the probability of failure and the reliability index can be determined.
For the work reported herein, the probability density of the soil parameters is calculated from the ultimate pile capacity, deduced from the pile-load-test, using the probabilistic inverse method. The Monte Carlo simulation technique is then used to generate the histogram of pile capacity. The probability of the ultimate pile capacity, or the reliability index, can be obtained from the cumulative probability density of pile capacity.
Description of the Project

Site Condition

A total of 12 boreholes were drilled for soil investigation during the design stage. The soil investigations were mainly carried out using the Standard Penetration Test (SPT) as well as standard soil index and physical property tests. The site was mainly formed of three types of soil: silt, clay and sand. Silt was found at the top of the soil profile,
while very stiff or hard sandy silt was encountered in the next layer, which ranges from Reference Level (RL) 70 m to RL 50 m. Generally, high organic content was observed in the upper layer materials. The RQD for the site ranged from 10% to 30%, with an average of 25%.

Figure 2. Load settlement curves for proof pile-load-tests. Davisson's ultimate load can be obtained in only two out of nineteen tests.
Pile Loading Test

Three types of pile were used at the site: 450 mm, 600 mm and 900 mm diameter bored piles with design loads of 1500 kN, 4000 kN and 9000 kN respectively. Out of a total of 19 proof pile-load-tests conducted, only two gave an ultimate pile capacity based on Davisson's criteria. The calculated ultimate pile capacities are 8900 kN and 3900 kN for the 900 mm and 600 mm diameter piles, respectively. Figure 2 shows the load deflection curves and the ultimate load determination procedure using Davisson's method for the 600 mm and 900 mm piles, and Table 3 shows the schedule of all tests. It should be noted that for both the 600 mm and 900 mm piles, one out of the seven test piles tested failed.
Probabilistic Inverse Method
Suppose that we have a function f that maps parameters into theoretical quantities such that d = f(m), where d = {d_1, …, d_ND} and m = {m_1, …, m_NM}. The objective of inverse analysis is to determine m given d. In the context of the pile-load-test, this means determining f_S, f_R and q_b knowing Q_u obtained from the pile-load-test, with f being the relationship in Eq. (1).
Table 3. Attributes of pile proof test results

Pile Location | Estimate of Q_u (kN) | Pile Length (m) | Pile Diameter (mm) | Socket Length (m)
FF1-P28 | Not fail | 6.000 | 900 | 4.5
FF1-P67 | 8900 | 10.245 | 900 | 4.5
FF2-P184 | Not fail | 6.000 | 900 | 4.5
FF2-P472 | Not fail | 11.000 | 900 | 4.5
FF3-P234 | Not fail | 11.075 | 900 | 4.5
FF4-P236 | Not fail | 12.325 | 450 | 1.5
FF4-P275 | Not fail | 14.625 | 450 | 1.5
FF5-P33 | Not fail | 12.425 | 600 | 3.0
FF5-P85 | Not fail | 22.125 | 600 | 3.0
FSK 1,5-P47 | Not fail | 14.000 | 900 | 4.5
FSK 1,5-P44 | Not fail | 8.200 | 450 | 1.5
FSK 2,3,4-P200 | Not fail | 8.800 | 900 | 4.5
FSK 2,3,4-P387 | Not fail | 18.000 | 900 | 4.5
FSK 6-P74 | Not fail | 2.600 | 900 | 2.6
FSK 6-P307 | Not fail | 5.700 | 600 | 3.0
FSK 7-P40 | Not fail | 21.025 | 600 | 3.0
FSK 7-P370 | Not fail | 7.330 | 600 | 3.0
PSB-P138 | Not fail | 19.500 | 600 | 3.0
PSB-P183 | 3900 | 10.775 | 600 | 3.0
Data Space
Suppose that we have observed data values d_obs. The probability density model describing the experimental uncertainty, such as a Gaussian model, can be written as follows:

ρ_D(d) = k exp[ −(1/2) (d − d_obs)ᵀ C_D⁻¹ (d − d_obs) ]   (2)

where C_D is the covariance matrix. If the uncertainties are uncorrelated and follow a Gaussian distribution, this can be written as

ρ_D(d) = k exp[ −(1/2) Σ_i ((d_i − d_i,obs) / σ_i)² ]   (3)
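For the single observed datum used in this paper (the measured ultimate capacity), Eq. (3) collapses to a one-dimensional Gaussian. A minimal sketch, with σ an assumed measurement standard deviation:

```python
import math

def likelihood(d, d_obs, sigma):
    """Unnormalised Gaussian data density rho_D(d) of Eq. (3) for one datum."""
    return math.exp(-0.5 * ((d - d_obs) / sigma) ** 2)
```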
Model Space
In a typical problem the model parameters have a complex probability distribution over the model space, with probability density denoted ρ_M(m). Suppose that we know the joint probability density function ρ(m, d) and d = f(m); then the conditional probability density function

σ_M(m) = ρ_{M|d}(m | d = f(m))

can be obtained as follows [1]:
σ_M(m) = k ρ(m, f(m)) [ det(g_M + Fᵀ g_D F) / (det g_M · det g_D) ]^(1/2), evaluated at d = f(m)   (4)

For constant g_M(m) and g_D(d), and a linear or weakly non-linear problem [1], Eq. (4) reduces to

σ_M(m) = k ρ_M(m) [ ρ_D(d) / µ_D(d) ]_(d = f(m))   (5)

where k is the normalising factor and µ_D(d) is the homogeneous probability density function, which upon integration over the data space becomes unity.

Evaluation of Posterior Distribution

The analytical form of the posterior distribution, i.e. Eq. (5), is difficult to obtain, or, even if obtainable, difficult to interpret. One has to resort to a simulation approach such as Monte Carlo simulation to obtain parameter pairs over the model space and use such data for any application. In Monte Carlo simulation, after sufficient sampling of the random variables X₀, X₁, …, Xₙ, the expectation µ = E{g(X)} is approximated by the sample mean

µ ≈ (1/n) Σ_i g(X_i)   (6)

Another approach is to use Markov Chain Monte Carlo (MCMC) simulation, which generates sampling points over the model space by a "controlled random walk", the Markov chain, that eventually converges to the conditional probability distribution of the posterior (or parameters). In the Markov chain approach, for a sequence of random variables X₀, X₁, X₂, …, at each time t ≥ 0 the next state X_{t+1} is sampled from a distribution P(X_{t+1} | X_t) that depends on the state at time t. As in Monte Carlo simulation, if a sufficient number of sampling points is obtained, the approximation to the expected value is evaluated through Eq. (6).

In general, MCMC has three basic rules:
(i) a proposal Markov chain rule expressed by a transition kernel q(x, y),
(ii) an accept-reject rule which accepts or rejects a newly proposed Y_k = q(X_k, ·), where X_k is the most recently accepted random variable, and
(iii) a stopping rule.

A typical MCMC run follows the basic algorithm below.
Basic MCMC Algorithm
1. Draw an initial state X₀ from some initial distribution.
2. For t = 0 to N do:
3. Modify X_t according to some proposal distribution to obtain a proposed sample Y_{t+1} = q(X_t, ·).
4. With some probability A(X_t, Y_{t+1}), accept Y_{t+1}:

X_{t+1} = Y_{t+1} with probability A(X_t, Y_{t+1}); otherwise X_{t+1} = X_t

Some acceptance rules are given as follows:

(a) Metropolis sampling:
A(X_t, Y_{t+1}) = σ(Y_{t+1}) / σ(X_t)

(b) Metropolis-Hastings sampling:
A(X_t, Y_{t+1}) = [σ(Y_{t+1}) q(Y_{t+1}, X_t)] / [σ(X_t) q(X_t, Y_{t+1})]

(c) Boltzmann sampling:
A(X_t, Y_{t+1}) = σ(Y_{t+1}) / [σ(X_t) + σ(Y_{t+1})]

Bayesian Interpretation of Proof Pile Load Test

The simplistic model of the bearing capacity of a socketed drilled shaft is given by Eq. (1). Assuming known pile geometry, the model space is then m = (f_S, f_R, q_b). The probability density model describing the experimental uncertainty (Eq. 3) is formed using the theoretical model d = f(m) as in Eq. (1) and the observed pile ultimate capacity as d_obs. The joint probability density is then σ_M(m) = σ_M(f_S, f_R, q_b). Prior knowledge can be incorporated in ρ_M(m) = ρ_M(f_S, f_R, q_b), particularly knowledge of those parameters specific to the rock type and its locality.

The effects of prior knowledge on the ultimate pile capacity can be investigated using various forms of density distribution. In this work, only the effects of the unit skin and base resistances of rock are considered, and their joint probability density is obtained from Eq. (5) as

σ_M(f_R, q_b) = ∫ σ_M(f_S, f_R, q_b) df_S   (7)
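A compact sketch of the sampling machinery described above, combining Eq. (1), a Gaussian data density (Eq. 3) and the Metropolis acceptance rule (a); the priors, step sizes, σ and geometry below are illustrative assumptions, not the authors' settings:

```python
import math, random

Q_OBS, SIGMA = 3900.0, 200.0        # observed Qu (kN) and assumed data std. dev.
D, L_S, L_R = 0.6, 7.775, 3.0       # assumed 600 mm pile geometry (cf. PSB-P183)

def q_u(m):
    f_s, f_r, q_b = m               # Eq. (1) for a circular pile
    return (f_s * math.pi * D * L_S + f_r * math.pi * D * L_R
            + q_b * math.pi * D ** 2 / 4.0)

def prior(m):
    """Assumed independent normal priors on (f_S, f_R, q_b), all in kPa."""
    def pdf(x, mu, sd):
        return math.exp(-0.5 * ((x - mu) / sd) ** 2)
    f_s, f_r, q_b = m
    return pdf(f_s, 50.0, 15.0) * pdf(f_r, 300.0, 100.0) * pdf(q_b, 5000.0, 1500.0)

def posterior(m):
    """sigma_M(m) per Eq. (5) with a Gaussian rho_D, Eq. (3)."""
    return prior(m) * math.exp(-0.5 * ((q_u(m) - Q_OBS) / SIGMA) ** 2)

def metropolis(n=15_000, burn=500):
    """Markov chain over the model space using Metropolis acceptance rule (a)."""
    m = (50.0, 300.0, 5000.0)
    samples = []
    for i in range(n):
        cand = tuple(x + random.gauss(0.0, s)
                     for x, s in zip(m, (5.0, 30.0, 300.0)))  # proposal step
        if random.random() < posterior(cand) / max(posterior(m), 1e-300):
            m = cand                # accept; otherwise keep the current state
        if i >= burn:
            samples.append(m)       # discard burn-in points
    return samples

# Histogram of ultimate capacity from the accepted parameter samples:
capacities = [q_u(m) for m in metropolis()]
```

Drawing candidate parameter sets directly from the priors and weighting them by the data density would give the "brute force" Monte Carlo variant compared in the paper.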
In this work two aspects are investigated: (a) the effect of prior knowledge on the predicted pile bearing capacity and (b) a comparison between the "brute force" Monte Carlo and Markov Chain Monte Carlo simulations. For the first objective, four cases are considered. In Cases 1 to 3, lognormal prior distributions of f_R are assumed with mean values ranging between 300 and 800 kPa. For the fourth case, a normal distribution is assumed for f_R with a mean value of 300 kPa. These values conform to the empirical value of unit skin resistance for rock at low RQD (Table 2). This number is somewhat lower than values back calculated from pile load tests in limestone, which range from 900 kPa to 2 300 kPa [29].

Results from Cases 1 to 4 are compared in terms of: (a) plots of the posterior probability density, (b) Monte Carlo sampling points and (c) histograms of the predicted ultimate pile capacity, shown in Figures 3 to 6. Figure 7 shows a relative comparison of the prior density distributions used in Cases 1 to 3. A sample 3D plot is shown in Figure 8 for Case 1. In Case 5, MCMC is used to draw sampling points with the prior density distribution used in Case 4. All cases use 15 000 trials. Results for all cases are shown in Table 4 for a 600 mm diameter pile.
Figure 3. (a) Plot of posterior probability density for Case 1, (b) sampling points superimposed on the probability density plot and (c) histogram of ultimate pile capacity
Figure 4. (a) Plot of posterior probability density for Case 2, (b) sampling points superimposed on the probability density plot and (c) histogram of ultimate pile capacity
Figure 5. (a) Plot of posterior probability density for Case 3, (b) sampling points superimposed on the probability density plot and (c) histogram of ultimate pile capacity
Figure 6. (a) Plot of posterior probability density for Case 4, (b) sampling points superimposed on the probability density plot and (c) histogram of ultimate pile capacity
Figure 7. Prior distribution of unit skin resistance of rock
Figure 8. A 3D plot of the posterior joint probability distribution for Case 1
Figure 9. Case 5: (a) plot of sampling points generated by the Markov chain and (b) histogram of ultimate pile capacity
Table 4. Comparison of mean, median and standard deviation

Case | Mean (kN) | Median (kN) | Standard Deviation (kN) | Remark
Case 1 | 4103 | 4147 | 521 | Lognormal distribution, MC
Case 2 | 4108 | 4164 | 586 | Lognormal distribution, MC
Case 3 | 4125 | 4189 | 573 | Lognormal distribution, MC
Case 4 | 4145 | 4169 | 590 | Normal distribution, MC
Case 5 (a) | 3840 | 3905 | 351 | Normal distribution, MCMC (burn-in 100)
Case 5 (b) | 3902 | 3931 | 270 | Normal distribution, MCMC (burn-in 500)
Discussions

From Table 4 and Figures 3, 4, 5, 6 and 9 the following observations can be advanced:

For all practical purposes the prior density distributions have minimal effect on the calculated ultimate pile capacity. The mean values ranged between 4 103 and 4 145 kN.

Markov Chain Monte Carlo is more efficient compared to the brute-force Monte Carlo method. Out of 15 000 trials, 6 042 points were generated using the MCMC method compared to 2 066 points using the brute-force Monte Carlo method. The sampling points for the MCMC method are more concentrated around the maximum probability density (Figure 9a) compared to the MC method (Figure 6b), resulting in a more accurate ultimate pile capacity.

Markov Chain Monte Carlo also yields more accurate results (3 902 kN versus 3 900 kN from the pile test) compared to the brute-force Monte Carlo method (4 145 kN versus 3 900 kN from the pile test).

Concluding Remark

Based on the previous discussions the following conclusions can be put forward:
• A method to interpret the pile test to obtain the probabilistic characteristics of ultimate load has been presented. The first step is to obtain the joint probability distribution of soil parameters in the material (or parameter) space, and the second step is to generate the histogram of ultimate pile capacity using the Monte Carlo technique. Statistical characteristics of the ultimate pile capacity are then obtained from the synthetic histogram using standard discrete methods.
• From this study, the effect of the prior probability distribution is, for all practical purposes, negligible.
• The Markov Chain Monte Carlo method yields more accurate results compared to the brute-force Monte Carlo method and is more efficient in terms of the ratio of generated sampling points to the number of trials.
References
[1] Mosegaard, K. & Tarantola, A. "Probabilistic Approach to Inverse Problems". In International Handbook of Earthquake & Engineering Seismology (Part A), Academic Press. 2002. pp. 237-265
[2] Rollins, K. M., Clayton, R. J., Mikesell, R. C. & Blaise, B. C. "Drilled Shaft Side Friction in Gravelly Soils". Journal of Geotechnical and Geoenvironmental Engineering. 2005. 131(8):987-1003
[3] O'Neill, M. W. "Side Resistance in Piles and Drilled Shafts". The Thirty-Fourth Karl Terzaghi Lecture. Journal of Geotechnical and Geoenvironmental Engineering. 2001. 127(1):1-16
[4] Zhang, L. & Einstein, H. H. "End Bearing Capacity of Drilled Shafts in Rock". Journal of Geotechnical and Geoenvironmental Engineering. 1998. 124(7):574-584
[5] Amir, J. M. "Design of Socketed Drilled Shafts in Limestone, a Discussion". Journal of Geotechnical Engineering. 1994. 120(2):460-461
[6] McVay, M. C., Townsend, F. C. & Williams, R. C. "Design of Socketed Drilled Shafts in Limestone". Journal of Geotechnical Engineering. 1992. 118(10):1626-1637
[7] Ng, C. W. W., Yaw, T. L. Y., Li, J. H. M. & Tang, W. H. "Side Resistance of Large Diameter Bored Piles Socketed into Decomposed Rocks". Journal of Geotechnical and Geoenvironmental Engineering. 2001. 127(8):642-657
[8] Horvath, R. G. & Kenney, T. C. "Shaft Resistance of Rock-socketed Drilled Piers". Proceedings, Symposium on Deep Foundations. 1979
[9] Tan, Y. C. & Chow, C. M. "Design and Construction of Bored Pile Foundation". Geotechnical Course for Foundation Design & Construction. 2003
[10] Fellenius, B. H. Discussion of "Side Resistance in Piles and Drilled Shafts". Journal of Geotechnical and Geoenvironmental Engineering. 2001. 127(1):3-16
[11] Hejleh, N. A., O'Neill, M. W., Hanneman, D. & Atwooll, W. J. "Improvement of the Geotechnical Axial Design Methodology for Colorado's Drilled Shafts Socketed in Weak Rocks". Colorado Department of Transportation. 2004
[12] Turner, J. P. "Constructability for Drilled Shafts". Journal of Construction Engineering and Management. 1992. 118(1):77-93
[13] Majano, R. E., O'Neill, M. W. & Hassan, K. M. "Perimeter Load Transfer in Model Drilled Shafts Formed Under Slurry". Journal of Geotechnical Engineering. 1994. 120(12):2136-2154
[14] Chang, M. F. & Zhu, H. "Construction Effect on Load Transfer along Bored Pile". Journal of Geotechnical and Geoenvironmental Engineering. 2004. 130(4):426-437
[15] Cherubini, C., Giasi, C. I. & Lupo, M. "Interpretation of Load Tests on Bored Piles in the City of Matera". Geotechnical and Geological Engineering. 2004. 23:239-264
[16] Pizzi, J. F. "Case History: Capacity of a Drilled Shaft in the Atlantic Coastal Plain". Journal of Geotechnical and Geoenvironmental Engineering. 2007. 133(5):522-530
[17] Goh, A. T. C., Kulhawy, F. H. & Chua, C. G. "Bayesian Neural Network Analysis of Undrained Side Resistance of Drilled Shafts". Journal of Geotechnical and Geoenvironmental Engineering. 2005. 131(1):84-93
[18] Boufia, A. "Load-settlement Behaviour of Socketed Piles in Sandstone". Geotechnical and Geological Engineering. 2003. 21:389-398
[19] Kay, J. N. "Safety Factor Evaluation for Single Piles in Sand". Journal of the Geotechnical Engineering Division. 1976. 102(10):1093-1108
[20] Lacasse, S. & Goulois, A. "Uncertainty in API Parameters for Predictions of Axial Capacity of Driven Piles in Sand". Proceedings of the 21st Offshore Technology Conference, Society of Petroleum Engineers, Richardson, Texas. 1989. 353-358
[21] Baecher, G. R. & Rackwitz, R. "Factor of Safety and Pile Load Tests". International Journal for Numerical and Analytical Methods in Geomechanics. 1982. 6(4):409-424
[22] Zhang, L. M. & Tang, W. H. "Use of Load Tests for Reducing Pile Length". Proceedings of the International Deep Foundations Congress, Geotechnical Special Publication No. 116, M. W. O'Neill and F. C. Townsend, eds., ASCE, Reston, Va. 2002. 993-1005
[23] Zhang, L. M. "Reliability Using Proof Pile Load Tests". Journal of Geotechnical and Geoenvironmental Engineering. 2004. 130(2):1203-1213
[24] Duncan, J. M. "Factors of Safety and Reliability in Geotechnical Engineering". Journal of Geotechnical and Geoenvironmental Engineering. 2000. 126(4):307-316
[25] Misra, A. & Roberts, L. A. "Axial Service Limit State Analysis of Drilled Shafts using Probabilistic Approach". Geotechnical and Geological Engineering. 2006. 24:1561-1580
[26] Misra, A., Roberts, L. A. & Levorson, S. M. "Reliability Analysis of Drilled Shaft Behaviour Using Finite Difference Method and Monte Carlo Simulation". Geotechnical and Geological Engineering. 2007. 25:65-77
[27] Hassan, K. M. & O'Neill, M. W. "Side Load-Transfer Mechanisms in Drilled Shafts in Soft Argillaceous Rock". Journal of Geotechnical and Geoenvironmental Engineering. 1997. 123(2):145-152
[28] Hassan, K. M., O'Neill, M. W., Sheikh, S. A. & Ealy, C. D. "Design Method for Drilled Shafts in Soft Argillaceous Rock". Journal of Geotechnical and Geoenvironmental Engineering. 1997. 123(3):272-280
[29] Gunnink, B. & Kiehne, C. "Capacity of Drilled Shafts in Burlington Limestone". Journal of Geotechnical and Geoenvironmental Engineering. 2002. 128(7):539-545
I. S. H. Harahap holds a Bachelor’s Degree
(Sarjana Muda, SM) and Professional
Degree (Insinyur, Ir.) from Universitas
Sumatera Utara, Medan, Indonesia. After a
short stint with the industry, he continued
his tertiary education in the United States,
obtained his Master’s of Science in Civil
Engineering (MSCE) degree from Ohio
University, Athens, Ohio and Doctor of
Philosophy (PhD) degree from Northwestern University, Evanston,
Illinois. He was with Universitas Sumatera Utara (USU) before
joining Universiti Teknologi PETRONAS (UTP) in August 2005.
His research interests include: (1) application of expert system in
geotechnical engineering, (2) implementation of geotechnical
observational method, (3) subsurface exploration and site
characterization, (4) landslide hazard identification and mitigation,
and (5) robust and reliability based structural optimization. He
is the recipient of the 1993 Thomas A. Middlebrook Award from the American Society of Civil Engineers (ASCE) for his research on the simulation of the performance of braced excavations in soft clay.
Wong Chun Wah graduated with a
Bachelor of Science degree in Civil
Engineering from Universiti Teknologi
PETRONAS (2008). Currently, he is working
with PETRONAS’s Malaysia LNG Sdn. Bhd. in
Bintulu as a Civil and Structural Engineer.
ELEMENT OPTIMISATION TECHNIQUES
IN MULTIPLE DB BRIDGE PROJECTS

Narayanan Sambu Potty*, C. T. Ramanathan1
*Universiti Teknologi PETRONAS, 31750 Tronoh, Perak Darul Ridzuan, Malaysia
1Kumpulan Liziz Sdn Bhd, Kota Kinabalu, Sabah, Malaysia
*narayanan_sambu@petronas.com.my
Abstract

The management of multiple bridge projects relies on the selection and optimisation of elements. Problems are avoided if construction knowledge and experience are utilised. A systematic selection process is required during the design stage to maximise the use of available resources. Multiple bridge designs incorporate practical suggestions from field personnel. The case study in East Malaysia considers problems of material, labour and equipment availability. The average ratio presence of each element is calculated to show its impact. The need for element optimisation techniques in bridges is emphasised. The database is presented and the process undertaken during the design is discussed.

Keywords: Design and Build process, Multiple project management, Bridge elements, Average ratio presence.
INTRODUCTION

Bridge structures are designed to high quality and safety standards, but sometimes without enough attention to construction methods, site conditions and details. Construction problems encountered during execution are complex and costly. Many construction problems can be avoided with proper attention to, and consideration of, the construction process during the design phase [1]. Factors of simplicity, flexibility, sequencing, substitution, and labour skill and availability should be part of the design. The appropriate use of standardisation can have several benefits [1]. These include increased productivity and quality from the repetition of field operations, reduction in design time, savings from volume discounts in purchasing, and simplified materials management. This method of standardising bridge elements may be suitable for selected projects of the same nature, but it is less significant and more complex when constructing multiple bridge projects situated in different site conditions.
The construction process and success in the management of multiple bridge projects rely directly on the selection and optimisation of their elements/components. A systematic optimisation process is adopted during the conceptual design stage to overcome resource constraints during the construction phase. Knowledge from construction experience is also utilised during the element optimisation process.
D&B combines the design and construction functions to vest responsibility in one entity: the design-builder. The D&B process changes some fundamental relationships between the client and the contractor. The client employs a Project Director as the
This paper was presented at the International Conference on Construction and Building Technology 2008, Kuala Lumpur,
16 - 20 June 2008
representative, whereas the contractor has to engage a design consultant and manage the construction of the whole project. The owner employs only one contractor, who is solely responsible for delivering the assigned project to the defined requirements, standards and conditions. Both parties are expected to aim for a successful project outcome.
BACKGROUND AND PROBLEM STATEMENT

Sabah, situated on the northern tip of the island of Borneo, is the second largest state in Malaysia. Over 70% of its population lives in rural areas, as the majority depend directly or indirectly on agriculture. The state has several development projects procured by the D&B method for upgrading rural roads and replacing bridges for the benefit of the rural sectors contributing to the national economy. Five contract packages comprising 45 bridges located in 12 districts of the state, which has a coverage area of 76 115 square kilometres [2], and two road projects in one district were assigned to the first author's company. As summarised in Figure 1, the bridge projects and the two road projects were handled simultaneously.
This study examines the use of element optimisation techniques through a case study of managing the above multiple D&B bridge projects in Sabah, East Malaysia. Data on the bridge elements of all 45 bridges were collected and compiled. The ratio of each element in the individual bridges and their average ratio presence in each project were compared for the study.
RESEARCH METHODOLOGY

The element ratio comparison and analysis were made in the following steps:
1. Review all five projects and compile the summary.
2. Prepare the element ratio table and pie chart for each bridge of all the projects and analyse the ratio of their impact on the individual bridges.
3. Compress and derive a single, common average ratio table and pie chart showing the overall impact of each element for the entire multiple project.
4. Identify the critical and crucial elements that need attention.
5. Discuss the elements with major contributions, i.e. analyse the elements with the maximum element ratio.
ANALYSIS OF COMPONENTS OF MULTIPLE DB
BRIDGE PROJECTS
Schedules of the multiple projects
The projects started at different times having
simultaneous construction periods at various stages of completion, as shown in Table 1. The five projects involving 45 bridges were completed successfully and delivered to the client for public usage.

Figure 1. Duration of multiple D&B projects in Sabah
Table 1. Schedule of multiple contracts handled in Sabah

Project Number | No. of Bridges | No. of Districts | Period (Months) | Project Duration
1 | 12 | 3 | 18 | Jul 03 – Jan 05
2 | 5 | 3 | 18 | Jan 05 – Jul 06
3 | 8 | 3 | 18 | Jul 05 – Jan 07
4 | 13 | 3 | 20 | Oct 05 – Jun 07
5 | 7 | 3 | 20 | Oct 05 – Jun 07
Total (7 contracts) | 45 | 12 | 48 | Jul 03 – Jun 07
Bridge element ratio

The element ratio for the bridges of each project was calculated as given in the following sections.

Project 1 – Package A

Table 2 and Figure 2 show the percentage presence of each element in constructing the bridges in Package A. From the weight breakdown it is clear that the critical elements are "Piling work (Foundation)", with greater than 20% presence, and "production and erection of beams", with nearly 40% presence. The influence of these elements on the project is high, in spite of their low quantum of physical work (about 20% and 40% presence respectively), because of their specialty nature. Hence, care should be taken when choosing these specialty works to suit local availability, and eventually a few specialised designs should be used for those particular elements. In this manner these crucial elements in Package A were optimised in the design as follows:

Critical element No. 1: Piling works
– Steel H piles of driven type for 6 bridges
– Micro piles of drilling type for 3 bridges
Critical element No. 2: Beams
– Cast in situ post-tensioned beams of two shapes, I-16 and I-20.

Table 2. Weight of each element in bridges of Package A

Description | Ratio
Foundation (Piling) | 22.64%
Abutment and Wing Wall | 16.33%
Piers, Crossheads and Pilecaps | 1.57%
Precast, Prestressed Beam and Ancillary | 39.88%
Diaphragms | 2.87%
Bridge Deck and Run-On-Slab | 13.93%
Parapet | 2.77%
TOTAL | 100.00%

Figure 2. Weight of elements expressed as a percentage
Project 2 – Package B

Table 3 and Figure 3 show the percentage presence of each element in constructing the bridges in Package B. From the weight breakdown it is clear that the critical elements are "Piling work (Foundation)", with greater than 20% presence, and "production and erection of beams", with about 30% presence. For the reasons explained above for Project 1, specialised designs were used for those particular elements. In this manner these crucial elements in Package B were optimised in the design as follows:

Critical element No. 1: Piling works
– Micro piles for 3 bridges
– Spun piles for 1 bridge
Critical element No. 2: Beams
– Cast in situ post-tensioned beams of two shapes, I-16 and I-20, for 2 bridges
– Prestressed precast beams (factory-made) for 3 bridges.
Table 3. Weight of each element in bridges of Package B

Description | Ratio
Foundation (Piling) | 29.41%
Abutment and Wing Wall | 12.68%
Piers, Crossheads and Pilecaps | 8.79%
Precast, Prestressed Beam and Ancillary | 32.50%
Diaphragms | 1.63%
Bridge Deck and Run-On-Slab | 11.88%
Parapet | 3.11%
TOTAL | 100.00%

Figure 3. Weight of elements expressed in percentage

Project 3 – Package C

Table 4 and Figure 4 show the percentage presence of each element in constructing the bridges in Package C. From the weight breakdown it is clear that the critical elements are again "Piling work (Foundation)", with greater than 20% presence, and "production and erection of beams", with greater than 30% presence. For the reasons explained above, specialised designs were used for those particular elements. In this manner these crucial elements in Package C were optimised in the design as follows:
Critical element No. 1: Piling works
– Bored piles for 3 bridges
– Micro piles for 2 bridges
– Spun piles for 1 bridge
Critical element No. 2: Beams
– Cast in situ post-tensioned beams of two shapes, I-16 and I-20, for 4 bridges
– Prestressed precast beams (factory-made) for 2 bridges

Table 4. Weight of each element in bridges of Package C

Description | Ratio
Foundation | 22.44%
Abutment and Wing Wall | 12.13%
Piers, Crossheads and Pilecaps | 13.56%
Precast, Prestressed Beam and Ancillary | 31.73%
Diaphragms | 2.64%
Bridge Deck and Run-On-Slab | 13.84%
Parapet | 3.66%
TOTAL | 100.00%

Figure 4. Weight of each element expressed in percentage
Project 4 – Package D
Table 5 and Figure 5 show the percentage presence of each element in constructing the bridges in Package D. From the weight breakdown it is clear that the critical elements are "Piling work (Foundation)", with greater than 30% presence, and "Production of beams and erection", with about 30% presence. For the reasons explained above, specialised designs were used for these particular elements. In this manner these crucial elements in Package D were optimised in the design as follows:

Critical element No. 1: Piling works
– Micro pile for 9 Bridges
– Spun pile for 2 Bridges
Critical element No. 2: Beams
– Cast-in-situ post-tensioned beams of two shapes, I-16 and I-20, for 6 Bridges
– Prestressed precast beams (in factory) for 6 Bridges.
Table 5. Weight of each element in bridges of Package D

Description                               Ratio
Foundation (Piling)                       33.82%
Abutment and Wing Wall                    16.24%
Piers, Crossheads and Pilecaps            1.42%
Precast, Prestressed Beam and Ancillary   31.36%
Diaphragms                                2.27%
Bridge Deck and Run-On-Slab               12.15%
Parapet                                   2.73%
TOTAL                                     100.00%
Figure 5. Weight of each element expressed in percentage
Project 5 – Package E

Table 6 and Figure 6 show the percentage presence of each element in the construction of the bridges in Package E. From the weight breakdown it is clear that the critical elements are again "Piling work (Foundation)", with greater than 20% presence, and "Production of beams and erection", with greater than 55% presence. The reason is the use of steel girders in the design to suit the site conditions. In this manner these crucial elements in Package E were optimised in the design as follows:

Critical element No. 1: Piling works
– Micro pile for 5 Bridges
– Spun pile for 1 Bridge
Critical element No. 2: Beams
– Cast-in-situ post-tensioned beams I-16 for 1 Bridge
– Steel girders for 5 Bridges
– Steel trusses for 1 Bridge

Table 6. Weight of each element in bridges of Package E

Description                               Ratio
Foundation (Piling)                       21.05%
Abutment and Wing Wall                    8.67%
Piers, Crossheads and Pilecaps            1.78%
Precast, Prestressed Beam and Ancillary   56.84%
Diaphragms                                2.16%
Bridge Deck and Run-On-Slab               7.77%
Parapet                                   1.72%
TOTAL                                     100.00%

Figure 6. Weight of each element expressed as a percentage of total
Table 7. Overall weight of each element for bridges in Package A to E

Description                               Package A  Package B  Package C  Package D  Package E  Average
Foundation (Piling)                       22.64%     29.41%     22.44%     33.82%     21.05%     25.87%
Abutment and Wing Wall                    16.33%     12.68%     12.13%     16.24%     8.67%      13.21%
Piers, Crossheads and Pilecaps            1.57%      8.79%      13.56%     1.42%      1.78%      5.43%
Precast, Prestressed Beam and Ancillary   39.88%     32.50%     31.73%     31.36%     56.84%     38.46%
Diaphragms                                2.87%      1.63%      2.64%      2.27%      2.16%      2.32%
Bridge Deck and Run-On-Slab               13.93%     11.88%     13.84%     12.15%     7.77%      11.91%
Parapet                                   2.77%      3.11%      3.66%      2.73%      1.72%      2.80%
TOTAL                                     100.00%    100.00%    100.00%    100.00%    100.00%    100.00%
Discussion of Results
Figure 7. Overall weight of each element expressed in percentage (overall average ratio)
Table 7 and Figure 7 show the overall influence of these elements in the multiple projects.
The elements having high ratios (high presence) were the high-impact items. In this multiple project, the overall influence of the elements "Piling" and "Beams" on bridge completion is critical. Even though the quantities of these elements were small compared to the other elements of the bridges, their influence on the construction was greater because of the special technology involved, the availability of specialists, the method of construction, the risk involved, and the limited usage/availability of resources to produce them.

Piling, at 25.87%, and beams, at 38.46%, have the maximum presence. These two items carried more weight despite a smaller volume of work because of their speciality and their use of uncommon materials.
Hence, extra care was given when deciding the design for these critical elements. A few design optimisations were then adopted to reduce complexity and to ease implementation in the field, as shown in Table 8. The techniques adopted in element optimisation for the multiple bridge construction were successful and resulted in the projects being executed on time and within budget.

It was observed that the critical elements that needed the most attention were the foundation (piling) works and the superstructure beam works.
Table 8. Overall “Piling” and “Beam” element optimisation summary for Project 1 to 5.
FINDINGS AND LESSONS LEARNED
The following findings were obtained and lessons
learnt from the case study:
1. The natural tendency towards greater caution and attention in girder selection was advantageous in the handling and construction of all the beams (about 355 beams of five different varieties) without major problems. This enabled the beams to be finished as scheduled in the bridge programme.
2. Conversely, piling works on the foundation part were taken lightly at the design stage. There was no attempt to rationalise, and the decision was left to the individual designers. This resulted in the use of the same micropile method in the majority of the bridges. The difficulties which arose in implementing the micropiles for many bridges were:
(i) The availability of specialists to perform the works was very limited in Sabah.
(ii) The risk of losing equipment in the drilled holes requires skilled operators for drilling the piles, and there were not enough skilled operators in this field.
(iii) The component materials, such as API G80 pipe and permanent casings, were very scarce and difficult to obtain/procure.
(iv) The method had various stages to complete one point, which was time-consuming for each bridge.
Hence remedial measures were taken to catch up
with the progress at the cost of spending extra
resources, time and money.
CONCLUSIONS
1. In general, the element optimisation technique needs to be adopted for all the elements, in compliance with the required standards. Extra importance is required for elements that have more influence on the execution and completion of the project.
2. In multiple DB projects the element optimisations have to be started well ahead, during the conceptual design stage. This optimisation should not, however, interfere with the functional quality and the integrity of the structure, which are designed for a stipulated life period.
3. In this process it is also important to consider and
review the feedback from field personnel after
conducting a proper study on site conditions.
4. In spite of the lag in the piling works mentioned in the "lessons learned" from the case study, the company was able to make up the time and complete the project on schedule, with recognition and credit, owing to the proper and timely planning of the rest of the element optimisations.
ACKNOWLEDGEMENTS
The authors would like to thank the Directors of Kumpulan Liziz
Sdn Bhd., Hj. Ghazali Abd Halim, Mr. Simon Liew and Mr. Liew Ah
Yong for making available data regarding the multiple projects
being executed by the company in Sabah.
Narayanan Sambu Potty received his
Bachelor of Technology in Civil Engineering
from Kerala University and Master of
Technology degree from National Institute
of Technology in Kerala. His PhD work, "Improving Cyclone Resistant Characteristics of Roof Cladding of Industrial Sheds", was done at the Indian Institute of Technology Madras, India.
Currently Associate Professor at UTP, he has earlier worked in
Nagarjuna Steels Ltd., TKM College of Engineering, Kerala, India
and Universiti Malaysia Sabah. His main research areas are steel
and concrete structures, offshore structures and construction
management.
C.T. Ramanathan has more than 15 years
of experience in various construction
projects in India and Malaysia. As Project
Manager for Sabah, East Malaysia, he leads
a team of professionals in managing the
Design and Build Infrastructural projects for
this region. He has full responsibility
throughout the D&B project life cycle (initial
project definition to close out) involving
Highways, Roads and Bridges in the state. He also has extensive
experience in the entire process of D&B operation methods. His
Master’s research was on “Complexity Management of Multiple
D&B Projects in Sabah”. He has published and presented ten
papers for international conferences.
A Simulation Study on Dynamics
and Control of a Refrigerated Gas Plant
Nooryusmiza Yusoff*, M. Ramasamy and Suzana Yusup
Universiti Teknologi PETRONAS, 31750 Tronoh, Perak Darul Ridzuan, Malaysia
*nooryus@petronas.com.my
Abstract
Natural gas has recently emerged as an important source of clean energy. Improving operational efficiency of a
gas processing plant (GPP) may significantly increase its profit margin. This proves to be a challenging problem
due to the time-varying nature of the feedstock compositions and business scenarios. Such fluctuations in
operational and economic conditions are handled by employing advanced process control (APC). Reasonably
accurate steady-state and dynamic simulation models of the actual plant are required for the effective
implementation of APC. This paper deals with the development of such models. Here, a refrigerated gas plant (RGP), which is the low temperature separation unit (LTSU) of the GPP, is modeled under the HYSYS environment.
Calibration and validation of the steady-state model of the RGP are performed based on the actual plant data. A
dynamic model is also built to act as a virtual plant. Main control philosophies of the virtual plant are reported in
this paper. This virtual plant model serves as a platform for performing APC studies.
Keywords: Gas processing plant, dynamic, simulation, control
Introduction
The operators of gas processing plants (GPPs) face
many challenges. At the plant inlet, several gas streams
with different ownerships are mixed as a feedstock to
the GPP. As a result, the feedstock compositions vary
continuously. The feedstock comes in two types: 1)
feed gas (FG) and 2) feed liquid (FL). The FG may be
lean or rich depending on the quantity of natural gas
liquids (NGLs). Lean FG is preferable if sales gas (SG)
production is the main basis of a contract scenario.
On the other hand, rich FG and higher FL intake
may improve the GPP margin due to the increase in
NGLs production. However, this comes at the cost of
operating a more difficult plant. The GPP needs to
obtain a good balance between smooth operation
and high margin.
At the plant outlet, the GPP encounters a number of
contractual obligations with its customers. Low SG
rate and off-specification NGLs will result in penalties.
Unplanned shutdown due to equipment failure may
also contribute to losses. These challenges call for
the installation of advanced process control (APC). A
feasibility study on the APC implementation may be
performed based on a dynamic model. This model
acts as a virtual plant. This way, the duration of the
feasibility study will be shortened and the risk of
interrupting plant operation will be substantially
reduced.
Simulation based on first-principles steady-state and dynamic models has been recognised as a valuable tool in engineering. Successful industrial applications of first-principles simulation are aplenty. Alsop and Ferrer (2006) employed the
This paper was presented at the 5th International Conference on Foundations of Computer-Aided Process Operations (FOCAPO), Massachusetts, 29 June - 2 July 2008
dynamic model of a propane/propylene splitter in
HYSYS to skip the plant step testing completely.
DMCPlus was used to design and implement the
multivariable predictive control (MPC) schemes in
the real plant. In another case, Gonzales and Ferrer (2006) changed the control scheme of a depropanizer column based on a dynamic model. The new scheme was validated and found to be more suitable for APC implementation in the real plant. Several refinery cases are also available. For example, Mantelli et al. (2005) and Pannocchia et al. (2006) designed MPC schemes
for the crude distillation unit (CDU) and vacuum
distillation unit (VDU) based on dynamic models.
The previous works above demonstrate that dynamic
simulation models can be utilized as an alternative to
the empirical modeling based on step testing data.
This way, the traditional practice of plant step testing
to obtain the dynamic response may be circumvented
prior to designing and implementing MPC. Worries
about product giveaways or off-specifications may
then be a thing of the past. In addition, the first
principles simulation model may also be utilised to
train personnel and troubleshoot plant problems
offline. In the current work, the steady-state and
dynamic models of an actual refrigerated gas plant
(RGP) are developed under the HYSYS environment. The
steady-state model is initially calibrated with the plant
data to within 6% accuracy. This is used as a basis for
the development of the dynamic model of the RGP,
which is the main objective of this paper. Finally, the
regulatory controllers are installed to stabilize the
plant operation and to maintain product quality.
Simulation
The refrigerated gas plant (RGP) is simulated under the HYSYS 2006 environment (Figure 1). The feed comes
from three streams 311A, 311B and 311C with the
following compositions (Table 1).
The thermodynamic properties of the vapors and
liquids are estimated by the Peng-Robinson equation
of state. Details of the process description have been
described by Yusoff et al. (2007). The steady-state RGP model has a high degree of fidelity, with deviations of less than 6% from the plant data. This is an important step prior to transitioning to dynamic mode.
Once the steady-state model is set up, three additional steps are required to prepare the model for dynamic simulation. The steps are equipment sizing, specifying the pressure-flow relation at the boundary streams, and installing the regulatory controllers.
Figure 1. HYSYS process flow diagram of the RGP. Equipment abbreviations: E=heat transfer unit (non-fired); S=separator; KT=turboexpander; K=compressor; P=pump; C=column; JT=Joule-Thompson valve; RV=relief valve. Controller abbreviations: FC=flow; TC=temperature; LC=level; SRC=split range; SC=surge; DC=digital on-off; SB=signal block.
Table 1. Compositions of feed gas at different streams

Component        311A    311B    311C
Methane          0.8865  0.7587  0.6797
Ethane           0.0622  0.0836  0.1056
Propane          0.0287  0.0535  0.0905
i-Butane         0.0102  0.0097  0.0302
n-Butane         0.0059  0.0194  0.0402
i-Pentane        0.0003  0.0058  0.0121
n-Pentane        0.0002  0.0068  0.0101
n-Hexane         0.0001  0.0002  0.0028
Nitrogen         0.0039  0.0043  0.0012
Carbon Dioxide   0.0020  0.0580  0.0276
In the first step,
all unit operations need to be sized accordingly. Plant
data is preferable to produce a more realistic dynamic
model. In HYSYS, an alternative sizing procedure may
be used. Vessels such as condensers, separators and
reboilers should be able to hold 5-15 minutes of liquid. The vessel volumes may be quickly estimated by multiplying the steady-state value of the entering liquid flow rate by the holdup time.
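As a minimal Python sketch of this holdup heuristic (the flow value is illustrative, not plant data):

    def vessel_volume(liquid_flow_m3_per_h, holdup_min=10.0):
        """Quick vessel sizing: volume = entering liquid flow x holdup time.

        Uses the 5-15 minute holdup heuristic quoted above; 10 minutes
        is taken here as a middle value.
        """
        return liquid_flow_m3_per_h * holdup_min / 60.0  # m3

    # e.g. a separator receiving 36 m3/h of liquid needs about 6 m3
    print(vessel_volume(36.0))  # 6.0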
For a column, only the internal section needs to be
sized. This is accomplished by specifying the tray/
packing type and dimensions. The tray must be
specified with at least the following dimensions: 1) tray
diameter, 2) tray spacing, 3) weir length, and 4) weir
height. For a packed column, there are a number of
packing types to choose from. Most of the packing
comes with pre-specified properties such as void fraction and surface area. The minimum dimensions that need to be entered are the packing volume, or the packing height and column diameter.
For heat exchangers, each holdup system is sized
with a k-value. This value is a constant representing
the inverse resistance to flow as shown in Equation 1
(Aspentech, 2006):
F = k √∆P    (1)

where
F = mass flow rate (ton·h⁻¹)
k = conductance (ton·h⁻¹·bar⁻⁰·⁵)
∆P = frictional pressure loss as calculated by the Darcy-Weisbach equation (bar)
The k-value is calculated using the converged solution
of the steady-state model. For practical purposes,
only one heat transfer zone is required for the simple
cooler (E-102) and the air-cooler (E-106). The E-102
is supplied with the duty obtained from the steady-state model and E-106 is equipped with two fans. Each
fan is designed to handle 3600 m3/h of air at 60 rpm
maximum speed. The simulation of cold boxes is
more challenging as they are modeled as LNG heat
exchangers. Sizing is required for each zone and layer.
Again for practical purposes, the number of zones in
the cold boxes is limited to three. In each zone, the
geometry, metal properties and a few sets of layers
must be specified. The overall heat transfer capacities
(UAs) and k-values are estimated from the steady-state
solution and specified in each layer.
Equipment with negligible holdup is easily modeled. For example, the dynamic specification for a mixer is always set to 'Equalize all' to avoid a backflow condition. The flow of fluid across valves is governed by an equation similar to Equation 1. Here, the k-value is substituted with the valve flow coefficient, Cv. The valve is sized with a 50% valve opening and a 15-30 kPa pressure drop at a typical flow rate.
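Both k and Cv are back-calculated from converged steady-state values; a minimal Python sketch with illustrative numbers (not plant data):

    from math import sqrt

    def conductance(flow_ton_per_h, dp_bar):
        """k = F / sqrt(dP), inverting Equation 1 at the steady state."""
        return flow_ton_per_h / sqrt(dp_bar)

    # A holdup moving 280 ton/h with a 0.5 bar frictional loss:
    k = conductance(280.0, 0.5)  # ~396 ton/h/bar^0.5
    # Valves are treated analogously, with the flow coefficient Cv in
    # place of k, sized at 50% opening and a 15-30 kPa drop.
    print(round(k, 1))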
The rotating equipment such as pumps, compressors
and expanders may be simulated in a rigorous
manner. The main requirement is the availability of
the characteristic curve of the individual equipment.
The characteristic curves need to be added at the
‘Rating-Curves’ page in order to enhance the fidelity of
the dynamic model. An example of the characteristic
curves for K-102 is illustrated in Figure 2.
The compressors and turbo expanders may also
be modeled based on the steady-state duty and
adiabatic/polytropic efficiency specifications. For
pumps, only the power and static head are required.
This simplification is recommended to ease the
(a) Head curve: head (m) versus actual flow (m3/h) at 7072, 5725 and 4715 RPM, with the operating point marked. (b) Efficiency curve: efficiency (%) versus actual flow (m3/h) at the same speeds.
Figure 2. The K-102 characteristic curves
transitioning from steady-state to dynamic mode or
to force a more difficult model to converge easily. In
the current work, K/KT-101 are modeled in this manner
since their characteristic curves are unavailable. In
contrast, K-102, P-101 and P-102 models are based on
the characteristic curves.
The second step in developing a dynamic model is
to enter a pressure or flow condition at all boundary
streams. This pressure-flow specification is important
because the pressure and material flow are calculated
simultaneously. Any inconsistency will result in
the ‘Integrator’ failure to converge. In addition, the
compositions and temperatures of all feed streams
at the flowsheet boundary must be specified. The
physical properties of other streams are calculated
sequentially at each downstream unit operation
based on the specified holdup model parameters (k
and C v). In the current simulation work, all boundary
streams are specified with pressure values. The feeds
enter the RGP at 56.0 barg. The exiting streams are
the NGLs stream at 28.5 barg and the SG stream at
33.5 barg. The flow specification is avoided since the
inlet flows can be governed by PID controllers and
the outlet flows are determined by the separation
processes in the RGP.
The final step is the installation of regulatory
controllers. In most cases, base-layer control is
sufficient. However, more advanced controllers such
as cascade, split range and surge controllers are also
installed to stabilize the production. The discussion of the plant control aspects is the subject of the next section.
Control Philosophies
The control of plant operations is important to meet
product specifications and to prevent mishaps. Due
to space constraints, only the main control philosophies are discussed in this section. The first loop is the TC-101 PID controller, as shown in Figure 1. The purpose of this loop is to control the Stream 401 temperature and, indirectly, the overall plant temperature. This is accomplished by regulating the cooler duty, E-102Q, through a direct action mode. The Stream 401 temperature is set at −37 °C to prevent excessive FG condensation into the first separator (S-101).
In the event of excessive condensation, the S-101 level
is regulated by a cascade controller. The primary loop
(LC-101) controls the liquid level in the separator, which
is normally set at 50%. This is achieved by sending a
cascade output (OP) to the secondary loop (FC-102).
The FC-102 acts in reverse mode to regulate Stream
402 flow between 0 and 60 ton/h.
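A minimal Python sketch of this cascade arrangement follows; the PI form, gains and sampling interval are illustrative assumptions, not the plant tuning:

    class PI:
        """Positional PI controller; action=+1 direct, -1 reverse."""
        def __init__(self, kp, ki, sp, action=1):
            self.kp, self.ki, self.sp, self.action = kp, ki, sp, action
            self.integral = 0.0
        def update(self, pv, dt):
            error = self.action * (pv - self.sp)
            self.integral += error * dt
            return self.kp * error + self.ki * self.integral

    # Primary LC-101 holds the S-101 level at 50%; its output becomes the
    # remote setpoint of the reverse-acting secondary FC-102, clamped to
    # the 0-60 ton/h range of Stream 402.
    lc101 = PI(kp=2.0, ki=0.1, sp=50.0, action=+1)
    fc102 = PI(kp=0.5, ki=0.2, sp=0.0, action=-1)
    fc102.sp = min(max(lc101.update(pv=55.0, dt=1.0), 0.0), 60.0)
    valve_op = fc102.update(pv=20.0, dt=1.0)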
The magnitude of the RGP throughput is controlled by the pressure of the second separator (S-102). The S-102 pressure is regulated by the inlet vanes of KT-101 and the by-pass Joule-Thompson (J-T) valve in a split-range operation (SRC-101). For example, the S-102
pressure is remotely set by FC-101 to decrease from 52.1 to 51.7 bar (SRC-101 PV) in order to increase the throughput from 280 to 300 ton/h (FC-101 SP). At the same time, the SRC-101 OP increases from 37 to 40%, opening up the KT-101 inlet vanes to 80% while keeping the J-T valve fully closed. The schematic of this loop is illustrated in Figure 3, the SRC-101 split-range setup in Figure 4 and the closed-loop response to a throughput step-up in Figure 5.
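The split-range block maps one controller output onto the two final elements; a minimal Python sketch of such a mapping follows, where the 50% break point and linear spans are assumptions read off Figure 4, not the actual plant configuration:

    def split_range(op_percent):
        """Map SRC-101 output (0-100%) to (KT-101 vanes, J-T valve) openings.

        Assumed schedule: the first half of the signal strokes the KT-101
        inlet vanes from 0 to 100%; only beyond that does the J-T by-pass
        start to open.
        """
        kt_vanes = min(op_percent, 50.0) / 50.0 * 100.0
        jt_valve = max(op_percent - 50.0, 0.0) / 50.0 * 100.0
        return kt_vanes, jt_valve

    print(split_range(40.0))  # (80.0, 0.0): vanes at 80%, J-T fully closed

This reproduces the case quoted above: at 40% output the KT-101 inlet vanes sit at 80% while the J-T valve stays fully closed.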
Another important controller is PC-101, which regulates the C-101 top pressure (Ptop) by varying the K-102 speed. This loop provides the means for changing the RGP operation from ethane (C2) to propane (C3) recovery mode or vice-versa. An example of the closed-loop response to the change from C3 to C2 mode is shown in Figure 6. The C-101 overhead is initially at 24 barg and −83.0 °C. To recover more C2, the column pressure is decreased to 22 barg. If all else remains the same, this action causes the C-101 top temperature (Ttop) to fall to −86.4 °C and the C2 composition in the SG to fall from 2.72 to 2.35 mole %. The downstream effects can be seen at both the suction and discharge sides of K-102. Lowering the C-101 Ptop reduces the K-102 suction pressure (Psuct) from 25.0 to 23.5 bar. Consequently, the K-102 speed needs to be increased from 5684 to 6352 RPM in order to obtain a discharge pressure of 33.7 bar. The increase in K-102 speed causes its discharge temperature (Tdisc) to also increase, from 40.6 to 60.2 °C.
Figure 3. Plant throughput control
Figure 4. SRC-101 split range setup: valve opening OP (%) for KT-101 and the J-T valve versus SRC-101 signal output (mA)

Figure 5. Response to throughput step-up: FC-101 SP (ton/h), SRC-101 PV (bar) and SRC-101 OP (%) versus time (min)

Figure 6. Effect of C-101 top pressure
Figure 7. Effect of TC-101 failure on C1 composition in SG
The importance of regulatory control in meeting
product specification is illustrated in Figure 7. Here,
the methane composition in the feed changes from 88.7 to 80.6% while TC-101 is offline but the other controllers remain online. As a result, Stream
401 temperature (TC-101 PV) increases from -38.0 to
-23.8 °C. This causes the C-101 top temperature to
increase from -86.3 to -66.9 °C. A hotter top section
of C-101 induces methane losses to the bottom of the
column as confirmed by the decrease in SG methane
composition from 96.9 to 91.5%, or by 5.4%.
Conclusions
A simulation model of a refrigerated gas plant (RGP) was successfully developed. The dynamics of the RGP were simulated based on a high-fidelity steady-state model. Controllers were installed to regulate plant operation and to stabilize production. The dynamic model can be used as a virtual plant for performing advanced process control (APC) studies.
References

[1] Alsop N. and Ferrer J. M. (2006). "Step-test free APC implementation using dynamic simulation". AIChE Spring National Meeting, Orlando, Florida, USA.
[2] Aspentech (2006). "HYSYS 2006 Dynamic Modeling Guide". Aspen Technology: Cambridge, MA, USA.
[3] Gonzales R. and Ferrer J. M. (2006). "Analyzing the value of first-principles dynamic simulation". Hydrocarbon Processing, September, 69-75.
[4] Mantelli V., Racheli M., Bordieri R., Aloi N., Trivella F. and Masiello A. (2005). "Integration of dynamic simulation and APC: a CDU/VDU case study". European Refining Technology Conference, Budapest, Hungary.
[5] Pannocchia G., Gallinelli L., Brambilla A., Marchetti G. and Trivella F. (2006). "Rigorous simulation and model predictive control of a crude distillation unit". ADCHEM, Gramado, Brazil.
[6] Yusoff N., Ramasamy M. and Yusup S. (2007). "Profit optimization of a refrigerated gas plant". ENCON, Kuching, Sarawak, Malaysia.
Nooryusmiza Yusoff graduated from
Northwestern University, USA with BSc in
Chemical Engineering and subsequently
became a member of the American
Chemical Engineering Honors Society
“Omega-Chi-Epsilon”. He received the MSc
Degree from the University of Calgary,
Canada with a thesis on “Applying
Geostatistical Analyses In Predicting Ozone
Temporal Trends”. He is currently pursuing his PhD at Universiti
Teknologi PETRONAS (UTP) after serving as a lecturer in the
Department of Chemical Engineering, UTP. His areas of research
interest centre on process modeling and simulation, advanced process control, and multiscale systems.
M. Ramasamy is presently an Associate
Professor in the Department of Chemical
Engineering at Universiti Teknologi
PETRONAS (UTP). He graduated from
Madras University in the year 1984. He
obtained his masters degree with
specialisation in Process Simulation,
Optimisation and Control from the Indian
Institute of Technology, Kharagpur in 1986
followed by his PhD in 1996. His areas of research interests include
modeling, optimisation and advanced process control. Dr
Ramasamy has guided several undergraduate and postgraduate
students and published/presented several technical papers in
international refereed journals, international and national
conferences. He has delivered a number of special lectures and
also successfully organised seminars/symposiums at the national
level. Presently, he is the project leader for the PRF funded project
on “Optimisation of Energy Recovery in Crude Preheat Train” and
the e-Science funded project on “Soft Sensor Development”.
Suzana Yusup is presently an Associate
Professor at Universiti Teknologi PETRONAS
(UTP).
She graduated in 1992 from
University of Leeds, UK with an Honours
BEng. Degree in Chemical Engineering. She
completed her MSc in Chemical Engineering,
(Adsorption) in 1995 and PhD in Chemical
Engineering in Powder Technology at
University of Bradford UK in 1998. Her main
research areas are adsorption and reaction engineering. She was lead researcher for natural gas storage for direct-fuel NGV engine injection under an IRPA project, and research leader for a cost-effective route for methanol production from natural resources via liquid-phase synthesis and for the synthesis of carbon nano-fibres and tubes for hydrogen storage under an e-Science project.
AN INTERACTIVE APPROACH TO CURVE FRAMING
Abas Md Said*
Universiti Teknologi PETRONAS, 31750 Tronoh, Perak Darul Ridzuan, Malaysia
*abass@petronas.com.my
Abstract
Curve framing has numerous applications in computer graphics. Commonly used curve framing techniques such
as the Frenet frame, parallel transport frame and ‘up-vector’ frame cannot handle all types of curves. Mismatch
between the technique used and the curve being modeled may result in the extruded surfaces being twisted
or an object rotating erratically while flying. We propose an interactive approach to curve framing: the user selects a 'twisted' portion of the curve and untwists it as required by the specific application. The technique performs as the user intends.
Keywords: Curve framing, Frenet frame, parallel transport frame.
INTRODUCTION
Curve framing is the process of associating coordinate
frames to each point on a three-dimensional space
curve. This can be depicted as in Figure 1, where a
number of local Cartesian coordinate frames are
drawn on a curve. This is very useful when one needs
to treat a coordinate locally, rather than globally. It has
numerous applications in computer graphics, such as
in the construction of extruded surfaces (Figure 2),
providing the orientation of flying objects (Figure 3)
and visualisation during a fly-through.
Figure 2: Extruded surface from a curve
Figure 1: Some local frames on the spine
Figure 3: Flying along a 3D path
This paper was presented at the International Conference on Mathematics & Natural Sciences, Bandung,
28 - 30 October 2008
In these applications, frame orientations are essentially calculated based on the curve itself. A few
approaches commonly employed in these applications
are the Frenet, parallel transport, ‘up-vector’ and
rotation minimising frames [6, 7]. Having figured
out the frames, the necessary points (or vectors) to
construct the surface or orient an object can be easily
obtained. While these approaches have been very
useful, not all can successfully address all types of
curves. At times, the inherent nature of a curve causes
‘twists’ to occur, giving distorted extruded surfaces or
an object flying in a twist. This paper looks at some
of the problems with some of these approaches and
proposes an interactive technique to address the
issue.
THE FRENET FRAME

A Frenet frame for a curve C(t) is formed by a set of orthonormal vectors T(t), B(t) and N(t), where

    T(t) = C′(t) / ‖C′(t)‖

is tangential to C(t),

    B(t) = (C′(t) × C″(t)) / ‖C′(t) × C″(t)‖

is the binormal, and

    N(t) = B(t) × T(t)

is the "inner" normal, i.e., on the osculating circle side of the curve [2, 5] (Figure 4a). A portion of the extruded surface of a curve is depicted in Figure 4. Depending on the curve, it can be seen that at some point, i.e., near a point of inflection, the frame starts to twist (Figure 4b, C(t) = (cos 5t, sin 3t, t)), while on another there are no twists at all (Figure 4c, C(t) = (cos t, sin t, 0)). The twist occurs because the inner normal starts to "change side".
Another problem may also occur when the second derivative of C(t) vanishes, i.e., on a momentarily straight segment [4]. This is clear from the vector B(t), as C″(t) would be identically 0 for a straight line.
Figure 4: A Frenet frame (a) and extruded surfaces by Frenet frames, twisted (b) and untwisted (c)
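A minimal Python/NumPy sketch of this construction, approximating C′(t) and C″(t) by central differences (the step size h is an assumption of the sketch):

    import numpy as np

    def frenet_frame(C, t, h=1e-4):
        """Frenet frame of a curve C: R -> R^3 via central differences."""
        d1 = (C(t + h) - C(t - h)) / (2 * h)          # ~ C'(t)
        d2 = (C(t + h) - 2 * C(t) + C(t - h)) / h**2  # ~ C''(t)
        T = d1 / np.linalg.norm(d1)                   # tangent
        b = np.cross(d1, d2)                          # binormal direction;
        B = b / np.linalg.norm(b)                     # degenerates if C'' ~ 0
        N = np.cross(B, T)                            # "inner" normal
        return T, B, N

    # The twisting test curve of Figure 4b:
    curve = lambda t: np.array([np.cos(5 * t), np.sin(3 * t), t])
    T, B, N = frenet_frame(curve, 0.5)

Note that the normalisation of b fails exactly where the text warns: when C″(t) vanishes on a straight segment.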
Figure 5: Parallel curves
PARALLEL TRANSPORT FRAME
Parallel transport refers to a way of transporting
geometrical data along smooth curves. Initial vectors
u and v, not necessarily orthogonal, are transported
along the original curve by some procedure to generate
the other two parallel curves as in Figure 5 [4]. Hanson
and Ma [4] showed that the method reduced twists
significantly. In another similar application, Bergou
et al. successfully used parallel transport frames in
modeling elastic discrete rods [1]. Despite generating
good extruded surfaces, the method may not meet
requirements for a fly-through.
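A minimal Python/NumPy sketch of one standard discretisation of parallel transport along a polyline (not necessarily the exact procedure of [4]):

    import numpy as np

    def parallel_transport(points, n0):
        """Transport an initial normal n0 along a polyline without twist.

        Each step rotates the previous normal by the rotation that takes
        the old tangent to the new one (Rodrigues' formula).
        """
        pts = np.asarray(points, dtype=float)
        ts = np.diff(pts, axis=0)
        ts /= np.linalg.norm(ts, axis=1)[:, None]     # unit tangents
        normals = [np.asarray(n0, dtype=float)]
        for t0, t1 in zip(ts[:-1], ts[1:]):
            axis = np.cross(t0, t1)
            s, c = np.linalg.norm(axis), float(np.dot(t0, t1))
            if s < 1e-12:                             # parallel: no rotation
                normals.append(normals[-1])
                continue
            k, n = axis / s, normals[-1]
            normals.append(n * c + np.cross(k, n) * s
                           + k * np.dot(k, n) * (1 - c))
        return ts, np.array(normals)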
‘UP-VECTOR’ FRAME
Another method that could be applied to avoid twists
is to use the 'up-vector' [3]. In this approach, the frames are viewed similarly to Frenet's, with vectors Ti, Bi and Ni moving along the curve. Without loss of generality, assume the y-axis is the global vertical coordinate. Then find the unit 'radial' vector in the unit circle (spanned by the vectors Bi and Ni) that gives the highest y-coordinate value (Figure 6a). The frame is then rotated about Ti until Ni coincides with the 'radial' vector.
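A minimal Python/NumPy sketch of this construction: projecting the global up direction off the tangent yields the 'radial' vector of highest y-coordinate, and the sketch degenerates exactly where the text notes, when the curve is momentarily vertical:

    import numpy as np

    def up_vector_frame(T, up=(0.0, 1.0, 0.0)):
        """Frame whose normal is the 'radial' vector of highest y-value."""
        T = np.asarray(T, dtype=float)
        T = T / np.linalg.norm(T)
        N = np.asarray(up, dtype=float) - np.dot(up, T) * T  # project off T
        norm = np.linalg.norm(N)
        if norm < 1e-9:   # curve momentarily vertical: frame undefined
            raise ValueError("tangent parallel to the up direction")
        N = N / norm
        B = np.cross(T, N)
        return T, N, B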
While this approach works on many curves, e.g., as
in Figure 6b, it fails when the curve is momentarily
vertical or abruptly tilts the opposite ‘side’ (Figures 6c
and d).
Applications-wise, this technique corrects the twists in
extruded surfaces or a walkthrough for many curves.
However, it may not be realistic when simulating fast-moving objects, e.g., airplanes, due to issues such as
inertia and momentum during flight [8].
Figure 6: ‘Up-vector’ construction and its problems
Figure 7: Roll
Figure 8: Generating the frames
UNTWISTING THE TWISTS, INTERACTIVELY
In this study's approach, a simple method was employed: iteratively generating the frames from any three successive points on the curve (Figure 8) and applying them to a plane's flight trajectory. Given three points A, B and C, the normal vector q to the plane parallel to ∆ABC is easily calculated using the coordinates of the points. The tangential vector is crudely approximated by p = C − A, and the 'inner' normal r by the cross product q × p. Following this method, the frames were still subject to twists.
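A minimal Python/NumPy sketch of this three-point construction (names follow the text; the normalisations are conveniences added by the sketch):

    import numpy as np

    def three_point_frame(A, B, C):
        """Frame from three successive curve points, as in Figure 8."""
        A, B, C = (np.asarray(v, dtype=float) for v in (A, B, C))
        q = np.cross(B - A, C - B)        # normal to the plane of ABC
        q = q / np.linalg.norm(q)
        p = C - A                         # crude tangent approximation
        p = p / np.linalg.norm(p)
        r = np.cross(q, p)                # the 'inner' normal
        return p, q, r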
Twists were eliminated by employing the 'up-vector' procedure (Figure 9). The choice was fairly clear, since in most applications an object set to fly begins in an upright position, and this gives a better perspective from which to set the twist. Looking at the whole picture of the curve, where and how to twist or untwist the curve (Figure 9) was identified. The corresponding plane orientation with respect to the extruded surface in Figure 9 is shown in Figure 10. Upon identification of the intervals, each could be twisted or untwisted piece-wise. At position I, for example, the plane should have more roll than in Figure 10 because of the plane's speed and the path's curvature.
Figure 9: ‘Up-vector’ results
Figure 10: Flight orientation (‘up-vector’)
Despite the good extruded surfaces produced by parallel transport and 'up-vector' frames, these techniques may not have the intended effects in graphics applications such as a fly-through. In such cases, user interaction is desirable to make rolling more reasonable during flights (Figure 7), that is, to adjust or correct the object orientation using the user's common sense. As such, one would like to be able to interactively control the twists.
Figure 11: User intervention
The twisting was achieved by rotating the frame for each increment about the local tangential axis. The result of such an operation is shown in Figure 11, with the plane flying along the path, rolling according to the new orientation. It should be noted that the purpose of the dark band on the extruded surfaces in Figure 9 is to visualise the twist on the curve, while the flight path is the curve itself (Figures 10 and 11). When the dark band is up, the plane is upright, and when the band is below the curve, the plane is upside down. Figure 11 shows the plane's flying orientation at positions I, J and K after the user has adjusted the twists in the curve as per his perception.
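A minimal Python/NumPy sketch of this per-increment roll, using Rodrigues' rotation formula about the local tangent (the angle would come from the user's interactive adjustment):

    import numpy as np

    def roll_about_tangent(v, p, angle):
        """Rotate a frame vector v about the local tangent p by 'angle'
        (radians), via Rodrigues' rotation formula."""
        p = p / np.linalg.norm(p)
        return (v * np.cos(angle)
                + np.cross(p, v) * np.sin(angle)
                + p * np.dot(p, v) * (1.0 - np.cos(angle)))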
CONCLUSIONS

Most available methods of curve framing yield acceptable results in generating extruded surfaces. While parallel transport and 'up-vector' frames generally work well in these cases, they may not be adequate to model a fast fly-through. This study proposed an interactive approach to address the issue. The approach does away with the physics of flight and relies on the user's insight into the flight trajectory, letting the user fine-tune the way a fast-moving object should roll about its axis. This is necessary in this kind of application because, unless the necessary physics is embedded in the model, the Frenet, parallel transport and 'up-vector' methods previously discussed may overlook the roll problem.

REFERENCES

[1] M. Bergou, M. Wardetzky, S. Robinson, B. Audoly and E. Grinspun (2008). Discrete Elastic Rods, ACM Transactions on Graphics, Vol. 27, No. 3, Article 63.
[2] R. L. Bishop (1975). There is More than One Way to Frame a Curve, Amer. Math. Monthly, Vol. 82, No. 3, pp. 246-251.
[3] J. Fujiki and K. Tomimatsu (2005). Design Algorithm of "the Strings - String Dance with Sound", International Journal of Asia Digital Art and Design, Vol. 2, pp. 11-16.
[4] A. J. Hanson and H. Ma (1995). Parallel Transport Approach to Curve Framing, Technical Report 425, Indiana University Computer Science Department.
[5] F. S. Hill (2001). Computer Graphics Using OpenGL, Prentice Hall, New Jersey.
[6] T. Poston, S. Fang and W. Lawton (1995). Computing and approximating sweeping surfaces based on rotation minimizing frames, Proceedings of the 4th International Conference on CAD/CG, Wuhan, China.
[7] W. Wang, B. Jüttler, D. Zheng and Y. Liu (2008). Computation of Rotation Minimizing Frames, ACM Transactions on Graphics, Vol. 27, Issue 1, pp. 1-18.
[8] J. Wu and Z. Popovic (2003). Realistic Modeling of Bird Flight Animations, International Conference on Computer Graphics and Interactive Techniques, ACM SIGGRAPH 2003 Papers, pp. 888-895.
Abas Md Said holds BSc and MSc degrees
from Western Michigan University and PhD
from Loughborough University. He is a
senior lecturer at UTP. His research interests
are in computer graphics, visualisation and
networks.
Student Industrial Internship Web Portal
Aliza Sarlan*, Wan Fatimah Wan Ahmad, Dismas Bismo
Universiti Teknologi PETRONAS, 31750 Tronoh, Perak Darul Ridzuan, Malaysia
*aliza_sarlan@petronas.com.my
Abstract
A Student Industrial Internship Web Portal (SIIWP) was developed to automate the current manual business processes. The portal supports internship eligibility checking, registration, student-lecturer assignment, visit scheduling, online logbook submission and monitoring, as well as the grade book of the industrial internship programme at Universiti Teknologi PETRONAS. PHP 5, Easy PHP 2.0, Macromedia Dreamweaver MX 2004, the MySQL database and the Apache web server were used to develop the SIIWP. The phased development model was used in the development process, applying the business process improvement technique. Findings showed that the prototype could serve as a communication medium for all parties involved during the industrial internship programme, and that the system could easily be used as an aid for the internship programme.
Keywords: Industrial internship, web portal, business process improvement, phased development model, business
process
Introduction
Student Industrial Internship Programme (SIIP) is part
of the curriculum for most higher learning institutions
worldwide. Its main purpose is to expose students
to a real working environment and relate theoretical
knowledge with applications in the industries. The
objectives are to produce well-rounded graduates
who possess technical competence, lifetime learning
capacity, critical thinking, communication and
behavioural skills, business acumen, practical aptitude
and solution synthesis ability [1]. Issues such as long-distance learning, communication, monitoring and management are crucial to the success of the programme.
The SIIP is a thirty-two-week programme where
Universiti Teknologi PETRONAS (UTP) students
are attached to various companies in or outside
Malaysia. The purpose of SIIP is to expose UTP
students to the world of work, so that they are able
to relate theoretical knowledge with application
in the industry. Furthermore, SIIP can enhance the
relationship between UTP and the industry and/or the
government sector. The Student Industrial Internship
Unit (SIIU) is responsible for handling and monitoring
the process of students’ internship in UTP. SIIU is
responsible for internship processes from student
application to checking students’ eligibility status,
internship placement application and confirmation,
lecturer visit scheduling and grading. Many problems arise because all processes are still done manually, such as missing data and redundancy, delays in the grading process, communication problems and, most crucially, student monitoring. Currently, telephone and email are the main methods of communication, which poses many problems; for example, updates have to be communicated to each student individually, resulting in high communication costs.
This paper was presented at the International Symposium on Information Technology 2008, Kuala Lumpur
26 - 29 August 2008
The main objective of this project is to develop a prototype of the Student Industrial Internship Web Portal (SIIWP) that automates the current manual processes to reduce possible problems in communication, data loss and redundancy. It makes monitoring, instructor assignment and scheduling, grading and reporting easy and, to a great extent, error-free.
Related Works
Distance Learning
The SIIP involves the placement of students at widespread locations to apply their knowledge and gain practical experience in the real industry sector [2]. Thus, the distance learning concept is implied in the context of the SIIU.
Distance Learning and the WWW Technologies
The popularity of the World Wide Web (WWW) has made it a prime vehicle for disseminating information [3]. The advance of WWW technologies has driven the usage of the Internet to new applications at an unprecedented rate. Education institutions have developed and/or migrated to web-based applications to overcome the distance learning issue and to provide better services for their users, i.e. students and teachers.
Web-based System
A web-based application is an application that is accessed with a web browser over a network such as the Internet or an intranet [4]. A web-based or automated system provides far more efficiency in processing any task domain, especially for a system that involves a lot of data collection and retrieval [5]. Web-based systems should meet their stakeholders' (users') requirements and expectations. Thus, web-based applications should be developed on top of a carefully studied business process of the organisation in which they are to be deployed.
The research project applies a similar strategy, that is, to create a web-based system built on the current business process of the SIIU. A web portal would be the most appropriate web-based system for the SIIU.
Web Portals
A web portal serves as a starting point when users connect to the Internet, just like a doorway to web services that guides users to the right information they need [6]. A web portal is defined as a site that provides a starting point for users to explore and access information on the WWW or an intranet. This starting point could be a general-purpose portal or a very specialised portal, such as a homepage [7].
According to Strauss [8], a common way is to divide
web portals into two categories:
a) Horizontal Portal – A horizontal portal is an
internet portal system that is open to the public
and is often considered as a commercial site.
Most horizontal portals offer on a single web page
a broad array of resources and services that any
user may need. Examples of horizontal portals:
Yahoo!, MSN.
b) Vertical Portal – A vertical portal is a web site that
provides information and services as defined and
requested by users.
Strauss also mentioned that a true portal should be:
1) Customised – A true portal is a web page whose
format and information content are based on
information about the user stored in the portal’s
database. When the user authenticates (logs in)
to the portal, this information determines what
the user will see.
2) Personalised – The user can select and store a
personal set of appearance content characteristics
for a true portal. These characteristics may be
different for every user.
3) Adaptive – The portal gets to “know” the user
through information the user supplies and through
information the portal is programmed to gather
about the user. As the user’s role in the institution
changes, a true portal will detect that change and
adapt to it without human intervention.
4) Desktop-Oriented – The goal of a portal is to mask the inner workings of the campus information systems from the user. Signing on to the portal saves the user from having to sign in to each of the many systems, on campus and off, that provide the portal content. The ultimate portal could become the user's point of entry not just into campus and internet web spaces, but also into his or her own desktop computer.
The web portal developed for the SIIU falls into the second category, the vertical portal. The web portal allows users to access related information with customised functionalities that meet the users' requirements. Web portals have been a successful support system in education institutions; thus, many education institutions have implemented vertical portals:
Business Process
A business process is a complete and dynamically coordinated set of collaborative and transactional activities that deliver value to customers [10]. It is focused upon the production of particular products; these may be physical products or less tangible ones, like a service [11]. A business process can also be defined more simply and generally as a specific ordering of work activities across time and place, with a beginning, an end, and clearly identified inputs and outputs: a structure for action [12]. Essentially, there are four key features to any process: (i) predictable and definable inputs, (ii)
any process (i) predictable and definable inputs, (ii)
a linear, logical sequence or flow, (iii) a set of clearly
definable tasks or activities, and (iv) a predictable and
desired outcome or result.
Business Process Improvement (BPI)
1) The Universiti Teknologi PETRONAS Information Resource Centre implemented a web-based system with the objective of easing the users' task of finding resources by indexed location, and the staff's task by providing an organised way of indexing. The main reason it is made available online is that its database is located remotely from its users. The system's functionalities are to search for the available resources and their indexes, to reserve discussion rooms, and to search for online journals from online journal providers [5].
2) Indiana University is a large and complex institution consisting of 8 campuses, over 92,000 students, and 445,000 alumni. The university created an enterprise portal through which faculty, staff, students, alumni, prospective students, and others can travel and uncover a broad array of dynamic web services. "OneStart" (http://onestart.iu.edu) provides a compelling place for faculty, staff, and students to find services, with the intent of developing life-long campus citizens of the university [9].
The Business Process Improvement (BPI) method is used to identify the requirements of the 'to-be' system. BPI has been used in developing and implementing the prototype system. BPI is defined as making moderate changes to the way in which the organisation operates to take advantage of new opportunities offered by technology. The objective of any BPI methodology is to identify and implement improvements to the process; thus BPI can improve efficiency and effectiveness [13]. The scope of BPI, compared to Business Process Reengineering (BPR), is narrow and short term [12].
Industrial Internship Business Process
The industrial internship business process is divided into 3 major phases, namely pre-industrial internship, during-industrial internship and post-industrial internship. Figures 1, 2 and 3 show these 3 major phases of the industrial internship business process.
Figure 1. Phase 1, pre-internship: the SIIU briefs the students; students submit forms for eligibility checking (if not eligible, the process ends); eligible students register for the internship and submit applications and resumes to the SIIU and potential host companies; placement offers are received and notified.

Figure 3. Phase 3, post-internship: students submit logbooks; lecturers collect the students' final reports and logbooks from the SIIU; II coordinators key in the students' marks and submit them to the SIIU; the SIIU checks the marks and forwards the complete marks (grades) to the Exam Unit.
Figure 2. Phase 2, during-internship: students register at the host companies and submit forms and training schedules to confirm placement; the list of students is forwarded to the coordinator; the SIIU confirms the visit schedule and informs the respective lecturers; the first and second visits commence; the visiting lecturer submits a report for the 1st visit and the students' marks for the 2nd visit.

Problem Identification
Currently, the system works manually. However, a number of problems and pitfalls have arisen, causing deficiencies in the system. Some of the problems in the current manual system are:
• A manual and time-consuming process of identifying students' eligibility status, due to manual cross-checking.
• Manual student registration for the industrial internship by filling in a paper form, requiring SIIU staff to key the students' particulars and contact details into an Excel spreadsheet. This poses problems such as data errors due to human error, and is very time-consuming.
• Difficult and ineffective communication with the students, since all communications are by telephone and email.
• Loss of students' placement applications, resumes and other important documents, due to the many papers and manual processes involved.
• The manual system, using Microsoft Excel and Word, provides limited features, only for entering, searching and printing data.
• There is no efficient way to notify students of their placement status, post announcements or update internship placements.
• The manual system only supports two computers with a centralised database. As a result, access to information is limited.
• All the business processes depend mostly on the one person who knows them; others have difficulty stepping in to complete a process.
• It is difficult to monitor students' progress and performance, as the assigned lecturers usually receive and view the weekly reports only at the end of the programme. Weekly report submission to UTP lecturers is done manually by fax or postal mail; hence reports may go missing and may not reach the respective lecturers on time.
• Grade compilation and calculation, done manually by individual lecturer supervisors, always poses problems such as missing evaluation forms and delays in final grade submission.
Therefore, the new system (SIIWP) needed to be developed to automate and improve most of the manual processes, to reduce errors and time, and to increase efficiency in student monitoring and communications.
SIIWP Development
The SIIWP automates and improves the SIIU's business process in conducting the SIIP, emphasising efficiency and effectiveness. The SIIWP serves the following objectives:
• To closely monitor the students' performance by allowing the SIIU and the respective lecturers to view the students' weekly reports online, ensuring that the students are monitored closely and in a timely manner.
• To ease the task of scheduling the visits and assigning UTP supervisors, and to assist in assigning the students to the respective lecturers based on their programme of study and the location of the host companies. The lecturers can also view the list of students assigned under their supervision, together with the names of the host companies. The SIIU need not make calls to inform either the lecturers or the students about this. This reduces the workload of the SIIU and makes the process more organised.
• To automatically calculate the final marks of students at the end of the internship programme once their marks have been entered into the system. Besides preventing miscalculations, this type of task automation frees up staff time and workload.
• To generate specific reports based on certain criteria, such as the host company, location, programme of study and supervisor, for further reference and analysis.

The SIIWP is expected to create process improvements
that lead to better effectiveness; thus, the Business Process Improvement (BPI) analysis technique is used as the method for requirements analysis in identifying the critical business processes which need to be automated. Basically, BPI means making moderate changes to the way in which the organisation operates to improve efficiency and effectiveness. BPI projects spend significant time on understanding the current as-is system before moving on to the improvements and to-be system requirements. Duration analysis activities are performed as one of the BPI techniques [13]. The improved business process is more efficient, whereby certain processes can be done concurrently, thus reducing the duration of the initial manual business process.
The SIIWP complies with open-source standards: it is developed using PHP 5 for server-side scripting, XHTML and JavaScript on the client side, Easy PHP 2.0 and Macromedia Dreamweaver MX 2004 as the development tools, and MySQL for the database. The SIIWP is developed based on the phased development model. Using this model, the project is divided into small parts. The most important and urgent features are bundled into the first version of the system. Additional requirements will be
Figure 4. SIIWP functionalities by release: Version 1.0, pre-industrial internship (identify eligibility status; register to system; track application process; update and retrieve database; report generation); Version 2.0, during-industrial internship (confirmation of placement; schedule visits; monitor students through weekly reports online; post announcements); Version 3.0, post-industrial internship (grade students; view grades)

Figure 5. SIIWP system architecture: a portal fronting the Excel reader, application tracker, student monitoring, grading, post announcement and company resource centre modules, backed by a DBMS and the SIIWP database
added to the system in the subsequent versions of the system. The model allowed the development team to demonstrate results earlier in the process and obtain valuable feedback from system users. There are 3 version releases of the SIIWP: version 1.0, version 2.0 and version 3.0 concentrate on the pre-internship, during-internship and post-internship business processes respectively. Each version encompasses the previous requirements plus additional requirements. Figure 4 shows the functionalities of SIIWP versions 1.0, 2.0 and 3.0.
SIIWP System Architecture
The SIIWP adopts a combination of the data-centred and client-server architectural models. A client-server system model is organised as a set of services, with associated server(s) and the clients that access and use those services. The server itself, or one of the servers, contains a database where data are stored; thus it adopts a data-centred architecture. Figure 5 illustrates the SIIWP system architecture.
Main Index Portal
The Main Index Portal is a point of access for all users
of SIIWP. It provides generic information that all users
can access such as Announcement and Upcoming
Events, Search Companies, About Student Industrial
Internship and Help. All users can log in to access
their personalised portal from the Main Index Portal.
Students are to register themselves in the system before a personalised portal is assigned to them.
Before registering, students are required to check
their eligibility status to undergo the SIIP. Figure 6
illustrates the SIIWP Main Index Portal.
SIIWP Prototype
The SIIWP consists of a main index portal (Main
Portal) and 4 types of user personalised portals,
namely: (i) Admin (Staff) Portal, (ii) Student Portal,
(iii) Lecturer Portal, and (iv) Coordinator Portal. There
are different types of users that can log on to the
system with different level of access – student, SIIU
Figure 6. Main index portal
Figure 7. Admin staff portal
Figure 8. Student portal
Admin (Staff) Portal
The Admin (Staff) Portal is a personalised point of
access for SIIU staff. It contains generic information and
personalised functionalities that are only accessible
by staff: Database Statistical Analysis, Database
Tables and Upload Document. Figure 7 illustrates the
Admin (Staff) Portal.
Student Portal
The Student Portal is a personalised point of access for students; it contains generic information and functionalities that are only accessible by students: Application Status and Weekly Report Submission. The Application Status functionality allowed students to register their internship application(s) and update their application status in the system to one of: (i) Send CV/Resume, (ii) Interview, (iii) Offer Letter, (iv) Accepted, and (v) Rejected. This information was accessible for viewing by Lecturers and Staff; hence they were able to monitor and provide assistance. The Weekly Report Submission functionality enabled students to submit their reports online and to generate (print) them. The weekly reports submitted online could be viewed by lecturers and staff, enabling them to monitor and know students' activities in the host company. Figure 8 illustrates the Student Portal.
Coordinator Portal
The Coordinator Portal is a personalised point of access for lecturers selected as SIIP coordinators in their respective departments. It contains generic information and functionalities that are only accessible by Coordinators, i.e. Schedule Lecturer Visits, which enables Coordinators and SIIU staff to determine the appropriate dates for the Lecturer's 1st and 2nd visits. Figure 10 illustrates the Coordinator Portal.
Lecturer Portal
The Lecturer Supervisor Portal, as shown in Figure 9, is a personalised point of access for UTP Lecturers. It contains generic information and functionalities that are only accessible by Lecturers: Grade Students and Supervise Students. The Grade Students functionality enables lecturers to grade students online by keying the students' marks into the system; the system automatically calculates each student's grade, which can then be viewed by staff and the respective student. The Supervise Students function enables Lecturers to view all students under their supervision.
Figure 9. Lecturer portal
Figure 10. Coordinator portal
Figure 11. Acceptance test result
Testing
To eliminate system faults, a fault removal strategy was implemented in the development of every system version. Fault removal is basically a Validation and Verification (V&V) process whose checking and analysis consist of requirements reviews, design reviews, code inspections, and product testing. Table 1 summarises the testing conducted.
Acceptance testing was conducted for SIIWP System Version 3.0; 30 users (students, lecturers, and staff) participated in the testing. The purpose of the acceptance testing was to check whether the system met the functional and non-functional requirements defined in the requirements determination phase, and whether the system matched, exactly or closely, users' expectations.
Table 1. System testing

Test | Description
Feature Test | Execution of an operation/functionality with interaction between operations minimised.
Load Test | Testing with field data and accounting for interactions.
Regression Test | Feature test after every build involving significant change.
Acceptance Test (Alpha Testing) | Test conducted by users to ensure they accept the system.

The acceptance test focused on 3 criteria: (i) user interface, (ii) user-friendly navigation, and (iii) basic functionalities. For the system to pass the acceptance test, at least 75% of test participants (22 participants) had to agree that the system criteria under consideration met their expectations. The acceptance test concluded that 90% of the test participants agreed that the user interface met their expectations, 85% agreed that it had user-friendly navigation, and 78.3% found that the basic functionalities met their expectations. Figure 11 charts the acceptance test results.
Thus, System Version 3.0 was considered to have
passed the acceptance test. However, users gave
feedback and suggestions of other functionalities that
may be useful to the SIIWP.
Conclusion and Future Work
The SIIWP met all functional and non-functional requirements. As SIIWP was built on top of a well-studied and improved SIIU business process, its basic functionalities matched users' expectations closely. SIIWP was able to solve the distance-learning problem in SIIP. This is essential to SIIU, as it improves SIIU's business processes in conducting SIIP through automation. The implementation of SIIWP can help SIIU ensure the success of SIIP by providing optimal, high-quality service. At the same time, the new system may be able to become a central internship and job resource centre for UTP students.
There are still limitations in SIIWP: limited direct communication between users, potential data redundancy, and limited accessibility from outside UTP. Users also gave feedback and suggestions of other functionalities that may be useful to the SIIWP. These suggested functionalities can be studied further and added to the SIIWP as enhancements. Future work could include:
Direct Communication Media: Forums, Blogs, Chat Systems. SIIWP currently only provides the contact details of users. Forums, blogs and chat systems can be added to the SIIWP as direct communication media between users of SIIWP.
Frequently Asked Questions: A Frequently Asked Questions (FAQ) facility could be added to SIIWP so that users can share particular knowledge and/or experience gained through the SIIP, store it in the database, and map it to certain issues. The FAQ function would enable users to interact and ask the system questions regarding specific issues faced during their SIIP.
Data Mining: An effective data mining technique can be implemented in SIIWP to provide users with a more desirable view of the database.

References
[1] Universiti Teknologi PETRONAS (2000). Industrial internship guidelines.

[2] Aliza Bt Sarlan, Wan Fatimah Bt Wan Ahmad, Judy Nadia Jolonius, Norfadilah bt Samsudin (2007). Online Web-based Industrial Internship System. 1st International Malaysian Educational Technology Convention 2007, UTM, Johor, 25 December 2007, 194-200.

[3] Liu, Haifeng, Ng, Wee-Keong & Lim, Ee-Peng (2005). Scheduling Queries to Improve the Freshness of a Website. World Wide Web: Internet and Web Information Systems, 8, 61-90.

[4] Wikipedia®, the free encyclopedia (2007). System. Retrieved February 12, 2007, from http://en.wikipedia.org/wiki/System.

[5] Norfadilah Bt. Samsudin (2006). Online Industrial Training System. Final year project report, Universiti Teknologi PETRONAS, 1-54.

[6] Zirpins, C., Weinreich, H., Bartelt, A. & Lamersdorf, W. (2001). Advanced Concepts for Next Generation Portals. 12th International Workshop on Database and Expert Systems Applications, 501-506.

[7] Aragones, A. & Hart-Davidson, W. (2002). Why, when and how do users customize Web portals? Professional Communication Conference, IPCC 2002, Proceedings, IEEE International, 375-388.

[8] Strauss, H. (2002). All About Web Portals: A Home Page Doth Not a Portal Make. In Web Portals & Higher Education: Technologies to Make IT Personal, 33-40.

[9] Thomas, J. (2003). Indiana University's Enterprise Portal as a Service Delivery Framework. Designing Portals: Opportunities and Challenges, 102-126.
[10] Smith, H. & Fingar, P. (2003). IT Doesn't Matter – Business Processes Do. Meghan-Kiffer Press, August 2003.

[11] Aalst, W. & Hee, K. (2002). Workflow Management: Models, Methods, and Systems. MIT Press.

[12] Davenport, T. (1998). "Some principles of knowledge management". <http://www.bus.utexas.edu/kman/kmprin.htm>

[13] Dennis, A., Wixom, B. H. & Tegarden, D. (2005). Systems Analysis and Design with UML 2.0, 2nd Edition. John Wiley & Sons.
Aliza Sarlan received her first degree
(Information Technology) from Universiti
Utara Malaysia in 1996 and her Master’s
degree (Information Technology) from
University of Queensland, Australia in 2002.
She is currently lecturing at the Computer
& Information Sciences Department
Universiti Teknologi PETRONAS.
Her
research interests are in Information System
Management and Application Development in Organisational
Context. Currently, her research focuses on the interdependencies
of information technologies and organisational structure, and IS
policy and strategic implementation of ICT in healthcare
industry.
Wan Fatimah Wan Ahmad received her
BA and MA degrees in Mathematics from
California State University, Long Beach,
California USA in 1985 and 1987. She also
obtained a Dip. Ed. from Universiti Sains
Malaysia in 1992. She completed her PhD
in Information System from Universiti
Kebangsaan Malaysia in 2004. She is
currently a senior lecturer at the Computer
& Information Sciences Department of Universiti Teknologi
PETRONAS, Malaysia. She was a lecturer at Universiti Sains
Malaysia, Tronoh before joining UTP. Her main interests are in the
areas of mathematics education, educational technology, human
computer interaction and multimedia.
HAND GESTURE RECOGNITION:
SIGN TO VOICE SYSTEM (S2V)
Oi Mean Foong*, Tan Jung Low and Satrio Wibowo
Universiti Teknologi PETRONAS, 31750 Tronoh, Perak Darul Ridzuan, Malaysia
*foongoimean@petronas.com.my
ABSTRACT
Hand gestures are one of the typical methods used in sign language for non-verbal communication, most commonly by people who have hearing or speech problems. This paper presents a system prototype that is able to automatically recognise sign language, to help hearing people communicate more effectively with the hearing- or speech-impaired. The Sign to Voice system prototype, S2V, was developed using a Feed Forward Neural Network for two-sequence sign detection. The experimental results show that the neural network achieved a recognition rate of 78.6% for sign-to-voice translation.
Keywords: Hand gesture detection, neural network, sign language, sequence detection.
INTRODUCTION
This system was inspired by the special group of people who have difficulty communicating verbally. It was designed with ease of use in mind, as a human-machine interface for the deaf or hearing-impaired. The objective of this research is to develop a system prototype that automatically recognises the two-sequence sign language of the signer and translates it into voice in real time.
Generally, there are two ways to collect gesture data for recognition. The first is device-based measurement, which measures hand gestures with equipment such as data gloves and records accurate hand positions, since the positions are directly measured. The second is the vision-based technique, which can cover both the face and hands of the signer, and in which the signer does not need to wear a data glove. All processing tasks can be solved using computer vision techniques, which are more flexible and useful than the first method [1].
Since the second half of the last century, sign languages have been accepted as minority languages which coexist with majority languages [2], and they are the native languages of many deaf people. The proposed system prototype was designed to help normal people communicate with the deaf or mute more effectively.
This paper presents a prototype system known as
Sign to Voice (S2V) which is capable of recognising
hand gestures by transforming digitised images of
hand sign language to voice using the Neural Network
approach.
The rest of the paper is organised as follows: Section II surveys the previous work on image recognition of hand gestures. Section III proposes the system architecture of the S2V prototype. Section IV discusses the experimental set-up and its results, and lastly Section V draws conclusions and suggests future work.
This paper was presented at the 5th International Conference in Information Technology
– World Academy of Science, Engineering and Technology, Singapore, 29 August – 2 September 2008
RELATED WORK
Attempts at machine vision-based sign language recognition have appeared in the literature only in recent years. Most attempts to detect hand gestures/signs from video place restrictions on the environment. For example, skin colour is surprisingly uniform, so colour-based hand detection is possible [3]. However, this by itself is not a reliable modality.
Hands have to be distinguished from other skin-coloured objects, and there are cases of difficult lighting conditions, such as coloured light or gray-level images. Motion flow information is another modality that can fill this gap under certain conditions [4], but for non-stationary cameras this approach becomes increasingly difficult and less reliable.
Eng-Jon Ong and Bowden [5] presented a novel,
unsupervised approach to train an efficient and
robust detector, applicable not only in detecting the
presence of human hands within an image but also
classifying the hand shape too. Their approach was
to detect the location of the hands using a boosted
cascade of classifiers to detect shape alone in grayscale images.
A database of hand images was clustered into sets of similar-looking images using the k-medoid clustering algorithm, which incorporated a distance metric based on shape context [5]. A tree of boosted hand detectors
was then formed, consisting of two layers; the top
layer for general hand detection, whilst branches
in the second layer specialise in classifying the sets
of hand shapes resulting from the unsupervised
clustering method.
Chau-Su Lee et al. [6] presented a vision-based recognition system for the Korean Manual Alphabet (KMA), a subset of Korean Sign Language. The KMA system can recognise skin-coloured human hands by implementing a fuzzy min-max neural network algorithm using a Matrox Genesis imaging board and a PULNIX TMC-7 RGB camera.
Figure 1. S2V System Architecture: Image Acquisition (Sign) -> Preprocessing (1. Sobel Operator, 2. Dilation, 3. XOR Operation, 4. Bounding Box, 5. Proportion Threshold) -> Classification (Neural Network hand detection) -> Interpretation -> Translation (Voice)
SYSTEM ARCHITECTURE
Figure 1 shows the system architecture of the proposed S2V system prototype. Image acquisition for hand detection was implemented using the image processing toolbox in MATLAB, in order to develop functions that capture input from the signer and detect the hand-region area. The limitation here is that the background of the image can only be black; therefore several problems were encountered in capturing and processing images in RGB values, and an approach had to be found for detecting the hand region that produced satisfactory results.
Image Recognition
The input images were captured by a webcam placed on a table. The system was demonstrated on a conventional laptop PC running an Intel Pentium III processor with 256 MB of RAM. Each image has a spatial resolution of 32 x 32 pixels and a grayscale resolution of 8 bits. The system developed could process hand gestures at an acceptable speed. Given the variety of available image processing techniques and recognition algorithms, a preliminary process for detecting the image was designed as part of the image processing. The hand detection preprocessing workflow is shown in Figure 1.
The system started by capturing a hand image of the signer with a webcam set up at a certain angle against a black background. The next process converted the RGB image into a binary image in which each pixel is either black (0) or white (1). The edge of each object was then computed against the black background, and the object was then segmented, differing greatly in contrast from the background.
Preprocessing
Changes in contrast were detected by operators
that calculate the gradient of an image. One way
to calculate the gradient of an image is the Sobel
operator [7], [8], [9], which creates a binary mask using
a user-specified threshold value.
The binary gradient mask showed lines of high contrast
in the image. These lines did not quite delineate the
outline of the object of interest. Compared to the
original image, the gaps in the lines surrounding the
object in the gradient mask could be seen.
These linear gaps disappeared if the Sobel image was
dilated using linear structuring elements, applied
by strel function. After finding the holes, the ‘imfill’
function was applied to the image to fill up the
holes. Finally, in order to make the segmented object
look natural, a smoothing process of the object was
applied twice with a diamond structuring element.
The diamond structured element was created by
using the strel function. Then, bitwise XOR operation
set the resulting bit to 1 if the corresponding bit in
the binary image or the result from the dilated image
was a 1. Bitwise manipulation enhanced the wanted
image of the hand region. Further processing with
the bounding box approach was used to clean up the
segmented hand region.
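A rough C++ sketch of the first preprocessing steps is given below. It is an illustration under stated assumptions, not the MATLAB implementation used in the prototype: the image is a flat grayscale array, the threshold is user-specified, and a 3x3 square structuring element stands in for the linear and diamond elements described above.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Sobel gradient magnitude -> binary mask, given a user-specified threshold.
    std::vector<int> sobelMask(const std::vector<int>& img, int w, int h,
                               double thresh) {
        std::vector<int> mask(w * h, 0);
        for (int y = 1; y + 1 < h; ++y) {
            for (int x = 1; x + 1 < w; ++x) {
                auto p = [&](int dx, int dy) { return img[(y + dy) * w + (x + dx)]; };
                // Horizontal and vertical Sobel kernels.
                int gx = -p(-1,-1) - 2*p(-1,0) - p(-1,1) + p(1,-1) + 2*p(1,0) + p(1,1);
                int gy = -p(-1,-1) - 2*p(0,-1) - p(1,-1) + p(-1,1) + 2*p(0,1) + p(1,1);
                mask[y * w + x] = std::hypot(double(gx), double(gy)) > thresh ? 1 : 0;
            }
        }
        return mask;
    }

    // Dilation with a 3x3 square structuring element closes small linear gaps.
    std::vector<int> dilate3x3(const std::vector<int>& mask, int w, int h) {
        std::vector<int> out(w * h, 0);
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
                for (int dy = -1; dy <= 1 && !out[y * w + x]; ++dy)
                    for (int dx = -1; dx <= 1; ++dx) {
                        int nx = x + dx, ny = y + dy;
                        if (nx >= 0 && nx < w && ny >= 0 && ny < h && mask[ny * w + nx]) {
                            out[y * w + x] = 1;
                            break;
                        }
                    }
        return out;
    }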
After the bounding box was obtained, the proportion of white pixels to black pixels inside the bounding box was calculated. If the proportion of white pixels changed by more than the threshold, a second image would be captured. Currently, the prototype used only the two-sequence sign language.
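A hedged C++ sketch of this capture trigger is given below; the image layout, bounding-box coordinates and the 0.1 change threshold are assumed values, since the paper specifies only that the proportion is compared with a program-specified threshold.

    #include <cmath>
    #include <vector>

    // Fraction of white (1) pixels inside a bounding box of a binary image.
    double whiteProportion(const std::vector<int>& img, int w,
                           int x0, int y0, int x1, int y1) {
        int white = 0, total = 0;
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x) {
                white += img[y * w + x];
                ++total;
            }
        return total ? double(white) / total : 0.0;
    }

    // Capture the second sign of the sequence when the proportion change
    // exceeds an assumed threshold (0.1 here is illustrative).
    bool shouldCaptureSecond(double prevProp, double currProp) {
        return std::fabs(currProp - prevProp) > 0.1;
    }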
Figure 2. Feed Forward Neural Network (FFNN)
Classification
Feed Forward Neural network, as shown in Figure 2,
was used in the classification process to recognise the
various hand gestures. It consisted of three layers:
input layer, hidden layer and output layer. Input to the
system included various types of two-sequence sign
language which were converted to a column vector
by neural network toolbox in MATLAB 7.0. The input
signals were propagated in a forward direction on a
layer-by-layer basis. Initialised weight was assigned
to each neuron. The neuron computed the weighted
sum of the input signals and compared the result with
a threshold value, θ. If the net input was less than
the threshold, the neuron output was a value of –1,
otherwise the neuron would become activated and
its output would attain a value of +1 instead. Thus, the
actual output of the neuron with sigmoid activation
Y = sigmoid( ∑_{i=1}^{n} x_i w_i − θ )        (1)
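A minimal C++ sketch of the neuron computation in Equation (1) is shown below; the logistic function stands in for the sigmoid of the MATLAB toolbox, and the inputs, weights and threshold are hypothetical values for illustration only.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Logistic sigmoid: squashes the net input into the (0, 1) range.
    double sigmoid(double v) { return 1.0 / (1.0 + std::exp(-v)); }

    // Neuron output per Equation (1): Y = sigmoid(sum_i x_i * w_i - theta).
    double neuronOutput(const std::vector<double>& x,
                        const std::vector<double>& w, double theta) {
        double net = -theta;
        for (std::size_t i = 0; i < x.size(); ++i) net += x[i] * w[i];
        return sigmoid(net);
    }

    int main() {
        // Hypothetical inputs, weights and threshold, for illustration only.
        const std::vector<double> x = {0.2, 0.7, 0.1};
        const std::vector<double> w = {0.5, -0.3, 0.8};
        std::printf("Y = %f\n", neuronOutput(x, w, 0.1));
        return 0;
    }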
In the case of supervised learning, the network was
presented with both the input data and the target
data called the training set. The network was adjusted
based on the comparison of the output and the target
values until the outputs almost matched the targets,
i.e. the error between them is negligible.
However, the dimension of the input for the neural
network was large and highly correlated (redundant).
This slowed down the processing speed; thus, Principal Component Analysis was used to reduce the dimension of the input vectors.
In MATLAB, an effective procedure for performing this operation is principal component analysis (PCA). This technique has three effects: first, it orthogonalises the components of the input vectors (so that they are uncorrelated with each other); second, it orders the resulting orthogonal components (principal components) so that those with the largest variation come first; and finally, it eliminates those components that contribute least to the variation in the data set. By using this technique, the learning rate of training the neural network was increased. As a result, the prototype successfully detected the hand region, as shown in Figure 3.
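For illustration, a minimal C++ sketch of the core PCA idea is given below (the prototype itself used the MATLAB toolbox routine). It mean-centres the data and extracts the first principal component, i.e. the direction of largest variation, by power iteration on the covariance matrix; the iteration count is an assumed value.

    #include <cmath>
    #include <vector>

    using Mat = std::vector<std::vector<double>>;

    // First principal component of row-major data (samples x dims).
    std::vector<double> firstComponent(Mat data, int iters = 200) {
        const std::size_t n = data.size(), d = data[0].size();
        // Mean-centre each dimension.
        for (std::size_t j = 0; j < d; ++j) {
            double mean = 0;
            for (std::size_t i = 0; i < n; ++i) mean += data[i][j];
            mean /= n;
            for (std::size_t i = 0; i < n; ++i) data[i][j] -= mean;
        }
        // Covariance matrix C = X^T X / (n - 1).
        Mat C(d, std::vector<double>(d, 0));
        for (std::size_t a = 0; a < d; ++a)
            for (std::size_t b = 0; b < d; ++b)
                for (std::size_t i = 0; i < n; ++i)
                    C[a][b] += data[i][a] * data[i][b] / double(n - 1);
        // Power iteration converges to the eigenvector with the largest
        // eigenvalue, i.e. the direction of largest variation.
        std::vector<double> v(d, 1.0);
        for (int it = 0; it < iters; ++it) {
            std::vector<double> w(d, 0);
            for (std::size_t a = 0; a < d; ++a)
                for (std::size_t b = 0; b < d; ++b) w[a] += C[a][b] * v[b];
            double norm = 0;
            for (double x : w) norm += x * x;
            norm = std::sqrt(norm);
            for (std::size_t a = 0; a < d; ++a) v[a] = w[a] / norm;
        }
        return v; // Project inputs onto v (and further components) to reduce dimension.
    }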
Figure 3. Hand Image Detection

EXPERIMENTAL RESULTS

Neural Network Training Technique

Figures 4 and 5 compare the FFNN training without PCA and with PCA implemented. The parameters of the neural network training were a learning rate of 0.01, 5000 epochs and a momentum coefficient of 0.8. The error-rate curves show that principal component analysis increased the learning rate of the neural network. The result of the NN training without PCA was MSE 0.050038/0 and Gradient 0.114071/1e-010, whereas the NN training with PCA (Figure 5) gave MSE 0.0110818/0 and Gradient 0.00490473/1e-010.

Figure 4. FFNN Training without PCA (performance 0.050038, goal 0, 5000 epochs)

Figure 5. FFNN Training with PCA (performance 0.0110818, goal 0, 5000 epochs)
Sequence Detection Testing
Two types of testing were conducted, i.e. positive testing and negative testing. The positive testing was to prove that the sequence of sign language could be recognised by the system. The negative testing was
to prove that whenever the sign was not moved, the system would not respond to the signer. Table 1 shows the results of the sequence detection testing.

Table 1. Result of Sequence Detection Testing

Sequence | (+) Test | (-) Test | Result
1 | Yes | No | True
1 | Yes | No | True
1 | No | Yes | False
1 | No | Yes | False
2 | Yes | No | True
2 | Yes | No | True
2 | Yes | No | True
2 | Yes | No | True

Recognition Rate

70 sets of two-sequence hand gestures were captured in real time from signers using a video camera, of which 50 were used as training sets and the remaining 20 were used as test sets. The recognition rate of the sign languages was calculated as follows:
Recognition rate = (No. of correctly classified signs / Total no. of signs) × 100%        (2)
The overall results of the system prototype were
tabulated in Table 2 below:
Table 2. S2V System Recognition Rate

Data | No. of Samples | Recognised Samples | Recognition Rate (%)
Training | 50 | 40 | 80.0
Testing | 20 | 15 | 75.0
Total | 70 | 55 | 78.6 (Average)
Segmentation and feature detection were performed as explained above. Experimental results for the 70 samples of hand images with different positions gave consistent outcomes. The proposed solution implemented S2V for real-time processing: the system was able to detect the sequence of sign symbols, with an additional automated function that calculated the proportion of black and white pixels and compared it with a threshold value specified by the program. The difficulty faced was in recognising small differences in the proportion of the images that were not detected by the threshold value (and also in the recognition part). However, for this prototype, output was obtained by implementing the proposed technique to detect the sequence.
Based on the above experiments, the two-sequence
sign language or hand gestures were tested with an
average recognition rate of 78.6%.
CONCLUSION
Hand gesture detection and recognition techniques for international sign languages were proposed, and a neural network was employed as a knowledge base for sign language. Recognition of RGB images and longer dynamic sign sequences was one of the challenges addressed by this technique. The experimental results showed that the system prototype S2V produced a satisfactory recognition rate in the automatic translation of sign language to voice. For future research, the Hidden Markov Model (HMM) is proposed to detect longer sequences in large sign vocabularies and to integrate this technique into a sign-to-voice system, or vice versa, to help normal people communicate more effectively with mute or hearing-impaired people.
REFERENCES
[1] Noor Saliza Mohd Salleh, Jamilin Jais, Lucyantie Mazalan, Roslan Ismail, Salman Yussof, Azhana Ahmad, Adzly Anuar, and Dzulkifli Mohamad, "Sign Language to Voice Recognition: Hand Detection Techniques for Vision-Based Approach," Current Developments in Technology-Assisted Education, FORMATEX 2006, vol. 2, pp. 967-972.

[2] C. Neider, J. Kegel, D. MacLaughlin, B. Bahan, and R. G. Lee, The Syntax of American Sign Language. Cambridge: The MIT Press, 2000.

[3] M. J. Jones and J. M. Rehg, "Statistical Color Models with Application to Skin Detection," International Journal of Computer Vision, Jan. 2002, vol. 46, no. 1, pp. 81-96.

[4] D. Saxe and R. Foulds, "Automatic Face and Gesture Recognition," IEEE International Conference on Automatic Face and Gesture Recognition, Sept. 1996, pp. 379-384.

[5] E. J. Ong and R. Bowden, "A Boosted Classifier Tree for Hand Shape Detection," Sixth IEEE International Conference on Automatic Face and Gesture Recognition (FGR 2004), IEEE Computer Society, 2004, pp. 889-894.

[6] C. S. Lee, J. S. Kim, G. T. Park, W. Jang, and Z. N. Bien, "Implementation of Real-time Recognition System for Continuous Korean Sign Language (KSL) mixed with Korean Manual Alphabet (KMA)," Journal of the Korea Institute of Telematics and Electronics, 1998, vol. 35, no. 6, pp. 76-87.

[7] R. Gonzalez and R. Woods, Digital Image Processing. Addison Wesley, 1992.

[8] R. Boyle and R. Thomas, Computer Vision: A First Course. Blackwell Scientific Publications, 1988.

[9] E. Davies, Machine Vision: Theory, Algorithms and Practicalities. Academic Press, 1990.

[10] X. L. Teng, B. Wu, W. Yu, and C. Q. Liu, "A Hand Gesture Recognition System Based on Local Linear Embedding," Journal of Visual Languages and Computing, vol. 16, Elsevier Ltd., 2005, pp. 442-454.

[11] W. W. Kong and S. Ranganath, "Signing Exact English (SEE): Modeling and Recognition," The Journal of Pattern Recognition Society, vol. 41, Elsevier Ltd., 2008, pp. 1638-1652.

[12] Y. H. Lee and C. Y. Tsai, "Taiwan Sign Language (TSL) Recognition based on 3D Data and Neural Networks," Expert Systems with Applications, Elsevier Ltd., 2007, pp. 1-6.
O. M. Foong received her BSc Computer
Science from Louisiana State University
and MSc Applied Mathematics from North
Carolina A & T State University in the USA.
She worked as a Computer System Manager
at Consolidated Directories at Burlington,
USA and also as Lecturer at Singapore
Polytechnic prior to joining the CIS
Department at Universiti Teknologi
PETRONAS. Her research interests include data mining, fuzzy
logic, expert systems, and neural networks.
T. J. Low received his BEng (Hons) in
Computer Technology from Teesside
University, UK in 1989 and MSc IT from
National University of Malaysia in 2001.
Low has been in the academic line for the
past 20 years as lecturer in various public
and private institutes of higher learning.
His research interests include wireless
technology, embedded systems, and grid/
HPC computing. Some of his current R&D projects include
Biologically Inspired Self-healing Software, VANET, and Noise
Removal in Seismograph using High Performance Computer. His
other completed projects have been recognised at national as
well as international level, for instance, his Free Space Points
Detector took part in the INPEX 2008 Pittsburgh event in USA in
August 2008. Low has published various papers in systems
survivability, HPC/Grid, and wireless/mobile technology.
Parallelisation of Prime Number Generation
Using Message Passing Interface
Izzatdin Aziz*, Nazleeni Haron, Low Tan Jung, Wan Rahaya Wan Dagang
Universiti Teknologi Petronas, 31750 Tronoh, Perak Darul Ridzuan, MALAYSIA
*izzatdin@petronas.com.my
Abstract
This research proposes a parallel processing algorithm, running on a cluster architecture, for prime number generation. The proposed approach is meant to decrease computational cost and accelerate the prime number generation process. Experimental results demonstrated its viability.
Keywords: Prime number generation, parallel processing, cluster architecture, MPI, cryptography
Introduction
Prime numbers have stimulated much interest in both the mathematical and the security fields, due to the prevalence of RSA encryption schemes. Cryptography often uses large prime numbers to produce cryptographic keys, which are used to encipher and decipher data. It has been identified that a computationally large prime number is likely to be a cryptographically strong prime. However, as the length of the cryptographic key increases, so does the amount of computer processing power required to create a new cryptographic key pair. In particular, the performance issue relates to the time and processing power required for prime number generation.
Prime number generation comprises the processing steps of searching for and verifying large prime numbers for use in cryptographic keys. This is a pertinent problem in public key cryptography schemes, since increasing the length of the key to enhance the security level results in a decrease in the performance of a prime number generation system.
Another trade-off resulting from using large prime numbers pertains to the primality test. The primality test is the intrinsic part of prime number generation and is the most computationally intensive sub-process; it has been proven that testing the primality of large candidates is very computationally intensive.
Apart from that, the advent of parallel computing has attracted much interest in applying parallel algorithms in a number of areas, because it has been proven that parallel processing can substantially increase processing speed. This paper presents a parallel processing approach on a cluster architecture for prime number generation that provides improved performance in generating cryptographic keys.
Related Work
Despite the importance of prime number generation for cryptographic schemes, it is still scarcely investigated, and real-life implementations are of rather poor performance [1]. However, a few approaches do exist to efficiently generate
This paper was presented at the 6th WSEAS International Conference on Computational Intelligence, Man-Machine Systems and Cybernetics, Spain,
14 - 16 December 2008
prime numbers [1-5]. Maurer proposed an algorithm to generate provable prime numbers that fulfil security constraints without increasing the expected running time [2]. An improvement was made to Maurer's algorithm by Brandt et al. to further speed up prime number generation [3]; that work also included a few ways of achieving further savings in prime number generation [3]. Joye et al. presented an efficient prime number generation scheme that allowed fast implementation on cryptographic smart cards [1]. Besides that, Cheung et al. originated a scalable architecture to further speed up the prime number validation process at reduced hardware cost [4]. All of this research, however, focused on processing the algorithm sequentially, whereas it has been shown that tasks accomplished through parallel computation execute faster than computational processes that run sequentially [9]. Tan et al. designed a parallel pseudo-random generator using the Message Passing Interface (MPI) [5]. That work is similar to ours but with a different emphasis: the prime numbers generated were used for Monte Carlo simulations and not for cryptography. Furthermore, considerable progress has been made in developing high-performance asymmetric key cryptography schemes using approaches such as high-end computing hardware [6, 7, 8].
System Model
Experimental Setup
The experimental cluster platform comprised 20 SGI machines: Silicon Graphics 330 Visual Workstations, each with off-the-shelf dual Intel Pentium III 733 MHz processors and 512 MB of memory. These machines were connected to a Fast Ethernet 100 Mbps switch. The head node performs as the master node with multiple network interfaces [10]. Although these machines are outdated in hardware and performance compared to the latest high performance computers, what mattered in this research was the parallelisation of the algorithm and how jobs were disseminated among processors.
Number Generation
In order to first generate the number, a random seed was picked and input into the program. The choice of seed was crucial to the success of this generator, as it had to be as random as possible; otherwise, anyone using the same random function would be capable of generating the primes, thus defeating the purpose of having strong primes.
Primality Test
A trial division algorithm was selected as the core for primality testing. Given an integer n, trial division consists of trial-dividing n by every prime number less than or equal to √n. If a number is found which divides evenly into n, that number is a factor of n.
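A minimal C++ sketch of such a trial-division test is shown below. Trial-dividing by every odd candidate up to √n, rather than by every prime, is a simplification assumed here.

    #include <cstdint>

    // Trial division: n is prime if no integer in [2, sqrt(n)] divides it evenly.
    bool isPrime(std::uint64_t n) {
        if (n < 2) return false;
        if (n % 2 == 0) return n == 2;
        for (std::uint64_t d = 3; d * d <= n; d += 2)
            if (n % d == 0) return false; // d is a factor of n
        return true;
    }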
Parallel Approach
Once a random number was generated, the master node created a dynamic 2D array (a table), which was later populated with odd numbers. As shown in Figure 1, a pointer-to-pointer variable **table in the master points to an array of pointers, each of which in turn points to a row; this yields a dynamic 2D table. After the table was created, the master initialised only its first row.

Figure 1. Master creates a dynamic 2D array to be populated with odd numbers

The parallel segment began when the master node broadcast row[0] to all nodes using MPI_Bcast. Row[0] was used by each node to continue populating
the rest of the rows of the table with odd numbers. The master node then divided the remaining n-1 unpopulated rows equally among the nodes available in the grid cluster, giving each node an equal number of rows to populate with odd numbers. This was achieved using MPI_Send. A visual representation of this idea is depicted in Figure 2.

Figure 2. Master sends an equal number of rows to each slave
After each node returned its populated rows to the master node, the master randomly picked prime numbers to be assigned as the values of p, q and e. These values were later used for the encryption and decryption parts of a cryptosystem algorithm. Note that the only part of the whole program that ran in parallel was the prime number generation.
Parallel Algorithm
Each node received n rows to be populated with odd numbers. This is where the parallel processing took place: each node processed its assigned rows concurrently. Each node first populated the rows with odd numbers and then filtered them for prime numbers using the chosen primality test. The odd prime numbers remained in the rows, while entries that were not prime were set to NULL. Each populated row was then returned to the master node, which then randomly picked three distinct primes for the values of p, q and the public key e of the cryptographic scheme.
The algorithm of the parallel program is as follows:

Start
  Master creates a table of odd numbers and initialises row[0] only
  Master broadcasts row[0] to all slaves
  Master sends a number of rows to each slave
  Each slave receives an initialised row from master
  Each slave populates its rows with prime numbers
  Each slave returns its populated rows to master
  Master waits for results from slaves
  Master receives populated rows from each slave
  Master checks unpopulated rows
  If maxRow > 0
    Master sends unpopulated row to slave
  Master picks prime numbers randomly
  Prompt to select program option
  Switch (method)
    Case 1: prompt to enter a value greater than 10000
      If value > 10000, generate key primes
      Else, exit program
    Case 3: open file and decrypt
    Case 4: exit program
  End
End

For example, if there are 4 processors available to execute the above tasks and 1200 rows need to be populated with prime numbers, each slave will be given 300 rows to process: processor 0 processes row(1) up to row(299), processor 1 processes row(300) up to row(599), processor 2 processes row(600) up to row(899), and lastly processor 3 processes row(900) up to the last row, row(1199). The overall procedure is depicted in Figure 3.

Figure 3. Example of assigning 1200 rows to 4 processors (slaves)
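A condensed C++/MPI sketch of this master-slave pattern is given below. It illustrates the calls named above (MPI_Bcast, MPI_Send/MPI_Recv) rather than reproducing the authors' code: the row length, one row per slave, and a flat array in place of the pointer-to-pointer table are simplifying assumptions.

    #include <mpi.h>
    #include <vector>

    const int ROW_LEN = 100; // Hypothetical number of odd candidates per row.

    // Sketch: master broadcasts row 0; slaves continue the odd-number series,
    // keep the primes (composites set to 0), and return their row to master.
    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        std::vector<long> row0(ROW_LEN);
        if (rank == 0)                       // Master initialises row[0] only.
            for (int i = 0; i < ROW_LEN; ++i) row0[i] = 2 * i + 1;
        // Every node receives row[0] as the starting point for its own rows.
        MPI_Bcast(row0.data(), ROW_LEN, MPI_LONG, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            // Master: collect one populated row back from each slave.
            std::vector<long> row(ROW_LEN);
            for (int s = 1; s < size; ++s) {
                MPI_Recv(row.data(), ROW_LEN, MPI_LONG, s, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                // ...master would store the row and later pick p, q and e here.
            }
        } else {
            // Slave: populate a row with odd numbers, keep primes, zero the rest.
            std::vector<long> row(ROW_LEN);
            long start = row0[ROW_LEN - 1] + 2L * ROW_LEN * (rank - 1);
            for (int i = 0; i < ROW_LEN; ++i) {
                long n = start + 2 * i + 2;  // next odd candidate
                bool prime = n > 1;
                for (long d = 3; d * d <= n && prime; d += 2)
                    if (n % d == 0) prime = false;
                row[i] = prime ? n : 0;
            }
            MPI_Send(row.data(), ROW_LEN, MPI_LONG, 0, 0, MPI_COMM_WORLD);
        }
        MPI_Finalize();
        return 0;
    }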
Evaluation
Table 1 shows the execution time of running the parallel program on one and on several computing nodes. From the results, it can be inferred that running the algorithm in parallel mode accelerated the prime number generation process. However, there is a noticeable increase in processing time when the program runs on more than 3 nodes: the execution time recorded was higher when more nodes participated in the generation process. This may be caused by network latency during the distribution of the tasks, which increased the execution time taken for communication between nodes.

Table 1. Comparison of Execution Time for Different Numbers of Nodes

Number of nodes | Execution Time (ms)
1 | 7.850
3 | 0.039
5 | 0.043
10 | 0.053
30 | 0.093
Figure 4 shows the performance measurement of MPI_GATHER tested on 15 nodes. This figure was captured using the MPICH Jumpshot-4 tool to evaluate the algorithm's usage of the MPI libraries. The numbers plotted show the amount of time taken for each node to send the prime numbers it discovered back to the master node.

Figure 4. Time taken for MPI_BCAST and MPI_GATHER running on 15 nodes

From the figure, it was observed that the gather volume was largest for the first node and deteriorated towards the last node. This may be due to the frequent prime numbers discovered at the beginning of the number series, which became scarce as the numbers grew larger towards the end. This confirms that the relative frequency of occurrence of prime numbers decreases with the size of the number, which results in fewer prime numbers being sent back to the master node by later nodes.

Conclusion

This study proposed a parallel approach for prime number generation in order to accelerate the process. The parallelism and the cluster architecture of this approach were tested with large prime numbers. The results demonstrated that an improvement can be obtained if a parallel approach is deployed. However, further improvements can be made, including:

(1) Using another primality test that is more feasible for large prime number generation, such as the Rabin-Miller algorithm.

(2) Using another random number generator that can produce random numbers with less computation yet provide a higher security level.
References
[1] M. Joye, P. Paillier and S. Vaudenay, "Efficient Generation of Prime Numbers," Cryptographic Hardware and Embedded Systems, vol. 1965 of Lecture Notes in Computer Science, pp. 340-354, Springer-Verlag, 2000.

[2] U. Maurer, "Fast Generation of Prime Numbers and Secure Public-Key Cryptographic Parameters," Journal of Cryptology, vol. 8, no. 3 (1995), 123-156.

[3] J. Brandt, I. Damgard and P. Landrock, "Speeding up prime number generation," in Advances in Cryptology - ASIACRYPT '91, vol. 739 of Lecture Notes in Computer Science, pp. 440-449, Springer-Verlag, 1991.

[4] Cheung, R. C. C., Brown, A., Luk, W. & Cheung, P. Y. K., "A Scalable Hardware Architecture for Prime Number Validation," IEEE International Conference on Field-Programmable Technology, pp. 177-184, 6-8 Dec 2004.

[5] Tan, C. J. and Blais, J. A., "PLFG: A Highly Scalable Parallel Pseudo-random Number Generator for Monte Carlo Simulations," 8th International Conference on High-Performance Computing and Networking (May 08-10, 2000), Lecture Notes in Computer Science, vol. 1823, Springer-Verlag, London, 127-135.

[6] Agus Setiawan, David Adiutama, Julius Liman, Akshay Luther and Rajkumar Buyya, "GridCrypt: High Performance Symmetric Key Cryptography using Enterprise Grids," 5th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT 2004), Singapore, Springer-Verlag (LNCS series), Berlin, Germany, December 8-10, 2004.

[7] Praveen Dongara and T. N. Vijaykumar, "Accelerating Private-key Cryptography via Multithreading on Symmetric Multiprocessors," in Proceedings of the IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), March 2003.

[8] Jerome Burke, John McDonald and Todd Austin, "Architectural Support for Fast Symmetric-Key Cryptography," Proc. ACM Ninth Int'l Conf. on Architectural Support for Programming Languages and Operating Systems (ASPLOS-IX), Nov. 2000.

[9] Selim G. Akl and Stefan D. Bruda, "Improving a Solution's Quality Through Parallel Processing," The Journal of Supercomputing, vol. 19, issue 2 (June 2001).

[10] Dani Adhipta, Izzatdin Bin Abdul Aziz, Low Tan Jung and Nazleeni Binti Haron, "Performance Evaluation on Hybrid Cluster: The Integration of Beowulf and Single System Image," The 2nd Information and Communication Technology Seminar (ICTS), Jakarta, August 2006.
Izzatdin Abdul Aziz delves into various
aspects of parallel programming using
Message Passing Interface for High
Performance Computing and Network
Simulation. He worked at PERODUA Sdn
Bhd as a Systems Engineer in 2002.
Presently he is with the Computer and
Information Science Department at
Universiti Teknologi PETRONAS (UTP) as a
lecturer and researcher. He received his Masters of IT from
University of Sydney specialising in Computer Networks.
Nazleeni Samiha Haron is currently
lecturing at the Computer and Information
Science Department, Universiti Teknologi
PETRONAS.
Her research areas are
Distributed Systems as well as Grid
Computing. She received her Masters
Degree from University College London
(UCL) in Computer Networks and
Distributed Systems.
Low Tang Jung is a senior lecturer at
Computer and Information Science
Department,
Universiti
Teknologi
PETRONAS. His research interests are in
High Performance Computing, Wireless
Communications, Embedded Systems,
Robotics and Network Security. He received
his MSc in Information Technology from
Malaysian National University and BEng
(Hons) Computer Technology from Teesside University, UK.
Evaluation of Lossless Image Compression
for Ultrasound images
Boshara M. Arshin, P. A. Venkatachalam*, Ahmad Fadzil Mohd Hani
Universiti Teknologi PETRONAS, 31750 Tronoh, Perak Darul Ridzuan, Malaysia
*paruvachiammasai_venkatachala@petronas.com.my
Abstract
Since medical images occupy large amounts of storage space and hinder the speed of tele-healthcare information exchange, compression of image data is essential. In this work a comprehensive experimental study of lossless compression algorithms was applied to ultrasound images. Many schemes are available for lossless compression, but in this work eight commonly used, well-known algorithms, namely CALIC, JPEG2000, JPEG-LS, FELICS, the lossless mode of JPEG, SPIHT (with Reversible Wavelets S+P), PNG and BTPC, were applied to determine the compression ratio and processing time. A large number of ultrasound images pertaining to different types of organs were tested. The results obtained on compression ratio indicated that CALIC led, followed by JPEG-LS, but JPEG-LS took much less processing time than CALIC. FELICS, BTPC and the two wavelet-based schemes (SPIHT and JPEG2000) exhibited very similar efficiency and were the next best after CALIC and JPEG-LS. The results showed that JPEG-LS was the better scheme for lossless compression of ultrasound images, as it demonstrated comparatively better compression efficiency and compression speed.
Keywords: lossless image compression, ultrasound images, compression ratio, compression speed.
INTRODUCTION
Ultrasound is a widely accepted medical imaging modality for the diagnosis of different diseases due to its non-invasive, cheap and radiation-hazard-free characteristics. As a result of its widespread application, huge volumes of ultrasound image data have been generated by various medical systems. This has caused increasing problems in transmitting the image data to remote places, especially via wireless media for tele-consultation, and in storage. Thus efficient image compression algorithms are needed to reduce file sizes as much as possible and make storage, access and transmission facilities more practical and efficient. The relevant
medical information contained in the images must
be well preserved after compression. Any artifact in
the decompressed images may impede diagnostic
conclusions and lead to severe consequences.
Therefore, selecting a suitable method is critical for
ultrasound image coding. This goal is often achieved
by using either lossless compression or diagnostically
lossless compression with moderate compression
ratio. This challenge faced for ultrasound images
motivates the need for research in identifying proper
compression schemes.
There are many approaches to image compression
which can be used. These can be categorised into
two fundamental groups: lossless and lossy. In
lossless compression, the image reconstructed after
decompression is numerically identical to the original
This paper was presented at the International Conference On Man-Machine Systems (ICoMMS 2006) Langkawi, Malaysia,
15-16 September 2006
image. This is obviously most desirable since no
information is compromised. In lossy compression the
reconstructed image contains degradations relative to
the original to achieve higher compression ratios. The
type of compression group to be used depends on the
image quality needed. Medical images cannot afford
to lose any data. Thus efficient lossless compression
schemes are to be studied.
In the present work, the performance of eight efficient lossless compression schemes, namely CALIC, JPEG2000, JPEG-LS, FELICS, the lossless mode of JPEG, SPIHT with Reversible Wavelets (S+P), PNG and BTPC, was studied to investigate compression efficiency for digital ultrasound images, highlighting the limitations of cutting-edge technology in lossless compression. The study focused mainly on the compression effectiveness and computational time of each compression method on ultrasound images.
LOSSLESS IMAGE COMPRESSION
A typical image compression implementation consists
of two separate components, an encoder and a
decoder as shown in Figure 1. The former compresses
the original image into a more compact format
suitable for transmission and storage. The decoder
decompresses the image and reconstructs it to the
original form. This process will either be lossless or
lossy, which will be determined by the particular
needs of the application.
Figure 1. A typical data compression system (Input Image -> Encoder -> Data Storage or Transmission -> Decoder -> Output Image)

Most of the recent efficient lossless compression schemes can be classified into one of three paradigms: predictive with statistical modeling, transform-based and dictionary-based schemes. The first paradigm typically consists of two distinct components: a statistical modeler and a coder. The modeler gathers information about the data by looking for some context and identifies a probability distribution that the coder uses to encode the next pixel x_{i+1}. This can be viewed as an inductive statistical inference problem in which an image is observed in raster scan. At each instant i, after having observed the subset of past source samples x^i = (x_1, x_2, ..., x_i), but before observing x_{i+1}, a conditional probability distribution P(.|x^i) is assigned to the next symbol x_{i+1}. Ideally, the code length l contributed by x_{i+1} is

l(x_{i+1}) = -log P(x_{i+1} | x^i) bits,

which averages to the entropy of the probabilistic model. Modern lossless image compression [2, 3] is based on the above paradigm, in which the probability assignment is broken into the following steps:
• A predictive step, which exploits image information redundancy (correlation of the data) to construct an estimated prediction value x̂_{i+1} for the next sample x_{i+1}, based on a finite subset of the available past source symbols x^i.
• The determination of the context in which the value x_{i+1} occurs. The context is a function of a (possibly different) causal template.
• A probabilistic model for the prediction error e_{i+1} = x_{i+1} - x̂_{i+1}, conditioned on the context of x_{i+1}.
The prediction error is coded based on a probability distribution using a simple function of previously seen neighbouring samples (W, N, E and NW), as shown in Figure 2. The advantage of prediction is that it de-correlates the data samples, thus allowing the use of a simple model for the prediction errors [2].
Figure 2. Four neighbouring samples (NW, N, W and E) in a causal template around the current pixel
In transform-based coding, the original digital image
is transformed into frequency or wavelet domain
prior to modeling and coding, where it is highly decorrelated by the transform. This de-correlation
concentrates the important image information into
a more compact form in which the redundancy can
be removed using an entropy coder. JPEG2000 is a
transform-based image compression scheme.
Finally, the dictionary-based compression algorithms
substitute shorter codes for longer patterns of strings
within the data stream. Pixel patterns (substrings) in
the data stream found in the dictionary are replaced
with a single code. If a substring is not found in the
dictionary, a new code is created and added to the
dictionary. Compression is achieved when smaller codes are substituted for longer patterns of data, as in GIF and PNG, which are widely used on the Internet.
This work focused on the following lossless compression schemes:
1. CALIC, which combines non-linear prediction with advanced statistical error modeling techniques.

2. FELICS, in which each pixel is coded in the context of its two nearest neighbours.

3. BTPC, a multi-resolution technique which decomposes the image into a binary tree. These methods are the result of improvements in prediction and modeling techniques, as well as improvements in coding techniques, that yielded significantly higher compression ratios [5].

4. Lossless mode of JPEG, which combines simple linear prediction with Huffman coding.

5. JPEG-LS, based on the Low Complexity Lossless Compression method LOCO-I, which combines good performance with an efficient implementation; it employs a non-linear simple edge-detector prediction (a sketch of this predictor follows this list), in particular with Golomb-Rice coding or run-length coding.

6. Lossless mode of JPEG2000, which uses integer-to-integer wavelets based on Embedded Block Coding with Optimised Truncation (EBCOT) [5].

7. SPIHT with Reversible Wavelets (S+P), a second lossless wavelet-based method besides JPEG2000.

8. Portable Network Graphics (PNG), which is a dictionary-based compression method [1, 2, 3].
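The non-linear edge-detecting prediction used by LOCO-I/JPEG-LS is the median edge detector (MED). A minimal C++ sketch is shown below; it illustrates the well-known MED rule rather than being an excerpt from any of the implementations used in this study.

    #include <algorithm>

    // LOCO-I / JPEG-LS median edge detector (MED) predictor.
    // W, N and NW are the already-decoded neighbours of the current pixel.
    int medPredict(int W, int N, int NW) {
        if (NW >= std::max(W, N)) return std::min(W, N); // edge above or to the left
        if (NW <= std::min(W, N)) return std::max(W, N);
        return W + N - NW;                               // smooth region: planar estimate
    }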
Artur Przelaskowski [6] considered only four effective lossless compression schemes: a binary context-based arithmetic coder, CALIC, JPEG-LS and JPEG2000, applied to 22 mammogram images. This study pioneered the comparison of different compression methods for mammograms; however, only a few lossless coders were involved, and it was found that the best lossless compression ratio that could be achieved was only 2.
Kivijärvi et al. [7, 9] studied lossless compression of medical images of various modalities: Computed Radiography, Computed Tomography, Magnetic Resonance Imaging, Nuclear Medicine, and Ultrasound. The performance of a range of general-purpose and image-specific lossless compression schemes was evaluated. They observed that CALIC performed consistently well and achieved a compression ratio of 2.98. JPEG-LS performed almost as well, with a ratio of 2.81, and lossless JPEG with Huffman encoding and prediction selection value five [1] had a lower compression ratio of 2.18. PNG did not perform well, achieving a compression ratio of only 1.90, and neither did any of the general-purpose compression schemes. These early studies did not have the opportunity to examine the performance of the JPEG2000 scheme. The compression ratio reported in their study was the average over all modalities, despite the fact that different modalities may exhibit different redundancy, leading to different compression ratios. They also measured compression and decompression time, and concluded that "CALIC gives high compression in a reasonable time, whereas JPEG-LS is nearly as effective and very fast".
Denecker [8] used more than five image-based and general-purpose compression schemes, including lossless JPEG, CALIC, S+P and the GNU UNIX utility gzip. They were tested on Computed Tomography, MRI, Positron Emission Tomography, Ultrasound, X-Ray and Angiography images. It was indicated that CALIC performed best, with an average compression ratio of 3.65; S+P with arithmetic encoding achieved 3.4, lossless JPEG with Huffman encoding about 2.85 and gzip about 2.05. Since it was not stated how many images were used in this study, it is difficult to interpret the average compression ratios specified [10].
METHODOLOGY
The different lossless compression algorithms referred to above yield different compression ratios and speeds for the same type of images, so it is essential to decide which among them is better when applied to an image. The two most important characteristics of lossless image compression algorithms are Compression Ratio (CR) and Compression Speed (CS). If the combined time taken for compression and decompression is small, then CS is high. These characteristics help to determine the suitability of a given compression algorithm for a specific application. CR is simply the size of the original image divided by the size of the compressed image; this ratio indicates how much compression is achieved for a particular image.
In order to calculate how much time the process (compression or decompression) used, in seconds, the function clock() in C++ was used. This function was called at the start and at the finish of the process, and the duration in seconds was computed from the CPU ticks elapsed since the system was started:

Duration = (finish - start) / CPU ticks per second

This duration value depended on the complexity and efficiency of the software implementation of the compression/decompression algorithms and on the speed of the processor.
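In C++ this timing pattern reduces to a few lines, sketched below with a placeholder for the call being measured; CLOCKS_PER_SEC is the standard-library constant for the CPU ticks per second used in the equation above.

    #include <cstdio>
    #include <ctime>

    int main() {
        std::clock_t start = std::clock();
        // compressImage(...);  // placeholder: run compression or decompression here
        std::clock_t finish = std::clock();
        double duration = double(finish - start) / CLOCKS_PER_SEC; // seconds of CPU time
        std::printf("Duration: %f s\n", duration);
        return 0;
    }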
Considering compression and decompression time, some compression methods are symmetric, meaning that roughly equal time is taken for compression and decompression; an asymmetric algorithm takes more time to compress than to decompress. All the compression algorithms used in this study were implemented using MS Visual C++ along with JASPER, MG, LIBPNG, the HP implementation of JPEG-LS and ZLIB. For this study, a large number of ultrasound images differing in texture and size were used, and the results obtained for a sample set of 21 images were analysed for compression efficiency. The ultrasound images were obtained from local hospitals and scanned in the Intelligent Imaging Laboratory at Universiti Teknologi PETRONAS using a Pentium IV processor at 2.8 GHz under Windows XP.
RESULTS and DISCUSSIONS
Compression Efficiency
The results obtained for a test sample set of 21 images
applying the above method for compression efficiency
are shown in Figure 3. It was observed that the two
predictive state-of-the-art CALIC and JPEG-LS were
the first two efficient schemes for ultrasound images
independent of texture and size.
Although the best achievable compression ratio
chosen was with LJPEG having tried all the possible
seven predictors for lossless coding [1], LJPEG was
found to be lagging behind all schemes in terms of
efficiency. LJPEG used a very simple linear prediction
that combined three surrounding neighbouring
Duration = (finish – start) / CPU ticks per second
This duration value depended on the complexity
and efficiency of the software implementation of
the compression/decompression algorithms and the
speed of the processor. Considering compression/
Figure 3. Compression efficiency
pixels. PNG, which is a dictionary-based scheme,
outperformed LJPEG only slightly. FELICS, BTPC and
the two wavelet-based schemes (JPEG2000 and
S+P) were found to lie in between, performing almost
equally well, with slight differences in efficiency, after
CALIC and JPEG-LS. The results clearly showed that
predictive schemes with statistical modelling
performed better than transform-based coding, which
in turn outperformed dictionary-based coding for
lossless compression of images.
Compression Speed
Figures 4(a) and (b) show the compression and
decompression times respectively for the above
eight lossless schemes. It can be seen that FELICS
was the fastest while CALIC was extremely slow,
whereas LJPEG and JPEG-LS were much closer to
FELICS. It should be noted that these four algorithms
are symmetric. PNG and BTPC demonstrated lower
compression speeds than the two wavelet-based
algorithms.
Figure 4. Compression/decompression time: (a) compression time in seconds; (b) decompression time in seconds
From the above analysis, it was found that the
predictive schemes yielded improved compression
efficiency and speed, and resulted in reduced cost of
transmission and storage especially for tele-healthcare
applications. JPEG-LS with embedded run length
encoding was found to be better than other methods
for lossless compression of ultrasound images.
CONCLUSION
In this paper a comparative study on lossless
compression schemes applied on ultrasound images
was carried out. To evaluate the lossless compression
methods, three criteria were studied viz., compression
ratio, compression time and decompression time. A
large number of ultrasound images were processed
but only the results for a set of 21 test cases were
included (refer Table 1). Various aspects of the lossless
compression procedures were examined. CALIC
gave the best compression efficiency but much
lower compression speed. JPEG-LS showed high
compression ratio (very close to CALIC) and better
compression speed than CALIC. Based on these two
features, the results showed that JPEG-LS is well
suited for compressing ultrasound images. Additional
research is to be carried by applying JPEG-LS on the
regions of interest and lossy compression algorithms
on the remaining portion of the ultrasound image.
This will will not only result in efficient storage but will
also increase speed of transmission of images in telehealthcare applications.
ACKNOWLEDGEMENT
The authors would like to thank the Department of Diagnostic
Imaging of Ipoh Hospital for providing the medical images used
in this work.
REFERENCES
[1] Khalid Sayood, Introduction to Data Compression, Morgan Kaufmann Publishers, 2000.

[2] B. Carpentieri, et al., "Lossless compression of continuous-tone images", Proc. IEEE, 88 (11) (2000) 1797-1809.

[3] G. Randers-Pehrson, et al., PNG (Portable Network Graphics) Specification Version 1.1, PNG Development Group, February 1999.

[4] John A. Robinson, "Efficient general-purpose image compression with binary tree predictive coding", IEEE Transactions on Image Processing, Apr 1997.

[5] D. Taubman, "High performance scalable image compression with EBCOT", IEEE Trans. on Image Processing, July 2000.

[6] Artur Przelaskowski, "Compression of mammograms for medical practice", ACM Symposium on Applied Computing, 2004.

[7] J. Kivijärvi, et al., "A comparison of lossless compression methods for medical images", Computerized Medical Imaging and Graphics, 1998.

[8] K. Denecker, J. Van Overloop, I. Lemahieu, "An experimental comparison of several lossless image coders for medical images", Proc. 1997 IEEE Data Compression Conference, 1997.

[9] David A. Clunie, "Lossless Compression of Grayscale Medical Images: Effectiveness of Traditional and State of the Art Approaches".
Table 1. Compression ratios (CR), compression times (CT) and decompression times (DT), times in seconds

Images        LJPEG                  JPEG-LS                JPEG2000               CALIC
              CR     CT     DT       CR     CT     DT       CR     CT     DT       CR     CT     DT
Ultrasound1   2.078  0.203  0.094    2.302  0.234  0.235    2.194  1.156  0.937    2.381  2.703  2.703
Ultrasound2   1.828  0.188  0.078    2.074  0.203  0.203    1.952  1.125  0.921    2.128  2.5    2.5
Ultrasound3   1.897  0.25   0.078    2.188  0.265  0.203    2.028  1.109  0.937    2.242  2.468  2.468
Ultrasound4   1.816  0.187  0.078    2.072  0.234  0.203    1.946  1.14   0.968    2.133  2.5    2.5
Ultrasound5   1.904  0.187  0.078    2.190  0.203  0.203    2.028  1.14   1.015    2.244  2.484  2.468
Ultrasound6   1.815  0.203  0.078    2.032  0.218  0.406    1.934  1.171  1.015    2.100  2.531  2.531
Ultrasound7   2.075  0.375  0.078    2.326  0.375  0.219    2.172  1.312  1.109    2.412  2.718  2.718
Ultrasound8   2.078  0.203  0.14     2.302  0.234  0.235    2.194  1.187  0.906    2.381  2.781  2.765
Ultrasound9   2.045  0.203  0.093    2.297  0.234  0.235    2.166  1.234  1.14     2.380  2.775  2.765
Ultrasound10  1.670  0.203  0.078    1.931  0.235  0.235    1.840  1.25   1.046    1.991  2.625  2.562
Ultrasound11  1.893  0.203  0.078    2.217  0.234  0.235    2.063  1.312  1.046    2.254  2.515  2.515
Ultrasound12  1.788  0.203  0.078    2.015  0.219  0.265    1.924  1.187  1.156    2.069  2.609  2.593
Ultrasound13  1.867  0.203  0.078    2.193  0.218  0.218    2.043  1.156  1.14     2.223  2.562  2.515
Ultrasound14  1.794  0.203  0.078    2.054  0.203  0.219    1.929  1.234  1.171    2.096  2.546  2.546
Ultrasound15  1.879  0.187  0.078    2.195  0.218  0.219    2.039  1.328  1.125    2.232  2.531  2.531
Ultrasound16  2.139  0.187  0.063    2.507  0.203  0.203    2.375  1.078  0.906    2.575  2.375  2.375
Ultrasound17  2.024  0.25   0.078    2.382  0.187  0.187    2.117  1.125  1        2.433  2.359  2.328
Ultrasound18  1.748  0.203  0.078    1.937  0.234  0.234    1.877  1.171  1.14     2.012  2.593  2.546
Ultrasound19  2.197  0.203  0.093    2.455  0.234  0.25     2.374  1.187  1.109    2.519  2.812  2.812
Ultrasound20  2.175  0.218  0.078    2.447  0.234  0.234    2.377  1.234  1.14     2.525  2.843  2.843
Ultrasound21  2.167  0.187  0.062    2.516  0.219  0.219    2.389  1.203  1.031    2.582  2.421  2.593

Images        FELICS                 PNG                    BTPC                   S+P
              CR     CT     DT       CR     CT     DT       CR     CT     DT       CR      CT     DT
Ultrasound1   2.250  0.127  0.113    2.133  0.64   0.263    2.216  0.718  0.406    2.208   0.858  0.612
Ultrasound2   1.980  0.141  0.122    1.923  0.546  0.256    1.930  0.75   0.375    1.945   0.936  0.723
Ultrasound3   2.087  0.126  0.109    1.992  0.5    0.371    2.033  0.75   0.453    2.005   0.889  0.583
Ultrasound4   1.978  0.127  0.112    1.912  0.515  0.252    1.932  0.796  0.375    1.950   0.917  0.706
Ultrasound5   2.093  0.126  0.111    1.999  0.609  0.253    2.033  0.703  0.5      2.0127  0.91   0.711
Ultrasound6   1.964  0.13   0.116    1.904  0.5    0.25     1.918  0.796  0.375    1.938   0.945  0.801
Ultrasound7   2.263  0.128  0.12     2.133  0.765  0.294    2.230  0.812  0.39     2.183   0.999  0.584
Ultrasound8   2.250  0.127  0.116    2.133  0.765  0.287    2.216  0.843  0.64     2.208   0.962  0.864
Ultrasound9   2.234  0.139  0.127    2.113  0.625  0.27     2.198  0.875  0.625    2.185   1.002  0.84
Ultrasound10  1.821  0.147  0.121    1.816  0.625  0.274    1.780  0.859  0.39     1.838   1.074  0.917
Ultrasound11  2.094  0.124  0.114    2.016  0.656  0.553    2.042  0.703  0.703    2.057   0.972  0.836
Ultrasound12  1.927  0.136  0.117    1.898  0.484  0.252    1.879  0.859  0.39     1.922   1.034  0.65
Ultrasound13  2.034  0.124  0.111    2.000  0.671  0.253    1.993  0.828  0.406    2.026   0.99   0.885
Ultrasound14  1.952  0.127  0.116    1.902  0.5    0.251    1.900  0.859  0.375    1.919   1.01   0.904
Ultrasound15  2.070  0.143  0.113    2.002  0.671  0.25     2.019  0.843  0.359    2.032   0.997  0.849
Ultrasound16  2.324  0.116  0.106    2.188  0.609  0.244    2.326  0.656  0.609    2.390   0.836  0.721
Ultrasound17  2.274  0.119  0.106    2.120  0.515  0.25     2.213  0.812  0.359    2.0886  0.976  0.574
Ultrasound18  1.864  0.133  0.122    1.865  0.593  0.248    1.824  0.859  0.39     1.866   1.037  0.954
Ultrasound19  2.359  0.128  0.115    2.247  0.765  0.266    2.349  0.843  0.375    2.397   0.917  0.79
Ultrasound20  2.336  0.133  0.125    2.248  0.75   0.267    2.325  0.875  0.406    2.383   0.942  0.778
Ultrasound21  2.349  0.118  0.109    2.201  0.765  0.243    2.348  0.843  0.343    2.429   0.801  0.73
Learning Style Inventory System:
A Study To Improve Learning Programming Subject
Saipunidzam Mahamad*, Syarifah Bahiyah Rahayu Syed Mansor, Hasiah Mohamed @ Omar1
*Universiti Teknologi Petronas, Bandar Seri Iskandar, 31750 Tronoh Perak, Malaysia
1UiTM Cawangan Terengganu, 23000 Dungun, Terengganu, Malaysia
*saipunidzam_mahamad@petronas.com.my
ABSTRACT
This paper presents a learning style for personal development as an alternative for a student in learning a
programming subject. The model was developed with the intention to increase awareness of learning style
preferences among higher education students and as a guide to excel in the programming subject. The aim
of this paper is to integrate the ideal learning style with individual learning preferences in improving the way
of learning a programming subject. The study proposes a learning style model based on a human approach
to perform a task, and human emotional responses to enable learning to be oriented according to a preferred
method. The learning style defines the basic learning preference on which the assessment is made. A prototype,
the Learning Style Inventory System (LSIS), was developed to discover an individual's learning style and recommend
how to manipulate learning skills. The analysis and comparisons showed that personal learning styles play an
important role in a learning process.
Keywords: Learning Style Inventory System, Learning Style, Learning Style Model.
INTRODUCTION
Learning a programming subject seems to be difficult.
Many students claim to dislike the programming
subject, and this dislike may have led to their inability
to do programming. The main reason could be that
students do not understand or do not know how to
manipulate their learning style. Knowing what we have
(strengths and weaknesses) and how to improve
learning is crucial.
There is a need to have a system with capabilities
of discovering one’s own learning style, and how to
improve it in order to assist the students, especially
those facing a transition period from secondary
school to university environment. Once students are
actively engaged in their own learning process, they
will begin to feel empowered and indirectly, develop
their personal sense of achievement and self-direction
levels. Thus, it may improve the student’s performance
in their future undertakings.
The key to getting and keeping students actively involved
in learning lies in understanding learning style
preferences, which can positively or negatively
influence a student’s performance (Hartman, 1995).
Learning style enables learning to be oriented
according to the preferred method. Besides, a study
showed that adjusting teaching materials to meet
the needs of a variety of learning styles benefits the
students (Kramer-Koehler, Tooney & Beke, 1995).
Therefore, a person’s learning style preferences have
an impact upon his/her performance in learning.
This paper was presented at the Seminar IT Di Malaysia 2006 (SITMA06),
19 - 20 August 2006
In this paper, the authors analyse the correlation between
learning style and student performance in learning a
programming subject. To further investigate the
correlation, a system named Learning Style Inventory
System (LSIS) was developed. This system was built
based on Kolb's learning style model and the
strategies-to-shift-learning model by Richard M. Felder
and Barbara A. Solomon. The strategies-to-shift-learning
model is used to recommend to students how to
improve their learning style. This recommendation
feature consisted of strategies for moving between the
two continuum approaches in developing new learning
style preferences, focusing on learning a programming
subject. This study also observed the implications of
learning style towards learning a programming subject.
LEARNING STYLE MODEL
Nowadays, there are quite a number of learning style
preferences being introduced. Some of them are
Kolb’s Model (David A. Kolb, 2001), Multiple Intelligent
(David Lazear, 1991), Vision Auditory and Kinesthetic
model (Dawna Markova, 1996). Each of the models
has its own strengths and weaknesses.
Kolb’s learning styles model is based on two lines of
axis; (1) human approach to a task that prefers to do
or watch; and (2) human emotional response which
prefers to think or feel, as shown in Figure 1. The
east-west axis is the Processing Continuum, while the
north-south axis is the Perception Continuum.
The theory consists of four preferences, which are the
possible different learning methods:
• Doing – active experimentation
• Watching – reflective observation
• Feeling – concrete experience
• Thinking – abstract conceptualization

Figure 1. Learning style type grid: the vertical axis runs from Concrete experience to Abstract conceptualization, and the horizontal axis from Active experimentation to Reflective observation, giving four quadrants – (1) Innovative, (2) Analytic, (3) Common Sense and (4) Dynamic learner
According to David A. Kolb (2001), the learning cycle
can begin at any one of the four points, and the
approach should be a continuous spiral. However, it
was suggested that the learning process often begins
when a person performs a particular action, and later
discovers the effect of the action in that situation.
Lynda Thomas et al. (2002) agreed with Kolb, indicating
that the process should be in sequential order, where
the second step follows the first. In the second step, a
person should understand these effects in that
particular instance, so that if the same action were
undertaken in the same circumstances, it would be
possible to anticipate what would follow from the
action. In this pattern, the third step would be to
understand the general principle under which the
particular instance occurred.
According to James Anderson (2001), modifying a
person's learning style is necessary, particularly in
situations which force them to enhance their capability
to move along the two continuums discussed in Kolb's
model. He stated that a person is himself/herself
responsible for changing his/her learning style to
meet the requirements of a particular situation.
Due to lack of knowledge, a student usually tends
to stay in the current learning style preference, and
is reluctant to move between the two continuums.
The cause might be that they have no idea what their
learning style is, or how to improve it. A study by
Richard M. Felder and Barbara A. Solomon (2000)
illustrated strategies to shift a student's learning style,
such as active experimentation, reflective observation,
concrete experience and abstract conceptualisation,
as shown in Table 1.
THE LEARNING STYLE INVENTORY SYSTEM (LSIS)
The LSIS, a standalone system, consisted of several
forms to indicate different levels of process flow. The
aim of this model was to discover how students learn
without the intention of evaluating their learning
ability. The objectives of the system were to help
students discover their learning style type and give
recommendations for each type of learning, to
maximise their learning skills in studying a
programming subject in particular. The overall
system flow is shown in Figure 2.

Table 1. Strategies to shift a learning style

Learning style: Active learner
Problem: Stuck with little or no class discussion or problem-solving activities
Suggestion: Group discussion or Q&A sessions

Learning style: Reflective learner
Problem: Stuck with little or no class time for thinking about new information
Suggestion: Write short summaries of readings or class notes in own words

Learning style: Conceptual learner
Problem: Memorisation and substitution of formulae
Suggestion: Find links/connections between the facts and the interpretation or theories

Learning style: Experience learner
Problem: Material is abstract and theoretical
Suggestion: Find specific examples of concepts and procedures, and find out how the concepts apply in practice by asking the instructor, referring to the course text or other references, or brainstorming with friends or classmates
The most crucial phase is the completion of the learning
style test. In this phase, the system accumulates the
test results to determine a student's learning style,
and hence gives a recommendation on how to develop
his/her learning style towards learning a programming
subject. The system is divided into three categories,
and a minimal sketch of the scoring step is given below.

Category 1 – Develop the Learning Style (LS) inventory to
obtain students' learning styles using multiple-choice
tests, implemented based on Kolb's scale. The learning
style inventory describes the way a person learns and
deals with ideas and day-to-day situations in life.

Category 2 – Calculate and analyse the test answers
and produce the student's learning style, with a
description.

Category 3 – Propose an appropriate recommendation
for the student's original learning type to suit the
requirements of learning a programming subject.
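The sketch below illustrates the scoring behind Categories 2 and 3; the axis-score convention and the quadrant mapping are assumptions inferred from Figure 1, not the actual LSIS implementation.

    #include <iostream>
    #include <string>

    // Assumed convention: the multiple-choice answers accumulate two axis scores.
    // processing > 0 favours "doing", processing <= 0 favours "watching";
    // perception > 0 favours "feeling", perception <= 0 favours "thinking".
    std::string learnerType(int processing, int perception)
    {
        if (processing > 0 && perception > 0)  return "Dynamic learner";     // doing + feeling
        if (processing <= 0 && perception > 0) return "Innovative learner";  // watching + feeling
        if (processing <= 0)                   return "Analytic learner";    // watching + thinking
        return "Common sense learner";                                       // doing + thinking
    }

    int main()
    {
        // e.g. 12 "doing" answers against 8 "watching" answers gives processing = +4
        std::cout << learnerType(+4, -2) << std::endl;   // prints "Common sense learner"
        return 0;
    }

The recommendation step (Category 3) would then map the returned type to the corresponding strategy in Table 1.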
Figure 2. System flow
Figure 4. Awareness of learning style preferences (26% aware, 74% unaware)

Figure 5. Knowledge of own learning style type (13% aware, 41% claimed aware but eventually unaware, 46% no)
Figure 3. Screen shot for the LSIS result form interface
The sample screen shot of the result page is shown
in Figure 3. The page provided students with their
personal learning style as well as its description. Also,
it displayed some recommendations and learning methods.
THE EXPERIMENT
The experiment was conducted during the July 2004
semester at Universiti Teknologi PETRONAS (UTP),
Malaysia. The total number of participants was 106
students. These were students from the foundation and
first-year undergraduate Information and Communication
Technology (ICT) and Business Information System
(BIS) programmes. At that time, these students were
taking an introductory programming course, and the
course was in a series of programming courses offered
to ICT and BIS students.
Two sets of questionnaires were given to the students.
Both sets were designed to obtain information about
the students’ academic performance, and to know
their views on learning styles and the programming
subject. The first set was distributed at the beginning
of the semester together with LSIS, and asked the
students about their knowledge of learning styles and
their experience in learning programming. The
LSIS was used to identify the students’ learning style
and to suggest a better way to learn the programming
subject. The second set was distributed during the
last class session of the course to analyse the students’
performance in learning programming, and how the
personal learning style played an important role in
their learning process.
Below were the steps taken in analysing the data from
both sets of questionnaires:
• The questions were divided into six categories
in order to capture the students’ knowledge of
learning style, knowledge of own learning style
type, tabulation of learning style type, liking toward
the programming subject, level of difficulty faced
and learning style implications toward learning
programming.
• The percentages of answers for each category were counted.
• The students' learning style from the first set of questionnaires was compared to the second set to determine the positive result of LSIS and its correlation with the achievement in the course.
RESULTS AND DISCUSSIONS

Figure 6. Learning style type distribution (Analytic: 35, Common Sense: 39, Dynamic: 18, Innovative: 14)
This section is divided into two parts. The first part
illustrates the participants' awareness of their learning
style, using a questionnaire and LSIS, while the second
part discovers the implications of their learning style
type towards learning programming.
The first category of questionnaire was to capture the
students’ awareness of the existence of their Learning
Style Preferences. Figure 4 shows that only 26% of
the respondents had knowledge of a Learning Style
Model.
The second category of questions, shown in Figure
5, was designed to know the awareness of students
of their own learning style type. The results showed
that 49 respondents had no idea about their learning
style, while 57 respondents said that they did know
about it. However, after using LSIS, only 14 respondents
were found to have the learning style they had claimed.
The tabulation of learning style types, as shown in
Figure 6, illustrates the number of persons of each
learning style type. It consists of four categories, which
are dynamic, innovative, analytic and common sense
learners, for both samples. The respondents used LSIS
to discover their learning style, and the total number
collected for each type of learner represents its share
of the students as a whole. The analysis may help
lecturers apply the most suitable way of delivering
lectures to suit the students' learning styles. Moreover,
the students could follow the given recommendations
to improve their way of learning the programming
subject.
Table 2 indicates the students' attitude towards the
programming subject. The table clearly shows that 83
respondents (79%) liked the programming subject,
compared to the 23 respondents who disliked it. This
is positive feedback which eases the lecturer's task, as
most students showed a positive attitude towards
programming and would be willing to change the way
they learn as recommended by LSIS.

The level of difficulty faced by students while learning
programming is shown in Table 3. The statistics show
that up to 56% (equivalent to 59 respondents) think
that learning a programming subject is challenging
and that they have to study hard to excel in it.
The results shown in Table 2 and Table 3 were
for students who had experience in learning
programming before they undertook the current
subject. This valuable information could have
motivated the students to accept the recommendations
proposed by LSIS; however, the students were free to
choose whether or not to follow them.
Table 2. Statistics on liking the programming subject (some cells were not recoverable from the source)

Likeness     A    A–   B+   B    C+   C    D+   D    F    Taking   TOTAL
Yes          14   7    8    7    3    3                   42       83
No           2    1    2    2         1                   9        23
Table 3. Statistics on the level of difficulty in learning the programming subject (some cells were not recoverable from the source)

Level         A    A–   B+   B    C+   C    D+   D    F    Taking   TOTAL
Very easy     2                                                     6
Easy          1    2                                                3
Medium        6    4    1    2    1    2                  17       33
Challenging   5    2    7    7    4    4    3         1   26       59
Hard          2                                           2        4
Table 4. Implications of learning style type towards learning the programming subject

Learning Style Type   A    A–   B+   B    C+   C    D+   D    F    TOTAL
Dynamic               3    2    8    4                             17
Innovative            8    6    2    1    3                        20
Analytic              6    3    5    3    1                        18
Common Sense          1    4    4    7    1    2                   19
The learning style implications towards learning a
programming subject are shown in Table 4. The
purpose of this analysis was to observe students'
actions on the recommendations, in order to improve
their way of learning the programming subject
throughout the semester. From the analysis, 74
respondents followed the recommendations and took
them as part of their way of learning. Later, the results
were compared with the students' coursework marks
to analyse their achievement.

Overall, the results of the course were very interesting.
The experiment involved the students' initiative to
change and follow the recommendations in learning a
programming subject. Besides, the environment was
designed so that students had control of their learning
experience. This study discovered that learning style
has some impact on learning.
CONCLUSION
The LSIS was successfully developed with the aim
of increasing the awareness of Learning Style
Preferences among students, and guiding them to
excel in a programming language subject. LSIS is a
computer system to assist students in identifying
and analysing learning styles based on Kolb’s model
and the strategies shift learning model by Richard
M. Felder and Barbara A. Solomon, and then guide
students to improve their learning style perspective.
Preliminary evaluation indicated that students lacked
knowledge of learning style preferences and of their
own learning style type. The implementation of the
system benefited students in coping with their daily
learning routine, as well as increasing their confidence
level in learning a programming subject.
For future work, researchers may want to integrate
other types of learning styles and features for advising
students. This could include a systematic procedure
for shifting to the most suitable learning style for
learning a programming subject. Users could then
discover their learning styles from the various models
and improve their learning style for learning a
programming subject.
ACKNOWLEDGEMENT
The authors would like to thank Aritha Shadila Sharuddin for
conducting the survey for data gathering.
REFERENCES
[1] David A. Kolb (2001). Experiential Learning Theory Bibliography 1971-2001. http://trgmcber.haygroup.com/Products/learning/bibliography.htm

[2] David Lazear (1991). Seven Ways of Knowing: Teaching for Multiple Intelligences. Skylight Publishing.

[3] Dawna Markova (1996). VAK Learning Style. http://www.weac.org/kids/june96/styles.htm

[4] James Anderson (2001). Tailoring Assessment to Student Learning Styles. http://aahebulletin.com/public/archive/styles.asp

[5] Kramer-Koehler, Pamela, Nancy M. Tooney, and Devendra P. Beke (1995). The use of learning style innovations to improve retention. http://fairway.ecn.purdue.edu/asee/fie95/4a2/4a22/4a22.htm

[6] Lynda Thomas et al. (2002). Learning Style and Performance in the Introductory Programming Sequence. ACM 1-58113-473-8/02/0002.

[7] Richard M. Felder & Barbara A. Solomon (2002). A Comprehensive Orientation and Study Skills Course Designed for Tennessee Families First Adult Education Classes.

[8] Virginia F. Hartman (1995). Teaching and learning style preferences: Transitions through technology. http://www.so.cc.va.us/vcca/hart1.htm
Saipunidzam Mahamad was born in Perak, MALAYSIA on
October 23, 1976. He was awarded a Diploma in Computer
Science in 1997 and a Bachelor's degree in Computer Science
in 1999 from Universiti Teknologi Malaysia, Johor. Later, he
pursued his Master's degree in Computer and Information
Science at the University of South Australia and was awarded
the degree in 2001. He joined Universiti Teknologi PETRONAS
(UTP) as a trainee lecturer in July 2000 and has been a lecturer
since January 2002. Apart from teaching, he is involved in the
Software Engineering and E-Commerce research groups and
does various administrative work. Recently, he was appointed
as UTP's e-learning manager to manage and monitor the
implementation of e-learning at UTP. His research interests
include Computer Systems, E-Learning, Software Engineering
and Intelligent Systems.
Performance Measurement
– A Balanced Score Card Approach
P. D. D. Dominic*, M. Punniyamoorthy1, Savita K. S. and Noreen I. A.
*Universiti Teknologi PETRONAS, 31750 Tronoh, Perak
1National Institute of Technology, Tiruchirapalli, INDIA
*pdddominic@petronas.com.my
ABSTRACT
This paper suggests a framework for performance measurement through a balanced scorecard, providing
an objective indicator for evaluating the achievement of the strategic goals of the corporation. The paper
uses the concepts of a balanced score card and adopts an analytical hierarchical process model to measure
organisational performance. The balanced score card is a widely used management framework for the
measurement of organisational performance. Preference theory is used to calculate the relative weightage for
each factor, using pairwise comparison. This framework may be used to calculate the effectiveness score
for the balanced score card as a final value of performance for any organisation. The variations between targeted
performance and actual performance were analysed.
Keywords: Balanced score card, corporate strategy, performance measurement, financial factors and non-financial
factors.
INTRODUCTION
Over the recent past, organisations have tried various
methods to create an organisation that is healthy
and sound. By requiring strategic planning and the
linking of programme activities' performance goals
to an organisation's budget, decision-making and
confidence in organisational performance are
expected to improve. A business organisation's vision
is one of its most important intangible assets. Vision
is planned by strategy and executed by values that
drive day-to-day decision-making (Sullivan, 2000).
asset drives the decision to invest further, continue to
hold onto it, or dispose of it. An intangible economic
value is the measure of the utility it brings to the
business organisation. Strategy is used to develop
and sustain current and competitive advantages
for a business, and to build competitive advantages
for the future. Competitive advantage strategy
depends on the command of and access to effective
utilisation of its resources and knowledge. Strategy
is the identification of the desired future state of
the business, the specific objectives to be obtained,
and the strategic moves necessary to realise that
future. Strategy includes all major strategic areas,
such as markets, suppliers, human resources,
competitive advantages, positioning, critical success
factors, and value chains (Alter 2002). In today’s fast
changing business environment, the only way to gain
competitive advantage is by managing intellectual
capital, commonly known as knowledge management
This paper was presented at the Knowledge Management International Conference 2008 (KMICE 2008), Langkawi,
10 -12 June 2008
(KM). Nowaday’s knowledge is increasingly becoming
the greatest asset of organisations (Ravi Arora, 2002).
The basic objective of a knowledge management
programme should be well understood and its
potential contribution to the business value should
be established before the process begins. One
of the objectives of a KM programme is to avoid reinvention of the wheel in organisations and reduce
redundancy. Secondly, a KM programme is to help
the organisation in continuously innovating new
knowledge that can then be exploited for creating
value. Thirdly, a KM programme is to continuously
increase the competence and skill level of the people
working in the organisation (Ravi Arora, 2002). KM
being a long term strategy, a Balanced Score Card
(BSC) helps the organisation to align its management
processes and focuses the entire organisation to
implement it. The BSC is a management framework
that measures the economic and operating
performance of an organisation. Without a proper
performance measuring system, most organisations
are not able to achieve the envisioned KM targets.
BSC provides a framework for managing the
implementation of KM, while also allowing dynamic
changes in the knowledge strategy in view of changes
in the organisational strategy, competitiveness and
innovation. Inappropriate performance measurement
is a barrier to organisational development since
measurement provides the link between strategies
and actions (Dixon et al., 1990). Performance
measurement, as a process of assessing progress
towards achieving predetermined goals, includes
information on the efficiency with which resources are
transformed into goods and services, the quality of
those outputs and outcomes, and the effectiveness
of organisational operations in terms of their specific
contributions to organisational objectives (Dilanthi
Amaratunga, 2001). This paper identifies the balanced
scorecard developed by Kaplan and Norton (1992,
1996a) as a leader in performance measurement and
performance management in an attempt to identify
an assessment methodology for organisational
processes.
BALANCED SCORE CARD
Robert S. Kaplan and David P. Norton (1992) devised
the Balanced Scorecard in its present form. They
framed the balanced scorecard as a set of measures
that allowed for a holistic, integrated view of the
business process so as to measure the organisation’s
performance. The scorecard was originally created
to supplement “traditional financial measures with
criteria that measured performance from three
additional perspectives – those of customers, internal
business processes, and learning and growth”. The
BSC retained traditional financial measures. But
these financial measures tell the story of past events,
an adequate story for those companies for which
investments in long-term capabilities and customer
relationships are not critical for success. These financial
measures are inadequate, however, for guiding and
evaluating the performance of the modern companies
as they are forced by intense competition provided
in the environment, to create future value through
investment in customers, suppliers, employees,
processes, technology, and innovation. Non-financial
measures, such as customer retention, employee
turnover, and number of new products developed,
belonged to the scorecard only to the extent that they
reflected activities an organisation performed in order
to execute its strategy. Thus, these measures served
as predictors of future financial performance.
In due course of time, the whole concept of
the balanced scorecard evolved into a strategic
management system forming a bridge between
the long-term and short-term strategies of an
organisation. Many companies readily adopted the
BSC because it provided a single document in which
the linkages of activities – more particularly by giving
adequate importance to both tangible and non-tangible factors – were more vividly brought out than
in any other process adopted. Clearly, opportunities
for creating value shifted from managing tangible
assets to managing knowledge-based strategies
that deployed an organisation’s intangible assets:
customer relationships, innovative products and
services, high quality and responsive operative
processes, information technology and databases
and employee capabilities, skills and motivation.
The BSC has grown out of itself from being just a
strategic initiative to its present form of a Performance
Management System. The balanced scorecard, as it is
today, is a Performance Management System that can
be used by organisations of any size to align the vision
and mission with all the functional requirements of
day-to-day work. It can also enable them to manage
and evaluate business strategy, monitor operational
efficiency, provide improvements, build organisation
capacity, and communicate progress to all employees.
Hence, it is being adopted by many companies across
the world today cutting across the nature of the
industry, types of business, geographical and other
barriers.
Kaplan & Norton (1992) described the Balanced
Scorecard as a process which “moves beyond a
performance measurement system to become the
organising framework for a strategic management
system”. It is important that the scorecard be seen not
only as a record of results achieved; it is equally
important that it be used to indicate the expected
results. The scorecard in this way will serve as a way to
communicate the business plan and thus the mission
of the organisation. It further helps to focus on critical
issues relating to the balance between the short and
long run, and on the appropriate strategic direction
for everyone’s efforts (Olve et al., 1999). The BSC
allows managers to look at the business from the four
perspectives and provides the answers to four basic
questions, as illustrated in Figure 1.

Figure 1. Balanced Score Card (Source: Kaplan and Norton, 1996a). Strategy sits at the centre of four perspectives: Financial ("How do we look to our shareholders?"), Customer ("How do our customers see us?"), Internal business processes ("What must we excel at?") and Learning and Growth ("How can we continue to improve?")
Customer Perspective
This perspective captures the ability of the organisation
to provide quality goods and services, the effectiveness
of their delivery, and overall customer service and
satisfaction. Many organisations today have a mission
focused on the customer and how an organisation
is performing from its customer’s perspective has
become a priority for top management. The BSC
demands that managers translate their general
mission statement on customer service into specific
measures that reflect the factors that really matter
to their customer. The set of metrics chosen under
this perspective for this study were: enhance market
share by 5%, 10% increase in export sales, obtain
competitive pricing, and increase after sales service
outlets by 10%.
Internal Business Processes Perspective
The business processes perspective is primarily an
analysis of the organisation’s internal processes. Internal
business processes are the mechanisms through
which performance expectations are achieved. This
perspective focuses on the internal business process
results that lead to financial success and satisfied
customers expectations. Therefore, managers need
to focus on those critical internal business operations
that enable them to satisfy customer needs. The set
of metrics chosen under this perspective for this study
were: improving productivity standards, eliminating
defects in manufacturing, providing adequate technical
knowledge and skills for all levels of employees, and
integrating customer feedback into the operation.
Learning and Growth Perspective
The targets for success, which keep changing under
intense competition, require that organisations make
continual improvements to their existing products
and processes and have the ability to introduce
entirely new processes with expansion capabilities.
This perspective looks at such issues, which includes
the ability of employees, the quality of information
systems, and the effects of organisational alignment in
supporting accomplishment of organisational goals.
The set of metrics chosen in this study under this
perspective were: involve the employees in corporate
governance, become a customer driven culture and
inculcate leadership capabilities at all levels.
According to the Balanced Scorecard Collaborative,
there are four barriers to strategic implementation:
1. Vision Barrier – No one in the organisation
understands the strategies of the organisation.
2. People Barrier – Most people have objectives that
are not linked to the strategy of the organisation.
3. Resource Barrier – Time, energy, and money are
not allocated to those things that are critical to the
organisation. For example, budgets are not linked
to strategy, resulting in wasted resources.
4. Management Barrier – Management spends too
little time on strategy and too much time on
short-term tactical decision-making.
All these observations call for not only developing
proficiency in formulating an appropriate strategy
to make the organisational goals relevant to the
changing environment but also call for an effective
implementation of the strategy.
Financial Perspective
Financial performance measures indicate whether
the organisation’s strategy, implementation, and
execution are contributing to bottom line
improvement. They show the results of the strategic
choices made in the other perspectives. If an
organisation makes fundamental positive changes in
its operations, the financial numbers will take care of
themselves. The set
of metrics chosen in this study under this perspective
were: 12% return on equity to be achieved, 20%
revenue growth, 2% reduction in cost of capital and
7% reduction in production cost.
METHODOLOGY
This paper used the Balanced Score Card approach
proposed by Robert Kaplan and David Norton (1992)
and the model adopted by Brown and Gibson (1972),
along with the extension to the model provided by
Raghavan and Punniyamoorthy (2003), to arrive at a
single measure called the Effectiveness Score (ES). This
score was used to compare the differences between
the targeted performance and the actual performance
of any organisation.
EFFECTIVENESS SCORE FOR THE BALANCED
SCORECARD (ESBSC)
The Balanced Scorecard in its present form certainly
eliminates uncertainty to a great extent as compared
to the traditional financial-factors-based performance
measurement systems. However, when this study set
out to measure the actual performance against the
targeted performance, most of the criteria were met.
For some factors, actual performance was greater
than the targeted performance, and for others, it was
less. Therefore, for the decision makers there might
be some confusion regarding the direction in which
the organisation is going; that is, the decision maker
might not be clear whether the firm is improving
or deteriorating. This is because the firm might have
achieved the desired performance in the not-so-vital
parameters but failed to show the required
performance in many vital parameters. Hence, it
becomes imperative to provide weightage for the
factors considered, so as to define the importance to
be given to the various parameters. This will provide
a clear direction to the management to prioritise the
fulfilment of targets set for those measures which
were ascribed a larger weightage.
The organisation can reasonably feel satisfied if it
were able to achieve the targets set for it, as these
encompass all the performance measures. Basically,
the Balanced Scorecard was constructed by taking
into account all the strategic issues. The effectiveness
score which this study suggests is derived for the
balanced score card. The single benchmark measure
created, "The Effectiveness Score for the Balanced
Scorecard", means that the firm will reasonably be in
a position to evaluate the achievement of its strategic
targets. In short, it is a single benchmarking measure
which evaluates under- or over-achievement of the
firm in respect of fulfilling the goals set by the
organisation. It can also provide variations of the
actual measure from the targeted measure under each
of the factors considered. Thus, the framework
suggested in this paper will provide single-benchmark
information for the decision makers to take
appropriate action and concentrate on such measures
as would result in the achievement of the strategic
needs of the company.
DEVELOPMENT OF EFFECTIVENESS SCORE (ES)

Let us now see the development of the Effectiveness
Score Model for the Balanced Scorecard. As discussed
earlier, the Balanced Scorecard divides all the activities
under four perspectives. The perspectives, the
measures under each perspective, and the target and
actual values of each measure were analysed in a
framework, as shown in Figure 2.

Figure 2. Framework for calculating the Balanced Score for the Balanced Score Card (Level I, goal: Effectiveness Score for Balanced Scorecard; Level II, criteria: Financial (a1), Customer (a2), Internal Business (a3) and Learning and Growth (a4) Perspectives; Level III, sub-criteria: b1 to b8; Level IV, alternatives: c1 to c16 as target/actual (TP/AP) pairs)

The Target Performance (TP) and Actual Performance
(AP) were calculated using the following method:

Balanced score for Balanced Scorecard (Target Performance)
= a1(b1c1 + b2c3 + b3c5) + a2(b4c7 + b5c9) + a3(b6c11 + b7c13) + a4(b8c15)   (1)

Balanced score for Balanced Scorecard (Actual Performance)
= a1(b1c2 + b2c4 + b3c6) + a2(b4c8 + b5c10) + a3(b6c12 + b7c14) + a4(b8c16)   (2)

There are four levels in the Effectiveness Score for
Balanced Scorecard model.

Level I: The first level is the goal of the model.

Level II: This level consists of the criteria for evaluating
organisational performance under the following
categories:
• Financial Perspective (a1)
• Customer Perspective (a2)
• Internal Business Process Perspective (a3)
• Learning and Growth Perspective (a4)
Level III: Each perspective may have sub-criteria for
measuring organisational performance. To measure
each criterion, sub-criteria measures were identified;
these are referred to as bi.

Level IV: For each measure, targets were set. These
target performance values were then compared with
the actual performance achieved. In a nutshell, the
score was arrived at based on the relative weightages
of the items incorporated in the model, following the
classification suggested in the Balanced Scorecard
approach.
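A small numeric sketch of equations (1) and (2) follows; every weight and measure value here is invented purely for illustration and is not taken from the study.

    #include <cstdio>

    int main()
    {
        // Level II criterion weights a1..a4 (assumed values from a pairwise comparison)
        double a1 = 0.4, a2 = 0.3, a3 = 0.2, a4 = 0.1;
        // Level III sub-criterion weights b1..b8 (assumed; each perspective's weights sum to 1)
        double b1 = 0.5, b2 = 0.3, b3 = 0.2, b4 = 0.6, b5 = 0.4, b6 = 0.7, b7 = 0.3, b8 = 1.0;
        // Level IV measures c1..c16: odd indices are targets (TP), even are actuals (AP); all assumed
        double c1 = 0.30, c2 = 0.28, c3 = 0.35, c4 = 0.36, c5 = 0.35, c6 = 0.36,
               c7 = 0.50, c8 = 0.45, c9 = 0.50, c10 = 0.55,
               c11 = 0.55, c12 = 0.50, c13 = 0.45, c14 = 0.50,
               c15 = 1.00, c16 = 0.90;

        // Equation (1): balanced score for the targeted performance
        double tp = a1 * (b1*c1 + b2*c3 + b3*c5) + a2 * (b4*c7 + b5*c9)
                  + a3 * (b6*c11 + b7*c13) + a4 * (b8*c15);
        // Equation (2): balanced score for the actual performance
        double ap = a1 * (b1*c2 + b2*c4 + b3*c6) + a2 * (b4*c8 + b5*c10)
                  + a3 * (b6*c12 + b7*c14) + a4 * (b8*c16);

        std::printf("TP = %.4f, AP = %.4f, variation = %+.4f\n", tp, ap, ap - tp);
        return 0;
    }

The sign of the variation AP - TP then indicates over- or under-achievement against the targets.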
The factors at Level II and Level III were evaluated using
preference theory. The relative weightage for each
factor was arrived at by pairwise comparison. These
factors were compared pairwise, and 0 or 1 was
assigned based on the importance of one perspective
over another. At each level, the factors' relative
weightages were established by pairwise comparison:
if the first factor were more important than the second,
1 was assigned to the first and 0 to the second; if the
first factor were less important, 0 was assigned to the
first and 1 to the second; and if both were valued
equally, 1 was assigned to both. When the values were
assigned, it was ensured that the comparison decisions
were transitive, i.e., if factor 1 were more important
than factor 2, and factor 2 more important than factor
3, then factor 1 was more important than factor 3.
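This weighting step can be sketched as follows; the 0/1 preference matrix is an assumed example, chosen to be transitive.

    #include <cstdio>

    int main()
    {
        const int N = 4;  // e.g. the four perspectives a1..a4
        // pref[i][j] = 1 if factor i is judged at least as important as factor j
        // (assumed example; the entries are chosen so the decisions are transitive).
        int pref[N][N] = {
            {0, 1, 1, 1},   // factor 1 preferred over factors 2, 3 and 4
            {0, 0, 1, 1},   // factor 2 preferred over factors 3 and 4
            {0, 0, 0, 1},   // factor 3 preferred over factor 4
            {0, 0, 0, 0}    // factor 4 never preferred
        };

        int score[N] = {0, 0, 0, 0};
        int total = 0;
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j) {
                score[i] += pref[i][j];   // count the 1s each factor receives
                total    += pref[i][j];
            }

        // Relative weightage of each factor = its score divided by the total score
        for (int i = 0; i < N; ++i)
            std::printf("a%d weight = %.3f\n", i + 1, (double)score[i] / total);
        return 0;
    }

Note that the last factor here ends up with a weight of 0, which is precisely the limitation of preference theory discussed later in this paper.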
The factors at Level IV were grouped into financial and
non-financial factors to measure the effectiveness of
the organisation's activity. The financial factors were
cost and benefit. Non-financial factors were classified
into factors related to time dimensions and other
factors. The above-said factors were then brought
under two categories: those to be maximised and
those to be minimised, as shown in Figure 3.

Figure 3. Framework for calculating Level IV – Alternatives. Tangible factors to be maximised: financial factors with monetary dimensions (labour saving, material saving, inventory cost saving) and non-financial factors with time dimensions (utilisation time) or others (productivity). Factors to be minimised: financial factors with monetary dimensions (labour cost, material cost, overheads) and non-financial factors with time dimensions (cycle time, set-up time) or others (loss)
A general expression was then framed, considering
the entire set of factors. The expression was framed
in such a manner that the factors were converted
into consistent, dimensionless indices, with the sum of
each index equal to 1. This was used to evaluate
the factors, to assist in arriving at the relative
weightages at the lowest level. This is the framework
developed by Ragavan and Punniyamoorthy (2003).
ESI = BMI (1/ΣBM) + [CMI Σ(1/CM)]^-1 + BTI (1/ΣBT) + [TMI Σ(1/TM)]^-1 + NFI (1/ΣNF) + [NFMI Σ(1/NFM)]^-1   (3)

where
ESI = Effectiveness score for alternative I
BMI = Benefit in money for alternative I
BTI = Benefit in time for alternative I
CMI = Cost to be minimised for alternative I
TMI = Time to be minimised for alternative I
NFI = Non-financial factors for alternative I, to be maximised
NFMI = Non-financial factors for alternative I, to be minimised
The relative weightages for all the factors were arrived
at, and the Effectiveness Score for the Balanced
Scorecard was computed using equations (1) and (2)
for the sample framework given in Figures 2 and 3. An
assessment of how the company had fared was made
by comparing the targeted performance against the
actual performance.
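For equation (3), a minimal sketch of how the effectiveness scores of two hypothetical alternatives could be computed under these normalisations (all figures invented):

    #include <cstdio>

    int main()
    {
        const int N = 2;                   // two alternatives compared (assumed)
        double BM[N]  = {120.0, 80.0};     // benefit in money, to be maximised
        double CM[N]  = {60.0, 90.0};      // cost in money, to be minimised
        double BT[N]  = {30.0, 20.0};      // benefit in time, to be maximised
        double TM[N]  = {10.0, 15.0};      // time to be minimised
        double NF[N]  = {0.8, 0.6};        // non-financial, to be maximised
        double NFM[N] = {0.3, 0.5};        // non-financial, to be minimised

        double sBM = 0, sBT = 0, sNF = 0, sInvCM = 0, sInvTM = 0, sInvNFM = 0;
        for (int i = 0; i < N; ++i) {
            sBM += BM[i];  sBT += BT[i];  sNF += NF[i];
            sInvCM += 1.0 / CM[i];  sInvTM += 1.0 / TM[i];  sInvNFM += 1.0 / NFM[i];
        }

        for (int i = 0; i < N; ++i) {
            // Maximised factors: value / sum; minimised factors: 1 / (value * sum of reciprocals).
            double es = BM[i] / sBM + 1.0 / (CM[i] * sInvCM)
                      + BT[i] / sBT + 1.0 / (TM[i] * sInvTM)
                      + NF[i] / sNF + 1.0 / (NFM[i] * sInvNFM);
            std::printf("ES%d = %.4f\n", i + 1, es);
        }
        return 0;
    }

Each normalised index sums to 1 across the alternatives, so the factors become consistent, dimensionless quantities before they are combined.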
LIMITATION OF THE RESEARCH
The Level II and Level III factors were evaluated using
preference theory, which has certain limitations. In
comparing the degree of importance of one factor
over another, assigning 1 to one factor and 0 to the
other means that zero importance is attached to the
latter. There is a possibility that a factor is uniformly
0 in all the pairwise comparisons, which would result
in that factor getting zero relative importance. In
other words, a factor that does play a role in the
decision may end up with no weight at all, which is
not desirable.
FUTURE PROSPECT OF THE RESEARCH
To remove the above-said limitation, future research
may be carried out to evaluate the criteria (ai's) and
the sub-criteria (bi's) using the Analytic Hierarchy
Process (AHP). Under AHP, pairwise comparisons can
still be made, but different values can be assigned
based on the degree of importance, ranging from 1 to
9, and reciprocal values can be assigned based on
the importance of one factor over the other. This
may provide further refinement towards adequate
weightages for the relevant criteria and sub-criteria.
CONCLUSION
Knowledge management, as a long-term strategy,
could use the BSC to help the company align its
management processes and focus the entire
organisation on its implementation.
performance measuring system, most organisations
were not able to achieve the envisioned KM targets.
BSC provided a framework for managing the
implementation of KM while also allowing dynamic
changes in the knowledge strategy in view of changes
in the organisational strategy, competitiveness and
innovation. There were many attempts made to show
the efficacy of the usage of the balanced scorecard
for showing better performance. While retaining all
the advantages that were made available by using the
balanced score card approach in providing a frame
work for showing better performance, a process of
calculating a benchmark figure called “Effectiveness
score” was added for the analysis. This study identified
parameters whose actual performance varied from
the targeted performance and found their relative
proportion of adverse or favourable contributions
to the performance of the company by assigning
appropriate weights for such parameters whether
financial or non-financial. Therefore, this study was
in the position to objectively capture the reason for
variations in the performance from the targeted
levels in all the functional areas of the business with
the use of the concepts of balanced scorecard as well
as applying the extended information arising out of
arriving at the “Effectiveness score for the balanced
scorecard”. In conclusion, arriving at this score by
and large is considered a powerful approach in
formulating a business excellence model. This would
certainly help users of this approach to make an
objective evaluation while implementing the same in
their business environment.
REFERENCES
[1] Alter, S. (2002). Information systems: The foundation of e-business (4th ed.). Prentice-Hall, Upper Saddle River, NJ.

[2] Brown, O. P. A., & Gibson, D. F. (1972). A quantified model for facility site selection: application to the multi-product location problem. AIIE Transactions, 4, 1, 1-10.

[3] Dilanthi A., David B., & Marjan S. (2001). Process improvement through performance measurement: the balanced scorecard methodology. Work Study, 50, 5, 179-188.

[4] Dixon, J. R., Nanni, A. J., & Vollman, T. E. (1990). The new performance challenge: Measuring operations for world class competition. Business One Irwin, Homewood, IL.

[5] Kaplan, R. S., & Norton, D. P. (1992). The balanced scorecard: Measures that drive performance. Harvard Business Review, January, 71-79.

[6] Kaplan, R. S., & Norton, D. P. (1992). The balanced scorecard: Translating strategy into action. Harvard Business School Press, Boston, MA.

[7] Kaplan, R. S., & Norton, D. P. (1996a). The balanced scorecard. Harvard Business School Press, Boston, MA.

[8] Olve, N., Roy, J., & Wetter, M. (1999). Performance drivers: A practical guide to using the balanced scorecard. John Wiley & Sons, Chichester.

[9] Ragavan, P. V., & Punniyamoorthy, M. (2003). Strategic decision model for the justification of technology selection. International Journal of Advanced Manufacturing Technology, 21, 72-78.
[10] Ravi, A. (2002). Implementing KM – a balanced score card approach. Journal of Knowledge Management, 6, 3, 240-249.

[11] Sullivan, P. (2000). Value driven intellectual capital: How to convert intangible corporate assets into market value. John Wiley & Sons, New York, NY.

P. D. D. Dominic obtained his MSc degree in operations
research in 1985, an MBA from Regional Engineering College,
Tiruchirappalli, India in 1991, a Postgraduate Diploma in
Operations Research in 2000, and completed his PhD in 2004
in the area of job shop scheduling at Alagappa University,
Karaikudi, India. Since 1992 he has held the post of Lecturer
in the Department of Management Studies, National Institute
of Technology (formerly Regional Engineering College),
Tiruchirappalli 620 015, India. Presently, he is working as a
Senior Lecturer in the Department of Computer and
Information Sciences, Universiti Teknologi PETRONAS,
Malaysia. His fields of interest are Information Systems,
Operations Research, Scheduling, and Decision Support
Systems. He has published technical papers in international
and national journals and conferences.
A Conceptual Framework for Teaching Technical
Writing Using 3D Virtual Reality Technology
Shahrina Md Nordin*, Suziah Sulaiman, Dayang Rohaya Awang Rambli,
Wan Fatimah Wan Ahmad, Ahmad Kamil Mahmood
Universiti Teknologi PETRONAS, 31750 Tronoh, Perak Darul Ridzuan, Malaysia.
*shahrina_mnordin@petronas.com.my
Abstract
This paper presents a conceptual framework for teaching technical writing using 3D virtual reality technology
to simulate a contextual learning environment for technical learners. A 3D virtual environment of an offshore
platform is proposed to provide learners with an opportunity to acquire effective communication skills for their
target workplace community. The goal of this project is to propose a virtual environment with real-world dynamic
content to develop effective technical writing. The theories and approaches to teaching and learning underlying
the conceptual framework of the virtual reality (VR) environment will be discussed. The architecture and the
technical aspects of the offshore platform virtual environment will also be presented, including choices of
rendering techniques to integrate visual, auditory and haptic elements in the environment. The paper concludes
with a discussion on pedagogical implications.
Keywords: Virtual reality, offshore platform, technology, 3D
Introduction
One of the most challenging tasks for language
instructors is to provide practical learning
environments for the students. In recent years, virtual
reality (VR) and 3D virtual learning environments (3D
VLE) have been increasingly explored. VR, through
simulations, could help overcome the limitations
of language learning within the four walls of the
classroom. The potential of VR in education, however,
has been exploited only quite recently by educators
and institutions [1]. VR has been defined [2] as “an experience in
which a person is surrounded by a three-dimensional
computer-generated representation, and is able
to move around in the virtual world and see it from
different angles, to reach into it, grab it and reshape
it” [3].
3D VLE is a learning and teaching program that makes
use of a multi-user virtual environment or a single
user virtual environment to immerse students in
educational tasks [4]. The interactive and 3D immersive
features of a built virtual environment would provide
the learners a rich, interactive and contextual setting
to support experiential and active learning. This paper
therefore aims to present the conceptual framework
of a 3D VR to simulate environments of the offshore
oil platform for technical learners. Such simulation
and virtual environment of the offshore platform
will provide learners with an opportunity to acquire
relevant effective technical communication skills in
their target workplace community. The paper will
thus first discuss the underlying theories and recent
research work on virtual environment in education.
The conceptual framework for implementation of
This paper was presented at the Information Technology Symposium 2008 (ITSim08), Kuala Lumpur
25 - 29 August 2008
VR to second language technical writing learners will
also be presented. The paper will also present the
basic architecture and the technical aspects of the
offshore platform virtual environment. The discussion
includes choices of rendering techniques to integrate
visual, auditory and haptics in the environment. The
paper concludes with a discussion on pedagogical
implications.
Literature Review
Many attempts have been made to develop immersive
learning environments [5, 6, 7, 8]. [6] highlighted
several memorable yet still ‘living’ environments that
include applications for medical students studying
neuroanatomy developed as early as in the 1970s,
and a system for beginners learning a particular
language [9]. The goal for developing such virtual
worlds is to facilitate learning since the environment
enables self-paced exploration and discovery to take
place. A learner can engage in a lesson that is based
on learning by doing and is able to understand the
subject matter in context. The advantages of using 3D VLE for teaching and learning include a sense of empowerment, control and interactivity; a game-like experience and heightened levels of motivation; support for visual learners; self-awareness, interaction and real-time collaboration; and the ability to situate students in environments and contexts unavailable within the classroom [4].
From the educational perspective, virtual
environments are considered a visualisation tool
for learning abstract concepts [6]. Touch the Sky,
Touch the Universe [5] is a virtual environment that
represents a dynamic 3-D model of a solar system.
It aims towards providing students with a more
intuitive understanding of astronomy and contributes
to the development of essential visual literacy and
information-processing skills. Another example that
emphasises virtual environment as a visualisation
tool is NICE [7], an immersive participatory learning
environment for young learners aged 6 to 10. The
application enables the children to grow plants by
manipulating variables such as water and light in the
virtual world. This environment supports immediate
visual feedback to the learners and provides learning
experience visualising complex models of ecological
systems. Among other examples of similar focus
are the virtual worlds in ScienceSpace [8], and Water
on Tap [10]. The ScienceSpace project consists of
three virtual worlds: NewtonWorld, MaxwellWorld,
and PaulingWorld, all of which allow students to investigate concepts of physics. On the other hand, Water on Tap is a chemistry world where students learn about the concepts of molecules, electrons, and atoms. In general, most virtual worlds developed rely
mainly on the visual representation in order to learn
complex and abstract scientific concepts.
In developing a virtual world with real-world dynamic
contents, visual cues should be complemented with
other modalities such as sound and touch to create an
immersive environment. [8] applied both elements of sound and touch to complement the visual in the virtual worlds. They argued that multisensory cues helped students experience phenomena and focused their attention on important factors such as mass, velocity,
and energy. In all three virtual worlds developed,
students interact with the environments using a 3-Ball,
a three-dimensional haptic mouse. The vibration from the mouse provides a tactile cue, whereas an audible chime provides the auditory cue. During the
interactions, tactile and visual cues were utilised to
emphasise the potential energy concept, and auditory
and visual cues to make velocity more salient. Another
example of a virtual world that supports multimodal cues for interaction is Zengo Sayu [9], a virtual environment for Japanese language instruction.
This system allows the use of combined voice and
gesture recognition in an educational setting. It was
designed to teach Japanese prepositions to students
who have no prior knowledge of the Japanese
language. Students can hear digitised speech samples representing the Japanese names of many virtual objects and their relative spatial locations when touched by the user in the virtual environment. Even
though multimodal cues assist in creating a sense of
presence in a virtual environment, such integration
needs careful consideration, as some modalities, such as the touch sensation, are very context-dependent [10]; this is one reason the touch modality has scarcely been considered
for a virtual environment. However, in order to obtain
a dynamic virtual environment, there is a need to take
into account other modalities besides visual so that a
more meaningful interaction similar to that in the real
world could be supported.
Virtual reality environments have in fact generated
high interest amongst language professionals. It
is argued that immersion in the target language is essential to learning a language, and simulations through virtual reality offer such a learning experience. Thus,
a number of VR environments for language learning
have been constructed. As mentioned earlier, an
example of language learning environment is Zengo
Sayu [9], developed by Howard Rose. The VR Japanese
language learning environment allows students to
interact with objects which talk.
In the context of the language learning classroom, the most beneficial characteristic of 3D virtual environments is that they provide a first-person form of experiential learning [18]. Most lessons today, however, are based on textbooks, which present third-person knowledge. It was further argued [18] that the qualitative outcomes of third-person versus first-person learning are different: in the case of the former, the learning outcomes are shallow and retention rates are usually low. He further
argued that through virtual reality, learners learn
language through own experiences with autonomy
over their own learning.
In the quest for authentic learning materials to be
used in a language classroom, [11] examined three
virtual zoos on the net for language learners to
explore. It is argued that authentic learning materials may create a personal connection with the learners. Language learning is thus presented in a contextualised manner where the subject matter can be meaningful and relevant to the learners. [12] also argued that using authentic texts to teach language skills provides learners with real-life opportunities to use the target language.
In the case of technical writing learners, the use
of authentic language, in the context of technical
communication which is made highly relevant and
personal to the learners, would seem to be best
represented in the form of a simulation of their future workplace environment, that is, the oil platform.
With this in mind, the researchers seek to design a
conceptual framework for teaching technical writing
using 3D virtual reality technology at the offshore oil
platform.
The Conceptual Framework
A line of research indicates that language learning
should be facilitated by pedagogically sound
instruction [11]. It is further argued that learning
activities are supposed to be contextualised and
to some extent relevant to the learners [12]. A
technical writing course that aims to help learners
write the different kinds of technical writing is thus
contextualised and tailored to prepare them for
their future professional discourse community. The
underlying rationale in a technical writing course is
that the learners need to learn specific ways to write
(e.g. memorandums, proposals, reports and technical
descriptions) to participate effectively in their future
working environment as engineers. The course therefore stresses the notion that writing varies with the social context in which it is produced, and that different situations call for different kinds of writing [13]. With this in mind, the learners were exposed
to good models of many types of technical writing
as materials. As pointed out by [14], “learners are
often found incapable to replicate the expert generic
models due to their communicative and linguistics
deficiencies” (cited in [13]:370). It was found motivating
to the learners to be exposed to “good ‘apprentice’
generic exemplars, which can provide a realistic model
of writing performance for undergraduate students”
[13].
Such an approach to teaching technical writing,
though contextualised within the learners’ domain,
however, has its limitations. Language practitioners face the challenge not only of making learning relevant to the students’ needs in their future workplace but also of providing learners with practical learning environments. There has indeed been a limited
number of research studies reported on exploiting human capabilities, mainly the visual, sound and touch senses, to create such environments. VR through simulations could help overcome the limitations of a technical writing course.
As many of the students in this institution will serve
at the various local and international oil and gas
companies, the proposed conceptual framework
utilises a simulated environment of the offshore
oil platform to provide the contextual learning
environment for teaching technical writing to these
learners. The virtual environment of the offshore
platform is expected to provide the learners an opportunity to acquire effective communication skills in their target workplace community. Figure 1 shows the proposed framework.

Figure 1. The proposed framework: exposure to different types of writing (classroom activity: lecture and student-centred activities); exposure to the relevant virtual environment (language lab activity: collaborative virtual walkthrough, performing assigned tasks, selection of report type); report writing and production (classroom activity: planning, drafting, publishing); and report evaluation (classroom activity).

In the initial phase of the framework, learners would
first be exposed to the type of writing required in their
future workplace as a classroom activity. Following the
familiarisation of different types of technical writings,
these engineering students would then be presented
with the relevant 3D virtual world, in this case an
oil platform. This second phase would take place
in the English language laboratory equipped with
computers whereby each student would be assigned
to a computer. Despite this individual assignment,
students would be working collaboratively, sharing
the same virtual world and interacting with each
other. As these students walk around the virtual oil
platform they could talk to other students via their
avatars’ representations, who would assume different
roles e.g. the supervisor, mechanical engineer, civil
engineer etc. They may interact and discuss with one
another on the task assigned to them by the instructor.
For example, there may be an equipment malfunction at the platform, and the student would be required to identify the problem and write an equipment evaluation report to the supervisor. The learners thus need to communicate and discuss with one another, whatever roles they have assumed.
Learners need to relate the purpose of writing to the
subject matter, the writer/audience relationship and
the mode or organisation of the text. This requires
the learners to examine their surrounding and the
current social context. This would offer learners a view of how different texts are written in accordance with their purpose, audience and message [15]. The VR
application would provide students with note-taking
functionalities to enable them to collect and write
relevant information and comments during their
virtual walkthrough. These notes should be easily
accessible later when the students prepare their report.
Students were expected to decide the type of report
to write during exposure to the virtual environment.
After being exposed to the organisation and language
used in the texts, learners then go through a multiple-drafts process, again as a classroom activity. Instead of turning in a finished product right away, learners will be asked for multiple drafts of a work. Rewriting and revision are integral to writing, and editing is an ongoing multi-level process which consists of planning,
drafting and finally publishing the end product – the
report. The report would then be submitted to the
supervisor (a role assumed by a student) who would
read and examine the report. Figure 2 illustrates the main features in the teaching of writing an equipment evaluation report.
Figure 2. Main features in teaching the equipment evaluation report: situation (the equipment to be examined through 3-D VR software); examining purpose (to evaluate the equipment, e.g. temperature, and make relevant recommendations); consideration of mode/field/tenor (internal report, data for technical evaluation, audience); in-class activities (planning, drafting, publishing).
Figure 3. Multi-user OPVE architecture: users at desktop computers connect to the OPVE on a computer server and communicate via graphics, audio, text and gestures.
Offshore Platform Virtual Environment
Architecture
The aim of developing an offshore platform virtual
environment (OPVE) is to provide a simulation of
the actual workplace where learners could acquire
effective communication skills through experiential
and active learning. As such, it is necessary to develop
a virtual world that closely mimics the real offshore
platform environment. Considering the complexity
of modeling this environment using 3D modeling
software, the following approach is proposed. A
series of photographs of the oil platform environment
will be taken using a digital camera and a Novoflex VR-System PRO Panorama Adaptor. The latter allows multi-row and spherical stitched pictures and panorama views to be created using software called Easypano Tourweaver Professional. This software allows users a virtual walkthrough and exploration of the 3D realistic images of the offshore platform environment. Hotspots will be created for ease of navigation and for users to quickly move to specific locations. Information and instruction texts as pop-ups are provided to further assist students to explore and perform the assigned tasks. The OPVE resides in
a computer server and can be accessed by students
via their computer desktops. These concepts could
be extended to multi-user collaborative work to allow
several users to share the OPVE, create representations
of the users, communicate and interact with each
other via graphics, audio, text and gestures in real time over a network (Figure 3).
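To make the proposed walkthrough concrete, the following Python sketch models the kind of data structure the OPVE implies: panorama nodes linked by named hotspots, with pop-up instruction text attached. It is only a minimal sketch; the class names, node identifiers and image files are hypothetical and do not come from the actual OPVE implementation.

```python
from dataclasses import dataclass, field

# Hypothetical data model for the OPVE walkthrough: each panorama node
# is one stitched spherical view, and hotspots link nodes so a learner
# can jump between locations on the virtual platform.

@dataclass
class Hotspot:
    label: str           # e.g. "Go to pump room"
    target: str          # id of the panorama node this hotspot leads to
    info_text: str = ""  # pop-up instruction shown to the student

@dataclass
class PanoramaNode:
    node_id: str
    image_file: str                          # stitched 360-degree photograph
    hotspots: list = field(default_factory=list)

class Walkthrough:
    def __init__(self, nodes):
        self.nodes = {n.node_id: n for n in nodes}

    def move(self, current_id, hotspot_label):
        """Follow a named hotspot from the current panorama, if it exists."""
        for h in self.nodes[current_id].hotspots:
            if h.label == hotspot_label:
                return h.target
        return current_id  # stay put if the hotspot is unknown

# Example: two locations on the virtual oil platform
deck = PanoramaNode("deck", "deck.jpg",
                    [Hotspot("Go to pump room", "pump_room",
                             "Inspect the faulty pump and take notes.")])
pump = PanoramaNode("pump_room", "pump_room.jpg",
                    [Hotspot("Back to deck", "deck")])
tour = Walkthrough([deck, pump])
print(tour.move("deck", "Go to pump room"))  # -> "pump_room"
```

A multi-user version would additionally relay each user’s current node, text and audio through the server, as Figure 3 suggests.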
Pedagogical Implications
There are several pedagogical implications that can be
drawn from this paper. First of all, it is acknowledged
that much of current research work in the design of
VR focuses on the technology itself and laboratory
prototypes. This paper however proposes a conceptual
framework for the use of VR as a tool in the teaching
and learning of technical writing. The increasing use
of Information and Communication Technologies
(ICT) in the education field over the last decade has
put pressure on second language (L2) writing teachers
to embrace what computers and technology offer.
Technology has its place and has quite an impact on
L2 writing instruction. As [16] reminds us, language and communication “…teachers should be prepared to bring computers into the centre of their own pedagogical practice”. New technology-based
pedagogies have been increasingly integrated into L2 writing instruction, which it is believed could improve the learners’ communication skills.
The obvious difference between the 3D VR simulation
and the traditional textbook is that the 3D environment
provides students with learning activities that are very
much action-oriented. It allows the learners to directly
experience for themselves the thing they seek to learn
[17]. Such an approach provides the learners with a ‘first-person form of learning’ instead of relying solely on textbooks in a traditional classroom [18]. It is very much in line with the task-based approach to teaching propagated by [19]. In such task-based teaching, students are required to engage in interaction in order to fulfil a task. [20] stated that
“The role of tasks has received further support from
researchers in second language acquisition who were
interested in developing pedagogical application of
second language acquisition theory” (cited by [21]:
223). Task-based learning, as proposed in this paper,
focuses on the process of learning and problem-solving
as it is organised around a set of tasks that are related
to real life. [22] defines a task as a “piece of meaning-focused work involving learners in comprehending,
producing and/or interacting in the target language,
and that tasks are analysed or categorised according
to their goals, input data, activities, settings and
roles”. The tasks allow interactions between learners
and the linguistic environment, as the learners focus
on participating in the tasks actively to solve the
“real-world problem” or the “pedagogic problems”.
For example, if there is an equipment malfunction,
the learners would need to identify the problem
and come up with an equipment evaluation report.
In this way, learners could put writing activities in a
relevant context which can facilitate and enhance
internalisation of target language and report writing
skills.
[20] further argued that “the underlying language
systems” will develop while the students carry out the
task. The language systems acquired are within the
specific genre in the context of real-world oil platform.
The acquisition of such relevant language system and
genre is crucial to enable students to produce a report
in such a contextual setting. According to [23], genre refers to “abstract, socially recognised ways of
using language”, which is used by the students’ future
professional community (engineers and technical
workers at the oil platform). As argued by [24] “a
virtual environment to learn the target language
would be very helpful for the learner to acquire the
needed knowledge for communication in the social
and cultural sense” – in the case of this paper, the
social and the cultural environment of technical
engineers at the oil platform. This would also help to
minimise the gap between classroom activities and
the professional setting.
Another pedagogical implication is in the use of the
teaching materials. For a long time, education has
relied on pictures and drawings to describe a variety
of objects or mechanisms. The graphics used in VR
can be of great assistance in the teaching and learning
activities. [25], for example, proved the effectiveness
of using software visualisation and animation in
teaching robotics and describing mechanical systems
to undergraduate engineering students. Such simulation-based learning overcomes the limitations
in using textbooks. In contrast to pictures in textbooks,
students in classrooms using VR are presented with
virtual objects in natural situations supporting the
communicative tasks they need to perform. When
learners are deeply involved in the materials that they learn, more effective learning takes place, as the traces of information would be more vivid in their minds.
Conclusion
Teaching using 3D graphics has evolved with the tremendous advancement in hardware and software technology. VR offers great potential in
education. Teaching and learning activities can be
greatly facilitated by the use of VR which helps to
minimise the gap between the classroom and the
professional world. Since the paper presented here
offers only the conceptual framework, researchers
and educators should perhaps conduct an empirical
study on its implementation in a technical writing
classroom. Such an empirical study could evaluate the effectiveness of the approach to teaching technical communication by testing the students’ performance in tests. Students’ performance in a traditional
classroom situation may be compared to the students’
performance in the classroom using VR. Future
research could also look into students’ perception of
such approach through a more qualitative study.
References
[1] Manseur, R. “Virtual Reality in Science and Engineering Education”, In Proceedings of the 35th ASEE/IEEE Frontiers in Education Conference, Indianapolis, IN, (2005)
[2] Rheingold, H. (1991). Virtual Reality. New York, NY: Summit
[3] Jung, H. J. “Virtual Reality for ESL Students”, The Internet TESL Journal, VIII (10), (2002)
[4] Nonnis, D. 3D Virtual Learning Environments, Educational Technology Division, Ministry of Education, Singapore, 2005, pp. 1-6
[5] Yair, Y., Mintz, R. & Litvak, S. “3D-Virtual Reality in Science Education: An Implication for Astronomy Teaching”, Journal of Computers in Mathematics and Science Teaching, 20 (3), (2001), pp. 293-305
[6] Dean, K. L., Asay-Davis, X. S., Finn, E. M., Foley, T., Friesner, J. A., Imai, Y., Naylor, B. J., Wustner, S. R., Fisher, S. S. & Wilson, K. R. “Virtual Explorer: Interactive Virtual Environment for Education”, Presence, 9 (6), (2000), pp. 505-523
[7] Roussos, M., Johnson, A., Moher, T., Leigh, J., Vasilakis, C. & Barnes, C. “Learning and Building Together in an Immersive Virtual World”, Presence, 8 (3), (1999), pp. 247-263
[8] Dede, C., Salzman, M. C. & Loftin, R. B. “ScienceSpace: Virtual Realities for Learning Complex and Abstract Scientific Concepts”, In Proceedings of IEEE Virtual Reality Annual International Symposium, New York: IEEE Press, (1996), pp. 246-253
[9] Rose, H. & Billinghurst, M. “Zengo Sayu: An Immersive Educational Environment for Learning Japanese” (Technical Report), Seattle: University of Washington, Human Interface Laboratory of the Washington Technology Center, (1996)
[10] Byrne, C. M. (1996). “Water on Tap: The Use of Virtual Reality as an Educational Tool”, Unpublished doctoral dissertation, University of Washington, Seattle, WA
[11] LeLoup, J. W. & Ponterio, R. “On the Net”, Language Learning and Technology, 9 (1), (2005), pp. 4-16
[12] Hadley, O. A. Teaching Language in Context. Boston, MA: Heinle & Heinle, (2001)
[13] Flowerdew, L. “Using a genre-based framework to teach organizational structure in academic writing”, ELT Journal, 54 (4), (2000), pp. 369-378
[14] Marshall, S. “A genre-based approach to the teaching of report writing”, English for Specific Purposes, 10, (1991), pp. 3-13
[15] Macken-Horarik, M. “Something to shoot for: A systemic functional approach to teaching genre in secondary school science”, In A. M. Johns (Ed.), Genre in the Classroom, Mahwah, NJ: Erlbaum, (2002), pp. 21-46
[16] Pennington, M. C. The impact of the computer in second language writing. In Kroll, B. (Ed.) Exploring the Dynamics of Second Language Writing, USA: Cambridge University Press, (2003)
[17] Winn, W. “A Conceptual Basis for Educational Applications of
Virtual Reality”. Technical Report TR-93-9, Human Interface
Technology Laboratory, University of Washington, (1993)
[18] Chee, Y. M. “Virtual Reality in Education: Rooting Learning in
Experience”. In Proceeding of the International Symposium on
Virtual Education, (2001). Busan, South Korea
[19] Willis, J. “Task-based Learning – What kind of adventure?”, The Language Teacher, 22 (7), (1998), pp. 17-18
[20] Long, M. & Crookes, G. “Three Approaches to Task-based Syllabus Design”, TESOL Quarterly, 26 (1), (1992)
[21] Richards, J. C. & Rodgers, T. Approaches and Methods in Language Teaching, Cambridge University Press, (2001)
[22] Nunan, D. Designing Tasks for the Communicative Classroom, Cambridge: Cambridge University Press, (1989)
[23] Hyland, K. “Genre-based pedagogies: A social response to process”, Journal of Second Language Writing, 12, (2003), pp. 17-29
[24] Paepa, D., Ma, L., Heirman, A., Dessein, B., Vervenne, D., Vandamme, F. & Willemen, C. “A Virtual Environment for Learning Chinese”, The Internet TESL Journal, 8 (10), (1998). Available online: http://iteslj.org/
[25] Manseur, R. “Visualization tools for robotics education”, Proceedings of the 2004 International Conference on Engineering Education, Gainesville, Florida, October 17-21, (2004)

Shahrina Md. Nordin obtained her PhD from Universiti Sains Malaysia, Malaysia. She is attached as a lecturer to Universiti Teknologi PETRONAS, Perak, Malaysia. She started her career as a lecturer at Institut Teknologi Perindustrian, Johor Bahru, and was then recruited as an Assistant Lecturer at Multimedia University, Melaka. Shahrina has a Master’s degree in TESL from Universiti Teknologi Malaysia, Johor, and obtained her first degree, in English Language and Literature, from International Islamic University Malaysia.

Suziah Sulaiman obtained her PhD from University College London, United Kingdom. She is currently a lecturer at Universiti Teknologi PETRONAS, Malaysia. Her research interests include topics on human computer interactions, user haptic experience, and virtual environment.
Dayang Rohaya Awang Rambli obtained
her PhD at Loughborough University,
United Kingdom. She is currently a lecturer
of the Computer & Information Sciences
Department of Universiti Teknologi
PETRONAS, Malaysia. Her main interest
areas are human factors in virtual reality,
applications of VR in education & training
(interactive learning systems), and
augmented reality applications in games and education.
Wan Fatimah Wan Ahmad received her
BA and MA degrees in Mathematics from
California State University, Long Beach,
California USA in 1985 and 1987. She also
obtained Dip. Ed. from Universiti Sains
Malaysia in 1992. She completed her PhD
in Information System from Universiti
Kebangsaan Malaysia in 2004. She is
currently a senior lecturer at Computer &
Information Sciences Department of Universiti Teknologi
PETRONAS, Malaysia. She was a lecturer at Universiti Sains Malaysia, Tronoh, before joining UTP. Her main interests are in the
areas of mathematics education, educational technology, human
computer interaction and multimedia.
Technology Platform: APPLICATION OF INTELLIGENT IT SYSTEM
MULTI-SCALE COLOR IMAGE ENHANCEMENT
USING CONTOURLET TRANSFORM
Melkamu H. Asmare, Vijanth S. Asirvadam*, Lila Iznita
Universiti Teknologi PETRONAS, 31750 Tronoh, Perak Darul Ridzuan, Malaysia
*vijanth_sagayan@petronas.com.my
Abstract
Images captured with insufficient illumination generally have dark shadows and low contrast. This problem seriously affects further processing schemes such as human visual perception, face detection and security surveillance. In this study, a multi-scale colour image enhancement technique based on the contourlet transform
was developed. Contourlet transform has better performance in representing the image salient features such
as edges, lines, curves and contours than wavelet transform because of its anisotropy and directionality. It is
therefore well-suited for multi-scale edge based colour image enhancement. The image was first converted
from RGB (red, green, blue) to a CIELUV (L is for lightness) model and then the contourlet coefficients of its L
component were adjusted, preserving the original colour, using a modified power-law transformation function.
The simulation results showed that this approach gave encouraging results for images taken in low light and/or
non-uniform lighting conditions.
Keywords: Contrast enhancement, contourlet transform, wavelet transform, transfer function, colour space.
Introduction
Image enhancement is a digital signal processing
branch aimed at assisting image visual analysis. It is
widely used in medical, biological and multimedia
systems to improve the image quality. Theoretically,
image enhancement methods may be regarded as an
extension of image restoration methods. However, in
contrast to image restoration, image enhancement frequently requires intentional distortion of image signals, such as exaggerating brightness and colour contrasts, deliberately removing certain details that may hide important objects, and converting grayscale images into colour [1]. In this sense, image enhancement is image preparation or enrichment, in the same sense that these words have in mining. An
important peculiarity of image enhancement as image
processing is its interactive nature. The best results in
visual image analysis can be achieved if it is supported by feedback from the user to the image processing system.
In outdoor scenes, we are very often confronted with
a very large dynamic range, resulting in areas which
are too dark or too light in the image. Saturation and underexposure are common in images due to the limited dynamic range of imaging and display equipment. This problem becomes more common and severe when insufficient or non-uniform lighting conditions occur. Current cameras and image display devices
do not have a sophisticated mechanism to sense the
large dynamic range of the scene.
There exist a number of techniques for colour enhancement, mostly working in colour spaces which
transform the Red, Green, Blue monochromatic
This paper was presented at the International Graduate Conference on Engineering and Science 2008, Johor Baharu,
23 - 24 December 2008
space to the perceptually based hue, saturation and
intensity colour space. Techniques such as simple global saturation stretching [2], hue-dependent saturation histogram equalisation [3], and intensity edge enhancement based on saturation have been attempted in this space. Among others, there are
methods based on Land’s Retinex theory [4] and
non-linear filtering techniques in the XY chromaticity
diagram [5].
The multi-scale Retinex (MSR) introduces the concept
of multi-resolution for contrast enhancement. Ideally, if
an image can be decomposed into several components
in multiple resolution levels, where low pass and high
pass information are kept separately, then the image
contrast can be enhanced without disturbing any
details [6]. At the same time, image detail can also
be emphasised at a desired resolution level, without
disturbing the rest of the image information; finally,
by adding the enhanced components together, a
more impressive result can be obtained.
The Wavelet transform approach [7] consists of first
transforming the image using wavelet transform.
The wavelet coefficients at each scale are modified
using a non-linear function which is defined from the gradient of the coefficients relative to the horizontal and
vertical wavelet bands. Finally, the enhanced image
is obtained by the inverse wavelet transform of the
modified coefficients.
This study is of the opinion that the wavelet
transform may not be the best choice for the contrast
enhancement of natural images. This observation
is based on the fact that wavelets are blind to the
smoothness along the edges commonly found in
images.
The contourlet framework provides an opportunity
to achieve these tasks. It provides multiple resolution
representations of an image, each of which highlights
scale-specific image features. Contourlet transform has
better performance in representing the image salient
features such as edges, lines, curves and contours
than wavelet transform because of its anisotropy
and directionality. Since features in the contourlet-transformed components remain localised in space, many spatial-domain image enhancement techniques can be adopted for the contourlet domain [8].
For high dynamic range and low contrast images,
there is a large improvement by contourlet transform
enhancement since it can detect the contours and
edges quite adequately.
In this paper, a new image enhancement algorithm is
presented. The method for colour image enhancement
is based on multiscale representation of the image.
The paper has four sections. Section two discusses
the contourlet transform. Section three discusses the
enhancement technique. Section four shows some
experimental results and concludes the experiment.
Contourlet Transform
For image enhancement, one needs to improve the
visual quality of an image without distorting it. Wavelet
bases present some limitations, because they are not
well adapted to the detection of highly anisotropic
elements such as alignments in an image. Recently,
Do and Vetterli [8] proposed an efficient directional
multi-resolution image representation called the
contourlet transform. Contourlet transform has better
performance in representing the image salient features
such as edges, lines, curves and contours than wavelet
transform because of its anisotropy and directionality.
It is therefore well suited for multi-scale edge-based colour image enhancement.
The contourlet transform consists of two steps:
the sub-band decomposition and the directional
transform [8]. A Laplacian pyramid is first used to
capture point discontinuities, followed by directional
filter banks to link point discontinuities into linear structures. The overall result is an image expansion using basic elements like contour segments, hence the name contourlet transform. Figure 1 shows a
flow diagram of the contourlet transform. The image
is first decomposed into sub-bands by the Laplacian
transform and then each detail image is analysed by
the DFB.
Figure 1. Contourlet transform framework.
Figure 2. Contourlet filter bank.
Figure 2 shows the contourlet filter bank. First, multi-scale decomposition is performed by the Laplacian pyramid, and then a directional filter bank is applied to each band-pass channel.
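As a sketch of the first of these two steps, the Python code below builds a Laplacian pyramid like the multi-scale stage in Figure 2; the directional filter bank applied to each band-pass image is omitted. The Gaussian low-pass filter, its sigma, and the function name are illustrative assumptions rather than the specific filters of [8].

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_pyramid(image, levels=4):
    """Multi-scale stage of the contourlet transform: a Laplacian
    pyramid capturing point discontinuities at each scale. The
    directional filter bank that would then be applied to each
    band-pass image is omitted from this sketch."""
    pyramid = []
    current = np.asarray(image, dtype=float)
    for _ in range(levels):
        low = gaussian_filter(current, sigma=1.0)   # assumed low-pass filter
        down = low[::2, ::2]                        # decimate by 2
        up = zoom(down, 2, order=1)[:current.shape[0], :current.shape[1]]
        pyramid.append(current - up)                # band-pass detail image
        current = down
    pyramid.append(current)                         # coarsest approximation
    return pyramid

# Example: four detail levels plus the coarse residual
bands = laplacian_pyramid(np.random.rand(128, 128))
print([b.shape for b in bands])
```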
An example of contourlet decomposition coefficients, a four-level pyramid transform followed by 3, 4, 8 and 4 directional decompositions from fine to coarse levels respectively, is shown in Figure 3.

Figure 3. Contourlet coefficients
Image Enhancement Algorithm
For colour images, the colour is specified by the
amounts of Red (R), Green (G) and Blue (B) present.
Applying a grayscale algorithm independently to the R, G, and B components of the image will lead to colour shifts in the image and is inefficient, and thus unwanted. For enhancement purposes the RGB image
is converted to one of the perceptual colour spaces.
HSI (Hue, Saturation and Intensity) and LUV (L is for Lightness) are the most commonly used perceptual
colour spaces. LUV is claimed to be more perceptually
uniform than HSI [9]. The colour space conversion can
be formulated as follows.
To convert RGB to LUV, RGB was first converted to XYZ
components then XYZ to LUV:

$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 3.240479 & -1.537150 & -0.498535 \\ -0.969256 & 1.875992 & 0.041556 \\ 0.055648 & -0.204043 & 1.057311 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \qquad (1)$$

$$u' = \frac{4X}{X + 15Y + 3Z}, \qquad v' = \frac{9Y}{X + 15Y + 3Z} \qquad (2)$$

$$L = \begin{cases} 116\,(Y/Y_n)^{1/3} - 16, & Y/Y_n > (6/29)^3 \\ (29/3)^3\,(Y/Y_n), & Y/Y_n \le (6/29)^3 \end{cases} \qquad (3)$$

$$U = 13L\,(u' - u'_n), \qquad V = 13L\,(v' - v'_n) \qquad (4)$$

The quantities $u'_n$, $v'_n$, $X_n$, $Y_n$ and $Z_n$ are the chromaticity coordinates of a specified white object, which may be termed the white point. For a perfect reflecting diffuser under the D65 illuminant and the 2° observer, $u'_n = 0.2009$, $v'_n = 0.4610$, $X_n = 0.950456$, $Y_n = 1$ and $Z_n = 1.088754$.
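A minimal Python sketch of equations (1)-(4) is given below, assuming linear RGB input in [0, 1]. Since equation (1) states the XYZ-to-RGB matrix, the sketch inverts it numerically for the forward RGB-to-XYZ step; the helper name rgb_to_luv is hypothetical.

```python
import numpy as np

# Equation (1) gives the XYZ-to-RGB matrix; invert it for RGB-to-XYZ.
M_XYZ_TO_RGB = np.array([[ 3.240479, -1.537150, -0.498535],
                         [-0.969256,  1.875992,  0.041556],
                         [ 0.055648, -0.204043,  1.057311]])
M_RGB_TO_XYZ = np.linalg.inv(M_XYZ_TO_RGB)

# White point (D65 illuminant, 2-degree observer) as given in the text
UN, VN, YN = 0.2009, 0.4610, 1.0

def rgb_to_luv(rgb):
    """Convert a linear RGB triple in [0, 1] to CIELUV (L, U, V)
    following equations (1)-(4). Illustrative helper only."""
    X, Y, Z = M_RGB_TO_XYZ @ np.asarray(rgb, dtype=float)
    denom = X + 15.0 * Y + 3.0 * Z
    u_p, v_p = 4.0 * X / denom, 9.0 * Y / denom   # equation (2)
    t = Y / YN
    if t > (6.0 / 29.0) ** 3:                     # equation (3)
        L = 116.0 * t ** (1.0 / 3.0) - 16.0
    else:
        L = (29.0 / 3.0) ** 3 * t
    return L, 13.0 * L * (u_p - UN), 13.0 * L * (v_p - VN)  # equation (4)

print(rgb_to_luv([0.5, 0.5, 0.5]))  # mid-grey: U and V close to zero
```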
The most straightforward method would be to use
the existing grayscale algorithm for colour images by
applying it to the lightness component of the colour
image and leaving the chromaticity unaltered. The
contourlet transform was used to decompose the
lightness component into multi-scales for better
representation of major geometrical components of
the image like edges, contours and lines.
Edges, as features of an image, contain useful information that belongs to the high-frequency component. The human visual system is more sensitive to edges than to homogeneous regions; thus the sharpness of the edges should be preserved. The contourlet transform has a better representation of the major geometrical parts of images, that is, the edges, so it is a strong candidate for edge-based contrast enhancement.
The contourlet transform is a hierarchy of resolution
information at several different scales and orientations.
At each level, the contourlet transform can be reapplied
to the low resolution sub-band to further decompose
the image. For four decomposition levels, an image I can be characterised as:
$$A_j I = A_{j+1} I + D^1_{j+1} I + D^2_{j+1} I + D^3_{j+1} I + D^4_{j+1} I \qquad (5)$$

The enhanced image:

$$\tilde{A}_j I = \tilde{A}_{j+1} I + \left[ F^1_{j+1}\left(D^1_{j+1} I\right) + F^2_{j+1}\left(D^2_{j+1} I\right) + F^3_{j+1}\left(D^3_{j+1} I\right) + F^4_{j+1}\left(D^4_{j+1} I\right) \right] \qquad (6)$$
The important feature in contourlet transformed
components is that they remain localised in space.
Thus, many spatial domain image enhancement
techniques can be adopted for the contourlet
domain. The following non-linear mapping function was applied to modify the contourlet coefficients.
Contourlet transformed images were treated by an
enhancement process to elevate the values of low
intensity pixels using a specifically designed nonlinear transfer function F, defined as:
$$L' = \frac{L^{(0.75z + 0.25)} + (1 - L)\cdot 0.5\cdot(1 - z) + L^{(2 - z)}}{2} \qquad (7)$$

This function is a modified power-law transfer function. It is a sum of three simple functions that control the shape of the curve. It can be seen in Figure 4 that this transformation largely increases the luminance of darker pixels (regions) while brighter pixels (regions) are less enhanced; this process thus serves as dynamic range compression. The line shape of the transfer function is therefore what matters, no matter what mathematical functions are used, and simple functions are applied for faster computation. The range of z is [0, 1].

Figure 4. Transfer function for different values of z.

The function can be made image dependent using z as follows:

$$z = \begin{cases} 0 & \text{for } L \le 0.2\,L_{max} \\ L - 0.2 & \text{for } 0.2\,L_{max} < L < 0.7\,L_{max} \\ 1 & \text{for } L \ge 0.7\,L_{max} \end{cases} \qquad (8)$$

$L_{max}$ is the maximum value of the sub-band image; thus each sub-band image was enhanced using a different enhancement function.

The overall system algorithm is summarised in Figure 5.

Figure 5. The overall algorithm in block diagram: the original image (R, G, B) is converted to the LUV colour space; the contourlet transform is applied to the L part; the coefficients are modified; and the inverse contourlet transform followed by the inverse colour transform yields the enhanced colour image.
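A short Python sketch of equations (7) and (8) follows, assuming each sub-band has been normalised to [0, 1]; the vectorised form and the defensive clip of z to [0, 1] are implementation choices, not part of the original formulation.

```python
import numpy as np

def z_param(L, L_max):
    """Image-dependent parameter z of equation (8); the final clip to
    [0, 1] is a defensive assumption, since the middle branch L - 0.2
    is only guaranteed in range for normalised sub-bands."""
    z = np.where(L <= 0.2 * L_max, 0.0,
                 np.where(L >= 0.7 * L_max, 1.0, L - 0.2))
    return np.clip(z, 0.0, 1.0)

def transfer(L, z):
    """Modified power-law transfer function of equation (7): darker
    values are boosted more strongly than brighter ones, giving
    dynamic range compression."""
    return (L ** (0.75 * z + 0.25)
            + (1.0 - L) * 0.5 * (1.0 - z)
            + L ** (2.0 - z)) / 2.0

# Example on a small normalised sub-band
sub_band = np.linspace(0.0, 1.0, 5)
z = z_param(sub_band, sub_band.max())
print(transfer(sub_band, z))
```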
Results and Discussion
The proposed algorithm has been applied to enhance
a large number of digital images for performance
evaluation and comparison with other algorithms.
Some typical results and discussions are given
below. The image was first decomposed into four
decomposition levels using pyramid transform
and 3, 2, 4, and 16 directions from fine to coarse decomposition levels respectively.
The algorithm was applied to the image captured under a low lighting condition, shown in Figure 6(a). The enhanced
image has good quality with fine detail revealed and
a well balanced colour distribution.
The second test image was captured under a very low lighting condition. Figure 7(a) shows the original image
in which the background is entirely invisible. The
enhancement algorithm performed very well in these
kinds of images; it enhanced the almost completely
dark region and still preserved the illuminated area.
The third test image was taken under low and non-uniform lighting conditions, as shown in Figure 8. It can be
seen that the brightness of the light source affects the
display of the cars and the building, where many details
are not visible. The enhancement algorithm tried to balance the lighting condition; the enhanced image reveals an improved display of the entire image.
Figure 6. a) Original image b) Enhanced image

Figure 7. a) Original image b) Enhanced image

Figure 8. a) Original image b) Enhanced image
The algorithm was compared with the most common contrast enhancement methods, such as histogram equalisation (HE), Contrast Limited Adaptive Histogram
Equalisation (CLAHE) and the Wavelet transform (WT)
method. One objective function, called Structural
Similarity index (SSIM) [10] was used to measure the
performance of the algorithms.
Structural similarity index is a measure of structural
information change. It assumes one perfect reference
input image and measures the similarity. It uses three
comparisons: luminance, contrast and structure.
Suppose x and y are two non-negative images; the structural similarity index is then defined as follows. It ranges from 0 to 1, where 1 is 100% similarity. The constants c1 and c2 are included for mathematical stability.
$$SSIM(x, y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)} \qquad (9)$$
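A minimal sketch of equation (9) follows. In [10], SSIM is computed over local windows and averaged; the single global window and the default constants below (the common choices c1 = 0.01² and c2 = 0.03² for images in [0, 1]) are simplifying assumptions.

```python
import numpy as np

def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Global SSIM of equation (9) for two images with values in [0, 1].
    A single global window is used here; [10] averages SSIM over
    local windows instead."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()            # sigma_x^2, sigma_y^2
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()  # sigma_xy
    return (((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) /
            ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))

# Identical images give an index of 1 (100% similarity)
img = np.random.rand(64, 64)
print(global_ssim(img, img))  # -> 1.0
```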
Figure 9 shows a sample comparison result for
enhancement methods. The image in Figure 9(a) is considered a perfect image and Figure 9(b) is the same image taken in low and non-uniform lighting conditions. Figures 9(c), (d), (e) and (f) are the results of
contourlet transform, wavelet transform, histogram equalisation and contrast-limited adaptive histogram equalisation respectively.

Figure 9. a) Perfect image b) Image to be enhanced c) Contourlet method d) Wavelet method e) Histogram equalisation f) CLAHE

Table 1. SSIM comparison result.

Method       SSIM
HE           0.9861
CLAHE        0.9989
Wavelet      0.9991
Contourlet   0.9991
Original     0.1644
Table 1 shows the SSIM measures of the images in Figure 9. The contourlet and wavelet methods gave the same index, but to a human observer the contourlet result is more natural and appealing.
Conclusion
The simulation results show that the approach of this
study gave encouraging results for images taken in
low light and/or non-uniform lighting conditions.
The algorithm is fast, using a simple mathematical transformation function. Since the images were
well represented using the contourlet transform, the
process did not introduce any distortion to the original
image. The system has good tonal retention and its
performance is very good especially in low light and
non-uniform lighting conditions.
References
[1] R. C. Gonzalez, R. E. Woods, “Digital Image Processing”, 2nd edition, Prentice-Hall, Inc., 2002
[2] S. I. Sahidan, M. Y. Mashor, Aida S. W. Wahab, Z. Salleh, H. Ja’afar, “Local and Global Contrast Stretching For Color Contrast Enhancement on Ziehl-Neelsen Tissue Section Slide Images”, 4th Kuala Lumpur BIOMED 2008, 25–28 June 2008, Kuala Lumpur, Malaysia
[3] Naik, S. K., Murthy, C. A., “Hue-preserving color image enhancement without gamut problem”, IEEE Transactions on Image Processing, Volume 12, Issue 12, Dec. 2003
[4] B. Funt, F. Ciurea, and J. McCann, “Retinex in Matlab”, Proceedings of CIC08 Eighth Color Imaging Conference, Scottsdale, Arizona, pp. 112-121, 2000
[5] Lucchese, L., Mitra, S. K., “A new filtering scheme for processing the chromatic signals of color images: definition and properties”, IEEE Workshop on Multimedia Signal Processing, 2002
[6] Koen Vande Velde, “Multi-scale color image enhancement”, in Proceedings of SPIE International Conference, Image Processing, Vol. 3, 1999, pp. 584-587
[7] F. Sattar, X. Gao, “Image Enhancement Based on a Nonlinear Multiscale Method using Dual-Tree Complex Wavelet Transform”, IEEE, 2003
[8] M. N. Do, M. Vetterli, “The Contourlet Transform: An Efficient Directional Multi-Resolution Image Representation”, IEEE Transactions on Image Processing, Vol. 14, pp. 2091-2106, 2005
[9] Adrian Ford, Alan Roberts, “Colour Space Conversions”, August 11, 1998
[10] Zhou Wang, Alan Conrad Bovik, Hamid Rahim Sheikh, Eero P. Simoncelli, “Image Quality Assessment: From Error Visibility to Structural Similarity”, IEEE Transactions on Image Processing, Vol. 13, No. 4, April 2004, pp. 600-612
Melkamu H. Asmare was born in Dangla,
Ethiopia, in 1982. He received his BSc
Degree in Electrical and Computer
Engineering from Addis Ababa University,
in 2005. Currently, he is a Master’s Research
student in Electrical and Electronic
Engineering department of Universiti
Teknologi PETRONAS. His research interests
include image processing, colour science,
digital signal processing and image fusion.
Vijanth S. Asirvadam is from an old mining
city of Malaysia called Ipoh. He studied at Universiti Putra Malaysia for a Bachelor of Science (Honours) majoring in Statistics and graduated in April 1997 before leaving
for Queen’s University, Belfast where he received the Master of Science degree in Engineering Computation with a Distinction. He later joined the Intelligent
Systems and Control Research Group at Queen’s University Belfast
in November 1999 where he completed his Doctorate (PhD) on
Online and Constructive Neural Learning Methods. He took
previous employments as a System Engineer and later as a
Lecturer at the Multimedia University Malaysia and also as Senior
Lecturer at the AIMST University. Since November 2006, he has
served as Senior Lecturer at the department of Electrical and
Electronics Engineering, Universiti Teknologi PETRONAS. His
research interests include linear and nonlinear system
identification, model validation and application of intelligent
system in computing and image processing. Dr Vijanth is a member of the Institute of Electrical and Electronics Engineers (IEEE) and has over 40 publications in local and international proceedings and journals.
Lila Iznita Izhar obtained her BEng in
Electrical and Electronics Engineering from
the University of the Ryukyus, Japan, in
2002. She later earned her MSc in Electrical
and Electronics Engineering from Universiti
Teknologi PETRONAS in 2006. She is
currently a lecturer with Universiti Teknologi
PETRONAS. Her research interests are in
the area of medical image analysis,
computer vision and image processing.
Technology Platform: APPLICATION OF INTELLIGENT IT SYSTEM
AUTOMATED PERSONALITY INVENTORY SYSTEM
Wan Fatimah Wan Ahmad*, Aliza Sarlan, Mohd Azizie Sidek
Universiti Teknologi PETRONAS, 31750 Tronoh, Perak Darul Ridzuan, Malaysia
*fatimhd@petronas.com.my
Abstract
Personality Inventory is a measurement tool to identify the characteristics or traits of an individual. Nowadays,
personality inventory is used globally by many big organisations in order to hire good quality employees.
Therefore, personality inventory is used as part of self-assessment for career planning purposes. The paper
focuses on the development of an Automated Personality Inventory System (APIS). The system can be used as a tool by any organisation, especially its Human Resource Department, to assess potential employees.
APIS was developed based on the manual Sidek Personality Inventory which determines the characteristics of
individual personality traits. The methodology used in developing the system was the Spiral model. The system
was developed using PHP, MySQL and Apache Web Server. The system may be useful for any organisation and employee. It will help an organisation filter possible candidates for a given job, and it may also benefit employees by letting them know their personality type. It is hoped that the system will
help an organisation in choosing the right candidate.
Keywords: personality inventory, organisation, career planning, automated, self-assessment.
Introduction
Nowadays, personality inventories are used globally
by many big organisations in order to hire good
quality employees. A big organisation such as
PETRONAS could use these personality inventories
to filter candidates. Personality inventory could be
used as part of a self-assessment for career planning
purposes. The test results can be enormously helpful
when determining the kind of career a candidate
might like to pursue. Frieswick [1] highlighted that
as employers start to hire again, they are increasingly
taking steps to ensure that the hires they make are a
good fit – not only with the job description but also
with the people with whom they will be working.
Therefore, understanding the candidate’s personality
type will also improve their chance of being happy in
their work. This will decrease the amount of turnover
in an organisation.
Personality tests or inventories are self-reporting
measures of what might be called traits, temperaments,
or dispositions [2]. Different types of tests are used
for different purposes and personality tests are lately
being used more frequently. It is difficult to pinpoint which tests are more efficient since this is a very subjective area; however, if used correctly, personality tests can be very effective tools [3]. Research has also
shown that personality can be used as a predictor [4].
In order to create the “perfect” user experience,
designers should also apply a suitable graphical
user interface. A diversity-centred approach was proposed [5] where the design focuses more on groups of users, or even on the characteristics of individual users. In other words, designers need to understand individual user characteristics, such as
gender, disabilities or personalities, and relate to user
preferences for interface properties such as structure,
This paper was presented at the International Conference on Science & Technology: Applications in Industry and Education, Penang,
12 - 13 December 2008
152
PLATFORM VOLUME Six NUMBER TWO july - december 2008
Technology Platform: APPLICATION OF INTELLIGENT IT SYSTEM
layout or shape. Designers should also segment their user population according to these design-relevant user characteristics and develop customised
user interfaces. Therefore, successful customisation
would probably improve the usability, and the usage
experience itself. Users would no longer have to
accept the traditional uniform design that only targets
the average user. Instead, they would have a product
tailored to their needs which allows them to express
their individuality.
The goal of the paper is to develop a personality
inventory system with a suitable user interface. This
paper presents an Automated Personality Inventory System (APIS), which can be used as a tool by any organisation, especially its Human Resource Department, and reports on research into a suitable user interface for its development. APIS is developed based on the manual Sidek Personality Inventory (IPS).
Currently, the IPS personality test has to be done manually, with an individual taking a pencil-and-paper test.
Related work
A number of personality scales or inventories have
been implemented and made available online. A
majority of them focused on single personality
constructs rather than broad inventories. However,
the evaluation of an online personality test was not
made available [6].
The Hogan Personality Inventory (HPI) provides the industry with a standard for measuring normal personality [7]. HPI predicts employee performance and helps
companies reduce turnover, absenteeism, shrinkage,
and poor customer service. HPI contains seven primary
scales, six occupational scales, and one validity scale.
The scales are shown in Table 1.
Table 1. Hogan personality inventory scales

Primary scales:
Adjustment: confidence, self-esteem, and composure under pressure.
Ambition: initiative, competitiveness, and leadership potential.
Sociability: extraversion, gregariousness, and a need for social interaction.
Likeability: warmth, charm, and the ability to maintain relationships.
Prudence: responsibility, self-control, and conscientiousness.
Intellectance: imagination, curiosity, and creative potential.
Learning Approach: the degree to which a person is achievement-oriented and stays up-to-date on business and technical matters.

Occupational scales:
Service Orientation: being attentive, pleasant, and courteous to clients and customers.
Stress Tolerance: being able to handle stress; low scores are associated with absenteeism and health problems.
Reliability: integrity (high scores) and organizational delinquency (low scores).
Clerical Potential: the ability to follow directions, pay attention to details, and communicate clearly.
Sales Potential: energy, social skill, and the ability to solve problems for clients.
Managerial Potential: leadership ability, planning, and decision-making skills.

Another online personality inventory that is available is the Jung Typology Test [8]. The indicators used in the test are divided into four areas: extraversion (E) or introversion (I); sensing (S) or intuition (N); thinking (T)
or feeling (F); and judging (J) or perceiving (P). Results
are usually expressed as four letters indicating the
respective preferences: an example would be ENTP.
A Need for a Personalised Inventory
System
Organisations want the best people to work for them.
To achieve this objective they cannot rely only on the
applicant’s resume. Sometimes people with good and tempting resumes might not be as good as they claim. Applicants could cheat on their resumes to make them look impressive in the eyes of the organisation they want to work for. This has become a problem
to many organisations as they have to spend a lot of
time, money and effort in reviewing and screening
all applications to get the best applicants to fill the vacant posts. This is where a personality inventory comes in handy, because it helps employers analyse and filter out the persons best suited for the job.
However, good and reliable personality inventories
are hard to find and design. Furthermore, most
personality inventory tests had to be done manually
with the assistance of psychologists for the best
outcome. This setting is impractical as the organisation
has to hire a psychologist for recruitment purposes
only. Therefore, automated personality inventories
should be developed to counter this problem.
APIS development
APIS is developed with the aim of providing a tool to aid the human resource department of an organisation in identifying and recruiting the right person for the right job function. The system serves the following objectives:
• To effectively filter suitable applicants for any
vacant position in the organisation.
• To easily identify an individual’s suitability for the
job based on their personality analysis.
• To assist individuals in knowing their personality types.
Table 2. Sidek personality inventory traits

Aggressive     Endurance
Analytical     Achievement
Autonomy       Controlling
Depending      Helping
Extrovert      Honesty
Introvert      Self-critics
Intellectual   Supportive
Variety        Structure
APIS is a web-based system developed to automate the conventional personality inventory process based on the Sidek Personality Inventory (IPS) developed by Associate Professor Dr. Sidek Mohd Noah [9]. IPS is a well-established personality inventory that can determine the kind of personality an individual has and also suggests suitable jobs. The 16 personality traits used in IPS are shown in Table 2.
The system was developed using Open Source technology: the Open Source web programming languages PHP and JavaScript, the Open Source database MySQL, and the Open Source web server Apache. The project involved Associate Professor Dr. Sidek Mohd Noah as the client for the system. The system was developed based on the spiral development model, in which user requirements and feedback were acquired throughout the development period; this was crucial to understanding the concept and how it works, so that the system provides the user with accurate output.
System architecture
APIS adopted the combination of data-centred and
client-server architectural model. A client-server
system model is organised as a set of services and
associated server(s) and clients that access and use
the services. The server itself or one of the server(s)
contains a database. Figure 1 illustrates the system
architecture.
[Figure 1 shows users connecting via the Internet to the APIS client pages, backed by a server hosting the DBMS and the APIS database; Figure 2 maps the site navigation from the login page (with new-user registration and authorisation) through the selection page and IPS questionnaires to the personality profile and result pages.]
Figure 1. System architecture
Figure 2. Navigation map
Figure 2 shows the navigation map of the APIS website. Once a user logs in to the system, the user can choose which personality inventory to use from the list provided. After completing the test, the system presents a results page, which can be saved or printed. A user can view their results again by logging back in to the system.
APIS prototype

The prototype was developed to serve two different types of users, namely i) Human Resource staff and ii) candidates. The main page is the point of access for all users of the system. Currently, APIS is available in Bahasa Malaysia. Figure 3 shows the main page of the system. According to [10], systems can be designed simpler, yet custom-tailored to users' specific needs.

Figure 3. Main page

In developing APIS, the IPS was used to test an individual's personality by posing 160 questions in questionnaire form. Each page contains ten questions, so a user goes through 16 pages when taking the test. Figure 4 shows the questionnaire page.
After a user finishes the test, a Personality Profile
that shows the score, percentages and analysis for
each trait will be generated in table form as shown
in Figure 5. Apart from this table, there will also be a
graph and description for the user to see their scores
and personality analysis result as shown in Figure 6.
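As an illustration of how such a profile might be computed, the sketch below turns questionnaire answers into a percentage score per trait. The paper does not publish the IPS scoring rules, so the question-to-trait mapping and the simple percentage formula here are assumptions made for illustration only.

```python
# Illustrative sketch only: the question-to-trait mapping and percentage
# formula are assumptions, not the published IPS scoring rules.

def trait_percentages(answers, trait_of_question, max_score=1):
    """answers: {question_id: score}; trait_of_question: {question_id: trait name}."""
    totals, counts = {}, {}
    for qid, score in answers.items():
        trait = trait_of_question[qid]
        totals[trait] = totals.get(trait, 0) + score
        counts[trait] = counts.get(trait, 0) + 1
    # score for each trait as a percentage of the maximum attainable score
    return {t: 100.0 * totals[t] / (counts[t] * max_score) for t in totals}

# e.g. two hypothetical yes/no questions mapped to one of the 16 IPS traits
profile = trait_percentages({1: 1, 2: 0}, {1: "Aggressive", 2: "Aggressive"})
# -> {"Aggressive": 50.0}
```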
Figure 4. Questionnaire page
VOLUME Six NUMBER two july - december 2008 PLATFORM
155
Technology Platform: APPLICATION OF INTELLIGENT IT SYSTEM
Figure 5. Personality profile result (table form)
Figure 6. Personality profile result
Functional Testing
Functional testing was conducted with the expert user, Dr. Sidek. The aim was to validate the operation of the developed system against its functional specification and to get his feedback on the prototype. Overall, he was satisfied with the system, since the functional requirements were met. With the developed system, a user no longer has to key in data manually: the system automatically computes and presents the score, percentage and analysis for each personality trait.
Conclusion

This paper presented an automated personality inventory system (APIS) which can be very beneficial for any organisation, especially its Human Resource Department, in hiring quality workers. Understanding a candidate's personality type improves the chance of that candidate being happy at work.
This, in turn, reduces turnover in an organisation. Personality is also important because once people know what type of person they are, they can learn to communicate better, work better and turn weaknesses into strengths. APIS adopts 16 personality traits, whereas other inventories available online are typically based on only five traits. It is hoped that the system will help improve the quality of employees hired. APIS can also be used by any individual who wants to know his or her personality type. Future work could include converting the system to other languages, testing the reliability of the system and making it accessible to all.
Wan Fatimah Wan Ahmad received her BA and MA degrees in Mathematics from California State University, Long Beach, California, USA, in 1985 and 1987 respectively. She also obtained a Diploma in Education from Universiti Sains Malaysia in 1992 and completed her PhD in Information Systems at Universiti Kebangsaan Malaysia in 2004. She is currently a senior lecturer at the Computer & Information Sciences Department of Universiti Teknologi PETRONAS, Malaysia. She was a lecturer at Universiti Sains Malaysia, Tronoh before joining UTP. Her main interests are in the areas of mathematics education, educational technology, human computer interaction and multimedia.
References

[1] Frieswick, K., 2005, Magazine of Senior Executive. Available online: http://www.boston.com/news
[2] Shaffer, D. J. and Schmidt, R. A., 1999, Personality Testing in Employment. Available online: http://pview.findlaw.com/
[3] Collins, M., 2004, Making the Most of Personality Tests. Available online: http://www.canadaone.com/
[4] Woszczynski, A. B., Guthrie, T. C. and Shade, S., 2005, "Personality and Programming", Journal of Information Systems Education, 16(3), pp. 293-300
[5] Saati, B., Salem, M. and Brinkman, W., 2005, Towards Customised User Interface Skins: Investigating User Personality and Skin Colour. Available online: http://mmi.tudelft.nl/~willem-paul/HCI2005
[6] Buchanan, T., Johnson, J. A. and Goldberg, L. R., 2005, European Journal of Psychological Assessment, 21(2), pp. 116-128
[7] http://www.hoganassessment.com/_HoganWeb/
[8] Shaunak, A., 2005, Personality Type – Jung Typology Test. Available online: http://shaunak.wordpress.com/2005/10/14/personality-type-jung-typology-test/
[9] Sidek Mohd Noah, 2008. Available online: http://www.educ.upm.edu.my/directori/sidek.htm
[10] Fuchs, R., 2001, "Personality Traits and Their Impact on Graphical User Interface Design: Lessons Learned from the Design of a Real Estate Website", in Proceedings of the 2nd Workshop on Attitude, Personality and Emotions in User-Adapted Interaction
A Fuzzy Neural Based Data Classification System
Yong Suet Peng* and Luong Trung Tuan
Universiti Teknologi PETRONAS, 31750 Tronoh, Perak Darul Ridzuan, Malaysia
*yongsuetpeng@petronas.com.my
Abstract
Data mining has emerged as a very important research area that helps organisations make good use of the tremendous amount of data they have. In data classification tasks, fuzzy systems lack the ability to learn and
cannot adjust themselves to a new environment. On the other hand, neural networks can learn, but they are
opaque to the user. This paper presents a hybrid system to perform classification tasks. The main work of this
paper includes generating a set of weighted fuzzy production rules, mapping it into a min-max neural network;
re-deriving the back propagation algorithm for the proposed min-max neural network; and performing data
classification. The iris and credit card datasets were used to evaluate the system’s accuracy and interpretability.
The algorithm was found to have improved the fuzzy classifier.
Keywords: data classification, fuzzy neural
INTRODUCTION
Data mining refers to the discovery step in knowledge
discovery in databases. Its functionalities are used
to specify the kind of patterns to be found in data
mining tasks. In general, data mining tasks can be
classified into two categories: descriptive and predictive.
Descriptive mining tasks characterise the general
properties of the data whereas predictive mining
tasks perform inferences on the current data in order
to make predictions [1].
Many algorithms have been proposed and developed
in the data mining field [2][3][4][5]. There are several
challenges that data mining algorithms must satisfy in
performing either descriptive or predictive tasks. The criteria for evaluating data mining algorithms include accuracy, scalability, interpretability, versatility and speed [6][7][8].
Although human experts have played an important
role in the development of conventional fuzzy systems,
automatically generating fuzzy rules from data is very
helpful when human experts are not available and
may even provide information not previously known
by experts. Several approaches have been devised
to develop data-driven learning for fuzzy rule based
systems. These involve methods that automatically generate membership functions, fuzzy rule structures, or both from training data. Chen [9] proposed a method that uses fuzzy subsethood values to generate fuzzy rules for classification problems.
In Chen’s research [9], he defined fuzzy partitions for
the input variables and output variables according
to the type of data and the nature of classification
problems, then transform crisp value into fuzzy input
value and generate a set of fuzzy rules based on a
This paper was presented at the The 2006 International Conference on Data Mining (DMIN’06),
26-29 June 2006, Las Vegas
158
PLATFORM VOLUME Six NUMBER TWO july - december 2008
Technology Platform: APPLICATION OF INTELLIGENT IT SYSTEM
fuzzy linguistics model in order to apply the rule sets
for classification. Regarding the relative weight of the
linguistic term, Khairul [10] proposed the weighted
subsethood based algorithm for data classification.
Weighted subsethood based algorithm is the use
of subsethood values as relative weights over the
significance of different conditional attributes. Fuzzy
systems lack the ability to learn and cannot adjust
themselves to a new environment [15].
Our Approach

A hybrid system combining a neural network with a fuzzy system was used for data classification in this project. Figure 1 shows the block diagram of the developed system. The system is a supervised learning system: the training dataset was fed in to train the system to generate the weighted fuzzy production rules (WFPR), and these rules were then trained by a min-max neural network to achieve better classification accuracy. The system produced a new set of weighted fuzzy production rules; the data to be analysed was then fed into the system and classified using this new set of WFPR.
A. Weighted fuzzy production rules induction [11][12][13]

The iris data set was used to illustrate the work in this paper. For the iris data set, the trapezoidal membership function was used. The data set has 4 attributes, namely petal length, petal width, sepal length and sepal width, and each iris flower is classified as setosa, versicolor or virginica.

1) Define the input fuzzy member classes

Data classification tasks take several candidate inputs and classify the observed object into predetermined classes. In order to generate fuzzy rules for data classification, the linguistic variables for every candidate input were defined; this step depended on the target classification classes and the data type of the linguistic variable [16].

If the data type of an input attribute is nominal, the available categories become the linguistic terms, and the number of linguistic terms is independent of the number of target classes. For example, if the attribute is gender, which is a nominal data type, then there are only 2 linguistic terms, male and female. In the case of crisp membership values, a boy will have {(male, 1), (female, 0)}.

If the data type of an input attribute is continuous numerical, then the number of linguistic terms equals the number of target classes, and the membership function is trapezoidal. For example, with the iris data set, classification based on petal length, petal width, sepal length and sepal width is into setosa, versicolor and virginica; this means 3 target classes, and thus 3 linguistic terms for each input attribute, as shown in Table 1. The value range of each input attribute therefore had to be obtained to define the trapezoidal membership functions.

2) Calculate the subsethood values

After defining the fuzzy member classes, the subsethood value of each linguistic term was calculated over each classification class. Table 2 shows the subsethood values for the iris dataset.

Table 1. Linguistic terms for iris dataset

Attribute | Linguistic terms
Sepal Length | Small, Average, Large
Sepal Width | Small, Average, Large
Petal Length | Small, Average, Large
Petal Width | Small, Average, Large

[Figure 1 shows the training dataset and the data to be analysed entering a preprocessing subsystem, followed by the fuzzy neural network and fuzzy logic inference, which produce the classification output and fuzzy rules.]

Figure 1. Block diagram of the fuzzy neural system
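The trapezoidal membership function used for the continuous attributes can be sketched in a few lines. The corner points below are illustrative assumptions; the paper derives the actual breakpoints from the value range of each attribute but does not list them.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], equals 1 on [b, c], falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# e.g. three overlapping terms over an attribute spanning roughly [4.3, 7.9];
# these breakpoints are placeholders, not the values used in the paper
small   = lambda x: trapezoid(x, 4.0, 4.3, 5.0, 5.8)
average = lambda x: trapezoid(x, 5.0, 5.8, 6.4, 7.0)
large   = lambda x: trapezoid(x, 6.4, 7.0, 7.9, 8.2)
```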
Table 2. Subsethood values for iris dataset

Attribute | Term | setosa | versicolor | virginica
Sepal length | Small | S(set,S_SL) | S(ver,S_SL) | S(vir,S_SL)
Sepal length | Average | S(set,A_SL) | S(ver,A_SL) | S(vir,A_SL)
Sepal length | Large | S(set,L_SL) | S(ver,L_SL) | S(vir,L_SL)
Sepal width | Small | S(set,S_SW) | S(ver,S_SW) | S(vir,S_SW)
Sepal width | Average | S(set,A_SW) | S(ver,A_SW) | S(vir,A_SW)
Sepal width | Large | S(set,L_SW) | S(ver,L_SW) | S(vir,L_SW)
Petal length | Small | S(set,S_PL) | S(ver,S_PL) | S(vir,S_PL)
Petal length | Average | S(set,A_PL) | S(ver,A_PL) | S(vir,A_PL)
Petal length | Large | S(set,L_PL) | S(ver,L_PL) | S(vir,L_PL)
Petal width | Small | S(set,S_PW) | S(ver,S_PW) | S(vir,S_PW)
Petal width | Average | S(set,A_PW) | S(ver,A_PW) | S(vir,A_PW)
Petal width | Large | S(set,L_PW) | S(ver,L_PW) | S(vir,L_PW)

Here, S(set,S_SL) is the subsethood of Small_SepalLength with regard to setosa, and the rest follow.

Table 3. Weighted subsethood values for iris dataset

Attribute | Term | setosa | versicolor | virginica
Sepal length | Small | W(set,S_SL) | W(ver,S_SL) | W(vir,S_SL)
Sepal length | Average | W(set,A_SL) | W(ver,A_SL) | W(vir,A_SL)
Sepal length | Large | W(set,L_SL) | W(ver,L_SL) | W(vir,L_SL)
Sepal width | Small | W(set,S_SW) | W(ver,S_SW) | W(vir,S_SW)
Sepal width | Average | W(set,A_SW) | W(ver,A_SW) | W(vir,A_SW)
Sepal width | Large | W(set,L_SW) | W(ver,L_SW) | W(vir,L_SW)
Petal length | Small | W(set,S_PL) | W(ver,S_PL) | W(vir,S_PL)
Petal length | Average | W(set,A_PL) | W(ver,A_PL) | W(vir,A_PL)
Petal length | Large | W(set,L_PL) | W(ver,L_PL) | W(vir,L_PL)
Petal width | Small | W(set,S_PW) | W(ver,S_PW) | W(vir,S_PW)
Petal width | Average | W(set,A_PW) | W(ver,A_PW) | W(vir,A_PW)
Petal width | Large | W(set,L_PW) | W(ver,L_PW) | W(vir,L_PW)
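The entries tabulated above can be computed from the training memberships. The sketch below assumes the standard fuzzy subsethood definition S(A, B) = sum(min(mu_A, mu_B)) / sum(mu_A) and, following the weighted subsethood idea of [10], normalises each attribute's subsethood values by the largest one; both details are assumptions here rather than formulas stated explicitly in this paper.

```python
# Assumed definitions: standard subsethood and max-normalised weights (per [10]).

def subsethood(mu_class, mu_term):
    """mu_class, mu_term: membership values of every training sample."""
    num = sum(min(a, b) for a, b in zip(mu_class, mu_term))
    den = sum(mu_class)
    return num / den if den else 0.0

def weighted(subsethoods):
    """Normalise a dict {term: S} so the strongest term gets weight 1."""
    s_max = max(subsethoods.values())
    return {t: (s / s_max if s_max else 0.0) for t, s in subsethoods.items()}

# e.g. weights for one attribute of one class from hypothetical subsethoods
w = weighted({"Small": 0.9, "Average": 0.3, "Large": 0.1})
# -> {"Small": 1.0, "Average": 0.333..., "Large": 0.111...}
```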
3) Calculate the weighted subsethood values

Based on Table 2, the weighted subsethood values were calculated. As in Table 3, W(set,S_SL) refers to the weighted subsethood of Small_SepalLength with regard to setosa, and the rest follow.

4) Weighted fuzzy rule generation

After obtaining the weighted subsethood values, the set of weighted fuzzy production rules was generated:

• If Sepal length is [W(set,S_SL) x Small OR W(set,A_SL) x Average OR W(set,L_SL) x Large] AND Sepal width is [W(set,S_SW) x Small OR W(set,A_SW) x Average OR W(set,L_SW) x Large] AND Petal length is [W(set,S_PL) x Small OR W(set,A_PL) x Average OR W(set,L_PL) x Large] AND Petal width is [W(set,S_PW) x Small OR W(set,A_PW) x Average OR W(set,L_PW) x Large] Then Class is SETOSA

• If Sepal length is [W(ver,S_SL) x Small OR W(ver,A_SL) x Average OR W(ver,L_SL) x Large] AND Sepal width is [W(ver,S_SW) x Small OR W(ver,A_SW) x Average OR W(ver,L_SW) x Large] AND Petal length is [W(ver,S_PL) x Small OR W(ver,A_PL) x Average OR W(ver,L_PL) x Large] AND Petal width is [W(ver,S_PW) x Small OR W(ver,A_PW) x Average OR W(ver,L_PW) x Large] Then Class is VERSICOLOR

• If Sepal length is [W(vir,S_SL) x Small OR W(vir,A_SL) x Average OR W(vir,L_SL) x Large] AND Sepal width is [W(vir,S_SW) x Small OR W(vir,A_SW) x Average OR W(vir,L_SW) x Large] AND Petal length is [W(vir,S_PL) x Small OR W(vir,A_PL) x Average OR W(vir,L_PL) x Large] AND Petal width is [W(vir,S_PW) x Small OR W(vir,A_PW) x Average OR W(vir,L_PW) x Large] Then Class is VIRGINICA

B. Fuzzy neural network subsystem

The task of the fuzzy neural network subsystem was to train the weighted fuzzy production rules by modifying the weights so that the rules can classify data with higher accuracy. From the literature review, a fuzzy neural network generally has an input membership layer, a fuzzy rule layer and an output layer. The activation function for the input membership layer is the trapezoidal membership function, the activation for the fuzzy rule layer is the fuzzy rule, and the activation function for the output layer is the logic operation function.

The generated weighted fuzzy production rules were a combination of "AND" and "OR" operations, which cannot be used directly to activate a neuron. Thus, each rule was separated into two sub-rules: one sub-rule performs the "OR" operation and the other performs the "AND" operation. The initial rule can be rewritten as follows:

If W'(set,SL) x {Sepal length is [W'(set,S_SL) x Small OR W'(set,A_SL) x Average OR W'(set,L_SL) x Large]} AND W'(set,SW) x {Sepal width is [W'(set,S_SW) x Small OR W'(set,A_SW) x Average OR W'(set,L_SW) x Large]} AND W'(set,PL) x {Petal length is [W'(set,S_PL) x Small OR W'(set,A_PL) x Average OR W'(set,L_PL) x Large]} AND W'(set,PW) x {Petal width is [W'(set,S_PW) x Small OR W'(set,A_PW) x Average OR W'(set,L_PW) x Large]} Then Class is SETOSA;

where W'(set,SL) x W'(set,S_SL) = W(set,S_SL); initially, for the neural network weights, W'(set,S_SL) = W(set,S_SL) and W'(set,SL) = 1, and similarly for the other weights.

The rule is thus a logical AND operation over sets of OR operations. After the rules were rewritten, a set of sub-rules was obtained comprising OR-operation rules and AND-operation rules:

Sub-rule 1: If Sepal length is [W'(set,S_SL) x Small OR W'(set,A_SL) x Average OR W'(set,L_SL) x Large] Then Class is SETOSA valued at A1

Sub-rule 2: If Sepal width is [W'(set,S_SW) x Small OR W'(set,A_SW) x Average OR W'(set,L_SW) x Large] Then Class is SETOSA valued at A2

Sub-rule 3: If Petal length is [W'(set,S_PL) x Small OR W'(set,A_PL) x Average OR W'(set,L_PL) x Large] Then Class is SETOSA valued at A3

Sub-rule 4: If Petal width is [W'(set,S_PW) x Small OR W'(set,A_PW) x Average OR W'(set,L_PW) x Large] Then Class is SETOSA valued at A4

Followed by: If W'(set,SL) x A1 AND W'(set,SW) x A2 AND W'(set,PL) x A3 AND W'(set,PW) x A4 Then Class is SETOSA

Simplification of the rule helped in splitting the fuzzy rule layer of the fuzzy neural network into two sub-layers: an AND (Min) layer and an OR (Max) layer.
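The AND-of-ORs structure of these rules can be made concrete in a few lines. The sketch below, with placeholder weight and membership dictionaries standing in for the W(set, ...) entries of Table 3, evaluates one weighted fuzzy production rule as a min over attributes of a max over weighted term memberships:

```python
# Sketch of weighted fuzzy rule evaluation; weights and memberships are placeholders.

def rule_score(sample, rule):
    """sample: {attribute: {term: membership}}; rule: {attribute: {term: weight}}."""
    per_attribute = []
    for attribute, term_weights in rule.items():
        # OR part: max over weighted term memberships for this attribute
        ors = max(w * sample[attribute][t] for t, w in term_weights.items())
        per_attribute.append(ors)
    # AND part: min over the attribute scores
    return min(per_attribute)
```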
5) Fuzzy neural network design
Figure 2 shows the fuzzy neural network architecture
for the system.
Layer 1, the input layer:
Each neuron in this layer transmits the external crisp signal directly to the next layer:

$Y_i^{(1)} = X_i^{(1)}$

where $X_i^{(1)}$ is the input and $Y_i^{(1)}$ is the output of neuron $i$ in layer 1.

Layer 2, the input membership layer:
Neurons in this layer represent fuzzy sets used in the antecedents of fuzzy rules. A fuzzification neuron receives a crisp input and determines the degree to which this input belongs to the neuron's fuzzy set. The activation function is the membership function, and the input is the output of the crisp input layer:

$X_j^{(2)} = Y_i^{(1)}, \qquad Y_j^{(2)} = \mu(X_j^{(2)})$

The output of a neuron in this layer is the membership value of the crisp input value.

Layers 3 and 4 are the fuzzy rule layers:
These layers perform the AND and OR operations in the weighted fuzzy rules. The activation function for the OR layer is the max operation, and the activation function for the AND layer, which was re-derived, is the min operation.

$Y_j^{(3)} = \bigvee_{i=0}^{\max C} \left( \max W_{i,j} \ast Y_i^{(2)} \right) \qquad (7)$

where maxC is the number of neurons in layer 3 and $\max W_{i,j}$ is the weight of the connection from neuron $i$ in layer 2 to neuron $j$ in layer 3.

$Y_j^{(4)} = \bigwedge_{i=0}^{\min C} \left( \min W_{i,j} \ast Y_i^{(3)} \right) \qquad (8)$

where minC is the number of neurons in layer 4 and $\min W_{i,j}$ is the weight of the connection from neuron $i$ in layer 3 to neuron $j$ in layer 4.

Layer 5, the defuzzification layer:
The output is classified. For classification purposes, a minor modification was made to the computed output of the network:

$Y_i^{(5)} = \begin{cases} 1, & \text{if } Y_i^{(4)} = \max_{i=1..n} Y_i^{(4)} \\ 0, & \text{if } Y_i^{(4)} \neq \max_{i=1..n} Y_i^{(4)} \end{cases} \qquad (10)$

For one iteration the error was

$E(p) = \frac{1}{2} \sum_{k=1}^{OC} \left( Y_k^{(5)} - y_k \right)^2 \qquad (9)$

where OC is the number of output neurons, $y_k$ is the desired output for neuron $k$ and $Y_k^{(5)}$ is the actual output of neuron $k$. The error $E(p)$ is a function of minW and maxW; the main objective is to adjust these weights so that the error function reaches a minimum or falls below a given threshold.

[Figure 2 shows crisp inputs X1 and X2 feeding input membership neurons A1-A3 and B1-B3, connected through maxW weights to the Max layer (R11-R14) and through minW weights to the Min layer (R21, R22), which produces outputs O1 and O2.]

Figure 2. Fuzzy neural network architecture
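A minimal forward pass through layers 3 to 5, directly following equations (7), (8) and (10), is sketched below; the weight matrices and layer sizes are illustrative assumptions rather than values from the paper.

```python
# Forward pass for the min-max layers of equations (7), (8) and (10).

def max_layer(y2, maxW):
    """Eq (7): Y3[j] = max over i of (maxW[i][j] * Y2[i])."""
    return [max(maxW[i][j] * y2[i] for i in range(len(y2)))
            for j in range(len(maxW[0]))]

def min_layer(y3, minW):
    """Eq (8): Y4[u] = min over j of (minW[j][u] * Y3[j])."""
    return [min(minW[j][u] * y3[j] for j in range(len(y3)))
            for u in range(len(minW[0]))]

def defuzzify(y4):
    """Eq (10): winner-take-all classification of the network output."""
    best = max(y4)
    return [1 if v == best else 0 for v in y4]

# e.g. 3 membership values, 2 rule neurons, 2 classes (all values illustrative)
y2 = [0.8, 0.4, 0.1]
maxW = [[1.0, 0.2], [0.5, 0.9], [0.3, 0.7]]   # layer 2 -> layer 3
minW = [[1.0, 0.6], [0.4, 1.0]]               # layer 3 -> layer 4
print(defuzzify(min_layer(max_layer(y2, maxW), minW)))  # -> [0, 1]
```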
6) Learning in the fuzzy neural network

For the learning phase, we applied the back propagation equations [14][15]. According to the principle of gradient descent, the back propagation equations can be written as

$\min W_{i,j}(p+1) = \min W_{i,j}(p) - \alpha \ast \frac{\partial E(p)}{\partial \min W_{i,j}}$

$\max W_{i,j}(p+1) = \max W_{i,j}(p) - \alpha \ast \frac{\partial E(p)}{\partial \max W_{i,j}}$

where $\alpha$ is the learning rate.

By working out the derivatives, the following results were obtained:

$\frac{\partial E(p)}{\partial \min W_{i,j}} = \begin{cases} (Y_j^{(4)} - Y_j) \ast Y_i^{(3)}, & C1 \\ (Y_j^{(4)} - Y_j) \ast Y_i^{(3)} \ast \bigwedge_{k=0,\,k \neq i}^{\min C} (\min W_{k,j} \ast Y_k^{(3)}), & C2 \end{cases}$

where
$C1$: $\min W_{i,j} \ast Y_i^{(3)}$ is $\bigwedge_{k=0}^{\min C} (\min W_{k,j} \ast Y_k^{(3)})$
$C2$: $\min W_{i,j} \ast Y_i^{(3)}$ is not $\bigwedge_{k=0}^{\min C} (\min W_{k,j} \ast Y_k^{(3)})$

$\frac{\partial E(p)}{\partial \max W_{i,j}} = \begin{cases} (Y_u^{(4)} - Y_u) \ast \min W_{j,u} \ast Y_i^{(2)}, & C1 \\ (Y_u^{(4)} - Y_u) \ast \bigwedge_{k=0,\,k \neq j}^{\min C} (\min W_{k,u} \ast Y_k^{(3)}) \ast \min W_{j,u} \ast Y_i^{(2)}, & C2 \\ (Y_u^{(4)} - Y_u) \ast \min W_{j,u} \ast \max W_{i,j} \ast (Y_i^{(2)})^2, & C3 \\ (Y_u^{(4)} - Y_u) \ast \bigwedge_{k=0,\,k \neq j}^{\min C} (\min W_{k,u} \ast Y_k^{(3)}) \ast \min W_{j,u} \ast \max W_{i,j} \ast (Y_i^{(2)})^2, & C4 \end{cases}$

Here,
$C1$: $\min W_{j,u} \ast Y_j^{(3)}$ is $\bigwedge_{k=0}^{\min C} (\min W_{k,u} \ast Y_k^{(3)})$ and $\max W_{i,j} \ast Y_i^{(2)}$ is $\bigvee_{k=0}^{\max C} (\max W_{k,j} \ast Y_k^{(2)})$
$C2$: $\min W_{j,u} \ast Y_j^{(3)}$ is not $\bigwedge_{k=0}^{\min C} (\min W_{k,u} \ast Y_k^{(3)})$ and $\max W_{i,j} \ast Y_i^{(2)}$ is $\bigvee_{k=0}^{\max C} (\max W_{k,j} \ast Y_k^{(2)})$
$C3$: $\min W_{j,u} \ast Y_j^{(3)}$ is $\bigwedge_{k=0}^{\min C} (\min W_{k,u} \ast Y_k^{(3)})$ and $\max W_{i,j} \ast Y_i^{(2)}$ is not $\bigvee_{k=0}^{\max C} (\max W_{k,j} \ast Y_k^{(2)})$
$C4$: $\min W_{j,u} \ast Y_j^{(3)}$ is not $\bigwedge_{k=0}^{\min C} (\min W_{k,u} \ast Y_k^{(3)})$ and $\max W_{i,j} \ast Y_i^{(2)}$ is not $\bigvee_{k=0}^{\max C} (\max W_{k,j} \ast Y_k^{(2)})$

After training, a new set of weights was obtained, and the trained rules became:

If minW(set,SL) x {Sepal length is [maxW(set,S_SL) x Small OR maxW(set,A_SL) x Average OR maxW(set,L_SL) x Large]} AND minW(set,SW) x {Sepal width is [maxW(set,S_SW) x Small OR maxW(set,A_SW) x Average OR maxW(set,L_SW) x Large]} AND minW(set,PL) x {Petal length is [maxW(set,S_PL) x Small OR maxW(set,A_PL) x Average OR maxW(set,L_PL) x Large]} AND minW(set,PW) x {Petal width is [maxW(set,S_PW) x Small OR maxW(set,A_PW) x Average OR maxW(set,L_PW) x Large]} Then Class is SETOSA

The rules were simplified as follows:

If Sepal length is [newW(set,S_SL) x Small OR newW(set,A_SL) x Average OR newW(set,L_SL) x Large] AND Sepal width is [newW(set,S_SW) x Small OR newW(set,A_SW) x Average OR newW(set,L_SW) x Large] AND Petal length is [newW(set,S_PL) x Small OR newW(set,A_PL) x Average OR newW(set,L_PL) x Large] AND Petal width is [newW(set,S_PW) x Small OR newW(set,A_PW) x Average OR newW(set,L_PW) x Large] Then Class is SETOSA

where newW(set,S_SL) = maxW(set,S_SL) x minW(set,SL), and similarly for the other weights.
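As an illustration of the update rule, the sketch below applies the gradient-descent step for the min weights in the simplest case, C1, where only the winning connection receives a correction; handling the remaining cases is analogous but omitted. This is one reading of the equations above, not the authors' exact implementation.

```python
# Sketch of the C1 gradient step for the min weights; an interpretation only.

def update_minW(minW, y3, y4, target, alpha=0.1):
    """minW[j][u]: layer 3 -> layer 4 weights; y3, y4: layer outputs."""
    for u in range(len(y4)):
        err = y4[u] - target[u]
        # the connection that achieved the minimum (case C1)
        j_win = min(range(len(y3)), key=lambda j: minW[j][u] * y3[j])
        # gradient step: dE/dminW = err * Y3 at the winning connection
        minW[j_win][u] -= alpha * err * y3[j_win]
    return minW
```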
Results and Discussion

For the iris data set, the entire dataset was labelled from 1 to 150 and divided equally into two sub-datasets, IP1 and IP2; IP1 consisted of the odd-numbered objects and IP2 of the even-numbered objects. Each sub-dataset was used in turn to train the fuzzy neural system, and the remaining data was then analysed: IP1 was used to train the system, after which IP2 and the whole dataset were analysed; likewise, IP2 was used to train the system, after which IP1 and the whole dataset were analysed. The accuracies achieved are shown in Table 4.
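The evaluation protocol just described can be summarised in a short sketch; `train` and `evaluate` are stand-ins for the fuzzy neural system's training and accuracy measurement, which are not spelled out as code in the paper.

```python
# Two-fold protocol: odd-numbered samples form IP1, even-numbered form IP2.

def two_fold(dataset, train, evaluate):
    ip1 = dataset[0::2]   # samples 1, 3, 5, ... (odd labels)
    ip2 = dataset[1::2]   # samples 2, 4, 6, ... (even labels)
    results = {}
    for name, tr, tests in [("IP1", ip1, {"IP2": ip2, "Whole": dataset}),
                            ("IP2", ip2, {"IP1": ip1, "Whole": dataset})]:
        model = train(tr)
        results[name] = {k: evaluate(model, v) for k, v in tests.items()}
    return results
```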
Table 4. Iris data set results

Training dataset | Training accuracy | Testing dataset | SBA Fuzzy System | WSBA Fuzzy System | Fuzzy Neural System
IP1 | 100% | IP2 | 80% | 93.33% | 94.67%
IP1 | 100% | Whole | 78.69% | 94.67% | 96.67%
IP2 | 97.30% | IP1 | 78.67% | 93.33% | 93.33%
IP2 | 97.30% | Whole | 78% | 93.33% | 96.77%
Table 5. Credit card data set results

Training dataset | Training accuracy | Testing dataset | SBA Fuzzy System | WSBA Fuzzy System | Fuzzy Neural System
Cr1 | 80% | Cr2 | 70% | 76% | 81%

SBA: subsethood based algorithm [10]
WSBA: weighted subsethood based algorithm [10]
For the credit card dataset, the data was divided into two sub-datasets, one for training and the other for evaluating the system. 358 samples were randomly selected from the credit card approval data set as the training set (Cr1), and the remaining 332 samples (Cr2) were used for testing. The training accuracy achieved was 80% and the evaluation accuracy was 81%, as shown in Table 5.
Conclusion

Based on the results in Table 4 and Table 5, the hybrid system with the fuzzy neural network improved the accuracy of the weighted fuzzy production rules. For the iris dataset the improvement was about 1-2% in accuracy, whereas for the credit card approval dataset a 5% accuracy improvement was obtained compared with the fuzzy systems.
This project has demonstrated a fuzzy neural algorithm for the data classification task. Fuzzy logic and neural networks are complementary tools in building intelligent systems. Fuzzy systems lack the ability to learn and cannot adjust themselves to a new environment. On the other hand, although neural networks can learn, they are opaque to the user [15]. Integrated fuzzy neural systems can combine the parallel computational and learning abilities of neural networks with the human-like knowledge representation and explanation abilities of fuzzy systems. As a result, the neural network becomes more transparent, while the fuzzy system becomes capable of learning.
With the fuzzy neural approach, the initial weights were taken from the weighted fuzzy rules, which already had high accuracy; thus there was no need to generate random weights and start training without prior knowledge. Besides that, the hybrid system could generate interpretable rules, which plain neural networks cannot.
Data mining has emerged as a very important research area that helps organisations make good use of the tremendous amount of data they have. In combination with many other research disciplines, data mining turns raw data into useful information rather than leaving it as raw, meaningless data.
In conclusion, the hybrid system achieved high accuracy for the data classification task. Besides that, it generated rules that helped to interpret the output results. Also, the neural network used, a min-max back propagation network, is among the simplest and most popular networks, which makes the algorithm easy to implement. The algorithm improved on the fuzzy classifier as well as on plain neural network training. However, the processing time will be long if the dataset is huge. Thus, for future work, further testing needs to be done to improve the design of the algorithm for classifying larger datasets.
References

[1] Margaret H. Dunham, 2003, "Data Mining Introductory and Advanced Topics", New Jersey, Prentice Hall
[2] Mehmed Kantardzic, 2003, "Data Mining: Concepts, Models, and Algorithms", John Wiley & Sons
[3] David Hand, Heikki Mannila, Padhraic Smyth, 2002, "Principles of Data Mining", The MIT Press
[4] Jiawei Han, Micheline Kamber, 2000, "Data Mining: Concepts and Techniques", Morgan Kaufmann
[5] Freitas, A. A., 2002, "Data Mining and Knowledge Discovery With Evolutionary Algorithms", Natural Computing Series
[6] John Wang, 2003, "Data Mining: Opportunities and Challenges", Idea Group Publishing
[7] Michalski R. et al., 1998, "Machine Learning and Data Mining: Methods and Applications", John Wiley & Sons
[8] Karuna Pande Joshi, 1997, "Analysis of Data Mining Algorithms", <http://userpages.umbc.edu/~kjoshi1/datamine/proj_rpt.htm>. Accessed in January 2005
[9] Chen S. M., Lee S. H. and Lee C. H., 2001, "A New Method for Generating Fuzzy Rules from Numerical Data for Handling Classification Problems", Applied Artificial Intelligence, vol. 15, pp. 645-664
[10] Rasmani K. A. and Shen Q., 2003, "Weighted Linguistic Modelling Based on Fuzzy Subsethood Values", IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2003), USA
[11] Castro J. L. and Zurita J. M., 1997, "An Inductive Learning Algorithm in Fuzzy Systems", Fuzzy Sets and Systems, vol. 89, pp. 193-203
[12] D. S. Yeung and E. C. C. Tsang, 1997, "Weighted Fuzzy Production Rules", Fuzzy Sets and Systems, vol. 88, pp. 229-313
[13] Hong T. P. and Lee C. Y., 1996, "Induction of Fuzzy Rules and Membership Functions from Training Examples", Fuzzy Sets and Systems, vol. 84, pp. 33-47
[14] Eric C. C. Tsang, Daniel So Yeung and Xi-Zhao Wang, 2002, "Learning Weights of Fuzzy Production Rules by a Max-Min Neural Network", Proceedings of the IEEE Conference on Systems, Man and Cybernetics, pp. 1485-1490, Tucson, Arizona, USA
[15] Michael Negnevitsky, 2005, "Artificial Intelligence: A Guide to Intelligent Systems", Second Edition, Addison Wesley
[16] Zadeh L. A., 1988, "Fuzzy Logic", IEEE Computer, vol. 21, pp. 83-93

S. P. Yong obtained her Master in Information Technology and Bachelor in Information Technology (Honours) in Industrial Computing from Universiti Kebangsaan Malaysia. Her employment experience includes IBM Malaysia Sdn Bhd, Informatics Institute Ipoh and Universiti Teknologi PETRONAS. Her research interests are in intelligent systems, data and text mining.
OTHER AREAS
RESEARCH IN EDUCATION:
TAKING SUBJECTIVE BASED RESEARCH SERIOUSLY

Sumathi Renganathan and Satirenjit Kaur
Universiti Teknologi PETRONAS, 31750 Tronoh, Perak Darul Ridzuan, Malaysia.

ABSTRACT
Qualitative research is often criticised for being biased and unscientific. In an educational institution dominated by researchers in the scientific disciplines, where positivist methodology reigns supreme, it is often difficult to convince research colleagues of the true value of research that is subjective. In this paper the authors investigate how researchers from the scientific community, namely from the engineering discipline, who are familiar with scientific research, perceive education research that is subjective and value-laden. It is also hoped that the findings from this study can highlight that different epistemologies help produce different knowledge, and that subjective research can produce noteworthy research findings in the field of education.
Keywords: qualitative, subjective methodology, interpretivism, education research
Introduction
Research is a process of finding solutions to a problem
by conducting a thorough study and analysis of
the situational factors. It provides the needed
information that guides people to make informed
decisions to successfully deal with problems (Sekaran,
2000). This can be done by conducting analysis of
data gathered firsthand (primary data) or data that
are already available (secondary data). The data can either be quantitative, gathered through experiments, structured questions and numerical information, or qualitative, gathered using open-ended questions in interviews or through observations. However, the idea of research varies across disciplines and subject areas.
In an educational institution dominated by researchers in the scientific disciplines, where positivist methodology reigns supreme, it is often difficult to convince research colleagues of the true value of research that is subjective. In the educational institution where we carried out this study, research papers from a particular discipline are often reviewed by colleagues from some other discipline. Thus, we believe that there is often a major conflict of ideas and underpinning knowledge on what "research" from various disciplines should be.
This research was carried out in a technological university where the majority of lecturers are from the engineering discipline. Within engineering itself there are different departments, such as Electrical and Electronics, Chemical, Civil and Mechanical. Research among these disciplines may differ; however, the majority of these researchers are from the hard sciences and thus, we believe, hold a positivist view of what research should be. We believe that it is much more difficult to convince engineering colleagues of the true value of
This paper was presented at the ICOSH 07 Conference, Kuala Lumpur,
13 - 15 March 2007
research which falls under the soft sciences, especially if there is a conflict in the underlying research paradigm. There has been ongoing debate in the social sciences regarding appropriate research methodologies. In general, there are two main paradigms in research, known as positivist (usually practised by researchers from the hard sciences) and non-positivist (typically practised by researchers from the soft sciences). The debate about which paradigm should be adopted revolves around the fact that each paradigm has its strengths and limitations.
The positivist paradigm is also known as traditional, experimental, empiricist and quantitative. The origins of this paradigm date back to the nineteenth century and the period of the French philosopher Auguste Comte. There are several other well-known scholars of this paradigm, including Durkheim, Weber and Marx (Creswell, 2003).
is external and objective. The positivists suggest that
there are certain laws which can be found in both
social and natural worlds in a sequence of cause and
effect, and believe in causality in the explanation of
any phenomena. Causality explains the relationship
between cause and effect and integrates these
variables into a theory. In this paradigm, the observer
is considered to be an independent entity. It is
believed that the researcher can remain distant from
the research, in terms of their own values which might
distort their objectivity. Researchers that adhere to this
paradigm operationalise the research concepts to be
measured, and large samples are taken. This research
approach is known as the quantitative approach and
is based on a numerical measurement of a specific
aspect of phenomena. The aim of this approach is to
seek a general description or to test causal hypotheses
(Hollis, 1994).
On the other hand, the non-positivist paradigm
considers the world to be socially constructed
and subjective. This paradigm encompasses the
constructivist approach, interpretative approach,
postmodern perspective and it is commonly known
as the qualitative approach. Constructivists or interpretivists believe that to understand the world of meaning one must be able to interpret it. The
inquirer must elucidate the process of meaning and
clarify what and how meanings were embodied in the
language and actions of social actors (Love et al., 2002,
p. 301). Guba and Lincoln (1994) argued that post
modernism suggested that the world was constituted
by “our shared language and we can only know the
world through the particular forms of discourse
that our language(s) can create” (Love et al., 2002, p.
298). Based on this, the paradigm in general relies on personal views, experiences and subjectivity in understanding human actions and behaviour. The
observer is considered part of what is being observed.
Researchers of this paradigm use multiple methods to establish different views of a phenomenon. Smaller
samples are taken to investigate in-depth or over
time, and the research approach used is known as
qualitative.
The purpose of this study
We carried out this study to investigate how researchers from the scientific community, namely from the engineering discipline, who are familiar with scientific research, perceive education research that is subjective and value-laden. It is not our intention to criticise any research paradigm or the various research methods used for research purposes,
because we believe, as others do (McGrath, 1982; Frankel, 2005), that different research problems require different research methodologies. What we hope is that this study can address any gap that might exist in the engineering lecturers' understanding of subjective education research.
In this paper we first discuss, based on existing literature, the general understanding of research. Next we give a brief background of this study, followed by a discussion of the findings and analysis. This is followed by a further discussion of the findings through answering the two main research questions:
1. How familiar are the researchers from the
engineering discipline, in subjective, qualitative
based education research?
2. What are the engineering researchers’ perceptions
regarding the contribution of qualitative based
research?
Finally, we conclude by highlighting the contributions
of this study.
General Understanding of Research
In this paper, our main underlying notion is that of
Hitchcock and Hughes (1995) who suggested that
ontological assumptions give rise to epistemological
assumptions, which give rise to methodological
considerations, which in turn give rise to
instrumentation and data collection. Thus, research is
not merely about research methods:
… research is concerned with understanding of the
world and that this is informed by how we view our
world(s), what we take understanding to be, and
what we see as the purposes of understanding.
Cohen, Manion and Morrison, 2000:1
This paper examines the practice of research through
two significant lenses (Cohen, Manion and Morrison,
2000):
1. scientific and positivistic methodologies (quantitative approach)
2. naturalistic and interpretive methodologies (qualitative approach)
These two competing views have been absorbed into educational research. 'Positivism strives for objectivity, measurability, predictability, controllability, patterning, the construction of laws and rules of behaviour, and the ascription of causality' (p. 28). Positivist educational researchers 'concentrate on the repetitive, predictable and invariant aspects of the person', whereas interpretivists explore the person's intention, individualism and freedom (Cohen, Manion and Morrison, 2000:18). 'Interpretivists strive to understand and interpret the world in terms of its actors' (p. 28). As discussed earlier, the terms hard science and soft science are also often used to refer to positivistic and interpretive methodologies respectively. Positivists often argue that interpretivists do not use scientific methods and rely on anecdotal evidence, or evidence which is not mathematical; such methods are therefore often considered to lack rigour. To conduct research in any field, it
is important to identify the research paradigm of the researcher. As Guba and Lincoln (1994) state, a research paradigm guides the researcher in choices of methods as well as epistemologically. Epistemology is defined as 'the study of the nature of knowledge and justification' (Moser, 1995), which implies that it 'involves both articulating and evaluating conceptions of knowledge and justification' (Howe, K.R., 2003:97).
Research paradigms may vary from positivist
(evaluative-deductive) paradigm to interpretivist
(evaluative-interpretivist) paradigm and there are also
combinations of paradigms which give rise to mixed
paradigms (Johl, S.K, Bruce, A. and Binks, M. 2006).
Research paradigms often guide the researchers in
identifying suitable methods for their research and
more often than not many researchers distinguish
their research as either qualitative or quantitative.
Some researchers also use both quantitative and qualitative methods and refer to this as mixed-method research. Although researchers may use various
methods in their research, a positivist epistemological
stance is radically different from an interpretivist.
However, often a positivist paradigm is identified with
quantitative methods and the interpretivist paradigm
is identified with qualitative methods.
The strength of quantitative research approaches in the field of education is that they are able to yield data that is projectable to a larger population. Furthermore, because quantitative approaches deal with numbers and statistics, the data can be effectively translated into charts and graphs. Thus, quantitative research methods play an important role in policy making in the field of education. As for qualitative research approaches, their strength is that they are able to gain insights into participants' feelings, perceptions and viewpoints. These insights are unobtainable from quantitative research methods. In the field of education, the findings from qualitative research can determine "why" and "how" certain educational policies actually work (or do not work) in the real world. Table 1 gives a brief summary of the differences between qualitative and quantitative research methods (Burns, 2000):
Table 1. Summary of qualitative and quantitative research methods

Assumptions
Qualitative: Reality socially constructed; variables complex, interwoven and hard to measure; events viewed from the informant's perspective; dynamic quality to life.
Quantitative: Facts and data have objective reality; variables can be measured and identified; events viewed from the outsider's perspective; static reality to life.

Purpose
Qualitative: Interpretation; contextualisation; understanding the perspectives of others.
Quantitative: Prediction; generalisation; causal explanation.

Method
Qualitative: Data collection using participant observation and unstructured interviews; concludes with hypothesis and grounded theory; emergence and portrayal; inductive and naturalistic; data analysis by themes from informants' descriptions; data reported in the language of the informant; descriptive write-up.
Quantitative: Testing and measuring; commences with hypothesis and theory; manipulation and control; deductive and experimental; statistical analysis; abstract, impersonal write-up.

Role of researcher
Qualitative: Researcher as instrument; personal involvement; empathetic understanding.
Quantitative: Researcher applies formal instruments; detachment; objectivity.
The Study
This study was carried out in a private technological
university which offers undergraduate and
postgraduate degree programmes in Engineering
and Information Technology/Information System (IT/
IS). We used questionnaires to obtain the relevant
information and the participants were lecturers from
the Engineering and the IT/IS disciplines. We only targeted lecturers who possess a PhD, as we believe these lecturers would practise and understand research better. The university has a total of 263 lecturers, of whom 37% (98 lecturers) possess a PhD. We distributed the questionnaires during a workshop conducted specifically for lecturers identified from every department in the university for their research experience or potential capability in research. Thirty-three lecturers (33%) completed and returned the questionnaires. The questionnaires contained 28 statements related to quantitative and qualitative research. Each statement required the lecturers to rate it on a 7-point Likert scale; thus, there are no right or wrong answers, but based on the ratings it is possible to gauge the lecturers' familiarity with and perception of qualitative research.
Findings and Analysis of Data
Findings and Analysis of Data

Although the questionnaire contained 28 statements, these were preceded by the following question: How familiar are you with qualitative based research?
Not at all familiar  1   2   3   4   5   6   7  Very familiar
For this question, we categorised lecturers whose responses ranged between 1 and 3 as not familiar, those whose responses ranged between 5 and 7 as familiar, and those at the mid-point of 4 as believing they have some knowledge of qualitative research. Surprisingly, 27% (9 lecturers) chose not to answer this question. Furthermore, an equal number of lecturers responded in each category: 8 lecturers (24%) responded as not familiar, another 8 as familiar, and the remaining 8 as having some knowledge of qualitative research. It is also important to note that 4 of the lecturers in this study stated that they were not at all familiar with qualitative based research and thus did not complete the questionnaire.
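For illustration, the categorisation just described amounts to a simple tally; the ratings list in the example below is hypothetical, with 0 standing for a non-response.

```python
# Tally of the familiarity question using the categorisation described above:
# 1-3 not familiar, 4 some knowledge, 5-7 familiar, 0 for no response.

def categorise(ratings):
    counts = {"no response": 0, "not familiar": 0, "some knowledge": 0, "familiar": 0}
    for r in ratings:
        if r == 0:
            counts["no response"] += 1
        elif r <= 3:
            counts["not familiar"] += 1
        elif r == 4:
            counts["some knowledge"] += 1
        else:
            counts["familiar"] += 1
    return counts

print(categorise([0, 2, 4, 6, 7, 1, 5, 0]))  # hypothetical ratings
```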
Based on this, we decided it was important to analyse the lecturers' responses to each of the statements in the questionnaire. We have analysed the lecturers' responses to the statements under the following headings:

• General perception of "research"
• Research paradigm
• Research methods
• Researcher's role
• Style of presentation
General perception of “research”
We included six statements in the questionnaire to gauge the lecturers' general perception of research, especially qualitative research (see Table 2). Each statement required the lecturers to choose from a 7-point Likert scale, where 1 indicates strongly disagree, 4 neutral and 7 strongly agree.
Based on the mean scores shown in Table 2, the findings indicate that the lecturers in this study are not very familiar with qualitative research. In fact, the lowest mean scores (3.97 and 3.94) are for Statements 1 and 19, which indicates that the lecturers disagree with these statements. This implies that they believe all researchers share the same research paradigm and thus regard research from different disciplines as the same. Furthermore, the mean scores for the lecturers' responses to the other statements (Statements 2, 22, 23, 24) ranged from 4.5 to 4.8, which indicates the lecturers' uncertainty in their understanding of qualitative research.
Research Paradigm
Although we perceive the lecturers from the engineering discipline as positivists, we wanted to know their perception of qualitative research. The statements in our questionnaire did not use explicit terms for research paradigms, such as positivist or interpretivist, in case the lecturers were
Table 2. Lecturers' general perception of "research"

No. | Statement | Mean score
1 | The idea of "research" varies in different disciplines or subject areas | 3.97
2 | In qualitative based research, meanings are discovered not created | 4.12
19 | Researchers from different disciplines think about "research" differently | 3.94
22 | Qualitative research seeks to illuminate, understand and extrapolate what is being researched. | 4.82
23 | Qualitative research is concerned with meanings and personal experience of individuals, groups and sub-cultures. | 4.70
24 | Research is about making sense of chaos and translating it into culturally accepted explanations. | 4.50
Table 3. Statements to gauge positivist influence on qualitative based research

No. | Statement | Mean score
3 | A good qualitative research must be reliable, valid and objective. | 5.67
5 | In qualitative research, the truth should be established logically or supported by empirical evidence. | 5.15
6 | The research must be carried out systematically so that it can be replicated by other researchers. | 5.39
7 | Qualitative research seeks causal determination, prediction and generalisation of findings. | 4.39
8 | The quality of the research is determined by the validity of the research. | 5.12
10 | Doing research means getting the right answer or the true picture of what was being researched. | 4.48
11 | A qualitative researcher believes in a true reality and the task of research is to find it. | 4.42
14 | The research must be free of biases. | 5.00
16 | Rules for qualitative based research are objectivity, consistency and rationality. | 5.09
18 | The researcher believes that there is a reality out there that can be objectively measured and found through research. | 4.36
20 | Qualitative findings are not generalisable. | 3.21
27 | There must be a real or tangible conclusion in a research. | 4.70
28 | Research must be able to produce findings that are conclusive. | 4.61
not familiar with such terms. However, based on their responses it was possible to identify whether the lecturers' perceptions of qualitative research are influenced by a positivist research paradigm.

As stated earlier, to conduct research in any field it is important to identify the researcher's research paradigm. Furthermore, a research paradigm guides the researcher in choices of methods as well as epistemologically (Guba and Lincoln, 1994). Thus, in this study we included 13 statements (see Table 3) to gauge whether the lecturers were influenced by a positivist research paradigm in judging qualitative based research.

The mean scores of the lecturers' responses to these statements ranged from 4.36 (for Statement 18) to 5.67 (for Statement 3). The only exception is Statement 20, for which researchers from an interpretivist paradigm would have had scores ranging from 5 to 7. The mean score of 3.21 indicates that the majority of the lecturers disagree with this statement, implying that they believe qualitative findings are generalisable, which reflects a positivist mindset on research findings. It is important to note that some of the statements concern fundamental aspects which differentiate a positivist research paradigm from an interpretivist one. The following section provides some examples:
Statement 3 (see Table 3) is a typical underlying concept of the positivist research paradigm. Researchers who use logical positivism or quantitative research are accustomed to using terms such as reliability, validity and objectivity in judging the quality of research. From such a point of view, qualitative research may seem unscientific and anecdotal. However, applying the same criteria used to judge quantitative research to qualitative research has been extensively criticised as inappropriate (Healy and Perry, 2000; Lincoln and Guba, 1985; Stenbacka, 2001).
Table 4. Statement 3 – A good qualitative research must be reliable, valid and objective

Rating | Frequency | Percent | Valid percent | Cumulative percent
0 | 4 | 12.1 | 12.1 | 12.1
5 | 2 | 6.1 | 6.1 | 18.2
6 | 12 | 36.4 | 36.4 | 54.5
7 | 15 | 45.5 | 45.5 | 100.0
Total | 33 | 100.0 | 100.0 |

Table 5. Statement 5 – In qualitative research, the truth should be established logically or supported by empirical evidence

Rating | Frequency | Percent | Valid percent | Cumulative percent
0 | 4 | 12.1 | 12.1 | 12.1
2 | 1 | 3.0 | 3.0 | 15.2
3 | 1 | 3.0 | 3.0 | 18.2
4 | 3 | 9.1 | 9.1 | 27.3
5 | 2 | 6.1 | 6.1 | 33.3
6 | 11 | 33.3 | 33.3 | 66.7
7 | 11 | 33.3 | 33.3 | 100.0
Total | 33 | 100.0 | 100.0 |

Table 6. Statement 8 – The quality of the research is determined by the validity of the research

Rating | Frequency | Percent | Valid percent | Cumulative percent
0 | 5 | 15.2 | 15.2 | 15.2
4 | 1 | 3.0 | 3.0 | 18.2
5 | 6 | 18.2 | 18.2 | 36.4
6 | 12 | 36.4 | 36.4 | 72.7
7 | 9 | 27.3 | 27.3 | 100.0
Total | 33 | 100.0 | 100.0 |
Table 4 shows that 88% of the lecturers' responses ranged from 5 to 7, while the rest did not attempt to rate this statement. Thus, the findings here show that the lecturers in this study are more accustomed to the positivist research paradigm.
Statements 5 and 8 reflect other typical aspects underlying research in the positivist research paradigm. For Statement 5 (see Table 5), the majority of the lecturers' responses (72.7%) ranged from 5 to 7, which again reflects a positivist research paradigm. For Statement 8, as stated earlier, the term "valid" typically reflects a positivist research paradigm. For this statement (see Table 6), the majority of the lecturers' responses (81.9%) ranged from 5 to 7 on the Likert scale. This again confirms that the lecturers come from a positivist research paradigm, because the non-positivist paradigm relies on personal views, experiences and subjectivity in understanding human actions and behaviour in research.
Research methods
There were four statements (see Table 7) in the questionnaire used to gauge the lecturers' perception regarding the methods used in qualitative research.
Statements 4, 25 and 26 were all concerned with
determining the number of participants needed in a
research to yield credible conclusions. Although it is
difficult to determine the correct sample size, for many
researchers using statistical analysis for their data, ‘a
sample size of thirty’ is held as the minimum number
of cases needed (Cohen, Manion and Morrison,
2000:93). Nevertheless, there is a guide to determine
the adequate number of cases for statistical analysis,
to ensure the rigour and robustness of the results.
However, this requirement is held differently in
qualitative based research. In some qualitative studies
on life history, one participant can be the sample
of the study. Furthermore, a case study approach
‘involves the investigation of a relatively small number
of naturally occurring (rather than researcher-created)
cases¹' (original emphasis, Hammersley, 1992:185).

Table 7. Lecturers' perception regarding research methods

No. | Statement | Mean score
4 | A qualitative study can comprise of in-depth interviews with a single participant | 3.15
15 | The key element of research is to explain a phenomenon by collecting data that can be analysed using mathematically based methods (in particular statistics) | 4.52
25 | Doing a case study in qualitative research can involve a single case | 3.3
26 | The number of subjects interviewed in a research is important (must have adequate number of participants) to come up with credible conclusions. | 4.82
Researcher’s Role
Table 8: Statement 15
Valid
0
3
4
5
6
7
Total
Frequency
4
3
3
12
9
2
33
Percent
Valid Percent
12.1
9.1
9.1
36.4
27.3
6.1
100.0
12.1
9.1
9.1
36.4
27.3
6.1
100.0
Cumulative
Percent
12.1
21.2
30.3
66.7
93.9
100.0
In a qualitative based research, the role of the researcher
is important. The ‘theoretical sensitivity’ (Strauss and
Corbin, 1990) of the researcher is pertinent in carrying
out a qualitative based research:
Theoretical sensitivity referred to the personal quality
of the researcher. It could indicate an awareness of
the subtleties of meaning of data
… [It] refers to the attribute of having insight, the
ability to give meaning to data, the capacity
to understand, and capability to separate the
pertinent from that which isn’t.
Strauss and Corbin, 1990:42
Table 9: Statement 26
Valid
0
2
3
4
5
6
7
Total
Frequency
5
2
1
2
3
11
9
33
Percent
Valid Percent
15.2
6.1
3.0
6.1
9.1
33.3
27.3
100.0
15.2
6.1
3.0
6.1
9.1
33.3
27.3
100.0
Cumulative
Percent
15.2
21.2
24.2
30.3
39.4
72.7
100.0
Researcher's Role

In qualitative based research, the role of the researcher is important. The 'theoretical sensitivity' (Strauss and Corbin, 1990) of the researcher is pertinent in carrying out qualitative based research:

    Theoretical sensitivity refers to the personal quality of the researcher. It indicates an awareness of the subtleties of meaning of data ... [It] refers to the attribute of having insight, the ability to give meaning to data, the capacity to understand, and capability to separate the pertinent from that which isn't.
    (Strauss and Corbin, 1990:42)
In qualitative research, then, the researcher is subjectively immersed in the subject matter, whereas a quantitative researcher tends to remain objectively separated from it (Miles and Huberman, 1994). However, the mean scores (Table 10) indicate that the mindset of the lecturers in this study was biased towards quantitative based research, which encourages the researcher to be objective and separate from the research.
Table 10. Statements on the researcher's role

No. | Statement | Mean score
12 | A researcher must be objective | 5.39
17 | The findings produced are influenced by the beliefs of the researcher | 3.36
21 | A researcher's own opinion must not interfere with the 'subjects' opinion | 6.30

Style of Presentation

The use of personal pronouns such as 'I' and 'we' is forbidden in research presented from the positivist research paradigm: scientific and technical writing is usually objective and impersonal. In qualitative research, however, the presence of the author is an important aspect of the research. Subjectively based qualitative research is therefore usually written using personal pronouns, especially 'I', to highlight the researcher's subjective rhetorical stance.
Table 11. Lecturers' presentation style for qualitative based research

No. | Statement | Mean score
9 | A qualitative thesis can be told in the form of a story | 3.06
13 | In writing a qualitative based research paper, using personal pronouns such as 'I' to refer to the researcher is permitted | 3.03
Hyland (2003:256-257) states that using the pronoun 'I' emphasises the personal engagement of the author with the audience, which is 'an extremely valuable strategy when probing connections between entities that are generally more particular, less precisely measurable, and less clear-cut'. The use of personal pronouns therefore supports qualitative research inquiry. Furthermore, 'qualitative research excels at "telling the story" from the participant's viewpoint', providing rich descriptive detail compared with quantitative based research (Trochim, 2006:3).
The lecturers in this study, however, are familiar only with styles that reflect a positivist research paradigm, the paradigm that underpins the writing of scientific and technical research documents. This is clearly reflected in the mean scores (see Table 11), where the lecturers disagree with statements that describe the typical style of a qualitative researcher.
Discussion

In this section we revisit the research questions posed in this paper, beginning with our findings for the first research question:

1. How familiar are researchers from the engineering discipline with subjective, qualitative based education research?
This study set out to investigate how familiar lecturers from the engineering and technological disciplines are with qualitative based research. It is widely known that researchers from the scientific and technological disciplines come from the positivist (quantitative) research paradigm. In this study, however, the lecturers were asked to evaluate statements designed to gauge their familiarity with qualitative based research, which is often grounded in an interpretivist (qualitative) research paradigm. The main objective was to examine positivist lecturers' perceptions of qualitative based research, which is mainly subjective in nature. This matters because researchers may hold different research paradigms, especially when they come from diverse disciplines.
The findings in this paper clearly indicate that the majority of researchers in this study hold a positivist research mindset and are therefore unfamiliar with the fundamental knowledge underlying qualitative based research, which is subjective in nature. Consequently, research from a conflicting research paradigm, such as the interpretivist, would not be evaluated fairly by these researchers. Furthermore, this study shows that the lecturers perceive qualitative research, which is often subjective, through their understanding of and familiarity with quantitative based research. A positivist researcher, unsurprisingly, would not agree with the underlying principles that guide an interpretivist researcher.
As stated earlier, in the university where this research was carried out, research papers from different disciplines are evaluated by researchers who may hold different research paradigms. A qualitative study built on in-depth interviews with only one or two participants, for example, may not fare well if the reviewer is a positivist well versed in quantitative approaches; the reviewer would be very concerned with the number of subjects used in the research. From a positivist point of view, the reviewer's concerns are justifiable. However, as discussed in this paper, the quality and validity of qualitative based research are not determined by the number of participants. Thus, the
reviewer's concern about the number of participants used in a qualitative study would be regarded as unnecessary. Furthermore, a positivist reviewer might expect conclusive and tangible findings. What such a reviewer may not be aware of is that the purpose of qualitative research is to 'describe or understand the phenomena of interest from the participants' eyes' (Guba and Lincoln, 1994). Qualitative research therefore calls for a rich description of the real-life research context, in which the complexities of the different individuals who participated in the research are acknowledged and explored. According to Creswell (1994:6), it is the responsibility of the researcher to 'report faithfully these realities and to rely on voices and interpretations of informants'. Thus there is 'no true or valid interpretation', only 'useful, liberating, fulfilling and rewarding' interpretations (Crotty, 1998:48).
Conclusion

The findings in this study support the discussion above: it is not appropriate for a researcher from one research paradigm to evaluate research papers by researchers who do not share that paradigm. This study clearly shows that researchers with different research paradigms view research differently, and that it is inappropriate to impose one's own underlying principles of research on others, especially where the research reflects contradictory paradigms such as the positivist and the interpretivist, as shown in this study.
Although this paper started with two research questions, we did not deliberate on the second research question:

2. What are the engineering researchers' perceptions regarding the contribution of qualitative based research?

This is because the study showed that the lecturers are not familiar with qualitative based research and thus could not meaningfully address this question.
This paper highlights that different research paradigms reflect differences in knowledge of the underlying principles that guide researchers. This study confirms that the researchers in this study have a positivist research mindset, and it shows how radically researchers with a positivist mindset differ from researchers within the non-positivist (interpretivist) research paradigm. Furthermore, the findings indicate that the lecturers impose their positivist knowledge even on research from the non-positivist (interpretivist) research paradigm. Thus, this study stresses the importance of recognising that researchers may differ in their research paradigms and, as a consequence, may also differ in the following aspects of research:

• Research methods
• Researcher's role
• Style of presentation

Even within a discipline, then, identifying a researcher's research paradigm is important in determining the true value of the research.
References

[1] Burns. (2000). "Introduction to Research Methods". London: Sage Publications
[2] Cohen, L., Manion, L., & Morrison, K. (2000). "Research Methods in Education" (5th ed.). London: RoutledgeFalmer
[3] Creswell, J. W. (2003). "Research Design: Qualitative, Quantitative and Mixed Methods Approaches". London: Sage Publications
[4] Crotty, M. (1998). "The Foundations of Social Research: Meaning and Perspective in the Research Process". London: Sage Publications Ltd
[5] Frankel, R. (2005). "The 'White Space' of Logistics Research: A Look at the Role of Methods Usage". Journal of Business Logistics, 1-12
[6] Guba, E., & Lincoln, Y. S. (1994). "Competing Paradigms in Qualitative Research". In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of Qualitative Research (pp. 105-117). California: Sage Publications
[7] Hammersley, M. (1992). "What's Wrong with Ethnography?". London: Routledge
[8] Healy, M., & Perry, C. (2000). "Comprehensive criteria to judge validity and reliability of qualitative research within the realism paradigm". Qualitative Market Research, 3(3), 118-126
[9] Hitchcock, G., & Hughes, D. (1995). "Research and the Teacher". London: Routledge
[10] Hollis, M. (1994). "The Philosophy of Social Science: An Introduction". Cambridge: Cambridge University Press
[11] Howe, K. R. (2003). "Closing Methodological Divides: Toward Democratic Educational Research" (Vol. 100). Dordrecht, Netherlands: Kluwer
[12] Hussey, J., & Hussey, R. (1997). "Business Research". London: Macmillan Press Ltd
[13] Hyland, K. (2003). "Self-Citation and Self-Reference: Credibility and Promotion in Academic Publication". Journal of the American Society for Information Science and Technology, 54(3), 251-259
[14] Johl, S. K., Bruce, A., & Binks, M. (2006). "Using a Mixed Method Approach in Conducting Business Research". Paper presented at the 2nd International Borneo Business Conference, Kuching, Sarawak
[15] King, G., Keohane, R., & Verba, S. (1994). "Designing Social Inquiry: Scientific Inference in Qualitative Research". New Jersey: Princeton University Press
[16] Lincoln, Y. S., & Guba, E. G. (1985). "Naturalistic Inquiry". Beverly Hills, CA: Sage Publications
[17] Love, P. E. D., Holt, G. D., & Heng, L. (2002). "Triangulation in Construction Management Research". Engineering, Construction and Architectural Management, 9(4), 294-303
[18] McGrath, J. E. (1982). "Dilemmatics: The Study of Research Choices and Dilemmas". In J. E. McGrath, J. Martin & R. Kulka (Eds.), Judgment Calls in Research. Beverly Hills: Sage Publications
[19] Miles, M., & Huberman, M. A. (1994). "Qualitative Data Analysis". Beverly Hills: Sage Publications
[20] Moser, P., & Trout, J. D. (Eds.). (1995). "Contemporary Materialism". London: Routledge
[21] Neuman, W. L. (2000). "Social Research Methods: Qualitative and Quantitative Approaches". Boston: Pearson Education Company
[22] Sekaran, U. (2000). "Research Methods for Business: A Skill-Building Approach". New York: John Wiley and Sons Inc
[23] Stenbacka, C. (2001). "Qualitative research requires quality concepts of its own". Management Decision, 39(7), 551-555
[24] Strauss, A., & Corbin, J. (1990). "Basics of Qualitative Research: Grounded Theory Procedures and Techniques". Newbury Park: Sage Publications
[25] Trochim, M. K. (2006). "Qualitative Validity". Retrieved 20 January 2007 from http://www.socialresearchmethods.net/kb/qualval.php

Sumathi Renganathan is a senior lecturer in the Management and Humanities Department. Her areas of specialisation and interest are Language in Education, Second Language Socialisation, Multilingualism, and Language and Ethnicity.
Satirenjit Kaur is a senior lecturer in the
Management and Humanities Department.
Her areas of specialisation and interests are
entrepreneurship and management.
PLATFORM is a biannual, peer-reviewed journal of Universiti Teknologi PETRONAS. It serves as a medium for faculty members, students and industry professionals to share their knowledge, views, experiences and discoveries in their areas of interest and expertise. It comprises collections of, but is not limited to, papers presented by the academic staff of the University at various local and international conferences, conventions and seminars.
The entries range from opinions and
views on engineering, technology and
social issues to deliberations on the
progress and outcomes of academic
research.
Opinions expressed in this journal do not necessarily reflect the official views of the University.

All materials are copyright of Universiti Teknologi PETRONAS. Reproduction in whole or in part is not permitted without written permission from the University.
Notes for Contributors

Instructions to Authors

Authors of articles that fit the aims, scope and policies of this journal are invited to submit soft and hard copies to the editor. Papers should be written in English. Authors are encouraged to obtain assistance in the writing and editing of their papers prior to submission. For papers presented or published elsewhere, authors should include the details of the conference or seminar.

Manuscripts should be prepared in accordance with the following:
1. The text should be preceded by a short abstract of 50-100 words and around four keywords.
2. The manuscript must be typed on one side of the paper, double-spaced throughout with wide margins, and should not exceed 3,500 words, although exceptions will be made.
3. Figures and tables have to be labelled and should be included in the text. Authors are advised to refer to recent issues of the journal for the format of references.
4. Footnotes should be kept to a minimum and be as brief as possible; they must be numbered consecutively.
5. Special care should be given to the preparation of drawings for the figures and diagrams. Except for a reduction in size, they will appear in the final printing in exactly the same form as submitted by the author.
6. References should be indicated by the authors' last names and year of publication.
PLATFORM Editor-in-Chief
Universiti Teknologi PETRONAS
Bandar Seri Iskandar
31750 Tronoh
Perak Darul Ridzuan
MALAYSIA