Advances in Computer Science:
an International Journal
Vol. 3, Issue 1, January 2014
© ACSIJ PUBLICATION
www.ACSIJ.org
ISSN : 2322-5157
ACSIJ Editorial Board 2014
Dr. Seyyed Hossein Erfani (Chief Editor)
Azad University
Tehran, Iran
Dr. Indrajit Saha
Department of Computer Science and Engineering, Jadavpur University
India
Mohammad Alamery
University of Baghdad
Baghdad, Iraq
ACSIJ Reviewers Committee
Prof. José Santos Reyes, Faculty of Computer Science, University of A Coruña, Spain
Dr. Dariusz Jacek Jakóbczak, Technical University of Koszalin, Poland
Dr. ANANDAKUMAR.H, PSG College of Technology (Anna University of Technology), India
Dr. Mohammad Nassiri, Faculty of Engineering, Computer Department, Bu-Ali Sina University, Hamedan, Iran
Dr. Indrajit Saha, Department of Computer Science and Engineering, Jadavpur University, India
Prof. Zhong Ji, School of Electronic Information Engineering, Tianjin University, Tianjin, China
Dr. Heinz DOBLER, University of Applied Sciences Upper Austria, Austria
Dr. Ahlem Nabli, Faculty of Sciences of Sfax, Tunisia
Dr. Ajit Kumar Shrivastava, TRUBA Institute of Engg. & I.T, Bhopal, RGPV University, India
Mr. S. Arockiaraj, Mepco Schlenk Engineering College, Sivakasi, India
Prof. Noura AKNIN, Abdelmalek Essaadi University, Morocco
ACSIJ Published Papers are Indexed By:
1. Google Scholar
2. EZB, Electronic Journals Library (University Library of Regensburg, Germany)
3. DOAJ, Directory of Open Access Journals
4. Academic Journals Database
5. Bielefeld University Library - BASE (Germany)
6. AcademicKeys
7. WorldCat (OCLC)
8. Technical University of Applied Sciences (TH Wildau, Germany)
9. University of Rochester (River Campus Libraries, New York, USA)
TABLE OF CONTENTS
Prediction of Load in Reverse Extrusion Process of Hollow Parts using Modern
Artificial Intelligence Approaches – (pg 1-5)
Navid Moshtaghi Yazdani, Arezoo Yazdani Seqerlou
Theoretical Factors underlying Data Mining Techniques in Developing Countries: a
case of Tanzania – (pg 6-14)
Salim Amour Diwani, Anael Sam
Effects of Individual Success on Globally Distributed Team Performance – (pg 15-20)
Onur Yılmaz
A Survey of Protocol Classifications and Presenting a New Classification of Routing
protocol in ad-hoc – (pg 21-26)
Behnam Farhadi, Vahid Moghiss, Hossein Ranjbaran, Behzad Farhadi
Developing mobile educational apps: development strategies, tools and business models
– (pg 27-36)
Serena Pastore
Data Mining Awareness and Readiness in Healthcare Sector: a case of Tanzania
– (pg 37-43)
Salim Amour Diwani, Anael Sam
Digital Organism Simulation Environment (DOSE): A Library for Ecologically-Based
In Silico Experimental Evolution – (pg 44-50)
Clarence FG Castillo, Maurice HT Ling
Determining Threshold Value and Sharing Secret Updating Period in MANET
– (pg 51-56)
Maryam Zarezadeh, Mohammad Ali Doostari, Hamid Haj Seyyed Javadi
Beyond one shot recommendations: The seamless interplay of environmental
parameters and Quality of recommendations for the best fit list – (pg 57-66)
Minakshi Gujral, Satish Chandra
Island Model based Differential Evolution Algorithm for Neural Network Training
– (pg 67-73)
Htet Thazin Tike Thein
3D-Diagram for Managing Industrial Sites Information – (pg 74-81)
Hamoon Fathi, Aras Fathi
Benefits, Weaknesses, Opportunities and Risks of SaaS adoption from Iranian
organizations perspective – (pg 82-89)
Tahereh Rostami, Mohammad Kazem Akbari, Morteza Sargolzai Javan
Evaluation of the CDN architecture for Live Streaming through the improvement of
QOS Case study: Islamic Republic of Iran Broadcasting – (pg 90-96)
Shiva Yousefi, Mohamad Asgari
A Robust and Energy-Efficient DVFS Control Algorithm for GALS-ANoC MPSoC in
Advanced Technology under Process Variability Constraints – (pg 97-105)
Sylvain Durand, Hatem Zakaria, Laurent Fesquet, Nicolas Marchand
Mobile IP Issues and Their Potential Solutions: An Overview – (pg 106-114)
Aatifah Noureen, Zunaira Ilyas, Iram Shahzadi, Muddesar Iqbal, Muhammad Shafiq, Azeem Irshad
Action recognition system based on human body tracking with depth images
– (pg 115-123)
M. Martínez-Zarzuela, F.J. Díaz-Pernas, A. Tejeros-de-Pablos, D. González-Ortega, M. Antón-Rodríguez
Contention-aware virtual channel assignment in application specific Networks-on-chip
– (pg 124-129)
Mona Soleymani, Fatemeh Safinia, Somayyeh Jafarali Jassbi, Midia Reshadi
Optimization Based Blended Learning Framework for Constrained Bandwidth
Environment – (pg 130-139)
Nazir Ahmad Suhail, Jude Lubega, Gilbert Maiga
Analysis of Computing Open Source Systems – (pg 140-148)
J.C. Silva, J.L. Silva
Prediction of Load in Reverse Extrusion Process of
Hollow Parts using Modern Artificial Intelligence
Approaches
Navid Moshtaghi Yazdani 1, Arezoo Yazdani Seqerlou 2
1 Mechatronics Department, University of Tehran, Iran
Navid.moshtaghi@ut.ac.ir
2 Computer Department, University of Tehran, Iran
Arezoo.yazdani@ut.ac.ir
Abstract
Extrusion is one of the important processes for manufacturing military and industrial components. Designing its tools is usually associated with trial and error and needs great expertise and adequate experience. The reverse extrusion process is known as one of the common processes for production of hollow parts with closed ends. The significant load required to form a workpiece is one of the existing constraints of the reverse extrusion process. This issue becomes particularly difficult for parts with thin walls, since their analysis using finite element software is subject to some limitations. In this regard, applying artificial intelligence to predict the load in the reverse extrusion process not only saves time and money, but also improves quality features of the product. In this paper, based on the existing data and the methods suggested for the variation of the punching force through the reverse extrusion process, the system is trained and its performance is then evaluated using test data. The efficiency of the proposed method is also assessed by comparison with the results of other methods.
Keywords: reverse extrusion, intelligent simulation, prediction of load.
Fig.1. Reverse extrusion process: (1) die; (2) workpiece; and (3) punch
1. Introduction
Extrusion is a simple forming process in which a precast billet is first placed in the cylinder of the extrusion machine. The billet then flows outward through the die, which is in direct contact with the cylinder, under a large load applied hydraulically or mechanically [1].
Extrusion is mainly divided into three groups in terms of deformation and type of process: direct extrusion, indirect extrusion and impact extrusion.
In direct (forward) extrusion the punch and the die are positioned horizontally, and the material under deformation moves in the same direction as the punch. The load is applied to the end of the billet, with the metal flowing toward the force, and there is friction between the billet and the surrounding cylinder. This technique has the advantage of a simple design, and in hot direct extrusion the extruded workpiece can easily be controlled and cooled. Nevertheless, friction in the contact area between the billet and the cylinder and the heat generated by it, formation of a non-uniform microstructure and hence non-uniform properties along the extruded workpiece, a greater deformation load than in indirect extrusion, formation of internal defects (particularly in the presence of friction), introduction of surface and subsurface impurities into the workpiece, and the appearance of a funnel-shaped cavity at the end of the billet, which leaves an excessively thick remainder of the billet as scrap, are serious disadvantages of direct extrusion.
Indirect (backward or reverse) extrusion is the same as the direct extrusion process except that the punch is fixed while the die moves [2]. Flow of the metal is opposite to the direction of load application, and there is no friction between the billet and the surrounding cylinder. Given the hollow shape of the punch in indirect extrusion, there are practically some limitations on the load compared with direct extrusion. The method has several advantages: the required load is 20-30% smaller than in direct extrusion owing to the absence of friction, and the temperature of the outer layer of the billet does not rise because there is no billet-cylinder friction. As a result, deformation becomes more uniform and the formation of defects and cracks on the corners and surface of the product is reduced. Indirect extrusion at high deformation rates is possible, especially for aluminum materials that are pressed with difficulty. Since there is no friction, the funnel-shaped cavity is not formed; however, surface impurities of the billet can enter the final product and appear on the surface of the workpiece. The life of the deformation tool, especially the inner layer of the cylinder, is extended because of the absence of friction. Despite its numerous advantages, indirect extrusion also has disadvantages, including a limited deformation load, fewer facilities for cooling the extruded workpiece after it leaves the die, and lower quality of the outer surface of the product.
Impact extrusion works by impact: the die and punch are positioned vertically and the punch strikes the billet, which takes the shape of the die and its surrounding cylinder. This forming process is sometimes regarded as a type of forging. The economic significance of impact extrusion is associated with the effective use of raw material, reduced labor costs, elimination of intermediate operations, improved product quality and high yield with relatively simple tools. Some advantages of this method are: savings in raw material, since all or most of the initial billet is transformed into the final product and waste is negligible; reduction or elimination of final machining, because parts produced by this method have acceptable dimensional tolerances and surface roughness (350-750 µm) and are usable without further machining; the possibility of using less expensive materials; the fact that many parts which, if manufactured by conventional machining, would need a series of preliminary operations such as rolling and drawing can be produced from a cylindrical billet in a single impact-extrusion step; reduced warehousing costs; automatic operation, which considerably reduces loading, unloading and materials handling; simplicity of the process, since many components are made in a single step with no need for intermediate steps; great production capacity, with small parts made at more than 50 workpieces per minute and larger parts at up to 15 parts per minute; bottom thickness of the product independent of its wall thickness; the capability to manufacture parts with zero separation angle; excellent mechanical properties; and the possibility of making a single part from several components.
On the other hand, the main disadvantages of impact extrusion are the following: the process is sometimes uneconomic, for example for alloyed steels and high-carbon steels, which require significant pressure and several intermediate pressing and annealing operations; impact extrusion is usually limited to products of cylindrical, square, hexagonal and elliptical sections or other symmetric geometries with hollow or solid cross sections; it is not recommended for manufacturing off-center parts with different wall thicknesses, since these impose asymmetric and anisotropic loads on the tool during forming; the length-to-diameter ratio is limited for both the product and the billet; and the process is rather capital intensive, with relatively expensive equipment, so mass production is needed to make it economically feasible.
2. Significance of Load in the Reverse Extrusion Process
Indirect (reverse) extrusion is one of the most important variants of the extrusion process. Exact knowledge of the pressing load and required capacity is critical, because design of the die and punch, determination of the forming steps, and materials selection all depend on it. Numerous factors affect the extrusion load; some of the most important are the material and properties of the workpiece, the initial cross section of the workpiece, the friction ratio between the surfaces, the shape of the workpiece and punch, and the temperature of the process. A great
number of experimental, analytical and numerical methods have been proposed in the literature for predicting the load in extrusion processes [3]. Experimental methods usually demonstrate poor performance in representing the exact details of the process and have limited applications, while finite element and finite difference methods need long computational times and powerful computers. Among the existing analytical methods, the upper bound theory is the one which gives relatively accurate answers in a short time through a simple formulation, even when the workpiece incorporates many complexities. McDermott et al. [4] used 8 basic elements in the abovementioned method to predict the load required in the forging process; 4 ring elements belong to the inward flow, while the other 4 ring elements are related to the outward flow. Kiuchi et al. [5] developed a method based on the elemental upper bound technique for simulation of metal forming processes (e.g. forging), in which straightforward elements are used to obtain the forming force. Kim et al. [6] estimated the material flow in the forging process using the elemental upper bound method, and were thereby able to predict the preform geometry as well as the number of required forging steps. Bae et al. [7] adopted the elemental upper bound approach to analyze and obtain the load needed for the reverse extrusion process three-dimensionally. The load increases throughout the process due to growth of the friction surfaces, and this regime is intensified at the final step.
3. Kstar Algorithm
Kstar is an instance-based learner which classifies every new record by comparing it with the classified records already existing in the database. The algorithm assumes that similar examples have the same classes. The two basic components of instance-based learners are the distance function and the classification function: the former determines the similarity between examples, and the latter shows how the similarity of the examples leads to a final class for the new example.
Kstar is a lazy, K-nearest-neighbour style learner which uses an entropy-based criterion to measure the distance, or similarity, of examples to each other. The approach adopted by this algorithm to measure the distance between two examples is derived from information theory: the distance between two examples is the complexity of converting one example into the other. Measurement of the complexity is done in two steps. First, a finite set of transformations is defined which maps some examples to other examples. A program for converting example "a" into example "b" is a finite sequence of these transformations which starts with "a" and ends with "b"; such programs are typically formed by adding a termination symbol to each string. The complexity of a program is generally defined as the length of the shortest string representing it; therefore, the distance between two examples is the length of the shortest program which converts one example into the other. This approach concentrates on one conversion (the shortest one) among all the possible conversions [8].
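To make the transformation-distance idea concrete, the following Python fragment sketches a much-simplified instance-based classifier: the "cost" of converting one example into another is approximated by a sum of scaled attribute differences, and the class of the cheapest-to-reach stored example is returned. This is only an illustration of the general idea; the actual K* measure of [8] sums over all transformation programs rather than taking a single shortest one, and the data shown are invented.

# Simplified instance-based sketch in the spirit of K*: the "distance" between two
# examples is approximated by the cost of transforming one into the other (here, a
# sum of scaled attribute differences). The real K* entropic measure is different;
# this is an illustrative approximation with made-up data.

def transformation_cost(a, b, scales):
    """Approximate cost of converting example a into example b, attribute by attribute."""
    return sum(abs(x - y) / s for x, y, s in zip(a, b, scales))

def kstar_like_classify(query, training_set, scales):
    """training_set: list of (attribute_tuple, label) pairs."""
    best_label, best_cost = None, float("inf")
    for attributes, label in training_set:
        cost = transformation_cost(query, attributes, scales)
        if cost < best_cost:
            best_cost, best_label = cost, label
    return best_label

# Toy usage with two numeric attributes scaled by their typical ranges.
train = [((1.0, 200.0), "low load"), ((4.0, 900.0), "high load")]
print(kstar_like_classify((3.5, 850.0), train, scales=(5.0, 1000.0)))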
4. SVM Algorithm
SVM is an algorithm for classification of linear and nonlinear data. It first uses a nonlinear mapping to transform the initial data into a higher-dimensional space and then searches for the best hyper-plane in the new space. This hyper-plane is a decision boundary which separates the records of one class from those of the other classes; data belonging to different classes can be separated simply by a hyper-plane once a suitable nonlinear mapping is applied. SVM finds this hyper-plane using support vectors and the margins defined by the support vectors. There are various decision boundaries that separate the data of the different classes; the aim is to find the decision boundary, or separating hyper-plane, with maximum margin (the maximum margin hyper-plane, MMH), which separates the data with greater accuracy and lower error. A data set is deemed linear when it is separable by a linear decision boundary; no straight line can separate the classes when the data are nonlinear. For linearly separable data, the support vectors are a subset of the learning records located on the margins; the problem is somewhat different for nonlinear data.
Once the MMH and the support vectors are obtained, a trained SVM is available. A trained SVM is expressed as a function of the support vectors, the test record, constants and Lagrangian multipliers, obtained by writing the equations of the margins and the hyper-plane, rewriting them in Lagrangian form and solving the resulting problem under the KKT conditions.
The approach used for a linear SVM can also be extended to create a nonlinear SVM; such an SVM can produce a nonlinear decision boundary in the input space. The development has two main steps. In the first step, the input data are transferred to a higher dimension using a nonlinear mapping; a large number of nonlinear mappings can be used for this purpose. Once the data have been mapped to a space of higher dimension, the second step will search
the new space for the linear separating hyper-plane. This finally yields a quadratic optimization problem which can be solved with the linear SVM formulation. The MMH found in the new space corresponds to a nonlinear hyper-plane in the initial space [9].
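As an illustration of the kernel idea described above, the sketch below uses scikit-learn's SVC with an RBF kernel, which plays the role of the nonlinear mapping to a higher-dimensional space in which a maximum margin hyper-plane is sought. The data are synthetic and the parameters are illustrative; this is not the configuration used in the paper.

# Minimal kernel-SVM sketch: an RBF kernel acts as the nonlinear mapping; the SVM
# then finds a maximum-margin hyper-plane in the mapped space. Data are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 0.5).astype(int)   # not linearly separable

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X, y)
print("training accuracy:", model.score(X, y))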
5. XCS Algorithm
An extensive range of learning algorithms, supervised or not, are employed in machine learning to keep the machine from having to search a huge volume of information and data; they further suggest a pattern which can be used for predictive (classification and regression) or descriptive (clustering) tasks. Rule-based techniques are among the best-known machine learning methods, since the approaches they adopt make them more comprehensible than other techniques.
They use a limited set of "condition-action" rules, each of which covers a small part of the whole solution space. The conditions address a part of the problem domain, whereas the actions indicate the decision to be made for the sub-problems specified by the condition. Basically, classifier systems comprise a set of rules in which every rule is a candidate solution for the target problem. These classifiers gradually become effective through a reinforcement scheme which applies a genetic algorithm to the classifiers.
The first classifier system, the learning classifier system (LCS), was proposed by Holland to work on both single-step and continuous problems. This learning system is an example of machine learning that combines temporal differences and supervised learning with a genetic algorithm and solves both simple and difficult problems. In Holland's formulation, the LCS uses a single property (called strength) for each classifier; the strength of a classifier denotes its effectiveness and is determined exclusively by the degree to which its answer correlates with the expected results. These criteria are characterized by the principles of supervised training.
Since the first introduction of learning classifier systems (LCS), several other variants have been proposed, including XCS. Before the extended classifier system was developed in 1995, the ability of a classifier system to find proper answers within the reinforcement scheme of these classifiers was a major concern; basic and simple classifier systems were therefore evolved into more accurate decision-making tools. It is now firmly believed that XCS can solve even more complicated problems with no need for further parameter adjustment, and it is currently regarded as the most successful learning classifier system.
6. Improved XCS
In the suggested method, the limited set of training data is first used to update the characteristics of the rules, consisting of the prediction, the prediction error and the fitness. This is done by means of the following relations.
Updating the prediction and the prediction error:
If exp_i < 1/β:  P_i = P_i + (R - P_i)/exp_i ,   ε_i = ε_i + (|R - P_i| - ε_i)/exp_i      (1)
If exp_i ≥ 1/β:  P_i = P_i + β(R - P_i) ,   ε_i = ε_i + β(|R - P_i| - ε_i)      (2)
Updating the fitness:
If ε_i < ε_0:  k_i = 1      (3)
If ε_i ≥ ε_0:  k_i = β(ε_i/ε_0)^(-γ)      (4)
F_i = F_i + β[(k_i / Σ_j k_j) - F_i]      (5)
In these relations, β is the learning rate, γ is the exponent of the rule-accuracy function, ε is the prediction error, exp is the experience of the rule, P is the prediction of the rule, R is the reward received from the environment, k is the accuracy of the rule and F is its fitness; the index i denotes the number of the rule in the rule set.
In the next phase, in order to increase the diversity of the data set, several couples are selected as parents, from among the fields that represent the condition part of the existing data, using stochastic selection with remainder; the condition part of the new data is then created by applying intermediate crossover to the fields of the parents. In this method, the value of each conditional variable is obtained from the following relation:

a_i = α·a_i^F + (1 - α)·a_i^M      (6)

in which a_i is the value of conditional variable i in the new data point, a_i^F is the value of conditional variable i in the first parent (father), a_i^M is its value in the second parent (mother), and α is the parent-participation coefficient, which is determined adaptively. The performance (action) part of the new data is also produced using a nonlinear mapping from the space of the conditional variables to the performance space, which is built using the existing data.
Diversification of the existing data continues until the learning stop condition (for example, when the percentage of correct answers of the system on the test data reaches a pre-determined threshold) is satisfied with the aid of the completed data set [11].
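A minimal Python sketch of the update relations (1)-(5) and the intermediate crossover (6) is given below. Variable names mirror the symbols defined above; the increment of the experience counter and the surrounding XCS machinery (matching, action selection, rule discovery) are assumptions of this illustration and are not specified in the text.

# Sketch of the rule updates (1)-(5) and intermediate crossover (6) described above.
def update_rule(rule, R, beta, eps0, gamma):
    """rule: dict with prediction 'P', error 'eps', experience 'exp', accuracy 'k'."""
    rule["exp"] += 1                                     # assumed experience update
    if rule["exp"] < 1.0 / beta:                         # relations (1)
        rule["eps"] += (abs(R - rule["P"]) - rule["eps"]) / rule["exp"]
        rule["P"] += (R - rule["P"]) / rule["exp"]
    else:                                                # relations (2)
        rule["eps"] += beta * (abs(R - rule["P"]) - rule["eps"])
        rule["P"] += beta * (R - rule["P"])
    # accuracy, relations (3)-(4)
    rule["k"] = 1.0 if rule["eps"] < eps0 else beta * (rule["eps"] / eps0) ** (-gamma)
    return rule

def update_fitness(rules, beta):
    """Relation (5): fitness moves toward the rule's relative accuracy in the set."""
    k_sum = sum(r["k"] for r in rules)
    for r in rules:
        r["F"] += beta * (r["k"] / k_sum - r["F"])

def intermediate_crossover(a_father, a_mother, alpha):
    """Relation (6): blend each conditional variable of the two parents."""
    return [alpha * f + (1.0 - alpha) * m for f, m in zip(a_father, a_mother)]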
7. Methodology
A set of data obtained from practical experiments [10] on the reverse extrusion of aluminum parts was selected in order to compare the efficiency of the models developed with the abovementioned algorithms. The data extracted from these experiments contain some 240 entries, of which 200 are randomly selected for training and the remainder are used for testing.
Displacement of the punch, coefficient of friction, diameter of the punch, circumferential diameter of the die polygon, and number of sides of the workpiece are taken as the input variables of the problem, while the punching load is considered to be the output.
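The experimental setup described above can be reproduced in outline with a few lines of scikit-learn, as sketched below. The file name, column names and the particular regressors are hypothetical placeholders; the paper's data come from the experiments of [10] and its models are the Kstar, SVM, XCS and improved XCS learners discussed earlier.

# Illustrative comparison of regressors on a 200/40 train-test split, mirroring the
# experimental setup above. File and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor

data = pd.read_csv("reverse_extrusion.csv")              # hypothetical data file
X = data[["punch_displacement", "friction_coefficient",
          "punch_diameter", "die_circumferential_diameter", "num_sides"]]
y = data["punching_load"]

X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=200,
                                                    random_state=42)
for name, model in [("SVM", SVR(kernel="rbf")),
                    ("k-NN", KNeighborsRegressor(n_neighbors=5))]:
    model.fit(X_train, y_train)
    print(name, "R^2 on test data:", model.score(X_test, y_test))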
Fig.2. Practical testing of the reverse extrusion process

A comparison between the trained algorithms gives the results shown in Table 1.

Table 1. Prediction of load in the reverse extrusion process
Algorithm       Accuracy
Kstar           83.9%
SVM             91.4%
XCS             88.6%
Improved XCS    92.8%

8. Conclusion
Reverse extrusion is a common process for manufacturing hollow parts with closed ends and thus has numerous applications in different industries. The significant load required for forming the workpiece is one of the main drawbacks of the reverse extrusion process. A load prediction system is developed in this research based on the information gathered from analysis of reverse forge extrusion and artificial intelligence algorithms. These results may contribute to improving the reverse extrusion process, considering the accuracy of each algorithm in the test phase (as summarized in Table 1).

References
[1] Yang, D. Y., Kim, Y. U., & Lee, C. M. (1992). Analysis of center-shifted backward extrusion of eccentric tubes using round punches. Journal of Materials Processing Technology, 33(3), 289-298.
[2] Bae, W. B., & Yang, D. Y. (1993). An upper-bound analysis of the backward extrusion of tubes of complicated internal shapes from round billets. Journal of Materials Processing Technology, 36(2), 157-173.
[3] Bae, W. B., & Yang, D. Y. (1993). An analysis of backward extrusion of internally circular-shaped tubes from arbitrarily-shaped billets by the upper-bound method. Journal of Materials Processing Technology, 36(2), 175-185.
[4] McDermott, R. P., & Bramley, A. N. (1974). An elemental upper-bound technique for general use in forging analysis. In Proceedings of the 15th MTDR Conference, Birmingham, UK (pp. 437-443).
[5] Kiuchi, M., & Murata, Y. (2005). Study on application of UBET to non-axisymmetric forging. Advanced Technology of Plasticity, Vol. I, p. 9.
[6] Kim, Y. H. (1994). Computer aided preform process. J. of Matt. Tech., 41, p. 105.
[7] Bae, W. B., & Yang, D. Y. (1993). An analysis of backward extrusion of internally circular-shaped tubes from arbitrarily-shaped billets by the upper-bound method. Journal of Materials Processing Technology, 36(2), 175-185.
[8] Witten, I. H., & Frank, E. (2005). Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann.
[9] Han, J., Kamber, M., & Pei, J. (2006). Data Mining: Concepts and Techniques. Morgan Kaufmann.
[10] Ebrahimi, R. (1997). Analysis of Reverse Forge Extrusion for Manufacturing Hollow Parts. Master of Science Thesis, Shiraz University.
[11] Shariat Panahi, M., & Moshtaghi Yazdani, N. (2012). An Improved XCSR Classifier System for Data Mining with Limited Training Samples. Global Journal of Science, Engineering and Technology (ISSN: 2322-2441), Issue 2, pp. 52-57.

First Author: postgraduate in Mechatronics from the University of Tehran; interested in artificial intelligence, data mining and signal processing.
Second Author: student of Computer Engineering at the University of Tehran; interested in software engineering and artificial intelligence.
Theoretical Factors underlying Data Mining Techniques
in Developing Countries: a case of Tanzania
Salim Amour Diwani 1, Anael Sam 2
1 Information Communication Science and Engineering, Nelson Mandela African Institution of Science and Technology, Arusha, P.O. Box 447, Tanzania
diwania@nm-aist.ac.tz
2 Information Communication Science and Engineering, Nelson Mandela African Institution of Science and Technology, Arusha, P.O. Box 447, Tanzania
Anael.sam@nm-aist.ac.tz
Abstract
Just as the mining of Tanzanite is the process of extracting large blocks of hard rock using sophisticated hard-rock mining techniques to find valuable tanzanite gems, data mining is the process of extracting useful information or knowledge from large, unorganized data to enable effective decision making. Although data mining technology is growing rapidly, many IT experts and business consultants may not have a clue about the term. The purpose of this paper is to introduce data mining techniques, tools, a survey of data mining applications, data mining ethics and the data mining process.
Keywords: Data Mining, Knowledge Discovery, Mererani, KDD, J48, RandomForest, UserClassifier, SimpleCART, RandomTree, BFTree, DecisionStump, LADTree, logit transform, Naive Bayes

1. Introduction
The amount of data kept in computer files and databases is growing rapidly, and at the same time the users of these data expect to obtain increasingly sophisticated knowledge from them. The banking industry, for example, wants to give better service to customers and cross-sell bank services, so banks need to mine the huge amount of data generated by different banking operations to generate new approaches and ideas. Data mining is an interdisciplinary field that combines artificial intelligence, computer science, machine learning, database management, data visualization, mathematical algorithms and statistics. It is a technology for knowledge extraction from huge databases [1]. This technology provides different
methodologies for decision making, problem solving, analysis, planning, diagnosis, detection, integration, prevention, learning and innovation. Data mining technology involves different algorithms to accomplish different functions, and all of these algorithms attempt to fit a model to the data. Data mining algorithms can be divided into two groups. A predictive model makes a prediction about values of data using known results found from different historical data, for example the way Netflix recommends movies to its customers based on films arranged into groups of similar movies, or the way Amazon recommends books to its users based on the types of books a user has requested. A descriptive model, by contrast, identifies patterns or relationships concealed in a database; unlike the predictive model, it serves as a way to explore the properties of the data examined, not to predict new properties. For instance, we can find a relationship in an employee database between age and lunch patterns: assume that in Tanzania most employees in their thirties like to eat Ugali with Sukumawiki for their lunch break, while employees in their forties prefer to carry a home-cooked lunch from home; we can find this pattern in the database by using a descriptive model. Data mining can be applied in strategic decision making, wealth generation, analyzing events, and for security purposes by mining streaming network data or looking for abnormal behavior which might occur in the network. Data mining helps organizations achieve various business objectives such as lowering costs and increasing revenue generation while maintaining the quality of their products. Data mining can also improve customer
service, better target marketing campaigns, identify high-risk clients and improve production processes.
2. Methodology
2.1 Identification of Publications
In order to identify relevant publications in the area of knowledge discovery in health care systems, articles were selected from various databases and resources linked to the Nelson Mandela African Institution of Science and Technology (NM-AIST), such as the African Digital Library, the Organisation for Economic Co-operation and Development (OECD), Hinari, Agora, Oare, Emerald, the Institute of Physics and IEEE. Keywords such as database, data mining, data warehousing, healthcare and knowledge discovery were used to facilitate the searches.
2.2 Selection of Publications
This literature review considered papers and articles published between 2008 and 2012 in the areas of data mining, knowledge discovery and health care. Literature published in English was selected, with specific emphasis placed on literature covering the relationship between knowledge discovery and data mining, applications of data mining in health care in different countries of the developed world, and the constraints and requirements for setting up data mining in health care.
2.3 Analysis Strategy for Selected Publications
The selected articles were then analyzed for trends in data mining over the review period stated above. The selected literature was categorized according to areas of emphasis, including frameworks for knowledge discovery and data mining, techniques, methods and algorithms for knowledge discovery and data mining, and specific instances of the application of data mining in health care.
Fig. 1 Data mining models and tasks: predictive (classification, regression, time series analysis, prediction) and descriptive (clustering, summarization, association rules, sequence discovery)
3. Data Mining Techniques
Data mining makes use of various techniques in order to perform different types of tasks. These techniques examine sample data of a problem and select the model that most closely fits the problem to be tackled. Data mining is described as the automated analysis of large amounts of data to find patterns and trends that may otherwise have gone undiscovered [2]. What data mining techniques do is find the hidden patterns within data and then build a model to predict behaviors based on the data collected; hence data mining is about finding the pattern and building the best model. A pattern is an event or combination of events in a database that occurs more often than expected, which typically means that its actual occurrence is significantly different from what would be expected by random chance. A model is a description of the original historical database from which it was built that can be successfully applied to new data in order to make predictions about missing values or to make statements about expected values (see Fig. 1).
The goal of a predictive model is to predict future outcomes based on past records with known answers. Two approaches are commonly used to generate models: supervised and unsupervised. Supervised or directed modeling is goal-oriented: its task is to explain the value of some particular field, so the user selects the target field and directs the computer to estimate, classify or predict it. Unsupervised modeling is used to explain relationships once they have been found [3, 4].
4. Predictive Techniques
A predictive model analyzes historical data, identifying and quantifying relationships between predictive inputs and outcomes found during knowledge discovery. Some of the techniques for predictive modeling are classification, regression, generalized linear models, neural networks, genetic algorithms, stochastic gradient boosted trees and support vector machines.
4.1 Classification
Classification maps each data element to one of a set of
pre-determined classes based on the difference among
data elements belonging to different classes. In
classification, data is divided into groups of objects, each object is assigned to a class from an unknown sample, and we then identify the attributes that are common to each class. For example, patients with recurring health problems can be classified in order to understand major diseases and to make the necessary arrangements to give proper treatment: data mining searches healthcare databases and classifies patients according to their symptoms or problems into groups, so that they can easily be referred to at the time of an emergency.
4.2 Regression
Regression is a data mining application which maps data onto a real-valued prediction variable; in regression the variable of interest is continuous in nature. The goal of regression is to develop a predictive model in which a set of variables is used to predict the variable of interest. For instance, consider a patient with high blood pressure who wishes to reduce his cholesterol level by taking medication and wants to predict his cholesterol level before and after the medication: he can use a simple linear regression formula to predict this value by fitting past behavior to a linear function, using the same function to predict values at points in the future, and then adjusting his treatment accordingly (see the sketch below). Some regression functions are logistic regression, simple logistic, isotonic regression and sequential minimal optimization (SMO).
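A minimal sketch of this idea, with made-up cholesterol readings, fits a straight line to past measurements and extrapolates it to a future point:

# Simple linear regression sketch for the cholesterol example above; the numbers
# are invented purely for illustration.
import numpy as np

weeks = np.array([0, 2, 4, 6, 8])                  # weeks since starting medication
cholesterol = np.array([240, 231, 225, 218, 210])  # hypothetical readings (mg/dL)

slope, intercept = np.polyfit(weeks, cholesterol, 1)   # fit a straight line
print("predicted level at week 12:", slope * 12 + intercept)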
4.3 Time Series Analysis
A time series is a collection of observations of well-defined data items obtained through repeated measurements over time; the values are usually recorded hourly, daily, weekly or monthly. Time series analysis can be used to identify the nature of the phenomenon represented by the sequence of observations as well as to forecast or predict future values of the time series variable. For example, tidal charts are predictions based upon tidal heights in the past: the known components of the tides are built into models that can be employed to predict future values of the tidal heights.
4.4 Prediction
Prediction is the process of predicting future events: it forecasts what will happen next based on the available input data. For instance, if Salim turns on the TV in the evening, then 80% of the time he will go to the kitchen to make coffee. Prediction techniques include Nearest Neighbor, Neural Network, Bayesian Classifier, Decision Tree, Hidden Markov Model and Temporal Belief Network. Prediction applications include flood forecasting, speech recognition, machine learning and pattern recognition.
4.5 Decision Tree
A decision tree is a popular supervised learning structure used for classification and regression; the goal is to create a model that predicts the value of the target variable. Some popular decision tree algorithms are J48, RandomForest, UserClassifier, SimpleCART, RandomTree, BFTree, DecisionStump and LADTree. Decision trees can be used in business to quantify decision making and to allow comparison of the different possible decisions to be made. Fig. 2 shows an example built on Fisher's Iris data set: starting from the top, if the petal width is less than or equal to 0.6 cm the iris is Setosa; if the petal width is greater than 0.6 cm and also greater than 1.7 cm, the iris is Virginica.

Fig. 2 Example of a decision tree for Fisher's Iris data set
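The Iris example above can be reproduced with scikit-learn; the sketch below trains a shallow decision tree on Fisher's Iris data and prints the learned splits, which typically involve petal width and length as in Fig. 2. The depth limit is an illustrative choice.

# Decision tree sketch for the Fisher's Iris example discussed above.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# Print the learned splits (typically on petal width/length for this data set).
print(export_text(tree, feature_names=iris.feature_names))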
4.6 Naive Bayes
Naive Bayes is a supervised learning algorithm which analyzes the relationships between independent and dependent variables. The method goes by the name of Naive Bayes because it is based on Bayes' rule and "naively" assumes independence: it is only valid to multiply probabilities when the events are independent. The assumption that attributes are independent (given the class) is certainly a simplistic one in real life, but despite the disparaging name, Naive Bayes works very effectively when tested on actual datasets, particularly when combined with some of the attribute selection procedures [5].
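As an illustration, the sketch below applies a Gaussian Naive Bayes classifier (one common variant of the method described above) to the Iris data with a simple train/test split; the data set and split ratio are illustrative choices.

# Naive Bayes sketch: per-attribute likelihoods are multiplied under the
# class-conditional independence assumption described above.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
nb = GaussianNB().fit(X_train, y_train)
print("test accuracy:", nb.score(X_test, y_test))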
4.7 K-Nearest Neighbor
Nearest neighbor method is a technique that classifies
each record in a dataset based on a combination of the
classes of the k record(s) most similar to it in a historical
dataset. Sometimes it is called the k-nearest neighbor
technique [6]. K-nearest neighbor is asymptotically
optimal for large k and n with k/n → 0. Nearest-neighbor
methods gained popularity in machine learning through
the work of [7], who showed that instance-based learning
can be combined with noisy exemplar pruning and
attribute weighting and that the resulting methods
perform well in comparison with other learning methods.
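The following hand-rolled sketch illustrates the k-nearest-neighbour idea on invented records: a query is assigned the majority class among its k closest stored records under Euclidean distance. The feature values, labels and k = 3 are illustrative.

# Hand-rolled k-nearest-neighbour sketch: majority vote among the k most similar
# stored records; data are invented for illustration.
from collections import Counter
import math

def knn_classify(query, records, k=3):
    """records: list of (feature_tuple, label); returns the majority label of the
    k closest records."""
    by_distance = sorted(records, key=lambda r: math.dist(query, r[0]))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

history = [((1.2, 0.3), "low risk"), ((1.0, 0.4), "low risk"),
           ((3.5, 2.1), "high risk"), ((3.9, 1.8), "high risk")]
print(knn_classify((3.2, 2.0), history, k=3))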
4.8 Logistic Regression
Logistic regression is a popular and powerful data mining technique that determines the impact of multiple independent variables, presented simultaneously, on membership of one or other of two dependent-variable categories; it uses the logit transform to predict probabilities directly. Logistic regression does not assume a linear relationship between the dependent and independent variables. The dependent variable must be dichotomous, while the independent variables need not be interval-scaled, normally distributed, linearly related, or of equal variance within each group. Logistic regression attempts to produce accurate probability estimates by maximizing the probability of the training data, and accurate probability estimates lead to accurate classification.
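The sketch below illustrates the use of the logit model to produce class probabilities directly, using scikit-learn's LogisticRegression on synthetic data; the data and feature names are illustrative, not drawn from the paper.

# Logistic regression sketch: class probabilities are obtained directly via the
# logit/sigmoid transform. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # dichotomous target

clf = LogisticRegression().fit(X, y)
print("P(class=1) for a new record:", clf.predict_proba([[0.2, -0.1]])[0, 1])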
4.9 Neural Network
A neural network is a very powerful and complex data mining technique based on models of the brain and nervous system. A neural network is highly parallel and processes information much more like a brain than like a serial computer; it is particularly effective for predicting events when a large database is available. Neural networks are typically organized in layers, and each layer is made of interconnected nodes. The input layer is connected to a number of hidden layers, where the actual processing is done via a system of weighted connections; these hidden layers can be connected to further hidden layers, which finally link to the output layer. Neural networks can be applied in voice recognition, image recognition, industrial robotics, medical imaging, data mining and aerospace applications.
Fig. 3 Example of Neural Network
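As a small illustration of the layered architecture described above, the sketch below trains a multi-layer perceptron with two hidden layers on scikit-learn's digits data set; the architecture and data set are illustrative choices.

# Multi-layer perceptron sketch: input layer -> two hidden layers of weighted
# connections -> output layer, as described above.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("test accuracy:", net.score(X_test, y_test))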
5. Descriptive Data Mining Techniques
Descriptive data mining techniques are typically unsupervised; they describe a data set in a concise way and present interesting characteristics of the data without any predefined target. They are used to induce interesting patterns from unlabelled data, and the induced patterns are useful in exploratory data analysis. Some of the descriptive techniques are clustering, summarization, association rules and sequence discovery.
5.1 Association Rules
Association rule mining is a process of searching for relationships among data items in a given data set, which helps in managing all the data items. For instance, association rules can be used in the retail sales community to identify items that are frequently purchased together; for example, people buying school uniforms in December also buy school bags. An association is a rule of the form if X
then Y; it is denoted as X → Y. For any rules X → Y and Y → X, X and Y are called an interesting item set.
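The support/confidence calculation underlying such rules can be sketched in a few lines of Python; the transactions and the uniform/bag example below are invented for illustration.

# Tiny support/confidence sketch for a rule "if X then Y" over a set of baskets.
transactions = [
    {"school uniform", "school bag", "shoes"},
    {"school uniform", "school bag"},
    {"school uniform", "notebook"},
    {"bread", "milk"},
]

def support(itemset, transactions):
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(x, y, transactions):
    """Confidence of the rule X -> Y: support(X and Y) / support(X)."""
    return support(x | y, transactions) / support(x, transactions)

X, Y = {"school uniform"}, {"school bag"}
print("support:", support(X | Y, transactions))
print("confidence:", confidence(X, Y, transactions))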
5.2 Clustering
Clustering is a process of partitioning or segmenting a set of data or objects into a set of meaningful sub-classes, called clusters. Clustering is similar to classification except that the groups are not predefined, but rather defined by the data alone. Clustering is alternatively referred to as unsupervised learning or segmentation; it can be thought of as partitioning or segmenting the data into groups that may or may not be disjoint. Clustering can be used in organizing computing clusters, market segmentation, social network analysis and astronomical data analysis.
Fig. 4 Clustering process

In clustering there are no predefined classes and no examples; the records are grouped together on the basis of self-similarity, and it is up to the user to determine what meaning, if any, to attach to the resulting clusters. Clusters of symptoms might indicate different diseases; clusters of customer attributes might indicate different market segments. Clustering is often done as a prelude to some other form of data mining or modeling. For example, clustering might be the first step in a market segmentation effort: instead of trying to come up with a one-size-fits-all rule for "what kind of promotion do customers respond to best", first divide the customer base into clusters of people with similar buying habits, and then ask what kind of promotion works best for each cluster [8].
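The market-segmentation idea above can be sketched with k-means: customers are grouped by (synthetic) buying-habit features and each cluster can then be examined separately. The features and the choice of three clusters are illustrative.

# k-means clustering sketch for market segmentation with synthetic customers.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# columns: yearly spend, number of purchases (three synthetic customer groups)
customers = np.vstack([rng.normal([200, 5], [30, 1], (50, 2)),
                       rng.normal([800, 20], [60, 3], (50, 2)),
                       rng.normal([1500, 8], [90, 2], (50, 2))])

segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(customers)
for c in range(3):
    print("cluster", c, "mean spend/purchases:",
          customers[segments == c].mean(axis=0))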
5.3 Sequence Discovery
Sequence discovery is the ability to determine sequential patterns in the data. The input data is a set of sequences, called data sequences; each data sequence is an ordered list of item sets, where each item set is a set of literals. Unlike market basket analysis, sequence discovery requires the items to have been purchased over time in some order. For instance, in the medical domain a data sequence may correspond to the symptoms or diseases diagnosed during visits to the doctor, and the patterns discovered from such data could be used in disease research to help identify symptoms or diseases that precede certain other diseases.
6. Data Mining Process
In order to conduct any data mining analysis systematically, certain procedures should be followed. There is a standard process, called the Cross-Industry Standard Process for Data Mining (CRISP-DM), that is widely used by industry members.
6.1 Understanding the Business
This is the first phase, whose aim is to understand the objectives and requirements of the business problem and to generate a data mining definition for the related problem.
6.2 Understanding the Data
The objective of this phase is to analyze the data collected in the first phase and study its characteristics. Models such as cluster analysis can also be applied in this phase, so that patterns can be matched to propose hypotheses for solving the problem.
6.3 Data Preparation
In this phase the raw data are first transformed and cleaned to generate data sets in the desired format. This phase therefore creates the final datasets that are input to the various modeling tools, providing the opportunity to see patterns based on the business understanding.
6.4 Modeling
In this phase different data mining techniques for modeling, such as visualization and clustering analysis, can be selected and applied depending on the nature of the data. The data collected in the previous phase can be analyzed and the generated output predicted.
6.5 Evaluation
In this phase the model results, or the set of models generated in the previous phase, should be evaluated for better analysis of the refined data.
6.6 Deployment
The objective of this phase is to organize and implement the knowledge discovered in the previous phases in such a way that it is easy for end users to understand and to use in prediction or identification of key situations. Models also need to be monitored for changes in operating conditions, because what might be true today may not be true a year from now; if significant changes do occur, the model should be redone. It is also wise to record the results of data mining projects so that documented evidence is available for future studies [8].

Fig. 5 The CRISP-DM process
7. Data Mining Tools
Data mining is a process that uses a variety of data analysis tools originating from machine learning and statistics. It uses a variety of data analysis and modeling techniques to discover patterns and relationships in data that may be used to make valid predictions. Most frequently, business end users lack the quantitative skills that would enable them to analyze their data effectively using statistical analysis methods. To bridge this gap, software companies have developed data mining application software to make the job much easier. These data mining tools allow users to analyze data from different angles by summarizing, predicting and identifying relationships. Table 1 lists some representative products.

Table 1: Data Mining Tools
Product            | Vendor                    | Functions
CART               | Salford Systems           | Classification
Clementine         | SPSS Inc.                 | Association rules, classification, clustering, factor analysis, forecasting, prediction, sequence discovery
Darwin             | Oracle Corporation        | Clustering, prediction, classification, association rules
Enterprise Miner   | SAS Institute Inc.        | Association rules, classification, clustering, prediction, time series
Intelligent Miner  | IBM Corporation           | Association rules, clustering, classification, prediction, sequential patterns, time series
LOGIT              | Salford Systems           | Forecasting, hypothesis testing
JDA Intellect      | JDA Software Group, Inc.  | Association rules, classification, clustering, prediction
WEKA               | The University of Waikato | Association rules, classification, clustering, visualization
8. Applications of Data Mining
Data mining is an emerging technology which finds application in various fields: identifying buying patterns of customers, determining distribution schedules among outlets, identifying successful medical therapies for different illnesses, detecting patterns of fraudulent credit card use, and other innovative applications that help solve social problems and improve the quality of life. Hence data mining applications can be used by small, medium and large organizations to achieve business objectives such as lowering costs and increasing revenue generation while maintaining a high standard of service. Below are some of the applications of data mining.
8.1 Business
Mining enables established business organizations to consolidate their business setup by providing them with a reduced cost of doing business, improved profit, and enhanced quality of service to the consumer.
8.2 Electronic Commerce
Data mining can be used in web design and promotion depending on the user's needs and wants. It can also be used in cross-selling, by suggesting to a web customer items that he or she may be interested in, through correlating properties of the customer or of the items the person has ordered.
8.3 Computer Security
Data mining enables network administrators and computer security experts to combine its analytical techniques with business knowledge to identify probable instances of fraud and abuse that compromise the security of a computer or a network.
8.4 Health Care
Healthcare organizations generate large amounts of data in their clinical and diagnostic activities. Data mining enables such organizations to use machine learning techniques to analyze healthcare data and discover new knowledge that might be useful, for instance, in developing new drugs.
8.5 Telecommunication
The telecommunication industry can use data mining to enable telecommunication analysts to consolidate the telecommunication setup by providing them with a reduced cost of doing business, improved profit and enhanced quality of service to consumers.
8.6 Banking
Data mining enables banking authorities to study and analyze the credit patterns of their consumers and to prevent bad credits or fraud in banking transactions. It also enables them to find hidden correlations between different financial indicators and to identify stock-trading patterns from historical market data.
8.7 Bioinformatics
Data mining enables biological scientists to analyze the large amounts of data generated in bioinformatics studies, using techniques such as data visualization.
8.8 Stocks and Investment
Data mining enables investors to first study the specific patterns of growth or decline of various companies and then intelligently invest in a company that shows the most stable growth over a specific period.
8.9 Crime Analysis
Data mining enables security agencies and police organizations to analyze the crime rate of a city or a locality by studying the past and current attributes that lead to crime. The study and analysis of these crime reports helps prevent the recurrence of such incidents and enables the concerned authorities to take preventive measures.
9. Discussion
Since its conception, data mining has achieved tremendous success in today's business world. Many new problems have emerged and have been solved by data mining researchers; however, big challenges still lie ahead. Some of the most difficult challenges faced by data miners are individual privacy, anonymization, discrimination and integration.
The privacy of an individual is an ethical issue: before any personal information is collected, the purpose must be stated, and such information must not be disclosed to others without consent. Personal information must also
not be transmitted to locations where equivalent data protection cannot be assured, and some data, such as sexual orientation or religion, are too sensitive to be collected except in extreme circumstances.
Anonymization of individuals is another ethical issue, and it is harder than one might think. For instance, hospital data are very sensitive: before the release of medical data to any researcher, the data should first be anonymized by removing all identifying information of an individual, such as name, address and national identification number.
A main purpose of data mining is discrimination. In the banking industry, data mining is used to discriminate between customers who apply for a bank loan, checking whether or not they are eligible and, for those who are eligible, for what amount and for how long they can borrow. Data mining can also be used to discriminate between customers by identifying who can get a special offer. Some kinds of discrimination, however, such as racial, sexual or religious discrimination, are unethical and illegal and hence cannot be allowed.
Maintaining data integrity is very important in any organization: if the data does not have proper integrity, it could be false or wrong; if the data is wrong, the results will be wrong, and hence the conclusions and interpretations of the results will be wrong. Everything will have been a complete waste of time and money, and the analyst may never get work as a researcher again. The key challenge here is the integration of data from different sources. For instance, CRDB bank in Tanzania has many branches across the country, and each branch requires a different database which needs to be integrated with the main branch. Each customer may have more than one ATM card, and each ATM card may have more than one address; mining this type of data is therefore very difficult, since the application software needs to translate data from one location to another and select the most recently entered address, which makes it difficult to extract the correct information.
Recently the cost of hardware has dropped dramatically, so data mining and data warehousing are becoming far less expensive to maintain. Small, medium and large organizations are accumulating a lot of data in their day-to-day activities through their local branches, government agencies and companies that collect data electronically. There is therefore no choice other than to implement data mining technology in the organization so that it can achieve its business goals, increase revenue generation and reduce labor costs. In this sense, data mining is like a pregnant woman: whatever she eats nourishes the baby.
10. Conclusion
This paper has defined data mining as a tool for extracting useful information or knowledge from large, unorganized databases, which enables organizations to make effective decisions. Most organizations use a data warehouse to store their data, which are later extracted and mined in order to discover new knowledge from their databases, in an acceptable format such as ARFF or CSV. The data are then analyzed using data mining techniques and the best model is built, which helps organizations in their effective decision making. The techniques discussed include classification, regression, time series analysis, prediction, decision trees, naive Bayes, k-nearest neighbor, logistic regression and neural networks.
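As a hedged illustration of how such techniques are typically applied to a dataset in CSV format, the sketch below compares two of the classifiers named above using scikit-learn; the file name and column names are hypothetical.

# Minimal sketch (hypothetical file and columns): compare naive Bayes and
# k-nearest neighbour classifiers on a CSV dataset with scikit-learn.
import pandas as pd
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

data = pd.read_csv("patients.csv")        # hypothetical mined dataset
X = data.drop(columns=["diagnosis"])      # predictive attributes (numeric)
y = data["diagnosis"]                     # class label

for model in (GaussianNB(), KNeighborsClassifier(n_neighbors=5)):
    scores = cross_val_score(model, X, y, cv=10)
    print(type(model).__name__, scores.mean())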
The data mining process known as the CRISP-DM model has also been discussed. In order to carry out any data mining task, a procedure needs to be followed that includes business understanding, data understanding, data preparation, modeling, testing, evaluation and deployment. Therefore, before doing any data mining task you need to ask yourself what you need to know, why, and how much data you need to collect; you then collect the required data and clean it. Data cleaning is the hardest part of a data mining task and requires a good understanding of data mining techniques. Lastly, you define your new features and deploy your results by convincing your boss that the data you mined make sense and can be used in decision making.
Many industries, such as banking, healthcare, telecommunication, marketing and the police, use data mining applications in their day-to-day activities. In marketing, data mining can be used to select the most profitable market in which to launch new products and to maintain a competitive edge over competitors. In the banking industry, data mining can be used to segment data, determine customers' preferences, detect fraud, cross-sell banking services and retain customers. In healthcare, data mining can be used to segment patients into groups, identify frequent patients and their recurring health problems, find relationships between diseases and symptoms, curb treatment costs and predict medical diagnoses. We therefore conclude that the industries that take full advantage of data mining applications in their day-to-day activities have the ability to enhance their competitive advantage.
Acknowledgements
I would like to thank almighty and my family for their
constant support during this work. I would also like to
thank my supervisors Dr. Anael Sam from Nelson
Mandela African Institute of Science and Technology and
Dr. Muhammad Abulaish from Jamia Milia Islamia for
their guidance and valuable support during this work.
Salim Amour Diwani received his BS degree in computer science from Jamia Hamdard University, New Delhi, India, in 2006 and his MS degree in computer science from Jamia Hamdard University, New Delhi, India, in 2008. He is currently a PhD scholar in Information Communication Science and Engineering at the Nelson Mandela African Institution of Science and Technology in Arusha, Tanzania. His primary research interests are in Data Mining, Machine Learning and Database Management Systems. He has published two papers in the area of Data Mining.
Effects of Individual Success on Globally Distributed Team
Performance
Onur Yılmaz
Middle East Technical University, Department of Computer Engineering
Ankara, Turkey
yilmaz.onur@metu.edu.tr
Abstract
The necessity of different competencies with a high level of knowledge makes it inevitable that software development is team work. With today's technology, teams can communicate both synchronously and asynchronously using different online collaboration tools throughout the world. Research indicates that there are many factors that affect team success, and in this paper the effect of individual success on globally distributed team performance will be analyzed. Student team projects undertaken by other researchers will be used to analyze the collected data, and conclusions will be drawn for further analysis.
Keywords: Teamwork, Individual success, Software development

1. Introduction
Considering the necessity and variety of competencies, it is inevitable that software development is cooperative work. In other words, software engineering is based on group activity where each player is assigned a role with different responsibilities. Although teamwork is regarded as a must, most project failures are related to deficiencies in team configuration. Therefore, having the optimum team composition is crucial for software development projects [1].

With the tools available online, collaboration can nowadays be conducted throughout the world without any obligation of face-to-face communication. Both synchronous and asynchronous applications are available for online collaboration: video chat and instant messaging are the most prominent synchronous examples as of today, whereas asynchronous tools include email, social networking and forums. With the help of these tools, people from different universities, countries and even different continents can work on the same topic with a high degree of collaboration [2].

Grouping people from different backgrounds and regions has, however, well-known drawbacks. Considering different time zones, communicating within teams and managing them is rigorous work. In addition, people from different programming backgrounds bring a large range of experience and knowledge, which is indispensable for software development. Therefore, software development teams must be considered as "an optimum combination of competencies" rather than just a group of developers.

As mentioned, software development needs different responsibilities and competencies, which can be gathered together from different regions of the world by means of today's technology. In addition, there are many factors that can affect the performance of a global software development team, such as cultural, individual and collaborative group work attitudes [3]. In this study, a subclass of individual factors, namely individual success, will be analyzed with respect to its effect on team performance. The individual success of the team players will be based on their experience and GPA; team performance, on the other hand, will be considered in two dimensions, overall grade and team communication. Although the reasoning for analyzing the overall team grade is self-explanatory, team communication will also be analyzed because research on software teams indicates that team performance is affected by the frequency of communication within teams [4].

In this paper, projects which were conducted by previous researchers in this area are analyzed. The software development teams studied in this research were groups of students from different universities and countries who were assigned a project and provided with different communication tools. The goal of this research is to extract the underlying relationships between the individual success of people and successful collaboration in globally distributed teams. The analysis resulted in some important findings. Firstly, it could be
stated that a higher average academic success of team members yields more successful teams. Secondly, as the range of team members' success increases, a decrease in team performance is observed. Thirdly and finally, as the GPA of team members increases, their contribution to team communication increases, which is an important driver of success in this environment.
2. Relevant Research

2.1 Individual Success
Individual characteristics such as academic success, age, sex, education level and experience contribute to team performance in collaborative teams [3]. In addition, some research indicates that an increasing range of individual characteristics within a group leads to a decrease in team performance [5]. When the scope of individual characteristics is limited to individual success, it can be concluded that a person's educational background and work experience can considerably affect the perception of his or her work [6].

2.2 Globally Distributed Teams
Although some drawbacks of globally distributed teams have been mentioned, both industry and institutions continue to work with small groups that combine people from different regions [5]. Research also notes that student teams and professionals cannot always be analyzed with the same reasoning; therefore, conclusions inferred from research based on student teams should not be applied in industry without careful analysis [3].

2.3 Team Performance
Since the projects in this research are based on the capabilities provided by online collaboration, the effect of communication on team performance should be considered. Research on this issue reveals that the frequency of communication affects the performance of the team [3]. Because communication frequency shows the number of times team members interacted with each other, this result is not surprising, but a detailed analysis should still be undertaken to check the relationships and underlying effects.

3. Methodology

3.1 Overall Design of the Study
In this research, two independent student team projects conducted in the fall and spring semesters of 2009 are used. Students from three universities, namely Atilim University (AU), Universidad Tecnológica de Panamá (UTP) and the University of North Texas (UNT), participated in these projects. The scope of their work was the design and implementation of their assignments. In addition, they were provided with a set of online communication tools and trained in how to use them. Usage statistics of these communication tools were collected, and the students were informed about this in advance. Full coverage of these projects and the data collection is given in the article of Serçe et al. [7].

3.2 Projects and Participants
As already mentioned, this study uses two projects that were undertaken by other researchers. These projects were studied in the paper of Serçe et al. entitled "Online Collaboration: Collaborative Behavior Patterns and Factors Affecting Globally Distributed Team Performance" [7]. The important aspects of these projects for this study can be summarized as follows:

Project #1: In this project, student teams were assigned a database management system for a car rental agency in the fall semester of 2009, for 6 weeks. The scope of the work covered design, functionality assessment, implementation and testing. Participants were from Atilim University (AU), Universidad Tecnológica de Panamá (UTP) and the University of North Texas (UNT).

Project #2: In this project, student teams were assigned a standalone bookstore management application that can be used by bookstore staff for daily operations. The scope of this project was the design and implementation of the application, and its duration was nearly two months. The participating universities were the same as in Project #1, and the project was conducted during the spring semester of 2009.

The number of students from each university and their roles in these projects are summarized in Table 1 and Table 2 below.
Table 1: Number of students from each university in projects
              AU    UNT   UTP
Project #1    38    29    12
Project #2    10     7    36

Table 2: Roles of students from each university in projects
              AU                 UNT               UTP
Project #1    Java Programmer    Tester / Leader   Database designer
Project #2    Java Programmer    Tester            Database designer
3.3 Measures and Data Collection
There are three important kinds of data in this study: GPA as the indicator of individual success, team performance grades for measuring team success, and communication statistics, which directly influence team success in this environment [3]. Although a vast amount of communication-related statistics was collected, there is a significant level of missing data in the GPA values of students; these values were treated as missing and removed completely from the data in the analysis stage. This loss of data prevented the research from conducting a comprehensive statistical analysis, so simple methods are used for correlation and for revealing underlying relationships.

3.4 Data Analysis
Firstly, in order to present the general situation, the GPA values of all participants are analyzed for each project. As tabulated in Table 3 below, the projects show no significant difference in GPA values in terms of mean, standard deviation and range. In addition, when their histograms in Figures 1 and 2 are checked, no significant skewness is observed in the graphs, even though the number of samples is small.

Table 3: Analysis of GPA for both projects
                  Project #1    Project #2
Average GPA       2.93          2.55
Std. Deviation    0.64          0.68
Min               1.89          1.25
Max               3.92          3.86

Fig. 1 GPA Histogram of Project #1
Fig. 2 GPA Histogram of Project #2

Secondly, the general situation of the team performances is analyzed for both projects. While conducting these projects, all students were assigned a performance grade, and for each team the team average is taken as the team performance grade in this study. With the same approach used in the prior step, the analysis of the team performance values is tabulated in Table 4. It can be seen that the average performance grade and the range are shifted in Project #2. In addition, when the histograms of performance grades in Figures 3 and 4 are checked, it can be seen that the second project's grades are generally higher than those of the first project.

Table 4: Analysis of performance grades for both projects
                  Project #1    Project #2
Average grade     64.75         80.80
Std. Deviation    14.41         10.63
Min               50            65
Max               91.50         97.50
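For readers who want to reproduce these descriptive statistics, a minimal pandas sketch is given below; the file names and column names ("project", "gpa", "team_grade") are assumptions for illustration, not part of the original dataset.

# Minimal sketch (hypothetical files and columns): summary statistics of the
# kind reported in Tables 3 and 4, computed per project with pandas.
import pandas as pd

students = pd.read_csv("students.csv")   # one row per student: project, gpa
teams = pd.read_csv("teams.csv")         # one row per team: project, team_grade

gpa_stats = students.groupby("project")["gpa"].agg(["mean", "std", "min", "max"])
grade_stats = teams.groupby("project")["team_grade"].agg(["mean", "std", "min", "max"])
print(gpa_stats)
print(grade_stats)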
Fig. 3 Performance Grade Histogram of Project #1
Fig. 4 Performance Grade Histogram of Project #2

After presenting the individual analysis of the measures, thirdly, the effect of the development team's average GPA on team performance is analyzed. For each team, the mean of the GPA values is calculated as an indicator of the general academic success of the team. With the same approach used in the prior analysis, average performance grades are calculated as the team performance indicator. The relationship between these measures can be seen in Figure 5 below. The scatter diagram shows that, for both projects, there is a tendency for average team performance to increase as average team GPA increases. Since the second project has higher performance grades in general, the effect of average GPA does not stand out as much as in the first project.

Fig. 5 Average Team Performance vs. Average Team GPA

After the analysis of average team success, the effect of the most successful team player on average team success is analyzed. To this end, for each team the student with the highest GPA is selected and the relationship with average team performance is checked. Although it might be expected that having a team member with a higher GPA would push the team towards higher grades in general, this does not hold for both projects. In other words, while Project #1 shows a positive correlation, Project #2 reveals a negative correlation between these measures, as can be seen in Figure 6.

Fig. 6 Average Team Performance vs. Maximum Team GPA

As mentioned before, research indicates that differences in team members' individual characteristics yield a decrease in team performance. Considering the focus of this paper, the maximum difference in the GPA of team members is calculated for each team. When the relationship between the GPA difference within a team and average team performance is checked in Figure 7 below, a negative correlation can easily be seen. In other words, for both projects, as the GPA range among students increases, average team performance decreases.
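A minimal sketch of the team-level aggregation behind Figures 5-7 is shown below; the input layout (per-student "team" and "gpa" columns, per-team "team_grade") is an assumption made only for illustration.

# Minimal sketch (hypothetical files and columns): aggregate students to team
# level and check the correlations discussed in Section 3.4.
import pandas as pd

students = pd.read_csv("students.csv")   # one row per student: team, gpa
teams = pd.read_csv("teams.csv")         # one row per team: team, team_grade

per_team = students.groupby("team").agg(
    mean_gpa=("gpa", "mean"),
    max_gpa=("gpa", "max"),
    gpa_range=("gpa", lambda g: g.max() - g.min()),
).reset_index()

merged = per_team.merge(teams, on="team")
for col in ("mean_gpa", "max_gpa", "gpa_range"):
    print(col, merged[col].corr(merged["team_grade"]))   # Pearson correlation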
Fig. 7 Average Team Performance vs. Maximum GPA Difference in Team

In the relevant research section, it was mentioned that studies reveal that the frequency of communication affects the performance of the team [3]. Considering this, how the GPA of a team member relates to his or her contribution to communication is analyzed. Before going into further analysis, it should be mentioned that many different communication tools were available in these projects, such as forums, email, chat and wiki. However, since analyzing and assessing the differences and contributions of each tool is out of the scope of this paper, all of them are counted as communication sessions. For each student, the percentage of their contribution to team communication is calculated and its relationship to GPA is checked. From the scatter diagram in Figure 8, it can be seen that there is a positive correlation between GPA and contribution to communication for both projects.

Fig. 8 Team Communication Contribution vs. GPA

4. Discussion
Data analysis showed that there are different underlying relationships between GPA and team success. Although there is a high level of missing data, which restricts detailed statistical analysis, correlations between measures can easily be detected. Considering the necessity for further studies, some conclusions from this analysis are generalized and drawn:

- When the average GPA of a team increases, team performance rises. This shows that individually successful team members can work in harmony and create a successful teamwork environment, which ultimately yields teams with higher performance.

- An increasing range of GPA decreases team performance. This confirms the research stating that differences in individual characteristics yield decreasing team success. This conclusion rests on the fact that, as the range increases, there is at least one team member with a lower GPA, and with less successful students in the team it is inevitable that the overall team performance will fall.

- Team members with higher GPA communicate more within their teams. This conclusion is based on the fact that, for these projects, each type of communication is counted the same without considering its characteristics. Since communication frequency shows the number of interactions, increasing communication suggests that work is in progress. This is confirmed by the two projects, in each of which the percentage of team communication increases with the students' GPA.

- Although team members with higher GPA values are expected to pull their teams towards greater success, for these two projects this statement could not be verified. This is mostly because a single individual with a high GPA cannot affect the whole team's work attitude and performance grade. This asymmetry in the teams means that no concrete relation between the highest GPA in a team and team performance could be found.

5. Conclusion and Future Work
In this paper, the effect of individual success on globally distributed software teams is studied. In order to reveal underlying relationships, student team projects conducted by other researchers, and their data, are used. In these projects, students from Panama, Turkey and the United States participated, and their individual characteristics and communication statistics were collected [3].
Since the data were collected and shared by other researchers, there was a noteworthy level of missing data; however, the projects with the most complete datasets were selected and used in this paper. The data were analyzed with the goal of extracting the underlying relationships between the individual success of people and successful collaboration in globally distributed teams. In the light of the relevant research and this analysis, some remarkable findings are constructed and presented.

For future work on this subject, there are some important points to mention. Firstly, a comprehensive statistical analysis was not implemented in this paper due to the small amount of data. However, with a complete and large dataset, statistical analysis and validation should be undertaken to clarify and support the findings. Secondly, student teams are studied in this paper, but the conclusions provide insight for implementation in industry; before applying these findings in professional life, a careful analysis of their compatibility should be undertaken. Thirdly and finally, the dimension of individual success studied here is GPA. However, studies indicate that individual success can be defined by multiple parameters, such as level of work experience or knowledge of the related subject. Considering this, different parameters of individual success can be studied to reveal their relationships to team performance. With the help of these future studies, the presented conclusions and analysis will become more concrete and their drawbacks will be overcome.

References
[1] Singh, I. (2011). Effectiveness of different personalities in various roles in a software engineering team. Informally published manuscript, Carnegie Mellon University, Pittsburgh, PA, USA.
[2] Meyer, B. (2006). The unspoken revolution in software engineering. Computer, 39(1), 124-121.
[3] Swigger, K., Nur Aplaslan, F., Lopez, V., Brazile, R., Dafoulas, G., & Serce, F. C. (2009, May). Structural factors that affect global software development learning team performance. In Proceedings of the special interest group on management information system's 47th annual conference on Computer personnel research (pp. 187-196). ACM.
[4] Dutoit, A. H., & Bruegge, B. (1998). Communication metrics for software development. Software Engineering, IEEE Transactions on, 24(8), 615-628.
[5] Bochner, S., & Hesketh, B. (1994). Power distance, individualism/collectivism, and job-related attitudes in a culturally diverse work group. Journal of Cross-Cultural Psychology, 25(2), 233-257.
[6] Jehn, K. A., Northcraft, G. B., & Neale, M. A. (1999). Why differences make a difference: A field study of diversity, conflict and performance in workgroups. Administrative Science Quarterly, 44(4), 741-763.
[7] Serçe, F. C., Swigger, K., Alpaslan, F. N., Brazile, R., Dafoulas, G., & Lopez, V. (2011). Online collaboration: Collaborative behavior patterns and factors affecting globally distributed team performance. Computers in Human Behavior, 27(1), 490-503.

Onur Yılmaz The author received his B.Sc. degree from the Industrial Engineering department of Middle East Technical University (METU) in 2012. In 2013, he obtained a second B.Sc. degree from the Computer Engineering department at the same university. Since 2012, he has been a master's degree student in Computer Engineering at METU. As a professional, Onur Yılmaz works as a software engineer and follows opportunities in the internet industry.
A Survey of Protocol Classifications and Presenting
a New Classification of Routing protocol in ad-hoc
Behnam Farhadi 1, Vahid Moghiss 2, Hossein Ranjbaran 3, Behzad Farhadi 4
Scientific-Research Institute of Iran Sensor Networks
Tehran, Iran
1 Farhadi.behnam@gmail.com, 2 Moghiss@gmail.com, 3 hossein.ranjbaran.it@gmail.com, 4 farhadi.behzad@gmail.com
Abstract
Many researchers around the world work in the field of wireless networks, and the result of this research is the annual production of several new protocols and standards. Since the field is divided into several sub-categories, each new protocol that is presented must be placed in one of these sub-categories based on its own structure. In the last few years, many researchers have proposed ideas for protocol classifications that seemed to be appropriate categorizations. Given the expanding area of wireless network research and the current studies on intelligent methods, the existing classifications are inefficient and a new classification needs to be designed. One of the most fundamental problems in this area is the lack of a comprehensive classification structure that can accommodate new intelligent protocols. In this paper, we compare different protocols and available classifications and present a new classification, modeled on the Mendeleev table, in which each protocol can be classified into one of the branches of a tree and allocated a globally unique code.
Keywords: Mobile ad-hoc network; Unintelligent routing protocols; Modern classification.

1. INTRODUCTION
A mobile network is a network in which the relative position of its constituent nodes varies. The routing protocol must manage node mobility and deliver packets to their targets accurately, so that the communicating parties need not be aware of the mobility of the network nodes. One of the mobile wireless networks that has recently received attention [1] is the ad-hoc network, which consists of mobile nodes and does not use any control center for routing its packets. These networks are applied in specific situations in which a central, wired infrastructure is impossible to provide or ordinary systems are not economical, such as scientific conferences, military communications, or rescuing the wounded in natural disasters. By testing the protocols used for wired networks, researchers concluded that they are not suitable for these networks: in these protocols, the mobility in the network creates a lot of routing overhead, which is problematic because of the limited resources of these networks, and using them for routing leads to the formation of loops. To solve these problems, some protocols were designed specifically for ad-hoc networks. Given the properties of ad-hoc networks, such as the variable structure, the limited bandwidth of wireless communication and the limited energy of nodes, each of the designed protocols is confronted with its own problems. In this paper, we review different unintelligent routing protocols in mobile ad-hoc networks, which include routing algorithms based on network structure in terms of on-demand and table-driven routing, according to the new classification. First, we explain the different protocols briefly and compare them in terms of various parameters. The paper is organized as follows: in Section II we review related works, where part (A) analyzes unintelligent protocols based on network structure in terms of on-demand routing, part (B) considers unintelligent protocols based on routing tables, and part (C) considers unintelligent protocols based on network structure in terms of hybrid methods. In Section III, we introduce a new method for classifying routing protocols. Conclusions and further work are given in the last section.
2. REVIEWING PREVIOUS WORKS AND
OFFERING A COMPARATIVE TABLE FOR
EACH GROUP
2.1. Investigation and analysis of unintelligent protocols based on network structure in terms of on-demand routing
The DSR routing protocol [3] is an on-demand protocol that uses source routing, in which each packet contains the complete routing information; each node is therefore aware of information about its own neighbors. In DSR, each node uses a route cache to store route information, which reduces the routing overhead. The algorithm provides the requesting node with the shortest available path.
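The idea of carrying the complete route in the packet and caching discovered routes can be illustrated with the short sketch below; it is only an illustration over a toy topology (breadth-first search standing in for route discovery), not the DSR specification.

# Illustrative sketch only (not the DSR specification): a source node keeps a
# route cache and, when no cached route exists, discovers a fewest-hop path and
# stores the complete route so it can be placed in the packet header.
from collections import deque

topology = {                 # hypothetical node connectivity
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
route_cache = {}

def source_route(src, dst):
    if (src, dst) in route_cache:        # reuse cached route, saving overhead
        return route_cache[(src, dst)]
    frontier = deque([[src]])            # breadth-first search = fewest hops
    visited = {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            route_cache[(src, dst)] = path
            return path
        for nxt in topology[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(source_route("A", "E"))            # e.g. ['A', 'B', 'D', 'E']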
TABLE 1. ON-DEMAND ROUTING PROTOCOLS (comparison of DSR [3], TORA [4,5], ABR [7], ACOR [8], AODV [15] and DYMO [11] with respect to update period, update destination, structure, route computation, multicast capability, hello-message requirement, route metric, unidirectional links, loop freedom, broadcasting of route requests, overhead, route recovery, awareness of distance between nodes, routing algorithm and distribution)

The TORA routing protocol [4,5] is an on-demand protocol based on bi-directional links; this increases the number of packets transmitted along a path and thereby increases the protocol's overhead. By modifying the way bi-directional links are used, TORA has become applicable to ad-hoc networks with high densities.
The DYMO routing protocol [11] has a flat structure; its main aim is to improve convergence in the network, preventing loops and the count-to-infinity problem in routing. DYMO uses discovery and storage processes, and when a path breaks the discovery process is started again. This protocol does not use periodic HELLO messages, which reduces routing overhead and thereby meets the requirements of highly dense networks.
The ABR routing protocol [7] is an on-demand protocol based on source routing which keeps routing information only for established, relatively stable paths (i.e., paths that are valid for a certain time during which the nodes do not move). In this protocol, the chosen path is the most stable one, although nodes may still move unexpectedly, in which case path reconstruction is started; the path may not be reconstructable, and the source is then forced to start discovery from the beginning. This protocol has a flat structure and uses periodic HELLO messages, which increases overhead. Therefore, it is not suitable for highly dense networks with great mobility, but it is suggested for networks in which lifetime is a priority, as it produces long-lived paths.
The AODV routing protocol [15] is a reactive protocol that needs to maintain routing information about active paths, and it uses tables to store this path information. The number of route-request and route-reply messages exchanged is large, and since these messages are diffused by broadcasting, extra messages are produced; this message redundancy reduces node battery power, bandwidth and network efficiency. At present, this protocol is the best known and the most common.
The ACOR routing protocol [8] is on-demand and uses a flat structure. Its main aim is to improve the quality of routing by using QoS parameters. It does not use network resources on unnecessary occasions and starts path discovery only when necessary. The response time to a path request is long, but ACOR improves the end-to-end delay parameter to solve this problem. Network lifetime is increased by this protocol, which makes it suitable for large networks with high density.
2.2. Analysis of unintelligent protocols based on routing tables
DSDV routing protocol [2] is a table-driven protocol which
is not suitable for high dense and great mobile ad-hoc
networks due to creation of large overload in the situation of
vast node mobility and the unreasonable use of network resources to keep unused information. DSDV can, however, reduce routing overhead by using two different kinds of update, partial and whole. The main aim of this protocol is to remove the loop and count-to-infinity problems from which most table-driven protocols suffer.

The OLSR routing protocol [6] is a table-driven protocol that improves the classic link-state route-finding algorithm, in which every node distributes all route information to its own neighbors; OLSR instead sends the information to selected nodes named MPRs, which reduces the number of updates and the overhead remarkably. It is nevertheless ineffective for highly dense and highly mobile ad-hoc networks because of its dependency on routing tables.

The IARP routing protocol [9] is a table-driven protocol with a hierarchical structure that provides routes immediately. It does not use periodic HELLO messages, which helps to reduce overhead, while providing the shortest route and route repair. The main aim of this protocol is to meet routing requirements without any attention to network resources and overhead, so it is not suitable for networks with limited energy resources.

The FSR routing protocol [13] is a table-driven protocol based on a link-state algorithm that exploits a fisheye-like mechanism, giving priority to the nearest nodes and/or the most qualified route. The accuracy of routing information in this protocol therefore depends on the distance to the destination, which can decrease the network overhead because information is exchanged frequently only with the nearest nodes. FSR performs better than other link-state protocols, as it does not try to obtain information about all available nodes.

The TBRPF routing protocol [14] is a table-driven protocol with a link-state algorithm, which provides the shortest route using hop-by-hop routing. Each node has a table which keeps the information of all accessible nodes. It uses periodic HELLO messages to discover and supervise routes, but the number of HELLO messages is smaller than usual. It aims at successful routing and uses the network appropriately, which makes it usable in ad-hoc networks.

2.3. Analysis of unintelligent hybrid protocols based on network structure

The ZRP routing protocol [10] is a hybrid protocol that uses the on-demand method for intra-zone routing and the table-driven method for inter-zone routing. This reduces the control traffic of the table-driven method and the recession (the response time to a route request) of the on-demand method. The network is divided into routing zones according to the distance between the mobile nodes. ZRP uses different routing methods to find inter-zone and intra-zone routes, so it can provide the shortest and best route. In addition, the overhead of this protocol is low, which makes it suitable for highly dense networks in which routing quality is a priority.

CBRP is a hybrid protocol [12] that uses clustering. In this protocol, each node must maintain its own neighbor table, which contains the information required for all cluster members. This protocol provides the fastest and shortest route, while its biggest problem is using network resources unnecessarily. The overhead is high in this protocol because of the route-finding table and the periodic distribution of HELLO messages. It is suitable for highly dense networks because the network never fails to work.

3. A NEW METHOD TO CLASSIFY ROUTING PROTOCOLS

In this paper, we present a comprehensive classification for routing protocols which covers all the important aspects of routing.

A new classification is offered in [21], but it does not consider some important factors; for example, the learning-automata-based intelligent algorithm of [23], which pays attention to routing fault tolerance, cannot be placed in it. In [22], the classification is based on only a single factor, location services, which is limiting. Reference [24] offers a relatively more comprehensive classification, but again provides no place for intelligent algorithms, and [25] shows only a classification of mobility models. There are also other classifications in [16-20].

All of this supports the idea that researchers have sought a classification that covers their preferred algorithms. In our proposed method, we consider a tree whose root is routing. The root has two main branches, weighted 1 or 0 depending on whether the protocols are intelligent or unintelligent, respectively. One of the important aims of designing the tree and allocating codes to its elements is that an algorithm located under the last leaf obtains a specific code once it has been surveyed; the order in which algorithms are placed is based on their publishing year and importance, and the algorithm codes start from 1.

The left-side branch (unintelligent) consists of two subsets, position-based and topology-based. The position-based part has two subsets, with known (physical) and unknown (virtual) coordinates, and the topology-based part is divided into topology-aware and topology-unaware parts. We have classified the former into flat and hierarchical parts and the latter into four distinct parts. The right-side branch (intelligent) contains four subsets, which researchers have used to make algorithms intelligent; the subset below these parts specifies the main feature of the intelligent algorithm and contains various subsets depending on the algorithm. Take, for example, the ZRP and ZHLS algorithms. In our method, as in Figure 1, the code of the ZRP algorithm is (001101), which tells us that it is an unintelligent routing algorithm (first digit 0), structure-based (second digit 0) and hierarchical (third digit 1) with a hybrid method; using the last digit, we know that it is the first protocol in this class. Our proposed classification is shown in Figure 1, and Figure 2 illustrates the routing protocol classification with the new method.
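A possible way to read the proposed coding scheme is sketched below; since the full tree of Figures 1 and 2 is not completely recoverable here, the digit assignments (including "10" for the hybrid method) are assumptions chosen only so that the ZRP example code 001101 from the text is reproduced.

# Minimal sketch (assumed digit encoding): build a protocol code in the spirit
# of the proposed classification, reproducing the ZRP example (001101).
def protocol_code(intelligent, position_based, hierarchical, method_digits, index):
    # 0/1 for unintelligent/intelligent, 0/1 for structure/position based,
    # 0/1 for flat/hierarchical, then the method digits and the protocol index.
    digits = [
        "1" if intelligent else "0",
        "1" if position_based else "0",
        "1" if hierarchical else "0",
        method_digits,        # e.g. "10" assumed here to denote the hybrid method
        str(index),           # order of publication/importance within the class
    ]
    return "".join(digits)

print(protocol_code(False, False, True, "10", 1))   # -> 001101 (ZRP)
print(protocol_code(False, False, True, "10", 2))   # -> 001102 (ZHLS)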
TABLE 2. TABLE-DRIVEN ROUTING PROTOCOLS (comparison of DSDV [2], OLSR [6], FSR [13], TBRPF [14] and IARP [9] with respect to the same properties as Table 1)

TABLE 3. HYBRID ROUTING PROTOCOLS (comparison of ZRP [10] and CBRP [12] with respect to the same properties as Table 1)

Figure 1. Example of our presented classification (showing 1-ZRP and 2-ZHLS under the unintelligent, structure-based, hierarchical, hybrid branch)

4. CONCLUSION AND FURTHER WORKS
Routing is an important component of communication protocols in mobile ad-hoc networks. The design of a protocol in these networks is performed to achieve specific objectives based on the available requirements, the network specifications, or the application in different zones. In this paper, we introduced a variety of routing protocols along with their applied features. We presented a new classification (taxonomy) method, based on the Mendeleev table, in which each algorithm receives a new code. This tree is under development, and each researcher can locate his or her algorithm in one of the classes with a unique code. Our future work is to develop the proposed tree in all areas of ad-hoc networks, such as security, secure routing, coverage, and research on intelligent algorithms.
Figure 2. A view of routing protocol classification with the new method. The tree branches from Routing into intelligent methods (learning automata, ant colony, neural network and genetic approaches, with primary objectives such as fault tolerance, energy awareness, quality of service, data delivery and location finding) and unintelligent methods, which are divided into position-based and topology-based branches covering table-driven (proactive), on-demand (reactive) and hybrid protocols.
REFERENCES
[1] S. Murthy and J. J. Garcia-Luna-Aceves, "An Efficient Routing Protocol for Wireless Networks," ACM Mobile Networks and App. J. (Special Issue on Routing in Mobile Communication Networks), pp. 183-97, Oct. 1996.
[2] C. E. Perkins, P. Bhagwat, "Highly Dynamic Destination-Sequenced Distance-Vector Routing (DSDV) for Mobile Computers," ACM Computer Communication Review (ACM SIGCOMM'94), vol. 24, no. 4, pp. 234-244, Oct. 1994.
[3] D. Johnson, D. A. Maltz, "Dynamic Source Routing in Ad Hoc Wireless Networks," in Mobile Computing (T. Imielinski and H. Korth, eds.), Kluwer Acad. Publ., 1996.
[4] V. D. Park, M. S. Corson, "A Highly Adaptive Distributed Routing Algorithm for Mobile Wireless Networks," INFOCOM '97, Sixteenth Annual Joint Conference of the IEEE Computer and Communications Societies, Proceedings IEEE, vol. 3, pp. 1405-1413, Apr. 1997.
[5] V. D. Park, M. S. Corson, "Temporally-Ordered Routing Algorithm (TORA) Version 1 Functional Specification," IETF Internet Draft, 1997.
[6] T. Clausen, P. Jacquet, "RFC 3626 - Optimized Link State Routing Protocol (OLSR)," Project Hipercom, INRIA, Oct. 2003.
[7] C.-K. Toh, "Associativity Based Routing For Ad Hoc Mobile Networks," Wireless Personal Communications Journal, Special Issue on Mobile Networking and Computing Systems, vol. 4, pp. 103-139, Mar. 1997.
[8] Jinyang Li, J. Jannotti, D. S. J. De Couto, D. Karger and R. Morris, "A Scalable Location Service for Geographic Ad Hoc Routing," in Proc. MOBICOM 2000, Boston, MA, USA, 2000.
[9] Z. J. Haas, M. R. Pearlman, P. Samar, "The Intrazone Routing Protocol (IARP) for Ad Hoc Networks," IETF Internet Draft, Jul. 2002.
[10] Z. J. Haas, M. R. Pearlman, "The Zone Routing Protocol (ZRP) for Ad Hoc Networks," IETF Internet Draft, Aug. 1998.
[11] I. Chakeres (CenGen), C. Perkins (WiChorus), "Dynamic MANET On-Demand (DYMO) Routing," IETF Internet Draft, Sept. 2009.
[12] M. Jiang, J. Li and Y. C. Tay, "Cluster Based Routing Protocol (CBRP)," IETF Internet Draft, Aug. 1999.
[13] G. Pei, M. Gerla and T.-W. Chen, "Fisheye State Routing in Mobile Ad Hoc Networks," in Proceedings of the 2000 ICDCS Workshops, pp. D71-D78, Taipei, Taiwan, Apr. 2000.
[14] R. Ogier, "Topology Dissemination Based on Reverse-Path Forwarding (TBRPF): Correctness and Simulation Evaluation," Technical Report, SRI International, Arlington, VA, USA, Oct. 2003.
[15] C. E. Perkins, E. M. Royer, "Ad Hoc On-Demand Distance Vector Routing," Proceedings of the 2nd IEEE Workshop on Mobile Computing Systems and Applications (WMCSA '99), pp. 90-100, New Orleans, LA, Feb. 1999.
[16] M. A. Labrador, P. M. Wightman, "Topology Control in Wireless Sensor Networks," Springer Science + Business Media B.V., 2009.
[17] A. Nayak, I. Stojmenovic, "Wireless Sensor and Actuator Networks: Algorithms and Protocols for Scalable Coordination and Data Communication," John Wiley & Sons, Inc., Hoboken, New Jersey, 2010.
[18] A. Boukerche, "Algorithms and Protocols for Wireless Sensor Networks," John Wiley & Sons, Inc. (Wiley-IEEE Press), Nov. 2008.
[19] J. Zheng, A. Jamalipour, "Wireless Sensor Networks: A Networking Perspective," John Wiley & Sons, Inc. (Wiley-IEEE Press), Oct. 2009.
[20] K. Sohraby, D. Minoli, T. Znati, "Wireless Sensor Networks: Technology, Protocols, and Applications," Wiley-IEEE Press, May 2007.
[21] A. N. Al-Khwildi, K. K. Loo and H. S. Al-Raweshidy, "An Efficient Routing Protocol for Wireless Ad Hoc Networks," 4th International Conference: Sciences of Electronic, Technologies of Information and Telecommunications (SETIT 2007), Tunisia, Mar. 2007.
[22] S. M. Das, H. Pucha, and Y. C. Hu, "Performance Comparison of Scalable Location Services for Geographic Ad hoc Routing," in Proc. of the IEEE Infocom'05, vol. 2, pp. 1228-1239, Miami, FL, Mar. 2005.
[23] S. M. Abolhasani, M. Esnaashari, M. R. Meybodi, "A Fault Tolerant Routing Algorithm Based on Learning Automata for Data Inconsistency Error in Sensor Networks," Proceedings of the 15th Annual CSI Computer Conference (CSICC'10), Tehran, Iran, Feb. 2010.
[24] K. Gorantala, "Routing Protocols in Mobile Ad-hoc Networks," Master Thesis, Department of Computing Science, Umeå University, Umeå, Sweden, Jun. 2006.
[25] H. Idoudi, M. Molnar, A. Belghith, B. Cousin, "Modeling Uncertainties in Proactive Routing Protocols for Ad Hoc Networks," Information and Communications Technology '07 (ICICT 2007), ITI 5th IEEE International Conference, pp. 327-332, Dec. 2007.
Developing mobile educational apps: development strategies,
tools and business models
Serena Pastore
Astronomical Observatory of Padova, INAF
Padova, 35122, ITALY
serena.pastore@oapd.inaf.it
Abstract
The mobile world is a growing and evolving market in all its aspects, from hardware and networks to operating systems and applications. Mobile applications, or apps, are becoming the new frontier of software development, since today's digital users treat mobile devices as everyday objects. Therefore, apps could also be adopted in an educational environment in order to take advantage of the diffusion of mobile devices. Developing an app should not be a decision driven by trends, but must follow specific strategies in choosing target devices, mobile platforms, and distribution channels, which in this context usually go through e-commerce sites called stores (e.g., Apple Store, Google Play, Windows Phone Store). Furthermore, the design of an educational mobile app requires a careful analysis of the methodologies to be adopted and the available tools, especially if the aim is to build a successful and useful app. Very often there is a tradeoff with the costs associated with development, which requires a lot of technical and programming skills, while the economic return is neither high nor guaranteed, considering the business models offered by the major stores. The paper deals with approaches to app development, analyzing methodologies, distribution channels and business models in order to define the key points and strategies to be adopted in an educational context.
Keywords: Mobile apps development, cross-platforms mobile tools, native apps, e-commerce, hybrid apps, Sencha Touch, Apache Cordova

1. Introduction
The mobile world is a growing and evolving market in every one of its facets. Mobile devices [1], intended as electronic devices created for specific functions (e.g., voice, like cell phones and smartphones; reading books, as e-book readers; and more general activity, such as laptops or tablets), have undergone exponential growth in recent years, especially with the introduction to the market of mobile devices connected to the Internet, such as smartphones and tablets. Even if the market includes different categories of mobile devices, tablets and smartphones seem to be the top categories according to several market analysis companies [2]. The actors involved in the mobile ecosystem (Fig. 1) include device manufacturers (e.g., Nokia, Samsung, RIM BlackBerry), software platform providers (e.g., Apple, Google, Microsoft), application developers and the mobile operators that provide Internet connectivity in each country (e.g., in Italy Vodafone, TIM, 3G). A single mobile device is composed of a specific hardware and software platform that deals with the peculiarities of these systems. Mobile devices should follow the user's mobility (therefore they are generally of reduced size and weight, so that they can be transported easily). Their hardware components offer limited processing power and memory and a small screen, but a variable number of other hardware components such as cameras, network adaptors (Wi-Fi, Bluetooth), and sensors like GPS (Global Positioning System) receivers, accelerometers, compasses, proximity sensors, ambient light sensors (ALS), and gyroscopes.

Fig. 1 Actors of the mobile ecosystem

Software applications on such devices, or mobile apps as they are universally called, are the new frontier of software considering the diffusion of mobile devices, and they should follow the rules of this ecosystem. Developing an app of any category, even in an educational environment, should be the result of a careful analysis of the mobile platforms, tools and methodologies to be adopted. There
are a lot of hardware manufacturers (e.g., Apple, Samsung, Nokia, LG, HTC, Asus), and this fragmentation is mirrored in the mobile software platforms (e.g., iOS, Android, Symbian, and Windows Phone) that deal directly with the hardware and help to take advantage of the device's performance. An app is usually targeted at a few software platforms, since developing for all existing platforms is really expensive and requires high technical skills. The paper deals with approaches to app development, focusing on methodologies, tools and distribution channels in order to analyze the peculiarities and issues of the different methods. From the methodological point of view, apps are divided into native and web apps [3], which implies the use of different technologies. Moreover, a compromise between the pros and cons of the two methods is offered by hybrid apps, which use special frameworks to convert web apps into native apps. There are different aspects that should be taken into consideration when choosing a specific platform or methodology, and the paper analyzes them in order to figure out which is the best approach for educational apps. The first section analyzes apps' features, focusing on their types and the structure of their market. The second section discusses the choices in development methodologies and tools, while the third section examines the tools needed for the development of each kind of app. Finally, an evaluation of the various types of development is given, taking into account also the distribution channels of the stores normally used by users to search for an app.
mobile platform and are therefore capable of interfacing
completely with the hardware’s device. But since they are
dedicated to mobile devices equipped with specific
operating system, they are targeted to a single platform.
Web apps use web technologies (HTML5, JavaScript,
CSS3 [4]) and are multi-platform, since they can be
executed from any mobile device equipped with a web
browser. However such app could not take advantage of
all hardware capabilities, since interacts with the
hardware by means of an intermediate software layer and
specific Application Programming Interfaces (APIs). A
compromise between the pro and cons of the two methods
are the hybrid apps that use special frameworks mainly
based on JavaScript technology to convert web apps into
native apps [5].
Usually apps could be preinstalled in the device and
thus imposed by the mobile software reference platform or
could be installed by the user or, as in the case of web app,
imply a request to an Internet server machine. Convince a
user to download and use an app depends a lot on both its
utility and its quality and how it is distributed.
As regards apps distribution, even if some app can be
delivered through specific websites or e-mail, the main
distribution channel is an e-commerce site called store [6]
owned by major hardware and software mobile providers.
Each store imposes specific policies for apps distribution
and different business models. In this context, a developer
should decide if developing for one or more mobile
platforms, since this decision affects the choices on
methodologies, technologies, tools and distribution
channels.
2. Mobile apps features and the business
models
2.1 Mobile devices and platforms
Each mobile device has different technical features in
terms of available memory, computing power, screen
resolution,
operating
system
(OS)
and
the
hardware/software platform that determine the quality and
performance. Even though the market currently seems to be occupied by a few dominant players (Apple and Samsung), there are many other manufacturers of hardware and software that are trying to earn their slice of the market. Many companies producing mainly software, such as Microsoft and Google, have made agreements with hardware manufacturers to provide their own systems. Other companies, like Apple or RIM BlackBerry, provide both the hardware and the software. Each manufacturer tries to keep pace with new technologies, thereby releasing ever newer models of the two main categories of devices (i.e., smartphones and tablets) and trying to anticipate competitors in order to increase their market value.
Among the leading manufacturers of mobile
hardware, Nokia has been at the top of the sale of mobile
phones for many years, offering devices running the Symbian OS operating system. But with the arrival of smartphones, Nokia has lagged behind its competitors Apple and Samsung. It currently offers mobile phones that still run Symbian OS, but also the Lumia smartphone series which, thanks to an agreement with Microsoft as software supplier, is equipped with the Windows Phone operating system. Samsung has been very successful for many years in selling low-end phones and smartphones. Samsung uses its proprietary Bada OS as the software platform in most of its mobile devices (even if it will probably be replaced by Tizen OS [5], an open operating system developed in collaboration with Intel). It uses the Android operating system in the Galaxy smartphone series, which is the line that has determined its success.
Sony Ericsson has equipped its latest Xperia products with Windows Phone and Android, while Research in Motion (RIM) is linked to its successful BlackBerry product, now equipped with BlackBerry OS version 10, as are the other mobile devices provided by the company. LG, instead, equips its mobile devices with Android.
iOS and Android are currently predominant in the mobile OS market, but other OSs (e.g., Windows Phone, BlackBerry [5]) could nevertheless have their market and could be the target platform for a developer. An app provider very often has to optimize the software for a specific platform or device. In order to obtain the best performance, and thus to maximize the mobile experience, he/she should know the various mobile platforms of the devices. An app depends strongly on the mobile context, that is, on the type of hardware device, the software platform and the network used for data transmission. The final delivery of apps is usually done by means of stores. The user searches the store for the app and, once it is selected, it is automatically installed on the device. There is no way to test the app before downloading it, and this makes it difficult for the user to select the application on the basis of the available information. Considering the spread of such devices, Fig. 3 displays mobile users' main activities.
Fig. 3 Time spent on iOS and Android device (April 2013)
The figure shows that the percentage of time spent on iOS and Android devices using apps is greater than the time spent in web browsing. This clearly indicates the importance of apps.
Very often the factors that influence a user in downloading an app, apart from the choice of the category, are the price, a clear description of its purpose, the sample images, and the votes given by other users. A developer can rarely affect the price in a significant way, considering the fact that users prefer to download free apps, but he/she can affect the other elements. The description and the pictures showing the app in action are very important, since they are the way in which a user can understand how an app behaves. Once the user has downloaded the application, he/she will expect the app to respond to his/her needs. A simple and intuitive application is the best way to be appreciated by a user. If the app is delivered by the reference store, there are other factors that may influence the choice. Each store, in fact, implements an evaluation method based on the votes given by users. A developer must keep in mind that an app's future updates should be able to improve the user experience and remedy possible negative criticism.
Focusing on the store distribution channel, each store offers different business models to obtain revenue from apps, and studies demonstrate that this is the preferred channel for searching for an app. A recent study by the analyst firm Canalys on the main stores (Apple's App Store, Google Play, Windows Phone Store and BlackBerry World) [7] shows a constant growth in app sales during the first quarter of 2013. Among the stores, the App Store records the highest turnover, with three-quarters of the total market value (74%), while Google Play ranks first in the number of app downloads (51%). The other stores lag behind, but still remain the point of reference for users and developers of the related ecosystems.
2.2 Apps business models on mobile stores
On a store, apps are usually downloaded according to a license that is set by the developers. However, each store, as Table 1 shows, offers a specific program to sell the app, ranging from free/paid downloads to other business models (e.g., in-app purchase and in-app advertising). The first distinction is between free apps and paid apps. As regards the ways to monetize an app, the two main models are pay per download (or premium offering) and in-app purchases (IAP). The pay-per-download method was the first model adopted, especially in the App Store, with different fee levels (e.g., most apps require a small fee of $0.99 or $1.99): each user pays for the app before downloading and installing it on the device. In-app purchases (IAPs) constitute a model that lets a user download an app either for free (in this case the model is known as freemium) or for a fee, and requires a payment if the user wants to enable added features or functionalities, make upgrades or advance in a game (e.g., enabling new levels). Another source of revenue can be in-app advertising (in-app ads).
Other analyses, however, have shown (e.g., by Distimo [8]) that in-app purchases in the freemium model generate the majority of revenue in the app stores. Finally, the in-app advertising model uses the store's advertising platform or service and consists in inserting specific advertising code into the app in order to obtain revenue from such publicity. According to recent statistics, mobile advertising is increasing, even if this model necessarily has to differ from the traditional advertising model that dominates websites. In many cases, receiving an advertising message when downloading an app for free seems to be a price that users are willing to pay in order to have no costs. As Table 1 describes, there are different models of advertising, such as affiliate marketing, which promotes other mobile apps in order to earn commissions when the promoted app is purchased and downloaded.
In any case, the best strategy for marketing and app success (Fig. 4) depends on several factors, including the buying habits of the target audience and the type of application. For example, according to several analysts, Apple users are more used to paying for their apps, and so paid apps are the norm there. In other contexts the freemium model seems to prevail, since the app is offered for free and the user only pays for enhanced functionalities. However, considering the number of free apps and the low revenue from advertising networks, most developers use other channels to make money from creating mobile software.
Table 1: Business models for apps selling in the different stores
- App Store: free apps (no charge to download) and priced apps (user charged before download); fixed price tiers (free, $0.99, $1.99, $2.99, ...), with exchange rates fixed and controlled by Apple and prices changeable at any time. Additional models: in-app purchases (offering additional digital content, functionality, services or subscriptions within paid or free apps); iAd rich media ads (exploiting Apple's digital advertising platform, with the developer adding ads and receiving 70% of the net ad revenue generated); Volume Purchase Program (a specific program for business and education institutions to purchase apps in volume).
- Google Play: free and paid apps, whose base price can be selected by country; the freemium model generates 72% of the total revenues of paid apps. Additional models: in-app products and subscriptions (e.g., one-time purchases and auto-renewing subscriptions from inside the app, freemium); ad-supported model (ads in apps through AdMob integration, or distribution for free combined with the sale of in-app products or advertising).
- Windows Phone Store: free and paid apps with fixed tiers (e.g., pay per download). Additional models: in-app products (e.g., trial apps, and apps divided into consumable or durable); ad-funded apps (free apps with ads inserted in the code through the Microsoft Advertising service); 7-day or 30-day subscriptions (a trial period after which users pay a fee to renew their subscription).
- BlackBerry World: free and paid apps (purchased before downloading).
Fig. 4 Good practice for app design, marketing vs. milestones for app success
A recent report [9] shows that the major revenue comes from sources other than app store sales and advertising, such as e-commerce, licensing and commissioned app-making. Probably the top revenue-generating apps on the
store belong to the game category or, in some cases, to mobile messaging apps.

2.3 Apps category
There are categories of apps that seem to be more appealing in a mobile context. For example, the research company Nielsen reports that in Brazil and in Italy users prefer to surf the Internet with the mobile device rather than downloading apps, while in countries like Australia, India, South Korea and Turkey the use of apps and web browsing are comparable. Mobile apps and websites for mobile devices should be different things referring to two different activities, but many apps are in fact developed as versions of mobile-optimized websites or simply as links to websites. In any case, mobile websites are designed to work on any mobile browser, while apps are generally designed for one or a few mobile platforms. Focusing on apps, statistics on the major stores show that games represent a great percentage of the apps used, even if, as Fig. 5 shows, maps (Google Maps) and social network apps are in the top positions (Facebook, YouTube, Google+, WeChat, Twitter), followed by messaging apps (e.g., Skype, WhatsApp).
Fig. 5 Statistics on the activities performed by users of smartphones at least once a month in various countries (source: Nielsen survey, February 2013)
In some way, software platforms also discriminate the app categories, considering that some mobile apps are preinstalled on the device. If a developer needs to mediate between the actual usage of the app and its cost, these analyses are very important. In the Italian context, for example, a report about mobile users' behavior [10] shows that mobile users have on average 27 apps installed on the device, but use only about half of them every month or every day. Most of these are free apps, and only 6% of users have mainly paid apps.
3. Strategies on apps development
An app development project is not a simple activity and requires critical decisions on the strategies to be adopted, considering the scope and the target. Apps usually manifest some peculiarities: even if they are developed with the same techniques used for other types of application software, they should be light, essential and easy to use, considering the features of mobile devices. Even if a mobile app can be distributed via a website or by e-mail, the main channel is the reference store of the mobile software platform. Since each store is proprietary, it imposes rigid policies on the content and quality of the app, influencing its development. Furthermore, there is a cost associated with each store, related to registration in the store's developer program.
The focus is therefore on the mobile players and on the different aspects of an app's development, which range from the scope to the software tools needed and the business model.
3.1 Tools needed for mobile apps development
Considering the development methods, the choice of developing a native, web or hybrid app includes the choice of tools and frameworks. Table 2 summarizes the different tools available for native, web and hybrid apps on several mobile platforms.
Table 2: Tools and frameworks for mobile development
- iOS: app lifetime cycle: develop, test, distribute, maintain. Native app development tools: Xcode (knowledge of Objective-C); the native .IPA file is deployed on the device or on the store. Web app development tools: web frameworks and JavaScript libraries (e.g., jQuery Mobile, Zepto); knowledge of HTML5, CSS3 and JavaScript. Hybrid app tools: PhoneGap, Titanium Appcelerator, RhoMobile Suite by Motorola (with the Rhodes open source Ruby-based framework to build native apps), eMobc.
- Android: app lifetime cycle: setup, develop, debug & test, distribute. Native app development tools: the Android development kit (ADT) or Android Studio (knowledge of Java); apps are packaged as an APK file. Web app development tools: web frameworks and JavaScript libraries. Hybrid app tools: PhoneGap, Titanium Appcelerator, RhoMobile Suite, eMobc.
- Windows Phone: app lifetime cycle: design, develop, distribute. Native app development tools: the Windows Phone SDK (along with Visual Studio); knowledge of C#, C++ or Visual Basic; apps are packaged as a XAP file. Web app development tools: web frameworks and JavaScript libraries. Hybrid app tools: PhoneGap, Titanium Appcelerator, RhoMobile Suite.
- BlackBerry OS: app lifetime cycle: design, develop, distribute. Native app development tools: the Momentics IDE (BlackBerry 10 Native SDK). Web app development tools: web frameworks and JavaScript libraries. Hybrid app tools: PhoneGap, Titanium Appcelerator, RhoMobile Suite.
For native app development, each mobile software platform provides a custom software development kit (SDK) consisting of a set of tools that allow developers to design, develop, build and manage the app development process. Most of them make it possible to create a packaged version of the app suited to being published on the target store. Generally, each SDK contains compilers to translate the source code written in the mobile platform's reference language (e.g., Objective-C for iOS, Java for Android, C# or C++ for Windows Phone, C/C++/Qt for BlackBerry 10) into an executable, the standard libraries with public interfaces, and other tools such as visual editors and simulators to help development. The development environment takes advantage of the mobile operating system and of the software components related to the runtime environments (e.g., the Android Dalvik virtual machine or the Windows WinRT runtime environment) and to the application frameworks (e.g., Windows Silverlight, Apple Cocoa Touch). Mobile native apps are designed for a target mobile platform using native programming languages and, because of that, they have access to all the features of the device, ensuring optimum performance.
Normally they require installation on the device and local execution, and they do not necessarily need an Internet connection (except for social network or messaging apps). Their main distribution channel is the store.
Mobile web apps are developed in accordance with the HTML5 framework, which understands the CSS3 style language and the JavaScript programming language implemented in its different dialects. There are different HTML5, CSS3 and JavaScript frameworks that simplify the development process and speed up coding.
Table 3 shows an example list of frameworks that could be used to develop web apps. We tried to make a distinction between tools that are used to create web applications for the desktop (e.g., 52framework), those optimized for a mobile environment (e.g., Zoey), and the JavaScript libraries built for mobile web apps (e.g., jQuery Mobile, Zepto.js). Focusing on mobile frameworks, some of them are simple JavaScript libraries offering the developer a set of useful built-in functions.
Table 3: Tools and frameworks for mobile development
HTML5 frameworks:
- Iio Engine - open source lightweight framework for creating HTML5 applications with JS and canvas; SDK + debugging system + cross-platform deployment engine
- LimeJS - HTML5 game framework for building games that work on touch screens and desktop browsers; created with the Closure library by Google
- 52framework - HTML5-CSS3 based framework
- Kendo UI - inclusive HTML5/JS framework for modern web and mobile app development
JS mobile libraries:
- Zepto.js - JS framework for mobile WebKit browsers, compatible with jQuery
- jQuery Mobile - touch-optimized web framework built on top of jQuery
- M-Project - mobile HTML5 JS framework to build mobile apps easily and quickly
- XUI - a super micro tiny DOM library for authoring HTML5 mobile web apps
HTML5-CSS3 mobile frameworks:
- Sencha Touch - HTML5 mobile framework to develop apps that look and feel native on iOS and Android
- Zoey - lightweight framework for developing mobile apps, built on Zepto.js, supporting modern browsers (iOS 4.x and Android 4.x)
- Jo - open source mobile application framework working on iOS, Android, Symbian, Safari, Chrome and dashboard widgets; compatible with PhoneGap
- Lungo.js - mobile framework that includes features of HTML5, CSS3 and JavaScript (iOS, Android, BlackBerry, webOS); distribution also on stores
- Junior - front-end framework for building HTML5 mobile apps that look and behave natively (uses Zepto.js and integration with Backbone.js views and routers)
- eMobc - open source framework for the generation of web, mobile and native iOS and Android apps using XML
Including one of these frameworks in the source code of the app is simple and consists, as Fig. 6 shows for the case of the Zoey framework, in putting the scripts and the built-in style sheets inside the appropriate tags of the HTML5 code (e.g., the script and link tags).
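As a minimal sketch of this kind of inclusion (using the generic file names mentioned in the text rather than the exact, versioned files shipped by a specific framework), the head section of the page simply references the framework's style sheet and scripts:

    <head>
      <!-- framework style sheet, referenced through a link tag -->
      <link rel="stylesheet" href="jquery-mobile.css">
      <!-- framework scripts, referenced through script tags -->
      <script src="jquery.js"></script>
      <script src="jquery-mobile.js"></script>
    </head>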
3.1.1 Sencha Touch
Sencha Touch is a Model View Controller (MVC) JS framework that is more complex than the single libraries seen before. Its use requires downloading the Sencha Touch SDK and Sencha Cmd, which installs other software such as Ant, Ruby, Sass and Compass. Sencha Cmd is necessary to generate the app, since it provides a set of command line tools that allow the developer to create the skeleton of the application, available through a web address. The skeleton is composed of some directories (e.g., app, which contains the models, views, controllers and stores) and files useful for the app. The app.js file is the entry point of the app: it contains the launch function where the application logic is inserted. The app is thus made up of an index.html file and the JavaScript files that contain the source code of the app. Sencha Touch comes with a set of components, objects, functions and ready-to-use code. A Sencha Touch app is web based, but the framework offers a way to include JS commands (by using the native API with the Ext.device functions) that are translated into native functions (e.g., orientation, notification) when the app is compiled. The options to be set in order to transform the app into a native one are configured in the packager.json file. An example of the structure of a Sencha Touch app is shown in Fig. 8.
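A minimal sketch of such an app.js entry point could look as follows (the application name MyApp and the panel contents are placeholders, not taken from the paper):

    // app.js - entry point generated by Sencha Cmd; the application logic
    // is inserted in the launch function
    Ext.application({
        name: 'MyApp',
        launch: function () {
            Ext.create('Ext.Panel', {
                fullscreen: true,
                html: 'Hello from Sencha Touch'
            });
        }
    });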
Fig. 6 Examples of including the Zoey and jQuery Mobile frameworks in the HTML5 code of a web app
The same figure shows an example of a web page, coded as a single or multi-page document, that uses the jQuery Mobile framework by including the JavaScript library (jquery-mobile.js) and the style sheet (jquery-mobile.css) in the header section of the web page. Once they are included, the various functions coded in the library and the rules inside the style sheet can be used directly in the body of the page. Other frameworks are more complex and also include a development environment and command line tools. Moreover, they can be targeted at specific platforms or development languages, as Fig. 7 shows.
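For instance, once the library is included, a single jQuery Mobile page is marked up with data-role attributes in the body of the document (a minimal sketch with placeholder content):

    <body>
      <div data-role="page" id="home">
        <div data-role="header"><h1>Courses</h1></div>
        <div data-role="content">
          <ul data-role="listview">
            <li><a href="#lesson1">Lesson 1</a></li>
          </ul>
        </div>
        <div data-role="footer"><h4>Educational app</h4></div>
      </div>
    </body>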
Fig. 7 A comparison between mobile frameworks and functionalities
Fig. 8 Example of the structure of a Sencha Touch app
For example, the Sencha company offers a set of products that allow developers to design (Sencha Architect), develop (Sencha Ext JS, Sencha GXT, and Sencha Touch) and deploy (Sencha Space) apps for mobile and desktop. Sencha releases some of these products with an open source license, as in the case of Sencha Touch, the framework dedicated to developing HTML5 mobile apps for iOS, Android and BlackBerry. Since we focus on open source frameworks that allow the development of a mobile app, we examine, from the point of view of the technical skills required, three products: Sencha Touch, PhoneGap/Apache Cordova and Appcelerator Titanium.
After the code has been written, following the main components of an app (views, controllers and data), the app can be packaged for native provisioning. The app packaging process is similar for iOS and Android devices; the platforms differ in the configuration files. iOS requires registration with the Apple iOS Provisioning Profiles portal in order to obtain a certificate, identify devices and obtain an App ID. Android also requires a certificate to sign the application, obtained by using the Android SDK Manager. For BlackBerry and Windows Phone apps, Sencha Touch in its latest version (2.3) introduces tools and an API for working with Cordova and PhoneGap, so that these frameworks can package the app for use on those devices. In any case, developing with Sencha Touch
requires a set of actions that are not as simple as they seem. Using the entire Sencha ecosystem probably allows easier coding by means of visual tools, but the other frameworks of the suite are quite expensive. If we focus on the open source version, technical skills are required.
3.1.2 Phonegap vs. Cordova
Adobe PhoneGap is built on top of Apache Cordova, a platform providing APIs for building native mobile applications using the HTML5 framework. The difference lies in how their packaging tools are implemented. PhoneGap provides a remote building interface, Adobe PhoneGap Build, which builds the app for each platform in the cloud.
PhoneGap requires the installation of the Node.js framework in order to install all the packages needed for app creation. The different SDKs are then required on the local system. As Fig. 9 shows, the creation and execution of an app by means of the PhoneGap command line tool requires signing up to the PhoneGap Build portal and thus having an Adobe ID account.
Fig. 9 Steps required for Phonegap/Cordova installation
The Cordova packaging tools, instead, allow developers to build apps on a local computer. In any case it is necessary to have a certificate and an App ID for each platform on whose market a developer wants to distribute the app. Apache Cordova requires the installation of Node.js (the JS platform used to build network applications), which is used to install the Cordova software locally. When the installation is completed, the Cordova tools allow the developer to create the main structure of the app inside a specific directory. It is then necessary to add the platforms needed to build the project for the target mobile platforms (e.g., iOS, Android, wp8, BlackBerry). The ability to run the commands depends on whether the local machine supports each SDK (e.g., a Windows-based machine lacks the iOS SDK). The cordova command allows the developer to build the app for the target platform (i.e., using the build option) and to test the app on an emulator or a device (i.e., using the emulate option).
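A minimal sketch of the JavaScript side of such a project, with the usual command line steps summarized as comments (names and messages are placeholders), is:

    // Typical Cordova CLI steps (run from a terminal):
    //   cordova create myapp            - create the main structure of the app
    //   cordova platform add android    - add a target platform
    //   cordova build android           - build the app for that platform
    //   cordova emulate android         - test the app on an emulator
    // www/js/index.js - the 'deviceready' event signals that Cordova's
    // device APIs can be safely used by the web code.
    document.addEventListener('deviceready', function () {
        console.log('Cordova device APIs are ready');
    }, false);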
Also in this case, the development of an app requires technical skills and knowledge of the target mobile platforms and their development kits.

3.1.3 Appcelerator Titanium
The Appcelerator company provides two main products: the Appcelerator platform, designed for enterprises, which provides a single open, cloud-based platform to deliver native cross-platform apps, and Titanium, designed for developers to create cloud-connected native apps. Titanium is a development environment that includes an open source SDK supporting the APIs of various devices and mobile operating systems, an integrated development environment (Titanium Studio), an MVC framework (Alloy) and cloud services to deliver apps. The Titanium products are provided for free, but registration on the site is required. Titanium Studio (Fig. 10) is a complete product that requires technical skills in order to be used. The software is installed on the local machine, but requires signing in to the platform in order to work.
Fig. 10 Titanium Studio
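As an illustration of the Titanium approach, the classic single-file example (a sketch only; an Alloy-based project would instead organize the code into models, views and controllers) creates a native window through the Titanium API:

    // app.js - Titanium entry point; Ti.UI calls are mapped to native widgets
    var win = Ti.UI.createWindow({
        backgroundColor: 'white'
    });
    var label = Ti.UI.createLabel({
        text: 'Hello from Titanium'
    });
    win.add(label);
    win.open();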
3.2 A comparison between some strategies and tools
A comparison between native, web and hybrid apps, based on some criteria rated on a scale of 0 to 5, is shown in Fig. 11. Native apps, HTML5 apps, PhoneGap apps and Titanium apps are compared according to the following criteria: the features of the app, the user experience and the performance, the monetization, the distribution, the updates, the cost, the platform fragmentation and the security.
Fig. 11 Evaluation of methods of analysis for application development features
As far as app features are concerned, native (score 5) and hybrid (score 4) apps offer the possibility of accessing the various resources of the device, directly or via the plugins or modules provided by cross-platform tools. HTML5-based web apps, instead (score 1), despite the rapid evolution of the technology, have not yet reached the same level as the others and do not allow developers to exploit the full capabilities of the device on which they run.
Focusing on the user experience and the performance, since the performance evaluation is done according to the graphics and the loading time of an app, native apps have the highest score (5), followed by Titanium hybrid apps (4), while the other methods have the minimum score. Note that poor performance leads to app failure, since users want an app that runs fast and with high performance; such features are, for now, possible only in native apps. The user experience of a Titanium app is very close to that of a native app, while a PhoneGap app or a web app may have performance issues related, for example, to the page loading time, perhaps because of the number of external style sheets or scripts to load.
The monetization is undoubtedly linked to the method of distribution in the store, and that is why native and hybrid apps have the maximum score compared to HTML5-based apps. The app store distribution leads to greater visibility, with a better chance of being purchased and of creating revenue for developers, although the competition is very high. A web app is distributed via the website and is therefore less visible, since its visibility is related to its availability through the search engines.
Some research (e.g., by Canalys) reveals that Google Play is the store that has achieved the highest number of downloads (51% of the total downloads carried out on the App Store, Google Play, Windows Phone Store and BlackBerry World), but the App Store is the one that receives the most revenue, obtaining 74% of the total profits of the four analyzed stores. Even if the downloading of apps is expanding, the revenue does not increase in proportion, since free downloads account for approximately 90% of all installed apps. The native app thus has a low score compared to the other solutions, since in this case the monetization strategy is imposed by the store.
Different scores are related to the updates that solve bugs or introduce improvements. The web app has the highest score, given that the update is automatic; a PhoneGap app has an average score, considering that it is coded with web technologies. Conversely, a native app and a Titanium app have a low score, considering that updates imply that the user has to re-download the app, or has to wait, since updates are also subject to the store's approval process.
The costs are related to the number of platforms to be reached and to the type of target device. The costs associated with web apps are lower (high score), considering the large number of developers with knowledge of web technologies and thus their competition. The other types of app have a cost that depends on knowledge of the tools and of the cross-platform software. The creation of an app supported by more than two platforms involves a high cost when developing natively. But also in the web-based approach there are requirements for the app, depending on the different versions of the mobile browsers installed on the mobile devices.
Platform fragmentation means that different versions of the mobile operating system may or may not enable a given functionality. Fragmentation causes few disadvantages for web-based apps, an average number for hybrid apps and many issues for native apps. Since there may be several versions of the same operating system (e.g., Android) active on different devices, development must take into account the different hardware configurations that every manufacturer adopts. A native app created for the latest version of the operating system may not work on devices that are equipped with previous versions.
A web app also manifests a fragmentation problem, which concerns the mobile web browsers, each of which has various versions and different engines that may or may not implement language-specific content. In this case, developers are obliged to test a web-based app on different devices and browsers. A PhoneGap app is able to manage this fragmentation, since it supports multiple software platforms at the same time. Conversely, the extension of a product created with the Titanium framework to all these operating systems would be extremely expensive.
Finally, the security aspect presents a high score for a native and a Titanium app and a low score for a PhoneGap app or a web-based app. The source code of a web-based app is easily recoverable, and there are several ways to attack the security of a web app. On the other hand, there are techniques to secure a web app. HTML5 provides the ability to cache data within the browser and, considering that the app manages data on the device, these data should be adequately protected and encrypted. A native app can use the native APIs of its operating system to encrypt the stored data, but it is not possible to implement such techniques with web languages alone.
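As a small sketch of the caching mechanism referred to here, an HTML5 web app can store data in the browser through the Web Storage API; the example data are placeholders, and no encryption is applied, which is exactly the limitation discussed above:

    // Data cached by a web app through HTML5 Web Storage is kept as plain text
    var profile = { student: 'user01', lastLesson: 3 };        // example data only
    localStorage.setItem('profile', JSON.stringify(profile));  // cached in the browser
    var cached = JSON.parse(localStorage.getItem('profile'));  // read back later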
As widely stated, the programmer is faced with a series of choices. The final decisions about the tools to be adopted are a compromise between the features needed and the skills available.
4. Conclusions
Developing a mobile app is not as simple as it appears and requires a careful analysis of the mobile ecosystem, together with some decisions about the development strategies. Choosing a cross-platform tool that allows an app to be distributed on more than one mobile operating system requires technical knowledge and skills comparable to those required for the development of a native app. This is particularly true when we focus on open source tools. Solutions like Sencha Touch or Cordova can really help to develop various versions of the same app targeted at different mobile platforms, but they require time and money. Moreover, the mobile world is an evolving and interesting market, but the revenue for an app developer is not guaranteed. Even in a research environment such as the one where we work, it is necessary to weigh the costs related to the development and distribution of an app against the return in terms of user satisfaction and knowledge of our activities.
References
[1] F. Sheares, Power Management in Mobile Devices, Newnes, Elsevier, 2008.
[2] L. Columbus, 2013 Roundup of Smartphone and Tablet Forecasts and Market Estimates, Forbes, Jan. 2013. At url: http://www.forbes.com/sites/louiscolumbus/2013/01/17/2013-roundup-of-mobility-forecasts-and-market-estimates/
[3] S. Pastore, The role of open web standards for website development adhering to the one web vision, International Journal of Engineering and Technology (IJET), Vol. 2, No. 11, November 2012, ISSN 2049-3444.
[4] M. Pilgrim, HTML5: Up and Running, O'Reilly, 2010.
[5] S. Pastore, Mobile platforms and apps cross-platform development tools, International Journal of Engineering Research and Application (www.ijera.com), ISSN 2248-9622, Vol. 3, Issue 6, Nov-Dec 2013, pp. 521-531.
[6] T. Pierce, Appreneur: Secrets to Success in the App Store, Apress, 2013.
[7] T. Ferguson, App store revenue, downloads continue upward trajectory - Canalys, Apr. 2013, at url: Mobileworldlive.com
[8] H. Koekkoek, Publication: How the most successful apps monetize their user base, Distimo.com publication, March 2013.
[9] C. O'Connor, Evolution of the app business model, Boston.com, Aug. 2013.
[10] L. Lazzarin, Il Mobile Internet italiano vola a 1,25 miliardi, Content e app a 620 milioni, wireless4innovation magazine, Jun. 2013.

Serena Pastore has been a researcher/technologist in ICT since 2007 at the Italian National Institute of Astrophysics (INAF) and a teaching professor at some Italian universities. Her current research interests are Internet and web standards with a focus on the mobile environment, distributed network paradigms (grid and cloud), and open source products.
Data Mining Awareness and Readiness in Healthcare
Sector: a case of Tanzania
Salim Amour Diwani1, Anael Sam2
1 Information Communication Science and Engineering, Nelson Mandela African Institution of Science and Technology, Arusha, P.O. Box 447, Tanzania
diwania@nm-aist.ac.tz
2 Information Communication Science and Engineering, Arusha, P.O. Box 447, Tanzania
Anael.sam@nm-aist.ac.tz
Abstract
Globally, the potential application of data mining in healthcare is great, because the healthcare sector is rich with tremendous amounts of data, and data mining is becoming very essential. The healthcare sector collects huge amounts of data on a daily basis. Transferring data into secure electronic medical health systems will save lives and reduce the cost of healthcare services, as well as allowing early discovery of contagious diseases thanks to advanced data collection. This study explores the awareness of, and readiness to implement, data mining technology within the Tanzanian public healthcare sector. The study is triangulated: it adopted an online survey using Google Docs and an offline survey based on presentations at different hospitals, where questionnaires were distributed to healthcare professionals. The issues explored in the questionnaire included the awareness of data mining technology and the level of understanding of, perception of and readiness to implement data mining technology within the public healthcare sector. In this study the data are analyzed using the SPSS statistical tool.
Keywords: Data Mining, KDD, SPSS, Healthcare, ICT, WHO, Dynamic Morphology, NM-AIST, NHIF, Paradigms, Positivists, HIS
1. Introduction
In recent years there has been a tremendous change in the adaptation and implementation of new technologies in both the private and the public sector. Data mining is an emerging technology used in different types of organizations to improve the efficiency and effectiveness of business processes. Data mining techniques analyze large data sets to discover new relationships between the stored data values. Healthcare is an information-rich industry, warehousing large amounts of medical data. The healthcare industry finds it difficult to manage and properly utilize the huge amount of medical data collected through different medical processes. The stored medical data collection is an asset for healthcare organizations if properly utilized, and the healthcare industry uses data mining techniques to fully exploit the benefits of the stored medical datasets. This study investigates the readiness to adopt, explore and implement healthcare data mining techniques in Tanzania. The healthcare industry is growing rapidly, and healthcare expenditures are also increasing very fast. Various healthcare organizations worldwide, such as the World Health Organization (WHO), are trying to provide quality healthcare treatments at cheaper costs. Government sectors can benefit from the use of healthcare data mining technology in many ways, for example by providing self-healthcare treatments, using scientific medical knowledge to provide healthcare services to everyone, minimizing the waiting time for medical treatments, minimizing the delay in providing medical treatment, and providing various healthcare treatments based on patients' needs, symptoms and preferences.

2. Problem Description and Objectives
In Tanzania there has been no literature that has discussed the implementation of data mining technology within healthcare in the public sector. Various studies have shown that new technologies have become popular and have been accepted by many people worldwide. This study seeks to address this problem by investigating the knowledge about, and readiness with regard to, the implementation of data mining technology in healthcare in the public sector. Prior studies of data mining readiness and implementation have been undertaken in the private sector. Evidence suggests that personnel within private sector firms are aware and ready
to implement this technology. Studies in the areas of telecommunications, banking and insurance indicate that there is a level of optimism and innovativeness among employees, indicating the potential to adopt data mining techniques. Readiness can be seen in terms of the adoption of, or the intent to adopt, data mining technologies [1],[2],[3],[4].
3. Data Mining
Data mining is the process of extracting useful information from large and mostly unorganized data banks. It is the process of performing automated extraction and generating predictive information from large data banks. It enables an organization to understand current market trends and to take proactive measures to gain maximum benefit from them. As shown in Fig. 1, data mining is a step in the knowledge discovery in databases (KDD) process and refers to the algorithms that are applied to extract patterns from the data. The extracted information can then be used to form a prediction or classification model, identify trends and associations, refine an existing model, or provide a summary of the database being mined [5].
Fig. 1 KDD Process
4. Data Mining in Healthcare
The healthcare industry is growing abundantly, and healthcare expenditures are also increasing tremendously. Healthcare organizations worldwide, such as the National Health Insurance Fund (NHIF) in Tanzania, are trying to provide quality healthcare at a cheaper cost. Healthcare organizations are adopting new technologies which will help in the early detection of life-threatening diseases and lower the medical cost. Data mining in healthcare is useful for segmenting patients into groups, identifying patients with recurring health problems, finding relations between diseases and symptoms, curbing treatment costs, predicting medical diagnoses, and helping physicians discover which medications are most effective and efficient for patients. Healthcare data mining uses dynamic morphology for mining healthcare data. Dynamic morphology is the process of extracting healthcare data in order to discover new knowledge and relationships. Referring to Fig. 2, if we want to mine tuberculosis (TB) data using dynamic morphology, we need to follow these steps:
- analyze the stored medical data for TB;
- identify the causes or symptoms of TB;
- transform the discovered knowledge using one of the healthcare data transformation techniques to explore extra information;
- store the results of the knowledge transformation for further analysis.
Fig. 2 Example of Dynamic Morphology for Tuberculosis

5. Data Mining Readiness
In order to accept, and be ready to apply or adopt, data mining techniques, the major issue is the willingness and capability to accept the technology. Human resources, primarily their readiness to accept data mining technology, can be seen as a major obstacle in adopting a new technology within any organization [3]. In this study, whether people use such a technology or not, and whether they are ready to accept a technology or not, are the major concerns. Hence, in order to assess the readiness towards data mining technology, issues like clarity of the business
strategy, user skills and experience, organization and culture, technology, and data quality need to be considered.
6. Research Design and Methodology
Figure 3: Modeling the research design [6]
The figure above shows the study design adopted in this study. The design consists of the research questions which need to be answered and analyzed, the participants, the location of the study and the time horizon, and the purpose and justification of the study, before the research paradigm and approach are discussed. Each part of the research design is discussed in detail below.

6.1 Participants
The participants in this study are professionals working in the medical field in the Tanzanian public sector. The respondents selected are physicians, medical doctors, dentists, nurses, pharmacists and members of hospital boards of directors. In this study, individuals working in the medical sector were surveyed to address the basic research problem of the level of awareness and readiness to adopt data mining techniques. The study also investigates whether people are ready or not to adopt data mining techniques in healthcare. The questionnaires were prepared and made available in English through Google Docs and posted to the e-mail addresses of individuals working in the medical field. The questionnaires were also distributed manually in hospitals and research centers after presentations which explained the study design in more detail.

6.2 Study Settings and Time Horizon
The settings selected for the study were hospitals in Dar es Salaam, Tanga, Zanzibar, Arusha and Dodoma. The study was undertaken via presentations and the use of a survey in the local hospitals during their breakfast meetings. The questionnaire was distributed among the respondents, who were later contacted again. The process of distributing and collecting the questionnaires began in January 2013 and ended in March 2013.

7. Purpose and Approach
The study aims to gain an understanding of the utilization of data mining technology in the Tanzanian public healthcare sector, and of the factors affecting how data mining technology can be utilized, such as the level of awareness and readiness regarding data mining technology in the public healthcare sector. Little or nothing has been published on data mining technology in Tanzania, and there are very few companies which implement and use data mining technology in their day-to-day activities, such as in the decision-making process. This study aims to discover the level of awareness and readiness regarding data mining technology in the healthcare sector and the willingness to accept or reject its implementation. The study will also assist in filling this gap in the literature.
7.1 Paradigm
All researchers have different beliefs and ways of looking at and interacting with their environment. As a result, the ways in which research is conducted vary. However, there are certain standard rules that govern researchers' actions and beliefs. A paradigm is "a broad view or perspective of something" [7]. This definition reveals how research can be affected and guided by a certain paradigm: "paradigms are patterns of beliefs and practices that regulate inquiry within a discipline by providing lenses, frames and processes through which investigation is accomplished" [8]. This study utilized a positivist approach to explore social reality, such as attitudes of readiness toward data mining and satisfaction with the healthcare system. The concept is based on the philosophical idea of emphasizing observation of, and the reasons behind, participants' actions in adopting data mining technology, for example their knowledge and understanding of data mining and how they think about technological advancement in their workplace.
7.2 Approach
This research was conducted in an African developing country. Consequently, issues such as national culture, the form of government and the politics of protest are expected to shape the nature of this study in a different way compared to a study conducted in the context of developed countries. Hence, this study utilized a triangulation approach to explore the survey data, which provides a basic understanding of what is happening in the industry with regard to data mining technology, such as data mining awareness, attitudes towards data mining technology and any software related to data mining used within the healthcare sector, as well as whether healthcare professionals are willing to adopt data mining technology and able to upgrade their skills.
7.3 Instrument Design
A questionnaire was designed as the quantitative part of this study using Google Docs. There are no wrong or right answers in the questionnaire: what we try to measure is the level of awareness of data mining technology in healthcare and whether healthcare practitioners are willing to adopt a new technology or not. The design of the questionnaire adopted several sources of data, including previous instruments developed by other researchers and literature related to data mining adoption. The questions were closed-ended, with ordered and unordered responses, using a Likert-type scale as the level of measurement (for example, 1 = strongly disagree and 5 = strongly agree). An itemized rating scale was also developed for a few questions. For example, a scale from 1 = poor to 5 = excellent was used for the question asking respondents to rate the actual performance on factors important for a quality healthcare system. A similar scale, from 1 = seldom to 5 = very often, was used for the question asking respondents to indicate how frequently healthcare data were used in particular areas. The questionnaire was pre-tested by sending it to the e-mail addresses of NM-AIST colleagues, who reviewed the questions critically and gave feedback about the content and form, how to make it more appropriate and where to send it, in order to obtain valuable feedback.
7.4 Data Collection
A survey was conducted with 400 respondents from different parts of the country, including Dar es Salaam, Arusha, Zanzibar, Tanga and Dodoma. In Tanzania there are five medical universities in operation, with an approximate intake of 500 students, and there are more than 6000 medical doctors scattered around the country [8]. Structured questionnaires were designed and an online survey was conducted to reach the targeted sample size of 400 respondents out of the entire population of at least 6000 medical specialists. Different medical specialists (surgeons, gynecologists, physicians), doctors, pharmacists, nurses and information system officers were involved in the survey.

8. Analysis and Findings
Referring to Figures 4, 5 and 6 below, the awareness and readiness of the respondents were assessed by asking whether they knew any other health information systems; other questions were specific to data mining tools. 60% (241) of the respondents knew at least one health information system excluding data mining tools, while 40% (158) did not know any health information system. Among them, 53% (211) of the respondents had used one at least once, while 47% (189) had never used any health information system. From the organizations' perspective, 48% (190) of the organizations where respondents had worked had experience with a health information system excluding data mining tools, while 50% (202) of the organizations had no experience with health information systems and 2% (8) used other systems.
In order to gain more understanding of the respondents' readiness to adopt technology, particularly data mining, the following statements were provided to the respondents: whether technology gives the respondent greater control over his/her daily work activities; to what extent they agree that technologies make products and services much more convenient; preference for using the most advanced technology available; the efficiency of technology in the respondent's occupation; keeping up with the latest technological developments in his/her areas of interest; having fewer problems than other people in making technology work for them; flexibility in learning about new and different technologies; the usefulness of technology for accomplishing tasks; and willingness to use data mining technology for analyzing medical data in addition to current methods. All these statements had to be rated on a scale comprising strongly agree, agree, neutral, disagree and strongly disagree. Strongly disagree and disagree were placed by the researcher in the negative response category, and agree and strongly agree in the positive category. All the results are presented in the charts below.
Figure 4: Positive response in percentage supporting the factor
Figure 5: Negative response in percentage not supporting the factor
It is observed that a new technology like data mining tools is considered essential to facilitate personal performance, although users will catch up with it slowly because it will not be easy for them to adapt to it quickly. Also, the majority of the respondents stayed neutral about a new technology like data mining; this is due to their limited knowledge of data mining tools.
Figure 6: Neither positive nor negative toward the factor
9. Reliability
In this section the reliability of the data used in the study is discussed. Referring to Tables 1, 2 and 3 below, a reliability test was conducted using Cronbach's alpha to measure the internal consistency, i.e., the average correlation, of the items. It ranges from 0 to 1, and a value of 0.7 or above is usually considered reliable. In this study the alpha obtained after analysis in SPSS was 0.974, which means that the correlation of the research questions was most appropriate. The key variables used in the statistical analysis are data mining readiness and awareness.
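For reference, the standard formula for Cronbach's alpha (not reproduced in the original text) is, in LaTeX notation:

    \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)

where k is the number of items, \sigma^{2}_{Y_i} is the variance of item i, and \sigma^{2}_{X} is the variance of the total score.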
Table 1: Health Information System
- Do you know any HIS in Tanzania? (scale mean if item deleted: 1495.5556; scale variance if item deleted: 2614954.365; corrected item-total correlation: .304; Cronbach's alpha if item deleted: .974)
- Have you ever used any HIS? (scale mean if item deleted: 1495.5000; scale variance if item deleted: 2614827.802; corrected item-total correlation: .307; Cronbach's alpha if item deleted: .974)
Table 2: Data Accuracy
- The data recorded in your system conforms to the actual value (scale mean if item deleted: 1489.5000; scale variance if item deleted: 2594194.972; corrected item-total correlation: .473; Cronbach's alpha if item deleted: .974)
- The data recorded is up to date (scale mean if item deleted: 1489.4074; scale variance if item deleted: 2594545.454; corrected item-total correlation: .469; Cronbach's alpha if item deleted: .974)
- Complete: all relevant values for a certain variable are recorded (scale mean if item deleted: 1487.9815; scale variance if item deleted: 2597785.792; corrected item-total correlation: .366; Cronbach's alpha if item deleted: .974)
- Consistent: the representation of the data value is the same in all classes (scale mean if item deleted: 1489.8519; scale variance if item deleted: 2593776.166; corrected item-total correlation: .477; Cronbach's alpha if item deleted: .974)

Table 3: Medical Data
- Please indicate how frequently you use medical data from the HIS in each of the following areas: decision making and data analysis (scale mean if item deleted: 1465.6667; scale variance if item deleted: 2548146.038; corrected item-total correlation: .542; Cronbach's alpha if item deleted: .974)
- Please indicate how frequently you use medical data from the HIS in each of the following areas: performance measurement and planning (scale mean if item deleted: 1462.0370; scale variance if item deleted: 2536410.263; corrected item-total correlation: .608; Cronbach's alpha if item deleted: .973)
- Please rate the actual performance on each of those factors by your organization (scale mean if item deleted: 1438.9630; scale variance if item deleted: 2568839.206; corrected item-total correlation: .374; Cronbach's alpha if item deleted: .974)
10. Discussion
Data mining in healthcare helps to improve healthcare standards. Data mining technology helps healthcare organizations achieve various business goals, such as lowering costs and increasing revenue generation while maintaining high-quality healthcare. Data mining in healthcare is identified as the process of extracting and analyzing healthcare data and presenting them in a format that allows the generation of information and the creation of knowledge, through the analysis of information, in order to enhance the decision-making process within public sector organizations.
The major research problem addressed in this study was that in Tanzania there is a lack of knowledge about the status of the implementation
of data mining techniques within healthcare in the public sector and the benefits to be derived by implementing such technologies. The study sought to increase the understanding of data mining technology within the Tanzanian public sector. The areas of interest within this study were data mining awareness and the readiness to accept or reject the technology in the public healthcare sector, the reasons for accepting or rejecting the technology, and the impacts of data mining technologies.
References
[1] Berger, C. (1999). ‘Data mining to reduce churn’.
Target Marketing, 22(8), 26.
[2] Chye, K. H., & Gerry, C. K. L. (2002). ‘Data Mining
and Customer Relationship Marketing
in the Banking
Industry’. Singapore Management Review, 24(2), 1-27.
[3] Dahlan, N., Ramayah, T., & Hoe, K. A. (2002). Data
Mining in the Banking Industry: An Exploratory Study.
The proceedings of the International Conference 2002.
Paper presented at the ‘Internet Economy and Business’,
Kuala Lumpur, 17-18th September 2002.
[4] Chun, S.-H., & Kim, S. H. (2004). ‘Data mining for
financial prediction and trading: application to single and
multiple markets’. Expert System with Applications, 26,
131-139.
[5] Fayyad, U., Piatetsky-Shapiro, G., & Smyth, P.
(1996). ‘The KDD process for extracting useful
knowledge from volumes of data’. Association for
Computing Machinery. Communications of the ACM,
39(11), 27-34.
[6] Cavana, R. Y., Delahaye, B. L., & Sekaran, U. (2001). Applied Business Research: Qualitative and Quantitative Methods. Sydney and Melbourne: John Wiley & Sons Australia, Ltd.
[7] Taylor, B., Kermode, S., & Roberts, K.(2007).
Research in nursing and health care: Evidence for
practice. (3rd ed.). South Melbourne, VIC: Thomson.
[8] Weaver, K., & Olson, J.K.(2006). Understanding
paradigms used for nursing research. Journal of Advanced
Nursing, 53(4), 459-469.
11. Conclusion
This paper discussed the awareness and readiness to adopt data mining techniques in the healthcare sector in Tanzania. The statistical results of the questionnaire were collected and analyzed; the issues covered in the questionnaire were the healthcare information system, data mining readiness, data mining technology implementation, and the perception of data mining. The descriptive statistics showed that many healthcare professionals in Tanzania are not aware of data mining technology and have never used it before; nevertheless, they are willing to accept and adopt data mining technology if they get an opportunity to do so. Respondents identified the influencing factors for utilizing data mining and the reasons for not utilizing it. Each of the major research questions was analyzed and the results were given. Cronbach's Alpha was used to test the internal consistency of the questions; the alpha was at least 0.7, which means the questions were reliable. All the questions used Likert scales; responses coded as strongly agree and agree were considered to measure a positive response, while disagree and strongly disagree were considered to measure a negative response.
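As an illustration of the reliability statistic referred to above, the following minimal Python sketch computes Cronbach's alpha from a respondents-by-items matrix of Likert scores; the data shown are hypothetical and this is not the authors' analysis script.

import numpy as np

def cronbach_alpha(scores):
    # scores: respondents x items matrix of Likert ratings
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()      # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of the total score
    return (n_items / (n_items - 1.0)) * (1.0 - item_var / total_var)

responses = np.array([[4, 5, 4, 4],   # hypothetical answers from 5 respondents on 4 items
                      [3, 4, 3, 4],
                      [5, 5, 4, 5],
                      [2, 3, 2, 3],
                      [4, 4, 5, 4]])
print(round(cronbach_alpha(responses), 3))   # values of about 0.7 or more are usually taken as reliable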
The findings showed that the majority are aware of the health information system but have limited knowledge of data mining technology. Regarding the performance of the health information system, the majority of respondents strongly agreed that the system should be easy to use, able to validate data, provide adequate and sufficient documentation for employees to follow, and be easy to modify and upgrade. They also suggested that sufficient financial resources and effective, adequate staffing are required in order to implement data mining in developing countries like Tanzania.
Salim Amour Diwani received his BS degree in computer
science at Jamia Hamdard University, New Delhi, India in
2006 and his MS degree in computer science at Jamia Hamdard University, New Delhi, India in 2008. He is currently a PhD scholar in Information and Communication Science and Engineering at Nelson Mandela African
Institution of Science and Technology in Arusha, Tanzania.
His primary research interests are in Data Mining, Machine
Learning and Database Management Systems.
Acknowledgement
I would like to thank the Almighty and my family for their constant support during this work. I would also like to thank my supervisors, Dr. Anael Sam from Nelson Mandela African Institution of Science and Technology and Dr. Muhammad Abulaish from Jamia Milia Islamia, for their guidance and valuable support during this work.
Digital Organism Simulation Environment (DOSE): A Library
for Ecologically-Based In Silico Experimental Evolution
Clarence FG Castillo1, and Maurice HT Ling 2
1 School of Information Technology, Republic Polytechnic, Singapore, Republic of Singapore
clarence.castillo_33@yahoo.com
2 School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore, Republic of Singapore
Department of Zoology, The University of Melbourne, Parkville, Victoria 3010, Australia
mauriceling@acm.org
Abstract
Testing evolutionary hypothesis in biological setting is
expensive and time consuming. Computer simulations of
organisms (digital organisms) are commonly used proxies to
study evolutionary processes. A number of digital organism
simulators have been developed but are deficient in biological
and ecological parallels. In this study, we present DOSE
(Digital Organism Simulation Environment), a digital organism
simulator with biological and ecological parallels. DOSE
consists of a biological hierarchy of genetic sequences, organism,
population, and ecosystem. A 3-character instruction set that does not take any operand is used as the genetic code for the digital organisms, paralleling the 3-nucleotide codon structure in naturally occurring DNA. The evolutionary driver is simulated by a genetic algorithm. We demonstrate the utility of DOSE in examining the effects of migration on heterozygosity, also known as local genetic distance.
Keywords: Digital Organisms, Simulation Environment,
Ecology Simulation, Migration, Genetic Distance.
1. Introduction
Nothing in Biology makes sense except in the light of
Evolution -- Theodosius Dobzhansky [1]
Nothing in Medicine makes sense, except in the light of
Evolution -- Ajit Varki [2]
Evolution is a fundamental aspect of biology. However,
testing evolutionary hypotheses is a challenge [3] as it is
highly time consuming and expensive, if not impossible.
Long generation time associated with most species makes
it virtually impossible to test evolutionary hypotheses in a
laboratory setting. The longest on-going laboratory
experiment in evolutionary biology was initiated by Richard Lenski in 1988 [4], using a common intestinal bacterium, Escherichia coli, which has one of the shortest generation times. Other experimental evolution experiments [5-7], such as adaptation to salt and food additives, have also used E. coli due to its short generation time. Despite this, it is generally prohibitively expensive to examine the genetic makeup of each bacterium using experimental techniques such as DNA sequencing. At the same time, such examination is destructive in nature and the examined bacterium cannot be revived for further evolutionary experiments.

A means around these limitations is to use models of bacteria or higher organisms, rather than real biological organisms. These modeled organisms are known as artificial life or digital organisms (DOs), in which organisms are simulated, mutated, and reproduced in a computer [8]. Although a digital organism is not a real biological organism, it has the characteristics of a real living organism, but in a different substrate [9]. Batut et al. [3] argue that DOs are a valuable tool for experimental evolution despite their drawbacks, as repeated simulations can be carried out with all events recorded. Furthermore, only computational time is needed to study every organism, which is analogous to sequencing every organism, and this process is not destructive in a biological sense as the studied organism can be "revived" for further simulations.
The main tool needed for using DO is a computational
platform to act as a simulation environment. A number of
DO platforms have been developed [10]. One of the early
simulators is Tierra [11], where each organism is an
evolvable, mating and reproducing program competing
for computing resources, such as CPU cycles and memory
space. Hence, Tierra’s programs can be seen as an
executable DNA. A major drawback of Tierra is that the
DOs are not isolated from each other as all DOs shared
and compete for the same memory space. Avida [12]
simplified Tierra [11] by enabling each DO to run on its
own virtual machine; thus, isolating each DO, resulting in
CPU cycle being the main competing resource. As Tierra
[11] and Avida [12] used bytecodes as the genetic
constituents for DO, it is difficult to examine parameters
such as heterozygosity and genetic distance, which are commonly used in population genetics [13], from HIV [14] to human migration [15]. Mukherjee et al. [16] define heterozygosity as the variation within a population, while genetic distance is the variation between populations.
local genetic distance or within group genetic distance.
Aevol [3] used a binary string as genetic material and had
incorporated concepts from molecular biology; such as
genes, promoters, terminators, and various mutations;
into its design. This allowed for genetic distance to be
measured. However, aevol [3] is designed for simulating
bacterial genetics and evolution. Hence, ecological
concepts, such as migration and isolation, are not
incorporated.
Previously, our group had designed a genetic algorithm
(GA) framework conforming to biological hierarchy
starting from gene to chromosome to genome (as
organism) to population [17], which may help
interpreting GA results to biological context. Further
work [18, 19] by our group had formalized a 3-character
genetic language to correspond to the 3-nucleotide codon in naturally occurring DNA and incorporated a 3-dimensional "world" consisting of ecological cells, in order to give it parallels to biological DNA and a natural ecosystem.
Here, we present a Python DO simulation library, Digital
Organism Simulation Environment (DOSE), built on our
previous work [17-19]. We then illustrate the use of
DOSE to examine the effects of migration on
heterozygosity (local genetic distance) where DOs can
only mate within their own ecological cell.
2. Methods
2.1 DOSE Library
The basis of DOSE is a simulation driver and
management layer built on top of 4 different sets of
components, which had been previously described [17-19]. The 4 sets of components are briefly described as follows: firstly, DOSE consists of a set of objects representing a chromosome, organism, and population [17]. An organism can consist of one or more chromosomes to make up its genome, and a population consists of one or more organisms. Secondly, a GA acts as the evolutionary driver acting on the chromosomes. Thirdly, the Ragaraja interpreter [19] is used to read the chromosomes and update the cytoplasm (cell body). This resembles the translation of genes into proteins in a biological context; hence, the Ragaraja interpreter [19] can be seen as the transcription and translation machinery. Lastly, a 3-dimensional world [18] consisting of ecological cells allows the mapping of DOs onto the world.

Each simulation is defined by a set of parameters and functions, which are used by the simulation driver and management layer. It constructs and initializes the DOs, maps the DOs onto the world, runs the simulation from the first generation to the maximum generation as defined in the parameters, and reports the events into a text file or database as required. After DO initialization, the current simulation driver simulates each organism and ecological cell sequentially [18].

The following is the core set of 18 parameters available in DOSE to cater for various uses:
- population_names: provides the names of one or more populations
- population_locations: defines the deployment of population(s) at the start of the simulation
- deployment_code: defines the type of deployment scheme
- chromosome_bases: defines allowable bases for the genetic material
- background_mutation: defines background mutation rate
- additional_mutation: defines mutation rate on top of background mutation rate
- mutation_type: defines a default type of mutation
- chromosome_size: defines the initial size of each chromosome
- genome_size: defines the number of chromosome(s) in each organism
- max_tape_length: defines the size of cytoplasm
- interpret_chromosome: defines whether phenotype is to be simulated
- max_codon: defines the maximum number of codons to express
- population_size: defines the number of organisms per population
- world_x, world_y, world_z: defines the size of the world in terms of numbers of ecological cells
- maximum_generations: defines the number of generations to simulate
- ragaraja_instructions: list of recognized codons
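As an illustration only, the parameter set for a run such as Example 1 in Section 2.2 might be expressed as a plain Python dictionary keyed by the names listed above; the values marked as assumed are not taken from the DOSE documentation, and the exact structure expected by the library may differ.

example_parameters = {
    "population_names": ["pop_01"],
    "population_locations": [[(x, y, 0) for x in range(5) for y in range(5)]],  # 25-cell flat world
    "deployment_code": 3,                 # assumed code for deploying to all listed cells
    "chromosome_bases": ["0", "1"],       # binary chromosome
    "background_mutation": 0.001,         # 0.1% point mutation rate
    "additional_mutation": 0.0,
    "mutation_type": "point",
    "chromosome_size": 5000,
    "genome_size": 1,
    "max_tape_length": 200,               # cytoplasm size (assumed value)
    "interpret_chromosome": False,        # phenotype not simulated in this illustration
    "max_codon": 2000,                    # assumed value
    "population_size": 50,
    "world_x": 5, "world_y": 5, "world_z": 1,
    "maximum_generations": 1000,
    "ragaraja_instructions": [],          # codons to recognize if phenotypes are interpreted
}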
The following is the core set of 12 functions definable in
DOSE to cater for various uses; of which, Functions 2 to
11 were previously defined [18]:
1. deployment_scheme: initial deployment of organisms
into the ecosystem
2. fitness: calculates the fitness of the organism and
returns a fitness score
3. mutation_scheme: mutation events in each
chromosome
4. prepopulation_control: population control events
before mating event in each generation
5. mating: mate choice and mating events
6. postpopulation_control: population control events
after mating event in each generation
7. generation_events: other irregular events in each
generation
8. organism_movement: short distance movement of
organisms within the world, such as foraging
9. organism_location: long distance movement of
organisms within the world, such as flight
10. ecoregulate: events to the entire ecosystem
11. update_ecology: local environment affecting entire
ecosystem
12. update_local: ecosystem affecting the local
environment
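To suggest how two of these user-definable functions might look for the binary chromosomes used in the examples below, a minimal sketch follows; the function names mirror the list above, but the argument layout (an organism object holding a genome list of mutable chromosomes) is an assumption rather than the documented DOSE interface.

import random

def fitness(organism):
    # Hypothetical fitness: number of '1' bases on the first (and only) chromosome.
    chromosome = organism.genome[0]
    return sum(1 for base in chromosome if base == "1")

def mutation_scheme(organism, background_mutation=0.001):
    # Point mutation on a binary chromosome: each base is inverted with the
    # background mutation probability (0.001 on 5000 bases gives roughly 5 mutations).
    chromosome = organism.genome[0]
    for position, base in enumerate(chromosome):
        if random.random() < background_mutation:
            chromosome[position] = "0" if base == "1" else "1"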
2.2 Simulations
Two sets (Example 1 and Example 2) of three simulations
with different migration schemes; no migration, adjacent
migration, and long migration; were developed, giving a
total of six simulations. Each simulation consisted of a 25-cell flat world with 50 organisms per cell, and mating could
only be carried out between organisms within the same cell.
As a result, each cell resembled an isolated landmass. One
binary chromosome of 5000 bases formed the genetic
material for each organism. Only point mutation was used
and the two sets of simulation differ by point mutation rates.
In the first set of 3 simulations (Example 1), mutation rate
was set at 0.001, resulting in 5 point mutations per
generation. In the second set of simulations (Example 2),
mutation rate was set at 0.002, effectively doubling the
occurrence of point mutations per generation compared to
Example 1. Since the chromosomes were binary, mutation
events were limited to inverting the base from one to zero
and vice versa. Mutation scheme was identical in all 3
migration schemes. In the no migration simulation, organisms were not allowed to cross cell boundaries throughout the simulation, in order to simulate complete isolation. In the adjacent migration simulation, 10% of the organisms from a cell could migrate to one of its 8 neighbouring cells within a generation, in order to simulate short-distance migration patterns, such as foraging or nomadic behavior. In the long migration simulation, 10% of the organisms from a cell could migrate to any other cell within a generation, in order to simulate long-distance migration patterns, such as flight. Each simulation was performed for 1000 generations.

2.3 Data Analysis

Within-cell analyses were performed. Hamming distance [20] was calculated between the chromosomes of two organisms and used as the local genetic distance. 50 random pairs of organisms within a cell were selected for pairwise local genetic distance calculation, and an average heterozygosity was calculated for each cell in every generation. Within a generation, the mean and standard error of heterozygosity were calculated from the average local genetic distances of the 25 cells for each simulation.

3. Results

In this study, we present a Python DO simulation library, Digital Organism Simulation Environment (DOSE), built on our previous work [17-19]. We first briefly outline a typical use of a DO simulation platform such as DOSE before illustrating 2 examples examining the effects of migration on heterozygosity, given that the DOs can only mate within their own ecological cell.

3.1 Typical use of an in silico evolutionary platform
Similar to other in silico evolutionary platforms such as
aevol [3], the basic output of DOSE is a set of time series
data with generation count as the timing variable. These
can include organism statistics; such as fitness, and
genome size; or population statistics; such as average
fitness, and genetic distance. Further analyses can be
carried out from these results. For example, if the parent-child (also known as ancestry) relationships are recorded, the lineage of beneficial mutations can be traced using genealogical analysis [21].
or more evolved populations of digital organisms, such as
measuring mutational robustness using a competition [22,
23], may be performed. These competition assays may be
used to model biological processes, such as parasitism
[24].
A typical in silico evolutionary experiment consists of
modifying one or more parameters, such as mutation rate,
and/or functions, such as mating scheme, in the platform,
and examining the time series data emerging from one or
more simulations. Batut et al. [3] highlighted that
fortuitous events can be distinguished from systematic
trends by comparing data from replicated simulations. It
is also possible to revive one or more simulations from
stored data and that can be mixed to simulate interactions
between groups of organisms [25].
3.2 Example 1: Testing the effects of migration on
heterozygosity
DOSE is designed as a tool to examine evolutionary
scenarios on an ecological setting. In this example, we
examine the effects of migration, simulated by movement
of organisms to adjacent or across non-adjacent ecological
cells.
Hamming distance [20], which had been used as distance
measure for phylogenetic determination between viruses
[26, 27], was used in this study as a measure of
heterozygosity. As chromosomal lengths were identical in
all organisms throughout the simulation, Hamming
distance represented the number of base differences
between any two organisms.
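A minimal sketch of this heterozygosity measure is shown below, assuming each chromosome is a sequence of '0'/'1' bases of identical length; it is illustrative only and not the analysis code used in this study.

import random

def hamming_distance(chromosome_a, chromosome_b):
    # number of positions at which two equal-length chromosomes differ
    return sum(1 for a, b in zip(chromosome_a, chromosome_b) if a != b)

def average_heterozygosity(cell_population, pairs=50):
    # average pairwise Hamming distance over randomly sampled pairs within one cell
    distances = [hamming_distance(*random.sample(cell_population, 2)) for _ in range(pairs)]
    return sum(distances) / len(distances)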
Our results show that the average heterozygosity for no
migration and long migration across all 1000 generations
for all 25 ecological cells is similar (p-value = 0.989;
Table 1). The average heterozygosity for adjacent
migration is marginally lower but not significantly
different from that of no migration (p-value = 0.932) or
long migration (p-value = 0.921). The average spread (standard error) of heterozygosity for no migration and long migration is also similar (p-value = 0.264; Figures 1A and 1C). However, the spread of heterozygosity for adjacent migration is significantly larger (p-value < 4.3 × 10^-26), especially after 500 generations (Figure 1B).
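The tests behind these p-values are not specified in the text; as a hedged illustration, comparisons of this kind could be carried out on the per-generation heterozygosity series with standard SciPy routines, as sketched below (the normally distributed stand-in data merely mimic the summary values in Table 1).

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
no_migration = rng.normal(1228, 25, 1000)        # stand-in series, one value per generation
adjacent_migration = rng.normal(1226, 33, 1000)
long_migration = rng.normal(1229, 26, 1000)

t_stat, p_value = stats.ttest_ind(no_migration, long_migration)                      # two-scenario comparison
f_stat, p_anova = stats.f_oneway(no_migration, adjacent_migration, long_migration)   # all three scenarios
print(p_value, p_anova)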
Fig. 1a Standard error of heterozygosity for no migration scenario.
Table 1: Summary statistics of 3 simulations with mutation rate of 0.001

Simulation          | Average Heterozygosity | Average Standard Error
No migration        | 1228.19                | 25.064
Adjacent migration  | 1226.09                | 32.664
Long migration      | 1228.54                | 25.661
The average spread of heterozygosity from organisms
within an ecological cell can be used as a proxy to
estimate the variation within the local population, or intra-population variation [28]. Our results suggest that adjacent migration between sub-groups of mating populations results in an increase of genetic variation within local populations. The no migration scenario acts as a control, and the long migration scenario yields the same local population variation as the control, where genetic variation arises only from mutations. This suggests that long-distance migration covering the entire ecosystem may result in the entire ecosystem behaving as one geographically extensive "local" population. This is observed in hoverflies, where extensive migration results in a lack of genetic differentiation on a continental scale [29]. A study of human populations also suggested that long migration may result in a lack of genetic variation between sub-populations [30], which is consistent with our simulation results.
Fig. 1b Standard error of heterozygosity for adjacent migration scenario.
Fig. 1c Standard error of heterozygosity for long migration scenario.
Our results also suggest that migration and mating
between adjacent sub-populations increased the genetic
variability, as seen in increased variation between
adjacent migration and no migration scenarios. This is
supported by a recent study suggesting that migration is crucial in maintaining genetic variation [31].
3.3 Example 2: Testing the effects of mutation rates
and migration on heterozygosity
In this example, we double the mutation rate from 0.001
(0.1%) to 0.002 (0.2%) on the 3 migration scenarios in
Example 1. The simulation results can be analyzed in the
same manner as Example 1 or compared with that of
Example 1 to examine the effect of increased mutation
rate.
Our results show that there is no difference in the average
heterozygosity between all 3 simulations (F = 0.01, p-value = 0.987; Table 2). The spread of heterozygosity is significantly higher in adjacent migration when compared to no migration (p-value = 4.4 × 10^-34) or long migration (p-value = 2.2 × 10^-31) scenarios (Figure 2). These results
are consistent with that of Example 1, suggesting that
these trends are not significantly impacted by doubling
the mutation rate.
Table 2: Summary statistics of 3 simulations with mutation rate of 0.002

Simulation          | Average Heterozygosity | Average Standard Error
No migration        | 1787.79                | 36.296
Adjacent migration  | 1784.32                | 45.776
Long migration      | 1788.52                | 36.695

Fig. 2 Standard errors of heterozygosity between no migration and long distance migration for mutation rate of 0.002.

By comparing simulation outputs from the different mutation rates (0.1% against 0.2%), our results show that heterozygosity (Figure 3A) and the spread of heterozygosity (Figure 3B) are increased with the higher mutation rate. This increase is significant for both heterozygosity (p-value < 6.8 × 10^-90) and the spread of heterozygosity (p-value < 7.3 × 10^-55). Nevertheless, the trend is consistent in both examples. This is consistent with Mukherjee et al. [16], who demonstrated with a simulation study that mutation rate does not impact the statistical tests for evaluating heterozygosity and genetic distance.
Fig. 3a Mean heterozygosity between migration scenarios for both mutation
rates.

Fig. 3b Standard error of heterozygosity between migration scenarios for both mutation rates.
4. Conclusions

In this study, we have presented a Python DO simulation library, Digital Organism Simulation Environment (DOSE), built on our previous work [17-19]. DOSE is designed with biological and ecological parallels in mind. As a result, it is relatively easy to construct evolutionary simulations to examine evolutionary scenarios, especially when a complex interaction of environment and biology is required. To illustrate the use of DOSE in an ecological context, we have presented 2 examples on the effects of migration schemes on heterozygosity. Our simulation results show that adjacent migration, such as foraging or nomadic behavior, increases heterozygosity, while long-distance migration, such as flight covering the entire ecosystem, does not increase heterozygosity. These results are consistent with previous studies [29, 30].

Appendix

DOSE version 1.0.0 is released under the GNU General Public License version 3 at http://github.com/mauriceling/dose/release/tag/v1.0.0 and anyone is encouraged to fork from this repository. Documentation can be found at http://maurice.vodien.com/project-dose.

Acknowledgement

The authors would like to thank AJZ Tan (Biochemistry, The University of Melbourne) and MQA Li (Institute of Infocomm Research, Singapore) for their comments on an initial draft of this manuscript.

References
[1] T. Dobzhansky, “Nothing in biology makes sense except in
the light of evolution”, The American Biology Teacher, Vol.
35, 1973, pp. 125-129.
[2] A. Varki, “Nothing in medicine makes sense, except in the
light of evolution”, Journal of Molecular Medicine, Vol. 90,
2012, pp. 481-494.
[3] B. Batut, D. P. Parsons, S. Fischer, G. Beslon and C. Knibbe,
“In silico experimental evolution: a tool to test evolutionary
scenarios”, BMC Bioinformatics, Vol. 14, No. Suppl 15,
2013, Article S11.
[4] J. E. Barrick and R. E. Lenski, “Genome dynamics during
experimental evolution”, Nature Reviews Genetics, Vol. 14,
No. 12, 2013, pp. 827-839.
[5] C. H. Lee, J. S. H. Oon, K. C. Lee and M. H. T. Ling,
“Escherichia coli ATCC 8739 adapts to the presence of
sodium chloride, monosodium glutamate, and benzoic acid
after extended culture”, ISRN Microbiology, Vol. 2012,
2012, Article ID 965356.
[6] J. A. How, J. Z. R. Lim, D. J. W. Goh, W. C. Ng, J. S. H.
Oon, K. C. Lee, C. H. Lee and M. H. T. Ling, “Adaptation
of Escherichia coli ATCC 8739 to 11% NaCl”, Dataset
Papers in Biology 2013, 2013, Article ID 219095.
[7] D. J. W. Goh, J. A. How, J. Z. R. Lim, W. C. Ng, J. S. H.
Oon, K. C. Lee, C. H. Lee and M. H. T. Ling, “Gradual and
step-wise halophilization enables Escherichia coli ATCC
8739 to adapt to 11% NaCl”, Electronic Physician, Vol. 4,
No. 3, 2012, pp. 527-535.
[8] S. F. Elena and R. Sanjuán, “The effect of genetic robustness
on evolvability in digital organisms”, BMC Evolutionary
Biology, Vol. 8, 2008, pp. 284.
[9] Y. Z. Koh and M. H. T. Ling, "On the liveliness of artificial life", Human-Level Intelligence, Vol. 3, 2013, Article 1.
[10] A. Adamatzky and M. Komosinski, Artificial Life Models in Software, London: Springer-Verlag, 2005.
[11] T. S. Ray, "Evolution and optimization of digital organisms", in Billingsley K. R. et al. (eds), Scientific Excellence in Supercomputing: The IBM 1990 Contest Prize Papers, 1991, pp. 489–531.
[12] C. Ofria, and C. O. Wilke, “Avida: A software platform for
research in computational evolutionary biology”, Artificial
Life, Vol. 10, 2004, pp. 191-229.
[13] O. Tal, "Two complementary perspectives on inter-individual genetic distance", Biosystems, Vol. 111, No. 1,
2013, pp. 18-36.
[14] M. Arenas, and D. Posada, “Computational design of
centralized HIV-1 genes”, Current HIV Research, Vol. 8,
No. 8, 2010, pp. 613-621.
[15] S. J. Park, J. O. Yang, S. C. Kim, and J. Kwon, “Inference
of kinship coefficients from Korean SNP genotyping data”,
BMB Reports, Vol. 46, No. 6, 2013, pp. 305-309.
[16] M. Mukherjee, D. O. Skibinski, and R. D. Ward, “A
simulation study of the neutral evolution of heterozygosity
and genetic distance", Heredity, Vol. 53, No. 3, 1987, pp. 413-423.
[17] J. Z. R. Lim, Z. Q. Aw, D. J. W. Goh, J. A. How, S. X. Z.
Low, B. Z. L. Loo, and M. H. T Ling, “A genetic algorithm
framework grounded in biology”, The Python Papers Source
Codes, Vol. 2, 2010, Article 6.
[18] M. H. T Ling, “An artificial life simulation library based on
genetic algorithm, 3-character genetic code and biological
hierarchy”, The Python Papers, Vol. 7, 2012, Article 5.
[19] M. H. T Ling, “Ragaraja 1.0: The genome interpreter of
Digital Organism Simulation Environment (DOSE)”, The
Python Papers Source Codes, Vol. 4, 2012, Article 2.
[20] R. W. Hamming, "Error detecting and error correcting
codes”, Bell System Technical Journal, Vol. 29, No. 2, 1950,
pp. 147–160.
[21] T. D. Cuypers, and P. Hogeweg, “Virtual genomes in flux: an
interplay of neutrality and adaptability explains genome
expansion and streamlining”, Genome Biology and Evolution,
Vol. 4, No. 3, 2012, pp. 212-229.
[22] R. E. Lenski, C. Ofria, T. C. Collier, and C. Adami, “Genome
complexity, robustness and genetic interactions in digital
organisms”, Nature, Vol. 400, No. 6745, 1999, pp. 661-664.
[23] J. Sardanyés, S. F. Elena, and R. V. Solé, “Simple
quasispecies models for the survival-of-the-flattest effect:
The role of space”, Journal of Theoretical Biology, Vol. 250,
No. 3, 2008, pp. 560-568.
[24] F. M. Codoñer, J. A. Darós, R. V. Solé, and S. F. Elena,
“The fittest versus the flattest: experimental confirmation of
the quasispecies effect with subviral pathogens”, PLoS
Pathogens, Vol. 2, No. 12, 2006, Article e136.
[25] M. A. Fortuna, L. Zaman, A. P. Wagner, and C. Ofria,
“Evolving digital ecological networks”, PLoS Computational
Biology, Vol. 9, No. 3, 2013, Article e1002928.
[26] C. D. Pilcher, J. K. Wong, and S. K. Pillai, "Inferring
HIV transmission dynamics from phylogenetic sequence
relationships”, PLoS Medicine, Vol. 5, No. 3, 2008, Article
e69.
[27] A. M. Tsibris, U. Pal, A. L. Schure, R. S. Veazey, K. J.
Kunstman, T. J. Henrich, P. J. Klasse, S. M. Wolinsky, D. R.
Kuritzkes, and J. P. Moore, “SHIV-162P3 infection of rhesus
macaques given maraviroc gel vaginally does not involve
resistant viruses”, PLoS One, Vol. 6, No. 12, 2011, Article
e28047.
[28] D. W. Drury and M. J. Wade, "Genetic variation and co-variation for fitness between intra-population and inter-population backgrounds in the red flour beetle, Tribolium
castaneum”, Journal of Evolutionary Biology, Vol. 24, No.
1, 2011, pp. 168-176.
[29] L. Raymond, M. Plantegenest, and A. Vialatte, “Migration
and dispersal may drive to high genetic variation and
significant genetic mixing: the case of two agriculturally
important, continental hoverflies (Episyrphus balteatus and
Sphaerophoria scripta)”, Molecular Ecology, Vol. 22, No.
21, 2013, pp. 5329-5339.
[30] J. H. Relethford, “Heterogeneity of long-distance migration
in studies of genetic structure”, Annals of Human Biology,
Vol. 15, No. 1, 1988, pp. 55-63.
[31] M. Yamamichi and H. Innan, "Estimating the migration rate from genetic variation data", Heredity, Vol. 108, No. 4, 2012, pp. 362-363.
Determining Threshold Value and Sharing Secret
Updating Period in MANET
Maryam Zarezadeh1, Mohammad Ali Doostari2 and Hamid Haj Seyyed Javadi3
1 Computer Engineering, Shahed University, Tehran, Iran
m.zarezadeh@shahed.ac.ir
2 Computer Engineering, Shahed University, Tehran, Iran
doostari@shahed.ac.ir
3 Mathematics and Computer Science, Shahed University, Tehran, Iran
h.s.javadi@shahed.ac.ir
Abstract

In this paper, an attack model is proposed to implement a safe and efficient distributed certificate authority (CA) using the secret sharing method in mobile ad hoc networks (MANETs). We assume that the attack process is based on a nonhomogeneous Poisson process. The proposed model is evaluated, and an appropriate threshold value and updating period for the sharing secret are suggested. In addition, the effect of the threshold value on the security of a network that uses the distributed CA is investigated. The results may also be useful in improving the security of other networks that apply a secret sharing scheme.
Keywords: Ad Hoc Network, Nonhomogeneous Poisson Process, Certificate Authority, Public Key Infrastructure.

1. Introduction

Public key infrastructure (PKI) is a basic and fundamental infrastructure for the implementation of security services, such as key generation and distribution, in mobile ad hoc networks (MANETs). In conventional PKI, the centralized certificate authority (CA) is responsible for the distribution and management of the public key certificates used for assigning public keys to the relevant users. But the implementation of PKI in a MANET faces several obstacles. In particular, the conventional single-CA architecture suffers from the single point of failure problem. Furthermore, due to the dynamic topology and mobile nature of the nodes, setting one node as the CA may cause a lot of communication overhead.
The distributed CA method has been suggested for solving the single point of failure problem [1]. In this method, using a threshold secret sharing scheme, the functionality of the CA is distributed among several nodes. Then, for providing CA services, a number of nodes acting as CA servers cooperate; this number is called the threshold parameter of the secret sharing scheme. Therefore, the attacker cannot identify the secret key of the CA as long as it knows fewer than the threshold number of shares. Also, to cope with the efforts of attackers to learn the secret value, periodically updating the shared secret is suggested.
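To make the threshold idea concrete, the following minimal Python sketch implements Shamir secret sharing over a prime field, in which any k of the n shares reconstruct the secret; it is purely illustrative and is not the key-management protocol of any scheme discussed in this paper.

import random

PRIME = 2**127 - 1   # a large prime defining the finite field

def make_shares(secret, k, n):
    # random polynomial of degree k-1 with the secret as constant term;
    # each share is a point (x, f(x)) on the polynomial
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    f = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret)
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares are sufficient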
Secret sharing method has many applications in key
management. Li et al. [2] suggested a new distributed key management scheme combining certificateless public key cryptography and threshold cryptography. In this scheme, for sharing the master key of the network, a threshold number of nodes out of all network nodes are chosen as shareholders. Zhu et al. [3], using threshold cryptography, presented a mobile agent to exchange topology information and private keys. When a new node requests to connect to the network, a threshold number of nodes cooperate and authentication is done. This
method can reduce network overhead and can improve the
success rate of authentication. Zefreh et al. [4] proposed a
distributed CA system based on secret sharing scheme.
They have assumed that the network is divided into several
clusters and each cluster head is in role of distributed CA.
So, a valid certificate is produced by a quorum of cluster
heads. Ge et al. [5] suggested a certificate authority based on a group of distributed server nodes. In this model, different types of nodes joining the network are considered, and the MANET is subject to frequent partitioning due to the dynamic nature of its topology. Hence they classify nodes into three types: servers, high-end clients and low-end clients. In the requesting procedure, high-end and low-end clients obtain a valid certificate by sending a request to a proxy server, which forwards this request to the other servers. If at least a threshold number of servers exist in the server group to combine enough partial certificates, the certificate is issued.
The aim of group key distribution protocol is to distribute
a key used for encrypting the data. Therefore, based on
generalized Chinese remainder theorem, Guo and Chang [6] suggested a group key distribution protocol built on the secret sharing scheme. Their protocol requires few computation operations while maintaining an adequate degree of security. Liu et al.
[7] proposed similar group key distribution protocol. They
indicated that Guo and Chang's protocol [6] has some security problems and suggested a simpler protocol in which the
confidentiality of group key is secure unconditionally. In
registration phase, key generation center (KGC) shares a
secret with each group member. Then, KGC establishes
the session key of group using threshold secret sharing
method and Chinese remainder theorem. Each group
member uses her/his secret shared with the KGC to recover the group key. Also, Gan et al. [8] proposed a threshold public key encryption scheme. In this scheme, on the basis of dual pairing vector spaces and bilinear groups, the decryption key is distributed among a group of nodes. For decrypting the ciphertext, it is sent to at least a threshold number of nodes, and the plaintext is obtained. For more research on applications of secret
sharing scheme in ad hoc networks, one can see [9-11].
As can be seen from the aforementioned research works, many studies have shown the role of the secret sharing scheme in the security of MANETs. Thus, determination of the threshold value and the updating period of the sharing secret is important.
However, few studies have focused on this issue. Dong et
al. [12] have compared security of the partially and fully
distributed CA based on the number of server nodes. But
they did not show how to determine the threshold value.
Haibing and Changlun [13] have suggested an attack
model to determine threshold value and updating period of
sharing secret.
The aim of the present paper is to determine a sufficient threshold value and sharing secret updating period so that the secret sharing scheme can be used effectively in MANETs. For this
purpose, we propose an attack model by considering the
attack process as a nonhomogeneous Poisson process
(NHPP). The paper is organized as follows. Attacks on
MANET and secret sharing scheme are studied in Section
2. Attack process is explored in Section 3. First, some
researches on the attack processes in MANETs are
discussed. Then, in the sequel, the suggested attack model
in this paper is described and a sufficient amount for
threshold and updating period of sharing secret is specified.
In Section 4, the effect of threshold on security of
distributed CA scheme is evaluated. The paper is
concluded in Section 5.
2. Attacks

In this section, we review the attacks on MANETs and on the secret sharing scheme.

2.1 Attacks on MANET

Many attacks on MANETs have been investigated by researchers. According to [14], attacks on MANETs can be classified as follows:
Passive/active: A passive attacker takes an action such as
traffic eavesdropping for information gathering. But in this
attack, no interference is occurred in network host
performance. In active attack, adversary interferes through
actions e.g. modulating, packet forwarding, injecting or
replaying packets, and so on.
Insider/outsider: Insider capability is a potentially serious security risk in all security application domains, since an adversary with insider access can cause considerable damage. Some researchers have suggested threshold protocols (e.g. m-out-of-n voting protocols) for resolving this problem in the field of secret sharing and aggregating application protocols.
Static/adaptive: An attack that follows a fixed, predetermined strategy can be considered static. From a practical point of view, an adaptive attacker's ability to respond to the network environment significantly increases its power; for example, making an informed selection as to which node to compromise next improves attack performance.
2.2. Attacks on secret sharing scheme in MANET
Since the secret is shared between several users and each user holds only a single share, a brute-force attack is difficult. It becomes even harder for the adversary to guess all the shares because the threshold value can vary for different partitions. However, there is a chance of a brute-force attack succeeding by obtaining partial information from a shared secret, and a malicious user can use his own share to help recover other shares. When the number of compromised nodes is greater than or equal to the threshold, the malicious nodes can reconstruct the secret key. In other words, the security of the network has failed; see
attacks on a distributed PKI:
(i) Routing Layer Attacks - Malicious nodes disrupt
routing by announcing false routing information such as
injecting incorrect routing packets or dropping packets. If
the attacker blocks or reroutes all of a victim's packets, some routing layer attacks can be used to establish a denial-of-service (DoS) attack.
(ii) Directed Attacks on CA nodes - Once an attacker discovers the identity or location of the CA nodes, it may employ its resources in attacking only the CA nodes.
3. Proposed attack model
In this section, first, some researches on the attack
processes in MANET are discussed. Then, the suggested
attack model in this paper is described and a sufficient
amount for threshold and updating period of sharing secret
is specified.
Some researchers have studied and modeled the attack
process. Jonsson and Olovsson [17] targeted a distributed
computer system which consisted of a set of 24 SUN ELC
diskless workstations connected to one file server. In
the intrusion experiment, it was assumed that all attackers were legitimate system users with normal user privileges and physical access to all workstations except the file server. They divided the intrusion process into three phases: a learning phase, a standard attack phase and an innovative attack phase. Most of the data related to the standard attack phase, and the statistical evidence showed that the intrusion process could be described by an exponential distribution.
Kaaniche et al. [18] collected data from the honeypot
platforms which deployed on the Internet. Then they did
empirical analysis and statistical modeling of attack
processes. Results showed the probability distribution
corresponding to time between the occurrence of two
consecutive attacks at a given platform can be described
by a mixture distribution combining a Pareto distribution
and an exponential distribution. The probability density
function
, is defined as follows:
where, in little-o notation, o(h)/h → 0 as h → 0. The interarrival times of a Poisson process have an exponential distribution with rate λ, and hence it is said that this process has no memory. This means that {N(t)} is a Poisson process with mean E[N(t)] = λt, where Λ(t) = λt is called the mean value function (m.v.f.). The NHPP is a generalization of the Poisson process satisfying conditions (i)-(iv), except that the rate is a function of time t, denoted by λ(t) and called the intensity function. The m.v.f. of the NHPP is written as Λ(t) = ∫₀ᵗ λ(u) du = -ln(1 - F(t)), where F(t) is the distribution function of the time of the first event in the process. See [20] for a good review of stochastic processes.
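The contrast between a constant attack rate and a time-varying one can be illustrated by simulating one sample path of an NHPP with the standard thinning method, as sketched below; the increasing intensity function is an assumption chosen for illustration, not a rate estimated from real attack data.

import random

def simulate_nhpp(intensity, lambda_max, horizon, rng=random.Random(1)):
    # Thinning (Lewis-Shedler): propose events from a rate lambda_max Poisson process
    # and accept a candidate at time t with probability intensity(t) / lambda_max.
    times, t = [], 0.0
    while True:
        t += rng.expovariate(lambda_max)
        if t > horizon:
            return times
        if rng.random() < intensity(t) / lambda_max:
            times.append(t)

intensity = lambda t: 0.1 + 0.02 * t   # attacks per hour, growing over time (assumed)
attacks = simulate_nhpp(intensity, lambda_max=0.1 + 0.02 * 100, horizon=100.0)
print(len(attacks), "simulated attacks in 100 hours")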
We consider MANET as a closed system. When a network
is based on threshold secret sharing scheme, a group of
nodes have the pieces of secret. Then, for modeling of
attacks, the inside attacks are only considered. Usually in
such attacks, malicious nodes drop and refuse to forward
request of generation or updating certificate, return a fake
reply (e.g. an invalid partial certificate).
Let {N(t), t ≥ 0} be a counting process in which N(t) denotes the number of attacks that have happened in the interval [0, t]. Thus N(0) = 0 means that no attack has occurred in the network up to time 0. Also, N(t) - N(s) denotes the number of attacks in the interval (s, t]. The probability that the network receives k attacks in the interval (s, t] is denoted by P{N(t) - N(s) = k}.
According to our analysis, {N(t)} can be estimated by an NHPP with m.v.f. Λ(t). Then, the probability that k attacks appear during (s, t] is obtained as
P{N(t) - N(s) = k} = e^(-(Λ(t) - Λ(s))) (Λ(t) - Λ(s))^k / k!.
In particular, if attacks in the time interval [0, t] are considered, we can write
P{N(t) = k} = e^(-Λ(t)) Λ(t)^k / k!.
Due to network protection, not all attacks can successfully compromise nodes. We assume that, at each attack, a node may be compromised with probability p. In the following, we calculate the probability distribution of the successful attack process.
Rule 1: Let {N(t)} be an NHPP with m.v.f. Λ(t), in which N(t) denotes the number of attacks that have happened in the interval [0, t]. Assume the probability that the attackers successfully compromise a node at each attack is p, and
hence the unsuccessful probability is 1 - p. Suppose that N_c(t) is a random variable denoting the number of nodes that the attackers compromise successfully in the interval [0, t]. Then {N_c(t)} is an NHPP with m.v.f. pΛ(t), and hence the probability that k nodes are compromised in the interval [0, t] is
P{N_c(t) = k} = e^(-pΛ(t)) (pΛ(t))^k / k!.    (5)
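Under these assumptions, the compromise and security probabilities can be evaluated numerically as sketched below; the linear mean value function and the numbers are illustrative assumptions only, not values fitted to any data set.

from math import exp, factorial

def prob_k_compromised(k, p, mvf_t):
    # P{Nc(t) = k}: Poisson probability with mean p * Lambda(t)
    mean = p * mvf_t
    return exp(-mean) * mean**k / factorial(k)

def security_probability(k, p, mvf_t):
    # probability that fewer than k nodes have been compromised by time t
    return sum(prob_k_compromised(i, p, mvf_t) for i in range(k))

mvf = lambda t: 2.0 * t   # assumed m.v.f.: on average 2t attacks by hour t
for t in (12, 24, 48):
    print(t, round(security_probability(k=5, p=0.1, mvf_t=mvf(t)), 4))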
Using equations (5) and (6), it can be seen that P{N_c(0) = k} = 0 for all k ≥ 1, which means that no successful attack has happened at time 0 and the network is secure. The next rule provides the time of maximum compromise probability.
Rule 2: Consider the assumptions of Rule 1. Further, suppose that Λ(t) = -ln(1 - F(t)), in which F(t) denotes the distribution function of the time to the first attack in the network. The probability that k nodes in the network have been compromised successfully reaches its maximum at the time t_k satisfying pΛ(t_k) = k.
Proof: Differentiating equation (5) with respect to t, we can obtain the peak time as follows:
Setting the derivative of equation (5) with respect to t equal to zero, the unique maximum value of equation (5) is obtained at the time t_k satisfying pΛ(t_k) = k. That is, the maximum value of P{N_c(t) = k} is e^(-k) k^k / k!, which depends only on k and is independent of the m.v.f. and p, as shown in Figure 1 and Figure 2. Figure 1 depicts the 3D plot of the compromise probability, showing the independence of its maximum value from p. Also, Figure 2 draws the compromise probability for different mean value functions; as can be seen, the maximum value is independent of the m.v.f.
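For completeness, the peak time and maximum value stated above follow from differentiating the Poisson form of equation (5); the short derivation below is a reconstruction consistent with the surrounding text rather than the authors' original proof.

\frac{d}{dt}\,P\{N_c(t)=k\}
  = \frac{d}{dt}\left[ e^{-p\Lambda(t)} \frac{(p\Lambda(t))^k}{k!} \right]
  = p\,\lambda(t)\, e^{-p\Lambda(t)} \frac{(p\Lambda(t))^{k-1}}{k!} \bigl(k - p\Lambda(t)\bigr),

which vanishes when p\Lambda(t_k) = k, giving the maximum value

P\{N_c(t_k)=k\} = \frac{e^{-k} k^{k}}{k!},

a quantity that depends only on k.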
Rule 3: Equation (6) helps us to determine the updating period of the sharing secret. According to equation (6), at time t_k the attack on the nodes holding the shared secret reaches its maximum and the system is at risk. In this condition, to maintain system security and prevent disclosure of the shared secret, the secret value should be updated within the interval [0, t_k].
Rule 4: From (6), the threshold value of the sharing secret, denoted by k, can be determined. As mentioned, until the attacker compromises at least k nodes, it cannot discover the sharing secret. Therefore, for a given attack stream and compromise probability p, k should be chosen greater than the expected number of successfully compromised nodes, pΛ(t), over the updating period.
4. Evaluation of proposed model

In this section, we discuss the influence of the threshold value on the security performance of the distributed CA scheme based on the proposed model. An attacker that wants to attack the network must compromise no fewer than k nodes to recover the sharing secret. The probability of compromising fewer than k nodes in the time interval [0, t) is called the security of the network and is obtained as follows:
P_sec(t) = P{N_c(t) < k} = Σ_{i=0}^{k-1} e^(-pΛ(t)) (pΛ(t))^i / i! = Γ(k, pΛ(t)) / Γ(k),    (8)
where Γ(·,·) is the (upper) incomplete gamma function.
Representation (8) shows that the probability of network security does not depend on the number of nodes. Therefore, using P_sec, the security of the partially and fully distributed CA schemes cannot be compared. Also, it can be seen that P_sec(t) is increasing in k and decreasing in p and t, separately. Thus, if we select nodes that are less vulnerable to attacker compromise as the nodes holding the pieces of the shared secret in the partially distributed scheme, the security of the network is improved.
Fig. 1. The 3D plot of the compromise probability.
Fig. 2. The compromise probability for different mean value functions.
According to equation (9), the maximum compromise probability is decreasing in k, as shown in Figure 3. In other words, as the number of secret holders required to reconstruct the secret increases, the success of a malicious node in obtaining the secret value decreases. So, the attacker requires more effort against a fully distributed CA, in which all nodes hold a part of the secret, than against a partially distributed CA.
Fig. 3. Relationship between the maximum compromise probability and k.

5. Conclusion

In this paper, we have presented an attack model to determine the proper threshold value and updating period of the sharing secret for efficient implementation of a distributed CA based on a threshold secret sharing scheme in MANETs. In the proposed model, the attack process has been estimated by an NHPP. By considering the attack process as an NHPP, the rate of attacks is not necessarily fixed in time, as in a Poisson process, and can vary over time. The results of evaluating the suggested model have shown that the probability of network security is independent of the number of nodes. Therefore, the security of fully distributed and partially distributed CA schemes cannot be compared using the number of nodes. According to our analysis, the maximum of the probability that an attacker compromises k nodes of the network depends only on k and is independent of the m.v.f. This means that, if the attack process is modelled as an NHPP, we have a unique maximum for any m.v.f. Also, the results for the network security probability show that this probability decreases when the probability that attackers successfully compromise a node increases. Thus, nodes that have less risk of exposure and vulnerability should be selected as holders of the sharing secret in a partially distributed CA scheme, and hence the security of MANETs can be improved.
References
[1] L. Zhou and Z. J. Haas, "Securing ad hoc networks",
Network, IEEE, vol. 13, pp. 24-30, 1999.
[2] L. Liu, Z. Wang, W. Liu and Y. Wang, "A Certificateless key
management scheme in mobile ad hoc networks", 7th
International Conference on Wireless Communications,
Networking and Mobile Computing, IEEE 2011, 1-4.
[3] L. Zhu, Y. Zhang, and L. Feng, "Distributed Key
Management in Ad hoc Network based on Mobile Agent",
in Intelligent Information Technology Application, 2008.
IITA'08. Second International Symposium on, 2008, pp.
600-604.
[4] M. S. Zefreh, A. Fanian, S. M. Sajadieh, M. Berenjkoub,
and P. Khadivi, "A distributed certificate authority and key
establishment protocol for mobile ad hoc networks", in
Advanced Communication Technology, 2008. ICACT 2008.
10th International Conference on, 2008, pp. 1157-1162.
[5] M. Ge, K.-Y. Lam, D. Gollmann, S. L. Chung, C. C. Chang,
and J.-B. Li, "A robust certification service for highly
dynamic MANET in emergency tasks", International Journal
of Communication Systems, vol. 22, pp. 1177-1197, 2009.
[6] C. Guo and C. C. Chang, "An authenticated group key
distribution protocol based on the generalized Chinese remainder theorem", International Journal of Communication Systems, 2012.
[7] Y. Liu, L. Harn, and C.-C. Chang, "An authenticated group
key distribution mechanism using theory of numbers",
International Journal of Communication Systems, 2013.
[8] Y. Gan, L. Wang, L. Wang, P. Pan, and Y. Yang, "Efficient
threshold public key encryption with full security based on
dual pairing vector spaces", International Journal of
Communication Systems, 2013.
[9] S. Yi and R. Kravets, "Key management for heterogeneous
ad hoc wireless networks", in Network Protocols, 2002.
Proceedings. 10th IEEE International Conference on, 2002,
pp. 202-203.
[10] J. Zhang and Y. Xu, "Privacy-preserving authentication
protocols with efficient verification in VANETs",
International Journal of Communication Systems, 2013.
[11] A. A. Moamen, H. S. Hamza, and I. A. Saroit, "Secure
multicast routing protocols in mobile ad-hoc networks",
International Journal of Communication Systems, 2013.
[12] Y. Dong, A.-F. Sui, S.-M. Yiu, V. O. K. Li, and L. C. K. Hui, "Providing distributed certificate authority service in cluster-based mobile ad hoc networks", Computer Communications, vol. 30, pp. 2442-2452, 2007.
[13] M. Haibing and Z. Changlun, "Security evaluation model for
threshold cryptography applications in MANET", in
Computer Engineering and Technology (ICCET), 2010 2nd
International Conference on, pp. V4-209-V4-213.
[14] J. A. Clark, J. Murdoch, J. McDermid, S. Sen, H. Chivers, O.
Worthington, and P. Rohatgi, "Threat modelling for mobile
ad hoc and sensor networks", in Annual Conference of ITA,
2007, pp. 25-27.
[15] P. Choudhury, A. Banerjee, V. Satyanarayana, and G.
Ghosh, "VSPACE: A New Secret Sharing Scheme Using
Vector Space", in Computer Networks and Information
Technologies: Springer, 2011, pp. 492-497.
[16] S. Yi and R. Kravets, "MOCA: Mobile certificate authority
for wireless ad hoc networks", in 2nd Annual PKI Research
Workshop Program (PKI 03), Gaithersburg, Maryland, 2003,
pp. 3-8.
[17] E. Jonsson and T. Olovsson, "A quantitative model of the
security intrusion process based on attacker behavior",
Software Engineering, IEEE Transactions on, vol. 23, pp.
235-245, 1997.
[18] M. Kaaniche, Y. Deswarte, E. Alata, M. Dacier, and
V. Nicomette, "Empirical analysis and statistical modeling
of attack processes based on honeypots", International
Conference on Dependable Systems and Networks, IEEE
2006; Philadelphia. USA.
[19] J. V. D. Merwe, D. Dawoud, and S. McDonald, "A survey
on peer-to-peer key management for mobile ad hoc
networks", ACM computing surveys (CSUR), vol. 39, p. 1,
2007.
[20] S. Ross, "Stochastic processes", 2nd edn, New York: Wiley,
2008.
Beyond one shot recommendations: The seamless interplay of
environmental parameters and Quality of recommendations
for the best fit list
Minakshi Gujral 1, Satish Chandra 2
1 Department of Computer Science and Engineering, Jaypee Institute of Information Technology, Noida, UP-201307, INDIA.
minakshigujral2011@gmail.com
2 Department of Computer Science and Engineering, Jaypee Institute of Information Technology, Noida, UP-201307, INDIA.
satish.chandra@jiit.ac.in
Abstract
The Knowledge discovery tools and techniques are used in an
increasing number of scientific and commercial areas for the
analysis and knowledge processing of voluminous Information.
Recommendation systems are one of the Knowledge Discovery from Databases techniques; they discover the best-fit information for an appropriate context. This new rage in information technology is seen in the areas of E-commerce, E-Learning, Bioinformatics, Media and Entertainment, electronics and telecommunications, and
other application domains as well. Academics, Research and
Industry are contributing into best-fit recommendation process
enrichment, thereby making it better and improvised with growing
years. Also one can explore in depth for qualitative and
quantitative analysis of E-World Demand and Supply chain with
help of recommendation systems. Lot has been talked about
effective, accurate and well balanced recommendations but many
shortcomings of the proposed solutions have come into picture.
This Paper tries to elucidate and model Best Fit Recommendation
issues from multidimensional, multi-criteria and real world’s
perspectives. This Framework is Quality Assurance process for
recommendation systems, enhancing the recommendation quality.
The proposed solution is looking at various dimensions of the
architecture, the domain, and the issues with respect to
environmental parameters. Our goal is to evaluate
Recommendation Systems and unveil their issues in quest for the
Best Fit Decisions for any application domain and context.
Keywords: Recommendation Systems, Best fit decisions, Issues
in Recommendations,
decisions.
Expert
recommendations,
best fit
1. Introduction
The Recommendation systems are the new search
paradigm which has stormed the E-world with semantic
preferences and excellent information services. Some of
the renowned recommendation systems are Amazon,
last.fm, Netflix, Cinematch, yahoo and Google.
2. Exploring Multi Dimensional Issues in
Recommendation Systems [1-4]:
2.1 Evaluating recommendation system algorithms:
The evaluation of various recommender system algorithms is done to categorize and validate the issues. The focus is on two kinds of evaluation:
1) The first concerns performance accuracy with respect to changing context and knowledge levels.
2) The second concerns user satisfaction, retaining the interest of users and formulating the requirement model of the problem task.
Several approaches are compared in experiments on two real movie rating datasets, MovieLens and Netflix. The collaborative and content filtering algorithms used in recommender systems are complementary approaches, which motivates the design and implementation of hybrid systems; such a hybrid system is then tested with real users.
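Since the collaborative and content-based signals are complementary, one simple way to combine them is a weighted hybrid score. The sketch below is only a minimal illustration of that idea in Python, assuming a toy rating matrix, invented item features and a hand-picked mixing weight alpha; it is not the hybrid system evaluated in the experiments above.

import numpy as np

# Toy data: rows = users, columns = items (0 = unrated); item features are binary genre flags.
ratings = np.array([[5, 4, 0, 1],
                    [4, 0, 5, 1],
                    [1, 2, 4, 5]], dtype=float)
item_features = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)

def collaborative_score(user, item):
    # Mean rating given to the item by the other users who rated it.
    others = np.delete(np.arange(ratings.shape[0]), user)
    rated = ratings[others, item]
    rated = rated[rated > 0]
    return rated.mean() if rated.size else 0.0

def content_score(user, item):
    # Cosine similarity between the item and the user's rating-weighted feature profile.
    profile = ratings[user] @ item_features
    num = profile @ item_features[item]
    den = np.linalg.norm(profile) * np.linalg.norm(item_features[item]) + 1e-9
    return num / den

def hybrid_score(user, item, alpha=0.5):
    # alpha balances the two complementary signals (scales would need normalizing in practice).
    return alpha * collaborative_score(user, item) + (1 - alpha) * content_score(user, item)

print(hybrid_score(user=0, item=2))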
Various extensions in recommendation capabilities are
rightly focused but not justified because:
1) Problems and Extensions are introduced theoretically
but not yet solved from multi dimensional real world
scenario. The thematic profiles of Users and Product
Attributes are evaluated and updated theoretically with
synthetic data sets but real life transactions give the clear
picture. These formulations need further validation with
effective feedback of real world data.
2) Introduction of Contextual, critiquing and
Conversational Recommendations should be incorporated.
3) These extensions should be balanced with increase in
information, user, network and addition of complex cross
application networks.
Following deductions give some focal points to this
approach:
1) Similarity Measure cannot be implemented for all users
of varying preferences.
2) An unresolved issue is to explore criteria that try to
capture quality and usefulness of recommendations from
the user satisfaction perspectives like coverage,
algorithmic complexity, scalability, novelty, confidence
and trust. This all is important along with user-friendly
interaction.
3) There is a need to design an efficient, user-friendly interface which keeps the user from quitting and extracts the important information needed for making the requirement model of the problem task.
4) Traditional recommendation methods, i.e. content, collaborative and hybrid [depending on the rating system], with their advantages and shortcomings, contribute towards possible extensions. This capacitates them for large scale application domains like recommending vacations and financial services. The flipside is that although these extensions are introduced, to date they have been neither implemented nor explained concretely and explicitly in recommendation systems research. Also, the changing context and knowledge level influence the recommendation results.
Recommendation quality has many aspects to it when diving deep into research, innovation and novel ideas for recommendations. It is the search for a best fit solution for any application and technology, and for a varying demography of users. The need is to explore multi-dimensional issues against multi-criteria parameters to produce best fit, optimized recommendations. This paper ponders best fit recommendation issues in Section 2, evaluates them in Section 3 and gives a knowledge prototype approach in Section 4.
2.2 Issues hindering best fit Recommendations:
The study of various recommendation issues in this scenario gives a new dimension through some formulations. Firstly, the various unresolved extensions introduced are:
a) Comprehensive Understanding of Users and Items
b) Model Based recommending techniques
c) Multidimensionality of Recommendations
d) Multi-criteria Rating
e) Non-intrusiveness
f) Flexibility
g) Trustworthiness
h) Scalability
i) Limited Content Analysis
j) Over specialization
k) New Item/User Problem
l) Sparsity
2.3 Background study of some recommendation
systems:
1) An innovative approach for developing reading
material recommendation systems is by eliciting domain
knowledge from multiple experts. To evaluate the
effectiveness of the approach, an article recommendation
expert system was developed and applied to an online
English course.
But the flipside measured in this case: Learning
preferences or needs are not the same. Online e-learning
computational cost increases. Practical problems arise in
this case i.e.
-New user new item problem
-Cold Start Problem
-Coverage Issues
-Computational Cost
6) Some recommendation research also tried to explore
Singular Value Decomposition (SVD) to reduce the
dimensionality of recommender system databases to
increase performance of recommendations [6]
a) SVD can be very expensive and is suitable for offline processing. Its complex architecture and differing results for different setups also hinder a good recommendation approach. It does not take into account Top-N predictions in a cost-effective way to enhance the quality of recommendations. Here recommendation evaluation raises more issues to ponder: for example, best fit recommendations in a changing E-WORLD, security issues, and measuring the user satisfaction, efficiency and persuasiveness of the recommendation process, as well as scalability and coverage, are not fulfilled in such evaluations.
b) The concept of [Rating, Prediction Elicitation, and Similarity Measure] is vague and difficult in a practical scenario. They are taken from synthetic datasets. Real world transactions, say from reservation or registration systems, are far better than synthetic datasets; this approach fails to use them. The thematic profiles of users and product attributes are built theoretically rather than from real life transactions. These calculations further need to be checked with effective feedback from real world data.
c) There is also the overhead of computing similarity measures. Out-of-the-box thinking is needed to measure similarity not only in the thematic profiles of users and product attributes theoretically but also from real life transactions. These calculations further need to be checked with effective feedback of real world data.
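As a rough illustration of the SVD-based reduction discussed in point 6, the sketch below factorizes a small synthetic rating matrix with numpy, keeps k latent dimensions and reads a top-N list off the reconstructed scores; the matrix and the choice of k are invented, and the offline-cost concern from point (a) still applies.

import numpy as np

# Synthetic user-item rating matrix (0 = unrated), purely illustrative.
R = np.array([[5, 4, 0, 1, 0],
              [4, 0, 4, 1, 1],
              [1, 1, 0, 5, 4],
              [0, 1, 5, 4, 0]], dtype=float)

U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2                                              # latent dimensions kept
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]      # low-rank reconstruction = predicted scores

def top_n(user, n=2):
    # Rank the items a user has not rated by their reconstructed score.
    unrated = np.where(R[user] == 0)[0]
    return unrated[np.argsort(R_hat[user, unrated])[::-1][:n]]

print(top_n(user=0))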
module provides fixed learning content for all students
with varying aptitude and knowledge level.
2) A technique using the repertory grid method assists the domain experts to better organize their knowledge and experiences. This is a significant approach, but with the ever changing and evolving nature of the user and item models there are doubts about this approach being successful. It is difficult to calculate the best recommendation for different learning levels. Other factors, such as the authenticity of the data filled in, coverage of the application domain as courses and students increase, and the absence of justification of choice by users, also contribute to this discussion and point towards multi-dimensional issues.
3) Another novel research problem which focuses on the
personalization issue is the problem of recommendation
under the circumstance of a blog system. It is a
comprehensive investigation on recommendation for
potential bloggers who have not provided personalized
information in blog system. This talks about registered
and unregistered bloggers (personalized/non personalized)
given recommendations for services in blog environment.
4) Another recommendation method presents an algorithm on an inhomogeneous graph to calculate the importance value of an object, which is then combined with a similarity value to produce recommendations.
This is a random walk on an inhomogeneous graph, so it cannot be generalized to increasing numbers of users and resources. The following points come to the fore:
a) There is no information about what happens to recommendations with new users and resources.
b) There is no clear picture of how one can calculate the best recommendations for a particular item type which does not match its neighbor nodes.
c) Issues like scalability and trustworthiness of the system are at stake. Normally any blogger can register and can create imperfect/illegal data or change information, which is again a contradictory point.
d) Furthermore, the methodology is too voluminous and complex to implement for large application domains.
5) Some recommendation algorithm tried to incorporate
“thinking of out of the box recommendations” [5]. This
Concept introduces TANGENT, a novel recommendation
algorithm, which makes reasonable, yet surprising,
horizon-broadening recommendations. The main idea
behind TANGENT is to envision the problem as node
selection on a graph, giving high scores to nodes that are
well connected to the older choices, and at the same time
well connected to unrelated choices.
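The published TANGENT algorithm is not reproduced here; the toy sketch below only illustrates the general idea described in this paragraph, scoring candidate nodes of a small item graph by how well they connect both to the user's past choices and to the rest of the graph (the adjacency matrix and the weighting beta are assumptions for illustration).

import numpy as np

# Small undirected item graph (adjacency matrix) and the user's past choices.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
past_choices = [0, 1]
others = [i for i in range(A.shape[0]) if i not in past_choices]

def score(node, beta=0.5):
    # Reward connectivity to the old choices AND to the unrelated part of the graph.
    to_old = A[node, past_choices].sum()
    to_unrelated = A[node, others].sum()
    return beta * to_old + (1 - beta) * to_unrelated

ranked = sorted(others, key=score, reverse=True)
print(ranked)   # horizon-broadening candidates first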
An extensive comparison of the numerous alternatives shows that, while all seem reasonable on paper, several of them suffer from subtle issues, computational cost being one of them. Also, in some undesired situations, if neighbor nodes are at complex positions, complexity, overhead and
3. Evaluating multidimensional issues of
recommendation process:
3.1. The Cold Start Problem
The cold start problem occurs when a new user has not provided any ratings yet or a new item has not yet received any rating from the users. The system lacks the data to produce appropriate recommendations [1][2]. When diversification in explanations [11] is used to remove the over-specialization problem, it gives a chance to new items in the group, which is a good testament to a solution of the cold start problem. CBR plus Ontology [12]
concepts reason out the current and old status of items through logic-based calculations rather than ratings. This increases system complexity, and even if it gives a cold start solution to 15%, it increases scalability, sparsity and other issues; this 15% is also only viable when the information is well fed by past cases. Knowledge-based models [6] hit hard on the usage of rating structures by implicit recommendation methods [1,2] and propose evaluating explicit user/item models to take care of the cold start problem. The excellent analysis done by the knowledge model framework [16] says that as new products of various categories arrive in the market, this can further ignite the cold start, over-specialization and sparsity problems. Even clubbing intelligent agents with mining of the Semantic Web [18] leads to the cold start problem, and as scalability increases the system suffers. The cold start problem is apparent in the systems depicted by Table 1.
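A common stop-gap for the cold start situation described above is to fall back on a non-personalized popularity ranking until a new user or item has accumulated ratings. The sketch below is a minimal illustration of that fallback on an invented rating matrix; it does not address the scalability and sparsity side effects mentioned above.

import numpy as np

R = np.array([[5, 4, 0, 1],      # toy rating matrix, 0 = unrated
              [4, 0, 4, 1],
              [1, 1, 0, 5]], dtype=float)

def recommend(user_ratings, n=2, min_ratings=1):
    if np.count_nonzero(user_ratings) < min_ratings:
        # Cold start: rank items by global popularity (number of ratings) instead.
        popularity = np.count_nonzero(R, axis=0)
        return np.argsort(popularity)[::-1][:n]
    # Otherwise a personalized strategy (collaborative, content based, ...) would be used here.
    return np.argsort(user_ratings)[::-1][:n]

new_user = np.zeros(R.shape[1])
print(recommend(new_user))       # falls back to the most-rated items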
3.4. User Satisfaction Issue
An important issue in recommender system now is to
explore criteria that try to capture quality and usefulness
of recommendations from the user’s satisfaction
perspectives like coverage, algorithmic complexity,
scalability, novelty, confidence and trust, user interaction.
The need to design an efficient, user-friendly user
interface: [1-3, 5]
1) For Mixed hybrid approach to implement again there is
a decision i.e. which items should be rated to optimize the
accuracy of collaborative filtering systems, and which
item attributes are more critical for optimal content-based
recommendations, are issues that are worth exploring.
2) Even if the recommender system is accurate in its
predictions, it can suffer from the ‘one-visit’ problem, if
users become frustrated that they cannot express
particular preferences or find that the system lacks
flexibility. Creating a fun and enduring interaction
experience is as essential as making good
recommendations.
Moreover Table 1 depicts this problem in some systems.
3.2. Coverage
While preferences can be used to improve
recommendations, they suffer from certain drawbacks, the
most important of these being their limited coverage. The
coverage of a preference is directly related to the coverage
of the attribute(s) to which it is applied. An attribute has a
high coverage when it appears in many items and a low
coverage when it appears in few items [1] [2]. Table 1
depicts the coverage problems.
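The coverage notion used above can be made concrete: the coverage of a preference follows the coverage of its attribute, i.e. the share of items carrying that attribute, while catalogue coverage measures how much of the item space a recommender can surface at all. A minimal sketch on an invented catalogue:

# Toy catalogue: each item carries a set of attributes.
catalogue = {
    "item1": {"comedy", "short"},
    "item2": {"comedy"},
    "item3": {"drama"},
    "item4": {"drama", "short"},
}

def attribute_coverage(attribute):
    # Fraction of items in which the attribute appears (high value: the preference applies widely).
    hits = sum(1 for attrs in catalogue.values() if attribute in attrs)
    return hits / len(catalogue)

def catalogue_coverage(recommended_items):
    # Fraction of the catalogue that the recommender is able to surface at all.
    return len(set(recommended_items)) / len(catalogue)

print(attribute_coverage("comedy"))            # 0.5
print(catalogue_coverage(["item1", "item2"]))  # 0.5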
3.5. Personalization as a potent factor in Recommendations
As depicted by Table 1, personalization is a potent factor
to be solved:
Processing of User Preference is Difficult due to:
1) Different User Background
2) Registered/Unregistered Users (TRUST ISSUE)
3) Dynamic Remodeled Information.
4) Willingness/Decision making criteria of user for best
fit preference.
5) Personalization should be clubbed with security issues.
3.3. Overspecialization
Content-based approaches can also suffer from
overspecialization. That is, they often recommend items
with similar content to that of the items already
considered, which can lead to a lack of originality. [5][6].
Moreover Table 1 depicts the Overspecialization problems
in some systems. Even under the conversational strategy, which is the best of the above-mentioned lot, all preferences have to be specified upfront, at the beginning of the interaction. This is a stringent requirement, because users might not have a clear idea of their preferences at that point. New preferences and servicing needs have to be fed in; the system itself cannot predict or recommend them and again presents old strategies and products [14]. Overspecialization is said to be solved effectively by attribute-based diversity and explanation-based diversity [11], of which the explanation-based diversity strategy reigns supreme, as it takes care of computational cost, provides coverage even for the social domain, and is better in efficiency and performance. Nevertheless, explanation is also a criterion added on top of content- or collaborative-based technologies, so it cannot escape their structural disadvantages. Furthermore, explanations need logic or reasoning to calculate satisfaction, which is an inexplicable concept.
3.6. Scalability
Some of the systems have limitation of scalability as
depicted by Table 1. By increasing load on the
recommendation in terms of growing item, users, the
system slows down effective process. This degrades
system performance, efficiency and throughput of
recommendation system. In using the explanation facility to solve over-specialization [13] and to bring diversity into product choice [11], the recommendation quality improves, but with an increase in users, items and modules of the system the result is complexity and overhead, which further degrade performance. The same happens with other systems.
it profitable to shill recommenders. Affiliate marketing and other e-commerce profit gaining concepts also fall in this category. All this is done to have their products recommended more often than those of their competitors. Such acts are known as a shilling attack on recommendations. Concepts from a well known recommendation work in the research domain also elucidate some unwanted situations, and the framework for the shilling attack as a recommendation issue and the respective formulations are given below. This work defines a new issue, named shilling attack on the quality of recommendation, on the basis given below. Method: citation is considered equivalent to the rating phenomenon. Four formulations [F1-F4] are given on these grounds:
a) Matthew Effect describes the fact that frequently cited
publications are more likely to be cited just because the
author believes that well-known papers should be
included.
F1: Branded/well known items are given more preference/higher ratings without even knowing whether they match the demography of the users and the context of their query. This can further lead to over-specialization, cold start and sparsity. For example, any new shoe from branded company X has to be rated 8/10, without the product actually being checked out. This can be termed the RecommBrandedRating Effect.
b) Self citations: Sometimes self citations are made to
promote other publications of the author, although they
are irrelevant for the citing publication.
F2: At times when items are recommended, there are
some complementary things which are presented along
with that, which may not match the preference elicitation
or context. This is just done for gaining profit from other
company/advertising. This can lead to Computational cost,
decay in performance, efficiency, user satisfaction,
coverage and personalization issues. Ratings/Preferences
or recommendation results are more biased towards own
company product thereby to increase brand value. This
can be termed as RecommSelfRating Effect.
c) Citation circles: In some cases Citation circles occur if
citations were made to promote the work of others,
although they are pointless.
F3: Group Recommendations following F1 and F2 can be
construed in this category. This can further ignite sparsity,
cold start, and overspecialization. Even the corporate
world revert their direction towards ‘give and take’
phenomenon where other products are rated high, who in
turn favors you with high recommendation value of home
products. This can be termed as RecommAffiliateRating
Effect.
d) Ceremonial citations: The Ceremonial citations are
citations that were used although the author did not read
the cited publications.
Increase in load or scalability is real test of system
potential and capacity. Research solutions on small scale
or fewer loads are feasible from all perspective but with
scalability of system it is a different scenario in itself.
3.7. Scrutability
There are Recommendation Frameworks which exhibit
absence of scrutability criteria as per Table 1. Scrutability
is one of recommendation quality parameter which
permits user to alter his query to tailor fit his
recommendations. [13] The user is giving his authentic
feedback for the recommendations via scrutability. Many
recommendation process do not allow user feedback or
scrutability. This can lead to dissatisfaction of user.
3.8. Sparsity
There are Recommendation Frameworks, in which there
are sparsity issues as shown by Table 1. The Concept of
sparsity leads to a situation when enough transactional
data is not there for any type of correlation or mapping of
[item/user] data. Be it recommendation technique of
recommendation using diversification [11], explanation
facility[13], tags[15] and others, most of them lack
transactional, linking data to map or correlate user/item
models. In other words calculating the distance between
entities [user/item] becomes difficult.
3.9. Introducing the concept Recommendation
shilling attack [7, 8]
This is a Novel Contribution, which is a part of the
proposed Solution. But the background research for this
formulation is given by study of some of such issues
coming from fake ratings systems of research citations
and E-Commerce world.
E-Commerce Security Issues like “shilling attack” can
also be found in well known research context issues like
Matthew Effect, self citations, citation circles and
ceremonial citations.
Recommendations in ratings can be faked just as
publication numbers, citations and references are inserted
for good results. Ratings can be added to high numbers of
desired products, and even decreased. There arises a need
for semantic structuring and authenticity.
As shown in Table 1, some of the Systems suffer from
Recommendation Shilling attack. Unscrupulous producers
in the never-ending quest for market penetration may find
extent, and the explanation facility further augmented this approach. The concepts of search algorithms, repertory grids and graph models also participated in churning out optimized best fit recommendations. Conversational, contextual and critiquing processes brought a sea change in recommendation quality research. Case-based reasoning, ontology and intelligent agents also contributed in this direction. The innovation and feasible technique illuminated by knowledge-based models is a show stopper. It models.
The Learning Techniques are described by the strategy of
recommendation process. Various systems used in Table 1
have following learning Techniques.
a) Traditional systems based on [1, 2] Rating structures,
User/item profile analysis [Content/ Collaborative/hybrid]
b) Personalization-User Demography [19]
c) Search Algorithms
d) Tagging Concept in Recommendations [15]
e) Explanation Facility [11, 13]
f) Knowledge Models [4, 16, 22]
g) Repertory Grids [3]
h) Graphs [6, 9, 10]
i) Capturing Context through Conversational/Critiquing
Strategies [14, 23]
j) Capturing Context using semantic web [17]
k) Using CBR+ONTOLOGY Combination [2]
l) Using intelligent agents for Recommendations [18, 21]
m) Blogs/Social Groups based Connection Centric
Recommendations.
Example: Orkut/Twitter/LinkedIn/WAYN [9]
F4: At times raters or recommendations are just listed for increased brand name in the market, without a comprehensive understanding of users and items. Without calculating the similarity measure, personalization or preference of the recommendations, some items are rated arbitrarily.
All four factors affect recommendation quality and even discourage future extensions. This intensely affects the comprehensive understanding of users and items and also inversely affects the user satisfaction level. This can be termed the RecommAbstractRating Effect.
These four formulations have been grouped under the concept of the recommendation shilling attack, applying the concept of the shilling attack from the perspective of the quality framework. One can well imagine, if such fake entries enter the recommendation process, what the overall effect on item preferences, cold start, over-specialization and other issues would be. It would be disastrous.
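To make the effect of such fake entries concrete, the sketch below simulates a naive push (shilling) attack by injecting fake profiles that rate a target item with the maximum score and shows how the item's mean rating, and hence its rank, shifts; the data and the attack size are invented for illustration only.

import numpy as np

genuine = np.array([[5, 2, 1],     # toy genuine ratings (users x items), 0 = unrated
                    [4, 1, 2],
                    [5, 2, 0]], dtype=float)

def mean_ratings(R):
    counts = np.maximum((R > 0).sum(axis=0), 1)
    return R.sum(axis=0) / counts

target = 2                               # item the attacker wants pushed
fake = np.zeros((4, genuine.shape[1]))   # four injected shill profiles
fake[:, target] = 5                      # each rates only the target item with the top score

print(mean_ratings(genuine))                      # before the attack
print(mean_ratings(np.vstack([genuine, fake])))   # the target item's average jumps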
3.10. The Central Processing Unit of Recommendation Framework: Quality of Recommendations [1-3, 6, 9, 10-17, 21, 23]
The Quality of Recommendations is measured by some
primary component such as Novelty, flexibility,
scrutability, transparency, effectiveness, efficiency,
persuasiveness. But the presence of the issues depicted by Table 1 defeats even the most tailor-made recommendations. So the need arises to evaluate recommendation quality from multi-dimensional qualitative perspectives as well. These are given by the primary components of the recommendation quality parameters described above. The various learning techniques of these algorithms, i.e. ontology, repertory grids etc., have evolved to resolve recommendation issues. Recommendation strategies also encompass evaluations grounded in mathematical logic, such as Pearson's coefficient and the top-N model, but algorithmically striking a balance among prediction, recommendation, relevance, diversity of items and the measurement of issues is not viable. This requires a feasible, real-world modeled framework.
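For reference, the Pearson coefficient mentioned above is the standard collaborative-filtering similarity computed over the items two users have co-rated; a minimal sketch on invented rating vectors:

import numpy as np

def pearson(u, v):
    # Similarity over the items both users have rated (0 = unrated).
    mask = (u > 0) & (v > 0)
    if mask.sum() < 2:
        return 0.0
    a, b = u[mask] - u[mask].mean(), v[mask] - v[mask].mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

alice = np.array([5, 3, 0, 4, 1], dtype=float)
bob   = np.array([4, 2, 5, 5, 0], dtype=float)
print(pearson(alice, bob))   # value in [-1, 1]; the top-N neighbours then feed the prediction step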
The recommendation Quality is much more than these
seven primary factors. They are affected by 12 issues of
Table 1 framework. Quality needs to be seen from
multidimensional perspectives.
Quality of recommendations depends on many factors;
1) Recommendation Issues.
2) The Learning Techniques and the variables used in it.
3) The real time evolution of Application Framework.
Therefore the 12 factors of Table 1 have quality
parameters embedded in them.
3.11 Environmental parameters of Recommendation
Evaluation. [25-27]
The quality of recommendation is a multi-dimensional evaluation from all these twelve perspectives. The quality evolution started from content, collaborative and hybrid techniques. Ratings helped to calculate user satisfaction, and personalization brought semantic understanding to it. Recommendation quality research further saw similarity measures, preference elicitation and the distinction between search, recommend and predict used to ascertain the exact correlation between user characteristics and item attributes. The tagging facility solved the understanding of item background to an
Issues like the multidimensionality of the recommendation system, multi-criteria rating and model-based recommendation techniques call for a framework which takes information from the user through a user-friendly GUI in conversational stages/increments. This further processes contextual ratings using the KBB technique from
E-COMMERCE Databases, thereby processing in a knowledge Grid.
This concept refers to recommendations which fit the multi-dimensional criteria of the user and application domain: for example, the demography of the user, object attributes, the social, legal and environmental rule-based structure, and understandable, cost-effective navigational links. It has a multi-user flexible interface to cater to user needs, i.e. to keep the user from one-visit problems and assist him to decide. This further assists in creating a homogeneous environment of interaction. But some of the recommendation systems lack in one way or another on this type of evaluation point, as depicted by Table 1. In this work, the novel contribution is the 3M Model, which has been identified from the above stated concept. This is Multidimensional Multi-criteria modeling: the 3M-Model.
From the perspective of E-Business point of view:
Recommendation System’s Maintenance Model [Business
Model]: This deals with business model perspective of
recommendations thereby taking care of real-time
transactions and the cost incurred.
a) Maintenance Model 1: charge recipients of recommendations either through subscriptions or pay-per-use [Danger of fraud].
b) Maintenance Model 2: A second model for cost
recovery is advertiser support. [Danger of fraud]
c) Maintenance Model 3: A third model is to charge a fee
to the owners of the items being evaluated. [Partial
Danger]. This 3M Model can also take care of TRUST
Issues.
This has given a balanced structure to evaluate and satisfy
various recommendation parameters. Therefore following
analysis holds true:
a) Recommendation Quality Framework: a good evaluation matrix which enhances recommendation quality. It describes the parameters of quality evaluation which can provide a sound base to explain issues. For example, the density of recommendations tells you about overall coverage or sparsity criteria.
b) It also evaluates the cost structure, therefore evaluating the remuneration of each approach and strategy.
c) The consumer taste parameter evaluates the over-specialization issue and also, to some extent, can help to solve collaborative filtering disadvantages caused by not considering different user backgrounds.
d) A recommendation system's implementation, usage and evaluation are a costly transaction. This has to be strategically managed by structuring a balanced quotient of multidimensional recommendation quality criteria and the business model's needs. Cost and quality parameters should be managed according to the application domain's needs and the user's perspective.
e) Qualitative parameters alone cannot justify best fit recommendations; they need to work together with the issues. They need justification for analysis and an architectural framework for the generation of recommendations.
We can look at recommendation systems from a social and technical evaluation perspective. For this the author has taken five recommender systems: GroupLens, Fab, ReferralWeb, PHOAKS and SiteSeer.
This evaluation metrics further illuminates various
recommendation methodologies, architecture and their
implementation and processing strategies with two main
goals:
1) Recommendation system’s quality evaluations
2) Recommendation system’s maintenance in real world.
For this the evaluation metrics are categorically divided
into three major parts of evaluations:
The Technical Design Space: This consists of Contents
of recommendation, use of recommendations, explicit
entry, anonymous and aggregation,
The domain space and characteristics of items
evaluated: This consists of type, quantity, and lifetime
and cost structure of Recommendations.
The domain space and characteristics of the
participants and the set of evaluations: This consists of recommendation density, consumer taste, consumer taste variability and consumer type.
ISSUES                               S1    S2    S3    S4    S5    S6    S7    S8    S9    S10   S11   S12
Accurate Rating                      1     1     0.5   1     1     1     1     1     1     0.5   1     1
Preference Elicitation               1     1     0.5   1     1     1     1     1     1     0.5   1     1
Similarity Measure                   1     1     0.5   1     1     1     1     1     1     0.5   1     1
Building of item/user profile        1     1     0     1     0.5   1     0     1     1     1     1     1
Personalized recommendation          1     1     0     1     0.5   1     0     1     1     1     1     1
Implicit/Explicit Data Collection    1     1     0     1     0.5   1     0     1     1     1     1     1
Cold Start Problem                   1     0     0     1     1     1     1     0     0     0     0     0
Coverage                             0     0     0     1     0     1     1     0     0     0     0     1
Sparsity                             0     0     0     1     0     1     1     1     1     0     0     1
Over specialization                  1     0     0.5   0     1     0     1     1     0     0     0     1
Recommendation Quality               1     1     0.5   1     0     1     1     1     0.5   0     1     1
User Satisfaction/1-Visit Problem    1     0     0.5   0     0     1     1     1     0     1     0.5   1
Scrutability                         1     1     1     1     0     0     1     1     0     0     1     1
Scalability                          0     0     0     0     0     0     1     0     0     0     0     1
Real World Modeling                  0     0     0     0     0     1     1     1     0     0     1     1
Shilling Attack                      0     0     0     0     0     0     0     0     0     1     0     0
Table 1: Multi Dimensional Issues in Recommendation Systems Matrix. Here systems S1..S12 refer to the systems and concepts depicted by research papers [11-22] in the back references, in the same order. The value 0 marks the presence of an issue; 1 means the system works fine with respect to the issue. The table can be read as: a) Coverage problems exist in some systems [11-13, 15, 18-21]. b) Overspecialization problems exist in some systems [12, 14, 16, 19, 20, 21]. c) User satisfaction issues are prevalent in systems [12, 14, 15, 19]. d) The personalization problem is present in systems [13, 17]. e) Scalability problems exist in some of the systems [11-16, 18-21]. f) The scrutability problem is present in systems [15, 16, 19, 20]. g) Sparsity is shown by some of the recommendation frameworks [11, 12, 13, 15, 20, 21]. h) Some of the systems [11-19, 21-22] also suffer from the recommendation shilling attack. i) Systems in references [12, 13, 18-22] depict the cold start problem. j) The real world modeling problem is shown by systems S1-S5, S9 and S10.

4. The Recommendation evaluation framework by Multi Layer Architecture:
This evaluation prototype paves the way for measuring the recommendation process of best fit recommendations. It is a generic, abstract framework which can further be extended by increasing the number of systems, architectures, issues, measurement criteria and viable solution sets. It can further be given the shape of a Knowledge Grid. It consists of 8 layers and 3 knowledge bases. Referring to Table 1, the working of the framework is explained with respect to system S1.
Layer 1: Extracts all the algorithmic detail of system S1, i.e. architecture {knowledge model, ontology, repertory grid, etc.}, domain application {E-Commerce, E-learning} and quantitative parameters [25][26] {number, type, cost and lifetime of recommended items, contents and density of recommendation, user taste variability, etc.}, and stores it in Knowledge Base KB1.
Layer 2: The issues are recorded in a grid framework with the severity of issues mapped as {X=0, √=1, 1/2√=50%}. Each issue is given a tag which consists of its issue number. The issues are stored in Knowledge Base KB2, according to their sequence number and issue details:
The Cold Start Problem - 01
Coverage - 02
Overspecialization - 03
User Satisfaction Issue - 04
Personalization - 05
Scalability - 06
Scrutability - 07
Sparsity - 08
Real World Modeling of data - 09
Layer 3: The recommendation density and other quantitative parameters are referred to as a function F = {A1, ..., AN}, where N is the number of quantitative parameters. This function F is further checked against the information overload problem due to social networking. This is done from the viewpoint of a social-centric view of the World Wide Web, i.e. harnessing the recommendation linkages from the social-centric network [25][26][27]. This step is important to recover from problems of personalization, navigational links and social networks.
Layers 4 and 5: For system S1 the tags are allocated by going through the various layers, starting from Layer 1 to Layer 5. To resolve the possibility of recommendation shilling attacks, tags are further extended by adding the extensions F1..F4 {the formulations discussed in the section on the recommendation shilling attack}. For S1 we have S1%26789%F2F3.
Layer 6: Here we resolve the fake rating structure or data due to the recommendation shilling attack. We remove the redundant data added due to fake ratings, thereby adding a Clear flag, giving S1%26789%Clr.
Layer 7: This layer further matches the solution set knowledge base SKB3, which consists of strategies to resolve issues. This is future work and may give rise to many other research directions. For system S1 we have to resolve issues 2, 6, 8 and 9. This may further give solution sets in terms of architectural, environmental, qualitative or quantitative changes to rectify the given problems.
Layer 8: Tests the validity of the recommendations. In case of failure, it sends them back to Layer 1 or exits the system. Each piece of recommendation data goes through a maturation effect, i.e. data change due to the environment or the knowledge level of people, so Layer 1 again tracks all the system information. It is therefore customary to update KB1, KB2 and SKB3 as recommendation data and strategies evolve.
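As a rough sketch of the Layer 2 to Layer 6 tagging idea (issue numbers concatenated behind the system id, shilling formulations appended at Layers 4-5, and a Clr flag added once fake ratings are resolved at Layer 6), the snippet below rebuilds the S1%26789%F2F3 and S1%26789%Clr examples from an illustrative issue row; everything beyond those two example tags, including the row values, is an assumption made only for illustration.

# Issue numbering used by Layer 2 (01-09) and an illustrative issue row (0 marks an issue, as in Table 1).
ISSUE_CODES = {"cold_start": "1", "coverage": "2", "overspecialization": "3",
               "user_satisfaction": "4", "personalization": "5", "scalability": "6",
               "scrutability": "7", "sparsity": "8", "real_world_modeling": "9"}

issue_row = {"cold_start": 1, "coverage": 0, "overspecialization": 1, "user_satisfaction": 1,
             "personalization": 1, "scalability": 0, "scrutability": 0, "sparsity": 0,
             "real_world_modeling": 0}     # values chosen only to reproduce the example tag

def layer2_tag(system_id, row):
    issues = "".join(code for name, code in ISSUE_CODES.items() if row[name] == 0)
    return f"{system_id}%{issues}"

def layer45_extend(tag, formulations):
    # Layers 4-5: append the shilling-attack formulations F1..F4 that apply to the system.
    return tag + "%" + "".join(formulations)

def layer6_clear(tag):
    # Layer 6: fake ratings resolved, so the formulation part is replaced by the Clr flag.
    return tag.rsplit("%", 1)[0] + "%Clr"

tag = layer45_extend(layer2_tag("S1", issue_row), ["F2", "F3"])
print(tag)                 # S1%26789%F2F3
print(layer6_clear(tag))   # S1%26789%Clr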
5. Conclusions
Recommendation System can be termed as Information
Filtering KDD Technique which evolves with E-world. It
is a way ahead of smart search. Every recommendation algorithm, in spite of anomalies, has a basic learning technique, issues creating problems in good recommendations, parameters to evaluate recommendations and a social-centric point of view, all contributing to a quality framework for best fit recommendations.
This paper categorizes recommender systems semantically on a large scale. This includes applications ranging from e-commerce to social networking, platforms from web to mobile, healthcare to differential diagnosis, project management to quality assurance and beyond, and a wide variety of technologies ranging from collaborative filtering, content-based and hybrid approaches, the explanation facility, tag terminology, ontology and case-based reasoning to knowledge models.
Thus a need arises to evaluate and explore recommendation architectures from the perspective of issues and quality parameters of evaluation.
The main goal is trustworthy, user-friendly, conversational, context-enriched, novel best fit recommendations. This all contributes to a multi-dimensional evaluation architecture which filters cost-effective, application/domain-based best fit recommendations. The summary Table 1 encompasses a semantic analysis of some selected systems and can be extended with any number of systems and other issues as well. The era of best fit recommendations tailor-made for any application domain will see the user asking, "What do you suggest for me?" rather than, "I am X, I have used Y, can you suggest something like Y?".
Further, recommendation systems need more cross-dimensional analysis from the perspective of: issues, qualitative and quantitative frameworks, business models, architecture and application domains.
In this paper the evaluation frameworks of only a few architectures have been evaluated with respect to issues and quality parameters. The prototype of the evaluation is present, but the core areas of research, precise measurements and boundary definitions are a future extension. The main focus of this paper has been the discussion of issues and the attempt to eradicate them by giving an algorithm for filtering the issues through tags. This algorithm needs to be extended to a bigger architecture with the respective functions simulated on datasets. There are many issues and some of them overlap as well. The accurate and best fit recommendation generating evaluation framework is future work. There is a lot more to be done and, more importantly, issues have to be clearly and broadly classified and worked out for approximate, contextually best fit decisions.
References
[1] L. Candillier, K. Jack, F. Fessant, and F. Meyer, "State-of-the-Art Recommender Systems," in Collaborative and Social
Information Retrieval and Access: Techniques for Improved
User Modeling. M. U. o. T. Chevalier, Ed. 2009, pp. 1-22.
[2] Adomavicius, G., & Tuzhilin, A. (June 2005). Toward the
next generation of recommender systems: A survey of the
state-of-the-art and possible extensions. IEEE Transactions on
Knowledge and Data Engineering, Vol 17,Issue 06,pp 734–
749.
[3] Hsu, C., Chang, C., and Hwang, G. 2009. Development of a
Reading Material Recommendation System Based on a Multi-expert Knowledge Acquisition Approach. In Proceedings of
the 2009 Ninth IEEE international Conference on Advanced
Learning Technologies - Volume 00 (July 15 - 17, 2009).
ICALT. IEEE Computer Society, Washington, DC, 273-277.
[4] Hsu, C., Chang, C., and Hwang, G. 2009. Development of a
Reading Material Recommendation System Based on a Multi-expert Knowledge Acquisition Approach. In Proceedings of
the 2009 Ninth IEEE international Conference on Advanced
Learning Technologies - Volume 00 (July 15 - 17, 2009).
ICALT. IEEE Computer Society, Washington, DC, 273-277.
[5] Onuma, K., Tong, H., and Faloutsos, C. 2009. TANGENT: a
novel, 'Surprise me', recommendation algorithm. In
Proceedings of the 15th ACM SIGKDD international
Conference on Knowledge Discovery and Data Mining (Paris,
France, June 28 - July 01, 2009). KDD '09. ACM, New
York, NY, pp 657-666.
[6] B. M. Sarwar, G. Karypis, J. A. Konstan, and J. T. Riedl,
"Application of dimensionality reduction in recommender
systems–a case study," in In ACM WebKDD Workshop,
2000.
[7] B. Gipp and J. Beel, “Scienstein: A Research Paper
Recommender System,” in International Conference on
Emerging Trends in Computing. IEEE, 2009, pp. 309–315.
[8] S. K. Lam and J. Riedl. Shilling recommender systems for fun
and profit. In WWW ’04: Proceedings of the 13th
international conference on World Wide Web, pages 393–402,
New York, NY, USA, 2004. ACM Press.
[9] Perugini, S., Gonçalves, M. A., and Fox, E. A. 2004.
Recommender Systems Research: A Connection-Centric
Survey. J. Intell. Inf. Syst. 23, 2 (Sep. 2004), pp 107-143.
[10] Z. Huang, W. Chung, and H. Chen, "A graph model for e-commerce recommender systems," Journal of the American
Society for information science and technology, vol. 55, no. 3,
pp. 259-274, 2004.
[11] Yu, C., Lakshmanan, L. V., and Amer-Yahia, S. 2009.
Recommendation Diversification Using Explanations. In
Proceedings of the 2009 IEEE international Conference on
Data Engineering (March 29 - April 02, 2009). ICDE. IEEE
Computer Society, Washington, DC, 1299-1302.
[12] Garrido, J.L.; Hurtado, M.V.; Noguera, M.; Zurita,
J.M.,"Using a CBR Approach Based on Ontologies for
Recommendation and Reuse of Knowledge Sharing in
Decision Making",Hybrid Intelligent Systems, 2008. HIS '08.
Eighth International Conference on 10-12 Sept. 2008,
Page(s):837 - 842.
[13] Tintarev, N.; Masthoff, J.,"A Survey of Explanations in
Recommender Systems",Data Engineering Workshop, 2007
IEEE 23rd International Conference on 17-20 April 2007
Page(s):801 – 810.
[14] Mahmood, T. and Ricci, F. 2009. Improving recommender
systems with adaptive conversational strategies. In Proceedings
of the 20th ACM Conference on Hypertext and Hypermedia
(Torino, Italy, June 29 - July 01, 2009). HT '09. ACM, New
York, NY, 73-82.
[15] Sen, S., Vig, J., and Riedl, J. 2009. Tagommenders:
connecting users to items through tags. In Proceedings of the
18th international Conference on World Wide Web (Madrid,
Spain, April 20 - 24, 2009). WWW '09. ACM, New York,
NY, 671-680.
[16] Towle, B., Quinn, C. “Knowledge Based Recommender
Systems Using Explicit User Models”, In Knowledge-Based
Electronic Markets, Papers from the AAAI Workshop, Menlo
Park, CA: AAAI Press, 2000.
[17] P. Bhamidipati and K. Karlapalem, Kshitij: A Search and
Page Recommendation System for Wikipedia, COMAD 2008.
[18] Xin Sui, Suozhu Wang, Zhaowei Li, "Research on the model
of Integration with Semantic Web and Agent Personalized
Recommendation System," cscwd, pp.233-237, 2009 13th
International Conference on Computer Supported Cooperative
Work in Design, April 22-24, 2009.
[19] Yae Dai, HongWu Ye, SongJie Gong, "Personalized
Recommendation Algorithm Using User Demography
Information," wkdd, pp.100-103, 2009 Second International
Workshop on Knowledge Discovery and Data Mining, 23-25
Jan, 2009.
[20] SemMed: Applying Semantic Web to Medical
Recommendation Systems Rodriguez, A.; Jimenez, E.;
Fernandez, J.; Eccius, M.; Gomez, J.M.; Alor-Hernandez, G.;
Posada-Gomez, R.; Laufer, C., 2009. INTENSIVE '09. First
International Conference on Intensive Applications and
Services ,20-25 April 2009, Page(s):47 – 52.
[21] Richards, D., Taylor, M., and Porte, J. 2009. Practically
intelligent agents aiding human intelligence. In Proceedings of
the 8th international Conference on Autonomous Agents and
Multiagent Systems - Volume 2 (Budapest, Hungary, May 10
- 15, 2009). International Conference on Autonomous Agents.
International Foundation for Autonomous Agents and
Multiagent Systems, Richland, SC, 1161-1162.
[22] B. A. Gobin, and R. K. Subramanian,"Knowledge Modelling
for a Hotel Recommendation System", World Academy of
Science, Engineering and Technology, 2007.
[23] Chen, L. and Pu, P. 2009. Interaction design guidelines on
critiquing-based recommender systems. User Modeling and
User-Adapted Interaction 19, 3 (Aug. 2009), 167-206.
[24] Riedl, J., and Dourish, P., "Introduction to the special section on recommender systems", ACM Trans. Comput.-Hum. Interact., Sep. 2005, Vol. 12, No. 3, pp. 371-373.
[25] Resnick, P., and Varian, H. R., (Mar 1997), "Recommender systems", Communications of the ACM, Vol. 40, No. 3, pp. 56-58.
[26] Gujral, M., and Asawa, K., "Recommendation Systems - The Knowledge Engineering analysis for the best fit decisions", Second International Conference on Advances in Computer Engineering - ACE 2011, Trivandrum, Kerala, INDIA.
Island Model based Differential Evolution Algorithm for
Neural Network Training
Htet Thazin Tike Thein
University of Computer Studies, Yangon
Yangon, Myanmar
htztthein@gmail.com
Abstract
There exist many approaches to training neural networks. In this system, training for a feed forward neural network is introduced using island model based differential evolution. Differential Evolution (DE) has been used to determine optimal values for ANN parameters such as the learning rate and momentum rate, and also for weight optimization. The island model uses multiple subpopulations and exchanges individuals to boost the overall performance of the algorithm. In this paper, four programs have been developed: Island Differential Evolution Neural Network (IDENN), Differential Evolution Neural Network (DENN), Genetic Algorithm Neural Network (GANN) and Particle Swarm Optimization with Neural Network (PSONN), to probe the impact of these methods on ANN learning using various datasets. The results reveal that IDENN gives quite promising results in terms of convergence rate and smaller errors compared to DENN, PSONN and GANN.
Keywords: Artificial neural network, Island Model, Differential
Evolution, Particle Swarm Optimization, Genetic Algorithm.
1. Introduction
A neural network is a computing system made up of a number of simple, interconnected processing neurons or elements, which process information by their dynamic state response to external inputs [1]. The development and application of neural networks are unlimited, as they span a wide variety of fields. This can be attributed to the fact that these networks are attempts to model the capabilities of humans. They have been successfully implemented in real world applications such as accounting and finance [2,3], health and medicine [4,5], engineering and manufacturing [6,7], marketing [8,9] and general applications [10,11,12]. Most papers concerning the use of neural networks have applied a multilayered, feed-forward, fully connected network of perceptrons [13,14]. Reasons for the use of simple neural networks are the simplicity of the theory, ease of programming and good results, and the fact that this type of NN represents a universal function in the sense that, if the topology of the network is allowed to vary freely, it can take the shape of any broken curve [15]. Several types of learning algorithm have been used for neural networks in the literature.
The DE algorithm is a heuristic algorithm for global optimization. It was introduced several years ago (in 1997) and has been developed intensively in recent years [16]. Its advantages are as follows: the possibility of finding the global minimum of a multimodal function regardless of the initial values of its parameters, quick convergence, and the small number of parameters that need to be set up at the start of the algorithm's operation [17].
Differential evolution is a relatively new global search and optimization algorithm that is suitable for real variable optimization. It uses vector differences and elite selection for the selection process and has relatively few parameters compared to other evolutionary algorithms. Neural network weights can be trained or optimized using differential evolution. The island based model works by running multiple algorithms and sharing the results at regular intervals, promoting the overall performance of the algorithm. This system proposes an island based differential evolution algorithm for training feed forward neural networks.
2. Literature Review
The most widely used method of training for feed forward
ANNs is back propagation (BP) algorithm [18]. Feed
forward ANNs are commonly used for function
approximation and pattern classifications. Back
propagation algorithm and its variations such as Quick
Prop [19] and RProp [20] are likely to reach local minima
especially when the error surface is rugged. In
addition, the efficiency of BP methods depends on the
selection of appropriate learning parameters. The other
training methods for feed forward ANNs include those
that are based on evolutionary computation and heuristic
principles such as Differential Evolution (DE), Genetic
Algorithm (GA), and Particle Swarm Optimization
(PSO).
2.1 Artificial Neural Network (ANN)
An Artificial Neural Network, often just called a neural
network, is a mathematical model inspired by biological
neural networks. A neural network consists of an
interconnected group of artificial neurons, and it
processes information using a connectionist approach to
computation. In most cases a neural network is an
adaptive system that changes its structure during a
learning phase. Neural networks are used to model
complex relationships between inputs and outputs or to
find patterns in data.
2.4 Genetic Algorithm
Genetic algorithms are stochastic search techniques that
guide a population of solutions towards an optimum using
the principles of evolution and natural genetics. In recent
years, genetic algorithms have become a popular
optimization tool for many areas of research, including
the field of system control, control design, science and
engineering. Significant research exists concerning genetic
algorithms for control design and off-line controller
analysis.
Genetic algorithms are inspired by the evolution of
populations. In a particular environment, individuals which
better fit the environment will be able to survive and hand
down their chromosomes to their descendants, while less fit
individuals will become extinct. The aim of genetic
algorithms is to use simple representations to encode
complex structures and simple operations to improve
these structures. Genetic algorithms therefore are
characterized by their representation and operators. In the
original genetic algorithm an individual chromosome is
represented by a binary string. The bits of each string are
called genes and their varying values alleles. A group of
individual chromosomes are called a population. Basic
genetic operators include reproduction, crossover and
mutation [28]. Genetic algorithms are especially capable of
handling problems in which the objective function is
discontinuous or non differentiable, non convex,
multimodal or noisy. Since the algorithms operate on a
population instead of a single point in the search space, they
climb many peaks in parallel and therefore reduce the
probability of finding local minima.
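A minimal illustration of the binary-string representation and the basic operators described above (truncation selection, one-point crossover, bit-flip mutation); the fitness function, string length and parameters are placeholders, not the GANN setup used later.

import random

def fitness(chromosome):
    return sum(chromosome)                         # placeholder objective: number of 1-bits

def crossover(a, b):
    point = random.randint(1, len(a) - 1)          # one-point crossover
    return a[:point] + b[point:]

def mutate(chromosome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in chromosome]

population = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
for _ in range(50):                                # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # simple truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

print(max(fitness(c) for c in population))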
2.2 Differential Evolution (DE)
Differential evolution (DE) algorithm is a simple
evolutionary algorithm that creates new candidate
solutions by combining the parent individual and several
other individuals of the same population. A candidate
replaces the parent only if it has better fitness. This is
a rather greedy selection scheme that often outperforms the
traditional evolutionary algorithm. In addition, DE is a
simple yet powerful population based, direct search
algorithm with the generation and test feature for globally
optimizing functions using real valued parameters.
Among DE’s advantages are its simple structure, ease of
use, speed and robustness. Due to these advantages, it has
many real-world applications. DE starts with a random population, like other evolutionary algorithms. Solutions are encoded as chromosomes; for neural network training, the weights of the neural network are encoded in the chromosome. In each iteration of DE, the fitness of each chromosome is evaluated. Fitness determines the quality of a solution or chromosome. For training a neural network, the fitness function is generally the MSE (mean square error) of the network. Each chromosome undergoes mutation and crossover operations to produce a trial vector. The fitness of each trial vector is compared with that of the parent vector; the fitter one survives and the next generation begins.
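A compact sketch of the DE/NN coupling just described: the weights and biases of a tiny feed-forward network are flattened into each chromosome, the fitness is the network's MSE on the training data, and DE's mutation, crossover and greedy selection evolve the population. The network size, the DE constants (F, CR) and the XOR data are illustrative choices, not the exact setup used in this paper.

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # XOR training data
y = np.array([0, 1, 1, 0], dtype=float)
H = 3                                       # hidden neurons
DIM = 2 * H + H + H + 1                     # weights and biases of a 2-H-1 network

def forward(w, x):
    W1 = w[:2 * H].reshape(2, H); b1 = w[2 * H:3 * H]
    W2 = w[3 * H:4 * H];          b2 = w[4 * H]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def mse(w):                                 # fitness: lower is better
    return float(np.mean((forward(w, X) - y) ** 2))

rng = np.random.default_rng(0)
NP, F, CR = 30, 0.6, 0.9
pop = rng.uniform(-1, 1, (NP, DIM))
fit = np.array([mse(ind) for ind in pop])

for _ in range(300):
    for i in range(NP):
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])       # DE/rand/1 mutation
        cross = rng.random(DIM) < CR
        cross[rng.integers(DIM)] = True                  # keep at least one gene from the mutant
        trial = np.where(cross, mutant, pop[i])          # binomial crossover
        if mse(trial) < fit[i]:                          # greedy selection
            pop[i], fit[i] = trial, mse(trial)

print(fit.min())    # MSE of the best network found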
2.5 Island Model (IM)
An island model (IM) is an approach to distribute EA. It
divides individuals into subpopulations and allows for
occasional exchange of individuals (migrations). The
simplest island model assumes the same global parameters
for islands and the same global parameters for migrations.
Populations are characterized by their number, size and
the evolutionary algorithm type. Migrations are described
by the topology.
2.3 Particle swarm optimization (PSO)
Particle swarm optimization (PSO) [21][22] is a stochastic global optimization method that belongs to the family of Swarm Intelligence and Artificial Life. Similar to artificial neural networks (ANN) and Genetic Algorithms (GA) [23][24], which are simplified models of the neural system and of natural selection in evolutionary theory, PSO is based on the principle that a flock of birds, a school of fish, or a swarm of bees searches for food sources where at the beginning the perfect location is not known. However, they eventually reach the best location of the food source by means of communicating with each other.
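For comparison, the core PSO update that a PSONN-style trainer relies on can be written in a few lines: each particle's velocity pulls it towards its personal best and the swarm's global best. The sphere objective and the constants below are placeholders, not the ANN training configuration used later.

import numpy as np

def objective(x):                 # placeholder; for PSONN this would be the network's MSE
    return float(np.sum(x ** 2))

rng = np.random.default_rng(1)
N, DIM, W, C1, C2 = 20, 5, 0.7, 1.5, 1.5
pos = rng.uniform(-5, 5, (N, DIM))
vel = np.zeros((N, DIM))
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((N, DIM)), rng.random((N, DIM))
    vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)   # velocity update
    pos = pos + vel
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(objective(gbest))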
3. Island Model based Differential Evolution
Algorithm (IDE)
The DE algorithm was proposed by Price and Storn [25].
The DE algorithm has the following advantages over the
traditional genetic algorithm: it is easy to use and it has
efficient memory utilization, lower computational
complexity (it scales better when handling large
problems), and lower computational effort (faster
convergence) [26]. DE is quite effective in nonlinear
constraint optimization and is also useful for optimizing
multimodal problems [27].
Its pseudocode form is as follows:
a) Create an initial population consisting of PopSize individuals
b) While (termination criterion is not satisfied) Do Begin
c) For each i-th individual in the population Begin
d) Randomly generate three integer numbers r1, r2, r3 ∈ [1; PopSize], where r1 ≠ r2 ≠ r3 ≠ i
e) For each j-th gene in the i-th individual (j ∈ [1; n]) Begin
   v_i,j = x_r1,j + F · (x_r2,j − x_r3,j)
f) Randomly generate one real number rand_j ∈ [0; 1)
g) If rand_j < CR then u_i,j := v_i,j Else u_i,j := x_i,j
   End;
h) If individual u_i is better than individual x_i then replace individual x_i by the child individual u_i
   End;
End;

3.1. Migration Topology
There are four types of migration topology: ring, torus, random and fully connected. This system investigates the ring topology.

3.2. Migration Strategy
A migration strategy consists of two parts. The first part is the selection of the individuals which shall be migrated to another island. The second part is to choose which individuals are replaced by the newly obtained individuals. Four migration strategies are common:
• Select the best individuals, replace the worst individuals.
• Select random individuals, replace the worst individuals.
• Select the best individuals, replace random individuals.
• Select random individuals, replace random individuals.
This system experiments with the strategy in which the best individuals replace the worst individuals.

3.3. Migration Interval
In order to distribute information about good individuals among the islands, migration has to take place. This can either be done in a synchronous way every n-th generation or in an asynchronous way, meaning migration takes place at non-periodical times. It is commonly accepted that more frequent migration leads to a higher selection pressure and therefore faster convergence. But, as always, with a higher selection pressure comes the susceptibility to getting stuck in local optima. In this system, various migration intervals will be experimented with to find the best solution for the neural network training.
3.4. Migration Size
A further important factor is the number of individuals
which are exchanged. According to these studies the
migration size has to be adapted to the size of a
subpopulation of an island. When one migrates only a
very small percentage, the influence of the exchange is
negligible, but if too many individuals are migrated, these
new individuals take over the existing population, leading
to a decrease of the global diversity. In this system,
migration size will also be investigated which can yield
the best performance.
3.1. Migration Topology
4. System Design
There are four types of migration topology. They are ring,
torus, random and fully connected topology. This system
investigates the ring topology.
3.2. Migration Strategy
A migration strategy consists of two parts. The first part is
the selection of individuals, which shall be migrated to
another island. The second part is to choose which
Fig. 1 System Flow Diagram
The island model uses a different subpopulation on each island. Each island runs its own execution of the DE algorithm. At the start of the algorithm each island initializes its population, or replaces its subpopulation with one migrated from a neighbour. Mutation, crossover and selection are performed on the individual chromosomes. If the migration interval has not fired, the next iteration begins within the island; otherwise, a portion of its own population and of its neighbour's is selected for migration. When migration occurs, an island sends a subpopulation to its neighbouring island; the neighbour replaces a portion of its own population with the subpopulation sent by its neighbour, and the algorithm continues.
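The migration step just described can be sketched as follows. This is a minimal, illustrative Python version assuming the ring topology and best-replace-worst strategy of Section 3; the island count, migration interval and the use of one third of the population as migration size mirror the settings reported in Section 5, but every helper name here is the sketch's own, not taken from the paper.

  import random

  def de_generation(pop, fitness, F=0.5, CR=0.9):
      # one DE generation: mutation, binomial crossover and greedy selection
      new_pop = []
      for i, parent in enumerate(pop):
          r1, r2, r3 = random.sample([k for k in range(len(pop)) if k != i], 3)
          trial = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                   if random.random() < CR else parent[j]
                   for j in range(len(parent))]
          new_pop.append(trial if fitness(trial) < fitness(parent) else parent)
      return new_pop

  def migrate(islands, migration_size, fitness):
      # ring topology: island k sends copies of its best individuals to island (k+1) % n,
      # where they replace that island's worst individuals (best-replace-worst strategy)
      n = len(islands)
      emigrants = [sorted(isle, key=fitness)[:migration_size] for isle in islands]
      for k in range(n):
          dest = islands[(k + 1) % n]
          dest.sort(key=fitness, reverse=True)          # worst individuals first
          dest[:migration_size] = [list(ind) for ind in emigrants[k]]

  def island_de(fitness, dim, n_islands=4, pop_size=21, generations=100, migration_interval=5):
      migration_size = pop_size // 3                    # one third of the population, as in Section 5
      islands = [[[random.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
                 for _ in range(n_islands)]
      for g in range(1, generations + 1):
          islands = [de_generation(isle, fitness) for isle in islands]
          if g % migration_interval == 0:               # migration fires every few generations
              migrate(islands, migration_size, fitness)
      return min((ind for isle in islands for ind in isle), key=fitness)

  best = island_de(lambda w: sum(v * v for v in w), dim=5)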
5. Experimental Results
The system currently experiments with the island model using a simple ring topology and the migration strategy in which the best individuals replace the worst individuals. The island model uses the iteration count as the migration interval, and one third of the old population is migrated and replaced. The learning rate of this system is set to 0.01. Four programs have been developed: Island Differential Evolution Feed Forward Neural Network (IDENN), Differential Evolution Feed Forward Neural Network (DENN), Particle Swarm Optimization Feed Forward Neural Network (PSONN) and Genetic Algorithm Feed Forward Neural Network (GANN), using four datasets: XOR, Cancer, Heart and Iris. The results for each dataset are compared and analyzed based on the convergence rate and classification performance. All algorithms are run for different numbers of iterations; among them, the MSE (Mean Square Error) of the island DE is much lower than that of the other algorithms.
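The paper does not spell out how a network is encoded for DE, so the following is only a sketch under a common assumption: all weights and biases of a one-hidden-layer feed-forward network are flattened into a single DE vector, and the fitness of an individual is the mean squared error on the training set (here the XOR data). The vector layout and helper names are the sketch's own.

  import math

  def mse_fitness(weight_vector, samples, n_in, n_hidden):
      # fitness of one DE individual: MSE of a 1-hidden-layer feed-forward network
      # whose weights are taken from the flat vector (layout is an assumption)
      w1_end = (n_in + 1) * n_hidden                       # input->hidden weights and biases
      w1 = weight_vector[:w1_end]
      w2 = weight_vector[w1_end:]                          # hidden->output weights and bias
      error = 0.0
      for inputs, target in samples:
          hidden = []
          for h in range(n_hidden):
              s = w1[h * (n_in + 1)]                       # bias
              for k, x in enumerate(inputs):
                  s += w1[h * (n_in + 1) + 1 + k] * x
              hidden.append(1.0 / (1.0 + math.exp(-s)))    # sigmoid activation
          out = w2[0] + sum(w2[1 + h] * hidden[h] for h in range(n_hidden))
          out = 1.0 / (1.0 + math.exp(-out))
          error += (target - out) ** 2
      return error / len(samples)

  # XOR as a toy training set; a DE individual then has (2+1)*2 + (2+1) = 9 genes,
  # and this fitness can be handed to the DE or island-model sketches above
  xor_samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
  fitness = lambda w: mse_fitness(w, xor_samples, n_in=2, n_hidden=2)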
5.1 Results on XOR Dataset
Table 1: Result of IDENN, DENN, PSONN and GANN on XOR Dataset
                      IDENN    DENN       PSONN       GANN
Learning Iteration    20       41         51          61
Error Convergence     0.003    0.0048865  0.00473763  0.04125
Convergence Time      4 sec    7 sec      12 sec      37 sec
Classification (%)    99.2     98.97      95.17       85.66
Fig. 2 MSE on XOR dataset
5.2 Results on Cancer Dataset
Table 2: Result of IDENN, DENN, PSONN and GANN on Cancer Dataset
                      IDENN    DENN     PSONN     GANN
Learning Iteration    200      443      219       10000
Error Convergence     0.00201  0.00499  0.004870  0.50049
Convergence Time      103 sec  195 sec  110 sec   273 sec
Classification (%)    99.01    98.40    98.65     97.73
Fig. 3 MSE on Cancer dataset
5.3 Results on Iris Dataset
Table 3: Result of IDENN, DENN, PSONN and GANN on Iris Dataset
                      IDENN   DENN       PSONN     GANN
Learning Iteration    28      61         818       10000
Error Convergence     0.0205  0.049803   0.049994  1.88831
Convergence Time      5 sec   16 sec     170 sec   256 sec
Classification (%)    96.39   95.014972  93.86     97.72
Fig. 4 MSE on Iris dataset
5.4 Results on Heart Dataset
Table 4: Result of IDENN, DENN, PSONN and GANN on Heart Dataset
                      IDENN   DENN      PSONN    GANN
Learning Iteration    40      58        10000    9000
Error Convergence     0.039   0.048925  1.46392  3.00
Convergence Time      7 sec   16 sec    170 sec  110 sec
Classification (%)    88.93   85.50     89.56    92.83
Fig. 5 MSE on Heart dataset
6. Comparison of IDENN, DENN, PSONN and GANN
References
[1] A. Majdi, M. Beiki: Evolving Neural Network Using Genetic
Algorithm for Predicting the Deformation Modulus of Rock
Masses. International Journal of Rock Mechanics and Mining
Science, vol. 47(2), pp. 246-253, 2010.
[2] Lee, K., Booth, D., and Alam, P.: A Comparison of Supervised
and Unsupervised Neural Networks in Predicting Bankruptcy of
Korean Firms. Expert Systems with Applications, vol. 29(1), pp.1–
16, 2005.
[3] Landajo, M., Andres, J. D., and Lorca, P.: Robust Neural
Modeling for the Cross-Sectional Analysis of Accounting
Information. European Journal of Operational Research, vol.
177(2), pp. 1232–1252, 2007.
[4] Razi, M. A., and Athappily, K.: A Comparative Predictive
Analysis of Neural Networks (NNs), Nonlinear Regression and
Classification and Regression Tree (CART) Models. Expert
Systems with Applications, vol. 2(1), pp. 65–74, 2005.
[5] Behrman, M., Linder, R., Assadi, A. H., Stacey, B. R., and
Backonja, M. M.: Classification of Patients with Pain Based on
Neuropathic Pain Symptoms: Comparison of an Artificial Neural
Network against an Established Scoring System. European Journal
of Pain, vol. 11(4), pp. 370–376, 2007.
[6] Yesilnacar, E., and Topal, T.: Landslide Susceptibility
Mapping: A Comparison of Logistic Regression and Neural
Networks Methods in a Medium Scale Study, Hendek region
(Turkey). Engineering Geology, vol. 79 (3–4), pp. 251–266,
2005.
[7] Dvir, D., Ben-Davidb, A., Sadehb, A., and Shenhar, A. J.:
Critical Managerial Factors Affecting Defense Projects Success: A
Comparison between Neural Network and Regression Analysis.
Engineering Applications of Artificial Intelligence, vol. 19, pp.
535–543, 2006.
[8] Gan, C., Limsombunchai, V., Clemes, M., and Weng, A.:
Consumer Choice Prediction: Artificial Neural Networks Versus
Logistic Models. Journal of Social Sciences, vol. 1(4), pp. 211–
219, 2005.
[9] Chiang, W. K., Zhang, D., and Zhou, L.: Predicting and
Explaining Patronage Behavior toward Web and Traditional Stores
Using Neural Networks: A Comparative Analysis with Logistic
Regression. Decision Support Systems, vol. 41, pp. 514–531,
2006.
[10] Chang, L. Y.: Analysis of Freeway Accident Frequencies:
Negative Binomial Regression versus Artificial Neural Network.
Safety Science, vol. 43, pp. 541–557, 2005.
[11] Sharda, R., and Delen, D.: Predicting box-office success of
motion pictures with neural networks. Expert Systems with
Applications, vol. 30, pp. 243–254 , 2006.
[12] Nikolopoulos, K., Goodwin, P., Patelis, A., and
Assimakopoulos, V.: Forecasting with cue information: A
comparison of multiple regressions with alternative forecasting
approaches. European Journal of Operational Research, vol.
180(1), pp. 354–368, 2007.
[13] S. Curteanu and M. Cazacu: Neural Networks and Genetic Algorithms Used for Modeling and Optimization of the Siloxane-Siloxane Copolymers Synthesis. Journal of Macromolecular Science, Part A, vol. 45, pp. 123–136, 2007.
[14] C. Lisa and S. Curteanu: Neural Network Based Predictions
for the Liquid Crystal Properties of Organic Compounds.
Fig. 6 Comparison of correct classification Percentage IDENN, DENN,
PSONN and GANN
For the XOR dataset, the results show that IDENN has better convergence time and correct classification percentage; IDENN converges in a short time with a high correct classification percentage. For the Cancer dataset, the IDENN classification results are better than those of DENN, PSONN and GANN. For the Iris dataset, the GANN classification results are better than those of IDENN, DENN and PSONN. For the Heart dataset, the GANN classification results are likewise better than those of IDENN, DENN and PSONN. For overall performance, the experiments show that IDENN significantly reduces the error with a minimum number of iterations and produces feasible results in terms of convergence time and classification percentage.
7. Conclusion
This system presents a neural network training algorithm using an island model based differential evolution algorithm. Exploiting the global search power of the differential evolution algorithm in conjunction with the island model boosts the training performance, and the system converges quickly to a lower mean square error. The island model encourages diversity among the individuals across islands, which increases the search capability, and through migration the islands can share their best experiences with each other. By using the island model rather than a single DE, the algorithm gains the advantages of parallel problem solving and information sharing, which lead to a faster global search.
Computer-Aided Chemical Engineering, vol. 24, pp. 39–45,
2007.
[15] Fernandes and L.M.F. Lona: Neural Network Applications in Polymerization Processes, Brazilian Journal of Chemical Engineering, vol. 22, pp. 323–330, 2005.
[16] K. V. Price, R. M. Storn, and J. A. Lampinen, Differential Evolution: A Practical Approach to Global Optimization. Berlin, Germany: Springer-Verlag, 2005.
[17] R. L. Becerra and C. A. Coello Coello, “Cultured differential
evolution for constrained optimization,” Comput. Methods Appl.
Mech. Eng., vol. 195,no. 33–36, pp. 4303–4322, Jul. 1, 2006.
[18] R. Hecht-Nielsen, "Theory of the Backpropagation Neural Network", 1989.
[19] Scott E. Fahlman: An Empirical Study of Learning Speed in
Back-Propagation Networks, September 1988.
[20] M. Riedmiller, "Rprop – Description and Implementation Details", Technical report, 1994.
[21] Kennedy, J.; Eberhart, R. "Particle Swarm Optimization".
Proceedings of IEEE International Conference on Neural
Networks, 1995.
[22] Kennedy, J.; Eberhart, R.C. Swarm Intelligence. Morgan
Kaufmann, 2001.
[23] Zhang, G.P., Neural networks for classification: a survey.
IEEE Transactions on Systems Man and Cybernetics, 2000.
[24] Rudolph, G., Local convergence rates of simple evolutionary
algorithms with cauchy mutations, 1997.
[25] R. Storn and K. Price, “Differential evolution—A simple and
efficient heuristic for global optimization over continuous spaces,”J.
Glob. Optim. vol. 11, no. 4, pp. 341–359, Dec. 1997.
[26] K. V. Price, R. M. Storn, and J. A. Lampinen, Differential Evolution—A Practical Approach to Global Optimization. Berlin, Germany: Springer-Verlag, 2005.
[27] D. Karaboga and S. Okdem, “A simple and global
optimization algorithm for engineering problems: Differential
evolution algorithm,” Turkish J. Elect. Eng. Comput. Sci., vol. 12,
no. 1, pp. 53–60, 2004.
[28] D. Whitley, "Applying Genetic Algorithms to Neural Network
Problems," International Neural Network Society p. 230, 1988.
Htet Thazin Tike Thein received her B.C.Tech. and M.C.Tech. degrees in Computer Technology from the University of Computer Studies, Yangon, Myanmar. She is currently a Ph.D. student at the University of Computer Studies, Yangon, Myanmar, working toward her Ph.D. degree in neural networks and classification.
3D-Diagram for Managing Industrial Sites Information
Hamoon Fathi1, Aras Fathi2
1 Structural Engineering Department, Kurdistan University, ASCE member
Sanandaj, Iran
hamoon.fathi@gmail.com
2 Food Technology Engineering Department, Islamic Azad University, Science and Research Branch, Tehran, Iran
arasfathi@yahoo.com
Abstract
This paper proposes a 3D-model for managing industrial site information by coding and prioritizing the sources. In this study, the management of industrial and construction site information is organized according to the PMBOK standard. The model was investigated on three different construction sites and factories: a building site, a tractor assembling factory and a food production factory. The results show that the 3D-model adapts flexibly to different problems and is in good agreement with the site information.
Keywords: 3D-modeling, industrial site, information management, PMBOK standard.
1. Introduction
The 3D-diagram is one of the new automated methods for managing industrial information and risk management. Many models used for managing construction projects are based on a systematic structure [1-5], and some research has tried to develop the experience and knowledge of modeling construction management [6-8]. The main purpose of the 3D-diagram is managing and estimating the risks of the project; this information can also build a way to analyze the future of the project. Many researchers have studied the risk of construction operation analysis [9-18], and some of these studies discuss industrial risk and cost management [18-25]. Reducing cost and time and improving quality are the objectives of engineering simulations, and site information and source data are the feed of such simulations. This study assembles the site information and source data gathered from the industrial sites into 3D-diagrams. Several studies have tried to simulate site information with computer systems [26-31]; computer software and models help to simulate the project dynamically. Large-scale projects are classified as complex dynamic systems. Traditional formal modeling tools are not dynamic, so 3D-models are used to improve and build dynamic management. Furthermore, producing schedules for a project by computer needs a uniform format for translating the information to the managers, and the 3D-diagrams' algorithm supplies this uniform format for managing site information. This study proposes a 3D-diagrams' algorithm for risk, cost and time management. Three different industrial projects were tested with 3D-diagrams for different kinds of source management; the manager limitations are defined with the PMBOK standards.
2. Methodology
This study proposes a 3D-model for managing industrial site information. The methodology of the model is based on the PMBOK standards: the sources are combined by coding and prioritizing, and source types in different projects are categorized with the same standards. Cost, timetable, communication, risk and limitations are some parts of uniform management in the PMBOK standards. Human resources and quality are prioritized together with the other sources at the same level (Fig. 1).
Fig. 1 Uniform management at PMBOK standards
2.1 Cost management
Budget sources are one of the important parts of a project, and the other sources have a direct relation with cost. Cost management needs estimating, predicting and controlling over the whole duration of the project. Cost is one of the dimensions of this methodology and grows with a uniform gradient. Estimating the cost of the activities requires combining all the sources and the contractors' responsibilities for their work; however, the normal project cost can be predicted by suitable subprograms.
2.2 Time management
Most projects are programmed against time, whereas managing the project time is related to cost. Most projects have a simple diagram for estimating the costs or incomes; however, all of the sources' information is used for simulating the cost-time diagram. The other sources are installed by coding and prioritizing along the third direction of this methodology. In fact, the normal cost-time diagram alone is not adequate for balancing time and budget, because the cost behaviour is usually not linear over the full range of activities and costs keep growing in the inactive parts.
Normally, the value of weight-cost items is important at critical and risk points in project management. Items that have some parts under budget cover in the 3D-diagram are financed by the other sources; items that are completely out of the budget line are contract and part-time items, and the budget of these items depends on the contract type. Much design software is used to estimate project behaviour, but most of these programs are based on time and spent budget. The 3D-model algorithm can be used by mathematical software alongside other project-management software.
3. Algorithm of the 3D-model
Most uniform management is stabilized on the timetable, and time is the determining factor for calculating project progress in 2D-modeling. Controlling by timetable has some limitations: time has no negative form and grows without any activities. Human management and quality are two activities that incur cost even while the project has stopped, so the total cost can be another factor for calculating project progress.
The 3D-model applies both factors, cost and time, for managing the industrial site information. The proposed model estimates the project's behaviour using combinations of cost and time. The PMBOK standards present the usual curve between total cost and time in normal constructions (Fig 2). Total cost and time in the project are related, and this relation depends on the other activities and sources.
Fig. 2 Cost-Time curve
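The cost-time curve in Fig. 2 is the familiar PMBOK-style S-curve. The paper does not give its formula, so the short Python sketch below only assumes a logistic shape to illustrate how cumulative cost can be tabulated against project time; the numbers are invented for the example.

  import math

  def s_curve_cost(t, total_cost, duration, steepness=10.0):
      # illustrative S-shaped cumulative cost: slow start, fast middle, slow finish
      # (the logistic shape is an assumption of this sketch, not the PMBOK formula)
      x = t / duration                     # normalised project time in [0, 1]
      return total_cost / (1.0 + math.exp(-steepness * (x - 0.5)))

  # cumulative cost at six-month checkpoints of a 46-month project (illustrative numbers)
  curve = [round(s_curve_cost(m, 250000, 46)) for m in range(0, 47, 6)]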
3.1 Algorithm
The algorithm of the 3D-model of the management site is simple: each activity is divided into branches with a specified height, and those items are input into the algorithm with a priority (Fig 3). Each activity has a milestone and a work breakdown structure (WBS), and the sub-management of the WBS is characterized by codes for the items. The whole path of the project starts from the meeting point of the cost and time vectors and ends at the opposite point, at the total cost and time (Fig 4).
The 3D-diagram is a profile with three dimensions: cost is arranged along one axis, time along another, and the third axis pertains to the other sources. A project's approach begins from the origin of the coordinate time-cost curve and runs to the end point. The direct platform from the start point to the end of the project constitutes an S-shaped form defined by the PMBOK standards; this approach conducts the managers to the shortest vector, and managers shift the project's approach to control cost or time.
Fig. 4 The algorithm of WBS (subprogram)
3.2 3D-models
A perfect project has activities with lag and lead relationships, which are categorized at the same level with different codes. The main algorithm does not include temporary and limited works, subcontracts and accidents, so the 3D-model forecasts these items on the cost vector automatically. In fact, the 3D-model shows the balance and optimizes the paths between the budget and cost points of the project at different times. Figure 5 provides a colored 3D-model of a sample project. Risk management and the management of critical conditions, based on cost and time, are performed by changing the vector of the project from the start point to the end point; the change of the project vector is used for decreasing the time or cost of the project.
These reductions participate by erasing some parts of the activities or by growing the cost or time. One of the most important abilities of the 3D-model management is estimating the effect of cost on the project duration; for example, Figure 5 shows the sample project with a recession in the cost sources. These methods are used by the commander and seniors of the project, while the sector managers have a WBS for their own activities and input them into the special diagrams.
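To make the structure above concrete, the following Python sketch shows one possible data layout for 3D-diagram items: each activity carries a cost, a time window and a coded, prioritized source, and the cumulative cost-time path of the project can be tabulated from it. The field names and the tiny example are the sketch's own assumptions, not taken from the paper.

  from dataclasses import dataclass

  @dataclass
  class Activity:
      wbs_code: str        # WBS code of the item (third axis of the 3D-diagram)
      source: str          # source category, e.g. material, human resource, quality
      priority: int        # priority used when stacking items on the third axis
      start_month: int
      end_month: int
      cost: float          # total cost of the activity

  def cumulative_cost(activities, months):
      # spread each activity's cost evenly over its time window and accumulate it
      # month by month (a simple stand-in for the cost-time path of the project)
      monthly = [0.0] * (months + 1)
      for a in activities:
          span = max(1, a.end_month - a.start_month)
          for m in range(a.start_month, a.end_month):
              monthly[m] += a.cost / span
      path, running = [], 0.0
      for m in range(months + 1):
          running += monthly[m]
          path.append((m, round(running, 1)))
      return path

  # a toy project with two coded activities (illustrative values only)
  project = [
      Activity("A-01", "material", 1, start_month=0, end_month=6, cost=12000),
      Activity("A-02", "human",    2, start_month=3, end_month=9, cost=9000),
  ]
  print(cumulative_cost(project, months=9))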
Fig. 3 The algorithm of the management site's 3D-model
Fig. 5 3D-model of a sample project
4. Test Results and Discussion
This part discusses the 3D-modeling results and their application to real industrial site information. Data and information are used to develop the 3D-model in the third direction, while the other two directions are cost and time. The purpose of this model is to provide comprehensive guidance for an information bank on risk management.
4.1 Building construction site
A residential tower with 80 houses was controlled by the cost-time 3D-model. The houses have the same infrastructure and were built in a 12-floor apartment building in Sanandaj city (Fig 6). The project started in May 2009 and finished in March 2013. Supplying and producing materials on the building site is one of the important items. Table 1 lists some of the required materials, cost and budget, sources and storage capacity in the limited site at the center of the city (input data). The site environment affected the time and cost of the budget. Figure 7 presents the time of producing, type and value, sources and daily cost of the materials, together with some other information, in the 3D-diagram. The 3D-diagram includes time, cost and materials. The materials of the construction project were classified into 16 groups on one axis. Groups M1 to M5 are non-storable materials and are used upon receipt at the construction site.
M6 to M16 are storable materials, sorted according to their time of use and the storage's maximum capacity. This diagram contains a large amount of information; for example, the M14 group's material is loaded in a constant, specified amount from the project's start point (May 2009) to the end line (March 2013), so M14's unloading and reloading is a constant number. M14 denotes water in this project. This project is a sample, and some of the management results are:
• M15 and M16 are symbols for doors, valves, pipes and decoration materials, which were logged in the last year of the project. As no storage capacity was assigned to them, they were temporarily stored in some of the building's rooms. Accordingly, material purchasing was postponed until construction of the building roofs, and annual inflation caused an increase in the cost of providing the material.
• Most of the entering materials are stored discontinuously, and new purchasing is done after most of the material has approximately been used. This process enables the managing team to assign a large part of the storehouse to expensive and imported material. However, depletion of unstored material could be counted as a risk and cause an increase in the project's duration.
4.2 Tractor assembling factory
This part investigates a tractor assembling factory in Sanandaj city with 3D-modeling management (Fig 8). The task of this factory is assembling and repairing its products. One of the important parts of the assembling factory is human resource management. Table 2 includes the worker numbers, wages and abilities and some input information for the 3D-diagram. The investigation results of the 3D-modeling of the human resources management are shown in Fig 9. Daily workers' numbers, numbers of skilled workers and engineers, the amount of overtime hours, and the loading and unloading of employees are characterized in Figure 9. This 3D-diagram categorizes the employees into nine groups, characterized by the symbols E1 to E9. Numbers of working days, fiscal rows and start-to-end working periods are specified in the 3D-diagram by means of numbers and percentages. Negative percentages are replaced with zero, and positive percentages are shown cumulatively in the last step of the chart. This 3D-diagram lays out the minimum to maximum salary increment for groups E1 to E9, with E9 having the maximum payment; in fact, E9 represents the management team.
E6 is related to night-shift workers with a 70% bonus adjusting the base salary. Risk management is one of the consequences of this diagram: the amount of personnel costs reached the critical point in the autumn and winter of 2011, so one of the solutions could be reducing some parts of the night shifts in those seasons to bring the total cost away from the critical limitations.
The cumulative part of the salaries shows equality between the total and critical cost. The maximum value is due to repairing the tractor products on the assembly line. Environmental conditions and temperature affect the workers' efficiency; for this reason, the factory needs to postpone the personnel adjustment or fund staff salaries in previous months. One of the functionalities of the time-cost 3D-diagram is to define the risks and find the best solutions.
4.3 Food production factory
Cost and quality are dependent parameters that have a direct effect on the final product and marketing (Fig 10). One of the most important parts of food factories is quality control; managing quality attends to marketing results and health controls. Figure 11 suggests the optimum cost of investigation for the different properties of the final food product. This kind of 3D-diagram is mostly used by sales and marketing senior executives, who, in cooperation with R&D, decide on the size, flavors, advertising and product titles. The results of this diagram present a long-term program for sales improvement; of course, these results are not available until the end of one complete sale period. The product's sale duration, total costs, consumer feedback and other sale information form the 3D-diagram.
Table 1: Materials information and input data
Material      Code  Start point  End point  Budget  Cost  Capacity      Value
Concrete      M1    2009-10      2011-10    59000   3000  Non-storable  500
welding       M2    2009-10      2011-7     15000   3000  Non-storable  0
Plasticizer   M3    2009-10      2011-7     24600   3000  Non-storable  1100
Casing        M4    2009-5       2011-10    24000   3000  Non-storable  0
Installation  M5    2010-1       2011-1     12000   3000  Non-storable  0
Stucco        M6    2009-5       2012-12    21000   1000  8 parts       1000
Brickwork     M7    2009-10      2012-10    18000   1000  7 parts       1000
wiring        M8    2011-3       2012-8     8500    1000  6 parts       700
Tarry         M9    2010-9       2012-6     8500    1000  6 parts       700
Alabaster     M10   2010-1       2012-5     7000    1000  6 parts       500
Scoria        M11   2010-1       2012-5     7000    1000  6 parts       500
Calk          M12   2010-1       2012-2     5000    1000  5 parts       0
Crockery      M13   2011-4       2013-1     8500    1000  5 parts       500
Water         M14   2009-5       2013-3     10000   1000  10 parts      0
Window        M15   2012-10      2013-3     13000   1000  4 parts       1500
Door          M16   2012-10      2013-3     13000   1000  4 parts       1500
Fig. 6 A residential tower with 80 houses
Table 2: Employee's information and input data
Group  Activity       Number  Working day  Wage  Start point  End point
E1     Guardsman      3       455          7     2011-1-1     2012-3-28
E2     Journeyman     12      455          14    2011-1-1     2012-3-28
E3     Serviceman     5       225          21    2011-2-15    2011-7-15
E4     Repairman      5       180          21    2011-6-15    2011-9-28
E5     Engineer       7       360          30    2011-3-1     2012-1-15
E6     Night Workers  7       455          32    2011-1-1     2012-3-28
E7     Engineer       5       270          37    2011-3-10    2011-10-1
E8     Inspector      6       225          47    2011-3-1     2012-1-28
E9     Manager        3       180          60    2011-7-1     2012-3-28
Fig. 8 Tractor assembling factory
Fig. 7 The 3D-diagram of time, cost and materials
Fig. 9 3D-modeling of the human sources management
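A quick, illustrative reading of Table 2 can be scripted as below: rough total labour cost per employee group, computed as number of workers × working days × wage. The paper does not state the wage unit or whether it is per day, so this assumption is made explicit in the comment and the results should only be read as relative magnitudes.

  # rough total labour cost per group from Table 2, assuming the wage is per working day
  groups = {
      "E1": (3, 455, 7),   "E2": (12, 455, 14), "E3": (5, 225, 21),
      "E4": (5, 180, 21),  "E5": (7, 360, 30),  "E6": (7, 455, 32),
      "E7": (5, 270, 37),  "E8": (6, 225, 47),  "E9": (3, 180, 60),
  }
  cost_per_group = {g: n * days * wage for g, (n, days, wage) in groups.items()}
  for g, cost in sorted(cost_per_group.items()):
      print(g, cost)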
Fig. 10 Food production factory
Fig. 11 3D-diagram of the food production factory
For example, this diagram includes four items, each with two equilibrium coefficients to balance the price and numbers. The presented coefficients provide information to balance the items in a uniform mathematical form. The size and shape of the products is one of these marketing items; the results show that an alternative size does not at first have any significant effect on the sale rate, however the coincident sale of six different sizes together caused a growing sale rate.
Helping to decide between repetition and diversity of advertisements is another consequence of this diagram. Advertising is the main factor in recommending the product to consumers, and investigations show that the efficiency of an advertisement decreases after a repeating period. The suitable time to change the advertisement can be taken from the 3D-diagram: in this case, after 72 repeats of an advertisement the sales growth stopped, denoting that more repetition would not be worthwhile anymore.
5. Conclusions
This study investigated the feasibility of using 3D-diagrams for collecting project site information in a uniform format. The methodology targets reducing time and cost in risk management. The results focus on real industrial information to analyze project conditions before and after construction. The following conclusions can be drawn from the experimental results and the new model:
• The 3D-diagrams for risk management are rational and compatible with industrial results and information.
• The 3D-diagrams analyze the project conditions with cost and time at the same level of valuation, so the diagrams estimate the future in order to manage the project and remove deleterious items.
[17] H. Tanaka, L.T. Fan, F.S. Lai and K. Toguchi, "Fault tree
analysis by fuzzy probability", IEEE Transactions on
Reliability, Vol. 32, No. 5, 1983, pp. 453-457.
[18] S. Azampanah, A. Eskandari, A. Khademzadeh and F. Karimi, "Traffic-Aware Selection Strategy for Application-Specific 3D NoC", Advances in Computer Science: an International Journal, Vol. 2, Issue 4, No.5, September 2013, pp. 107-114.
[19] L.A. Zadeh, "Probability of measure of fuzzy events",
Journal of Mathematical Analysis and Application, Vol. 23,
1968, pp. 421-427.
[20] M.I. Kellner, R.J. Madachy and D.M. Raffo, "Software
process simulation modeling: Why? What? How?" Journal
of Systems and Software, 46(2-3), 1999, April 15, pp. 91105.
[21] G. Rincon, M. Alvarez, M. Perez and S. Hernandez, "A
discrete-event simulation and continuous software
evaluation on a systemic quality model: An oil industry
case", Information & Management, 42, 2005, pp. 10511066.
[22] J.K. Shank and V. Govindarajan, "Strategic Cost
Management: The Value Chain Perspective", Journal of
Management Accounting Research 4, 1992, pp. 179-197.
[23] A. Maltz and L. Ellram, "Total cost of relationship: An
analytical framework for the logistics outsourcing decision",
Journal of Business Logistics 18 (1), 1997, pp. 45-55
[24] L.M. Ellram, "A Framework for Total Cost of Ownership",
International Journal of Logistics Management 4 (2), 1993,
pp. 49-60.
[25] B. Boehm and R. Turner, "Using Risk to Balance Agile and
Plan-Driven Methods", IEEE Computer 36(6), June 2003,
pp. 57-66.
[26] K.M. Chandy and J. Misra, "Distributed Simulation: A
Case Study in Design and Verification of Distributed
Programs", IEEE Transactions on Software Engineering SE5(5), 1978, pp. 440-452.
[27] R.M. Fujimoto, "Performance Measurements of Distributed
Simulation Strategies", Transactions of the Society for
Computer Simulation 6(2), 1989, pp. 89-132.
[28] H. Mehl, M. Abrams and P. Reynolds, "A Deterministic
Tie-Breaking Scheme for Sequential and Distributed
Simulation: Proceedings of the Workshop on Parallel and
Distributed Simulation", Journal Society for Computer
Simulation. 24, 1992, pp. 199-200.
[29] B.D. Lubachevsky, "Efficient Distributed Event-Driven
Simulations of Multiple-Loop Networks", Communications
of the ACM 32(1), 1989, pp. 111-123.
[30] P. B. Vasconcelos, "Economic growth models: symbolic
and numerical computations", Advances in Computer
Science: an International Journal, Vol. 2, Issue 5, No.6 ,
November 2013, pp. 47-54.
[31] U. Dahinden, C. Querol, J. Jaeger and M. Nilsson, "Exploring the use of computer models in participatory integrated assessment – experiences and recommendations for further steps", Integrated Assessment 1, 2000, pp. 253-266.
References
[1] P. Carrillo, "Managing knowledge: lessons from the oil and
gas sector", Construction Management and Economics, Vol.
22, No 6, 2004, pp. 631-642.
[2] I. Nonoka and H. Takeuchi, "The Knowledge Creating
Company: How Japanese Companies Create the Dynamics
of Innovation. New York", NY: Oxford University Press,
1995. 304 p.
[3] T. Maqsood, A. Finegan and D. Walker, " Applying project
histories and project learning through knowledge
management in an Australian construction company", The
Learning Organization, Vol. 13, No1, 2006, pp. 80-95.
[4] N. Bahra, "Competitive Knowledge Management. New
York", NY:Palgrave, 2001. 258 p.
[5] J.M. Kamara, A.J. Chimay and P.M. Carillo, "A CLEVER
approach to selecting a knowledge management strategy",
International Journal of Project Management, Vol. 20, No 3,
2002, pp. 205-211.
[6] A. Kaklauskas, E.K Zavadskas and L. Gargasaitė, "Expert
and Knowledge Systems and Data-Bases of the Best
Practice", Technological and Economic Development of
Economy, Vol. X, No3, 2004, pp. 88-95.
[7] K.B. DeTienne and R.B. Jensen, "Intranets and business
model innovation: managing knowledge in the virtual
organization. In: Knowledge Management and Business
Model Innovation / Y. Malhotra (Ed.). Hershey", PA: Idea
Group Publishing, 2001, pp. 198-215.
[8] S. Ogunlana, Z. Siddiqui, S. Yisa and P. Olomolaiye,
"Factors and procedures used in matching project managers
to construction projects in Bangkok", International Journal
of Project Management, Vol. 20, 2002, pp. 385-400.
[9] B.M Ayyub and A. Haldar, "Decision in construction
operation", Journal of Construction Engineering and
Management, ASCE, Vol. 111, No. 4,1985, pp. 343-357.
[10] H.N. Cho, H.H. Choi and Y.B. Kim, "A risk assessment
methodology for incorporating uncertainties using fuzzy
concepts", Reliability Engineering and System Safety, Vol.
78, 2002, pp. 173-183.
[11] H.H. Choi, H.N Cho and J.W Seo, "Risk assessment
methodology for underground construction projects",
Journal of Construction Engineering and Management,
ASCE, Vol. 130, No. 2, 2004, pp. 258-272.
[12] S. Lee and D.W. Halpin, 'Predictive tool for estimating
accident risk", Journal of Construction Engineering and
Management, ASCE, Vol. 129, No. 4, 2003, pp. 431-436.
[13] D. Singer, "A fuzzy set approach to fault tree and reliability
analysis", Fuzzy Sets and Systems, Vol. 34, 1990, pp. 145155.
[14] P.V. Suresh, A.K. Babar and V.V. Raj, "Uncertainty in
fault tree analysis: A fuzzy approach", Fuzzy Sets and
Systems, Vol. 83, 1996, pp. 135-141.
[15] J.H.M. Tah and V. Carr, "A proposal for construction
project risk assessment using fuzzy logic", Construction
Management and Economics, Vol. 18, 2000, pp. 491-500.
[16] J.H. Tah and V. Carr, "Knowledge-based approach to
construction project risk management", Journal of
Computing in Civil Engineering, ASCE, Vol. 15, No. 3,
2001, pp. 170-177.
Benefits, Weaknesses, Opportunities and Risks of SaaS
adoption from Iranian organizations perspective
Tahereh Rostami1, Mohammad Kazem Akbari 2 and Morteza Sargolzai Javan 3
1
Taali University of Technology
Computer Engineering and IT Department
Qom, Iran
msrostami.tahereh@gmail.com
2
Amirkabir University of Technology
Computer Engineering and IT Department
Tehran, Iran
akbarif@aut.ac.ir
3
Amirkabir University of Technology
Computer Engineering and IT Department
Tehran, Iran
msjavan@aut.ac.ir
Abstract
Software as a Service (SaaS) is a new mode of software deployment whereby a provider licenses an application to customers for use as a service on demand. SaaS is regarded as a favorable solution to enhance a modern organization's IT performance and competitiveness, helping organizations avoid capital expenditure and pay for functionality as an operational expenditure. SaaS has received considerable attention in recent years, and an increasing number of countries have consequently promoted the SaaS market. However, many organizations may still be reluctant to introduce SaaS solutions, mainly because of trust concerns: they may perceive more risks than benefits. This paper focuses on the analysis of Iranian organizations' understanding of the benefits, weaknesses, opportunities and risks of SaaS adoption.
Keywords: Cloud services, Software as a Service (SaaS), Adoption, Binomial test
1. Introduction
Effectively making use of information technology (IT) can constitute a sustainable source of an organization's competitiveness. Cloud computing has become a topic of tremendous interest as organizations struggle to improve their IT performance. Cloud services can be viewed as a cluster of service solutions based on cloud computing, which involves making computing, data storage, and software services available via the Internet. According to the U.S. National Institute of Standards and Technology (NIST), the major characteristics of cloud services are: on-demand self-service, ubiquitous network access, location-independent resource pooling, rapid elasticity, and measured service. Cloud services based on cloud computing can free an organization from the burden of having to develop and maintain large-scale IT systems; therefore, the organization can focus on its core business processes and implement the supporting applications to deliver competitive advantages [1]. Today, cloud services are regarded not only as favorable solutions to improve an organization's performance and competitiveness, but also as new business models for providing novel ways of delivering and applying computing services through IT. Generally, cloud services can be divided into three categories: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Among them, SaaS is regarded as a potential segment, and the utilization of SaaS solutions can lead to many benefits for enterprise users, with profound consequences for improving IT performance [2]. SaaS delivers applications' functionality through the medium of the Internet as a service [3]. Although many vendors have announced that SaaS adoption can bring promising benefits, some organizations are still reluctant to introduce SaaS solutions, due mainly to trust concerns (e.g., data security, network security). In fact, each service model (SaaS, PaaS, or IaaS) has its own security issues and calls for a different level of security requirement in the cloud
environment [2, 4]. Some surveys related to cloud
services have enhanced our understandings of the
factors involved in adoption of SaaS solutions. For
example, in The Adoption of Software as a Service in
Small and Medium-Sized Businesses (IDC #205798,
2007), the report remarked that while SaaS has strong
growth potential, small and medium-sized businesses
have not been adopting SaaS as quickly as originally
anticipated. Concern about data security is the factor
most frequently cited as discouraging the use of
SaaS. This report also revealed that marketing efforts
for the SaaS adoption should highlight the issue of
trust by enhancing users’ perceived benefits as well
as decreasing users' perceived risks. The rest of this paper is organized as follows. Section 2 presents the related work. Section 3 describes the SaaS concept. Section 4 identifies the factors affecting adoption of the cloud. Section 5 defines the methodology and presents the analysis.
3. Software as a Service
SaaS is an outsourcing innovation that transforms IT
resources into continuously provided services
[5]. That is, SaaS delivers an application’s
functionality through the Internet as a service and
thus, eliminates the need to install and run the
software on the client’s computer [3,6]. Therefore,
customers only pay for their use of the software
because there are no licensing fees [7, 8]. This unique
feature of SaaS has allowed the SaaS market to grow
six times faster than the packaged software market
and is expected to facilitate further development of
SaaS. According to a study by Gartner, SaaS is
predicted to become increasingly important in most
enterprise application software (EAS) markets. The
global SaaS market is expected to reach 12.1 billion
USD by 2014, reflecting a compound annual growth
rate of 26%. This rapid growth of the SaaS market
has had considerable influence on the software
market [9]. However, despite this rapid growth of the
SaaS market, some countries with SaaS markets in their initial stages have faced many problems in SaaS adoption. According to [9], in a new SaaS market,
inducing SaaS adoption is likely to be difficult due to
major inhibitors, such as limited integration and
flexibility.
In fact, not everyone is positive about SaaS adoption.
Some companies and market researchers are
particularly skeptical about its viability and
applicability in strong EAS markets such as ERP.
The main adoption barriers are said to be reliability
issues (i.e., stable access to services), information
security and privacy concerns (i.e., security breaches
and improper protection of firm data), and process
dependence (i.e., performance measurement and
service quality) [10]. Also there are vulnerabilities in
the applications and systems availability may lead to
the loss of valuable information and sensitive data or
may be the money. These concerns discourage the
enterprises to adopt the SaaS applications in the
cloud.
2. Related Works
The authors of [16] presume that SaaS adoption is a trust issue involving perceived benefits, such as paying only for what you use, monthly payments and costs, and perceived risks, such as data locality and security and network and web application security. The paper proposes a solution framework that employs a modified DEMATEL approach to cluster a number of criteria (perceived benefits and perceived risks) into a cause group and an effect group, respectively. The work in [17] attempts to develop an explorative model that examines important factors affecting SaaS adoption, in order to facilitate understanding of the adoption of SaaS solutions. An explorative model using partial least squares (PLS) path modeling is proposed and a number of hypotheses are tested, integrating TAM-related theories with additional imperative constructs such as marketing effort, security and trust. The study in [12] analyzes the opportunities, such as cost advantages, strategic flexibility and focus on core competencies, and the risks, such as performance, economic and managerial risks, associated with adopting SaaS as perceived by IT executives at adopter and non-adopter firms; it also develops a research model grounded in an opportunity-risk framework.
The work in [18] reports on research into SaaS readiness and adoption in South Africa as an emerging economy. Also discussed are the benefits of immediacy, superior IT infrastructure and software maintenance, and the challenges of limited customization, integration problems and perceived security concerns.
4. Benefits, Weaknesses, Opportunities
and risks of SaaS adoption
The benefits, weaknesses, opportunities and risks of SaaS are understood in a subjective manner, in which members collectively assess their cloud adoption. They were identified as follows:
4.1 SaaS Benefits
Access anywhere: one of the advantages of SaaS is that applications used over the network are accessible anywhere and anytime, typically with a browser.
Zero IT infrastructure: when business applications are delivered via SaaS, the complexity of the underlying IT infrastructure is all handled by the SaaS vendor.
Software maintenance: the SaaS vendors are responsible for any software updates, and these happen almost without the customer noticing.
Lower cost: the cost of using SaaS can be significantly lower than on-premise software, because the client only pays for what is used.
4.2 SaaS Weaknesses
Immature SaaS: some feel that the SaaS model is still immature and has yet to prove itself worthy, and are waiting for it all to settle down before moving forward, even if their own infrastructure is far from perfect.
Ambiguous and complex pricing: providers often offer different rates for their services, and usage, support and maintenance costs can differ. There is no public standard tariff that all providers are required to follow, so consumers are confused.
Dependence on the SaaS provider: the customer is dependent on the service provider; the service will develop or end based on the service provider's actions. Also, if the SaaS provider were to go bankrupt and stop providing services, the customer could experience problems in accessing data and therefore potentially in business continuity.
Dependence on the Internet: in most cases the service cannot be used offline; it is available only over the Internet.
4.3 SaaS Opportunities
Cost saving: no purchase of software licenses, reduced IT staff, and elimination of the cost of deployment and infrastructure lead to savings in the overall cost of the organization.
Strategic flexibility: SaaS adoption provides a great degree of flexibility regarding the utilization of easily scalable IT resources. This flexibility makes it easier for firms to respond to business-level volatility, because the SaaS provider handles fluctuations in IT workloads; in this regard, a client company can leverage a SaaS vendor's capacity to adapt to change.
Focus on core competencies: SaaS adoption will also facilitate firms' refocusing on their core competences. This refocusing is possible by completely shifting responsibility for developing, testing, and maintaining the outsourced software application and the underlying infrastructure to the vendor [12].
Access to specialized resources: SaaS clients benefit from economies of skills by leveraging the skills, resources, and capabilities that the service provider offers. These specialized capabilities (e.g., access to the latest technologies and IT-related know-how) could not be generated internally if the application were delivered in-house via an on-premises model [12].
4.4 SaaS Risks
Lack of control risk: when the SaaS goes down, business managers can find themselves feeling completely helpless, because they suddenly have no visibility of the infrastructure.
Legal issues: legal risks include governance, SLAs, service reliability and availability, etc.
Security risks: data protection and privacy are important concerns for nearly every organization. Hosting data under another organization's control is always a critical issue which requires stringent security policies employed by cloud providers; for instance, financial organizations generally require compliance with regulations involving data integrity and privacy. Security and privacy are multi-dimensional in nature and include many attributes such as protecting confidentiality and privacy, data integrity and availability [15].
Performance risks: performance risks are the possibility that SaaS may not deliver the expected level of service. Service downtime or slowness can inflict huge economic losses on the organization.
Economic risks: if the client wants to customize the application core, he needs to own it. Even if the client can use the standard core, he may want to build components on top of the core functionality (using APIs) to suit his needs with regard to integration and customization. Higher-than-expected costs may thus arise from additional or changing future requirements. In addition, increasing costs may emerge from the hold-up problem, because vendor ownership of the application core provides the vendor with more future bargaining power. This power enables him to increase prices, charge extra costs, or refuse to invest in backward-compatible interfaces for the client's customized code.
Internet resilience and bandwidth: SaaS may not provide application availability and/or network bandwidth as the provider originally stipulated [14]. System outages and connectivity problems can affect all customers at once, which implies a high value at risk [13].
Integration risk: the risk of problems related to the SaaS application's interoperability and integration with homegrown applications located on the client side. Potential losses due to performance risks can be significant because the day-to-day operations will not be optimally supported [12].
5. Analysis Methodology
To analyze the factors identified in the previous section, a questionnaire was distributed among 192 employees and managers of government agencies and private companies, of which 65 questionnaires were answered successfully. The frequency distribution of the respondents' profile is shown in Table 1.
Table 1: Frequency distribution of respondents' profile
Characteristic                       Sample composition
Type of Organization
  Governmental organizations         24 (37%)
  Private organizations              41 (63%)
Roles
  Organization manager               4 (6%)
  IT manager                         15 (23%)
  Computer and IT engineer           36 (56%)
  R&D                                8 (12%)
  Other cases                        2 (3%)
Working experience (years)
  1~3                                9 (14%)
  3~5                                19 (29%)
  5~10                               29 (45%)
  >10                                8 (12%)
We measured the validity with a factor analysis test and the reliability with Cronbach's alpha. The factor analysis gave KMO = 0.7, and the alpha test gave α = 0.8, which demonstrates high reliability and validity. The questionnaire is designed on a Likert scale. We evaluated the normal distribution of the data with the Kolmogorov-Smirnov test; according to the test results, the significance level of all components is less than 0.05, so the data distribution is non-normal. Therefore, in this study the binomial test was applied (significance level = 0.05, cut point = 3) using the SPSS software. The research hypotheses are as follows:
Hypotheses A: Understanding and knowledge of cloud computing and SaaS.
HA-1: Respondents' awareness of cloud computing is desirable.
HA-2: Respondents' awareness of SaaS is desirable.
Due to the novelty of cloud computing, in this case we determine the cut point at 2.
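The paper reports the results of these tests as SPSS output. The snippet below is only a sketch of the same two-sided binomial test in Python, using the counts reported for hypothesis A in Table 2, to show that the significance values can be reproduced; scipy is assumed to be available and is not something used by the authors.

  from scipy.stats import binomtest

  # two-sided binomial test as used for the hypotheses: out of 65 respondents,
  # `above` answered above the cut point; the test probability is 0.5
  def test_hypothesis(above, total=65, p=0.5):
      result = binomtest(above, n=total, p=p, alternative="two-sided")
      return result.pvalue

  # cloud awareness (Table 2): 50 of 65 respondents above the cut point of 2
  print(round(test_hypothesis(50), 3))   # ~0.000, matching the reported significance
  # SaaS awareness (Table 2): 41 of 65 respondents above the cut point of 2
  print(round(test_hypothesis(41), 3))   # ~0.046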
Table 2: The results of testing the HA hypothesis (sig. level 0.05)
                 Group  N   Category  Observed Prop.  Test Prop.  Exact Sig. (2-tailed)  Reject/confirm
Cloud Awareness  1      15  <= 2      .23             .50         .000                   Confirm
                 2      50  > 2       .77
                 Total  65            1.00
SaaS Awareness   1      24  <= 2      .37             .50         .046                   Confirm
                 2      41  > 2       .63
                 Total  65            1.00
According to Table 2, the significance levels of HA-1 and HA-2 are less than 0.05 and the frequency of observations in the (> 2) category is higher, so both are confirmed. Thus, according to all frequencies of observations, the respondents have confirmed hypothesis A.
Hypotheses B: Respondents believe that SaaS has many benefits.
HB-1: SaaS is accessible anywhere at any time.
HB-2: IT infrastructure is not needed for SaaS utilization.
HB-3: SaaS utilization reduces software support and data management tasks dramatically.
HB-4: SaaS utilization leads to reduced costs.
Table 3: The results of testing the HB hypothesis (sig. level 0.05)
                        Group  N   Category  Observed Prop.  Test Prop.  Exact Sig. (2-tailed)  Reject/confirm
Access Anywhere         1      24  <= 3      .37             .50         .046                   Confirm
                        2      41  > 3       .63
                        Total  65            1.00
Zero IT infrastructure  1      40  <= 3      .62             .50         .082                   Reject
                        2      25  > 3       .38
                        Total  65            1.00
Software Maintenance    1      46  <= 3      .71             .50         .001                   Reject
                        2      19  > 3       .29
                        Total  65            1.00
Lower cost              1      20  <= 3      .31             .50         .003                   Confirm
                        2      45  > 3       .69
                        Total  65            1.00
According to Table 3, the significance levels of HB-1 and HB-4 are less than 0.05 and the frequency of observations in the (> 3) category is higher, so they are confirmed. The significance level of HB-3 is less than 0.05 but the frequency of observations in the (<= 3) category is higher, so it is rejected. The significance level of HB-2 is more than 0.05, so it is rejected. Thus, according to all frequencies of observations, the respondents have rejected hypothesis B.
Hypotheses C: Respondents believe that SaaS has many weaknesses.
HC-1: SaaS has not yet matured in Iran.
HC-2: SaaS pricing is ambiguous and complicated.
HC-3: SaaS utilization leads to dependency on the provider.
HC-4: Cloud is completely dependent on the Internet.
Table 4: The results of testing the HC hypothesis (sig. level 0.05)
                            Group  N   Category  Observed Prop.  Test Prop.  Exact Sig. (2-tailed)  Reject/confirm
Immature SaaS               1      22  <= 3      .34             .50         .013                   Confirm
                            2      43  > 3       .66
                            Total  65            1.00
Complex pricing             1      39  <= 3      .60             .50         .136                   Reject
                            2      26  > 3       .40
                            Total  65            1.00
Dependency on the provider  1      42  <= 3      .65             .50         .025                   Reject
                            2      23  > 3       .35
                            Total  65            1.00
Dependency on the Internet  1      24  <= 3      .37             .50         .046                   Confirm
                            2      41  > 3       .63
                            Total  65            1.00
According to Table 4, the significance levels of HC-1 and HC-4 are less than 0.05 and the frequency of observations in the (> 3) category is higher, so they are confirmed. The significance level of HC-3 is less than 0.05 but the frequency of observations in the (<= 3) category is higher, so it is rejected. The significance level of HC-2 is more than 0.05, so it is rejected. Thus, according to all frequencies of observations, the respondents have rejected hypothesis C.
Hypotheses D: Respondents believe that SaaS adoption brings many opportunities.
HD-1: SaaS adoption leads to cost savings.
HD-2: SaaS adoption provides a great degree of flexibility.
HD-3: SaaS adoption facilitates firms' refocusing on their core competences.
HD-4: SaaS adoption brings access to specialized resources.
Table 5: The results of testing the HD hypothesis (sig. level 0.05)
                                 Group  N   Category  Observed Prop.  Test Prop.  Exact Sig. (2-tailed)  Reject/confirm
Cost saving                      1      23  <= 3      .35             .50         .025                   Confirm
                                 2      42  > 3       .65
                                 Total  65            1.00
Strategic flexibility            1      24  <= 3      .52             .50         .804                   Reject
                                 2      41  > 3       .48
                                 Total  65            1.00
Focus on core competencies       1      57  <= 3      .88             .50         .000                   Reject
                                 2      8   > 3       .12
                                 Total  65            1.00
Access to specialized resources  1      46  <= 3      .71             .50         .001                   Reject
                                 2      19  > 3       .29
                                 Total  65            1.00
According to Table 5, the significance level of HD-1 is less than 0.05 and the frequency of observations in the (> 3) category is higher, so it is confirmed. The significance levels of HD-3 and HD-4 are less than 0.05 but the frequency of observations in the (<= 3) category is higher, so they are rejected. The significance level of HD-2 is more than 0.05, so it is rejected. Thus, according to all frequencies of observations, the respondents have rejected hypothesis D.
Hypotheses E: Respondents believe that SaaS adoption has many risks.
HE-1: SaaS adoption has many lack-of-control risks.
HE-2: SaaS adoption has many legal risks.
HE-3: SaaS adoption has many security risks.
HE-4: SaaS adoption has many performance risks.
HE-5: SaaS adoption has many economic risks.
HE-6: SaaS adoption has risks related to Internet resilience and bandwidth.
HE-7: SaaS adoption has risks related to problems with the application's integration.
Table 6: The results of testing the HE hypotheses (binomial test, test prop. = .50, sig. level 0.05)

Risks                               Group   N    Category   Observed Prop.   Exact Sig. (2-tailed)   Reject/confirm
Lack of control                     1       24   <= 3       .37              .046                    Confirm
                                    2       41   > 3        .63
                                    Total   65              1.00
Legal issues                        1       36   <= 3       .55              .457                    Reject
                                    2       29   > 3        .45
                                    Total   65              1.00
Security                            1       17   <= 3       .26              .000                    Confirm
                                    2       48   > 3        .74
                                    Total   65              1.00
Performance                         1       57   <= 3       .88              .000                    Reject
                                    2       8    > 3        .12
                                    Total   65              1.00
Economic                            1       54   <= 3       .83              .000                    Reject
                                    2       11   > 3        .17
                                    Total   65              1.00
Internet resilience and bandwidth   1       24   <= 3       .37              .046                    Confirm
                                    2       41   > 3        .63
                                    Total   65              1.00
Integration                         1       23   <= 3       .35              .025                    Confirm
                                    2       42   > 3        .65
                                    Total   65              1.00
According to Table 6, the significance levels of HE-1, HE-3, HE-6 and HE-7 are less than 0.05 and the frequency of observations in the (> 3) category is higher, so they are confirmed. The significance levels of HE-4 and HE-5 are less than 0.05, but the frequency of observations in the (<= 3) category is higher, so they are rejected. The significance level of HE-2 is more than 0.05, so it is rejected. Thus, according to the overall frequency of observations, respondents have rejected hypothesis E.

6. Discussion and conclusions
Software as a Service (SaaS) is a relatively new organizational application sourcing alternative, offering organizations the option to access applications via the Internet. This study focuses on the analysis of organizations' understanding of the benefits, weaknesses, opportunities and risks of SaaS. According to the results, respondents believe that the main benefits of SaaS are cost reduction and permanent availability, which also lead to savings in the overall cost of the organization. Immature SaaS and dependency on the Internet were confirmed as the main weaknesses of SaaS. Respondents are most concerned about security and lack-of-control risks, as well as risks related to application integration and to Internet resilience and bandwidth. Iranian organizations perceived more SaaS risks than SaaS benefits; thus, they do not tend to adopt SaaS. The emergence of successful SaaS business models can help SaaS adoption.
References
[1] G. Feuerlicht, "Next generation SOA: Can SOA survive cloud computing?", in V. Snasel, P.S. Szczepaniak, & J. Kacprzyk (Eds.), Advances in Intelligent Web Mastering 2, AISC 67, 2010, pp. 19-29.
[2] D. Catteddu and G. Hogben, "Cloud computing: Benefits, risks and recommendations for information security", European Network and Information Security Agency (ENISA), 2009, 1-125.
[3] N. Sultan, "Cloud computing for education: A new dawn?", International Journal of Information Management, 2010, 30(2), 109-116.
[4] S. Subashini and V. Kavitha, "A survey on security issues in service delivery models of cloud computing", Journal of Network and Computer Applications, 2010, 1-11.
[5] A. Susarla and A. Barua, "Multitask agency, modular architecture, and task disaggregation in SaaS", Journal of Management Information Systems, 26, 2010, 87-118.
[6] S. Marstson and Z. Li, "Cloud computing - The business perspective", Decision Support Systems, 51, 2011, 176-189.
[7] V. Choudhary, "Comparison of software quality under perpetual licensing and software as a service", Journal of Management Information Systems, 24, 2007, 141-165.
[8] L. Wu, S. Garg and R. Buyya, "SLA-based admission control for a Software-as-a-Service provider in Cloud computing environments", Journal of Computer and System Sciences, 78(5), 2012, 1280-1299.
[9] Gartner, "Forecast analysis: Software as a service, worldwide", 2010, http://www.scribd.com/doc/49001306/ForecastAnalysis-Software-as-a-Service-Worldwide-2009-2014. Accessed 10.11.11 [Online].
[10] A. Benlian, T. Hess and P. Buxmann, "Drivers of SaaS-adoption: an empirical study of different application types", Business & Information Systems Engineering, 1(5), 2009, 357-369.
[11] R. Narwal and S. Sangwan, "Benefits, Dimensions and Issues of Software as a Service (SAAS)", International Journal of New Innovations in Engineering and Technology (IJNIET), 2013.
[12] A. Benlian and T. Hess, "Opportunities and risks of software-as-a-service: Findings from a survey of IT executives", Decision Support Systems, 52, 2011, 232-246.
[13] R.J. Kauffman and R. Sougstad, "Risk management of contract portfolios in IT services: the profit-at-risk approach", Journal of Management Information Systems, 25(1), 2008, 17-48.
[14] H. Gewald and J. Dibbern, "Risks and benefits of business process outsourcing: a study of transaction services in the German banking industry", Information and Management, 46(4), 2009, 249-257.
[15] S. Garg, S. Versteeg and R. Buyya, "A framework for ranking of cloud computing services", Future Generation Computer Systems, 2012.
[16] W. Wu, L.W. Lan and Y. Lee, "Exploring decisive factors affecting an organization's SaaS adoption: A case study", International Journal of Information Management, 31, 2011, 556-563.
[17] W. Wu, "Developing an explorative model for SaaS adoption", Expert Systems with Applications, 38, 2011, 15057-15064.
[18] M. Madisha, "Factors Influencing SaaS Adoption by Small South African Organisations", Conference on World Wide Web Applications, Durban, South Africa, 2011.
Evaluation of the CDN architecture for Live Streaming through
the improvement of QOS
Case study: Islamic Republic of Iran Broadcasting
Shiva Yousefi1, Mohamad Asgari2
1 Taali University, Qom, Iran
Sh.yusefi@yahoo.com
2 Islamic Republic of Iran Broadcasting University, Tehran, Iran
m.asgari@iribu.ac.ir
Abstract
Over the last decades, users have witnessed the growth and maturity of the Internet. As a consequence, there has been an enormous growth in network traffic, driven by the rapid acceptance of broadband access, along with increases in system complexity and content richness. A CDN provides services that increase the efficiency of network bandwidth, improve access, and maintain the accuracy of information during transmission. One of the most important demands on the Internet is watching live video streams, which requires techniques for transferring them efficiently. The aim of this study is to enhance QOS and increase the efficiency of content transfer over the Internet. The results of this research are shown as diagrams at the end of the paper in order to compare the advantages and disadvantages of using a CDN. Finally, we conclude that the use of this technology on the Internet in Iran can increase QOS and customer satisfaction levels.
Keywords: CDN, Live Streaming, QOS, Delay, Jitter, Packet loss, surrogate.

1. Introduction
Limited access to network and computational resources can become an arena in which service centers compete with one another. High availability and optimal responses are the keys to success in this arena. At the same time, the influx of high volumes of users to a Web site and the resulting high traffic has become one of the challenges of this realm. Some of the problems resulting from this situation are as follows:
- Since content must travel long distances and pass through several backbones to be relocated, the transmission speed is greatly decreased.
- Traffic, along with issues related to backbone peering, causes problems with the proper delivery of content.
- The parent site's bandwidth limits what users can consume.
- High traffic, packet loss and narrow broadcast channels decrease the quality of the content significantly.
In light of the discussed issues and the endless growth of Internet usage in the past few years, providing services such as live television programs, on-demand clips and music, and social networks such as Facebook and YouTube simultaneously to thousands of users has become a challenge. Companies use CDN providers as a tool for sharing content at the discretion of the user. Reduced video quality and lack of access, mainly resulting from long loading (download) times, can be tiring and boring for users. Companies make great profits from e-commerce on the Internet and would like to create a suitable experience for visitors to their websites. Therefore, in recent years, the effort to improve the quality of technologies for sending content through the Internet has grown dramatically. Content networks try to increase the quality of services by using different mechanisms, but unfortunately no company in Iran has invested in this field yet. The only Web site in the country that is hosted on a CDN is live.irib.ir, which is owned by the Islamic Republic of Iran Broadcasting; it uses a few CDNs in foreign countries to transfer the content, while its central server inside the country serves users inside the country. A CDN is a collaborative collection of network elements spanning the Internet, where content is replicated over several mirrored Web servers in order to perform transparent and effective delivery of content to the end users [1]. By maximizing bandwidth, increasing accessibility and replicating content, CDNs reduce the response time to the user's request and therefore lead to greater network productivity. It is predicted that by employing this technology in our country, we will witness better network performance in content delivery.
This paper proposes an architecture for a CDN in Iran with a Content Re-distribution Function to overcome the limitations of watching live streaming on the Internet, and
it explains the QOS factors and client satisfaction based on the Content Delivery Network with re-distribution (RDCDN).
In the rest of the paper, we first discuss the architecture of CDNs, live streaming, and the Islamic Republic of Iran Broadcasting's website. Then we introduce the proposed CDN architecture for Iran, the content re-distribution algorithm and the RDCDN transmission algorithm, which provide a more stable transmission performance to clients. Finally, we present the simulation results that support our proposed model, and the conclusion.
2. Content Delivery Network Architecture
Figure1: CDN node distribution over the internet
Content Delivery Network (CDN) is one of the popular new technologies used for distributing content on the Internet. With this technology, corporations and institutions can increase the speed of loading and browsing their sites. As is well known, a faster-loading site helps optimize the site for search engines (SEO), which is currently very important for e-commerce and also has positive results.
Internet corporations use multiple servers around the world to store and distribute their information in order to control the traffic of their sites and Internet services. This makes it possible for all users around the world to have the same conditions while using a service, by using the corporation's closest server at the highest speed. This brings many advantages that will be described in the following sections, but what about the smaller companies and the Internet users who have published their personal sites on the Internet?
Usually these sites keep their data on a single server (their host), serve users and visitors around the world, and carry out their work within their server's limitations, such as low bandwidth, low transmission speed, etc. [1]
A CDN can be efficient and help small business owners and private sites act like corporations. This service keeps a copy of the downloadable information of a site (such as CSS files, JavaScript files, multimedia files, etc.) on different nodes or servers around the world and serves many sites at the same time. When a client wants to browse the site, the information is delivered from the closest available server, and because this information is already cached, the transfer rate improves dramatically.
Figure 1 can help to better understand the concept of a CDN in the world of the Internet [2].
3. Functions and Structure of CDN
As mentioned before, CDN is a technology with its
Internet corporations control traffic of the sites and
Internet services. With this technology users can connect
to the nearest server and achieve their desired content.
For further explanation about CDN’s construction, some
concept must be defined:
3.1 Surrogate server
CDN consist of several edge servers (surrogate) which
keep a copy of provider’s contents so they called cache
servers too. Client’s requested content will deliver to
nearest surrogate server. Every cluster consists of set of
surrogates which are located in same geographical region.
The number of surrogates is too important because they
should be in appropriate number and also they should be in
appropriate location [2].
3.2 Content server
Content providers upload the content requested by clients to the content servers.
CDN is a set of network elements which combine together
to deliver a content to clients in a more efficient way.
A CDN provides better performance through caching or
replicating content over some mirrored Web servers (i.e.
surrogate servers) strategically placed at various locations
in order to deal with the sudden spike in Web content
requests, which is often termed as flash crowd [3] or
SlashDot effect [4]. Users are redirected to the surrogate server nearest to them. This approach helps to reduce the
network impact on the response time of user requests. In
the context of CDNs, content refers to any digital data
resources and it consists of two main parts: the encoded
media and metadata [5]. The encoded media includes static,
91
Copyright (c) 2014 Advances in Computer Science: an International Journal. All Rights Reserved.
ACSIJ Advances in Computer Science: an International Journal, Vol. 3, Issue 1, No.7 , January 2014
ISSN : 2322-5157
www.ACSIJ.org
dynamic and continuous media data (e.g. audio, video,
documents, images and Web pages). Metadata is the
content description that allows identification, discovery,
and management of multimedia data, and also facilitates
the interpretation of multimedia data.
3.3 Components of CDN
The three key components of a CDN architecture are
content provider, CDN provider and end-users. A content
provider or customer is one who delegates the URI name
space of the Web objects to be distributed. The origin
server of the content provider holds those objects. A CDN
provider is a proprietary organization or company that
provides infrastructure facilities to content providers in
order to deliver content in a timely and reliable manner.
End-users or clients are the entities who access content
from the content provider’s website.
CDN providers use caching and/or replica servers located
in different geographical locations to replicate content.
CDN cache servers are also called edge servers or
surrogates. The surrogates of a CDN are called Web
cluster as a whole. CDNs distribute content to the
surrogates in such a way that all cache servers share the
same content and URL. Client requests are redirected to the
nearby surrogate, and a selected surrogate server delivers
requested content to the end-users. Thus, transparency for
users is achieved. Additionally, surrogates send accounting
information for the delivered content to the accounting
system of the CDN provider [1].
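As a toy illustration of this request-routing idea (ours, not part of the paper, with made-up surrogate names and latencies), a redirector can simply send the client to the surrogate with the lowest measured latency:

def nearest_surrogate(client_latencies_ms):
    """Pick the surrogate with the lowest measured latency to the client.

    client_latencies_ms: dict mapping surrogate name -> measured RTT in ms
    (hypothetical values; a real CDN also weighs load and content placement).
    """
    return min(client_latencies_ms, key=client_latencies_ms.get)

print(nearest_surrogate({"surrogate-eu": 140.0, "surrogate-asia": 55.0, "surrogate-us": 210.0}))
# -> "surrogate-asia"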
Figure 2 shows a typical content delivery environment
where the replicated Web server clusters are located at the
edge of the network to which the end-users are connected
[1]. A content provider (i.e. customer) can sign up with a
CDN provider for service and have its content placed on
the content servers. The content is replicated either ondemand when users request for it, or it can be replicated
beforehand, by pushing the content to the surrogate servers.
A user is served with the content from the nearby
replicated Web server. Thus, the user ends up
unknowingly communicating with a replicated CDN
server close to it and retrieves files from that server.
CDN providers ensure the fast delivery of any digital
content. They host third-party content including static
content (e.g. static HTML pages, images, documents,
software patches), streaming media (e.g. audio, real time
video), User Generated Videos (UGV), and varying
content services (e.g. directory service, e-commerce
service, file transfer service). The sources of content
include large enterprises, Web service providers, media
companies and news broadcasters. The end-users can
interact with the CDN by specifying the content/service
request through cell phone, smart phone/PDA, laptop and
desktop [1].
Figure 2: Abstract architecture of a Content Delivery Network (CDN)
Figure 3 depicts the different content/services served by a
CDN provider to end-users [1].
Figure 3. Content/services provided by a CDN
4. Live Streaming
Streaming media delivery is challenging for CDNs.
Streaming media can be live or on-demand. Live media
delivery is used for live events such as sports, concerts,
channel, and/or news broadcast. In this case, content is
delivered ‘instantly’ from the encoder to the media server,
and then onto the media client. In case of on-demand
delivery, the content is encoded and then is stored as
streaming media files in the media servers. The content is
available upon request from the media clients. On-demand media content can include audio and/or video on demand, movie files and music clips [1].
Streaming means sending data, usually audio or video. It lets users start playing the stream before receiving and processing the full stream, and live streaming protocols are used to control the transmission.
4.1 Components of Live Streaming
Here are the components of live streaming [6]:
- Video sources that produce the video.
- A live encoder: hardware or software capable of compressing the video source in real time and sending the compressed video to a media server in a specific format, using a specific protocol.
- The media server: software installed on a dedicated server and optimized for serving media content to the end users, for example through a WebTV. The media server takes the compressed video supplied by the live encoder and broadcasts it to the users.
- The players that connect to the media server and play the media content for clients.

5. Quality of Service in CDN
One major goal of a CDN, and in fact one of the reasons for its creation, is increasing quality of service and customer satisfaction levels. The most important factors in increasing Quality of Service (QOS) are reducing delay, jitter and packet loss. For a better understanding, we will explain each of these concepts [7].

5.1 Delay
Delay is the time that a packet needs to be transmitted from the source to the destination in a network. It is calculated according to the following formula:

End-to-end delay = N (transmission delay + propagation delay + processing delay)   (1)

In this formula, N is the number of routers plus 1.

5.1.1 Processing delay
The time that the router needs to process the header of the packet to determine the next path. Processing delay in a network is usually measured in microseconds or even less [9].

5.1.2 Propagation delay
It is the amount of time it takes for the head of the signal to reach the receiver from the sender. It is calculated with Formula (2), where d is the length of the physical link and s is the propagation velocity in the medium:

Propagation delay = d / s   (2)

5.1.3 Transmission delay
The amount of time needed to push all of the packet's bits onto the wire. In other words, this delay is given by the rate of the link. Transmission delay is directly related to the length of the packet and is independent of the distance between the two nodes. It is calculated with Formula (3), in which Dt is the delay time in seconds, N is the number of bits (packet length) and R is the rate (bandwidth) in bits per second:

Dt = N / R   (3)

5.2 Jitter
Jitter refers to the changes in delay, which is calculated through the variations in the length of delays in the network. A network with a constant delay rate has no jitter. Jitter is calculated with Formula (4):

Delay jitter = maximum delay time - minimum delay time   (4)

5.3 Packet loss
Packet loss means the loss of packets during transmission. One reason packet loss occurs is that the uplinks and switches of the network send more packets than their buffers can hold; another reason is the loss of connection. The easiest way to measure packet loss is using the ping command [9].
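To make formulas (1) to (4) concrete, the following sketch (an illustration in Python, not part of the paper; the variable names and the sample numbers are ours) computes the three QOS factors from example measurements:

def end_to_end_delay(n_routers, transmission, propagation, processing):
    """Formula (1): N * (transmission + propagation + processing), N = routers + 1."""
    return (n_routers + 1) * (transmission + propagation + processing)

def transmission_delay(packet_bits, rate_bps):
    """Formula (3): Dt = N / R."""
    return packet_bits / rate_bps

def propagation_delay(link_length_m, velocity_mps):
    """Formula (2): d / s."""
    return link_length_m / velocity_mps

def jitter(delays):
    """Formula (4): maximum delay time minus minimum delay time."""
    return max(delays) - min(delays)

def packet_loss_percent(sent, received):
    """Share of packets lost in transmission, as reported by ping."""
    return 100.0 * (sent - received) / sent

# Hypothetical example: a 1024-byte packet over a 10 Mbit/s link, 500 km of
# fibre, 20 microseconds of processing per router, 3 routers on the path.
print(end_to_end_delay(3, transmission_delay(1024 * 8, 10e6),
                       propagation_delay(500e3, 2e8), 20e-6))   # seconds

# Ten probes sent, two lost; jitter and loss of the remaining samples.
rtts_ms = [41.2, 39.8, 44.5, 40.1, 43.0, 39.9, 42.7, 40.4]
print(jitter(rtts_ms))                        # delay variation in ms
print(packet_loss_percent(10, len(rtts_ms)))  # 20.0 %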
6. Islamic Republic of Iran's Web Site
The Islamic Republic of Iran Broadcasting's web site, at the address live.irib.ir, is the only web site in the country that shares its contents around the world in the context of a CDN. A content distribution network makes it possible for users in remote, out-of-coverage areas to access TV channels and radio broadcasting stations via their websites, from personal computers or mobile devices. This service must have access to the image files of the organization and must also comply with security guidelines in its contacts with people. The Islamic Republic of Iran Broadcasting has 6 main servers: one server is in the USA, two are in Europe, and three are in Asia; it also has thirty surrogate servers scattered across the continents.
7. Proposed CDN
In the short time that has passed since the birth of e-commerce, it has seen unprecedented growth in developing countries and is predicted to expand enormously on a global scale in the near future. One of the businesses that large foreign companies have heavily invested in these days is providing CDNs, which have been described in some detail in the previous sections. Unfortunately, no company in our country has attempted to do this so far. Perhaps this is because of the capital required to set up multiple servers in all parts of the country, or because of the weakness of our telecommunication infrastructure, or maybe because of the
uncertainty of getting this technology in the country. But
given the very rapid progress of technology in our country,
the existence of such technology will soon be needed and
even essential. The establishment of virtual universities in
the country is another advantage of advancing
technologies, and as was mentioned in the previous
sections, the spread of CDNs in the country can be a great help to universities and scholars in increasing the quality of online classes. Today, using the Internet
to watch movies or listen to radios or other content sources
has become ubiquitous, and as mentioned earlier, IRIB is
the one and only content provider that uses CDN
technology for distributing their media. This institution,
due to the lack of CDN services in our country, resorts to
using CDN from other countries. This is only ideal if a
user requests content on the IRIB's website from another
country, in which case they will be connected to the
nearest server. If someone from Southern Iran requires
access to this content, they must connect to the server in
Tehran and receive their desired content despite all the
difficulties and barriers, since it is all stored in a single
server located in Tehran. However, if there was a CDN in
the country and the content of this website were stored on
the CDN's servers, this user could have connected to the
nearest server containing the data, and had quicker and
higher quality access to their desired content. In the next
section, through the experiment that will be conducted, we
will discuss whether the given suggestion will result in
increased quality of service for customers or not,
considering the aforementioned factors; and this
suggestion will be tested on the server that IRIB's website
is stored on.
8. Simulating the proposed CDN in Iran
We now simulate a CDN in Iran. The main elements of the CDN architecture are [8]: content providers (in our simulation, the Islamic Republic of Iran Broadcasting), main servers, and surrogates, including main surrogates and sub-surrogates; they are arranged in hierarchical order as in Figure 4.

Figure 4. The Proposed Architecture of the Iran CDN

This architecture is modeled on the RDCDN architecture introduced in 2009 [8]. In this architecture, the main surrogate can operate as a backup storage system controlled by the local server. So, even if sub-surrogates delete their stored contents from their storage spaces, clients can continuously receive content services from the main surrogate, and sub-surrogates can recover the deleted contents from the main surrogate without re-transmission from the content provider.

8.1 Content distribution algorithm
The content redistribution algorithm transfers contents to each sub-surrogate according to the request rates of the contents during the set-up time. The proposed content distribution algorithm manages all contents by grouping, and each sub-surrogate receives at least one content per group. The sub-surrogate that holds the content with the highest request rate in its group receives the lowest-rate content of the next group [8].
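One possible reading of this grouping rule is sketched below in Python. It is only an illustration of the description above (and of [8]), with hypothetical content identifiers and request rates; the actual RDCDN algorithm may differ in its details.

def distribute(contents, n_surrogates):
    """Serpentine assignment of contents to sub-surrogates.

    contents: list of (content_id, request_rate) pairs.
    Returns a dict: surrogate index -> list of content_ids.
    Reading of the rule: contents are ranked by request rate and split into
    groups of n_surrogates items; each surrogate gets one item per group, and
    the surrogate holding the hottest item of a group receives the coldest
    item of the next group (so the per-group order alternates).
    """
    ranked = sorted(contents, key=lambda c: c[1], reverse=True)
    groups = [ranked[i:i + n_surrogates] for i in range(0, len(ranked), n_surrogates)]
    placement = {s: [] for s in range(n_surrogates)}
    for g, group in enumerate(groups):
        order = range(n_surrogates) if g % 2 == 0 else reversed(range(n_surrogates))
        for surrogate, item in zip(order, group):
            placement[surrogate].append(item[0])
    return placement

# Hypothetical catalogue: (content_id, requests per hour).
catalogue = [("news", 900), ("sports", 700), ("film-a", 650),
             ("series-b", 400), ("music", 300), ("archive", 120)]
print(distribute(catalogue, n_surrogates=3))
# {0: ['news', 'archive'], 1: ['sports', 'music'], 2: ['film-a', 'series-b']}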
Figure 5. The Flow Diagram of the Content Distribution Algorithm

Figure (5) shows the flow diagram of the content distribution algorithm. With this algorithm, sub-surrogates maintain their prior storing states if the priority of contents changes but the group of a content does not change. Therefore, the entire network load is decreased [8].

8.2 Simulation and results
To simulate the implementation of a CDN in Iran and to demonstrate the efficiency of the proposal, we consider surrogate servers in three big cities of Iran, located in three different geographic areas: Tabriz (north-west), Mashhad (north-east) and Isfahan (south). The main server (media server) of the Islamic Republic of Iran Broadcasting is located in Tehran. Since the first consideration of a CDN in transferring traffic is choosing the server closest to the requesting user, users are expected to connect to the nearest server at the time of the request. In order to demonstrate the improvement in user experience under this proposal, a few users in different cities send packets to the new servers and to Tehran's server, which allows us to compare the new method with the old one using the QOS parameters. The physical locations of the CDN servers are shown in Figure 6.

Figure 6. CDN servers' physical location arrangement

For testing and calculating the results, we consider one of the users located in the city of Shiraz (Fars province). Since this user is closer to the Isfahan server, it is expected that connecting to the Isfahan server yields less delay, jitter and packet loss than connecting directly to the Tehran server. To verify this, we connect to the servers in both cities and compare the listed parameters. To do this, we first send packets with a size of 32 bytes and then packets with a size of 1024 bytes to the Tehran and Isfahan servers and extract the results. After calculating the three main QOS factors, Figure (7) shows the difference in delay when the user connects to the nearest server (Isfahan) rather than to the main server (Tehran). Figure (8) shows the difference in jitter when the user connects to the nearest server (Isfahan) and to the main server (Tehran).
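The comparison described above can be scripted. The sketch below (ours, with placeholder host names) sends echo requests of both sizes using the standard Linux ping options and derives the three QOS factors from the reported statistics; it assumes the iputils ping output format.

import re
import subprocess

def ping_stats(host, payload_bytes, count=20):
    """Run the standard Linux ping and extract average delay, jitter and loss."""
    out = subprocess.run(
        ["ping", "-c", str(count), "-s", str(payload_bytes), host],
        capture_output=True, text=True, check=False
    ).stdout
    loss = float(re.search(r"([\d.]+)% packet loss", out).group(1))
    rtt_min, rtt_avg, rtt_max, _ = map(
        float, re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+)", out).groups()
    )
    return {"delay_ms": rtt_avg, "jitter_ms": rtt_max - rtt_min, "loss_pct": loss}

# Placeholder host names for the Tehran and Isfahan servers.
for server in ("server-tehran.example.ir", "server-isfahan.example.ir"):
    for size in (32, 1024):
        print(server, size, ping_stats(server, size))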
Figure 7. Delay of sending packets from Shiraz to Tehran and Isfahan’s
servers
Figure 8. Jitter of sending packets from Shiraz to Tehran and Isfahan's servers
Figure 9 shows the difference in packet loss when the user connects to the nearest server (Isfahan) and to the main server (Tehran).

Figure 9. Packet loss of sending packets from Shiraz to Tehran and Isfahan's servers

This simulation has been tested on a few other cities and other users, and the extracted results were almost the same. After reviewing these figures, it can be concluded that when users connect to the nearest server to receive their content, the amount of delay, jitter and packet loss is significantly reduced.

9. Conclusion and Future work
According to the simulation and the calculations performed, it can be concluded that with a CDN in Iran, users requesting content would, following the algorithm introduced earlier, connect to the nearest server, and consequently the amount of delay, jitter and packet loss would be reduced significantly. These are the three basic measures of quality of service, so it can be said that this would enhance the quality of the services. The establishment of companies founding this technology in Iran, in addition to the benefits it brings, can foster entrepreneurship in the country; a very large number of jobs can be created, and such companies can employ college graduates and professionals in the field. This research can be extended and tested with more users in the future to verify the accuracy of the achieved results.
References
[1] M. Pathan, R. Buyya and A. Vakali, Content Delivery Network, Springer-Verlag Berlin Heidelberg, 2008.
[2] J. Mulerikkal and I. Khalil, "An Architecture for Distributed Content Delivery Network", IEEE Network, 2007, pp. 359-364.
[3] M. Arlitt and T. Jin, "A Workload Characterization Study of the 1998 World Cup Web Site", IEEE Network, 2000, pp. 30-37.
[4] S. Adler, "The SlashDot Effect: An Analysis of Three Internet Publications", Linux Gazette, Issue 38, 1999.
[5] T. Plagemann, V. Goebel, A. Mauthe, L. Mathy, T. Turletti and G. Urvoy-Keller, "From Content Distribution to Content Networks - Issues and Challenges", Computer Communications, Vol. 29, No. 5, 2006, pp. 551-562.
[6] WebTV Solutions, http://www.webtvsolutions.com/, 2013.
[7] N. Bartolini, E. Casalicchio and S. Tucci, "A Walk Through Content Delivery Networks", in Proceedings of MASCOTS 2003, LNCS Vol. 2965/2004, 2004, pp. 1-25.
[8] M. Sung and Ch. Han, "A study on Architecture of CDN (Content Delivering Network) with Content Re-distribution Function", IEEE Network, 2009, pp. 727-777.
[9] University of Minnesota Duluth, http://www.d.umn.edu/, 2013.
Shiva Yousefi received the associate degree from DPI college in
Tehran, B.S. degree from Iran Info-Tech department university in
Tehran and M.S. degree from Taali university in Qom, all in
computer sciences. She is currently working as a Network
Administrator in Islamic Republic of Iran Broadcasting. She is
involved in research and development project about CDN in
Islamic Republic of Iran broadcasting.
Mohamad Asgari received the Ph.D. degree from Iran University of Science and Technology in Tehran. He is currently working as director of the department of development and technology in the Islamic Republic of Iran Broadcasting, and he is also a professor at the IRIB broadcasting university. He is involved in a research and development project about CDN in the Islamic Republic of Iran Broadcasting.
A Robust and Energy-Efficient DVFS Control Algorithm for GALS-ANoC MPSoC in Advanced Technology under Process Variability Constraints

Sylvain Durand1, Hatem Zakaria2, Laurent Fesquet3, Nicolas Marchand1

1 Control Department of GIPSA-lab, CNRS, University of Grenoble, Grenoble, France
sylvain@durandchamontin.fr, nicolas.marchand@gipsa-lab.fr
2 Electrical Engineering Department, Benha Faculty of Engineering, Benha University, Benha, Egypt
hatem.radwan@bhit.bu.edu.eg
3 TIMA Laboratory, Grenoble University, Grenoble, France
laurent.fesquet@imag.fr
Abstract
Many Processor Systems-on-Chip (MPSoC) have become tremendously complex systems. They are more sensitive to variability with technology scaling, which complicates the system design and impacts the overall performance. Energy consumption is also of great interest for mobile platforms powered by battery, and power management techniques, mainly based on Dynamic Voltage and Frequency Scaling (DVFS) algorithms, become mandatory. A Globally Asynchronous Locally Synchronous (GALS) design alleviates such problems by having multiple clocks, each one being distributed on a small area of the chip (called an island), whereas an Asynchronous Network-on-Chip (ANoC) allows communication between the different islands. A robust technique is proposed to deal with a GALS-ANoC architecture under process variability constraints using advanced automatic control methods. The approach relaxes the fabrication constraints and helps yield enhancement. Moreover, energy savings are even better for the same perceived performance with the obtained variability robustness. The case study is an island based on a MIPS R2000 processor implemented in STMicroelectronics 45 nm technology and validated with fine-grained simulations.
Keywords: Predictive control, low power MPSoC, process variability robustness, DVFS, GALS, ANoC.

1. Introduction
The upcoming generations of integrated systems, such as Many Processor Systems-on-Chip (MPSoC), require drastic technological evolution since they have reached limits in terms of power consumption, computational efficiency and fabrication yield. Moreover, process variability is one of the main problems in current nanometric technologies. This phenomenon refers to the unpredictability, inconsistency, unevenness, and changeability associated with a given feature or specification. It has become one of the leading causes of chip failures and delayed schedules at a sub-micrometric scale, and it complicates system design by introducing uncertainty about how a fabricated system will perform [1]. Although a circuit or chip is designed to run at a nominal clock frequency, the fabricated implementation may vary far from this expected performance. Moreover, some cores may behave differently inside the same chip [2]. To solve these problems, MPSoCs in advanced technology require dynamic power management in order to highly reduce the energy consumption. Architectural solutions are also needed to help the yield enhancement of such circuits with strong technological uncertainties.
Whereas leakage power is a significant contributor to the total power, the average power consumption and the energy dissipation are dominated by the dynamic power in current embedded CMOS integrated circuits. The energy reduction is a quadratic function of the voltage and a linear function of the clock frequency [3]. As a result, Dynamic Voltage Scaling (DVS) can be used to efficiently manage the energy consumption of a device [4]. The supply voltage can be reduced whenever slack is available in the critical path, but one has to take care that scaling the voltage of a microprocessor changes its speed as well. Therefore, adapting the supply voltage is very interesting but implies the use of Dynamic Frequency Scaling (DFS) to keep the system behavior correct. The addition of DFS to DVS is called Dynamic Voltage and Frequency Scaling (DVFS) and results in simultaneously managing the frequency and the voltage. In many cases, the only performance requirement is that the tasks meet a deadline, where a given task has to be computed before a given time. Such cases create opportunities to run the processor at a lower computing level and achieve the same perceived performance while consuming less energy [5, 6, 7]. As a result, closed-loop control laws are required to manage the energy-performance tradeoff in MPSoCs, and new strategies are developed in this sense in this paper.
In addition, embedded integrated systems have two means
of implementation. Firstly, the conventional clocked circuits with their global synchronization – in which one
faces the huge challenge of generating and distributing a
low-skew global clock signal and reducing the clock tree
power consumption of the whole chip – makes them difficult for implementation. Secondly, MPSoCs built with predesigned IP-blocks running at different frequencies need
to integrate all the IP-blocks into a single chip. Therefore, global synchronization tends to be impractical [2].
By removing the globally distributed clock, Globally Asynchronous Locally Synchronous (GALS) circuits provide a
promising solution. A GALS design allows each locally
synchronous island to be set independently, the island becoming a Voltage/Frequency Island (VFI), where an Asynchronous Network-on-Chip (ANoC) is used as the mechanism to communicate between the different VFIs. As a consequence, a GALS-ANoC architecture is more convenient
for DVFS than the standard synchronous approach, because
the power consumption of the whole platform depends on
the supply voltage and the clock frequency applied to each
VFI [4, 8]. A GALS-ANoC architecture also mitigates the
impact of process variability [9], because a globally asynchronous system does not require the global frequency to be dictated by the longest path delay of the whole chip, i.e.
the critical path. Each clock frequency is only determined
by the slowest path in its VFI.
The present paper is a proof of concept, gathering different techniques together to propose a robust design of an automatic control algorithm dealing with nano-scale process variations. The setup is based on an energy-efficient DVFS technique (previously developed in [10]) applied to a GALS-ANoC MPSoC architecture. A fast predictive control law i) deduces the best two power modes to apply for a given task and then ii) calculates when to switch from one power mode to the other, where a power mode is defined by a voltage/frequency pair supplying a VFI. The switching time instant is such that the energy consumption is minimized while the task fits with its deadline, guaranteeing good performance. This is repeated for all the tasks to treat. The control decisions are sent to the voltage and frequency actuators (respectively a Vdd-hopping and a programmable self-timed ring) while speed sensors provide real-time measurements of the processor speed. The control strategy is highly robust to uncertainties since the algorithm does not need any knowledge of the system parameters. It is dynamically (online) computed, which ensures that it works for any type of task (whereas only periodic tasks are generally treated in the literature). Furthermore, the control strategy is simple enough to limit the overhead it may introduce (see [10] for further details).
The rest of the document is organized as follows. In section 2, it is explained why a closed-loop architecture becomes essential in nanometric technologies. The proposed solution is introduced for a practical microprocessor (i.e. a MIPS R2000 processor), detailing also the actuators and sensors. The robust and energy-efficient DVFS control algorithm is depicted in section 3. Stability and robustness are analyzed too. Fine-grained simulation results are then presented in section 4. Note that only simulation results are provided in the present paper. An implementation on a real chip will be possible after the hardware and/or software cost of the proposed approach has been evaluated. Nevertheless, preliminary results are obtained for a realistic island implemented in STMicroelectronics 45 nm technology.
2. Controlling uncertainty and handling process variability
The main points of interest of the proposal is to handle
the uncertainty of a processor (or processing node) over a
GALS-ANoC design and reduce its energy consumption.
This is possible by means of automatic control methods.
Using both an ANoC distributed communication scheme
and a GALS approach offers an easy integration of different functional units thanks to local clock generation [11].
Moreover, this allows better energy savings because each
functional unit can easily have its own independent clock
frequency and voltage. Therefore, a GALS-ANoC architecture appears as a natural enabler for distributed power management systems as well as for local DVFS. It also mitigates
the impact of process variability [9].
An architecture for DVFS and process quality management
is presented in Fig. 1 (note that the description in [12] gives
more details about this architecture and the processing node
circuit.) The operating system (or scheduler) provides a set
of information ref (the required computational speed, in
terms of number of instructions and deadline, for each task
to treat in a given VFI), eventually through the ANoC. This
information about real-time requirements of the application
enables to create a computational workload profile with respect to time. There are also speed sensors (not represented
in Fig. 1) embedded in each processing unit in order to provide real-time measurements of the processor speed ω (in
million of instructions per second for instance). Consequently, combining such a profile with such a monitoring
makes possible to apply a power/energy management allowing application deadlines to be met. On the other hand, the
DVFS hardware part contains voltage and frequency converters, that are a Vdd-hopping and a Programmable SelfTimed Ring (PSTR) for supplying the voltage Vdd and the
frequency fclk respectively. A controller then dynamically
controls these power actuators in order to satisfy the application computational needs with an appropriate management strategy. It processes the error between the measured
speed and the speed setpoint information within a closed-
loop system, and by applying a well-suited compensator
sends the desired voltage and frequency code values to the
actuators (denoted Vlevel and flevel ). Consequently, the system is able to locally adapt the voltage and clock frequency
values clock domain by clock domain. Furthermore, the
ANoC is the reliable communication path between the different domains. Data communications between two VFIs
can fix the speed to the slowest communicating node in order to have a secure communication without metastability
problem and an adaptation to process variability too [13].
Figure 1: DVFS control of a GALS-ANoC MPSoC architecture: control of the energy-performance tradeoff in a VFI.

Without lack of generality, the DVFS technique is supposed to be implemented with two discrete voltage levels Vhigh and Vlow, with Vhigh > Vlow > 0. Also, ωhigh and ωlow denote the maximal computational speeds when the system is running under high and low voltage with the maximal associated clock frequency, with ωhigh > ωlow > 0 by construction.

2.1 MIPS R2000 as a processing node
The MIPS R2000 is a 32-bit Reduced Instruction Set Computer (RISC). Due to its simplicity in terms of architecture, programming model and instruction set, as well as its availability as an open core, it has been used as the processing node (see Fig. 1) in our case study. The whole GALS-ANoC island implementation is done using the Synopsys Design Vision tool with STMicroelectronics 45 nm technology libraries (CMOS045 SC 9HD CORE LS).

2.2 Vdd-hopping for voltage scaling
The Vdd-hopping mechanism was described in [14]. Two voltages can supply the chip. The system simply goes to Vlow or Vhigh with a given transition time and dynamics that depend upon an internal control law (one can refer to the reference above for more details). Considering that this inner loop is extremely fast with respect to the loop considered in this paper, the dynamics of the Vdd-hopping can be neglected.

2.3 Programmable self-timed ring for frequency scaling and variability management
The application of the proposed DVFS to a system requires the use of a process-variability-robust source for generating adjustable clocks. Today, many studies are oriented towards Self-Timed Ring (STR) oscillators, which present well-suited characteristics for managing process variability and offer an appropriate structure to limit the phase noise. Therefore, they are considered a promising solution for generating clocks even in the presence of process variability [15]. Moreover, STRs can easily be configured to change their frequency by simply controlling their initialization at reset time.

2.3.1 Self-timed rings
A STR is composed of several nested stages whose behavior is mainly based on the "tokens" and "bubbles" propagation rule. A given stage i contains a bubble (respectively a token) if its output is equal (resp. not equal) to the output of stage i + 1. The numbers of tokens and bubbles are respectively denoted NT and NB, with NT + NB = N, where N is the number of ring stages. For keeping the ring oscillating, NT must be an even number. One can think of this as the dual of designing an inverter ring with an odd number of stages. Each stage of the STR contains either a token or a bubble. If a token is present in a stage i, it will propagate to stage i + 1 if and only if this latter contains a bubble, and the bubble of stage i + 1 will then move backward to stage i.

2.3.2 Programmable self-timed rings
The oscillation frequency in STRs depends on the initialization (the number of tokens and bubbles and hence the corresponding number of stages) [16]. Programmability can simply be introduced to STRs by controlling the tokens/bubbles ratio and the number of stages. A Programmable Self-Timed Ring (PSTR) uses STR stages based on Muller gates which have a set/reset control to dynamically insert tokens and bubbles into each STR stage. Moreover, in order to be able to change the number of stages, a multiplexer is placed after each stage [17]. This idea was also extended to obtain a fully Programmable/Stoppable Oscillator (PSO) based on the PSTR. Look-up tables loaded with the initialization token control word (i.e. to control NT/NB) and the stage control word (i.e. to control N) are used to program the PSTR with a chosen set of frequencies.

2.3.3 PSTR programmability applied to MIPS R2000
Presently, the variability is captured in the design by using simulation corners, which correspond to the values of process parameters that deviate by a certain ratio from their typical value. In the STMicroelectronics 45 nm CMOS technology, three PVT (Process, Voltage, and Temperature) corners are available, denoted Best, Nominal and Worst. All standard logic cells were characterized at each of these corners. Since our main goal is to define the optimal
operating clock frequency needed by the processing workload
that compensates for the propagation delay variations due
to the variability impact. Therefore, the critical path delay
of the synthesized MIPS R2000 with respect to the supply
voltage is analyzed at the different PVT corners. The resulting optimal frequencies needed by the MIPS R2000 at two
specified voltage levels are finally defined in Table 1 for the
three process variability corners.
Table 1: Optimal clock frequencies required to compensate for the process variability.

Voltage level   Clock frequency for the different process variability corners
                Worst      Nominal    Best
0.95 V          60 MHz     75 MHz     85 MHz
1.1 V           95 MHz     115 MHz    145 MHz

2.4 Speed sensors for real-time measurement
Any closed-loop scheme requires measurements, in order to compare the measured value with a given setpoint to reach. In the present case study, speed sensors are embedded in each VFI. They provide a real performance measurement of the processor activity, i.e. its computational speed (in number of instructions per second).
Speed sensors play a critical role and must be carefully selected. A reference clock fixes the time window in which the number of instructions is counted. Its period is crucial since it determines the accuracy of the calculated average speed and the system speed response. Therefore, according to the set of clock frequencies available for the MIPS R2000 (see Table 1), the reference clock was chosen to be 2 MHz in order to count a considerable number of instructions with a proper system response. To conclude, the computational speed is applied to the digital controller in terms of number of instructions executed per 500 ns.

3. Energy-efficient DVFS control with strong process variability robustness
The control of the energy-performance tradeoff in a voltage-scalable device consists in minimizing the energy consumption (reducing the supply voltage) while ensuring good computational performance (fitting the tasks with their deadline). This is the aim of the controller introduced in Fig. 1, which dynamically calculates a speed setpoint that the system will have to track. This setpoint is based on i) the measurement of the computational speed ω (obtained with the speed sensors) and ii) some information provided by a higher-level device (the operating system or scheduler) for each task Ti to treat. This information comprises the computational workload, i.e. the number of instructions Ωi to execute, and the deadline ∆i. Furthermore, let Λi denote the laxity, that is the remaining available time to complete the task Ti.

3.1 Speed setpoint building
The presence of a deadline and a time horizon to compute tasks naturally leads to predictive control [18]. The main idea of the predictive strategy is first explained intuitively, and its formal expression is given in the sequel.

3.1.1 Intuitive DVFS control technique
A naive DVFS technique applies a constant power mode for each task Ti to treat, as represented in Fig. 2(a). The average speed setpoint, that is the ratio Ωi/∆i, is constant for a given task. Therefore, if the computational workload of a given task is higher than the processor capabilities under low voltage, i.e. Ωi/∆i > ωlow, then the VFI executes the whole task with the penalizing high voltage Vhigh and its associated frequency in order not to miss its deadline. This is the case for T2 for instance.
Note that if Ωi/∆i > ωhigh for a given task to treat, the execution of the task is not feasible by the desired deadline.
3.1.2 Energy-efficient DVFS control technique

To overcome such intuitive approaches, an energy-efficient control has been proposed in [10]. The idea is i) to minimize the energy consumption by reducing as much as possible the use of the penalizing high voltage level, ii) while ensuring the computational performance required for the tasks to meet their deadline. A task is thus split into two parts, see Fig. 2(b). First, the VFI runs under high voltage Vhigh (if needed) with the maximal available speed ωhigh in order to go faster than required. Then, the task is finished under low voltage Vlow with a speed lower than or equal to the maximal speed at low voltage ωlow, which strongly reduces the energy consumption (the achievable reduction is task dependent). The task is hence executed with a given ratio between high and low voltage. A key point in this strategy is that the switching time from Vhigh to Vlow has to be suitably calculated in order to meet the task deadline. This is done thanks to a fast predictive control law.
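The sketch below shows one possible way to compute such a switching instant offline, under the simplifying assumption that the task runs exactly at ωhigh before the switch and exactly at ωlow afterwards; it only illustrates the idea and is not the predictive law of [10]. All names and values are assumptions.

def switching_time(workload, deadline, w_low, w_high):
    # Smallest time to spend under Vhigh so that finishing under Vlow
    # still meets the deadline: w_high*t + w_low*(deadline - t) >= workload.
    if workload <= w_low * deadline:
        return 0.0  # Vlow alone is fast enough, no Vhigh phase is needed
    t = (workload - w_low * deadline) / (w_high - w_low)
    if t > deadline:
        raise ValueError("task is not feasible even at w_high")
    return t

# example: 250 instructions in 2.5 us with w_low = 75 MIPS and w_high = 115 MIPS
print(switching_time(workload=250, deadline=2.5e-6, w_low=75e6, w_high=115e6))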
In fact, the lower the supply voltage, the greater the energy savings. For this reason, only one frequency Fhigh is possible when running under Vhigh (in order to minimize the time spent at the penalizing high voltage). On the other hand, several frequency levels Flow_n are possible under low voltage Vlow because, since the energy consumption cannot be reduced any further, this degree of freedom on the frequency allows the task to be fitted to its deadline (as far as possible). In the present case, two frequency levels exist under Vlow, with Flow1 ≥ Flow2. Whereas the maximal levels Fhigh and Flow1 are determined from the optimal frequency values in Table 1 (according to the process variability of the chip), the second frequency level at low voltage is chosen equal to Flow1/2. This enables a 3 dB reduction in power consumption.
Figure 2: Different computational speed setpoint constructions can be used to save energy while ensuring good performance. (a) Intuitive average speed setpoint. (b) Energy-efficient speed setpoint.
3.2 Fast predictive control
A predictive problem can be formulated as an optimization problem, but an optimal criterion is too complex to be implemented in an integrated circuit. Fast predictive control instead consists in taking advantage of the structure of the dynamical system to speed up the determination of the control profile, see e.g. [18]. In the present case, the closed-loop solution yields a simple algorithm, as one only needs to calculate the so-called predicted speed δ, defined as the speed required to fit the task within its deadline given what has already been executed:

δ(t) = (Ωi − Ω(t)) / Λi    (1)
where Ω(t) is the number of instructions executed since the beginning of the task Ti. The dynamical energy-efficient speed setpoint ωsp is then directly deduced from the value of the predicted speed

ωsp(t) = ωhigh if δ(t) > ωlow, and ωsp(t) = ωlow otherwise    (2)

and so are the voltage and frequency levels. Indeed, the system has to run under Vhigh/Fhigh when the required setpoint is higher than the capabilities under low voltage; otherwise, Vlow is enough to finish the task. The frequency control decision when the system is running under low voltage follows the same principle.
The proposed control strategy is computed dynamically. The online measurement of the computational speed ω ensures that the control law works for any type of task, periodic or not, because ω does not need to be known a priori. Moreover, the control algorithm reacts to perturbations, e.g. if ω does not behave as expected, or if the computing workload (Ωi or ∆i) is adapted or re-estimated during the execution of the current task. On the other hand, the control strategy is simple enough to limit the overhead it may introduce. Some simplifications are even possible for a practical implementation, such as removing the division in (1) (see [10] for further details).
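The sketch below is a minimal rendering of equations (1) and (2), evaluated once per measurement window; it is written for this overview and assumes, for simplicity, that the measured speed exactly tracks the setpoint. All names and numbers are illustrative.

def predictive_setpoint(remaining_work, remaining_time, w_low, w_high):
    delta = remaining_work / remaining_time        # predicted speed, eq. (1)
    return w_high if delta > w_low else w_low      # speed setpoint, eq. (2)

def run_task(workload, deadline, w_low, w_high, dt=500e-9):
    # Simulate one task, assuming the tracked speed equals the setpoint.
    executed, t = 0.0, 0.0
    while executed < workload and t < deadline:
        sp = predictive_setpoint(workload - executed, deadline - t, w_low, w_high)
        executed += sp * dt                        # instructions done in this window
        t += dt
    return executed >= workload                    # True if the deadline was met

print(run_task(workload=250, deadline=2.5e-6, w_low=75e6, w_high=115e6))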
3.3 Performance and stability

The performance of the proposed control strategy is guaranteed because the execution of a task always starts at the penalizing high voltage level (by construction of the predictive control law), and the low level is not applied while the remaining computational workload is still large (i.e. requires more than the maximal possible speed at Vlow). As a result, it is not possible to do better. Furthermore, even if the voltage/frequency levels vary discretely, the speed setpoint to track is, by construction, always higher than or equal to the required value.
On the other hand, Lyapunov stability is verified. Lyapunov stability is based on an elementary physical observation: if the total energy of the system tends to continuously decrease, then the system is stable since it is going towards an equilibrium state. Let

V(t) = Ωi − Ω(t)    (3)

be a candidate Lyapunov function. This expression comes from (1) and refers to the remaining workload within the contractive time horizon of the task. The Lyapunov function intuitively decreases because the speed of the processor can only be positive, which ensures the stability of the execution of a task. This holds for all tasks to be executed.
3.4 Estimation of the maximal speeds and robustness
The maximal speeds ω high and ω low cannot be directly calculated since they could vary with temperature or location
(variability), and yet the control law has to be robust to such uncertainties. For these reasons, the maximal speeds are estimated. A solution consists in measuring the system speed for each power mode and using a weighted average of the measured speed in order to filter the (possible) fluctuations of the measurement; one can refer to [10] for further details. The proposed estimation of the computational speeds leads to a control law that does not need any knowledge of the system parameters. The strategy is therefore robust to technological uncertainties since it self-adapts whatever the performance of the controlled chip. Robustness deals with the conservation of the stability property when things do not behave as expected. Unexpected behaviors can be of two types: a wrong estimation of the number of instructions to process Ωi or of the deadline ∆i, or the presence of process variability.
In the first case, if Ωi is overestimated (or ∆i is underestimated), the task is completed before its deadline and the Lyapunov function V in (3) remains decreasing. On the other hand, if Ωi is underestimated (or ∆i is overestimated), the deadline will be missed. In this case, the remaining instructions are added to the workload of the next task in order to speed up the system. Another solution could be to use a damping buffer, as usually done for video decoding, see [19, 20] for instance. Note that Ωi and ∆i can also be re-estimated during the execution of the task in order to reduce the estimation error. Nonetheless, V remains decreasing during the whole task execution.
In case of process variability, the real computational speed becomes ωreal(t) = α ω(t), where α ≥ 0 is the unknown process variability factor, and the system performance is hence affected. α = 1 means the real process behaves ideally; the performance of the system is weaker than expected for α < 1 and better than expected for α > 1. Nonetheless, V in (3) becomes

V(t) = Ωi − α Ω(t)    (4)

and the stability is still ensured for all α > 0. The convergence rate is reduced for 0 < α < 1 and increased for α > 1. As a result, the system runs a longer (respectively shorter) time under the penalizing high supply voltage (by construction of the control law) to compensate for the weaker (resp. better) performance. The only unstable case is α = 0, which means that the processor does not compute at all; in that case, the operating system (or scheduler) must avoid allocating tasks to this part of the chip. Also note that the tasks can meet their deadline as long as the processor is able to execute the computational workload in the given time (feasible tasks), that is, while Ωi/∆i ≤ α ωhigh.
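A minimal sketch of the maximal-speed estimation mentioned above is given below; it uses an exponentially weighted average, which is one possible choice of weighting, and the smoothing factor and measurements are assumptions rather than values from [10]. Since the estimate relies only on measured speeds, it adapts automatically to the (unknown) variability factor α of the controlled chip.

def update_estimate(previous_estimate, measured_speed, lam=0.9):
    # Weighted average filtering the fluctuations of the speed measurement.
    if previous_estimate is None:
        return measured_speed  # first measurement seen in this power mode
    return lam * previous_estimate + (1.0 - lam) * measured_speed

estimate = None
for measure in [60e6, 61e6, 59e6, 60.5e6]:  # assumed noisy measurements at Vlow
    estimate = update_estimate(estimate, measure)
print(estimate)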
3.5 Coarse-grain simulation results
Coarse-grain simulations are performed in Matlab/Simulink in order to evaluate the efficiency of the proposed controller. A scenario with three tasks to be executed is run, the number of instructions and the deadline of each task being known. The simulation results are presented in Fig. 3. The top plot shows the average speed setpoint of each task (as a guideline), the predicted speed (as a guideline) and the measured computational speed, while the bottom plot shows the supply voltage. In Fig. 3(a), the system runs under low voltage during about 80 % of the simulation time. As a result, a reduction of about 30 % and 65 % of the energy consumption is achieved (in comparison with a system without DVS and a system without DVFS mechanism, respectively).
As the proposed control strategy does not use any knowledge of the system parameters, the controller adapts itself to these uncertainties. Fig. 3(b) shows how the system behaves in case of 20 % of process variability, that is, when the real performance of the circuit is 20 % lower than expected. One can see that the estimation of the maximal computational speed allows the system to keep working, even if the processing node does not behave as expected. Of course, in order to compensate for the lower computational speed induced by the process variability, the system runs a longer time under the penalizing supply voltage.
More simulations and details are presented in [10].
Figure 3: Simulation results of the energy-performance tradeoff control in Matlab/Simulink. (a) Simulation results with three frequency levels, one for the high voltage level and two for the low voltage level. (b) Simulation results testing the robustness of the controller with 20 % of process variability.
4. Fine-grain simulation results

In this last section, fine-grain simulation results are presented for the whole MIPS R2000 architecture with the
45 nm CMOS parameter variations. A post-layout simulation with Modelsim has been performed with a scenario of three tasks, the number of instructions and the deadline of each task being known. The simulation results of the system under different process variabilities are shown in Fig. 4. The available frequencies for each case are summarized in Table 1.
a) The results under nominal process variability are depicted in Fig. 4(a). Tasks 1 and 3 are completed successfully using the low voltage and frequency levels. For task 2, the controller speeds up the MIPS R2000 to Vhigh in order to be able to complete the task by the proposed deadline. Once the controller has detected that task 2 can be completed under relaxed conditions (that is, after 1.52 µs), the system switches back to Vlow.
b) The results under worst process variability are represented in Fig. 4(b). With the reduced performance of the MIPS R2000 (i.e. increased critical path delay), task 2 runs for 3.02 µs under Vhigh, which is twice as long as under nominal process variability.
c) The results under the best process variability configuration are given in Fig. 4(c). The MIPS R2000 is able to successfully complete all three tasks under Vlow, which offers many more power/energy saving opportunities than the nominal case. Therefore, our proposal is able not only to compensate for the delay variations caused by different process variability impacts, but also to exploit the enhanced response of the system under best variability conditions to gain more in terms of energy savings.

Figure 4: Timing diagram of the digital controller behavior with 3 different MIPS R2000 workloads under different process variability effects.
To evaluate the proposed DVFS control for the GALS-ANoC architecture, its average dynamic power, energy consumption, area overhead and robustness to process variability are also analyzed.
4.1 Energy and average dynamic power savings
Under nominal process variability, the energy-efficient DVFS control (using a dynamic set of clock frequencies) is able to save 21 % of the energy consumption and 51 % of the average dynamic power consumed by a system without DVFS, while the intuitive average-based strategy (using a fixed set of clock frequencies) only saves 14 % and 37 %, respectively. The energy-efficient control is therefore about 1.5 times more efficient than the intuitive control in terms of power and energy savings (under nominal process variability).
Our proposal has the ability to adapt the set of clock frequencies with respect to the process variability impact. Thus, the energy-efficient DVFS control is able to exploit the enhanced performance of the system (i.e. reduced critical path delay) to save more energy (25 % under the best process variability impact).
Under worst process variability conditions, the set of clock frequencies used by a system without DVFS, and even that of a system with average-based DVFS control, violates the MIPS R2000 critical path delay. As a consequence, the MIPS R2000 produces erroneous output results. Such a processing node has to be discarded and its allocated tasks have to be reallocated to other high-performance processing nodes. However, with the proposed DVFS control architecture, the MIPS R2000 is still able to complete the allocated tasks successfully by using the proper set of maximal clock frequencies. This drastically relaxes the fabrication constraints and helps enhance the yield.
4.2 Robustness to process variability
As already mentioned, the proposed approach is robust to technological uncertainties since the control algorithm does not need any knowledge of the system parameters. Fig. 4 clearly shows how the controller adapts itself to the different (nominal, worst, best) variability corners. The system keeps working even if the processing node does not behave as expected. However, in order to compensate for a weaker (respectively stronger) computational speed induced by the process variability, the system runs a longer (resp. shorter) time under the penalizing supply voltage level. Of course, the energy consumption is impacted accordingly.
4.3 Whole energy-efficient control and area overhead
The whole DVFS control system is also evaluated taking into account the consumption of the actuators and of the processing element (see Fig. 1). A GALS-ANoC island is compared with a single processing element (i.e. MIPS R2000) and with eight controlled elements.
Under nominal process variability, the average dynamic power and energy savings of the whole DVFS control system are smaller than, but not too far from, those presented in the previous section (46 % and 15 %, respectively). The area overhead due to the proposed control approach is about 33 %. On the other hand, a single DVFS control system is needed for all the VFIs in a GALS-ANoC system. Therefore, in a VFI with several processing elements, the DVFS control system is more effective in saving power/energy consumption (51 % and 20 %, respectively) and, moreover, the area overhead of the extra DVFS hardware is approximately divided by the number of processing elements per GALS island. For example, the area overhead in a processing island with eight processors is 4.15 %.
5. Conclusions

In this paper, a survey of the different problems facing designers in the nanometric era was first presented. A GALS-ANoC system was taken as a case study for the application of the DVFS technique.
Also, a closed-loop scheme clearly appeared as necessary in such systems, and an architecture was proposed accordingly. The idea of using integrated sensors embedded in each clock domain, as well as a Programmable Self-Timed Ring (PSTR) oscillator, was presented as one of the promising solutions to reduce the impact of process variability. A control algorithm based on a fast predictive control law was also detailed. The proposed feedback controller smartly adapts the voltages and frequencies (energy-performance tradeoff) under strong technological uncertainties. Stability and robustness were analyzed.
A practical validation was carried out in simulation for a MIPS R2000 microprocessor in STMicroelectronics 45 nm technology, in order to obtain results on the power consumption and area overhead of each unit of the power management architecture. Global results for a multicore system were also presented. Through this example, it was demonstrated that a dedicated feedback system associated with the
GALS-ANoC paradigm and DVFS techniques, with appropriate sensors and actuators, is able to achieve better robustness against process variability. This relaxes the fabrication constraints, thanks to an appropriate strategy, and helps enhance the yield through design techniques.
These preliminary results must now be evaluated on a real test bench. A precise evaluation of the hardware and/or software implementation cost of the proposed control scheme also has to be performed.
Acknowledgments
This research has been supported by ARAVIS, a Minalogic
project gathering STMicroelectronics with academic partners, namely TIMA and CEA-LETI for micro-electronics,
INRIA/LIG for operating system and INRIA/GIPSA-lab for
control. The aim of the project is to overcome the barrier of
sub-scale technologies (45nm and smaller).
References
[1] B. F. Romanescu, M. E. Bauer, D. J. Sorin, and S Ozev. Reducing the impact of process variability with prefetching and
criticality-based resource allocation. In Proceedings of the
16th International Conference on Parallel Architecture and
Compilation Techniques, 2007.
[2] L. Fesquet and H. Zakaria. Controlling energy and process
variability in system-on-chips: Needs for control theory. In
Proceedings of the 3rd IEEE Multi-conference on Systems
and Control - 18th IEEE International Conference on Control Applications, 2009.
[3] A. P. Chandrakasan and R. W. Brodersen. Minimizing power
consumption in digital CMOS circuits. Proceedings of the
IEEE, 83(4):498–523, 1995.
[4] K. Flautner, D. Flynn, D. Roberts, and D. I. Patel. An energy
efficient SoC with dynamic voltage scaling. In Proceedings
of the Design, Automation and Test in Europe Conference
and Exhibition, 2004.
[5] A. Varma, B. Ganesh, M. Sen, S. R. Choudhury, L. Srinivasan, and J. Bruce. A control-theoretic approach to dynamic
voltage scheduling. In Proceedings of the International Conference on Compilers, Architecture and Synthesis for Embedded Systems, 2003.
[6] T. Ishihara and H. Yasuura. Voltage scheduling problem for
dynamically variable voltage processors. In Proceedings of
the International Symposium on Low Power Electronics and
Design, 1998.
[7] J. Pouwelse, K. Langendoen, and H. Sips. Dynamic voltage
scaling on a low-power microprocessor. In Proceedings of
the 7th Annual International Conference on Mobile Computing and Networking, 2001.
[8] P. Choudhary and D. Marculescu. Hardware based frequency/voltage control of voltage frequency island systems.
In Proceedings of the 4th International Conference on Hardware/Software Codesign and System Synthesis, pages 34 –39,
oct. 2006.
[9] D. Marculescu and E. Talpes. Energy awareness and uncertainty in microarchitecture-level design. IEEE Micro, 25:64–
76, 2005.
[10] S. Durand and N. Marchand. Fully discrete control scheme
of the energy-performance tradeoff in embedded electronic
devices. In Proceedings of the 18th World Congress of IFAC,
2011.
[11] M. Krstic, E. Grass, F. K. Gurkaynak, and P. Vivet. Globally
asynchronous, locally synchronous circuits: Overview and
outlook. IEEE Design and Test of Computers, 24:430–441,
2007.
[12] H. Zakaria and L. Fesquet. Designing a process variability robust energy-efficient control for complex SoCs. IEEE
Journal on Emerging and Selected Topics in Circuits and
Systems, 1:160–172, 2011.
[13] T. Villiger, H. Käslin, F. K. Gürkaynak, S. Oetiker, and
W. Fichtner. Self-timed ring for globally-asynchronous
locally-synchronous systems. In Proceedings of the 9th International Symposium on Asynchronous Circuits and Systems, 2003.
[14] C. Albea Sánchez, C. Canudas de Wit, and F. Gordillo. Control and stability analysis for the vdd-hopping mechanism. In
Proceedings of the IEEE Conference on Control and Applications, 2009.
[15] J. Hamon, L. Fesquet, B. Miscopein, and M. Renaudin.
High-level time-accurate model for the design of self-timed
ring oscillators. In Proceedings of the 14th IEEE International Symposium on Asynchronous Circuits and Systems,
2008.
[16] O. Elissati, E. Yahya, L. Fesquet, and S. Rieubon. Oscillation period and power consumption in configurable self-timed ring oscillators. In Joint IEEE North-East Workshop
on Circuits and Systems and TAISA Conference, pages 1 – 4,
2009.
[17] E. Yahya, O. Elissati, H. Zakaria, L. Fesquet, and M. Renaudin. Programmable/stoppable oscillator based on self-timed rings. In Proceedings of the 15th IEEE International
Symposium on Asynchronous Circuits and Systems, 2009.
[18] M. Alamir.
Stabilization of Nonlinear Systems Using
Receding-Horizon Control Schemes: A Parametrized Approach for Fast Systems, volume 339. Lecture Notes in Control and Information Sciences, Springer-Verlag, 2006.
[19] S. Durand, A.M. Alt, D. Simon, and N. Marchand. Energy-aware feedback control for a H.264 video decoder. International Journal of Systems Science, Taylor and Francis
online:1–15, 2013.
[20] C. C. Wurst, L. Steffens, W. F. Verhaegh, R. J. Bril, and
C. Hentschel. QoS control strategies for high-quality video
processing. Real Time Systems, 30:7–29, 2005.
Mobile IP Issues and Their Potential Solutions:
An Overview
Aatifah Noureen1, Zunaira Ilyas1, Iram Shahzadi1, Muddesar Iqbal1, Muhammad Shafiq2, Azeem Irshad3
1 Faculty of Computing and Information Technology, University of Gujrat, Gujrat, Pakistan
2 Department of Information and Communication Engineering, Yeungnam University, Gyeongsan, South Korea
3 Department of Computer Science and Software Engineering, International Islamic University, Islamabad, Pakistan
{atifanoureen, zunaira.ilyas}@gmail.com, iramshahzadi013@yahoo.com, m.iqbal@uog.edu.pk, shafiq.pu@gmail.com, irshadazeem2@gmail.com

Abstract
A typical Internet Protocol (IP) could not address the mobility of nodes and was only designed for fixed networks, where nodes were unlikely to move from one location to another. An ever-increasing dependence on network computation makes the use of portable and handy devices inevitable. The Mobile IP protocol architecture was developed to meet such needs and support mobility. Mobile IP lets roving nodes keep an uninterrupted connection to the Internet without altering their IP address while moving to another network. However, Mobile IP suffers from several issues such as ingress filtering, triangle routing, handoff and Quality of Service problems. In this paper we discuss a few of these issues together with their possible solutions, which resolve the problems, reduce unnecessary load on the network and enhance efficiency. Some security threats on Mobile IP and their solutions are also covered in order to secure communication during mobility.
Keywords: Mobile IP, Triangle routing, Ingress filtering, Binding updates, CoA.
1. Introduction
1.1 Mobile Internet Protocol (Mobile IP)
In standard IP routing, a problem occurs when a mobile node roams through multiple access points: it has to re-establish the connection each time its IP address changes, due to which the live session of the node is dropped and the response time of the network is increased [1]. A novel technology that satisfies the requirement for a smooth connection is Mobile IP, a mobility-support open-standard protocol suggested by the Internet Engineering Task Force (IETF) in November 1996. It makes it possible for moving nodes to maintain an uninterrupted connection to the Internet without altering the IP address while shifting to another network [2]. The primary reason behind the need for Mobile IP is to make available a scalable, transparent as well as safe alternative [3]. In Mobile IP a host node possesses two addresses at a time: one for the home network, the permanent address, and another for the foreign network, which is short-lived and only valid in a particular foreign network [4]. Every roving node is recognized through its home address irrespective of its present location on the Internet. When the node is away from its home network, it is assigned a care-of address to determine its existing location [5].

1.2 Functional components of Mobile IP

The integral components of the Mobile IP architecture are as follows:
- Mobile Node (MN)
- Home Agent (HA)
- Foreign Agent (FA)
- Correspondent Node (CN)
- Home Agent Address
- Care-of Address (CoA)
- Home Network
- Foreign Network
- Tunneling
These architectural components are described below:

Foreign Network: The network in which the Mobile Node is currently located, away from its home network.

Mobile Node (MN): Any device whose software enables network roaming; a node using Mobile IP that moves across different IP subnets. A permanent (home address) IP address is allotted to this node, which defines the destination of all its packets. Other nodes use only the home IP address to send packets, regardless of the node's physical location.

Tunneling: The process of encapsulating an IP packet within another IP packet in order to route it to a place other than the one originally defined in the destination field. When a packet is received by the home agent, before forwarding it to a suitable router it encapsulates that packet into a new packet, setting the mobile node's care-of address in the new destination address field. The path followed by the new packet is known as the tunnel [5].

Home Agent (HA): A router within the home network that delivers packets to the Mobile Node and is in charge of intercepting packets whenever the mobile node is connected to a foreign network. The home agent is responsible for forwarding these packets to the mobile node.
1.3 Architecture

Figure 1 illustrates the basic architecture of a Mobile IP scenario.
Foreign Agent (FA): A router in the foreign network to which the mobile node is attached; a foreign-network router designed for use with Mobile IP. For a mobile node using a 'foreign agent care-of address', all packets are relayed through this node. The mobile node may also use the foreign agent to register in the distant network [6].

Correspondent Node (CN): Any host with which the mobile node communicates. It can be placed on the foreign network, the home network, or any other place able to send packets to the mobile node's home network.

Figure 1: Architecture of Mobile IP

Home Agent Address: The IP address of the device within its home network.
1.4 How Mobile IP works

Whenever a mobile node moves away from its home network, it listens for advertisements from foreign agents and obtains a care-of address via a foreign agent; the mobile node then registers its current position with the Foreign Agent and the Home Agent (Agent Discovery and Registration). Packets delivered to the home address of the mobile node are re-sent by the home agent over the tunnel towards the care-of address at the foreign agent (Encapsulation). The foreign agent then passes the packets to the mobile node at its home address (De-capsulation). Every time a node shifts to a new network it needs to obtain a new care-of address. A mobility agent at the home network maintains the mobility binding table, whereas the foreign agent maintains the visitor list [1, 8].
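As a purely illustrative sketch (not part of the paper), the toy model below shows the home agent's role in this process: a mobility binding table mapping home addresses to care-of addresses, and an encapsulation step for packets addressed to a roaming node. All addresses are made up.

HOME_AGENT_BINDINGS = {}          # home address -> current care-of address

def register(home_address, care_of_address):
    # Registration: record the mobile node's current location.
    HOME_AGENT_BINDINGS[home_address] = care_of_address

def forward(packet):
    # Encapsulation: tunnel a packet addressed to a roaming mobile node.
    coa = HOME_AGENT_BINDINGS.get(packet["dst"])
    if coa is None:
        return packet                           # node is at home, deliver normally
    return {"outer_dst": coa, "inner": packet}  # IP-in-IP towards the foreign agent

register("10.0.0.5", "192.168.7.20")            # MN away from home, new CoA
print(forward({"src": "172.16.1.9", "dst": "10.0.0.5", "payload": "data"}))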
Care-of Address (CoA): The IP address of the device when it is operating in a foreign network; the address used by the mobile node for communication while it is away from the home network. When the mobile node uses the foreign agent's IP address as its care-of address, it is called a foreign agent care-of address, whereas a co-located care-of address is used when a network interface of the mobile node is temporarily allocated an IP address on the foreign network [7].

Home Network: The subnet corresponding to the home address of the mobile node and the home agent. It is regarded as the 'home' point of attachment for the mobile node.
2.2 Triangle routing problem

Another aspect [11] is that when the mobile node is in a foreign network, the correspondent node is not aware of its care-of address; it transmits packets to the home address of the mobile node, which are sent to the foreign agent, which in turn delivers them to the mobile node. On the other side, the mobile node communicates directly with the correspondent node. Packets are usually delivered efficiently both to and from the mobile node, but when the correspondent node sends packets towards the mobile node they have to follow a longer route than the optimal one. Since the two transmission directions use distinct paths, this creates the 'triangle routing' problem, which decreases network efficiency and places unwanted load on the networks [4, 1, 6]. Figure 4 illustrates the triangle routing problem in a simple fashion.

Figure 4: Triangle Routing
Figure 2: Working of Mobile IP
2. Issues in Mobile IP

These are the transitional issues from Mobile IPv4 to Mobile IPv6:
- Ingress Filtering
- Intra-Domain Movement Problem
- Triangular Routing Problem
- Handoff Problem
- Quality of Service Problem
- Fragility Problem
2.1 Ingress Filtering problem

Ingress filtering [9] is an approach carried out by firewall-enabled systems to ensure that particular packets truly come from the network they declare; if the IP address does not match that network, the packets are discarded. If the mobile node transmits packets using its home address rather than its temporary care-of address while it is in a foreign network, the packets are rejected by any system that supports ingress filtering. This problem occurs when both nodes (sender and receiver) belong to the same home network [10]. The whole process is shown in Figure 3.

Figure 3: Ingress filtering
2.3 Handoff Problem

The handoff problem occurs when a mobile node switches from its current foreign agent to another one [12]. Since the home agent is not aware of the change of position of the mobile node, it keeps tunneling packets to the old foreign agent until it receives a new registration request from the mobile node. Packets are therefore lost during the handoff and, as a result, communication between the correspondent node and the mobile node does not complete successfully [13].
2.4 Intra-Domain Movement Problem
When the mobile node changes its position frequently within a domain and handoff occurs again and again, a large number of messages are created in the network, which decreases its effectiveness [14].
2.5 Quality of Service Problem

Since mobility is a vital capability of modern telecommunication networks, some other management issues also need to be addressed. As highlighted by Yong Feng, due to mobility, unpredictable bandwidth, inadequate network resources and increasing error rates, it is tough to offer QoS in a mobile environment [15].
2.6 Fragility problem

It is simple and easy to configure a single home agent, but this also has the drawback of fragility: if the home agent stops working, the mobile node becomes unreachable [16].
3.2 Solutions of the Triangle Routing problem

To tackle the triangle routing problem, many protocols have been proposed, such as forward tunneling and binding cache, bidirectional route optimization, the smooth handoff technique, a virtual home agent and SAIL.
3. Viable Solutions
The most relevant solutions to the given problems proposed by experts so far are discussed below for a single-glance overview.

3.2.1 Route optimization
Charles Perkins and David Johnson presented this protocol in 2000 and it is used in IP version 6. Route-optimized Mobile IP works by telling the correspondent nodes the new care-of address. Communication is much more efficient if the correspondent node is aware of the care-of address of the mobile node and sends packets directly to the mobile node at its current location, without detouring through the home network [6, 2]. The correspondent node can learn the care-of address of the mobile node either because the moving node announces it itself or because the home agent transmits a binding update message to the correspondent node when forwarding a packet towards the foreign agent. The correspondent node then creates an entry in its binding cache for further communication [3]. When a mobile node shifts to another foreign agent, the previous foreign agent maintains a binding cache of the new care-of address, forwards packets addressed to the mobile node that has moved from its network, and also transmits a Binding Warning message to make the home agent aware of the movement of the node [1]. Handling of triangular routing by route-optimized Mobile IP is effective but complex: regular updates have to be shared by all of the connected nodes [17].
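The sketch below illustrates, in a simplified and hypothetical form, the binding cache kept by a correspondent node in such a route-optimization scheme: once a binding update is received, packets are sent directly to the care-of address until the binding lifetime expires. Addresses and the lifetime value are assumptions.

import time

BINDING_CACHE = {}    # home address -> (care-of address, expiry timestamp)

def binding_update(home_address, care_of_address, lifetime=60.0):
    BINDING_CACHE[home_address] = (care_of_address, time.time() + lifetime)

def next_hop(destination_home_address):
    # Send directly to the CoA while the cached binding is still valid.
    entry = BINDING_CACHE.get(destination_home_address)
    if entry and entry[1] > time.time():
        return entry[0]                       # direct route to the mobile node
    return destination_home_address           # fall back to the home network

binding_update("10.0.0.5", "192.168.7.20")
print(next_hop("10.0.0.5"))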
3.1 Solution of the Ingress Filtering Problem
Ingress filtering creates significant problems for Mobile IP. Its limitations can be overcome through an approach named reverse tunneling. In reverse tunneling, the foreign agent receives a packet from the mobile node, encapsulates it and transmits it towards the home agent; the home agent then de-capsulates it and forwards it to the correspondent node [10]. In this manner the addresses are always topologically correct, although this probably increases the delay and the congestion of packets [5]. Reverse tunneling ensures that the reception of packets on the home network is free from firewall blockage, but it negatively affects the efficiency and safety of Mobile IP because of the direct link between inside subnets [17]. Figure 5 illustrates the whole process of reverse tunneling.

Figure 5: Reverse Tunneling
3.2.2 Reverse Routing
Reverse Routing was proposed by Peifang Zhou and Oliver W. Yang as a substitute for route-optimized Mobile IP. The mobile node delivers packets to its home network and, while moving to a foreign network, it obtains a care-of address. In the next step the mobile node sends a registration message to the correspondent node that contains its home network address as well as its care-of address. The correspondent node then updates its routing table, and all further communication from the mobile node to the correspondent node takes place according to the updated binding table. It is simpler than route optimization as it relies on a single message [17].
3.2.5 Surrogate HA
To resolve the triangle routing problem, the idea of a surrogate home agent (SHA) has been presented. The surrogate home agent exists two levels above the permanent home agent. When a mobile node goes far away from its home network, it takes a care-of address from the foreign agent and registers it with its home agent, which in turn shares it with the SHA. When the correspondent node sends packets, they are tunneled to the SHA rather than to the permanent home agent, and the SHA forwards them towards the current location of the mobile node. Compared to original Mobile IP this technique decreases the transmission time, handoff delay, packet loss and registration time, but it has some limitations: it only provides better results when the correspondent node is above the SHA, and it is only helpful in the inter-network [21].
3.2.3 Location Registers (LR)
In this method the correspondent node first creates a link to a database named the home location register (a database bridging the mobile node's home IP address and its care-of address). The mobile node always uses the home location register for registration purposes: when the mobile node is on the home network, the HLR takes responsibility for identity mapping, and when its location changes from the home network, it registers the care-of address assigned to it by the foreign network, similarly to Mobile IP. When correspondent nodes want to communicate, they first check the home location register database, confirm the mobility binding and deliver data directly to the mobile node's care-of address. The correspondent node has to update its binding database before its registration lifetime expires. The mobile node maintains a list of all the correspondent nodes that delivered packets during its current registration lifetime and, when it shifts to a new place, it shares a binding update with all active CNs [18]. This reduces the cost because it does not require the home address and foreign address, but it creates a burden on the correspondent node in terms of mobility handling and interaction with the home LR [17].
3.2.6 Other Possible Solutions
Many other protocols have also been proposed to solve the triangle routing problem, such as forward tunneling and binding cache [3], bidirectional route optimization [22], the smooth handoff technique [20], a virtual home agent [21], and a port address translation based route optimization scheme [4].
3.3 Solution of the Handoff problem

The overall transmission performance during the mobile node handoff relies on three factors: re-registration, mobile test, and the fundamental interaction with the upper layer protocol [13]. Local registration via layered Mobile IP [23] is a method that helps decrease the waiting time for re-registration and enhances the handoff functionality of Mobile IP. Original FA notification is another solution to reduce packet loss, by using buffer memory [13].
3.2.4 Virtual Home Agent
In this scheme the idea of a virtual home agent is presented. A virtual home agent [19] behaves like a real home agent. Whenever a mobile node moves to a foreign network, it is assigned a virtual home address in place of its permanent home address for a session period. During the session the correspondent node sends messages to the virtual home agent, which then tunnels them towards the mobile node. This protocol supports transparency, as the correspondent nodes are not aware of the mobility. All sessions last only a short period of time, after which the mobile node can change its virtual home agent, which ensures that the mobile node is always close to its virtual home agent [20].
3.4 Solution of the Intra-Domain Movement Problem

Many protocols have been designed to reduce the intra-domain movement problem, such as TeleMIP, Cellular IP and HAWAII. They can resolve the problems of frequent handoff and consequently decrease packet loss and handoff delay [14].
4.3 Basic Denial of Service Attacks

The attacker spoofs the binding update and resends a registration request for a home address using its own CoA. In this manner the attacker receives all the packets, which breaks the entire connection with the mobile node [14].
3.5 Solution of the Quality of Service problem

To solve the end-to-end Quality of Service problem in Mobile IP, a combination of Differentiated Services (DiffServ) and the Resource Reservation Protocol (RSVP) is used [15].
Bombing a CoA with unwanted data

Sometimes the attacker places a false CoA in the binding update and reroutes the packets to some other targeted node. The purpose is to bomb the targeted node with undesired packets. This is possible when the attacker knows about a large data stream between the two nodes [26]. It is a very serious attack because the targeted node cannot recognize the attacker and cannot do anything to avoid it.
3.6 Solution of the Fragility problem

Multiple home agents, rather than a single one, are used to solve the fragility issue. If one HA breaks down, the others can take over the responsibility of routing the data packets towards the mobile node [16].
4. Mobile IP Threats

Mobility leads to a number of different kinds of threats that may have different effects on the protection of the protocol. Although MIPv6 has many advanced features compared with MIPv4 and uses the IPsec protocol to provide security, it still has some loopholes for security threats. A few of them are described here.
4.4 Replaying and Blocking Binding Updates

In this type of attack the attacker captures a binding update message and holds the previous location of the mobile node. When the MN moves to a new place, the attacker sends that old binding update to the correspondent node on behalf of the mobile node and reroutes the packets to the previous spot. In this manner an attacker can block binding updates for the MN at its current position until the CN cache entry expires [25].
4.1 Spoofing Binding Updates

If binding updates are not authenticated, an attacker between the mobile node and the correspondent node can spoof the binding update messages. It captures the messages, can alter the home address/CoA in them, and then delivers them to the correspondent node [24]. Thus communication between the mobile node and the correspondent node cannot take place successfully and packets are transmitted to an unwanted node.
4.5 Replay attacks

Sometimes an attacker can capture and store a copy of a genuine registration request and use it later, registering a false CoA for the MN. Temporal effects also need to be considered to tackle replay attacks.
4.2 Attacks against secrecy and integrity
An attacker can break the integrity and privacy between the mobile node and the correspondent node by sending a false binding message to the correspondent node. It inserts its own CoA in the field of the spoofed binding message. When this message is sent to the correspondent node, the latter will revise its binding cache and start communicating with the attacker. In this way the attacker is able to see and alter the packets that are intended for the mobile node [25].
4.6 Eavesdropping

In this type of attack an attacker silently listens to the traffic between the MN and the CN and gathers important
information. Sometimes it just alters the messages, and sometimes it carries out independent communication with both the MN and the CN on behalf of each other. Both the MN and the CN are unaware of this and believe that they are communicating directly over a secure link [27].
5. Possible Solutions for Potential Threats

These are the possible solutions proposed by researchers as countermeasures:
5.2 Solution of Replay attacks
To avoid replay attacks, the identification field is produced in such a way that it enables the home agent to figure out the next expected value.
5.3 Solution of Eavesdropping
MIPv6 uses IPsec for security purposes; it authenticates and encrypts all IP packets during a communication session. Packets exchanged during communication are thus secured using IPsec, but no specific security procedure exists against this kind of attack [27].
5.1 Message Authentication
Whenever the correspondent node receives a binding update message, the Mobile IP system architecture should authenticate it before proceeding. The following authentication techniques are likely to be considered for this.
6. Conclusion

Mobility can be considered a major innovation in communication technologies, contributing value to users' life style. This ability of the telecom environment came into existence with the invention of Mobile IP. Mobile IP is a protocol that permits a mobile system user to shift from one place to another while keeping a permanent IP address, without the communication session being terminated. However, this capability also brings technical issues and security concerns, as presented by researchers. This paper takes into account these challenges besides the possible security concerns faced by Mobile IP until now. An index of their possible solutions, to secure the Mobile IP communication architecture and process by reducing the network overhead and maximizing network performance, is also discussed for a single-glance overview.
5.1.1 Use of Public Key Infrastructure (PKI)

The use of a public key infrastructure is a typical method for authentication, but in Mobile IPv6 it is difficult to use a single PKI across the Internet [24].
5.1.2 Use of Cryptographically Generated Addresses

Cryptographically generated addresses are another method of authentication. In this technique the mobile node's public key is hashed into the second half of the home address. The key forms 62-64 bits of the IP address, which makes it hard for an attacker to discover the key of a particular address. However, since only 62-64 bits of the address are available, an attacker can try to find the key by trial and error [25]. Bombing attacks are not prevented by the use of cryptographically generated addresses.
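The sketch below illustrates the general idea in a simplified, hypothetical form (it is not the CGA specification): the node's public key is hashed and 64 bits of the digest are used as the interface identifier of the address. The key, prefix and output format are assumptions.

import hashlib

def interface_identifier(public_key: bytes) -> str:
    # Hash the public key and keep 64 bits as the interface identifier.
    digest = hashlib.sha256(public_key).digest()
    iid = digest[:8]                                   # 64 bits of the hash
    groups = [iid[i:i + 2].hex() for i in range(0, 8, 2)]
    return ":".join(groups)

prefix = "2001:db8:0:1"                                # illustrative /64 prefix
print(prefix + ":" + interface_identifier(b"example public key bytes"))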
5.1.3 Return Routability Tests
Return routability tests are also used to authenticate binding updates and prevent bombing attacks. Two types of test are used, one for the home address and another for the CoA. Whenever the correspondent node gets a binding update message, it delivers a secret key to the home agent, which in turn tunnels it towards the mobile node; on the other hand, the correspondent node also delivers another data packet with a second key to the CoA. The mobile node uses these two keys for the authentication of the binding update messages [26]. These tests confirm the home address and the care-of address efficiently, but may become the basis of many other attacks [24].

References:
[1] Redi, J., and P. Bahl. (1998). Mobile IP: A solution for transparent, seamless mobile computer communications. Fuji-Keizai's Report on Upcoming Trends in Mobile Computing and Communications.
[2] Girgis, Moheb R., Tarek M. Mahmoud, Youssef S. Takroni, and Hassan S. H. (2010). Performance evaluation of a new route optimization technique for mobile IP. arXiv preprint arXiv:1004.0771.
[3] Chandrasekaran, Janani. (2009). Mobile IP: Issues, challenges and solutions. Master's thesis, Department of Electrical and Computer Engineering, Rutgers University.
[4] Raj S. B., Daljeet K., and Navpreet K. (2013). Mobile IP: A Study, Problems and their Solutions. International Journal of Scientific Research.
[5] Kumar, Ranjeeth. (2003). MSIP: a protocol for efficient handoffs of real-time multimedia sessions in mobile wireless scenarios. PhD diss., Indian Institute of Technology, Bombay.
[6] Nada, Fayza. (2007). Performance Analysis of Mobile IPv4 and Mobile IPv6. Int. Arab J. Inf. Technol. 4, no. 2: 153-160.
[7] Siozios, Kostas, Pavlos E., and Alexandros K. (2006). Design and Implementation of a Secure Mobile IP Architecture.
[8] Stoica, Adrian. (2007). A Review on Mobile IP Connectivity and its QoS. International Journal of Multimedia and Ubiquitous Engineering.
[9] Perkins, Charles. (2002). IP mobility support for IPv4. IETF RFC 3344.
[10] Doja, M. N., and Ravish S. (2012). Analysis of Token Based Mobile IPv6 and Standard Mobile IPv6 using CPN Tool. International Journal, vol. 2, no. 7.
[11] Wu, Chun H., Ann T. Cheng, Shao T. Lee, Jan M. H., and Der T. L. (2002). Bi-directional route optimization in mobile IP over wireless LAN. In Vehicular Technology Conference, 2002. Proceedings. VTC 2002-Fall. IEEE 56th, vol. 2, pp. 1168-1172.
[12] Ayani, Babak. (2002). Smooth handoff in mobile IP. Master's thesis, Department of Microelectronics and Information Technology at KTH, University of California.
[13] Minhua Y., Yu L., Huimin Z. (2003). Handover Technology in Mobile IP. Telecommunication Technology.
[14] Min S., Zhimin L. (2003). Mobile IP and Its Improved Technologies. Telecommunications Science.
[15] Yong Feng, Fangwei Li. (2003). QoS Guarantee over Mobile IP.
[16] Nada, Fayza A. (2006). On using Mobile IP Protocols. Journal of Computer Science, vol. 2, no. 2.
[17] Mahmood, Sohaib. (2004). Triangle Routing in Mobile IP. Lahore University of Management Sciences, Pakistan. http://suraj.lums.edu.pk/-cs678s04/2004%20ProjectslFinals/Group04.pdf.
[18] Jan, R., Thomas R., Danny Y., Li F. C., Charles G., Michael B., and Mitesh P. (1999). Enhancing survivability of mobile Internet access using mobile IP with location registers. In INFOCOM'99. Eighteenth Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings. IEEE, vol. 1, pp. 3-11.
[19] Gao, Qiang, and Anthony Acampora. (2000). A virtual home agent based route optimization for mobile IP. In Wireless Communications and Networking Conference. WCNC. IEEE, vol. 2, pp. 592-596.
[20] Wang, Nen-Chung, and Yi-Jung Wu. (2006). A route optimization scheme for mobile IP with IP header extension. In Proceedings of the 2006 international conference on Wireless communications and mobile computing, pp. 1247-1252. ACM.
[21] Kumar, C., N. Tyagi, and R. Tripathi. (2005). Performance of Mobile IP with new route optimisation technique. In Personal Wireless Communications, ICPWC 2005, IEEE International Conference on, pp. 522-526. IEEE.
[22] Zhu, Zhenkai, Ryuji Wakikawa, and Lixia Zhang. (2011). SAIL: A scalable approach for wide-area IP mobility. In Computer Communications Workshops (INFOCOM WKSHPS), 2011 IEEE Conference on, pp. 367-372. IEEE.
[23] Gustafsson Eva, Jonsson Annika, Perkins Charles E. (2003). IETF Internet Draft. Mobile IPv4 Regional Registration.
[24] Aura, Tuomas. (2004). Mobile IPv6 Security. In Security Protocols, Springer Berlin Heidelberg, pp. 215-234.
[25] Aura, Tuomas, and Jari A. (2002). MIPv6 BU attacks and defenses. Work in Progress.
[26] Arkko, Jari, Vijay D., and Francis D. (2004). Using IPsec to protect mobile IPv6 signaling between mobile nodes and home agents.
[27] Barbudhe K. Vishwajit, Barbudhe K. Aumdevi. (2013). Mobile IPv6: Threats and Solution. International Journal of Application or Innovation in Engineering and Management.
Aatifah Noureen is a research scholar enrolled in the MS (IT) program at the University of Gujrat, Pakistan, since 2013. She enrolled at the University of Gujrat in 2008, as she realized her interest in computing and technology, and received her BS degree in Information Technology from the University of Gujrat in 2012. Her research interests include networking performance issues in mobile computing.

Zunaira Ilyas is a student of MS (IT) at the University of Gujrat, Pakistan, since 2013. She did her BS (IT) at the same university in 2012. Her research interest spans the area of mobile ad-hoc networks.

Iram Shahzadi is a research scholar at the University of Gujrat since 2013. She received her B.Sc. and M.Sc. degrees from the University of the Punjab, Pakistan, in 2002 and 2008 respectively. She has been serving as a lecturer at Govt. Margazar College for Women, Gujrat, since 2009. Her research interests are QoS of packet radio networks for real communication.
Dr. Muddesar Iqbal received his Ph.D. from Kingston University UK in the area of Wireless Mesh Networks in 2009. He has been serving as Associate Professor in the Faculty of Computing and Information Technology, University of Gujrat, Pakistan, since 2010. He won an Award of Appreciation from the Association of Business Executives (ABE) UK for tutoring the prize winner in the Computer Fundamentals module. He also received a Foreign Expert Certificate from the State Administration of Foreign Experts. He is the author of numerous papers in the area of wireless networks. He is also an approved supervisor for Ph.D. studies of the Higher Education Commission of Pakistan.
Mr. Muhammad Shafiq is a Ph.D. scholar in the Department of Information and Communication Engineering, Yeungnam University, South Korea. He received his Master of Science in Computer Science (MSCS) degree from UIIT, PMAS Arid Agriculture University Rawalpindi, Pakistan, in 2010. He also received the degree of Master in Information Technology (M.I.T) from the University of the Punjab in 2006. He served as a visiting lecturer at the Federal Urdu University, Islamabad, Pakistan, for a period of one year from 2009. He has been serving as Lecturer in the Faculty of Computing and Information Technology, University of Gujrat, Pakistan, since 2010. He has published more than 10 research papers in the field of information and communication technology at national and international level. His research interests span the areas of QoS issues and resource and mobility management in mobile ad-hoc networks.

Azeem Irshad is pursuing his Ph.D. at International Islamic University, Islamabad, after completing his MS-CS from PMAS Arid Agriculture University Rawalpindi, and is currently serving as a visiting lecturer at AIOU. He has published more than 10 papers in national and international journals and proceedings. His research interests include NGN, IMS security, MANET security, the merger of ad hoc networks with other 4G technology platforms, multimedia over IP and resolving emerging wireless issues.
Action recognition system based on human body
tracking with depth images
M. Martínez-Zarzuela 1, F.J. Díaz-Pernas, A. Tejeros-de-Pablos2, D. González-Ortega, M. Antón-Rodríguez
1
University of Valladolid
Valladolid, Spain
marmar@tel.uva.es
2
University of Valladolid
Valladolid, Spain
atejpab@ribera.tel.uva.es
Abstract
When tracking a human body, action recognition tasks can be performed to determine what kind of movement the person is performing. Although many implementations have emerged, state-of-the-art technology such as depth cameras and intelligent systems can be used to build a robust system. This paper describes the process of building a system of this type, from the construction of the dataset to obtain the tracked motion information in the front-end, to the pattern classification back-end. The tracking process is performed using the Microsoft(R) Kinect hardware, which provides a reliable way to store the trajectories of subjects. Then, signal processing techniques are applied to these trajectories to build action patterns, which feed a fuzzy-based neural network adapted for this purpose. Two different tests were conducted using the proposed system. Recognition among 5 whole-body actions executed by 9 humans achieves a 91.1% success rate, while recognition among 10 actions is done with an accuracy of 81.1%.
Keywords: Body-tracking, action recognition, Kinect depth
sensor, 3D skeleton, joint trajectories.
According to [3] the use of 3D information results
advantageous over the pure image-based methods. The
collected data is more robust to variability issues
including vantage point, scale, and illumination changes.
Additionally, extracting local spatial features from
skeleton sequences provides remarkable advantages: the
way an action is recognized by humans can be
conveniently represented this way, since the effect of
personal features is reduced, isolating the action from the
user who performed it.
1. Introduction
Body tracking and action recognition are study fields that
are nowadays being researched in depth, due to their high
interest in many applications. Many methods have been
proposed whose complexity can significantly depend on
the way the scene is acquired. Apart from the techniques
that use markers attached to the human body, tracking
operations are carried out mainly in two ways, from 2D
information or 3D information [1, 2].
On the one hand, 2D body tracking is presented as the
classic solution; a region of interest is detected within a
2D image and processed. Because of the use of silhouettes,
this method suffers from occlusions [1, 2]. On the other
hand, advanced body tracking and pose estimation is
currently being carried out by means of 3D cameras, such
Fig. 1 Fifteen-joints skeleton model
As stated by [4], one of the main issues existing in action
recognition is related to the extremely large
dimensionality of the data involved in the process (e.g.
noises in 2D videos, number of joints of 3D skeletons…).
designed in [6]. They manage simultaneous full-body pose
tracking and activity recognition. Another tracking
method is used in [7], where local shape descriptors allow
classifying body parts within a depth image with a human
shape. The results of this point detector show that an
articulated model can improve its efficiency. In [8], the construction of a kinematic model of a human for pose estimation is presented. Key-points are detected from
ToF depth images and mapped into body joints. The use
of constraints and joint retargeting makes it robust to
occlusions. In [9], human poses are tracked using depth
image sequences with a body model as a reference. To
achieve greater accuracy, this method hypothesizes each
result using a local optimization method based on dense
correspondences and detecting key anatomical landmarks,
in parallel. Other approaches to skeleton tracking use
multi-view images obtained from various cameras [10,
11].
This leads to problems such as increased computational complexity, and it makes the extraction of the key features of the action more difficult. Current
methodologies use four stages to solve this: feature
extraction, feature refinement to improve their
discriminative capability (such as dimension reduction or
feature selection), pattern construction and classification.
Then, by studying the temporal dynamics of the body
performing the movement, we can decode significant
information to discriminate actions. To that end, a
system able to track parts of the human body for the whole
duration of the action is needed.
However, many action recognition approaches do not
utilize 3D information obtained from consumer depth
cameras to support their system. The existence of state-of-the-art technology offering positions of the human body in
all three dimensions calls for the emergence of
methodologies that use it to track this information for a
variety of purposes such as action recognition. In addition
to this, an algorithm that is accurate and computationally
efficient is needed.
In [12], the challenge of recognizing actions is to account for
the variability that appears when arbitrary cameras
capture humans performing actions, taking into account
that the human body has 244 degrees of freedom.
According to the authors, variability associated with the
execution of an action can be closely approximated by a
linear combination of action bases in joint spatio-temporal
space. Then, a test employing principal angles between
subspaces is presented to find the membership of an
instance of an action.
This paper is focused on the work field of action
recognition using a 3D skeleton model. This skeleton is
computed by tracking the joints of the human performing
the movement. The emergence of novel consumer depth cameras, such as Kinect, enables convenient and reliable
extraction of human skeletons from action videos. The
motivation for this paper is to design an accurate and
robust action recognition system using a scheme that
takes into account the trajectory in 3D of the different
parts of the body. As a result, our system is more reliable
than those systems in which only thresholds are taken into
account when determining if a body part has performed a
movement. The movement is classified into a set of
actions by means of a neural network adapted for this
purpose. To that end, we track the moving body using
the Kinect hardware and OpenNI/NITE software to
extract 3D information of the position of the different
human joints along time.
Apart from ToF cameras, nowadays there exist several
low-cost commodity devices that can capture the user’s
motion, such as the Nintendo Wiimote, Sony Move, or the
Microsoft Kinect camera. This last device allows
perceiving the full-body pose for multiple users without
having any marker attached to the body. The mechanism
used in this camera device is based on depth-sensing
information obtained by analyzing infrared structured
light, to process a 3D image of the environment. It is
capable of capturing depth images with a reasonable
precision and resolution (640×480 px at 30 fps) at a
commercial cost.
2. 3D skeleton-based approaches
Around this hardware, a community of developers has
emerged and there is a wide variety of tools available to
work with Kinect. A commonly used framework for
creating Kinect applications is OpenNI [13]. OpenNI
software has been developed to be compatible with
commodity depth sensors and, in combination with
PrimeSense’s NITE middleware, is able to automate tasks
for user identification, feature detection, and basic gesture
recognition. These applications vary from gaming and
rehabilitation interfaces, such as FAAST [14] to
2.1 Action recognition system
The usage of depth cameras has allowed many new
developments in the field of action recognition. In [5],
depth images are processed with a GPU filtering
algorithm, introducing a step further in Real Time motion
capture. A pose estimation system based on the extraction
of depth cues from images captured with a ToF camera is
actions are: walking, jogging (slow running), gesture
(“hello” and “goodbye”), throw/catch (different styles),
boxing, combo (sequence of walking, jogging and
balancing).
Augmented Reality software. However, the most up-to-date Kinect tools are provided in the Microsoft Kinect SDK [15], released more recently. A previous study to determine the advantages of the Kinect SDK and OpenNI was carried out in [16]. It was concluded that, although it lacks some minor joints, OpenNI offers an open multiplatform solution with almost the same functionality for Kinect-based applications.
Some datasets, recorded using ToF cameras include depth
information. The ARTTS 3D-TOF database [19] offers
datasets of faces for detection, heads for orientation, and
gestures for recognition. The gestures dataset is composed of nine different simple actions, such as push and resize, performed by different users. Another example is the SZU
Depth Pedestrian Dataset [20], which contains depth
videos of people standing and walking for pedestrian
detection. In [6], a database to evaluate a pose estimation
system was proposed, since a synchronized dataset of ToF
captured images was not available online. This dataset
contains ten activities performed by ten subjects, and each of the movements was recorded 6 times: clapping, golfing, hurrah (arms up), jumping jack, knee bends, picking something up, punching, scratching head, playing the violin and waving.
In [3], a methodology for working with the Microsoft
Kinect depth sensor is provided. They explain an efficient
way to make use of the information about the human body
(i.e. relationships between skeleton joints). To build their
system, they assume that the distance function based on
the kinetic energy of the motion signals shows enough
discriminative power. Nevertheless, it is a nontrivial
problem to define geometric (semantic) relationships that
are discriminative and robust for human motions.
Taking Kinect skeleton data as inputs, [4] propose an
approach to extract the discriminative patterns for
efficient human action recognition. 3D Skeletons are
converted into histograms, and those are used to
summarize the 3D videos into a sequence of key-frames,
which are labeled as different patterns. Then, each action
sequence is treated as a document of action "words",
and the text classification concept is used to rank the
different heterogeneous human actions, e.g. "walking",
"running", "boxing". However, this technique does not
take into account most of the spatial structure of the data.
Ganapathi et al. built a database composed of 28 real-world sequences with annotated ground truth [5]. These
streams of depth images vary from short sequences with
single-limb motions to long sequences such as fast kicks,
swings, self-occlusions and full-body rotations. Actions
can have different length, occlusions, speed, active body
parts and rotation about the vertical axis. Depth
information is captured with a ToF camera, and ground truth is collected using 3D markers. The scope of this
extensive dataset is to evaluate performance on tracking
systems.
2.2 Datasets
In order to develop a recognition system, it is necessary to
have a set of movements to learn and classify. In this
section, some of the existing datasets that use depth
information are briefly described and a self-made
movement database constructed using a consumer depth
camera is explained. As described in [17], datasets can be
classified according to different criteria such as the
complexity of the actions, type of problem or source.
Heterogeneous gestures are those that represent natural
realistic actions (e.g. jumping, walking, waving…).
Apart from using ToF cameras, another way to obtain 3D
information is by capturing the video via a commercial
Kinect sensor, which offers a better resolution. An RGBD
dataset captured with Kinect is provided by [21],
containing specific daily life actions such as answering
the phone or drinking water. However, to the best of our
knowledge a Kinect dataset which contains heterogeneous
motions such as running, jumping, etc., cannot be found.
Moreover, several other approaches using 3D skeletons,
such as some of the aforementioned, are also evaluated
with this kind of movement.
According to [2], one of the datasets used to evaluate both
2D and 3D pose estimation and tracking algorithms is the
HumanEva database [18]. Its intention is to become a
standard dataset for human pose evaluation. Videos are
captured using a single setup for synchronized video and
ground-truth 3D motion, using high-speed cameras and a
marker-based motion acquisition system. HumanEva-I
dataset consists of four subjects performing six predefined
actions with a certain number of repetitions each. These
With this in mind and taking as a reference a set of
simple actions, we built our own corpus to test the
proposed action recognition system and compare it with
other related approaches. In order to perform complete
action recognition, the desired dataset should contain full-body natural motions, not based on interfaces or for
specific recognition tasks (dance moves…), performed by
represent the position of the points that match the main
joints of the human model. With these points, a stick
figure is drawn on the original image, overlaid on the
silhouette of the person.
different people, as in [18]. It consists of ten independent
actions represented by nine different users. As shown
in Figure 2, the actions are: bending down, jumping jack,
jumping, jumping in place, running, galloping sideways,
hop-skip, walking, waving one hand and waving both
hands. This database will be further extended to include
more users and action types. The scene is composed of a
cluttered background inside an office, although this
scenario does not need to be maintained.
Fig. 3 An action video is acquired live using a Kinect camera or via the action
dataset, and the human body is tracked from the beginning of the movement to
the end. The obtained 3D skeleton is processed to build motion patterns which
are introduced to the action classifier in order to determine which action is
being performed.
Fig. 2 Example of different movements and users
3.1 Skeleton pre-processing
Once body information is obtained, it is necessary to
break it down into motion data in order to represent an
action. Aggarwal et al. [24] describe a series of steps to
model an action recognition system. To represent the
trajectory of a joint, the coordinate system takes the center
of the image as the origin and then positive and negative
values are assigned to x and y coordinates. Coordinate z is
represented with a positive value, taking the position of
the camera as the origin. These values are given in millimeters.
3. System description
After presenting state-of-the-art 3D-based recognition
systems and datasets, this section describes our approach
derived from this study. Figure 3 depicts the organization
of the proposed system.
In our approach for action recognition, an OpenNI/NITE
wrapper is utilized to access the kinematic skeleton.
OpenNI is an API that can be used to interact with the
information provided by the Kinect technology. One of its
main features includes body tracking of one or more users
in front of the camera, which involves detecting a body
model and offering the coordinates of some of its joints.
OpenNI, in conjunction with NITE [22], supports the
detection and tracking of a kinematic skeleton. As shown
in Figure 1, in the case of Kinect this skeleton consists of
fifteen joints: head, neck, left shoulder, left elbow, left
hand, right shoulder, right elbow, right hand, torso, left
hip, left knee, left foot, right hip, right knee and right foot.
Once the movement starts, the system stores the body
coordinates. In each frame, the system takes from the
coordinate matrix extracted from OpenNI the x-y-z
coordinates of each joint. This coordinate information is
centered with respect to the position of the torso in the
first frame of the movement. This step allows reducing
variability with respect to the initial position and
redundancy of the coordinate values.
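For illustration, the pre-processing step just described can be sketched in a few lines of Python; the array layout, the torso index and the function name are our own assumptions for this sketch, not the authors' code.

```python
import numpy as np

def center_on_initial_torso(joints: np.ndarray, torso_index: int = 8) -> np.ndarray:
    """joints holds the x-y-z coordinates (in mm) of the fifteen tracked joints
    for every frame of one action, with shape (frames, 15, 3).
    The torso position of the first frame is subtracted from every joint so that
    the action becomes independent of where it was executed in the scene."""
    torso_first_frame = joints[0, torso_index, :]   # (3,) reference position
    return joints - torso_first_frame               # broadcast over frames and joints
```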
3.2 Pattern Generation
As suggested in [3, 23], instead of using edges or regions
within the images for recognition, a coordinate system of
some parts of the human body is utilized. The coordinates
Once an action is tracked, the segmented sequence is
represented into a motion matrix containing the x-y-z
normalized coordinates of the whole movement along
time, $p_j(t) = (x_j, y_j, z_j),\; j = 1, \dots, 15$. These motion signals
are rectified with respect to the position of the user’s torso
in the first frame, since we consider that the action can be
executed anywhere in the scene. Then, this tracking block
is processed into an action pattern in order to feed an intelligent system.
$$P = \bigcup_{k=1}^{9} \Big( \tilde{X}_{k0},\, \mathrm{Re}\,\tilde{X}_{k1},\, \mathrm{Im}\,\tilde{X}_{k1},\, \dots,\, \mathrm{Im}\,\tilde{X}_{k4},\;\; \tilde{Y}_{k0},\, \mathrm{Re}\,\tilde{Y}_{k1},\, \mathrm{Im}\,\tilde{Y}_{k1},\, \dots,\, \mathrm{Im}\,\tilde{Y}_{k4},\;\; \tilde{Z}_{k0},\, \mathrm{Re}\,\tilde{Z}_{k1},\, \mathrm{Im}\,\tilde{Z}_{k1},\, \dots,\, \mathrm{Im}\,\tilde{Z}_{k4} \Big) \qquad (1)$$
3.3 Action recognition
After studying several motion representation schemes [25],
it was inferred that using the coordinate points directly in
a time sequence results in enormous vectors when
modeling a motion pattern. In order to transform the raw
trajectory information into a manageable representation,
different compression techniques were analyzed [26]. The
way we reduce dimensionality of joint trajectories is by
applying the Fourier transform. As stated in [27], this
technique is used in many fields, and allows representing
time-varying signals of different duration. Also, by
keeping only the lowest components, the data can be
represented without losing highly relevant information,
smoothing the original signal. This also eliminates
possible existing noise. It is also indicated in [27] that, by selecting the first five components of the Fourier transform, the main information is always preserved.
The core of the system is in charge of carrying out the
action recognition tasks. Henceforth, we will use the term
class to represent each kind of movement. Therefore,
input patterns of the same class represent the same actions
performed by various users or various repetitions of the
same user.
The proposed system uses a neural network capable of
learning and recognizing action patterns P. This neural
network is based on the ART (Adaptive Resonance
Theory) proposed by Grossberg et al. [28], and thus includes
interesting characteristics like fuzzy computation and
incremental learning. Due to the nature of Fuzzy logic,
action patterns P need to be normalized altogether,
adjusting their values to the range of [0, 1], before they
can be introduced to the neural network for learning or
testing. Moreover, patterns are complement-coded to enter
the neural network. As a consequence, the action
recognition network works with very large information
patterns of 486 elements.
Unlike in the aforementioned paper, our system not only
follows the trajectory of a single point, but of a whole
skeleton composed of fifteen joints. When building the motion pattern, the FFT trajectories of each point, $\tilde{P}_j = \mathrm{FFT}(p_j(t)) = (\tilde{X}_j, \tilde{Y}_j, \tilde{Z}_j)$, are assembled together
to compose a single vector. In order to reduce even more
the dimension of the final motion pattern, only the main
joints are introduced into the final system. It was
determined that the joints to select were head and limbs,
discarding neck, shoulders, hips and torso. This is
because the range of movements of these joints is quite
limited, and also we are subtracting the initial torso
position to represent a relative motion.
The neural network comprises an input layer L1, an output layer L2 and a labeling stage. L1 and L2 are connected through a filter of adaptive weights wj. Similarly, L2 is linked to the labeling stage through Ljl weights. Before neural network training takes place, the weights wj are initialized to 1 and Ljl to zero.
The action patterns P activate L1 using complement
coding I = (P, 1-P). Each input I activates every node j in
L2 with an intensity $T_j$, through adaptive weights $w_j$ ($j = 0, \dots, N$). Activation is computed using Eq. (2), where $|\cdot|$ is the L1 norm of the vector and $\wedge$ is the fuzzy AND operator, $(p \wedge q)_i = \min(p_i, q_i)$.
Small values for the FFT may add non-relevant
information whereas greater values may eliminate
important information within those first components. A
sub-vector of nine elements is constructed using the real
and imaginary parts of the first five components of the
FFT. The motion pattern is then represented by
assembling the Fourier sub-vectors of each coordinate of each point, that is, 9 FFT elements × 3 coordinates (x, y, z) × 9 joints (k) = 243 elements, as depicted in Eq. (1).
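A minimal sketch of this pattern construction is given below (Python/NumPy). The selected joint indices and the helper names are illustrative assumptions; only the structure (nine joints, three coordinates, nine Fourier values each, 243 elements in total) follows the description above.

```python
import numpy as np

# Hypothetical indices of the nine selected joints (head and the four limbs).
SELECTED_JOINTS = [0, 3, 4, 6, 7, 10, 11, 13, 14]

def fourier_subvector(signal: np.ndarray) -> np.ndarray:
    """Nine-element descriptor of one coordinate trajectory: the DC term plus
    the real and imaginary parts of the first four harmonics (needs >= 5 samples)."""
    spectrum = np.fft.fft(signal)
    parts = [spectrum[0].real]
    for k in range(1, 5):
        parts.extend([spectrum[k].real, spectrum[k].imag])
    return np.asarray(parts)

def build_action_pattern(joints: np.ndarray) -> np.ndarray:
    """joints: torso-centred trajectories of shape (frames, 15, 3).
    Returns a 243-element motion pattern structured as in Eq. (1)."""
    subvectors = [fourier_subvector(joints[:, j, c])
                  for j in SELECTED_JOINTS          # 9 joints
                  for c in range(3)]                # x, y, z
    return np.concatenate(subvectors)               # 9 * 3 * 9 = 243 values
```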
$$T_j = \frac{|I \wedge w_j|}{0.05 + |w_j|} \qquad (2)$$
In contrast to the ARTMAP neural network [28], a fast
winner tracking node mechanism is included, so that only
those committed nodes in L2 that are linked to
supervision label l would participate in the competition
process. This way, a mismatch between the supervision
label and the label of the winning node is avoided.
Also, this mechanism favors a parallel implementation of
the system and speeds up the training process.
One of the advantages of our system is that it can be
trained to learn new actions and the neural network does not forget previously trained examples. For this test, for example, it was only necessary to teach the system to
include the new movements that appear in the larger set.
Figure 5 depicts the accuracy rates for this case.
A winner-take-all competition takes place in layer L2, and a winner node J such that $T_J = \max_j(T_j)$ is selected. Then, the resemblance between the action pattern $I$ and the weights of the winner node $w_J$ is measured using the match function in Eq. (3):

$$\frac{|I \wedge w_J|}{|I|} \geq \mathrm{threshold} \qquad (3)$$
If this criterion is not satisfied, a new node is committed in L2 and linked through the adaptive weights Ljl to the supervision label l. If the criterion is satisfied, the network enters resonance and the input vector I is learnt according to Eq. (4):

$$w_J \leftarrow \frac{1}{2}\left[\, w_J + (I \wedge w_J) \,\right] \qquad (4)$$
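For concreteness, the following sketch condenses Eqs. (2)-(4) into a small classifier; the vigilance value, the data structures and the winner-tracking details are simplifying assumptions of ours, not the authors' implementation.

```python
import numpy as np

class FuzzyARTClassifier:
    """Simplified fuzzy-ART-style classifier: complement coding, choice function
    (Eq. 2), match test (Eq. 3) and fast learning (Eq. 4), with winner tracking
    so that only nodes linked to the supervision label compete during training."""

    def __init__(self, threshold: float = 0.75, alpha: float = 0.05):
        self.threshold = threshold   # vigilance used in the match test (assumed value)
        self.alpha = alpha           # choice parameter of Eq. (2)
        self.weights = []            # adaptive weights w_j of committed L2 nodes
        self.labels = []             # supervision label attached to each node (L_jl)

    @staticmethod
    def complement_code(pattern: np.ndarray) -> np.ndarray:
        return np.concatenate([pattern, 1.0 - pattern])          # I = (P, 1 - P)

    def _choice(self, I: np.ndarray, w: np.ndarray) -> float:
        return np.minimum(I, w).sum() / (self.alpha + w.sum())   # Eq. (2)

    def train(self, pattern: np.ndarray, label: int) -> None:
        I = self.complement_code(pattern)
        # winner tracking: only nodes already linked to this label compete
        candidates = [j for j, l in enumerate(self.labels) if l == label]
        candidates.sort(key=lambda j: self._choice(I, self.weights[j]), reverse=True)
        for j in candidates:
            w = self.weights[j]
            if np.minimum(I, w).sum() / I.sum() >= self.threshold:  # match test, Eq. (3)
                self.weights[j] = 0.5 * (w + np.minimum(I, w))      # fast learning, Eq. (4)
                return
        self.weights.append(I.copy())   # no resonance: commit a new node for this label
        self.labels.append(label)

    def predict(self, pattern: np.ndarray) -> int:
        I = self.complement_code(pattern)
        scores = [self._choice(I, w) for w in self.weights]
        return self.labels[int(np.argmax(scores))]
```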
During classification, the weights do not change and the winning node in layer L2 determines the predicted label, corresponding to the weight Ljl = 1.

4. Evaluation

Two tests were carried out against the prototype of the action recognition system. For each, different movements from the dataset were used; the first one, called Basic, involved the simplest movements (bending down, jumping, jumping in place, walking and waving one hand). The evaluation was carried out with the leave-one-out cross-validation technique. This involves nine experiments, each one consisting in learning the actions of eight users and performing the test with the remaining subject.

Figure 4 shows the accuracy rates of the set of tests with the Basic dataset, which are quite satisfactory. Movements corresponding to jumping and walking are slightly confused. The recognition system achieves a classification success of 91.1%. Table 1 shows the corresponding confusion matrix, which details the specific mistakes made by the system.

Fig. 4 Accuracy rates of the Basic test

Table 1: Confusion matrix of the Basic test

Basic | Bend | Pjump | Jump | Walk | Wave1
Bend  |  9   |       |      |      |
Pjump |      |   7   |  2   |      |
Jump  |      |       |  9   |      |
Walk  |      |       |  2   |  7   |
Wave1 |      |       |      |      |   9
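A leave-one-subject-out loop of the kind used for these experiments can be sketched as follows; it reuses the illustrative FuzzyARTClassifier above, and the data container is an assumption of this sketch.

```python
import numpy as np

def leave_one_subject_out(patterns_by_user: dict) -> float:
    """patterns_by_user maps each of the nine user ids to a list of
    (action_pattern, class_label) pairs. For every user, the classifier is
    trained on the remaining eight users and tested on that user; the mean
    accuracy over the nine experiments is returned."""
    accuracies = []
    for test_user in patterns_by_user:
        clf = FuzzyARTClassifier()
        for user, samples in patterns_by_user.items():
            if user == test_user:
                continue
            for pattern, label in samples:
                clf.train(pattern, label)
        test_samples = patterns_by_user[test_user]
        correct = sum(clf.predict(p) == label for p, label in test_samples)
        accuracies.append(correct / len(test_samples))
    return float(np.mean(accuracies))
```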
Once the results of this test were obtained, the inclusion of new movements in the dataset was determined in order to carry out further experiments. The second set of tests was carried out including videos for jumping jack, running, galloping sideways, hop-skip and waving both hands.

Fig. 5 Accuracy rates of the Complete test

Table 2 presents the confusion matrix for the more complex test. Although movements such as jumping jack and waving both hands are successfully introduced into the system, jumping forward and galloping sideways are misclassified into the running category. Skipping and walking actions also show low accuracy in their recognition results. The average success rate in this case is 81.1%. These results allow drawing some important conclusions about the proposed system.
Table 2: Confusion matrix of the Complete test
Complete | Bend | Jack | Pjump | Jump | Run | Side | Skip | Walk | Wave1 | Wave2
Bend     |  9   |      |       |      |     |      |      |      |       |
Jack     |      |  9   |       |      |     |      |      |      |       |
Pjump    |      |      |   8   |  1   |     |      |      |      |       |
Jump     |      |      |       |  4   |  3  |      |  2   |      |       |
Run      |      |      |       |      |  9  |      |      |      |       |
Side     |      |      |       |  1   |  4  |  4   |      |      |       |
Skip     |      |      |       |  2   |  1  |      |  6   |      |       |
Walk     |      |      |       |  1   |  1  |      |  1   |  6   |       |
Wave1    |      |      |       |      |     |      |      |      |   9   |
Wave2    |      |      |       |      |     |      |      |      |       |   9
4.1 Discussion
5. Conclusions
The system is robust when recognizing actions that do not involve very fast transitions and actions that involve different body parts. However, the recognition rate decreases when more actions that do not follow these premises are included.
This study presented a body tracking and action
recognition system for heterogeneous actions using
consumer depth cameras, i.e. the Kinect. To test the system, an action dataset was also created from scratch. Since our
system is not only designed to recognize gestures but to
track general actions, the actions that are contained in the
dataset represent whole body movements. The results
obtained show that the proposed system is capable of
performing action recognition for heterogeneous
movements with a high success rate. When the base set of
five actions is used, a mean accuracy of 91.1% is achieved.
By adding another group of similar but more complex
actions, the mean hit rate is 81.1%.
By understanding how the whole system works, the reason for these results can be determined. When two joints follow similar trajectories in the 3D coordinate system, the components in the action pattern are closer and the action can be misclassified. This is the reason for the errors that occur among movements such as walk, run, skip and jump, where the tracked points share similar paths. It is also necessary to take into account that truncating the trajectory signal in the Fourier domain eliminates part of the information, smoothing the differences between similar signals.
Our system is able to track a human body by means of a
commodity depth camera, and software that provides a set
of trajectories. Taking these trajectories as features to
represent an action has been proved to be robust and
reduce the variability that different actors and repetitions
may introduce into the motion signals. Then, the
intelligent network used to classify the actions is capable
of incremental learning, so new actions can be included
into the system once deployed. It is also remarkable that
the processing applied to the motion signals is reversible,
so the original movement can be recovered.
When compared to other approaches that use joint
trajectories, the results are analogous. In [4] the accuracy
ratio is similar, and their confusion matrices also show
misclassification in jumping-waling actions. Authors in
[12] also state the difficulty of distinguishing walking
and running tracked actions using their approach,
lowering the hit rate.
An advantage of our system against other tracking and
recognition approaches is that the original motion signals
are recoverable, since the preprocessing of the motion
patterns can always be inverted. This system does not
need the actions to follow any specific rhythm either.
A possible future work would include the extension of the
dataset, with more varied actions and poses with
trajectories in the third dimension. We will consider the
possibility of opening the dataset for collaboration, so that
new videos could be submitted using an online webpage.
Also, since the system has been designed as a highly
parallelizable algorithm, GPU parallel coding techniques
[16] Martínez-Zarzuela M, Díaz-Pernas F.J, Tejero de Pablos A,
Perozo-Rondón F, Antón-Rodríguez M, González-Ortega D
(2011) Monitorización del cuerpo humano en 3D mediante tecnología Kinect [3D human body monitoring using Kinect technology]. SAAEI XVIII Edition, pp. 747-752.
[17] Chaquet J.M, Carmona E.J, Fernández-Caballero A (2013)
A survey of video datasets for human action and activity
recognition. Computer Vision and Image Understanding
117(6): 633-659.
[18] Sigal L, Balan A.O, Black M.J. HumanEva: Synchronized
video and motion capture dataset and baseline algorithm
for evaluation of articulated human motion. International
Journal of Computer Vision 87(1-2): 4-27.
[19] ARTTS 3D-TOF Database. Available: http://www.artts.eu/3d_tof_db.html. Accessed 2013 Apr 21.
[20] Shenzhen University (SZU) Depth Pedestrian Dataset.
Available:
http://yushiqi.cn/research/depthdataset.
Accessed 2013 Apr 21.
[21] Ni B, Wang G, Moulin P (2011) RGBD-HuDaAct: A color-depth video database for human daily activity recognition.
International Conference on Computer Vision Workshops
(ICCV Workshops), 2011 IEEE pp. 1147-1153.
[22] NITE, Algorithmic infrastructure for user identification,
feature detection and gesture recognition. Available:
http://www.primesense.com/solutions/nite-middleware/.
Accessed 2013 Apr 21.
[23] Guo Y, Xu G, Tsuji S (1994) Understanding Human
Motion Patterns. Proceedings of International Conference
on Pattern Recognition Track B: 325-329.
[24] Aggarwal J.K, Park S. Human Motion: Modeling and
Recognition of Actions and Interactions. 3D Data
Processing, Visualization and Transmission pp. 640-647.
[25] Kulic D, Takano W, Nakamura Y (2007) Towards Lifelong
Learning and Organization of Whole Body Motion Patterns.
International Symposium of Robotics Research pp. 113-124.
[26] Gu Q, Peng J, Deng Z (2009) Compression of Human
Motion Capture Data Using Motion Pattern Indexing.
Computer Graphics Forum 28(1): 1-12.
[27] Naftel A, Khalid S (2006) Motion Trajectory Learning in
the DFT-Coefficient Feature Space. IEEE International
Conference on Computer Vision Systems pp. 47-54.
[28] Carpenter G, Grossberg S, Markuzon N, Reynolds J, Rosen
D (1992) Fuzzy ARTMAP: A neural network architecture
for incremental supervised learning of analog
multidimensional maps. IEEE Transactions on Neural
Networks 3(5): 698-713.
may be used to speed up computationally expensive operations, such as pattern processing and neural network recognition.
References
[1] Poppe R (2007) Vision-based human motion analysis: An
overview. Computer Vision and Image Understanding 108:
4–18.
[2] Weinland D, Ronfard R, Boyer E (2011) A survey of vision-based methods for action representation, segmentation and recognition. Computer Vision and Image Understanding 115: 224–241.
[3] Raptis H.H.M, Kirovski D (2011) Real-Time Classification
of Dance Gestures from Skeleton Animation. Proceedings
of the 2011 ACM SIGGRAPH, ACM Press pp. 147–156.
[4] Chen H.L.F, Kotani K (2012) Extraction of Discriminative
Patterns from Skeleton Sequences for Human Action
Recognition. RIVF International Conference, IEEE 2012
pp. 1–6.
[5] Ganapathi V, Plagemann C, Koller D, Thrun S (2010) Real
Time Motion Capture Using a Single Time-Of-Flight
Camera. Computer Vision and Pattern Recognition pp.
755-762.
[6] Schwarz L.A, Mateus D, Castañeda V, Navab N (2010)
Manifold Learning for ToF-based Human Body Tracking
and Activity Recognition. Proceedings of the British
Machine Vision Conference 80: 1-11.
[7] Plagemann C, Ganapathi V, Koller D, Thrun S (2010)
Real-time identification and Localization of Body Parts
from Depth Images. IEEE International Conference on
Robotics and Automation pp. 3108-3113.
[8] Zhu Y, Dariush B, Fujimura K. Kinematic self retargeting:
A framework for human pose estimation. Computer Vision
and Image Understanding 114: 1362-1375.
[9] Zhu Y, Fujimura K (2010) A Bayesian Framework for
Human Body Pose Tracking from Depth Image Sequences.
Sensors 10: 5280-5293.
[10] Yániz C, Rocha J, Perales F (1998) 3D Part Recognition
Method for Human Motion Analysis. Proceedings of the
International Workshop on Modelling and Motion Capture
Techniques for Virtual Environments.
[11] Chen D, Chou P, Fookes C, Sridharan S (2008) Multi-View Human Pose Estimation using Modified Five-point
Skeleton Model.
[12] Sheikh M.S.Y, Sheikh M (2005) Exploring the Space of a
Human Action. International Conference on Computer
Vision 2005, IEEE 1: 144–149.
[13] OpenNI, Platform to promote interoperability among
devices, applications and Natural Interaction (NI)
middleware. Available: http://www.openni.org. Accessed
2013 Apr 21.
[14] Suma E.A, Lange B, Rizzo A, Krum D.M, Bolas M (2011)
FAAST: The Flexible Action and Articulated Skeleton
Toolkit. Virtual Reality Conference pp. 247-248.
[15] Microsoft Kinect SDK, Kinect for Windows SDK.
Available: http://www.microsoft.com/en-us/kinectforwindows/. Accessed 2013 Apr 21.
Mario Martínez Zarzuela was born in Valladolid, Spain, in 1979. He
received the M.S. and Ph.D. degrees in telecommunication
engineering from the University of Valladolid, Spain, in 2004 and 2009,
respectively. Since 2005 he has been an assistant professor in the
School of Telecommunication Engineering and a researcher in the
Imaging & Telematics Group of the Department of Signal Theory,
Communications and Telematics Engineering. His research interests
include parallel processing on GPUs, computer vision, artificial
intelligence, augmented and virtual reality and natural human-computer interfaces.
Francisco Javier Díaz Pernas was born in Burgos, Spain, in 1962.
He received the Ph.D. degree in industrial engineering from Valladolid
University, Valladolid, Spain, in 1993. From 1988 to 1995, he joined
the Department of System Engineering and Automatics, Valladolid
University, Spain, where he worked on artificial vision systems for industrial applications such as quality control for manufacturing. Since 1996,
he has been a professor in the School of Telecommunication
Engineering and a Senior Researcher in Imaging & Telematics Group
of the Department of Signal Theory, Communications, and
Telematics Engineering. His main research interests are applications
on the Web, intelligent transportation system, and neural networks for
artificial vision.
Antonio Tejero de Pablos was born in Valladolid, Spain, in 1987.
He received his M.S. in Telecommunication Engineering and M.S. in
ICT Research from the University of Valladolid in 2012 and 2013
respectively. During his academic career he has collaborated on
various projects with the Imaging & Telematics Group of the
Department of Signal Theory, Communications and Telematics
Engineering. Now he works as a trainee researcher at NTT
Communications Science Lab, in Japan. His research fields of
interest are Action Tracking & Recognition, Augmented Reality based
interfaces and General Purpose GPU Programming.
David González Ortega was born in Burgos, Spain, in 1972. He
received his M.S. and Ph.D. degrees in telecommunication
engineering from the University of Valladolid, Spain, in 2002 and 2009,
respectively. Since 2003 he has been a researcher in the Imaging and
Telematics Group of the Department of Signal Theory,
Communications and Telematics Engineering. Since 2005, he has
been an assistant professor in the School of Telecommunication
Engineering, University of Valladolid. His research interests include
computer vision, image analysis, pattern recognition, neural networks
and real-time applications.
Míriam Antón Rodríguez was born in Zamora, Spain, in 1976. She
received her M.S. and Ph.D. degrees in telecommunication
engineering from the University of Valladolid, Spain, in 2003 and 2008,
respectively. Since 2004, she has been an assistant professor in the School
of Telecommunication Engineering and a researcher in the Imaging &
Telematics Group of the Department of Signal Theory,
Communications and Telematics Engineering. Her teaching and
research interests include applications on the Web and mobile apps,
bio-inspired algorithms for data mining, and neural networks for
artificial vision.
Contention-aware virtual channel assignment in
application specific Networks-on-chip
Mona Soleymani1, Fatemeh Safinia2, Somayyeh Jafarali Jassbi3 and Midia Reshadi4
1,2 Department of Computer Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
1 Mona.soleymani@gmail.com, 2 f.safinia1@gmail.com
3,4 Faculty Member of Computer Department, Science and Research Branch, Islamic Azad University, Tehran, Iran
3 s.jassbi@srbiau.ac.ir, 4 reshadi@srbiau.ac.ir
Abstract
Nowadays, many processing elements must be placed on a single chip because of the ever-increasing complexity of applications. In order to connect this number of processing elements, providing an interconnection infrastructure is essential. Network on Chip (NoC) is used as an efficient architecture for chip multiprocessors, which brings reasonable performance and high scalability. A relevant challenge in NoC architectures is latency, which increases in the presence of network contention. In this paper, we utilize virtual channels in the buffer structure of routers. We calculate end-to-end latency and link utilization in NoCs by considering different numbers of virtual channels, ranging from 2 to 5. The simulation results show that using virtual channels for specified routers in a 4x4 2D mesh decreases end-to-end latency by up to 17%. It also improves link utilization and throughput significantly.
Keywords: Network on Chip, virtual channel, path-based contention, latency.

1. Introduction

With technology advances, the number of cores integrated into a single chip is increasing [1]. Traditional infrastructures such as non-scalable buses cannot handle the complexity of inter-chip communications. NoC (Network on Chip) architectures appeared as a new paradigm of SoC design, offering scalability, reusability and higher performance [1],[2]. A NoC is a set of interconnected switches, with IP cores connected to these switches.

Nowadays, one of the principal research issues that on-chip interconnection network designers encounter is the network's performance and efficiency [6]. In other words, when the network becomes congested, problems such as contention appear that have a direct impact on the NoC's delay and should be solved. Virtual channels are a solution for contention, but how and in what number they should be used is a significant challenge [4],[5],[8]. Different application-specific NoCs have specified traffic patterns [13]. Occasionally, the data flows of multiple paths may overlap in these traffic patterns. When two or more packets contend for one path, congestion appears and, as said, the network is consequently affected drastically by delay.

This discussion leads us to focus on delay as a major challenge in data transmission of application-specific NoCs [15]. In this article we propose a solution based on virtual channels to reduce delay.

Generally, in NoC architectures, the global wires are replaced by a network of shared links and multiple routers transmitting data flows through the cores. As said, one of the main problems in these architectures is traffic congestion, which should be avoided in order to optimize the efficiency and performance of NoCs.

In addition, the contention created by some simultaneous data flows through the cores and paths also decreases network performance, especially latency and throughput. We should consider that it is very hard to eliminate contention completely in such networks, but there are solutions to control these contentions and reduce their negative impact on the overall data flows and performance of the network [16].

In this work, by considering a special core graph as a sample, we use our proposed technique for the specified cores in which contention would happen. In the technique proposed in this paper for solving contention in NoCs, virtual channels are used for controlling path-based contentions.

The structure of this article is organized as follows: Section 2 reviews the fundamental structure of NoC, and preliminaries are presented in Section 3. Following that,
in Section 4, the path-contention issue is described through a motivational example, and an efficient allocation of virtual channels for some routers is proposed in Section 5. Experimental results are given and discussed in Section 6. Finally, Section 7 concludes the paper.
single buffer for each port in the router structure, one data flow has to wait behind another until the buffer area of the related port becomes empty. Virtual channels were originally introduced to solve the deadlock problem, but they can also be used to improve network latency and throughput [8].
Assigning multiple virtual paths to the same physical channel is the essence of VC flow control. The general structure of a virtual channel is presented in Figure 2. As shown, each input port has a physical buffer that is divided into two virtual channels; that is, instead of one channel that can hold four flits, we can use two separate channels, each of which can manage two flits on its own.
2. Fundamental structure of NoC
The NoC architecture consists of three components: routers, links and processing elements [14]. As shown in Figure 1, each router is composed of five input and five output ports. One pair of input/output ports is connected to the processing element, while the other four pairs are connected to the four adjacent routers in the four directions (north, south, east and west). Every input port has a FIFO buffer and a link controller (LC). The purpose of the buffers is to act as a holding area, enabling the router to manipulate data before transmitting it to another router, while the LC monitors the empty area in the buffer for a new packet/flit coming from neighboring routers. The transfer of data between the input and output ports of the switch is carried out by the crossbar, whose active connections are decided by the arbiter, a circuit that controls the crossbar.
Fig.2. virtual channel
3. Preliminaries
In this section, we first introduce some related concepts. The network contention that occurs in network-on-chip communications can be classified into three types: source-based contention, destination-based contention and path-based contention [3].
Source-based contention (Figure 3(a)) appears when two different traffic flows are distributed from the same source over the same link. Destination-based contention (Figure 3(b)) occurs when two traffic flows have the same destination; in this case it is common to see contention on the link leading to the shared destination at the end of the considered link. Path-based contention (Figure 3(c)) occurs when two data flows that come from different sources and go towards different destinations contend for the same links somewhere in the network (in other words, links that are common to two traffic flows are prone to contention).
Fig.1. structure of tile (router and processing element)
In this paper, we consider a 2-D mesh as our topology
with XY routing where the packet is first routed on the X
direction and then on the Y direction before reaching its
destination [7].
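A minimal sketch of the XY next-hop rule on such a mesh (with nodes numbered row by row, as in Figure 4) is shown below; it only illustrates the routing rule and is not the simulator's implementation.

```python
MESH_WIDTH = 4   # 4x4 mesh, nodes u0..u15 numbered row by row

def xy_next_hop(current: int, destination: int, width: int = MESH_WIDTH) -> int:
    """Return the next node id under XY routing: move along the X direction
    (columns) first and only then along the Y direction (rows)."""
    cur_y, cur_x = divmod(current, width)
    dst_y, dst_x = divmod(destination, width)
    if cur_x != dst_x:                                   # X direction first
        return current + 1 if dst_x > cur_x else current - 1
    if cur_y != dst_y:                                   # then Y direction
        return current + width if dst_y > cur_y else current - width
    return current                                       # already at the destination
```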
Wormhole switching is a well-known technique in multiprocessor networks [4]. The buffer structure in wormhole switching is based on flit units, which allows the buffer size to be minimized; in other words, power consumption is reduced by flit segmentation instead of packet segmentation.
For source-based contention, a time-scheduling method for data transmission can be a good solution to avoid contention inside the specified source core. Similarly, for destination-based contention, scheduling
Deadlock is a critical problem in wormhole switching that occurs when one router is used by multiple data flows, each of which has a different destination. By using a
packets sent to the same destination can have a significant impact on reducing this kind of contention.
Figure 4(b), which represents a network of switches (cores are ignored for simplicity); several paths over the links are shown.
Unlike the two previous types, path-based contention is not improved by timing techniques, so other methods are required to mitigate this contention.
We apply XY routing and wormhole switching for data transmission with 4 flits per packet. As seen in Figure 4(c), there is contention on some paths; for instance, five paths share a common link while transmitting their data. Network contention affects performance, latency and communication energy consumption.
Each specific path under XY routing in the traffic pattern shown in Figure 4(c) is composed of a set of links, as listed below:
To better explain the technique proposed in this paper, we first give the following definitions:
Definition: The core graph is a directed graph G(V, P). Each vertex vi ∈ V indicates a core in the communication architecture. A directed path pi,j ∈ P represents a communication path for the data flow from the source core vi to the destination core vj. The weight of pi,j, denoted W(pi,j), represents the data flow volume (in bits) from core vi to core vj.
P0,15 = l0,1 + l1,2 + l2,3 + l3,7 + l7,11 + l11,15
P1,11 = l1,2 + l2,3 + l3,7 + l7,11
P7,15 = l7,11 + l11,15
P6,15 = l6,7 + l7,11 + l11,15
P5,11 = l5,6 + l6,7 + l7,11
Notice that path contention occurs on paths that overlap on the same links. When pi,j ∩ pp,q ≠ Ø, there is at least one path contention. We calculate the value of α for each link that has path contention in this traffic pattern:
α(l1,2) = 2, α(l2,3) = 3, α(l3,7) = 2, α(l7,11) = 5, α(l11,15) = 3, α(l6,7) = 2
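The α values of such an example can be counted directly from the XY-routed paths. The sketch below reuses the xy_next_hop helper above and lists only the five example flows, so it does not cover every flow of the full traffic pattern.

```python
from collections import Counter

def xy_path_links(src: int, dst: int):
    """Directed links (u, v) traversed by an XY-routed packet from src to dst."""
    links, node = [], src
    while node != dst:
        nxt = xy_next_hop(node, dst)
        links.append((node, nxt))
        node = nxt
    return links

# The five example flows of the motivational scenario: (source, destination).
flows = [(0, 15), (1, 11), (7, 15), (6, 15), (5, 11)]

alpha = Counter(link for s, d in flows for link in xy_path_links(s, d))
# For this set of flows, alpha[(7, 11)] == 5 and alpha[(11, 15)] == 3.
```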
When node u5 wants to send a packet to node u11 based on XY routing, it goes two hops east (the first hop to u6 and the second hop to u7) and then one hop down.
Fig.3. (a) source-based contention (b) destination-based contention (c)
path-based contention
Similarly, when node u1 sends a packet destined for u11, it first goes east to u3 and then goes down two hops. Like the two previous data flows, the packets belonging to nodes u0, u6 and u7 are delivered to their destinations across some common links.
Every specific path from core vi to core vj consists of links denoted by lm,n, which means a directed link from vm to vn (vm and vn are always adjacent cores). As said earlier, when two or more paths contend at the same time for the same link to transfer their data flows, path contention occurs. α(li,j) is a parameter of the link from core vi to core vj that shows how many paths contend simultaneously for access to it.
As described in the above scenario, for these five data flows some paths, and in more detail some links, are common, and hence they are prone to contention. In this example, the link between u7 and u11 carries five different data flows, as shown in Figure 4(c). Although some of these contentions may appear to be destination-based contention, we treat them as path-based contention in this paper.
4. Motivational example
As a starting point we assume an application that is mapped onto a 2-D mesh topology.
The ratio of path-based to source-based contention and the ratio of path-based to destination-based contention increase linearly with the network size [3]. Therefore, in this paper we focus on reducing path-based contention, since it has the most significant impact on packet latency and can be mitigated through the mapping process. Hence, to overcome this problem, we propose using virtual channels.
The communication between the cores of the NoC is represented by the core graph [9]. The connectivity and link bandwidth are represented by the NoC graph. As shown in Figure 4, a general application such as the one in Figure 4(a) can be mapped (with any mapping algorithm) onto a 2-D mesh topology of dimension 4×4. To show the impact of path-based contention, consider the example of
Fig. 4. (a) Core graph, (b) NoC graph, (c) Mapping
the u0 and u1 flits go to their destination through another virtual channel, and the u7 processing element has a separate single virtual channel.
5. Proposed Technique
In this section we focus on the buffer structure, a fundamental element of the router, to reduce contention. Using a single buffer in the router structure does not remarkably alleviate the contention problem. In the case with no virtual channels, flits 0 and 1 will prevent flits 5, 6 and 7 from advancing until the transmission of flits 0 and 1 has been completed (Figure 5).
In this paper, we consider a buffer size of 5 flits for every port of each router. In virtual channel allocation, the question is how many flits should be allocated to each virtual channel. Based on this assumption, there are 5 flits that should be divided between the virtual channels proportionally. As an illustration, u1 has two virtual channels on its east output port: one virtual channel is allocated to u0 and the other belongs to the u1 processing element. In this case, the volume of data determines the number of flits in each virtual channel. The data volume of u0 is 910 Mb and the data volume of u1 is 512 Mb (Figure 4(a)). Hence, u0 gets 3 flits because of its larger data volume and the 2 remaining flits are allocated to u1.
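The proportional split of the 5-flit buffer between the virtual channels of a port can be written as a small helper; the function below is an illustrative sketch of this rule, not part of the simulator.

```python
def allocate_flits(data_volumes: dict, total_flits: int = 5) -> dict:
    """Split `total_flits` between virtual channels proportionally to the data
    volume of each flow, giving every channel at least one flit."""
    total_volume = sum(data_volumes.values())
    shares = {vc: max(1, round(total_flits * vol / total_volume))
              for vc, vol in data_volumes.items()}
    while sum(shares.values()) > total_flits:       # fix rounding overshoot
        shares[max(shares, key=shares.get)] -= 1
    while sum(shares.values()) < total_flits:       # fix rounding undershoot
        shares[max(data_volumes, key=data_volumes.get)] += 1
    return shares

# Example of Figure 4(a): u0 sends 910 Mb and u1 sends 512 Mb through the east
# port of router u1, so u0's channel gets 3 flits and u1's channel gets 2.
print(allocate_flits({"u0": 910, "u1": 512}))   # {'u0': 3, 'u1': 2}
```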
Fig.5. (a) router7 without VC (b) router7 with three VCs
In this paper, we propose the use of virtual channels for the routers that are placed on contended paths. The number of virtual channels for any port of each router can be modified. Increasing the number of virtual channels has a direct impact on router performance, through its effect on the achievable hardware cycle time of the router [4]. For example, in u7 we can define three VCs for the south port: one VC for flits coming from the north port (u0, u1), one VC for flits coming from the west port (u5, u6) and one single buffer (one VC) for the processing element port.
The paths u0→u15, u1→u11, u7→u15, u5→u11 and u6→u15 have common links, so implicitly there is contention among these paths. However, the flits of u5 and u6 share one virtual channel, and on the other hand
Fig.6. VCs for some routers on path-contention
6. Experimental Results
In this paper we have used HNOCS, which is developed in the OMNeT++ environment [18], in order to evaluate the functionality of Networks-on-Chip [10],[11],[12]. This simulator makes it possible to assess different traffic patterns with different packet generation rates. We define a 4×4 mesh and a flit size of 4 B.

We first evaluate our network from the path-based contention point of view, and the routers that encounter this kind of contention are selected. After that, virtual channels are allocated to the specified ports of these routers in which path-based contention occurs. For various numbers of VCs, ranging from 2 to 5, different results are obtained from the simulation.

Simulations are conducted to evaluate the end-to-end latency, link utilization and throughput. The most important achievement is that the end-to-end latency is reduced by up to 17% by increasing the number of virtual channels, as seen in Figure 7.

Fig. 7. Average end-to-end latency results for different numbers of VCs

By modifying the number of VCs, the link utilization differs. Figure 8 shows that using 5 VCs yields the highest link utilization compared with 2, 3 and 4 VCs. More virtual channels lead to more independent virtual paths between routers; hence, the bandwidth of the links is utilized more efficiently.

Fig. 8. Average link utilization results for different numbers of VCs

Throughput can be defined in a variety of different ways depending on the specifics of the implementation [17]. We used the definition of throughput based on [17], where, for a message-passing system, TP can be calculated as in Eq. (1):

$$\mathrm{TP} = \frac{\text{Total messages completed} \times \text{Message length}}{\text{Number of IP blocks} \times \text{Total time}} \qquad (1)$$

As shown in Figure 9, we calculated the number of received packets, which is equal to (total messages) × (message length). In the simulation process, the number of IP blocks has been 16 and the total time is 100 µs. Also, according to the technique proposed in this paper, the number of virtual channels varies from 2 to 5. The TP results obtained from Eq. (1) can be seen in Table 1. In this table, we observe that the throughput (TP) is slightly higher when using 5 VCs compared to the case without VCs.

Fig. 9. Average number of received packets for different numbers of VCs

Table 1: Throughput for different numbers of VCs

Number of VCs | Throughput
1             | 0.0032022
2             | 0.0035555
3             | 0.0038215
4             | 0.0040455
5             | 0.0041890
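As a small helper for Eq. (1), the throughput of a run can be recomputed from the simulator's counters as follows; the argument names are ours, the default values follow the setup described above (16 IP blocks, 100 µs), and the units are whatever the simulator reports.

```python
def throughput(total_messages_completed: float, message_length: float,
               num_ip_blocks: float = 16, total_time: float = 100.0) -> float:
    """Eq. (1): TP = (total messages completed x message length)
                     / (number of IP blocks x total time)."""
    return (total_messages_completed * message_length) / (num_ip_blocks * total_time)
```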
7. Conclusions

In this paper we have explored the influence of virtual channels in the presence of path contention. In an application-specific NoC mapped onto any topology, path contention appears as a problem. We show that selecting an appropriate number of virtual channels based on data volume for some specified routers can
significantly improve the network contention behaviour. Simulations have been carried out with different numbers of virtual channels. The experimental results show that by using our proposed method an improvement is obtained in the end-to-end latency, link utilization and throughput parameters.
[14] Shiuan Peh, L., & Jerger, N. E. (2009). On-Chip Networks
(Synthesis Lectures on Computer Architecture). Morgan
and Claypool publisher, Mark D. Hill Series Editor,
Edition1.
[15] Seiculescu, C., Rahmati, D., Murali, S., Benini, L., De
Micheli, G., & Sarbazi-Azad, H. (2013). Designing Best
Effort Networks-on-Chip to Meet Hard Latency
Constraints. The Journal of ACM Transaction on
Embedded Computing Systems (TECS), 12(4), 109.
References
[1] Benini, L., & De Micheli, G. (2002). Network on Chip: A
New SoC Paradigm. IEEE Trans. Computer, 35(1), 70.
[16] Lin, S. Y., Huang, C. H., & Chao, C. H. (2008). Traffic-Balanced Routing Algorithm for Irregular Mesh-Based
On-Chip Networks. The Journal of IEEE Transaction on
Computers, 57(9).
[2] Daly, W., & Towles, B. (2001). Route Packets, Not Wires:
On-Chip Interconnection Networks. Proceeding of Design
Automation Conference (DAC), 684.
[3] Chou, C. L., & Marculescu, R. (2008). Contention-Aware
Application Mapping for Network on Chip Communication
Architecture. IEEE International Conference, 164.
[17] Pande, P. P., Grecu, C., Jones, M., & Ivanov, A. (2005).
Performance Evaluation and Design Trade-Offs for
Network-on-Chip Interconnect Architectures. The
Journal of IEEE Transaction on Computers, 54(8), 1025.
[4] Duato, J., Yalamanchili, S., & Lionel, N. (2002).
Interconnection Networks. Morgan Kaufmann.
[18] Varga, A. (2012). The OMNeT++ discrete event
simulation system.
Proceedings of the European
Simulation Multi conference (ESM’2001), 319.
[5] Dally, W. J., & Towles, B. (2003). Principles and Practice
of Interconnection Networks. The Morgan Kaufmann
Series in Computer Architecture and Design.
Mona Soleymani received her B.Sc. from the Tehran Central Branch of Islamic Azad University, Tehran, Iran, in 2010 in hardware engineering and her M.Sc. from the Science and Research Branch of Islamic Azad University, Tehran, Iran, in 2013 in computer
architecture engineering. She is currently working toward the
Ph.D. in computer architecture engineering at the Science and
Research Branch of IAU. Her research interests lie in Network
on Chip and routing, switching and mapping issues in on-chip
communication networks.
[6] Bjerregaard, T., & Mahadevan, S. (2006). A Survey of
Research and Practices of Network-on-Chip. Journal of
ACM Computing surveys (CSUR), 38(1).
[7] Kakoee, M. R., Bertacco, V., & Benini, L. (2011). A
Distributed and Topology-Agnostic Approach for On-line
NoC Testing. Networks on Chip (NoCS), Fifth IEEE/ACM
International Symposium, 113.
[8] Mullins, R., West, A., & Moore, S. (2012). Low-Latency
Virtual-Channel Routers for On-Chip Networks.
Proceedings of the 31st annual international symposium on
Computer architecture, 188.
Fatemeh Safinia received her B.Sc. from the Tehran Central Branch of Islamic Azad University, Tehran, Iran, in 2010 in hardware engineering and her M.Sc. from the Science and Research Branch of Islamic Azad University, Tehran, Iran, in 2013 in computer
architecture engineering. She is currently working toward the
Ph.D. in computer architecture engineering at the Science and
Research Branch of IAU. Her research interests lie in Network
on Chip and routing, switching and mapping issues in on-chip
communication networks.
[9] Murali, S., & De Micheli, G. (2004). Bandwidth-Constrained Mapping of Cores onto NoC Architectures.
Design, Automation and Test in Europe Conference and
Exhibition, 896.
Somayyeh Jafarali Jassbi received her M.Sc. degree in
computer architecture from Science and Research Branch of
Islamic Azad University(SRBIAU), Tehran, Iran in 2007. She also
received her Ph.D. degree in computer architecture from
SRBIAU, Tehran, Iran in 2010. She is currently Assistant
Professor in Faculty of Electrical and Computer Engineering of
SRBIAU. Her research interests include Information
Technology(IT), computer arithmetic, cryptography and network
security.
[10] Walter, I., Cidon, I., Ginosar, R., & Kolodny, A. (2007).
Access Regulation to Hot-Modules in Wormhole NoCs.
Proceedings of the First International Symposium on
Networks-on-Chip, 137.
[11] Ben-Itzhak, Y., Zahavi, E., Cidon, I., & Kolodny, A. (2013). HNOCS: Modular Open-Source Simulator for Heterogeneous NoCs. International Conference on Embedded Computer Systems (SAMOS), 51.
Midia Reshadi received his M.Sc. degree in computer
architecture from Science and Research Branch of Islamic Azad
University (SRBIAU), Tehran, Iran in 2005. He also received his
Ph.D. degree in computer architecture from SRBIAU, Tehran,
Iran in 2010. He is currently Assistant Professor in Faculty of
Electrical and Computer Engineering of SRBIAU. His research
interests include Photonic NoCs, fault and yield issues in NoCs,
routing and switching in on-chip communication networks. He is
a member of IEEE.
[12] Ben-Itzhak, Y., Cidon, I., & Kolodny, A. (2012).
Optimizing Heterogeneous NoC Design. Proceeding of
the International Workshop on System Level Interconnect
Prediction, 32.
[13] Murali, S., & De Micheli, G. (2004). Bandwidth-Constrained Mapping of Cores onto NoC Architectures.
Optimization Based Blended Learning Framework for
Constrained Bandwidth Environment
Nazir Ahmad Suhail1, Jude Lubega2 and Gilbert Maiga3
1 School of Computer Science and Information Technology, Kampala University, Kampala, Uganda
nazir_suhail@yahoo.co.uk
2 Uganda Technology and Management University, Kampala, Uganda
jlubega@utamo.ac.ug
3 Department of Information Technology, College of Computing and Information Sciences, Makerere University, Kampala, Uganda
gilmaiga@yahoo.com
technology revolution. Research shows that demand for higher education is expanding exponentially worldwide [1], and about 150 million people are estimated to seek tertiary education by the year 2025.
Abstract
Adoption of a multimedia-based e-learning approach in higher education institutions is becoming a key and effective component of curriculum delivery. Additionally, the multimedia teaching and learning approach is a powerful tool to increase the perceived level of user satisfaction, leading to an enhanced blended learning process. However, transmission of multimedia content over the Internet in a Constrained Bandwidth Environment (CBE) is a challenge. After a careful scan of the literature, this paper developed a framework that addresses the most critical issue not covered by the existing blended learning frameworks. The framework was tested and validated through an experimental scientific research process. This paper summarizes the main findings of the PhD thesis defended by the author some time back.
On the other hand, a large percentage of eligible candidates do not get access to higher education in LDCs. According to the Millennium Development Goals report [2], university enrollment in Sub-Saharan Africa is the lowest compared to other parts of the world.
To be part of the global village, it is necessary for Sub-Saharan countries to make the transition from natural-resource-based economies to knowledge-based economies, in an effort to depart from the existing situation. Researchers argue that e-learning can be used to deliver cost-effective, flexible, and quality higher education more effectively, with improved access by learners [3], and that it has the capacity to provide education for all in LDCs [4]. However, universities in LDCs are integrating Information and Communication Technologies (ICT) into their systems at a very slow pace. A UNESCO Institute for Statistics report [5] states that "in Sub-Saharan Africa, it is estimated that only 1 in 250 people have access to the Internet as against the global average of 1 in 15" [6].
Keywords: Blended Learning, Framework, Multimedia Learning, Optimization, Constrained Bandwidth Environment.
1. Introduction
In the current knowledge-driven society, developments in Information Technology have brought about many positive changes in every field, including education. Driven by the growing demand for higher education, together with increasingly available new and innovative information technologies to increase access to learning opportunities, organizations in Least Developed Countries (LDCs) are considering cost-effective and efficient means for course delivery [34]. The aim of the large-scale move to use technology to deliver educational content by organizations in emerging environments is to prepare them for the information and communication
The implementation of e-learning is an ideal choice for these universities, and the introduction of the blended learning model, already pursued by many organizations in the developed world, is a natural starting point for universities in developing countries to achieve this goal.
2. What type of multimedia content is compatible with the challenges of a Constrained Bandwidth Environment?
3. What are the factors of e-readiness leading to the development of an e-readiness model for organizations seeking to adopt blended learning systems, and what are their challenges within a Constrained Bandwidth Environment?
4. What framework design supports the implementation of the blended learning process in a Constrained Bandwidth Environment?
5. How well does the proposed framework address the issues of a Constrained Bandwidth Environment?
Research indicates that the blended learning approach is the most effective teaching and learning model [7], [8]. Although the concept of blended learning can prove to be a ground-breaking and innovative approach in the context of developing countries [9], it is not a panacea. There are obstacles, such as the Constrained Bandwidth Environment (CBE), which can be a key challenge to its adoption in the sub-region [10]. A Constrained Bandwidth Environment refers to insufficient bandwidth as compared to user needs, coupled with other constraints such as high cost, and misuse and mismanagement due to ineffective or non-existent Bandwidth Management Policies (BMP), viruses, spam, etc.
3. Methodology
This study used mixed research methods, qualitative and quantitative, with approaches such as case studies, surveys, and experiments, as dictated by the nature of the study and by research questions that could not be answered using a single approach [15].
A large number of organizations in the world are increasingly adopting blended learning solutions in their systems, and several frameworks have been designed by various authors, such as [10], [11], [12], [13], [14]. A critical review of the issues addressed by the existing blended learning frameworks suggests that many aspects of the constrained bandwidth environment that need to be addressed explicitly are taken for granted.
3.1. Research Design Framework
This research followed the generalized research design
framework format as suggested by [15]. Fig. 1 presents
the research design framework of the study.
2. Aim of the Research
The main objective of this research was to develop a framework for the adoption of the blended learning process in a Constrained Bandwidth Environment, a peculiar state associated with most developing countries. Specifically, the study intended:
(a) To conduct a critical investigation of the literature on current approaches, aimed at identifying gaps and challenges for implementing the blended learning process within a Constrained Bandwidth Environment.
(b) To identify multimedia (learning content) compatible with the challenges of a Constrained Bandwidth Environment.
(c) To explore factors of e-readiness leading to the development of an e-readiness model for organizations seeking to implement the blended learning process, and their challenges in the context of a Constrained Bandwidth Environment.
(d) To design a framework for implementing the blended learning process in a Constrained Bandwidth Environment.
(e) To test and validate the designed framework.
Fig. 1 Research design framework
To address the specific objectives, this research sought to develop the framework guided by the following research questions:
1. What are the current approaches, and their challenges, for implementing the blended learning process within a Constrained Bandwidth Environment?
The first phase presented the 'Critical Literature Review' to identify the research question. The second phase was 'Gathering and Analyzing User Requirement Data' for e-learning, which is part of the blended learning process. The third phase was 'Design of the Framework'. The design of the
and empirical investigation. The literature investigation aimed to identify multimedia compatible with the challenges of CBE and focused on the most important and dominant multimedia application, namely the video format, using a Multi-Level Systematic Approach (MLSA). The research found that MPEG-4 (.mp4), along with the MP3 audio and JPEG graphic formats, is the only video format compatible with the challenges of developing countries, answering Research Question 2.
framework was informed by the first and second phases, which served as input to the design phase. The designed framework was tested and validated in the fourth phase, which completed the framework development process. The following sections present a brief description of all the phases.
3.1.1 Identifying approaches for adoption of the Blended Learning process in CBE
The aim of identifying existing blended learning approaches, strategies, and their challenges was to identify the research question. To achieve this objective, a literature survey was conducted using various 'key words' and 'citations' during the research process, aimed at providing an answer to Research Question 1. Secondary data were acquired through the study of technical literature: research articles, research reports, books, journals, and reports on the general problem area, as well as by attending and participating in international conferences and interacting with e-learning experts and stakeholders [16]. The literature survey also included relevant PhD theses submitted to various universities around the world.
Aligning a strategy with the challenges, opportunities, and realities that abound in the context is an important preliminary step leading to the successful adoption of an innovation [19]. Also, the development of such a strategy must, on the one hand, involve the relevant stakeholders at an early stage and, on the other hand, require that the stakeholders be 'e-ready', in order to design a coherent, achievable strategy tailored to meet their needs [20]. The key stakeholders identified by this research included institutions, lecturers, and students [21].
Based on this background, this research used a multiple case study approach, an appropriate research method when the intent of the study is description or theory building, as suggested in [22], in order to achieve the underlying objective of the empirical investigation. The researchers selected 3 universities in Uganda, in line with the guidelines of [23], and developed three survey questionnaires, the Institutional Readiness Questionnaire (IRQ), the Lecturers' Perception Questionnaire (LPQ), and the Students' Perception Questionnaire (SPQ), to explore factors contributing to e-readiness in a constrained bandwidth environment, as shown in Fig. 2.
The literature review revealed that [10], [11], [12], [13], [14] are the most popular models for the adoption of blended learning. An overview of the identified blended learning frameworks suggests that they mainly address issues related to the following components: Institutional (administrative & academic affairs, etc.); Learners' & Lecturers' concerns (audience requirement analysis); Pedagogical (content analysis); Technological Infrastructure (hardware & software); Learning Content (teaching and learning material); Management & Evaluation (logistics, student support services); Ethical (copyright, etc.); Resources (human & financial resources); Interface Design (user interface); and Interactivity (interactivity among the various learning modes in a blended learning environment).
Prior reliability testing of the data collection tools was done using Cronbach's alpha coefficient, in line with the suggestions of [24], which gives the average correlation among the various items in the questionnaires. The responses were coded into the Statistical Package for the Social Sciences (SPSS) to run the reliability test. The Cronbach's alpha coefficient values for the Institutional and Students' questionnaires were found to be above 0.7, which is acceptable. However, for the lecturers' questionnaire, the reliability test could not be run because most of the questions were open-ended, aimed at extracting information from lecturers rather than seeking their opinion.
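As an illustration of this reliability step, the sketch below computes Cronbach's alpha for a small set of questionnaire items; the response matrix is hypothetical, and in the study itself the coefficient was obtained through SPSS.

```python
# Minimal sketch of Cronbach's alpha for questionnaire reliability:
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total score)).
# The response matrix below is hypothetical; the study used SPSS for this step.
from statistics import pvariance

def cronbach_alpha(responses):
    """responses: one list of item scores per respondent."""
    k = len(responses[0])                                # number of items
    items = list(zip(*responses))                        # scores grouped per item
    item_vars = [pvariance(item) for item in items]
    total_var = pvariance([sum(r) for r in responses])   # variance of total scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

sample = [[4, 3, 4, 5], [3, 3, 4, 4], [5, 4, 5, 5], [2, 2, 3, 3]]  # hypothetical Likert data
print(f"alpha = {cronbach_alpha(sample):.2f}")   # values above 0.7 are usually acceptable
```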
The critical analysis of the literature points to a lack of frameworks for the adoption of the blended learning process that address the important issue of constrained bandwidth environments. The need therefore remains for a framework that supports the implementation of the blended learning process in constrained bandwidth environments [18].
The reviewed literature gave a way forward to gather data on user requirements.
3.1.2 Gathering User Requirements Data
User requirements data were gathered through a study investigation that comprised a literature investigation
bandwidth environment. E-learning, which is an essential part of blended learning, occurs when there is at least learning content (multimedia) and a network to deliver the content [27]. Hence, multimedia- and network (bandwidth)-related data constitute the main part of the user requirements data for the design of a blended learning framework adaptable to a constrained low-bandwidth environment.
On that pretext, section 3.1.2 gathered data on user requirements; it identified multimedia (learning content) compatible with the challenges of a constrained bandwidth environment through a literature investigation, and identified the most critical (bandwidth- or network-related) challenges that organizations seeking to implement blended learning solutions are facing. The framework that addresses the issue of the constrained bandwidth environment, not addressed by existing blended learning frameworks, is presented in Fig. 3. The framework uses the popular framework by Khan [13] as its basis and has two main components: Network Optimization and Multimedia Optimization.
Fig.2 Survey research design
The questionnaires were distributed among the identified stakeholders in a sample of three organizations (universities): A, B, and C. The study applied a convenience sampling technique, used widely in doctoral thesis reports (e.g., [25]). Furthermore, the methodology used is appropriate for this research and has achieved recognition by researchers at the international level (e.g., see [26]).
The study concluded with the identification of the network (bandwidth) related issues and assessed the level of e-readiness, leading to the development of an e-readiness model informed by the three survey questionnaires (IRQ, LPQ, and SPQ), as the answer to Research Question 3. Lack of sufficient bandwidth, low-speed Internet, the high cost of bandwidth, and ineffective or non-existent Bandwidth Management Policies (BMP) leading to mismanagement of the little available bandwidth were among the main critical issues identified during the investigation.
Fig. 3 Optimization Based Blended Learning Framework for (CBE)
(OBBLFCBE)
The identification of multimedia compatible with the challenges of a constrained bandwidth environment and of an e-readiness model was a step towards designing a framework that addresses the critical issues not addressed by existing blended learning frameworks and models.
Khan's framework has eight dimensions: Institutional, Pedagogical, Technological, Interface Design, Evaluation, Management, Resource Support, and Ethical. Although Khan's framework considers many factors leading to the adoption of a successful blended learning strategy in organizations, the issue of adopting the blended learning process in a constrained bandwidth environment has not been addressed. More particularly, the Pedagogical dimension of the framework states that organizations are required to provide sufficient
3.1.3 Design of the Framework
The design of the framework is informed by the previous sections 3.1.1 and 3.1.2. Section 3.1.1 affirmed the need for a blended learning framework adaptable to a constrained
bandwidth, which is limited in a CBE, and to carry out learning content analysis, without providing details about the type of content compatible with the challenges of a constrained bandwidth environment, where designing the content is costly.
The underlying thesis defended in this research addressed the two most critical issues above and posits that, in a constrained bandwidth environment, optimizing the network and the multimedia content supports a framework for the adoption of the blended learning process. Moreover, research shows that the application and the network are symbiotic, and performance is enhanced when both the network and the learning content are optimized together [28]. These two issues were impeding the technology integration process in developing countries.
The framework shown in Fig. 3 comprises a sequence of processes. Each process has its input and output. Network optimization and multimedia optimization are the main components, governed by their respective Quality of Service (QoS) rules. Pre-optimization states are inputs to the optimization states of the network and multimedia processes, whereas post-optimization states act as their outputs.
Fig. 4 Network Optimization
Authentication: A User Authentication System (UAS) can be used to restrict access to network resources by unauthorized users, by issuing login passwords to legitimate users [30].
For the next process, the interaction between the optimized states of the network and the multimedia, these two outputs act as inputs, and the output of this process provides high-performing multimedia content that can be used to implement the blended learning process in a constrained bandwidth environment, closing the loop. The details of the main components of the framework, Network Optimization and Multimedia Optimization, along with various related technological concepts, are discussed in the following sections.
Prioritizing the network traffic: The prioritization technique is applied to enhance the Quality of Service (QoS) for real-time applications such as video and audio by processing them before all other data streams, aiming to provide low latency as dictated by the policy rules configured on the router interface. In a similar manner, the technique is used to prioritize among various categories of users based upon their associated privileges [30].
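As an illustration of this prioritization idea, the sketch below dequeues real-time traffic classes before bulk data; the classes, packets, and priority values are hypothetical, and in the study this behaviour was configured through policy rules on the router rather than in code.

```python
# Illustrative sketch of traffic prioritization: real-time classes (video, audio)
# are served before bulk data, mirroring the policy-rule behaviour described above.
# Traffic classes and packets are hypothetical examples.
import heapq

PRIORITY = {"video": 0, "audio": 1, "web": 2, "bulk": 3}   # lower value = served first

class Scheduler:
    def __init__(self):
        self._queue = []
        self._counter = 0                                   # keeps FIFO order within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._queue, (PRIORITY[traffic_class], self._counter, packet))
        self._counter += 1

    def dequeue(self):
        return heapq.heappop(self._queue)[2]

sched = Scheduler()
sched.enqueue("bulk", "iso-download chunk")
sched.enqueue("video", "lecture frame")
sched.enqueue("audio", "voice sample")
print(sched.dequeue())   # -> 'lecture frame' (video is served before bulk traffic)
```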
3.1.3.1 Network Optimization
Time-Based ACLs: Time-based ACLs are used to restrict users from downloading and uploading huge files, which consume a lot of bandwidth, during the busy working hours of the week. ACLs can be configured on a router in the network using simple commands (Cisco IOS Software Release 12.0.1.T) by creating a time range on the router, preferably using Network Time Protocol (NTP) synchronization.
Fig. 4, as mentioned in Fig. 3, outlines the techniques that can be used to optimize the network system in an organization. The pre-optimization and post-optimization states of the network are also shown in the figure for the purpose of comparison.
Filtering the network traffic, which is done using packet-filter firewalls, is aimed at reducing congestion. The decision to accept or discard each arriving packet is made by the firewall by comparing the information in its header with a set of governing rules called Access Control Lists (ACLs) [29].
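The filtering decision described above can be sketched as a first-match rule lookup over packet header fields; the rules and addresses below are hypothetical stand-ins for the ACLs configured on the firewall.

```python
# Illustrative sketch of packet filtering with an Access Control List (ACL):
# each arriving packet's header is compared against ordered rules, and the first
# matching rule decides acceptance. Rules and addresses are hypothetical.
import ipaddress

ACL = [
    ("permit", "10.0.0.0/8", 80),     # allow web traffic from the campus network
    ("deny",   "0.0.0.0/0", 445),     # block a file-sharing port for everyone
    ("deny",   "0.0.0.0/0", None),    # implicit deny-all at the end of the list
]

def filter_packet(src_ip, dst_port):
    """Return the action of the first ACL rule matching the packet header."""
    for action, network, port in ACL:
        in_net = ipaddress.ip_address(src_ip) in ipaddress.ip_network(network)
        if in_net and (port is None or port == dst_port):
            return action
    return "deny"

print(filter_packet("10.1.2.3", 80))    # -> 'permit'
print(filter_packet("10.1.2.3", 445))   # -> 'deny'
```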
Cache and Proxy: The cache and the proxy, two local memory devices, are the best option to save bandwidth [31]. The cache saves the digital contents most frequently needed by the students. Proxy servers, on the other hand, are used to save the recently or most often visited web pages.
provides better quality at low bit rates and is appropriate for use in developing countries.
3.1.3.2 Multimedia Optimization
Fig. 5, as mentioned in Fig. 3, presents the multimedia optimization techniques along with the pre-optimization and post-optimization states.
Audio: Research indicates that the MP3 audio format is compatible with the challenges of a CBE [34].
Graphics: Joint Photographic Experts Group (JPEG) and Portable Network Graphics (PNG) graphic files are identified as compatible with a CBE [34].
3.1.4 Framework Testing and Validation
3.1.4.1 Framework Testing
An experimental testbed, as shown in Fig. 6, was designed to test and validate the designed framework.
Fig.5 Multimedia Optimization
Multimedia may refer to the presentation of information using multiple forms of media (text, graphics, audio, and video). The concept of interactive multimedia in constrained bandwidth is still a challenge [20]. However, the compression and streaming techniques stated in Fig. 5 can be used to enhance the media quality [32].
Multimedia Streaming: Multimedia streaming techniques enable the user to receive real-time, continuous audio and video data once the sender starts sending the file, without waiting until the full contents are available in local storage. The data is compressed before it is transmitted and decompressed again before being viewed. In the on-demand option, the client receives the streaming multimedia contents on demand.
Fig.6 Experimental test bed system architecture
The testbed used the following main hardware and software: two computers connected to a Cisco router by means of a Cisco switch, a 128 kbps broadband Internet connection, various multimedia files (video, audio, graphics, and text), multimedia conversion software, and "Net catcher", a multimedia performance measuring tool installed on one of the client computers. We acquired one video file from the Internet and converted it to various other formats for the sake of consistency.
Multimedia Compression: The compression technique is applied to reduce a huge file size to a manageable level for efficient transmission of multimedia content. The compression process significantly reduces the bandwidth requirements, as required in the context of LDCs.
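The compression/conversion step can be sketched as a call to a command-line converter; ffmpeg is used here only as an example (the study does not name its conversion software), and the file names are hypothetical.

```python
# Illustrative sketch of compressing/converting a video file to .mp4 before delivery.
# ffmpeg is assumed to be installed; the study's actual conversion tool is not named,
# and the file names here are hypothetical.
import subprocess

def convert_to_mp4(source: str, target: str) -> None:
    """Re-encode `source` into an MP4 container using ffmpeg's defaults."""
    subprocess.run(["ffmpeg", "-i", source, target], check=True)

convert_to_mp4("lecture.avi", "lecture.mp4")
```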
Afterwards, all these files were compressed and converted to .mp4 (a commonly used video standard). A similar procedure was adopted for the other types of media files. The user-driven metric 'latency' was used to measure the performance of the various media (video, audio, graphics, and text) files when they were transmitted over a wireless network connection. The media performance was measured in four phases:
Video: The transmission of time-sensitive multimedia content such as video requires much bandwidth, which is a challenge in low-bandwidth networks. However, compression techniques applied under Quality of Service (QoS) can significantly reduce the demand for high bandwidth [33]. Section 3.1.2 identified MPEG-4 (MP4) as the standard video format compatible with a constrained bandwidth environment that
Phase I: Unoptimized Network & Unoptimized Multimedia (UNUM)
Phase II: Unoptimized Network & Optimized Multimedia (UNOM)
Phase III: Optimized Network & Unoptimized Multimedia (ONUM)
Phase IV: Optimized Network & Optimized Multimedia (ONOM)
Fig. 9 Latency results for unoptimized flv
Multimedia Performance for an Unoptimized Network: This section presents the screenshots taken by the Net Catchlite tool, which measured the latency when unoptimized and optimized multimedia (.flv video) files were transmitted over the unoptimized wireless network. The average latencies for the unoptimized and optimized .flv files were 559 ms and 422 ms, as shown in Fig. 7 and Fig. 8, respectively.
Fig. 10 Latency results for optimized .flv
Analysis of the multimedia performance results: The analysis of the .flv video format performance is presented in Table 1, which illustrates the latency comparison for the various phases.
Table 1 Latency results analysis for .flv
Fig.7 latency results for unoptimized .flv video file
The percentage of latency reduction was also calculated. We notice that at every phase, the latency continues to reduce as compared to the initial phase, in which neither the network nor the multimedia files were optimized. The results further indicate that at Phase IV, when both the network and the multimedia were optimized, the latency reduced significantly, from 559 ms to 212 ms, a reduction of 62.07%.
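The reduction percentages follow directly from the measured averages; the sketch below reproduces the calculation for the .flv file using the latencies quoted in the text.

```python
# Latency reduction relative to the fully unoptimized baseline (Phase I, 559 ms),
# using the average .flv latencies quoted in the text.
baseline = 559                                     # Phase I: UNUM
phases = {"UNOM": 422, "ONUM": 221, "ONOM": 212}   # averages reported for the .flv file

for phase, latency in phases.items():
    reduction = (baseline - latency) / baseline * 100
    print(f"{phase}: {latency} ms ({reduction:.1f}% reduction)")
# ONOM comes out at about 62%, in line with the 62.07% reported above.
```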
Fig. 8 latency results for optimized .flv video file
Multimedia Performance for an Optimized Network: Prior to the transmission of the multimedia files over the network, this research optimized the network by configuring a policy map on the router interface, allocating specific amounts of bandwidth to various traffic classes and prioritizing specific types (e.g., video and audio) over other network traffic, in order to apply Quality of Service (QoS) and increase the perceived level of user satisfaction [35].
In a similar manner, 6 other video files (.mov, .mkv, .mpg, .mpg2, .avi, .mp4), 3 audio files (WAV, M4A, MP3), 3 graphic files (BMP, TIFF, JPEG), and 2 text (PPT) files were transmitted over the Internet one by one, and the results were calculated in the 4 phases. The study results indicated that the performance of the multimedia improved significantly when the network and the application were both optimized, which verified the main theoretical concept of the framework.
With the network optimized, the Net Catchlite tool measured the latency of the unoptimized and optimized multimedia (.flv video) files when transmitted over the optimized wireless network. The average latencies for .flv were 221 ms and 212 ms, respectively, as shown in Fig. 9 and Fig. 10.
3.1.4.2 Framework Validation
The novelty of this research is that it linked practice with theoretical foundations and the relevant literature, aimed at making both a scientific and a practical contribution, and laid the foundation for further research and subsequent developments in the area [17].
To ensure the soundness and accuracy of the designed framework, Verification and Validation (V&V) was done based on: correspondence between the model and the statement of the problem, which is in line with the results presented in the section above; model behavior, which was tested and shown to do what it is expected to do; and conservation of flow, which was confirmed by the consistent results in section 3.1.4 when several media files from each category were transmitted over the network, showing the stability of the model. The validation process was further enhanced by sending the prototype results to users and IT staff working in various universities in Uganda. The designed framework was rated highly by users and IT professionals, as shown in Table 2 (1 = lowest score, 4 = highest score), answering Research Question 5.
The output of this study has several far-reaching implications for a community of stakeholders that includes higher education institutions, students, researchers, and information system providers.
This research addresses a question of high practical value and is rooted in realistic and idealistic grounds, which leads to increased usefulness of the knowledge created to guide researchers. The study focused on evaluating the performance of learning content, which is an integral part of the blended learning process.
Table 2: Framework evaluation results by stakeholders
This research can make significant improvements to the landscape of the African higher education system. At the same time, this study has given Institutions of Higher Learning a way forward to protect their valuable institutional resource (bandwidth) by prioritizing time-sensitive applications to minimize delays. The findings of the study can be used by information system developers, e-learning solution providers, and instructional designers to increase the perceived level of user satisfaction when delivering multimedia content to users in LDCs.
Future research may include:
(i) How can the performance of multimedia content be improved, to increase further the level of user satisfaction by reducing the latency factor beyond what is shown in the current study?
(ii) How can the findings of this research be replicated in different contexts?
(iii) The research utilized a mixed methods approach that enabled the researcher to study the depth and breadth of the problem and provided the ability to triangulate the findings of the research methods used, hence giving a way forward for the utilization of the methodology by other researchers in future studies.
(iv) Researchers can use the Multi-Level Systematic Approach (MLSA) introduced in this study for in-depth analysis of research problems.
4. Conclusions
The fundamental contribution of this research was to provide an Optimization Based Blended Learning Framework for a Constrained Bandwidth Environment, with multimedia optimization and network optimization as its two main components. The research achieved this main objective. The framework emphasizes that the performance of multimedia (learning content) can be improved when the network and the application are both optimized, which in return would increase the perceived level of user satisfaction, enhancing the blended learning process for organizations seeking to implement blended learning solutions in developing countries. Increasing the perceived level of user satisfaction in developing economies is indeed a big milestone [17].
References
[1] B. A. Digolo, E.A. Andang J. Katuli, “E- Learning as a
Strategy for Enhancing Access to Music Education”, International
Journal of Business and Social Science Vol. 2 No. 11 [Special
Issue - June 2011].
[2] Millennium Development Goals Report, 2010.
Heidelberg, ISBN: 978-3-642-22762-2, Vol. 6837, 2011, pp.
188-199.
[18]L. Casanovas, “Exploring the Current Theoretical
Background About Adoption Until Institutionalization of Online
Education in Universities: Needs for Further Research”, Electronic
Journal of e-Learning, Vol. 8, Issue 2, 2010, pp. 73-84.
[19]CANARIE report, “Framework for Rural and Remote
Readiness in Telehealth” June, 2002.
[20] A. Sife, E. Lwoga, and C. Sanga, "New technologies for
teaching and learning: Challenges for higher learning institutions
in developing countries", International Journal of Education and
Development using ICT, Vol. 3, No. 2, 2007, pp. 57-67.
[21]M. Faridha, “A Framework for Adaptation of Online
Learning. In Towards E-nhancing Learning With Information and
Communication Technology in Universities”, A Master’s thesis in
computer science Makerere University, 2005.
[22] R. K. Yin, “Case study research: Design and method”,
Newbury Park, CA: Sage Publication, 1984.
[23]J. Creswell, “Educational research: Planning, conducting, and
evaluating quantitative and qualitative research”, Upper Saddle
River, NJ: Merrill Prentice Hall, 2002.
[24] D. M. Gabrielle, “The effects of technology-mediated
instructional strategies on motivation, performance, and self
directed learning”, PhD Thesis Florida State University College
of Education, 2003.
[25] M.C. Wesley, “Family and Child Characteristics Associated
with Coping Psychological Adjustment and Metabolic Control in
Children and Adolescents with Type 1 Diabetes” , A PhD Thesis
report The University of Guelph, April, 2012.
[26] N. A. Suhail, “Assessing Implementation of Blended learning
in Constrained Low Bandwidth Environment” In Aisbet, J.,
Gibbon,G., Anthony, J. R., Migga, J. K., Nath, R., Renardel, G.
R. R.: Special Topics in Computing and ICT Research.
Strengthening the Role of ICT in Development, August, 2008,
Vol. IV, pp. 377-390, Fountain publishers Kampala,Uganda
[27]C. Snae and M. Bruckner, “Ontology driven e-learning
system based on roles and activities for Thai learning
environment”, Interdisciplinary Journal of knowledge and
Learning Objects, Vol. 3, 2007.
[28]Understanding Network-Transparent Application Acceleration
and WAN Optimization. Cisco white paper, 2007.Cisco systems.
Inc.
[29] U. Thakar, L. Purohit, and A. Pahade, “An approach to
improve performance of a packet filtering firewall”, Proceedings
of wireless and optical communications networks, 2012 Ninth
International Conference on 20-22 September, 2012, Indore, pp.15.
[30] M. Kaminsky, “User Authentication and Remote Execution
Across Administrative Domains”, Doctor of Philosophy Thesis,
Massachusetts Institute of Technology, 2004.
[31]Z. Xu, L. Bhuyan , and Y. Hu, “Tulip: A New Hash Based
Cooperative Web Caching Architecture”, The Journal of
Supercomputing, 35, 2006, pp. 301–320.
[32]N.A. Suhail and E.K. Mugisa, “Implementation of E-learning
in Higher Education Institutions in Low Bandwidth Environment:
A Blended Learning Approach”, in Kizza J.M., Muhirwe J.,
Aisbett J., Getao K., Mbarika V., Patel D., and Rodrigues A. J.:
Special Topics in Computing and ICT Research Strengthening the
URL:http://www.un.org/millenniumgoals/pdf/MDG%20Report%
202010%20En%20r15%20-low%20res%2020100615%20-.
[3]A.W. Bates, “Technology, Open Learning and Distance
Education” Routledge London, 1995.
[4] S. O. Gunga, I. W. Ricketts, "Facing the challenges of e-learning initiatives in African Universities", British Journal of
Educational Technology Vol. 38, No. 5, 2007.
[5] UNESCO Institute for Statistics, Global education digest 2006:
Comparing education statistics across the world.
URL: http://www.uis.unesco.org/ev.php?ID=6827 201&
ID2=DOTOPIC
[6] S. Asunka, H.S. Chae, “Strategies for Teaching Online
Courses within the Sub-Saharan African Context: An
Instructor's Recommendations”, MERLOT Journal of Online
Learning and Teaching Vol. 5, No. 2, June 2009.
[7] F. Alonso, G. Lopez, D. Manrique, and J. M. Vines, “An
instructional model for web based education with blended learning
process approach” , British Journal of Educational Technology,
36(2), 2005, pp. 217-235.
[8]E. Nel, “Creating meaningful blended learning experience in a
South Africa classroom: An Action inquiry”, Thesis submitted in
fulfillment of the requirements for the degree Philosophiae Doctor
in Higher Education Studies, University of The Free State, 2005.
[9]K. P. Dzvimbo, “Access Limitation and the AVU Open,
Distance and e-Learning (ODeL) solutions”, Proceedings of The
Association of African Universities conference Cape Town,
February 22, 2005.
[10] Bersin and associates, “4-factor Blended Learning
Framework, What Works in Blended Learning”, ASTDs Source of
E-learning, Learning Circuits 2003. http://www.bersin.20.com/
510.654.8500
[11] J. De Vries, “E-learning strategy: An e-learning Framework
for Success”, Blue Streak learning, Quality learning solutions,
Bottom line results. 2005.
elearningcentre.typepad.com/whatsnew/2005/08/elearning
strat.html
[12] A Blended Learning Model. In Training: A Critical Success
Factor in Implementing a Technology Solution”, North America
Radiant Sytems, 2003, Inc. 3925 Brookside Parkway Atlanta, GA
30022. URL: www.radiantsystems.com (Accessed on 28 June,
2007)
[13] B. H. Khan, “A framework for Web-based authoring
systems", Web-based training, 2001, pp. 355-364, Englewood Cliffs, NJ: Educational Technology Publications.
http://iteresources.worldbank.org/EXTAFRREGTOPDISEDU/Resources/Teacher education Toolkit
[14] F. Z. Azizan, "Blended Learning in Higher Institutions in
Malaysia”, Proceedings of Regional Conference on Knowledge
Integration in ICT, 2010.
[15] K. Mc Dougall, “A Local-State Government Spatial Data
Sharing Partnership Model to Facilitate SDI Development", PhD
Thesis, University of Melbourne, 2006.
[16] A. Strauss, J. Corbin, “Basics of qualitative research:
Grounded theory procedures and techniques”, London: Sage.
1990.
[17] N. A. Suhail, J Lubega, and G Maiga, “Optimization Based
Multimedia Performance to Enhance Blended Learning
Experience in Constrained Low Bandwidth Environment”,
Lecture Notes in Computer Science, Springer-Verlag Berlin
Role of ICT in Development, Vol. III, August, 2007, pp. 302-322,
Fountain publishers, Kampala, Uganda.
[33] S. Mishra and U.Sawarkar, “Video Compression Using
MPEG”, Proceedings of International Conference & Workshop on
Recent Trends in Technology, proceedings published in
International Journal of Computer Applications (IJCA), 2012
[34] N. A. Suhail, J. Lubega, and G. Maiga, "Multimedia to Enhance Blended Learning Experience in Constrained Low Bandwidth Environment", Lecture Notes in Computer Science, Springer-Verlag Berlin Heidelberg, ISBN: 978-3-642-32017-0, Vol. 7411, 2012, pp. 362-372.
[35]A. A. Abbasi, J. Iqbal, and A. N. A. Malik, “Novel QoS
Optimization Architecture for 4G Networks”, in World Academy
of Science, Engineering and Technology, 2010.
Analysis of Computing Open Source Systems
J.C. Silva1 and J.L. Silva2
1 Dep. Tecnologias, Instituto Politécnico do Cávado e do Ave, Barcelos, Portugal
jcsilva@ipca.pt
2 Madeira-ITI, Universidade da Madeira, Funchal, Portugal
jose.l.silva@m-iti.org
implementation. Each widget has a fixed set of properties, and at any time during the execution of the GUI these properties have discrete values, the set of which constitutes the state of the GUI. The programmer's interest, besides satisfying the user, is in the intrinsic quality of the implementation, which will impact the system's maintainability.
As user interfaces grow in size and complexity, they
become a tangle of object and listener methods, usually all
having access to a common global state. Considering that
the user interface layer of interactive open source systems
is typically the one most prone to suffer changes, due to
changed requirements and added features, maintaining the
user interface code can become a complex and error prone
task. Integrated development environments (IDEs), while
helpful in that they enable the graphical definition of the
interface, are limited when it comes to the definition of the
behavior of the interface.
In this paper we explore an approach for the analysis of
open source system's user interfaces. Open-source
software is software whose source code is made available,
enabling anyone to copy, modify and redistribute the
source code without paying royalties or fees. This paper
discusses an approach to understand and evaluate an open
source system from an interactive perspective. We present
a static analysis based framework for GUI-based
applications analysis from source code.
In previous papers [1,3] we have explored the
applicability of slicing techniques [4] to our reverse
engineering needs, and developed the building blocks for
the approach. In this paper we explore the integration of
analysis techniques into the approach, in order to reason
about GUI models.
The paper is organized as follows: Section three discusses the value of inspecting source code from a GUI quality perspective; Section four introduces our framework for GUI reverse engineering from source code; Sections five
Abstract
Graphical user interfaces (GUIs) are critical components of
today's open source software. Given their increased relevance,
the correctness and usability of GUIs are becoming essential.
This paper describes the latest results in the development of our
tool to reverse engineer the GUI layer of interactive computing
open source systems. We use static analysis techniques to
generate models of the user interface behavior from source code.
Models help in graphical user interface inspection by allowing
designers to concentrate on its more important aspects. One
particular type of model that the tool is able to generate is state
machines. The paper shows how graph theory can be useful
when applied to these models. A number of metrics and
algorithms are used in the analysis of aspects of the user
interface's quality. The ultimate goal of the tool is to enable
the analysis of interactive systems through GUI source code
inspection.
Keywords: analysis, source code, quality.
1. Introduction
In the user interface of an open source software system,
two interrelated sets of concerns converge. Users interact
with the system by performing actions on the graphical
user interface (GUI) widgets. These, in turn, generate
events at the software level, which are handled by
appropriate listener methods. In brief, and from a user's
perspective, graphical user interfaces accept as input a
pre-defined set of user-generated events, and produce
graphical output. The users' interest is in how well the
system supports their needs.
From the programmer's perspective, typical WIMP-style (Windows, Icons, Mouse, and Pointer) user interfaces consist of a hierarchy of graphical widgets (buttons, menus, text fields, etc.) creating a front-end to the software system.
An event-based programming model is used to link the
graphical objects to the rest of the system's
conditions in higher-order logic. Proof of the conditions is usually conducted within interactive theorem provers such as PVS or Coq [11,12].
We believe that defining and integrating a methodology into open source systems development processes should be the first priority in certifying open source systems.
and six present the analysis of a software system; Section seven discusses the results of the process; and the paper ends with conclusions in Section eight.
2. Analysis of Open Source Systems
Open source systems are popular both in business and academic communities, with products such as Linux, MySQL, OpenOffice or Mozilla. Open source systems allow free redistribution, with accessible source code, and comply with several criteria. The program must allow distribution in source code as well as compiled form. Deliberately obfuscated source code is not allowed. Intermediate forms, such as the output of a preprocessor or translator, are not allowed. The license must allow modifications and derived works, and must allow them to be distributed under the same terms as the license of the original software [5]. Considering that open source systems are typically prone to suffer changes, due to modifications and derived works, maintaining the system and its usability can become an error-prone task [6]. A number of challenges remain to be met, however, many of which are common to all open source projects. This Section discusses open source systems analysis as a way to foster the adoption and deployment of open source systems.
The objective of open source analysis is to evaluate the quality of open source systems using software analysis and engineering methodologies. In the literature, several directions are used to achieve this goal, such as testing, lightweight verification, and heavyweight verification, e.g., [7,8]. Testing is a huge area in open source analysis [9]. Different kinds of tests are applied, such as functional testing, regression testing, stress testing, load testing, unit testing, integration testing, documentation analysis, source code analysis, and reverse engineering. Lightweight verification includes various methods of static analysis and model checking, e.g., [10]. These may include the identification of domain-specific restrictions and typical bugs for automatic detection, formal representation of the restrictions in terms of the tools used, development of simplified models of the target system to be used for automatic analysis, automatic analysis of the target source code with verification tools, and investigation and classification of the results.
Another approach is heavyweight verification, which provides a more complete analysis of the quality of the source code. There are different approaches to heavyweight verification. Classical methods of verification require requirements to be formally described in the form of preconditions and post-conditions. Then, invariants and variants should be defined for the open source system. After that, verification tools automatically generate
3. Inspection from source code
The evaluation of open source software is a multifaceted problem. Besides the intrinsic quality of the implementation, we have to consider the user's reaction to the interface (i.e. its usability [13]). This involves issues such as satisfaction, learnability, and efficiency. The first item describes the user's satisfaction with the open source system. Learnability refers to the effort users make to learn how to use the application. Efficiency refers to how efficient the user can be when performing a task using the application.
The analysis of a system's current implementation can provide a means to guide development and to certify software. For that purpose, adequate metrics must be specified and calculated [14,15]. Metrics can be divided into two groups: internal and external [16]. External metrics are defined in relation to running software. In what concerns GUIs, external metrics can be used as usability indicators. They are often associated with the following attributes [17]:
• Easy to learn: The user can do desired tasks easily, without previous knowledge;
• Efficient to use: The user reaches a high productivity level;
• Easy to remember: The re-utilization of the system is possible without a high level of effort;
• Few errors: Users rarely make errors, and the system permits recovery from them;
• Pleasant to use: The users are satisfied with the use of the system.
However, the values for these metrics are typically not obtainable from direct analysis of the implementation, but rather through users' feedback from using the system. Internal metrics are obtained by source code analysis and provide information to improve software development. Such metrics measure software aspects such as source lines of code, function invocations, etc. A number of authors have looked at the relation between internal metrics and GUI quality. Stamelos et al. [18] used the Logiscope1 tool to calculate values of selected metrics in order to
1 http://www-01.ibm.com/software/awdtools/logiscope/
supported by a given GUI implementation.
The tool performs the parsing of the source code. A module executes this step. To implement this first module, a parser for the programming language being considered is used. The tool has been used to reverse engineer Java and Haskell [21] programs written using the (Java) Swing, GWT, and (Haskell) WxHaskell GUI toolkits. For the Java/Swing and GWT toolkits, the SGLR parser has been applied, whose implementation is accessible via the Strafunski tool [22]. For the WxHaskell toolkit, the Haskell parser included in the Haskell standard libraries was used. Whatever the parser, it generates an Abstract Syntax Tree (AST). The AST is a formal representation of the abstract syntactical structure of the source code.
The full AST represents the entire code of the application. However, the tool's objective is to process the GUI layer of interactive open source systems, not the entire source code. To this end, another module implements a GUI code slicing process using strategic programming. The module is used to slice the AST produced by the parser, in order to extract its graphical user interface layer. The module is composed of a slicing library, containing a generic set of traversal functions that traverse any AST.
Once the AST has been created and the GUI layer has been extracted, GUI behavioral modeling can be processed. A module implements a GUI abstraction step. The module is language independent. It generates a model of user interface behavior. The relevant abstractions used in the model are user inputs, user selections, user actions, and output to the user.
More specifically, the module generates GUI-related metadata files with information on possible GUI events, associated conditions and actions, and the states resulting from these events. Each of these items of data is related to a particular fragment of the AST. These are GUI specifications written in the Haskell programming language. The specifications define the GUI layer by mapping event/condition pairs to actions.
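As a language-neutral illustration of such a mapping (the tool itself emits Haskell specifications), the fragment below associates event/condition pairs with lists of action identifiers in the style of the login window discussed in the case study; the cond3 entry and its action identifier are hypothetical.

```python
# Illustrative (not the tool's actual Haskell output) mapping of event/condition
# pairs to lists of action identifiers, in the style of the generated GUI metadata.
gui_spec = {
    ("loginBtn", "cond1"): [1, 2, 3],   # valid credentials: close window, open startApp
    ("loginBtn", "cond2"): [4],         # exit path reachable from the Login handler
    ("loginBtn", "cond3"): [5],         # invalid credentials: show warning (hypothetical id)
    ("exitBtn",  "cond4"): [6],         # Exit button: leave the application
}

def actions_for(event: str, condition: str):
    """Look up the interactive actions triggered by an event under a condition."""
    return gui_spec.get((event, condition), [])

print(actions_for("loginBtn", "cond1"))   # -> [1, 2, 3]
```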
study the quality of Open Source code. Ten different
metrics were used. The results enable evaluation of each
function against four basic criteria: testability, simplicity,
readability and self-descriptiveness. While the GUI layer
was not specifically targeted in the analysis, the results
indicated a negative correlation between component size
and user satisfaction with the software.
Yoon and Yoon [19] developed quantitative metrics to support decision making during the GUI design process. Their goal was to quantify the usability attributes of interaction design. Three internal metrics were proposed and defined as numerical values: complexity, inefficiency and incongruity. The authors expect that these metrics can be used to reduce the development cost of user interaction.
While the above approaches focus on calculating metrics over the code, Thimbleby and Gow [20] calculate them over a model capturing the behavior of the application. Using graph theory they analyze metrics related to the users' ability to use the interface (e.g., strong connectedness ensures no part of the interface ever becomes unreachable), the cost of erroneous actions (e.g., calculating the cost of undoing an action), or the knowledge needed to use the system (e.g., the minimum cut identifies the set of actions that the user must know in order not to be locked out of parts of the interface).
In a sense, by calculating the metrics over a model
capturing GUI relevant information instead of over the
code, the knowledge gained becomes closer to the type of
knowledge obtained from external metrics. While
Thimbleby and Gow manually develop their models from
inspections of the running software/devices, an analogous
approach can be carried out analyzing the models
generated directly from source code. We have been
developing a tool to reverse engineer models of a user
interface from its source code [1,3]. By coupling the type
of analysis in [20] with our approach, we are able to
obtain the knowledge directly from source code. By
calculating metrics over the behavioral models, we aim to
acquire relevant knowledge about the dialogue induced by
the interface, and, as a consequence, about how users
might react to it. In this paper we describe several kinds of
inspections making use of metrics.
5. HMS Case Study: A Larger Interactive System
4. The Tool
In the previous Section, we presented the implemented tool. In this Section, we present the application of the tool to a complex and large real interactive system: a Healthcare Management System (HMS) available from Planet-source-code2, one of the largest public source code databases on the Internet. The goal of this Section is twofold: firstly, it is a proof of concept for the tool. Secondly, we wish to
The tool's goal is to be able to extract a range of models
from source code. In the present context we focus on finite
state models that represent GUI behavior. That is, when
can a particular GUI event occur, which are the related
conditions, which system actions are executed, or which
GUI state is generated next. We choose this type of model
in order to be able to reason about and test the dialogue
2 http://www.planet-source-code.com/
list [4]) which exit the system. These events can be executed by clicking the Exit or Login buttons, respectively. The informal description of the login window behavior provided at the start of the Section did not include the possibility of exiting the system by pressing the Login button. The extracted behavioral graph, however, defines that possibility, which can occur if condition cond2 is verified (cf. the pair loginBtn/cond2 with action list [4]). Analysing condition cond2 (source.equals(exitBtn)), dead code was encountered. The source code executed when pressing the Login button uses a condition to test whether the clicked button is the Login button or not. This is done through the boolean expression source.equals(loginBtn). However, the above action source code is only performed when pressing the Login button. Thus, the condition will always be verified and the following else branch of the conditional statement will never be executed.
Summarizing the results obtained for the login window, one can say that the generated behavioral graph contains an event/condition/actions triplet that does not match the informal description of the system. Furthermore, this triplet cannot be executed despite being defined in the behavioral model. This example demonstrates how comparing expected application behavior against the models generated by the tool can help understand (and detect problems in) an application's source code.
analyze the interactive parts of a real application.
The HMS system is implemented in Java/Swing and supports the management of patients, doctors and bills. The implementation contains 66 classes, 29 window forms (message boxes included) and 3588 lines of code.
The login window is the first window that appears to HMS users. This window gives authorized users access to the system and the HMS main form, through the introduction of a user name and password pair. It is composed of two text boxes (the username and password inputs) and two buttons (the Login and Exit buttons).
If the user introduces a valid user name/password and
presses the Login button, then the window closes and the
main window of the application is displayed. On the
contrary, if the user introduces invalid data, then a
warning message is produced and the login window
continues to be displayed. By pressing the Exit button, the
user exits the application.
Applying the tool to the source code of the application,
and focusing on the login window, enables the generation
of several models. Figure 1, for example, shows the graph
generated to capture the login window's behavior.
Associated with each edge is a triplet representing the
event that triggers the transition, a guard on that event
(here represented by a label identifying the condition
being used), and a list of interactive actions executed
when the event is selected (each action is represented by a
unique identifier which is related to the respective source
code).
Fig. 1 HMS: Login behavioral graph
Analyzing this model, one can infer that there is an event/condition pair (edge loginBtn/cond1, with action list [1,2,3]) which closes the window (cf. edge moving to the close node). Investigating action reference 2, it can further be concluded that another window (startApp) is subsequently opened. Furthermore, one can also infer that there are two event/condition pairs (edge exitBtn/cond4 with action list [6], and edge loginBtn/cond2 with action list [4]) which exit the system. These events can be executed by clicking the Exit or Login buttons, respectively. The informal description of the login window behavior provided at the start of the Section did not include the possibility of exiting the system by pressing the Login button. The extracted behavioral graph, however, defines that possibility, which can occur if condition cond2 is verified (cf. pair loginBtn/cond2 with action list [4]). Analysing condition cond2 (source.equals(exitBtn)), dead code was encountered. The source code executed when the Login button is pressed uses a condition to test whether the clicked button is the Login button, through the boolean expression source.equals(loginBtn). However, that action source code is only performed when pressing the Login button. Thus, the condition will always be verified and the following else branch of the conditional statement will never be executed.
Summarizing the results obtained for the login window, one can say that the generated behavioral graph contains an event/condition/actions triplet that does not match the informal description of the system. Furthermore, this triplet cannot be executed despite being defined in the behavioral model. This example demonstrates how comparing expected application behavior against the models generated by the tool can help understand (and detect problems in) the application's source code.
6. GUI Inspection through Graph Theory
This Section describes some examples of analysis performed on the application's behavioral graph from the previous section. We make use of the implemented tool for the manipulation and statistical analysis of the graph.
6.1 Graph-tool
Graph-tool (http://projects.forked.de/graph-tool/) is an efficient Python module for the manipulation and statistical analysis of graphs. It allows for the easy creation and manipulation of both directed and undirected graphs. Arbitrary information can be associated with the nodes, edges or even the graph itself, by means of property maps. Graph-tool implements all sorts of algorithms, statistics and metrics over graphs, such as degree/property histograms, combined degree/property histograms, vertex-vertex correlations, average vertex-vertex shortest distance, isomorphism, minimum spanning tree, connected components, maximum flow, clustering coefficients, motif statistics, communities, and centrality measures.
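As an illustration, a behavioral graph such as the one in Figure 1 could be encoded with Graph-tool roughly as follows; the vertex names are assumptions, while the edge triplets are those discussed in the text.

# Sketch: encoding the login window's behavioral graph (Figure 1) in
# Graph-tool. Vertex names are assumed; edge labels follow the
# event/condition/actions triplets discussed above.
from graph_tool.all import Graph

g = Graph(directed=True)
label = g.new_vertex_property("string")   # state names
triplet = g.new_edge_property("string")   # event/condition [actions]

login = g.add_vertex(); label[login] = "loginState0"
close = g.add_vertex(); label[close] = "close"

e1 = g.add_edge(login, close); triplet[e1] = "loginBtn/cond1 [1,2,3]"  # also opens startApp
e2 = g.add_edge(login, close); triplet[e2] = "loginBtn/cond2 [4]"      # exits (dead code)
e3 = g.add_edge(login, close); triplet[e3] = "exitBtn/cond4 [6]"       # exits

for e in g.edges():
    print(label[e.source()], "->", label[e.target()], ":", triplet[e])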
6.2 GUI Metrics
As discussed in this paper, one of our goals is to show how the implemented tool supports the use of metrics such as those used by Thimbleby and Gow [20] to reason about the quality of a user interface. To illustrate the analysis, we will consider three metrics: shortest distance between vertices, PageRank, and betweenness.
Graph-tool enables us to calculate the shortest path between two vertices. This is useful to calculate the number of steps needed to execute a particular task, and the results can be used to analyze the complexity of an interactive application's user interface: higher numbers of steps indicate complex tasks, while lower values indicate simple ones. It can also be applied to calculate the center of a graph. The center of a graph is the set of all vertices whose greatest distance to any other vertex is minimal. The vertices in the center are called central points; thus vertices in the center minimize the maximal distance from other points in the graph. Finding the center of a graph is useful in GUI applications where the goal is to minimize the steps needed to execute a particular task (i.e. the edges between two points). For example, placing the main window of an interactive system at a central point reduces the number of steps a user has to execute to accomplish tasks.
Fig. 2 HMS: The overall behavior
Now we will consider the graph described in Figure 2, where all vertices and edges are labeled with unique identifiers. Figure 2 provides the overall behavior of the HMS system. This model can be seen in more detail in the electronic version of this paper. Basically, this model aggregates the state machines of all HMS forms. The top right corner node specifies the HMS entry point, i.e. the mainAppstate0 creation state from the login's state machine (cf. Figure 1).
Fig. 3 HMS's PageRank results
PageRank is a link analysis algorithm, used by the Google search engine, that assigns a numerical weighting to each element of a hyperlinked set of documents. The main objective is to measure their relative importance. The weight assigned to each element represents the probability that a person randomly clicking on links will arrive at that particular page [23]; a probability is expressed as a numeric value between 0 and 1. This same algorithm can be applied to our GUI behavioral graphs. Figure 3 provides the result obtained when applying the PageRank algorithm to the graph of Figure 2. The size of a vertex corresponds to its importance within the overall application behavior. This metric can have several applications, for example, to analyze whether complexity is well distributed over the application behavior. In this case, there are no particularly salient vertices, which is an indication that interaction complexity is well distributed across the overall application. It is also worth noticing that, according to this criterion, the Main window is clearly a central point in the interaction.
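To make the use of these metrics concrete, the following is a minimal sketch of how shortest distances and PageRank could be computed with Graph-tool; the graph, its vertices and the window names are illustrative assumptions, not the HMS model of Figure 2.

# Sketch: shortest distances and PageRank over a small stand-in
# behavioral graph. Vertex names are illustrative assumptions.
from graph_tool.all import Graph, shortest_distance, pagerank

g = Graph(directed=True)
login, main, patients, doctors = (g.add_vertex() for _ in range(4))
for src, dst in [(login, main), (main, patients), (patients, main),
                 (main, doctors), (doctors, main), (main, login)]:
    g.add_edge(src, dst)

# Number of steps (edges) needed to reach the patients form from login.
print("steps login -> patients:", shortest_distance(g, source=login, target=patients))

# PageRank as a measure of a vertex's relative importance in the dialogue.
pr = pagerank(g)
for v in g.vertices():
    print(int(v), float(pr[v]))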
Betweenness is a centrality measure of a vertex or an edge
within a graph [24]. Vertices that occur on many shortest
paths between other vertices have higher betweenness than
those that do not. Similarly to vertex betweenness centrality, edge betweenness centrality is related to the shortest paths between vertices. Edges that occur on
many shortest paths between vertices have higher edge
betweenness.
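As an illustration only, the following sketch computes vertex and edge betweenness with Graph-tool over the same kind of stand-in graph used above; it is not the HMS model used to produce Figure 4.

# Sketch: vertex and edge betweenness over an illustrative graph.
from graph_tool.all import Graph, betweenness

g = Graph(directed=True)
login, main, patients, doctors = (g.add_vertex() for _ in range(4))
for src, dst in [(login, main), (main, patients), (patients, main),
                 (main, doctors), (doctors, main), (main, login)]:
    g.add_edge(src, dst)

vb, eb = betweenness(g)  # vertex and edge betweenness property maps
print("vertex betweenness:", [float(vb[v]) for v in g.vertices()])
print("edge betweenness:", [float(eb[e]) for e in g.edges()])
# A vertex such as 'main', lying on many shortest paths, acts as a hub
# from which different parts of the interface can be reached.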
Fig. 4 HMS's betweenness values
Figure 4 provides the result obtained when applying the betweenness algorithm. Betweenness values are expressed numerically for each vertex and edge, and edges with the highest betweenness values are drawn thicker. Some states and edges have the highest betweenness, meaning they act as a hub from which different parts of the interface can be reached. Clearly they represent a central axis in the interaction between users and the system. In top-down order, this axis traverses the states patStartstate0, patStartstate1, startAppstate0, startAppstate1, docStartstate0 and docStartstate1. States startAppstate0 and startAppstate1 are the main states of the startApp window's state machine. The Main window has the highest betweenness, meaning it acts as a hub from which different parts of the interface can be reached; clearly it will be a central point in the interaction.
6.3 GUI Testing
The reverse engineering approach described in this paper allows us to extract an abstract GUI behavior specification.
Our next goal is to perform model-based GUI testing. To
this end, we make use of the QuickCheck Haskell library. QuickCheck is a tool for testing Haskell programs
automatically. The programmer provides a specification of
the program, in the form of properties which functions
should satisfy, and QuickCheck then tests that the
properties hold in a large number of randomly generated
cases. Specifications are expressed in Haskell, using
combinators defined in the QuickCheck library.
QuickCheck provides combinators to define properties,
observe the distribution of test data, and define test data
generators. Considering the application described in the previous section and its abstract GUI model, we can now write rules and test them through QuickCheck. To illustrate the approach, we will test if
the application satisfies the following rule: users need to
execute less than three actions to access the main window.
The rule is specified in the Haskell language. From the set of windows we automatically generate random cases: we extract valid GUI sentences from the GUI behavioral model, and the rule is then tested in a large number of cases (10,000 in this GUI testing process). The number of random cases and the event sequence lengths are specified by the user.
Each random case is a sequence of valid events associated
with their conditions, actions and the respective window.
In other words, each case is a sequence of possible events,
so all respective conditions are true in this context.
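The rule and the generators are written by the authors in Haskell with QuickCheck; purely as an illustrative analogue (and not the authors' code), the same idea can be sketched in Python: sample random sequences of valid events from a behavioral model and check that the main window, when reached, is reached in fewer than three actions. The model, the state names and the helper functions below are assumptions.

# Illustrative analogue of the property-based test described above.
import random

# transitions: state -> list of (event, next_state); names are assumed
model = {
    "loginState0": [("loginBtn/cond1", "mainAppstate0"),
                    ("exitBtn/cond4", "close")],
    "mainAppstate0": [("patientsBtn", "patStartstate0"),
                      ("doctorsBtn", "docStartstate0")],
}

def random_case(start, max_len, rng):
    """Generate one valid event sequence (a 'GUI sentence') of at most max_len events."""
    state, events = start, []
    for _ in range(max_len):
        if state not in model or not model[state]:
            break
        event, state = rng.choice(model[state])
        events.append((event, state))
    return events

def rule_holds(events):
    """Main window reached in fewer than three actions, if reached at all."""
    for i, (_, state) in enumerate(events, start=1):
        if state == "mainAppstate0":
            return i < 3
    return True  # the rule only constrains cases that reach the main window

rng = random.Random(0)
cases = [random_case("loginState0", 5, rng) for _ in range(10000)]
print("rule holds in all cases:", all(rule_holds(c) for c in cases))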
This approach enables us to analyze a GUI model using a model-based testing technique. Though non-exhaustive, it allows us to test the quality of models at a lower cost than exhaustive techniques such as model checking. This section's focus is
on GUI testing. Coverage criteria for GUIs are important
rules that provide an objective measure of test quality.
We plan to include coverage criteria to help determine
whether a GUI has been adequately tested. These
coverage criteria use event sequences to specify a measure
of test adequacy. Since the total number of permutations
of event and condition sequences in any GUI is extremely
large, the GUI's hierarchical structure must be exploited to
identify the important event sequences to be tested.
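Since coverage criteria are only planned at this stage, the following is no more than a sketch of what an event-sequence coverage measure might look like, assuming that valid event sequences up to a bounded length are taken as coverage targets; the model and names repeat the illustrative assumptions used above.

# Sketch of a possible event-sequence coverage measure (assumed design).
model = {
    "loginState0": [("loginBtn/cond1", "mainAppstate0"),
                    ("exitBtn/cond4", "close")],
    "mainAppstate0": [("patientsBtn", "patStartstate0"),
                      ("doctorsBtn", "docStartstate0")],
}

def sequences_up_to(start, max_len):
    """Enumerate all valid event sequences of length <= max_len."""
    results, frontier = set(), [(start, ())]
    for _ in range(max_len):
        next_frontier = []
        for state, seq in frontier:
            for event, nxt in model.get(state, []):
                new_seq = seq + (event,)
                results.add(new_seq)
                next_frontier.append((nxt, new_seq))
        frontier = next_frontier
    return results

targets = sequences_up_to("loginState0", 2)
executed = {("loginBtn/cond1",), ("loginBtn/cond1", "patientsBtn")}
coverage = len(executed & targets) / len(targets)
print(f"event-sequence coverage (length <= 2): {coverage:.0%}")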
6.4 Conclusions
This Section described the results obtained with the
implemented tool when applying it to a larger interactive
system. The chosen interactive system case study is
related to a healthcare management system (HMS). The HMS system is implemented in the Java/Swing programming language and implements operations for the management of patients, doctors and bills. A description of the main HMS windows has been provided, and the tool's results have been described. The tool enabled the extraction of different behavioral models. Methodologies automating the activities involved in GUI model-based reasoning, such as the PageRank and betweenness algorithms, have also been applied. GUI behavioral metrics have been used as a way to analyze GUI quality. This case study demonstrated that the tool enables the analysis of real interactive applications written by third parties.
7. Discussion
The previous section has illustrated how the implemented tool makes possible a high-level graphical representation of GUI behavior from thousands of lines of code. The process
is mostly automatic, and enables reasoning over the
interactive layer of open source systems. Examples of
some of the analysis that can be carried out were provided.
Other uses of the models include, for example, the
generation of test cases, and/or support for model-based
testing. During the development of the framework, a
particular emphasis was placed on developing tools that
are, as much as possible, language independent. Through
the use of generic programming techniques, the developed
tools aim at being retargetable to different user interface
programming toolkits and languages. At this time, the
framework supports (to varying degrees) the reverse
engineering of Java code, either with the Swing or the
GWT (Google Web Toolkit) toolkits, and of Haskell
code, using the WxHaskell GUI library. Originally the
tool was developed for Java/Swing. The WxHaskell and
GWT retargets have highlighted successes and problems
with the initial approach. The amount of adaptation required and the time it took to code differed. The adaptation to GWT was easier because it exploits the same parser. The adaptation to WxHaskell was more complex as the programming paradigm is different, i.e. functional. Using
the tool, programmers are able to reason about the
interaction between users and a system at a higher level of
abstraction than that of code. A range of techniques can be
applied on the generated models. They are amenable, for
example, to analysis via model checking [25]. Here
however, we have explored alternative, lighter weight
approaches.
Considering that the graphs generated by the reverse
engineering process are representations of the interaction
between users and system, we have shown how metrics
defined over those graphs can be used to obtain relevant information about the interaction. This means that we are able to analyze the quality of the user interface, from the users' perspective, without having to resort to external metrics, which would imply testing the system with real users, with all the costs that process carries.
Additionally, we have explored the possibility of analyzing the graphs via a testing approach, and how best to generate test cases. It must be noted that, while the approach enables us to analyze aspects of user interface quality without resorting to human test subjects, the goal is not to replace user testing. Ultimately, only user testing will provide factual evidence of the usability of a user interface. The possibility of performing the type of analysis we are describing, however, will help in gaining a deeper understanding of a given user interface. This will promote the identification of potential problems in the interface, and support the comparison of different interfaces, complementing and minimizing the need to resort to user testing. Similarly, while the proposed metrics and analysis relate to the user interface that can be inferred from the code, the approach is not proposed as an alternative to actual code analysis.
Metrics related to the quality of the code are relevant, and indeed the tool is also able to generate models that capture information about the code itself. Again, we see the proposed approach as complementary to that style of analysis. Results show that the reverse engineering approach adopted is useful, but there are still some limitations. One relates to the focus on event listeners for discrete events. This means the approach is not able to deal with continuous media and synchronization/timing constraints among objects. Another has to do with layout management issues. The tool cannot extract, for example, information about overlapping windows, since this must be determined at run time. Thus, we cannot find out in a static way whether important information for the user might be obscured by other parts of the interface. A third issue relates to the fact that generated models reflect what was programmed as opposed to what was designed. Hence, if the source code does the wrong thing, static analysis alone is unlikely to help because it is unable to know what the intended outcome was. For example, an action might be intended to insert a result into a text box, but the input might be sent to another one instead. However, if the design model is available, the tool can be used to extract a model of the implemented system, and a comparison between the two can be carried out.
Additionally, using graph operations, models from different implementations can be compared in order to assess whether two systems correspond to the same design, or to identify differences between versions of the same system.
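As a minimal sketch of the kind of graph operation mentioned above, two extracted behavioral graphs could be compared for structural equivalence with Graph-tool's isomorphism check; the two small graphs below are illustrative stand-ins, not models extracted by the tool.

# Sketch: comparing two behavioral models via graph isomorphism.
from graph_tool.all import Graph, isomorphism

def behavior_graph(edges, n_states):
    # Build a directed behavioral graph from (source, target) state indices.
    g = Graph(directed=True)
    g.add_vertex(n_states)
    for src, dst in edges:
        g.add_edge(g.vertex(src), g.vertex(dst))
    return g

model_a = behavior_graph([(0, 1), (1, 2), (2, 1)], 3)
model_b = behavior_graph([(0, 1), (1, 2), (2, 1)], 3)

# True if the two behavioral graphs have the same structure.
print("same design (up to relabeling):", isomorphism(model_a, model_b))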
8. Conclusions
Concerning interactive open source software development, two perspectives on quality can be
considered. Users, on the one hand, are typically
interested in what can be called external quality: the
quality of the interaction between users and system.
Programmers, on the other hand, are typically more
focused on the quality attributes of the code being
produced. This work is an approach to bridging this gap
by allowing us to reason about GUI models from source
code. We described GUI models extracted automatically
from the code, and presented a methodology to reason
about the user interface model. A number of metrics over
the graphs representing the user interface were
investigated. An approach to testing the graph against
desirable properties of the interface was also put forward.
A number of issues still need addressing. In the example used throughout the paper, only one window could be active at any given time (i.e., windows were modal). The tool is also able to work with non-modal windows (i.e., with GUIs where users are able to move freely between open application windows). In that case, however, nodes in the graph come to represent sets of open windows
instead of a single active window. While all analysis
techniques are still available, this new interpretation of
nodes creates problems in the interpretation of some
metrics that need further consideration. The problem is
exacerbated when multiple windows of a given type are
allowed (e.g., multiple editing windows). Coverage
criteria provide an objective measure of test quality. We
plan to include coverage criteria to help determine
whether a GUI has been adequately tested. These
coverage criteria use events and event sequences to
specify a measure of test adequacy. Since the total number
of permutations of event and condition sequences in any
GUI is extremely large, the GUI's hierarchical structure
must be exploited to identify the important event
sequences to be tested.
This work presents an approach to the analysis of
interactive open source systems through a reverse engineering process. Models enable us to reason about
both metrics of the design, and the quality of the
implementation of that design. Our objective has been to
investigate the feasibility of the approach. We believe this
style of approach can fill a gap between the analysis of
code quality via the use of metrics or other techniques,
and usability analysis performed on a running system with
actual users.
References
[1] J. C. Silva, J. C. Campos, and J. Saraiva, "Combining formal methods and functional strategies regarding the reverse engineering of interactive applications," in Interactive Systems. Design, Specification and Verification, Lecture Notes in Computer Science, DSV-IS 2006, the XIII International Workshop on Design, Specification and Verification of Interactive Systems, Dublin, Ireland, pp. 137–150, Springer Berlin / Heidelberg, July 2006.
[2] J. C. Silva, J. C. Campos, and J. Saraiva, "Models for the reverse engineering of Java/Swing applications," ATEM 2006, 3rd International Workshop on Metamodels, Schemas, Grammars and Ontologies for Reverse Engineering, Genova, Italy, October 2006.
[3] J. C. Silva, J. C. Campos, and J. Saraiva, "A generic library for GUI reasoning and testing," in ACM Symposium on Applied Computing, pp. 121–128, March 2009.
[4] F. Tip, "A survey of program slicing techniques," Journal of Programming Languages, September 1995.
[5] D. Cerri and A. Fuggetta, "Open standards, open formats, and open source," 2007.
[6] C. Benson, "Professional usability in open source projects," in Conference on Human Factors in Computing Systems, pp. 1083–1084, ACM Press, 2004.
[7] H. Nakajima, T. Masuda, and I. Takahashi, "GUI Ferret: GUI test tool to analyze complex behavior of multi-window applications," in Proceedings of the 2013 18th International Conference on Engineering of Complex Computer Systems, ICECCS '13, (Washington, DC, USA), pp. 163–166, IEEE Computer Society, 2013.
[8] C. E. Silva and J. C. Campos, "Combining static and dynamic analysis for the reverse engineering of web applications," in Proceedings of the 5th ACM SIGCHI Symposium on Engineering Interactive Computing Systems, EICS '13, (New York, NY, USA), pp. 107–112, ACM, 2013.
[9] D. Horton, "Software testing."
[10] C. D. Roover, I. Michiels, and K. Gybels, "An approach to high-level behavioral program documentation allowing lightweight verification," in Proc. of the 14th IEEE Int. Conf. on Program Comprehension, pp. 202–211, 2006.
[11] G. Salaün, C. Attiogbé, and M. Allem, "Verification of integrated specifications using PVS."
[12] J.-C. Filliâtre, "Program verification using Coq. Introduction to the Why tool," 2005.
[13] ISO, ISO 9241-11: Ergonomic requirements for office work with visual display terminals (VDTs) – Part 11: Guidance on Usability. International Organization for Standardization, 1998.
[14] F. A. Fontana and S. Spinelli, "Impact of refactoring on quality code evaluation," in Proceedings of the 4th Workshop on Refactoring Tools, Honolulu, HI, USA, ACM, 2011. Workshop held in conjunction with ICSE 2011.
[15] J. Al Dallal, "Object-oriented class maintainability prediction using internal quality attributes," Inf. Softw. Technol., vol. 55, pp. 2028–2048, Nov. 2013.
[16] ISO/IEC, "Software products evaluation," 1999. DIS 14598-1.
[17] J. Nielsen, Usability Engineering. San Diego, CA: Academic Press, 1993.
[18] I. Stamelos, L. Angelis, A. Oikonomou, and G. L. Bleris, "Code quality analysis in open source software development,"
Information Systems Journal, vol. 12, pp. 43–60, 2002.
[19] Y. S. Yoon and W. C. Yoon, “Development of quantitative
metrics to support ui designer decision-making in the design
process,” in Human-Computer Interaction. Interaction Design
and Usability, pp. 316–324, Springer Berlin / Heidelberg, 2007.
[20] H. Thimbleby and J. Gow, “Applying graph theory to
interaction design,” pp. 501–519, 2008.
[21] S. P. Jones, J. Hughes, L. Augustsson, et al., “Report on the
programming language haskell 98,” tech. rep., Yale University,
Feb. 1999.
[22] R. Lämmel and J. Visser, "A STRAFUNSKI application
letter,” tech. rep., CWI, Vrije Universiteit, Software
Improvement Group, Kruislaan, Amsterdam, 2003.
[23] P. Berkhin, “A survey on pagerank computing,” Internet
Mathematics, vol. 2, pp. 73–120, 2005.
[24] S. Y. Shan et al., "Fast centrality approximation in
modular networks,” 2009.
[25] J. C. Campos and M. D. Harrison, “Interaction engineering
using the ivy tool,” in ACM Symposium on Engineering
Interactive Computing Systems (EICS 2009), (New York, NY,
USA), pp. 35–44, ACM, 2009.