PROCEEDINGS BOOK

ICEECAT'14
INTERNATIONAL CONFERENCE ON ELECTRONIC, COMPUTER AND AUTOMATION TECHNOLOGIES
MAY 9-11, 2014
SELCUK UNIVERSITY ILTEK - KONYA, TURKEY

SELCUK UNIVERSITY TECHNOLOGY FACULTY
ekotek.selcuk.edu.tr

SELÇUK ÜNİVERSİTESİ 1882 | KONYA TİCARET ODASI | KONYA TİCARET BORSASI

Selcuk University Technology Faculty
International Conference on Electronic, Computer and Automation Technologies
9 - 11 May, 2014
Konya, Türkiye
ekotek.selcuk.edu.tr
HONORARY CHAIR
Prof. Dr. Hakkı GÖKBEL
CHAIR
Prof. Dr. Faruk ÜNSAÇAR
CO-CHAIR
Assoc. Prof. Dr. Ismail SARITAS
The Conference has been supported by Selcuk University Scientific Research Projects
Coordination Unit
Copyright © 2014, by Selcuk University
International Conference on Electronic, Computer and Automation Technologies
9 - 11 May, 2014, Konya, Turkiye
All papers in the present volume were peer reviewed by at least two independent reviewers. Acceptance was granted when both reviewers' recommendations were positive.
ORGANIZING COMMITTEE
Assoc. Prof. Dr. Ismail SARITAS
Assoc. Prof. Dr. Ali KAHRAMAN
Assoc. Prof. Dr. Şakir TASDEMIR
Assist. Prof. Dr. Mustafa ALTIN
Assist. Prof. Dr. Kemal TÜTÜNCÜ
Assist. Prof. Dr. Humar KAHRAMANLI
Dr. Ilker Ali ÖZKAN
Nevzat ÖRNEK
Murat KÖKLÜ
Eyüp CANLI
Selahattin ALAN
Okan UYAR
Hasan Hüseyin ÇEVIK
Fehmi SEVILMIS
INTERNATIONAL ADVISORY BOARD
Prof. Dr. Asad Abdul-Aziz AL-RASHED, Kuwait University, Kuwait
Prof. Dr. Bhekisipho TWALA, University of Johannesburg, South Africa
Prof. Dr. Hakan ISIK, Selcuk University, Turkey
Prof. Dr. Hendrik FERREIRA, University of Johannesburg, South Africa
Prof. Dr. Hameedullah KAZI, Isra University, Pakistan
Prof. Dr. Ir. Aniati Murni ARYMURTHY, Universitas Indonesia, Indonesia
Prof. Dr. Mehmet UZUNOGLU, International Burch University, Bosnia and Herzegovina
Prof. Dr. Meliha HANDZIC, International Burch University, Bosnia and Herzegovina
Prof. Dr. Moha M’rabet HASSANI, Cadi Ayyad University, Morocco
Prof. Dr. Novruz ALLAHVERDI, Selcuk University, Turkey
Prof. Dr. Okyay KAYNAK, Bogaziçi Üniversitesi, Turkey
Prof. Dr. Rahim GHAYOUR, Shiraz University, Iran
Prof. Dr. Sadetdin HERDEM, Ahmet Yesevi University, Kazakhstan
Assoc. Prof. Dr. A. Alpaslan ALTUN, Selcuk University, Turkey
Assoc. Prof. Dr. Ali KAHRAMAN, Selcuk University, Turkey
Assoc. Prof. Dr. Babek ABBASOV, Qafqaz University, Azerbaijan
Assoc. Prof. Dr. Bakıt SARSEMBAYEV, Manas University, Kyrgyzstan
Assoc. Prof. Dr. Fatih BASCIFTCI, Selcuk University, Turkey
Assoc. Prof. Dr. Ismail SARITAS, Selcuk University, Turkey
Assoc. Prof. Dr. Mehmet CUNKAS, Selcuk University, Turkey
Assoc. Prof. Dr. Mykola H. STERVOYEDOV, Karazin Kharkiv National University, Ukraine
Assoc. Prof. Dr. Sergey DIK, Belarusian State University, Belarus
Assoc. Prof. Dr. Stoyan BONEV, American University, Bulgaria
Assoc. Prof. Dr. Şakir TASDEMIR, Selcuk University, Turkey
Assist. Prof. Dr. Abdurrazag Ali ABURAS, International University of Sarajevo, Bosnia and
Herzegovina
Assist. Prof. Dr. Emir KARAMEHMEDOVIC, International University of Sarajevo, Bosnia and
Herzegovina
Assist. Prof. Dr. Haris MEMIC, International University of Sarajevo, Bosnia and Herzegovina
Assist. Prof. Dr. Hayri ARABACI, Selcuk University, Turkey
Assist. Prof. Dr. Hulusi KARACA, Selcuk University, Turkey
Assist. Prof. Dr. Humar KAHRAMANLI, Selcuk University, Turkey
Assist. Prof. Dr. Kemal TUTUNCU, Selcuk University, Turkey
Assist. Prof. Dr. Mustafa ALTIN, Selcuk University, Turkey
Dr. Felix MUSAU, Kenyatta University, Kenya
Dr. Ilker Ali OZKAN, Selcuk University, Turkey
Message from the Symposium Chairpersons
On behalf of the Organizing Committee, it is our great pleasure to welcome you to
the International Conference and Exhibition on Electronic, Computer and Automation
Technologies (ICEECAT 2014) in Konya. ICEECAT 2014 establishes a leading technical
forum for engineers, researchers and development analysts to exchange information,
advance the state of the art, and define the prospects and challenges of Electronic,
Computer and Automation Technologies in the new century.
ICEECAT 2014, International Conference and Exhibition on Electronic, Computer
and Automation Technologies is organized by Selcuk University - Faculty of Technology
in Konya and supported by Selcuk University Scientific Research Projects Coordination
Unit.
We would also like to thank Assoc. Prof. Dr. Ahmet Afsin Kulaksız for his
contribution as a speaker, providing highly interesting and valuable information
to the attendees.
We would like to express our gratitude to all the delegates attending the
conference and look forward to meeting you at ICEECAT 2014. We also hope you
have a wonderful stay in Konya, the city that hosted Mevlânâ Celâleddîn-î Belhî Rûmî,
the great poet and philosopher who authored the "Mesnevi-i Manevi".
Prof. Dr. Faruk ÜNSAÇAR
On behalf of the Organizing Committee
Table of Contents
Integration of Renewable Energy Sources in Smart Grids
1
Ahmet Afşin Kulaksız
A Text Mining Approach on Poems of Molana (Rumi)
5
Ali Akbar Niknafs, Soudabeh Parsa
Görsel Kriptografi’nin Kaos Tabanlı Steganografi ile Birlikte Kullanımı (Visual
Cryptography With Chaos Based Steganography)
10
E. Odabaş Yıldırım, M.Ulutaş
Photovoltaic System Design, Feasibility and Financial Outcomes for Different
Regions in Turkey
16
V.A. Kayhan, F. Ulker, O. Elma, B. Akın and B. Vural
Tavsiye Sistemleri için Bulanık Mantık Tabanlı Yeni Bir Filtreleme Algoritması
22
D.Kızılkaya, A.M. Acılar
Semantic Place Recognition Based on Unsupervised Deep Learning of Spatial
Sparse Features
27
A. Hasasneh, E. Frenoux and P. Tarroux
Wind Turbine Economic Analysis and Experiment Set Applications with Using
Rayleigh Statistical Method
33
B. F. Kostekci, Y. Arıkan and E. Çam
A Comparative Study of Bacterial Foraging Optimization and Cuckoo Search
37
A.Özkış, A. Babalık
Effect of Segment Length on Zigzag Code Performance
43
Salim Kahveci
Evaluation of Information in Library and Information Activities
46
A.Gurbanov, P.Kazimi
Bulanık Mantık ve PI Denetleyici ile Sıvı Sıcaklık Denetimi ve Dinamik Performans
Karşılaştırması
50
A.Gani, H.Açıkgöz, Ö.F.Keçecioğlu and M.Şekkeli
BT Görüntüleri Üzerinde Pankreasın Bölge Büyütme Yöntemi ile Yarı Otomatik
Segmentasyonu
55
S.Darga, H.Evirgen, M.Çakıroğlu and E.Dandıl
Serially Communicating Lift System
59
A.Gölcük, H.Işık
Kahramanmaraş Koşullarında 3 kW Gücündeki a-Si ve c-Si Panellerden Oluşan
Fotovoltaik Sistemlerin Karşılaştırılması
63
Ş. Yılmaz, H. R. Özçalık
The Experiment Set Applications of PLC-HMI Based SCADA Systems
67
H. Terzioglu, C. Sungur, S. Tasdemir, M. Arslan, M. A. Anadol and C. Gunseven
Emotion Recognition From Speech Signal: Feature Extraction And Classification
70
S. Demircan, H. Kahramanlı
The Design and Implementation of a Sugar Beet Harvester Controlled via Serial
Communication
74
Adem Golcuk, Sakir Tasdemir and Mehmet Balcı
Menezes Vanstone Eliptik Eğri Kriptosisteminde En Önemsiz Biti Değiştirerek
Şifreleme
78
M.Kurt, N. Duru
Quality and Coverage Assessment in Software Integration Based on Mutation
Testing
81
Iyad Alazzam, Kenneth Magel and Izzat Alsmadi
Methodological Framework of Business Process Patterns Discovery and Reuse
84
Laden Aldin, Sergio de Cesare and Mohammed Al-Jobori
Rule-Based Modeling of Heating and Cooling Performances of RHVT Positioned
at Different Angles with the Horizontal
89
Yusuf Yılmaz, Sakir Tasdemir and Kevser Dincer
Comparison of Performance Information Retrieval of Maximum Match and Fixed
Length Stemming
92
M.Balcı, R.Saraçoğlu
Lineer Kuadratik Regülatör ve Genetik Tabanlı PID Denetleyici ile Doğru Akım
Motorunun Hız Denetimi
95
H.Açıkgöz, Ö.F.Keçecioğlu , A.Gani and M.Şekkeli
Methodological Framework of Business Process Patterns Discovery and Reuse
100
Laden Aldin, Sergio De Cesare
Speed Control of Direct Torque Controlled Induction Motor By Using PI, Anti-Windup
PI And Fuzzy Logic Controller
105
H.Açıkgöz, Ö.F.Keçecioğlu, A.Gani And M.Şekkeli
The Load Balancing Algorithm for the Star Interconnection Network
111
Ahmad M. Awwad
Bankacılık ve Finans Sektöründe Bir Veri Sıkıştırma Algoritma Uygulaması
117
İ. Ülger, M. Enliçay, Ö. Şahin, M. V. Baydarman and Ş. Taşdemir
International Conference and Exhibition on Electronic, Computer and Automation Technologies (ICEECAT'14), May 9-11, 2014, Konya, Turkey
Integration of Renewable Energy Sources in
Smart Grids
A.A. KULAKSIZ
Selcuk University, Engineering Faculty, Electrical & Electronics Eng. Dept., Konya/Turkey, afsin@selcuk.edu.tr
Abstract – Environmental as well as economic concerns are changing how we obtain and utilize energy. The trend is toward cleaner and more sustainable energy practices that replace older ones. The obligatory transformation of the energy infrastructure introduces costs and many other challenges. For upgrading old power grids, in which energy is distributed unidirectionally and passively, interest is shifting to smart grids. Recent studies suggest that smart grids can deliver the decarbonization of the power sector at a realistic cost.
The increasing connection of a great number of large renewable energy plants and other environment-friendly generators, together with the increasing demands of users, introduces problems concerning power quality, efficiency of energy management, and stability. Smart grids can increase the connectivity, automation and coordination between suppliers and networks. This study discusses the role of the smart grid in the optimal use of renewable energy sources and reviews power electronic converter applications.

Keywords – Smart grid, renewable energy resources, power electronics.

I. INTRODUCTION
Reducing carbon footprint and minimizing fuel cost are the keys to meeting the global warming concerns of the power generation sector. To this end, the number of renewable energy resources, distributed generators and electric vehicles is increasing to provide sustainable electric power. This situation, together with the requirements to improve power supply reliability and quality, mandates new strategies for the operation of the electricity grid. Smart grids can play a key role in enabling the use of low-carbon technologies and meeting peak demand with an ageing infrastructure.
The advancement in communication systems offers the possibility of monitoring and control throughout the power system. Therefore, by means of information technologies, electrical power systems can be modernized. The smart grid was developed to allow consumers to contribute to grid operation by employing advanced metering and taking control of their energy use. Smart grids can coordinate the needs of generators, grid operators and consumers to operate all parts of the system as efficiently as possible, maximizing system reliability, stability and resilience while minimizing costs and environmental impacts. Therefore, integration of the load in the operation of the power system helps balance supply and demand. When all parts of the power system are monitored, its state can be observed and control becomes possible [1].
Most traditional fossil-fueled power plants are able to operate at a determined output level. Therefore, electricity supply and demand can be matched without any challenge. However, for some renewable energy sources, notably solar photovoltaic and wind, especially when they provide an important fraction of total power, the fluctuating nature of the source may negatively impact the stability of the electricity system. In recent years, the increase in electrical energy consumption has not been followed by investment in transmission infrastructure. In many countries, existing power distribution lines are operating near full capacity and some renewable energy resources cannot be connected. The aim of this study is to discuss and review the role of the smart grid in the optimal use of renewable energy sources and the maximum use of the available energy at minimum cost.

II. TECHNOLOGIES FOR SMART GRID
The modernization of transmission and distribution grids can be provided by means of smart grids. However, several technologies must be developed and implemented. These are [1]:
- Information and communications technologies
- Sensing, measurement, control and automation technologies
- Power electronics and energy storage.
All grid components are able to communicate over bi-directional communication systems. The increased penetration of electric vehicles and micro-generation makes it possible to modify their consumption pattern to overcome congestion in the power system.
In order to enable the grid to respond to real-time demand, sensing, measurement, control and automation systems are required. Control and automation systems provide rapid diagnosis and timely response to any event throughout the power system. This provides efficient operation of power system components and helps relieve congestion in transmission and distribution circuits [1]. Intelligent electronic devices provide protective relaying, measurements and fault records for the power system. Smart meters, communication, displays and associated software allow customers to have control over their electricity use [2].
Long-distance transport and penetration of renewable energy resources can be made possible by high voltage direct current
(HVDC) and flexible AC transmission systems (FACTS).
Renewable energy sources, energy storage and consumer loads
can be controlled by power electronic circuits. Control of
active and reactive power output of the renewable energy
generators can also be achieved conveniently using power
electronic converters [3].
III. RENEWABLE ENERGY RESOURCES
The vital necessity for the smart grid is to ensure that the network is not overloaded while connecting renewable energy resources to the congested network in a quicker and more cost-effective way. Since they are highly intermittent, renewable sources such as solar photovoltaic, wind and hydro may, if left uncontrolled, create problems on the network involving voltage levels, voltage fluctuations, thermal ratings and power flows. Also, their localized and intermittent nature makes them difficult to accommodate properly in the current electric grid. A smart grid can reduce the effect of employing renewable energy on the overall power system and ensure stable electrical power.
Figure 1 gives an overview of the use of power electronic converters in renewable energy systems. The converters are applied to match parameters, to couple distributed sources with power lines, and to control consumption of the energy produced by these sources. Power electronic converters placed between a renewable energy resource and the grid can be used to control the reactive power output and hence the network voltage. Also, the real power output can be controlled to enable the generator to meet grid requirements. The characteristics of renewable energy resources are explained in the following sub-sections.

Figure 1: Power electronic converters in renewable energy systems

A. Photovoltaic systems
Figure 2 shows the general block diagram of a PV system, which consists of a DC-DC converter for maximum power point tracking (MPPT) and for raising the bus voltage, a single- or three-phase inverter, an output filter, sometimes a transformer and grid interface, and a controller.
Figure 2: A general PV system block diagram
The basic element of a PV system is the solar cell. A typical
solar cell consists of a p-n junction formed in a semiconductor
material similar to a diode. As shown in Figure 3, the
equivalent circuit model of a solar cell consists of a current generator IL, a diode, and a series loss resistance Rs plus a parallel (shunt) loss resistance Rsh [4]. Series and parallel combinations of solar cells
form solar PV modules rated at required output voltage and
current. The characteristics of a PV module can be determined
using the model of a single solar cell.
Figure 3: The equivalent circuit of a solar cell
The current-voltage characteristic of single cell is described
by the Shockley solar cell equation [4]:

$$I = I_L - I_0\left[\exp\!\left(\frac{V + I\,R_s}{n\,V_{th}}\right) - 1\right] - \frac{V + I\,R_s}{R_{sh}} \qquad (1)$$
where I is the output current, IL is the generated current under
a given insolation, I0 is the diode reverse saturation current, n
is the ideality factor for a p-n junction, Rs is the series loss
resistance, and Rsh is the shunt loss resistance. Vth is known as
the thermal voltage.
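Equation (1) is implicit in I (the current appears inside the exponential), so it is usually solved numerically. The minimal Python sketch below solves it with a few Newton iterations; all parameter values are illustrative assumptions, not data from [4].

```python
# Solving the implicit Shockley equation (1) for the cell current I at a
# given terminal voltage V, using Newton iteration.
import math

def cell_current(V, IL=5.0, I0=1e-9, n=1.3, Vth=0.0257, Rs=0.01, Rsh=100.0,
                 iterations=50):
    """Solve I = IL - I0*(exp((V + I*Rs)/(n*Vth)) - 1) - (V + I*Rs)/Rsh."""
    I = IL                                   # start from the generated current
    for _ in range(iterations):
        e = math.exp((V + I * Rs) / (n * Vth))
        f = IL - I0 * (e - 1) - (V + I * Rs) / Rsh - I
        df = -I0 * e * Rs / (n * Vth) - Rs / Rsh - 1   # df/dI
        I -= f / df                          # Newton step
    return I

# Sweep the I-V curve of the model and report its maximum power point.
points = [(v / 100.0, cell_current(v / 100.0)) for v in range(0, 70)]
V_mpp, I_mpp = max(points, key=lambda p: p[0] * p[1])
print(f"model MPP: V={V_mpp:.2f} V, I={I_mpp:.2f} A, P={V_mpp * I_mpp:.2f} W")
```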
The characteristics of a PV system vary with temperature and insolation [4]. As the PV system output is intermittent, the smart grid can help integrate it into the electric network. For example, for a PV system supplying several commercial and industrial consumers, supply and demand matching can be provided by the bi-directional communication used in smart grid technology. When the solar insolation decreases, the smart grid can curtail service to the customer at an allowed rate and, when the insolation is recovered, the service can resume.
A maximum power point tracking (MPPT) controller, which
can be built including a dc-dc converter, is required to track
the maximum power point in its corresponding curve whenever
temperature and/or insolation variation occurs. Several groupings of MPPT algorithms have been suggested in the literature. The algorithms can be categorized as either direct or indirect methods. The indirect methods use data on PV
modules or mathematical functions obtained from empirical
data to estimate maximum power points. The methods in this
category are curve fitting, look-up table, open-voltage PV
generator, short circuit PV generator and the open circuit cell
[5]. Direct methods seek maximum power point using PV
voltage and/or current measurements. The advantage of these
algorithms is being independent from the a priori knowledge
of the PV module characteristics. The operating point is
independent of solar insolation or temperature. Some of the
methods in this group are feedback voltage, feedback current,
perturbation&observation (P&O), incremental conductance
(IC) and fuzzy logic [5].
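As an illustration of the direct methods listed above, the following sketch shows a bare-bones perturb & observe (P&O) loop; the toy panel model and all numeric values are invented for demonstration and stand in for real PV voltage/current measurements.

```python
# Perturb & observe MPPT: perturb the operating voltage in small steps and
# keep the step direction while the measured power keeps increasing.

def perturb_and_observe(read_panel, v_ref=15.0, step=0.1, cycles=200):
    """read_panel(v_ref) -> (v, i) measured at the commanded operating point."""
    v, i = read_panel(v_ref)
    p_prev = v * i
    direction = 1.0
    for _ in range(cycles):
        v_ref += direction * step       # perturb the operating voltage
        v, i = read_panel(v_ref)
        p = v * i                       # observe the resulting power
        if p < p_prev:                  # power dropped: reverse the direction
            direction = -direction
        p_prev = p
    return v_ref

# Toy panel model (assumed for demonstration): power peaks at 17 V.
def toy_panel(v_ref):
    p = max(0.0, 5.0 * (1.0 - ((v_ref - 17.0) / 6.0) ** 2))
    return v_ref, p / max(v_ref, 1e-6)  # (voltage, current) with v * i = p

print(f"tracked voltage: {perturb_and_observe(toy_panel):.1f} V (true MPP: 17 V)")
```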
PV power systems can be configured by three basic
connections as shown in Figure 4. In Figure 4(a), an internal
DC bus is used, which is a widely used application. Here, one
high power DC/AC inverter is used to obtain 3 phase AC
output. The disadvantage of this configuration is that in case
the inverter fails the entire output is lost. This problem can be
avoided by using the configuration in Figure 4(b). Here, the
inverters are used for individual arrays. A demanding
configuration is to employ DC/AC inverters integrated into the
PV module as shown in Figure 4(c) [6]. The converters here
are highly efficient, minimum in size and able to work in
parallel with other converters.
Figure 4: Typical PV power supply system configurations

B. Wind power systems
Wind power systems convert the kinetic energy of air into electrical energy. They are currently developed both onshore and offshore. Offshore wind farms experience stronger and more consistent winds and have a reduced environmental impact. The majority of the generators used in offshore wind turbines are variable speed [3]. Using four-quadrant converters, both active and reactive power flow to and from the rotor circuit can be controlled. A general block diagram of a wind power generation system is shown in Figure 5. Here, a 3-phase rectifier converts the generated AC voltage to DC. A boost converter is used to step up the rectified voltage to the dc-link value of the inverter. The capacitor at the output controls the Vdc voltage as an energy storage element. Transfer of the active and reactive powers is controlled by the three-phase inverter.

Figure 5: A general block diagram of a wind power generation system

The power in the wind is computed from the following relationship [7]:

$$P_w = \frac{1}{2}\,\rho A \upsilon^3 \qquad (2)$$

where Pw is the power in the wind (watts); ρ is the mass density of air (kg/m3) (at 15 °C and 1 atm, ρ = 1.225 kg/m3); A is the cross-sectional area through which the wind passes (m2); and υ is the velocity of the wind normal to A (m/s). The wind power cannot be fully utilized because the wind speed after passing the turbine is not zero. The theoretical maximum power extraction ratio, the so-called Betz limit, is 59% [8].
Cut-in wind speed is the minimum speed needed to generate net power. For lower wind speeds, wind energy is wasted. As velocity increases, the power delivered by the generator rises as the cube of the wind speed, as given by Equation (2). Above rated wind speed, the generator delivers as much power as its capacity allows. When a wind speed limit is exceeded, a part of the excess energy must be wasted in order to avoid damaging the wind turbine [7]. This point is called the cut-out wind speed, where the machine must be shut down.
High power converters are widely employed in wind farms consisting of closely placed turbines. The generator type and converter type determine the topology of the electric network. Typical examples of turbine connections with generators are shown in Figure 6. For the configuration in Figure 6(a), the application of AC-DC/DC-AC converters provides a DC link that allows direct control of the active power supplied to the network. The DC line also helps to connect a farm located at a distance from the existing grid [6]. In order to obtain individual control of turbine power, it is possible to divide the rectifier part of the AC-DC/DC-AC converter among the particular turbines. For the configuration shown in Figure 6(b), matching turbines to a DC network is significantly easier than to an AC network, as the
control parameter of a DC network is only amplitude, while an AC
network requires three, namely amplitude, frequency and
phase. Connection of energy storage devices is also easier for
this configuration [6]. In a smart grid, electric vehicles can be integrated so that their batteries are charged using wind power. If charging is carried out when the wind blows, the smart grid can match supply and demand while reducing greenhouse gas emissions. Employing power electronic converters in smart grids can increase the power transfer capacity of circuits, route power flows through less loaded circuits, and increase controllability.

Figure 6: Diagram of the typical turbine connections in wind farms: (a) with common dc link; (b) with internal dc network
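As a worked example of Equation (2), the short sketch below computes the power in the wind and the Betz-limited maximum for an assumed rotor diameter and wind speed; both figures are illustrative, not taken from the text.

```python
# Power in the wind, Equation (2): P_w = 0.5 * rho * A * v^3, capped in
# practice by the Betz limit of about 59%.
import math

rho = 1.225          # air density at 15 C and 1 atm (kg/m^3), as in the text
diameter = 40.0      # rotor diameter in meters (assumed)
v = 8.0              # wind speed normal to the swept area (m/s, assumed)

A = math.pi * (diameter / 2.0) ** 2      # swept cross-sectional area (m^2)
P_wind = 0.5 * rho * A * v ** 3          # power in the wind, Equation (2)
P_betz = 0.59 * P_wind                   # theoretical maximum extraction

print(f"swept area:           {A:.0f} m^2")
print(f"power in the wind:    {P_wind / 1e3:.0f} kW")
print(f"Betz-limited maximum: {P_betz / 1e3:.0f} kW")
# Doubling the wind speed multiplies the power by 2**3 = 8 (cubic dependence).
```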
IV. CONCLUSION
There are significant barriers to be overcome for employing
smart grids. In particular, governments need to establish policies and plans to accelerate the grid transformation process. The need
for smart grids and their benefits should be clearly explained
by research organizations, industry and the financial sector.
However, each region’s own technical and financial structure
must be considered to develop solutions enabling widespread
deployment of smart grids.
As smart grids are much more complex systems than classical power networks, they require extensive research in many disciplines. Applying probabilistic risk assessment to the entire electricity system and developing smart network control technologies for the optimal use of renewable energy sources can be considered research priorities. Because the subject of renewable energy integration has very extensive application possibilities in smart grids, exhaustive theoretical discussion has been omitted. The aim of this article was to increase awareness and inspire research in the discussed area.
REFERENCES
[1] J. Ekanayake, K. Liyanage, J. Wu, A. Yokoyama, and N. Jenkins, "The Smart Grid," in Smart Grid: Technology and Applications, John Wiley & Sons, Ltd, Chichester, UK, 2012.
[2] M. Sooriyabandara and J. Ekanayake, "Smart grid - technologies for its realisation," Proc. IEEE Int. Conf. Sustainable Energy Technol. (ICSET), 2010, pp. 1-4.
[3] J. Ekanayake, K. Liyanage, J. Wu, A. Yokoyama, and N. Jenkins, "Power Electronic Converters," in Smart Grid: Technology and Applications, John Wiley & Sons, Ltd, Chichester, UK, 2012.
[4] A. A. Kulaksiz, "ANFIS-based estimation of PV module equivalent parameters: application to a stand-alone PV system with MPPT controller," Turk J Elec Eng & Comp Sci, vol. 21, pp. 2127-2140, 2013.
[5] V. Salas, E. Olias, A. Barrado, and A. Lazaro, "Review of the maximum power point tracking algorithms for standalone photovoltaic systems," Solar Energy Materials & Solar Cells, vol. 90, pp. 1555-1578, 2006.
[6] G. Benysek, M. Kazmierkowski, J. Popczyk, and R. Strzelecki, "Power electronic systems as a crucial part of Smart Grid infrastructure - a survey," Bulletin of the Polish Academy of Sciences: Technical Sciences, vol. 59, no. 4, pp. 455-473, 2012.
[7] G. M. Masters, Renewable and Efficient Electric Power Systems. Wiley-Interscience, 2004.
[8] M. P. Kazmierkowski, R. Krishnan, and F. Blaabjerg, Control in Power Electronics. Academic Press, London, 2002.
International Conference and Exhibition on Electronic, Computer and Automation Technologies (ICEECAT'14), May 9-11, 2014, Konya, Turkey
A text mining approach on poems
of Molana (Rumi)
Ali Akbar Niknafs¹ and Soudabeh Parsa²
¹ Computer Engineering Dept., Shahid Bahonar University, Kerman/Iran, niknafs@uk.ac.ir
² Computer Engineering Dept., Shahid Bahonar University, Kerman/Iran, s.parsa@eng.uk.ac.ir
Abstract – In this paper a series of text mining tasks is performed on 1000 poems of Molana (Rumi). Three data mining methods, apriori, C5.0 and GRI, are used. The poems are preprocessed, and a fuzzy inference system is used for testing the methods. Two types of analysis are performed: technical and literary. The methods are compared, and the results show the better performance of apriori. The comparison criterion is the recall of classification.

Keywords – text mining, classification, TFIDF, rule mining, Molana (Rumi).

I. INTRODUCTION
The new era of information technology has affected many fields of science, including literature. Poets have a sophisticated universe of imagination, such that a deep understanding of their beliefs is not simple. Computer specialists have a scientific responsibility to enter the universe of poems and to use the powerful algorithms of artificial intelligence and data mining to extract clear and bright analyses of poets' styles and concepts.
Data mining is the extraction of hidden patterns of significant relationships from large amounts of data. Text mining is a more literature-oriented field of data mining that provides many useful methods for rule extraction, classification, clustering and forecasting in literary texts.
Text mining is a new field that tries to extract meaningful information from natural language text. It may be loosely characterized as the process of analyzing text to extract information that is useful for particular purposes. Compared with the kind of data stored in databases, text is unstructured. The field of text mining usually deals with texts whose function is the communication of factual information or opinions [1].
Molana (Rumi) is a 13th-century mystic genius whose fame has spread all over the world. During recent years many researchers have focused on interpreting his works, including Masnavi, Divan e Shams, Fih-e-Mafih, Maktoobat and Majales-e Sab'e. Although statistical analyses have been introduced over these years, not enough data mining or text mining approaches have been considered.
We concentrate on the question of which conceptual elements contributed to the mentality of Molana (Rumi) while he was composing a poem. To achieve this purpose, a text mining methodology is used in this paper. Three rule mining methods, apriori, C5.0 and GRI, are used, and a fuzzy inference system (FIS) is created for testing the rules. The capability of different membership functions (MF) is also compared in this FIS. The results are analyzed from two points of view, literary and technical, which are discussed in the following sections.

II. DATA MINING MODELS, FUZZY INFERENCE SYSTEM AND TFIDF TECHNIQUE
GRI is a data mining model that extracts association rules from data. In this model some fields are input and some are output; its input can be numeric or categorical.
Apriori is another data mining model that can likewise discover association rules. It extracts rules carrying more information and is faster compared with GRI.
The third model used in this paper is C5.0, which is based on the decision tree algorithm: it builds decision trees as a development of ID3 [2]. The rules produced by C5.0 are easier to understand than the decision tree itself.
A fuzzy inference system is the process of mapping from a given input to an output using fuzzy logic. The process involves membership functions, fuzzy logic operators and if-then rules. In this paper four membership functions, triangular, Gaussian, S-shaped and sigmoid, are used to check with which of them the data mining models can achieve higher accuracy in forecasting the class of poems.
Documents are treated as a bag of words or terms. Each term weight is computed based on some variation of the Term Frequency-Inverse Document Frequency (TFIDF) scheme. One approach to extracting relevant information from related topics is to select one or more phrases that best represent the content of the documents. Most systems use TF-IDF scores to sort the phrases in multiple text documents and select the top-k as key phrases [3].
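As a small illustration of the TFIDF weighting described above, the sketch below computes tf × idf weights over a toy corpus; the documents and the stop-word list are invented placeholders, not the authors' Persian data.

```python
# Minimal TF-IDF sketch: each poem is treated as a bag of words and each
# term receives the weight tf * idf. Toy English "poems" stand in for the
# Persian "Ghazal"s used in the paper.
import math
from collections import Counter

poems = ["sea fire sea wisdom", "sky flower sea", "flower flower light"]
stop_words = {"the", "and"}   # the paper builds a poem-specific stop-word file

docs = [[w for w in p.split() if w not in stop_words] for p in poems]
n_docs = len(docs)
df = Counter(w for d in docs for w in set(d))   # document frequency per term

def tfidf(doc):
    tf = Counter(doc)
    return {w: (tf[w] / len(doc)) * math.log(n_docs / df[w]) for w in tf}

for i, d in enumerate(docs):
    top = sorted(tfidf(d).items(), key=lambda kv: kv[1], reverse=True)[:2]
    print(f"poem {i}: top weighted terms -> {top}")
```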
III. RELATED WORKS
Pereira et al. proposed an automatic process for classifying epileptic diagnoses based on the International Classification of Diseases, Ninth Revision (ICD-9). A text mining approach was applied, using processed electronic medical records and a K-Nearest Neighbors white-box multi-classifier to map each instance to the corresponding standard code. The results suggest good performance in proposing a diagnosis from electronic medical records, despite the reduced volume of available training data [4].
Liang et al. presented a new classification method called the explicit Markov model (EMM), which is based on the Hidden Markov Model (HMM), for text classification. EMM makes better use of the context information between the observation symbols. Experimental results showed that the performance of EMM is comparable to classifiers such as SVM for text classification [5].
Yang et al. proposed an approach for classifying short texts by combining lexical and semantic features. They presented an improved measurement method for lexical feature selection and obtained the semantic features from a background knowledge repository that covers the target category domains. The combination of lexical and semantic features is achieved by mapping words to topics with different weights; in this way, the dimensionality of the feature space is reduced to the number of topics. The experimental results showed that the proposed method is more effective than existing methods for classifying short texts [6].
IV. PROBLEM DESCRIPTION AND METHODOLOGY
To start mining the poems of Molana (Rumi), the first step is preprocessing. In this step 1000 poems (Ghazal) were selected and classified by an expert. A "Ghazal" is a poem frame that has about 6 or more verses; in Persian literature, each verse is called a "Beyt". There are 22 classes defined in this paper, listed below. For clarity, the name of each class is written in 3 forms: [in the Persian alphabet, its equivalent pronunciation, and its meaning in English]: [وصال = vesal = joiner], [آب حیات = aab-e hayat = water of vitality], [وفا = vafa = troth], [عشق = eshgh = love], [فنا = fana = perdition], [صبر = sabr = patience], [عطا = ata = gift], [لطف حق = lotf-e hagh = God kindness], [شمس = Shams = sun], [نور = nour = light], [قبله = ghebleh = keblah], [توبه = tobeh = penance], [بقا = bagha = survival], [قضا = ghaza = hap], [روزه = roozeh = fast], [اختیار = ekhtiar = authority], [تلاش = talash = try], [حج = Hajj = Hadj], [فراق = feragh = separation], [چشمه = cheshmeh = spring], [شهید = shahid = martyr], [غرور = ghoroor = pride]. Each "Ghazal" might belong to several classes. It is noticeable that the poems of Molana (Rumi) are chosen from "Divan e Shams", which is a complete collection of his poems in the Persian language. In the proposed approach all text mining models are implemented on a Persian-language database, but here, for clarity of the results, we have tried to use both the Persian words and their English equivalents; also, for each complete sentence of a poem (named "Beyt" in Persian) we give its meaning in English. Figure 1 shows a sample part of the database used for text mining of the poems.

Figure 1: Sample part of the database

After classifying, the second step is finding key words, for which the TFIDF technique is used. In this step, stop words are important. There was no stop-word file for poems, so in this work a stop-word file for poems was produced, which is a contribution of this research. Using the TFIDF technique, a word-weight vector is produced for each "Ghazal". Table 1 shows a part of these vectors and their weights.

Table 1: A part of the word-weight vector

Class | عالم (universe) | گل (flower) | آسمان (sky) | بحر (sea) | لقا (face)
فنا (perdition) | 0 | 0 | 0 | 1.501 | 0
وصال (joiner) | 1.2204 | 1.08990 | 0 | 1.501 | 1.90980
By passing through these two steps, about 6000 key words were found. These keywords are features, and since there are too many features for the third step, a feature selection process is needed, through which the most important words are selected. Here 34 words were selected as the most important key words. These words are as follows: [نور = noor = light], [نهان = nahan = covert], [هوا = hava = air], [یار = yar = friend], [جام = jaam = goblet], [باغ = baagh = garden], [درد = dard = pain], [روز = rooz = day], [زمین = zamin = earth], [شب = shab = night], [لب = lab = lips], [سلطان = sultan = sultan], [شه = shah = king], [لقا = legha = face], [بنده = bandeh = slave], [بحر = bahr = sea], [آسمان = aseman = sky], [آتش = aatash = fire], [گل = gol = flower], [ماه = maah = moon], [عالم = aalam = universe], [زر = zar = gold], [دولت = dolat = government], [دریا = daryaa = sea], [خون = khoon = blood], [حق = hagh = truth], [پنهان = penhaan = hidden], [عقل = aghl = wisdom], [سینه = sineh = thorax], [ساقی = saaghi = tapster], [روح = rooh = spirit], [دوا = dava = cure], [خورشید = khorshid = sun] and [خدا = khoda = God].
These 34 words are the input to the data mining models, and the classes defined for the poems are the output. The fourth step is rule extraction using the data mining models. The fifth step is testing the extracted rules in the fuzzy inference system. In this step some poems are needed as testing samples, and preprocessing must be done on these poems too. The created FIS, including the extracted rules, must predict the class of the test samples. This testing is done with all four membership functions mentioned before.
The last step is evaluating the models. Classification accuracy is widely used as a metric to evaluate machine learning systems [7, 8]. The models and techniques used here are machine learning models and techniques; to evaluate their proficiency, the recall measure is calculated. Recall is used to evaluate the poem classification. Recall is the number of retrieved relevant items as a proportion of all relevant items [9]. The recall measure is defined as the proportion of the set of relevant items that is retrieved by the system, and therefore penalizes false negatives but not false positives [10]. Figure 2 shows the methodology used in our approach.
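Because recall is the comparison criterion used throughout this paper, the following minimal sketch shows how it is computed for a single class; the label lists are invented placeholders for the FIS predictions on the test "Ghazal"s.

```python
# Recall = retrieved relevant items / all relevant items [9]: it penalizes
# false negatives but not false positives.

def recall(true_labels, predicted_labels, target_class):
    relevant = [i for i, t in enumerate(true_labels) if t == target_class]
    retrieved = [i for i in relevant if predicted_labels[i] == target_class]
    return len(retrieved) / len(relevant) if relevant else 0.0

true = ["fana", "vesal", "fana", "fana", "vesal", "fana"]
pred = ["fana", "fana", "fana", "vesal", "vesal", "fana"]
print(f"recall for class 'fana': {recall(true, pred, 'fana'):.2f}")
# 3 of the 4 relevant 'fana' poems are retrieved -> recall = 0.75
```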
Figure 2: Methodology

V. EXPERIMENTAL RESULTS AND COMPUTATIONAL ANALYSIS
As mentioned previously, C5.0 is one of the models used in this paper. A sample rule extracted by this model is as follows:

Rule 1 for [چشمه = cheshmeh = spring]:
If [روح = rooh = spirit] = 0.0 and [حق = hagh = truth] = 1.304 and [گل = gol = flower] = 1.090 and [آسمان = aseman = sky] = 0.0 and [هوا = hava = air] = 0.0 then [چشمه = cheshmeh = spring]

According to this rule, if a "Ghazal" contains key words such as [حق = hagh = truth] and [گل = gol = flower] but lacks words such as [آسمان = aseman = sky], whose weights are zero, then the "Ghazal" is classified in the چشمه (spring) class.
The second model is apriori, and sample rules extracted by this model are shown in Table 2. The table shows that if a "Ghazal" contains words such as [دریا = daryaa = sea] and [عقل = aghl = wisdom] and lacks words such as [یار = yar = friend], [زمین = zamin = earth] and [سلطان = sultan = sultan], then it is classified in the فنا (perdition) class with a confidence of 42.8%; and if a "Ghazal" contains words such as [دریا = daryaa = sea] and [آتش = aatash = fire] and lacks words such as [حق = hagh = truth] and [یار = yar = friend], then it is classified in the فنا (perdition) class with a confidence of 41.6%.
Table 2 also gives an example for each rule; the example is a "Beyt" that includes the words mentioned in the rule.
For apriori and GRI, the minimum threshold value of confidence is set to 50% and the minimum threshold value of support is set to 3%.

Table 2: Sample output of apriori

Consequent | Antecedent | Support % | Confidence % | Example
Class فنا = Fana (perdition) | [دریا = Darya = sea] = 1.651996 and [عقل = Aghl = wisdom] = 1.267107 and [یار = Yar = friend] = 0.0 and [زمین = Zamin = earth] = 0.0 and [سلطان = Sultan = king] = 0.0 | 3.72 | 42.8 | چو مفلوجی چو مسکینی بماند آن عقل هم برجا / اگر هستی تو از آدم در این دریا فروکش دم
Class فنا = Fana (perdition) | [دریا = Darya = sea] = 1.651996 and [آتش = Atash = fire] = 1.209312 and [حق = hagh = truth] = 0.0 and [یار = yar = friend] = 0.0 | 3.19 | 41.6 | ز قلزم آتشی برشد در او هم ال و هم اال / اگر هستی تو از آدم در این دریا فروکش دم
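For readers who want to reproduce apriori-style rule extraction over binary keyword presence, the hedged sketch below uses the third-party mlxtend library with the thresholds stated above (minimum support 3%, minimum confidence 50%); the paper does not state which implementation was used, and the toy data is invented.

```python
# Apriori frequent-itemset mining followed by rule generation, using mlxtend.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Toy presence/absence matrix: rows are poems, columns are keywords, and the
# class membership ("fana") is included as an extra boolean column.
data = pd.DataFrame({
    "sea":    [1, 1, 1, 0, 1, 0],
    "wisdom": [1, 0, 1, 0, 1, 0],
    "friend": [0, 0, 0, 1, 0, 1],
    "fana":   [1, 0, 1, 0, 1, 0],
}).astype(bool)

frequent = apriori(data, min_support=0.03, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.5)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```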
The third model is GRI. Table 3 shows a sample of the rules extracted by the GRI model. It indicates that if a "Ghazal" contains words such as [دولت = dolat = government], [عالم = aalam = universe] and [گل = gol = flower], or contains words such as [جام = jaam = goblet] and [درد = dard = pain] while lacking the word [شه = shah = king], then with a confidence of 100% it is classified as آب حیات (water of vitality).

Table 3: Sample output of GRI

Consequent | Antecedent | Support % | Confidence % | Example
Class آب حیات = Abe-hayat (water of vitality) | [دولت = Dolat = government] = 1.662579 and [عالم = Alam = universe] = 1.220492 and [گل = Gol = flower] = 1.089902 | 0.53 | 100 | زهی دولت زهی رفعت زهی بخت و زهی اختر / شقایق ها و ریحان ها و گل های عجب ما را
Class آب حیات = Abe-hayat (water of vitality) | [جام = Jam = goblet] = 1.657267 and [درد = dard = pain] = 1.646765 and [شه = shah = king] = 0.0 | 0.27 | 100 | چون شیشه صاف گشته از جام حق تعالی / درد تو خوش گوارد تو درد را مپاال
The rules extracted by these three models are applied to the FIS. There are 20 "Ghazal" test samples, which are preprocessed as described in step one. The testing is done with four membership functions, sigmoid, Gaussian, S-shaped and triangular, and the recall measure is calculated for each. Table 4 and Figure 3 show the recall comparison of the three models with the four membership functions.

Table 4: Recall

Model | Sigmoid | Gaussian | S | Triangular
apriori | 0.35 | 0.13 | 0.37 | 0.34
GRI | 0.21 | 0.09 | 0 | 0.27
C5.0 | 0.13 | 0.07 | 0.03 | 0.13

Figure 3: Recall comparison

The selection of the MF depends on the expert. Different MF types were tested and evaluated against the expert's classification. The test results showed that some membership functions give clearer results for text mining. Table 4 and Figure 3 show that with all four membership functions the apriori model predicts the "Ghazal" classes more accurately, and that among the membership functions the S-shaped one describes the classes more accurately than the others. Thus for text mining, and especially for Persian text mining and the mining of poems, the apriori model is more useful and accurate. The next section presents the content analysis that helps us grasp the hidden conceptual elements; in fact, this analysis is the final goal of text mining, and it helps literature experts understand the mystical concepts embedded in the poems.

VI. CONCEPTUAL CONTENT ANALYSIS
In this section a typical conceptual illustration of the results is introduced. As said earlier, the text mining process led to some classes, and in each class some keywords are found. As an example, the class [فنا = Fana = perdition] is considered. The following poems are selected from the obtained results.
For example, the following "Beyt" is written in both Persian and phonetic forms, and its concept follows.
تو جامه گرد کنی تا زآب تر نشود / هزار غوطه تو را خوردنی است در دریا
To jame gerd koni ta ze aab tar nashavad / hezaar ghooteh to raa khordanist dar daryaa
When you are on the path of love, however you try, you cannot stay safe from its phenomena. It is exactly like folding your clothes so that they do not get wet: once you arrive at the sea, you will be immersed and cannot avoid getting wet. The lover will therefore sink in the sea of love and forget his imaginary existence. So the path of love is a route leading to the ocean of perdition.
In another "Ghazal", surprisingly, existence is presented as non-existence. The waves of the sea convey you to perdition. Therefore, while you suppose that you are alive, you are in fact not alive; you are dead, with a special death before the usual death.
Another example is a "Ghazal" about an oyster that robs a water drop from the beach; afterwards a diver searches for it and finds it in the depths of the ocean. In fact, that drop has died and has then been reborn in a higher, more perfect state.
گرچه صدف زساحل قطره ربودوگم شد / دربحر جوید اورا غواص کآشنا شد
Garche sadaf ze sahel ghatreh robood o gom shod / dar bahr
jooyad ou raa ghawaas kashena shod
In comparison with the light of the sun, each person is just like a shadow; so what benefit does the sun bring a shadow other than perdition? It means that the imaginary, virtual existence of a human is just like a shadow that must die before the glorious shining of the sun, and the sun is a glorious appearance of the Creator.
هما و سایه اش آن جا چو ظلمتی باشد / ز نور ظلمت غیر فنا چه سود کند
Homa va saye ash aan ja cho zolmati bashad / ze noor-e zolmat gheire fana che sood konad
VII. CONCLUSION
Text mining in the field of poems is a complicated procedure, not only because of the technical tasks but because of the special phenomenon of literature, which uses words in ambiguous and figurative ways. In the poems of Molana (Rumi) many words can be seen with the same spelling but different meanings. For example, the word "گل" is in some places used as "Gol = Flower" and in other places as "Gel = Slob", but the spelling of both words in Persian is the same. This occurs more often in poems dealing with mystical concepts. In this paper we tried to handle such an approach. It is noticeable that, in addition to the text mining outputs, expert opinions should be added to the results in order to obtain a suitable conceptual analysis. It can therefore be said that text mining in poems is a cooperative and transactional procedure between human and machine.

REFERENCES
[1] I. H. Witten, "Adaptive text mining: Inferring structure from sequence," J. Discrete Algorithms, 2(2), pp. 137-159, June 2004.
[2] Z. GuoDong, S. Jian, and Z. Min, "Exploring various knowledge in relation extraction," in Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pp. 427-434, 2005.
[3] T. Zakzouk and H. Mathkour, "Text classifiers for cricket sports news".
Internat. Conf. Telecommun. Tech. Appli., Proc. CSIT, vol. 5, IACSIT
Press, Singapore, 2011.
[4] L. Pereira, R. Rijo, C. Silva, M. Agostinho, "ICD9-based Text Mining
Approach to Children Epilepsy Classification". Conference on
Enterprise Information Systems, International Conference on Project
Management, International Conference on Health and Social Care
Information Systems and Technologies, 2013.
[5] J. G. Liang, X. F. Zhou, P. Liu, L. Gou, S. Bai, "An EMM-based
Approach for Text Classification". J Information Technology and
Quantitative Management, 2013.
[6] L. Yang, C. Li, Q. Ding, L. Li, "Combining Lexical and Semantic
Features for Short Text Classification". 17th International Conference in
Knowledge Based and Intelligent Information and Engineering System,
2013.
[7] D. Billsus, M. Pazzani, "Learning collaborative information filters".
Proc. Int. Conf. on Machine Learning, 1998.
[8] P. Brusilovsky, M. T. Maybury, "From adaptive hypermedia to the
adaptive web". Guest Editors' Introduction, Communications of the
ACM, 45 (5), 31-33, 2002.
[9] M. Buckland, "The relationship between recall and precision". Journal
of the American society for information science, 45(1): 12-19, 1994.
[10] A. Alvarez, "An exact analytical relation among recall, precision, and classification accuracy in information retrieval," Department of Computer Science, Boston College, 2002.
International Conference and Exhibition on Electronic, Computer and Automation Technologies (ICEECAT'14), May 9-11, 2014, Konya, Turkey
Görsel Kriptografi'nin Kaos Tabanlı Steganografi ile Birlikte Kullanımı
(Visual Cryptography with Chaos-Based Steganography)
E. ODABAŞ YILDIRIM¹ and M. ULUTAŞ²
¹ Ataturk University, Erzurum/Turkey, esra.odabas@atauni.edu.tr
² Karadeniz Technical University, Trabzon/Turkey, ulutas@ktu.edu.tr
Abstract - In this paper, visual cryptography with chaos-based steganography is discussed from a point of view which, we believe, supplies more effective data security.
This paper presents a new scheme for hiding two meaningless secret images inside two meaningful cover images. Meaningful images are more desirable than noise-like (meaningless) shares in Visual Cryptography. We use steganography to generate a meaningful image. There are many techniques for image steganography, such as the Patchwork Algorithm, Amplitude Modulation, Spread Spectrum Image Steganography and Least Significant Bit Embedding (LSB). We focus on LSB techniques with chaos-based steganography. Chaos has potential applications in several functional blocks of a digital communication system: compression, encryption and modulation. The possibility of self-synchronization of chaotic oscillations has sparked an avalanche of work on applications of chaos in cryptography, so we use it together with steganography.

Keywords – LSB, Visual Cryptography, Steganography, Chaos.
I. INTRODUCTION
The Visual Cryptography method, first proposed by Naor and Shamir in 1994, splits the image to be hidden into two meaningless pieces called shares; in this method the shared secret is a secret image [1]. In short, visual cryptography distributes a binary image into two meaningless images in a simple way, with no need for complex computations. One of the most important properties of this method is that the secret data is revealed without any further computation, by stacking the transparent share images on top of each other and using the human visual system.
In this method, since each pixel of the secret image is represented by more than one pixel in the generated share images, a growth ratio called pixel expansion and a contrast problem arise. This growth ratio, also called the expansion factor, and the contrast problem have been the subject of many studies [2-8].
Because meaningless share images may attract the attention of malicious parties while the data is being transmitted, the idea was born of hiding these meaningless shares with steganography and thus making them meaningful [9-12].
Steganography, unlike cryptography, is not the encryption of data but the art of hiding data inside another medium. Its biggest advantage over encryption is that it raises no suspicion in a viewer that hidden data is present. Historically, steganography was used both before and after the age of encryption (owing to this advantage of not attracting attention). In ancient Greece, people wrote messages on wood and covered them with wax; the object then looked like an unused tablet, while melting the wax made the hidden message readable.
In this work, Visual Cryptography (VC) is used together with steganography. On the steganography side, the LSB-based methods sequential LSB, discrete-logarithm-based steganography and chaos-based steganography are applied, and their comparison is given in terms of the correlation, entropy, homogeneity, contrast, energy, MSE and PSNR metrics.
The rest of the paper is organized as follows. The second section describes Visual Cryptography in detail. The third section discusses steganography, the fourth the Least Significant Bit embedding method (LSB), the fifth discrete-logarithm-based steganography, the sixth chaotic systems and the logistic map, and the seventh the proposed chaos-based steganography method. The eighth section gives the details of the statistical security analysis parameters, and the conclusion evaluates the proposed method together with the other LSB methods.
II. VISUAL CRYPTOGRAPHY
In Naor and Shamir's 1994 work, the secret image, called the secret, is split into two meaningless pieces called shares. No complex mathematical computation is needed for this splitting. While splitting the image into shares, black and white pixels are represented by sub-pixels.
Two problems, pixel expansion and contrast, were mentioned above. Pixel expansion is illustrated in Figure 1, where the black and white pixels are expressed by more than one pixel. It can also be seen that a contrast problem arises for white pixels, from the new white pixel formed by the combination of the two shares.

Figure 1: A pixel scheme with a (2,2) threshold scheme

As seen in Figure 2, the share images are larger than the original image (pixel expansion). Also as seen in Figure 2, since meaningless share images tend to draw the suspicion of malicious parties, the idea was born of converting these shares into meaningful form before sending them to the other party. This is achieved with steganography.

Figure 2. Visual Cryptography

III. STEGANOGRAPHY
Steganography is the art of hiding another piece of data inside an innocent-looking carrier medium (image, audio, video, etc.) [13]. The data to be hidden can be text, an image, and so on. In steganography, the medium used to conceal the secret message (which, as mentioned above, can be an image, audio or video file) is called the carrier (cover). The data to be hidden is called the embedded (secret, authenticating) data. The medium produced by the hiding operation, indistinguishable from the original, is called the stego; the stego is the combination of the hidden data and the carrier medium.
This work is concerned with hiding data inside images. In image steganography there are various methods for hiding information in a picture. They can be classified as follows:
− Least significant bit embedding
− Masking and filtering
− Algorithms and transformations [14].

IV. LEAST SIGNIFICANT BIT EMBEDDING (LSB)
The least significant bit embedding method is a simple, widely used technique. Digital color images generally use 24 or 8 bits; gray-level images may use 1, 2, 4, 6 or 8 bits. In this method, data is hidden in the least significant bit of every byte of every pixel of the image. Since at most one bit of every eight is altered, and when a change does occur it affects the least significant bit of its byte, the changes in the resulting stego image (cover data + embedded data) are imperceptible to humans.
Suppose, as given in Figure 3, that we have two image files; the figure shows simply how the image to be hidden is embedded into another image file.

Figure 3. The LSB data hiding method

In the sequential LSB method, the image to be hidden is embedded in order into the LSBs of the pixels of the carrier (cover) image to obtain the stego image. If some analysis reveals that data is hidden inside this stego image, recovering the hidden image is very easy, because the hidden image was embedded pixel by pixel in sequence. To remove this weakness of the LSB method, random-order LSB methods have been proposed instead of sequential LSB. One of the proposed methods is steganography using the Discrete Logarithm Function [15]. In this work, steganography using the chaos method is used for the random-order LSB technique.
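A minimal sketch of the sequential LSB embedding described in this section is given below; plain Python lists stand in for image pixels, and the cover and secret values are invented.

```python
# Sequential LSB embedding: each secret bit is written into the least
# significant bit of the next cover byte, so at most one bit per byte changes.

def embed_lsb(cover, secret_bits):
    """Hide secret_bits (a list of 0/1) in the LSBs of cover (0-255 bytes)."""
    stego = cover[:]
    for k, bit in enumerate(secret_bits):
        stego[k] = (stego[k] & 0xFE) | bit   # clear the LSB, then set it
    return stego

def extract_lsb(stego, n_bits):
    return [stego[k] & 1 for k in range(n_bits)]

cover = [137, 200, 53, 96, 14, 77, 230, 91]   # toy "pixels"
secret = [1, 0, 1, 1, 0, 1, 0, 0]
stego = embed_lsb(cover, secret)
print("stego pixels:", stego)
print("recovered:   ", extract_lsb(stego, len(secret)))   # -> the secret bits
```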
V. STEGANOGRAPHY USING THE DISCRETE LOGARITHM FUNCTION
The discrete logarithm function defined as in Equation (1) hides data inside an image in a random fashion:

$$y_i = a^i \pmod{p} \qquad (1)$$

In this function:
• y_i is the position in the image where the i-th bit of the message will be hidden;
• i is the bit index of the message to be hidden;
• p is a large prime number, and a is a primitive root generated from p;
• a must be chosen such that, written as successive powers, it yields all the integers from 1 to p-1;
• that is, p and a must be relatively prime;
• the reason p is chosen prime is so that the same value is not produced twice.

If the length of the text to be hidden is m and the size of the image that will carry the data is l, then p must satisfy m < p < l. The data hiding algorithm is given below.

Step 1: Choose the largest prime number p satisfying m < p < l.
Step 2: Find the number of primitive elements of p (Ø).
Step 3: To generate the primitive elements, starting from the smallest divisor, find the divisor that, written as successive powers, yields all the integers from 1 to (p-1).
Step 4: Find the values of i satisfying gcd(i, p-1) = 1.
Step 5: Compute the values a^i (mod p) and choose one of the large ones as a.
Step 6: Using y_i = a^i (mod p), find the pixel in which each bit of the message will be placed.
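The position sequence of Equation (1) can be sketched in a few lines; p = 11 and a = 2 are deliberately tiny illustrative choices (2 is a primitive root mod 11), far below the sizes a real image would require.

```python
# Discrete-logarithm embedding positions y_i = a^i (mod p), Equation (1):
# the powers of a primitive root a visit every value in 1..p-1 exactly once,
# so the message bits are scattered over the image without repetition.

def positions(a, p, message_length):
    """Embedding position for each of the first message_length bits."""
    return [pow(a, i, p) for i in range(1, message_length + 1)]

p, a = 11, 2                 # the powers of 2 mod 11 cover 1..10 exactly once
print(positions(a, p, 10))   # -> [2, 4, 8, 5, 10, 9, 7, 3, 6, 1]
```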
VI. KAOTİK SİSTEMLER VE LOJİSTİK HARİTA
Kaos tabanlı şifreleme algoritmaları temelde, kaotik
haritaları kullanarak rastsal sayı üreteçleri olarak bir uzun
rastgele sayı dizisi üreterek düz görüntüyü bu rastgele
sayılarla şifrelerler [15]. Buradan yola çıkarak, steganografide
rastsal sayı üreteci olarak da kaotik haritalar kullanılmaktadır.
Lojistik harita, görüntülerin şifrelenmesinde başlangıç
koşullarına hassas duyarlılığı, rasgeleye benzer davranış
göstermesinden dolayı kullanılır [15-19].
Lojistik harita aşağıdaki gibi verilir:
Şekil 5. Lojistik haritanın bifurkasyon diyagramının 3 boyutlu
görünümü
(1)
Bu fonksiyonda
sistem kontrol değişkeni,
başlangıç değeri ve 0 < λ < 4,
ise yineleme (iterasyon) sayısıdır.
12
Burada çok küçük sayılarla işlem yapıldığından, genişletme
işlemi sırasında aynı değere iz düşen tekrarlı eleman olması
kaçınılmazdır. Rasgele sayı dizisini oluşturan algoritma
aşağıda verilmiştir.
Adım 1:
∑
Adım 2:
, anahtar kelimeden
başlangıç değeri hesaplanır.
Adım 3:
, lojistik harita ile bir sonraki
değer hesaplanır.
Adım 4:
, bir sonraki iterasyon için
değeri
güncellenir.
Şekil 6. Bifurkasyon diyagramının 2 boyutlu görünümü.
Şekil 5 ve Şekil 6’da lojistik haritanın 0 <
< 4
aralığında ’nın her bir değeri için 50 iterasyon ile
oluşturduğu
bifurkasyon
(çatallanma)
diyagramı
gösterilmiştir. Şekil 6’dan daha net görüleceği üzere sistem
=3.5’den sonra kaosa girmiştir.
Step 5: Expand the value produced by the logistic map by the desired ratio, so that it can be added to the iteration list.
Step 6: If the new value is not already in the iteration list, add it to the sequence.
Step 7: Repeat steps 3–6 until the iteration list contains the desired number of elements.
Step 8: Hide the share images in the corresponding pixels according to the values in the iteration list.
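As an illustration of Steps 1–8, a minimal sketch (assuming λ = 3.99999 as chosen in Section VII and the range expansion of Equation 3; duplicate indices are skipped, which is why the iteration counts in Table 1 exceed the sequence lengths):

```python
def logistic_indices(x0, n_pixels, lam=3.99999):
    """Generate n_pixels distinct pixel indices in [1, n_pixels]."""
    seen, order, x, iterations = set(), [], x0, 0
    while len(order) < n_pixels:
        x = lam * x * (1.0 - x)        # Step 3: next logistic-map value
        iterations += 1
        idx = int(x * n_pixels) + 1    # Step 5: expand [0, 1) to [1, n_pixels]
        if idx not in seen:            # Step 6: keep only unseen indices
            seen.add(idx)
            order.append(idx)
    return order, iterations           # iteration count grows as in Table 1
```

Note that this loop can run very long for large images, consistent with the incomplete 256×256 case reported in Table 1.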
During the generation of as many random numbers as the desired number of pixels, the iteration count grows because of repeated elements. Table 1 gives the sequence lengths to be produced with the logistic map, the number of iterations in the computation, and the computation times.

Table 1: Computation time and iteration count for random number generation with the logistic map

Sequence length (image size) | Iteration count | Computation time
1024 → 32×32 | 11,856 | 0.980830 s
4096 → 64×64 | 43,404 | 7.156542 s
16384 → 128×128 | 266,272 | 224.202061 s
*65536 → 256×256 | 500,000 | 832.290096 s

* Generation of the 65536-element sequence could not be completed: the sequence length reached 65311 in about 14 minutes, after which Matlab could not finish the computation and the system stopped responding.
VII. THE PROPOSED METHOD
In combining the Visual Cryptography discussed above with Steganography, non-sequential LSB was used as the LSB method, and the logistic map was used as the random number generator for this non-sequential LSB technique.

When the logistic map is used in image encryption operations, the value of λ was chosen as 3.99999. The initial value x_0 is selected from the algorithm's key. The x_0 value generated from the keyword can be expressed by converting each character of the key to the binary equivalent of its ASCII value, written as b_1 b_2 … b_L, and is then computed as shown in Equation 2.
x_0 = Σ_{i=1}^{L} b_i · 2^(−i)   (2)
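A hypothetical reading of this key-to-seed mapping in code (Equation 2 is garbled in the source; the reconstruction above and this sketch read the key's ASCII bits as a binary fraction in [0, 1)):

```python
def key_to_x0(key):
    # Assumed interpretation of Equation 2: ASCII bits as a binary fraction
    bits = ''.join(f'{ord(c):08b}' for c in key)
    return int(bits, 2) / 2 ** len(bits)   # x0 = 0.b1b2...bL in [0, 1)
```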
The logistic map produces values in the interval [0, 1]. In the proposed method, an n×m image file has n·m pixels, so the random number generator must produce values in [1, n·m]. For this, the values produced by the logistic map are expanded by a factor of n·m.
For example, for a 32×32 image file, random numbers must be generated in the interval [1, 1024]. Denoting the total number of pixels by N, the computation given in Equation 3 expands the logistic map values to the desired range:

y = ⌊x · N⌋ + 1,   y ∈ [1, N]   (3)

The flow diagram of the proposed method is shown in Figure 7.

Figure 7. Flow chart of the proposed method
VIII. STATISTICAL SECURITY ANALYSES

A. Contrast
Contrast is a measure of the amount of local variation present in an image; for a constant image it is 0. It is computed with the expression given in Equation 7:

Contrast = Σ_{i,j} |i − j|² p(i, j)   (7)

In this expression, p(i, j) is the probability of the random variable (i, j) in the co-occurrence matrix.

B. Correlation
Correlation is a statistical measure, obtained using the co-occurrence matrix, of how a pixel is related to its neighboring pixel over the whole image. It takes a value in [−1, 1]: 1 for a perfectly positively correlated image and −1 for a perfectly negatively correlated one. For a constant image the correlation is undefined. The expression used to compute the correlation is given in Equation 4:

Correlation = Σ_{i,j} (i − µ_i)(j − µ_j) p(i, j) / (σ_i σ_j)   (4)

In the given expression, i and j denote the pixel position, p(i, j) the co-occurrence value at row i and column j, µ the mean, and σ the standard deviation.

C. Energy
Energy returns the sum of the squared elements of the GLCM (gray level co-occurrence matrix). It is computed with the expression given in Equation 6:

Energy = Σ_{i,j} p(i, j)²   (6)

D. Homogeneity
Homogeneity returns a value measuring the closeness of the distribution of GLCM elements to the GLCM diagonal. It is computed with the expression given in Equation 8:

Homogeneity = Σ_{i,j} p(i, j) / (1 + |i − j|)   (8)

E. Entropy
Entropy is the magnitude of the uncertainty of a random variable arriving in a random process; in other words, it expresses how unpredictable the relation among the random numbers is. It is computed with the expression given in Equation 5:

Entropy = −Σ p log₂ p   (5)

F. PSNR
PSNR values were computed to determine the fidelity of the stego images obtained by the three methods, using the expression given in Equation 9:

PSNR = 10 log₁₀ ( max(p)² / MSE ) dB,   MSE = (1 / NM) Σ_{i=1}^{N} Σ_{j=1}^{M} (p_ij − s_ij)²   (9)

In this expression, the PSNR value is a measure of how different the original image P, of size N×M, is from the stego image S. An image with a higher PSNR resembles the original more closely.

Table 2: Comparison of the methods via the statistical security analysis parameters for the Lena stego image in which Share 1 is hidden

Parameter | Original image (Lena) | Stego image (Sequential LSB) | Stego image (Discrete Logarithm) | Stego image (Logistic Map)
Contrast | 0.3712 | 3.5000 | 0.3704 | 0.5318
Correlation | 0.8811 | 0.0370 | 0.8813 | 0.9020
Entropy | 7.2718 | 7.2720 | 7.2707 | 7.6622
Energy | 0.1509 | 0.8647 | 0.1509 | 0.0984
Homogeneity | 0.8822 | 0.9375 | 0.8824 | 0.8391

Table 3: Comparison of the methods via the statistical security analysis parameters for the Pepper (Biber) stego image in which Share 2 is hidden

Parameter | Original image (Pepper) | Stego image (Sequential LSB) | Stego image (Discrete Logarithm) | Stego image (Logistic Map)
Contrast | 0.1866 | 0.1902 | 6.1250 | 0.1882
Correlation | 0.9627 | 0.9619 | 0.5910 | 0.9623
Entropy | 7.6945 | 7.6951 | 7.6946 | 7.6943
Energy | 0.1343 | 0.1336 | 0.5861 | 0.1340
Homogeneity | 0.9245 | 0.9227 | 0.8906 | 0.9236
Table 4: Comparison of the methods by PSNR value

Image file | MSE (Lena) | MSE (Pepper) | PSNR (Lena) | PSNR (Pepper)
Sequential LSB stego | 0.3036 | 0.3026 | 53.0013 dB | 53.1850 dB
Discrete Log. stego | 0.3055 | 0.3048 | 53.0041 dB | 53.1534 dB
Chaos-based stego | 0.1263 | 0.1251 | 56.8405 dB | 57.0196 dB
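The MSE and PSNR columns of Table 4 follow Equation 9; a minimal NumPy sketch (8-bit grayscale arrays assumed):

```python
import numpy as np

def mse_psnr(original, stego):
    """Equation 9: MSE and PSNR (dB) between original P and stego S."""
    p = original.astype(np.float64)
    s = stego.astype(np.float64)
    mse = np.mean((p - s) ** 2)                  # (1/NM) * sum (p_ij - s_ij)^2
    psnr = 10.0 * np.log10(p.max() ** 2 / mse)   # 10 log10(max(p)^2 / MSE)
    return mse, psnr
```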
IX. CONCLUSIONS

In this study, a chaos-based steganography method was used to obtain a meaningful image from the meaningless shares of visual cryptography. The proposed method was subjected to correlation, entropy, energy, contrast, homogeneity and PSNR tests alongside steganography using sequential LSB and steganography using the discrete logarithm. The 256×256 grayscale "Lena" image was used to hide the first share and the 256×256 grayscale "Biber" (Pepper) image to hide the second share, and the resulting stego images were tested. All three methods were programmed in Matlab 7.0. The tests were run on a portable computer with an Intel(R) Core(TM) i7 processor at 2.67 GHz and 8 GB of RAM, running Windows 7 Home Premium.

The proposed method runs slowly when generating as many random numbers as there are pixels in the image, because of the repeated iterations. On the other hand, the sensitive dependence of chaotic systems on their initial values provides high security in encryption. Since even a 0.01 change in the initial or control value completely alters the behavior of the chaotic system (Figure 4), the chance that a person or program attacking the system finds the correct initial value is very low. With the initial value and control parameter known only to the intended parties, the same logistic maps can be regenerated, and the share values, which act as the secret image, can be recovered from the stego image.
To compare the method with other methods, it was subjected to statistical tests together with sequential LSB and, from among the random-order LSB methods, discrete logarithm based steganography. The mean squared error (MSE) and PSNR values of the three stego images produced by the three steganography methods were also computed (Table 4).

Looking at the values in Table 2, the stego image produced with the sequential LSB method gives the results closest to the original image. Although this method statistically outperforms the other two, once RS steganalysis reveals that data is hidden inside, extracting the hidden image becomes very easy; random-order methods are therefore preferred. Comparing the discrete logarithm based and chaos-based steganography methods, both of which are random-order LSB methods, chaos-based steganography gives results closer to the original image in four of the five parameters, the exception being entropy. Looking at the Pepper stego image values in Table 3, chaos-based steganography (LSB using the logistic map) gives the closest results to the original image in all five parameters.
REFERENCES
[1] M. Naor and A. Shamir, "Visual cryptography," in Proceedings of the Conference on Advances in Cryptology – Eurocrypt '94, A. De Santis, Ed., Berlin, Germany, 1994, pp. 1–12.
[2] C. Blundo, S. Climato, A. De Santis, "Visual cryptography schemes with optimal pixel expansion", Theoretical Computer Science, vol. 369, 2006, pp. 169–182.
[3] N. Askari, C. Moloney, H.M. Heys, "A novel visual secret sharing scheme without image size expansion", IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), Montreal, pp. 1–4, 2012.
[4] C. Blundo, A. De Santis, D.R. Stinson, "On the contrast in visual cryptography schemes", Journal of Cryptology, vol. 12, pp. 261–289, 1999.
[5] S.J. Shyu, M.C. Chen, "Optimum pixel expansions for threshold visual secret sharing schemes", IEEE Trans. Inf. Forensics Security, 6(3):960–969, 2011.
[6] S. Cimato, A. de Santis, A.L. Ferrara, B. Masucci, "Ideal contrast visual cryptography schemes with reversing", Inf. Process. Lett., 93:199–206, 2005.
[7] K. Lee, P. Chiu, "A high contrast and capacity efficient visual cryptography scheme for the encryption of multiple secret images", Opt. Commun., 284:2730–2741, 2011.
[8] P. Marwaha and P. Marwaha, "Visual cryptographic steganography in images", Second International Conference on Computing, Communication and Networking Technologies, 2010.
[9] G. Abboud, J. Marean, R.V. Yampolskiy, "Steganography and visual cryptography in computer forensics", Fifth International Workshop on Systematic Approaches to Digital Forensic Engineering, 2010.
[10] P. Vaman, C.R. Manjunath, K. Sandeep, "Integration of steganography and visual cryptography for authenticity", ISSN 2250-2459, Volume 3, Issue 6, June 2013.
[11] R. Gupta, A. Jain, G. Singh, "Combined use of steganography and visual cryptography for secured data hiding in computer forensics", International Journal of Computer Science and Information Technologies, vol. 3(3), 2012, pp. 4366–4370.
[12] W. Bender, D. Gruhl, N. Morimoto, A. Lu, "Techniques for data hiding", IBM Syst. J., 35(3&4):313–336, 1996.
[13] D. Sellars, "An introduction to steganography", Student Papers, 1999.
[14] M.M. Amin, M. Salleh, S. Ibrahim, M.R. Katmin, "Steganography: random LSB insertion using discrete logarithm", Proceedings of the 3rd International Conference on Information Technology in Asia (CTA03), pp. 234–238, 2003.
[15] A.N. Pisarchik, N.J. Flores-Carmona, M. Carpio-Valadez, "Encryption and decryption of images with chaotic map lattices", Chaos: Interdiscipl. J. Nonlinear Sci., 2006;16(3):033118.
[16] C. Chang, M. Hwang, T. Chen, "A new encryption algorithm for image cryptosystems", J. Syst. Software, 2001;58:83–91.
[17] Z.H. Guan, F. Huang, W. Guan, "Chaos based image encryption algorithm", Phys. Lett. A, 2005;346:153–157.
[18] N.K. Pareek, V. Patidar, K.K. Sud, "Image encryption using chaotic logistic map", Image Vision Comput., 2006;24:926–934.
[19] G. Chen, Y. Mao, C.K. Chui, "A symmetric image encryption scheme based on 3D chaotic cat maps", Chaos Solitons Fract., 2004;21:749–761.
International Conference and Exhibition on Electronic, Computer and Automation Technologies (ICEECAT’14), May 9-11,2014
Konya, Turkey
Photovoltaic System Design, Feasibility and
Financial Outcomes for Different Regions in
Turkey
V.A. KAYHAN, F. ULKER, O. ELMA, B. AKIN, B. VURAL
Yıldız Technical University, Istanbul/Turkey,
{kayhanalper, fevzeddinulker}@gmail.com {onurelma, bakin, bvural}@yildiz.edu.tr
Abstract – Energy is very important for rapidly developing countries such as Turkey, at a time when fossil fuel prices are increasing. In this respect, renewable technologies such as PV systems are gradually gaining importance in the world due to their eco-friendliness, easy applicability and increasing efficiency. Today, incentives for PV systems are provided in Turkey to private sector companies and individuals who would like to benefit from generating energy with PV systems. This paper gives an overview of PV systems and examines the financial outcome of a PV system investment in Turkey. Two locations, Istanbul and Antalya, were taken into account for installing a PV plant, and the two systems were examined and compared in terms of their prices, feasibility, solar efficiency at each location, and future financial performance.
Keywords – Renewable Technologies, PV Systems, Solar
Efficiency, Feasibility, Financial Performance
I. INTRODUCTION

Energy plays a crucial role in countries' economic and socio-cultural development, which makes energy generation one of the critical topics. Due to the increasing demand for energy, governments focus not only on traditional ways of generating energy but also on renewable energy applications such as solar systems.

Growing environmental concern and decreasing renewable technology prices encourage countries to provide energy investors with incentives for new investments [1]. For instance, the Organization for Economic Co-operation and Development (OECD) countries aim to supply 25% of their energy generation from renewable sources [2], which shows how ambitious countries' renewable energy targets have become. However, investors need to be sure that these technologies yield good financial outcomes. The cost of PV technology has been declining since 1992, and it is cheaper today thanks to R&D work that has improved the efficiency of PV panels and other solar equipment [1].

In this study, firstly, PV technology is introduced with its principles. Secondly, the incentives in Turkey are summarized. Thirdly, the PV system design for this project is explained. Fourthly, the PV system is applied to two different locations: Istanbul and Antalya. For Istanbul, the Yildiz Technical University Davutpasa Campus is selected, and for Antalya, the Akdeniz University Campus. It is assumed that the PV system can be placed separately on these campuses' roofs and unused areas, as both campuses are very large. The efficiencies and system losses of the areas are discussed as well. Lastly, it is assumed that the produced electricity is sold to the grid; accordingly, the two PV systems in the two cities are analyzed in terms of their financial outcomes and performance, to give an idea of their applicability and feasibility.

II. PV SYSTEMS AND PRINCIPLES

PV technology is one of the solar technologies: it uses solar radiation to produce energy. PV panels consist of PV cells, which are combined to create a panel. Basically, PV panels directly absorb sunlight, which consists of photons. After the absorption of photons, electrons start moving due to the different electrical fields and poles on the panel surface; this movement creates a DC current within the cells.

The DC voltage must be converted to AC voltage via inverters if the produced electricity is to be used for regular AC consumption or transmitted to the national grid. If the PV system is used far from the grid (without a grid connection), the DC electricity should be stored in batteries. However, as shown in Figure 1, in this study the PV systems are designed assuming they are connected to the grid (on-grid systems), with solar panels and inverters as the main components. Moreover, the system should include an electricity meter to measure how much electricity is sold or transmitted to the grid.
Figure 1: The general schematic of on-grid PV systems
The coordination and matching of panels and inverters should be proper: for instance, the output voltage and current of the panels should suit the input voltage and current of the inverters.
Solar efficiency is important in estimating the electricity production of PV systems. It differs from location to location and can be measured through the sun hours and system losses of each location. As seen in Figure 2, the coastal and southern parts of Turkey have higher sun hours, because their climatic conditions include fewer cloudy periods and higher temperatures. Hence, solar energy can be generated efficiently in these places.
Figure 2: Solar efficiency map of Turkey [3]
In this paper, the locations are examined by considering efficiency factors. The first location is Istanbul, which has a colder climate and many cloudy days in the year. The second is Antalya, known as a sunny summer tourism city. Once the PV systems are established, the efficiency differences can be compared.
III. INCENTIVES FOR PV SYSTEMS IN TURKEY

In Turkey, the highest incentive among renewable energy applications is currently given by the government to PV systems. In 2010 it was announced that the price of energy provided by PV panels is 13.3 $cent/kWh [4]: the Turkish government buys electricity from the electricity supplier at this price. This is high compared with the wind energy incentive of 7.3 $cent per kWh produced [4].

Furthermore, if Turkish-made products are used in the PV system, the incentive increases. For instance, if the PV panel is manufactured in Turkey, 1.3 $cent/kWh is added to the 13.3 $cent/kWh, and Turkish-made mechanical equipment contributes a further 0.8 $cent/kWh. Thus an energy provider who generates electricity via PV panels and uses Turkish-brand panels and mechanical equipment can sell electricity at 15.4 $cent/kWh:

13.3 $cent + 1.3 $cent + 0.8 $cent = 15.4 $cent   (1)

It is assumed in this paper that the electricity price per kWh sold is 15.4 $cent, as locally made panels and mechanical equipment are chosen for the design.
IV. PV SYSTEM DESIGN

Firstly, the system is designed by considering on-grid system requirements. PV panels and inverters, which account for a high percentage of the total system cost, should therefore be selected carefully during the design process. These selections were made for this study by taking optimum prices and incentives into account. PV groups consisting of panels and inverters can then be created; as seen in Table 1, the numbers of panels and inverters must be found to specify the overall design.

The selected inverter input power is 17410 W and each solar panel's output power is 250 W. Thus the number of solar panels that can be connected to one inverter is found by the equation below:

17410 W / 250 W = 69.64   (2)

According to (2), one inverter can be connected to at most 69 panels. After that clarification, we need to find how many parallel branches and series-connected panels should exist in one group. To calculate this, two criteria, voltage and current, should be checked.

The "voltage criterion" specifies the number of series-connected PV panels. The input values of the inverter should be checked: the minimum MPP voltage of the inverter is 150 V and the maximum MPP voltage is 800 V, so the output voltage of the PV string should lie between 150 V and 800 V. One PV panel has a nominal voltage of 31.28 V; assuming 23 panels are connected in series, the string voltage is 23 × 31.28 = 719.44 V.

150 V ≤ 719.44 V ≤ 800 V   (3)

Because of the verification (3), 23 series-connected PV panels are suitable for this system design.
The "current criterion" specifies the number of parallel branches in the PV group. According to the selected PV and inverter catalogues, one PV panel's short-circuit current (Isc) equals 8.66 A and one inverter's input current can be up to 33 A. Assuming 3 parallel branches in one group, the total is 8.66 × 3 = 25.98 A.

25.98 A ≤ 33 A   (4)

According to (4), 3 parallel branches can be used in one group.
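A minimal sketch of these sizing checks with the datasheet values quoted above (a simplified reading of the procedure, not the authors' tool):

```python
def size_pv_group(p_inv=17410.0, p_panel=250.0, v_panel=31.28,
                  i_sc=8.66, i_in_max=33.0, v_mpp=(150.0, 800.0)):
    max_panels = int(p_inv / p_panel)        # Equation (2): at most 69 panels
    n_parallel = int(i_in_max // i_sc)       # current criterion, Eq. (4): 3
    n_series = max_panels // n_parallel      # 69 // 3 = 23 panels in series
    v_string = n_series * v_panel            # 23 * 31.28 V = 719.44 V
    assert v_mpp[0] <= v_string <= v_mpp[1]  # voltage criterion, Eq. (3)
    return n_series, n_parallel

n_series, n_parallel = size_pv_group()       # -> (23, 3), i.e. one PV group
```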
Table 1: Specifications for PV modules and inverters

PV array specification | Inverter specification
Polycrystalline type cell | Rated input power: 17410 W
Rated power: 250 W | Max. input current: 33 A
Rated voltage: 31.28 V | Min. input voltage: 150 V
Short-circuit current: 8.66 A | Max. input voltage: 800 V
V. LOCATIONS, SOLAR EFFICIENCIES AND PRICING

In this study two locations, Istanbul and Antalya, have been chosen. To clarify the sun hours and solar efficiency, the EU JRC PVGIS simulation software was utilized. Figure 4 illustrates the simulation result for the Istanbul Davutpasa Campus.
Figure 4: The estimation of Davutpasa Campus solar efficiency via EU JRC PVGIS

Fixed system: inclination = 31°, orientation = −1° (optimum)

Month | Ed | Em | Hd | Hm
Jan | 1.96 | 60.6 | 2.42 | 74.9
Feb | 2.45 | 68.5 | 3.03 | 84.9
Mar | 3.36 | 104 | 4.23 | 131
Apr | 4.09 | 123 | 5.32 | 160
May | 4.88 | 151 | 6.52 | 202
Jun | 5.03 | 151 | 6.90 | 207
Jul | 5.28 | 164 | 7.28 | 226
Aug | 4.92 | 153 | 6.83 | 212
Sep | 4.11 | 123 | 5.58 | 167
Oct | 3.04 | 94.1 | 3.98 | 123
Nov | 2.41 | 72.3 | 3.05 | 91.5
Dec | 1.92 | 59.6 | 2.38 | 73.7
Yearly average | 3.63 | 110 | 4.80 | 146
Total for year | | 1320 | | 1750

Ed: average daily electricity production from the given system (kWh)
Em: average monthly electricity production from the given system (kWh)
Hd: average daily sum of global irradiation per square meter received by the modules of the given system (kWh/m²)
Hm: average monthly sum of global irradiation per square meter received by the modules of the given system (kWh/m²)

As seen in Figure 4, the yearly average energy that can be generated daily in Davutpasa, Istanbul is 3.63 kWh per 1 kW of installed power:

DailySunHour = 3.63 kWh / 1 kW = 3.63 h   (5)

According to (5), the average daily sun hour of Istanbul is 3.63 hours; for Antalya, the sun hour is found to be 4.46 h.

In addition, the solar efficiency graphics for Istanbul and Antalya were prepared with the help of the simulation, as seen in Figure 5; solar efficiency is visibly higher in summer.

Figure 5: Solar efficiency graphic for Istanbul Davutpasa and Antalya

In terms of system loss, the simulation results and the literature findings [5, 6] correlate with each other: the average loss stated in the literature, about 25%, is close to the value of 24.3% calculated by the software for Istanbul, as seen in Figure 6. For Antalya the system loss was found to be 26%.

Figure 6: System losses in Istanbul and Antalya
As discussed before, 1311 PV panels and 19 inverters will be used for the energy sale. Assuming 100% efficiency, the projected installed PV power can be calculated:
1311 × 250 W = 327,750 W   (6)
With equation (6), the installed power of the whole PV system is 327.75 kW. However, the system loss should be considered to estimate the output power of the system at the inverters. The loss was calculated as 24.3%; thus, with equation (7), the estimated output of the system is:
327,750 W − 327,750 W × 24.3/100 = 248,107 W ≈ 248.1 kW   (7)
Therefore, the estimated yearly income from electricity sales in Istanbul can be calculated. The price of 15.4 $cent/kWh was found with equation (1); the daily sun hour of 3.63 h was calculated with equation (5); the average $/TL parity was 2.2 in March 2014 in Turkey; and a year is taken as 365 days. Using these values, equation (8) gives the yearly income from the energy sold in Istanbul:

0.154 × 2.2 × 248.107 × 3.63 × 365 ≈ 111,373.57 TL   (8)
For Antalya this value is calculated as 133,765.94 TL; the higher income results from the higher sun hours in Antalya.
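A minimal sketch of the Equations (6)–(8) chain (values as stated in the text; the function name is illustrative):

```python
def yearly_income_tl(installed_kw, loss, sun_hours,
                     price_usd=0.154, usd_to_tl=2.2, days=365):
    output_kw = installed_kw * (1.0 - loss)                       # Equation (7)
    return price_usd * usd_to_tl * output_kw * sun_hours * days   # Equation (8)

installed = 1311 * 0.250                              # Equation (6): 327.75 kW
istanbul = yearly_income_tl(installed, 0.243, 3.63)   # ~111,373 TL
antalya = yearly_income_tl(installed, 0.260, 4.46)    # ~133,766 TL
```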
VI. COST REPORT
In the previous sections the numbers of PV panels and inverters were specified. Apart from those, the system requires mechanical equipment for each PV panel, as well as fuses, connectors, solar cables and other service costs such as transportation and consultancy. These prices were obtained from Turkish companies according to Turkish standards.

The detailed cost table created for this project from the companies' data is shown as Table 2. For this calculation, the €/TL parity was taken as 3, the approximate average for March 2014.

To sum up, the turnkey price for the PV system discussed in this project is 1,346,595 TL, which is used in the financial model, as shown in Table 2.

Table 2: Cost table

Item | Unit price (€) | Quantity | Total price (€) | €/TL parity | Total price (TL)
PV panel | 170 | 1311 | 222,870.00 | 3 | 668,610.00
Inverter | 6000 | 19 | 114,000.00 | 3 | 342,000.00
Mechanical equipment/construction | 45 | 1311 | 58,995.00 | 3 | 176,985.00
Electrical equipment (fuse, switchgear, solar cable, connectors) | 15000 | 1 | 45,000.00 | 3 | 135,000.00
Transportation | 2000 | 1 | 2,000.00 | 3 | 6,000.00
Installation and consultancy | 6000 | 1 | 6,000.00 | 3 | 18,000.00
Total | | | | | 1,346,595.00
VII. FINANCIAL MODELING FOR PROJECTS
A financial model can be prepared using the inflows, outflows and necessary ratios for this project. The model is prepared for 25 years, the lifetime of a PV system, and it is assumed that the system is configured and starts working at the beginning of 2014.

In terms of inflows, the income per kWh of energy was calculated for Istanbul and Antalya in Section V; these values are used as income in the financial model. Nevertheless, this income is increased by a ratio every year, taken in this model as the current inflation rate of Turkey, 7.82%. The income from electricity sales grows each year as illustrated in Table 3, because the time value of money changes with inflation every year, and next year's income cannot equal this year's income wherever there is inflation.

As another inflow, there is always a tax saving in this kind of project, because of the high investment amount calculated in the cost report of Section VI. The tax saving comes from the depreciation of the PV products: companies have the right to pay less tax due to the high investment and the depreciation factor of energy equipment. The cost based solely on product prices (without service costs), i.e. the sum of the PV panel, inverter, mechanical equipment and electrical equipment prices, is 1,322,595 TL.

Furthermore, the depreciation period for solar products and the corporate tax rate are needed to calculate the tax saving. The corporate tax rate in Turkey is 20% and the depreciation time for solar products is 10 years [7]. Hence the tax saving for each year can be calculated as seen below:

Tax Saving = 1,322,595 × 0.2 / 10 = 20,212.20 TL   (9)

This means that for the ten years after the foundation of the PV plant, according to equation (9), the company or individual investor will pay 20,212.20 TL less tax each year.

Considering the outcomes, the major expense is the Capex ("capital expenditure") of this project, calculated as 1,346,595 TL in the previous section. Regarding maintenance costs, PV panels do not require regular maintenance in their lifetime; however, they should be cleaned regularly so as not to lose efficiency, since dust and dirt can affect the panels' efficiency negatively. This cost is neglected in this study, as it is very low compared to other costs such as the Capex.

All findings are combined in the financial model in Table 3. As a result, the payback time of the system for Istanbul is 9 years, as the cumulative value becomes positive in 2022. With the same financial model, the payback time for Antalya was calculated as 7 years; it is shorter because Antalya has more daily sun hours, even though its system loss is higher. From these findings, the Return on Investment (ROI) ratios can also be calculated for the two locations, using Equation 10 given after Table 3.
Table 2: Cost table
PV Panel
İnverter
Mechanical Equipment/Construction
Electrical Equipment (Fuse, switchgear
,solar cable, connectors)
Transportation
Installation and Consultancy Cost
Unit Price(€)
170
6000
45
Quantity
1311
19
1311
15000
2000
6000
1
1
1
Total Price(€)
222.870,00
114.000,00
58.995,00
45.000,00
2.000,00
6.000,00
19
€/TL
Parity
3
3
3
3
3
3
Total
Total Price(TL)
668.610,00
342.000,00
176.985,00
135.000,00
6.000,00
18.000,00
1.346.595,00
Table 3: Financial modeling of the Istanbul Davutpasa PV system

Year | Capex (TL) | Income from electricity sales (TL) | Inflation rate (%) | Tax saving (TL) | Cash flow (TL) | Cumulative (TL)
2014 | -1,346,595.00 | 111,373.39 | 7.82 | 20,212.20 | -1,215,009.41 | -1,215,009.41
2015 | 0 | 120,082.79 | 7.82 | 20,212.20 | 140,294.99 | -1,074,714.42
2016 | 0 | 129,473.26 | 7.82 | 20,212.20 | 149,685.46 | -925,028.96
2017 | 0 | 139,598.07 | 7.82 | 20,212.20 | 159,810.27 | -765,218.69
2018 | 0 | 150,514.64 | 7.82 | 20,212.20 | 170,726.84 | -594,491.84
2019 | 0 | 162,284.89 | 7.82 | 20,212.20 | 182,497.09 | -411,994.76
2020 | 0 | 174,975.56 | 7.82 | 20,212.20 | 195,187.76 | -216,806.99
2021 | 0 | 188,658.65 | 7.82 | 20,212.20 | 208,870.85 | -7,936.14
2022 | 0 | 203,411.76 | 7.82 | 20,212.20 | 223,623.96 | 215,687.82
2023 | 0 | 219,318.56 | 7.82 | 20,212.20 | 239,530.76 | 455,218.58
2024 | 0 | 236,469.27 | 7.82 | 0.00 | 236,469.27 | 691,687.85
2025 | 0 | 254,961.17 | 7.82 | 0.00 | 254,961.17 | 946,649.02
2026 | 0 | 274,899.13 | 7.82 | 0.00 | 274,899.13 | 1,221,548.16
2027 | 0 | 296,396.24 | 7.82 | 0.00 | 296,396.24 | 1,517,944.40
2028 | 0 | 319,574.43 | 7.82 | 0.00 | 319,574.43 | 1,837,518.83
2029 | 0 | 344,565.15 | 7.82 | 0.00 | 344,565.15 | 2,182,083.98
2030 | 0 | 371,510.15 | 7.82 | 0.00 | 371,510.15 | 2,553,594.13
2031 | 0 | 400,562.24 | 7.82 | 0.00 | 400,562.24 | 2,954,156.37
2032 | 0 | 431,886.21 | 7.82 | 0.00 | 431,886.21 | 3,386,042.57
2033 | 0 | 465,659.71 | 7.82 | 0.00 | 465,659.71 | 3,851,702.28
2034 | 0 | 502,074.30 | 7.82 | 0.00 | 502,074.30 | 4,353,776.58
2035 | 0 | 541,336.51 | 7.82 | 0.00 | 541,336.51 | 4,895,113.09
2036 | 0 | 583,669.02 | 7.82 | 0.00 | 583,669.02 | 5,478,782.11
2037 | 0 | 629,311.94 | 7.82 | 0.00 | 629,311.94 | 6,108,094.05
2038 | 0 | 678,524.13 | 7.82 | 0.00 | 678,524.13 | 6,786,618.18
Total | | | | | | 6,786,618.18
Total
ROI  Inflows  Outflows x100
Outflows
(10)
ROI = 6,786,618.18 / 1,346,595.00 × 100 = 503.98%

A 503.98% ROI means that for every 100 TL invested, there is a 503.98 TL return over the 25 years, which can be considered high. For Antalya, with equation (10), the ROI was calculated as 622.40%, higher than the Istanbul PV project's ROI.
VIII. CONCLUSION
PV systems are still debated among scientists and investors in terms of their applicability, efficiency and outcomes. This study first explained PV systems in general and then analyzed their design, pricing and feasibility. The same solar system was designed for two different locations, Istanbul and Antalya, and their financial outcomes differed.

The PV system payback time was calculated as 9 years for Istanbul and 7 years for Antalya. A 2-year difference can be considered significant for investors who would like to get an idea of the financial outcome of solar systems; this difference would change if the installed power of the system were increased or decreased.

As another criterion, the ROI was calculated as 503.98% for Istanbul Davutpasa and 622.40% for Antalya; the difference of 118.42 percentage points arises from the location and solar efficiency differences.

In conclusion, while fossil fuel and energy prices keep increasing in today's world, scientists and investors can focus on renewable technologies such as PV systems. PV system feasibility changes from location to location, and this study aimed to reveal this change for Istanbul and Antalya. Even though the perception that PV system prices are always high persists, incentives are provided by governments today and R&D work on PV technology is still proceeding; opportunities in PV investment may therefore be found, especially in areas with high solar efficiency, to obtain better financial performance.
REFERENCES
[1] G.R. Timilsina, L. Kurdgelashvili, P.A. Narbel, "Solar energy: Markets, economics and policies", Renewable and Sustainable Energy Reviews, vol. 16, pp. 449–465, January 2012. Available: http://www.sciencedirect.com/science/article/pii/S1364032111004199
[2] East Marmara Development Agency, TR42 Doğu Marmara Bölgesi Yenilenebilir Enerji Raporu, July 2011. [Online]. Available: http://www.dogumarmarabolgeplani.gov.tr/pdfs/8_CevreEnerji_38_YenilenebilirEnerjiRaporu.pdf
[3] http://www.eie.gov.tr/MyCalculator/Default.aspx [Accessed: 26 March 2014]
[4] Official Journal of the Republic of Turkey, Yenilenebilir Enerji Kaynaklarinin Elektrik Enerjisi Üretimi Amaçli Kullanimina İliskin Kanun, 2010. [Online]. Available: http://www.enerji.gov.tr/mevzuat/5346/5346_Sayili_Yenilenebilir_Enerji_Kaynaklarinin_Elektrik_Enerjisi_Uretimi_Amacli_Kullanimina_Iliskin_Kanun.pdf
[5] E. Deniz, Gunes Enerjisi Santrallerinde Kayiplar, Akademi Enerji İzmir. [Online]. Available: http://www.emo.org.tr/ekler/38f0038bf09a40b_ek.pdf
[6] E. Roman, R. Alonso, P. Ibanez, S. Elorduizapatarietxe, D. Goitia, "Intelligent PV Module for Grid-Connected PV Systems", IEEE Trans. Industrial Electronics, vol. 53, pp. 1066–1073, August 2006.
[7] Revenue Administration, Amortismana Tabi İktisadi Kıymetler. Available: http://www.gib.gov.tr/fileadmin/user_upload/Yararli_Bilgiler/amortisman_oranlari2011.html [Accessed: 4 September 2013]
International Electronic, Computer and Automation Technologies Conference and Exhibition (EKOTEK'14), May 9-11, 2014
Konya, Turkey
A New Fuzzy Logic Based Filtering Algorithm for Recommender Systems
D. KIZILKAYA1 and A.M. ACILAR1
1 Selçuk University, Konya/Turkey, daimekizilkaya@gmail.com
1 Selçuk University, Konya/Turkey, msakiroglu@selcuk.edu.tr
Abstract – The evolution of the Internet has brought us into a world that presents a huge number of information items, such as music, movies, books and web pages, of varying quality. The internet contains a vast amount of information, and this information is not filtered. In such an environment, people seeking information are overwhelmed by the alternatives they can reach via the web. Recommendation systems address the problem of getting confused about which items to choose: they filter a specific type of information with an information filtering technique that attempts to present the items most likely to interest the user. A variety of information filtering techniques have been proposed for producing recommendations, with content-based and collaborative techniques being the most commonly used approaches in recommendation systems. In this paper, we propose a new fuzzy based filtering algorithm to detect user similarities. Our valid and simplified fuzzy reasoning model for filtering is constructed using users' commonly voted items and the similarities of these votes. Through numerical experiments on the MovieLens data, compared with the conventional collaborative filtering technique, our approach is found to be promising for improving the accuracy of the collaborative filtering model.

Keywords – Collaborative filtering, Fuzzy set, Fuzzy reasoning model, Recommender system, Fuzzy filtering.
I. INTRODUCTION
In an environment where technology develops rapidly and internet use becomes widespread, storing data has also become very easy. Systems are therefore needed to prevent users from getting lost in these accumulated piles of data. These systems, used especially on shopping and entertainment sites, are generally called "Recommender Systems". A recommender system is expected to produce, in reasonable time, consistent recommendations that can answer customers' varying demands about products.

Recommender systems draw on user demographics, best-selling products, the user's past shopping habits, or the evaluations users make of products. Fundamentally, all of these techniques try to make the e-commerce site attractive to the user and thus secure customer loyalty, one of the most basic strategies of e-commerce. In short, by exploiting recommender systems, new strategies can be established, such as opening dynamic pages specific to the customer who comes to the site to shop, organizing special promotions by detecting which products are bought by whom, and presenting suggestions based on the customer's past evaluations; in this way users reach the products they want more quickly and easily, and e-commerce sites increase their sales.

However, this growth in the number of options has also made it harder for users to choose. Many recommendation techniques have been developed to overcome this problem. The most popular is the Collaborative Filtering Technique (İFT), used in many applications such as recommending films, articles, products and web pages. Its foundations were laid in 1994 by Resnick et al. [1], who presented an İFT-based recommender system for suggesting news from newsgroups to users; the Pearson correlation, frequently used to compute the similarity between two users in İFT, first appeared in the literature in that article. In 1998, Breese et al. [2] compared the correlation coefficient, vector similarity and statistical Bayesian-based methods used for İFT, measuring their ability to produce accurate predictions on the same problems. A study by Sarwar et al. [3] in 2000 stated that İFT-based recommender systems give recommendations to the customer in three stages: first, the user profile is built by looking at the products the user has voted on; second, the system constructs, using machine learning or statistical techniques, the sets of users with similar behavior called neighborhoods; and third comes the prediction and recommendation computation process.

İFT-based recommender systems mainly operate on the neighborhoods of similar users; that is, they try to obtain information about the products a customer is interested in from the opinions of other customers. Despite many successful applications, some problems still need to be resolved, chief among them sparsity and scalability. The sparsity problem is that, as the number of items in the system grows, the number of votes available decreases relatively, which makes the neighborhood computation harder. Scalability is the problem that, in systems with large databases, running times grow and performance drops because of the size of the data set.

In this study, a new fuzzy logic based filtering algorithm (BMF) is presented, aiming to remedy the scalability and sparsity problems of İFT, since İFT, which uses the Pearson correlation to compute user similarities, struggles considerably on very large databases.
Moreover, İFT needs at least 20 commonly voted films to obtain acceptable results. In contrast, with its simple fuzzy rules built from linguistic expressions, BMF reaches more accurate results more easily. The proposed method was tested on the MovieLens (ML) (www.movielens.org) data set, which is open to researchers.

Section 2 of this paper describes the basic filtering techniques used in recommender systems. Section 3 gives brief information about fuzzy systems. Section 4 describes the proposed fuzzy based filtering algorithm, and Section 5 presents the experimental studies. Finally, Section 6 interprets the results.
II. FILTERING TECHNIQUES

Content-based filtering and collaborative filtering are the two basic filtering techniques most frequently used by recommender systems. Content-based filtering techniques analyze the content of the voted information sources to build a profile of the user's preferences; this profile can then be used to rate other, previously unseen information sources, or to build a query for a search engine [4]. Collaborative filtering techniques, by contrast, require no information about the content: they detect neighborhoods by looking at the similarities of the votes users gave in the past, and produce recommendations based on them. In the remainder of this section these two filtering techniques are briefly summarized.
A. Content-Based Filtering Technique
For more than thirty years, computer scientists have been trying to solve the problem of recognizing and classifying rapidly accumulating data using software technologies. The software developed for this purpose automatically produces a description of the content of each item; these descriptions are then compared with the description of the item the user needs. The description of what the user needs is either obtained from the user through a query or learned by observing the items the user has shown interest in before. These techniques are called content-based filtering because the filtering is performed by analyzing the content of the items.

These filtering techniques are generally used when recommending documents to users according to their preferences: a recommendation is produced by analyzing the content of the items the user has voted on (articles, newsgroup messages, etc.) and the content of the item being considered for recommendation. Many algorithms have been developed that analyze a document's content to find the structure on which the recommendation will be based. Most of these algorithms are specialized versions of classification algorithms that try to learn which class (liked or disliked) a document belongs to; the remaining algorithms try to predict a numerical value (such as a vote) to be assigned to the document.

Two basic problems must be resolved when designing a content-based filtering system. The first is deciding how a document will be represented. Almost all approaches using content-based filtering systems represent a document by the "important" words occurring in it. To decide which words are important, the word weights are computed with one of the various methods in the literature, and, depending on the method used, the first n words with the highest (or lowest) weight are selected to represent the document; one such method is term frequency indexing. The second problem is building a model that makes it possible to recommend documents not yet seen. Once it has been determined which words will represent a document, documents can be grouped using a classification algorithm such as decision trees or a Bayes classifier.

B. Collaborative Filtering Technique
Collaborative filtering techniques (İFT) are among the techniques most widely used by recommender systems. İFT helps us, with the aid of a computer, give recommendations to people we do not know at all. In other words, İFT systems let the computer and the human each do what they do well, offering a way of working together: users are good at reading and evaluating items, while the computer is good at performing the computations needed to find the similarities between these evaluations. The evaluation users make for an item is called a "vote". Voting is usually done on a fixed scale such as 1-5 or 1-7 and indicates whether the user's opinion of the item is good or bad. The computer's role is to predict in advance the vote a user would give to an item they have not yet seen. The first step in computing the prediction is identifying the groups of users who vote on items in a similar way. Such a group of users is called a "neighborhood", and a user's prediction for an item is computed by looking at the votes the neighbors gave that item. The underlying idea is that if a user agreed with their neighbors in the past, the probability of agreeing in the future is also high.
Figure 1: The collaborative filtering process

Figure 1 shows the schematic structure of the İFT process. As seen, an m×n user-item matrix containing the vote information is taken as input. Each element a_{i,j} of the matrix denotes the vote user i gave to item j. The İFT algorithm is applied to this matrix, and predictions and recommendations are obtained as output.

When producing a recommendation for a user for a particular item, the first operation is to build the user neighborhoods. The "Pearson correlation" is one of the most frequently used methods for computing the similarity between the users and the active user.
The Pearson correlation was first used by Resnick et al. in the GroupLens project in 1994 [1]. The higher the value obtained from the Pearson correlation, the more similar the users are. Mathematically, the Pearson correlation is expressed as in Equation 1:
Similarity(a, k) = Σ_{i=1}^{n} (a_i − ā)(k_i − k̄) / √( Σ_{i=1}^{n} (a_i − ā)² · Σ_{i=1}^{n} (k_i − k̄)² )   (1)
where:
a_i = the vote the active user gave to film i
k_i = the vote user k gave to film i
n = the number of films voted on by both users a and k
ā, k̄ = the averages of all votes given by users a and k, respectively

The average of all votes given by a user is computed with the formula in Equation 2, where I_a is the set of films voted on by user a:

ā = (1 / |I_a|) Σ_{j∈I_a} a_j   (2)
Some exceptional cases arise in the Pearson correlation computation. The first is that the two users have no commonly voted films (n = 0); in this case Similarity(a,k) is taken to be zero. The second is that the denominator evaluates to zero; since this would cause a division-by-zero error, Similarity(a,k) is assigned the value zero. Another case is that the number of commonly voted films is lower than the desired number p. In a situation where n < p, setting Similarity(a,k) directly to zero would be wrong, since there are commonly voted films, however few; in this case the similarity can be computed as Similarity(a,k) = (n/p) · Similarity(a,k).
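A minimal sketch of this similarity computation, including the three exceptional cases above (the dictionary layout of the votes and the default threshold p are assumptions for illustration):

```python
import math

def pearson_similarity(votes_a, votes_k, p=20):
    """Equation 1 with the exception handling described above.
    votes_a, votes_k: dicts mapping film id -> vote (1-5)."""
    common = set(votes_a) & set(votes_k)
    n = len(common)
    if n == 0:                                     # no commonly voted film
        return 0.0
    mean_a = sum(votes_a.values()) / len(votes_a)  # Equation 2
    mean_k = sum(votes_k.values()) / len(votes_k)
    num = sum((votes_a[i] - mean_a) * (votes_k[i] - mean_k) for i in common)
    den = math.sqrt(sum((votes_a[i] - mean_a) ** 2 for i in common)
                    * sum((votes_k[i] - mean_k) ** 2 for i in common))
    if den == 0.0:                                 # avoid division by zero
        return 0.0
    sim = num / den
    return (n / p) * sim if n < p else sim         # damp sparse overlaps
```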
III. FUZZY LOGIC

Any situation whose outcome cannot be known exactly, which is not perceived the same way by every person, which contains subjective data, or which can only be expressed abstractly, is called uncertainty; human life is largely full of uncertainties. Fuzziness has been defined scientifically as uncertainty, and fuzzy logic was developed to express these uncertainties. In classical logic something is either true or false: the logic is binary. In fuzzy logic, by contrast, there are many states between true and false. In a world of increasing uncertainty and complexity, people developed computers to cope with this uncertainty and complexity, but this alone did not remove it. In 1965, Prof. Lotfi A. Zadeh, of Azerbaijani origin, developed fuzzy sets to express uncertainty [5]. Fuzzy logic has since evolved and found use in many fields. In fuzzy logic, not only black and white but also the gray tones between these two colors are taken into account, which closely matches the human way of thinking.

A. Fuzzy Systems
Systems that exploit the principles of fuzzy logic, a mathematical discipline based on fuzzy set theory, when modeling components and the relations between them are called fuzzy systems. Designing a fuzzy system corresponds to developing, on a digital platform and with flexible methods, a system that provides a fuzzy-logic inference and decision-making process. A fuzzy system is fundamentally the combination of four units: a fuzzifier, a knowledge base, a fuzzy inference unit and a defuzzifier. The relations of these units to one another are as shown in Figure 2.

Figure 2: General structure of a fuzzy system

Let us briefly describe the units that make up the general structure of a fuzzy system.

1) Fuzzifier Unit: The first step of a fuzzy system. The crisp input data taken into the system are converted into linguistic values, i.e. fuzzified, using one of the membership functions (or another membership function). As a result, a membership degree corresponding to each input datum is obtained.

2) Knowledge Base: The area where the parameters related to the membership functions and the fuzzy rules are stored; it is in continuous communication with the fuzzy inference unit. The knowledge base has two basic components: the data base and the rule base. The data base holds the data about the number and values of the membership functions. The rule base holds the antecedents, consequents and weights (certainty factors) of the fuzzy rules, defined in "If - Then" form, that determine the operation of the fuzzy system.

3) Fuzzy Inference Unit: This unit, the core of the fuzzy logic system, is where fuzzy inferences are made by imitating the human ability to decide and infer. The operation performed is the evaluation (implication) of the fuzzy rules using the membership functions, followed by the composition (aggregation) of the results obtained. Two inference methods are frequently used: the Mamdani [6] and Takagi-Sugeno-Kang (TSK) [7] methods. Fuzzy systems can be named after the inference method they use, e.g. a Mamdani-type fuzzy system. The fuzzy logic based filtering technique proposed in this study is a Mamdani-type fuzzy system. Figure 3 gives an example of a Mamdani-type fuzzy inference system.
Figure 3: A Mamdani fuzzy inference system
4) Defuzzification: The operations performed to convert the fuzzy data coming from the fuzzy inference mechanism into crisp results are called defuzzification. The defuzzification unit derives, from the fuzzy information coming from the inference unit, the non-fuzzy, real values to be used in the application.
IV. A NEW FUZZY LOGIC BASED FILTERING ALGORITHM
In the traditional collaborative filtering technique, the Pearson correlation given in Equation 1 is used to determine user similarities. Computing this mathematical expression becomes harder as the system grows. In this study, therefore, a new fuzzy logic based method with lower computational cost is proposed for computing user similarities. The proposed method is in fact a fuzzy rule based system, so the fuzzy variables and rules were determined first. The system has two inputs and one output. The first input is the number of films commonly voted by the two users (X1); the other is the similarity of the votes these two users gave to the films (X2), computed using the Euclidean distance given in Equation 3. The output is the similarity of the users (Y).
The graphs of the membership functions of the proposed fuzzy system are given in Figure 4. The membership functions in the second graph are ordered "high, medium, low" because the value coming from the Euclidean distance is inversely proportional to the notion of similarity: the farther apart two objects are, the less similar they are.

Figure 4: Membership functions of the proposed fuzzy system
Let us explain the matter with an example, finding the similarity between the first and second users with the proposed system. The number of films these two users voted on in common is 6, and the similarity ratio of their votes was found to be 0.4; the vote similarity was computed using the Euclidean distance formula given in Equation 3. Substituting these values into the membership functions given in Figure 4, the fuzzy values were determined. The fuzzy values obtained at the output were then defuzzified using the weighted defuzzification formula given in Equation 4.
d(p, q) = √( Σ_{i=1}^{n} (p_i − q_i)² )   (3)

where n is the number of commonly voted films, p_i is the vote user p gave to film i, and q_i is the vote user q gave to film i.
The fuzzy rules of the proposed system are given below.
Rule 1: If the number of films commonly voted by users a and k is high and the similarity of the votes is high, then their similarity ratio is high.
Rule 2: If the number of films commonly voted by users a and k is medium and the similarity of the votes is medium, then their similarity ratio is medium.
Rule 3: If the number of films commonly voted by users a and k is low and the similarity of the votes is high, then their similarity ratio is low.
y* = ( Σ_{i=1}^{n} µ(y_i) · y_i ) / ( Σ_{i=1}^{n} µ(y_i) )   (4)
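A minimal sketch of this rule base with the weighted-average defuzzification of Equation 4 (the triangular membership shapes and their breakpoints are assumptions; the paper does not specify Figure 4 numerically):

```python
def tri(x, a, b, c):
    """Triangular membership function (assumed shape)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def bmf_similarity(n_common, vote_distance, n_max=20.0, d_max=5.0):
    """Rules 1-3 with min as the AND operator, Equation 4 as defuzzifier."""
    # Input X1: number of commonly voted films -> low / medium / high
    lo_n = tri(n_common, -n_max / 2, 0.0, n_max / 2)
    md_n = tri(n_common, 0.0, n_max / 2, n_max)
    hi_n = tri(n_common, n_max / 2, n_max, 3 * n_max / 2)
    # Input X2: Euclidean vote distance; a small distance = high similarity
    hi_v = tri(vote_distance, -d_max / 2, 0.0, d_max / 2)
    md_v = tri(vote_distance, 0.0, d_max / 2, d_max)
    # (firing strength, output center) for Rules 1-3
    rules = [(min(hi_n, hi_v), 1.0),   # Rule 1 -> similarity high
             (min(md_n, md_v), 0.5),   # Rule 2 -> similarity medium
             (min(lo_n, hi_v), 0.0)]   # Rule 3 -> similarity low
    den = sum(w for w, _ in rules)
    return sum(w * y for w, y in rules) / den if den else 0.0   # Equation 4
```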
After the users' similarities have been found, it is time to produce recommendations. The procedure followed in this study is as follows. First, the neighbors of the active user (user a) are determined: all users whose similarity is above a certain threshold are designated as neighbors. Then the films watched by the neighbors but not by user a are identified. Among these, the films to which the neighbors gave a vote of 4 or 5 are returned to user a as recommendations.
V. EXPERIMENTAL STUDY
This study uses the 100K MovieLens data set containing 100,000 votes. MovieLens is a data set, open to researchers, created for recommender systems at the Department of Computer Science and Engineering of the University of Minnesota and collected since 1997 by the GroupLens researchers at www.movielens.org. Thousands of people still visit this site, vote on films and receive recommendations.

In this data set, 1682 films were voted on by 943 different users. In general, each user voted on at least twenty films, with votes between 1 and 5; a film that was never voted on is assigned the value 0. Certain information about the users is also kept in the data set.

In this study, 80% of the data set is used for training and 20% for testing. The training set was used to determine the neighborhoods and produce recommendations, and the produced recommendations were compared with the films in the test set.

In the experimental study, 10 users were selected at random from the training set and their neighborhoods were determined. To compute the similarity needed for the neighborhood determination, the Pearson correlation was used for İFT, while the fuzzy system proposed in this study was used for BMF. All users with similarity above a certain threshold were then assigned to user a as neighbors, and the films watched by the neighbors but not by user a were identified. Among these films, those to which the neighbors gave votes of 4 or 5 were returned to user a as recommendations. Finally, the quality of these recommendations was measured: the films in the test set to which user a gave a vote of 4 or 5 were compared with the films recommended to the user, and the results are given in Table 1.
Table 1: Recommendation accuracies of the methods

User | İFT-Pearson | BMF
1 | 121 of 136 films matched: 88% | 133 of 136 films matched: 97%
15 | 38 of 44 matched: 86% | 42 of 44 matched: 95%
23 | 39 of 63 matched: 63% | 63 of 63 matched: 100%
45 | 17 of 19 matched: 89% | 18 of 19 matched: 94%
50 | 9 of 11 matched: 81% | 9 of 11 matched: 98%
65 | 29 of 32 matched: 90% | 32 of 32 matched: 100%
100 | 20 of 25 matched: 80% | 24 of 25 matched: 96%
156 | 18 of 19 matched: 96% | 19 of 19 matched: 100%
277 | 25 of 26 matched: 96% | 26 of 26 matched: 100%
356 | 8 of 9 matched: 96% | 9 of 9 matched: 100%
Average success: 86.5% | Average success: 98%

The results given in Table 1 are also shown in the graph in Figure 5.

Figure 5: Accuracy percentages of the recommendations given to the users (İFT vs. BMF)

As can be seen from the table and the graph, the BMF algorithm, which is less affected by the scalability and sparsity problems, produced more accurate recommendations than İFT using the Pearson correlation. In addition, the hypothesis "H0: There is no statistically significant difference, at the 95% significance level, between İFT-Pearson and the BMF algorithm proposed in this study for determining the active user's neighborhoods", set up for these 10 users, was tested using the Wilcoxon matched-pairs statistical test. The p value obtained from the test is 0.002. As is known, if the p value is smaller than 0.05 at the 95% significance level, the proposed hypothesis is rejected; since the p value found is smaller than 0.05, the hypothesis is rejected. Finally, looking at the data in Table 1, it can be said that the proposed method is more successful.
VI. CONCLUSIONS

In this study, the aim was to remedy the scalability and sparsity problems of recommender systems with a new fuzzy logic based filtering algorithm. Whereas the Pearson correlation needs at least 20 commonly voted films for acceptable results, BMF managed to produce more accurate results using 10 commonly voted films. Experimental studies were carried out on ten randomly selected users, and a statistically significant difference between the two methods was shown. Examining the results, the average success of the İFT method using the Pearson correlation is 86.5%, while the success of the proposed BMF method is 98%. In conclusion, the proposed method was shown to make more accurate recommendations.
V. DENEYSEL ÇALIŞMA
BMF
136 filmden 133 i eşleşti:%97
44 filmden 42 si eşleşti:%95
63 filmden 63 ü eşleşti: %100
19 filmden 18 i eşleşti:%94
11 filmden 9 u eşleşti:%98
32 filmden 32 si eşleşti:%100
25 filmden 24 ü eşleşti: %96
19 filmden 19 u eşleşti:%100
26 filmden 26 sı eşleşti:%100
9 filmden 9 u eşleşti:%100
Ortalama Başarısı : %98
REFERENCES
[1] Resnick P., Lacovou N., Suchak M., Bergstrom P., Riedl J., 1994. GroupLens: An Open Architecture for Collaborative Filtering of Netnews. In Proceedings of CSCW '94.
[2] Breese J.S., Heckerman D., Kadie C., 1998. Empirical analysis of predictive algorithms for collaborative filtering. Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence, 43-52.
[3] Sarwar B.M., Karypis G., Konstan J.A., Riedl J.T., 2000. Application of Dimensionality Reduction in Recommender System - A Case Study.
[4] Pazzani M.J., 1999. A Framework for Collaborative, Content-Based and Demographic Filtering. Artificial Intelligence Review, 13(5-6), 393-408.
[5] Zadeh L.A., 1965. Fuzzy sets. Information and Control, 8, 338-353.
[6] Mamdani E.H., 1974. Application of fuzzy algorithms for control of simple dynamic plant. Proc. Inst. Elec. Eng., 121, 1585-1588.
[7] Takagi T. and Sugeno M., 1985. Fuzzy identification of systems and its applications to modeling and control. IEEE Transactions on Systems, Man and Cybernetics, 15(1), 116-132.
Semantic Place Recognition Based on Unsupervised
Deep Learning of Spatial Sparse Features
A. HASASNEH1, E. FRENOUX2,3 and P. TARROUX3,4
1 Hebron University, Hebron/Palestine, ahasasneh@hebron.edu
2 Paris-Sud University, Orsay/France, emmanuelle.frenoux@limsi.fr
3 LIMSI-CNRS, Orsay/France, {emmanuelle.frenoux,philippe.tarroux}@limsi.fr
4 Ecole Normale Superieure, Paris/France, philippe.tarroux@limsi.fr
Abstract – Recently, the sparse coding based on unsupervised
learning has been widely used for image classification. The sparse
representation is assumed to be linearly separable, and therefore
a simple classifier, like softmax regression, is suitable to perform
the classification process. To investigate that, this paper presents
a novel approach for semantic place recognition (SPR) based on
Restricted Boltzmann Machines (RBMs) and a direct use of tiny
images. These methods are able to produce an efficient local
sparse representation of the initial data in the feature space.
However, data whitening or at least local normalization is a
prerequisite for these approaches. In this article, we empirically
show that data whitening forces RBMs to extract smaller
structures while data normalization forces them to learn larger
structures that cover large spatial frequencies. We further show
that the latter are more promising for achieving state-of-the-art performance on a SPR task.
Keywords – Image Classification, Semantic Place Recognition, Restricted Boltzmann Machines, Softmax Regression, Sparse Coding.

I. INTRODUCTION

It is indeed required for an autonomous service robot to be able to recognize the environment in which it lives and to easily learn the organization of this environment in order to operate and interact successfully. To achieve that goal, different solutions have been proposed, some based on metric localization, and some others based on topological localization. However, in these approaches, the place information is different from the information used for the determination of the semantic categories of places. Thus, the ability for a mobile robot to determine the nature of its environment (kitchen, room, corridor, etc.) remains a challenging task. The knowledge of its metric coordinates or even the neighborhood information that can be encoded into topological maps is indeed not sufficient. SPR is however required for a large set of tasks. It can be used as contextual information which fosters object detection and recognition when it is achieved without any reference to the objects present in the scene. Moreover, it is able to build an absolute reference to the robot location, providing a simple solution for problems where the localization cannot be deduced from neighboring locations, such as in the kidnapped robot or the loop closure problems.

II. RELATED WORK

Although most of the proposed approaches to the problem of robot localization have given rise to Simultaneous Localization and Mapping (SLAM) techniques [1], significant recent works have addressed this problem based on visual descriptors. In particular, these descriptors are either based on global image features using global detectors, like GiST and CENTRIST [2, 3], or on local signatures computed around interest points using local detectors, like SIFT and SURF [4, 5]. However, these representations first need to use Bag-of-Words (BoW) methods, which consider only a set of interest points in the image, to reduce their size, followed by vector quantization such that the image is eventually represented as a histogram. Discriminative approaches can be used to compute the probability of being in a given place according to the current observation. Generative approaches can also be used to compute the likelihood of an observation given a certain place within the framework of Bayesian filtering. Among these approaches, some works [6] omit the quantization step and model the likelihood as a Gaussian Mixture Model (GMM). Recent approaches also propose to use naive Bayes classifiers and temporal integration that combines successive observations [7].

SPR therefore requires the use of an appropriate feature space that allows an accurate and rapid classification. Contrarily to these empirical methods, new machine learning methods have recently emerged which are strongly related to the way natural systems code images [8]. These methods are based on the consideration that natural image statistics are not Gaussian, as they would be if images had a completely random structure [9]. The auto-similar structure of natural images allowed evolution to build optimal codes. These codes are made of statistically independent features, and many different methods have been proposed to construct them from image datasets. Imposing locality and sparsity constraints on these features is very important. This is probably due to the fact that simple algorithms based on such constraints can achieve linear signatures similar to the notion of receptive field in natural systems. Recent years have seen a growing interest in computer vision algorithms that rely on local sparse image representations, especially for the problems of image classification and object recognition [10-12]. Moreover, from a generative point of view, the effectiveness of local sparse coding, for instance for image reconstruction [13], is justified by the fact that a natural image can be reconstructed by the smallest possible number of features. However, while a sparse representation has been assumed to be linearly separable in several works [12, 16], which simplifies the overall classification problem, the question of whether smaller or larger sparse features are more appropriate for SPR remains open. So, this paper investigates the effect of data normalization on the detection of features and on SPR performance.

It has been shown that Independent Component Analysis (ICA) produces localized features. Besides, it is efficient for distributions with high kurtosis, well representative of natural image statistics dominated by rare events like contours; however, the method is linear and not recursive. These two limitations are released by DBNs [14], which introduce non-linearities in the coding scheme and exhibit multiple layers. Each layer is made of a RBM, a simplified version of a Boltzmann machine proposed by [15]. Each RBM is able to build a generative statistical model of its inputs using a relatively fast learning algorithm, Contrastive Divergence (CD), first introduced by [15]. Another important characteristic of the codes used in natural systems, the sparsity of the representation [8], is also achieved in DBNs.

III. MODEL DESCRIPTION

A. Image Preprocessing

The typical input dimension for a DBN is approximately 1000 units (e.g. 30x30 pixels). Dealing with smaller patches could make the model unable to extract interesting features. Using larger patches can be extremely time-consuming during feature learning. Three solutions can be envisioned to address this problem: first, selecting random patches from each image [17]; second, using convolutional architectures [18]; third, reducing the size of each image to a tiny image [19]. The first solution extracts local features, and the characterization of an image using these features can only be made using the BoW approaches we wanted to avoid. The second solution shows the same limitations as the first one and additionally gives rise to extensive computations that are only tractable on Graphics Processing Unit architectures. However, tiny images have been successfully used for classifying and retrieving images from the 80-million image database developed at MIT [19]. The authors showed that the use of tiny images coupled with a DBN approach leads to coding each image by a small binary vector defining the elements of a feature alphabet that can be used to optimally define the considered image. The binary vector acts as a bar-code, while the alphabet of features is computed only once from a representative set of images. The power of this approach is well illustrated by the fact that the number of codes expressible by a relatively small binary vector (like the ones we use as the output of our DBN structure) largely exceeds the number of images that have to be coded, even in a huge database. So, for these reasons, we have chosen image reduction.

On the other hand, natural images are highly structured and contain significant statistical redundancies, i.e. their pixels have strong correlations [20]. Natural images bear considerable regularities in their first and second order statistics (spatial correlations), which can be measured using the autocorrelation function or the Fourier power spectral density [21]. These correlations are due to the redundant nature of natural images (adjacent pixels usually have strong correlations except around edges). The presence of these correlations allows, for instance, image reconstruction using Markov Random Fields. It has also been shown that edges are the main characteristics of natural images and that they are rather coded by higher order statistical dependencies [21]. It can be deduced from this observation that the statistics of natural images are not Gaussian. These statistics are dominated by rare events like contours, leading to high-kurtosis, long-tailed distributions.

Pre-processing the initial images to remove these expected order-two correlations is known as whitening. It has been shown that whitening is a useful pre-processing strategy in ICA [22]. It seems also a mandatory step for the use of clustering methods in object recognition [23]. Whitening being a linear process, it does not remove the higher order statistics or regularities present in the data. The theoretical grounding of whitening is simple: after centering, the data vectors are projected onto their principal axes (computed as the eigenvectors of the variance-covariance matrix) and then divided by the variance along these axes. In this way, the data cloud is sphericized, letting appear only the usually non-orthogonal axes corresponding to its higher-order statistical dependencies.

Another way to pre-process the original data is to perform local normalization. In this case, each patch is normalized by subtracting the mean and dividing by the standard deviation of its elements. For visual data, this corresponds to local brightness and contrast normalization. One can find in [23] a study of whitening and local normalization and their influence on an object recognition task.
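As an illustration of these two pre-processing schemes, the following is a minimal NumPy sketch, under the assumption that the patches are stored as rows of a matrix; the epsilon term and the function names are illustrative, not taken from the paper.

    import numpy as np

    def whiten(X, eps=1e-5):
        """PCA whitening: center, project onto the principal axes,
        rescale each axis to unit variance (a minimal sketch)."""
        Xc = X - X.mean(axis=0)                 # center each dimension
        cov = np.cov(Xc, rowvar=False)          # variance-covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)  # principal axes
        return Xc @ eigvecs / np.sqrt(eigvals + eps)

    def local_normalize(X, eps=1e-5):
        """Per-patch brightness/contrast normalization."""
        mu = X.mean(axis=1, keepdims=True)      # mean of each patch
        sd = X.std(axis=1, keepdims=True)       # std of each patch
        return (X - mu) / (sd + eps)

    # Example: 256 random "patches" of 768 pixels (32x24 tiny images).
    patches = np.random.rand(256, 768)
    Xw, Xn = whiten(patches), local_normalize(patches)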
B. Gaussian-Bernoulli RBMs

Unlike a classical Boltzmann Machine, a RBM is a bipartite undirected graphical model $\theta = \{w_{ij}, b_i, c_j\}$ linking, through a set of weights $w_{ij}$ between visible and hidden units and biases $\{b_i, c_j\}$, a set of visible units $v$ to a set of hidden units $h$. For a standard RBM, a joint configuration of the binary visible units and the binary hidden units has an energy function given by:

$$E(v,h;\theta) = -\sum_{i \in v}\sum_{j \in h} v_i h_j w_{ij} - \sum_{i \in v} b_i v_i - \sum_{j \in h} c_j h_j. \quad (1)$$

The probability of the state for a unit in one layer conditional to the state of the other layer can therefore be easily computed. According to the Gibbs distribution:

$$P(v,h;\theta) = \frac{1}{Z(\theta)} \exp\left(-E(v,h;\theta)\right). \quad (2)$$

where $Z(\theta)$ is a normalizing constant. After marginalization, the probability of a particular hidden state configuration $h$ can be derived as follows:

$$P(h;\theta) = \sum_v P(v,h;\theta) = \frac{\sum_v e^{-E(v,h;\theta)}}{\sum_{v,h} e^{-E(v,h;\theta)}}. \quad (3)$$

It can be derived [24] that the conditional probabilities of a standard RBM are given as follows:

$$P(h_j = 1 \mid v;\theta) = \sigma\Big(c_j + \sum_i w_{ij} v_i\Big). \quad (4)$$

$$P(v_i = 1 \mid h;\theta) = \sigma\Big(b_i + \sum_j w_{ij} h_j\Big). \quad (5)$$

where $\sigma(x) = 1/(1+e^{-x})$ is the logistic function.

Since binary units are not appropriate for multivalued inputs like pixel levels, as suggested by Hinton [25], in the present work visible units have a zero-mean Gaussian activation scheme:

$$P(v_i \mid h;\theta) = \mathcal{N}\Big(b_i + \sum_j w_{ij} h_j,\ \sigma^2\Big). \quad (6)$$

where $\sigma^2$ denotes the variance of the noise. In this case the energy function of the Gaussian-Bernoulli RBM is given by:

$$E(v,h;\theta) = \sum_{i \in v} \frac{(v_i - b_i)^2}{2\sigma_i^2} - \sum_{j \in h} c_j h_j - \sum_{i,j} \frac{v_i}{\sigma_i} h_j w_{ij}. \quad (7)$$

C. Training RBMs with a Sparsity Constraint

To learn RBM parameters, it is possible to maximize the log-likelihood in a gradient ascent procedure. Therefore, the derivative of the log-likelihood of the model over a training set $D$ is given by:

$$\frac{\partial L(\theta)}{\partial \theta} = \left\langle \frac{\partial E(v,\theta)}{\partial \theta} \right\rangle_M - \left\langle \frac{\partial E(v,\theta)}{\partial \theta} \right\rangle_D. \quad (8)$$

where the first term represents an average with respect to the model distribution and the second one an expectation over the data. Although the second term is straightforward to compute, the first one is often intractable. This is due to the fact that computing the likelihood requires computing the partition function $Z(\theta)$, which is usually intractable. However, Hinton [15] proposed a quick learning procedure called CD. This learning algorithm is based on the consideration that minimizing the energy of the network is equivalent to minimizing the distance between the data and a statistical generative model of it. A comparison is made between the statistics of the data and the statistics of its representation generated by Gibbs sampling. It has been shown that a few steps of Gibbs sampling (most of the time reduced to one) are sufficient to ensure the convergence. For a RBM, the weights of the network can be updated using the following equation:

$$w_{ij} \leftarrow w_{ij} + \eta \left( \langle v_i^0 h_j^0 \rangle_{data} - \langle v_i^n h_j^n \rangle_{recon} \right). \quad (9)$$

where $\eta$ is the learning rate, $v^0$ corresponds to the initial data distribution, $h^0$ is computed using equation 4, $v^n$ is sampled using the Gaussian distribution in equation 6 with $n$ full steps of Gibbs sampling, and $h^n$ is again computed from equation 4.

Concerning the sparsity constraint in RBMs, we follow the same approach developed in [26]. This method introduces a regularizer term that makes the average hidden variable activation low over the entire set of training examples. Thus, the activation of the model neurons also becomes sparse. As illustrated in [26], given a training set $\{v^{(1)},\dots,v^{(m)}\}$ of $m$ examples, we pose the following optimization problem:

$$\underset{\{w_{ij}, b_i, c_j\}}{\text{minimize}}\ -\frac{1}{m}\sum_{l=1}^{m} \log \sum_h P\big(v^{(l)}, h^{(l)}\big) + \lambda \sum_{j=1}^{n} \Big| p - \frac{1}{m}\sum_{l=1}^{m} \mathbb{E}\big[h_j^{(l)} \mid v^{(l)}\big] \Big|^2. \quad (10)$$

where $\mathbb{E}[\cdot]$ is the conditional expectation given the data, $p$ is the sparsity target controlling the sparseness of the hidden units $h_j$, and $\lambda$ is the sparsity cost. Thus, after involving this regularization in the CD learning algorithm, the gradient of the sparsity regularization term over the parameters $w_{ij}$ in equation 9 can be rewritten as follows:

$$w_{ij} \leftarrow w_{ij} + \eta \left( \langle v_i^0 h_j^0 \rangle_{data} - \langle v_i^n h_j^n \rangle_{recon} \right) + \lambda \Big( p - \frac{1}{m} \sum_{l=1}^{m} p_j^{(l)} \Big). \quad (11)$$

where $m$, in this case, represents the size of the mini-batch and $p_j^{(l)} = \sigma\big(\sum_i v_i^{(l)} w_{ij} + c_j\big)$.

D. Layerwise Training for DBNs

A DBN is a stack of RBMs trained in a greedy, layerwise, bottom-up fashion introduced by [14]. The model parameters at layer $i-1$ are frozen and the conditional probabilities of the hidden units are used to generate the data to train the model parameters at layer $i$. This process can be repeated across the layers to obtain sparse representations of the initial data that will be used as the final output for the classification process.

IV. COLD DATABASE DESCRIPTION

The COLD database (COsy Localization Database) was originally developed by [27] for the purpose of robot localization. It contains 137,069 labeled 640x480 images acquired at 5 frames/sec during the robot exploration of three different laboratories (Freiburg, Ljubljana, and Saarbruecken). Two sets of paths (standard A and B) have been acquired under different illumination conditions (sunny, cloudy and night), and for each condition, one path consists in visiting the different rooms (corridors, printer areas, etc.). These walks across the laboratories are repeated several times. Although color images have been recorded during the exploration, only gray images are used, since previous works have shown that in the case of the COLD database colors are weakly informative and make the system more illumination dependent [27].

Figure 1: Samples from the COLD database. The corresponding tiny images are displayed bottom right. One can see that, despite the size reduction, these small images remain fully recognizable.

As proposed by [19], the image size is reduced to 32x24 (see figure 1). The final set of tiny images is centered, whitened, and normalized to create two databases called whitened-tiny-COLD and normalized-tiny-COLD. Consequently, the variance in equation 6 is set to 1. Contrarily to [19], these preprocessed tiny images are used directly as the input vector of the network.
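To make the training procedure of Section III concrete, here is a minimal NumPy sketch of one CD-1 update with the sparsity regularizer of equations (9) and (11); the shapes, learning rate, and function name are illustrative assumptions, not the authors' code.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_sparse_update(v0, W, b, c, eta=0.002, lam=0.02, p=0.02):
        """One CD-1 step for a Gaussian-Bernoulli RBM with the
        sparsity regularizer of Eq. (11). v0: (m, n_vis) mini-batch."""
        m = v0.shape[0]
        # Positive phase: hidden probabilities given the data, Eq. (4).
        h0 = sigmoid(v0 @ W + c)
        # Reconstruction: Gaussian visibles with unit variance, Eq. (6).
        vn = b + np.random.binomial(1, h0) @ W.T + np.random.randn(*v0.shape)
        hn = sigmoid(vn @ W + c)
        # Eq. (9) gradient plus the sparsity term of Eq. (11).
        W += eta * (v0.T @ h0 - vn.T @ hn) / m + lam * (p - h0.mean(axis=0))
        b += eta * (v0 - vn).mean(axis=0)
        c += eta * (h0 - hn).mean(axis=0) + lam * (p - h0.mean(axis=0))
        return W, b, c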
Figure 2: First column: filter samples obtained by training a first RBM layer on the whitened-tiny-COLD database. Second column: filter samples obtained by training a first RBM layer on the normalized-tiny-COLD database. Third column: the log-log representation of the mean Fourier power spectrum for 256 patches sampled from the initial, whitened, and normalized databases respectively.

V. EXPERIMENTAL RESULTS

A. Effect of Normalization on the Feature Space

Preliminary trials have shown that the optimal structure of the DBN in terms of final classification score is 768-256-128. The training protocol is similar to the ones proposed in [26, 28] (300 epochs, a mini-batch size of 100, a learning rate of 0.002, a weight decay of 0.0002, momentum, a sparsity target of 0.02, and a sparsity cost of 0.02). The features shown in figure 2 (1st column) have been extracted by training the first RBM layer on the whitened database. Some of them represent parts of the corridor, which is over-represented in the database and corresponds to long sequences of quite similar images during the robot exploration. Some others are localized and correspond to small parts of the initial views, like edges and corners, that can be identified as room elements. The features shown in figure 2 (2nd column) have been obtained using the normalized data. They look very different from those obtained from the whitened data. Parts of rooms are much more represented and the range of spatial frequencies covered by the features is much broader. However, in both cases, the combinations of these initial features in higher layers correspond to larger structures more characteristic of the different rooms.

It is obvious that the features extracted from the whitened data are more localized. This underlines that data whitening clearly changes the characteristics of the learned bases. One explanation could be that the second order correlations are linked to the presence of low frequencies in the images. If the whitening algorithm removes these correlations in the original dataset, it leads to whitened data covering only high spatial frequencies. The RBM algorithm in this case finds only high frequency features. However, the features learned from the normalized data remain sparse but cover a broader spectrum of spatial frequencies. These differences between normalized and whitened data have already been observed in [24] and related to better performances for the normalized data on CIFAR-10 in an object recognition task.

To better understand why features obtained from whitened and normalized data are different, we computed the mean Fourier spectral density for both cases and compared them to the same function for the original data. We plotted the mean of the log Fourier power spectral density of all patches against the log of the frequencies, as shown in figure 2 (3rd column). The scale law in $1/f^\alpha$ characteristic of natural images is approximately verified, as expected, for the initial patches. For the local normalization it is also conserved (the shift between the two curves is only due to a multiplicative difference in the signal amplitude between the original and the locally normalized patches). It means that the frequency composition of the locally normalized images differs from the initial one only by a constant factor. The relative frequency composition is the same as in the initial images. On the contrary, whitening completely abolishes this dependency of the signal energy on frequency. This means that whitening equalizes the role of each frequency in the composition of the images. This suggests a relationship between the scale law of natural images and the first two moments of the statistics of these images. It is interesting to underline that we have here a manifestation of the link between the statistical properties of an image and its structural properties in terms of spatial frequencies. This link is well illustrated by the Wiener-Khintchine theorem and the relationship between the autocorrelation function of the image and its power spectral density. Concerning the extracted features, these observations allow deducing that an equal representation (in terms of amplitude) of all frequencies in the initial signal gives rise to an over-representation of high frequencies in the obtained features. It could be due to the fact that, in the whitened data, the energy contained in each frequency band increases with the frequency, while it is constant in initial or normalized images.

We can argue that low frequency dependencies are related to the statistical correlation between neighboring pixels. Thus the suppression of these second order correlations would suppress these low frequencies in the whitened patches. The resulting feature set is expected to contain a larger number of low frequency, less localized features, which is actually observed.

B. Supervised Learning of Places

After feature extraction, a classification was performed in the feature space. Assuming that the non-linear transform operated by DBNs improves the linear separability of the data, a simple regression method was used to perform the classification process in the initial case. To express the final result as the probability that a given view belongs to one room, we normalize the output with a softmax regression method. We have also investigated the classification phase using a Support Vector Machine (SVM) in order to demonstrate that the DBN computes a linearly separable signature and thus that the choice of classifier should not affect the final classification results.

The samples have been taken from each laboratory, and each different illumination condition was trained separately as in [4].
Table 1: Average classification results for the three laboratories and the three training conditions.

                                    Saarbrucken               Freiburg                  Ljubljana
Training condition                  Cloudy  Night   Sunny     Cloudy  Night   Sunny     Cloudy  Night   Sunny
Ullah's work                        84.20%  86.52%  87.53%    79.57%  75.58%  77.85%    84.45%  87.54%  85.77%
No thr. using whitened features     70.21%  70.80%  70.59%    70.43%  70.26%  67.89%    72.64%  72.70%  74.69%
SVM using whitened features         69.92%  71.21%  70.70%    70.88%  70.46%  67.40%    72.20%  72.57%  74.93%
0.55 thr. using whitened features   84.73%  87.44%  87.32%    85.85%  83.48%  86.96%    84.99%  89.64%  85.26%
No thr. using normalized features   80.41%  81.29%  83.66%    81.65%  80.08%  79.64%    83.14%  82.38%  83.87%
0.55 thr. using normalized features 86.00%  88.35%  87.36%    88.15%  85.00%  87.98%    85.95%  90.63%  86.86%
For each image, the softmax network output gives the probability of being in each of the visited rooms. According to maximum likelihood principles, the largest probability value gives the decision of the system. Thus, using features learned from the whitened data, we obtain an average of correct answers ranging from 67.89% to 74.69% according to the different conditions and laboratories, as shown in table 1 (second row). In contrast, using features learned from the normalized data, we obtain an average of correct answers ranging from 79.64% to 83.87%, as shown in table 1 (fifth row).

These results demonstrate that features from an RBM trained on the normalized data outperform those from an RBM trained on the whitened data. This illustrates the fact that the normalization process keeps much more information, or structure, of the initial views, which is very important for the classification process. In contrast, data whitening completely removes the first and second order statistics from the initial data, which leads DBNs to extract higher-order features. This suggests that data whitening could be useful for image coding; however, it is not the optimal pre-processing method in the case of image classification. This is in accordance with the results in the literature showing that features based on first and second order statistics are significantly better than higher order statistics in terms of classification [28, 29].

However, one way that is still open to improve these results is to use decision theory. The detection rate has been computed from the classes with the highest probabilities, irrespective of the relative values of these probabilities. Some of them are close to chance (in our case 0.20 or 0.25, depending on the number of categories to recognize) and it is obvious that, in such cases, the confidence in the decision made is weak. Thus, below a given threshold, when the probability distribution tends to become uniform, one could consider that the answer given by the system is meaningless. This could be due to the fact that the given image contains common characteristics or structures that can be found in two or more classes. The effect of the threshold is then to discard the most uncertain results. Table 1 (4th and 6th rows) shows the average classification results for a threshold of 0.55 (only results where max p(X = c_k | I) >= 0.55, where P(X = c_k) is the probability that the current view I belongs to c_k, are retained). One can see that the results are significantly improved. They range from 83.48% to 89.64% using the features extracted from the whitened data. In this case, the average acceptance rate, i.e. the percentage of considered examples, ranges from 75% to 85% depending on the laboratory. Similarly, the results range from 85.00% to 90.63% using features learned from the normalized data. In this case, the average rate of accepted examples ranges from 86% to 90%, depending on the laboratory, showing that more examples are used in the classification than in the former case. However, in both cases, our results outperform the best published ones [4].

Concerning the sensitivity to illumination changes, our results seem to be less sensitive to the illumination conditions compared to the results obtained in [4]. For instance, based on features extracted from normalized data, we obtained average classification rates of 91.6%, 90.98% and 91.77% for the Saarbrucken, Freiburg and Ljubljana laboratories respectively under similar illumination conditions, while under different illumination conditions we got average classification rates of 84.5%, 85.1% and 85.84% for the same laboratories. We can also note the lower performance on the Freiburg data, which confirms that this collection is the most challenging of the whole COLD database, as indicated in [4]. However, both with and without the threshold, our classification results for this laboratory outperform the best ones obtained by [4]. Moreover, we can see that the results obtained using a SVM are quite comparable to those obtained using softmax regression. This shows that the DBN computes a linearly separable signature. It underlines the fact that features learned by the DBN approach are more robust for a SPR task than ad hoc features based on (GiST, CENTRIST, SURF, and SIFT) descriptors.
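The rejection rule described above is easy to state in code. Below is a minimal sketch, assuming the classifier outputs one raw score per room; the function names are illustrative, while the 0.55 threshold reproduces the setting of Table 1.

    import numpy as np

    def softmax(scores):
        """Normalize raw scores into room probabilities."""
        e = np.exp(scores - scores.max())  # shift for numerical stability
        return e / e.sum()

    def decide(scores, threshold=0.55):
        """Return the most likely room, or None if the decision
        is too uncertain (max probability below the threshold)."""
        p = softmax(scores)
        k = int(np.argmax(p))
        return (k, p[k]) if p[k] >= threshold else None

    print(decide(np.array([2.0, 0.1, -1.0, 0.3])))   # confident -> (0, prob)
    print(decide(np.array([0.2, 0.1, 0.15, 0.18])))  # near-uniform -> None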
VI. CONCLUSION AND FUTURE WORK

The fundamental contributions of this paper are two-fold. First, it shows that data normalization significantly affects the detection of features, by extracting features of a higher semantic level than whitening, and thus improves the recognition rates. Second, it demonstrates that DBNs coupled with tiny images can be successfully used in a challenging image recognition task, view-based SPR. Our results outperformed the best published ones [4], which are based on more complex techniques (use of SIFT detectors followed by a SVM classification). According to our classification results, it can be argued that features based on first and second order statistics are significantly better than higher order statistics in terms of classification, as recently observed by [29]. Also, to recognize a place it seems not necessary to correctly classify every image of the place. With respect to place recognition, not all the images are informative: some of them are blurred when the robot turns or moves too fast from one place to another; some others show no informative details (e.g. when the robot is facing a wall). As the proposed system computes the probability of the most likely room among all the possible rooms, it offers a way to weight each conclusion by a confidence factor associated with the probability distribution over all classes, and then to discard the most uncertain views, thus increasing the recognition score.

Our proposed model has greatly contributed to simplifying the overall classification algorithm. It indeed provides coding vectors that can be used directly in a discriminative method. So, the present approach obtains scores comparable to the ones based on hand-engineered signatures (like GiST or SIFT detectors) and more sophisticated classification techniques like SVM. As emphasized by [30], this illustrates the fact that features extracted by DBNs are more promising for image classification than hand-engineered features.

Different ways can be used in further studies to extend this research. A final step of fine-tuning can be introduced using back-propagation instead of using the rough features, as illustrated in [30]. However, using the rough features makes the algorithm fully incremental, avoiding the adaptation to a specific domain. The strict separation between the construction of the feature space and the classification allows considering other classification problems sharing the same feature space. The independence of the construction of the feature space has another advantage: in the context of autonomous robotics, it can be seen as a developmental maturation acquired on-line by the robot, only once, during an exploration phase of its environment. Another open question that has not been investigated in this work, and that remains open despite some interesting attempts [7], is the view-based categorization of places. Moreover, it could also be interesting to evaluate the performance of DBNs on object recognition tasks.
REFERENCES

[1] S. Thrun, et al. Probabilistic Robotics (Intelligent Robotics and Autonomous Agents). MIT Press, Cambridge, MA, first edition, 2005.
[2] J. Wu and J. M. Rehg. CENTRIST: A visual descriptor for scene categorization. IEEE Trans. Pattern Anal. Mach. Intell., 33(8):1489-1501, 2011.
[3] W. Liu, S. Kiranyaz and M. Gabbouj. Robust scene classification by gist with angular radial partitioning. In Proceedings of the 5th International Symposium on Communications, Control and Signal Processing (ISCCSP 2012), Rome, Italy, pages 2-4, 2012.
[4] M. M. Ullah, A. Pronobis, B. Caputo, P. Jensfelt, and H. Christensen. Towards robust place recognition for robot localization. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2008), pages 3829-3836, Pasadena, California, USA, 2008.
[5] S. Lee and N. Allinson. Building Recognition Using Local Oriented Features. IEEE Transactions on Industrial Informatics, 3(9):1687-1704, 2013.
[6] A. Torralba, et al. Context-based vision system for place and object recognition. In Proceedings of the IEEE International Conference on Computer Vision (ICCV 2003), pages 273-280, Nice, France, 2003.
[7] M. Dubois, H. Guillaume, E. Frenoux, and P. Tarroux. Visual place recognition using Bayesian filtering with Markov chains. ESANN 2011, pages 435-440, Bruges, Belgium, 2011.
[8] B. A. Olshausen and D. J. Field. Sparse coding of sensory inputs. Current Opinion in Neurobiology, 14(4):481-487, 2004.
[9] D. J. Field. What is the goal of sensory coding? Neural Computation, 6(4):559-601, 1994.
[10] C. Zhang, et al. Image Classification Using Spatial Pyramid Robust Sparse Coding. Pattern Recognition Letters, 34(9):1046-1052, 2013.
[11] T. Zhang, et al. Low-Rank Sparse Coding for Image Classification. International Conference on Computer Vision (ICCV 2013), 2013.
[12] A. Hasasneh, E. Frenoux, and P. Tarroux. Semantic place recognition based on deep belief networks and tiny images. ICINCO 2012, volume 2, pages 236-241, Rome, Italy, 2012.
[13] K. Labusch and T. Martinetz. Learning sparse codes for image reconstruction. In Proceedings of the 18th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2010), pages 241-246, Bruges, Belgium, 2010.
[14] G. E. Hinton, S. Osindero, and Y. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527-1554, 2006.
[15] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771-1800, 2002.
[16] M. A. Ranzato, C. Poultney, S. Chopra, and Y. LeCun. Efficient learning of sparse representations with an energy-based model. In Proceedings of the Advances in Neural Information Processing Systems (NIPS 2006), volume 19, pages 1137-1144, Vancouver, B.C., Canada. MIT Press, 2006.
[17] M. A. Ranzato, A. Krizhevsky, and G. E. Hinton. Factored 3-way restricted Boltzmann machines for modeling natural images. Journal of Machine Learning Research (JMLR) - Proceedings Track, 9:621-628, 2010.
[18] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. ICML 2009, pages 609-616, Montreal, Canada, 2009.
[19] A. Torralba, R. Fergus, and Y. Weiss. Small codes and large image databases for recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), pages 1-8, Anchorage, Alaska, USA, 2008.
[20] H. Barlow. Redundancy reduction revisited. Network: Computation in Neural Systems, 12:241-325, 2001.
[21] D. J. Field. Relations between the statistics of natural images and the response properties of cortical cells. Journal of the Optical Society of America A, 4(12):2379-2394, 1987.
[22] A. Hyvarinen and E. Oja. Independent Component Analysis: Algorithms and Applications. Neural Networks, 13:411-430, 2000.
[23] A. Coates, A. Y. Ng, and H. Lee. An analysis of single-layer networks in unsupervised feature learning. Journal of Machine Learning Research (JMLR) - Proceedings Track, 15:215-223, 2011.
[24] A. Krizhevsky. Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto, Toronto, Canada, 2009.
[25] G. E. Hinton. A practical guide to training restricted Boltzmann machines - version 1. Technical report, Department of Computer Science, University of Toronto, Toronto, Canada, 2010.
[26] H. Lee, C. Ekanadham, and A. Y. Ng. Sparse deep belief net model for visual area V2. In Proceedings of the Advances in Neural Information Processing Systems (NIPS 2008), volume 20, pages 873-880, Vancouver, British Columbia, Canada. MIT Press, 2008.
[27] M. M. Ullah, et al. The COLD database. Technical report, CAS - Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden, 2007.
[28] A. Krizhevsky. Convolutional deep belief networks on CIFAR-10. Technical report, Department of Computer Science, University of Toronto, Toronto, Canada, 2010.
[29] N. Aggarwal and R. K. Agrawal. First and second order statistics features for classification of magnetic resonance brain images. Signal and Information Processing, 3(2):146-153, 2012.
[30] G. E. Hinton, A. Krizhevsky, and S. Wang. Transforming auto-encoders. In Proceedings of the International Conference on Artificial Neural Networks (ICANN 2011), pages 44-51, Espoo, Finland, 2011.
Wind Turbine Economic Analysis and
Experiment Set Applications Using the
Rayleigh Statistical Method
F. KOSTEKCI1, Y. ARIKAN2, E. ÇAM2
1 Kırıkkale University, Kırıkkale/Turkey, mfkostekci@hotmail.com
2 Kırıkkale University, Kırıkkale/Turkey, yagmurarikan@kku.edu.tr, cam@kku.edu.tr
Abstract - In this study, a program has been prepared which calculates the wind energy potential of a region by using the Rayleigh statistical method. Through this program, the effect of different hub heights on the probability density function is examined, and the amount of annual energy production, the capacity factor and a cost analysis are computed. The program is prepared using the ProfiLab-Expert 4.0 software, and it is designed so that users can easily enter the technical-financial data and obtain the calculation results as tables and graphics on a visual screen.

Rayleigh probability density functions, annual energy production, capacity factor and cost calculations of Enercon E-48 turbines are done by the prepared program. After the results were shown to be successful, this program was applied with an experiment set. The experiment set was purchased under project number 2012/11 by the Scientific Research Projects Unit of Kırıkkale University. In this way, users will be able to consolidate the information learned in theory through practical applications, so it can be used for educational purposes.

Keywords - Rayleigh statistical method, ProfiLab-Expert, economic analysis, experiment set

I. INTRODUCTION

Energy is one of the most important needs. In recent years, the gap between energy production and energy consumption has increased. The development of technology, the reduction of energy reserves and population growth are some of the reasons for this. So countries have tended towards renewable energy sources.

The international energy conference separates its energy policies into three main topics when analyzing energy policies in recent years. These are energy security, environmental protection and sustainable economic development [1].

Three objectives have been determined by the European Union to be achieved by the year 2020. The first of these is to reduce greenhouse gas emissions by twenty percent. The second is to increase energy efficiency by 20 percent. The last one is to increase the share of renewable energy sources in production to 20 percent [2].

In Turkey, the proportion of renewable energy sources in total electricity consumption was 19.6 percent in 2009. In 2011, renewable energy sources produced 57.6 TWh of electricity, and 8.3 percent of this production came from wind energy sources [3].

At the end of 2012, Turkey's installed wind power was 2,140 MW, but at the beginning of 2013 it was 2,619 MW. Given this progress, wind energy is a vital resource [3].

In this study, a program has been prepared which calculates the wind energy potential of a region by using the Rayleigh statistical method. Through this program, the effect of different hub heights on the probability density function is examined, and the amount of annual energy production, the capacity factor and a cost analysis are computed. After the results were shown to be successful, this program was applied with an experiment set. The experiment set was purchased under project number 2012/11 by the Scientific Research Projects Unit of Kırıkkale University. In this way, users will be able to consolidate the information learned in theory through practical applications, so it can be used for educational purposes.

II. WEIBULL AND RAYLEIGH STATISTICAL METHOD

It is not sufficient to determine the wind energy potential with one year of measurements; at least ten years of measurements are needed. However, investments usually cannot wait that long, so statistical methods are used to determine the wind power potential. One of the most widely used methods is the Weibull distribution [4, 5]. The Weibull distribution is widely used in wind energy calculations due to its simplicity and flexibility. It can be expressed as formula 1, where $f(v)$ is the probability density of the observed wind speed $v$, $k$ is the shape parameter and $c$ is the scale parameter [6, 7]:

$$f(v) = \frac{k}{c}\left(\frac{v}{c}\right)^{k-1} \exp\left[-\left(\frac{v}{c}\right)^{k}\right] \quad (1)$$

The cumulative probability density function can be expressed as formula 2 [6, 7]:

$$F(v) = 1 - \exp\left[-\left(\frac{v}{c}\right)^{k}\right] \quad (2)$$

The Weibull probability density function whose shape parameter equals two is called the Rayleigh probability density function, and it can be expressed as formula 3 [4, 8]:

$$f(v) = \frac{2v}{c^{2}} \exp\left[-\left(\frac{v}{c}\right)^{2}\right] \quad (3)$$

The relation between the scale parameter of the Rayleigh probability density function and the average wind speed $V_m$ is given as formula 4 [9]:

$$c = \frac{2}{\sqrt{\pi}}\,V_m \quad (4)$$

The wind energy density can be found from the scale and shape parameters. It is given as formula 5, where $\rho$ is the air density and $\Gamma$ is the gamma function [9]:

$$E_D = \frac{1}{2}\,\rho\,c^{3}\,\Gamma\!\left(\frac{k+3}{k}\right) \quad (5)$$

The most frequent wind speed can be found as formula 6 [9]:

$$V_{F\max} = c\left(\frac{k-1}{k}\right)^{1/k} \quad (6)$$

The wind speed which contributes the maximum to the energy can be found as formula 7 [9]:

$$V_{E\max} = c\left(\frac{k+2}{k}\right)^{1/k} \quad (7)$$
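As a worked example of formulas (4)-(7), the following is a minimal Python sketch, assuming k = 2 (Rayleigh), a monthly average wind speed of 6.3 m/s and an air density of 1.225 kg/m3; these inputs are only illustrative, not the paper's measured data.

    import math

    # Minimal sketch of formulas (4)-(7) for k = 2 (Rayleigh).
    # Illustrative inputs: Vm = 6.3 m/s (cf. month 1 of Table 2),
    # air density rho = 1.225 kg/m^3.
    k, Vm, rho = 2.0, 6.3, 1.225

    c = 2.0 * Vm / math.sqrt(math.pi)                 # formula (4)
    E_D = 0.5 * rho * c**3 * math.gamma((k + 3) / k)  # formula (5), W/m^2
    V_F = c * ((k - 1) / k) ** (1 / k)                # formula (6)
    V_E = c * ((k + 2) / k) ** (1 / k)                # formula (7)

    print(f"c = {c:.2f} m/s, E_D = {E_D/1000:.2f} kW/m^2, "
          f"V_Fmax = {V_F:.1f} m/s, V_Emax = {V_E:.1f} m/s")
    # With these inputs the values come out close to the first row of
    # Table 2 (c = 7.1, Ey = 0.28, VFmaks = 5, VEmaks = 10).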
III. WIND ENERGY POTENTIAL ANALYSIS PROGRAM AND SIMULATION

The ProfiLab-Expert 4.0 software has analog and digital measuring technology. It has a library which consists of arithmetic and logic elements. The results of a program can be displayed and saved. A program prepared with ProfiLab-Expert 4.0 can run simulations in real time. Switches, potentiometers, and the control and measurement elements that will be used in simulations can be displayed as a separate panel. An important feature of the ProfiLab-Expert 4.0 software is that it can be used with hardware devices. It also contains a compiler [10].

The technical values of the experiment set and the daily average wind speeds of the Datça region, taken from the State Meteorological Station for 2010, were used in the wind energy potential analysis program prepared in the ProfiLab-Expert 4.0 software [11]. For each month, the scale parameter value, the average wind speed, the energy density values and the amount of total energy are written in the first section of the program; the Rayleigh statistical method was used in this part. In the second section of the program, the Rayleigh probability density functions of turbines with different hub heights were plotted. In the other part of the program, the amount of annual energy production, the capacity factor and the cost calculations were computed. After the results were shown to be successful, this program was applied with an experiment set. In this way, users will be able to consolidate the information learned in theory through practical applications, so it can be used for educational purposes.

Some specific values and the power curve of the experiment set were needed for the analysis program. These values are given in Table 1, and the power curve is shown in figure 1.

Table 1: Some features of the experiment set

Rated power (kW):        0.4
Hub heights (m):         10/14/18/27/37/46
Start wind speed (m/s):  3.6
Rated wind speed (m/s):  12.5
Stop wind speed (m/s):   14-22

Figure 1: Power curve of the experiment set

IV. RESULTS

The wind speeds used for the calculations were measured at a height of 10 meters by the State Meteorological Station [11]. The average temperature in 2010 was assumed to be 15 degrees for the Datça region. The temperature correction factor was assumed to be 1 for the calculation of the air density. The height of the turbines was assumed to be 200 meters above sea level, and the ground was assumed to be a long-grass region.

The results calculated in the first part are given in Table 2 and Table 3, where Vr is the monthly average wind speed, EY is the energy density, VFmaks is the most frequent wind speed, VEmaks is the wind speed which contributes the maximum to the energy, and c is the scale parameter.

Table 2: Results of the first part
Months   c      Vr (m/s)   Ey (kW/m2)   VFmaks (m/s)   VEmaks (m/s)
1        7.1    6.3        0.28         5              10
2        8.37   7.4        0.47         5.9            11.8
3        7.28   6.5        0.31         5.1            10.3
4        5.68   5          0.15         4              8
5        4.22   3.7        0.06         3              6
6        5.51   4.9        0.13         3.9            7.8
7        6.87   6.1        0.26         4.9            9.7
8        3.54   3.1        0.04         2.5            5
9        5.66   5          0.14         4              8
10       7.14   6.3        0.29         5.1            10.1
11       6.03   5.3        0.17         4.3            8.5
12       7.73   6.8        0.37         5.5            10.9
According to the first results, the Rayleigh probability density functions and the cumulative probability density functions were plotted for the experiment set at different hub heights (14/27/37 meters). The graphs are shown in figures 2, 3 and 4. It has been observed that the scale parameter of the Rayleigh distribution function and the average wind speed increase as the height of the turbine increases.

Figure 2: a) Rayleigh probability function, b) cumulative probability function (hub height is 14 meters)

Figure 3: a) Rayleigh probability function, b) cumulative probability function (hub height is 27 meters)

Figure 4: a) Rayleigh probability function, b) cumulative probability function (hub height is 37 meters)

The total amount of energy produced by the turbines at different hub heights was also calculated. The results are shown in figure 5.

Figure 5: Amount of annual energy potential

For the economic analysis, the capacity factor and cost calculations were made in the last part of the program. In the analysis, the turbine investment cost was assumed to be 1400 $/kW, and the maintenance and operating costs were assumed to be 3 percent of the total cost. It is assumed that 25 percent of the investment cost is met by equity and 75 percent by bank loans. The debt payment period was taken as 15 years. The results of the economic analysis are given in Table 3.
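A minimal sketch of this kind of economic calculation is shown below, assuming the 0.4 kW rated power, the 1400 $/kW investment cost, the 25/75 equity/loan split and the 15-year repayment period stated above; the 8% interest rate and the annuity formula are illustrative assumptions, since the paper does not state them.

    # Minimal sketch of the economic analysis described above.
    # Assumed inputs: 0.4 kW rated power, 1400 $/kW investment cost,
    # 75% bank loan / 25% equity, 15-year term. The 8% interest rate
    # and the annuity formula are illustrative assumptions.
    rated_kw, cost_per_kw = 0.4, 1400.0
    investment = rated_kw * cost_per_kw          # total investment ($)
    loan, equity = 0.75 * investment, 0.25 * investment
    rate, years = 0.08, 15

    # Standard annuity: equal annual payment on the bank loan.
    annuity = loan * rate / (1 - (1 + rate) ** -years)
    o_and_m = 0.03 * investment                  # 3% of total cost per year

    # Capacity factor: annual energy over the energy at rated power.
    annual_energy_kwh = 827.0                    # e.g. Turbine 1 in Table 3
    capacity_factor = annual_energy_kwh / (rated_kw * 8760)

    print(f"investment = {investment:.0f} $, annuity = {annuity:.1f} $/yr, "
          f"O&M = {o_and_m:.1f} $/yr, CF = {capacity_factor:.0%}")
    # CF comes out near the 23% reported for Turbine 1 in Table 3.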
Table 3: Results of economic analysis

Turbine (hub height)    Capacity factor   Annual energy (kWh)   Sale price ($/kWh)
Turbine 1 (14 meters)   23%               827                   0.094
Turbine 2 (27 meters)   29%               1019                  0.076
Turbine 3 (37 meters)   31%               1118                  0.070

Annual repayment of bank debt: 40 $/year; annual repayment of equity: 21 $/year; annual operation and maintenance cost: 17 $/year; total annual debt payment: 78 $/year.

V. CONCLUSIONS

To predict the wind energy potential of a region and to choose the right turbine are important subjects for investors. More electricity generation and cost effectiveness are expected from a good investment.

A general assessment was made of the wind energy potential with the Rayleigh statistical method. This study shows that the height of the turbine and the capacity factor have a direct relationship with production and cost. As a result of the analysis, it has been observed that the scale parameter of the Rayleigh distribution function and the average wind speed increase as the height of the turbine increases. It is also observed that the energy produced by the experiment set increases as the hub height increases, and that the sale price of turbines with a high capacity factor is low.

This study was made with an experiment set. In this way, users will be able to consolidate the information learned in theory through practical applications, so it can be used for educational purposes.

VI. REFERENCES

[1] Worldwide engagement for sustainable energy strategies 2013, International Energy Agency (IEA), 16 pp., 2013.
[2] Anonim, Fasıl 15 - Enerji, Sektörel Politikalar Başkanlığı, Türkiye Cumhuriyeti Avrupa Birliği Başkanlığı.
[3] Enerji Yatırımcısı El Kitabı 2012, Enerji Piyasası Düzenleme Kurulu (EPDK), 33 pp., 2012.
[4] Patel, M. R., Wind and Solar Power Systems, 40-65, CRC Press, USA, 1999.
[5] Masters, G. M., Renewable and Efficient Electric Power Systems, 334-379, John Wiley & Sons, Inc., USA, 2004.
[6] Lun, I. Y. F., Lam, J. C., A study of Weibull parameters using long-term wind observations, Renewable Energy, 20:145-153, 2000.
[7] Akpınar, S., Akpınar, E. K., An Assessment of Wind Turbine Characteristics and Wind Energy Characteristics for Electricity Production, Energy Sources, Part A: Recovery, Utilization, and Environmental Effects, Taylor & Francis Group, 28:941-953, 2006.
[8] Ucar, A., Balo, F., A Seasonal Analysis of Wind Turbine Characteristics and Wind Power Potential in Manisa, Turkey, International Journal of Green Energy, Taylor & Francis Group, 5:466-479, 2008.
[9] Mathew, S., Wind Energy: Fundamentals, Resource Analysis and Economics, 61-88, 145-164, Springer-Verlag Berlin Heidelberg, 2006.
[10] Anonim, ProfiLab-Expert 4.0, ABACOM.
[11] State Meteorological Station.
A Comparative Study of Bacterial Foraging
Optimization and Cuckoo Search
Ahmet ÖZKIŞ and Ahmet BABALIK
Selcuk University, Konya/Turkey, ahmetozkis@selcuk.edu.tr
Selcuk University, Konya/Turkey, ababalik@selcuk.edu.tr
Abstract – In this paper, two well-known meta-heuristic algorithms, Cuckoo Search (CS) and Bacterial Foraging Optimization (BFO), are presented and used to solve 6 different continuous optimization problems widely used in the optimization area. Convergence graphs of both algorithms towards the optimum point are shown, and the results obtained by both algorithms are compared and their performance analyzed.
Keywords – metaheuristic algorithms, artificial intelligence,
CS, BFO.
I. INTRODUCTION
Meta-heuristic algorithms are optimization methods developed by mimicking the food-searching processes of living creatures in nature. For instance, Particle Swarm Optimization [1], proposed by Kennedy and Eberhart, mimics bird flocking; Ant Colony Optimization [2], proposed by Dorigo et al., mimics the behavior of ant colonies; and the Genetic Algorithm [3], developed by Holland, mimics the crossover and mutation processes of chromosomes. Similarly to the Genetic Algorithm, the Differential Evolution Algorithm [4], developed by Storn and Price, also mimics some biological processes. In addition, the Artificial Bee Colony Algorithm [5], developed by Karaboga, mimics the nectar-searching process of honeybees. Besides these well-known algorithms, Passino developed Bacterial Foraging Optimization (BFO) [6] in 2002, inspired by the nutrient-searching process of Escherichia coli (E. coli) bacteria, and Yang and Deb developed Cuckoo Search [7] in 2009, inspired by the egg-laying behavior of some species of cuckoos.

E. coli, a useful bacterium, lives in the rectum of mammals. The BFO algorithm imitates E. coli bacteria, which try to maximize the energy obtained per unit of time under some constraints on their physical possibilities. There is a control mechanism which guides E. coli bacteria while searching for nutrients. Bacteria approach a nutrient source step by step by using this control mechanism. Biological research shows that the nutrient-searching process of E. coli bacteria divides into 4 main steps, introduced by Passino as follows [6]:
1) Searching for a reasonable nutrient area,
2) Deciding whether the bacterium should go to the found nutrient area,
3) If the bacterium went to the new nutrient area, searching for nutrients studiously in the new area,
4) After some nutrients are consumed, deciding between staying in that area and emigrating to a new area.

If bacteria stay in a position which does not have enough nutrients, they use their experience and conclude that other areas have plenty of nutrients. Each change of location aims to maximize the energy obtained per unit of time.

Cuckoos are very special birds, not only because of their beautiful sounds, but also because of their aggressive reproduction strategy. Some species of cuckoos lay their eggs in the nests of other birds. If a host bird notices the eggs are not its own, it will either throw these alien eggs away or simply abandon its nest and build a new nest elsewhere [7]. If the host bird does not notice the parasitic eggs and goes on brooding them, the nest ends up in a dangerous situation. The cuckoo eggs hatch a little earlier than the eggs of the host bird. Once the first cuckoo chick is hatched, its first instinctive action is to throw the eggs of the host bird out of the nest [7]. The cuckoo chick grows up rapidly and abandons the host.

Detailed information about both algorithms is presented in the following part of the work.
II. BFO AND CS ALGORITHMS

A. Bacterial Foraging Optimization (BFO)

The main steps of the BFO algorithm are given below [6, 8]:

Step 1: Randomly initialize the first positions of the bacteria,
Step 2: Evaluate the bacteria according to the aim function,
Step 3: Go into the optimization cycle:
  Internal cycle: chemotactic event
  Middle cycle: reproduction event
  External cycle: elimination-dispersal event
Step 4: Obtain the optimal result.

Figure 1: Main Steps of the BFO Algorithm
Firstly, to begin BFO, the number of dimensions (p), the number of bacteria (S), the number of chemotactic steps (Nc), the swim length (Ns), the number of reproduction steps (Nre), the number of elimination-dispersal events (Ned), the probability of elimination-dispersal (Ped) and the step lengths (C(i), i = 1,2,...,S) are set. The details of the BFO algorithm are explained by Passino in Figure 2 as follows [6]:
1) Elimination-dispersal loop: l = l + 1
2) Reproduction loop: k = k + 1
3) Chemotaxis loop: j = j + 1
   a) For i = 1,2,…,S, take a chemotactic step for bacterium i as follows.
   b) Compute J(i,j,k,l). Let J(i,j,k,l) = J(i,j,k,l) + Jcc(θi(j,k,l), P(j,k,l)).
   c) Let Jlast = J(i,j,k,l) to save this value, since we may find a better cost via a run.
   d) Tumble: Generate a random vector Δ(i) with each element Δm(i), m = 1,2,…,p, a random number on [−1,1].
   e) Move: Let θi(j+1,k,l) = θi(j,k,l) + C(i) Δ(i)/√(ΔT(i)Δ(i)). This results in a step of size C(i) in the direction of the tumble for bacterium i.
   f) Compute J(i,j+1,k,l), and then let J(i,j+1,k,l) = J(i,j+1,k,l) + Jcc(θi(j+1,k,l), P(j+1,k,l)).
   g) Swim (note that we use an approximation, since we decide the swimming behavior of each cell as if the bacteria numbered {1,2,…,i} have moved and {i+1,i+2,…,S} have not; this is much simpler to simulate than simultaneous decisions about swimming and tumbling by all bacteria at the same time):
      i) Let m = 0 (counter for swim length).
      ii) While m < Ns (if we have not climbed down too long):
         - Let m = m + 1.
         - If J(i,j+1,k,l) < Jlast (if doing better), let Jlast = J(i,j+1,k,l), let θi(j+1,k,l) = θi(j+1,k,l) + C(i) Δ(i)/√(ΔT(i)Δ(i)), and use this θi(j+1,k,l) to compute the new J(i,j+1,k,l) as we did in f).
         - Else, let m = Ns. This is the end of the while statement.
   h) Go to the next bacterium (i+1) if i ≠ S (i.e., go to b) to process the next bacterium).
4) If j < Nc, go to step 3. In this case, continue chemotaxis, since the life of the bacteria is not over.
5) Reproduction:
   a) For the given k and l, and for each i = 1,2,…,S, let Jhealth(i) be the health of bacterium i (a measure of how many nutrients it got over its lifetime and how successful it was at avoiding noxious substances). Sort the bacteria and the chemotactic parameters C(i) in order of ascending cost Jhealth (higher cost means lower health).
   b) The Sr bacteria with the highest Jhealth values die, and the other Sr bacteria with the best values split (the copies that are made are placed at the same location as their parent).
6) If k < Nre, go to step 2. In this case, we have not reached the number of specified reproduction steps, so we start the next generation in the chemotactic loop.
7) Elimination-dispersal: For i = 1,2,…,S, with probability Ped, eliminate and disperse each bacterium (this keeps the number of bacteria in the population constant). To do this, if you eliminate a bacterium, simply disperse one to a random location on the optimization domain.
8) If l < Ned, then go to step 1; otherwise end.

Figure 2: Pseudo Code of the BFO Algorithm

B. Cuckoo Search (CS)

The main steps of the CS algorithm are given below [7]:

Step 1: Each cuckoo selects a host nest and lays just one egg at a time in it (n eggs, n hosts),
Step 2: The nests whose eggs are of high quality will carry over to the next generations,
Step 3: The number of available host nests is fixed, and the egg laid by a cuckoo is discovered by the host bird with a probability pa ∈ [0, 1],
Step 4: New solutions (x_i^(t+1)) are generated from the existing solutions (x_i^(t)) by using Lévy flights.

Figure 3: Main Steps of the CS Algorithm

Firstly, to begin CS, the number of dimensions (d), the number of host nests (n), the probability (pa) and the α, β parameters belonging to the Lévy flight are set. The pseudo code of the CS algorithm is given below [9]:

begin
  Objective function f(x), x = (x1,…,xd)T;
  Initialize a population of n host nests xi (i = 1,2,…,n);
  while (t < Maximum Generation) or (stop criterion)
    Get a cuckoo (say i) randomly and generate a new solution by Lévy flights;
    Evaluate its quality/fitness Fi;
    Choose a nest among n (say j) randomly;
    if (Fi > Fj)
      Replace j by the new solution;
    end
    Abandon a fraction (pa) of the worse nests
    [and build new ones at new locations via Lévy flights];
    Keep the best solutions;
    Rank the solutions and find the current best;
  end while
  Post-process results and visualization;
end

Figure 4: Pseudo Code of the CS Algorithm
Lévy flight generates a new random solution from the existing one. The formulations of the Lévy flight and of finding a new solution in CS are shown in Eq. (1) and Eq. (2), respectively:

Lévy ~ u = t^(-β),  1 < β ≤ 3   (1)

xi(t+1) = xi(t) + α ⊕ Lévy(β)   (2)
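As an illustration of Figure 4 and Equations (1)-(2), the sketch below implements the CS loop in Python for a minimization problem. The Lévy step uses Mantegna's algorithm, which is one common way to realize Eq. (1) but is an assumption here; the value α = 0.01 and the clipping to the search bounds are also assumptions, while n = 20 and pa = 0.25 follow the settings reported in the next section.

import numpy as np
from math import gamma, pi, sin

def levy_step(beta, size):
    # Mantegna's algorithm for a heavy-tailed Levy step (assumed realization
    # of Eq. (1); the paper only specifies Levy ~ u = t^(-beta))
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, size)
    v = np.random.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim, n=20, pa=0.25, alpha=0.01, beta=1.5,
                  max_gen=1000, lo=-5.12, hi=5.12):
    nests = np.random.uniform(lo, hi, (n, dim))
    fit = np.array([f(x) for x in nests])
    for _ in range(max_gen):
        i = np.random.randint(n)
        new = np.clip(nests[i] + alpha * levy_step(beta, dim), lo, hi)  # Eq. (2)
        fn = f(new)
        j = np.random.randint(n)
        if fn < fit[j]:                      # minimization: lower cost = Fi > Fj
            nests[j], fit[j] = new, fn
        # abandon a fraction pa of the worse nests, rebuild via Levy flights
        for w in np.argsort(fit)[-int(pa * n):]:
            nests[w] = np.clip(nests[w] + alpha * levy_step(beta, dim), lo, hi)
            fit[w] = f(nests[w])
    best = np.argmin(fit)
    return nests[best], fit[best]

sphere = lambda x: float(np.sum(x ** 2))
print(cuckoo_search(sphere, dim=2, max_gen=200))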
III. PERFORMANCE ANALYSES OF THE BFO AND CS ALGORITHMS

A. Parameter Settings
For the BFO algorithm, the parameters are set as S = 100, Nc = 100, Ns = 4, Nre = 4, Ned = 2, Ped = 0.25 and C(i) = 0.1, i = 1,2,...,S, compatible with the paper of Chen et al. [10]. For the CS algorithm, Yang sets the parameters as n = 15 and pa = 0.25 in his work [7]; however, we set the number of cuckoo nests (n) to 20 for the sake of a fair comparison.

The BFO and CS algorithms are tested on 6 different numerical optimization problems, named Sphere, Rosenbrock, Ackley, Griewank, Rastrigin and Schwefel. Detailed information about the problems used is given in Table 1. All problems were solved for 2, 5, 10, 30 and 50 dimensions, and each experiment was repeated 25 times independently with a MaxFES (maximum number of fitness evaluations) of 100,000.

The experimental results of the BFO and CS algorithms in terms of best, worst, mean and standard deviation values are shown in Tables 2 and 3, respectively. A comparison of the results obtained by the two algorithms is also shown in Table 4.
Table 1: Numerical Optimization Problems

f1  Sphere      Dim: 2,5,10,30,50  Chr: U/S  Space: [-5.12, 5.12]    Fmin: 0        f(x) = Σ(i=1..n) xi^2
f2  Rosenbrock  Dim: 2,5,10,30,50  Chr: U/N  Space: [-2.048, 2.048]  Fmin: 0        f(x) = Σ(i=1..n-1) [100(x(i+1) - xi^2)^2 + (xi - 1)^2]
f3  Ackley      Dim: 2,5,10,30,50  Chr: M/N  Space: [-2.048, 2.048]  Fmin: 0        f(x) = -20·exp(-0.2·√((1/n)·Σ(i=1..n) xi^2)) - exp((1/n)·Σ(i=1..n) cos(2π·xi)) + 20 + e
f4  Griewank    Dim: 2,5,10,30,50  Chr: M/N  Space: [-600, 600]      Fmin: 0        f(x) = (1/4000)·Σ(i=1..n) xi^2 - Π(i=1..n) cos(xi/√i) + 1
f5  Rastrigin   Dim: 2,5,10,30,50  Chr: M/S  Space: [-5.12, 5.12]    Fmin: 0        f(x) = Σ(i=1..n) [xi^2 - 10·cos(2π·xi) + 10]
f6  Schwefel    Dim: 2,5,10,30,50  Chr: M/S  Space: [-500, 500]      Fmin: -12569.5 f(x) = -Σ(i=1..n) xi·sin(√|xi|)

U: Unimodal, M: Multimodal, S: Separable, N: Nonseparable, Dim: Dimension, Chr: Characteristic
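For reference, three of the Table 1 benchmarks can be written in Python directly from their formulations; the remaining functions follow the same pattern. This is a minimal sketch, with the function names chosen here for illustration.

import numpy as np

def sphere(x):       # f1, unimodal/separable, fmin = 0 at x = 0
    return np.sum(x ** 2)

def rastrigin(x):    # f5, multimodal/separable, fmin = 0 at x = 0
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def schwefel(x):     # f6, multimodal/separable; fmin = -12569.5 for n = 30
    return -np.sum(x * np.sin(np.sqrt(np.abs(x))))

x0 = np.zeros(30)
print(sphere(x0), rastrigin(x0), schwefel(x0))   # 0.0 0.0 -0.0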
Table 2: Performance Results of the BFO Algorithm on Numerical Optimization Problems

Function    Stat   Dim=2      Dim=5      Dim=10    Dim=30    Dim=50
Sphere      Best   1,06E-08   0,00032    0,010729  0,363625  1,524297
            Worst  2,01E-06   0,002471   0,038829  0,590515  2,893639
            Mean   6,02E-07   0,001335   0,026451  0,465361  2,203954
            Std    5,92E-07   0,000579   0,007661  0,064304  0,330154
Rosenbrock  Best   8,73E-07   0,2170478  7,955292  55,71391  142,6309
            Worst  0,0001     1,1419861  10,95402  90,50247  235,637
            Mean   2,08E-05   0,5883212  9,597096  76,49017  197,9148
            Std    2,12E-05   0,2260819  0,758452  6,932341  20,25006
Ackley      Best   0,000373   0,049396   0,307351  1,139363  2,18649
            Worst  0,003698   0,131992   0,537664  1,735143  2,782216
            Mean   0,001893   0,090553   0,420228  1,563889  2,493888
            Std    0,000835   0,021707   0,060349  0,141514  0,150765
Griewank    Best   0,007397   4,726793   50,2557   415,4879  857,1498
            Worst  2,239271   46,878923  111,8924  609,7122  1138,518
            Mean   0,697305   24,66655   86,85501  510,2446  1005,547
            Std    0,721856   10,32997   16,77434  50,9222   75,39072
Rastrigin   Best   8,74E-05   1,494873   15,47547  179,3482  364,396
            Worst  0,003617   3,468907   25,488    221,6341  459,9423
            Mean   0,001196   2,492819   20,23137  203,4025  426,4095
            Std    0,000955   0,577559   2,430032  11,9276   22,99681
Schwefel    Best   -976,142   -1921,96   -2989,14  -5414,11  -5483,5
            Worst  -837,966   -1324,56   -2025,71  -2941,97  -3796,05
            Mean   -918,503   -1694,94   -2454,62  -3663,54  -4488,61
            Std    66,67762   144,1509   244,1814  501,1412  545,5966
Table 3: Performance Results of the CS Algorithm on Numerical Optimization Problems

Function    Stat   Dim=2      Dim=5      Dim=10     Dim=30    Dim=50
Sphere      Best   1,1E-109   3,14E-80   3,12E-49   3,76E-17  5,56E-09
            Worst  7,88E-92   2,72E-70   7,76E-46   3,28E-14  1,03E-07
            Mean   3,17E-93   1,76E-71   1,16E-46   2,96E-15  2,92E-08
            Std    1,58E-92   5,63E-71   2,01E-46   6,52E-15  2,24E-08
Rosenbrock  Best   0          0          6,26E-15   17,92875  42,7514
            Worst  0          1,11E-28   8,35E-08   20,8789   45,08042
            Mean   0          7,61E-30   3,53E-09   19,61092  43,92707
            Std    0          2,42E-29   1,67E-08   0,895056  0,636913
Ackley      Best   8,88E-16   8,88E-16   4,44E-15   1,81E-09  1,33E-05
            Worst  8,88E-16   8,88E-16   4,44E-15   6,69E-08  6,37E-05
            Mean   8,88E-16   8,88E-16   4,44E-15   1,45E-08  3,03E-05
            Std    0          0          0          1,35E-08  1,4E-05
Griewank    Best   0          1,44E-10   3,47E-06   1,61E-12  1,53E-06
            Worst  0          0,010924   0,051848   0,01478   0,03203
            Mean   0          0,001627   0,020254   0,001791  0,004374
            Std    0          0,00315    0,0126     0,004292  0,007665
Rastrigin   Best   0          0          2,2E-06    25,31349  67,39736
            Worst  0          0          2,452513   60,92168  135,7943
            Mean   0          0          0,418345   40,16664  100,0309
            Std    0          0          0,764953   7,368976  15,59865
Schwefel    Best   -837,966   -2094,91   -4189,83   -10451,3  -15946
            Worst  -837,966   -2094,91   -3939,09   -8910,13  -13629,6
            Mean   -837,966   -2094,91   -4138,05   -9732,48  -14540,4
            Std    0          1,39E-12   66,72893   353,2296  486,5571
Table 4: Performance Comparison of the BFO and CS Algorithms on Numerical Optimization Problems

2 Dimensions
Function    BFO Mean  BFO Std      CS Mean       CS Std
Sphere      6,02E-07  5,91802E-07  3,16838E-93   1,57529E-92
Rosenbrock  2,08E-05  2,12155E-05  0             0
Ackley      0,001893  0,000835416  8,88178E-16   0
Griewank    0,697305  0,721855823  0             0
Rastrigin   0,001196  0,000954567  0             0
Schwefel    -918,503  66,67761916  -837,9657745  0

5 Dimensions
Function    BFO Mean  BFO Std      CS Mean       CS Std
Sphere      0,001335  0,000578641  1,76389E-71   5,6272E-71
Rosenbrock  0,588321  0,226081862  7,61399E-30   2,41716E-29
Ackley      0,090553  0,021707295  8,88178E-16   0
Griewank    24,66655  10,32996968  0,001627094   0,003149848
Rastrigin   2,492819  0,577559396  0             0
Schwefel    -1694,94  144,1509233  -2094,914436  1,39237E-12

10 Dimensions
Function    BFO Mean  BFO Std      CS Mean       CS Std
Sphere      0,026451  0,007661193  1,15623E-46   2,00929E-46
Rosenbrock  9,597096  0,758452331  3,53295E-09   1,66636E-08
Ackley      0,420228  0,060348601  4,44089E-15   0
Griewank    86,85501  16,77433921  0,020253719   0,012599501
Rastrigin   20,23137  2,430031556  0,418344716   0,764953304
Schwefel    -2454,62  244,1813769  -4138,051081  66,7289268

30 Dimensions
Function    BFO Mean  BFO Std      CS Mean       CS Std
Sphere      0,465361  0,064304291  2,95553E-15   6,5186E-15
Rosenbrock  76,49017  6,932340644  19,61092176   0,895055801
Ackley      1,563889  0,141513898  1,45367E-08   1,34978E-08
Griewank    510,2446  50,92220409  0,001790872   0,004291716
Rastrigin   203,4025  11,92759755  40,16664359   7,368976311
Schwefel    -3663,54  501,1412194  -9732,483294  353,2295885

50 Dimensions
Function    BFO Mean  BFO Std      CS Mean       CS Std
Sphere      2,203954  0,330153659  2,92089E-08   2,244E-08
Rosenbrock  197,9148  20,25005975  43,92707301   0,63691323
Ackley      2,493888  0,150764801  3,0273E-05    1,39902E-05
Griewank    1005,547  75,39071752  0,004373682   0,007665317
Rastrigin   426,4095  22,99681079  100,0308924   15,59865338
Schwefel    -4488,61  545,5966033  -14540,38826  486,5571142
Figure 5: Sphere Convergence Graph
Figure 6: Rosenbrock Convergence Graph
Figure 7: Ackley Convergence Graph
Figure 8: Griewank Convergence Graph
Figure 9: Rastrigin Convergence Graph
Figure 10: Schwefel Convergence Graph
IV. CONCLUSION

The results in Tables 2-4 show that the CS algorithm performs better than the BFO algorithm on the tested numerical optimization problems. Additionally, Figures 5-10 show that the CS algorithm also has better convergence performance than the BFO algorithm.

REFERENCES
[1] Kennedy, J., Eberhart, R., "Particle Swarm Optimization", IEEE International Conference on Neural Networks, Perth, Australia, IEEE Service Center, Piscataway, NJ, pp. 1942-1948, 1995.
[2] Dorigo, M., et al., "Positive Feedback as a Search Strategy", Technical Report 91-016, Politecnico di Milano, Italy, 1991.
[3] Holland, J.H., Adaptation in Natural and Artificial Systems, Ann Arbor: The University of Michigan Press, 1975.
[4] Storn, R., Price, K., "Differential Evolution - a Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Spaces", Technical Report TR-95-012, International Computer Science Institute, Berkeley, 1995.
[5] Karaboga, D., "An Idea Based On Honey Bee Swarm for Numerical Optimization", Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.
[6] Passino, K., "Biomimicry of Bacterial Foraging for Distributed Optimization and Control", IEEE Control Systems, pp. 52-67, June 2002. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1004010
[7] Yang, X.-S., Deb, S., "Cuckoo Search via Lévy Flights", 2009.
[8] Başbuğ, S., "Pattern Nulling of Linear Antenna Arrays with the Use of Bacterial Foraging Algorithm", Erciyes University, Graduate School of Natural and Applied Sciences, 2008.
[9] Gandomi, A.H., et al., "Cuckoo Search Algorithm - a Metaheuristic Approach to Solve Structural Optimization Problems", 2011.
[10] Chen, H., Zhu, Y., Hu, K., "Adaptive Bacterial Foraging Optimization", Abstract and Applied Analysis, 2011, pp. 1-27, doi:10.1155/2011/108269.
International Conference and Exhibition on Electronic, Computer and Automation Technologies (ICEECAT’14),
May 9-11,2014
Konya, Turkey
Effect of Segment Length on Zigzag Code Performance
Salim Kahveci, Member IEEE
Department of Electrical & Electronics Engineering, Karadeniz Technical University
61080, Trabzon-TURKEY
E-mail: salim@ktu.edu.tr
Abstract - General turbo codes have excellent bit-error-rate performance at low signal-to-noise ratios in additive white Gaussian noise channels. However, the standard turbo decoding algorithm is highly complex. To reduce the decoding computational complexity, zigzag codes have come into use. An important issue is how the segment length, called J, affects zigzag code performance. In this study, we investigate which segment size gives good performance for a zigzag code with one encoder.
I. Introduction

Turbo codes are well known and are usually used when information needs to be received with minimum error over a fading channel [1]. Another widely used code is the low-density parity check code [2]. The performances of these high-complexity codes are very close to the Shannon bound.

The zigzag code is a simple linear block code. The zigzag code is a weak code, since the minimum Hamming distance between two codewords of any such code is equal to 2. However, zigzag codes introduce only one parity check bit per segment, and there is a significant performance improvement when multiple zigzag encoders are cascaded.

It is shown in this paper that a simple zigzag code can achieve good performance when the optimal segment length is short in an additive white Gaussian noise (AWGN) channel. The low-complexity Max-Log-MAP decoders are used in all computer simulations [3].

This paper is organised as follows. In Section II, the zigzag code, its decoding algorithm and the selection of the segment length are introduced. Section III contains the simulation results, and we conclude in the last section of this paper.

II. Zigzag Codes

A. Encoder Structure

The general structure of a zigzag code is illustrated in Figure 1, where ⊕ denotes modulo-2 summation. In Figure 1, the data nodes d(i,j) ∈ {0,1} are the information bits, while the parity nodes p(i) are the parity check bits in a coded block. A simple zigzag code is defined by two parameters, J and I: J is the number of data nodes considered to determine a parity bit, while I is the number of parity nodes in a coded data block. I is also the number of segments in a zigzag code; a segment includes J bits.

Figure 1: A general zigzag code (data nodes d(i,j) and parity nodes p(i))

The parity nodes p(i) are determined by Equation (1), with p(0) = 0:

p(i) = ( p(i-1) + Σ(j=1..J) d(i,j) ) mod 2,  i ∈ {1, 2, 3, …, I}   (1)
As can be seen, the minimum distance of any pair (I, J), d_min, is equal to 2. The zigzag code can also be depicted in matrix form. A data vector d is defined by

d = [d(1,1), d(1,2), d(1,3), …, d(1,J), …, d(I,J)] (of size 1×I·J)   (2)

The vector d is rearranged to form an I×J matrix D:

D = [ d(1,1) … d(1,J) ; … ; d(I,1) … d(I,J) ] (of size I×J)   (3)

The resulting coded matrix is

X = [D P] (of size I×(J+1))   (4)

where P = [p(1), p(2), p(3), …, p(I)]T (of size I×1) is the parity-check-bit column vector calculated using Equation (1). Note that the rate of any zigzag code is Rc = J/(J+1), and (n, k) = (I·J + I, I·J).

B. Decoding Process

Let Y = [D, P] be the {+1, -1}-modulated codeword ("0" → +1, "1" → -1). The modulated signal propagates through an AWGN channel, and the received signals are again denoted Y = [D, P]. The Max-Log-MAP algorithm can be used to decode the signal, as it exhibits very low decoding computational complexity with minimal performance loss.

The forward (F) and backward (B) Max-Log-MAP values of the parity check bits are computed as follows [4-6]:

F[p(i)] = p(i) + W( F[p(i-1)], d(i,1), d(i,2), …, d(i,J) ),  i = 1, 2, 3, …, I   (5)

B[p(i-1)] = p(i-1) + W( d(i,1), d(i,2), …, d(i,J), B[p(i)] ),  i = I, I-1, …, 2   (6)

where F[p(0)] = +∞, B[p(I)] = p(I), and

W(x1, x2, …, xn) = ( Π(j=1..n) sign(xj) ) · min(1≤j≤n) |xj|   (7)

Once the forward and backward Max-Log-MAP values for the parity bits are calculated, the Max-Log-MAP of the information bits can be determined as follows:

LLR[d(i,j)] = d(i,j) + W( F[p(i-1)], d(i,1), d(i,2), …, d(i,J), B[p(i)] )   (8)

The computation of F[p(i)], B[p(i-1)] and LLR[d(i,j)] can be optimized, since the right-hand terms of the respective Max-Log-MAP expressions can be calculated jointly.

C. Selection of Different Values of J

The zigzag code can be viewed as a two-state rate-1/2 systematic convolutional code with generator polynomial matrix G(D) = [1  1/(1⊕D)]. The parity sequence is punctured so that only one parity bit is transmitted for every J information bits in a segment, reducing the code rate to J/(J+1). It can be shown that for small J, i.e., small segment size, the zigzag code has a very sparse parity check vector. Therefore, the zigzag code can be regarded as a special case of the low-density parity check code.

III. Simulation Results

The performances of different values of J in zigzag codes are analysed in an AWGN channel. The noise is considered to be zero mean with variance N0/2. The decoder performs 10 iterations.

Figures 2 and 3 show the performances of individual zigzag codes with different values of J for I = 32 and 128 bits, respectively. It can be seen that the bit-error-rate (BER) performances of zigzag codes depend on J, the segment length. In Figures 2 and 3, the performance difference between the cases J = 2 and J = 64 bits for I = 32 and 128 bits is about 0.3 and 0.6 dB, respectively, at a BER of 10^-3.

Figure 2: BER performances of zigzag codes for different values of J (J = 2, 4, 8, 16, 32, 64) with one encoder, I = 32 and 10 iterations (BER versus SNR in dB)
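A small Python sketch of the encoder of Equation (1) and the W operation of Equation (7) may help clarify these computations. The function names are illustrative, and the full Max-Log-MAP recursions (5)-(8) would be built on top of these two pieces.

import numpy as np

def zigzag_parities(D):
    """Parity bits of Equation (1): p(i) = (p(i-1) + sum_j d(i,j)) mod 2."""
    p, prev = [], 0
    for row in D:                      # one parity bit per segment of J bits
        prev = (prev + int(np.sum(row))) % 2
        p.append(prev)
    return np.array(p)

def W(*x):
    """Min-sum update of Equation (7): product of signs times min magnitude."""
    x = np.array(x, dtype=float)
    return np.prod(np.sign(x)) * np.min(np.abs(x))

I, J = 4, 8                            # I segments of J information bits each
D = np.random.randint(0, 2, (I, J))
P = zigzag_parities(D)
X = np.hstack([D, P[:, None]])         # coded matrix of Equation (4)
print(X)
print(W(1.5, -0.3, 2.0))               # -0.3: one negative sign, min |x| = 0.3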
Figure 3: Performances of zigzag codes for various J (J = 2, 4, 8, 16, 32, 64) with one encoder, I = 128 and 10 iterations (BER versus SNR in dB)

IV. Conclusion

In this paper, the BER performances of zigzag codes with different values of J are presented, with simulation results assessing their performances for different segment sizes (J). The channel is assumed to be AWGN, and binary phase shift keying (BPSK) modulation is employed. The results show that performance improves as J becomes shorter. Since the decoding algorithm used is very efficient, the zigzag code is a serious candidate for high-rate real-time applications. With the low-complexity encoder and decoder of the zigzag codes proposed here, they may be interesting for use in real-time applications.

References
[1] C. Berrou and A. Glavieux, "Near optimum error correcting coding and decoding: Turbo codes," IEEE Trans. Commun., vol. 44, pp. 1261-1271, Oct. 1996.
[2] D. J. C. MacKay, "Good error-correcting codes based on very sparse matrices," IEEE Trans. Inform. Theory, vol. 45, pp. 399-431, Mar. 1999.
[3] L. Ping, X. Huang and N. Phamdo, "Zigzag codes and concatenated zigzag codes," IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 800-807, Feb. 2001.
[4] L. Ping, S. Chan and K. Yeung, "Iterative decoding of multi-dimensional concatenated single parity check codes," in Proc. IEEE ICC-98, pp. 131-135, Jun. 1998.
[5] S. Gollakota and D. Katabi, "Zigzag decoding: Combating hidden terminals in wireless networks," in SIGCOMM'08, Proceedings of the ACM Conference on Data Communication, 2008.
[6] S.-N. Hong and D.-J. Shin, "Design of irregular concatenated zigzag codes," in International Symposium on Information Theory (ISIT), IEEE, pp. 1363-1366, Sept. 2005.
International Conference and Exhibition on Electronic, Computer and Automation Technologies (ICEECAT’14), May 9-11,2014
Konya, Turkey
Evaluation of information in library and
information activities
Assoc. Prof. A. Gurbanov, BDU, Baku/Azerbaijan, azadbey@mail.ru
PhD P. Kazimi, BDU, Baku/Azerbaijan, pkazimi@mail.ru
Abstract - The article examines the pricing of information in today's global information society and the participation of library-information services in this process. We study the concepts of U.S., British and Russian library scientists, as well as the actual practice of information pricing in the information market, the commodity nature of the capitalization of information, and the participation of library and information institutions in the formation of fundamentally new economic relations.

Keywords - Library and information, assessment of information, information market, pricing mechanism, evaluation.
I. INTRODUCTION

The specific character of the library-information function, with its cultural, social, political, psychological and pedagogic features, has always been one of the most difficult factors in devising a single formula for the assessment of this function. The problem has attracted specialists' attention for many years, and some directions of assessment have been taken as basic: political assessment (determining the attributes of government and power) and cost assessment (market equivalent).

The acceleration of information processes and the increasing effect of information on everyday life and the economy make the problem of assessing the library-information function ever more topical. Economic analysts hold that the economic methods of industrial governance, the improvement of finance, the development of market relations, the balancing of the national economy, changes of ownership, self-financing, the efficiency of social production and the growth of national income, in short, the improvement of the industrial mechanism and its impact on the complete product, all depend on the implementation of an assessment mechanism.

How does the assessment of the library-information function occur? The classical economic literature defines the assessment by the following formula:

P = PC + I + T

Here P is the price, PC the prime cost (production expenses), I the enterprise and production income, and T taxes and other payments. Sometimes marketing costs are also added to the assessment formula, and they play an important role in pricing.
The marketing expenses in the modern globalised society are becoming more significant. In spite of its theoretical character, this formula allows several issues to be analysed. PC, the prime cost in the formula, differs from place to place. For example, if, in a normal and rational technological process, we take PC = a and apply it in various places, it becomes: in Azerbaijan PC = a; in Egypt PC = (-2a); in the USA PC = 10a. This creates an imbalance in the final price that undermines competitiveness. The costly technologies used in price-making undermine information competitiveness, and marketing absorbs vast sums of money. It is impossible to disagree with this situation in the globalized information society.

The formation of an information society demands that the assessment of the library-information function be more differentiated and expressed in concrete formulae. The works of American, Russian and English researchers related to this problem are limited to commenting in only one direction. The work "The assessment of libraries' functions" (Russian translation, 2009) by the English researcher B. Peters is notable in this direction.

Considering the complex functionality of the library-information function, B. Peters separates the pricing and assessment categories from each other and analyzes them. We add a "purpose" function to the researcher's assessment algorithm and obtain the following scheme:

Resource - Purpose - Process - Product - Result - Impact

This can serve as a model structure of the assessment system in the library-information function. In the assessment of information, specialists have mainly tried to approach the problem through the following prism:
1. The assessment of information
2. The assessment of production
3. The assessment of the information process
4. The assessment of the quality of information

The well-known Russian researcher M. Yuzvishin considers that information is invaluable and becomes a main method providing for the development of nature and society. Other specialists assess the information product by the working-time budget and process values. Specialists propose more promising and broad ideas in the sphere of assessing information quality.

On the assessment of information quality, the US researcher Peter Brophy advises using the criteria of accuracy and integrity. For example: among 10,000 pieces of information there is information appropriate to 50 surveys. Real search facilities make it possible to find only 25 of them, of which only 20 match the theme and 5 do not fit the survey.
At this point, accuracy is expressed by the formula 20/25 = 0.8 and completeness by 20/50 = 0.4.

What deserves attention is that these formulae can be applied both to traditional service in the library-information function and to service through modern communication means. These methods are used in modern automated information retrieval systems to identify the efficiency of the service.
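In code, Brophy's two measures for the example above reduce to two divisions; a minimal Python sketch using only the numbers given in the text:

# Brophy's accuracy (precision) and completeness (recall) for the example above
relevant_in_collection = 50      # documents matching the survey
retrieved = 25                   # documents the search actually finds
retrieved_and_relevant = 20      # retrieved documents that match the theme

accuracy = retrieved_and_relevant / retrieved                    # 20/25 = 0.8
completeness = retrieved_and_relevant / relevant_in_collection   # 20/50 = 0.4
print(accuracy, completeness)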
The assessment of the quality methods of the library-information function can only be considered as part of an effective system; it is rational, however, to proceed from a systematic analysis of the economic model. From this point of view, the structure of the assessment system put forward by B. Peters is extremely useful.

During assessment, the quality of the library-information function is considered as the cost content of the processes implemented in creating the information product. However, the social-political efficiency of the library-information function and its pedagogic-psychological efficiency are measured by different criteria. For a library-information function that serves state and national interests, the assessment of the information service should be carried out together with the assessments of the "result" and "impact" functions in the algorithm given by B. Peters.
Now let us try to characterize the algorithm's components separately. At the first stage, resources determine a concrete price. Whether a resource is traditional or modern, building a library's collection in either direction results in concrete amounts. Over the years these prices can increase or decrease according to the quality of the acquisitions.

At the second stage the assessment of results occurs. The notion of result takes part in the formation of resources at the first stage and characterizes the increase or decrease of the fund's assessed value in later years. For example, during the Soviet years a great share of library fund composition served the propaganda of Soviet ideology; with the collapse of the Soviet state a large part of the library funds lost their value, and a long period of releasing and renewing library funds began.

At present, the assessment of library-information purposes is interpreted by the nature of the purpose. For example, purposes covering information security and national interests are subject to significant funding and become part of the daily work of broad information structures. In market-economy conditions, competing parties often adjust their pricing-policy goals and manipulate the market price downwards or upwards.
The assessment of processes is both specific and subjective. When speaking about the assessment of concrete processes, this assessment can change with the conditions of time and place. For example, the work of the librarian who has written a book's bibliographic description is assessed on different terms in different conditions and places. Such factors shape the assessment of processes. The results of processes in most cases turn into information products, so the product price can be considered within the content of resources, goals and processes. The price of this product can be higher than its real value if the purposes serve market relations, and much lower than its real value if the purposes serve state and national criteria or corporate interests.

The assessment of results is a complicated process. The results of the effectiveness of the library-information function emerge as the product of group research; library and sociological research is required. The reliability of results and their ethical aspects are subjected to extensive analysis in B. Peters' monograph. The last element in the mentioned research is the assessment of impact. Thus, the assessment of the library-information function is completed not only with the result of the function but also with the assessment of the function's influence.
The assessment of influence, together with some analysis of statistical results, can be implemented through the assessment of certain social activities. For example, the influence of the library-information function carried out by an MLS can be assessed by the level of education in the district schools, by the quality indicators of university entrance exams, and by other factors. Of course, it is impossible to formulate one model for all types of libraries, and it will be rational to discuss separately the internationally accepted IF (impact factor) indicators. As mentioned above, the categories of result and influence have social and political importance, and the formula P = PC + I + T does not serve their assessment. In this regard, the proposed formula for the overall assessment of the library-information function can be summarized as follows:
A = F - (Ic + R + I) = 0   (1)

In the given formula, A is the general assessment of the information activity, F the financial resources allocated for results, Ic the cost of the information product, R the financial equivalent of the result, and I the impact. Thus, if A = 0, the financial resource allocated for the set purposes turns into the appropriate information product and an appropriate result is obtained (in this case a reasonable amount of money is allocated for each user); and if the impact of these results fully corresponds to the purposes, then the optimal price of the library-information function is equal to zero. If the equality is above zero, we can speak about the efficiency of collective activity, innovative methods and the application of best practice; if it is below zero, then we must speak about the non-professionalism of the staff and failure to cope with the responsibilities of the duties taken on. The proposed formula can be useful in the definition of R (result) and I (impact). If A > 0 or A < 0, the determination of R and I may be of particular importance.
In B. Peters' algorithm, all components either characterize each other or create dependencies. For example, an information product that emerges as a result of the library-information function turns into an information resource. Information retrieval processes, as well as service processes and the implementation of goals and objectives, depend on the material and technical base of the enterprise and affect its quality and efficiency.

The component "purpose" added to the algorithm takes part either in the formation or in the evaluation of many other components. The assessment of the purpose component occurs according to the parameters of time and space, creates the funding policy and system, and is expressed in concrete ranges. For example, the
funding budget allocated by the government to educational literature and to library-information enterprises of different orientations, and the differences in the assessment of a librarian's work, are defined by purposes.

Purposes take part in the formation of the product price. A number of information resources are distributed free of charge instead of at a particular value, and this serves the set goals. In other cases, information sold in the high-paid information market for profit corresponds to the targeted goals.

In the assessment of the library-information function, certain measurement parameters have found their place in the classical literature; these mainly include economic indicators and indicators of efficiency. An economic indicator answers the question: "How cheap and effective is this service among the existing ones?" The necessity of services and the management of the process are studied as well.

The indicators of efficiency cover the study of product quantity indicators achieved with minimal expense. Efficiency studies the library and information process: for example, how much information service a librarian can provide in a specific time frame. If the indicators in a given enterprise differ, it loses its competitive ability and faces unpleasant consequences.
The indicators of inference determine whether you are engaged in the required work. They coordinate the external purposes (e.g., state or national) with long-term fundamental objectives. One of the most important elements of the indicators of inference is the system of values and measures of effect. These parameters alone do not lend themselves to measurement.

A group of researchers at Sheffield University proposed measuring another indicator: equality. Many experts agree that public libraries in particular provide equal accessibility to the information product; this also serves as an expression of social equality. The indicator of equality does not yet define the quality of an equal information service. During the assessment of a service, its quality attracts the researcher's attention as an important category more than its equality. Likewise, some other factors, such as the social importance of reading, the level of education, and information and knowledge themselves, do not lend themselves to measurement, and their assessment is possible only by comparison.
Economic theorists argue that the globalization of society provides the passage to a financial economy, and the informatization of the global society the passage from a financial economy to an information economy. Thus, in the information economy, the prices of information resources, information products and information processes, and the pricing mechanisms, must be based on and be adequate to appropriate laws and formulae. The results and impact of modern assessment practice (IF, the impact factor) can in many cases be justified in scientific literature turnover. However, it is impossible to apply this to social literature, and especially to fiction.

According to the Russian State Library's statistics of 2010, D. Dansova's books had a higher turnover throughout the country than books by N. Dostoyevski: they sold better in book distribution and were lent to readers in libraries more often. This increases the "impact factor" of D. Dansova but renders N. Dostoyevski's books invaluable. As the representative of the Thomson Reuters Agency, Metin Tunj, noted, even if archaeological excavations carried out in one of the kurgans of Kazakhstan Province were to uncover a wealth of information, they could still have zero IF in the international rating. The assessment of the IF of scientific literature is based on the reference system.
Let us suppose that a chemical laboratory has been working for 6 years on a major project and at the end publishes an article. The research team of two professors and 6 researchers publishes, at the end of the 6 years, an article expressing interesting scientific results, and the IF of the article is defined by a high rating. As these figures are valid for a period of only one year, new indicators are calculated for the next year. Suppose that in the following research only one work is cited, and for its importance this work is presented for the Nobel Prize or for a state award; this does not affect the IF of the article.

Many Russian researchers note that assessment based on citations is conventional. However, in the process of rating assessment the only viable mechanism at present is considered to be the IF; it still carries a commercial character, and there is no alternative to IF-based assessment.
Some international organizations dealing with information marketing apply special pricing mechanisms. These widespread methods can be grouped as follows:
1. Assessment according to the information unit
2. Assessment according to the time of information use
3. Assessment according to the number of information users

For example, the Russian national electronic library is used online, and an online document (depending on volume) is sold for about 0.1-12 USD. The Russian company "Public Library" charges consumers 8 USD per hour for 25 GB of information. Another company, "Lexis Nexis", defines its annual subscription by the number of people served by the libraries: the price for 50 thousand people is defined as 5000 USD, for 8 thousand people 10000 USD, for 10 thousand people 250 USD, for 20 thousand 500,000 USD, for 35 thousand 1 million USD, for 60 thousand 2 million USD, and 70 thousand U.S. dollars for library networks with more than 2 million potential consumers.
When analyzing the prices of information resource centres that are especially active in the information market, it is observed that marketing technologies and special PR campaigns affect prices. Information is gradually being capitalized.

Economic theorists define three signs of capital:
1. A price generating surplus value or increasing by itself.
2. Resources formed by people for the provision of goods and services.
3. A means of production: a source invested in a certain activity and generating income.

Information can also be characterized by these three signs. At the present stage of rapidly growing informatization of economic relations, the development of market relations demands the emergence of a new global market of marketing-information services, and this market is being formed. The features of modern market relations, concerning the mutual activity of subjects differing in composition, interests and further goals, demand the formation of a new phase creating the opportunity for everybody to use information resources.

Perhaps the so-called "chaos" conditions in the media market will continue for a while. In order to regulate the media market in the near future, a number of legal and organizational measures must be taken throughout the world.
II. CONCLUSION

The article is devoted to the problems of information pricing in the globalizing society and the ways of solving these problems in the library-information function. The theories of US, British and Russian library scientists concerning the problem have been analyzed, and the rational ideas on this matter have been generalized. The practice of information assessment in the pricing market is investigated; its pricing character, the problems of capitalization and the role of library-information enterprises in this process are analyzed.
REFERENCES
1. A.A. Khalafov. The Introduction to Librarianship. 3 volumes. B.: Baku State University, 2003.
2. A.A. Khalafov. Library and Society. B.: BSU, 2011. 370 p.
3. Ismayilov X. İ. The Basis of Library Management. B., 2005. 199 p.
4. Kazimi P.F. Information Engineering in Library Activity. B.: BSU, 2011. 230 p.
5. Memmedova K. (Anvarqizi). The Organization of Payable Services in the Libraries. Baku: CSL, 2008. 120 p.
6. Mamedov M.S. The role of marketing in the management of libraries // Library Science and Bibliography: Theoretical, Methodological and Practical Journal. B., 2009. N1, pp. 47-54.
7. Suslova I.M., Klyuev V.K. The Management of Library and Information Service. M.: Professiya, 2011. 610 p.
8. Kolesnikov M.N. The Management of Library and Information Function. 2009. 390 p.
9. Kazymi P.F. About democracy of library work. Kursk: Journal of Scientific Publications, 2011. pp. 102-105.
10. Iliyaeva I.A., Markov V.N. The Strategic Management of the Library. M.: Knorus, 2008. 184 p.
11. Golubenko N.B. The Information Technologies in Libraries. Rostov on Don: Phoenix, 2012. 282 p.
12. Patrick Forsite. Marketing in Publishing. M.: School of Publishing and Media Business, 2012. 221 p.
13. Peter Broffy. The Assessment of Libraries' Functions: Principles and Methods. M.: "Omega-L", 2009. 357 p.
International Conference and Exhibition on Electronic, Computer and Automation Technologies (ICEECAT’14), May 9-11,2014
Konya, Turkey
Liquid Temperature Control with Fuzzy Logic and PI Controllers and a Dynamic Performance Comparison

A. GANİ(1), H. AÇIKGÖZ(2), Ö.F. KEÇECİOĞLU(1), M. ŞEKKELİ(1)
(1) Kahramanmaraş Sütçü İmam Üniversitesi, Kahramanmaraş/Türkiye, agani@ksu.edu.tr, fkececioglu@ksu.edu.tr, msekkeli@ksu.edu.tr
(2) Kilis 7 Aralık Üniversitesi, Kilis/Türkiye, hakanacikgoz@ksu.edu.tr
Abstract - In industrial plants, liquid temperature control must be performed accurately in tanks filled with water and similar liquids. Liquid temperature control is the most important part of the system layout in many industrial plants, and in some cases failure to control the liquid temperature optimally can lead to material and immaterial losses for these plants. Fuzzy logic control and traditional control methods are used in many industrial applications. In this study, the temperature control of the liquid in a liquid tank was carried out separately with a fuzzy logic controller and a PI controller, and a dynamic performance comparison was made. The simulation was realized in Matlab/Simulink because it allows analysis work and performance-enhancing studies.

Keywords - Fuzzy Logic, PI Controller, Liquid Temperature Control
I. INTRODUCTION

The fast operation of the machines used in today's manufacturing industry is important for increasing production. Minimizing the human factor in production is important for the quality and uniformity of production. The systems that accomplish this are called automation systems [1, 3].

There are certain difficulties in industrial process control: the mathematical model of the process may be unknown, the system to be controlled may be nonlinear, measurement may be difficult, and the model parameters may change greatly over time. Moreover, the desired system behavior and the constraints required to realize it may not be expressible in numerical values. In such cases it is necessary to draw on an expert. Under expert control, verbal expressions such as "hot", "slightly hot", "warm" and "cold" are used instead of exact mathematical relations. Fuzzy control is built on such fuzzy logic relations [4-6]. Since a fuzzy logic controller (FLC) relies entirely on the knowledge and skill of an expert, without needing any mathematical model of the system, it has come into widespread use in industry and gives quite good results [7]. Fuzzy logic is used in many areas, such as water treatment control, metro control, electronic markets, automotive products, heat, liquid and gas flow control, and chemical and physical process control [8]. Today, hot water is used in areas such as greenhouse and building heating, lumbering, the paper industry, weaving, dyeing and finishing in textiles, leather processing, and chemical production. In this study, the temperature control of the liquid in a cylindrical tank was carried out using a fuzzy logic controller and a PI controller, and a dynamic performance comparison was made. The second section describes the working principle of the fuzzy logic controller, the third section the working principle of the PI controller, the fourth section the modelling of the system, and the fifth section the simulation studies obtained in this work; the sixth section discusses the results.

II. FUZZY LOGIC CONTROLLER

The first FLC was realized by Mamdani and Assilian to control a small steam engine. The FLC algorithm consists of a set of heuristic control rules; fuzzy sets are used to express the linguistic terms, and fuzzy logic is used to evaluate the rules. A general FLC block diagram is given in Figure 1. In its general structure, the FLC consists of four basic components: a fuzzification unit, a fuzzy inference unit, a defuzzification unit and a knowledge base.
Figure 1: General FLC structure (input → fuzzification → inference → defuzzification → output, with a knowledge base consisting of a data base and a rule base)

Figure 2: Matlab/Simulink block diagram of the FLC
The fuzzification unit converts the input information received from the system into symbolic values, i.e., linguistic qualifiers. The fuzzy inference unit produces fuzzy results by applying the fuzzy values coming from the fuzzification unit to the rules in the rule base. The connections between the inputs and the outputs are provided by the rules in the rule base; these rules are formed using If-Then logical statements. The value obtained in this unit is converted into a linguistic expression via the rule table and sent to the defuzzification unit. The defuzzification unit derives, from the fuzzy information coming from the decision-making unit, the non-fuzzy, real value to be used in the application. Defuzzification is the process of converting fuzzy information into crisp results; different methods are used for it, the centre-of-gravity method being the most widely used. The knowledge base consists of a data table in which information about the system to be controlled is collected [9-10]. Two inputs were chosen for the FLC designed in this study: the error and the change of error. The error (e) is the difference between the desired level value (r) and the actual level value (y). The change of error, Δe(k), is the difference between the current error e(k) and the previous error e(k-1). With k denoting the iteration number in the simulation program, the error and the change of error are expressed as in Equations 1 and 2:

e(k) = r(k) - y(k)   (1)

Δe(k) = e(k) - e(k-1)   (2)

In the fuzzification process of the FLC, the input and output variables are converted into symbolic expressions. The linguistic variables of the FLC are VS (Very Small), S (Small), Z (Zero), L (Large) and VL (Very Large). The Matlab/Simulink block diagram of the FLC is given in Figure 2. A Gaussian membership function is used for each input given to the system. The Gaussian membership functions used and the control surface are shown in Figures 3-6 [11].

Figure 3: Gaussian membership functions (5 rules) for the error
Figure 4: Gaussian membership functions (5 rules) for the change of error
Figure 5: Gaussian membership functions (5 rules) for the output
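A minimal Python sketch of Equations (1)-(2) and of a Gaussian membership evaluation is given below. The centres and width of the five linguistic terms are illustrative values chosen here, not the ones used in the paper.

import numpy as np

def gauss_mf(x, c, sigma):
    """Gaussian membership degree of x for a fuzzy set centred at c."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def flc_inputs(r_k, y_k, e_prev):
    e_k = r_k - y_k           # Eq. (1): e(k) = r(k) - y(k)
    de_k = e_k - e_prev       # Eq. (2): de(k) = e(k) - e(k-1)
    return e_k, de_k

# Illustrative centres for the five linguistic terms VS, S, Z, L, VL
centres = {'VS': -1.0, 'S': -0.5, 'Z': 0.0, 'L': 0.5, 'VL': 1.0}
e_k, de_k = flc_inputs(r_k=20.0, y_k=18.5, e_prev=2.0)
degrees = {term: gauss_mf(e_k, c, 0.25) for term, c in centres.items()}
print(e_k, de_k, degrees)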
Figure 6: Control surface of the 5-rule FLC

In the fuzzy inference unit, the relationship between the inputs and the output is established by the defined rules. The AND fuzzy operator was used when writing the rules. The rule table formed is given in Table 1.

Table 1: 5x5 rule table (output u for each combination of e and Δe)

Some of the rules formed are given below:
Rule 5: If e is VS and Δe is VL, then u is Z.
Rule 8: If e is S and Δe is Z, then u is S.
Rule 15: If e is Z and Δe is VL, then u is VL.
Rule 22: If e is VL and Δe is S, then u is S.

In the defuzzification unit, the membership weights of the error and the change of error are found for each rule, the minimum of these two membership weights is taken, and the corresponding output membership (u) values are determined. The numerical value obtained at the output of the defuzzification unit is applied to the system [12].

III. PI CONTROLLER

A linear PI controller consists of proportional and integral parts. In control systems, PI controllers are generally used to minimize steady-state errors [13]. The output of the PI controller is given in Equation 3:

u(t) = Kp·e(t) + Ki·∫e(t)dt   (3)

Block diagrams of the system controlled with the PI controller are given in Figures 7 and 8 [14].

Figure 7: Block diagram of the system with the PI controller
Figure 8: Matlab/Simulink block diagram of the PI controller

IV. MODEL OF THE SYSTEM

Figure 9: The liquid temperature system

Heat transfer is a physical mechanism that arises from a temperature difference. In nature, heat flows from high temperature to low temperature. Heat can be transferred from one point to another by three different mechanisms: conduction, convection and radiation. In the application considered here, conduction and convection occur together: heat is lost to the surroundings by conduction and convection, and this loss is compensated by a resistance heater so that the temperature of the system is kept constant. By conservation of energy (the First Law of Thermodynamics) [15]:

Change in the Energy of the System = Energy In - Energy Out + Energy Generated

Esis = Eg - Eç + Eü   (4)
When the temperature of the system is assumed constant, the change in the system energy is zero, and since there is no energy generation in the system, the conservation-of-energy equation takes the following form:

Eg = Eç   (5)

Applying this equation to the system, the electrical energy entering the system equals the heat energy required to heat the liquid plus the heat energy lost from the system to the surroundings:

Ee = Qs + Qk   (6)

The heat energy required to heat the liquid can be calculated as follows:

Qs = m·C·(dT/dt)   (7)

where m is the mass of the liquid, C is the specific heat of the liquid, and dT/dt denotes the time-dependent temperature difference. The heat energy lost from the system to the surroundings is calculated as the sum of conduction and convection [16], Qk = U·ΔT, where U is the overall (combined) heat transfer coefficient and ΔT the time-dependent temperature difference. U is calculated with the following equation:

U = 1 / ( 1/(2π·r1·H·hi) + ln(r2/r1)/(2π·H·k) + 1/(2π·r2·H·hd) )   (8)

where hi and hd are the inner and outer convective heat transfer coefficients, respectively, k is the thermal conductivity, r1 and r2 are the inner and outer radii of the tank, respectively, and H is the height of the tank. Writing the conservation-of-energy equation as a result,

Ee = m·C·(dT/dt) + [ 1 / ( 1/(2π·r1·H·hi) + ln(r2/r1)/(2π·H·k) + 1/(2π·r2·H·hd) ) ]·ΔT   (9)

is obtained. Here the tank dimensions are taken as r1 = 0.5 m, r2 = 0.5 m, tank height H = 2 m, tank wall thickness L = 10 mm, inner and outer convective heat transfer coefficients hi = 2800 W/m²K and hd = 2800 W/m²K, and thermal conductivity k = 50 W/mK.

V. SIMULATION STUDIES

In closed-loop systems, in the process of reaching the steady state, the P (proportional) action reduces the rise time of the system, the I (integral) action drives the system toward overshoot and instability, and the D (derivative) action reduces the overshoot and instability of the system [17-19]. In fuzzy logic control, the control strategy depends on the rule base and on the foresight of the expert [20]. Figure 10 gives the responses of the fuzzy logic controller and the PI controller for 20 °C. Figure 11 gives the system response of the fuzzy logic controller for 35 °C.

Figure 10: Responses of the fuzzy logic and PI controllers for 20 °C (reference, fuzzy and PI curves; temperature (°C) versus time (s), 0-1000 s)
Figure 11: Responses of the fuzzy logic and PI controllers for 10-20 °C (reference, fuzzy and PI curves; temperature (°C) versus time (s), 0-1000 s)
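A minimal Python sketch of the model and controller described above is given below: it integrates the energy balance of Eq. (9) with the PI law of Eq. (3) using Euler steps. The geometry and heat-transfer coefficients are the paper's values; the liquid mass m, specific heat C and the PI gains are not stated in the paper and are assumed here (water-like properties, illustrative gains).

import numpy as np

# Euler simulation of the tank energy balance (Eq. 9) under the PI law (Eq. 3)
r1, r2, H = 0.5, 0.5, 2.0                  # tank radii and height from the paper (m)
hi, hd, k = 2800.0, 2800.0, 50.0           # W/m2K, W/m2K, W/mK, from the paper
# With r1 = r2 as given, ln(r2/r1) = 0 and the wall-conduction term vanishes
U = 1.0 / (1.0/(2*np.pi*r1*H*hi) + np.log(r2/r1)/(2*np.pi*H*k)
           + 1.0/(2*np.pi*r2*H*hd))        # overall coefficient of Eq. (8), W/K

m, C = 1570.0, 4180.0                      # assumed: ~1.57 m3 of water, J/kgK
Kp, Ki = 50000.0, 100.0                    # assumed PI gains (not in the paper)
T, T_amb, T_ref = 10.0, 10.0, 20.0         # initial, ambient, reference temps (C)
dt, integ = 1.0, 0.0
for _ in range(1000):                      # 1000 s, matching the plots' time axis
    e = T_ref - T                          # Eq. (1): control error
    integ += e * dt
    Pe = max(0.0, Kp*e + Ki*integ)         # heater power from the PI law, Eq. (3)
    T += dt * (Pe - U*(T - T_amb)) / (m*C) # Eq. (9) solved for dT/dt
print(T)                                   # ends near the 20 C reference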
VI. CONCLUSIONS

In this study, the temperature control of a liquid in a closed environment was carried out with fuzzy logic and PI controllers under the same conditions, and a dynamic performance comparison was made. As can be seen from the plots obtained from the Matlab/Simulink simulation program, fuzzy logic control gives much better results than PI control in tracking different reference temperatures, since its control strategy is based on a rule base and it has decision-making capability. Fuzzy logic control provided better dynamic performance than PI control.
VII. REFERENCES
[1] Mergen Faik, "Elektrik Makineleri (Doğru Akım Makineleri)", Birsen Yayınevi, 2006.
[2] Kuo Benjamin C., "Otomatik Kontrol Sistemleri", Yedinci Baskı, Prentice Hall, 1995.
[3] J.G. Zigeler, N.B. Nichols, "Optimization Setting for Automatic Controller", Trans. ASME, Vol. 64, pp. 756-769, 1942.
[4] L. A. Zadeh, "Fuzzy sets", Information and Control, Vol. 8, pp. 338-353, 1965.
[5] Cheung, J.Y.M., Cheng, K.W.E., Kamal, A.S., "Motor Speed Control by Using a Fuzzy Logic Model Reference Adaptive Controller", 6th International Conference on Power Electronics and Variable Speed Drives, pp. 430-435, 1996.
[6] Akar, M., "Bulanık Mantık Yöntemiyle Bir Servo Motorun Kontrolü ve Geleneksel Yöntemlerle Karşılaştırılması", Yüksek Lisans Tezi, Marmara Üniversitesi, İstanbul, 2005.
[7] Özçalık, H.R., Türk, A., Yıldız, C., Koca, Z., "Katı Yakıtlı Buhar Kazanında Yakma Fanının Bulanık Mantık Denetleyici ile Kontrolü", KSÜ Fen Bilimleri Dergisi, 11(1), 2008.
[8] Aslam, F., Haider, M.Z., "An Implementation and Comparative Analysis of PID Controller and their Auto Tuning Method for Three Tank Liquid Level Control", International Journal of Computer Applications, Volume 21, No. 8, May 2011.
[9] Jiang, W., "The Application of the Fuzzy Theory in the Design of Intelligent Building Control of Water Tank", Journal of Software, Vol. 6, No. 6, June 2011.
[10] Jiang, W., "The Application of the Fuzzy Theory in the Design of Intelligent Building Control of Water Tank", Journal of Software, Vol. 6, No. 6, June 2011.
[11] Özçalık, H.R., Kılıç, E., Yılmaz, Ş., Gani, A., "Bulanık Mantık Esaslı Sıvı Seviye Denetiminde Farklı Üyelik Fonksiyonlarının Denetim Performansına Etkisinin İncelenmesi", Otomatik Kontrol Ulusal Toplantısı (TOK2013), 26-28 Eylül 2013, Malatya.
[12] Gani, A., Özçalık, H.R., Açıkgöz, H., Keçecioğlu, Ö.F., Kılıç, E., "Farklı Kural Tabanları Kullanarak PI-Bulanık Mantık Denetleyici ile Doğru Akım Motorunun Hız Denetim Performansının İncelenmesi", Akademik Platform Fen ve Mühendislik Bilimleri Dergisi (APJES), Cilt 2, Sayı 1, 2014.
[13] Kuo, B.C., Çeviri: Bir, A., Otomatik Kontrol Sistemleri, Literatür Yayınları: 35, s. 186-187, 1999.
[14] Özlük, F.M., Sayan, H.H., "Matlab GUI ile DA Motor için PID Denetleyicili Arayüz Tasarımı", İleri Teknoloji Bilimleri Dergisi, Cilt 2, Sayı 3, 10-18, 2013.
[15] Michael J. M., Howard N. S., Daisie D. B., Margaret B. B., Principles of Engineering Thermodynamics, 7th edition, John Wiley & Sons Ltd, 2011.
[16] Çengel Y. A., Heat and Mass Transfer: A Practical Approach, 3rd edition, McGraw-Hill, 2007.
[17] Dumanay, A.B., "PID, Bulanık Mantık ve Kayan Kip Kontrol Yöntemleri ile İnternet Üzerinden DC Motor Hız Kontrolü", Yüksek Lisans Tezi, Balıkesir Üniversitesi Fen Bilimleri Enstitüsü, Balıkesir, 15-25, 2009.
[18] Açıkgöz, H., Keçecioğlu, Ö.F., Şekkeli, M., "Genetik-PID Denetleyici Kullanarak Sürekli Mıknatıslı Doğru Akım Motorunun Hız Denetimi", Otomatik Kontrol Ulusal Toplantısı (TOK2013), 26-28 Eylül 2013, Malatya.
[19] Coşkun, İ., Terzioglu, H., "Gerçek Zamanda Değişken Parametreli PID Hız Kontrolü", 5. Uluslararası İleri Teknolojiler Sempozyumu (IATS'09), 13-15 Mayıs 2009, Karabük, Türkiye.
[20] Şekkeli, M., Yıldız, C., Özçalık H.R., "Bulanık Mantık ve PI Denetimli DC-DC Konvertör Modellenmesi ve Dinamik Performans Karşılaştırması", 4. Otomasyon Sempozyumu, 23-25 Mayıs 2007, Samsun, Türkiye.
International Conference and Exhibition on Electronic, Computer and Automation Technologies (ICEECAT’14), May 9-11,2014
Konya, Turkey
Semi-Automatic Segmentation of the Pancreas on CT Images with the Region Growing Method

S. DARGA, Sakarya University, Sakarya/Turkey, sdarga@sakarya.edu.tr
H. EVİRGEN, Sakarya University, Sakarya/Turkey, evirgen@sakarya.edu.tr
M. ÇAKIROĞLU, Sakarya University, Sakarya/Turkey, muratc@sakarya.edu.tr
E. DANDIL, Bilecik Şeyh Edebali University, Bilecik/Turkey, emre.dandil@bilecik.edu.tr
Abstract - Pancreatic cancers rank among the leading causes of death from cancer; with a 5-year survival rate of only about 5% and very difficult early diagnosis, they constitute a fatal disease. Because the pancreas lies at the back of the abdominal cavity and is intertwined with the surrounding organs and vessels in an atypical structure, locating it correctly and segmenting it are quite difficult. In this study, segmentation, one of the most important operations affecting the performance of a Computer Aided Detection (CAD) system intended to support radiologists in detecting the pancreas on images, was carried out with the Region Growing method. CT images in DICOM format taken from 25 patients were converted to JPEG format using suitable slices, and the feasibility of the segmentation process with the Region Growing Algorithm was demonstrated.
I. INTRODUCTION
Pancreatic cancers rank fourth among the causes of cancer death in men and fifth in women [1]. Because pancreatic cancer follows an aggressive course, the 1-year survival rate of patients after first diagnosis is below 20%, and the 5-year survival rate is only around 3% [2, 3]. Resection of the tumour can raise the 5-year survival rate to about 10%, but resection is possible in only 10-15% of patients [4, 5]. Detecting the disease at an early stage and assessing its spread are thought to make resection possible and thereby lead to a marked increase in survival times [6].
Ultrasonography (US), computed tomography (CT), magnetic resonance imaging (MRI), endoscopic retrograde cholangiopancreatography (ERCP), angiography and endoscopic ultrasonography (EUS) are the radiological modalities used for imaging in the diagnosis of pancreatic cancer. CT is the most frequently used imaging method and has high accuracy in the diagnosis and staging of pancreatic cancers [7, 9].
In recent years, alongside medical treatment methods, CAD systems that ease the physician's decision-making and can detect disease at an early stage have come into frequent use. The design of these CAD systems consists of certain steps: the noise and distortions on images taken from a medical imaging device are removed using common image enhancement algorithms, which minimises the problems to be encountered in the subsequent image processing steps. The next step is segmentation, which means merging regions with the same properties or extracting the boundaries between organs, so that the image regions belonging to one organ are separated from those belonging to other organs. Once the region/organ of interest (ROI) is determined, the other regions are removed from the image, avoiding computational complexity and preventing errors that could arise in later steps. Afterwards, feature extraction, feature selection and classification are performed on the images with algorithms suited to the criteria sought, and the CAD system to be realised is thus designed.
There is no single, standard approach to image segmentation. The appropriate segmentation technique varies with the type of image, the kind of application and the structure of the organ. No segmentation method applicable to all medical images with acceptable accuracy has yet been realised.
Because the pancreas lies at the back of the abdominal cavity and is intertwined with the surrounding organs and vessels in an atypical structure, locating it correctly and segmenting it are quite difficult. Fully automatic pancreas segmentation achieves lower accuracy than semi-automatic methods. Shimizu et al. presented two approaches for fully automatic pancreas segmentation on CT images using the level-set method [10]; they achieved segmentation accuracies of 32.5% on single-phase CT images and 57.9% on multi-phase CT images. Kitasaka et al. performed pancreas segmentation on CT images using a modified region growing algorithm [11]; their study gave very good results for 12 cases, moderate results for 6 cases and poor results for 4 cases. In one study, Wolz et al. achieved pancreas segmentation at a rate of 49.6% [12].
In this study, pancreas segmentation was carried out with a semi-automatic Region Growing method. CT images in DICOM format taken from 25 patients were converted to JPEG format using suitable slices, and the feasibility of segmentation with a Region Growing Algorithm, which starts from a user-selected seed pixel and adds pixels with similar properties to the region, was demonstrated.
The remainder of the paper is organised as follows: Section 2 introduces the processing steps of the developed system and the methods used. Section 3 explains the experimental studies and findings. Section 4 presents the conclusions drawn from the study and ends the paper.
II. MATERIALS AND METHODS
A. CT Image Data Set
In this study, the most suitable slice showing the whole pancreas was selected from the DICOM-format CT images of 25 different patients, and 512x512 JPEG images were created from them. The images forming the database were obtained from BezmiAlem Vakıf University.

B. Segmentation Stages
The stages of the realised segmentation process are shown in Figure 1.

Figure 1: Block diagram of the proposed pancreas segmentation system (image enhancement/pre-processing, selection of the seed pixel, segmentation with the region growing technique, removal of irrelevant regions)

C. Image Enhancement
In the image pre-processing step, the aim is to increase the quality of the image by removing its noise and distortions; in this way, the problems that may be encountered in the subsequent segmentation steps are minimised. For this purpose, a 3x3 median filter was applied to enhance the images, followed by histogram equalization to minimise the gray-level differences between pixels. Histogram equalization reduces the gray-level intensity differences between neighbouring pixels, minimising the number of pixels that should fall inside the pancreas region but might otherwise be left out, which in turn makes a segmentation with a higher accuracy rate possible. The original CT image and the CT image obtained after the enhancement steps are shown in Figure 2.

Figure 2: (a) Original CT image. (b) CT image after median filtering. (c) CT image after histogram equalization.
D. Region Growing Method
The Region Growing Method starts with a seed pixel selected on the desired region of the image; the region is then grown by testing how similar the neighbouring pixels are to the selected seed pixel in terms of colour, intensity and brightness. The first pixel or pixel group is selected on the image either manually or automatically. A similarity value is computed between the seed pixel and a new/candidate pixel: if the new pixel is below this similarity value, it is included in the region; if it is above the value, that is, the new pixel is not similar enough to the selected seed pixel, a new candidate pixel is selected and the steps are repeated.

Let R be a region with N pixels and p a new candidate pixel neighbouring R. The mean X̄ and the variance S² are given in Eqs. (1) and (2):

X̄ = (1/N) Σ(r,c)∈R I(r,c)    (1)

S² = (1/N) Σ(r,c)∈R (I(r,c) − X̄)²    (2)

The similarity value T is defined by the following formula:

T = [ (N(N−1)/(N+1)) · (p − X̄)² / S² ]^(1/2)    (3)

Similarity denotes the minimum difference between the mean gray level of two adjacent pixels or pixel sets and the gray level of a pixel to be newly added. If, as shown in Eq. (4), this difference is lower than the similarity threshold, the pixels belong to the same region; the mean and variance are then recomputed and the process continues until no new neighbouring pixel remains. If T is greater than the similarity threshold, pixel p is not included in region R, a new pixel is selected, and the operations are repeated.

P(Ri) = true if |p − p_seed| ≤ T    (4)
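As an illustration of Eqs. (1)-(4), the following Python sketch grows a region from a user-selected seed by accepting 8-connected neighbours whose T statistic stays below a threshold. It is a minimal reading of the method described above, with a hypothetical threshold t_max, not the authors' implementation.

import numpy as np
from collections import deque

def region_grow(img, seed, t_max=2.0):
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    region = [float(img[seed])]          # gray levels of the pixels accepted so far
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):        # growth in 8 directions
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc]:
                    n = len(region)
                    mean = np.mean(region)
                    var = np.var(region) + 1e-9          # S^2, guarded against zero
                    p = float(img[rr, cc])
                    # Eq. (3): T = sqrt( N(N-1)/(N+1) * (p - mean)^2 / S^2 )
                    t = np.sqrt(n * max(n - 1, 1) / (n + 1) * (p - mean) ** 2 / var)
                    if t <= t_max:                       # Eq. (4): accept similar pixel
                        mask[rr, cc] = True
                        region.append(p)
                        queue.append((rr, cc))
    return mask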
III. PANCREAS SEGMENTATION USING THE REGION GROWING METHOD
At this stage of the realised system, the aim is to separate the pancreas region from the other organs on the CT image and so perform the segmentation. Many methods are used in the literature for this purpose, such as FCM, Otsu, Watershed, Region Growing, Region Split and Merge, and Graph Cuts. In this study the semi-automatic region growing algorithm was used. Region growing is an area-growing operation that starts from a specified seed pixel and proceeds by including in the same area the neighbouring pixels with similar properties such as gray-level intensity and colour. It is an iterative process in which all pixels are handled and the area is grown recursively in 8 directions from the specified point, so that regions with closed boundaries take shape.
The first step of region growing is to determine the initial/seed pixel. A starting point can be determined automatically here, according to the desired intensity or colour values, or with morphological and mathematical operations based on the shape, position and dimensions of the area sought. However, studies in the literature have not yet achieved sufficient segmentation accuracy with automatically seeded region growing. For this reason, it is thought that having the user determine the initial pixel will increase the success rate, and therefore a semi-automatic region growing process was applied in the developed system. Figure 3 shows CT images segmented with the region growing method.

Figure 3: (a) Seed point selection on the original CT image. (b) Determination of the pancreas with the Region Growing Method. (c) Removal of irrelevant areas and the resulting segmented image.

In this study, the most suitable slice showing the whole pancreas was selected from the DICOM-format CT images of 25 different patients and 512x512 JPEG images were created. Reference images, created by manually delineating the pancreas area on each image, were compared with the images segmented using the region growing method. The comparison was evaluated as the intersection between the area marked as pancreas in the reference images and the area segmented with the region growing method, and the ZSI index was used to assess the accuracy of the results. The ZSI value lies between 0 and 1; values close to 1 represent successful and values close to 0 unsuccessful cases. In the formula given in Eq. (5), A denotes the manually delineated area and M the area found with the region growing method.

ZSI = 2(A ∩ M) / (A + M)    (5)
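For completeness, Eq. (5) reduces to a one-line Python function; the numbers from the first row of Table 1 (A = 210, M = 250, overlap = 212) reproduce the reported ZSI of 0.92.

def zsi(a, m, overlap):
    # Zijdenbos similarity index: 2*|A ∩ M| / (|A| + |M|), in [0, 1]
    return 2.0 * overlap / (a + m)

print(round(zsi(210, 250, 212), 2))   # -> 0.92, matching Table 1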
Table 1: Comparison of Manual Segmentation and Segmentation with the Region Growing Method.

Image    A      M      A∩M    ZSI
1        210    250    212    0.92
2        140    170    130    0.83
3        190    180    170    0.91
4        234    212    208    0.93
5        312    330    308    0.95
6        88     96     70     0.76
7        184    203    180    0.93
8        162    177    150    0.88
9        166    146    130    0.83
10       133    176    112    0.72
11       242    198    190    0.86
12       302    288    264    0.89
13       146    124    112    0.82
14       98     114    90     0.84
15       344    361    322    0.91
16       278    260    222    0.82
17       134    150    120    0.84
18       90     112    88     0.87
19       122    143    102    0.76
20       96     128    90     0.80
21       230    204    190    0.87
22       98     112    86     0.81
23       149    130    120    0.86
24       280    254    230    0.86
25       162    136    130    0.87
Table 1 shows, for all 25 images, both the area segmented with the region growing method and the manually drawn reference area. In the table, A denotes the pixel count of the manually drawn area and M the pixel count obtained with the region growing method. The results are very close to each other; the average ZSI over all images is 0.85.
IV. CONCLUSIONS
In this study, segmentation, one of the most important operations affecting the performance of a Computer Aided Detection (CAD) system intended to support radiologists in detecting the pancreas on images, was carried out with the Region Growing method. CT images in DICOM format taken from 25 patients were converted to JPEG format using suitable slices, and the segmentation was performed with the Region Growing Algorithm. Because the pancreas has an atypical structure and is intertwined with the other organs and vessels in the abdominal region, the most suitable CT slice was selected manually; this is why high accuracy rates could be reached. On randomly taken slices, a region growing method based only on image properties such as intensity, colour and brightness may not produce correct results. In that case, methods based on connectivity and adjacency information, or on morphological and mathematical operations, should be used in addition to the region growing method.
ACKNOWLEDGMENT
We thank Assoc. Prof. Dr. Orhan KOCAMAN of the Department of Gastroenterology, BezmiAlem Vakıf University, for his help in providing the pancreas CT images used in this study.
REFERENCES
[1] Parker SL, Tong T, Bolden S, et al. Cancer statistics, 1997. CA Cancer J Clin 1997; 47: 5-27.
[2] Warshaw AL, Fernandez-Del Castillo C. Pancreatic carcinoma. The New England Journal of Medicine 1992; 326: 455-465.
[3] National Cancer Institute. Annual cancer statistics review 1973-1988. Bethesda, Md.: Department of Health and Human Services, 1991. (NIH Publication No. 91-2789.)
[4] Di Magno EP, Malagelada JR, Taylor WF, et al. A prospective comparison of current diagnostic tests for pancreatic cancer. N Engl J Med 1977; 297: 737-742.
[5] Gudjonsson B. Cancer of pancreas: 50 years of surgery. Cancer 1987; 60: 2284-2303.
[6] Freeny PC, Marks WM, Ryan JA, Traverso LW. Pancreatic ductal adenocarcinoma: diagnosis and staging with dynamic CT. Radiology 1988; 166: 125-133.
[7] Freeny PC. Radiologic diagnosis and staging of pancreatic ductal adenocarcinoma. Radiol Clin North Am 1989; 7: 121-128.
[8] Hommeyer SC, Freeny PC, Crabo LG. Carcinoma of the head of the pancreas: evaluation of the pancreaticoduodenal veins with dynamic CT - potential for improved accuracy in staging. Radiology 1995; 196: 233-238.
[9] Freeny PC, Traverso LW, Ryan JA. Diagnosis and staging of pancreatic adenocarcinoma with dynamic computed tomography. Am J Surg 1993; 165: 600-606.
[10] A. Shimizu, R. Ohno, T. Ikegami, H. Kobatake, S. Nawano, and D. Smutek, "Segmentation of multiple organs in non-contrast 3D abdominal images," Int J Computer Assisted Radiology and Surgery, vol. 2, pp. 135-142, 2007.
[11] T. Kitasaka, M. Sakashita, K. Mori, Y. Suenaga, and S. Nawano, "A method for extracting pancreas regions from four-phase contrasted 3D abdominal CT images," Int J Computer Assisted Radiology and Surgery, vol. 3, p. 40, 2008.
[12] Wolz R, Chu C, Misawa K, Mori K, Rueckert D. Multi-organ abdominal CT segmentation using hierarchically weighted subject-specific atlases. MICCAI, LNCS 7510: 10-17, 2012.
International Conference and Exhibition on Electronic, Computer and Automation Technologies (ICEECAT’14), May 9-11,2014
Konya, Turkey
Serially Communicating Lift System
A.GÖLCÜK1 and H.IŞIK2
1
Karamanoglu Mehmetbey University, Karaman/Turkey, ademgolcuk@kmu.edu.tr
2
Selcuk University, Konya/Turkey, hisik@selcuk.edu.tr
Abstract - Lifts/elevators are mechanical equipment for moving people or goods in a vertical direction via movable cabins or platforms on guide rails. They provide quick, easy, comfortable and safe transport between floors. In the machine rooms of lifts, there are cards that guide the cabin by interpreting the received commands. For sending and receiving data between the machine room and the cabin, there are generally two 24x0.75 flexible cables, varying with the number of floors. In order to eliminate these cables, we designed a Serially Communicating Lift System, which enables data exchange between the cabin and the lift control card. The system was tested in a real lift system for two months; after the faults found during testing were rectified, the system performed as intended. It was then repeatedly tested for 45 days. These tests affirmed the applicability of our Serially Communicating Lift System.
Keywords: Serial Communication, Flexible cable, Machine room, Lift card.

I. INTRODUCTION

The system, driven from a machine room, that enables the transportation of people or loads in a vertical direction by a cabin or cage moving on guide rails in a shaft/hoistway is called a lift [11]. Thanks to technology, lifts have been at the disposal of humanity, making everyday life easier for people [2].
The flexible cable connection between the cabin and the mainboard carries the following signals [4, 6]:
• Display Data
• Floor Buttons
• Floor Counters
• Automatic Door Data
• Top and Bottom Power Breaker
• Overload Switch
• Cabin Lamp
Figure 1 - Basic Structure of a Lift

Figure 1 presents the basic structure of lifts. The main elements of lifts are as follows:
Control Panel: This panel is the computer of the lift.
Engine: It moves the lift cabin according to the information coming from the control panel.
Counter-weight: The weight attached to the other end of the rope to balance the cabin weight within the shaft.
Flexible Cables: The cables enabling communication between the cabin and the control panel.
Cab/Cabin: The lift part that carries the load.
The data conveyed from the control panel to the cabin:
Display Data: The data sent to the display segments to show which floor the cabin is on.
Automatic Door Data: The data coming from the control panel to open and close the automatic door of the cabin.
Cabin Lamp: The data coming from the control panel to switch the cabin lamp on or off.
Button LEDs: The data sent to the cabin LEDs so that, when someone presses a button, its light stays on until the cabin reaches the requested floor.
The data conveyed from the cabin to the control panel:
Floor Buttons: The data sent to the control panel when someone presses a button inside the cabin.
Floor Counters: The data sent to the control panel as the cabin moves up or down.
Top and Bottom Power Breaker: The data sent to the control panel when the cabin reaches the bottom or the top point.
Overload Switch: The data sent to the control panel if the cabin is overloaded.
II. MATERIAL AND METHOD
A. Serial Communication Technique
This technique is crucial for business organisations, automation systems, and factories with many machines and motors, in terms of minimising complexity, reducing cost, making control easier, programming the system as desired, and removing the need for an additional data link when new devices are added to circuits.
Today, serial communication has a wide area of use. Microprocessors and devices such as modems, printers, floppy disks, hard disks and optical drives communicate serially. In addition, serial communication is used wherever the number of lines is to be reduced. In serial communication, data are conveyed unidirectionally or bidirectionally on a single channel; as the number of lines is reduced, the data rate is correspondingly lower.
Serial communication involves two modes of data transfer: one is synchronous and the other asynchronous.
Synchronous communication: In synchronous data transmission, data and the clock pulse are transferred together, which dispenses with the need for start and stop bits. Moreover, synchronous communication is faster than asynchronous communication, as it is based on character blocks; however, it is more expensive and involves more complicated circuits. Synchronous communication means that the transmitter and receiver operate simultaneously, which is why it needs clock pulses. Transmission is initiated in the following way: first, the transmitter sends a particular character known to both sides, indicating the start of communication. When the receiver reads this character, communication starts and the transmitter sends data. This transfer goes on until the data block is completed or the synchronisation between the transmitter and the receiver is lost.
Asynchronous communication: The leading characteristics of asynchronous communication are as follows:
• Transfers are based on characters.
• The parameters of each data communication device must be identical.

Figure 2 demonstrates the basic form of asynchronous communication and an asynchronous data block. An asynchronous character consists of a start bit, data bits, a parity bit, and stop bits.

Figure 2 - Asynchronous data block

Start bit: Signals that a character has started to be sent. It is always the first bit of the transfer.
Data bits: The groups composing these bits represent all of the characters and the other keys on the keyboard.
Parity bit: Used to check whether characters were transferred to the other side properly. If the receiver detects that the received parity bit does not equal the parity bit it computes, the character is rejected at that moment.
Stop bit: Indicates that a character has finished and provides idle time between characters. After a stop bit is sent, new data can be sent at any time.
Baud: A unit of data communication speed expressed in bits per second; it is the rate of change of the analogue signal.

Asynchronous serial data links carry data in the form of ASCII-coded characters. Asynchronous communication requires 10 bits in total to transmit 7 useful data bits, which is why it is inefficient to a certain extent. [1]
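A small Python sketch of the 10-bit frame just described (1 start bit + 7 data bits + 1 parity bit + 1 stop bit); this is an illustration of the general scheme assuming even parity, not the lift system's code.

def async_frame(ch, even_parity=True):
    data = ord(ch) & 0x7F                         # 7 useful data bits (ASCII)
    bits = [(data >> i) & 1 for i in range(7)]    # LSB first, as on the line
    ones = sum(bits)
    parity = ones % 2 if even_parity else (ones + 1) % 2
    return [0] + bits + [parity] + [1]            # start = 0, stop = 1 -> 10 bits

print(async_frame("S"))                           # the 10 line levels for 'S'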
B. Structure of the System
A lift is basically made up of an engine, a control panel, a counter-weight, a cabin, flexible cables, power cables, etc. This study aims to show how a lift can use serial communication, dispensing with the 24x0.75 flexible cables that are used in the electronic communication system of the lift and constantly move along with the cabin. Figure 3 shows the system we have developed. As understood from the diagram, two circuits were designed, one for the machine room and the other for the cabin. These circuits include a microcontroller module, a data input module, a data output module, a power module, and serial communication modules. They pave the way for removing the flexible cables in the lift system.
Figure 3 - The lift system with serial communication link [2]
The basic electronic materials in the serial communication circuits are: the PIC16F877A microcontroller, the ICs (Integrated Circuits) CD4067, ULN2803, 74LS273, UDN2981 and MAX485, the LM2576T-5 step-down switching regulator, the PC817 photocoupler, etc.
PIC16F877A microcontroller IC:
This is a 16F-series microcontroller with 33 input/output pins: six on PORTA, eight on PORTB, eight on PORTC, eight on PORTD, and three on PORTE. It has three timers and one analog-to-digital converter. It is a programmable IC to which the software we developed for this system is uploaded; in short, it is the brain of the serially communicating lift system. Each of the two circuits we designed contains one of these ICs. Via the written software, the data this IC receives are transmitted from the cabin to the lift card module in the machine room through serial communication, and from the lift card back to the cabin module through serial communication. [5]
CD4067 IC:
This is a 16-input, single-output multiplexer IC. Its purpose is to scan 32 inputs through six PIC pins: instead of using 32 pins, only six PIC pins are used. Two of these ICs were used on each of the two cards. The control bits of each IC are driven from bits 0-3 of PORTA of the PIC16F877, and the output pins of the CD4067 ICs are read from bits 4 and 5 of PORTA. In this way, 32 inputs are monitored via six PIC16F877 pins.
ULN2803 IC:
This IC consists of eight Darlington transistors, each composed of two NPN transistors. It was used for the displays inside the cabin; the cabin displays are common-anode seven-segment displays, and the ULN2803 drives their segment (cathode) ends. This IC also passes the data coming from the buttons and the lift card to the CD4067 inputs.
Figure 4 - Connection diagram of serial data communication between PICs
PC817 photocoupler:
The cabin lamp used in the lift system runs on 220V AC, and its automatic door on 190V DC. In order to eliminate the cables between the cabin and the machine room, our system includes a PC817-based module that takes its trigger from the 220V AC and 190V DC voltages. The circuits were protected from high voltages and large currents thanks to this module.
74LS273 IC:
This is an eight-input, eight-output D-type flip-flop IC; it holds its eight outputs until the next clock pulse. Four ICs of this type were used in each circuit. They hold the data going from the PIC to the lift card and the cabin until the next clock pulse.
UDN2981 IC:
This is an eight-input, eight-output IC with an output voltage capacity of up to 50V. Four ICs of this type were used in the lift card module and three in the cabin module. Its purpose is to raise the 5V output of the 74LS273 to 24V and convey it to the lift card and cabin inputs.

LM2576T-5 step-down switching regulator:
As the circuit components of the lift system operate on 24V, our circuits were supplied with 24V adaptors, while the TTL and CMOS ICs used in our circuits operate on +5V. A power circuit module was therefore designed to lower +24V to +5V; it converts input voltages between 7-40V to a 5V output voltage.

SN75176N IC:
One IC of this type was used in each module. Serial data communication between the PIC16F877 microcontrollers takes place by way of this IC, which allows serial communication over distances of up to 300 m. Figure 4 presents the connection diagram of the serial data communication of the circuits.

C. Serial Communication Software
The PicBasic Pro commands enabling serial communication are as follows:

Data sending command:
SEROUT2 VERIOUT,188,["S","U",BITLER1]

This command transmits data at a 4800 baud rate. It transmits the letters "S" and "U" before sending the BITLER1 data.

Data receiving command:
SERIN2 VERIIN,188,100,ATLA,[WAIT ("SU"),BITLER1]

This command reads the data sent at a 4800 baud rate and transfers the data coming after the regularly sent letters "S" and "U" into the BITLER1 variable.

Each circuit has four of these commands to send and receive data. As each command conveys 8 bits of data, 32 bits are transmitted from the cabin to the machine room, and 32 bits from the machine room to the cabin.
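The framing behaviour of these commands can be modelled as follows; this Python sketch (hypothetical helper names, not the PIC firmware) shows how each 8-bit packet is prefixed with the sync letters "S" and "U", and how a receiver resynchronises on that header, four packets carrying the 32 bits in each direction.

def frame(payload):
    out = bytearray()
    for b in payload:                    # four packets -> 32 bits per direction
        out += b"SU" + bytes([b & 0xFF])
    return bytes(out)

def deframe(stream):
    data, i = [], 0
    while i + 2 < len(stream):
        if stream[i:i + 2] == b"SU":     # wait for the "SU" header, like SERIN2
            data.append(stream[i + 2])
            i += 3
        else:
            i += 1                       # skip noise until resynchronised
    return data

print(deframe(frame([0xA1, 0x02, 0x03, 0x04])))   # -> [161, 2, 3, 4]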
III. APPLICATION
The microcontroller software in this Serially Communicating Lift System was written using the PicBasic Pro program. The printed circuits were prepared with Ares 7 Professional from the Proteus 7 Professional package, using a double-sided printing technique. In addition, before the circuits were transferred to printed circuit boards, the working principles of the code and the ICs used were tested in the Isis 7 Professional simulation program.
We designed two serial communication circuits, one for the cabin and the other for the machine room. These circuits were first tested on a lift prototype. After we got the desired
results, the circuits were installed in a real lift system together with lift technicians. At the first stage, malfunctions were detected and notes were taken. Following the necessary changes in software and hardware, the system was tested again. After the tests bore fruit, it was concluded that the serial communication technique can be used in lift systems.
Table 1 - Flexible Cable Prices [7, 10]

Nominal Section   Diameter (mm)   Height (mm)   Weight (g/m)   Price (per m)   Price per m (VAT incl.)
12x0.75           33.8            4.2           284            3 TL            3.54 TL
16x0.75           44.4            4.2           366            4.7 TL          5.55 TL
20x0.75           55              4.2           463            4.96 TL         5.85 TL
24x0.75           75.6            4.4           740            5.28 TL         6.23 TL

Table 2 - Cost Comparison with Respect to the Number of Floors [7, 12]

Number of Floors   Cabling Cost with Flexible Cables   Serial Communication System
5                  186.9 TL                            82.18 TL
10                 373.8 TL                            82.18 TL
15                 560.7 TL                            82.18 TL
20                 747.6 TL                            82.18 TL
25                 934.5 TL                            82.18 TL
30                 1121.4 TL                           82.18 TL
35                 1308.3 TL                           82.18 TL
40                 1495.2 TL                           82.18 TL

Table 1 shows the prices of the flexible cable used in lift systems, and Table 2 presents a cost comparison between a flexible cable system and a serial communication system. While calculating the cost of flexible cable, the inter-floor distance was taken as 3 m, and two flexible cables of 24x0.75 section are used. The flexible cable cost for a five-storey building was found to be 5 * 6.23 TL * 3 m * 2 = 186.9 TL. It can be seen that the more floors there are, the more economical the serial communication system is in comparison to the flexible cable system.
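The comparison in Table 2 follows directly from the figures above; a short Python sketch under the same assumptions (3 m per floor, two 24x0.75 cables at 6.23 TL/m VAT included, and a floor-independent 82.18 TL serial-communication hardware cost):

CABLE_TL_PER_M = 6.23      # 24x0.75 flexible cable, VAT included (Table 1)
FLOOR_HEIGHT_M = 3
N_CABLES = 2
SERIAL_SYSTEM_TL = 82.18   # fixed cost, independent of the number of floors

def flexible_cable_cost(floors):
    return floors * FLOOR_HEIGHT_M * N_CABLES * CABLE_TL_PER_M

for floors in (5, 10, 40):
    print(floors, round(flexible_cable_cost(floors), 1), SERIAL_SYSTEM_TL)
# 5 floors -> 186.9 TL vs 82.18 TL, matching the calculation in the text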
IV. CONCLUSION
This system eliminates the two 24x0.75 flexible cables used in the electronic communication system of lifts, which move along with the cabin. Moreover, it obviates possible failures due to cable ruptures and minimises the time spent on repairs. There are two LEDs on each circuit to show whether serial communication is taking place: if the red LED is on, serial communication is failing; if the blue one is on, serial communication is running. Whether there is a systemic failure can thus be detected easily.
EXCERPT
This study was excerpted from the PhD dissertation entitled "Design and Actualisation of the RF-Controlled Lift System", written for the Selçuk University Institute of Science in 2010.
REFERENCES
[1] Canbolat, A., 2009. Mikrodenetleyici İle Tek Hat Seri İletişim, http://320volt.com/mikrodenetleyici-ile-tek-hat-seri-iletisim-pic16f84 [Date Accessed: 4 November 2009]
[2] Gölcük A, 2010. Design and Actualisation of the RF-Controlled Lift System, PhD Thesis, Graduate School of Natural and Applied Sciences, Selçuk University, 75 p., Konya.
[3] Görgülü, Y. E., 2007. RTX51 İle Asansör Otomasyonu, Master's Thesis, Süleyman Demirel Üniversitesi Fen Bilimleri Enstitüsü, 100 p., Isparta.
[4] Kan, İ. G., 1997. Asansör Tekniği: Elektrikli, Birsen Yayınevi, İstanbul, 326 p.
[5] Megep, 2007. Bilişim Teknolojileri/Mikrodenetleyiciler 1, Ankara, http://megep.meb.gov.tr/mte_program_modul/modul_pdf/523EO0191.pdf [Date Accessed: 4 September 2009]
[6] Mikrolift Mühendislik, 2007. ML60X Programlama (Ver: 2.78), Konya, 3 December 2007, http://www.mikrolift.com.tr/tr/pdf/l60xskullanimklavuzu.pdf [Date Accessed: 5 April 2010]
[7] Nergis Kablo, 2010. H05VVH6-F Kablo, Nergiz Kablo San. ve Tic. Ltd. Şti., İstanbul, http://www.nergizkablo.com.tr/urun_227iec71f.htm [Date Accessed: 5 May 2010]
[8] Özden, S., 2007. Bir Elektrikli Asansör Sisteminin Bulanık Mantık Tekniği İle Denetimi, Master's Thesis, Gazi Üniversitesi Fen Bilimleri Enstitüsü, 116 p., Ankara.
[9] Sarıbaş, Ü., 2006. Akıllı Bir Asansör Sisteminin Benzetimi, Master's Thesis, Gazi Üniversitesi Fen Bilimleri Enstitüsü, 129 p., Ankara.
[10] Sword Lift, 2010. Sword Lift Asansör Market, http://www.onlineasansormalzemesi.com/24X075-MM-FLEXIBLE-KABLO-pid-106.html [Date Accessed: 10 April 2010]
[11] Texier, G., 1972. Asansör Tesisleri: Temel Bilgiler, Konstrüksiyon, Proje ve Hesap Esasları, Çeviren Uğur Köktürk, İnkılap ve Aka Basım Evi, İstanbul, 166 p.
[12] Tüm Elektronik, 2010. İstanbul, http://www.bluemavi.com/ [Date Accessed: 10 May 2010]
International Conference and Exhibition on Electronic, Computer and Automation Technologies (ICEECAT’14), May 9-11,2014
Konya, Turkey
Comparison of 3 kW Photovoltaic Systems Composed of a-Si and c-Si Panels under Kahramanmaraş Conditions
Ş. YILMAZ1, H. R. ÖZÇALIK2
1
Kahramanmaraş Sütçü İmam Üniversitesi, Kahramanmaraş/Türkiye, sabanyilmaz1@hotmail.com
2
Kahramanmaraş Sütçü İmam Üniversitesi, Kahramanmaraş/Türkiye, ozcalik@ksu.edu.tr
Abstract - Growing interest in renewable energy sources and new technological developments have had a positive effect on the spread of photovoltaic systems. Although the costs of photovoltaic systems decrease day by day, their initial costs are still high, so efficient planning of photovoltaic systems is very important. Choosing the right panel type affects the efficiency of the system. In this study, performance and cost analyses of Thin Film (a-Si) and multi-crystalline silicon (c-Si) panel types were carried out under the conditions of Kahramanmaraş, which is quite favourable for solar energy production.

Keywords - a-Si, c-Si, Photovoltaic, Cost Analysis
I. INTRODUCTION
Currently, world energy consumption is about 10 TW per year, and it is estimated to be approximately 30 TW in 2050 [1]. The rising cost of fossil fuels, the most important source meeting energy demand, together with global warming and environmental issues, is increasing the importance and use of clean and renewable energies [2]. Among renewable energy sources, solar energy has come to the fore in recent years, and photovoltaic systems have become one of the promising renewable energy systems that convert solar energy into electrical energy [3]. Rapidly spreading photovoltaic systems must be planned optimally so that they do not face technological and economic problems [4, 5]. In the optimal planning of photovoltaic systems, compatibility between the pieces of equipment is as important as planning according to geographic location and climate conditions. In this study, Thin Film (a-Si) and multi-crystalline silicon (c-Si) panel types were compared for Kahramanmaraş, whose conditions are quite favourable for solar energy production. The panel types are shown in Figure 1.

Figure 1: Thin Film and multi-crystalline silicon panel types
II. CLIMATE DATA OF KAHRAMANMARAŞ

With an annual average irradiation of 1582.5 kWh/m2, Kahramanmaraş is a very good region for solar energy production [6]. Table 1 gives, as monthly averages, the climate data of Kahramanmaraş needed for photovoltaic system design.

Table 1. Climate conditions of Kahramanmaraş [6]

Month        Sunshine Duration (h/day)   Temperature (°C)   Irradiation (kWh/m2)
January      4.21                        4.43                59.70
February     5.47                        4.97                77.40
March        6.61                        9.03               125.10
April        7.85                        13.91              152.70
May          9.57                        20.19              188.70
June         11.49                       26.01              204.30
July         12.07                       30.36              203.10
August       11.43                       29.25              180.00
September    10.13                       24.03              151.80
October      7.55                        18.00              113.40
November     5.56                        10.78               72.00
December     3.86                        5.91                54.30

With a monthly average irradiation of 131.875 kWh/m2, an average temperature of 16.405 °C and a total of 2874 hours of sunshine per year, Kahramanmaraş is an important region for solar energy production.
III. PROPERTIES OF THE PHOTOVOLTAIC SYSTEM
The schematic of the system is shown in Figure 2. To obtain a power of 3 kW, 10 panels of 300 W each were used for each photovoltaic system. The Thin Film panels were connected series-parallel in a (5x2) arrangement and the multi-crystalline silicon panels in a (10x1) arrangement. Since the inverter DC input is 220-480 V, the voltage obtained from the series-parallel combination of the panels must lie within this range.
Figure 2. Schematic of the system
If the Thin Film panels are connected with 5 in series and 2 in parallel, Vmp = 300 V; if the multi-crystalline silicon panels are connected with 10 in series and 1 in parallel, Vmp = 361 V. Since these values lie within the 220-480 V range, the voltage required for the inverter to operate is supplied (a small check is sketched below). In addition, a 3 kW Sunny Boy SB 3000HF-US-240 inverter was used for both systems; the inverter was chosen as single-phase and on-grid, and its efficiency is 96.6%.
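The string-voltage check described above can be summarised in a few lines of Python; this is an illustrative calculation under the stated panel Vmp values, not design software.

INVERTER_MIN_V, INVERTER_MAX_V = 220, 480   # DC input window of the inverter

def string_vmp(panel_vmp, n_series):
    return panel_vmp * n_series             # parallel strings do not change Vmp

for name, vmp, n_series in (("a-Si 5s x 2p", 60.0, 5), ("c-Si 10s x 1p", 36.1, 10)):
    v = string_vmp(vmp, n_series)
    print(name, v, INVERTER_MIN_V <= v <= INVERTER_MAX_V)   # 300.0 V and 361.0 V, both inside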
Table 2. Properties of the photovoltaic panels used

Property                      Thin Film               Multi-crystalline silicon
Brand                         Xunlight Corporation    Suntech
Model                         XR36-300                STP 300-24/Vd
Cell Type                     a-Si                    c-Si
Maximum Power (Pmax)          300 W                   300 W
Nominal Voltage (Vmp)         60 V                    36.1 V
Nominal Current (Imp)         5.00 A                  8.32 A
Open Circuit Voltage (Voc)    81.0 V                  45.2 V
Short Circuit Current (Isc)   6.35 A                  8.65 A
Efficiency                    6.54%                   15.5%
Length                        5160 mm                 1956 mm
Width                         889 mm                  992 mm
Weight                        12 kg                   27 kg
The current-voltage characteristics of the Thin Film panel used, for irradiation values of 200-400-600-800-1000 W/m2, are shown in Figure 3 and its power-voltage characteristics in Figure 4; the Pm values were found to be 300.5 W, 237.9 W, 177.2 W, 118.3 W and 58.0 W.

Figure 3. I-V characteristics of the Thin Film panel

Figure 4. P-V characteristics of the Thin Film panel

The current-voltage characteristics of the multi-crystalline silicon panel, for the same irradiation values, are shown in Figure 5 and its power-voltage characteristics in Figure 6; the Pm values were found to be 302.8 W, 241.4 W, 179.7 W, 118 W and 57.1 W [7].

Figure 5. I-V characteristics of the multi-crystalline silicon panel

Figure 6. P-V characteristics of the multi-crystalline silicon panel
Figure 7. Distribution of the energy produced over one year

The daily energy produced over one year by each 3 kW photovoltaic system is shown in Figure 7. The energy produced by the two separate systems composed of Thin Film and multi-crystalline silicon panels is very close; the difference only becomes visible on close inspection. Over one year, the system composed of Thin Film panels produced 5018.6 kWh and the system composed of multi-crystalline silicon panels 5195.1 kWh. Under the climate conditions of Kahramanmaraş there is production on every day of the year: production occurs in all seasons, and good irradiation values are seen even in the winter months. The system used is not a solar tracker but was planned as a fixed installation; more energy could have been produced with a solar tracking system [8-14].

Figure 8. Monthly energy production

The energy produced by the two separate 3 kW photovoltaic systems, composed of Thin Film and multi-crystalline silicon panels, is shown month by month for one year in Figure 8. Under Kahramanmaraş climate conditions, the system composed of multi-crystalline silicon panels produces more energy in every month. Figure 8 also shows the monthly variation of the differences between the energy obtained from the Thin Film and the multi-crystalline silicon panels.
IV. COST ANALYSIS
Table 3 presents the cost analysis of the two separate 3 kW photovoltaic systems composed of Thin Film and multi-crystalline silicon panels. The total investment cost of system 1, composed of Thin Film panels, is 9768.6 TL, and that of system 2, composed of multi-crystalline silicon panels, is 9995.4 TL; the total cost of the two systems is 19764 TL. Of the investment cost, 47% was spent on the PV panels, 37% on the inverter and 16% on other equipment [15-21].

Table 3. Cost analysis of the systems
                              Thin Film               Multi-crystalline silicon
PV Modules                    414 TL x 10 = 4140 TL   435 TL x 10 = 4350 TL
Supports                      150 TL x 10 = 1500 TL   150 TL x 10 = 1500 TL
Inverter                      3405 TL x 1 = 3405 TL   3405 TL x 1 = 3405 TL
Total                         9045 TL                 9255 TL
Taxes                         1628.1 TL               1665.9 TL
Net investment                10673.1 TL              10920.9 TL
PV cost (TL/W)                1.628 TL                1.711 TL
System cost (TL/W)            3.557 TL                3.640 TL
Production hours (yearly)     2874 h                  2874 h
Production [kWh] (yearly)     5018.6                  5195.1
Production [kWh] (30 years)   150558                  155853
Tariff                        0.292 TL                0.292 TL
Yearly income                 1465.431 TL             1516.969 TL
Total income                  43962.930 TL            45509.076 TL
Unit energy cost              0.0648 TL               0.0642 TL
Payback period [years]        6.6652                  6.5883
The total investment cost of system 1, composed of Thin Film panels, and the energy it produces are both lower than those of the multi-crystalline silicon panels. Although the investment cost of the system composed of multi-crystalline silicon panels is higher, its unit energy cost is lower [22-33].
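The unit energy cost and payback figures in Table 3 can be reproduced approximately with the following Python sketch (a simple calculation that, unlike the paper's, ignores inflation):

TARIFF_TL_PER_KWH = 0.292

def unit_energy_cost(investment_tl, yearly_kwh, years=30):
    return investment_tl / (yearly_kwh * years)

def simple_payback_years(investment_tl, yearly_kwh):
    return investment_tl / (yearly_kwh * TARIFF_TL_PER_KWH)

for name, invest, kwh in (("Thin Film", 9768.6, 5018.6), ("Multi-crystalline", 9995.4, 5195.1)):
    print(name, round(unit_energy_cost(invest, kwh), 4), round(simple_payback_years(invest, kwh), 3))
# -> about 0.0649 TL and 6.666 years for Thin Film, 0.0641 TL and 6.589 years for
#    multi-crystalline, in line with the 0.0648/0.0642 TL and 6.6652/6.5883-year
#    figures reported in Table 3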
V. CONCLUSION
The unit energy costs calculated from the amounts of energy the two systems will produce over 30 years of operation are as follows: the unit energy cost of system 1, composed of Thin Film panels, is 0.0648 TL, and that of system 2, composed of multi-crystalline silicon panels, is 0.0642 TL. Although production varies from month to month, on average system 2 with multi-crystalline silicon panels brings in 3.5% more income than system 1 with Thin Film panels.
When the total areas covered by the panels are compared, the Thin Film panels are at a disadvantage; considering the size of the systems, land costs will significantly affect the cost analysis. When the panel weights are compared, the multi-crystalline silicon panels are at a disadvantage, but weight does not affect cost much except in special cases.
For the investor, the most important criterion is the payback point. In a calculation taking inflation into account, system 2, composed of multi-crystalline silicon panels, pays for itself in 6.5883 years, and system 1, composed of Thin Film panels, in 6.6652 years.
In conclusion, two separate 3 kW systems composed of Thin Film and multi-crystalline silicon panels were compared under Kahramanmaraş conditions, and the system composed of multi-crystalline silicon panels was found to be advantageous.
REFERENCES
[1] T.M. Razykov, C.S. Ferekides, D. Morel, E. Stefanakos, H.S. Ullal, H.M. Upadhyaya, "Solar photovoltaic electricity: Current status and future prospects", Solar Energy 85 (2011) 1580-1608.
[2] Mohammad H. M., S.M. Reza Tousi, Milad N., N. Saadat Basir, N. Shalavi, "A robust hybrid method for maximum power point tracking in photovoltaic systems", Solar Energy 94 (2013) 266-276.
[3] Yan S., Lai-Cheong C., Lianjie S., Kwok-Leung T., "Real-time prediction models for output power and efficiency of grid-connected solar photovoltaic systems", Applied Energy 93 (2012) 319-326.
[4] Ju-Young K., Gyu-Yeob J., Won-Hwa H., "The performance and economic analysis of grid-connected photovoltaic systems in Daegu, Korea", Applied Energy 86 (2009) 265-272.
[5] V.M. Fthenakis, H.C. Kim, "Photovoltaics: Life-cycle analyses", Solar Energy 85 (2011) 1609-1628.
[6] http://www.eie.gov.tr
[7] http://www.pvsyst.com/en/
[8] Raugei M, Frankl P. Life cycle impacts and costs of photovoltaic systems: current state of the art and future outlooks. Energy 2009; 34:392-9.
[9] Phuangpornpitak N, Kumar S. User acceptance of diesel/PV hybrid system in an island community. Renew Energy 2011; 36(1):125-31.
[10] R. Kumar, M. A. Rosen, "A critical review of photovoltaic-thermal solar collectors for air heating", Applied Energy 88 (2011) 3603-3614.
[11] J. Zhao, A. Wang, M.A. Green, Prog. Photovoltaic Res. Appl. 7 (1999) 471.
[12] A.V. Shah, R. Platz, H. Keppner, Sol. Energy Mater. Sol. Cells 38 (1995) 501.
[13] A. Catalano, Sol. Energy Mater. Sol. Cells 41-42 (1996) 205.
[14] Z. Shi, M.A. Green, Prog. Photovoltaic Res. Appl. 6 (1998) 247.
[15] S. Bhattarai, G. K. Kafle, S.H. Euh, J.H. Oh, D. H. Kim, "Comparative study of photovoltaic and thermal solar systems with different storage capacities: Performance evaluation and economic analysis", Energy 61 (2013) 272-282.
[16] C. Cañete, J. Carretero, M. S. Cardona, "Energy performance of different photovoltaic module technologies under outdoor conditions", Energy 65 (2014) 295-302.
[17] K. Vats, V. Tomar, G.N. Tiwari, "Effect of packing factor on the performance of a building integrated semitransparent photovoltaic thermal (BISPVT) system with air duct", Energy and Buildings 53 (2012) 159-165.
[18] B. P. Jelle, C. Breivik, H. D. Røkenes, "Building integrated photovoltaic products: A state-of-the-art review and future research opportunities", Solar Energy Materials & Solar Cells 100 (2012) 69-96.
[19] M.J. Wild-Scholten, "Energy payback time and carbon footprint of commercial photovoltaic systems", Solar Energy Materials & Solar Cells 119 (2013) 296-305.
[20] A. M. Al-Sabounchi, S. A. Yalyali, H. A. Al-Thani, "Design and performance evaluation of a photovoltaic grid-connected system in hot weather conditions", Renewable Energy 53 (2013) 71-78.
[21] Bloem JJ. Evaluation of a PV-integrated building application in a well controlled outdoor test environment. Build Environ 2008; 43:205-16.
[22] K. Vats, G.N. Tiwari, "Energy and exergy analysis of a building integrated semitransparent photovoltaic thermal (BISPVT) system", Applied Energy 96 (2012) 409-416.
[23] K. Padmavathi, S. Arul Daniel, "Performance analysis of a 3 MWp grid connected solar photovoltaic power plant in India", Energy for Sustainable Development 17 (2013) 615-625.
[24] G. Makrides, B. Zinsser, M. Norton, G. E. Georghiou, M. Schubert, J. H. Werner, "Potential of photovoltaic systems in countries with high solar irradiation", Renewable and Sustainable Energy Reviews 14 (2010) 754-762.
[25] J. Peng, L. Lu, H. Yang, "Review on life cycle assessment of energy payback and greenhouse gas emission of solar photovoltaic systems", Renewable and Sustainable Energy Reviews 19 (2013) 255-274.
[26] V. Sharma, S. S. Chandel, "Performance and degradation analysis for long term reliability of solar photovoltaic systems: A review", Renewable and Sustainable Energy Reviews 27 (2013) 753-767.
[27] J. Wong, Y. S. Lim, J. H. Tang, E. Morris, "Grid-connected photovoltaic system in Malaysia: A review on voltage issues", Renewable and Sustainable Energy Reviews 29 (2014) 535-545.
[28] K.R. Ranjan, S. C. Kaushik, "Energy, exergy and thermo-economic analysis of solar distillation systems: A review", Renewable and Sustainable Energy Reviews 27 (2013) 709-723.
[29] W. S. Ho, M. Z. W. Mohd Tohid, H. Hashim, Z. A. Muis, "Electric System Cascade Analysis (ESCA): Solar PV system", Electrical Power and Energy Systems 54 (2014) 481-486.
[30] J. E. Burns, J. S. Kang, "Comparative economic analysis of supporting policies for residential solar PV in the United States: Solar Renewable Energy Credit (SREC) potential", Energy Policy 44 (2012) 217-225.
[31] R. Senol, "An analysis of solar energy and irrigation systems in Turkey", Energy Policy 47 (2012) 478-486.
[32] C. Dong, R. Wiser, "The impact of city-level permitting processes on residential photovoltaic installation prices and development times: An empirical analysis of solar systems in California cities", Energy Policy 63 (2013) 531-542.
[33] E. Harder, J. M. D. Gibson, "The costs and benefits of large-scale solar photovoltaic power production in Abu Dhabi, United Arab Emirates", Renewable Energy 36 (2011) 789-796.
International Conference and Exhibition on Electronic, Computer and Automation Technologies (ICEECAT’14), May 9-11,2014
Konya, Turkey
The Experiment Set Applications of PLC-HMI Based SCADA Systems
H. TERZIOGLU1, S. TASDEMIR1, C. SUNGUR1, M. ARSLAN1, M. A. ANADOL1, C. GUNSEVEN1
1
Selcuk University, Higher School of Vocational and Technical Sciences, Konya, Turkey
hterzioglu@selcuk.edu.tr, stasdemir@selcuk.edu.tr, csungur@selcuk.edu.tr, mustafaarslan@selcuk.edu.tr,
anadol@selcuk.edu.tr, cumalig@selcuk.edu.tr
Abstract - The quickening of developments in industry and the corresponding increase in industrial control applications have led to the development of automation technologies and software. A controlled system or process should be well managed and observed. The spread of automation systems has revealed the need for personnel in vocational and technical fields who are well equipped and broadly versed not only in theory but also in practice. This is possible by increasing the quality of education and training individuals with strong knowledge and practical skills. In this study, an applicative PLC-HMI experiment set was prepared for the Programmable Logic Controller (PLC), which is commonly used in industrial applications, and for Human-Machine Interface (HMI) based Supervisory Control and Data Acquisition (SCADA). Using this education set, PLC and SCADA lessons will be taught effectively in laboratory environments at various stages of higher education, and real-time experiments will be held. Thus, highly competent individuals will be trained, especially in vocational and technical education, through applied teaching.
Keywords - PLC, SCADA, Automation Systems, HMI,
Vocational and Technical Education, Experiment Set.
I. INTRODUCTION
Nowadays, extremely rapid developments are witnessed in every field of life. Parallel to those developments, PLC and SCADA systems have come into common use in various fields such as industrial applications, small-scale enterprises and even smart house systems. As a result of those developments, the significance of training related to SCADA systems is increasing day by day.
Programmable Logic Controllers are electronic units designed to execute the command and control circuits of industrial automation systems. A PLC is an industrial computer which is equipped with input/output units and operates through software suited to the control structure. PLCs are commonly used in the control circuits of industrial automation systems. As is known, control circuits are circuits built from elements such as auxiliary relays, contactors, time relays and counters; nowadays, PLC-based control systems performing the same functions have replaced such circuits. PLCs are equipped with special input and output units suitable for direct use in industrial automation systems. Elements carrying two-valued logic information, such as pressure, level and heat sensors and buttons, may be connected to the inputs, and driver elements of the control circuit, such as contactors and solenoid valves, may be connected to the outputs. As seen in Figure 1, a PLC basically consists of units such as a numerical processor, memory, input and output units, a programming unit and a power supply [1, 2, 3].

Figure 1: General Structure of PLC [3]

The term Supervisory Control and Data Acquisition (SCADA) was suggested for the first time by Arcla Energy Resources (AER) in 1971. The term "Supervisory Control and Data Acquisition" was first published at the Power Industry Computer Applications (PICA) conference in 1973. The first SCADA system was installed on a DC2 computer bought from Fisher Corporation by the AER firm [4].
SCADA fulfils the functions of collecting data, sending data from headquarters, and analysing and displaying these data on an operator screen. SCADA automatically performs the tasks of controlling and observing all the units of a facility or an enterprise and of production planning. A SCADA system monitors the field equipment and simultaneously inspects it. SCADA systems are alarm based; this feature is the most basic characteristic distinguishing SCADA from other systems. For that reason, SCADA systems are used to report any undesired event occurring in the field, stamped with date and hour, and to submit the necessary warnings to the operators, rather than merely monitoring the instantaneous values of the inspected system.
The detection of the location of the breakdown in the
enterprise through real-time data obtained from the field, devices and various other points, filtering breakdown data according to significance and determining the order of priority are among the capabilities expected from a control system. Moreover, logging the operator's troubleshooting activities, monitoring breakdowns and breakdown calls on screen or obtaining printouts, and saving a chronological summary of breakdowns to a hard disk or a server may be listed among those characteristics [5].
SCADA systems are commonly used in fields such as electricity distribution and energy transmission management, iron and steel production, natural gas distribution, petrochemistry, water collection, purification and distribution, traffic signalling and smart buildings. A SCADA system provides operators with benefits such as productivity and fast, high-quality production through its advanced control and observation capabilities [6, 7].
SCADA systems may be implemented comprehensively with elements such as computer software, electronic cards, PLCs and HMIs, each of which falls into a different education programme. The diagram of a typical SCADA system is shown in Figure 2 [8].
II. MATERIAL AND METHOD
In this study, an education set was designed for use in courses on SCADA systems, PLCs and sensors in technical, technologist and engineering programmes. The block scheme of the designed education set is shown in Figure 3; as seen there, the set is open to further development.

Figure 3: The block scheme of education set

In the developed education set, a Siemens S7-200 CPU 222 or S7-1200 CPU 1212C, both of which have a wide area of usage, may be used as the PLC. The set was designed with 8 inputs and 6 outputs; for that reason, the PLC used in the education set may be changed according to the required number of inputs and outputs. Delta A and Delta B series panels, which offer wide communication options at low cost, are used as the HMI for these PLC types. In the designed education set, operator panels such as Siemens and Beijer may also be used, depending on the field intended for the HMI. Moreover, buttons were used on the PLC inputs, and relays together with 4 kW mini contactors driven by those relays were used on the outputs of the education set; this also enables the execution of the applications. The general view of the assembled education set is shown in Figure 4; it was designed by combining the experiment set with the parts that make up the whole.
Figure 2: Typical SCADA System Diagram
It was aimed in this study that the students retain the information they acquire by both learning theory and performing applications. For this purpose, a SCADA (PLC-HMI) education set was designed to facilitate education and reinforce their knowledge. Thanks to this education set, an applicative and extensible learning programme will be established in the education of engineers and technical staff. On this education set, numerous applications, from the control of the simplest systems to the control of the most complicated ones, may be carried out by adding the necessary software. Using the designed experiment set, various applications such as motor control, PV panel control and complicated applications including sensors may be carried out in a modular way. Since the students' view of system design and software will change positively in those applications of increasing complexity, their vision in this field will develop.
Figure 4: General view of experiment set
There are software programs depending on the brand of PLC and HMI. For example, software such as SIMATIC STEP 7-Micro/WIN for Siemens PLCs, Delta WPLSoft for Delta PLCs, Screen Editor for Delta HMIs and WinCC for Siemens HMIs is used both for PLC programming and for operator screen design.
The PLC software designed for the control of 4 asynchronous motors using the education set is shown in Figure 5, and the HMI design is shown in Figure 6.

Figure 5: A sample PLC program written with the SIMATIC STEP 7-Micro/WIN program

Figure 6: A sample HMI design carried out with the Delta Screen Editor program
III. CONCLUSION
Using the designed education set, the students are enabled to complete their learning by combining their theoretical knowledge with applications. The students were encouraged to move from abstract thinking to concrete comprehension. Through the applications they perform in the laboratory environment, complicated applications were developed using automation system elements, and the students' horizons in project development were positively supported. The quality of education will be promoted through the applications executed in the established laboratory environments, and the qualified staff continuously needed by the market will be trained. This experiment set has a modular structure which may be extended during the courses; for the modular applications, tasks such as speed control, reading data from sensors and PV panel control will be performed when the necessary software is added.
Through this experiment set, the students were enabled to comprehend multi-disciplinary structures and to integrate theory and applications. Although the resulting mechanism was designed for educational purposes, it is a model which may also function commercially. This designed education set may be used in other Vocational Schools and in the Faculties of Engineering and Technology, especially in Vocational High Schools.
REFERENCES
[1] Çolak İ, Bayındır R, Electrical Control Circuits, Seçkin Publishing, Ankara, 2004.
[2] Siemens Simatic S7-200, Programmable Logic Control Device (PLC) User's Manual, 2005.
[3] Sungur C, Terzioğlu H, Programmable Logic Control Device (PLC) and its Applications, Konya, 2013.
[4] Howard J, An Economical Solution to SCADA Communications, 25, 1988.
[5] Zecevic G, Web Based Interface to SCADA System, Power System Technology Powercon'98, Beijing, 1218-1221, 1998.
[6] Telvent, Automation S/3 User Manual, Telvent S/3-2, Canada, 1.1-5.15, 1999.
[7] Telvent, Automation S/3 User Manual, Telvent S/3-1, Canada, 5.1-5.35, 1999.
[8] Yabanova İ, The Review of Application of Automatic Stocking and Stock Renewal Systems and an Application Related to the Marble Sector, Master's Thesis, Afyon Kocatepe University, Institute of Science and Technology, Afyon, 2007.
International Conference and Exhibition on Electronic, Computer and Automation Technologies (ICEECAT’14), May 9-11,2014
Konya, Turkey
Emotion Recognition from Speech Signal: Feature Extraction and Classification
S.DEMİRCAN1 and H. KAHRAMANLI1
1 Selcuk University, Konya/Turkey, semiye@selcuk.edu.tr
1 Selcuk University, Konya/Turkey, hkahramanli@selcuk.edu.tr
Abstract - Nowadays, emotion recognition from the speech signal is one of the most important topics in human-machine interaction, and studies in this area have recently increased rapidly. In this paper, the pre-processing necessary for emotion recognition from speech data has been performed. Research in the area has shown that meaningful results can be obtained using prosodic features of speech. To recognise emotion, some prosodic features were extracted first (statistical data extracted from F0), secondly Mel Frequency Cepstral Coefficients (MFCC) and thirdly LPC (linear prediction coefficients). The extracted features were classified with ANN (Artificial Neural Network), SVM (Support Vector Machines) and kNN (k-Nearest Neighbour) algorithms.

Keywords - Speech processing, speech recognition, emotion recognition, MFCC, LPC.
Abstract - Nowadays, emotion recognition from speech signal is
one of the most important results of the human-machine
interaction. Recently, studies on this area were increased rapidly.
In this paper, pre-processing necessaries for emotion recognition
from speech data, have been performed. The researches in the
area have been proven that meaningful results have obtained
using prosodic features of speech. To recognize emotion some
prosodic features have been extracted first (Statistical data
extracted from F0), second Mel Frequency Cepstral Coefficients
(MFCC) and thirdly LPC (linear prediction coefficients).
Extracted features have classified with ANN (Artificial Neural
Network), SVM (Support Vector Machines), kNN (k- Nearest
neighbor Algorithm).
Keywords - Speech processing, speech recognition, emotion
recognition, MFCC, LPC.
I. INTRODUCTION
S
PEECH is one of the most important communication
facilities. When talking we don’t say the only words; we
are also adding our feelings upon words. Hence researchers
have concentrated to detect speakers emotion. In the last
decades detection of emotion from speech gained important
from the perspective of human computer interaction.
Feature extraction is the most important stage of the
emotion recognition. Many kinds of feature extraction
methods have been used for emotion recognition. The short
time log frequency power coefficients (LFPC) are one method
of features extraction. Nwe et al. are [1] used of short time log
frequency power coefficients. A text independent method of
emotion classification of speech has been proposed in this
study. As the classifier Discrete Hidden Markov Model
(HMM) has been used. The emotions were classified into six
categories. Performance of the LFPC feature parameters is
compared with that of the linear prediction Cepstral
coefficients (LPC) and mel-frequency Cepstral coefficients
(MFCC) feature parameters commonly used in speech
recognition systems. Results reveal that LFPC is a better
choice as feature parameters for emotion classification than
the traditional feature parameters.
Several works on this domain use the prosodic features or
the spectrum characteristics of speech signal, with neural
networks, Gaussian mixtures and other standard classifiers.
Enrique et al.[2] are used MLS (mean of the log-spectrum),
MFCCs and prosodic features (Energy and F0 - min max std)
II. FEATURE EXTRACTION
In theory it should be possible to recognize speech directly
from the signal. However, because of the large variability of
the speech signal, it is a good idea to perform some form of
feature extraction to reduce the variability [5].
Feature extraction is the most important stage of recognition, and there are many kinds of feature extraction methods. Some of the parametric representations are the Mel-frequency cepstrum coefficients (MFCC), the linear-frequency cepstrum coefficients (LFCC), the linear prediction coefficients (LPC), the reflection coefficients (RC), and the cepstrum coefficients derived from the linear prediction coefficients (LPCC) [6].

In this work we used three methods for feature extraction:
1- Prosodic features
2- Mel-frequency cepstrum coefficients (MFCC)
3- Linear prediction coefficients (LPC)

A- Prosodic features: Prosody is essentially a collection of factors that control pitch, duration and intensity to convey non-lexical and pragmatic information in speech [7]. Some prosodic features are explained below [7].

Fundamental frequency, or F0, is the frequency at which the vocal folds vibrate, and it is often perceived as the pitch of speech. F0 is important in the perception of emotion, as it has strong effects in conveying stressed speech, but studies have shown it to be relatively ineffectual in producing affect when altered on its own. It is generally split into two smaller measures, mean F0 and F0 range, although several more are also in common use.

Segmental duration is the term used to describe the length of speech segments such as phonemes (the basic units of language) and syllables, as well as silences. After F0, this is the most important factor in the emphasis of words.

Amplitude, perceived as intensity or loudness in speech, although not as effective as F0 and duration for the purposes of emphasis, can also be a useful indicator of emotional state in speech. It is important to note that relative, rather than absolute, amplitude is the indicating factor in most measures: clearly, a recording taken closer to the microphone will have a higher amplitude, yet carry exactly the same effect as an identical utterance recorded at a greater distance.

B- Mel-frequency cepstrum coefficients (MFCC): The major stages of the MFCC computation can be summarized as follows [8].

Preemphasis: A preemphasis of the high frequencies is required to obtain similar amplitudes for all formants. Such processing is usually obtained by filtering the speech signal with a first-order FIR filter whose transfer function in the z-domain is

H(z) = 1 - α·z^-1   (1)

α being the preemphasis parameter. In essence, in the time domain, the preemphasized signal is related to the input signal by the relation

x'(n) = x(n) - α·x(n-1)   (2)

Windowing: Traditional methods for spectral evaluation are reliable only in the case of a stationary signal. For voice this holds only within short intervals, so a short-time analysis is performed by "windowing" the signal x'(n) into a succession of windowed sequences xt(n), t = 1, 2, ..., T, called frames, which are then individually processed:

xt(n) ≡ w(n)·x'(n - t·Q),  0 ≤ n < N,  1 ≤ t ≤ T   (3)

where w(n) is the impulse response of the window. Each frame is shifted by a temporal length Q. If Q = N, frames do not overlap in time, while if Q < N, the N - Q samples at the end of a frame xt(n) are duplicated at the beginning of the following frame xt+1(n).

Spectral analysis: The standard methods for spectral analysis rely on the Fourier transform of xt(n), namely Xt(e^jω). Computational complexity is greatly reduced if Xt(e^jω) is evaluated only for a discrete number of ω values. If such values are equally spaced, for instance ω = 2πk/N, then the discrete Fourier transform (DFT) of all frames of the signal is obtained:

Xt(k) = Xt(e^(j2πk/N)),  k = 0, ..., N-1   (4)

Filter bank processing: Spectral analysis reveals those speech signal features which are mainly due to the shape of the vocal tract. Spectral features of speech are generally obtained as the output of filter banks, which properly integrate the spectrum over defined frequency ranges. A set of 24 band-pass filters is generally used, since it simulates human ear processing. Denoting by Hm(k) the frequency response of the m-th filter,

Yt(m) = Σk |Xt(k)|·Hm(k),  m = 1, ..., M   (5)

Log energy computation: The previous procedure has the role of smoothing the spectrum, performing a processing similar to that executed by the human ear. The next step consists of computing the logarithm of the square magnitude of the coefficients Yt(m) obtained with (5). This reduces to simply computing the logarithm of the magnitude of the coefficients, because the algebraic property of the logarithm turns a power into a multiplication by a scaling factor.

Mel frequency cepstrum computation: The final procedure for the Mel frequency cepstrum computation (MFCC) consists of performing the inverse DFT on the logarithm of the magnitude of the filter bank outputs:

yt(k) = Σ m=1..M log(Yt(m))·cos(k·(m - 1/2)·π/M),  k = 0, ..., L   (6)

C- Linear prediction coefficients (LPC): LPC is a commonly used method for speech and audio signals. LPC is generally described as a representation of the signal in compressed form; it can provide high-quality compression with a low number of bits. At the same time, formant frequency estimation can be carried out from the outputs of LPC. This method operates in the time domain.
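To make the MFCC front end concrete, the following minimal C++ sketch implements the preemphasis and windowing stages of Eqs. (1)-(3). It reflects our own assumptions (a Hamming window and a typical preemphasis parameter of about 0.95); the paper itself does not provide code.

    #include <cmath>
    #include <vector>

    const double PI = 3.141592653589793;

    // Preemphasis, Eq. (2): x'(n) = x(n) - alpha*x(n-1).
    std::vector<double> preemphasize(const std::vector<double>& x, double alpha = 0.95) {
        std::vector<double> out(x.size());
        if (!x.empty()) out[0] = x[0];
        for (std::size_t n = 1; n < x.size(); ++n)
            out[n] = x[n] - alpha * x[n - 1];
        return out;
    }

    // Windowing, Eq. (3): frames of length N, shifted by Q samples,
    // each multiplied by a Hamming window w(n).
    std::vector<std::vector<double>> makeFrames(const std::vector<double>& xp,
                                                std::size_t N, std::size_t Q) {
        std::vector<std::vector<double>> frames;
        for (std::size_t start = 0; start + N <= xp.size(); start += Q) {
            std::vector<double> frame(N);
            for (std::size_t n = 0; n < N; ++n) {
                double w = 0.54 - 0.46 * std::cos(2.0 * PI * n / (N - 1));
                frame[n] = w * xp[start + n];
            }
            frames.push_back(frame);
        }
        return frames;
    }

Each resulting frame would then be passed through the DFT, filter bank and logarithm stages of Eqs. (4)-(6).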
III. EMOTION CLASSIFICATION
In this paper we used the Berlin Database [9], which contains 7 emotional conditions:
1- Anger,
2- Boredom,
3- Disgust,
4- Anxiety (Fear),
5- Happiness,
6- Sadness,
7- Normal.
Ten actors (5 female and 5 male) simulated the emotions, producing 10 German utterances (5 short and 5 longer sentences) which could be used in everyday communication and are interpretable with all the applied emotions. The sampling frequency is 16 kHz.

In our experiments on emotion recognition, two kinds of speech features were extracted: prosodic features (F0) and spectral features (LPC and MFCC), as shown in Figure 1.

Figure 1: Extracted features. Prosody features: 7 statistical pitch (F0) features, namely the maximum, minimum, mean, standard deviation, skewness, kurtosis and median values of F0. Spectral features: 16 MFCC features, summarized with the same 7 statistics, and 16 LPC features.

First, pitch features were extracted as prosody: 7 statistical features (maximum, minimum, mean, standard deviation, skewness, kurtosis and median) computed from the fundamental frequency were used. Next, 16 MFCC features were extracted from the Berlin emotion data, and the same statistical features (as for F0) were used to reduce them. Finally, 16 LPC features were extracted from the data, as shown in Figure 1.

We used all seven emotion classes of the Berlin Database. Seven different datasets were created, using the MFCC, F0, LPC, MFCC+LPC, MFCC+F0, LPC+F0 and MFCC+F0+LPC features.

In this paper three kinds of classification methods are used: ANN (Artificial Neural Network), SVM (Support Vector Machines) and kNN (k-Nearest Neighbor) classifiers. All results on the Berlin database were produced using 10-fold cross-validation.

The most basic instance-based method is the k-Nearest Neighbors algorithm. This algorithm assumes all instances correspond to points in the n-dimensional space R^n [10]. The nearest neighbors of an instance are defined in terms of the standard Euclidean distance. More precisely, let an arbitrary instance x be described by the feature vector ⟨a1(x), a2(x), ..., an(x)⟩, where ar(x) denotes the value of the r-th attribute of instance x. Then the distance between two instances xi and xj is defined to be d(xi, xj), where

d(xi, xj) ≡ √( Σ r=1..n ( ar(xi) - ar(xj) )² )   (7)
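As an illustrative sketch of this classifier (the data loading, label encoding and the k = 7 setting used later are our assumptions, not the paper's code), a kNN vote over the distance of Eq. (7) can be written in C++ as follows:

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Instance {
        std::vector<double> a;  // feature vector <a1(x), ..., an(x)>
        int label;              // emotion class, e.g. 0 = anger, 1 = boredom, ...
    };

    // Euclidean distance of Eq. (7).
    double dist(const Instance& xi, const Instance& xj) {
        double s = 0.0;
        for (std::size_t r = 0; r < xi.a.size(); ++r) {
            double d = xi.a[r] - xj.a[r];
            s += d * d;
        }
        return std::sqrt(s);
    }

    // Classify a query by majority vote among its k nearest training instances.
    int knnClassify(const std::vector<Instance>& train, const Instance& q,
                    std::size_t k, int numClasses) {
        std::vector<std::pair<double, int>> d;  // (distance, label) pairs
        for (const auto& t : train)
            d.push_back({dist(t, q), t.label});
        std::partial_sort(d.begin(), d.begin() + k, d.end());
        std::vector<int> votes(numClasses, 0);
        for (std::size_t i = 0; i < k; ++i)
            ++votes[d[i].second];
        return static_cast<int>(
            std::max_element(votes.begin(), votes.end()) - votes.begin());
    }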
The second classifier we used is the ANN (Artificial Neural Network). The ANN is inspired by the working of human neurons, and it is used widely and efficiently in many pattern recognition problems. Two kinds of architectures are used in ANNs: feedforward and feedback. The most frequently used method in such studies is the Back-Propagation (backward reflection of error) algorithm on a multi-layer feedforward network structure.
Thirdly, we used the SVM (Support Vector Machines). SVMs are supervised learning models with associated learning algorithms that analyze data and recognize patterns, and they are used for classification and regression analysis. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier [11].
IV. RESULTS AND DISCUSSION
After extracting the features, we analyzed all possible combinations of the features. In this section we explain the performance of the classification. The results of the classification are shown in Table 1.
All features (prosodic and spectral) were classified with kNN, ANN and SVM. The number of neighbors in kNN was set to 7. The backpropagation algorithm was selected for training the ANN; the designed model has 3 layers, with 7 neurons in the hidden layer.
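The seven statistics used above to summarize the F0, MFCC and LPC trajectories into fixed-length feature vectors can be computed in a few lines. The following C++ sketch is our illustrative assumption of that computation; the paper gives no code.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // The seven statistics used to summarize a contour such as F0:
    // maximum, minimum, mean, standard deviation, skewness, kurtosis, median.
    std::vector<double> summarize(std::vector<double> v) {
        const double n = static_cast<double>(v.size());
        double mean = 0.0;
        for (double x : v) mean += x;
        mean /= n;
        double m2 = 0.0, m3 = 0.0, m4 = 0.0;   // central moments
        for (double x : v) {
            const double d = x - mean;
            m2 += d * d;  m3 += d * d * d;  m4 += d * d * d * d;
        }
        m2 /= n;  m3 /= n;  m4 /= n;
        const double sd = std::sqrt(m2);
        std::sort(v.begin(), v.end());
        const double median = v[v.size() / 2];
        return { v.back(), v.front(), mean, sd,
                 m3 / (sd * sd * sd),          // skewness
                 m4 / (m2 * m2),               // kurtosis
                 median };
    }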
Table 1: Classification accuracy.

Dataset            kNN    ANN    SVM
MFCC               65%    65%    70%
F0                 38%    49%    42%
LPC                43%    49%    39%
MFCC + LPC         68%    68%    73%
MFCC + F0          66%    65%    70%
LPC + F0           49%    58%    48%
MFCC + F0 + LPC    67%    68%    65%
Classification accuracies for each dataset are shown in Table 1. It can be observed from Table 1 and Figures 2-4 that all four datasets created using MFCC features reached high classification accuracy. The change in dataset performance for each classifier is shown in Figure 2 (for kNN), Figure 3 (for ANN) and Figure 4 (for SVM).

Figure 2: Classification with kNN.

Figure 3: Classification with ANN.

Figure 4: Classification with SVM.

It can be observed from Figures 2-4 that the dataset created with only the LPC features has low classification accuracy, whereas the dataset created using MFCC and LPC together reached high accuracy.

As shown in Figures 2-4, the best classification result was obtained using the SVM algorithm. The best result (73% classification accuracy) was achieved using the MFCC+LPC features and the SVM classification algorithm for all seven emotions in the Berlin Database.

REFERENCES
[1] T. L. Nwe, S. W. Foo, and L. C. De Silva, "Speech emotion recognition using hidden Markov models," Speech Commun., vol. 41, no. 4, pp. 603-623, Nov. 2003.
[2] E. M. Albornoz, D. H. Milone, and H. L. Rufiner, "Spoken emotion recognition using hierarchical classifiers," Comput. Speech Lang., vol. 25, no. 3, pp. 556-570, Jul. 2011.
[3] A. Milton and S. Tamil Selvi, "Class-specific multiple classifiers scheme to recognize emotions from speech signals," Comput. Speech Lang., Sep. 2013.
[4] C.-C. Lee, E. Mower, C. Busso, S. Lee, and S. Narayanan, "Emotion recognition using a hierarchical binary decision tree approach," Speech Commun., vol. 53, no. 9-10, pp. 1162-1171, Nov. 2011.
[5] D. B. Roe and J. G. Wilpon, Voice Communication Between Humans and Machines, National Academy Press, 1994, p. 177.
[6] S. B. Davis, P. Mermelstein, D. F. Cooper, and P. Nye, "Comparison of Parametric Representations for Monosyllabic Word Recognition," vol. 61, 1980.
[7] C. Hoult, "Emotion in Speech Synthesis," pp. 1-12, 2004.
[8] C. Becchetti and L. P. Ricotti, Speech Recognition: Theory and C++ Implementation, 3rd ed., John Wiley & Sons, 2004, pp. 125-135.
[9] F. Burkhardt, A. Paeschke, M. Rolfes, W. Sendlmeier, and B. Weiss, "A Database of German Emotional Speech," Ninth Eur. Conf. Speech Commun. Technol., 2005, pp. 3-6.
[10] T. M. Mitchell, Machine Learning, McGraw-Hill, 1997, pp. 231-232.
[11] http://en.wikipedia.org/wiki/Support_vector_machine
The Design and Implementation of a Sugar Beet
Harvester Controlled via Serial Communication
Adem GOLCUK¹, Sakir TASDEMIR², Mehmet BALCI²
¹ Higher School of Vocational and Technical Sciences, Karamanoğlu Mehmetbey University, Karaman, Turkey, ademgolcuk@kmu.edu.tr
² Higher School of Vocational and Technical Sciences, Selçuk University, Konya, Turkey, stasdemir@selcuk.edu.tr, mehmetbalci@selcuk.edu.tr
Abstract - Conventional agricultural equipment and machinery have acquired a modern character owing to technological transfers, which in turn return as profit. Electronic and computer applications are becoming gradually widespread in agriculture, and agricultural mechanization is positively affected by this development. Using new technologies together with agricultural mechanization results in higher-quality crops at the production stage. This study was carried out on an electronic module we designed to control the hydraulic lifting and unloading parts of a sugar beet harvester. For this purpose, we wrote a program for a Peripheral Interface Controller (PIC). Processes have been automated by means of this circuit. Our electronic circuits were tested in a workshop and then applied on a real sugar beet harvester. The control system can communicate over cable via serial communication, and the tests confirmed that it is applicable to sugar beet harvesters. In addition, the system can be controlled remotely via a wireless circuit designed as an addition to the electronic card.

Keywords - Serial Communication, Sugar Beet Harvester, PIC, Electronic Circuit, Software.

I. INTRODUCTION
Turkey is an agricultural country of high potential with its population, land size, and ecological advantages such as climate and soil diversity. As an important nutrient for humans, sugar is produced from two sources: sugar cane and sugar beet. Sugar producers are tending towards mechanization within the extent of their purchasing power. In recent years, agricultural equipment such as tractors, rotary cultivators, hoeing machines, and combined harvesters has been increasing in number. Even those who cannot afford such machines use them on loan or by renting [1, 2].

Agricultural mechanization involves designing, developing, marketing, handling, and operating every kind of energy source and mechanical equipment used to develop agricultural areas and to produce and utilize every kind of crop. It is possible to reap higher-quality crops at the production stage as long as agricultural mechanization is accompanied by new technologies. It also enables production processes to be completed as soon as possible, prevents the yield loss resulting from delays, and makes rural labour conditions more convenient and secure. Agricultural productivity is enhanced and new employment opportunities are introduced with the industrial developments in agricultural machinery. Agricultural equipment and machinery are indispensable for completing work in due time. Since the manual harvesting of sugar beet requires a vast number of workers and the cost of workmanship has increased in recent years, producers are heading for mechanical harvesting. Mechanization brings about more cultivated area and prolongs the harvest season (considering the weather and soil conditions), so that sugar beets become heavier. Furthermore, producers do not confront workforce problems in their business plans [3, 4, 5].

In the 2010/2011 season, world sugar production was 152 million tons, and the capacity settled by the sugar board in Turkey was 3 million 151 thousand tons. In Turkey there are seven sugar beet-producing companies, one of which is publicly owned, and 33 active sugar factories with different capacities. Turkey's share in world sugar production was 8% in the 2010/2011 season. Turkey is the world's fifth largest sugar producer after the USA, France, Germany, and Russia, and the fourth largest in Europe after France, Germany, and Russia [6].

What lies behind the dominance of developed countries in the global economy is that they develop their agricultural machinery, use it efficiently, and thereby increase their sales of agricultural products. This accounts for why developing countries fall one step behind developed countries in terms of agro-based industrialization. Considering the trio of agriculture-trade-industry, agriculture has an active and positive role in industrialization; as a matter of fact, developing countries have the necessary infrastructure for agro-based industrialization [2, 7].

In parallel with technological developments, agricultural machinery is developing in a positive way, and the use of technology in agricultural equipment is rapidly increasing. Electronic and computer applications are becoming widespread in the sector, which impacts agricultural mechanization. A modern approach to agricultural practices not only enhances agricultural productivity but also provides speed and ease of use. Designs of agricultural mechanization are affected by industrial developments, and this leads to favourable outcomes. Multidisciplinary studies have paved the way for improvements in agricultural machinery and for the use of electronics and computer technologies in this sector. Moreover, these technological and electro-mechanical developments help people overcome the problems they encounter in the workplace, providing them with more comfort, more time, and more economic profit. Traditional agricultural machinery has acquired a modern character owing to technological transfers, which returns as profit.

In this study, we developed an electronic circuit to control the hydraulic system by which sugar beet harvesters perform lifting and unloading, and for this purpose we wrote a program for a Peripheral Interface Controller (PIC). By means of this circuit, processes were automated. The system was remote-controlled via a wireless circuit designed as an addition to the electronic card. In this way, cables were eliminated and the system was made remotely operable.
II. MATERIAL AND METHOD
The sugar beet harvester is a machine used to uproot beets. It is attached to the tractor drawbar and runs from the power take-off (PTO). Self-lubricating machines do not require anything but the tractor PTO and 12-volt accumulator electricity. This combined machine picks up the sugar beets, separates their stems and leaves, removes the dirt, takes them to the ground or the bunker, and finally unloads them. As in manual lifting, sugar beets are picked up cleanly by this machine without any damage to them. All of its units except the star drum are driven hydraulically. It adjusts the digging height and range automatically with its electronic-hydraulic control system and accomplishes the lifting and loading processes easily and in a short time [8, 9].

Using PicBasic Pro, we wrote the microcontroller software for the serially communicating sugar beet harvester we designed. The printed circuits were prepared in Ares Professional, part of the Proteus Professional software package, using a two-sided printing technique. In addition, before transferring the circuits into printed circuits, the working principles of the code and the ICs used were tested in the Isis Professional simulation program. During the tests, our printed circuits were prepared by the ironing method in a workshop environment. As the tests yielded positive results, the printed circuits were then produced by professional firms.

2.1 Serial Communication Technique
The serial communication technique is preferred by business organizations, automation systems, and factories with a number of machines and motors, in order to minimize complexity, reduce cost, and make control easier. It can be programmed easily and as required, and it removes the need for an additional data link when new devices are added to the circuit. Nowadays, serial communication has a wide area of utilization. Microprocessors and devices such as modems, printers, floppy disks, hard disks, and optical drives communicate serially. In addition, serial communication is used in cases where the number of links is to be reduced. In serial communication, data are conveyed unidirectionally or bidirectionally on a single channel. As the number of links is reduced, the data signaling rate is low as well [10, 11].

2.2 The Systemic Structure
Our system is essentially made up of two modules. The circuit we developed for the first module is the "mainboard" part mounted on the sugar beet harvester, enabling the lifting and unloading operations of the machine. This module conducts the sugar beet harvester according to the data transmitted from the remote control. The second module is the remote control system, by which users control their sugar beet harvesters. Figure 1 shows the structure of this system.

Figure 1: Block Structure of Automation for Sugar Beet Harvester

In the system shown in Figure 1, all the data exchange between the control module and the mainboard on the sugar beet harvester takes place on two cables with serial communication. This system is safer and easier to install. Also, troubleshooting becomes easier and the cost is cut down as the number of cables is reduced by serial communication. However, hardware cost increases to a certain extent, because there are two circuits ensuring serial data communication in the control system. One of these circuits is placed inside the control device, conveying the user commands to the sugar beet harvester. The other is placed in the sugar beet harvester, guiding it according to the commands coming from the control unit by opening and closing the valves.

2.2.1 The Control Keypad and Its Functions
There are control buttons on the control system designed to let a tractor driver control the sugar beet harvester. Table 1 shows these buttons and their functions.

Table 1: Keypad for Control Circuit

Manual: In this mode, the system is controlled by the user through the control device.
Automatic: In this mode, the control device is deactivated and the system runs under the control of the automation software with the data coming from sensors.
Lower Hitch: Lowers the hitch part to start lifting beets.
Raise Hitch: Raises the hitch part when the lifting process is over.
Lower Elevator: If the elevator is clogged with objects like stones, this button makes the lifter reverse to remove those objects.
Raise Elevator: Used to transfer the collected beets to the bunker or tank.
Lower Bunker: Lowers the bunker when the unloading process is over.
Raise Bunker: Raises the bunker before unloading.
Unload Bunker Right: Unloads the bunker from the right side.
Unload Bunker Left: Unloads the bunker from the left side.
Open Bunker: Opens the bunker door.
Close Bunker: Closes the bunker door.
Signal On/Off: Switches the signal on or off.
Light On/Off: Switches the headlight on when work is done by night.

2.2.2 Electronic Circuit for Control
The circuits in Figures 2 and 3 are for the control device. The keys were mounted on the circuit in Figure 2. The circuit shown in Figure 3 serially transmits the data for the pressed key to the circuit on the sugar beet harvester through the RX and TX cables.

Figure 2: The Printed Circuit for Control Keypad

Figure 3: Control Transmitter Circuit

The program running on the PIC 16F84A microcontroller [12, 13] used in the control circuit was developed in PicBasic Pro [12, 13]. This software regulates whether the system runs wired or wireless. If RF signals are to be used through the wireless control system, the data for the key pressed by the user is encoded as 4 bits by the program. Afterwards, this data is sent to the inputs of the PT2262 IC (Figure 4) to be transmitted over the RF link. This 4-bit data is conveyed using bits 0, 1, 2, and 3 of PortA. In addition, the microcontroller memorizes the mode in which the sugar beet harvester is running. Since there is no electricity in the circuit unless a key is pressed, the power of the microcontroller is off as well; therefore the data regarding the mode of the sugar beet harvester is stored in the internal EEPROM of the microcontroller. Thus, even if the power is cut off, the information is kept by the microcontroller without being erased.

PicBasic code writing the mode information to the internal EEPROM:

WRITE 0,modd 'this command writes the mode information to address 0

PicBasic code reading the mode information from the internal EEPROM:

READ 0,modd 'this command reads the mode information from address 0

If the wired system is used, the instruction that sends the control information to the sugar beet harvester via serial communication is as follows:

SEROUT2 VERIOUT,188,["P","M",TusBilgisi]

This instruction conveys data at a rate of 4800 baud. It sends the letters "P" and "M" before sending the key information.

Figure 4: PT2262 and ATX-34 transmitter module [10, 14]

The BC327 transistor used in the control circuit diagram in Figure 4 prevents the circuit from draining the battery unless a button is pressed. When any button is pressed, the base of this PNP transistor is grounded and the transistor starts conducting; the circuit is energized in this way. Then the instruction for the pressed key is conveyed wirelessly to the control panel.

2.3 Control and Electronic Card of the Sugar Beet Harvester
Figure 5 shows the circuit we prepared for the sugar beet harvester. This circuit guides the machine according to the data coming from the control device.
Figure 5: Sugar Beet Harvester Circuit

The PIC16F84A opens and closes the valves on the sugar beet harvester in accordance with the data coming from the control device. The machine is controlled by an electronic circuit and software. If the wired control system is to be used, the instruction for reading the data serially is as follows:

SERIN2 VERIIN,188,100,ATLA,[WAIT ("PM"),TusBilgisi]

This instruction reads the data sent at a rate of 4800 baud and assigns the data sent after the regularly conveyed letters "P" and "M" to the key information variable. If the control system is to communicate via RF signals, a control receiver circuit is placed in the sugar beet harvester with the help of a header socket, and the two circuits then run as a single circuit. The 12-volt power from the tractor accumulator is reduced to 5 volts with the 7805 regulator. The valves used in the sugar beet harvester are driven by TIP55 power transistors, preferred because of their high collector current (Ic=15A).
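Seen from the receiving side, the WAIT ("PM") clause implements frame synchronization: bytes are discarded until the two header letters arrive, and the next byte is taken as the key code. As a hedged illustration, a host-side C++ sketch of the same idea (the buffer type and function name are our assumptions, not part of the PIC firmware) could look like this:

    #include <cstdint>
    #include <deque>
    #include <optional>

    // The frame format used above: the letters 'P' and 'M' precede the key
    // code, so a complete frame is 'P' 'M' <key>.
    std::optional<uint8_t> parseFrame(std::deque<uint8_t>& rx) {
        while (rx.size() >= 3) {
            if (rx[0] == 'P' && rx[1] == 'M') {
                uint8_t key = rx[2];
                rx.erase(rx.begin(), rx.begin() + 3);
                return key;           // e.g. the "Raise Hitch" code
            }
            rx.pop_front();           // resynchronize on line noise
        }
        return std::nullopt;          // no complete frame received yet
    }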
III. CONCLUSION AND DISCUSSION
This study deals with the design and use of an electronic circuit to control the hydraulic parts of an agricultural machine, namely a sugar beet harvester. A program was written for the PIC of its electronic circuit. The designed electronic circuits were applied on a real sugar beet harvester after being tested in our workshop. Possible hardware and software failures were detected and resolved during the tests. After the final revision, the system was tested continually for two months. The tests have shown that the control system, which can communicate on cables with serial communication and wirelessly with RF signals, is applicable to sugar beet harvester machines. After the tests yielded the desired results, our printed circuits were serially produced by professional firms.

Using electronics and computers in agricultural mechanization applications is a modern approach. This approach not only enhances agricultural productivity but also provides quick and easy use. Furthermore, these technological and electro-mechanical developments help people overcome the problems they encounter in the workplace, providing more comfort, more time, and more economic profit.

The number of cables is reduced and troubleshooting becomes easier with the serial communication technique. This technique also provides easier installation and a safer working environment. The system is remote-controlled with a wireless RF circuit designed as an addition to the electronic card; in this way, cables are eliminated and the system is made remotely operable. The system we have developed can be used and improved further by being adapted to other parts of the sugar beet harvester or to other kinds of machines.

REFERENCES
[1] Unal H.G., Research on Mechanization Conditions and Agricultural Applications of Sugar Beet Producers in Kastamonu, Journal of Agricultural Sciences, Ankara University Faculty of Agriculture, 13 (1) 9-16, 2006.
[2] Arısoy H., "Tarımsal Araştırma Enstitüleri Tarafından Yeni Geliştirilen Buğday Çeşitlerinin Tarım İşletmelerinde Kullanım Düzeyi ve Geleneksel Çeşitler İle Karşılaştırmalı Ekonomik Analizi - Konya İli Örneği", Yayın No: 130, ISBN: 975-407-174-8, 2005.
[3] Eryilmaz T., Gokdogan O., Yesilyurt M. K., Ercan K., Properties of Agricultural Mechanization of the Nevsehir Province, Journal of Adnan Menderes University Agricultural Faculty, 10(2):1-6, 2013.
[4] Şeker Pancarı Hasat Makineleri, http://www.ziraatciyiz.biz/seker-pancari-hasat-makineleri-t1528.html?s=00c338ed6fd8fd245dbc150440ddd34a&t=1528 [Accessed: 24 March 2014]
[5] Tarımsal Mekanizasyonun Faydaları, http://www.birlesimtarim.com/bilgi-TARIMSAL.MEKANIZASYONUN.FAYDALARI-2-tr.html [Accessed: 24 March 2014]
[6] Türkiye Cumhuriyeti Şeker Kurumu, http://www.sekerkurumu.gov.tr/sss.aspx [Accessed: 24 March 2014]
[7] Guzel S., Yerel Kalkınma Modeli: Afyon-Sandıklı'da Tarıma Dayalı Sanayileşme, Karamanoğlu Mehmetbey University Journal of Social and Economic Science, 133-143, May 2007.
[8] Kombine Pancar Hasat Makinasi, http://www.ozenistarimmak.com/pancar-hasat-makinasi-pancar-sokme-makinesi-pancar-toplama-makinesi_1_tr_u.html [Accessed: 13 March 2014]
[9] Milli Eğitim Bakanlığı, Mesleki ve Teknik Eğitim Programlar ve Öğretim Materyalleri, Tarım Teknolojisi Programı, Traktörle Kullanılan Özel Hasat Makineleri Modülü, 2014.
[10] Golcuk A., Design and Actualisation of the RF-Controlled Lift System, M.Sc. thesis, Graduate School of Natural and Applied Sciences, Selcuk University, 75 p., Konya, Turkey, 2010.
[11] Mikrodenetleyici İle Tek Hat Seri İletişim (Akif Canbolat), http://320volt.com/mikrodenetleyici-ile-tek-hat-seri-iletisim-pic16f84 [Accessed: 24 March 2014]
[12] Microchip Technology Inc., http://www.microchip.com [Accessed: 24 March 2014]
[13] Altınbaşak, O., Mikrodenetleyiciler ve PIC Programlama, Altaş Yayıncılık, İstanbul, 2004.
[14] AN-ASKTX-PT2262, Udea Elektronik, http://www.udea.com.tr/dokumanlar/AN-ASKTX-PT2262.PDF [Accessed: 13 March 2014]
Encryption with Changing Least Significant Bit
on Menezes Vanstone Elliptic Curve
Cryptosystem
M. KURT¹ and N. DURU¹
¹ Kocaeli University, Kocaeli, Turkey, meltem.kurt@kocaeli.edu.tr, nduru@kocaeli.edu.tr
Abstract - Security problems have become more important year by year as a consequence of the widespread use of the internet, and many methods have been developed to overcome these problems. These methods aim especially at safe communication, through cryptosystems that use encryption and decryption algorithms in internet-based communication systems. Such cryptosystems derive their security from complex mathematics. In this paper, a Menezes Vanstone Elliptic Curve Cryptography algorithm with a modified encryption method is used. After the message is encrypted, the cipher text's least significant bits (LSB) are changed, and finally the secret message is sent to the recipient. We used the C++ programming language for our implementation.

Keywords - Elliptic Curve Cryptography Algorithm, Cryptography, ECC, Menezes Vanstone Elliptic Curve Cryptography, Least Significant Bit.

I. INTRODUCTION
Rapidly evolving technology brings with it the threats of information spoofing and information theft. In an insecure communication environment it is very hard to protect the integrity of the plaintext during its transfer from sender to recipient. Consequently, studies on information security and information hiding have increased.

The word cryptography comes from the Greek language: "kryptos" meaning "hidden" and "graphein" meaning "writing". Cryptography is basically used to hide the contents of a message by replacing its letters [1].

In other words, cryptography is a set of mathematical methods for ensuring confidentiality, authentication, integrity and information security [2]. During the transmission of a message, these methods aim to protect not only the sender and recipient but also the data itself from active and passive attacks.

The security of the systems mentioned above is provided by two types of cryptographic algorithms: symmetric and asymmetric. If the sender and recipient use the same key for both encryption and decryption, the cryptosystem is called symmetric. Symmetric cryptosystems have one problem: key distribution. Asymmetric encryption algorithms were designed to solve that problem; the sender and recipient use different keys for encryption and decryption. Every user has two keys, a public key and a private key. The public key is known to everybody in the communication area, but the private key is known only to its owner.

II. ELLIPTIC CURVE CRYPTOSYSTEM
The Elliptic Curve Cryptosystem, an asymmetric cryptosystem, was proposed in 1985 as an alternative to the RSA [3] encryption algorithm. With RSA, the bit length must increase to ensure more security; therefore applications that use this algorithm carry a large computational weight. The reliability of the Elliptic Curve Cryptography (ECC) algorithm comes from the fact that the elliptic curve discrete logarithm problem is still unresolved.

It is important to choose the elliptic curve used in the encryption process carefully, because the designed cryptosystem must be resistant to all known attacks.

A. Elliptic Curves
In mathematics, an elliptic curve over the real numbers is defined as a set of (x, y) points. The equation of the curve is given in (1):

y² = x³ + ax + b mod p   (1)

where x, y, a and b are real numbers. If x³ + ax + b contains no repeated factors, or equivalently if 4a³ + 27b² is nonzero, then y² = x³ + ax + b can be used to form a group. This group consists of the points on the elliptic curve together with a point at infinity, denoted O [4].

B. Elliptic Curve Cryptography
Key exchange: p is a prime number, p ≈ 2^180. The elliptic curve parameters a and b of (1) are selected by the sender; this selection creates the set of points EF(a, b). The sender chooses a generator point G = (x1, y1) in EF(a, b).

Key exchange between sender and recipient proceeds as follows:
1) The sender determines a private key nA, an integer smaller than n, and generates the public key PA = nA·G, which is a point in EF(a, b).
2) The recipient calculates a public key PB with the same method.
3) The sender computes K = nA·PB and the recipient computes K = nB·PA. If K is the same, key exchange and key agreement are completed.

Encryption: k is a random integer. The cipher text Cm is a pair of points, calculated as in (2):

Cm = { k·G, Pm + k·PB }   (2)

where Pm is the point encoding the plaintext.

Decryption: Using formula (3), the plaintext is recovered:

Pm = (Pm + k·PB) - nB·(k·G)   (3)

III. MENEZES VANSTONE ELLIPTIC CURVE CRYPTOSYSTEM
The Menezes Vanstone Elliptic Curve Cryptosystem [5] is also based on elliptic curves, but it differs from plain ECC: the message to be encrypted is not embedded on the elliptic curve E; the message is only masked.

Let E be a curve defined over Zp (p prime, p > 3) or over GF(p^n) (n > 1) [5]. Then:
P = Zp* × Zp*, C = E × Zp* × Zp*, with α ∈ E;
the key set is K = {(E, α, a, β) : β = a·α}, where α and β are public but a is secret;
k is a randomly chosen number and x = (x1, x2) ∈ Zp* × Zp* is the plaintext.

The sender:
- determines the generator point α over the elliptic curve E: y² = x³ + ax + b mod p,
- chooses k randomly; the plaintext or message is x = (x1, x2), and the points x1 and x2 do not lie on the elliptic curve E,
- computes β = a·α,
- y0 = k·α,
- (c1, c2) = k·β,
- y1 = c1·x1 mod p,
- y2 = c2·x2 mod p.
After the above values are calculated, the points (y0, y1, y2), which form the cipher text, are sent to the recipient.

The recipient:
- uses the secret key a,
- and, with the equations (c1, c2) = a·y0 and x = (y1·c1⁻¹ mod p, y2·c2⁻¹ mod p), obtains the plaintext.

IV. PROPOSED METHOD
In this work we made some modifications to our previous work [6]. The method is based upon Menezes Vanstone cryptography. In the Menezes Vanstone cryptosystem, however, the message's characters are expressed as randomly produced points, which is a security gap. In our method we divide the message into blocks, where every block holds only one character. The character's hexadecimal digits are used as the coordinate point (x, y). Using these points, encryption is performed with the Menezes Vanstone Elliptic Curve Cryptography algorithm. The obtained points (y0, y1, y2) are then converted to their binary values, and finally every point's least significant bit [7] is changed. Table 1 illustrates the LSB change, turning a trailing "1" into "0" and a trailing "0" into "1".

Table 1: Example of changed LSB values.

Original binary value -> LSB-changed value
11011011 -> 11011010
11010010 -> 11010011

According to our method, the encryption and decryption steps are as follows.

In the encryption step, the sender:
- sets the generator point α over the curve y² = x³ + ax + b mod p,
- decides on the plaintext x,
- divides x into n blocks, each consisting of only one character,
- converts every character to its two hexadecimal digits (m, n); when a digit is one of the letters A, B, C, D, E, F, it is converted to decimal: A→10, B→11, C→12, D→13, E→14, F→15,
- sets m→x1 and n→x2, so that for i = {1, 2, ..., n} every character is expressed as (x1i, x2i),
- chooses k randomly,
- computes y0 = k·α,
- (c1, c2) = k·β,
- y1i = c1·x1i mod p,
- y2i = c2·x2i mod p,
- calculates the points (y0, y1i, y2i),
- converts y0, y1i and y2i separately to binary numbers,
- changes the LSBs of these binary numbers,
- and finally sends the new points (y0, y1j, y2j), j = {1, 2, ..., n}, to the recipient n times.

In the decryption step, the recipient:
- uses the secret key a,
- changes the LSBs of the points (y0, y1j, y2j) back and converts these numbers to decimal values,
- using (c1j, c2j) = a·y0 and x = (y1j·c1j⁻¹ mod p, y2j·c2j⁻¹ mod p), decrypts the cipher text to the plaintext,
- and with the calculation x1j·16 + x2j converts every point back to a character.

Following these calculations, we implemented the application in the C++ programming language. As an example, the sender wants to encrypt the plaintext "Cryptology": the elliptic curve E: y² = x³ + 27x + 31 mod 149 is selected, α is (7, 99), n is calculated as 10, k is 23, and the recipient's secret key is 41. β is calculated as (117, 64).
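The masking and LSB steps can be illustrated with the following C++ sketch. The elliptic-curve scalar multiplications are deliberately omitted: the session values c1 = 127 and c2 = 115 are taken as given assumptions, chosen to be consistent with the worked example (127·4 mod 149 = 61 and 115·3 mod 149 = 47, matching the first row of Table 2 below). It is a minimal sketch, not the full implementation.

    #include <iostream>

    // Modular exponentiation, used for modular inverses mod the prime p.
    long long mod_pow(long long b, long long e, long long p) {
        long long r = 1 % p;
        b %= p;
        while (e > 0) {
            if (e & 1) r = (r * b) % p;
            b = (b * b) % p;
            e >>= 1;
        }
        return r;
    }

    // Inverse mod a prime via Fermat's little theorem: a^(p-2) mod p.
    long long mod_inv(long long a, long long p) { return mod_pow(a, p - 2, p); }

    // The LSB change of Table 1: flipping the last bit is an XOR with 1.
    long long flip_lsb(long long v) { return v ^ 1; }

    int main() {
        const long long p = 149;             // prime of the example curve
        const long long c1 = 127, c2 = 115;  // assumed session values (k*beta)

        long long x1 = 4, x2 = 3;            // 'C' = 0x43 -> point (4, 3)

        long long y1 = (c1 * x1) % p;        // masking: y1 = c1*x1 mod p -> 61
        long long y2 = (c2 * x2) % p;        // y2 = c2*x2 mod p -> 47

        long long t1 = flip_lsb(y1);         // values transmitted with LSB changed
        long long t2 = flip_lsb(y2);

        // Recipient: undo the LSB change, then unmask with c1^-1 and c2^-1.
        long long r1 = (flip_lsb(t1) * mod_inv(c1, p)) % p;
        long long r2 = (flip_lsb(t2) * mod_inv(c2, p)) % p;

        std::cout << static_cast<char>(r1 * 16 + r2) << std::endl;  // prints C
    }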
Table 2: Encryption steps of the word "Cryptology".

j   Char  Hex point  (y0, y1j, y2j)         Binary values of (y0, y1j, y2j)           Cipher text (LSBs changed)
1   C     (4, 3)     ((117, 85), 61, 47)    ((1110101, 1010101), 111101, 101111)      ((1110100, 1010100), 111100, 101110)
2   r     (7, 2)     ((117, 85), 144, 81)   ((1110101, 1010101), 10010000, 1010001)   ((1110100, 1010100), 10010001, 1010000)
3   y     (7, 9)     ((117, 85), 144, 141)  ((1110101, 1010101), 10010000, 10001101)  ((1110100, 1010100), 10010001, 10001100)
4   p     (7, 0)     ((117, 85), 144, 0)    ((1110101, 1010101), 10010000, 0000000)   ((1110100, 1010100), 10010001, 0000001)
5   t     (7, 4)     ((117, 85), 144, 13)   ((1110101, 1010101), 10010000, 1101)      ((1110100, 1010100), 10010001, 1100)
6   o     (6, 15)    ((117, 85), 17, 86)    ((1110101, 1010101), 10001, 1010110)      ((1110100, 1010100), 10000, 1010111)
7   l     (6, 12)    ((117, 85), 17, 39)    ((1110101, 1010101), 10001, 100111)       ((1110100, 1010100), 10000, 100110)
8   o     (6, 15)    ((117, 85), 17, 86)    ((1110101, 1010101), 10001, 1010110)      ((1110100, 1010100), 10000, 1010111)
9   g     (6, 7)     ((117, 85), 17, 60)    ((1110101, 1010101), 10001, 111100)       ((1110100, 1010100), 10000, 111101)
10  y     (7, 9)     ((117, 85), 144, 141)  ((1110101, 1010101), 10010000, 10001101)  ((1110100, 1010100), 10010001, 10001100)
Table 2 shows the encryption steps of our algorithm. The plaintext "Cryptology" is encrypted, the cipher text is obtained, and finally the cipher text is sent to the recipient in blocks. The cipher text delivered to the recipient in each block, as listed in Table 2, is decrypted; the recovered values are the hexadecimal values of the plaintext. With j = {1, 2, ..., 10} and the calculation x1j·16 + x2j, the recipient reaches "Cryptology".
Table 3: Encryption and decryption times.

j   Character  Encryption time (ms)  Decryption time (ms)
1   C          3                     4
2   r          32                    7
3   y          31                    13
4   p          37                    10
5   t          39                    5
6   o          37                    9
7   l          37                    8
8   o          32                    7
9   g          37                    7
10  y          36                    5

In Table 3, the encryption and decryption times of the word "Cryptology" are given in milliseconds. The measurements were made on a machine with 8 GB RAM and a 2.20 GHz processor running Windows 7 Home Premium.
V. CONCLUSION
This paper gives information about the Elliptic Curve Cryptosystem. We used the Menezes Vanstone Elliptic Curve Cryptosystem to encrypt plaintext. Before encryption, every character of the plaintext is converted to a hexadecimal value. After encryption, the cipher text is converted to its binary value, and then the LSB of the binary value is changed. Finally, the cipher text is conveyed to the recipient. The proposed algorithm's encryption and decryption times are given.

In future work we will implement this algorithm on an FPGA to observe the encryption and decryption results for data of different sizes.

REFERENCES
[1] T. E. Kalaycı, "Security and Cryptography on Information Technologies", MSc Thesis, Ege University, 2003.
[2] C. Çimen, S. Akleylek, E. Akyıldız, Mathematics of Password (Cryptography), METU Development Foundation, Ankara, 2007.
[3] R. L. Rivest, A. Shamir and L. Adleman, "A Method for Obtaining Digital Signatures and Public-Key Cryptosystems", Communications of the ACM, 21(2):120-126, February 1978.
[4] A. Koltuksuz, "Securing .NET Architecture With Elliptic Curve Cryptosystems", Izmir Institute of Technology, College of Engineering, Department of Computer Engineering, Izmir, Turkey, 2005.
[5] A. Menezes, S. Vanstone, "Elliptic Curve Cryptosystems and Their Implementation", Journal of Cryptology, 6 (4), pp. 209-224, 1993.
[6] M. Kurt, T. Yerlikaya, "A New Modified Cryptosystem Based on Menezes Vanstone Elliptic Curve Cryptography Algorithm that Uses Characters' Hexadecimal Values", TAEECE 2013, Konya, Turkey, 2013.
[7] M. Kurt, T. Yerlikaya, "An Application of Steganography to 24-bit Color Image Files by Using LSB Method", BulCrypt 2012, 1st International Conference on Bulgarian and Balkans Cryptography, Sofia, Bulgaria, 87-95, 2012.
QUALITY AND COVERAGE ASSESSMENT IN SOFTWARE INTEGRATION BASED ON MUTATION TESTING
Iyad Alazzam1, Kenneth Magel2 and Izzat Alsmadi1
1: Faculty of IT, Yarmouk University - Jordan, eyadh@yu.edu.jo, ialsmadi@yu.edu.jo
2: CS Department, North Dakota State University – USA, Kenneth_Magel@ndsu.nodak.edu
ABSTRACT
The different activities and approaches in software testing try to find the largest possible number of errors or failures with the least possible effort. Mutation is a testing approach used to discover possible errors in tested applications. This is accomplished by changing one aspect of the software from its original form and writing test cases to detect such a change, or mutation. In this paper, we present a mutation approach for testing the integration aspects of software components. Several mutation operations related to component integration are described and evaluated. A case study of several open source code projects is collected, and the proposed mutation operators are applied and evaluated. The results show some insights and information that can help testing activities in detecting errors and improving coverage.

KEY WORDS: Software testing, integration testing, mutation, coverage, software design.

1. INTRODUCTION
In software projects, testing activities aim to check the correctness of the developed software and its conformance with expected requirements and quality standards. Those are the three major goals for conducting all software testing activities. Testing activities may occur in all software development stages (i.e. requirements, design, coding and testing) and not only in the software testing stage. Further, testing can be divided into white box and black box testing, where in white box testing test cases are derived from the code by looking at its internal structure, whereas black box test cases are derived from the requirements to check whether the developed software contains all expected functionalities.

In white box testing, unit testing activities test each component (e.g. a class, a package, etc.) separately to make sure that the component works individually. For example, for a class that has several public methods representing its interface, it is important to first test those public methods to check whether they can receive inputs correctly and produce correct or expected outputs. The internal structure of those public methods, as well as of private methods, should also be checked at this stage to make sure that it contains no real or potential errors or failures. As the next stage after unit or component testing, integration testing combines two or more components and tests them together to make sure that they work collectively to achieve high-level or user-level goals. After integrating components, integration testing can be based on use cases, threads, etc.

In mutation testing, one software aspect (e.g. code, functions, requirements, integration, user interface, etc.) is modified. Tests are then created and executed, and it is evaluated whether those test cases produce results that differ from applying the same test cases to the original software before the mutation. If no test case produces different results between the original and the mutated software, then all test cases have failed to detect (aka kill) the mutation. The idea behind mutation testing is thus related to coverage and its criteria (reachability, propagation and infection). For example, if a small part of the code is changed (i.e. < is changed to >) and all test cases produce exactly the same results as those applied on the original code, then the specific line of code containing the symbol (<) is either not reached by any test case, or the results from those test cases were the same. In the first case, we assume there is a reachability problem and the subject line of code was not reachable by any test case. In the second case, the problem is with propagation or infection, where the differing results (between the original and the mutated run) were not visible in the output.

Two components communicate, or are coupled, with each other if one of them needs the other for a service. In this simple description of integration, the caller component is called the client; it sends the message or the call. The component that contains the service is called the server; it receives the message with certain inputs, processes it, and sends back the output as parameters or results. As such, the infrastructure of any integration contains three components: client, server and media.

In this paper, some mutation operators related to integration testing are presented and evaluated. A case study of several open source codes is assembled to assess the validity of the proposed mutation operators.
The rest of the paper is organized as follows: the next section presents papers related to the subject of this paper; section three presents the experiments and analysis; and the paper is concluded with a conclusion and possible future extensions.
2. RELATED WORK
As mentioned earlier, in the mutation area papers are divided according to the software area from which the mutation operators are generated. As such, we will focus on selecting some papers that discuss mutation in integration testing.

The 1989 paper by Agrawal et al. is an early paper that discusses using mutation in testing to detect new error-prone areas in the code.
In integration testing specifically, Delamaro has several relevant papers (Delamaro et al. 1996; Delamaro and Maldonado 1996; Delamaro et al. 2001). Those papers were pioneers in discussing mutation generation and mutation in integration or interface testing in particular. In the 1996 paper by Delamaro et al., the authors proposed several examples of mutation operators between client and server method calls. Experiments and analysis were based on the Proteum tool, which is discussed in the 1996 paper by Delamaro and Maldonado; the tool itself contains more than 70 types of different operators. The mutation operators were related to method signatures, class and method variables, etc. The mutation score is calculated as the ratio of the number of mutations detected to the total number of injected mutations. In addition to calculating the mutation score, the authors also measured execution and verification time for the set of proposed and evaluated mutation operators. The 2001 paper by Delamaro et al. extended interface or integration testing mutation operators to cover new aspects or concerns to test in the messaging between method call partners.
Offutt also has several publications related to mutation testing in general and integration or coupling testing in particular (e.g. Jin and Offutt 1998). In most of those papers, the authors propose new mutation operators to test a particular aspect, and then evaluate the validity of those operators based on mutation score, coverage, execution efficiency, etc.
3. EXPERIMENTS AND RESULTS
Six mutation operators related to integration testing are discussed in this paper. Table 1 shows a summary of those operators.

TABLE 1: Integration testing mutation operators

Duplicate calling: The mutant calls the same method twice rather than once.
Changing return type: Based on the different value types, the return type is changed from its original type to any other one.
Changing parameter value: For each parameter, the actual value is changed to any other valid value based on the value type.
Swapping method parameters: Method parameters of the same type, if present in a method, are swapped.
Chain call deletion: A call to a method is deleted.
Swap methods: If there are two or more methods in a class with the same type and number of parameters, and the return type is quite similar (can be cast), the methods are swapped.

Table 2 shows a summary of the results from applying the mutation operators to the case study of open source code projects.

TABLE 2: Number of generated mutation operators for all evaluated source codes

Application       Duplicate  Changing     Changing         Swapping method  Chain call  Swap
                  calling    return type  parameter value  parameters       deletion    methods
Word processor    53         2            6                0                13          9
Linked List       6          5            5                1                2           1
Coffee Maker      90         31           26               2                20          7
Cruise Control    26         13           2                0                20          11
Phone directory   86         10           12               2                15          10
Bank              36         2            1                0                8           4
We executed the experiments by running our tool on six different applications: "Word Processor", "Linked List", "Coffee Maker", "Cruise Control", "Phone Directory" and "Bank". We found that the number of mutants created by the duplicate-calling operator is the highest among the mutation operators for all six applications. This is because a method call is one of the most popular ways of connecting two or more classes (modules) together, and because of the characteristics of object-oriented programming (OOP) languages such as inheritance and polymorphism. Equivalent mutants of duplicate calling appear mainly when the methods have no implementation or when they have no other side effects, such as closing and opening a file or a connection to a database. On the other hand, we found that the swapping-method-parameters operator produced the minimum number of mutants; for three applications ("Word Processor", "Cruise Control" and "Bank") we did not get any mutants at all, because this mutation operator needs at least two parameters with the same data type in order to avoid syntax errors in the mutants, which means that in those applications the method parameter types are not similar to each other. Equivalent mutants of swapping method parameters occur mostly when the same value is passed for both parameters.

The results show that the number of mutants created by swapping methods is greater than the number created by swapping method parameters, because swapping methods only requires two methods that return the same data type, regardless of their parameters. Equivalent mutants of swapping methods may occur only when the methods have no implementation or always return the initial values of their return types. Moreover, we have seen that the numbers of mutants created by changing the return type and by changing a parameter value are roughly equivalent. Equivalent mutants of changing parameter values and changing the return type occur when the changed value is equivalent to the original value of the method's parameters or return type.
4. CONCLUSION
Mutation is used as a testing activity to improve coverage and to discover new possible errors or problems in the tested software. Testing the integration aspects between software components is important to make sure that the different components work together to perform their cumulative tasks. In this paper several integration-related papers are discussed and evaluated, and a case study of several open source codes is assembled. Results showed that some mutation operators can be more significant than others in terms of their ability to detect possible problems or to give us insights about software weaknesses, dead code, etc. Future extensions of this work should include a thorough investigation of those integration mutation operators along with their comparison with the integration or coupling operators discussed in previous studies.

REFERENCES
[1] H. Agrawal, R. A. DeMillo, R. Hathaway, Wm. Hsu, W. Hsu, E. Krauser, R. J. Martin, A. P. Mathur and E. Spafford, "Design of Mutant Operators for the C Programming Language", Technical Report SERC-TR-41-P, Software Engineering Research Center, Purdue University, March 1989.
[2] M. E. Delamaro and J. C. Maldonado, "Proteum: A Tool for the Assessment of Test Adequacy for C Programs", Proceedings of the Conference on Performability in Computing Systems, East Brunswick, New Jersey, USA, July 1996, pp. 79-95.
[3] M. E. Delamaro, J. C. Maldonado and A. P. Mathur, "Interface Mutation: An Approach for Integration Testing", IEEE Transactions on Software Engineering, vol. 27, no. 3, pp. 228-247, 2001.
[4] M. E. Delamaro, J. C. Maldonado and A. P. Mathur, "Integration Testing Using Interface Mutation", Proceedings of the Seventh International Symposium on Software Reliability Engineering, 1996.
[5] A. M. R. Vincenzi, J. C. Maldonado, E. F. Barbosa and M. E. Delamaro, "Unit and Integration Testing Strategies for C Programs Using Mutation", Software Testing, Verification and Reliability, vol. 11, no. 4, 2001.
[6] S. C. P. F. Fabbri, "Mutation Testing Applied to Validate Specifications Based on Statecharts", Proceedings of the 10th International Symposium on Software Reliability Engineering, 1999.
[7] S. Ali, L. C. Briand, M. Jaffar-ur Rehman, H. Asghar and M. Z. Z. Iqbal, "A State-Based Approach to Integration Testing Based on UML Models", Information and Software Technology, vol. 49, no. 11-12, November 2007, pp. 1087-1106.
[8] W. Chan, T. Chen and T. Tse, "An Overview of Integration Testing Techniques for Object-Oriented Programs", 2nd ACIS Annual International Conference on Computer and Information Science (ICIS 2002), International Association for Computer and Information Science, Mt. Pleasant, Michigan, 2002.
[9] Z. Jin and A. J. Offutt, "Coupling-based Criteria for Integration Testing", Software Testing, Verification and Reliability, vol. 8, pp. 133-154, 1998.
International Conference and Exhibition on Electronic, Computer and Automation Technologies (ICEECAT’14), May 9-11,2014
Konya, Turkey
Methodological Framework of Business Process Patterns Discovery and Reuse
Laden Aldin
Faculty of Business and Management
Regent’s University London, Inner Circle, Regent’s Park, London NW1 4NS, UK
laden.aldin@regents.ac.uk
Sergio de Cesare
Brunel Business School, Brunel University, Uxbridge UB8 3PH, UK
sergio.decesare@brunel.ac.uk
Mohammed Al-Jobori
Moaden Ltd., Western Avenue, London, W5 1EX, UK
mohammed.aljobori@gmail.com
Abstract - In modern organisations business process modelling
has become fundamental due to the increasing rate of
organisational change. As a consequence, an organisation needs
to continuously redesign its business processes on a regular basis.
One major problem associated with the way business process
modelling is carried out today is the lack of explicit and
systematic reuse of previously developed models. Enabling the
reuse of previously modelled behaviour can have a beneficial
impact on the quality and efficiency of the overall information
systems development process and also improve the effectiveness
of an organisation’s business processes. The purpose of the
presented research paper is to develop a methodological
framework for achieving reuse in BPM via the discovery and
adoption of patterns. The framework is called Semantic
Discovery and Reuse of Business Process Patterns (SDR). SDR
provides a systematic method for identifying patterns among
organisational data assets representing business behaviour. The
framework proposes the use of semantics to drive both the
discovery of patterns as well as their reuse.
Keywords: pattern, business process modelling, reuse, semantics, framework
advantages, thus reducing both time and cost of generating
business process models and their subsequent transformation
into software designs of enterprise applications.
However, the systematic adoption of patterns in BPM
cannot be a simple transposition of the experience acquired by
the design patterns community in software engineering. This is
due to some essential differences between business modelling
and software design. While the latter involves the
representation of an engineered artefact (i.e., software), the
former concerns the representation of behaviour of a real
world system (i.e., the business organisation). As such
business process patterns should ideally be discovered from
the empirical analysis of organisational processes. The
discovery of real world patterns should resemble the process
of discovery of scientific theories; both must be based on
empirical data of the modelled phenomena. Empiricism is
currently not the basis for the discovery of patterns for BPM
and no systematic methodology for collecting and analysing
process models of business organisations currently exists.
Thus, this research study aims at developing such a
methodology. In particular, the novel contribution of this
research is a Semantic Discovery and Reuse of Business
Process Patterns methodological framework (SDR) that
enables business modellers to empirically discover business
process patterns and to reuse such patterns in future
development projects.
The remainder of this paper is structured as follows: Section
2 summarises the study background and its related work;
Section 3 shows the proposed SDR methodological framework
in detail, and with more focus on its discovery lifecycle.
Finally, Section 4 addresses the conclusions and future work.
I. INTRODUCTION
The modelling of business processes and their subsequent
automation, in the form of workflows, constitutes a
significant part of information systems development (ISD)
within large modern enterprises. Business processes (BP) are
designed on a regular basis in order to align operational
practices with an organisation’s changing requirements [1]. A
fundamental problem in the way business process modelling
(BPM) is carried out today is the lack of explicit and
systematic reuse of previously developed models. Although all
business processes possess unique characteristics, they also do
share many common traits making it possible to classify
business processes into generally recognised patterns of
organisational behaviour.
Patterns have become a widely accepted architectural
technique in software engineering. Patterns are general
solutions to recurring problems. A pattern generally includes a
generic definition of the problem, a model solution and the
known consequences of applying the pattern. In business
process modelling the use of patterns is quite limited. Apart
from a few sporadic attempts proposed by the literature [2 –
3], pattern-based business process modelling is not
commonplace. The benefits of adopting patterns are
numerous. For example, as the academic literature and industry reports document, the adoption of design patterns in software engineering projects improves reuse of shared experiences, reduces redundant code, reduces design errors and accelerates the learning curve [4]. As a consequence, it is conceivable that patterns in BPM can produce similar advantages, thus reducing both the time and cost of generating business process models and their subsequent transformation into software designs of enterprise applications.
However, the systematic adoption of patterns in BPM cannot be a simple transposition of the experience acquired by the design patterns community in software engineering. This is due to some essential differences between business modelling and software design. While the latter involves the representation of an engineered artefact (i.e., software), the former concerns the representation of the behaviour of a real world system (i.e., the business organisation). As such, business process patterns should ideally be discovered from the empirical analysis of organisational processes. The discovery of real world patterns should resemble the process of discovery of scientific theories; both must be based on empirical data about the modelled phenomena. Empiricism is currently not the basis for the discovery of patterns for BPM, and no systematic methodology for collecting and analysing process models of business organisations currently exists. Thus, this research study aims at developing such a methodology. In particular, the novel contribution of this research is the Semantic Discovery and Reuse of Business Process Patterns methodological framework (SDR), which enables business modellers to empirically discover business process patterns and to reuse such patterns in future development projects.
The remainder of this paper is structured as follows: Section 2 summarises the study background and related work; Section 3 presents the proposed SDR methodological framework in detail, with particular focus on its discovery lifecycle; finally, Section 4 addresses the conclusions and future work.
II. BACKGROUND
As noted in the introduction, the modelling of business processes and their subsequent automation, in the form of workflows, constitutes a significant part of ISD within large modern enterprises, and a fundamental problem in the way BPM is carried out today is the lack of explicit and systematic reuse of previously developed models.
The idea of patterns for the design of artefacts can be traced
to Alexander (1977) and his description of a systematic
method for architecting a range of different kinds of physical
structures in the field of Civil Architecture. Research on
patterns has been conducted within a broad range of
disciplines from Civil Architecture [5] to Software and Data
Engineering [6, 7, 8, 9, 10, 2] and more recently there has also
been an increased interest in business process patterns
specifically in the form of workflows. Russell et al. (2004)
introduced a number of workflow resource patterns aimed at
capturing the various ways in which resources are represented
and utilised in workflows. Popova and Sharpanskykh (2008)
stated that these patterns provide an aggregated view on
resource allocation that includes authority-related aspects and
the characteristics of roles. This greater interest is primarily
due to the emergence of the service-oriented paradigm in
which workflows are composed by orchestrating or
choreographing Web services due to their platform-agnostic
nature and ease of integration [13]. van der Aalst et al. (2000) produced a set of so-called workflow patterns; these, referred to as the 'Process Four' or P4 list, describe 20 patterns specific to processes.
This initiative started by systematically evaluating features of
Workflow Management (WfM) systems and assessing the
suitability of their underlying workflow languages. However,
as Thom et al. (2007) point out, these workflow patterns are
relevant towards the implementation of WfM systems rather
than identifying business activities that a modeller can
consider repeatedly in different process models. In fact, these
workflow patterns [16] are patterns of reusable control
structures (for example, sequence, choice and parallelism)
rather than patterns of reusable business processes subject to
automation. As such these patterns do not, on their own,
resolve the problems of domain reuse in modelling
organisational processes. Consequently new types of business
process patterns are required for reusing process models [17].
The MIT Process Handbook project started in 1991 with the aim of establishing an online library for sharing knowledge about business processes. The Process Handbook presents a redesign methodology based on concepts such as process specialisation, dependencies and coordinating mechanisms [3]. The business processes in the library are organised hierarchically to facilitate the exploration of process design alternatives. The hierarchy builds on an inheritance relationship between verbs that refer to the represented business activity; there is a list of eight generic verbs: 'create', 'modify', 'preserve', 'destroy', 'combine', 'separate', 'decide' and 'manage'. Furthermore, the MIT Process Handbook represents a catalogue of common business patterns and has inspired several projects, among them that of Peristeras and Tarabanis (2000), who used the MIT Process Handbook to propose a Public Administration General Process Model.
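As a rough illustration of such a verb-based specialisation hierarchy, the sketch below models a tiny fragment of a process library in which a specialised process inherits the activities of its more general parent; the node contents are invented for illustration and are not taken from the Handbook.

    # Minimal sketch of a process-specialisation hierarchy of the kind used
    # by the MIT Process Handbook: generic verbs at the top, specialised
    # business processes (inheriting their parent's activities) underneath.
    class Process:
        def __init__(self, name, parent=None, extra_activities=()):
            self.name, self.parent = name, parent
            self.extra_activities = list(extra_activities)

        def activities(self):
            inherited = self.parent.activities() if self.parent else []
            return inherited + self.extra_activities

    create = Process("Create", extra_activities=["produce an output"])
    create_order = Process("Create purchase order", create, ["fill order form"])
    print(create_order.activities())  # inherits 'produce an output'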
The patterns movement can be seen to provide a set of 'good practice' building blocks that extend well beyond software development to describe design solutions for generic business process problems. These business process patterns provide a means of designing new processes from a richer, structured repository of process knowledge built by describing, analysing and redesigning a wide variety of organisational processes.
Patterns have been applied to various phases of the information systems engineering lifecycle (e.g., analysis, design and implementation). However, the application of patterns to business process modelling has been limited, with research having been conducted by only a few groups over the past 20 years. While limited, such research has been highly effective in proposing different sets and kinds of patterns (e.g., the Workflow Patterns for Business Process Modelling of Thom et al. (2007)). A problem that has not been effectively researched, however, is how process patterns are discovered, and how such an activity can be made systematic via a methodology that integrates the production of reusable process patterns into traditional BPM. This paper investigates this problem and proposes such a methodology.
It can be seen from the background investigation that
existing patterns provide limited support to resolving the
problems of domain reuse in modelling organisational
processes. Although more and more researchers and
practitioners recognise the importance of reusability in BPM
[19], little consensus has been reached as to what constitutes a
business process pattern. Therefore, the need arises to provide
patterns that support the reuse of BPM, as patterns offer the
potential of providing a viable solution for promoting
reusability of recurrent generalised models.
Most of the patterns community mentioned earlier agrees
that patterns are developed out of the practical experience of
real projects by stating, for example, that ‘patterns reflect
lessons learned over a period of time’ [20, p. 45]. During that
process, someone creates and documents a solution for a
certain problem. In similar situations, this person refers to the
solution that was documented before and adds new
experiences. This may lead to a standard way of approaching a
certain problem and therefore constitutes the definition of a
pattern. Thus, each pattern captures the experience of an
individual in solving a particular type of problem.
Often, however, with every new project analysts create new models without referencing what was done in previous projects. Providing systematic support for the discovery and reuse of patterns in BPM can help resolve this problem.
A conceptual technology that has gained popularity recently
and that can play a useful role in the systematic discovery as
well as the precise representation and management of business
process patterns is ontology. Ontologies have the potential of
improving the quality of the produced patterns and of the
modelling process itself, due to the fact that ontologies are aimed at providing semantically accurate representations of real world domains.
III. SDR METHODOLOGICAL FRAMEWORK
A. SDR Cross-fertilisation of Disparate Disciplines
The issues identified in the literature review are investigated in the context of the overall discovery and reuse objectives. The lack of guidelines for modellers as to how business process patterns can be discovered must first be resolved, as it forms the basis for attempting to resolve further issues. Evolving a methodology to support the finding of business process patterns represents an important area of work. Such a methodology guides the application process and acts as a reference document for situations where the methodology is applied. Therefore, the design of the SDR methodological framework, for empirically deriving ontological patterns of business processes from organisational knowledge sources (i.e. documentation, legacy systems, domain experts, etc.), is essential.
In this study, the cross-fertilisation of disparate disciplines or research fields tackles the design of the SDR methodological framework of business process patterns. More specifically, three main domains (i.e., domain engineering, ontologies and patterns) are deemed relevant and helpful in addressing this research problem. Hence, as illustrated in Figure 1, the intersection amongst these research domains symbolises the context of the current study. The construction of the Semantic Discovery and Reuse methodological framework is based on the following foundations.
Figure 1. Relevant Domains to Develop the SDR Framework
First, Domain Engineering (DE) is an engineering discipline concerned with building reusable assets, such as specification sets, patterns, and components, in specific domains [21]. Domain engineering deals with two main layers: the domain layer, which deals with the representation of domain elements, and the application layer, which deals with software applications and information systems artefacts [22]. In other words, programs, applications, or systems are included in the application layer, whereas their common and variable characteristics, as can be described, for example, by patterns, ontology, or emerging standards, are generalised and presented in the domain layer.
Domain Engineering is the process of defining the scope (i.e., domain definition), analysing the domain (i.e., domain analysis), specifying the structure (i.e., domain architecture development) and building the components (e.g., requirements, designs and documentation) for a class of subsystems that will support reuse [23].
Domain engineering as a discipline has practical significance as it can provide methods and techniques that may help reduce time-to-market, product cost and project risks on the one hand, and help improve product quality and performance on a consistent basis on the other. Thus, the main reason for bringing domain engineering into this study is that information used in developing systems in a domain is identified, captured and organised with the purpose of making it reusable when creating or improving other systems. The use of domain engineering has four basic benefits [24], as follows:
• Identification of reusable entities.
• Abstraction of entities.
• Generalisation of solutions.
• Classification and cataloguing for future reuse.
Therefore, the SDR methodology is based on a dual lifecycle model as proposed by the domain engineering literature [22]. This model defines two interrelated lifecycles: (1) a lifecycle aimed at generating business process patterns, called the Semantic Discovery Lifecycle (SDL), and (2) a lifecycle aimed at producing business process models, called the Semantic Reuse Lifecycle (SRL). Figure 2 illustrates the SDR methodological framework.
Second, the phases of the former lifecycle have been classified according to the Content Sophistication (CS) methodology [25]. CS is an ontology-based approach that focuses on the extraction of business content from existing systems and on improving such content along several dimensions. CS was followed as it allows the organisation to understand and document knowledge in terms of its business semantics, providing scope for future refinements and reuse. Therefore, the Semantic Discovery Lifecycle is based on the four phases (called disciplines) of the Content Sophistication methodology. SDL defines four phases, three of which are based on CS, as follows: (1) a phase aimed at acquiring legacy assets and organising them in a repository, called Preparation of Legacy Assets (POLA); (2) a phase aimed at ontologically interpreting elements of existing process diagrams (or, in general, data sources of organisational behaviour), called Semantic Analysis of BP Models (SA); and (3) a phase aimed at generalising models to patterns, called Semantic Enhancement of BP Models (SE).
Third, the last phase of SDL documents the discovered patterns. A pattern generally includes a generic definition of the problem, a model solution and the known consequences of applying the pattern [26]. Patterns can produce many advantages: (1) reducing both the time and cost of generating business process models and their subsequent transformation into software designs of enterprise applications; (2) improving modelling by replacing an ad hoc approach with a successful one; (3) promoting reuse of business processes; and (4) reuse has the longer-term benefit of encouraging and reinforcing consistency and standardisation. Thus, the fourth phase of the SDL, called Pattern Documentation, provides a way of documenting the patterns identified.
Figure 2: SDR Methodological Framework
The first lifecycle, the Semantic Discovery Lifecycle (SDL), initiates with the preparation of the organisational legacy assets and finishes with the production of business process patterns, which then become part of the pattern repository. The second lifecycle, the Semantic Reuse Lifecycle (SRL), is aimed at producing business process models with the support of the patterns discovered during the SDL. In this framework the SRL is dependent on the SDL only in terms of the patterns that are produced by the SDL. The two lifecycles are, for all other purposes, autonomous and can be performed by different organisations.
B. The Discovery Lifecycle of SDR
The Semantic Discovery Lifecycle (SDL) initiates with the procurement and organisation of legacy sources and finishes with the production of business process patterns, which then become part of the pattern repository. The repository feeds into the Semantic Reuse Lifecycle. The phases of the SDL are as follows:
Phase 1: Preparation of Legacy Assets (POLA). This phase provides SDL with organisational legacy assets that demonstrate the existence of certain types of models as well as their generalised recurrence across multiple organisations. During this phase business process models are also extracted from the legacy assets. These models are typically process flow diagrams, such as BPMN diagrams.
Phase 2: Semantic Analysis of BP Models (SA). This phase, along with the following one, represents the core of SDL. The elements of the process diagrams generated in phase one are semantically interpreted in order to derive ontological models of the processes that are more precise and semantically richer than their predecessors. Interpretation identifies the business objects whose existence the process commits to. Interpretation explicitly brings the business processes as close as possible to real world objects, which ensures the grounding of the patterns in real world behaviour. For this phase the object paradigm (Partridge, 1996) provides a sound ontological foundation.
Phase 3: Semantic Enhancement of BP Models (SE). This phase takes the ontological models created in SA and aims at generalising them to existing patterns or to newly developed patterns. Generalisation is an abstraction principle that allows defining an ontological model as a refinement of other ontological models. It establishes a relationship between a general and a specific model, where the specific model contains all the activities of the general model and more.
Phase 4: Pattern Documentation. This is the fourth and last phase of SDL. Documentation plays an important role, bringing people from different groups together to negotiate and coordinate common practice, as it plays a central role in global communication. In this study business process patterns use a template proposed by [3] to represent the different aspects (e.g., intent, motivation, etc.) of a process pattern. Additionally, structure will be added to organise the discovered patterns into a hierarchy. The primary motivation behind this rationale is to describe the different BP elements from which the discovered patterns were generalised or extracted, so that unwanted ambiguities related to the application and use of the pattern can be avoided.
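A minimal sketch of the kind of documentation record described in Phase 4 is given below; the fields 'intent' and 'motivation' follow the template of [3] as cited in the text, while the remaining field names and the example values are assumptions made purely for illustration.

    # Illustrative pattern-documentation record; fields beyond 'intent' and
    # 'motivation' are assumptions, not the exact template of [3].
    from dataclasses import dataclass, field

    @dataclass
    class ProcessPattern:
        name: str
        intent: str
        motivation: str
        model_solution: str            # e.g. a reference to a BPMN diagram
        known_consequences: list[str] = field(default_factory=list)
        specialises: "ProcessPattern | None" = None   # hierarchy link

    order_pattern = ProcessPattern(
        name="Order fulfilment",
        intent="Deliver goods against a confirmed customer order",
        motivation="Recurs across retail and manufacturing domains",
        model_solution="order_fulfilment.bpmn",
        known_consequences=["standardises hand-offs between departments"],
    )
    print(order_pattern.name)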
IV. CONCLUSION
The necessity of changing the way in which organisations
do business and provide value in order to survive and flourish
in a high-tech market has been recognised by both academics
and industries. The resulting SDR methodology is intended to adequately support business process modelling: it allows the capture and recording of pattern discovery and evolution, and the reuse of those patterns in future developments.
The SDR methodological framework overcomes two
limitations of previous research on business process patterns.
Firstly, the workflow patterns defined by van der Aalst et al. (2003) model common control structures of workflow languages and are not aimed at modelling the generic processes of a business domain (such as an industrial sector). Secondly, the
patterns research community to date has dedicated limited
attention to the process of patterns discovery. The unique
features of the SDR methodological framework are its dual
lifecycle model, its use of semantics and the grounding in real
world legacy.
Our research study is continuing in several directions.
Firstly, we are applying the full version of the developed
patterns in an industrial domain to check their validity and
solve the problem of domain reuse in modelling organisational
processes, which exists in current business process patterns.
Secondly, the SDR is being extended to include domains not yet included in it. Thirdly, we are working on the application of the reuse lifecycle. Finally, we are improving the way the discovered patterns are classified in order to facilitate their practical use.
REFERENCES
[1] Azoff, A., Kellett, A., Roy, S. & Thompson, M., Business Process Management: Building End-to-end Process Solutions for the Agile Business, Butler Direct Ltd., Technology Evaluation and Comparison Report, 2007.
[2] Eriksson, H. & Penker, M., Business Modelling with UML: Business Patterns at Work, New York, Chichester: John Wiley & Sons, 2000.
[3] Malone, T., Crowston, K. & Herman, G., Organizing Business Knowledge: The MIT Process Handbook, Cambridge, Mass., London: MIT Press, 2003.
[4] Cline, M.P., "The pros and cons of adopting and applying design patterns in the real world", Communications of the ACM, vol. 39, no. 10, pp. 47-49, 1996.
[5] Alexander, C., Ishikawa, S., Silverstein, M. & Centre for Environmental Structure, A Pattern Language: Towns, Buildings, Construction, New York: Oxford University Press, 1977.
[6] Beck, K. & Cunningham, W., "Using Pattern Languages for Object-Oriented Programs", Technical report, Tektronix, Inc., presented at the Workshop on Specification and Design for Object-Oriented Programming (OOPSLA), Orlando, Florida, USA, 1987.
[7] Coad, P., "Object-oriented patterns", Communications of the ACM, vol. 35, no. 9, pp. 152-159, 1992.
[8] Gamma, E., Helm, R., Johnson, R. & Vlissides, J., Design Patterns: Elements of Reusable Object-Oriented Software, Reading, Mass.: Addison-Wesley, 1995.
[9] Hay, D.C., Data Model Patterns: Conventions of Thought, New York, USA: Dorset House, 1996.
[10] Fowler, M., Analysis Patterns: Reusable Object Models, Menlo Park, Calif.: Addison-Wesley, 1997.
[11] Russell, N., ter Hofstede, A.H.M., Edmond, D. & van der Aalst, W.M.P., "Workflow Resource Patterns", BETA Working Paper Series, WP 127, Eindhoven University of Technology, 2004.
[12] Popova, V. & Sharpanskykh, A., "Formal goal-based modeling of organizations", in: Proceedings of the Sixth International Workshop on Modelling, Simulation, Verification and Validation of Enterprise Information Systems (MSVVEIS'08), INSTICC Press, 2008.
[13] Rosenberg, F., et al., "Top-down business process development and execution using quality of service aspects", Enterprise Information Systems, 2(4), 459-475, 2008.
[14] van der Aalst, W.M.P., ter Hofstede, A.H.M. & Kiepuszewski, B., "Advanced Workflow Patterns", 7th International Conference on Cooperative Information Systems (CoopIS), ed. O. Etzion and P. Scheuermann, Lecture Notes in Computer Science, Springer-Verlag, Heidelberg, Berlin, pp. 18, 2000.
[15] Thom, L., Lau, J., Iochpe, C. & Reichert, M., "Workflow Patterns for Business Process Modelling", in: 8th Workshop on Business Process Modeling, Development, and Support, in conjunction with CAiSE (International Conference on Advanced Information Systems Engineering), Trondheim, Norway, June 12-14, 2007.
[16] van der Aalst, W.M.P., ter Hofstede, A.H.M., Kiepuszewski, B. & Barros, A.P., "Workflow Patterns", QUT Technical Report FIT-TR-2002-02, Queensland University of Technology, Brisbane, 2002 (also see http://www.tm.tue.nl/it/research/patterns). To appear in Distributed and Parallel Databases, 2003.
[17] Aldin, L., de Cesare, S. & Lycett, M., "Semantic Discovery and Reuse of Business Process Patterns", 4th Annual Mediterranean Conference on Information Systems, Athens University of Economics and Business (AUEB), Greece, September 25-27, pp. 1-6, 2009.
[18] Peristeras, V. & Tarabanis, K., "Towards an enterprise architecture for public administration using a top-down approach", European Journal of Information Systems, vol. 9, pp. 252-260, December 2000.
[19] di Dio, G., "ARWOPS: A Framework for Searching Workflow Patterns Candidate to be Reused", Second International Conference on Internet and Web Applications and Services (ICIW), IEEE, May 13-19, Mauritius, pp. 33, 2007.
[20] Kaisler, S.H., Software Paradigms, Hoboken, N.J., USA: John Wiley & Sons, 2005.
[21] Arango, G., "Domain Engineering for Software Reuse", PhD Thesis, Department of Information Systems and Computer Science, University of California, Irvine, 1998.
[22] Foreman, J., "Product Line Based Software Development: Significant Results, Future Challenges", Software Technology Conference, Salt Lake City, UT, United States, April 23, 1996.
[23] Nwosu, K.C. & Seacord, R.C., "Workshop on Component-Based Software Engineering Processes", Technology of Object-Oriented Languages and Systems, IEEE, Santa Barbara, California, USA, pp. 532, 1999.
[24] Prieto-Díaz, R., "Domain analysis: an introduction", ACM SIGSOFT Software Engineering Notes, vol. 15, no. 2, pp. 47-54, 1990.
[25] Daga, A., de Cesare, S., Lycett, M. & Partridge, C., "An Ontological Approach for Recovering Legacy Business Content", Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS), January 3-6, Los Alamitos, California, IEEE Computer Society Press, pp. 224, 2005.
[26] Cabot, J. & Raventós, R., "Roles as Entity Types: A Conceptual Modelling Pattern", ER 2004, pp. 69-82, 2004.
International Conference and Exhibition on Electronic, Computer and Automation Technologies (ICEECAT’14), May 9-11,2014
Konya, Turkey
Rule-Based Modeling of Heating and Cooling Performances of RHVT Positioned at Different Angles with the Horizontal
Yusuf YILMAZ1, Sakir TASDEMIR2, Kevser DINCER1
1 Faculty of Engineering, Department of Mechanical Engineering, Selcuk University, Konya, Turkey
yyilmaz@selcuk.edu.tr, kdincer@selcuk.edu.tr
2 Higher School of Vocational and Technical Sciences, Selcuk University, Konya, Turkey
stasdemir@selcuk.edu.tr
Abstract - The counter-flow Ranque-Hilsch vortex tube (RHVT) used in this study was made of brass. The internal diameter (D) of the vortex tube was 15 mm and its length was L = 15D. The number of nozzles was 5. A conical tip plug with a diameter of 5 mm was mounted right over the hot fluid exit of the vortex tube. Throughout the tests, the valve on the cold stream exit side was set at full throttle, whereas the valve on the hot exit was moved gradually from full throttle towards a nearly closed position; in this way, the system pressure, temperature and volumetric flow rates were measured and used to find the best performance of counter-flow Ranque-Hilsch tubes positioned at different angles with the horizontal (0°, 30°, 60°, 90°). In this study, the heating and cooling performances of the counter-flow RHVT were experimentally investigated and modeled with the RBMTF (Rule-Based Mamdani-Type Fuzzy) modeling technique. The input parameters (ξ, α) and output parameters (ΔTh, ΔTc) were described by RBMTF if-then rules. Numerical values of the input and output variables were fuzzified into the linguistic classes Very Low (L1), Low (L2), Negative Medium (L3), Medium (L4), Positive Medium (L5), High (L6) and Very High (L7). The absolute fraction of variance (R²) was found to be 99.06% for ΔTh and 99.05% for ΔTc. The actual values and the RBMTF results indicated that RBMTF can be successfully used for the determination of the heating and cooling performances of counter-flow Ranque-Hilsch tubes positioned at different angles with the horizontal.
Keywords - Rule base, fuzzy, RHVT, heating, cooling, temperature separation.
I. INTRODUCTION
Ranque (1933) invented the vortex tube and first reported on energy separation. Later, Hilsch (1947) published systematic experimental results on this effect. Since then, this phenomenon has attracted the interest of many scientists. Vortex tubes can be classified into two types. In parallel-flow RHVTs, both the hot and cold flows leave the vortex tube in the same direction; it is not possible for the cold flow to turn back after a stagnation point. In order to separate the flow in the center of the tube from the flow at the wall, an apparatus with a hole in its center is used, and the temperature of the hot and cold flows can be changed by moving this apparatus back and forth. In parallel-flow RHVTs, the hot and cold flows mix with each other. The working principle of the counter-flow RHVT can be defined as follows. Compressible fluid, which is tangentially introduced into the vortex tube through nozzles, starts to make a circular movement inside the vortex tube at high speed, because of the cylindrical structure of the tube and depending on its inlet pressure and speed. As a result, a pressure difference occurs between the tube wall and the tube center because of the friction of the fluid circling at high speed. The speed of the fluid near the tube wall is lower than the speed at the tube center because of the effects of wall friction. As a result, fluid in the center region transfers energy to the fluid at the tube wall, depending on the geometric structure of the vortex tube. The cooled fluid leaves the vortex tube from the cold output side, moving in the direction opposite to the main flow after a stagnation point, whereas the heated fluid leaves the tube in the main flow direction from the other end of the tube [1]. Dincer et al. [1, 2] studied fuzzy modeling of the performance of counter-flow Ranque-Hilsch vortex tubes with different geometric constructions and noted that the fuzzy expert system results agree well with the experimental data. Tosun et al. [3] studied rule-based Mamdani-type fuzzy modelling of the thermal performance of multi-layer precast concrete panels used in residential buildings in Turkey and found that RBMTF can be used as a reliable modelling method in such studies.
In this study, the performance of counter-flow Ranque-Hilsch brass vortex tubes positioned at different angles with the horizontal was modeled with fuzzy logic. The fuzzy logic predictions were compared with experimental results and were found to be compatible with each other. Furthermore, predictions were carried out for values not studied experimentally.
II. EXPERIMENTAL STUDY
In this study, a counter-flow RHVT made of brass was used. The internal diameter (D) of the vortex tube was 15 mm and its length was L = 15D. The number of nozzles is 5, with a nozzle cross-section area (NCSA) of 3×3 mm². A conical tip plug with a diameter of 5 mm was mounted right over the hot fluid exit of the vortex tube. Compressed air was supplied by a rotary screw compressor. Air coming from the compressor was introduced into the vortex tube via the nozzles. The temperatures of the cold outlet flow, the hot outlet flow and the inlet flow were measured with 24-gauge copper-constantan thermocouples. Throughout the tests, the valve on the cold stream exit side was set at full throttle, whereas the valve on the hot exit was moved gradually from full throttle towards a nearly closed position; in this way, the system pressure, temperature and volumetric flow rates were measured and used to find the best performance of counter-flow Ranque-Hilsch tubes positioned at different angles with the horizontal (0°, 30°, 60°, 90°). The heating and cooling performance of the RHVT, which is made of brass containing 33 percent zinc, was investigated experimentally. The heating performance (ΔTh) of the RHVT is defined in Eq. 1 and its cooling performance (ΔTc) in Eq. 2.
ΔTh = Th - Ti (1)
ΔTc = Tc - Ti (2)
where Th is the temperature of the hot stream, Tc is the temperature of the cold stream and Ti is the temperature of the inlet stream. In this study, the heating and cooling performance of RHVTs was determined by taking the cold stream fraction into consideration. The cold flow fraction (ξ) is defined as the ratio of the mass flow rate of the cold stream (ṁc) to the mass flow rate of the inlet stream (ṁi):
ξ = ṁc / ṁi (3)
In this study, the flow was controlled by a valve on the hot outlet side, which was varied from a nearly closed to a nearly open position; in this way, the range ξ = 0.1-0.9 was covered.
III. FUZZY MODELLING FOR THERMAL PERFORMANCES OF COUNTER-FLOW RHVT FOR DIFFERENT GEOMETRIC CONSTRUCTIONS
The theory of fuzzy subsets was introduced by Zadeh in 1965 as an extension of set theory, replacing the characteristic function of a set by a membership function whose values range from 0 to 1. RBMTF is basically a multi-valued logic that allows intermediate values to be defined between conventional evaluations like yes/no, true/false, black/white, large/small, etc. [2, 4, 5].
In this study, the thermal performances of counter-flow Ranque-Hilsch vortex tubes with different geometric constructions were investigated and modeled with the RBMTF modeling technique. The model proposed in this study is a two-input, two-output model (Figure (Fig.) 1). The fuzzy triangular membership functions of the input variables (ξ, α) and the output variables (ΔTh, ΔTc) are shown in Fig. 2 and Fig. 3. RBMTF was designed using the MATLAB Fuzzy Logic Toolbox under Windows XP. With the linguistic variables used, 36 rules were obtained for this system. The actual and RBMTF heating-cooling performance values for the RHVT are presented in Figs. 4-5.
Figure 1. Designed RBMTF structure
Figure 2. Fuzzy membership functions for the two input variables: a) ξ, b) α
Figure 3. Fuzzy membership functions for the two output variables: a) ΔTh, b) ΔTc
Figure 4. The heating performance and the cooling performance of RHVT at α=0° and α=30°
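As a rough illustration of the kind of Mamdani-style machinery involved, the sketch below implements a triangular membership function and evaluates the firing strength of one hypothetical if-then rule; the breakpoints and the rule are invented for illustration and are not part of the paper's actual 36-rule base (which is given in Figs. 2-3).

    # Minimal sketch of Mamdani-style fuzzy evaluation with triangular
    # membership functions. The breakpoints below are hypothetical.
    def tri(x, a, b, c):
        """Triangular membership: 0 at a and c, 1 at the peak b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # Hypothetical linguistic classes for the cold flow fraction xi.
    xi_low    = lambda xi: tri(xi, 0.0, 0.2, 0.45)
    xi_medium = lambda xi: tri(xi, 0.3, 0.5, 0.7)

    # One Mamdani rule: IF xi is Medium THEN dTh is High; the rule's
    # firing degree is the membership value of the antecedent.
    xi = 0.55
    firing = xi_medium(xi)
    print(f"rule 'xi is Medium -> dTh is High' fires at {firing:.2f}")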
Figure 5. The heating performance and the cooling performance of RHVT at α=60° and α=90°
In addition, the absolute fraction of variance (R²) was defined as follows [6]:
R² = 1 - [Σj (tj - oj)²] / [Σj (oj)²] (4)
where t is the target value, o is the output value and the index j runs over the patterns [6]. The statistical value of R² for ΔTh is 99.06% and for ΔTc it is 99.05% (Fig. 6). When Fig. 6 is studied, it is seen that the actual values and the values obtained with the fuzzy technique are very close to each other. Experiments that were not performed are predicted with RBMTF for α=75° (Fig. 7).
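A minimal sketch of Eq. 4 follows, with made-up target and output values purely for illustration; the paper's actual data are those behind Figs. 4-6.

    # Absolute fraction of variance (Eq. 4): R^2 = 1 - sum((t-o)^2) / sum(o^2).
    def r_squared(targets, outputs):
        num = sum((t - o) ** 2 for t, o in zip(targets, outputs))
        den = sum(o ** 2 for o in outputs)
        return 1.0 - num / den

    # Hypothetical experimental (target) vs. RBMTF (output) values:
    t = [4.8, 6.1, 7.4, 8.0]
    o = [4.9, 6.0, 7.5, 7.8]
    print(f"R2 = {100 * r_squared(t, o):.2f} %")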
Figure 6. Comparison of the actual and RBMTF results for ΔTh and ΔTc
Figure 7. The heating performance and the cooling performance of RHVT at α=75°
IV. CONCLUSIONS
The aim of this study has been to show the possibility of using the RBMTF technique for calculating the performance of counter-flow Ranque-Hilsch brass vortex tubes positioned at different angles with the horizontal. When the analysis was assessed, the thermal performances of the counter-flow RHVT obtained from the fuzzy model were very close to the experimental results.
Further studies may focus on different methods. This system can also be developed and expanded by adding artificial intelligence methods, mixed systems and statistical approaches. Moreover, quite close results can be obtained by either increasing the number of input and output parameters or by using double or multiple hidden layers.
ACKNOWLEDGMENT
This study has been supported by the Scientific Research Projects of Selcuk University.
REFERENCES
[1] A. Berber, K. Dincer, Y. Yılmaz, D.N. Ozen, "Rule-based Mamdani-type fuzzy modeling of heating and cooling performances of counter-flow Ranque-Hilsch vortex tubes with different geometric construction for steel", Energy, 51 (2013) 297-304.
[2] K. Dincer, S. Tasdemir, S. Baskaya, I. Ucgul, B.Z. Uysal, "Fuzzy modeling of performance of counterflow Ranque-Hilsch vortex tubes with different geometric constructions", Numerical Heat Transfer, Part B, 54 (2008) 499-517.
[3] M. Tosun, K. Dincer, S. Baskaya, "Rule-based Mamdani-type fuzzy modelling of thermal performance of multi-layer precast concrete panels used in residential buildings in Turkey", Expert Systems with Applications, 38 (2011) 5553-5560.
[4] S. Tasdemir, I. Saritas, M. Ciniviz, N. Allahverdi, "Artificial neural network and fuzzy expert system comparison for prediction of performance and emission parameters on a gasoline engine", Expert Systems with Applications, 38 (2011) 13912-13923.
[5] H. Yalcin, S. Tasdemir, "Fuzzy expert system approach for determination of α-linoleic acid content of eggs obtained from hens by dietary flaxseed", Food Science and Technology International, 2007, 217-223.
[6] A. Sözen, E. Arcaklioglu, "Exergy analysis of an ejector-absorption heat transformer using artificial neural network approach", Applied Thermal Engineering, 27 (2007) 481-491.
International Conference and Exhibition on Electronic, Computer and Automation Technologies (ICEECAT’14), May 9-11,2014
Konya, Turkey
Comparison of the Information Retrieval Performance of Maximum Match and Fixed Length Stemming
M. BALCI1 and R. SARAÇOĞLU2
1 Selcuk University, Konya/Turkey, mehmetbalci@selcuk.edu.tr
2 Yüzüncü Yıl University, Van/Turkey, ridvansaracoglu@yyu.edu.tr
Abstract - Since most of the textual data stored in electronic media is written in natural language, natural language processing has become important in text mining, and knowledge of the structure of the language in which a text is written is needed. In Turkish, new words are created by attaching affixes to roots; stemming is the process of finding the stem of a word by removing the suffixes attached to it. Considering that Turkish is an agglutinative (suffixing) language, it can be said that an efficient stemming process affects the success of text processing to a large extent. In this work, the information retrieval performance of different stemming algorithms is comparatively analysed using stemming software.
Keywords - Text mining, natural language processing, information retrieval systems, stemming.
I. INTRODUCTION
Technology, which is a part of our daily lives, brings an
outburst of data. The most prominent examples of this rapidly increasing data are electronic mail and web sites [1]. In order to make use of this considerable amount of data, it should be saved in a practical way, processed effectively and accessed quickly. In other words, the available raw data should be stored, processed through appropriate methods and delivered to users. Data mining, which emerged from these necessities, can be described as the retrieval of useful information from great amounts of data [2].
Scientific institutions all over the world, and especially universities, publish their scientific research in electronic environments. Researchers wanting to make use of these texts want to reach the information they need more quickly. Because of this necessity, processing texts written in natural language has become a distinct field of data mining, called "Text Mining" or "Text Processing". Text mining is commonly used to find documents written on the same subject or related to each other, and to discover relations among concepts [3].
The techniques of text processing are the basis of Information Retrieval Systems in particular, and stemming substantially affects the performance of information retrieval when processing textual data. The process of stemming differs from language to language. For instance, it is possible to develop a stemmer for a language with few affixes, like English, by just looking up a glossary of affixes [4]. Because Turkish is an agglutinative language, the number of affixes and the variety of ways they attach make it necessary to examine stemming in a more detailed way [5].
A. Information Retrieval Systems
Information Retrieval Systems are systems that indicate the existence of documents related to a particular subject and provide information about where they can be found. They are built for saving and reaching data in electronic environments containing texts written in natural language, such as newspaper archives, library catalogues, articles and legal information [6]. A. Koksal, who carried out research on this subject, describes information retrieval as "research aiming to find the marks of documents, whose existence is generally unknown, related to the subjects and concepts being researched in terms of content, by using an information retrieval system" [7]. Nowadays, the great increase in the number of documents kept in Turkish electronic environments increases the need for tools and systems that will offer these documents to users.
In information retrieval systems, the documents thought to be related to the user's needs are returned by searching the documents according to an inquiry stating those needs. When this process is examined closely, it becomes apparent that documents must be used in the form of marker sets reflecting their contents, rather than in the form in which they were submitted to the system. These content markers are given names such as keyword, index word and descriptor. Using content markers was first suggested by Luhn at the end of the 1950s [8].
II. MATERIAL AND METHOD
A. Maximum Match Algorithm
In this method, the word is first searched for in a dictionary that contains the stems of the words related to the document. If it cannot be found, the search is repeated after deleting one letter from the end of the word. The process ends when a stem is found or when only one letter of the word remains. Although this method is one of the most understandable in terms of implementation, it performs badly in terms of time, because the dictionary is searched many times for each term during stemming. What is more, there is a possibility that unrelated stems are found as results. For instance, when the stem of the word "aksamaması" is sought with this method, the stem is found as "aksam", whereas the real stem of the word is "aksa", the root of the verb "aksa-mak" [9].
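A minimal sketch of the maximum (longest) match procedure described above follows; the toy dictionary is invented to reproduce the pitfall from the text, whereas a real system would use the full stem dictionary of the data set.

    # Maximum match stemming: repeatedly strip the last letter until the
    # remaining prefix is found in the stem dictionary (or one letter is left).
    def max_match_stem(word: str, dictionary: set[str]) -> str:
        candidate = word
        while len(candidate) > 1 and candidate not in dictionary:
            candidate = candidate[:-1]
        return candidate

    # Toy dictionary illustrating the pitfall: "aksam" is matched before the
    # real stem "aksa" because longer prefixes are tried first.
    stems = {"aksam", "aksa"}
    print(max_match_stem("aksamaması", stems))  # -> "aksam"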
B. Fixed Length Algorithm
In this method, a fixed number of initial letters of each term is taken as its stem. Researchers using this method have experimented with different numbers of letters. In the experimental study described below, this stemming method is tried in four different ways, stemming with the first 4, 5, 6 and 7 letters, and the performance of each on information retrieval is examined.
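The corresponding sketch for fixed length stemming is essentially a one-liner, applied once for each tried length:

    # Fixed length stemming: keep at most the first n letters of the raw term.
    def fixed_length_stem(word: str, n: int) -> str:
        return word[:n]

    for n in (4, 5, 6, 7):
        print(n, fixed_length_stem("aksamaması", n))  # aksa, aksam, aksama, aksamam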
C. Dataset
The data set used in the study contains 1000 documents, including many dissertations and articles. Because keeping the full text contents of these documents in the data set would make the processing in the software unwieldy and waste time, marker sets that best present the contents of the documents are used instead. As the marker set, the document name, abstract and keywords were chosen. The documents in the data set undergo preprocessing steps such as:
 Decomposition
o Removing the punctuation marks,
o Converting all the letters into lower case.
 Removing the stop words
After these steps the documents are separated word by word and prepared for stemming. Some numeric data about the data set are given in Table 1.
Table 1. Numeric data of the data set
Data Set Information | Count
Number of Documents | 1000
Total Number of Words | 327,636
Number of Stop Words | 61,454
Total Raw Term Count (after discarding stop words) | 266,182
III. PRACTICE
In this study, the information retrieval performances of the two algorithms were compared using two different inquiry sentences. After the data set was loaded into the system, raw terms were obtained through the preprocessing steps applied to the chosen inquiry and the data set: removing punctuation marks, converting all letters to lower case, and removing separators and stop words. The raw terms obtained were converted into stemmed terms using the chosen stemming algorithm. Taking these stemmed terms as the basis, the five documents in which the terms of the inquiry sentence occur most were returned. In the analyses in this study, fixed length stemming with 4, 5, 6 and 7 letters was used for the fixed length method, as well as the longest match algorithm.

Inquiry 1: "Turizm sektöründe otellerin sorunları üzerine çözümler var mı?" ("Are there solutions to the problems of hotels in the tourism sector?")

Table 2. Outputs of the longest match information retrieval belonging to inquiry 1
Document Number | Document Name | Interest with the inquiry | Matching Term Count
2 | Türkiye Küresel Krizin Neresinde? | Yes | 24
205 | Kurumsallaşma, Turizm İşletmeleri | Yes | 21
63 | Turizm Sektöründe E-Ticaret Uygulamaları: Nevşehir örneği | Yes | 20
43 | Türkiye'de Turizm Otel İşletmeciliği Alanında Eğitim Veren Yükseköğretim Kuruluşlarındaki Eğitimcilerin Turizm Mesleki Eğitiminin Etiksel Açıdan İncelenmesine Yönelik Bir Alan Araştırması | Yes | 17
973 | Sigmund Freud | No | 17
Total matching term count in documents of interest to the inquiry: 82

Table 3. Outputs of the fixed length information retrieval belonging to inquiry 1 (each cell: term count, document number, interest with the inquiry)
Output sequence | 4 Letter | 5 Letter | 6 Letter | 7 Letter
1 | 41, 752, No | 24, 2, Yes | 21, 2, Yes | 16, 63, Yes
2 | 27, 205, Yes | 21, 205, Yes | 21, 205, Yes | 15, 43, Yes
3 | 25, 2, Yes | 17, 63, Yes | 17, 63, Yes | 15, 205, Yes
4 | 24, 63, Yes | 17, 973, No | 17, 973, No | 15, 973, No
5 | 18, 996, No | 16, 43, Yes | 16, 43, Yes | 14, 2, Yes
Related matching term count | 76 | 78 | 75 | 60

Inquiry 2: "Nükleer enerji gerçekten zararlı olsaydı, ülkemizde uygulanmazdı." ("If nuclear energy were really harmful, it would not be used in our country.")

Table 4. Outputs of the longest match information retrieval belonging to inquiry 2
Document Number | Document Name | Interest with the inquiry | Matching Term Count
693 | Nükleer Enerji | Yes | 51
725 | Birlik ve Termodinamik | No | 50
303 | Değişik Yörelerden Sumak (Rhus Coriaria L.) Meyvesinin Ayrıntılı Kimyasal Bileşimi ve Oleorezin Üretiminde Kullanılması Üzerine Araştırma | No | 48
971 | Ötanazi | No | 45
768 | Nükleer Teknolojinin Riskleri | Yes | 44
Total matching term count in documents of interest to the inquiry: 95

Table 5. Outputs of the fixed length information retrieval belonging to inquiry 2 (each cell: term count, document number, interest with the inquiry)
Output sequence | 4 Letter | 5 Letter | 6 Letter | 7 Letter
1 | 33, 693, Yes | 29, 683, No | 29, 683, No | 19, 693, Yes
2 | 30, 725, No | 29, 725, No | 29, 725, No | 18, 683, No
3 | 29, 683, No | 27, 693, Yes | 25, 693, Yes | 16, 715, Yes
4 | 26, 971, No | 24, 715, Yes | 24, 715, Yes | 16, 725, No
5 | 24, 715, Yes | 24, 988, No | 20, 671, No | 15, 943, No
Related matching term count | 57 | 51 | 49 | 35

In the tables above, the retrieval outputs produced by both the longest match and the fixed length stemming methods for the two inquiries, together with the number of matching terms indicating the appropriateness of these outputs to the inquiry, are seen. Table 6 gives the durations needed to obtain these outputs.

Table 6. The durations of stemming and of fetching the appropriate documents
Operation | Inquiry 1: Longest Match | Inquiry 1: Fixed Length | Inquiry 2: Longest Match | Inquiry 2: Fixed Length
All the raw terms stemming | 10 min 18 s | 4 s | 10 min 18 s | 5 s
Term matching and obtaining outputs | 8 s | 6 s | 4 s | 6 s
IV. CONCLUSION
In the tables above, the information retrieval outputs obtained through the two stemming methods and the total number of matching terms obtained in the outputs related to the inquiry statement are seen. In the light of this information, because the number of total matching terms was higher and a dictionary was used, we can say that the longest match method gives more accurate retrieval outputs than the other method. However, as understood from Table 6, this method takes incommensurably more time for stemming raw terms than the fixed length method. Considering that the durations shown in the tables cover the applications of four different fixed lengths, the difference in duration between any single fixed length and the longest match method is even larger.
Because of this duration difference between the two methods, the proportion of total matching terms obtained through the two methods is given below as the answer to the question "Which one of the fixed length methods produces the closest result to the longest match method?"

Table 7. The closeness ratios of the fixed length methods to the longest match method
Stem Length | Inquiry 1 | Inquiry 2
4 Letter | 93% | 60%
5 Letter | 95% | 54%
6 Letter | 91% | 52%
7 Letter | 73% | 37%

With the experimental applications whose results are given above, the outcome of stemming with the longest match and fixed length methods, and the duration performance of the stemming algorithms while forming the outputs of the information retrieval system, are seen. In Information Retrieval Systems, the longest match algorithm and the fixed length algorithm (taking the first 4, 5, 6 or 7 letters of the raw term as the stem) were compared in terms of the number of matching terms in the documents, and as a result the fixed length stemmings that take the first four or five letters of the raw term as the stem were found to give the closest results to the longest match.

ACKNOWLEDGMENT
This study was taken from the dissertation entitled "Comparative Analysis of the Longest Match Algorithm in Computer Based Text Processing", prepared in the Graduate School of Natural and Applied Sciences of Selcuk University in 2010.

REFERENCES
[1] Kantardzic, M., 2003, Data Mining: Concepts, Models, Methods, and Algorithms, IEEE Press, Wiley Interscience Publications.
[2] Saracoğlu, R., 2007, Searching For Similar Documents Using Fuzzy Clustering, PhD Thesis, Graduate School of Natural and Applied Sciences, Selçuk University, Konya.
[3] Yıldırım, P., Uludağ, M., Görür, A., 2008, "Hastane Bilgi Sistemlerinde Veri Madenciliği", Akademik Bilişim Konferansları'08, Çankaya Üniversitesi, Bilgisayar Mühendisliği Bölümü, Ankara; European Bioinformatics Institute, Cambridge, UK.
[4] Porter, M.F., 1980, "An Algorithm For Suffix Stripping", Program, 14(3):130-137.
[5] Jurafsky, D. and Martin, J., 2000, Speech and Language Processing, Prentice Hall, New Jersey.
[6] Sever, H., 2002, Kaşgarlı Mahmut Bilgi Geri Getirim Sistemi (KMBGS), Proje no: 97K121330 Sonuç Raporu, Bilgisayar Mühendisliği Bölümü Bilgi Erişim Araştırma Grubu, Hacettepe Üniversitesi, Ankara.
[7] Köksal, A., 1981, "Tümüyle Özdevimli Deneysel Bir Belge Dizinleme ve Erişim Dizgesi", TBD 3. Ulusal Bilişim Kurultayı, Ankara.
[8] Lassila, O., 1998, "Web Metadata: A Matter of Semantics", IEEE Internet Computing, pp. 30-37.
[9] Kesgin, F., 2007, Topic Detection System for Turkish Texts, Master Thesis, Graduate School of Natural and Applied Sciences, Istanbul Technical University, Istanbul.
[10] Balcı, M., 2010, Comparative Analysis of the Longest Match Algorithm in Computer Based Text Processing, Master Thesis, Graduate School of Natural and Applied Sciences, Selçuk University, Konya.
International Conference and Exhibition on Electronic, Computer and Automation Technologies (ICEECAT’14), May 9-11,2014
Konya, Turkey
Speed Control of a Direct Current Motor with a Linear Quadratic Regulator and a Genetic-Algorithm-Based PID Controller
H. AÇIKGÖZ1, Ö.F. KEÇECİOĞLU2, A. GANİ2, M. ŞEKKELİ2
1 Kilis 7 Aralık University, Kilis/Turkey, hakanacikgoz@kilis.edu.tr
2 Kahramanmaraş Sütçü İmam University, K.Maraş/Turkey, {fkececioglu & agani & msekkeli}@ksu.edu.tr
DC motors are electrical machines that convert direct-current energy into mechanical energy [1]. As is well known, the flux and torque of a DC motor can be controlled independently of each other. Because of their very good speed characteristics, DC motors are frequently used in industrial applications requiring speed and position control, such as electric trains, winding machines, cranes and robot arms. DC motors nevertheless have some disadvantages, such as requiring maintenance at regular intervals because of the brush-commutator contact and not being usable in every environment [1-2]. In recent years, with advances in microprocessors, semiconductors and power electronics, the disadvantages arising from the brush-commutator structure of DC motors have also been reduced to a minimum. Besides the drive circuit used for the speed and position control of DC motors, the controller is also of considerable importance. Since the speed of a DC motor changes depending on the load, closed-loop control is preferred to open-loop control for the speed control of DC motors in constant-speed applications. Basically, closed-loop speed control requires feeding back the motor speed. The speed measured from the motor is passed through a controller, the voltage required by the motor is computed, and this voltage is applied to the motor through the drive circuit [3-5].
PID controllers are used for the speed and position control of DC motors because of their simple structure. Determining the P, I and D gain parameters of a PID controller is important for the system. The classical Ziegler-Nichols method [6] is frequently used to determine these gain parameters. In this method, however, there are some problems, such as determining the ultimate gain for a given control system or finding the oscillation period.
With modern technology, many optimal control methods have been proposed to determine the most suitable gain parameters of the PID controller. Using genetic algorithms (GA), a stochastic search method based on the process of natural evolution first proposed by John Holland, the most suitable Kp, Ki and Kd parameters for a PID controller were found [7]. Later, the optimal gain parameters of the PID controller were determined with particle swarm optimisation (PSO), the linear quadratic regulator (LQR) and the artificial bee colony (ABC) algorithm, and very successful results were obtained [8-10].
In this study, the P, I and D gain parameters of the PID controller were first tuned by hand and applied to the system. Then the gain parameters of the PID controller were obtained with the help of GA and LQR. The second section of the paper examines the model of the DC motor and gives the necessary equations. The third section gives information about GA and LQR. The fourth section presents the results obtained from the simulation studies, and the discussion and conclusions are given in the last section.
II. DC MOTOR MODEL
As is well known, DC motors are electrical machines that convert electrical energy into mechanical energy. According to Faraday's law, when the necessary conditions are met, an electrical machine can operate both as a motor and as a generator [1-2].

Figure 1: Electromechanical energy conversion (an electrical machine operating as a motor or generator between electrical and mechanical energy)

Figure 1 shows the motor and generator operation of an electrical machine. While the speed of a DC motor is proportional to the voltage applied to the circuit, its torque is proportional to the motor current. The DC motor model is given in Figure 2. The armature circuit consists of an inductance L_a connected in series with a resistance R_a, and the back e.m.f. E_b. Figure 3 shows the block diagram of the DC motor, with the transfer functions 1/(L_a s + R_a) and 1/(J_m s + B_m) in the forward path, the torque constant K_i between them and the back-e.m.f. constant K_b in the feedback path.

Figure 2: Equivalent circuit of the DC motor
Figure 3: Block diagram of the DC motor

T_m = K_i I_a    (1)

As seen from Equation 1, the torque (T_m) is proportional to the armature current (I_a) and the torque constant (K_i). The back e.m.f. E_b is related to the angular speed and is given in Equation 2:

E_b = K_b \omega_m = K_b \frac{d\theta}{dt}    (2)

From Figure 2, the following equations can be written according to Newton's and Kirchhoff's laws:

L_a \frac{dI_a}{dt} = -R_a I_a + E_a - K_b \frac{d\theta}{dt}    (3)

J_m \frac{d^2\theta}{dt^2} + B_m \frac{d\theta}{dt} = K_i I_a    (4)

The DC motor model can be put into state-space form and written with the following equations:

\dot{x} = Ax + Bu    (5)

y = Cx + Du    (6)

According to Equations 2-4, the state-space model can be written as follows:

\begin{bmatrix} \dot{I}_a \\ \dot{\omega}_m \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} -R_a/L_a & -K_b/L_a & 0 \\ K_i/J_m & -B_m/J_m & 0 \\ 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} I_a \\ \omega_m \\ \theta \end{bmatrix} + \begin{bmatrix} 1/L_a \\ 0 \\ 0 \end{bmatrix} E_a    (7)

\omega_m = \begin{bmatrix} 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} I_a \\ \omega_m \\ \theta \end{bmatrix}    (8)
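As an illustration of Equations 5-8, the state-space matrices can be assembled numerically. The following Python sketch is not part of the original study; it only plugs the parameter values of Table 2 into Equation 7.

import numpy as np

# DC motor parameters from Table 2
Ra, La = 2.0, 0.4        # armature resistance (ohm) and inductance (H)
Jm, Bm = 0.01, 3.5e-5    # moment of inertia (kg m^2) and friction coefficient (N m s/rad)
Ki = Kb = 0.023          # torque and back-e.m.f. constants (V s/rad)

# State vector x = [Ia, wm, theta]^T, input u = Ea (Equation 7)
A = np.array([[-Ra / La, -Kb / La, 0.0],
              [ Ki / Jm, -Bm / Jm, 0.0],
              [ 0.0,      1.0,     0.0]])
B = np.array([[1.0 / La], [0.0], [0.0]])
C = np.array([[0.0, 1.0, 0.0]])   # output is the angular speed wm (Equation 8)
D = np.array([[0.0]])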
III. GA AND LQR DESIGN
A. Genetic Algorithms
The GA method, first proposed by John Holland, is a stochastic optimisation method based on the mechanism of natural selection and is widely used in the solution of complex engineering problems. Differing from classical optimisation methods, GAs use not the parameter set itself but encoded forms of the parameters. Operating according to probabilistic rules, a GA requires an objective function. It scans not the whole solution space but a certain part of it; by searching effectively, it therefore reaches a solution in a much shorter time. The GA takes as its model the survival-of-the-fittest principle of living beings and is based on good individuals preserving their lives while bad individuals disappear [11-13]. A GA generally uses genetic operators such as encoding, population size, selection, mutation and crossover. First, a random initial population is created. A fitness value is then computed for each individual in the population, and the computed fitness values indicate the solution quality of the strings. The individual with the best fitness value in the population is transferred to the next population directly, without modification.
After crossover, a new population is created that contains individuals different from those of the initial population. In some cases a mutation operation is needed against the possibility of premature convergence. For this purpose, a low-probability mutation operation is carried out after the crossover operation [11-16]. Figure 4 shows the block diagram of the general working structure of a GA: an initial population is created, the fitness value of each individual is computed, and selection, crossover and mutation are applied to form a new population until the termination criterion is satisfied, at which point the best solution is returned. In GAs, the population size chosen at the start is the most important operator for reaching the best solution. For this reason the population size was chosen as 100 in this study. The other genetic operators are given in Table 1. The lower and upper bounds for Kp, Ki and Kd were set to -100 and 100.

Figure 4: General working structure of the GA

Table 1: GA parameters and their values
Parameter | Value/Type
Population size | 100
Population type | Double vector
Crossover method | Single point
Crossover probability | 0.08
Mutation probability | 0.04
Mutation technique | Uniform
Selection strategy | Roulette
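The GA loop sketched in Figure 4 and parameterised in Table 1 can be outlined as follows. This Python sketch is illustrative only: the fitness function is a stand-in (in the study, candidate gains would be scored on the closed-loop step response), and the number of generations is an assumption, since the paper does not state it.

import numpy as np

rng = np.random.default_rng(0)
POP, GENS = 100, 50          # population size 100 (as in the text); GENS is assumed
PC, PM = 0.08, 0.04          # crossover and mutation probabilities (Table 1)
LO, HI = -100.0, 100.0       # bounds on Kp, Ki, Kd stated in the text

def fitness(gains):
    # Stand-in objective: distance to an arbitrary target vector. In the
    # study, fitness would be a closed-loop step-response error measure.
    return float(np.sum((gains - np.array([1.0, 1.0, 1.0])) ** 2))

pop = rng.uniform(LO, HI, size=(POP, 3))          # random initial population
for _ in range(GENS):
    fit = np.array([fitness(ind) for ind in pop])
    elite = pop[np.argmin(fit)].copy()            # best individual is kept unchanged
    w = 1.0 / (1.0 + fit)                         # roulette-wheel selection weights
    pop = pop[rng.choice(POP, size=POP, p=w / w.sum())]
    for i in range(0, POP - 1, 2):                # single-point crossover
        if rng.random() < PC:
            cut = int(rng.integers(1, 3))
            pop[i, cut:], pop[i + 1, cut:] = pop[i + 1, cut:].copy(), pop[i, cut:].copy()
    mask = rng.random(pop.shape) < PM             # uniform mutation
    pop[mask] = rng.uniform(LO, HI, int(mask.sum()))
    pop[0] = elite                                # elitism
best = pop[np.argmin([fitness(ind) for ind in pop])]
print("Kp, Ki, Kd =", best)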
B. Linear Quadratic Regulator
LQR control is a design classified among optimal control systems, and it is an important function for control engineering. The designed LQR state-feedback model is shown in Figure 5.

Figure 5: LQR structure (plant \dot{x} = Ax + Bu with output y = Cx and state feedback u = -Kx)

The aim of the design is to realise a system with practical components that will provide the desired operating performance. For a continuous-time system, a functional equation is defined as in Equations 9 and 10 [16-19]:

f(x,t) = \min_u \int_t^{t_1} h(x,u)\,dt    (9)

f(x,t_0) = f(x(0)), \quad f(x,t_1) = 0    (10)

If the Hamilton-Jacobi equation is applied:

-\frac{\partial f}{\partial t} = \min_u \left[ h(x,u) + \left(\frac{\partial f}{\partial x}\right)^T g(x,u) \right]    (11)

The performance criterion is defined in quadratic form as shown in Equation 12:

J = \int_{t_0}^{t_1} \left( x^T Q x + u^T R u \right) dt    (12)

As a result of these expressions, the H-J equation becomes:

-\frac{\partial f}{\partial t} = \min_u \left[ x^T Q x + u^T R u + \left(\frac{\partial f}{\partial x}\right)^T (Ax + Bu) \right]    (13)

If, with P a symmetric square matrix, f is defined as

f(x,t) = x^T P x    (14)

then

\frac{\partial f}{\partial t} = x^T \dot{P} x, \quad \frac{\partial f}{\partial x} = 2Px \quad \text{and} \quad \left(\frac{\partial f}{\partial x}\right)^T = 2x^T P    (15)

and as a result of these expressions the H-J equation is given in Equation 16:

-x^T \dot{P} x = \min_u \left[ x^T Q x + u^T R u + 2x^T P (Ax + Bu) \right]    (16)

Here, Equation 17 is written to minimise the expression with respect to u:

\frac{\partial [\,\cdot\,]}{\partial u} = 2u^T R + 2x^T P B = 0    (17)

According to the optimal control law, u_{opt} = -Kx, and the value of K is K = R^{-1} B^T P. If u_{opt} is substituted into the H-J equation, the following Riccati matrix equation is obtained for finding the P matrix:

PA + A^T P + Q - P B R^{-1} B^T P = 0    (18)

Equation 18 is called the Riccati equation [16-17]. This optimal control, called LQR, is shown in the state-space model of Figure 5. Combining Figures 3 and 5 gives Figure 6, which shows the use of the LQR controller with the DC motor.
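Numerically, the gain of Equations 17-18 can be obtained from a standard Riccati solver. A sketch with SciPy follows; since the Q matrix quoted in Section IV is 2x2, the sketch assumes the two-state (I_a, \omega_m) subsystem of Equation 7, which is an assumption on our part, not something stated in the paper.

import numpy as np
from scipy.linalg import solve_continuous_are

Ra, La, Jm, Bm, K_i, K_b = 2.0, 0.4, 0.01, 3.5e-5, 0.023, 0.023

# Two-state subsystem x = [Ia, wm]^T; dropping the angle state is assumed
# here because the 2x2 Q quoted in Section IV implies a two-state model.
A = np.array([[-Ra / La, -K_b / La],
              [ K_i / Jm, -Bm / Jm]])
B = np.array([[1.0 / La], [0.0]])
Q = np.array([[0.25, 0.0], [0.0, 0.027]])   # weights quoted in Section IV
R = np.array([[0.25]])

P = solve_continuous_are(A, B, Q, R)        # solves PA + A^T P + Q - PBR^-1B^T P = 0
K = np.linalg.solve(R, B.T @ P)             # K = R^-1 B^T P
print("LQR gain K =", K)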
Figure 6: DC motor system with the LQR controller
IV. SIMULATION STUDIES
In this study, the speed of a DC motor was controlled with a GA-based PID controller and an LQR controller. The parameters of the DC motor used in the study are given in Table 2. The Kp, Ki and Kd gain parameters of the PID controller were first determined with the GA method and the controller was built for the speed control of the DC motor. An LQR was then designed and applied to the speed control of the DC motor. The gain parameters of the GA-based PID controller were found as Kp = 1.5704, Ki = 2.3046 and Kd = 0.3603. The Q and R matrices of the LQR controller were set to Q = [0.25 0; 0 0.027] and R = [0.25]. Figure 7 shows the unit step response of the classical PID controller, whose gain parameters were tuned by hand in this study. Figures 8 and 9 show the performance of the LQR and the GA-based PID controller for a unit step input. Both controllers reached the reference speed quickly and tracked it without overshoot and without steady-state error. The settling time, rise time and overshoot performances of the PID, LQR and GA-based PID controllers built in this study are given in Table 3.

Table 2: Parameters of the DC motor
Symbol | Value
Armature resistance (Ra) | 2 Ω
Armature inductance (La) | 0.4 H
Moment of inertia (Jm) | 0.01 kg·m²
Motor constant (Ki = Kb) | 0.023 V·s/rad
Friction coefficient (Bm) | 3.5e-5 N·m·s/rad

Figure 7: Unit step response of the PID controller
Figure 8: Unit step response of the LQR controller
Figure 9: Unit step response of the GA-based PID controller

Table 3: Comparison of the controllers' performance
 | GA-PID | LQR | PID
Rise time | 0.873 s | 1.19 s | 0.92 s
Settling time | 1.36 s | 1.93 s | 2.58 s
Overshoot | 0% | 0% | 8%
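The rise time, settling time and overshoot figures of Table 3 can be extracted from a simulated unit step response. A small helper is sketched below; the 10-90% rise-time and 2% settling-band conventions are assumptions, as the paper does not state which definitions were used.

import numpy as np

def step_metrics(t, y, band=0.02):
    # Rise time (10-90%), settling time (2% band) and percent overshoot
    # of a step response y sampled at times t (assumed conventions).
    yf = y[-1]
    t_rise = t[np.argmax(y >= 0.9 * yf)] - t[np.argmax(y >= 0.1 * yf)]
    outside = np.nonzero(np.abs(y - yf) > band * abs(yf))[0]
    t_settle = t[min(outside[-1] + 1, len(t) - 1)] if outside.size else t[0]
    overshoot = 100.0 * max(0.0, (y.max() - yf) / abs(yf))
    return t_rise, t_settle, overshoot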
V. CONCLUSIONS
The speed and position control of DC motors is an important subject, and many studies continue to be carried out on it. In this study, the optimal speed control of a DC motor using a GA-based PID controller and an LQR controller was described. The best performance was obtained with the GA-based PID controller. However, in the GA-based PID controller the initially created population size affects the performance of the system. For this reason the population size was chosen as 100, and the process took a long time because many trials were made. For the GA, the ranges of the gain parameters of the PID controller must also be known very well. In this study, the Q and R matrices of the LQR controller were found by making many trials. When the two controllers are compared with each other, the GA-based PID controller is seen to have better performance in terms of rise time, settling time and overshoot.
VI. REFERENCES
[1] Mergen Faik, "Elektrik Makineleri (Doğru Akım Makineleri)", Birsen Yayınevi, 2006.
[2] G. Bal, "Doğru Akım Makinaları ve Sürücüleri", Seçkin Yayıncılık, 2001.
[3] Attaianese, C., Locci, N., Marongiu, I. and Perfetto, A., "A Digitally Controlled DC Drive", IEEE Electrotechnical Conference, 3, pp. 1271-1274, 1994.
[4] Altun H., Aydoğmuş Ö., Sunter S., "Gerçek Dört-Bölgeli Bir DC Motor Sürücüsünün Modellenmesi ve Tasarımı", Fırat Üniv. Fen ve Müh. Bil. Dergisi, 20 (2), pp. 295-303, 2008.
[5] Kuo Benjamin C., "Otomatik Kontrol Sistemleri", 7th edition, Prentice Hall, 1995.
[6] J.G. Ziegler, N.B. Nichols, "Optimization Setting for Automatic Controller", Trans. ASME, Vol. 64, pp. 756-769, 1942.
[7] I. Sekaj, "Application of genetic algorithms for control system design", Int. Conf. SCANN'98, 10-12.11.1998, Smolenice, Slovakia, pp. 77-82.
[8] J. Kennedy, "The Particle Swarm: Social Adaptation of Knowledge", Proceedings of the IEEE International Conference on Evolutionary Computation, ICEC1997, Indianapolis, pp. 303-308, 1997.
[9] Akhilesh K. Mishra, Anirudha Narain, "Speed Control of DC Motor Using Particle Swarm Optimization", International Journal of Engineering Research and Technology, Vol. 1 (02), 2012, ISSN 2278-0181.
[10] Ozden Ercin and Ramazan Coban, "Comparison of the Artificial Bee Colony and the Bees Algorithm for PID Controller Tuning", Innovations in Intelligent Systems and Applications (INISTA) IEEE Conference, pp. 595-598, 2011.
[11] Juang J., Huang M. and Liu W., "PID control using prescribed genetic algorithms for MIMO system", IEEE Trans. Systems, Man and Cybernetics, vol. 38, no. 5, pp. 716-727, 2008.
[12] J.S. Yang, "PID Control for a Binary Distillation Column Using a Genetic Searching Algorithm", WSEAS Trans. Syst., Vol. 5, pp. 720-726, 2006.
[13] Açıkgöz, H., Keçecioğlu, Ö.F., Şekkeli, M., "Genetik-PID Denetleyici Kullanarak Sürekli Mıknatıslı Doğru Akım Motorunun Hız Denetimi", Otomatik Kontrol Ulusal Toplantısı (TOK2013), 26-28 Eylül 2013, Malatya.
[14] R.A. Krohling, J.P. Rey, "Design of Optimal Disturbance Rejection PID Controllers Using Genetic Algorithm", IEEE Trans. Evol. Comput., Vol. 5, pp. 78-82, 2001.
[15] Nitish K., Sanjay Kr. S., Manmohan A., "Optimizing Response of PID Controller for Servo DC Motor by Genetic Algorithm", International Journal of Applied Engineering Research, ISSN 0973-4562, Vol. 7, No. 11, 2012.
[16] Ö. Oral, L. Çetin and E. Uyar, "A Novel Method on Selection of Q and R Matrices in the Theory of Optimal Control", International Journal of Systems Control, Vol. 1, No. 2, pp. 84-92, 2010.
[17] Nasir A., Ahmad M. and Rahmat M., "Performance Comparison between LQR and PID Controller for an Inverted Pendulum System", International Conference on Power Control and Optimization, Chiang Mai, Thailand, July 2008.
[18] R. Yu, R. Hwang, "Optimal PID Speed Control of Brushless DC Motors using LQR Approach", in Proc. IEEE International Conference on Man and Cybernetics, The Hague, Netherlands, 2004.
[19] Keçecioğlu Ö.F., Güneş M., Şekkeli M., "Lineer Kuadratik Regülatör (LKR) ile Hidrolik Türbinin Optimal Kontrolü", Otomatik Kontrol Ulusal Toplantısı (TOK2013), 26-28 Eylül 2013, Malatya.
Methodological Framework of Business
Process Patterns Discovery and Reuse
LADEN ALDIN1, SERGIO DE CESARE2
1Regent's University, 2Brunel University
Abstract - In modern organisations business process modelling
has become fundamental due to the increasing rate of
organisational change. As a consequence, an organisation needs
to continuously redesign its business processes on a regular basis.
One major problem associated with the way business process
modelling is carried out today is the lack of explicit and
systematic reuse of previously developed models. Enabling the
reuse of previously modelled behaviour can have a beneficial
impact on the quality and efficiency of the overall information
systems development process and also improve the effectiveness
of an organisation’s business processes. The purpose of the
presented research paper is to develop a methodological
framework for achieving reuse in BPM via the discovery and
adoption of patterns. The framework is called Semantic
Discovery and Reuse of Business Process Patterns (SDR). SDR
provides a systematic method for identifying patterns among
organisational data assets representing business behaviour. The
framework proposes the use of semantics to drive both the
discovery of patterns as well as their reuse.
Keywords: pattern, business process modelling, reuse, semantic
and framework
I. INTRODUCTION
The modelling of business processes and their subsequent
automation, in the form of workflows, constitutes a
significant part of information systems development (ISD)
within large modern enterprises. Business processes (BP) are
designed on a regular basis in order to align operational
practices with an organisation’s changing requirements [1]. A
fundamental problem in the way business process modelling
(BPM) is carried out today is the lack of explicit and
systematic reuse of previously developed models. Although all
business processes possess unique characteristics, they also do
share many common traits making it possible to classify
business processes into generally recognised patterns of
organisational behaviour.
Patterns have become a widely accepted architectural
technique in software engineering. Patterns are general
solutions to recurring problems. A pattern generally includes a
generic definition of the problem, a model solution and the
known consequences of applying the pattern. In business
process modelling the use of patterns is quite limited. Apart
from a few sporadic attempts proposed by the literature [2 –
3], pattern-based business process modelling is not
commonplace. The benefits of adopting patterns are
numerous. For example, as the academic literature and
industry reports document, the adoption of design patterns in
software engineering projects improves reuse of shared
experiences, reduces redundant code, reduces design errors
and accelerates the learning curve [4]. As a consequence, it is
conceivable that patterns in BPM can produce similar
advantages, thus reducing both time and cost of generating
business process models and their subsequent transformation
into software designs of enterprise applications.
However, the systematic adoption of patterns in BPM
cannot be a simple transposition of the experience acquired by
the design patterns community in software engineering. This is
due to some essential differences between business modelling
and software design. While the latter involves the
representation of an engineered artefact (i.e., software), the
former concerns the representation of behaviour of a real
world system (i.e., the business organisation). As such
business process patterns should ideally be discovered from
the empirical analysis of organisational processes. The
discovery of real world patterns should resemble the process
of discovery of scientific theories; both must be based on
empirical data of the modelled phenomena. Empiricism is
currently not the basis for the discovery of patterns for BPM
and no systematic methodology for collecting and analysing
process models of business organisations currently exists.
Thus, this research study aims at developing such a
methodology. In particular, the novel contribution of this
research is a Semantic Discovery and Reuse of Business
Process Patterns methodological framework (SDR) that
enables business modellers to empirically discover business
process patterns and to reuse such patterns in future
development projects.
The remainder of this paper is structured as follows: Section
2 summarises the study background and its related work;
Section 3 shows the proposed SDR methodological framework
in detail, and with more focus on its discovery lifecycle.
Finally, Section 4 addresses the conclusions and future work.
II. BACKGROUND
The modelling of business processes and their subsequent
automation, in the form of workflows, constitutes a significant
part of ISD within large modern enterprises. Business
processes are designed on a regular basis in order to align
operational practices with an organisation’s changing
requirements [1]. A fundamental problem in the way that
BPM is carried out today is the lack of explicit and systematic
reuse of previously developed models. Although all business
processes possess unique characteristics, they also do share
many common traits making it possible to classify business
processes into generally recognised patterns of organisational
behaviour.
100
The idea of patterns for the design of artefacts can be traced
to Alexander (1977) and his description of a systematic
method for architecting a range of different kinds of physical
structures in the field of Civil Architecture. Research on
patterns has been conducted within a broad range of
disciplines from Civil Architecture [5] to Software and Data
Engineering [6, 7, 8, 9, 10, 2] and more recently there has also
been an increased interest in business process patterns
specifically in the form of workflows. Russell et al. (2004)
introduced a number of workflow resource patterns aimed at
capturing the various ways in which resources are represented
and utilised in workflows. Popova and Sharpanskykh (2008)
stated that these patterns provide an aggregated view on
resource allocation that includes authority-related aspects and
the characteristics of roles. This greater interest is primarily
due to the emergence of the service-oriented paradigm in
which workflows are composed by orchestrating or
choreographing Web services due to their platform-agnostic
nature and ease of integration [13]. van der Aalst et al. (2000) produced a set of so-called workflow patterns. The workflow patterns proposed by van der Aalst are referred to as 'Process Four' or P4 lists and describe 20 patterns specific to processes.
This initiative started by systematically evaluating features of
Workflow Management (WfM) systems and assessing the
suitability of their underlying workflow languages. However,
as Thom et al. (2007) point out, these workflow patterns are
relevant towards the implementation of WfM systems rather
than identifying business activities that a modeller can
consider repeatedly in different process models. In fact, these
workflow patterns [16] are patterns of reusable control
structures (for example, sequence, choice and parallelism)
rather than patterns of reusable business processes subject to
automation. As such these patterns do not, on their own,
resolve the problems of domain reuse in modelling
organisational processes. Consequently new types of business
process patterns are required for reusing process models [17].
The MIT Process Handbook project started in 1991 with the
aim to establish an online library for sharing knowledge about
business processes. The knowledge in the Process Handbook
presented a redesign methodology based on concepts such as
process specialisation, dependencies and coordinating
mechanisms [3]. The business processes in the library are
organised hierarchically to facilitate easy process design
alternatives. The hierarchy builds on an inheritance
relationship between verbs that refer to the represented
business activity. There is a list of eight generic verbs
including ‘create’, ‘modify’, ‘preserve’, ‘destroy’, ‘combine’,
‘separate’, ‘decide’ and ‘manage’. Furthermore, the MIT
Process Handbook represents a catalogue of common business
patterns and it has inspired several projects, among them
Peristeras and Tarabanis (2000) who used the MIT Process
Handbook to propose a Public Administration General Process
Model.
The patterns movement can be seen to provide a set of
‘good practice’ building blocks that extend well beyond
software development to describe design solutions for generic
business process problems. These business process patterns
provide a means of designing new processes by finding a
richer structured repository of process knowledge through
describing, analysing and redesigning a wide variety of
organisational processes.
Patterns have been applied to various phases of the
Information System Engineering lifecycle (e.g., analysis,
design and implementation), however the application of
patterns to business process modelling has been limited with
research having been conducted by only a few over the past 20
years. While limited, such research has been highly effective
in proposing different sets and kinds of patterns (e.g.,
Workflow Patterns for Business Process Modelling by Thom
et al. (2007)) however a problem that has not been effectively
researched is how process patterns are discovered and how
such an activity can be made systematic via a methodology
that integrates the production of reusable process patterns
within traditional BPM. This paper investigates this problem
and proposes such a methodology.
It can be seen from the background investigation that
existing patterns provide limited support to resolving the
problems of domain reuse in modelling organisational
processes. Although, more and more researchers and
practitioners recognise the importance of reusability in BPM
[19], little consensus has been reached as to what constitutes a
business process pattern. Therefore, the need arises to provide
patterns that support the reuse of BPM, as patterns offer the
potential of providing a viable solution for promoting
reusability of recurrent generalised models.
Most of the patterns community mentioned earlier agrees
that patterns are developed out of the practical experience of
real projects by stating, for example, that ‘patterns reflect
lessons learned over a period of time’ [20, p. 45]. During that
process, someone creates and documents a solution for a
certain problem. In similar situations, this person refers to the
solution that was documented before and adds new
experiences. This may lead to a standard way of approaching a
certain problem and therefore constitutes the definition of a
pattern. Thus, each pattern captures the experience of an
individual in solving a particular type of problem.
Often however with every new project, analysts create new
models without referencing what was done in previous
projects. So, providing systematic support towards the
discovery and reusability of patterns in BPM can help resolve
this problem.
A conceptual technology that has gained popularity recently
and that can play a useful role in the systematic discovery as
well as the precise representation and management of business
process patterns is ontology. Ontologies have the potential of
improving the quality of the produced patterns and of the
modelling process itself due to the fact that ontologies are
aimed at providing semantically accurate representations of
real world domains.
101
III. SDR METHODOLOGICAL FRAMEWORK
A. SDR Cross-fertilisation of Disparate Disciplines
The issues identified in the literature review are investigated
in the context of the overall discovery and reuse objectives.
The lack of guidelines to modellers as to how business process
patterns can be discovered must first be resolved as it forms
the basis for attempting to resolve further issues. Evolving a
methodology to support the finding of business process
patterns represents an important area of work. Such a
methodology guides the application process and acts as a
reference document for situations where the methodology is
applied. Therefore, the design of the SDR methodological
framework, for empirically deriving ontological patterns of
business processes from organisational knowledge sources
(i.e. documentation, legacy systems, domain experts, etc.), is
essential.
In this study, the cross-fertilisation of disparate disciplines
or research fields tackles the design of the SDR
methodological framework of business process patterns. More
specifically, three main domains (i.e., domain engineering,
ontologies and patterns) are deemed relevant and helpful in
addressing this research problem. Hence, as illustrated in
Figure 1, the intersection amongst these research domains
symbolises the context of the current study. The construct of
the Semantic Discovery and Reuse methodological framework
is based on the following foundations.
Figure 1 Relevant Domains to Develop the SDR Framework
First, Domain Engineering (DE) is an engineering
discipline concerned with building reusable assets, such as
specification sets, patterns, and components, in specific
domains [21]. Domain engineering deals with two main
layers: the domain layer, which deals with the representation
of domain elements, and the application layer, which deals
with software applications and information systems artefacts
[22]. In other words, programs, applications, or systems are
included in the application layer, whereas their common and
variable characteristics, as can be described, for example, by
patterns, ontology, or emerging standards, are generalised and
presented in the domain layer.
Domain Engineering is the process of defining the scope
(i.e., domain definition), analysing the domain (i.e., domain
analysis), specifying the structure (i.e., domain architecture
development) and building the components (e.g.,
requirements, designs and documentations) for a class of
subsystems that will support reuse [23].
Domain engineering as a discipline has practical
significance as it can provide methods and techniques that
may help reduce time-to-market, product cost, and projects
risks on one hand, and help improve product quality and
performance on a consistent basis on the other hand. Thus, the
main reason of bringing domain engineering into this study is
that information used in developing systems in a domain is
identified, captured and organised with the purpose of making
it reusable when creating or improving other systems. Also,
the use of domain engineering has four basic benefits [24], as
follows:
• Identification of reusable entities.
• Abstraction of entities.
• Generalisation of solutions.
• Classification and cataloguing for future reuse.
Therefore, the SDR methodology is based on a dual
lifecycle model as proposed by the domain engineering
literature [22]. This model defines two interrelated lifecycles:
(1) a lifecycle aimed at generating business process patterns
called Semantic Discovery Lifecycle (SDL), and (2) a
lifecycle aimed at producing business process models called
Semantic Reuse Lifecycle (SRL). Figure 2 illustrates the SDR
methodological framework.
Second, the phases of the former lifecycle have been
classified according to the Content Sophistication (CS)
methodology [25]. CS is an ontology-based approach that
focuses on the extraction of business content from existing
systems and improving such content along several dimensions.
CS was followed as it allows the organisation to understand
and document knowledge in terms of its business semantics
providing scope for future refinements and reuse. Therefore,
the Semantic Discovery Lifecycle is based on the four phases
(called disciplines) of the Content Sophistication
methodology. SDL therefore defines four phases, three of
which are based on CS, as follows: (1) a phase aimed at acquiring
legacy assets and organising them in a repository called
Preparation of Legacy Assets (POLA), (2) a phase aimed at
ontologically interpreting elements of existing process
diagrams (or in general data sources of organisational
behaviour) called Semantic Analysis of BP Models (SA) and
(3) a phase aimed at generalising models to patterns called
Semantic Enhancement of BP Models (SE).
Third, the last phase of SDL documents the discovered
patterns. A pattern generally includes a generic definition of
the problem, a model solution and the known consequences of
applying the pattern [26]. Patterns can produce many advantages: (1) reducing both the time and the cost of generating business process models and their subsequent transformation into software designs of enterprise applications; (2) improving modelling by replacing an ad hoc approach with a successful one; (3) promoting reuse of business processes; and (4) reuse has the longer-term benefit of encouraging and reinforcing consistency and standardisation. Thus, the fourth phase of the SDL, called Pattern Documentation, provides a way of documenting the patterns identified. Figure 2 illustrates the SDR methodological framework.
Figure 2: SDR Methodological Framework
The first lifecycle, Semantic Discovery Lifecycle (SDL),
initiates with the preparation of the organisational legacy
assets and finishes with the production of business process
patterns, which then become part of the pattern repository. The
second lifecycle is the Semantic Reuse Lifecycle (SRL) and is
aimed at producing business process models with the support
of the patterns discovered during the SDL. In this framework
the SRL is dependent on the SDL only in terms of the patterns
that are produced by the SDL. The two lifecycles are, for all
other purposes, autonomous and can be performed by different
organisations.
B. The Discovery Lifecycle of SDR
The Semantic Discovery Lifecycle (SDL) initiates with the
procurement and organisation of legacy sources and finishes
with the production of business process patterns, which then
become part of the pattern repository. The repository feeds
into the Semantic Reuse Lifecycle. The phases of the SDL are
as follows:
Phase 1: Preparation of Legacy Assets
This provides SDL with organisational legacy assets that
demonstrate the existence of certain types of models as well as
their generalised recurrence across multiple organisations.
During this phase, business process models are also extracted from the legacy assets. These models are typical process flow diagrams, such as BPMN diagrams.
Phase 2: Semantic Analysis of BP Models (SA).
This phase, along with the following one, represents the core of SDL. The elements of the process diagrams generated in phase one are semantically interpreted in order to derive ontological models of the processes that are more precise and semantically richer than their predecessors. Interpretation identifies the business objects whose existence the process commits to. Interpretation explicitly brings the business processes as close as possible to real world objects, which ensures the grounding of the patterns in real world behaviour. For this phase the object paradigm (Partridge, 1996) provides a sound ontological foundation.
Phase 3: Semantic Enhancement of BP Models (SE).
This phase takes the ontological models created in SA and
aims at generalising them to existing patterns or to newly
developed patterns. Generalisation is an abstraction principle
that allows defining an ontological model as a refinement of
other ontological models. It defines a relationship between a general and a specific model, in which the specific ontological model contains all the activities of the general model and more.
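Under this definition, checking whether one process model specialises another reduces to an activity-containment test; a minimal Python sketch (the function and argument names are invented for illustration):

def specialises(general_activities: set, specific_activities: set) -> bool:
    # A specific model specialises a general one when it contains every
    # activity of the general model (and possibly more), per Phase 3 above.
    return general_activities.issubset(specific_activities)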
Phase 4: Pattern Documentation
This is the fourth and last phase of SDL. Documentation plays an important role: it brings people from different groups together to negotiate and coordinate common practice, and it is central to global communication.
In this study, business process patterns are documented using a template proposed by [3] to represent the different aspects (e.g., intent, motivation) of a process pattern. Additional structure will be added to organise the discovered patterns into a hierarchy. The primary motivation behind this rationale is to describe the different BP elements from which the discovered patterns were generalised or extracted, so that unwanted ambiguities related to the application and use of a pattern can be avoided.
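As an illustration only, such a documentation template could be captured in a simple data structure; the aspect names follow the template of [3] mentioned above, while the class and field names are invented:

from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class ProcessPattern:
    # Hypothetical container for the documentation template above.
    name: str
    intent: str
    motivation: str
    solution_model: str                      # e.g. a reference to a BPMN fragment
    known_uses: List[str] = field(default_factory=list)
    parent: Optional["ProcessPattern"] = None  # hierarchy of discovered patterns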
IV. CONCLUSION
The necessity of changing the way in which organisations
do business and provide value in order to survive and flourish
in a high-tech market has been recognised by both academics
and industries. To this end, the SDR methodology is intended to adequately support business process modelling: it allows pattern discovery and evolution to be captured and recorded, and the discovered patterns to be reused in future developments.
The SDR methodological framework overcomes two
limitations of previous research on business process patterns.
Firstly, the workflow patterns defined by van der Aalst et al. (2003) model common control structures of workflow languages and are not aimed at modelling generic processes of a business domain (such as an industrial sector). Secondly, the
patterns research community to date has dedicated limited
attention to the process of patterns discovery. The unique
features of the SDR methodological framework are its dual
lifecycle model, its use of semantics and the grounding in real
world legacy.
Our research study is continuing in several directions.
Firstly, we are applying the full version of the developed
patterns in an industrial domain to check their validity and
solve the problem of domain reuse in modelling organisational
processes, which exist in current business process patterns.
Secondly, the SDR is being extended to include domains not
included in it. Thirdly, we are working on the application of
the reuse lifecycle. Finally, we are improving the way the discovered patterns are classified in order to facilitate their practical use.
REFERENCES
[1] Azoff, A., Kellett, A., Roy, S. & Thompson, M., Business Process Management: Building End-to-end Process Solutions for the Agile Business, Butler Direct Ltd., Technology Evaluation and Comparison Report, 2007.
[2] Eriksson, H. & Penker, M., Business Modelling with UML: Business Patterns at Work, New York, Chichester: John Wiley & Sons, 2000.
[3] Malone, T., Crowston, K. & Herman, G., Organizing Business Knowledge: The MIT Process Handbook, Cambridge, Mass., London: MIT Press, 2003.
[4] Cline, M.P., "The pros and cons of adopting and applying design patterns in the real world", Communications of the ACM, vol. 39, no. 10, pp. 47-49, 1996.
[5] Alexander, C., Ishikawa, S., Silverstein, M. & Centre for Environmental Structure, A Pattern Language: Towns, Buildings, Construction, New York: Oxford University Press, 1977.
[6] Beck, K. & Cunningham, W., "Using Pattern Languages for Object-Oriented Programs", Technical Report, Tektronix, Inc., presented at the Workshop on Specification and Design for Object-Oriented Programming (OOPSLA), Orlando, Florida, USA, 1987.
[7] Coad, P., "Object-oriented patterns", Communications of the ACM, vol. 35, no. 9, pp. 152-159, 1992.
[8] Gamma, E., Helm, R., Johnson, R. & Vlissides, J., Design Patterns: Elements of Reusable Object-Oriented Software, Reading, Mass.: Addison-Wesley, 1995.
[9] Hay, D.C., Data Model Patterns: Conventions of Thought, New York, USA: Dorset House Publishers, 1996.
[10] Fowler, M., Analysis Patterns: Reusable Object Models, Menlo Park, Calif.: Addison Wesley, 1997.
[11] Russell, N., ter Hofstede, A.H.M., Edmond, D. & van der Aalst, W.M.P., "Workflow Resource Patterns", BETA Working Paper Series, Technical Report WP 127, Eindhoven University of Technology, 2004.
[12] Popova, V. & Sharpanskykh, A., "Formal goal-based modeling of organizations", in: Proceedings of the Sixth International Workshop on Modelling, Simulation, Verification and Validation of Enterprise Information Systems (MSVVEIS'08), INSTICC Press, 2008.
[13] Rosenberg, F., et al., "Top-down business process development and execution using quality of service aspects", Enterprise Information Systems, 2(4), pp. 459-475, 2008.
[14] van der Aalst, W.M.P., ter Hofstede, A.H.M. & Kiepuszewski, B., "Advanced Workflow Patterns", 7th International Conference on Cooperative Information Systems (CoopIS), ed. O. Etzion and P. Scheuermann, Lecture Notes in Computer Science, Springer-Verlag, Heidelberg, Berlin, pp. 18, 2000.
[15] Thom, L., Lau, J., Iochpe, C. & Reichert, M., "Workflow Patterns for Business Process Modelling", in: 8th Workshop on Business Process Modeling, Development, and Support in conjunction with CAiSE, Trondheim, Norway, June 12-14, International Conference on Advanced Information Systems Engineering, 2007.
[16] van der Aalst, W.M.P., ter Hofstede, A.H.M., Kiepuszewski, B. & Barros, A.P., "Workflow Patterns", QUT Technical Report, FIT-TR-2002-02, Queensland University of Technology, Brisbane, 2002 (also see http://www.tm.tue.nl/it/research/patterns). To appear in Distributed and Parallel Databases, 2003.
[17] Aldin, L., de Cesare, S. & Lycett, M., "Semantic Discovery and Reuse of Business Process Patterns", 4th Annual Mediterranean Conference on Information Systems, Athens University of Economics and Business (AUEB), Greece, September 25-27, pp. 1-6, 2009b.
[18] Peristeras, V. & Tarabanis, K., "Towards an enterprise architecture for public administration using a top-down approach", European Journal of Information Systems, vol. 9, pp. 252-260, December 2000.
[19] di Dio, G., "ARWOPS: A Framework for Searching Workflow Patterns Candidate to be Reused", Second International Conference on Internet and Web Applications and Services (ICIW), IEEE CNF, May 13-19, Mauritius, pp. 33, 2007.
[20] Kaisler, S.H., Software Paradigms, Hoboken, N.J., USA: John Wiley & Sons, 2005.
[21] Arango, G., "Domain Engineering for Software Reuse", PhD Thesis, Department of Information Systems and Computer Science, University of California, Irvine, 1998.
[22] Foreman, J., "Product Line Based Software Development - Significant Results, Future Challenges", Software Technology Conference, Salt Lake City, UT, United States, April 23, 1996.
[23] Nwosu, K.C. & Seacord, R.C., "Workshop on Component-Based Software Engineering Processes", Technology of Object-Oriented Languages and Systems, Institute of Electrical & Electronics Engineers (IEEE), Santa Barbara, California, USA, pp. 532, 1999.
[24] Prieto-Díaz, R., "Domain analysis: an introduction", SIGSOFT Software Engineering Notes, Association for Computing Machinery (ACM), vol. 15, no. 2, pp. 47-54, 1990.
[25] Daga, A., de Cesare, S., Lycett, M. & Partridge, C., "An Ontological Approach for Recovering Legacy Business Content", in Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS), January 3-6, Los Alamitos, California, IEEE Computer Society Press, pp. 224, 2005.
[26] Cabot, J. & Raventós, R., "Roles as Entity Types: A Conceptual Modelling Pattern", ER 2004, pp. 69-82, 2004.
Speed Control of Direct Torque Controlled
Induction Motor By Using PI, Anti-Windup PI
And Fuzzy Logic Controller
H.AÇIKGÖZ1, Ö.F.KEÇECİOĞLU2, A.GANİ2 and M.ŞEKKELİ2
1Kilis 7 Aralik University, Kilis/Turkey, hakanacikgoz@kilis.edu.tr
2K.Maras Sutcu Imam University, K.Maras/Turkey, {fkececioglu & agani & msekkeli}@ksu.edu.tr
Abstract - In this study, a comparison between a PI controller, a fuzzy logic controller (FLC) and an anti-windup PI (PI+AW) controller used for the speed control of a direct torque controlled induction motor is presented. The direct torque controlled induction drive system is implemented in the MATLAB/Simulink environment and the FLC is developed using the MATLAB Fuzzy Logic toolbox. The proposed control strategies are tested under different operating conditions. Simulation results obtained from the PI controller, FLC and PI+AW controller, showing the performance of the closed loop control systems, are illustrated in the paper. The simulation results show that the FLC is more robust than the PI and PI+AW controllers against parameter variations, and that the FLC gives better performance in terms of rise time, maximum peak overshoot and settling time.
Keywords - Anti-windup PI controller, Direct torque control,
Fuzzy logic controller, Induction motor, PI controller
I. INTRODUCTION
DC motors have high performance in terms of dynamic behaviour and their control is simple, because their flux and torque can be controlled independently. However, DC motors have certain disadvantages due to the existence of commutators and brushes. Nowadays, induction motors are extensively used in industrial applications. Induction motors have complex mathematical models with highly nonlinear differential equations including speed- and time-dependent parameters. However, they are simple, rugged, inexpensive and available at all power ratings, and they need little maintenance. Therefore, the speed control of induction motors is all the more important for achieving maximum torque and efficiency [1-5]. With the rapid development of microprocessor and power semiconductor technologies and various intelligent control algorithms, control methods for induction motors have been improved. In recent years, research on induction motors, which are common in industrial systems due to some important advantages, has focused on vector-based high-performance control methods such as field oriented control (FOC) and direct torque control (DTC) [1-7].
FOC principles were first presented by Blaschke [4] and Hasse [5]. FOC of induction motors is based on the control principle of DC motors. DC motors have high performance in terms of dynamic behaviour and their control is simple. The armature and excitation winding currents of a separately excited DC motor can be controlled independently because they are orthogonal to each other. There is no such case in induction motors. Studies on induction motors showed that these motors could be controlled like DC motors if the three-phase variables are transformed to the d-q axes and the d-q axis currents are controlled. Vector control methods, which perform this transformation of axes, have been developed. The flux and torque of induction motors can then be controlled independently, so induction motors can adequately be used for variable speed drive applications [1-4].
DTC was first presented by Depenbrock [6] and Takahashi [7]. The DTC method has a simple structure, and its main advantages are the absence of complex coordinate transformations and current regulator systems. In the DTC method, the flux and torque of the motor are controlled directly using the flux and torque errors, which are processed in two different hysteresis controllers (torque and flux). An optimum switching table, driven by the outputs of the flux and torque hysteresis controllers, is used to control the inverter switches in order to provide rapid flux and torque response. However, because of the hysteresis controllers, DTC has disadvantages such as high torque ripple.
In recent years, the FLC has found many applications. Fuzzy logic is a technique developed by Zadeh [8], and it provides human-like behaviour for control systems. It is widely used because FLCs make it possible to control nonlinear, uncertain systems even in cases where no mathematical model is available for the controlled system [8-14]. This paper deals with the comparison of PI, FLC and PI+AW controllers for the speed control of a direct torque controlled induction motor. The performance of the FLC has been investigated and compared with the PI+AW and PI controllers.
The rest of this paper is organized as follows. In Section II,
direct torque control scheme is given. Section III describes
proposed controller design. The simulation results are given in
Section IV. Conclusions are presented in Section V.
II. DIRECT TORQUE CONTROL
A. Modeling of Induction Motor
The induction motor model can be developed from its fundamental electrical and mechanical equations. The d-q equations of a 3-phase induction motor expressed in the stationary reference frame are:

V_{ds} = R_s i_{ds} + p\psi_{ds}    (1)

V_{qs} = R_s i_{qs} + p\psi_{qs}    (2)

0 = R_r i_{dr} + p\psi_{dr} + w_r \psi_{qr}    (3)

0 = R_r i_{qr} + p\psi_{qr} - w_r \psi_{dr}    (4)

The flux linkage equations are:

\psi_{qs} = L_s i_{qs} + L_m i_{qr}    (5)

\psi_{ds} = L_s i_{ds} + L_m i_{dr}    (6)

\psi_{qr} = L_r i_{qr} + L_m i_{qs}    (7)

\psi_{dr} = L_r i_{dr} + L_m i_{ds}    (8)

The electromagnetic torque in the stationary reference frame is given as:

T_e = \frac{3}{2}\frac{P}{2}\left(\psi_{ds} i_{qs} - \psi_{qs} i_{ds}\right)    (9)

where p = d/dt; R_s, R_r are the stator and rotor resistances; L_s, L_r, L_m are the stator, rotor and mutual inductances; \psi_{ds}, \psi_{qs} are the stator fluxes in the d-q frame; \psi_{dr}, \psi_{qr} are the rotor fluxes in the d-q frame; i_{ds}, i_{qs}, i_{dr}, i_{qr} are the stator and rotor currents in the d-q frame; and w_r is the rotor speed.
B. Direct Torque Control
The DTC design is very simple and practicable. It consists of three parts: the DTC controller, the torque-flux calculator and the VSI. In principle, the DTC method selects one of the inverter's six voltage vectors and two zero vectors in order to keep the stator flux and torque within a hysteresis band around the demanded flux and torque magnitudes [1-6]. The torque produced by the induction motor can be expressed as shown below:

T_e = \frac{3}{2}\frac{P}{2}\frac{L_m}{L_s L_r}\,\psi_r \psi_s \sin\alpha    (10)

where \alpha is the angle between the rotor flux and stator flux vectors, \psi_r is the rotor flux magnitude, \psi_s is the stator flux magnitude, P is the number of pole pairs, L_m is the mutual inductance and L_r is the rotor inductance. Equation 10 shows that the torque depends on the stator flux magnitude, the rotor flux magnitude and the phase angle between the stator and rotor flux vectors. The stator equation of the induction motor is given by [6]:

V_s = \frac{d\psi_s}{dt} + i_s R_s    (11)

If the stator resistance is ignored, this can be approximated over a short time period as [6-7]:

\Delta\psi_s = V_s \Delta t    (12)

This means that the applied voltage vector determines the change in the stator flux vector. If a voltage vector is applied to the system, the stator flux changes so as to increase the phase angle between the stator flux and rotor flux vectors, and thus the torque produced will increase [6-7].
Fig. 1 shows the closed loop direct torque controlled induction motor system, implemented in the MATLAB/Simulink environment. The DTC induction motor model consists of four parts: speed control, switching table, inverter and induction motor. The d-q model is used for the induction motor design. The DTC block contains the flux and torque hysteresis models. The two-level flux and three-level torque hysteresis band comparators are given in Fig. 2 and Fig. 3, respectively. Flux control is performed by the two-level hysteresis band, while the three-level hysteresis band provides torque control. The outputs of the hysteresis bands are renewed in each sampling period, and the changes of the flux and torque are determined by these outputs. The voltage vectors are shown in Fig. 4. The flux control output dψs, the torque control output dTe and the sector of the stator flux vector determine the applied voltage vector through the Switching Look-up Table shown in Table 1.
In the DTC method, the stator flux and torque are estimated from the stator current, voltage and stator resistance and compared with the flux and torque reference values. The obtained flux and torque errors are applied to the hysteresis layers, in which the flux and torque bandwidths are defined. Afterwards, the amount of deviation is determined and the most appropriate voltage vectors are selected, via the Switching Look-up Table, to be applied to the inverter.
Fig. 1: DTC induction motor system in the MATLAB/Simulink environment
Fig. 4: Voltage vectors (V1(100) ... V6(101) and the zero vectors V0(000), V7(111) in the sD-sQ plane)

Table 1: Switching Look-up Table
Flux (ψ) | Torque (Te) | S1 | S2 | S3 | S4 | S5 | S6
ψ = 1 | Te = 1 | V2 | V3 | V4 | V5 | V6 | V1
ψ = 1 | Te = -1 | V6 | V1 | V2 | V3 | V4 | V5
ψ = -1 | Te = 1 | V3 | V4 | V5 | V6 | V1 | V2
ψ = -1 | Te = -1 | V5 | V6 | V1 | V2 | V3 | V4

Fig. 2: Two-level flux hysteresis comparator
Fig. 3: Three-level torque hysteresis comparator

If a torque increment is required then dTe equals +1, if a torque reduction is required then dTe equals -1, and if no change in the torque is required then dTe equals 0. If a stator flux increment is required then dψs equals +1, and if a stator flux reduction is required then dψs equals 0. In this way, the flux and torque control is implemented.
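The lookup described above can be sketched as a small table-driven function. This is an illustration, not the paper's implementation; the handling of the dTe = 0 case with zero vectors follows a common DTC convention and is an assumption here, since the extracted table shows only the active rows.

# Hypothetical sketch of the Table 1 lookup: the hysteresis outputs
# (dpsi in {1, 0} for flux, dTe in {1, 0, -1} for torque) and the flux
# sector (1..6) select the inverter voltage vector.
SWITCHING_TABLE = {
    (1,  1): [2, 3, 4, 5, 6, 1],   # flux up,   torque up
    (1, -1): [6, 1, 2, 3, 4, 5],   # flux up,   torque down
    (0,  1): [3, 4, 5, 6, 1, 2],   # flux down, torque up
    (0, -1): [5, 6, 1, 2, 3, 4],   # flux down, torque down
}

def select_vector(dpsi: int, dTe: int, sector: int) -> int:
    if dTe == 0:                   # zero vector (V0 or V7) -- assumed convention
        return 0 if sector % 2 else 7
    return SWITCHING_TABLE[(dpsi, dTe)][sector - 1]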
III. DESIGN OF FLC, PI AND ANTI-WINDUP PI CONTROLLER
In this paper, a conventional PI controller, a PI+AW controller and an FLC are designed and applied to the DTC model. In the first design, the conventional PI controller and the PI+AW controller are applied to the induction motor drive in order to control its speed. In the second design, the FLC is designed for stability and robustness. As a rule, the control algorithm of the discrete PI controller can be described as:

u_{PI}(k) = K_P e(k) + K_I \sum_{i=1}^{k} e(i)    (13)

where K_P is the proportional factor, K_I is the integral factor and e(k) is the error function.
As shown in Fig. 5, the structure of the PI controller is really simple and can be implemented easily. In Fig. 6, an anti-windup integrator is added to stop over-integration for the protection of the system [18-22].

Fig. 5: Simulink model of the classic PI controller
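A minimal discrete implementation of Equation 13 with anti-windup can be sketched as follows; the conditional-integration scheme and the saturation limits are assumptions for illustration and are not taken from the paper.

class PIAntiWindup:
    """Discrete PI controller of Equation 13 with conditional integration:
    the integral term is frozen while the output is saturated (a common
    anti-windup scheme; the limits here are assumed)."""
    def __init__(self, kp, ki, ts, u_min=-1.0, u_max=1.0):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.u_min, self.u_max = u_min, u_max
        self.acc = 0.0                       # accumulated integral term

    def update(self, e):
        u = self.kp * e + self.acc
        if self.u_min < u < self.u_max:      # integrate only when not saturated
            self.acc += self.ki * self.ts * e
        return min(max(u, self.u_min), self.u_max)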
Fig. 6: Simulink model of the PI controller with anti-windup

Fuzzy logic control is an appropriate method for designing nonlinear controllers via the use of heuristic information [9, 15]. An FLC system allows changing the control laws in order to deal with parameter variations and disturbances. Specifically, the inputs of the FLC are the speed error and the change in the speed error. These inputs are normalised to obtain the error e(k) and its change ∆e(k) in the range of -1 to +1. The fuzzy membership functions consist of seven fuzzy sets: NB, NM, NS, Z, PS, PM and PB, as shown in Fig. 7.

Fig. 7: Membership functions of the inputs and output

In the FLC, the rules have the form:

IF e is F_k^e AND de is F_k^{de} THEN du is w_k,  k = 1, ..., M    (14)

where F_k^e and F_k^{de} are the interval fuzzy sets and the w_k are singleton output membership functions. The rule base of the FLC system is given in Table 2. The block diagram of the FLC system for DTC is given in Fig. 8.

Table 2: Rule Base
e \ de | NB | NM | NS | Z | PS | PM | PB
NB | NB | NB | NB | NB | NM | NS | Z
NM | NB | NB | NM | NM | NS | Z | PS
NS | NB | NM | NS | NS | Z | PS | PM
Z | NB | NM | NS | Z | PS | PM | PB
PS | NM | NS | Z | PS | PS | PM | PB
PM | NS | Z | PS | PM | PM | PB | PB
PB | Z | PS | PM | PB | PB | PB | PB

Fig. 8: Block diagram of the Fuzzy-PI controller
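For illustration, the inference over the rule base of Table 2 can be sketched as follows; the triangular membership shapes match Fig. 7 qualitatively, but the set widths and the singleton output values are assumptions.

import numpy as np

# Seven triangular membership functions (NB ... PB) evenly spaced on [-1, 1];
# centres and the singleton outputs are assumptions for this sketch.
CENTRES = np.linspace(-1.0, 1.0, 7)

def mu(x):
    # Triangular membership degrees of x in the seven sets (width = spacing).
    return np.clip(1.0 - np.abs(x - CENTRES) / (1.0 / 3.0), 0.0, 1.0)

# Rule table of Table 2: entry [i][j] is the output set index for
# e in set i and de in set j (0 = NB ... 6 = PB).
RULES = np.array([
    [0, 0, 0, 0, 1, 2, 3],
    [0, 0, 1, 1, 2, 3, 4],
    [0, 1, 2, 2, 3, 4, 5],
    [0, 1, 2, 3, 4, 5, 6],
    [1, 2, 3, 4, 4, 5, 6],
    [2, 3, 4, 5, 5, 6, 6],
    [3, 4, 5, 6, 6, 6, 6],
])

def fuzzy_pi(e, de):
    # Weighted-average (Sugeno-style) defuzzification over all 49 rules.
    w = np.minimum.outer(mu(e), mu(de))      # firing strengths (AND = min)
    return float((w * CENTRES[RULES]).sum() / max(w.sum(), 1e-9))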
IV. SIMULATION RESULTS
Several simulations of the speed control of the direct torque controlled induction motor drive using the PI, PI+AW and FLC controllers were carried out in the MATLAB/Simulink environment with the Fuzzy Logic toolbox. The simulations are performed for different reference speeds, with a load of 3 N·m and at no load, over 2 s. The parameters of the induction motor used in the simulation are given in Table 3.

Table 3: Induction Motor Parameters
Parameter | Value
Power supply | 3-phase
Stator resistance (Rs) | 8.231 Ω
Rotor resistance (Rr) | 4.49 Ω
Number of poles (P) | 2
Stator self-inductance (Ls) | 0.599 H
Rotor self-inductance (Lr) | 0.599 H
Mutual inductance (Lm) | 0.5787 H
Moment of inertia (J) | 0.0019 kg·m²
Friction factor (B) | 0.000263
Frequency | 50 Hz

Figure 9 shows the performance of the PI, PI+AW and FLC controllers. The conventional PI and PI+AW controllers show overshoot during starting (4.6% and 0.8%, respectively). The PI controller response reaches the reference speed after 122 ms with overshoot, and the PI+AW response reaches the reference speed after 110 ms with overshoot, while the FLC response reaches steady state after nearly 65 ms without overshoot.
The simulation results show that the FLC provides a better speed response than the PI and PI+AW controllers. The FLC performance is better than that of both controllers in terms of settling time and maximum peak overshoot. The output torques controlled by the PI+AW, PI and FLC controllers are illustrated in Figs. 10, 11 and 12, respectively.
108
Table 5: Performance of Controllers at Load

Controller Type    Settling Time                                Overshoot
PI Controller      122 ms; 42.1 ms (response to load torque)    4.6% (1st peak); 2.33% (2nd peak)
Anti-Windup PI     110 ms; 42.1 ms (response to load torque)    0.8% (1st peak); 2.33% (2nd peak)
FLC                58 ms; 3 ms (response to load torque)        0% (1st peak); 0.66% (2nd peak)
Fig. 9: Motor speed responses at no-load
Fig. 10: The output torque response using PI+AW controller
Fig. 11: The output torque response using PI controller
Fig. 12: The output torque response using FLC
Fig. 13: Constant speed responses with load of 3 N-m

The constant speed response with a load of 3 N-m applied at 0.8 s is given in Fig. 13. The speed response with the FLC has no overshoot, settles faster than with the PI and PI+AW controllers, and shows no steady-state error. When the load is applied there is a sudden dip in speed: the speed falls from the reference of 1500 rpm to 1490 rpm, and it takes 3 ms to return to the reference speed. The simulation results show that the FLC gives better responses with respect to settling time and maximum peak overshoot; the corresponding values are given in Table 5. The torque responses of the PI+AW, PI and FLC controllers under load are given in Figs. 14, 15 and 16, respectively.
Fig. 14: The output torque response using PI+AW controller
Fig. 15: The output torque response using PI controller

Fig. 16: The output torque response using FLC
V. CONCLUSIONS
In this study, a Direct Torque Controlled induction motor drive system is presented and speed control of the induction motor is implemented. The motor drive system is carried out in the MATLAB/Simulink environment using the d-q mathematical model of the induction motor. The PI, PI+AW and FLC control systems are compared, and the effectiveness of the FLC against the PI and PI+AW controllers is illustrated. Considering the overshoot and the response time, the FLC gives clearly better performance than the PI and PI+AW controllers. Moreover, the torque ripple with the FLC is lower than with the PI and PI+AW controllers for all speed change cases.
REFERENCES
[1] B. K. Bose, "An Adaptive Hysteresis-Band Current Control Technique of a Voltage-Fed PWM Inverter for Machine Drive System," IEEE Trans. Industrial Electronics, vol. 37, Oct. 1990, pp. 402-408.
[2] I. Takahashi and T. Noguchi, "A new quick-response and high efficiency control strategy of an induction motor," IEEE Transactions on Industry Applications, vol. IA-22, no. 5, 1986, pp. 820-827.
[3] T. G. Habetler, F. Profumo, M. Pastorelli, and L. M. Tolbert, "Direct torque control of induction machines using space vector modulation," IEEE Transactions on Industry Applications, vol. 28, no. 5, 1992, pp. 1045-1053.
[4] M. Depenbrock, "Direct self-control of inverter-fed machine," IEEE Trans. Power Electronics, vol. 3, 1988, pp. 420-429.
[5] D. Casadei and G. Serra, "Implementation of a Direct Torque Control Algorithm for Induction Motors Based on Discrete Space Vector Modulation," IEEE Trans. Power Electronics, vol. 15, no. 4, July 2000.
[6] T. G. Habetler, F. Profumo, M. Pastorelli, and L. M. Tolbert, "Direct torque control of induction machines using space vector modulation," IEEE Transactions on Industry Applications, vol. 28, no. 5, 1992, pp. 1045-1053.
[7] R. Toufouti and H. Benalla, "Direct torque control for induction motor using fuzzy logic," ACSE Journal, vol. 6, no. 2, 2006, pp. 19-26.
[8] R. Toufouti and H. Benalla, "Direct torque control for induction motor using intelligent techniques," Journal of Theoretical and Applied Information Technology, 2007, pp. 35-44.
[9] F. Sheidaei, M. Sedighizadeh, S. H. Mohseni-Zonoozi, and Y. Alinejad-Beromi, "A fuzzy logic direct torque control for induction motor sensorless drive," Universities Power Engineering Conference, 2007, pp. 197-202.
[10] Y. Tang and G. Lin, "Direct Torque Control of Induction Motor Based on Self-Adaptive PI Controller," 5th International Conference on Computer Science & Education, Hefei, China, August 24-27, 2010.
[11] M. Baishan, L. Haihua, and Z. Jinping, "Study of Fuzzy Control in Direct Torque Control System," International Conference on Artificial Intelligence and Computational Intelligence, 2009.
[12] N. H. Ab Aziz and A. Ab Rahman, "Simulation on Simulink AC4 Model (200HP DTC Induction Motor Drive) Using Fuzzy Logic Controller," International Conference on Computer Applications and Industrial Electronics (ICCAIE), Kuala Lumpur, Malaysia, December 5-7, 2010.
[13] I. Ludthe, "The Direct Control of Induction Motors," Thesis, Department of Electronics and Information Technology, University of Glamorgan, May 1998.
[14] L. A. Zadeh, "Fuzzy sets," Information and Control, vol. 8, 1965, pp. 338-353.
[15] Y. V. Siva Reddy, T. Brahmananda Reddy, and M. Vijaya Kumara, "Direct Torque Control of Induction Motor using Robust Fuzzy Variable Structure Controller," International Journal of Recent Trends in Engineering and Technology, vol. 3, no. 3, May 2010.
[16] Vinod Kumar and R. R. Joshi, "Hybrid controller based intelligent speed control of induction motor," Journal of Theoretical and Applied Information Technology (JATIT), 2005, pp. 71-75.
[17] V. Chitra and R. S. Prabhakar, "Induction Motor Speed Control using Fuzzy Logic Controller," Proc. of World Academy of Science, Engineering and Technology, vol. 17, December 2006, pp. 248-253.
[18] M. Gaddam, "Improvement in Dynamic Response of Electrical Machines with PID and Fuzzy Logic Based Controllers," Proceedings of the World Congress on Engineering and Computer Science (WCECS 2007), San Francisco, USA, October 24-26, 2007.
[19] M. Sekkeli, C. Yıldız, and H. R. Ozcalik, "Fuzzy Logic Based Intelligent Speed Control of Induction Motor Using Experimental Approach," International Symposium on Innovations in Intelligent Systems and Applications (INISTA), Trabzon, Turkey, June 29 - July 1, 2009.
[20] K. Klinlaor and C. Nontawat, "Improved Speed Control Using Anti-windup PI Controller for Direct Torque Control Based on Permanent Magnet Synchronous Motor," 12th International Conference on Control, Automation and Systems, Jeju Island, Korea, October 17-21, 2012.
[21] D. Korkmaz, O. G. Koca, and Z. H. Akpolat, "Robust Forward Speed Control of a Robotic Fish," Sixth International Advanced Technologies Symposium, Elazig, Turkey, May 16-18, 2011, pp. 33-38.
The Load Balancing Algorithm for the Star
Interconnection Network
Ahmad M. Awwad
University of Petra
Faculty of information technology
Amman, Jordan
awwad@uop.edu.jo
Abstract

The star network is one of the promising interconnection networks for future high-speed parallel computers and is expected to be one of the future-generation networks. The star network is both edge and vertex symmetric, has been shown to have many attractive topological properties, and possesses a hierarchical structure. Although much research work has been done on this promising network in the literature, it still lacks adequate algorithms for the load balancing problem. In this paper we address this issue by investigating and proposing an efficient algorithm for the load balancing problem on the star network. The proposed algorithm, called the Star Clustered Dimension Exchange Method (SCDEM), is based on the Clustered Dimension Exchange Method (CDEM). The SCDEM algorithm is shown to be efficient in redistributing the load as evenly as possible among all nodes of the different factor networks.
Keywords: Load balancing, Star network, Interconnection
networks.
1. Introduction
The star graph was proposed by Akers et al. as an attractive alternative to the cube network [1]. It has been shown to have excellent topological properties for comparable network sizes of the cube network [2], including a smaller diameter, smaller degree, and smaller average diameter than the well-known cube network [1]. The star graph has a hierarchical structure, which enables building large networks out of smaller ones, and it is both edge and vertex symmetric. The star network has also been shown to have fault tolerance properties.

Some algorithms have been proposed for the star graph, such as a distributed fault-tolerant routing algorithm [3], which adapts the routing decisions in response to node failures. Nevertheless, one of the main problems the star network still suffers from is the lack of algorithms for the load balancing problem.

To our knowledge there are not enough results in the literature on implementing and proposing efficient load balancing algorithms for the star network. In this paper we try to fill this gap by proposing and embedding the SCDEM algorithm on the star graph, which is based on the CDEM algorithm [4]. The CDEM algorithm was shown to be attractive on the OTIS-Hypercube network, redistributing the load as evenly as possible among the processors [4]. An efficient implementation of the SCDEM algorithm on the star network will make the star network a more acceptable network for real-life applications in connection with load balancing. The rest of the paper is organized as follows: in Section 2 we present the necessary basic notations and definitions; in Section 3 we introduce some of the related work on load balancing; in Section 4 we present and discuss the implementation of the SCDEM algorithm on the star graph, together with an example of SCDEM on the S4 star network; finally, Section 5 concludes this paper.
2. Definitions and Basic Topological Properties
During the last decade a huge number of interconnection networks for High Speed Parallel Computers (HSPC) have been investigated and proposed in the literature [5, 6, 7]. One example is the hypercube interconnection network, also known as the binary n-cube. The star graph [1] is another example, proposed as an attractive alternative to the hypercube network. Since its appearance the star network has attracted a lot of research effort. Several properties of this network have been studied in the literature, including its basic topological properties [1], parallel path classification [4], node connectivity [2] and embedding [8]. Akers and Krishnamurthy [3] proved that the star graph has several advantages over the hypercube network, including a lower degree for a fixed network size, a smaller diameter, and a smaller average diameter. Furthermore, they showed that the star graph is maximally fault tolerant and both edge and vertex symmetric [3].

The structure of the star network plays an effective role in proposing any algorithm on it. Menn and Somani [21] have shown that the star graph may be seen as an n × (n-1)! framework, where the rows and the columns are an (n-1)-star and an n-node linear array, respectively. Also, Ferreira and Berthome [9] observed that the star graph may be seen as a rectangular framework R×C (Rows by Columns), where the rows are sub-stars Sn-2 and the columns contain n(n-1) nodes each.
However, relatively limited research effort has been dedicated to designing efficient algorithms for the star graph, including computing fast Fourier transforms [10], broadcasting [11], selection and sorting [12, 21], matrix multiplication [13] and load balancing [14, 15]. In an attempt to overcome this problem we present an efficient load balancing algorithm for the star graph that redistributes the load among all processors of the network as evenly as possible.
Definition 1: The n-star graph, denoted by Sn, has n! nodes, each labelled with a distinct permutation of the set {1, ..., n}. Any two nodes of Sn are connected if, and only if, their corresponding permutations differ exactly in the first position and any other position.
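As a sketch of Definition 1, the following builds the adjacency lists of Sn directly from the permutations (my illustration, not code from the paper).

```python
# Sketch of Definition 1: nodes of Sn are the n! permutations of
# {1,...,n}; two nodes are adjacent iff they differ by swapping the
# first symbol with the symbol in some other position.

from itertools import permutations

def star_graph(n):
    """Return the adjacency lists of Sn keyed by permutation tuples."""
    adj = {}
    for p in permutations(range(1, n + 1)):
        # Swapping position 0 with position i yields the n-1 neighbours;
        # index 0 of the list swaps positions 1 and 2, index 1 swaps
        # positions 1 and 3, and so on.
        adj[p] = [p[i:i+1] + p[1:i] + p[0:1] + p[i+1:]
                  for i in range(1, n)]
    return adj

g4 = star_graph(4)                         # 4! = 24 nodes, degree 3 each
assert all(len(nb) == 3 for nb in g4.values())
```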
Figure 1 shows the 4-star graph, with 4 groups each containing 6 vertices (i.e., four copies of the 3-star graph). The degree, δ, and the diameter, D, of the star graph are as follows [1, 16]:

δ of the n-star graph = n-1, where n ≥ 1.
D of the n-star graph = ⌊3(n-1)/2⌋.
Figure 1: The 4-star graph, S4.
3. Background and Related Work
The attractive results shown and proved in the literature make the star graph one of the strongest competitor topologies for High Speed Parallel Computers (HSPC) and a strong candidate network for real-life applications. This fact has motivated us to investigate the load balancing problem on the star network, since the star graph suffers from a limited number of efficient algorithms in general, and for the load balancing problem in particular. The load balancing problem has been investigated on various types of infrastructure, ranging from electronic networks [15] to OTIS networks [4].
The load balancing problem is one of the well-known and important problems and has been studied from different points of view and with different approaches. It was investigated by Ranka, Won and Sahni [17], who proposed and introduced the Dimension Exchange Method (DEM) on the hypercube topology. The DEM algorithm is based on the idea of finding the average load of neighbouring nodes: in a network of dimension n, the neighbours connected along each dimension exchange their loads so as to redistribute the load as evenly as possible, and the processor with the larger load sends the excess amount to its direct neighbour. The main advantage of the Dimension Exchange Method is that every node redistributes tasks with its direct neighbours to reach an even load balance among all nodes. Ranka et al. showed that in the worst case DEM redistributes the load in log2 n steps on the cube network [17].

Zhao, Xiao and Qin presented a hybrid scheme of diffusion and dimension exchange, called DED-X, for load balancing on the Optical Transpose Interconnection System (OTIS) [18, 19]. The proposed algorithm divides the load balancing task into three phases. The results achieved on OTIS networks showed that the load is redistributed almost evenly and efficiently. Moreover, the simulation results of Zhao et al. showed a considerable improvement in efficiency and stability [18, 19]. In further research, Zhao and Xiao presented different DED-X schemes for load balancing on homogeneous OTIS networks and proposed a new algorithm structure called the Generalized Diffusion-Exchange-Diffusion Method, which enables load balancing on heterogeneous OTIS networks [20]. Furthermore, Zhao, Xiao and Qin showed that the usability of the newly proposed load balancing methods is better than that of the traditional X load balancing algorithms [20].

The main objective of this paper is to propose and present a new load balancing algorithm for the star network, named the Star Clustered Dimension Exchange Method (SCDEM), based on the algorithm of [4].
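To make the dimension-exchange idea above concrete, here is a compact sequential sketch of DEM for a p-node hypercube (the real algorithm exchanges loads in parallel; integer loads and the sample vector are illustrative, not from [17]).

```python
# Sketch of the Dimension Exchange Method on a p-node hypercube
# (p a power of two): in round d every node averages its load with
# the neighbour whose label differs in bit d, so log2(p) rounds
# even out the load.

def dem_hypercube(load):
    p = len(load)                  # number of nodes, p = 2**dims
    dims = p.bit_length() - 1
    for d in range(dims):
        for i in range(p):
            j = i ^ (1 << d)       # neighbour across dimension d
            if i < j:              # handle each pair once
                avg = (load[i] + load[j]) // 2
                extra = load[i] + load[j] - 2 * avg
                load[i], load[j] = avg + extra, avg
    return load

print(dem_hypercube([8, 0, 4, 0, 0, 2, 0, 2]))  # -> [2]*8, fully even
```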
4. The Star Clustered Dimension Exchange Method for Load Balancing on the Star Network

The algorithm we present in this paper, SCDEM, is based on the Clustered Dimension Exchange Method (CDEM) for load balancing on the Optical Transpose Interconnection System with a hypercube factor network [4]. The worst-case time complexity of CDEM for load balancing on the OTIS-Hypercube is O(√p · M · log2 p), and the number of communication steps required by CDEM was proved to be 3 log2 p [4]. The main achievement of the new SCDEM is to obtain an even load balance for the Sn network by redistributing tasks between nodes of different groups. The number of cooperating moves needed between nodes in SCDEM is 2n-1, where n is the degree of Sn. Figure 2 presents the SCDEM algorithm for the load balancing problem on the n! processors of Sn.
The SCDEM load balancing algorithm is based on the
following phases:
PHASE 1:
A. The load balancing of the first stage is achieved by redistributing the load of all direct neighbour nodes whose corresponding permutations differ exactly in the 1st and 2nd positions.
B. Then the load of any two neighbour nodes is redistributed if, and only if, their corresponding permutations differ exactly in the 1st and 3rd positions.
C. Load redistribution continues in this way until the neighbour nodes whose corresponding permutations differ exactly in the 1st and nth positions have been balanced.

PHASE 2:
A. Repeat Phase 1 one more time.

PHASE 3:
For each pair of direct neighbour processors pi and pj, find the maximum difference of weights and redistribute the weight among the nodes of highest difference, following steps 3-12.

Note that n-1 is the number of neighbours of any processor in Sn:

1. for d = 1; d ≤ n-1; d++
2.   for all neighbour nodes pi and pj which differ in the 1st and (d+1)th positions of Sn, do in parallel
3.     exchange the total load sizes of the two nodes pi and pj
4.     AverageLoad(i,j) = Floor((Load pi + Load pj) / 2)
5.     if (Load pi >= AverageLoad(i,j))
6.       send the excess load of pi to the neighbour node pj
7.       Load pi = Load pi - extra load
8.       Load pj = Load pj + extra load
9.     else
10.      receive the extra load from neighbour pj
11.      Load pi = Load pi + extra load
12.      Load pj = Load pj - extra load
13. repeat steps 3 to 12 one more time
14. for all adjacent processors pj of pi, find the max |pi - pj|, where pj ranges over all neighbours of pi and 1 ≤ j ≤ n-1
15. redistribute the weight as in steps 3 to 12 between pi and the node with the max |pi - pj|

Figure 2: The SCDEM load balancing algorithm

The SCDEM algorithm redistributes the load among all processors of the network; phases one, two and three are done in parallel.

Phase 1: The load between the processors of Sn is exchanged as in steps 2 to 12 in parallel. In the first step the load exchange takes place between all processors that differ in the 1st and 2nd positions, for all the factor networks Sn-1 of Sn. The same process is then repeated continually until it reaches the neighbours pj that are n positions away from pi.

Phase 2: To enhance the load balancing efficiency between the processors of the n factor networks, the algorithm repeats steps 2 to 12 as described above.

Phase 3: As a final phase, all adjacent processors that differ in the first position and any other position, i.e. pi and pj, redistribute their loads: the algorithm finds the maximum difference among all the weights of these neighbours and redistributes the weight between pi and the node with the maximum |pi - pj| only once, following steps 2-12 of the SCDEM algorithm.
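The pairwise exchange of steps 3-12 can be sketched as follows, reusing the star_graph helper from the earlier sketch; Phase 3 (the max-difference exchange) is omitted, and the sequential loop stands in for the parallel exchanges.

```python
# Sketch of SCDEM Phases 1 and 2 on the star graph built by
# star_graph(): in stage d every node balances with the neighbour
# obtained by swapping positions 1 and d+2; Phase 2 repeats Phase 1.

def scdem_phase(adj, load):
    degree = len(next(iter(adj.values())))   # n-1 neighbours per node
    for d in range(degree):                  # stages A, B, C, ...
        for p, neighbours in adj.items():
            q = neighbours[d]
            if p < q:                        # visit each pair once
                avg = (load[p] + load[q]) // 2
                hi, lo = (p, q) if load[p] >= load[q] else (q, p)
                extra = load[hi] - avg       # excess sent to poorer node
                load[hi] -= extra
                load[lo] += extra
    return load

g4 = star_graph(4)
load = {p: (17 if p == (1, 2, 3, 4) else 1) for p in g4}
scdem_phase(g4, load)   # Phase 1
scdem_phase(g4, load)   # Phase 2: repeat one more time
```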
Example: To explain the proposed SCDEM algorithm presented in Figure 2, the following example implements the load balancing algorithm on the different factor networks of S4.

Figure 3 shows the four factor networks S3 of the network S4; each factor network has 6 processors with a specific load assigned to it. Since the degree of Sn is n-1, each node is connected to three other direct nodes: two within its own group and one in an outer group. The number next to each processor indicates its starting load.

First, we implement Phase 1 of the algorithm by following steps 3-12. Figures 4, 5 and 6 reflect Phases 1A, 1B and 1C of the SCDEM algorithm, in which each pair of nodes differs in the 1st position and the 2nd, 3rd or 4th position, respectively. Figure 6 shows the new load distribution at the end of Phase 1.

In Phase 2 we repeat the same steps one more time to redistribute the load among the neighbouring nodes, as suggested by the SCDEM algorithm presented in Figure 2, following steps 2-13. Figure 7 shows the load of all processors at the end of Phase 2.

Finally, in Phase 3, all adjacent nodes that differ in the first position and any other position, i.e. pi and pj, redistribute their loads by first finding the highest difference between their weights and then applying the steps of the algorithm.
The achieved load distribution is shown to be efficient and optimal. The proposed SCDEM algorithm is efficient at the end of Phase 3, where for each pair of direct neighbours pi and pj we find the maximum difference of weights among the nodes and then redistribute the weight of highest difference, following steps 2-12 (Figure 8). The final distribution is achieved in 2n-1 communication steps, where n is the degree of the star network.
5. Conclusion
This paper presents an efficient algorithm for redistributing the load of the nodes of a star network. The newly proposed algorithm, called the Star Clustered Dimension Exchange Method (SCDEM), is based on the well-known CDEM algorithm proposed by Mahafzah et al. The proposed SCDEM algorithm results in an almost even load distribution among all nodes of the star network, and it redistributes the load in 2n-1 communication steps, which is considered efficient.

As a future extension of this work we will carry out analytical evaluation, including execution time, load balancing accuracy, communication steps and speed, to establish the efficiency of SCDEM mathematically.
References
1. S. B. Akers, D. Harel and B. Krishnamurthy, "The Star Graph: An Attractive Alternative to the n-Cube," Proc. Intl. Conf. on Parallel Processing, 1987, pp. 393-400.
2. K. Day and A. Tripathi, "A Comparative Study of Topological Properties of Hypercubes and Star Graphs," IEEE Trans. Parallel & Distributed Systems, vol. 5.
3. Kaled Day and Abdel-Elah Al-Ayyoub, "Node-ranking schemes for the star networks," Journal of Parallel and Distributed Computing, vol. 63, no. 3, March 2003, pp. 239-250.
4. B. A. Mahafzah and B. A. Jaradat, "The Load Balancing Problem in OTIS-Hypercube Interconnection Networks," Journal of Supercomputing, vol. 46, 2008, pp. 276-297.
5. S. B. Akers and B. Krishnamurthy, "A Group Theoretic Model for Symmetric Interconnection Networks," Proc. Intl. Conf. on Parallel Processing, 1986, pp. 216-223.
6. K. Day and A. Al-Ayyoub, "The Cross Product of Interconnection Networks," IEEE Trans. Parallel and Distributed Systems, vol. 8, no. 2, Feb. 1997, pp. 109-118.
7. A. Al-Ayyoub and K. Day, "A Comparative Study of Cartesian Product Networks," Proc. of the Intl. Conf. on Parallel and Distributed Processing: Techniques and Applications, vol. I, Sunnyvale, CA, USA, August 9-11, 1996, pp. 387-390.
8. I. Jung and J. Chang, "Embedding Complete Binary Trees in Star Graphs," Journal of the Korea Information Science Society, vol. 21, no. 2, 1994, pp. 407-415.
9. P. Berthome, A. Ferreira and S. Perennes, "Optimal Information Dissemination in Star and Pancake Networks," IEEE Trans. Parallel and Distributed Systems, vol. 7, no. 12, Aug. 1996, pp. 1292-1300.
10. P. Fragopoulou and S. Akl, "A Parallel Algorithm for Computing Fourier Transforms on the Star Graph," IEEE Trans. Parallel & Distributed Systems, vol. 5, no. 5, 1994, pp. 525-531.
11. V. Mendia and D. Sarkar, "Optimal Broadcasting on the Star Graph," IEEE Trans. Parallel and Distributed Systems, vol. 3, no. 4, 1992, pp. 389-396.
12. S. Rajasekaran and D. Wei, "Selection, Routing, and Sorting on the Star Graph," Journal of Parallel & Distributed Computing, vol. 41, 1997, pp. 225-233.
13. S. Lakshmivarahan and S. K. Dhall, "Analysis and Design of Parallel Algorithms: Arithmetic and Matrix Problems," McGraw-Hill Publishing Company, 1990.
14. N. Imani et al., "Perfect load balancing on the star interconnection network," Journal of Supercomputing, vol. 41, no. 3, September 2007, pp. 269-286.
15. Jehad Al-Sadi, "Implementing FEFOM Load Balancing Algorithm on the Enhanced OTIS-n-Cube Topology," Proc. of the Second Intl. Conf. on Advances in Electronic Devices and Circuits (EDC 2013), pp. 47-5.
16. K. Day and A. Al-Ayyoub, "The Cross Product of Interconnection Networks," IEEE Trans. Parallel and Distributed Systems, vol. 8, no. 2, Feb. 1997, pp. 109-118.
17. S. Ranka, Y. Won and S. Sahni, "Programming a Hypercube Multicomputer," IEEE Software, vol. 5, no. 5, 1988, pp. 69-77.
18. C. Zhao, W. Xiao and Y. Qin, "Hybrid diffusion schemes for load balancing on OTIS networks," in ICA3PP, 2007, pp. 421-432.
19. G. Marsden, P. Marchand, P. Harvey and S. Esener, "Optical Transpose Interconnection System Architecture," Optics Letters, vol. 18, no. 13, 1993, pp. 1083-1085.
20. Y. Qin, W. Xiao and C. Zhao, "GDED-X schemes for load balancing on heterogeneous OTIS networks," in ICA3PP, 2007, pp. 482-492.
21. A. Menn and A. K. Somani, "An Efficient Sorting Algorithm for the Star Graph Interconnection Network," Proc. Intl. Conf. on Parallel Processing, 1990, pp. 1-8.
Application of a Data Compression Algorithm
in Banking and Financial Sector
İ. ÜLGER1, M. ENLİÇAY1, Ö. ŞAHİN1, M. V. BAYDARMAN1, Ş. TAŞDEMİR2
1 Kuwait Turkish Participation Bank, R&D Center, Kocaeli, Turkey
ilker.ulger@kuveytturk.com.tr, mustafa_enlicay@kuveytturk.com.tr, ozgur.sahin@kuveytturk.com.tr,
mehmet.baydarman@kuveytturk.com.tr
2 Selçuk Üniversitesi, Teknik Bilimler Meslek Yüksekokulu, Konya, Türkiye
stasdemir@selcuk.edu.tr
Abstract - Data compression can be defined as reducing the size of a file on disk relative to its original size. As the Internet has become widespread, the amount of transferred data has grown with the increase in file transfers, creating a need to reduce the size of transferred data with compression techniques. In order to control bandwidth usage and provide faster data transfers, data compression has become very critical in banking and financial systems, which have intensive business processes. In this research, the GZIP algorithm is implemented in the data transfer process between clients (ATM, Internet Banking, Mobile Branch, Branch) and the application server at Kuveyt Türk Participation Bank. Successful, high-performance results are obtained, and a comparison between the GZIP and BZIP algorithms on documents of different sizes is presented graphically. With further development in mind, the algorithm is integrated as a modular extension which may be unplugged later in order to use a better algorithm.

Keywords - Gzip, Data Compression, Finance, Software-Hardware, Banking.
I. INTRODUCTION
The rapid developments in information technologies have had a significant catalytic effect on internet and network technologies. In particular, the increase of data transferred over the network brings out problems such as bandwidth usage and long-lasting data transfers. Considering these issues, compressing the data before transfer is a commonly used solution to overcome such problems. With the spread of internet usage, data transfer between devices has increased, since new features like document sharing and video conferencing are available to users, and users are getting used to accessing their data on different devices. These developments make data compression very important and inevitable [1, 3].
Although disk capacity is not as important a concern as it was in the past, sound files (MP3, WAV, AAC etc.), images (JPEG, PNG, GIF etc.) and videos (MPEG, MP4 etc.) are all compressed with various compression methods.
Data compression methods are divided into two groups according to their compression techniques: lossy and lossless compression. In lossy compression, the original data is partially lost, and after the compressed data is decompressed the original data cannot be recovered entirely; hence the name lossy compression. Lossy compression methods are generally used in cases where the data loss is unimportant as long as it is not perceived by the human senses, so these methods are applied to files like images, sounds and videos. Since the human eye and ear are less sensitive to high-frequency components, the data elimination is usually performed on the data that represents the high-frequency values [2, 4].
There are two families of lossless compression methods: probability-based encoding and dictionary-based encoding. In probability-based encoding, compression is performed by replacing the symbols that occur most frequently with shorter bit sequences; the most used probability-based methods are Huffman encoding and arithmetic encoding. In dictionary-based encoding, compression is performed by using a single symbol instead of a frequently occurring sequence of symbols; the most used dictionary-based methods are the LZ77, LZ78 and LZW algorithms. Algorithms that combine dictionary-based and probability-based techniques provide higher compression rates; for instance, the Deflate algorithm uses both Huffman and LZ77 [2, 4].
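As a small illustration of probability-based encoding, the sketch below derives Huffman codes for a short text, so that the most frequent symbol receives the shortest code (an illustration, not production code).

```python
# Sketch of Huffman coding: repeatedly merge the two least frequent
# subtrees, prefixing '0'/'1' to the codes of their symbols.

import heapq
from collections import Counter

def huffman_codes(text):
    heap = [[freq, [[sym, ""]]] for sym, freq in Counter(text).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1]:
            pair[1] = "0" + pair[1]
        for pair in hi[1]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0], lo[1] + hi[1]])
    return dict((sym, code) for sym, code in heap[0][1])

print(huffman_codes("abracadabra"))  # 'a' receives the shortest code
```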
In large-scale Internet data processing, data compression and decompression are very important technologies which can significantly improve the effective capacity of disks and the effective IO bandwidth, reduce the costs of an IDC and accelerate application programs. A low-cost FPGA hardware architecture for GZIP compression and decompression has been described and applied to IDC services successfully [5].
During the opening of a website or the transfer of a file between different locations, bandwidth overload may occur, causing the website to open slowly. These issues should be handled in order to maintain high customer satisfaction, which makes data compression inevitable in transfer processes.
The Gzip algorithm is commonly used in different areas of information technology. One of them is using Gzip as a decoder in digital television applications: such a GZIP decoder can not only decompress a normal GZIP stream but also speed up decompression for DTV applications. The architecture exploits pipelining to the maximum extent in order to obtain high speed and throughput, and a partial decoding approach is proposed to deal with decompression under limited memory [6].
Another area of utilization is Location Based Services (LBS). With the development of LBS, the transport of spatial data over the wireless Web has become a hot research topic. After comparing and extending the existing solutions, a new SVG-based solution was provided to represent and compress spatial data, and the key technologies of SVG data compression were researched and resolved; the improved compression method combines a simplified SVG representation with GZIP compression [7].
According to research based on a GZIP implementation, a low-cost FPGA hardware architecture for GZIP compression and decompression has been applied to IDC services successfully. Depending on the application, disk IO utilization was improved by 300 to 500 percent, programs were accelerated by 30% to 200%, and 1 to 3 CPU-core resources could be released [5].
Considering midsize companies in the banking and financial sector, there are two approaches to compression. The first and most preferred approach is to compress the data in software; the other is to use hardware.

Compression with software:
 If the software architecture is suitable, this approach has very low cost.
 It is flexible across different protocols. For instance, text-based algorithms are more successful with XML and HTML, whereas for banking documents in other formats (TIFF, JPEG, GIF etc.) specialized algorithms are more successful.
 It has low cost for development and roll-out.
 New technologies or algorithms can easily be adopted thanks to modularity.
Companies that benefit from software data compression may use the built-in libraries of their software technology or prefer to buy third-party software.

Compression with hardware:
 High cost of purchase, management and maintenance.
 Hardware is needed in the center and also in the clients connected to the center.
 Skilled staff are needed to set up and manage the hardware.
In this research, a software implementation using an integrated algorithm is performed in order to speed up the data transfer among the branches of a participation bank.
II. MATERIAL AND METHOD
Day by day, data processing technologies find more areas of usage, and at this scale data compression and decompression algorithms and applications play a significant role. In different sectors, companies use data compression in software or hardware in order to reduce costs and save time, and in the banking and financial sector data compression methods are commonly used.
2.1. GZIP Compression Algorithm
The Gzip compression algorithm finds similar strings in text documents and replaces them in order to reduce the file size. Gzip uses the DEFLATE algorithm, which combines the LZ77 and Huffman compression algorithms. Gzip is open source and very effective in terms of performance; therefore it spread swiftly and is still commonly used.
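A minimal sketch of such a compress/decompress round trip with Python's standard gzip module is shown below; the repetitive XML-like payload is illustrative, not the bank's data.

```python
# Sketch: compress a message before transfer, decompress on arrival,
# and report the achieved compression ratio.

import gzip

message = ("<response>" + "<row status='ok' amount='100.00'/>" * 200
           + "</response>").encode("utf-8")

compressed = gzip.compress(message)
restored = gzip.decompress(compressed)

assert restored == message
print(f"{len(message)} -> {len(compressed)} bytes "
      f"(ratio {len(message) / len(compressed):.1f}:1)")
```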
2.2. Comparing Algorithms
To measure the performance and outputs of the gzip and bzip algorithms fairly, documents of different sizes were collected. In research by Navarro and his colleagues [8], the performance and speed of the Gzip and Bzip compression algorithms were measured with documents of different sizes (Table 1).

Table 1. The documents' specifications [8]
In this research [8], the comparative result data set is used, and graphical representations indicating compression rates and times are created. The compression rates of the Gzip and Bzip algorithms are shown in Figure 1. As the figure indicates, the algorithm with the best compression rate is the Gzip-f algorithm.

The preferred algorithm may vary with the requirements of the system and environment. In other words, the basic requirement in one system may be to maximize the compression of the data, while in another it may be to minimize the compression and decompression time.
In the drawn graphics, the compression times for the same document corpus are shown in Figure 2, and the decompression times of the compressed documents are shown in Figure 3. It may be inferred from Figure 2 that the compression time increases as the document size grows. On analysis, the Gzip-f algorithm appears to be the most stable algorithm, and it has the best performance in compression rates. Decompression time is as important an issue as compression time; the decompression times of the compressed documents, shown in Figure 3, clearly show the differences between the algorithms. The Gzip-f algorithm performs well in decompression as well as in compression [8].
Figure 1. Compression Rates

Figure 2. Compression Time
Figure 3. Decompression Time

2.3. Application for Finance Sector

This research was carried out in a participation bank with 300 branches in order to speed up the data transfer between the branches and the center. An implementation of the compression algorithm was applied, and inferences were drawn in this participation bank. The system has been in use to provide faster transactions on the web site, ATMs, mobile banking and the branches, which have thousands of daily users. The algorithm implementation is integrated modularly, so newly developed algorithms may easily replace it later. The block structure of the compression system is shown in Figure 4.

Figure 4. The Block Structure in the participation bank

With the help of the literature research, an evaluation was made and the results were examined. It was concluded that the Gzip-f algorithm is the best in terms of compression rates and compression/decompression times, and that it is suitable for different requirements. Considering these results, the Gzip compression algorithm is used in this application.
In this bank, one of the SOA (Service Oriented Architecture) approaches, the ESB (Enterprise Service Bus) architecture, is used. All messages (HTTP, HTTPS, SOAP, TCP, REST) coming from the internal and external networks arrive at the ESB; they are compressed with the Gzip algorithm at the endpoints (mobile devices, ATM, Internet Branch and branches) and transferred as compressed data over the network. The ESB decompresses every message that arrives and directs it to the banking services [9, 10].
In this compression software, the GzipStream class of the .NET 4.5 framework is used with the purpose of decreasing bandwidth usage and speeding up messaging. In the 3-tier architecture used in this bank, all messages between the clients and the application server are compressed with this application. Examining the results in this bank, compression ratios of up to 1/7 are obtained on this network.
There are 2 Mbit lines between the center and the branches in our network. With the increase of the new branches' reporting needs and the number of transactions, bandwidth usage is increasing day by day. The high costs of network lines in our country direct us toward algorithms that are effective in compression.
III. RESULTS AND RECOMMENDATIONS
In this research, it is concluded that data compression is very important in order to prevent bandwidth overload and to speed up data transfer between locations such as the web site, ATMs, branches and the center.

Specifically, in finance-sector companies the algorithm applications focus on the Gzip and Bzip2 algorithms. In some trials, although the Bzip2 algorithm achieved very good compression rates, it had longer compression and decompression times. These findings directed our company to prefer the Gzip algorithm.
The Gzip algorithm is integrated into the banking system modularly; therefore, if a better algorithm is developed in the future, it can easily be integrated into the system. Research and development continue in this area with the intention of finding a better algorithm.
REFERENCES
[1] Goksu H, Diri B, "Morphology Based Text Compression," Dokuz Eylul University Faculty of Engineering, Journal of Engineering Sciences, 12(3), pp. 77-85, 2010.
[2] Altan M, "Veri Sıkıştırmada Yeni Yöntemler" (New Methods in Data Compression), PhD thesis, Institute of Natural Sciences, Trakya University, Edirne, Turkey, 2006.
[3] Goksu H, Diri B, "Morphology Based Text Compression," IEEE 18th Signal Processing and Communications Applications Conference (SIU), Dicle University, Diyarbakir, pp. 45-48, 2010.
[4] Mesut A, Carus A, "Kayıpsız Görüntü Sıkıştırma Yöntemlerinin Karşılaştırılması" (A Comparison of Lossless Image Compression Methods), II. Mühendislik Bilimleri Genç Araştırmacılar Kongresi (MBGAK), Istanbul, pp. 93-100, 2005.
[5] Ouyang J, Luo H, Wang Z, Tian J, Liu C, Sheng K, "FPGA implementation of GZIP compression and decompression for IDC services," International Conference on Field-Programmable Technology (FPT), pp. 265-268, 2010.
[6] Zhu K, Liu W, Du J, "A custom GZIP decoder for DTV application," 2013 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 681-684, 2013.
[7] Li Y, Wang Y, "Research on Compression Technology for Geodata Based SVG in LBS," 2010 International Conference on Web Information Systems and Mining (WISM), pp. 134-137, 2010.
[8] Navarro N, Brisaboa N, "New Compression Codes for Text Databases," PhD thesis, Corona University, Spain, 2005.
[9] Pasatcha P, "Management of Innovation and Technology," 4th IEEE International Conference, Mahanakorn Univ. of Technol., Bangkok, pp. 1282-1285, 2008.
[10] "Service-oriented architecture," Available: http://en.wikipedia.org/wiki/Service-oriented_architecture, accessed 04.04.2014.