International Conference on Engineering Trends and Science & Humanities (ICETSH-2015)
ENHANCE THE SECURITY AND AUTHORIZATION FOR DATA
TRANSMISSION IN WIRELESS NETWORKS
N. Karpagam¹, S. Nithya²
¹II-M.E (CS), ²Assistant Professor / ECE, Dhanalakshmi Srinivasan Engineering College, Perambalur.
ABSTRACT
Worldwide Interoperability for Microwave Access (WiMAX) and Long-Term Evolution (LTE) are considered the best technologies for vehicular networks. WiMAX and LTE are Fourth-Generation (4G) wireless technologies that have well-defined quality of service (QoS) and security architectures. The existing work is a QoS-aware distributed security architecture using the elliptic curve Diffie–Hellman (ECDH) protocol, which has proven security strength and low overhead for 4G wireless networks. The proposed distributed security architecture uses the ECDH key exchange protocol. ECDH can establish a shared secret over an insecure channel at the highest security strength, but it suffers from the limitations of high computational and communication overhead in addition to a lack of scalability. The proposed work therefore proposes an unconditionally secure and efficient SAMA. The main idea is that, for each message m to be released, the message sender, or the sending node, generates a source anonymous message authenticator for the message m. The generation is based on the MES scheme on elliptic curves. For a ring signature, each ring member is required to compute a forgery signature for all other members in the AS. IMS (IP Multimedia Subsystem) is a set of specifications to offer multimedia services through the IP protocol. These make it possible to include all kinds of services, such as voice, multimedia and information, on a platform reachable through any Internet link. A multi-attribute stereo model for IMS security analysis is based on ITU-T Recommendation X.805 and the STRIDE threat model, which provide a comprehensive and systematic standpoint of IMS. Femtocell access points (H(e)NBs) are close-range, limited-capacity base stations that use residential broadband connections to connect to carrier networks. The final conclusion is to provide hop-by-hop message authentication without the weakness of the built-in threshold of the polynomial-based scheme; we then propose a hop-by-hop message authentication scheme based on the SAMA.
Key words: Long-Term Evolution (LTE), Multi Hop, Worldwide Interoperability for Microwave Access (WiMAX), Elliptic Curve Diffie–Hellman (ECDH).
1. INTRODUCTION

In 4G networks, Worldwide Interoperability for Microwave Access (WiMAX) and Long-Term Evolution (LTE) are two emerging broadband wireless technologies aimed at providing high-speed Internet of 100 Mb/s at a vehicular speed of up to 350 km/h. Further, the 4G wireless standards provide well-defined QoS and security architectures. For this reason, 4G cellular networks are considered up-and-coming technologies for vehicular multimedia applications. WiMAX
and LTE resemble each other in some key aspects, including operating frequency spectrum, large capacity, mobility, strong QoS mechanisms, and strong security with a related key hierarchy from the core network to the access network. However, WiMAX and LTE also differ from each other in certain aspects, as they have evolved from different origins. LTE has evolved from the 3rd Generation Partnership Project (3GPP); therefore, the LTE network has to support the existing 3G users' connectivity, but there is no such constraint for WiMAX. Particularly, on the security aspect, the WiMAX authentication process uses Extensible Authentication Protocol Tunneled Transport Layer Security (EAP-TTLS) or EAP-Transport Layer Security (EAP-TLS).

Figure 1. WiMAX architecture

WiMAX is an emerging broadband wireless technology based on the IEEE 802.16 standard. The security sublayer of the IEEE 802.16d standard defines the security mechanisms for fixed networks, and the IEEE 802.16e standard defines the security mechanisms for mobile networks. The functions of the security sublayer are to: (i) authenticate the user when the user enters the network, (ii) authorize the client, if the user is provisioned by the network service provider, and then (iii) provide the necessary encryption support for the key transfer and data traffic.
The previous IEEE 802.16d standard security architecture is based on the PKMv1 (Privacy Key Management) protocol, but it has many security issues. A large number of these issues are resolved by the later PKMv2 protocol in the IEEE 802.16e standard, which provides a flexible solution that supports device and user authentication between a mobile station (MS) and the home connectivity service network (CSN). Even though both of these standards describe the medium access control (MAC) and physical (PHY) layer functionality, they mainly concentrate on point-to-multipoint (PMP) networks. From the security standpoint, mesh networks are more vulnerable than PMP networks, but the standards have failed to address the mesh mode.
The requirement for higher data speed is increasing rapidly, the reason being the availability of smart phones at low cost in the market due to competition, and the usage of social networking websites. Constant improvement in the wireless data rate is already happening. Different network technologies are integrated to provide seamless connectivity and are termed as a heterogeneous network. Long Term Evolution-Advanced (LTE-A) is known as 4G, and it is the solution for heterogeneous networks and wireless broadband services. International Mobile Telecommunication-Advanced (IMT-Advanced) represents a family of mobile wireless technologies known as 4G. Basically, IP was termed as a general-purpose data transport protocol in the network layer, but it is now extended as a carrier for voice and video communications over 4G networks. Wireless networks in the future will be heterogeneous. Different access networks such as the Institute of Electrical and Electronics Engineers (IEEE) 802.15 Wireless Personal Area Network (WPAN), IEEE 802.11 Wireless Local Area Network (WLAN), IEEE 802.16 Wireless Metropolitan Area Network (WMAN), General Packet Radio Service (GPRS), Enhanced Data rate for GSM Evolution (EDGE), Wideband Code Division Multiple Access (WCDMA), Code Division Multiple Access (CDMA2000), satellite networks, etc., are integrated.

Selecting the suitable access network to meet the QoS requirements of a specific application has become a significant topic, and the priority is to maximize the QoS experienced by the user. QoS is the ability of a network to provide premier service to some fraction of the total network traffic over specific underlying technologies. The QoS metrics are delay, jitter (delay variation), service availability, bandwidth, throughput, and packet loss rate. Metrics are used to indicate the performance of the particular scheme employed. QoS can be achieved by resource reservation (integrated services) or datagram prioritization (differentiated services). From the QoS point of view, the protocol stack is composed of upper layer protocols (transport and above), on top of IP. Applications can, in this context, be classified according to the data flows they exchange as elastic or real-time. The network layer includes IP traffic control that implements policing and classification, flow shaping, and scheduling. The data link layer may also provide QoS support, by means of transmission priorities or virtual channels. QoS provision in 4G networks is challenging as they support
varying bit rates from multiple users and a variety of applications, hostile channel characteristics, bandwidth allocation, fault-tolerance levels, and frequent handoff among heterogeneous wireless networks. QoS support can occur at the network, transport, application, user and switching levels.

On the other hand, the LTE authentication procedure uses the EAP Authentication and Key Agreement (EAP-AKA) procedure, which authenticates only the International Mobile Subscriber Identity (IMSI) burned in a subscriber identity module (SIM) card. Accordingly, the LTE security does not meet the enterprise security requirement, as LTE does not authenticate enterprise-controlled security.

Figure 2. LTE Architecture

Moreover, there is a lack of an integrated study and of QoS-aware solutions for multihop WiMAX and LTE security threats in existing research efforts. Therefore, the third objective of this paper is to analyze both WiMAX and LTE convergence, which may be useful or even crucial for service providers to support high-speed vehicular applications. We identified the DoS/Replay attack threat in the LTE network during the initial network entry stage of the user equipment (UE). As the WiMAX and LTE networks have similarities in the security key hierarchy from the core network to the access network and in symmetric key encryption, we further apply the design of ECDH to LTE networks.

Security is arguably one of the primary concerns and will determine the future of IMS deployment. Usually, IPSec will affect the QoS performance, because the IPSec header in each packet consumes additional bandwidth. To mitigate the security threats and the performance degradation, we propose a distributed security scheme using a protocol, elliptic curve Diffie–Hellman (ECDH), that has lower overhead than that of IPSec. ECDH is a Layer-2 key agreement protocol that allows users to establish a shared key over an insecure channel. ECDH was investigated, and the results showed that it did not affect the QoS performance much in 4G single-hop WiMAX networks. Therefore, ECDH is adopted in this research for dealing with the existing Layer-2 security threats in 4G multihop networks.
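To make the key-agreement step concrete, the following is a minimal sketch of ECDH shared-secret establishment in Python, assuming the widely used `cryptography` package, the NIST P-256 curve, and an HKDF step to turn the raw secret into a session key. The curve, the KDF parameters, and the node roles are illustrative assumptions, not details fixed by the scheme described above.

```python
# Minimal ECDH key-agreement sketch (illustrative; curve and KDF
# parameters are assumptions, not the paper's exact Layer-2 design).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side (e.g., a relay station and a base station) generates a
# key pair on a shared, publicly known curve.
rs_private = ec.generate_private_key(ec.SECP256R1())
bs_private = ec.generate_private_key(ec.SECP256R1())

# Public keys are exchanged over the insecure channel (in the setting
# above, broadcast in management messages such as the DCD).
rs_public = rs_private.public_key()
bs_public = bs_private.public_key()

# Both sides compute the same shared secret independently.
secret_at_rs = rs_private.exchange(ec.ECDH(), bs_public)
secret_at_bs = bs_private.exchange(ec.ECDH(), rs_public)
assert secret_at_rs == secret_at_bs

# Derive a symmetric session key from the raw shared secret.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None,
    info=b"layer2-session-key",  # label is an illustrative assumption
).derive(secret_at_rs)
print("derived", len(session_key), "byte session key")
```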
This paper also proposes a multi-attribute stereo model for IMS security analysis based on ITU-T Recommendation X.805 and the STRIDE threat model, which provides a comprehensive and systematic perspective of IMS. A threat analysis of the IMS network is made by adopting the model.

Device mutual authentication is performed using IKEv2 with public key signature based authentication with certificates. Attacks are avoided at the H(e)NB by allowing only IKE negotiations and ESP-encrypted traffic using IKEv2. During setup of the tunnel, the H(e)NB includes a list of supported ESP authentication transforms and ESP encryption transforms as part of the IKEv2 signaling. The SeGW selects an ESP authentication transform and an ESP encryption transform and signals this to the H(e)NB.

2. RELATED WORKS

In this paper [2], Muhammed Mustaqim, Khalid Khan, and Muhammed Usman represent that QoS support can occur at the network, transport, application, user and switching levels. To meet QoS, they address issues like encryption protocols, security and "trust of information", dissimilar rates, fault profiles, latencies, burstiness, dynamic optimization of scarce resources, and fast handoff control.

In this paper [3], M. Purkhiabani and A. Salahi proposed an authentication and key agreement protocol for next generation Long Term Evolution/System Architecture Evolution (LTE/SAE) networks and compared its enhancements in contrast to the Universal Mobile Terrestrial System Authentication and Key Agreement (UMTS-AKA); they then offer a new improved protocol which increases the performance of the authentication procedure. In fact, the new proposed protocol, by sharing the authentication computation between the serving network's Mobility Management Entity (MME) and the Home Subscription Server (HSS) for execution of the authentication procedure, with a little increase in computation, and by joining the generated authentication vectors in both the MME and the HSS, can remove the aforementioned problems during the authentication process. The deduction for the proposed AKA, in contrast to the original AKA, is that the more MSs and the higher the service request rate, the more considerable the authentication load for the HSS.
In this paper [4], H.-M. Sun, Y.-H. Lin, and S.-M. Chen proposed a secure and fast handover scheme. Several fast authentication schemes have been proposed based on the pre-authentication concept in 802.11/WLAN networks. These schemes present different methods to enhance the efficiency and security of the re-authentication process. By using the pre-authentication notion, they suggest a pre-authentication system for WiMAX infrastructures in this paper. For flexibility and security, the proposed scheme is combined with the PKI architecture. It provides a safe and fast re-authentication procedure during macro-handover in 802.16/WiMAX networks.

In this paper [6], J. Donga, R. Curtmolab, and C. N. Rotarua proposed to identify two general frameworks (inter-flow and intra-flow) that encompass several network coding-based systems proposed in wireless networks. The systematic study of the mechanism of these frameworks reveals vulnerabilities to a wide variety of attacks, which may severely degrade system performance. They then recognize security goals and design challenges in achieving security for network coding systems. An adequate understanding of both the threats and challenges is essential to effectively design secure practical network coding systems. This paper should be viewed as a cautionary note pointing out the frailty of current network coding-based wireless systems and a general guideline in the effort of achieving security for network coding systems.

In this paper [9], K. Wang, F. Yang, Q. Zhang and Y. Xu proposed an analytical model for multi-hop IEEE 802.11 networks to calculate how much bandwidth can be utilized along a path without violating the QoS requirements of existing rate-controlled traffic flows. To analyze the path capacity, the notion of "free channel time" is introduced. It is the time allowed for a wireless link to transmit data. The model characterizes the unsaturated traffic condition to attain this goal. The model depicts the interaction between the newly injected traffic and the hidden traffic which would have an effect upon the new traffic.
3. DESCRIPTION OF THE PROPOSED SCHEME

WiMAX and LTE are Fourth-Generation (4G) wireless technologies that have well-defined quality of service (QoS) and security architectures. A number of users communicate with the Mobile Station through the Base Station. During the data transmission, the attackers can easily hack the data.
To overcome this problem, in the proposed scheme, the wireless nodes are initially authenticated by the home network and then authorized by the access node. The proposed scheme requires only a slightly higher bandwidth and computational overhead than the default standard scheme. The proposed distributed security architecture uses the ECDH algorithm in Layer 2 for 4G multihop wireless networks and provides strong security for handover users without affecting the QoS performance.

To establish hop-by-hop authentication and to reduce the computational overhead at the centralized node, a distributed security architecture is necessary for multihop networks. Further, the centralized security mode introduces a longer authorization and SA delay than that of the distributed mode, which affects the QoS performance in multihop vehicular networks. In LTE networks, the security architecture defined by the 3GPP standard is a distributed scheme. On the other hand, selection of the distributed security mode in WiMAX is optional, but data transfer using the tunnel mode is still an open issue. Hence, we proposed the distributed security architecture using ECDH for multihop WiMAX networks. For multihop (nth hop) connectivity using ECDH, the cell-edge RS broadcasts its public key, the ECDH global parameters, its RS-ID, and the system parameters in the DCD broadcast message.

For the proposed and other security schemes, measuring and analyzing both the security level and the QoS performance is mandatory for 4G vehicular networks, as they intend to provide high QoS and security for their customers. We therefore analyze the security and QoS performance of the proposed ECDH security for both WiMAX and LTE networks. The QoS performance metrics used in the experiments are subscriber station (SS) connectivity latency, throughput, frame loss, and latency. For the ECDH scheme, the handover latency is significantly reduced versus that of the default security scheme; thus, the ECDH scheme improves the QoS performance of the vehicular users.

3.1 Security Analysis

There are three security schemes considered for this analysis: 1) the default MAC-layer security defined by the standards; 2) IPSec security on top of the MAC-layer security; and 3) the proposed ECDH protocol at the MAC layer with the default security. First, we explain how the
proposed ECDH protocol overcomes the existing security threats in each category for both WiMAX and LTE networks. Later, we compare the performance of these three security schemes in Table VIII, where we enhanced our previous analysis.

3.1.1 Analysis on ECDH Protocol against Security Threats in the WiMAX Networks:
1) Ranging attacks: In our proposed
security
architecture,
RNG_REQ
and
RNG_RSP messages are encrypted by the
public key of the receiver. Hence, the
intermediate rogue node has difficulty in
processing the message in a short period,
and the system is free from DoS/Replay and
other attacks during initial ranging.
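A hedged sketch of how such a ranging exchange could be protected in practice: rather than raw public-key encryption, a common construction encrypts the message with a symmetric key derived from the ECDH exchange (e.g., the session key sketched earlier), using AES-GCM for authenticated encryption. The message contents and the associated-data label are hypothetical.

```python
# Hedged sketch: protecting a ranging message with an ECDH-derived key.
# AES-GCM gives confidentiality plus integrity, so a rogue intermediate
# node can neither read nor undetectably alter RNG_REQ/RNG_RSP.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def seal_ranging_message(session_key: bytes, payload: bytes) -> bytes:
    """Encrypt-and-authenticate a ranging payload (format is assumed)."""
    aead = AESGCM(session_key)          # 16/24/32-byte key from ECDH+HKDF
    nonce = os.urandom(12)              # fresh nonce per message
    return nonce + aead.encrypt(nonce, payload, b"RNG")

def open_ranging_message(session_key: bytes, wire: bytes) -> bytes:
    """Verify and decrypt; raises InvalidTag on any tampering."""
    aead = AESGCM(session_key)
    nonce, ciphertext = wire[:12], wire[12:]
    return aead.decrypt(nonce, ciphertext, b"RNG")

key = AESGCM.generate_key(bit_length=256)   # stand-in for the ECDH key
wire = seal_ranging_message(key, b"RNG_REQ: MS-ID=..., CDMA code=...")
print(open_ranging_message(key, wire))
```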
2) Power-saving attacks: Already, the
IEEE 802.16m standard provides an option
for encrypting the control messages in a
power-saving mode. For IEEE standards, the
network may use ECDH implementation to
overcome the power-saving attacks.
3) Handover attacks: The MOB NBRADV attacks do not exist in the IEEE
802.16 network because the BS can encrypt
the message. For other networks, the
messages are encrypted using ECDH to
overcome those security threats. For latency
issues during handover, two scenarios are
considered: 1) RS mobility (e.g., RS is
ISSN: 2348 – 8387
serving
BS
can
send
the
RS
authentication information including AK in
a secured manner, as defined in IEEE
802.16m.
4) Miscellaneous attacks: For downgrade
attack, if the level of security is low in the
MS basic capability request message, the BS
should ignore the message. For bandwidth
spoofing, the BS should allocate the
bandwidth only based on the provisioning of
the MS. These downgrade and bandwidth-spoofing attacks can be solved by using basic intelligence in the BS.
5) Multihop security threats: One of the
major issues in a multihop wireless network
is the introduction of rogue node in a
multihop path. In our distributed security
mode, once the joining node is authenticated
by the home network (AAA server), mutual
authentication takes place between the
joining node and the access node (RS or
BS). Hence, the new node identifies the
rogue node during the mutual authentication
step, and no other credential information is shared. Thus, the proposed solution avoids the introduction of the rogue node problem. For tunnel mode security support, the communication between the BS and the access RS is encrypted using the ECDH public key of the receiver. Hence, the network supports tunnel mode operation using the ECDH tunnel.

6) Other security threats: Other security threats such as attacks against WiMAX security, multicast/broadcast attacks, and mesh mode attacks do not exist in IEEE 802.16m networks. Otherwise, if the network uses the ECDH implementation, the control messages are encrypted. Hence, those security threats are avoided.

3.1.2 Analysis on ECDH Protocol against Security Threats in LTE Networks:

1) LTE system architecture security threats: Security threats such as injection, modification, eavesdropping attacks, HeNB physical intrusions, and rogue eNB/RN attacks still exist with the ECDH implementation.

2) LTE access procedure attacks: Similar to WiMAX networks, the intruder can introduce a DoS/Replay attack during the random-access process, as the messages are in plain text. In our proposed security architecture, the random-access Request message is encrypted by the public key of the eNB, and the response message is encrypted by the public key of the UE. Hence, the messages exchanged during the random-access process are encrypted, and the DoS/Replay attack is avoided. For IMSI water torture attacks, we suggest EAP-based authentication that is similar to WiMAX, where the Attach Request message is encrypted by home network shared secrets. For disclosure of the user's identity privacy, the Attach Request message is encrypted by the eNB's public key in the ECDH implementation. Hence, it is difficult for the attacker to decrypt the Attach Request message to know the IMSI. Thus, disclosure of the user's identity is avoided.

3) Handover attacks: Location tracking is possible by eavesdropping the CRNTI information in a handover command message. However, this attack is avoided with the proposed scheme, because the CRNTI information is now encrypted. Other security threats, lack of backward secrecy, and desynchronization attacks still exist in the ECDH implementation.

4) Miscellaneous attacks: If the attacker eavesdrops the CRNTI information in the random-access response or the handover command message, they can send a fake bandwidth request or false buffer status to allocate bandwidth unnecessarily. Using ECDH, the eNB encrypts the random-access response message using the UE's public key. Hence, the bandwidth-stealing attack is avoided. The lack of SQN synchronization is similar to the desynchronization attack and still exists in the ECDH implementation.

3.2 Analysis on ECDH Protocol against Pollution and Entropy Attacks in Multihop WiMAX/LTE Networks: Pollution and entropy attacks are the major security threats in multihop wireless networks when network coding is used for data transmissions. Since packets are unencrypted, attackers may introduce polluted or stale packets that lead to pollution and entropy attacks. In our approach, every RS authenticates the neighbor RSs and shares the digital signatures. Hence, the attackers have difficulty in introducing the pollution attack. For the entropy attack, the RS may introduce a time stamp field in the message header. Subsequently, the RS can verify the time stamp of a received packet against the older packets. If the time stamp is older, then the RS may drop the packet to avoid the entropy attacks.
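The time-stamp check described in Section 3.2 can be sketched in a few lines. The per-flow state and the packet fields below are illustrative assumptions; the text states only that the RS compares a received packet's time stamp against older packets and drops stale ones.

```python
# Hedged sketch of the RS-side entropy-attack filter from Section 3.2:
# drop a packet whose time stamp is not newer than what the RS has
# already seen for that flow (stale or replayed packets add no entropy).
from collections import defaultdict

class EntropyFilter:
    def __init__(self):
        # Latest time stamp accepted per (source, flow) pair.
        self.latest = defaultdict(lambda: float("-inf"))

    def accept(self, source: str, flow: int, timestamp: float) -> bool:
        """Return True to forward the packet, False to drop it."""
        if timestamp <= self.latest[(source, flow)]:
            return False            # stale/replayed: drop at the RS
        self.latest[(source, flow)] = timestamp
        return True

rs = EntropyFilter()
print(rs.accept("MS-1", 7, 100.0))  # True: first packet of the flow
print(rs.accept("MS-1", 7, 101.5))  # True: newer time stamp
print(rs.accept("MS-1", 7, 100.0))  # False: older time stamp, dropped
```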
4. EXPERIMENT AND RESULTS

4.1 Throughput performance

We measured the throughput performance of the system for both the default and the IPSec security schemes. The provisioning of the uplink and downlink wireless link for both the SSs in the AAA server is varied from 0 to 20 Mb/s. Using an IXIA traffic generator, traffic is transmitted for the total provisioned wireless capacity, and the received traffic is also noted. From the results, it is clear that the throughput for the IPSec security scheme is less than that for the MAC-layer security scheme. Initially, when the wireless link capacity is small, the corresponding payloads (1500-byte packets) in the traffic are small. Hence, the drop is negligible.

4.2 Frame loss performance

We measured the end-to-end frame loss performance with respect to the total link capacities of the two SSs. Initially, as the number of packets (payload) is small at low wireless link capacity, the frame loss is small (< 40) until the input traffic reaches the system capacity. The frame loss in the IPSec scheme increases as the link capacity increases. The frame loss increases almost linearly for the IPSec scheme between the
input traffic 8 Mb/s and 12 Mb/s. The packet
drop increases in both schemes when the
input traffic exceeds the practical system
capacity of 18.5 Mb/s.
4.3 Latency performance
[Bar chart (0-100% scale) comparing the existing system and the proposed system]
The delay experienced by the traffic in the
IPSec security scheme steadily increases
from 4 to 9 Mb/s. The delay for the IPSec
scheme is much higher than that for the
MAC security scheme when the wireless
link capacity reaches 10 Mb/s. After 10
Mb/s, the average delay experienced by the
IPSec is more than double when compared
with the default MAC security.
For the ECDH scheme, the handover latency
is significantly reduced versus that of the
default security scheme; thus, the ECDH
scheme improves the QoS performance of
the vehicular users. Consequently, we suggest the ECDH protocol for 4G multihop wireless networks; it is suitable for vehicular networks, since the proposed security scheme aids in fast authentication without compromising the QoS performance.

PERFORMANCE COMPARISON

Method             Throughput   Frame loss   Latency
Existing system    74%          80%          92%
Proposed system    93%          45%          37%
5. CONCLUSION
This paper, therefore, presented an integrated view with emphasis on Layer-2 and Layer-3 technologies for WiMAX and LTE security, which is useful for the research community. In addition, the performance of the proposed and other security schemes is analyzed using simulation and testbed implementation.
The existing work is a QoS-aware distributed security architecture using the elliptic curve Diffie–Hellman (ECDH) protocol, which has proven security strength and low overhead for 4G wireless networks. The proposed distributed security architecture uses the ECDH key exchange protocol. ECDH can establish a shared secret over an insecure channel at the highest security strength, but it has the limitations of high computational and communication overhead in addition to a lack of scalability. The proposed work therefore proposes an unconditionally secure and efficient SAMA. The main idea is that, for each message m to be released, the message sender, or the sending node, generates a source anonymous message authenticator for the message m. The generation is based on the MES scheme on elliptic curves. For a ring signature, each ring member is required to compute a forgery signature for all other members in the AS.

The QoS measurement using the testbed implementation and theoretical studies shows that the IPSec scheme provides strong security for the data, but not for the control messages. Consequently, we suggest the ECDH protocol for 4G multihop wireless networks; it is suitable for vehicular networks, since the proposed security scheme aids in fast authentication without compromising the QoS performance. The final conclusion is to provide hop-by-hop message authentication without the weakness of the built-in threshold of the polynomial-based scheme; we then proposed a hop-by-hop message authentication scheme based on the SAMA.
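The ring-signature idea behind the SAMA (one real response from the actual sender, simulated "forgery" responses for every other AS member, closed into a ring by a hash chain) can be illustrated with a toy construction. The sketch below is not the MES scheme on elliptic curves used by the paper; it is an AOS-style ring signature in a deliberately tiny Schnorr group (p = 23), shown only to make the signing and verification structure visible.

```python
# Toy AOS-style ring signature in a tiny Schnorr group (p=23, q=11).
# Demonstrates the SAMA idea only: the real signer closes the ring,
# every other member's response is a simulated forgery. NOT the paper's
# elliptic-curve MES scheme, and far too small to be secure.
import hashlib, random

p, q, g = 23, 11, 4          # g has prime order q in Z_p* (toy values)

def H(msg: bytes, point: int) -> int:
    return int(hashlib.sha256(msg + point.to_bytes(2, "big")).hexdigest(), 16) % q

def keygen():
    x = random.randrange(1, q)
    return x, pow(g, x, p)   # (private, public)

def ring_sign(msg, keys_pub, signer, x_signer):
    n = len(keys_pub)
    c, z = [0] * n, [0] * n
    a = random.randrange(1, q)
    c[(signer + 1) % n] = H(msg, pow(g, a, p))
    # Simulated ("forged") responses for all non-signers, in ring order.
    for k in range(1, n):
        i = (signer + k) % n
        z[i] = random.randrange(1, q)
        c[(i + 1) % n] = H(msg, pow(g, z[i], p) * pow(keys_pub[i], c[i], p) % p)
    z[signer] = (a - x_signer * c[signer]) % q   # close the ring
    return c[0], z

def ring_verify(msg, keys_pub, sig):
    c0, z = sig
    c = c0
    for i in range(len(keys_pub)):
        c = H(msg, pow(g, z[i], p) * pow(keys_pub[i], c, p) % p)
    return c == c0

priv, pub = zip(*[keygen() for _ in range(4)])   # 4-member ambiguity set
sig = ring_sign(b"m", list(pub), signer=2, x_signer=priv[2])
print(ring_verify(b"m", list(pub), sig))         # True, signer is hidden
```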
REFERENCES
[1] N. Seddigh, B. Nandy, and R. Makkar, "Security advances and challenges in 4G wireless networks," in Proc. 8th Annu. Conf. Privacy, Security, Trust, 2010, pp. 62-71.
[2] Muhammed Mustaqim, Khalid Khan, and Muhammed Usman, "LTE-advanced: Requirements and technical challenges for 4G cellular network," Journal of Emerging Trends in Computing and Information Sciences, vol. 3, no. 5, pp. 665-671, May 2012.
[3] M. Purkhiabani and A. Salahi, "Enhanced authentication and key agreement procedure of next generation evolved mobile networks," in Proc. 3rd Int. Conf. Commun. Softw. Netw., 2011, pp. 557-563.
[4] H.-M. Sun, Y.-H. Lin, and S.-M. Chen, "Secure and fast handover scheme based on pre-authentication method for 802.16/WiMAX," in Proc. IEEE Region 10 Conf., 2007, pp. 1-4.
[5] T. Shon and W. Choi, "An analysis of mobile WiMAX security: Vulnerabilities and solutions," in Lecture Notes in Computer Science, T. Enokido, L. Barolli, and M. Takizawa, Eds. Berlin, Germany: Springer-Verlag, 2007, pp. 88-97.
[6] J. Donga, R. Curtmolab, and C. N. Rotarua, "Secure network coding for wireless mesh networks: Threats, challenges, and directions," J. Comput. Commun., vol. 32, no. 17, pp. 1790-1801, Nov. 2009.
[7] L. Yi, K. Miao, and A. Liu, "A comparative study of WiMAX and LTE as the next generation mobile enterprise network," in Proc. 13th Int. Conf. Adv. Commun. Tech., 2011, pp. 654-658.
[8] H. Jin, L. Tu, G. Yang, and Y. Yang, "An improved mutual authentication scheme in multi-hop WiMax network," in Proc. Int. Conf. Comput. Elect. Eng., 2008, pp. 296-299.
[9] K. Wang, F. Yang, Q. Zhang, and Y. Xu, "Modeling path capacity in multi-hop IEEE 802.11 networks for QoS service," IEEE Trans. Wireless Commun., vol. 6, no. 2, pp. 738-749, Feb. 2007.
[10] T. Han, N. Zhang, K. Liu, B. Tang, and Y. Liu, "Analysis of mobile WiMAX security: Vulnerabilities and solutions," in Proc. 5th Int. Conf. Mobile Ad Hoc Sensor Syst., 2008, pp. 828-833.
Efficient Data Access in Disruption Tolerant
Network using Hint based Algorithm
Kaviya P., PG Scholar, Department of Computer Science and Engineering, Indra Ganesan College of Engineering, Trichy
Mrs. D. IndraDevi, Associate Professor, Department of Computer Science and Engineering, Indra Ganesan College of Engineering, Trichy
Abstract— Data access is an important issue in Disruption
Tolerant Networks (DTNs). To improve the performance of
data access, cooperative caching technique is used. However
due to the unpredictable node mobility in DTNs, traditional
caching schemes cannot be directly applied. A hint based
decentralized algorithm is used for cooperative caching which
allow the nodes to perform functions in a decentralized
fashion. Cache consistency and storage management features
are integrated with the system. Cache consistency is
maintained by using the cache replacement policy. The basic
idea is to intentionally cache data at a set of network central
locations (NCLs), which can be easily accessed by other nodes
in the network. The NCL selection is based on a probabilistic
selection metric and it coordinates multiple caching nodes to
optimize the tradeoff between data accessibility and caching
overhead. Extensive trace-driven simulations show that the
approach significantly improves the data access performance
compared to existing schemes.
Keywords—disruption tolerant networks; network central location; cooperative caching
I. INTRODUCTION
Disruption tolerant networks (DTNs) consist of mobile
nodes that contact each other opportunistically. Due to
unpredictable node mobility, there is no end-to-end
connection between mobile nodes, which greatly impairs
the performance of data access. In such networks, node mobility is exploited to let mobile nodes carry data as relays and forward data opportunistically when contacting other nodes. The subsequent difficulty of maintaining end-to-end communication links makes it necessary to use "carry-and-forward" methods for data transmission. Such networks include groups of individuals moving in disaster recovery areas, military battlefields, or urban sensing applications. The key problem is, therefore, how to determine the appropriate relay selection strategy. This approach suffers from the difficulty of maintaining end-to-end communication links; it requires a number of retransmissions, and cache consistency is not maintained.
If too much data is cached at a node, it will be difficult for
the node to send all the data to the requesters during the
contact period thus wasting storage space. Therefore it is a
challenge to determine where to cache and how much to
cache in DTNs.
A common technique used to improve data access
performance is caching such that to cache data at
appropriate network locations based on query history, so
that queries in the future can be responded with less delay.
Client caches filter application I/O requests to avoid
network and server traffic, while server caches filter client
cache misses to reduce disk accesses. Another level of
storage hierarchy is added, that allows a client to access
blocks cached by other clients. This technique is known
as cooperative caching and it reduces the load on the server
by allowing some local client cache misses to be handled by
other clients.
The cooperative cache differs from the other levels of the
storage hierarchy in that it is distributed across the clients
and it therefore shares the same physical memory as the
local caches of the clients. A local client cache is controlled
by the client, and server cache is controlled by the server,
but it is not clear who should control the cooperative cache.
For the cooperative cache to be effective, the clients must
somehow coordinate their actions. Data caching has been
introduced as a techniques to reduce the data traffic and
access latency.
By caching data the data request can be served from the
mobile clients without sending it to the data source each
time. It is a major technique used in the web to reduce the
access latency. In web, caching is implemented at various
points in the network. At the top level web server uses
caching, and then comes the proxy server cache and finally
client uses a cache in the browser.
The present work proposes a scheme to address the
challenges of where to cache and how much data to cache.
It efficiently supports the caching in DTNs and
intentionally cache data at the network central location
(NCLs). The NCL is represented by a central node which
has high popularity in the network and is prioritized for
caching data. Due to the limited caching buffer of central
nodes, multiple nodes near a central node may be involved
for caching and the popular data will be cached near a
central node. The selected NCLs achieve high chances for
prompt response to user queries with low overhead in
network storage and transmission. The data access scheme
will probabilistically coordinate multiple caching nodes for
responding to user queries. The cache replacement scheme
is used to adjust cache locations based on query history.
In order to ensure valid data access, the cache
consistency must be maintained properly. Many existing
cache consistency maintenance algorithms are stateless, in
which the data source node is unaware of the cache status at
each caching node. Even though stateless algorithms do not
pay the cost for cache status maintenance, they mainly rely
on broadcast mechanisms to propagate the data updates,
thus lacking cost-effectiveness and scalability. Besides
stateless algorithms, stateful algorithms can significantly
reduce the consistency maintenance cost by maintaining
status of the cached data and selectively propagating the
data updates. Stateful algorithms are more effective in
MANETs, mainly due to the bandwidth-constrained,
unstable and multi-hop wireless communication.
A Stateful cache consistency algorithm called Greedy
algorithm is proposed. In Greedy algorithm, the data source
node maintains the Time-to-Refresh value and the cache
query rate associated with each cache copy. Thus, the data
source node propagates the source data update only to
caching nodes which are in great need of the update. It
employs the efficient strategy to propagate the update
among the selected caching nodes. Cooperative caching,
which allows the sharing and coordination of cached data
among multiple nodes, can be used to improve the
performance of data access in ad hoc networks. When
caching is used, data from the server is replicated on the
caching nodes. Since a node may return the cached data, or
modify the route and forward a request to a caching node, it
is very important that the nodes do not maliciously modify
data, drop or forward the request to the wrong destination.
Caching in wireless environment has unique constraints
like scarce bandwidth, limited power supply, high mobility
and limited cache space. Due to the space limitation, the
mobile nodes can store only a subset of the frequently
accessed data. The availability of the data in local cache can
significantly improve the performance since it overcomes
the constraints in wireless environment. A good
replacement mechanism is needed to distinguish the items
to be kept in cache and that is to be removed when the
cache is full. While it would be possible to pick a random
object to replace when cache is full, system performance
will be better if we choose an object that is not heavily
used. If a heavily used data item is removed it will probably
have to be brought back quickly, resulting in extra
overhead.
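A minimal sketch of the selective propagation in the Greedy algorithm described above, assuming a simple rule for deciding which copies are "in great need" of an update (expired TTR or a query rate above a cutoff); the text does not specify the exact criterion, so the threshold test here is an illustrative assumption.

```python
# Hedged sketch of the stateful "Greedy" consistency idea: the data
# source keeps per-copy state (TTR, query rate) and pushes an update
# only to caching nodes judged to need it. The need test is an assumed
# illustration; the text says only "in great need of the update".
import time

class GreedySource:
    def __init__(self, query_rate_cutoff: float = 1.0):
        self.copies = {}             # node_id -> {"ttr": expiry, "rate": q/s}
        self.cutoff = query_rate_cutoff

    def register_copy(self, node_id: str, ttr_seconds: float, query_rate: float):
        self.copies[node_id] = {"ttr": time.time() + ttr_seconds,
                                "rate": query_rate}

    def nodes_needing_update(self):
        now = time.time()
        return [n for n, s in self.copies.items()
                if s["ttr"] <= now or s["rate"] >= self.cutoff]

    def propagate(self, update: bytes, send):
        for node in self.nodes_needing_update():
            send(node, update)       # selective push, not broadcast

src = GreedySource()
src.register_copy("cacheA", ttr_seconds=0.0, query_rate=0.2)   # TTR expired
src.register_copy("cacheB", ttr_seconds=60.0, query_rate=5.0)  # hot copy
src.register_copy("cacheC", ttr_seconds=60.0, query_rate=0.1)  # skipped
src.propagate(b"v2", send=lambda n, u: print("update ->", n))
```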
II. RELATED WORK
Research on data forwarding in DTNs originates from
Epidemic routing which floods the entire network. Some
later studies focus on proposing efficient relay selection
metrics to approach the performance of Epidemic routing
ISSN: 2348 – 8387
with lower forwarding cost, based on prediction of node
contacts in the future. Some schemes do such prediction
based on their mobility patterns, which are characterized by
Kalman filter or semi-Markov chains. In some other
schemes, node contact pattern is exploited as abstraction of
node mobility pattern for better prediction accuracy, based
on the experimental and theoretical analysis of the node
contact characteristics. The social network properties of
node contact patterns, such as the centrality and community
structures, have also been exploited for relay selection
in recent social-based data forwarding schemes.
The aforementioned metrics for relay selection can be
applied to various forwarding strategies, which differ in the
number of data copies created in the network. While the
most conservative strategy always keeps a single data copy
and Spray-and-Wait holds a fixed number of data copies,
most schemes dynamically determine the number of data
copies. In Compare-and-Forward, a relay forwards data to
another node whose metric value is higher than itself.
Delegation forwarding reduces forwarding cost by only
forwarding data to nodes with the highest metric.
Data access in DTNs, on the other hand, can be provided
in various ways. Data can be disseminated to appropriate
users based on their interest profiles. Publish/ subscribe
systems were used for data dissemination, where social
community structures are usually exploited to determine
broker nodes. In other schemes without brokers, data items
are grouped into predefined channels, and are disseminated
based on users’ subscriptions to these channels.
Caching is another way to provide data access.
Cooperative caching in wireless ad hoc networks was
studied in, in which each node caches pass-by data based on
data popularity, so that queries in the future can be
responded with less delay. Caching locations are selected
incidentally among all the network nodes. Some research
efforts have been made for caching in DTNs, but they only
improve data accessibility from infrastructure networks such as WiFi access points (APs). Peer-to-peer data sharing and access among mobile users are generally neglected. Distributed determination of caching policies for minimizing data access delay has been studied in DTNs,
assuming simplified network conditions.
III. OVERVIEW
A. Motivation
A requester queries the network for data access, and the
data source or caching nodes reply to the requester with
data after having received the query. The key difference
between caching strategies in wireless ad hoc networks and
DTNs is illustrated in Fig. 1. Note that each node has
limited space for caching. Otherwise, data can be cached
everywhere, and it is trivial to design different caching
strategies. The design of caching strategy in wireless ad hoc
networks benefits from the assumption of existing end-to-end paths among mobile nodes, and the path from a
requester to the data source remains unchanged during data
access in most cases. Such assumption enables any
intermediate node on the path to cache the pass-by data. For
example, in Fig. 1a, C forwards all the three queries to data
sources A and B, and also forwards data d1 and d2 to the
requesters. In case of limited cache space, C caches the
more popular data d1 based on query history, and similarly
data d2 are cached at node K. In general, any node could
cache the pass-by data incidentally.
However, the effectiveness of such an incidental caching strategy is seriously impaired in DTNs, which do not assume any persistent network connectivity. Since data are forwarded via opportunistic contacts, the query and replied data may take different routes, and it is difficult for nodes to collect the information about query history and make caching decisions. For example, in Fig. 1b, after having forwarded query q2 to A, node C loses its connection to G, and cannot cache data d1 replied to requester E. Node H, which forwards the replied data to E, does not cache the pass-by data d1 either, because it did not record query q2 and considers d1 less popular. In this case, d1 will be cached at node G, and hence needs longer time to be replied to the requester.

Fig. 1: Caching strategy in different network environments

The basic solution to improve caching performance in DTNs is to restrain the scope of nodes being involved for caching. Instead of being incidentally cached "anywhere," data are intentionally cached only at specific nodes. These nodes are carefully selected to ensure data accessibility, and constraining the scope of caching locations reduces the complexity of maintaining query history and making caching decisions.

IV. NCL SELECTION

When the DTN is activated the nodes will be generated; after generating all nodes, the NCLs will be selected from the network. A node is selected using the probabilistic selection metric. The selected NCLs achieve high chances for prompt response to user queries with low overhead in network storage and transmission. After that, every node will send its generated data to an NCL, and the NCL will receive the data and store it in its cache memory. The opportunistic path weight is used by the central node as the relay selection metric for data forwarding. Instead of being incidentally cached anywhere, data are intentionally cached only at the specific nodes called NCLs. These nodes are carefully selected to ensure data accessibility, and constraining the scope of caching locations reduces the complexity of maintaining query history and making caching decisions. The push and pull processes conjoin at the NCL node. The push process means that whenever the nodes generate data, the data will be stored at the NCL. The pull process describes that whenever a node requests a particular data item, it sends the request to the NCL, which then checks its cache memory and sends a response to the requesting node. If the data is not available, it forwards the request to the nearest node.

An r-hop opportunistic path P_AB = (N_P, E_P) between nodes A and B consists of a node set N_P = (A, N_1, N_2, ..., N_{r-1}, B) and an edge set E_P = (e_1, e_2, ..., e_r) with edge weights (p_1, p_2, ..., p_r). The path weight p_AB(T) is the probability that data are opportunistically transmitted from A to B along P_AB within time T. Assuming the inter-contact time on each edge e_k is exponentially distributed with contact rate \lambda_k, the path weight is written as

$$p_{AB}(T) = \int_0^T p_{AB}(t)\,dt = \sum_{k=1}^{r} \Big(\prod_{j \neq k} \frac{\lambda_j}{\lambda_j - \lambda_k}\Big)\big(1 - e^{-\lambda_k T}\big),$$
and the data transmission delay between two nodes A and
B, indicated by the random variable Y , is measured by the
weight of the shortest opportunistic path between the two
nodes. In practice, mobile nodes maintain the information
about shortest opportunistic paths between each other in a
distance-vector manner when they come into contact.
The metric C_i for a node i to be selected as a central node to represent an NCL is then defined as

$$C_i = \frac{1}{N-1}\sum_{j=1}^{N} p_{ji}(T),$$

where N is the number of nodes in the network and we define p_{ii}(T) = 0. This metric indicates the average probability that data can be transmitted from a
random node to node i within time T. In general, network
information about the pairwise node contact rates and
shortest opportunistic paths among mobile nodes are
required to calculate the metric values of mobile nodes
according to
the above equation. However, the
maintenance of such network information is expensive in
DTNs due to the lack of persistent end-to-end network
connectivity. As a result, we will first focus on selecting
NCLs with the assumption of complete network
information from the global perspective.
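Under the exponential inter-contact model used for the path weight above, both formulas can be evaluated directly. The sketch below computes the closed-form p_P(T) for one path (assuming distinct contact rates on the edges) and the NCL metric C_i from a precomputed matrix of shortest-path weights; the toy matrix and the complete-information assumption mirror the global-knowledge setting just described.

```python
# Sketch of the NCL selection math: path weight p_P(T) for a path whose
# edges have (distinct) exponential contact rates, and the metric C_i as
# the average shortest-path weight from every other node (p_ii(T) = 0).
import math

def path_weight(rates, T):
    """P(sum of independent Exp(rate_k) delays <= T); rates must be distinct."""
    total = 0.0
    for k, lam_k in enumerate(rates):
        coeff = 1.0
        for j, lam_j in enumerate(rates):
            if j != k:
                coeff *= lam_j / (lam_j - lam_k)
        total += coeff * (1.0 - math.exp(-lam_k * T))
    return total

def ncl_metric(i, weights):
    """C_i: average probability a random node reaches i within T.
    weights[j][i] is the precomputed shortest opportunistic path
    weight p_ji(T) (complete network information is assumed)."""
    n = len(weights)
    return sum(weights[j][i] for j in range(n) if j != i) / (n - 1)

print(path_weight([0.5, 0.8, 1.2], T=10.0))    # 3-hop path example
W = [[0.0, 0.7, 0.4],
     [0.6, 0.0, 0.5],
     [0.3, 0.9, 0.0]]                           # toy p_ji(T) matrix
best = max(range(3), key=lambda i: ncl_metric(i, W))
print("central node:", best)
```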
V. CACHING
A common technique used to improve data access
performance is caching. Cache the data at appropriate
network locations based on query history, so that queries in
the future can be responded with less delay. Although
cooperative caching has been studied for both web-based
applications and wireless ad hoc networks to allow sharing
and coordination among multiple caching nodes, it is
difficult to be realized in DTNs due to the lack of persistent
network connectivity. When a data source generates data, it pushes the data to central nodes of NCLs, which are prioritized to cache data. One copy of the data is cached at each NCL. If the caching buffer of a central node is full, another node near the central node will cache the data. Such decisions are automatically made based on the buffer conditions of the nodes involved in the pushing process. A requester multicasts a query to the central nodes of NCLs to pull data, and a central node forwards the query to the caching nodes. Multiple data copies are returned to the requester, and we optimize the tradeoff between data accessibility and transmission overhead by controlling the number of returned data copies. Utility-based cache replacement is conducted whenever two caching nodes contact, and it ensures that popular data are cached nearer to central nodes. We generally cache more copies of popular data to optimize the cumulative data access delay. We also probabilistically cache less popular data to ensure the overall data accessibility.
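A compact sketch of the push and pull processes at one NCL: a copy is cached at the central node when its buffer permits and otherwise spills to the nearest caching node with free space, and a query is answered by the first node around the NCL that holds the data. Buffer sizes and the neighbor ordering are illustrative assumptions.

```python
# Hedged sketch of the push/pull processes at one NCL: cache at the
# central node when its buffer permits, otherwise spill to the nearest
# caching node with free space (sizes and ordering are assumptions).
class CachingNode:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.store = name, capacity, {}

    def try_cache(self, key, data) -> bool:
        if len(self.store) < self.capacity:
            self.store[key] = data
            return True
        return False                      # buffer full, caller spills over

def push_to_ncl(central, neighbors_by_distance, key, data):
    """Cache one copy of `data` at this NCL, nearest node first."""
    for node in [central] + neighbors_by_distance:
        if node.try_cache(key, data):
            return node.name
    return None                           # whole NCL is full

def pull_from_ncl(central, neighbors_by_distance, key):
    """Answer a query from the NCL, forwarding outwards on a miss."""
    for node in [central] + neighbors_by_distance:
        if key in node.store:
            return node.store[key]
    return None                           # forward the query elsewhere

c = CachingNode("C1", capacity=1)
near = [CachingNode("A", 2), CachingNode("B", 2)]
print(push_to_ncl(c, near, "d1", b"..."))  # cached at C1
print(push_to_ncl(c, near, "d2", b"..."))  # C1 full -> spills to A
print(pull_from_ncl(c, near, "d2"))        # served by A
```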
VI. CACHE DISCOVERY
A cache discovery algorithm is used that efficiently discovers and delivers requested data items from the neighbour nodes and is able to decide which data items can be cached for future use. In cooperative caching, this decision is taken not only on behalf of the caching node but also based on the other nodes' needs. Each node maintains a Caching Information Table (CIT). When an NCL node caches a new data item or updates its CIT, it broadcasts these updates to all its neighbours.
When a data item d is requested by a node A, the node first checks whether d_available is TRUE or FALSE to see whether the data is locally available. If it is FALSE, then the node checks d_node to see whether the data item is cached by a node in its neighbourhood. If a matching entry is found, then the request is redirected to that node; otherwise, the request is forwarded towards the data server. However, the nodes that are lying on the way to the data center check their own local cache and the d_node entry in their CIT. If any node has the data in its local cache, then the data is sent to the requester node and request forwarding is stopped; if the data entry is matched in the CIT, then the node redirects the request to that node.
The hint-based approach is to let the node itself perform the lookup, using its own hints about the locations of blocks within the cooperative cache. These hints allow the node to access the cooperative cache directly, avoiding the need to contact the NCL node on every local cache miss. The two principal functions of a hint-based system are hint maintenance and the lookup mechanism. The hints must be maintained so that they are reasonably accurate; otherwise, the overhead of looking for blocks using incorrect hints will be prohibitive. Hints are used to locate a block in the cooperative cache, but the system must be able to eventually locate a copy of the block should the hints prove wrong.
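The two principal functions named above, hint maintenance and lookup, can be sketched as follows: the node consults its local hint table first, falls back to the NCL node only when the hint is missing or proves wrong, and repairs the hint afterwards. The table layout and fallback order are illustrative assumptions.

```python
# Hedged sketch of hint-based cache discovery: try the local hint first,
# fall back to the authoritative NCL node when the hint is wrong, and
# repair the hint afterwards so future lookups stay cheap.
class HintClient:
    def __init__(self, ncl_lookup):
        self.hints = {}              # block_id -> node believed to hold it
        self.ncl_lookup = ncl_lookup # fallback: ask the NCL node (slower)

    def lookup(self, block_id, fetch_from):
        node = self.hints.get(block_id)
        if node is not None:
            data = fetch_from(node, block_id)
            if data is not None:     # hint was accurate: no NCL contact
                return data
        node = self.ncl_lookup(block_id)   # hint missing or stale
        self.hints[block_id] = node        # hint maintenance: repair it
        return fetch_from(node, block_id) if node else None

# Toy network: two caching nodes and an NCL directory.
stores = {"A": {"blk7": b"x"}, "B": {}}
fetch = lambda node, b: stores[node].get(b)
client = HintClient(ncl_lookup=lambda b: "A" if b == "blk7" else None)
client.hints["blk7"] = "B"                # stale hint on purpose
print(client.lookup("blk7", fetch))       # falls back, repairs hint
print(client.hints["blk7"])               # now points at A
```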
VII. CACHE REPLACEMENT
A commonly used criterion for evaluating a replacement policy is its hit ratio: the frequency with which it finds a page in the cache. Of course, the replacement policy's implementation overhead should not exceed the anticipated time savings. Discarding the least-recently-used page is the policy of choice in cache management. Until recently, attempts to outperform LRU in practice had not succeeded because of overhead issues and the need to pretune parameters. The adaptive replacement cache (ARC) is a self-tuning, low-overhead algorithm that responds online to changing access patterns. ARC continually balances between the recency and frequency features of the workload, demonstrating that adaptation eliminates the need for the workload-specific pretuning that plagued many previous proposals to improve LRU. ARC's online adaptation will likely have benefits for real-life workloads due to their richness and variability with time. These workloads can contain long sequential I/Os or moving hot spots, changing frequency and scale of temporal locality, and fluctuating between stable, repeating access patterns and patterns with transient clustered references. Like LRU, ARC is easy to implement, and its running time per request is essentially independent of the cache size.
ARC maintains two LRU page lists: L1 and L2. L1 maintains pages that have been seen only once, recently, while L2 maintains pages that have been seen at least twice, recently. The algorithm actually caches only a fraction of the pages on these lists. The pages that have been seen twice within a short time may be thought of as having high frequency or as having longer-term reuse potential. T1 contains the top, or most-recent, pages in L1, and B1 contains the bottom, or least-recent, pages in L1. If either |T1| > p, or |T1| = p and x ∈ B2, replace the LRU page in T1. If either |T1| < p, or |T1| = p and x ∈ B1, replace the LRU page in T2.
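The rule just quoted is the REPLACE subroutine of ARC. The sketch below implements that subroutine together with ARC's four request cases and the online adaptation of the target size p, following the published ARC pseudocode (Megiddo and Modha) rather than anything specific to this paper; it tracks page IDs only, and the adaptation step uses an integer approximation of the ratio.

```python
# Sketch of ARC following the published pseudocode (Megiddo & Modha).
# T1/T2 hold cached pages; B1/B2 are ghost lists of recently evicted
# page IDs; p adapts the target size of T1. Page IDs only, no payloads.
from collections import OrderedDict

class ARC:
    def __init__(self, c):
        self.c, self.p = c, 0
        self.T1, self.B1 = OrderedDict(), OrderedDict()
        self.T2, self.B2 = OrderedDict(), OrderedDict()

    def _replace(self, x):
        # The rule quoted in the text: evict from T1 if |T1| > p, or if
        # |T1| == p and x is a B2 ghost hit; otherwise evict from T2.
        if self.T1 and (len(self.T1) > self.p or
                        (x in self.B2 and len(self.T1) == self.p)):
            old, _ = self.T1.popitem(last=False)   # LRU of T1 -> B1
            self.B1[old] = None
        else:
            old, _ = self.T2.popitem(last=False)   # LRU of T2 -> B2
            self.B2[old] = None

    def request(self, x):
        if x in self.T1 or x in self.T2:           # Case I: cache hit
            (self.T1 if x in self.T1 else self.T2).pop(x)
            self.T2[x] = None                      # promote to MRU of T2
            return True
        if x in self.B1:                           # Case II: B1 ghost hit
            self.p = min(self.c, self.p +
                         max(len(self.B2) // max(len(self.B1), 1), 1))
            self._replace(x); self.B1.pop(x); self.T2[x] = None
            return False
        if x in self.B2:                           # Case III: B2 ghost hit
            self.p = max(0, self.p -
                         max(len(self.B1) // max(len(self.B2), 1), 1))
            self._replace(x); self.B2.pop(x); self.T2[x] = None
            return False
        # Case IV: complete miss.
        if len(self.T1) + len(self.B1) == self.c:
            if len(self.T1) < self.c:
                self.B1.popitem(last=False); self._replace(x)
            else:
                self.T1.popitem(last=False)        # B1 empty, discard LRU
        else:
            total = len(self.T1) + len(self.T2) + len(self.B1) + len(self.B2)
            if total >= self.c:
                if total == 2 * self.c:
                    self.B2.popitem(last=False)
                self._replace(x)
        self.T1[x] = None
        return False

cache = ARC(c=4)
hits = sum(cache.request(x) for x in [1, 2, 3, 1, 1, 4, 5, 2, 6, 2])
print("hits:", hits)
```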
VIII. NCL LOAD BALANCING
When a central node fails or its local resources are depleted, another node is selected as a new central node. Intuitively, the new central node should be the one with the highest NCL selection metric value among the current non-central nodes in the network. When the local resources of central node C1 are depleted, its functionality is taken over by C3. Since C3 may be far away from C1, the queries broadcasted from C3 may take a long time to reach the caching node A, and hence reduce the probability that the requester R receives data from A on time. The distance between the new central node and C1 should also be taken into account. More specifically, with respect to the original central node j, we define the metric for a node i to be selected as the new central node as

$$C_i^{(j)} = C_i \cdot p_{ij}(T).$$
After a new central node is selected, the data cached at
the NCL represented by the original central node needs to
be adjusted correspondingly, so as to optimize the caching
performance. After the functionality of central node C1 has
been migrated to C3, the nodes A, B, and C near C1 are not
considered as good locations for caching data anymore.
Instead, the data cached at these nodes needs to be moved
to other nodes near C3. This movement is achieved via
cache replacement when caching nodes opportunistically
contact each other. Each caching node at the original NCL
recalculates the utilities of its cached data items with
respect to the newly selected central node. In general, these
data utilities will be reduced due to the changes of central
nodes, and this reduction moves the cached data to the
appropriate caching locations that are nearer to the newly
selected central node. Changes in central nodes and
subsequent adjustment of caching locations inevitably
affect caching performance.
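Assuming the reconstructed form C_i^(j) = C_i · p_ij(T) above, selecting the replacement central node is a one-line argmax over the candidates; the metric values and path-weight matrix below are toy numbers.

```python
# Sketch of new-central-node selection when central node j fails: score
# each candidate i by its NCL metric weighted by its proximity (path
# weight) to the failed node j, per C_i^(j) = C_i * p_ij(T).
def new_central_node(j, metrics, weights):
    """metrics[i] is C_i; weights[i][j] is p_ij(T); j is the failed node."""
    candidates = [i for i in range(len(metrics)) if i != j]
    return max(candidates, key=lambda i: metrics[i] * weights[i][j])

C = [0.0, 0.52, 0.47, 0.61]            # toy NCL metric values
W = [[1.0, 0.5, 0.2, 0.4],
     [0.5, 1.0, 0.3, 0.9],
     [0.2, 0.3, 1.0, 0.1],
     [0.4, 0.9, 0.1, 1.0]]             # toy p_ij(T) matrix
print("replacement for node 0:", new_central_node(0, C, W))
```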
IX. CONCLUSION
Cooperative caching is a technique that allows clients to access blocks stored in the memory of other clients. This enables some of the local cache misses to be handled by other clients, offloading the server and improving the performance of the system. However, cooperative caching requires some level of coordination between the clients to maximize the overall system performance. The proposed method allows clients to make local decisions based on hints, which performs better than the previous algorithms.

REFERENCES
[1] M. Hefeeda and B. Noorizadeh, "On the Benefits of Cooperative Proxy Caching for Peer-to-Peer Traffic," IEEE Trans. Parallel and Distributed Systems, vol. 21, no. 7, pp. 998-1010, July 2010.
[2] P. Hui, J. Crowcroft, and E. Yoneki, "Bubble Rap: Social-Based Forwarding in Delay Tolerant Networks," Proc. ACM MobiHoc, 2008.
[3] S. Ioannidis, L. Massoulie, and A. Chaintreau, "Distributed Caching over Heterogeneous Mobile Networks," Proc. ACM SIGMETRICS Int'l Conf. Measurement and Modeling of Computer Systems, pp. 311-322, 2010.
[4] F. Li and J. Wu, "MOPS: Providing Content-Based Service in Disruption-Tolerant Networks," Proc. Int'l Conf. Distributed Computing Systems (ICDCS), pp. 526-533, 2009.
[5] T.K.R. Nkwe and M.K. Denko, "Self-Optimizing Cooperative Caching in Autonomic Wireless Mesh Networks," Proc. IEEE Symp. Computers and Comm. (ISCC), 2009.
[6] M.J. Pitkanen and J. Ott, "Redundancy and Distributed Caching in Mobile DTNs," Proc. ACM/IEEE Second Workshop Mobility in the Evolving Internet Architecture (MobiArch), 2007.
[7] R.K. Ravindra Raju, B. Santha Kumar, and Nagaraju Mamillapally, "Performance Evaluation of CLIR, LDIS and LDCC Cooperative Caching Schemes Based on Heuristic Algorithms," International Journal of Engineering Research & Technology (IJERT), vol. 2, no. 5, May 2013.
[8] Wei Gao, Arun Iyengar, and Mudhakar Srivatsa, "Cooperative Caching for Efficient Data Access in Disruption Tolerant Networks," Network Science CTA under grant W911NF-09-2-0053, 2014.
[9] L. Yin and G. Cao, "Supporting Cooperative Caching in Ad Hoc Networks," IEEE Trans. Mobile Computing, vol. 5, no. 1, pp. 77-89, Jan. 2006.
[10] J. Zhao, P. Zhang, G. Cao, and C. Das, "Cooperative Caching in Wireless P2P Networks: Design, Implementation, and Evaluation," IEEE Trans. Parallel & Distributed Systems, vol. 21, no. 2, pp. 229-241, Feb. 2010.
MULTIPATH HYPERMEDIA CASCADE ROUTING PROTOCOL OVER
THE HYBRID VEHICULAR NETWORK
Sashikumar S 1, Venkatesan R 2
P.G. Student, Department of Computer Science and Engineering, M.I.E.T Engineering College, Trichy, India 1
Assistant Professor, Department of Computer Science and Engineering, M.I.E.T Engineering College, Trichy, India 2
Abstract:
A VANET is a technology that uses moving vehicles as nodes in a network to create a mobile network. In many applications, the nodes that are closer to the source and destination are overburdened with a massive traffic load, as the data from the entire area are forwarded through them to reach the sink; coverage problems are therefore among the most important problems in VANETs. This paper addresses a problem of vehicular networks: the bandwidth of the 3G/3.5G network over moving vehicular networks is unstable and insufficient, and the quality of the requested video will also be poor. Building on the existing k-hop cooperative video streaming protocol using H.264/SVC over hybrid vehicular networks, which consist of a 3G/3.5G cellular network and Dedicated Short-Range Communications in an ad-hoc network, and to achieve smooth video playback over the DSRC-based ad-hoc network, this work proposes a streaming task assignment scheme that schedules the streaming task to each member over the dynamic vehicular network, and packet forwarding strategies that decide the forwarding sequence of the buffered video data to the requesting member through a hop-by-hop process. The proposed work examines the issues of multi-path routing protocols in ad-hoc networks. Multi-path routing protocols allow the establishment of multiple paths between a single source and a single destination node. They help multimedia streaming applications to perform error control and resource allocation correctly, and they can accurately differentiate packet losses by detecting the network states.
Keywords: Dedicated Short-Range Communications (DSRC), Scalable Video coding (SVC), Vehicular
ad-hoc networks (VANET), Multipath Routing Protocols.
I. Introduction
A vehicular ad hoc network uses cars as mobile nodes in an ad hoc manner to create a mobile network. A VANET turns every participating car into a wireless router or node, allowing cars within approximately 100 to 300 meters of each other to connect and, in turn, create a network with a wide range. As cars fall out of the signal range and drop out of the network, other cars can join in, connecting vehicles to one another so that a mobile Internet is created.
The main characteristic of a VANET is the absence of infrastructure: there are no access points or base stations as in WiFi, WiMAX, GSM or UMTS networks. Communication between nodes that are beyond each other's radio transmission range is carried out over multiple hops with the contribution of intermediate nodes. Moreover, the network topology can change dynamically as nodes move or become inoperative. On the other hand, the wireless medium, the absence of infrastructure and multi-hop routing make these networks potential targets of diverse types of attacks, ranging from simple passive eavesdropping of messages to active interference involving the creation, modification and destruction of messages.
A Vehicular Ad Hoc Network (VANET) is the most important component of the Intelligent Transportation System (ITS), in which vehicles are equipped with short-range and medium-range wireless communication. Two kinds of communication are assumed in a VANET: Vehicle-to-Vehicle and Vehicle-to-roadside units, where the roadside units might be, for example, cellular base stations. One of the most important challenges in a VANET is easy to illustrate. Suppose that at midnight in a rural area, a person in a vehicle has a very important data packet (e.g., the detection of an accident) which should be forwarded immediately to the persons in the following vehicles. The probability of low vehicle density in rural areas at midnight is very high. Consequently, in this situation the packet may be lost due to the lack of other vehicles to receive and rebroadcast it, while the arrival of the following vehicles in the accident area is unavoidable.
Multimedia services have grown thanks to mature multimedia processing and wireless technologies such as the H.264/SVC codec. If one of the vehicles in a fleet wants to request a video stream from the Internet, it can download video data using the 3G/3.5G cellular network. Since the bandwidth of the 3G/3.5G network over moving vehicular networks is unstable and insufficient, the quality of the requested video stream may not be good enough. Even with a 4G network, the bandwidth may still not be sufficient, for two reasons. First, other applications may utilize the 4G network simultaneously. Second, the moving behavior of a vehicle, e.g., moving at high speed or around the coverage boundary of a base station, causes the 4G bandwidth to decay. In order to increase the video quality along the travelling path, a person in one vehicle can ask persons in other vehicles belonging to the same fleet to download video data using their spare 3G/3.5G bandwidth. Once the other vehicles download video data from the Internet, they forward the downloaded video data to the requesting vehicle through ad-hoc transmission among vehicles, for which Dedicated Short-Range Communications (DSRC) is used.
In the existing CVS protocol, a YUV video file is encoded into one base layer and three enhancement layers using H.264/SVC. Thereafter, the encoded bit stream is extracted into a trace file that records the corresponding information of each extracted NAL unit, and the trace file is fed to NS-2 to simulate the CVS application. Finally, the simulation results are produced by analyzing a trace file that records the related information. In the NS-2 simulation environment, a highway scenario is constructed to simulate the CVS protocol. In this paper, each member is willing to share its 3G/3.5G bandwidth, which is randomly chosen between 50 Kbps and 150 Kbps, and there are some DSRC contending vehicles in the simulation process. The proposed scheme can estimate the assignment interval adaptively, and the playback priority first (PPF) strategy has the best performance [3] for k-hop video forwarding over the hybrid vehicular networks. The three roles in the existing CVS are (1) requester, (2) forwarder, and (3) helper. Our proposed system considers multiple-hop networks, and the corresponding scenario is a vehicular ad-hoc network; the proposed system uses a k-hop fleet-based cooperative video streaming protocol over the hybrid vehicular networks.
The proposed Multimedia Streaming Protocol (MSTP) in ad hoc networks has the following advantages over single-path streaming. First, it can potentially provide higher aggregate bandwidth to real-time multimedia applications. Second, data partitioning over multiple paths can reduce the short-term correlation in real-time traffic, thus improving the performance of multimedia streaming applications [2]. Third, the existence of multiple paths can help to reduce the chance of interrupting the streaming service due to node mobility. Multimedia streaming over multiple paths in ad hoc networks has been studied, and several video coding techniques [4] optimized for multipath streaming have been proposed under the ad hoc network scenario. In this paper, to better support multipath streaming over ad hoc networks, the following two issues are addressed. First, packet losses due to different causes should be differentiated, since incorrect information about the different packet loss ratios may decrease the end-to-end multimedia streaming quality; this relies on accurate discrimination among the congestion, channel error, and route change/break states. Second, the streaming protocol needs to choose multiple maximally disjoint paths to achieve good streaming quality.
2. Related Works
In [5], C.-M. Huang, C.-C. Yang, and H.-Y. Lin use a bandwidth aggregation scheme to improve the quality of the videos. The Greedy Approach (GAP) is proposed to select suitable helpers to achieve the maximal throughput in the cooperative video streaming scenario and to estimate the available bandwidth in DSRC over the dynamic vehicular ad hoc network environment. The main drawback of the paper is the CVS scenario itself, because enabling streaming of good quality within such a dynamic network is undoubtedly challenging.
In [6], C.-H. Lee, C.-M. Huang, C.-C. Yang, and H.-Y. Lin note that, in order to improve the quality of video playback, a vehicle, defined as the requester in the paper, may ask other members of the same fleet to download video cooperatively. The FIFO scheme is an intuitive forwarding scheme that focuses on the transmission sequence, while the PPF scheme sends buffered AUs according to their playback priority. Since the DSRC wireless channel has limited bandwidth, forwarders or helpers who are close to the requester may suffer data congestion due to packet forwarding.
In [9], R. Khalili, D. L. Goeckel, D. Towsley, and A. Swami discover neighboring nodes in a wireless network using a random discovery algorithm with a reception-status feedback mechanism. Neighbor discovery algorithms function by exchanging messages between nodes, where each message contains the identifier of its transmitter. In the process of neighbor discovery, nodes are divided into active nodes and passive nodes. The main limitation of the paper is the assumption that collisions are the only source of losses in the network.
In [10], K. Rojviboonchai, Y. Fan, Z. Qian, H. Aida, and W. Zhu present fast and efficient discovery of neighbouring nodes in a wireless ad hoc network, in which all neighbours simultaneously send their unique on-off signatures, which are known to the receiving node. Two scalable detection algorithms are introduced from a group-testing viewpoint: the direct algorithm and group testing with binning. In the direct algorithm, the negative and positive tests are checked and nodes are marked to discover definite neighbours. In group testing with binning, the key element is to use binning, which decomposes neighbour discovery among a large number of nodes into smaller problems. The main limitation of the paper is that it considers neighbour discovery for one particular node, though it can be extended to neighbour discovery for all nodes.
In [7], M. Xing and L. Cai provide an adaptive video streaming scheme for video streaming services in a highway scenario, over a direct link or a multihop path to the RSUs. The adaptive video streaming scheme includes three key parts: neighbor discovery, relay selection, and a video quality adaptation strategy. However, better throughput is not achieved.
III. Proposed System
1. Network formation:
Inter-organizational networks emerge as a result of the interdependencies between organizations that lead organizations to interact with each other and, in time, give rise to network structures. Where hierarchical arrangements can be purposely planned, networks are reactionary, since they emerge out of contextual events that initiate the formation of a collaborative network. Although network emergence is well studied, the process by which networks come into being and evolve through time is not as well known, mainly due to difficulties in data collection and analysis. This is especially the case for public-sector networks, since network evolution studies are predominantly focused on the private sector. Some authors suggest that networks evolve through a cyclical approach and propose five iterative phases that are important in all cooperative phases: 1) face-to-face dialogue, 2) trust building, 3) commitment to the process, 4) shared understanding, and 5) intermediate outcomes. In another model, cooperative inter-organizational relations go through three repetitive phases: 1) a negotiation phase in which organizations negotiate about joint action, 2) a commitment phase in which organizations reach an agreement and commit to future action in the relationship, and 3) an execution phase where joint action is actually performed. These three stages overlap and are repeated throughout the inter-organizational relationship. Both cyclical models attempt to explain the processes within an operating network, but they do not consider the evolutionary process organizational networks go through from their emergence until their termination.
2. Neighbor estimation:
In Neighbor Discovery (ND), each node transmits at randomly chosen times and discovers all its neighbors by a given time with high probability, with the nodes randomly exchanging neighbor discovery packets with their neighbors. A random algorithm is used for neighbor discovery. The completely random algorithm (CRA) is used in the direct discovery algorithm, which uses a directional antenna for the transmission and reception of signals. The algorithm requires the nodes that converse to be synchronized in time; they can effectively transmit and receive only if the nodes are in corresponding modes. The algorithm divides the time frame into three mini-slots. During the first mini-slot, the node decides to be in one of the following states: Transmit, Listen or Sleep.

When a node chooses to be in transmit mode, it broadcasts the DISCOVER message in the first mini-slot and waits for the ACK in the second mini-slot; during the third mini-slot, it sends the confirmation to the receivers. When a node chooses to be in listen mode, it receives the DISCOVER message in the second mini-slot and sends the ACK to the sender if it successfully receives the DISCOVER message; in the third mini-slot, it receives the confirmation message from the sender.
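The three-mini-slot exchange above can be illustrated with a small simulation. The following Python sketch only illustrates the Transmit/Listen/Sleep frame structure described in this section; the node count, the uniform state choice, and the single-transmitter collision rule are assumptions introduced for the example, not details taken from the protocol.

import random

def cra_frame(nodes, discovered):
    # Each node independently picks a state for this frame.
    states = {n: random.choice(["TRANSMIT", "LISTEN", "SLEEP"]) for n in nodes}
    transmitters = [n for n, s in states.items() if s == "TRANSMIT"]
    listeners = [n for n, s in states.items() if s == "LISTEN"]
    # Mini-slot 1: DISCOVER broadcast; mini-slot 2: listeners ACK;
    # mini-slot 3: the transmitter confirms. Assume the exchange succeeds
    # only when exactly one node transmitted (no collision).
    if len(transmitters) == 1:
        for rx in listeners:
            discovered.add((transmitters[0], rx))

nodes = list(range(6))      # assumed small network
discovered = set()
for _ in range(200):        # assumed number of frames
    cra_frame(nodes, discovered)
print(len(discovered), "directed neighbor links discovered")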
3. Multipath-multimedia streaming protocol implementation:
The routing of data packets in the proposed system can use either unicast or multicast transmission. In unicast transmission, a packet is sent from a single source to a specified destination. Multicasting is the networking technique of delivering the same packet simultaneously to multiple nodes or a group of nodes. Multipath routing using Ad-hoc On-demand Multipath Distance Vector routing (AOMDV) is used to obtain multicasting of nodes through multiple paths.
Figure 1. Co-operative network of multipath protocols, showing sender, forwarder and receiver nodes. Multicast transmission is used for the delivery of information to a group of destinations simultaneously, delivering the packets over each link of the network.

3.1 Partial Video Sequence (PVS) Caching Scheme
Partial Video Sequence decomposes video sequences into a number of parts by using a scalable video compression algorithm. Video parts are selected to be cached in local video servers based on the amount of bandwidth that would be demanded from the distribution network and the central video server if the content were kept only in the central video server.

3.2 Prefix Caching
Prefix caching can reduce the request rejection ratio and the client waiting time. Network bandwidth usage is also reduced by the caching scheme through sharing the video data of the currently played video object with other clients of the active chain.

3.3 Neighbor Based Caching Scheme
A mobile client can download the first segment of the video from the initial buffer of a neighbor client into its own initial buffer. The mobile client must be within the coverage area of the neighbor client.

3.4 Distortion Estimation of Video Nodes for Throughput Capacity
A metric used for estimating video nodes is rate-distortion, instead of conventional network performance metrics (i.e., hop count, loss probability, and delay) for video routing. The rate-distortion is estimated by using network prediction models. In our model, packet loss is generated for two reasons: channel error and queuing loss. The steps to estimate the video distortion introduced by a node are as follows: first, the packet error probability in the MAC layer is estimated; second, the packet loss probability due to congestion is estimated; third, a rate-distortion model is used to calculate the rate-distortion of the node.
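To make the three estimation steps concrete, the sketch below combines an assumed MAC-layer error probability with an assumed congestion-loss probability and feeds the surviving rate into a generic rate-distortion curve. The functional form D = d0 + theta / (goodput - R0) and all numeric values are illustrative assumptions, not the prediction model used in this work.

def node_distortion(rate_kbps, p_mac, p_cong, d0=1.0, theta=1200.0, r0=20.0):
    # Steps 1 and 2: combine MAC-layer error and congestion loss.
    p_loss = p_mac + (1.0 - p_mac) * p_cong
    # Step 3: plug the surviving rate into an assumed D(R) model.
    goodput = rate_kbps * (1.0 - p_loss)
    if goodput <= r0:
        return float("inf")          # below the usable rate floor
    return d0 + theta / (goodput - r0)

print(node_distortion(rate_kbps=150, p_mac=0.02, p_cong=0.05))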
3.5 Topology Construction
Topology construction involves determining
where to place the components and how to
connect them. The topology of the network is
dependent on the relative locations and
connections of nodes within the network. The
(topological) optimization methods that can be
used in this stage come from an area of
mathematics called Graph Theory. These
methods involve determining the costs of
transmission and the cost of switching.
4. Path selection:
A mechanism is introduced for path selection when the energy of the sensors on the original primary path has dropped below a certain level. This allows us to distribute energy consumption more evenly among the sensor nodes in the network. The number of hop counts is also identified by using this method, and the energy efficiency of the individual nodes is increased by this path selection method.
5. Analysis:
The following parameters are analysed: (1) Packet Delivery Ratio, (2) Residual Energy, and (3) Delivery Latency.
5.1 Packet delivery ratio:
Packet delivery ratio is defined as the ratio of data packets received by the destinations to those generated by the sources. Mathematically, it can be defined as

PDR = S1 / S2    (1)

where S1 is the sum of data packets received by every destination and S2 is the sum of data packets generated by every source. The graphs show the fraction of data packets that are successfully delivered during the simulation time against the number of nodes, with the PDR growing under the routing protocols.
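Equation (1) can be evaluated directly from the simulation counters; the per-node packet counts below are assumed example values.

received = [480, 455, 470]    # packets received at each destination (assumed)
generated = [500, 500, 500]   # packets generated by each source (assumed)
s1, s2 = sum(received), sum(generated)
pdr = s1 / s2                 # PDR = S1 / S2, Eq. (1)
print("PDR = %.3f" % pdr)     # 0.937 for these example counts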
5.2 Residual Energy:
The residual energy metric addresses how to reduce collisions from intrusive nodes in event-driven wireless sensor networks, how to support the communication efficiency of the system, and how to ensure balanced energy consumption in the WSN.
5.3 Delivery latency:
(1) In general, latency is the period of time that one component in a system spends waiting for another component; latency, therefore, is wasted time. For example, in accessing data on a disk, latency is defined as the time it takes to position the proper sector under the read/write head. (2) In networking, it is the amount of time it takes a packet to travel from source to destination; together, latency and bandwidth define the speed and capacity of a network. (3) In VoIP terminology, latency refers to a delay in packet delivery; VoIP latency is a service issue that is usually based on physical distance, hops, or voice-to-data conversion.

IV. PERFORMANCE EVALUATION
In order to verify our work, the NS-2 network simulation tool is adopted to evaluate the performance of our proposed multipath streaming protocol. In the simulation, 50 wireless nodes move freely in a 400 m x 800 m area following a random waypoint mobility pattern, in which the pause time is zero so that each node is constantly moving. The IEEE 802.11p interface of each mobile node is used for communication between nodes through the ad-hoc network, and the settings of the corresponding parameters are summarized in Table 1. The related parameters of IEEE 802.11p are set in the 802.11Ext and WirelessPhyExt modules supported in NS-2, and the radio propagation model for the interface is set to the two-ray ground reflection model.
Table 1. IEEE 802.11p simulation parameters

Parameter            Value
Packet Payload       8000 bits
MAC + PHY headers    384 bits
ACK                  112 bits
RTS                  160 bits
CTS                  112 bits
Propagation Delay    1 us
Channel Bit Rate     6 Mbps
V. Conclusion

In mobile ad hoc networks, multi-path multimedia streaming improves the routing efficiency for secured data transmission. There is a tradeoff between path disjointness and packet reordering among the different paths when using multiple disjoint paths for different packets, while tolerating bursts of packet losses in the case of route breakage due to channel interference. A multipath streaming protocol can identify the states of route change or break, channel error, and network congestion. Consequently, the proposed formal model prevents adversarial nodes from breaking up routes by inserting alternate paths for the parted messages. The experimental results show that our proposed multi-path multimedia streaming protocol (MMSTP) gives better throughput results in terms of transmission delay, bandwidth allocation and load factor.
REFERENCES
[1] Ananthanarayanan .G, Padmanabhan .V, Thekkath .C, and Ravindranath .L (2007), "Collaborative downloading for multi-homed wireless devices," in Proc. 8th IEEE Workshop HotMobile, pp. 79–84.
[2] Bushmitch .D, Mao .S, Narayanan .S, and Panwar .S (2003), "MRTP: a multi-flow realtime transport protocol for ad hoc networks," in Proc. IEEE VTC, pp. 2629–2634.
[3] Chao-Hsien Lee, Chung-Ming Huang, Chia-Ching Yang, and Hsiao-Yu Li (2014), "The K-hop Cooperative Video Streaming Protocol Using H.264/SVC Over the Hybrid Vehicular Networks," IEEE Transactions on Mobile Computing, vol. 13, no. 6.
[4] Celebi .E, Mao .S, Lin .S, Panwar .S, and Wang .Y (2003), "Video transport over ad hoc networks: Multistream coding with multipath transport," IEEE J. Select. Areas Commun., vol. 21, pp. 1721–1737.
[5] Ching Yang .C, Ming Huang .C, and Yu Lin .H (2011), "A K-hop bandwidth aggregation scheme for member-based cooperative transmission over vehicular networks," in Proc. 17th IEEE ICPADS, Tainan, Taiwan, pp. 436–443.
[6] Ching Yang .C, Lee .C, Ming Huang .C, and Yu Lin .H (2012), "K-hop packet forwarding schemes for cooperative video streaming over vehicular networks," in Proc. 4th Int. Workshop Multimedia Computing and Communications, 21st ICCCN, Munich, Germany, pp. 1–5.
[7] Cai .L and Xing .M (2012), "Adaptive video streaming with inter-vehicle relay for highway VANET scenario," in Proc. IEEE ICC, Ottawa, ON, Canada, pp. 5168–5172.
[8] Galatchi .D and Zoican .R (2010), "Analysis and simulation of a predictable routing protocol for VANETs," in Proc. 9th ISETC, Timisoara, Romania, pp. 153–156.
[9] Goeckel .D .L, Khalili .R, Swami .A, and Towsley .D (2010), "Neighbor discovery with reception status feedback to transmitters," in Proc. 29th IEEE Conf. INFOCOM, San Diego, CA, USA, pp. 2375–2383.
[10] Guo .D and Luo .L (2008), "Neighbor discovery in wireless ad-hoc networks based on group testing," in Proc. 46th Annu. Allerton Conf. Communication, Control, and Computing, Urbana-Champaign, IL, USA, pp. 791–797.
[11] Ideguchi .T, Tian .X, Okamura .T, and Okuda .T (2009), "Traffic evaluation of group communication mechanism among vehicles," in Proc. 4th ICCIT, Seoul, South Korea, pp. 223–226.
[12] Tsai .H, Chen .C, Shen .C, Jan .R, and Li .H (2009), "Maintaining cohesive fleets via swarming with small-world communications," in Proc. IEEE VNC, Tokyo, Japan, pp. 18.
[13] Taleb .T et al. (2007), "A stable routing protocol to support ITS services in VANET networks," IEEE Trans. Veh. Technol., vol. 56, no. 6, pp. 3337–3347.
[14] Chebrolu .K and Rao .R (2006), "Bandwidth aggregation for realtime applications in heterogeneous wireless networks," IEEE Trans. Mobile Comput., vol. 5, no. 4, pp. 388–403.
[15] Chiang .T .C, Hsieh .M .Y, and Huang .Y .M (2007), "Transmission of layered video streaming via multi-path on ad-hoc networks," Multimedia Tools Appl., vol. 34, no. 2, pp. 155–177.
[16] Chan .G and Leung .M (2007), "Broadcast-based peer-to-peer collaborative video streaming among mobiles," IEEE Trans. Broadcast., vol. 53, no. 1, pp. 350–361.
[17] Das .S, Gerla .M, Nandan .A, Pau .G, and Sanadidi .M .Y (2005), "Cooperative downloading in vehicular ad-hoc wireless networks," in Proc. 2nd Annu. Conf. WONS, Washington, DC, USA, pp. 32–41.
AN EFFECTUAL QUERY PROCESSING IN CLOUD WITH RASP DATA AGITATION USING DIJKSTRA'S ALGORITHM
Anand.M
M.E. CSE II year,
Dept of Computer Science and Engineering,
M.I.E.T Engineering College,
Trichy, India.

Senthamil Selvi R, M.E., (Ph.D),
Associate Professor,
Dept of Computer Science and Engineering,
M.I.E.T Engineering College,
Trichy, India.
Abstract ----- Data mining is the process of analyzing data from different perspectives and summarizing it into useful information. It allows users to examine data from many different levels or angles, categorize it, and summarize the associations identified. Increasing data intensity in the cloud may also require improved scalability and stability, and sensitive data must be preserved securely in a hostile environment. This research work presents a way to secure queries and to retrieve data for the client in an efficient manner. In the proposed work, two schemes are used, Random Space Perturbation (RASP) and the Advanced Encryption Standard (AES), for initialization and for encrypting the data. Finally, Dijkstra's algorithm is applied to calculate the distance between the nearby objects in the database. Compared to k-NN, Dijkstra's algorithm provides exact results, so its performance is highly accurate. We also aim to decrease the computational cost inherent in processing encrypted data, and consider the case of incrementally updating datasets over the entire process.

Keywords: Cloud data, RASP, AES, Dijkstra's Algorithm, and Encryption.
I. INTRODUCTION
Authentication for location-based services is a major part of the process of delivering query results to clients, and protection violations on the third-party side have to be completely avoided. Finding location-based spatial data with distances can be achieved through data mining techniques (classification and regression): classification is the process of categorizing the data, and regression finds relationships among the data. To address user privacy needs, several protocols have been proposed that withhold, either partially or completely, the users' location information from the LBS. For instance, some work uses larger cloaking regions that are meant to prevent disclosure of exact user whereabouts. Nevertheless, the LBS can still derive sensitive information from the cloaked regions, so another line of research using cryptographic-strength protection was started and continued. The main idea is to extend existing Private Information Retrieval (PIR) protocols for binary sets to the spatial domain, and to allow the LBS to return the NN to users without learning any information about users' locations [1]. This method serves its purpose well, but it assumes that the actual data points (i.e., the points of interest) are available in plaintext to the LBS. This model is only suitable for general-interest applications such as Google Maps, where the landmarks on the map represent public information, but it cannot handle scenarios where the data points must be protected from the LBS itself. More recently, a new model for data sharing emerged, where various entities generate or collect datasets of POI that cover certain niche areas of interest, such as specific segments of arts, entertainment, travel, etc. For instance, there are social media channels that focus on specific travel habits, e.g., eco-tourism, experimental theater productions or underground music genres.
The content generated is often geo-tagged,
for instance related to upcoming artistic events,
shows, travel destinations, etc. However, the
owners of such databases are likely to be small
organizations, or even individuals, and not have the
ability to host their own query processing services
[4]. This category of data owners can benefit
greatly from outsourcing their search services to a
cloud service provider. In addition, such services
could also be offered as plug-in components
within social media engines operated by large
industry players. Due to the specificity of such data, collecting and maintaining such information is an expensive process, and
furthermore, some of the data may be sensitive
in nature. For instance, certain activist groups may
not want to release their events to the general
public, due to concerns that big corporations or
oppressive
governments may intervene and
compromise their activities.
Similarly, some groups may prefer to keep
their geo-tagged datasets confidential, and only
accessible to trusted subscribed users, for the
fear of backlash from more conservative
population groups [6]. It is therefore important to
protect the data from the cloud service provider. In
addition, due to financial considerations on behalf
of the data owner, subscribing users will be billed
for the service based on a pay-per-result model.
www.internationaljournalssrg.org
Page 27
International Conference on Engineering Trends and Science & Humanities (ICETSH-2015)
For instance, a subscriber who asks for k NN results will pay for those specific items, and should not receive more than k results. Hence, approximate querying methods with low precision, such as existing techniques that return many false positives in addition to the actual results, are not desirable. Specifically, both
the POI and the user locations must be protected
from the cloud provider. This model has been
formulated previously in literature as “blind queries
on confidential data”. In this context, POIs must be
encrypted by the data owner, and the cloud
service provider must perform NN processing
on encrypted data.
To address this problem, previous work has proposed privacy-preserving data transformations that hide the data while still allowing some geometric function evaluation [7]. However, such transformations lack the formal security guarantees of encryption. Other methods employ stronger-security transformations, used in conjunction with dataset partitioning techniques, but return a large number of false positives, which is not desirable due to the financial considerations outlined earlier. Finding location-based spatial data with distances can be achieved through data mining techniques (classification and regression): classification is the process of categorizing the data, and regression finds relationships among the data [5]. A shortest-path algorithm is the key to reducing the time needed for data extraction.

WORKING STEPS
Step 1: Input initialization; the spatial input is initialized by storing all attributes inside the database.
Step 2: Perturbation work; coordinates are extracted for finding the shortest distance from one node to another, and the gathered edges and vertices lead to the estimate.
Step 3: The shortest-distance values for each node are updated into the database. The entire information is encrypted using the Advanced Encryption Standard:

Encrypt(plaintext[n])
    Add-Round-Key(state, round-key[0]);
    for i = 1 to Nr-1 step-size 1 do
        Sub-Bytes(state);
        Shift-Rows(state);
        Mix-Columns(state);
        Add-Round-Key(state, round-key[i]);
    Update()

Step 4: Outsourcing; the encrypted values are sent over to the untrusted cloud side. The cloud maintains that information in a separate database.
Step 5: Service provision can be processed by the cloud for the required client.
Step 6: After client authentication, the service is received by the client. Points of interest can be considered as the client request, which is given as a query.
Step 7: The query result is decrypted by the requesting client at the end.
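Steps 3 and 7 can be sketched with a real AES implementation in place of the round-level pseudocode above. The snippet below is a minimal sketch assuming the PyCryptodome library and AES in GCM mode; the record format and key handling are illustrative assumptions, not the system's actual design.

from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)    # 128-bit AES key kept by the data owner

def encrypt_record(record):
    # Step 3: encrypt a shortest-distance record before outsourcing it.
    cipher = AES.new(key, AES.MODE_GCM)   # GCM also authenticates the data
    ciphertext, tag = cipher.encrypt_and_digest(record)
    return cipher.nonce, ciphertext, tag

def decrypt_record(nonce, ciphertext, tag):
    # Step 7: the requesting client decrypts the query result.
    cipher = AES.new(key, AES.MODE_GCM, nonce=nonce)
    return cipher.decrypt_and_verify(ciphertext, tag)

nonce, ct, tag = encrypt_record(b"node=17;shortest_dist=42")
print(decrypt_record(nonce, ct, tag))     # b'node=17;shortest_dist=42'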
II. PROPOSED WORK
This research work presents a way to secure the queries and to retrieve data to the client in an efficient manner.
Fig 1.1 System Architecture: spatial information (e.g., restaurant data) is preprocessed, content is encrypted from plain text to cipher text, nearby segments are evaluated, and the data is maintained in a repository.
I. RASP: RANDOM SPACE PERTURBATION

RASP is one type of multiplicative perturbation, with a novel combination of OPE, dimension expansion, random noise injection, and random projection. Consider multidimensional data that are numeric and lie in a multidimensional vector space. The database has d searchable dimensions and n records, which makes a d x n matrix. The searchable dimensions can be used in queries and thus should be indexed. Let x represent a d-dimensional record. Note that in the d-dimensional vector space, range query conditions are represented as half-space functions, and a range query is translated to finding the point set in the corresponding polyhedron area described by the half spaces [4]. The RASP perturbation involves three steps. Its security is based on the existence of a random invertible real-value matrix generator and a random real-value generator. In this work RASP is mainly used for the initialization of building a block.
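A minimal sketch of the multiplicative step, assuming NumPy: each record x is extended with the constant 1 and a fresh random noise value v, then multiplied by a secret random invertible matrix A, following the general shape of RASP-style perturbations. The dimensions and distributions here are assumptions for illustration, not the exact construction used by RASP.

import numpy as np

rng = np.random.default_rng(7)
d = 2                                   # number of searchable dimensions (assumed)

# Secret key: a random (d+2) x (d+2) invertible real-valued matrix A.
while True:
    A = rng.normal(size=(d + 2, d + 2))
    if abs(np.linalg.det(A)) > 1e-6:    # retry until clearly invertible
        break

def perturb(x):
    # Extend the record with the constant 1 and fresh random noise v,
    # then apply the multiplicative transformation y = A @ [x, 1, v].
    v = rng.normal()
    return A @ np.concatenate([x, [1.0, v]])

y = perturb(np.array([3.5, 120.0]))
print(y)                                # perturbed record stored in the cloud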
II. DIJKSTRA'S ALGORITHM

A. BASIC PRINCIPLES OF DIJKSTRA'S ALGORITHM

Currently, the best-known algorithm to find the shortest path between two points is Dijkstra's algorithm, proposed by Dijkstra in 1959. It can not only get the shortest path between the starting point and the final destination, but can also find the shortest path from the starting point to each vertex.
Step 1: Initialization. Let S = {v0}, where v0 is the path starting point; let V be the set of all n vertices in the network, and let w[u, v] be the distance between vertex u and vertex v. dist is an array of n elements used to store the shortest distance from v0 to the other vertices, initialized as dist[v] = w[v0, v]; prev is an array of n elements used to store the nearest vertex before v on the shortest path.

Step 2: Find a vertex u from the set V - S that makes dist[u] the minimum value, then add u into S. If V - S is the empty set, the algorithm is over.

Step 3: Adjust the values of the arrays dist and prev. For each vertex v adjoining vertex u in the set V - S: if dist[u] + w[u, v] < dist[v], then let dist[v] = dist[u] + w[u, v] and prev[v] = u.

Step 4: Go to Step 2.
B. ALGORITHM PSEUDO CODE

function Dijkstra(Graph, source):
    dist[source] <- 0                 // Distance from source to source
    prev[source] <- undefined         // Previous node in optimal path initialization
    for each vertex v in Graph:       // Initialization
        if v != source
            dist[v] <- infinity       // Unknown distance from source to v
            prev[v] <- undefined      // Previous node in optimal path from source
        end if
        add v to Q                    // All nodes initially in Q (unvisited nodes)
    end for
    while Q is not empty:
        u <- vertex in Q with min dist[u]   // Source node in first case
        remove u from Q
        for each neighbor v of u:     // where v has not yet been removed from Q
            alt <- dist[u] + length(u, v)
            if alt < dist[v]:         // A shorter path to v has been found
                dist[v] <- alt
                prev[v] <- u
            end if
        end for
    end while
    return dist[], prev[]
end function
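The pseudocode above extracts the minimum-distance vertex by scanning Q; a runnable Python version using a binary heap for that step is sketched below. The example graph is an assumption for illustration.

import heapq

def dijkstra(graph, source):
    # Shortest distances from source; graph maps u -> {v: length(u, v)}.
    dist = {v: float("inf") for v in graph}
    prev = {v: None for v in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                  # stale entry; u was already settled
        for v, length in graph[u].items():
            alt = d + length
            if alt < dist[v]:         # a shorter path to v has been found
                dist[v], prev[v] = alt, u
                heapq.heappush(heap, (alt, v))
    return dist, prev

graph = {                             # assumed example road graph
    "A": {"B": 4, "C": 1},
    "B": {"D": 1},
    "C": {"B": 2, "D": 5},
    "D": {},
}
print(dijkstra(graph, "A")[0])        # {'A': 0, 'B': 3, 'C': 1, 'D': 4}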
C. REWARD OF PROPOSED WORK
- Improved performance of query processing for both range queries and the shortest-path process.
- Formal analysis of the leaked query and access patterns.
- Consideration of the proposed RASP's possible effect on both data and query confidentiality.
- Reduced verification cost for serviced clients.
- Complete avoidance of security violations from the provider side.

III. EXPERIMENTAL RESULT

To assess the efficiency of the proposed approach, we have made both qualitative (visual) and quantitative analyses of the experimental results.

Fig 1.2 Result of the security comparison among the AES, Triple DES and RSA algorithms (pie chart).

Fig 1.3 Result of cost, confidentiality and time consumption (bar chart).

IV. CONCLUSION
In this paper, two schemes were proposed to support RASP data perturbation: Dijkstra's algorithm together with an encryption algorithm. Both use mutable order-preserving encoding (mOPE) as a building block. Dijkstra's algorithm provides exact results, but its performance overhead may be high; k-NN only offers approximate NN results, but with better performance. In addition, the accuracy of k-NN is very close to that of the exact method. We plan to investigate more complex secure evaluation functions on cipher-texts, such as skyline queries, and to research formal security protection guarantees against the client, to prevent it from learning anything other than the received k query results.
REFERENCES
[1] Huiqi Xu, Shumin Guo, and Keke Chen, "Building Confidential and Efficient Query Services in the Cloud with RASP Data Perturbation," IEEE Transactions on Knowledge and Data Engineering, vol. 26, no. 2, 2014.
[2] Dong Haixiang and Tang Jingjing, "The Improved Shortest Path Algorithm and Its Application in Campus Geographic Information System," Journal of Convergence Information Technology, vol. 8, no. 2, issue 2.5, 2013.
[3] J. Bau and J.C. Mitchell, "Security Modeling and Analysis," IEEE Security and Privacy, vol. 9, no. 3, pp. 18-25, 2011.
[4] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge Univ. Press, 2004.
[5] N. Cao, C. Wang, M. Li, K. Ren, and W. Lou, "Privacy-Preserving Multi-Keyword Ranked Search over Encrypted Cloud Data," Proc. IEEE INFOCOM, 2011.
[6] K. Chen, R. Kavuluru, and S. Guo, "RASP: Efficient Multidimensional Range Query on Attack-Resilient Encrypted Databases," Proc. ACM Conf. Data and Application Security and Privacy, pp. 249-260, 2011.
[7] K. Chen and L. Liu, "Geometric Data Perturbation for Outsourced Data Mining," Knowledge and Information Systems, vol. 29, pp. 657-695, 2011.
[8] K. Chen, L. Liu, and G. Sun, "Towards Attack-Resilient Geometric Data Perturbation," Proc. SIAM Int'l Conf. Data Mining, 2007.
[9] B. Chor, E. Kushilevitz, O. Goldreich, and M. Sudan, "Private Information Retrieval," ACM Computer Survey, vol. 45, no. 6, pp. 965-981, 1998.
[10] R. Curtmola, J. Garay, S. Kamara, and R. Ostrovsky, "Searchable Symmetric Encryption: Improved Definitions and Efficient Constructions," Proc. 13th ACM Conf. Computer and Comm. Security, pp. 79-88, 2006.
[11] N.R. Draper and H. Smith, Applied Regression Analysis. Wiley, 1998.
[12] H. Hacigumus, B. Iyer, C. Li, and S. Mehrotra, "Executing SQL over Encrypted Data in the Database-Service-Provider Model," Proc. ACM SIGMOD Int'l Conf. Management of Data (SIGMOD), 2002.
ROBUST SECURE DYNAMIC AUTHENTICATION SCHEME
M. Nanthini Parvatham, Ms. M.S.S.Sivakumari M.E., Mr. S.Subbiah M.E, (Ph.D).,
M.E (CSE) student, Project Guide, HOD
TRICHY ENGINEERING COLLEGE
TIRUCHIRAPPALLI
Abstract— Textual passwords are the most common method used for authentication, but they are vulnerable to eavesdropping, dictionary attacks, social engineering and shoulder surfing. Graphical passwords were introduced as alternative techniques to textual passwords, yet most graphical schemes are also vulnerable to shoulder surfing. To address this problem, Captcha is combined with graphical passwords in a new way, providing security for text combined with special-character Captcha in the schemes named ClickText, ClickAnimal and AnimalGrid under CaRP. Meanwhile, in the proposed system, phishing attack protection is provided through an anti-phishing detection mechanism combined with OTT security and voice recognition. The OTT password is encrypted by symmetric encryption, sent to the particular user's mail, and decrypted by an application in order to access the particular site. These methods are suitable for Personal Digital Assistants, improve security and deter attackers in the online environment.
Index Terms—Dictionary attack, social engineering, shoulder surfing, graphical passwords, ClickText, ClickAnimal, AnimalGrid, CaRP, anti-phishing detection mechanism, OTT, voice recognition, symmetric encryption.
I. INTRODUCTION
An exciting new paradigm for security is Hard AI (Artificial
Intelligence) problems. Under this paradigm, the most notable
primitive invented is Captcha, which distinguishes human
users from computers by presenting a challenge, beyond the
capability of computers but easy for humans.
Phishing is a form of social engineering in which an
attacker, also known as a phisher, attempts to fraudulently
retrieve legitimate users' confidential or sensitive credentials by mimicking electronic communications from a trustworthy or public organization in an automated fashion. Phishers set up
fraudulent websites (usually hosted on compromised
machines), which actively prompt users to provide
confidential information. In this paper, security over phishing
attack is provided and introduced a new security primitive
based on hard AI problems, known as CaRP (Captcha as
gRaphical Passwords). CaRP is a family of click-based graphical passwords, where a sequence of clicks on an image is used to derive a password; the images used in CaRP are Captcha challenges, and a new CaRP image is generated for every login attempt. Captcha is now a standard Internet security
technique to protect online email and other services from
being abused by bots.
CaRP is built on both text Captcha and image-recognition Captcha, and it prevents online dictionary attacks and relay attacks. ClickText is a recognition-based CaRP scheme built on top of text Captcha. A ClickText password is a sequence of characters in the alphabet. The ClickText image is generated by the underlying Captcha engine, and each character's location is tracked to produce ground truth for the location of the character in the generated image. Graphical password systems are a type of knowledge-based authentication that attempts to leverage human memory for visual information; of interest herein are cued-recall click-based graphical passwords.

II. BACKGROUND AND RELATED WORK
The ClickAnimal Captcha scheme uses 3D models of horses and dogs to generate 2D animals with different textures, colors, lightings and poses, and arranges them on a cluttered background; the user clicks the entire animal in the grid to pass the test. The AnimalGrid password space is a grid-based graphical password, a combination of ClickAnimal and Click-A-Secret (CAS). At every Captcha level, voice recognition is performed, and at every login, OTT authentication is performed for both the text password and the image password. This prevents passwords from being hacked by social engineering, brute-force attacks and dictionary attacks.
A network attack is defined as an intrusion on the network infrastructure that first analyzes the environment and collects information in order to exploit existing open ports or vulnerabilities; this may also include unauthorized access to resources. One type of network attack is the password-guessing attack, in which a legitimate user's access rights to a computer and network resources are compromised by identifying the user id/password combination of the legitimate user. Password-guessing attacks can be classified into brute-force attacks and dictionary attacks.
A Brute Force attack is a type of password guessing
attack and it consists of trying every possible code,
combination, or password until you find the correct one. This
type of attack may take long time to complete. A complex
password can make the time for identifying the password by
brute force long. A dictionary attack is another type of
password guessing attack which uses a dictionary of common
words to identify the user’s password.
In cryptography, a brute-force attack, or exhaustive key
search, is a cryptanalytic attack that can, in theory, be used
against any encrypted data (except for data encrypted in an
information-theoretically secure manner). Such an attack
might be utilized when it is not possible to take advantage of
other weaknesses in an encryption system (if any exist) that
would make the task easier. It consists of systematically
checking all possible keys until the correct key is found. In the
worst case, this would involve traversing the entire search
space. The key length used in the cipher determines the
practical feasibility of performing a brute-force attack, with
longer keys exponentially more difficult to crack than shorter
ones. A cipher with a key length of N bits can be broken in a worst-case time proportional to 2^N and an average time of half that. Brute-force attacks can be made less effective by
obfuscating the data to be encoded, something that makes it
more difficult for an attacker to recognize when he/she has
cracked the code. One of the measures of the strength of an
encryption system is how long it would theoretically take an
attacker to mount a successful brute-force attack against it.
Brute-force attacks are an application of brute-force search,
the general problem-solving technique of enumerating all
candidates and checking each one. Certain types of
encryption, by their mathematical properties, cannot be
defeated by brute force.
An example of this is one-time pad cryptography, where
every clear text bit has a corresponding key from a truly
random sequence of key bits. A 140 character one-time-pad–
encoded string subjected to a brute-force attack would
eventually reveal every 140 character string possible,
including the correct answer - but of all the answers given,
there would be no way of knowing which the correct one was.
Defeating such a system, as was done by the Venona project,
generally relies not on pure cryptography, but upon mistakes
in its implementation: the key pads not being truly random,
intercepted keypads, operators making mistakes - or other
errors.
In a reverse brute-force attack, a single (usually common)
password is tested against multiple usernames or encrypted
files. The process may be repeated for a select few passwords.
In such a strategy, the attacker is generally not targeting a
specific user. Reverse brute-force attacks can be mitigated by
establishing a password policy that disallows common
passwords. In cryptanalysis and computer security, a
dictionary attack is a technique for defeating a cipher or
authentication mechanism by trying to determine its
decryption key or passphrase by trying likely possibilities,
such as words in a dictionary. A dictionary attack uses a
targeted technique of successively trying all the words in an
exhaustive list called a dictionary (from a pre-arranged list of
values). In contrast with a brute force attack, where a large
proportion of the key space is searched systematically, a dictionary
attack tries only those possibilities which are most likely to
succeed, typically derived from a list of words for example a
dictionary (hence the phrase dictionary attack).
Generally, dictionary attacks succeed because many people have a tendency to choose passwords which are short (7 characters or fewer), single words found in dictionaries, or simple, easily predicted variations on words, such as appending a digit. However, these are easy to defeat: adding a
single random character in the middle can make dictionary
attacks untenable. It is possible to achieve a time-space tradeoff by pre-computing a list of hashes of dictionary words, and
storing these in a database using the hash as the key. This
requires a considerable amount of preparation time, but allows
the actual attack to be executed faster. The storage
requirements for the pre-computed tables were once a major
cost, but are less of an issue today because of the low cost of
disk storage.
Pre-computed dictionary attacks are particularly effective
when a large number of passwords are to be cracked. The precomputed dictionary need only be generated once, and when it
is completed, password hashes can be looked up almost
instantly at any time to find the corresponding password. A
more refined approach involves the use of rainbow tables,
which reduce storage requirements at the cost of slightly
longer lookup times; the LM hash is an example of an authentication system compromised by such an attack. Pre-computed dictionary attacks can be thwarted by the use of salt, a technique that forces the hash dictionary to be recomputed for each password sought, making precomputation infeasible provided the number of possible salt values is large enough. A complementary defence requires answering a challenge before entering the {username, password} pair; failing to answer the challenge correctly prevents the user from proceeding further.
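The effect of salting can be sketched in a few lines of Python. With a fresh random salt per user, an attacker's precomputed table of unsalted hashes no longer matches the stored values; the salt size and iteration count below are assumptions for illustration.

import hashlib, os

def hash_password(password, salt=None):
    # A fresh random salt per user forces any dictionary to be recomputed.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, digest):
    return hash_password(password, salt)[1] == digest

salt, stored = hash_password("correct horse")
print(verify("correct horse", salt, stored))    # True
print(verify("dictionary-word", salt, stored))  # False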
A. Challenge – response test
"Completely Automated Public Turing test to tell
Computers and Humans Apart") is a type of challengeresponse test used in computing to determine whether or not
the user is human. It relies on the gap of capabilities between
humans and bots in solving certain hard AI problems. And it is
mainly used to prevent bots from using various types of
computing services or collecting certain types of sensitive
information. There are two types of visual Captcha: text
Captcha and Image-Recognition Captcha (IRC). The former
relies on character recognition while the latter relies on
recognition of non-character objects. There are two types of
captcha Text based captcha and image based captcha.
Text-based CAPTCHAs have been the most widely deployed schemes. Major web sites such as Google, Yahoo and Microsoft have had their own text-based CAPTCHAs deployed for years. Pessimal Print [6] is one of the first text-based schemes.
Image-based CAPTCHAs such as [7] have been proposed
as alternatives to the text media. More robust and user-friendly
systems have been developed. Images are randomly distorted
before presenting them. Machine recognition of non-character
objects is far less capable than character recognition. IRCs
rely on the difficulty of object identification or classification,
possibly combined with the difficulty of object segmentation.
Asirra [11] relies on binary object classification: a user is
asked to identify all the cats from a panel of 12 images of cats
and dogs. A new type of CAPTCHA called Collage
CAPTCHA is introduced in [8] where distorted shapes are
used instead of images.
CAPTCHAs are generated by software, and the structure of a CAPTCHA gives hints to its implementation. Thus, due to these properties of image processing and image composition, the process that creates CAPTCHAs can often be reverse engineered. Once the implementation strategy of a family of CAPTCHAs has been reverse engineered, the CAPTCHA instances may be solved automatically by leveraging weaknesses in the creation process or by comparing a CAPTCHA's output against itself; this concept is discussed in [10]. A combination of both text and image CAPTCHA is brought out as a hybrid CAPTCHA in [9].
B. Draw-a-secret
This is an approach to user authentication that generalizes the notion of a textual password and that, in many cases, improves the security of user authentication over that provided by textual passwords. It covers the design and analysis of graphical passwords, which can be input by the user to any device with a graphical input interface. A graphical password serves the same purpose as a textual password, but can consist, for example, of handwritten designs (drawings), possibly in addition to text [2]. A purely graphical password selection and input scheme is called "draw-a-secret" (DAS). In this scheme, the password is a simple picture drawn on a grid. This approach is alphabet independent, thus making it equally accessible to speakers of any language, and users are freed from having to remember any kind of alphanumeric string. The general implications of graphical passwords help in formulating password rules; creating a proactive password checker is done via the memorable password space of DAS [3].
C. Pass Points
Using CCP as a base system, a persuasive feature is added to encourage users to select more secure passwords, and to make it more difficult to select passwords where all five click-points are hotspots. Specifically, when users create a password, the images are slightly shaded except for a randomly positioned viewport. The viewport is positioned randomly, rather than specifically to avoid known hotspots, since such information could be used by attackers to improve guesses and could also lead to the formation of new hotspots [7]. The viewport's size was intended to offer a variety of distinct points while still covering only an acceptably small fraction of all possible points. Users were required to select a click-point within this highlighted viewport and could not click outside of it.
D. Cued-click Points
Click-based graphical passwords are a type of knowledge-based authentication that attempts to leverage human memory for visual information; a comprehensive review of graphical passwords is available elsewhere. In cued-recall click-based graphical password systems (also known as locimetric systems), users identify and target previously selected locations within one or more images. The images act as memory cues to aid recall. Cued Click-Points (CCP) was designed to reduce patterns and to reduce the usefulness of hotspots for attackers. Rather than five click-points on one image, CCP uses one click-point on five different images shown in sequence. The next image displayed is based on the location of the previously entered click-point, creating a path through an image set [5]. Users select their images only to the extent that their click-point determines the next image. Creating a new password with different click-points results in a different image sequence. Remembering the order of the click-points is no longer a requirement on users, as the system presents the images one at a time. CCP also provides implicit feedback claimed to be useful only to legitimate users: when logging on, seeing an image they do not recognize alerts users that their previous click-point was incorrect, and users may restart password entry. Explicit indication of authentication failure is only provided after the final click-point, to protect against incremental guessing attacks.
III. PROPOSED WORK
A. Phishing secure model
Phishing is a form of social engineering in which an attacker, also known as a phisher, attempts to fraudulently retrieve legitimate users' confidential or sensitive credentials by mimicking electronic communications from a trustworthy or public organization in an automated fashion. Phishers set up fraudulent websites (usually hosted on compromised machines), which actively prompt users to provide confidential information. In order to provide security against phishing attacks, the user needs to select an image that must be displayed on the application.
B. Captcha
CaRP schemes are click-based graphical passwords. CaRP counters an increasing threat that bypasses Captcha protection, wherein Captcha challenges are relayed to humans to solve. ClickText is a recognition-based CaRP scheme built on top of text Captcha: a ClickText password is a sequence of characters in the alphabet, and the ClickText image is generated by the underlying Captcha engine, with each character's location tracked in the generated image. The ClickAnimal Captcha scheme uses 3D models of horses and dogs to generate 2D animals with different textures, colors, lightings and poses, and arranges them on a cluttered background; the user clicks the animal that is largest in count to pass the test. The AnimalGrid password space is a grid-based graphical password, a combination of ClickAnimal and Click-A-Secret (CAS). The user clicks the animal on the ClickAnimal image that matches the corresponding numeric value in the next grid of the password. At every CAPTCHA authentication, a word is sent to the user's mail id, and that word must be spelled by a human voice.
www.internationaljournalssrg.org
Page 34
International Conference on Engineering Trends and Science & Humanities (ICETSH-2015)
C. Password Guessing Resistant Protocol
PGRP stands for Password Guessing Resistant Protocol and is implemented at the Captcha level. The login protocol makes brute-force and dictionary attacks ineffective even for adversaries with access to large botnets; it has no significant impact on usability, and it is easy to deploy and scalable, requiring minimal computational resources in terms of memory, processing time and disk space. PGRP keeps track of the user machine's IP address, and browser cookies are used to identify successful logins. If no cookie is sent by the user's browser to the login server, the server sends a cookie to the browser after a successful login to identify the user on the next login attempt. PGRP limits the total number of login attempts from unknown remote hosts to as low as a single attempt per username.
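The following minimal Python sketch illustrates the kind of decision rule PGRP applies; the thresholds, cookie format, and state kept per (source, username) pair are simplified assumptions rather than the full protocol.

from collections import defaultdict

failed = defaultdict(int)     # failed-attempt count per (source IP, username)
known_cookies = set()         # cookies issued after earlier successful logins
T_KNOWN, T_UNKNOWN = 10, 1    # free failed attempts: known vs unknown source

def requires_captcha(ip, username, cookie):
    # ask for a Captcha once the free failed-attempt budget is spent
    threshold = T_KNOWN if cookie in known_cookies else T_UNKNOWN
    return failed[(ip, username)] >= threshold

def record_login(ip, username, success):
    if success:
        failed.pop((ip, username), None)
        cookie = "cookie:%s:%s" % (ip, username)  # identifies the next visit
        known_cookies.add(cookie)
        return cookie                             # server sets this cookie
    failed[(ip, username)] += 1
    return None

Legitimate users from a recognised machine keep a generous failure budget, while an unknown host is challenged almost immediately, which is what defeats large botnets.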
D. One Time Text Authentication
To prevent passwords from being hacked by social engineering, brute-force or dictionary attacks, the user enters a nine-digit alphanumeric password during registration. At the time of login, a One Time Text (OTT) is sent to the user's mail id. The OTT is in encrypted form and is decrypted by an application to obtain a number. Using the decrypted number, the nine-digit alphanumeric password is altered and the result is provided as the password to log in. This provides additional security against brute-force and dictionary attacks.
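Since the exact alteration rule is not specified here, the following minimal sketch makes illustrative assumptions: the OTT is a number, its "encryption" is a toy XOR with a key shared with the helper application, and the alteration is a cyclic shift of the nine-digit password by the decrypted number.

import secrets

def make_ott(key):
    ott = secrets.randbelow(10**6)      # the number the server expects back
    return ott, ott ^ key               # (plain value, encrypted form mailed out)

def decrypt_ott(cipher, key):
    return cipher ^ key                 # done by the helper application

def altered_password(password, n):
    shift = n % len(password)           # assumed alteration rule: cyclic shift
    return password[shift:] + password[:shift]

key = 0x5A17C3                          # shared with the helper application
ott, cipher = make_ott(key)
login_value = altered_password("A1B2C3D4E", decrypt_ott(cipher, key))
assert login_value == altered_password("A1B2C3D4E", ott)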
E. One Time Text Image
Graphical password systems are a type of knowledge-based authentication that attempts to leverage human memory for visual information. Of interest herein are cued-recall click-based graphical passwords. In a recall-based technique, the user is asked to reproduce something that was created or selected earlier, during the registration stage. At the time of registration, five user-defined pictures are uploaded. User-defined pictures are pictures selected by the user from a hard disk or any other image-capable device, and the user has to select two click-points for every user-defined picture. At the time of login, the images are shuffled and displayed according to the decrypted OTT number mentioned above. The user clicks the correct click-points in the shuffled images and gains access to the application.
IV. CONCLUDING REMARKS
Online password guessing attacks on password-only systems have been observed for decades. Present-day attackers targeting such systems are empowered by having control of botnets with thousands to millions of nodes. In contrast, PGRP is more
restrictive against brute force and dictionary attacks while
safely allowing a large number of free failed attempts for
legitimate users.
PGRP limits the total number of login attempts from unknown remote users to as low as a single attempt per username. The phishing-secure technique reduces the security risk of communicating with the server over the Internet. The OTT, voice recognition and Cued Click-Points graphical password techniques are also used to improve security against online attacks.
References
[1] L. von Ahn, M. Blum, N. J. Hopper, and J. Langford, "CAPTCHA: Using hard AI problems for security," in Proc. Eurocrypt, 2003, pp. 294-311.
[2] I. Jermyn, A. Mayer, F. Monrose, M. Reiter, and A. Rubin, "The design and analysis of graphical passwords," in Proc. 8th USENIX Security Symp., 1999, pp. 1-15.
[3] B. Pinkas and T. Sander, "Securing passwords against dictionary attacks," in Proc. ACM CCS, 2002, pp. 161-170.
[4] S. Wiedenbeck, J. Waters, J. C. Birget, A. Brodskiy, and N. Memon, "PassPoints: Design and longitudinal evaluation of a graphical password system," Int. J. HCI, vol. 63, pp. 102-127, Jul. 2005.
[5] S. Chiasson, P. C. van Oorschot, and R. Biddle, "Graphical password authentication using cued click points," in Proc. ESORICS, 2007, pp. 359-374.
[6] H. S. Baird, A. L. Coates, and R. J. Fateman, "PessimalPrint: A reverse Turing test," Int. J. Document Analysis and Recognition (IJDAR), vol. 5, no. 2-3, pp. 158-163, April 2003.
[7] E. Athanasopoulos and S. Antonatos, "Enhanced CAPTCHAs: Using animation to tell humans and computers apart," in Proc. 10th Int. Conf. on Communications and Multimedia Security (CMS 2006), LNCS vol. 4237, pp. 97-108, October 2006.
[8] M. Shirali-Shahreza and S. Shirali-Shahreza, "Collage CAPTCHA," in Proc. 20th IEEE Int. Symposium on Signal Processing and Applications (ISSPA 2007), February 2007.
[9] D. Lopresti, "Leveraging the CAPTCHA problem," in Proc. 2nd Int. Workshop on Human Interactive Proofs (HIP 2005), LNCS vol. 3517, pp. 97-110, May 2005.
[10] A. Hindle, M. W. Godfrey, and R. C. Holt, "Reverse engineering CAPTCHAs," in Proc. 15th Working Conference on Reverse Engineering (WCRE '08), pp. 59-68, October 2008.
[11] J. Elson, J. R. Douceur, J. Howell, and J. Saul, "Asirra: A CAPTCHA that exploits interest-aligned manual image categorization," in Proc. ACM CCS, 2007, pp. 366-374.
[12] M. Alsaleh, M. Mannan, and P. C. van Oorschot, "Revisiting defenses against large-scale online password guessing attacks," IEEE Trans. Dependable and Secure Computing, DOI 10.1109/TDSC.2011.24, 2011.
MOBILE BASED PRIVACY PROTECTED SERVICES WITH THREE LAYER SECURITY
K. Dheepika, M.E (CSE), CSE Department, SMK Fomra Institute of Technology, Kelambakkam, Chennai, India
Guided by: Mrs. Jeyalaximi, M.E., Associate Prof., CSE Dept., SMK Fomra Institute of Technology, Kelambakkam, Chennai, India
ABSTRACT – Path confusion was presented by Hoh and Gruteser. The basic idea is to add uncertainty to the location data of the users at the points where the paths of the users cross, making it hard to trace users based on raw location data that was k-anonymised. Position confusion has also been proposed as an approach to provide privacy. The idea is for the trusted anonymiser to group the users according to a cloaking region (CR), thus making it harder for the LS to identify an individual. A common problem with general CR techniques is that there may exist some semantic information about the geography of a location that gives away the user's location. In the proposed work, the data can be retrieved on the basis of a geo-tagged query after checking the privacy profile, and modifications are made to preserve the privacy of the user's location from which the query is issued. Three layers of security are used on the user side, namely High, Medium and Low, for the privacy implementation. The scheme executes a communication-efficient PIR to retrieve the appropriate block in the private grid. This block is encrypted and decrypted using the RC4 algorithm, which provides a symmetric key. We propose a major enhancement upon previous solutions by introducing a two-stage approach, where the first step is based on Oblivious Transfer and the second step is based on Private Information Retrieval, to achieve a secure solution for both parties. The solution presented here is efficient and practical in many scenarios. We implemented the solution on a desktop machine and a mobile device to assess the efficiency of our protocol. We also introduce a security model and analyse the security in the context of our protocol. Finally, we highlight a security weakness of our previous work and present a solution to overcome it.
Keywords—Location based query, private query, private
information retrieval, oblivious transfer
1 INTRODUCTION
Location-based services (LBS) refer to those information
services that deliver differentiated information based on
the location from where a user issues the request. Thus,
the user location information necessarily appears in a
request sent to the service providers (SPs). A privacy problem arises in LBS when the user is concerned with the possibility that an attacker may connect the user's identity with the information contained in the service requests, including location and other information. The popularity of mobile devices with localisation chips and ubiquitous access to the Internet gives rise to a large number of location-based services (LBS). Consider a user who wants to know where the nearest gas station is: he sends a query to a location-based service provider (LBSP) using his smartphone, with his location attached.
1.1 Related Work
The first solution to the problem was proposed by Beresford, in which the privacy of the user is maintained by constantly changing the user's name, or pseudonym, within some mix-zone. It can be shown that, due to the nature of the data being exchanged between the user and the server, the frequent changing of the user's name provides little protection for the user's privacy. A more recent investigation of the mix-zone approach has been applied to road networks, investigating the number of users required to satisfy the unlinkability property when there are repeated queries over an interval. This requires careful control of how many users are contained within the mix-zone, which is difficult to achieve in practice.
An enhanced trusted anonymiser approach has
also been proposed, which allows the users to set their
level of privacy based on the value of k. This means that,
given the overhead of the anonymiser, a small value of k
could be used to increase the efficiency. Conversely, a
large value of k could be chosen to improve the privacy,
if the users felt that their position data could be used
maliciously. Choosing a value for k, however, seems
unnatural. There have been efforts to make the process
less artificial by adding the concept of feeling-based
privacy. Instead of specifying a k, they propose that the
user specifies a cloaking region that they feel will protect
their privacy, and the system sets the number of cells for
the region based on the popularity of the area. The
popularity is computed using a historical footprint database collected by the server.
New privacy metrics have been proposed that capture the users' privacy with respect to LBSs. The authors begin by analysing the shortcomings of simple k-anonymity in the context of location queries. Next, they propose privacy metrics that enable the users to specify values that better match their query privacy requirements. From these privacy metrics they also derive spatial generalization algorithms that satisfy the user's privacy requirements.
Methods have also been proposed to confuse and distort
the location data, which include path and position
confusion. Path confusion was presented by Hoh and
Gruteser. The basic idea is to add uncertainty to the
location data of the users at the points where the paths of the users cross, making it hard to trace users based on raw
location data that was k-anonymised. Position confusion
has also been proposed as an approach to provide
privacy. The idea is for the trusted anonymiser to group
the users according to a cloaking region (CR), thus
making it harder for the LS to identify an individual. A
common problem with general CR techniques is that
there may exist some semantic information about the
geography of a location that gives away the user's location. For example, it would not make sense for a user to be on the water without some kind of boat. Also, different people may find certain places sensitive.
Damiani et al. have presented a framework that consists of an obfuscation engine that takes a user's profile, which contains places that the user deems sensitive, and outputs obfuscated locations based on aggregating algorithms.
As solutions based on the use of a central
anonymiser are not practical, Hashem and Kulik
presented a scheme whereby a group of trusted users
construct an ad-hoc network and the task of querying the
LS is delegated to a single user. This idea improves on
the previous work by the fact that there is no single point
of failure. If a user that is querying the LS suddenly goes
offline, then another candidate can be easily found.
However, generating a trusted ad-hoc network in a real-world scenario is not always possible.
Another method for avoiding the use of a
trusted anonymiser is to use 'dummy' locations. The basic idea is to confuse the location of the user by sending many random other locations to the server, such that the server cannot distinguish the actual location from the fake locations. This incurs both processing and communication overhead for the user device. The user has to randomly choose a set of fake locations as well as transmit them over a network, wasting bandwidth. We refer the interested reader to Krumm for a more
detailed survey in this area.
Most of the previously discussed issues are solved with the introduction of a private information retrieval (PIR) location scheme. The basic idea is to employ PIR to enable the user to query the location database without compromising the privacy of the query. Generally speaking, PIR schemes allow a user to retrieve data (a bit or a block) from a database without disclosing to the database server the index of the data to be retrieved. Ghinita et al. used a variant of PIR which is based on the quadratic residuosity problem. Basically, the quadratic residuosity problem states that it is computationally hard to determine whether a number q is a quadratic residue of some composite modulus n (i.e., whether x^2 = q (mod n) has a solution x) when the factorisation of n is unknown.
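A small sketch may make the asymmetry concrete: with the factors of n, Euler's criterion decides residuosity cheaply, while no efficient test is known without them. The primes below are toy values, far smaller than any real modulus.

p, q = 1000003, 1000033        # toy primes; real moduli are far larger
n = p * q

def is_qr_knowing_factors(a):
    # Euler's criterion modulo each prime factor (a must be coprime to n)
    return pow(a, (p - 1) // 2, p) == 1 and pow(a, (q - 1) // 2, q) == 1

x = 123456789
assert is_qr_knowing_factors(pow(x, 2, n))   # squares are always residues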
This idea was extended to provide database
protection. This protocol consists of two stages. In the
first stage, the user and server use homomorphic
encryption to allow the user to privately determine
whether his/her location is contained within a cell,
without disclosing his/her coordinates to the server. In
the second stage, PIR is used to retrieve the data
contained within the appropriate cell.
1.2 Our Contributions
In this paper, we propose a novel protocol for location-based queries that has major performance improvements with respect to the approach by Ghinita et al. Like that protocol, our protocol is organized in two stages. In the first stage, the user privately determines his/her location within a public grid, using oblivious transfer. This data contains both the ID and the associated symmetric key for the block of data in the private grid. In the second stage, the user executes a communicationally efficient PIR to retrieve the appropriate block in the private grid. This block is decrypted using the symmetric key obtained in the previous stage.
Our protocol thus provides protection for both the user and the server. The user is protected because the server is unable to determine his/her location. Similarly, the server's data is protected, since a malicious user can only decrypt the block of data obtained by PIR with the encryption key acquired in the previous stage. In other words, users cannot gain any more data than what they have paid for. We remark that this paper is an enhancement of a previous work. In particular, the following contributions are made: 1) we redesigned the key structure; 2) we added a formal security model; 3) we implemented the solution on both a mobile device and a desktop machine. As with our previous work, the implementation demonstrates the efficiency and practicality of our approach.
2 PROTOCOL MODEL
2.1 Notations
Let x ← y be the assignment of the value of variable y to variable x, and E ⇐ v be the transfer of the variable v to entity E. Denote the ElGamal [9] encryption of message m as E(m, y) = A = (A1, A2) = (g^r, g^m · y^r), where g is a generator of group G, y is the public key of the form y = g^x, and r is chosen at random. This will be used as a basis for constructing an adaptive oblivious transfer scheme. Note that A is a vector, while A1, A2 are elements of the vector. The cyclic group G0 is a multiplicative subgroup of the finite field Fp, where p is a large prime number and q is a prime that divides (p − 1). Let g0 be a generator of group G0, with order q. Let G1 be a multiplicative subgroup of the finite field Fq, with distinct generators g1 and g2, where both have prime order dividing (q − 1). Based on this definition, groups G0 and G1 can then be linked together in keys of the form g0^(g1^x · g2^y), where x and y are variable integers. This will be used in our application to generate an ElGamal cryptosystem instance in group G1. We denote |p| to be the bit length of p, ⊕ to be the exclusive OR operator, a||b to be the concatenation of a and b, and |g| to be the order of generator g.
For security reasons, we require that |q| = 1024 and that p has the form p = 2q + 1. We also require that the parameters G0, g0, G1, g1, g2, p, q be fixed for the duration of a round of our protocol and be made publicly accessible to every entity in our protocol.
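As a concrete illustration of this notation, here is a minimal Python sketch of exponent-encoded ElGamal with toy parameters (p = 2q + 1 with p = 2039, far below the 1024-bit requirement above); the brute-force recovery of m from g^m is viable only at this toy scale.

import secrets

p, q = 2039, 1019                 # toy safe prime p = 2q + 1 (|p| must be
g = 4                             # 1024 bits in the real protocol); g has order q

x = secrets.randbelow(q - 1) + 1  # private key
y = pow(g, x, p)                  # public key y = g^x

def encrypt(m):
    r = secrets.randbelow(q - 1) + 1
    return pow(g, r, p), (pow(g, m, p) * pow(y, r, p)) % p   # A = (A1, A2)

def decrypt(A1, A2):
    gm = (A2 * pow(A1, q - x, p)) % p   # strip y^r = A1^x, leaving g^m
    for m in range(q):                  # m sits in the exponent; brute-force
        if pow(g, m, p) == gm:          # recovery is viable only at toy scale
            return m

assert decrypt(*encrypt(42)) == 42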
2.2 System Model
The system model consists of three types of entities (see Fig. 1): the set of users U who wish to access location data, a mobile service provider SP, and a location server LS. From the point of view of a user, the SP and LS together compose a server, which serves both functions. The user does not need to be concerned with the specifics of the communication.
The users in our model use some location-based service provided by the location server LS, for example: what is the nearest ATM or restaurant? (In this paper we use the term "user" to refer to the entity issuing queries and retrieving query results. In most cases, such a user is client software executing on behalf of a human user.)
Figure 1a. System Architecture
Fig. 1b. System model.
The purpose of the mobile service provider SP is to establish and maintain the communication between the location server and the user. The location server LS owns a set of POI records ri, for 1 ≤ i ≤ ρ. Each record describes a POI, giving GPS coordinates of its location (xgps, ygps) and a description or name of what is at the location.
We reasonably assume that the mobile service
provider SP is a passive entity and is not allowed to
collude with the LS. We make this assumption because
the SP can determine the whereabouts of a mobile
device, which, if allowed to collude with the LS,
completely subverts any method for privacy. There is
simply no technological method for preventing this
attack. As a consequence of this assumption, the user is
able to either use GPS (Global Positioning System) or the
mobile service provider to acquire his/her coordinates.
Since we are assuming that the mobile service provider SP is trusted to maintain the connection, we consider only two possible adversaries, one for each communication direction. First, we consider the case in which the user is the adversary and tries to obtain more than he/she is allowed. Next, we consider the case in which the location server LS is the adversary and tries to uniquely associate a user with a grid coordinate.
2.3 Security Model
Before we define the security of our protocol, we
introduce the concept of k out of N adaptive oblivious
transfer as follows.
Definition 1 (k out of N adaptive oblivious transfer, OT_k^(N×1) [26]). OT_k^(N×1) protocols contain two phases, initialization and transfer. The initialization phase is run by the sender (Bob), who owns the N data elements X1, X2, ..., XN. Bob typically computes a commitment to each of the N data elements, with a total overhead of O(N), and sends the commitments to the receiver (Alice). The transfer phase is used to transmit a single data element to Alice. At the beginning of each transfer, Alice has an input I, and her output at the end of the phase should be data element XI. An OT_k^(N×1) protocol supports up to k successive transfer phases.
Built on the above definition, our protocol is composed of an initialisation phase and a transfer phase. We now outline the steps required for these phases and then formally define their security.
Our initialisation phase is run by the sender (server), who owns a database of location data records and a 2-dimensional key matrix Km×n, where m and n are the numbers of rows and columns, respectively. An element in the key
matrix is referenced as ki,j. Each ki,j in the key matrix
uniquely encrypts one record. A set of prime powers S = {p1^c1, ..., pN^cN}, where N is the number of blocks, is available to the public. For each element pi^ci of S, pi is a prime and ci is a small natural number such that pi^ci is greater than the block size (where each block contains a number of POI records). For convenience, we require that the elements of S follow a predictable pattern. In addition, the server sets up a common security parameter k for the system.
Our transfer phase is constructed using six
algorithms: QG1, RG1, RR1, QG2, RG2, RR2. The first
three compose the first phase (Oblivious Transfer Phase),
while the last three compose the second phase (Private
Information Retrieval Phase). The following six
algorithms are executed sequentially and are formally
described as follows.
Oblivious Transfer Phase
1) QueryGeneration1 (Client) (QG1):
Takes as input indices i,j, and the dimensions of the key
matrix m,n, and outputs a query Q1 and secret s1,
denoted as (Q1,s1) = QG1(i,j,m,n).
2) ResponseGeneration1(Server) (RG1):
Takes as input the key matrix Km×n, and the query Q1,
and outputs a response R1, denoted as (R1) =
RG1(Km×n,Q1).
3) ResponseRetrieval1 (Client) (RR1):
Takes as input indices i,j, the dimensions of the key matrix m,n, the query Q1 and the secret s1, and the response R1, and outputs a cell key ki,j and cell-id IDi,j, denoted as (ki,j, IDi,j) = RR1(i,j,m,n,(Q1,s1),R1).
Private Information Retrieval Phase
4) QueryGeneration2 (Client) (QG2):
Takes as input the cell-id IDi,j, and the set of prime
powers S, and outputs a query Q2 and secret s2, denoted
as (Q2,s2) = QG2(IDi,j,S).
5) ResponseGeneration2 (Server) (RG2):
Takes as input the database D, the query Q2, and the set
of prime powers S, and outputs a response R2, denoted
as (R2) = RG2(D,Q2,S).
6) ResponseRetrieval2 (Client) (RR2):
Takes as input the cell-key ki,j and cell-id IDi,j, the
query Q2 and secret s2, the response R2, and outputs the
data d, denoted as (d) =
RR2(ki,j,IDi,j,(Q2,s2),R2).
Our transfer phase can be repeatedly used to retrieve
points of interest from the location database.
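The following Python skeleton is a data-flow sketch only: it shows how the outputs of the six algorithms chain together, using plain lookups and XOR in place of the oblivious transfer and PIR cryptography, so it demonstrates the interfaces, not the privacy properties.

def QG1(i, j, m, n):           return ((i, j), None)          # query Q1, secret s1
def RG1(K, Q1):                return K[Q1[0]][Q1[1]]         # response R1
def RR1(i, j, m, n, Q1s1, R1): return R1                      # (cell key, cell-id)
def QG2(ID, S):                return (ID, None)              # query Q2, secret s2
def RG2(D, Q2, S):             return D[Q2]                   # response R2
def RR2(k, ID, Q2s2, R2):      return bytes(b ^ k for b in R2)  # decrypted data d

m = n = 2
K = [[(7, 0), (9, 1)], [(3, 2), (5, 3)]]        # (key, cell-id) per public cell
D = [bytes(b ^ key for b in poi)                # private grid, one block per cell
     for key, poi in [(7, b"POI-A"), (9, b"POI-B"), (3, b"POI-C"), (5, b"POI-D")]]

(Q1, s1) = QG1(0, 1, m, n)
k_ij, ID_ij = RR1(0, 1, m, n, (Q1, s1), RG1(K, Q1))
(Q2, s2) = QG2(ID_ij, None)
assert RR2(k_ij, ID_ij, (Q2, s2), RG2(D, Q2, None)) == b"POI-B"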
With these functions described, we can build security
definitions for both the client and server.
Definition 2 (Client's Security (Indistinguishability) [26]). In an OT_k^(N×1) protocol, for any step 1 ≤ t ≤ k, for any previous items I1, ..., It−1 that the receiver has obtained in the first t − 1 transfers, for any 1 ≤ It, I't ≤ N, and for any probabilistic polynomial time B executing the server's part, the views that B sees in the case the client tries to obtain XIt and in the case the client tries to obtain XI't are computationally indistinguishable given X1, X2, ..., XN.
Since PIR does not require that a user be constrained to obtaining only one bit/block, the location server needs to implement some protection for its records. This is achieved by encrypting each record in the POI database with a symmetric-key algorithm, where the key for encryption is the same key used for decryption. This key is augmented with the cell info data retrieved by the oblivious transfer query. Hence, even if the user uses PIR to obtain more than one record, the data will be meaningless, resulting in improved security for the server's database. Before we describe the protocol in detail, we describe some initialisation performed by both parties.
An oblivious transfer query is such that a server cannot learn the user's query, while the user cannot gain more than he/she is entitled to. This is similar to PIR, but oblivious transfer requires protection for both the user and the server; PIR only requires that the user is protected.
Figure 2. High level overview of the protocol.
Definition 3 (Server's Security (Comparison with the Ideal Model)). We compare an OT_k^(N×1) protocol to the ideal implementation, using a trusted third party that gets the server's input X1, X2, ..., XN and the client's requests, and gives the client the data elements she has requested. For every probabilistic polynomial-time machine A' substituting the client, there exists a probabilistic polynomial-time machine A that plays the receiver's role in the ideal model such that the outputs of A' and A are computationally indistinguishable.
3 PROTOCOL DESCRIPTION
We now describe our protocol. We first give a protocol
summary to contextualize the proposed solution and then describe the solution's protocol in more detail.
3.1 Protocol Summary
The ultimate goal of our protocol is to obtain a set (block) of POI records from the LS which are close to the user's position, without compromising the privacy of the user or the data stored at the server. We achieve this by applying the two-stage approach shown in Fig. 2. The first stage is based on a two-dimensional oblivious transfer and the second stage is based on a communicationally efficient PIR. The oblivious transfer based protocol is used by the user to obtain the cell ID of the cell where the user is located, together with the corresponding symmetric key. The knowledge of the cell ID and the symmetric key is then used in the PIR based protocol to obtain and decrypt the location data.
The user determines his/her location within a publicly generated grid P by using his/her GPS coordinates, and forms an oblivious transfer query. The minimum dimensions of the public grid are defined by the server and are made available to all users of the system. This public grid is superimposed over the privately partitioned grid generated from the location server's POI records, such that for each cell Qi,j in the server's partition there is at least one cell Pi,j from the public grid. This is illustrated in Fig. 3.
Fig. 3. Public grid superimposed over the private grid
3.2 Initialization
A user u from the set of users U initiates the protocol
process by deciding a suitable square cloaking region
CR, which contains his/her location. All user queries will
be with respect to this cloaking region. The user also
decides on the accuracy of this cloaking region by choosing how many cells are contained within it; the cell size cannot be smaller than the minimum size defined by the location server. This information is combined with the dimensions of the CR to form the public grid P and submitted to the location server, which partitions its records or superimposes the grid over pre-partitioned records (see Fig. 3). This partition is denoted Q (note that its cells do not necessarily need to be the same size as the cells of P). Each cell in the partition Q must have the
same number rmax of POI records. Any variation in this
number could lead to the server identifying the user. If
this constraint cannot be satisfied, then dummy records
can be used to make sure each cell has the same amount
of data. We assume that the LS does not populate the
private grid with misleading or incorrect data, since such
action would result in the loss of business under a
payment model.
Next, the server encrypts each record ri within
each cell of Q, Qi,j, with an associated symmetric key
ki,j. The encryption keys are stored in a small (virtual)
database table that associates each cell in the public grid
P, Pi,j, with both a cell in the private grid Qi,j and
corresponding symmetric key ki,j. This is shown by Fig.
4.
The server then processes the encrypted records within
each cell Qi,j such that the user can use an efficient PIR
[11], to query the records. Using the private partition Q, the server represents each associated (encrypted) datum as an integer Ci with respect to the cloaking region. For each Ci, the server chooses a unique prime power πi = pi^ci such that Ci < πi. We note that the exponent ci must be small for the protocol to work efficiently. Finally, the server uses the Chinese Remainder Theorem to find the smallest integer e such that e = Ci (mod πi) for all Ci. The integer e effectively represents the database. Once the initialisation is complete, the user can proceed to query the location server for POI records.
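The following minimal sketch shows this encoding step with toy prime powers; the specific values are illustrative assumptions (real deployments need each πi larger than the 1024-bit block size).

from math import prod

pi = [27, 25, 49, 121]     # pairwise-coprime prime powers p_i^c_i (toy sizes)
C  = [11, 7, 30, 100]      # encrypted blocks as integers, each C_i < pi_i

M = prod(pi)
# Chinese Remainder Theorem: smallest e with e = C_i (mod pi_i) for all i
e = sum(c * (M // m) * pow(M // m, -1, m) for c, m in zip(C, pi)) % M

assert all(e % m == c for c, m in zip(C, pi))   # e mod pi_i recovers block C_i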
3.3 Oblivious Transfer Phase
The purpose of this protocol is for the user to obtain one
and only one record from the cell in the public grid P,
shown in Fig. 4. We achieve this by constructing a two-dimensional oblivious transfer, based on ElGamal encryption, using the adaptive oblivious transfer proposed by Naor et al.
The public grid P, known by both parties, has m columns and n rows. Each cell in P contains a symmetric key ki,j and a cell id in grid Q, i.e. (IDQi,j, ki,j), which can be represented by a stream of bits Xi,j. The user determines his/her i,j coordinates in the public grid, which are used to acquire the data from the cell within the grid. The protocol is initialised by the server by generating m×n keys of the form g0^(g1^Ri · g2^Cj). We remark that a key structure of this form is an enhancement over our previous scheme, as the client does not have access to the individual components of the key. This initialisation is presented in Algorithm 1.
Algorithm 1 is executed once and the output Y1,1, ..., Ym,n is sent to the user. At this point, the user can query this information using the indices i and j as input. This protocol is presented in Algorithm 2.
At the conclusion of the protocol presented by Algorithm
2, the user has the information to query the location
server for the associated block.
Theorem 1 (Correctness). Assume that the user and server follow Algorithms 1 and 2 correctly. Let Xi,j be the bit string encoding the pair (IDQi,j, ki,j), and let X'i,j be the bit string generated by Algorithm 2 (Step 19) as X'i,j = Yi,j ⊕ H(K'i,j). Then X'i,j = Xi,j.
Proof. We begin this proof by showing that K'i,j = Ki,j, where K'i,j is the key obtained by the user according to Algorithm 2 (Step 18). In the initialisation algorithm (Algorithm 1), Ki,j is calculated as Ki,j = g0^(g1^Ri · g2^Cj). At the end of the transfer protocol, the user computes K'i,j as γ^(W3·W4), where W3 can be simplified as follows when α = i:
W3 = V1,i · W1
   = g1^(Ri·rR) · (g1^α · g1^(−i) · y1^(r1))^(r'1) · U1,i^(−x1)
   = g1^(Ri·rR) · (y1^(r1))^(r'1) · U1,i^(−x1)
   = g1^(Ri·rR) · (g1^(x1·r1))^(r'1) · (g1^(r1·r'1))^(−x1)
   = g1^(Ri·rR).
A symmetric simplification applies to W4, from which K'i,j = Ki,j follows, and therefore X'i,j = Yi,j ⊕ H(K'i,j) = Yi,j ⊕ H(Ki,j) = Xi,j.
4 SECURITY ANALYSIS
4.1 Client’s Security
In the oblivious transfer phase, each coordinate of the location is encrypted by the ElGamal encryption scheme, e.g., (g1^(r1), g1^(−i) · y1^(r1)). It has been shown that the ElGamal encryption scheme is semantically secure: given the encryption of one of two plaintexts m1 and m2 chosen by a challenger, the challenger cannot determine which plaintext was encrypted with probability significantly greater than 1/2 (the success rate of random guessing). In view of this, the server cannot distinguish any two queries of the client from each other in this phase.
In the private information retrieval phase, the security of the client is built on the Gentry-Ramzan private information retrieval protocol, which is based on the phi-hiding (φ-hiding) assumption.
4.2 Server’s Security
Intuitively, the server's security requires that the client can retrieve only one record in each query to the server, and that the server does not disclose other records to the client in the response. Our protocol achieves the server's security in the oblivious transfer phase, which is built on the Naor-Pinkas oblivious transfer protocol.
Our Algorithm 1 is the same as the Naor-Pinkas oblivious transfer protocol except for the one-out-of-n oblivious transfer component, which is built on the ElGamal encryption scheme. In the generation of the first response (RG1), the server computes C1,α for 1 ≤ α ≤ n, where B1 = g1^(−i) · y1^(r1), and sends C1,α (1 ≤ α ≤ n) to the client. Only when α = i is C1,α the encryption of g1^(Ri·rR). When α ≠ i, C1,α is the encryption of g1^(Rα·rR) · g1^(rα), where rα is unknown to the client. Because the discrete logarithm is hard, the client cannot determine rα from A1^(rα); therefore g1^(Rα·rR) is blinded by the random factor g1^(rα). In view of this, the client can retrieve the useful g1^(Ri·rR) only from C1,i. Then, following the Naor-Pinkas oblivious transfer protocol, the client can retrieve the encryption key ki,j only once at the end of the phase.
In the private information retrieval phase, even if the client can retrieve more than one encrypted record, he/she can decrypt only one record, using the encryption key ki,j retrieved in the first phase.
Fig. 4. Association between the public and private grids.
Based on the above analysis, we obtain the following
result.
Theorem 4. Assume that the discrete logarithm problem is hard and that the Naor-Pinkas protocol is a secure oblivious transfer protocol; then our protocol has server security.
The previous solution used a cell key of the form g^Ri || g^Cj, where g^Ri and g^Cj are the row and column keys, respectively. If the user queries the database once, the user can get one cell key only. However, if the user queries the database twice, the user is able to get 4 cell keys. In this paper we overcome this security weakness by using a cell key of the form g0^(g1^Ri · g2^Cj), where both key components are protected by the discrete logarithm problem.
5 PERFORMANCE ANALYSIS
5.1 Computation
The transfer protocol is initiated by the user, who chooses indices i and j. According to our protocol, the user needs to compute (A1, B1) = (g1^(r1), g1^(−i) · y1^(r1)) and (A2, B2) = (g2^(r2), g2^(−j) · y2^(r2)). Since the user knows the discrete logarithms of both y1 and y2 (i.e. x1 and x2, respectively), the user can compute these as (A1, B1) = (g1^(r1), g1^(−i + x1·r1)) and (A2, B2) = (g2^(r2), g2^(−j + x2·r2)). Hence, the user has to compute 4 exponentiations to generate his/her query.
Upon receiving the user's query, the server needs to compute the responses C1,α for 1 ≤ α ≤ n and C2,β for 1 ≤ β ≤ m. Since gα and gβ can be precomputed and the server knows the discrete logs rR and rC, the server has to compute 3n + 3m exponentiations, plus an additional exponentiation for computing γ.
The user requires 3 additional exponentiations
to determine Ki,j. After the user has determined Ki,j,
he/she can determine Xi,j and proceed with the PIR
protocol. This protocol requires 3 more exponentiations,
2 performed by the user and 1 performed by the server.
In terms of multiplications, the user has to perform 2|N| operations and the server has to perform |e| operations. The user also has to compute the discrete logarithm to base h of h^e. This process can be expedited by using the Pohlig-Hellman discrete logarithm algorithm. The running time of the Pohlig-Hellman algorithm is governed by the factorisation of the group order, where r is the number of unique prime factors. In our case, the order of the group is πi = pi^ci and the number of unique prime factors is r = 1, resulting in a running time of O(c·(lg p^c + √p)).
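A compact sketch of Pohlig-Hellman for a group whose order is the prime power p^c may help here; the digit search below is by trial (O(p) per digit), whereas using baby-step giant-step per digit yields the √p term in the bound just quoted. The parameters are toy values, not those of the protocol.

def pohlig_hellman_prime_power(g, h, p, c, mod):
    # Solve g^x = h (mod mod), where g has order p**c.
    n = p ** c
    x = 0
    gamma = pow(g, n // p, mod)                  # element of order p
    for k in range(c):
        # strip the digits found so far, project into the order-p subgroup
        hk = pow(h * pow(g, n - x, mod) % mod, n // p ** (k + 1), mod)
        d = next(d for d in range(p) if pow(gamma, d, mod) == hk)
        x += d * p ** k                          # k-th base-p digit of x
    return x

# toy check: 3 has order 16 = 2**4 modulo 17
assert pohlig_hellman_prime_power(3, pow(3, 11, 17), 2, 4, 17) == 11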
Once the user has determined his/her cell index, he/she can proceed with the PIR protocol described earlier to retrieve the data. The PIR is based on the quadratic residuosity problem, which allows the user to privately query the database. Let t be the total number of bits in the database, arranged in a rows and b columns. The user and the server have to compute 2(√(a × b)) × |N2| and a × b multiplications, respectively. We remark that multiplying the whole database by a string of numbers, which is required by the PIR protocol based on the quadratic residuosity problem, is equivalent to computing g^e in our PIR protocol. The size of the number e is principally defined by the prime powers. In general, it takes about η bits to store e, and we would expect to be multiplying about η/2 of the time when using the square-and-multiply method for fast exponentiation. This is roughly equivalent to the a × b multiplications required in the Ghinita et al. protocol.
5.2 Communication
In our proposed solution, the user needs 4L communications, while the server requires 2(m + n)2L + L communications in the oblivious transfer protocol. In
the PIR protocol, the user and server exchange one group
element each.
The performance analyses for stage 1 (user location test) and stage 2 (private information retrieval) are summarised in Tables 1 and 2, respectively, where the
computation in Table 1 is in terms of exponentiation and
the computation in Table 2 is in terms of multiplication.
When we analyse the difference in
performance between our solution and the one by Ghinita
et al., we find that our solution is more efficient. The
performance of the first stage of each protocol is about
the same, except that our solution requires O(m+n)
operations while the solution by Ghinita et al. requires
O(m×n). In the second stage, our protocol is far more
efficient with respect to communication, in that it
requires the transmission of only 2 group elements
whereas the Ghinita et al. solution requires the exchange
of an a × b matrix.
6 EXPERIMENTAL EVALUATION
We implemented our location based query solution on a
platform consisting of: a desktop machine, running the
server software of our protocols; and a mobile phone,
running the client software of our protocols. For both
platforms, we measured the required time for the oblivious transfer and private information retrieval protocols separately, to test the performance of each protocol and the relative performance between the two protocols. The implementation on the mobile phone platform was programmed using the Android Development Platform, which is a Java-based programming environment. The mobile device used was a Sony Xperia S with a dual-core 1.5 GHz CPU and 1 GB of RAM. The whole solution was executed for 100 trials, where the time taken (in seconds) for each major component was recorded and the average time was calculated. The parameters for our experiment were the same on both platforms, and are described next.
TABLE 3: Oblivious Transfer Experimental Results for Desktop and Mobile Platforms
6.1 Experimental Parameters
6.1.1 Oblivious Transfer Protocol
In our implementation experiment for the oblivious transfer protocol, we generated a modified ElGamal instance with |p| = 1024 and |q| = 160, where q | (p − 1). We also found a generator a and set g0 = a^q (g0 has order q). We also set a generator g1, which has order q − 1. We set the public matrix P to be a 25 × 25 matrix of key and index information.
We first measured the time required to generate a matrix of keys according to Algorithm 1. This procedure only needs to be executed once for the lifetime of the data. There is a requirement that each hash value of g0^(g1^Ri · g2^Cj) be unique. We use SHA-1 to compute the hash H(·), and we assume that there is negligible probability that a number will repeat in the matrix.
6.1.2 Private Information Retrieval Protocol
In the PIR protocol we fixed a 15×15 private matrix, which contains the data owned by the server. We chose the prime set to be the first 225 primes, starting at 3. The powers of the primes were chosen to allow for a block size of at least 1024 bits (3647, 5442, ..., 142998). A random value Ci was chosen for each prime power, with e = Ci (mod πi), and the Chinese Remainder Theorem was used to determine the smallest possible e satisfying this system of congruences.
Once the database has been initialised, the user can initiate the protocol by issuing the server his/her query. The query consists of finding a suitable group whose order is divisible by one of the prime powers πi.
TABLE 4: Private Information Retrieval Experimental Results for Desktop and Mobile Platforms
6.2 Experimental Results
When we compare this outcome with our previous result, we find that the protocol is still practical. For this comparison, we consider the performance of the client to be the most important, since we assume that the server is very powerful. Compared with the previous work, the first stage on the client side is 4-7 times faster, while in the second stage the client side is 2 times slower. We must keep in mind that the client side was implemented on a desktop machine in the previous work, which makes the second stage appear slower here. Also, we replaced the hash algorithm with an exponentiation operation that reduced the group space for g^Ri · g^Cj from 1024 to 160 bits. The security of this structure is protected by an outer group of 1024 bits. Because the client cannot directly access g^Ri · g^Cj, since the discrete logarithm is hard in the outer group, the client must operate in the outer group to remove the blinding factors. This contributed to the faster execution of the first stage.
7 CONCLUSION
In this paper we presented a location-based query solution that employs two protocols enabling a user to privately determine and acquire location data. The first step is for the user to privately determine his/her location using oblivious transfer on a public grid. The second step involves a private information retrieval interaction that retrieves the record with high communication efficiency. We analysed the performance of our protocol and found it to be both computationally and communicationally more efficient than the solution by Ghinita et al., which is the most recent solution. We implemented a software prototype using a desktop machine and a mobile device. The software prototype demonstrates that our protocol is within practical limits.
Future work will involve testing the protocol on many different mobile devices; the mobile results provided here may differ on other mobile devices and software environments. The overhead of the primality test used in the private information retrieval based protocol also needs to be reduced. Additionally, the problem of the LS supplying misleading data to the client is also interesting. Privacy-preserving reputation techniques seem a suitable approach to address this problem, and a possible solution could integrate such methods. Once suitably strong solutions exist for the general case, they can be easily integrated into our approach.
ACKNOWLEDGMENTS
This work was supported in part by ARC Discovery Project (DP0988411) "Private Data Warehouse Query" and in part by NSF award (1016722) "TC: Small: Collaborative: Protocols for Privacy-Preserving Scalable Record Matching and Ontology Alignment".
REFERENCES
[1] OpenSSL (2011, Jul. 7) [Online]. Available: http://www.openssl.org/
[2] M. Bellare and S. Micali, "Non-interactive oblivious transfer and applications," in Proc. CRYPTO, 1990, pp. 547-557.
[3] A. Beresford and F. Stajano, "Location privacy in pervasive computing," IEEE Pervasive Comput., vol. 2, no. 1, pp. 46-55, Jan.-Mar. 2003.
[4] C. Bettini, X. Wang, and S. Jajodia, "Protecting privacy against location-based personal identification," in Proc. 2nd VLDB Int. Conf. SDM, W. Jonker and M. Petkovic, Eds., Trondheim, Norway, 2005, pp. 185-199, LNCS 3674.
[5] X. Chen and J. Pang, "Measuring query privacy in location-based services," in Proc. 2nd ACM CODASPY, San Antonio, TX, USA, 2012, pp. 49-60.
[6] B. Chor, E. Kushilevitz, O. Goldreich, and M. Sudan, "Private information retrieval," J. ACM, vol. 45, no. 6, pp. 965-981, 1998.
[7] M. Damiani, E. Bertino, and C. Silvestri, "The PROBE framework for the personalized cloaking of private locations," Trans. Data Privacy, vol. 3, no. 2, pp. 123-148, 2010.
[8] M. Duckham and L. Kulik, "A formal model of obfuscation and negotiation for location privacy," in Proc. 3rd Int. Conf. Pervasive Comput., H. Gellersen, R. Want, and A. Schmidt, Eds., 2005, pp. 243-251, LNCS 3468.
[9] T. ElGamal, "A public key cryptosystem and a signature scheme based on discrete logarithms," IEEE Trans. Inform. Theory, vol. 31, no. 4, pp. 469-472, Jul. 1985.
[10] B. Gedik and L. Liu, "Location privacy in mobile systems: A personalized anonymization model," in Proc. ICDCS, Columbus, OH, USA, 2005, pp. 620-629.
[11] C. Gentry and Z. Ramzan, "Single-database private information retrieval with constant communication rate," in Proc. ICALP, L. Caires, G. Italiano, L. Monteiro, C. Palamidessi, and M. Yung, Eds., Lisbon, Portugal, 2005, pp. 803-815, LNCS 3580.
[12] G. Ghinita, P. Kalnis, M. Kantarcioglu, and E. Bertino, "A hybrid technique for private location-based queries with database protection," in Proc. Adv. Spatial Temporal Databases, N. Mamoulis, T. Seidl, T. Pedersen, K. Torp, and I. Assent, Eds., Aalborg, Denmark, 2009, pp. 98-116, LNCS 5644.
[14] G. Ghinita, P. Kalnis, A. Khoshgozaran, C. Shahabi, and K.-L. Tan, "Private queries in location based services: Anonymizers are not necessary," in Proc. ACM SIGMOD, Vancouver, BC, Canada, 2008, pp. 121-132.
[15] G. Ghinita, C. R. Vicente, N. Shang, and E. Bertino, "Privacy preserving matching of spatial datasets with protection against background knowledge," in Proc. 18th SIGSPATIAL Int. Conf. GIS, 2010, pp. 3-12.
Degenerate Delay-Multicast Capacity Tradeoffs in
MANETs
S. Hemavarthini, M.E (CSE) student; Ms. T. Kavitha, M.E., Project Guide; Mr. S. Subbiah, M.E., (Ph.D)., HOD, Trichy Engineering College
Abstract— This paper considers Mobile Ad Hoc Networks (MANETs) and gives a global perspective on multicast capacity and delay analysis. Each node moves around in the whole network, and to reduce delays in the network each user sends redundant packets along multiple paths to the destination. The network is assumed to have a cell-partitioned structure, and users move according to a simplified independent and identically distributed mobility model. Categorically, four node mobility models are considered: two-dimensional i.i.d. mobility, two-dimensional hybrid random walk, one-dimensional i.i.d. mobility, and one-dimensional hybrid random walk. Two mobility time-scales are used: (i) fast mobility, where node mobility is at the same time-scale as data transmissions, and (ii) slow mobility, where node mobility is assumed to occur at a much slower time-scale than data transmissions. Given a delay constraint D, we first characterize the optimal multicast capacity for each of the eight types of mobility models, and then develop a scheme that can achieve a capacity-delay tradeoff close to the upper bound, up to a logarithmic factor, in a homogeneous network. This paper proposes that slow mobility brings better performance than fast mobility, because there are more possible routing schemes in the network, and also provides security for the packet data shared in the network.
Index Terms—Multicast capacity and delay tradeoffs, mobile ad hoc networks (MANETs), independent and identically distributed (i.i.d.) mobility models, hybrid random walk mobility models.
I. INTRODUCTION
Understanding the fundamental achievable capacity in wireless ad hoc networks, and how to improve network performance in terms of capacity and delay, has been a central issue. Many works have investigated the improvement obtained by introducing different kinds of mobility into the network, and have studied the delay-constrained multicast capacity by characterizing the capacity scaling law. The scaling approach has been intensively used to study the capacity of ad hoc networks, including both static and mobile networks. We consider a MANET consisting of ns multicast sessions. Each multicast session has one source and p destinations. The wireless mobiles are assumed to move according to a two-dimensional independent and identically distributed (2D-i.i.d.) mobility model. Each source sends identical information to the p destinations in its multicast session, and the information is required to be delivered to all the p destinations within D time-slots. Finally, we evaluate the performance of our algorithm using simulations. We apply the algorithm to the 2D i.i.d. mobility model, the random-walk model and the random waypoint model. The simulations confirm that the results obtained from the 2D-i.i.d. model hold for more realistic mobility models.
We give a general analysis of the optimal multicast capacity-delay tradeoffs in both homogeneous and heterogeneous MANETs. We consider a mobile wireless network that consists of n nodes, among which ns = n^s nodes are selected as sources and nd = n^a destination nodes are chosen for each. Thus, ns multicast sessions are formed. Our results for homogeneous networks are further used to study the heterogeneous network, where m = n^b base stations are connected.
Ever since a maximum per-node throughput was established for a static network with n nodes, there has been tremendous interest in the networking research community in understanding the fundamental achievable capacity of wireless ad hoc networks and in improving network performance.
As the demand for information sharing increases rapidly, multicast flows are expected to be predominant in many emerging applications, such as order delivery in battlefield networks and wireless video conferences. Related work includes static, mobile and hybrid networks, introducing mobility into the multicast traffic pattern. Fast mobility was assumed; capacity and delay were calculated under two particular algorithms, and a tradeoff was derived from them, where k was the number of destinations per source. In their work, the network is partitioned into cells, and a TDMA-like scheme is used to avoid interference. Zhou and Ying also studied the fast mobility model and provided an optimal tradeoff under their network assumptions. Specifically, they considered a network that consists of ns multicast sessions, each of which had one source and p destinations. They showed that, given a delay constraint D, the capacity per multicast session is bounded, and a joint coding/scheduling algorithm was then proposed to achieve this throughput. In their network, each multicast session had no intersection with others among the total number of mobile nodes.
Heterogeneous networks with a multicast traffic pattern were studied by Li and Fang and by Mao et al. Wired base stations are used, and their transmission range can cover the whole network. Li and Fang studied a dense network with fixed unit area. The helping nodes in their work are wireless, but have higher power and only act as relays instead of sources or destinations; all of these works study static networks.
II BACKGROUND AND RELATED WORK
Two-dimensional i.i.d. mobility model. At the beginning of each time slot, nodes are uniformly and randomly distributed in the unit square. The node positions are independent of each other, and independent from time slot to time slot.
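A minimal sketch of this model follows; the node count and slot count are arbitrary choices for illustration.

import random

def iid_positions(n_nodes):
    # every slot, each node is redrawn uniformly in the unit square
    return [(random.random(), random.random()) for _ in range(n_nodes)]

history = [iid_positions(100) for _ in range(10)]  # 10 slots, 100 nodes, no memory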
Two-dimensional hybrid random walk model. Consider a unit square which is further divided into 1/B^2 squares of equal size. Each of the smaller squares is called a RW-cell (random walk cell) and is indexed by (Ux, Uy), where Ux, Uy ∈ {1, ..., 1/B}. A node which is in one RW-cell at a time slot moves, in the next time-slot, to one of its eight adjacent RW-cells or stays in the same RW-cell, each with the same probability. Two RW-cells are said to be adjacent if they share a common point. The node position within the RW-cell is randomly and uniformly selected; a sketch of this movement rule follows below.
Consider also a hybrid network of m base stations and n nodes, each capable of transmitting at W bits/sec over the wireless channel. In the first routing strategy, a node sends data through the infrastructure if the destination is outside of the cell where the source is located. Otherwise, the data are forwarded in a multi-hop fashion as in an ad hoc network.
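Here is the minimal sketch of the RW-cell movement rule described above; the cell size B and the clamping at the square's boundary are illustrative assumptions, since boundary handling is not specified here.

import random

B = 0.1                      # RW-cell side; the unit square has 1/B^2 = 100 cells
CELLS = round(1 / B)

def rw_step(ux, uy):
    # move to one of the 8 adjacent RW-cells or stay, all 9 equally likely
    dx, dy = random.choice([(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)])
    ux = min(max(ux + dx, 0), CELLS - 1)   # clamping at the border: assumption
    uy = min(max(uy + dy, 0), CELLS - 1)
    x = (ux + random.random()) * B         # uniform position inside the cell
    y = (uy + random.random()) * B
    return ux, uy, x, y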
One-dimensional i.i.d. mobility model. The number of mobile nodes n and the number of source nodes ns are both even. Among the mobile nodes, n/2 nodes (including ns/2 source nodes), named H-nodes, move horizontally; the other n/2 nodes (including the other ns/2 source nodes), named V-nodes, move vertically. Let (xi, yi) denote the position of node i. If node i is an H-node, yi is fixed and xi is randomly and uniformly chosen from [0, 1]. H-nodes are evenly distributed vertically, so yi takes values 2/n, 4/n, ..., 1. V-nodes have similar properties. Assume that the source and destinations in the same multicast session are the same type of nodes. Also assume that node i is an H-node if i is odd, and a V-node if i is even. The orbit distance of two H(V)-nodes is defined to be the vertical (horizontal) distance of the two nodes.
One-dimensional hybrid random walk model. Each orbit is divided into 1/B RW-intervals (random walk intervals). At each time slot, a node moves into one of the two adjacent RW-intervals or stays at the current RW-interval. The node position in the RW-interval is randomly and uniformly selected.
Two different routing strategies are used in the hybrid wireless network. The first case is that
when a source node and some of its receiver
nodes fall in the same subregion, the source node
will try to reach these receivers by the multicast
tree (may need some relay nodes) inside the
subregion. Otherwise, the source node will try to
reach the closest base station first through one- or
multi-hop, and then the latter will relay the data
to other base stations which are closest to those
receivers outside the subregion. At last, each of
these base stations carrying the data will act as a
root of a multicast tree to relay the data to
receivers by one or multihop (may need other
relaying wireless nodes).
We simply call this routing strategy pure hybrid routing. On the other hand, with an increasing number of source nodes inside one subregion, if most source nodes have some receivers outside the subregion, the base stations may bear a heavy relaying burden and thus become bottlenecks. In this case, the wireless source nodes switch to using global multicast trees to send data to their receivers rather than using base stations. This approach has the same capacity as a pure ad hoc wireless network.
The scheduler needs to decide whether to
deliver packet p to destination k in the current time
slot. If yes, the scheduler then needs to choose one
relay node (possibly the source node itself) that has a
copy of the packet p at the beginning of the time-slot,
and schedules radio transmissions to forward this
packet to destination k within the same time-slot,
using possibly multi-hop transmissions. When this
happens successfully, we say that the chosen relay
node has successfully captured the destination k of
packet p. We call this chosen relay node the last
mobile relay for packet p and destination k, and we call the distance between the last mobile relay and the destination the capture range.
Fast mobility: The mobility of nodes is at the same
time scale as the transmission of packets, i.e., in each
time-slot, only one transmission is allowed.
Slow mobility: The mobility of nodes is much slower
than the transmission of packets, i.e., multiple
transmissions may happen within one time-slot.
Three mobility models are included in this paper, and
each model will be investigated under both the fast-mobility and slow-mobility assumptions. The detailed analysis of the two-dimensional hybrid random walk model and the one-dimensional i.i.d. mobility model will be presented.
III. PROPOSED WORK:
System design is the process of defining the
architecture, components, modules, and data for a
system to satisfy specified requirements. One could
see it as the application of systems theory to product
development. There is some overlap with the
disciplines of systems analysis, systems architecture
and systems engineering. If the broader topic of
product development blends the perspective of
marketing, design, and manufacturing into a single
approach to product development, then design is the
act of taking the marketing information and creating
the design of the product to be manufactured. System
design is therefore the process of defining and
developing systems to satisfy specified requirements
of the user.
The second routing strategy is a probabilistic routing strategy. A transmission mode is independently chosen for each source-destination pair: with probability p, the ad hoc mode is employed, and with probability 1 − p, the infrastructure mode is used. By varying the probability p, a family of probabilistic routing strategies can be obtained.
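As an illustration of this strategy, the short sketch below (illustrative names only; the paper gives no code) draws the transmission mode independently for each source-destination pair:

```python
import random

def choose_mode(p: float) -> str:
    """Ad hoc mode with probability p, infrastructure mode otherwise."""
    return "ad-hoc" if random.random() < p else "infrastructure"

# Varying p sweeps out the family of probabilistic routing strategies.
modes = {pair: choose_mode(p=0.3) for pair in [("s1", "d1"), ("s2", "d2")]}
```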
These models are motivated by certain types of delay-tolerant networks, in which a satellite sub-network is used to connect local wireless networks outside of the Internet. Since the satellites move in fixed orbits, they can be modeled as one-dimensional mobilities on a two-dimensional plane. Motivated by such a delay-tolerant network, we consider a one-dimensional mobility model where n nodes move horizontally and the other n nodes move vertically. Since the node mobility is restricted to one dimension, sources have more information about the positions of destinations compared with the two-dimensional mobility models. We will see that the throughput is improved in this case; for example, under the one-dimensional i.i.d. mobility model with fast mobiles, the trade-off will be shown to be Θ(∛(pD²/n)), which is better than Θ(√(pD/n)), the trade-off under the two-dimensional i.i.d. mobility model with fast mobiles. We also propose joint coding-scheduling algorithms which achieve the optimal tradeoffs.
The first case is when a source node and some of its receiver nodes fall in the same subregion: the source node will try to reach these receivers through a multicast tree (which may need some relay nodes) inside the subregion. Otherwise, the source node will first try to reach the closest base station through one or multiple hops, and the latter will then relay the data to the other base stations closest to the receivers outside the subregion. Finally, each of these base stations carrying the data acts as the root of a multicast tree that relays the data to the receivers over one or multiple hops (possibly with the help of other relaying wireless nodes). We simply call this routing strategy pure hybrid routing. On the other hand, with an increasing number of source nodes inside one subregion, if most of the source nodes have some receivers outside the subregion, the base stations may bear a heavy burden relaying data and thus become bottlenecks. In this case, the wireless source nodes switch to using global multicast trees to send data to their receivers rather than using base stations. This approach has the same capacity as the ad hoc wireless network.
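The routing decision just described can be summarized in a short sketch. This is our own illustration under stated assumptions; the helper predicates stand in for the paper's subregion membership and base-station load tests:

```python
def route(source, receivers, same_subregion, base_station_overloaded):
    """Decide how each receiver of a multicast source is reached."""
    plan = {}
    for r in receivers:
        if same_subregion(source, r):
            plan[r] = "local multicast tree"       # stay inside the subregion
        elif not base_station_overloaded():
            plan[r] = "via closest base stations"  # relay through infrastructure
        else:
            plan[r] = "global multicast tree"      # bottleneck: bypass base stations
    return plan
```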
A. Activation of MANETs
A MANET is a continuously self-configuring, infrastructure-less network of mobile devices connected with or without wires. A group of nodes is used to transmit packet data in a secure communication. We activate the nodes by providing each node with its identifying details, such as its name, IP address, and port number. In this way we activate the MANET.
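As an illustration of this activation step, a minimal sketch follows (field names are ours, not the paper's):

```python
from dataclasses import dataclass

@dataclass
class NodeIdentity:
    name: str
    ip_address: str
    port: int

def activate(nodes):
    """Register every node's identity so it can join the MANET."""
    return {n.name: n for n in nodes}

network = activate([NodeIdentity("n1", "192.168.1.10", 5000),
                    NodeIdentity("n2", "192.168.1.11", 5001)])
```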
B. Node Mobility
Node mobility is the mechanism by which nodes in the network communicate with other nodes while moving from one position to another. Whether a node moves according to the random-walk technique depends on the circumstances, and this movement affects the packet transmission rate.
C. Packet Mobility
After network partitioning and node activation, we concentrate on packet transmission from one node to another in the network. When a node moves near a base station, it transfers a packet through slow mobility or fast mobility, depending on the circumstances.
IV. CONCLUDING REMARKS:
We studied the multicast capacity-delay tradeoffs in homogeneous mobile networks. For homogeneous networks, we established upper bounds on the optimal multicast capacity-delay tradeoffs under two-dimensional/one-dimensional i.i.d./hybrid random walk fast/slow mobility models. Slow mobility brings better performance than fast mobility because more routing schemes are possible in the network, while still providing security for the packet data shared in the network.
REFERENCES
[1] P. Gupta and P.R. Kumar, "The Capacity of Wireless Networks," IEEE Trans. Information Theory, vol. 46, no. 2, pp. 388-404, Mar. 2000.
[2] M. Neely and E. Modiano, "Capacity and Delay Tradeoffs for Ad-Hoc Mobile Networks," IEEE Trans. Information Theory, vol. 51, no. 6, pp. 1917-1937, June 2005.
[3] X. Lin and N.B. Shroff, "The Fundamental Capacity-Delay Tradeoff in Large Mobile Ad Hoc Networks," Proc. Third Ann. Mediterranean Ad Hoc Networking Workshop, 2004.
[4] L. Ying, S. Yang, and R. Srikant, "Optimal Delay-Throughput Trade-Offs in Mobile Ad-Hoc Networks," IEEE Trans. Information Theory, vol. 54, no. 9, pp. 4119-4143, Sept. 2008.
[5] J. Mammen and D. Shah, "Throughput and Delay in Random Wireless Networks with Restricted Mobility," IEEE Trans. Information Theory, 2007.
[6] P. Li, Y. Fang, and J. Li, "Throughput, Delay, and Mobility in Wireless Ad Hoc Networks," Proc. IEEE INFOCOM, Mar. 2010.
[7] U. Kozat and L. Tassiulas, "Throughput Capacity of Random Ad Hoc Networks with Infrastructure Support," Proc. ACM MobiCom, June 2003.
[8] P. Li, C. Zhang, and Y. Fang, "Capacity and Delay of Hybrid Wireless Broadband Access Networks," IEEE J. Selected Areas in Comm., vol. 27, no. 2, pp. 117-125, Feb. 2009.
[9] B. Liu, Z. Liu, and D. Towsley, "On the Capacity of Hybrid Wireless Networks," Proc. IEEE INFOCOM, Mar. 2003.
[10] X. Li, S. Tang, and O. Frieder, "Multicast Capacity for Large Scale Wireless Ad Hoc Networks," Proc. ACM MobiCom, Sept. 2007.
Privacy Protection using profile-based
Personalized Web Search
M. Govindarajan
Assistant Professor
Department of Computer Science and Engineering
Annamalai University
Annamalai Nagar, India

S. Thenmozhi
PG Scholar
Department of Computer Science and Engineering
Annamalai University
Annamalai Nagar, India
Abstract-Personalized Web Search is a promising way to improve the accuracy of web search and has been attracting much attention recently. However, effective personalized search requires collecting and aggregating user information, which often raises serious concerns of privacy infringement for many users. When no strong security is provided for the database, the concentration is only on data mining and not on web search performance. Thus the security threat to the database, which leads to various types of attacks, is very high and common where there is no integration of the database. In the proposed system we provide strong security for the database through the ECC algorithm and verify integrity through a hashing technique; if any threat is detected, we track the attacker's IP and block the attacker. To ensure efficient performance of data and strong security we employ a new technique known as the TASH algorithm. This technique tracks attackers from their first actions via a signing method using different digital signatures, which is feasible and efficient.
Keywords: Personalized Web Search (PWS), TASH Algorithm, Elliptic Curve Cryptography, Blocking IP Address.
I. INTRODUCTION
The web search engine has long been the most important portal for ordinary people looking for useful information on the web. However, users might experience failure when search engines return irrelevant results that do not meet their real intentions. Personalized Web Search refers to search experiences that are tailored specifically to an individual's interests by incorporating information about the individual beyond the specific query provided.
The solutions to PWS can generally be categorized into two types, namely click-log-based methods and profile-based ones. The click-log-based methods are straightforward: they simply impose a bias towards clicked pages in the user's query history. Although this strategy has been demonstrated to perform consistently and considerably well, it can only work on repeated queries from the same user, which is a strong limitation confining its applicability. In contrast, profile-based methods improve the search experience with complicated user-interest models generated from user-profiling techniques. Profile-based methods can be potentially effective for almost all sorts of queries, but are reported to be unstable under some circumstances.
II. RELATED WORKS
A. Profile-Based Personalization
Previous works on profile-based PWS mainly focus on
improving the search utility. The basic idea of these works is
to tailor the search results by referring to, often implicitly, a
user profile that reveals an individual information goal. In the
remainder of this section, we review the previous solutions to
PWS on two aspects, namely the representation of profiles,
and the measure of the effectiveness of personalization. Many profile representations are available in the literature to facilitate different personalization strategies. Earlier techniques utilize term lists/vectors [4] or bags of words [2] to represent a profile. However, most recent works build profiles in hierarchical structures due to their stronger descriptive ability, better scalability, and higher access efficiency. The majority of the hierarchical representations are constructed from an existing weighted topic hierarchy/graph such as ODP [1], [8]. Another work [7] builds the hierarchical profile automatically via term-frequency analysis on the user data.
B. Privacy Protection in PWS System
Generally there are two classes of privacy protection problems for PWS. One class includes those that treat privacy as the identification of an individual, as described in [9]. The other includes those that consider the sensitivity of the data, particularly the user profiles, exposed to the PWS server. Typical works in the literature on protecting user identification (class one) try to solve the privacy problem at different levels, including pseudo identity, group identity, no identity, and no personal information. The third and fourth levels are impractical due to the high cost of communication and cryptography. Therefore, the existing efforts focus on the second level. Both [10] and [11] provide online anonymity for user profiles by generating a group profile of k users. Using this approach, the linkage between the query and a single user is broken. In [12], the useless user profile (UUP) protocol is proposed to shuffle queries among a group of users who issue them. As a result, no entity can profile a particular individual. These works assume the existence of a trustworthy third-party anonymizer, which is not readily
available over the Internet at large. The work in [7] proposed a privacy protection solution for PWS based on hierarchical profiles. Using a user-specified threshold, a generalized profile is obtained, in effect, as a rooted subtree of the complete profile. Unfortunately, this work does not address the query utility, which is crucial for the service quality of PWS.
III. EXISTING SYSTEM
The existing model is a privacy-preserving personalized web search framework, UPS, which can generalize profiles for each query according to user-specified privacy requirements. Relying on the definition of two conflicting metrics, namely personalization utility and privacy risk, for a hierarchical user profile, the problem of privacy-preserving personalized search is formulated as risk profile generalization. Two simple and effective generalization algorithms, known as GreedyDP and GreedyIL, support runtime profiling: the former maximizes the discriminating power (DP) and the latter minimizes the information loss (IL).
IV. PROPOSED SYSTEM
In the proposed system, to protect user privacy in personalized web search (PWS), researchers have to consider two contradicting effects during the search process. On the one hand, they attempt to improve the search quality with the personalization utility of the user profile. On the other hand, they need to hide the privacy contents existing in the user profile to keep the privacy risk under control. People are willing to compromise privacy if supplying the user profile to the search engine yields better search quality through personalization.
An inexpensive mechanism for the client is to decide whether or not to personalize a query in runtime profiling, to enhance the stability of the search results. We provide security for the database through the ECC algorithm and verify data integrity through a hashing technique. If any threat is detected, we track the attacker's IP and block the attacker. In this paper we use the TASH algorithm, which efficiently performs the above tasks of securing and integrity-checking the data. TASH is the term defined for tracking an attacker's IP address via signing and hashing methods.
Even though we provide strong security prior to an attack using the Elliptic Curve Cryptography (ECC) algorithm, the system also tracks the attacker's IP address, detecting the malicious attacker red-handed and preventing attacks before they happen. Beyond detecting and preventing attackers, we can also block the attacker's IP from the network and blacklist it as soon as possible, which stops further attacks from the attacker's side.
We employ a hashing technique for ensuring message integrity, which is a very efficient method for finding the exact data item in a very short time. Hashing is the process in which we place each data item at an index of a memory location for ease of use.
Fig: 1. Dataflow Diagram
A. Client and Server Creation
In this module we create server and client systems, which are designed to exchange requests and responses for messages and queries with each other. The huge data sets for different locations are imbibed in distributed database systems and stored in different racks. Here we create an administrator login which maintains all the other database controls. Besides the server-client architecture, we maintain a distribution of racks for the different databases.
The client IP address, name, and port number are registered in order to enable client-server communication. The server is created to provide the database to the client: the client requests the database and the server provides it in response.
From the distributed database locations, clusters are formed from the most recently viewed data and split up into their appropriate locations. For the clustered databases we prepare indexes with references to their corresponding locations. To serve more client requests, a copy of the database is placed in different racks so that no particular system is overloaded. The most recently used data is then listed in the index, so we can quickly check the index of each rack first and only then the main database, which reduces the time needed to access the huge big-data tables.
B. ECC Implementation
The elliptic curve cryptographic algorithm is implemented to provide security for the database. This algorithm encrypts the data, which hides the sensitive information from the attacker, and the data is decrypted to provide the original data to the receiver. While in transfer, the data is in the form of cipher text.
Public-key cryptography is based on the intractability of certain mathematical problems. Early public-key systems are secure assuming that it is difficult to factor a large integer composed of two or more large prime factors. For elliptic-curve-based protocols, it is assumed that finding the discrete logarithm of a random elliptic curve element with respect to a publicly known base point is infeasible. The size of the elliptic curve determines the difficulty of the problem.
The primary benefit promised by ECC is a smaller key size, reducing storage and transmission requirements; i.e., an elliptic curve group can provide the same level of security afforded by an RSA-based system with a large modulus and correspondingly larger key. For example, a 256-bit ECC public key should provide comparable security to a 3072-bit RSA public key. Several discrete-logarithm-based protocols have been adapted to elliptic curves, replacing the group with an elliptic curve:
 The Elliptic Curve Diffie-Hellman (ECDH) key agreement scheme is based on the Diffie-Hellman scheme,
 The Elliptic Curve Integrated Encryption Scheme (ECIES), also known as the Elliptic Curve Augmented Encryption Scheme or simply the Elliptic Curve Encryption Scheme,
 The Elliptic Curve Digital Signature Algorithm (ECDSA) is based on the Digital Signature Algorithm.
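The ECDH exchange listed above can be illustrated concretely. The following is a minimal sketch using the third-party pyca/cryptography package (our choice; the paper does not name a library): two parties derive the same shared secret over an insecure channel and stretch it into a symmetric key with HKDF.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates its own key pair on the P-256 curve.
server_priv = ec.generate_private_key(ec.SECP256R1())
client_priv = ec.generate_private_key(ec.SECP256R1())

# Each side combines its private key with the peer's public key;
# both arrive at the same shared secret without ever transmitting it.
secret_server = server_priv.exchange(ec.ECDH(), client_priv.public_key())
secret_client = client_priv.exchange(ec.ECDH(), server_priv.public_key())
assert secret_server == secret_client

# Derive a 256-bit symmetric key from the raw shared secret.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"db-encryption").derive(secret_server)
```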
C. Applying Hashing Technique
The hashing technique is applied to verify message integrity. Hashing is applied to the original data before sending, and a number is generated. When the data reaches the destination side, the same hashing is applied, and the numbers generated at the source and at the destination are compared. If both are the same, there is no security threat; if they differ, a security threat exists.
The hashing technique is a very efficient method for finding the exact data item in a very short time. Hashing is the process in which we place each data item at an index of a memory location for ease of use. There are two types of hashing:
Static hashing: In static hashing, the hash function maps search-key values to a fixed set of locations.
Dynamic hashing: In dynamic hashing, a hash table can grow to handle more items. The associated hash function must change as the table grows.
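To make the integrity check concrete, here is a minimal sketch using Python's standard hashlib; SHA-256 is our choice for illustration, as the paper does not fix a particular hash function:

```python
import hashlib

def digest(data: bytes) -> str:
    """Compute a hex digest of the data (SHA-256 chosen for illustration)."""
    return hashlib.sha256(data).hexdigest()

sent = b"sensitive database record"
source_tag = digest(sent)                 # computed before sending

received = sent                           # replace with bytes read at destination
if digest(received) == source_tag:
    print("no security threat detected")
else:
    print("integrity violated: start tracking the attacker's IP")
```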
D. Tracking Attacker
Once the hashing verification is done and the result is positive, the attacker's IP address is tracked. The TASH algorithm checks where the attack came from. The attack is detected by the server through a two-step verification. The first step applies an ant colony algorithm from the point where the message starts travelling from its source node, so the message is checked at every step of its travel through each node. In case of a suspected pollution attack, the message then undergoes in-depth verification, which checks the hash-number comparison.
When a substantiated attack is detected, the system starts tracking the attacker's IP address. The IP address of the attacking node is detected by backtracking and analyzing the ant colony messages, and then the exact attacker is identified. The messages sent from the detected attacker's IP are destroyed at once, and that IP is blocked from the network and blacklisted.
E. Attacker Blocking
Once the attacker is tracked, he is blocked from the network. He is isolated from the network to avoid any further malicious activities. The messages sent from the detected attacker's IP are destroyed at once, and that IP is blocked from the network and blacklisted.
V. RESULTS & DISCUSSION
Fig: 2. Database Administrator registration
Figure 2 shows the result of Database Administrator registration, in which the administrator name, port number, and IP address are entered.
Fig: 3. List of Database Administrators in the Database Server
Figure 3 shows the number of Database Administrators registered in the Database Server.
Fig: 4. Database Client Registration
Figure 4 shows the Database Client registration, in which the client's details are entered in the Database Administration.
Fig: 5. List of Database Clients in the Database Server
Figure 5 shows the number of Database Clients registered with the Database Administrator.
Fig: 6. Set Privilege to the Administrator
Figure 6 shows the result of setting a privilege for the Administrator. All the Administrators in the Database Server either accept or deny the privilege which is set by one of the Administrators in the Database Server.
Fig: 7. Privilege accepted
Figure 7 shows the result of the privilege being accepted by all administrators.
Fig: 8. Client Attack
Figure 8 shows the client-side attack. The attacker collects details such as who the Administrators in the Database Server are and what type of privilege is set.
Fig: 9. Tracking Attacker
VI. CONCLUSION
To ensure efficient performance of data and strong security, a new technique known as the TASH algorithm is proposed. TASH stands for Tracking Attackers' IP via Signing and Hashing. This technique provides strong security for the database through the ECC algorithm and verifies integrity through a hashing technique; if any threat is detected, the IP address is tracked and the attacker is blocked.
REFERENCES
[1] Z. Dou, R. Song, and J.-R. Wen, "A Large-Scale Evaluation and Analysis of Personalized Search Strategies," Proc. Int'l Conf. World Wide Web (WWW), pp. 581-590, 2007.
[2] J. Teevan, S.T. Dumais, and E. Horvitz, "Personalizing Search via Automated Analysis of Interests and Activities," Proc. 28th Ann. Int'l ACM SIGIR Conf. Research and Development in Information Retrieval (SIGIR), pp. 449-456, 2005.
[3] B. Tan, X. Shen, and C. Zhai, "Mining Long-Term Search History to Improve Search Accuracy," Proc. ACM SIGKDD Int'l Conf. Knowledge Discovery and Data Mining (KDD), 2006.
[4] K. Sugiyama, K. Hatano, and M. Yoshikawa, "Adaptive Web Search Based on User Profile Constructed without Any Effort from Users," Proc. 13th Int'l Conf. World Wide Web (WWW), 2004.
[5] X. Shen, B. Tan, and C. Zhai, "Context Sensitive Information Retrieval Using Implicit Feedback," Proc. 28th Ann. Int'l ACM SIGIR Conf. Research and Development in Information Retrieval (SIGIR), 2005.
[6] F. Qiu and J. Cho, "Automatic Identification of User Interest for Personalized Search," Proc. 15th Int'l Conf. World Wide Web (WWW), pp. 727-736, 2006.
[7] Y. Xu, K. Wang, and B. Zhang, "Privacy-Enhancing Personalized Web Search," Proc. 16th Int'l Conf. World Wide Web (WWW), pp. 591-600, 2007.
[8] P.A. Chirita, W. Nejdl, R. Paiu, and C. Kohlschutter, "Using ODP Metadata to Personalize Search," Proc. 28th Ann. Int'l ACM SIGIR Conf. Research and Development in Information Retrieval (SIGIR), 2005.
[9] X. Shen, B. Tan, and C. Zhai, "Privacy Protection in Personalized Search," SIGIR Forum, vol. 41, no. 1, pp. 4-17, 2007.
[10] Y. Xu, K. Wang, G. Yang, and A.W.-C. Fu, "Online Anonymity for Personalized Web Services," Proc. 18th ACM Conf. Information and Knowledge Management (CIKM), pp. 1497-1500, 2009.
[11] Y. Zhu, L. Xiong, and C. Verdery, "Anonymizing User Profiles for Personalized Web Search," Proc. 19th Int'l Conf. World Wide Web (WWW), pp. 1225-1226, 2010.
[12] J. Castella-Roca, A. Viejo, and J. Herrera-Joancomarti, "Preserving User's Privacy in Web Search Engines," Computer Comm., vol. 32, no. 13/14, pp. 1541-1551, 2009.
CENTRALISED CONTROL SYSTEM FOR STREET LIGHTING
SYSTEM BY USING OF WIRELESS COMMUNICATION
N. Hubert Benny,
Post graduate student,
Kings College of Engineering,
Affiliated to Anna University, Chennai, India

Dr K. Rajiv Gandhi,
Associate professor,
Department of CSE,
Kings College of Engineering,
Affiliated to Anna University, Chennai, India
ABSTRACT— Lighting systems are growing rapidly and their operation is becoming complex, keeping in mind the growth of industry and the expansion of cities. The challenging task before us is how these lighting systems can be maintained effectively at minimum cost. The necessity, therefore, is to develop a new light control system that incorporates a new design to overcome the drawbacks of the old system. This project surveyed various street light systems and analyzed their characteristics. The outcome clearly points out drawbacks such as uneasy handling and difficult maintenance. This project therefore proposes a new system that avoids the two disadvantages pointed out above by using the ZigBee communication technique. In this thesis, the project describes the hardware design of the new street light control system, built on the ZigBee communication protocol.
Index terms- Automation, control system, lighting
system, sensors, wireless networks, ZigBee, GSM.
1. INTRODUCTION
Our lighting systems are still designed according to old standards of reliability and often do not take advantage of the latest technological developments. In many cases, this is because the plant administrators have not yet recovered the expenses of constructing the existing facilities. However, the recent increasing pressure of raw-material costs and the greater social sensitivity to environmental issues are leading manufacturers to develop new techniques and technologies which allow significant cost savings and a greater respect for the environment. These problems can be rectified using a solution that is less expensive and more reliable.
The solution is a remote control system based on intelligent lamp posts that send information to a central control system, thus simplifying management and maintenance. It is built around Global System for Mobile Communications (GSM) and ZigBee transmissions. The control is implemented through a network of sensors that collect the relevant information for the management and maintenance of the system, transferring the information wirelessly using ZigBee and GSM protocols. With this technology we can enable the street lamps in a particular area, or even particular lamps alone.
This centralized control system lets us survey the lamps and acknowledge their status; we can also determine which particular lamps in the area should glow with the help of a Light Dependent Resistor (LDR) sensor, which senses darkness in the ambience and enables the light in the street lamps.
In this paper, we present our system, which integrates the latest technologies in order to realize an advanced and intelligent management and control system for street lighting.
2. OBJECTIVE
The main challenge of our work is the development of a system capable of turning a set of streetlights smart enough to work autonomously, as well as to be remotely managed (for diagnostics, actions, and other energy optimization services). These two capabilities can contribute decisively to improving the global efficiency of lighting systems, both from an energy-consumption point of view and in the cost required for their maintenance. This section covers the requirements that have to be fulfilled in order to meet these challenges.
3. DEVICES AND METHODS
The devices used in this system are the LDR, ZigBee, and GSM devices; from lamp to central control, the system consists of three units. Each unit has its own function to perform, and all units are controlled from the centralized control unit, which is the station that issues orders to the other two units.
3.1 Working Unit:
Here the working unit is the street lamps planted in the streets, each with a ZigBee transceiver, an LDR sensor, and a relay. The LDR sensor is used because it senses the ambience, and we use it to turn on the relay in the street lamp so the light glows. The LDR's cell resistance falls with increasing light intensity, and this property is used to switch the street light.
The relay turns the lights on when it receives its threshold voltage level as the LDR resistance changes. This makes the system automatic, convenient, and efficient. Once this is done, the ZigBee transceiver in the street lamp acknowledges the intermediate unit by sending a signal. This signal is transmitted from ZigBee to ZigBee through the street lamps. If any data malfunction occurs, the service engineer is informed and can take corrective action. We use ZigBee because it supports the unique needs of low-cost, low-power wireless sensor networks. The modules require minimal power and provide reliable delivery of data between devices. The modules operate within the ISM 2.4 GHz frequency band and are pin-for-pin compatible with each other.
In this working unit the microcontroller works according to the stop-and-wait protocol: it waits for the acknowledgement from the nearby lamp and sends its own acknowledgement along with the received signal until it reaches the intermediate unit. If a lamp does not send an acknowledgement to the nearby lamp in time, an error signal is sent to the intermediate unit.
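The stop-and-wait behaviour described above can be sketched as follows; send() and wait_for_ack() are assumed transport helpers, not a real API:

```python
import time

def forward(signal, send, wait_for_ack, timeout_s=2.0, retries=3):
    """Send a signal to the next lamp and wait for its acknowledgement."""
    for _ in range(retries):
        send(signal)
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if wait_for_ack():
                return "acknowledged"
            time.sleep(0.05)
    # No acknowledgement within the allowed time: report upward.
    return "error: notify intermediate unit"
```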
3.2 Intermediate Unit:
This unit is the intermediary between the centralized control unit and the working unit. It receives the acknowledgements from the street lamps through ZigBee for all the streets under its control. It has ZigBee and GSM wireless devices in order to communicate with the working unit and the centralized control unit; the gathered acknowledgements are composed into a message sent to the control unit. This message is sent by means of GSM, which carries data over very long distances.
The GSM modem is very compact in size and easy to use as a plug-in GSM modem. The modem is designed with RS232 level-converter circuitry, which allows it to interface directly with a PC serial port. The baud rate is configurable from 9600 to 115200 through AT commands. The microcontroller in this intermediate unit is also built around the stop-and-wait protocol; it sends an error message to the centralized unit if no acknowledgement is received.
3.3 Centralized control unit:
Unlike the other two units, this unit has full control of the street lamps: we can turn the street lights of a particular area on or off by giving an AT command through GSM to the intermediate unit. The intermediate unit then checks for that particular street lamp and turns the lamps on or off.
The GSM module works through specific commands called AT commands; this centralized unit has all the other intermediate units' GSM module numbers for communication. It configures which street lamp should be operated, and it is easy to track whether the given command was carried out, because a GLCD is used to monitor it and failure messages are displayed if the session times out.
When it receives an error message from the intermediate unit, the fault can be serviced immediately by sending a service engineer. From the received message we can also identify the malfunctioning street lamps; with the help of this technology, manpower is considerably reduced and used only when needed. Maintenance can be performed whenever a failure message is received, and the recent status of all working units and intermediate units can also be checked.
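For illustration, the sketch below shows how the centralized unit might issue such a command to an intermediate unit's modem over the RS232 link, using the standard AT+CMGF/AT+CMGS SMS commands; the serial port name, phone number, and message body are placeholders, and the third-party pyserial package is assumed:

```python
import serial  # pyserial, assumed available on the control-centre PC

ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)
ser.write(b"AT+CMGF=1\r")                      # select SMS text mode
ser.read(64)                                   # consume the "OK" response
ser.write(b'AT+CMGS="+910000000000"\r')        # intermediate unit's number
ser.read(64)                                   # wait for the "> " prompt
ser.write(b"LAMP 17 ON\x1a")                   # message body; Ctrl+Z terminates
print(ser.read(128).decode(errors="ignore"))   # modem response
```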
a) Street light control terminals
Fig 1. Street light control terminals
b) Intermediate station
Fig 2. Intermediate station
c) Centralized control center
Fig 3. Centralized control center
4. CIRCUIT DIAGRAM DESCRIPTION
Street light control systems are composed of three parts: street light control terminals, the intermediate station, and the centralized control center. The centralized control center for street lights usually resides in a local government office. At the centralized control center, operators monitor and control the street lights using an operator's terminal.
Centralized control center computers communicate with remote concentrators, which control the lights installed alongside every road. Remote concentrators control the lights and gather status information; the lights they control are usually connected to a 60 Hz, 220 V power delivery feeder. The street light control system is composed hierarchically.
The centralized control center communicates with the remote concentrators, and the remote concentrators communicate with each remote street light control terminal installed in every light pole. The remote concentrator's roles are the control of the individual remote controllers and the gathering of status information from the remote control terminals.
4.1 Monitoring Stations
a) ZigBee Tx-Rx
The ZigBee RF modules support the unique needs of low-cost, low-power wireless sensor networks. The modules require minimal power and provide reliable delivery of data between devices. The modules operate within the ISM 2.4 GHz frequency band and are pin-for-pin compatible with each other.
The ZigBee RF modules interface to a host device through a logic-level asynchronous serial port. Through its serial port, the module can communicate with any logic- and voltage-compatible UART, or, through a level translator, with any serial device.
The receiver sensitivity is high, so the chance of receiving bad packets is low (about 1%). The modules should be supplied with 3 V DC, and the power consumption is then on the order of 50 mA. The module supports a sleep mode in which consumption is smaller than 10 μA.
Fig 4. Tx-Rx
ZigBee is a wireless communication technology based on the IEEE 802.15.4 standard for communication among multiple devices in a WPAN (Wireless Personal Area Network). ZigBee is intended to be simpler than other WPANs (such as Bluetooth) in terms of cost and energy consumption. A ZigBee personal area network consists of at least one Coordinator, one (or more) Devices and, if necessary, one (or more) Routers. The bit rate of transmission depends on the frequency band.
On the 2.4 GHz band the typical bit rate is 250 kb/s, with 40 kb/s at 915 MHz and 20 kb/s at 868 MHz. The typical range of a ZigBee transmission varies, depending on the atmospheric conditions and the transmission power, from tens to hundreds of meters, since the transmission power is deliberately kept as low as necessary (on the order of a few mW) to keep energy consumption very low [7]. In the proposed system, the network transfers data from the lampposts to the central station. Data is transferred point by point, from one lamppost to the next, where every lamppost has a distinct address within the system. The chosen transmission distance between the lampposts ensures that, in case of failure of one lamp in the chain, the signal will reach another operational lamppost without breaking the chain. The ZigBee wireless communication network has been implemented with radio frequency modules operating in the ISM band at 2.4 GHz.
b) GSM module
The GSM/GPRS RS232 modem is built with a SIMCOM SIM900 quad-band GSM/GPRS engine and works on the 850 MHz, 900 MHz, 1800 MHz, and 1900 MHz frequencies. It is very compact in size and easy to use as a plug-in GSM modem. The modem is designed with RS232 level-converter circuitry, which allows it to interface directly with a PC serial port. The baud rate is configurable from 9600 to 115200 through AT commands; initially the modem is in auto-baud mode. This GSM/GPRS RS232 modem has an internal TCP/IP stack that enables connection to the Internet via GPRS. It is suitable for SMS as well as data-transfer applications in M2M interfaces.
The modem needs only three wires (Tx, Rx, GND), besides the power supply, to interface with a microcontroller or host PC. The built-in low-dropout linear voltage regulator accepts a wide range of unregulated power supplies (4.2 V-13 V). Using this modem, you can send and read SMS and connect to the Internet via GPRS through simple AT commands.
Features:
• Quad-band GSM/GPRS: 850/900/1800/1900 MHz
• Built-in RS232 level converter (MAX3232)
• Configurable baud rate
• SMA connector with GSM L-type antenna
• Built-in network status LED
• Inbuilt powerful TCP/IP protocol stack for Internet data transfer over GPRS
Fig 5. GSM
c) PIC 16F877A Microcontroller
Fig 6. PIC microcontroller
Microchip, the second largest 8-bit microcontroller supplier in the world (Motorola is ranked No. 1), is the manufacturer of the PIC microcontroller and a number of other embedded control solutions. Microchip offers four families of PIC microcontrollers, each designed to address the needs of different designers:
 Base-Line: 12-bit instruction word length
 Mid-Range: 14-bit instruction word length
 High-End: 16-bit instruction word length
 Enhanced: 16-bit instruction word length
You might ask how an 8-bit microcontroller can have a 12-, 14-, or 16-bit instruction word length. The reason lies in the fact that PIC microcontrollers are based on the Harvard architecture. The Harvard architecture has the program memory and data memory as separate memories which are accessed from separate buses. This improves bandwidth over the traditional von Neumann architecture, in which program and data are fetched from the same memory using the same bus.
d) Light Dependent Resistor
The LDR consists of two cadmium sulphide (CdS) photoconductive cells with spectral responses similar to that of the human eye. The cell resistance falls with increasing light intensity. Applications include smoke detection, automatic lighting control, batch counting, and burglar alarm systems.
Fig 7. LDR
5. CONCLUSION
The system can be monitored remotely and also controlled, which saves manpower. This wireless system makes it easier to identify a fault in a street lamp and repair it quickly. ZigBee and GSM wireless devices take less power and cost less than external cables. The system is flexible, extendable, and fully adaptable to user needs.
Another advantage of the control system is the intelligent management of the lamp posts, achieved by sending data to a central station via ZigBee and GSM wireless communication. The system maintenance can be easily and efficiently planned from the central station, allowing additional savings.
The proposed system is particularly suitable for street lighting in urban and rural areas where the traffic is low during a given range of time. The system is flexible, extendable, and fully adaptable to user needs. The simplicity of ZigBee, the reliability of the electronic components, the features of the sensor network, the processing speed, the reduced costs, and the ease of installation characterize the proposed system, which presents itself as an interesting engineering and commercial solution, as the comparison with other technologies demonstrated.
The system can be adopted in the future for loads supplied by the power system, which enables the monitoring of energy consumption. This situation is particularly interesting in the case of economic incentives offered to clients that enable remote control of their loads, and can be useful, for example, to prevent system blackout. Moreover, new perspectives arise in billing, in the intelligent management of remotely controlled loads, and in smart grid and smart metering applications.
REFERENCES
1. Fabio Leccese, "Remote-Control System of High Efficiency and Intelligent Street Lighting Using a ZigBee Network of Devices and Sensors," IEEE Trans., vol. 28, no. 6, January 2013.
2. M. A. D. Costa, G. H. Costa, A. S. dos Santos, L. Schuch, and J. R. Pinheiro, "A high efficiency autonomous street lighting system based on solar energy and LEDs," in Proc. Power Electron. Conf., Brazil, Oct. 1, 2009, pp. 265-273.
3. P.-Y. Chen, Y.-H. Liu, Y.-T. Yau, and H.-C. Lee, "Development of an energy efficient street light driving system," in Proc. IEEE Int. Conf. Sustain. Energy Technol., Nov. 24-27, 2008, pp. 761-764.
4. W. Yongqing, H. Chuncheng, Z. Suoliang, H. Yali, and W. Hong, "Design of solar LED street lamp automatic control circuit," in Proc. Int. Conf. Energy Environment Technol., Oct. 16-18, 2009, vol. 1, pp. 90-93.
5. W. Yue, S. Changhong, Z. Xianghong, and Y. Wei, "Design of new intelligent street light control system," in Proc. 8th IEEE Int. Conf. Control Autom., Jun. 9-11, 2010, pp. 1423-1427.
6. R. Caponetto, G. Dongola, L. Fortuna, N. Riscica, and D. Zufacchi, "Power consumption reduction in a remote controlled street lighting system," in Proc. Int. Symp. Power Electron., Elect. Drives, Autom. Motion, Jun. 11-13, 2008, pp. 428-433.
7. Y. Chen and Z. Liu, "Distributed intelligent city street lamp monitoring and control system based on wireless communication chip nRF401," in Proc. Int. Conf. Netw. Security, Wireless Commun. Trusted Comput., Apr. 25-26, 2009, vol. 2, pp. 278-281.
8. L. Jianyi, J. Xiulong, and M. Qianjie, "Wireless monitoring system of street lamps based on ZigBee," in Proc. 5th Int. Conf. Wireless Commun., Netw. Mobile Comput., Sep. 24-26, 2009, pp. 1-3.
9. D. Liu, S. Qi, T. Liu, S.-Z. Yu, and F. Sun, "The design and realization of communication technology for street lamps control system," in Proc. 4th Int. Conf. Comput. Sci. Educ., Jul. 25-28, 2009, pp. 259-262.
10. J. Liu, C. Feng, X. Suo, and A. Yun, "Street lamp control system based on power carrier wave," in Proc. Int. Symp. Intell. Inf. Technol. Appl. Workshops, Dec. 21-22, 2008, pp. 184-188.
11. H. Tao and H. Zhang, "Forest monitoring application systems based on wireless sensor networks," in Proc. 3rd Int. Symp. Intell. Inf. Technol. Appl. Workshops, Nov. 21-22, 2009, pp. 227-230.
EFFICIENT LOAD BALANCING MODEL IN CLOUD COMPUTING
Sharon Sushantha. J
M.E(CSE)
Jeppiaar Engineering College
Chennai.

G. Ben Sandra
M.E(CSE)
Jeppiaar Engineering College
Chennai.

Vani Priya
M.E(CSE)
Jeppiaar Engineering College
Chennai.
Abstract— In this paper we formulate the static load balancing problem in single-class-job distributed systems as a cooperative game among computers. It is shown that the Nash Bargaining Solution (NBS) provides a Pareto-optimal allocation which is also fair to all jobs. Our objective is to incorporate the cooperative load balancing game and present the structure of the NBS. For this game, an algorithm for computing the NBS is derived. We show that the fairness index is always 1 using the NBS, which means that the allocation is fair to all jobs. Finally, the performance of our cooperative load balancing scheme is compared with that of other existing schemes.
Keywords—Cloud Computing; Load Balancing; Game Theory.
I. INTRODUCTION
Cloud computing is the next generation of computation. Perhaps clouds can save the world, because people can have everything they need on the cloud. Cloud computing is the next natural step in the evolution of on-demand information technology services and products. The Cloud is a metaphor for the Internet, based on how it is depicted in computer network diagrams, and is an abstraction for the complex infrastructure it conceals.
As the computing industry shifts toward providing Platform as a Service (PaaS) and Software as a Service (SaaS) for consumers and enterprises to access on demand regardless of time and location, there will be an increase in the number of cloud platforms available. Cloud computing is a very specific type of computing that has very specific benefits, but it has specific negatives as well. And it
does not serve the needs of real businesses to hear only the hype about cloud computing, both positive and negative. One thing this paper hopes to accomplish is not only a clear picture and brief overview of what the cloud does extremely well, but also a short survey of the criteria and challenges ahead. Cloud computing is an attractive technology in the field of computer science. Gartner's report [1] says that the cloud will bring changes to the IT industry. The cloud is changing our lives by providing users with new types of services.
Users get service from a cloud without paying attention to the details [2]. NIST defines cloud computing as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction [3].
A. Game Theory
Game theory [1-3] is the study of mathematical models used in situations where multiple entities interact with each other in a strategic setup. The theory, in its true sense, deals with the ability of an entity or individual (called a player in game theory) to take a decision keeping in view the effect of other entities' decisions on it, in a situation of confrontation. A wage negotiation between a firm and its employees can be considered a game between two parties, where each party makes a decision or move in the negotiation process based on the other party's move.
B. Enhancement in Public Cloud
Fig 1.1 Example of a public Cloud
II. Current Issues
In the existing model of the public cloud, the job arrival pattern is not predictable, and no common method applies to all situations. Tentative workload control is not possible, and the time complexity incurred is higher than desired. There is no optimal allocation of cloud resources, and fair allocation is not possible. Round Robin and Ant Colony algorithms can be used to allocate jobs fairly, but at the expense of user satisfaction.
Fig 1.3 Representation of existing system
III. Proposed Issues
In this dissertation we introduce and investigate a new generation of load balancing schemes based on game-theoretic models. First, we design a load balancing scheme based on a cooperative game among computers. The solution of this game is the Nash Bargaining Solution (NBS), which provides a Pareto-optimal and fair allocation of jobs to computers.
The main advantages of this scheme are the simplicity of the underlying algorithm and the fair treatment of all jobs independent of the allocated computers. Then we design a load balancing scheme based on a non-cooperative game among users.
The Nash equilibrium provides a user-optimal operation point for the distributed system and represents the solution of the proposed non-cooperative load balancing game.
We present a characterization of the Nash equilibrium and a distributed algorithm for computing it. The main advantages of our non-cooperative scheme are its distributed structure and user-optimality. We compare the performance of the proposed load balancing schemes with that of other existing schemes and show their main advantages.
This dissertation is also concerned with the design of load balancing schemes for distributed systems in which the computational resources are owned and operated by different self-interested agents. In such systems there is no a-priori motivation for cooperation, and the agents may manipulate the resource allocation algorithm in their own interest, leading to severe performance degradation and poor efficiency.
Using concepts from mechanism design theory (a sub-field of game theory), we design two load balancing protocols that force the participating agents to report their true parameters and follow the rules. We prove that our load balancing protocols are truthful and satisfy the voluntary participation condition. Finally, we investigate the effectiveness of our protocols by simulation.
IV. Related Work
There are many studies of dynamic cloud computing for balancing load in the public cloud. Load balancing was described by Gaochao Xu and Junjie Pang [1] in a load balancing model based on partitioning in the public cloud; the methodology uses Round Robin and Ant Colony algorithms to partition the load with fair allocation techniques.
The work described by Brian Adler [2], "Load balancing in the cloud: Tools, Tips and Techniques", describes a cloud-ready load balancing solution using identical load-generating servers to allocate the cloud space.
Understanding the utilization of resources is described by Zenon Chaczko [3] in "Availability and load balancing in cloud computing", using compression techniques.
V. A Game-Theoretic Resource Allocation for Overall Energy
In this paper, we propose a game-theoretic approach to optimize the energy consumption of MCC systems. We formulate the mobile devices' and cloud servers' energy minimization problem as a congestion game. We prove that a Nash equilibrium always exists in this congestion game, and we propose an efficient algorithm that can reach the Nash equilibrium in polynomial time. Experimental results show that our approach is able to reduce the total energy compared to a random approach and to an approach which only tries to reduce the mobile devices' energy.
Cloud computing and virtualization techniques provide mobile devices with battery-energy-saving opportunities by allowing them to offload computation and execute code remotely. When the cloud infrastructure consists of heterogeneous servers, the mapping between mobile devices and servers plays an important role in determining the energy dissipation on both sides. From an environmental-impact perspective, any energy dissipation related to computation should be counted in order to achieve energy sustainability, so it is important to reduce the overall energy consumption of the mobile systems and the cloud infrastructure. Furthermore, reducing cloud energy consumption can potentially reduce the cost for mobile cloud users, because the pricing model of cloud services is pay-by-usage.
We formulate the energy minimization problem as a congestion game, where each mobile device is a player and its strategy is to select one of the servers to offload the computation to, while minimizing the overall energy consumption. We prove that a Nash equilibrium always exists in this game and propose an efficient algorithm that can reach the Nash equilibrium in polynomial time. Experimental results show that our approach is able to reduce the total energy of mobile devices and servers compared to a random approach and to an approach which only tries to reduce the mobile devices' energy alone.
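For illustration, the sketch below runs best-response dynamics for a server-selection congestion game of this kind; the linear congestion-cost model is our assumption, not the paper's, but best-response updates in such finite congestion games converge to a Nash equilibrium:

```python
def best_response_dynamics(n_devices, server_cost, n_servers, max_rounds=100):
    """Each device repeatedly switches to its cheapest server until stable."""
    choice = [0] * n_devices                     # everyone starts on server 0
    for _ in range(max_rounds):
        changed = False
        for d in range(n_devices):
            load = [choice.count(s) for s in range(n_servers)]
            load[choice[d]] -= 1                 # exclude the device itself
            # Cost grows with congestion: base cost times resulting load.
            best = min(range(n_servers),
                       key=lambda s: server_cost[s] * (load[s] + 1))
            if best != choice[d]:
                choice[d], changed = best, True
        if not changed:                          # no one wants to deviate:
            return choice                        # a Nash equilibrium
    return choice

print(best_response_dynamics(6, server_cost=[1.0, 1.5, 2.0], n_servers=3))
```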
VI. Static Load Balancing Algorithm
Static load balancing algorithms assign tasks to the nodes based only on the ability of a node to process new requests; they do not consider dynamic changes of these attributes at run-time and thus cannot adapt to load changes during run-time. The process is based solely on prior knowledge of each node's processing power, memory and storage capacity, and most recently known communication performance. Round Robin (RR) and Weighted Round Robin (WRR) are the most commonly used static load balancing algorithms in cloud computing. The Round Robin algorithm does not consider server availability, server load, the distance between clients and servers, or other factors. In this algorithm, the server selection for an upcoming request is done in sequential fashion. The main problem with this approach is inconsistent server performance, which is overcome by WRR. In WRR, weights are added to the servers and traffic is directed to the servers according to those weights; however, for long-duration connections this causes load tilt.
By contrast, the Least-Connection (LC) algorithm identifies the load of connections on each server at run time and sends the incoming request to the server with the least number of connections. However, LC does not consider service capability, the distance between clients and servers, or other factors. Weighted Least-Connection (WLC) considers both the weight assigned to a service node W(Si) and the current number of connections of the service node C(Si) [15][16]. The problem with WLC is that, as time progresses, the static weight cannot be corrected and is bound to deviate from the actual load condition, resulting in load imbalance. Xiaona Ren et al. [17] proposed a prediction-based algorithm called Exponential Smoothing forecast-Based on Weighted Least-Connection (ESBWLC), which can handle long-connectivity applications well. In this algorithm the load on a server is calculated from parameters such as CPU utilization, memory usage, number of connections, and disk occupation. The load per processor (Load/p) is then calculated, and the algorithm uses (Load/p) as a historical training set, establishes a prediction model, and predicts the value of the next moment. The limitation of this algorithm is that it does not consider the distance between clients and servers.
www.internationaljournalssrg.org
Page 61
International Conference on Engineering Trends and Science & Humanities (ICETSH-2015)
Fig 1.2 Design of the proposed system
There are several cloud computing categories; this work focuses on a public cloud. A public cloud is based on the standard cloud computing model, with service provided by a service provider [11].
A large public cloud will include many
nodes and the nodes in different geographical
locations. Cloud partitioning is used to manage this
large cloud. A cloud partition is a subarea of the
public cloud with divisions based on the geographic
locations. The architecture is shown in The load
balancing strategy is based on the cloud partitioning
concept. After creating the cloud partitions, the load
balancing then starts: when a job arrives at the
system, with the main controller deciding which
cloud partition should receive the job. The partition
load balancer then decides how to assign the jobs to
the nodes. When the load status of a cloud partition
is normal, this partitioning can be accomplished
locally. If the cloud partition load status is not
normal, this job should be transferred to another
partition. The load balance solution is done by the
main controller and the balancers. The main
controller first assigns jobs to the suitable cloud
partition and then communicates with the balancers
in each partition to refresh this status in formation.
Since the main controller deals with information for
each partition, smaller data sets will lead to the
higher processing rates. The balancers in each
partition gather the status information from every
ISSN: 2348 – 8387
node and then choose the right strategy to distribute
the jobs. Assigning jobs to the cloud partition.
When a job arrives at the public cloud, the first step is to choose the right partition. The cloud partition status can be divided into three types: (1) Idle: when the percentage of idle nodes exceeds a set threshold, change to idle status. (2) Normal: when the percentage of normal nodes exceeds a set threshold, change to normal load status. (3) Overload: when the percentage of overloaded nodes exceeds a set threshold, change to overloaded status. These threshold parameters are set by the cloud partition balancers. The main controller has to communicate with the balancers frequently to refresh the status information, and it dispatches jobs using the following strategy: when job i arrives at the system, the main controller queries the cloud partition where the job is located. If that partition's status is idle or normal, the job is handled locally; if not, another cloud partition that is not overloaded is found. The remaining step is assigning jobs to the nodes within the chosen cloud partition, as sketched below.
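A minimal Python sketch of the main controller's dispatch strategy just described; the partition names, the job structure, and the status encoding are illustrative assumptions.

# Minimal sketch of the main controller's partition dispatch strategy.
IDLE, NORMAL, OVERLOAD = "idle", "normal", "overload"

def dispatch(job, partitions, status):
    # status maps partition name -> IDLE / NORMAL / OVERLOAD,
    # refreshed periodically by the partition balancers.
    home = job["partition"]          # partition where the job arrives
    if status[home] in (IDLE, NORMAL):
        return home                  # handle the job locally
    for name in partitions:          # otherwise find a non-overloaded partition
        if name != home and status[name] != OVERLOAD:
            return name
    return home                      # fall back if everything is overloaded

partitions = ["p1", "p2", "p3"]
status = {"p1": OVERLOAD, "p2": NORMAL, "p3": IDLE}
print(dispatch({"partition": "p1"}, partitions, status))  # -> p2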
The cloud partition balancer gathers load information from every node to evaluate the cloud partition status. This evaluation of each node's load status is very important, so the first task is to define the load degree of each node. The node load degree is related to various static and dynamic parameters. The static parameters include the number of CPUs, the CPU processing speeds, the memory size, etc. Dynamic parameters are the memory utilization ratio, the CPU utilization ratio, the network bandwidth, etc.
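The load degree can be illustrated as a weighted sum over such dynamic parameters. The sketch below assumes hypothetical weights and utilization ratios; the actual parameter set and weights used by the model may differ.

# Hedged sketch: node load degree as a weighted sum of dynamic
# parameters. Weights and parameter names are illustrative assumptions.
def load_degree(node, weights):
    # node: dict of utilization ratios in [0, 1]; weights sum to 1.
    return sum(weights[k] * node[k] for k in weights)

weights = {"cpu": 0.5, "memory": 0.3, "bandwidth": 0.2}
node = {"cpu": 0.7, "memory": 0.4, "bandwidth": 0.2}
print(load_degree(node, weights))  # 0.51, compared against status thresholds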
Fig 1.4 Flowchart of load balancing in a partitioned cloud
Good load balancing will improve the performance of the entire cloud. However, there is no common method that can adapt to all possible situations; various methods have been developed to improve existing solutions and resolve new problems, and each method has advantages in a particular area but not in all situations. Therefore, the current model integrates several methods and switches between load balancing methods based on the system status. A relatively simple method can be used for the partition idle state, with a more complex method for the normal state; the load balancers then switch methods as the status changes. Here, the idle status uses an improved Round Robin algorithm, while the normal status uses a game-theory-based load balancing strategy.
The Round Robin algorithm is used here for
its simplicity. The Round Robin algorithm is one of
the simplest load balancing algorithms, which
passes each new request to the next server in the
queue. The algorithm does not record the status of
each connection so it has no status information. In
the regular Round Robin algorithm, every node has
an equal opportunity to be chosen. However, in a public cloud, the configuration and performance of each node will not be the same; thus, this method may overload some nodes.
Thus, an improved Round Robin algorithm is used, called "Round Robin based on the load degree evaluation". The algorithm is still fairly simple. Before the Round Robin step, the nodes in the load balancing table are ordered by load degree from lowest to highest. The system builds a circular queue and walks through the queue again and again, so jobs are assigned to nodes with low load degrees. The node order is changed when the balancer refreshes the Load Status Table. However, there may be read and write inconsistency at the refresh period T: if a job arrives at the cloud partition at the moment the balance table is being refreshed, the system status will have changed but the information will still be old, which may lead to an erroneous load strategy choice and an erroneous node order. To resolve this problem, two Load Status Tables should be created: Load Status Table 1 and Load Status Table 2. A flag is assigned to each table to indicate Read or Write. When the flag = "Read", the Round Robin based on the load degree evaluation algorithm is using this table; when the flag = "Write", the table is being refreshed and new information is written into it. Thus, at each moment, one table gives the correct node locations in the queue for the improved Round Robin algorithm, while the other is being prepared with the updated information. Once the data is refreshed, the table flags are swapped, and the two tables alternate to solve the inconsistency. When the cloud partition is normal, jobs arrive much faster than in the idle state and the situation is far more complex, so a different strategy is used for the load balancing.
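A minimal Python sketch of the two alternating Load Status Tables with their Read/Write flags; the table entry format is an illustrative assumption.

# Minimal sketch of the double-buffered Load Status Tables.
class LoadStatusTables:
    def __init__(self):
        self.tables = [{}, {}]   # Load Status Table 1 and Table 2
        self.read_idx = 0        # flag: which table is currently "Read"

    def read_table(self):
        # Used by the Round Robin based on load degree evaluation.
        return self.tables[self.read_idx]

    def refresh(self, fresh_info):
        # New information is written into the "Write" table only.
        write_idx = 1 - self.read_idx
        self.tables[write_idx] = dict(fresh_info)
        self.read_idx = write_idx   # swap flags once the refresh completes

tables = LoadStatusTables()
tables.refresh({"node1": 0.2, "node2": 0.8})
print(tables.read_table())  # readers keep a consistent view during refreshes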
Each user wants his jobs completed in the shortest time, so the public cloud needs a method that can complete the jobs of all users with reasonable response time. Penmatsa and Chronopoulos [13] proposed a static load balancing strategy based on game theory for distributed systems, and this work provides a new view of the load balancing problem in the cloud environment. As an implementation of cooperative games, the decision makers eventually reach a binding agreement: each decision maker decides by comparing notes with the others. In non-cooperative games, each decision maker makes decisions only for his own benefit. The system then reaches the Nash equilibrium, where each decision maker has made an optimized decision: each player in the game has chosen a strategy, and no player can benefit by changing his or her strategy while the other players' strategies remain unchanged.
A two-level task scheduling mechanism based on load balancing has also been discussed, which meets the dynamic requirements of users and obtains high resource utilization. It achieves load balancing by first mapping tasks to virtual machines and then virtual machines to host resources, thereby improving task response time, resource utilization and the overall performance of the cloud computing environment. The Throttled algorithm is based entirely on virtual machines: the client first requests the load balancer to find a suitable virtual machine that can take the load easily and perform the operations given by the client or user. The Equally Spread Current Execution algorithm [9] handles processes with priorities: it distributes the load by checking job sizes and transfers the load to a virtual machine that is lightly loaded, can handle the task easily and takes less time, maximizing throughput; the balancer spreads the load of the job in hand across multiple virtual machines.
A load balancing algorithm [8] can also be based on the least-connection mechanism, which is a part of dynamic scheduling: the number of connections for each server is counted dynamically to estimate its load. The load balancer records the connection number of each server, increasing the count when a new connection is dispatched to the server and decreasing it when a connection finishes or times out.
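A minimal Python sketch of this least-connection mechanism; the server names are illustrative.

# Minimal sketch of least-connection balancing: keep a live
# connection count per server and dispatch to the smallest.
connections = {"server-a": 0, "server-b": 0, "server-c": 0}

def dispatch_connection():
    server = min(connections, key=connections.get)
    connections[server] += 1          # a new connection is dispatched
    return server

def finish_connection(server):
    connections[server] -= 1          # connection finished or timed out

s = dispatch_connection()
finish_connection(s)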
Centralized dynamic load balancing takes fewer messages to reach a decision, as the number of overall interactions in the system decreases drastically compared to the semi-distributed case. However, centralized algorithms can cause a bottleneck at the central node, and the load balancing process is rendered useless once the central node crashes; therefore, this approach is best suited to networks of small size.
In order to balance the requests for resources, it is important to recognize a few major goals of load balancing algorithms:
a) Cost effectiveness: the primary aim is to achieve an overall improvement in system performance at a reasonable cost.
b) Scalability and flexibility: the distributed system in which the algorithm is implemented may change in size or topology, so the algorithm must be scalable and flexible enough to allow such changes to be handled easily.
c) Priority: prioritization of the resources or jobs needs to be done beforehand through the algorithm itself, for better service to the important or high-priority jobs regardless of their origin.
Conclusion
We conclude that a model which switches between load balancing methods based on the system status is suitable for dynamic cloud load balancing. The load balancing system focuses on a specific cloud area and improves efficiency in the public cloud. The complexity of allocation is reduced and fair allocation theory is utilized; game theory is exploited for better load balancing. The fairness index is always 1 using NBS, which means that the allocation is fair to all jobs. The model can incorporate predictable job arrival patterns, choose the suitable partition for arriving jobs, and make workload control possible in dynamic cloud computing.
References
[1] Gaochao Xu, Junjie Pang, et al., "Load balancing model based on partitioning for public cloud", 2013.
[2] Zenon Chaczko, "Availability and load balancing in cloud computing", 2011.
[3] Daniel Grosu, "Load balancing in distributed systems: A game theoretic approach", 2011.
A Survey On Disease Treatment Relations Analysis
APARNA.B, M.Sc.,
Research Scholar,
Department of Computer Science,
Bishop Heber College,
Trichirappalli. 620017, TN, India
SIVARANJANI.K, M.Sc.,M.Phil.,
Assistant Professor
Department of Information Technology,
Bishop Heber College,
Trichirappalli. 620017, TN, India
Abstract
Machine learning is an emerging technique in the medical field. Emerging services such as Google Health and Microsoft HealthVault are used for tracking health conditions. Health records are maintained in a centralized database, and patient details such as the patient name, disease and treatment are stored in the EHR. Machine learning is visualized as a tool for computer-based systems in the health care field. Existing papers have concentrated on the identification of diseases and their treatment from short text using NLP. The relationships of the diseases are also identified from the Medline database and classified using various classification algorithms such as decision tree, CNB and NB. The proposed system aims at improving the accuracy of the identification of diseases and their treatment by using the Adaboost algorithm. The proposed work is applied to lung images using the Adaboost algorithm: the user can become aware of cancer-affected parts in the provided lung image and can get the related treatment and prevention steps for the identified cancer.
Index Terms -- Machine Learning (ML), Natural Language Processing (NLP), Complement Naive Bayes (CNB), Adaboost
I. INTRODUCTION
Machine learning explores the study of algorithms that can learn from data. Image mining, also referred to as image data mining and equivalent to image analytics, refers to the process of deriving high-quality information from images.
Natural language processing (NLP) is a field
of computer science, artificial intelligence, and
linguistics concerned with the interactions between
computers and human (natural) languages.
This paper deals with the identification of the affected parts of the lung. From the collected dataset, the affected level is identified and the corresponding treatment and prevention methods are provided to the user. The paper also discusses the performance of various existing algorithms. Section II deals with the related work and discusses the results of various algorithms. Section III discusses the proposed methodology. In Section IV the existing algorithms are compared, and the best among them is taken for comparison with the proposed approach. The conclusion and future work are discussed in Section V.
II. RELATED WORK
The digital chest X-ray is classified into two categories, namely normal and abnormal. Learning algorithms such as neural networks and SVMs are used for training different parameters and input features in "A Study of Detection of Lung Cancer Using Data Mining Classification Techniques" published by Ada and Rajneet Kaur [1]. Classification methods are used in that paper in order to classify problems; the paper identifies the characteristics that indicate the group to which each case belongs.
Disease- and treatment-related sentences are separated by avoiding unnecessary information and advertisements from medical web pages, namely MEDLINE. The Multinomial Naive Bayes algorithm proposed by Janani.R.M.S and Ramesh.V [2] is integrated with a medical management system to separate medical words from short text. Their paper removes unwanted content from the HTML page by comparing it against the MEDLINE dataset, and provides as result a text document containing only the particular disease and its relevant symptoms, causes and treatment. This also minimizes the time and the workload of doctors in analyzing information about a certain disease and treatment in order to make decisions about patient monitoring and treatment.
Shital Shah and Andrew Kusiak [3] used gene expression data sets for analyzing ovarian, prostate and lung cancer. An integrated gene-search algorithm is proposed for genetic expression data analysis; it involves a genetic algorithm and correlation-based heuristics for partitioning data sets, and data mining for making predictions. Knowledge derived from the proposed algorithm has high classification accuracy and identifies the most significant genes. The algorithm can be applied to genetic expression data sets for any cancer, and is successfully demonstrated on ovarian, prostate and lung cancer in this research.
In "Early Detection and Prediction of Lung Cancer Survival using Neural Network Classifier", Ada and Rajneet Kaur [4] use histogram equalization for the classification of images, the extraction of their features, and a neural network classifier to check, at an early stage, whether the state of a patient is normal or abnormal. The neural network algorithm is implemented using open source software and its performance is compared to supplementary classification algorithms. It gives evidence of the best results, with the highest TP rate and lowest FP rate, and in the case of correct classification it gives a 96.04% result compared to other classifiers.
Zakaria Suliman Zubi and Rema Asheibani Saad, in "Improves Treatment Programs of Lung Cancer Using Data Mining Techniques" [5], used the back propagation algorithm for classifying data into three categories: normal, benign and malignant. The neural network method classifies problems and identifies the characteristics that indicate the group to which each case belongs.
In "Early Detection of Lung Cancer Risk Using Data Mining" [6] by Kawsar Ahmed, the identification of genetic as well as environmental factors is considered very important in developing novel methods of lung cancer prevention. Using the significant patterns, prediction tools for a lung cancer prediction system were developed; this lung cancer risk prediction system should prove helpful in detecting a person's predisposition for lung cancer.
In "Improved Classification of Lung Cancer Tumors Based on Structural and Physicochemical Properties of Proteins Using Data Mining Models" [7] by R. Geetha Ramani, supervised clustering algorithms exhibited poor performance in differentiating the lung tumor classes. Hybrid feature selection identified the distribution of solvent accessibility as the highest-ranked feature, with incremental feature selection and Bayesian network prediction generating an optimal jack-knife cross validation accuracy of 87.6%.
In "Effectiveness of Data Mining-based Cancer Prediction System (DMBCPS)" [8], A. Priyanga proposed a cancer prediction system based on data mining. The system estimates the risk of breast, skin and lung cancers, that is, it envisages three specific cancer risks by examining a number of user-provided genetic and non-genetic factors. The system is validated by comparing its predicted results with the patients' prior medical records, and it is also analyzed using the Weka system.
In "Mining lung cancer data and other diseases data using data mining techniques: a survey" [9] by Parag Deoskar, the ant colony optimization (ACO) technique is used for lung cancer classification. Ant colony optimization helps in increasing or decreasing the disease prediction value; an assortment of data mining and ant colony optimization techniques is used for appropriate rule generation and classification, which leads to exact cancer classification.
In "A Data Mining Classification Approach to Predict Systemic Lupus Erythematosus using ID3 Algorithm" [10], Gomathi. S deals with the deadly disease SLE and an effective way to predict and investigate it. A new framework is proposed for diagnosing and predicting the disease earlier, which can be used to extend the life of patients with lupus.
III. PROPOSED METHODOLOGY
Adaptive Boosting (AdaBoost) is a machine learning meta-algorithm. AdaBoost is a powerful classifier that works well on both basic and more complex recognition problems. It works by creating a highly accurate classifier from a combination of many relatively weak and inaccurate classifiers, and it acts as a meta-algorithm that can be used as a wrapper for other classifiers. This enables a user to add several weak classifiers to the family of weak classifiers that should be used at each round of boosting; the AdaBoost algorithm will choose the weak classifier that works best at that round of boosting.
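As a hedged illustration of this boosting loop, the sketch below uses scikit-learn's AdaBoostClassifier on synthetic data standing in for extracted image features; it is not the paper's implementation.

# Hedged sketch of boosting weak classifiers with AdaBoost, using
# scikit-learn; the synthetic data stands in for image features.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each boosting round fits one weak learner (a decision stump by
# default) and re-weights the training samples it misclassified.
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))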
This paper deals with the extraction of necessary data from the images that are uploaded by the user. In this paper, the data are taken from the Medline database. The image that contains the phrases about bio-medical details has to be uploaded by the user, and this image is processed in two tasks using the Adaboost algorithm. The input from the user is taken, and the relationship between the disease and the treatment is identified.

3.1 Steps Involved
In order to identify the semantic relationship with the identified disease, the following steps have to be done. The process flow for the steps involved in the identification is shown in Figure 1.

Figure 1. Process Flow

3.1.1 Input Progression
This deals with providing an image as input in order to identify the semantic relationship between disease and treatment. The cancer-affected images are provided as input in order to identify the cancer-affected parts in the lung.

3.1.2 Tasks and Data Sets
In the first task, the affected parts in the lung image uploaded by the user are identified, and from the separated parts the disease and the corresponding treatment are categorized. The second task deals with the semantic relationship of the data, which includes the cure, prevention and side effects of the disease.

3.1.2.1 Cancer Identification
This deals with identifying the cancer-affected parts in the lung.

3.1.2.2 Relation Identification
After identifying the cancer-affected parts in the image, medical terms that are related to the cancer are identified, and the treatment for the identified cancer is provided.

3.1.3 Classification Algorithms and Data Demonstration
3.1.3.1 Image Classifier
The image classifier is commonly used for image classification. The affected parts of the image are classified and highlighted using the image classifier.

3.1.3.2 Biomedical Concepts Representation
After identifying the affected parts of the lung, the treatment and related prevention methods are provided to the user.

3.1.3.3 Adaboost Algorithm
The image with medical terms is taken as the sample dataset. This input image is uploaded and classified using the Adaboost algorithm, which splits the work into the two tasks described above. In the first task, the Medline dataset values are compared with the uploaded dataset; if the values match the data in the Medline dataset, these data are classified into a separate group.

3.1.4 Output Performance
The semantic relationship is applied to the separated group. This identifies the relationships, such as cure, prevention and side effects, that are related to the disease identified in the first task.
3.2 Data Set Analysis
The affected parts of the lungs are identified using image classification. For identifying the relationship and treatment for the identified cancer, the Medline dataset is used. The medical dataset for cancer, consisting of 584 records taken from the Medline dataset, is used; the graph represents the ages of the patients and their protein, amino and phosphate levels. The dataset is uploaded and classified using the Adaboost algorithm, whose classification accuracy is higher than that of the previous algorithms.

Figure 3.1 Graphical Representation

Figure 3.2 Color Representation

IV RESULT AND DISCUSSION
Support vector machines are supervised learning models with associated learning algorithms that analyze data and recognize patterns, used for classification and regression analysis. Multinomial Naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with naive independence assumptions between features; this provides 86% of the performance level. The Gene Search algorithm provides 94%, the histogram provides 96.04%, the supplementary algorithm provides 48%, back propagation provides 70%, pattern prediction provides 56%, ant colony provides 74%, supervised clustering provides 69% and ID3 provides 83%. The performance levels of the various existing algorithms are shown in Figure 4.1.

Figure 4.1 Performance Level for accessing algorithms

From the above analysis, the histogram has the highest performance level when compared to the other algorithms, so this paper aims at comparing the histogram with the Adaboost algorithm for improving the performance level of classifying data.

V CONCLUSION AND FUTURE WORK
5.1 Conclusion
The existing algorithms, such as SVM, the Multinomial Naive Bayes algorithm, the gene-search algorithm, the histogram algorithm, supplementary classification algorithms, the back propagation algorithm, pattern prediction tools, supervised clustering algorithms, ant colony optimization and ID3, were surveyed. Among the existing algorithms, the histogram is preferred as the best classification algorithm, providing accurate results in less time. The classification accuracy of splitting the bio-medical terms is high when compared with the other existing algorithms.

5.2 Future Work
This paper uses the Adaboost algorithm for classifying the uploaded image dataset, aiming at a higher classification accuracy than the Multinomial Naive Bayes algorithm.
References
[1] Ada, Rajneet Kaur, "A Study of Detection of Lung Cancer Using Data Mining Classification Techniques", International Journal of Advanced Research in Computer Science and Software Engineering, Volume 3, Issue 3, March 2013.
[2] Janani.R.M.S, Ramesh.V, "Efficient Extraction of Medical Relations using Machine Learning Approach", International Journal of Advanced Research in Computer Science and Software Engineering, Volume 3, Issue 3, March 2013.
[3] Shital Shah, Andrew Kusiak, "Cancer gene search with data-mining and genetic algorithms", Computers in Biology and Medicine 37 (2007) 251-261.
[4] Ada, Rajneet Kaur, "Early Detection and Prediction of Lung Cancer Survival using Neural Network Classifier", International Journal of Application or Innovation in Engineering and Management, Volume 2, Issue 6, June 2013.
[5] "Improves Treatment Programs of Lung Cancer Using Data Mining Techniques", Journal of Software Engineering and Applications, 2014, 7, 69-77, published online February 2014 (http://www.scirp.org/journal/jsea), http://dx.doi.org/10.4236/jsea.2014.72008.
[6] Kawsar Ahmed, Abdullah-Al-Emran, "Early Detection of Lung Cancer Risk Using Data Mining".
[7] R. Geetha Ramani, Shomona Gracia Jacob, "Improved Classification of Lung Cancer Tumors Based on Structural and Physicochemical Properties of Proteins Using Data Mining Models", PLoS ONE 8(3): e58772, doi:10.1371/journal.pone.0058772.
[8] A.Priyanga, "Effectiveness of Data Mining-based Cancer Prediction System (DMBCPS)", International Journal of Computer Applications (0975-8887), Volume 83, No 10, December 2013.
[9] Parag Deoskar, Dr. Divakar Singh, Dr. Anju Singh, "Mining lung cancer data and other diseases data using data mining techniques: a survey", International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367 (Print), ISSN 0976-6375 (Online), Volume 4, Issue 2, March 2013.
[10] Gomathi. S, Dr. V. Narayani, "A Data Mining Classification Approach to Predict Systemic Lupus Erythematosus using ID3 Algorithm", International Journal of Advanced Research in Computer Science and Software Engineering, Volume 4, Issue 3, March 2014.
DTN Analysis Using Dynamic Trust Management
Protocol and Secure Routing
K.Akilandeswari
CSE, M.I.E.T Engineering College, Trichy, Tamilnadu.

Mr. M.K. Mohamed Faizal, M.E.
CSE, M.I.E.T Engineering College, Trichy, Tamilnadu.
Abstract-Delay tolerant networks (DTNs) are a type of network characterized by high end-to-end latency, frequent disconnection, and opportunistic communication over unreliable wireless links. A dynamic trust management protocol is designed for use in DTNs and for secure routing optimization in the presence of well-behaved, selfish and malicious nodes. We analyse this dynamic trust management protocol using simulations. By using this protocol, trust bias can be minimized and routing performance maximized, while the operational settings are adapted at runtime to manage the dynamically changing conditions of the network. The results demonstrate that our protocol is able to deal with selfish behaviours and is resilient against trust-related attacks. Furthermore, our trust-based routing protocol can effectively trade off message overhead and message delay for a significant gain in delivery ratio.
Keywords-delay tolerant network; dynamic trust management protocol; simulation; wireless links.
I. INTRODUCTION
Delay Tolerant Networks (DTNs) are a relatively new class of networks, wherein sparseness and delay are particularly high. In conventional Mobile Ad-hoc Networks (MANETs), the existence of end-to-end paths via contemporaneous links is assumed in spite of node mobility. In contrast, DTNs are characterized by intermittent contacts between nodes: links on an end-to-end path do not exist contemporaneously, and hence intermediate nodes may need to store, carry, and wait for opportunities to transfer data packets towards their destinations. Therefore, DTNs are characterized by large end-to-end latency, opportunistic communication over intermittent links, error-prone links, and (most importantly) the lack of an end-to-end path from a source to its destination. It can be argued that MANETs are a special class of DTNs.
The DTN architecture was designed to provide a framework for dealing with the sort of heterogeneity found at sensor network gateways. DTNs use a multitude of different delivery protocols, including TCP/IP, raw Ethernet, and hand-carried storage drives. An end-to-end message-oriented (overlay) layer called the "bundle layer" lies above the transport layer on which it is hosted.
II. SYSTEM MODEL
Nodes in disruption-tolerant networks (DTNs) usually exhibit repetitive motions. Several recently proposed DTN routing algorithms have utilized the DTNs' cyclic properties for predicting future forwarding; the prediction is based on metrics abstracted from the nodes' contact history. However, the robustness of the encounter prediction becomes vital for DTN routing, since malicious nodes can provide forged metrics or follow sophisticated mobility patterns to attract packets and gain a significant advantage in encounter prediction.
In this paper, we examine the impact of black hole
attacks and their variations in DTN routing. We introduce
the concept of encounter tickets to secure the evidence of
each contact. In our scheme, nodes adopt a unique way of
interpreting the contact history by making observations
based on the collected encounter tickets. Then, following the Dempster-Shafer theory, nodes form trust and confidence opinions towards the competency of each encountered forwarding node. Extensive real-trace-driven simulation results are presented to support the effectiveness of our system.

A malicious node can perform the following trust-related attacks:
1. Self-promoting attacks: It can promote its importance
(by providing good recommendations for itself) so as to
attract packets routing through it (and being dropped).
2. Bad-mouthing attacks: it can ruin the reputation of
well-behaved nodes (by providing bad recommendations
against good nodes) so as to decrease the chance of
packets routing through good nodes.
3. Ballot stuffing: it can boost the reputation of bad
nodes (by providing good recommendations for them) so
as to increase the chance of packets routing through
malicious nodes (and being dropped).
We introduce a random attack probability Prand to reflect random attack behaviour. When Prand = 1, the malicious attacker is a reckless attacker; when Prand < 1, it is a random attacker.
A collaborative attack means that the malicious nodes in the system boost their allies and focus on particular victims in the system to victimize. Ballot stuffing and bad-mouthing attacks are a form of collaborative attack on the trust system, used to boost the reputation of malicious nodes and to ruin the reputation of (and thus to victimize) good nodes. We mitigate collaborative attacks with an application-level trust optimization design, by setting a trust recommender threshold Trec to filter out less trustworthy recommenders, and a trust carrier threshold Tf to select trustworthy carriers for message forwarding; these two thresholds are dynamically changed in response to environment changes, as sketched below. A node's trust value is assessed based on direct trust evaluation and indirect trust information such as recommendations. The trust of one node toward another node is updated upon encounter events. Each node executes the trust protocol independently and performs its direct trust assessment toward an encountered node based on specific detection mechanisms designed for assessing a trust property X; later, in Section 4, we discuss the specific detection mechanisms employed in our protocol for trust aggregation.
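A minimal Python sketch of how the two thresholds could be applied; the node names, trust values and data structures are illustrative assumptions, not part of the protocol specification.

# Minimal sketch of the two application-level trust thresholds:
# Trec filters recommenders, Tf selects message carriers.
def trusted_recommendations(recommendations, trust, T_rec):
    # Keep only recommendations from nodes whose own trust >= T_rec,
    # mitigating bad-mouthing and ballot-stuffing attacks.
    return {n: r for n, r in recommendations.items() if trust[n] >= T_rec}

def eligible_carriers(encountered, trust, T_f):
    # Forward a message only to encountered nodes with trust >= T_f.
    return [n for n in encountered if trust[n] >= T_f]

trust = {"a": 0.9, "b": 0.4, "c": 0.7}
print(trusted_recommendations({"a": 0.8, "b": 0.2}, trust, T_rec=0.6))
print(eligible_carriers(["b", "c"], trust, T_f=0.6))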
System design is the process of defining the architecture,
components, modules, and data for a system to satisfy
specified requirements. One could see it as the
application of systems theory to product development.
There is some overlap with the disciplines of systems
analysis, systems architecture and systems engineering.
If the broader topic of product development blends the
perspective of marketing, design, and manufacturing into
a single approach to product development, then design is
the act of taking the marketing information and creating
the design of the product to be manufactured. System
design is therefore the process of defining and
developing systems to satisfy specified requirements of
the user.
III. TRUST MANAGEMENT PROTOCOL
Our trust protocol considers trust composition, trust aggregation, trust formation and application-level trust optimization designs, and it evaluates two types of trust. QoS trust: QoS trust [10] is evaluated through the communication network by the capability of a node to deliver messages to the destination node. We consider "connectivity" and "energy" to measure the QoS trust level of a node: the connectivity QoS trust is about the ability of a node to encounter other nodes due to its movement patterns, and the energy QoS trust is about the battery energy of a node to perform the basic routing function. Social trust: social trust is based on honesty or integrity in social relationships and friendship in social ties. We consider "healthiness" and social "unselfishness" to measure the social trust level of a node: the healthiness social trust is the belief of whether a node is malicious, and the unselfishness social trust is the belief of whether a node is socially selfish.

The selection of trust properties is application driven. In DTN routing, message delivery ratio and message delay are two important factors: we consider "healthiness", "unselfishness" and "energy" in order to achieve a high message delivery ratio, and we consider "connectivity" to achieve low message delay, combining them as sketched below.
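A hedged sketch of combining the four trust properties into one composite trust value; the weights are illustrative assumptions and would, per the protocol, be adapted dynamically at runtime.

# Hedged sketch: composite trust as a weighted combination of the
# four trust properties named above. Weights are illustrative.
def composite_trust(props, weights):
    # props: per-property trust values in [0, 1] for one node.
    return sum(weights[p] * props[p] for p in weights)

weights = {"healthiness": 0.3, "unselfishness": 0.3,
           "energy": 0.2, "connectivity": 0.2}
node = {"healthiness": 0.9, "unselfishness": 0.6,
        "energy": 0.8, "connectivity": 0.5}
print(composite_trust(node, weights))  # used to rank candidate carriers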
IV. SIMULATION
To be able to implement the Destination Sequenced Distance Vector and Dynamic Source Routing protocols, certain simulation scenarios must be run. This section describes the details of the simulations done for the protocols and their results. The simulations were conducted under the UBUNTU (Linux) platform.
V. IMPLEMENTATION
We suggest an experimental model of dynamic trust management for DTNs to deal with both malicious and selfish misbehaving nodes. Our notion of selfishness is social selfishness: very often, humans carrying communication devices such as smart phones and GPS units in a DTN are socially selfish to outsiders but unselfish to friends. Our notion of maliciousness refers to malicious nodes performing trust-related attacks to disrupt DTN operations built on trust. We design and validate a dynamic trust management protocol for DTN routing performance optimization in response to dynamically changing conditions, such as the population of misbehaving nodes, through secure routing.
VI. ALGORITHM EXPLANATION
Dynamic Trust Management Protocol. QoS trust is evaluated through the communication network by the capability of a node to deliver messages to the destination node. We consider "connectivity" and "energy" to measure the QoS trust level of a node; the energy QoS trust is about the battery energy of a node to perform the basic routing function. Social trust is based on honesty or integrity in social relationships and friendship in social ties. We consider "healthiness" and social "unselfishness" to measure the social trust level of a node: the healthiness social trust is the belief of whether a node is malicious, and the unselfishness social trust is the belief of whether a node is socially selfish. While social ties cover more than just friendship, we consider friendship as a major factor for determining a node's socially selfish behavior.
CONCLUSION
In this paper, we designed and validated a trust management protocol for DTNs and applied it to secure routing. Our trust management protocol combines QoS trust with social trust to obtain a composite trust metric. The results obtained at design time can facilitate dynamic trust management for DTN routing in response to dynamically changing conditions at runtime, and we performed a comparative analysis of trust-based secure routing running on top of our trust management protocol. Further, it approaches the ideal performance of epidemic routing in delivery ratio and message delay without incurring high message or protocol maintenance overhead.
REFERENCES
[1] "The ns-3 Network Simulator," http://www.nsnam.org/, Nov. 2011.
[2] E. Ayday, H. Lee, and F. Fekri, "Trust Management and Adversary Detection for Delay Tolerant Networks," Proc. Military Comm. Conf., pp. 1788-1793, 2010.
[3] E. Ayday, H. Lee, and F. Fekri, "An Iterative Algorithm for Trust Management and Adversary Detection for Delay Tolerant Networks," IEEE Trans. Mobile Computing, vol. 11, no. 9, pp. 1514-1531, Sept. 2012.
[4] J. Burgess, B. Gallagher, D. Jensen, and B.N. Levine, "Maxprop: Routing for Vehicle-Based Disruption-Tolerant Networking," Proc. IEEE INFOCOM, pp. 1-11, Apr. 2006.
[5] V. Cerf, S. Burleigh, A. Hooke, L. Torgerson, R. Durst, K. Scott, K. Fall, and H. Weiss, "Delay-Tolerant Networking Architecture," RFC 4838, IETF, 2007.
[6] I.R. Chen, F. Bao, M. Chang, and J.H. Cho, "Supplemental Material for 'Dynamic Trust Management for Delay Tolerant Networks and Its Application to Secure Routing'," IEEE Trans. Parallel and Distributed Systems, 2013.
[7] I.R. Chen and T.H. Hsi, "Performance Analysis of Admission Control Algorithms Based on Reward Optimization for Real-Time Multimedia Servers," Performance Evaluation, vol. 33, no. 2, pp. 89-112, 1998.
[8] S.T. Cheng, C.M. Chen, and I.R. Chen, "Dynamic Quota-Based Admission Control with Sub-Rating in Multimedia Servers," Multimedia Systems, vol. 8, no. 2, pp. 83-91, 2000.
[9] S.T. Cheng, C.M. Chen, and I.R. Chen, "Performance Evaluation of an Admission Control Algorithm: Dynamic Threshold with Negotiation," Performance Evaluation, vol. 52, no. 1, pp. 1-13, 2003.
Efficient Data Access in Disruption Tolerant
Network using Hint based Algorithm
Kaviya .P
Department of Computer Science and Engineering,
Indra Ganesan College of Engineering,
Trichy.

Mrs. D. IndraDevi
Associate Professor,
Indra Ganesan College of Engineering,
Trichy.
Abstract— Data access is an important issue in Disruption Tolerant Networks (DTNs). To improve the performance of data access, cooperative caching is used; however, due to the unpredictable node mobility in DTNs, traditional caching schemes cannot be directly applied. A hint-based decentralized algorithm is used for cooperative caching, which allows the nodes to perform functions in a decentralized fashion. Cache consistency and storage management features are integrated with the system, and cache consistency is maintained by using the cache replacement policy. The basic idea is to intentionally cache data at a set of network central locations (NCLs), which can be easily accessed by other nodes in the network. The NCL selection is based on a probabilistic selection metric and coordinates multiple caching nodes to optimize the tradeoff between data accessibility and caching overhead. Extensive trace-driven simulations show that the approach significantly improves data access performance compared to existing schemes.
Keywords—disruption tolerant networks; network central location; cooperative caching

I. INTRODUCTION
Disruption tolerant networks (DTNs) consist of mobile nodes that contact each other opportunistically. Due to unpredictable node mobility, there is no end-to-end connection between mobile nodes, which greatly impairs the performance of data access. In such networks, node mobility is exploited to let mobile nodes carry data as relays and forward data opportunistically when contacting other nodes; the subsequent difficulty of maintaining end-to-end communication links makes it necessary to use "carry-and-forward" methods for data transmission. Such networks include groups of individuals moving in disaster recovery areas, military battlefields, or urban sensing applications. The key problem is, therefore, how to determine the appropriate relay selection strategy. Carry-and-forward transmission requires a number of retransmissions, and cache consistency is not maintained. If too much data is cached at a node, it will be difficult for the node to send all the data to the requesters during the contact period, thus wasting storage space. Therefore, it is a challenge to determine where to cache and how much to cache in DTNs.
A common technique used to improve data access performance is caching: data is cached at appropriate network locations based on query history, so that future queries can be answered with less delay. Client caches filter
application I/O requests to avoid network and server traffic,
while server caches filter client cache misses to reduce disk
accesses. Another level of storage hierarchy is added, that allows
a client to access blocks cached by other clients. This technique
is known as cooperative caching and it reduces the load on the
server by allowing some local client cache misses to be handled
by other clients.
The cooperative cache differs from the other levels of
the storage hierarchy in that it is distributed across the clients
and it therefore shares the same physical memory as the local
caches of the clients. A local client cache is controlled by the
client, and server cache is controlled by the server, but it is not
clear who should control the cooperative cache. For the
cooperative cache to be effective, the clients must somehow
coordinate their actions. Data caching has been introduced as a
technique to reduce data traffic and access latency.
By caching data the data request can be served from the
mobile clients without sending it to the data source each time. It
is a major technique used in the web to reduce the access
latency. In web, caching is implemented at various points in the
network. At the top level web server uses caching, and then
comes the proxy server cache and finally client uses a cache in
the browser.
The present work proposes a scheme to address the
challenges of where to cache and how much data to cache. It
efficiently supports caching in DTNs by intentionally caching data at network central locations (NCLs). The NCL is
represented by a central node which has high popularity in the
network and is prioritized for caching data. Due to the limited
caching buffer of central nodes, multiple nodes near a central
node may be involved for caching and the popular data will be
cached near a central node. The selected NCLs achieve high
chances for prompt response to user queries with low overhead
in network storage and transmission. The data access scheme
will probabilistically coordinate multiple caching nodes for
responding to user queries. The cache replacement scheme is
used to adjust cache locations based on query history.
In order to ensure valid data access, the cache
consistency must be maintained properly. Many existing cache
consistency maintenance algorithms are stateless, in which the
data source node is unaware of the cache status at each caching
node. Even though stateless algorithms do not pay the cost for
cache status maintenance, they mainly rely on broadcast
mechanisms to propagate the data updates, thus lacking cost-effectiveness and scalability. Besides stateless algorithms,
stateful algorithms can significantly reduce the consistency
maintenance cost by maintaining status of the cached data and
selectively propagating the data updates. Stateful algorithms are
more effective in MANETs, mainly due to the bandwidth-constrained, unstable and multi-hop wireless communication.
A stateful cache consistency algorithm called the Greedy algorithm is proposed. In the Greedy algorithm, the data source node maintains the Time-to-Refresh value and the cache query rate associated with each cache copy; thus, the data source node propagates a source data update only to the caching nodes which are in great need of the update, and it employs an efficient strategy to propagate the update among the selected caching nodes.
Cooperative caching, which allows the sharing and coordination
of cached data among multiple nodes, can be used to improve
the performance of data access in ad hoc networks. When
caching is used, data from the server is replicated on the caching
nodes. Since a node may return the cached data, or modify the
route and forward a request to a caching node, it is very
important that the nodes do not maliciously modify data, drop or
forward the request to the wrong destination.
Caching in wireless environment has unique constraints
like scarce bandwidth, limited power supply, high mobility and
limited cache space. Due to the space limitation, the mobile
nodes can store only a subset of the frequently accessed data.
The availability of the data in local cache can significantly
improve the performance since it overcomes the constraints in
wireless environment. A good replacement mechanism is needed
to distinguish the items to be kept in the cache from those to be removed when the cache is full. While it would be possible to
pick a random object to replace when cache is full, system
performance will be better if we choose an object that is not
heavily used. If a heavily used data item is removed it will
probably have to be brought back quickly, resulting in extra
overhead.
II. RELATED WORK
Research on data forwarding in DTNs originates from Epidemic routing, which floods the entire network. Some later studies focus on proposing efficient relay selection metrics to approach the performance of Epidemic routing with lower forwarding cost, based on prediction of node contacts in the future. Some schemes make such predictions based on node mobility patterns, which are characterized by Kalman filters or semi-Markov chains. In other schemes, the node contact pattern is exploited as an abstraction of the node mobility pattern for better prediction accuracy, based on experimental and theoretical analysis of the node contact characteristics. The social network properties of node contact patterns, such as centrality and community structures, have also been exploited for relay selection in recent social-based data forwarding schemes.
The aforementioned metrics for relay selection can be
applied to various forwarding strategies, which differ in the
number of data copies created in the network. While the most
conservative strategy always keeps a single data copy and
Spray-and-Wait holds a fixed number of data copies, most
schemes dynamically determine the number of data copies. In
Compare-and-Forward, a relay forwards data to another node
whose metric value is higher than itself. Delegation forwarding
reduces forwarding cost by only forwarding data to nodes with
the highest metric.
Data access in DTNs, on the other hand, can be
provided in various ways. Data can be disseminated to
appropriate users based on their interest profiles. Publish/
subscribe systems were used for data dissemination, where
social community structures are usually exploited to determine
broker nodes. In other schemes without brokers, data items are
grouped into predefined channels, and are disseminated based on
users’ subscriptions to these channels.
Caching is another way to provide data access.
Cooperative caching in wireless ad hoc networks has been studied,
in which each node caches pass-by data based on data
popularity, so that queries in the future can be responded with
less delay. Caching locations are selected incidentally among all
the network nodes. Some research efforts have been made for
caching in DTNs, but they only improve data accessibility from
infrastructure networks such as WiFi access points (APs). Peer-to-peer data sharing and access among mobile users are
generally neglected.
Distributed determination of caching policies for
minimizing data access delay has been studied in DTNs,
assuming simplified network conditions.
III. OVERVIEW
A. MOTIVATION
A requester queries the network for data access, and the
data source or caching nodes reply to the requester with data
after having received the query. The key difference between
caching strategies in wireless ad hoc networks and DTNs is
illustrated in Fig. 1. Note that each node has limited space for
caching. Otherwise, data can be cached everywhere, and it is
trivial to design different caching strategies. The design of
caching strategy in wireless ad hoc networks benefits from the
assumption of existing end-to end paths among mobile nodes,
and the path from a requester to the data source remains
unchanged during data access in most cases. Such assumption
enables any intermediate node on the path to cache the pass-by
data. For example, in Fig. 1a, C forwards all the three queries to
data sources A and B, and also forwards data d1 and d2 to the
requesters. In case of limited cache space, C caches the more
popular data d1 based on query history, and similarly data d2 are
cached at node K. In general, any node could cache the pass-by
data incidentally. However, the effectiveness of such an
incidental caching strategy is seriously impaired in DTNs, which do not assume any persistent network connectivity. Since data are forwarded via opportunistic contacts, the query and the replied data may take different routes, and it is difficult for nodes to collect the information about query history and make caching decisions. For example, in Fig. 1b, after having forwarded query q2 to A, node C loses its connection to G and cannot cache data d1 replied to requester E. Node H, which forwards the replied data to E, does not cache the pass-by data d1 either, because it did not record query q2 and considers d1 less popular. In this case, d1 will be cached at node G, and hence needs a longer time to be replied to
the requester. Our basic solution to improve caching
performance in DTNs is to restrain the scope of nodes being
involved for caching. Instead of being incidentally cached
“anywhere,” data are intentionally cached only at specific nodes.
These nodes are carefully selected to ensure data accessibility,
and constraining the scope of caching locations reduces the
complexity of maintaining query history and making caching
decision.
B. NCL SELECTION
When the DTN is activated, the nodes are generated; after all nodes are generated, the NCLs are selected from the network using a probabilistic selection metric. The selected NCLs achieve high chances of prompt response to user queries with low overhead in network storage and transmission. Each node then sends its generated data to an NCL, and the NCL receives the data and stores it in its cache memory. The opportunistic path weight is used by the central node as the relay selection metric for data forwarding. Instead of being incidentally cached anywhere, data are intentionally cached only at specific nodes, the NCLs. These nodes are carefully selected to ensure data accessibility, and constraining the scope of caching locations reduces the complexity of maintaining query history and making caching decisions. The push and pull processes conjoin at the NCL node: the push process means that whenever a node generates data, the data is stored at the NCL; the pull process means that whenever a node requests particular data, it sends the request to the NCL, which checks its cache memory and sends a response to the requesting node. If the data is not available, the NCL forwards the request to the nearest node.
An r-hop opportunistic path $P_{AB} = (N_P, E_P)$ between nodes A and B consists of a node set $N_P = (A, N_1, N_2, \ldots, N_{r-1}, B)$ and an edge set $E_P = (e_1, e_2, \ldots, e_r)$ with edge weights $(\lambda_1, \lambda_2, \ldots, \lambda_r)$. The path weight $p_{AB}(T)$ is the probability that data are opportunistically transmitted from A to B along $P_{AB}$ within time T. Assuming the inter-contact time on edge $e_k$ is exponentially distributed with rate $\lambda_k$, the path weight is written as

$p_{AB}(T) = \int_0^T \sum_{k=1}^{r} \Big( \prod_{i=1,\, i \neq k}^{r} \frac{\lambda_i}{\lambda_i - \lambda_k} \Big) \, \lambda_k e^{-\lambda_k x} \, dx$   (1)
The data transmission delay between two nodes A and B, indicated by the random variable Y, is measured by the weight of the shortest opportunistic path between the two nodes. In practice, mobile nodes maintain the information about the shortest opportunistic paths between each other in a distance-vector manner when they come into contact.
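As a quick sanity check of Eq. (1) under the exponential inter-contact assumption, consider a single-hop path ($r = 1$): the sum and product disappear and the path weight reduces to the exponential CDF $p_{AB}(T) = 1 - e^{-\lambda_1 T}$. For example, with a contact rate of $\lambda_1 = 0.1$ per hour and $T = 10$ hours, $p_{AB}(T) = 1 - e^{-1} \approx 0.63$.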
The metric $C_i$ for a node i to be selected as a central node to represent an NCL is then defined as

$C_i = \frac{1}{N-1} \sum_{j=1}^{N} p_{ji}(T)$   (3)

where we define $p_{ii}(T) = 0$. This metric indicates the average
probability that data can be transmitted from a random node to
node i within time T. In general, network information about the
pairwise node contact rates and shortest opportunistic paths
among mobile nodes are required to calculate the metric values
of mobile nodes according to (3). However, the maintenance of
such network information is expensive in DTNs due to the lack
of persistent end-to-end network connectivity. As a result, we
will first focus on selecting NCLs with the assumption of
complete network information from the global perspective.
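A minimal Python sketch of NCL selection under this global-information assumption: rank nodes by the metric of Eq. (3) and take the top K as central nodes. The pairwise path weights below are illustrative inputs.

# Minimal sketch of NCL selection: rank nodes by C_i of Eq. (3)
# (average probability that a random node reaches node i within T).
def select_ncls(path_weight, nodes, K):
    # path_weight[(j, i)]: weight of the shortest opportunistic path
    # from j to i within time T; p_ii is defined as 0.
    def metric(i):
        return sum(path_weight.get((j, i), 0.0)
                   for j in nodes if j != i) / (len(nodes) - 1)
    return sorted(nodes, key=metric, reverse=True)[:K]

nodes = ["A", "B", "C"]
pw = {("A", "B"): 0.8, ("C", "B"): 0.7, ("A", "C"): 0.2,
      ("B", "C"): 0.3, ("B", "A"): 0.5, ("C", "A"): 0.4}
print(select_ncls(pw, nodes, K=1))  # -> ['B']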
C. CACHING
A common technique used to improve data access performance is caching: data is cached at appropriate network locations based on query history, so that future queries can be answered with less delay. Although cooperative caching has been studied for both web-based applications and wireless ad hoc networks, to allow sharing and coordination among multiple caching nodes, it is difficult to realize in DTNs due to the lack of persistent network connectivity.
When a data source generates data, it pushes the data to the central nodes of the NCLs, which are prioritized to cache data. One copy of the data is cached at each NCL. If the caching buffer of a central node is full, another node near the central node will cache the data; such decisions are made automatically based on the buffer conditions of the nodes involved in the pushing process. A requester multicasts a query to the central nodes of the NCLs to pull data, and a central node forwards the query to the caching nodes. Multiple data copies are returned to the requester, and we optimize the tradeoff between data accessibility and transmission overhead by controlling the number of returned data copies. Utility-based cache replacement is conducted whenever two caching nodes contact, and it ensures that popular data are cached nearer to central nodes. We generally cache more copies of popular data to optimize the cumulative data access delay, and we also probabilistically cache less popular data to ensure the overall data accessibility.

D. CACHE DISCOVERY
A cache discovery algorithm efficiently discovers and delivers requested data items from neighbouring nodes and decides which data items should be cached for future use. In cooperative caching, this decision is made not only on behalf of the caching node but also based on the needs of other nodes. Each node maintains a Caching Information Table (CIT). When an NCL node caches a new data item or updates its CIT, it broadcasts these updates to all its neighbours.
When a data item d is requested by a node A, the node first checks its "d available" flag to see whether the data is locally available. If it is FALSE, the node checks its "d node" entry to see whether the data item is cached by one of its neighbours. If a matching entry is found, the request is redirected to that node; otherwise, the request is forwarded towards the data server. Nodes lying on the way to the data server check their own local caches and the "d node" entries in their CITs. If any node has the data in its local cache, the data is sent to the requesting node and request forwarding stops; if the data entry matches in the CIT, the node redirects the request to the corresponding node.
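A minimal sketch of this lookup logic follows; the `cache`, `cit`, and `forward_to_server` structures are illustrative (the paper specifies the CIT conceptually, not this exact layout):

```python
class Node:
    """Illustrative node with a local cache and a Caching Information
    Table (CIT) mapping data item ids to the neighbour caching them."""
    def __init__(self):
        self.cache = {}  # item id -> data cached locally
        self.cit = {}    # item id -> neighbouring Node believed to cache it

    def request(self, item_id, forward_to_server):
        # 1. Check local availability (the "d available" flag).
        if item_id in self.cache:
            return self.cache[item_id]
        # 2. Check the CIT for a neighbour that caches item d.
        neighbour = self.cit.get(item_id)
        if neighbour is not None and item_id in neighbour.cache:
            return neighbour.cache[item_id]
        # 3. Otherwise forward the request towards the data server.
        return forward_to_server(item_id)
```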
The hint-based approach lets the node itself perform the lookup, using its own hints about the locations of blocks within the cooperative cache. These hints allow the node to access the cooperative cache directly, avoiding the need to contact the NCL node on every local cache miss. The two principal functions of a hint-based system are hint maintenance and the lookup mechanism. Hints must be maintained so that they are reasonably accurate; otherwise, the overhead of looking for blocks using incorrect hints will be prohibitive. Hints are used to locate a block in the cooperative cache, but the system must be able to eventually locate a copy of the block should the hints prove wrong.
E. CACHE REPLACEMENT
A commonly used criterion for evaluating a replacement policy is its hit ratio, i.e., the frequency with which it finds a page in the cache. Of course, the replacement policy's implementation
overhead should not exceed the anticipated time savings.
Discarding the least-recently-used page is the policy of choice in
cache management. Until recently, attempts to outperform LRU
in practice had not succeeded because of overhead issues and the
need to pretune parameters. The adaptive replacement cache is a
self-tuning, low-overhead algorithm that responds online to
changing access patterns. ARC continually balances between the
recency and frequency features of the workload, demonstrating
that adaptation eliminates the need for the workload-specific
pretuning that plagued many previous proposals to improve
LRU. ARC’s online adaptation will likely have benefits for
real-life workloads due to their richness and variability with
time. These workloads can contain long sequential I/Os or
moving hot spots, changing frequency and scale of temporal
locality and fluctuating between stable, repeating access patterns
and patterns with transient clustered references. Like LRU, ARC
is easy to implement, and its running time per request is
essentially independent of the cache size.
ARC maintains two LRU page lists: L1 and L2. L1
maintains pages that have been seen only once, recently, while
L2 maintains pages that have been seen at least twice, recently.
The algorithm actually caches only a fraction of the pages on
these lists. The pages that have been seen twice within a short
time may be thought of as having high frequency or as having
longer-term reuse potential. Each list is split in two: T1 contains the top (most recent) pages of L1, and B1 contains the bottom (least recent) pages of L1; T2 and B2 split L2 in the same way. On a replacement triggered by a request x, if either |T1| > p, or |T1| = p and x ∈ B2, ARC replaces the LRU page in T1; if either |T1| < p, or |T1| = p and x ∈ B1, it replaces the LRU page in T2.
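A sketch of this REPLACE decision under the list layout just described is given below; p is ARC's adaptation target for |T1|. The full ARC algorithm also adapts p online and handles ghost-list hits, which is omitted here:

```python
def arc_replace(T1, T2, B1, B2, p, x):
    """REPLACE step of ARC as described above. Lists are ordered from
    LRU (index 0) to MRU; the evicted page is remembered in the
    corresponding ghost list (B1 or B2)."""
    if T1 and (len(T1) > p or (len(T1) == p and x in B2)):
        victim = T1.pop(0)   # evict the LRU page of T1 ...
        B1.append(victim)    # ... and record it in ghost list B1
    else:
        victim = T2.pop(0)   # otherwise evict the LRU page of T2
        B2.append(victim)    # ... and record it in ghost list B2
    return victim
```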
F. NCL LOAD BALANCING
When a central node fails or its local resources are depleted,
another node is selected as a new central node. Intuitively, the
new central node should be the one with the highest NCL
selection metric value among the current non central nodes in
the network. When the local resources of central node C1 are
depleted, its functionality is taken over by C3. Since C3 may be
far away from C1, the queries broadcasted from C3 may take a
long time to reach the caching nodes A, and hence reduce the
probability that the requester R receives data from A on time.
The distance between the new central node and C1 should also
be taken into account. More specifically, with respect to the
original central node j, we define the metric C_i^(j) for a node i to be selected as the new central node as C_i^(j) = C_i * p_{ij}(T), which weights node i's NCL selection metric by its opportunistic proximity to j.
After a new central node is selected, the data cached at the NCL
represented by the original central node needs to be adjusted
correspondingly, so as to optimize the caching performance.
After the functionality of central node C1 has been migrated to
C3, the nodes A, B, and C near C1 are not considered as good
locations for caching data anymore. Instead, the data cached at
these nodes needs to be moved to other nodes near C3. This
movement is achieved via cache replacement when caching
nodes opportunistically contact each other. Each caching node at the original NCL recalculates the utilities of its cached data items with respect to the newly selected central node. In general, these data utilities will be reduced due to the changes of central nodes, and this reduction moves the cached data to more appropriate caching locations that are nearer to the newly selected central node. Changes in central nodes and the subsequent adjustment of caching locations inevitably affect caching performance.

IV. CONCLUSION

Cooperative caching is a technique that allows clients to access blocks stored in the memory of other clients. This enables some of the local cache misses to be handled by other clients, offloading the server and improving the performance of the system. However, cooperative caching requires some level of coordination between the clients to maximize overall system performance. The proposed method allows clients to make local decisions based on hints, and it performs better than the previous algorithms.

REFERENCES

[1] M. Hefeeda and B. Noorizadeh, "On the benefits of cooperative proxy caching for peer-to-peer traffic," IEEE Trans. Parallel and Distributed Systems, vol. 21, no. 7, pp. 998-1010, July 2010.
[2] P. Hui, J. Crowcroft, and E. Yoneki, "Bubble Rap: Social-based forwarding in delay tolerant networks," Proc. ACM MobiHoc, 2008.
[3] S. Ioannidis, L. Massoulie, and A. Chaintreau, "Distributed caching over heterogeneous mobile networks," Proc. ACM SIGMETRICS Int'l Conf. Measurement and Modeling of Computer Systems, pp. 311-322, 2010.
[4] F. Li and J. Wu, "MOPS: Providing content-based service in disruption-tolerant networks," Proc. Int'l Conf. Distributed Computing Systems (ICDCS), pp. 526-533, 2009.
[5] T.K.R. Nkwe and M.K. Denko, "Self-optimizing cooperative caching in autonomic wireless mesh networks," Proc. IEEE Symp. Computers and Comm. (ISCC), 2009.
[6] M.J. Pitkanen and J. Ott, "Redundancy and distributed caching in mobile DTNs," Proc. ACM/IEEE Second Workshop Mobility in the Evolving Internet Architecture (MobiArch), 2007.
[7] R.K. Ravindra Raju, B. Santha Kumar, and Nagaraju Mamillapally, "Performance evaluation of CLIR, LDIS and LDCC cooperative caching schemes based on heuristic algorithms," International Journal of Engineering Research & Technology (IJERT), vol. 2, no. 5, May 2013.
[8] Wei Gao, Arun Iyengar, and Mudhakar Srivatsa, "Cooperative caching for efficient data access in disruption tolerant networks," Network Science CTA under grant W911NF-09-2-0053, 2014.
MUSIC EMOTION RECOGNITION FROM LYRICS USING DWCH FEATURES
M. Banumathi (PG Student), N.J. Nalini (Assistant Professor), and S. Palanivel (Professor)
Dept. of CSE, Annamalai University, Tamil Nadu, India
ABSTRACT
Music emotion recognition is an important research topic in computer music, which can be widely applied in fields such as multimedia retrieval, human-machine interaction, digital heritage, and digital entertainment. The main objective of this work is to develop a music emotion recognition technique using Daubechies Wavelet Coefficient Histograms (DWCH) and an auto associative neural network (AANN). The emotions considered are anger, happy, sad, and normal. The music database is collected at 22 kHz from various movies and websites related to music. For each emotion, 8 music signals are recorded, each of 5 s duration. The proposed music emotion recognition (MER) technique has two phases: i) feature extraction and ii) classification. First, the music signal is given to the feature extraction phase to extract DWCH features. Second, the extracted features are given to auto associative neural network (AANN) classifiers to categorize the emotions, and finally their performance is compared. The experimental results show that DWCH with the AANN classifier achieves a recognition rate of about 75.0%.
Keywords: Music Emotion Recognition, Daubechies
Wavelet Coefficient Histograms, Auto Associative Neural
Network.
I.INTRODUCTION
Music plays an important role in human history, even more so in the digital age. An emotion is simply a feeling or sensation caused by a person's perception of something or someone; the different emotions are shown in Fig. 1. The emotional impact of music on people and the association of music with particular emotions or 'moods' have been used in certain contexts to convey meaning, such as in movies, musicals, advertising, games, music recommendation systems, and even music therapy, music education, and music composition, among others.
Our main goal is to investigate the performance of
music emotion recognition from lyrics [1], [11]. The
emotions are divided into two types: primary emotions
and secondary emotions. Primary emotions are the
emotions considered to be universal and biologically
based. They generally include fear, anger, sad, happy,
and normal. Secondary emotions are the emotions that
develop with cognitive maturity and vary across
individuals and cultures [2], [3], [4]. All emotions are particular to humans and specific cultures. MER falls into two categories, namely the categorical approach and the dimensional approach [5]. The former divides emotion into a handful of classes and trains a classifier to predict the emotion of a song, while the latter describes emotion with the arousal and valence plane as the dimensions. There are various musical features, including MFCC, timbre, pitch, DWCH, rhythm, harmony, and spectral features; the various classifiers include SVM, AANN, GMM, HMM, FFT, and fuzzy K-NN.
The goal of this paper is to propose an efficient system for recognizing four emotions from music content. The first step is to analyze the DWCH musical features and map them into the four categories of anger, happy, normal, and sad. Second, an auto associative neural network is adopted as a classifier, trained and tested to recognize the four emotions. This paper is organized as follows: a review of the literature on music emotion recognition is given in Section II. Section III explains the DWCH feature extraction process from the input music signal and the details of the AANN model for emotion recognition. Section IV presents the experimental results of the proposed work. Section V summarizes the paper and provides future directions for the present work.
Fig.1. Emotions
II. RELATED WORK
Many works have been carried out in the literature
regarding emotion recognition using music and some of
them are described in this section. The studies [1]-[6] categorized emotions into various numbers of emotion classes and discussed the relationship between music and emotion.
Renato Panda [7] used bag-of-words features with an SVM classifier and reported a recognition rate of 65%. N.J. Nalini and S. Palanivel [8] used MFCC with AANN and SVM classifiers and achieved a recognition rate of 94.4%. Bin Zhu et al. [9] used a neural network with a genetic algorithm (GA-BP) for eight emotions and obtained a highest classification rate of 83.33%. Tao Li [10] used a new feature called DWCH together with timbre features and achieved a performance of about 80%. Yegnanarayana and Kishore [11] used a GMM classifier and achieved 84.3%. From the literature, it is understood that the AANN classifier works well for recognizing emotions from the music signal.
III. PROPOSED METHODOLOGY
The proposed work is shown in Fig. 2. The work is done in two phases: (i) feature extraction and (ii) classification. While each component of a Fourier transform is a wave of fixed frequency, each component of a wavelet transform is a wave whose frequency depends on time.
Fig. 2. Proposed work: music with lyrics → feature extraction (DWCH) → AANN classifier → recognized emotion (happy, sad, anger, normal).
The decomposition process can be iterated, with
successive approximations being decomposed in turn, so
that one signal is broken down into many lower-resolution components. This is called the wavelet decomposition tree, shown in Fig. 3.
A. Feature extraction
The Daubechies Wavelet Coefficient Histogram (DWCH) feature is based on histograms of wavelet coefficients. The Daubechies wavelets, based on the work of Ingrid Daubechies, are a family of orthogonal wavelets defining a discrete wavelet transform and characterized by a maximal number of vanishing moments for a given support [6]. With each wavelet type of this class, there is a scaling function (called the father wavelet) which generates an orthogonal multiresolution analysis. The wavelet transform is a synthesis of ideas that have emerged over many years in fields as different as mathematics and image/signal processing, and it provides good time and frequency resolution. The db8 and db10 Daubechies wavelet filters, with eight and ten levels of decomposition, are used in our experiments. The wavelet transform:
• provides good time and frequency resolution;
• can be viewed as a tool for dividing data, functions, or operators into different frequency components and then analyzing each component with a resolution matched to its scale.
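As an illustration of this kind of feature extraction, the sketch below uses the third-party PyWavelets package; the histogram bin count is an illustrative choice, not the paper's exact DWCH parameterization:

```python
import numpy as np
import pywt  # PyWavelets

def dwch_features(signal, wavelet="db10", level=10, bins=16):
    """Decompose the signal with a Daubechies wavelet and build a
    normalized histogram of the coefficients in every subband; the
    concatenated histograms form the feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs:
        hist, _ = np.histogram(c, bins=bins, density=True)
        feats.extend(hist)
    return np.array(feats)

# A 5 s clip sampled at 22 kHz (random data standing in for real music):
x = np.random.randn(22050 * 5)
features = dwch_features(x)
```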
Fig. 3 Decomposition tree
The music signal with lyrics is decomposed into many
levels from which the better recognized feature set is
taken. Ingrid Daubechies, one of the brightest stars in the
world of wavelet research, invented what are called
compactly supported orthonormal wavelets thus making
discrete wavelet analysis practicable.
The names of the Daubechies family wavelets are written
dbN, where N is the order, and db the "surname" of the
wavelet. The db1 wavelet, as mentioned above, is the
same as the Haar wavelet; the next nine members of the family each have their own wavelet function psi.
The original music signal is given to the db10 wavelet filter; the father (scaling) function and wavelet function are shown in Fig. 4.
Fig. 4. db8 wavelet filter: (a) father function, (b) wavelet function.
The db10 wavelet packet details are shown in Fig. 5, describing the psi and phi functions.

Fig. 5. db10 packet details.
B. Auto Associative Neural Network
A multilayer feedforward neural network consists of
interconnected processing units, where each unit
represents the model of an artificial neuron, and the
interconnection between two units has a weight
associated with it. AANN models are multilayer
feedforward neural network models that perform
identity mapping. An AANN consists of an input layer,
output layer and one or more hidden layers. The number
of units in the input and output layers is equal to the
dimension of the feature vector. The second and fourth
layers of the network have more units than the input
layer. The third layer is the compression layer that has
fewer units than the input layer. The activation function at the third layer may be linear or nonlinear, but the activation functions at the second and fourth layers are essentially nonlinear. A five-layer AANN model is shown in Fig. 6; its function can be split into mapping (layers 1, 2, and 3) and demapping (layers 3, 4, and 5) networks [7], [8].

Fig. 6. Architecture of the auto associative neural network model.
Given a set of feature vectors of a class, the AANN
model is trained using the back propagation learning
algorithm. The learning algorithm adjusts the weights of
the network for each feature vector to minimize the mean
squared error. It has been shown that there is a relation
between the distribution of the given data and the
training error surface captured by the network in the
input space. It is also shown that the weights of the five
layer AANN model capture the distribution of the given
data using a probability surface derived from the training
error surface.
The issues related to the architecture of AANN models
are the selection of number of hidden layers and the
number of units in hidden layers. Number of hidden
layers and processing units in hidden layers are selected
empirically. The issues related to training an AANN model with backpropagation learning are the local minima problem, suitable values for the learning rate and momentum factor, and the number of iterations or the error threshold used as the stopping criterion [8]. All these parameters are chosen empirically during training. AANN models were designed mainly for nonlinear dimensionality reduction.
During AANN training, the weights of the network are
adjusted to minimize the mean square error obtained for
each feature vector [9], [10]. If the adjustment of weights
is done for all feature vectors once, then the network is
said to be trained for one epoch. During the testing
phase, the features extracted from the test data are given
to the trained AANN model to find its match.
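A minimal sketch of such a five-layer AANN is given below, built with Keras purely for illustration; the unit counts, optimizer, and stand-in data are our placeholders (the text notes these are chosen empirically), not the paper's implementation:

```python
import numpy as np
from tensorflow import keras

def build_aann(dim, expand=38, compress=4):
    """Five-layer AANN: input, expansion, compression, expansion, output.
    Input and output sizes equal the feature dimension."""
    return keras.Sequential([
        keras.Input(shape=(dim,)),
        keras.layers.Dense(expand, activation="tanh"),      # mapping layer
        keras.layers.Dense(compress, activation="linear"),  # compression layer
        keras.layers.Dense(expand, activation="tanh"),      # demapping layer
        keras.layers.Dense(dim, activation="linear"),       # identity output
    ])

# One model per emotion, trained to reconstruct that emotion's features.
X = np.random.randn(200, 24)            # stand-in for DWCH feature vectors
model = build_aann(X.shape[1])
model.compile(optimizer="adam", loss="mse")
model.fit(X, X, epochs=500, verbose=0)  # identity mapping, 500 epochs

# Testing: a feature vector is assigned to the model (emotion) whose
# reconstruction error is lowest.
errors = np.mean((model.predict(X, verbose=0) - X) ** 2, axis=1)
```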
IV. EXPERIMENTAL RESULTS
A. DATASETS
The music dataset for training is made up of 40
songs with lyrics. The performance of the music emotion
recognition system is evaluated from the music signal.
For each category, 8 music files are collected from various CD collections in MP3 format. The MP3 files are converted into 22 kHz, 16-bit mono .wav format using the PRAAT software. The neural network is trained so that the emotions anger, happy, normal, and sad are recognized.
B. DECOMPOSITION OF THE MUSIC SIGNAL USING THE DWCH FILTER
The music is decomposed into 10 levels using the db10 filter, and the coefficient values for the different levels are shown. The original input signal is a music file of 5 s duration; it is decomposed into 10 levels using the db10 filter, the coefficient values are displayed along with the original and synthesized signals, and a better feature set with minimum coefficient values is obtained.
Fig. 8. 10-level decomposition of a happy music file.
C. TRAINING PHASE:
Training is the process of learning from the training samples by adaptively updating the network weights. For each emotion, a 5 s music file is used for training. During training, 100, 500, and 1000 epochs were tried to obtain a better training error value; since there is no considerable change in the error value after 500 epochs, 500 epochs are used to train the model in this work. In the testing phase, the model is tested with various test files.
Fig. 7. 10-level decomposition of a sad music file.
The music files of 5 s duration are decomposed into 3, 5, 7, and 10 levels; among them, 10-level decomposition using the db10 filter gives a better feature set with respect to size as well as better recognition of emotion. The feature set extracted at seven levels has 889 elements, which is too large to classify and recognize the emotion, so the signal is decomposed into more levels to get a better feature set. The feature set extracted at the 10th level has 402 elements; it gives a better feature set, with some difference from level 7, and helps in better recognition of emotions. The 10-level decomposition of a sad music file is shown in Fig. 7, and that of a happy music file in Fig. 8.
D. TESTING PHASE:
The anger, sad, happy, and normal files are tested against the trained models. The test file for each emotion is of 5 s duration. The confusion matrix is used for evaluation, and the average performance over 8 subjects is obtained.
E. PERFORMANCE MEASURE
Accuracy, or recognition rate, is defined as

Accuracy = (number of correctly predicted test samples) / (total number of test samples).
A confusion matrix, also known as a contingency table or an error matrix, reports the numbers of false positives, false negatives, true positives, and true negatives. This allows more detailed analysis than the mere proportion of correct guesses (accuracy). Accuracy alone is not a reliable metric of the real performance of a classifier, because it yields misleading results if the data set is unbalanced (that is, when the numbers of samples in the different classes vary greatly). The confusion matrix created for this proposed work is given in the table below.
Testing \ Training    Anger (%)   Sad (%)   Happy (%)   Normal (%)
Anger                      80          0           0           20
Sad                         0         60           0           40
Happy                      40          0          60            0
Normal                      0          0          20           80
Average emotion recognition rate = 75.0%.
V. SUMMARY AND CONCLUSIONS
In this paper, the four basic primary emotions anger, happy, sad, and normal were considered. The music signal database for this work was collected at 22.0 kHz from various CDs and websites. DWCH features were extracted from the music signals, and AANN classifiers were used to recognize the emotions. Training and testing were performed separately for each emotion, with 80% of the data used for training and 20% for testing. With the AANN model, the average recognition performance is about 75.0%. Future work is to improve the performance with other classifiers.
VI. REFERENCES

[1] Yi-Hsuan Yang and Homer H. Chen, Music Emotion Recognition. CRC Press, 2010.
[2] Yi-Hsuan Yang, Yu-Ching Lin, and Homer H. Chen, "A regression approach to music emotion recognition," IEEE, 2007.
[3] Youngmoo E. Kim, Erik M. Schmidt, Raymond Migneco, and Brandon G. Morton, "Music emotion recognition: A state of the art review."
[4] Y.-C. Lin, Y.-H. Yang, and H.-H. Chen, "Exploiting genre for music emotion classification," Proc. IEEE Int. Conf. Multimedia Expo, pp. 618-621, 2009.
[5] Y.-H. Yang and H. H. Chen, "Music emotion ranking," Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., pp. 1657-1660, 2009.
[6] M. Bianchini, P. Frasconi, and M. Gori, "Learning in multilayered networks used as autoassociators," IEEE Trans. Neural Networks, vol. 6, pp. 512-515, 1995.
[7] Ricardo Malheiro and Renato Panda, "Music emotion recognition from lyrics: A comparative study," Int. Workshop on Machine Learning and Music, 2013.
[8] S. Palanivel and N.J. Nalini, "Emotion recognition in music signal using AANN and SVM," Int. Journal of Computer Applications, vol. 77, no. 2, Sep. 2013.
[9] B. Yegnanarayana and S.P. Kishore, "AANN: An alternative to GMM for pattern recognition," Neural Networks, vol. 15, pp. 459-469, 2002.
[10] Tao Li and Mitsunori Ogihara, "Content-based music similarity search and emotion detection," Proc. Int. Conf. Acoustics, Speech and Signal Processing (ICASSP 2004), pp. 705-708, 2004.
[11] Y. Hu, X. Chen, and D. Yang, "Lyric-based song emotion detection with affective lexicon and fuzzy clustering method," Proc. Int. Society for Music Information Retrieval Conf., Kobe, Japan, 2009.
COOPERATIVE CACHING FOR EFFICIENT CONTENT ACCESS IN SOCIAL
WIRELESS NETWORKS
R. Jayasri1, T. Sindhu2, S. Pasupathy3
1,2 M.E. Student, Dept. of CSE, Annamalai University, Chidambaram.
3 Associate Professor, Dept. of CSE, Annamalai University, Chidambaram.
Abstract— Cooperative Caching in Social Wireless Networks (SWNETs) introduces a practical network that aims to minimize electronic data provisioning cost by using two object caching strategies, namely Split Cache replacement and benefit-based distributed caching heuristics. Cooperative caching is a mechanism in which multiple caches of the systems or devices present in the network are coordinated to form a single overall cache. Object caching in SWNETs is shown to reduce the data downloading price in networks with homogeneous and heterogeneous object demands, which depends on the service and pricing needs among many stakeholders, including Content Providers (CP), Communication Service Providers (CSP), and End Consumers (EC). Content provisioning cost is the cost involved when users download content from the server, paid either by the content provider or by the end consumers. Analytical and simulation models are constructed for analyzing the proposed caching policies in the presence of selfish users that deviate from network-wide cost-optimal policies.
Keywords— Cooperative caching, Social Wireless Networks,
Split cache replacement, Benefit based distributed caching,
content provisioning cost.
I. INTRODUCTION
Due to the widespread use of the Internet, the need for content sharing is growing at a fast rate. The emergence of data-enabled mobile devices and wireless-enabled data applications has given rise to new content distribution models in today's mobile world. The list of such devices includes Apple's iPhone, Google's Android, Windows devices, Amazon's Kindle, and electronic book readers. The data applications include e-book and magazine readers and mobile phone apps. Apple's App Store provides 100,000 apps and Google's app store provides 700,000 apps that are downloadable by smartphone users. In the usual download scenario, a user downloads data directly from the content provider's server over a communication service provider's network. Downloading content through this network involves a cost, which must be paid either by the end users or by the content provider.
Social Wireless Networks are formed using ad hoc wireless connections between devices when users carrying their mobile devices physically gather in settings like a university campus, a workplace, or other public places. In such networks, a device would first search the local SWNET for content before downloading it from the server. This reduces the content downloading cost, because the download cost charged by the CSP is avoided when the content is found within the local SWNET. This mechanism is called cooperative caching. A social wireless network is social networking where persons with similar interests connect with one another through their mobile phones and/or tablets.
For contents with different popularity, a greedy approach for each node would be to store as many distinct contents as its storage allows; this results in heavy network-wide content duplication. In the fully cooperative approach, a node would try to maximize the total number of unique contents, thus avoiding duplication. Neither of these two approaches minimizes the cost. This paper shows that an object placement policy in between these approaches can minimize the cost, and the proposed caching algorithms strive to attain this policy with the aim of minimizing content provisioning cost.
Fig. 1. Content access from an SWNET in a University campus
II. RELATED WORKS
E. Cohen et al. [2] proposed a technique to reduce the time for users retrieving documents on the World Wide Web; a widely used mechanism is caching, both at the client's browser and, more profitably, at a proxy. An important component of such a policy is to estimate next-request times. Such statistics can be gathered either locally or at the server and supplied to the proxy. The experiments show that utilizing the server's knowledge of access patterns can greatly improve the effectiveness of proxy caches. The experimental evaluation and proposed policies use a price function framework, which allows different replacement policies to be evaluated and compared using server logs, without having to construct a full model of each client's cache.
Suhani N. Trambadiya et al. [3] proposed a caching scheme called Group Caching that permits each mobile host and its 1-hop neighbours to form a group. Within this group, caching status is exchanged and maintained periodically. The cache space of mobile hosts is used efficiently, and therefore the redundancy of cached data and the average access latency decrease.
Hyeong Ho Lee et al. [5] proposed the concept of Neighbour Caching (NC), which uses the cache space of idle neighbours for caching tasks. In Neighbour Caching, when a node gets data from a faraway node, it puts the data in its own caching space for reuse. This operation needs to remove the least important information from the cache based on a replacement algorithm. With this scheme, the data to be evicted is stored in an idle neighbour node's storage; if the node needs the data again in the near future, it requests it not from the far distant source node but from the nearby neighbour that keeps the copy. The NC policy uses the available cache space of neighbours to improve performance; however, it lacks an efficient cooperative caching protocol between the mobile hosts.
M. Korupolu and M. Dahlin [6] explored the design space of cooperative placement and replacement algorithms. The conclusion from their experiments is that cooperative placement can significantly improve performance compared to local replacement algorithms, particularly when the space of individual caches is limited compared to the universe of objects.
L. Yin and G. Cao [7] proposed cooperative caching that allows the distribution and coordination of cached data among a large number of nodes. Due to the mobility and resource constraints of ad hoc networks, caching techniques designed for wired networks may not be applicable. Cooperative caching techniques are designed and evaluated to efficiently support data access in ad hoc networks. Two schemes are proposed: CacheData, which caches the data, and CachePath, which caches the data path. After analyzing the performance of these schemes, a hybrid approach (HybridCache) is proposed which can further improve performance by taking advantage of CacheData and CachePath while avoiding their weaknesses. The results show that the proposed schemes reduce the query delay and message complexity when compared to other caching schemes.
N. Chand, R. C. Joshi, et al. [12] proposed the Zone Cooperative policy for data discovery. Each client has a cache to store frequently used data items. The data items in the cache satisfy not only the client's own demands but also the object demands passing through it from other clients. For a data miss in the local cache, the client first searches for the data in its zone before forwarding the request to the next client that lies on a path towards the server. However, the latency may become longer if the neighbours of intermediate nodes do not have a copy of the requested data.
Wei Gao [13] considered Disruption Tolerant Networks (DTNs), which are characterized by low node density, unpredictable node mobility, and lack of global network information. An approach to support cooperative caching in DTNs was proposed which enables the distribution and coordination of cached objects among a large number of nodes and reduces data access delay. The basic idea is to cache data at a set of network central locations, which can be easily accessed by other nodes in the network. An efficient scheme was proposed that ensures appropriate NCL selection based on a probabilistic selection metric and coordinates multiple caching nodes to optimize the tradeoff between data accessibility and caching overhead. The results show that the approach improves data access performance compared to existing systems.
III. PROPOSED COOPERATIVE CACHING SCHEME
This paper develops network, service, and pricing models which are then used for creating two content caching policies for minimizing the content downloading price in networks with homogeneous and heterogeneous object demands. The two object caching strategies are Split Cache, a cooperative caching strategy, and Distributed Benefit, a benefit-based strategy. Split Cache is used for homogeneous content demands and Distributed Benefit for heterogeneous content demands. Analytical and simulation models are built for analyzing the proposed caching policies. The numerical results for both strategies are validated using simulation and compared with a series of traditional caching policies.
A. Network Model
There are two types of SWNETs. The first involves stationary SWNET partitions: after a partition forms, it is maintained long enough that cooperative object caches can be formed and reach steady state. The second type explores what happens when the stationary assumption is relaxed. To analyze this effect, caching is applied to SWNETs formed using human interaction traces obtained from a set of real SWNET nodes.
B. Search Model
The requested file is first searched for in the local cache. If the file is not in the local cache, the node searches for the object within its SWNET partition using a limited broadcast message. If that search also fails, the file is downloaded from the CP's server. In this paper, objects such as electronic books, music, etc. are modelled.
C. Pricing Model
The pricing model states that the CP pays a download cost to the CSP when a user downloads data from the CP's server through the CSP's cellular network. Whenever an EC provides a locally cached object to another EC within its local SWNET partition, the provider EC is paid a rebate by the CP. This rebate can also be distributed among the provider EC and the ECs of all the intermediate mobile devices that take part in content forwarding. The selling price is directly paid to the CP by an EC. A digitally signed rebate framework needs to be supported so that the rebate recipient ECs can electronically validate and redeem the rebate with the CP. The proposed caching mechanism is built using these two mechanisms.
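A toy sketch of this cost flow is given below; the symbol names and example values are ours, not the paper's:

```python
def provisioning_cost(server_downloads, local_provisions, c_download, c_rebate):
    """Total cost borne by the CP: a download cost paid to the CSP for
    every object fetched from the CP's server, plus a rebate paid to the
    providing EC for every object served from the local SWNET partition."""
    return server_downloads * c_download + local_provisions * c_rebate

# Caching pays off whenever the rebate is cheaper than a download:
cost = provisioning_cost(server_downloads=300, local_provisions=700,
                         c_download=1.0, c_rebate=0.3)
```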
Fig. 2. Content and Cost flow model.
D. Request Generation Model
There are two types of request generation models, namely homogeneous and heterogeneous. In the homogeneous model, all mobile devices maintain the same data request rate and pattern, which follow a Zipf distribution. In the heterogeneous model, each mobile device has different requests.
E. Caching Strategies
1) Split Cache Replacement Policy
To realize the optimal object placement under the homogeneous object request model, the Split Cache policy is proposed, in which the available cache space is divided into a duplicate segment and a unique segment. In the first segment, the most popular objects are stored without regard to object duplication; in the second segment, only objects that are unique in the partition are allowed to be stored.
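A minimal sketch of this insertion rule follows; the split factor, the eviction rule, and the `is_unique` check (whether any other node in the partition caches the object) are illustrative assumptions, not the paper's exact algorithm:

```python
def split_cache_insert(dup_seg, uniq_seg, obj, popularity, is_unique,
                       capacity, split_factor):
    """Sketch of Split Cache: the duplicate segment keeps the most popular
    objects regardless of duplication; the unique segment keeps only
    objects not cached elsewhere in the partition."""
    dup_cap = int(split_factor * capacity)
    uniq_cap = capacity - dup_cap
    if len(dup_seg) < dup_cap:
        dup_seg.add(obj)
    elif dup_seg and popularity[obj] > min(popularity[o] for o in dup_seg):
        dup_seg.remove(min(dup_seg, key=popularity.get))  # evict least popular
        dup_seg.add(obj)
    elif is_unique(obj):
        if len(uniq_seg) >= uniq_cap and uniq_seg:
            uniq_seg.remove(min(uniq_seg, key=popularity.get))
        uniq_seg.add(obj)
```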
2) Benefit based Distributed Heuristics
When there is not enough space in the cache to accommodate a new object, the already present object with the lowest benefit is identified and replaced with the new object, but only if the new object offers a higher total benefit. The benefit of a newly downloaded object is calculated based on its source.
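A corresponding sketch of this eviction rule (names are ours; the benefit function itself is assumed to be supplied):

```python
def benefit_based_insert(cache, new_obj, benefit, capacity):
    """Sketch of the benefit-based heuristic: when the cache is full, the
    resident object with the lowest benefit is evicted, but only if the
    new object's benefit is higher. `benefit` maps object -> value
    computed from its source (server download vs. local copy)."""
    if len(cache) < capacity:
        cache.add(new_obj)
        return
    victim = min(cache, key=benefit.get)
    if benefit[new_obj] > benefit[victim]:
        cache.remove(victim)
        cache.add(new_obj)
```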
IV. EXPERIMENTAL RESULTS

A. Simulation Model

The simulation is performed on NS2 with the help of the CMU wireless extension. The AODV routing protocol is used as the underlying ad hoc routing algorithm. The simulation time is set to 6000 seconds. The number of mobile hosts is set to 50 in a fixed area. We assume that the wireless bandwidth is 2 MB/s and the radio range is 100 m. There are 100 data items distributed uniformly among all mobile hosts. The number of hot data objects is set to 200, and all hot data objects are distributed uniformly among all MHs. The probability of queries for the data is set to 80%. The query rate of MHs is set to 0.2/second. Each node stores data in its cache. The agent is selected randomly at some time interval. There are 4 Social Wireless Network partitions, 2 CSP networks, and 1 CP node.

TABLE I. Simulation Parameters

Simulator:                     Network Simulator (NS2)
Simulation Time:               6000 seconds
Network Size:                  1500 m x 500 m
Mobile Host:                   100 nodes
Transmission range of MH:      100 m
Mobility Model:                Random way point
Speed of Mobile host:          1 ~ 10 m/s randomly
Total No. of data item set:    1000 data items
Average query rate:            0.2 / second
Probability of query in data:  80%
Data size:                     10 KB
Cache Size:                    200 KB, 400 KB, 600 KB, 800 KB, 1000 KB, 1200 KB, 1400 KB
Performance metrics:           Hit rates, split factor, and provisioning cost
Replacement Policy:            Split cache
B. Simulation Results
Fig 3. Topology Creation
Fig 4. Agent discovery
V. PERFORMANCE EVALUATION
A. Hit rates and Provisioning cost
Fig 5. Transmission of packets between nodes
Fig 6. Hit rates as a function of split factor
Fig 7. Provisioning cost as a function of split factor
VI. CONCLUSION
A cooperative caching strategy for provisioning cost minimization in Social Wireless Networks was developed. A split cache replacement policy was proposed and evaluated using NS2 simulation.
REFERENCES

[1] M. Zhao et al., "Empirical study on human mobility for mobile wireless networks," Proc. IEEE Military Comm. Conf., 2008.
[2] E. Cohen et al., "Evaluating server-assisted cache replacement in the web," Proc. Sixth Ann. European Symp. Algorithms, pp. 307-319, 1998.
[3] Suhani N. Trambadiya et al., "Group Caching: A novel cooperative caching scheme for mobile ad hoc networks," International Journal of Engineering Research and Development, vol. 6, no. 11, pp. 23-30, Apr. 2013.
[4] A. Chaintreau et al., "Impact of human mobility on opportunistic forwarding algorithms," IEEE Trans. Mobile Computing, vol. 6, no. 6, pp. 606-620, June 2007.
[5] Joonho Cho et al., "Neighbor caching in multi-hop wireless ad hoc networks," IEEE Communications Letters, vol. 7, no. 11, pp. 525-527, Nov. 2003.
[6] M. Korupolu and M. Dahlin, "Coordinated placement and replacement for large-scale distributed caches," IEEE Trans. Knowledge and Data Eng., vol. 14, no. 6, pp. 1317-1329, Nov. 2002.
[7] L. Yin and G. Cao, "Supporting cooperative caching in ad hoc networks," IEEE Trans. Mobile Computing, vol. 5, no. 1, pp. 77-89, Jan. 2006.
[8] F. Sailhan and V. Issarny, "Cooperative caching in ad hoc networks," Proc. Fourth Int'l Conf. Mobile Data Management, pp. 13-28, 2003.
[9] B. Chun et al., "Selfish caching in distributed systems: A game-theoretic analysis," Proc. 23rd ACM Symp. Principles of Distributed Computing, 2004.
[10] Jing Zhao et al., "Cooperative caching in wireless P2P networks: Design, implementation, and evaluation," IEEE Trans. Parallel and Distributed Systems, vol. 21, no. 2, 2010.
[11] David Dominguez et al., "Using evolutive summary counters for efficient cooperative caching in search engines," IEEE Trans. Parallel and Distributed Systems, vol. 23, no. 4, 2012.
[12] N. Chand, R. C. Joshi, and M. Misra, "Efficient cooperative caching in ad hoc networks," Comsware 2006, First Int'l Conf. Communication System Software and Middleware, pp. 1-8, Jan. 2006.
[13] Wei Gao, "Cooperative caching for efficient data access in disruption tolerant networks," IEEE Trans. Mobile Computing, 2014.
[14] S. Banarjee and S. Karforma, "A prototype design for DRM based credit card transaction in e-commerce," Ubiquity, vol. 2008.
[15] C. Perkins and E. Royer, "Ad-hoc on-demand distance vector routing," Proc. IEEE Second Workshop Mobile Systems and Applications, 1999.
[16] S. Podlipnig and L. Boszormenyi, "A survey of web cache replacement strategies," ACM Computing Surveys, vol. 6, no. 6, pp. 606-620, June 2007.
[17] Y. Du, S. Gupta, and G. Varsamopoulos, "Improving on-demand data access efficiency in MANETs with cooperative caching," Ad Hoc Networks, vol. 7, pp. 579-598, May 2009.
[18] M. Taghizadeh, A. Plummer, A. Aqel, and S. Biswas, "Optimal cooperative caching in social wireless networks," Proc. IEEE Global Telecomm. Conf. (GlobeCom), 2010.
Location Based Efficient and Secure Routing Protocol
T. Sindhu1, R. Jayasri2, S. Pasupathy3
1,2 M.E. Student, Dept. of CSE, Annamalai University, Tamil Nadu, India.
3 Associate Professor, Dept. of CSE, Annamalai University, Tamil Nadu, India.
Abstract— In location-sharing-based applications, the privacy of a user's location or location preferences, with respect to other users and the third-party service provider, is a critical concern. However, existing anonymity protocols incur significantly high cost and pose problems in mobile computing and secure computing. Here, location-sharing-based service privacy is addressed by implementing the PPFRVP protocol, which focuses on the Fair Rendez-Vous Point (FRVP) problem to provide secure location-based sharing.
I. INTRODUCTION
Mobile computing is human-computer interaction in which a computer is expected to be transported during normal usage. Mobile computing involves mobile communication, mobile hardware, and mobile software. Communication issues include ad hoc and infrastructure networks as well as communication properties and protocols, data formats, and concrete technologies. Hardware covers mobile devices or device components. Mobile software deals with the characteristics and requirements of mobile applications [1]. Mobile computing is any type of computing which uses the Intranet or Internet and the respective communication links, such as WAN, LAN, WLAN, etc. Mobile computers may form a wireless personal area network or a piconet.
The rapid adoption of smartphone technologies in urban communities has enabled mobile users to utilize context-aware services on their devices. Service providers take advantage of this dynamic and ever-growing technology landscape by proposing innovative context-dependent services for mobile subscribers. Location-Based Services (LBS) are an emerging class of such services that provide location-based sharing.
Privacy of a user's location or location preferences, with respect to other users and the third-party service provider, is a critical concern in such location-sharing-based applications [1]. For instance, such information can be used to de-anonymize users and their availabilities, to track their preferences, or to identify their social networks. The Fair Rendez-Vous Point (FRVP) problem is an important vehicle for addressing such privacy issues, and the privacy issue in the FRVP problem is representative of the relevant privacy threats in LSBSs. The FRVP problem is an optimization
problem, specifically the k-center problem, and it analytically outlines the privacy requirements of the participants with respect to each other and with respect to the solver. In the algorithms for solving this formulation of the FRVP problem in a privacy-preserving fashion, each user participates by providing only a single location preference to the FRVP solver or the service provider.
II.RELATED WORK
In modern mobile networks, users increasingly share their location with third parties in return for location-based services. In this way, users obtain services customized to their location. Yet, such communications leak location information about users. Even if the users make use of pseudonyms, the operators of location-based services may be able to identify them and thus harm their privacy. In this paper, we present an analysis of the erosion of privacy caused by the use of location-based services, and we detail and enumerate the privacy risks induced by their use. In this work, we consider a model that matches the common use of LBSs: we do not assume the presence of privacy-preserving mechanisms, and we consider that users access LBSs on a regular basis (but not continuously). In this setting, we aim at quantifying the privacy risk caused by LBSs; to do so, we experiment with real mobility traces and measure the dynamics and erosion of user privacy in such systems.
The author proposed a personalized k-anonymity model for protecting location privacy against various privacy threats arising from location information sharing. The model has two unique features [3]. First, it provides a unified privacy personalization framework to support location k-anonymity for a wide range of users with context-sensitive personalized privacy requirements. This framework enables each mobile node to specify the minimum level of anonymity it desires as well as the highest temporal and spatial resolutions it is willing to tolerate when requesting k-anonymity-preserving location-based services (LBSs). Second, it devises an efficient message perturbation engine, run by the location protection broker on a trusted server, which carries out location anonymization on mobile users' LBS request messages, such as identity removal and spatio-temporal cloaking of location information. A suite of scalable and efficient spatio-temporal cloaking algorithms, called Clique Cloak algorithms, provides high-quality personalized location k-anonymity, aiming at avoiding or reducing known location privacy threats before forwarding requests to the LBS provider(s). The effectiveness of the Clique Cloak algorithms is studied under various conditions using realistic location data synthetically generated from real road maps and traffic volume data.
Location-sharing-based services (LSBSs) allow users to share their location with their friends in a sporadic manner. In currently deployed LSBSs, users must disclose their location to the service provider in order to share it with their friends [7]. This default disclosure of location data introduces privacy risks. The authors define the security properties that a privacy-preserving LSBS should fulfil and propose two constructions: first, a construction based on identity-based broadcast encryption (IBBE) in which the service provider does not learn the user's location but learns which other users are allowed to receive a location update; second, a construction based on anonymous IBBE in which the service provider does not learn the latter either. As an advantage over previous work, in these schemes the LSBS provider does not need to perform any operations to compute the reply to a location data request, but only needs to forward IBBE ciphertexts to the receivers. Both constructions are implemented, and a performance analysis shows their practicality. Furthermore, the schemes are extended such that the service provider, by performing some verification work, is able to collect privacy-preserving aggregate statistics on the locations users share with each other.
III.PROPOSED SYSTEM
In the proposed system, anonymous observers can try to hack the data at a meeting location where a group of users is engaged in data sharing. The problem is that, during data sharing in a group meeting, hackers will try to access the IDs of other users, which degrades the robustness of the network. In order to provide high anonymity for the source, destination, and routes, the PPFRVP protocol, which focuses on the Fair Rendez-Vous Point (FRVP) problem, is implemented. It can avoid timing attacks and intersection attacks because of its non-fixed routing path for each source and destination pair. Two algorithms are used for addressing the privacy issues: PPFRVP and the SHA algorithm.
The proposed system is divided into:
• User privacy
• Server privacy
• The PPFRVP algorithm
A. User privacy
The user-privacy of the PPFRVP algorithm measures the probabilistic gain of learning the preferred location of at least one other user, except the final fair rendez-vous location (the identifiability advantage of the LDS). It is assessed during data sharing and user participation in the group; here the probabilistic advantage of linking a user identity to the correct location (the distance-linkability advantage of the LDS) is measured, i.e., whether, during user participation in a meeting, the location of and the adjacent distance between the users can be identified correctly.
B. Server privacy
An execution of the PPFRVP algorithm is server-private if the identifiability advantage, the distance-linkability advantage, and the coordinate-linkability advantage of an LDS are all negligible. In practice, users will execute the PPFRVP protocol multiple times, with either similar or totally different sets of participating users, and with the same or a different location preference in each execution instance.
C. The PPFRVP protocol
The PPFRVP (Privacy-Preserving Fair Rendez-Vous Point) protocol has two main modules: the distance computation module and the MAX module. The distance computation module uses either the BGN-distance or the Paillier-ElGamal distance protocol. Here, the SHA algorithm is used for hashing the data, which preserves its integrity during sharing. In the MAX computation, data values are hidden inside the encrypted elements; this protects the internal order (the inequalities) among the pairwise distances from each user to all other users. The protocol addresses the privacy issue in LSBSs by focusing on a specific problem called the Fair Rendez-Vous Point (FRVP) problem: given a set of user location preferences, determine a location among the proposed ones such that the maximum distance between this location and all other users' locations is minimized, i.e., it is fair to all users.
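For intuition, here is a plaintext (non-private) sketch of the FRVP objective alone; the actual PPFRVP protocol evaluates this over encrypted inputs using the BGN or Paillier-ElGamal modules, which are omitted here:

```python
import math

def frvp(preferred):
    """Pick, among the users' preferred locations, the one minimizing
    the maximum distance to all users' locations (fairness objective).
    `preferred` is a list of (x, y) coordinates; requires Python 3.8+."""
    def max_dist(c):
        return max(math.dist(c, q) for q in preferred)
    return min(preferred, key=max_dist)

# Three users proposing three candidate rendez-vous points:
meeting_point = frvp([(0, 0), (4, 0), (2, 3)])
```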
Fig. 1.System Architecture
IV.SIMULATION RESULTS
Fig.2. User Privacy
Fig. 5. Database update
Fig.3. Server Privacy
Fig.4. Server Privacy Details
ISSN: 2348–8387
Fig.6. Authentication
Fig.7. Privacy under Multiple Dependent Executions
Fig.8. PPFRVP protocol
V. CONCLUSION
Nowadays, location-based sharing services are used by millions of users on mobile computing and secure computing platforms. In this paper, location-based sharing is carried out, in which a number of users in a group share data (location) over a network. There are a number of issues in such sharing, and they are addressed by implementing the PPFRVP protocol, in which the users' identities and the distances between users within a group are protected by checking user authentication. The Secure Hash Algorithm (SHA) is used for hashing the data to be shared, which preserves its privacy and integrity. This ensures that privacy is maintained and preserved in location-based sharing using the proposed techniques.
REFERENCES

[1] P. Golle and K. Partridge, "On the anonymity of home/work location pairs," Proc. 7th Int. Conf. Pervasive Computing, 2009, pp. 390-397.
[2] J. Freudiger, R. Shokri, and J.-P. Hubaux, "Evaluating the privacy risk of location-based services," Proc. 15th Int. Conf. Financial Cryptography and Data Security, 2011, pp. 31-46.
[3] J. Freudiger, M. Jadliwala, J.-P. Hubaux, V. Niemi, P. Ginzboorg, and I. Aad, "Privacy of community pseudonyms in wireless peer-to-peer networks," Mobile Netw. Appl., vol. 18, no. 3, pp. 413-428, 2012.
[4] (2011, Nov.). Please Rob Me [Online]. Available: http://pleaserobme.com/
[5] J. Krumm, "A survey of computational location privacy," Personal Ubiquitous Comput., vol. 13, no. 6, pp. 391-399, 2009.
[6] V. Vazirani, Approximation Algorithms. New York, NY, USA: Springer-Verlag, 2001.
[7] I. Bilogrevic, M. Jadliwala, K. Kalkan, J.-P. Hubaux, and I. Aad, "Privacy in mobile computing for location-sharing-based services," Proc. 11th Int. Conf. PETS, 2011, pp. 77-96.
[8] (2011, Nov.). UTM Coordinate System [Online]. Available: https://www.education.psu.edu/natureofgeoinfo/c2_p21.html
[9] G. Ghinita, P. Kalnis, A. Khoshgozaran, C. Shahabi, and K. Tan, "Private queries in location based services: Anonymizers are not necessary," Proc. ACM SIGMOD, 2008, pp. 121-132.
[10] M. Jadliwala, S. Zhong, S. J. Upadhyaya, C. Qiao, and J.-P. Hubaux, "Secure distance-based localization in the presence of cheating beacon nodes," IEEE Trans. Mobile Comput., vol. 9, no. 6, pp. 810-823, Jun. 2010.
[11] C.-H. O. Chen et al., "GAnGS: Gather, authenticate 'n group securely," Proc. 14th ACM Int. Conf. Mobile Computing and Networking, 2008, pp. 92-103.
[12] Y.-H. Lin et al., "SPATE: Small-group PKI-less authenticated trust establishment," Proc. 7th Int. Conf. MobiSys, 2009, pp. 1-14.
[13] O. Goldreich, Foundations of Cryptography: Basic Applications. Cambridge, U.K.: Cambridge Univ. Press, 2004.
[14] A. Loukas, D. Damopoulos, S. A. Menesidou, M. E. Skarkala, G. Kambourakis, and S. Gritzalis, "MILC: A secure and privacy-preserving mobile instant locator with chatting," Inf. Syst. Frontiers, vol. 14, no. 3, pp. 481-497, 2012.
[15] D. Boneh, E.-J. Goh, and K. Nissim, "Evaluating 2-DNF formulas on ciphertexts," Proc. TCC, 2005, pp. 325-341.
[16] T. ElGamal, "A public key cryptosystem and a signature scheme based on discrete logarithms," IEEE Trans. Inf. Theory, vol. 31, no. 4, pp. 473-481, Jul. 1985.
[17] P. Paillier, "Public-key cryptosystems based on composite degree residuosity classes," Proc. 17th Int. Conf. Theory and Application of Cryptographic Techniques, 1999, pp. 223-238.
[18] M. Robshaw and Y. Yin, "Elliptic curve cryptosystems," RSA Lab., Bedford, MA, USA, Tech. Rep., 1997.
[19] Y. Kaneda, T. Okuhira, T. Ishihara, K. Hisazumi, T. Kamiyama, and M. Katagiri, "A run-time power analysis method using OS-observable parameters for mobile terminals," Proc. ICESIT, 2010, pp. 1-6.
[20] M. Chignell, A. Quan-Haase, and J. Gwizdka, "The privacy attitudes questionnaire (PAQ): Initial development and validation," Proc. Human Factors and Ergonomics Society Annu. Meeting, 2003.
SECURE VIDEO WATERMARKING
METHOD BASED ON QR CODE AND
WAVELET TRANSFORM TECHNIQUES
G. PRABAKARAN, R. BHAVANI and J. RAJA*
Assistant Professor, Professor, PG Student,
Dept. of Computer Science & Engineering,
Annamalai University,
Tamilnadu, India
Abstract:
Digital video is one of the most popular multimedia data types exchanged on the Internet. For commercial activity on the Internet and in the media, protection is required to enhance security. The combination of a 2D barcode with a digital watermark is a widely studied topic in the security field. Watermarking is a technology used for copyright protection and authentication of digital content. We propose a new invisible, non-blind video watermarking technique based on Quick Response (QR) codes. In one video frame, we embed a text message as a QR code image. Decompositions by the Discrete Wavelet Transform (DWT) and the Integer Wavelet Transform (IWT) are used: non-blind video watermarking is performed in the DWT domain, and in the IWT domain using a key known as the Arnold transform. First, the Arnold transform is performed to scramble the payload image (QR code); then both the cover and payload images are decomposed using the Integer Wavelet Transform (IWT). The experimental results achieve acceptable imperceptibility and Peak Signal to Noise Ratio (PSNR) values.
Keywords: Video Watermarking, DWT, Arnold
Transformation, IWT, QR-Code.
I. Introduction
The main idea of watermarking is the encoding of secret information into data under the assumption that others cannot see or read the secret information hidden in the data. The detector then checks whether or not the logo is encoded in the data.
Watermarks can be classified based on the type of document to be watermarked. Text watermarking uses line-shift coding, word-shift coding, or feature coding. In a visible watermark, the information is visible in the picture or video; typically, the information is text or a logo which identifies the owner of the media. An invisible watermark is an overlaid image which cannot be seen, but which can be detected algorithmically. A dual watermark is a combination of a visible and an invisible watermark; in this type, the invisible watermark is used as a backup for the visible watermark and can be used to verify ownership. A QR code is a two-dimensional barcode invented by the Japanese corporation Denso Wave.
Figure 1: 1- D bar code
Figure 2: 2-D QR Code
Information is encoded in both the vertical and horizontal directions, so a QR code (Figure 2) holds a considerably greater volume of information, up to several hundred times more data, than the traditional 1-D barcode shown in Figure 1. A QR code can encode many types of characters, such as numeric and alphabetic characters, Kanji, Kana, Hiragana, symbols, binary, and control codes.
II. RELATED WORK
Some related work on video watermarking is given below:
Hirak Kumar Maity and Santi Prasad Maity [1] have proposed a joint robust and reversible watermarking scheme that shows its efficiency in terms of integrity, authenticity and robustness. Digital watermarking has an important application in protecting and authenticating medical images. One of the most commonly used methods for this is region-based operation, i.e., the whole image is partitioned into two regions, called the region of interest (ROI) and the region of non-interest (RONI).
Vinay Pandey et al. [2] have proposed a scheme for protecting the transmission of medical images. The presented algorithms are applied to images. This work presents a new method that combines image cryptography, data hiding and steganography techniques for denoised and safe image transmission.
Hamid Shojanazeri et al. [3] have surveyed the state of the art in video watermarking techniques. Their work provides a critical review of the various available techniques. In addition, it addresses the main key performance indicators, which include robustness, speed, capacity, fidelity, imperceptibility and computational complexity.
Peter Kieseberg et al. [4] have examined QR codes and how they can be used to attack both human interaction and automated systems. As the encoded information is intended to be machine readable only, a human cannot distinguish between a valid and a maliciously manipulated QR code. While humans might fall for phishing attacks, automated readers are most likely vulnerable to Structured Query Language (SQL) injections and command injections. Their contribution consists of an analysis of the QR code as an attack vector, showing different attack strategies from the attacker's point of view and exploring their possible consequences.
Emad E. Abdallah et al. [5] have presented a robust, hybrid non-blind MPEG video watermarking technique based on a high-order tensor singular value decomposition and the Discrete Wavelet Transform (DWT). The core idea behind their technique is to use scene change analysis to embed the watermark repeatedly into the singular values of high-order tensors computed from the DWT coefficients of selected frames of each scene. Experimental results on video sequences are presented to illustrate the effectiveness of the approach in terms of perceptual invisibility and robustness against attacks.
Fatema Akhter [6] has proposed a new approach for information hiding in digital images in the spatial domain: three bits of the message are embedded in a pixel using the Lucas number system, but only one bit is allowed to be altered. The proposed method has a larger data embedding capacity and a higher peak signal-to-noise ratio than existing methods and is hardly detectable by steganalysis algorithms.
Chao Wang et al. [7] proposed a method to increase the embedding speed of matrix embedding by extending the matrix via some referential columns. Compared with the original matrix embedding, the proposed method can exponentially reduce the computational complexity for an equal increment in embedding efficiency. The method achieves higher embedding efficiency and faster embedding speed than previous fast matrix embedding methods.
III. METHODOLOGY
A watermark is an invisible signature embedded in an image to show authenticity or proof of ownership. Here, we discuss the QR code, MPEG compression, SVD, DWT, IWT and the Arnold Transform used in the proposed method.
A. QR Code
The standard specifies 40 versions (sizes) of the QR code, from the smallest, 21×21 modules, up to 177×177 modules. A further advantage of the QR code is its relatively small size for a given amount of information. The QR code is available in 40 different square sizes, each with a user-selectable error correction level in four steps (referred to as error correction levels L, M, Q and H). With the highest level of error correction, up to nearly 30% of the code words can be damaged and still be restored. The maximum capacity of a QR code depends on the encoding scheme (using the lowest possible error correction overhead).
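For illustration, the payload QR code can be generated with any standard encoder; a minimal sketch, assuming the third-party Python qrcode package (with Pillow installed), is:

```python
# Minimal sketch: generating a QR code image for the watermark payload,
# using the third-party "qrcode" package (pip install qrcode[pil]).
import qrcode

qr = qrcode.QRCode(
    version=None,                                        # let the library pick the smallest of the 40 sizes
    error_correction=qrcode.constants.ERROR_CORRECT_H,   # level H: ~30% of codewords recoverable
    box_size=4,
    border=4,
)
qr.add_data("Annamalai University")                      # hypothetical company-name payload
qr.make(fit=True)
img = qr.make_image(fill_color="black", back_color="white")
img.save("payload_qr.png")
```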
B. MPEG-2 Video Compression
MPEG-2 Video is similar to MPEG-1, but also provides support for interlaced video (the format used by analog broadcast TV systems). All standards-conforming MPEG-2 Video decoders are fully capable of playing back MPEG-1 Video streams. An HDTV camera generates a raw video stream of 24 × 1920 × 1080 × 3 = 149,299,200 bytes per second for 24 fps video. This stream must be compressed if digital TV is to fit in the bandwidth of available TV channels and if movies are to fit on DVDs. TV cameras used in broadcasting usually generate 25 pictures a second (in Europe).
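As a worked check of the raw-stream arithmetic above:

```python
# Worked example: raw (uncompressed) video data rates from the figures in the text.
width, height, bytes_per_pixel = 1920, 1080, 3   # 8-bit RGB HDTV frame

fps_film = 24                                    # film-rate source
fps_pal = 25                                     # European broadcast cameras

raw_film = fps_film * width * height * bytes_per_pixel
raw_pal = fps_pal * width * height * bytes_per_pixel

print(f"24 fps: {raw_film:,} bytes/s (~{raw_film * 8 / 1e6:.0f} Mbit/s)")
print(f"25 fps: {raw_pal:,} bytes/s")
# 24 fps: 149,299,200 bytes/s (~1194 Mbit/s)
```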
Digital television requires that these pictures be digitized so that they can be processed by computer hardware. MPEG-2 specifies that the raw frames be compressed into three kinds of frames: intra-coded frames (I-frames), predictive-coded frames (P-frames), and bidirectionally-predictive-coded frames (B-frames). An I-frame is a compressed version of a single uncompressed (raw) frame. It takes advantage of spatial redundancy and of the inability of the eye to detect certain changes in the image. Unlike P-frames and B-frames, I-frames do not depend on data in the preceding or the following frames. The raw frame is divided into 8 pixel by 8 pixel blocks, and the discrete cosine transform is applied to each block; the result is an 8 by 8 matrix of coefficients.
The transform converts spatial variations into frequency variations, but it does not change the information in the block; the original block can be recreated exactly by applying the inverse cosine transform. The advantage of doing this is that the image can now be simplified by quantizing the coefficients. Many of the coefficients, usually the higher frequency components, will then be zero. The penalty of this step is the loss of some subtle distinctions in brightness and colour.
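A minimal numpy sketch of the block transform and quantization just described (the uniform quantizer step q is an assumption here; real MPEG-2 uses per-coefficient quantization matrices):

```python
# Sketch of the I-frame coding step described above: each 8x8 block is
# transformed with the 2-D DCT and quantized, zeroing many high-frequency
# coefficients.
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis: C[k, m] = s(k) * cos(pi * (2m + 1) * k / (2n))
    k = np.arange(n).reshape(-1, 1)
    m = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

C = dct_matrix()
block = np.random.randint(0, 256, (8, 8)).astype(float)   # one raw 8x8 pixel block

coeffs = C @ block @ C.T              # forward 2-D DCT: spatial -> frequency
q = 16.0                              # coarse uniform quantizer (illustrative)
quantized = np.round(coeffs / q) * q  # most high-frequency coefficients become 0
recon = C.T @ quantized @ C           # inverse DCT recreates an approximate block

print("nonzero coefficients:", np.count_nonzero(quantized), "of 64")
print("max reconstruction error:", np.abs(block - recon).max())
```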
C. Integer Wavelet Transform (IWT)
In this paper, the Haar integer wavelet transform is applied to the cover image for embedding the secret data bits. The first-level IWT yields the high (H) and low (L) frequency wavelet coefficients of the cover image. The high-frequency wavelet coefficients are obtained by taking the edge information between adjacent pixel values, and the low-frequency wavelet coefficients are obtained by suppressing the edge information in each pixel value.
First Level IWT:
H = Co − Ce    (1)
L = Ce − [H/2]    (2)
where Co and Ce are the odd-column and even-column pixel values, and [·] denotes the integer (floor) part. The H and L bands of the first-level IWT are passed through the second level of high-pass and low-pass filter banks to get the IWT coefficients, which contain the LL, LH, HL and HH bands; the LL band contains the highly sensitive information of the cover image, and the other three bands, LH, HL and HH, contain the detailed information of the cover image.
Second Level IWT:
LH = Lodd − Leven    (3)
LL = Leven − [LH/2]    (4)
HL = Hodd − Heven    (5)
HH = Heven − [HL/2]    (6)
where Hodd is an odd row of the H band, Lodd is an odd row of the L band, Heven is an even row of the H band and Leven is an even row of the L band. As the IWT is a reversible transformation, the image is reconstructed by applying the inverse integer wavelet transform to the LL, LH, HL and HH bands.
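A minimal sketch of the lifting steps in Eqs. (1)-(6), on a small random image (an assumed input; band naming follows the equations above):

```python
# Integer Haar lifting per Eqs. (1)-(6): level 1 splits odd/even columns into
# H and L; level 2 splits odd/even rows of L and H into LL, LH, HL, HH.
# Integer-only and exactly reversible.
import numpy as np

def iwt_level1(img):
    co, ce = img[:, 1::2].astype(int), img[:, 0::2].astype(int)
    h = co - ce                      # Eq. (1): column differences (edges)
    l = ce - (h // 2)                # Eq. (2): smoothed columns
    return l, h

def iwt_level2(band):
    odd, even = band[1::2, :], band[0::2, :]
    detail = odd - even              # Eqs. (3)/(5): row differences
    approx = even - (detail // 2)    # Eqs. (4)/(6): smoothed rows
    return approx, detail

img = np.random.randint(0, 256, (8, 8))
L, H = iwt_level1(img)
LL, LH = iwt_level2(L)               # LL holds the sensitive low-frequency content
HH, HL = iwt_level2(H)               # per Eqs. (5)-(6): H splits into HL and HH

# Reversibility check (inverse lifting undoes each step exactly):
ce = L + (H // 2); co = H + ce
rec = np.empty_like(img); rec[:, 0::2], rec[:, 1::2] = ce, co
assert np.array_equal(rec, img)
```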
D. Discrete Wavelet Transform
An image that undergoes the Haar wavelet transform is divided into four bands at each transform level. The first band represents the input image filtered with a low-pass filter and compressed to half; this band is also called the 'approximation'. The other three bands, to which the high-pass filter is applied, are called 'details'. These bands contain directional characteristics.
The size of each of the bands is also compressed to half. Specifically, the second band contains vertical characteristics, the third band shows characteristics in the horizontal direction, and the last band represents the diagonal characteristics of the input image. Conceptually, the Haar wavelet is very simple because it is constructed from a square wave. Moreover, the Haar wavelet computation is fast, since it only contains two coefficients and does not need a temporary array for multi-level transformation. Thus, each pixel in an image that goes through the wavelet transform computation is used only once, and no pixels overlap during the computation.
E. Arnold Transform
The Arnold Transform, commonly known as the cat map, is only suitable for N×N digital images. It is defined as
x′ = (x + y) mod N,
y′ = (x + 2y) mod N    (7)
where (x, y) are the coordinates of a pixel in the original image and (x′, y′) are the coordinates of that pixel in the transformed image. The transform changes the positions of pixels, and if it is applied several times, a scrambled image is obtained. N is the height (or width) of the square image to be processed. The Arnold Transform is periodic in nature; the decryption of the image depends on the transformation period, which changes in accordance with the size of the image. The iteration number is used as the encryption key: when the Arnold Transformation is applied, the image is iterated, and the iteration number is used as the secret key for extracting the secret text.
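A small sketch of Eq. (7) and its periodicity, with the iteration count as the key (the 4×4 image is illustrative):

```python
# Arnold cat map of Eq. (7): each pixel (x, y) of an N x N image moves to
# ((x + y) mod N, (x + 2y) mod N); iterating k times scrambles the image, and
# the iteration count k serves as the secret key.
import numpy as np

def arnold(img, iterations):
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

img = np.arange(16).reshape(4, 4)
key = 2                                  # iteration count = secret key
scrambled = arnold(img, key)

# The map is periodic: iterating a full period returns the original image,
# so the receiver can descramble by completing the period.
period = 1
cur = arnold(img, 1)
while not np.array_equal(cur, img):
    cur = arnold(cur, 1)
    period += 1
print("period for N=4:", period)         # 3 for this matrix modulo 4
assert np.array_equal(arnold(scrambled, period - key), img)
```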
IV. PROPOSED MODEL
A. Encoding Process
In the embedding process, from a video file we take the I-frame as the cover image. DWT/IWT is applied on both the I-frame and the QR code image. Next, the IDWT/IIWT is applied to obtain the watermarked image. Finally, the watermarked I-frame is merged back into the video file. The block diagram of the proposed encoding process is given in Figure 4.
Figure 4. Block diagram of proposed encoding process.
1) Algorithm for Embedding Process
Step 1: Read the AVI video file and extract the frames.
Step 2: Read the first frame (I-frame) image as the cover image.
Step 3: Generate a QR code image with the company name.
Step 4: Apply DWT/IWT on both the cover image and the QR code image to get the combined image.
Step 5: Take the IDWT/IIWT of the combined image to obtain the watermarked frame.
Step 6: Get the watermarked I-frame image and the video file.
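A hedged, self-contained sketch of Steps 1-6 follows; the one-level float Haar transform, the additive HH-band embedding, the strength alpha = 0.05 and the 8×8 sizes are illustrative assumptions, not the paper's exact settings (Arnold scrambling of the payload is sketched earlier):

```python
# End-to-end toy embedding: one Haar level on the cover, additive embedding of
# the (scrambled) QR payload into the HH band, then the inverse transform.
import numpy as np

def haar2d(a):                           # one float Haar level on 2x2 blocks
    a = a.astype(float)
    p, q = a[0::2, 0::2], a[0::2, 1::2]
    r, s = a[1::2, 0::2], a[1::2, 1::2]
    return ((p + q + r + s) / 4, (p + q - r - s) / 4,
            (p - q + r - s) / 4, (p - q - r + s) / 4)   # LL, LH, HL, HH

def ihaar2d(ll, lh, hl, hh):             # exact inverse of haar2d
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

cover = np.random.randint(0, 256, (8, 8))        # stands in for the video I-frame
payload = np.random.randint(0, 2, (4, 4)) * 255  # stands in for the scrambled QR code

alpha = 0.05                                     # embedding strength (assumed)
LL, LH, HL, HH = haar2d(cover)
watermarked = ihaar2d(LL, LH, HL, HH + alpha * payload)

# Non-blind extraction mirrors the decoding steps: subtract the original
# frame's coefficients and rescale.
extracted = (haar2d(watermarked)[3] - HH) / alpha
assert np.allclose(extracted, payload)
```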
B. Decoding Process
In the extracting process, DWT/IWT is applied to the watermarked image to recover the QR code image. DWT/IWT is applied to the original video I-frame and the watermarked I-frame; then the IDWT/IIWT is taken to obtain the QR code image. Finally, the verification text is extracted. The schematic representation of the extracting process is given in Figure 5.
Figure 5. Block diagram of decoding process.
1) Algorithm for Decoding Process
Step 1: Read the watermarked video file and extract the watermarked I-frame.
Step 2: Read the original video file and extract the original video I-frame.
Step 3: Apply DWT/IWT on both the watermarked I-frame and the original I-frame.
Step 4: Subtract the watermarked video I-frame coefficients from the original video I-frame coefficients and take the IDWT/IIWT.
Step 5: Apply the anti-Arnold Transformation to get the QR code image.
Step 6: Using a QR code reader, extract the company name from the QR code image.
V. TESTING AND PERFORMANCE ANALYSIS
In our experiments, the video frame Image73.jpg, of size 512×512 and in gray format, is used for watermark embedding. The standard JPEG compression format is used with a bit rate of 1150 kbit/s and a frame rate of 25 to 30 fps. The length of the video sequence is 200 frames over 8 seconds. The watermark aa.jpg, of size 75×75, is selected. The performance of the proposed method is evaluated using Matlab R2013a (7.10).
A. Quality Metrics
The image quality of the watermarked image was tested with various quality parameters.
1) Mean Square Error (MSE):
The MSE is defined as the mean squared error between the cover image and the watermarked image. The distortion in the image can be measured using the MSE, calculated using Equation (8):
MSE = (1/(M·N)) Σi Σj [C(i, j) − W(i, j)]²    (8)
where C(i, j) is the cover I-frame and W(i, j) is the watermarked frame, of size M×N.
2) Peak Signal to Noise Ratio (PSNR):
The PSNR measures the quality of the watermarked image by comparing the cover image with the watermarked image, i.e., it measures the statistical difference between the two, and is calculated using Equation (9):
PSNR = 10 log10(255² / MSE) dB    (9)
3) Normalized Cross Correlation (NCC):
The NCC is a similarity measure used to evaluate performance, as shown in Equation (10):
NCC = Σi Σj [C(i, j) · W(i, j)] / Σi Σj [C(i, j)]²    (10)
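A minimal numpy sketch of Eqs. (8)-(10); the random test frames are illustrative:

```python
# Quality metrics of Eqs. (8)-(10) for an 8-bit cover I-frame C and
# watermarked frame W (numpy arrays of equal shape).
import numpy as np

def mse(c, w):
    return np.mean((c.astype(float) - w.astype(float)) ** 2)      # Eq. (8)

def psnr(c, w):
    return 10 * np.log10(255.0 ** 2 / mse(c, w))                  # Eq. (9), dB

def ncc(c, w):
    c, w = c.astype(float), w.astype(float)
    return np.sum(c * w) / np.sum(c ** 2)                         # Eq. (10)

c = np.random.randint(0, 256, (512, 512))
w = np.clip(c + np.random.randint(-3, 4, c.shape), 0, 255)        # mild distortion
print(f"MSE={mse(c, w):.4f}  PSNR={psnr(c, w):.2f} dB  NCC={ncc(c, w):.4f}")
```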
Figure 6(a). Video I-frame, (b). QR code image, (c). Arnold transform, (d) & (e). IWT of cover and QR code image, (f). Watermarked I-frame, (g). Recovered QR code, (h). Extracting secret text using QR code reader, (i). vCard (verification text: N:Ajay;A. TEL;CELL:+919750452068 EMAIL:aajayit13@gmail.com).
Figure 7(a). Video I-frame, (b). QR code image, (c) & (d). DWT of cover and QR code image, (e). Watermarked I-frame, (f). Recovered QR code, (g). Extracting secret text using QR code reader, (h). Name (verification text: Raja).
VI. Results and Discussion
The quality of the various video frames is evaluated. The MSE, PSNR and NCC values are illustrated in Tables 1 and 2. In Tables 1 and 2 the MSE values lie between about 3.6 and 4, the PSNR values between about 42 and 43 dB, and the NCC values are close to 1. These values show that our watermarking system achieves a high security level.
Table 1: Performance of video frame quality metrics using DWT
Cover Image | Payload Image | MSE | PSNR | NCC
Image11.jpg | raj.jpg | 3.6319 | 42.5294 | 1.0054
Image11.jpg | aa.jpg | 3.6319 | 42.5294 | 1.0054
Image13.jpg | raj.jpg | 3.6319 | 42.5294 | 1.0054
Image13.jpg | aa.jpg | 3.6319 | 42.5294 | 1.0054
Image15.jpg | raj.jpg | 3.6319 | 42.5294 | 1.0054
Image15.jpg | aa.jpg | 3.6319 | 42.5294 | 1.0054
Image17.jpg | raj.jpg | 3.6319 | 42.5294 | 1.0054
Image17.jpg | aa.jpg | 3.6319 | 42.5294 | 1.0054
Image19.jpg | raj.jpg | 3.6319 | 42.5294 | 1.0054
Image19.jpg | aa.jpg | 3.6319 | 42.5294 | 1.0054
Table 2: Performance of video frame quality metrics using IWT
Cover Image | Payload Image | MSE | PSNR | NCC
Image11.jpg | raj.jpg | 3.8383 | 42.2894 | 0.9872
Image11.jpg | aa.jpg | 3.9386 | 42.1773 | 0.9871
Image13.jpg | raj.jpg | 3.8577 | 42.2675 | 0.9872
Image13.jpg | aa.jpg | 3.9525 | 42.1620 | 0.9870
Image15.jpg | raj.jpg | 3.7533 | 42.3866 | 0.9872
Image15.jpg | aa.jpg | 3.8470 | 42.2796 | 0.9870
Image17.jpg | raj.jpg | 3.6849 | 42.4665 | 0.9872
Image17.jpg | aa.jpg | 3.7729 | 42.3641 | 0.9871
Image19.jpg | raj.jpg | 3.6476 | 42.5108 | 0.9872
Image19.jpg | aa.jpg | 3.7351 | 42.4077 | 0.9871
VII. Conclusion
The proposed methods achieve improved imperceptibility and more security in watermarking, and the QR code encoding process gives excellent performance. In the first method the watermark is embedded in the diagonal elements; in the other, text messages are embedded in the QR code image. The dual process therefore gives two authentication details.
The method is convenient, feasible and practically usable for providing copyright protection. The experimental results show that our method achieves acceptable robustness to video processing. It can be extended to apply other wavelet filters as well.
VIII. References
[1] Hirak Kumar Maity and Santi Prasad Maity, “Joint Robust and Reversible Watermarking for Medical Images,” 2nd International Conference on Communication, Computing & Security (ICCCS-2012).
[2] Vinay Pandey, Angad Singh and Manish Shrivastava, “Medical Image Protection by Using Cryptography Data-Hiding and Steganography,” International Journal of Emerging Technology and Advanced Engineering, ISSN 2250-2459, vol. 2, issue 1, January 2012.
[3] Hamid Shojanazeri, Wan Azizun Wan Adnan and Sharifah Mumtadzah Syed Ahmad, “Video Watermarking Techniques for Copyright Protection and Content Authentication,” International Journal of Computer Information Systems and Industrial Management Applications, ISSN 2150-7988, vol. 5, 2013.
[4] Peter Kieseberg, Manuel Leithner, Martin Mulazzani, Lindsay Munroe, Sebastian Schrittwieser, Mayank Sinha and Edgar Weippl, “QR Code Security,” TwUC'10, 8-10 November 2010, Paris, France.
[5] Emad E. Abdallah, A. Ben Hamza and Prabir Bhattacharya, “Video watermarking using wavelet transform and tensor algebra,” Springer-Verlag London Limited, 2009.
[6] Fatema Akhter, “A Novel Approach for Image Steganography in Spatial Domain,” Global Journal of Computer Science and Technology: Graphics & Vision, vol. 13, issue 7, ver. 1.0, pp. 1-6, 2013.
[7] Chao Wang, Weiming Zhang, Jiufen Liu and Nenghai Yu, “Fast Matrix Embedding by Matrix Extending,” IEEE Transactions on Information Forensics and Security, vol. 7, pp. 346-350, 2012.
Efficient and Effective Time based Constrained Shortest Path Computation
Karthika. T1, M.E. Computer Science and Engineering (2nd year)1, Dhanalakshmi Srinivasan Engineering College, Perambalur, India
Raja. G2, Assistant Professor2, Department of Computer Science and Engineering, Dhanalakshmi Srinivasan Engineering College, Perambalur, India
Abstract— Data summarization is an important concept in data mining that entails techniques for finding a compact representation of a dataset. The existing work gives the shortest path by considering distance in Spatial Network Activity Summarization (SNAS), and it generates results only for transportation users. The method describes a K-Main Routes (KMR) approach that collects all activities. The proposed system summarizes the results for the pedestrian user in a safer way. The KMR approach uses Network Voronoi, heapsort-based divide and conquer, and pruning-strategy techniques. The time-based technique gives users advantages based on the shortest path. Additionally, an approach based on Prim's spanning tree algorithm is introduced as the time-based technique. For time dependency, the user has to specify whether the time is day or night; according to the time period, the safe path is suggested to avoid pedestrian fatalities.
Keywords: Data summarization, Dataset, Spatial Network Activity Summarization, Prim's Spanning Tree, pedestrian fatality
I. INTRODUCTION
Data summarization is an important concept in data mining for finding a compact representation of a dataset. Given a network and a collection of activity events, spatial network activity summarization (SNAS) finds a set of k shortest paths based on the activity events. An activity is an object of interest associated with only one edge of the spatial network. SNAS has important applications in domains where observations occur along linear paths in the network. The SNAS problem can be defined as follows: given a spatial network, a collection of activities and their locations (e.g., placed on a node or an edge), and a desired number of paths k, find a set of k shortest paths that maximizes the sum of activities on the paths (counting activities that are on overlapping paths only once) and a partitioning of activities across the paths. Depending on the domain, an activity may be the location of a pedestrian fatality, a carjacking, a train accident, etc. Crime analysts may look for concentrations of crimes along certain streets to guide law enforcement, and hydrologists may try to summarize environmental change in water resources to understand the behavior of river networks and lakes. SNAS assumes that every path is a shortest path because in applications such as transportation planning, the aim is usually to help people arrive at their destination as fast as possible. The output contains two shortest paths and two groups of activities; the shortest paths are representatives for each group, and each shortest path maximizes the activity coverage for the group it represents. If k shortest paths are selected from all shortest paths in a spatial network, there are a large number of possibilities; indeed, SNAS is NP-complete. We show that SNAS is NP-complete, propose two techniques for improving the performance of K-Main Routes (KMR), namely 1) Network Voronoi activity Assignment (NOVA_TKDE), which allocates activities to the nearest summary path, and 2) Heapsort Divide and conquer Summary PAth REcomputation (D-SPARE_TKDE), analytically demonstrate the correctness of NOVA_TKDE and D-SPARE_TKDE, and analyze the computation costs of KMR.
Fig 1.1 Architecture
II. Related Works
A hybrid heuristic for the p-median problem [2]: given n customers and a set F of m potential facilities, the p-median problem consists in finding a subset of F with p facilities such that the cost of serving all customers is minimized. This is a well-known NP-complete problem with important applications in location science and classification (clustering). A multistart hybrid heuristic combines elements of several traditional metaheuristics to find near-optimal solutions to this problem (GRASP, in the sense that it is a multistart method). In essence, it is a multistart iterative method, each iteration of which consists of the randomized construction of a solution, which is then submitted to local search; traditionally, a multistart algorithm takes the best solution obtained over all iterations as its final result. Drawbacks: it can group spatial objects that are close in terms of Euclidean distance but not close in terms of network distance, and thus may fail to group activities that occur on the same street.
Discovering and quantifying mean streets [3]: mean streets represent those connected subsets of a spatial network whose attribute values are significantly higher than expected. Discovering and quantifying mean streets is an important problem with many applications, such as detecting high-crime-density streets and high-crash roads (or areas) for public safety and detecting urban cancer disease clusters for public health. A mean-streets mining algorithm is used which can evaluate graphical models in statistics for their ability to model activities on road networks. Road segments are modeled as edges or as nodes in graphical models, and similarities and differences in crime rates are examined. Statistical models such as the Poisson distribution and the sum of independent Poisson distributions provide statistical interpretations for the results. Drawbacks: it finds anomalous streets or routes with unusually high activity levels, but it is not designed to summarize activities over k paths, because the number of high-crime streets returned is always relatively small.
Detecting hotspots in geographic networks [4]: this studies a point-pattern detection problem on networks, motivated by geographical analysis tasks such as crime hotspot detection. Given a network N (for example, a street, train, or highway network) together with a set of sites located on the network (for example, accident locations or crime scenes), the goal is to find a connected subnetwork F of N of small total length that contains many sites, i.e., to search for a subnetwork F that spans a cluster of sites which are close with respect to the network distance. The techniques used are polynomial-time algorithms using MSGF (Maximal Subgraph Finding). The work addresses the problem of finding hotspots in networks from an algorithmic point of view and models the problem as follows: the input network N is a connected graph with positive edge lengths; the connected subnetwork F being searched for is a fragment of N, that is, a connected subgraph of N whose edges are contained in edges of N (either edges of N or parts of edges of N); the length of a fragment F is the sum of its edge lengths; and together with N, a set S of sites (locations of interest) is given, located on the edges or vertices of N. Drawbacks: it identifies the maximal subgraph (e.g., a single path, k = 1) under the constraint of a user-specified length and cannot summarize activities when k > 1.
Efficient and effective clustering methods for spatial data mining [5]: spatial data mining is the discovery of interesting relationships and characteristics that may exist implicitly in spatial databases. The work explores whether clustering methods have a role to play in spatial data mining and, to this end, develops a new clustering method called CLARANS, which is based on randomized search, together with two spatial data mining algorithms that use CLARANS. The analysis and experiments show that, with the assistance of CLARANS, these two algorithms are very effective and can lead to discoveries that are difficult to find with other spatial data mining algorithms. CLARANS is a cluster analysis algorithm designed for large data sets; its development is partly motivated by two existing algorithms well known in cluster analysis, called PAM and CLARA, and it underlies the two spatial mining algorithms SD(CLARANS) and NSD(CLARANS). Drawbacks: although CLARANS has established itself as a tool for spatial data mining, the grouping of activities in distance-based summarization remains very difficult.
Clustering of traffic accidents using distances along the network [6]: many existing geostatistical methods for defining concentrations, such as the kernel method and the local spatial autocorrelation method, take into account the Euclidean distance between the observations. However, since traffic accidents are typically located along a road network, it is assumed that the use of network distances instead of Euclidean distances could improve the results of these clustering techniques. A new method is proposed that takes into account the distances along the road network, using spatial clustering based on network distance. This new methodology for detecting dangerous locations improves on the disadvantages of the existing techniques; the work proposes a network-distance-weighted clustering method and describes the study area under investigation and the input data. The goal of accident clustering techniques is to find dangerous locations or black zones (road segments, intersections) characterized by a higher number of traffic accidents than expected from a purely random distribution of accidents over the road network. Drawbacks: the expected dangerousness index may be higher depending on the influence range of a measurement point; variations in influence range, as well as variations in traffic flow, road category, etc., can be incorporated in simulations.
III. Scope and Outline of the Paper
This work focuses on summarizing discrete activity events (e.g., pedestrian fatalities, crime reports) associated with a point on a network. This does not imply that all activities must necessarily be associated with a point in a street. Furthermore, other network properties, such as GPS trajectories and traffic densities of road networks, are not considered. The objective function used in SNAS is based on maximizing the activity coverage of summary paths, not on minimizing the distance of activities to summary paths. The summary paths are shortest paths, but other spatial constraints (e.g., nearest neighbors) are not considered. Additionally, it is assumed that the number of activities on the road network is fixed and does not change over time. The approach includes KMR, NOVA_TKDE, and D-SPARE_TKDE.
Fig 3.1 Overall Dataflow Diagram (select dataset → spatial network database → enter query and time → time-based Prim's spanning tree algorithm, Network Voronoi activity assignment, and heapsort-based divide and conquer → summary path → final result: safest path for the pedestrian fatality query)
Time based network creation
The present work creates two types of network, one for daytime and one for night time. The four activities change according to the time value. When a pedestrian user enters the network during the day, the result is extracted from the daytime database; when the user enters at night, the result is extracted from the night-time database.
Fig. 3.2 Network creation (the spatial network feeds separate day-time and night-time databases)
Inactive node pruning in KMR
In SNAS, the optimal solution may not be unique; among the optimal solutions there is always one where every path starts and ends at active nodes. Begin with an arbitrary optimal solution and let p be a shortest path that starts or ends with inactive nodes: eliminating the inactive nodes from the beginning and end of a shortest path does not reduce coverage and does not split the path. KMR takes advantage of this property to achieve computational savings. Rather than checking all shortest paths, inactive node pruning considers only paths between active nodes, thereby reducing the total number of paths, as sketched below.
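A minimal sketch of this pruning on an unweighted adjacency-list graph (an assumed input format); only active-to-active shortest paths are enumerated:

```python
# Candidate summary paths are generated only between active nodes (nodes with
# at least one activity), since trimming inactive endpoints never reduces
# activity coverage. Plain BFS shortest paths on an unweighted graph.
from collections import deque

def bfs_path(graph, src, dst):
    parent, seen = {src: None}, {src}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u); u = parent[u]
            return path[::-1]
        for v in graph[u]:
            if v not in seen:
                seen.add(v); parent[v] = u; q.append(v)
    return None

def candidate_paths(graph, active):
    # Only active-to-active paths are enumerated, pruning the search space.
    return [bfs_path(graph, a, b)
            for i, a in enumerate(active) for b in active[i + 1:]]

graph = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(candidate_paths(graph, active=[2, 4]))   # [[2, 3, 4]]
```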
Network Voronoi Activity Assignment
The pseudocode for the activity assignment algorithm has two modes: naive and NOVA_TKDE. The naive mode enumerates all the distances between every activity and every summary path and then assigns each activity to its closest summary path. NOVA_TKDE, by contrast, avoids the enumeration done in the naive mode while still providing correct results. The Network Voronoi activity Assignment (NOVA_TKDE) technique is a faster way of assigning activities to the closest summary path. Consider a virtual node V that is connected to every node of all summary paths by edges of weight zero. The basic idea is to calculate the distance from V to all active nodes and discover the closest summary path to each activity; the shortest path from V to each activity a will go through a node in the summary path that is closest to a. If the activity was previously assigned to another summary path, it is removed from that path before being assigned to the new summary path. Once all active nodes have been added to the closed list, or the open list is empty, NOVA_TKDE's main loop stops.
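A minimal sketch of the virtual-node idea: Dijkstra is seeded with zero-weight edges from every summary-path node, so one pass labels each node with its nearest summary path (the graph format {node: [(neighbor, weight)]} is an assumption):

```python
# One Dijkstra run from the virtual node V assigns every activity node to its
# closest summary path without enumerating all activity-to-path distances.
import heapq

def nova_assign(graph, summary_paths):
    dist, owner = {}, {}                  # owner = index of closest summary path
    pq = []
    for idx, path in enumerate(summary_paths):
        for node in path:                 # zero-weight edges from virtual V
            if dist.get(node, float("inf")) > 0:
                dist[node], owner[node] = 0, idx
                heapq.heappush(pq, (0, node, idx))
    while pq:                             # standard Dijkstra main loop
        d, u, idx = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], owner[v] = d + w, idx
                heapq.heappush(pq, (d + w, v, idx))
    return owner                          # nearest summary path per node

graph = {1: [(2, 1)], 2: [(1, 1), (3, 5)], 3: [(2, 5), (4, 1)], 4: [(3, 1)]}
print(nova_assign(graph, summary_paths=[[1], [4]]))  # {1: 0, 4: 1, 2: 0, 3: 1}
```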
Heapsort Divide and Conquer Summary Path Recomputation
The pseudocode has two modes: naive and D-SPARE_TKDE. The naive mode enumerates the shortest paths between all active nodes in the spatial network, while D-SPARE_TKDE considers only the set of shortest paths between the active nodes of a group, which still gives the correct results. The Divide and Conquer Summary PAth REcomputation technique chooses the summary path of each group with maximum activity coverage but only considers the set of shortest paths between the nodes of a group. Experimental evaluation shows that the performance-tuning decisions utilized by KMR yield substantial computational savings without reducing the coverage of the resulting summary paths.
Time Based Path Detection
The previous work finds the safest path using the KMR algorithm, but that algorithm does not consider time-based results. The present approach introduces Prim's spanning tree algorithm. The algorithm also covers the previous work, but additionally adds the time-based technique to the decision-making process, so the result is a time-based safest path for the pedestrian user. It depends mainly on the user's requirements and gives the user the advantage of a shortest path. In the time-dependent case, the user has to specify whether the time is day or night; according to the time period, the safe path is suggested to the user. In case of any danger along the path, the user is informed initially and implicitly. By choosing the safe path, the user reaches the destination safely; if the user chooses daytime, he or she can still get the shortest path to the destination, but the suggested path depends purely on the time period chosen by the user, as sketched below.
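A hedged sketch of the time-based selection with Prim's algorithm; the day/night danger weights and the path extraction from the spanning tree are illustrative assumptions:

```python
# Edge weights (danger scores) are chosen per time period, Prim's algorithm
# grows a spanning tree over the selected weights, and the tree path to the
# destination is the suggested safe route.
import heapq

def prims_tree(graph, root):
    parent, seen = {root: None}, {root}
    pq = [(w, root, v) for v, w in graph[root]]
    heapq.heapify(pq)
    while pq:
        w, u, v = heapq.heappop(pq)
        if v in seen:
            continue
        seen.add(v); parent[v] = u          # cheapest edge into the tree
        for nxt, wt in graph[v]:
            if nxt not in seen:
                heapq.heappush(pq, (wt, v, nxt))
    return parent

def tree_path(parent, dst):
    path = []
    while dst is not None:
        path.append(dst); dst = parent[dst]
    return path[::-1]

# Per-period danger scores: {period: {node: [(neighbor, danger), ...]}}
networks = {
    "day":   {"A": [("B", 1), ("C", 4)], "B": [("A", 1), ("D", 2)],
              "C": [("A", 4), ("D", 1)], "D": [("B", 2), ("C", 1)]},
    "night": {"A": [("B", 5), ("C", 1)], "B": [("A", 5), ("D", 4)],
              "C": [("A", 1), ("D", 2)], "D": [("B", 4), ("C", 2)]},
}

period = "night"                             # supplied by the pedestrian user
parent = prims_tree(networks[period], root="A")
print(tree_path(parent, "D"))                # night: ['A', 'C', 'D']
```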
Fig. 3.3 Final Path Detection (time-based dataset selection over the day/night databases feeds Prim's spanning tree algorithm, which outputs the final safest path with respect to pedestrian fatalities)
Conclusion
The problem of spatial network activity summarization (SNAS) relates to important application domains such as preventing pedestrian fatalities through activity summarization. A K-Main Routes (KMR) algorithm discovers a set of k shortest paths to summarize activities. KMR uses inactive node pruning, Network Voronoi activity Assignment (NOVA_TKDE) and Heapsort Divide and conquer Summary PAth REcomputation (D-SPARE_TKDE) to enhance its performance and scalability. The main advantage of this system is that the shortest path is given to the user according to the time of day. Experimental evaluation using both synthetic and real-world data sets indicated that the performance-tuning decisions utilized by KMR yielded substantial computational savings without reducing the coverage of the resulting summary paths. In future work, we plan to explore other types of data that may not be associated with a point in a street, to characterize graphs where D-SPARE_TKDE will find a path within a given fragment, and to extend the approach to account for different types of activities.
ISSN: 2348–8387
[1] D. Oliver, S. Shekhar, J. M. Kang, R. Laubscher, V. Carlan, and A. Bannur, “A K-Main Routes Approach to Spatial Network Activity Summarization,” IEEE Transactions on Knowledge and Data Engineering, vol. 26, no. 6, June 2014.
[2] M. Resende and R. Werneck, “A hybrid heuristic for the p-median problem,” J. Heuristics, vol. 10, no. 1, pp. 59–88, Jan. 2004.
[3] M. Celik, S. Shekhar, B. George, J. Rogers, and J.
Shine, “Discovering and quantifying mean streets: A
summary of results,” Univ. Minnesota, Minneapolis,
MN, USA, Tech. Rep.07-025, 2007.
[4] K. Buchin et al., “Detecting hotspots in geographic
networks,” in Proc. Adv. GIScience, Berlin, Germany,
2009, pp. 217–231.
[5] R. Ng and J. Han, “Efficient and effective clustering
methods for spatial data mining,” in Proc. 20th Int.
Conf. VLDB, San Francisco, CA, USA, 1994, pp. 144–
155.
[6] K. Aerts, C. Lathuy, T. Steenberghen, and I.
Thomas, “Spatial clustering of traffic accidents using
distances along the network,” in Proc. 19th Workshop
ICTCT, 2006, pp. 1–10.
[7] P. Spooner, I. Lunt, A. Okabe, and S. Shiode,
“Spatial analysis of roadside Acacia populations on a
road network using the network K-function,”
Landscape Ecol., vol. 19, no. 5, pp. 491–499,Jul. 2004.
[8] T. Steenberghen, T. Dufays, I. Thomas, and B.
Flahaut, “Intraurban location and clustering of road
accidents using GIS: A Belgian example,” Int. J. Geogr.
Inform. Sci.,vol. 18, no. 2, pp. 169–181, 2004.
[9] I. Yamada and J. Thill, “Local indicators of
network-constrained clusters in spatial point patterns,”
Geogr. Anal., vol. 39, no. 3, pp. 268–292, Jul. 2007.
[10] S. Shiode and N. Shiode,“ Detection of multi-scale
clusters in network space,” Int. J. Geogr. Inform. Sci.,
vol. 23, no. 1, pp. 75–92, Jan. 2009.
[11] D. Oliver, A. Bannur, J. M. Kang, S. Shekhar, and
R. Bousselaire, “A K-main routes approach to spatial
network activity summarization: A summary of
results,” in Proc. IEEE ICDMW, Sydney, NSW,
Australia, 2010, pp. 265–272.
[12] A. Barakbah and Y. Kiyoki, “A pillar algorithm for
k-means optimization by distance maximization for
initial centroid designation, ” in Proc. IEEE Symp.
CIDM, Nashville, TN, USA, 2009, pp. 61–68.
Adaptive wavelet based Compression and Representation using canonical Correlation Algorithm
Vigneshwari. D, Student, final year Master of Engineering, Department of Computer Science and Engineering, Roever Engineering College, Perambalur.
Guided by Mr. Sivakumar. K, Head of the Department, Assistant Professor, Department of Computer Science and Engineering, Roever Engineering College, Perambalur.
Abstract— The demand for stereo video, which is more lifelike than the 2D view, has created a trend toward 3D viewing, especially in 3D TV broadcasting. Hence, a 2D-to-stereo video conversion system is a leading approach among stereo video generation techniques. In this system, human intervention is used for the interactive conversion process. Coding efficiency is obtained using MVC, which exploits both spatial and temporal redundancy for compression. Lossy compression together with a canonical correlation algorithm is used to achieve a high compression ratio and enhance the video quality, and the bit rate can be reduced with the combined MVC and 2D-plus-depth cues. This system uses a novel compact representation for stereoscopic videos: a 2D video plus its depth cues. The 2D-plus-depth-cue representation is able to encode stereo videos compactly by leveraging the by-products of a stereo video conversion process. The decoder can synthesize any novel intermediate view using the texture and depth maps of two neighboring captured views via Depth Image Based Rendering (DIBR). Along with the stereo representation, audio can also be added to the newly generated 3D video.
Keywords: Canonical correlation algorithm, DIBR, Automatic depth estimation.
I. INTRODUCTION:
A novel stereo video representation is proposed to improve the coding efficiency of stereo videos produced by 2D-to-stereo conversion. The representation consists of a 2D video plus its depth cues. The depth cues are derived from the intermediate data produced during the operations of the 2D-to-stereo conversion process, including object/region contours and the parameters of their designated depth models. Using the depth cues, and by jointly considering the appearance of the 2D video, the depth of a scene can be reliably recovered; consequently, the two views of the derived stereo video can be synthesized. The depth cues are much more parsimonious than frame-based depth maps: compared with traditional stereo video representations, experimental results show that the bit rate can be reduced by about 10%–50%.
To prove this idea, a system was designed that uses the proposed representation to couple stereo video conversion and video coding, namely CVCVC. On the encoder side, depth cues are generated from the "by-products" of an interactive conversion process when converting a 2D video; then the 2D video and its depth cues are compressed jointly. On the decoder side, the 2D video and its depth cues are decoded from the bit-stream, the depth cues are utilized to reconstruct depth maps according to the image features of the 2D video, and finally the stereo video is synthesized using a DIBR method.
In addition, since the object contour is one of the components of the representation, it is convenient for the system to adopt Region of Interest (ROI) coding, either to further improve the video quality given a limited bit rate or to reduce the coding bit rate under a certain quality requirement. Experimental results show that, compared with no-ROI coding, the bit rate is reduced by 30%–40%, or the video quality is increased by 1.5 dB–4 dB.
II. Related Work:
Stereo Video Coding
Stereo videos (two-view videos) can be seen as a special case of multiple-view videos; hence, multi-view video coding (MVC) methods can be directly applied to encode stereo videos. A key feature of the MVC scheme is to exploit both spatial and temporal redundancy for compression. A "P-frame" or "B-frame" can be either predicted from the frames in the same view using motion estimation (as in traditional video coding methods), or predicted from the frames of the other view so as to reduce the substantial inter-view redundancy. MVC decoders require high-level syntax through the sequence parameter set (SPS in H.264/MPEG4 AVC) and related supplemental enhancement information (SEI) messages so that the decoders can correctly decode the video according to the SPS. Compared with the simulcast coding scheme, MVC generally achieves as much as a 3 dB gain in coding (corresponding to a 50% bit rate reduction); for stereo videos, an average reduction of 20%–30% of the bit rate has been reported.
2D-plus-depth coding is another type of stereo video coding, also called depth-enhanced stereo video coding. The standard (MPEG-C Part 3) supports a receiver reproducing stereo videos by depth-image based rendering (DIBR) of a second auxiliary video bit stream, which can be coded by a standard video coding scheme such as H.264/AVC.
An inter-component prediction method has also been proposed, which derives the block partition from the video reference frame corresponding to a depth map and utilizes the partition information to code the depth map. In this paper, a higher-level correlation between depth maps and texture frames is utilized to further improve depth map compression in stereo video coding. A depth map is represented by a series of depth models covering different regions of an image; the region contours are further compressed into a set of sparse control points. A depth cue is then defined as a set of parameters derived from a certain depth model. Based on this definition, we propose a stereo video representation called 2D video plus depth cues.
2D-To-Stereo Video Conversion Methods and Systems
The key to 2D-to-stereo video conversion is to obtain the depth map of each video frame so that stereo frame pairs can be synthesized. In a typical 2D-to-stereo conversion system, depth maps can either be estimated according to the appearance features of the 2D frames or be obtained with user assistance.
CCA, like Principal Component Analysis (PCA), is an effective feature extraction method for dimensionality reduction and data visualization. PCA is a single-modal method that deals with data samples obtained from a single information channel or view. In contrast, CCA is typically used for multi-view data samples, which are obtained from various information sources, e.g., sound and image. The aim of CCA is to find two sets of basis vectors ωx ∈ R^p and ωy ∈ R^q for X and Y, respectively, so as to maximize the correlation coefficient between ωx^T X and ωy^T Y. The process is formalized as maximizing
ρ = (ωx^T Cxy ωy) / sqrt((ωx^T Cxx ωx)(ωy^T Cyy ωy))
where Cxx and Cyy are the within-set covariance matrices of X and Y, and Cxy is their between-set covariance matrix.
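A minimal numpy sketch of this maximization via the standard generalized eigenproblem (the ridge term and the toy data are assumptions):

```python
# Find wx, wy maximizing the correlation between wx^T X and wy^T Y by solving
# the eigenproblem built from the covariance blocks Cxx, Cxy, Cyy.
import numpy as np

def cca_first_pair(X, Y, ridge=1e-6):
    # X: p x n samples, Y: q x n samples (column = one paired observation)
    Xc = X - X.mean(axis=1, keepdims=True)
    Yc = Y - Y.mean(axis=1, keepdims=True)
    n = X.shape[1]
    Cxx = Xc @ Xc.T / n + ridge * np.eye(X.shape[0])
    Cyy = Yc @ Yc.T / n + ridge * np.eye(Y.shape[0])
    Cxy = Xc @ Yc.T / n
    # Eigenproblem for wx: Cxx^-1 Cxy Cyy^-1 Cyx wx = rho^2 wx
    M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
    vals, vecs = np.linalg.eig(M)
    wx = np.real(vecs[:, np.argmax(np.real(vals))])
    wy = np.linalg.solve(Cyy, Cxy.T @ wx)        # wy follows from wx
    return wx, wy / np.linalg.norm(wy)

rng = np.random.default_rng(0)
Z = rng.normal(size=(1, 200))                    # shared latent signal
X = np.vstack([Z + 0.1 * rng.normal(size=(1, 200)), rng.normal(size=(2, 200))])
Y = np.vstack([Z + 0.1 * rng.normal(size=(1, 200)), rng.normal(size=(1, 200))])
wx, wy = cca_first_pair(X, Y)
a = wx @ (X - X.mean(1, keepdims=True))
b = wy @ (Y - Y.mean(1, keepdims=True))
print("canonical correlation:", np.corrcoef(a, b)[0, 1])   # close to 1
```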
III. Scope of the Paper:
The correspondence map should be constructed for each frame of the given input 2D video. Frame conversion converts the 2D video into individual frames and performs pixel analysis; the pixel analysis process calculates the total number of pixels in every frame and also performs the depth analysis process. Each pixel of a given 2D image is shifted to the left or right depending on the corresponding depth map value, and the exposed regions are filled with a background reconstruction tool: the interpolation and texture generation technique produces an artificial image by attempting to minimize the visibility of the filled regions. A sketch of this pixel shift is given below.
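A minimal sketch of this per-pixel shift and a crude hole fill (the disparity scale and left-fill strategy are assumptions):

```python
# Each pixel of the 2D frame is shifted horizontally by a disparity derived
# from its 8-bit depth value (255 = nearest); holes left by the shift are then
# filled by a simple background-reuse strategy.
import numpy as np

def dibr_shift(frame, depth, max_disparity=8):
    h, w = frame.shape
    view = np.zeros_like(frame)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = int(depth[y, x] / 255.0 * max_disparity)  # nearer -> larger shift
            nx = x + d
            if nx < w:
                view[y, nx] = frame[y, x]
                filled[y, nx] = True
    # Crude hole filling: reuse the nearest filled pixel to the left.
    for y in range(h):
        for x in range(1, w):
            if not filled[y, x]:
                view[y, x] = view[y, x - 1]
    return view

frame = np.random.randint(0, 256, (4, 16))
depth = np.tile(np.linspace(0, 255, 16, dtype=int), (4, 1))
right_view = dibr_shift(frame, depth)
```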
The algorithm builds a dense depth map by finding the optimal correspondence along epipolar lines, obtaining a mean disparity error of less than one pixel. A stereoscopic pixel analysis method using motion analysis converts motion into disparity values; after motion estimation, three cues are used to decide the scale factor of the motion-to-disparity conversion in 3D images.
Depth cues are derived from an interactive labeling process during 2D-to-3D video conversion. For stereo video representation analysis, a color correction and normalization technique is used: color patterns are used to search for color matches between two or more images, and color matching in the stereo process uses image pixel and depth analysis in the normalization technique. The representation benefits both 3D video generation and coding.
In the video-plus-depth coding format, the conventional 2D video and per-pixel depth information are combined and transmitted to the user to create a stereo pair. A depth map is simply a 2D representation of a 3D scene in the form of a grey-scale image (or monochromatic, luminance-only video signal), because the depth range is quantized on a logarithmic scale with 8 bits, where the value of the closest point is 255 and that of the farthest point is 0.
Stereo interleaving coding is a method of using existing video codecs for stereo video transmission; it includes temporal interleaving (time-multiplexed) and spatial interleaving (spatially multiplexed) formats. Stereo image analysis is needed to identify the left and right views in spatial interleaving, to perform de-interleaving at the receiver, and to render the video to the display.
Lossy methods provide high degrees of compression and result in smaller compressed files. Stereo video applications can tolerate loss, and in many cases it may not be noticeable to the human ear or eye. The more loss that can be tolerated, the smaller the file can be compressed and the faster the file can be transmitted over a network. Examples of lossy file formats are MP3, AAC, MPEG and JPEG.
The Multiview Video Coding (MVC) format is an extended version of H.264/MPEG-4 AVC. MVC utilizes the redundancies among views for efficient compression rather than coding each view independently; this is enabled by "inter-view prediction" or "disparity-compensated prediction", in which decoded pictures of other views are used as reference pictures when coding a picture, as long as they share the same capture time. The 2D video is encoded using the H.264/AVC High profile; thus, a new rate-distortion cost is defined by considering the distortion of the recovered region contours.
When an MVC bit stream with the new NAL unit is fed into an H.264/MPEG-4 AVC decoder, the decoder ignores the newly added portion and only processes the subset containing the existing NAL units of H.264/MPEG-4 AVC. The depth map is reconstructed by computing the pixel depth values according to the model and its parameters for each region. Finally, stereoscopic frames are synthesized using a DIBR method. Region contours can be recovered section by section using Intelligent Scissors between each pair of neighboring control points, by referring to the image gradients of the decoded frames.
IV. Conclusion:
A novel compact stereo video representation, 2D-plus-depth-cue, is used for compression and representation. A system called CVCVC is designed for both stereo video generation and coding. Using this representation, the coding efficiency of the converted stereo video can be largely improved. The representation can also be extended and applied to other related fields, such as object-based video coding, ROI-based coding, and intelligent video content understanding. Several cases of applying the proposed representation to ROI coding are shown. A limitation of the proposed methods is that they might perform more slowly for large video files.
V. REFERENCES:
[1] G. Cheung, W. Kim, A. Ortega, J. Ishida, and A. Kubota, “Depth Map Coding using Graph Based Transform and Transform Domain Sparsification,” IEEE, 2011.
[2] I. Daribo, G. Cheung, and D. Florencio, “Arithmetic Edge Coding for Arbitrarily Shaped Sub-Block Motion Prediction in Depth Video Compression,” IEEE, 2012.
[3] M. Guttmann, L. Wolf, and D. Cohen-Or, “Semi-automatic Stereo Extraction from Video Footage,” IEEE, 2009.
[4] K.-J. Oh, S. Yea, A. Vetro, and Y.-S. Ho, “Depth Reconstruction Filter and Down/Up Sampling for Depth Coding in 3D Video,” 2009.
[5] S. Kim and Y. Ho, “Mesh-Based Depth Coding for 3D Video Using Hierarchical Decomposition of Depth Maps,” Sep./Oct. 2007.
[6] Z. Li, X. Xian, and X. Liu, “An Efficient 2D to 3D Video Conversion Method Based on Skeleton Line Tracking,” 2009.
[7] M. E. Lukacs, “Predictive Coding of Multi-viewpoint Image Sets,” IEEE, 1986.
[8] M. Mathieu and M. N. Do, “Joint Encoding of the Depth Image Based Representation Using Shape-adaptive Wavelets,” IEEE, 2008.
[9] F. Jäger, “Depth-Based Block Partitioning for 3D Video Coding,” Institut für Nachrichtentechnik, 2012.
[10] N. Nivetha, S. Prasanna, A. Muthukumaravel, “Real-Time 2D to 3D Video Conversion Using Compressed Video Based on Depth-From-Motion and Color Segmentation,” vol. 2, issue 4, July 2013.
In-network Guaranteed Data Gathering scheme for arbitrary node failures in Wireless Sensor Networks
A. Wazim Raja, PG scholar, IT-Network Engineering, Velammal College of Engg and Technology, Madurai, India
S. Kamalesh, Assistant Professor, IT Department, Velammal College of Engineering and Technology, Madurai, India.
P. Ganesh Kumar, Professor, IT Department, KLN College of Engineering and Technology, Madurai, India.
Abstract—Wireless sensor networks are prone to node failures since they are usually deployed in unattended environments. Tree-based data gathering techniques work efficiently for such network topologies. Applications in wireless sensor networks require guaranteed delivery of the sensed data to the user, so a fault-tolerant data gathering scheme is required to deliver the sensed data with minimum data loss and minimum delay. The proposed scheme provides guaranteed delivery using an in-network data gathering tree constructed using BFS. Nodes select their alternate parents during the construction phase of the tree; using these computations, all one-hop neighboring nodes initiate the repairing actions when a node fails. The proposed technique provides a solution for both single node failures and multiple node failures with minimum delay and data loss. Simulation results show that both the delay and the energy consumption are low.
Index Terms—wireless sensor network, node failure, fault tolerance, data gathering and BFS.
I. INTRODUCTION
Sensors have various applications, such as environmental monitoring, battlefield surveillance, industrial automation and process control, and health and traffic monitoring [1]. Sensor nodes are distributed randomly within a certain field of interest so they can sense useful information and forward it to a base station or sink for further analysis by the user. Since sensor nodes use non-renewable batteries for power supply, power management is one of the critical issues in this field. Communicating consumes more power than sensing and computing the sensed data; hence, reducing the number of transmissions improves sensor lifetime. A certain level of Quality of Service (QoS) should also be achieved for some applications. An in-network guaranteed data gathering scheme can be used to accumulate data at the sink without loss or redundancy and with minimum delay. However, sink-based data gathering is one of the most challenging fields of Wireless Sensor Networks (WSNs).
II. NODE FAILURES
Since sensor nodes are low-cost and use non-renewable batteries as their power source, they are subject to failures. Node failure is caused by damage to the node or by energy depletion. Failures of sensor nodes in applications like military surveillance and health monitoring can have severe effects, so there is a need for an effective fault-tolerant technique to isolate the faulty nodes from the rest of the healthy nodes in the network topology. A sensor node that fails due to energy depletion can initiate the repairing mechanism by itself; if a sensor node fails suddenly, the nodes in its vicinity should initiate the repairing mechanism. When a sensor node fails, it cannot be replaced, because the sensor network is often deployed in an unmanned, unattended environment. The failed node should therefore be isolated from the network; otherwise, other sensor nodes might also lose energy by communicating with it. Several fault-tolerant approaches focus on single node failure. In an arbitrary node failure, more than one node fails at a particular time instant, and a region of the network is partitioned into different regions. This leads to a worst-case scenario where many sensor nodes cannot communicate with the sink or the base station and are isolated from the network. While providing solutions to arbitrary node failures, the important factor to be considered is energy consumption.
III. FAULT TOLERANT MECHANISMS
Lu et al. [2] proposed an energy-efficient data gathering scheme for large-scale systems. Annamalai et al. [3] proposed a heuristic algorithm for tree construction and channel allocation for both broadcast and in-network messages in multi-radio networks. Li et al. [4] claimed that if data is forwarded through a BFS tree, the total number of packet relays as well as the delay is reduced. The authors of [5] proved that data gathering using a BFS tree provides maximum capacity in data collection. The major problem is the control message overhead that arises due to flooding, which also incurs significant communication cost; hence, the study of efficient distributed BFS tree construction has become an interest of current research [6,7]. The above-mentioned works are suitable only for stable and static networks and cannot be implemented in real life, where the environment is volatile. Gnawali et al. proposed a data forwarding protocol called the collection tree protocol [8]. In the collection tree protocol, a source-based routing mechanism is used to forward the data
from sensor node to the sink. Collection tree protocol provides
efficient data gathering and fault tolerance but its performance
is poor in highly loaded networks [9]. Furthermore, per-flow
queuing is hard to maintain at low capacity sensor nodes.
Moeller et al. have proposed a backpressure routing protocol
[9] for data forwarding in sensor networks. In backpressure
routing, like collection tree protocol computes a new path to
the sink when a fault is detected in the routing path. The time
taken to repair the failures in computing a new path is high.
Sergiou et al. have proposed a source based data gathering tree
[10] in wireless sensor network. However, the scheme does not
provide any fault tolerance for arbitrary node failure.
In this paper, a tree is constructed with the root at the sink,
and the tree gradually becomes a BFS tree. A new approach to
extract the neighborhood information is implemented during the
construction phase. Each node precomputes an alternate parent
node through which it creates an alternate path to the root. The
alternate path is used when the parent node fails. The repairing is
quick, and only a few control messages are required, with
minimum use of the non-renewable battery power. In [11], an
introductory work has been published in which the child node
chooses an alternate parent node to communicate with the sink
node when the current parent node fails, but no solution is
provided when the alternate parent node also fails. Since the
alternate parent chosen in [11] lies on the forwarding path of the
failed parent node, choosing such an alternate parent will not fix
the tree. In [11], the shortest path to the root node changes as the
topology changes during the repairing time. This work provides an
optimal solution for selecting the alternate path by using the
information fetched in the initial tree construction phase.
In in-network data gathering, messages are delivered without
any redundancy and with minimum loss and delay to the sink node
even under multiple node failures. The correctness of the proposed
scheme has been shown through simulation results and compared
with the previous approaches in terms of repairing delay and
packet drops. To the best of our knowledge, this is the first paper
that provides an optimal solution for achieving fault tolerance in
WSNs by using an in-network data gathering scheme.
IV. IN-NETWORK DATA GATHERING SCHEME
The tree is constructed using BFS, and each node calculates
its level from the sink node through exchanged messages. The
proposed distributed BFS tree is unique in that every node
performs some local processing during the tree construction
phase. Since every node precomputes its alternate parent, the cost
of repairing the tree is low in terms of both control messages and
time required. If the time interval between two successive failures
is more than the repairing time of a single node failure, then the
two failures are treated as two single node failures. The proposed
scheme can handle both single node failure and simultaneous
multiple node failures. The proposed in-network data gathering
tree ensures reliability through efficient buffer management. The
proposed scheme is described in the following subsections:
system model, proactive approach of repairing, reactive approach
of repairing, simulation results, and conclusion.
A. System model:
Let N be the number of nodes deployed randomly in the
environment to be monitored. The topology is represented by a
graph G(V,E), where V is the set of vertices (nodes) and E is the
set of edges connecting them. Two nodes are said to be connected
if they are within each other's communication range r. Nodes
sense data and forward them to the sink. A data gathering tree
using BFS is constructed with the root at the sink. Each node in
the topology stores the ids of all one-hop neighbors and their
levels in the tree. The network is considered to be static; however,
a node may fail suddenly due to power depletion or be damaged
by natural calamities. The following assumptions are made while
constructing the tree:
i. Each node in G receives Token messages at least once.
ii. Each sensor node has the same processing capacity, transmitter power and memory limitation.
iii. The algorithm for tree construction terminates eventually.
iv. Every node eventually receives the Token message from the right parent with the shortest path.
v. The algorithm produces a correct BFS tree for data gathering rooted at the sink node.
vi. The tree is constructed in a way that does not form a closed loop.
vii. Each node shares its level in the tree and its id with all one-hop neighbors.
viii. A node can add or remove another node from its parentset.
ix. Every node calculates an alternate parent during the initial construction of the tree.
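As an illustration of the construction phase, the following minimal Python sketch (added for clarity; not the authors' implementation) builds the BFS tree rooted at the sink and precomputes an alternate parent for every node, here taken to be any other neighbor strictly closer to the sink so that the fallback path cannot loop back:

from collections import deque

def build_tree(adj, sink):
    """BFS from the sink: every node learns its level, parent, and an alternate parent."""
    level = {sink: 0}
    parent = {sink: None}
    order = deque([sink])
    while order:                      # standard BFS expansion, Token-message style
        u = order.popleft()
        for v in adj[u]:
            if v not in level:
                level[v] = level[u] + 1
                parent[v] = u
                order.append(v)
    alt = {}
    for u in adj:
        if u == sink:
            continue
        # alternate parent: any other neighbor strictly closer to the sink,
        # so the fallback path cannot loop back through u
        alt[u] = next((v for v in adj[u]
                       if v != parent[u] and level[v] < level[u]), None)
    return level, parent, alt

# toy topology: sink "S" with two branches and one cross link
adj = {"S": ["a", "b"], "a": ["S", "c"], "b": ["S", "c"], "c": ["a", "b"]}
print(build_tree(adj, "S"))

In this toy topology only node c finds an alternate parent; nodes that find none must fall back on the reactive phase described later in this section.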
B. Proactive approach:
The proactive approach of repairing is initiated when a single
node failure occurs in the topology. The repairing is explained in
Algorithms 1 and 2; in Algorithm 1, when urgflag is set to TRUE,
the second phase of repairing is initiated. The default value for
urgflag is FALSE. The id of the single failed node is denoted FN
(Failed Node). Node m receives DetectCrash(FN) and checks the
id against its neighbor set. It removes FN from its childset if FN
is a child of m. When hold is set to TRUE, the application
controller eventually assigns hold to FALSE and breaks the loop.
If FN is the parent of node m, then pa(m) is assigned Null. When
the alternate parent is not Null, reqflag is assigned TRUE and
ParentReq is sent to altpa(m). If the alternate parent is also Null,
then urgflag is set to TRUE and the second phase of repairing is
initiated; in this scenario, both the parent and the alternate parent
have failed, and the repairing is done by sending urgent messages.
An update timer is used to calculate a new alternate parent when
the topology changes.
On receiving ParentReq from n, node m checks the urgent
flag, which will be FALSE for a single node failure. If node n is
the parent of node m, then the parent of node m is assigned Null.
If the parent of node m is not Null, then node m sends
ParentReqAck, together with its level and parent information, to
all its neighbors, and adds node n to its childset. If n is the
alternate parent of m, then the update timer is reset. On receiving
ParentReqAck from a node n, node m updates its altParentLevel
and parentSet. If pa(m) has failed and n = altpa(m), then m first
sets its parent to n, sets its level, and resets block and reqflag. If
n ∈ Child(m), then m removes n from its childset. All nodes from
which m has received a ParentReq are added to the childset. m
then sends ParentReqAck with its updated level and parent
information to all neighbors except the parent. The victim node
selects, among the available alternate parents, the one with the
lowest level.
Algorithm 1. On receiving DetectCrash(FN) by node m from the lower layer on detecting a node failure in the neighborhood
1: Neighbor(m) ← Neighbor(m) \ {FN}
2: Remove entry from ParentSet for FN
3: Remove entry from AltParentLevel for FN
4: if FN ∈ Ch(m) then
5:   hold ← TRUE /* parent switching waits until the buffer is cleared */
6:   while hold = TRUE do
7:     Wait for AC to return /* the Application Controller assigns hold to FALSE, eventually breaking the loop */
8:   end while
9:   Ch(m) ← Ch(m) \ {FN}
10: else if FN = Pa(m) then
11:   Send WaitFlow to n, for all n ∈ Ch(m)
12:   pa(m) ← Null
13:   if altpa(m) ≠ Null then
14:     reqflag ← TRUE
15:     Send ParentReq to altpa(m)
16:   else
17:     urgflag ← TRUE /* second phase of repairing is initiated */
18:     urglst ← {urglst, m, p}, for all p ∈ Neighbor(m)
19:     for all p ∈ Neighbor(m) do
20:       Send Urg(urglst) to p
21:     end for
22:   end if
23: else
24:   if FN = altpa(m) then
25:     altpa(m) ← Null
26:   end if
27:   Reset UpdateTmr /* calculation of a new alternate parent is initiated */
28: end if
Algorithm 2. On receiving ParentReq from n by m
1: if urgflag = FALSE then
2:   if n = pa(m) then
3:     pa(m) ← Null
4:   end if
5:   if pa(m) ≠ Null then
6:     for all p ∈ NeighborSet(m) \ {pa(m)} do
7:       Send ParentReqAck(level, pa(m)) to p
8:     end for
9:     Ch(m) ← Ch(m) ∪ {n}
10:    if n = altpa(m) then
11:      Reset UpdateTmr /* computation of a new alternate parent is required */
12:    end if
13:  else
14:    if altpa(m) = Null ∨ n = altpa(m) then
15:      urgflag ← TRUE /* second phase of repairing initiates */
16:      urglst ← {urglst, m, p}, for all p ∈ Neighbor(m)
17:      for all p ∈ Neighbor(m) do
18:        Send Urg(urglst) to p
19:      end for
20:    else if reqflag = FALSE then
21:      Send ParentReq to altpa(m)
22:      reqflag ← TRUE
23:    end if
24:  end if
25: end if
Algorithm 3. On receiving Urg(urglst) from n by m
1: if urgflag = FALSE then
2:   if pa(m) ∈ urglst ∨ pa(m) = Null then
3:     urgflag ← TRUE
4:     if altpa(m) ∉ urglst ∧ altpa(m) ≠ Null then
5:       Send ParentReq to altpa(m)
6:     else
7:       urglst ← {urglst, m, p}, for all p ∈ Neighbor(m)
8:       for all p ∈ Neighbor(m), p ∉ urglst do
9:         Send Urg(urglst) to p
10:      end for
11:    end if
12:  else
13:    if PaSet(m) ⊄ urglst then
14:      for all w ∈ Neighbor(m) do
15:        Send UrgAck(level, pa(m)) to w
16:      end for
17:    else
18:      urgflag ← TRUE
19:      urglst ← {urglst, m, p}, for all p ∈ Neighbor(m)
20:      for all p ∈ Neighbor(m), p ∉ urglst do
21:        Send Urg(urglst) to p
22:      end for
23:    end if
24:  end if
25: end if
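For readers who prefer executable form, the following Python sketch condenses the main branches of Algorithm 1 (a simplified single-process illustration; the state dictionary, send() stub and message format are assumptions of this sketch, not the paper's API):

def send(src, dst, msg):
    print(f"{src} -> {dst}: {msg}")          # stand-in for the radio layer

def on_detect_crash(m, FN, state):
    s = state[m]
    s["neighbors"].discard(FN)
    if FN in s["children"]:
        s["children"].discard(FN)            # buffered data for FN is flushed first
    elif FN == s["parent"]:
        s["parent"] = None
        if s["altparent"] is not None:       # proactive phase: switch to the precomputed parent
            s["reqflag"] = True
            send(m, s["altparent"], ("ParentReq", m))
        else:                                # parent and alternate both gone: urgent phase
            s["urgflag"] = True
            for p in s["neighbors"]:
                send(m, p, ("Urg", [m] + sorted(s["neighbors"])))
    elif FN == s["altparent"]:
        s["altparent"] = None                # a new alternate parent will be recomputed

state = {"a": {"neighbors": {"S", "c"}, "children": {"c"}, "parent": "S",
               "altparent": None, "reqflag": False, "urgflag": False}}
on_detect_crash("a", "S", state)             # parent failed, no alternate: urgent messages go out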
C. Reactive approach of repairing:
When both the parent and the alternate parent fail, the
reactive approach of repairing is initiated. The node receiving the
id of the failed node checks it against its parentset and childset.
Two nodes failing at the same time instant lead to this second
phase of repairing. Since there is no alternate parent, the nodes in
the vicinity of the victim nodes initiate the repairing and find a
valid alternate parent through which the victim nodes can reach
the sink node. Each node has the level and parent information of
all one-hop neighbors, so the nodes can determine whether there
is a valid path to the sink node. The time taken to repair the tree
is comparatively more than in the first phase of repairing. Lines
17-21 of Algorithm 1 show the initial steps of the reactive
approach. Here node m receives DetectCrash(FN) from the
neighborhood. The urgent flag is denoted urgflag in the
algorithms and is FALSE by default. The urgflag is set to TRUE
and an urglst is generated. Urglst is the urgent list, consisting of
node m and the neighbors of node m; the list consists of all victim
nodes that have the same failed parent. Node m sends Urg(urglst)
to all its neighbors. On receiving Urg(urglst), each node checks
whether its id is in the urglst; if a neighbor finds its id in the
urglst, then that neighbor is also a victim node. The victim node
assigns urgflag to TRUE and forwards the urgent message to all
its one-hop neighbors. This is repeated until a valid alternate
parent is found to reach the sink node. The nodes participating in
the second phase of repairing set their urgflag to TRUE. Node m
will receive an acknowledgement from a valid parent; the
UrgAck contains the level and parent information of the sending
node. If the victim node receives several acknowledgements for
an urgent message, it checks the levels of the available alternate
parents, and the best level is selected as the alternate parent.
Update timers are used to improve the shortest path to the sink
node. Node m selects node n as its parent when m receives the
UrgAck message.
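The reactive phase can be pictured as a breadth-first flood that stops as soon as nodes with an intact path to the sink answer. The sketch below is an illustration under simplifying assumptions (synchronous rounds and a global view standing in for the Urg/UrgAck exchange); it returns the alternate parent with the best level:

def urgent_repair(victim, adj, level, parent):
    """Flood outward from the victim until nodes that can still reach the
    sink answer, then adopt the candidate with the lowest level."""
    visited, frontier, candidates = {victim}, [victim], []
    while frontier and not candidates:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v in visited:
                    continue
                visited.add(v)
                if parent.get(v) is not None:    # v can still reach the sink
                    candidates.append(v)         # it would reply with UrgAck(level)
                else:
                    nxt.append(v)                # fellow victim: keep flooding
        frontier = nxt
    return min(candidates, key=lambda v: level[v], default=None)

adj = {"v": ["w", "x"], "w": ["v", "y"], "x": ["v"], "y": ["w"]}
level = {"v": 3, "w": 2, "x": 3, "y": 1}
parent = {"v": None, "w": None, "x": None, "y": "sink"}  # v, w, x are victims
print(urgent_repair("v", adj, level, parent))            # -> "y", found by the flood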
V. SIMULATION RESULTS
The proposed guaranteed in-network data gathering scheme
for WSNs has been evaluated in the Network Simulator [12]. The
simulations are taken with a varying number of failures in the
network topology. The results are compared with an existing
adhoc routing protocol, and the analyses show that our algorithm
works efficiently.
A. Number of failures vs delay:
In figure 1, the graph shows that the time taken to repair the
failures is low for the proposed algorithm even when there is a
large number of failures.
Fig 1: No of failures vs Delay
The line formed by the in-network data gathering scheme is
shown in red. With the adhoc routing protocol, the time taken to
reach the sink node is much higher than ours; even for a small
number of failures the delay increases to a great extent. So adhoc
routing is unsuitable for handling failures in WSNs.
B. Number of failures vs drop:
In figure 2, the comparison is between the in-network
guaranteed data gathering tree and the adhoc routing protocol in
terms of packet drops. When the number of node failures
increases, the packet drop does not increase in the in-network
data gathering tree. The drop ratio is far better compared to the
adhoc routing protocol. Even when the number of failures
increases, the packet drops are maintained at a certain level, thus
guaranteeing that the maximum number of packets is delivered to
the sink node successfully.
Fig 2: No of failures vs Drop
C. Number of failures vs energy:
Energy consumption is the most important factor in wireless
sensor networks, where non-renewable batteries are used. In
figure 3, the energy level of the nodes under the proposed
algorithm is compared with the energy level under the adhoc
routing protocol. The energy level stays high in the in-network
data gathering tree even when the number of failures increases.
For the adhoc routing protocol, the energy level is much lower
even when there is a minimum number of failures. Based on the
energy graph, the proposed algorithm provides an energy-efficient
fault-tolerant approach for WSNs. Though the energy level
decreases gradually as the number of failures increases, it does
not go below a certain level.
Fig 3: No of failures vs Energy
VI. CONCLUSION
In this paper, a set of fault-tolerant algorithms has been
proposed to construct a BFS-based data gathering tree. Each node
precomputes an alternate parent node through which it creates an
alternate path to the root; the alternate path is used when the
parent node fails. The repairing is quick, and only a few control
messages are required, with minimum use of the battery power.
Simulation results confirm that the proposed algorithm yields
lower delay, fewer packet drops and less energy consumption than
the previous approaches. The algorithm works for both single
node failures and arbitrary node failures. Dynamically extending
the tree to re-include recovered nodes is kept for future research.
REFERENCES
[1] I.F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, "Wireless sensor networks: a survey," Computer Networks (Elsevier), vol. 38, pp. 393-422, 2002.
[2] K. Lu, L. Huang, Y. Wan, H. Xu, "Energy-efficient data gathering in large wireless sensor networks," in Proceedings of the Second International Conference on Embedded Software and Systems, 2005, pp. 327-331.
[3] V. Annamalai, S. Gupta, L. Schwiebert, "On tree-based convergecasting in wireless sensor networks," in Proceedings of the IEEE Wireless Communication and Networking Conference, 2003, pp. 1942-1947.
[4] X.Y. Li, Y. Wang, Y. Wang, "Complexity of data collection, aggregation, and selection for wireless sensor networks," IEEE Transactions on Computers, vol. 60, pp. 386-399, 2011.
[5] S. Chen, M. Huang, S. Tang, Y. Wang, "Capacity of data collection in arbitrary wireless sensor networks," IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 1, pp. 52-60, 2012.
[6] C. Johnen, "Memory efficient, self-stabilizing algorithm to construct BFS spanning trees," in Proceedings of the Sixteenth Annual ACM Symposium on Principles of Distributed Computing, 1997.
[7] C. Boulinier, A.K. Datta, L.L. Larmore, F. Petit, "Space efficient and time optimal distributed BFS tree construction," Information Processing Letters, vol. 108, pp. 273-278, 2008.
[8] O. Gnawali, R. Fonseca, K. Jamieson, D. Moss, P. Levis, "Collection tree protocol," in Proceedings of the 7th ACM Conference on Embedded Networked Sensor Systems, 2009, pp. 1-14.
[9] S. Moeller, A. Sridharan, B. Krishnamachari, O. Gnawali, "Routing without routes: the backpressure collection protocol," in Proceedings of the 9th ACM/IEEE International Conference on Information Processing in Sensor Networks, 2010, pp. 279-290.
[10] C. Sergiou, V. Vassiliou, "Source-based routing trees for efficient congestion control in wireless sensor networks," in Proceedings of the IEEE 8th International Conference on Distributed Computing in Sensor Systems, 2012, pp. 378-383.
[11] S. Chakraborty, S. Chakraborty, S. Nandi, S. Karmakar, "A novel crash-tolerant data gathering in wireless sensor networks," in Proceedings of the 13th IEEE/IFIP Network Operations and Management Symposium, 2012.
[12] NS-2 Network Simulator, version 2.34, 2011. <http://www.isi.edu/nsnam/ns/>.
Scalable and secure EHR System using Big Data Environment
S. Sangeetha, N. Saranya, R. Saranya, M. Vaishnavi
A. Sheelavathi
Assistant Professor/IT
Abstract-Cloud technology progress and increased use of the Internet are creating very large new
datasets with increasing value to businesses and processing power to analyze them affordable. Data
volumes to be processed by cloud applications are growing much faster than the computing power.
Hadoop MapReduce has become a powerful computation model to address these problems. A large number
of cloud services require users to share private data like electronic health records for data analysis or
mining, bringing privacy concerns. K-anonymity is a widely used category of privacy preserving
techniques. At present, the scale of data in many cloud applications increases tremendously in accordance
with the Big Data trend, thereby making it a challenge for commonly-used software tools to capture,
manage and process such large scale data within a tolerable elapsed time. As a result, it is a challenge for
existing anonymization approaches to achieve privacy preservation on privacy-sensitive large-scale data
sets due to their insufficient scalability. In this project, we propose a scalable two-phase top-down
specialization approach to anonymize large-scale data sets using the incremental MapReduce framework.
Experimental evaluation results demonstrate that with this project, the scalability, efficiency and privacy
of data sets can be significantly improved over existing approaches.
Index Terms—Cloud Computing, Electronic Health Record (EHR) Systems, K-Anonymity, Big Data, MapReduce.
1 INTRODUCTION
CLOUD computing, a disruptive trend at present, poses a
significant impact on the current IT industry and research
communities. Cloud computing provides massive computation
power and storage capacity by utilizing a large number of
commodity computers together, enabling users to deploy
applications cost-effectively without heavy infrastructure
investment. Cloud users can avoid a huge upfront investment in
IT infrastructure and concentrate on their own core business.
However, numerous potential customers are still hesitant to take
advantage of the cloud due to privacy and security concerns, and
research on cloud privacy and security has therefore come into
the picture. Privacy is one of the most concerned issues in cloud
computing, and the concern aggravates in the context of cloud
computing although some privacy issues are not
new. Personal data like electronic health records
and financial transaction records are usually
deemed extremely sensitive although these data can
offer significant human benefits if they are
analyzed and mined by organizations such as
disease research centres. For instance, Microsoft
HealthVault, an online cloud health service,
aggregates data from users and shares the data with
research institutes. Data privacy can be divulged
with less effort by malicious cloud users or providers because of
the failures of some traditional privacy protection measures on
the cloud. This can bring considerable economic loss or severe
social reputation impairment to data owners. Hence, data privacy
issues need to be addressed urgently before data sets are analyzed
or shared on the cloud.
Data privacy is one of the most concerned issues because
processing large-scale privacy-sensitive data sets often requires
computation power provided by public cloud services for big
data applications, and the scalability issues of existing BUG
(bottom-up generalization) approaches arise when handling big
data sets on the cloud. In the proposed work, the bottom-up and
top-down methods are combined in order to reach the best
anonymized data sets. A highly scalable two-phase TDS
(top-down specialization) approach for data anonymization based
on MapReduce on the cloud is proposed here. To make full use of
the parallel capability of MapReduce on the cloud, the
specializations required in an anonymization process are split
into two phases. In the first one, original data sets are partitioned
into a group of smaller data sets, and these data sets are
anonymized in parallel, producing intermediate results. In the
second one, the intermediate results are integrated into one and
further anonymized to achieve a consistent k-anonymity level.
The top-down and bottom-up approaches individually lack some
parameters and will not give an accurate result; in our proposal,
both approaches are combined in order to generate a better
optimized output with better accuracy. We leverage MapReduce
to accomplish the concrete computation in both phases. A group
of MapReduce jobs is deliberately designed and coordinated to
perform specializations on data sets collaboratively. We evaluate
our approach by conducting experiments on real-world data sets.
Experimental results demonstrate that with our approach, the
scalability and efficiency of TDS can be improved significantly
over existing approaches.
II LITERATURE REVIEW
In the cloud environment, privacy preservation for data
analysis, sharing and mining is a challenging research issue due
to increasingly large volumes of data sets, thereby requiring
intensive investigation. A wide variety of privacy models and
anonymization approaches have been put forth to preserve
privacy-sensitive informational data sets. Most existing
algorithms exploit an indexing data structure to assist the process
of anonymization, specifically the TEA (Taxonomy Encoded
Anonymity) index for BUG. TEA is a tree of m levels; the ith
level represents the current value for Dj, and each root-to-leaf
path represents a qid value in the current data table, with the qid
stored at the leaf node. In addition, the TEA index links up the
qids according to the generalizations that generalize them.
The HaLoop approach to large-scale iterative data analysis
inherits the basic distributed computing model and architecture of
Hadoop. The latter relies on a distributed file system (HDFS) that
stores each job's input and output data. A Hadoop cluster is
divided into two parts: one master node
and many slave nodes. A client submits jobs consisting of
mapper and reducer implementations to the master node. For each
submitted job, the master node schedules a number of parallel
tasks to run on slave nodes. Every slave node has a task tracker
daemon process to communicate with the master node and
manage each task's execution. Each task is either a map task or a
reduce task. A map task performs transformations on an input
data partition by executing a user-defined map function on each
(key, value) pair. A reduce task gathers all mapper output
assigned to it by a potentially user-defined hash function, groups
the output by keys, and invokes a user-defined reduce function on
each key-group. HaLoop uses the same basic model. In order to
accommodate the requirements of iterative data analysis
applications, however, HaLoop includes several extensions.
First, HaLoop extends the application programming interface
to express iterative MapReduce programs. Second, HaLoop's
master node contains a new loop control module that repeatedly
starts new MapReduce steps that compose the loop body,
continuing until a user-specified stopping condition is satisfied.
Third, HaLoop caches and indexes application data on slave
nodes' local disks. Fourth, HaLoop uses a new loop-aware task
scheduler to exploit these caches and improve data locality. Fifth,
if failures occur, the task scheduler and task trackers coordinate
recovery and allow iterative jobs to continue executing. HaLoop
relies on the same file system and has the same task queue
structure as Hadoop, but the task scheduler and task tracker
modules are modified, and the loop control, caching, and
indexing modules are new. The HaLoop task tracker not only
manages task execution, but also manages caches and indices on
the slave node, and redirects each task's cache and index accesses
to the local file system. Its scheduling technique has several
optimizations: a loop-aware task scheduler, loop-invariant data
caching, and caching for efficient fixpoint verification. The
disadvantages are that it cannot identify the minimal changes
required in an iterative computation and that it needs a large
number of loop-body functions.
III ALGORITHMS
Two Phase Top Down Specialization:
The TPTDS approach has three components, namely data
partition, anonymization level merging, and data specialization.
We propose TPTDS to conduct the computation required in TDS
in a highly scalable and efficient fashion. The two phases of our
approach are based on the two levels of parallelization
provisioned by MapReduce on the cloud. Basically, MapReduce
on the cloud has two levels of parallelization, i.e., job level and
task level. Job-level parallelization means that multiple
MapReduce jobs can be executed simultaneously to make full use
of cloud infrastructure resources. Combined with the cloud,
MapReduce becomes more powerful and elastic, as the cloud can
offer infrastructure resources on demand.
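To make the two-phase flow concrete, here is a toy Python sketch (an illustration only: Python's multiprocessing stands in for MapReduce jobs, and a crude single-attribute ZIP-code generalization stands in for full taxonomy-based specialization):

from multiprocessing import Pool

def anonymize(partition, k):
    """Stand-in for one specialization run: truncate ZIP codes until every
    group holds at least k records (a crude generalization, for illustration)."""
    digits = 5
    while digits > 0:
        groups = {}
        for zip_code in partition:
            groups.setdefault(zip_code[:digits], []).append(zip_code)
        if all(len(g) >= k for g in groups.values()):
            break
        digits -= 1
    return [z[:digits] + "*" * (5 - digits) for z in partition]

def two_phase_tds(records, k, n_parts=2):
    parts = [records[i::n_parts] for i in range(n_parts)]
    with Pool(n_parts) as pool:                  # phase 1: anonymize partitions in parallel
        intermediate = pool.starmap(anonymize, [(p, k) for p in parts])
    merged = [z for part in intermediate for z in part]
    return anonymize(merged, k)                  # phase 2: merge and enforce k globally

if __name__ == "__main__":
    print(two_phase_tds(["63701", "63702", "63811", "63812", "60601", "60602"], k=2))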
MapReduce is a programming model and an associated
implementation for processing and generating large data sets with
a parallel, distributed algorithm on a cluster. A MapReduce
program is composed of a Map() procedure that performs
filtering and sorting (such as sorting students by first name into
queues, one queue for each name) and a Reduce() procedure that
performs a summary operation. The "MapReduce system" (also
called "infrastructure" or "framework") orchestrates the
processing by marshalling the distributed servers, running the
various tasks in
parallel, managing all communications and data transfers
between the various parts of the system, and providing for
redundancy and fault tolerance.
The model is inspired by the map and reduce functions
commonly used in functional programming, although their
purpose in the MapReduce framework is not the same as in their
original forms. The key contributions of the MapReduce
framework are not the actual map and reduce functions, but the
scalability and fault tolerance achieved for a variety of
applications by optimizing the execution engine once. Only when
the optimized distributed shuffle operation (which reduces
network communication cost) and the fault tolerance features of
the MapReduce framework come into play is the use of this
model beneficial. MapReduce libraries have been written in many
programming languages, with different levels of optimization.
The name MapReduce originally referred to the proprietary
Google technology but has since been generalized.
MapReduce is a framework for processing parallelizable
problems across huge datasets using a large number of computer
nodes, collectively referred to as a cluster (if all nodes are on the
same local network and use similar hardware) or a grid.
Processing can occur on data stored either in a filesystem
(unstructured) or in a database (structured). MapReduce can take
advantage of locality of data, processing it on or near the storage
assets in order to reduce the distance over which it must be
transmitted.
"Map" step: Each worker node applies the "map()" function
to the local data, and writes the output to a temporary storage. A
master node ensures that, of redundant copies of input data, only
one is processed.
"Shuffle" step: Worker nodes redistribute data based on the
output keys (produced by the "map()" function), such that all
data belonging to one key is located on the same worker node.
"Reduce" step: Worker nodes now process each group of
output data, per key, in parallel.
Figure 1: MapReduce procedure diagram
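The three steps can be demonstrated with the canonical word-count example; the following single-process Python sketch (illustrative, not Hadoop code) mirrors the Map, Shuffle and Reduce steps described above:

from collections import defaultdict
from itertools import chain

def map_fn(_, line):                 # "Map" step: emit (key, value) pairs
    return [(w, 1) for w in line.split()]

def shuffle(pairs):                  # "Shuffle" step: group values by key
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_fn(key, values):          # "Reduce" step: summary operation per key
    return key, sum(values)

lines = ["deer bear river", "car car river", "deer car bear"]
mapped = chain.from_iterable(map_fn(i, l) for i, l in enumerate(lines))
print(dict(reduce_fn(k, vs) for k, vs in shuffle(mapped).items()))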
MapReduce allows for distributed processing of the map and
reduction operations. Provided that each mapping operation is
independent of the others, all maps can be performed in parallel –
though in practice this is limited by the number of independent
data sources and/or the number of CPUs near each source.
Similarly, a set of 'reducers' can perform the reduction phase,
provided that all outputs of the map operation that share the same
key are
presented to the same reducer at the same time, or that the
reduction function is associative. While this process can often
appear inefficient compared to algorithms that are more
sequential, MapReduce can be applied to significantly larger
datasets than "commodity" servers can handle – a large server
farm can use MapReduce to sort a petabyte of data in only a few
hours. The parallelism also offers some possibility of recovering
from partial failure of servers or storage during the operation: if
one mapper or reducer fails, the work can be rescheduled –
assuming the input data is still available.
Another way to look at MapReduce is as a 5-step parallel and
distributed computation:
Prepare the Map() input – the "MapReduce system"
designates Map processors, assigns the input key value K1 that
each processor would work on, and provides that processor with
all the input data associated with that key value.
Run the user-provided Map() code – Map() is run exactly
once for each K1 key value, generating output organized by key
values K2.
"Shuffle" the Map output to the Reduce processors – the
MapReduce system designates Reduce processors, assigns the K2
key value each processor should work on, and provides that
processor with all the Map-generated data associated with that
key value.
Run the user-provided Reduce() code – Reduce() is run
exactly once for each K2 key value produced by the Map step.
Produce the final output – the MapReduce system collects all
the Reduce output and sorts it by K2 to produce the final
outcome.
These five steps can be logically thought of as running in
sequence – each step starts only after the previous step is
completed – although in practice they can be interleaved as long
as the final result is not affected. In many situations, the input
data might already be distributed ("sharded") among many
different servers, in which case step 1 could sometimes be greatly
simplified by assigning Map servers that would process the
locally present input data. Similarly, step 3 could sometimes be
sped up by assigning Reduce processors that are as close as
possible to the Map-generated data they need to process.
IV CONCLUSION
In this paper, we have investigated the scalability problem of
large-scale data anonymization by TDS and proposed a highly
scalable two-phase TDS approach using MapReduce on the
cloud. Data sets are partitioned and anonymized in parallel in the
first phase, producing intermediate results. Then, the intermediate
results are merged and further anonymized to produce consistent
k-anonymous data sets in the second phase. We have creatively
applied MapReduce on the cloud to data anonymization and
deliberately designed a group of innovative MapReduce jobs to
accomplish the specialization computation concretely in a highly
scalable way. Experimental results on real-world data sets have
demonstrated that with our approach, the scalability and
efficiency of TDS are improved significantly over existing
approaches. The approach provides a high ability to handle large
data sets: a highly scalable two-phase top-down approach to
anonymize large-scale data using MapReduce is proposed,
providing privacy through effective anonymization approaches.
V FUTURE ENHANCEMENT
In the cloud environment, privacy preservation for data
analysis, sharing and mining is a challenging research issue due
to increasingly large volumes of data sets, thereby requiring
intensive investigation. We will investigate the adoption of our
approach to bottom-up generalization algorithms for data
anonymization. Based on the contributions herein, we plan to
further explore the next step on scalable privacy-preservation-aware
analysis and scheduling on large-scale data sets. Optimized
balanced scheduling strategies are expected to be developed
towards overall scalable privacy-preservation-aware data set
scheduling.
Message Authentication and Security Schemes
Based on ECC in Wireless Sensor Network
K.Selvaraj1, P.G. Scholar, Department of IT, V.M.K.V. Engineering College, Salem - 636308, Tamilnadu, India
Karthik.S2, Assistant Professor, Department of IT, V.M.K.V. Engineering College, Salem - 636308, Tamilnadu, India
Sasikala.K3, Assistant Professor, Department of IT, V.M.K.V. Engineering College, Salem - 636308, Tamilnadu, India
Abstract
Message authentication is one of the most efficient ways to
prevent unauthorized and corrupted messages from being
forwarded in wireless sensor networks (WSNs). For this reason,
several message authentication proposals have been developed,
based on either symmetric-key cryptosystems or public-key
cryptosystems. Most of them, however, have the restrictions of
high computational and communication overhead in addition to
lack of scalability and resilience to node compromise attacks.
Wireless sensor networks are becoming very popular day by day;
however, one of the main concerns in WSNs is their limited
resources. One has to look to the available resources to generate a
Message Authentication Code (MAC), keeping in mind the
feasibility of the method used for the sensor network at hand.
This paper investigates different cryptographic approaches such
as symmetric-key cryptography and asymmetric-key
cryptography. To provide this service, a polynomial-based
scheme was recently introduced; this scheme and its extensions
all have the weakness of a built-in threshold determined by the
degree of the polynomial. In this paper, we propose a scalable
authentication scheme based on the optimal Modified ElGamal
signature (MES) scheme on elliptic curve cryptography.
Index Terms—Hop-by-hop authentication, symmetric-key
cryptosystem, public-key cryptosystem, source privacy,
simulation, wireless sensor networks (WSNs)
I. INTRODUCTION
Message authentication [1] performs a very important role in
thwarting unauthorized and corrupted messages from being
delivered in networks, to save the valuable sensor energy.
Therefore, many schemes have been proposed in the literature to
offer message authenticity and integrity verification for wireless
sensor networks (WSNs) [4, 12, 13]. These approaches can
largely be separated into two categories: public-key based
approaches and symmetric-key based approaches.
The symmetric-key based approach [2] necessitates composite
key management, lacks scalability, and is not resilient to large
numbers of node compromise attacks, since the message sender
and the receiver have to share a secret key. The shared key is used
by the sender to produce a message authentication code (MAC)
for each transmitted message. However, with this process the
authenticity and integrity of the message can only be confirmed by
the node with the shared secret key, which is usually shared by a
group of sensor nodes. An intruder can compromise the key by
capturing a single sensor node. In addition, this method is not
useful in multicast networks. To solve the scalability problem, a
secret polynomial based message authentication scheme was
introduced [3]. The idea of this scheme is similar to a threshold
secret sharing, where the threshold is determined by the degree of
the polynomial. This approach offers information-theoretic
security of the shared secret key when the number of messages
transmitted is less than the threshold. The intermediate nodes
verify the authenticity of the message through a polynomial
evaluation. When the number of messages transmitted is larger
than the threshold, the polynomial can be fully recovered and the
system is completely broken. An alternative solution was
proposed to thwart the intruder from recovering the polynomial
by computing the coefficients of the polynomial. The idea is to
add a random noise, also called a perturbation factor, to the
polynomial so that the coefficients of the polynomial cannot be
easily solved. However, the random noise can be completely
removed from the polynomial using error-correcting code
techniques [5].
For the public-key based method, each message is transmitted
along with a digital signature of the message produced using the
sender's private key. Every intermediate forwarder and the final
receiver can authenticate the message using the sender's public
key [10], [11]. One of the restrictions of the public-key based
method is the high computational overhead. The recent progress
on Elliptic Curve Cryptography (ECC) shows that public-key
schemes can be more advantageous in terms of computational
complexity, memory usage, and security resilience, since
public-key based approaches have a simple and clean key
management.
In this project, an unconditionally secure and efficient source
anonymous message authentication (SAMA) scheme is proposed,
based on the optimal Modified ElGamal signature (MES) scheme
on elliptic curves. This MES scheme is secure against adaptive
chosen-message attacks in the random oracle model [6]. The
MES scheme enables the intermediate nodes to authenticate the
message so that all corrupted messages can be detected and
dropped, to conserve sensor power. While achieving compromise
resiliency, flexible-time authentication and source identity
protection, this scheme does not have the threshold problem.
Both theoretical analysis and simulation results demonstrate that
the proposed scheme is more efficient than the polynomial-based
algorithms under comparable security levels.
The principal attraction of ECC, compared to RSA, is that it
appears to offer equal security for a far smaller key size, thereby
reducing processing overhead. ECC is a method of encoding data
files so that only specific individuals can decode them. ECC is
based on the mathematics of elliptic curves and uses the location
of points on an elliptic curve to encrypt and decrypt information.
The main advantage of ECC over RSA is particularly important
in wireless devices, where computing power, memory and battery
life are limited. ECC has good potential for wireless sensor
network security due to its smaller key size and its high strength
of security. ECC is a public-key cryptosystem.
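To illustrate the point arithmetic that underlies ECC, the following Python sketch implements addition and scalar multiplication on a toy curve (the parameters are illustrative only; real systems use standardized curves with primes of roughly 256 bits):

# toy curve y^2 = x^3 + 2x + 3 over F_97; G = (3, 6) lies on it since 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
p, a, b = 97, 2, 3
G = (3, 6)

def inv(x):
    return pow(x, p - 2, p)            # modular inverse via Fermat's little theorem

def add(P, Q):
    if P is None: return Q             # None represents the point at infinity
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    lam = ((3 * x1 * x1 + a) * inv(2 * y1) if P == Q
           else (y2 - y1) * inv(x2 - x1)) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):                         # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

d = 13                                 # private key: a scalar
Q = mul(d, G)                          # public key: a curve point
print("public key point:", Q)

Security rests on the elliptic-curve discrete logarithm problem: recovering d from Q and G is infeasible at key sizes far smaller than equivalent-strength RSA moduli, which is the advantage the text describes.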
1.1 GOAL
The proposed scheme provides hop-by-hop message
authentication and source privacy in wireless sensor networks:
• To develop a source anonymous message authentication code
(SAMAC) on elliptic curves that can provide unconditional
source anonymity.
• It offers an efficient hop-by-hop message authentication
mechanism for WSNs without the threshold limitation.
• It provides network implementation criteria on source node
privacy protection in WSNs.
• To propose an efficient key management framework to ensure
isolation of the compromised nodes.
• It provides extensive simulation results under ns-2 on multiple
security levels.
The MES scheme provides hop-by-hop node authentication
without the threshold limitation, and performs better than the
symmetric-key based schemes. The distributed nature of the
algorithm makes the scheme suitable for decentralized networks.
1.2 PROBLEM DEFINITION
Message authentication is one of the most effective ways to
thwart unauthorized and corrupted messages from being
forwarded in wireless sensor networks (WSNs). Most existing
schemes have the limitations of high computational and
communication overhead in addition to lack of scalability and
resilience to node compromise attacks. An intruder can
compromise the key by capturing a single sensor node. In
addition, the symmetric-key based method does not work in
multicast networks. The intermediate node can verify the
authenticity of a message through polynomial evaluation; in the
polynomial-based scheme, when the number of messages
transmitted is larger than the threshold, the adversary can fully
recover the polynomial.
II. TERMINOLOGY AND PRELIMINARY
This section briefly describes the terminology and the
cryptographic tools.
2.1. Threat Model and Assumptions
The wireless sensor networks are assumed to consist of a
huge number of sensor nodes. It is assumed that each sensor node
knows its relative location in the sensor domain and is capable of
communicating with its neighboring nodes directly using
geographic routing. The entire network is fully connected through
multi-hop communications. It is assumed that there is a security
server (SS) that is responsible for the generation, storage and
distribution of the security parameters among the network. This
server will by no means be compromised. However, after
deployment, the sensor nodes may be compromised and captured
by attackers. Once compromised, all data stored in the sensor
nodes can be obtained by the attackers. The compromised nodes
can be reprogrammed and completely managed by the attackers.
However, the compromised nodes will be unable to produce new
public keys that can be accepted by the SS and other nodes. Two
types of possible attacks launched by the adversaries are:
• Passive attacks: Through passive attacks, the adversaries could
snoop on messages transmitted in the network and perform
traffic analysis.
• Active attacks: Active attacks can only be commenced from the
compromised sensor nodes.
Once the sensor nodes are compromised, the adversaries will
gain all the data stored in the compromised nodes, including their
security parameters. The adversaries can alter the contents of the
messages and introduce their own messages. An authentication
protocol should therefore be resistant to node compromise by
allowing secure key management; the protocol may provide an
integrated key-rotation mechanism or allow for key rotation by an
external module.
2.2 Design Goals
Our proposed authentication scheme aims at achieving the
following goals:
Message authentication. The message receiver should be able
to verify whether a received message is sent by the node that is
claimed, or by a node in a particular group. In other words, the
adversaries cannot pretend to be an innocent node and inject fake
messages into the network without being detected.
Message integrity. The message receiver should be able to
verify whether the message has been modified en route by the
adversaries. In other words, the adversaries cannot modify the
message content without being detected.
Hop-by-hop message authentication. Every forwarder on the
routing path should be able to verify the authenticity and integrity
of the messages upon reception.
Identity and location privacy. The adversaries cannot
determine the message sender's ID and location by analyzing the
message contents or the local traffic.
Node compromise resilience. The scheme should be resilient
to node compromise attacks. No matter how many nodes are
compromised, the remaining nodes can still be secure.
Efficiency. The scheme should be efficient in terms of both
computational and communication overhead.
2.3 Terminology
Privacy is sometimes referred to as anonymity.
Communication anonymity in information management has been
discussed in a number of previous works
[14][15][16][17][18][19]. It generally refers to the state of being
unidentifiable within a set of subjects, called the AS. Sender
anonymity means that a particular message is not linkable to any
sender, and no message is linkable to a particular sender.
We will start with the definition of the unconditionally secure
SAMA.
Definition 1 (SAMA). A SAMA consists of the following two
algorithms:
Generate(m, Q1, Q2, ..., Qn). Given a message m and the
public keys Q1, Q2, ..., Qn of the AS S = {A1, A2, ..., An}, the
actual message sender At, 1 ≤ t ≤ n, produces an anonymous
message S(m) using its own private key dt.
Verify(S(m)). Given a message m and an anonymous message
S(m), which includes the public keys of all members in the AS, a
verifier can determine whether S(m) is generated by a member in
the AS.
The security requirements for SAMA include:
Sender ambiguity. The probability that a verifier successfully
determines the real sender of the anonymous message is exactly
1/n, where n is the total number of members in the AS.
Unforgeability. An anonymous message scheme is
unforgeable if no adversary, given the public keys of all members
of the AS and anonymous messages m1, m2, ..., mn adaptively
chosen by the adversary, can produce in polynomial time a new
valid anonymous message with non-negligible probability.
In this paper, the user ID and the user public key will be used
interchangeably without making distinctions.
Modified ElGamal Signature Scheme
Definition 2 (MES). The modified ElGamal signature scheme
[8] consists of the following three algorithms:
Key generation algorithm. Let p be a large prime and g be a
generator of Z*p. Both p and g are made public. For a random
private key x ∈ Zp, the public key y is computed from
y = g^x mod p.
Signature algorithm. The MES can have many variants
[20],[21]. For the purpose of efficiency, we describe the variant
called the optimal scheme. To sign a message m, one chooses a
random k ∈ Z*p-1, then computes the exponentiation
r = g^k mod p and solves s from:
s = r·x·h(m, r) + k mod (p − 1),
where h is a one-way hash function. The signature of message m
is defined as the pair (r, s).
Verification algorithm. The verifier checks whether the
signature equation g^s = r·y^(r·h(m,r)) mod p holds. If the
equality holds true, then the verifier accepts the signature, and
rejects it otherwise.
III. RELATED WORK
Message authentication and security is used in different
applications and is one of the key characteristics of all those
applications; for that reason, many authors have proposed
different kinds of security algorithms, such as symmetric-key
algorithms and public-key algorithms. Both passive and active
attacks are discussed for those algorithms, and recovery
mechanisms are shown in simulation. The advantages and
disadvantages of such algorithms are discussed below.
3.1. STATISTICAL ENROUTE FILTERING
The Statistical En-route Filtering (SEF) mechanism detects
and drops false reports. SEF requires that each sensing report be
validated by multiple keyed message authentication codes
(MACs), each generated by a node that detects the same event.
As the report is forwarded, each node verifies the correctness of
the MACs probabilistically and drops those with invalid MACs at
the earliest points. The sink filters out the remaining false reports
that escape the en-route filtering. SEF determines the truthfulness
of each report through collective decision-making by multiple
detecting nodes and collective false-report detection by multiple
forwarding nodes. Its limitation is that it fails to detect malicious
misbehaviors in the presence of disadvantages like ambiguous
collisions, receiver collisions, limited transmission power, false
misbehavior reports, collision and partial dropping.
3.2. SECRET POLYNOMIAL MESSAGE AUTHENTICATION
A secret polynomial based message authentication scheme
was introduced to protect messages from adversaries. This
scheme offers security with ideas similar to a threshold secret
sharing, where the threshold is determined by the degree of the
polynomial. If the number of messages transmitted is below the
threshold, the intermediate nodes can verify the authenticity of
the message through
polynomial evaluation. When the number of messages transmitted
is larger than the threshold, the polynomial can be fully recovered
by the adversary and the system is broken completely. To raise
the threshold against an intruder reconstructing the secret
polynomial, a random noise, also called a perturbation factor, was
added to the polynomial to prevent the adversary from computing
the coefficients of the polynomial.
IV. PROPOSED WORK
4.1 PROPOSED SYSTEM
In the proposed system an unconditionally secure and
efficient source anonymous message authentication scheme is
introduced. The main idea is that for each message to be released,
the message sender, or the sending node, generates a source
anonymous message authenticator for the message m. The
generation is based on the MES scheme on elliptic curves. For a
ring signature, each ring member is required to compute a forgery
signature for all other members in the AS. In this scheme, the
entire SAMA generation requires only three steps, which link all
non-senders and the message sender to the SAMA alike. In
addition, the design enables the SAMA to be verified through a
single equation without individually verifying the signatures.
This is the improved form of SAMA: it generates a source
anonymous message authenticator for the message; the generation
is based on the MES scheme on elliptic curves; SAMA generation
requires three steps, which link all non-senders and the message
sender to the SAMA; and the SAMA is verified through a single
equation without individually verifying the signatures.
SYSTEM MODEL
Fig.1. System Architecture
Figure 1 gives the overall architecture of the system, in which
the user enters the network and requests the service. The wireless
sensor network consists of a large number of sensor nodes. A
sensor node knows its relative location in the sensor domain and
is capable of communicating with its neighboring nodes directly
using geographic routing. The whole network is fully connected
through multi-hop communications. An inquiry node registers its
information; after registration, the registered node continues with
the login process. The security server is responsible for the
generation, storage and distribution of security parameters among
the network. This server will never be compromised. After
deployment, the sensor nodes may be captured and compromised
by attackers. A compromised node will not be able to create new
public keys. For each message m to be released, the message
sender, or the sending node, generates a source anonymous
message authenticator for the message m using its own private
key.
4.2 Proposed MES Scheme on Elliptic Curves
The design of the proposed SAMA relies on the ElGamal
signature scheme. Signature schemes can achieve different levels
of security; security against existential forgery under adaptive
chosen-message attacks is the maximum level of security.
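The signing and verification equations of Definition 2 can be checked directly; the sketch below (toy parameters in a multiplicative group, added for illustration; the paper instantiates the same structure on elliptic curves) implements the optimal MES variant:

import hashlib, random

p, g = 2039, 7                         # toy prime and base; real deployments use far larger groups

def h(m: bytes, r: int) -> int:        # one-way hash mapped into Z_(p-1)
    return int.from_bytes(hashlib.sha256(m + r.to_bytes(8, "big")).digest(), "big") % (p - 1)

def keygen():
    x = random.randrange(1, p - 1)     # private key x, public key y = g^x mod p
    return x, pow(g, x, p)

def sign(m: bytes, x: int):
    k = random.randrange(1, p - 1)
    r = pow(g, k, p)
    s = (r * x * h(m, r) + k) % (p - 1)    # s = r·x·h(m,r) + k mod (p-1)
    return r, s

def verify(m: bytes, r: int, s: int, y: int) -> bool:
    # accept iff g^s = r·y^(r·h(m,r)) mod p
    return pow(g, s, p) == r * pow(y, (r * h(m, r)) % (p - 1), p) % p

x, y = keygen()
r, s = sign(b"report", x)
print(verify(b"report", r, s, y))      # True
print(verify(b"forged", r, s, y))      # False with overwhelming probability

The check works because g^s = g^(r·x·h(m,r) + k) = (g^x)^(r·h(m,r)) · g^k = y^(r·h(m,r)) · r mod p, so a single equation authenticates the signature.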
The appropriate selection of an AS plays a key role in message source privacy, since the actual message source node will be hidden in the AS. Techniques are needed to prevent adversaries from tracking the message source through AS analysis in combination with local traffic analysis. Before a message is transmitted, the message source node selects an AS from the public-key list in the SS as its choice. This set should include itself, together with some other nodes. When an adversary receives a message, it may find the direction of the previous hop, or even the real node of the previous hop, but it is unable to distinguish whether the previous node is the actual source node or simply a forwarder node as long as it cannot monitor the traffic of the previous hop. Therefore, the selection of the AS should create sufficient diversity, so that it is infeasible for the adversary to find the message source based on the selection of the AS itself.

SAMA does not have the threshold problem: an unlimited number of messages can be authenticated. SAMA is a secure and efficient mechanism that generates a source anonymous message authenticator for the message m. The generation is based on the MES scheme on elliptic curves, where an elliptic curve E is defined by an equation of the form

E: y² = x³ + ax + b mod p.
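As a quick illustration, the curve-membership test implied by this equation can be written in a few lines of Python; this is a minimal sketch with small, purely illustrative parameters (a toy curve over GF(97)), not the scheme's actual curve or point arithmetic.

def on_curve(x, y, a, b, p):
    # True if (x, y) satisfies y^2 = x^3 + a*x + b (mod p)
    return (y * y - (x * x * x + a * x + b)) % p == 0

# Toy curve y^2 = x^3 + 2x + 3 over GF(97):
a, b, p = 2, 3, 97
print(on_curve(3, 6, a, b, p))   # True: 6^2 = 36 and 3^3 + 2*3 + 3 = 36
print(on_curve(3, 7, a, b, p))   # False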
B. Modified ElGamal Signature Scheme

Authentication generation algorithm (SAMA): the sender node sends the message to be transmitted to the receiver node. Generating a SAMA consists of the following steps:
1. Consider a base point on the elliptic curve.
2. Assume the private key of the sender node.
3. Calculate the public key of the sender.
4. Hash the message and convert the leftmost bits of the hash into binary format.
5. Find the signature of the message.

On the receiving side:
1. The receiver node receives the hashed message.
2. The leftmost bits of the hash are taken in decimal format.
3. If the receiver derives the same key, it is allowed to transform and access the message.
4.3 Key Management and Compromised Node Detection
The SS responsibilities include public-key storage and distribution in the WSNs. The SS will never be compromised. After deployment, a sensor node may be captured and compromised by attackers; once compromised, all information stored in the sensor node will be accessible to the attackers. The compromised node will not, however, be able to create new public keys that can be accepted by the SS. For efficiency, each public key will have a short identity, whose length is based on the scale of the WSNs.
Advantages
• Message authentication: The message receiver should be able to verify whether a received message was sent by the node that is claimed or by a node in a particular group. In other words, adversaries cannot pretend to be an innocent node and inject fake messages into the network without being detected.
• Message integrity: The message receiver should be able to verify whether the message has been modified en route by adversaries. In other words, adversaries cannot modify the message content without being detected.
• Hop-by-hop message authentication: Every forwarder on the routing path should be able to verify the authenticity and integrity of the messages upon reception.
• Identity and location privacy: Adversaries cannot determine the message sender's ID and location by analyzing the message contents or the local traffic.
• Node compromise resilience: The scheme should be resilient to node compromise attacks. No matter how many nodes are compromised, the remaining nodes can still be secure.
• Efficiency: The scheme should be efficient in terms of both computational and communication overhead.
V. SYSTEM IMPLEMENTATION

5.1 MODULES
The system can be designed using the following modules:
• Node Deployment
• Source Anonymous Message Authentication (SAMA)
• Modified ElGamal Signature (MES)
• Compromised Node Detection
5.1.1 NODE DEPLOYMENT
An inquiry node registers its information; after registration, the registered node continues with the login process.

Fig. 2. Node Deployment

5.1.2 SOURCE ANONYMOUS MESSAGE AUTHENTICATION
For each message m to be released, the message sender, or the sending node, generates a source anonymous message authenticator for the message m using its own private key.

Fig. 3. Source Anonymous Message Authentication

5.1.3 MODIFIED ELGAMAL SIGNATURE
The optimal Modified ElGamal signature (MES) scheme on elliptic curves generates signatures dynamically, enabling intermediate nodes to authenticate the message so that all corrupted messages can be detected.

Fig. 4. Modified ElGamal Signature
5.1.4 COMPROMISED NODE DETECTION
Sensor information will be delivered to a sink node, which can be co-located with the Security Server (SS). When a message is received by the sink node, the message source is hidden in an Ambiguity Set (AS). When a bad or meaningless message is received by the sink node, the source node is viewed as compromised. The compromised node will then not be able to create new public keys that can be accepted by the SS.

Fig. 5. Compromised Node Detection
5.2 MES SCHEME ON ELLIPTIC CURVES
Let p > 3 be an odd prime. An elliptic curve E is defined by an equation of the form E: y² = x³ + ax + b mod p [7].

Signature generation algorithm
The MES can have many variants. For the purpose of efficiency, we describe the variant called the optimal scheme:
1. Choose a random k such that 0 < k < p−1 and gcd(k, p−1) = 1.
2. Compute r ≡ g^k (mod p).
3. Compute s ≡ (H(m) − xr)k⁻¹ (mod p−1).
4. If s = 0, start over again.
The pair (r, s) is then the digital signature of m. The signer repeats these steps for every signature.

Authentication generation algorithm
Suppose m is a message to be transmitted. To generate an efficient SAMA for message m, Alice applies the MES signing steps above using her own private key.

A signature (r, s) of a message m is verified as follows:
1. Check that 0 < r < p and 0 < s < p−1.
2. Check that g^H(m) ≡ y^r · r^s (mod p).
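The signing and verification recipes above can be checked end to end with a small Python sketch. This is a minimal illustration of the plain modular ElGamal steps with a toy prime and SHA-256 standing in for H; the paper's MES variant carries the same structure over to elliptic curves, which this sketch does not attempt. (pow(k, -1, p - 1) for the modular inverse requires Python 3.8 or later.)

import hashlib
import math
import random

p, g = 467, 2            # toy parameters; real deployments use large primes
x = 127                  # signer's private key
y = pow(g, x, p)         # signer's public key

def H(m):
    return int.from_bytes(hashlib.sha256(m).digest(), "big")

def sign(m):
    while True:
        k = random.randrange(2, p - 1)
        if math.gcd(k, p - 1) != 1:
            continue                                         # step 1: k must be invertible
        r = pow(g, k, p)                                     # step 2
        s = ((H(m) - x * r) * pow(k, -1, p - 1)) % (p - 1)   # step 3
        if s != 0:                                           # step 4: if s = 0, start over
            return r, s

def verify(m, r, s):
    if not (0 < r < p and 0 < s < p - 1):                    # verification check 1
        return False
    return pow(g, H(m) % (p - 1), p) == (pow(y, r, p) * pow(r, s, p)) % p

r, s = sign(b"hello")
print(verify(b"hello", r, s))    # True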
VI. PERFORMANCE EVALUATION

A. Simulation model and parameters

TABLE 1. PROCESS TIME FOR THE EXISTING (POLYNOMIAL-BASED) SCHEME

Polynomial-based    dx,dy=80           dx,dy=100
approach            Gen     Verify     Gen     Verify
L=24                9.31    0.25       14.45   0.31
L=32                12.95   0.33       20.05   0.41
L=40                13.32   0.35       20.57   0.44
L=64                21.75   0.57       33.64   0.71
To evaluate the performance of the proposed system, we compare it with some existing techniques using the NS-2 simulator. The bivariate polynomial-based scheme is a symmetric-key-based implementation, while the proposed scheme is based on ECC. Assuming the key size is l for the symmetric-key cryptosystem, the key size for the proposed scheme should be 2l, which is much shorter than in a traditional public-key cryptosystem. The simulation parameters are used in simulating the proposed system. Table 1 shows the process time for the existing scheme and Table 2 shows the process time for the proposed scheme. The ECC scheme is compared against the polynomial-based scheme and has provided positive results.

TABLE 2. PROCESS TIME FOR THE PROPOSED SCHEME

Proposed            n=1                n=10
approach            Gen     Verify     Gen     Verify
L=24                0.24    0.53       4.24    2.39
L=32                0.34    0.80       5.99    3.32
L=40                0.46    1.05       8.03    4.44
L=64                1.18    1.77       20.53   11.03

B. Performance Metrics
Packet delivery ratio (PDR): the ratio of the number of packets received by the destination node to the number of packets sent by the source node. Routing overhead (RO): the ratio of the amount of routing-related transmissions [Route REQuest (RREQ), Route REPly (RREP), Route ERRor (RERR), ACK, S-ACK, and MRA]. Delay: the interarrival time of the 1st and 2nd packets relative to the total data packets delivered.
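As a small illustration of these metric definitions, the sketch below computes PDR and RO from hypothetical packet counters; in practice the counters would be parsed from NS-2 trace files, which this sketch does not attempt. The normalization of RO by delivered data packets is one common convention, assumed here.

def packet_delivery_ratio(received, sent):
    # PDR: packets received by the destination / packets sent by the source
    return received / sent

def routing_overhead(control_packets, data_packets):
    # RO: routing-related transmissions (RREQ, RREP, RERR, ACK, S-ACK, MRA)
    # relative to the delivered data packets (assumed normalization)
    return control_packets / data_packets

print(packet_delivery_ratio(received=940, sent=1000))            # 0.94
print(routing_overhead(control_packets=150, data_packets=940))   # ~0.16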
C. Results
The enhanced message authentication scheme is evaluated by comparing it with other existing algorithms using the NS-2 simulator. Fig. 6 shows the packet delivery ratio of the proposed method against the other existing methods.

Fig. 6. Packet Delivery Ratio
Fig. 7. Networ

VII. CONCLUSION
Message authentication has always been a major concern for security in wireless sensor networks. A novel and efficient source anonymous message authentication scheme based on ECC provides message content authenticity, and hop-by-hop message authentication is provided without the weakness of the built-in threshold of the polynomial-based scheme. The ECC-based SAMA was compared against other popular mechanisms in different scenarios through simulations and TelosB experiments. Simulation results indicate that it greatly increases the effort of an attacker, but it requires proper models for every application. The proposed scheme is more
efficient than the bivariate polynomial-based scheme in terms of computational overhead, energy consumption, delivery ratio, message delay, and memory consumption.
VIII. REFERENCES
1. Jian Li, Yun Li, Jian Ren, and Jie Wu, "Hop-by-Hop Message Authentication and Source Privacy in Wireless Sensor Networks," IEEE Transactions on Parallel and Distributed Systems, pp. 1-10, 2013.
2. S. Zhu, S. Setia, S. Jajodia, and P. Ning, "An interleaved hop-by-hop authentication scheme for filtering false data in sensor networks," in IEEE Symposium on Security and Privacy, 2004.
3. C. Blundo, A. De Santis, A. Herzberg, S. Kutten, U. Vaccaro, and M. Yung, "Perfectly-secure key distribution for dynamic conferences," in Advances in Cryptology - CRYPTO '92, ser. Lecture Notes in Computer Science, vol. 740, 1992, pp. 471-486.
4. F. Ye, H. Lou, S. Lu, and L. Zhang, "Statistical en-route filtering of injected false data in sensor networks," in IEEE INFOCOM, March 2004.
5. M. Albrecht, C. Gentry, S. Halevi, and J. Katz, "Attacking cryptographic schemes based on 'perturbation polynomials'," Cryptology ePrint Archive, Report 2009/098, 2009.
6. D. Pointcheval and J. Stern, "Security proofs for signature schemes," in Advances in Cryptology - EUROCRYPT, ser. Lecture Notes in Computer Science, vol. 1070, 1996, pp. 387-398.
7. D. Pointcheval and J. Stern, "Security arguments for digital signatures and blind signatures," Journal of Cryptology, vol. 13, no. 3, pp. 361-396, 2000.
8. D. Pointcheval and J. Stern, "Security Arguments for Digital Signatures and Blind Signatures," J. Cryptology, vol. 13, no. 3, pp. 361-396, 2000.
9. R. Rivest, A. Shamir, and L. Adleman, "A method for obtaining digital signatures and public-key cryptosystems," Communications of the ACM, vol. 21, no. 2, pp. 120-126, 1978.
10. T. A. ElGamal, "A public-key cryptosystem and a signature scheme based on discrete logarithms," IEEE Transactions on Information Theory, vol. 31, no. 4, pp. 469-472, 1985.
11. H. Wang, S. Sheng, C. Tan, and Q. Li, "Comparing symmetric-key and public-key based security schemes in sensor networks: A case study of user access control," in IEEE ICDCS, Beijing, China, 2008, pp. 11-18.
12. A. Perrig, R. Canetti, J. Tygar, and D. Song, "Efficient authentication and signing of multicast streams over lossy channels," in IEEE Symposium on Security and Privacy, May 2000.
13. W. Zhang, N. Subramanian, and G. Wang, "Lightweight and compromise resilient message authentication in sensor networks," in IEEE INFOCOM, Phoenix, AZ, April 15-17, 2008.
14. D. Chaum, "Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms," Comm. ACM, vol. 24, no. 2, pp. 84-88, Feb. 1981.
15. D. Chaum, "The Dining Cryptographers Problem: Unconditional Sender and Recipient Untraceability," J. Cryptology, vol. 1, no. 1, pp. 65-75, 1988.
16. A. Pfitzmann and M. Hansen, "Anonymity, Unlinkability, Unobservability, Pseudonymity, and Identity Management - a Proposal for Terminology," http://dud.inf.tu-dresden.de/literatur/Anon_Terminology_v0.31.pdf, Feb. 2008.
17. A. Pfitzmann and M. Waidner, "Networks without User Observability - Design Options," Proc. Advances in Cryptology (EUROCRYPT), vol. 219, pp. 245-253, 1985.
18. M. Reiter and A. Rubin, "Crowds: Anonymity for Web Transactions," ACM Trans. Information and System Security, vol. 1, no. 1, pp. 66-92, 1998.
19. M. Waidner, "Unconditional Sender and Recipient Untraceability in Spite of Active Attacks," Proc. Advances in Cryptology (EUROCRYPT), pp. 302-319, 1989.
20. D. Pointcheval and J. Stern, "Security Arguments for Digital Signatures and Blind Signatures," J. Cryptology, vol. 13, no. 3, pp. 361-396, 2000.
21. L. Harn and Y. Xu, "Design of Generalized ElGamal Type Digital Signature Schemes Based on Discrete Logarithm," Electronics Letters, vol. 30, no. 24, pp. 2025-2026, 1994.
Efficient data Transmission in Hybrid RFID and
WSN by using Kerberos Technique
P.S.MANO1, G.KARTHIK2, ARUL SELVAN M3
Dept. of Information Technology, Kongunadu College of Engineering and Technology, Trichy, India.
Abstract— Radio frequency identification (RFID) and wireless sensor networks (WSN) have become popular in the industrial field. RFID and WSN are used to monitor and sense environmental conditions and then send the data. In this paper we propose combining RFID and WSN as Hybrid RFID and WSN (HRW), which integrates the RFID system and WSN for efficient data collection. HRW senses signals from the environmental conditions and stores the data in a database server. Readers may collect the data from the back-end server for data management. The database server uses clustering to store data of the same type in the same location, which reduces the time consumed while searching the data. We also introduce a security mechanism for data transmission, which improves performance while data are transferred to other readers. This security mechanism protects the data and prevents malicious attacks from unauthorized users. HRW achieves high performance in terms of distribution cost, communication interruption and ability, and tag-size requirement.
Index Terms— Radio frequency identification, wireless sensor
networks, distributed hash table, data routing, clustering.
I. INTRODUCTION
Radio frequency identification (RFID) and wireless sensor networks (WSN) have become very popular in the industrial field. They are used for monitoring applications under environmental conditions. A wireless sensor network (WSN) is a group of specialized transducers with a communications infrastructure for monitoring and recording conditions at diverse locations. Commonly observed parameters are temperature, humidity, pressure, wind direction and speed, brightness intensity, shaking intensity, noise intensity, and power-line voltage. RFID is a wireless technology in which radio waves are used to transfer data between RFID tags and RFID readers. RFID tags are used in many industries; they are also used to track work in progress in the environment. The RFID readers are used to store the data in the servers.

RFID tags collect data and communicate directly with RFID readers, within a particular communication range. If many tags move to a reader at the same time, they contend for the channels for information transmission; the percentage of successful tag transmissions is only 34.6 to 36.8 [1]. Such transmission in RFID data collection is not
sufficient to meet the requirements of low financial cost, high performance, and real-time large-scale mobile monitoring applications. RFID readers cannot quickly transmit data to the RFID tags due to their immobility and short communication range, so massive numbers of RFID readers would be needed to increase the coverage area and the communication transmission speed. This could cause significant cost in system deployment and design, considering the high cost of high-quality RFID readers and of the links between the RFID readers and the back-end servers, before the RFID readers can achieve efficient data transmission.
In old-fashioned RFID monitoring applications, such as airline baggage tracking, the reader is required to quickly process several tags at different distances, yet it communicates only within its particular coverage area. These kinds of problems can be avoided by using multi-hop transmission. In monitoring applications, objects are monitored through the variation of particular changes in the environment (e.g., body temperature, blood pressure), which is the most important information to retrieve about the objects. In this paper the proposed technique combines radio frequency identification (RFID) and wireless sensor networks (WSN) as Hybrid RFID with WSN (HRW), which integrates data transmission for energy-efficient data collection in large-scale monitoring of moving objects. HRW has a new type of node, called the hybrid smart node, which combines the function of RFID tags with reduced-function wireless sensors and RFID readers.
HRW mainly contains three components: smart nodes, RFID readers, and a back-end server. The RFID reader collects the information [4][6][8][12] from the smart nodes and stores the details in the back-end server. Data transmission uses the multi-hop transmission mode, in which readers wait for data received from the smart nodes. Smart nodes receive data from the readers only while active; when a node is off, it does not receive information. In a traditional WSN, a node in sleep mode can neither receive nor forward data; in HRW, a node's tag can be read even while the node is in sleep mode, which increases the transmission speed.
Fig. 1. Traditional RFID architecture
To improve the information collection, the system uses a clustering concept: cluster nodes replicate their data to the cluster the data belongs to. We also propose a tag-cleanup algorithm, which removes delivered data from the tags; it increases the available tag storage and reduces transmission overhead. While transmitting data from one smart node to another, a security mechanism is in place that prevents malicious attacks from unauthorized users.
II. RELATED WORK
A. Hybrid Smart Nodes
Hybrid RFID and WSN (HRW) is used in the existing system. It has smart nodes that integrate the RFID function and the WSN function. The smart nodes have the following components:
1) Reduced-function sensor
Normal sensors have only a transmission function, but this sensor is not only used for transmission; it also collects the environmental conditions and sensed data.
2) RFID tag
RFID tags only serve information from the storage buffer. The RFID tag receives a message and then responds with its identification and other information.
3) Reduced-function RFID reader (RFRR)
It is used for data transmission between the smart nodes. The smart nodes use the RFID reader to read other nodes' tags and write their own information. The RFRR helps in storing sensed data and monitoring the environment. Compared with plain RFID tags, HRW achieves higher performance through the RFRR in each node. Nodes with joint RFID tag and sensor functions can also use HRW for efficient data collection with RFRR modules. Smart nodes have two state modes, sleep and active. In active mode the sensor nodes collect information from the environment [4], [6]; in sleep mode they do nothing.
B. Data Transmission Process
Fig. 1 shows the architecture of RFID and Fig. 2 shows the architecture of the HRW system. The RFID architecture contains two layers, upper and lower. The upper layer is connected to the back-end servers with high-speed backbone cables. The lower layer is formed by a substantial number of article hosts that transfer data to RFID readers.

Fig. 2. HRW architecture.

In the RFID architecture, only nodes within transmission range communicate with RFID readers, using direct transmission. In the HRW architecture, nodes can exchange and replicate node details with each other; this is the major difference between the RFID and HRW architectures. Data transmissions among the RFID readers use the multi-hop transmission mode: each reader can receive data from other readers outside its particular range. HRW can collect the information and send it to readers over high-speed communication [8].

After smart node A gathers the identified data, it attaches a timestamp and stores the data in its tag through the RFRR. The process contains four steps. In step one, the sensor unit in a smart node gathers the information about its tag host. In step two, it asks the RFRR to store the information into its tag. In step three, once two nodes move into each other's transmission range, the RFRR in one node reads the information stored in the other node's tag. Finally, in step four, based on the host ID and timestamp, the node checks whether its tag has stored the information beforehand; if not, the RFRR stores the obtained information into the local tag.

The data of a node can be stored into other nodes during the exchange process, and any RFID reader that reads those nodes can deliver the data, which increases the number of readers participating in the delivery process.

When a node enters the reading range of an RFID reader, the reader reads the information in the node's tag. The first node to enter is assigned higher priority than later nodes.
TABLE I
PSEUDO CODE OF THE PROCESS OF INFORMATION REPLICATION EXECUTED BY SMART NODE i

1.  if this.state = active then
2.      Collect the sensed info of its host Hi
3.      Store(Hi, tagi)
4.      for every node j in its transmission range do
5.          if this.linkAvailable(j) then
6.              Read info Hj with timestamp > tij from tagj
7.              Store(Hj, tagi)
8.              Update timestamp tij with current time
9.          end if
10.     end for
11. end if
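For readers more comfortable with executable code, the replication loop of TABLE I can be sketched in Python as below. The node class, tag storage, and link-availability check are simplified stand-ins for the paper's smart-node hardware (RFRR and tag memory); the names are illustrative, not part of the HRW system.

import time

class SmartNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.state = "active"      # or "sleep"
        self.tag = {}              # tag memory: host id -> (info, timestamp)
        self.last_read = {}        # t_ij per neighbor j

    def sense_host(self):
        return "sensed-data-of-" + self.node_id

    def link_available(self, other):
        return True                # placeholder for the radio-range check

    def replicate(self, neighbors):
        # TABLE I: executed periodically by smart node i
        if self.state != "active":                           # line 1
            return
        now = time.time()
        self.tag[self.node_id] = (self.sense_host(), now)    # lines 2-3
        for j in neighbors:                                  # line 4
            if self.link_available(j):                       # line 5
                t_ij = self.last_read.get(j.node_id, 0.0)
                for host, (info, ts) in j.tag.items():       # lines 6-7
                    if ts > t_ij:
                        self.tag[host] = (info, ts)
                self.last_read[j.node_id] = now              # line 8

a, b = SmartNode("A"), SmartNode("B")
b.replicate([])        # B stores its own reading
a.replicate([b])       # A copies B's newer tag entries
print(sorted(a.tag))   # ['A', 'B']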
Fig. 3. System architecture of HRW

III. SYSTEM ARCHITECTURE
Figure 3 explains the Hybrid RFID and WSN system, which integrates the functions of radio frequency identification and wireless sensor networks. The system senses the environmental conditions in a particular area; it does not act only within a specified monitoring area but senses any signal variation held in the monitored event. The role of RFID in the architecture is to track the particular event. RFID consists of two components, tags and readers. Tags are attached to all objects to be identified by the RFID system. The readers communicate with the tags through the RF channel to obtain the information. A reader keeps records and stores the data in the databases. The databases hold replicated information, so if data is lost from the databases it can be recovered from the readers. The wireless sensor network is used to monitor physical environmental condition changes in the particular area. RFID and WSN together form the Hybrid RFID and WSN (HRW). The HRW function contains two components, the event manager and the RFID information server, and the communication between them is bi-directional. The event manager collects the information and stores the details of events occurring in the environment changes. The RFID information server stores the information in the back-end server.

The collected information is stored in the server, and the received information is passed to the local server, which stores the data in its database. Data of the same type are stored in the same location, which reduces the time consumption when searching for data. This method is called cluster-based transmission. Clusters can then easily be defined as objects most likely belonging to the same distribution. In the HRW system, since the data is stored in tags, active nodes can recover the information at any time from a sleeping node.
Fig. 4. Data transmission from one to another node using RFID reader.
In old-fashioned WSNs, nodes in sleep mode cannot conduct data transmission; the HRW system can therefore greatly improve packet transmission efficiency with the RFID technology.

The database is used to store the data in the local server. If a user wants to send data to another user, Internet communication is used; by itself this offers no secure process while sending data from one user to another.

The RFID reader reads the data within its coverage area. It senses the signal from the environment and transmits the data to the user's local server. A tag-cleanup algorithm is also used to clear delivered data from the sender's memory. This increases the available database memory, so larger data types can be stored in the memory allocated for particular data.
IV. SECURITY MECHANISM
The multi-hop data transmission method in HRW improves communication efficiency, but an attacker may easily access the data while it is sent from one node to another. The attacker can obtain all the information in compromised nodes and use them to obtain sensitive information and disrupt system functions. This calls for a security policy for transferring data to another node: it adds authentication and authorization when users access the data, giving authorized users secure access. In this section, we consider two security threats arising from node compromise attacks: data manipulation and data selective forwarding [10].
A. Data Privacy and Data Manipulation
In the replication process, each smart node replicates its information to other nodes. Once a node is compromised, all the information of the other nodes is visible to the attackers, which is dangerous especially in privacy-sensitive applications such as health monitoring. A malicious node can also use the gathered information and provide wrong information to the readers. It is therefore important to safeguard the confidentiality and authenticity of tag information in data transmission. The challenge in data privacy is to share the data while protecting personal information; data manipulation takes the data and arranges it into the easiest form for reading.

Protecting the tag information in data transmission requires public-key or private-key encryption, so that data can be collected or disseminated secretly. Public-key operations are too expensive for the smart nodes due to their limited computing, storage, and bandwidth resources, so we develop a symmetric-key-based security scheme in our system. In this work, we concentrate on the threats due to compromised smart nodes and assume the readers are secure. Our security system uses the Kerberos algorithm. Kerberos is a network authentication protocol that works on the basis of ticket granting: it admits nodes that hold valid tickets. In Kerberos, an authentication server forwards the user name to a key distribution server. This process never sends the secret key to the user unless it is encrypted for the user. Kerberos authentication has several benefits: it is more secure, flexible, and efficient. The key distribution server issues a ticket to the client that includes a timestamp value; the timestamp limits the ticket to a particular session, and when the session ends the ticket is no longer valid. Kerberos uses private-key encryption: encoding and decoding use the same key value. The Kerberos algorithm builds on symmetric-key authentication and requires a trusted third party.

The client login process includes two steps. In the first step, the user enters a name and password at the client. In the second step, the client transforms the password into a symmetric key. Client authentication includes three steps. In step one, the client sends a message with the user ID to the authentication server (AS).
Fig. 5. Kerberos process
The AS creates the secret key. In the second step, the AS checks whether the client is in its database; if so, the AS sends back the following messages: a ticket-granting service (TGS) key encrypted using the client's secret key, and the validity period of the ticket issued by the AS. In the third step, when the user receives the messages, they can decrypt the data; if the key does not match the one in the database, the user cannot access the data. In client service authorization, the client sends messages to the TGS; on receiving them, the TGS retrieves the message using the TGS secret key. In the client service request, after receiving the messages from the TGS, the client can access the data. We propose distributed key storage in the back-end servers to store the usable keys from the AS. We form the back-end servers into a distributed hash table (DHT). The DHT overlay supplies Insert(key, data) and Lookup(key) functions [7]. The ticket-granting process proposed in this work has the advantage of letting only known users access the data: it admits only users who hold a ticket.
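A minimal sketch of this ticket idea is shown below: an AS-side function encrypts a client identity and validity period under a key shared with the service, and the service accepts the ticket only while the session window is open. Fernet, from the third-party cryptography package, stands in for Kerberos' encryption; real Kerberos (RFC 4120) involves a TGS exchange and many more fields, and the function names here are illustrative.

import json
import time
from cryptography.fernet import Fernet

service_key = Fernet.generate_key()    # shared between the AS and the service

def issue_ticket(client_id, lifetime_s=300):
    # AS side: encrypt the client identity plus an expiry timestamp
    ticket = {"client": client_id, "expires": time.time() + lifetime_s}
    return Fernet(service_key).encrypt(json.dumps(ticket).encode())

def accept_ticket(blob):
    # Service side: decrypt and check the timestamp before granting access
    ticket = json.loads(Fernet(service_key).decrypt(blob))
    return time.time() < ticket["expires"]

t = issue_ticket("reader-17")
print(accept_ticket(t))    # True while the ticket is still valid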
B. Data Selective Forwarding
This process uses the clustering concept. Clustering is the task of grouping a set of data items into the same group; its main task here is to store the data in the same memory location. In the cluster-head-based broadcast algorithm, the cluster head among all nodes in a cluster is responsible for forwarding the tag data of all cluster members to the reader. A malicious cluster head can drop part of the data and selectively forward the collected information to the reader. Since an RFID reader may not know all the smart nodes in a head's cluster in advance, it cannot identify such attacks. To avoid the selective forwarding attack, we can implement the cluster-member-based data transmission algorithm, in which all cluster members carry the data of the other nodes in the cluster. In data selective forwarding, a particular node is selected and the data is sent to it. This reduces the transmission cost, because data is sent to a node only when requested by the original node, and it speeds up the data transmission to the other user.
V. CONCLUSION
This paper introduces Hybrid RFID with WSN (HRW), which combines multi-hop transmission with the direct data transmission mode of RFID. HRW also improves data collection by RFID readers within a particular communication range. HRW is composed of RFID readers and smart nodes. The RFID readers store the data in back-end servers. The stored data undergo clustering analysis, so that data of the same kind are stored in the same location; this reduces the time consumed when searching the data and sending it to another client. We also introduce the Kerberos security mechanism to protect the data, so that data sent from one user to another has secure transmission. Future work is to implement this scheme in the real world, counting the number of wild animals in a forest and sending the information so that authorized users can access the data from the server.
REFERENCES
[1] B.H. Bloom, "Space/Time Trade-Offs in Hash Coding with Allowable Errors," Commun. ACM, vol. 13, no. 7, pp. 422-426, July 1970.
[2] R. Clauberg, "RFID and Sensor Networks," in Proc. RFID Workshop, St. Gallen, Switzerland, Sept. 2004.
[3] J.Y. Daniel, J.H. Holleman, R. Prasad, J.R. Smith, and B.P. Otis, "NeuralWISP: A Wirelessly Powered Neural Interface with 1-m Range," IEEE Trans. Biomed. Circuits Syst., vol. 3, no. 6, pp. 379-387, Dec. 2009.
[4] D. Karger, E. Lehman, T. Leighton, M. Levine, D. Lewin, and R. Panigrahy, "Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot Spots on the World Wide Web," in Proc. STOC, 1997, pp. 654-663.
[5] C. Lee and C. Chung, "RFID Data Processing in Supply Chain Management Using a Path Encoding Scheme," IEEE Trans. Knowl. Data Eng., vol. 23, no. 5, pp. 742-758, May 2011.
[6] T. Lez and D. Kim, "Wireless Sensor Networks and RFID Integration for Context Aware Services," Auto-ID Labs, Cambridge, MA, USA, Tech. Rep., 2007.
[7] M. Li, Y. Liu, J. Wang, and Z. Yang, "Sensor Network Navigation Without Locations," in Proc. IEEE INFOCOM, 2009, pp. 2419-2427.
[8] H. Liu, M. Bolic, A. Nayak, and I. Stojmenovic, "Taxonomy and Challenges of the Integration of RFID and Wireless Sensor Networks," IEEE Trans. Netw., vol. 22, no. 6, pp. 26-35, Nov./Dec. 2008.
[9] W. Luo, S. Chen, T. Li, and S. Chen, "Efficient Missing Tag Detection in RFID Systems," in Proc. IEEE INFOCOM, 2011, pp. 356-360.
[10] K. Ren, W. Lou, and Y. Zhang, "LEDS: Providing Location-Aware End-To-End Data Security in Wireless Sensor Networks," in Proc. IEEE INFOCOM, Apr. 2006, pp. 1-12.
[11] D. Simplot-Ryl, I. Stojmenovic, A. Micic, and A. Nayak, "A Hybrid Randomized Protocol for RFID Tag Identification," Sensor Rev., vol. 26, no. 2, pp. 147-154, 2006.
[12] L. Zhang and Z. Wang, "Integration of RFID into Wireless Sensor Networks: Architectures, Opportunities and Challenging Problems," in Proc. Grid Cooperative Computing Workshop, vol. 32, pp. 433-469, 2006.
ANALYSIS OF SKIN LESIONS FOR ESTIMATING THE RISK OF MELANOMA AND ITS CLASSIFICATION

K. Manikandan1, M.E Student, Dept. of Computer Science and Engg, Annamalai University, Chidambaram, Tamil Nadu, India.
Dr. L. R. Sudha2, Assistant Professor, Dept. of Computer Science and Engg, Annamalai University, Chidambaram, Tamil Nadu, India.
Abstract: A cancer detection procedure using context knowledge has been discussed earlier but struggles with accuracy. This paper proposes the ABCD rule for cancer detection and for classifying melanoma types. The proposed method has the following steps: preprocessing, segmentation, feature extraction, melanoma detection, and melanoma classification. The input image is preprocessed to eliminate noise and enhanced using a median filter. Then the image is segmented using Otsu's threshold method. Features for analysing skin lesions are then extracted, and melanoma is detected using a Support Vector Machine (SVM). After detection, a k-nearest neighbour (KNN) classifier is used to classify the melanoma types.
Keywords: Melanoma, ABCD rule, Support vector machine.
1. INTRODUCTION
Melanoma occurs when melanocytes (pigment cells) become
malignant. Most pigment cells are in the skin. When
melanoma starts in the skin, the disease is called cutaneous
melanoma. Melanoma may also occur in the eye (ocular
melanoma or intraocular melanoma). Melanoma is one of the
most common cancers. The chance of developing it increases
with age but this disease affects people of all ages. It can
occur on any skin surface.
1.1 Types of Melanoma
Superficial Spreading Malignant Melanoma is the most common type of malignant melanoma. It may occur on any part of the body and is usually greater than 0.5 cm in diameter.
Nodular Malignant Melanoma is the next most frequent type; it is less common but more malignant. It is a raised papule or nodule, sometimes ulcerated. The outline of the lesion may be irregular and its colour varied.
Acral Lentiginous Malignant Melanoma is a very rare
tumor. It usually arises in an acral location or on a mucous
membrane and is initially flat and irregular but soon becomes
raised and subsequently nodular.
2. RELATED WORK
M. Emre Celebi [1], presented a machine learning
approach to the automated quantification of clinically
significant colors in dermoscopy images. Given a true-color
dermoscopy image with N colors, first reduce the number of colors in this image to a small number K, i.e., K ≪ N, using the K-means clustering algorithm incorporating a spatial term.
The optimal K value for the image is estimated separately
using five commonly used cluster validity criteria. Then
trained a symbolic regression algorithm using the estimates
given by these criteria, which are calculated on a set of 617
images. Finally, the mathematical equation given by the
regression algorithm is used for two-class (benign versus
malignant) classification. This approach yields a sensitivity of
62% and a specificity of 76% on an independent test set of
297 images.
Kouhei Shimizu, et al. [2], proposed a new computer-aided method for skin lesion classification applicable to
both melanocytic skin lesions (MSLs) and nonmelanocytic
skin lesions (NoMSLs). The computer-aided skin lesion
classification has drawn attention as an aid for detection of
skin cancers. Several researchers have developed methods to
distinguish between melanoma and nevus, which are both
categorized as MSL. However, most of these studies did not
focus on NoMSLs such as basal cell carcinoma (BCC), the
most common skin cancer and seborrheic keratosis (SK)
despite their high incidence rates.
Diego Caratelli, et al. [3], described the full-wave electromagnetic characterization of reconfigurable antenna sensors
for non-invasive detection of melanoma-related anomalies of
the skin. To this end, an enhanced locally conformal
finite-difference time-domain procedure based on the
definition of effective material parameters and a suitable
normalization of the electromagnetic field-related quantities is
adopted. In this way, an insightful understanding of the
physical processes responsible for the performance of
considered class of devices is achieved. This in turn is
important in order to enhance the structure reliability and
optimizing the design cycle.
Paul Wighton, et al. [4], presented a general model using
supervised learning and MAP estimation that is capable of
performing many common tasks in automated skin lesion
diagnosis and applied the model to segment skin lesions,
detect occluding hair and identify the dermoscopic structure
pigment network. Quantitative results are presented for
segmentation and hair detection and are competitive when
compared to other specialized methods. Additionally, leverage
the probabilistic nature of the model to produce receiver operating characteristic curves, show compelling visualizations of pigment networks, and provide confidence intervals on segmentations.
Aurora Sáez, et al. [5], proposed different model-based
methods for classification of global patterns of dermoscopic
images. Global pattern identification is included in the pattern
analysis framework, the melanoma diagnosis method most
used among dermatologists. The modeling is performed in
two senses: first a dermoscopic image is modeled by a finite
symmetric conditional Markov model applied to color space
and the estimated parameters of this model are treated as
features. The classification is carried out by an image retrieval
approach with different distance metrics.
Pritha Mahata [6], introduced a simple method which
provides the most consistent clusters across three different
clustering algorithms for a melanoma and a breast cancer data
set. The method is validated by showing that the Silhouette,
Dunn’s, and Davies-Bouldin’s cluster validation indices are
better for the proposed algorithm than those obtained by
k-means and another consensus clustering algorithm. The
hypotheses of the consensus clusters on both the datasets are
corroborated by clear genetic markers and 100 percent
classification accuracy was achieved.
Cheng Lu, et al. [7], proposed a novel computer-aided
technique for segmentation of the melanocytes in the skin
histopathological images. In order to reduce the local intensity
variant, a mean-shift algorithm was applied for the initial
segmentation of the image. A local region recursive
segmentation algorithm is then proposed to filter out the
candidate nuclei regions based on the domain prior
knowledge. To distinguish the melanocytes from other
keratinocytes in the epidermis area, a novel descriptor, named
local double ellipse descriptor (LDED) was proposed to
measure the local features of the candidate regions. The
LDED used two parameters: region ellipticity and local
pattern characteristics to distinguish the melanocytes from the
candidate nuclei regions. Experimental results on 28 different
histopathological images of skin tissue with different zooming
factors showed that this technique provides a superior
performance.
Brian D’Alessandro and Atam P. Dhawan [8], estimated
the propagation of light in skin by a novel voxel-based parallel
processing Monte Carlo method. This takes into account the
specific geometry of transillumination imaging apparatus.
Then used this relation in a linear mixing model, solved using
a multispectral image set for chromophore separation and
oxygen saturation estimation of an absorbing object located at
a given depth within the medium. Validation is performed
through the Monte Carlo simulation, as well as by imaging on
a skin phantom. Results showed that subsurface oxygen
saturation can be reasonably estimated with good implications
for the reconstruction of 3-D skin lesion volumes using
transillumination toward early detection of malignancy.
Rahil Garnavi, et al. [9], presented a novel computer-aided diagnosis system for melanoma. The novelty lies in the
optimized selection and integration of features derived from
textural, border-based and geometrical properties of the
melanoma lesion. The texture features are derived from using
wavelet-decomposition, the border features are derived from
constructing a boundary-series model of the lesion border and
analyzed it in spatial and frequency domains and the
geometry features are derived from shape indexes. The
optimized selection of features is achieved by using the
gain-ratio method which is shown to be computationally
efficient for melanoma diagnosis application. Classification is
done through the use of four classifiers; namely, support
vector machine, random forest, logistic model tree and hidden
naive Bayes. Important findings included the clear advantage
gained in complementing texture with border and geometry
features compared to using texture information only and
higher contribution of texture features than border-based
features in the optimized feature set.
3. PROPOSED METHOD
The proposed system consists of five levels: (i) Preprocessing, (ii) Segmentation, (iii) Feature Extraction, (iv) Melanoma Detection, and (v) Melanoma Classification. The first level starts with eliminating noise and enhancing the image using a median filter. The second level applies Otsu's threshold method, which segments the image. In the third level, features are extracted from the segmented image using the ABCD rule. In the fourth level, malignant skin lesion images are detected from the extracted features using a Support Vector Machine (SVM). After melanoma detection, GLCM features are extracted from the malignant images to classify the type of melanoma. The block diagram of the proposed system is given in Fig. 3.1. Each level of the system is described in detail below.

Fig. 3.1 Block Diagram of the Proposed System

3.1 Preprocessing
Before any detection or analysis of the images, preprocessing must be applied to prepare suitable images for further processing. Since skin lesion images may contain hairs, skin marks, skin background, and other noise acquired during photography or digitization, preprocessing is applied to the skin lesion images. A median filter is used for preprocessing in the proposed system; it is an effective method for suppressing isolated noise without blurring sharp edges.

3.2 Segmentation
Image segmentation is an important process for most image analysis tasks. It divides the image into its constituent regions or objects. Image segmentation is performed using Otsu's thresholding method. Otsu's method automatically performs clustering-based image thresholding, i.e., the reduction of a gray-level image to a binary image. The algorithm assumes that the image contains two classes of pixels following a bimodal histogram (foreground pixels and background pixels). It then calculates the optimum threshold separating the two classes so that their combined spread (intra-class variance) is minimal. This method yields an effective algorithm.
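A minimal sketch of these two levels using scikit-image is given below; it assumes scikit-image is installed and that lesion.jpg is an RGB lesion image (the filename and filter radius are illustrative).

from skimage import io, img_as_ubyte
from skimage.color import rgb2gray
from skimage.filters import median, threshold_otsu
from skimage.morphology import disk

image = img_as_ubyte(rgb2gray(io.imread("lesion.jpg")))

# Median filtering suppresses isolated noise without blurring sharp edges.
denoised = median(image, disk(3))

# Otsu's method picks the threshold that minimizes intra-class variance.
t = threshold_otsu(denoised)
mask = denoised < t    # lesions are typically darker than the skin background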
3.3 Feature Extraction
In this section we examine the features, i.e., the visual cues, that are used for skin lesion characterization. The diagnosis method for melanoma skin cancer uses the ABCD rule, and GLCM features are used for melanoma classification.

The ABCD rule investigates the Asymmetry (A), Border (B), Color (C), and Diameter (D) of the lesion and defines the basis for a diagnosis by a dermatologist. Most moles on a person's body look similar to one another. If a mole or freckle looks different from the others, or has any characteristics of the ABCDs of melanoma, it should be checked by a dermatologist; it could be cancerous. The ABCDs are important characteristics to consider when examining moles or other skin growths. The features are Solidity, EquivDiameter, Extent, Eccentricity, MajorAxisLength, Mean, Variance, and the Minimum and Maximum of R, G, B.

A statistical method of examining texture that considers the spatial relationship of pixels is the gray-level co-occurrence matrix (GLCM), also known as the gray-level spatial dependence matrix. The GLCM functions characterize the texture of an image by calculating how often pairs of pixels with specific values in a specified spatial relationship occur in an image, creating a GLCM, and then extracting statistical measures from this matrix. After creating the GLCMs, several statistics can be derived from them using the graycoprops function. These statistics provide information about the texture of the image: Contrast, Correlation, Energy, and Homogeneity.
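The GLCM statistics above can be computed with scikit-image, whose graycomatrix/graycoprops functions mirror MATLAB's graycomatrix/graycoprops (older scikit-image releases spell them greycomatrix/greycoprops). A minimal sketch on stand-in data:

import numpy as np
from skimage.feature import graycomatrix, graycoprops

# `patch` stands in for an 8-bit grayscale lesion region.
patch = np.random.randint(0, 256, (64, 64), dtype=np.uint8)

glcm = graycomatrix(patch, distances=[1], angles=[0],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop)[0, 0]
            for prop in ("contrast", "correlation", "energy", "homogeneity")}
print(features)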
3.4 Melanoma Detection
After extracting the features, a Support Vector Machine (SVM) is used to detect malignant images. The goal of the SVM is to find the optimal hyperplane by minimizing an upper bound of the generalization error through maximizing the margin, the distance between the separating hyperplane and the data. A number of kernels can be used in Support Vector Machine models, including linear, polynomial, radial basis function (RBF), and sigmoid.
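A minimal scikit-learn sketch of this detection step is shown below, on synthetic stand-in features (the real system would use the ABCD features extracted above); kernel="poly" with degree 2 plays the role of the quadratic kernel reported in the results.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 13))               # stand-in ABCD feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # stand-in benign(0)/malignant(1)

clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=2))
clf.fit(X, y)
print(clf.predict(X[:5]))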
3.5 Melanoma Classification
Classification is a technique to detect dissimilar texture regions of the image based on its features. After melanoma detection, the melanoma type is determined using the k-nearest neighbour (KNN) algorithm. KNN is one of the simplest yet most widely used machine learning algorithms. If k = 1, the algorithm simply becomes the nearest neighbour algorithm, and the object is classified to the class of its nearest neighbour. The advantages of the KNN classifier are that it is analytically tractable, simple to implement, and uses local information, which can yield highly adaptive behaviour.
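The classification step can be sketched with scikit-learn's KNeighborsClassifier on stand-in GLCM features, as below; with n_neighbors=1 it reduces to the nearest-neighbour rule mentioned above.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 4))       # contrast, correlation, energy, homogeneity
y = rng.integers(0, 3, size=120)    # stand-in labels for the 3 melanoma types

knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X, y)
print(knn.predict(X[:3]))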
Fig. 4.1 Original Image
Fig. 4.2 Gray Scale Image
4. EXPERIMENTAL RESULTS AND ANALYSIS
4.1 Dataset
The database for the experiment contains 70 skin lesion images taken from the DermnetNZ database. The size of the images is 137 x 103 pixels and the image format is .jpeg. The sample database is split into training and test sets for melanoma detection. We trained the SVM classifier using 25 normal images and 25 abnormal images; the trained classifier is then tested with 10 normal images and 10 abnormal images.
4.2 Experimental Results
Experimental results of the proposed system are given below. Fig. 4.1 shows the original image, Fig. 4.2 the grayscale image, Fig. 4.3 the median-filtered image, and Fig. 4.4 the segmented image.
Fig. 4.3 Median Filtered Image
Fig. 4.4 Segmented Image
4.3 Performance Metrics of the System
4.3.1 Melanoma Detection
The correct classification rate (CCR) is the most obvious accuracy measure to evaluate the performance of a classification system. For melanoma detection, we achieved a classification rate of 95% with the quadratic kernel function. The performance of the SVM using various kernel functions is shown in Table 4.1.
Table 4.1. Performance of SVM for various kernel functions

KERNEL FUNCTION    CCR
LINEAR             80%
QUADRATIC          95%
POLYNOMIAL         90%
RBF                90%
MLP                55%
The other performance measures used to assess classifier performance are sensitivity, specificity, and accuracy, calculated using the confusion matrix. For a binary classification model, which classifies each instance into one of two classes (a true and a false class), the confusion matrix is a table with two rows and two columns that reports the number of false positives (FP), false negatives (FN), true positives (TP), and true negatives (TN). This allows more detailed analysis than the mere proportion of correct guesses (accuracy). For supervised learning with two possible classes, all measures of performance are based on these four numbers obtained from applying the classifier to the test set. In our system TP = 10, TN = 9, FP = 1, FN = 0. Using these values, accuracy, sensitivity, and specificity are calculated, and the results are tabulated in Table 4.2.
Table 4.2. Performance Measures Result

Measures       Values (%)
Accuracy       95
Sensitivity    100
Specificity    90
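The Table 4.2 values follow directly from the confusion-matrix counts reported above, as this quick check shows:

TP, TN, FP, FN = 10, 9, 1, 0

accuracy    = (TP + TN) / (TP + TN + FP + FN)   # 0.95
sensitivity = TP / (TP + FN)                    # 1.00
specificity = TN / (TN + FP)                    # 0.90

print(accuracy, sensitivity, specificity)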
4.3.2 Melanoma Classification
In melanoma classification, we have considered 3 types
of melanoma. We have achieved a classification rate of 80%.
We have trained KNN classifier by using 40 images in each
type. Then the trained classifier is tested with 15 images of
each type.
5. CONCLUSION
We have proposed an automated system to classify skin
lesion images for cancer detection. In this work two classes of
skin lesion images, namely benign and malignant, are taken.
These skin lesion images are collected from DermnetNZ
database. After preprocessing and segmentation, features for
melanoma detection are extracted by the ABCD rule. These extracted features are fed into the SVM classifier. The
performances are measured and the overall accuracy rate of
SVM is 95.00%. The SVM classifier with Quadratic kernel
function gives the better accuracy. After melanoma detection,
melanoma types are classified as Superficial spreading
melanoma, Acral lentiginous melanoma and Nodular
melanoma using K-NN classifier. The performances are
measured and the overall accuracy rate of KNN is 80.00%.
REFERENCES
[1] M. Emre Celebi,Azaria Zornberg, "Colors in Dermoscopy
Images and Its Application to Skin Lesion Classification",
IEEE SYSTEMS JOURNAL,Vol.8, No.3, pp. 980-984, Sep
2014.
[2] Kouhei Shimizu, et al. “Four-Class Classification of Skin
Lesions With Task Decomposition Strategy”, IEEE
TRANSACTIONS ON BIOMEDICAL ENGINEERING,
Vol.62, No.1, pp. 274-283, Jan 2015.
[3] Diego Caratelli, Alessandro Massaro, Roberto Cingolani
and
Alexander G. Yarovoy, “Accurate Time-Domain
Modeling of Reconfigurable Antenna Sensors for
Non-Invasive Melanoma Skin Cancer Detection”, IEEE
SENSORS JOURNAL, Vol.12, No.3, pp. 635-643, Mar 2012.
[4] Paul Wighton, Tim K. Lee, Harvey Lui, David I. McLean
and M. Stella Atkins, "Generalizing Common Tasks in
Automated Skin Lesion Diagnosis", IEEE TRANSACTIONS
ON INFORMATION TECHNOLOGY IN BIOMEDICINE,
Vol.15, No.4, pp. 622-629, Jul 2011.
www.internationaljournalssrg.org
Page 33
International Conference on Engineering Trends and Science & Humanities (ICETSH-2015)
[5] Aurora Sáez, Carmen Serrano and Begoña Acha,
"Model-Based Classification Methods of Global Patterns in
Dermoscopic Images", IEEE TRANSACTIONS ON
MEDICAL IMAGING , Vol.33, No.5, pp. 1137-1147, May
2014.
[6] Pritha Mahata, “Exploratory Consensus of Hierarchical
Clusterings for Melanoma and Breast Cancer”, IEEE/ACM
TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND
BIOINFORMATICS, Vol.7, No.1, pp.138-152, Jan-Mar 2010.
[7] Cheng Lu, Muhammad Mahmood, Naresh Jha and Mrinal
Mandal, "Automated Segmentation of the Melanocytes in
Skin Histopathological Images", IEEE JOURNAL OF
BIOMEDICAL AND HEALTH INFORMATICS, Vol.17,
No.2, pp.284-296, Mar 2013.
[8] Brian D’Alessandro and Atam P. Dhawan,
“Transillumination Imaging for Blood Oxygen Saturation
Estimation of Skin Lesions”, IEEE TRANSACTIONS ON
BIOMEDICAL ENGINEERING, Vol.59, No.9, pp.2660-2667,
Sep 2012.
[9] Rahil Garnavi, Mohammad Aldeen and James Bailey,
“Computer-Aided Diagnosis of Melanoma Using Border- and
Wavelet-Based Texture Analysis”, IEEE TRANSACTIONS
ON INFORMATION TECHNOLOGY IN BIOMEDICINE,
Vol.16, No. 6,pp. 1239-1252, Nov 2012.
AN EFFICIENT HARDWARE IMPLEMENTATION OF AES
USING MICROBLAZE SOFTCORE PROCESSOR
R.KOTHANDAPANI
Roever Engineering College, Perambalur, India
Abstract— This paper demonstrates a hardware implementation to enhance the secrecy of confidential data communication. An Advanced Encryption Standard S-Box capable of resisting side-channel attacks is presented. A specified SCA-standard-evaluation field-programmable gate array (FPGA) board (SASEBO-GII) is used to implement both synchronous and asynchronous S-Box designs. The asynchronous S-Box is based on self-timed logic referred to as null convention logic (NCL), which supports several beneficial properties for resisting SCAs: clock-free operation, dual-rail encoding, and monotonic transitions. These beneficial properties make it difficult for an attacker to decipher secret keys embedded within the cryptographic circuit of the FPGA board. By using NCL, differential power analysis (DPA) and correlation power analysis (CPA) attacks are avoided. This paper enhances secrecy by reducing side-channel leakage.

Keywords— Correlation power analysis (CPA), differential power analysis (DPA), energy consumption, field-programmable gate array (FPGA) implementation, instrumentation and measurement, null convention logic (NCL), power/noise measurement, security, side channel attack (SCA), substitution box (S-Box).
I. INTRODUCTION
Crypto hardware devices that have enhanced security measures while being energy efficient are in high demand. To meet this demand for low-power devices with high-security features, researchers generally focus on the cryptographic algorithm actually implemented in the hardware itself to encrypt and decrypt information. However, such designs are fundamentally based on synchronized circuits, which either require a precise control of timing or suffer from timing-related issues, such as glitches, hazards, and early propagation, which could still leak some side-channel information to attackers. Our proposed null-convention-logic-based substitution box design provides the important security properties: asynchronous operation, dual-rail encoding, and an intermediate state.

Cryptography is the practice and study of hiding information. Applications of cryptography include ATM cards, computer passwords, and electronic commerce. Until modern times, cryptography referred almost exclusively to encryption, the process of converting ordinary information (plaintext) into unintelligible gibberish (ciphertext); decryption is the reverse, moving from the unintelligible ciphertext back to plaintext. A cipher is a pair of algorithms which create the encryption and the reversing decryption. The detailed operation of a cipher is controlled both by the algorithm and, in each instance, by a key: a secret parameter (ideally known only to the communicants) for a specific message exchange context. Keys are important, as ciphers without variable keys are trivially breakable and therefore less than useful for most purposes. Historically, ciphers were often used directly for encryption or decryption without additional procedures such as authentication or integrity checks.
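As a software illustration of the encrypt/decrypt round trip just described (the paper itself targets a hardware FPGA implementation), the sketch below uses AES-256 in CTR mode from the third-party cryptography package:

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)       # 256-bit secret key: the cipher's secret parameter
nonce = os.urandom(16)

cipher = Cipher(algorithms.AES(key), modes.CTR(nonce))
ciphertext = cipher.encryptor().update(b"ordinary information (plaintext)")
plaintext = cipher.decryptor().update(ciphertext)
print(plaintext)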
The National Institute of Standards and
Technology (NIST) selected the Rijndael algorithm
as the new Advanced Encryption Standard (AES) in
2001. Numerous FPGA and ASIC implementations
of the AES were previously proposed and evaluated.
To date, most implementations feature high speeds
and high costs suitable for high-end applications
only. The need for secure electronic data exchange
will become increasingly important.
Therefore, the AES must be extended to low-end
customer products, such as PDAs, wireless devices,
and many other embedded applications. In order to
achieve this goal, the AES implementations must
become very inexpensive. Most of the low-end
applications do not require high encryption speeds.
Current wireless networks achieve speeds
up to 60 Mbps. Implementing security protocols,
even for those low network speeds, significantly
increases the requirements for computational power.
For example, the processing power requirements for
AES encryption at the speed of 10 Mbps are at the
level of 206.3 MIPS. In contrast, a state-of-the-art
handset processor is capable of delivering
approximately 150 MIPS at 133 MHz, and 235
MIPS at 206 MHz. This project attempts to bridge the performance and cost requirements of embedded applications. As a result, a low-cost AES implementation for FPGA devices, capable of supporting most embedded applications, was developed and evaluated. Accurate measurement and estimation of the side-channel outputs are the key to a successful attack. The measurement should be based on a hardware gate-level approach rather than software instruction-level estimation. In addition, for the power consumption measurement, the focus is the dynamic power dissipated during transistor switching rather than the static leakage power.
Synchronous logic with clocked structures has dominated digital design over the past decades. As feature sizes decrease and the operating frequencies of integrated circuits (ICs) increase, clock-related issues become more serious, such as clock skew, increased power at the clock edges, extra area and layout complexity for clock distribution networks, and glitches. These issues motivate research into asynchronous (i.e., clockless) logic design, which has the benefit of eliminating all clock-related issues. Consequently, securing cryptographic devices against various side-channel attacks (SCAs) has become a very attractive research topic in recent years, along with the developments of information technologies.
SCAs explore the security information (i.e., secret
keys) by monitoring the emitted outputs from
physical cryptosystems. Advanced Encryption
Standard (AES) was announced with the intention of
being a faster and more secure encryption algorithm
over others since its algorithm is comprised of
multiple processes used to encrypt information with
supports of up to 256-bit key and block sizes, making an exhaustive search over all 2^256 key possibilities infeasible. The hardware implementation of AES also has inherently higher reliability than software, since it is difficult for attackers to read or modify and is less prone to reverse engineering. Most of the countermeasures designed for hardware implementations of AES are based on securing the logic cells to balance the power consumption of the system and make it independent of the data being processed; this process of adjusting the basic units of the system makes the overall design less vulnerable to attacks. Asynchronous circuits, on the other hand, have natural advantages in terms of SCA resistance. The clock-related information leakage can be either eliminated or significantly reduced, which greatly increases the difficulty of attack due to the lack of timing references.
II. SYNCHRONOUS LOGIC
These countermeasures can be separated into two categories based on the framework of the circuit they are implemented on: synchronous and asynchronous. The countermeasures for synchronous circuits include sense-amplifier-based logic, an improved two-spacer alternating dual-rail circuit, wave dynamic differential logic, a dynamic voltage and frequency switching approach, masked logic styles using the Fourier transform, random switching logic with its simplified version called dual-rail random-switching logic, and the recently proposed masked dual-rail precharged logic and its improved version. These works are centered on resisting DPA attacks and introduce methods to effectively reduce their impact. However, they are fundamentally based on synchronous circuits, which either require precise control of timing or suffer from timing-related issues, such as glitches, hazards, and early propagation, which can still leak side-channel information to attackers. Asynchronous circuits, on the other hand, have natural advantages in terms of SCA resistance. The clock-related information leakage can be either eliminated or significantly reduced, which greatly increases the difficulty of attack due to the lack of timing references. The countermeasures based on asynchronous circuits are the balanced delay-insensitive method, the Globally-Asynchronous Locally-Synchronous (GALS) system module, and the 1-of-n data-encoded speed-independent circuit.
In synchronous logic circuits, an electronic oscillator generates a repetitive series of equally spaced pulses called the clock signal. The clock signal is applied to all the memory elements in the circuit, called flip-flops. The outputs of the flip-flops change only when triggered by the edge of the clock pulse, so changes to the logic signals throughout the circuit all begin at the same time, at regular intervals synchronized by the clock. The outputs of all the memory elements in a circuit together form the state of the circuit. The state of a synchronous circuit changes only on the clock pulse. A change in signal requires a certain amount of time to propagate through the combinational logic gates of the circuit; this is called the propagation delay. The period of the clock signal is made long enough so that the outputs of all the logic gates have time to settle to stable values before the next clock pulse. As long as this condition is met, synchronous circuits operate stably, so they are easy to design.
However, a disadvantage of synchronous
circuits is that they can be slow. The maximum
possible clock rate is determined by the logic path
with the longest propagation delay, called the
critical path. So logic paths that complete their
operations quickly are idle much of the time.
Another problem is that the widely distributed clock
signal takes a lot of power, and must run whether
the circuit is receiving inputs or not.
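As a concrete illustration of these timing constraints, the short Python calculation below derives the maximum clock rate from the critical-path delay; the delay numbers are assumed for illustration and are not taken from the design in this paper.

# Hypothetical delays along the critical path of a synchronous circuit.
critical_path_ns = 4.0   # longest combinational propagation delay
setup_time_ns = 0.5      # register setup time that must also be met

min_period_ns = critical_path_ns + setup_time_ns
max_clock_mhz = 1e3 / min_period_ns   # 1 / 4.5 ns is roughly 222 MHz
print(f"Maximum clock rate = {max_clock_mhz:.0f} MHz")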
Fig. 1. (a) Combinational S-Box architecture with encryption and decryption data paths. (b) Block diagram of multiplicative inversion, where MM is modular multiplication and XOR is the EXCLUSIVE-OR operation.
III. NCL AES S-BOX DESIGN
The Advanced Encryption Standard is the
most widely used symmetric-key algorithm standard
in different security protocols. The AES algorithm
consists of a number of rounds that are dependent on
the key size. For both cipher and inverse cipher of
the AES algorithm, each round consists of linear
operation (i.e., ADD ROUNDKEY, SHIFTROWS,
and MIXCOLUMNS steps) and nonlinear operation
(i.e., SUBBYTES step). SUBBYTES step is the first
step of AES round. Each byte in the array is updated
by an 8-bit S-Box, which is derived from the
multiplicative inverse. The AES S-Box is
constructed by combining the inverse function with
an invertible affine transformation in order to avoid
attacks based on mathematics. The S-Box is one of the most critical components in the implementation of AES hardware. NCL is a delay-insensitive (DI) asynchronous (i.e., clockless) paradigm, which means that NCL circuits operate correctly regardless of when circuit inputs become available; therefore, NCL circuits are said to be correct-by-construction (i.e., no timing analysis is necessary for correct operation). NCL circuits utilize dual-rail or quad-rail logic to achieve delay insensitivity. The two rails are mutually exclusive, such that both rails can never be asserted simultaneously. From the existing countermeasures, we find that dual-rail encoding with the precharge method, spacers, or return-to-zero (RTZ) protocols is frequently used in both synchronous and asynchronous designs.
Fig. 2 Single bit dual rail register
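The dual-rail convention can be modeled in a few lines of Python. This is a behavioral illustration of the encoding only, not the hardware register of Fig. 2: each bit rides on two mutually exclusive rails, with DATA0 = 01, DATA1 = 10, and NULL = 00 as the intermediate spacer.

def dual_rail_encode(byte_val: int) -> list:
    # Encode an 8-bit value as (rail1, rail0) pairs, MSB first:
    # bit 0 -> (0, 1) and bit 1 -> (1, 0); both rails are never asserted together.
    return [(bit, 1 - bit) for bit in ((byte_val >> i) & 1 for i in range(7, -1, -1))]

def dual_rail_decode(pairs: list) -> int:
    value = 0
    for rail1, rail0 in pairs:
        assert (rail1, rail0) != (1, 1), "rails are mutually exclusive"
        value = (value << 1) | rail1
    return value

NULL_WAVE = [(0, 0)] * 8   # the NULL spacer wave separating successive DATA waves
assert dual_rail_decode(dual_rail_encode(0x0B)) == 0x0B

Encoding 0x0B, for example, yields 01 01 01 01 10 01 10 10, the same dual-rail word that appears in the functional verification of Section IV.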
The dual-rail encoding provides better data independence from the power consumption since the Hamming weights (HWs) of each data set are the same. An RTZ protocol, a spacer, or the precharge method is used to achieve the monotonic transitions that enhance security. Our proposed null-convention-logic-based (NCL) substitution box (S-Box) design essentially matches all these important security properties: asynchronous operation, dual-rail encoding, and an intermediate state (i.e., NULL). Unlike other asynchronous designs, NCL adheres to monotonic transitions between DATA (i.e., data representation) and NULL (i.e., control representation) and utilizes dual-rail and quad-rail signaling methods to achieve delay insensitivity. This significantly reduces the design complexity. With the absence of a clock, the NCL system is proven to reduce power consumption, noise, and electromagnetic interference. Furthermore, we have demonstrated that NCL can also resist SCAs without concern for glitches and power-supply variations. In addition to the DPA attack, a CPA attack has also been applied to both the synchronous and the NCL S-Box designs to demonstrate that the proposed NCL S-Box is capable of resisting CPA attacks as well.
IV. FUNCTIONAL VERIFICATION OF THE
PROPOSED NCL S-BOX DESIGN
The initial value of the input and that of the output are NULL and DATA0, respectively. Previous input registers are reset to NULL and output registers are reset to DATA0. As soon as the reset falls to 0, Ko from the output register becomes 1, and Ki for the input register connected to Ko becomes 1. As Ki rises, the input is changed to the waiting input signal 01 01 01 01 01 01 01 01 in dual-rail signaling, which means 00000000 in binary and 0x00 in hexadecimal. As every bit of the output signal changes from NULL to DATA, Ko falls to 0, which means that the output register has received the proper output DATA wave. Every single component (i.e., the affine and inverse affine transformations and the multiplicative inversion) has been verified separately.
In the NCL S-Box output column, the results are shown as 16 bits, which are the extended dual-rail signals. For example, for input 158, the NCL S-Box output is 01 01 01 01 10 01 10 10, and this dual-rail encoded data word is equivalent to 00001011 in binary, which is equal to the output of the conventional synchronous S-Box. Since the key is 11010100, after the bitwise XOR function the actual input that goes to the S-Box would be 00101011. According to the standard S-Box table, the corresponding output is 11110001, which is 1010101001010110 in NCL and 0xF1 in hexadecimal. Following that, the input signal is incremented; for input 00000000, the S-Box input becomes 00000000 XOR 11010100 = 11010100, which generates the corresponding output 01001000 (i.e., 0x48). Similarly, the hexadecimal numbers 0x03 and 0xF6 shown in Fig. 5 can be derived as well. All 256 inputs with different keys have been verified during the power analysis programming using MATLAB. Correct behavior of the function is the prerequisite for a successful power attack.
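The walkthrough above can be reproduced with a short script. The sketch below assumes only two entries of the standard AES S-Box table (the full 256-entry table is omitted for brevity) and models the key-XOR step and the dual-rail output encoding:

# Two entries of the standard AES S-Box, enough for the walkthrough above.
SBOX = {0xD4: 0x48, 0x2B: 0xF1}
KEY = 0b11010100   # the key used in the walkthrough (0xD4)

def masked_sbox(plain_in: int) -> int:
    # The bitwise XOR with the key precedes the S-Box lookup.
    return SBOX[plain_in ^ KEY]

def dual_rail(byte_val: int) -> str:
    # Bit 1 -> "10" and bit 0 -> "01" (rail1 then rail0), MSB first.
    return " ".join("10" if (byte_val >> i) & 1 else "01" for i in range(7, -1, -1))

# Input 0x00: 0x00 XOR 0xD4 = 0xD4, whose S-Box output is 0x48, as in the text.
assert masked_sbox(0x00) == 0x48
# An S-Box input of 0x2B yields 0xF1, i.e., dual-rail word 10 10 10 10 01 01 01 10.
assert dual_rail(SBOX[0x2B]) == "10 10 10 10 01 01 01 10"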
V. DPA

DPA is a much more powerful attack than SPA and is much more difficult to prevent. While SPA attacks primarily use visual inspection to identify relevant power fluctuations, DPA attacks use statistical analysis and error-correction techniques to extract information correlated with secret keys. Implementation of a DPA attack involves two phases: data collection and data analysis. Data collection for DPA is performed by recording a device's power consumption during cryptographic operations as a function of time; a number of cryptographic operations using the target key are observed. This DPA process has been applied to both the synchronous S-Box and the NCL S-Box with 256 keys. The DPA attack results show that the selected keys cannot be distinguished from the other hypothesized keys. Therefore, the proposed NCL S-Box design is secure against DPA attacks. Recovering the key from power analysis is prevented by this method; thus, the information transmitted from the transmitter to the receiver remains secure, and the use of different keys makes it difficult for an attacker to recover the data.
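As an illustration of the statistical-analysis step, the Python sketch below implements a difference-of-means DPA test on a hypothetical trace set. The trace data, selection function, and key guess are all stand-ins for illustration; this is not the measurement setup used on the SASEBO-GII board.

import numpy as np

def dpa_difference_of_means(traces, plaintexts, key_guess, predict_bit):
    # Partition the power traces by one predicted intermediate bit under a key
    # guess, then subtract the two group means; a pronounced peak in the
    # resulting differential trace supports that key guess.
    sel = np.array([predict_bit(p, key_guess) for p in plaintexts], dtype=bool)
    return traces[sel].mean(axis=0) - traces[~sel].mean(axis=0)

rng = np.random.default_rng(0)
traces = rng.normal(size=(1000, 200))          # stand-in for measured power traces
plaintexts = rng.integers(0, 256, size=1000)   # stand-in plaintext bytes
diff = dpa_difference_of_means(traces, plaintexts, key_guess=0x3C,
                               predict_bit=lambda p, k: (p ^ k) & 1)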
TABLE I. SIMULATION RESULTS FOR TEN ARBITRARY SAMPLES FROM THE CONVENTIONAL SYNCHRONOUS S-BOX AND THE PROPOSED NCL S-BOX. THE S-BOX OUTPUTS ARE DUAL-RAIL ENCODED.
VI. COUNTERMEASURE CIRCUIT

As summarized in Section II, countermeasures built on synchronous circuits remain vulnerable to timing-related leakage, whereas asynchronous designs such as the proposed NCL S-Box avoid these issues. The increased security, however, does not come for free. The area required to implement asynchronous countermeasures is potentially larger than that of their synchronous counterparts, and the benefits in terms of total power consumption and speed are still questionable. In addition, some of the countermeasures are validated only by electronic design automation tool simulations or theoretical analysis, which may not prove that these methods can resist real SCAs experimentally.

The proposed countermeasure circuit works as follows. The input is transformed to its polynomial representation, and the corresponding code is substituted through a matrix array. The countermeasure logic then shifts the secret key by a user-defined count and continues until the count completes, which makes it difficult for an attacker to recover the key. The countermeasure circuit is inserted as the initial stage: it introduces a shift register that keeps shifting the security key by a considerable count until the information reaches the receiver, so the attacker cannot recover the secret key.
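A behavioral Python sketch of this key-shifting idea is given below, assuming for illustration a circular 8-bit shift register and a shift count of six, as in the simulation; the actual circuit parameters may differ.

def shift_key(key: int, count: int, width: int = 8) -> int:
    # Circularly shift the secret key 'count' positions, as the shift-register
    # stage would, so the key observable at any instant differs from the stored key.
    count %= width
    mask = (1 << width) - 1
    return ((key << count) | (key >> (width - count))) & mask

key = 0b11010100
shifted = shift_key(key, 6)            # the simulation shifts the key six times
recovered = shift_key(shifted, 8 - 6)  # the receiver reverses the shift
assert recovered == key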
Fig. 3 The effect of the SubBytes() transformation on the state
The simulation result shown below takes the 128-bit input, segments it into 8-bit blocks, and shifts the key six times. After this shifting, each 8-bit input is converted into dual-rail code, and the corresponding output is obtained from the affine transformation. Until the shifting completes, the key is not exposed to the attackers; at the receiver, the inverse affine transformation is applied and the original information is retrieved.
Fig 4 Simulation result of counter measure circuit
Fig. 5 Total estimated power consumption.
After the functional verification, the VHDL code was synthesized and its power measurements were executed using the Xilinx ISE simulator. Power simulation results for the proposed NCL S-Box and the conventional synchronous S-Box are shown in Fig. 5: the proposed NCL S-Box consumes 165 mW and the conventional synchronous S-Box 174 mW at a temperature of about 27 degrees Celsius.
Power minimization is of paramount importance
for designers today, especially in the portable electronic-device market, where devices have become increasingly
feature rich and power hungry. Low supply voltages play
a significant role in determining the power consumption
in portable electronic-device circuits. Many side-channel
attacks on implementations of cryptographic algorithms
have been developed in recent years demonstrating the
ease of extracting the secret key. In response, various
schemes to protect cryptographic devices against such
attacks have been devised and some implemented in
practice. Almost all of these protection schemes target an
individual side-channel attack and consequently, it is not
obvious whether a scheme for protecting the device
against one type of side channel attacks may make the
device more vulnerable to another type of attacks.
We examine the possibility of such a negative impact for the case where fault-detection circuitry is added to a device (to protect it against fault-injection attacks) and analyze the resistance of the modified device to power attacks. To simplify the analysis, we focus on only one component of the cryptographic device (namely, the S-Box in the AES and Kasumi ciphers) and perform power attacks on the original implementation and on a modified implementation with an added parity-check circuit. Our results show that the presence of the parity-check circuitry has a negative impact on the resistance of the device to power analysis attacks.
Fig. 6 Power consumption without LFSR
The power consumption without the countermeasure logic is 88 mW, and the power consumption with the countermeasure logic is 60 mW; power consumption is therefore lower with the countermeasure logic.
Fig. 7 Power consumption with LFSR
VII. CONCLUSION
The asynchronous S-Box design is based on self-timed logic referred to as NCL, which supports properties beneficial for resisting DPA: clock-free operation, dual-rail signaling, and monotonic transitions. These properties make it difficult for an attacker to decipher secret keys embedded within the cryptographic circuit of the FPGA board. Experimental results comparing the original design against the proposed S-Box revealed that the asynchronous design decreased the amount of information leaked under both DPA and CPA attacks. Thus, with the introduction of the countermeasure circuit, the desired secrecy enhancement is achieved.
Analysis of Power Consumption and Linearity in
Capacitive DAC Used in Split-SAR ADCs
Saranya.D #1, Karthikeyan.S #2
#1PG student, #2 Assistant professor, Department of Electronics & Communication Engineering,
Vandayar Engineering College, Thanjavur, Tamilnadu, India.
Abstract- This paper analyzes SAR ADCs that achieve significant switching-energy savings compared with the set-and-down and charge-recycling switching approaches. The successive approximation technique in ADCs is well-known logic; in the presented design, the linearity of a Successive Approximation Register (SAR) Analog-to-Digital Converter (ADC) with a split DAC structure is analyzed for two switching methods: VCM-based switching and the switch-to-switchback process. The main motivation is to implement the capacitor-array DAC design and achieve high speed with medium resolution using 45 nm technology. The current SAR architecture has a built-in sample-and-hold circuit, so there is a significant saving in chip area. Another advantage is that capacitor matching can be achieved better than resistor matching, which is verified by behavioral simulation. Measurement results of power, speed, resolution, and linearity clearly show the benefits of VCM-based switching. In the proposed design, the SAR ADC is designed with the switch-to-switchback process in such a way that the control module completely controls the splitting of the modules, and we plan to provide an option to change the speed of operation using low-level input bits; a dedicated multiplexer is designed for that purpose.
KEYWORDS: Linearity analysis, linearity calibration,
resolution SAR ADCs, split DAC, VCM-based
switching, switch to switch back process.
I. INTRODUCTION
Recently, several energy-efficient switching methods have been presented to reduce the switching energy of the DAC capacitor network. These works reduce the unnecessary energy wasted in the switching sequence. However, the SAR control logic becomes more complicated due to the increased number of capacitors and switches, so we use the split SAR DAC technique.
The SAR ADC is widely used in many communication systems, such as ultra-wideband (UWB) and wireless sensor networks, which require low power consumption and low-to-medium-resolution converters. Traditional SAR ADCs are difficult to apply at high speed; however, improvements in technologies and design methods have allowed the implementation of high-speed, low-power SAR ADCs, which consequently become more attractive for a wide variety of applications. The power dissipation in a SAR converter is dominated by the reference ladder of the DAC capacitor array.
Recently, a capacitor splitting technique has
been presented, which was proven to use 31% less
power from the reference voltage supply. The total
power consumption of a 5b binary-weighted split
capacitor array is 6mW, and often this does not take
into account the reference ladder. Moreover, as the
resolution increases, the total number of input
capacitance in the binary-scaled capacitive DAC will
cause an exponential increase in power dissipation, as
well as a limitation in speed, due to a large charging
time-constant. Therefore, a small capacitance spread in
the DAC capacitor array is highly desirable for high
speed SAR ADCs [4]. This paper presents a novel structure of a split capacitor array to optimize the power efficiency and speed of SAR ADCs. Due to the series combination of the split capacitor array, smaller capacitor ratios and a more power-efficient charge-recycling approach in the DAC capacitor array can be achieved simultaneously, leading to a fast DAC settling time and low power dissipation in the SAR ADC.
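A quick calculation illustrates why the split array reduces the capacitance spread. The sketch below uses a common textbook model of the binary-weighted and split arrays with a unit capacitance of 1; it is an illustration under simplifying assumptions (the attenuation capacitor is neglected), not the exact sizing of the proposed design.

def binary_weighted_total(n_bits: int) -> int:
    # Conventional array: C, 2C, 4C, ..., 2^(n-1) C plus the terminating unit C,
    # i.e., 2^n unit capacitors in total.
    return sum(2 ** i for i in range(n_bits)) + 1

def split_array_total(n_bits: int) -> int:
    # Split into two n/2-bit sub-arrays joined by an attenuation capacitor;
    # each sub-array needs only about 2^(n/2) unit capacitors.
    return 2 * binary_weighted_total(n_bits // 2)

for n in (8, 10):
    print(n, binary_weighted_total(n), split_array_total(n))
# For 8 bits: 256 versus about 32 unit capacitors - a far smaller spread.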
The parasitic effects and the position of the attenuation
capacitor in the proposed structure will be theoretically
discussed and behavioral simulations will be performed.
The design and simulations of an 8-bit 180-MS/s SAR ADC with a 1.2-V supply are presented in 90-nm CMOS, exhibiting a Signal-to-Noise-and-Distortion Ratio (SNDR) of 48 dB with a total power consumption of 14 mW.
1.1 Selection of the right ADC architecture
The selection of the right architecture is a very
important decision. Fig. 1 shows common ADC (Analog-to-Digital Converter) architectures. Sigma-delta ADC architectures are very useful for lower sampling rates and higher resolution (approximately 12-24 bits). Common applications for the sigma-delta ADC architecture are found in voice, audio-band, and industrial measurements. The Successive Approximation (SAR) architecture is very suitable for data acquisition; it offers resolutions ranging from 8 bits to 18 bits and sampling rates ranging from 50 kHz to 50 MHz. The most effective way to create a giga-sample-rate application with 8- to 16-bit resolution is the pipeline ADC architecture.
1.2 SAR ADC Architecture

The SAR architecture is based on a binary search algorithm. The SAR ADC consists of blocks such as a comparator, a DAC (Digital-to-Analog Converter), and control logic. The algorithm is very similar to searching for a number in a telephone book: the book is opened, the number may be located in either the first half or the second half, and the relevant section is again divided into two halves. This method is repeated until the number is found. The main advantage of the SAR ADC is its good ratio of speed to power. Compared to a flash ADC, the SAR ADC is a compact design and hence inexpensive. A limitation of the SAR ADC is that a single comparator is used throughout the entire conversion process; if there is any offset error in the comparator, it affects all conversion bits. Another limitation is gain error in the DAC. However, these static parameter errors do not affect the dynamic behavior of the SAR ADC.
Fig. 3(a) VCM-based switching

The attenuation capacitor divides the LSB capacitor array and the MSB capacitor array. Here, the ratio between the LSB and MSB capacitors (C to 8Cc) is drastically reduced compared to the conventional binary-weighted capacitor array. The VCM-based approach performs the MSB transition by connecting the differential arrays to VCM. The power dissipation derives only from what is needed to drive the bottom-plate parasitics of the capacitive arrays, whereas in conventional charge redistribution the necessary MSB "up" transition costs significant switching energy and settling time. Moreover, as the MSB capacitor is no longer required, it can be removed from the n-bit DAC array. Therefore, the next n-1 bit estimation is done with an (n-1)-bit array instead of its n-bit counterpart, leading to a halving of the capacitance with respect to the conventional method (Fig. 3(b)).
Fig. 1 Block diagram of ADC
The conventional binary-weighted capacitor array has a limitation at higher resolutions due to the large capacitor ratio from the MSB to the LSB capacitor. To remove this problem, a technique known as the split-capacitor technique can be applied. For example, to reach 8-bit resolution, the capacitor array can be split as shown in Fig. 3.

1.2.1 SAR Logic
SAR logic is purely a digital circuit; it consists of three important blocks:
• Counter
• Bit register
• Data register
The counter provides timing control and switch control. For 8-bit conversion, seven D flip-flops are used. Table 1 shows which bit is set high during the different phases of SAR operation.
Fig. 3(b) Conventional switching
II. EXISTING SYSTEM

2.1 VCM-Based Switching

The VCM-based switching method operates as described above with reference to Fig. 3(a) and Fig. 3(b).

2.2 Sampling Phase

During the sampling phase, the bottom plates of the capacitor array are connected to Vin, as shown in Fig. 4. The reset switch is still on; hence, the top plate is at VCM and the voltage across the capacitor array is Vin - VCM.
Table 1. SAR operation phases and register bits

Phase            Q0  Q1  Q2  Q3  Q4  Q5  Q6
Discharge/Reset   0   0   0   0   0   0   0
Sample/Reset      1   0   0   0   0   0   0
During the charge-transfer phase, the bottom plates of the capacitor array are switched to VCM and the top plates are left floating, as shown in Fig. 7. In this phase, the reset switch is off; hence, a voltage Vx appears at the top plate.
Fig.4 Sampling Phase
2.2.1 Sample and Hold
In general, Sample and hold circuit (SHC)
contains a switch and a capacitor. In the tracking mode,
when the sampling signal is high and the switch is
connected, it tracks the analog input signal. Then, it
holds the value when the Sampling signal turns to low
in the hold mode. In this case, sample and hold provides
a constant voltage at the input of the ADC during
conversion. Regardless of the type of S/H (inherent or
separate S/H), sampling operation has a great impact on
the dynamic performance of the ADC, such as the SNDR.
III. PROPOSED SYSTEM

In the proposed system we plan to implement the SAR ADC in a configurable manner with different frequency inputs; the entire ADC architecture can operate at different performance points by changing the Vref of the ADC. In any ADC, Vref, Vin, and Vth play a major role in the conversion, so by varying the Vref voltage we can change the ADC performance. We store the various values of Vref and select among them through a multiplexer; a simple counter selects the mux inputs, and a signal generator produces different analog signals to test the ADC. SAR ADCs provide a high degree of configurability at both the circuit level and the architectural level. At the architectural level, the loop order, the oversampling ratio, the number of included blocks, and the way these blocks are connected can be changed; at the circuit level many parameters change, such as currents, amplifier performance, and quantized resolution.
2.3 Conversion Phase

During the conversion phase, the binary search algorithm constructs the binary bits. The digital word is fed through the DAC (producing V_DAC) and compared with the sampled analog input. During the first clock cycle of the conversion phase, the comparator determines whether the sampled analog voltage is smaller or greater than V_DAC. Based on this result, the most significant bit (MSB) is determined and stored in the SAR. In the second clock cycle, the output of the DAC is increased or decreased according to the result of the first clock cycle, and the second most significant bit is found. During the next clock cycles, V_DAC tracks V_H until the difference between them becomes less than 1 V_LSB, where V_LSB is the value of the least-significant-bit voltage. Therefore, after N clock cycles of the conversion phase, all N bits of the digital word are ready. It should be noted that in many recent architectures the S/H function is realized by the capacitive DAC itself; in other words, the capacitive array used in the DAC also serves as the S/H capacitor.
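The conversion phase can be summarized by a short behavioral model. The Python sketch below performs the bit-per-cycle binary search with an ideal DAC and comparator; the input and reference voltages are hypothetical, and offset and settling effects are ignored.

def sar_convert(vin: float, vref: float, n_bits: int = 8) -> int:
    # Successive approximation: binary-search Vin against the DAC output,
    # resolving one bit per clock cycle from the MSB down to the LSB.
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)
        v_dac = vref * trial / (1 << n_bits)   # ideal DAC output for the trial code
        if vin >= v_dac:                       # comparator decision keeps the bit
            code = trial
    return code

# Example: a 0.66-V input with Vref = 1.2 V resolves in 8 clock cycles.
print(bin(sar_convert(0.66, 1.2)))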
Fig 5 Major Block Diagram for Split-SAR ADC with
FPGA
3.1 DAC Architecture
The digital-to-analog (D/A) converter is used
to decode a digital word into a discrete analog level.
Depending on the application, the input signal can be
voltage or current. Figure 5 shows a high level block
diagram of a basic D/A converter. A binary word is
stored and decoded which drives a set of switches that
control a scaling network. The analog scaling network
is based on voltage scaling, current scaling, or charge
scaling. The scaling network is used to scale the
appropriate analog level from the analog reference
circuit and is applied to the output driver. A scaling
network is formed by a simple serial string of identical
resistors between a reference voltage and ground. A
switch does the work of tapping the voltages off the
www.internationaljournalssrg.org
Page 45
International Conference on Engineering Trends and Science & Humanities (ICETSH-2015)
resistors and applies them to the output driver. The current-scaling approach depends on switched, scaled current sources. Charge scaling is obtained by driving a capacitor divider with a reference voltage using scaled capacitors, where the total capacitance value is determined by the digital code. The choice of architecture depends on the available component technology, the conversion rate, and the resolution.

IV. ANALYSIS OF SPLIT SAR ADC AND RESULT

The power and speed analysis is done by implementing the SAR ADC in Xilinx ISE 9.2 and MATLAB.

[Simulation waveform: input signal]
Fig 5: Basic D/A converter block diagram.
In a SAR ADC, the power is mainly consumed in the DAC, the comparator, the reference buffers, and the digital circuits. The DAC is one of the most important building blocks: it determines the accuracy and conversion speed of the converter and also consumes most of the overall power dissipated by the SAR ADC. The DAC required in the SAR ADC can be realized in various ways, e.g., a capacitor-based DAC, a switched-current DAC, or an R-2R ladder DAC. Among these architectures, the capacitor-based DAC has become the most popular because of its zero static current. Further, in most technologies, resistor mismatch and tolerance are greater than capacitor mismatch and tolerance.
[Simulation waveform: correlated signal]
3.2 Comparator

A comparator is required to attain fast conversion. The ramp generator produces the ramp voltage, which is compared with the comparator input voltage. The comparator generates a hit pulse when the ramp voltage exceeds the input voltage; the difference between the ramp and the input voltage is called the hit pulse, and it passes all the way through the registers and encoders.
[Simulation waveform: DAC output]
Registers, Encoders, and Multiplexer

Clock signals are used for reading the register values. There are two types of registers: READ and WRITE registers. The Gray counter and MCG outputs are linked to the encoders. In the last step, all data values are composed by means of the multiplexer, which performs a first-in first-out (FIFO) operation.
REFERENCES

[1] Y. Zhu, C. H. Chan, U-F. Chio, S.-W. Sin, S.-P. U, R. P. Martins, and F. Maloberti, "Split-SAR ADCs: Improved linearity with power and speed optimization," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 22, no. 2, Feb. 2014.
[2] Y. Zhu, U.-F. Chio, H.-G. Wei, S.-W. Sin, U. Seng-Pan, and R. P. Martins, "A power-efficient capacitor structure for high-speed charge recycling SAR ADCs," in Proc. IEEE Int. Conf. Electron. Circuits Syst., Aug.-Sep. 2008, pp. 642-645.
[3] W. F. Dabberdt and R. Shellhorn, "Radiosonde temperature, pressure, air for an environmental status," Vaisala Inc., Boulder, CO, USA, Elsevier Science Ltd., 2003.
[4] M. Saberi, R. Lotfi, K. Mafinezhad, and W. A. Serdijn, "Analysis of power consumption and linearity in capacitive digital-to-analog converters used in successive approximation ADCs," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 58, no. 8, pp. 1736-1748, Aug. 2011.
[5] Y. F. Chen, X. Zhu, H. Tamura, M. Kibune, Y. Tomita, T. Hamada, M. Yoshioka, K. Ishikawa, T. Takayama, J. Ogawa, S. Tsukamoto, and T. Kuroda, "Split capacitor DAC mismatch calibration in successive approximation ADC," in Proc. IEEE Custom Integr. Circuits Conf., Sep. 2009, pp. 279-482.
[6] S. Wong, Y. Zhu, C.-H. Chan, U.-F. Chio, S.-W. Sin, U. Seng-Pan, and R. P. Martins, "Parasitic calibration by two-step ratio approaching technique for split capacitor array SAR ADCs," in Proc. IEEE SoC Design Conf. Int., Nov. 2009, pp. 333-336.
[Simulation waveforms: split SAR ADC, multiplexer, and 4-bit comparator]
V. CONCLUSION

SAR ADCs operating at tens of MS/s with conventional and VCM-based switching were presented. The linearity behaviors of the DAC switching and structure were analyzed and verified by both simulated and measured results. The VCM-based switching technique provides superior conversion linearity compared with the conventional method because of the correlation of the array's capacitors during each bit cycle. The reduction of the maximum capacitor ratio and of the total capacitance leads to area savings and power efficiency, which allow the SAR converter to work at high speed while meeting a low power consumption requirement. The ADC achieves 1.46 mW power consumption and occupies only 0.012 mm2. The measured performance corresponds to an FOM of 39 fJ/conversion-step, which is comparable with the best published ADCs.
PARTICLE SWARM ALGORITHM FOR
CHANNEL ALLOCATION IN COGNITIVE RADIO
NETWORKS
G.Suganya1
Mr.N.Khadar basha2
M.E, Communication systems,
Dhanalakshmi srinivasan engineering college,
Perambalur, India.
Assistant professor of ECE department,
Dhanalakshmi srinivasan engineering college,
Perambalur, India.
Abstract - Cognitive radio (CR) networks are self-organized and self-configured networks. A CR network allows secondary users to use the spectrum holes left by primary users. Channel allocation is an important factor in cognitive radio networks. In this paper, we propose channel allocation using the tunable transmitter - fixed receiver (TT-FR) method. Channel availability is determined using a Markov chain model, and particle swarm optimization is used for the TT-FR method. The proposed particle swarm algorithm is compared with a genetic algorithm, and the results show that the proposed algorithm provides better results than the genetic algorithm.
Keywords - Channel allocation, Cognitive Radio Networks, Particle
swarm algorithm, and TT-FR method.
I. INTRODUCTION
Cognitive radio networking is a technology that improves spectrum utilization by detecting unused spectrum. A CR performs the following tasks: spectrum sensing, spectrum analysis, and spectrum decisions. CRs have the potential to jump in and out of unused spectrum gaps to increase spectrum efficiency and make wideband services available. Users are classified into primary users (PUs) and secondary users (SUs).

Spectrum sharing can be done with a centralized or a distributed architecture. In a centralized network, a central controller controls all users; in a distributed network, each user shares knowledge of the entire network.

Fig. 1 shows a cognitive radio network, in which PU and CRU denote primary users and cognitive radio users (SUs), respectively. In CRN communication, the secondary network consists of secondary users equipped with cognitive radios. The secondary users sense and use spectrum and share the same space, time, and spectrum with primary users.
Figure.1 Cognitive Radio Network
1.1 Spectrum allocation

In cognitive radio networks, secondary users may access any available portion of the spectrum, which can cause interference to primary users. To avoid this problem, we use a spectrum allocation (or spectrum assignment) technique. This method is different from that of traditional wireless mesh networks. Spectrum allocation in a cognitive radio network is the process of simultaneously selecting the operating center frequency and the bandwidth, which is quite different from, and more complex than, allocation in traditional wireless networks; the cognitive radio concept simplifies the spectrum allocation problem.

In this paper we consider the channel allocation process in a cognitive radio network. Channel allocation is done by the tunable transmitter - fixed receiver method, performed through the use of the particle swarm optimization algorithm.
II. RELATED WORK
In [2], they propose a local bargaining approach
where users affected by the mobility event self-organize into
bargaining groups and adapt their spectrum assignment to
approximate a new optimal assignment. Fairness Bargaining
with Feed Poverty is proposed to improve fairness in spectrum
assignment and derive a theoretical lower bound on the
minimum assignment each user can get from bargaining for
certain network configurations. Such bound can be utilized to
guide the bargaining process. The communication overhead is
high.
In [4], the channel allocation process is performed through a max-min bandwidth allocation algorithm, which provides max-min fairness among users. Linear programming (LP) based optimal and heuristic algorithms are available for both the MMBA and LMMBA problems; the latter maximizes the sum of the logarithms of the allocated bandwidth of every user. Max-min fairness tries to assign bandwidth uniformly to the users. However, this approach provides only node-to-node interference-free data transmission, and allocation problems still occur.
In [6], the channel allocation problem in wireless
cognitive mesh networks is considered. For the allocation to
be feasible, served mesh clients must establish connectivity
with a backbone network in both the upstream and the
downstream directions, and must have the SINR of the uplink
and the downlink with their parent mesh routers within a
predetermined threshold. They propose a receiver-based channel allocation strategy: the receiver-based channel allocation problem in a wireless cognitive mesh network is formulated as a mixed integer linear program (MILP), and a heuristic solution is proposed.
2.1 Genetic Algorithm

The genetic algorithm is a search heuristic used to generate solutions to optimization problems. Each candidate solution is represented by a string of binary 0s and 1s. The algorithm starts from randomly generated individuals and proceeds iteratively; the population in each iteration is called a generation. In each generation, the fitness of every individual in the population is evaluated against the objective function of the optimization problem. Fit individuals are stochastically selected from the current population, and each selected individual is modified to form a new generation, which is then used in the next iteration.

The genetic algorithm has difficulty with dynamic data, and it requires expensive fitness evaluations to find optimal solutions to multimodal, high-dimensional problems.
III. TUNABLE TRANSMITTER - FIXED RECEIVER METHOD
Generally, four modes of operation are available for channel allocation:
1. Tunable transmitter - fixed receiver: the SUs can use any available channel for transmission, but they use a fixed channel for receiving data.
2. Tunable transmitter - tunable receiver: the SUs can use any available channel for both transmission and reception.
3. Fixed transmitter - fixed receiver: the SUs use a fixed channel for both transmission and reception.
4. Fixed transmitter - tunable receiver: the SUs use a fixed channel for transmitting data and any available channel for receiving data.
The tunable transmitter - tunable receiver and fixed transmitter - fixed receiver modes require a common control channel. In this paper we propose the tunable transmitter - fixed receiver method. Each node knows the channels allocated to its neighboring nodes, so no common control channel is required. If node m wants to communicate with node n, it first finds the channel (fn) allocated to n. Based on this information, node m tunes its transceiver to that channel and then communicates with n.
The channel allocation problem can be divided into two sub-problems:
1. Channels allocated to mesh routers, which are used for data transmission.
2. Channels allocated to mesh clients, which are used to establish the uplinks/downlinks with mesh routers.
The channel allocation problem can be solved using the particle swarm optimization algorithm.
IV. PARTICLE SWARM OPTIMIZATION ALGORITHM
The particle swarm algorithm optimizes a problem by an iterative procedure that improves a candidate solution for the network. It makes few or no assumptions about the problem being optimized. The particle swarm algorithm is a pattern search method: it does not require the gradient of the problem, which means the problem need not be differentiable. The algorithm can therefore be used on problems that are noisy or change over time. The set of nodes in the network is considered the swarm, and the individual nodes are considered particles. Each node moves toward the best channel. The information links between a particle and its neighbors form a network, called the topology of the particle swarm optimization variant.
Figure.2 Particle swarm algorithm procedure (flowchart: build the matrix from channel availability, channel state, and bandwidth information; find the global best channel; if the channel is busy, iterate again; otherwise compute the fitness function and find the personal best position)

Figure.3 Number of PUs versus SUs' throughput
Each node in the swarm defines a solution to the optimization problem. Channel availability is encoded as (0, 1), in which 0 denotes that the channel is busy and 1 denotes that the channel is available. This gives the nodes (particles) the chance to lead a channel and helps them jump into or out of a channel. Throughput is maximized through the use of the fitness function. The best position is divided into the personal best position and the global best position.

The particle swarm algorithm performs the following procedure (a sketch follows the list):
1. Create matrices for channel availability, bandwidth, and channel state information. Then initialize the maximum iteration count.
2. Find the global best channel for each node in the swarm.
3. If the channel is busy, continue iterating until the best channel is selected.
4. Compute the fitness of all new nodes and update the personal best position.
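The Python sketch below illustrates this procedure with a simplified discrete particle swarm variant. The availability matrix, bandwidth values, and fitness function are randomly generated stand-ins for illustration; this is not the simulation used to produce the results that follow.

import random

N_NODES, N_CHANNELS, ITERATIONS = 5, 8, 50
random.seed(1)
# Step 1: availability matrix (1 = channel free, 0 = busy) and per-channel bandwidth.
avail = [[random.randint(0, 1) for _ in range(N_CHANNELS)] for _ in range(N_NODES)]
bandwidth = [random.uniform(1.0, 5.0) for _ in range(N_CHANNELS)]

def fitness(node: int, ch: int) -> float:
    # Toy throughput measure: the channel's bandwidth if it is available.
    return bandwidth[ch] if avail[node][ch] else 0.0

personal_best = [random.randrange(N_CHANNELS) for _ in range(N_NODES)]
for _ in range(ITERATIONS):   # Steps 2-4: iterate the swarm
    global_best = max(range(N_CHANNELS),
                      key=lambda ch: sum(fitness(n, ch) for n in range(N_NODES)))
    for node in range(N_NODES):
        # Each particle moves toward its personal best, the global best, or explores.
        candidate = random.choice([personal_best[node], global_best,
                                   random.randrange(N_CHANNELS)])
        if fitness(node, candidate) >= fitness(node, personal_best[node]):
            personal_best[node] = candidate   # update the personal best position

print("allocation:", personal_best)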
V. RESULTS
The results show the channel allocation and the throughput of the users. The particle swarm algorithm provides better channel allocation and higher throughput than the genetic algorithm. The tunable transmitter - fixed receiver method provides higher throughput than the fixed transmitter - tunable receiver method.
Figure.4 Number of PUs versus average number of served MCs
VI. CONCLUSION
Cognitive radio technology improves spectrum utilization and also provides a better channel allocation process. The channel allocation problem is solved through the use of the particle swarm algorithm, with the channel allocation process performed by the tunable transmitter - fixed receiver method. The results show the performance of the proposed algorithm, which provides better results than the genetic algorithm.
References

[1] M. Alicherry, R. Bhatia, and L. Li, "Joint channel assignment and routing for throughput optimization in multi-radio wireless mesh networks," in Proc. ACM MobiCom, pp. 58-72, 2005.
[2] L. Cao and H. Zheng, "Distributed spectrum allocation via local bargaining," in Proc. IEEE Conf. SECON, Santa Clara, CA, USA, pp. 475-486, Sep. 2005.
[3] A. A. El-Sherif, A. Mohamed, and Y. C. Hu, "Joint routing and resource allocation for delay sensitive traffic in cognitive mesh networks," in Proc. IEEE Globecom Workshop RACCN, Houston, TX, USA, pp. 947-952, Dec. 2011.
[4] J. Tang, R. Hincapie, G. Xue, W. Zhang, and R. Bustamante, "Fair bandwidth allocation in wireless mesh networks with cognitive radios," IEEE Trans. Veh. Technol., vol. 59, no. 3, pp. 1487-1496, Mar. 2010.
[5] Y. T. Hou, Y. Shi, and H. D. Sherali, "Spectrum sharing for multi-hop networking with cognitive radios," IEEE J. Sel. Areas Commun., vol. 26, no. 1, pp. 146-155, Jan. 2008.
[6] H. M. Almasaeid and A. E. Kamal, "Receiver-based channel allocation for wireless cognitive radio mesh networks."
[7] S. Merlin, N. Vaidya, and M. Zorzi, "Resource allocation in multi-radio multi-channel multi-hop wireless networks," in Proc. IEEE INFOCOM, pp. 610-618, 2008.
[8] J. Zhang and T. Lv, "Spectrum allocation in cognitive radio with particle swarm optimization algorithm."
[9] Y. Shi and Y. T. Hou, "A distributed optimization algorithm for multi-hop cognitive radio networks," in Proc. IEEE INFOCOM, pp. 1966-1974, 2008.
[10] Z. Li, F. R. Yu, and M. Huang, "A cooperative spectrum sensing consensus scheme in cognitive radios," in Proc. IEEE INFOCOM, pp. 2546-2550, 2009.
[11] N. H. Lan and N. U. Trang, "Channel assignment for multicast in multi-channel multi-radio wireless mesh networks," Wireless Commun. Mobile Comput., vol. 9, pp. 557-571, Apr. 2009.
[12] Y. Wu and D. H. K. Tsang, "Distributed power allocation algorithm for spectrum sharing"
Maximizing The Network Lifetime in MANET By
Using Efficient Power Aware Routing Protocol
K. Elakiya
Second year M.E (communication system),
K.Ramakrishnan college of engineering,
Samayapuram,Trichy.
Abstract-- In MANETs, power awareness is an important challenge in improving the communication energy efficiency at individual nodes. I propose Efficient Power Aware Routing (EPAR), a new power-aware routing protocol that increases the network lifetime of a MANET. In contrast to conventional power-aware algorithms, EPAR measures the capacity of a node not just by its remaining battery power, but also by the expected energy spent in reliably forwarding data packets over a specific link. Using a mini-max formulation, EPAR selects the path that has the largest packet capacity at the smallest residual packet transmission capacity. The protocol must be able to handle high mobility of the nodes, which often causes changes in the network topology. This paper evaluates three ad hoc network routing protocols (EPAR, MTPR, and DSR) at different network scales, taking power consumption into consideration. Indeed, the proposed scheme reduces total energy consumption by more than 20% and decreases the mean delay, particularly for high-load networks, while achieving a good packet delivery ratio.
1. INTRODUCTION
Wireless networks have become gradually more popular during the past decades. There are two variations of wireless networks: infrastructure and infrastructure-less networks. In the former, communications among terminals are established and maintained through central controllers; examples include cellular networks and wireless Local Area Networks (IEEE 802.11). The latter variation is commonly referred to as a wireless ad hoc network. Such a network is organized in an ad hoc manner, where terminals are capable of establishing connections by themselves and communicate with each other in a multi-hop manner without the help of fixed infrastructure. This infrastructure-less property allows an ad hoc network to be quickly deployed in a given area and provides robust operation. Example applications include emergency services, disaster recovery, wireless sensor networks, and home networking. Communication has become very important for exchanging information. A MANET is a group of mobile nodes that form a network independently of any centralized administration.
Most researchers have recently started to consider the development of efficient power-aware protocols for MANETs. Since each mobile node in a MANET performs the routing function to establish communication among different mobile nodes, the "death" of even a few nodes due to power exhaustion might disconnect services in the entire MANET. Mobile nodes in MANETs are battery driven, and there are two major reasons for a link breakage (a sketch of the route-selection idea that mitigates the first follows the list):
 A node dying of energy exhaustion.
 A node moving out of the radio range of its neighboring node.
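The first cause can be mitigated by routing around energy-poor nodes. The Python sketch below illustrates the mini-max route-selection idea from the abstract; the node energies and candidate routes are hypothetical, and the actual EPAR metric also accounts for the expected energy of reliably forwarding packets over each link.

def best_minimax_route(routes, residual_energy):
    # For each candidate route, find its weakest node (smallest residual
    # energy); then pick the route whose weakest node is strongest.
    return max(routes, key=lambda route: min(residual_energy[n] for n in route))

# Hypothetical topology: three candidate routes and per-node battery levels (J).
residual_energy = {"A": 9.0, "B": 2.5, "C": 7.0, "D": 6.5, "E": 4.0}
routes = [["A", "B", "D"], ["A", "C", "D"], ["A", "E", "D"]]
print(best_minimax_route(routes, residual_energy))   # -> ['A', 'C', 'D']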
APPLICATIONS OF MANETS:
Military Scenarios: MANET supports tactical networks for military communications and automated battlefields.
Rescue Operations: It provides disaster recovery, i.e., replacement of the fixed infrastructure network in case of an environmental disaster.
Data Networks: MANET provides support to the network for the exchange of data between mobile devices.
Device Networks: Device networks support the wireless connections between various mobile devices so that they can communicate.
Free Internet Connection Sharing: It also allows us to share an Internet connection with other mobile devices.
Sensor Networks: A sensor network consists of devices that have the capability of sensing, computation and wireless networking. A wireless sensor network combines all three of these capabilities, in devices such as smoke detectors and electricity, gas and water meters.
II. RELATED WORK
Most of the previous work on routing in wireless ad hoc networks deals with the problem of finding and maintaining correct routes to the destination during mobility and changing topology [17-18]. In [7], the authors presented a simple implementable algorithm which guarantees strong connectivity and assumes limited node range. A shortest path algorithm is used in this strongly connected backbone network. However, the route may not be the minimum energy solution, due to the possible omission of optimal links at the time of the backbone network calculation. In [4], the authors developed a dynamic routing algorithm for establishing and maintaining connection-oriented sessions, which uses a proactive approach to cope with unpredictable topology changes.
A. Proactive Energy-Aware Routing
With table-driven routing protocols, each node attempts to maintain consistent, up-to-date routing information to every other node in the network [1-3]. This is done in response to changes in the network by having each node update its routing table and propagate the updates to its neighboring nodes. Thus, it is proactive in the sense that when a packet needs to be forwarded, the route is already known and can be used immediately. As is the case for wired networks, the routing table is constructed using either link-state or distance-vector algorithms and contains a list of all the destinations, the next hop, and the number of hops to each destination.
B. Reactive Energy-Aware Routing
With on-demand routing, routes are discovered only when a source node desires them. Route discovery and route maintenance are the two main procedures. The route discovery process [4-6] involves sending route-request packets from a source to its neighbor nodes, which then forward the request to their neighbors, and so on. Once the route-request reaches the destination node, it responds by unicasting a route-reply packet back to the source node via the neighbor from which it first received the route-request. When the route-request reaches an intermediate node that has a sufficiently up-to-date route, it stops forwarding and sends a route-reply message back to the source. Once the route is established, some form of route maintenance process maintains it in each node's internal data structure, called a route-cache, until the destination becomes inaccessible along the route. Note that each node learns the routing paths as time passes, not only as a source or an intermediate node but also as an overhearing neighbor node. In contrast to table-driven routing protocols, not all up-to-date routes are maintained at every node. Dynamic Source Routing (DSR) and Ad-Hoc On-Demand Distance Vector (AODV) [7], [18] are examples of on-demand driven protocols.
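The discovery procedure above can be summarized in a few lines. The following is an illustrative sketch only, not taken from any protocol implementation: flooding is modeled as a breadth-first search, so the first route-request copy to reach the destination fixes the replied path; the topology dictionary and function names are our own assumptions for the example.

```python
# Hypothetical sketch of on-demand route discovery (DSR/AODV style):
# a route-request floods outward from the source; the first copy to
# reach the destination fixes the path, which would then be returned
# in a route-reply unicast back along the recorded hops.
from collections import deque

def discover_route(neighbors, src, dst):
    """neighbors: dict node -> iterable of neighbor nodes."""
    visited = {src}
    queue = deque([[src]])          # each entry is the path recorded so far
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:             # destination unicasts a route-reply
            return path             # back along the recorded path
        for nxt in neighbors.get(node, ()):
            if nxt not in visited:  # duplicate route-requests are dropped
                visited.add(nxt)
                queue.append(path + [nxt])
    return None                     # destination unreachable

topo = {'A': ['B', 'E'], 'B': ['A', 'C'], 'C': ['B', 'D'],
        'E': ['A', 'F'], 'F': ['E', 'D'], 'D': []}
print(discover_route(topo, 'A', 'D'))   # e.g. ['A', 'B', 'C', 'D']
```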
C. DSR Protocol
Though the dynamic source routing protocol has many advantages [8, 14], it does have some drawbacks, which limit its performance in certain scenarios. The various drawbacks of DSR are as follows: DSR does not support multicasting. The data packet header in DSR carries all the intermediate route addresses along with the source and destination, thereby decreasing the throughput. DSR sends route-reply packets through all routes from which the route-request packets came; this increases the available multiple paths for the source but at the same time increases the routing packet load of the network. The current specification of DSR does not contain any mechanism for route entry invalidation or route prioritization when faced with a choice of multiple routes.
D. Energy Aware Metrics
The majority of energy-efficient routing protocols [11-12] for MANETs try to reduce energy consumption by means of an energy-efficient routing metric, used in routing table computation instead of the minimum-hop metric. This way, a routing protocol can easily introduce energy efficiency into its packet forwarding. These protocols try either to route data through the path with the maximum energy bottleneck, or to minimize the end-to-end transmission energy for packets, or a weighted combination of both. A first approach to energy-efficient routing is known as Minimum Transmission Power Routing (MTPR). That mechanism uses a simple energy metric, represented by the total energy consumed to forward the information along the route. This way, MTPR reduces the overall transmission power consumed per packet, but it does not directly affect the lifetime of each node. However, minimizing the transmission energy only differs from shortest-hop routing if nodes can adjust transmission power levels, so that multiple short hops are more advantageous, from an energy point of view, than a single long hop.
In the route discovery phase [15], the bandwidth and energy constraints are built into the DSR route discovery mechanism. In the event of an impending link failure, a repair mechanism is invoked to search locally for an energy-stable alternate path.
III. DESIGN AND IMPLEMENTATION
This is one of the more obvious metrics [16-17]. To conserve energy, we should minimize the amount of energy consumed by all packets traversing from the source node to the destination node, i.e., we want to know the total amount of energy a packet consumes as it travels from each node on the route to the next hop. The energy consumed for one packet is calculated by equation (1):

Ec = Σ T(ni, ni+1), summed over i = 1 to k-1    (1)
where n1 to nk are the nodes in the route and T denotes the energy consumed in transmitting and receiving a packet over one hop. We then find the minimum Ec over all packets. The main objective of EPAR is to minimize the variance in the remaining energies of all the nodes and thereby prolong the network lifetime.
A. Route Discovery and Maintenance in the Proposed Algorithm
EPAR schemes make routing decisions to optimize the performance of power- or energy-related evaluation metrics. The route selections are made solely with regard to performance requirement policies, independent of the underlying ad hoc routing protocol deployed. Therefore, the power-aware routing schemes are transferable from one underlying ad hoc routing protocol to another, and the observed relative merits and drawbacks remain valid.
The two routing objectives, minimum total transmission energy and maximum operational lifetime of the network, can be mutually contradictory. For example, when several minimum-energy routes share a common node, the battery power of this node will quickly run into depletion, shortening the network lifetime. When choosing a path, the DSR implementation chooses the path with the minimum number of hops [13]. For EPAR, however, the path is chosen based on energy. First, we calculate the battery power for each path, that is, the lowest hop energy of the path. The path is then selected by choosing the path with the maximum lowest hop energy. For example, consider the following scenario. There are two paths to choose from. The first path contains three hops with energy values 22, 18, and 100, and the second path contains four hops with energy values 40, 25, 45, and 90. The battery power for the first path is 18, while the battery power for the second path is 25. Because 25 is greater than 18, the second path would be chosen.
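As a minimal sketch of these two metrics (ours, not the authors' code, assuming per-hop values are already known), the following reproduces the 18-versus-25 selection above and the total-energy sum of equation (1):

```python
# Illustrative sketch of the two metrics in this section: equation (1)'s
# total per-packet energy, and EPAR's max-min residual-energy selection.

def total_energy(hop_costs):
    """Eq. (1): Ec = sum of T(ni, ni+1) over the hops of a route."""
    return sum(hop_costs)

def epar_select(paths):
    """Pick the path whose smallest residual hop energy is largest."""
    return max(paths, key=min)

# Residual energies from the example in the text:
path1 = [22, 18, 100]      # bottleneck 18
path2 = [40, 25, 45, 90]   # bottleneck 25
assert epar_select([path1, path2]) is path2   # 25 > 18, so path2 wins

# Eq. (1) on assumed per-hop transmit+receive costs (in mJ):
print(total_energy([3, 2, 4]))                # 9
```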
[Fig. 1: Route discovery and maintenance process in EPAR, showing the RREQ/RREP exchange among nodes A-F, whose predicted lifetimes range from 100 s to 1000 s.]
The EPAR algorithm is an on-demand source routing protocol that uses battery lifetime prediction. In Fig. 1, DSR selects the shortest path AEFD or AECD, and MTPR selects the minimum-power route AEFD; the proposed EPAR selects ABCD, because that path has the maximum lifetime of the network. This increases the network lifetime of the MANET, as shown in equation (2). The objective of this routing protocol is to extend the service lifetime of a MANET with dynamic topology. The protocol favors the path whose lifetime is maximum. We represent our objective function as

Tk(t) = min over nodes i in path k of Ti(t); the selected path maximizes Tk(t)    (2)

where Tk(t) is the lifetime of path k and Ti(t) is the predicted lifetime of node i in path k.
Proof (for the node lifetimes of Fig. 1 at t = 0): for the selected path,
Tk(0) = Min(Ti(0)) = Min(800, 1000, 400, 200) = 200,
while for the two alternative paths
Min(Ti(0)) = Min(800, 700, 400, 200) = 200 and
Min(Ti(0)) = Min(800, 700, 100, 200) = 100.
Hence the selected path lifetime is 200 s.
Our approach is a dynamic, distributed load-balancing approach that avoids power-congested nodes and chooses paths that are lightly loaded. This helps EPAR achieve minimum variance in the energy levels of different nodes in the network and maximizes the network lifetime.
B. Data Packet Format in EPAR
The Pt value must be the power at which the packet is actually transmitted on the link. If for any reason a node chooses to change the transmit power for hop i, it must set the Pt value in the minimum transmission power field (MTP[i]) to the actual transmit power. If the new power differs by more than Mthresh, the Link Flag is set.
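A hedged sketch of this header update follows; the field names MTP and Link Flag come from the text, while the threshold value and the data layout are our own assumptions:

```python
# Sketch of the per-hop header update described above; MTP and the
# link flag follow the text, everything else is illustrative scaffolding.

M_THRESH = 2.0   # assumed threshold (dBm); the paper does not fix a value

def update_hop_power(header, i, actual_pt):
    """Record the real transmit power for hop i and flag large changes."""
    previous = header['MTP'][i]
    header['MTP'][i] = actual_pt            # Pt must reflect actual power
    if abs(actual_pt - previous) > M_THRESH:
        header['link_flag'] = True          # signal a significant change
    return header

pkt = {'MTP': [15.0, 15.0, 15.0], 'link_flag': False}
update_hop_power(pkt, 1, 18.5)              # differs by 3.5 > Mthresh
print(pkt['link_flag'])                     # True
```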
IV. NETWORK METRICS FOR PROPOSED PROTOCOL PERFORMANCE
A. Remaining Battery Power
The remaining battery life τi = Pi/ri depends on the unknown drain rate ri of mobile node i and is consequently considered a random variable. Let Ti be an estimate of the remaining battery life τi = Pi/ri, and ui = u(Ti) be the utility of the battery power at node i. The number of nodes in the network versus the average remaining battery power is considered as the metric to analyze the performance of the protocols in terms of power.
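As an illustration of this metric, the sketch below estimates Ti = Pi/ri from a node's residual energy and a smoothed drain rate; the smoothing method and constant are our assumptions, not part of the paper:

```python
# Minimal sketch of the remaining-battery-life estimate tau_i = P_i / r_i,
# with the drain rate r_i smoothed by an exponential moving average.

def estimate_lifetime(residual_energy, drain_samples, alpha=0.3):
    """Return T_i, an estimate of tau_i = P_i / r_i (in seconds)."""
    rate = drain_samples[0]
    for sample in drain_samples[1:]:
        rate = alpha * sample + (1 - alpha) * rate   # smoothed drain rate
    return float('inf') if rate <= 0 else residual_energy / rate

# Node with 800 J left, draining roughly 1 J/s:
print(round(estimate_lifetime(800.0, [1.1, 0.9, 1.0])))   # 778 (seconds)
```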
B. Power Consumption
The mobile node's battery power consumption is mainly due to the transmission and reception of data packets. Whenever a node remains active, it consumes power. Even when the node is not actively participating in the network but is in the idle mode waiting for packets, the battery keeps discharging. Battery power consumption also includes the power spent on the computations that take place in the nodes for routing and other decisions. The number of nodes in the network versus the average consumed battery power is considered as a metric.
Fig. 1 shows the throughput of the DSR protocol becoming stable when the number of nodes exceeds 60, while that of MTPR increases significantly. On the other hand, the throughput of EPAR increases rapidly once the number of nodes exceeds 60, achieving about 80% higher efficiency than MTPR and DSR.
C. Dropped Packets
The fraction of dropped packets increases as the traffic
intensity increases. Therefore, performance at a node is
often measured not only in terms of delay, but also in
terms of the probability of dropped packets. Dropped packets may be retransmitted on an end-to-end basis in
order to ensure that all data are eventually transferred from
source to destination. Losses between 5% and 10% of the
total packet stream will affect the network performance
significantly.
D. Network lifetime
It is the time span from the deployment to the instant
when the network is considered nonfunctional. When a
network should be considered nonfunctional is,
however, application-specific. It can be, for example,
the instant when the first mobile node dies, a percentage
of mobile nodes die, the network partitions, or the loss
of coverage occurs. This affects the performance of the whole network. If the battery power is high in all the mobile nodes of the MANET, the network lifetime is increased.
V. SIMULATION SETUP AND RESULT DISCUSSION
Extensive simulations were conducted using NS-2.33. The simulated network consisted of 120 nodes randomly scattered in a 2000 m × 2000 m area at the beginning of the simulation. The tool setdest was used to produce mobility scenarios, where nodes move at six different uniform speeds ranging from 0 to 10 m/s with a uniform pause time of 10 s.
Fig. 2. End-to-end delay vs. pause time (moving speed).
Fig. 2 shows that the end-to-end delay with respect to pause time using MTPR and DSR increases significantly when the pause time exceeds 70 s. On the contrary, the end-to-end delay under the EPAR protocol increases slowly compared with the MTPR-based network, showing only a gentle increase with increasing pause time. Observe that the EPAR protocol maintains stable battery power while the end-to-end delay is measured.
Fig. 1. Throughput versus number of nodes.
Fig. 3. Network lifetime varying with network size (traffic load).
Fig. 3 shows the network lifetime as a function of the number of nodes. The lifetime decreases as the number of nodes grows; however, for more than 100 nodes, the lifetime remains almost constant as the number of nodes increases. The lifetime decreases because the MANET has to cover more nodes as the network size increases. We observe that the improvement achieved through EPAR is equal to 85%. Energy is uniformly drained from all the nodes, and hence the network lifetime is significantly increased.
VI. CONCLUSION
This research paper mainly deals with the problem of maximizing the network lifetime of a MANET, i.e., the time period during which the network is fully working. We presented a solution called EPAR, which is basically an improvement on DSR. This study evaluated three power-aware ad hoc routing protocols in different network environments, taking into consideration network lifetime and packet delivery ratio. Overall, the findings show that the energy consumption and throughput in small networks did not reveal any significant differences. However, for medium and large ad hoc networks, the DSR performance proved to be inefficient in this study. In particular, the performance of EPAR, MTPR and DSR in small networks was comparable, but in medium and large networks EPAR and MTPR produced good results, and the performance of EPAR in terms of throughput is good in all the scenarios that were investigated. From the various graphs, we can conclude that our proposed algorithm clearly outperforms the traditional energy-efficient algorithms; the EPAR algorithm outperforms the original DSR algorithm by 65%.
REFERENCES
[1] Vinay Rishiwal, S. Verma and S. K. Bajpai, "QoS Based Power Aware Routing in MANETs", International Journal of Computer Theory and Engineering, Vol. 1, No. 1, pp. 47-54, 2009.
[2] Chen Huang, "On Demand Location Aided QoS Routing in Ad hoc Networks", IEEE Conference on Parallel Processing, Montreal, pp. 502-509, 2004.
[3] Wen-Hwa Lio, Yu-Chee Tseng and Kuei-Ping Shih, "A TDMA Based Bandwidth Reservation Protocol for QoS Routing in a Wireless Mobile Ad hoc Network", IEEE International Conference on Communications, Vol. 5, pp. 3186-3190, 2002.
[4] Jaydeep Punde, Nikki Pissinou, and Kia Makki, "On Quality of Service Routing in Ad hoc Networks", Proc. 28th Annual IEEE Conference on Local Area Networks, pp. 276-278, 2003.
[5] Peng-Jun Wan, Gruia Calinescu, Xiangyang Li and Ophir Frieder, "Minimum-Energy Broadcast Routing in Static Ad Hoc Wireless Networks", IEEE INFOCOM, 2001.
[6] S-L. Wu, Y-C. Tseng and J-P. Sheu, "Intelligent Medium Access for Mobile Ad Hoc Networks with Busy Tones and Power Control", IEEE Journal on Selected Areas in Communications, Vol. 18, No. 9, September 2000.
[7] S. Muthuramalingam et al., "A Dynamic Clustering Algorithm for MANETs by Modifying Weighted Clustering Algorithm with Mobility Prediction", International Journal of Computer and Electrical Engineering, Vol. 2, No. 4, pp. 709-714, 2010.
[8] Hussein, Abu Salem A. H., and Yousef A. O., "A Flexible Weighted Clustering Algorithm Based on Battery Power for Mobile Ad hoc Networks", IEEE International Symposium on Industrial Electronics, 2008.
[9] C. K. Nagpal et al., "Impact of Variable Transmission Range on MANET Performance", International Journal of Ad hoc, Sensor & Ubiquitous Computing, Vol. 2, No. 4, pp. 59-66, 2011.
[10] Shivashankar et al., "Study of Routing Protocols for Minimizing Energy Consumption Using Minimum Hop Strategy in MANETs", International Journal of Computing Communication and Network Research (IJCCNet Research), Vol. 1, No. 3, pp. 10-21, 2012.
[11] Priyanka Goyal et al., "MANET: Vulnerabilities, Challenges, Attacks, Application", IJCEM International Journal of Computational Engineering & Management, Vol. 11, pp. 32-37, 2011.
[12] S. Shakkottai, T. S. Rappaport, and P. C. Karlsson, "Cross-layer Design for Wireless Networks", IEEE Communications Magazine, Vol. 41, pp. 74-80, 2003.
[13] V. Srivastava and M. Motani, "Cross-layer Design: A Survey and the Road Ahead", IEEE Communications Magazine, Vol. 43, Issue 12, pp. 112-119, 2005.
[14] Subhankar Mishra et al., "Energy Efficiency in Ad Hoc Networks", International Journal of Ad hoc, Sensor & Ubiquitous Computing, Vol. 2, No. 1, pp. 139-145, 2011.
[15] C. Poongodi and A. M. Natarajan, "Optimized Replication Strategy for Intermittently Connected Mobile Networks", International Journal of Business Data Communications and Networking, 8(1), pp. 1-3, 2012.
[16] Shivashankar et al., "Implementing a New Algorithm for Analysis of Protocol Efficiency Using Stability and Delay Tradeoff in MANET", International Journal of Computers & Technology, Vol. 2, No. 3, pp. 11-17.
[17] Kim, D., Garcia-Luna-Aceves, J. J., Obraczka, K., Cano, J.-C., and Manzoni, P., "Routing Mechanisms for Mobile Ad hoc Networks Based on the Energy Drain Rate", IEEE Transactions on Mobile Computing, Vol. 2, No. 2, pp. 161-173, 2006.
[18] Mohd Izuan Mohd Saad, "Performance Analysis of Random-Based Mobility Models in MANET Routing Protocol", European Journal of Scientific Research, Vol. 32, No. 4, pp. 444-454, 2009.
MODELING AND REDUCING ENERGY CONSUMPTION FOR USER EQUIPMENTS IN CELLULAR NETWORKS
M. Shahida Banu
M.E. (Communication Systems)
Dhanalakshmi Srinivasan Engineering College
Perambalur, India
e-mail: shahidaece97@gmail.com
P. Rajeswari
Associate Professor / ECE
Dhanalakshmi Srinivasan Engineering College
Perambalur, India
e-mail: prajeswari2k5@gmail.com
Abstract—In cellular networks, the timeout periods of inactivity timers are responsible for wasted energy and radio resources. This energy consumption results in reduced broadcast quality and added delay. The tail time (timeout period) is responsible for this major problem. Here the tail time is leveraged by proposing a tail theft scheme, which includes a virtual tail time mechanism and a dual queue scheduling algorithm; batching and prefetching are used in this scheme. This approach helps to distinguish requests and schedule the transmissions. Thus the quality of user experience among heterogeneous users is improved along with energy efficiency. Tail theft is evaluated using real application traces, showing savings in battery energy and radio resources.
Keywords—Adaptive multimedia broadcast and multicast, virtual tail time mechanism, dual queue scheduling algorithm, heterogeneous users, energy saving, quality of user experience
1. INTRODUCTION
1.1 Multimedia Broadcasting
Multimedia broadcasting, or datacasting, refers to the use of the existing broadcast infrastructure to move digital information to a range of devices (not just PCs). While the existing infrastructure of broadcast radio and TV uses analog transmissions, digital signals may be transmitted on subcarriers or sub-channels; also, both the radio and TV industries have begun a transition to digital transmissions.
Multimedia broadcasting will be developed in three basic dimensions. First, datacasting supports the transport of multiple data types. This means that more than the traditional real-time, linear, and prescheduled styles of audio and video programming will be available; broadcast programming can become richer and more involving by increasing its creative palette to include different data types and by leveraging the processing power of intelligent receivers and PCs on the consumer side. Second, while some of this data will be associated with the main channel programming (i.e., typical radio and TV programming), other data wholly unrelated to traditional programming will be transported. And third, broadcast applications can interoperate seamlessly with other non-broadcast client-server applications such as World Wide Web sessions.
1.2 Multimedia Broadcast Multicast Services (MBMS)
MBMS is a point-to-multipoint interface specification for existing and forthcoming 3GPP cellular networks, meant to provide efficient delivery of broadcast and multicast services, both within a cell as well as within the core network.
Fig 1. Multimedia Broadcast Multicast Services (MBMS)
For broadcast transmission across multiple cells, it defines transmission via single-frequency network configurations. Target applications include mobile TV and radio broadcasting, as well as file delivery and emergency alerts.
1.3 Evolved Multimedia Broadcast and Multicast Services (eMBMS)
Evolved multimedia broadcast and multicast services (eMBMS) deliver multimedia multicast streaming and download services in Long Term Evolution (LTE) networks. Although power- and spectrum-efficient, high-quality multimedia multicast in eMBMS remains a challenge. As a multicast system with uplink feedback, eMBMS performance is limited by the capacity of the poorest receivers, because multicast systems choose the modulation and coding scheme (MCS) and the multicast transmission power based on the capacity of those receivers. The MCS decides the transmission rate; since it is decided by the poorest receivers, it prevents users with higher capacity from enjoying higher reception rates. Naive power settings also increase the transmission power to better cover the poor nodes, which results in increased power consumption and interference. There are two different categories of solutions trying to alleviate power consumption in high-quality multimedia multicast over wireless networks.
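The following toy sketch (our own; the MCS table values are invented) illustrates why a single multicast MCS is pinned to the poorest receiver, and how grouping users by SNR lets stronger receivers obtain higher rates:

```python
# Illustrative only: a single multicast MCS must be decodable by the
# weakest user, while per-group adaptation frees the stronger users.

# Assumed toy MCS table: minimum SNR (dB) -> achievable rate (Mb/s)
MCS_TABLE = [(2, 1.0), (6, 2.5), (10, 4.5), (15, 7.0)]

def best_mcs(snr):
    """Highest-rate MCS entry the given SNR can decode."""
    rates = [rate for need, rate in MCS_TABLE if snr >= need]
    return max(rates) if rates else 0.0

user_snrs = [3, 4, 12, 16]
single = best_mcs(min(user_snrs))              # one MCS for everyone
grouped = [best_mcs(s) for s in user_snrs]     # per-group adaptation
print(single)    # 1.0 Mb/s: everyone capped by the poorest receiver
print(grouped)   # [1.0, 1.0, 4.5, 7.0]: high-SNR users do better
```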
1.4 Motivation and Proposed Solution
In multimedia broadcast, one challenge is posed by user-end heterogeneity (e.g., different display sizes, processing capabilities, and channel impairments). Another key component that consumers care deeply about is the battery lifetime of their high-end mobile devices. It is known that real-time multimedia applications demand strict Quality of Service (QoS), but they are also very power-hungry. Given the above user-end constraints, a service provider would look to maximize the number of users served without affecting the Quality of user Experience (QoE). Clearly, attempting to receive broadcast content irrespective of the device constraints is detrimental to battery resource efficiency, wherein low-resolution mobile users suffer from redundant processing of high-end data that the device is not even able to use fully.
There have been a few recent studies that address receiver energy constraints, display limitations and channel dynamics, and source and channel rate adaptation. Yet, to the best of our knowledge, a comprehensive look into an optimal broadcast strategy that jointly caters to both user-specific constraints and network dynamics is still missing.
This paper presents a novel cross-layer optimization framework to improve both user QoE levels and the energy efficiency of wireless multimedia broadcast receivers with varying display and energy constraints. The solution combines user-composition-aware source coding rate (SVC) optimization, optimum time slicing for layer-coded transmission, and a cross-layer adaptive modulation and coding scheme (MCS).
2. RELATED WORKS
In [3], G. Faria, J. Henriksson, E. Stare, and P. Talmola offer a short review of the new Digital Video Broadcasting-Handheld (DVB-H) standard. It is based on the earlier standard DVB-T, which is used for terrestrial digital TV broadcasting. The new extension brings features that make it possible to receive digital video broadcast type services on handheld, mobile terminals. The paper discusses the key technology elements (the 4K mode, in-depth interleavers, time slicing, and additional forward error correction) in some detail. It also gives an extensive range of performance results based on laboratory measurements and real field tests.
In [5], C.-H. Hsu and M. M. Hefeeda propose a new broadcast scheme to achieve energy-saving diversity without incurring long channel switching delays, referred to as Generalized Layer-Aware Time Slicing with Delay Bound (GLATSB). The GLATSB scheme is an extension of the GLATS scheme that aims to reduce channel switching delays. The delay reduction is based on the following observation: long channel switching delays are partly due to the dependency among the different layers.
In [7], W. Ji, Z. Li, and Y. Chen propose a framework for broadcasting a rate-flexible and reliable video stream to heterogeneous devices. The objective is to maximize the total reception quality of heterogeneous QoS users, and the solution is based on joint temporal-spatial scalable video and Fountain coding optimization. Aiming at heterogeneous device characteristics, including diverse display resolutions and variable channel conditions, they introduce a hybrid temporal and spatial rate-distortion metric based on video summarization and user preference. Based on this hybrid metric, they model the total reception quality provisioning problem as a broadcasting utility optimization problem.
In [2], S. Parakh and A. Jagannatham propose a game-theoretic framework for distributed H.264 scalable video bitrate adaptation in 4G wireless networks. The framework employs a pricing-based utility function for video streaming quality optimization. An algorithm is presented for iterative strategy updates of the competing scalable-coded video streaming end users towards an equilibrium allocation. In this context, the existence of a Nash equilibrium for the proposed video bitrate adaptation game is demonstrated, based on the quasi-concavity of the net video utility function. The existence of a Nash equilibrium ensures efficient usage of the 3G/4G bandwidth resources towards video quality and revenue maximization.
In [1], G. Xylomenos compares the group management mechanisms employed in the IP and MBMS multicasting models. After outlining the design of each model, the group management protocols they use are described. The paper then examines how the IP group management protocols can be adapted for MBMS and finally evaluates the group management approach adopted by MBMS. The main findings are that IGMP v.2 is preferable for use with MBMS, that the join/leave group management approach of MBMS outperforms the query/report approach of IP, and that the reliability of the MBMS approach can be further enhanced.
3. OVERVIEW OF THE PROPOSED SYSTEM
3.1 DVB-H System Framework
The proposed overall system architecture is illustrated in Fig. 2. The server encapsulates the SVC-encoded data in real-time transport protocol (RTP) format into IP packets and sends them to the BS. The BS comprises the IP encapsulator, the DVB-H modulator, and the radio transmitter.
The IP encapsulator puts the IP packets into multiprotocol encapsulation (MPE) frames and forms MPE-FEC for burst transmission as per the time slicing scheme. The DVB-H modulator employs adaptive MCS selection for the layered video content and sends it to the radio transmitter for broadcast. The SVC encoding and MPE-FEC framing operations are inter-dependent and jointly optimized based on some underlying parameters (user, channel, and layer information). The optimized video encoding parameters are obtained through a game-theoretic approach and stored in a central database. The UE- and channel-aware user grouping, and the SVC parameter optimization game, are discussed in the following subsections. The UE informs the BS of its capabilities while subscribing to the broadcast service and also updates its signal strength to the BS from time to time. It also has a power manager that helps take advantage of the time slicing scheme and save energy based on its remaining power.
3.2 PERFORMANCE MODELING AND OPTIMIZATION
The broadcast channel rate is R (bps). The multimedia content is encoded into L layers. To decode layer l (1 ≤ l ≤ L), the UE first needs to correctly receive and decode all layers l', 1 ≤ l' < l. Video layer l is allocated rate rl (bps), such that Σ(l=1..L) rl ≤ R.
[Fig. 2: Grouping of users. The BS receives each UE's SNR, identifies the UE, obtains its display resolution, and assigns it to a user group.]
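A minimal sketch of the constraint above, with invented rate values: the allocation must satisfy the rate budget, and a layer is only decodable when all lower layers were received:

```python
# Minimal sketch of the layered-rate constraint: layer l is useful only
# if layers 1..l-1 are also received, and the allocated rates must
# satisfy sum_l r_l <= R. All values are invented for illustration.

R = 10.0                       # broadcast channel rate (Mb/s)
layer_rates = [4.0, 3.0, 2.0]  # r_1..r_L for an L = 3 layer encoding

assert sum(layer_rates) <= R   # feasibility: total layered rate fits R

def decodable_layers(received):
    """Highest layer l such that all layers 1..l were received."""
    l = 0
    for got in received:       # received[i] is True if layer i+1 arrived
        if not got:
            break              # a missing lower layer blocks the rest
        l += 1
    return l

print(decodable_layers([True, True, False]))  # 2: layer 3 is useless alone
```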
3.2.1 Network Creation
The cellular network consists of a number of mobile nodes. The radio resources shared among the UEs are determined by the BS at the time of service subscription, when each UE sends its type information, i.e., the number of layers it wants to receive. The UE periodically updates its channel condition to the BS through the uplink channel.
3.2.2 Time Slicing as an Energy Saving Measure
A single-cell broadcast scenario is considered. Multimedia content delivery is done from the BS and managed jointly with a connected media server. The wireless user equipments (UEs) have varying display resolutions and battery capabilities. Based on the user characteristics in the cell and their SNRs, the media server suitably encodes the source content in the H.264/SVC standard of DVB-H. The broadcast over the physical channel is OFDM-based. A UE, depending on its current status, may choose to receive all or part of the broadcast content (layers) by exploiting the time-sliced transmission feature of DVB-H. Fig. 1 illustrates a representative system, where L layers and T user types are considered. For example, L = 14 in the standard 'Harbor' video sequence.
The time slicing approach allows discontinuous reception at the UEs, facilitating a UE to turn off its radio when not receiving data bursts and hence save energy.
[Fig. 3: Time slicing as an energy saving measure. Encoded content is time-sliced; the UE decodes the bursts it needs and skips the rest to save energy.]
In time slicing-based layered broadcast, the UEs know a priori which layer constitutes each MPE-FEC frame (burst). As shown in Fig. 4, each layer corresponds to a different burst within the recurring window. This allows a UE to safely skip the bursts containing the layers that are irrelevant to it, and thereby save energy. Each MPE-FEC frame consists of two parts: an Application Data Table that carries the IP packets, and an R-S (Reed-Solomon coding) Data Table that carries the parity bits.
Fig. 4. Time slicing based DVB-H broadcast scheme
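The energy benefit of skipping bursts can be approximated with a short sketch; the burst and window durations are assumed values, and radio wake-up overhead is ignored:

```python
# Sketch of time-sliced reception (our own illustration): a UE wakes
# only for bursts carrying layers it actually uses, and sleeps otherwise.

BURST_MS = 100                # assumed burst duration within the window
WINDOW_MS = 1000              # assumed recurring window length
TOTAL_LAYERS = 5              # bursts per window, one layer per burst

def on_air_fraction(wanted_layers):
    """Fraction of the window the radio must stay on."""
    return wanted_layers * BURST_MS / WINDOW_MS

for want in (1, 3, 5):
    saving = 1.0 - on_air_fraction(want)
    print(f"layers={want}: radio on {on_air_fraction(want):.0%}, "
          f"energy saving ~{saving:.0%}")
# A low-end UE decoding only the base layer idles 90% of the window.
```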
3.2.3 Video Quality Model
The video quality Q(q, t) is a parametric function that best approximates the Mean Opinion Score (MOS). MOS is a subjective measure that indicates the user QoE level: a MOS of 5 corresponds to 'excellent' quality, 4 is 'good', 3 is 'fair', 2 is 'poor', and 1 is 'bad'. The parameters of the quality model are specific to a video, based on its inherent features.
3.2.4 Virtual Tail Time Mechanism and Dual Queue Scheduling Approach
In our approach, besides user- and channel-aware SVC rate optimization at the application layer and time slicing at the link layer, adaptive MCS is applied at the physical layer, optimized for enhanced energy efficiency and network capacity. Clearly, this adaptation is a function of the heterogeneous user composition in a cell and the dynamic physical channel rate constraint. Physical channel dynamics are accounted for on a slow (shadow fading) scale, to avoid the high bandwidth overhead of frequent channel state feedback and of recomputing the coding and MCS optimizations at the BS as well as at the video server.
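The virtual tail time mechanism and dual queue scheduling named in this subsection's heading are described only briefly in the abstract; the following rough sketch is our reading of that idea, with invented names: delay-tolerant requests are queued and batched into an already-active radio period so that they do not open new tails of their own.

```python
# Rough, assumption-laden sketch of a dual-queue "tail theft" scheduler:
# real-time traffic is served immediately, while deferrable traffic is
# batched/prefetched whenever the radio is already on (a virtual tail).

import collections

realtime_q = collections.deque()   # served at once, may open a tail
deferrable_q = collections.deque() # batched during active periods

def schedule(request, radio_active):
    if request['delay_tolerant']:
        deferrable_q.append(request)       # wait for a piggyback chance
    else:
        realtime_q.append(request)
    sent = []
    while realtime_q:
        sent.append(realtime_q.popleft())  # real-time traffic goes now
    if radio_active or sent:               # radio is (or just became) on:
        while deferrable_q:
            sent.append(deferrable_q.popleft())  # steal the tail time
    return sent

print(schedule({'id': 1, 'delay_tolerant': True}, radio_active=False))  # []
print(schedule({'id': 2, 'delay_tolerant': False}, radio_active=False)) # both
```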
3.2.5 Video Reception Quality Measure
For a fair comparison of the reception quality achieved by the different competitive strategies, we define a video reception quality measure.
4. EXPERIMENTAL RESULTS
4.1 Energy-Quality Trade-Off Performance with the Time Slicing Technique
For instance, with 90% type-1 users, the joint optimization approach results in an energy saving of more than 90% for the UEs, with approximately 20% quality performance. This is because more than 90% of the users are energy-constrained, and the objective is to satisfy these users in terms of their energy saving. It is also notable that, since each user has independent control of time-sliced reception, even though the high-end users may not achieve the maximum desired quality due to the system being optimized for the large proportion of low-end users, they can still improve their QoE through the time slicing flexibility.
4.2 Adaptive MCS Performance
It can be noticed that the adaptive MCS outperforms the other two MCS schemes in terms of the number of served users. Moreover, with the adaptive MCS the received number of layers is very close to the requested number of layers, reflecting a higher degree of user satisfaction.
4.3 Energy-Quality Trade-Off with Optimized SVC and Adaptive MCS
Under the six user-heterogeneity scenarios with adaptive MCS, when compared with the 'ES only' strategy, the 'ES+Q' strategy offers, on average, about 43% higher quality. The corresponding trade-off in the amount of energy saving is only about 8%. With respect to the 'Q only' scenario, the 'ES+Q' scheme offers about 17% extra energy saving as well as about 3.5% higher quality performance.
[Bar chart: average energy saving and quality performance of the existing and proposed systems.]
Performance comparison
Method              Average Energy Saving    Quality Performance
Existing System     75%                      63%
Proposed System     90%                      92%
The proposed cross-layer optimization solution improves both the quality of user experience (QoE) and the energy efficiency of wireless multimedia broadcast receivers with varying display and energy constraints. This joint optimization is achieved by grouping the users based on their device capabilities and the estimated channel conditions they experience, and broadcasting adaptive content to these groups.
5. CONCLUSION
This paper presents a novel cross-layer optimization framework to improve both user QoE levels and the energy efficiency of wireless multimedia broadcast receivers with varying display and energy constraints. The solution combines user-composition-aware source coding rate (SVC) optimization, optimum time slicing for layer-coded transmission, and a cross-layer adaptive modulation and coding scheme (MCS). The joint optimization is achieved by grouping users based on their device capabilities and the estimated channel conditions they experience, and broadcasting adaptive content to these groups. The optimization is a game-theoretic approach that performs an energy saving versus reception quality trade-off and obtains optimum video encoding rates for the different users. This optimization is a function of the proportion of users in a cell with different capabilities, which in turn determines the time slicing proportions for the different video content layers, maximizing the energy saving of low-end users while maximizing the reception quality of the high-end users. The optimized layered coding rates, coupled with the receiver groups' SNRs and the adaptation of the tail time mechanism for transmission of different layers, ensure that a higher number of users are served while also improving the users' average reception quality. Thorough testing has shown that the proposed optimization solution delivers better performance for multimedia broadcast over cellular networks in comparison with existing techniques.
REFERENCES
[1] G. Xylomenos, "Group Management for the Multimedia Broadcast/Multicast Service", in Proc. IST Mobile Summit, Dresden, Germany, 2005.
[2] S. Parakh and A. Jagannatham, "Game Theory Based Dynamic Bit-Rate Adaptation for H.264 Scalable Video Transmission in 4G Wireless Systems", 2012.
[3] G. Faria, J. Henriksson, E. Stare, and P. Talmola, "DVB-H: Digital Broadcast Services to Handheld Devices", 2006.
[4] M. Ghandi and M. Ghanbari, "Layered H.264 Video Transmission with Hierarchical QAM", J. Vis. Commun. Image Represent., Vol. 17, No. 2, pp. 451-466, Apr. 2006.
[5] C.-H. Hsu and M. M. Hefeeda, "Flexible Broadcasting of Scalable Video Streams to Heterogeneous Mobile Devices", 2011.
[6] Y.-C. Chen and Y.-R. Tsai, "Adaptive Resource Allocation for Multi-Resolution Multicast Services with Diversity in OFDM Systems", in Proc. IEEE VTC, Barcelona, Spain, Apr. 2009, pp. 1-5.
[7] W. Ji, Z. Li, and Y. Chen, "Joint Source-Channel Coding and Optimization for Layered Video Broadcasting to Heterogeneous Devices", 2012.
[8] Z. Liu, Z. Wu, P. Liu, H. Liu, and Y. Wang, "Layer Bargaining: Multicast Layered Video over Wireless Networks", IEEE J. Sel. Areas Commun., Vol. 28, No. 3, pp. 445-455, Apr. 2010.
[9] Q. Du and X. Zhang, "Statistical QoS Provisionings for Wireless Unicast/Multicast of Multi-Layer Video Streams", IEEE J. Sel. Areas Commun., Vol. 28, No. 3, pp. 420-433, Apr. 2010.
[10] O. Alay, T. Korakis, Y. Wang, and S. Panwar, "Dynamic Rate and FEC Adaptation for Video Multicast in Multi-Rate Wireless Networks", Mobile Netw. Appl., Vol. 15, No. 3, pp. 425-434, Jun. 2010.