INFORMATION TECHNOLOGY - ASM Group of Institutes

Transcription

ASM’s International E-Journal on ‘Ongoing Research in Management & IT’
E-ISSN – 2320-0065
INFORMATION TECHNOLOGY
In Association With
Disclaimer
The responsibility for the originality, authenticity, style and contents of the Research Papers/Abstracts printed in this conference proceeding of the “International Conference on Ongoing Research in Management and IT”, organised by the ASM Group of Institutes, Pune-411018, Maharashtra, India, remains with the respective Author(s) of the Paper/Abstract. The organising committee of the conference does not necessarily agree with the views expressed in the articles. All the Papers/Abstracts included in the e-Journal were received through email and are incorporated as such.
ASM Group of Institutes, Pune- 411018
From The Editor’s Desk
INCON, the “International Conference on Ongoing Research in Management and IT”, is an important expression of the ASM Group of Institutes’ commitment to quality research in academics. This “ASM’s International E-Journal on Ongoing Research in Management & IT” is an outcome of the dedicated contributions of all the authors. INCON is a truly international conference, well appreciated by delegates and participants from various countries. Audyogik Shikshan Mandal has been playing a pioneering role in the field of creative education ever since its inception in 1983. With its mission of “Excellence in Management Education, Training, Consultancy and Research for success”, ASM is marching towards excellence, with more than 55,000 alumni working at all levels of management. ASM has a global vision for education and research. As part of this academic commitment to excellence, the 11th edition of INCON is held in association with Savitribai Phule Pune University, AMMI, CETYS University Mexico, the Indo European Centre Poland and City University of Seattle USA, who are also academic partners for various activities. ASM is spreading its wings across borders to continuously upgrade academic excellence. This book will give readers a spectrum of contemporary issues in management and the probable solutions that can be derived from them. It aims to be a strong link between industry and academia and to act as a catalyst for knowledge sharing between the two. ASM is a common platform for academic scholars and champions from industry to come together for the common cause of developing innovative solutions to the problems faced by society and business entities. ASM looks forward to being a strong link and partner for society and industry in developing workable solutions for day-to-day problems. We believe our success is the teamwork of the various contributors to this book. ASM remains committed to excellence in academic research and consultancy.
Dr. Asha Pachpande
Managing Trustee and Secretary,
Audyogik Shikshan Mandal,
Pune -411019 (India)
EDITORIAL BOARD
Dr. Asha Pachpande Managing Trustee and Secretary, Audyogik Shikshan Mandal
Dr. Sandeep Pachpande Chairman, Audyogik Shikshan Mandal
Dr. Santosh Dastane Director Research, ASM Group of Institutes
Dr. S. B. Mathur Director General, ASM’s IIBR
Dr. Sudhakar Bokephode Director, ASM’s IPS
Dr. G.B. Patil Dean, ASM’s IPS
Dr. Nandkumar Khachane Director, ASM’s IIBR
Dr. K. C. Goyal Professor, ASM’s IIBR
Dr. J. N. Shah Director, ASM’s IMCOST
Prof. Ashish Dixit Director, ASM’s ICS
Dr. Dhananjay Bagul Principal, ASM’s CSIT
Dr. Nirmala K Director MCA, ASM’s IBMR
Dr. Priti Pachpande Associate Professor, ASM’s IBMR
CONTENTS
Sr. No. | Title of the paper | Name of the authors | Page No.
1 | Unleashing The Power Of Communication Service Provider’s Network Capabilities for Delivering and Charging Of Cloud Services | Rahul Wargad | 1-8
2 | Essentials of Data Privacy in Data Mining | Asha Kiran Grandhi, Dr. Manimala Puri, S. Srinivasa Suresh | 9-15
3 | Study and usage of Android Smart Phones with Enterprise Resource Planning (ERP) in Education Sector | Dr. Shivaji D. Mundhe, Prof. Prashant N. Wadkar, Prof. Swati Jadhav | 16-19
4 | Cloud Computing: An Emerging Technology in IT | Gaurav Prakashchand Jain | 20-28
5 | Role of IOT in Smart Pune | Ms. Sonali Parkhi, Ms. Divya Kharya | 29-38
6 | Blue Brain | Dipali Tawar | 39-44
7 | Nanotechnology And Its Applications In Information Technology | J.K. Lahir | 45-53
8 | Learning Analytics with Bigdata to Enhance Student Performance with Specific Reference to Higher Education Sector in Pune | Prof. Neeta N. Parate, Prof. Swati Jadhav | 54-63
9 | Solution on Mobile device security threats | Arti Tandon, Prof. Sumita Katkar, Rimple Ahuja, Ritu Likhitkar | 64-68
10 | Big Data and Agricultural System | Mrs. Urmila Kadam | 69-75
11 | The Internet of Things and Smart Cities: A Review | Rimple Ahuja, Prof. Leena Patil, Arti Tandon | 76-81
12 | Solution to problems encountered in Geo-Spatial Data Mining in Distributed Database Management | Prof. Hidayatulla Pirjade, Prof. Sudhir P. Sitanagre | 82-85
13 | A Study of Digital India applications with specific reference to Education sector | Rama Yerramilli | 86-94
14 | Build a Classification Model for Social Media Repositories Using Rule Based Classification Algorithms | Prof. Yogesh Somwanshi, Dr. Sarika Sharma | 95-101
15 | Using Data Mining Techniques to Build a Classification Model for Social Media Repositories: Performance of Decision Tree Based Algorithm | Prof. Yogesh Somwanshi, Dr. Sarika Sharma | 102-107
16 | NDIS Filter Driver for vSwitch in Hyper-V OS | Mr. Gajanan M. Walunjkar | 108-112
17 | Email security: An important issue | Prof. Nilambari Kale | 113-119
18 | Internet of Things (IoT) – A Review of Architectural elements, Applications & Challenges | Ramesh Mahadik | 120-127
19 | Edge Detection Methods in Image Processing | Tripti Dongare-Mahabare | 128-133
20 | “Internet of Things (IoT): Revolution of Internet for Smart Environment” | Mr. Rahul Sharad Gaikwad | 134-142
21 | Implementation of Effective Learning Model for improvement of Teaching learning Process in Selected schools | Prof. Anita Anandrao Patil | 143-150
22 | “Big Data: Building Smart Cities in INDIA” | Mr. Rahul Sharad Gaikwad | 151-159
23 | String Matching and Its Application in Diverse Field | Ms. Urvashi Kumari | 160-166
24 | Cloud Computing By Using Big Data | Mrs. Sandhya Giridhar Gundre, Mr. Giridhar Ramkishan Gundre, Prof. A. V. Nikam, Dr. Sarika Sharma | 167-172
25 | Internet Of Things – Prospects And Challenges | Prof. Reena Partha Nath, Prof. Archana A. Borde | 173-177
26 | “Multilingual issues and challenges in global E-commerce websites” | Ms. Sonali B. Singhvi, Prof. Vijaykumar Pattar | 178-184
27 | Study of Recommender System: A Literature Review | Mrs. A. S. Gaikwad, Prof. Dr. P. P. Jamsandekar | 185-195
28 | Significance Of Data Warehouse For ERP System | Varsha P. Desai | 196-200
29 | Cyber Crime – A Threat to Persons, Property, Government and Societies | Shruti Mandke | 201-207
30 | The Security aspects of IOT and its counter measures | Mr. Ravikiant Kale, Mrs. Kalyani Alishetty | 208-216
31 | Ethical Hacking: A need of an Era for Computer Security | Mr. Ravikiant Kale, Mr. Amar Shinde | 217-224
32 | A Review Of Mobile Applications Usability Issues And Usability Models For Mobile Applications In The Context Of M-Learning | Mr. Kamlesh Arun Meshram, Dr. Manimala Puri | 225-232
33 | A Comparative Study Of E-Learning And Mobile Learning In Higher Education – Indian Perspective | Mr. Kamlesh Arun Meshram, Dr. Manimala Puri | 233-241
34 | A Comprehensive Study on the Best Smartphones Available in Market in Year 2015 | Rinku Dulloo, Dr. Manimala Puri | 242-248
35 | Exploring the Best Smartphones upcoming Features of Year 2016 | Rinku Dulloo, Dr. Manimala Puri | 249-253
36 | Test Data Management – Responsibilities And Challenges | Ruby Jain, Dr. Manimala Puri | 254-259
37 | Providing Data Confidentiality By Means Of Data Masking: An Exploration | Ruby Jain, Dr. Manimala Puri | 260-266
38 | A study of critical factors affecting accuracy of cost estimation for Software Development | Mrs. Shubhangi Mahesh Potdar, Dr. Manimala Puri, Mr. Mahesh P. Potdar | 267-273
39 | Search Engine Goals By Using Feed Back Session | Mrs. Sandhya Giridhar Gundre, Mr. Giridhar Ramkishan Gundre | 274-279
40 | Wireless & Mobile Technology | Apeksha V. Dave | 280-289
41 | To Study Project Cost Estimation, Scheduling and Interleaved Management Activities | Mrs. Shubhangi Mahesh Potdar, Dr. Manimala Puri, Mr. Mahesh P. Potdar | 290-294
42 | Privacy Preserving Approaches on Medical Data | Asha Kiran Grandhi, Dr. Manimala Puri, S. Srinivasa Suresh | 295-301
43 | Concepts of virtualization: its applications, and its benefits | Ms. Sunita Nikam | 302-312
44 | Study Of Software Testing Tools Over Multiple Platforms | Prof. Vaishali Jawale | 313-318
45 | “A Bird view of Human Computer Interaction” | Prof. Vijaykumar Pattar, Ms. Monali Reosekar, Prof. Alok Gaddi | 319-324
46 | ICT Applications for the Smart Grid: Opportunities and Policy Implications | Sharad Kadam, Dr. Milind Pande | 325-332
47 | The Study of Implementing Wireless Communication through Light Fidelity (Li-Fi) | Ms. Anjali Ashok Sawant, Ms. Lavina Sunil Jadhav, Ms. Dipika Krishna Kelkar | 333-338
48 | Green Cloud Computing Environments | Mrs. Kalyani Alishetty, Mr. Amar Shinde | 339-352
49 | Information and Communications Technology as a General-Purpose Technology | Mrs. Vaishali Bodade | 353-363
50 | Study Of ERP Implementation In Selected Engineering Units At MIDC, Sangli | Varsha P. Desai | 364-369
51 | To study the use of biometric to enhance security in E-banking | Mrs. Deepashree K. Mehendale, Mrs. Reshma S. Masurekar | 370-377
52 | Extraction of Top K List By Web Mining | Priyanka Deshmane, Dr. Pramod Patil | 378-382
53 | Bioinformatics and Connection with Cancer | Ms. Sunita Nikam, Ms. Monali Reosekar | 383-390
54 | A Perspective About Mobile Cloud Computing: With Respect Of Trends, Need, Advantages, Challenges, Security & Security Issues | Prof. Pritibala Sudhakar Ingle | 391-399
55 | The Survey of Data Mining Applications And Feature Scope | Ms. Jayshree Kamble, Mrs. Jyoti Tope | 400-416
56 | Sewing Machine Power Generator | Ms. Priya Janjalkar | 417-425
57 | A Comparative Study of M-Learning Authoring Tools in Teaching and Learning | Mrs. Sonali Nemade, Mrs. Madhuri A. Darekar, Mrs. Jyoti Bachhav | 426-431
58 | Cloud Computing | Sunaina Koul | 432-437
59 | Survey of Incremental Affinity Propagation Clustering | Swati T. Kolhe, Prof. Bharati Dixit | 438-443
60 | Distributed Operating System | Rashmi V. Gourkar | 444-446
61 | A Survey on Sentiment Analysis Data Sets and Techniques | Supriya Sharma, Bharati Dixit | 447-451
62 | Network Security: Attacks and Defence | Charushila Patil | 452-460
ASM’s International E-Journal on
Ongoing Research in Management & IT
E-ISSN – 2320-0065
Unleashing The Power Of Communication Service Provider’s Network
Capabilities for Delivering and Charging Of Cloud Services
Dr. Manimala Puri,
Director, JSPM,
Pune, India
manimalap@yahoo.com
Rahul Wargad,
Research Scholar,
AIMS, Pune 411001, India
rahulbw@gmail.com
ABSTRACT :
Cloud computing has aimed at allowing access to large amounts of computing
power in a fully virtualized manner, by aggregating resources and offering a single
system view. In addition, an important aim of these technologies has been delivering
computing as a utility. Utility computing describes a business model for on-demand
delivery of computing power; consumers pay providers based on usage (“pay-as-you-go”), similar to the way in which we currently obtain services from traditional public utilities such as water, electricity, gas, and telephony.
In the current telecommunications environment of declining traditional revenues,
where “Over the Top” (OTT) services are relegating operators to bit pipe suppliers,
Cloud computing provides a lucrative opportunity. Working via large platforms owned
by providers and shared by numerous users makes Cloud computing less expensive.
Telco operations have many network components and software, which can be combined
well with Cloud services. Each software component can be treated like a hosted
application, which yields parallels in processing and performance. These services
promise to at least increase network utilization and thus transport revenues.
Furthermore, operators can realistically extract two revenue streams from the same
function, charging both end-users and Cloud service operators for a given service
quality.
It would be very challenging to integrate cloud computing with the Next
Generation Network (NGN) in the telecommunication domain. Telecom operators can
become key players in cloud computing value chain as they provide last mile
connectivity, own high transport capacity and billing systems.
Keywords: Cloud Computing, Communication Service provider, Charging of Cloud
Services, Delivering of cloud services, provisioning of cloud services.
Introduction:
Communication Service Providers (CSP’s) have already established customer
relationships and have billing experience, essential for business. It would be comparatively
easy for a CSP to account for the cloud services utilised by the end user and then charge him for the same. The key issues would be integration with the cloud and revenue sharing.
With the evolution of Fourth Generation (LTE) technologies and FTTH access technology, integration of the CSP's network with cloud service providers would open new revenue streams for the CSP and give it a competitive edge over competitors.
With increasing interest in cloud computing over the past few years, a major question
that needs to be answered is how these technologies, concepts, and capabilities can be
incorporated into the CSP's network. Cloud computing systems would benefit from the global
reach of existing telecommunications networks, and the stability of carrier-grade networks.
This paper focuses on the solutions to integrate the telecommunication network
infrastructure with clouds, delivering cloud services and billing for the cloud usage.
Background:
With their existing backbone and access network infrastructure and the adoption of all-IP networks and Software Defined Networking (SDN), CSPs have many unique advantages over other cloud providers. The requirement to access cloud services on the move plays to operators' strengths.
The integration of Cloud technology with Telecommunication infrastructures provides
new revenue streams for Telecom operators through their converged fixed, mobile and data
services and by offering “cloud computing data” services with Next Generation Network
infrastructure features. This paper analyses interconnection scenarios for combining Cloud-based systems and an Evolved Packet Core (EPC) to offer new value added applications in a
unified architecture. The SDN, EPC and IP Multimedia Subsystem (IMS) are described as
possible vehicles for the integration.
CSP's can align themselves in the cloud value chain by offering cloud services and
facilitating connectivity between cloud data centres.
Cloud Computing:
Cloud computing refers to a paradigm shift where computing is moved away from
personal computers or an individual application server to a cloud of computers. End-users of a
Cloud service do not need to be aware of the underlying details of how the computing and
processing occurs. These underlying details are what tie the Cloud and Grid computing
environments together. In Cloud computing, computer resources are pooled together and
managed by software that can be based on the distributed grid computing model. Essentially,
Cloud computing has evolved from grid computing. Grid computing typically refers to a
single high performance application that is possibly batch scheduled, while Cloud computing
is often transactional and offers a variety of on demand services.
The Cloud computing concept incorporates the delivery of Infrastructure (IaaS),
Platform (PaaS) and Software as a Service (SaaS). Cloud computing intends to offer resources
with the following characteristics: flexibility; abstracted resources featuring scalability, pay as
you use model, reliability and performance.
The Cloud architecture comprises hardware and software that communicate with each
other over application programming interfaces, usually web services. Cloud computing
platforms have strong support for virtualization, dynamically composable services with Web
Service interfaces, and strong support for creating 3rd party value added services by building
on computation, storage, and application services. Nowadays, end user equipment is able to
run applications within virtual machines efficiently. Virtual machines allow both the isolation
of applications from the underlying hardware and other virtual machines, and the
customization of the platform to suit the needs of the end-user.
Providers can expose applications running within virtual machines or provide access to
them as a service, which allows users to install and run their own applications. The Cloud
architecture extends to the client, where web browsers and/or software applications access
Cloud applications.
CSP’s Role In Providing Connectivity Between Cloud Data Centres
Scenario 1:
First, resources in “the cloud” would typically be located in a set of cloud provider data
centers. These data centers are typically geographically distributed throughout a provider
network. While single location enterprises certainly exist, it is common today for enterprise
networks to be distributed across different geographic locations.
Fig. 1: Cloud Connectivity – Scenario 1
Finally, enterprises often have their own enterprise data centers, which might be located
at a main enterprise site (the “hub site” in Figure), with remote locations (“spoke sites”)
accessing resources in that data center via network services provided by the network provider,
e.g., via a virtual private network (VPN). The enterprise uses resources in the cloud, i.e., resources in a cloud provider data center, when not enough resources are available in the enterprise data center.
Scenario 2:
Cloud services, as distributed resources, depend on network performance and would
suffer without good connectivity within and between them. Managing cloud connectivity is the
most natural value-adding activity for telecom operators with their expertise in connecting and
managing networks. CSP’s are in a unique position to provide managed connectivity between
cloud users and third-party providers, offering flexibility in network resources both in real time and on demand.
In this role, they are essentially intermediating brokers, enabling users to switch cloud
vendors without worrying about network-related details. In addition, operators can offer
connectivity services according to pre-determined agreements and choose to employ network-based techniques, including caching, optimisation and data acceleration, to enhance the user
experience of cloud applications.
Fig. 2 : Cloud Connectivity – Scenario 2
Integrating Clouds With The CSP’s Network Infrastructure For Delivering Cloud Services
IMS and NGN/SDN service architectures could fit comfortably in Cloud environments.
The existing IMS enablers including capability negotiation, authentication, service invocation,
addressing, routing, group management, presence, provisioning, session establishment, and
charging, could be provided via standardized interfaces to the Cloud domain. Cloud service
providers could also benefit from the unified management and control architecture and a
bigger user base.
A unified EPC/IMS and Cloud domain has advantages for all stakeholders including
end-users (more usage and service options), network operators (different business models) and
Cloud service providers (larger user community).
Basically, there is a gap on how to offer Cloud services to NGN users and vice versa. For instance, in order to provide a Cloud-based video download service over NGN or an NGN-based video download service over Cloud, the mechanisms used are different. NGN may
require more information for authentication and authorization than Cloud. Therefore it might
not be possible to offer a Cloud service that is realized by NGN.
The current NGN model will need to be extended in order to support Cloud services.
Figure below proposes the enhancement of the current EPC model to support cloud
functionalities. This scenario is to provide the interconnection interfaces needed for the core
infrastructure to be cloud-ready. In other words, the extension of the already standardized
interfaces to cope with the principal cloud requirements should be available.
Fig. 3: Implementation of Cloud functions in the EPC
The main idea is to create a convergence platform that is capable of composing specific
services to meet the needs of different groups of users, and therefore cover specific niches
requiring the features from NGN with the flexibility and cost efficiency of clouds.
The Resource and Admission Control Subsystem (RACS) is defined as a functional
element within the NGN to expose resource management functions to applications. The RACS
utilizes policy based management to control access to resources. Applications compose
authorization requests; the RACS authorizes requests based on policies that can be static or
dynamic, and reserves these resources by configuring the physical devices in the transport
plane.
Fig. 4 : RACS role player
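To make the policy-based control idea above concrete, the following is a minimal sketch of a RACS-like admission check in Python. The class name, policy table and bandwidth figures are invented purely for illustration; they do not correspond to any standardized NGN interface.

    # Illustrative sketch only: a minimal policy-based admission check in the
    # spirit of the RACS described above. Policy rules and values are invented.
    from dataclasses import dataclass

    @dataclass
    class ResourceRequest:
        application: str
        bandwidth_mbps: int
        priority: str  # e.g. "gold", "silver", "best_effort"

    # Static policies: per-priority bandwidth ceilings (hypothetical values).
    POLICY_LIMITS = {"gold": 500, "silver": 200, "best_effort": 50}

    class SimpleRacs:
        def __init__(self, total_capacity_mbps: int):
            self.available = total_capacity_mbps

        def authorize(self, req: ResourceRequest) -> bool:
            """Authorize a request against static policy and current capacity."""
            limit = POLICY_LIMITS.get(req.priority, 0)
            if req.bandwidth_mbps > limit or req.bandwidth_mbps > self.available:
                return False
            # "Reserve" the resource by decrementing available transport capacity.
            self.available -= req.bandwidth_mbps
            return True

    if __name__ == "__main__":
        racs = SimpleRacs(total_capacity_mbps=1000)
        print(racs.authorize(ResourceRequest("cloud-video", 300, "gold")))        # True
        print(racs.authorize(ResourceRequest("bulk-backup", 100, "best_effort"))) # False

In a real deployment the policies could also be dynamic, as the paper notes, with the RACS configuring transport-plane devices rather than a simple counter.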
Integration Of Service Provider SDN, and Network Functions Virtualization (NFV)
approaches with Cloud
Communication Service Provider brings SDN capabilities to the network (outside the
data center), with policy-based and centralized control for improved network programmability
and payload elasticity.
NFV enables applications to share network resources intelligently, and be orchestrated
very efficiently. NFV entails implementing network functions in software, meaning they can
be instantiated from anywhere in the operator’s network, data center or consumer or enterprise
customer premises.
Cloud-based approaches enable network operators to ensure rapid service creation and
rollout by delivering new levels of flexibility, scalability and responsiveness. They also satisfy
the growing expectations for service performance and QoE, while handling ever-increasing
traffic loads. Operators can make use of NFV, SDN and cloud technology in three ways:
Although these scenarios are all quite different, they share some common requirements,
and operators can benefit from the implementation of a common platform across all three
scenarios.
Network-enabled Cloud delivers the flexibility and elasticity to deploy software
applications and virtualized network functions wherever they are needed in the network. This
improves time to market and enhances innovation, QoE and network efficiency.
Charging and Billing of Cloud Services by CSP
Infrastructure as a service (IaaS)
Different hardware resources are provided through Infrastructure as a Service (IaaS).
Charging model examples include the following:
- CPUs: CPUs are differentiated by power and number of CPU cores and, consequently, price. CPU power may be differentiated by time zone, e.g., static (based on peak and off-peak resources) or dynamic, where the price is determined by demand at the time. An extreme example of this concept is a two-way negotiated price between buyer and seller, similar to the priceline.com model in e-commerce, where the buyer states the price he is willing to pay per unit and the seller may accept or refuse it.
- Server Type: Because the same CPU can be deployed either in a low cost server or in a top-of-the-range server with high availability and a significantly different cost point, the customer price must reflect this variation.
- System Administration: The same server type resource may be charged at a different rate depending on the operating system (e.g., Windows or Linux).
- Storage (DASD): Different storage capacity (including mirroring) is available, as well as different types of storage reflecting different price points from disk storage suppliers. The same variability exists as in the case of CPUs. For example, the cost for 1 GB will vary depending on whether it is provided in a low- or high-end unit.
- Disaster Recovery: This involves the time window within which SaaS would need to be available should a disaster take out a data center from which SaaS is provided. For a short time window, an active-active deployment in two data centers may be required.
- Other: Charges for space, power, network capacity, security, operating system and so on are built into the infrastructure pricing.
- Service Level Agreements (SLAs): If high availability is part of the agreement, SLAs may impact the price (e.g., refunds when contractual SLAs are not achieved).
Billing for IaaS may be done based on the quantity and quality of the infrastructure resources provided, as sketched below.
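As a concrete illustration of the pay-as-you-go model described above, the following minimal Python sketch meters CPU hours at peak and off-peak rates, adds storage, and applies a flat refund when a contractual SLA is missed. All rates, resource types and the 10% refund rule are assumptions made for this example, not figures from the paper.

    # Minimal pay-as-you-go charging sketch for the IaaS model outlined above.
    # All rates and the SLA refund rule are hypothetical.
    PEAK_RATE_PER_CPU_HOUR = 0.12      # currency units, peak time zone
    OFF_PEAK_RATE_PER_CPU_HOUR = 0.07  # currency units, off-peak time zone
    STORAGE_RATE_PER_GB_MONTH = 0.05

    def iaas_charge(peak_cpu_hours, off_peak_cpu_hours, storage_gb,
                    sla_target=99.9, measured_availability=100.0):
        """Compute a usage-based bill and apply a flat 10% refund if the SLA is missed."""
        compute = (peak_cpu_hours * PEAK_RATE_PER_CPU_HOUR
                   + off_peak_cpu_hours * OFF_PEAK_RATE_PER_CPU_HOUR)
        storage = storage_gb * STORAGE_RATE_PER_GB_MONTH
        total = compute + storage
        if measured_availability < sla_target:
            total *= 0.90  # contractual refund when the SLA is not achieved
        return round(total, 2)

    if __name__ == "__main__":
        print(iaas_charge(peak_cpu_hours=120, off_peak_cpu_hours=300,
                          storage_gb=500, measured_availability=99.5))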
Platform As A Service (PaaS)
PaaS includes software frameworks and the necessary hardware in which to develop and
deliver Software as a Service (SaaS). Examples of such frameworks include the following:
- Different hardware architectures with different server sizes, from small Intel-based servers to mid- or top-range servers and mainframes, utilizing different chips
- Various software operating systems (e.g., Windows, Linux, Mac OS, Solaris, z/OS, and so on)
- Various development and application frameworks (e.g., Java, .Net)
- Solution stacks (e.g., LAMP, MAMP, WINS, and so on)
Billing must take into account Infrastructure as a Service (IaaS) costs, as well as software features and product offerings provided in the PaaS layer. Different frameworks have different prices and may include different infrastructures. All of this, together with usage, needs to be taken into account.
Software As A Service (SaaS)
Software as a Service (SaaS) may be delivered as a single or multi cloud offering. An
example of a single cloud offering is Unified Communications (UC), which consists of
different modules. An example of a multi-cloud offering is one including UC and ERP clouds.
The assumption is that the cloud provider deploys all third party products necessary to run
such offerings into the cloud.
Single Cloud Offerings
Different providers may offer different packages in a single cloud offering. The offering
may also include third party software and, where such software is commercial and carries a
license (e.g., a database), a sublicensing model would need to be devised between the provider
of the solution and its supplier. (Note: cloud licensing models for embedded commercial
packages are slowly evolving because the static “per user” or “per server” models do not work
in the cloud environment).
Multi-Cloud Offerings
A typical enterprise has a number of different software suppliers for different
applications. If such software is supplied out of the cloud, cloud integration and corresponding
convergent billing is needed for the overall enterprise solution.
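A hedged sketch of the convergent billing idea described above: charges raised independently by several cloud offerings are rolled up into a single invoice per enterprise customer. The service names and amounts below are purely illustrative.

    # Sketch of convergent billing for a multi-cloud offering: charges from
    # separate cloud services (e.g. a UC cloud and an ERP cloud) are aggregated
    # into one invoice. Customer, service and amount values are illustrative.
    from collections import defaultdict

    def convergent_invoice(charge_records):
        """charge_records: iterable of (customer, service, amount) tuples."""
        invoices = defaultdict(lambda: {"lines": [], "total": 0.0})
        for customer, service, amount in charge_records:
            invoices[customer]["lines"].append((service, amount))
            invoices[customer]["total"] += amount
        return dict(invoices)

    if __name__ == "__main__":
        records = [
            ("acme-corp", "UC cloud (voice + conferencing)", 420.00),
            ("acme-corp", "ERP cloud (finance module)", 1150.00),
            ("acme-corp", "network connectivity (VPN, QoS)", 260.00),
        ]
        for customer, inv in convergent_invoice(records).items():
            print(customer, inv["total"], inv["lines"])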
Conclusion
Cloud computing is a new paradigm delivering IT services and infrastructure as
computing utilities. NGN operators can benefit from this new paradigm and extend the NGN
infrastructure in order to support clouds and generate more revenues. This paper has presented
briefly the concept of Cloud computing and discussed standardization gaps regarding
interoperability between this technology and the TISPAN NGN reference model. Integration
scenarios have been proposed for integrating Cloud and NGN. This unified architecture allows
for the integration of IMS and Cloud services, which could result in new lucrative and
innovative services.
The usage of resources like computing power and storage in grid services is analogous to
transport plane resources, like bandwidth, that are consumed by IMS sessions. Requests from
cloud applications can be thought of as similar to authorization and resource requests initiated
by IMS application servers. The management of these resources should be flexible and under
the control of the operator, preferably managed by static and dynamic policies.
Operators are in a unique position to offer services that transcend the boundaries of the
traditional data center without compromising on quality. New levels of innovation are possible
when leveraging resources residing in different clouds or network domains.
The result will be an improved experience for both consumers and enterprises with
greater efficiency, lower costs and higher margins for operators.
References:
[1] Matthew Woitaszek, National Center for Atmospheric Research, Boulder, CO 80305, "Developing a Cloud Computing Charging Model for High-Performance Computing Resources".
[2] CGI, "Cloud Billing: The Missing Link for Cloud Providers".
[3] CA, "Service Providers Put Their Heads in the Cloud".
[4] Fabricio Carvalho de Gouveia, "The Use of NGN/IMS for Cloud and Grid Services Control and Management".
[5] Gilles Bertrand, "The IP Multimedia Subsystem in Next Generation Networks".
[6] Thomas R., Geoff C., Julian G., "Grid and Cloud Computing: Opportunities for Integration with the Next Generation Network".
ASM’s International E-Journal on
Ongoing Research in Management & IT
E-ISSN – 2320-0065
Essentials of Data Privacy in Data Mining
Asha Kiran Grandhi
Asst. Professor, JSPM’s Rajarshi Shahu
College of Engineering, Pune,
Maharashtra, India.
ashakiran45@rediffmail.com
Dr. Manimala Puri
Director, Abacus Institute of Computer Application, Pune, Maharashtra, India
manimalap@yahoo.com
S. Srinivasa Suresh
Asst. Professor, International Institute of Information Technology, Pune, Maharashtra, India
sureshs@isquareit.edu.in
ABSTRACT:
Data privacy is an emerging topic in developing countries. Data privacy should be
maintained at every possible level. Due to lack of awareness, most of the people in India
are losing valuable asserts (even data is considered as an asset) and sensitive
information. How data is accessed and how data is used, is the primary concern of the
data privacy. Emerging technologies (mostly data intensive) like data mining are also
adopting data privacy techniques, to protect sensitive information. The current paper
gives an insight about the general data privacy issues along with privacy preserving in
data mining.
Index Terms : Data privacy, data mining, Privacy Preserving Data Mining (PPDM),
privacy landscape, privacy principles.
I. Introduction:
Data mining is an important tool for extracting useful and non-trivial information from large databases. It has many advantages, but it has disadvantages too. Sometimes data mining is a threat to the data owner, because mining can leak the privacy of the data owner. Nowadays, data is considered an asset. If effective privacy controls are in place, safeguarding that asset is easy. Budding researchers need to understand the various data privacy laws and principles before starting their research in data mining. Data mining has the power to reveal valuable information and extract hidden knowledge. Generally, the larger the data, the better it is for extracting valuable information. Data privacy is of more concern now, as everything has gone online. In spite of keeping all their data (personal information, transactions and thoughts) online, some people still think that it is impossible for someone to glean deep insight into their private lives. Data mining makes it harder to keep secrets. Often data mining includes various tools and technologies to mine the data. For example, traditional databases, data warehouses, statistics and machine learning are the key supporting tools for effective data mining. A database is a collection of related data of an organization or business, whereas a data warehouse is a collection of integrated data. The advantage of a data warehouse is that, once data is stored, it won't
get deleted (non-volatile). It supports historical data analysis. Applying data mining techniques
on a data warehouse always gives better results. However, as stated at the beginning, data privacy is an important issue in data mining. Data mining produces patterns, and sometimes these patterns disclose sensitive information, which can be a threat to the organization or the individual. Hence, applying privacy preserving techniques before applying data mining techniques on a
given data is desired. Debates have been going on for years about personal data privacy. US
Federal Trade Commission (FTC) has set online privacy guidelines for data exchange between
consumers and business owners. Privacy guidelines vary from country to country.
Global privacy principles
The US and EU have jointly set seven global principles for data privacy (the US-EU Safe Harbor principles issued by the U.S. Department of Commerce on July 21, 2000). The brief details of these principles are as follows (Michael M., Michele C., Ambiga D., 2014, p. 156):
1. Notice (Transparency): Inform individuals about the purposes for which information is collected.
2. Choice: Offer individuals the opportunity to choose whether and how the personal information they provide is used or disclosed.
3. Consent: Only disclose personal information to third parties consistent with the principles of notice and choice.
4. Security: Take reasonable measures to protect personal information from loss, misuse, unauthorized access, disclosure, alteration, and destruction.
5. Data integrity: Assure the reliability of personal information for its intended use, take reasonable precautions, and ensure information is accurate, complete, and current.
6. Access: Provide individuals with access to the personal information held about them.
7. Accountability: A firm must be accountable for following the principles and must include mechanisms for assuring compliance.
Data privacy is rather management concern than technical. Data privacy is primarily
linked with data access and usage. It addresses, who accesses the data and how they use that
data. In the privacy context, users can be divided into two types. Authorized or unauthorized.
In a personal one-on-one relationship, authorized users can access data based on agreement
and exchange, whereas, unauthorized users access data without prior permission or agreement.
According to Reiskind (managing counsel, privacy and data protection for Master Card),
privacy is about how you use personal data. It is further concerned with the collection of data, the use of data, and the disclosure of the data, i.e., to whom you are giving the data.
Privacy Landscape :
There are four main constituents involved in privacy landscape. The following table
shows how they are impacted:
Table 1: Privacy landscape list
Business | Increased need to leverage personally identifiable and sensitive information for competitive advantage; significant investment in data sources and data analytics
Criminals | Dramatic surge in identity theft; sophisticated technology to exploit data security, use, and disclosure
Consumers | Increased awareness and concern about data collection, use and disclosure of their personal information
Legislators | Responsibility to consumers' concerns by restricting access to and use of personal information; significant impact and restrictions for business
Source: Adapted from Andrew Reiskind
Personal information (PI) is a vital component of privacy. Users and readers often confuse
about Personally Identifiable Information (PII) and sensitive information. What is PII? What is
sensitive information? Personally Identifiable Information is any information that directly or
indirectly identifies a person. For example, name, address, phone number, email id, IP address,
SSN, Cookie ID, account numbers etc., whereas sensitive information is any information
whose unauthorized disclosure could be embarrassing. For example, gender, criminal record,
medical information, Race etc. Contextual and cultural factors play major role in deciding data
privacy. What’s relevant in one context is not relevant in another. In Europe, trade union
membership is considered sensitive data. However, in the United States they don't really think of it as such. This comes from the class associations of trade union membership, and therefore it is a sensitive data point in Europe. For example, in Germany people have to indicate their names at the door of their homes. Even Indians follow the same convention.
There have been cultural indexes developed to differentiate perceptions of privacy. G.J.
Hofstede developed Individualism Indices (IDV), which measures how collectivist or
individualist a society is. As India has low IDV, it is considered collectivist society, where
group interest prevails over those of the individual. US users are most privacy concerned
among all followed by Chinese. Indian users were the least privacy concerned. According to a
study, half of the U.S and Chinese respondents considered phone number, residence address,
e-mail ID and photo as privacy sensitive. However, Indian respondents considered only phone
number as privacy sensitive (Yang Wang, Gregory Norcie and Lorrie Faith Cranor, 2011).
There are different opinions between countries on the privacy topic. US refer it as “privacy
policy”, whereas Europe refers it as “Data Protection”.
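As a small, assumed illustration of protecting PII of the kind listed above before data is analysed, the sketch below pseudonymises direct identifiers with a salted hash while leaving non-identifying fields untouched. The field list and the choice of hashing are examples only, not a technique prescribed by the paper.

    # Illustrative sketch: pseudonymising personally identifiable information (PII)
    # before a record is shared for analysis. Field names and the salted-hash
    # approach are assumptions for the example.
    import hashlib

    PII_FIELDS = {"name", "phone", "email"}
    SALT = "per-deployment-secret"  # would be kept confidential in practice

    def pseudonymise(record: dict) -> dict:
        """Replace direct identifiers with salted hashes; keep other fields as-is."""
        out = {}
        for field, value in record.items():
            if field in PII_FIELDS:
                out[field] = hashlib.sha256((SALT + str(value)).encode()).hexdigest()[:12]
            else:
                out[field] = value
        return out

    if __name__ == "__main__":
        print(pseudonymise({"name": "A. Kumar", "phone": "9812345678", "city": "Pune"}))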
Privacy Laws :
Several countries have introduced different privacy laws. In 1986, the Electronic Communications Privacy Act (ECPA) set boundaries for access to personal information by law enforcement. In 2000, the Children's Online Privacy Protection Act came into effect. In 1996, the US Department of Health and Human Services introduced the privacy rule of the Health Insurance Portability and Accountability Act (HIPAA), which protects the fundamental rights of non-discrimination and health information privacy. A US (2015) consumer protection act "establishes a criminal offense for concealment of a security breach of computerized data containing sensitive personally identifiable information that results in economic harm of $1,000 or more to any individual" (Yang Wang 2011).
In data mining, privacy preserving data mining is essential for securing privacy in real-time applications. Hence, in this paper, we present an overview of data mining techniques and privacy preserving data mining. The remainder of the paper is organized as follows: in Section II, we give an overview of data mining and its techniques; in Section III, we present an introduction to PPDM, followed by the conclusion.
II. Data Mining :
Data Mining is the exploration for valuable knowledge in large volumes of data (Shabaz
and Rahman, 2008). A typical data mining application comprises four major steps: data
collection & preparation, data transformation & quality enhancement, pattern discovery, and
interpretation & evaluation of patterns (Xindong Wu and Xingquan Zhu 2008). The major
intention of Data Mining is to utilize the discovered knowledge for the purposes of explaining
current behavior, predicting future outcomes, or affording support for business decisions
(Nittaya Kerdprasop and Kittisak Kerdprasop 2008). It is a part of knowledge discovery which
deals with the process of identifying valid, novel, potentially useful, and ultimately
understandable patterns in data, and excludes the knowledge interpretation part of knowledge discovery in databases (KDD) (Sankar Pal 2004).
The data mining applications demand new alternatives in diverse fields, such as
discovery, data placement, scheduling, resource management, and transactional systems,
among others (S. Krishnaswamy, S.W. Loke and A. Zaslavsky 2000). The data mining
techniques and tools are equally applicable in other fields such as law enforcement, radio
astronomy, medicines, and industrial process control etc (Lovleen Kumar Grover and Rajni
Mehra 2008). It is also applied in domains such as Biomedical and DNA analysis, Retail
industry and marketing, telecommunications, web mining, computer auditing, banking and
insurance, fraud detection, financial industry, medicine and education (Ramasubramanian,
Iyakutti and Thangavelu 2009).
In general, data mining tasks can be classified into two categories: Descriptive mining
and Predictive mining. Descriptive mining is the description of a set of data in a brief and
summarized manner and the presentation of the general properties of the data. Predictive data
mining is the process of inferring patterns from data to make predictions. Predictive data
mining is goal directed. Representative samples of cases with acknowledged answers and
summarized past experiences in meeting the goals must be in place for predictive mining. In
brief, descriptive data mining aims to summarize data and to highlight their interesting
properties, while predictive data mining aims to build models to forecast future
behaviors(Ramesh Chandra Chirala 2004)(Han and Kamber 2001). Classification, regression
and deviation detection algorithms fall into the predictive category whereas clustering,
association rule mining and sequential pattern mining algorithms fall in the descriptive
category. Among the various data mining techniques, the following are the main ones considered here.
- Clustering: Clustering is a partition of data into groups of analogous objects. Each group, called a cluster, consists of objects that are similar among themselves and dissimilar to objects of other groups (Pavel Berkhin 2002). Clustering is the subject of active research in several fields such as statistics, pattern recognition, and machine learning. Clustering plays a vital role in data mining applications such as scientific data exploration, information retrieval and text mining, spatial database applications, Web analysis, medical diagnostics, computational biology, and many others. Data mining adds to clustering the complications of very large datasets with many attributes of different types. This enforces unique computational requirements on relevant clustering algorithms (Ramesh Chandra Chirala 2004). Clustering can be hierarchical or partitional. The number of clusters may be specified beforehand or not, and many different cluster methods exist, each with their own properties (Han and Kamber 2001). A brief illustrative sketch of clustering follows this list.
- Classification: Classification is a data mining (machine learning) technique utilized to predict group membership for data instances (Thair Nu Phyu 2009).
- Association rule mining: Association Rule Mining (ARM) is an undirected or unsupervised data mining technique which works on variable-length data, and generates clear and comprehensible outcomes (Sujni Paul 2010).
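The short sketch referred to in the clustering item above: a toy clustering run using scikit-learn's KMeans on made-up two-dimensional customer data. It simply illustrates descriptive mining, grouping analogous objects together, and assumes scikit-learn and NumPy are available.

    # A very small clustering example (descriptive mining). The data points,
    # e.g. (monthly purchases, average basket value) for ten customers, are
    # invented purely for illustration.
    import numpy as np
    from sklearn.cluster import KMeans

    X = np.array([[2, 150], [3, 160], [2, 140], [10, 900], [11, 950],
                  [12, 880], [25, 300], [24, 320], [26, 310], [23, 305]])

    model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print("Cluster labels:", model.labels_)
    print("Cluster centres:", model.cluster_centers_)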
III. Privacy Preserving Data Mining (PPDM)
Recent advances in data collection, data dissemination and related technologies have
inaugurated a new era of research where existing data mining algorithms should be
reconsidered from a different point of view, that of privacy preservation. It is well documented that this new, limitless explosion of information through the Internet and other media has reached a point where threats against privacy arise on a daily basis and deserve serious thought (Vassilios Verykios et al., 2004).
Privacy preservation is even more essential in the case of data mining, since sharing of information is a primary requirement for the accomplishment of the data mining process. As a matter of fact, the more the privacy preservation requirement is increased, the less accuracy the mining process can achieve (Golam Kaosar and Xun Yi 2010). Privacy is becoming an increasingly important
issue in many data mining applications (Ling Qiu , Yingjiu Li, Xintao Wu 2007). Therefore a
tradeoff between privacy and accuracy is determined for a particular application. Oliveira and
Zaïane (Jeffrey, 2004) define PPDM as data mining methods which have to meet two targets:
(1) meeting privacy necessities and (2) providing valid data mining results. These targets are
in some cases at odds with each other, depending on the type of data mining results and the
attributes in basic data. In these cases the utilization of PPDM suggests a compromise between
the two targets mentioned.
In Privacy Preserving Data Mining (PPDM), the goal is to perform data mining
operations on sets of data without disclosing the contents of the sensitive data. The problem of privacy-preserving data mining has become more important in recent years because of the increasing capacity to store personal data about users, and the growing sophistication of data mining algorithms that leverage this information (Maria Perez et al. 2007). Privacy-preserving data mining finds numerous applications in surveillance, which is naturally supposed to be a "privacy-violating" application (Aggarwal, Charu, Yu, Philip 2008). The utilization of privacy preserving data mining is extensively valuable in the medical field.
Privacy preserving can be implemented at various levels in data mining. For example,
the following figure shows the steps required for implementing privacy preserving based on
clustering.
Fig. 1 Basic Block diagram
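To illustrate one common PPDM idea consistent with the clustering-based steps above, the following sketch adds random noise to numeric attributes before clustering, so that individual values are masked while group structure is broadly preserved. The noise level and data are assumptions for the example; the paper itself does not prescribe a specific method.

    # Minimal PPDM sketch: perturb numeric attributes with zero-mean Gaussian
    # noise before clustering. Noise scale and data values are illustrative.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(seed=0)

    def perturb(data: np.ndarray, noise_scale: float = 0.1) -> np.ndarray:
        """Add zero-mean Gaussian noise scaled to each attribute's spread."""
        noise = rng.normal(0.0, noise_scale * data.std(axis=0), size=data.shape)
        return data + noise

    original = np.array([[35, 42000.0], [37, 45000.0], [62, 98000.0],
                         [60, 95000.0], [24, 21000.0], [26, 23000.0]])
    masked = perturb(original)

    # The data owner releases only the masked data; the miner clusters it.
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(masked)
    print(labels)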
IV. Conclusion :
Data privacy in data mining is an emerging topic for researchers. It is an interdisciplinary topic. Researchers have great scope for improving privacy levels in various domains having various types of data. PPDM is a rising research area that has received great attention from the research community in recent years. This review helps beginners in research to know the fundamental privacy issues in the business context, consumer context, etc. In future we would like to design a model for privacy preserving on data from a specific domain.
Acknowledgments :
My sincere thanks to Savitribai Phule Pune University and Rajarshi Shahu College of
Engineering (RSCOE), Department of Management, for giving me an opportunity for pursuing
research in this area.
REFERENCES:
- Aggarwal, Charu, Yu and Philip, "Privacy-Preserving Data Mining: Models and Algorithms", Advances in Database Systems, Kluwer Academic Publishers, Vol. 34, 2008.
- Golam Kaosar and Xun Yi, "Semi-Trusted Mixer Based Privacy Preserving Distributed Data Mining for Resource Constrained Devices", International Journal of Computer Science and Information Security, Vol. 8, No. 1, pp. 44-51, April 2010.
- Han and Kamber, "Data Mining: Concepts and Techniques", Morgan Kaufmann Publishers, 2001.
- Lovleen Kumar Grover and Rajni Mehra, "The Lure of Statistics in Data Mining", Journal of Statistics Education, Vol. 16, No. 1, 2008.
- Maria Perez, Alberto Sanchez, Victor Robles, Pilar Herrero and Jose Pena, "Design and Implementation of a Data Mining Grid-aware Architecture", Vol. 23, No. 1, 2007.
- Pavel Berkhin, "Survey of Clustering Data Mining Techniques", Technical Report, 2002.
- Ramasubramanian, Iyakutti and Thangavelu, "Enhanced data mining analysis in higher educational system using rough set theory", African Journal of Mathematics and Computer Science Research, Vol. 2, No. 9, pp. 184-188, 2009.
- Ramesh Chandra Chirala, "A Service-Oriented Architecture-Driven Community of Interest Model", Technical Report, Arizona State University, May 2004.
- S. Krishnaswamy, S. W. Loke and A. Zaslavsky, "Cost Models for Distributed Data Mining", in Proceedings of the 12th International Conference on Software Engineering & Knowledge Engineering, pp. 31-38, 6-8 July, Chicago, USA, 2000.
- Sankar Pal, "Soft data mining, computational theory of perceptions, and rough-fuzzy approach", Information Sciences, Vol. 163, pp. 5-12, 2004.
- Sujni Paul, "An Optimized Distributed Association Rule Mining Algorithm in Parallel and Distributed Data Mining With XML Data for Improved Response Time", International Journal of Computer Science and Information Technology, Vol. 2, No. 2, April 2010.
- Thair Nu Phyu, "Survey of Classification Techniques in Data Mining", in Proceedings of the International Multi Conference of Engineers and Computer Scientists, Vol. 1, Hong Kong, 2009.
- Vassilios Verykios, Elisa Bertino, Igor Nai Fovino, Loredana Parasiliti Provenza, Yucel Saygin and Yannis Theodoridis, "State-of-the-art in Privacy Preserving Data Mining", ACM SIGMOD Record, Vol. 33, No. 1, 2004.
- Xindong Wu and Xingquan Zhu, "Mining With Noise Knowledge: Error-Aware Data Mining", IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, Vol. 38, No. 4, 2008.
- Yang Wang, Gregory Norcie and Lorrie Faith Cranor, "Who is Concerned about What? A Study of American, Chinese and Indian Users' Privacy Concerns on Social Network Sites", short paper, School of Computer Science, Carnegie Mellon University, 2011.
ASM’s International E-Journal on
Ongoing Research in Management & IT
E-ISSN – 2320-0065
Study and usage of Android Smart Phones with Enterprise Resource Planning (ERP) in Education Sector
Prof. Prashant N. Wadkar
Asso. Professor, ASM’s IBMR, Pune, India
pnwadkar@gmail.com
Dr. Shivaji D. Mundhe
Research Guide and Director-MCA, SIMCA, Pune, India
Drshivaji.mundhe@gmail.com
Prof. Swati Jadhav
Asst. Professor, ASM’s IBMR, Pune, India
swatijadhav@asmedu.org
ABSTRACT:
An ERP is a powerful integrated platform that connects all the departments of an organization, such as HR, Payroll, Accounts, Sales, Marketing, Production, etc., and it has been proved to play a vital role in different areas. The study is basically on the Education sector, especially schools and colleges. As far as the Education sector is concerned, there are many ERP vendors for schools, and many schools are already using ERP. The research focuses on ERP and the more advanced features provided along with Android smartphones by various vendors. These advanced features might be GPS (Global Positioning System) used in a Transport module, or the use of biometric devices or RFID in an Attendance module, for Teacher, Student and Parent modules. The study examines to what extent this is currently facilitating users, and to what extent it can be extended.
Keywords: Android, Biometric devices, RFID, GPS, ERP.
Introduction
An Enterprise Resource Planning system can be seen as a software solution that helps manage all the processes and data of an enterprise by integrating the business functions into a single system. ERP systems are composed of different modules related to different departments of a company, which share data through a single, unified database. These modules are, in fact, software applications that cover business activities such as finance, logistics, CRM, human resources, supply chain, manufacturing, and warehouse management.
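As a conceptual sketch of the "single and unified database" point above, the following Python example lets two hypothetical ERP modules (admissions and accounts) work on one shared table. The table and module names are invented for illustration.

    # Conceptual sketch: two ERP "modules" reading and writing one shared
    # database. Table and column names are invented for illustration.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT, fees_due REAL)")

    def admissions_module_enrol(name, fees):
        # The admissions module inserts the record once...
        conn.execute("INSERT INTO students (name, fees_due) VALUES (?, ?)", (name, fees))
        conn.commit()

    def accounts_module_outstanding():
        # ...and the accounts module reads the same row, with no re-entry of data.
        return conn.execute("SELECT name, fees_due FROM students WHERE fees_due > 0").fetchall()

    admissions_module_enrol("Asha", 15000.0)
    print(accounts_module_outstanding())

This is only a toy illustration of data sharing; a real school ERP would of course use a full database server, access control and many more tables.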
The present study focuses not only on the use of ERP in the Education sector but also on the mobile apps that support different and more advanced modules of the ERP. These ERP modules might be a Parent module, a Teacher module, or any other module where the mobile app and the ERP are connected within the ERP system.
The study covers the different ERP vendors available with a mobile app and also the requirements of different schools in connection with mobile apps.
We collected the data by conducting demos and meetings with 12 vendors working on different platforms. We observed that every vendor has somewhat different features, user interfaces and modules, apart from the basic modules.
The following table depicts the area of interest of our study.
Table: ERP Vendors with No. of Mobile Apps
Total No. of ERP Vendors (Sample) | No. of Windows Based ERP Vendors | No. of Web/Cloud Based ERP Vendors | ERP Vendors without Mobile App | ERP Vendors with Mobile App
12 | 1 | 11 | 10 | 2
Findings and Interpretation :
The above table shows that the number of ERP vendors studied (sample size) is 12. Of these, the number of vendors with a Windows-based ERP is 1, whereas the number of vendors with a Web/Cloud-based ERP is 11. This shows that most ERP vendors are coming up with Cloud-based ERP. The number of ERP vendors without a mobile app is 10, whereas the number with a mobile app is 2, showing that far more ERP vendors are without a mobile app. The study also reveals that most of the schools require ERP vendors with a mobile app. Considering this market demand, most of the ERP vendors are in a position to develop mobile apps for their ERP with many more features, such as a Parent module, a Student module and GPS-supported features.
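A hypothetical sketch of how a parent-facing Android or other mobile app could pull data from a school ERP: the app calls a REST endpoint exposed by the ERP and reads the attendance figure from the JSON response. The URL, endpoint path, token handling and JSON field are assumptions for illustration; no particular vendor's API is implied.

    # Hypothetical sketch of a mobile app fetching attendance from a school ERP
    # over REST. Endpoint and JSON fields are assumptions, not a vendor API.
    import requests

    ERP_BASE_URL = "https://erp.example-school.edu/api"  # hypothetical endpoint

    def fetch_attendance(student_id: str, token: str) -> float:
        """Return the attendance percentage reported by the ERP for one student."""
        resp = requests.get(
            f"{ERP_BASE_URL}/students/{student_id}/attendance",
            headers={"Authorization": f"Bearer {token}"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["attendance_percent"]

    # Example call (requires a reachable ERP endpoint):
    # print(fetch_attendance("S-1024", token="demo-token"))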
Conclusion:
The comparative study shows the different modules present with each ERP vendor. Many of the vendors have the common modules, as listed in the second column. Our research interest is in whether a vendor provides mobile app support for different modules such as the Teacher module or the Parent module. A further question is which operating system is present on the parents' handsets. If they have an Android mobile, the vendor should provide them the Android app; otherwise the vendor has to consider which OS is present and provide the respective app accordingly, which might be Windows, iOS, etc. Since Android is the most popular, we suggest that the demand for an Android app is the greatest.
REFERENCES:
1.
[1]
[2]
[3]
[4]
[5]
2
[1]
Book reviews
Android Application Development for Dummies by Donn Felker published by Wiley
Publishing, Inc. ISBN: 978-0-470-77018-4.
Professional Android Application Development by Reto Meier published by Wiley
Publishing, Inc. ISBN: 978-0-470-34471-2.
SPSS13.0 for Windows: Analysis without anguish; S J Coakes, L Steed & P
Dzidic;Wiley-India, 2006
Statistical Methods for Practice and Research; A S Gaur & S S Gaur; Response
Books,2006
3. SPSS for windows Step by step; D George & P Mallary;Pearson, 2011
Journals, Articles and News.
“UK Porn Ban: Prime Minister Declares War on Adult Content” published on web news
“www.webpronews.com” dated 22nd July 2013 authored by Sarah Parrott
INCON - XI 2016
17
ASM’s International E-Journal on
E-ISSN – 2320-0065
Ongoing Research in Management & IT
[2] “Detection of Pornographic Digital Images” by Jorge A. Marcial-Basilio, Gualberto
Aguilar-Torres, Gabriel Sánchez-Pérez, L. Karina Toscano-Medina, and Héctor M.
Pérez-Meana Published online: INTERNATIONAL JOURNAL OF COMPUTERS
Issue 2, Volume 5, 2011
[3] “Design and Implementation of Mobile Forensic Tool for Android Smart Phone through
Cloud Computing” authored by Yenting Lai1, Chunghuang Yang, Chihhung Lin, and
TaeNam Ahn G. Lee, D. Howard, and D. Ślęzak (Eds.): ICHIT 2011, CCIS 206, pp.
196–203, 2011. © Springer-Verlag Berlin Heidelberg 2011
[4] “Analytical study of Android Technology” by Dr. Shivaji Mundhe, Mr. Prashant
Wadkar, ISBN:- 978-93-5158-007-3, Vol 2, Jan 2014, ASM Group of Institutes, Pune,
India.
[5] “Malicious Applications in Android Devices” by Dr. Shivaji Mundhe, Mr. Prashant
Wadkar, ISBN: 978-81-927230-0-6, SIMCA, Feb 2014, Pune, India.
[6] “Filtering of Pornographic Images Using Skin Detection Technique” by Dr. Shivaji
Mundhe, Prof. Prashant Wadkar, Prof. Shashidhar Sugur, ISBN: 978-93-84916-78-7,
Jan 2015, ASM Group of Institutes, Pune, India.
[7] “Filtering of Pornographic Images Using Skin Detection Technique” by Dr. Shivaji
Mundhe, Prof. Prashant Wadkar, Prof. Shashidhar Sugur, ASM’s International E-Journal
on Ongoing Research in Management and IT e-ISSN-2320-0065, Jan 2015, ASM Group
of Institutes, Pune, India.
[8] “Andromaly: a behavioral malware detection framework for android devices” by Asaf
Shabtai · Uri Kanonov · Yuval Elovici · Chanan Glezer · Yael Weiss Published online: 6
January 2011 © Springer Science+Business Media, LLC 2011
[9] “A Systematic Review of Healthcare Applications for Smartphones” a research article by
Abu Saleh Mohammad Mosa, Illhoi Yoo and Lincoln Sheets, Medical Informatics and
Decision Making 2012.
[10] “Diversity in Smartphone Usage” authored by Hossein Falaki, Ratul Mahajan, Srikanth
Kandula, Dimitrios Lymberopoulos, Ramesh Govindan,Deborah Estr in journal
MobiSys’10, June 15–18, 2010, San Francisco, California, USA
[11] Daily news paper “Pudhari”(Marathi) in Belgaum edition (Karnataka-INDIA) dated 3rd
Sept 2013
[12] “Design and Implementation of Mobile Forensic Tool for Android Smart Phone through Cloud Computing” authored by Yenting Lai, Chunghuang Yang, Chihhung Lin, and TaeNam Ahn, in G. Lee, D. Howard, and D. Ślęzak (Eds.): ICHIT 2011, CCIS 206, pp. 196–203, 2011. © Springer-Verlag Berlin Heidelberg 2011.
[13] Research article “5 Ways to Boost Your Android Phone’s Performance” authored by Kenneth Butler, LAPTOP Web Producer/Writer, in a blog, dated Apr 9, 2012.
[14] “The Potential Impact Of Android On The Mobile Application Development Industry”, article by Phil Byrne in the journal “articlebase”.
[15] “U.K. Government Bans Drivers From Using Google Glass Behind The Wheel” by
“Killian Bell” in “Cult of Android” a daily news website.
[16] “Email, SMS stealing virus targeting Android users in India” published in Daily news
paper “Times of India” dated 8th Sept 2013(NEW DELHI)
[17] “How Samsung plans to tackle Android malware issues” published in Daily news paper
“Times of India” dated 5th Sept 2013 (LONDON)
[18] “99.9% of new mobile malware targets Android: Kaspersky” published in daily newspaper “Times of India” dated 3rd Sept 2013 (WELLINGTON).
[19] “UK Porn Ban: Prime Minister Declares War on Adult Content” published on web news
“www.webpronews.com” dated 22nd July 2013 authored by Sarah Parrott
[20] 5G technology of mobile communication: A survey, Published in:Intelligent Systems
and Signal Processing (ISSP), 2013 International Conference by: Gohil, A. ; Charotar
Univ. of Sci. & Technol., Changa, India ; Modi, H. ; Patel, S.K. dt 1-2 March 2013,
ISBN: 978-1-4799-0316-0
[21] “A new app that lets you make unlimited free calls”, https://in.news.yahoo.com/appoffers-free-voice-calls-mobiles-landlines-071208144.html, IANS India Private Limited/Yahoo India News – Sun 7 Dec, 2014.
[22] The news in Daily News paper “Lokmat” (Marathi) dated 13th Aug 2015 on page 13
titled “Make the use of technology to ban the Porn sites”.
[23] “Android Technology and It’s Challanges” (2015), published at the “National Conference of Sinhgad Institute of Management and Computer Application (SIMCA, Pune)”, authored by Prashant Wadkar & Dr. S. D. Mundhe. ISBN 978-81-927230-9-9.
[24] “Analytical Study of Android technology in user's Perspective” (2015) Paper Published
in Journal “International Journal of Research in IT & Management (IMPACT FACTOR
– 4.961) IJRIM Journal (ISSN 2231-4334) Authored by Prashant Wadkar & Dr. S. D.
Mundhe.
Cloud Computing: An Emerging Technology in IT
Gaurav Prakashchand Jain
gaurav.sanchetii@gmail.com
Pune, India
ABSTRACT:
The Future Internet covers all research and development activities dedicated to realizing tomorrow’s internet, i.e. enhancing a networking infrastructure which integrates all kinds of resources, usage domains etc.
As such, research related to cloud technologies form a vital part of the Future Internet
research agenda. Confusions regarding the aspects covered by cloud computing with
respect to the Future Internet mostly arise from the broad scope of characteristics
assigned to “clouds”, as is the logical consequence of the re-branding boom some years
ago. So far, most cloud systems have focused on hosting applications and data on
remote computers, employing in particular replication strategies to ensure availability
and thus achieving a load balancing scalability. However, the conceptual model of
clouds exceeds such a simple technical approach and leads to challenges not unlike the
ones of the future internet, yet with slightly different focus due to the combination of
concepts and goals implicit to cloud systems. In other words, as a technological
realization driven by an economic proposition, cloud infrastructures would offer
capabilities that enable relevant aspects of the future internet, in particular related to
scalability, reliability and adaptability. At the same time, the cloud concept addresses
multiple facets of these functionalities. Recently, a number of commercial and academic
organizations have built large systems from commodity computers, disks, and networks,
and have created software to make this hardware easier to program and manage. In
some cases, these organizations have used their hardware and software to provide
storage, computational, and data management services to their own internal users, or to
provide these services to external customers for a fee. We refer to the hardware and
software environment that implements this service-based environment as a cloud-computing environment.
Introduction:
Computing is being transformed to a model consisting of services that are commoditized
and delivered in a manner similar to traditional utilities such as water, electricity, gas, and
telephony. In such a model, users access services based on their requirements without regard
to where the services are hosted or how they are delivered. Several computing paradigms have
promised to deliver this utility computing vision and these include cluster computing, Grid
computing, and more recently Cloud computing. The latter term denotes the infrastructure as a
‘‘Cloud’’ from which businesses and users are able to access applications from anywhere in
the world on demand. Thus, the computing world is rapidly transforming towards developing
software for millions to consume as a service, rather than to run on their individual computers.
At present, it is common to access content across the Internet independently without reference
to the underlying hosting infrastructure. This infrastructure consists of data centers that are
monitored and maintained around the clock by content providers. Cloud computing is an
extension of this paradigm wherein the capabilities of business applications are exposed as
sophisticated services that can be accessed over a network. Cloud service providers are
incentivized by the profits to be made by charging consumers for accessing these services.
Consumers, such as enterprises, are attracted by the opportunity for reducing or eliminating
costs associated with ‘‘in-house’’ provision of these services. However, since cloud
applications may be crucial to the core business operations of the consumers, it is essential that
the consumers have guarantees from providers on service delivery. Typically, these are
provided through Service Level Agreements (SLAs) brokered between the providers and
consumers.
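As a concrete illustration of how such an SLA guarantee can be checked, the sketch below computes achieved monthly availability from recorded downtime and compares it with an assumed 99.9% target; the figures and the target are illustrative assumptions, not values from any provider's actual SLA.

# Check an assumed monthly availability target against recorded downtime.
# The 99.9% target and the 50 minutes of downtime are illustrative assumptions.

MINUTES_IN_MONTH = 30 * 24 * 60

def availability_percent(downtime_minutes: float) -> float:
    """Achieved availability over the month, as a percentage."""
    return 100.0 * (1 - downtime_minutes / MINUTES_IN_MONTH)

if __name__ == "__main__":
    target = 99.9
    achieved = availability_percent(downtime_minutes=50)
    status = "met" if achieved >= target else "breached"
    print(f"Achieved {achieved:.3f}% against a {target}% target: SLA {status}")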
Providers such as Amazon, Google, Salesforce, IBM, Microsoft, and Sun Microsystems
have begun to establish new data centers for hosting Cloud computing applications in various
locations around the world to provide redundancy and ensure reliability in case of site failures.
Since user requirements for cloud services are varied, service providers have to ensure that
they can be flexible in their service delivery while keeping the users isolated from the
underlying infrastructure. Recent advances in microprocessor technology and software have
led to the increasing ability of commodity hardware to run applications within Virtual
Machines (VMs) efficiently. VMs allow both the isolation of applications from the underlying
hardware and other VMs, and the customization of the platform to suit the needs of the end
user. Providers can expose applications running within VMs, or provide access to VMs
themselves as a service (e.g. Amazon Elastic Compute Cloud) thereby allowing consumers to
install their own applications. While convenient, the use of VMs gives rise to further
challenges such as the intelligent allocation of physical resources for managing competing
resource demands of the users.
What Is A “Cloud”?
Various definitions and interpretations of “clouds” and / or “cloud computing” exist.
With particular respect to the various usage scopes the term is employed to, we will try to give
a representative (as opposed to complete) set of definitions as recommendation towards future
usage in the cloud computing related research space.
In its broadest form, we can define a 'cloud' as an elastic execution environment of
resources involving multiple stakeholders and providing a metered service at multiple
granularities for a specified level of quality (of service).
Cloud computing is a computing paradigm, where a large pool of systems are connected
in private or public networks, to provide dynamically scalable infrastructure for application,
data and file storage. With the advent of this technology, the cost of computation, application
hosting, content storage and delivery is reduced significantly.
Cloud computing is a practical approach to experience direct cost benefits and it has the
potential to transform a data center from a capital-intensive set up to a variable priced
environment. The idea of cloud computing is based on a very fundamental principle of
‘reusability of IT capabilities'. The difference that cloud computing brings compared to
traditional concepts of “grid computing”, “distributed computing”, “utility computing”, or
“autonomic computing” is to broaden horizons across organizational boundaries.
Specific Characteristics / Capabilities of Clouds
Cloud computing provides a scalable online environment which facilitates the ability to
handle an increased volume of work without impacting on the performance of the system.
Cloud computing also offers significant computing capability and economy of scale that might
not otherwise be affordable to businesses, especially small and medium enterprises (SMEs)
that may not have the financial and human resources to invest in IT infrastructure.
1. Non-functional Aspects
The most important non-functional aspects are:
1.1 Elasticity is an essential core feature of cloud systems and circumscribes the capability of the underlying infrastructure to adapt to changing, potentially non-functional requirements, for example the amount and size of data supported by an application, the number of concurrent users, etc. Elasticity goes one step further, though, and also allows the dynamic integration and extraction of physical resources to and from the infrastructure (a small auto-scaling sketch follows this list).
1.2 Reliability is essential for all cloud systems – in order to support today’s data centre-type applications in a cloud, reliability is considered one of the main features to exploit cloud capabilities. Reliability denotes the capability to ensure constant operation of the system without disruption, i.e. no loss of data, no code reset during execution, etc. Reliability is typically achieved through redundant resource utilization.
1.3 Quality of Service support is a relevant capability that is essential in many use cases where specific requirements have to be met by the outsourced services and/or resources. In business cases, basic QoS metrics like response time, throughput, etc. must be guaranteed at least, so as to ensure that the quality guarantees of the cloud user are met.
1.4 Availability of services and data is an essential capability of cloud systems and was actually one of the core aspects that gave rise to clouds in the first instance. It lies in the
ability to introduce redundancy for services and data so failures can be masked
transparently.
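The auto-scaling sketch referenced in 1.1 above is given here. It is a minimal, provider-neutral illustration of how elasticity can be driven by a load metric; the CPU thresholds and pool limits are assumptions made only for the example.

# Minimal threshold-based elasticity loop (provider-neutral illustration).
# The CPU thresholds and instance limits are assumptions for this sketch.

def desired_instance_count(current_instances: int,
                           avg_cpu_percent: float,
                           scale_out_at: float = 75.0,
                           scale_in_at: float = 25.0,
                           min_instances: int = 1,
                           max_instances: int = 10) -> int:
    """Decide how many instances the pool should have for the observed load."""
    if avg_cpu_percent > scale_out_at and current_instances < max_instances:
        return current_instances + 1      # add capacity under high load
    if avg_cpu_percent < scale_in_at and current_instances > min_instances:
        return current_instances - 1      # release capacity when idle
    return current_instances              # load is within the target band

if __name__ == "__main__":
    pool = 2
    for cpu in (80.0, 85.0, 60.0, 15.0, 10.0):   # simulated monitoring samples
        pool = desired_instance_count(pool, cpu)
        print(f"avg CPU {cpu:5.1f}% -> pool size {pool}")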
2. Economic Aspects
In order to allow for economic considerations, cloud systems should help in realizing the following aspects:
2.1 Cost reduction is one of the first concerns in building up a cloud system that can adapt to changing consumer behavior and reduce the cost of infrastructure maintenance and acquisition. Scalability and pay per use are essential aspects of this issue.
2.2 Pay per use. The capability to build up cost according to the actual consumption of resources is a relevant feature of cloud systems. Pay per use strongly relates to quality of service support, where specific requirements to be met by the system, and hence to be paid for, can be specified (a small metering sketch follows this list).
2.3 Improved time to market is essential in particular for small to medium enterprises that want to sell their services quickly and easily, with little delay caused by acquiring and setting up the infrastructure, in particular in a scope compatible and competitive with larger industries. Larger enterprises need to be able to publish new capabilities with little overhead to remain competitive. Clouds can support this by providing infrastructures, potentially dedicated to specific use cases, that take over essential capabilities to support easy provisioning and thus reduce time to market.
2.4 Return on investment (ROI) is essential for all investors and cannot always be guaranteed – in fact, some cloud systems currently fail on this aspect. Employing a cloud system must ensure that the cost and effort vested into it is outweighed by its benefits for it to be commercially viable – this may entail direct (e.g. more customers) and indirect (e.g. benefits from advertisements) ROI. Outsourcing resources versus increasing the local infrastructure and employing (private) cloud technologies therefore need to be weighed against each other and critical cut-off points identified.
2.5 “Going Green” is relevant not only to reduce the additional costs of energy consumption, but also to reduce the carbon footprint. Whilst the carbon emission of individual machines can be quite well estimated, this information is actually taken little into consideration when scaling systems up. Clouds principally allow reducing the consumption of unused resources (down-scaling). In addition, up-scaling should be carefully balanced not only with cost, but also with carbon emission issues. Note that beyond software stack aspects, plenty of Green IT issues are subject to development on the hardware level.
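The metering sketch referenced in 2.2 is shown below. It turns measured consumption into a charge so that the consumer pays only for what was used; the resource names and unit rates are hypothetical assumptions, not any real provider's price list.

# Hypothetical pay-per-use metering and billing sketch.
# Resource names and unit rates below are illustrative assumptions.

RATES = {
    "vm_hours": 0.05,          # currency units per VM hour
    "storage_gb_month": 0.02,  # per GB stored for the month
    "egress_gb": 0.01,         # per GB of outbound traffic
}

def bill(usage: dict) -> float:
    """Charge only for what was actually consumed."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

if __name__ == "__main__":
    consumer_usage = {"vm_hours": 720, "storage_gb_month": 50, "egress_gb": 120}
    print(f"Monthly charge: {bill(consumer_usage):.2f} currency units")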
3. Technological Aspects
The main technological challenges that can be identified and that are commonly associated with cloud systems are:
3.1 Virtualization is an essential technological characteristic of clouds which hides the technological complexity from the user and enables enhanced flexibility. More concretely, virtualization supports the following features:
3.2 Ease of use: by hiding the complexity of the infrastructure (including management, configuration etc.), virtualization can make it easier for the user to develop new applications, and it also reduces the overhead of controlling the system.
3.3 Infrastructure independency: in principle, virtualization allows for higher interoperability by making the code platform independent.
3.4 Flexibility and Adaptability: by exposing a virtual execution environment, the underlying infrastructure can change more flexibly according to different conditions and requirements.
3.5 Location independence: services can be accessed independently of the physical location of the user and the resource.
3.6 Security, Privacy and Compliance are obviously essential in all systems dealing with potentially sensitive data and code.
3.7 Data Management is an essential aspect, in particular for storage clouds, where data is flexibly distributed across multiple resources. Implicitly, data consistency needs to be maintained over a wide distribution of replicated data sources. At the same time, the system always needs to be aware of the data location (when replicating across data centers), taking latencies and particularly workload into consideration.
3.8 Metering of any kind of resource and service consumption is essential in order to offer elastic pricing, charging and billing. It is therefore a pre-condition for the elasticity of clouds.
3.9 Tools are generally necessary to support the development, adaptation and usage of cloud services.
Cloud Computing Models
Cloud Providers offer services that can be grouped into three categories.
1. Software as a Service (SaaS): In this model, a complete application is offered to the customer as a service on demand. A single instance of the service runs on the cloud and multiple end users are serviced. On the customers’ side, there is no need for upfront investment in servers or software licenses, while for the provider the costs are lowered, since only a single application needs to be hosted and maintained. Today SaaS is offered by companies such as Google, Salesforce, Microsoft, Zoho, etc.
2. Platform as a Service (PaaS): Here, a layer of software or a development environment is encapsulated and offered as a service, upon which other higher levels of service can be built. The customer has the freedom to build his own applications, which run on the provider’s infrastructure. To meet manageability and scalability requirements of the applications, PaaS providers offer a predefined combination of OS and application servers, such as the LAMP platform (Linux, Apache, MySQL and PHP), restricted J2EE, Ruby etc. Google’s App Engine, Force.com, etc. are some of the popular PaaS examples.
3. Infrastructure as a Service (IaaS): IaaS provides basic storage and computing capabilities as standardized services over the network. Servers, storage systems, networking equipment, data centre space etc. are pooled and made available to handle workloads. The customer would typically deploy his own software on the infrastructure. Some common examples are Amazon, GoGrid, 3Tera, etc.
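As an illustration of the IaaS model, the sketch below provisions a virtual machine on Amazon EC2 (one of the examples above) using the boto3 library for Python. The region name, AMI ID and instance type are placeholders, and valid AWS credentials are assumed to be configured; this is a minimal sketch, not a production deployment.

# Sketch: provisioning a virtual machine on an IaaS cloud (Amazon EC2 via boto3).
# The region, AMI ID and instance type are placeholder assumptions, and AWS
# credentials are assumed to be configured in the environment.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder image identifier
    InstanceType="t3.micro",           # placeholder machine size
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched instance:", instance_id)

# The consumer can later release the capacity, paying only for the hours used:
# ec2.terminate_instances(InstanceIds=[instance_id])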
Understanding Public and Private Clouds
Enterprises can choose to deploy applications on Public, Private or Hybrid clouds. Cloud
Integrators can play a vital part in determining the right cloud path for each organization.
Public Cloud:
Public clouds are owned and operated by third parties; they deliver superior economies
of scale to customers, as the infrastructure costs are spread among a mix of users, giving each
individual client an attractive low-cost, “Pay-as-you-go” model. All customers share the same
infrastructure pool with limited configuration, security protections, and availability variances.
These are managed and supported by the cloud provider. One of the advantages of a public cloud is that it may be larger than an enterprise’s cloud, thus providing the ability to scale seamlessly, on demand.
Private Cloud:
Private clouds are built exclusively for a single enterprise. They aim to address concerns
on data security and offer greater control, which is typically lacking in a public cloud. There
are two variations to a private cloud: 
On-premise Private Cloud: On-premise private clouds, also known as internal clouds
are hosted within one’s own data center. This model provides a more standardized
process and protection, but is limited in aspects of size and scalability. IT departments
would also need to incur the capital and operational costs for the physical resources.
This is best suited for applications which require complete control and configurability of
the infrastructure and security.

Externally hosted Private Cloud: This type of private cloud is hosted externally with a
cloud provider, where the provider facilitates an exclusive cloud environment with full
guarantee of privacy. This is best suited for enterprises that don’t prefer a public cloud
due to sharing of physical resources.
Hybrid Cloud:
Hybrid Clouds combine both public and private cloud models. With a Hybrid Cloud,
service providers can utilize 3rd party Cloud Providers in a full or partial manner thus
increasing the flexibility of computing. The Hybrid cloud environment is capable of providing
on-demand, externally provisioned scale. The ability to augment a private cloud with the
resources of a public cloud can be used to manage any unexpected surges in workload.
Cloud Computing Challenges
Despite its growing influence, concerns regarding cloud computing still remain. In our
opinion, the benefits outweigh the drawbacks and the model is worth exploring. Some
common challenges are:
1. Data Protection
Data security is a crucial element that warrants scrutiny. Enterprises are reluctant to buy an assurance of business data security from vendors. They fear losing data to competitors and compromising the confidentiality of consumers’ data. In many instances, the actual storage location
is not disclosed, adding onto the security concerns of enterprises. In the existing models,
firewalls across data centers (owned by enterprises) protect this sensitive information. In
the cloud model, Service providers are responsible for maintaining data security and
enterprises would have to rely on them.
2. Data Recovery and Availability
All business applications have Service level agreements that are stringently followed.
Operational teams play a key role in management of service level agreements and
runtime governance of applications. In production environments, operational teams support:
• Appropriate clustering and failover
• Data replication
• System monitoring (transaction monitoring, log monitoring and others)
• Maintenance (runtime governance)
• Disaster recovery
• Capacity and performance management
If any of the above-mentioned services is under-served by a cloud provider, the damage and impact could be severe.
3. Management Capabilities
Despite there being multiple cloud providers, the management of platform and infrastructure is still in its infancy. Features like “Auto-scaling”, for example, are a crucial requirement for many enterprises. There is huge potential to improve the scalability and load balancing features provided today.
4. Regulatory and Compliance Restrictions
In some European countries, government regulations do not allow customers’
personal information and other sensitive information to be physically located outside the
state or country. In order to meet such requirements, cloud providers need to setup a data
center or a storage site exclusively within the country to comply with regulations.
Having such an infrastructure may not always be feasible and is a big challenge for cloud
providers.
Future Research Areas:
Although much progress has already been made in cloud computing, we believe there are a number of research areas that still need to be explored. Issues of security, reliability, and performance should be addressed to meet the specific requirements of different organizations, infrastructures, and functions. Some future research areas are as follows:
• Security
• Reliability
• Vulnerability to Attacks
• Cluster Distribution
• Network Optimization
• Interoperability
• Applications
Conclusion:
I have described a number of approaches to cloud computing in this article and pointed
out some of their strengths and limitations. I have also provided motivation and suggestions
for additional research. The approaches outlined in this article, along with other strategies,
have already been applied successfully to a wide range of problems. As more experience is
gained with cloud computing, the breadth and depth of cloud implementations and the range of
application areas will continue to increase. Like other approaches to high performance
computing, cloud computing is providing the technological underpinnings for new ways to
collect, process, and store massive amounts of information. Based on what has been
demonstrated thus far, ongoing research efforts, and the continuing advancements of
computing and networking technology, I believe that cloud computing is poised to have a
major impact on our society’s data centric commercial and scientific endeavors. Cloud
computing is a new and promising paradigm delivering IT services as computing utilities. As
Clouds are designed to provide services to external users, providers need to be compensated
for sharing their resources and capabilities. In particular, we have presented various Cloud efforts in practice from the market-oriented perspective to reveal their emerging potential for the creation of third-party services that enable the successful adoption of Cloud computing, such as meta-negotiation infrastructure for global Cloud exchanges and high-performance content delivery via ‘Storage Clouds’.
REFERENCES:
[1] L. Kleinrock, A vision for the Internet, ST Journal of Research 2 (1) (2005) 4–5.
[2] S. London, Inside Track: The high-tech rebels, Financial Times 6 (2002).
[3] I. Foster, C. Kesselman (Eds.), The Grid: Blueprint for a Future Computing
Infrastructure, Morgan Kaufmann, San Francisco, USA, 1999.
[4] M. Chetty, R. Buyya, Weaving computational grids: How analogous are they with
electrical grids?, Computing in Science and Engineering 4 (4) (2002) 61–71.
[5] D.S. Milojicic, V. Kalogeraki, R. Lukose, K. Nagaraja, J. Pruyne, B. Richard, S. Rollins,
Z. Xu, Peer-to-peer computing, Technical Report HPL-2002-57R1, HP Laboratories,
Palo Alto, USA, 3 July 2003.
[6] D. Abramson, R. Buyya, J. Giddy, A computational economy for grid computing and its
implementation in the Nimrod-G resource broker, Future Generation Computer Systems
18 (8) (2002) 1061–1074.
[7] http://cordis.europa.eu/fp7/ict/ssai/docs/cloud-report-final.pdf
[8] http://www.south.cattelecom.com/Technologies/CloudComputing/0071626948_chap01.pdf
[9] http://www.thbs.com/downloads/Cloud-Computing-Overview.pdf
[10] https://www.priv.gc.ca/resource/fs-fi/02_05_d_51_cc_e.pdf
[11] http://www.tutorialspoint.com/cloud_computing/cloud_computing_tutorial.pdf
[12] http://www.aic.gov.au/media_library/publications/tandi_pdf/tandi400.pdf
[13] http://www.buyya.com/papers/Cloud-FGCS2009.pdf
[14] https://www.nsa.gov/research/tnw/tnw174/articles/pdfs/TNW_17_4_Web.pdf
[15] Dean J. and Ghemawat S. MapReduce: Simplified data processing on large clusters. Proceedings of Operating Systems Design and Implementation, December 2004.
Role of IOT in Smart Pune
Ms. Sonali Parkhi
(ASM’s IBMR-MCA)
sonaliparkhi@asmedu.org
Ms. Divya Kharya
(ASM’s IBMR-MCA)
divyakharya@asmedu.org
ABSTRACT:
Cities in India contribute over 60 per cent of the country’s GDP and 80 per cent of
tax revenues. The Government of India envisages new urbanisation and initiatives such
as the 100 smart cities project as a means to unshackle the economy from sub 8 per cent
growth. It is estimated that the demand for essential public services is expected to
increase by 4.5 to 8 times the existing demand. Among other things this involves
improving eco-efficiency, facilitating sustainable environments, offering optimized
transportation, good governance, enabling high-quality healthcare, improving security
and streamlining crisis-management responses.
The Internet of Things (IoT) shall be able to incorporate transparently and
seamlessly a large number of different and heterogeneous end systems, while providing
open access to selected subsets of data for the development of digital services. Building
a general architecture for the IoT is hence a very complex task, mainly because of the
extremely large variety of devices, link layer technologies, and services that may be
involved in such a system. In this paper, we focus specifically on an urban IoT system.
Urban IoTs, in fact, are designed to support the Smart City vision, which aims at
exploiting the most advanced communication technologies to support added-value
services for the administration of the city and for the citizens. This paper hence provides
a comprehensive survey of the enabling technologies, protocols, and architecture for
making Pune as a smart city. Furthermore, the paper will present and discuss the
technical solutions and best-practice guidelines adopted in the Tel Aviv Smart City
project, in Israel.
Keywords: IOT, Smart City, GDP, Added-value services
Introduction
The first question is what is meant by a ‘smart city’. The answer is, there is no
universally accepted definition of a Smart City. It means different things to different people.
The conceptualisation of Smart City, therefore, varies from city to city and country to country,
depending on the level of development, willingness to change and reform, resources and
aspirations of the city residents. A Smart City would have a different connotation in India
than, say, Europe. Even in India, there is no one way of defining a Smart City.
Some definitional boundaries are required to guide cities in the Mission. In the
imagination of any city dweller in India, the picture of a Smart City contains a wish list of
infrastructure and services that describes his or her level of aspiration. To provide for the
aspirations and needs of the citizens, urban planners ideally aim at developing the entire urban
eco-system, which is represented by the four pillars of comprehensive development —
institutional, physical, social and economic infrastructure. This can be a long term goal and
cities can work towards developing such comprehensive infrastructure incrementally, adding
on layers of ‘smartness’. In the approach to the Smart Cities Mission, the objective is to
promote cities that provide core infrastructure and give a decent quality of life to its citizens, a
clean and sustainable environment and application of ‘Smart’ Solutions. The focus is on
sustainable and inclusive development and the idea is to look at compact areas, create a
replicable model which will act like a lighthouse for other aspiring cities. The Smart Cities
Mission of the Government is a bold, new initiative. It is meant to set examples that can be
replicated both within and outside the Smart City, catalysing the creation of similar Smart
Cities in various regions and parts of the country.
The core infrastructure elements in a Smart City would include:
i. adequate water supply,
ii. assured electricity supply,
iii. sanitation, including solid waste management,
iv. efficient urban mobility and public transport,
v. affordable housing, especially for the poor,
vi. robust IT connectivity and digitalization,
vii. good governance, especially e-Governance and citizen participation,
viii. sustainable environment,
ix. safety and security of citizens, particularly women, children and the elderly, and
x. health and education.
As far as Smart Solutions are concerned, an illustrative list is given below. This is not,
however, an exhaustive list, and cities are free to add more applications. Accordingly, the
purpose of the Smart Cities Mission is to drive economic growth and improve the quality of
life of people by enabling local area development and harnessing technology, especially
technology that leads to Smart outcomes. Area-based development will transform existing
areas (retrofit and redevelop), including slums, into better planned ones, thereby improving
liveability of the whole City. New areas (greenfield) will be developed around cities in order
to accommodate the expanding population in urban areas. Application of Smart Solutions will
enable cities to use technology, information and data to improve infrastructure and services.
Comprehensive development in this way will improve quality of life, create employment and enhance incomes for all, especially the poor and the disadvantaged, leading to inclusive cities.
1. Research Methodology
We conducted an online survey among the citizens of Pune, including Pimpri-Chinchwad. The following tables show the results of the survey.
Ranking of important issues
Issue               Rank
Water               1
Waste management    2
Transport           3
Drainage            4

Transport
Issue                                                             Yes %   No %
Is there a lot of road congestion? (Y/N)                          61 %    39 %
Is the public mode of transport convenient and safe? (Y/N)        52 %    48 %
Are parking slots available easily around your area? (Y/N)        56 %    44 %

Walkability
Issue                                                             Yes %   No %
Are there enough footpaths for people/cyclists? (Y/N)             55 %    45 %

Water and Sewage
Issue                                                             Yes %   No %
Do you get regular water supply? (Y/N)                            62 %    38 %
Is there access to clean drinking water? (Y/N)                    54 %    46 %
Is there access to clean water? (Y/N)                             65 %    35 %
Are there water meters installed in your area to check that
people pay for the water they use? (Y/N)                          49 %    51 %
Is the sewage around your area managed well? (Y/N)                60 %    40 %

Sanitation and waste management
Issue                                                             Yes %   No %
Are your surroundings clean? (Y/N)                                60 %    40 %
Is there late/infrequent pick-up of garbage in your area? (Y/N)   60 %    40 %
Are waste-pickers visiting your households regularly? (Y/N)       58 %    42 %
Role of IOT
IoT-powered smart cities are improving the quality of life of citizens through a slew of
technological solutions. Among other things, this involves improving eco-efficiency, facilitating sustainable environments, offering optimized transportation, good governance, enabling high-quality healthcare, improving security and streamlining crisis-management responses [13]. The figure below shows IBM’s representation of the
kind of services available in a smart city.
One example of the wider benefits of an IoT driven approach to urbanization comes from the Global e-Sustainability Initiative's estimate that ICT technologies such as video conferencing and smart building management could potentially cut greenhouse gas (GHG) emissions by 16.5 per cent, leading to energy and fuel savings to the tune of 1.9 trillion USD and 9.1 Gigatonnes of CO2. The section below touches upon a few areas in which smart technologies can offer significant improvements in the quality, reliability and costs of services in cities, particularly in the Indian context.
3.1 GIS based urban planning
GIS systems allow spatial data management for cities, with mapping of utilities, services and resources below the ground as well as infrastructure and land-use above the ground. Using modelling technologies, software and satellite data, such systems allow integration of databases that capture three dimensional information. This leads to better quality urban planning that incorporates wider concerns. Linking GIS data with other land-use information (ownership, type of property, legal status, tax etc.) allows online delivery of targeted public services, as municipal authorities are better equipped to identify shortfalls in services, monitor revenue collection and stay abreast with changes in a property's or plot's attributes. GIS also makes the process of building and managing cities more inclusive as it facilitates better communication of plans and activities for citizens. The figure below gives an illustrative example of one approach to using GIS for smart city development.
Few organizations / municipalities tasked with city planning or city development have
access to GIS systems and even those that have some exposure to GIS do not use it extensively
or effectively. For example, spatial data and land-use are not captured in the same database.
Moreover, lack of expertise in GIS systems results in city planning on more primitive software
with the use of paper drawings. Major reasons for this are a lack of training in GIS for urban
planners working at the city level and absence of incentives or strictly enforced guidelines to
adopt GIS.
3.2 Water Management
An area where creating a GIS layer for a city can improve the quality of urbanization is
water. The International Telecommunications Union (United Nations) in its Technology
Watch report identifies the combination of sensor networks, internet communications and GIS
tools as having an important role in water management in the future. The report states these
technologies can be very beneficial to government authorities in efficiently managing the
water distribution network and water quality while reducing water consumption and wastage
in sectors such as agriculture and landscaping.
Indeed, this has been put into practice in cities such as Singapore and Masdar, where water is managed through a fully controlled network in a manner analogous to the electric grid: leakages are detected quickly and from remote locations, and different streams of water are collected, treated and added to the supply accordingly [17]. The proof of concept has led to the increasing use of terms such as ‘Water Grid’ and ‘Smart Water Management’ by companies providing IT enabled solutions. Singapore in particular has augmented its water supply through integrating water into urban planning and design while leveraging sensors, SCADA and water quality measurements to augment supply by cutting wastage. The figure below depicts one example of smart water distribution control (by Hitachi).
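A highly simplified version of the leakage-detection idea described above is to compare the metered inflow into a supply zone with the sum of consumer meter readings; a persistent gap suggests a leak. The Python sketch below uses assumed readings and an assumed tolerance, and is only an illustration of the principle, not the Hitachi or Singapore implementation.

# Simplified district-level leak check: compare the metered inflow into a
# supply zone with the sum of consumer meter readings. All figures are assumed.

def leak_suspected(inflow_litres: float,
                   consumer_readings: list,
                   tolerance: float = 0.05) -> bool:
    """Flag a leak if unaccounted-for water exceeds `tolerance` of the inflow."""
    unaccounted = inflow_litres - sum(consumer_readings)
    return unaccounted > tolerance * inflow_litres

if __name__ == "__main__":
    hourly_inflow = 12_000.0                       # litres supplied to the zone
    meters = [2_900.0, 3_100.0, 2_800.0, 2_400.0]  # litres recorded at consumers
    print("Leak suspected:", leak_suspected(hourly_inflow, meters))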
Improvements in water management are particularly significant in India which faces
high water stress i.e. high ratio of withdrawals to total renewable supply. The UN’s prediction
that water demand will exceed supply by 40 per cent by 2030 also makes water optimization a
key risk mitigation strategy for businesses as well.
3.3 Transportation
An area in which IoT has achieved considerable integration is transportation. Telemetry
and satellite data have transformed traffic management, not only on the road, but also in the air
and underground. Applications include real-time tracking for traffic management, services like radio-cabs, online maps to find the most efficient routes, smart parking that displays available spaces, time signalling on traffic lights, and scheduling information for trains. A popular
example is automatic smart card ticketing systems such as the London Oyster card – now
increasingly linked to contactless payment credit and debit cards, rather than dedicated cards.
This is a large scale application of IoT in transportation that offers savings in terms of time
and money by reducing the number of ticket booths and waiting time. It also provides
enormously valuable information about people’s travel habits – allowing much better
management and planning.
Another interesting example of combining telematics with eco-mobility comes from the
Indian company Mahindra that has created India’s first smartphone controlled car. Using an
app, it allows users to track the general performance of this electric car while controlling
features like Air Conditioning. The app also sends timely alerts to customers about damaged
parts.
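Smart parking, mentioned earlier in this subsection, essentially reduces to occupancy sensors reporting their state so that free slots can be displayed. The sketch below keeps only that aggregation logic; the bay identifiers and updates are assumed test data and no particular IoT protocol is implied.

# Toy smart-parking aggregator: track occupancy reports from bay sensors
# and list the free bays. Bay identifiers and updates are assumed test data.
from collections import defaultdict

class ParkingLot:
    def __init__(self) -> None:
        self.occupied = defaultdict(bool)   # bay_id -> currently occupied?

    def update(self, bay_id: str, occupied: bool) -> None:
        """Apply one occupancy report received from a bay sensor."""
        self.occupied[bay_id] = occupied

    def free_bays(self) -> list:
        return sorted(bay for bay, busy in self.occupied.items() if not busy)

if __name__ == "__main__":
    lot = ParkingLot()
    for bay, busy in [("A1", True), ("A2", False), ("B1", False), ("A2", True)]:
        lot.update(bay, busy)
    print("Free bays:", lot.free_bays())   # -> ['B1']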
3.4 Energy Management
Cities account for 75 per cent of the world’s total energy consumption and are responsible for 50 to 60 per cent of the world’s total greenhouse gas emissions – a figure that goes up to 80 per cent if we consider emissions due to urban inhabitants [23]. Moreover, this demand for energy also includes a strong desire for uninterrupted access to energy that is available 24x7.
Therefore, there is an urgent need for ambitious energy efficiency and low-carbon
energy programmes for cities. In this respect, the IoT has played a central role in the
development of solutions for improving energy management. Rapid growth in digital
technology, with transformations in the way energy is generated and consumed has led authors
to coin terms such as Energy 3.0 and the Energy Cloud. These are terms generally used to
represent IoT driven innovations such as smart grids, electrification of demand, demand
visualization and flexible generation that can help achieve desired outcomes from energy
infrastructure in cities. By providing insights into the energy consumption of each connected appliance, online monitoring, remote controls and tips to cut energy bills, such projects have succeeded in sensitizing people while identifying leaders and early adopters. The figure below shows
mapping of communications architecture on M2M architecture for smart grids developed by
the European Telecommunications Standards Institute (ETSI).
A similar initiative in India, was announced by IBM in 2013. The IT company has been
selected by Tata Power Delhi Distribution to conceptualize, design and deliver an advanced
smart grid solution to better manage energy output and further reduce outages. IBM will also
collect and analyze real-time information from smart meters and data from ICT infrastructure.
It is hoped that the solution will help Tata Power empower its over 1.3 million electricity consumers to manage their own energy usage [27].
Such energy efficiency measures through smart ICT are vital in the Indian context as it
is estimated that India can save $42 billion every year by reducing energy waste in buildings.
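The appliance-level insight described above amounts to aggregating interval readings from smart meters. The sketch below totals assumed 30-minute readings (in kWh) per appliance and ranks the heaviest consumers; the appliance names and values are illustrative and are not data from the Tata Power project.

# Aggregate assumed 30-minute smart-meter readings (kWh) per appliance and
# rank consumption, mimicking the per-appliance insight described above.

readings = {
    "air_conditioner": [0.9, 1.1, 1.0, 0.8],
    "water_heater":    [0.0, 1.5, 1.4, 0.0],
    "refrigerator":    [0.1, 0.1, 0.1, 0.1],
}

totals = {appliance: sum(kwh) for appliance, kwh in readings.items()}
for appliance, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{appliance:16s} {total:4.1f} kWh")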
3.5 Buildings
Another equally significant area is solutions for building low-energy residential and commercial developments in our cities. It is estimated that the buildings sector in India is responsible for 40 per cent of energy use, 30 per cent of raw material use, 20 per cent of water use, and 20 per cent of land use in cities [29]. Therefore, solutions that can help realize the
objectives of policies such as the Energy Conservation Building Codes are much needed.
The ICT sector has the expertise to facilitate delivery of sustainable buildings through solutions for simulation, modelling, analysis, monitoring and visualisation – all vitally needed for a whole-building approach to designing and operating buildings. With India’s microclimate drastically different from that of Western economies, there is a need to combine these technologies with indigenous expertise.
3.6 Healthcare
Among the broad areas under ‘smart city’ concepts, the use of ‘connected technologies’
in healthcare has been one of the most advanced. Today there are several applications, apps
and technologies targeting common citizens, patients as well as healthcare professionals, offering a variety of services for health, fitness, lifestyle education, monitoring and management of key parameters (heart rate, perspiration, blood oxygen etc.), ambient assisted living, continuing professional education tools, and public health surveillance.
The figure below presents a conceptual schematic of a ‘smart shirt’ - embedded with
sensors to measure vital parameters - along with IoT and M2M technologies to deliver remote
and efficient healthcare.
With monitoring systems and interactive patient data communication methods estimated to reduce face-to-face visits by 40 per cent, the economic benefits of smart healthcare in a smart city extend to reduced travel, time savings (for healthcare professionals and patients) and reduced risks such as exposure to hospital infections. This makes smart
healthcare highly desirable in Indian cities that witness high prevalence of infectious diseases
and growing lifestyle related ailments such as diabetes.
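Remote monitoring of vital parameters, as with the 'smart shirt' above, ultimately rests on simple rules applied to a stream of sensor readings. The sketch below raises an alert when an assumed heart-rate band is breached for several consecutive samples; the thresholds are illustrative assumptions and not clinical guidance.

# Illustrative remote-monitoring rule: alert when the heart rate stays outside
# an assumed normal band for 3 consecutive samples. Not clinical guidance.

def alerts(samples: list, low: int = 50, high: int = 120, run: int = 3) -> list:
    """Return the indices at which an out-of-range run of length `run` completes."""
    out, streak = [], 0
    for i, bpm in enumerate(samples):
        streak = streak + 1 if (bpm < low or bpm > high) else 0
        if streak >= run:
            out.append(i)
    return out

if __name__ == "__main__":
    heart_rate = [72, 75, 130, 135, 140, 90, 88]   # assumed sensor stream (bpm)
    print("Alerts raised at sample indices:", alerts(heart_rate))   # -> [4]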
3.7 Security
A straightforward application of infrastructure for collecting and transmitting real-time
data is public security through a surveillance system. The figure below (from sensity) gives
some examples of ways in which smart city technologies such as digital communication,
CCTVs, sensors, controllers and analytics can improve public safety and security.
3.8 E-governance
An emerging yet significant aspect of good governance is smart administration that employs new channels of communication such as e-governance, which can allow faster delivery of services and a wider reach. Besides extending the reach of governance, e-governance is viewed as a prerequisite for enabling smart cities by involving citizens and keeping the
decision and implementation process transparent. India, in particular, has shown a favourable
predisposition towards e-governance. There is tremendous potential to increase the coverage,
quality and range of public services.
Conclusion:
In summary, one vision of the future is that IoT becomes a utility with increased
sophistication in sensing, actuation, communications, control, and in creating knowledge from
vast amounts of data. This will result in qualitatively different lifestyles from today. What the
lifestyles would be is anyone’s guess. It would be fair to say that we cannot predict how lives
will change. We did not predict the Internet, the Web, social networking, Facebook, Twitter,
millions of apps for smartphones, etc., and these have all qualitatively changed societies’
lifestyle. New research problems arise due to the large scale of devices, the connection of the
physical and cyber worlds, the openness of the systems of systems, and continuing problems
of privacy and security.
REFERENCES:
[1] Andrea Zanella, Nicola Bui, Angelo Castellani, Lorenzo Vangelista, and Michele Zorzi, “Internet of Things for Smart Cities”, IEEE Internet of Things Journal, Vol. 1, No. 1, February 2014.
[2] John A. Stankovic, “Research Directions for the Internet of Things”.
[3] Hitachi. Intelligent Water System for Realizing a Smart City. [Online] Available: http://www.hitachi.com/products/smartcity/smart-infrastructure/water/images/solution_img06.jpg
[4] Boulos, M. and Al-Shorbaji, N. On the Internet of Things, smart cities and the WHO Healthy Cities. International Journal of Health Geographics, Mar 2014.
[5] Amsterdam Smart City. Smart Traffic Management. [Online] Available: http://amsterdamsmartcity.com/projects/detail/id/58/slug/smart-traffic-management (2014)
[6] City of New York. PlaNYC – Green Buildings & Energy Efficiency. [Online] Available: http://www.nyc.gov/html/gbee/html/home/home.shtml
Blue Brain
Dipali Tawar
ASM’s IBMR
dipali.tawar@asmedu.org
ABSTRACT:
The Blue Brain Project is the first comprehensive attempt to reverse-engineer the
mammalian brain, in order to understand brain function and dysfunction through
detailed simulations.
Blue Brain is the name of the world’s first virtual brain, that is, a machine that can function as a human brain. Computer simulations in neuroscience hold the promise of dramatically enhancing the scientific method by providing a means to test hypotheses using predictive models of complex biological processes where experiments are not feasible. Of course, simulations are only as good as the quality of the data and the accuracy of the mathematical abstraction of the biological processes. The first phase of the Blue Brain Project therefore started after 15 years of systematically dissecting the microanatomical, genetic and electrical properties of the elementary unit of the neocortex, a single neocortical column, which is a little larger than the head of a pin. Today scientists are researching how to create an artificial brain that can think, respond, take decisions, and keep things in memory. The main aim is to upload the human brain into a machine, so that a person can think and take decisions without any effort. After the death of the body, the virtual brain will act as the person. So, even after the death of a person, we will not lose the knowledge, intelligence, personality, feelings and memories of that person, which can be used for the development of human society.
Introduction
No one has ever fully understood the complexity of the human brain. It is more complex than any circuitry in the world. So, the question arises: is it really possible to create a human brain? The answer is yes, because whatever man has created, he has always followed nature. Before the device called the computer existed, building one was a big question for all, but today it is possible thanks to technology, which is growing faster than anything else. IBM is now researching how to create a virtual brain, called the Blue Brain. If it succeeds, this would be the first virtual brain of the world.
What is Blue Brain?
IBM is now developing a virtual brain known as the Blue Brain. It would be the world’s first virtual brain. Within 30 years, we will be able to scan ourselves into computers. Is this the beginning of eternal life?
What is Virtual Brain?
We can say a virtual brain is an artificial brain, which is not actually the natural brain but can act as the brain. It can think like a brain, take decisions based on past experience, and respond as the natural brain can. It is made possible by using a supercomputer with a huge amount of storage capacity and processing power, and an interface between the human brain and
this artificial one. Through this interface, the data stored in the natural brain can be uploaded into the computer, so the brain and the knowledge and intelligence of anyone can be kept and used forever, even after the death of the person.
Why do we need a virtual brain?
• Today we are developed because of our intelligence. Intelligence is an inborn quality that cannot be created. Some people have this quality, so they can think to an extent that others cannot reach. Human society is always in need of such intelligence and such intelligent brains. But this intelligence is lost along with the body after death. The virtual brain is a solution to this: the brain and its intelligence will stay alive even after death.
• We often face difficulties in remembering things such as people’s names, their birthdays, the spellings of words, proper grammar, important dates, historical facts, and so on. In a busy life, everyone wants to be relaxed. Can we not use a machine to assist with all of this? The virtual brain may be the solution. What if we uploaded ourselves into a computer, were simply aware of a computer, or perhaps lived in a computer as a program?
Working of brain
• First, it is helpful to describe the basic manner in which a person may be uploaded into
a computer. Raymond Kurzweil recently provided an interesting paper on this topic. In
it, he describes both invasive and noninvasive techniques. The most promising is the use
of very small robots, or nanobots. These robots will be small enough to travel
throughout our circulatory systems. Traveling into the spine and brain, they will be able
to monitor the activity and structure of our central nervous system. They will be able to
provide an interface with computers that is as close as our mind can be while we still
reside in our biological form. Nanobots could also carefully scan the structure of our
brain, providing a complete readout of the connections between each neuron. They
would also record the current state of the brain. This information, when entered into a
computer, could then continue to function as us. All that is required is a computer with
large enough storage space and processing power. Is the pattern and state of neuron
connections in our brain truly all that makes up our conscious selves? Many people firmly believe that we possess a soul, while some very technical people believe that quantum forces contribute to our awareness. But we now have to think technically. Note,
however, that we need not know how the brain actually functions, to transfer it to a
computer. We need only know the media and contents. The actual mystery of how we
achieved consciousness in the first place, or how we maintain it, is a separate discussion.
• Really, this concept appears very difficult and complex to us. To understand it, we first have to know how the human brain actually works.
How does the natural brain work?
• The human ability to feel, interpret and even see is controlled, in computer-like calculations, by the magical nervous system. Yes, the nervous system is quite like magic, because we cannot see it, yet it works through electric impulses travelling through your body.
• One of the world’s most “intricately organized” electronic mechanisms is the nervous system. Not even engineers have come close to making circuit boards and computers as
delicate and precise as the nervous system. To understand this system, one has to know
the three simple functions that it puts into action: sensory input, integration, motor
output.
1) Sensory input: When our eyes see something or our hands touch a warm surface, the
sensory cells, also known as Neurons, send a message straight to your brain. This action
of getting information from your surrounding environment is called sensory input
because we are putting things in your brain by way of your senses.
2) Integration: Integration is best known as the interpretation of things we have felt, tasted,
and touched with our sensory cells, also known as neurons, into responses that the body
recognizes. This process is all accomplished in the brain where many, many neurons
work together to understand the environment.
3) Motor Output: Once our brain has interpreted all that we have learned, either by touching, tasting, or using any other sense, it sends a message through neurons to effector cells, muscle or gland cells, which actually work to perform our requests and act upon our environment. The term motor output is easily remembered if one thinks of putting something out into the environment through the use of a motor, like a muscle, which does the work for our body.
How do we see, hear, feel, smell, and take decisions?
1) Nose: Once the smell of food has reached your nose, which is lined with hairs, it travels
to an olfactory bulb, a set of sensory nerves. The nerve impulses travel through the
olfactory tract, around, in a circular way, the thalamus, and finally to the smell sensory
cortex of our brain, located between our eye and ear, where it is interpreted to be
understood and memorized by the body.
2) Eye: Seeing is one of the most pleasing senses of the nervous system. This cherished action is primarily conducted by the lens, which magnifies a seen image, the vitreous disc, which bends and rotates the image against the retina, and the retina itself, which translates the image and light via a set of cells. The retina is at the back of the eyeball, where rod and cone structures along with other cells and tissues convert the image into nerve impulses, which are transmitted along the optic nerve to the brain, where it is kept in memory.
3) Tongue: A set of microscopic buds on the tongue divide everything we eat and drink
into four kinds of taste: bitter, sour, salty, and sweet. These buds have taste pores, which
convert the taste into a nerve impulse and send the impulse to the brain by a sensory
nerve fiber. Upon receiving the message, our brain classifies the different kinds of taste.
This is how we can refer the taste of one kind of food to another.
4) Ear: Once the sound or sound wave has entered the eardrum, it goes to a large structure called the cochlea. In this snail-like structure, the sound waves are divided into pitches. The vibrations of the pitches in the cochlea are measured by the organ of Corti, which transmits the vibration information to a nerve, which sends it to the brain for interpretation and memory.
Brain simulation
Now the question is how to implement this entire natural thing by using artificial things.
Here is a comparative discussion.
1. Input
In the nervous system of our body, the neurons are responsible for message passing. The body receives input through the sensory cells. These sensory cells produce electric impulses, which are picked up by the neurons and transferred to the brain.
2. Interpretation
The electric impulses received from the neurons are interpreted in the brain. This interpretation is accomplished by means of the particular states of many neurons.
3. Output
Based on the states of the neurons, the brain sends out electric impulses representing the responses, which are received by the sensory cells of our body so that it can respond. Which part of the body receives the response depends on the state of the neurons in the brain at that time.
4. Memory
There are certain neurons in our brain which represent certain states permanently. When required, these states are interpreted by the brain and we can remember past events. To remember something, we force the neurons to represent certain states of the brain permanently; for any interesting or serious matter, this happens implicitly.
5. Processing
When we take a decision, think about something, or make a computation, logical and arithmetic calculations are done in our neural circuitry. Stored past experience and the current input are used, and the states of certain neurons are changed to give the output.
1. Input
In a similar way, an artificial nervous system can be created. Scientists have already created artificial neurons by replacing biological neurons with silicon chips. It has also been tested that these artificial neurons can receive input from the sensory cells. So, the electric impulses from the sensory cells can be received through these artificial neurons and sent to a supercomputer for interpretation.
2. Interpretation
The interpretation of the electric impulses received by the artificial neurons can be done by means of a set of registers. The different values in these registers will represent the different states of the brain.
3. Output
Similarly, based on the states of the registers, the output signal can be given to the artificial neurons in the body, which will pass it on to the sensory cells.
4. Memory
Data can be stored permanently using secondary memory. In the same way, the required states of the registers can be stored permanently, and when required this information can be retrieved and used.
5. Processing
In a similar way, decision making can be done by the computer, using the stored states and the received input and performing arithmetic and logical calculations. (A small illustrative sketch of this pipeline is given below.)
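The mapping above can be made concrete with a minimal sketch in Python. Every class, method and threshold below is a hypothetical illustration of the five stages (input, interpretation via register states, output, memory, processing), not part of any existing system.

# Illustrative sketch (Python) of the five-stage artificial "brain" pipeline
# described above. All names and values are hypothetical.
class ArtificialBrain:
    def __init__(self):
        self.registers = {}          # interpretation: register values stand for brain states
        self.secondary_memory = {}   # memory: states stored permanently

    def receive_input(self, sensor_id, impulse):
        # 1. Input: an electric impulse from an artificial neuron arrives as a number
        self.registers[sensor_id] = impulse

    def interpret(self, sensor_id):
        # 2. Interpretation: a register value is mapped to a named state
        return "active" if self.registers.get(sensor_id, 0.0) > 0.5 else "idle"

    def respond(self, sensor_id):
        # 3. Output: a response signal is produced from the current state
        return 1.0 if self.interpret(sensor_id) == "active" else 0.0

    def remember(self, label, sensor_id):
        # 4. Memory: the current register state is saved to secondary storage
        self.secondary_memory[label] = self.registers.get(sensor_id, 0.0)

    def process(self, sensor_id, past_label):
        # 5. Processing: combine stored experience with the current input
        past = self.secondary_memory.get(past_label, 0.0)
        return (past + self.registers.get(sensor_id, 0.0)) / 2

brain = ArtificialBrain()
brain.receive_input("touch", 0.8)
brain.remember("warm_surface", "touch")
print(brain.interpret("touch"), brain.respond("touch"), brain.process("touch", "warm_surface"))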
Now there is no question about how the virtual brain will work. The question is how the human brain will be uploaded into it. This is also possible thanks to fast-growing technology.
Uploading human brain
The uploading is possible by the use of small robots known as the Nanobots. These
robots are small enough to travel throughout our circulatory system. Traveling into the spine
and brain, they will be able to monitor the activity and structure of our central nervous system.
They will be able to provide an interface with computers that is as close as our mind can be
while we still reside in our biological form. Nanobots could also carefully scan the structure of
our brain, providing a complete readout of the connections. This information, when entered
into a computer, could then continue to function as us. Thus the data stored in the entire brain
will be uploaded into the computer.
Current research work
1) IBM, in partnership with scientists at Switzerland’s Ecole Polytechnique Federale de
Lausanne’s (EPFL) Brain and Mind Institute will begin simulating the brain’s biological
systems and output the data as a working 3-dimensional model that will recreate the
high-speed electro-chemical interactions that take place within the brain’s interior. These
include cognitive functions such as language, learning, perception and memory in
addition to brain malfunction such as psychiatric disorders like depression and autism.
From there, the modeling will expand to other regions of the brain and, if successful,
shed light on the relationships between genetic, molecular and cognitive functions of the
brain.
2) Researchers at Microsoft's Media Presence Lab are developing a "virtual brain," a PC-based database that holds a record of an individual's complete life experience. Called MyLifeBits, the project aims to make this database of human memories searchable in the manner of a conventional search engine. "By 2047, almost all information will be in cyberspace, including all knowledge and creative works," said one of the project's leaders, Gordon Bell.
3) According to a New Scientist magazine report, Rodrigo Laje and Gabriel Mindlin of the University of Buenos Aires in Argentina have devised a computer model of a region of the brain called the RA nucleus, which controls the muscles in the lungs and vocal folds.
The model brain can accurately echo the song of a South American sparrow. The birds sing by forcing air from their lungs past folds of tissue in the voice box. The electric impulses from the brain that drive the lungs were recorded, and when the equivalent impulses were passed to the computer model of the bird's lungs, it began to sing like the bird.
Mr. Mindlin told the weekly science magazine he was surprised that simple instructions
from the brain change a constant signal into a complex series of bursts to produce the
intricacies of birdsong.
He plans to add more brain power to his model which might reveal how birds improve
their songs and learn them from other birds.
He hopes it might one day be possible to use similar models to map the neural [brain]
circuitry of animals without distressing lab experiments – just by recording their calls and
movements, the magazine said.
Advantages:
1. We can remember things without any effort.
2. Decisions can be made without the presence of the person.
3. Even after the death of a person, his or her intelligence can still be used.
4. The activity of different animals can be understood; that is, by interpreting the electric impulses from the brains of animals, their thinking can be understood easily.
5. It would allow the deaf to hear via direct nerve stimulation, and would also be helpful for many psychological diseases. By downloading the contents of the brain that was uploaded into the computer, a person could be relieved of such illnesses.
Disadvantages:
Further, these technologies will open up many new dangers, and we will become susceptible to new forms of harm.
1. We become dependent upon computer systems.
2. Others may use technical knowledge against us.
3. Computer viruses will pose an increasingly critical threat.
4. The real threat, however, is the fear that people will have of new technologies. That fear may culminate in large-scale resistance. Clear evidence of this type of fear is found today with respect to human cloning.
Hardware and Software Requirements
1. A supercomputer.
2. Memory with a very large storage capacity.
3. A processor with very high processing power.
4. A very wide network.
5. A program to convert the electric impulses from the brain into input signals that can be received by the computer, and vice versa.
6. Very powerful nanobots to act as the interface between the natural brain and the computer.
Conclusions:
In conclusion, we will be able to transfer ourselves into computers at some point. Will
consciousness emerge? We really do not know. If consciousness arises because of some
critical mass of interactions, then it may be possible. But we really do not understand what
consciousness actually is, so it is difficult to say. Most arguments against this outcome are
seemingly easy to circumvent. They are either simple minded, or simply require further time
for technology to increase. The only serious threats raised are also overcome as we note the
combination of biological and digital technologies.
Nanotechnology and its Applications in Information Technology
J.K.Lahir
Professor.
ASM’S IBMR MCA.
jklahir@rediffmail.com
Swati Jadhav
Asst. Professor.
ASM’S IBMR MCA.
swatijadhav@asmedu.org
ABSTRACT:
Nanotechnology is the development and use of techniques to study physical phenomena and construct structures in the physical size range of 1-100 nanometers (nm), as well as the incorporation of these structures into applications. Nanotechnology in Information Technology has two distinct domains:
1) The use of nanomaterials to create smaller, faster, more efficient memory for use in computers.
2) The replacement of current computer devices with computers using advanced quantum computer technology.
The development of nanoscale memristors is a current advance that is able to keep memory states, and data, in power-off modes. Being on the nanoscale, this is significant because it will not require power to maintain the memory state. Memristors are nonlinear electronic devices and they have been used in computer data storage. Memristors could lead to far more energy-efficient computers with the pattern-matching abilities of the human brain.
This paper focuses on memristors, graphene transistors and phase-change memories, which are smaller, faster and more efficient. Memristors can be switched from a low-resistance state to a high-resistance state by applying a short voltage pulse or a short current pulse. The application of this will be of immense use in Information Technology.
Keywords: Memristors, Graphene Transistors, Quantum Mechanics, Nanometers
Introduction:
Nanotechnology in Information Technology
Information technology is an important and rapidly growing industrial sector with a high rate of innovation. Enormous progress has been made in the transition from traditional electronics to nanoelectronics. Nanotechnology has created a tremendous change in information technology.
Breakthrough areas
Breakthroughs in information technology due to nanotechnology can happen in two steps.
The first step is a top-down miniaturization approach, which will take conventional microstructures across the boundary into nanotechnology.
Secondly, in the longer term, bottom-up nanoelectronics and engineering will emerge, using technologies such as self-organization processes to assemble circuits and systems.
Developments
Developments are taking place in ultra-integrated electronics combined with powerful wireless technology as low-priced mass products, ultra-miniaturization, the design of innovative sensors, the production of cheap and powerful polytronic circuits, and novel system architectures using nanotechnology for future DNA computing, which interfaces with biochemical processes, and quantum computing, which can solve problems for which there are no efficient classical algorithms. Due to the development of nanoelectronic components, quantum cryptography for military and intelligence applications is also emerging.
Memory storage
Memory storage before the advent of nanotechnology relied on transistors, but now reconfigurable arrays are formed for storing large amounts of data in a small space. For example, we can expect to see the introduction of magnetic RAMs and resonant tunnel elements in logical circuits in the near future. Every single nanobit of a memory storage device is used for storing information. Molecular electronics based on carbon nanotubes or organic macromolecules will also be used.
Semiconductors
Nano-amplification and chip embedding are used for building semiconductor devices which can maintain and control the electron flow. Integrated nanocircuits are used in silicon chips to reduce the size of processors. Approaches promising success in the
medium term include, for example, rapid single-flux quantum (RSFQ) logic and single-electron transistors.
Display and audio devices
The picture quality and resolution of display devices have improved with the help of nanotechnology. Nanopixelation of these devices makes the picture feel real. Similarly, frequency modulation in audio devices has been digitized down to the billionth bit of the signal.
Data processing and transmission
In the field of data processing and transmission, the development of electronic, optical and optoelectronic components is expected to lead to lower-cost or more precise processes in manufacturing technology. Nanoscale logic and storage components are being developed for the currently dominant CMOS technology using quantum dots and carbon nanotubes. Photonic crystals have potential for use in purely optical circuits as a basis for future information processing based solely on light (photonics). In molecular electronics, nanotechnology can be used to assemble electronic components with new characteristics at the atomic level, with advantages including potentially high packing density. Smaller, faster and better components based on quantum mechanical effects, new architectures and a new biochemical computing concept called DNA computing are possible with nanotechnology. A new phenomenon, called the "quantum mirage" effect, may enable data transfer within future nanoscale electronic circuits too small to use wires.
Future Nanotechnology Areas
Nanotechnology is the next industrial revolution and the telecommunications industry will be radically transformed by it in the future. Nanotechnology has already begun to revolutionize the telecommunications, computing, and networking industries. The emerging innovative technologies are:
• Nanomaterials with novel optical, electrical, and magnetic properties.
• Faster and smaller non-silicon-based chipsets, memory, and processors.
• New-science computers based on quantum computing.
• Advanced microscopy and manufacturing systems.
• Faster and smaller telecom switches, including optical switches.
• Higher-speed transmission phenomena based on plasmonics and other quantum-level phenomena.
• Nanoscale MEMS: micro-electro-mechanical systems.
Nanotechnology is the development and use of techniques to study physical phenomena and construct structures in the physical size range of 1-100 nanometers (nm), as well as the incorporation of these structures into applications.
This paper indicates that nanotechnology is still very much under development. It has a multidisciplinary character and, therefore, it is difficult to plan future skill needs, especially at the intermediate level.
As far as scientists and specialists with tertiary education are concerned, a clear message is that India has a shortage of specialists, and this shortage is expected to increase in the future.
There is a need for monitoring intermediate skills, and lessons could be learned from the experience of other new and emerging technologies.
As soon as nanotechnology goes into mass production, the shortage of skills at the intermediate level of occupations will become obvious. The debate on ethical and legal questions has brought about the conclusion that the general risks and the social impact of nanotechnology are difficult to predict and that more research in this field is required.
Nanotechnology in Information Technology has two distinct domains:
1) The use of nanomaterials to create smaller, faster, more efficient memory for use in
computers.
2) The replacement of current computer devices with computers using advanced quantum
computer technology.
When the word 'nanotechnology' was coined in the 1980s, it was about building machines on the scale of molecules, a few nanometers wide: motors, robot arms, and even whole computers, far smaller than a cell.
Nanotechnology, in its traditional sense, means building things from the bottom up, with atomic precision. It has been said that we could build a billion tiny factories, models of one another, all manufacturing simultaneously. The principles of physics, as far as can be seen, do not speak against the possibility of maneuvering things atom by atom. It is not an attempt to violate any laws; it is something that, in principle, can be done, but in practice it has not been done because we are too big.
Four Generations
The National Nanotechnology Initiative has described four generations of nanotechnology development.
The current generation is that of passive nanostructures, materials designed to perform one task.
The second generation introduces active nanostructures for multitasking, for example, actuators, drug delivery devices, and sensors.
The third generation, expected to begin emerging around 2010, features nanosystems with thousands of interacting components.
A few years later, the first integrated nanosystems, functioning (according to Roco) much like a mammalian cell with hierarchical systems within systems, are expected to be developed.
Dual-Use Technology
Like electricity, electronics or computers before it, nanotechnology will offer greatly improved efficiency in almost every facet of life. But as a general-purpose technology it will be dual-use, meaning it will have many commercial uses and also many military uses, making possible far more powerful weapons and tools of surveillance. Thus it represents not only wonderful benefits for humanity, but also grave risks.
A key understanding of nanotechnology is that it offers not just better products, but a
vastly improved manufacturing process. A computer can make copies of data files—
essentially as many copies as we want at little or no cost. It may be only a matter of time until
the building of products becomes as cheap as the copying of files. That's the real meaning of
nanotechnology, and why it is sometimes seen as "The next Industrial Revolution."
The power of nanotechnology can be encapsulated in an apparently simple device called a personal nanofactory that can be placed on a desktop or countertop. Packed with miniature chemical processors, computing, and robotics, it will produce a wide range of items quickly, cleanly, and inexpensively, building products directly from working drawings.
Some experts may still insist that nanotechnology can refer to measurement or visualization at the scale of 1-100 nanometers, but a consensus seems to be forming around the idea that control and restructuring of matter at the nanoscale is a necessary element. The definition is a bit more precise than that, but as work progresses through the four generations of nanotechnology leading up to molecular nanosystems, which will include manufacturing, it becomes obvious that "engineering of functional systems at the molecular scale" is what nanotechnology is really all about.
Conflicting Definitions
The conflicting definitions of nanotechnology and blurry distinctions between
significantly different fields have complicated the effort to understand the differences and
develop a sensible, effective policy.
The risks of today's nanoscale technologies (nanoparticle toxicity, etc.) cannot be treated
the same as the risks of longer-term molecular manufacturing (economic disruption, unstable
arms race, etc.). It is a mistake to put them together in one bracket for policy consideration—
each is important to address, but they offer different problems and will require different
solutions. As used today, the term nanotechnology usually refers to a broad collection of
mostly disconnected fields. Essentially, anything sufficiently small and interesting can be
called nanotechnology.
General-Purpose Technology
Nanotechnology is sometimes referred to as a general-purpose technology. That's
because in its advanced form it will have significant impact on almost all industries and all
areas of society. It will offer better built, longer lasting, cleaner, safer, and smarter products
for the home, for communications, for medicine, for transportation, for agriculture, and for
industry in general.
Imagine a medical device that travels through the human body to seek out and destroy
small clusters of cancerous cells before they can spread. Or a box no larger than a sugar cube
that contains the entire contents of a library, or materials much lighter than steel that possess ten times as much strength.
Our own judgment is that the nanotechnology revolution has the potential to change the
world on a scale equal to, if not greater than, the computer revolution.
Nano memory
Memory chips are critical components in any computing technology. Prices have dropped dramatically over the last few years, but memory is still a significant part of the overall cost of many devices - especially ebook tablets and music players.
Once again, nanowire technology is likely to have a big impact in the next generation of
memory circuitry. Researchers in Hewlett Packard’s labs have been working with nanowires
coated with titanium dioxide. When another nanowire is laid in a perpendicular fashion over a
group of these coated nanowires, a 'memristor' device is created at each junction. The switch formed at each memristor junction can represent a binary one or zero value.
Meanwhile, IBM researchers are using magnetic nanowires made from an iron and
nickel alloy. They plan to place hundreds of millions of these nanowires onto a silicon
substrate to increase memory storage capacities. And researchers at Rice University are
layering 5nm diameter silicon dioxide nanowires in a triple-layer sandwich to achieve
dramatic increases in information storage.
Other labs are trying a different approach by fabricating arrays of magnetic particles
called ‘nanodots’, each around 6nm in diameter. Many billions of nanodots will be needed for
a single memory device. The magnetic charge level of each dot is read to determine if it
represents a binary zero or one value.
Every one of these organisations is in a race to find the best solution, yet developing
ground-breaking nanotechnology isn’t enough. To be successful it must be backed up by
equally innovative manufacturing techniques and technologies, able to fabricate billions of
high-density, low-cost memory chips.
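As a rough illustration of how such a crossbar junction memory can be addressed, here is a minimal conceptual sketch; the simple one-bit-per-junction model, the class and the method names are hypothetical, not a description of any of the devices above.

# Conceptual sketch of a nanowire crossbar memory: each junction of a row
# nanowire and a column nanowire stores one bit as a resistance state.
class CrossbarMemory:
    LOW_R, HIGH_R = 0, 1   # low resistance ~ binary 1, high resistance ~ binary 0 (a modelling choice)

    def __init__(self, rows, cols):
        self.junctions = [[CrossbarMemory.HIGH_R] * cols for _ in range(rows)]

    def write(self, row, col, bit):
        # A write pulse across (row, col) switches that junction's resistance state.
        self.junctions[row][col] = CrossbarMemory.LOW_R if bit else CrossbarMemory.HIGH_R

    def read(self, row, col):
        # Reading senses the resistance at the selected junction.
        return 1 if self.junctions[row][col] == CrossbarMemory.LOW_R else 0

mem = CrossbarMemory(4, 4)
mem.write(2, 3, 1)
print(mem.read(2, 3), mem.read(0, 0))   # -> 1 0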
New memristor technology could bring us closer to brain-like computing
Researchers are always searching for improved technologies, but the most efficient
computer possible already exists. It can learn and adapt without needing to be programmed or
updated. It has nearly limitless memory, is difficult to crash, and works at extremely fast
speeds. It's not a Mac or a PC; it's the human brain. And scientists around the world want to
mimic its abilities.
Both academic and industrial laboratories are working to develop computers that operate
more like the human brain. Instead of operating like a conventional, digital system, these new
devices could potentially function more like a network of neurons.
"Computers are very impressive in many ways, but they're not equal to the mind," said
Mark Hersam, the Bette and Neison Harris Chair in Teaching Excellence in Northwestern
University's McCormick School of Engineering. "Neurons can achieve very complicated
computation with very low power consumption compared to a digital computer."
A team of Northwestern researchers has accomplished a new step forward in electronics
that could bring brain-like computing closer to reality. The team's work advances memory
resistors, or "memristors," which are resistors in a circuit that "remember" how much current
has flowed through them.
"Memristors could be used as a memory element in an integrated circuit or computer it is said
that "Unlike other memories that exist today in modern electronics, memristors are stable and
remember their state even if power goes off."
Current computers use random access memory (RAM), which moves very quickly as a
user works but does not retain unsaved data if power is lost. Flash drives, on the other hand,
store information when they are not powered but work much slower. Memristors could
provide a memory that is the best of both worlds: fast and reliable. But there's a problem:
memristors are two-terminal electronic devices, which can only control one voltage channel. The researchers therefore tried to transform the memristor into a three-terminal device, allowing it to be used in more complex electronic circuits and systems.
It became possible to meet this challenge by using single-layer molybdenum disulfide
(MoS2), an atomically thin, two-dimensional nanomaterial semiconductor. Much like the way
fibers are arranged in wood, atoms are arranged in a certain direction--called "grains"--within
a material. The sheet of MoS2 that was used has a well-defined grain boundary, which is the
interface where two different grains come together.
"Because the atoms are not in the same orientation, there are unsatisfied chemical bonds
at that interface these grain boundaries influence the flow of current, so they can serve as a
means of tuning resistance.
When a large electric field is applied, the grain boundary literally moves, causing a
change in resistance. By using MoS2 with this grain boundary defect instead of the typical metal-oxide-metal memristor structure, a novel three-terminal memristive device was prepared that is widely tunable with a gate electrode.
"With a memristor that can be tuned with a third electrode, we have the possibility to
realize a function that could not previously be achieved, It is said. "A three-terminal
memristor has been proposed as a means of realizing brain-like computing. Researchers are
now actively exploring this possibility in the laboratory to achive this."
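The switching described in this section can be illustrated with the well-known linear ion-drift memristor model (after Strukov et al.). This is a generic textbook model with arbitrary illustrative parameter values; it does not describe the MoS2 device discussed above.

# Illustrative simulation of a linear ion-drift memristor model (after Strukov et al.).
# The model and its parameter values are illustrative only.
R_ON, R_OFF = 100.0, 16000.0   # fully-on / fully-off resistance (ohms)
D = 10e-9                      # device thickness (m)
MU_V = 1e-14                   # dopant mobility (m^2 V^-1 s^-1)
dt = 1e-3                      # integration time step (s)

def memristance(w):
    # Resistance is a weighted mix of the doped (low-R) and undoped (high-R) regions.
    return R_ON * (w / D) + R_OFF * (1 - w / D)

w = 0.1 * D                    # width of the doped region (the device's state)
for step in range(2000):
    v = 1.2 if step < 1000 else -1.2        # a positive pulse, then a negative pulse
    i = v / memristance(w)                  # current through the device
    w += MU_V * (R_ON / D) * i * dt         # state drifts with the charge that flows
    w = min(max(w, 0.0), D)                 # keep the state inside the device
    if step == 999:
        print("after +1.2 V pulse:", round(memristance(w)), "ohms (low-resistance state)")
print("after -1.2 V pulse:", round(memristance(w)), "ohms (high-resistance state)")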
Graphene
In 2004, at the University of Manchester, Andre Geim and Konstantin Novoselov successfully produced sheets of graphene, for which they were awarded the 2010 Nobel Prize in Physics. A two-dimensional graphene sheet is the thickness of a single carbon atom, namely 0.22nm, so nearly five million graphene layers are required to achieve a thickness of one millimetre. Amazingly, though, the discovery process centred around a lump of naturally occurring graphite.
In structure, a graphene layer is rather like a flattened nanotube and makes for an
incredibly strong material with a high tensile strength (meaning it can be stretched without
breaking). For example, experiments suggest a multi-layer sheet of graphene the thickness of a
piece of cling film could support the weight of a fully grown elephant. When mixed in with
other materials, say in an epoxy resin, the resulting composite will be far stronger than a
similar material infused with bucky balls or nanotubes.
Yet once again it's graphene's electrical properties that particularly excite researchers. The speed of electron flow across a graphene sheet is superior to that of any other known material and a full one thousand times faster than in copper. One goal is to replace the de facto standard
silicon wafer technology with graphene sheets to deliver incredibly fast switching transistors and high-speed integrated circuitry.
Faster transistors
Modern microprocessors contain hundreds of millions of field-effect transistors (FETs). These FETs act as tiny electronic switches using a substrate-insulator-gate sandwich. A reduction in FET size means higher processing speeds combined with lower power consumption, plus the potential to squeeze millions more onto a single integrated circuit die.
CPU manufacturers apply great ingenuity to constantly miniaturise these transistors, and the FETs in today's processors are already tiny. Today's 32nm or 22nm structure sizes already fall well within the sub-100nm nanotechnology range.
To maintain this momentum, manufacturers are now investigating a modified FET structure that rises from the substrate layer like a fin, hence the moniker 'finFET'. Chips constructed with finFET transistors should keep sizes shrinking, with transistor sizes of 14nm a distinct possibility.
Nanowire technology is seen by many as the next big step. With nanowires, the FET takes the form of a cylinder, with the insulator and gate wrapped around the central nanowire core. This construction allows greater control of the voltage than a finFET component, and the vertical orientation of nanowire FETs allows them to be densely stacked together, creating a nanowire FET forest numbered in billions.
Better screens
Researchers at various universities and some Asia-based manufacturing companies have
already demonstrated their ability to use nanowires as the electrodes in organic light-emitting
diode (OLED) displays. The next generation of OLEDs will include robust nanowire
technology to create extremely thin, lightweight screens. Importantly, these displays would
require far less power to display the same levels of brightness, contrast and colour saturation.
Unlike the previous OLED components, nanowires exhibit an ability to flex while
continuing to operate. If the nanowires are deposited onto plastic sheets, the whole display will
be able to bend. Flexible displays would be a revolutionary technology for gadget designers,
who could envisage all kinds of novel and innovative scenarios. Foldable e-readers, bendy
iPads and screens that wrap around your wrist are just some of the possibilities. Also, flexible
devices will be far less likely to suffer damage after a heavy impact or drop.
Unfortunately, at present this advanced display technology requires expensive manufacturing techniques. However, it's only a matter of time before engineers crack the mass production issues, and our handheld and pocketable devices take on a new and futuristic appearance.
Nano power
One of the best funded and most promising areas of nanotechnology research involves
improving power storage. When applied to battery technology, it will lead to gadgets, portable
computers and electric cars that weigh less, run for longer and charge much more rapidly.
So what does nanotechnology offer in this area? Well, as the particle size decreases, the number of particles within a specific volume increases. Greater numbers of anode and cathode particles mean an increase in effective surface area and a more efficient battery. In other
words, adding nanowires, nanotubes and other nanoparticles to batteries will deliver greater
storage capacities or smaller and lighter batteries with the same capacity.
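A rough back-of-the-envelope sketch illustrates the surface-area argument: for a fixed electrode volume V divided into spherical particles of radius r, the total surface area is 3V/r, so smaller particles give proportionally more surface. The volume and radii below are arbitrary example values.

# Total surface area of a fixed volume of material divided into spheres of radius r:
# N = V / (4/3*pi*r^3) particles, each with area 4*pi*r^2, so A_total = 3*V / r.
import math

V = 1e-6  # total electrode volume in cubic metres (example value)

def total_surface_area(radius_m):
    n_particles = V / ((4.0 / 3.0) * math.pi * radius_m ** 3)
    return n_particles * 4.0 * math.pi * radius_m ** 2   # equals 3*V/radius

micro = total_surface_area(1e-6)    # 1 micrometre particles
nano = total_surface_area(50e-9)    # 50 nanometre particles
print(f"area ratio nano/micro: {nano / micro:.0f}x")     # -> 20x for these radii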
Research reports suggest it should be possible to achieve a tenfold increase in power
density. This kind of progress would dramatically decrease battery size and radically change
the look of our laptops, tablets and mobile phones.
Rapid recharge times are just as important as charge capacity. Owning a portable device whose battery doesn't last all day, or an electric car that has a range of just 150 kilometres, is far more acceptable if the battery can be recharged in a minute or two.
Fast recharge technology research looks to be well advanced. In August 2012, there
were news reports that Korean scientists have created a nanoparticle-enhanced lithium-ion
battery, which can be recharged 30 to 102 times quicker than a typical lithium-ion battery.
Such reports suggest the 60-second battery recharge scenario is a distinct possibility.
Conclusion:
The fast development of nanotechnology is often described as a fundamental technological revolution, comparable to the discovery of antibiotics, television, nuclear weapons, or computer technologies. This paper discussed the current situation and the development potential of this
technology as such. The aim was to focus on approaches and first results to identify future
skill requirements and new emerging occupations in Information Technology.
Nanotechnology is a cross-sectoral and highly interdisciplinary field and its development
brings along several completely new tasks, and even jobs and occupations whose
requirements have to be identified and transferred into education and training without delay.
The paper attempted to tackle some of the concerns in the area of skill needs.
The paper gives the message that nanotechnology is still very much under development. It has a multidisciplinary character and, therefore, it is difficult to plan the future courses of action which are needed, especially at the intermediate level. As far as specialists and scientists with tertiary education are concerned, a clear message is that our country has a shortage of specialists, and this shortage is expected to increase in the future; as such, further research will be required in exploring nanotechnology.
Bibliography:
• Drexler (1991), "Unbounding the Future: The Nanotechnology Revolution"
• Crawford, Robert, "The Emerging Science of Nanotechnology"
• VosGonz, MethewJohns (June 2009), "Nanotechnology Science"
Websites:
• www.nanotechnologyinstitute.org
• www.nanomagazine.com
• http://nanozine.com
• http://wikipedia.com
Learning Analytics with Bigdata to Enhance Student Performance with
Specific Reference to Higher Education Sector in Pune
Prof. Sumita Katkar
MCA, MBA, PhD (Appeared)
ASM's IBMR, Chinchwad

Prof. Neeta N. Parate
BSc (Computer Appl.), MCA, PhD (Appeared)
ASM's IBMR, Chinchwad
ABSTRACT :
This study is dedicated to academic institutions and aims at improving the performance of students as well as enhancing the quality of the teaching-learning process through the use of modern tools and techniques such as learning analytics and the big data concept. The study focuses on the higher education sector in the Pune-PCMC region. The researchers are currently carrying out a literature survey on the subject to understand the research work done in this area in different countries, to learn and analyze the uses and limitations of the existing work, and to implement this new technology to improve and enhance student performance as well as to make the teaching-learning process more effective and efficient so that it meets industry standards.
1. Introduction:
Learning is a product of interaction and the transfer of knowledge. It depends on the learner's ability and the teacher's capacity to interact with the students so that the students acquire the skills and knowledge needed to enhance their performance in their field of work. Traditional approaches to interaction with students involve the evaluation of students through the analysis of grades and teachers' perceptions. Consequently, the evaluation and analysis of learning is limited by the small quantity of data shared between students and teachers, by its retrospective nature, and by the significant delay between the events being reported and the implementation of an intervention. As educational resources move online, an unprecedented amount of data surrounding these interactions becomes readily available, and LMS and CMS platforms are being used by learners and teachers alongside various forums and social networking sites.
Learning analytics is an emerging field in which analytical tools are used to improve learning and teaching. It is closely related to studies such as web analytics, business intelligence, educational data mining and action analytics. Learning analytics is the measurement, collection, analysis and reporting of data about learners and their teachers for the purpose of understanding and optimizing learning and the environment in which it takes place.
Learning analytics techniques can be used to develop a deeper understanding of the way students learn, to recommend personalized learning plans, and to identify early warnings. Such rich data analytics and feedback enable a process of continuous improvement, through which it can be worked out how to improve education so that it is useful to society and business at large.
Learning and academic analytics in higher education are normally used to predict success by examining how and what students learn and how success in their field of work is supported by academic programs and institutions. The focus is on the measurement, collection, analysis and reporting of data as drivers of departmental processes and programme curricula. The learning analytics process will inform a model of continuous improvement, which will be examined in the context of institutions of higher education, which normally prepare
students for innovative studies and action-oriented business so as to improve the present social and economic level of society. Learning analytics will prepare students to fill in the gaps for their future performance in their field of work.
Learning analytics is the measurement, collection, analysis and reporting of data about
learners and their contexts, for purposes of understanding and optimising learning and the
environments in which it occurs.
It has been pointed out that there is a broad awareness of analytics across educational institutions for various stakeholders, but that the way 'learning analytics' is defined and implemented may vary, including:
1. For individual learners to reflect on their achievements and patterns of behaviour in relation to others;
2. As predictors of students requiring extra support and attention;
3. To help teachers and support staff plan supporting interventions with individuals and groups;
4. For functional groups such as course teams seeking to improve current courses or develop new curriculum offerings; and
5. For institutional administrators taking decisions on matters such as marketing and recruitment or efficiency and effectiveness measures.
2. History of the Techniques and Methods of Learning Analytics:
In a discussion of the history of analytics, Cooper highlights a number of communities from which learning analytics draws techniques, including:
1. Statistics - a well-established means to address hypothesis testing.
2. Business Intelligence - which has similarities with learning analytics, although it has historically been targeted at making the production of reports more efficient through enabling data access and summarising performance indicators.
3. Web analytics - tools such as Google Analytics report on web page visits and references to websites, brands and other key terms across the internet. The finer-grained of these techniques can be adopted in learning analytics for the exploration of student trajectories through learning resources (courses, materials, etc.).
4. Operational research - aims at highlighting design optimisation for maximising objectives through the use of mathematical models and statistical methods. Such techniques are implicated in learning analytics which seek to create models of real-world behaviour for practical application.
5. Artificial intelligence and data mining - machine learning techniques built on data mining and AI methods are capable of detecting patterns in data. In learning analytics such techniques can be used for intelligent tutoring systems, classification of students in more dynamic ways than simple demographic factors, and resources such as 'suggested course' systems modelled on collaborative filtering techniques (an illustrative classification sketch is given after this list).
6. Social network analysis - SNA analyses relationships between people by exploring implicit (e.g. interactions on forums) and explicit (e.g. 'friends' or 'followers') ties, online and offline. SNA developed from the work of sociologists like Wellman and Watts, and mathematicians like Barabasi and Strogatz. The work of these individuals has provided us with a good sense of the patterns that networks exhibit (small world, power laws), the attributes of connections (in the early 70s, Granovetter explored connections from a
perspective of tie strength and impact on new information), and the social dimensions of networks (for example, geography still matters in a digital networked world). It is particularly used to explore clusters of networks, influence networks, engagement and disengagement, and has been deployed for these purposes in learning analytics contexts.
7. Information visualization - visualisation is an important step in many analytics for sense-making around the data provided; it is thus used across most techniques (including those above).
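To illustrate the data-mining point in item 5 above, here is a minimal sketch of classifying students as 'at risk' from activity features. The feature names, the tiny hand-made training set, and the use of scikit-learn's logistic regression are illustrative assumptions, not part of any of the works discussed here.

# Illustrative sketch: classify at-risk students from LMS activity features.
# The features, labels, and values are hypothetical examples.
from sklearn.linear_model import LogisticRegression

# Each row: [logins per week, hours in course, assignments submitted]
X_train = [
    [1, 0.5, 0], [2, 1.0, 1], [0, 0.0, 0], [3, 2.0, 1],   # labelled at risk
    [8, 6.0, 4], [10, 7.5, 5], [7, 5.0, 3], [9, 8.0, 5],  # labelled not at risk
]
y_train = [1, 1, 1, 1, 0, 0, 0, 0]   # 1 = at risk, 0 = not at risk

model = LogisticRegression()
model.fit(X_train, y_train)

new_students = [[2, 1.5, 0], [9, 6.5, 4]]
print(model.predict(new_students))          # predicted at-risk flags
print(model.predict_proba(new_students))    # class probabilities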
2.1 Literature Review:
1) Big Data and Learning Analytics in Blended Learning Environments: Benefits and Concerns
Anthony G. Picciano, Graduate Center and Hunter College, City University of New York (CUNY)
The purpose of this article is to examine big data and learning analytics in blended learning environments. It examines the nature of these concepts, provides basic definitions, and identifies the benefits and concerns that apply to their development and implementation. The article draws on concepts associated with data-driven decision making, which evolved in the 1980s and 1990s, and takes a sober look at big data and analytics. It does not present them as panaceas for all of the issues and decisions faced by higher education administrators, but sees them as part of solutions, although not without significant investments of time and money to achieve worthwhile benefits. Among the strategies to be considered was the effective use of learning analytics to profile students and track their learning achievements in order to:
• Identify at-risk students in a timely manner
• Monitor student persistence on a regular basis
• Develop an evidence base for program planning
• Develop learner support strategies
• Identify outliers for early intervention
• Predict potential so that all students achieve optimally
• Prevent attrition from a course or program
• Identify and develop effective instructional techniques
• Analyze standard assessment techniques and instruments (i.e. departmental and licensing exams)
• Test and evaluate curricula.
2) Enhancing Teaching and Learning Through Educational Data Mining and Learning Analytics: An Issue Brief - U.S. Department of Education Office of Educational Technology
In data mining and data analytics, tools and techniques once confined to research
laboratories are being adopted by forward-looking industries to generate business
intelligence for improving decision making. Higher education institutions are beginning
to use analytics for improving the services they provide and for increasing student grades
and retention. The U.S. Department of Education’s National Education Technology Plan,
as one part of its model for 21st-century learning powered by technology, envisions ways
of using data from online learning systems to improve instruction. With analytics and
data mining experiments in education starting to proliferate, sorting out fact from fiction
and identifying research possibilities and practical applications are not easy. This issue
brief is intended to help policymakers and administrators understand how analytics and
data mining have been—and can be—applied for educational improvement. At present,
educational data mining tends to focus on developing new tools for discovering patterns
in data. These patterns are generally about the micro concepts involved in learning: one
digit multiplication, subtraction with carries, and so on. Learning analytics—at least as it
is currently contrasted with data mining—focuses on applying tools and techniques at
larger scales, such as in courses and at schools and postsecondary institutions. But both
disciplines work with patterns and prediction: If we can discern the pattern in the data
and make sense of what is happening, we can predict what should come next and take
the appropriate action. Educational data mining and learning analytics are used to
research and build models in several areas that can influence online learning systems.
3) Learning Analytics: Envisioning a Research Discipline and a Domain of Practice - George Siemens, Technology Enhanced Knowledge Research Institute, Athabasca University
Learning analytics are rapidly being implemented in different educational settings, often
without the guidance of a research base. Vendors incorporate analytics practices, models,
and algorithms from data mining, business intelligence, and the emerging “big data”
fields. Researchers, in contrast, have built up a substantial base of techniques for
analyzing discourse, social networks, sentiments, predictive models, and semantic content (i.e., "intelligent" curriculum). In spite of the currently limited knowledge
exchange and dialogue between researchers, vendors, and practitioners, existing learning
analytics implementations indicate significant potential for generating novel insight into
learning and vital educational practices. This paper presents an integrated and holistic
vision for advancing learning analytics as a research discipline and a domain of
practices. Potential areas of collaboration and overlap are presented with the intent of
increasing the impact of analytics on teaching, learning, and the education system.
3. Objectives:
• To study the existing infrastructure.
• To design and develop a learning analytics framework.
• To identify the key assessment factors.
• To implement the learning analytics framework in an education institute.
4. Research Methodology:
Applied Research and Experimental Research:
Applied research is designed to solve practical problems of the modern world, rather than to acquire knowledge for knowledge's sake. The goal of applied research is to improve the human condition. It focuses on analysing and solving social and real-life problems. This research is generally conducted on a large-scale basis and it is expensive.
The methodology of learning analytics includes the gathering of data, which is derived from the students and the learning environment in which they participate, and the intelligent
analysis of this data for drawing conclusions regarding the degree of the participation of
students in the forums and how this affects learning.
The main goal of this methodology is to understand and optimize the learning processes
and also to improve the environments in which these processes occur.
On a daily basis, a large amount of data is gathered through the participation of students
in e-learning environments. This wealth of data is an invaluable asset to researchers as they
can utilize it in order to generate conclusions and identify hidden patterns and trends by using
big data analytics techniques. The purpose of this study is a threefold analysis of the data related to the participation of students in the online forums of their university. On the one hand, the content of the messages posted in these fora can be efficiently analyzed by text mining techniques. On the other hand, the network of students interacting through a forum can be adequately processed through social network analysis techniques. Furthermore, the combined knowledge attained from both of the aforementioned techniques can provide educators with practical and valuable information for the evaluation of the learning process, especially in a distance learning environment. The study was conducted using real data originating from the online forums of the Hellenic Open University (HOU). The analysis of the data has been accomplished using the R and Weka tools, in order to analyze the structure and the content of the exchanged messages in these fora, as well as to model the interaction of the students in the discussion threads.
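As an illustration of the social network analysis step (the HOU study itself used R and Weka), the sketch below uses Python's networkx purely as an example; the forum reply pairs are made up.

# Illustrative sketch: build a student interaction network from forum reply
# pairs and rank students by degree centrality. The data below is made up.
import networkx as nx

# Each tuple: (student who replied, student who was replied to)
replies = [("s1", "s2"), ("s1", "s3"), ("s2", "s3"),
           ("s4", "s1"), ("s5", "s1"), ("s5", "s2")]

G = nx.Graph()
G.add_edges_from(replies)

centrality = nx.degree_centrality(G)
most_engaged = sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)
print(most_engaged[:3])   # the three most connected participants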
5. Significance of the Study
Learning analytics will be beneficial for the following:
• Institutional administrators taking decisions on matters such as marketing and recruitment or efficiency and effectiveness measures.
• Individual learners, to reflect on their achievements and patterns of behaviour in relation to others.
• Teachers and support staff, to plan supporting interventions with individuals and groups.
• Functional groups such as course teams seeking to improve current courses or develop new curriculum offerings.
Learning impact analysis, with a designed and developed learning analytics framework, can provide insight into the investments an organization makes in training and development. It is important in maintaining an engaged and sustainable workforce and in aligning the workforce with the strategic objectives of the business. Learning analytics combines the analytics and reporting capabilities of workforce analytics with learning metrics standards to deliver insight to learning personnel, HR and learning professionals so that they can understand the impact of training in their organization. Learning analytics helps in understanding how, where, when and to whom training skills are imparted and, through this understanding, in measuring the efficiency of internal and external teaching sources and the overall impact of training experiences on organizational personnel. It helps the organization pinpoint where training resources are focused and which types of course frameworks are emphasized, providing insight into the connection between educational curricula and human capital strategies. Learning analytics thus provides answers to the questions a business organization seeks to answer.
However, learning impact analysis also can provide insights into investments in training
and development made by an organization. This is critically important in maintaining an
engaged and sustainable workforce, and in aligning your workforce with the strategic objectives of your organization. Success Factors Learning Analytics combines the analytic and reporting capabilities of Success Factors Workforce Analytics with learning metrics standards to deliver powerful insight to Chief Learning Officers, HR, and Learning Professionals who want to better understand the impact of training across their
organizations. Learning Analytics helps you understand how, where, and to whom training is
delivered, and through this understanding, you can measure the efficiency of internal and
external training sources and the overall impact of training experiences on employees.
Learning Analytics can also help organizations pinpoint where training resources are focused
and what types of course are emphasized, providing insights into the connection between
training curricula and human capital strategies. Learning Analytics addresses common
learning questions that organizations seek to answer.
Learning Analytics delivers metrics related to the volume, type, and effectiveness of the
training courses provided by your organization, as well as metrics on the mix of attendees,
by organization structure, employment type, tenure, gender, or age group. Examples of metrics
include: training hours per employee/event/FTE, training penetration and productivity rates,
and training course cancellation and completion rates. Every measure and metric delivered
with Learning Analytics comes with a clear definition and the formula for how it is calculated
to ensure a standard and mutual understanding across all stakeholders. Segmenting learning
metrics by employee, program, and organizational dimensions enables internal benchmarking
and more precise interventions.
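The kinds of metrics mentioned above can be computed straightforwardly once training records are tabulated; a minimal sketch with pandas follows, where the column names and records are hypothetical.

# Illustrative sketch: compute simple training metrics from hypothetical records.
import pandas as pd

records = pd.DataFrame({
    "employee":   ["e1", "e1", "e2", "e3", "e3", "e3"],
    "department": ["IT", "IT", "HR", "IT", "IT", "IT"],
    "hours":      [4, 2, 6, 3, 5, 2],
    "completed":  [True, True, False, True, True, False],
})

hours_per_employee = records.groupby("employee")["hours"].sum()
completion_rate = records["completed"].mean()
hours_by_department = records.groupby("department")["hours"].sum()

print(hours_per_employee)
print(f"overall completion rate: {completion_rate:.0%}")
print(hours_by_department)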
Among the strategies to be considered was the effective use of learning analytics to profile students and track their learning achievements in order to:
• Identify at-risk students in a timely manner
• Monitor student persistence on a regular basis
• Develop an evidence base for program planning and learner support strategies.
Learning analytics can have significant benefits in monitoring student performance and
progress. First, and at its most basic level, learning analytics software can mine down to the
frequency with which individual students access a CMS/LMS, how much time they are
spending in a course, and the number and nature of instructional interactions. These
interactions can be categorized into assessments (tests, assignments, or exercises), content
(articles, videos, or simulations viewed) and collaborative activities (blogs, discussion groups,
or wikis).
Second, by providing detailed data on instructional interactions, learning analytics can
significantly improve academic advisement related directly to teaching and learning. Learning
analytics can improve the ability to identify at-risk students and intervene at the first indication
of trouble. Furthermore by linking instructional activities with other student information
system data (college readiness, gender, age, major), learning analytics software is able to
review performance across the organizational hierarchy: from the student, to courses, to
department, to the entire college. It can provide insights into individual students as well as the
learning patterns of various cohorts of students.
Third, learning analytics software is able to provide longitudinal analysis that can lead to
predictive behaviour studies and patterns. By linking CMS/LMS databases with an
institution’s information system, data can be collected over time. Student and course data can
be aggregated and disaggregated to analyze patterns at multiple levels of the institution. This
would allow for predictive modelling that, in turn, can create and establish student outcome alert systems and intervention strategies.
In sum, learning analytics can become an important element in identifying students who are at risk and alerting advisors and faculty to take appropriate actions. Furthermore, it can do so longitudinally across the institution and can uncover patterns to improve student retention that, in turn, can assist in academic planning.
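A minimal sketch of the kind of longitudinal early-warning check described above is given below; the weekly activity counts, the student IDs and the "declining for two consecutive weeks" rule are all hypothetical illustrations.

# Illustrative sketch: flag students whose weekly LMS activity keeps declining.
# The activity data and the flag rule are hypothetical.
weekly_activity = {
    "s1": [12, 10, 4, 2],   # interactions per week (assessments + content + collaboration)
    "s2": [8, 9, 11, 10],
    "s3": [15, 7, 5, 1],
}

def is_declining(series, weeks=2):
    # True if activity fell in each of the last `weeks` week-to-week comparisons.
    recent = series[-(weeks + 1):]
    return all(later < earlier for earlier, later in zip(recent, recent[1:]))

early_warnings = [s for s, series in weekly_activity.items() if is_declining(series)]
print("students to alert advisors about:", early_warnings)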
6. Hypothesis:
Learning analytics is the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs.
Hypothesis 1:
H0: Learning analytics does not analyse the performance of the student to meet the requirements of the industry.
H1: Learning analytics analyses and gives insight into the student so as to be in line with industry needs or requirements.
Hypothesis 2:
H0: The learning analytics framework does not help students in improving their performance.
H1: The learning analytics framework helps students in improving their performance.
Hypothesis 3:
H0: Learning analytics does not analyse the performance of the student to meet the industry needs.
H1: Learning analytics analyses the performance of the student to meet the industry needs.
Hypothesis 4:
H0: Learning analytics does not provide complete feedback to the students towards improving academic performance.
H1: Learning analytics provides complete feedback to the students towards improving academic performance.
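Once data for Hypothesis 2 is collected, it could be tested with a standard two-sample test; the sketch below only shows the mechanics, with made-up score lists standing in for real measurements and a conventional 0.05 significance level assumed.

# Illustrative sketch: two-sample t-test for Hypothesis 2 (framework vs. no framework).
# The score lists are placeholders, not real study data.
from scipy import stats

scores_with_framework = [72, 68, 75, 80, 66, 74, 79, 71]
scores_without_framework = [65, 70, 62, 68, 60, 66, 69, 63]

t_stat, p_value = stats.ttest_ind(scores_with_framework, scores_without_framework)
alpha = 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0 at the 0.05 level for these placeholder data.")
else:
    print("Fail to reject H0 at the 0.05 level.")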
Research Plan :
Phase 1: Literature Review
A literature review is an evaluative report of information found in the literature related to
your selected area of study. The review should describe, summarise, evaluate and clarify
this literature. It should give a theoretical base for the research and help determine the nature
of your research.
A literature review is a section of a scholarly paper which presents the current knowledge, including substantive findings as well as theoretical and methodological contributions to a particular topic.
Phase 2: Research Planning and Design
When designing a study, many methodological decisions need to be made that influence
the overall quality of the study and the ability to generalize results to other populations. In this
section, the following topics will be presented:
• Validity
• Research Design
• Study Population
• Sampling Plan
• Data Collection Methods
• Finalizing the Research Plan
Phase 3: Data Gathering
7. Types of data collection:
Generally there are three:
1. Surveys: Standardized paper-and-pencil or phone questionnaires that ask predetermined questions.
2. Interviews: Structured or unstructured one-on-one directed conversations with key individuals or leaders in a community.
3. Focus groups: Structured interviews with small groups of like individuals using standardized questions, follow-up questions, and exploration of other topics that arise to better understand participants.
Consequences of improperly collected data include:
• Inability to answer research questions accurately.
• Inability to repeat and validate the study.
Phase 4: Hypothesis
A hypothesis is a tentative statement about the relationship between two or
more variables. A hypothesis is a specific, testable prediction about what you expect to happen
in your study. Unless you are creating a study that is exploratory in nature, your hypothesis
should always explain what you expect to happen during the course of your experiment or
research.
Phase 5: Processing and analysing the data
The acquisition of samples and the transformation of data into knowledge is no longer a
simple matter of providing lists of names and summary tables. Each type of research objective
imposes different kinds of requirements and offers different possibilities for consideration.
From the beginning, data gathering is approached as an attempt to match research
methodologies, sample acquisition strategies, sample types and information requirements to
the research objectives, rather than just filling a quota. The nature of the research objective
affects who you ask, what you ask them, and how you get the data.
Phase 6: Conclusion and Report
• Summarize the main points you made in your introduction and review of the literature.
• Review (very briefly) the research methods and/or design you employed.
• Repeat (in abbreviated form) your findings.
• Discuss the broader implications of those findings.
• Mention the limitations of your research (due to its scope or its weaknesses).
• Offer suggestions for future research related to yours.
8. REFERENCES:
[1] Research Methodology - C.R. Kothari.
[2] http://link.springer.com/chapter/10.1007%2F978-3-319-07064-3_24
[3] http://www.firmex.com/thedealroom/7-big-data-techniques-that-create-business-value/
[4] http://www.laceproject.eu/faqs/learning-analytics/
[5] http://www.palgrave.com/studentstudyskills/page/choosing-appropriate-research-methodologies/
[6] https://www.edsurge.com/n/2014-10-30-opinion-personalization-possibilities-and-challenges-with-learning-analytics
[7] http://www.academia.edu/8519730/A_Learning_Analytics_Methodology_For_Student_Profiling
[8] http://kmel-journal.org/ojs/index.php/online-publication/article/viewFile/196/148
[9] http://archive2.cra.org/ccc/files/docs/learning-analytics-ed.pdf.
[10] http://archive2.cra.org/ccc/files/docs/learning-analytic
Solution on Mobile Device Security Threats
Ritu Likhitkar
ASM IBMR, Pune
rl@asmedu.org
Arti Tandon,
ASM IBMR,Pune
arti@asmedu.org
Rimple Ahuja
ASM IBMR,Pune
rimplrahuja@asmedu.org
ABSTRACT:
Mobile devices face a number of threats that pose a significant risk to corporate data.
Like desktops, smartphones and tablet PCs are susceptible to digital attacks, but they
are also highly vulnerable to physical attacks given their portability. Here is an
overview of the various mobile device security threats and the risks they pose to
corporate assets. Mobile malware – Smartphones and tablets are susceptible to worms,
viruses, Trojans and spyware similarly to desktops. Mobile malware can steal sensitive
data, rack up long distance phone charges and collect user data. High-profile mobile
malware infections are few, but that is likely to change. In addition, attackers can use
mobile malware to carry out targeted attacks against mobile device users.
Eavesdropping – Carrier-based wireless networks have good link-level security but lack
end-to-end upper-layer security. Data sent from the client to an enterprise server is often
unencrypted, allowing intruders to eavesdrop on users’ sensitive communications.
Keywords: Android, Security, Threat.
Introduction:
Android is a modern mobile platform that was designed to be truly open. Android
applications make use of advanced hardware and software, as well as local and served data,
exposed through the platform to bring innovation and value to consumers. Securing an open
platform requires a robust security architecture and rigorous security programs. Android was
designed with multi-layered security that provides the flexibility required for an open
platform, while providing protection for all users of the platform. Unauthorized access – Users
often store login credentials for applications on their mobile devices, making access to
corporate resources only a click or tap away. In this manner unauthorized users can easily
access corporate email accounts and applications, social media networks and more.
• Theft and loss – Couple mobile devices' small form factor with PC-grade processing
power and storage, and you have a high risk for data loss. Users store a significant
amount of sensitive corporate data–such as business email, customer databases,
corporate presentations and business plans–on their mobile devices. It only takes one
hurried user to leave their iPhone in a taxicab for a significant data loss incident to
occur.
Unlicensed and unmanaged applications – Unlicensed applications can cost your
company in legal costs. But whether or not applications are licensed, they must be updated
regularly to fix vulnerabilities that could be exploited to gain unauthorized access or steal data.
Keeping a static infrastructure protected is challenging enough, but these days you also
have to make sure laptops, notebooks, net books, tablets, PDAs, and phones are all up to snuff.
In addition, users are making connections with these devices that are likely not secure. But
don't fret: There are several ways for administrators to mitigate the risks posed by mobile
devices and remote connectivity. In the first part of this series, we'll explore what these risks
are. Then, in Part 2, we'll find the solutions.
Security risks:
The first step is to identify technologies and practices that can put your company at risk.
I've categorized these risks into the following areas:
• Unauthorized network access
• Unsecured or unlicensed applications
• Viruses, malware, spyware, etc.
• Social engineering
Loss of data or devices:
We've all seen the news reports about laptops being lost or stolen -- along with names,
Social Security numbers and financial data. Several sources claim that 12,000 laptops are lost
at major airports each week: Simple math extends that to more than 600,000 laptops per year.
One report says that, of that number, only 30% of the machines are recovered by the owner, and
half of the owners say their laptops contain sensitive customer data or business information.
And now that PDAs and smartphones can store more data, the problem will only get worse.
But mobile devices aren't the only things that can be lost or stolen. It's also important to
plan for the physical security of servers. If an intruder can get physical access to a server --
either to hack a login or to steal the disk drive of a domain controller -- it makes their job
much easier.
Benefits of Mobile Security (Mobile Threat Prevention)
Prevent cyber attacks that use mobile devices
• Scan and detect corporate-issued or employee-owned Android and iOS mobile devices for malicious apps and activity
• Block malicious mobile apps from running and alert users and administrators of apps that threaten mobile security
• Correlate activity across apps to detect malicious behaviour
Detect mobile security vulnerabilities and trends
• Track user registration, device compliance and threat trends using a virtual dashboard
• Pre-analyze App Store apps, providing threat scores and behavioural details for over 3 million apps
• Display departmental and user mobile threat trends
Respond to incidents faster
• Integrate with mobile device management (MDM) solutions to support a detect-to-fix model
• Share FireEye threat intelligence from multiple vectors and stop targeted multi-vector cyber attacks
Typical Attacks Leverage Portability and Similarity to PCs:
Mobile phones share many of the vulnerabilities of PCs. However, the attributes that
make mobile phones easy to carry, use, and modify open them to a range of attacks.
Perhaps most simply, the very portability of mobile phones and PDAs makes them easy
to steal. The owner of a stolen phone could lose all the data stored on it, from personal
identifiers to financial and corporate data. Worse, a sophisticated attacker with enough time
can defeat most security features of mobile phones and gain access to any information they
store.
Many seemingly legitimate software applications, or apps, are malicious.
Anyone can develop apps for some of the most popular mobile operating systems, and
mobile service providers may offer third-party apps with little or no evaluation of their safety.
Sources that are not affiliated with mobile service providers may also offer unregulated
apps that access locked phone capabilities. Some users “root” or “jailbreak” their devices,
bypassing operating system lockout features to install these apps.
Even legitimate smartphone software can be exploited. Mobile phone software and
network services have vulnerabilities, just like their PC counterparts do. For years, attackers
have exploited mobile phone software to eavesdrop, crash phone software, or conduct other attacks.
A user may trigger such an attack through some explicit action, such as clicking a
maliciously designed link that exploits a vulnerability in a web browser. A user may also be
exposed to attack passively, however, simply by using a device that has a vulnerable
application or network service running in the background.
Phishing attacks use electronic communications to trick users into installing malicious
software or giving away sensitive information. Email phishing is a common attack on PCs,
and it is just as dangerous on email-enabled mobile phones.
Mobile device policies:
A mobile device policy is a written document that outlines the organization’s strategy
for allowing tablet PCs and smartphones to connect to the corporate network. A mobile
device policy covers who gets a mobile device, who pays for it, what constitutes acceptable
use, user responsibilities, penalties for non-compliance, and the range of devices and operating
systems the IT organization supports. In order to make these decisions, it is important that
management understands what data is sensitive, whether data is regulated and the impact
mobile devices will have on that data.
Encryption for mobile devices:
Encrypting data at rest and in motion helps prevent data loss and successful
eavesdropping attempts on mobile devices. Carrier networks have good encryption of the air
link, but the rest of the value chain between the client and enterprise server remains open
unless explicitly managed. Contemporary tablet PCs and smartphones can secure Web and
email with SSL/TLS, Wi-Fi with WPA2 and corporate data with mobile VPN clients. The
primary challenge facing IT organizations is ensuring proper configuration and enforcement,
as well as protecting credentials and configurations to prevent reuse on unauthorized devices.
Data at rest can be protected with self-protecting applications that store email messages,
contacts and calendars inside encrypted containers. These containers separate business data
from personal data, making it easier to wipe business data should the device become lost or
stolen.
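As a rough sketch of the encrypted-container idea for data at rest (an assumption-based illustration, not a vendor implementation), the Python cryptography package's Fernet recipe can stand in for the container's symmetric cipher:

# Illustrative sketch (assumption: the 'cryptography' package is available):
# keeping business e-mail data at rest inside an encrypted container so it can
# be wiped or rendered unreadable independently of personal data.
from cryptography.fernet import Fernet

# In practice the key would come from device key storage, not be generated inline.
key = Fernet.generate_key()
container = Fernet(key)

business_email = b"Subject: Q3 customer list\nBody: confidential figures..."
encrypted_blob = container.encrypt(business_email)   # store this on the device

# Only a holder of the key can read the container contents.
print(container.decrypt(encrypted_blob).decode())

# "Wiping" business data can then amount to destroying the key,
# leaving personal data on the device untouched.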
Restrict NT LAN Manager (NTLM) authentication. A lot of applications still use it, but it
is not secure. Use a smart card and similar authentication mechanisms with two-factor
authentication. For example, a smart card or USB key device can store the certificate
and authentication information and require a PIN. If the user loses the mobile device, an
intruder can't log on without the smart card. Even if the intruder has the device and smart card,
he would need to know the PIN. Of course, most software requires only a four-digit PIN, but
the user would typically not lose both devices in the same place. Other methods include
biometric devices such as fingerprint readers.
Require a strong password. Some time ago, a Microsoft security expert described how
long it would take a brute-force attack to crack passwords of various lengths. On average, a
password of eight characters would take several years to crack. I know of one company that
requires a 12-character password, which needs to be changed only once a year. Guessing the
password is difficult, and this practice reduces the need for users to remember new passwords.
But even with strong passwords, users still stick password notes under their keyboards.
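The brute-force argument above can be made concrete with a back-of-the-envelope calculation; the character-set size and attacker guess rate below are assumed figures chosen only to illustrate how quickly the search space grows with password length:

# Rough illustration (assumed numbers): how password length changes the size of
# the brute-force search space and the worst-case time to exhaust it.
charset_size = 72              # upper + lower case, digits, common symbols (assumption)
guesses_per_second = 1e10      # assumed attacker speed, for illustration only

for length in (4, 8, 12):
    combinations = charset_size ** length
    seconds = combinations / guesses_per_second
    years = seconds / (3600 * 24 * 365)
    print(f"length {length:2d}: {combinations:.2e} combinations, "
          f"~{years:.2e} years to exhaust")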
Malicious apps compromise mobile security to access private information, such as
contact lists and calendar details. They also use mobile device features, such as cameras and
microphones, to spy, profile users, or conduct cyber attacks.
FireEye Mobile Security (Mobile Threat Prevention) detects and prevents these mobile
threats and provides visibility into mobile security trends across the enterprise. FireEye Mobile
Threat Prevention also integrates with industry leading mobile device management (MDM)
providers.
Authentication and authorization for mobile devices:
Authentication and authorization controls help protect unauthorized access to mobile
devices and the data on them. Ideally, Craig Mathias, principal with advisory firm Farpoint
Group, says IT organizations should implement two-factor authentication on mobile devices,
which requires users to prove their identity using something they know–like a password–and
something they have–like a fingerprint. In addition to providing robust authentication and
authorization, Mathias says two-factor authentication can also be used to drive a good
encryption implementation. Unfortunately, two-factor authentication technology is not yet
widely available in mobile devices. Until then, IT organizations should require users to use
native device-level authentication (PIN, password).
Remote wipe for mobile device security :
Authentication and encryption help prevent data loss in the case of mobile device theft
or loss, but physical security can be further fortified with remote wipe and “phone home”
capabilities. Native remote lock, find and wipe capabilities can be used to either recover a lost
mobile device or permanently delete the data on them. Be careful, however, if you choose to
use these functionalities. Experts recommend defining policies for these technologies and
asking users to sign a consent form. Remote wipe could put the user’s personal data at risk and
“phone home” or “find me” services can raise privacy concerns.
Consequences of a Mobile Attack Can Be Severe:
Many users may consider mobile phone security to be less important than the security of
their PCs, but the consequences of attacks on mobile phones can be just as severe. Malicious
software can make a mobile phone a member of a network of devices that can be controlled by
an attacker (a “botnet”). Malicious software can also send device information to attackers and
perform other harmful commands.
Mobile phones can also spread viruses to PCs that they are connected to.
Losing a mobile phone used to mean only the loss of contact information, call histories,
text messages, and perhaps photos. However, in more recent years,
losing a smartphone can also jeopardize financial information stored on the device in
banking and payment apps, as well as usernames and passwords used to access apps and
online services. If the phone is stolen, attackers could use this information to access the user’s
bank account or credit card account.
An attacker could also steal, publicly reveal, or sell any personal information extracted
from the device, including the user’s information, information about contacts, and GPS
locations.
Even if the victim recovers the device, he or she may receive many spam emails and
SMS/MMS messages and may become the target for future phishing attacks.
REFERENCES:
• National Institute of Standards and Technology. "Guidelines on Cell Phone and PDA Security (SP 800-124)." http://csrc.nist.gov/publications/nistpubs/800124/SP800124.pdf
• Mathew Maniyara. "Phishers Have No Mercy for Japan." http://www.symantec.com/connect/blogs/phishers-have-no-mercy-japan
• "Technical Information Paper: Cyber Threats to Mobile Devices" (http://www.us-cert.gov/reading_room/TIP10-105-01.pdf)
• "Protecting Portable Devices: Physical Security" (http://www.us-cert.gov/cas/tips/ST04-017.html)
Big Data and Agricultural System
Mrs. Urmila Kadam
Asst. Professor (Computer)
ASM’s Institute of Business Management & Research,
Chinchwad , Pune – 411019 ( M.S.) India
E-mail: urmilakadam@asmedu.org
ABSTRACT:
Farming systems in India are adapted judiciously to their most suited locations; that is,
crops are grown according to the farm or soil conditions present over a particular area of
land. The regions in India vary in accordance with the types of farming they use; some are
based on horticulture, agro forestry, and many more. India's geographical position causes
different parts of the country to experience distinct climates, which affects every region's
agricultural productivity distinctly. India holds the second position in agricultural
production in the world at the present day.
Big data is an evolving term that describes any voluminous amount of structured, semi-structured and unstructured data that has the potential to be mined for information. Big
data is a set of techniques and technologies that require new forms of integration to
uncover large hidden values from large datasets that are diverse, complex, and of a
massive scale.
It gathers all viable crop information generated through electronic smart devices (like
moisture sensors, electromagnetic sensors, and optical sensors) for a detailed area.
These smart devices will generate prodigious amounts of data, impelled by record
keeping, compliance and regulatory requirements, which are considered as big data.
The processing of such data using common database management tools is a very
strenuous task. Everything around us is contributing to the generation of Big data at
every time instance.
e-Agriculture service data can be considered as a Big Data because of its variety of data
with huge volumes flowing with high velocity. Some of the solutions to the e-Agriculture
service big data include the predominant current technologies like HDFS, Map Reduce,
Hadoop, STORM etc.
The following are the key points that make the performance of agricultural systems
better and increase productivity:
1. Measure, store and analyze the data to improve yield quality.
2. Manage revenue costs by reducing the probability of crop failure.
3. Improve preventive care and increase producer-consumer satisfaction.
Adoption of big data in agriculture significantly decreases the possibility of crop failure,
addresses farmers' primary concerns and recommends soil sensing. The main aim of this
research is to develop an efficient algorithm to implement various Big Data analytics
according to the requirements of a particular geographical area in agriculture.
Keywords: Big data, moisture sensors, electromagnetic sensors, optical sensors, e-Agriculture, HDFS, MapReduce, Hadoop, STORM, Big Data analytics
Introduction
Farming systems in India are adapted judiciously to their most suited locations; that is, crops
are grown according to the farm or soil conditions present over a particular area of land. The
regions in India vary in accordance with the types of farming they use; some are based on
horticulture, agro forestry, and many more. India's geographical position causes different
parts of the country to experience distinct climates, which affects every region's agricultural
productivity distinctly. India holds the second position in agricultural production in the world
at the present day.
Over the last three decades, the application of information and communication
technologies (ICT) has had a marked impact across society and the economy. Changes fueled by
ICT adoption are apparent to us today and the processes by which those changes occurred are
a tacit part of our experience base. As we moved through that adoption process, however, the
extent of adoption and its eventual effects were not nearly so clear. We asked questions such
as:
In the 1980s,
What is a microcomputer?
If my office has a Selectric typewriter, why would I need to do word processing?
Isn’t 32K RAM (Random Access Memory) plenty?
In the 1990s,
I have voice mail for my phone, why would I use e-mail?
What is this Internet thing?
Would farmers actually pay for GPS-based yield monitors?
In the 2000s,
Why would I buy a book on-line when the bookstore is just down the road?
I get calls on my current cell phone, why buy something called a Smartphone?
Today, ICT-based advances continue to offer opportunities and challenges. One of the
most talked about, for business, government, and society, is called “Big Data”. Indeed,
Padmasree Warrior, Chief Technology and Strategy Officer for Cisco Systems (Kirkland
2013), notes:
In the next three to five years, as users we’ll actually lean forward to use technology
more versus what we had done in the past, where technology was coming to us. That will
change everything, right? It will change health care; it could even change farming. There are
new companies thinking about how you can farm differently using technology; sensors
connected that use water more efficiently, use light, sunlight, more efficiently.
The following are the key points that make the performance of agricultural systems in
India better and increase productivity:
1. Measure, store and analyze the data to improve yield quality.
2. Manage revenue costs by reducing the probability of crop failure.
3. Improve preventive care and increase producer-consumer satisfaction.
Adoption of big data in agriculture significantly decreases the possibility of crop failure,
addresses farmers' primary concerns and recommends soil sensing.
The main aim of this research paper is to examine the Big Data phenomenon and to
explore its implications for agriculture systems in India.
Existing Agricultural System in India
Agriculture plays a vital role in India’s economy. Over 58 per cent of the rural
households depend on agriculture as their principal means of livelihood. Agriculture, along
with fisheries and forestry, is one of the largest contributors to the Gross Domestic Product
(GDP).
As per estimates by the Central Statistics Office (CSO), the share of agriculture and
allied sectors (including agriculture, livestock, forestry and fishery) was 16.1 per cent of the
Gross Value Added (GVA) during 2014–15 at 2011–12 prices. During Q1 FY2016,
agriculture and allied sectors grew 1.9 per cent year-on-year and contributed 14.2 per cent of
GVA.
India is the largest producer, consumer and exporter of spices and spice products. It
ranks third in farm and agriculture outputs. Agricultural export constitutes 10 per cent of the
country’s exports and is the fourth-largest exported principal commodity. The agro industry in
India is divided into several sub segments such as canned, dairy, processed, frozen food to
fisheries, meat, poultry, and food grains.
The Department of Agriculture and Cooperation under the Ministry of Agriculture is
responsible for the development of the agriculture sector in India. It manages several other
bodies, such as the National Dairy Development Board (NDDB), to develop other allied
agricultural sectors.
Market Size
Over the recent past, multiple factors have worked together to facilitate growth in the
agriculture sector in India. These include growth in household income and consumption,
expansion in the food processing sector and increase in agricultural exports. Rising private
participation in Indian agriculture, growing organic farming and use of information technology
are some of the key trends in the agriculture industry
India's geographical position causes different parts of the country to experience distinct
climates, which affects every region's agricultural productivity distinctly. India holds the
second position in agricultural production in the world at the present day. In 2007, agriculture
and allied industries made up more than 16% of India's GDP. Despite the steady decline in
agriculture's contribution to the country's GDP, agriculture is the biggest industry in India and
plays a key role in the socioeconomic growth of the country. India is also the second biggest
harvester of vegetables and fruit, representing 8.6% and 10.9% of overall production,
respectively. India also has the biggest number of livestock in the world, holding 281 million.
About one-sixth of the land area undergoes serious crop yielding issues such as erosion,
water logging, aridity, acidity, salinity, and alkalinity. Soil conservation measures are required
for as much as 80 million hectares of cultivated area. Within a few years after irrigation
practices were introduced, problems of salinity and water logging arose; apparently 7 million
hectares of land are affected by alkalinity and salinity. The crops to be cultivated on a
particular farm area are decided according to the soil conditions, based on the moisture
content, humidity, degree of nutrients present, etc. So it is very important to keep records of
all the soil quality properties (for which big data can be used).
Problems Existing In Present Agricultural System In India:
Over 70% of India's total population resides in rural areas, and about three-fourths of the
residents of rural areas are dependent on agriculture for their livelihood.
Mr. Gopal Naik, a professor in the area of economics and social sciences and Chairperson
of the Center for Public Policy at the Indian Institute of Management, Bangalore, has in one of
his interviews pointed out some critical issues. Currently, the issues that afflict Indian
agriculture are the deficiency of proper knowledge and infrastructure in the rural areas.
Problems related to irrigation, market infrastructure and transport infrastructure add significant
cost to farmers' operations. Another issue is the lack of delivery mechanisms. There are a number
of schemes aimed towards developing agriculture. We don't have effective delivery mechanisms
that can translate those into effective facilitation at the ground level, in terms of increasing
productivity or decreasing cost or increasing price realization. Inadequate government support
exacerbates these issues.
Government failure is a major concern in agriculture because the high risks involved
make help and facilitation necessary. Like any other business enterprise, agriculture is
subjected to high risks because of the volatile nature of the factors involved. For instance,
weather is often a problem: you have droughts in one year and heavy rains in the next. In both
cases, farmers lose out; hence they have to look for a normal period to make money.
Government, therefore, has to play a major role in providing support to farmers. This is true all
over the world and there is hardly any country where government intervention is not present.
There may of course be variations in the extent of intervention; but if you check the situation
in most countries or regions, including developed ones like the US, Canada and the European
Union, you see substantial intervention by the government. Thus government facilitation is
essential for sound agricultural development.
What is Big Data?
"Big Data" describes data sets so large and complex they are impractical to manage with
traditional software tools.
Big data is an evolving term that describes any voluminous amount of structured, semi-structured
and unstructured data that has the potential to be mined for information. Big data is
a set of techniques and technologies that require new forms of integration to uncover large
hidden values from large datasets that are diverse, complex, and of a massive scale.
There are multiple dimensions to big data, which are encapsulated in the handy set of
seven "V"s that follow:
Volume: considers the amount of data generated and collected.
Velocity: refers to the speed at which data are analyzed.
Variety: indicates the diversity of the types of data that are collected.
Viscosity: measures the resistance to flow of data.
Variability: measures the unpredictable rate of flow and types.
Veracity: measures the biases, noise, abnormality, and reliability in datasets.
Volatility: indicates how long data are valid and should be stored.
It gathers all viable crop information generated through electronic smart devices (like
moisture sensors, electromagnetic sensors, and optical sensors) for a detailed area. These smart
devices will generate prodigious amounts of data, impelled by record keeping, compliance and
regulatory requirements, which are considered as big data.
The processing of such data using common database management tools is a very
strenuous task. Everything around us is contributing to the generation of Big data at every time
instance.
e-Agriculture service data can be considered as a Big Data because of its variety of data
with huge volumes flowing with high velocity. Some of the solutions to the e-Agriculture
service big data include the predominant current technologies like HDFS, Map Reduce,
Hadoop, STORM etc.
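As a hedged illustration of the kind of aggregation such frameworks parallelise, the following plain-Python sketch mimics the map, shuffle and reduce phases over a handful of hypothetical soil-moisture readings; the record format and field names are assumptions, and a real deployment would run an equivalent job on Hadoop/HDFS or STORM rather than in-memory Python:

# Illustrative MapReduce-style aggregation in plain Python (assumed record
# format): average soil-moisture reading per field, the kind of job that
# Hadoop MapReduce would run over HDFS at scale.
from collections import defaultdict

sensor_records = [                      # hypothetical (field_id, moisture %) readings
    ("field-01", 31.5), ("field-02", 22.0),
    ("field-01", 29.8), ("field-02", 24.7), ("field-03", 35.1),
]

# Map phase: emit (key, value) pairs.
mapped = [(field, moisture) for field, moisture in sensor_records]

# Shuffle phase: group values by key.
groups = defaultdict(list)
for field, moisture in mapped:
    groups[field].append(moisture)

# Reduce phase: aggregate each group.
for field, values in sorted(groups.items()):
    print(field, round(sum(values) / len(values), 2))

The same map/shuffle/reduce structure is what a cluster framework distributes across nodes; only the scale and the storage layer change.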
The emergence and development of big data technology has offered many opportunities and
much application potential for the precision agriculture field. The popularity of Internet of
Things and Cloud Computing technologies in agriculture facilitates the generation of
agricultural big data, which relates to every link of the chain (arable land, sowing,
fertilization, insecticide application, harvesting, storage and breeding) together with the
analysis and mining of those data. Generally, agricultural big data is composed of structured
and unstructured data, and these data exhibit the characteristic 5V features, i.e., volume,
velocity, variety, value and veracity. These features of agricultural big data present many
challenges for precision agriculture, such as storage, retrieval, representation, dimensionality
reduction and parallel distributed processing of those big data.
Applications of Big Data in agriculture:
1. Develop architectures, protocols, and platforms for agriculture
2. Data collection, representation, storage, distribution, and cloud services
3. Power management and energy harvesting for green precision agriculture
4. Wireless sensor networks for precision agriculture
5. Agricultural information retrieval
6. Performance evaluations and simulations of precision agriculture
7. Integration of agriculture activities and the Internet of Things (IoT)
8. Trends and future applications for innovative ideas
Concluding comments:
BIG DATA is likely to help better utilize the scarce resources and can help deal with the
various sources of inefficiency that have been frequently cited by critics as among the key
obstacles for development in developing countries.
BIG DATA can help to reduce the waste of inputs such as fertilizer and increase
agricultural productivity and control the epidemics of various diseases.
Uses of BIG DATA that lead to positive social and economic outcomes and those that
benefit socially and economically disadvantaged groups need to be promoted.
Creation of appropriate databases may stand out as particularly appealing and promising
to some entrepreneurial firms. Governments, businesses, and individuals are willing to pay for
data when they perceive the value of such data in helping them make better decisions. In the
meantime, policymakers, academics, and other stakeholders should make the most of what is
available.
REFERENCES:
[1] http://www.sciencepublishinggroup.com/specialissue/specialissueinfo?specialissueid=218003&journalid=218
[2] Sonka, Volume 17, Issue 1, 2014, International Food and Agribusiness Management Association (IFAMA). http://www.ifama.org/files/IFAMR/Vol%2017/Issue%201/%281%29%2020130114.pdf
[3] http://tejas.iimb.ac.in/interviews/12.php
[4] http://www.farminstitute.org.au/publications/bigdata_2015
[5] https://www.mongodb.com/big-data-explained
[6] http://www.ibef.org/industry/agriculture-india.aspx
The Internet of Things and Smart Cities: A Review
Prof. Rimple Ahuja
rimpleahuja@asmedu.org
Prof. Leena Patil
leenapatil@asmedu.org
ABSTRACT:
The Internet of things is far bigger than anyone realizes. When people talk about “the
next big thing” they are never thinking big enough. The Internet of Things revolves
around increased machine-to-machine communication. It’s built on cloud computing
and networks of data-gathering sensors; it’s mobile, virtual, and instantaneous
connection. It’s going to make everything in our lives from streetlights to seaports
“smart.”This review Paper discusses the IoT architectures, protocols, and algorithms,
IoT devices, IoT Technologies, IoT network technologies, Smart City architecture and
infrastructure, Smart City technology, Cloud Computing in Smart City, Big data in
Smart City.
Keywords : Smart cities, Internet of Things, Big data, Smart location, Sensors, cloud
computing.
Introduction:
The Internet of Things is about installing sensors (RFID, IR, GPS, laser scanners, etc.)
for everything, and connecting them to the internet through specific protocols for information
exchange and communications, in order to achieve intelligent recognition, location, tracking,
monitoring and management. With technical support from IoT, a smart city needs to have
three features: being instrumented, interconnected and intelligent. Only then can a Smart City
be formed by integrating all these intelligent features at its advanced stage of IoT
development.
Smart City is the product of accelerated development of the new generation information
technology and knowledge-based economy, based on the network combination of the Internet,
telecommunications network, broadcast network, wireless broadband network and other
sensor networks, with Internet of Things technology (IoT) as its core. The main features of a
smart city include a high degree of information technology integration and a comprehensive
application of information resources. The essential components of urban development for a
smart city should include smart technology, smart industry, smart services, smart management
and smart life [1].
IoT architecture, protocols, and algorithms: The IoT architecture is generally divided into four layers:
1. The device layer
2. The medium layer
3. The application layer
4. The business layer
Fig. 1.1 : layer Architecture
Devices must communicate with each other (D2D). Device data then must be collected
and sent to the server infrastructure (D2S). That server infrastructure has to share device data
(S2S), possibly providing it back to devices, to analysis programs, or to people. The protocols
can be described in this framework as follows (a brief MQTT sketch follows this list):
• MQTT: a protocol for collecting device data and communicating it to servers (D2S)
• XMPP: a protocol best for connecting devices to people, a special case of the D2S pattern, since people are connected to the servers
• DDS: a fast bus for integrating intelligent machines (D2D)
• AMQP: a queuing system designed to connect servers to each other (S2S) [3]
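As a minimal sketch of the D2S pattern, the snippet below publishes one hypothetical telemetry reading over MQTT using the paho-mqtt helper; the broker address, topic name and payload fields are assumptions for illustration only:

# Illustrative D2S sketch (assumptions: a reachable MQTT broker at
# "broker.example.com" and the paho-mqtt package installed).
import json
import paho.mqtt.publish as publish

# A device publishes one sensor reading to a topic that server-side
# collectors subscribe to (the D2S pattern described above).
reading = {"device_id": "streetlight-42", "lux": 17.3, "status": "ok"}
publish.single(
    topic="city/streetlights/telemetry",   # hypothetical topic name
    payload=json.dumps(reading),
    hostname="broker.example.com",         # hypothetical broker address
    port=1883,
    qos=1,
)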
As far as algorithms are concerned, algorithms used in the Internet of Things are based on
various domains such as sensor technology, cryptography, data mining and machine learning;
some examples are dynamic routing, power control algorithms and optimal analysis algorithms.
• IoT devices: The Internet of Things is taking off, with connected devices likely to be a big part of our future. The kinds of devices we can expect include smart refrigerators, smart couches, smart toothbrushes, smart plates, bowls and cups, and smart cars [2].
• IoT technologies: The IoT covers a huge scope of industries and applications. The main technology areas are communication, backbone, hardware, protocols, software, data brokers / cloud platforms, and machine learning [3].
• IoT network technologies: Wi-Fi, Bluetooth, ZigBee, 6LoWPAN, radio transceivers and proprietary protocols are some of the technologies used in IoT networks.
Smart City architecture and infrastructure
Fig. 1.2 : Smart City architecture and infrastructure
Goals, People and Ecosystem:
Every Smart City initiative is based on a set of goals; often they focus on sustainability,
inclusivity and the creation of social and economic growth. Such goals will only be achieved
through a Smart City strategy if that strategy results in changes to city systems and
infrastructures that make a difference to individuals within the city – whether they are
residents, workers or visitors. The challenge for the architects and designers of Smart Cities is
to create infrastructures and services that can become part of the fabric and life of this
ecosystem of communities and people.
Soft Infrastructures:
In the process of understanding how communities and individuals might interact with
and experience a Smart City, elements of “soft infrastructure” are created – in the first place,
conversations and trust. If the process of conversations is continued and takes place broadly,
then that process and the city’s communities can become part of a Smart City’s soft
infrastructure.
Some soft infrastructural elements are more formal. For example, governance processes
for measuring both overall progress and the performance of individual city systems against
Smart City objectives; frameworks for procurement criteria that encourage and enable
individual buying decisions across the city to contribute towards Smart City goals; and
standards and principles for integration and interoperability across city systems.
City systems:
More importantly, these systems literally provide life support for cities – they feed,
transport, educate and provide healthcare for citizens as well as supporting communities and
businesses. So we must treat them with real respect.
A key element of any design process is taking into account those factors that act as
constraints on the designer. Existing city systems are a rich source of constraints for Smart
City design: their physical infrastructures may be decades old and expensive or impossible to
extend; and their operation is often contracted to service providers and subject to strict
performance criteria.
Hard Infrastructures:
The field of Smart Cities originated in the possibilities that new technology platforms
offer to transform city systems. Those platforms include networks such as 4G and broadband;
communication tools such as telephony, social media and video conferencing; computational
resources such as Cloud Computing; information repositories to support Open Data or Urban
Observatories and analytic and modelling tools that can provide deep insight into the
behaviour of city systems.
These technology platforms are not exempt from the principles I’ve described in this
article: to be effective, they need to be designed in context. By engaging with city ecosystems
and the organizations, communities and individuals in them to properly understand their
needs, challenges and opportunities, technology platforms can be designed to support them
[5].
Smart City technology
Fig. 1.3 : Smart cities Technology
Sensing:
Cutting-edge sensors and video authentication technology that use RFID and wireless
sensor networks encompassing the land, sea, air and outer space make it possible to provide a
variety of data concerning all aspects of urban life for advanced authentication. This data can
be effectively visualized, accumulated and used in many scenarios, such as for early detection
of earthquakes to enable preparedness throughout the Smart City, for monitoring of
long-distance highways through the use of optical cable sensors, and for advanced biometric
recognition in which video data is also used.
Authentication
The data acquired through sensing is continuously authenticated via location and
condition validity checks. Since the amount of urban authentication data in a Smart City is
enormous, advanced NEC authentication technologies renowned for high-speed processing
and precision are employed to ensure genuine real-time authentication.
Monitoring
Along with monitoring the normalcy of sensing, authentication, control and other
conditions, information is reported in real time to the required locations upon detection of
anomalies. For example, when an anomaly occurs, this is relayed along with video or other
data in real time to the required locations to allow effective prevention of crime, accidents
and, most critically for the Smart City as a whole, disasters.
Control
Monitored data is analyzed in real time and the optimum control information is
determined and transmitted. For example, air conditioning inside buildings can be controlled
more precisely through the use of location information, resulting in the creation of an
environment that is better adapted to the needs of the people who work there.
Cloud computing
Robust remote backup functions capable of withstanding local disasters can provide
required information without users being aware of the location of hardware and data.
Moreover, rapid and flexible response is enabled for services whose contents change
according to the situation and the lapse of time. For example, residential services can be
tailored to individual districts and resources can be flexibly allocated for maximum effect
when large amounts of processing are required [6].
How Big Data Helps Build Smart Cities
1. Smart Cities are presenting new challenges for Big Data.
2. The emerging amount of data needs to be processed to make its analysis feasible.
3. First step: data fusion to avoid noise and apparently random behaviours.
4. Second step: correlation in order to see hidden behaviours.
5. Next steps: more focus on insight, and integration into business models.
6. Needs from the market define the questions that Smart Cities are expected to answer.
Data Fusion
• The temperature area is totally isolated from the traffic monitoring zones.
• Fine-grain analysis of temperature is not required, since it is not influenced by traffic.
• Traffic sensors need to be aggregated by highways and lanes.
• Data fusion is feasible due to the nature of the problem.
• This simplifies and makes feasible the correlation between Temperature and Traffic (see the sketch below).
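A small worked example of the fusion-then-correlation step is sketched below with pandas; the hourly temperature values and lane-level traffic counts are hypothetical:

# Illustrative data-fusion sketch (hypothetical readings): aggregate lane-level
# traffic to hourly totals, align with hourly temperature, then correlate.
import pandas as pd

traffic = pd.DataFrame({
    "hour": [8, 8, 9, 9, 10, 10],
    "lane": ["A", "B", "A", "B", "A", "B"],
    "vehicles": [420, 390, 515, 470, 380, 350],
})
temperature = pd.DataFrame({
    "hour": [8, 9, 10],
    "temp_c": [21.5, 24.0, 26.5],
})

# Fusion step: coarse-grain the traffic data (sum over lanes per hour).
traffic_hourly = traffic.groupby("hour", as_index=False)["vehicles"].sum()

# Correlation step: join on the common key and compute Pearson correlation.
fused = traffic_hourly.merge(temperature, on="hour")
print(fused)
print("Correlation:", fused["vehicles"].corr(fused["temp_c"]))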
Cloud Computing in Smart City
There are many challenges and opportunities in emerging and future smart cities which
can be addressed by means of cloud computing: for instance, dynamic energy pricing (i.e.,
shifting the potential peak demand to a different time when the energy price is lower),
consistency of heterogeneous data from different sources such as sensors, AMI and SCADA
systems, real-time massive data streaming, etc. A cloud-based platform will be instrumental in
minimizing network complexity and providing cost-effective solutions as well as increasing
the utilization of energy. Some previous works investigated the issues related to cloud
computing in smart cities. However, the presented concepts lack a smart, scalable cloud
platform to deal with real-time data management and consistency. Smart city
services/applications may be deployed in various ways, such as in a private cloud, community
cloud, or hybrid cloud [8].
Conclusion:
The explosive growth of Smart City and Internet of Things applications creates many
scientific and engineering challenges that call for ingenious research efforts from both
academia and industry, especially for the development of efficient, scalable, and reliable Smart
City based on IoT. New protocols, architectures, and services are in dire need to respond to
these challenges.
REFERENCES:
[1] http://www.journals.elsevier.com/future-generation-computer-systems/call-for-papers/special-issue-on-smart-city-and-internet-of-things/
[2] http://www.techtimes.com/articles/31467/20150208/top-5-internet-things-devices-expect-future.htm
[3] http://postscapes.com/internet-of-things-technologies#backbone
[4] http://electronicdesign.com/iot/understanding-protocols-behind-internet-things
[5] http://theurbantechnologist.com/2012/09/26/the-new-architecture-of-smart-cities/
[6] http://nl.nec.com/nl_NL/global/ad/campaign/smartcity/technology/index.html
[7] http://www.kdnuggets.com/2015/10/big-data-smart-cities.html
[8] http://zeitgeistlab.ca/doc/cloud_computing_for_smart_grids_and_smart_cities.html
Proposed Database Security Framework in Cloud Computing
Prof. Hidayatulla Pirjade
Assistant Professor
ASM's IBMR, Chinchwad, Pune
Prof. Sudhir P. Sitanagre
Assistant Professor
ASM's IBMR, Chinchwad, Pune
ABSTRACT:
Data security, user authentication and access control are important concerns in the
environment of cloud computing databases. Users upload personal and classified data
to the cloud. Security is mandatory for such outsourced data, so that users are confident
while processing their private and confidential data. Modern technology suffers from the
computational problem of keys and their exchange. This paper emphasizes the different
problems and issues that arise while using cloud services, such as key generation, data
security and authentication. The paper addresses these problems using an access control
technique which ensures that only valid users can access their data. It proposes an RSA
key exchange protocol between the cloud service provider and the user for secretly
sharing a symmetric key for secure data access, which solves the problem of key
distribution and management. Authentication is done using a two-factor authentication
technique with the help of a key generated using the TORDES algorithm. The proposed
work is highly efficient and secure compared with existing security models.
Keywords: Cloud computing, Data security, Key exchange protocol, Access control techniques, Symmetric encryption.
1. Introduction
1.1 Cloud computing
Cloud computing is defined by the National Institute of Standards and Technology (NIST) as
"a model for enabling convenient, on-demand network access to a shared pool of configurable
computing resources (e.g., networks, servers, storage, applications, and services) that can be
rapidly provisioned and released with minimal management effort or service provider
interaction." Cloud computing is defined to have several deployment models, each of which
provides distinct trade-offs for agencies which are migrating applications to a cloud
environment. The deployment models are as follows:
• Private cloud. The cloud infrastructure is operated exclusively for an organization. It may be managed by the organization itself or by a third party and may be hosted on premise or off premise.
Fig. 1 : cloud model
• Community cloud. The cloud infrastructure is shared by several organizations. It may be managed by the organizations themselves or by a third party and may be hosted on premise or off premise.
• Public cloud. The cloud infrastructure is open to the general public or a large industry group and is owned by an organization selling cloud services.
• Hybrid cloud. The cloud infrastructure is a combination of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).
Cloud computing can also be categorized into service models:
• Software as a Service (SaaS). The user can only access the applications running on the cloud. These applications are accessible through various client interfaces such as a web browser. The user does not have access to a management console and therefore cannot manage or control the underlying infrastructure (network, servers, operating systems, storage or other capabilities), apart from some limited user-specific application configuration settings.
• Platform as a Service (PaaS). The user can deploy applications onto the cloud infrastructure, created using programming languages and tools supported by the provider. The user cannot manage or control the underlying infrastructure (network, servers, operating systems, storage) but has control over the deployed applications and possibly the application hosting configuration.
• Infrastructure as a Service (IaaS). The user is provided with storage, networks and other fundamental computing resources on which the user can deploy and run arbitrary software, which can include applications and operating systems. The user has control over operating systems, deployed applications and storage, and limited control over networking components such as firewalls.
The cloud is vibrant, flexible and easily available; because of this nature, some severe
drawbacks arise in terms of data security, which poses a challenge for secure data sharing. To
overcome this issue we can follow an authentication-based approach rather than a
communication-based approach. Broadly, security is categorized into two types: data
protection and asset protection. In this research paper the main focus is on data protection.
Section 2 of this paper states the issue, Section 3 presents related work and a simplified
problem, Section 4 introduces some important characteristics and building blocks of cloud
computing systems, Section 5 presents our system model based on cryptographic techniques
in detail, Section 6 describes the implementation steps, simulation results and the evaluation,
and Section 7 summarizes our conclusions and points out future work.
II. Problem Analysis
Information security is a critical issue in cloud computing environments. Clouds have no
borders and the data can be physically located anywhere in the world, in any data centre
across a geographically distributed network. So the nature of cloud computing raises serious
issues regarding user authentication, information integrity and data confidentiality. Hence it is
proposed to implement an enhanced novel security algorithm in order to optimize information
security, ensuring CIA (Confidentiality, Integrity and Authentication) while storing and
accessing the data from and to data centres and also in peer interactions.
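As an assumption-laden sketch of the general approach (not the authors' protocol), a symmetric data key can be shared secretly by wrapping it with RSA-OAEP using the Python cryptography library; Fernet stands in for the symmetric cipher, since the TORDES algorithm referenced by the authors is not specified here:

# Illustrative sketch only (not the authors' protocol): wrap a symmetric data
# key with RSA-OAEP so the cloud service provider and the user can share it secretly.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# User side: generate an RSA key pair; the public key is sent to the provider.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Provider side: create a symmetric data key (Fernet stands in for the
# symmetric cipher; TORDES is not reproduced here) and encrypt it under the
# user's public key.
data_key = Fernet.generate_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(data_key, oaep)

# User side: unwrap the symmetric key and use it to protect database records.
recovered_key = private_key.decrypt(wrapped_key, oaep)
cipher = Fernet(recovered_key)
record = cipher.encrypt(b"confidential customer record")
print(cipher.decrypt(record))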
III. Related Works
In [4] the authors have addressed the security issues associated with cloud data storage and
have explored many security issues. Whenever a data vulnerability is perceived during the
storage process, precise verification across the distributed servers is ensured by simultaneous
identification of the misbehaving nodes through analysis of the security malfunction; it is
proved that their scheme is effective in handling certain failures, malicious data modification
attacks, and even server colluding attacks. This new technology opens up a lot of new security
issues leading to unexpected challenges, which is of dominant importance as security is still in
its infancy and many research problems are yet to be identified and solved. The Security
Content Automation Protocol (SCAP) and the benefits it can provide to the cloud were
addressed in [6]: tools for system security, such as patch management and vulnerability
management software, use proprietary formats, nomenclatures, measurements, terminology
and content, and the resulting lack of interoperability causes delays in security assessment.
In [7] an overview of privacy issues within cloud computing is described, with a detailed
analysis of privacy threats based on different types of cloud scenarios; the level of threat
seems to vary according to the application area. Their work states basic guidelines for
software engineers when designing cloud services, in particular to ensure that privacy is not
compromised. The major focus of their schemes rests on privacy risks, analysis of privacy
threats, privacy design patterns and accountability within the cloud computing scenario. In [8]
the issues associated with choosing security mechanisms or security frameworks in the cloud
computing context are clearly stated, and a brief outline of flooding attacks is given, along
with an idea of the threats, their potential impact and their relevance to real-world cloud
environments. It is well understood from their investigation that a significant step towards
improving data security in the cloud is the initial strengthening of the cloud database.
A Study of Digital India applications with specific reference to Education
Sector
Rama Yerramilli
National Informatics Centre
Scientist ‘C’
Dr. Nirmala K.
ASM’s IBMR.
Chinchwad, Pune
ABSTRACT :
Education is a socially oriented activity, and conventional education has been associated with one-to-one interaction between teacher and learner. In spite of the rapid growth of Information and Communication Technology (ICT) applications in various sectors, their impact has not been very extensive in the education sector. But with the digital movement and the availability of information resources and tools, education also lends itself more to the use of ICT services. There are some existing e-services offered through public and/or private partnership, but there has been no significant development. ICT in the education sector received the much-needed dynamic push with the recent Digital India initiative, whose main aim is to make India a knowledge economy. Under this programme, various initiatives are planned in the education sector, and a few technologies are already in place. The main aim of this paper is to highlight the emerging trends in the education sector under the Digital India programme. The paper also highlights the challenges in implementing Digital India applications in the education sector.
Keywords : e-governance, Digital India, Knowledge Hub, e-learning
1. Introduction
Rapid advancements in Information and Communication Technology (ICT), the availability of low-cost communication gadgets such as smart phones with internet applications, and the globalization of education are some of the prominent factors which have dynamically changed the education sector in India. Digital education is the latest trend: the talk-and-chalk classroom is being replaced by interactive whiteboards with projectors and speakers. Many private educational institutes have initiated digital classrooms and included e-learning as part of their curriculum, and e-learning has become popular in technical education as well. A recent UK India Business Council report titled "Meeting India's Educational Challenges Through e-learning" states that India is the second biggest e-learning market globally after the US. But India is a country with people speaking different languages and living in varied economic and environmental conditions. There are extreme cases: a few people can afford the best education, while the majority of the population cannot afford even basic education. Further, the literacy rate of the female population is still considered relatively low in India.
In an effort to overcome the digital divide and take India to the digital age, the
government has launched the Digital India campaign. The campaign’s targets include
providing broadband connectivity to a quarter of a million rural villages by 2019 and making
Wi-Fi connections available in schools. Along with connectivity, a few portals and applications are being launched under Digital India.
The concept of e-learning and digitalization has made it possible to implement virtual classrooms. This brings transparency to various factors such as performance, availability of learning resources, assessments and educational processes throughout the country. Quality of education is consistently maintained across all domains and regions through online training for teachers, availability of study material anytime and anywhere, and optimum utilization of resources. The latest trends in digital education also include adaptive and collaborative learning technologies, where a student is engaged by practising, experiencing, sharing things and gaining knowledge in a collaborative environment.
2. Digital India Programme
2.1 Creation of a knowledge-based society:
Digital India has been envisioned as an ambitious umbrella programme to transform
India into a digitally empowered society and knowledge economy.
2.2 Pillar 5 of Digital India: e-Kranti (NeGP 2.0)
E-Kranti stands for electronic delivery of services. Under e-Kranti the following are planned:
• All schools connected with broadband
• Free Wi-Fi in all schools (coverage of around 250,000 schools)
• A programme on digital literacy to be taken up at the national level
• MOOCs – develop pilot Massive Online Open Courses for e-education
2.3 Early Harvest Programme
The Early Harvest Programme announced under Digital India for the education sector basically consists of those projects which are to be implemented within a short timeline:
• Wi-Fi in all universities: all universities on the National Knowledge Network (NKN) shall be covered under this scheme. The Ministry of Human Resource Development (MHRD) is the nodal ministry for implementing it.
• School books to eBooks: all books shall be converted into eBooks. The Ministry of HRD and DeitY are the nodal agencies for this scheme.
• IT for Jobs
3. Initial applications started under Digital India
Initial applications of Digital India include e-education, e-basta and Nand Ghar, which will impart education using technologies such as smart phones, mobile apps and internet services in far-flung areas where it may not be possible for teachers to be present in person.
• Nand Ghar : 13 lakh Balwadis in India are planned to be converted into Nand Ghars, where Anganwadi educators will be trained to use digital tools as teaching aids.
• e-basta : School books to eBooks
It is a cherished initiative by the government aimed at making school books accessible in digital form as e-books to be read on tablets and laptops.
Features :
• A web portal brings the various publishers (free as well as commercial) and the schools together on one platform. It is also available through a mobile app interface on Android, iOS and Windows platforms for wider access.
• A structure to facilitate organization and easy management of digital resources has also been made.
• An app that can be installed on a tablet is also available for navigating such a structure.
Target Users :
• Publishers can publish their resources on the portal, for use by the schools.
• Schools can browse, select and compile their choice of resources from this pool, as a basta for different classes.
• Students can then download such bastas from the portal, or the school may distribute them through media like SD cards.
Functionalities :
• Publisher: upload e-content along with metadata covering class, language, subject, price, preview pages, etc. Pre-publication content can also be uploaded for review; such content will not be available for download. Publishers can also view comments and ratings by the users of the portal, as well as download statistics for their content. Where content is protected by specific DRM models, the framework would comply with the relevant restrictions.
• Schools: schools can access the portal to assemble resources as per their choice. They browse the e-content uploaded by various publishers, search for content of their liking, and organize it into a hierarchical structure in the form of an eBasta (see the sketch after this list). One can create an eBasta from scratch or by adding content from an existing eBasta. Every eBasta has a unique name, which can be given to the students so that they can download the eBasta on their own.
• Students: students visit the portal to download a prescribed eBasta, or explore the eBastas and content available on the portal and download those that they need. If the eBasta to be downloaded includes paid resources, they will have to complete the payment; then they will be given access to download the eBasta. They will have to use the eBasta App to access the contents of the downloaded eBasta.
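As a purely illustrative sketch of the hierarchical structure described above, the following toy object model shows how an eBasta could be represented as a named collection of e-content items. All class and field names here are hypothetical and are not taken from the actual eBasta portal or its software.

import java.util.ArrayList;
import java.util.List;

class EContent {
    // Hypothetical metadata fields, loosely following the publisher upload description above.
    String title, subject, language, publisher;
    int classLevel;       // e.g. class 9, 10, 11, 12
    double price;         // 0.0 for free resources
    EContent(String title, String subject, String language,
             String publisher, int classLevel, double price) {
        this.title = title; this.subject = subject; this.language = language;
        this.publisher = publisher; this.classLevel = classLevel; this.price = price;
    }
}

class EBasta {
    String name;                                  // unique name shared with students
    List<EContent> contents = new ArrayList<>();  // resources compiled by the school
    EBasta(String name) { this.name = name; }
    void add(EContent c) { contents.add(c); }
    double totalPrice() {                         // paid resources must be bought before download
        return contents.stream().mapToDouble(c -> c.price).sum();
    }
}

public class EBastaDemo {
    public static void main(String[] args) {
        EBasta basta = new EBasta("class10-science-2016");   // example data only
        basta.add(new EContent("Science Textbook", "Science", "English", "FreePublisher", 10, 0.0));
        basta.add(new EContent("Practice Workbook", "Science", "Hindi", "PaidPublisher", 10, 49.0));
        System.out.println(basta.name + " has " + basta.contents.size()
                + " resources, total price " + basta.totalPrice());
    }
}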
URL : www.ebasta.in
Some statistics:
• eContents: 501
• eBastas: 15
• eBasta downloads: 577
• eContent downloads: 5578
4. Latest trends in e-governance in the education sector
4.1 e-learning
• edNEXT: provides support for the development of e-skills and a knowledge network.
• Scope: all schools connected with broadband, free Wi-Fi in all schools, and MOOCs (Massive Online Open Courses)
• Coverage: 250,000 schools
• Timeline: 2 years
• Nodal agency: MHRD
• In line with the government's Digital India initiative, the Human Resource Development (HRD) ministry launched a number of mobile apps and web-based platforms allowing students to access study material online and parents to keep track of the performance and attendance of their children. There are also frameworks for institutions to monitor their own performance.
• These initiatives are useful to bring transparency to school education and also to create new learning opportunities for students and teachers.
4.2 edNEXT
On 5th November 2015, at edNEXT, the national conference on ICT in school education, the Ministry of HRD launched e-Pathshala, Saransh, and the National Programme on School Standards and Evaluation framework (Shaala Siddhi). All of these are web portals / mobile apps. The event also showcased other ICT initiatives, including the IVR (Interactive Voice Response) system to monitor the daily implementation of the Mid Day Meal scheme, and Shaala Darpan, an integrated platform to address all academic and administrative requirements of schools, teachers, parents and students.
The ministry also provided a window, 'I Share for India', in which digital initiatives aiming to spread learning can be shared; these ventures should not be used for commercial gain.
4.3 Shaala Siddhi
• Shaala Siddhi is the National Programme on School Standards and Evaluation (NPSSE).
• It was initiated by the National University of Educational Planning and Administration (NUEPA).
• Shaala Siddhi is a comprehensive instrument for school evaluation which enables schools to evaluate their performance in a more focused and strategic manner and to make professional judgements for continuous improvement.
• The web portal of the framework will help all schools to assess themselves, and the results can be seen by all, enabling them to provide feedback.
• The initiative has already been successfully piloted in four districts of Tamil Nadu.
Need for this software :
The need for effective schools and improving school performance is increasingly felt in
the Indian education system to provide quality education for all children.
Objectives:
• 'School evaluation' as the means and 'school improvement' as the goal: evaluating the individual school and its performance in a holistic and continuous manner, leading to school improvement in an incremental manner.
• To provide quality education for all children
• To improve the quality of school education in India
• To develop a technically sound conceptual framework, methodology, instrument and process of school evaluation to suit the diversity of Indian schools
• To develop a critical mass of human resources for adaptation and contextualization of the school evaluation framework and practices across states
Milestones :
The programme envisions reaching 1.6 million schools in the country through a comprehensive system of school evaluation.
Framework :
• The School Standards and Evaluation Framework (SSEF) has been developed as an instrument for evaluating school performance. It enables a school to evaluate its performance against well-defined criteria in a focused and strategic manner.
• The SSE framework comprises seven 'Key Domains' as the significant criteria for evaluating the performance of schools. The framework has been developed through a participatory and mutual-consensus approach on 'how to evaluate diversified Indian schools for incremental improvement'.
• The SSEF has flexibility for adaptation, contextualization and translation into state-specific languages.
• It has been designed as a strategic instrument for both self and external evaluation. Both evaluation processes are complementary to each other and ensure that the two approaches work in synergy for the improvement of the school as a whole.
• As part of the SSEF, a 'School Evaluation Dashboard' (e-Samiksha) has been developed to enable each school to provide a consolidated evaluation report, including areas prioritized for improvement.
• The School Evaluation Dashboard is available in both print and digitized format. The dashboard obtained from each school will be consolidated at cluster, block, district, state and national level for identifying school-specific needs and common areas of intervention to improve school performance.
• A web portal and mobile app on School Standards and Evaluation are in the process of development.
• In order to translate the objectives of NPSSE and institutionalize 'school evaluation for improvement', a strong operational plan has been formulated to extend support to each state. A dedicated unit at NUEPA is leading this programme under the guidance of the National Technical Group (NTG) and in strong collaboration with the states.
• URL : nuepa.org
4.4 Shaala Darpan
• 'Shaala Darpan' is a web portal through which parents are kept informed about their child's attendance, mark sheets, timetable, etc. This programme is for the Kendriya Vidyalaya Sangathan. Through Shaala Darpan, which was launched earlier in Kendriya Vidyalayas, parents get complete information about their children on a unified platform in respect of attendance
status, performance, health challenges and entire academic record from Ist to XIIth standard.
Students will have facilities of e-tutorials and learning aids to enrich their knowledge.
• URL : http://rajrmsa.nic.in/Shaladarpan/SchoolHome.aspx
4.5 Saransh
• Saransh is a tool for comprehensive self-review and analysis for CBSE-affiliated schools and parents. It enables them to analyse student performance in order to take remedial measures. Saransh brings schools, teachers and parents closer, so that they can monitor the progress of students and help them improve their performance.
• It is a decision support system developed in-house by CBSE.
Objective :
• Improving children's education by enhancing interaction between schools and parents and providing a data-driven decision support system to assist them in taking the best decisions for their children's future.
Features :
• Self review: equips schools and parents to review a student's performance in various subjects
• Performance: helps schools look at performance in scholastic areas at an aggregate level and at the level of each student
• Data visualization: all performance metrics are presented through numbers as well as charts/graphs for easy understanding
• Support for decisions: data-driven analysis empowering schools and parents to take the best decisions for students
• Communication: facilitates interaction between parents and schools
• Data uploading: facility for data uploading by CBSE/schools and real-time generation of statistics
• It is also available as a mobile app which can be downloaded on a mobile phone
Features available for parents and schools:
For Parents:
• Saransh is available for parents of students studying in CBSE-affiliated schools in class IX, X, XI or XII
• Parents can register with Saransh by providing their ward's registration/roll number
• The user name and password will be sent to the registered mobile number via OTP and email
• Parents can only log in after the school approves the registration request
For Schools:
• School principals can log in using their existing affiliation number and password as used for the Registration/LOC
• URL : saransh.nic.in
4.6 e-Pathshala
• e-Pathshala is a web portal which hosts educational resources for students, teachers, parents, researchers and educators. It is also available through a mobile app interface on Android, iOS and Windows platforms for wider access. It contains textbooks and other
e-books in EPUB 3.0 and Flipbook formats in English, Hindi and Urdu. It also provides educational survey information and information about exhibitions, teachers' innovation awards, the National Talent Search Examination, etc.
• It is developed by CIET, National Council of Educational Research and Training.
• Objective : providing educational resources to students, teachers, parents, researchers and educators, in regional languages as well.
Features :
Information available for Students :
• eTextbooks : textbooks of all classes
• Reference books : learning material
• Events : exhibitions, contests, festivals, workshops
• eResources : audios, videos, interactives, images, maps, question banks
Information available for Teachers :
• eTextbooks : textbooks of all classes
• Teaching instructions : access teaching instructions and source books
• Learning outcomes : help children achieve expected learning outcomes
• Periodicals and journals : access and contribute to periodicals and journals
• Curricular resources : access policy documents, reports of committees, NCFs, syllabi and other resources to support children's learning
• eResources : audios, videos, interactives, images, maps, question banks
Information available for Educators :
• eTextbooks : textbooks of all classes
• Periodicals and journals : access and contribute to periodicals and journals
• Curricular resources : access policy documents, reports of committees, NCFs, syllabi and other resources to support children's learning
• eResources : audios, videos, interactives, images, maps, question banks
Information available for Parents :
• eTextbooks : textbooks of all classes
• Learning outcomes : help children achieve expected learning outcomes
• Curricular resources : access policy documents, reports of committees, NCFs, syllabi and other resources to support children's learning
• eResources : audios, videos, interactives, images, maps, question banks
• URL : epathshala.nic.in
5. Challenges in implementation of Digital India applications
Benefits of e-books and online material
Nowadays children are excited to use new technology and mobiles, and operate them easily without any specialised training. So the use of mobile apps and e-books makes learning more interesting for children.
Various studies have shown that carrying a school bag weighing more than 10% of the child's body weight leads to pain in the lower back, shoulders and hands. With e-learning and e-books, school bags become lighter, relieving children of the burden of heavy school bags.
Students will have facilities such as e-tutorials and online material to enrich their knowledge.
Parents will get information at their doorstep about their children in respect of attendance status, performance, health challenges and the entire academic record from class 1 to 12.
Study methods become uniform throughout the country.
Teachers can get proper training and study material at their desk, and skilled teachers and educators can contribute and share their knowledge with others.
Expert sessions can be arranged through virtual classrooms.
A virtual lab facility for the types of experiments which need no personal intervention brings lab facilities to all students who need them.
The digital divide is the main obstacle to successful implementation.
According to a recent study, teenagers in lower-income households have relatively less access to computers, laptops and smart phones at home than their higher-income peers. The disparities may influence more than how teenagers socialise, entertain themselves and apply for college or jobs. At a time when schools across the United States were introducing digital learning tools for the classroom, the research suggests that the digital divide among teenagers was taking a disproportionate toll on their homework as well.
Only one-fourth of teenagers in households with less than $35,000 annual income had their own laptops, compared with 62% in households with annual incomes of $100,000 or more, according to the study carried out by Common Sense Media, a non-profit children's advocacy and media ratings group. One-fifth of teenagers in lower-income households reported that they never used computers for their homework, or used them less than once a month. And one-tenth of lower-income teenagers said they had only dial-up web access, an often slow and erratic internet connection.
• Getting the right information at the right time.
• Lack of knowledge, lack of ease of use, lack of availability of applications.
• Advertisement and promotion.
• Cost of the equipment.
• Utilising the resources in a proper way.
6. Suggestions
• The digital divide problem may be addressed by providing Wi-Fi services along with cheaper handsets in rural areas.
• Build, Own and Operate (BOO) systems were used for the successful implementation of Gram Panchayat services in Andhra Pradesh; this model was facilitated through a self-employment generation scheme. BOO systems can be adopted for implementing e-governance services in the education sector.
• Kiosk centres at schools may be used as facilitation centres for e-governance.
• Mobile Grandhalaya (mobile vans with network facilities) also provides useful support.
• Advertising and promotion are needed for these services, and should reach the target audience; Chota Bheem is an example of this, becoming famous through its advertising.
• Development of user-friendly e-services.
• Maintenance of the services and equipment should be provided on a regular basis.
7. Current websites and applications :
7.1 UDAAN
7.2 Virtual classrooms for IIT and government officers
7.3 Online textbooks by Balbharati, CBSE, etc.
8. Conclusion:
In the past, a few initiatives were started in the education sector, such as virtual classrooms and online textbooks by Balbharati, CBSE, NIC, etc., while Balchitravani and IGNOU broadcast educational programmes on TV and radio. But none of these attempts was completely successful; children prefer watching cartoon channels to educational programmes on TV. The main reasons for the failure are connectivity issues, the digital divide, lack of knowledge, lack of user friendliness, lack of advertisement and promotion, and the cost of equipment. So Digital India applications can be successful if they are supported by strong connectivity, training, user-friendly interfaces, good maintenance and publicity.
9. REFERENCES
• Times of India article reference
• http://mhrd.gov.in
• http://cbse.nic.in
• Digest of C-DAC
• DigitalIndia.gov.in
• Ron Oliver, "The role of ICT in higher education for the 21st century: ICT as a change agent for education"
• C.S.R. Prabhu, "Cost effective solution for effective e-Governance: e-Panchayat (Example of Exemplary Leadership and ICT Achievement of the Year)"
Build a Classification Model for Social Media Repositories Using Rule
Based Classification Algorithms
Asst. Prof. Yogesh Somwanshi
Dept. of M.C.A,
Central University of Karnataka
Gulbarga (Karnataka), India
Dr. Sarika Sharma
JSPMs, ENIAC
Institute of Computer Application
Pune (Maharashtra), India
ABSTRACT :
In recent years social media have become very famous and widely used in the internet world, especially among young users. Social media provide online digital tools (platforms) or websites for users to create online communities and share information, experiences, ideas, emotions, opinions and other user-generated data, usually for social purposes. Social media grant users the freedom to share anything with these online communities; examples of social media sites are Twitter, Facebook, LinkedIn, Google+ and YouTube. Data mining techniques provide researchers and practitioners the tools needed to analyse large, complex, and frequently changing social media data. In this paper, data mining techniques were utilized to build a classification model to classify social media data into different classes. Rule-based algorithms are the main data mining tool used to build the classification model, where several classification rules were generated. To validate the generated model, several experiments were conducted using real data collected from Facebook and Twitter. The error rates of the various classification algorithms were compared to bring out the best and most effective algorithm for this dataset.
General Term: Data Mining, Social Media, classification techniques.
Keywords: Decision Table, JRip, OneR, PART, ZeroR
Introduction:
Data mining research has successfully produced numerous methods, tools, and algorithms for handling large amounts of data to solve real-world problems [2]. Data mining can be used in conjunction with social media to identify groups or categories (classes) of people amongst the masses of a population. Social Media (SM) has become very popular in the last few years for sharing user-generated content on the web, and its use increases significantly day by day. SM was introduced around 2003-04 [4]; with rapid changes in technology it has divided into various forms and platforms, and its growth has been beyond belief. Kaplan and Haenlein [1] define social media as "a group of Internet-based applications that build on the ideological and technological foundations of Web 2.0, and allow the creation and exchange of user generated content".
Data mining algorithms are broadly classified into supervised, unsupervised, and semi-supervised learning algorithms. Classification is a common example of the supervised learning approach. For supervised learning algorithms, a given data set is typically divided into two parts: training and testing data sets with known class labels. Supervised algorithms build classification models from the training data and use the learned models for prediction. To evaluate a classification model's performance, the model is applied to the test data to obtain
classification accuracy. Typical supervised learning methods include decision tree, k-nearest
neighbors, naive Bayes classification, and support vector machines [3].
Proposed System Model:
Fig. 1 shows the proposed classification model for social media repositories. The model includes the following phases:
1. Data collection
2. Data cleaning
3. Data pre-processing
4. Feature reduction algorithm (AllFilter)
5. Classification algorithm
6. Analysis of error rates produced by the models
7. Choice of the best model for social media data
Fig. 1 : Proposed classification model
Data collection:
Data collection is the main task in research; we used a survey method to collect data from different social websites. We focused only on comment-type data from Facebook and tweet-type data from Twitter, to be classified into different categories and analysed with a text
classification algorithm using WEKA tools. Figures 2 and 3 show how comment or tweet data were collected from Facebook and Twitter.
Fig. 2 : Post on Facebook
Fig. 3 : Tweet on Twitter
Data Cleaning:
The original dataset contained some missing values for various attributes. To proceed with the work, those missing values were replaced as if the people had answered "don't know".
Data pre-processing:
After replacing the missing values, some pre-processing of the data is to be carried out to proceed further. Feature Reduction (AllFilter) is one of the pre-processing techniques. In this phase the important features required to implement the classification algorithm are identified. Through feature reduction, the model complexity is reduced and the model is easier to interpret.
Classification Algorithms:
The goal of classification is to build a set of models that can correctly foresee the class of different objects. Classification is a two-step process:
1. Build a model using training data. Every object of the data must be pre-classified, i.e. its class label must be known.
2. The model generated in the preceding step is tested by assigning class labels to data objects in a test dataset.
The test data may be different from the training data. Every element of the test data is also reclassified in advance. The accuracy of the model is determined by comparing the true class labels in the testing set with those assigned by the model. The following is a brief outline of some rule-based classification algorithms that have been used in data mining and are used as base algorithms in this research.
Decision Table:
The decision table algorithm is found in the Weka classifiers under Rules. The simplest way of representing the output from machine learning is to put it in the same form as
the input. The use of the classifier rules decision table is described as building and using a
simple decision table majority classifier. The output will show a decision on a number of
attributes for each instance. The number and specific types of attributes can vary to suit the
needs of the task. Two variants of decision table classifiers are available

DTMaj (Decision Table Majority)

DTLoc(Decision Table Local)
Decision Table Majority returns the majority of the training set if the decision table cell
matching the new instance is empty, that is, it does not contain any training instances.
Decision Table Local is a new variant that searches for a decision table entry with fewer matching attributes (larger cells) if the matching cell is empty.
JRip:
JRip was implemented by Cohen, W. W. in 1995 as a propositional rule learner implementing Repeated Incremental Pruning to Produce Error Reduction (RIPPER). Cohen implemented RIPPER [7] in order to increase the accuracy of rules by replacing or revising individual rules. Reduced Error Pruning is used, where some data is isolated for training and used to decide when to stop adding more conditions to a rule, with a heuristic based on minimum description length as the stopping criterion. Post-processing steps follow the rule induction, revising the rules based on estimates obtained by a global pruning strategy, which improves accuracy.
OneR:
The OneR algorithm creates a single rule for each attribute of the training data and then picks the rule with the lowest error rate. To generate a rule for an attribute, the most recurrent class for each attribute value must be established; the most recurrent class is the class that appears most frequently for that attribute value. A rule is then a mapping of each attribute value to its most recurrent class for the attribute the rule is based on. The number of training instances that do not agree with the value-to-class bindings in the rule gives the error rate. OneR selects the rule with the lowest error rate.
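As a concrete illustration of the procedure just described (separate from the WEKA implementation actually used in the experiments), the following is a minimal from-scratch sketch of the OneR idea on a small made-up nominal dataset; all attribute values and data are hypothetical.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class OneRSketch {
    // Toy nominal dataset: each row is {attribute1, attribute2, classLabel}.
    static String[][] data = {
        {"sunny", "high", "no"}, {"sunny", "low", "no"},
        {"rainy", "high", "yes"}, {"rainy", "low", "yes"},
        {"cloudy", "high", "yes"}, {"cloudy", "low", "no"}
    };

    public static void main(String[] args) {
        int classIdx = data[0].length - 1;
        int bestAttr = -1, bestErrors = Integer.MAX_VALUE;
        Map<String, String> bestRule = null;

        for (int a = 0; a < classIdx; a++) {
            // Count class frequencies for every value of attribute a.
            Map<String, Map<String, Integer>> counts = new HashMap<>();
            for (String[] row : data)
                counts.computeIfAbsent(row[a], k -> new HashMap<>())
                      .merge(row[classIdx], 1, Integer::sum);

            // Rule: map each attribute value to its most frequent class.
            Map<String, String> rule = new HashMap<>();
            for (Map.Entry<String, Map<String, Integer>> e : counts.entrySet())
                rule.put(e.getKey(), Collections.max(e.getValue().entrySet(),
                        Map.Entry.comparingByValue()).getKey());

            // Error = number of training rows that disagree with the rule.
            int errors = 0;
            for (String[] row : data)
                if (!rule.get(row[a]).equals(row[classIdx])) errors++;

            if (errors < bestErrors) { bestErrors = errors; bestAttr = a; bestRule = rule; }
        }
        System.out.println("OneR picks attribute " + bestAttr + " with rule "
                + bestRule + " (" + bestErrors + " training errors)");
    }
}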
PART:
PART is a separate-and-conquer rule learner proposed by Eibe and Witten [5]. The algorithm produces sets of rules called decision lists, which are ordered sets of rules. A new data item is compared to each rule in the list in turn and is assigned the category of the first matching rule. PART builds a partial C4.5 decision tree in each iteration and makes the best leaf into a rule. The algorithm is a combination of C4.5 and RIPPER rule learning [6].
ZeroR:
ZeroR is the simplest classification method; it relies on the target and ignores all predictors. The ZeroR classifier simply predicts the majority category (class). Although there is no predictive power in ZeroR, it is useful for establishing a baseline performance as a benchmark for other classification methods. A frequency table is constructed for the target and the most frequent value is selected.
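The comparison reported in the next section was carried out interactively in the WEKA workbench. Purely as an illustrative sketch, the same five rule learners could also be compared through WEKA's Java API as follows; the ARFF file name is a placeholder, the 10-fold cross-validation setup is an assumption (the paper does not state its exact evaluation protocol), and the printed mean absolute error may or may not be the precise error measure reported in Table 1.

import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.rules.DecisionTable;
import weka.classifiers.rules.JRip;
import weka.classifiers.rules.OneR;
import weka.classifiers.rules.PART;
import weka.classifiers.rules.ZeroR;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class RuleLearnerComparison {
    public static void main(String[] args) throws Exception {
        // Placeholder file name standing in for the pre-processed Facebook/Twitter data.
        Instances data = DataSource.read("facebook_comments.arff");
        data.setClassIndex(data.numAttributes() - 1);   // class label is the last attribute

        Classifier[] learners = {
            new DecisionTable(), new JRip(), new OneR(), new PART(), new ZeroR()
        };
        for (Classifier c : learners) {
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(c, data, 10, new Random(1));  // assumed 10-fold CV
            System.out.printf("%-14s MAE=%.4f  correct=%.4f%%  incorrect=%.4f%%%n",
                    c.getClass().getSimpleName(), eval.meanAbsoluteError(),
                    eval.pctCorrect(), eval.pctIncorrect());
        }
    }
}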
Analysis and Results:
This section shows the analysis after executing various Classification Algorithms as per
the requirements and explores the results of the same. The whole experiment is carried out
with the Data Mining tool WEKA. Various classification Algorithms were executed for all
selected features one by one. For subset Facebook & Twitter, relevant attributes identified by
feature reduction are executed by various Classification Algorithm and different error rates
were identified and mentioned in the Table 1. A Graph drawn for the error rates after
executing the Algorithm for the attributes selected by Feature reduction is shown in Fig. 1.
Table 1 : Error rates after executing Classification Algorithms for subsets Facebook & Twitter

Classification Algorithm   Facebook   Twitter
DecisionTable              0.091      0.094
JRip                       0.0615     0.0711
OneR                       0.0437     0.0444
PART                       0.0478     0.0487
ZeroR                      0.4869     0.469
Fig. 1 : Comparison of error rates for subset Facebook and Twitter
Table 2 gives the number of classified instances, i.e. the percentages of correctly and incorrectly classified instances for the five rule-based algorithms. A graph of the number of classified instances after executing the algorithms for the attributes selected by feature reduction is shown in Fig. 2.
Table 2 : Correctly and incorrectly classified instances (%) for Facebook and Twitter

Classification   Facebook                    Twitter
Algorithm        Correct     Incorrect      Correct     Incorrect
DecisionTable    96.5517     3.4483         95.6566     4.3434
JRip             96.5517     3.4483         95.6566     4.3434
OneR             95.6322     4.3678         95.8745     4.1255
PART             97.4713     2.5287         97.3574     2.6426
ZeroR            61.3793     38.6207        62.5178     37.4822
Fig. 2: Number of Classified Instances
Here, accuracy was enhanced up to 97.47%, which makes this classification model for social media data more accurate.
From Table 1 it is clear that the error rate produced by the OneR algorithm is much lower than that of all the other algorithms, and the misclassifications identified were very few. A graph of the error rates after executing the algorithms for the attributes selected by feature reduction is shown in Fig. 1.
From Table 2 it is evident that PART shows the best performance compared to the other studied algorithms: PART has the highest number of correctly classified instances, followed by Decision Table, JRip, OneR and ZeroR. Decision Table and JRip show good performance, OneR has average performance in terms of correct classification of instances, and ZeroR shows poor classification performance.
Conclusion:
The data mining classification algorithms and their error rates were analysed and compared. From the results it is clear that, over all the subsets considered in this research, the OneR algorithm produced lower error rates than all the other algorithms, while PART is the best classifier in terms of correctly classified instances compared to the other rule-based algorithms when executed with the WEKA tool.
REFERENCES:
Websites:
[1] http://decidedlysocial.com/13-types-of-social-media-platformsandcounting/#sthash.RMGpNsVc.dpuf
[2] http://en.wikipedia.org/wiki/Social media/
[3] https://www.facebook.com/
[4] https://twitter.com/
[5] https://www.linkedin.com/
[6] https://www.flickr.com/
[7] https://www.youtube.com
[8] https://plus.google.com/
[9] http://www.saedsayad.com/zeror.htm
[1] Kaplan, Andreas M., and Michael Haenlein. "Users of the world, unite! The challenges
and opportunities of Social Media." Business horizons 53.1 (2010): 59-68.
[2] Geoffrey Barbier and Huan Liu, "Data Mining in Social Media", Arizona State University.
[3] Gundecha, Pritam, and Huan Liu. "Mining social media: A brief introduction." Tutorials
in Operations Research 1.4 (2012).
[4] Ngai, Eric WT, Spencer SC Tao, and Karen KL Moon. "Social media research: Theories,
constructs, and conceptual frameworks." International Journal of Information
Management 35.1 (2015): 33-44
[5] B.R. Gaines and P. Compton. Induction of ripple-down rules applied to modeling large
databases
[6] Ali, Shawkat, and Kate A. Smith. "On learning algorithm selection for classification."
Applied Soft Computing 6.2 (2006): 119-138
[7] F. Leon, M. H. Zaharia and D. Galea, “Performance Analysis of Categorization
Algorithms,” International Symposium on Automatic Control and Computer Science,
2004.
Using Data Mining Techniques to Build a Classification Model for Social
Media Repositories: Performance of Decision Tree Based Algorithm
Asst. Prof. Yogesh Somwanshi
Dept. of M.C.A,
Central University of Karnataka
Gulbarga (Karnataka), India
Dr. Sarika Sharma
JSPMs, ENIAC
Institute of Computer Application
Pune (Maharashtra), India
ABSTRACT :
In recent years social media have become very famous and widely used in the internet world, especially among young users. Social media provide online digital tools (platforms) or websites for users to create online communities and share information, experiences, ideas, emotions, opinions and other user-generated data, usually for social purposes. Social media grant users the freedom to share anything with these online communities; examples of social media sites are Twitter, Facebook, LinkedIn, Google+ and YouTube. Data mining techniques provide researchers and practitioners the tools needed to analyse large, complex, and frequently changing social media data. In this paper, data mining techniques were utilized to build a classification model to classify social media data into different classes. Decision tree algorithms are the main data mining tool used to build the classification model, where several classification rules were generated. To validate the generated model, several experiments were conducted using real data collected from Facebook and Twitter. The error rates of the various classification algorithms were compared to bring out the best and most effective algorithm for this dataset.
General Term: Data Mining, Social Media, classification techniques.
Keywords: LMT, REPTree, Decision Stump, J48, Random Forest, and Random Tree.
Introduction:
Data mining research has successfully produced numerous methods, tools, and algorithms for handling large amounts of data to solve real-world problems [2]. Data mining can be used in conjunction with social media to identify groups or categories (classes) of people amongst the masses of a population. Social Media (SM) has become very popular in the last few years for sharing user-generated content on the web, and its use increases significantly day by day. SM was introduced around 2003-04 [4]; with rapid changes in technology it has divided into various forms and platforms, and its growth has been beyond belief. Kaplan and Haenlein [1] define social media as "a group of Internet-based applications that build on the ideological and technological foundations of Web 2.0, and allow the creation and exchange of user generated content".
Data mining algorithms are broadly classified into supervised, unsupervised, and semi-supervised learning algorithms. Classification is a common example of the supervised learning approach. For supervised learning algorithms, a given data set is typically divided into two parts: training and testing data sets with known class labels. Supervised algorithms build classification models from the training data and use the learned models for prediction. To evaluate a classification model's performance, the model is applied to the test data to obtain
classification accuracy. Typical supervised learning methods include decision tree, k-nearest
neighbors, naive Bayes classification, and support vector machines [3].
Proposed System Model:
Figure 1 shows the proposed classification model for social media repositories. The model includes the following phases:
1. Data collection
2. Data cleaning
3. Data pre-processing
4. Feature reduction algorithm (AllFilter)
5. Classification algorithm
6. Analysis of error rates produced by the models
7. Choice of the best model for social media data
Fig. 1 : Proposed System
Data collection:
Data collection is the main task in research; we used a survey method to collect data from different social websites. We focused only on comment-type data from Facebook and tweet-type data from Twitter, to be classified into different categories and analysed with a text classification algorithm using WEKA tools. Figures 2 and 3 show how comment or tweet data were collected from Facebook and Twitter.
Fig. 2 : Post on Facebook
Fig. 3 : Tweet on Twitter
Data Cleaning:
The original dataset contained some missing values for various attributes. To proceed with the work, those missing values were replaced as if the people had answered "don't know".
Data pre-processing:
After replacing the missing values, some pre-processing of the data is to be carried out to proceed further. Feature Reduction (AllFilter) is one of the pre-processing techniques. In this phase the important features required to implement the classification algorithm are identified. Through feature reduction, the model complexity is reduced and the model is easier to interpret.
Classification Algorithms:
The goal of classification is to build a set of models that can correctly foresee the class of different objects. Classification is a two-step process:
1. Build a model using training data. Every object of the data must be pre-classified, i.e. its class label must be known.
2. The model generated in the preceding step is tested by assigning class labels to data objects in a test dataset.
The test data may be different from the training data. Every element of the test data is also reclassified in advance. The accuracy of the model is determined by comparing the true class labels in the testing set with those assigned by the model. The following is a brief outline of some decision tree classification algorithms that have been used in data mining and are used as base algorithms in this research.
LMT:
Logistic Model Trees use logistic regression functions at the leaves. This method can deal with missing values, binary and multi-class variables, and numeric and nominal attributes. It
generates small and accurate trees. It uses the CART pruning technique and does not require any tuning parameters. It is often more accurate than C4.5 decision trees and standalone logistic regression [5]. LMT produces a single tree containing binary splits on numeric attributes, multiway splits on nominal ones and logistic regression models at the leaves. It also ensures that only relevant attributes are included in the latter.
Decision Stump:
Decision stumps are basically decision trees with a single layer. As opposed to a tree which has multiple layers, a stump stops after the first split. Decision stumps are usually used for population segmentation on large data. Occasionally, they are also used to build simple yes/no decision models for smaller data sets. Decision stumps are generally easier to build than decision trees. At the same time, SAS coding for decision stumps is more manageable than for CART and CHAID, because a decision stump is just one single run of the tree algorithm and thus does not need to prepare data for subsequent splits. There is also no need to specify the data for a subsequent split, which makes renaming of the output simpler to manage.
J48 (C4.5):
J48 is an open source Java implementation of the C4.5 algorithm in the Weka data
mining tool. C4.5 is a program that creates a decision tree based on a set of labeled input data.
This algorithm was developed by Ross Quinlan. The decision trees generated by C4.5 can be
used for classification, and for this reason, C4.5 is often referred to as a statistical classifier
(”C4.5 (J48)”, Wikipedia).
REPTree:
REPTree [6] uses the regression tree logic and creates multiple trees in different iterations, after which it selects the best of all the generated trees; that tree is considered the representative. In pruning the tree, the measure used is the mean squared error of the predictions made by the tree. Basically, the Reduced Error Pruning Tree ("REPT") is a fast decision tree learner which builds a decision/regression tree using information gain (or variance reduction) as the splitting criterion, and prunes it using reduced-error pruning. It only sorts values for numeric attributes once. Missing values are dealt with using C4.5's method of fractional instances.
Random Tree:
Random Tree is a supervised classifier; it is an ensemble learning algorithm that generates many individual learners. It employs a bagging idea to produce a random set of data for constructing a decision tree. In a standard tree, each node is split using the best split among all variables; in a random forest, each node is split using the best among a subset of predictors randomly chosen at that node.
Random Forest:
The Random Forest algorithm, developed by Leo Breiman and Adele Cutler in 2001, is an ensemble classifier that consists of many decision trees and outputs the class that is the mode of
the classes output by the individual trees. Random Forests grow many classification trees without pruning.
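As with the rule-based study, the experiments reported below were run through the WEKA workbench. Purely as an illustrative sketch, the tree learners discussed above could also be compared through WEKA's Java API as follows; the ARFF file name and the 66/34 hold-out split are assumptions made for illustration only, since the paper does not state its exact train/test protocol.

import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.DecisionStump;
import weka.classifiers.trees.J48;
import weka.classifiers.trees.LMT;
import weka.classifiers.trees.REPTree;
import weka.classifiers.trees.RandomForest;
import weka.classifiers.trees.RandomTree;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class TreeLearnerComparison {
    public static void main(String[] args) throws Exception {
        // Placeholder file name standing in for the collected Twitter data.
        Instances data = DataSource.read("twitter_tweets.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // Assumed hold-out split: 66% training, 34% testing.
        data.randomize(new Random(1));
        int trainSize = (int) Math.round(data.numInstances() * 0.66);
        Instances train = new Instances(data, 0, trainSize);
        Instances test  = new Instances(data, trainSize, data.numInstances() - trainSize);

        Classifier[] learners = { new LMT(), new REPTree(), new DecisionStump(),
                                  new J48(), new RandomForest(), new RandomTree() };
        for (Classifier c : learners) {
            c.buildClassifier(train);                 // learn on the training split
            Evaluation eval = new Evaluation(train);
            eval.evaluateModel(c, test);              // score on the held-out split
            System.out.printf("%-14s MAE=%.4f  correct=%.2f%%%n",
                    c.getClass().getSimpleName(),
                    eval.meanAbsoluteError(), eval.pctCorrect());
        }
    }
}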
Analysis and Results:
This section shows the analysis after executing various Classification Algorithms as per
the requirements and explores the results of the same. The whole experiment is carried out
with the Data Mining tool WEKA. Various classification Algorithms were executed for all
selected features one by one. For subset Facebook & Twitter, relevant attributes identified by
feature reduction are executed by various Classification Algorithm and different error rates
were identified and mentioned in the Table 1.
Table 1 : Error rates after executing Classification Algorithms for subsets Facebook & Twitter

Classification Algorithm   Facebook   Twitter
LMT                        0.1066     0.1075
REPTree                    0.0891     0.0887
Decision Stump             0.0784     0.0795
J48                        0.0519     0.0525
Random Forest              0.0342     0.0352
Random Tree                0.0129     0.0129
From Table 1 it is clear that the error rate produced by the Random Tree algorithm is much lower than that of all the other algorithms, and the misclassifications identified were very few. A graph of the error rates after executing the algorithms for the attributes selected by feature reduction is shown in Fig. 2.
Fig. 2: Comparison of error rates for subset Facebook and Twitter
Conclusion:
The data mining classification algorithms and their error rates were analysed and compared. From the results it is clear that, over all the subsets considered in this research, the Random Tree algorithm produced lower error rates than all the other algorithms when executed with the WEKA tool.
REFERENCES:
Websites:
[1] http://decidedlysocial.com/13-types-of-social-media-platformsandcounting/#sthash.RMGpNsVc.dpuf
[2] http://en.wikipedia.org/wiki/Social media/
[3] https://www.facebook.com/
[4] https://twitter.com/
[5] https://www.linkedin.com/
[6] http://smbp.uwaterloo.ca/2013/10/honeycomb-and-social-crm/
[7] http://www.fredcavazza.net/2014/05/22/social-media-landscape-2014/
[8] https://www.flickr.com/
[9] https://www.youtube.com
[10] https://plus.google.com/
[1] Kaplan, Andreas M., and Michael Haenlein. "Users of the world, unite! The challenges
and opportunities of Social Media." Business horizons 53.1 (2010): 59-68.
[2] Geoffrey Barbier and Huan Liu, "Data Mining in Social Media", Arizona State University.
[3] Gundecha, Pritam, and Huan Liu. "Mining social media: A brief introduction." Tutorials
in Operations Research 1.4 (2012).
[4] Ngai, Eric WT, Spencer SC Tao, and Karen KL Moon. "Social media research: Theories,
constructs, and conceptual frameworks." International Journal of Information
Management 35.1 (2015): 33-44
[5] Niels Landwehr, Mark Hall, and Eibe Frank: Logistic Model Trees. Machine Learning 59(1-2): 161-205 (2005).
[6] Olcay Taner Yıldız, O. Aslan and E. Alpaydın, "Multivariate Statistical Tests for Comparing Classification Algorithms," Springer-Verlag Berlin Heidelberg, pp. 1-15, 2011.
NDIS Filter Driver for vSwitch in Hyper-V OS
Mr. Gajanan. M. Walunjkar
Asst Prof. Dept of IT,
Army Institute of Technology,Pune-15
Savitribai Phule Pune University
Maharashtra ,India
gwalunjkar@aitpune.edu.in
ABSTRACT :
Consider a local area network in which every single machine is running many virtual machines. In such a situation the security of these virtual machines is of utmost importance and is not an easy task, because once an intruder takes over the Hyper-V OS, the intruder can easily take over the virtual machines.
There are many security systems on the market which can prevent this problem, but installing security software in each VM is a tedious task. It is also possible that a large number of VMs are created and destroyed within a fraction of a second in a large network, so management of such a network is again tedious.
Rather than focusing on each VM individually, we can manage a single virtual switch which filters all the packets going in and out of the network. We are developing a filter driver by extending the functionality of the vSwitch to secure the VMs. NDIS specifies how communication protocol programs (such as TCP/IP) and network device drivers should communicate with each other.
Keyword: VM, vSwitch, NDIS (Network Driver Interface Specification)
I. Introduction
Virtual machines allow emulation of many isolated OSs, reducing hardware costs while providing features such as state restore, transience, and mobility. The main components of a standard VM setup are as follows:
a. The host OS: it interacts with the hardware.
b. The hypervisor: it allocates resources and manages the VMs.
c. The guest OS: it runs without access to the host OS or physical hardware.
It is important to note that VMs are subject to unique attacks in addition to the attacks that physical machines face [1].
The use of a virtual environment in local area networks helps to increase the speed and utilization of data moving across the network, but at the same time it raises new network challenges. In a virtualized setting, the network access layer is pulled into the hypervisor and built-in vSwitches manage the traffic.
Virtual machines also present many challenges in terms of security vulnerabilities. To restore integrity, system administrators can roll back guests to a pre-attack state, but if the host is compromised, attackers have access to the hardware and have unlimited freedom. Most users generally do not secure their VMs with even basic virus protection, the reason being that it is easy to restore an infected VM. As a result of such negligence, or of relying only on traditional security mechanisms, 60% of virtual machines in production are less secure than
their physical counterparts. These VMs are very easy to create relative to a physical machine, causing their growth frequently to far exceed the administrators' ability to secure each unique guest [2].
Apart from local attacks, there are three other classes of attacks on virtual machines. First, an attacker may utilize a compromised VM to communicate with other VMs on the same physical host, a violation of the isolation principle of VMs. The second type of attack targets the hypervisor, which can potentially give the attacker access to the host OS and hardware. In addition, denial-of-service attacks can be particularly successful against VMs because they have the potential to consume resources from all VMs on the host.
II. Proposed System Architecture
As we are writing vSwitch extensions to provide security to a number of virtual machines running on a single host operating system, a few terms related to our project are defined below.
Hypervisor: A hypervisor, or Virtual Machine Monitor (VMM), is computer software that creates and runs virtual machines [3][4]. The host machine is the computer that runs the hypervisor, and each VM is known as a guest machine. The three major hypervisor products are VMware vSphere 5, Citrix XenServer 6, and Microsoft Hyper-V 3.
Fig 1: Hypervisor Architecture
NDIS specifies how communication protocol programs (such as TCP/IP) and network device drivers should communicate with each other. The Hyper-V Virtual Switch is a layer-2 network switch. It can connect VMs to both virtual and physical networks, and it also provides a level of security and isolation. The Hyper-V extensible switch supports an interface that allows instances of NDIS filter drivers to be bound into the extensible switch driver stack. Once bound and enabled, they can monitor, modify, and forward packets to extensible switch ports. We are therefore developing an NDIS filter driver for the Windows platform which will monitor all network traffic [7].
Fig 2: Proposed System Architecture
Systems running virtual machines connected to each other in a local network are vulnerable to many threats. Many security products on the market can prevent these threats, but installing security software in each VM is tedious, and in a large network a large number of VMs may be created and destroyed within a fraction of a second, which makes managing such a network tedious as well. To solve this problem, we install the security function on the vSwitch itself so that it automatically scans every packet travelling between machines in the network. The specification we use for writing the driver is NDIS (Network Driver Interface Specification).
This filter driver is an alternative solution to the security problems of VMs [8]. We are writing an extension to the virtual switch that acts as a filter driver scanning network traffic. The filter driver sniffs the packets on the network, allows only benign packets through, and discards malicious packets that could threaten the VMs.
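The actual extension is an NDIS filter driver written in C against the Windows driver stack; purely to illustrate the per-packet allow/drop decision described above, the following Python sketch classifies packets against an assumed blocklist. The packet fields, blocklist entries, and payload signatures are illustrative assumptions, not part of the authors' implementation.

# Illustrative sketch only: the paper's driver is an NDIS vSwitch extension in C;
# this Python mock-up shows the per-packet allow/drop decision it performs.
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str      # hypothetical parsed header fields
    dst_port: int
    payload: bytes

BLOCKED_IPS = {"10.0.0.66"}          # assumed blocklist of known-bad sources
BLOCKED_SIGNATURES = [b"malware"]    # assumed payload signatures

def allow(pkt: Packet) -> bool:
    """Return True if the packet may be forwarded to a VM's vSwitch port."""
    if pkt.src_ip in BLOCKED_IPS:
        return False
    if any(sig in pkt.payload for sig in BLOCKED_SIGNATURES):
        return False
    return True

if __name__ == "__main__":
    good = Packet("192.168.1.5", 443, b"hello")
    bad = Packet("10.0.0.66", 80, b"malware payload")
    print(allow(good), allow(bad))   # True False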
Fig 3: Malicious packets that are not allowed through our vSwitch
III. NDIS filter driver
Filter drivers provide filtering services for miniport drivers. NDIS driver stacks must
include miniport drivers and protocol drivers and optionally include filter drivers.
The following applications might require a filter driver:
• Data filtering applications for security or other purposes.
• Applications that monitor and collect network data statistics.
Filter drivers have the following characteristics:
• An instance of a filter driver is called a filter module. Filter modules are attached to an underlying miniport adapter. Multiple filter modules from the same filter driver or different filter drivers can be stacked over an adapter.
• Overlying protocol drivers are not required to provide alternate functionality when filter modules are installed between such drivers and the underlying miniport drivers (in other words, filter modules are transparent to overlying protocol drivers).
• Because filter drivers do not implement virtual miniports like an intermediate driver, filter drivers are not associated with a device object.
• NDIS can dynamically insert or delete filter modules in the driver stack.
• NDIS guarantees the availability of context space.
There are two primary types of filter drivers:
(a) Monitoring: These filter drivers monitor the behavior in a driver stack. However, they
only pass on information and do not modify the behavior of the driver stack. Monitoring
filter drivers cannot modify or originate data.
(b) Modifying: These filter drivers modify the behavior of the driver stack. The type of
modification is driver-specific.
IV. Writing a Hyper-V Extensible Switch
An interface supported by the Hyper-V extensible switch allows instances of NDIS filter drivers (known as extensible switch extensions) to bind within the extensible switch driver stack. Once bound, these extensions can monitor, modify, and forward packets to extensible switch ports. In addition, the extensions can reject, redirect, or originate packets to ports that are used by the Hyper-V partitions. A Hyper-V Extensible Switch extension is an NDIS filter or Windows Filtering Platform (WFP) filter that runs inside the Hyper-V Extensible Switch (also called the "Hyper-V virtual switch"). There are 3 classes of extensions: capture, filtering, and forwarding. All of them can be implemented as NDIS filter drivers. Filtering extensions can also be implemented as WFP filter drivers.
V. Applications
I. Security
It sniffs all the incoming and outgoing packets on the network and discards malicious packets. The driver provides the following security features:
i. Anti-spam
ii. Anti-phishing
iii. Browser protection
iv. IDS/IPS
v. Scheduled browsing
vi. Parental controls
vii. Firewall
II. Management
As a large number of VMs are created and destroyed in a very short amount of time, managing the security of such a network is a tedious task. By installing the filter driver on the virtual switch, the network administrator's security management task is automated.
VI. Conclusion
Writing vSwitch extensions provides security to a number of virtual machines running on a single host operating system with less time and effort. Because the extension sniffs all packets, it can discard malicious packets and provide a strong security mechanism.
REFERENCES:
[1] https://msdn.microsoft.com/en-us/library/windows/hardware/ff544113(v=vs.85).aspx
[2] http://www.cse.wustl.edu/~jain/cse571-09/ftp/vmsec/index.html
[3] https://docs.oracle.com/cd/E19455-01/806-0916/6ja85398i/index.html
[4] http://www.wikipedia.com
[5] http://www.altaro.com/hyper-v/the-hyper-v-virtual-switch-explained-part-1/
[6] http://www.altaro.com/hyper-v/hyper-v-virtual-switch-explained-part-2/
[7] https://channel9.msdn.com/posts/Hyper-V-Extensible-Switch-Part-I--Introduction
[8] https://channel9.msdn.com/posts/Hyper-V-Extensible-Switch-Part-II--Understanding-the-Control-Path
Email Security: An Important Issue
Prof. Nilambari Kale
Lecturer, Pratibha College of Commerce & Computer Studies,
Chinchwad, Pune, India
ABSTRACT :
Internet security is a topic that we all know to be important, but it often sits way back in the recesses of our minds, as we fool ourselves into believing that “it won’t happen to me”. Whether it’s the destructive force of the newest virus or just the hacking attempts of a newbie script kiddy, we’re always only one click away from dealing with a security mess that we’d rather not confront. Nowhere is this truer than in our emails. You may already know that email is insecure; however, it may surprise you to learn just how insecure it really is. For example, did you know that messages which you thought were deleted years ago may be sitting on servers halfway around the world? Or that your messages can be read and modified in transit, even before they reach their destination? Or even that the username and password that you use to log in to your email servers can be stolen and used by hackers?
This paper is designed to teach you how email really works, what the real security issues are, what solutions exist, and how you can avoid security risks.
Keywords: how email works, SMTP server, security threats, email security.
1. Introduction:
Information security and integrity are centrally important as we use email for personal
and business communication: sending confidential and sensitive information over this medium
every day.
1.1 How Email Works:
This section describes the general mechanisms and paths taken by an email message on
its route from sender to recipient. This should give you an overview of the different protocols
(languages) involved, the different types of servers involved, and the distributed nature of
email networks. The examples I present are representative of many common email solutions,
but are by no means exhaustive.
Sending an Email Message
Sending an email message is like sending a postal letter. When sending a letter, you drop
it off at your local post office. The local post office looks at the address and figures out which
regional post office the letter should be forwarded to. Then, the regional post office looks at
the address and figures out which local post office is closest to your recipient. Finally, the
recipient’s local post office delivers your letter to its recipient. Computers are like “post
offices”, and the “Simple Mail Transfer Protocol” (SMTP) is the “procedure” which an
“email post office” uses to figure out where to send the letter next (e.g. the “next hop”). Any
program that sends an email message uses SMTP to deliver that message to the next “post
office” for “relaying” it to its final destination.
Most people send mail in one of two ways – with a web-based interface like Gmail,
Outlook Web Access, or WebMail, or with an “email client” program like Outlook,
Thunderbird, iPhone Mail, Android, or Mac Mail.
When you send a message with an email program on your personal computer (or your
phone or tablet), you have to specify an “SMTP server” so that your email program knows
where to send the message. This SMTP server is like your local post office. Your email
program talks directly to the server using the computer protocol (language) known as SMTP.
This is like dropping off a letter at the local post office.
When you use WebMail, your personal computer uses an Internet connection to
communicate with a web server. The “language” that the internet connection uses is HTTP –
“HyperText Transfer Protocol”. When you send your message with WebMail, the web server
itself takes care of contacting an SMTP server and delivering your message to it.
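As a minimal sketch of the "drop the letter at the local post office" step, Python's standard smtplib can hand a message to an SMTP server. The server name, port, addresses, and credentials below are placeholders, not real services.

# Minimal sketch: hand one message to an SMTP server ("the local post office").
# Host, port, addresses, and credentials are placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.net"
msg["Subject"] = "Hello"
msg.set_content("This message travels by SMTP relay.")

with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()                                   # upgrade to TLS before logging in
    server.login("alice@example.com", "app-password")   # placeholder credentials
    server.send_message(msg)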
Fig. 1 How SMTP works
Delivery of email from your SMTP Server to your recipient’s SMTP Server:
As shown in the above diagram when an SMTP Server receives an email message, it
first checks if an account for the message recipient is configured on the server itself. If there is
such an account, the server will drop the message in that person’s Inbox (or follow other more
complex user-defined rules). If there is no local account for that recipient, the server must
“relay” the email message to another SMTP server closer to the recipient. This is analogous to
how your local post office forwards your letter to a regional post office unless it is for
someone served by the post office itself. (Post offices don’t actually work this way in general,
but the concept is easily understood with this analogy.) This process is known as “SMTP
relaying”.
How does your SMTP Server know where to relay the message to?
If the recipient's email address is "bob@gmail.com", then the recipient's domain name is "gmail.com". Part of the "DNS settings" for the recipient's domain (the "mail exchange" or MX records) lists the inbound SMTP Servers for that domain. Multiple servers can be listed, and they can be ranked in terms of "priority". The highest-priority SMTP Server listed is the recipient's actual/main inbound SMTP Server; the others are "backup inbound SMTP Servers". These backup servers
may either merely queue email for later delivery to the recipient's actual SMTP Server or perform the same real-time delivery actions as the main SMTP Server (i.e., they are there for redundancy).
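For illustration, the MX lookup an SMTP server performs before relaying can be reproduced with the third-party dnspython package (an assumption; it is not mentioned in this paper). The resolver returns the mail exchangers for a domain together with their priorities.

# Sketch of the MX lookup an SMTP server performs before relaying.
# Requires the third-party dnspython package (pip install dnspython).
import dns.resolver

def mail_exchangers(domain: str):
    answers = dns.resolver.resolve(domain, "MX")
    # Lower preference value = higher-priority inbound server.
    return sorted((r.preference, str(r.exchange)) for r in answers)

if __name__ == "__main__":
    for priority, host in mail_exchangers("gmail.com"):
        print(priority, host)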
The take-away from this discussion so far is that:
1. Most email servers communicate with each other using SMTP.
2. You never know how long it will take an email message to get from sender to recipient, because you don't know how busy the servers are, how much traffic there is on the Internet, what machines are down for maintenance, etc.
3. Your messages may sit in queues on any number of servers for any amount of time. Some of these servers may belong to third parties (i.e. may not be under the purview of either the sender or the recipient, may be an external email filtering or archival organization, etc.).
4. Your recipients can determine the Internet address and name of the computer from which you are sending your messages, even in the case of your email being spoofed by a spammer.
Retrieving Email from an SMTP Server
When you receive an email message it sits in a file (or database) on your email server. If you wish to view this email message you must access this content. Any computer wishing to access your email must speak one of the languages the email server does. With some exceptions, there are really only 2 main languages that email servers understand for email retrieval (as opposed to email sending, for which they use SMTP): one is called the "Internet Message Access Protocol" (IMAP) and the other is called the "Post Office Protocol" (POP).
As a recipient, you can generally retrieve your email by either using a web-based
interface known as “WebMail”, or via an “email client” program, such as Microsoft Outlook
or iPhone Mail, running on your personal computer or device. The email client programs will
talk directly to your email server and speak IMAP, POP, or something similar. With WebMail,
your computer will talk to a WebMail server using a web connection (speaking HTTP); the
WebMail server will, in turn, talk to your email server using POP or IMAP or something
similar (like a direct database connection).
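As a minimal sketch of retrieval with an email client, the snippet below lists unread messages over IMAP using Python's standard imaplib; the server name and credentials are placeholders.

# Minimal IMAP retrieval sketch (server and credentials are placeholders).
import imaplib
import email

with imaplib.IMAP4_SSL("imap.example.com") as box:
    box.login("bob@example.net", "app-password")
    box.select("INBOX")
    status, data = box.search(None, "UNSEEN")        # IDs of unread messages
    for num in data[0].split():
        status, msg_data = box.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        print(msg["From"], "-", msg["Subject"])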
1.2 The Lack of Security in Email
Email is inherently insecure. At this stage, it is important to point out the insecurity in
the email delivery pathway just discussed:
• WebMail: If the connection to your WebMail server is “insecure”, then all information, including your username and password, is not encrypted as it passes between the WebMail server and your computer.
• SMTP: SMTP does not encrypt messages. Communications between SMTP servers may send your messages in plain text for any eavesdropper to see. Additionally, if your email server requests that you send your username and password to “log in” to the SMTP server in order to relay messages to other servers, then these may also be sent in plain text, subject to eavesdropping. Finally, messages sent via SMTP include information about which computer they were sent from and what email program was used. This information, available to all recipients, may be a privacy concern.
• POP and IMAP: The POP and IMAP protocols require that you send your username and password to log in; by default, these credentials are not encrypted. So, your messages and credentials can be read by any eavesdropper listening to the flow of information between your personal computer and your email service provider’s computer.
• BACKUPS: Email messages are generally stored on SMTP servers in plain, unencrypted text. Backups of the data on these servers may be made at any time and administrators can read any of the data on these machines. The email messages you send may be saved unexpectedly and indefinitely and may be read by unknown persons as a result.
These are just a few of the security problems inherent in email. Now we will talk about
communications security problems in general so we can see what else can go wrong.
1.3 Security Threats to Your Email Communications
This section describes many of the common security problems involved in
communications and in email in particular.
Eavesdropping
The Internet is a big place with a lot of people on it. It is very easy for someone who has
access to the computers or networks through which your information is traveling to capture
this information and read it.
Identity Theft
If someone can obtain the username and password that you use to access your email
servers, they can read your email and send false email messages as you.
Invasion of Privacy
If you are very concerned about your privacy, then you should consider the possibility of
“unprotected backups”, listed below. You may also be concerned about letting your recipients
know the IP address of your computer. This information may be used to tell in what city you
are located or even to find out what your address is in some cases! This is not usually an issue
with WebMail, POP, or IMAP, but is an issue with the transport of email, securely or
insecurely, from any email client over SMTP.
Message Modification
Anyone who has system administrator permission on any of the SMTP Servers that your
message visits can not only read your message, but they can delete or change the message
before it continues on to its destination. Your recipient has no way to tell if the email message
that you sent has been altered! If the message was merely deleted they wouldn’t even know it
had been sent.
False Messages
It is very easy to construct messages that appear to be sent by someone else. Many viruses take advantage of this situation to propagate themselves. In general, it is very hard to be sure that the apparent sender of a message is the true sender; the sender's name could easily have been fabricated.
Message Replay
Just as a message can be modified, messages can be saved, modified, and re-sent later!
You could receive a valid original message, but then receive subsequent faked messages that
appear to be valid.
Unprotected Backups
Messages are usually stored in plain text on SMTP Servers. Thus, backups of these
servers’ disks usually contain plain text copies of your messages. As backups may be kept for
years and can be read by anyone with access to them, your messages could still be exposed in
insecure places even after you think that all copies have been “deleted”.
Repudiation
Because normal email messages can be forged, there is no way for you to prove that
someone sent you a particular message. This means that even if someone DID send you a
message, they can successfully deny it. This has implications with regards to using email for
contracts, business communications, electronic commerce, etc.
2. Securing Your Email with TLS
2.1 Using TLS Support
The easiest thing you can do to make your email more secure is to use an email provider
that supports “Transport Layer Security” (SSL / TLS) for their Webmail, POP, IMAP, and
SMTP servers.
TLS uses a combination of asymmetric and symmetric key encryption mechanisms. If you connect to a server using TLS, the following things happen (a minimal connection sketch is shown after this list):
1. The server uses its private key to prove to you that it is in fact the server that you are trying to connect to. This lets you know that you are not connecting to a “middleman” that is trying to intercept your communications.
2. The server sends you its public key as part of its certificate.
3. Your client generates a “secret key”, encrypts it with the server’s public key, and sends it to the server, so that only the server can recover it.
4. You and the server then communicate using symmetric key encryption with this shared secret key. (Symmetric key encryption is faster than asymmetric key encryption.)
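In practice, "use TLS" means connecting over SSL/TLS or upgrading the connection with STARTTLS before any credentials are sent. A minimal sketch with Python's standard library is shown below; the host name and port are placeholders, not a real service.

# Sketch: make sure a mail server connection is protected by TLS
# before credentials are sent (host and port are placeholders).
import smtplib
import ssl

context = ssl.create_default_context()     # verifies the server certificate

with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls(context=context)       # refuse to continue without TLS
    # server.sock is now an ssl.SSLSocket; print the negotiated protocol version.
    print(server.sock.version())           # e.g. 'TLSv1.3'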
2.2 Security Tips:
Here are some simple yet important security tips you should know in order to keep your
email account as secure as possible.
1. Use Separate Email Accounts
If you’re like most people, your email account is probably the centralized hub of your
personal activity. All of your Facebook notifications, website registrations, newsletters,
messages, etc. get sent to your email box, right?
In other words, if you bring all of your activity into a single email account, what happens
when someone breaks into it? I’d say it’s plausible that they would gain access to
everything else. This is why you should use multiple email accounts.
Having separate email accounts will not only help boost your security, but also your
productivity. Imagine if you could consolidate all of your work emails into a single work
account; all of your friends and family communicate with your personal account; you
have a recreational account for various websites; and a throwaway account for potential spam links. This way, if someone hacks your work account, all of your personal emails are still safe.
2. Create a Unique Password
Going along with the multiple account idea, you should also have an entirely unique password for each of your email accounts. Even if you decide to keep one “master” email account, make sure that its password is 100% unique.
3. Beware Of Phishing Scams
When dealing with a particular company or product that requires account information, have you ever seen the following message: “Never give away your personal information. We will never ask you for your password.” When someone sends you an email asking you for your personal information, you know right away that it’s a trick.
4. Never Click Links In Emails
Whenever you see a link in an email, 99% of the time you should not click on it. The only exceptions are when you’re expecting a particular email, such as a forum registration link or game account activation email.
If you get an email from your bank or any other service (e.g., bill payments), always visit the website manually. No copy and paste. No direct clicking.
5. Do Not Open Unsolicited Attachments
Attachments are a tricky thing when it comes to email. If you’re expecting something from a buddy or an uncle, then sure, go ahead and open the attachment. Have a laugh at the funny photo they sent you. It’s all good when you know the person sending the attachment.
6. Scan For Viruses & Malware
If you open an email and it seems suspicious in any way, go ahead and run a malware
and virus scanner. Not every spam email will infect you with a virus and it may seem
like overkill to run a malware scanner every time you open a fishy email, but it’s better
to be safe than sorry.
7. Avoid Public Wi-Fi
And lastly, avoid checking your email when you're on a public Internet connection; public Wi-Fi can be extremely insecure. There are programs called "network sniffers" that run passively in the background on a hacker's device. The sniffer monitors all of the wireless data flowing through a particular network, and that data can be analyzed for important information such as your username and password.
3. Conclusion:
Email is very widely used for communication everywhere in today's e-world, but it is insecure in many ways, and someone may access our confidential information on the Internet. To protect our important information we have to avoid the risky practices listed above, and we should also adopt habits such as using separate email accounts and creating a unique password for each of them.
Internet of Things (IoT) – A Review of Architectural Elements, Applications & Challenges
Ramesh Mahadik
Director - MCA
Institute of Management and Computer Studies - MCA Institute
Mumbai, Maharashtra, India
ramesh.imcost@yahoo.com;rameshgm.iitb@gmail.com
ABSTRACT :
Conventionally, the internet comprises a network of interconnected computers. The Internet of Things (IoT) evolved this concept further, to a network of physical objects or "things" embedded with electronics, software, sensors and network connectivity which enables these objects to collect & exchange data. The Internet of Things
will help businesses gain efficiencies, harness intelligence from a wide range of
equipment, improve operations and increase customer satisfaction. IoT will also have a
profound impact on people’s lives. It will improve public safety, transportation and
healthcare with better information and faster communications of this information. The
concept is simple but powerful. If all objects in daily life are equipped with identifiers
and wireless connectivity, these objects could communicate with each other and be
managed by computers. This paper explores the technology framework, issues &
challenges and application areas of IoT.
Keywords: Internet of Things, IoT, embedded systems, sensors, smart objects,
interoperability
1. Introduction:
The IoT evolution heralds the birth of a new age, of a relationship between human
beings and smart objects. It is a fast-emerging ecosystem of connected devices with the
potential to deliver significant benefits across various industries, valued at trillions of dollars.
Organizations can use IoT to drive considerable cost savings by improving asset utilization,
enhancing process efficiency and boosting productivity. More importantly, IoT-driven
innovations are expected to increase return on investments, reduce time to market, and open
up additional sources of revenue from new business opportunities. IoT is driven by a
combination of forces, including the exponential growth of smart devices, availability of lowcost technologies (sensors, wireless networks, big data and computing power),huge
connectivity network and massive data volumes.
2. Research Methodology
An exhaustive study of various texts, research articles and materials pertaining to IoT
was carried out, with the aim of understanding its technology framework, issues, challenges
and application areas.
3. What is Internet of Things (IoT)?
The Internet of Things (IoT) is an interconnection of uniquely identifiable computing
devices within the existing Internet infrastructure. So, IoT is basically about connecting embedded systems to the internet.
3.1 Embedded System Architecture
Below, we describe the architecture of an embedded system.
Fig. 1: What is an Embedded System?
The heart of the embedded system is a microcontroller. The most important thing that differentiates these microcontrollers from microprocessors is their internal read/writable memory (EPROM). We can develop a lightweight program (in Assembly language or using Embedded C) and "burn" the program into the hardware.
In most embedded systems a single program, with several subroutines, is burnt into the device. So, unlike your PC, the microcontroller in an embedded system runs a single program indefinitely.
We can connect several input and output devices to these microcontrollers. This simple hardware includes LCD displays, buzzers, keypads/numpads, etc. Several sensors can also be connected through an interface, and the device can control higher power/voltage/current rating devices such as fans, motors and bulbs.
3.2 IoT Architecture
Fig. 2 : IoT Architecture
We defined IoT as embedded devices that can be connected through the internet. After understanding what an embedded system is, it is not too difficult to perceive the idea of IoT.
Consider a temperature sensor connected to an embedded board. Serial communication with the device can be established and temperature values can be read. We can also interface an LCD display to show the temperature. But how about checking the temperature from any part of the world using the internet? How about getting the temperature information on our mobile phone, or getting the temperature as a tweet after a certain interval?
If we can connect our embedded device to the internet, we can get the sensor information online, where it can be viewed on a wide range of devices, including tablets and mobiles. We can also control devices over the internet. We can have several home appliances connected to our embedded system, and the embedded system connected to the internet with a unique IP address. We can then instruct the device to turn certain peripheral devices on or off by issuing the instruction online.
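As a sketch of the idea of putting a sensor reading online, the snippet below serves a simulated temperature value over HTTP using only Python's standard library; on a real board, read_temperature() would talk to the sensor interface instead of returning a random value. The port and the JSON field name are illustrative assumptions.

# Sketch: expose a (simulated) temperature reading over HTTP.
# On real hardware, read_temperature() would query the sensor interface.
import json
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_temperature() -> float:
    return 25.0 + random.uniform(-0.5, 0.5)   # stand-in for a real sensor read

class SensorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"temperature_c": round(read_temperature(), 2)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), SensorHandler).serve_forever()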
So the Internet of Things, or IoT, is an architecture comprising specialized hardware boards, software systems, web APIs and protocols, which together create a seamless environment that allows smart embedded objects to be connected to the internet, so that sensor data can be accessed and control systems can be triggered over the internet.
Devices can be connected to the internet by various means, such as WiFi or Ethernet. Furthermore, devices need not be connected to the internet independently; rather, a cluster of devices can be created (for example, a sensor network).
So, IoT is basically the internet connectivity of smart objects and embedded systems, which can be connected to external hardware, while mobiles, tablets, laptops and PCs act as the remote control/access centers of IoT.
3.3 IoT - A combinatorial technology framework
From a technical perspective, IoT is not a single novel technology but a combination of several complementary technical capabilities which help us to bridge the gap between the physical and virtual worlds. These capabilities include:
3.3.1 Communication and cooperation:
Objects have the ability to network with Internet resources or even with each other, to
make use of data and services and update their state. Wireless technologies such as GSM, WiFi, Bluetooth and various other wireless networking standards currently under development,
particularly those relating to Wireless Personal Area Networks (WPANs), are of primary
relevance here.
3.3.2 Addressability:
Within an Internet of Things, objects can be located and addressed via discovery, lookup or name services, and hence remotely interrogated or configured.
3.3.3 Identification:
Objects are uniquely identifiable. RFID, NFC (Near Field Communication) and optically readable bar codes are examples of technologies with which even passive objects that do not have built-in energy resources can be identified (with the aid of a "mediator" such as an RFID reader or mobile phone). Identification enables objects to be linked to information associated with the particular object, which can be retrieved from a server.
3.3.4 Sensing:
Objects collect information about their surroundings with sensors, record it, forward it or
react directly to it.
3.3.5 Actuation:
Objects contain actuators to manipulate their environment. Such actuators can be used to remotely control real-world processes via the Internet.
3.3.6 Embedded information processing:
Smart objects feature a processor or microcontroller, plus storage capacity. These
resources can be used to process and interpret sensor information, or to give products a
“memory” of how they have been used.
3.3.7 Localization:
Smart things are aware of their physical location, or can be located. GPS or the mobile
phone network are some of the suitable technologies to achieve this.
3.3.8 User interfaces:
Smart objects can communicate with people in an appropriate manner (either directly or
indirectly, for example via a smartphone). Innovative interaction paradigms are relevant here,
such as tangible user interfaces, flexible polymer-based displays, and voice and image recognition methods.
4. IoT Application Areas
The Internet of Things holds a great potential across a wide variety of industry verticals
which are as follows:
4.1 Smart Homes:
Home automation or smart homes can be described as introduction of technology within
the home environment to provide convenience, comfort, security and energy efficiency to its
occupants. The system allows the user to control appliances and lights in their home from a smart phone or PC anywhere in the world through an internet connection. It also allows the user to control units within their home from a wireless remote.
4.2 Health Care:
Health care applications include the potential for remote patient monitoring using smart
electronic devices, allowing patients and their doctors to obtain real-time access to health data.
This is expected to lead to vast improvements in the quality of care, better health outcomes,
and significantly lower costs.
4.3 Transportation:
Transportation applications will include not just the rapidly emerging self-driving and
connected vehicles, but also the ability to develop “intelligent” transportation infrastructure
from roads to airports to parking garages.
4.4 Energy:
Applications include smart metering, other “smart grid” technologies, and the ability to
drive greater efficiencies in both energy production and consumption.
4.5 Manufacturing:
Sensor networks and smart devices will drive major improvements throughout the
manufacturing process based on process improvements such as increased visibility into
manufacturing processes that better inform decision-making, improved automation,
augmented energy management, increased ability for proactive maintenance, and a better-connected supply chain.
4.6 Pharmaceutical industry:
For pharmaceutical products, security and safety are of utmost importance. In the IoT paradigm, attaching smart labels to drugs, tracking them through the supply chain and monitoring their status with sensors has many potential benefits. For example, items requiring specific storage conditions, e.g. maintenance of a cool chain, can be continuously monitored and discarded if the conditions were violated during transport. Drug tracking allows for the detection of counterfeit products and keeps the supply chain free of fraudsters.
4.7 Government:
The Internet of Things will have a major impact in the public sector, from defense and
emergency service applications to driving improvements in service delivery and
responsiveness to people's needs.
5. IoT Issues & Challenges:
Even though IoT offers tremendous potential to businesses, organizations will have to overcome numerous issues and challenges to streamline IoT's growth and master it. The key IoT hurdles to overcome are:
5.1 A lack of standards and interoperable technologies:
The sheer number of vendors, technologies and protocols used by each class of smart
devices inhibits interoperability. The lack of consensus on how to apply emerging standards
and protocols to allow smart objects to connect and collaborate makes it difficult for
organizations to integrate applications and devices that use different network technologies and
operate on different networks. Further, organizations need to ensure that smart devices can
interact and work with multiple services.
5.2 Data and information management issues:
Routing, capturing, analyzing and using the insights generated by huge volumes of IoT
data in timely and relevant ways is a huge challenge with traditional infrastructures. The sheer
magnitude of the data collected will require sophisticated algorithms that can sift, analyze and
deliver value from data. As more devices enter the market, more data silos are formed,
creating a complex network of connections between isolated data sources.
5.3 Privacy and security concerns:
Deriving value from IoT depends on the ability of organizations to collect, manage and
mine data. Securing such data from unauthorized use and attacks will be a key concern.
Similarly, with many devices used for personal activities, many users might not be aware of
the types of personally identifiable data being collected, raising serious privacy concerns. And
because most devices involve minimal human interference, organizations need to be
concerned about hacking and other criminal abuse. A far bigger potential for risk in the future
is a security breach or a malfunctioning device that induces catastrophic failures in the IoT
ecosystem.
5.4 Organizational inability to manage IoT complexities:
While IoT offers tremendous value, tapping into it will demand a whole new level of
systems and capabilities that can harness the ecosystem and unlock value for organizations.
For instance, making sense of the flood of data generated by sensors every millisecond will
require strong data management, storage and analytics capabilities. Similarly, policy makers
will need to address data, security and privacy concerns. Organizations will also need to
develop skills to preempt potential component failures and replacements, using preventive
servicing and maintenance practices to ensure business operations run effectively and
efficiently.
5.5 Spectrum & Bandwidth
IoT will comprise a huge network of smart objects embedded with electronics,
software, sensors and network connectivity which will enable these objects to collect &
exchange data. Organizations will have to ensure that these sensor-enabled and network-aware
devices are able to transmit their data in a manner that uses constrained resources efficiently.
6. Conclusion:
The Internet of Things promises to deliver a steep change in individuals’ quality of life
and enterprises’ productivity. Through a widely distributed, intelligent network of smart
devices, the IoT has the potential to enable extensions and enhancements to fundamental
services in transportation, logistics, security, utilities, education, healthcare and other areas,
while providing a new ecosystem for application development. A concerted effort is required
to move the industry beyond the early stages of market development towards maturity, driven
by common understanding of the distinct nature of the opportunity.
To realize the full potential from IoT applications, technology will need to continue to
evolve, providing lower costs and more robust data analytics. In almost all settings, IoT
systems raise questions about data security and privacy. And in most organizations, taking
advantage of the IoT opportunity will require leaders to truly embrace data-driven decision
making.
Edge Detection Methods in Image Processing
Tripti Dongare-Mahabare
Pratibha College of Commerce & Computer Studies, Chinchwad Pune
E-mail : tru.neel@gmail.com
ABSTRACT :
Edge detection is an important task in image processing. It is a main tool in pattern
recognition, image segmentation, and scene analysis. It could detect the variation of
gray levels, but it is sensitive to noise. Edge detection is a process that detects the
presence and location of edges constituted by sharp changes in intensity of an image.
Edge detection is used for image segmentation and data extraction in areas such as
image processing, computer vision, and machine vision. Edge detection of an image
significantly reduces the amount of data and filters out useless information, while
preserving the important structural properties in an image. It works by detecting
discontinuities in brightness. Common edge detection algorithms include Sobel, Canny,
Prewitt, Roberts, and fuzzy logic methods. The software is implemented using MATLAB.
In this paper, after a brief introduction, an overview of different edge detection techniques is given, covering differential operator methods such as the Sobel operator, Prewitt's technique and Roberts' technique, as well as the morphological edge detection technique.
Keywords: Edge detectors, Image Processing, Image segmentation, Pattern recognition,
Object Recognition
Introduction:
An edge detector is basically a high-pass filter that can be applied to extract the edge points in an image. An edge in an image is a contour across which the brightness of the image changes abruptly. In image processing, an edge is often interpreted as one class of singularities. In a function, singularities can be characterized easily as discontinuities where the gradient approaches infinity. However, image data is discrete, so edges in an image are often defined as the local maxima of the gradient. Edges widely exist between objects and backgrounds, between objects and objects, and between primitives and primitives. The edge of an object is reflected in a discontinuity of the gray level. Therefore, the general method of edge detection is to study the changes of gray level around a single image pixel and to use the first-order or second-order variation in the edge neighborhood to detect the edge; this is referred to as the local operator edge detection method. Edge detection is mainly the measurement, detection and location of changes in image gray level. Image edges are the most basic features of an image; when we observe objects, the clearest parts we see are the edges. The first-order derivative corresponds to a gradient, so the first-order derivative operator is the gradient operator. There are many ways to perform edge detection; however, the majority of the different methods can be grouped into two major categories:
(a) Gradient: the gradient method detects the edges by looking for the maximum and minimum in the first derivative of the image.
(b) Laplacian: the Laplacian method searches for zero crossings in the second derivative of the image to find edges.
Edge Detection Techniques
There are different edge detection techniques available; the ones compared here are as follows:
(1) Sobel Edge Detection Operator:
The Sobel operator is one of the pixel-based edge detection algorithms. It detects edges by calculating partial derivatives in 3 x 3 neighborhoods. The Sobel edge detection operation extracts all of the edges in an image, regardless of direction. The reason for using the Sobel operator is that it is relatively insensitive to noise and has a relatively small mask. Figure 1 shows the convolution kernels; one kernel is simply the other rotated by 90°. These kernels are designed to respond to edges running vertically and horizontally relative to the pixel grid, one kernel for each of the two perpendicular orientations. The kernels can be applied separately to the input image to produce separate measurements of the gradient component in each orientation, which can be combined to find the absolute magnitude of the gradient at each point.
The partial derivatives in the x and y directions are given as follows:
Sx = {f(x + 1, y – 1) + 2f(x + 1, y) + f(x + 1, y + 1)} – {f(x – 1, y – 1) + 2f(x – 1, y) + f(x – 1, y + 1)}   (1)
Sy = {f(x – 1, y + 1) + 2f(x, y + 1) + f(x + 1, y + 1)} – {f(x – 1, y – 1) + 2f(x, y – 1) + f(x + 1, y – 1)}   (2)
The gradient magnitude at each pixel is calculated using:
G(x, y) = √(Sx² + Sy²)
Fig. 1
The Sobel operation is implemented as the sum of two directional edge enhancement operations. The resulting image appears as a unidirectional outline of the objects in the original image: constant-brightness regions become black, while regions of changing brightness become highlighted. The Sobel operators have the advantage of providing both a differencing and a smoothing effect. Because derivatives enhance noise, the smoothing effect is a particularly attractive feature of the Sobel operators [2].
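The paper implements its detectors in MATLAB; as an equivalent sketch, the Sobel gradients of equations (1) and (2) and the gradient magnitude can be computed in Python with NumPy and SciPy as follows. The test image is an assumption for demonstration.

# Sketch: Sobel edge magnitude, equivalent to equations (1)-(2).
# Requires numpy and scipy; `image` is any 2-D grayscale array.
import numpy as np
from scipy.signal import convolve2d

KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)   # horizontal-derivative kernel
KY = KX.T                                   # vertical-derivative kernel

def sobel_magnitude(image: np.ndarray) -> np.ndarray:
    sx = convolve2d(image, KX, mode="same", boundary="symm")
    sy = convolve2d(image, KY, mode="same", boundary="symm")
    return np.sqrt(sx ** 2 + sy ** 2)

if __name__ == "__main__":
    img = np.zeros((8, 8))
    img[:, 4:] = 1.0                        # vertical step edge
    print(sobel_magnitude(img).round(1))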
(2) Roberts Cross Operator:
The Roberts Cross operator performs a simple and quick 2-D spatial gradient measurement on an image. The operator consists of a pair of 2 x 2 convolution kernels, as shown in Figure 2. These kernels are designed to respond maximally to edges running at 45° to the pixel grid, one kernel for each of the two perpendicular orientations. The kernels can be applied separately to the input image to produce separate measurements of the gradient component in each orientation; these can then be combined to find the absolute magnitude of the gradient at each point, given by:
|G| = √(Gx² + Gy²)
although typically, an approximate magnitude is computed using:
|G| = |Gx| + |Gy|
Fig 2
(3) Prewitt Detection:
The Prewitt operator is similar to the Sobel operator and is used for detecting vertical and horizontal edges in images [3]. The Prewitt edge detector is an appropriate way to estimate the magnitude and orientation of an edge. The Prewitt operator is limited to eight possible orientations [3], although most direct orientation estimates are not exactly accurate. The Prewitt operator is estimated in the 3 x 3 neighborhood for eight directions. All eight masks are calculated, and the one with the largest modulus is then selected.
Fig.3
(4) Laplacian of Gaussian:
The Laplacian, as a second-derivative operator, is very sensitive to noise. The Laplacian is a 2-D isotropic measure of the 2nd spatial derivative of an image. The Laplacian of an image
Laplacian is often applied to an image that has first been smoothed with something
approximating a Gaussian smoothing filter in order to reduce its sensitivity to noise, and hence
the two variants will be described together here. The operator normally takes a single gray
level image as input and produces another gray level image as output. The Laplacian L(x,y) of
an image with pixel intensity values I(x,y) is given by:
L(x, y) = ∂²I/∂x² + ∂²I/∂y²
This can be calculated using a convolution filter. There are three commonly used discrete approximations to the Laplacian filter.
In fact, since the convolution operation is associative, we can convolve the Gaussian smoothing filter with the Laplacian filter first of all, and then convolve this hybrid filter with the image to achieve the required result. Doing things this way has the advantage that the LoG (`Laplacian of Gaussian') kernel can be precalculated in advance, so only one convolution needs to be performed on the image at run-time. The 2-D LoG function centered on zero and with Gaussian standard deviation σ has the form:
LoG(x, y) = –(1 / (πσ⁴)) · [1 – (x² + y²) / (2σ²)] · e^(–(x² + y²) / (2σ²))
It can also be represented as a kernel for convolution.
LoG effect on an edge: the effect of the filter is to highlight the edges in an image (example figure).
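As a sketch of applying the hybrid LoG filter described above, SciPy's gaussian_laplace performs the Gaussian smoothing and the Laplacian in a single pass; zero crossings of the response mark the edges. The test image and the value of sigma are illustrative assumptions.

# Sketch: Laplacian-of-Gaussian response via SciPy's combined filter.
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_response(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Smooth with a Gaussian of the given sigma and apply the Laplacian."""
    return gaussian_laplace(image.astype(float), sigma=sigma)

if __name__ == "__main__":
    img = np.zeros((16, 16))
    img[:, 8:] = 1.0                        # step edge
    resp = log_response(img, sigma=1.5)
    # The response changes sign across the edge (zero crossing).
    print(np.sign(resp[8, 6:11]))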
Conclusion:
Edge detection is the primary step in identifying an image object. In this paper, many edge detection methods, such as the Sobel operator technique, the Roberts technique, the Prewitt technique, and the Laplacian of Gaussian technique, are discussed. Choosing a suitable method for edge detection depends on the environmental conditions. Each technique has its
own advantages and disadvantages, but mathematical morphology is a better technique than the differential methods. This paper dealt with the study of gradient-based and Laplacian-based edge detection techniques. It seems that although the Laplacian does better for some features (i.e. the fins), it still suffers from mismapping some of the lines.
REFERENCES:
[1] Chen, L., “Laplacian Embedded Regression for Scalable Manifold Regularization”,
Neural Networks and Learning Systems, IEEE Transactions, Volume: 23, pp. 902 –
915, June 2012.
[2] Nick Kanopoulos, et.al. ; “Design of an Image Edge Detection Filter using the Sobel
Operator”, Journal of Solid State Circuits,IEEE, vol. 23, Issue: 2, pp. 358-367, April
1988.
[3] Seif, A.,et.al. ;“A hardware architecture of Prewitt edge detection”, Sustainable
Utilization and Development in Engineering and Technology (STUDENT), 2010 IEEE
Conference, Malaysia, pp. 99 – 101, 20-21Nov. 2010.
[4] N. Senthilkumaran, R. Rajesh, "Edge Detection Techniques for Image Segmentation and
A Survey of Soft Computing Approaches", International Journal of Recent Trends in
Engineering, Vol. 1, No. 2, PP.250-254, May 2009.
[5] T.G. Smith Jr., et.al. ; “Edge detection in images using Marr-Hildreth filtering
techniques”, Journal of Neuroscience Methods,Volume 26, Issue 1, pp. 75 – 81,
November 1988.
[6] Wenshuo Gao, et al., "An improved Sobel edge detection", Computer Science and Information Technology (ICCSIT), 2010 3rd IEEE International Conference, China, Volume: 5, pp. 67-71, 9-11 July 2010.
[7] Rafael C. Gonzalez, Richard E. Woods, Digital Image Processing, Second Edition.
[8] Iasonas Kokkinos, and Petros Maragos(2009),”Synergy between Object Recognition an
image segmentation using Expectation and Maximization Algorithm”., IEEE Trans. on
Pattern Analysis and Machine Intelligence (PAMI), Vol. 31(8), pp. 1486-1501, 2009.
[9] Balkrishan Ahirwal, Mahesh Khadtare and Rakesh Mehta, “FPGA based system for
Color Space Transformation RGB to YIQ and YCbCr.” International Conference on
Intelligent and Advanced Systems 2007
[10] Xiaoliang Qian, Lei Guo, Bo Yu, "An Adaptive Image Filter Based on Decimal Object Scale for Noise Reduction and Edge Detection", IEEE Trans. on Image Processing, 2010.
“Internet of Things (IoT): Revolution of Internet for Smart Environment”
Mr. Rahul Sharad Gaikwad
Executive MBA (Project Management), MCA (Engg), M.A. I (History), DCS
Big Data Hadoop Administrator (Lead) at Opus Consulting Solutions, Pune. India
Email id: rahulgaikwad1986@gmail.com ; rahulgaikwad86@yahoo.co.in
ABSTRACT :
The Internet will continue to become ever more central to everyday life and work, but
there is a new but complementary vision for an Internet of Things (IoT), which will
connect billions of objects – ‘things’ like sensors, monitors, and RFID devices – to the
Internet at a scale that far outstrips use of the Internet as we know it, and will have
enormous social and economic implications.
The Internet of Things (IoT) describes a worldwide network of intercommunicating
devices. Internet of Things (IoT) has reached many different players and gained further
recognition. Out of the potential Internet of Things application areas, Smart Cities (and
regions), Smart Car and mobility, Smart Home and assisted living, Smart Industries,
Public safety, Energy & environmental protection, Agriculture and Tourism as part of a
future IoT Ecosystem have acquired high attention.
This paper provides a broad overview of the topic, its current status, controversy and
forecast to the future. Some of IoT applications are also covered.
Keywords: Internet of Things (IoT), Revolution of the Internet, Smart City
Introduction:
The Internet of Things (IoT) is the network of physical objects or "things" embedded
with electronics, software, sensors, and network connectivity, which enables these objects to
collect and exchange data. The Internet of Things allows objects to be sensed and controlled
remotely across existing network infrastructure, creating opportunities for more direct
integration between the physical world and computer-based systems, and resulting in
improved efficiency, accuracy and economic benefit. Each thing is uniquely identifiable
through its embedded computing system but is able to interoperate within the
existing Internet infrastructure. Experts estimate that the IoT will consist of almost 50 billion
objects by 2020.
Internet of Things (IoT) is a concept and a paradigm that considers pervasive presence
in the environment of a variety of things/objects that through wireless and wired connections
and unique addressing schemes are able to interact with each other and cooperate with other
things/objects to create new applications/services and reach common goals. It envisions a world where the real, the digital and the virtual converge to create smart environments that make energy, transport, cities and many other areas more intelligent. The goal of the Internet of Things is to
enable things to be connected anytime, anyplace, with anything and anyone ideally using any
path/network and any service.
Internet of Things is a new revolution of the Internet. Objects make themselves
recognizable and they obtain intelligence by making or enabling context related decisions
thanks to the fact that they can communicate information about themselves. They can access
information that has been aggregated by other things, or they can be components of complex
services.
• History of the Internet of Things:
The concept of the Internet of Things first became popular in 1999, through the Auto-ID Center at MIT and related market-analysis publications. Radio-frequency identification
(RFID) was seen by Kevin Ashton (one of the founders of the original Auto-ID Center) as a
prerequisite for the Internet of Things at that point. If all objects and people in daily life were
equipped with identifiers, computers could manage and inventory them. Besides using RFID,
the tagging of things may be achieved through such technologies as near field communication,
barcodes, QR codes and digital watermarking.
Internet of Things Vision
New types of applications can involve the electric vehicle and the smart house, in
which appliances and services that provide notifications, security, energy-saving, automation,
telecommunication, computers and entertainment are integrated into a single ecosystem with a
shared user interface. Obviously, not everything will be in place straight away, but as the
technology is developed, demonstrated, tested and deployed, we will move much nearer to
implementing smart environments by 2020. In the future, computation, storage and
communication services will be highly pervasive and distributed among people, smart objects,
machines, platforms and the surrounding space (e.g., wireless/wired sensors, M2M devices, etc.).
The "communication language" will be based on interoperable protocols, operating in
heterogeneous environments and platforms. IoT in this context is a generic term, and all
objects can play an active role thanks to their connection to the Internet, creating smart
environments where the role of the Internet has changed.
The convergence creates an open, global network connecting people, data, and things.
This convergence leverages the cloud to connect intelligent things that sense and transmit a
broad array of data, helping to create services that would not be possible without this level of
connectivity and analytical intelligence. The use of platforms is being driven by transformative
technologies such as cloud, things, and mobile. Networks of things connect things globally
and maintain their identity online. Mobile allows connection to this global infrastructure
anytime, anywhere. The result is a globally accessible network of things, users, and
consumers, who can create businesses, contribute content, and generate and purchase
new services.
Internet of Things Common Definition:
The IERC definition states that the Internet of Things (IoT) is "a dynamic global network
infrastructure with self-configuring capabilities based on standard and interoperable
communication protocols where physical and virtual “things” have identities, physical
attributes, and virtual personalities and use intelligent interfaces, and are seamlessly integrated
into the information network.”
Internet of Everything:
The IoT is not a single technology; it is a concept in which most new things are connected
and enabled: street lights are networked, and capabilities such as embedded sensors, image
recognition, augmented reality and near field communication are integrated into situational
decision support, asset management and new services.
Fig.1 : Internet of everything.
By 2020, over 30 billion connected things are forecast, with over 200 billion having
intermittent connections. Key technologies here include embedded sensors, image recognition
and NFC. By 2015, in more than 70% of enterprises, a single executive will oversee all
Internet-connected things. This becomes the Internet of Everything. The Internet is no longer
only a network of computers; it has evolved into a network of devices of all types and sizes:
vehicles, smartphones, home appliances, toys, cameras, medical instruments and industrial
systems, all connected, all communicating and sharing information all the time.
Applications and Scenarios of Relevance:
The major objectives for IoT are the creation of smart environments/spaces and self-aware
things (for example: smart transport, products, cities, buildings, rural areas, energy, health,
living, etc.) for climate, food, energy, mobility, digital society and health applications
(see Fig. 2).
Fig. 2 : IoT in the context of smart environments and applications
At the city level, the integration of technology and quicker data analysis will lead to a
more coordinated and effective civil response to security and safety (law enforcement and
blue-light services), and to higher demand for outsourced security capabilities. At the building
level, security technology will be integrated into systems and deliver a return on investment to
the end-user by leveraging the technology in multiple applications (HR, time and attendance,
customer behaviour in retail, etc.). There will be an increase in the development of "smart"
vehicles which have low (and possibly zero) emissions. They will also be connected to
infrastructure. Additionally, auto manufacturers will adopt more use of "smart" materials.
The key focus will be to make the city smarter by optimizing resources, feeding its
inhabitants through urban farming, reducing traffic congestion, providing more services to
allow for faster travel between home and various destinations, and increasing accessibility to
essential services. It will become essential for intelligent security systems to be implemented
at key junctions in the city. Various types of sensors will have to be used to make this a
reality. Sensors are moving from "smart" to "intelligent". Biometrics is expected to be
integrated with CCTV at highly sensitive locations around the city. In addition, smart cities in
2020 will require real-time auto-identification security systems. A range of smart products and
concepts will significantly impact the power sector. For instance, sensors in the home will
control lights, turning them off periodically when there is no movement in the room. Home Area
Networks will enable utilities or individuals to control when appliances are used, resulting in a
greater ability for the consumer to determine when they want to use electricity, and at what
price.
Application Areas:
In the last few years the evolution of markets and applications, and therefore their
economic potential and their impact in addressing societal trends and challenges for the next
decades, has changed dramatically. Societal trends are grouped as: health and wellness,
transport and mobility, security and safety, energy and environment, and communication and
e-society. These trends create significant opportunities in the markets of consumer electronics,
automotive electronics, medical applications, communication, etc.
Potential applications of the IoT are numerous and diverse, permeating practically
all areas of everyday life of individuals (the so-called "smart life"), enterprises, and society as
a whole. The 2010 Internet of Things Strategic Research Agenda (SRA) identified and
described the main Internet of Things applications, which span numerous "vertical" domains:
smart energy, smart health, smart buildings, smart transport, smart living and smart city. The
vision of a pervasive IoT requires the integration of the various vertical domains (mentioned
before) into a single, unified, horizontal domain, which is often referred to as smart life. The
IoT application domains identified by IERC are based on inputs from experts, surveys and
reports. IoT applications cover "smart" environments/spaces in domains such as:
transportation, building, city, lifestyle, retail, agriculture, factory, supply chain, emergency,
health care, user interaction, culture and tourism, environment and energy.
The updated list presented below includes examples of IoT applications in different
domains, and shows why the Internet of Things is one of the strategic technology trends
for the next 5 years.
Cities:
• Smart Parking: Monitoring of parking space availability in the city (see the telemetry sketch after this list).
• Structural Health: Monitoring of vibrations and material conditions in buildings.
• Traffic Congestion: Monitoring of vehicles to optimize driving and walking routes.
• Smart Lighting: Intelligent and weather-adaptive lighting in street lights.
• Intelligent Transportation Systems: Smart roads and intelligent highways.
Environment:
• Forest Fire Detection: Monitoring of combustion gases and pre-emptive fire conditions to define alert zones.
• Air Pollution: Control of CO2 emissions of factories and of pollution emitted by cars.
• Earthquake Early Detection: Distributed control in specific places of tremors.
Water:
• Water Quality: Study of water suitability in rivers and the sea.
• Water Leakages: Detection of liquid presence outside tanks and of pressure variations.
• River Floods: Monitoring of water level variations in rivers and dams.
Energy (Smart Grid, Smart Metering):
• Smart Grid: Energy consumption monitoring and management.
• Tank Level: Monitoring of water, oil and gas levels in storage tanks and cisterns.
• Water Flow: Measurement of water pressure in water transportation systems.
Retail:
• Supply Chain Control: Monitoring of storage conditions along the supply chain and product tracking for traceability purposes.
• Intelligent Shopping Applications: Getting advice at the point of sale according to customer habits, preferences, presence of allergenic components or expiry dates.
• Smart Product Management: Control of rotation of products on shelves and in warehouses to automate restocking processes.
Logistics:
• Item Location: Search of individual items in big warehouses or harbours.
• Storage Incompatibility Detection: Warning emission on containers storing
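To make the monitoring idea in the Smart Parking item concrete, below is a minimal, self-contained Python sketch of how a batch of parking-bay sensor readings might be turned into a per-zone availability summary for a city dashboard. It is purely illustrative: the sensor identifiers, payload fields and zone names are hypothetical assumptions, not part of any specific IoT platform.

```python
import json
from datetime import datetime, timezone

# Hypothetical readings from parking-bay occupancy sensors;
# the field names are illustrative only.
readings = [
    {"sensor_id": "P-001", "zone": "Market Road", "occupied": True},
    {"sensor_id": "P-002", "zone": "Market Road", "occupied": False},
    {"sensor_id": "P-003", "zone": "Station Square", "occupied": False},
    {"sensor_id": "P-004", "zone": "Station Square", "occupied": True},
    {"sensor_id": "P-005", "zone": "Station Square", "occupied": True},
]

def availability_by_zone(readings):
    """Count free and total bays per zone from raw sensor readings."""
    summary = {}
    for r in readings:
        zone = summary.setdefault(r["zone"], {"free": 0, "total": 0})
        zone["total"] += 1
        if not r["occupied"]:
            zone["free"] += 1
    return summary

# Build the message a city dashboard or mobile app might consume.
payload = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "availability": availability_by_zone(readings),
}
print(json.dumps(payload, indent=2))
```

In a real deployment the readings would arrive over a constrained messaging protocol such as MQTT or CoAP rather than an in-memory list, but the aggregation step would look much the same.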
IoT Applications:
The following sections describe several important IoT applications.
1. Smart Cities
By 2020 we will see the development of mega-city corridors and networked, integrated
and branded cities. With more than 60 percent of the world population expected to live in
urban cities by 2025, urbanization as a trend will have diverging impacts and influences
on future personal lives and mobility. Rapid expansion of city borders, driven by
increases in population and infrastructure development, will force city borders to
expand outward and engulf the surrounding daughter cities to form mega cities, each
with a population of more than 10 million. By 2023, there will be 30 mega cities
globally. This will lead to the evolution of smart cities with eight smart features,
including Smart Economy, Smart Buildings, Smart Mobility, Smart Energy, Smart
Information Communication and Technology, Smart Planning, Smart Citizens and Smart
Governance. There will be about 40 smart cities globally by 2025.
2. Smart Energy and the Smart Grid
Future energy grids are characterized by a high number of distributed small and medium
sized energy sources and power plants which may be combined virtually ad hoc into
virtual power plants; moreover, in the case of energy outages or disasters, certain areas
may be isolated from the grid and supplied from within by internal energy sources such
as photovoltaics on the roofs, block heat and power plants or the energy storage of a
residential area ("islanding"). The developing Smart Grid is expected to implement a
new concept of transmission network which is able to efficiently route the energy that is
produced from both concentrated and distributed plants to the final user with high
security and quality-of-supply standards. The Smart Grid is therefore expected to be the
implementation of a kind of "Internet" in which the energy packet is managed similarly
to the data packet, across routers and gateways which can autonomously decide the best
pathway for the packet to reach its destination with the best integrity levels. In this
respect the "Internet of Energy" concept is defined as a network infrastructure based on
standard and interoperable communication transceivers, gateways and protocols that will
allow a real-time balance between local and global generation and storage capability
and the energy demand.
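As a rough illustration of the real-time balance between local generation, storage and demand described above, the short Python sketch below decides, for each residential area, whether it can run islanded on its own sources or must import from (or export to) the wider grid. All area names and figures are hypothetical, and a real Smart Grid controller would of course be far more sophisticated.

```python
# Hypothetical snapshot of local generation and demand, in kW.
areas = {
    "Area-A": {"local_generation": 120.0, "demand": 95.0},
    "Area-B": {"local_generation": 40.0,  "demand": 110.0},
    "Area-C": {"local_generation": 75.0,  "demand": 75.0},
}

def dispatch(areas):
    """Decide per area whether to island, export surplus or import from the grid."""
    decisions = {}
    for name, a in areas.items():
        balance = a["local_generation"] - a["demand"]
        if balance > 0:
            # A surplus area could also island and store the excess locally.
            decisions[name] = f"export {balance:.1f} kW to the grid"
        elif balance < 0:
            decisions[name] = f"import {-balance:.1f} kW from the grid"
        else:
            decisions[name] = "balanced; can operate islanded"
    return decisions

for area, decision in dispatch(areas).items():
    print(area, "->", decision)
```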
3. Smart Transportation and Mobility
The connection of vehicles to the Internet gives rise to a wealth of new possibilities and
applications which bring new functionalities to individuals and/or make transport easier
and safer. In this context the concept of the Internet of Vehicles (IoV), connected with the
concept of the Internet of Energy (IoE), represents a future trend for smart transportation
and mobility applications. At the same time, creating new mobile ecosystems based on
trust, security and convenience for mobile/contactless services and transportation
applications will ensure security, mobility and convenience for consumer-centric
transactions and services.
Cars should be able to organise themselves in order to avoid traffic jams and to optimise
drive energy usage. This may be done in coordination and cooperation with the
infrastructure of a smart city's traffic control and management system. Additionally,
dynamic road pricing and parking taxes can be important elements of such a system.
Furthermore, mutual communication between vehicles and with the infrastructure enables
new methods for considerably increasing traffic safety, thus contributing to a reduction
in the number of traffic accidents.
4. Smart Home, Smart Buildings and Infrastructure
The rise of Wi-Fi's role in home automation has primarily come about due to the
networked nature of deployed electronics, where electronic devices (TVs and AV receivers,
mobile devices, etc.) have started becoming part of the home IP network, and due to the
increasing rate of adoption of mobile computing devices (smartphones, tablets, etc.), see Fig. 3.
Fig. 3. Smart home platform
The networking aspects are bringing online streaming services or network playback,
while becoming a means to control the device functionality over the network. At the
same time, mobile devices ensure that consumers have access to a portable 'controller'
for the electronics connected to the network. Both types of devices can be used as
gateways for IoT applications. IoT applications using sensors to collect information
about operating conditions, combined with cloud-hosted analytics software that analyses
disparate data points, will help facility managers become far more proactive about
managing buildings at peak efficiency.
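The paragraph above describes sensors feeding cloud-hosted analytics so that facility managers can run buildings at peak efficiency. The Python sketch below shows one very simple form such analytics could take: flagging rooms where HVAC energy is being spent while nobody is present. The room names, readings and threshold are invented for illustration and do not come from the paper.

```python
# Hypothetical hourly readings per room: occupancy flag and HVAC power draw in kW.
readings = [
    {"room": "Conference-1", "occupied": False, "hvac_kw": 3.2},
    {"room": "Conference-1", "occupied": True,  "hvac_kw": 3.4},
    {"room": "Lobby",        "occupied": False, "hvac_kw": 0.4},
    {"room": "Office-2F",    "occupied": False, "hvac_kw": 2.8},
]

WASTE_THRESHOLD_KW = 1.0  # assumed cut-off for a "significant" unoccupied load

def wasted_energy_alerts(readings, threshold=WASTE_THRESHOLD_KW):
    """Return rooms drawing significant HVAC power while unoccupied."""
    return sorted({
        r["room"] for r in readings
        if not r["occupied"] and r["hvac_kw"] >= threshold
    })

for room in wasted_energy_alerts(readings):
    print(f"Check {room}: HVAC running in an empty room")
```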
5. Smart Factory and Smart Manufacturing
The role of the Internet of Things is becoming more prominent in enabling access to
devices and machines in manufacturing systems. This evolution will allow IT to
penetrate further into digitized manufacturing systems. The IoT will connect the factory
to a whole new range of applications which run around production. These could range
from connecting the factory to the smart grid, sharing the production facility as a service,
or allowing more agility and flexibility within the production systems themselves. In this
sense, the production system could be considered one of many Internets of Things
(IoT), where a new ecosystem for smarter and more efficient production could be
defined.
6. Smart Health
The market for health monitoring devices is currently characterised by application-specific
solutions that are mutually non-interoperable and are made up of diverse
architectures. While individual products are designed to cost targets, the long-term goal
of achieving lower technology costs across current and future sectors will inevitably be
very challenging unless a more coherent approach is used.
7. Food and Water Tracking and Security
Food and fresh water are the most important natural resources in the world. Organic food
produced without addition of certain chemical substances and according to strict rules, or
food produced in certain geographical areas will be particularly valued. Similarly, fresh
water from mountain springs is already highly valued. In the future it will be very
important to bottle and distribute water adequately. This will inevitably lead to attempts
to forge the origin or the production process. Using IoT in such scenarios to securely
track food or water from the place of production to the consumer is one of the
important topics. This has already been introduced to some extent for beef. After the
"mad cow disease" outbreak in the late 20th century, some beef producers together
with large supermarket chains in Ireland have been offering "from pasture to plate"
traceability of each package of beef in an attempt to assure consumers that the meat
is safe for consumption. However, this is limited to certain types of food and enables
tracing back to the origin of the food only, without information on the production
process.
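As a toy illustration of the "from pasture to plate" traceability mentioned above, the Python sketch below chains custody records for a package of beef so that anyone scanning the package identifier can see every step back to the farm. The identifiers, stages and actors are hypothetical; real systems would combine RFID or barcode scans with a shared database or ledger.

```python
# Hypothetical custody records keyed by package identifier.
trace_records = {
    "PKG-2016-0001": [
        {"stage": "farm",      "actor": "Green Pasture Farm",    "date": "2016-01-02"},
        {"stage": "abattoir",  "actor": "Regional Abattoir Ltd", "date": "2016-01-20"},
        {"stage": "packaging", "actor": "FreshPack Plant 3",     "date": "2016-01-21"},
        {"stage": "retail",    "actor": "City Supermarket #12",  "date": "2016-01-23"},
    ]
}

def trace(package_id, records):
    """Return the custody chain for a package, oldest step first."""
    chain = records.get(package_id, [])
    return sorted(chain, key=lambda step: step["date"])

for step in trace("PKG-2016-0001", trace_records):
    print(f'{step["date"]}  {step["stage"]:<10} {step["actor"]}')
```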
8. Social Networks and IoT
From a user perspective, abstract connectedness and real-world interdependencies are
not easily captured mentally. What users do easily relate to, however, is the social
connectedness of family and friends. User engagement in IoT awareness could therefore
build on the social network paradigm, where users interact with the real-world entities of
interest via social networks. This combination leads to interesting and popular
applications, which will become more sophisticated and innovative. Future research
directions in IoT applications should consider the social dimension, based on integration
with social networks, which can be seen as another bundle of information streams. Note
also that social networks are characterized by the massive participation of human users.
The use of the social network metaphor for the interactions between Internet-connected
objects has recently been proposed, and it could enable novel forms of M2M interactions
and related applications.
Conclusions:
• Many devices are no longer protected by well-known mechanisms such as firewalls and can be attacked directly via the wireless channel.
• In addition, devices can be stolen and analysed by attackers to reveal their key material.
• Secure exchange of data is required between IoT devices and their consumers (a minimal integrity-protection sketch follows this list).
• Combining data from different sources is the other major issue, since there is no trust relationship between data providers and data consumers, at least not from the very beginning.
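To make the "secure exchange of data" point concrete, the following is a minimal Python sketch of one common building block: attaching an HMAC tag to a sensor payload using a key shared between the device and its consumer, so the consumer can detect tampering in transit. Key distribution, replay protection and encryption are deliberately left out, and the key and payload shown are illustrative values only.

```python
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key-not-for-production"  # assumed pre-shared device key

def sign(payload: dict, key: bytes) -> dict:
    """Serialize the payload and attach an HMAC-SHA256 tag."""
    body = json.dumps(payload, sort_keys=True).encode("utf-8")
    tag = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {"body": body.decode("utf-8"), "tag": tag}

def verify(message: dict, key: bytes) -> bool:
    """Recompute the tag over the received body and compare in constant time."""
    expected = hmac.new(key, message["body"].encode("utf-8"), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

reading = {"device": "temp-42", "celsius": 21.5, "seq": 1001}
message = sign(reading, SHARED_KEY)
print("verified:", verify(message, SHARED_KEY))                    # True
message["body"] = message["body"].replace("21.5", "99.9")          # simulate tampering
print("verified after tampering:", verify(message, SHARED_KEY))    # False
```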
REFERENCES:
[1] Seshadri, A., Luk, M., Perrig, A., van Doorn, L., and Khosla, P. SCUBA: Secure code update by attestation in sensor networks. In WiSe '06: Proceedings of the 5th ACM Workshop on Wireless Security (2006), ACM.
[2] Aurelien Francillon, Claudio Soriente, Daniele Perito and Claude Castelluccia: On the
Difficulty of software based attestation of embedded devices. ACM Conference on
Computer And Communications Security (CCS), November 2009
[3] Benjamin Vetter, Dirk Westhoff: Code Attestation with Compressed Instruction Code.
IICS 2011: 170–181
[4] Trusted Computing Group (TCG) Specification. URL: http://www.trustedcomputinggroup
[5] Rodrigo Roman, Cristina Alcaraz, Javier Lopez, Nicolas Sklavos, Key management
systems for sensor networks in the context of the Internet of Things, Computers &
[6] Butun, I. and Sankar, R. “A brief survey of access control in Wireless Sensor Networks,”
Consumer Communications and Networking Conference (CCNC), 2011 IEEE , vol., no.,
[7] Youssou Faye, Ibrahima Niang, and Thomas Noël. A Survey of Access Control Schemes
Implementation of an Effective Learning Model for Improvement of the Teaching-Learning Process in Selected Schools
Prof. Anita Anandrao Patil
BVDU, YMIM, Karad, India
ap9420462670@gmail.com
Prof. A. V. Nikam, HOD (Comp.)
BVDU, YMIM, Karad, India
avnkrd@rediffmail.com
ABSTRACT :
The world is moving rapidly into digital media and information, and the role of ICT in
education is becoming more and more important in school education.
In the development of inclusive knowledge societies, the education and training sector is
mainly responsible for producing the skilled human resources required by education and
industry, as well as citizens who can participate in building a well-governed society. ICT
in turn can contribute to widening access to education, improving educational
management and addressing issues of quality.
ICT is seen as an important catalyst and accelerator for development, having the
ability to attract investment, create job opportunities, promote knowledge building and
sharing, facilitate innovation and contribute to good governance and more efficient and
transparent provision of public services. ICT facilitates inclusiveness by enabling citizens
anywhere to access information and knowledge. ICT together with education empowers
citizens to be aware of their rights and to participate actively in shaping public policy,
governance and development.
In this paper, the researcher has tried to focus on effective use of an ICT learning model
for education, along with ICT use in the teaching-learning process, the quality and
accessibility of education, and learning motivation in schools in rural areas. It is seen
that ICT use and the scholastic performance of government schools are poor compared
to private schools.
In rural areas, the use of ICT in education and training has become a priority over the
last decade. However, very few schools have achieved progress. Indeed, only a small
percentage of schools in some rural areas have achieved high levels of effective use of
ICT to support and change the teaching and learning process in various fields. Many are
still in the early phase of adoption of Information and Communication Technology.
Keywords: ICT, Rural Area, Learning Model, Scholastic Performance
Introduction:
In today's dynamic world the computer is a must. In this paper the researchers review the
empirical studies that connect EPL with changes in teaching practices and student learning.
This focus clearly limited the scope of the review, as few published studies have looked at the
impact of learning on teacher practice or student learning. However, the studies which have
been done clearly demonstrate that a learning community model can have a positive impact on
both teachers and students. Just as important, our act of interpreting the literature has led us to
draw conclusions that are significant for future research.
The traditional professional development activity is based on the premise that knowledge
and expertise are best generated by university researchers outside of the day-to-day work of
teaching. Through professional development, teachers acquire and then implement this
knowledge. In addition, the knowledge presented is usually advocated as a prescription for
better teaching. The learning model represents a fundamental shift away from this traditional
model of professional development. That is, "it is assumed that the knowledge teachers need
to teach well is generated when teachers treat their own classrooms and schools as sites for
intentional investigation at the same time that they treat the knowledge and theory produced
by others as generative material for interrogation and interpretation".
Objectives of the Research Paper:
The research work is concerned with the prime objective of identifying the various
problems, and solutions thereof, in school education using ICT. The major objectives of the
proposed research are:
1. To study the present scenario of ICT implementation in selected schools in Satara District of Maharashtra.
2. To identify the problems faced during ICT implementation in the selected schools.
3. To analyze the effectiveness of ICT implementation in the selected schools.
4. To develop a working model for efficient and effective ICT implementation in school education.
5. To suggest measures for effective implementation of ICT in schools.
Research Design:
The researcher has selected secondary schools from Satara district using the stratified
random sampling method; strata have been defined using the medium of the school (Marathi,
Urdu and English).
The Satara district consists of 11 talukas, namely Jaoli, Karad, Khandala, Khatav,
Koregaon, Mahabaleshwar, Man, Patan, Phaltan, Satara and Wai.
The researcher has selected English, Urdu and Marathi medium secondary schools from
all talukas for the study.
Table 1: Present Status of Satara District in ICT Awareness
(Secondary school counts for 9th and 10th Standard, by medium of instruction)

Sr. No | Taluka        | Total Schools | Secondary Schools | Marathi | Urdu | English
1      | Jaoli         | 250           | 30                | 28      | 00   | 02
2      | Karad         | 510           | 118               | 99      | 03   | 16
3      | Khandala      | 168           | 32                | 28      | 00   | 04
4      | Khatav        | 340           | 69                | 65      | 00   | 04
5      | Koregaon      | 270           | 59                | 56      | 00   | 03
6      | Mahabaleshwar | 184           | 47                | 14      | 01   | 32
7      | Man           | 355           | 65                | 64      | 00   | 01
8      | Patan         | 627           | 67                | 64      | 00   | 03
9      | Phaltan       | 412           | 73                | 69      | 00   | 04
10     | Satara        | 484           | 110               | 89      | 01   | 20
11     | Wai           | 254           | 45                | 38      | 01   | 06
       | Total         | 3854          | 715               | 614     | 06   | 95

Source: Zilla Parishad, Satara, and PS Karad
Advantages of the Effective Learning Model:
In the modern world ICT is widely used; the following are some advantages of ICT:
1) Improving access and equity:
The Global E-Schools and Communities Initiative (GESCI) assists countries to consider the
appropriate uses of ICT in order to take learning beyond the classroom. Examples include
addressing the shortage of teachers required for Universal Primary Education (UPE),
extending vocational skills training to underprivileged or marginalized populations, creating
new electronic learning materials to support teaching and learning, and using ICT to reach
students with special needs to enable them to participate effectively in learning and thus make
education truly inclusive.
2) Improving quality of education:
Education reports from almost all developing countries point to quality as one of the biggest
issues developing-country education systems face. The 2010 Education For All (EFA) Global
Monitoring Report notes that "Many countries are failing the quality test."
3) Improving the efficiency of education systems:
The education system is a complex system that requires good management and administration
if it is to be efficient and effective. ICT has proven itself in almost every other industry,
especially the private sector and increasingly in the public sector as well, in supporting
management and administration. ICT enables teachers, planners, managers and policy makers
to access educational data when they need it.
4) ICT and the demand for new skills:
ICT skills make a critical contribution to socio-economic development because of their
central importance to the knowledge economy. ICT can also contribute to the development of
other important knowledge-economy "new millennium" skills such as critical thinking,
information retrieval, analytical capacity, problem solving, communication, and the ability to
understand and manipulate new media.
Dynamic Learning Model for Improvement of the Teaching-Learning Process:
It is also evident from the study that the teaching-learning process can be improved when
stakeholders obtain relevant information about subjects easily. We can observe effectiveness
and efficiency in the education system in all respects, particularly from an academic point of
view. Innovative educational models were examined. One school was planned and built
around advanced ICT pedagogical innovations, according to an educational concept that views
ICT as a means for empowering and redefining the relationship between students and
knowledge, for facilitating the acquisition of learning skills, and for improving academic
achievement. As one of the teachers reported: "Since the school from its very beginning was
enriched by computers, it would be very difficult without them, because half of the curricula
are computer-based." … "Without the computers, this is a different school…" The innovation
of the school is in its holistic perception and integration of several components: the physical
structure and organization (including architectural design).
In the education sector there are six ways in which we can use ICT to improve education,
shown in the six areas below.
Fig: Dynamic Learning Model
The EPL Model is divided into six entities, which the researcher explains in detail. In this
model the main entities are ICT, Content Writing, Methodology, Awareness and Resources;
each entity is very important and each depends on the others. We discuss the entities one
by one.
In the first step of the model, a candidate in the teaching-learning process must have
knowledge and awareness of the subject. In every school it is essential to use ICT for internal
and external trainers, the school portal, the digital library and the LMS (Learning Management
System), learning locations and the technical support group.
Fig. 1: Learning Model, Step 1: Prior Process
Stakeholders: parent, student, teacher, school.
In the development of ICT-based teaching and learning, the first step is the prior process,
whose sub-entities are awareness and knowledge. If the student or the teacher is not capable of
using ICT in the teaching-learning process, the whole system collapses. That is why every
stakeholder in the education system must have knowledge about ICT teaching-learning
technology. In the education system, knowledge alone is not enough; stakeholders must also
have awareness of ICT technology. Both sub-entities are important for improving the
education system.
Fig. 2: Learning Model, Step 2: ICT
The ICT entity is related to four main sections:
a) Teacher: content writing, PPT
b) School: providing new technology
c) Student: self-understanding
d) Parent: satisfaction
In the education system the teacher, student, school and parent are most important. In the
classroom the teacher prepares PowerPoint presentations and explains the chapter, and the
question is what percentage of the chapter is understood by the students. ICT is helpful to both
student and teacher by providing new technology; when this new technology is also provided
by the school, it naturally leads to satisfaction for the parent.
Fig. 3: Learning Model, Step 3: Content Writing
Content Writing: this entity has four main sections:
a) Need
b) Access
c) Data form
d) Design
According to the current situation, the needs of students and teachers are identified, the
relevant data is accessed, the project is designed and a database is built.
Fig. 4: Learning Model, Step 4: Methodology
Methodology: this entity has four main sections:
a) Trainer:
   i) School teacher
   ii) External trainer
b) School portal:
   i) Digital library
   ii) LMS
c) Learning location:
   i) Home
   ii) School
   iii) Other schools
d) Technical support group, which provides two types of support:
   i) Network support
   ii) Application support
This model is conceptualized as innovative because it considers the implementation and use of
educational ICT policies at four levels (trainer, school portal, learning location, technical
support group) to improve learning and teaching practices in a formal context. According to
the literature, all the factors pointed out above are necessary to create a successful environment
when teachers attempt to integrate ICT in schools.
The trainer may be a school teacher or an external trainer. The school portal comprises the
digital library and the LMS. The learning location may be the home, the school or other
schools. The technical support group provides network support and application support.
Fig. 5: Learning Model, Step 5: Assessment
Assessment: this entity has four main sections:
i) Courses
ii) Scheme of work
iii) Data analysis
iv) Curriculum
Fig. 6: Learning Model, Step 6: Resources
Resources: this entity has four main sections:
i) Teaching application
ii) Timetable
iii) Virtual support
iv) Recorded lectures
Results and Discussion:
Implementation of ICT brings growth in the education field in various ways: both students and
teachers become more confident in school, and the benefits carry over into real life and the
social environment. To improve the overall performance of a system and achieve faster
response times, many systems allow multiple users to update data simultaneously; in such an
environment, the interaction of concurrent updates may result in inconsistent data.
Our world today has changed a great deal with the aid of information technology. Things
that were once done manually or by hand have now become computerized operating systems,
which simply require a single click of a mouse to get a task completed. With the aid of
Information Technology Enabled Services we are not only able to streamline our school
processes, but we are also able to get constant information in 'real time' that is up to the minute
and up to date.
With the help of ICT Services, communication has also become cheaper, quicker, and
more efficient. We can now communicate with anyone around the globe by simply text
messaging them or sending them an email for an almost instantaneous response. The internet
has also opened up face-to-face direct communication between different parts of the world,
thanks to the help of video conferencing.
Conclusion:
It is concluded that there is a relation between ICT and the teaching-learning process.
Virtual classrooms, websites related to education and video CDs on chapters of particular
subjects are very useful in the teaching-learning process. Awareness of ICT implementation in
government schools is lower than in private schools; hence IT training using this model must
be provided to students and even teachers so that they can improve their teaching-learning
process. The model is therefore effective.
REFERENCES:
[1] 'Introducing ICT into schools in Rwanda: educational challenges and opportunities', Volume 31, Issue 1, January 2011, pages 37-43.
[2] Teaching Reading in the Primary Schools, David Fulton, 2000.
[3] http://www.gesci.org/, date 10 Jan 2015.
[4] http://www.ictinedtoolkit.org/usere/login.php, date 142015.
[5] Gurumurthy K and K Vishwanath, 2010, International Journal of Information and Communication Technology (IJICT), ICT programmes in schools in India.
Big Data: Building Smart Cities in INDIA
Mr. Rahul Sharad Gaikwad
Executive MBA (Project Management), MCA (Engg), M.A. I (History)], DCS
Big Data Hadoop Administrator (Lead) at Opus Consulting Solutions,
Pune. India
Email id: rahulgaikwad1986@gmail.com ; rahulgaikwad86@yahoo.co.in
ABSTRACT :
There is much enthusiasm currently about the possibilities created by new and more
extensive sources of data to better understand and manage cities. Here, we can see how
big data can be useful in urban planning by formalizing the planning process as a
general computational problem. We can show that, under general conditions, new
sources of data coordinated with urban policy can be applied following fundamental
principles of engineering to achieve new solutions to important age-old urban problems.
As such the primary role of big data in cities is to facilitate information flows and
mechanisms of learning and coordination by heterogeneous individuals. However,
processes of self-organization in cities, as well as of service improvement and
expansion, must rely on general principles that enforce necessary conditions for cities to
operate and evolve. Such ideas are at the core of a developing scientific theory of cities,
which is itself enabled by the growing availability of quantitative data on thousands of
cities worldwide, across different geographies and levels of development.
Cities are repositioning themselves to play a pivotal role in the development of
humanity; however, because of rapid population growth and nonstop urban expansion,
cities are stressed with a variety of challenges related to urban life, such as urban
planning and management, environmental management, urban safety, resource
mobilization and utilization, urban health, energy efficiency, traffic management, social
activities, recreation and entertainment. Failing to cope with any of the aforesaid
challenges might be a threat to the city's prosperity and quality of life, affecting its
residents adversely; hence the 'Smart City' concept has materialized as a problem-solving
technological instrument for real urban-world problems. Smart cities are also
recognized as lively cities which can respond to residents' basic needs and aspirations in
'real time'.
This paper provides a broad overview of Smart City, Role of Big Data, Challenges and
real time applications of Big Data.
Keywords: Big Data, Smart City, Analytics, IoT,
Introduction:
Considering the pace of rapid urbanization in India, it has been estimated that the urban
population will rise by more than 400 million people by the year 2050 and that urban areas
will contribute nearly 75% of India's GDP by the year 2030. It has been realized that to foster such growth,
well planned cities are of utmost importance. For this, the Indian government has come up
with a Smart Cities initiative to drive economic growth and improve the quality of life of
people by enabling local area development and harnessing technology, especially technology
that leads to Smart outcomes.
Initially, the Mission aims to cover 100 cities across the country (which have been
shortlisted on the basis of a Smart Cities Proposal prepared by every city) and its duration will
be five years (FY2015-16 to FY2019-20). The Mission may be continued thereafter in the
light of an evaluation to be done by the Ministry of Urban Development (MoUD) and
incorporating the learnings into the Mission. This initiative aims to focus on area-based
development in the form of redevelopment, or developing new areas (Greenfield) to
accommodate the growing urban population and ensure comprehensive planning to improve
quality of life, create employment and enhance incomes for all, especially the poor and the
disadvantaged.
History of Smart City:
The concept of the smart city has been introduced to highlight the importance of
Information and Communication Technologies (ICTs) in the last 20 years (Schaffers, 2012).
In literature the term smart city is used to specify a city's ability to respond as promptly
as possible to the needs of citizens. Quality of life and city development are profoundly
influenced by the core systems of a city: transport, government services and education, public
safety and health. So, we must start with the analysis and development of the city in these four areas.
What is a Smart City?
The global consensus on the idea of a smart city is that it is a city which is livable, sustainable
and inclusive. Hence, it would mean a city which has mobility, healthcare, smart infrastructure,
smart people, traffic maintenance, efficient waste and resource management, etc.
It is a city equipped with basic infrastructure to give a decent quality of life and a clean and
sustainable environment through the application of smart solutions.
In the approach of the Smart Cities Mission, the objective is to promote cities that
provide core infrastructure and give a decent quality of life to its citizens, a clean and
sustainable environment and application of ‘Smart’ Solutions. The Smart Cities Mission of the
Government is a bold, new initiative. It is meant to set examples that can be replicated both
within and outside the Smart City, catalysing the creation of similar Smart Cities in various
regions and parts of the country. The core infrastructure elements in a smart city would
include:
• Adequate water supply and assured electricity supply
• Sanitation, including solid waste management
• Efficient urban mobility and public transport
• Affordable housing, especially for the poor
• Robust IT connectivity and digitalization
• Good governance, especially e-Governance and citizen participation
• Sustainable environment, health and education
• Safety and security of citizens, particularly women, children and the elderly
Fig 1: Smart Solutions for Smart City
India's Smart City Initiative:
Though contested and criticized, the smart cities revolution has already begun under the
leadership of cities like Singapore (Singapore), Masdar (UAE), Johannesburg (South Africa),
Boston (USA), Dublin (Ireland), Amsterdam (Netherlands), Songdo (South Korea) and
Barcelona (Spain). India has entered this club quite late, yet it is expected to catch up quickly,
as the union government has announced the establishment of 100 smart cities across the
country, and the 'Smart City' term has been a topic of hot debate in the media for the last few
months. As the government has prompted the private sector to play a major role, technology
multinationals have responded: IBM, a market leader in smart city solutions, has shown its
preparedness to transfer advanced, data-driven systems that integrate information from all city
operations into a single system to improve efficiency and deliver an enhanced quality of life
for residents; ESRI is preparing itself for business in the smart cities initiative through
real-time spatial data acquisition, storage, processing and decision-making systems; and
CISCO has showcased technological solutions using sensors and the Internet of Things (IoT)
to solve a range of city service deficiencies and environmental quality degradations, such as
leakage of water pipelines, pollution of ambient air, traffic congestion and the crunch of solid
waste management.
Big Data and the Role of Crowdsourcing:
A dataset turns into 'Big Data' when its volume, velocity and variety surpass the
capabilities of traditional IT systems to consume, store, analyse and process it. City processes
are complex, diverse and dynamic, and through various actions, reactions and interactions they
generate a huge amount of data that is hard to process; this 'Big Data' will be required in a
'Smart City' to make real-time, efficient and informed problem-solving decisions. The Centre
for Urban Science and Progress was recently established at New York University to sense
New York City through a variety of sensors and to offer a continuous data flow for improving
quality of life. The centre will construct urban Big Data through: publicly available
government records, made interoperable after cleaning; data related to environmental
dynamics such as pollution, sound, temperature and light, received through fixed sensors; data
from individual volunteers through personal sensors like UP wristbands and Fitbit, measuring
locations, activities and physiology; Internet data from social media, blogs and other
platforms; and data from the increase of RFID and video camera surveillance of pedestrian
and vehicle movements. As stated above, individual volunteers with personal sensors could be
a vital contributor to urban Big Data. The number of smartphone users in India is expected to
reach 104 million, which could be an opportunity for individuals to play a key role as real-time
sensors to monitor the performance of urban services. Apart from monitoring the performance
of urban services, individuals have another task to conduct for the Smart City: offering ideas
with creativity and innovation for urban solutions. There is no doubt that end-users can make
an appropriate contribution to development processes, and therefore it will be worthwhile to
engage them aggressively in generating ideas, designs and development of solutions to their
own requirements and difficulties.
Big data & Analytics Market Opportunities in Smart Cities:
How does one measure a city? By the buildings that fill its skyline? By the efficiency of
its rapid transit? Or, perhaps, by its busy streets? Certainly these are the memorable hallmarks of
any modern city or metropolitan area. But a city’s true measure goes beyond human-made
structures and lies deeper than daily routine. Rather, cities and metro areas are defined by the
quality of the ideas they generate, the innovations they spur, and the opportunities they create
for the people living within and outside the city limits.
Fig.2. Big Data and Smart City Concept
The rise of information and communication technologies (ICT) and the spread of
urbanization are arguably the two most important global trends at play across the world today.
Both are unprecedented in their scope and magnitude in history, and both will likely change
the way we live irreversibly. If current trends continue, we may reasonably expect that the vast
majority of people everywhere in the world will live in urban environments within just a few
decades and that information technologies will be part of their daily lives, embedded in their
dwellings, communications , transportation and other urban services. It is therefore an obvious
opportunity to use such technologies to better understand urbanism as a way of life and to
improve and attempt to resolve the many challenges that urban development entails in both
developed and developing cities. Despite its general appeal, the fundamental opportunities and
challenges of using big data in cities have, in my opinion, not been sufficiently formalized. In
particular, the necessary conditions for the general strategic application of big data in cities
need to be spelled out and their limitations must also be, as much as possible, anticipated and
clarified. To address these questions in light of the current interdisciplinary knowledge of
cities is the main objective of this perspective.
By 2017 there will be more than 1 trillion connected objects and devices on the planet
generating data. There are 2.5 billion gigabytes of data generated every day, of which 80% is
unstructured. By 2017, worldwide data spend will be $266 billion. We are beginning to see
trends and similarities as big data goes mainstream: "It's not about the petabytes".
People want to make decisions fast, predict customer behaviour more quickly, and react
more quickly to events in the real world. Between 2012 and 2014, machine-generated data
jumped from 23.7% of projects to 41.2% of projects, while the other two sources dropped
(from 45.4% to 30.9% for human-sourced data and from 30.7% to 27.7% for process-mediated
data). That ties into IT's mantra, or mandate, to be more automated.
Real-life example of Big Data helping to build a Smart City:
The South Korean city of Songdo is an apt example of how big data can change a city.
Songdo is located just 40 miles from Seoul and 7 miles from Incheon International Airport.
Songdo has 1,500 acres of reclaimed land and 40% of its area is earmarked as open space.
Although we hear a lot about smart cities these days, the vision of building Songdo into a
completely connected city started being implemented back in 2000, with a projected cost of
$35 billion. Now, in 2015, Songdo is at the threshold of realizing that vision. Cisco has been
working on the project and is making sure that every inch of the city is wired with fibre-optic
broadband. So Songdo, the smart city, is going to massively impact the lives of its 65,000
residents and the 300,000 people who will commute to Songdo daily.
Given below are some ways this smart city is going to behave.
• The traffic will be measured and regulated with the help of RFID tags on the cars. The RFID tags will send geolocation data to a central monitoring unit that will identify the congested areas (a minimal congestion-detection sketch follows this list). Also, the citizens will always know, via their smartphones and mobile devices, the exact status of public transportation and its availability.
• Even garbage collection will generate data. Residents who dispose of garbage will need to use a chip card at the containers. The city planners and architects, along with Cisco, are working on the concept of totally eliminating garbage trucks. Garbage trucks will not collect and dispose of garbage anymore. Each house will have a garbage disposal unit, and garbage will be sucked from it to the garbage treatment centres, which will dispose of it in an environmentally friendly way. The garbage will be used to generate power for the city.
• Data will make life more secure for the citizens. For example, children playing in the parks will wear bracelets with sensors which will allow the children to be tracked in case they go missing.
• The smart energy grid can measure the presence of people in a particular area at a particular moment and can adjust the street lights accordingly. For example, the smart grid will ensure that areas that are scantly populated will automatically have some of the street lights turned off. This will result in a lot of energy savings.
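The first item in the list above describes RFID tags sending geolocation data to a central monitoring unit that identifies congested areas. A minimal Python sketch of that central-unit logic is given below; the zone names, vehicle identifiers and threshold are invented purely for illustration.

```python
from collections import Counter

# Hypothetical RFID pings received in the last five minutes: (vehicle_id, zone).
pings = [
    ("KA-01-1234", "Central Avenue"),
    ("KA-05-9876", "Central Avenue"),
    ("MH-12-4455", "Central Avenue"),
    ("MH-14-2211", "Ring Road"),
    ("KA-03-7777", "Central Avenue"),
    ("MH-12-9999", "Ring Road"),
]

CONGESTION_THRESHOLD = 3  # assumed number of vehicles per zone per window

def congested_zones(pings, threshold=CONGESTION_THRESHOLD):
    """Count distinct vehicles per zone and flag zones above the threshold."""
    vehicles_per_zone = Counter()
    seen = set()
    for vehicle, zone in pings:
        if (vehicle, zone) not in seen:      # count each vehicle once per zone
            seen.add((vehicle, zone))
            vehicles_per_zone[zone] += 1
    return [zone for zone, n in vehicles_per_zone.items() if n >= threshold]

print("Congested:", congested_zones(pings))   # ['Central Avenue']
```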
How can big data potentially contribute to smart cities?
The example of Songdo above has already given some ideas on how big data and IoT can
contribute to smart city development. Still, it is worth exploring some more areas.
• Big data can help reduce emissions and bring down pollution. Sensors fitted in the roads will measure the total traffic at different times of day and the total emissions. The data can be sent to a central unit which will coordinate with the traffic police. Traffic can be managed or diverted along other, less congested areas to reduce carbon emissions in a particular area.
• Parking problems can be better managed. Cars will have sensors attached which can guide the car to the nearest available parking lot.
• The environment will be cooler and greener, with less energy being consumed. In Bristol, a programme that involves development of a wireless network based on IoT is under way. This network will use less energy and power than traditional Wi-Fi and mobile networks. So batteries on mobile devices will last longer and there will be less need to charge the devices frequently.
How Big Data technology could improve our cities:
• Public safety: We could improve the efficiency of police and fire services by capturing and correlating all the data coming from different systems installed in the city, including surveillance cameras, emergency vehicle GPS tracking and fire and smoke sensors.
• Urban transportation: Through real-time data capture and the management of signals from video cameras and magnetic sensors installed in the road network, GPS systems could be used to track the location of public buses. Equally, social media monitoring systems could enable us to flag a protest organized on social networks and therefore facilitate the management of potential traffic jams by changing bus routes, modifying traffic light sequences and delivering information to drivers via mobile apps indicating approximate driving times and giving alternative routes.
• Water management: By analyzing the data coming from metering systems, pressure or pH sensors installed in water supply networks and video cameras situated in water treatment plants, it would be possible to optimize water management, detecting leaks, reducing water consumption and mitigating sewer overflow.
• Energy management: With all the data coming from smart electric meters installed in customers' homes, as well as meteorological open data platforms, it would be possible to optimize energy production depending on demand, which would help us to maximize the integration of renewable energy resources like wind and solar energy.
• Urban waste management: By gathering data in real time from sensors that detect the container filling level and comparing it to historical data and usage trends, it would be possible to forecast the ideal time for emptying each individual container and optimize waste collection routes (see the sketch after this list).
• Public sentiment analysis: By analyzing social media networks and blogs using Big Data technologies, cities would be able to measure public opinion on key issues and services such as public transportation, waste management or public safety, allowing them to prioritize and shape policy.
• M2M and IoT solutions: These are an absolutely fundamental element of Smart City projects. Installing thousands of sensors in public buildings (HVAC, lighting, security), energy management systems (smart meters, turbines, generators, batteries), transportation platforms (vehicles, lights, signage) and security systems (ambulances, video cameras, smoke detectors) allows us to use all of that data, which is transmitted to a central server where it can be correlated and analyzed with other sources of data, turning it into meaningful information.
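As referenced in the urban waste management item above, here is a small sketch of the kind of batch analytics such a pipeline might run. It is written for PySpark on the assumption that a Spark installation is available; the container identifiers, fill percentages and the 75% collection threshold are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("waste-container-monitoring").getOrCreate()

# Hypothetical fill-level readings reported by container sensors.
readings = spark.createDataFrame(
    [("C-101", "2016-01-05 08:00", 62.0),
     ("C-101", "2016-01-05 20:00", 81.5),
     ("C-202", "2016-01-05 08:00", 35.0),
     ("C-202", "2016-01-05 20:00", 48.0),
     ("C-303", "2016-01-05 20:00", 90.0)],
    ["container_id", "reported_at", "fill_percent"],
)

# Highest fill level observed per container in this batch.
latest_fill = (readings
               .groupBy("container_id")
               .agg(F.max("fill_percent").alias("max_fill")))

# Containers that have crossed the assumed 75% collection threshold.
to_collect = latest_fill.filter(F.col("max_fill") >= 75.0)
to_collect.show()

spark.stop()
```

The same idea extends naturally to the historical comparison the text mentions, for example by joining each batch against per-container averages before planning the collection route.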
Fig 3: Integration of Big Data, Cloud Computing and IoT
Cloud computing technology is also an intrinsic part of Big Data and Smart City
projects. Here are some drivers for the cloud adoption of Big Data:
• Cost reduction: Big Data environments require a cluster of servers to support the processing of large volumes, high velocity and varied formats of data, and the cloud pay-per-use model is financially advantageous.
• Rapid provisioning/time to market: Big Data environments can be easily scaled up or down based on the processing requirements, and the provisioning of cloud servers can be done in real time.
• Flexibility/scalability: Big Data analysis in smart city environments requires huge computing power for a brief amount of time, and servers need to be provisioned in minutes. This kind of scalability and flexibility can only be achieved with cloud technologies, avoiding investments in very expensive IT infrastructure by simply paying for the consumed computing resources on an hourly basis.
In a nutshell, if cities want to be smarter, more competitive and sustainable they need to
leverage Big Data, together with M2M and Cloud Computing technologies in order to achieve
their goals.
Conclusion:
• Clearly, big data can make enormous contributions to the development of smart cities.
• However, the smart city vision faces huge challenges before it comes to fruition.
• It appears that availability of funds, data confidentiality and social issues are the biggest challenges.
• For smart cities to become a global phenomenon, the issue of affordability in third-world countries needs to be addressed first.
• Given the cost, it is clear that the vision is still at a nascent stage and it may take several years before it becomes a global phenomenon. It has to be an inclusive concept.
• The event discussed in great detail what a smart city would look like in a country like India, where every city has different demographics, needs and resources.
• The initiative of creating smart cities would echo across the country as a whole and would not be limited to the urban centres.
String Matching and Its Application in Diverse Field
Dr. Sarika Sharma
Professor & Director
JSPM's Eniac Institute of Computer Applications,
Wagholi
Email- sarika4@gmail.com
Ms. Urvashi Kumari
Research Scholar
Email – urvashijj@gmail.com
ABSTRACT :
String data is ubiquitous; common-place applications are digital libraries and product catalogs (for books, music, software, etc.), electronic white and yellow page directories, specialized information sources (e.g. patent or genomic databases), customer relationship management data, etc. The amount of textual information managed by these applications is increasing at an incredible rate. The two best descriptive examples of this growth are the World-Wide Web, which is estimated to provide access to at least three terabytes of textual data, and the genomic databases, which are estimated to store more than fifteen billion base pairs. The problem of string searching and matching is fundamental to many such applications, which depend on efficient access to a large number of distinct strings or words in memory, for example spell checking in text editors, network intrusion detection, computer virus detection and telephone directory handling (electronic yellow page directories). String matching is one of the fundamental problems of computer science: finding the places where one or several query strings occur within a large set of strings, usually termed texts. In this paper the researchers explore various means of string matching and the diverse applications of this classic problem.
Keywords: String matching, String searching, Trie, Index, Exact string matching,
Approximate string matching
Introduction
A text is a string or set of strings. Let ∑ be a finite alphabet; both the query string and the searched text are arrays of elements of ∑. The alphabet ∑ may be a usual human alphabet (for example A-Z of the English alphabet), a binary alphabet (∑ = {0,1}) or, in bioinformatics, a DNA alphabet (∑ = {A,C,G,T}). We assume that the text is an array T[1…n] of length n and the query string is an array S[1…m] of length m, with m ≤ n. The problem of string matching is to determine whether the string S occurs in the text T, as in Fig 1.1.
Fig 1.1: The text T ("THIS IS A STRING") and the query string S ("STRING").
Different Approaches for String Matching
There are two different approaches to string matching or string searching.
1. In the first approach, given a query string S and a text T, the text T is scanned to search for the query string S and find the places where the query string occurs within the larger text.
INCON - XI 2016
160
ASM’s International E-Journal on
E-ISSN – 2320-0065
Ongoing Research in Management & IT
A large number of algorithms exist to solve the string matching problem; they can be applied on the basis of whether exact or approximate string matching is required, and whether a single pattern or multiple patterns must be matched.
1.1 Exact String Matching: In exact string matching, algorithms such as the Brute Force algorithm, searching with automata, the Rabin-Karp algorithm and the Knuth-Morris-Pratt algorithm find the positions in T at which the query string S occurs exactly.
1.2 Approximate String Matching: In this approach, the algorithms are designed to find all substrings of the text whose edit distance to the query string is at most K. The edit distance between two strings is defined as the minimum number of character modifications needed to make them equal. Dynamic programming is a standard method for solving the approximate string matching problem, as illustrated in the sketch below.
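The following is a minimal sketch (not from the paper) of approximate matching by dynamic programming, in the style of Sellers' algorithm: it reports the end positions in the text where some substring matches the query with edit distance at most K. The function name and the example strings are illustrative.

```python
def approx_find(pattern, text, k):
    """Report end positions in `text` where a substring matches `pattern`
    with edit distance <= k (dynamic programming, one column per text char)."""
    m = len(pattern)
    prev = list(range(m + 1))        # distance against an empty text prefix
    hits = []
    for j, c in enumerate(text, start=1):
        curr = [0] * (m + 1)         # row 0 is 0: a match may start anywhere
        for i in range(1, m + 1):
            cost = 0 if pattern[i - 1] == c else 1
            curr[i] = min(prev[i] + 1,          # skip the text character c
                          curr[i - 1] + 1,      # skip a pattern character
                          prev[i - 1] + cost)   # match or substitution
        if curr[m] <= k:
            hits.append(j)           # a substring ending at position j matches
        prev = curr
    return hits

print(approx_find("string", "this is a strng", 1))  # -> [15]
```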
2. In the second approach to string matching or string searching, attention is given to creating an index for efficient searching of the text T. For this purpose the text document is first preprocessed. The preprocessor normalizes the document: it removes the delimiters (spaces, commas), replaces non-alphanumeric symbols and unifies upper- and lower-case characters, and then returns the list of single terms that occur in the document. The set of all distinct words in the index is termed the vocabulary. The terms in the vocabulary are represented in an in-memory data structure which then provides an index into the text, so that string search and comparison can be performed more efficiently; a minimal sketch is given below.
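A minimal sketch (illustrative, not tied to any particular system) of this preprocessing step and of a simple in-memory inverted index, whose keys form the vocabulary:

```python
import re
from collections import defaultdict

def normalize(document):
    """Preprocess: drop delimiters and non-alphanumeric symbols, unify case,
    and return the list of single terms occurring in the document."""
    return re.findall(r"[a-z0-9]+", document.lower())

def build_index(documents):
    """Build an in-memory inverted index mapping term -> set of document ids;
    the keys of the index are the vocabulary."""
    index = defaultdict(set)
    for doc_id, doc in enumerate(documents):
        for term in normalize(doc):
            index[term].add(doc_id)
    return index

docs = ["This is a string", "String matching, and searching", "Indexing text"]
index = build_index(docs)
print(sorted(index))     # the vocabulary
print(index["string"])   # documents containing the term -> {0, 1}
```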
According to The American Heritage Dictionary of the English Language, 4th edition, an index is defined as follows: 1. Something that serves to guide, point out, or otherwise facilitate reference, especially: a. An alphabetized list of names, places and subjects treated in a printed work, giving the page or pages on which each item is mentioned. b. A thumb index. c. Any table, file or catalog. d. Computer science: A list of keywords associated with a record or document, used especially as an aid in searching for information.
Applications of string matching
There are many different areas where string matching is extensively used. We discuss a few of the most important areas below.
String Matching in Intrusion Detection
The need to share data and information online has made security a big issue for all enterprises using the Internet. Hackers and intruders have made many attempts to bring down high-profile company networks and web services. To secure network infrastructures and communication over the network, many protocols and tools such as firewalls, encryption algorithms and virtual private networks (VPNs) have been developed. Intrusion detection is used to identify some known types of attacks, with the help of information collected on common types of attacks, and also to find out whether someone is trying to attack your network or a particular server. The collected information helps in tightening network security and serves legal purposes [2]. Intrusion detection is a set of procedures and techniques used to identify suspicious activity at both the network and server level. Intrusion detection systems can be divided into two broad categories: signature-based intrusion detection systems and anomaly detection systems. Intruders have signatures, like computer viruses, that can be detected using signature-based intrusion detection software, which tries to find data packets that
contain any known intrusion-related signatures or anomalies related to Internet protocols. Based upon a set of signatures and rules, the detection system is able to find and log suspicious activity and generate alerts. Anomaly-based intrusion detection usually depends on packet anomalies present in protocol headers; in some cases these methods produce better results than signature-based IDS. Usually an intrusion detection system captures data from the network and applies its rules to that data or detects anomalies in it.
Earlier intrusion detection systems make use of the AC (Aho-Corasick) algorithm. It is an automaton-based multiple string matching algorithm which locates all occurrences of a set of keywords in a string or text. It first builds a finite state machine from all the keywords and then uses the machine to process the payload in a single pass. The AC algorithm has deterministic performance which does not depend on the specific input and is therefore not vulnerable to algorithmic attacks, making it attractive for network intrusion detection systems [4].
In computer science, the Aho-Corasick string matching algorithm is a string searching algorithm invented by Alfred V. Aho and Margaret J. Corasick. It is a kind of dictionary-matching algorithm that locates elements of a finite set of strings (the "dictionary") within an input text. It matches all patterns simultaneously. The complexity of the algorithm is linear in the length of the patterns plus the length of the searched text plus the number of output matches. Informally, the algorithm constructs a finite state machine that resembles a trie with additional links between the various internal nodes. These extra internal links allow fast transitions between failed pattern matches (e.g. a search for cat in a trie that does not contain cat, but contains cart, and thus would fail at the node prefixed by ca) to other branches of the trie that share a common prefix (e.g., in the previous case, a branch for attribute might be the best lateral transition). This allows the automaton to transition between pattern matches without the need for backtracking [6]. However, the AC algorithm requires more memory and matching time, and hence is not appropriate for many applications requiring very high performance. A minimal sketch of the automaton construction is given below.
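The following is a minimal, illustrative sketch of an Aho-Corasick automaton (a trie plus failure links built by breadth-first search); production IDS engines use far more optimized implementations, and the keyword set below is the classic textbook example.

```python
from collections import deque

def build_automaton(keywords):
    """Build a minimal Aho-Corasick automaton: a trie plus failure links."""
    goto, fail, out = [{}], [0], [set()]
    for word in keywords:                        # 1) build the trie
        state = 0
        for ch in word:
            if ch not in goto[state]:
                goto.append({}); fail.append(0); out.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        out[state].add(word)
    queue = deque(goto[0].values())              # 2) BFS to set failure links
    while queue:
        state = queue.popleft()
        for ch, nxt in goto[state].items():
            queue.append(nxt)
            f = fail[state]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[nxt] = goto[f].get(ch, 0)
            out[nxt] |= out[fail[nxt]]           # inherit outputs via failure
    return goto, fail, out

def search(text, automaton):
    """Process the text in a single pass, reporting (end_index, keyword)."""
    goto, fail, out = automaton
    state, hits = 0, []
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        hits.extend((i, w) for w in sorted(out[state]))
    return hits

ac = build_automaton(["he", "she", "his", "hers"])
print(search("ushers", ac))   # -> [(3, 'he'), (3, 'she'), (5, 'hers')]
```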
String matching in detecting plagiarism
String matching plays a vital role in finding duplicated software code. String matching tools are used in software metrics to check the quality of the software development process. For reliability and for parallel and distributed processing, multiple copies of the same data must often be kept; for example, many systems such as data mining, mirroring and content distribution need replication of data. Although redundancy of data may increase reliability, uncontrolled redundancy can greatly increase the search space for a string and can affect the performance of string matching algorithms.
Similarity matching algorithms do not provide information on the differences between documents, and file synchronization algorithms are usually inefficient and ignore the structural and syntactic organization of documents. For this purpose the S2S matching approach is used. S2S matching is composed of structural and syntactic phases to compare documents [5]. Firstly, in the structural phase, documents are decomposed into components by their syntax and compared at a coarse level. The structural mapping processes the decomposed documents based on their syntax without actually mapping at the word level, and can be applied in a hierarchical way based on the structural organization of a document. Secondly, the syntactic matching algorithm uses a heuristic look-ahead algorithm for matching consecutive tokens with a verification patch. The two-phase S2S matching approach provides faster results than currently available string matching algorithms [1,5].
String matching in bioinformatics
Bioinformatics is the application of information technology and computer science to
biological problems, in particular to issues involving genetic sequences. String algorithms are
centrally important in bioinformatics for dealing with sequence information. Modern
automated high throughput experimental procedures produce large amounts of data for which
machine learning and data mining approaches hold great promise as interpretive means[6].
Approximate matching of a search pattern to a target (called the “text” in string algorithms) is
a fundamental tool in molecular biology. The pattern is often called the “query” and the text is
called a “sequence database”, but we will use “pattern” and “text” consistent with usage in
computer science. While exact string matching is more commonly used in computer science, it
is often not useful in biology. One reason for this is that biological sequences are
experimentally determined, and may include errors: a single error can render an exact match
useless, where approximate matches are less susceptible to errors and other sequence
differences. Another, perhaps more important, reason for the importance of approximate
matching is that biological sequences change and evolve. Related genes in different organisms
or even similar genes within the same organism, most commonly have similar, but not
identical sequences. Determining which sequences of known function are most similar to a
new gene of unknown function is often the first step in finding out what the new gene does.
Another application for approximate string matching is predicting the results of
hybridization experiments. Since strands may hybridize if they are similar to each other's
reverse complements, prediction of which strands will bind to which other strands, and how
stable the binding will be, requires approximate, rather than exact, string matching.[6]
Biological sequences can be represented as strings. Approximate matching algorithms that can
tolerate insertions, deletions, and substitutions are extremely important for biological sequence
comparison. The Shift-AND method uses a bit-manipulation approach to accelerate the process of approximate matching. The approach can be explained by comparing it to the naive exact matching method, in which the pattern is compared character by character at each position along the text. This simple approach is inefficient (its time complexity is O(n*m)); the bit-parallel idea is sketched below.
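Below is a minimal sketch of the exact (error-free) Shift-AND idea, in which the whole state of the search fits into a machine word; the approximate variant extends this with one bit vector per allowed error. The example pattern and text are illustrative.

```python
def shift_and(pattern, text):
    """Bit-parallel exact matching (Shift-AND).  Bit i of `state` is set when
    pattern[:i+1] ends at the current text position; the high bit means a
    full occurrence of the pattern has been found."""
    m = len(pattern)
    mask = {}
    for i, c in enumerate(pattern):      # per-character bit masks
        mask[c] = mask.get(c, 0) | (1 << i)
    state, hits = 0, []
    for j, c in enumerate(text):
        # Extend every partial match by one character (and start a new one).
        state = ((state << 1) | 1) & mask.get(c, 0)
        if state & (1 << (m - 1)):
            hits.append(j - m + 1)       # start position of a full match
    return hits

print(shift_and("ACGT", "TTACGTACGT"))   # -> [2, 6]
```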
String matching in Digital Forensics
Textual evidence is important to the vast majority of digital investigations. This is
because a great deal of stored digital data is linguistic in nature (e.g. human languages,
programming languages, and system and application logging conventions). Some examples of
important text-based evidence include: email, Internet browsing history (both logs and
the content itself), instant messaging, word processing documents, spreadsheets, presentations,
address books, calendar appointments, network activity logs, and system logs. Digital
forensic text string searches are designed to search every byte of the digital evidence, at the
physical level, to locate specific text strings of interest to the investigation. Given the nature of
the data sets typically encountered, text string search results are extremely noisy, which results
in inordinately high levels of information retrieval (IR) overhead and information overload [3].
Frequently, investigators are left to wade through hundreds of thousands of search hits for
even reasonably small queries (i.e. 10 search strings) on reasonably small devices (i.e. 80 GB)
– most of which (i.e. 80–90% of the search hits) are irrelevant to investigative objectives. The
investigator becomes inundated with data and wastes valuable investigative time scanning
through noisy search results and reviewing irrelevant search hits.
There are fundamentally two classes of solutions to this problem: (1) decrease the number of irrelevant search hits returned, or (2) present the search hits in a manner which enables the investigator to find the relevant hits more quickly. The first solution class is disconcerting to many investigators and litigators, since it results in information reduction, presumably by some automated means; this can prohibit the investigator from finding important evidence.
class encompasses the basic approach that revolutionized web-based knowledge discovery:
search hit ranking. This approach presents hits in priority order based on some determination
of similarity and/or relevancy to the query. This approach is much more attractive to
investigators and litigators, since it improves the ability to obtain important information,
without sacrificing fidelity. Current digital forensic text string search approaches fail to
employ either solution class. They use simple matching and/or indexing algorithms that return
all hits. They fail to successfully implement grouping and/or ranking algorithms in a manner
that appreciably reduce IR overhead (time spent scanning/reviewing irrelevant search hits).
Search hits are commonly grouped by search string and/or "file item" and/or ordered by their physical location on the digital media. Such grouping and ordering are inadequate, as neither substantially helps investigators get to the relevant hits first (or at least more quickly).
New, better approaches in the second solution class are needed.
Current research in digital forensics aims at improving the information retrieval (IR) effectiveness of digital forensic text string searches [3,7].
Text Mining Research
Text mining includes tasks designed to extract previously unknown information by
analyzing large quantities of text, as well as tasks geared toward the retrieval of textual data
from a large corpus of documents [8]. Several information processing tasks fall under the
umbrella of text mining: information extraction, topic tracking, content summarization,
information visualization, question answering,concept linkage, text categorization/
classification,
and text clustering [8].
These are defined as follows:
1. Information extraction: identifies conceptual relationships, using known syntactic patterns and rules within a language.
2. Topic tracking: facilitates automated information filtering, wherein user interest profiles are defined and fine-tuned based on what documents users read.
3. Content summarization: abstracts and condenses document content.
4. Information visualization: represents textual data graphically (e.g. hierarchical concept maps, social networks, timeline representations).
5. Question answering: automatically extracts key concepts from a submitted question, and subsequently extracts relevant text from its data store to answer the question(s).
6. Concept linkage: identifies conceptual relationships between documents based on transitive relationships between words/concepts in the documents.
7. Text categorization/classification: automatically and probabilistically assigns text documents into predefined thematic categories, using only the textual content (i.e. no metadata).
8. Text clustering: automatically identifies thematic categories and then automatically assigns text documents to those categories, using only textual content (i.e. no metadata).
If applied to digital forensic text string searching, information extraction, content summarization, information visualization, and concept linkage would fit into the first solution class identified earlier (reduction of the search result set size). These text mining approaches reduce the search result set size via data abstraction techniques, by and large.
String Matching Based Video Retrieval
String matching can be used for fast, content-based video retrieval, in contrast with traditional video retrieval, which is slow and time consuming. The string-based video retrieval method first converts the unstructured video into a curve and marks its feature string; approximate string matching is then used to retrieve the video quickly. In this method the characteristic curve of the key-frame sequence is first extracted, the feature string is then marked, and approximate string matching is applied to the feature string to achieve fast video retrieval [9], as sketched below.
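A hedged, hypothetical sketch of the retrieval step only: it assumes each stored video has already been reduced to a feature string (for example, quantized slopes of the key-frame characteristic curve) and ranks the stored videos by edit distance to the query's feature string. The database contents and names below are made up for illustration.

```python
def edit_distance(a, b):
    """Classic dynamic-programming edit distance between two feature strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                # delete from a
                            curr[j - 1] + 1,            # insert into a
                            prev[j - 1] + (ca != cb)))  # match / substitute
        prev = curr
    return prev[-1]

def retrieve(query_feature, video_db, top_n=3):
    """Rank stored videos by how closely their pre-extracted feature strings
    approximate the query's feature string."""
    ranked = sorted(video_db.items(),
                    key=lambda kv: edit_distance(query_feature, kv[1]))
    return ranked[:top_n]

# Hypothetical feature strings for three clips.
video_db = {"clip_a": "uuddlrlr", "clip_b": "uudslrlr", "clip_c": "ddddlluu"}
print(retrieve("uuddlrlr", video_db, top_n=2))
# -> [('clip_a', 'uuddlrlr'), ('clip_b', 'uudslrlr')]
```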
6. Conclusion
String matching has greatly influenced the field of computer science and will play an important role in future technology. The importance of memory- and time-efficient string matching algorithms will only increase, and there are many more areas in which string matching can play a key role. Exact and approximate string matching algorithms make many otherwise hard problems solvable, and innovation and creativity in string matching can yield time-efficient performance in various domains of computer science. However, as the quantity of data and information grows day by day, searching gigantic collections of text documents has become a pervasive and time-consuming component of the new era, so index-based string searching is required in place of plain exact or approximate string matching algorithms. Numerous search engines are designed and implemented to execute queries on these large text collections, which are taken from the web, large corporate intranets, personal computers and databases. Modern computer systems store their data structures in different types of memory, organized in a memory hierarchy on the basis of time and space requirements. Because of the trade-off between time and space, search engines have traditionally stored major parts of their indexes on hard disk, a low-level storage medium with high latency. With the growing size of RAM, however, it has become possible to hold the entire index in main memory, shifting away from high-latency storage and thereby reducing query response time. Hence more efficient index-based in-memory search techniques are required for the string searching used in various applications.
REFERENCES:
[1] Aho, Alfred V.; Corasick, Margaret J. (June 1975). "Efficient string matching: An aid to bibliographic search". Communications of the ACM 18 (6): 333-340.
[2] Ali Peiravi, "Application of string matching in Internet Security and Reliability", Marsland Press Journal of American Science 2010, 6(1): 25-33.
[3] Beebe NL, Dietrich G. "A new process model for text string searching". In: Shenoi S, Craiger P, editors. Research advances in digital forensics III. Norwell: Springer; 2007. p. 73-85.
[4] Peifeng Wang, Yue Hu, Li Li, "An Efficient Automaton Based String Matching Algorithm and its application in Intrusion Detection", International Journal of Advancements in Computing.
[5] Ramazan S. Aygün, "Structural-to-syntactic matching similar documents", Journal of Knowledge and Information Systems, Volume 16, Issue 3, August 2008.
[6] Robert M. Horton, Ph.D., "Bioinformatics Algorithm Demonstrations in Microsoft Excel", 2004, cybertory.org.
[7] Nicole Lang Beebe, Jan Guynes Clark, "Digital forensic text string searching: Improving information retrieval effectiveness by thematically clustering search results", Digital Investigation 4S (2007) S49-S54.
[8] Yoan Pinzon, "Algorithm for approximate string matching", dis.unal.edu.co/~fgonza/courses/2006.../approx_string_matching.pdf, August 2006.
[9] Yin Jian, Yu Xiu, Dong Meng, "Application of Approximate String Matching in Video Retrieval", 2010 3rd International Conference on Advanced Computer Theory and Engineering (ICACTE), vol. 4, pp. 348-351.
Cloud Computing By Using Big Data
Mrs. Sandhya Giridhar Gundre
Assistant Professor (Computer Department)
Dr. D. Y. Patil Institute of Engineering and Management Research, Akurdi, Pune, Maharashtra
nsandhya528@gmail.com
Mr. Giridhar Ramkishan Gundre
Assistant Professor (Computer Science Dept.)
GSM Arts, Commerce & Science Senior College, Yerwada, Pune
ABSTRACT:
Cloud Computing is emerging today as a commercial infrastructure that
eliminates the need for maintaining expensive computing hardware. Through the use of
virtualization, clouds promise to address with the same shared set of physical resources
a large user base with different needs. Thus, clouds promise to be for scientists an
alternative to clusters, grids, and super computers. However, virtualization may induce
significant performance penalties for the demanding scientific computing workloads. In
this work we present an evaluation of the usefulness of the current cloud computing
services for scientific computing. LT codes and digital fountain techniques have received significant attention from both academia and industry in the past few years. By employing the underlying ideas of the efficient Belief Propagation (BP) decoding process in LDPC and LT codes, this paper designs BP-XOR codes and uses them to design three classes of secret sharing schemes, called BP-XOR secret sharing schemes, pseudo-BP-XOR secret sharing schemes, and LDPC secret sharing schemes. By establishing the equivalence between the edge-coloured graph model and degree-two BP-XOR secret sharing schemes, we are able to design novel perfect and ideal 2-out-of-n BP-XOR secret sharing schemes. By employing techniques from array code design, we are also able to design other (n, k) threshold LDPC secret sharing schemes. In the efficient (pseudo) BP-XOR/LDPC secret sharing schemes that we construct, only a linear number of XOR (exclusive-or) operations on binary strings is required for both the secret distribution phase and the secret reconstruction phase. For comparison, we should note that Shamir secret sharing schemes require field operations for the secret distribution phase and field operations for the secret reconstruction phase. Furthermore, our schemes achieve the optimal update complexity for secret sharing schemes. By the update complexity of a secret sharing scheme, we mean the average number of bits in the participants' shares that needs to be revised when a certain bit of the master secret is changed.
Keywords: Error correcting codes, edge coloured graphs, perfect one factorization of complete graphs, data searched over encrypted cipher text.
Introduction
Scientific computing requires an ever-increasing number of resources to deliver results
for growing problem sizes in a reasonable time frame. In the last decade, while the largest
research projects were able to afford expensive supercomputers, other projects were forced to
opt for cheaper resources such as commodity clusters and grids. Cloud computing proposes an
alternative in which resources are no longer hosted by the researcher’s computational
facilities, but leased from big data centres only when needed. Despite the existence of several
cloud computing vendors, such as Amazon [1] and Go Grid [3], the potential of clouds
remains largely unexplored. To address this issue, in this paper we present a performance
analysis of cloud computing services for scientific computing.
The cloud computing paradigm holds good promise for the performance-hungry
scientific community. Clouds promise to be a cheap alternative to supercomputers and
specialized clusters, a much more reliable platform than grids, and a much more scalable
platform than the largest of commodity clusters or resource pool. Clouds also promise to
“scale by credit card,” that is, scale up immediately and temporarily with the only limits
imposed by financial reasons, as opposed to the physical limits of adding nodes to cluster or
even supercomputers or to the financial burden of over-provisioning resources. Moreover,
clouds promise good support for bags-of-tasks, currently the dominant grid application type
[2]. However, clouds also raise important challenges in many areas connected to scientific
computing, including performance, which is the focus of this work.
There are two main differences between the scientific computing workloads and the
initial target workload of clouds, one in size and the other in performance demand. Top
scientific computing facilities are very large systems, with the top ten entries in the Top500
Supercomputers List totalling together about one million cores. In contrast, cloud computing services were designed to replace small-to-medium size enterprise data centres with 10-20% utilization. Also, scientific computing is traditionally a high-utilization workload, with production grids often running at over 80% utilization [4] and parallel production infrastructures (PPIs) averaging over 60% utilization [2]. Scientific workloads usually require top performance and HPC capabilities. In contrast, most clouds use virtualization to abstract away from the actual hardware, increasing the user base but potentially lowering the attainable performance. Thus, an important research question arises: is the performance of clouds sufficient for scientific computing? Though early attempts to characterize clouds and other virtualized services exist [2,1,4,3], this question remains largely unexplored. Our main contribution towards answering it is threefold:
Cloud Performance
Method
We design a performance evaluation method that allows an assessment of clouds, and a
comparison of clouds with other scientific computing infrastructures such as grids and PPIs.
To this end, we divide the evaluation procedure into two parts, the first cloud-specific, the
second infrastructure-agnostic.
Cloud-specific evaluation
An attractive promise of clouds is that there are always unused resources, so that they can be obtained at any time without additional waiting time. However, the load of other large-scale systems varies over time due to submission patterns; we want to investigate if large clouds can indeed bypass this problem. Thus, we test the duration of resource acquisition and release over short and long periods of time. For the short periods, one or more instances of the same instance type are repeatedly acquired and released during a few minutes; the resource acquisition requests follow a Poisson process with arrival rate λ = 1s. For the long periods, an instance is acquired then released every 2 minutes over a period of one week, and hourly averages are aggregated from the 2-minute samples taken over a period of at least one month. A small simulation sketch of the short-period test is given below.
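The following is a minimal, self-contained sketch of the short-period test: acquisition requests are generated by a Poisson process (exponentially distributed inter-arrival times, assuming the stated rate means one request per second), and the acquire/release calls are placeholders standing in for a real provider SDK.

```python
import random, time

def acquire_instance():
    """Placeholder for the cloud API call that provisions one VM instance;
    a real test would invoke the provider's SDK here."""
    time.sleep(random.uniform(0.05, 0.2))      # pretend provisioning delay

def release_instance():
    """Placeholder for releasing the instance."""
    time.sleep(random.uniform(0.01, 0.05))

def short_period_test(rate_per_s=1.0, duration_s=10.0):
    """Issue acquire/release requests following a Poisson process and record
    the observed acquisition latencies (virtual clock, so no real waiting
    between arrivals)."""
    latencies, clock = [], 0.0
    while True:
        clock += random.expovariate(rate_per_s)  # next arrival time
        if clock >= duration_s:
            break
        start = time.perf_counter()
        acquire_instance()
        latencies.append(time.perf_counter() - start)
        release_instance()
    return latencies

lat = short_period_test(rate_per_s=1.0, duration_s=10.0)
if lat:
    print(f"{len(lat)} requests, mean acquisition time {sum(lat)/len(lat):.3f}s")
```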
One of the other important problems in secret sharing schemes is to design efficient homomorphic sharing schemes, which were first introduced by Benaloh [3]. Desmedt and Frankel [7] proposed a black-box secret sharing scheme for which the distribution matrix and the reconstruction vectors are defined over the integer ring Z and are designed independently of the group G from which the secret and the shares are sampled. Cramer and Fehr [8] then continued the study by showing that the optimal lower bound for the expansion factor could be achieved in general Abelian groups. Secret sharing schemes for which the secret distribution and reconstruction phases are based on XOR operations on binary strings could be converted to secret sharing schemes on several black-box Abelian groups on binary strings; thus our results could be adapted to certain black-box based secret sharing schemes. In this paper, we introduce the concept of the update complexity of secret sharing schemes. The update complexity of a secret sharing scheme is defined as the average number of bits of the shares affected by a change of a single master secret bit. For a threshold secret sharing scheme, the lower bound for the update complexity is straightforward to derive; in this paper, we show that our schemes achieve this lower bound. It should be noted that traditionally the efficiency of secret sharing schemes has been extensively studied to reduce the bounds on share sizes and the computational cost of share distribution and secret reconstruction. However, we are not aware of any research that addresses the update cost of secret sharing schemes. Since our scheme is exclusive-or based, we achieve the optimal information-theoretic bound on share sizes (i.e., our scheme is an ideal secret sharing scheme) and have the most efficient computational cost in the share distribution and secret reconstruction processes. The update complexity of traditional secret sharing schemes may not be a serious concern, since secret sharing schemes are normally used for sharing a short secret (e.g., 1,000 bits). However, our scheme is based on exclusive-or operations only. Thus it is possible to use our secret sharing schemes to share large amounts of data in distributed cloud environments, and it becomes important to consider the update complexity. For example, if a user stores 1 GB of data in the cloud using our scheme and changes 1 KB of the data, then she does not need to download the entire shares from all servers and reconstruct the shares (for traditional secret sharing schemes she has to do that). In our scheme, all she needs to do is to update at most 1 KB of data on each cloud server. Due to the expensive cost of the field operations of secret sharing schemes such as Shamir's scheme, secret sharing schemes have traditionally been used for the distribution of secret keys, which are then used to encrypt the actual data. Since our schemes are based on linear numbers of XOR operations and are extremely efficient, it is possible to share the massive data directly using secret sharing schemes. In other words, the data stored at different locations (e.g., cloud servers) are shares of the original data, and only XOR operations on bits are needed for distributing massive data. A minimal illustration of XOR-based sharing and its update cost appears below.
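The following is a minimal sketch of plain n-out-of-n XOR sharing (not the paper's BP-XOR or LDPC constructions) meant only to illustrate the update-complexity point: changing one block of the secret requires XOR-ing a small delta into the corresponding block of a share, not re-sending whole shares.

```python
import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def share(secret, n):
    """Split `secret` into n shares using only XOR: n-1 random shares plus a
    final share chosen so that the XOR of all n shares equals the secret."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def reconstruct(shares):
    acc = bytes(len(shares[0]))
    for s in shares:
        acc = xor_bytes(acc, s)
    return acc

secret = b"a large file where only this block changes"
shares = share(secret, n=3)
assert reconstruct(shares) == secret

# Update one 5-byte block of the secret: XOR the delta between the old and
# new block into the matching block of a single share; nothing else changes.
pos, new_block = secret.find(b"block"), b"chunk"
delta = xor_bytes(secret[pos:pos + 5], new_block)
shares[0] = (shares[0][:pos]
             + xor_bytes(shares[0][pos:pos + 5], delta)
             + shares[0][pos + 5:])
new_secret = secret[:pos] + new_block + secret[pos + 5:]
assert reconstruct(shares) == new_secret
print("updated 5 bytes of one share instead of re-sending whole shares")
```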
System Architecture and Elements of Inter Cloud:
The figure shows the high-level components of the service-oriented architectural framework, consisting of clients' brokering and coordinator services that support utility-driven federation of clouds: application scheduling, resource allocation and migration of workloads. The architecture cohesively couples the administratively and topologically distributed storage and compute capabilities of Clouds as parts of a single resource-leasing abstraction. The system will ease cross-domain capability integration for on-demand, flexible, energy-efficient, and reliable access to the infrastructure based on emerging virtualization technologies [8]. Every client in the federated platform needs to instantiate a Cloud Brokering service that can dynamically establish service contracts with Cloud Coordinators via the trading functions exposed by the Cloud Exchange.
Cloud Coordinator (CC):
The Cloud Coordinator service is responsible for the management of domain-specific enterprise Clouds and their membership in the overall federation, driven by market-based trading and negotiation protocols. It provides a programming, management, and deployment environment for applications in a federation of Clouds. The resource management components of the Cloud Coordinator service include Scheduling & Allocation, Monitoring, the Allocator, the Scheduler, and the Market & Policy Engine. The Cloud Coordinator exports the services of a cloud to the federation by implementing basic functionalities for resource management such as scheduling, allocation, (workload and performance) modelling, market enabling, virtualization, dynamic sensing/monitoring, discovery, and application composition, as discussed below:
Scheduling and Allocation:
This component allocates virtual machines to the Cloud nodes based on the user's QoS targets and the Cloud's energy management goals. On receiving a user application, the scheduler does the following: (i) consults the Application Composition Engine about the availability of the software and hardware infrastructure services required to satisfy the request locally; (ii) asks the Sensor component to submit feedback on the local Cloud nodes' energy consumption and utilization status; and (iii) enquires with the Market and Policy Engine about the accountability of the submitted request. A request is termed accountable if the concerned user has available credits in the Cloud bank and, based on the specified QoS constraints, the establishment of an SLA is feasible. In case all three components reply favourably, the application is hosted locally and is periodically monitored until it finishes execution. Data centre resources may deliver different levels of performance to their clients; hence, QoS-aware resource selection plays an important role in Cloud computing. Additionally, Cloud applications can present varying workloads. It is therefore essential to carry out a study of Cloud services and their workloads in order to identify common behaviours and patterns, and to explore load forecasting approaches that can potentially lead to more efficient scheduling and allocation. In this context, there is a need to analyse sample applications and correlations between workloads, and to attempt to build performance models that can help explore trade-offs between QoS targets. A hedged sketch of the scheduler's three-step admission check is given below.
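A hypothetical sketch of the three-step admission check described above; the component classes and the utilization threshold are illustrative stand-ins, not part of any real Cloud Coordinator implementation.

```python
from dataclasses import dataclass

@dataclass
class Request:
    """A user application request with an illustrative QoS constraint."""
    user: str
    qos_deadline_s: float

class CompositionEngine:                         # stub components so that the
    def has_required_services(self, request):    # sketch runs standalone
        return True

class Sensor:
    def node_utilization(self):
        return 0.60                              # pretend 60% local utilization

class MarketEngine:
    def is_accountable(self, user, deadline_s):
        return True                              # user has credits, SLA feasible

def can_host_locally(request, composition, sensor, market):
    """Mirror the three checks: (i) local service availability, (ii) node
    energy/utilization feedback, (iii) accountability of the request."""
    if not composition.has_required_services(request):
        return False
    if sensor.node_utilization() > 0.85:         # illustrative threshold
        return False
    return market.is_accountable(request.user, request.qos_deadline_s)

req = Request(user="alice", qos_deadline_s=30.0)
print(can_host_locally(req, CompositionEngine(), Sensor(), MarketEngine()))  # True
```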
Market and Policy Engine:
The SLA module stores the service terms and conditions that are being supported by the
Cloud to each respective Cloud Broker on a per user basis. Based on these terms and
conditions, the Pricing module can determine how service requests are charged based on the
available supply and required demand of computing resources within the Cloud. The
Accounting module stores the actual usage information of resources by requests so that the
total usage cost of each user can be calculated. The Billing module then charges the usage
costs to users accordingly. Cloud customers can normally associate two or more conflicting
QoS targets with their application services. In such cases, it is necessary to trade off one or
more QoS targets to find a superior solution. Due to such diverse QoS targets and varying
optimization objectives, we end up with a Multi-dimensional Optimization Problem (MOP).
For solving the MOP, one can explore multiple heterogeneous optimization algorithms, such
as dynamic programming, hill climbing, parallel swarm optimization, and multi-objective genetic algorithms.
Application Composition engine:
This component of the Cloud Coordinator encompasses a set of features intended to help
application developers create and deploy [5] applications, including the ability for on-demand
interaction with a database backend such as SQL Data services provided by Microsoft Azure,
an application server such as Internet Information Server (IIS) enabled with secure ASP.Net
scripting engine to host web applications, and a SOAP driven Web services API for
programmatic access along with combination and integration with other applications and data.
Virtualization:
VMs support flexible and utility driven configurations that control the share of
processing power they can consume based on the time criticality of the underlying application.
However, the current approaches to VM-based Cloud computing are limited to rather inflexible configurations within a Cloud. This limitation can be solved by developing mechanisms for transparent migration of VMs across service boundaries, with the aim of minimizing the cost of service delivery (e.g., by migrating to a Cloud located in a region where the energy cost is low) while still meeting the SLAs. The Mobility Manager is responsible for dynamic migration of VMs based on the real-time feedback given by the Sensor service. Currently, hypervisors such as VMware [8] and Xen [9] have limitations in this respect.
REFERENCES:
[1] The Cloud Status Team. JSON report crawl, Dec. 2008. [Online]. Available: http://www.cloudstatus.com/
[2] E. Deelman, G. Singh, M. Livny, J. B. Berriman, and J. Good. The cost of doing science on the cloud: the Montage example. In ACM/IEEE Supercomputing Conference on High Performance Networking and Computing (SC), page 50. IEEE/ACM, 2008.
[3] InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services.
[4] L. Kleinrock. A Vision for the Internet. ST Journal of Research 2(1):4-5, Nov. 2005.
[5] M. Armbrust, A. Fox, R. Griffith, A. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, M. Zaharia. Above the Clouds: A Berkeley View of Cloud Computing. University of California at Berkeley, USA. Technical Report UCB/EECS-2009-28, 2009.
[6] R. Buyya, C. Yeo, S. Venugopal, J. Broberg, and I. Brandic. Cloud Computing and Emerging IT Platforms: Vision, Hype, and Reality for Delivering Computing as the 5th Utility. Future Generation Computer Systems, 25(6): 599-616, Elsevier Science, Amsterdam, The Netherlands, June 2009.
[7] S. London. Inside Track: The high tech rebels. Financial Times, 6 Sept. 2002.
[8] The Reservoir Seed Team. Reservoir: An ICT Infrastructure for Reliable and Effective Delivery of Services as Utilities. IBM Research Report, H-0262, Feb. 2008.
[9] R. Buyya, D. Abramson, J. Giddy, and H. Stockinger. Economic Models for Resource Management and Scheduling in Grid Computing. Concurrency and Computation: Practice and Experience, 14(13-15): 1507-1542, Wiley Press, New York, USA, Nov.-Dec. 2002.
[10] A. Weiss. Computing in the Clouds. Networker, 11(4):16-25, ACM Press, New York, USA, Dec. 2007.
Internet Of Things – Prospects and Challenges
Prof. Reena Partha Nath
Assistant Professor,
Sinhgad Institute of Business Administration and Computer Application, Lonavala, Pune, Maharashtra
reenanath29@gmail.com
Prof. Archana A. Borde
Assistant Professor,
Sinhgad Institute of Business Administration and Computer Application, Lonavala, Pune, Maharashtra
archanaajitborde@gmail.com
ABSTRACT:
In this paper the authors express their views regarding the concept of IoT (Internet of Things). IoT is a combination of hardware, network and software which can handle a task for someone who is busy with another task at another location.
By 2020, India is expected to increase its use of IoT by up to 37%. The Internet of Things (IoT) is the network of physical objects or "things" embedded with electronics, software, sensors, and network connectivity, which enables these objects to collect and exchange data.
All across the globe, people are connecting to the Internet to access information, communicate with other people, and do business. But it is not just people that are using the Internet: objects use it too. Machine-to-machine communication is widely used in the manufacturing and energy sectors to track machinery operations, report faults and raise service alerts.
The digital space has witnessed major transformations in the last couple of years and, as per industry experts, will continue to evolve. The latest entrant to the digital space is the Internet of Things (IoT). IoT can also be defined as the interplay of the software, telecom and electronic hardware industries, and it promises to offer tremendous opportunities for many industries.
With the advent of the Internet of Things, fed by sensors soon to number in the trillions, working with intelligent systems in the billions, and involving millions of applications, the IoT will drive new consumer and business behavior that will demand increasingly intelligent industry solutions, which, in turn, will drive trillions of dollars in opportunity for the IT industry and even more for the companies that take advantage of the IoT.
The Internet of Things is growing rapidly, and it is forecast that, by 2020, it could include between 30 billion and 75 billion things, ranging from smart bands, toys and photo frames to medical devices, earthquake sensors and aeroplanes.
Keywords: Globe, Information, Internet, Network, Opportunity, Sensors, Things.
Introduction:
The Internet of Things (IoT) is an environment in which objects, animals or people are
provided with unique identifiers and the ability to transfer data over a network without
requiring human-to-human or human-to-computer interaction. IoT has evolved from the
convergence of wireless technologies, micro-electromechanical systems (MEMS) and the
Internet. The concept may also be referred to as the Internet of Everything.
A thing, in the Internet of Things, can be a person with a heart monitor implant, a farm
animal with a biochip transponder, an automobile that has built-in sensors to alert the driver
when tire pressure is low -- or any other natural or man-made object that can be assigned an IP
address and provided with the ability to transfer data over a network. So far, the Internet of
Things has been most closely associated with machine-to-machine (M2M) communication in
manufacturing and power, oil and gas utilities. Products built with M2M communication
capabilities are often referred to as being smart.
Kevin Ashton, cofounder and executive director of the Auto-ID Center at MIT, first
mentioned the Internet of Things in a presentation he made to Procter & Gamble in 1999.
Here’s how Ashton explains the potential of the Internet of Things:
“Today computers -- and, therefore, the Internet -- are almost wholly dependent on
human beings for information. Nearly all of the roughly 50 petabytes (a petabyte is 1,024
terabytes) of data available on the Internet were first captured and created by human beings by
typing, pressing a record button, taking a digital picture or scanning a bar code.
Although the concept wasn't named until 1999, the Internet of Things has been in
development for decades. The first Internet appliance, for example, was a Coke machine at
Carnegie Mellon University in the early 1980s. The programmers could connect to the machine
over the Internet, check the status of the machine and determine whether or not there would be
a cold drink awaiting them, should they decide to make the trip down to the machine.
IoT is moving from hype to reality, even though consumer use cases are limited
Internet of Things (IoT) is one of the most talked about technology trends today. There
is a broad consensus among technology vendors, analysts and other stakeholders that IoT
would have a significant impact on the technology landscape and society in the coming years.
According to technology research firm Gartner, the IoT devices installed base (excluding PCs, tablets and smartphones) will grow to 26 billion units in 2020, up from just 0.9 billion in 2009.
However, some voices warn that IoT is overhyped today, and that it will take a few more years for the real use cases and benefits of IoT to become visible. Some of this scepticism is driven by the fact that we are yet to see real applications of IoT at the end-consumer level. Use cases such as "refrigerators that order milk from the supermarket once the level comes down" are still restricted to prototypes and academic discussions. Apart from a few fitness-related wearable devices, automobile telematics (which in many ways is less "visible" to consumers), and "smart home" systems, we are yet to see other major consumer IoT adoption stories.
Commercial IoT has much better prospects in India:
Ever since the Internet of Things (IoT) went live, the rule of thumb for the future has become "anything that can be connected will be connected." IoT refers to a network of identifiable devices and machinery of all forms and sizes with the intelligence to connect, communicate and control or manage each other seamlessly to perform a set of tasks with minimum intervention.
Today, the fixed Internet connects around 1 billion users via PCs. The mobile Internet will soon link 6 billion users via smartphones. As per a KPMG report, IoT is expected to connect 28 billion "things" to the Internet by 2020.
Considering the importance of this phenomenon, IoT occupies a prominent position in the "Digital India" program launched by Narendra Modi. With a vision to create an IoT industry of $15 billion by 2020, the government has drafted a strategic roadmap to build domain competency, encourage budding entrepreneurs, buffer product failure, energize research acumen, and thereby place India on the global IoT map.
If adequately supported by citizens, infrastructure and governance, IoT has the potential to provide substantial benefit in various domains.
Source: http://deity.gov.in/content/internet-things
How IoT would prove to be beneficial for the common man of the country:
Prospects:
Agriculture: In a country where farmers committing suicide on account of low crop yield or unpredictable market conditions is quite regular, it is time to take IoT to the farm. For crops, smart farming means preparing the soil, planting, nurturing and harvesting at precisely the best time via access to market data. Currently mKRISHI, a Rural Service Delivery platform developed by TCS, provides advisory services to farmers. However, we require a mechanism that provides farmers with on-demand information on the basis of their context, which can be sensed through a network of IoT sensors. This can be used to optimize efficiency, maximize productivity and ensure the quality of produce. Crop-specific information and alerts regarding current/future weather conditions, soil type, fertilizer, pest control, etc. from industry experts should be made accessible via a mobile device.
Disaster Prediction and Management: India is a large country and is prone to a
number of disasters such as river floods, drought, cyclones, landslides and more. IoT has the
potential to serve a critical, life-saving, role in the event of disaster. Though it is not possible
to completely avoid disaster, impact can be minimized by using a combination of GIS, remote
sensing and satellite communication.
GIS applications such as Hazard Mapping can be used by meteorological departments
to quickly communicate the risk. Remote sensing can speedily gather data across channels to
signal impending disaster. For instance, a combination of sensors can provide information on
the condition of railway tracks that can be used to avoid derailment incidents.
Transportation: Metros and cities of India are famous for snarling traffic jams. The
problem worsens as population and the number of vehicles increase and the city transportation
systems and infrastructures stay limited.
Need of the hour is intelligent transportation systems that consolidate traffic data from
various sources such as traffic cameras, commuters' mobile phones, vehicles' GPS, sensors on
the roads. Analysis of this traffic information can provide near-real-time insights about traffic
performance, conditions and incidents using correlation with historical data. By monitoring
traffic operations and incidents through a centralized system, we can create a geospatial map
that graphically displays road network, traffic volume, speed and density at different city
locations.
Health & Wellness: To ensure affordable, accessible and quality healthcare, IoT can evolve to a P2M (Person to Machine) relationship, enabled by a powerful interaction between smart objects and people in areas of health care, monitoring, diagnostics, medication administration and fitness. Chronic disease management is possible via wearable devices that monitor a patient's physiological conditions (blood pressure, blood glucose levels, breathing, pulse, etc.). Hospitals can use IoT for remote monitoring of personnel, disease management, inpatient care, patient-specific record databases and more. Also, with today's busy life, connected devices can help keep a tab on the health of people (elders/patients) at home, as well as alert medical staff in case of emergencies.
Safety and Security: As the cities and people are growing, so are crimes! IoT can help
make our homes secure through smart home solutions that not only provide visual data of the
visitors but also check for intruders, provide remote alerts on mobiles, monitor any gas leakage in the house, and check for water logging or other environmental conditions. In addition, the connected devices, when deployed as part of city infrastructure, can be used to keep a tab on crimes, either through the involvement of fellow citizens and/or police forces.
Challenges :
Consumer IoT adoption would be slow in India
Due to various challenges, consumer IoT adoption would be slow in India. Some of these IoT adoption challenges (data security, lack of standardization, data ownership issues, ROI, etc.) are really not unique to India. Apart from these challenges, IoT in India, especially in the consumer space, would need to reckon with a few other hurdles. These are:
Internet availability / bandwidth / reliability: Even today Internet connectivity is a
major challenge in India. For consumer IoT adoption – this would remain a major challenge.
Cost of IoT enabled systems and devices: Even products such as wearable fitness
bands are yet to take off in India, and price is a key reason. Indian consumers are very
selective in terms of where they would invest when it comes to technology.
Lack of vendor activity: Global vendors, often mistakenly, assume that Indian
consumers are “not ready” for advanced products. This is very much evident in the IoT space,
with hardly any kind of vendor activity today. This in turn has led to low awareness levels of
IoT devices and systems among consumers.
Overall infrastructure challenges: Apart from the internet, supporting infrastructure such as smart grids, traffic systems, etc. is far from ready for IoT.
The challenges listed above become less of an issue in the commercial space. It is not that these challenges (especially internet connectivity and the cost of IoT) do not exist there; they are simply easier for commercial organizations to address. Even globally, IoT adoption and usage is much higher in the commercial space.
Conclusions:
The reality is that the IoT allows for virtually endless opportunities, many of which we cannot even think of or fully understand the impact of today. While we have seen good adoption at the personal level, the role of IoT is critical to the way we manage our cities and infrastructure. It is therefore a must that citizens, the government and corporates participate and collaborate in leveraging IoT to solve the social and economic challenges we face.
The problem is, people have limited time, attention and accuracy, all of which means they are not very good at capturing data about things in the real world. If we had computers that knew everything there was to know about things, using data they gathered without any help from us, we would be able to track and count everything and greatly reduce waste, loss and cost. We would know when things needed replacing, repairing or recalling and whether they were fresh or past their best.
References:
• http://www.businessinsider.in/How-the-Internet-of-Things-can-change-the-India-welive-in-on-its-head/articleshow/48871985.cms
• http://deity.gov.in/content/internet-things
• http://www.firstpost.com/business/internet-things-indian-perspective-2057171.html
• http://whatis.techtarget.com/definition/Internet-of-Things
Multilingual Issues and Challenges in Global E-Commerce Websites
Ms. Sonali B Singhvi
Student, B.Com VI Sem, Fatima Degree College,
Keshwapur, Hubli-580023
sonali.singhvi23@gmail.com
Prof. Vijaykumar Pattar
Asst. Professor, Dept. of Computer Application, Fatima Degree College,
Keshwapur, Hubli-580023
vijaypattar5@gmail.com
ABSTRACT:
In this paper we examine the importance of e-commerce in the global economy. The paper introduces the issues and growing trends in global e-commerce as well as the potential challenges and opportunities faced by multinationals practicing e-commerce on a global scale. We all agree that English is the most popular language in the world, spoken by a major chunk of the population as a native or second language. India has been a profitable e-commerce region in the last 7 years, so many venture capitalists, angel investors, private companies and high net-worth individuals are pouring money into e-commerce, no matter how small or big the company. But there are challenges that will test the sustainability of this growth. With a population of 1.28 billion, India is a goldmine waiting to be explored, but logistics has been and still is a major issue, because of which e-commerce companies have not been able to reach all pockets. However, localization of an e-commerce website is not an easy task. Content writers for e-commerce websites often make a few common mistakes: direct translation of words, phrases and sentences, which can alter the meaning; not paying attention to language variants; not understanding the culture of the target audience; etc.
Introduction
Global e-commerce is expanding rapidly, and several trillion dollars are exchanged annually over the web. India has an internet user base of about 375 million (30% of the population) as of Q2 2015. Despite being the second largest user base in the world, behind only China (650 million, 48% of population), the penetration of e-commerce is low compared to markets like the United States (266 M, 84%) or France (54 M, 81%), but it is growing at an unexpected rate, adding around 6 million (0.5% of population) new entrants every month. The industry consensus is that growth is at an inflection point. In India, cash on delivery is the most preferred payment method, accounting for 75% of e-retail activity. Demand for international consumer products (including long-tail items) is growing much faster than in-country supply from authorised distributors and e-commerce offerings. India's e-commerce market was worth about $3.8 billion in 2009 and went up to $12.6 billion in 2013.
In 2013, the e-retail segment was worth US$2.3 billion. About 70% of India's e-commerce market is travel related. According to Google India, there were 35 million online shoppers in India in Q1 2014, and the number is expected to cross the 100 million mark by the end of 2016. Several Indian e-commerce companies have managed to achieve billion-dollar valuations, namely Flipkart, Snapdeal, InMobi, Quikr, OlaCabs and Paytm. Many researchers have regarded language barriers as one of the major problems for searching, accessing, and retrieving multilingual information and knowledge on the Internet, and have looked at the role that language
played in creating difficulties for online information access as well as the impact of language
technologies on the development and maintenance of multilingual websites.
This article will briefly address issues related to:
(1) language, which is one of the many elements that make up culture;
(2) culture, which greatly affects the functionality and communication of multilingual websites; and
(3) technology, which enables the multilingual support of e-commerce websites, focusing on the challenges and issues of website multilingualism in global e-commerce.
Essential elements
1) Use global templates for consistent brand expression across all sites.
2) Design global gateways with the user in mind.
3) Build sites to accommodate a range of global internet speeds.
4) Internationalize to ensure exceptional purchasing and shipping experiences.
5) Format time and date conventions to support global variations.
6) Use images, icons, and other elements that are culturally appropriate and contextually
relevant.
7) Research the cultural connotations of colors—and choose accordingly.
8) Create localization-ready graphics that avoid embedded text.
9) Create flexible UIs and have them reviewed by a desktop publishing specialist.
General Challenges:
1. High Customer Acquisition Cost: One area that most entrepreneurs consistently underestimate in their business plans is the cost of acquiring a customer. Most business plans assume heavy reliance on social media and internet advertising, and the general myth is that the cost of acquisition on the internet is very low. Over time, when customers start coming directly to the site, the cost of customer acquisition falls significantly, but until then the business owners need to cover the cost.
2. High Churn / Low Loyalty: It is often said that the Indian market is very large and that we have just hit the tip of the iceberg in terms of customer adoption. The size of the Indian market is not in doubt, but the problem is low loyalty: the big brands are yet to be created on the internet, and hence brand loyalty is very low.
3. Cash On Delivery: Cash on delivery (COD) has evolved out of the low penetration of credit cards in India. Most Indian e-commerce companies offer COD as one of the modes of payment for buyers, and 30%-50% of buyers take advantage of it while purchasing products and services over the internet. COD was introduced to counter the payment-security concerns of online transactions, but this mode has been proving costly and loss-making for businesses.
4. High Cash Burn Rate: The capital requirement for any e-commerce venture is very high, contrary to the popular belief that it is easy to set up an electronic shop. Leaders in the e-commerce space (ones that have raised money, have large teams and are aggressively pursuing growth) are spending $1-2 million (Rs 5-10 crore) a month, including on marketing, overheads and salaries. At this rate of burn, smaller firms with scant capital are unable to cope; therefore it is important to raise money early in the game.
5. High Inventory / Poor Supply Chains: Most e-commerce ventures complain of excess inventory and the absence of a liquidation market in India. The poor supply chain compounds inventory problems due to the unpredictability of supply. The cost of carrying inventory is
very high, and successful ventures will need to tackle supply chain issues if they really want to run a business at scale.
6. Logistics: Logistics in India is good, with companies like Aramex, TNT and BlueDart, but it is not good for COD, which is why most players have backed it up with their own delivery service. The postal service in India is unfortunately not yet at the same level.
7. Banking and Gateway: Getting a payment gateway takes a sheer amount of effort, and failing transactions remain an issue. Since it is all online and the RBI has been efficient, this can be expected to improve very fast. In the long run, COD is not the way to go.
8. Poor Knowledge and Awareness: As far as the proportion of internet users is concerned, the situation is not an admirable one. The majority of India's rural population is unaware of the internet and its uses, and surprisingly, even most internet-savvy urban users have poor knowledge of online business and its functionality.
9. Online Transactions: Most Indian customers do not possess plastic money, credit cards, debit cards or net banking, which is one of the prime reasons curtailing the expansion of e-commerce.
10. Online Security: Pirated software leaves room for virus, malware and Trojan attacks, and it is extremely risky to carry out online transactions on such systems, which can disclose or leak sensitive details of users' credit cards and online banking. Such laxity should not be tolerated in the Indian e-commerce sector.
11. Fear Factor: Fear of making online payments is a universal psychological factor among Indian customers. With the spread of knowledge about online transactions and their reliability, some percentage of consumers have overcome this fear and are fearlessly engaging in online shopping.
12. Tax Structure: The tax rate system of the Indian market is another reason for the lower rate of e-commerce in India compared to developed countries like the USA and the UK. In those countries the tax rate is uniform for all sectors, whereas the tax structure of India varies from sector to sector.
13. 'Touch and Feel' Factors: Indian customers are more comfortable shopping for merchandise physically; they have an inclination to choose a product by touching it directly. Thereby, Indian buyers are traditionally more inclined to do online ticketing and booking in the travel sector, books and electronics.
14. Product & Quality: Just because it is India, you cannot sell sub-standard goods online; you need to maintain quality and offer good products. Ensure that whatever you sell is of good quality and that the website and the product are in sync; showing real pictures of RayBan sunglasses and selling a fake will not do. Be a genuine seller and sell what you say; this also helps build goodwill and trust.
15. Goodwill & Trust: Due to the increase in the number of fake e-commerce stores, people usually do not feel safe transacting on a new website, so you really need to put in the effort to build goodwill and trust for your brand. If you already have a retail chain and plan to go online, this can help; for a new player it is a challenge. Be patient, keep making your presence felt, and share your complete details such as office address, phone, email and VAT number. It takes years to build goodwill, so do not expect thousands of orders within a few weeks of launch.
16. Online & Offline Competition: If you have a retail chain and sell a product both online and offline, make sure you sell it online at a lower price, because your overhead cost is lower than that of the offline store; hence you need to part with some of your profit margin.
Issues in Marketing
1. In the information age represented by the Internet and the World Wide Web, the language representation online has evolved from the monolinguality of the English language into the multilingualism of more than 1,000 languages (Crystal, 2001).
2. As a natural outcome, multilingual Web sites have become a common ground for online communication for peoples across national boundaries.
3. On the Internet, Web users spend more time and come back more often to the Web sites that are in their native language and appeal to their cultural sensibilities. Visitors to a Web site would stay twice as long if the content on the Web site were available in their own language.
4. Their willingness to buy something online increases by at least four times if the Web site is localized to meet their needs to thoroughly research the product and the company.
5. Digital Literacy and Consumer Connect: With Narendra Modi's Digital India campaign, the e-commerce market is showing signs of a boom, but the digital literacy of consumers fails to show the same sign. A major chunk of the population, about 147 million people, is still using feature phones. The stalwarts of the online marketplace should try to tap this segment by making websites easier to access and by helping improve digital literacy. They should also fix loopholes in areas such as customer service speed, return policies and payment security to lure more customers to the website.
6. Winner Takes It All, No Room for Small Players: Flipkart, Amazon and Snapdeal are deep-pocketed major players, making it tough for small players to survive in the market. The mortality rate in Indian e-commerce is rising day by day as small retailers struggle to attract investors or customers alike.
7. No Profits in Sight: Flipkart, Snapdeal and others have taken in immense capital from investors and are burning it on customer acquisition and heavy discounts. The losses incurred by the top players, and the gap between their valuations and the funds raised in the market, show that profits are not in sight, which will threaten the overall business model if break-even is not met.
8. Budget: We have come across people who have a very low budget for an e-commerce store; while planning it they have either forgotten about the marketing budget or are not aware of it. You can build an e-commerce store for Rs 5,000 (Start an E-Commerce Store at Rs 5000/-), but what about marketing? If you do not market it, it will die.
9. Marketing: As mentioned above, due to big players like Flipkart, Myntra, etc., digital marketing is not so cheap anymore. Hence, if you are looking for heavy traffic to a recently launched e-commerce store, you need to market it well using well-known channels like PPC, SEO, social media, display advertising and email marketing. All of these need money, so you technically need good financial backing. A basic rule of thumb: if opening a retail outlet in GK, Delhi would cost you approximately Rs 50 lakh, then you should have at least a budget of Rs 5 to 15 lakh for an e-commerce store to start with; anything below this is not worth trying.
Technical issues in creating a multilingual website
Designers face problems related to algorithms, and template issues make the process tough. The developer should also be fluent in the culture and language of the target region.
Technical issues
• Language detection: Suitable language detection is one of the biggest problems in e-commerce websites. The developer is in a dilemma whether to serve the user's preferred language or a default language based on location.
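One common way to resolve this dilemma, sketched below in Python, is to parse the browser's Accept-Language header and fall back to a default when no supported language matches; the supported-language list and the default are assumptions made for the example.

SUPPORTED = ["en", "hi", "kn"]   # languages the site actually offers (assumed)
DEFAULT = "en"                   # fallback when nothing matches

def pick_language(accept_language_header):
    """Pick the best supported language from an Accept-Language header."""
    choices = []
    for part in accept_language_header.split(","):
        piece = part.strip().split(";")
        lang = piece[0].strip().lower().split("-")[0]   # 'kn-IN' -> 'kn'
        q = 1.0
        if len(piece) > 1 and piece[1].strip().startswith("q="):
            try:
                q = float(piece[1].strip()[2:])
            except ValueError:
                q = 0.0
        choices.append((q, lang))
    for _, lang in sorted(choices, reverse=True):
        if lang in SUPPORTED:
            return lang
    return DEFAULT

print(pick_language("kn-IN,kn;q=0.9,en;q=0.8"))   # -> 'kn'

Location-based detection (for example from the IP address) can be layered on top, but the header-first approach respects the user's own preference.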
• Encoding: The UTF-8 character set is used so that text in any language can be encoded; it is declared with the following meta tag:
<meta charset="UTF-8"/>
• Language declaration: One of the most important tasks in a multilingual website is the language declaration in the code. Generally developers use a single language per page, declared by adding a language code such as "kn" to the html tag.
Eg: <html lang="kn">
• URL structure: Choosing the URL scheme is an important part of a multilingual website, and a developer should follow good practice. A common best practice is a language prefix in the path, e.g. http://www.testsite.com/kn/main.php
• Design problems: Choosing multilingual templates that can hold the different languages is one of the biggest problems for the developer. Language differs from region to region: a word that is short in one language may have a long translation in another, so navigation elements need to accept words of different lengths. Font size also creates problems, e.g. Dravidian and Devanagari scripts need to be displayed somewhat bigger than English to remain readable.
Search Engine Optimization for E-commerce Websites
SEO is the process of affecting the visibility of a website in search engine results, generally referred to as natural or organic results. In internet marketing SEO plays a major role: based on the keywords users type into the search engine, the engine ranks the results. On-page optimization refers to the website elements that comprise a web page, such as HTML code, textual content and images.
Need for SEO
Nowadays businesses are switching from print media to e-media advertisements. SEO strategies work 24/7, 365 days a year, whereas traditional ads in newspapers and on TV are time sensitive. Internet marketing and SEO are the forms of marketing that put businesses and services in front of a targeted market of genuine customers who are actively planning to buy a product. In other words, an SEO strategy is not a cost; it is an investment for the business.
Technical Issues in Establishing SEO
In an e-commerce site audit the developer will find many issues related to SEO, such as server issues, poor navigation, improper redirects and query-related issues.
• Server Issues
These issues arise due to misconfiguration of server software, especially in the HTTP headers. Normally an 'Accept' header is sent by the client (your browser) to state which file types it understands, and only rarely does this modify what the server does. When the server sends a file, it always sends a 'Content-Type' header to specify whether the file is HTML, PDF, JPEG or something else. Googlebot sends 'Accept: */*' when it is crawling. Changing the browser's user agent to Googlebot does not change these headers, and tools like web snippets also do not send the same headers as Googlebot, so a misconfigured server can return errors that only show up for the crawler and hurt SEO.
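As a quick audit step, one can fetch a page with crawler-like headers and inspect the Content-Type the server returns. The sketch below uses only the Python standard library; the URL shown is a placeholder.

import urllib.request

def check_content_type(url):
    """Fetch a URL with crawler-like headers and report the Content-Type sent back."""
    req = urllib.request.Request(url, headers={"Accept": "*/*",
                                               "User-Agent": "seo-audit-sketch"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        ctype = resp.headers.get("Content-Type", "missing")
        print(url, "-> HTTP", resp.status, "Content-Type:", ctype)
        return ctype

# Example (placeholder URL):
# check_content_type("http://www.testsite.com/kn/main.php")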
• Poor Navigation
Poor engagement statistics paired with crawlability issues and other technical issues are all indications of low authority, according to Google. If a site is not useful for its visitors, it will not rank in the search engine.
• Improper Redirects
Sometimes an entire site has to switch to another site with a different URL. According to Google, a 301 redirect is the best way to point visitors to the correct page. If you have ever rebuilt your website without enlisting a reputable SEO, there is a good chance that links and URLs from your old site are not properly set up to connect to your new website. You should locate any 404 'not found' errors on your site and use 301s to direct users to the correct pages. Google Webmaster Tools makes it easy to find your 404s, so you can 301-redirect them to more relevant pages.
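A minimal sketch of such permanent redirects, assuming a Flask application and a purely hypothetical mapping from old URLs to new ones:

from flask import Flask, redirect

app = Flask(__name__)

# Hypothetical mapping of URLs from the old site to pages on the rebuilt site.
OLD_TO_NEW = {
    "/old-products.php": "/products/",
    "/summer-sale.html": "/offers/summer-sale/",
}

@app.route("/<path:old_path>")
def legacy(old_path):
    target = OLD_TO_NEW.get("/" + old_path)
    if target:
        # 301 tells browsers and search engines that the move is permanent.
        return redirect(target, code=301)
    return ("Not found", 404)

The same mapping can equally be expressed as server rewrite rules; the point is that every old URL answers with a single 301 to its new location.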
• Query-related issues
This problem arises especially in e-commerce websites, which are database driven. It may occur on other sites too, but e-commerce websites have many product attributes and filtering options, such as colour and product name, and tend to combine multiple parameters together, e.g.:
www.test.com/product-cat?colour=15&product=10
www.test.com/product-cat?product=10&colour=15
In the above example the page is the same but the URLs differ, so it could be interpreted as duplicate content. Google allocates crawl budget based on PageRank, so the site should make sure that this budget is being used in the most efficient way possible.
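One simple, illustrative defence is to normalise the parameter order so that equivalent URLs collapse to a single canonical form, which can then be used in a rel="canonical" tag. A Python sketch using only the standard library, with the example URLs from above:

from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def canonical_url(url):
    """Sort query parameters so equivalent URLs collapse to one canonical form."""
    parts = urlsplit(url)
    params = sorted(parse_qsl(parts.query))          # order-independent
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(params), ""))

a = canonical_url("http://www.test.com/product-cat?colour=15&product=10")
b = canonical_url("http://www.test.com/product-cat?product=10&colour=15")
print(a == b)   # True: both variants map to one canonical page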
Conclusion
We do not think any of the above-mentioned issues is unbeatable; the opportunity is too huge to let go. Entrepreneurs need to be disciplined, sticking to the basics of business without getting carried away by the rush of capturing the opportunity. The biggest questions faced by e-commerce sites in India are how long-term the game will be, how large the market is, how fast it will grow, and how to remain profitable. This industry is huge and full of endless opportunities; therefore, understand the basics of online business, focus on long-term growth and keep moving ahead. Until the government takes the initiative to improve our road and railway networks, it will be difficult to actually improve the logistics situation in India. To rise above all the shortcomings in the Indian e-commerce sector, retailers and the government need to join hands to work on increasing the penetration of the internet. Credit card diffusion should also be emphasised so that the risky cash-on-delivery mode can be restricted.
REFERENCES:
[1] https://en.m.wikipedia.org/wiki/E-commerce_in_India
[2] http://content.lionbridge.com/9-essential-elements-intelligent-multilingual-websitedesign/
[3] https://www.quora.com/What-are-the-challenges-faced-when-trying-to-reach-out-toglobal-consumers-Will-a-multilingual-website-suffice-needs-or-will-an-effective-CMShelp
[4] http://www.cmmlanguages.com/knowledge-centre/40-translation/83-multilingualcontent-writing-translation-services-for-ecommerce-websites
[5] http://www.clarity-ventures.com/blog/article/1701/10-key-considerations-formultilingual-ecommerce-clarity
[6] http://www.nutriactiva.com/multilingual-ecommerce-site-wordpress-ecwid-qtranslate/
[7] http://dejanseo.com.au/seo-for-multilingual-ecommerce-websites/
[8] http://www.websitebuilderexpert.com/how-to-build-a-multi-language-website/
[9] http://www.multilingualwebmaster.com/library/secrets.shtml
[10] https://assoc.drupal.org/blog/scharffshotmail.com/taking-your-e-commerce-site-globalchecklist
[11] http://www.designer-daily.com/getting-started-with-multilingual-websites-29260
[12] http://www.usanfranonline.com/resources/internet-marketing/internet-marketingstrategies/#
[13] http://www.statista.com/topics/2454/e-commerce-in-india/
[14] http://www.whatisseo.com/
[15] http://www.localwebsitedesign.com/ten-reasons-why-you-need-seo/
[16] http://searchengineland.com/the-ultimate-list-of-reasons-why-you-need-search-engineoptimization-121215
Study of Recommender System: A Literature Review
Mrs. A. S. Gaikwad
Bharati Vidyapeeth University, Pune
IMRDA, Sangli, India
anjaligaikwad11@rediffmail.com
Prof. Dr. P. P. Jamsandekar
Bharati Vidyapeeth University, Pune
IMRDA, Sangli, India
pallavi.jamsandekar@yahoo.com
ABSTRACT:
This paper emphasises the study of different recommender systems (RS) for assisting researchers in finding research papers for their literature review. It presents an analytical study of the technologies and algorithms implemented in different recommender systems. Looking at recommender systems, it becomes clear that such a system is not just about the algorithm but about the overall goal; 'recommender system' is an umbrella term for different types of systems that use various algorithms to achieve their goals. Recommender systems can use algorithms that are constraint based (question-and-answer conversational method), content based (CB, item description comparison), collaborative filtering (CF, user ratings and taste similarity), or hybrid (a combination of different algorithms). The collaborative filtering technique has gained in popularity over the years, and social networking aspects help to strengthen the filtering techniques. The hybrid technique combines collaborative filtering with content-based techniques to capitalize on the strengths of each method.
Keywords: Recommender System, Content-Based Filtering, Collaborative Filtering.
I. Introduction
Recommender systems offer a solution to the problem of information overload by providing a way for users to receive specific information that fulfils their information needs. These systems help people make choices that will impact their daily lives. According to Resnick and Varian [1], "Recommender Systems assist and augment this natural social process." As more information is produced, the need for and growth of recommender systems continue to increase. One can find recommender systems in many domains, ranging from movies (MovieLens.org) to books (LibraryThing.com) and e-commerce (Amazon.com). Research in this area is also growing to meet the demand, focusing on the core recommender technology and the evaluation of recommender algorithms. However, there is a need for user-centred research on recommender systems that looks beyond the algorithms at how recommendations are received and the impact of those recommendations on people's choices. Multiple techniques have been implemented to develop recommenders, but the recommendation process is still not fully meeting its objectives in various fields.
II. Literature Review
Research papers published during 2013-2015 are summarised year-wise below.
Research work published in 2013:
Simon Philip, P. B. Shola, "A Paper Recommender System Based on the Past Ratings of a User", International Journal of Advanced Computer Technology (IJACT), ISSN: 2319-7900.
In this paper the authors use an information retrieval model, TF-IDF (Term Frequency - Inverse Document Frequency), to determine the weight of a keyword or term in a research paper, i.e. how important that keyword or term is in the document. Cosine similarity is used to find the similarity between research papers. The paper presents a recommender system that suggests recommendations to the intended users based on the papers the users have liked in the past. The system adopts a content-based filtering technique to generate recommendations; it does not provide recommendations to an active user based on the past ratings of other users similar to the active user. The result of the implementation of the proposed algorithm was compared with the result of the existing system and found to give better recommendations, meaning that users get better recommendations based on the papers they have liked in the past than when they present their tastes or needs in the form of a query. [3]
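To illustrate the TF-IDF-plus-cosine-similarity idea described above (this is not the paper's own code), the sketch below ranks a few invented paper titles against a profile built from previously liked content, assuming scikit-learn is available:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative documents; in the paper these would be research papers.
papers = [
    "collaborative filtering for movie recommendation",
    "content based filtering using tf-idf term weights",
    "deep learning for image classification",
]
liked = "term weighting and content based recommendation"  # user's past likes

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(papers)        # TF-IDF weight per term and paper
profile = vectorizer.transform([liked])              # profile of liked content

scores = cosine_similarity(profile, doc_matrix)[0]   # similarity to each paper
for score, title in sorted(zip(scores, papers), reverse=True):
    print(round(score, 2), title)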
Huang Yu, Yao Dan, Luo Jing, Zhang Mu, "Research on Personalized Recommender System for Tourism Information Service", Computer Engineering and Intelligent Systems, ISSN 2222-1719 (Paper), ISSN 2222-2863 (Online), Vol. 4, No. 5, 2013.
In this paper the authors design an Apriori-based approach that uses a level-wise, sequential search to mine frequently occurring itemsets: the frequent k-itemsets are used to produce the candidate (k+1)-itemsets, implemented with the WEKA tool [4].
Atefeh Jajvand1, Mir Ali Seyyedi2, Afshin Salajegheh “A Hybrid Recommender System
for Service Discovery” International Journal of Innovative Research in Computer and
Communication Engineering (An ISO 3297: 2007 Certified Organization) Vol. 1, Issue 6,
August 2013 ISSN (Print): 2320-9798.
In this paper the algorithm is implemented in the C# programming language and tested on a data set. It is a hybrid of two methods, collaborative filtering and context-aware recommendation, and the two methods are compared with each other. The algorithm has rather high performance and overcomes the problems of grey sheep, new consumers and new service entrance. The practical results show that this hybrid recommender system performs better and gives better-quality recommendations than the collaborative filtering method alone. [5]
Lalita Sharma, Anju Gera, "A Survey of Recommendation System: Research Challenges", International Journal of Engineering Trends and Technology (IJETT), Volume 4, Issue 5, May 2013, ISSN: 2231
In this paper the authors use the traditional collaborative filtering recommender algorithm, which concerns the prediction of the target user's rating via a rating-matrix CF algorithm. Several recommendation systems have been proposed based on collaborative filtering, content-based filtering and hybrid recommendation methods, but these have some problems which remain challenges for research work. [6]
Jin Xu, Karaleise Johnson-Wahrmann1, Shuliang Li1, “The Development, Status and
Trends of Recommender Systems: A Comprehensive and Critical Literature Review”
Mathematics and Computers in Science and Industry ISBN: 978-1-61804-247-7.
This paper focuses on the algorithms used in recommender systems, prototypes of recommender systems, and the validation, evaluation and performance of recommender systems. The algorithms give details of the research done on recommender algorithms that can be applied in further research, the prototypes give insights into the fields of application, and the evaluation gives information on research into recommendation system performance. Some of
the data mining techniques explained in the papers included: vector space model, matrix
factorization, neural networks, genetic algorithms, product taxonomy and others. [7]
Bhumika Bhatt, Prof. Premal J Patel, Prof. Hetal Gaudani, "A Review Paper on Machine Learning Based Recommendation System", IJEDR, Volume 2, Issue 4, 2014, ISSN: 2321-9939.
The authors perform classification using K-means, a threshold on the number of common rating items, and a graph algorithm; a weighted mean is used as the most general algorithm. In model-based CF algorithms, a theoretical model is proposed over the user ratings. Four partitioning-based clustering algorithms are used to make predictions, leading to better scalability and accuracy in comparison to random partitioning. The iterative algorithm starts with a random partitioning of the items into k clusters and assigns each user to the segment containing the most similar customers. [8]
S. Saint Jesudoss, "SCALABLE COLLABORATIVE FILTERING RECOMMENDATIONS USING DIVISIVE HIERARCHICAL CLUSTERING APPROACH", ISSN: 2278-6244, International Journal of Advanced Research in IT and Engineering.
In this paper the author describes a divisive hierarchical clustering algorithm. The basic idea of the item-based collaborative filtering algorithm is to choose the K most similar items and obtain the corresponding similarities between the rated items and the target item. The authors use the K-means clustering algorithm to cluster users and items. Genetic clustering, RecTree clustering and Fuzzy C-Means (FCM) clusters are also referred to, besides the most frequently used methods k-NN, Naive Bayes, CF and CBF. Generally, these methods achieved higher efficacy among the applied models. [9]
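A small, self-contained sketch of the item-based idea referred to above: choose the K most similar items the user has already rated and combine their ratings, weighted by similarity. The toy rating matrix is invented for illustration.

import numpy as np

# Toy user-item rating matrix (rows: users, cols: items); 0 means "not rated".
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def item_similarity(R):
    """Cosine similarity between item rating columns."""
    norms = np.linalg.norm(R, axis=0)
    norms[norms == 0] = 1e-9
    return (R.T @ R) / np.outer(norms, norms)

def predict(R, user, item, k=2):
    """Predict a rating from the K most similar items the user has rated."""
    sims = item_similarity(R)[item]
    rated = np.where(R[user] > 0)[0]
    rated = rated[rated != item]
    top = rated[np.argsort(sims[rated])[::-1][:k]]
    if sims[top].sum() == 0:
        return 0.0
    return float(np.dot(sims[top], R[user, top]) / sims[top].sum())

print(round(predict(R, user=1, item=1), 2))   # estimate user 1's rating of item 1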
Jaqueline Ferreira de Briton “A Study about Personalized Content Recommendation”
Revista de Sistemas de Informação da FSMA n. 12 (2013) pp. 33-40.
Most papers validated the proposed solutions, whether through experimentation with participating students, sampling techniques or some specific type of statistical measurement. We also observed that the focus of the reviewed papers is more on the representation of the user profile than on the recommendation process itself, reflecting a global quality improvement of the system. [10]
Latika Gaddam, Pratibha Yalagi “Mobile Video Recommendation System on Cloud
with User behaviour” International Journal of Science and Research (IJSR) ISSN (Online):
2319-7064 Index Copernicus Value (2013): 6.14 | Impact Factor (2013): 4.438
Here Mahout's core algorithms are used for classification, clustering and collaborative filtering. Mahout is self-managing and can easily handle hardware failures as well as scaling its deployment up or down without any change to the code base. So, in a big data environment, it improves scalability and efficiency, using user preferences and a user-based CF algorithm to provide recommendations. [11]
Research work published in 2014:
Sowmya.K.Menon, Varghese Paul, M.Sudheep Elayidom Ratan Kumar “An efficient
cloud based framework for Web recommendation systems” International Journal of Advances
in Computer Science and Technology (IJACST), Vol. 3 No.2, Pages : 07 – 11 (2014) Special
Issue of ICCSIE 2014 - Held during February 16, 2014,Bangalore, India.
In this paper the authors use a CF algorithm that searches a large group of people and finds a smaller set with tastes similar to a particular user; to create a ranked list of suggestions, it looks at other things those people like and combines them. The server is a cloud server based on the Google App Engine architecture, which does the actual clustering based on the k-means algorithm. This type of algorithm is quite different from hierarchical clustering because it is told in advance how many distinct clusters to generate; the algorithm then determines the size of the clusters based on the structure of the data. K-means clustering begins with k randomly placed centroids and assigns every item to the nearest one. After the assignment, the centroids are moved to the average location of all the points assigned to them, and the assignments are redone. This process repeats until the assignments stop changing. [12]
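The loop just described (random centroids, assign each point to the nearest centroid, move centroids to the mean of their members, repeat until the assignments stop changing) can be written compactly; the points below are invented 2-D data purely for illustration.

import numpy as np

def kmeans(points, k, iters=100, seed=0):
    """Plain k-means: random centroids, assign each point to the nearest,
    move each centroid to the mean of its members, repeat until stable."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)].copy()
    assignment = np.full(len(points), -1)
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        new_assignment = dists.argmin(axis=1)
        if np.array_equal(new_assignment, assignment):
            break                       # assignments stopped changing
        assignment = new_assignment
        for j in range(k):
            members = points[assignment == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return assignment, centroids

points = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.1], [4.8, 5.3]])
labels, centers = kmeans(points, k=2)
print(labels, centers)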
Jyoti Gautam, Ela Kumar, "Framework for a Tag-Based Fuzzy Research Papers Sharing and Recommendation System", International Journal of Computer & Mathematical Sciences (IJCMS), ISSN 2347-8527, Volume 3, Issue 7, September 2014.
In this paper various kinds of ranking algorithms are generated to give the user an optimised result list. The authors applied three different CF algorithms and found that user-based filtering performs best. Algorithms can be designed to calculate local rankings of users and items around their network. A new ranking algorithm was developed which utilizes semantic tags to enhance the existing semantic web by using the IDF feature of the TF-IDF algorithm. A fuzzy ranking algorithm is implemented to retrieve the documents matching the query, and social tagging is used further to retrieve the set of documents that better match the query. [13]
Prabhat Kumar, Sherry Chalotra “An efficient recommender system using hierarchical
Clustering Algorithm” International Journal of Computer Science Trends and Technology
(IJCST) – Volume 2 Issue 4, Jul-Aug 2014
This recommender system framework, based on data mining techniques, incorporates discrete functions and growing clusters, thus minimizing the variance between the user's taste and the predictors. Because of the discrete algorithmic approach, this recommender system has managed to decrease the running time per recommendation a little; and since corporate business houses have to serve thousands of such recommendations per second, the net effect is quite satisfactory. [14]
Ashfaq Amir Shaikh, Dr. Gulabchand K. Gupta (Research Scholar), "RECOMMENDATION WITH DATA MINING ALGORITHMS FOR E-COMMERCE AND MCOMMERCE APPLICATIONS", International Journal of Emerging Trends & Technology in Computer Science (IJETTCS), Volume 3, Issue 6, November-December 2014, ISSN 2278-6856.
The proposed system performs simple, user-rating-based product analysis and gives better recommendations for any product using data mining approaches such as classification, clustering, association and other techniques. A Java EE web application with Java Server Pages (JSP) is used as the View to accept parameters from the user, the database is maintained with Hibernate as the Model, and the application follows the Model-View design pattern. Data mining analysis is done using the WEKA tool: based on user parameters such as product name, brand, price range and specific features or specifications, the system performs its analysis using existing user ratings, feedback and other processing, and the final result is generated as a recommended product. A mobile user interface is also provided, as a Java-based mobile application using J2ME and an Android user interface using the Android API. [15]
Dheeraj kumar Bokde1, Sheetal Girase2, Debajyoti Mukhopadhyay3 “Role of Matrix
Factorization Model in Collaborative Filtering Algorithm: A Survey” International
Journal of Advance Foundation and Research in Computer (IJAFRC) Volume 1, Issue 6,
May 2014.
In this paper the authors study the role of various Matrix Factorization (MF) models in dealing with the challenges of Collaborative Filtering (CF). From this study we can say that SVD is able to handle large datasets, the sparseness of the rating matrix and the scalability problem of the CF algorithm efficiently. PCA finds a linear projection of high-dimensional data into a lower-dimensional subspace such that the variance retained is maximized and the least-squares reconstruction error is minimized. All the techniques researched so far try to increase the accuracy and prediction performance of the CF algorithm by dimensionality reduction of the rating matrix using an MF model. [16]
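A minimal illustration of the SVD idea mentioned above: a toy rating matrix (values invented for the example) is filled with item means, factorized, and reconstructed from its k strongest latent factors so that the reconstructed values serve as predictions for the missing entries.

import numpy as np

R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Fill missing entries (zeros) with the item means before factorizing.
filled = R.copy()
item_means = R.sum(axis=0) / np.maximum((R > 0).sum(axis=0), 1)
for j in range(R.shape[1]):
    filled[R[:, j] == 0, j] = item_means[j]

# Truncated SVD keeps only the k strongest latent factors.
k = 2
U, s, Vt = np.linalg.svd(filled, full_matrices=False)
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print(np.round(R_hat, 2))   # low-rank estimates, including the missing cells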
Yonghong Xie, Aziguli Wulamu and Xiaojing Hu “Design and Implementation of
Privacy-preserving Recommendation System Based on MASK” JOURNAL OF SOFTWARE,
VOL. 9, NO. 10, OCTOBER 2014 2607
The MASK algorithm is based on random transformation: its mining dataset is formed from the real dataset through a probability transformation. In this paper, the privacy-preserving recommendation system uses a distributed processing environment to handle the huge amounts of data in the existing environment, and the data processing is done offline, reducing online access time; that is, the system meets the need for timely online response, so the recommendation system is feasible in principle. Experiments on three groups of different datasets show that the recommendation system designed in this paper has reference value in practical applications. However, the limitations of the MASK algorithm leave the system with a lot to be improved, for instance the problem of increasing the accuracy of recommendation. [17]
P. N. Vijaya Kumar 1, Dr. V. Raghunatha Reddy “A Survey on Recommender Systems
(RSS) and Its Applications” International Journal of Innovative Research in Computer and
Communication Engineering Vol. 2, Issue 8, August 2014
A good recommender system should be able to provide positive and relevant recommendations from time to time and also provide alternative recommendations to break the fatigue of users seeing the same items in the recommendation list. Future recommendation systems should be dynamic, and profiles should be updatable in real time. This, together with the synchronization of various profiles, implies the need for huge amounts of computational power, network bandwidth, etc. Current algorithms and techniques all have relatively high memory and computational complexity, which leads to long system processing times and data latency. Therefore, new algorithms and techniques that can reduce memory and computational complexity and eventually eliminate synchronization problems will be one of the directions of development. [18]
Tang Zhi-hang “Investigation and application of Personalizing Recommender Systems
based on ALIDATA DISCOVERY” Int. J. Advanced Networking and Applications Volume: 6
Issue: 2 Pages: 2209-2213 (2014) ISSN : 0975-0290
To aid in the decision-making process, recommender systems use the available data on
the items themselves. Personalized recommender systems subsequently use this input data, and
convert it to an output in the form of ordered lists or scores of items in which a user might be
interested. These lists or scores are the final result and their goal is to assist the user in the
decision-making process. The application of recommender systems outlined was just a small
introduction to the possibilities of the extension. They changed the way users make decisions,
and helped their creators to increase revenue at the same time. [19]
Research work published in 2015:
Neelima Ramnath Satpute and Prof. Hyder Ali Hingoliwala, "Generating String Recommendation Efficiently and Privately", International Journal of Advanced Research (2015), Volume 3, Issue 6, 1400-1408.
In this paper, the authors aim to protect private information against the service provider while preserving the usefulness of the framework. They propose encrypting private information and processing it under encryption to create suggestions. By introducing a semi-trusted third party and using data packing, they develop a highly efficient framework that does not require the active participation of the client. They also present a comparison protocol, the first to the best of their knowledge that compares multiple values packed in one encryption. The experiments conducted show that this work opens a way to generate recommendations in a privacy-preserving manner. The existing system works only on integer recommendations, but the proposed work handles phrase and string recommendation by applying stemming and stop-word removal. [20]
Omkar S. Revankar, Dr.Mrs. Y.V.Haribhakta “Survey on Collaborative Filtering
Technique in Recommendation System” International Journal of Application or Innovation in
Engineering & Management (IJAIEM), Volume 4, Issue 3, March 2015 , ISSN 2319 – 4847.
This article presents an overview of recommendation systems and illustrates the present generation of recommendation techniques. These techniques are usually categorized into three main classes: Collaborative Filtering (CF), Content-Based Filtering (CBF), and hybrid recommendation approaches. CF is a framework for filtering information based on the preferences of users. This technique can predict a user's preferred items by using the user's known history data as well as other users' known history data, and then recommends items to the user. The paper is focused on collaborative filtering, its types, and its major challenges, for instance the cold start problem, data sparsity, scalability and accuracy. [21]
Tejal Arekar Mrs. R.S. Sonar Dr. N. J. Uke “A Survey on Recommendation System”
International Journal of Innovative Research in Advanced Engineering (IJIRAE) ISSN: 23492163 Volume 2 Issue 1 (January 2015)
Recommendation systems are used to overcome the information overload problem as e-commerce keeps growing. They can benefit both customers and providers: customers benefit by finding new, interesting products, and providers can enhance their sales. The various techniques used for recommendations and their related issues are discussed. Two widely used recommendation systems are YouTube.com and Amazon.com, for videos and products respectively. [22]
J. Amaithi Singam and S. Srinivasan “OPTIMAL KEYWORD SEARCH FOR
RECOMMENDER SYSTEM IN BIG DATA APPLICATION” ARPN Journal of Engineering
and Applied Sciences VOL. 10, NO. 7, APRIL 2015.
Most search engines give additional supporting information. Recommender systems take part in this process and are implemented as a service. A service recommender system gives additional information to the user, but as the information grows this process becomes critical. The proposed work analyses issues that occur when a service recommender system is implemented on large data sets. The work proposes a keyword-aware services recommender
method that splits the services to the users and focuses mainly on keywords from the user preferences. A hybrid filter algorithm generates keyword recommendations from previous user preferences. To produce effective results in a big data environment, the method is implemented using MapReduce parallel processing on Hadoop. Experimental results show effectiveness on real-world datasets and reduced processing time on large datasets. The Porter stemmer algorithm is used to remove suffixes from words in previous users' reviews, and the common morphological root that remains is called the stem. [23]
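A tiny sketch of the stop-word removal and stemming step, assuming NLTK's Porter stemmer is available; the stop-word list and the sample review are made up for the example.

from nltk.stem import PorterStemmer   # requires: pip install nltk

STOP_WORDS = {"the", "was", "and", "were", "a", "of"}   # tiny illustrative list
stemmer = PorterStemmer()

def keywords(review):
    """Lower-case the review, drop stop words, reduce remaining words to stems."""
    tokens = review.lower().split()
    return [stemmer.stem(t) for t in tokens if t not in STOP_WORDS]

print(keywords("Booking was easy and the booked rooms were clean"))
# 'booking' and 'booked' both reduce to the same stem 'book'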
Adetoba B. Tiwalola., Yekini N. Asafe “A COMPREHENSIVE STUDY OF
RECOMMENDER SYSTEMS: PROSPECTS AND CHALLENGES” International Journal of
Scientific & Engineering Research, Volume 6, Issue 8, August-2015 699 ISSN 2229-5518.
Recommender systems (RSs) automate recommendation with the goal of providing affordable, personal, and high-quality recommendations. Development of recommender systems is a multi-disciplinary effort which involves experts from various fields such as Artificial Intelligence (AI), Human Computer Interaction (HCI), Information Technology (IT), Data Mining, Statistics, Adaptive User Interfaces, Decision Support Systems (DSS), Marketing, and Consumer Behavior. In this paper, a comprehensive study of recommendation systems and various approaches is provided, with their major strengths and limitations, thereby indicating future research possibilities in recommendation systems. [24]
Manisha Bedmutha ¹, Megha Sawant ², Sushmitha Ghan “Effective Bug Triage and
Recommendation System” International Journal of Engineering Research and General Science
Volume 3, Issue 6, November-December, 2015
ISSN 2091-2730.
In this paper the authors focus on reading the bug report and removing redundant and noisy data. The system automatically recommends bugs to developers, which saves the time and cost of finding bugs for fixing. As the system is time-based, if a particular developer fails to solve the bug in the given time, the bug is reassigned to another developer, so ultimately the project gets completed before the deadline. It also becomes easy to find out which developer has expertise in which area. [25]
P. Poornima, Dr. K. Kavitha “USER PROFILING USING RECOMMENDATION
SYSTEMS” International Journal of Advanced Technology in Engineering and Science
Volume No 03, Special Issue No. 01, April 2015 ISSN (online): 2348 – 7550.
In this paper, the researchers discuss the important concept of web mining. Most of the research work is highly focused on user profiling: by implicitly studying the user and his profile, recommended items can easily be delivered to the user, resulting in time saving, increased profit and effective use of storage space. Recommendation systems are used together with data mining concepts in order to predict interesting patterns. In this paper the Weighted Slope One algorithm is used to compute predictions, since it is efficient to query, reasonably accurate, and supports both online querying and dynamic updates, which makes it a good candidate for real-world systems. [26]
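For illustration, here is a compact sketch of Weighted Slope One as it is usually described (average the per-item rating deviations and weight them by the number of co-raters); the toy ratings are invented.

from collections import defaultdict

# Toy ratings: user -> {item: rating}
ratings = {
    "alice": {"A": 5, "B": 3, "C": 2},
    "bob":   {"A": 3, "B": 4},
    "carol": {"B": 2, "C": 5},
}

def slope_one_predict(ratings, user, target):
    """Weighted Slope One: average per-item deviations, weighted by co-raters."""
    dev, freq = defaultdict(float), defaultdict(int)
    for r in ratings.values():                 # accumulate deviations target - j
        if target in r:
            for j, rj in r.items():
                if j != target:
                    dev[j] += r[target] - rj
                    freq[j] += 1
    num = den = 0.0
    for j, rj in ratings[user].items():        # combine with the user's own ratings
        if j in dev:
            num += (dev[j] / freq[j] + rj) * freq[j]
            den += freq[j]
    return num / den if den else None

print(round(slope_one_predict(ratings, "carol", "A"), 2))   # carol's estimated rating of A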
Aryo Pinandito1, Mahardeka Tri Ananta2, Komang Candra Brata3 and Lutfi Fanani
“ALTERNATIVES WEIGHTING IN ANALYTIC HIERARCHY PROCESS OF MOBILE
CULINARY RECOMMENDATION SYSTEM USING FUZZY” VOL. 10, NO. 19,
OCTOBER 2015 ISSN 1819-6608 ARPN Journal of Engineering and Applied Sciences.
In this paper the authors discuss how the combination of the fuzzy and AHP approaches shows some advantages: (1) AHP helps decision-makers decompose decision problems by forming a
hierarchical decision-making structure, (2) the fuzzy approach helps in formulating the vagueness of the user's judgment for specific menu selection, (3) the Fuzzy-AHP method implemented in this culinary recommendation system helps to resolve disparities among each category of menu options, and (4) the recommendation system may recommend one or more food categories, since the weights computed for the menus are reflected in their food categories. This research shows that using the fuzzy method in the alternatives-weighting process, with numerous and varying numbers of alternatives, is more effective than the original AHP process. [27]
Bilal Hawashin, Ahmad Abusukhon, Ayman Mansour “An Efficient User Interest
Extractor for Recommender Systems” Proceedings of the World Congress on Engineering and
Computer Science 2015 Vol II WCECS 2015, October 21-23, 2015, San Francisco, USA.
This paper proposes an efficient method to extract user interests for recommender
systems. Although item-item content similarity has been widely used in the literature, it could
not detect certain user interests. Our solution improves the current work in two aspects. First,
it improves the current recommender systems by detecting actual user interests. Second, it
considers many types of user interests such as single-term interest, time interval interest,
multi-interests, and dislikes. This extractor would improve recommender systems in many
aspects. Our experiments show that our proposed method is efficient in terms of accuracy and
execution time. [28]
Gulnaj I. Patel, Rajesh V. Argiddi “ A Query Log Based SQL Query
Recommendation System for Interactive Database Exploration” International Journal of
Science and Research (IJSR) ISSN (Online): 2319-7064 Index Copernicus Value 6.14 |
Impact Factor : 5.611
In this paper the authors present an SQL query recommendation framework which generates query recommendations to assist the user in database exploration tasks. In this framework four different techniques are introduced: 1) tuple-based recommendation, 2) fragment-based recommendation, 3) clustering-based recommendation and 4) query-term-matrix-based recommendation. The first two methods are used for simple query recommendation and the third and fourth methods are used for nested query recommendation. According to the experimental evaluation, clustering-based recommendation and query-term-matrix-based recommendation provide higher accuracy and availability than tuple-based and fragment-based recommendation. [29]
Prof. Sujit Ahirrao, Faiz Akram, Swapnil Bagul, Kalpesh Modi, Harpreetkaur Saini, "Generic Recommendation Engine in Distributed Environment", International Journal of Modern Trends in Engineering and Research, ISSN (online): 2349-9745, 1.711.
The aim of this paper is to make the recommendation engine very generic and allow users to run the recommendation engine on their own data set in a very small amount of time. For that, Spark's scalable machine learning algorithms are used as the core engine, which gives the power to run machine learning algorithms in a distributed environment and so to run on large volumes of data (Big Data). [30]
Conclusion:
Recommendation systems still require further improvement of their recommendation methods, or the development of new algorithms, in the following real-life applications: recommending suitable candidates for jobs to benefit both employees and employers (job recommender systems); recommending certain types of financial services to investors, for example stock recommendation; recommending products to purchase in an online store; better methods for representing user behaviour and the information about the items to be recommended; more advanced recommendation modelling methods through the incorporation of various contextual information into the recommendation process; utilization of multi-criteria ratings; and the development of less intrusive and more flexible recommendation methods that also rely on…
References :
[1] Resnick and Varian “Recommender systems offer a solution” e-book.
[2] Bee, Joeran Beel, Bela Gipp, Stefan Langer, and Corinna Breitinger, “Research Paper
Recommender systems”
[3] Simon Philip, P. B. Shola, "A Paper Recommender System Based on the Past Ratings of a User", International Journal of Advanced Computer Technology (IJACT), ISSN: 2319-7900.
[4] Huang Yu Yao Dan Luo Jing Zhang Mu “Research on Personalized Recommender
System for Tourism Information Service” Computer Engineering and Intelligent Systems
ISSN 2222-1719 (Paper) ISSN 2222-2863 (Online) Vol.4, No.5, 2013.
[5] Atefeh Jajvand1, Mir Ali Seyyedi2, Afshin Salajegheh “A Hybrid Recommender System
for Service Discovery” International Journal of Innovative Research in Computer and
Communication Engineering (An ISO 3297: 2007 Certified Organization) Vol. 1, Issue
6, August 2013 ISSN (Print): 2320-9798.
[6] Lalita Sharma#1, Anju Gera “A Survey of Recommendation System: Research
Challenges” International Journal of Engineering Trends and Technology (IJETT) Volume4Issue5- May 2013 ISSN: 2231
[7] Jin Xu, Karaleise Johnson-Wahrmann1, Shuliang Li1, “The Development, Status and
Trends of Recommender Systems: A Comprehensive and Critical Literature Review”
Mathematics and Computers in Science and Industry ISBN: 978-1-61804-247-7.
[8] Bhumika Bhatt, Prof. Premal J Patel,Prof. Hetal Gaudani “ A Review Paper on Machine
Learning Based Recommendation System” 2014 IJEDR | Volume 2, Issue 4 | ISSN:
2321-9939.
[9] S. Saint Jesudoss, "SCALABLE COLLABORATIVE FILTERING RECOMMENDATIONS USING DIVISIVE HIERARCHICAL CLUSTERING APPROACH", ISSN: 2278-6244, International Journal of Advanced Research in IT and Engineering.
[10] Jaqueline Ferreira de Briton “A Study about Personalized Content Recommendation”
Revista de Sistemas de Informação da FSMA n. 12 (2013) pp. 33-40.
[11] Latika Gaddam, Pratibha Yalagi “Mobile Video Recommendation System on Cloud with
User behaviour” International Journal of Science and Research (IJSR) ISSN (Online):
2319-7064 Index Copernicus Value (2013): 6.14 | Impact Factor (2013): 4.438
[12] Sowmya.K.Menon, Varghese Paul, M.Sudheep Elayidom Ratan Kumar “An efficient
cloud based framework for Web recommendation systems” International Journal of
Advances in Computer Science and Technology (IJACST), Vol. 3 No.2, Pages : 07 – 11
(2014) Special Issue of ICCSIE 2014 - Held during February 16, 2014,Bangalore, India.
[13] Jyoti Gautam, Ela Kumar “ Framework for a Tag-Based Fuzzy Research Papers Sharing
and ecommendation System” International Journal of Computer & Mathematical
Sciences IJCMS ISSN 2347 – 8527 Volume 3, Issue 7 September 2014.
[14] Prabhat Kumar, Sherry Chalotra “An efficient recommender system using hierarchical
Clustering Algorithm” International Journal of Computer Science Trends and
Technology (IJCST) – Volume 2 Issue 4, Jul-Aug 2014
[15] Ashfaq
Amir
Shaikh,
Dr.
Gulabchand
K.
Gupta1Research
Scholar
“RECOMMENDATION WITH DATA MINING ALGORITHMS FOR ECOMMERCE AND MCOMMERCE APPLICATIONS” International Journal of
Emerging Trends & Technology in Computer Science (IJETTCS)Volume 3, Issue 6,
November-December 2014 ISSN 2278-6856
[16] Dheeraj kumar Bokde1, Sheetal Girase2, Debajyoti Mukhopadhyay3 “Role of Matrix
Factorization Model in Collaborative Filtering Algorithm: A Survey” International
Journal of Advance Foundation and Research in Computer (IJAFRC) Volume 1, Issue 6,
May 2014.
[17] Yonghong Xie, Aziguli Wulamu and Xiaojing Hu “Design and Implementation of
Privacy-preserving Recommendation System Based on MASK” JOURNAL OF
SOFTWARE, VOL. 9, NO. 10, OCTOBER 2014 2607
[18] P. N. Vijaya Kumar 1, Dr. V. Raghunatha Reddy “A Survey on Recommender Systems
(RSS) and Its Applications” International Journal of Innovative Research in Computer
and Communication Engineering Vol. 2, Issue 8, August 2014
[19] Tang Zhi-hang “Investigation and application of Personalizing Recommender Systems
based on ALIDATA DISCOVERY” Int. J. Advanced Networking and Applications
Volume: 6 Issue: 2 Pages: 2209-2213 (2014) ISSN : 0975-0290
[20] Neelima Ramnath Satpute and Prof. Hyder Ali Hingoliwala RESEARCH ARTICLE
“Generating String Recommendation Efficiently and Privately” International Journal of
Advanced Research (2015), Volume 3, Issue 6, 1400-1408
[21] Omkar S. Revankar, Dr.Mrs. Y.V.Haribhakta “Survey on Collaborative Filtering
Technique in Recommendation System” International Journal of Application or
Innovation in Engineering & Management (IJAIEM), Volume 4, Issue 3, March 2015 ,
ISSN 2319 – 4847.
[22] Tejal Arekar Mrs. R.S. Sonar Dr. N. J. Uke “A Survey on Recommendation System”
International Journal of Innovative Research in Advanced Engineering (IJIRAE) ISSN:
2349-2163 Volume 2 Issue 1 (January 2015)
[23] J. Amaithi Singam and S. Srinivasan “OPTIMAL KEYWORD SEARCH FOR
RECOMMENDER SYSTEM IN BIG DATA APPLICATION” ARPN Journal of
Engineering and Applied Sciences VOL. 10, NO. 7, APRIL 2015.
[24] Adetoba B. Tiwalola., Yekini N. Asafe “A COMPREHENSIVE STUDY OF
RECOMMENDER SYSTEMS: PROSPECTS AND CHALLENGES” International
INCON - XI 2016
194
ASM’s International E-Journal on
E-ISSN – 2320-0065
Ongoing Research in Management & IT
Journal of Scientific & Engineering Research, Volume 6, Issue 8, August-2015 699
ISSN 2229-5518.
[25] Manisha Bedmutha ¹, Megha Sawant ², Sushmitha Ghan “Effective Bug Triage and
Recommendation System” International Journal of Engineering Research and General
Science Volume 3, Issue 6, November-December, 2015 ISSN 2091-2730.
[26] P. Poornima, Dr. K. Kavitha “USER PROFILING USING RECOMMENDATION
SYSTEMS” International Journal of Advanced Technology in Engineering and Science
Volume No 03, Special Issue No. 01, April 2015 ISSN (online): 2348 – 7550.
[27] Aryo Pinandito1, Mahardeka Tri Ananta2, Komang Candra Brata3 and Lutfi Fanani
“ALTERNATIVES WEIGHTING IN ANALYTIC HIERARCHY PROCESS OF
MOBILE CULINARY RECOMMENDATION SYSTEM USING FUZZY” VOL. 10,
NO. 19, OCTOBER 2015 ISSN 1819-6608 ARPN Journal of Engineering and Applied
Sciences.
[28] Bilal Hawashin, Ahmad Abusukhon, Ayman Mansour “An Efficient User Interest
Extractor for Recommender Systems” Proceedings of the World Congress on
Engineering and Computer Science 2015 Vol II WCECS 2015, October 21-23, 2015,
San Francisco, USA.
[29] Gulnaj I. Patel, Rajesh V. Argiddi “ A Query Log Based SQL Query Recommendation
System for Interactive Database Exploration” International Journal of Science and
Research (IJSR) ISSN (Online): 2319-7064 Index Copernicus Value 6.14 | Impact
Factor : 5.611
[30] Prof. Sujit Ahirrao1, Faiz Akram2, Swapnil Bagul3, Kalpesh Modi4, Harpreetkaur
Saini5 “Generic Recommendation Engine in Distributed Environment “Generic
Recommendation Engine in Distributed Environment” International journals of Modern
Trend in Engineering and Research ISSN (online):2349-9745 1.711
Significance of Data Warehouse for ERP System
Varsha. P. Desai
(varshadesai9@gmail.com)
Asst. Professor
V.P. Institute of Management Studies
and Research, Sangli, India.
ABSTRACT:
ERP systems are the transaction engine for integrated data management in many organizations. While an ERP system integrates all business transaction data into its master database to manage operational workflow and organizational planning, it does not support data analysis and decision support. Today, business analytics is very important because it helps determine how to run the business by making predictions about the future. Business intelligence uses a data warehouse built from ERP system data; the data warehouse provides valuable and accurate reporting services, data mining and benchmarking. In this paper we describe the need for a data warehouse for the success of an ERP system, present an architecture for creating a data warehouse from an ERP system, and suggest a data model for the warehouse.
Keywords: ERP, Data Mart, OLAP, ETL, SAS, OLTP
Objectives:
1. Study the importance of a data warehouse for an ERP system.
2. Design an architecture of a data warehouse for an ERP system.
3. Suggest a data model for implementation of the data warehouse.
4. Identify future challenges for implementation of the data warehouse.
Introduction:
Enterprise resource planning (ERP) is a software system that integrates all organizational data in a central place for effective use of management resources and improved efficiency of the enterprise. An ERP package is a centralized repository of the business system which supports flexible data transactions and business intelligence. The ERP database is a central hub which captures data from different sources and allows multiple applications to share and reuse data efficiently. The existence of this central data creates an opportunity to develop an enterprise data warehouse for analysis and strategic business decisions. Organizations implement ERP systems to re-engineer business processes and face market competition.
Need of data warehouse for ERP system:
ERP systems are unable to satisfy the needs of higher-level management for strategic decision making, even though the ERP information system plays an important role in e-commerce [7]. A data warehouse is a valuable source for a decision support system and supports many IS operations. A large volume of data can be stored in the data warehouse, which helps with future predictions [1]. A data warehouse is a complete repository of historical corporate data, extracted from transaction systems, that is available for ad-hoc access by knowledge workers. According to William H. Inmon, a data warehouse is a subject-oriented, integrated, time-variant, non-volatile collection of data that supports decision making.
Today ERP packages are implemented in small to large industries to manage different business function areas such as manufacturing, sales and marketing, finance, human resources, inventory and logistics. The ERP database stores only operational data; as time goes on this data grows and affects the performance of the ERP system, so old data is removed from the ERP. This historical data is an important asset of the organization, so we need to transfer it to a data warehouse for business analysis [3]. W. H. Inmon suggests that the importance of creating a data warehouse for an ERP system is as follows [1]:
1. Useful for accessing historical and non-operational data without disturbing the ERP database.
2. Retains data for a long period of time [4].
3. Helps generate analytical reports and dashboards to identify business status.
4. Identifies hierarchical relations between different functional units.
5. Makes users aware of restrictions while accessing data from the ERP system.
6. Defines logical relations between different business processes.
7. Supports timely access to ERP data.
8. Consolidates data obtained from many sources [4].
Architecture of data warehouse for ERP system:
Integrating the ERP with extended functions encounters problems of inconsistent, incomplete and duplicate data while creating the data warehouse. A data staging operation is performed for cleaning, pruning and removing duplicate data. The following are the steps for creating a data warehouse for an ERP system [10]:
1. Collection: Collect data from different sources and copy it into the data staging area for further processing.
2. Transformation: Remove unwanted and duplicate data and convert it into a standard format.
3. Loading and indexing: Save the transformed data into data marts and index it.
4. Quality checking: Assure the quality of the data stored in the data warehouse.
5. Publication: Publish the data online for analysis and perform OLAP operations.
6. Searching: Provide a search service for query execution.
7. Security: Secure the data to avoid potential damage.
Fig.1. Architecture of Data warehouse for ERP system
Data sources:
The data warehouse extracts data from different data sources such as ERP, CRM and SCM applications and spreadsheets.
ETL:
Extract, transform and load (ETL) techniques are used to retrieve data from the operational systems. The ETL tool reads data from the different source applications, makes changes, assigns names to the fields and organizes the data in a form that helps analysis [4].
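As an illustration only, the following minimal Python sketch mimics the ETL flow described above using pandas; the file names, column names and transformations are hypothetical placeholders for real ERP/CRM extracts, and a production warehouse would use a dedicated ETL tool.

import pandas as pd

# Extract: read data exported from the ERP and CRM source systems
# (file names are hypothetical placeholders for real extracts).
orders = pd.read_csv("sales_orders.csv", parse_dates=["order_date"])
customers = pd.read_csv("crm_customers.csv")

# Transform: remove duplicates, standardise field names and formats.
orders = orders.drop_duplicates(subset=["order_id"])
orders = orders.rename(columns={"cust_no": "customer_id"})
orders["amount"] = orders["amount"].fillna(0).round(2)

# Conform: join customer attributes so each warehouse row is self-describing.
fact_sales = orders.merge(customers, on="customer_id", how="left")

# Load and index: write the cleaned rows to the warehouse staging area
# (a CSV stands in for a real warehouse loader) indexed by date for queries.
fact_sales = fact_sales.set_index("order_date").sort_index()
fact_sales.to_csv("dw_fact_sales.csv")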
Metadata:
Data stored in the data warehouse is described by metadata. Metadata is data about data: it contains detailed information about each field, its size, data type, logs and so on. Metadata is the bridge between the data warehouse and decision support applications, and the logical linkage between data and applications; it is also useful for creating data marts from the data warehouse [5]. Metadata is needed for designing, constructing, retrieving and controlling the data warehouse.
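A data dictionary is one simple way to hold such metadata. The sketch below is a hypothetical example (table, field and source names are invented) showing how a decision-support tool could consult it:

# A minimal, hypothetical data dictionary for one warehouse table.
# Each column entry records the details mentioned above: field name,
# data type, size and the source-system field it was loaded from.
metadata = {
    "table": "dw_fact_sales",
    "source_system": "ERP sales module",
    "last_load": "2016-01-31T02:00:00",
    "columns": [
        {"name": "order_id",    "type": "INTEGER",       "size": 10, "source_field": "ORD_NO"},
        {"name": "customer_id", "type": "INTEGER",       "size": 10, "source_field": "CUST_NO"},
        {"name": "order_date",  "type": "DATE",          "size": 8,  "source_field": "ORD_DT"},
        {"name": "amount",      "type": "DECIMAL(12,2)", "size": 12, "source_field": "NET_AMT"},
    ],
}

# A decision-support tool can use the metadata to interpret the data,
# e.g. to find all date columns before building a time dimension.
date_columns = [c["name"] for c in metadata["columns"] if c["type"] == "DATE"]
print(date_columns)  # ['order_date']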
Data Mart:
Data marts are partitions of the overall data warehouse. A data mart is a subset of the data warehouse separated according to departmental data; different data marts are created for the sales, production, human resource and finance departments. Data is separated into different data marts according to function or geographical location.
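For example, a warehouse table can be partitioned into departmental data marts with a simple grouping step; the data below is hypothetical and serves only to illustrate the idea.

import pandas as pd

# Hypothetical warehouse extract with a 'department' attribute.
warehouse = pd.DataFrame({
    "department": ["sales", "finance", "sales", "hr"],
    "order_id":   [101, 102, 103, 104],
    "amount":     [250.0, 990.0, 120.5, 0.0],
})

# Partition the warehouse into one data mart per department.
data_marts = {dept: rows.copy() for dept, rows in warehouse.groupby("department")}

# The sales department now queries only its own, smaller data mart.
print(data_marts["sales"])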
Analytical cube:
A data cube provides a multidimensional view of data defined by dimensions and facts. Dimensions are the entities (such as time, product or region) with respect to which the data is analysed, while facts are the numeric measures, linked to the dimension tables through keys that also enforce referential integrity. The OLAP (Online Analytical Processing) engine provides this multidimensional view and allows knowledge workers to get fast, consistent and interactive information. Different operations are performed on the cube, such as roll-up, drill-down, slicing and dicing, for multidimensional analysis [6].
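As a small illustration with hypothetical sales data, the roll-up, slice and dice operations can be pictured as operations on a pivot table over two dimensions (region, quarter) and one fact (amount):

import pandas as pd

# Hypothetical fact table: one measure (amount) and two dimensions.
facts = pd.DataFrame({
    "region":  ["North", "North", "South", "South", "South"],
    "quarter": ["Q1", "Q2", "Q1", "Q1", "Q2"],
    "amount":  [100.0, 150.0, 80.0, 60.0, 90.0],
})

# Build a small cube: regions x quarters, each cell holding the total amount.
cube = facts.pivot_table(index="region", columns="quarter",
                         values="amount", aggfunc="sum", fill_value=0)

rollup_by_region = cube.sum(axis=1)   # roll-up: aggregate away the quarter dimension
slice_q1 = cube["Q1"]                 # slice: fix the quarter dimension to Q1
dice = cube.loc[["South"], ["Q1"]]    # dice: select a sub-cube (South, Q1)

print(cube, rollup_by_region, slice_q1, dice, sep="\n\n")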
Merely extracting data from the ERP application is not ERP data warehousing. An ERP data warehouse must support the services needed to construct the warehouse and a collection of integrated applications, in order to generate reports and to analyse and control business events across the enterprise [8]. The output generated from the data warehouse is useful for managing many processes, such as extract files, reports, and integration with analytical applications such as SAS and with transactional systems [9].
Data Warehouse Models:
There are three basic models of data warehouse: the enterprise warehouse, the data mart and the virtual data warehouse. The enterprise data model is the most suitable model for ERP data warehousing. It stores data from corporate operational systems and external information providers, and provides detailed as well as summarized data. The size of an enterprise warehouse ranges from a few hundred gigabytes to terabytes and beyond, and implementation requires traditional mainframes, Linux super servers or parallel architecture platforms.
Advantages of data warehouse for ERP system [14]:
1. Easy to generate reports: The ERP system's data model is too complex, so it is not possible for an end user to generate reports without IT knowledge or proper training. The data warehouse model is simple, so any end user can generate their own reports without help.
2. Combine data from multiple ERP systems: Using a data warehouse we can combine data from multiple ERP systems to perform data integration.
3. Data repository: The ERP system does not contain all enterprise data, but the data warehouse stores all historical data, which is used for analysis.
4. Focused insight: The objective of an ERP system is to generate reports from transaction processing; a data warehouse system not only generates reports but also provides insight into the data stored in the database.
5. Performance: Reports generated through the data warehouse do not affect the performance of the OLTP (Online Transaction Processing) system.
Challenges:
• Metadata is a key component of the data warehouse; ensure that the vendor provides satisfactory handling of metadata.
• Special attention is needed while selecting end-user tools for online queries, batch reporting and data extraction for analysis.
• Take an enterprise-wide view of requirements.
• Properly estimate the time and effort for the data extraction, transformation and loading functions.
• Remove data pollution from the data collected through the different ERP systems.
• Select operational and physical infrastructure suitable for an efficient data warehouse.
Conclusion:
A data warehouse is a valuable tool for business analysis and decision support. ERP software is an integrated information system for operational data management, whereas the data warehouse acts as a repository for storing the large volume of data collected from the ERP and extended systems. Online analytical processing (OLAP) provides a multidimensional view of the data through dimensions and facts. The enterprise data model is an effective data warehouse model for an ERP system. Metadata management, time estimation for ETL activity, removal of data pollution and selection of operational and physical infrastructure are the major challenges in implementing a data warehouse for an ERP system.
References:
[1] Phuc V. Nguyen, "Using Data Warehouse to Support Building Strategy or Forecast Business Trend".
[2] "A Critique of Data Warehousing in Enterprise Resource Planning Systems", Indian Journal of Applied Research, Volume 3, Issue 9, September 2013, ISSN 2249-555X.
[3] Swati Varma, "Implementation of Data Warehouse in ERP System", Indian Journal of Applied Research, Volume 3, Issue 9, September 2013, ISSN 2249-555X.
[4] Joseph Guerra, David Andrews, "Why You Need a Data Warehouse".
[5] Arun K. Pujari, Data Mining Techniques, 2nd edition.
[6] "Data Warehouse Tutorial", tutorialspoint.com, e-book.
[7] V. Sathiyamoorthi, V. Murali Bhaskaran, "Data Mining for Intelligent Enterprise Resource Planning System", Sri Shakthi Institute of Engineering and Technology, Coimbatore, India.
[8] Stefano Rizzi, Alberto Abelló, Jens Lechtenbörger, Juan Trujillo, "Research in Data Warehouse Modeling and Design: Dead or Alive?".
[9] Mark Moorman, "Data Warehousing Design Issues for ERP Systems", SAS Institute, Cary, NC.
[10] Dishek Mankad, Preyash Dholakia, B. R. Patel, "The Study on Data Warehouse Design and Usage", International Journal of Scientific and Research Publications, Volume 3, Issue 3, March 2013, ISSN 2250-3153.
[11] Surajit Chaudhuri, Umeshwar Dayal, "An Overview of Data Warehousing and OLAP Technology".
Websites:
12. http://searchsap.techtarget.com/answer/ERP-Data-Warehouse
13. https://www.quora.com/Is-ERP-system-a-good-example-of-a-Data-Warehouse
14. http://wiki.scn.sap.com/wiki/display/EIM/BasicsofaDataWarehouse
Cyber Crime – A Threat to Persons, Property, Government and Societies
Shruti Mandke
E-mail ID: deshpandeshruti9@gmail.com
ABSTRACT:
In the present-day world, India has witnessed an unprecedented rise in cybercrimes, whether they pertain to Trojan attacks, salami attacks, e-mail bombing, DoS attacks, information theft, or the most common offence of hacking. Despite technological measures being adopted by corporate organizations and individuals, the frequency of cybercrimes has increased over the last decade. The number of users of computer systems and the Internet is increasing worldwide day by day, and the Internet makes it easy to access any information within a few seconds, being the medium for huge amounts of information and a large base of communication around the world. Certain precautionary measures should therefore be taken by all of us while using the Internet, which will assist in challenging this major threat of cybercrime. In this paper, I have discussed various categories of cybercrime and cybercrime as a threat to persons, property, government and society, and I have suggested various preventive measures to be taken to curb cybercrime.
Keywords: Cybercrime, computer crime, hacking, cyber fraud, prevention of cybercrime.
I. Introduction
In the present-day world, India has witnessed a huge increase in cybercrimes, whether they pertain to Trojan attacks, salami attacks, e-mail bombing, DoS attacks, information theft, or the most common offence of hacking data or systems to commit crime. Despite technological measures being adopted by corporate organizations and individuals, the frequency of cybercrimes has increased over the last decade. Cybercrime refers to the act of performing a criminal act using a computer or cyberspace (the Internet network) as the communication vehicle. Though there is no technical definition by any statutory body for cybercrime, it is broadly defined by the Computer Crime Research Center as "crimes committed on the Internet using the computer either as a tool or as a targeted victim." All types of cybercrime involve both the computer and the person behind it as victims; it just depends on which of the two is the main target. Cybercrime could include anything as simple as downloading illegal music files to stealing millions of dollars from online bank accounts. Cybercrime could also include non-monetary offences, such as creating and distributing viruses on other computers or posting confidential business information on the Internet. An important form of cybercrime is identity theft, in which criminals use the Internet to steal personal information from other users. Various types of social networking sites are used for this purpose to find the identity of interested people. There are two ways this is done, phishing and pharming; both methods lure users to fake websites, where they are asked to enter personal information. This includes login information such as usernames and passwords, phone numbers, addresses, credit card numbers, bank account numbers, and other information criminals can use to "steal" another person's identity.
II. History :
The first recorded cybercrime took place in the year 1820 which is not surprising
considering the fact that the abacus, which is thought to be the earliest form of a computer, has
been around since 3500 B.C. in India, Japan and China. The era of modern computers,
however, began with the analytical engine of Charles Babbage. In 1820, Joseph-Marie
Jacquard, a textile manufacturer in France, produced the loom. This device allowed the
repetition of a series of steps in the weaving of special fabrics. This resulted in a fear amongst
Jacquard's employees that their traditional employment and livelihood were being threatened.
They committed acts of sabotage to discourage Jacquard from further use of the new
technology. This was the first recorded cybercrime.
III. Manifestations:
Basically, cybercrimes can be understood by considering two categories, defined for the purpose of understanding as Type I and Type II cybercrime.
Type I cybercrime has the following properties:
• It is generally a single event from the perspective of the victim. For example, the victim unknowingly downloads or installs a Trojan horse which installs a keystroke logger on his or her machine. Alternatively, the victim might receive an e-mail containing what claims to be a link to a known entity, but in reality it is a link to a hostile website. A large number of keylogger programs are available to commit this crime.
• It is often facilitated by crimeware programs such as keystroke loggers, viruses, rootkits or Trojan horses. Flaws or vulnerabilities in software products often provide the foothold for the attacker. For example, criminals controlling a website may take advantage of a vulnerability in a web browser to place a Trojan horse on the victim's computer.
Examples of this type of cybercrime include, but are not limited to, phishing, theft or manipulation of data or services via hacking or viruses, identity theft, and bank or e-commerce fraud.
Type II cybercrime, at the other end of the spectrum, includes, but is not limited to, activities such as computer-related fraud, fake antivirus, cyber-stalking and harassment, child predation, extortion, travel scams, fake escrow scams, blackmail, stock market manipulation, complex corporate espionage, and planning or carrying out terrorist activities. The properties of Type II cybercrime are:
• It is generally an on-going series of events, involving repeated interactions with the target. For example, the target is contacted in a chat room by someone who, over time, attempts to establish a relationship. Eventually, the criminal exploits the relationship to commit a crime. Or, members of a terrorist cell or criminal organization may use hidden messages to communicate in a public forum to plan activities or discuss money laundering locations.
• It is generally facilitated by programs that do not fit into the classification of crimeware. For example, conversations may take place using instant messaging (IM) clients, or files may be transferred using FTP.
IV. Cyber Crime in India:
Reliable sources report that during the year 2005, 179 cases were registered under the I.T. Act as compared to 68 cases during the previous year, a significant increase of 163% in 2005 over 2004 (source: Karnika Seth, cyber lawyer and consultant practising in the Supreme Court of India and Delhi High Court). Some of the cases are:
• The BPO Mphasis Ltd. case of data theft
• The DPS MMS case
• Pranav Mitra's e-mail spoofing fraud
V. Some Professions Giving Birth to Cyber Crimes:
There are three kinds of professionals in cyberspace:
1. IT or tech professionals: Since cybercrime is all about computers and networks (the Internet), many types of IT and technology professionals are quite prominently active in this space, including but not restricted to network engineers, cyber security software professionals, cyber forensic experts, IT governance professionals, certified Internet security auditors and ethical hackers.
2. Cyber law experts: Cyber law has become a multidisciplinary field, and hence specialization in handling cybercrimes is required. Cyber law experts handle patent infringements and other business cybercrimes; cyber security for identity theft, credit cards and other financial transactions; general cyber law; online payment frauds; and copyright infringement of software, music and video.
3. Cyber law implementation professionals: Many agencies play a role in cyber law implementation, including e-governance agencies, law enforcement agencies, cybercrime research cells and cyber forensic labs. Each of these has a different category of professionals.
VI. Categories of Cyber Crime:
Cybercrimes can be basically divided into four major categories:
1. Cybercrimes against persons
Cybercrimes committed against persons include various crimes such as transmission of child pornography, cyber porn, harassment of a person using a computer (for example through e-mail), and fake escrow scams. The trafficking, distribution, posting and dissemination of obscene material, including pornography and indecent exposure, constitutes one of the most important cybercrimes known today; the potential harm of such a crime to humanity can hardly be overstated. Cyber harassment is a distinct cybercrime: various kinds of harassment can and do occur in cyberspace, or through the use of cyberspace, and can be sexual, racial, religious or of other kinds. Persons perpetrating such harassment are also guilty of cybercrimes. Cyber harassment also brings us to the related area of violation of the privacy of citizens, which is a cybercrime of a grave nature; no one likes another person invading the invaluable and extremely sensitive area of his or her own privacy. Offences which affect the personality of individuals include:
Harassment via e-mails: A very common type of harassment, carried out through letters and attachments of files and folders sent by e-mail. At present such harassment is common because the use of social sites such as Facebook, Twitter and Orkut is increasing day by day.
Cyber-stalking: An expressed or implied physical threat that creates fear, made through the use of computer technology such as the Internet, e-mail, phones, text messages, webcams, websites or videos.
Defamation: Any act with intent to lower the dignity of a person, for example by hacking their mail account and sending mails using vulgar language to unknown persons from that account.
Hacking: Unauthorized control of or access to a computer system; an act of hacking can completely destroy data as well as computer programs. Hackers usually target telecommunication and mobile networks.
Cracking: The act of breaking into computer systems without the owner's knowledge and consent and tampering with precious confidential data and information.
E-mail spoofing: A spoofed e-mail is one which misrepresents its origin; it shows its origin to be different from where it actually originates.
SMS spoofing: An offender steals the identity of another person in the form of a mobile phone number and sends SMS via the Internet, so that the receiver gets the SMS from the mobile phone number of the victim. It is a very serious cybercrime against an individual.
Carding: The use of false ATM cards, i.e. debit and credit cards, by criminals for monetary benefit through withdrawing money from the victim's bank account; this type of cybercrime always involves unauthorized use of ATM cards.
Cheating and fraud: Acts such as stealing passwords and data storage, carried out with a guilty mind, which lead to fraud and cheating.
Child pornography: Defaulters create, distribute or access materials that sexually exploit underage children.
Assault by threat: Threatening a person, or their family, with fear for their lives through the use of a computer network, i.e. e-mail, videos or phones.
2. Cybercrimes against property
The second category of cybercrimes is that of cybercrimes against all forms of property. These crimes include computer vandalism (destruction of others' property) and transmission of harmful viruses or programs. A Mumbai-based start-up engineering company lost business and a good deal of money when a rival company, an industry major, stole the technical database from its computers with the help of corporate cyber-spy software. Offences which affect a person's property include:
Intellectual property crimes: Intellectual property consists of a bundle of rights, and any unlawful act by which the owner is deprived completely or partially of those rights is a crime. The most common IPR violations are software piracy; infringement of copyright, trademark, patents, designs and service marks; theft of computer source code; and so on.
Cyber squatting: Two parties claim the same domain name, either by claiming to have registered the name first or by using something confusingly similar to an existing name, for example www.yahoo.com and www.yahhoo.com (a small detection sketch follows this list).
Cyber vandalism: Vandalism means deliberately damaging the property of another; cyber vandalism thus means destroying or damaging the data or information stored in a computer when a network service is stopped or disrupted. It may also include any kind of physical harm done to a computer, such as theft of the computer, some part of it, or a peripheral attached to it.
Hacking computer systems: Well-known services such as Twitter and blogging platforms have been attacked through unauthorized access to or control over computer systems, causing loss of data as well as damage to the systems. Research indicates that such attacks are often intended not mainly for financial gain but to diminish the reputation of a particular person or company; in April 2013, for instance, MMM India was attacked by hackers.
Transmitting viruses: Viruses are programs that attach themselves to a computer or a file and then circulate to other files and other computers on a network. They mainly affect the data on a computer, either by altering or deleting it. Worm attacks also play a major role in affecting the computer systems of individuals.
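As referenced under cyber squatting above, the following sketch flags look-alike domains of the www.yahoo.com / www.yahhoo.com kind by comparing a candidate domain with a list of protected domains; the protected list and similarity threshold are arbitrary assumptions, and a real anti-phishing or brand-protection system would use many more signals.

from difflib import SequenceMatcher

# Hypothetical list of legitimate domains an organisation wants to protect.
PROTECTED_DOMAINS = ["yahoo.com", "google.com", "sbi.co.in"]

def normalise(domain: str) -> str:
    """Lower-case the domain and drop a leading 'www.' if present."""
    domain = domain.lower()
    return domain[4:] if domain.startswith("www.") else domain

def is_lookalike(candidate: str, threshold: float = 0.85) -> bool:
    """Flag a domain that is very similar to, but not identical to, a protected one."""
    cand = normalise(candidate)
    for legit in PROTECTED_DOMAINS:
        similarity = SequenceMatcher(None, cand, legit).ratio()
        if cand != legit and similarity >= threshold:
            return True
    return False

print(is_lookalike("www.yahhoo.com"))  # True  - likely cyber squatting / phishing bait
print(is_lookalike("www.yahoo.com"))   # False - the genuine domain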
As for the HR department, the task is to conduct thorough pre-employment checks. In the modern cyber-technology world it is very necessary to regulate cybercrime, and most importantly cyber law should be made stricter in the case of cyber terrorism and hackers.
VII. Major Deterrents for the Police and the Companies in Detecting Cyber Crimes:
Companies in India do not want to be publicized for the wrong reasons. If ever they are in trouble, they try their best to sort it out through their own in-house security systems. As far as the police are concerned, they are usually reluctant to take up cybercrime cases, as investigation is highly labour-intensive and expensive.
VIII. Prevention of Cyber Crime:
Prevention is always better than cure, and it is always better to take certain precautions while working on the net; one should make them a part of one's cyber life. Sailesh Kumar Zarkar, technical advisor and network security consultant to the Mumbai Police Cybercrime Cell, advocates the 5P mantra for online security: Precaution, Prevention, Protection, Preservation and Perseverance.
• Identification of exposures through education will assist responsible companies and firms to meet these challenges.
• One should avoid disclosing any personal information to strangers via e-mail, while chatting or on any social networking site.
• One must avoid sending any photograph to strangers online, as incidents of misuse or modification of photographs are increasing day by day.
• Updated anti-virus software should be used by all netizens to guard against virus attacks, and backups should be kept so that data is not lost in case of virus contamination.
• A person should never send his credit card or debit card number to any site that is not secured, to guard against fraud.
• Parents should keep a watch on the sites that their children are accessing, to prevent any kind of harassment or depravation of children.
• Website owners should watch traffic and check any irregularity on the site. It is the responsibility of website owners to adopt a policy for preventing cybercrime, as the number of Internet users is growing day by day.
• Web servers running public sites must be physically separated and protected from the internal corporate network.
• It is better for the body corporate to use security programs to control information on sites.
• Strict statutory laws need to be passed by the legislatures, keeping in mind the interests of netizens.
• The IT department should issue guidelines and notifications for the protection of computer systems and should also bring in stricter laws to break down criminal activities relating to cyberspace.
• As cybercrime is a major threat to all countries worldwide, certain steps should be taken at the international level to prevent it.
• Complete justice must be provided to the victims of cybercrime by way of compensatory remedy, and offenders must be punished with the severest punishment so as to deter cyber criminals.
Conclusion:
In conclusion, computer crime does have a drastic effect on the world in which we live; it affects every person, no matter where they are from. It is ironic that those who secretly break into computers across the world for enjoyment have been labelled deviants. Many hackers view the Internet as a public space for everyone and do not see their actions as criminal. Hackers are as old as the Internet, and many have been instrumental in making the Internet what it is now. In my view, hacking and computer crime will be with us for as long as we have the Internet, and it is our role to keep the balance between what is a crime and what is done for pure enjoyment. The government is making an effort to control the Internet, yet true control over the Internet is impossible because of the reasons for which the Internet was created. This is why families and the institution of education are needed: parents need to let their children know what is okay to do on the computer and what is not, and to educate them on the repercussions of their actions should they choose to become part of the subculture of hackers. The true nature of what computer crime will include in the future is unknown; what was criminal yesterday may not be a crime the next day, because advances in computers may not allow it. Passwords might be replaced by more secure forms of security such as biometrics. Most recorded computer crime cases in organizations involve more than one individual, and virtually all computer crime cases known so far have been committed by employees of the organization. Criminals have also adapted the advancements of computer technology to further their own illegal activities. Without question, law enforcement must be better prepared to deal with the many aspects of computer-related crime and the techno-criminals who commit it. This article is not meant to suggest that programmers or computer users are fraudulent people or criminals, but rather to expose us to computer-related crime and to provide ways to prevent it. Since the number of users of computer systems and the Internet is increasing worldwide day by day, and information can be accessed within a few seconds over the Internet, certain precautionary measures should be taken by all of us while using the Internet, which will assist in challenging this major threat of cybercrime.
References:
1. Communications Fraud Control Association, 2011 Global Fraud Loss Survey. Available: http://www.cfca.org/fraudlosssurvey/, 2011.
2. F. Lorrie, editor, Proceedings of the Anti-Phishing Working Group's 2nd Annual eCrime Researchers Summit 2007, Pittsburgh, Pennsylvania, USA, October 4-5, 2007, Vol. 269 of ACM International Conference Proceeding Series, ACM, 2007.
3. I. Henry, "Machine learning to classify fraudulent websites", 3rd Year Project Report, Computer Laboratory, University of Cambridge, 2012.
4. Microsoft Inc., Microsoft Security Intelligence Report, Volume 9, 2010. Available: http://www.microsoft.com/security/sir/.
5. Nielsen Ratings (2011), Top ten global web parent companies, home and work. Retrieved February 24, 2012.
6. N. Leontiadis, T. Moore, and N. Christin, "Measuring and analyzing search-redirection attacks in the illicit online prescription drug trade", in Proceedings of USENIX Security 2011, San Francisco, CA, August 2011.
7. Phil Williams, Organized Crime and Cybercrime: Synergies, Trends, and Responses. Retrieved December 5, 2006 from http://www.pitt.edu/~rcss/toc.html.
8. C. Steel (2006), Windows Forensics: The Field Guide for Corporate Computer Investigations, Wiley.
The Security Aspects of IOT and Its Counter Measures
Mr. Ravikiant Kale
Assistant Professor
NBN Sinhgad School of Computer Studies
Pune, Maharashtra
ravikant1010.kale@gmail.com
Mrs. Kalyani Alishetty
Assistant Professor
NBN Sinhgad School of
Computer Studies,
Pune, Maharashtra.
koukuntla.kalyani@gmail.com
ABSTRACT:
IOT uses technologies to connect physical objects to the Internet. The size (and cost) of the electronic components that are needed to support capabilities such as sensing, tracking and control mechanisms plays a critical role in the widespread adoption of IOT for various industry applications. The progress in the semiconductor industry has been
no less than spectacular, as the industry has kept true to Moore's Law of doubling
transistor density every two years.
In future, new standards and technologies should address security and privacy features for users, networks, data and applications.
For data privacy, policy approaches and technical implementations exist to ensure that sensitive data is removed or replaced with realistic data (not real data). Using policy approaches, Data Protection Acts have been passed by various countries and regions such as the USA and the European Union to safeguard an individual's personal data against misuse. For technical implementations, there are Privacy Enhancing Techniques (PETs) such as anonymisation and obfuscation to de-sensitize personal data. PETs use a variety of techniques such as data substitution, data hashing and truncation to break the sensitive association of data, so that the data is no longer personally identifiable and is safe to use. For example, the European Network and Information Security Agency (ENISA) has proposed to approach data privacy by design, using a "data masking" platform which uses PETs to ensure data privacy.
With the IOT-distributed nature of embedded devices in public areas, and threats coming from networks trying to spoof data access, collection and privacy controls in order to allow the sharing of real-time information, IOT security has to be implemented on a strong foundation built on a holistic view of security for all IOT elements at the various interacting layers.
Introduction :
Imagine a world where billions of objects can sense, communicate and share
information, all interconnected over public or private Internet Protocol (IP) networks. These
interconnected objects have data regularly collected, analyzed and used to initiate action,
providing a wealth of intelligence for planning, management and decision making. This is the
world of the Internet of Things (IOT).
The IOT concept was coined by a member of the Radio Frequency Identification
(RFID) development community in 1999, and it has recently become more relevant to the
practical world largely because of the growth of mobile devices, embedded and ubiquitous
communication, cloud computing and data analytics.
Since then, many visionaries have seized on the phrase “Internet of Things” to refer to
the general idea of things, especially everyday objects, that are readable, recognizable,
locatable, addressable, and/or controllable via the Internet, irrespective of the communication
means (whether via RFID, wireless LAN, wide- area networks, or other means). Everyday
objects include not only the electronic devices we encounter or the products of higher
technological development such as vehicles and equipment but things that we do not ordinarily
think of as electronic at all - such as food and clothing. Examples of “things” include :
• People;
• Location (of objects);
• Time information (of objects);
• Condition (of objects).
These "things" of the real world shall seamlessly integrate into the virtual world, enabling anytime, anywhere connectivity. In 2010, the number of everyday physical objects and devices connected to the Internet was around 12.5 billion. Cisco forecast that this figure would double to 25 billion by 2015, as the number of smart devices per person increases, and reach 50 billion by 2020. With more physical objects and smart devices connected in the IOT landscape, the impact and value that IOT brings to our daily lives become more prevalent. People make better decisions, such as taking the best routes to work or choosing their favourite restaurant. New services can emerge to address societal challenges such as remote health monitoring for elderly patients and pay-as-you-use services. For government, the convergence of data sources on shared networks improves nationwide planning, promotes better coordination between agencies and facilitates quicker responsiveness to emergencies and disasters. For enterprises, IOT brings tangible business benefits from improved management and tracking of assets and products, new business models and cost savings achieved through the optimization of equipment and resource usage. But as IOT provides the many services described above, there are also more chances of threats, particularly around security with respect to authorization and authentication.
Today, various encryption and authentication technologies such as Rivest Shamir Adleman (RSA) and message authentication codes (MAC) protect the confidentiality and authenticity of transaction data as it "transits" between networks. Encryption such as full disk encryption (FDE) is also applied to user data "at rest" to prevent unauthorized access and data tampering.
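As a small standard-library illustration of the MAC idea only (not the transport security used by any particular IOT stack), the sketch below computes and verifies an HMAC over a transaction payload; the shared key and payload are hypothetical, and in practice the key would come from a proper key-management or key-exchange mechanism such as IKEv2.

import hmac
import hashlib

# Hypothetical pre-shared key; in a real IOT deployment this would be
# provisioned or negotiated, never hard-coded in the application.
SECRET_KEY = b"device-42-shared-secret"

def sign(payload: bytes) -> str:
    """Compute a message authentication code over the transaction payload."""
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, tag: str) -> bool:
    """Check integrity/authenticity; compare_digest avoids timing leaks."""
    return hmac.compare_digest(sign(payload), tag)

message = b'{"meter_id": 42, "reading_kwh": 17.3}'
tag = sign(message)
print(verify(message, tag))                                   # True  - untampered
print(verify(b'{"meter_id": 42, "reading_kwh": 99.9}', tag))  # False - tampered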
In future, new standards and technologies should address security and privacy features
for users, network, data and applications.
In areas of network protocol security, Internet Protocol Version 6 (IPv6) is the next
generation protocol for the Internet; it contains addressing and security control information,
i.e., IPSec to route packets through the Internet. In IPv4, IPSec is optional and connecting
computers (peers) do not necessarily support IPsec. With IPv6, IPSec support is integrated into
the protocol design and connections can be secured when communicating with other IPv6
devices. IPSec provides data confidentiality, data integrity and data authentication at the
network layer, and offers various security services at the IP layer and above.
These security services are, for example, access control, connectionless integrity, data
origin authentication, protection against replays (a form of partial sequence integrity),
confidentiality (encryption), and limited traffic flow confidentiality. Other IP-based security
solutions such as Internet Key Exchange (IKEv2) and Host Identity Protocol (HIP) are also
used to perform authenticated key exchanges over IPSec protocol for secure payload delivery.
At the data link layer, Extensible Authentication Protocol (EAP) is an
authentication framework used to support multiple authentication methods. It runs
directly on the data link layer, and supports duplicate detection and re-transmission. In order to enable network access authentication between clients and the network
infrastructure, a Protocol for carrying Authentication for Network Access (PANA) forms
the network-layer transport for EAP. In EAP terms, PANA is a User Datagram Protocol
(UDP)-based EAP lower layer that runs between the EAP peer and the EAP
authenticator.
Application area for IOT:
Below are some of the IOT applications that can be developed in the various industry
sectors (these applications are not exhaustive).
Supply Chains
Dynamic Ordering Management Tool
Traditionally, order-picking management in the warehouse picks up multiple types of commodities to satisfy independent customer demands. The order picker (working manually) tries to minimize the travelling distance, for time and energy savings, via route optimization and order consolidation. Using the dynamic ordering tool, the network of smart objects will identify the types of commodities and decompose the order-picking process into distributed sub-tasks based on area divisions. The application will plan the delivery routes centrally before activating order pickers for the delivery. Using executable algorithms in active tags, the tags can choose the best paths for the order pickers to take, as well as paths that are within their responsible areas. This results in more optimized order processing, time savings and a lower cost of delivery.
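A greedy nearest-neighbour heuristic is one very simple way to sketch the route-optimization step described above; the pick locations below are hypothetical (x, y) coordinates within one picker's area, and a real system would use a proper routing algorithm.

import math

# Hypothetical pick locations (x, y) within one picker's area of the warehouse.
picks = {"A": (0, 5), "B": (3, 1), "C": (6, 4), "D": (2, 8)}
start = (0, 0)

def nearest_neighbour_route(start, locations):
    """Greedy heuristic: always walk to the closest unvisited pick location."""
    route, current, remaining = [], start, dict(locations)
    while remaining:
        name = min(remaining, key=lambda n: math.dist(current, remaining[n]))
        route.append(name)
        current = remaining.pop(name)
    return route

print(nearest_neighbour_route(start, picks))  # ['B', 'C', 'D', 'A']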
Government
Crowd Control during Emergencies and Events
The crowd control application will allow relevant government authorities to estimate the number of people gathering at event sites and determine whether action needs to be taken during an emergency. The application would be installed on mobile devices, and users would need to agree to share their location data for the application to be effective. Using location-based technologies such as cellular, WiFi and GPS, the application will generate virtual "heat maps" of crowds. These maps can be combined with sensor information obtained from street cameras, motion sensors and officers on patrol to evaluate the impact of the crowded areas. Emergency vehicles can also be informed of the best possible routes to take, using information from real-time traffic sensor data, to avoid being stuck among the crowds.
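Such a virtual "heat map" can be approximated by binning the reported device locations into grid cells and counting devices per cell; the coordinates, cell size and crowd threshold below are arbitrary assumptions for illustration.

from collections import Counter

# Hypothetical (latitude, longitude) reports from users who opted in.
reports = [(18.5204, 73.8567), (18.5203, 73.8569), (18.5301, 73.8450),
           (18.5201, 73.8563), (18.5202, 73.8561)]

CELL = 0.001  # grid cell size in degrees (roughly 100 m); arbitrary choice

def cell_of(lat, lon):
    """Map a coordinate to its grid cell."""
    return (int(lat // CELL), int(lon // CELL))

heat_map = Counter(cell_of(lat, lon) for lat, lon in reports)

# Cells whose count exceeds a threshold indicate crowded areas.
crowded = {cell: count for cell, count in heat_map.items() if count >= 3}
print(crowded)  # one crowded cell with four devices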
Intelligent Lampposts
The intelligent streetlamp is a network of streetlamps that are tied together in a WAN
that can be controlled and monitored from a central point, by the city or a third party. It
captures data such as ambient temperature, visibility, rain, GPS location and traffic density
which can be fed into applications to manage road maintenance operations, traffic
management and vicinity mapping. With the availability of such real-time data, government
can respond quicker to changing environments to address citizen needs.
Retail
Shopping Assistants
In the retail sector, shopping assistant applications can be used to locate appropriate
items for shoppers and provide recommendations of products based on consumer preferences.
Currently, most of shopping malls provide loyalty cards and bonus points for purchases made
in their stores but the nature of these programs are more passive, i.e., they do not interact with,
and often do not make any recommendations for, the buyers.
The application can reside in the shopper’s personal mobile devices such as tablets and
phones, and provide shopping recommendations based on the profile and current mood of the
shopper. Using context-aware computing services, the application captures data feeds such as
promotions, locations of products and types of stores, either from the malls’ websites or open
API if the mall allows it. Next the application attempts to match the user’s shopping
requirements or prompts the user for any preferences, e.g., “What would you like to buy
today?” If the user wants to locate and search for a particular product in the mall, the
application guides the user from the current location to the destination, using local-based
technology such as WiFi embedded on the user’s mobile phone.
Healthcare
Elderly Family Member Monitoring
This application creates the freedom for the elderly to move around safely outdoors, with family members being able to monitor their whereabouts. The elderly sometimes lose their way or are unable to identify familiar surroundings to recall their way back home, and family members who do not know their relatives' whereabouts may be at a loss as to where and how to start searching. The application can be a tiny piece of wearable hardware, such as a coil-on-chip tag attached to the elderly person. This tag will be equipped with location-based sensors to report the paths that the wearer has travelled. It can emit signals to inform family members if the wearer ventures away from predetermined paths, and it can also detect deviations in their daily routines. Family members can also track the location of the elderly person online via a user interface (UI) application.
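A minimal geo-fence check of this kind compares the tag's reported position with way-points on the wearer's usual route; the coordinates and the 200-metre alert radius below are purely illustrative assumptions.

import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371000 * 2 * math.asin(math.sqrt(a))

# Hypothetical way-points of the wearer's usual walk (home -> park -> shop).
USUAL_PATH = [(18.5204, 73.8567), (18.5215, 73.8580), (18.5230, 73.8595)]
ALERT_RADIUS_M = 200  # illustrative threshold

def off_route(position):
    """True if the reported position is far from every usual way-point."""
    return all(haversine_m(position, wp) > ALERT_RADIUS_M for wp in USUAL_PATH)

print(off_route((18.5216, 73.8581)))  # False - close to the usual path
print(off_route((18.5500, 73.9000)))  # True  - family members could be notified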
Continuous Patient Monitoring
Continuous patient monitoring can be an extension to the “Elderly Family Member
Monitoring” application; this application, however, requires the medical services companies to
support it. Continuous patient monitoring requires the use of medical body sensors to monitor
vital body conditions such as heartbeat, temperature and sugar levels. The application
examines the current state of the patient’s health for any abnormalities and can predict if the
patient is going to encounter any health problems. Analytics such as predictive analytics and
CEP can be used to extrapolate information to compare against existing patterns and statistics
to make a judgment. Energy harvesting sensors can be used to convert physical energy to
electrical energy to help power these sensors to prevent the patient from having to carry bulky
batteries or to perform frequent re-charging.
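A very simple form of the analytics described above is a check of each new reading against the patient's own recent baseline; the numbers below are illustrative only and are not clinical guidance.

from statistics import mean, stdev

# Illustrative heart-rate stream (beats per minute) from a body sensor.
readings = [72, 75, 71, 74, 73, 76, 74, 118]

def is_abnormal(history, new_value, window=6, z_limit=3.0):
    """Flag a reading far outside the patient's own recent baseline."""
    recent = history[-window:]
    if len(recent) < 2:
        return False
    mu, sigma = mean(recent), stdev(recent)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > z_limit

history = readings[:-1]
print(is_abnormal(history, readings[-1]))  # True  - 118 bpm deviates from the baseline
print(is_abnormal(history, 75))            # False - within the usual range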
Smart Pills
Smart pills are essentially ingestible sensors that are swallowed and can record various
physiological measures. They can also be used to confirm that a patient has taken his or her
prescribed medication, and can measure the effects of the medication.
Need of Security in IOT:
The Internet of Things (IOT) is not a futuristic technology trend: It’s here today, and it
starts with your things — your devices and sensors, the data they produce, your cloud services
and business intelligence tools. That’s the Internet of Your Things. By implementing a
strategy to capitalize on the Internet of Things, you can stop just running your business and
start making it thrive.
1. Get a jump-start on your competition.
Cut a food-service inspection process in half. Give doctors and nurses access to patient
records in a fraction of the time. Enable online grocery shopping with one-hour delivery.
Businesses are already taking advantage of the Internet of Things by connecting their devices
to create new insights from data that help them transform their business. It’s time to create an
Internet of Things strategy so your business can lead, instead of fighting to catch up.
2. Get more out of your existing IT assets.
Start with your existing IT assets and build upon them. Add a few new devices, connect
them to the cloud, and enable them to talk to each other, to your employees and to customers.
Transform your business by utilising the data those devices generate with business intelligence
tools to have deeper insight into what your customers and employees want and need.
3. Enable small changes to make a big impact.
The Internet of Your Things starts with identifying the one process, product line or
location that matters most to you, then making small changes for big impact. Connect robots
on the factory floor with back-end systems and create a production line with more continuous
uptime. Add expiration dates to the data set for pharmacy inventory and save thousands of
pounds in wasted medications. Connect one handheld device to your inventory system;
suddenly, you’ve got real-time customer service on the sales floor. The Internet of Things
doesn’t have to be overwhelming — a few key improvements can make a big difference.
4. Become more efficient.
Connecting devices and systems can help you shave minutes from a user’s login process,
hours from restocking inventory, or days from routine system upgrades and enhancements.
When data flows seamlessly between devices and through the cloud, you can access and use it
more efficiently than ever before. That means spending less time pulling reports, and more
time creating new services and products based on your new insights.
5. Discover new ways to delight your customers.
From the least-used fitting room in the store to the keywords that drive the strongest
coupon sales, every piece of data is a clue to the products and experiences your customers are
seeking. Visualize emerging patterns and predict behavior to anticipate trends and give your
customers what they want, before they even know they want it.
6. Open up new business opportunities.
Connecting devices, data and people gives you faster processes and fresh insight,
resulting in new business opportunities. Combining GPS with automated kiosks and RFID-enabled check-in lets motorists join a car-sharing service and drive away in minutes.
Automating the stonecutting process frees up craftsmen to meet increased demand without
sacrificing quality. The insights you get from your data help you see new possibilities.
7. Increase agility.
Data insights can help you respond more quickly to competition, supply chain changes,
customer demand and changing market conditions. Collecting and analyzing data gives you
quick insight into trends, so you can change your production activity, fine-tune your
maintenance schedule or find less expensive materials. With the Internet of Things, you can
spend less time wondering and more time taking action.
8. Build the ability to scale.
New ideas are born when you work with new partners, new technologies, new devices
and new data streams. You suddenly put your employees and technology to work together in
ways never before imagined. New data opportunities let you shift your focus from repairing
machines to fine-tuning their performance over the long term. Comparing results from
different store locations lets you identify the most successful services and roll them out
nationwide. The Internet of Things lets you scale from the smallest data point to global
deployments.
9. Get your devices to start talking.
Devices have the potential to say a lot, but only if there’s someone, or something, on the
other end to engage, react and listen. Sensors can tell your distribution center systems which
merchandise routes are plagued with delays. Machine-generated data can tell your operations
teams which remote service kiosks will need repairs the soonest. From sensors to handheld
scanners to surgical instruments, the devices in your business can create efficiency and insight,
if you enable them to talk to each other, your employees and customers.
10. Transform your business.
When you have a strategy in place to take advantage of the Internet of Things — and
you team up with the one company that can provide the right platforms, services, tools and
partner ecosystem — you can transform your business in real time. Microsoft and its partners
have the technology and the experience to help you put the Internet of Your Things to work in
your business today, so you can stop just running your business and start making it thrive.
Security aspects/tools:
Privacy and Security
As the adoption of IOT becomes widespread, the data that is captured and stored becomes huge.
One of the main concerns that the IOT has to address is privacy. The most important
challenge in convincing users to adopt emerging technologies is the protection of data and
privacy. Concerns over privacy and data protection are widespread, particularly as sensors and
smart tags can track user movements, habits and ongoing preferences.
Invisible and constant data exchange between things and people, and between things and
other things, will take place, unknown to the owners and originators of such data. IOT
implementations would need to decide who controls the data and for how long.
The fact that in the IOT a lot of data flows autonomously and without human knowledge makes it very important to have authorization protocols in place to avoid the misuse of data. Moreover, protecting privacy must not be limited to technical solutions, but must encompass regulatory, market-based and socio-ethical considerations. Another area of protecting data privacy is the rising phenomenon of the "Quantified Self", where people exercise access control over their own personal data, e.g. food consumed, distance travelled, personal preferences.
These groups of people gather data from their daily lives and grant trusted third-party applications access to their data in exchange for benefits such as free data storage and analysis. These third-party applications or providers either do not have access to the raw data or have commercial relationships with these consumers, and hence cannot use the personal data for other purposes.
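As one hedged illustration of the authorization idea discussed above (an assumption of ours, not a mechanism described in the paper), a data owner could issue a scoped, time-limited, HMAC-signed token to a third-party application, and the data service would verify the token before releasing any data. The key, application name and scope names below are hypothetical.

# Minimal sketch of scoped, expiring access tokens for consented data sharing.
import hmac
import hashlib
import json
import time
import base64

SECRET = b"owner-held-secret-key"  # hypothetical key held by the data owner/service

def issue_token(app_id, scope, ttl_seconds):
    """Create a signed token naming the allowed scope and an expiry time."""
    claims = {"app": app_id, "scope": scope, "exp": int(time.time()) + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_token(token, required_scope):
    """Accept the token only if the signature, expiry and scope all check out."""
    payload_b64, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload_b64.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered token or wrong key
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims["exp"] > time.time() and claims["scope"] == required_scope

# Example: allow a fitness app to read step counts for one hour only.
token = issue_token("fitness-app", scope="read:steps", ttl_seconds=3600)
print(verify_token(token, "read:steps"))     # True
print(verify_token(token, "read:payments"))  # False: scope was never granted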
In the retail/consumer example, data collected from users can range from location data,
user preferences, payment information to security parameters. This data gives insight into the
lives of the users and hence, appropriate privacy and security mechanisms have to be in place
to protect the use and dissemination of the data.
With new IOT applications being developed from evolving data that has been processed
and filtered, IOT systems must be able to resolve the privacy settings from this evolving data
and also for corresponding applications.
Counter measures for IOT:
The Advantages of IOT
1. Information: In my opinion, it is obvious that having more information helps in making better decisions. Whether it is a mundane decision such as what to buy at the grocery store, or whether your company has enough widgets and supplies, knowledge is power and more knowledge is better.
2. Monitor: The second most obvious advantage of IoT is monitoring. Knowing the exact quantity of supplies or the air quality in your home can provide further information that could not previously have been collected easily. For instance, knowing that you are low on milk or printer ink could save you another trip to the store in the near future. Furthermore, monitoring the expiration of products can and will improve safety (a toy monitoring rule is sketched after this list).
3. Time: As hinted in the previous examples, the amount of time saved because of IoT could be quite large. And in today's modern life, we all could use more time.
4. Money: In my opinion, the biggest advantage of IoT is saving money. If the price of the tagging and monitoring equipment is less than the amount of money saved, then the Internet of Things will be very widely adopted.
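Referring back to the Monitor item above, here is a toy sketch, purely our own illustration, of the monitoring idea: compare the latest readings against reorder thresholds and report what needs restocking. The item names and threshold values are assumptions.

# Toy monitoring rule: flag items whose level has dropped below a threshold.
inventory_readings = {"milk_litres": 0.4, "printer_ink_pct": 12, "widgets": 430}
reorder_thresholds = {"milk_litres": 1.0, "printer_ink_pct": 15, "widgets": 100}

def items_to_restock(readings, thresholds):
    """Return the items whose current level has fallen below its threshold."""
    return [name for name, level in readings.items() if level < thresholds[name]]

print(items_to_restock(inventory_readings, reorder_thresholds))
# ['milk_litres', 'printer_ink_pct']  (widgets are still above their threshold)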
The Disadvantages of IOT
1. Compatibility: Currently, there is no international standard of compatibility for the tagging and monitoring equipment. I believe this disadvantage is the easiest to overcome. The manufacturers of this equipment just need to agree on a standard, such as Bluetooth, USB, etc. Nothing new or innovative is needed.
2. Complexity: As with all complex systems, there are more opportunities for failure. With the Internet of Things, failures could skyrocket. For instance, let's say that both you and your spouse each get a message saying that your milk has expired; both of you stop at a store on your way home, and you both purchase milk. As a result, you and your spouse have purchased twice the amount that you need. Or maybe a bug in the software ends up automatically ordering a new ink cartridge for your printer every hour for a few days, or at least after each power failure, when you only need a single replacement.
3. Privacy/Security: With all of this IoT data being transmitted, the risk of losing privacy increases. For instance, how well will the data be encrypted in storage and in transit? Do you want your neighbors or employers to know what medications you are taking or your financial situation?
4. Safety: Imagine if a notorious hacker changes your prescription. Or if a store automatically ships you an equivalent product that you are allergic to, or a flavor that
you do not like, or a product that is already expired. As a result, safety is ultimately in
the hands of the consumer to verify any and all automation.
REFERENCES:
 Overview of the Internet of Things. Available: http://www.itu.int/rec/T-REC-Y.2060201206-P/en
IOT Security issues research papers:
 [Shipley13] Shipley, AJ. "Security in the Internet of Things." Wind River, Sept. 2013. Web. http://www.windriver.com/whitepapers/security-in-the-internet-of-things/wr_security-in-the-internet-of-things.pdf
 http://www.internet-of-thingsresearch.eu/pdf/Converging_Technologies_for_Smart_Environments_and_Integrated_Ecosystems_IERC_Book_Open_Access_2013.pdf
 https://www.cisco.com/web/about/ac79/docs/innov/IoT_IBSG_0411FINAL.pdf
 https://www.ida.gov.sg/~/media/Files/Infocomm%20Landscape/Technology/TechnologyRoadmap/InternetOfThings.pdf
 http://www.gsma.com/connectedliving/wpcontent/uploads/2014/08/cl_iot_wp_07_14.pdf
 http://www.ti.com/lit/ml/swrb028/swrb028.pdf
 https://cache.freescale.com/files/32bit/doc/white_paper/INTOTHNGSWP.pdf
Ethical Hacking: A need of an Era for Computer Security
Mr. Ravikiant Kale
Assistant Professor
NBN Sinhgad School Of
Computer Studies
Pune, Maharashtra.
ravikant1010.kale@gmail.com
Mr. Amar Shinde
Assistant Professor
NBN Sinhgad School Of
Computer Studies,
Pune, Maharashtra
amarslives@gmail.com
ABSTRACT :
"Ethical Hacking" - is the term an oxymoron, or it’s one of today's necessities in
the fight against cybercrime. There is an exponential growth in the internet which makes
many new practices that are collaborative computing, e-mail, and new avenues for
information distribution. Along with the most technological advances, there is also a
dark side i.e. Criminal hacker. Many users like citizens, private/public organizations
and different government bodies are afraid about their privacy and security as one can
break into their Web server and replace their logo with pornography, steal their
customer’s credit card information from an on-line shopping/e-commerce site, or
implant software that will secretly transmit their organization’s secrets to the open
Internet. This paper describes about ethical hackers: their skills, attitudes, and how they
go about helping their customers to find and plug up security breaches. It also focuses
on importance of “Ethical Hacking” in curricula.
Keyword : Ethical Hacking, Curricula, Privacy and Security, Collaborative computing,
Cyber Crime.
Introduction:
In a societal structure we have many streams, and one amongst them is Ethical Hacking. In India, Ethical Hacking is at a very budding stage. A large number of companies are undertaking these activities superficially and promoting them in the media. It is expected that society, activist groups, Government and corporate sectors should work together to create appropriate means and avenues for the marginalized and bring them to the mainstream. The
success of Ethical Hacking lies in practicing it as a core part of a company’s development
strategy. It is important for the corporate sector to identify, promote and implement successful
policies and practices that achieve awareness. At one end of the spectrum, Ethical Hacking can
be viewed simply as a collection of good citizenship activities being engaged by various
organizations. At the other end, it can be a way of doing business that has significant impact
on society. That is to say, public and private organizations will need to come together to set
standards, share best practices, jointly promote Ethical Hacking, and pool resources where
useful.
Computer hacking is the practice of modifying computer hardware and software to
accomplish a goal outside of the creator’s original purpose. The term “hacker” was originally
associated with computer enthusiasts who had a limitless curiosity for computer systems. The
term “cracker” came about to create a distinction between benevolent and malevolent hackers.
INCON - XI 2016
217
ASM’s International E-Journal on
E-ISSN – 2320-0065
Ongoing Research in Management & IT
Hackers, crackers, intruders and attackers are all interlopers who are trying to break into your networks and systems, and all of these terms now represent unauthorized activity that is intended to inflict damage. While early hacking activity was primarily focused on exploration and intellectual challenge, evidence shows that hackers are increasingly turning their attention to financial gain. Case lists of computer and intellectual property crimes can be found on the government's cybercrime website [7].
Fig. 1: Five Stages of Cyber Crime
Not only youngsters and teenagers but also adults are involved in this activity. Some do it for fun, some do it for profit, and some simply do it to disrupt your operations and perhaps gain some recognition. They all have one thing in common: they are trying to uncover a weakness in your system in order to exploit it. For many hackers, hacking is an "art" which they enjoy. The process of hacking is described in Fig. 1. For these individuals, computer hacking is a real-life application of their problem-solving proficiencies. It is a chance to demonstrate their abilities, not an opportunity to harm others. The majority of hackers are self-taught, so companies employ them as technical support staff. These individuals use their skills to find flaws in the company's security system so that problems can be overcome quickly. In many cases, this type of computer hacking helps prevent identity theft and other serious computer-related crimes.
Hackers who are out to steal personal information, change a corporation’s financial data,
break security codes to gain unauthorized network access, or conduct other destructive
activities are sometimes called “crackers.”
As noted in the article by Sandyha SM, “Ethical Hacking: The network sentinels” [8]
“Ethical Hacking is a process of simulating an attack by a hacker but without interrupting the
systems' function.”
Ethical Hacking And Education:
Today, public education faces the mounting challenges of standardized testing, strained
budgets etc. At the same time, corporate America has added pressure to prove itself to
consumers, investors, and government regulators. These demands have given way to new
opportunities for businesses to support education in a win-win situation that benefits everyone.
Teaching a student to hack and later discover that knowledge was used to commit crimes will
definitely have an impact on society, raising the question of why he was allowed to learn how to hack in the first place; but we cannot simply pinpoint our argument to say that it was the fault of the course leaders who allowed him to undertake the course [12]. The bottom line is that educational outreach efforts have the potential to make a real and lasting difference for all players involved. Graduates, professionals, students and even the general public can benefit from the experience and expertise that corporations bring when education is an important part of their plans, since the needs exist in all geographic areas and across all subject areas, particularly if the groups work together to ensure the right needs are being met on both ends. As long as they address the right needs, businesses have the ability to make a tremendous impact by providing highly engaging resources and by building strong connections with instructional needs. There is a need for "high academic standards, best educational practices, collaborative relationships, and the expertise of a longstanding partner to become a global technology leader" [13]. Companies therefore have to develop curriculum programs for aspiring entrepreneurs and college students.
Ethical Hacking and its technology:
The world has undergone a tremendous change with the advent and proliferation of
information communication technologies (ICT) such as the internet, email and wireless
communication, whose impact (both positive and negative) is perceived in every sector of
society and every corner of the globe. In this new era of the knowledge society that has emerged, Ethical Hacking has come to the fore. In society there is a need to make people aware of this new technology, right from graduate and undergraduate students. Long-term strategies are forsaken in favor of short-term frameworks, which yield measurable outcomes. A policy framework is required for creating an ambience for the accelerated flow of investment into the IT sector, with a specific orientation towards the software industry. Bridging the digital divide is not realized merely by creating a pool of IT experts, but also through the spread of the basics of IT education and the usage of computers in a localized manner, right from the school level, in a manner affordable and accessible to the teeming majority.
In India there are many companies catering to the field of Ethical Hacking education and training, like Apple, Kyrion Digital Securities, CEH, etc. There are also government organizations like the Ethical Hacking Council of India, which provides CSI (Computer Society of India) certification courses and functional services to agencies like the Navy, Air Force, etc.
Recent and upcoming courses in Ethical Hacking:

Course Name                          Location                 Date & Year
Community SANS New York              New York, NY             Aug 18, 2014 - Aug 23, 2014
SANS Chicago 2014                    Chicago, IL              Aug 24, 2014 - Aug 29, 2014
SANS Seattle 2014                    Seattle, WA              Sep 29, 2014 - Oct 06, 2014
SOS: SANS October Singapore 2014     Singapore, Singapore     Oct 07, 2014 - Oct 18, 2014
SANS Perth                           Perth, Australia         Oct 13, 2014 - Oct 18, 2014
Community SANS Sacramento            Sacramento               Aug 11, 2014 - Aug 16, 2014
Cyber Defence Summit & Training      Nashville                Aug 13, 2014 - Aug 20, 2014
What is the Need for Ethical Hacking?
Business organizations should adopt security mechanisms (policies, processes and architectures) to secure their information, as they outsource many things and take the help of new technologies like cloud computing and virtualization. Due to the increasing need for information security, organizations have to take the help of Ethical Hacking to secure their data, which is one of the popular security practices.
Ethical Hacking is a preventative measure which consists of a chain of legitimate tools that identify and exploit a company's security weaknesses [3]. It uses the same techniques as malicious hackers to attack key vulnerabilities in the company's security system, which can then be mitigated and closed. As a result, Ethical Hacking is an effective tool that can help assist CA professionals to better understand the organization's information systems and its strategy, as well as to enhance the level of assurance and IS audits if used properly.
Hackers can be divided into three groups: (1) White hats, (2) Black hats, and (3) Grey hats. According to author Kimberley Graves (2007), "Ethical hackers usually fall into the white-hat category, but sometimes they're former gray hats who have become security professionals and who use their skills in an ethical manner." [1] Graves offers the following description of the three groups of hackers:
1. White Hats: They are the good guys, the ethical hackers who use their hacking skills for protective purposes. White-hat hackers are usually security professionals with knowledge of hacking and the hacker toolset, who use this knowledge to locate weaknesses and implement counter measures.
2. Black Hats: They are considered the bad guys; the malicious hackers or crackers use their skills for illegal or malicious purposes. They break into or otherwise violate the system integrity of remote machines with malicious intent. Having gained unauthorized access, black-hat hackers destroy vital data, deny legitimate users service, and basically cause problems for their targets.
3. Grey Hats: These are hackers who may work offensively or defensively, depending on the situation. This is the dividing line between hacker and cracker. Both are powerful forces on the Internet, and both will remain permanently. Some individuals qualify for both categories.
The focus of this paper will be on the "white hat hackers". In a white paper recently released by Frost & Sullivan, "The Importance of Ethical Hacking: Emerging Threats
Emphasize the Need for Holistic Assessments”, Ethical Hacking benefits were discussed aside
from its role, solutions, and technical concerns. According to the report, the success and
sophistication of cyber attacks can be directly traced to the funding of well-trained, highly
motivated, and well organized groups of programmers by huge criminal organizations and
nation states. Due to the increased threat, there is now a need for a more comprehensive
assessment of a company’s security measures. It is important for the business entity to set up
effective infrastructure, procedures, and security policies in order to prevent or reduce the
effects of data hacking.
The main objective of Ethical Hacking is to analyze the business's security mechanisms. But it is observed that an organization often does not have enough knowledge about its own systems beyond those which it can easily tap. Ethical hackers typically scan for weaknesses, prioritize objectives, test entry points, and create a plan that puts their resources to the best advantage; a minimal illustration of the scanning step is sketched after this paragraph. After the security measures are assessed, the Ethical Hacking company then offers advice tailored to the business's unique security objectives, capabilities, and IT environment. The business entity can then fine-tune its security tools, make adjustments to its security policy and efforts, and identify any required training.
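As referenced above, the scanning step can be illustrated with a minimal sketch (our own example, not a tool described in the paper) that checks which TCP ports on a host accept connections. Such a check must only ever be run against systems for which written authorization to test has been obtained; the target address and port list below are assumptions.

# Minimal TCP connect-scan sketch using only the Python standard library.
import socket

def scan_ports(host, ports, timeout=1.0):
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds (port open).
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Only ever point this at systems you are explicitly authorized to test.
    target = "127.0.0.1"                       # assumed lab machine
    common = [21, 22, 25, 80, 443, 3306, 8080]
    print("Open ports:", scan_ports(target, common))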
According to the report, businesses are still skeptical about taking advantage of Ethical Hacking services because they are not comfortable allowing a third party to access their sensitive resources and systems. To alleviate this fear, it is best for organizations to employ an Ethical Hacking service which has implemented practices that guarantee confidentiality and privacy. The Ethical Hacking company must be accredited by the EC-Council and the International Information Systems Security Certification Consortium.
Need of Ethical Hacking in curriculum:
Every organization needs skilled employees who can find vulnerabilities and mitigate
their impacts, and this whole course is specially designed to get you ready for that role. The
course starts with proper planning, scoping and reconnaissance, and then dives deep into scanning, target exploitation, password attacks, and wireless and web apps. The course is chock-full of practical, real-world tips from some of the world's best penetration testers to help you do your job masterfully, safely, and efficiently.
Need Of Training For Professionals:
As a cyber security professional, you have a unique responsibility to find and understand
your organization's vulnerabilities and to work diligently to mitigate them before the bad guys
pounce [5]. This subject area fully arms you to address this duty head-on.
With comprehensive coverage of tools, techniques, and methodologies for network, web
app, and wireless testing, the subject truly prepares you to conduct high-value penetration
testing projects end-to-end, step-by-step [6]. Every organization needs skilled personnel who
can find vulnerabilities and mitigate their impacts, and this whole course is specially designed
to get you ready for that role.
Best Practices Against Bad-Guy Attacks:
With a working knowledge of hacking, you get ready to conduct a full-scale, high-value penetration test against bad-guy attacks. After building your skills in awesome labs, the course culminates with a final full-day, real-world penetration test scenario. You'll conduct an end-to-end pen test, applying knowledge, tools, and principles from throughout the course as you
discover and exploit vulnerabilities in a realistic sample target organization, demonstrating the
knowledge you’ve mastered in this course.
Research Methodology:
Looking into the requirements of the objectives of the study, the research design employed is of a descriptive type. Keeping in view the set objectives, this research design was adopted to achieve greater accuracy and an in-depth analysis of the research study. Available secondary data was extensively used for the study. Different news articles, international and national journals, books and the Web were consulted, enumerated and recorded.
Issues and Challenges: Ethical Hacking
There are numerous legal issues surrounding Ethical Hacking, many of which are
common sense.

The Computer Fraud and Abuse Act, 18 USC Section 1030, prohibits actions most
hackers take. “While the development and possession of harmful computer code is not a
criminal act, using the code can be.” [9] The events of September 11th have helped
accelerate changes to cybercrime laws and the desire to adopt these laws [4]. Fifteen EU
member states are currently working to create consistent definitions and penalties for
computer crimes covering unauthorized access, denial of service, and destruction caused
from viruses and worms [10].

It is critical to ensure the legality of all activities and work prior to beginning an Ethical
Hacking project. Additionally, within some jurisdictions the tests you may be
considering could be viewed as illegal, therefore, you will want to have authorization
from your organization in writing prior to conducting the exercise. This written
permission will provide you with the necessary “get out of jail free card” in the event
that you need it.
 Issues of Transparency: Lack of transparency is one of the key issues brought forth by the survey. Companies express that there is a lack of transparency on the part of the local implementing agencies, as they do not make adequate efforts to disclose information on their programs, audit issues and impact assessments. This reported lack of transparency negatively impacts the process of trust building between companies and local communities, which is key to the success of any Ethical Hacking initiative at the local level.

Visibility Factor: The role of media in highlighting good cases of successful penetration
testing initiatives is welcomed as it spreads good stories and sensitizes the local
population about various ongoing Ethical Hacking initiatives of companies. This
apparent influence of gaining visibility and branding exercise often leads many
nongovernmental organizations to involve themselves in event-based programs.

Narrow Perception towards White Hat Initiatives: Non-governmental organizations
and Government agencies usually possess a narrow outlook towards the Ethical Hacking
initiatives of companies, often defining.
 Non-availability of Clear Hacking Guidelines: There are no clear-cut statutory guidelines or policy directives to give a definitive direction to white-hat hacking initiatives of companies; such initiatives also depend upon their business size and profile.
Benefits of Ethical Hacking:
INCON - XI 2016
222
ASM’s International E-Journal on
E-ISSN – 2320-0065
Ongoing Research in Management & IT
This type of "test" can provide convincing evidence of real system- or network-level threat exposures through proof of access. Even though these findings may be somewhat negative, they can prompt an organization to be proactive in improving its overall security system [14]. A mature information security program is a combination of policies, technical system and network standards, configuration settings, scrutiny, and inspection practices.

An ethical hack, which tests beyond operating system and network vulnerabilities,
provides a broader view of an organization’s security.

“Tests” of this sort could also identify weakness such as the fact that many systems
security administrators may not be as aware of hacking techniques as are the hackers
they are trying to protect against.
 These findings could enhance and promote better communication between system administrators and technical support staff and help to identify training needs.
 Traditional investigation work mainly deals with the possibility of a threat; this often leads to a casual view of the threat, whereas this type of testing immediately addresses the requirements.
Limitations of Ethical Hacking:
Ethical Hacking is based on the simple principle of finding the security vulnerabilities in
systems and networks before the hackers do, by using so-called “hacker” techniques to gain
this knowledge [14]. Unfortunately, the common definitions of such testing usually stop at the
operating systems, security settings, and “bugs” level.
 Limiting the exercise to the technical level by performing a series of purely technical tests makes an Ethical Hacking exercise no better than a limited "diagnostic" of a system's security.
 Time is also a critical factor in this type of testing. Hackers need patience when finding system vulnerabilities.
 Most probably a "trusted third-party" vendor is engaged to perform these tests, so time is always money.
 Provision of limited information always limits the testers. It is necessary to provide detailed information to a "third-party" vendor in order to speed up the process and to save time.
Suggestions:
 There is a need for training of security professionals and for including Ethical Hacking in graduate curricula.
 Awareness regarding security measures should be raised in academia and the business world.
 There is a need to change the mindset of black- and grey-hat hackers.
 Avoid exposing too much personal information.
 Cyber crime laws should be strict and rapidly applicable.
Conclusion:
The concept of Ethical Hacking is now firmly rooted on the global business agenda. But
in order to move from theory to concrete action, many obstacles need to be overcome. A key
challenge facing business is the need for more professionals who are reliable ethical hackers.
Transparency and dialogue can help to make a business appear more trustworthy. By using Ethical Hacking in the curriculum we can teach and build students' skills in information security. We can also motivate them by suggesting a number of employment sectors such as
penetration tester, network security specialist, ethical hacker, security consultant, site administrator and auditor. The Ethical Hacking stream opens the door to lucrative security positions in the government IT sector, as it is endorsed and used by the National Security Agency (NSA), the Committee on National Security Systems (CNSS) and the Department of Defense (DoD) as a benchmark to clear personnel and contractors with privileged access to sensitive information.
REFERENCES:
[1] Wilhelm, Thomas : "Professional Penetration Testing: Creating and Operating a Formal
Hacking Lab”
[2] Baker, Gary, and Simon Tang: "CICA ITAC: Using an Ethical Hacking Technique to
Assess Information Security Risk."
[3] Nemati, Hamid (2007): "The Expert Opinion." Journal of Information Technology Case
and Application Research 9.1: 59-64.
[4] N.B. Sukhai (2005): “Hacking And Cybercrime”, AT&T.
[5] R.D. Hartley: "Ethical Hacking: Teaching Students to Hack", East Carolina University.
[6] Pashel, Brian (2006): "Teaching Students to Hack: Ethical Implications in Teaching Students to Hack at the University Level."
[7] SANS Security Essentials GSEC Practical Assignment – Version, A Journeyman’s View
of Ethical Hacking.
[8] http://cloudtweaks.com/2012/04/how-important-is-ethical-hacking-for-enterprisesecurity-architecture
[9] http://www.gcu.ac.uk/study/undergraduate/undergraduate/cours.es/digital-securityforensics-and-ethical-hacking-subject-to-approval
[10] http://www.sans.org/course/network-penetration-testing-ethical-hacking
[11] http://www.cybercrime.gov/cccases.html.
[12] http://www.ciol.com/content/search/showarticle.asp?artid=21795
[13] http://www.pbs.org/wgbh/pages/frontline/shows/hackers/blame/crimelaws.html
[14] http://www.newsbytes.com/news/01/172314.html
A Review of Mobile Applications Usability Issues and Usability Models for
Mobile Applications in The Context of M-Learning
Dr. Manimala Puri
JSPM's Jayawant Institute of Management Studies, Tathawade, Pune, Maharashtra, India
manimalap@yahoo.com
Mr. Kamlesh Arun Meshram
JSPM's Jayawant Institute of Management Studies, Tathawade, Pune, Maharashtra, India
Kamlesh.meshram2007@gmail.com
ABSTRACT :
The purposes of the present study are to describe the usability issues of mobile learning applications based on existing usability models, and to discuss implications for future research in this area. This paper has three aims. The first aim is to provide an overview of the mobile learning concept and new advancements in the Teaching-Learning process. Secondly, it discusses the usability issues and reviews usability models for mobile applications in the context of mobile learning in higher education. The third aim is to review evidence gathered from the literature. The review of usability issues for mobile applications and of usability models will help researchers identify newer aspects for designing m-learning applications. Mobile learning is gaining recognition as it is accepted to be an effective Teaching-Learning technique for delivering lectures and acquiring knowledge, its main strength being ubiquitous learning. Usability issues play an important role in M-Learning in the context of higher education. These usability issues have prompted many researchers to further research mobile learning due to its potential in making teaching and learning more effective and promising. Finally, this paper reveals usability issues and the significance of usability models in the context of higher education.
Keyword : Mobile Learning, Usability Issue, Ubiquitous Learning, M-Learning
Application, Usability Models
Introduction :
Mobile phones have become a popular device in people's daily life and in business. Statistics show that nearly 4.4 billion mobile connections will exist by 2017 worldwide, and the number is increasing every day. Trends in Information Technology (IT) and purchasing policies indicate that individuals use their personal phone for work (Sean, 2006). The mobility business has become mainstream and it is predicted that there will be more than 1.3 billion mobile workers (Stacy et al., 2011). This situation has caused mobile applications to emerge as corporate IT initiatives that need to support organizational functions.
Usability is defined as the capability of a product to be understood, learned, operated and
be attractive to users when used to achieve certain goals with effectiveness and efficiency in
specific environments (Bevan, 1995; Hornbæk and Lai-Choong, 2007; International
Organization for Standardization, 2002). Usability of a product is normally demonstrated
through its interfaces. To ensure software products could meet this quality, a number of
usability guidelines and standards have been introduced. They however are generic rules to
guide the design and implementation for web and desktop applications. Usability guidelines
for mobile applications are still lacking and relatively unexplored and unproven (Azham and
INCON - XI 2016
225
ASM’s International E-Journal on
E-ISSN – 2320-0065
Ongoing Research in Management & IT
Ferneley, 2008; Gong and Tarasewich, 2004). Even though several usability guidelines for mobile applications do exist, they are isolated and disintegrated. This issue is critical, as existing usability guidelines are insufficient to design effective interfaces for mobile applications due to peculiar features and the dynamic application context in mobile (Glissmann, 2005; Holzinger and Errath, 2007; Azham and Ferneley, 2008; Chittaro, 2011).
Literature Review :
Nielsen J. (1994) identified five attributes of usability:

Efficiency: Resources expended in relation to the accuracy and completeness with which
users achieve goals;

Satisfaction: Freedom from discomfort, and positive attitudes towards the use of the
product.

Learnability: The system should be easy to learn so that the user can rapidly start getting
work done with the system;

Memorability: The system should be easy to remember so that the casual user is able to
return to the system after some period of not having used it without having to learn
everything all over again;

Errors: The system should have a low error rate, so that users make few errors during the
use of the system.
 In addition to this, Nielsen defines Utility as the ability of a system to meet the needs of the user. He does not consider this to be part of usability but a separate attribute of a system. If a product fails to provide utility then it does not offer the features and functions required; the usability of the product becomes superfluous as it will not allow the user to achieve their goals. Likewise, the International Organization for Standardization (ISO) defined usability as the "Extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" (Cavus, N., & Ibrahim, D., 2009).
This definition identifies 3 factors that should be considered when evaluating usability.

User: Person who interacts with the product;

Goal: Intended outcome;

Context of use: Users, tasks, equipment (hardware, software and materials), and the
physical and social environments in which a product is used.
Each of the above factors may have an impact on the overall design of the product and in
particular will affect how the user will interact with the system. In order to measure how usable a system is, the ISO standard outlines three measurable attributes (a brief illustrative computation follows this list):

Effectiveness: Accuracy and completeness with which users achieve specified goals;

Efficiency: Resources expended in relation to the accuracy and completeness with which
users achieve goals;

Satisfaction: Freedom from discomfort, and positive attitudes towards the use of the
product.
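The following short sketch (our own illustration, not part of the ISO standard or of the cited studies) shows one common way these three measures are operationalized from logged task sessions: effectiveness as the task completion rate, efficiency as completed tasks per minute of task time, and satisfaction as a mean rating. The session data are hypothetical.

# Worked example: computing effectiveness, efficiency and satisfaction.
from statistics import mean

# Hypothetical session log: (task completed?, time taken in seconds, 1-5 rating)
sessions = [
    (True, 95, 4),
    (True, 120, 5),
    (False, 200, 2),
    (True, 80, 4),
]

completed = [s for s in sessions if s[0]]
effectiveness = len(completed) / len(sessions)                    # share of goals achieved
efficiency = len(completed) / (sum(s[1] for s in sessions) / 60)  # completions per minute
satisfaction = mean(s[2] for s in sessions)                       # average subjective rating

print(f"Effectiveness: {effectiveness:.0%}")        # 75%
print(f"Efficiency: {efficiency:.2f} tasks/min")    # about 0.36
print(f"Satisfaction: {satisfaction:.1f} / 5")      # 3.8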
Unlike Nielsen’s model of usability, the ISO standard does not consider Learnability,
Memorability and Errors to be attributes of a product’s usability although it could be argued
that they are included within the definitions of Effectiveness, Efficiency and Satisfaction.
INCON - XI 2016
226
ASM’s International E-Journal on
E-ISSN – 2320-0065
Ongoing Research in Management & IT
This research study defines "Usability" as the capability of a product to be understood, learned, operated and be attractive to users when used to achieve certain goals with effectiveness and efficiency in specific environments. Usability of a product is normally demonstrated through its interfaces. This issue is critical as existing usability guidelines are insufficient to design effective interfaces for mobile applications due to peculiar features and the dynamic application context in mobile. This study intends to review existing studies on usability for mobile applications in order to identify and prioritize the usability dimensions based on importance, towards more appropriate guidelines for mobile applications in the context of M-Learning. The purpose of this study is to provide a review of the usability issues and usability models in the context of M-Learning in higher education that could help researchers and practitioners to design and measure the usability of mobile applications. The data were taken from and analyzed across research journals and publications concerning the field of usability. The data were analyzed to determine the usability issues; they are clearly defined in terms of Effectiveness, Efficiency, Satisfaction, Usefulness, Aesthetics, Learnability, Simplicity, Attractiveness and Memorability.
Usability has been an important quality in the development of application as well as
product (Seffah et al., 2006; Bahn et al., 2007). Various definitions of usability can be found
in the literature, as listed in Table 1.
The research work of Heo et al. (2009) developed a framework for usability evaluation considering eight requirements to develop a multi-level hierarchical model. The study concluded that being fact-based, modularized, hierarchical, optimized, user-oriented, implementable, context-based and design-oriented are the requirements for a good evaluation framework. The usability evaluation discussed in the framework covers effectiveness, usefulness, efficiency, consistency, compatibility and understandability.
Biel et al. (2010) applied a hybrid technique to identify usability issues in their research work. According to Biel, a usability evaluation should focus on the usage problems based on
application and human errors. The evaluation task employed scenarios to measure interaction
capabilities whereas mobile usage behavior used user profile and personality. Several
techniques were used such as user interface walk-through on prototype to determine important
communication and visible usability problems; run analysis scenario to identify suitable
scenarios which describe usability requirements and interactions; and the use case technique
to evaluate components, interface and design patterns. The study proposed several dimensions,
namely useful, error, understandable, Learnability, satisfaction and intuitiveness.
Kenteris et al. (2011) studied user experience with mobile tourist applications. A task was given to the users, which was to download tourist information from the Internet into their mobile devices. The exercise was conducted in the laboratory and in the field via Bluetooth and mobile ad-hoc network. There were 20 users in the age range from 20 to 53 years old and two usability specialists who employed heuristic evaluation. The study found that the usability dimensions that fit such applications are effectiveness, efficiency, learnability, user satisfaction, simplicity, comprehensibility, perceived usefulness and system adaptability.
Holzinger et al. (2011) studied usability designs for mobile cancer systems using iPhone
and iPad. The study started with project requirements, clinical context and environment,
primary end-user (patient), secondary end-user (medical professional) and stakeholders
(hospital manager). The User Centered Design (UCD) method was used to identify four main
user requirements: user usage for the patient; sufficient data control function on minimizing
INCON - XI 2016
227
ASM’s International E-Journal on
E-ISSN – 2320-0065
Ongoing Research in Management & IT
error; transfer of questionnaires into a mail application; and an easy manual. It applied four stages of application development: paper mock-up studies, low-fidelity prototype, hi-fidelity prototype and development system testing. The study identified a few guidelines, which are learnability, ease of use, functionality, easy manual, usefulness, usability, enjoyability, safety, security, efficiency, effectiveness, satisfaction, aesthetic and minimalist design, and simplicity.
Raita and Oulasvirta (2011) have discovered an interesting aspect of usability evaluation. Their empirical study tested the expectation aspect. For objective usability, task success and task completion were measured, while subjective usability used task-specific and post-experiment ratings. The study indicated that user expectation strongly influences usability ratings and may overshadow good performance. This implies that usability evaluation is not just discovering usability problems but also revealing how future users will experience and perceive the product in their daily life.
Choi and Hye-Jin (2012) investigated the impact of simplicity towards user satisfaction.
Simplicity comprises three dimensions: aesthetics, information architecture and task
complexity. To validate the relationship, scenario based tasks were performed as a survey that
involved 205 users. The findings of the study have shown that a simple interface design
contributes to positive user satisfaction. There are studies that investigated the relationship
between aesthetic and usability.
Issues / Limitation for mobile applications
The models presented above were largely derived from traditional desktop applications.
For example, Nielsen’s work was largely based on the design of telecoms systems, rather than
computer software. The advent of mobile devices has presented new usability challenges that
are difficult to model using traditional models of usability. Zhang and Adipat (2005) highlighted a number of issues that have been introduced by the advent of mobile devices:
 Mobile Context: When using mobile applications the user is not tied to a single location. They may also be interacting with nearby people, objects and environmental elements which may distract their attention.

Connectivity: Connectivity is often slow and unpredictable on mobile devices. This will
impact the performance of mobile applications that utilize these features.

Small Screen Size: In order to provide portability mobile devices contain very limited
screen size and so the amount of information that can be displayed is limited.

Different Display Resolution: The resolution of mobile devices is reduced from that of
desktop computers resulting in lower quality images.

Limited Processing Capability and Power: In order to provide portability, mobile devices
often contain less processing capability and power. This will limit the type of
applications that are suitable for mobile devices.

Data Entry Methods: The input methods available for mobile devices are different from
those for desktop computers and require a certain level of proficiency.
This problem increases the likelihood of erroneous input and decreases the rate of data
entry. From our review it is apparent that many existing models for usability do not consider
mobility and its consequences, such as additional cognitive load. This complicates the job of
the usability practitioner, who must consequently define their task model to explicitly include
mobility. One might argue that the lack of reference to a particular context could be a strength of
a usability model provided that the usability practitioner has the initiative and knows how to
modify the model for a particular context.
Research Approach :
The purpose of this study is to review research journals to evaluate usability issue and
Existing Usability Model. This study could help researchers and practitioners to design and
measure usability of mobile applications. The review is based on a conception of usability,
similar to ISO 9241, Bevan & MacLeod, 1994. This notion merely discusses studies related to
usability evaluation instead of the broad concept of usability. The usability issues were
proposed by reviewing previous studies on mobile usability. In general, this study aims to
answer the following Research Questions (RQs):
RQ1: What are the necessary usability issues for mobile applications?
RQ2: How can these usability issues be viewed as one unified model?
RQ3: What are the influencing factors in the context of M-Learning?
RQ4: What is the significance of usability models in the context of M-Learning?
The research paper considered various research journals from high-rank publications
concerning the field of usability. The keywords used in the searching were “usability issues”,
“m-learning” and “ubiquitous learning” for articles that were published from year 2002 until
2014. The data were analyzed to determine the usability issue measured in these empirical
mobile usability studies. They are clearly defined as usability aspects, as shown in Table 2.
Table 2 : Usability Factors for mobile applications

Factors          Count    %
Effectiveness      5     55
Efficiency         5     55
Satisfaction       5     55
Usefulness         5     55
Aesthetic          5     55
Learnability       4     44
Simplicity         4     44
Intuitiveness      3     33
Understandable     2     22
Attractiveness     2     22
Accessibility      1     11
Memorability       1     11
Acceptability      1     11
Flexibility        1     11
Consistency        1     11
Adaptability       1     11
Operability        1     11
Reliability        1     11
The list is based on 9 empirical studies: Coursaris and Kim (2006), Heo et al. (2009), Biel et al. (2010), Kenteris et al. (2011), Moshagen and Thielsch (2010), Holzinger et al. (2011), Raita and Oulasvirta (2011), Choi and Hye-Jin (2012), and Tuch et al. (2012).
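A short sketch (our own illustration) of how the Count and % columns in Table 2 can be derived: count how many of the 9 reviewed studies mention each factor and express the count as a percentage of 9 (so 5 of 9 gives 55 when truncated, as in the table). The study-to-factor mapping shown is abbreviated and partly assumed for demonstration.

# Deriving factor frequencies from a study-to-factor mapping.
from collections import Counter

studies = {
    "Heo et al. (2009)":      ["Effectiveness", "Efficiency", "Usefulness"],
    "Biel et al. (2010)":     ["Usefulness", "Learnability", "Satisfaction"],
    "Kenteris et al. (2011)": ["Effectiveness", "Efficiency", "Satisfaction", "Simplicity"],
    # the remaining six studies would be listed the same way
}

counts = Counter(factor for factors in studies.values() for factor in factors)
total_studies = 9  # number of empirical studies reviewed

for factor, count in counts.most_common():
    pct = count * 100 // total_studies  # truncated percentage, as in Table 2
    print(f"{factor:<15} {count}  {pct}")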
Usability Challenges :
There are some issues that can be faced while examining the usability of mobile applications and existing usability models. According to various research studies, the following usability issues have been identified:
1. Mobile context: It refers to the information related to the interaction among the application's user, the application, and the environment surrounding the user.
2. Connectivity: A wireless internet connection is one of the most important features a mobile device should have, since it allows the mobile user to connect to the internet.
3. Small screen size: The small screen of a mobile device is one of the factors that affect the usability of a mobile application and should be considered in mobile application usability.
4. Different display resolutions: Mobile devices provide lower display resolution than a desktop or laptop. This low resolution can influence the quality of the multimedia information of the interface.
5. Limited processing capability and power: Developing applications for mobile devices needs some considerations, such as the capacity the application will need when installed on the mobile device.
Fig. 1: Rosnita Baharuddin et al. (2013)
 The existing usability guidelines of mobile learning applications did not cover all the attributes of the usability guidelines for mobile learning applications.
 The existing usability guidelines from different usability models for mobile learning applications have not provided detailed information on ways to implement each proposed guideline.
 The existing usability guidelines of mobile learning applications did not cover the issue of designing an application that can work efficiently across the different screen types of the mobile devices that learners use to interact with the application.
 The existing mobile learning usability guidelines do not discuss the memorability and context-aware m-learning elements that should be covered to have a usable context in which the learner will use the mobile application.
Conclusion:
This study has presented a review of usability issues and usability models; both are important aspects in the context of mobile learning (M-Learning). The models consist of various usability factors and contextual factors. The model can be used by researchers as well as practitioners as a usability guidelines model for mobile applications. Practitioners can use the model to determine which usability dimensions should be considered when designing and measuring the usability level of mobile applications in the context of higher education. On the other hand, researchers may extend the study by investigating how these aspects can be scrutinized and operationalized in the context of mobile learning in higher education. The literature review has also revealed that not much work has been done in terms of usability issues in the context of m-learning. There is not adequate research to develop a usability model for mobile applications specifically for M-Learning applications in higher education.
REFERENCES:
[1] Adams, R. (2007). Decision and stress: cognition and e-accessibility in the information workplace. Springer Universal Access in the Information Society, 5(4), 363-379.
[2] Adams, R. (2006). Applying advanced concepts of cognitive overload and augmentation in practice; the future of overload. In D. Schmorrow, K.M. Stanney, & L.M. Reeves (Eds.), "Foundations of augmented cognition" (2nd ed., pp. 223-229).
[3] Ahmed, S., Mohammad, D., Rex, B.K., and Harkirat, K.P. (2006). Usability measurement and metrics: A consolidated model. Software Quality Control, 14(2), pp. 159-178.
[4] Bevan, N., and MacLeod, M. (1994). Usability measurement in context. Behaviour & Information Technology, 13(1), pp. 132-145.
[5] Cavus, N., & Ibrahim, D. (2009). M-learning: an experiment in using SMS to support learning new English language words. British Journal of Educational Technology, 40(1), 78-91.
[6] Chen, Y.S., Kao, T.C., Sheu, J.P., & Chiang, C.Y. (2002). A Mobile Scaffolding-Aid-Based Bird-Watching Learning System. Proceedings of IEEE International Workshop on Wireless and Mobile Technologies in Education (WMTE'02), pp. 15-22.
[7] Chiou, C. K., Tseng, J. C. R., Hwang, G. J., & Heller, S. (2010). An adaptive navigation support system for conducting context-aware ubiquitous learning in museums. Computers & Education, 55, 834-845.
[8] Chu, H. C., Hwang, G. J., & Tsai, C. C. (2010). A knowledge engineering approach to developing mindtools for context-aware ubiquitous learning. Computers & Education, 54(1), 289-297.
[9] Constantinos, K.C., and Dan, K. (2007). A research agenda for mobile usability. Proc. CHI '07 Extended Abstracts on Human Factors in Computing Systems, San Jose, CA, USA.
[10] IEEE Std 1061-1998 (1998). IEEE standard for a software quality metrics methodology.
[11] ISO 9241-11 (1998). International Standard: Guidance on usability.
[12] Introduction into Usability, Jakob Nielsen's Alertbox. Retrieved April 10, 2012, from http://www.useit.com/alertbox/20030825.html.
[13] ISO/IEC 13407: 1999(E). Human-Centred Design Processes for Interactive Systems.
[14] Jun, G. and Tarasewich, P. (2004). Guidelines for handheld mobile device interface design. In Proceedings of the DSI 2004 Annual Meeting (Boston). Northeastern University.
[15] Kirakowski, J., and Corbett, M. (1993). SUMI: the Software Usability Measurement Inventory. British Journal of Educational Technology, 24(3), pp. 210-212.
[16] Macleod, M., and Rengger, R. The development of DRUM: A software tool for video-assisted usability evaluation. http://www.nigelbevan.com/papers/drum93.pdf, Access: 29 January 2009.
[17] Miller, J. (2006). Usability Testing: A Journey, Not a Destination. Internet Computing, IEEE, 10(6), pp. 80-83.
[18] Nielsen, J. (1994). Usability Engineering. Morgan Kaufmann.
[19] Rosnita Baharuddin, Dalbir Singh and Rozilawati Razali (2013). Usability Dimensions for Mobile Applications - A Review. Res. J. Appl. Sci. Eng. Technol., 5(6): 2225-2231.
[20] Tapanee Treeratanapon (2012). Design of the Usability Measurement Framework for Mobile Applications. International Conference on Computer and Information Technology (ICCIT'2012), June 16-17, 2012, Bangkok.
[21] Yonglei, T. (2005). Work in progress - introducing usability concepts in early phases of software development. Proc. Frontiers in Education, FIE '05, 35th Annual Conference, pp. T4C-7-8.
[22] Zhang, D. and Adipat, B. (2005). Challenges, Methodologies, and Issues in the Usability Testing of Mobile Applications. International Journal of Human-Computer Interaction, 18(3), 293-308.
A Comparative Study of E-Learning and Mobile Learning in Higher
Education – Indian Perspective
Mr. Kamlesh Arun Meshram
JSPM's Jayawant Institute of Management Studies, Tathawade, Pune, Maharashtra, India
7620228842 / 9850228842
Kamlesh.meshram2007@gmail.com
Dr. Manimala Puri
JSPM's Jayawant Institute of Management Studies, Tathawade, Pune, Maharashtra, India
09325093752
manimalap@yahoo.com
ABSTRACT :
E-Learning and M-Learning are modern tools in higher education, and this research paper offers insights into both. Mobile technology and e-learning tools have recently seen ample use in education, although the technology is still evolving. The question is why the sudden interest, especially in a country like India, where many students still drop out after schooling, yet which also has a huge market for smartphones. The twenty-first century is declared to be the age of information and communication technology. This is a time when more people everywhere are involved in acquiring new knowledge and skills, and we can hardly work in society without online technology. Online technology has also entered the field of education. E-learning and M-learning have become extremely important buzzwords of the educational technological revolution, each characterising a whole raft of ideas and resources for the tech-savvy teacher. But the two terms are not always used correctly, with some confusion about the differences between them and where they overlap. Thinking about the differences between E-learning and M-learning can be particularly useful for teachers who use technology in the classroom, as it can help them to pick out which techniques are best for which education scenario. The present paper is based on secondary sources of data and highlights a comparison of the concept, characteristics, advantages, disadvantages, similarities and differences between E-learning and M-learning.
Keyword : Mobile Learning, E-Learning, ICT, Pedagogical, Ubiquitous Learning
Introduction :
This paper focuses on the impact of e-learning and m-learning on higher education in India, e-learning and m-learning content preparation and presentation tools, the application of eLearning in the various methodologies used in higher studies, and the pros and cons of eLearning and mLearning. It gives insights into and future aspects of both eLearning and mLearning in the context of higher education in India. Since the Indian knowledge industry is entering the take-off stage, the strategy of survival of the fittest holds good. E-learning along with M-learning plays an important role in the educational development of any nation. The Indian education system is evolving rapidly, and many private universities have been formed in recent years. State universities and private universities in India are offering different courses for modern learners. Different domains need innovative learning methods, and these can be provided through e-learning and m-learning. Technology has changed our world in ways previously unimaginable, and mobile technology has developed rapidly in almost every sector. One of the sectors showing this development is education; with mobile phones and handheld computers it is very easy to reach information. E-learning plays an important role in the educational growth of any nation. It also offers opportunities for developing nations to enhance their educational development, and it can play a critical role in preparing a new generation of teachers, as well as upgrading the skills of the existing teaching force to use 21st century tools and pedagogies for learning.
Mobile learning combines E-learning and mobile computing. Mobile learning is sometimes considered merely an extension of E-learning, but quality M-learning can only be delivered with an awareness of the special limitations and benefits of mobile devices. Mobile learning has the benefits of mobility and its supporting platform. M-learning is a means to enhance the broader learning experience and a powerful method for engaging learners on their own terms. E-learning and M-learning are shown diagrammatically below:
Modes of Learning
Though there are some differences between E-learning and M-learning, they are closely related; M-learning is a sub-set of E-learning. Their relationship is shown diagrammatically below:
As per the latest MHRD report, Table 1 shows the number of recognised educational institutions in India. It highlights that the number is increasing to cater to the growing demand for education in the country.
Table 1: Number of Institutions by Type, 2013-14 (P), MHRD, India

School education
  Primary: 790640
  Upper primary: 401079
  Secondary: 131287
  Senior secondary: 102558
  Total: 1425564

Higher education – Universities
  Central university: 42
  State public university: 310
  Deemed university: 127
  State private university: 1443
  Central open university: 1
  State open university: 13
  Institution of National Importance: 68
  Institutions under state legislature act: 5
  Others: 3
  Total: 712

Higher education – Colleges: 36671

Stand-alone institutions
  Diploma level technical: 3541
  PGDM: 392
  Diploma level nursing: 2674
  Diploma level teacher training: 4706
  Institutes under ministries: 132
  Total: 11445
E-Learning in Higher Education
E-Learning is Internet-enabled learning (Dawabi, Wessner et al., 2006). To provide a comprehensive understanding, E-Learning has been defined as a new education concept using Internet technology (Attewell, 2005). It has also been defined as interactive learning in which students learn through the use of computers as an educational medium (Goh and Kinshuk, 2006). Moreover, Hassenberg pointed out that "E-Learning covers a wide set of applications and processes, including multimedia online activities such as the web, Internet, video, CD-ROM, TV and radio." The components of E-Learning include content delivery in multiple formats, management of the learning experience, a networked community of learners, and content developers and experts. E-Learning is personalized, focusing on the individual learner. Its environment includes self-paced training, virtual events, mentoring, simulation, collaboration, assessment, competency road maps, authoring tools, an e-store, and a learning management system (Dawabi, Wessner et al., 2006).
M-Learning in Higher Education
Mobile Learning (M-Learning) is a relatively new tool in the education system, intended to support students and teachers as they navigate the options available in the expanding world of distance learning. M-Learning is learning accomplished with the use of small, portable computing devices, which may include smartphones, wearable devices, Personal Digital Assistants (PDAs) and similar handheld devices. M-learners typically view content and/or lessons in small, manageable formats that can be utilized when laptop or fixed-station computers are unavailable. M-Learning is currently being used in a variety of educational, governmental and industrial settings. This paper assesses the advantages as well as the limitations of M-Learning in higher education and provides a literature-based evaluation of the effectiveness of m-learning and e-learning.
As mobile devices offer flexible access to the Internet and communication tools for
learning within and outside of the classroom, and as they support learning experiences that are
personalized as well as collaborative, accessible and integrated within the world beyond the
classroom, mobile learning can open up new contexts for learning, with ubiquitous
connectivity allowing interactive and connected learning in school and university, in the
workplace, in the home and in the educational campus. As for technological advancement,
mobile and networked technologies and devices become more and more powerful; the rise of
an ‘app culture’ marks a large commercial market, driving a new wave of creativity in the
design of learning applications.
Outcome of the Study :
The following are the specific outcomes of this study:
• To explore the scope for eLearning and mLearning in the Indian higher education scenario.
• To study the benefits of eLearning and mLearning.
• To explore the challenges that will be faced by eLearning and mLearning in India.
Literature Review
Lieberman (2002) explains that in higher education student participation is a primary feature of enhanced performance, and that in distance learning courses students are more likely to participate in class discussions and group work than in traditional lectures, as they are given more time to prepare questions and responses.
Stephenson (2006) and Hameed (2007) explain that the increasing contextual impact on E-Learning is being identified in research on the integration of educational technologies.
Qureshi et al. (2009) note that in traditional computer-based learning the computer was used as a tool to complete a task or get something done, so there was no need to address the broader environmental context of the individual.
Nawaz and Kundi (2010c), in a study of Indian universities, found that "most IT education is ineffective because it is largely on technical grounds and not at all concerned with contexts and real world problems."
Nawaz et al. (2011c), in another study on E-Learning, reveal that despite the best of intentions, efforts and resources, many E-Learning projects end in failure because they do not take into account the perspectives of the existing and changing social and political context.
Goh and Kinshuk (2006) have cited several mLearning initiatives. These include, among others: a games-oriented implementation for m-portal (Mitchell, 2003); the classroom of the future; hands-on scientific experimentation and learning (Crescente and Lee, 2011); a mobile learning system for bird watching (Crescente and Lee, 2011); and a context-aware language learning support system. Research on mobile learning has been reacting to these changes and opportunities for some years already and is focusing, through project-oriented research as well as theory building, on technological developments and pedagogically informed approaches to the use and design of mobile technologies. Industry and politics are also paying more and more attention to the field of mobile learning. Not least, the considerable sums of research funding available in German-speaking countries, in particular Germany and Switzerland, give evidence of this.
Challenges In E- and M-Learning
E-Learning and M-Learning are two different modes of learning. There are several challenges to incorporating these modes of learning into classroom teaching in the context of higher education from an Indian perspective:
• Connectivity and battery life
• Screen size and key size (Maniar et al., 2008)
• Meeting the required bandwidth for nonstop/fast streaming
• Number of file/asset formats supported by a specific device
• Content security or copyright issues from the authoring group
• Multiple standards, multiple screen sizes, multiple operating systems
• Reworking existing E-Learning materials for mobile platforms
• Limited memory (Elias, 2011)
• Risk of sudden obsolescence (Crescente and Lee, 2011)
E-Learning and mLearning can be real-time or self-paced, also known as "synchronous" or "asynchronous" learning. Additionally, E-Learning is considered to be "tethered" (connected to something) and presented in a formal and structured manner. In contrast, mobile learning is often self-paced, un-tethered and informal in its presentation (see Table 1).
E-Learning is a subset of distance learning, and mobile learning is a subset of E-Learning. The conceptual shift from E-learning to M-learning and then to u-learning is shown diagrammatically below:
The Global eLearning Industry Market
The global eLearning market is expected to reach $107 billion by 2015 (Global Industry Analysts, Inc., 2015). The global self-paced eLearning market reached $32.1 billion in revenue in 2010 (www.gurukilonline.co.in), with a five-year compound annual growth rate of approximately 9.2%. This means that the self-paced eLearning market should see estimated revenues of $49.9 billion in 2015 (www.gurukilonline.co.in).
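The $49.9 billion figure follows directly from compounding the 2010 base at the quoted growth rate. The short sketch below is only an illustrative check of that arithmetic; the function name and formatting are our own and not from the cited report.

```python
# Illustrative check of the market projection quoted above:
# $32.1 billion in 2010, compounded at roughly 9.2% per year for five years.

def project_market(base_billion: float, cagr: float, years: int) -> float:
    """Compound the base revenue forward at the given annual growth rate."""
    return base_billion * (1 + cagr) ** years

if __name__ == "__main__":
    revenue_2015 = project_market(base_billion=32.1, cagr=0.092, years=5)
    # Prints roughly 49.8, in line with the $49.9 billion estimate above.
    print(f"Projected 2015 self-paced eLearning revenue: ${revenue_2015:.1f} billion")
```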
Top 5 Growth Rates By Country
Growth rate shows how each country adopts eLearning and is a significant indicator
since it can reveal revenue opportunities. The growth rate of self-paced eLearning by country
is Global Industry Analysts, Inc. (2015):
1. India 55% 2. China: 52% 3. Malaysia: 41% 4. Romania: 38% 5. Poland: 28%
The Global mLearning Industry Market
Given current trends, by 2015 India is expected to be among the top 10 countries when it comes to buying mobile learning products and services, along with the USA, China and Japan.
Current Capabilities and Applications of Mobile Phone
Table 1: Mobile Learning for Education: Benefits and Challenges

Place
  eLearning: Lecture in classroom or internet labs
  mLearning: Learning anywhere, anytime

Pedagogical change
  eLearning: More text- and graphics-based instructions; lecture in classroom or in internet labs
  mLearning: More voice-, graphics- and animation-based instructions; learning occurring in the field or while mobile

Instructor to student communication
  eLearning: Time delayed (students need to check e-mails or web sites); passive communication; asynchronous; scheduled
  mLearning: Instant delivery of e-mail or SMS; instant communication; synchronous; spontaneous

Student to student communication
  eLearning: Face-to-face; audio-teleconference common; e-mail to e-mail; private location; travel time to reach internet site; dedicated time for group meetings; poor communication due to group consciousness; 1 to 1 basis possible
  mLearning: Flexible; audio and video teleconference possible; 24/7 instantaneous; no geographic boundaries; no travel time with wireless internet connectivity; flexible timings on 24/7 basis; rich communication due to one-to-one communication, reduced inhibitions; 1 to 1 basis possible

Feedback to student
  eLearning: Asynchronous and at times delayed; mass/standardized instruction; benchmark-based grading
  mLearning: Both asynchronous and synchronous; customized instruction; performance- and improvement-based grading

Assignments and tests
  eLearning: Simulations and lab-based experiments; paper based; in class or on computer; dedicated time; restricted amount of time; standard test; usually delayed feedback; fixed-length tests; theoretical and text based; observation and monitoring in lab
  mLearning: Real-life cases and on-site experiments; less paper, less printing, lower cost; any location; 24/7 instantaneous; any amount of time possible; individualized tests; instant feedback possible; flexible length/number of questions; practically oriented exams directly on site, hands-on based; observation in the field and monitoring from a remote location

Presentations, exams and assignments
  eLearning: Class-based presentations; usually use of one language; mostly individualized group work; paper-based assignment delivery; hand delivery of assignments at a particular place and time; instructor's time used to deliver lectures
  mLearning: 1 to 1 presentations with much richer communication; automatic translation for delivery of instructions in many languages (possible); simultaneous collaborative component-based group work; electronic assignment delivery; e-delivery of assignments at any place and time; instructor's time used to offer individualized instructions and help
Conclusion:
E- and M-learning both play an important role in the field of higher education. In spite of some differences, they are closely related. E- and M-learning encourage both educators and students to take personal accountability for their own learning, and when teachers succeed this builds self-knowledge and self-confidence in them. A recent trend in the E-learning sector is screencasting. E- and M-learning will also bring a substantial change in the way knowledge is spread, improving the quality of teacher education and hence producing teachers of global standard. Thus, eLearning and mLearning are beneficial in higher education, in corporations and to all types of teachers and learners. Effective learning is created by combining digitally delivered content with learning support and services. Therefore, we can conclude that teachers need to acquire technological skills in order to succeed in E-learning. Mobile technology is also used for learning purposes; it is an innovative educational approach which provides learning opportunities to students.
REFERENCES:
[1] Adkins, S.S. (December 2008). "The US Market for Mobile Learning Products and
Services: 2008-2013 Forecast and Analysis". Ambient Insight. p. 5. Retrieved June 8,
2009.
[2] Ahuja, N., Ahuja, T. & Holkar, A., Need and Significance of E-learning in Education, Retrieved from http://pioneerjournal.in/conferences/tech-knowledge/14th-nationalconference/3802.
[3] Attewell. J. (2005) Mobile Technologies and Learning, London Learning and Skills
Development Agency.
[4] Chet Hosmer, Carlton Jeffcoat, Matthew Davis, Thomas McGibbon (2011), "Use of
Mobile Technology for Information Collection and Dissemination", Data & Analysis
Center for Software, March 2011.
[5] Crescente, Mary Louise; Lee, Doris (2011). "Critical issues of M-Learning: design
models, adoption processes, and future trends". Journal of the Chinese Institute of
Industrial Engineers 28 (2): 111–123.
[6] Dawabi, P., Wessner M. and Neuhold, E., “Using mobile devices for the classroom of
the future”, in Attewell, J. and Savill-Smith, C. (Eds), Learning with Mobile Devices.
Research and Development, Learning and Skills Development Agency, London, 2003.
[7] eLearning and Applications, www2. academee.com/html/consultancy.html
[8] Goh, T. and Kinshuk, D., “Getting ready for mobile learning – adaptation perspective”,
Journal of Educational Multimedia and Hypermedia, Volume 15, Number 2, 2006, pp.
175-98.
[9] Global E-Learning Market to Reach US$107 Billion by 2015, According to New Report
by Global Industry Analysts, Inc.
http://www.prweb.com/releases/distance_learning/e_learning/prweb9198652.html
[10] ‘Is there a case for eLearning in India’, www.gurukulonline.co.in
[11] Milrad, M., Hoppe, U., Gottdenker, J. and Jansen, M., “Exploring the use of mobile
devices to facilitate educational interoperability around digitally enhanced experiments”,
Proceedings of the 2nd IEEE International Workshop on Wireless and Mobile
Technologies in Education (WMTE 2004), Mobile Support for Learning Communities,
Taoyuan, Taiwan, March, 2004.
[12] Mitchell, A., “Exploring the potential of a games oriented implementation for m-portal”,
Learning With Mobile Devices. Research and Development, Learning and Skills
Development Agency, London, 2003.
[13] Mobile Learning Update. Learning Consortium Perspectives. 2008. pp. 3, 5–13, 17.
Retrieved June 9, 2009.
[14] Moore, J. (2009). "A portable document search engine to support off-line mobile
learning". Proceedings of IADIS International Conference Mobile Learning. Barcelona,
Spain.
[15] Mostakhdemin-Hosseini, A. and Tuimala, J. (2005). Mobile Learning Framework.
Proceedings IADIS International Conference Mobile Learning 2005, Malta, pp 203-207.
[16] Ogata, H. & Yano, Y., “Context-aware support for computer-supported ubiquitous
learning”, Proceedings of the 2nd IEEE International Workshop on Wireless and Mobile
Technologies in Education (WMTE 2004), Mobile Support for Learning Communities,
Taoyuan, Taiwan, 2004.
[17] The Worldwide Market for Self-paced eLearning Products and Services: 2010-2015
Forecast and Analysis.
http://www.ambientinsight.com/Resources/Documents/Ambient-Insight-2010-2015Worldwide-eLearning-Market-Executive-Overview.pdf
[18] Yousef Mehdipour & Hamideh Zerehkafi, "Mobile Learning for Education: Benefits and Challenges", International Journal of Computational Engineering Research, Vol. 03, Issue 6, June 2013.
[19] Krishna Kumar, Central Institute of Education, University of Delhi, “Quality of
Education at the Beginning of the 21st Century-Lessons from India”
A Comprehensive Study on the Best Smart phones Available in Market in
Year 2015
Dr. M.M Puri
Director, JSPM's, Pune, India
manimalap@yahoo.com
Rinku Dulloo
Assistant Professor, JSPM's Rajarshi Shahu College of Engineering, Pune, India
rinkudulloo@gmail.com
ABSTRACT :
The intention of this study is to investigate the various Smartphones of the year 2015. The study helps to identify the new models that are arriving and the old models that are falling by the wayside.
This paper is organized as follows: Section 2 presents the literature review, Section 3 presents the objectives, Section 4 discusses the review of models, Section 5 presents a comparative analysis of Smartphone models by year of release, and the last section, Section 6, presents the conclusion and future work.
Keyword : Smartphone, Microsoft Lumia 735, Nexus 6, LG G Flex 2, OnePlus
One,Huawei Ascend Mate 7, Nokia Lumia 930, Motorola Moto G 4G (2014), Motorola
Moto E (2015), Apple iPhone 6 Plus, HTC Desire Eye, Samsung Galaxy Note Edge,
Sony Xperia Z3 Compact, LG G3, Motorola Moto X (2014), Sony Xperia Z3, HTC One
M9, HTC One M8, Samsung Galaxy Note 4, Samsung Galaxy S6 edge, Apple iPhone 6,
Samsung Galaxy S6.
I. Introduction
A Smartphone is a type of gadget which merges cell phone and computer functionality. All the activities that can be performed on normal computers, such as sharing information, sending and receiving emails, chatting, opening and editing documents, paying for products, browsing and shopping, can be done using a Smartphone: a small device which can be kept inside the pocket of a trouser or a shirt (J.L. Kim and J. Altmann, 2013).
The objective of this paper is to review and discuss the best Smartphones of 2015. The paper helps to identify the hottest mobile phones of the year and saves time while purchasing a new handset.
It also gives updates on the best Smartphone features to reflect recent launches. In today's era new models are arriving, old models are falling by the wayside and the desirability of some of last year's models is waning. The paper covers a best-Smartphone list which includes all operating systems, sizes and prices.
II. Literature Review
Today's Smartphone has been around for the last six years or so, since Apple introduced the Smartphone to the mass consumer market, but in reality the Smartphone has been in the market since 1993 (Muhammad Sarwar, 2013). The difference between today's Smartphones and early Smartphones is that early Smartphones were predominantly meant for corporate users and used as enterprise devices, and those phones were too expensive for general consumers (Brad Reed, 2010). The initial phase of the Smartphone was purely for enterprises, and the features and functions were as per corporate requirements. The first Smartphone, 'The Simon' from IBM, appeared in 1993. Blackberry is considered the revolutionary device of this era; it introduced many features including email, Internet, fax, web browsing and a camera. This phase was totally based on Smartphones targeting enterprises (Brad Reed, 2010) (Wikipedia, 2012) (Hamza Querashi, 2012). The middle phase of the Smartphone period started with the arrival of the iPhone, first revealed by Apple in 2007. At this stage the Smartphone was manufactured for the general consumer market. In the later part of 2007 Google revealed its Android operating system, with the intention of approaching the consumer Smartphone market. The emphasis during this period was to introduce features that the general consumer requires while keeping the cost low to attract more and more customers. Features like email, social website integration, audio/video, internet access and chatting, along with the general features of the phone, were part of all these phones (Hamza Querashi, 2012) (Sam Costello, 2012) (Chris Ziegler, 2011) (Mark Prigg, 2012). The third phase of the Smartphone started in 2008. During this phase the gap between industry requirements and general customer requirements was largely closed. Improvement was seen in display quality, the operating system and display technology; more powerful batteries, enhanced user interfaces and many more features were provided within these smart devices. There have been several upgrades in the mobile operating systems, with Apple iOS, Android and Blackberry OS all upgraded within the last five years (Muhammad Sarwar, 2013). The most popular mobile operating systems (iOS, Android, Blackberry OS, Windows Mobile) and key Smartphone vendors (Apple, Samsung, HTC, Motorola, Nokia, LG, Sony, etc.) are concentrating on bringing features, both in operating systems and in devices, which will provide exciting functionality to enterprise and general consumers (Muhammad Sarwar, 2013). The role of Android has been tremendous during this period, as it provided a great opportunity for all vendors to build devices using the open source Android technology (Sam Costello, 2012) (Chris Ziegler, 2011) (Mark Prigg, 2012).
III. Objective
The basic objective of this paper is:
1. To propose a consolidated document highlighting a comprehensive analysis of the Smartphone models available in 2015.
IV. Review of Smartphone Models :
a.
Microsoft Lumia 735
The Lumia 735 provides a great Windows Phone experience. It has a 4.7-inch screen and is affordable. It offers the latest Windows Phone 8.1 with Lumia Denim. Both the front and rear cameras are good: the front is a 5-megapixel camera and the rear a 6.7-megapixel camera with Zeiss optics. Although it is not the most powerful handset around, at its cheap price the Lumia 735 is difficult to ignore. It is recommended if you want a great all-round experience without spending much.
Nexus 6
The Nexus 6 is Google's phablet and it offers the pure Android experience on a big screen. It provides a large, Quad HD high-resolution display along with powerful Qualcomm Snapdragon 805 innards. You also get the advantage of fast update times, being at the head of the list for Google updates. At its current price the Nexus isn't hugely expensive, which adds further appeal.
b.
LG G Flex 2
The LG G Flex 2 has a curved display format which is unconventional and might not appeal to everyone; this will make it a non-starter for some. It is one of the new generation of smartphones packed with the powerful Qualcomm Snapdragon 810 chipset, and it has a great camera on the back too.
c.
OnePlus One
It's rare that an outsider jumps into the elite ranks, and even rarer that it's widely lauded. The OnePlus One offers great value for money: at a cheap price, it's a device that's powerful and has plenty of battery life. It runs the CyanogenMod operating system, a custom version of Android, meaning it provides all the usual Google benefits plus some extras. On the downside, it has some network compatibility issues and is tricky to buy.
d.
Huawei Ascend Mate 7
Huawei has come a long way and is selling a lot of phones. The Mate 7 is its latest phablet, with a 6-inch full HD display and a metal design. There's plenty of customisation to the Android UI; some of these customisations are really useful, presenting plenty of options, even if some of the design choices might not be to everyone's taste. The long battery life and the asking price are also appealing.
e.
Nokia Lumia 930
The Lumia 930 shows the potential of the Windows Phone experience. It has a great 5-inch Full HD display, a wonderful camera on the back and the latest software from Microsoft, topped with Lumia additions. The Lumia Denim upgrade makes this Windows Phone more compelling than ever before. Plenty of tech is packed into the brightly coloured handset, with optical image stabilisation on its camera and wireless charging for the battery.
f.
Motorola Moto G 4G (2014)
The Moto G 4G is a fantastic smartphone and incredible value for money compared to its peers. The Moto G has now been updated with a 2015 model, and the newer version is larger. The design is good, the display is excellent and there's a lot of power too. The Motorola Moto G was one of the first non-Nexus devices to be upgraded to Android 5.0 Lollipop, with some Moto app extras. For someone on a budget, this is one of the best choices.
g.
Motorola Moto E (2015)
The Moto E is Motorola's even cheaper alternative to the Moto G. It does not have the fastest processor, but this does not affect everyday usage. It has a great design which now includes interchangeable "Bands" for a lick of colour, a good display, excellent battery life and microSD support for expanding the internal storage. The Moto E also adds a front-facing camera, provides a smooth Android experience, and offers 4G connectivity for fast internet browsing.
h.
Apple iPhone 6 Plus
The iPhone 6 Plus offers a beautiful design, with an all-metal body and a great 5.5-inch
screen. This design curves into the edges for an almost seamless experience, and this is a big
handset overall. It's more expensive than most of its rivals. The 6 Plus offers a Full HD display
and plenty of power to take Apple into the realms of the phablet world. Apps are more
optimised for use on the larger display offering Full HD playback.
HTC Desire Eye
The Desire Eye offers a lovely full HD 5.2-inch display. It provides HTC's mature Sense 6 user interface built over Android, with the latest HTC Eye experience adding boosted performance to the cameras. The headline feature is the 13-megapixel front-facing camera with flash. It is a well-built phone with waterproofing.
i.
Samsung Galaxy Note Edge
The Samsung Note Edge was the precursor to the Galaxy S6 edge and remains a favourite among Android phones. The display edge is curved, giving a range of shortcuts and adding extra functionality. It has plenty of power and a great camera, although the battery takes a bit of a hit compared with the original Note 4. Core features take good advantage of the larger display.
j.
Sony Xperia Z3 Compact
The Sony Xperia Z3 Compact offers flagship power in a mid-range size; for anyone looking for a portable powerhouse, this phone is the answer. Sony's pricing for this handset is aggressive. The 4.6-inch 720p display is good and the outstanding feature is battery life. A mature user interface, waterproofing, great camera performance and plenty of options make the Xperia Z3 Compact a favourite.
k.
LG G3
The LG G3 was one of the first handsets with a Quad HD display (2560 x 1440 pixels). Its 5.5-inch display is capable of incredibly sharp detail (at 538ppi). There is a lot of smart functionality added to the G3, with functions like dual app views, although the battery life is relatively poor. The camera on the back is slick, with good performance and 4K video capture.
l.
Motorola Moto X (2014)
It provides a larger 5.2-inch display than the original, and there are options to customise the design materials using Moto Maker. The clean Android experience is one of its strengths, as is the speed of updates when new Android versions come along. Battery performance is good, but the camera is weak and isn't a consistent performer. There is also no microSD card option.
m.
Sony Xperia Z3
It has a slim body with refined edges and construction. The 5.2-inch display, backed by powerful hardware, is speedy in execution. The camera pair is great, with good-quality results and plenty of shooting options. This is a powerful, waterproof, sharp-shooting handset.
n.
HTC One M9
HTC has refined the M9, with the quality of the build and the design making it one of the most precisely manufactured phones. The camera is 20 megapixels, but its performance is weaker in low-light conditions. Sense 7 brings a good user interface to Android. The BoomSound speakers are class-leading, and the phone is equipped with the Qualcomm Snapdragon 810 processor and 3GB of RAM. The HTC One M9 also offers a microSD card slot for expansion.
o.
HTC One M8
The HTC One M8 is slick and fast, and the refinement of Sense 6 adds plenty to Android 5.0 Lollipop. With a premium metal body, the design is great. It offers a 5-inch Full HD display with some of the best visuals on a device of this size. The camera is relatively weak and the Duo Camera features are not really appealing; there is insufficient resolution in some shooting conditions, but the low-light performance is pretty good.
p.
Samsung Galaxy Note 4
It is a sensational device with plenty of power from the Snapdragon 805 chipset. It includes a stylus, called the S Pen, for added features and functionality. The Note 4 is filled with features making use of the screen space and the hardware, and is a great handset for work and play. The display is fantastic, with plenty of power and endurance, and the camera is amongst the best available on an Android handset.
q.
Samsung Galaxy S6 edge
Its display design is quite innovative: the curved display makes for a jaw-dropping design, with a screen that is punchy and vibrant. The edge offers the same slick user experience as the SGS6, and the refined TouchWiz user interface is powerful and better than before. The camera on the back produces great results with very little effort. There is no removable battery or microSD support.
r.
Apple iPhone 6
The iPhone 6 has a great quality display paired with an excellent design. It's slick, with a high-quality finish. The Touch ID fingerprint verification is very effective and comes with Apple Pay, which could be the key to unlocking much more than just your phone. The camera results are consistent, and the refinement of iOS 8 provides a consistent experience that some other platforms lack.
s.
Samsung Galaxy S6
The Samsung Galaxy S6 comes with a slick body comprising a metal frame sandwiched between Gorilla Glass front and back. The display is fantastic, with a Super AMOLED panel and Quad HD resolution fitted into a 5.1-inch screen. The SGS6 is powered by the Exynos octa-core chipset and 3GB of RAM; there is no microSD card slot. It has a 16-megapixel rear camera, a slicker and cleaner TouchWiz interface, and a fingerprint scanner.
V. Chronological Order as per Date of Release
Table 1
S.No  Name of Model              Date of Release
1.    OnePlus One                Apr. 2014
2.    Motorola Moto G 4G         May 2014
3.    Motorola Moto E            May 2014
4.    LG G3                      May 2014
5.    Huawei Ascend Mate 7       Sept. 2014
6.    Motorola Moto X (2014)     Sept. 2014
7.    Sony Xperia Z3             Sept. 2014
8.    Apple iPhone 6             Sept. 2014
9.    Apple iPhone 6 Plus        Sept. 2014
10.   Samsung Galaxy Note Edge   Sept. 2014
11.   Sony Xperia Z3 Compact     Sept. 2014
12.   Samsung Galaxy S6 edge     Sept. 2015
13.   Nokia Lumia 930            Oct. 2015
14.   HTC Desire Eye             Oct. 2014
15.   HTC One M8                 Oct. 2014
16.   Samsung Galaxy Note 4      Oct. 2014
17.   Nexus 6                    Oct. 2014
18.   Samsung Galaxy S6          March 2015
19.   HTC One M9                 Apr. 2015
20.   LG G Flex 2                Apr. 2015
21.   Microsoft Lumia 735        June 2015
Table 1 shows the Smartphone models existing in the market and their release dates.
VI. Conclusion & Future Work
In this paper, the researchers have compared various Smartphone models of the year 2015. These models have been compared in terms of technology, operating system and features to find the impact on user behaviour in adopting the technology. Despite finding dissimilarities in forecasting precision, the researchers point out that there may be other advanced features that will have an equal, if not greater, impact upon adoption.
The results shown by all these approaches demand additional investigation, particularly to explore the effect of various parameters on the models in terms of improving robustness and accuracy. They also offer the potential to provide more transparent solutions, but this aspect requires further research.
VIII. References:
• Brad Reed (2010), "A brief history of Smartphone's", http://www.networkworld.com/slideshows/2010/061510-smartphonehistory.html#slide1
• Chris Ziegler (2011), "Android: A visual history", http://www.theverge.com/2011/12/7/2585779/android-history
• J.L. Kim and J. Altmann (2013), "Adapting Smartphones as Learning Technology in a Korean University", Journal of Integrated Design and Process Science, 17(1), 5-16.
• Hamza Querashi (2012), "Apple: from iPhone 1 to iPhone 5 – Evolution, Features and Future Review", http://www.thenewstribe.com/2012/07/16/apple-from-iphone-1-toiphone-5-evolution-features-and-future-review/
• Muhammad Sarwar (2013), "Impact of Smartphone's on Society", European Journal of Scientific Research.
• Mark Prigg (2012), "Microsoft launches Windows 8 Phone software - and hopes apps and Jessica Alba will help it take on Apple and Google", http://www.dailymail.co.uk/sciencetech/article-2225149/Windows-8-phonesoftware-launch-Microsoft-hopes-Jessica-Alba-help-Apple Google.html
• Sam Costello (2012), "First-Generation iPhone Review", http://ipod.about.com/od/iphoneproductreviews/fr/iphone_review.htm
Websites Referred:
• Wikipedia (2012), "Blackberry", http://en.wikipedia.org/wiki/BlackBerry
• https://en.wikipedia.org/wiki/Moto_E_(1st_generation)
• https://en.wikipedia.org/wiki/IPhone_6
• https://en.wikipedia.org/wiki/LG_G3
• www.motorola.com/us/MotoX/FLEXR2.html
• https://en.wikipedia.org/wiki/HTC_One_(M8)
• www.theinquirer.net/inquirer/.../galaxy-note-4-release-date-specs-and-price
• gadgets.ndtv.com › Mobiles › All Brands › Motorola
• gadgets.ndtv.com/sony-xperia-z3-1940
Exploring the Best Smartphones upcoming Features of Year 2016
Dr. M.M Puri
Director, JSPM's, Pune, India
manimalap@yahoo.com
Rinku Dulloo
Assistant Professor, JSPM's Rajarshi Shahu College of Engineering, Pune, India
rinkudulloo@gmail.com
ABSTRACT :
The purpose of this study is to investigate the various advanced features expected to arrive in Smartphones in the year 2016. The study gives an idea of how these upcoming features will be used, and of the older models that are falling by the wayside.
This paper is organized as follows: Section 2 presents the literature review, Section 3 presents the objectives, Section 4 discusses the review of features, and the last section, Section 5, presents the conclusion and future work.
Keyword : Smartphone, Better biometrics, USB Type-C, Intelligent apps, Superior
displays, Bendable handsets, Amazing cameras, Longer-lasting batteries, Invisible
waterproofing, More sensors, Force Touch, Holographic projections, Laptop power,
Modular components.
I. Introduction
The word 'smart' in Smartphone itself means a device which possesses smarter capabilities than a basic mobile phone. Beyond normal calls and text messages, it provides essential functions like web browsing, multimedia entertainment and games. It works like a mini computer that fits in our pocket. The Smartphones of today have other extended features, including web browsing, mobile apps, Wi-Fi, third-party apps and video streaming, as well as connectivity that enables millions to stay connected while on the go.
The major phone manufacturers are releasing similar-looking slabs of powerful electronics packed into thin touchscreen bodies. Even if today's mobiles aren't changing all that much on the surface, there's still plenty of new technology on the way.
The objective of this paper is to review and discuss the best upcoming Smartphone features of the year 2016. The paper helps to identify the hottest features of the year and saves time while purchasing a new handset.
It also gives updates on the best Smartphone features to reflect recent launches. In today's era new models are arriving, old models are falling by the wayside and the desirability of some of last year's models is waning. The paper covers the list of the best upcoming Smartphone features.
II. Literature Review
The Smartphone has been in the market since 1993, but it has come into prominence over the last six years or so, since Apple introduced the Smartphone to the mass consumer market (Muhammad Sarwar, 2013). Earlier Smartphones were predominantly meant for corporate users and used as enterprise devices, and those phones were too expensive for general consumers (Brad Reed, 2010). Previously the Smartphone was made according to the needs of enterprises, and the features and functions were as per corporate requirements. The first Smartphone, 'The Simon' from IBM, appeared in 1993. Blackberry is considered the revolutionary device of this era; it introduced many features including email, Internet, fax, web browsing and a camera. This phase was totally based on Smartphones targeting enterprises (Brad Reed, 2010) (Wikipedia, 2012) (Hamza Querashi, 2012). The middle phase of the Smartphone period started with the arrival of the iPhone, first revealed by Apple in 2007. At this stage the Smartphone was manufactured for the general consumer market. In the later part of 2007 Google revealed its Android operating system, with the intention of approaching the consumer Smartphone market. The emphasis during this period was to introduce features that the general consumer requires while keeping the cost low to attract more and more customers. Features like email, social website integration, audio/video, internet access and chatting, along with the general features of the phone, were part of all these phones (Hamza Querashi, 2012) (Sam Costello, 2012) (Chris Ziegler, 2011) (Mark Prigg, 2012). The third phase of the Smartphone started in 2008. During this phase the gap between industry requirements and general customer requirements was largely closed. Improvement was seen in display quality, the operating system and display technology; more powerful batteries, enhanced user interfaces and many more features were provided within these smart devices. There have been several upgrades in the mobile operating systems, with Apple iOS, Android and Blackberry OS all upgraded within the last five years (Muhammad Sarwar, 2013). The most popular mobile operating systems (iOS, Android, Blackberry OS, Windows Mobile) and key Smartphone vendors (Apple, Samsung, HTC, Motorola, Nokia, LG, Sony, etc.) are concentrating on bringing features, both in operating systems and in devices, which will provide exciting functionality to enterprise and general consumers (Muhammad Sarwar, 2013). The role of Android has been tremendous during this period, as it provided a great opportunity for all vendors to build devices using the open source Android technology (Sam Costello, 2012) (Chris Ziegler, 2011) (Mark Prigg, 2012).
III. Objective
The basic objective of this paper is:
1. To propose a consolidated document highlighting upcoming features of Smartphones for the year 2016.
IV. Review of Upcoming Features in Smartphones
a. Better biometrics
For security, people normally use PIN-based authentication on their Smartphones. Touch ID is available on the latest iPhones and iPads, fingerprint sensing is built into Android Marshmallow, and Windows Smartphones can use Windows Hello to log in using advanced facial recognition. By next year, iris scanning to log into a phone might be a common security feature, rather than PIN-based authentication.
b. USB Type-C
A more advanced form of USB, the USB Type-C connection standard, is likely to appear widely by 2016. It offers faster data transfer, better charging speed and the ability to multitask using the same cable. This would enable you to connect your phone to a high-definition monitor at the same time as charging it. The cables and sockets are reversible, so they fit easily either way up.
c. Intelligent apps
Apple and Google are adding extra features to Siri and Google Now, as is Microsoft for Cortana. By 2016, these assistants should control more of a phone's settings and be more intelligent in understanding how we live our lives.
d. Superior displays
Smartphone display screens have seen rapid improvements, with QHD resolutions (1440 x 2560 pixels) on many phones. By next year, we may be entering the era of WQXGA+ (3200 x 1800 pixel) display screens. Efforts are also being invested in making LEDs brighter, clearer and more efficient.
e. Longer-lasting batteries
Decent battery life is an essential Smartphone feature. Additional improvements are being made in chipset efficiency, which will improve battery life too, and advancements in wireless charging are expected to make it the norm for every handset.
f. Force Touch
Force Touch is a feature where a touch-screen display registers the pressure of your touch as well as its position and duration. It is a pressure-sensitive multi-touch technology that enables track-pads and touch-screens to distinguish between different levels of force being applied to their surfaces (Wikipedia). It is already a feature of the Apple Watch, the latest MacBook and the latest iPhones. Force Touch brings new dimensions to user interfaces and interactive computing, and has opened up an area where other manufacturers are likely to follow and integrate the technology into their own screens, enabling users to interact with more versatile displays.
g. Bendable handsets
Flexible screens and bendable handsets are features that many manufacturers have been trying to build, but no one has yet managed to make the technology commercially viable. By 2016, these features may be available on a few devices. Major manufacturers like Samsung and LG are investigating the possibilities: Samsung has already built a prototype foldable tablet, and the LG G Flex 2 has a curved display which gives it a distinctive design.
h. Amazing cameras
Many users rely heavily on their smartphone's camera, although the thickness of current handsets imposes some limitations. The optics inside these cameras are continuously improving. Apple has patented swappable camera modules and features, which would enable you to attach upgrades to the basic camera on your smartphone as and when needed, much like using a standard camera system but on a smartphone. Low-light performance is continuously improving with advancing technology, and optical image stabilisation is becoming standard in many smartphones.
i. Invisible waterproofing
There aren't many phones currently on the market with waterproofing; Sony's Xperia series and the new Galaxy S6 do offer some. The emergence of hydrophobic coating technology makes it possible to cover electronics in a microscopic layer of water-resistant material, which means manufacturers no longer have to block every port with plastic covers.
j. More sensors
Smartphones of 2016 are going to have better sensing technology to check your energy levels, heart rate, blood oxygen level and even your mood, with or without an external wearable device. More advanced atmospheric measurements could also be taken by these handsets in future, providing better weather forecasts and air quality readings.
k. Modular components
Project Ara is a Google experiment in modular phone technology that is due to launch during 2016. It will enable users to configure a phone to their exact specifications: a user can focus on the features he or she needs most and can replace components individually (like the camera and battery), without upgrading the whole phone each time.
l. Holographic projections
Smartphones with holographic projection capabilities are being researched. Samsung has developed technology that projects 3D objects into the air above the Smartphone's screen and is trying to patent it. This means the phone's display could extend beyond the physical screen itself.
m. Console-like gaming
With the help of innovations like Metal on iOS and Vulkan on Android, developers are creating faster, more complex games with better graphics. Future smartphones empowered with such technology and hardware could mean that users no longer need a PlayStation or Xbox.
n. Built-in virtual reality capabilities
Smartphone manufacturers are developing virtual reality capabilities built around the handset, for example Oculus Rift, Google Cardboard and the Samsung Gear VR. A lot of the phones releasing next year are going to have some virtual reality function built in.
o. Laptop power
Developments are being made that enable users to plug a Smartphone into a laptop-like chassis and use it with a standard monitor and keyboard. As Smartphones become more powerful, and as companies like Apple, Google and Microsoft develop better desktop and mobile operating systems, we can expect to get a phone-powered laptop in 2016.
VI. Conclusion & Future Work
In this paper, the researchers have identified advanced Smartphone features. These features will improve the quality of Smartphones and will be of great use to users. Despite identifying these advanced features, there are dissimilarities in forecasting precision, and the researchers point out that there may be other advanced features that will have an equal, if not greater, impact upon adoption.
The results shown by all these approaches demand additional investigation, particularly to explore the effect of the various features on Smartphone models in terms of improving robustness and accuracy. They also offer the potential to provide more transparent solutions, but this aspect requires further research.
VIII. REFERENCES:
• Brad Reed (2010), "A brief history of Smartphone's", http://www.networkworld.com/slideshows/2010/061510-smartphonehistory.html#slide1
• Chris Ziegler (2011), "Android: A visual history", http://www.theverge.com/2011/12/7/2585779/android-history
• Hamza Querashi (2012), "Apple: from iPhone 1 to iPhone 5 – Evolution, Features and Future Review", http://www.thenewstribe.com/2012/07/16/apple-from-iphone-1-toiphone-5-evolution-features-and-future-review/
• Muhammad Sarwar (2013), "Impact of Smartphone's on Society", European Journal of Scientific Research.
• Mark Prigg (2012), "Microsoft launches Windows 8 Phone software - and hopes apps and Jessica Alba will help it take on Apple and Google", http://www.dailymail.co.uk/sciencetech/article-2225149/Windows-8-phonesoftware-launch-Microsoft-hopes-Jessica-Alba-help-Apple Google.html
• Sam Costello (2012), "First-Generation iPhone Review", http://ipod.about.com/od/iphoneproductreviews/fr/iphone_review.htm
• Wikipedia (2012), "Blackberry", http://en.wikipedia.org/wiki/BlackBerry
• gadgets.ndtv.com › Mobiles › All Brands › Motorola
• https://en.wikipedia.org/wiki/Moto_E_(1st_generation)
• https://en.wikipedia.org/wiki/IPhone_6
• https://en.wikipedia.org/wiki/LG_G3
• www.motorola.com/us/MotoX/FLEXR2.html
• gadgets.ndtv.com/sony-xperia-z3-1940
• https://en.wikipedia.org/wiki/HTC_One_(M8)
• www.theinquirer.net/inquirer/.../galaxy-note-4-release-date-specs-and-price
Test Data Management - Responsibilities And Challenges
Ruby Jain
Research Student SPPU Pune,
Maharashtra.India
ruby.jain81@gmail.com
Dr. Manimala Puri
Director, JSPM Pune, Maharashtra.
India
manimalap@yahoo.com
ABSTRACT :
The testing discipline is a very important part of the software development process. It helps to improve and then retain the quality of a software product, and also makes it possible to measure the required quality. Test Data Management, a service that caters to the various data needs for development, enhancement, maintenance and testing of applications, plays a vital role in the IT systems of any organization. An increasing number of organizations are requesting Test Data Management as a managed, centralized IT service from their vendors, mainly with the objectives of realizing cost benefits, reduced time-to-market and improved quality of the end product. The objective of this paper is to discuss the challenges and typical practices in providing Test Data Management services, and to address those challenges.
Keyword : Test Data Management, test environment, agile, masking, TDM.
Introduction :
Test data management is very critical during the test life cycle, because the amount of data generated for testing an application is enormous. By managing this data and reporting the results, it minimizes the time spent on processing data and creating reports, which greatly contributes to the efficiency of the entire product.
Ineffective test data management may lead to:
• Inadequate testing and thus poor quality of the product
• Increased time-to-market
• Increased costs from redundant operations and rework
• Non-compliance with regulatory norms on data confidentiality
Test Data Management is about the provisioning of data for non-production
environments, especially for test purposes but also for development, training, quality
assurance, demonstration or other activities. Test data has always been required to support
application development and other environments but, until the relatively recent advent of
TDM, this has been achieved in an ad-hoc manner rather than in any formalized or managed
way. The predominant technique has been copying some production data or cloning entire
production databases.
Test Data Management provides integrated sensitive data discovery, business
classification, and policy driven data masking for de-identification and safe use of production
data used in test and development environments.
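As a generic illustration of the policy-driven masking described above (not the API of any particular TDM product; the column names, policies and hashing choice below are assumptions for illustration only), a simple de-identification step could look like this:

```python
# Hypothetical policy-driven masking of sensitive columns before loading
# production data into a test environment.
import hashlib

# Masking policy: each sensitive column maps to a function that de-identifies it.
MASKING_POLICY = {
    "email": lambda v: "user" + hashlib.sha256(v.encode()).hexdigest()[:8] + "@example.com",
    "card_number": lambda v: "XXXX-XXXX-XXXX-" + v[-4:],   # keep only the last four digits
    "name": lambda v: "Test User",                          # replace with a generic value
}

def mask_row(row: dict) -> dict:
    """Apply the masking policy to every sensitive column; pass others through."""
    return {col: MASKING_POLICY.get(col, lambda v: v)(val) for col, val in row.items()}

production_row = {"name": "A. Sharma", "email": "a.sharma@corp.in",
                  "card_number": "4111111111111111", "city": "Pune"}
print(mask_row(production_row))   # confidential fields are masked, non-sensitive fields kept
```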
Test Data :
Test data is the data that is used in tests of a software system.
In order to test a software application you need to enter data to exercise most of its features; any such specifically identified data used in tests is known as test data. Some test data is used to confirm the expected result, i.e. when the test data is entered the expected result should be produced, and some test data is used to verify the software's behaviour for invalid input. Test data is generated by testers or by automation tools which support testing. In regression testing the test data is often re-used, so it is always good practice to verify the test data before re-using it in any kind of test.
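For illustration only, a tester or a simple script can generate synthetic test data instead of copying production records. The field names, value ranges and the deliberately invalid value below are assumptions, not part of any standard tool:

```python
# Hypothetical generation of repeatable, synthetic test records with the
# standard library, including both valid and invalid inputs.
import random
import string

def make_test_user(seed: int) -> dict:
    rng = random.Random(seed)                       # seeded, so runs are repeatable
    name = "".join(rng.choices(string.ascii_lowercase, k=8))
    return {
        "user_id": 1000 + seed,
        "username": name,
        "email": f"{name}@test.example",
        "age": rng.randint(18, 70),                 # valid input for normal test cases
        "invalid_age": rng.choice([-5, 0, 200]),    # invalid input to verify error handling
    }

test_users = [make_test_user(i) for i in range(5)]
for user in test_users:
    print(user)
```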
Test Environment :
The test environment consists of elements that support test execution, with software, hardware and network configured. The test environment configuration must mimic the production environment in order to uncover any environment or configuration related issues.
Factors for designing Test Environment :
The following factors should be considered (a small configuration sketch follows this list):
• Determine whether the test environment needs archiving in order to take backups.
• Verify the network configuration.
• Identify the required server operating system, databases and other components.
• Identify the number of licenses required by the test team.
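A minimal sketch of recording these factors as a machine-checkable checklist is shown below; the component names, versions and counts are illustrative assumptions only:

```python
# Hypothetical checklist of test-environment factors, recorded as data so it
# can be reviewed or validated automatically before test execution starts.
test_environment = {
    "operating_system": "Ubuntu 14.04 LTS",          # assumed server OS
    "database": {"engine": "MySQL", "version": "5.6"},
    "network_configuration_verified": True,
    "backup_archiving_enabled": True,
    "licenses_required": 12,                          # assumed team size
}

# Flag anything that is missing or explicitly disabled.
pending = [key for key, value in test_environment.items() if value in (None, False)]
print("Environment ready" if not pending else f"Pending items: {pending}")
```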
Test Management :
Test management, process of managing the tests. A test management is also performed
using tools to manage both types of tests, automated and manual, that have been previously
specified by a test procedure.
Test management tools allow automatic generation of the requirement test matrix
(RTM), which is an indication of functional coverage of the application under test (AUT) (HP White Paper, 2010).
Test management tools often have multifunctional capabilities such as testware management, test scheduling, the logging of results, test tracking, incident management and test reporting.
Test Management Responsibilities :
• Test management has a clear set of roles and responsibilities for improving the quality of the product.
• Test management helps the development and maintenance of product metrics during the course of the project.
• Test management enables developers to make sure that there are fewer design or coding faults.
Challenges in TDM:
Test Data Management may seem simple, but closer involvement reveals that it is quite a tricky job that comes with the following challenges (Kozikowski, 2009):
• Complexity of data requirements – each project usually involves multiple application teams requiring data to be synchronized between applications; each application is simultaneously involved in multiple projects, resulting in contention for environments, and so on (Figure 1).
• Analysis of data requirements due to lack of information on the existing data can prove to be a major challenge.
• Sudden and immediate requests for test data during test execution – catering to these types of requests requires a lot of agility since the allowed turnaround time is very short.
• Adherence to regulatory compliance such as data confidentiality – data in the test environments cannot be a direct copy of production; confidential information must be masked before loading the data.
• Assurance of data safety or security – there was initially no well-defined policy on test data storage strategy with version control, access security and back-up mechanisms. Thus, there was always a risk of crisis situations resulting from the unanticipated loss of a database.
• Reliance on production data for loading to the test environment – this is always a challenge due to the huge volume of production data and the chance of disruption to production systems caused by repeated data requests.
• Coordination – the test data management team has to coordinate with application teams, the infrastructure team, Database Administrators (DBAs) and so on. Coordination with multiple stakeholders can at times be quite a challenging task.
• Lack of a proper process framework to manage the activities related to Test Data Management.
• Ensuring proper data distribution so as to prevent:
  o Data contention between multiple projects
  o Redundant or unused data in any region
• Ensuring data reuse.
• Managing the impact of data refresh on ongoing projects.
• Identification of the right region that caters to the needs of all the applications within a project.
Fig. 1: Complexity of Test Data Requests
Managing Test Data Across the Testing Lifecycle :
The testing team needs to address data requirements in the following stages of the TDM lifecycle:
• Test Data Setup (Preparation)
• Ongoing Test Data Administration
Fig. 2: Stages of TDM lifecycle
Typically, Test Data Management services can be segregated into the following four categories (“Oracle Test Data Management Pack”, 2013):
1. Initial test data set-up and/or synchronization of test data across applications. This is a one-time job that is executed by the Test Data Management team right after it is established.
2. Servicing data requirements for project(s). Projects are again of two categories:
   a. Development of a new application, which may thus require test data creation from scratch
   b. Enhancement or maintenance of existing application(s) only
3. Regular Maintenance or Support – servicing:
   a. Simple data requests
   b. Change requests (CRs), that is, changes in data requirements
   c. Problem Reports (PRs), that is, problems reported in data delivered
4. Perfective Maintenance – scheduled maintenance of test beds on an annual frequency
Best Practices for Test Data Management:
The best test data management solutions address the following (Kasurinen, Taipale and Smolander, 2009):
1. How representative is the test data?
It is important to stress just how vital this understanding of business entities is. A
business entity, a conceptual view of which is illustrated in Figure 3, provides a complete
picture of that entity: for example, a customer record including delivery addresses, order
number, payment history, invoices and service history. Note that such an entity may span
multiple data sources: even if your new application will only run against one of these you will
still need a broader understanding if you are to generate representative data. Further, it is
important to be aware that business entities cannot be understood by simply turning to the data
model because many of the relationships that help to define a business entity are actually
encoded in the application layer rather than in any database schema, and such relationships
need to be inferred by tools.
Fig. 3 : Conceptual view of a business entity
2. Providing agile test data:
Much of today’s development takes place in an agile environment (Highsmith, J. and
Cockburn, A. 2001). Here you will repeatedly need new sets of test data. A TDM solution
should enable testers and developers to refresh test data to ensure they are working with a
clean dataset. In other words, developers and testers should not have to go back to the database
administrator and ask for new test data. The ability to refresh data improves operational
efficiency while providing more time to test, thus enabling releases to be delivered more
quickly. In practical terms we do not believe that agile development is realistic without the
ability to quickly refresh test data. Agile development means agile test data. A TDM solution
should also support the generation of differently sized non-production databases. As we noted
at the outset, TDM has practical implications for quality assurance, training, demonstrations or
other non-production purposes. You might want a larger dataset for integration testing than
you do for unit testing or for training or demonstration purposes. In other words, you should
be able to generate ‘right-size’ databases according to your needs.
3. Easy and Comprehensive Data Masking:
For data masking you will need a range of different masking techniques that run from
simple shuffling and randomization through to more sophisticated capabilities that preserve
relationships and support relevant business rules (such as being a valid post code). The goal is
to ensure that masked data is contextually correct so that testing processes will run accurately.
Proper data masking requires sensitive data discovery and an understanding of relationships
within and across databases. From an implementation perspective such a solution should
provide the ability to propagate keys across tables to ensure referential integrity is maintained.
In addition to these capabilities, it will be useful if the data masking solution supports major
ERP and CRM environments as well as custom developments.
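One common way to propagate keys consistently across tables, so that referential integrity survives masking, is deterministic masking with a keyed hash: the same original key always maps to the same masked value. The sketch below is a minimal Python illustration under assumed table layouts and an assumed secret; it is not the approach prescribed by any particular TDM product.

import hmac
import hashlib

# Illustrative secret; in practice this key would be generated and stored securely.
SECRET = b"masking-demo-key"

def mask_id(value: str, length: int = 12) -> str:
    """Derive a stable pseudonym for a key value using a keyed hash (HMAC-SHA256)."""
    digest = hmac.new(SECRET, value.encode("utf-8"), hashlib.sha256).hexdigest()
    return "C" + digest[:length]

# Assumed parent and child tables sharing the cust_id key.
customers = [{"cust_id": "1001", "name": "Alice"}, {"cust_id": "1002", "name": "Bob"}]
orders = [{"order_id": "A1", "cust_id": "1001"}, {"order_id": "A2", "cust_id": "1002"}]

masked_customers = [{**c, "cust_id": mask_id(c["cust_id"]), "name": "XXXX"} for c in customers]
masked_orders = [{**o, "cust_id": mask_id(o["cust_id"])} for o in orders]

# The join still works after masking because mask_id is deterministic.
assert {o["cust_id"] for o in masked_orders} <= {c["cust_id"] for c in masked_customers}

Because the pseudonym is derived rather than random, every table masked with the same key stays joinable, which is the property the referential-integrity requirement above describes.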
Conclusion:
The effectiveness of TDM is critical in ensuring the success of independent validation efforts. Quality assurance teams with a thorough understanding of data requirements and techniques for data preparation will be able to ensure that test data is appropriately set up in the test environment. Effective test data management helps in achieving better test coverage, optimizing test effort and, as a result, reducing the cost of quality considerably.
REFERENCES:
[1] Derek M. Kozikowski, “Test Data Management Challenges and Solutions,” 2009
[2] HP White Paper, “How to address top problems in test data management”, 2010
[3] The tdminsights blogspot website. [Online]. Available: http://tdminsights.blogspot.in/2013/03/test-data-life-cycle.html#more
[4] Highsmith, J. and Cockburn, A. (2001). Agile Software Development: the Business of
Innovation. Computer, 34(9), 120-127. DOI: 10.1109/2.947100
[5] Purnima Khurana, Purnima Bindal. "Test Data Management". International Journal of Computer Trends and Technology (IJCTT) V15(4):162-167, Sep 2014. ISSN: 2231-2803. www.ijcttjournal.org. Published by Seventh Sense Research Group.
[6] Testify – TCS tool for test data creation
[7] Grid-Tools Test Data Management. [Online]. Available: http://www.itdirector.com/technology/applications/paper.php?paper=927
[8] Test Data Management - A Process Framework By Dinesh Nittur, Tithiparna Sengupta
[9] Kasurinen, J., Taipale, O. and Smolander, K. (2009). “Analysis of Problems in Testing
Practices”, Proceedings of the 16th Asia Pacific Software Engineering Conference
(APSEC), 1.12.-3.12.2009, Penang, Malaysia. doi: 10.1109/APSEC.2009.17
[10] We spoke with practitioners across TCS i.e. Test Data Manager and members of multiple
teams that are providing Test Data Management services.
[11] Masketeer – TCS tool for test data masking.
[12] Oracle White Paper,”Oracle Test Data Management Pack”, 2013.
Providing Data Confidentiality By Means Of Data Masking:
An Exploration
Ruby Jain
Research Student SPPU Pune,
Maharashtra. India
ruby.jain81@gmail.com
Dr. Manimala Puri
Director, JSPM Pune, Maharashtra.
India
manimalap@yahoo.com
ABSTRACT :
Enterprises need to share production data with various constituents while also
protecting sensitive or personally identifiable aspects of the information. As the number
of applications increases, more and more data gets shared, thus further increasing the
risk of a data breach, where sensitive data gets exposed to unauthorized parties. Data
Masking addresses this problem by irreversibly replacing the original sensitive data
with realistic-looking scrubbed data that has the same type and characteristics as the original sensitive data, thus enabling organizations to share this information in compliance with information security policies and government regulations.
“Confidentiality is going to become interesting because society is changing worldwide. Society is beginning to worry about confidentiality,” Byrnes said. “We’re pretty sure that one thing that is happening is a push towards more transparency” (Gartner). The main reason for applying masking to a data field is to protect data that is classified as personally identifiable data, personally sensitive data or commercially sensitive data; however, the data must remain usable for the purposes of undertaking valid test cycles. It
is often necessary to anonymize data in test and development databases in order to
protect it from inappropriate visibility. There are many things, some incredibly subtle,
which can cause problems when masking data. This paper provides an investigation of the practical issues involved in the masking of sensitive data and offers a vision of the things we really need to know.
Keywords: Masking, Tokenization, Shuffling, Obfuscation, Breach.
Introduction :
Data masking or data obfuscation is the process of hiding original data with random characters or data; in other words, what remains is merely data rather than information. It involves the obfuscation of any information subject to an internal procedure, national law or industry regulatory compliance, such that if the data is subsequently used in an inherently insecure context such as application testing, it no longer contains meaningful private or confidential information in any way.
Masking is the conversion of data into a masked form, typically sensitive data into non-sensitive data of the same type and format. In other words, masking creates a proxy data substitute which retains part of the value of the original. The point is to provide data that looks and acts like the original data, but which lacks sensitivity and does not pose a risk of exposure, enabling the use of reduced security controls for masked data repositories. It is also known as data obfuscation, data privacy, data sanitization, data scrambling, data de-identification, data anonymization and data de-authentication.
Data masking has been around for years but the technology has matured considerably
beyond its simple script-based beginnings, most recently advancing rapidly to address
customer demand for solutions to protect data and comply with regulations.
In general, the users of the test, development or training databases do not need to see the
actual information as long as what they are looking at looks real and is consistent (Ravikumar
G K et al, 2011, pp. 535-544). However, the ability of test and development teams to work with masked data is not yet common practice. It is important to be aware that data masking is appropriate to more than just personal details; sometimes business-confidential information is also appropriate for masking. In general, a logical security assumption is that the more people who have access to the information, the greater the natural risk of the data being compromised. The modification of the existing data in such a way as to remove all identifiable distinguishing characteristics can provide a valuable layer of security for test and development databases (Lodha, Sundaram, 2005), (Xiao-Bai Li, 2009).
According to the Identity Theft Resource Center, there were 761 reported data security
breaches in 2014 impacting over 83 million breached records across industries and
geographies with B2B and B2C retailers leading the pack with 79.2% of all breaches.
Features of Masking:
• Precision for data privacy laws: Masking supports the precise controls needed to accommodate data privacy laws and tightly control access to sensitive, private and confidential information.
• Seamless integration without modification: Data masking supports heterogeneous environments without the need to modify applications or data. Masking solutions also complement existing data security controls such as encryption and tokenization without the need to modify settings and configurations.
• Complete coverage for data security needs: By combining persistent and dynamic masking with traditional data security controls, organizations can provide complete coverage for businesses’ data security needs.
• High performance and nonintrusive: Dynamic data masking delivers high-throughput, low-latency performance that does not affect the user’s experience. For persistent data masking, a proven platform can scale to meet the requirements of organizations that need to mask large data stores. Both solutions integrate without disruption, as no changes are required.
Process For Masking and Subsetting :
1. Build the Knowledge Base from the production data through direct access or an unload process.
2. Analyze, inventory and classify the data in the Knowledge Base. This provides the key information for subsequent steps.
3. Define the extraction patterns and rules delivering repeated extraction schemes. These rules are reused in all subsequent extractions.
4. Perform the actual data extraction. This is a single process for both masked and subsetted data.
5. Load the reduced and secured test data into the test environment for testing against normal procedures (Muralidhar, K., R. Sarathy, 1999, pp. 487-493).
Fig. 1: Five step process for masking and subsetting
Challenges of Masking Data :
1. Reusability: Due to the firm connection between a script and the associated database, these scripts would have to be re-written from scratch if applied to another database. There are no common capabilities in a script that can easily be leveraged across other databases.
2. Transparency: In view of the fact that scripts tend to be huge programs, auditors have no transparency into the masking procedures used in the scripts. The auditors would find it really difficult to offer any advice on whether the masking process built into a script is secure and offers the enterprise the appropriate degree of protection.
3. Maintainability: When these enterprise applications are upgraded, new tables and columns containing sensitive data may be added as a part of the upgrade process. With a script-based approach, the entire script has to be revisited and updated to accommodate new tables and columns added as a part of an application patch or an upgrade.
Different Types of Data Masking :
Data masking is tightly coupled with building test data. The major types of data masking
are :
1. Static data masking :
Static Data Masking is done on the golden copy of the database. Production DBAs load
the backup in a separate environment, reduce the data set to a subset that holds the data
necessary for a particular round of testing (a technique called "subsetting"), apply data
masking rules while data is in stasis, apply necessary code changes from source control and
push data to desired environment.
2. On-the-fly data masking :
On-the-Fly Data Masking (RISKS,sic) happens in the process of transferring data from
environment to environment without data touching the disk on its way. The same technique is
applied to "Dynamic Data Masking" but one record at a time. This type of data masking is
most useful for environments that do continuous deployments as well as for heavily integrated
applications. Organizations that employ continuous deployment or continuous delivery
practices do not have the time necessary to create a backup and load it to the golden copy of
the database. Thus, continuously sending smaller subsets (deltas) of masked testing data from
production is important. In heavily integrated applications, developers get feeds from other
production systems at the very onset of development and masking of these feeds is either
overlooked or not budgeted until later, making organizations non-compliant. Having on-the-fly data masking in place becomes essential.
3. Dynamic data masking :
Dynamic Data Masking is similar to On-the-Fly Data Masking, but differs in that On-the-Fly Data Masking is about copying data from one source to another so that the latter can be shared. Dynamic data masking happens at runtime, dynamically and on-demand, so that there need not be a second data source in which to store the masked data.
Dynamic data masking can also be used to encrypt or decrypt values on the fly, especially when using format-preserving encryption.
Key Features of Dynamic data masking :
1. Exactness for data privacy laws: Combinations of personal, health, or credit information can be anonymized to comply with complex cross-border privacy laws and regulations.
2. Performance: Dynamic data masking’s high-speed engine ensures no impact on user throughput. Persistent data masking can scale to mask terabytes of data for large test, outsourcing, or analytic projects.
3. Data connectivity: Masking techniques have developed comprehensive integrations and connectors with a long-term heritage in data integration and management.
4. Powerful masking capabilities: A range of masking functions is repeatable across systems to ensure business processes are reliable and precise.
5. Role-based masking: Based on role and location, dynamic data masking accommodates data security and privacy policies that vary based on users’ locations.
6. Monitoring and compliance reporting: Data security and privacy professionals can validate that identified sensitive data has been masked to meet security and privacy policies.
Data Masking and The Cloud :
In recent years, organizations increasingly develop their new applications in the cloud, regardless of whether the final applications will be hosted in the cloud or on-premises. Cloud solutions now allow organizations to use Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). There are various modes of creating test data and moving it from on-premises databases to the cloud, or between different environments within the cloud. Data masking invariably becomes part of these processes in the SDLC, as the development environments’ SLAs are usually not as stringent as the production environments’ SLAs, regardless of whether the application is hosted in the cloud or on-premises.
Data Masking Techniques :
The following is a list of common data masks (Masking, Net2000) used to obfuscate data; a brief code sketch illustrating a few of these masks follows the list:
• Substitution: Substitution is simply replacing one value with another. For example, the mask might substitute a person’s first and last names with names from a random phone book entry. The resulting data still constitutes a name, but has no logical relationship with the original real name unless you have access to the original substitution table.
• Redaction/Nulling: This substitution simply replaces sensitive data with a generic value, such as ‘X’. For example, we could replace a phone number with “(XXX) XXX-XXXX”, or a Social Security Number (SSN) with “XXX-XX-XXXX”. This is the simplest and fastest form of masking, but the output provides little or no value.
• Shuffling: Shuffling is a method of randomizing existing values vertically across a data set. For example, shuffling individual values in a salary column from a table of employee data would make the table useless for learning what any particular employee earns, without changing aggregate or average values for the table. Shuffling is a common randomization technique for disassociating sensitive data relationships while preserving aggregate values.
• Blurring: Taking an existing value and altering it so that the value falls randomly within a defined range.
• Averaging: Averaging is an obfuscation technique where individual numbers are replaced by a random value, but across the entire field the average of these values remains consistent. For example, in the case of salary, we could substitute individual salaries with the average across a group or corporate division to hide individual salary values while retaining an aggregate relationship to the real data.
• De-identification: A generic term for any process that strips identifying information, such as who produced the data set, or personal identities within the data set. De-identification is important for dealing with complex, multi-column data sets that provide sufficient clues to reverse-engineer masked data back into individual identities.
• Tokenization: Tokenization is the substitution of data elements with random placeholder values, although vendors overuse the term ‘tokenization’ for a variety of other techniques. Tokens are non-reversible because the token bears no logical relationship to the original value.
• Format Preserving Encryption: Encryption is the process of transforming data into an unreadable state. Unlike the other methods listed, the original value can be determined from the encrypted value, but only with special knowledge (the key). While most encryption algorithms produce strings of arbitrary length, format-preserving encryption transforms the data into an unreadable state while retaining the format (overall appearance) of the original values. Technically, encryption violates the first law of data masking, but some customers we interviewed use encryption and masking side by side, so we wanted to capture this variation.
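As a rough illustration of how a few of the masks above behave, here is a minimal Python sketch applying redaction, shuffling and averaging to a small in-memory employee table; the records and column names are invented for the example and are not drawn from any specific masking product.

import copy
import random

# Assumed employee records; not real data.
employees = [
    {"name": "Alice", "ssn": "123-45-6789", "salary": 52000},
    {"name": "Bob",   "ssn": "987-65-4321", "salary": 61000},
    {"name": "Carol", "ssn": "555-12-3456", "salary": 58000},
]

def redact_ssn(rows):
    """Redaction/nulling: replace every SSN with a fixed placeholder."""
    return [{**r, "ssn": "XXX-XX-XXXX"} for r in rows]

def shuffle_salaries(rows, seed=0):
    """Shuffling: randomize salaries vertically; aggregates are unchanged."""
    out = copy.deepcopy(rows)
    salaries = [r["salary"] for r in out]
    random.Random(seed).shuffle(salaries)
    for r, s in zip(out, salaries):
        r["salary"] = s
    return out

def average_salaries(rows):
    """Averaging: replace each salary with the group average."""
    avg = sum(r["salary"] for r in rows) / len(rows)
    return [{**r, "salary": avg} for r in rows]

print(redact_ssn(employees))      # SSNs hidden, everything else intact
print(shuffle_salaries(employees))  # individual salaries disassociated from names
print(average_salaries(employees))  # individual values hidden, aggregate preserved

Each function preserves the shape of the table, which is what keeps masked data usable for testing.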
Conclusion :
Staying compliant with policy and government regulations while sharing production data with non-production users has become a critical business imperative for all enterprises. Data Masking is designed and optimized for today’s high-volume enterprise applications running on databases. Masking’s combination of discovery, data set management, protection, and control over data migration is unique; no other data security product provides all these benefits simultaneously. Masking reduces risk with minimal disruption to business systems. These characteristics make masking well suited to meeting compliance requirements.
Organizations that have implemented Data Masking to protect sensitive data in test and development environments have realized significant benefits in the following areas:
a. Reducing risk through compliance.
b. Increasing productivity through automation.
Future work will extend the analysis of data masking techniques with critical parameters across industries and a comparison study against existing methods.
REFERENCES :
[1] "Data Masking Methodology". Data Kitchen.
[2] "What is Data Obfuscation".
[3] "Information Management Specialists". GBT.
[4] "Test Management Methodology". Data Kitchen.
[5] "Generating and Maintaining Test Data Subsets". Data Kitchen.
[6] Static and Dynamic Data Masking Explained, Gartner, October 2015
[7] http://www.freepatentsonline.com/7864952.html
[8] "Dynamic Data Masking with IBM Optim".
[9] "Data Masking: What You Need to Know" (PDF). Net2000 Ltd.
[10] "12qsSyncronisation and Complex Data Masking Rules Explained".
[11] "ELIMINATING COMPLINCE [sic] RISKS - DATA MASKING WITH CLOUD"
[12] http://searchsecurity.techtarget.com/news/4500248108/Gartner-IoT-security-is-all-aboutphysical-safety-and-data-handling.
[13] Muralidhar, K. and R.Sarathy,(1999) "Security of Random Data Perturbation Methods,"
ACM Transactions on Database Systems, 24(4),487-493.
[14] A Study on Dynamic Data Masking with its Trends and Implications, International
Journal of Computer Applications (0975 – 8887)Volume 38– No.6, January 2012.
[15] Ravikumar G K, Manjunath T N, Ravindra S Hegadi, Umesh I M “A Survey on Recent
Trends, Process and Development in Data Masking for Testing"-IJCSI- Vol. 8, Issue 2,
March 2011-p-535-544.
[16] Sachin Lodha and Sharada Sundaram. Data Privacy. In Proceedings of 2nd World TCS
Technical Architects' Conference (TACTiCS), 2005.
[17] Xiao-Bai Li, Luvai Motiwalla BY “Protecting Patient Privacy with Data Masking” WISP
2009.
A Study of Critical Factors Affecting Accuracy of Cost Estimation for
Software Development
Dr. Manimala Puri
Director, Abacus Institute of Computer
Applications
Pune, India
manimalap@yahoo.com
Mr. Mahesh P. Potdar
Associate Professor,IMSCD&R
Ahmednagar, India
maheshpotdar@rediffmail.com
Mrs. Shubhangi Mahesh Potdar
Assistant Professor, IBMRD
Ahmednagar, India
shubhangipotdar@rediffmail.com
ABSTRACT :
While developing any software, cost estimation is a key task in software project management. The accurate prediction of software development costs is a critical issue, because the success or failure of the whole project depends on the accuracy of the estimated cost. There are various factors, such as establishing a stable requirement baseline, considering user input, preparing a realistic cost and schedule estimate, choosing a suitable estimation technique, changes of company policy and many more, that affect the accuracy of the estimated cost and indirectly the likelihood of a successful software project. This paper aims to discuss the different factors that contribute to the effectiveness of cost estimation in software development projects. Using this study, the cost estimation process can perhaps produce more accurate results and help software development communities increase the success of software development projects.
Keywords: Success Factors, Cost Estimation, Software Project Management, Realistic
Cost, Software Development Communities.
1. Introduction
Cost estimation is an important issue in any software development project. The accurate prediction of software development costs is a critical issue, because the success or failure of the whole project depends on the accuracy of the estimated cost. We can say that software cost estimation is integrated with the software development process. All project management activities, like project planning, project scheduling etc., depend totally upon the estimated cost for the software project. An information system is defined as the interaction between people, process, data and technology [9]. There are various factors that affect the success or failure of cost estimation for software project development.
In this paper, a literature survey is carried out in order to gather related information to study these factors: a few existing influence factors together with new factors that contribute to a successful software cost estimation process. [8]
2. Suitable Estimation Technique
The accurate prediction of software development costs is critical for making good management decisions and for accurately determining how much effort and time a project will
require, for project managers as well as system analysts and developers. There are many software cost estimation techniques available, including algorithmic methods and non-algorithmic methods.
This classification depends upon the use of mathematical expressions for estimating cost. Algorithmic cost estimation uses mathematical expressions to estimate the cost associated with software development, while non-algorithmic cost estimation does not use mathematical expressions for software development cost estimation.
The algorithmic methods have been studied extensively and many models have been developed, such as the COCOMO models, the Putnam model, and function-point-based models.
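As a concrete illustration of an algorithmic method, the following minimal Python sketch computes effort, duration and average staffing with the Basic COCOMO model; the coefficients are the commonly published Basic COCOMO values for the three project modes, and the 32 KLOC input is an assumed example rather than data from this study.

# Basic COCOMO coefficients (a, b, c, d) for Effort = a*KLOC^b and Time = c*Effort^d.
COCOMO_BASIC = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str = "organic"):
    """Return (effort in person-months, duration in months, average staff)."""
    a, b, c, d = COCOMO_BASIC[mode]
    effort = a * kloc ** b
    duration = c * effort ** d
    return effort, duration, effort / duration

effort, duration, staff = basic_cocomo(32, "organic")
print(f"effort = {effort:.1f} person-months, duration = {duration:.1f} months, staff = {staff:.1f}")

The point of the sketch is simply that an algorithmic model turns a size estimate into effort and schedule through fixed mathematical expressions, in contrast to the judgment-based methods discussed next.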
The non-algorithmic methods include expert judgment, the top-down approach, the bottom-up approach [5], price-to-win, analogy, and many more. No one method is necessarily better or worse than another; in fact, their strengths and weaknesses are often complementary. If we integrate more than one technique to get more accuracy in cost estimation, the result is called a hybrid technique.
Selection of a suitable cost estimation technique/method for software development is a key success factor for a software project.
3. Success Factors for Software Development Cost Estimation
There are various factors that need to be considered, such as Good Planning, Appropriate Communication, Clear Requirements and Specifications, Clear Objectives and Goals, Support from Top Management, Effective Project Management Methodologies, Proper Tool Selection, Suitable Estimation Technique, Changes of Company Policy, Testing, Training, Good Quality Management, Entertainment, Role of Sponsors, and Suitable Software Development Methodology, that contribute to the success of cost estimation. The study also included a reworked statement of the original “CHAOS Report”, which was published in 1995 [4].
The following software project success factors are essential for a software project to succeed:
1. User Involvement
2. Executive Management Support
3. Clear Statement of Requirements
4. Proper Planning
5. Realistic Expectations
6. Smaller Project Milestones
7. Competent Staff
8. Ownership
9. Clear Vision & Objectives
10. Hard-Working, Focused Staff
11. Entertainment
12. Role of Sponsors
13. Changes of Company Policies
14. Proper Tool Selection
15. Suitable Estimation Technique
16. Suitable Software Development Methodology
17. Effective Project management skills (Project Manager)
18. Testing
19. Training
1. User Involvement:
Less involvement from customers might cause misunderstanding of the process and output of the project. Active user involvement will ensure the project team has a clearer picture of the whole project and the expected output. It helps the project team to identify the scope, the objectives, the goals and the requirements from the users. Active user involvement also helps ensure that the end product is according to the users’ needs. [4][11]
2. Executive Management Support
Top management commitment is the factor that determines the tipping point between
potential success and failure when developing and implementing business continuity
management projects and systems. In almost all of the cases where we were able to
successfully develop, implement and validate a business continuity management system, the
topmost contributor to the success was the keen interest exhibited by top management. When
we say top management, it implies the Steering Committee formed for the execution of the
business continuity project. A project charter issued by the top management ensures
organization wide commitment for the project and the availability of resources. [4]
3. Clear Statement of Requirements
Software requirements engineering is a sub-field of software engineering that deals with the elicitation, analysis, specification, and validation of requirements for software. The software requirement specification document lists all necessary requirements for project development. To derive the requirements we need to have a clear and thorough understanding of the products to be developed. [4]
4. Proper Planning
The Plan, Do, Check, Act cycle is fundamental to achieving project quality. [10] The
overall project plan should include a plan for how the project manager and team will maintain
quality standards throughout the project cycle. Time spent planning is time well spent. All
projects must have a plan with sufficient detail so that everyone involved knows where the
project is going. [4]
A good plan provides the following benefits:
• Clearly documented project milestones and deliverables
• A valid and realistic time-scale
• Allows accurate cost estimates to be produced
• Details resource requirements
• Acts as an early warning system, providing visibility of task slippage
• Keeps the project team focused and aware of project progress
5. Realistic Expectations
The planning process identifies the needs and capabilities of an organization; these two dimensions form the rationale for purchasing new systems. Needs are translated into business requirements, i.e., “what we want”. Specifications describe exactly how the system will fulfill the requirements. [4]
6. Smaller Project Milestones
Milestones are a tool used in project management to mark specific points along a project timeline. These points may signal anchors such as a project start and end date, a need for external review or input, and budget checks, among others. In many instances, milestones do not impact project duration; instead, they focus on major progress points that must be reached to achieve success. [4]
These checkpoints in the planning, building and completion stages help you determine if the project is on track.
7. Competent Staff
Competent development teams require effective communication, as this trait maximizes the strengths and minimizes the weaknesses of the team. The team should have a clear direction, a sense of ownership of the work and buy-in to the process. Teams require a quality workplace environment, as a team’s results are likely to reflect, not correct, the environment. [4]
8. Ownership
Ownership is a big issue in software project development. The customer organization often assumes that, since it is spending the money, it is the sole responsibility of the supplying organization to make the project a success. A project can never become a success while the two agencies involved act as separate islands; both have to take equal ownership to make it a success. [4]
9. Clear Vision & Objectives
Goals generally describe an achievement or accomplishment toward which certain effort is put. Objectives are specific targets within the general goal, and are time-related for achieving a certain task. A goal is defined as (i) the purpose toward which an endeavor is directed, or (ii) the result or achievement toward which effort is directed; aim; end. An objective has a similar definition but is supposed to be a clear and measurable target. [4]
10. Hard-Working, Focused Staff
The challenge in any software development effort is that quality management with hard-working, focused staff is a tricky balancing act that must factor in time, cost, and risks. Get it wrong, and you could face issues ranging from unsustainable costs, missed windows of opportunity, and unhappy customers, to a massive recall or the complete failure of a system at a critical moment. Get it right, and you can achieve a positive operational return on investment from efficiencies gained in development activities. [4]
11. Entertainment
Overlooking entertainment cost is a serious cause of budget overruns in software development projects. Entertainment cost is not only entertaining the client; it also includes outside meetings with suppliers, top management or other stakeholders. Some software development communities do not include entertainment cost in their estimation and budgeting process because they think it is not part of project expenses and tend to use their own pocket money. However, if they look deeply into this matter, it can sometimes cost them a large amount of money. As a result, they might claim these expenses under the project budget. Therefore, the actual cost sometimes exceeds the allocated budget. [11] [12]
12. Role of Sponsors
Sponsors are commonly seen as providing or contributing to the allocation of resources
and budget of a project. The main role of sponsors is to ensure a project is managed within the
budget. Even though the sponsor is not fully involved in the project, the project manager needs to report frequently to the sponsor. In common practice, the sponsor will influence the decisions made. Therefore, the involvement of the sponsor is needed in order to make sure the identification of resources is done properly. [11] [12]
13. Changes of Company Policies
Changes of company policy can be one of the contributors to the success of the cost estimation process in a software development project. The changes include change of top management, change of technology, change of governance, change of environment and many more. All these changes strongly affect the cost estimation process. In certain cases, an organization might have its own policy on resources, budget and duration of a project. Therefore, software development communities need to consider company policies when doing the estimation. Normally, the changes take effect during the early stage, in progress or after the estimation process. However, the right time to consider any changes is at the early stage of the estimation process. [11] [12]
14. Proper Tool Selection
Choosing the proper tool is important in the cost estimation process, as the right tool produces accurate results. Most of the cost estimation process is done manually; the traditional tools used in estimating cost are spreadsheets such as Microsoft Excel, and Microsoft Project. The biggest challenge in using the traditional method is the level of accuracy: software development communities find it difficult to achieve high accuracy in cost estimation results. Therefore, many studies have been done to develop automated tools for the cost estimation process. However, no one has claimed that their proposed tool can produce accurate results. [2][11] [12]
Most researchers agree that producing an accurate cost estimate is hard and crucial. [2] From the literature, there are few accurate tools that can be used to estimate cost in an IS project. This is due to the different and varied requirements and needs of users and project developers. Even though many researchers have tried to construct their own tools, to date nobody can claim that their tool produces good, accurate and widely acceptable estimates.
15. Suitable Estimation Technique
In the software cost estimation process, there are several techniques that can be applied to estimate the cost, for example expert judgment, the top-down approach, the bottom-up approach, price-to-win, rules of thumb, parametric models, analogy, time boxes and many more. However, to date there is no research that can establish which technique is the most suitable for the cost estimation process. Therefore, much research has been done investigating the most suitable technique that can be applied.
Choosing the right cost estimation technique is important to ensure the result is accurate. Different approaches applied in the cost estimation techniques may produce different accuracies of result. A few studies have been carried out to integrate more than one technique, which is called the hybrid technique. Yet no one can claim which technique is the best. Therefore, there is no right or wrong approach in choosing a cost estimation technique for a
project. The most important matter is to choose the technique which is most suitable for the project. Sometimes, a few techniques might be applied in a single project, but this increases cost and time and does not necessarily mean the result is better. [11] [12]
16. Suitable Software Development Methodology
The software development methodology plays an important role in the cost estimation process. However, some software development communities simply ignore it in the estimation process. Examples of software development methodologies are agile, spiral, waterfall, Rapid Application Development, prototyping and many more. Each software development methodology provides different steps, which contribute to the cost estimation process. For example, a traditional method (such as waterfall) involves a different process compared to an agile methodology. The simplicity of the process can affect the way software development communities do the estimation: fewer steps or processes might decrease the cost involved. Therefore, to secure the cost estimation process, the most suitable and correct software development methodology is needed. [11] [12]
17. Effective Project Management Skills (Project Manager)
The project manager also plays an important role and is an important factor in the success of any project. [7] A successful project manager must simultaneously manage the four basic elements of a project: resources, time, money and, most importantly, scope. [6] All these elements are interrelated. Each must be managed effectively, and all must be managed together if the project, and the project manager, are to be a success. [3]
• Resources: People, equipment, material
• Time: Task durations, dependencies, critical path
• Money: Costs, contingencies, profit
• Scope: Project size, goals, requirements
Most literature on project management speaks of the need to manage and balance three
elements: people, time, and money. However, the fourth element is the most important and it
is the first and last task for a successful project manager. [1]
18. Testing
Testing is critical to understanding how the application will work in the installed environment, whether it performs according to expectations, and to identifying any problems with the software or processes so they can be addressed prior to the live event.
• Document what type of testing must be done (i.e., database conversion, data flows, user front end, business flow). Include who will be involved in testing and how it will be performed.
• Write test scripts that detail all scenarios that could occur. Business end users should be involved in this, as they are most likely to understand all aspects of their business.
• Test items that are standard operations as well as those items that occur infrequently.
• Conduct user testing with staff members who are familiar with the business for which the application is designed. They should be validating the application for their business.
• Allow time in the schedule to retest anything that did not work initially. If any changes are made to software or setup, run through most tests again to assure there is no negative impact in other areas.
• Determine security access, setup, and test user accounts prior to go-live. [1]
19. Training
Without training, the implementation will take longer, adaptation will be more problematic and frustration will be higher. A training plan should be developed that includes everyone who will either support or use the new system. Components of a successful training program, as reported by practitioners, include business process re-engineering training and team training prior to project work, extensive documentation, and appropriate timing, coordinating between the project schedule, trainer availability and trainee availability. [1]
Conclusion:
This study has covered the main success factors required for the software cost estimation process in software project development. Many researchers may overlook some of the factors that affect the overall software cost. Using this study, the cost estimation process can perhaps produce more accurate results and help software development communities increase the success of software development projects. As a next step, the identified factors will be evaluated through hypothesis testing and a questionnaire survey.
REFERENCES:
[1] G. Rajkumar, D. (IJCTA, Jan-Feb 2013). The Most Common Success Factors In Cost Estimation – A Review. Int. J. Computer Technology & Applications, Vol 4(1), 58-61.
[2] Guru, P. (2008). What is Project Success: A Literature Review. International Journal of Business and Management 3(9), 71–79.
[3] Iman, A. S. (2008). Project Management Practices: The Criteria for Success of Failure.
[4] Johnson, J. (2000). “The Chaos Report” West Yarmouth, MA: The Standish Group. The
Chaos Report .
[5] Jorgensen, M. (2004). A Review of Studies on Expert Estimation of Software
Development Effort., 37–60. Journal of Systems and Software 70(1-2).
[6] Mohd hairul Nizam Nasir and Shamsul Sahibuddin. (n.d.). Critical success factors for
software projects: A Comparative Study Malaysia.
[7] Ostvold, K. J. (2005). A Comparison of Software Project Overruns – Flexible versus
Sequential Development Model. IEE Transactions on Software Engineering 31(9).
[8] Robert Frese, S. A. (n.d.). Project success and failure: what is success, what is failure,
and how can you improve your odds for success?
[9] Schwalbe, K. (2004). IT Project Management. Canada: Thomson Course Technology.
[10] Young, M. L. (n.d.). Six Success Factors for Managing Project Quality.
[11] Zulkefli Mansor 1, S. Y. (2011). Review on Traditional and Agile Cost Estimation
Success Factor in Software Development Project. International Journal on New
Computer Architectures and Their Applications (IJNCAA) 1(3) , 942-952.
[12] Zulkefli Mansor1, S. Y. (2011). Success Factors in Cost Estimation for Software.
ICSECS 2011, Part I, CCIS 179 , pp. 210–216.
Search Engine Goals By Using Feed Back Session
Mrs. Sandhya Giridhar Gundre
Assistant Professor (Computer Department)
Dr. D. Y. Patil Institute of Engineering and Management Research
Akurdi, Pune, Maharashtra, India
nsandhya528@gmail.com
Mr. Giridhar Ramkishan Gundre
Assistant Professor (Computer Science Dept.)
GSM Arts, Commerce & Science Senior College
Yerwada, Pune, India
ABSTRACT:
We cluster pseudo-documents by FCM clustering, which is simple and effective. Since we do not know the exact number of user search goals for each query, we set the number of clusters to five different values and perform clustering based on these five values, respectively. After clustering all the pseudo-documents, each cluster can be considered as one user search goal. The center point of a cluster is computed as the average of the vectors of all the pseudo-documents in the cluster.
Keywords: Cluster, Pseudo code.
Introduction:
Fuzzy Clustering:
• A fuzzy self-constructing algorithm (Data Mining Process):
Feature clustering is a powerful method to reduce the dimensionality of feature vectors
for text classification. In this paper, we propose a fuzzy similarity-based [3] self-constructing
algorithm for feature clustering. The words in the feature vector of a document set are grouped
into clusters, based on similarity test. Words that are similar to each other are grouped into the
same cluster. Each cluster is characterized by a membership function with statistical mean and
deviation. When all the words have been fed in, a desired number of clusters are formed [2]
automatically. We then have one extracted feature for each cluster. The extracted feature,
corresponding to a cluster, is a weighted combination of the words contained in the cluster.
By this algorithm, the derived membership functions match closely with and describe
properly the real distribution of the training data. Besides, the user need not specify the
number of extracted features in advance, and trial-and-error for determining [1] the
appropriate number of extracted features can then be avoided. Experimental results show that
our method can run faster and obtain better extracted features than other methods.
Fuzzy clustering is a class of algorithms for cluster analysis in which the allocation of
data points to clusters is not "hard" (all-or-nothing) but "fuzzy" in the same sense as fuzzy
logic.
• Explanation of clustering
Data clustering is the process of dividing data elements into classes or clusters so that
items in the same class are as similar as possible, and items in different classes are as
dissimilar as possible. Depending on the nature of the data and the purpose for which
clustering is being used, different measures of similarity may be used to place items into
classes [4], where the similarity measure controls how the clusters are formed. Some examples
of measures that can be used as in clustering include distance, connectivity, and intensity.
In hard clustering, data is divided into distinct clusters, where each data element belongs
to exactly one cluster. In fuzzy clustering[5] (also referred to as soft clustering), data elements
can belong to more than one cluster, and associated with each element is a set of membership
levels. These indicate the strength of the association between that data element and a particular
cluster. Fuzzy clustering is a process of assigning these membership levels, and then using
them to assign data elements to one or more clusters.
One of the most widely used fuzzy clustering algorithms is the Fuzzy C-Means (FCM)
Algorithm. The FCM algorithm attempts to partition a finite collection of n elements into a
collection of c fuzzy clusters with respect to some given criterion. Given a finite set of data,
the algorithm returns a list of c cluster centres and a partition matrix, where each element wij
tells the degree to which element xi belongs to cluster cj. Like the k-means algorithm, the
FCM [3] aims to minimize an objective function. The standard function is:
1
wk(x) =
d
(center
⎞
⎛
k x)
∑j ⎜
2/(m – 1)⎟
⎠
⎝ d (centerj x)
This differs from the k-means objective function by the addition of the membership values w_ij and the fuzzifier m. The fuzzifier m determines the level of cluster fuzziness: a large m results in smaller memberships w_ij and hence fuzzier clusters, while in the limit m = 1 the memberships w_ij converge to 0 or 1, which implies a crisp partitioning. In the absence of experimentation or domain knowledge, m is commonly set to 2. The basic FCM algorithm takes n data points (x1, ..., xn) to be clustered, a number of c clusters with cluster centers (c1, ..., cc), and m, the level of cluster fuzziness.
• Fuzzy c-means clustering
In fuzzy clustering, every point has a degree of belonging to clusters, as in fuzzy logic, rather than belonging completely to just one cluster. Thus, points on the edge of a cluster may be in the cluster to a lesser degree than points in the center of the cluster. An overview and comparison of different fuzzy clustering algorithms is available. Any point x has a set of coefficients giving the degree of its being in the k-th cluster, wk(x). [4] With fuzzy c-means, the centroid of a cluster is the mean of all points, weighted by their degree of belonging to the cluster:
center_k = Σ_x w_k(x)^m · x / Σ_x w_k(x)^m
The degree of belonging, wk(x), is related inversely to the distance from x to the
cluster center as calculated on the previous pass. It also depends on a parameter m that
controls how much weight is given to the closest center. The fuzzy c-means algorithm is very similar to the k-means algorithm (a minimal code sketch of these steps is given after the list):
• Choose a number of clusters.
• Assign randomly to each point coefficients for being in the clusters.
• Repeat until the algorithm has converged (that is, the coefficients' change between two iterations is no more than the given sensitivity threshold):
  o Compute the centroid for each cluster, using the formula above.
  o For each point, compute its coefficients of being in the clusters, using the formula above.
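The following minimal Python sketch implements the update loop just described, using the membership formula given above; the toy two-dimensional points, the choice of c = 2 clusters and the fuzzifier m = 2 are illustrative assumptions and not the settings used in the reported experiments.

import numpy as np

def fuzzy_c_means(points, c=2, m=2.0, tol=1e-4, max_iter=100, seed=0):
    """Alternate centroid and membership updates until the coefficients settle."""
    rng = np.random.default_rng(seed)
    n = len(points)
    w = rng.random((n, c))
    w /= w.sum(axis=1, keepdims=True)            # random initial memberships

    for _ in range(max_iter):
        wm = w ** m                               # memberships raised to the fuzzifier
        centers = (wm.T @ points) / wm.sum(axis=0)[:, None]

        # w_k(x) = 1 / sum_j (d(center_k, x) / d(center_j, x))^(2/(m-1))
        dist = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        dist = np.fmax(dist, 1e-12)               # guard against division by zero
        ratio = dist[:, :, None] / dist[:, None, :]
        w_new = 1.0 / (ratio ** (2.0 / (m - 1.0))).sum(axis=2)

        if np.abs(w_new - w).max() < tol:         # convergence: coefficients stopped changing
            return centers, w_new
        w = w_new
    return centers, w

points = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
                   [8.0, 8.0], [7.8, 8.2], [8.1, 7.9]])
centers, memberships = fuzzy_c_means(points, c=2)
print(centers)        # two centroids, one near (1, 1) and one near (8, 8)

In the paper's setting, the rows of "points" would be the pseudo-document vectors and each resulting cluster would be read as one user search goal.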
The algorithm minimizes intra-cluster variance as well, but has the same problems as k-means; the minimum is a local minimum, and the results depend on the initial choice of weights. Using a mixture of Gaussians along with the expectation-maximization [5] algorithm
is a more statistically formalized method which includes some of these ideas: partial
membership in classes. Another algorithm closely related to Fuzzy C-Means is Soft K-means.
Fuzzy c-means has been a very important tool for image processing, for clustering objects in an image. In the 1970s, mathematicians introduced a spatial term into the FCM [3] algorithm to improve the accuracy of clustering under noise.
Software Requirement Specification
Functional requirements:
It employs a low-overhead user-level communication mechanism, like the virtual interface architecture, to achieve a good load balance among server nodes. We compare three distribution models for network servers [2], round robin, SSL with session and SSL with BF, through simulation.
The experimental results with 16-node and 32-node cluster configurations show that, although the session reuse of SSL with session is critical to improving the performance of application servers, the proposed back-end forwarding scheme can further enhance performance due to better load balancing. The SSL [5] with BF scheme can minimize the average latency by about 40 percent and improve throughput across a variety of workloads.
Non-Functional Requirements:
• Secure access to confidential data (attendance details)
• 24×7 availability
• Better component design to get better performance at peak time
• Flexible service-based architecture, which will be highly desirable for future extension
Security:
It provides more security by requiring a username and password.
Safety:
This application provides more safety to the users for accessing the databases and for
performing the operations on the databases.
Interfaces:
It provides the interface for accessing the database and also allows the user to do the
manipulations on the databases.
Reliability:
This entire project depends on SQL Server.
Accuracy:
Since the same table is created under different users' accounts, the possibility of retrieving data wrongly increases. Also, if there is more data, validation becomes difficult. This may result in a loss of data accuracy.
Performance Requirements
Performance is measured in terms of the output provided by the application. Requirement specification plays an important part in the analysis of a system. Only when the requirement specifications are properly given is it possible to design a system which will fit into the required environment. It rests largely with the users [3] of the existing system to give the requirement specifications, because they are the people who will finally use the system. The requirements have to be known during the initial stages so that the system can be designed according to them. It is very difficult to change the system once it has been designed; on the other hand, designing a system which does not cater to the requirements of the user is of no use.
The requirement specification for any system can be broadly stated as given below:
 The system should be able to interface with the existing system.
 The system should be accurate.
 The system should be better than the existing system.
 The existing system is completely dependent on the user to perform all the duties.
Client Application Development
Client applications are the closest to a traditional style of application in windows-based
programming. These are the types of applications that display windows or forms on the
desktop, enabling a user to perform a task. Client applications include applications such as
word processors and spreadsheets, as well as custom business applications such as data-entry
tools, reporting tools, and so on. With client applications, binary or natively executing
code can access some of the resources on the user’s system [2] (such as GUI elements and
limited file access) without being able to access or compromise other resources. Because of
code access security, many applications that once needed to be installed on a user’s system can
now be safely deployed through the web. Your applications can implement the features of a
local application while being deployed like a web page.
Datasets and Dataadapters:
Dataset:
A dataset (or data set) is a collection of data. Most commonly a dataset corresponds to
the contents of a single database table, or a single statistical data matrix, where every column
of the table represents a particular variable, and each row corresponds to a given member of
the dataset in question. The dataset lists values for each of the variables, such as height and
weight of an object, for each member of the dataset. [4]Each value is known as a datum. The
dataset may comprise data for one or more members, corresponding to the number of rows.
The term dataset may also be used more loosely, to refer to the data in a collection of closely
related tables, corresponding to a particular experiment or event.
Data Adapter
Data Adapter is a part of the ADO.NET Data Provider. Data Adapter provides the
communication between the Dataset and the Data source. We can use the Data Adapter in
combination with the DataSet object. That is, these two objects combine to enable both data
access and data manipulation capabilities.
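The DataSet/DataAdapter pairing described above is specific to ADO.NET. Purely as a conceptual analogy, and not the ADO.NET API, the sketch below fills an in-memory collection from a SQLite source so the data can be examined without keeping the connection open; the table and column names are illustrative assumptions.

# Conceptual analogy only: an "adapter" function loads rows from a data source
# into a detached in-memory "dataset" (a list of dicts).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attendance (student TEXT, present INTEGER)")
conn.execute("INSERT INTO attendance VALUES ('A101', 1), ('A102', 0)")

def fill_dataset(connection, query):
    """Plays the adapter role: runs the query and returns an in-memory copy."""
    cur = connection.execute(query)
    columns = [c[0] for c in cur.description]
    return [dict(zip(columns, row)) for row in cur.fetchall()]

dataset = fill_dataset(conn, "SELECT * FROM attendance")
print(dataset)   # the dataset can now be examined without the connection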
Conclusion:
A novel approach has been proposed to infer user search goals for a query by clustering
its feedback sessions represented by pseudo documents. First, we introduce feedback sessions
to be analyzed to infer user search goals rather than search results or clicked URLs. Both the
clicked URLs and the un clicked ones before the last click are considered as user implicit
feedbacks and taken into account to construct feedback sessions Therefore, feedback sessions
can reflect user information needs more efficiently. Second, we map feedback sessions to
pseudo documents to approximate goal texts in user minds. The pseudo documents can enrich
the URLs with additional textual contents including the titles and snippets. Based on these
pseudo documents, user search goals can then be discovered and depicted with some
keywords. Finally, a new criterion CAP is formulated to evaluate the performance of user
search goal inference. Experimental results on user click through logs from a commercial
search engine demonstrate the effectiveness of our proposed methods.
Bibliography:
[1] R. Baeza-Yates and B. Ribeiro-Neto, Modern Information Retrieval. ACM Press, 1999.
[2] R. Baeza-Yates, C. Hurtado, and M. Mendoza, “Query Recommendation Using Query
Logs in Search Engines,” Proc. Int’l Conf. Current Trends in Database Technology
(EDBT ’04), pp. 588-596, 2004.
[3] D. Beeferman and A. Berger, “Agglomerative Clustering of a Search Engine Query
Log,” Proc. Sixth ACM SIGKDD Int’l Conf. Knowledge Discovery and Data Mining
(SIGKDD ’00), pp. 407-416, 2000.
[4] S. Beitzel, E. Jensen, A. Chowdhury, and O. Frieder, “Varying Approaches to Topical
Web Query Classification,” Proc. 30th Ann. Int’l ACM SIGIR Conf. Research and
Development (SIGIR ’07), pp. 783-784, 2007.
[5] H. Cao, D. Jiang, J. Pei, Q. He, Z. Liao, E. Chen, and H. Li, “Context-Aware Query
Suggestion by Mining Click-Through,” Proc. 14th ACM SIGKDD Int’l Conf.
Knowledge Discovery and Data Mining (SIGKDD ’08), pp. 875-883, 2008.
Wireless Technology
Apeksha V Dave
Assistant Professor
IMCOST, Thane, India
apexa.dave12@gmail.com
ABSTRACT :
What is Wireless Technology?
Wireless is a term used to describe telecommunications in which electromagnetic
waves (rather than some form of wire) carry the signal over part or all of the
communication path. Some monitoring devices, such as intrusion alarms, employ
acoustic waves at frequencies above the range of human hearing; these are also
sometimes classified as wireless.
The first wireless transmitters went on the air in the early 20th century using
radiotelegraphy (Morse code). Later, as modulation made it possible to transmit voices
and music via wireless, the medium came to be called "radio." With the advent of
television, fax, data communication, and the effective use of a larger portion of the
spectrum, the term "wireless" has been resurrected.
Common examples of wireless equipment in use today include:
Cellular phones and pagers -- provide connectivity for portable and mobile applications,
both personal and business
Global Positioning System (GPS) -- allows drivers of cars and trucks, captains of boats
and ships, and pilots of aircraft to ascertain their location anywhere on earth
Cordless computer peripherals -- the cordless mouse is a common example; keyboards
and printers can also be linked to a computer via wireless
Cordless telephone sets -- these are limited-range devices, not to be confused with cell
phones
Home-entertainment-system control boxes -- the VCR control and the TV channel
control are the most common examples; some hi-fi sound systems and FM broadcast
receivers also use this technology
Remote garage-door openers -- one of the oldest wireless devices in common use by
consumers; usually operates at radio frequencies
Two-way radios -- this includes Amateur and Citizens Radio Service, as well as
business, marine, and military communications
Satellite television -- allows viewers in almost any location to select from hundreds of
channels
Wireless LANs or local area networks -- provide flexibility and reliability for business
computer users
Introduction
Wireless technology is the process of sending information through invisible waves in the
air. Information such as data, voice, and video are carried through the radio frequency of the
electromagnetic spectrum.
INCON - XI 2016
280
ASM’s International E-Journal on
E-ISSN – 2320-0065
Ongoing Research in Management & IT
Wireless technologies are all based on the same idea, with a few small differences.
Information is relayed wirelessly from one point to another using various types of
signals. Some communicate over large distances, while others over small distances. Here are
some of the key wireless technologies you probably use in everyday life.
Bluetooth
Bluetooth is a very popular wireless technology because it can transmit lots of
information very quickly. The only drawback of Bluetooth is that it can only communicate
over a maximum distance of about 10 meters. Bluetooth links are reliable and really fast.
Wi-Fi
WiFi stands for wireless fidelity and is used mainly for home wireless networking
because of its fast transmission speeds. It works in a similar way to Bluetooth, but over
a much larger radius.
3G and 4G
You’ve probably heard these terms on TV or in your phone shop; they work similarly
to the way your TV picks up a TV signal. It’s likely that if you have a mobile phone, it
operates on a 3G network. The ‘G’ stands for generation; as the technology improves and
becomes faster, each step is given a different name. Wireless mobile phone technology tends
to be a bit slower, because it covers a larger area and a greater number of devices.
Infrared
Infrared technology has been in use for a long time! If you have a TV remote, or any
type of remote for heating and cooling in your home, it’s likely that it transmits information
through infrared. Infrared needs only a very small amount of power to run, but it can only transmit
very basic commands to another point or device. It’s also restricted to fairly short distances.
Generations Of Wireless Technology
Zero Generation (0G – 0.5G)
Mobile radio telephone systems preceded modern cellular mobile telephony technology.
Since they were the predecessors of the first generation of cellular telephones, these systems
are sometimes referred to as 0G (zero generation) systems. Technologies used in 0G systems
included PTT (Push to Talk), MTS (Mobile Telephone System), IMTS (Improved Mobile
Telephone Service), and AMTS (Advanced Mobile Telephone System).
First Generation (1G)
Most of the devices from this generation had military origins and then
moved to civilian services. 1G is short for first-generation wireless telephone technology, or
mobile phones. These are the analogue mobile phone standards that were introduced in the
1980s and continued until being replaced by 2G digital mobile phones. Some of the 1G
standards include AMPS (Advanced Mobile Phone System).
Second Generation (2G – 2.75G)
The main difference between the two succeeding mobile telephone systems, 1G and 2G,
is that the radio signals that 1G networks use are analogue, while 2G networks are digital.
Note that both systems use digital signaling to connect the radio towers (which listen to the
handsets) to the rest of the telephone system. But the call itself is encoded to digital signals in
2G whereas 1G is only modulated to higher frequency (typically 150 MHz and up).
 GSM (Global System for Mobile Communications)
 GPRS (General Packet Radio Service)
Third Generation (3G – 3.75G)
The services associated with 3G provide the ability to transfer simultaneously both voice
data (a telephone call) and non-voice data (such as downloading information, exchanging
email, and instant messaging).
UMTS (3GSM) (Universal Mobile Telecommunications System)
3.5G – HSDPA (High-Speed Downlink Packet Access) is a mobile telephony protocol,
which provides a smooth evolutionary path for UMTS-based 3G networks allowing for higher
data transfer speeds.
Fourth Generation (4G)
According to 4G working groups, the infrastructure and the terminals will have almost
all the standards from 2G to 3G implemented. The system will also serve as an open platform
on which new innovations can be built.
Types of Wireless Technology
1. GSM
GSM networks consist of three major systems: the SS, which is known as the Switching
System; the BSS, which is also called the Base Station System; and the Operation and Support System (OSS)
for GSM networks.
The Switching System
The switching system is the part of the network in which many crucial operations are
conducted; the SS holds five databases within it, which perform different functions.
HLR- Home Location Register:
The HLR is a database that holds very important subscriber information. It is mostly
known for storing and managing subscriber information. It contains the subscriber service
profile, status of activities, location information, and permanent data of all sorts.
MSC- Mobile Services Switching Center:
The MSC is also an important part of the SS; it handles the technical end of telephony. It is built to
perform the switching functionality of the entire system. Its most important task is to control
calls to and from other telephones, which means it controls calls within the same network and calls
from other networks.
VLR- Visitor Location Register:
The VLR performs very dynamic tasks; it is a database that stores temporary data regarding
subscribers, which is needed by the Mobile Services Switching Center (MSC). The VLR is directly
connected to the MSC; when a subscriber moves to a different MSC location, the Visitor Location Register
integrates with the MSC of the current location and requests the data about the subscriber or Mobile
Station (MS) from the Home Location Register (HLR).
AUC- Authentication Center:
The AUC is a small unit that handles the security end of the system. Its major task is to
authenticate and encrypt the parameters that verify the user’s identity, and hence it enables
the confidentiality of each call made by the subscriber.
EIR – Equipment Identity Register:
The EIR is another important database that holds crucial information regarding mobile
equipment. The EIR helps restrict calls from stolen or malfunctioning mobile stations (MS) and prevents
unauthorized access. The AUC (Authentication Center) and the EIR (Equipment Identity Register) are
either stand-alone nodes or sometimes work together as combined AUC/EIR nodes for
optimum performance.
The Base Station System (BSS)
The base station system has a very important role in mobile communication. BSSs are
basically outdoor units, usually tall mast structures, and they are
responsible for connecting subscribers (MS) to the mobile network.
BTS – The Base Transceiver Station:
The subscriber, MS (Mobile Station), or mobile phone connects to the mobile network through the
BTS; it handles communication with the mobile station using radio transmission. As the name
suggests, the Base Transceiver Station is the radio equipment that receives and transmits voice and
data at the same time. A BSC controls a group of BTSs.
BSC – The Base Station Controller:
The Base Station Controller normally controls many cells; it registers subscribers, is responsible for
MS handovers, and so on. It creates the physical link between the subscriber (MS) and the BTS, and then manages
and controls its functions.
Fig. 1
The Operation and Support System (OSS)
The basic reason for developing the operation and support system is to provide customers with
cost-effective support and solutions. It helps in managing and centralizing the local and regional
operational activities required for GSM networks.
Maintaining the mobile network organization, providing an overview of the network, and supporting
maintenance activities are other important aspects of the Operation and Support System.
2. GPRS
GPRS Network Architecture
Packet Control Unit (PCU) – The PCU is the core unit that segregates GSM and
GPRS traffic. It separates the circuit-switched and packet-switched traffic from the user and
sends it to the GSM and GPRS networks respectively, as shown in the figure above.
In GPRS, the PCU has the following two paths:
1. PCU-MSC-GMSC-PSTN
2. PCU-SGSN-GGSN-Internet (packet data network)
Serving GPRS Support Node (SGSN) – It is similar to the MSC of a GSM network. SGSN
functions are outlined below.
 Data compression, which helps minimize the size of transmitted data units.
 Authentication of GPRS subscribers.
 Mobility management as the subscriber moves from one PLMN area to another PLMN, and possibly from one SGSN to another SGSN.
 Traffic statistics collection.
Gateway GPRS Support Node (GGSN) – The GGSN is the gateway to external networks
such as a PDN (packet data network) or an IP network. It is similar to the GMSC of a GSM network
and performs two main functions, including the following:
 It routes packets originating from a user to the respective external IP network.
Border Gateway (BG) – It is a kind of router that interfaces different operators' GPRS
networks. The connection between two border gateways is called a GPRS tunnel.
Charging Gateway (CG) – GPRS users have to be charged for the use of the network;
this is taken care of by the charging gateway. Charging is done based on quality of service or on the plan
the user has opted for, either prepaid or postpaid.
3. UMTS
Universal Mobile Telecommunication System
The Universal Mobile Telecommunication System (UMTS) is a third generation (3G)
mobile communications system that provides a range of broadband services to the world of
wireless and mobile communications. The UMTS delivers low-cost, mobile communications
at data rates of up to 2 Mbps. It preserves the global roaming capability of second generation
GSM/GPRS networks and provides new enhanced capabilities.
UMTS Services
The UMTS provides support for both voice and data services. The following data rates
are targets for UMTS:
 144 kbps—Satellite and rural outdoor
 384 kbps—Urban outdoor
 2048 kbps—Indoor and low-range outdoor
Data services provide different quality-of-service (QoS) parameters for data transfer.
UMTS network services accommodate QoS classes for four types of traffic:
 Conversational class—Voice, video telephony, video gaming
 Streaming class—Multimedia, video on demand, webcast
 Interactive class—Web browsing, network gaming, database access
 Background class—E-mail, short message service (SMS), file downloading
The UMTS supports the following service categories and applications:
 Internet access—Messaging, video/music download, voice/video over IP, mobile commerce (e.g., banking, trading), travel and information services
 Intranet/extranet access—Enterprise applications such as e-mail/messaging, travel assistance, mobile sales, technical services, corporate database access, fleet/warehouse management, conferencing and video telephony
 Customized information/entertainment—Information (photo/video/music download), travel assistance, distance education, mobile messaging, gaming, voice portal services
 Multimedia messaging—SMS extensions for images, video, and music; unified messaging; document transfer
 Location-based services—Yellow pages, mobile commerce, navigational services, trading
UMTS Architecture
The public land mobile network (PLMN) described in UMTS Rel. '99 incorporates three
major categories of network elements:
 GSM phase 1/2 core network elements—Mobile services switching center (MSC), visitor location register (VLR), home location register (HLR), authentication center (AUC), and equipment identity register (EIR)
 GPRS network elements—Serving GPRS support node (SGSN) and gateway GPRS support node (GGSN)
 UMTS-specific network elements—User equipment (UE) and UMTS terrestrial radio access network (UTRAN) elements
The UMTS core network is based on the GSM/GPRS network topology. It provides the
switching, routing, transport, and database functions for user traffic. The core network
contains circuit-switched elements such as the MSC, VLR, and gateway MSC (GMSC).
General Packet Radio System
The General Packet Radio System (GPRS) facilitates the transition from phase1/2 GSM
networks to 3G UMTS networks. The GPRS supplements GSM networks by enabling packet
switching and allowing direct access to external packet data networks (PDNs). Data
transmission rates above the 64 kbps limit of integrated services digital network (ISDN) are a
requirement for the enhanced services supported by UMTS networks. The GPRS optimizes the
core network for the transition to higher data rates. Therefore, the GPRS is a prerequisite for
the introduction of the UMTS.
The UMTS architecture
4. WAP
Wireless Application Protocol (WAP) is a suite of communication protocols for the
wireless and mobile devices designed to access the internet independent of manufacturer,
vendor, and technology.
The WAP was developed by the WAP Forum, a consortium of device manufacturers,
service providers, content providers, and application developers. WAP bridges the gap
between the mobile world and the Internet as well as corporate intranets and offers the ability
to deliver an unlimited range of mobile value-added services to subscribers—independent of
their network, bearer, and terminal.
The WAP Architecture
There are three major parts of a WAP-enabled system:
1. WAP Gateway
2. HTTP Web Server
3. WAP Device
WAP Gateway
The WAP gateway acts as a mediator between the cellular device and an HTTP or HTTPS web
server. The WAP gateway routes requests from the client (cellular phone) to an HTTP (or web)
server. The WAP gateway can be located either in a telecom network or in a computer
network (at an ISP).
The HTTP Web Server
It receives the request from the WAP gateway, processes it, and finally sends the
output back to the WAP gateway, which in turn sends this information to the WAP device over
its wireless network.
The WAP Device
The WAP device (a cellular phone) is part of the wireless network. The WAP device sends the WAP
request to the WAP gateway, which translates WAP requests into WWW requests, so the
WAP client is able to submit requests to the web server. After receiving the response from
the HTTP web server, the WAP gateway translates the web response into a WAP response, or a
format understood by the WAP client, and sends it to the WAP device.
WAP Protocol Stack
For those of you who want to understand the deep-down, nitty-gritty of WAP, here's
a quick summary. WAP relies on a stacked architecture, as do Unix, Windows NT, and
most other newer technologies. Because wireless devices have limited memory, some layers of
the stack have been offloaded to the WAP gateway (which is part of the service provider's
system). The layers, from top to bottom, are:
 the application layer, which relies on the Wireless Application Environment (WAE)
 the session layer, which relies on the Wireless Session Protocol (WSP)
 the transaction layer, which relies on the Wireless Transaction Protocol (WTP)
 the security layer, which relies on the Wireless Transport Layer Security (WTLS)
 the transport layer
 and the network layer.
WAP Benefits:
 Operators: New applications can be introduced quickly and easily without the need for additional infrastructure or modifications to the phone.
 Content Providers: Applications will be written in Wireless Markup Language (WML), which is a subset of Extensible Markup Language (XML).
 End Users: End users of WAP will benefit from easy, secure access to relevant Internet information and services such as unified messaging, banking, and entertainment through their mobile devices. Intranet information such as corporate databases can also be accessed via WAP technology.
What is Bluetooth?
Bluetooth is a standardized protocol for sending and receiving data via a 2.4GHz
wireless link. It’s a secure protocol, and it’s perfect for short-range, low-power, low-cost,
wireless transmissions between electronic devices.
These days it feels like everything is wireless, and Bluetooth is a big part of that wireless
revolution. You’ll find Bluetooth embedded into a great variety of consumer products, like
headsets, video game controllers, or (of course) livestock trackers.
How Bluetooth Works
Masters, Slaves, and Piconets
Bluetooth networks (commonly referred to as piconets) use a master/slave model to
control when and where devices can send data. In this model, a single master device can be
connected to up to seven different slave devices. Any slave device in the piconet can only be
connected to a single master.
The master coordinates communication throughout the piconet. It can send data to any of
its slaves and request data from them as well. Slaves are only allowed to transmit to and
receive from their master. They can’t talk to other slaves in the piconet.
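A minimal sketch of the master/slave rules just described follows; the class and device names are illustrative assumptions, not a real Bluetooth stack.

# Toy model of a piconet: at most seven active slaves, and all traffic goes
# through the master (slaves never talk to each other directly).
class Piconet:
    MAX_SLAVES = 7

    def __init__(self, master):
        self.master = master
        self.slaves = []

    def connect(self, slave):
        if len(self.slaves) >= self.MAX_SLAVES:
            raise RuntimeError("a master supports at most seven active slaves")
        self.slaves.append(slave)

    def send(self, src, dst, payload):
        # Enforce the rule that the master is always one end of a transfer.
        if self.master not in (src, dst):
            raise RuntimeError("slave-to-slave traffic must be relayed by the master")
        print(f"{src} -> {dst}: {payload}")

net = Piconet("headset-master")
net.connect("phone")
net.send("headset-master", "phone", "audio frame")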
WiFi
WiFi stands for Wireless Fidelity. WiFi is based on the IEEE 802.11 family of
standards and is primarily a local area networking (LAN) technology designed to provide in-building broadband coverage.
Current WiFi systems support a peak physical-layer data rate of 54 Mbps and typically
provide indoor coverage over a distance of 100 feet.
WiFi has become the de facto standard for last mile broadband connectivity in homes,
offices, and public hotspot locations. Systems can typically provide a coverage range of only
about 1,000 feet from the access point.
WiFi offers remarkably higher peak data rates than 3G systems do, primarily since it
operates over a larger 20 MHz bandwidth, but WiFi systems are not designed to support
high-speed mobility.
One significant advantage of WiFi over WiMAX and 3G is its wide availability of
terminal devices. A vast majority of laptops shipped today have a built-in WiFi interface. WiFi
interfaces are now also being built into a variety of devices, including personal data assistants
(PDAs), cordless phones, cellular phones, cameras, and media players.
WiMAX
WiMAX is one of the hottest broadband wireless technologies around today. WiMAX
systems are expected to deliver broadband access services to residential and enterprise
customers in an economical way.
More strictly, WiMAX is an industry trade organization formed by leading
communications, component, and equipment companies to promote and certify compatibility
and interoperability of broadband wireless access equipment that conforms to the IEEE 802.16
and ETSI HIPERMAN standards.
WiMAX would operate similar to WiFi, but at higher speeds over greater distances and
for a greater number of users. WiMAX has the ability to provide service even in areas that are
difficult for wired infrastructure to reach and the ability to overcome the physical limitations
of traditional wired infrastructure.
WiMAX was formed in April 2001, in anticipation of the publication of the original 10–66 GHz IEEE 802.16 specifications. WiMAX is to 802.16 as the WiFi Alliance is to 802.11.
WiMAX is:
 Acronym for Worldwide Interoperability for Microwave Access.
 Based on Wireless MAN technology.
 A wireless technology optimized for the delivery of IP-centric services over a wide area.
 A certification that denotes interoperability of equipment built to the IEEE 802.16 or compatible standard. The IEEE 802.16 Working Group develops standards that address two types of usage models:
o A fixed usage model (IEEE 802.16-2004).
o A portable usage model (IEEE 802.16e).
Keywords: 1G, 2G, 3G, 3.5G, 4G, GSM, GPRS, UMTS, Bluetooth, Wi-Fi, WiMAX
Conclusion:
Wireless technology is certainly able to improve our life quality. Especially since
wireless communication systems are becoming cheaper, easier to implement and smaller every
day, more and more devices can profit from it. Wireless solutions can be time-saving, easier
to use, and more mobile. Also, wireless condition monitoring enables applications
that are not even realizable through a wired network.
Wireless technologies are able to improve our quality of life. But we should be aware of
the possible risks. It will become more difficult to guarantee people their privacy in the
wireless future. People have to be aware of, and able to control, the amount of data about
themselves that they make available online. And when they are aware of what they make public,
they should also be aware of the possible impact that data might have on their lives.
Securing wireless connections is much more difficult than securing wired connections; effort
should be made to improve this.
The influence of wireless technologies will become greater in the future, but I do not
believe that wireless technologies will replace all wired technologies. Power transfer
efficiency is much smaller for wireless technologies than for wired technologies. In a world
where we want to reduce our power consumption, low power efficiency is intolerable for
applications with a large power consumption.
REFERENCES:
[1] https://sgar91.files.wordpress.com/mobile_communications_schiller_2e.pdf
[2] www.tutorialspoint.com/gsm/gsm_architecture.htm
[3] https://en.wikipedia.org/wiki/Wireless_network
[4] http://www.radio-electronics.com/info/cellulartelecomms/gprs/gprs-networkarchitecture.php
[5] http://www.umtsworld.com/technology/overview.htm
To Study Project Cost Estimation, Scheduling and Interleaved Management
Activities
Dr. Manimala Puri
Director, Abacus Institute of Computer Applications, Pune, India
manimalap@yahoo.com

Mrs. Shubhangi Mahesh Potdar
Assistant Professor, IBMRD, Ahmednagar, India
Email: shubhangipotdar@rediffmail.com

Mr. Mahesh P. Potdar
Associate Professor, IMSCD & R, Ahmednagar, India
maheshpotdar@rediffmail.com
ABSTRACT :
Software cost estimation is an important activity in software project management.
This activity takes place in the planning stage of project management. Cost estimation
means prediction of the development cost; it is an important issue in any software
development because all project management activities, such as planning, scheduling,
staffing, and controlling, depend on the estimated cost. In project management, a
schedule is a listing of a project's milestones, activities, and deliverables, usually with
intended start and finish dates. Those items are often estimated in terms of resource
allocation, budget, and duration, linked by dependencies and scheduled events. A
schedule is commonly used in the project planning and project portfolio management parts
of project management. This paper aims to discuss how project cost estimation is
directly related to scheduling and interleaved management activities. This study will
help software development communities increase the success of software development
projects by focusing more on the accuracy of software cost estimation.
Keywords: Cost Estimation, Software Project Management, Realistic Cost, Software
Development Communities, Project scheduling.
I. Introduction
Cost estimation is an important issue in any software development project. The accurate
prediction of software development costs is a critical issue, because the success or failure of
the whole project depends on the accuracy of the estimated cost. We can say that software cost
estimation is integrated with the software development process. All project management activities,
such as project planning and project scheduling, depend upon the estimated cost for that
software project [2].
The possibility of software projects failing due to various reasons—including costs,
scheduling and quality issues, and/or achievement of objectives—poses a tangible threat to
companies wishing to outsource their software development needs. These failures, which often
cause huge losses in time and money, can prove to be detrimental to a company's growth and
development. Being able to identify the causes of failure and categorizing them can lead to
lower failure rates in future endeavors [1]
In this paper, a literature survey is carried out in order to gather related information and to
study how cost estimation is directly related to project scheduling and other project
management activities.
II. What is project management?
In project management, a schedule is a listing of a project's milestones, activities, and
deliverables, usually with intended start and finish dates. Those items are often estimated in
terms of resource allocation, budget and duration, linked by dependencies and scheduled
events. A schedule is commonly used in project planning and project portfolio management
parts of project management. Elements on a schedule may be closely related to the work
breakdown structure (WBS) terminal elements, the Statement of work, or a Contract Data
Requirements List.[4]
Before a project schedule can be created, the schedule maker should have a work
breakdown structure (WBS), an effort estimate for each task, and a resource list with
availability for each resource. If these components for the schedule are not available, they can
be created with a consensus-driven estimation method like Wideband Delphi. The reason for
this is that a schedule itself is an estimate: each date in the schedule is estimated, and if those
dates do not have the buy-in of the people who are going to do the work, the schedule will be
inaccurate.
In order for a project schedule to be healthy, the following criteria must be met:
 The schedule must be constantly (weekly works best) updated.
 The EAC (Estimation at Completion) value must be equal to the baseline value.
 The remaining effort must be appropriately distributed among team members (taking vacations into consideration). [4]
To build complex software systems, many engineering tasks need to occur in parallel
with one another to complete the project on time. The output from one task often determines
when another may begin. It is difficult to ensure that a team is working on the most
appropriate tasks without building a detailed schedule and sticking to it. [5]
Root Causes for Late Software:
Poor project management is the main root cause for delay in software delivery.
Why do good projects go bad? CIO.com surveyed dozens of IT executives and project
managers and came up with a list of 12 Common Project Management Mistakes
 Not Assigning the Right Person to Manage the Project
 Failing to Get Everyone on the Team Behind the Project
 Not Getting Executive Buy-in
 Putting Too Many Projects Into Production at Once
 Lack of (Regular) Communication/Meetings
 Not Being Specific Enough with the Scope/Allowing the Scope to Frequently Change
 Providing Aggressive/Overly Optimistic Timelines
 Not Being Flexible
 Not Having a System in Place for Approving and Tracking Changes
 Micromanaging Projects.
 Expecting Software to Solve All Your Project Management Issues.
 Not Having a Metric for Defining Success. [6]
Project Scheduling Perspectives:
 One view is that the end-date for the software release is set externally and that the software organization is constrained to distribute effort in the prescribed time frame.
 Another view is that the rough chronological bounds have been discussed by the developers and customers, but the end-date is best set by the developer after carefully considering how to best use the resources needed to meet the customer's needs. [5]
Software Project Scheduling Principles:
 Compartmentalization - the product and process must be decomposed into a manageable number of activities and tasks
 Interdependency - tasks that can be completed in parallel must be separated from those that must be completed serially
 Time allocation - every task has start and completion dates that take the task interdependencies into account
 Effort validation - the project manager must ensure that on any given day there are enough staff members assigned to complete the tasks within the time estimated in the project plan
 Defined responsibilities - every scheduled task needs to be assigned to a specific team member
 Defined outcomes - every task in the schedule needs to have a defined outcome (usually a work product or deliverable)
 Defined milestones - a milestone is accomplished when one or more work products from an engineering task have passed quality review [5]
Scheduling:
 For project scheduling there is one graphical representation that captures the task interdependencies and can help define a rough schedule for a particular project, called a task network.
 There are various scheduling tools available to schedule any non-trivial project.
 There are various quantitative techniques, such as PERT and CPM, that allow software planners to identify the chain of dependent tasks in the project work breakdown structure that determines the project duration (see the sketch after this list).
 Software planners use Gantt charts to determine what tasks will need to be conducted at a given point in time (based on the effort and time estimate for each task). A Gantt chart provides a graphical illustration of a schedule that helps to plan, coordinate, and track specific tasks in a project.
 The best indicator of progress is the completion and successful review of a defined software work product.
 Time-boxing is the practice of deciding a priori the fixed amount of time that can be spent on each task. When the task's time limit is exceeded, development moves on to the next task.
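As referenced in the list above, a tiny sketch of the CPM idea follows. The task names and durations are hypothetical; a simple forward pass over the dependency network yields each task's earliest finish time and therefore the project duration.

# Forward pass over a small task network (illustrative data only).
tasks = {                      # task -> (duration in days, prerequisites)
    "design":  (5, []),
    "code":    (10, ["design"]),
    "test":    (4, ["code"]),
    "docs":    (3, ["design"]),
    "release": (1, ["test", "docs"]),
}

earliest_finish = {}

def finish(task):
    """Earliest finish = duration + latest earliest-finish among prerequisites."""
    if task not in earliest_finish:
        duration, deps = tasks[task]
        earliest_finish[task] = duration + max((finish(d) for d in deps), default=0)
    return earliest_finish[task]

project_duration = max(finish(t) for t in tasks)
print(project_duration)   # 20 days, determined by design -> code -> test -> release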
Tracking Project Schedules:
 Periodic status meetings
 Evaluation of the results of all work product reviews
 Comparing actual milestone completion dates to scheduled dates
 Comparing actual project task start-dates to scheduled start-dates
 Informal meetings with practitioners to have them subjectively assess progress to date and future problems
 Use of earned value analysis to assess progress quantitatively [5] (see the short sketch below)
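For the earned value analysis mentioned in the last item, a brief sketch with hypothetical figures shows the usual quantities (EV, CPI, SPI) and one common way of deriving an Estimate at Completion:

# Earned value quantities for a hypothetical project snapshot.
BAC = 100_000            # budget at completion (total planned cost)
percent_complete = 0.40
AC = 50_000              # actual cost spent so far
PV = 45_000              # planned value of the work scheduled to date

EV = percent_complete * BAC   # earned value of the work actually completed
CPI = EV / AC                 # cost performance index
SPI = EV / PV                 # schedule performance index
EAC = BAC / CPI               # one common estimate at completion

print(f"EV={EV:.0f} CPI={CPI:.2f} SPI={SPI:.2f} EAC={EAC:.0f}")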
III. Project Planning:
The Plan, Do, Check, Act cycle is fundamental to achieving project quality. [7] The
overall project plan should include a plan for how the project manager and team will maintain
quality standards throughout the project cycle. Time spent planning is time well spent. All
projects must have a plan with sufficient detail so that everyone involved knows where the
project is going. [8]
A good plan provides the following benefits:
 Clearly documented project milestones and deliverables
 A valid and realistic time-scale
 Allows accurate cost estimates to be produced
 Details resource requirements
 Acts as an early warning system, providing visibility of task slippage
 Keeps the project team focused and aware of project progress
IV. Conclusion:
This study gives an idea that, in addition to cost and scope, time is an essential
component of project management. Time has a direct relationship with the cost of software project
implementation. Poor project schedule estimation sometimes leads to project failure. So we
can say that project estimation, scheduling, and other management activities are interrelated.
Using this study, it should be possible for software development communities to
focus more on project estimation. In the next steps, the identified factors will be evaluated
through hypothesis testing and a questionnaire survey.
V. REFERENCES:
[1] (n.d.). Retrieved from http://www.outsource2india.com: http://www.outsource2india.com/software/SoftwareProjectFailure.asp
[2] Schwalbe, K. (2004). IT Project Management. Canada: Thomson Course Technology.
[3] Mohd Hairul Nizam Nasir and Shamsul Sahibuddin. (n.d.). Critical success factors for software projects: A Comparative Study. Malaysia.
[4] Schedule (project management). (n.d.). Retrieved from https://en.wikipedia.org/wiki/Schedule_(project_management)
[5] SOFTWARE PROJECT SCHEDULING. (n.d.). Retrieved from http://pesona.mmu.edu.my/~wruslan/SE3/Readings/GB1/pdf/ch24-GB1-SoftwareProject-Scheduling.pdf
[6] Schiff, J. L. (2012, Sep 26). 12 Common Project Management Mistakes--and How to Avoid Them. Retrieved from http://www.cio.com: http://www.cio.com/article/2391872/project-management/12-common-projectmanagement-mistakes--and-how-to-avoid-them.htm
[7] Young, M. L. (n.d.). Six Success Factors for Managing Project Quality.
[8] Johnson, J. (2000). The Chaos Report. West Yarmouth, MA: The Standish Group.
[9] Guru, P. (2008). What is Project Success: A Literature Review. International Journal of Business and Management 3(9), 71–79.
[10] Iman, A. S. (2008). Project Management Practices: The Criteria for Success of Failure.
[11] Schwalbe, K. (2004). IT Project Management. Canada: Thomson Course Technology.
Privacy Preserving Approaches on Medical Data
Ms. Monali Reosekar
Director, Abacus Institute of Computer Applications, Pune, Maharashtra, India
manimalap@yahoo.com

Asha Kiran Grandhi
Asst. Professor, JSPM’s Rajarshi Shahu College of Engineering, Pune, Maharashtra, India
ashakiran45@rediffmail.com

Srinivasa Suresh S
Asst. Professor, International Institute of Information Technology, Pune, Maharashtra, India
sureshs@isquareit.edu.in
ABSTRACT:
Privacy preserving is a growing research topic. Privacy preservation should be
highlighted wherever sensitive information exists. Very often, medical databases contain
sensitive information. Globally, medical data usage laws, terms, and conditions are in
force. However, due to the increased volume and speed of data, a single data store
is not sufficient. Hence, third-party storage has come into existence. The current
paper highlights privacy preservation for medical data, basic terminology in the area of
privacy preservation, and existing privacy preserving techniques.
Section I: Introduction
Privacy preserving data mining is the process of maintaining the privacy of the data
owners while mining the data. Privacy is an important concern for many applications related
to medical, banking, and other government organizations. The volume of data in medical, banking,
and other areas is increasing tremendously day by day. Hence, organizations are
adopting the latest technologies, such as cloud-based storage and distributed databases, for maintaining
such huge data. Hence, the scope for mining is increasing; at the same time, the probability of a
privacy leak is also increasing. There is a lot of research going on in the area of distributed data
mining, and researchers face many challenges in ensuring that the privacy of individuals is
protected while performing data mining. Distributed data mining can be classified into two
categories (N. Zhang, S. Wang, W. Zhao). The first is server-to-server, where data is
distributed across several servers. The second is client-to-server where data resides on each
client while a server or a data miner performs mining tasks on the aggregate data from the
clients (XunYi, YanchunZhang). A typical example in distributed data mining where privacy
can be of great importance is in the field of medical research. Consider the case where a
number of different hospitals wish to jointly mine their patient data, for the purpose of medical
research (XunYi, YanchunZhang). Privacy policy and law do not allow these hospitals even to
pool their data or reveal it to each other, due to the confidentiality of patient
records. Although hospitals are allowed to release data as long as the identifiers, such as name,
address, etc., are removed, it is not safe enough because the re-identification attack can link
different public databases to relocate the original subjects (L. Sweeney). Consequently,
providing privacy protection may be critical to the success of data collection, and to the
success of the entire task of distributed data mining. Hence, research on privacy preservation for
distributed data is gaining significance.
The following definitions distinguish various key terms related to
privacy (Michael Minelli, Michaele Chambers, Ambiga Dhiraj):
a) Privacy
Privacy is about how you use personal data. It is the conscious choice about how that
information is used. In short, it covers the collection of data, the use of data, and the disclosure of the
data – to whom are you giving the data?
b) Sensitive information
Any information whose unauthorized disclosure could be embarrassing or detrimental to
the individual: for example, health records, religious affiliation, racial/ethnic origin,
gender, sexual life, criminal record, political opinion, etc.
c) Anonymity
Anonymity or anonymous is used to describe situations where the acting person's name
is unknown. The important idea here is that the person should be non-identifiable,
unreachable, or untrackable. Anonymity is seen as a technique, or a way of realizing,
certain other values, such as privacy or liberty.
d) Confidentiality
Confidentiality is the protection of personal information. Confidentiality means keeping
a client's information between you and the client, and not telling others, including coworkers, friends, family, etc.
Data sharing is another upcoming buzzword in the privacy community. Data sharing
means sharing of data between two or more organizations under stated terms and conditions.
Nowadays, data sharing is one of the crucial platforms for researchers. For example,
iSHARE is a cross-country demographic data sharing platform. Various countries that are
part of the INDEPTH network share their minimal data sets on a single platform called iSHARE.
India is also a contributor to INDEPTH's iSHARE project. Sharing involves protocols and
common standards. While sharing, sensitive information is hidden, and anonymity is
maintained at the desired level. Privacy preservation is required in various scenarios; Section
II describes the scenarios where it is needed.
The rest of the paper is organized as follows: Section II describes scenarios where
privacy leaks can occur in medical data, section III describes current privacy preserving
approaches, section IV describes Fuzzy C-means clustering algorithm, followed by conclusion
& future work.
Section II: Scenarios Where Privacy Leaks can Occur in Medical Data
Medical data is very sensitive because it is related to people's sentiments and status.
Privacy leaks should be controlled using sophisticated tools and techniques; otherwise people
lose confidence. Hence, advanced methods are highly desired. First, one should understand
when privacy leaks happen. We identify the following situations where privacy
leaks generally occur:
(a) While sharing the data between two organizations or persons
INCON - XI 2016
296
ASM’s International E-Journal on
E-ISSN – 2320-0065
Ongoing Research in Management & IT
(b) While storing the data on external databases (e.g Cloud databases)
(c) If adequate data access controls are absent
(d) If unauthorized persons access the data
(e) While transferring the data from one system to another system
(f) If proper anonymity techniques are not followed.
(g) Due to data theft
In order to overcome the above-stated problems (a) to (g), efficient privacy preservation
methods, data access controls, and authorization processes should be in place.
Section III: Current Privacy Preserving Approaches
Randomisation Method
Randomisation is the process of making something random; in various contexts this
involves, for example, generating a random permutation of a sequence (such as when shuffling
cards) and selecting a random sample of a population (important in statistical sampling). In the
statistical theory of design of experiments, randomization involves randomly allocating the
experimental units across the treatment groups. For example, if an experiment compares a new
drug against a standard drug, then the patients should be allocated to either the new drug or to
the standard drug control using randomization. Randomization reduces confounding by
equalizing so-called factors (independent variables) that have not been accounted for in the experimental design.
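A small sketch of random allocation as described above (hypothetical patient identifiers): subjects are shuffled into a random permutation and then split between the new-drug and standard-drug groups.

# Random allocation of subjects to treatment groups via a random permutation.
import random

subjects = [f"patient-{i}" for i in range(1, 9)]
random.seed(42)           # fixed seed only so the example is repeatable
random.shuffle(subjects)  # generate a random permutation of the subjects

groups = {"new_drug": subjects[:4], "standard_drug": subjects[4:]}
print(groups)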
K-Anonymity Model
K-anonymity is a property possessed by certain anonymized data. The concept of k-anonymity was first formulated by Latanya Sweeney in a paper published in 2002 as an
attempt to solve the problem: "Given person-specific field-structured data, produce a release of
the data with scientific guarantees that the individuals who are the subjects of the data cannot
be re-identified while the data remain practically useful."[1][2] A release of data is said to have
the k-anonymity property if the information for each person contained in the release cannot be
distinguished from at least k-1 individuals whose information also appears in the release. The
following table is a non-anonymized database consisting of the patient records of a
fictitious hospital in Cochin (patent 7,269,578, United States Patents).
Name        Age   Gender   State of domicile   Religion    Disease
Ramsha      29    Female   Tamil Nadu          Hindu       Cancer
Yadu        24    Female   Kerala              Hindu       Viral infection
Salima      28    Female   Tamil Nadu          Muslim      TB
Sunny       27    Male     Karnataka           Parsi       No illness
Joan        24    Female   Kerala              Christian   Heart-related
Bahuksana   23    Male     Karnataka           Buddhist    TB
Rambha      19    Male     Kerala              Hindu       Cancer
Kishor      29    Male     Karnataka           Hindu       Heart-related
Johnson     17    Male     Kerala              Christian   Heart-related
John        19    Male     Kerala              Christian   Viral infection
There are 6 attributes and 10 records in this data. There are two common methods for
achieving k-anonymity for some value of k.
1. Suppression: In this method, certain values of the attributes are replaced by an asterisk '*'. All or some values of a column may be replaced by '*'. In the anonymized table below, all the values in the 'Name' attribute and all the values in the 'Religion' attribute have been replaced by a '*'.
2. Generalization: In this method, individual values of attributes are replaced with a broader category. For example, the value '19' of the attribute 'Age' may be replaced by 'Age ≤ 20', the value '23' by '20 < Age ≤ 30', etc.
The following table is the anonymized table:

Name   Age             Gender   State of domicile   Religion   Disease
*      20 < Age ≤ 30   Female   Tamil Nadu          *          Cancer
*      20 < Age ≤ 30   Female   Kerala              *          Viral infection
*      20 < Age ≤ 30   Female   Tamil Nadu          *          TB
*      20 < Age ≤ 30   Male     Karnataka           *          No illness
*      20 < Age ≤ 30   Female   Kerala              *          Heart-related
*      20 < Age ≤ 30   Male     Karnataka           *          TB
*      Age ≤ 20        Male     Kerala              *          Cancer
*      20 < Age ≤ 30   Male     Karnataka           *          Heart-related
*      Age ≤ 20        Male     Kerala              *          Heart-related
*      Age ≤ 20        Male     Kerala              *          Viral infection
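A minimal sketch (illustrative records drawn from the table above, not a complete anonymization tool) of the two operations, suppression and generalization:

# Suppress identifying attributes and generalize Age into the ranges used above.
records = [
    {"Name": "Ramsha", "Age": 29, "Religion": "Hindu", "Disease": "Cancer"},
    {"Name": "John",   "Age": 19, "Religion": "Christian", "Disease": "Viral infection"},
]

def generalize_age(age):
    # Replace an exact age with the broader category used in the table above.
    return "Age <= 20" if age <= 20 else "20 < Age <= 30"

def anonymize(record):
    return {
        "Name": "*",                           # suppression
        "Age": generalize_age(record["Age"]),  # generalization
        "Religion": "*",                       # suppression
        "Disease": record["Disease"],          # retained attribute
    }

print([anonymize(r) for r in records])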
Sampling
Sampling is a method of studying a few selected items instead of the entire large
number of units. The small selection is called a sample. The large number of items or units with a
particular characteristic is called the population. Example: we check a sample of rice to see
whether the rice is well boiled or not, or we check a small sample of a solution to decide how concentrated
the solution is. Thus, from the sample we infer about the population. Some of the
types of sampling are: (1) simple random sampling, mostly used when the population
is homogeneous; (2) stratified sampling, where strata help us classify a heterogeneous
population and simple random samples are taken from each class; (3)
sequential sampling, done by selecting the samples sequentially at regular intervals. The
purpose of all the sampling techniques is to give every item an equal chance of being selected,
without bias.
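The following sketch (synthetic records) illustrates simple random sampling and stratified sampling as described above, using only the Python standard library:

# Simple random vs. stratified sampling over a synthetic population.
import random
from collections import defaultdict

random.seed(0)
population = [{"id": i, "gender": random.choice(["F", "M"])} for i in range(100)]

# Simple random sampling: every record has an equal chance of selection.
simple_sample = random.sample(population, 10)

# Stratified sampling: split the population into strata, then sample each stratum.
strata = defaultdict(list)
for record in population:
    strata[record["gender"]].append(record)
stratified_sample = [r for group in strata.values() for r in random.sample(group, 5)]

print(len(simple_sample), len(stratified_sample))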
Data Swapping and Perturbation
Data perturbation is a form of privacy-preserving data mining for electronic health
records (EHR). There are two main types of data perturbation appropriate for EHR data
protection. The first type is known as the probability distribution approach and the second type
is called the value distortion approach. Data perturbation is considered a relatively easy and
effective technique for protecting sensitive electronic data from unauthorized use.
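As a small illustration of the value distortion approach (synthetic values, not EHR data), the sketch below adds zero-mean Gaussian noise to numeric attributes so that aggregates remain close to the originals while exact individual values are hidden:

# Value distortion: perturb numeric attributes with zero-mean Gaussian noise.
import random

random.seed(1)

def perturb(values, scale=2.0):
    """Add zero-mean Gaussian noise to each value."""
    return [v + random.gauss(0.0, scale) for v in values]

ages = [29, 24, 28, 27, 24, 23]
noisy_ages = perturb(ages)
print([round(a, 1) for a in noisy_ages])
# Means stay close even though individual values change.
print(sum(ages) / len(ages), round(sum(noisy_ages) / len(noisy_ages), 1))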
Cryptography
Cryptography is a method of storing and transmitting data in a particular form so that
only those for whom it is intended can read and process it.
Section IV: Clustering in Privacy Preserving
Clustering is the process of dividing the data into groups of related data. There are
various clustering algorithms proposed in the literature. Formerly, the k-means algorithm was
employed for clustering distributed multiple databases, which has various disadvantages
such as high computation cost and low clustering precision. One of the standard algorithms
for clustering data is the Fuzzy C-means clustering algorithm. It has many applications and is
the base for many clustering algorithms. The details of Fuzzy C-means are as follows:
Fuzzy C-Means Clustering Algorithm
Based on a fuzzy partition, FCM clustering is an efficient means of clustering. It
optimizes the clustering through cyclic iteration of the partition matrix and finishes the cycle when
the objective function attains a suitable threshold (Pavel Berkhin).
The step-by-step procedure of the fuzzy c-means clustering algorithm is:
1. Randomly select the cluster centres.
2. Initialize the membership matrix $U^{(0)} = [u_{ij}]$, where $u_{ij}$ is calculated using
\[ u_{ij} = \frac{1}{\sum_{k=1}^{c} \left( \dfrac{\lVert x_i - c_j \rVert}{\lVert x_i - c_k \rVert} \right)^{\frac{2}{m-1}}} \qquad \text{...(3)} \]
3. At step $k$, calculate the centre vectors $C^{(k)} = [c_j]$ with $U^{(k)}$:
\[ c_j = \frac{\sum_{i=1}^{N} u_{ij}^{m}\, x_i}{\sum_{i=1}^{N} u_{ij}^{m}} \qquad \text{...(4)} \]
4. Update $U^{(k)}$ to $U^{(k+1)}$ by recomputing the memberships:
\[ u_{ij} = \frac{1}{\sum_{k=1}^{c} \left( \dfrac{\lVert x_i - c_j \rVert}{\lVert x_i - c_k \rVert} \right)^{\frac{2}{m-1}}} \qquad \text{...(5)} \]
5. If $\lVert U^{(k+1)} - U^{(k)} \rVert < \varepsilon$, then stop; otherwise return to step 3.
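A compact sketch of the loop defined by equations (3)–(5) follows; it is an illustrative implementation on synthetic points, not the authors' code.

# Fuzzy c-means: alternate centre updates (4) and membership updates (3)/(5)
# until the membership matrix changes by less than epsilon.
import numpy as np

def fuzzy_c_means(points, c=2, m=2.0, eps=1e-5, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n = len(points)
    u = rng.random((n, c))
    u /= u.sum(axis=1, keepdims=True)              # each row of U sums to 1

    for _ in range(max_iter):
        um = u ** m
        centres = (um.T @ points) / um.sum(axis=0)[:, None]        # equation (4)
        dist = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
        dist = np.fmax(dist, 1e-12)                # avoid division by zero
        inv = dist ** (-2.0 / (m - 1.0))
        new_u = inv / inv.sum(axis=1, keepdims=True)               # equations (3)/(5)
        if np.linalg.norm(new_u - u) < eps:        # test ||U(k+1) - U(k)|| < eps
            u = new_u
            break
        u = new_u
    return centres, u

# Tiny demonstration on two well-separated synthetic groups of 2-D points.
pts = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]])
centres, memberships = fuzzy_c_means(pts)
print(np.round(centres, 2))
print(np.round(memberships, 2))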
Conclusion
The current paper describes the techniques and basic subject matters involved in privacy
preservation. It explains the privacy leak scenarios and the meaning of the key terms. This
helps budding researchers to understand the data privacy essentials. Our future research
involves implementing privacy preservation using clustering techniques.
Acknowledgments
My sincere thanks to Savitribai Phule Pune University and Rajarshi Shahu College of
Engineering (RSCOE), Department of Management, for giving me an opportunity for pursuing
research in this area.
References
 Aggarwal, Charu, Yu and Philip, "Privacy-Preserving Data Mining: Models and Algorithms", Advances in Database Systems, Kluwer Academic Publishers, Vol.34, 2008
 Golam Kaosar and Xun Yi, "Semi-Trusted Mixer Based Privacy Preserving Distributed Data Mining for Resource Constrained Devices", International Journal of Computer Science and Information Security, Vol.8, No.1, pp.44-51, April 2010
 Michael Minelli, Michaele Chambers, Ambiga Dhiraj, "Big Data Big Analytics", Wiley Series, 2014, ISBN: 978-81265-4469-1
 Han and Kamber, "Data Mining: Concepts and Techniques", Morgan Kaufman publishers, 2001
 Lovleen Kumar Grover and Rajni Mehra, "The Lure of Statistics in Data Mining", Journal of Statistics Education, Vol.16, No.1, 2008
 Maria Perez, Alberto Sanchez, Victor Robles, Pilar Herrero and Jose Pena, "Design and Implementation of a Data Mining Grid-aware Architecture", Vol.23, No.1, 2007
 Pavel Berkhin, "Survey of Clustering Data Mining Techniques", Technical Report, 2002
 Ramasubramanian, Iyakutti and Thangavelu, "Enhanced data mining analysis in higher educational system using rough set theory", African Journal of Mathematics and Computer Science Research, Vol.2, No.9, pp.184-188, 2009
 Ramesh Chandra Chirala, "A Service-Oriented Architecture-Driven Community of Interest Model", Technical Report, Arizona State University, May 2004
 "Systems and Methods for de-identifying entries in a data source", United States Patent and Trademark Office. Retrieved 19 January 2014
 S. Krishnaswamy, S.W. Loke and A. Zaslavsky, "Cost Models for Distributed Data Mining," in Proceedings of the 12th International Conference on Software Engineering & Knowledge Engineering, pp. 31-38, 6-8 July, Chicago, USA, 2000
 Sankar Pal, "Soft data mining, computational theory of perceptions, and rough-fuzzy approach", Information Sciences, Vol.163, pp. 5–12, 2004
 Sujni Paul, "An Optimized Distributed Association Rule Mining Algorithm in Parallel and Distributed Data Mining With XML Data for Improved Response Time", International Journal of Computer Science and Information Technology, Vol.2, No.2, April 2010
 Thair Nu Phyu, "Survey of Classification Techniques in Data Mining", in Proceedings of the
International Multi Conference of Engineers and Computer Scientists, Vol.1, Hong Kong, 2009
 Vassilios Verykios, Elisa Bertino, Igor Nai Fovino, Loredana Parasiliti Provenza, Yucel Saygin and Yannis Theodoridis, "State-of-the-art in Privacy Preserving Data Mining", ACM SIGMOD Record, Vol.33, No.1, 2004
 Xindong Wu and Xingquan Zhu, "Mining With Noise Knowledge: Error-Aware Data Mining", IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans, Vol.38, No.4, 2008
 Yang Wang, Gregory Norcie and Lorrie Faith Cranor, "Who is Concerned about What? A Study of American, Chinese and Indian Users' Privacy Concerns on Social Network Sites," short paper, School of Computer Science, Carnegie Mellon University, 2011
 N. Zhang, S. Wang, W. Zhao, "A new scheme on privacy-preserving data classification," in Proceedings of KDD'05, 2005, pp. 374–383
 Xun Yi, Yanchun Zhang, "Privacy-preserving naïve Bayes classification on distributed data via semi-trusted mixers"
 L. Sweeney, "k-Anonymity: a model for protecting privacy," International Journal on Uncertainty, Fuzziness and Knowledge-based Systems, 10(5) (2002), 557–570
Concepts of Virtualization: Its Applications and Benefits
Ms. Monali Reosekar, Dr. D. Y. Patil Arts, Commerce & Science College, Pimpri, Pune, India
Ms. Sunita Nikam, ASM's IIBR, Pimpri, Pune, India
ABSTRACT:
Virtualization provides many benefits. It was introduced more than thirty years ago; as hardware prices went down, the need for virtualization faded away. More recently, virtualization at all levels (system, storage, network, etc.) has become important again. This paper explains the basics of system virtualization and its benefits, such as greater efficiency in CPU utilization, greener IT with less power consumption, better management through central environment control, higher availability, reduced project timelines by eliminating hardware procurement, improved disaster-recovery capability and more central control of the desktop, along with a few weaknesses. It also explains what new opportunities desktop virtualization creates for future work.
Keywords: Virtualization, disaster recovery, CPU/OS virtualization
Introduction
You may be asking yourself, “What exactly is virtualization?” Many people tend to
confuse virtualization with other forms of technology and sometimes confuse the difference
between virtualization and say, cloud computing. Virtualization is the process of creating a
“virtual” version of something, such as computing environments, operating systems, storage
devices or network components, instead of utilizing a physical version for certain aspects of a
company’s infrastructure.
For example, server virtualization is the process in which multiple operating systems
(OS) and applications run on the same server at the same time, as opposed to one server
running one operating system. If that still seems confusing, think of it as one large server
being cut into pieces. The server then imitates or pretends to be multiple servers on the
network when in reality it’s only one. This offers companies the capability to utilize their
resources efficiently and lowers the overall costs that come with maintaining servers.
In computing, virtualization means to create a virtual version of a device or resource,
such as a server, storage device, network or even an operating system where the framework
divides the resource into one or more execution environments. Even something as simple as
partitioning a hard drive is considered virtualization because you take one drive and partition it
to create two separate hard drives. Devices, applications and human users are able to interact
with the virtual resource as if it were a real, single logical resource.
Basic Concepts in Virtualization
Virtualization allows you to run two environments on the same machine in such a way that the two environments are completely isolated from one another.
Virtualization is the creation of a virtual (rather than actual) version of something, such
as an operating system, a server, a storage device or network resources.
You probably know a little about virtualization if you have ever divided your hard drive
into different partitions. A partition is the logical division of a hard disk drive to create, in
effect, two separate hard drives.
Operating system virtualization is the use of software to allow a piece of hardware to run
multiple operating system images at the same time. The technology got its start on mainframes
decades ago, allowing administrators to avoid wasting expensive processing power.
Types of Virtualization
Desktop Virtualization
Desktop Virtualization separates the desktop environment from the physical device that
is used to access it. This is most often configured as a “virtual desktop infrastructure” (VDI)
where many personal desktops (say Windows 10) are run as virtual machines on a server, but
employees access those desktops from client devices, generally PCs.
The advantage to Desktop Virtualization is in work convenience and information
security. Because the desktops are accessed remotely, an employee is able to work from any
location and on any PC. This creates a lot of flexibility for employees to work from home and
on the road without needing to physically bring their work computer. It also protects important
company data from being lost or stolen by keeping it in a controlled location on their central
servers.
Server Virtualization
Server virtualization is the moving of existing physical servers into a virtual
environment, which is then hosted on a physical server. Many modern servers are able to host
more than one server simultaneously, which allows you to reduce the number of servers you
have in your company, thus reducing your IT and administrative expenditures. Some servers
can also be virtualized and stored offsite by other hosting companies.
Server virtualization enables multiple virtual operating systems to run on a single physical machine. It can often take the place of the costly practice of manual server consolidation by combining many physical servers into one logical server. "The idea is to present the illusion of one huge machine that's infinitely powerful,
reliable, robust and manageable - whether it's one machine that looks like many, or multiple
machines tied together to look like a single system" (Brandel, 2004). The focus of server
virtualization is on maximizing the efficiency of server hardware in order to increase the
return on investment for the hardware.
Server Virtualization
Operating System Virtualization
Operating system virtualization is the movement of a desktop’s main operating system
into a virtual environment. The computer you use remains on your desk but the operating
system is hosted on a server elsewhere. Usually, there is one version on the server and copies
of that individual OS are presented to each user. Users can then modify the OS as they wish,
without other users being affected.
Through virtualization of operating systems, a single computer can accommodate
multiple platforms and facilitate their operation simultaneously. This description is similar to
the aforementioned server virtualization, but server virtualization alone does not necessarily
provide the ability to run multiple platforms on a single server. Also, in contrast, the goal of
OS virtualization is focused more on flexibility than efficiency. OS virtualization can be used
to facilitate what is known as universal computing, where software and hardware work
together seamlessly regardless of the architecture or language for which they are designed.
Operating System Virtualization
Application Virtualization
Similar to VDI mentioned above, application virtualization differs in that it delivers only
a specific application from a server to the user's device. Instead of logging into an entire
desktop, the user will interact with the application as though it were a native application on the
client device. This makes application virtualization preferable for work on tablets or
smartphones because the native presentation makes working easier.
The big advantage to application virtualization is efficiency. In many situations where
remote applications are useful, applications are redundant across employees. With that in
mind, running an entire operating system for each employee becomes unnecessary if you can
simply run a separate instance of the application.
Imagine an office where the employees chiefly use Microsoft Office programs - it makes
much more sense to have a single server or virtual machine run the Operating System and just
deliver the applications to each employee as they need them.
Application virtualization is normally done using Windows Server and Windows Remote Desktop Services. However, some of the hassles of setting it up have led to a number of third-party virtual application delivery products that simplify management and improve the user experience.
Storage Virtualization
Storage virtualization is the combining of multiple physical hard drives into a single,
virtualized storage environment. To many users, this is simply called cloud storage, which can be private (hosted by your company), public (hosted outside of your company, e.g., Dropbox), or mixed. This type of virtualization, along with server virtualization, is often the most pursued by companies as it is usually the easiest and most cost-effective to implement.
Storage virtualization improves storage flexibility by creating a unified virtual pool of
storage from physical storage devices in a network. What this does is present all physical
storage in a cluster as a single shared group - visible to all servers.
Storage virtualization is important because it allows for virtual machine portability
without necessitating a shared storage array (generally a NAS or SAN). Under normal
circumstances, powerful virtualization features like live migration and high availability require expensive shared storage arrays to operate. By using storage virtualization, every server
added to the network contributes to the virtual storage pool as though it were a SAN or NAS
array.
Perhaps the most widely deployed and highly regarded virtualization practice, storage
virtualization allows separate storage devices to be combined into a perceived single unit.
Storage virtualization attempts to maximize the efficiency of storage devices in an information
architecture.
Storage Virtualization
Data \ Database Virtualization
Data virtualization allows users to access various sources of disparately located data
without knowing or caring where the data actually resides (Broughton). Database
virtualization allows the use of multiple instances of a DBMS, or different DBMS platforms,
simultaneously and in a transparent fashion regardless of their physical location. These
practices are often employed in data mining and data warehousing systems.
Database \ Data Virtualization
Network Virtualization
Similar to storage virtualization, network virtualization pools resources from all physical
networking equipment and presents it to virtual machines and applications as a single virtual
network.
This increases network agility and drastically reduces provisioning time for new network
architectures. What once involved physically connecting devices and then configuring each
device to properly communicate is now done virtually in moments (both manually and through
automated templates).
Additionally, network virtualization improves the amount of visibility an IT team has
into the network since everything is managed from a central platform. It also allows for better
virtual machine portability (network architecture is preserved) and for impressive new security
features like a distributed firewall (a firewall at each virtual machine).
By virtualizing a network, multiple networks can be combined into a single network, or
a single network can be separated logically into multiple parts. A common practice is to create
virtual LANs, or VLANs, in order to more effectively manage a network.
Network Virtualization
Hardware Virtualization
Hardware virtualization refers to taking the components of a real machine and making
them virtual. This virtual machine works like the real machine and is usually a computer with
an operating system. The software is ordinarily separated from the hardware resources, with
the software often remaining on the physical machines. A good example of this is a Windows
PC that runs a virtual version of Linux. There are different types of hardware virtualization,
but this is the most common type used by businesses.
Hardware virtualization is the most common type of virtualization used today because of the advantages it offers concerning hardware utilization and application uptime. It typically refers to virtualizing a server.
Normally, a server devotes complete control of its hardware resources (CPU, RAM, and storage) to the actions of a single operating system. When you virtualize your hardware, it
means that a program called a hypervisor manages the hardware's resources and divides them
among different isolated operating systems, referred to as “virtual machines.”
By dividing your hardware among multiple operating systems, there is a greater
diversity of programs that can be utilized without encountering compatibility problems. It also
prevents a single application crash from impacting everything else that is running on the
server. Before virtualization, it was fairly common for servers to utilize less than 30% of their
available hardware resources. Thanks to these improvements, modern companies can utilize
nearly 100% of the hardware investment they make.
Another advantage of hardware virtualization is the portability it offers. With modern hypervisors, an IT team can migrate running virtual machines from one server to another, allowing them to perform updates and maintenance without ever having downtime for the application. Preventing downtime is important for larger businesses that lose significant amounts of employee productivity (wasted salaries) or sales when an application is down.
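As a concrete illustration of a hypervisor managing isolated guests, the short sketch below uses the libvirt Python bindings to list the virtual machines on a host and whether each is running; it assumes a host with libvirtd and a QEMU/KVM hypervisor installed, and the connection URI is the usual local default rather than anything prescribed by this paper.

```python
import libvirt  # pip install libvirt-python; requires libvirtd on the host

def list_guests(uri="qemu:///system"):
    """Connect to a local hypervisor and print each guest's name and state."""
    conn = libvirt.open(uri)                      # connection to the hypervisor
    try:
        for dom in conn.listAllDomains(0):        # 0 = no filter: active and inactive
            state = "running" if dom.isActive() else "shut off"
            print(f"{dom.name():20s} {state}")
    finally:
        conn.close()

if __name__ == "__main__":
    list_guests()
```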
Benefits of Virtualization
Virtualization can increase IT agility, flexibility, and scalability while creating
significant cost savings. Workloads get deployed faster, performance and availability increases
and operations become automated, resulting in IT that's simpler to manage and less costly to
own and operate.
• Reduce capital and operating costs.
• Deliver high application availability.
• Minimize or eliminate downtime.
• Increase IT productivity, efficiency, agility and responsiveness.
• Speed and simplify application and resource provisioning.
• Support business continuity and disaster recovery.
• Enable centralized management.
• Build a true Software-Defined Data Center.
• Cheaper implementation
• Business doesn't stop
• Higher availability and uptime
• Speedy installations
• Corporate directives
• Data center consolidation and decreased power consumption
• Simplified disaster recovery solutions
• The ability to run Windows, Solaris, Linux and NetWare operating systems and applications concurrently on the same server
• Increased CPU utilization from 5-15% to 60-80%
• The ability to move a "virtual machine" from one physical server to another without reconfiguring, which is beneficial when migrating to new hardware when the existing hardware is out-of-date or just fails
• The isolation of each "virtual machine" provides better security by isolating one system from another on the network; if one "virtual machine" crashes it does not affect the other environments
• The ability to capture (take a snapshot of) the entire state of a "virtual machine" and roll back to that configuration, which is ideal for testing and training environments
• The ability to obtain centralized management of IT infrastructure
• A "virtual machine" can run on any x86 server
• It can access all physical host hardware
• The ability to re-host legacy operating systems, such as Windows NT Server 4.0 and Windows 2000, on new hardware and operating systems
• The ability to designate multiple "virtual machines" as a team which administrators can power on and off, suspend or resume as a single object
• The ability to simulate hardware; it can mount an ISO file as a CD-ROM and image files as hard disks
• It can configure network adaptor drivers to use NAT through the host machine as opposed to bridging, which would require an IP address for each machine on the network
• The ability to test live CDs without first burning them onto disks or having to reboot the computer
The Impact of Virtualization
Adaptability is becoming an increasing focus for the management of modern business.
With new opportunities and threats always lurking on the horizon, businesses must be able to
quickly, efficiently, and effectively react to their dynamic environment. With regard to IT
infrastructure, virtualization is perhaps the most effective tool for facilitating this adaptability.
In virtualized systems, the expansion and reduction of technical resources can be performed
seamlessly. Because physical devices and applications are logically represented in a virtual
environment, administrators can manipulate them with more flexibility and reduced
detrimental effects than in the physical environment. Through the use of virtualization tools,
server workloads can be dynamically provisioned to servers, storage device usage can be
manipulated, and should a problem occur, administrators can easily perform a rollback to
working configurations. Generally, the addition (or removal) of hardware can be easily
managed with virtualization tools.
The deployment of new applications for use across the enterprise is easily performed
through varied combinations of application, operating system, and server virtualization.
Through virtualization induced "containers," applications can be isolated from both the
hardware and one another, preventing configuration conflicts that often complicate their
introduction into IT systems.
Increased demand for data or database capabilities can be easily met with data and
database virtualization through the management of new DBMSs or physical infrastructure with
virtualization tools. All of these examples illustrate the adaptable nature of the virtual
enterprise.
In addition to adaptability, the CIO can lower operating costs through implementing
virtualization within his or her infrastructure. There is much inherent efficiency that comes
with implementing this type of system, because much of its focus is on optimizing the use of
resources, thus reducing overhead for maintenance. Any element of current infrastructure can
be leveraged more fully with virtualization. Switching costs for new operating systems or
applications are lowered with the ability to more flexibly install and implement them. The
consolidation of servers and storage space obviously increases the return on investment for
this hardware by maximizing efficiency.
Lowering costs will enable the CIO to reallocate the IT budget towards initiatives that
are not related to the maintenance of current systems, such as research and development,
partnerships, and the alignment of IT with business strategy. A case in which this is well
illustrated is that of Welch Foods. Through virtualization, Welch's IT management was able to
increase server usage in their infrastructure from five to ten percent, to a range of 50 to 60
percent, allowing them to increase the number of servers per manager to 70-to-1[6], thus
reducing expensive labor costs for day-to-day operations (Connor, 2005).
IT managers will be able to increase the productivity of employees across the entire
organization through a properly implemented virtualization system. For businesses that rely on
in-house application development, an increase in productivity and increased ease of
implementation can be seen. Developers within a platform-virtualized environment can
program in languages they are most proficient with. Debugging and testing applications
becomes second nature with the ability to create contained virtual environments. In this
instance, application and systems testing can be performed on a single workstation that
employs a variety of virtual machines without the need to transfer and debug code to external
computers. Enterprise-wide testing can be performed in isolated virtual machines, which do
not interact with or compromise the resources actually being used on the network. Users in a
virtualized environment do not know or care how their use of IT resources is being optimized.
They are able to access needed information and perform work effectively and simply, without
regard to the complexities that exist behind the scenes.
Security Impact
In virtualized environments in which resource segmentation takes place, an increase in
security can be seen due to the residual complexities for hackers who are not familiar with the
configuration of the system they wish to compromise. For example, in application
virtualization, virtual applications can run on multiple servers, causing confusion for attackers
who are prevented from determining the physical resource that has been compromised
(Lindstrom, 2004). In the case of virtual machines, which emulate hardware systems, there can
be added confusion for would-be attackers. "It's hard to accomplish much by cracking a
system that doesn't exist" (Yager, 2004). Another security benefit brought about by
virtualization is related to disaster recovery. In virtualized server systems, it is not necessary to
create identical configurations on backup servers as it is with non-virtualized systems. Because
the virtualization layer separates the hardware from the operating system environment,
restoring a lost server can be done on a machine with unlike hardware configurations. It is also
possible to perform backups from several servers to one secondary server, creating a less
expensive method for high availability and disaster recovery.
Risks
With any benefit, there is always associated risk. This is not an exception for the practice
of virtualization. The first problem that IT managers must be aware of occurs in the planning
and implementation of virtualization. CIOs and their staff must decide if, in fact, virtualization
is right for their organization. The short-term costs of an ambitious virtualization project can
be expensive, with the need for new infrastructure and configuration of current hardware. In
businesses where cost reduction and flexibility of IT are not currently in alignment with the business strategy, other initiatives will be better suited. That is not to say that virtualization is never the right choice, but it is not right for every environment. A few risks related to virtualization are:
1. Sensitive data within a VM
2. Security of offline & dormant VMs
3. Security of pre-configured (golden image) VM/active VMs
4. Lack of visibility and control over virtual networks
5. Resource exhaustion
6. Hypervisor security
7. Unauthorized access to hypervisor
8. Account or service hijacking through the self-service portal
9. Workloads of different trust levels located on the same server
10. Risk due to cloud service provider APIs
11. Information security isn't initially involved in the virtualization projects
12. A compromise of the virtualization layer could result in the compromise of all hosted
workloads
13. The lack of visibility and controls on internal virtual networks created for VM-to-VM
communications blinds existing security policy enforcement mechanisms
14. Workloads of different trust levels are consolidated onto a single physical server without
sufficient separation
15. Adequate controls on administrative access to the Hypervisor/VMM layer and to
administrative tools are lacking
16. There is a potential loss of separation of duties for network and security controls
Conclusion:
The economic climate is forcing companies to look for every possible means of increasing efficiency, flexibility and cost effectiveness. Strategic companies will capitalize on the current business crisis to create future opportunities. Server virtualization offers possibilities in an environment where some organizations might be paralyzed with fear. Virtualization involves a long-term strategy: it transforms how a company approaches IT and will require a change in how companies see their computing needs evolving and how they want to address them. Virtualization is the foundation for building an optimized data center that can help companies improve server efficiency, lower energy costs and enhance IT flexibility. With the right approach,
companies can confront the risks and rewards that come with virtualization in a way that
optimizes their data center in the most efficient and flexible manner. But don't think of it as a quick fix; think of it as a long-term fix that can pay off as companies realize results with virtualization.
Virtualization may bring several advantages to the design of modern computer systems
including better security, higher reliability and availability, reduced costs, better adaptability
to workload variations, easier migration of virtual machines among physical machines, and
easy coexistence of legacy applications. Many vendors including Sun, IBM, and Intel have
already announced or already have virtualization solutions. Intel has just announced a new
architecture, called Intel Virtualization.
REFERENCES:
• http://www.dabcc.com/documentlibrary/file/ThinstallWPTO.pdf
• http://www.webopedia.com/TERM/V/virtualization.html
• http://cs.gmu.edu/~menasce/papers/menasce-cmg05-virtualization.pdf
• https://c.ymcdn.com/sites/www.aitp.org/resource/resmgr/research/virtualization_and_its_benef.pdf
• http://carpathia.com/blog/virtualization-what-is-it-what-types-there-are-and-how-itbenefits-companies/
• http://www.cio.com/article/2400612/virtualization/six-reasons-small-businesses-needvirtualization.html
• http://www.t-systems.be/umn/uti/143640_1/blobBinary/WhitePaper_DesktopVirtualization-ps.pdf?ts_layoutId=383236
• http://www.techadvisory.org/2013/07/define-4-types-of-virtualization/
• http://www.sangfor.net/blog/5-types-virtualization-you-should-know.html
• http://carpathia.com/blog/virtualization-what-is-it-what-types-there-are-and-how-itbenefits-companies/
• http://www.windowsecurity.com/whitepapers/Network_Security/Virtualization.html
• http://www.losangelescomputerhelp.com/2010/04/the-different-types-of-virtualization/
• http://searchservervirtualization.techtarget.com/definition/virtualization
• https://www.vmware.com/virtualization/overview
• www.gfi.com/blog/5-benefits-switching-virtualization-technology/
• http://blog.pluralsight.com/15-reasons-to-consider-virtualization
• http://www.pcworld.com/article/220644/10_Cool_Things_Virtualization_Lets_You_Do.html
• http://www.networkcomputing.com/data-centers/top-11-virtualization-risksidentified/d/d-id/1320314
• http://www.net-security.org/secworld.php?id=9023
• http://www.zdnet.com/article/virtualization-what-are-the-security-risks/
Study of Software Testing Tools over Multiple Platforms
Prof. Vaishali Jawale, Assistant Professor, MCA, PhD (Appeared), ASM's IBMR, vaishalijawale@asmedu.org
ABSTRACT:
The aim of this research paper is to evaluate and compare automated software testing tools to determine their usability and effectiveness over multiple platforms. Software testing is an important part of the software development process [1]. Complex systems are being built, and testing throughout the software development cycle is vital to the success of the software. Testing is a very expensive process. Manual testing involves a lot of effort, measured in person-months. This effort can be reduced by using automated testing with specific tools. Software products need to qualify as multi-platform to truly provide for the global user. Many questions and issues may come up when you are trying to achieve efficient testing of a product on some or all of the different platforms. This paper can serve as a beginner's guide for testing a product on several platforms. It describes the concepts you need to pin down, pre-empts some common issues and provides tips, solutions and best practices for teams involved in multi-platform testing.
Keywords: Multiple platforms, Software development process, Usability, Effectiveness.
I. Introduction
Software testing is a process to identify all the bugs that exist in a software product. It is the process of evaluating all the components of a system to verify that it satisfies the specified requirements, or to identify differences between expected and actual results. Software testing is also performed to achieve quality by exercising the software with applicable test cases. Testing can be integrated at various points in the development process depending upon the tools and methodology used. Software testing usually starts after the requirements phase. At the unit level, it starts concurrently with coding, whereas at the integration level it starts when coding is completed. The testing process can be performed in two ways: manual or automated. [2]
Fig. 1: Software Testing
Manual testing is a process to test the software manually to find out the bugs. Manual
testing is performed without using any automated tool. While performing manual testing, a test plan is used that describes the systematic and detailed approach to testing a software application. The goal of the testing is to make sure that the software application under test is defect free. Manual testing is not suitable for large projects as it requires more resources and time. Automated testing is a process in which tools execute a predefined scripted test on software to find defects. Automated software testing is the finest way to increase the effectiveness and efficiency of software testing. Automation testing can do what manual
testing does not. Automation testing also improves accuracy and saves the tester's time and the organization's money. It is best suited to environments where the requirements change repeatedly and a huge amount of regression testing needs to be performed.
II. Automated Testing Process
A test automation interface is a platform which provides a single workspace for incorporating multiple testing tools. It is used to improve the efficiency and flexibility of maintaining test scripts. It includes an Interface Engine, which consists of a runner to execute the test scripts; the Interface Environment, which consists of a Framework and a Project Library; and an Object Repository, which is a collection of application object data.
Fig. 2: Automation Testing Interface
Automated software testing consists of a sequence of activities, processes and tools that are executed in order to run the tests on the software and to keep a record of the test results.
The following activities are included in the testing process:
• Test planning
• Test design & implementation
• Test execution
• Test evaluation
A general testing process is depicted in Fig. 3.
Fig 3: Testing Process
Each activity performs some tasks whose outputs are used by the next phase. At the end, a bug report is documented. These documents are further used by the development team to recognize the cause of faults and to correct them.
After elaborating the test plan, analysis is performed on the requirements and the test objectives are defined. The design phase is focused on the definition of test cases and test procedures. At this step it is decided which parts should be tested manually and which will be tested automatically.
General approaches that are used for test automation are:
Code-driven testing: the public interfaces to libraries, classes and modules are tested with a number of inputs to validate that the results are correct.
Graphical user interface (GUI) testing: keystroke and mouse-click events are generated by a testing framework, and the resulting changes are detected to validate that the observable behaviour of the program is correct.
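As a small illustration of the code-driven approach, the sketch below uses Python's standard unittest module; the slugify function being tested is a made-up example introduced only for demonstration, not something taken from the tools discussed in this paper.

```python
import unittest

def slugify(title: str) -> str:
    """Example unit under test: turn a title into a URL slug."""
    return "-".join(title.lower().split())

class SlugifyTests(unittest.TestCase):
    """Code-driven tests: call the public interface with several inputs
    and validate that the results are correct."""

    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Software Testing Tools"), "software-testing-tools")

    def test_already_lowercase(self):
        self.assertEqual(slugify("selenium"), "selenium")

    def test_extra_whitespace_is_collapsed(self):
        self.assertEqual(slugify("  cross   platform  "), "cross-platform")

if __name__ == "__main__":
    unittest.main()
```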
III. Software Test Automation Tools
Various types of tools are used for automated testing and they can be used in different
areas of testing. The selection of a tool is based on the type of application we want to test, for example automated web testing tools or GUI testing tools [3].
Selenium
Selenium is an open source web testing tool which is used to test the web browsers
across different platforms. It is divided into four components. First is Selenium IDE, which is used as a prototyping tool and requires no programming language. Second is Selenium Remote Control, which allows users to use a programming language. Third is WebDriver, which implements a stable approach through direct communication between the test scripts and browsers. Fourth is Selenium Grid, which helps to execute parallel tests on different browsers in combination with Selenium Remote Control.
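To give a feel for how Selenium WebDriver scripts look, here is a minimal sketch in Python; it assumes the selenium package and a Firefox driver are installed, and the target URL and element locator are placeholders rather than values from this paper.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Launch a browser session; Firefox is used here, but other browsers work the same way.
driver = webdriver.Firefox()
try:
    driver.get("https://example.com")                 # placeholder URL
    heading = driver.find_element(By.TAG_NAME, "h1")  # locate an element on the page
    # A simple check, as a test framework would perform.
    assert "Example" in heading.text, "unexpected page heading"
    print("Title:", driver.title)
finally:
    driver.quit()
```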
Watir
Watir is an open source tool for automating web browsers. The tool is simple and flexible, and its tests are easy to read and maintain. It supports Internet Explorer, Firefox and Opera, and it supports multiple browsers on different platforms.
Ranorex
It is a GUI test automation framework which provides testing of a wide range of
desktop, web and mobile applications. Test cases, once written, can be executed on different
platforms. It simulates the user actions by record and replay tool into recording modules.
Ranorex is easy to use and affordable even for small testing teams.
Test Complete
TestComplete has an open, flexible architecture for maintaining and executing automated tests for the web. This tool helps to keep the balance between quality and speed of delivery of applications at an affordable cost. With the TestComplete tool, different types of testing can be done, such as unit testing, GUI testing, regression testing, etc.
Windmill
Windmill is a web testing framework that provides automation testing and debugging
capabilities. The purpose of Windmill is to make test writing portable and easier. It supports the Firefox, Safari, Chrome and Opera browsers. The tool runs on Microsoft Windows, Linux and Mac OS X. Without requiring any programming language, Windmill provides a cross-browser test recorder that helps in writing tests.
Sahi
Sahi is automation and testing tool for web applications. This tool is used by the
developers for fixing and reproducing bugs, QAs for functional testing and by business
analysts for defining and verifying functionality. It supports the JavaScript language and offers
easily editable scripts.
QuickTest Professional
QuickTest Professional (QTP) helps the tester to perform automated functional testing. It supports only Windows XP, and tests are developed only in VBScript or JavaScript. With QTP it is easy to edit the script, play it back and validate the results.
Tellurium
Tellurium is an open source automated testing framework for testing web applications. It
was developed from Selenium framework with different testing concept. It is built with UI
module concept which helps to write reusable and easily maintainable tests.
What is a platform?
Platform can mean many things. For our context, we assume that a platform is a unique operating system running on unique hardware. So with platforms, you should always
distinguish between Solaris on an Intel machine and Solaris on a SPARC workstation. When
developers work on products that are portable or interoperable across platforms, it is common
to use one of those platforms as the preferred platform for development.(Note that "platform"
does not only refer to Operating Systems; it can also refer to different database backends,
CORBA ORBs from different vendors, and so on.) The development platform may be the one
that is most commonly used by customers, or may be the one with the best development and
debugging tools. In such a situation, there may be forces that cause UnitTests to be run on
other platforms only sporadically, or not run at all. This can lead to nasty surprises. When
developing cross-platform products, running the tests continuously on all supported platforms
is a necessary part of Test Driven Development. Otherwise, you are really only running a
subset of the tests.[4]
Introducing Cross-Platform Test Automation:
It should come as no surprise that the numbers of platforms and device types are more
varied now than ever before. Customers continue to demand the latest devices, features and
functionality, as well as increased mobility and accessibility. With the proliferation of mobile
and portable device platforms and the Internet of Things, the workload of developers, and
especially testers, has greatly increased. As traditional testing practices prove unable to keep
up with the demand, businesses of all types are experiencing significant product delivery
delays and, in some cases, costly product defects. Naturally, there is a growing demand for
more efficient and cost-effective testing across all platforms. With the Internet of Things and
more devices of all types, it is critical for development teams to build products that
interoperate and function consistently across multiple platforms and devices. Regardless of the
industry, multi-platform testing capability is now mandatory. Due to fragmentation and
differentiation of devices, displays, and inputs, the traditional manual approach to testing
suffers many drawbacks and limitations. Among other things, it is extremely time-consuming
and prone to missed coverage. This not only promotes lower product quality, but also leads to
cost escalation. Testing is even more difficult because features do not function in the same
way across all platforms. This requires developers to implement custom code for different
platforms, which, in turn, requires more testing. Adding to the testing complexity is the vast
number of APIs (application programming interfaces) that connect all these devices [5].
In order to improve product quality across all platforms, it is necessary to have
known test coverage. In parallel, it is important to ensure greater efficiency throughout the
testing process and reduce testing turnaround time in order to speed time-to-market. The
industry agrees that implementing cross-platform test automation is really the only effective
way to scale. This requires businesses to make an investment in implementing and deploying a
test execution infrastructure that works across platforms such as Appium, Calabash, Perfecto,
Selendroid, and Telerik. At the same time, for other platforms and legacy applications,
automating execution utilizing existing tools like HP QTP/UFT and Selenium might also be
needed. When combined, these infrastructures allow tests to be executed on a variety of
devices without the core structure having to be re-engineered, but supporting them all can
make automated testing very complex. Thus it becomes critical to use a testing process that
can easily support all needed execution frameworks.
Some tips for cross-platform testing:
• When developing a product that has to work across platforms, set up the cross-platform tests immediately; don't put it off until later [6].
• Set up an automated script that will run the tests on all platforms at least once per day (perhaps after each Daily Build).
• Use a cross-platform scripting language to drive automated test scripts, and use the exact same script for all platforms. If that is not possible, find some other way to guarantee that the same tests are being run everywhere.
• Whenever tests fail unexpectedly, compare the results on different platforms. If the tests succeed on some platforms but fail on others, it is often a sign that the code is not as portable as is thought.
• Whenever a new test is added, the developer who writes the test should be encouraged to immediately run it on all platforms (but this may not be feasible in all environments).
• Don't use Mock Objects in place of the cross-platform elements. You need to run tests against the real stuff to ensure portability and interoperability. Also beware of using platform emulators for your tests.
Effective cross-platform testing significantly reduces the time it takes to create and
maintain test automation scripts for your website across all major browsers and versions.
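One way to follow the tips above, in particular running the exact same tests on every platform from a daily automated script, is sketched below using only Python's standard platform, os and unittest modules; the path-joining behaviour under test is an illustrative stand-in, not an example taken from this paper.

```python
import os
import platform
import unittest

class PathBehaviourTests(unittest.TestCase):
    """The same test module is run unchanged on Windows, Linux and macOS,
    e.g. by a daily cross-platform build job."""

    def test_join_uses_native_separator(self):
        joined = os.path.join("reports", "daily")
        self.assertTrue(joined.endswith("daily"))
        self.assertIn(os.sep, joined)

    @unittest.skipUnless(platform.system() == "Windows", "drive letters are Windows-only")
    def test_drive_letter_split(self):
        drive, _ = os.path.splitdrive(r"C:\reports\daily")
        self.assertEqual(drive, "C:")

if __name__ == "__main__":
    # Recording the platform in the log makes cross-platform comparisons easier.
    print(f"Running on {platform.system()} {platform.release()}")
    unittest.main()
```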
IV. Conclusion
This paper presents a study of various automated testing tools that are used on different platforms. Automation testing tools help the tester to easily automate the whole testing process. Automation testing improves accuracy and also saves the tester's time compared to manual testing. This paper also presents a study of cross-platform test automation. The only efficient and scalable way to approach cross-platform application testing and keep up with the rapid proliferation of devices is to invest in test automation by implementing and deploying an automated test execution infrastructure.
V. REFERENCES:
[1]. http://www.ijera.com/papers/Vol3_issue5/KA3517391743.pdf
[2]. http://www.ijarcsse.com/docs/papers/Volume_5/6_June2015/V5I6-0418.pdf
[3]. http://www.testingtools.com/test-automation/
[4]. http://www.moravia.com/en/knowledge-center/testing-engineering/multilingual-testingon-multiple-platforms/
[5]. https://www.conformiq.com/wp-content/uploads/2014/12/How-to-Succeed-at-MultiPlatform-Testing-White-Paper.pdf
[6]. http://c2.com/cgi/wiki?CrossPlatformTesting
A Bird's-Eye View of Human-Computer Interaction
Prof. Alok Gaddi, HOD, Global College of BBA, Hubli, Karnataka, India
Prof. Vijaykumar Pattar, Asst. Professor, Department of Computer Application, Fatima Degree College, Hubli, Karnataka, India
ABSTRACT:
In modern-day business, the rapid growth of computing has made effective human-computer interaction essential. It is important for the growing number of computer users whose professional schedules will not allow the elaborate training and experience that was once necessary to take advantage of computing. Increased attention to usability is also driven by competitive pressures for greater productivity, the need to reduce frustration, and the need to reduce overhead costs such as user training. As computing affects more aspects of our lives, the need for usable systems becomes even more important. In this paper we provide a bird's-eye view of Human-Computer Interaction (HCI) with an introduction, an overview and the advancements in the field.
Keywords: Human-Computer Interaction
History of Human-computer interaction
Human-computer interaction arose as a field from intertwined roots in computer
graphics, operating systems, human factors, ergonomics, industrial engineering, cognitive
psychology, and the systems part of computer science. Computer graphics was born from the
use of CRT and pen devices very early in the history of computers. This led to the
development of several human-computer interaction techniques. Many techniques date from
Sutherland's Sketchpad Ph.D. thesis (1963) that essentially marked the beginning of computer
graphics as a discipline. Work in computer graphics has continued to develop algorithms and
hardware that allow the display and manipulation of ever more realistic-looking objects (e.g.,
CAD/CAM machine parts or medical images of body parts). Computer graphics has a natural
interest in HCI as "interactive graphics" (e.g., how to manipulate solid models in a CAD/CAM
system).
A related set of developments were attempts to pursue "man-machine symbiosis"
(Licklider, 1960), the "augmentation of human intellect" (Engelbart, 1963), and the
"Dynabook" (Kay and Goldberg, 1977). Out of this line of development came a number of
important building blocks for human-computer interaction. Some of these building blocks
include the mouse, bitmapped displays, personal computers, windows, the desktop metaphor,
and point-and-click editors.
Work on operating systems, meanwhile, developed techniques for interfacing
input/output devices, for tuning system response time to human interaction times, for
multiprocessing, and for supporting windowing environments and animation. This strand of
development has currently given rise to "user interface management systems" and "user
interface toolkits".
Human factors, as a discipline, derives from the problems of designing equipment
operable by humans during World War II (Sanders & McCormick, 1987). Many problems
faced by those working on human factors had strong sensory-motor features (e.g., the design
of flight displays and controls). The problem of the human operation of computers was a
natural extension of classical human factors concerns, except that the new problems had
substantial cognitive, communication, and interaction aspects not previously developed in
human factors, forcing a growth of human factors in these directions. Ergonomics is similar to
human factors, but it arose from studies of work. As with human factors, the concerns of
ergonomics tended to be at the sensory-motor level, but with an additional physiological flavor
and an emphasis on stress. Human interaction with computers was also a natural topic for
ergonomics, but again, a cognitive extension to the field was necessary resulting in the current
"cognitive ergonomics" and "cognitive engineering." Because of their roots, ergonomic studies
of computers emphasize the relationship to the work setting and the effects of stress factors,
such as the routinization of work, sitting posture, or the vision design of CRT displays.
Industrial engineering arose out of attempts to raise industrial productivity starting in the
early years of this century. The early emphasis in industrial engineering was in the design of
efficient manual methods for work (e.g., a two-handed method for the laying of bricks), the
design of specialized tools to increase productivity and reduce fatigue (e.g., brick pallets at
waist height so bricklayers didn't have to bend over), and, to a lesser extent, the design of the
social environment (e.g., the invention of the suggestion box). Interaction with computers is a
natural topic for the scope of industrial engineering in the context of how the use of computers
fit into the larger design of work methods.
Cognitive psychology derives from attempts to study sensation experimentally at the end
of the 19th century. In the 1950's, an infusion of ideas from communications engineering,
linguistics, and computer engineering led to an experimentally-oriented discipline concerned
with human information processing and performance. Cognitive psychologists have
concentrated on the learning of systems, the transfer of that learning, the mental representation
of systems by humans, and human performance on such systems.
Finally, the growth of discretionary computing and the mass personal computer and
workstation computer markets have meant that sales of computers are more directly tied to the
quality of their interfaces than in the past. The result has been the gradual evolution of
standardized interface architecture from hardware support of mice to shared window systems
to "application management layers." Along with these changes, researchers and designers have
begun to develop specification techniques for user interfaces and testing techniques for the
practical production of interfaces.
Human-Computer Interaction: Definition, Terminology
The concept of Human-Computer Interaction/Interfacing (HCI) emerged naturally with the computer, or more generally the machine, itself. The reason, in fact, is clear: most sophisticated machines are worthless unless they can be used properly by
men. This basic argument simply presents the main terms that should be considered in the
design of HCI: functionality and usability.
Having these concepts in mind and considering that the terms computer, machine and
system are often used interchangeably in this context, HCI is a design that should produce a fit
between the user, the machine and the required services in order to achieve a certain
performance both in quality and optimality of the services. Determining what makes a certain
HCI design good is mostly subjective and context dependent. For example, an aircraft part
designing tool should provide high precisions in view and design of the parts while a graphics
editing software may not need such a precision. The available technology could also affect
how different types of HCI are designed for the same purpose. One example is using
commands, menus, graphical user interfaces (GUI), or virtual reality to access functionalities
of any given computer. In the next section, a more detailed overview of existing methods and
devices used to interact with computers and the recent advances in the field is presented.
Design Challenges of HCI
Need for Iterative Design.
The need to use iterative design means that the conventional software engineering
“waterfall”, approach to software design, where the user interface is fully specified, then
implemented, and later tested, is inadequate. Instead, the specification, implementation, and
testing must be intertwined. This makes it very difficult to schedule and manage user interface
development.
Multiprocessing
A related issue is that in order to be reactive, user interface software is often organized
into multiple processes. All window systems and graphical tool kits queue “event” records to
deliver the keyboard and mouse inputs from the user to the user interface software. Users
expect to be able to abort and undo actions (for example, by typing Control-C or Command-dot). Also, if a window’s graphics need to be redrawn by the application, the window system
notifies the application by adding a special “redraw” event to the queue. Therefore, the user
interface software must be structured so that it can accept input events at all times, even while
executing commands. Consequently, any operations that may take a long time, such as
printing, searching, global replace, re-paginating a document, or even repainting the screen,
should be executed in a separate process. Alternatively, the long jobs could poll for input
events in their inner loop, and then check to see how to handle the input, but this is essentially
a way to simulate multiple processing. Furthermore, the window system itself often runs as a
separate process. Another motivation for multiple processes is that the user may be involved in
multiple ongoing dialogs with the application, for example, in different windows. These
dialogs will each need to retain state about what the user has done, and will also interact with
each other.
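As a concrete, simplified illustration of this structure (not taken from the systems cited here), the sketch below keeps a user-interface loop draining an event queue while a long-running job executes in a separate worker thread and reports back through the same queue; Python's standard queue and threading modules stand in for a real window system.

```python
import queue
import threading
import time

events: "queue.Queue[str]" = queue.Queue()

def long_job():
    """A slow operation (e.g. printing or global replace) run off the UI loop."""
    time.sleep(2)                       # simulate work
    events.put("job-finished")          # report back via the event queue

def ui_loop():
    threading.Thread(target=long_job, daemon=True).start()
    while True:
        try:
            event = events.get(timeout=0.016)   # stay responsive while waiting
        except queue.Empty:
            # No event yet: redraw, animate, etc., then keep accepting input.
            continue
        if event == "job-finished":
            print("long job done; the UI loop stayed responsive throughout")
            break
        # Keyboard, mouse and redraw events would be dispatched here.

if __name__ == "__main__":
    ui_loop()
```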
The Need for Real-time Programming
Another set of difficulties stems from the need for real-time programming. Most
graphical, direct manipulation interfaces will have objects that are animated or which move
around with the mouse. In order for this to be attractive to users, the objects must be
redisplayed between 30 and 60 times per second without uneven pauses. Therefore, the
programmer must ensure that any necessary processing to calculate the feedback can be
guaranteed to finish in about 16 milliseconds. This might involve using less realistic but faster
approximations (such as XORed bounding boxes), and complicated incremental algorithms
that compute the output based on a single input which has changed, rather than a simpler
recalculation based on all inputs.
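The 16-millisecond budget can be made explicit in code. The following self-contained sketch, included only as an illustration, times each redisplay against the frame budget and falls back to a cheaper approximation whenever the detailed drawing overruns.

```python
import time

FRAME_BUDGET = 1.0 / 60.0   # about 16 ms per frame for smooth feedback

def draw_feedback(detailed: bool) -> None:
    """Placeholder for redisplay work; the detailed path is slower."""
    time.sleep(0.004 if detailed else 0.001)

def run_frames(n_frames: int = 120) -> None:
    detailed = True
    for _ in range(n_frames):
        start = time.perf_counter()
        draw_feedback(detailed)
        elapsed = time.perf_counter() - start
        # If the detailed redisplay blew the budget, switch to a cheaper
        # approximation (e.g. an outline instead of full rendering).
        detailed = elapsed < FRAME_BUDGET
        # Sleep off the remainder of the frame so updates stay evenly paced.
        time.sleep(max(0.0, FRAME_BUDGET - elapsed))

if __name__ == "__main__":
    run_frames()
```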
Difficulty of Modularization
One of the most important ways to make software easier to create and maintain is to
appropriately modularize the different parts. The standard admonition in textbooks is that the
user interface portion should be separated from the rest of the software, in part so that the user
interface can be easily changed (for iterative design). Unfortunately, programmers find in
practice that it is difficult or impossible to separate the user interface and application parts, and
changes to the user interface usually require reprogramming parts of the application also.
Furthermore, modern user interface tool kits make this problem harder because of the
widespread use of “call-back” procedures. Usually, each widget (such as menus, scroll bars,
buttons, and string input fields) on the screen requires the programmer to supply at least one
application procedure to be called when the user operates it. Each type of widget will have its
own calling sequence for its call-back procedures. Since an interface may be composed of
thousands of widgets, there are thousands of these procedures, which tightly couples the
application with the user interface and creates a maintenance nightmare.
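The call-back coupling described above is easy to see in any widget toolkit. The short Tkinter sketch below, included purely as an illustration, registers one application procedure per widget; multiplying this by thousands of widgets is what produces the maintenance problem described here.

```python
import tkinter as tk

def on_save():
    """Application callback invoked when the user operates the Save button."""
    status.config(text="Document saved")

def on_quit():
    root.destroy()

root = tk.Tk()
root.title("Callback example")

# Each widget needs its own application procedure wired in via `command=`.
tk.Button(root, text="Save", command=on_save).pack(fill="x")
tk.Button(root, text="Quit", command=on_quit).pack(fill="x")
status = tk.Label(root, text="Ready")
status.pack(fill="x")

root.mainloop()
```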
Multimodal HCI (MMHCI) Systems
The term multimodal refers to combination of multiple modalities. In MMHCI systems,
these modalities mostly refer to the ways that the system responds to the inputs, i.e.
communication channels. The definition of these channels is inherited from the human modes of communication, which are basically the senses: sight, hearing, touch, smell, and taste.
Applications
A classic example of a multimodal system is the “Put That There” demonstration
system. This system allowed one to move an object into a new location on a map on the screen
by saying “put that there” while pointing to the object itself then pointing to the desired
destination. Multimodal interfaces have been used in a number of applications including map-based simulations, such as the aforementioned system; information kiosks, such as AT&T’s
MATCHKiosk and biometric authentication systems. Multimodal interfaces can offer a
number of advantages over traditional interfaces. For one thing, they can offer a more natural
and user-friendly experience. For instance, in a real-estate system called Real Hunter, one can
point with a finger to a house of interest and speak to make queries about that particular house.
Using a pointing gesture to select an object and using speech to make queries about it
illustrates the type of natural experience multimodal interfaces offer to their users. Another
key strength of multimodal interfaces is their ability to provide redundancy to accommodate
different people and different circumstances. For instance, MATCHKiosk allows one to use
speech or handwriting to specify the type of business to search for on a map. Thus, in a noisy
setting, one may provide input through handwriting rather than speech. Few other examples of
applications of multimodal systems are listed below:
• Smart Video Conferencing
• Intelligent Homes/Offices
• Driver Monitoring
• Intelligent Games
• E-Commerce
• Helping People with Disabilities
Future Development
The means by which humans interact with computers continues to evolve rapidly. A
curriculum in a changing area must be put together with some understanding of the forces
shaping the future so that its concepts are not quickly out of date. Although the curriculum can
always be revised in the light of greater understanding in the future, students cannot generally
be recalled for retraining. They must build their own future understanding upon the
foundations provided by the courses taken at the time they were students.
Human-computer interaction is, in the first instance, affected by the forces shaping the
nature of future computing. These forces include:

Decreasing hardware costs leading to larger memories and faster systems.

Miniaturization of hardware leading to portability.

Reduction in power requirements leading to portability.

New display technologies leading to the packaging of computational devices in new
forms.

Assimilation of computation into the environment (e.g., VCRs, microwave ovens,
televisions).

Specialized hardware leading to new functions (e.g., rapid text search).

Increased development of network communication and distributed computing.

Increasingly widespread use of computers, especially by people who are outside of the
computing profession.

Increasing innovation in input techniques (e.g., voice, gesture, pen), combined with
lowering cost, leading to rapid computerization by people previously left out of the
"computer revolution."

Wider social concerns leading to improved access to computers by currently
disadvantaged groups (e.g., young children, the physically/visually disabled, etc.).
Conclusion:
Computer systems can be regarded as a complex system of layers. An operating system
is one of these layers, managing system resources and protecting users from the complexity of
a system. Programmers rely on functions provided by the operating system, which makes it difficult to determine which layer a user communicates with.
Due to the input and output devices of a typical computer, humans can only use some of
their senses. Nevertheless, expectations and experiences determine what is perceived and what
is remembered. There are several ways of describing how humans solve problems and plan
actions.
HCI design principles are formulated to help programmers design user-friendly programs. Most of these principles have been developed with regard to the complexity of a computer system and the limitations of the human user. In this paper we have given a simple overview of this complex topic.
REFERENCES:
[1] Butler, K. A. (1985) Connecting Theory and Practice: A Case Study of Achieving
Usability Goals. In: Proceedings of CHI'85 Human Factors in Computing Systems (April
14-18, 1985, San Francisco, CA), ACM, pp. 85-88.
[2] Wiklund, M. E. (1994) Usability in Practice: How Companies Develop User-Friendly
Products, Cambridge, MA: Academic Press.
[3] Boff, K. R. and Lincoln, J. E. (1988). Engineering Data Compendium: Human
Perception and Performance vols 1-3. Harry G. Armstrong Aerospace Medical Research
Laboratory, Wright-Patterson Air Force Base, Ohio.
[4] Card, S.K., Moran, T.P., and Newell, A., (1983) The Psychology of Human-Computer
Interaction, Hillsdale, NJ: Erlbaum.
[5] Foley, J.D., van Dam, A., Feiner, S.K., and. Hughes, J.F. (1990) Computer Graphics:
Principles and Practice, Reading, MA: Addison-Wesley.
[6] Myers, B. A. (1989) "User-interface Tools: Introduction and Survey," IEEE Software,
vol. 6(1) pp. 15-23.
[7] Olsen, D.R. (1992) User Interface Management Systems: Models and Algorithms, San
Mateo, CA: Morgan Kaufmann.
[8] Norman, K. L. (1991b). The psychology of menu selection: Designing cognitive control
of the human/computer interface. Norwood, NJ: Ablex Publishing Corp.
[9] Challenges of HCI Design and Implementation By Brad A Myers.
[10] Challenges in Human-Computer Interaction Design for Mobile Devices By Kuo-Ying
Huang
[11] New Challenges in Human Computer Interaction By M. Ziefle, and E.M. Jakobs
[12] Fakhreddine Karray, Milad Alemzadeh, Jamil Abou Saleh and Mo Nours Arab Pattern
Analysis and Machine Intelligence Lab., Department of Electrical and Computer
Engineering University of Waterloo, Waterloo, Canada: Human-Computer Interaction:
Overview on State of the Art, International Journal ON Smart Sensing and Intelligent
Systems, VOL. 1, NO. 1, MARCH 2008
Website
[1] www.sigchi.org
ICT Applications for the Smart Grid
Opportunities and Policy Implications
Dr. Milind Pande, Director, MIT SOT, MIT Pune, India, milindpande@mitpune.com
Sharad Kadam, Asst. Professor, MIT Arts, Commerce and Science College, Alandi, Pune, India
This report discusses “smart” applications of information and communication
technologies (ICTs) for more sustainable energy production, management and consumption.
The “smart grid” is a particular application area expected to help tackle a number of structural
challenges global energy supply and demand are facing. The challenges include:

The direct impact of energy supply industries on climate change and other
environmental impact categories.

Explosion of energy demand worldwide over the past decades.

Wider uptake of renewable energy sources in national “energy mixes”, which holds
specific challenges.

Accelerating diffusion of electric vehicles, which will impact volumes and patterns of
electricity demand.

Provision of reliable and secure national electricity infrastructures.

Electricity provision to unserved parts of the population in developing countries.
This report discusses these challenges in greater detail and links them to innovative
applications of ICTs. These linkages provide the basis for what is termed the “smart grid”, i.e.
electricity networks with enhanced capacities for information and communication. In
concluding, this report outlines policy implications for government ministries dealing with
telecommunications regulation, ICT sector and innovation promotion, consumer and
competition issues. But policy implications can reach further than that and the European
Commission’s recent Energy Strategy is just one example of how ICTs are expected to
mitigate environmental challenges across the board. It points to the importance of ICTs “in
improving the efficiency of major emitting sectors. [They] offer potential for a structural shift
to less resource-intensive products and services, for energy savings in buildings and electricity
networks as well as for more efficient and less energy consuming intelligent transport
systems”.
Similarly, OECD ministers see ICTs and the Internet as a key enabling technology for
Green Growth, a fact that resounds in the Green Growth Strategy report presented in May
2011 (OECD, 2011a). However, the magnitude and persistence of energy and electricity
challenges require joint agendas of ICT firms and utilities, ICT and energy policy makers, as
well as bridging dispersed academic and civil society communities around the smart grid. A major conclusion of this report is therefore that there is a need for co-ordination between the energy
and ICT sectors, integrating also inputs from stakeholders in transportation, construction and
related sectors.
Current and Future Stakes :
Global energy challenges are immense. Over the past three decades, global energy
production and consumption have accelerated to unprecedented degrees. Between 1973 and
2008 (35 years) total energy production has basically doubled (OECD calculations based on
IEA World Energy Statistics). This is problematic because close to 70% of global energy
demand is satisfied using energy generated from sources that emit relatively large amounts of
greenhouse gases (carbon dioxide, CO2, is one of them). The energy supply sector, which is
responsible for one quarter of global greenhouse gas (GHG) emissions, has therefore become a
major target of climate change mitigation action.
The link between energy and electricity
Electricity is a pivotal element in understanding global energy challenges. Electricity by itself, or its consumption, does not emit greenhouse gases. It is an energy
carrier, a sort of intermediary, between the supply of primary energy sources (e.g. coal) and
the demand for energy-using services (e.g. transport, heating, lighting) (see Figure 1). It is, in
fact, the main energy carrier used around the world for residential, commercial and industrial
processes, next to fuels.
The climate challenge related to electricity stems from the fact that over two-thirds of
global electricity production is generated from the combustion of fossil fuels (IEA, 2010a).
The electricity producing sector is a major user of fossil fuels, responsible for one-third of
global fossil fuel use (IEA, 2010b). As a result, electricity plants have outpaced other
contributors in terms of greenhouse gas emissions (GHGs) since the 1970s, making mitigation
action in the electricity sector a necessary condition for sustainable economic growth
worldwide.
Further to the past increases in contribution to climate change, the electricity sector
globally is facing structural challenges that will amplify the detrimental effects of business-as-usual practices on the environment. The emerging shift from internal combustion engines to
electricity-powered engines is only one of them. Further challenges involve the provision of
electricity in developing countries, industrial demand for electricity as well as a reliable
electricity supply. The latter factor expands the “smart grids” discussion beyond environmental
considerations to include the economic development dimensions of electricity. Reliable
electricity supplies are necessary to power manufacturing and services provision, to empower
poor populations, etc. The required investments to satisfy energy demand will be large if no
changes are made to the volumes of energy consumption and their patterns.
A list of key electricity sector challenges
To understand the potential of ICT applications in the electricity sector, it is important to
get a solid understanding of the key challenges in the sector. On a global scale, the main
energy sector challenge is the dependence on fossil fuels such as oil, gas and coal. This
dependence has environmental and economic implications. From an environmental
perspective, it is evident today that the combustion of fossil fuels is a major contributor to
anthropogenic causes of climate change. Moreover, the combustion of fossil fuels has other
polluting characteristics, notably acidification of land and water resources through emissions
of sulphur and nitrogen oxides (e.g. “acid rain”). From an economic standpoint, dependence
on scarce resources that, in the case of most OECD countries, need to be imported creates
vulnerabilities to changes in prices and availability. Political and social unrest in oil-exporting countries contributed to price shocks in the 1970s and 1980s, leading to “car-free” days in some countries. In 2008 and 2011, these issues re-emerged with unrest in a number of major oil-exporting countries.
Growing living standards and industrialisation in emerging economies expand the demand for energy originating in non-OECD countries. About half of global electricity
production took place in the OECD area in 2008; this was down from over two-thirds in the
1970s (IEA, 2010a). Energy demand in the OECD is expected to remain flat over the next two
decades while the global total is projected to more than double (increase by 151%), driven by
growth in emerging economies (IEA, 2010c). As a result, a growing number of countries
compete for scarce energy resources. More and more energy-exporting economies will need to
satisfy domestic energy demand as economic development advances, creating further
pressures on global availability of oil resources. New reserves for fossil fuels might be
discovered and exploited, e.g. through deep water drilling, Arctic exploration, shale gas.
A look at the traditional energy sector value chain translates global energy sector
challenges to tangible areas for action where ICT technologies already provide solutions or
might be able to do so in the future.

Electricity generation is the process of converting primary energy sources to
electricity as the energy carrier. This includes conventional power plants (nuclear, oil,
gas), incinerators (waste to electricity), on-site generators, etc. but also wind turbine
parks, solar panel installations, etc.

Electricity transmission is the first step in the transportation of energy, encompassing
high voltage transmission lines (overhead, underground, seabed) that typically use
alternating current (AC). Transmission systems under high voltage and using direct
current (HVDC) are an important element in energy systems where generation is far
from the sites of consumption (e.g. off-shore wind parks or hydro power). The largest
transmission line today covers a distance of 2 000 km between a large hydropower plant
under development in the Chinese provinces of Sichuan and Yunnan and the city of
Shanghai. Use of direct current is typically associated with lower physical losses of
electricity than alternating current, but requires additional equipment investments when
compared with AC.

Electricity distribution refers to power delivery to the point of consumption, i.e.
medium and low voltage power lines that use almost exclusively alternating current
(AC). These distribution lines can span several kilometres starting at substations that
transform high voltage electricity to medium and low voltage electricity, ending at
electricity meters at the customer site.
Role of the “Smart Grid”
Definition
Various definitions of the smart grid exist. Before proceeding towards a working
definition for the purpose of this report, it is important to highlight that the smart grid is not a
product. It must be seen as a continuous process of modernising existing electricity grids and
of designing future grids. The smart grid is meant to address a number of key challenges –
environmental and economic – that the electricity sector is facing. The use of ICTs and
Internet applications are at the centre of this modernisation.
Smart grids can essentially be defined by their functions and their components.
Environmental and economic challenges in the electricity sector transcend individual steps in
the value chain. The smart grid is therefore expected to address the key challenges
stakeholders in the sector are facing: mitigation of climate change, disruptions in supply of
conventional energy sources, exploding global demand for electricity, wider diffusion of
renewable energy sources, accelerating use of electric vehicles and louder consumer demands
for greater transparency.
Turning to the components, smart grids are typically described as electricity systems
complemented by communications networks, monitoring and control systems, “smart” devices
and end-user interfaces
"A smart grid is an electricity network that uses digital and other advanced technologies
to monitor and manage the transport of electricity from all generation sources to meet the
varying electricity demands of end-users. Smart grids co-ordinate the needs and capabilities of
all generators, grid operators, end-users and electricity market stakeholders to operate all parts
of the system as efficiently as possible, minimising costs and environmental impacts while
maximising system reliability, resilience and stability."
Policy Implications
Overarching challenges for policy makers
A key message underlying this report is that innovation drives the development of the
smart grid. The smart grid is expected to help achieve levels of electricity production that are
sustainable in the long run,that reduce environmental burdens, but that also permit individuals
to maintain or improve standards of living. Policy makers have some options at hand to
facilitate “green innovation” and transformational change through the use of ICTs in the
electricity sector (OECD, 2011b). Diversification towards sustainable energy sector products,
services and infrastructures can be achieved through i) market mechanisms, e.g. for
transparency and access to information for all value chain participants, ii) financial incentives,
e.g. contribution to investment costs or tax breaks for infrastructure investments, iii) targeted
regulation, e.g. the recent EU Directives mandating a roll-out of smarter electricity meters that
inter alia provide improved information to final customers (2006/32/EC and 2009/72/EC).
Governments can also facilitate innovation “spill-overs” from the ICT to the energy sector and
related industries such as transport and construction. Ways to do so include promoting R&D
and commercialisation overall, reducing barriers to entry for smaller enterprises, supporting
cross-sector technology development and diffusion and coordinating national policy agendas
for energy, IT and communications (see OECD Council Recommendation on ICTs and the
Environment, 2010b).
Information and communication:
Information asymmetries across the electricity sector value chain remain an important
issue to tackle, and with them the need for effective and reliable communications channels.
The electricity sector’s “line of command” in cases where electricity demand risks peaking
(e.g. extremely hot or cold days) remains to a large degree patchy and mediated. Final
electricity consumers, in particular residential consumers, have little effective means of
obtaining information about the current state of electricity production, its availability, cost and
environmental impacts. Press releases issued by utilities about upcoming peaks may or may
not be picked up by the local media and customers may or may not pay attention to aggregate
information about the electricity system and its state. Direct messages over digital
communications channels, especially when linked with customer-specific information and
advice, represent “low-hanging fruit” where communication channels are available.
Information asymmetries also affect upstream processes. Utilities have relatively little
information about disaggregated electricity consumption patterns below the distribution
system level and losses of electricity along transmission and distribution lines are not always
accounted for systematically.
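As a purely illustrative sketch (the message fields, delivery channel and Python helper below are invented, not drawn from the report), a customer-specific peak-demand alert of the kind described above might be composed as follows before being pushed over whatever digital channel is available (SMS, e-mail, an in-home display):

    # Hypothetical example of a customer-specific peak-demand notification.
    import json
    from datetime import datetime, timedelta

    def peak_alert(customer_id, peak_start, duration_hours, advice):
        """Build a small, machine-readable alert a utility could push to a customer."""
        return json.dumps({
            "customer_id": customer_id,
            "type": "peak_demand_warning",
            "peak_window": {
                "start": peak_start.isoformat(),
                "end": (peak_start + timedelta(hours=duration_hours)).isoformat(),
            },
            "advice": advice,   # customer-specific guidance, e.g. based on past usage
        })

    print(peak_alert("C-1042", datetime(2016, 1, 20, 18, 0), 3,
                     "Consider shifting dishwasher use and EV charging to after 21:00."))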

Economic and financial hurdles: High investments in research, development and
deployment (or RD&D) are necessary to modernise national electricity systems.
National grid assets in OECD countries are often several decades old and function close
to maximum capacity. In many emerging economies electricity infrastructures are
“greenfield” investments – this means more advanced technological levels from the start
but it requires higher investments too. Overall, the IEA estimates that maintenance and
expansion of transmission and distribution networks globally will require investments of
over USD 8 trillion between 2010 and 2050 (this excludes investments in power
generation). Making these grid investments “smart” would add at least USD 3 trillion to
the bill (IEA, 2010b). Looking at closer horizons, needed investments are estimated at
around USD 600 billion in Europe by 2020 and close to USD 500 billion in the United
States by 2030.26 Although proponents of the smart grid point out substantial returns on
investment, concerns also exist that some purposes of the smart grid (e.g. energy
efficiency and conservation) might not be consistent with business models that are
traditionally based on volume sales. Large-scale investments are also at risk from low
and falling levels of private and public-sector spending on energy R&D over the past
decades. Despite ambitious political agendas in this area, expenditures are unlikely to
rise substantially, reflecting the impact of the economic crisis and tightening public
budgets.

Consumer acceptance, engagement and protection: Improved information on energy
use and better access to it can bring substantial social, economic and environmental
benefits. Mediated or automatic control of electric devices can help manage electricity
demand and lower electricity bills. Benefits of the latter sort can be of particular
importance to low-income groups spending relatively large parts of their household
income on energy. However, trial outcomes on electricity demand and costs are
ambiguous. Various survey results show that consumers are concerned about privacy
issues and costs related to smart meters. These concerns need to be addressed when
designing products and services from the start, otherwise public opinion risks turning
against smart grid initiatives. The smart meter, for example, provides valuable
information, but it also adds a level of complexity to an area of consumption that so far
used a relatively simple tariff structure (c.f. Consumer Focus, 2010). Dynamic pricing of
electricity will change that, making electricity prices dependent on levels of electricity
supply, demand and their environmental impacts. These changes are necessary, but they
will require well-designed interfaces between the user and the technology, as well as behavioural changes that can come about through guidance and education.
Consumer concerns around the smart grid focus also on electricity provision to poorer
and vulnerable parts of the population. Consumer rights groups highlight that smart
meters potentially lower the operational barriers for utilities wishing to remotely turn off
electricity supply or switch customers to more expensive pre-paid tariffs (c.f. Consumer
Focus, 2010). It therefore seems necessary to meet these new technical possibilities with
improved legal and other safeguards for concerned customers. Otherwise, the smart
meter risks facilitating wider diffusion of controversial business practices.
ICT-specific policy implications
Policy implications that are of specific relevance to ICT policy makers and
telecommunications regulators include:
Regulatory and networks issues
Converging energy and telecommunications services. The report shows that energy and
telecommunications services are increasingly intersecting. The smart meter is a prime example
of smart grid technology that blends electricity provision and consumption with advanced
communication requirements. There is a potential need for open access provisions allowing
smart meter service providers and utilities access to data capacity over telecommunications
networks.
Connectivity.
Communication channels need to be available across the economy to all electricity users
to maximise the potential benefits of smart grids. Ensuring communication channels are
available universally across the economy will remain a key goal of policy makers and there are
significant potential synergies that could be exploited between communication and electrical
distribution companies (e.g. utility pole or duct sharing). Increased reliance on communication
networks in the electricity sector will put existing infrastructures to the test regarding speed, quality of service and equal treatment of competitors’ information. Although the need for real-time communications links along the electricity sector value chain is likely to be an exception, fast response times are nevertheless necessary to simultaneously send control signals to virtual
fast response times are nevertheless necessary to simultaneously send control signals to virtual
power plants that can comprise hundreds, or even thousands of individual entities. The number
of connected devices could grow by orders of magnitude if projections for annual sales of
electric vehicles (7 million worldwide in 2020) and mandated smart meter installations are
realised (around 180 million in Europe in 2018). Utilities, grid operators and third-party intermediaries will depend on efficient network infrastructures to control the charging (grid-to-vehicle) and discharging (vehicle-to-grid, vehicle-to-home) of electric vehicles. Bandwidth
requirements are difficult to estimate. On the one hand, data traffic will predominantly cover
status information and control signals, which can be designed for reduced size. On the other
hand, the sheer amount of connected entities and devices that will have to communicate
simultaneously in smarter grids could require significant bandwidth. Finally, there are possible
needs for more spectrum for wireless data exchange.
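A rough back-of-envelope sketch in Python illustrates why even small status messages add up; the 180 million meter figure is the projection quoted above, while the message size and reporting interval are assumptions chosen purely for illustration.

    # Back-of-envelope estimate; message size and interval are illustrative assumptions.
    meters = 180_000_000        # projected European smart meter installations (2018)
    message_bytes = 200         # assumed size of one status/control message
    interval_s = 15 * 60        # assumed reporting interval: one message every 15 minutes

    messages_per_second = meters / interval_s
    aggregate_bits_per_second = messages_per_second * message_bytes * 8

    print(f"{messages_per_second:,.0f} messages per second")           # 200,000
    print(f"{aggregate_bits_per_second / 1e6:,.0f} Mbit/s aggregate")  # ~320 Mbit/s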
Converging IT and operational technologies (OT). The use of IT is not new to the
electricity sector. However, a change is taking place in the quality of engagement between
utilities and ICT firms. IT firms that want to provide value-added services in the smart grid
need a more detailed understanding of operational processes. This refers to services targeting
utilities (e.g. distribution grid management), consumers (e.g. energy consumption
optimisation), or both (e.g. operating virtual power plants). The trend can be described as
converging IT and operational technologies (OT). The ever-tighter integration of IT into
operational processes in the electricity, transport and buildings sectors requires the alignment
of research and policy agendas. Co-ordinated approaches are important to drive innovation
(see above), but also to make ICT applications relevant to the challenges of making electricity
supply efficient, reliable and sustainable.
Emerging skills requirements.
Immediate skills needs are growing at the same time as some OECD countries are noticing a declining attraction of students to so-called STEM subjects (science, technology,
engineering, mathematics). This could trigger shortages when smart grid developments
accelerate. Moreover, the growing need for an operational understanding of electricity,
transport and buildings management might require adapting curricula for engineering courses
and other IT-oriented education programmes.
Implementation of smart grid projects will require field-specific knowledge of legal
frameworks, environmental impacts, etc.
Security, resilience, privacy and exploitation of personal data
Risks of converging IT and OT. Closer integration of large-scale operational systems with
IP-based networks such as the Internet increases the exposure of critical infrastructures. The Stuxnet worm and its impact on targeted industrial systems is only one example of the
potential threats. Operational systems that exchange data with IP-based networks need to be
designed for security and resilience. Critical infrastructures converging with information
infrastructures require scenario-building that includes consideration of highly unlikely types of
events.
Upholding availability, integrity, confidentiality and authenticity. The smart meter is
likely to become a key node for managing information about the electricity system (e.g. grid
loads) and about final customers (e.g. preference “profiles” for charging of electric vehicles).
Questions must be addressed about unauthorised access to electricity data, the prevention of
“malware” in the smart meter and connected devices and other potential security threats.
Trends towards greater automation and remote control need to be accompanied by policies that
can guarantee integrity and authenticity of information. The smart meter will send and receive
control signals that directly impact the functioning of associated devices, e.g. electric vehicles,
domestic appliances and small-scale energy generators. In many cases, wireless
communications channels will be used for short-distance communications and connected
devices.
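A minimal sketch of one building block for integrity and authenticity is shown below, assuming (hypothetically) a symmetric key shared between the utility head-end and a meter; it attaches an HMAC tag to a control signal so that tampering is detectable. Key management, replay protection and confidentiality are deliberately left out.

    # Minimal integrity/authenticity sketch using Python's standard hmac module.
    import hmac, hashlib, json

    SHARED_KEY = b"example-key-provisioned-at-installation"   # hypothetical pre-shared key

    def sign(message: dict) -> dict:
        payload = json.dumps(message, sort_keys=True).encode()
        tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
        return {"payload": payload.decode(), "tag": tag}

    def verify(signed: dict) -> bool:
        expected = hmac.new(SHARED_KEY, signed["payload"].encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signed["tag"])

    signal = sign({"meter_id": "M-77", "command": "limit_ev_charging", "limit_kw": 3})
    print(verify(signal))   # True; any modification of the payload makes this False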
REFERENCES
 Allcott, H. (2009), “Social norms and energy conservation”, MIT working paper.
 Ayres, I., Raseman, S. and A. Shih (2009), “Evidence from Two Large Field Experiments that Peer Comparison Feedback can Reduce Residential Energy Usage”, paper presented at the 5th Annual Conference on Empirical Legal Studies.
 Blackburn, J. (2010), “Matching Utility Loads with Solar and Wind Power in North Carolina: Dealing with Intermittent Electricity Sources”, Working paper, Institute for Energy and Environmental Research, MD, United States.
 Consumer Focus (2010), “Consumer Focus response to Smart Metering Implementation Programme: Consumer Protections”, October.
 Darby, S. (2009), “Smart metering: what potential for householder engagement?”, Building Research and Information, Vol. 38, No. 5, pages 442-457.
 Dennis, M. and B. Thompson (2009), “Vehicle to grid using broadband communications”, Telecommunications Journal of Australia, Vol. 59, No. 1.
 EDF (2010), “Vague de froid : EDF mobilise tous ses moyens de production disponibles et l'ensemble de ses équipes”, press release, 1 December, http://medias.edf.com/communiques-de-presse/tous-lescommuniques-depresse/communique-2010/vague-de-froid-edf-mobilise-tous-ses-moyens-deproductiondisponibles-et-lyensemble-de-ses-equipes-82418.html, last accessed 2 May 2011.
 Ehrhardt-Martinez, K., K. Donnelly and J. Laitner (2010), “Advanced Metering Initiatives and Residential
The Study of Implementing Wireless Communication through Light Fidelity (Li-Fi)
Ms. Anjali Ashok Sawant, ASM Institute of Management & Computer Studies (IMCOST), Thane, Mumbai, India
Ms. Lavina Sunil Jadhav, ASM Institute of Management & Computer Studies (IMCOST), Thane, Mumbai, India
Ms. Dipika Krishna Kelkar, ASM Institute of Management & Computer Studies (IMCOST), Thane, Mumbai, India
ABSTRACT :
This paper focuses on the study and development of a Li-Fi based system and analyzes its performance with respect to existing technology. Li-Fi is a Visible Light Communication system proposed by the German physicist Harald Haas. It provides transmission of data through illumination, sending data through an LED light bulb that varies in intensity faster than the human eye can follow. Light is used to transfer information: data can be encoded in light to generate strings of 1s and 0s. Human eyes cannot detect the changes in intensity of the light generated by the LED, so the output appears constant. Li-Fi is used to overcome the interference issue and also provides high bandwidth for network coverage.
Keywords - Li-Fi, VLC, Wi-Fi, High-brightness LED, Photodiode, Wireless communication.
Introduction
In today's world data transmission is a very important part of communication. If multiple users are connected to a wireless area, data transmission slows down: the more users there are, the more bandwidth is consumed and the lower the transmission speed. A solution to this problem is the use of Li-Fi. Li-Fi is the transmission of data through LED bulbs (shown in Fig. 1) that vary in intensity faster than the human eye can follow. Li-Fi uses visible light instead of radio waves to transfer data.
Fig. 1 : Li-Fi Bulb
Li-Fi plays a major role in overcoming the problems of the heavy loads which the current wireless radio systems are facing, since it makes use of visible light for the data
transfer. Li-Fi offers a much larger frequency band (around 300 THz) compared with that available for radio-frequency communication (up to 300 GHz). Unlike radio waves, which are sometimes suspected of adverse health effects, Li-Fi avoids this concern, so the technology can be used more widely. Li-Fi is a future technology that can serve many devices, such as laptops, mobiles and tablets, in a room. Security is less of a concern for Li-Fi because if the light cannot be seen, it is difficult to intercept the data transfer. Li-Fi can therefore also be used in military areas, where secure communication is a major issue.
Construction of Li-Fi
Li-Fi, also called a Visible Light Communication (VLC) system, is based on two main components:
1. At least one device able to receive the light signal transmitted by the LED bulb.
2. A light bulb having the capacity to produce data signals.
A VLC light source comprises a fluorescent or LED bulb. Since Li-Fi requires a high-quality light-generating output device, LEDs are preferred for implementing a Li-Fi communication system. In an LED, light is generated by a semiconductor, which means the bulb can emit light in greater quantity and be switched at much higher speed. The LED therefore generates many signals that the human eye cannot identify individually. The changes in light intensity are received by the receiving device and converted into an electrical current. The electronic signal is then converted into a data stream consisting of audio, video or any other kind of information that can be used by any Internet-enabled device.
Li-Fi is a growing technology nowadays. Like other communication systems in the Internet era, Li-Fi also functions as a bidirectional communication system: a mobile device can send data back to the Li-Fi device with the help of infrared light and a photo detector. Multi-colored RGB light can also be used to send and receive a wider range of signals than a single-colored light source.
The Li-Fi emitter system consists of four primary subassemblies [10]:
a) Bulb
b) RF power amplifier circuit (PA)
c) Printed circuit board (PCB)
d) Enclosure
The PCB controls the electrical inputs and outputs of the light source and consists of the
microcontroller used to manage the different functions of the light source. An RF (radio-frequency)
signal is generated by the solid-state PA and is guided into an electric field about the bulb. The
high concentration of energy in the electric field vaporizes the contents of the bulb to a plasma
state at the bulb’s center; this controlled plasma generates an intense source of light. All of
these subassemblies (shown in Fig. 2) are contained in an aluminum enclosure.
Fig. 2 : Block diagram of Li-Fi sub assemblies
Working of Li-Fi
A new generation of high-brightness LEDs forms the core of Li-Fi technology. If the LED is on, a digital 1 is transmitted; if the LED is off, a digital 0 is transmitted. These high-brightness lights are switched on and off at such a rapid speed that data can be transmitted through the light. The working of Li-Fi is very simple to understand and implement. The system consists of two main devices: the LED bulb, which transmits the data, and the receiver device, which accepts the signal transferred by the light source. When the light is on the receiver registers a binary 1, and a binary 0 when it is off. To transmit a particular message, the LED is switched on and off multiple times, or colored light sources can be used to generate signals at megabits per second. The block diagram of the Li-Fi system is shown in Fig. 3.
Fig. 3 : Li-Fi System
The data to be transmitted is encoded in the emitted light by changing the flickering rate of the LED bulbs so that strings of binary data are generated. The intensity of the LED bulbs is changed so rapidly that normal eyes cannot notice it and the light appears constant [13].
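The on-off keying idea described above can be illustrated with a short Python sketch (a simplification for illustration, not an actual Li-Fi driver): text is turned into a stream of 1s and 0s that would drive the LED, and the receiver turns the sampled intensities back into the original bytes. Real systems add clock recovery, framing and error control on top of this.

    # Simplified on-off keying: 1 = LED on, 0 = LED off.
    def encode(text):
        bits = []
        for byte in text.encode("utf-8"):
            bits.extend((byte >> i) & 1 for i in range(7, -1, -1))   # MSB first
        return bits

    def decode(bits):
        data = bytearray()
        for i in range(0, len(bits) - len(bits) % 8, 8):
            byte = 0
            for b in bits[i:i + 8]:
                byte = (byte << 1) | b
            data.append(byte)
        return data.decode("utf-8")

    flashes = encode("Li-Fi")     # the pattern the bulb would flash
    print(flashes[:8])            # bits of the first transmitted byte
    print(decode(flashes))        # the receiver recovers "Li-Fi"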
Light-emitting diodes (commonly referred to as LEDs and found in traffic and street lights, car brake lights, remote control units and countless other applications) can be switched on and off faster than the human eye can detect, causing the light source to appear to be on continuously even though it is in fact 'flickering'. This seemingly invisible on-off activity of the bulb enables data transmission using binary codes: switching an LED on is a logical '1', switching it off is a logical '0'. By varying the rate at which the LEDs flicker on and off, information can be encoded in the light as different combinations of 1s and 0s. Visible Light Communication can thus be defined as the process of using rapid pulses of light to transmit information wirelessly. It is popularly called Li-Fi because it competes with its radio-based rival technology, Wi-Fi. Fig. 4 shows Li-Fi technology connecting devices in a room.
Fig. 4 : Li-Fi system connecting devices in a room
Comparison between Li-Fi and Wi-Fi
Li-Fi uses VLC technology for communication, which provides a high data transfer rate; it derived its name from its similarity to Wi-Fi. Wi-Fi is generally used for wireless coverage within buildings, whereas Li-Fi also overcomes the interference issues of radio-wave communication. The table below shows a comparison of the transfer speeds of various wireless technologies that are used for connecting to the end user. Wi-Fi currently offers high data rates: IEEE 802.11n
in most implementations provides up to 150 Mb/s, although in practice much lower speeds are achieved.
Comparison of Speed of Various Wireless Technologies

Technology              Speed
Wi-Fi (IEEE 802.11n)    150 Mbps
Bluetooth               3 Mbps
IrDA                    4 Mbps
Li-Fi                   >1 Gbps
Application Areas of Li-Fi Technology
(a) Airways: When travelling by air, radio-wave communication is a major constraint because aircraft communication is based entirely on radio waves. Li-Fi was introduced to overcome this.
(b) Education systems: This technology provides high-speed internet access, which is very useful in educational institutes and companies. Li-Fi can connect multiple users without affecting the bandwidth allocation.
(c) Green information technology: Li-Fi causes no harm to the bodies of living things such as birds and humans; in this sense it is a green information technology.
(d) Free from frequency bandwidth problems: Li-Fi does not face frequency allocation issues, as it communicates with the help of light, and there is no fee to pay for using light as a communication medium.
(e) Increased communication safety: Due to visible light communication, all terminals are visible to the host of the network.
(f) Multi-user communication: Li-Fi supports broadcasting, which helps to share multiple things with many users at a single instant.
(g) Replacement for other technologies: Li-Fi does not work on radio frequencies, so it can be easily used where radio is restricted and is straightforward to implement.
Conclusion
The possibilities are numerous and can be explored further. If this technology comes into regular use, every light bulb could act as a wireless access point, transferring data much as a Wi-Fi device does today. The concept of Li-Fi is currently attracting a great deal of interest, not least because it may offer a genuine and very efficient alternative to radio-based wireless. As a growing number of people and their many devices access wireless internet, the airwaves are becoming increasingly clogged, making it more and more difficult to get a reliable, high-speed signal. Li-Fi may solve issues such as the shortage of radio-frequency bandwidth and also allow internet access where traditional radio-based wireless is not allowed, such as in aircraft or hospitals. One of the shortcomings, however, is that it only works in direct line of sight.
Acknowledgement
This research consumed a huge amount of work and dedication. We would like to sincerely thank our teachers. We express our gratitude toward our families and friends for their kind co-operation and encouragement, which helped us in this research. We also express our gratitude to one and all who, directly or indirectly, have lent their hand in this research study.
REFERENCES:
[1] Jyoti Rani, Prerna Chauhan, Ritika Tripathi, “Li-Fi (Light Fidelity) - The future technology in Wireless communication”, International Journal of Applied Engineering Research, ISSN 0973-4562, Vol. 7, No. 11 (2012).
[2] Richard Gilliard, Luxim Corporation, “The lifi® lamp: high efficiency high brightness light emitting plasma with long life and excellent color quality”.
[3] Richard P. Gilliard, Marc DeVincentis, Abdeslam Hafidi, Daniel O'Hare, and Gregg Hollingsworth, “Operation of the LiFi Light Emitting Plasma in Resonant Cavity”.
[4] Visilink, “Visible Light Communication Technology for Near-Ubiquitous Networking”, White Paper, January 2012.
[5] http://edition.cnn.com/2012/09/28/tech/lifi-haas-innovation
[6] http://articles.economictimes.indiatimes.com/2013-0114/news/36331676_1_datatransmission-traffic-signals-visible-lightspectrum
[7] http://www.extremetech.com/extreme/147339-micro-led-lifi-whereevery- light-sourcein-the-world-is-also-tv-and-provides-gigabit-internetaccess
[8] http://www.dvice.com/archives/2012/08/lifi-ten-ways-i.php
[9] http://www.good.is/posts/forget-wifi-it-s-lifi-internet-through-lightbulbs
[10] http://www.lifi.com/pdfs/techbriefhowlifiworks.pdf
[11] http://www.ispreview.co.uk/index.php/2013/01/tiny-led-lights-set-todeliver-wifi-styleinternet-communications.html
[12] http://www.newscientist.com/article/mg21128225.400-will-lifi-be-thenew-wifi.html
[13] http://groupivsemi.com/working-lifi-could-be-available-soon/
[14] http://en.wikipedia.org/wiki/Li-Fi
[15] http://slideshare.com/li-fitech.html
Green Cloud Computing Environments
Mrs. Kalyani Alishetty, Assistant Professor, NBN Sinhgad School Of Computer Studies, Pune, Maharashtra, amarslives@gmail.com
Mr. Amar Shinde, Assistant Professor, NBN Sinhgad School Of Computer Studies, Pune, Maharashtra, amarslives@gmail.com
ABSTRACT :
Cloud computing is a highly scalable and cost-effective infrastructure for running
HPC, enterprise and Web applications. However, the growing demand of Cloud
infrastructure has drastically increased the energy consumption of data centers, which
has become a critical issue. High energy consumption not only translates to high
operational cost, which reduces the profit margin of Cloud providers, but also leads to
high carbon emissions which is not environmentally friendly. Hence, energy-efficient
solutions are required to minimize the impact of Cloud computing on the environment.
In order to design such solutions, deep analysis of Cloud is required with respect to their
power efficiency. Thus, in this chapter, we discuss various elements of Clouds which
contribute to the total energy consumption and how it is addressed in the literature. We
also discuss the implication of these solutions for future research directions to enable
green Cloud computing. The chapter also explains the role of Cloud users in achieving
this goal.
1. Introduction
With the growth of high-speed networks over the last decades, there has been an alarming rise in their usage, comprising thousands of concurrent e-commerce transactions and millions of
Web queries a day. This ever-increasing demand is handled through large-scale datacenters,
which consolidate hundreds and thousands of servers with other infrastructure such as cooling,
storage and network systems. Many internet companies such as Google, Amazon, eBay, and
Yahoo are operating such huge datacenters around the world.
The commercialization of these developments is defined currently as Cloud computing,
where computing is delivered as utility on a pay-as-you-go basis. Traditionally, business
organizations used to invest huge amount of capital and time in acquisition and maintenance
of computational resources. The emergence of Cloud computing is rapidly changing this
ownership-based approach to subscription-oriented approach by providing access to scalable
infrastructure and services on-demand. Users can store, access, and share any amount of
information in Cloud. That is, small or medium enterprises/organizations do not have to worry
about purchasing, configuring, administering, and maintaining their own computing
infrastructure. They can focus on sharpening their core competencies by exploiting a number
of Cloud computing benefits such as on-demand computing resources and faster, cheaper software development capabilities. Moreover, Cloud computing also offers
enormous amount of compute power to organizations which require processing of tremendous
amount of data generated almost every day. For instance, financial companies have to
maintain every day the dynamic information about their hundreds of clients, and genomics
research has to manage huge volumes of gene sequencing data.
Therefore, many companies not only view Clouds as a useful on-demand service, but
also a potential market opportunity. According to an IDC (International Data Corporation) report,
the global IT Cloud services spending is estimated to increase from $16 billion in 2008 to $42
billion in 2012, representing a compound annual growth rate (CAGR) of 27%. Attracted by
this growth prospects, Web-based companies (Amazon, eBay, Salesforce.com), hardware
vendors (HP, IBM, Cisco), telecom providers (AT&T, Verizon), software firms
(EMC/VMware, Oracle/Sun, Microsoft) and others are all investing huge amount of capital in
establishing Cloud datacenters. According to Google’s earnings reports, the company has
spent $US1.9 billion on datacenters in 2006, and $US2.4 billion in 2007.
Fig 1 : Cloud and Environmental Sustainability
Clouds are essentially virtualized datacenters and applications offered as services on a
subscription basis, as shown in Figure 1. They require a large amount of energy for their operation. Today, a typical datacenter with 1000 racks needs 10 Megawatts of power to operate, which results in high operational cost. Thus, for a datacenter, the energy cost is a significant
component of its operating and up-front costs. In addition, in April 2007, Gartner estimated
that the Information and Communication Technologies (ICT) industry generates about 2% of
the total global CO2 emissions, which is equal to the aviation industry. According to a report
published by the European Union, a decrease in emission volume of 15%–30% is required
before the year 2020 to keep the global temperature increase below 2 °C. Thus, energy consumption and carbon emission by Cloud infrastructures have become a key environmental
concern. Some studies show that Cloud computing can actually make traditional datacenters
more energy efficient by using technologies such as resource virtualization and workload
consolidation. The traditional data centers running Web applications are often provisioned to
handle sporadic peak loads, which can result in low resource utilization and wastage of
energy. Cloud datacenter, on the other hand, can reduce the energy consumed through server
consolidation, whereby different workloads can share the same physical host using
virtualization and unused servers can be switched off. A recent research by Accenture shows
that moving business applications to Cloud can reduce carbon footprint of organizations.
According to the report, small businesses saw the most dramatic reduction in emissions – up to
90 percent while using Cloud resources. Large corporations can save at least 30-60 percent in
carbon emissions using Cloud applications, and mid-size businesses can save 60-90 percent.
Contrary to the above opinion, some studies, for example Greenpeace, observe that the
Cloud phenomenon may aggravate the problem of carbon emissions and global warming. The
reason given is that the collective demand for computing resources is expected to further
increase dramatically in the next few years. Even the most efficiently built datacenter with the
highest utilization rates will only mitigate, rather than eliminate, harmful CO2 emissions. The
reason given is that Cloud providers are more interested in electricity cost reduction rather
than carbon emission reduction. The data collected by the study are presented in Table 1 below. Clearly, none of the cloud datacenters in the table can be called green.
Table 1. Comparison of Significant Cloud Datacenters

Cloud datacenter   Location       Estimated PUE   % of Dirty Energy Generation   % of Renewable Electricity
Google             Lenoir         1.21            50.5% Coal, 38.7% Nuclear      3.8%
Apple              Apple, NC      –               50.5% Coal, 38.7% Nuclear      3.8%
Microsoft          Chicago, IL    1.22            72.8% Coal, 22.3% Nuclear      1.1%
Yahoo              La Vista, NE   1.16            73.1% Coal, 14.6% Nuclear      7%

(PUE = power usage effectiveness.)
2. Cloud Computing and Energy Usage Model: A Typical Example
In this section, through a typical Cloud usage scenario we will analyze various elements
of Clouds and their energy efficiency. User data pass from the user's own device through an Internet
service provider’s router, which in turn connects to a Gateway router within a Cloud
datacenter. Within datacenters, data goes through a local area network and are processed on
virtual machines, hosting Cloud services, which may access storage servers. Each of these
computing and network devices that are directly accessed to serve Cloud users contribute to
energy consumption. In addition, within a Cloud datacenter, there are many other devices,
such as cooling and electrical devices, that consume power. Even though these devices do not directly help in providing the Cloud service, they are major contributors to the power consumption
of a Cloud datacenter. In the following section, we discuss in detail the energy consumption of
these devices and applications.
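The accounting implied by this scenario can be sketched as a simple sum over every device the data touches. The function below is only illustrative; the per-bit energy figures shown are hypothetical placeholders that would, in practice, have to come from measurements of the actual devices.

    # Illustrative accounting sketch; per-bit values below are invented placeholders.
    def request_energy_joules(bits_transferred, per_bit_energy):
        """per_bit_energy maps each device on the path to its energy per bit (J/b)."""
        return {device: bits_transferred * jpb
                for device, jpb in per_bit_energy.items()}

    path = {                      # hypothetical path from user device to Cloud storage
        "user_device": 1e-8, "isp_router": 5e-8, "gateway_router": 5e-8,
        "datacenter_lan": 2e-8, "vm_host": 3e-7, "storage": 1e-7,
    }

    breakdown = request_energy_joules(8 * 10**6, path)   # a 1 MB request
    print(breakdown)
    print("total joules:", round(sum(breakdown.values()), 3))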
2.1 User/Cloud Software Applications
The first factor that contributes to energy consumption is the way software applications
are designed and implemented. The Cloud computing can be used for running applications
owned by individual user or offered by the Cloud provider using SaaS. In both cases, the
energy consumption depends on the application itself. If application is long running with high
CPU and memory requirements then its execution will result in high energy consumption.
Thus, energy consumption will be directly proportional to the application’s profile. The
allocation of resources based on the maximum level of CPU and memory usage will result in
INCON - XI 2016
341
ASM’s International E-Journal on
E-ISSN – 2320-0065
Ongoing Research in Management & IT
much higher energy consumption than actually required. The energy inefficiency in execution
of an application emanates from inaccurate design and implementation. The application
inefficiencies, such as suboptimal algorithms and inefficient usage of shared resources causing
contention lead to higher CPU usage and, therefore, higher energy consumption. However,
factors such as energy efficiency are not considered during the design of an application in most application domains, with the exception of, for example, embedded devices such as mobile phones.
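The dependence on the application's profile can be made concrete with a commonly used linear server power model (an assumption chosen for illustration, not a model proposed in this chapter): power grows linearly from an idle value to a maximum value with CPU utilisation, so the same machine-hours cost very different amounts of energy depending on how the application loads the CPU.

    # Linear power model: P(u) = P_idle + (P_max - P_idle) * u, u = CPU utilisation.
    # The idle/max wattages are illustrative defaults, not measured values.
    def energy_kwh(utilisation_trace, interval_s, p_idle_w=100.0, p_max_w=250.0):
        joules = sum((p_idle_w + (p_max_w - p_idle_w) * u) * interval_s
                     for u in utilisation_trace)
        return joules / 3.6e6     # 1 kWh = 3.6e6 J

    steady = [0.2] * 24                 # light, steady load, one sample per hour
    bursty = [0.9] * 6 + [0.05] * 18    # CPU-heavy run for 6 hours, nearly idle after
    print(round(energy_kwh(steady, 3600), 2), "kWh for the steady profile")
    print(round(energy_kwh(bursty, 3600), 2), "kWh for the bursty profile")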
2.2 Cloud Software Stack for SaaS, PaaS, IaaS Level
The Cloud software stack leads to an extra overhead in execution of end user
applications. For instance, it is well known that a physical server has higher performance
efficiency than a virtual machine and IaaS providers offer generally access to a virtual
machine to its end users. In addition, the management process in the form of accounting and
monitoring requires some CPU power. Being profit oriented, service providers regularly have
to adhere to Service Level Agreements (SLA) with their clients. These SLAs may take the
form of a time commitment for a task to be completed. Thus, to meet a certain level of service quality and availability, Cloud providers provision more resources than generally required. For
instance, to avoid failure, fast recovery and reduction in response time, providers have to
maintain several storage replicas across many datacenters. Since workflows in Web applications require several sites to give better response times to end users, their data are
replicated on many servers across the world. Therefore, it is important to explore the
relationships among Cloud components and the tradeoffs between QoS and energy
consumption.
2.3. Network Devices
The network system is another area of concern which consumes a non-negligible
fraction of the total power consumption. The estimated energy consumption of the Vodafone Group radio access network alone was nearly 3 TWh in 2006. In Cloud computing, since resources are accessed through the Internet, both applications and data need to be transferred to the compute node. Therefore, much more data communication bandwidth is required between the user's PC and the Cloud resources than the application execution itself requires. In some cases, if the data is really large, it may turn out to be cheaper and more
carbon emission efficient to send the data by mail than to transfer through Internet.
In Cloud computing, the user data travels through many devices before it reaches a
datacenter. In general, the user computer is connected to Ethernet switch of his/her ISP where
traffic is aggregated. The BNG (Broadband Network Gateway) network performs traffic
management and authentication functions on the packets received by Ethernet switches. These
BNG routers connect to other Internet routers through provider's edge routers. The core
network is further comprised of many large routers. Each of these devices consumes power
according to the traffic volume. According to the study conducted by Tucker, public Cloud is
estimated to consume about 2.7 J/b in transmission and switching in comparison to 0.46J/b for
a private Cloud. They found out that power consumption in transport represents a significant
proportion of the total power consumption for Cloud storage services at medium and high
usage rates. Even typical network usage can result in three to four times more energy
consumption in public Cloud storage than one's own storage infrastructure. Therefore, with the
growth of Cloud computing usage, it is expected that energy efficiency of switches and routers
will play a very significant role, since they will need to provide capacity of hundreds of terabits of bandwidth.
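A small worked example makes the transport figures above tangible. It uses the per-bit values exactly as quoted in the text; if a unit prefix was lost in transcription the absolute numbers would shift, but the public-to-private ratio would not.

    # Energy to move 1 GB, using the quoted per-bit transport figures.
    PUBLIC_PER_BIT = 2.7     # quoted figure for public Cloud storage transport
    PRIVATE_PER_BIT = 0.46   # quoted figure for private Cloud storage transport
    bits_in_one_gb = 8e9

    print("public cloud :", PUBLIC_PER_BIT * bits_in_one_gb, "energy units")
    print("private cloud:", PRIVATE_PER_BIT * bits_in_one_gb, "energy units")
    print("ratio        :", round(PUBLIC_PER_BIT / PRIVATE_PER_BIT, 1), "x")   # ~5.9x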
In the network infrastructure, the energy consumption depends especially on the power efficiency and power awareness of the wired network, namely the network equipment and system design, topology design, and network protocol design. Most of the energy in network devices is
wasted because they are designed to handle worst case scenario. Therefore, the energy
consumption of these devices remains almost the same during both peak time and idle state.
Many improvements are required to get high energy efficiency in these devices. For example
during low utilization periods, Ethernet links can be turned off and packets can be routed
around them. Further energy savings are possible at the hardware level of the routers through
appropriate selection and optimization of the layout of various internal router components (i.e.
buffers, links, etc.).
2.4 Datacenter
The Cloud datacenters are quite different from traditional hosting facilities. A Cloud datacenter could comprise many hundreds or thousands of networked computers with their corresponding storage and networking subsystems, power distribution and conditioning equipment, and cooling infrastructure. Due to the large amount of equipment, datacenters can consume massive amounts of energy and emit large amounts of carbon. According to the 2007 report on computing datacenters by the US Environmental Protection Agency (EPA), the datacenters in the US consumed about 1.5% of total energy, which cost about $4.5 billion. This high usage also translates to very high carbon emissions, estimated at about 80 to 116 metric megatons each year. In reality, the cooling equipment consumes an amount of energy equivalent to that of the IT systems themselves.
3. Features of Clouds Enabling Green Computing
Even though there is a great concern in the community that Cloud computing can result in higher energy usage by datacenters, Cloud computing has a green lining. There are several technologies and concepts employed by Cloud providers to achieve better utilization and efficiency than traditional computing. Therefore, comparatively lower carbon emissions are expected in Cloud computing due to its highly energy-efficient infrastructure and the reduction in the IT infrastructure itself through multi-tenancy. The key driver technology for energy-efficient Clouds is "Virtualization," which allows significant improvement in the energy efficiency of Cloud providers by leveraging the economies of scale associated with a large number of organizations sharing the same infrastructure. Virtualization is the process of presenting a logical grouping or subset of computing resources so that they can be accessed in ways that give benefits over the original configuration. By consolidating underutilized servers in the form of multiple virtual machines sharing the same physical server at higher utilization, companies can achieve high savings in the form of space, management, and energy.
According to an Accenture report, the following four key factors have enabled Cloud computing to lower energy usage and carbon emissions from ICT. Due to these Cloud features, organizations can reduce carbon emissions by at least 30% per user by moving their applications to the Cloud. These savings are driven by the high efficiency of large-scale Cloud datacenters.
1. Dynamic Provisioning: In the traditional setting, datacenters and private infrastructure used to be maintained to fulfill worst-case demand. Thus, IT companies end up deploying far more infrastructure than needed. There are various reasons for such over-provisioning: a) it is very difficult to predict demand in advance; this is particularly true for Web applications, and b) to guarantee availability of services and to maintain a certain level of service quality for end users. One example of a Web service facing these problems is the Website for the Australian Open Tennis Championship. The Australian Open Website each year receives a significant spike in traffic during the tournament period. The increase in traffic can amount to over 100 times its typical volume (22 million visits in a couple of weeks). To handle such a peak load during a short period in the year, running hundreds of servers throughout the year is not really energy efficient. Thus, infrastructure provisioned with a conservative approach results in unutilized resources. Such scenarios can be readily managed by Cloud infrastructure. The virtual machines in a Cloud infrastructure can be live-migrated to another host in case a user application requires more resources. Cloud providers monitor and predict demand and thus allocate resources accordingly. Those applications that require fewer resources can be consolidated on the same server. Thus, datacenters always maintain the set of active servers according to current demand, which results in lower energy consumption than the conservative approach of over-provisioning (a minimal sketch of this idea appears after this list).
2. Multi-tenancy: Using the multi-tenancy approach, Cloud computing infrastructure reduces overall energy usage and the associated carbon emissions. SaaS providers serve multiple companies on the same infrastructure and software. This approach is obviously more energy efficient than multiple copies of software installed on separate infrastructure. Furthermore, businesses have highly variable demand patterns in general, and hence multi-tenancy on the same server allows the flattening of the overall peak demand, which can minimize the need for extra infrastructure. The smaller fluctuation in demand results in better prediction and greater energy savings.
3. Server Utilization: In general, on-premise infrastructure runs at very low utilization, sometimes as low as 5 to 10 percent average utilization. Using virtualization technologies, multiple applications can be hosted and executed on the same server in isolation, leading to utilization levels of up to 70%. This dramatically reduces the number of active servers. Even though high utilization of a server results in more power consumption, a server running at higher utilization can process more workload with similar power usage.
4. Datacenter Efficiency: As already discussed, the power efficiency of datacenters has a major impact on the total energy usage of Cloud computing. By using the most energy-efficient technologies, Cloud providers can significantly improve the PUE of their datacenters. Today's state-of-the-art datacenter designs for large Cloud service providers can achieve PUE levels as low as 1.1 to 1.2, which is about 40% more power efficient than traditional datacenters. Server design in the form of modular containers, water- or air-based cooling, and advanced power management through power supply optimization are all approaches that have significantly improved PUE in datacenters. In addition, Cloud computing allows services to be moved between multiple datacenters running with better PUE values. This is achieved by using high-speed networks, virtualized services, and the measurement, monitoring, and accounting of datacenters.
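As a rough illustration of the dynamic-provisioning idea in point 1 above, the short Python sketch below (hypothetical capacity and headroom values, not a provider's actual algorithm) keeps only as many servers active as the currently observed demand requires.

import math

# Minimal demand-driven provisioning sketch (illustrative parameters only):
# keep just enough servers active for the observed request rate, plus a
# small headroom, instead of provisioning for the worst case.

REQUESTS_PER_SERVER = 500   # assumed capacity of one active server (req/s)
HEADROOM = 1.2              # assumed 20% safety margin over observed demand

def servers_needed(current_request_rate):
    """Number of servers to keep active for the current demand."""
    return max(1, math.ceil(current_request_rate * HEADROOM / REQUESTS_PER_SERVER))

# Example: a typical day versus a tournament-style traffic spike.
for rate in (800, 4000, 80000):
    print(rate, "req/s ->", servers_needed(rate), "active servers")

Provisioning for the 80,000 req/s spike all year round would keep 192 servers running; scaling with demand keeps only a handful active most of the time.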
4. Towards Energy Efficiency of Cloud Computing: State-of-the-Art
4.1 Applications
The SaaS model has changed the way applications and software are distributed and used. More and more companies are switching to SaaS Clouds to minimize their IT cost. Thus, it has become very important to address energy efficiency at the application level itself. However, this layer has received very little attention, since many applications are already in use and most new applications are upgraded versions of, or are developed using, previously implemented tools. Some of the efforts in this direction target MPI applications, which are designed to run directly on physical machines; their performance on virtual machines is therefore still unclear.
Various power-efficient techniques for software design have been proposed in the literature, but these are mostly for embedded devices. In the development of commercial and enterprise applications, which are designed for the PC environment, energy efficiency is generally neglected. Mayo et al. showed in their study that even simple tasks such as listening to music can consume significantly different amounts of energy on a variety of heterogeneous devices. As these tasks have the same purpose on each device, the results show that the implementation of the task and the system upon which it is performed can have a dramatic impact on efficiency. Therefore, to achieve energy efficiency at the application level, SaaS providers should pay attention to deploying software on the kind of infrastructure that can execute the software most efficiently. This necessitates research and analysis of the trade-off between performance and energy consumption due to the execution of software on multiple platforms and hardware. In addition, energy consumption at the compiler level and code level should be considered by software developers in the design of their future application implementations, using the various energy-efficient techniques proposed in the literature.
4.2 Cloud Software Stack: Virtualization and Provisioning
In the Cloud stack, most works in the literature address the challenges at the IaaS provider level, where the research focus is on scheduling and resource management to reduce the amount of active resources executing the workload of user applications. Consolidation of VMs, VM migration, scheduling, demand projection, heat management and temperature-aware allocation, and load balancing are used as the basic techniques for minimizing power consumption. As discussed in the previous section, virtualization plays an important role in these techniques due to several of its features such as consolidation, live migration, and performance isolation. Consolidation helps in managing the trade-off between performance, resource utilization, and energy consumption. Similarly, VM migration allows flexible and dynamic resource management while facilitating fault management and lowering maintenance cost. Additionally, advancements in virtualization technology have led to a significant reduction in VM overhead, which further improves the energy efficiency of Cloud infrastructure.
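To make the consolidation idea concrete, the following minimal first-fit-decreasing sketch (an assumed simplification; real placement engines also weigh memory, network, SLAs, and migration cost) packs VM CPU demands onto as few hosts as possible so that the remaining hosts can be powered down.

# Minimal first-fit-decreasing consolidation sketch (illustrative only):
# pack normalized VM CPU demands onto as few equally sized hosts as possible.

HOST_CPU_CAPACITY = 1.0  # normalized capacity of one physical server

def consolidate(vm_cpu_demands):
    """Return a list of hosts, each host being the list of VM demands placed on it."""
    hosts = []  # each entry: [remaining_capacity, [placed demands]]
    for demand in sorted(vm_cpu_demands, reverse=True):
        for host in hosts:
            if host[0] >= demand:            # first host with enough room
                host[0] -= demand
                host[1].append(demand)
                break
        else:                                # no existing host fits: power one on
            hosts.append([HOST_CPU_CAPACITY - demand, [demand]])
    return [placed for _, placed in hosts]

# Example: ten lightly loaded VMs fit on three hosts instead of ten.
print(consolidate([0.35, 0.20, 0.45, 0.10, 0.30, 0.25, 0.15, 0.40, 0.05, 0.30]))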
Abdelsalam et al. proposed a power-efficient technique to improve the management of Cloud computing environments. They formulated the management problem as an optimization model aiming at minimizing the total energy consumption of the Cloud while taking SLAs into account. The issue of underutilization and over-provisioning of servers was highlighted by Ranganathan et al. They present a peak power budget management solution to avoid excessive over-provisioning, considering DVS and memory/disk scaling. Several other research works focus on minimizing over-provisioning using
consolidation of virtualized servers. The majority of these works monitor and estimate the resource utilization of applications based on the arrival rate of requests. However, due to multiple levels of abstraction, it is really hard to maintain deployment data for each virtual machine within a Cloud datacenter. Thus, various indirect load estimation techniques are used for the consolidation of VMs.
Although the above consolidation methods can reduce the overall number of resources used to serve user applications, the migration and relocation of VMs for matching application demand can impact the QoS requirements of the user. Since Cloud providers need to satisfy a certain level of service, some works have focused on minimizing energy consumption while reducing the number of SLA violations. One of the first works that dealt with the performance and energy trade-off was by Chase et al., who introduced MUSE, an economy-based system of resource allocation. They proposed a bidding system to deliver the required performance level while switching off unused servers. Kephart et al. addressed the coordination of multiple autonomic managers for power/performance trade-offs using a utility function approach in a non-virtualized environment. Song et al. proposed an adaptive and dynamic scheme for efficient sharing of a server by adjusting resources (specifically, CPU and memory) between virtual machines. At the operating system level, Nathuji et al. proposed a power management system called VirtualPower, integrating power management and virtualization technologies. VirtualPower allows the isolated and independent operation of virtual machines to reduce energy consumption. The soft states are intercepted by the Xen hypervisor and are mapped to changes in the underlying hardware, such as CPU frequency scaling, according to the virtual power management rules.
In addition, there are works on improving the energy efficiency of storage systems. Kaushik et al. presented an energy-conserving, self-adaptive commodity green Cloud storage system called Lightning. The Lightning file system divides the storage servers into Cold and Hot logical zones using data classification. Servers in the Cold zone are then switched to inactive states for energy saving.
4.3 Datacenter Level: Cooling, Hardware, Network, and Storage
Rising energy costs, potential cost savings, and a desire to get more out of existing investments are making today's Cloud providers adopt best practices to make datacenter operations green. To build energy-efficient datacenters, several best practices have been proposed to improve the efficiency of every component, from the electrical systems down to the processor level.
The first level is the smart construction of the datacenter and the choice of its location. There are two major factors here: one is the energy supply and the other is the energy efficiency of the equipment. Hence, datacenters are being constructed in such a way that electricity can be generated using renewable sources such as sun and wind. Currently, datacenter locations are decided based on geographical features: climate, fibre-optic connectivity, and access to a plentiful supply of affordable energy. Since the main concern of Cloud providers is business, the energy source is also seen mostly in terms of cost, not carbon emissions.
Another area of concern within a datacenter is its cooling system, which contributes almost one third of the total energy consumption. Some research studies have shown that uneven temperature within a datacenter can also lead to a significant decline in the reliability of IT systems. In datacenter cooling, two types of approaches are used: air-based and water-based cooling systems. In both approaches, it is necessary that they directly cool the hot equipment rather than the entire
room area. Thus, newer energy-efficient cooling systems have been proposed based on liquid cooling, nano-fluid cooling, and in-server, in-rack, and in-row cooling by companies such as SprayCool. Apart from that, the outside temperature/climate can have a direct impact on the energy requirement of the cooling system. Some datacenters have been constructed where external cool air is used to remove heat from the facility.
Another level at which a datacenter's power efficiency is addressed is the deployment of new power-efficient servers and processors. Low-energy processors can reduce the power usage of IT systems to a great degree. Many new energy-efficient server models are currently available in the market from vendors such as AMD, Intel, and others, each offering good performance per watt. These server architectures enable slowing down CPU clock speeds (clock gating) or powering off parts of the chips (power gating) if they are idle. Further enhancement in energy saving and in computing per watt can be achieved by using multi-core processors. For instance, in Sun's multicore line, each 32-thread Niagara chip (UltraSPARC T1) consumes about 60 watts, while two Niagara chips provide 64 threads and run at about 80 watts. However, exploiting the power efficiency of multi-core systems requires software that can run in a multi-CPU environment. Here, virtualization technologies play an important role. Similarly, consolidation of storage systems helps to further reduce the energy requirements of IT systems. For example, Storage Area Networks (SAN) allow the building of an efficient storage network that consolidates all storage. The use of energy-efficient disks, such as tiered storage (solid-state, SATA, SAS), allows better energy efficiency.
The power supply unit is another piece of infrastructure that needs to be designed in an energy-efficient manner. Its task is to feed the server resources with power by converting the high-voltage alternating current (AC) from the power grid to the low-voltage direct current (DC) that most electric circuits (e.g., computers) require. The circuits inside the Power Supply Unit (PSU) inevitably lose some energy in the form of heat, which is dissipated by additional fans inside the PSU. The energy efficiency of a PSU mainly depends on its load, the number of circuits, and other conditions (e.g., temperature). Hence, a PSU which is labeled as 80% efficient is not necessarily that efficient at all power loads. For example, low power loads tend to be the most energy-inefficient ones. Thus, a PSU can be just 60% efficient at 20% of power load. Some studies have found that PSUs are one of the most inefficient components in today's datacenters, as many servers are still shipped with low-quality, 60 to 70 percent efficient power supplies. One possible solution offered is to replace all PSUs with ENERGY STAR certified ones. This certificate is given to PSUs which guarantee a minimum of 80% efficiency at any power load.
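As a simple illustration of why low-load efficiency matters, the sketch below (hypothetical load and efficiency figures, consistent with the ranges mentioned above) computes how much power a server draws from the grid, and how much is lost as heat, at different PSU efficiencies.

# Illustrative PSU loss calculation (hypothetical values).

def grid_power_and_loss(dc_load_watts, efficiency):
    """Power drawn from the grid and power lost as heat for a given PSU efficiency."""
    grid_power = dc_load_watts / efficiency
    return grid_power, grid_power - dc_load_watts

for eff in (0.60, 0.80, 0.92):
    grid, loss = grid_power_and_loss(200.0, eff)   # 200 W of useful DC load
    print("efficiency", eff, "->", round(grid), "W from grid,", round(loss), "W lost as heat")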
4.4 Monitoring/Metering
It is said that you cannot improve what you do not measure. It is essential to construct power models that allow the system to know the energy consumed by a particular device and how it can be reduced. To measure the unified efficiency of a datacenter and improve its performance per watt, the Green Grid has proposed two specific metrics known as Power Usage Effectiveness (PUE) and Datacenter Infrastructure Efficiency (DCiE):
PUE = Total Facility Power / IT Equipment Power
DCiE = 1/PUE = (IT Equipment Power / Total Facility Power) x 100%
Here, the Total Facility Power is defined as the power measured at the utility meter that is dedicated solely to the datacenter. The IT Equipment Power is defined as the power consumed in the management, processing, storage, and routing of data within the datacenter. PUE and DCiE are the most common metrics designed to compare the efficiency of datacenters. There are many systems in the marketplace for such measurements. For instance, SunSM Eco Services measures at a higher level rather than attempting to measure each individual device's power consumption. For measuring and modeling the power usage of storage systems, researchers from IBM [44] have proposed a scalable enterprise storage modelling framework called STAMP. It sidesteps the need for detailed traces by using interval performance statistics and a power table for each disk model. STAMP takes into account controller caching and algorithms, including protection schemes, and adjusts the workload accordingly. To measure the power consumed by a server (e.g., PowerEdge R610), the Intelligent Platform Management Interface (IPMI) has been proposed. This framework provides a uniform way to access the power-monitoring sensors available on recent servers. The interface, being independent of the operating system, can be accessed despite operating system failures and without the need for the server to be powered on (i.e., connection to the power grid is enough). Further, intelligent power distribution units (PDUs), traditional power meters (e.g., the Watts Up Pro power meter), and ACPI-enabled power supplies can be used to measure the power consumption of the whole server.
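Returning to the two datacenter-level metrics defined at the start of this subsection, the following minimal sketch (illustrative meter readings only) applies the PUE and DCiE definitions directly.

# Compute PUE and DCiE exactly as defined above (illustrative readings).

def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

def dcie_percent(total_facility_kw, it_equipment_kw):
    return it_equipment_kw / total_facility_kw * 100.0

total_kw, it_kw = 1800.0, 1200.0   # hypothetical metered values
print("PUE  =", round(pue(total_kw, it_kw), 2))            # 1.5
print("DCiE =", round(dcie_percent(total_kw, it_kw)), "%")  # 67 %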
4.5 Network Infrastructure
As discussed previously, at the network level, energy efficiency is achieved either at the node level (i.e., the network interface card) or at the infrastructure level (i.e., switches and routers). The energy efficiency issue in networking is usually referred to as "green networking", which relates to embedding energy-awareness in the design, the devices, and the protocols of networks. There are four classes of solutions offered in the literature, namely resource consolidation, virtualization, selective connectedness, and proportional computing. Resource consolidation helps in regrouping under-utilized devices to reduce global consumption. Similar to consolidation, selective connectedness of devices consists of distributed mechanisms which allow single pieces of equipment to go idle for some time, as transparently as possible from the rest of the networked devices. The difference between resource consolidation and selective connectedness is that consolidation applies to resources that are shared within the network infrastructure, while selective connectedness allows turning off unused resources at the edge of the network. Virtualization, as discussed before, allows more than one service to operate on the same piece of hardware, thus improving hardware utilization. Proportional computing can be applied to a system as a whole, to network protocols, as well as to individual devices and components. Dynamic Voltage Scaling and Adaptive Link Rate are typical examples of proportional computing. Dynamic Voltage Scaling reduces the energy state of the CPU as a function of the system load, while Adaptive Link Rate applies a similar concept to network interfaces, reducing their capacity, and thus their consumption, as a function of the link load. The survey by Bianzino et al. gives more details about work in the area of green networking.
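The proportional-computing idea can be sketched as follows: link (or CPU) power is made to track utilization instead of staying at its peak value. The rate/power pairs below are hypothetical, chosen only to illustrate the Adaptive Link Rate concept, not vendor data.

# Hypothetical Adaptive Link Rate sketch: select the lowest link rate that can
# carry the offered load, so power tracks utilization instead of staying at peak.

LINK_MODES = [(0.1, 2.0), (1.0, 4.0), (10.0, 15.0)]  # (rate in Gb/s, power in W)

def link_power(offered_load_gbps):
    for rate, power in LINK_MODES:
        if offered_load_gbps <= rate:
            return power
    return LINK_MODES[-1][1]   # saturate at the highest mode

for load in (0.05, 0.8, 6.0):
    print("load", load, "Gb/s ->", link_power(load), "W")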
5. Green Cloud Architecture
The above study of current efforts in making Cloud computing energy efficient shows that, even though researchers have made various components of the Cloud efficient in terms of power and performance, a unified picture is still lacking. Most efforts toward the sustainability of Cloud computing have missed the network's contribution. If file sizes are quite large, the network will become a major contributor to energy consumption; it may then be greener to run the application locally than in the Cloud. Furthermore, many works focus on just one particular component of Cloud computing while neglecting the effects on others, which may not result in overall energy efficiency. For example, VM consolidation may reduce the number of active servers, but it puts an excessive load on a few servers, where heat distribution can become a major issue. Some other works focus only on redistributing the workload to support energy-efficient cooling without considering the effect of virtualization. In addition, Cloud providers, being profit oriented, are looking for solutions which can reduce power consumption, and thus carbon emissions, without hurting their market. Therefore, we provide a unified solution to enable green Cloud computing. We propose a Green Cloud framework, which takes into account these provider goals while curbing the energy consumption of Clouds. A high-level view of the green Cloud architecture is given in the figure. The goal of this architecture is to make the Cloud green from both the user's and the provider's perspective.
In the Green Cloud architecture, users submit their Cloud service requests through a new middleware, the Green Broker, which manages the selection of the greenest Cloud provider to serve the user's request. A user service request can be of three types, i.e., software, platform, or infrastructure. Cloud providers can register their services in the form of 'green offers' to a public directory which is accessed by the Green Broker. The green offers consist of green services, pricing, and the time when they should be accessed for the least carbon emission. The Green Broker gets the current status of the energy parameters for using various Cloud services from the Carbon Emission Directory. The Carbon Emission Directory maintains all the data related to the energy efficiency of Cloud services. This data may include the PUE and cooling efficiency of the Cloud datacenter which is providing the service, the network cost, and the carbon emission rate of the electricity. The Green Broker calculates the carbon emission of all the Cloud providers who are offering the requested Cloud service. Then, it selects the set of services that will result in the least carbon emission and buys these services on behalf of users.
The Green Cloud framework is designed so that it keeps track of the overall energy used to serve a user request. It relies on two main components, the Carbon Emission Directory and the green Cloud offers, which keep track of the energy efficiency of each Cloud provider and also give Cloud providers an incentive to make their services "green". From the user side, the Green Broker plays a crucial role in monitoring and selecting Cloud services based on the user's QoS requirements and in ensuring minimum carbon emission for serving the user. In general, a user can use the Cloud to access any of the three types of services (SaaS, PaaS, and IaaS), and therefore the process of serving them should also be energy efficient.
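A minimal sketch of the selection step performed by the Green Broker is given below; the directory entries, field names, and the simple emission estimate are illustrative assumptions, not the exact model used in the framework.

# Illustrative Green Broker selection: choose the registered provider whose
# 'green offer' yields the lowest estimated carbon emission for a request.

providers = [  # the kind of data the Carbon Emission Directory might hold
    {"name": "DC-A", "pue": 1.2, "carbon_kg_per_kwh": 0.50, "price": 0.09},
    {"name": "DC-B", "pue": 1.6, "carbon_kg_per_kwh": 0.20, "price": 0.11},
    {"name": "DC-C", "pue": 1.1, "carbon_kg_per_kwh": 0.80, "price": 0.07},
]

def estimated_emission(provider, it_energy_kwh):
    """kg CO2 for serving a request needing it_energy_kwh of IT energy."""
    facility_energy = it_energy_kwh * provider["pue"]
    return facility_energy * provider["carbon_kg_per_kwh"]

def select_greenest(providers, it_energy_kwh, max_price):
    eligible = [p for p in providers if p["price"] <= max_price]
    return min(eligible, key=lambda p: estimated_emission(p, it_energy_kwh))

choice = select_greenest(providers, it_energy_kwh=50.0, max_price=0.12)
print(choice["name"], round(estimated_emission(choice, 50.0), 1), "kg CO2")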
6. Case Study: IaaS Provider
In this section, we describe a case study to illustrate the working of the proposed green architecture and to highlight the importance of considering the unified picture when reducing the energy and carbon emissions of Cloud infrastructure. The case study
focuses on IaaS service providers. Our experimental platform consists of multiple Cloud providers who offer computational resources to execute users' HPC applications. A user request consists of the application, its estimated length in time, and the number of resources required. These applications are submitted to the Green Broker, which acts as an interface to the Cloud infrastructure and schedules applications on behalf of users, as shown in Figure 7. The Green Broker interprets and analyzes the service requirements of a submitted application and decides where to execute it. As discussed, the Green Broker's main objective is to schedule applications such that CO2 emissions are reduced and profit is increased, while the Quality of Service (QoS) requirements of the applications are met. As Cloud datacenters are located in different geographical regions, they have different CO2 emission rates and energy costs depending on regional constraints. Each datacenter is responsible for updating this information in the Carbon Emission Directory to facilitate energy-efficient scheduling. The Green Broker can apply one of the following scheduling policies (a minimal sketch of policy (a) appears after this list):
a) Greedy Minimum Carbon Emission (GMCE): User applications are assigned to Cloud providers in a greedy manner based on their carbon emission.
b) Minimum Carbon Emission - Minimum Carbon Emission (MCE-MCE): A double greedy policy where applications are assigned to the Cloud providers with the minimum carbon emission due to their datacenter location and the minimum carbon emission due to application execution.
c) Greedy Maximum Profit (GMP): User applications are assigned in a greedy manner to the provider who executes the application fastest and earns the maximum profit.
d) Maximum Profit - Maximum Profit (MP-MP): A double greedy policy considering the profit made by Cloud providers and whether the application finishes by its deadline.
e) Minimizing Carbon Emission and Maximizing Profit (MCE-MP): The broker tries to schedule applications to those providers which result in the minimization of total carbon emission and the maximization of profit.
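As referenced above, the following sketch conveys the flavour of policy (a), GMCE; the provider attributes and the per-job emission estimate are hypothetical simplifications of the actual scheduling model used in the case study.

# Illustrative greedy minimum-carbon-emission (GMCE) assignment: each
# application goes to the provider with the lowest estimated emission for it.

providers = [
    {"name": "P1", "carbon_kg_per_kwh": 0.9, "power_kw_per_core": 0.20},
    {"name": "P2", "carbon_kg_per_kwh": 0.3, "power_kw_per_core": 0.35},
]

def job_emission(provider, cores, hours):
    energy_kwh = cores * hours * provider["power_kw_per_core"]
    return energy_kwh * provider["carbon_kg_per_kwh"]   # kg CO2

def gmce_schedule(jobs):
    plan = []
    for cores, hours in jobs:
        best = min(providers, key=lambda p: job_emission(p, cores, hours))
        plan.append((cores, hours, best["name"]))
    return plan

print(gmce_schedule([(16, 2.0), (64, 10.0)]))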
7. Conclusions and Future Directions
Cloud computing's business potential and its contribution to the already aggravated carbon emissions from ICT have led to a series of discussions on whether Cloud computing is really green. It is forecast that the environmental footprint of datacenters will triple between 2002 and 2020; it is currently 7.8 billion tons of CO2 per year. Some reports on the green IT analysis of Clouds and datacenters show that Cloud computing is "green", while others show that it will lead to an alarming increase in carbon emission. Thus, in this chapter, we first analyzed the benefits offered by Cloud computing by studying its fundamental definitions and benefits, the services it offers to end users, and its deployment models. Then, we discussed the components of Clouds that contribute to carbon emission and the features of Clouds that make them "green". We also discussed several research efforts and technologies that increase the energy efficiency of various aspects of Clouds. Through this study, we identified several unexplored areas that can help in maximizing the energy efficiency of Clouds from a holistic perspective. After analyzing the shortcomings of previous solutions, we proposed a Green Cloud framework and presented some results for its validation. Even though our Green Cloud framework embeds various features to make Cloud computing much greener, many technological solutions are still required to make it a reality:
• First, efforts are required in designing software at various levels (OS, compiler, algorithm, and application) that facilitates system-wide energy efficiency. Although SaaS providers may still use already-implemented software, they need to analyze the runtime behavior of applications. The gathered empirical data can be used in energy-efficient scheduling and resource provisioning. Compilers and operating systems need to be designed in such a way that resources can be allocated to applications based on the required level of performance, so that the performance versus energy consumption trade-off can be managed.
• To enable green Cloud datacenters, Cloud providers need to understand and measure existing datacenter power and cooling designs, the power consumption of servers and their cooling requirements, and equipment resource utilization, in order to achieve maximum efficiency. In addition, modeling tools are required to measure the energy usage of all the components and services of the Cloud, from the user's PC to the datacenter where Cloud services are hosted.
• For designing holistic solutions for the scheduling and resource provisioning of applications within the datacenter, all factors such as cooling, network, memory, and CPU should be considered. For instance, consolidation of VMs, even though an effective technique to minimize the overall power usage of a datacenter, also raises issues related to the redundancy and placement geo-diversity required to fulfill SLAs with users. It is obvious that the last thing a Cloud provider wants is to lose its reputation through bad service or violation of promised service requirements.
• Last but not least, the responsibility also falls on both providers and customers to make sure that emerging technologies do not bring irreversible changes which can threaten the health of human society. The way end users interact with applications also has a very real cost and impact. For example, purging unsolicited emails can eliminate the energy wasted in storage and networks. Similarly, if Cloud providers want to provide a truly green and renewable Cloud, they must deploy their datacenters near renewable energy sources and maximize green energy usage in their already established datacenters. Before adding new technologies such as virtualization, a proper analysis of their overhead should be done to ensure a real benefit in terms of energy efficiency.
In conclusion, by simply improving the efficiency of equipment, Cloud computing cannot be claimed to be green. What is important is to make its usage more carbon efficient from both the user's and the provider's perspective. Cloud providers need to reduce the electricity demand of Clouds and take major steps toward using renewable energy sources rather than just looking for cost minimization.
REFERENCES:
[1] Gleeson, E. 2009. Computing industry set for a shocking change. Retrieved January 10, 2010 from http://www.moneyweek.com/investment-advice/computing-industry-set-for-a-shocking-change-43226.aspx
[2] Buyya, R., Yeo, C.S. and Venugopal, S. 2008. Market-oriented Cloud computing: Vision, hype, and reality for delivering IT services as computing utilities. Proceedings of the 10th IEEE International Conference on High Performance Computing and Communications, Los Alamitos, CA, USA.
[3] New Datacenter Locations. 2008. http://royal.pingdom.com/2008/04/11/map-of-all-google-data-center-locations/
[4] Bianchini, R., and Rajamony, R. 2004, Power and energy management for server
systems, Computer, 37 (11) 68-74.
[5] Rivoire, S., Shah, M. A., Ranganathan, P., and Kozyrakis, C. 2007. Joulesort: a balanced
energy-efficiency benchmark, Proceedings of the 2007 ACM SIGMOD International
Conference on Management of Data, NY, USA.
[6] Greenpeace International. 2010. Make IT Green. http://www.greenpeace.org/international/en/publications/reports/make-it-green-Cloud-computing/
[7] Accenture Microsoft Report. 2010. Cloud Computing and Sustainability: The Environmental Benefits of Moving to the Cloud. http://www.wspenvironmental.com/media/docs/newsroom/Cloud_computing_and_Sustainability_-_Whitepaper_-_Nov_2010.pdf
[8] Mell, P. and Grance, T. 2009. The NIST Definition of Cloud computing, National
Institute of Standards and Technology.
[9] Google App Engine. 2010. http://code.google.com/appengine/.
[10] Vecchiola, C., Chu, X. and Buyya, R. 2009. Aneka: A Software Platform for .NET-based
Cloud Computing. In High Performance & Large Scale computing, Advances in Parallel
Computing, ed. W. Gentzsch, L. Grandinetti and G. Joubert, IOS Press.
Information and Communications Technology as a General-Purpose Technology
Mrs. Vaishali Bodade
(M.Sc. Information Technology)
ASM's CSIT (Computer Science Department)
ABSTRACT:
Many people point to information and communications technology (ICT) as the key for understanding the acceleration in productivity in the United States since the mid-1990s. Stories of ICT as a 'general purpose technology' suggest that measured TFP should rise in ICT-using sectors (reflecting either unobserved accumulation of intangible organizational capital, spillovers, or both), but with a long lag. Contemporaneously, however, investments in ICT may be associated with lower TFP as resources are diverted to reorganization and learning. We find that U.S. industry results are consistent with GPT stories: the acceleration after the mid-1990s was broadbased, located primarily in ICT-using industries rather than ICT-producing industries. Furthermore, industry TFP accelerations in the 2000s are positively correlated with (appropriately weighted) industry ICT capital growth in the 1990s. Indeed, as GPT stories would suggest, after controlling for past ICT investment, industry TFP accelerations are negatively correlated with increases in ICT usage in the 2000s.
Prepared for the Conference on "The Determinants of Productivity Growth". This paper draws heavily on and updates earlier work by the authors. We thank conference participants for helpful comments and for excellent research assistance. The views expressed in this paper are those of the authors and do not necessarily represent the views of others affiliated with the Federal Reserve System.
1. Introduction
After the mid-1990s, both labor and total factor productivity (TFP) accelerated in the United States. A large body of work has explored the sources and breadth of the U.S. acceleration. Much of this research focuses on the role of information and communications technology (ICT).1
In this paper, we undertake two tasks. First, we undertake detailed growth accounting at an industry level for data from 1987-2004. Second, we use these results to show that the simple ICT explanation for the U.S. TFP acceleration is incomplete at best. In standard neoclassical growth theory, the use of ICT throughout the economy leads to capital deepening, which boosts labor productivity in ICT-using sectors, but does not change TFP in sectors that only use but do not produce ICT. TFP growth in producing ICT goods shows up directly in the economy's aggregate TFP growth. From the perspective of neoclassical economics, there is no reason to expect an acceleration in the pace of TFP growth outside of ICT production. But, consistent with a growing body of literature, we find that the TFP acceleration was, in fact, broadbased, not narrowly located in ICT production. Basu, Fernald and Shapiro (2001), in an early study, found a quantitatively important acceleration outside of manufacturing. Triplett and Bosworth (2006, though the original working paper was 2002) highlighted the finding that the late-1990s TFP acceleration was due, in a proximate sense, to the performance of the service sector.
Since these early studies, there have been several rounds of major data revisions by the Bureau of Economic Analysis that changed the details of the size and timing of the measured acceleration in different sectors, but did not affect the overall picture. One recent study uses aggregate data (plus data on the relative prices of various high-tech goods) and estimates that in the 2000-2005 period, the acceleration in TFP is completely explained by non-ICT-producing sectors. Jorgenson, Ho and Stiroh (2006) undertake a similar exercise and reach a similar conclusion. Indeed, both papers find that TFP growth in ICT production slowed down from its rapid pace of the late 1990s. Using industry-level data, Corrado et al (2006) and Bosworth and Triplett (2006) find that non-ICT-producing sectors saw a sizeable acceleration in TFP in the 2000s, whereas TFP growth slowed in ICT-producing sectors in the 2000s. In the data for the current paper, sectors such as ICT production, finance and insurance, and wholesale and retail trade accelerated after the mid-1990s; TFP growth in those sectors remained relatively strong in the 2000s even as other sectors finally saw an acceleration.
The broadbased acceleration raises a puzzle. According to standard neoclassical production theory, which underlies almost all the recent discussions of this issue, factor prices do not shift production functions. Thus, if the availability of cheaper ICT capital has increased TFP in industries that use, but do not produce, ICT equipment, then it has done so via a channel that neoclassical economics does not understand well. We discuss theories of ICT as a general purpose technology (GPT), in an effort to see if these theories can explain the puzzle of why measured TFP accelerated in ICT-using industries.
The main feature of a GPT is that it leads to fundamental changes in the production process of those using the new invention (see, e.g., Helpman and Trajtenberg, 1998). For example, Chandler (1977) discusses how railroads transformed retailing by allowing nationwide catalog sales. David and Wright (1999) also discuss historical examples. Indeed, the availability of cheap ICT capital allows firms to deploy their other inputs in radically different and productivity-enhancing ways. In so doing, cheap computers and telecommunications equipment can foster an ever-expanding sequence of complementary inventions in industries using ICT. These complementary inventions cause the demand curve for ICT to shift further and further out, thereby offsetting the effects of diminishing returns.
As Basu, Fernald, Oulton and Srinivasan (2003; henceforth BFOS) highlight, ICT itself may be able to explain the measured acceleration in TFP in sectors that are ICT users. In their model, reaping the full benefits of ICT requires firms to accumulate a stock of intangible knowledge capital. For example, faster information processing might allow firms to think of new ways of communicating with suppliers or arranging distribution systems. These investments may include resources diverted to learning; they may involve purposeful innovation arising from R&D. The assumption that complementary investments are needed to derive the full benefits of ICT is supported both by GPT theory and by firm-level evidence. Since (intangible) capital accumulation is a slow process, the full benefits of the ICT revolution show up in the ICT-using sectors with significant lags. Note that the BFOS story hews as closely as possible to neoclassical assumptions while explaining the puzzle of TFP growth in ICT-using industries. From the perspective of a firm, the story is essentially one of neoclassical capital accumulation. If growth accounting could include intangible capital as an input to production, then it would show no technical change in ICT-using industries. (Of course, measuring intangible capital directly is very difficult at best; see Corrado, Hulten and Sichel (2006).) But the story can easily be extended to include non-neoclassical features that
would explain true technical progress in ICT-using industries via other mechanisms, such as spillovers. Indeed, to the extent that much of the intangible capital accumulated by ICT users is knowledge, which is a non-rival good, it would be natural to expect spillovers. For example, the innovations that have made Amazon.com and Walmart market leaders could presumably be imitated at a fraction of the cost it took to develop these new ideas in the first place, at least in the long run.
We assess whether the acceleration in measured TFP is related to the use of ICT. We write down a simple model to motivate our empirical work. The model predicts that observed investments in ICT are a proxy for unobserved investments in reorganization or other intangible knowledge. In this model, the productivity acceleration should be positively correlated with lagged ICT capital growth but negatively correlated with current ICT capital growth (with these growth rates 'scaled' by the share of ICT capital in output). Note that the unconditional correlation between the productivity acceleration and either ICT capital growth or the ICT capital share can be positive, negative, or zero.
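In schematic form (our own notation; a sketch of the specification the model implies rather than the exact estimating equation of the paper), this prediction can be written as

\Delta \dot{a}_{i} = \alpha + \beta_1 \, s^{ICT}_{i}\,\hat{k}^{ICT}_{i,\,lagged} + \beta_2 \, s^{ICT}_{i}\,\hat{k}^{ICT}_{i,\,current} + \varepsilon_i, \qquad \beta_1 > 0, \ \beta_2 < 0,

where \Delta \dot{a}_{i} is industry i's TFP acceleration, \hat{k}^{ICT}_{i} is ICT capital growth, and s^{ICT}_{i} is the ICT capital (revenue) share used as the weight.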
In the data, we find results that support the joint hypothesis that ICT is a GPT, i.e., that complementary investment is important for realizing the productivity benefits of ICT investment, and that, since these complementary investments are unmeasured, they can help explain the cross-industry and aggregate TFP growth experience of the U.S. in the 1990s. Specifically, we find that industries that had high ICT capital growth rates in the 1987-2000 period (weighted by ICT revenue shares, as suggested by theory) also had a faster acceleration in TFP growth in the 2000s. Controlling for lagged capital growth, however, ICT capital growth in the 2000s was negatively correlated with contemporaneous TFP growth. These results are consistent with, indeed predicted by, the simple model that we present. The paper is structured as follows. We present preliminary empirical results from industry-level growth accounting in Section 2, and document the puzzle we note above. We then present a simple model of intangible capital investment in Section 3, and show how measured inputs, especially ICT investment, can be used to derive a proxy for unmeasured investment in intangibles. We test the key empirical implications of the model in Section 4. Conclusions, caveats and ideas for future research are collected in Section 5.
2. Data and preliminary empirical results
We begin by establishing stylized facts from standard growth accounting. We focus on disaggregated, industry-level results for total factor productivity. We first describe our data set briefly, and then discuss results. We use a 40-industry dataset that updates that used in Basu, Fernald, and Shapiro (2001), Triplett and Bosworth (2006), and BFOS (2003). The data run from 1987-2004 on a North American Industry Classification System (NAICS) basis. For industry gross output and intermediate-input use, we use industry-level national accounts data from the Bureau of Economic Analysis. For capital input, including detailed ICT data, we use Bureau of Labor Statistics capital input data by disaggregated industry. For labor input, we use unpublished BLS data on hours worked by two-digit industry.3 Several comments are in order. First, there are potential differences in how the conversion from SIC to NAICS has been implemented across agencies; see Bosworth and Triplett (2006) and Corrado et al (2006) for a discussion. Second, we do not have industry measures of labor quality, only raw hours, as estimated by the BLS. Third, we aggregate industries beyond what is strictly necessary, in part because of concern that industry matches across data sources are not as good at lower
levels of aggregation. (For example, in some cases, our BLS estimate of the capital compensation share in a sub-industry substantially exceeded the implied BEA figure, whereas in another sub-industry the share fell substantially short; once aggregated, the BLS figure was close to, i.e., only slightly smaller than, the BEA figure, as expected.)
Table 1 provides standard estimates of TFP for various aggregates, including the 1-digit industry level. The first three columns show TFP growth, in value-added terms, averaged over different time periods. Since aggregate TFP is a value-added concept, we present industry TFP in value-added terms as well; by controlling for differences in intermediate input intensity, these figures are 'scaled' to be comparable to the aggregate figures. The next two columns show the acceleration, first from 1987-95 to 1995-2000, and then from 1995-2000 to 2000-04. The final
two columns show the average share of intermediate inputs in gross output and the sector's nominal share of aggregate value-added.
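The 'scaling' referred to above is, under standard assumptions, the usual conversion from gross-output TFP growth to value-added TFP growth (our sketch, in generic notation):

\Delta \ln TFP^{VA}_{i} = \frac{\Delta \ln TFP^{GO}_{i}}{1 - s^{M}_{i}},

where s^{M}_{i} is industry i's average share of intermediate inputs in gross output.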
The top line shows an acceleration of about ½ percentage point in the second half of the 1990s, and then a further acceleration of about ¾ percentage point in the 2000s. The other lines show various sub-aggregates, including the 1-digit NAICS level. It is clear that in our dataset, the acceleration was broad-based. First, suppose we focus on the non-ICT-producing sectors (fourth line from the bottom). They show a very small acceleration in the late 1990s (from 0.70 to 0.84 percent per year), but then a much larger acceleration in the 2000s (to an average of 2.00 percent per year). In contrast, ICT-producing industries saw a sharp acceleration in TFP in the late 1990s but then some deceleration in the 2000s.
A more detailed analysis of the non-ICT sectors shows more heterogeneity in the timing of the TFP acceleration. For example, trade and finance accelerated in the mid-1990s and growth then remained strong in the 2000s. Non-durable manufacturing, business services, and information slowed in the mid-1990s before accelerating in the 2000s. Nevertheless, by the 2000s, most sectors show an acceleration relative to the pre-1995 period (mining, utilities, and insurance are exceptions).
Some argue that real output in many service industries is poorly measured: for example, there are active debates on how, conceptually, to measure the nominal and real output of a bank5; in health care, the hedonic issues are notoriously difficult. Nordhaus argues for focusing on the 'well measured' (or at least 'better measured') sectors of the economy. The acceleration in TFP in well-measured industries (third line from the bottom) took place primarily in the 1990s with little further acceleration in the 2000s; but excluding ICT-producing sectors, the acceleration is spread out over the 1995-2004 period.
In the short term, non-technological factors can change measured industry TFP. These factors include non-constant returns to scale and variations in factor utilization. Basu, Fernald and Shapiro (BFS, 2001) argue that cyclical mismeasurement of inputs plays little if any role in the U.S. acceleration of the late 1990s. BFS also find little role in the productivity acceleration for deviations from constant returns and perfect competition.
In the early 2000s, some commentators suggested that, because of uncertainty, firms were hesitant to hire new workers; as a result, one might conjecture that firms worked their existing labor force more intensively in order to get more labor input. But typically, one would expect that firms would push their workers to work longer as well as harder; this is the basic intuition underlying the use of hours-per-worker as a utilization proxy in Basu and Kimball (1997), BFS, and Basu, Fernald, and Kimball (2006). In the 2000s, however, when productivity growth was particularly strong, hours per worker remained low.
BFS do find a noticeable role for traditional adjustment costs associated with investment. When investment rose sharply in the late 1990s, firms were, presumably, diverting an increasing amount of worker time to installing the new capital rather than producing marketable output. This suggests that true technological progress was faster than measured. In contrast, investment was generally weak in the early 2000s, suggesting that there was less disruption associated with capital installation. Nevertheless, the magnitude of this effect appears small for reasonable calibrations of adjustment costs. Applying the BFS correction would raise the U.S. technology acceleration from 1995-2000 by about 0.3 percentage points per year, but would have a negligible effect from 2000-2004. Hence, the investment reversal could potentially explain some portion of the second wave of acceleration, but not all of it.6 These adjustment-cost considerations strengthen the conclusion that the technology acceleration was broad-based, since service and trade industries invested heavily in the late 1990s and, hence, paid a lot of investment adjustment costs.
3. Industry-Level Productivity Implications of ICT as a New GPT
The U.S. productivity acceleration in the late 1990s coincided with accelerated price declines for computers and semiconductors. But, as we just saw, much of the TFP acceleration appears to have taken place in the 2000s, and outside of ICT production. Can ICT somehow explain the measured TFP acceleration in industries using ICT? We first discuss broad
theoretical considerations of treating ICT as a new General-Purpose Technology (GPT), and
then present a simple model to clarify the issues and empirical implications.
3.1 General Purpose Technologies and Growth Accounting
Standard neoclassical growth theory suggests several direct channels for ICT to affect aggregate labor and total factor productivity growth. First, faster TFP growth in producing ICT contributes directly to aggregate TFP growth. Second, by reducing the user cost of capital, falling ICT prices induce firms to increase their desired capital stock.7 This use of ICT contributes directly to labor productivity through capital deepening.
Growth accounting itself does not take a stand on the deep causes of innovation and TFP. Neoclassical growth theory generally takes technology as exogenous, but this is clearly a modeling shortcut, appropriate for some but not all purposes. Endogenous growth theories, in contrast, generally presume that innovation results from purposeful investments in knowledge or human capital, possibly with externalities.
We interpret ICT's general purpose nature in the spirit of the neoclassical growth model, since the GPT arrives exogenously (i.e., technological progress in ICT production is exogenous). ICT users respond in a neoclassical way: firms respond to faster, more powerful computers and software by reorganizing and accumulating intangible organizational capital. Measured TFP, which omits this intangible organizational investment as output, and the service flow from organizational capital as an input, is also affected.
Our motivation for viewing ICT this way is the many microeconomic, firm-level, and anecdotal studies suggesting an important, but often indirect and hard to foresee, role for ICT in affecting measured production and productivity in sectors using ICT. Conceptually, we separate these potential links into two categories: purposeful co-invention, which we interpret as the accumulation of "complementary organizational capital" and which leads to mismeasurement of true technology; and externalities of one sort or another. For example, Bresnahan and Trajtenberg (1995) and Helpman and Trajtenberg (1998) suggest that innovations in ICT cause unexpected ripples of co-invention and co-investment in sectors that seem almost arbitrarily far away.
First, firm-level studies suggest that benefiting from ICT investments requires substantial and costly co-investments in complementary capital, with long and variable lags.8 For example, firms that use computers more intensively may reorganize production, thereby creating 'intangible capital' in the form of organizational knowledge. Such investments include resources diverted to learning, or purposeful innovation arising from R&D. As Bresnahan (undated) argues, "advances in ICT shift the innovation possibility frontier of the economy rather than directly shifting the production frontier."
The resulting "organizational capital" is analogous to physical capital in that companies accumulate it in a purposeful way. Conceptually, we think of this unobserved complementary capital as an additional input into a standard neoclassical production function. Second, the GPT literature suggests the likelihood of sizeable externalities to ICT. For example, successful new managerial ideas, including those that take advantage of ICT, such as the use of a new business information system, seem likely to diffuse to other firms. Imitation may be easier and less costly than the initial co-invention of, say, a new organizational change, because you learn by watching and analyzing the experimentation, the successes and, importantly, the mistakes made by others.10 Indeed, firms that don't use computers more intensively might
also benefit from spillovers of intangible capital. For example, if there are sizeable spillovers
to R&D, and if R&D is more productive with better computers, then even firms that don’t use
computers intensively may benefit from the knowledge created by computers
The first set of considerations is completely consistent with the traditional growth
accounting framework but suggests difficulties in implementation and interpretation. In
particular, these considerations suggest that the production function is mismeasured because
we do not observe all inputs (the service flow from complementary, intangible capital) or all
outputs (the investment in complementary capital). Hence, TFP is mismeasured. The second
set of ideas, related to externalities, suggests that ICT might also explain “true” technology.
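To make the mismeasurement point concrete, here is a stylized growth-accounting sketch in our own notation (an illustration only, not an equation reproduced from the model section): let Δy, Δa and Δc denote the growth rates of measured output, unmeasured intangible investment and intangible capital services, let w_A be the (unmeasured) output share of intangible investment and s_C the factor share of intangible capital. Then, approximately,

\Delta tfp^{measured} \;\approx\; \Delta z^{true} \;+\; s_C\,\Delta c \;-\; w_A\,(\Delta a - \Delta y),

so measured TFP overstates true technology growth Δz when intangible capital services are growing rapidly, and understates it when intangible investment grows faster than measured output.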
Empirically, the challenge is to infer unobserved complementary investments. We now
turn to a formal model that suggests variables that might proxy for these unobservables. Of
course, our interpretation of the results will be clouded by our uncertainty about whether our
proxies capture only neoclassical investment in unobserved organizational capital, or
whether the proxies affect TFP directly through spillovers.
3.2 Industry-Level Implications of ICT as a New GPT: A Simple Model
Many papers modeling the effects of GPTs are motivated by the ICT revolution. But it
is difficult to derive industry-level empirical implications from this literature. For example, it
is often unclear how to measure in practice some of the key variables, such as unobserved
investment and capital; and even for observed variables, measurement conventions often
depart from those used in national accounting.
On the other hand, conventional industry-level growth-accounting studies of the sort
reviewed and extended in Section 3 are typically hard to interpret in terms of GPT
considerations because they generally lack a conceptual framework to interpret movements in
TFP. Although some studies try to look for a “new economy” in which ICT has indirect effects
on measured TFP in ICT-using industries, in the absence of clear theoretical guidance, it is not
clear that many would know if they had, in fact, found it.
Finally, the empirical literature using firm-level data or case studies stresses the
importance and costly nature of the organizational change accompanying ICT investment. This
literature is insightful but rarely makes contact with economy-wide productivity research. (An
exception is Brynjolfsson and Yang (2001).) Our empirical work below is a tentative attempt
to make that connection. The model below provides the bare bones of a theoretical framework
to capture some of the key issues, focusing on cross-industry empirical implications. The
model takes as given the arrival of a particular GPT, which here is taken to be the production
of ICT capital at a continuously falling relative price. The distinguishing feature of a GPT is
that its effects are general, going well beyond the industry of production, but require
complementary investments by firms in order to fully benefit from its use. For empirical
implementation, we focus on industries that use the GPT.
3.4 Extensions to the Basic Framework
The model above is, of course, stylized and imposes a lot of structure on the problem in
order to derive an estimating equation. As a result, there are a number of challenges in
implementing this framework empirically. First, it is unclear how long the lags are between
ICT investment and complementary investment. In other words, the length of a period is a free
parameter, and theory gives little guidance. The lagged k may be last year’s ICT capital
accumulation, or the last decade’s. Furthermore, equation (3) for the accumulation of
complementary capital has no adjustment costs, or time-to-build or time-to-plan lags in the
accumulation of C. But such frictions and lags are likely to be important in practice, making it
even harder to uncover the link between ICT and measured TFP.
Second, suppose there are externalities that are a function of industry as well as
aggregate C. Then one can no longer tell whether the k terms represent accumulation of a
private stock, or intra-industry externalities that are internalized within the industry. Similarly,
if we find that lagged k is important for explaining current productivity growth, we do not
know whether that finding supports the theory we have outlined, or whether it indicates that
the externality is a function of lagged capital.
Third, other variables might enter the production function for A, which we have not
accounted for. We imposed the same production function for A and Y. But it is possible, as
many have recognized, that the production of complementary capital is particularly intensive
in skilled (i.e., college-educated) labor. If true, the hypothesis implies that the relative price
of accumulating complementary capital may differ significantly across industries (or across
countries) in ways that we have ignored. Fourth, even with the restrictions we have imposed,
we need to make further assumptions about σ as well as the relative user costs for ICT and
complementary capital. We made the strong assumption that the price of complementary
investment is the same as that of output, so this relative price should largely reflect the trend
decline in ICT prices. Nevertheless, that was clearly an assumption of convenience, reflecting
our lack of knowledge, rather than something we want to rely on too strongly. In what follows,
we ignore the relative price terms, but this needs to be explored further. (Suppose we assume
that σ = 0. There is still a relative price effect which, if omitted, would imply a trend in the
estimated coefficient over time; but in the cross-section, this relative price is close to common
across firms, so its omission should not matter much.)
Finally, given the difficulty of finding good instruments, we report OLS regressions below.
But current ICT capital growth is surely endogenous. Given the correlation between current
and lagged share-weighted ICT capital growth, any endogeneity potentially biases both
coefficients. The effect on the estimates depends on the size of the true coefficient as well as
the degree of endogeneity. The endogeneity bias might be positive or negative: Basu, Fernald,
and Kimball (2006), for example, find that positive technology innovations tend to reduce
inputs on impact. As is standard, one trades off bias against precision; indeed, weak
instruments could lead to both bias and imprecision. In any case, one needs to interpret the
results with caution.
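As a purely illustrative sketch (not the authors' code, data or exact specification; the file name and column names below are hypothetical), the cross-industry OLS regression described above, with the TFP acceleration regressed on current and lagged share-weighted ICT capital growth, could be run as follows:

# Illustrative sketch only: hypothetical data file and column names, and no
# treatment of the endogeneity discussed in the text.
import pandas as pd
import statsmodels.api as sm

# One row per industry:
#   dtfp_accel     - TFP acceleration (post- minus pre-period average growth)
#   dk_ict_current - share-weighted ICT capital growth, current period
#   dk_ict_lagged  - share-weighted ICT capital growth, lagged (e.g. 5-15 years)
df = pd.read_csv("industry_panel.csv")

X = sm.add_constant(df[["dk_ict_current", "dk_ict_lagged"]])
result = sm.OLS(df["dtfp_accel"], X).fit(cov_type="HC1")  # robust SEs, still OLS
print(result.summary())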
5. Conclusions
Even though ICT seems to be the major locus of innovation in the past decade, the TFP
acceleration in the United States since the mid-1990s has been broad-based. We reconcile these
observations by emphasizing the role of the complementary investments and innovations that
ICT induces in firms that use it. We thus link the literature on ICT as a general purpose
technology with the literature on intangible capital. To the extent that there is unmeasured
intangible output and unmeasured intangible capital, conventional TFP growth is a biased
measure of true technical change. This GPT view suggests that productivity slowdowns and
speedups might reflect the dynamics associated with complementary investment.
A fundamental difficulty, of course, is that complementary investment and capital are
unmeasured. We presented a simple theoretical framework in which observed ICT capital intensity
and growth should serve as reasonable proxies. In line with this GPT view, the U.S. industry
data suggest that ICT capital growth is associated with industry TFP accelerations with long
lags of 5 to 15 years. Indeed, controlling for past growth in ICT capital, contemporaneous
growth in ICT capital is negatively associated with the recent TFP acceleration across
industries. More work remains to be done to explore the robustness of the theoretical
framework (for example, allowing for different production functions for intangible
investment and for final output) and to extend the empirical work. For example, we have
not exploited the panel nature of the theory, nor have we explored the importance of the
relative price of intangible capital. But we are encouraged by the preliminary results that link
aggregate and industry-level U.S. TFP performance in the 2000s to both the persuasive macro
models of GPTs and to the stimulating micro empirical work that supports the GPT
hypothesis.
Bibliography:
 Bakhshi, H., Oulton, N. and J. Thompson (2003). Modelling investment when relative prices are trending: theory and evidence for the United Kingdom. Bank of England Working Paper 189.
 Basu, S. and J. G. Fernald (2001). Why is productivity procyclical? Why do we care? In New Developments in Productivity Analysis, edited by C. Hulten, E. Dean, and M. Harper. Cambridge, MA: National Bureau of Economic Research.
 Basu, S., J. G. Fernald, and M. Kimball (2006). Are technology improvements contractionary? American Economic Review, December.
 Basu, S., J. G. Fernald, N. Oulton, and S. Srinivasan (2003). The case of the missing productivity growth. In NBER Macroeconomics Annual 2003, M. Gertler and K. Rogoff, eds. Cambridge, MA: MIT Press.
 Basu, S., J. G. Fernald, and M. D. Shapiro (2001). Productivity growth in the 1990s: Technology, utilization, or adjustment? Carnegie-Rochester Conference Series on Public Policy 55: 117-165.
 Basu, S. and M. Kimball (1997). Cyclical productivity with unobserved input variation. NBER Working Paper 5915 (December). Cambridge, MA: National Bureau of Economic Research.
 Belsley, D., E. Kuh, and R. Welsch (1980). Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. New York: John Wiley and Sons.
 Bloom, N., R. Sadun and J. van Reenen (2005). It ain't what you do it's the way that you do I.T. – Testing explanations of productivity growth using U.S. transplants. Unpublished, London School of Economics.
 Bosworth, B. P. and J. E. Triplett (2006). Is the 21st Century Productivity Expansion Still in Services? And What Should Be Done about It? Manuscript (July).
 Bresnahan, T. F. (undated). The mechanisms of information technology’s contribution to economic growth. Prepared for presentation at the Saint-Gobain Centre for Economic Research.
 Bresnahan, T. F. and M. Trajtenberg (1995). General purpose technologies: ‘Engines of growth?’ Journal of Econometrics 65 (Special Issue, January): 83-108.
 Bresnahan, T., E. Brynjolfsson and L. M. Hitt (2002). Information technology, workplace organization and the demand for skilled labor: Firm-level evidence. Quarterly Journal of Economics 117(1): 339-376.
 Brynjolfsson, E. and L. M. Hitt (2000). Beyond computation: Information technology, organizational transformation and business performance. Journal of Economic Perspectives 14(4): 23-48.
 Brynjolfsson, E. and L. M. Hitt (2003). Computing productivity: Firm-level evidence. MIT Working Paper 4210-01.
 Brynjolfsson, E. and S. Yang (2001). Intangible assets and growth accounting: Evidence from computer investments. Manuscript.
 Caselli, F. (1999). Technological revolutions. American Economic Review 89 (March).
 Chandler, A. D. Jr. (1977). The Visible Hand. Cambridge: Harvard University Press.
 Corrado, C., P. Lengermann, E. J. Bartelsman, and J. J. Beaulieu (2006). Modeling aggregate productivity at a disaggregate level: New results for U.S. sectors and industries. Manuscript.
 Corrado, C., C. Hulten, and D. Sichel (2006). Intangible capital and economic growth. NBER Working Paper 11948 (January).
 David, P. A. and G. Wright (1999). General purpose technologies and surges in productivity: Historical reflections on the future of the ICT revolution. Manuscript.
 Greenwood, J. and M. Yorukoglu (1997). 1974. Carnegie-Rochester Conference Series on Public Policy 46: 49-95.
 Griliches, Z. (1994). Productivity, R&D, and the data constraint. American Economic Review 84(1): 1-23.
 Hall, R. E. (2001). The stock market and capital accumulation. American Economic Review 91 (December): 1185-1202.
 Helpman, E. (ed.) (1998). General Purpose Technologies and Economic Growth. Cambridge: MIT Press.
 Helpman, E. and M. Trajtenberg (1998). Diffusion of general purpose technologies. In General Purpose Technologies and Economic Growth, edited by E. Helpman. Cambridge: MIT Press.
 Hobijn, B. and B. Jovanovic (2001). The information-technology revolution and the stock market: evidence. American Economic Review 91 (December): 1203-1220.
 Hornstein, A. and P. Krusell (1996). Can technology improvements cause productivity slowdowns? In NBER Macroeconomics Annual, edited by Bernanke and Rotemberg. Cambridge, MA: National Bureau of Economic Research.
 Howitt, P. (1998). Measurement, obsolescence, and general purpose technologies. In General Purpose Technologies and Economic Growth, edited by E. Helpman. Cambridge and London: MIT Press.
 Hulten, C. (1978). Growth accounting with intermediate inputs. Review of Economic Studies 45 (October): 511-518.
 Inklaar, R. and M. P. Timmer (2006). Of yeast and mushrooms: Patterns of industry-level productivity growth. University of Groningen, mimeo, August.
 Inklaar, R., M. P. Timmer and B. van Ark (2006). Mind the gap! International comparisons of productivity in services and goods production. University of Groningen, mimeo, August.
 Jorgenson, D. W. (2001). Information technology and the U.S. economy. American Economic Review 91 (March): 1-32.
 Jorgenson, D. W., M. Ho, and K. Stiroh (2006). The sources of the second surge of U.S. productivity and implications for the future. Manuscript.
 Jovanovic, B. and P. L. Rousseau (2003). Mergers as reallocation. Unpublished, NYU, February.
 Krueger, D. and K. Kumar (2003). US-Europe differences in technology adoption and growth: The role of education and other policies. Manuscript prepared for the Carnegie-Rochester Conference, April 2003.
 Laitner, J. and D. Stolyarov (2001). Technological change and the stock market. Manuscript, University of Michigan.
 Lynch, L. and S. Nickell (2001). Rising productivity and falling unemployment: can the US experience be sustained and replicated? In The Roaring Nineties, edited by A. Krueger and R. Solow. New York: Russell Sage Foundation.
 Nordhaus, W. D. (2002). Productivity growth and the New Economy. Brookings Papers on Economic Activity 2.
 Oliner, S. D. and D. E. Sichel (2000). The resurgence of growth in the late 1990s: Is information technology the story? Journal of Economic Perspectives 14 (Fall): 3-22.
 Oliner, S. D. and D. E. Sichel (2006). Unpublished update to Oliner and Sichel (2000). September.
Study of ERP Implementation in Selected Engineering Units at MIDC,
Sangli
Varsha P. Desai
Asst. Professor
V.P. Institute of Management Studies and Research, Sangli
varshadesai9@gmail.com
ABSTRACT :
Today ERP is a buzzword in most engineering industries. Engineering
industries face a number of issues related to managing complex projects, rising
inventory costs, customer service, etc. An ERP system helps to overcome these challenges and
speed up processes by automating business functions and workflow. ERP implementation
is an important and costly affair. Organizations spend huge amounts of
money on ERP implementation but often do not get the desired results from the ERP
solution. This paper reveals the ERP implementation structure and issues in engineering
units at Sangli. Through this paper we provide guidelines for successful ERP
implementation and discuss future challenges.
Keywords: ERP, BPR, IS, EDP
Introduction:
Technology plays a key role in today’s business environment. Many companies greatly
rely on computers and software to provide accurate information for effective management of
their business. It is becoming increasingly necessary for all businesses to incorporate
information technology solutions to operate successfully. One way that many corporations
have adopted information technology on a large scale is by using Enterprise Resource Planning
(ERP) systems to accomplish their business transaction and data processing needs.
An ERP system is an integrated, configurable and customizable information system
which plans and manages all the resources in the enterprise and incorporates the business
processes within and across the functional or technical departments in the organization. ERP
systems consist of different modules which represent different functional areas and they offer
integration across the entire business, including human resources, accounting, manufacturing,
materials management, sales and distribution and all other areas which are required in
different branches. ERP system integrates all facets of an operation, including product
planning, development, manufacturing processes, sales and marketing. The central feature of
all ERP systems is a shared database that supports multiple functions used by different
business units. Though the market for ERP seems to be growing, there are several issues one
has to face when implementing an ERP solution, such as wrong selection of ERP vendors and
applications, over-expectations from ERP solutions, problems with change management, and
problems with data migration. These issues increase the failure rate of ERP implementations.
Objectives:
1. To study the ERP modules of the engineering industry.
2. To identify reasons for ERP implementation in engineering units.
3. To determine the status of ERP implementation in engineering units at Sangli MIDC.
4. To identify ERP implementation issues in selected units at Sangli MIDC.
Importance of Study:
The study of ERP implementation has become more important for any organization
because the process of implementing ERP solutions is fraught with risk and affects processes
and people. A successful ERP implementation is the prime objective for organizations.
This study puts forth the issues encountered in ERP implementation in engineering units. This
will be beneficial for technical team members to overcome technical issues, for project
managers to fill gaps in implementation planning, and for stakeholders to know the present
status and issues of ERP implementation. The study will also help ERP vendors to reduce the
requirement gaps of their ERP software, and it will be important for other engineering
units that plan to implement an ERP system.
ERP Modules:
1. Manufacturing: Manage a manufacturing process of any length, including complex
interconnections of components, routings and BOMs. Identify and control manufacturing
processes where multiple different items are produced from a single operation. Manage
engineering data management and change control.
2. Inventory: This module facilitates the process of maintaining the appropriate level of
stock in the warehouse. It identifies inventory requirements, provides replenishment
techniques, monitors item usage, reconciles inventory balances and reports inventory
status. It integrates with other modules to generate executive-level reports.
3. Material Management: Define detailed chemical and physical specifications for
materials such as steel, aluminum and other metals. Create user-definable specification
templates. Link multiple materials to a product or item, and multiple products or items to
a material.
4. Compliance Management: The compliance module generates statutory reports and
maintains guidelines. It provides automatic generation of documents, with built-in facilities
for VAT, TDS, excise and exports that help develop statutory reports.
5. Sales and Distribution: This module facilitates sales order management, purchase order
management, shipping, billing, sales support and transportation.[9]
6. Marketing: The marketing module enables marketing resource management, campaign
management, e-mail marketing, online surveys and marketing analytics.
7. Customer Service: This module provides synchronization and coordination of customer
interaction, customer service support, sales force automation and customer retention.
8. Quality Management: It provides quality planning, quality inspection and quality
control functions. CAD and CAM systems are integrated for designing and developing
machinery parts.
9. Process Routing: Create routings to define and control the sequence of operations
required to manufacture or process a given product. This module stores the operation
number, operation name, approved work centers and suppliers, crew size, weight,
container type and other key routing data.
10. Accounting and Finance: The accounts module accurately tracks various cost elements. It
helps the company track profitability and avoids duplication of entries to maintain data
accuracy. It manages accounts receivable, accounts payable, cost controlling and
profitability analysis.[9]
11. Heats and Chemistries: Set up a specialized version of the standard receiving system,
optimized for metallic raw materials. Receive material that has been defined in the
material specification system. Validate the chemistry and physical characteristics of
incoming materials against engineering specifications. Create inventory records and
bar-coded labels for each inventory unit being received.
12. Human Resources: This module facilitates personnel management, recruitment
management, payroll management, shift management, training and event management.
Biometric attendance records are integrated with the ERP system.
Fig. 1 : ERP modules in engineering Industry
Area of the Study:
As per the records of the DIP there are 54 engineering units in Sangli.[10] At present, out of
these 54 units, 5 are using ERP systems. The researcher takes 3 engineering units for the study
because they have completed 80% of the ERP implementation in their industry; they are:
1. JSONS Foundry, MIDC, Sangli
2. BASHKO Engineering Pvt. Ltd., MIDC, Sangli
3. VEERESHA Casting Pvt. Ltd., Sangli
ERP and Engineering Units:
According to the survey, JSONS Foundry implemented ERP software from the vendor SAP,
whereas VEERESHA Casting Pvt. Ltd. and BASHKO Engineering Pvt. Ltd. implemented ERP
software from a local vendor named "Compserve", Kolhapur. The implementation period for
these industries is on average 2.5 years. The average number of users of the ERP system is
20-22 and usage is approximately 18-19 hours weekly. Details are shown in the table below.
Table 1: Usage of ERP software in engineering units

Sr. No | Industry Name | Implementation Period | Total No. of Users | Hours of ERP software use per week
1      | JSONS         | 2 years 3 months      | 20-28              | 21 hours
2      | BHASKO        | 2 years 4 months      | 14-16              | 18 hours
3      | VEERESHA      | 2 years 5 months      | 25-30              | 16 hours
Implementing ERP software:
Successful ERP implementation contributes to the success of the organization. Every
organization has one or more objectives for implementing ERP software. Implementation of
ERP software is a very expensive and long-term process. Before implementation, the
organization performs business process reengineering (BPR), which helps to identify the
changes needed in business functions and is useful for the success of ERP.
According to the responses of end users and the steering committee (EDP managers, CEO,
CIO), the major reasons for implementing ERP in engineering industries are providing
excellent customer service, reducing inventory cost and catching new market opportunities for
business success. The following table shows the reasons, and the average weight given to each
by the above engineering industries, for selecting ERP software in their organization.
Table 2: Reasons for implementing ERP software

Sr. No | Reasons                                                               | Weight in %
A      | Standardization of processes                                          | 78.12
B      | Improvement of existing customer services                             | 87.05
C      | Improve management controls                                           | 68.75
D      | Enabling of future growth                                             | 75.00
E      | Increasing firm's flexibility to respond to new market opportunities  | 81.25
F      | Quality improvement                                                   | 73.87
G      | Material inventory cost reduction                                     | 84.37
H      | On-time information assistance to data management process            | 62.05
Figure 2: Reasons for using ERP software
Implementation Issues:
ERP implementation mostly focuses on maximizing strategic flexibility and improving
business operations by reducing operational costs, enabling business integration, improving
customer service and data transparency, and supporting better business decisions.[1]
ERP implementation dramatically changes business processes. Organizations
implement a stepwise approach for successful ERP implementation. Most ERP
implementations are more likely to fail, be delayed, cost more than forecast or fail to deliver
full functionality than they are to succeed.[9] According to the survey, the common ERP
implementation issues in the different engineering units are as follows:
1. Exceeded implementation period: ERP implementation time exceeds the planned time,
so lengthy implementation is a major issue in engineering units at Sangli.
2. Customization: Most of the modules need to be customized according to requirements.
During customization, most of the time and effort is spent creating standard report
formats that meet business needs.
3. Change management: Acceptance of the change brought about by the implementation of
an ERP system is the most cited key issue for successful implementation. End-user
training is provided to understand the concept and how the system will manage change in
the business process.[7]
4. Lack of understanding of ERP benefits: End users and top management may not be able
to understand the concept, business operations, cost and implementation process.
Education and training help create awareness of the importance of ERP software in
industries.
5. Top management support: Commitment from top management is an important issue in
ERP implementation. Without financial and technical support from top management, an
organization cannot successfully implement ERP software. Lack of technical and
knowledgeable human resources leads to delays in ERP implementation, which cause
huge losses. Top management involvement and consultant support are major factors for
the success of an ERP system.[7]
To overcome implementation issues it is necessary to identify the gap between success
factors and success indicators. This gap analysis helps to solve implementation issues, which
leads to the success of ERP projects.[2]
1. Success Factors:
 User-related variables: Output quality, social image, results, compatibility, system
reliability, reporting capabilities.
 Project-related variables: Top management support, planning, training, team
contribution, software selection, consultant support, IS area.
2. Success Indicators:
 User-related variables: Intention to use the system, user behaviour, number of hours using
the software, most used functions of the system, frequency, user satisfaction, decision making,
and efficiency.
 Project-related variables: Project time, budget, quality, scope, productivity, performance,
effectiveness, reporting capability, customizability.
Conclusion: Successful implementation of ERP software is beneficial for engineering
industries in facing the competitive market and achieving financial growth. At MIDC, Sangli,
only 10% of engineering industries have implemented an ERP package. The major objectives
of selecting an ERP package are to standardize business processes, provide customer service
and reduce inventory cost. Engineering industries manage various complex projects by integrating
various business functions. But due to the large investment in ERP systems, lack of top
management support, technical problems, and cultural and infrastructural issues, most
engineering industries neglect the benefits of ERP software. Gap analysis is an effective
technique to overcome implementation issues and meet business challenges.
REFERENCES:
[1] Jim Odhiambo Otieno, "Enterprise Resource Planning Systems Implementation and
Upgrade (A Kenyan Study)", School of Engineering and Information Sciences, Ph.D.
thesis.
[2] Boo Young Chung, "An analysis of success and failure factors for ERP systems in
engineering and construction firms", Ph.D. thesis.
[3] Pham Hong Thai, Dr. Jian-Lang Dong, Dr. Nguyen Chi Thanh, "Study of Problems and
Solutions for Implementing ERP System at Enterprises in Nam Dinh Province, Vietnam",
Feb 2011.
[4] Syed M. Ahmed, Irtishad Ahmad, Salman Azhar and Suneetha Mallikarjuna,
"Implementation of enterprise resource planning (ERP) system in the construction
industry".
[5] Ashish Kr. Dixit, Om Prakash (2011, April), "A study of Issues affecting ERP
Implementation in SMEs", Journal of Arts, Science & Commerce, 2(2), 77-85.
[6] Venugopal C. (2011), "A Study of The Factors that Impact The Outcome of an ERP
Implementation", Ph.D. thesis, Anna University, Chennai. Unpublished.
[7] Rana Basu, Parijat Upadhyay, Manik C. Das, Pranab K. Dan (2012), "An approach to
identify issues affecting ERP implementation in Indian SMEs", Journal of Industrial
Engineering and Management, 416.
[8] Purnendu Mandal, A. Gunasekaran (2003), "Issues in implementing ERP: A case
study", European Journal of Operational Research, 146, 274-283.
[9] Alexis Leon, ERP Demystified.
[10] Brief industrial profile of Sangli district, DIP Sangli.
To study the use of biometric to enhance security in E-banking
Mrs. Deepashree K. Mehendale
Assistant Professor
Dr. D. Y. Patil Arts Commerce and Science
College,
Pimpri, Pune - 18
India
E-mail: deepashree.deshpande2@gmail.com
Mrs. Reshma S. Masurekar
Assistant Professor
Dr. D. Y. Patil Arts Commerce and Science
College,
Pimpri, Pune - 18
India
E-mail: rush1681@gmail.com
ABSTRACT :
E-Banking is a giant step towards financial globalization. E-banking refers to
electronic banking. It is like E-business in the banking industry. It offers customers and
organizations many benefits, including 24/7 access to accounts and services. E-banking
has grown tremendously in the last few years, making transactions easier and
faster, but it has also increased the risks involved. Customer authentication is typically done
by asking the customer to enter his unique login id and password combination. The success of this
authentication relies on the ability of customers to maintain the secrecy of their
password. The traditional security provided by username, password and PIN is vulnerable
to attack, so many bank customers are still reluctant to use E-banking services because
of the security concerns.
Authentication plays a vital role in the process of E-banking as the customer is not
present in front of the banker or an authorized representative. As passwords are not
strong enough for authentication purposes, the best alternative for authentication can
be biometrics. Biometric technology uses physiological and behavioral characteristics
unique to each individual, thus providing a high level of security. In addition to adding
certainty to the identification process, a biometric system also adds simplicity and
affordability. In this research paper we have studied the security threats
involved in the traditional authentication process. The paper also focuses on the use of
biometrics for enhancing security while performing transactions. There are various types of
biometric technologies available for authentication. The paper shows that the
most accepted technology is the fingerprint; if fingerprint technology is deployed in the
login process of E-banking, it will provide stronger authentication and authorization
and will be able to reduce the number of transaction frauds and security breaches.
Keywords: E-banking, biometric, security, risk, fingerprint, authentication
Introduction:
Banking organizations have been offering services to customers and businesses for
years. The transformation from traditional banking to E-banking has been a giant reform in
E-Commerce. This reform has also attracted new cyber attacks. E-banking and E-Commerce are
closely related systems. E-banking means providing banking products and services directly to
customers through electronic, interactive communication channels. E-banking is a result of
growing expectations of bank’s customers. This system involves direct interface with the
customers. The customers do not have to visit the bank’s premises. Thus it offers extensive
benefits in terms of ease and cost of transactions through Internet. E-banking provides many
services such as transferring money, paying bills and checking account balance etc.
Any user with a personal computer and a browser gets connected to the bank’s website
to perform any of the virtual banking functions. In an E-banking system the bank has a
centralized database that is web-enabled. All the services that the bank has permitted on the
internet are displayed in a menu. The user is able to select a service with just one click and
perform the transaction. Authentication plays an important role in E-banking. Authentication
is the process of verifying the user's credentials. The authentication process is followed by
the authorization process; authorization involves verifying that the user has permission to
perform a certain operation. The authentication process is based on three factors:
1. What the user knows
2. What the user possesses
3. What the user is

What the user knows  | What the user possesses | What the user is
Username             | USB token               | Fingerprint
Password             | Smart card              | Palm print
PIN                  | OTP by SMS/token        | IRIS
Card No.             | Swipe cards             | Voice
Identifiable picture | Mobile                  | Vein pattern
                     |                         | Signature
Types of Authentication:
1. Single-factor authentication: It uses any one factor for authentication. For example, the
username-password authentication process.
2. Two-factor authentication: It uses a combination of two factors, i.e. what the user knows
and what the user possesses. This method is used by various banks for authentication of
E-banking. For example, the user uses a password as the first factor (which the user knows)
and a one-time password as the second factor (which the user possesses).
3. Multi-factor authentication: It uses two or more factors for authentication, of which one
factor necessarily pertains to what the user is. For example, a combination of user id, smart
card and a biometric authentication factor.
Security threats in E-banking
E-banking is a new technology with many capabilities but also many potential
problems, and users are hesitant to use the system. In recent years the number of malicious
applications targeting online banking transactions has increased dramatically. The most serious
threat faced by E-banking is that it is not safe and secure all the time; the result may be
loss of data due to technical faults.
The extensive use of information technology by banks has put them under pressure
to provide confidentiality and integrity of information in order to maintain their competitive
edge, cash flow, profitability, legal compliance and commercial image. Sources of damage such as
computer viruses, computer hacking and denial-of-service attacks have become more
common, more ambitious and increasingly sophisticated in the networked environment.
The authentication schemes provided by banking organizations are not balanced, in the
sense that the authentication process is performed exclusively at one end of the architecture:
the user’s computer. A personal computer is inherently an insecure environment for
online banking. Furthermore, this process is still carried out in many cases using weak
schemes, like the traditional username/password. Even with two-factor schemes, the whole
process takes place exclusively on the user side of the equation. Due to this, if this authentication
process is hacked, by using a trojan or a simple credentials theft, the bank has no way to
distinguish between legitimate and illegitimate customers.
Identity theft refers to all types of crime in which someone illegally obtains and uses
another person's personal data through fraud, typically for monetary gain. With enough
personal information about an individual, a cyber criminal can assume individual's identity to
carry out a wide range of crimes. Identity theft occurs through a wide range of methods from
very low-tech means, such as check forgery and mail theft to more high-tech schemes, such as
computer spyware and social network data mining.
A man-in-the-middle attack is a type of attack where attackers intrude into an existing
connection, intercept the exchanged data and inject false information. It involves
eavesdropping on a connection, intruding into it and modifying the data.
Pharming is a type of fraud that involves diverting the client's Internet connection to a
counterfeit website so that even when he enters the correct address into his browser he ends up
on the forged site. It can be done by changing the host file on a victim’s computer or by
exploitation of vulnerabilities in DNS server software. Phishing is another threat in which
malicious websites are used to solicit personal information by posing as a trustworthy
organization.
Biometric-based Authentication:
For online banking, the degree of login or signing process security can be increased by
using biometrics. Biometrics can be used to unlock the token instead of a PIN and can actively
take part in the login or signing process. Biometrics can be used alone or in combination with
other authentication methods. When used with another authentication method it becomes
multi-factor authentication, which is more secure and effective. Biometric systems have
proven to be very effective in protecting information and resources in various applications.
These systems can integrate information at various levels. At present, the number of
applications employing biometric systems is quite limited, mainly because of cost.
However, the demand for biometrics will continue to increase in order to provide the highest
level of security for every transaction, gain customer trust and avoid financial loss.
The rapid digitization of banking services combined with the continued need to adopt
stricter customer and employee identification protocols to prevent identity theft and fraud has
set the biometric identification technology to become an integral and strategic part of financial
services. Acting as a strong authentication tool, biometrics in banking will help increase
the number of customers using E-banking. The necessity for a stronger authentication
solution becomes unavoidable in banking services because of the growing pace of
sophisticated transactional technology adoption along with the unfortunate rise in fraud and
security breaches due to reliance on traditional security systems such as passwords.
E-banking is considered to be the most convenient and fastest method of transaction. As
the number of customers using online banking increases day by day, it is also
giving rise to new cybercrime attacks. The most important security issue the bank
faces is identifying the authorized customer. The bank's website accepts the user credentials in
the form of login name and password, and if the right credentials are provided the user is
allowed to access the services offered by the bank. The bank's website only authenticates the
credentials but not exactly the person who is using them. Authentication is the basic principle
of security which ensures who the person actually is. If strong authentication approaches are
used at the time of the login process, then the process will work more seamlessly and will be
able to attract more customers who are still not willing to use the method.
Biometrics, being the strongest authentication approach, if deployed with E-banking
will help gain customer trust and belief and also increase the reputation of the bank. This
approach relies on physiological and behavioral characteristics of a person which cannot be
stolen or forgotten. There are various types of biometric mechanisms, e.g. face recognition,
fingerprint identification, iris recognition, handwriting recognition, etc. The following pie chart
shows the use of different biometric mechanisms in various applications.
Fig.: Use of Biometric Technology
Even though there are many biometric ways of authentication, it can be seen from the above
figure that fingerprint technology occupies 48.8% of the market. Thus there are many
users who rely on fingerprint scanning, so if the technology is deployed in an E-banking system
it will be easily accepted by customers and gain a huge market. Many computers, laptops,
and even smart phones today have fingerprint scanners, offering flexibility for banks to easily
adopt biometric authentication in E-banking services.
The fingerprint authentication process works in 3 stages:
1. Enrolment
2. Storage
3. Verification or Identification
Fig.: Steps in the authentication process
The enrolment process consists of a user providing his fingerprint. For example, during
enrolment in a fingerprint biometric system the user is requested to provide a series of
fingerprints. The digital representations of the fingerprint samples are used to generate a
fingerprint template using an averaging process. Templates collected in the enrolment process
are stored so they can be referenced later. Typically, templates are stored within the
biometric device itself, in a central repository, or on a portable token carried by the
individual. The verification or identification process requires an individual to provide a live
biometric sample, which is then compared with the corresponding template. If the sample and
the template match within a predefined threshold, the biometric system returns a binary true
or false decision.
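The verification step can be summarized in code. The sketch below is only illustrative (the matcher, threshold value and template format are assumptions, not part of any specific banking system): a similarity score between the live sample and the enrolled template is compared against a predefined threshold and a boolean decision is returned.

# Illustrative sketch of biometric verification: compare a live fingerprint
# sample against the enrolled template and return a true/false decision.
# The matching function and the threshold value are placeholders (assumptions).

MATCH_THRESHOLD = 0.80  # predefined threshold; tuning it trades off
                        # false accepts against false rejects

def similarity(template: bytes, live_sample: bytes) -> float:
    """Placeholder matcher returning a score in [0, 1].
    A real system would use a minutiae-based or image-based algorithm."""
    raise NotImplementedError

def verify(enrolled_template: bytes, live_sample: bytes) -> bool:
    """Return True if the live sample matches the enrolled template."""
    return similarity(enrolled_template, live_sample) >= MATCH_THRESHOLD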
Fig.: Existing Model
Fig.: Proposed Model
In the existing model of E-banking, the user gets access to the bank's website after
entering the correct user id and password. In the proposed model using biometric
technology there are two levels for entering the bank's website: first the user has to enter the
correct user id and password, and then the user has to provide his fingerprint to access the
E-banking system.
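A minimal sketch of this two-level login flow follows (all function names and the credential store are hypothetical; a real deployment would also hash passwords, rate-limit attempts and protect the template store):

# Hypothetical sketch of the proposed two-level E-banking login:
# level 1 = user id / password, level 2 = fingerprint verification.

def check_password(user_id: str, password: str) -> bool:
    """Placeholder: look up the (hashed) password for user_id and compare."""
    raise NotImplementedError

def check_fingerprint(user_id: str, live_sample: bytes) -> bool:
    """Placeholder: compare the live sample with the stored template,
    e.g. using the verify() sketch shown earlier."""
    raise NotImplementedError

def login(user_id: str, password: str, live_sample: bytes) -> bool:
    # Level 1: traditional credentials (what the user knows)
    if not check_password(user_id, password):
        return False
    # Level 2: biometric factor (what the user is)
    return check_fingerprint(user_id, live_sample)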
Benefits of the technology:
1. Free from identity theft.
2. Biometrics holds the promise of fast, easy-to-use, accurate, reliable authentication.
3. Secure customer service.
4. The cost of the device is affordable.
5. Greater customer convenience.
Conclusion:
The research study shows that E-banking is an increasing trend among customers, and to gain
more customer acceptance it is necessary to achieve greater user confidence. Current
authentication methods in banking systems, such as passwords and tokens, are not perfect
solutions as they do not offer a high level of security. Biometric technology is the strongest
authentication method, and if used in E-banking it will be able to reduce security threats and
the increasing cybercrime. Fingerprint technology is already an accepted method of
authentication in many areas, including forensics; if deployed with online banking it will be
able to enhance security in the login process of E-banking.
There are many benefits of using fingerprint technology, making it the strongest and
most protected scheme. This system, when used within the login process, adds another layer
of security. As the technology uses multi-factor authentication, it reduces online fraud. The
deployment is cost effective and can be used for both E-banking and M-banking, which
is the next generation of banking.
Extraction of Top K List By Web Mining
Dr. Pramod Patil
DYPIET Pimpri, Pune
Savitribai Phule Pune University
pdpatiljune@gmail.com

Priyanka Deshmane
M.E. Student, DYPIET Pimpri, Pune
Savitribai Phule Pune University
priyanka.deshmane8218@gmail.com
ABSTRACT :
Nowadays, finding proper information in less time is an important need. A further
problem is that only a very small percentage of the data available on the web is meaningful and
interpretable, and a lot of time is needed to extract it. There is a need for a system that deals
with methods for extracting information from top-k web pages, which contain the top k
instances of a topic of interest. Examples include "the 10 tallest towers in the world". In
comparison with other structured information like web tables, the information in top-k lists
is larger, richer, of higher quality, and interesting. Therefore top-k lists are very valuable, as
they can help to develop open-domain knowledge bases for applications such as search or
fact answering. This paper gives a survey of different systems used for extraction of top-k lists.
Keywords: top-k list, information extraction, top-k web pages, structured information
1. Introduction
It is difficult to extract knowledge from information expressed in natural language and in
unstructured formats. Some information on the internet is present in organized or
semi-organized forms, for example as records or web pages coded with specific tags, such as
HTML5 pages. Accordingly, a large measure of new methods has been devoted to gaining
understanding from structured information on the web (like web tables), specifically from
internet platforms.[1]
Although the overall number of web tables in the whole corpus is large, only a slight
proportion of them include helpful information, and an even smaller proportion of these
include data interpretable without context. Many tables are not "relational"; relational tables
are interpretable, with rows referring to entities and columns referring to characteristics of
these entities. Based on Cafarella et al. [13], of the 1.2% of web tables which are relational,
most are worthless without context. For instance, assume we extracted a table which has 4 rows
and 3 columns, with the three columns labeled "cars", "model" and "price" respectively. It is
not clear why these 4 cars are grouped together (e.g., are they the most expensive, or the
fastest?). In other words, we do not know the definite situations under which the extracted
information is useful. Understanding the context is very important for extracting information,
but in many cases the context is represented in such a manner that a machine cannot
understand it. In this paper, instead of concentrating on structured data (like tables or XML
data) and ignoring context, the concentration is on easily understood context, which is then
applied to interpret less structured or free-text information and guide its extraction.
Top-k lists contain very high quality and rich information; especially compared with
web tables, they contain a huge amount of high quality information. Moreover, top-k lists are
associated with context, which makes them more useful and correct for quality analysis, search
and other systems.
The title of a top-k page should contain at least three pieces of important
information: i) the number k, for example 30, thirteen, or 20, which indicates
how many items the page mentions or describes; ii) a topic or concept the items are associated
with, for example Scientists, Comic Books or Bollywood Classics; iii) a ranking
criterion, for example influential, fastest, tallest, best-seller or interesting. Sometimes the
ranking criterion is implicit, in which case it is treated as equivalent to "best" or "top".
Besides these 3 sections, a few top-k titles contain two optional extra pieces of
information: time and location.
Fig. 1 : Example of Top-k Title
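As a simple illustration of these three title components (the regular expression and the word list below are our own simplification for single-word criteria, not the title classifier used in [1]):

# Simplified sketch of recognizing a "top-k" title and extracting
# k, the ranking criterion and the topic. Not the classifier from [1].
import re

NUMBER_WORDS = {"ten": 10, "thirteen": 13, "twenty": 20, "thirty": 30}

TITLE_RE = re.compile(
    r"^(?:the\s+)?(?P<k>\d+|ten|thirteen|twenty|thirty)\s+"
    r"(?P<criterion>\w+)\s+"                 # single-word criterion, e.g. tallest, best
    r"(?P<topic>.+?)(?:\s+in\s+the\s+world)?$",
    re.IGNORECASE,
)

def parse_topk_title(title: str):
    m = TITLE_RE.match(title.strip())
    if not m:
        return None
    k_raw = m.group("k").lower()
    k = int(k_raw) if k_raw.isdigit() else NUMBER_WORDS[k_raw]
    return {"k": k, "criterion": m.group("criterion"), "topic": m.group("topic")}

print(parse_topk_title("The 10 tallest towers in the world"))
# {'k': 10, 'criterion': 'tallest', 'topic': 'towers'}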
2. Literature Review
2.1 Automatic extraction of top-k lists from the web
Zhixian Zhang, Kenny Q. Zhu, Haixun Wang and Hongsong Li [1] are interested in
methods for extracting information from top-k web pages, which describe or contain the top k
instances of a topic of interest. Compared with other structured information on the web, top-k
lists contain richer, higher-quality and more interesting information. Therefore top-k lists are
valuable.
The system introduced here consists of the following components (a code sketch
illustrating the tag-path idea is given at the end of this subsection):
 Title Classifier: It attempts to recognize the page title of the web page.
 Candidate Picker: It extracts all the candidate lists from the input page. A candidate list
is, structurally, a set of nodes with identical HTML tag paths; a tag path is the sequence of
tag names from the root node to a certain tag node.
 Top-k Ranker: It scores the candidate lists and picks the best one using a scoring function
which is a weighted sum of two features: the P-score and the V-score. The P-score measures
the correlation between the list and the title; the V-score measures the visual area occupied
by a list, because the main list of a web page usually tends to occupy a larger area than
other lists.
 Content Processor: It processes the extracted list to produce attribute-value pairs by
inferring the structure of text nodes, conceptualizing the list attributes, and using table
heads or attribute/value pairs.
This method gives improved performance by providing domain-specific lists and
focusing more on the content; it does not rely only on the visual area of the lists. If a list is
divided over more than one page, it may not be included completely.
The authors demonstrated an algorithm that automatically extracts such top-k lists from a web
snapshot and discovers the structure of each list. The algorithm achieves 92.0% precision and
72.3% recall in their evaluation.[1]
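The following is a minimal sketch of the tag-path grouping idea behind the Candidate Picker, using the lxml library. It is our own simplification for illustration only and computes neither the P-score nor the V-score from [1].

# Sketch of tag-path grouping: nodes that share the same root-to-node tag
# path are grouped into one candidate list. A simplification of [1].
from collections import defaultdict
import lxml.html

def tag_path(node):
    """Tag names from the root down to this node, e.g. 'html/body/ol/li'."""
    path = [anc.tag for anc in node.iterancestors()][::-1] + [node.tag]
    return "/".join(path)

def candidate_lists(html_text, min_items=3):
    root = lxml.html.fromstring(html_text)
    groups = defaultdict(list)
    for node in root.iter():
        if isinstance(node.tag, str):      # skip comments / processing instructions
            groups[tag_path(node)].append(node)
    # A candidate list = one tag path occurring repeatedly (e.g. many <li> siblings)
    return {p: nodes for p, nodes in groups.items() if len(nodes) >= min_items}

page = "<html><body><ol><li>A</li><li>B</li><li>C</li></ol></body></html>"
for path, nodes in candidate_lists(page).items():
    print(path, [n.text_content() for n in nodes])
# html/body/ol/li ['A', 'B', 'C']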
2.2 System for extracting top k list from web pages
Z. Zhang, K. Q. Zhu and H. Wang [2] define a novel list extraction problem, which
aims at recognizing, extracting and understanding 'top-k' lists from web pages. The problem is
different from other data mining tasks because, compared to structured data, top-k lists are
clearly easier to understand and interesting for readers.
With the massive knowledge stored in those lists, the instance space of a general-purpose
knowledge base such as Probase can be enhanced. It is also possible to develop a search
engine for "top-k" lists as an efficient fact-answering machine. The 4-stage extraction framework
has demonstrated its ability to retrieve a very large number of "top-k" lists at very high
precision.[2]
2.3 Extracting general lists from web documents
F. Fumarola, T. Weninger, R. Barber, D. Malerba and J. Han [6] propose a new
hybrid technique for extracting general lists from the web. It uses general assumptions about
the visual rendering of lists and the structural arrangement of the items contained in them. The
system aims to overcome the limitations of prior work concerning the generality of extracted
lists; this is achieved by combining several visual and structural characteristics of web lists.
Both information on the visual structure of list items and non-visual information, such as the
DOM tree structure of visually aligned items, are used to find and extract general lists on the web.
It is demonstrated empirically that, by capitalizing on the visual regularities in web page
rendering and the structural properties of the relevant elements, it is possible to correctly extract
general lists from web pages. The approach requires neither the enumeration of a huge set of
structural or visual features, nor segmenting the web page into atomic elements and using a
computationally demanding process to fully discover lists.[6]
2.4 Short text conceptualization using a probabilistic knowledge base
Y. Song, H. Wang, Z. Wang, H. Li, and W. Chen [7] improve text understanding
by making use of a probabilistic knowledge base. Conceptualization of short texts is done by a
Bayesian inference mechanism. Comprehensive experiments are performed on
conceptualizing textual terms and clustering short segments of text such as Twitter messages.
Compared with purely statistical methods like latent semantic topic modelling, or methods that
use an existing knowledge base (e.g. WordNet, Freebase and Wikipedia), the approach brings
notable improvements in short text conceptualization, as shown by the clustering accuracy.[7]
2.5 Extracting data records from the web using tag path clustering
G. Miao, J. Tatemura, W.-P. Hsiung, A. Sawires and L. E. Moser [10] introduce a new
method for record extraction that captures a list of elements in a more robust fashion,
based on a comprehensive analysis of a web page. The technique concentrates on how
frequently a distinct tag path appears in the DOM tree of a web document. Rather than
correlating pairs of individual segments, it correlates pairs of tag path occurrence patterns
(called visual signals) to calculate how similarly two tag paths present the same list of
objects. A similarity measure is introduced that captures how closely the visual signals
co-occur. On the basis of this similarity measure, clustering of tag paths is performed over the
sets of extracted tag paths, which form the skeleton of the data records.[10]
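To illustrate the idea of comparing "visual signals" (and only the idea: [10] defines its own similarity measure, and the plain cosine similarity and the toy count vectors below are our stand-ins), one can describe each tag path by how often it occurs in each region of the document and compare the resulting vectors:

# Stand-in illustration of comparing tag paths by their occurrence patterns.
# [10] uses its own similarity measure; cosine similarity is used here only
# to show that paths belonging to the same record list have similar signals.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Occurrence counts of three hypothetical tag paths over five document regions.
title_path = [1, 1, 1, 1, 1]   # e.g. .../div/h3   (one per record)
price_path = [1, 1, 1, 1, 0]   # e.g. .../div/span (one per record, last missing)
nav_path   = [4, 0, 0, 0, 0]   # e.g. .../ul/li    (navigation bar, top of page only)

print(cosine_similarity(title_path, price_path))  # higher: likely the same record list
print(cosine_similarity(title_path, nav_path))    # lower: a different structure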
Experimental results on flat data record lists are compared with a state-of-the-art algorithm;
the algorithm shows significantly higher accuracy than the existing work. For data record lists
with a nested structure, web pages from the domains of business, education, and government
are collected. The algorithm shows high accuracy in extracting atomic-level as well as
nested-level data records, and its execution time is linear in the document length for
practical data sets.[10]
This work can be extended to support data attribute alignment. Each data record
contains various data attributes, but unfortunately there is no one-to-one mapping from the
HTML code structure to the data record arrangement. Identification of the data attributes
offers the potential for better use of web data.[10]
2.6 Popularity-guided Top-k Extraction
Mathew Solomon, Cong Yu and Luis Gravano [8] aim to compute the top-k values
of an attribute for an entity according to a scoring function over the extracted attribute values.
This scoring function depends on extraction confidence and importance. The more often a
document is accessed by users searching for information related to an entity, the more likely
it is to contain important information. By analyzing query click-through data, search engines
can identify the web documents that people consult for information. For every entity in the
dataset, a frequency measure is computed on the basis of how many users have searched for
the entity and how many pages matching a specific pattern were clicked as a result of the
search [8].
It follows the subsequent algorithm:




Document selection: Select a batch of unprocessed documents
Extraction : method every document in batch with extraction system
Top-k Calculation : Update rank of extracted attribute values for every entity
Stopping Condition : If top-k values for every entity have been known, stop, otherwise
visit step 1 This paper addresses each quality and potency challenges and offers
additional common documents in results by specializing in the importance of
information. however this technique could ignore the new and recent net pages, that
could be containing vital knowledge.
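A minimal sketch of the iterative loop described above is given below; the extractor, popularity score and stopping test are hypothetical stand-ins used only to show the control flow, not the system of [8].

from collections import defaultdict

def popularity_guided_topk(documents, extract, popularity, k, batch_size=10):
    """documents: iterable of documents; extract(doc) returns a list of
    (entity, value, confidence) tuples; popularity(doc) returns a
    click-through-based score. Both callables are assumed helpers."""
    # Process popular documents first, since they are more likely
    # to contain important attribute values.
    queue = sorted(documents, key=popularity, reverse=True)
    scores = defaultdict(dict)                     # entity -> value -> best score
    while queue:
        batch, queue = queue[:batch_size], queue[batch_size:]   # 1. document selection
        for doc in batch:                                       # 2. extraction
            for entity, value, conf in extract(doc):
                prev = scores[entity].get(value, 0.0)
                scores[entity][value] = max(prev, conf * popularity(doc))
        # 3. top-k calculation
        topk = {e: sorted(v, key=v.get, reverse=True)[:k] for e, v in scores.items()}
        # 4. stopping condition (simplified): stop once every entity has k values
        if topk and all(len(vals) >= k for vals in topk.values()):
            return topk
    return {e: sorted(v, key=v.get, reverse=True)[:k] for e, v in scores.items()}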
3.
Conclusion:
This paper presents a survey of different aspects of the work done so far in the field of extracting data from web pages. Traditional systems focused on retrieving tabular data and producing general lists. Most of the descriptions are in natural language, which is not machine interpretable. Research then continued with topic mining of contiguous and non-contiguous data records, and later expanded to extracting general lists from the web more efficiently. The next step in this evolution was retrieving top-k list data from web pages, which gives ranked results; top-k list data is therefore of high importance. By understanding the issues faced by current systems, further improvements can be made in the extraction of top-k lists from web pages. Top-k data is of high quality and is cleaner than other forms of data on the web.
REFERENCES:
[1] Z. Zhang, K. Q. Zhu, H. Wang, and H. Li, “Automatic top-k list extraction from the web,” in IEEE ICDE, 2013.
[2] Z. Zhang, K. Q. Zhu, and H. Wang, “A system for extracting top-k lists from the web,” in KDD, 2012.
[3] W. Wu, H. Li, H. Wang, and K. Q. Zhu, “Probase: A probabilistic taxonomy for text understanding,” in SIGMOD, 2012.
[4] X. Cao, G. Cong, B. Cui, C. Jensen, and Q. Yuan, “Approaches to exploring category information for question retrieval in community question-answer archives,” TOIS, vol. 30, no. 2, p. 7, 2012.
[5] J. Wang, H. Wang, Z. Wang, and K. Q. Zhu, “Understanding tables on the web,” in ER, 2012, pp. 141–155.
[6] F. Fumarola, T. Weninger, R. Barber, D. Malerba, and J. Han, “Extracting general lists from web documents: A hybrid approach,” in IEA/AIE (1), 2011, pp. 285–294.
[7] Y. Song, H. Wang, Z. Wang, H. Li, and W. Chen, “Short text conceptualization using a probabilistic knowledgebase,” in IJCAI, 2011.
[8] M. Solomon, C. Yu, and L. Gravano, “Popularity-guided top-k extraction of entity attributes,” in WebDB, ACM, 2010.
[9] A. Angel, S. Chaudhuri, G. Das, and N. Koudas, “Ranking objects based on relationships and fixed associations,” in EDBT, 2009, pp. 910–921.
[10] G. Miao, J. Tatemura, W.-P. Hsiung, A. Sawires, and L. E. Moser, “Extracting data records from the web using tag path clustering,” in WWW, 2009, pp. 981–990.
[11] K. Fisher, D. Walker, K. Q. Zhu, and P. White, “From dirt to shovels: Fully automatic tool generation from ad hoc data,” in ACM POPL, 2008.
[12] N. Bansal, S. Guha, and N. Koudas, “Ad-hoc aggregations of ranked lists in the presence of hierarchies,” in SIGMOD, 2008, pp. 67–78.
[13] M. J. Cafarella, E. Wu, A. Halevy, Y. Zhang, and D. Z. Wang, “WebTables: Exploring the power of tables on the web,” in VLDB, 2008.
[14] W. Gatterbauer, P. Bohunsky, M. Herzog, B. Krupl, and B. Pollak, “Towards domain-independent information extraction from web tables,” in WWW, ACM Press, 2007, pp. 71–80.
[15] K. Chakrabarti, V. Ganti, J. Han, and D. Xin, “Ranking objects based on relationships,” in SIGMOD, 2006, pp. 371–382.
Bioinformatics and Connection with Cancer
Ms. Sunita Nikam,
ASM’s IIBR
Ms. Monali Reosekar,
Dr. D. Y. Patil Arts, Commerce & Science College,
Pimpri, Pune
ABSTRACT:
Bioinformatics is the application of computer technology to the management of biological information. Developments in both computer hardware and software have allowed for storing, distributing, and analyzing the data obtained from biological experimentation. Computers are used to gather, store, analyze and integrate biological and genetic information, which can then be applied to gene-based drug discovery and development. Bioinformatics has become an important part of many areas of biology and can be applied in cancer studies, research and therapies for many beneficial reasons. This paper gives an idea of bioinformatics, its definition, aims, applications and technologies, of how bioinformatics can be applied to discover and diagnose diseases such as cancer, and of how it can be useful in cancer care. It also highlights the most appropriate and useful resources available to cancer researchers. The result is a useful reference for cancer researchers and for people interested in bioinformatics who study cancer alike.
Keywords: Bioinformatics, Applications, Technologies, Data, Algorithms, Cancer.
Introduction:
Bioinformatics is conceptualizing biology in terms of molecules in the sense of physical chemistry, and then applying “informatics” techniques derived from disciplines such as applied mathematics, computer science, and statistics to understand and organize the information associated with these molecules on a large scale.
Bioinformatics is a new discipline that addresses the need to manage and interpret the data that has been massively generated by genomic research over the past decade. This discipline
represents the convergence of genomics, biotechnology and information technology, and
encompasses analysis and interpretation of data, modeling of biological phenomena, and
development of algorithms and statistics.
Genome studies have revolutionized cancer research in recent years as high-throughput
technologies can now be used to identify sets of genes potentially related with different
processes in cancer. However, managing all this data and organizing it into useful datasets is
still a challenge in the bioinformatics field. Finding relationships between the molecular and
genomic information and the clinical information available, within the medical informatics
domain, is currently driving the development of translational research in biomedicine. The
dispersion and complexity of the molecular information, the poor adherence to standards,
together with the fast evolution of the experimental techniques, pose obvious challenges for
the development of integrated molecular resources. In parallel, restricted access to medical
information together with the gaps in the development of standard terminologies are typical
limitations in the area of medical informatics. The development of research projects
combining medical and molecular information together with the current efforts to standardize
and integrate databases and terminologies are described in this review as a demonstration of
the fruitful activity in this area.
What is Cancer:
Cancer is the general name for a group of more than 100 diseases. Although there are many kinds of cancer, all cancers start because abnormal cells grow out of control. Untreated cancers can cause serious illness and death.
Cancer harms the body when altered cells divide uncontrollably to form lumps or masses of tissue called tumors (except in the case of leukemia, where cancer prohibits normal blood function by abnormal cell division in the blood stream). Tumors can grow and interfere with the digestive, nervous, and circulatory systems, and they can release hormones that alter body function. Tumors that stay in one spot and demonstrate limited growth are generally considered to be benign.
More dangerous, or malignant, tumors form when two things occur:
1. a cancerous cell manages to move throughout the body using the blood or lymphatic systems, destroying healthy tissue in a process called invasion;
2. that cell manages to divide and grow, making new blood vessels to feed itself, in a process called angiogenesis.
When a tumor successfully spreads to other parts of the body and grows, invading and destroying other healthy tissues, it is said to have metastasized. This process itself is called metastasis, and the result is a serious condition that is very difficult to treat.
The World Health Organization estimates that, worldwide, there were 14 million new cancer cases and 8.2 million cancer-related deaths in 2012 (their most recent data).
Cancer is one of the commonest causes of patient death in the clinic and a complex disease occurring in multiple organs per system, multiple systems per organ, or both, in the body. The poor diagnoses, therapies and prognoses of the disease could be mainly due to the variation of severities, durations, locations, sensitivity and resistance against drugs, cell differentiation and origin, and understanding of pathogenesis. With increasing evidence that the interactions and networks between genes and proteins play an important role in the investigation of cancer molecular mechanisms, it is necessary and important to introduce the new concept of Systems Clinical Medicine into cancer research, to integrate systems biology, clinical science, omics-based technology, bioinformatics and computational science to improve the diagnosis, therapies and prognosis of diseases. Cancer bioinformatics is a critical and important part of systems clinical medicine in cancer and the core tool and approach for carrying out the investigations of cancer in systems clinical medicine. The “Thematic Series on Cancer Bioinformatics” gathers the strength of BMC Bioinformatics, BMC Cancer, Genome Medicine and the Journal of Clinical Bioinformatics to headline the application of cancer bioinformatics to the development of bioinformatics methods, network biomarkers and precision medicine. The Series focuses on new developments in cancer bioinformatics and computational systems biology to explore the potential of clinical applications and improve the outcomes of patients with cancer.
Connection of Bioinformatics with Cancer:
DNA Microarrays
• Collection of microscopic DNA spots attached to a solid surface.
• Used mainly to measure gene expression.
• It contains pieces of a gene or a DNA sequence.
• Can accomplish many tasks in parallel.
Application of DNA Microarrays:
• Genetic warning signs.
• Drug selection.
• Target selection for drug design.
• Pathogen resistance.
• Measuring temporal variations in protein expression.
Mass Spectrometry
• Measures the mass-to-charge ratio of molecules.
• Useful for rapid identification of the components of proteins.
• Partial sequencing of proteins and nucleic acids.
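As a toy illustration of how the microarray measurements listed above are typically handled computationally, the Python sketch below treats the data as a genes-by-samples expression matrix and screens for genes whose mean expression differs between tumor and normal samples; the gene names and values are invented, and the simple mean-difference score is only a stand-in for proper statistical testing.

from statistics import mean

# Toy expression matrix: gene -> expression values across six samples
# (three tumor, three normal). Gene names and values are invented.
expression = {
    "HER2":  [8.1, 7.9, 8.4, 3.1, 3.0, 2.8],
    "GAPDH": [5.0, 5.1, 4.9, 5.0, 5.2, 4.8],
}
groups = ["tumor", "tumor", "tumor", "normal", "normal", "normal"]

def differential_genes(expression, groups, min_diff=1.0):
    """Rank genes by the absolute difference of mean expression between
    the tumor and normal groups (a crude stand-in for a statistical test)."""
    results = []
    for gene, values in expression.items():
        tumor = [v for v, g in zip(values, groups) if g == "tumor"]
        normal = [v for v, g in zip(values, groups) if g == "normal"]
        diff = mean(tumor) - mean(normal)
        if abs(diff) >= min_diff:
            results.append((gene, round(diff, 2)))
    return sorted(results, key=lambda r: abs(r[1]), reverse=True)

print(differential_genes(expression, groups))   # -> [('HER2', 5.17)]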
Diagnosis of Disease
• DNA sequencing can detect the absence of a gene or the presence of a mutation, and hence disease risks.
• Identification of gene sequences associated with disease will permit fast and reliable diagnosis of conditions such as Huntington disease.
• The relationship between genotype and disease risk is difficult to establish.
• Analysis of protein expression, as in the case of asthma, can help.
Bioinformatics is a new interdisciplinary area involving biological, statistical and
computational sciences. Bioinformatics will enable cancer researchers not only to manage,
analyze, mine and understand the currently accumulated, valuable, high-throughput data, but
also to integrate these in their current research programs. The need for bioinformatics will
become ever more important as new technologies increase the already exponential rate at
which cancer data are generated.
The volume of biological data collected during the course of biomedical research has
exploded, thanks in large part to powerful new research technologies.
The availability of these data, and the insights they may provide into the biology of
disease, has many in the research community excited about the possibility of expediting
progress toward precision medicine—that is, tailoring prevention, diagnosis, and treatment
based on the molecular characteristics of a patient’s disease.
Use of Bioinformatics
What is bioinformatics?
Bioinformatics is the application of computer science, applied math, and statistics to the
field of molecular biology. Bioinformatics is also the science of managing and analyzing vast
biological data with advanced computing techniques and has an essential role in searching for
unique genomic features, variations, and disease markers. The need for bioinformatics is
becoming more important as next-generation sequencing (NGS) technologies continue to
evolve.
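As a toy example of the “searching for variations” role mentioned above (not a real NGS pipeline, which would use alignment and statistical variant calling), the sketch below counts how many sequencing reads support a known single-base marker by matching a short sequence context; the reads and the marker are invented for illustration.

# Toy variant screening: count reads matching the reference context versus
# the variant context around a hypothetical marker site.
reads = [
    "ACGTTGACCGTA",
    "TTGACCGTACGG",
    "ACGTTGTCCGTA",     # carries the single-base change in the marker context
]
reference_context = "TGACC"     # invented reference bases around the site
variant_context   = "TGTCC"     # same context with the variant base

def allele_counts(reads, ref_ctx, var_ctx):
    """Return how many reads contain the reference or variant context."""
    ref = sum(1 for r in reads if ref_ctx in r)
    var = sum(1 for r in reads if var_ctx in r)
    return {"reference": ref, "variant": var}

print(allele_counts(reads, reference_context, variant_context))
# -> {'reference': 2, 'variant': 1} with the toy reads above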
What role does bioinformatics play in cancer research and treatment?
The merging of genomic technologies provides many opportunities for bioinformatics to
bridge the gap between biological knowledge and clinical therapy. Today, efforts are focused
on biomarker discovery and the early diagnosis of cancer through the application of various
technologies, including genomics and other individualized DNA characteristics. Regardless of
the technology being used, bioinformatics tools are required to extract the diagnostics or
prognostic information from complex data. Through different technologies and novel
bioinformatics tools, we attain a more complete understanding of cancer and expedite the
process of biomarker discovery.
How does bioinformatics fit into the concept of personalized medicine?
Both the development and use of bioinformatics are essential to the future of cancer therapeutics and personalized medicine. The complexity of cancer means that most cancer treatments work only for a small group of patients, rather than as a one-size-fits-all approach. This
results in both a large portion of patients receiving ineffective treatments and a huge financial
burden to healthcare systems. It is essential that we develop accurate bioinformatics tools for
INCON - XI 2016
386
ASM’s International E-Journal on
E-ISSN – 2320-0065
Ongoing Research in Management & IT
delivering the right treatment to the right patient, based on the individual makeup of their
specific tumor.
Are there specific research developments or advances in the field related to cancer
treatment that are especially exciting right now?
One exciting development is systems biology, which studies biological systems by integrating data into a dynamic network. This approach can identify where a network should be perturbed to achieve a desired effect, provide insights into drug function, and evaluate a drug’s likely efficacy and toxicity. Systems biology analysis requires sophisticated
bioinformatics software to find and analyze patterns in diverse data sources, producing an
integrated view of a specific cancer.
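As a minimal illustration of this network view (not any particular systems biology tool), the sketch below represents gene/protein interactions as a simple graph and ranks genes by connectivity as crude candidate perturbation points; the listed interactions serve only as an example.

from collections import defaultdict

# Small example interaction list; in practice such edges would come from
# curated interaction databases or experimental data.
edges = [("TP53", "MDM2"), ("TP53", "BRCA1"), ("TP53", "CDKN1A"),
         ("BRCA1", "RAD51"), ("EGFR", "GRB2")]

def degree_ranking(edges):
    """Build an undirected graph and rank nodes by number of neighbours."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    return sorted(((g, len(nbrs)) for g, nbrs in graph.items()),
                  key=lambda x: x[1], reverse=True)

print(degree_ranking(edges))    # TP53 comes out as the most connected hub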
In addition, we are entering a time for the convergence and unity of all sciences to fight
cancer. The merging of engineering, physics, mathematics, technology, and medicine will be
one of history’s most fruitful unions. Bioinformatics will certainly be an important part of it.
Bioinformatics should be viewed broadly as the use of mathematical, statistical, and
computational methods for the processing and analysis of biologic data. The genomic
revolution would not be possible without the sophisticated statistical algorithms on which
DNA sequencing, microarray expression profiling and genomic sequence analysis rest.
Although data management and integration are important, data analysis and interpretation are
the rate-limiting steps for achieving biological understanding and therapeutic progress.
Effective integration of scientific bioinformatics into biology and the training of a new
generation of biologists and statistical bioinformaticists will require leadership with a vision of
biology as an information science.
Development and use of bioinformatics is essential for the future of cancer therapeutics.
Most cancer treatments work for only a subset of patients and this is likely to remain true for
many molecularly-targeted drugs. This results in a large proportion of patients receiving
ineffective treatments and is a huge financial burden on our health care system. It is essential
that we develop accurate tools for delivering the right treatment to the right patient based on
biological characterization of each patient's tumor.
Gene-expression profiling of tumors using DNA microarrays is a powerful tool for
pharmacogenomic targeting of treatments. A good example is the Oncotype DX™ assay
(Genomic Health), recently described for identifying the subset of node-negative estrogen-receptor-positive breast cancer patients who do not require adjuvant chemotherapy.
Development of genomic tests that are sufficiently validated for broad clinical application
requires the sustained effort of a team that includes clinical investigators, biologic scientists
and biostatisticians. Accurate, reproducible, predictive diagnostics rarely result from the
unstructured retrospective studies of heterogeneous groups of patients that are commonly
deposited in the oncology literature, but never independently validated or broadly
applied. With proper focus and support, gene-expression-based diagnostic tests could be
developed today to assist patients and physicians with a wide range of difficult decisions
regarding the use of currently existing treatments. Development of such tests should be part of
a new paradigm for future therapeutics.
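To make the idea of a gene-expression-based test concrete, the sketch below computes a toy risk score as a weighted sum of a few expression values and maps it to a risk group; the genes, weights and cut-offs are invented placeholders, and this is not the Oncotype DX algorithm.

# Toy gene-expression risk score: weighted sum of normalized expression
# values, bucketed into risk groups. All numbers are hypothetical.
WEIGHTS = {"MKI67": 0.9, "ESR1": -0.6, "ERBB2": 0.8}   # hypothetical weights

def risk_score(expression):
    """expression: gene -> normalized expression value."""
    return sum(WEIGHTS[g] * expression.get(g, 0.0) for g in WEIGHTS)

def risk_group(score, low_cut=0.5, high_cut=1.5):       # hypothetical cut-offs
    if score < low_cut:
        return "low risk"
    return "intermediate risk" if score < high_cut else "high risk"

patient = {"MKI67": 1.2, "ESR1": 2.0, "ERBB2": 0.4}
s = risk_score(patient)
print(round(s, 2), risk_group(s))    # 0.2 -> "low risk" with these numbers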
The tools to achieve rapid advances in cancer therapeutics are available today. Rapid
progress requires wisdom to establish innovative multidisciplinary approaches for the delivery
of a new generation of truly effective cancer treatments.
Bioinformatics is also essential for enhancing the discovery of new drugs. Many tumors
consist of mixtures of subclones containing different sets of mutated, overexpressed and
silenced genes. This heterogeneity makes the process of identifying good molecular targets
very challenging. Most overexpressed genes and mutated genes may not represent good
molecular targets because resistant subclones are present. The best target is a 'red dot' gene
whose mutation occurs early in oncogenesis and dysregulates a key pathway that drives tumor
growth in all of the subclones. Examples include mutations in the genes ABL, HER-2, KIT,
EGFR and probably BRAF, in chronic myelogenous leukemia, breast cancer, gastrointestinal
stromal tumors, non-small-cell lung cancer and melanoma, respectively. Effective
development of therapeutics requires identification of red-dot targets, development of drugs
that inhibit the red-dot targets, and diagnostic classification of the pathways driving the growth
of each patient's tumor. Development and application of bioinformatics by multidisciplinary
teams conducting focused translational research is essential for all steps of this process.
Taking advantage of genomic technologies to develop drugs effectively and target them
to the right patients depends on the use of bioinformatics, in its broadest sense. The tools to
achieve rapid advances in cancer therapeutics are available today. Rapid progress requires
wisdom to establish innovative multidisciplinary approaches to focus our technologies and
organize our talents for the delivery of a new generation of truly effective cancer treatments.