How to Avoid Storage Over-Spending: Six Steps to Embracing Storage Efficiency  HighVail Systems

Transcription

How to Avoid Storage Over-Spending: Six Steps to Embracing Storage Efficiency
Brought to you by: HighVail Systems
High Availability
Business Continuity
Table of Contents
Introduction
Confronting Data Center Challenges
Understanding Storage Economics
Tiered Storage
Architecture and Virtualization
Adopting Best Practices
Implementation Considerations
Leveraging Partners
Conclusion
Introduction
Storage dynamics are changing with each passing day. With storage now one of the fastest growing data center expenditures, IT managers are struggling to grasp the realities behind storage costs and efficiency. Storage economics has become the order of the day for all configurations, from large scale data centers to computer rooms to closets.

Whether assessing the physical structure of data center operations (storage architectures and space), the business climate (doing more with less) or environmental stewardship (energy efficiency), an efficient storage environment by current standards is one that's flexible and agile enough to respond to changing data, corporate and environmental demands.

While there has been a long-standing practice of basing storage performance on the hard costs of disks and components, the realities of virtualization, consolidation and energy efficiency mean that this approach falls far short of addressing the total cost of ownership. Today, understanding performance and return on investment is about the entire operating climate: physical, business and environmental.

This White Paper will look at the challenges facing data center managers when it comes to storage proliferation and management. It will also look at a variety of approaches that can be applied to storage management, as well as best practices when planning storage strategies.
Confronting Data Center Challenges
Industry pundits find that, at a bare minimum, 50% of computer room space today is devoted to servers (27%) and storage (23%). Even at that, storage represents the fastest growing part of the pie, which means it will soon outstrip other space demands.
As data demands increase, we are witnessing an escalating proliferation of servers, data and storage devices that is creating financial, technical and environmental nightmares for many enterprises today. Simply put, data centers are getting too crowded, too costly and too complicated. The cost of acquiring and servicing technology is reaching untenable levels. Ongoing management and complexity are making it even more difficult to be efficient with human and fiscal resources.
Yet another issue driving up data center costs today is power consumption. Not only are hydro costs on the rise, corporate "eco" mandates are demanding that operations reduce power consumption. Whereas we once had the luxury of virtually unlimited power supplies, today charges are based on wattage and power consumption patterns, much like what we have seen in residential homes.

Archiving requirements and compliance have also put increased demand on storage resources and the need to keep applications for longer periods of time. This has put managers in the position of having to sort out how to move forward while maintaining the status quo.

With all these increased capacity requirements, costs will only rise over time, even more so when the need for resiliency is factored in. As volumes increase, critical service performance may be compromised if storage is not dealt with effectively. The question that weighs on everyone's mind is how to free up space without incurring risk or losing critical applications or data.

In the face of all these challenges, it has become all too clear that yesterday's processes cannot address the storage demands of today and the future.
Understanding Storage Economics
What data center managers are facing today comes down to simple economics.
Let us consider a few facts:
The fastest growing power user within a data center environment is storage – most companies are experiencing up to 200% CAGR (compounded annual growth rate). This means that within a five-year period, companies will experience a 10X growth in their power demands.

At the same time, individual server power consumption is actually decreasing because technology has become more efficient.

Storage costs increase with the levels of resiliency required. As compliance demands increase, resiliency needs will only escalate.

Critical services depend on secondary services such as data recovery, increasing both storage requirements and costs.

The purchase price of equipment (storage devices) represents less than 25% of total cost of ownership.
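As a minimal illustrative sketch of the compounding arithmetic behind growth figures like these (only the 10X-over-five-years figure comes from the list above; the helper names and any other rates are assumptions):

```python
def growth_multiple(cagr: float, years: int) -> float:
    """Total growth multiple after `years` of compounding at annual rate `cagr`."""
    return (1 + cagr) ** years

def implied_cagr(multiple: float, years: int) -> float:
    """Annual growth rate implied by reaching `multiple` over `years`."""
    return multiple ** (1 / years) - 1

# A 10X increase over five years implies roughly 58% compounded annual growth.
print(f"{implied_cagr(10, 5):.0%}")          # -> 58%
# Conversely, sustaining ~58% annual growth for five years compounds to about 10X.
print(f"{growth_multiple(0.585, 5):.1f}x")   # -> 10.0x
```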
Tiered Storage
The reality within data centers is that all IT is organic. Whether talking about components or storage, everything is subject to change. The "organic" nature of this new reality is leading to an entirely new paradigm in data center architecture and practices.

At the same time, the storage world is being flooded with terminology that essentially amounts to the same thing: doing more with less. Concepts such as caching, wide striping, virtualization, consolidation and repatriation all revolve around the need to maximize performance and reliability while minimizing risk and costs. Ultimately, the choice depends on an organization's existing infrastructure, its business goals and its storage needs for the foreseeable future.

There is no single solution that will resolve all data center needs, despite claims to the contrary. Mixing and matching is critical within the organic data center model. The question at the heart of all this is whether an organization is flexible and agile enough to access and/or move data when it needs to.

Companies can no longer afford to use "cheaper" siloed storage solutions, since they ultimately lead to greater back-end costs when the time comes to scale up. An essential foundation for approaching storage challenges in today's environment is the concept of dynamic multi-tiered storage. This should be a part of any short- or long-term strategy as a means to improve asset utilization and storage efficiency.

With dynamic tiered storage, organizations create a "storage pool" that contains multiple tiers of storage. In this way, data can be moved transparently between the different tiers, while enabling the acceleration of slower SATA (Serial Advanced Technology Attachment) drive performance with SSD (Solid State Disk) or caching technologies. The first tier is reserved for the most critical data, while the second and third tiers are assigned to secondary data storage requirements.

Dynamic provisioning within a virtualized pool provides a significantly higher level of efficiency, while reducing demands on storage requirements and mitigating potential risk. It does this by allowing organizations to virtually allocate storage resources as needed for an application.

To maximize the benefits of a tiered storage model, data center managers must understand their application needs and ensure data is properly placed as applications go through provisioning, development and testing, pre-production, production, retirement and decommissioning.

By way of example, during the provisioning process, it may be preferable to have the data on Tier 2. In the case of archiving, the chance of needing to access data that is more than six months old is minimal. This data can be stored using a combination of disk and tape, since instantaneous access would not be critical.

Managers also need to ensure that protocols and paths to the storage tier are correct. This is a concept that was not uncommon in the mainframe world, but fell out of use in open systems environments. Other factors to take into consideration include RTO/RPO (recovery time objective/recovery point objective) and replication frequency.
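To illustrate the kind of placement policy described above, here is a minimal sketch. The tier roles and the six-month archive threshold echo the text; the function name, labels and defaults are assumptions for illustration, not a prescribed implementation.

```python
from datetime import datetime, timedelta
from typing import Optional

# Minimal tier-placement sketch: the most critical data lands on Tier 1,
# secondary data on Tier 2, and data untouched for more than six months
# drops to an archive tier backed by disk and tape.
ARCHIVE_AGE = timedelta(days=180)

def choose_tier(criticality: str, last_accessed: datetime,
                now: Optional[datetime] = None) -> str:
    now = now or datetime.now()
    if now - last_accessed > ARCHIVE_AGE:
        return "Tier 3: archive (disk + tape)"
    if criticality == "critical":
        return "Tier 1: SSD-accelerated"
    return "Tier 2: SATA"

# A secondary dataset last read eight months ago belongs on the archive tier.
print(choose_tier("secondary", datetime.now() - timedelta(days=240)))
```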
Architecture and Virtualization
Virtualization can play a key part in a tiered storage strategy, since in principle it does away with the 1:1 equipment-to-application model. Instead, organizations can create multiple layers of a virtualization framework to address their tiered storage needs. In other words, organizations can remove the need to direct-connect servers and storage in order to do more with less.

Storage virtualization can be combined with server virtualization. In doing this, managers are able to pool multiple storage "pieces" into single or multiple devices to enable allocation of capacity to various applications and/or specific sets of data on an as-needed basis.

Since multiple layers of virtualization can be used at both the server and application level, managers gain even greater flexibility to maximize their data center usage while reducing operational costs. In addition, since fewer components are required, it also helps to lower power and cooling costs, while removing the complexities of system management and mitigating risk.
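As a rough sketch of the pooling idea, the following example aggregates several devices into a single virtual pool and allocates capacity to applications on demand. The class, method names and capacities are invented for illustration.

```python
# Minimal sketch of a virtualized storage pool: several physical devices are
# aggregated into one pool and capacity is handed out to applications on an
# as-needed basis, instead of tying hardware 1:1 to applications.
class StoragePool:
    def __init__(self, device_capacities_tb):
        self.capacity_tb = sum(device_capacities_tb)
        self.allocations = {}                      # application -> TB allocated

    @property
    def allocated_tb(self):
        return sum(self.allocations.values())

    def allocate(self, app, tb):
        if self.allocated_tb + tb > self.capacity_tb:
            raise RuntimeError("Pool exhausted: add devices or reclaim capacity")
        self.allocations[app] = self.allocations.get(app, 0) + tb

# Three devices of different sizes behave as one 40TB pool.
pool = StoragePool([10, 12, 18])
pool.allocate("erp", 6)
pool.allocate("email-archive", 9)
print(pool.capacity_tb - pool.allocated_tb)        # 25 TB left for any application
```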
Adopting Best Practices
Any effective IT strategy is one that applies industry best practices. In brief, managers should observe the following "ground rules" to ensure they are deriving the greatest value and performance from their storage architecture strategies.
Set target objectives. It is important to establish business objectives ahead of time. This may be improving efficiency, reducing costs, doing more with less, creating an eco-friendlier model or all of the above. By identifying objectives from the outset, it is easier to pinpoint existing synergies and infrastructure capabilities and capitalize on them before investing in additional technology.
Understand total costs. This goes beyond a simple dollar-per-terabyte approach to include total cost of ownership from planning and deployment, to ongoing management and support. In fact, there are approximately 30 different metrics associated with total cost of ownership. It's also important to keep in mind that while an IT manager may be focusing on capital expenditures, another person in the organization will be paying close attention to operating expenses.
Simplify through centralization and virtualization. Storage virtualization can be approached
the same way organizations are looking at virtualizing their servers. By creating processes that
allow applications to be where they need to be, enterprises can streamline operations, reduce
hardware requirements, and minimize complexities.
Create processes that move application data through the various tiers during its life cycle. The
key is to analyze an application's life cycle and simulate a production environment when doing
proofs of concept. This will ensure moving an application to end of life is less costly and data
remains highly available.
Analyze application work patterns to determine the best connectivity for storage access.
Categorize applications so they get placed in the proper tier during production periods.
Leverage storage pooling to improve efficiencies through wide striping and disk spindle sharing between applications (a brief striping sketch follows this list).
Engage experienced experts in the field and ask questions. Study what others are doing in this
area to determine the best approach.
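For the storage pooling item above, a minimal wide-striping sketch might look like the following; the chunk size, spindle count and function name are illustrative assumptions.

```python
# Minimal wide-striping sketch: data is cut into chunks and spread round-robin
# across all spindles in a pool, so many drives serve each application's I/O
# instead of a single dedicated disk.
CHUNK_SIZE = 256 * 1024          # 256 KiB stripe unit (assumed)

def stripe(data: bytes, spindles: int):
    """Return a mapping of spindle index -> list of chunks placed on it."""
    layout = {i: [] for i in range(spindles)}
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    for n, chunk in enumerate(chunks):
        layout[n % spindles].append(chunk)
    return layout

# 4 MiB of data spread across 8 spindles: each spindle holds 2 chunks.
layout = stripe(bytes(4 * 1024 * 1024), spindles=8)
print({spindle: len(chunks) for spindle, chunks in layout.items()})
```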
Implementation Considerations
To improve data center storage performance, IT managers should also consider the following processes
and practices:
Agile Storage
It is essential that a storage system augment or enhance a storage strategy rather than increase the overall burden within a data center environment. Agile storage takes into account a number of elements, including backup and replication strategies, zero-cost cloning practices (i.e. creating copies of production without consuming space) and convergence.
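As a hedged illustration of zero-cost cloning in general (a copy-on-write sketch, not any particular vendor's feature; all names and structures are assumptions), a clone can share its source's blocks and only consume new space when a block is rewritten:

```python
# Copy-on-write cloning sketch: a clone initially shares every block with its
# source volume, so creating it consumes almost no extra space; new space is
# only used when a block in the clone (or the source) is rewritten.
class Volume:
    def __init__(self, blocks=None):
        self.blocks = dict(blocks or {})     # logical block number -> data

    def clone(self) -> "Volume":
        # Copies only the block map; the block data itself is shared.
        return Volume(self.blocks)

    def write(self, lbn: int, data: bytes) -> None:
        # Writing replaces only the affected block mapping in this volume.
        self.blocks[lbn] = data

production = Volume({0: b"boot", 1: b"app", 2: b"data"})
test_copy = production.clone()               # "zero cost" at creation time
test_copy.write(2, b"synthetic test data")   # only now does the clone diverge
print(production.blocks[2], test_copy.blocks[2])   # b'data' b'synthetic test data'
```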
Data Deduplication
An efficient storage solution has the ability to look for duplicate patterns to ensure that data is stored once instead of many times. For example, attachments sent via email are typically saved a minimum of five times, and can be duplicated "on purpose" as many as eight times. This is in addition to duplications that occur on their own, as in backup functions. This duplication places an incredible strain on networks and storage resources. The only way to resolve it is to look for patterns, either during ingestion or post-processing of data, at the block or file level, thereby freeing up space while improving overall system performance.
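A minimal sketch of block-level deduplication as described here might fingerprint each chunk and store it only once; the chunk size, hash choice and in-memory store are illustrative assumptions.

```python
import hashlib

# Block-level deduplication sketch: each fixed-size chunk is fingerprinted,
# and a chunk whose fingerprint has already been seen is stored only once.
CHUNK_SIZE = 4096

def dedupe(data: bytes, store: dict) -> list:
    """Store unique chunks in `store` and return the file's recipe of fingerprints."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)      # physical copy kept once
        recipe.append(digest)                # logical reference kept per occurrence
    return recipe

store = {}
attachment = bytes(8 * CHUNK_SIZE)           # same attachment saved by five recipients
recipes = [dedupe(attachment, store) for _ in range(5)]
print(len(recipes) * len(recipes[0]), "logical chunks,", len(store), "stored")  # 40 logical, 1 stored
```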
Thin Provisioning
Thin provisioning, in a shared storage environment, optimizes utilization of available storage. This approach is perhaps best explained using a working example. A customer needed eighty 300GB FC (Fibre Channel) drives to sustain the required IOPS (input/output operations per second) of an application. This translated into 24TB of raw capacity, of which about 18TB was usable. Since the application only required 2TB of storage, this resulted in 16TB of lost or "orphaned" storage. By implementing storage pooling, the customer could enable wide striping and achieve higher IOPS, while sharing disk spindles with other applications to improve utilization.
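The arithmetic in this example can be retraced as follows; the ~75% usable-to-raw ratio is an assumption inferred from the 24TB and 18TB figures above.

```python
# Retracing the thin-provisioning example above.
drives = 80
drive_size_gb = 300
usable_ratio = 0.75                        # assumed from 24TB raw vs ~18TB usable

raw_tb = drives * drive_size_gb / 1000     # 24.0 TB raw
usable_tb = raw_tb * usable_ratio          # 18.0 TB usable
required_tb = 2                            # what the application actually needs
orphaned_tb = usable_tb - required_tb      # 16.0 TB stranded behind one application

print(f"raw={raw_tb}TB usable={usable_tb}TB orphaned={orphaned_tb}TB")
```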
Purchase Flexibility
As requirements and storage strategies change, purchasing agreements should be reviewed carefully and/or modified so that they are
designed to reward efficiency. Capital expenditure forms should
include a section for environmental impact so that operations can
take advantage of economies that are realized through energy and
space savings.
Usability
Test drives and references are absolutely critical to a successful storage strategy. Having a test environment allows managers to get a good look and feel for a new storage solution. It is essential to ensure that properly trained technical resources are in place to manage a new storage environment, and that provisioning conforms to agreed-upon business processes.
Leveraging Partners
When developing effective storage strategies and practices, partners can play an important role in
supporting the transition to a new environment. Some factors to consider when selecting a partner
include:
Is there the flexibility to negotiate more competitive rates as opposed to cutting professional
services to meet budget constraints?
Can the partner work together with an organization to leverage existing equipment and infrastructure resources before making additional capital expenditures?
Does the partner offer project management skills?
Does the partner have the facilities and/or training to test drive solutions prior to implementation?
Will the partner engage in open dialogue?
What do other customers have to say about the partner?
Conclusion
As we enter a new era of storage, companies must consider all aspects of efficiency, from capacity and performance to footprint and costs. By combining the right strategies and technologies, the results can exceed expectations. By adopting a combination of the storage approaches outlined in this paper, one manufacturer of storage solutions, for example, was able to increase storage utilization to 90%, with top systems attaining 300% utilization. Today, one full-time engineer is able to manage 300TB of data. The company has also reduced rack space by 80% and cut development time by more than 20%.

In our experience, it is not uncommon to see storage expenditures reduced by 50% or more by putting data in the right place, applying strategic workload distribution and mixing workloads to maximize performance, making sure data can move seamlessly between tiers, and leveraging available partnerships.

The journey to this level of efficiency, however, begins with an understanding of what is in place today and what is available in the market, and bringing the two together to establish the best of both worlds. Rather than turning this into a daunting task, a good approach is to start with a small-scale data project to ensure that a plan will reduce expenditures, and then expand on it as comfort levels grow.
Six Steps to Embracing Storage Efficiency
1. Confront data center challenges: What are the pain points?
2. Understand storage economics: What does your organization have, what's out there, and what is missing?
3. Set target objectives: Know objectives beforehand. Consider if money can be saved/made.
4. Adopt best practices: Store data in the right place at the right time.
5. Implement new technologies and processes: De-dupe, thin provisioning, functional storage.
6. Leverage partners: Know the key factors that need to be considered when engaging with a new partner.
100 Adelaide Street West, Suite 600
Toronto, Ontario, M5H 1S3, Canada
416-867-3000
info@highvail.com
www.highvail.com
info@hds.com
www.hds.com