Disaster Recovery using Hitachi TrueCopy on Hitachi Unified Compute Platform for the SAP HANA Platform using Hitachi Compute Blade 2500, Hitachi Virtual Storage Platform G1000, and Hitachi NAS Platform 4060
Reference Architecture Guide
By Abhishek Dhanuka
April 2016
Feedback
Hitachi Data Systems welcomes your feedback. Please share your thoughts by sending an email message to SolutionLab@hds.com. To assist in routing your message, use the paper number in the subject line and the title of this white paper in the text.
Contents
Solution Overview................................................................................................................ 3
Hardware Elements......................................................................................................................... 7
Software Elements ........................................................................................................................ 10
Solution Design................................................................................................................... 12
Hitachi Compute Blade 2500 Chassis Configuration ................................................................... 12
520X B2 Server Blade Architecture .............................................................................................. 14
Fibre Channel SAN Architecture ................................................................................................... 14
Storage Architecture ..................................................................................................................... 15
Hitachi NAS Platform 4060 Architecture....................................................................................... 23
Network File System Design for Shared Binaries ......................................................................... 25
Management Server...................................................................................................................... 26
Network Architecture .................................................................................................................... 27
SAP Storage Connector API Fibre Channel Client ....................................................................... 33
Inter-site Configuration ................................................................................................................. 33
Disaster Recovery and Replication Components ......................................................................... 34
Engineering Validation ...................................................................................................... 40
Test Methodology ......................................................................................................................... 40
Test Results .................................................................................................................................. 40
This reference architecture guide describes the necessary components and overall layout of a disaster recovery solution using Hitachi TrueCopy on Hitachi Unified Compute Platform for the SAP HANA platform in a scale-out configuration. This reference architecture guide documents how to deploy this configuration using the following:
 Hitachi Compute Blade 2500 (CB 2500) with 520X B2 server blades
 Quanta Cloud Technology QuantaPlex T41S-2U server with one node
 Hitachi Virtual Storage Platform G1000 (VSP G1000)
 Hitachi NAS Platform 4060 (HNAS 4060)
 System management unit
 Hitachi TrueCopy
 SAP HANA
 Either of the two operating systems:
   SUSE Linux Enterprise Server
   Red Hat Enterprise Linux
The testing of this solution in the lab was only on a 2+1 configuration with 3 TB SAP HANA nodes. However, this reference
architecture supports the scale-out configurations in Hitachi Unified Compute Platform for the SAP HANA Platform using 3
TB or 4 TB SAP HANA Nodes in Scale-out 48 TB or 64 TB Configuration of 16 Active Nodes and 3 Standby Nodes with
Hitachi Compute Blade 2500, 520X B2 Server Blades, and Hitachi Virtual Storage Platform G1000 Reference Architecture
Guide (AS-399-04 or later, PDF). For information concerning your implementation, contact your GSS representative.
This document supports understanding the example architecture of disaster recovery using Hitachi TrueCopy on Unified
Compute Platform for SAP HANA in a scale-out configuration.
The scale-out environment for Unified Compute Platform for SAP HANA is a preconfigured analytical appliance that provides real-time access to operational data for use in analytic models. Changes to this architecture require approval from the following:
 Sales
 Solution Engineering and Technical Operations
 Solution product management
Failure of SAP HANA may result in revenue loss. For protection from this loss, use two sites in the disaster recovery strategy. In addition to the failover production instance, the second site can handle the quality assurance environment of the SAP HANA landscape.
The primary business problem this solution answers is disaster recovery for SAP HANA. This solution performs
synchronous replication of SAP HANA data volumes and log volumes on Hitachi Virtual Storage Platform G1000 to the
secondary site. It also performs synchronous replication of the SAP HANA binaries and other configuration files stored in
the /hana/shared file system on Hitachi NAS Platform 4060 to the secondary site.
Data centers at each site must have almost identical hardware for this disaster recovery solution. Implementing Hitachi
Universal Replicator on this environment permits the additional use of the secondary site as a quality assurance
environment. This additional use requires adding additional disk drives on the secondary site storage to run the quality
assurance system.
Hitachi Virtual Storage Platform G1000 technology permits maintaining sufficient performance for the SAP HANA
production instance and site-to-site replication.
This technical paper assumes you have familiarity with the following:
 Storage area network-based storage systems
 Network attached storage (NAS) systems
 General storage concepts
 General storage replication skills and concepts
 General network and virtual IP knowledge
 General WAN knowledge
 Advanced SAP HANA skills
 Disaster recovery scenarios
 Common IT storage practices
Note — Testing of this configuration occurred in the Hitachi lab environment. Many things affect production
environments beyond prediction or duplication in a lab environment. Follow the recommended practice of conducting
proof-of-concept testing for acceptable results in a non-production, isolated test environment that otherwise matches
your production environment before your production implementation of this solution.
Solution Overview
The primary site A contains a production SAP HANA database instance. Implement this site as a scale-out configuration
of Hitachi Unified Compute Platform for the SAP HANA platform.
The secondary site B is an exact replica of the primary site, with the exception of optional additional storage disks. Site B houses the following:
 Failover production instance
 (Optional) Non-production instances, which require additional disks, such as the following:
   Quality assurance
   Development
   Test
The design of the SAP HANA in-memory database enables this solution to use the same set of Hitachi Compute Blade 2500 nodes for production and non-production instances.
In the test environment for this solution, the Hitachi Virtual Storage Platform G1000 at the secondary site housed two sets of disks for data, log, and /hana/shared LUNs for the following:
 The replicated production instance
 (Optional) The quality assurance system
In this solution, install command control interface on the management servers at the primary site and the secondary site. Command control interface performs the data replication operations within Hitachi Virtual Storage Platform G1000 at each site using Hitachi Open Remote Copy Manager instances. Each instance on the primary site and secondary site management server has its own Hitachi Open Remote Copy Manager configuration file that lists the following for replication between the two sites:
 SAP HANA data volumes
 SAP HANA log volumes
 Hitachi NAS Platform LUNs
Hitachi Virtual Storage Platform G1000 works with Hitachi TrueCopy to enable SAP HANA disaster recovery. In this
solution, a Fibre Channel over IP (FCIP) switch is used for the wide area network connections between Hitachi Virtual
Storage Platform G1000 at primary site A and secondary site B.
For the synchronous replication of the Hitachi NAS Platform LUNs hosting the /hana/shared file system on Hitachi Virtual Storage Platform G1000, a one-time initial configuration is required after completing the initial pair copy operations to configure the RAID mirror relationships between the LUNs on Hitachi NAS Platform at both sites. During this initial configuration, the Hitachi NAS Platform clusters at both sites must be made aware of the TrueCopy LUN relationship between them by using the RAID mirroring (sd-mirror-remotely) command.
In case of an outage or any component failure at the primary site, an administrator initiates a manual failover to the secondary site using customized scripts. There are two possibilities for enabling client connection recovery: a virtual IP failover or a DNS failover configuration. The actual implementation differs, based on the network and cluster management capabilities.
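As an illustration of the virtual IP approach, the failover procedure can move a client-facing IP address to the SAP HANA nodes at the secondary site. This is a minimal sketch only; the address, prefix, and bond interface name are assumptions, not values from this configuration:

    # On the secondary site node that takes over client traffic (hypothetical values)
    ip addr add 192.168.200.101/24 dev bond2      # bring up the virtual IP on the client network bond
    arping -c 3 -U -I bond2 192.168.200.101       # send gratuitous ARP so clients and switches learn the new location

With a DNS failover configuration, the same effect is achieved by updating the host records that the SAP HANA clients resolve, so no address moves at the network layer.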
This solution supports four different disaster recovery options, described below.
1. Add a Disaster Recovery Site
This option is for a primary site for production and a secondary site for disaster recovery.
2. Add a Quality Assurance or Development Instance
This option is for a primary site for production and a secondary site for disaster recovery plus a single quality assurance or development SAP HANA instance.
3. Disaster Recovery Connectivity Bundle
This option is for a Fibre Channel over IP (FCIP) switch between two sites.
4. Only Disaster Recovery Site
This option is for adding a secondary site for a disaster recovery solution to an existing SAP HANA landscape. (The
primary site for production already exists.)
To perform the replication of Hitachi Unified Compute Platform for the SAP HANA platform, the reference solution uses the following:
 Hitachi Compute Blade 2500 with 520X B2 server blades
 Quanta Cloud Technology QuantaPlex T41S-2U server
  An ultra-dense design equipped with four independent nodes, it creates the flexibility to set up different workloads independently in one 2U shared infrastructure, providing optimal data center performance. This solution uses only one node out of the four nodes.
 Hitachi Virtual Storage Platform G1000
  This storage platform scales for all data types and flexibly adapts for performance, capacity, and multivendor storage.
 Hitachi NAS Platform 4060
  This is a network-attached storage solution used for file sharing, file server consolidation, data protection, and business-critical NAS workloads.
 System management unit software
  Providing front-end server administration and monitoring tools for Hitachi NAS Platform, it supports clustering and acts as a quorum device in a cluster.
 Hitachi TrueCopy
  For synchronous replication up to 43 miles (70 km), Hitachi TrueCopy provides a no-data-loss, rapid-restart solution with real-time copies that are the same as the originals, reducing recovery time to minutes.
 SAP HANA
  This is a multi-purpose, in-memory database appliance to analyze transactional and analytical data.
 Brocade VDX 6740-48 switch
  This 48-port switch provides 10 GbE connectivity to the appliance.
 Brocade ICX 6430-48 switch
  This 48-port, 1 GbE switch provides the management network for the appliance.
 Brocade ICX 6430-24 switch
  This 24-port, 1 GbE switch provides the NAS Platform private network.
 Brocade 6505 Extension switch
  A 24-port, entry-level switch that provides Fibre Channel connectivity between the storage systems at the primary site and the secondary site.
Figure 1 on page 6 shows the configuration of this solution.
Figure 1
Hardware Elements
Table 1 describes the hardware needed to deploy the three active nodes and one standby node configuration at each site.
All the configurations of Hitachi Unified Compute Platform for the SAP HANA Platform are supported for Hitachi
TrueCopy.
Table 1. Hardware Elements

Hardware | Quantity | Configuration | Role
Hitachi Compute Blade 2500 | 4 | 8-blade chassis; 2 management modules; 10 cooling fan modules; 5 power supply modules | Server blade chassis. One chassis is required for two SAP HANA nodes.
520X B2 server blade | 24 | 2 × 18-core processors; 768 GB RAM; 2 × FIVE-FX 16 Gb/sec 2-port Fibre Channel PCI-Ex cards per HANA node; 4 × 2-port 10GBASE-SR PCI-Ex cards per HANA node; 2 pass-through mezzanine cards on Mezzanine Slot 2 and Slot 4 of each server blade, except for Server Blade 15 | Server blade for the SAP HANA nodes. Four server blades are required for each SAP HANA node.
SMP connection board for 520X server blade | 6 | 4-blade SMP connection board; SMP expansion module; SMP connector cover | SMP connector to turn four physical blades into one SAP HANA node with 8 × 18-core processors and 3 TB of memory. One SMP connector is required per SAP HANA node.
Hitachi NAS Platform 4060 | 4 | For every NAS Platform server: 2 cluster ports; 2 × 10 GbE ports; 2 Fibre Channel ports; 2 Ethernet ports | Provides the NFS shared file system for SAP HANA binaries, cluster-wide configuration files, and for storing one in-memory backup.
System management unit | 2 | Intel Core 2 Duo E7500 processor, 2.93 GHz CPU, 4 GB RAM | NAS Platform cluster management.
Hitachi Virtual Storage Platform G1000 | 2 | 2 MPB pairs; 2 DKA (BED) pairs; 3 CHA (FED) pairs | Block storage for SAP HANA nodes and NAS Platform.
Quanta Cloud Technology QuantaPlex T41S-2U server | 2 | For the T41S-2U server: Intel Xeon E5-2620 v3 processor, 2.4 GHz CPU, 32 GB RAM; 2 × 500 GB 7200 RPM SATA drives; 1 dual-port 10 GbE Intel 82599ES SFP+ OCP mezzanine card; 1 dual-port 1 GbE Base-T Intel i350 mezzanine card; Emulex dual-port 8 Gb/sec Fibre Channel HBA | Management server that runs the following: NTP; Hitachi Command Suite; Hi-Track Remote Monitoring system; SAP HANA Studio; command control interface.
Brocade VDX 6740-48 port switch | 8 | 10 GbE, 48 ports; two switches with two distinct VLANs, each dedicated to the NFS and SAP HANA intra-cluster networks; two switches with one VLAN to provide the uplink network to the customer network infrastructure | 10 GbE NFS and intra-cluster network; 10 GbE client network.
Brocade ICX 6430-48 port switch | 2 | 1 GbE, 48 ports | 1 GbE management network.
Brocade ICX 6430-24 port switch | 2 | 1 GbE, 24 ports | NAS Platform private network.
Brocade 6505 switch | 4 | 24-port Fibre Channel switch | Fibre Channel switch for storage connections.
Hitachi Compute Blade 2500
Hitachi Compute Blade 2500 delivers enterprise computing power and performance with unprecedented scalability and
configuration flexibility. Lower your costs and protect your investment.
Flexible I/O architecture and logical partitioning allow configurations to match application needs exactly with Hitachi
Compute Blade 2500. Multiple applications easily and securely co-exist in the same chassis.
Add server management and system monitoring at no cost with Hitachi Compute Systems Manager. Seamlessly integrate
with Hitachi Command Suite in Hitachi storage environments.
This configuration uses 24 × 520X B2 server blades in the Hitachi Compute Blade 2500 chassis. Table 2 shows the
specifications for the 520X B2 server blade used in this solution.
Table 2. 520X B2 Server Blade Configuration

Feature | Configuration
Processors | 2 × Intel Xeon processor E7-8880 v3 per server blade, 2.3 GHz, 18 cores
Memory | 768 GB RAM per server blade, 48 DIMM slots populated with 16 GB DIMMs
Ports | 1 × USB 3.0 port; KVM connector (VGA, COM, USB 2.0 2-port)
Hitachi NAS Platform 4060
Hitachi NAS Platform is an advanced and integrated network attached storage (NAS) solution. It provides a powerful tool for file sharing, file server consolidation, data protection, and business-critical NAS workloads.
 Powerful hardware-accelerated file system with multi-protocol file services, dynamic provisioning, intelligent tiering, virtualization, and cloud infrastructure
 Seamless integration with Hitachi SAN storage, Hitachi Command Suite, and Hitachi Data Discovery Suite for advanced search and index
 Integration with Hitachi Content Platform for active archiving, regulatory compliance, and large object storage for cloud infrastructure
This solution uses NAS Platform 4060 servers for file system sharing of the global binary and configuration SAP HANA
files. There are four NAS Platform 4060 server nodes.
The system management unit provides front-end server administration and monitoring tools for NAS Platform. It supports
clustering and acts as a quorum device in a cluster.
Hitachi Virtual Storage Platform G1000
Hitachi Virtual Storage Platform G1000 provides an always-available, agile, and automated foundation that you need for a
continuous infrastructure cloud. This delivers enterprise-ready software-defined storage, advanced global storage
virtualization, and powerful storage.
Supporting always-on operations, Virtual Storage Platform G1000 includes self-service, non-disruptive migration and
active-active storage clustering for zero recovery time objectives. Automate your operations with self-optimizing,
policy-driven management.
The following reside on this storage device:
 Operating system LUNs
 Data LUNs
 Log LUNs
 LUNs for the Hitachi NAS Platform cluster
This solution uses two Hitachi Virtual Storage Platform G1000 storage platforms.
To properly size the storage, refer to Hitachi Unified Compute Platform for the SAP HANA Platform using 3 TB SAP HANA
Nodes in a Scale-out 48 TB Configuration of 16 Active Nodes and 3 Standby Nodes with Hitachi Compute Blade 2500
Chassis, 520X B2 Server Blades, and Hitachi Virtual Storage Platform G1000 Reference Architecture Guide (AS-399-02,
PDF).
At any time, the secondary site has only one live SAP HANA instance, either the production instance or the quality assurance instance. Normally, the quality assurance instance runs at the secondary site; in case of a service outage, the secondary site takes over as the production site.
Use the server priority manager at the secondary site to do the following:
 Designate the prioritized ports (replication) and non-prioritized ports (quality assurance).
 Set the upper limits and thresholds for the I/O activity of these ports to prevent low-priority activities from negatively affecting high-priority activities, such as replication.
Additional information is available in the Hitachi Virtual Storage Platform G1000 Performance Guide.
Software Elements
Table 3 describes the software products used to deploy the three active nodes and one standby node configuration.
Table 3. Software Elements

Software | Version
Hitachi Storage Navigator, a part of Hitachi Storage Virtualization Operating System | Microcode dependent
SMU software | 12.4.3924.05
Hitachi NAS Platform firmware | 12.4.3924.11
Hitachi TrueCopy | Microcode dependent
Command control interface | 01-34-03/04
Hitachi Dynamic Link Manager | 8.20.1
Operating system (used on Hitachi Compute Blade 2500) | SUSE Linux Enterprise Server for SAP Applications 11 SP3; alternate: Red Hat Enterprise Linux (RHEL) 6.6
SAP HANA | 1.0 SPS09, Rev. 97 or later
SAP HANA Studio | 1.0 SPS09, Rev. 97 or later
Operating system (used on QuantaPlex T41S-2U) | Microsoft Windows Server 2012 R2 Standard Edition
Brocade VDX 6740 switch | 4.1.3b or later
Brocade ICX 6430-48 port switch | 7.4.00 or later
Brocade 6505 switch | 7.4.00 or later
Hitachi Storage Virtualization Operating System
Hitachi Storage Virtualization Operating System spans and integrates multiple platforms. It integrates storage system software to provide system element management and advanced storage system functions. Used across multiple platforms, Storage Virtualization Operating System includes storage virtualization, thin provisioning, storage service level controls, dynamic provisioning, and performance instrumentation.
Storage Virtualization Operating System includes standards-based management software on a Hitachi Command Suite base. This provides storage configuration and control capabilities for you.
To enable essential management and optimization functions, this solution uses Hitachi Storage Navigator, a part of Storage Virtualization Operating System. Storage Navigator runs on most browsers. A command line interface is available.
Solution Design
The detailed design for the Hitachi TrueCopy solution on Hitachi Unified Compute Platform for the SAP HANA Platform in a scale-out configuration is based on specifications from SAP and is a 3+1 node configuration that includes the following:
 "Hitachi Compute Blade 2500 Chassis Configuration," starting on page 12
 "520X B2 Server Blade Architecture," starting on page 14
 "Fibre Channel SAN Architecture," starting on page 14
 "Storage Architecture," starting on page 15
 "Hitachi NAS Platform 4060 Architecture," starting on page 23
 "Network File System Design for Shared Binaries," starting on page 25
 "Management Server," starting on page 26
 "Network Architecture," starting on page 27
 "SAP Storage Connector API Fibre Channel Client," starting on page 33
 "Inter-site Configuration," starting on page 33
 "Disaster Recovery and Replication Components," starting on page 34
Hitachi Compute Blade 2500 Chassis Configuration
This solution uses four Hitachi Compute Blade 2500 chassis. Each chassis has a total of eight 520X B2 server blades.
As each SAP HANA node is four server blades connected using a four-blade SMP interface connector, each chassis
contains two SAP HANA nodes.
The PCIe slots on each of the chassis have the following components.
 One 2-port 10GBASE-SR PCI-Ex card for each of the following IOBD slots:
   01B, 03A, 09B, 11A, 02A, 04B, 10A, 12B
 One FIVE-FX 16 Gb/sec 2-port Fibre Channel PCI-Ex card for each of the following IOBD slots:
   01A, 09A, 04A, 12A
Figure 2 shows the front and back view of Chassis 1 for Hitachi Compute Blade 2500. Use the same configuration for the
remaining chassis.
Figure 2
Table 4 shows the Hitachi Compute Blade 2500 chassis configuration of all the SAP HANA nodes.
Table 4. Hitachi Compute Blade 2500 Chassis Configuration of SAP HANA Nodes per Site

Chassis | Server Blades | SAP HANA Node Name | Role of SAP HANA Node
Chassis 1 | Blades 1, 3, 5, and 7 | HANA Node 1 | Master
Chassis 1 | Blades 9, 11, 13, and 15 | HANA Node 2 | Worker
Chassis 2 | Blades 1, 3, 5, and 7 | HANA Node 3 | Standby
520X B2 Server Blade Architecture
This solution uses 12 × 520X B2 server blades per site for the 3 TB SAP HANA configuration, with two active nodes plus one standby node at each disaster recovery site.
Each SAP HANA node has four server blades connected using the four-blade SMP interface connector. This creates a
single eight-socket SMP node with 144 cores and 3 TB of memory.
Table 5 lists the 3-node server blade configuration per site.
Table 5. Server Blade Configuration at Each Site

Server Blades | Total 12 server blades; 3 nodes comprising 4 SMP-connected server blades per node
Total Number of CPU Cores | 432
Total Memory (TB) | 9 TB
Fibre Channel SAN Architecture
Each of the 520X B2 server blades contains a pass-through mezzanine card on Mezzanine Slot 2 and Mezzanine Slot 4.
These slots connect to the 16 Gb/sec Hitachi FIVE-FX Fibre Channel PCI-Ex cards and to the 10 GbE NIC card installed in
the PCI-Ex slots through the backplane within the server chassis.
For each SAP HANA node, there are two dedicated Fibre Channel ports on Hitachi Virtual Storage Platform G1000. The
Fibre Channel cable connects the 16 Gb/sec Hitachi FIVE-FX Fibre Channel PCI-Ex cards on the chassis with the
designated Virtual Storage Platform G1000 ports.
Table 6 shows the storage port mapping, which is identical for both sites.
Table 6. Storage Port Mapping

SAP HANA Node | Chassis Name | PCI-Ex Slot, Port | Virtual Storage Platform G1000 Port
Node1 | Chassis 1 | IOBD 01A, Port 0 | 1C
Node1 | Chassis 1 | IOBD 04A, Port 0 | 2C
Node2 | Chassis 1 | IOBD 09A, Port 0 | 3C
Node2 | Chassis 1 | IOBD 12A, Port 0 | 4C
Node3 | Chassis 2 | IOBD 01A, Port 0 | 5C
Node3 | Chassis 2 | IOBD 04A, Port 0 | 6C
HNAS | HNAS1 | Hitachi NAS Platform Server 1, FC Port 1 | 1A
HNAS | HNAS1 | Hitachi NAS Platform Server 1, FC Port 3 | 2A
HNAS | HNAS2 | Hitachi NAS Platform Server 2, FC Port 1 | 1B
HNAS | HNAS2 | Hitachi NAS Platform Server 2, FC Port 3 | 2B
Table 7 lists the port properties for the direct connection between Hitachi Compute Blade 2500 and Hitachi Virtual
Storage Platform G1000.
Table 7. Port Properties

Property | Value
Port Attribute | Target
Port Security | Disabled
Port Speed | Auto
Fabric | ON
Connection Type | P-to-P
The Hitachi FIVE-FX 16 Gb/sec HBA can emulate FC-SW virtually. Enable the Multiple Port ID setting on the Hitachi FIVE-FX HBA.
Storage Architecture
The central storage system for the scale-out cluster for SAP HANA is a Hitachi Virtual Storage Platform G1000 storage platform. Several usage aspects divide the space provided by Virtual Storage Platform G1000, as follows:
 Operating system LUN provisioning for SAP HANA nodes
 Log device provisioning for the SAP HANA database
 Data device provisioning for the SAP HANA database
 Block storage provisioning for the Hitachi NAS Platform shared file system to store the SAP HANA binaries and cluster-wide configuration files
This solution follows a design that utilizes a building block approach in multiples of four active nodes, so a four-active-node building block design is used to provision storage for this setup.
Figure 3 on page 17 shows the RAID group configuration for the Virtual Storage Platform G1000 architecture used in the
SAP HANA appliance with four active nodes and one standby node.
Each SAP HANA node has its own data volume and log volume. Only active SAP HANA nodes need data volumes and log
volumes. Standby nodes do not require these volumes.
This design follows a four-node building block approach for the SAP HANA data volumes, log volumes, and shared binaries. Provision the parity groups in Figure 3 on page 17, as follows.
 Operating System LUN
   A single parity group configured as RAID-6 (6D+2P) on 900 GB drives provisions the operating system LUNs for SAP HANA nodes 1 to 5 (up to a maximum of 19) on Virtual Storage Platform G1000.
   From this parity group, create 5 LDEVs, each with a capacity of 100 GB.
   Each node has its own 100 GB LUN on Virtual Storage Platform G1000 for the operating system volume.
   Map each LDEV exclusively to the corresponding SAP HANA node as LUN ID 00.
   The installation of SUSE Linux Enterprise Server for SAP Applications or Red Hat Enterprise Linux resides on the operating system LUN.
 Hitachi NAS Platform 4060 Block Storage
   The block storage for Hitachi NAS Platform consists of three parity groups on Virtual Storage Platform G1000, each configured as RAID-6 (6D+2P), on 24 × 900 GB drives in total, to store the shared binaries and configuration files of the SAP HANA database.
   In each of the three parity groups, create two LDEVs of 2400 GB each.
   Create a dynamic provisioning pool named HNAS_HDP_pool. Assign all the created Hitachi NAS Platform LDEVs to this pool. This allows the use of all the disks concurrently on NAS Platform for better performance.
   For a 4-node building block, create six virtual volumes of 2400 GB each in HNAS_HDP_pool. Execute the LUN path assignment for these virtual volumes to the ports in Table 6 on page 14 connected to Hitachi NAS Platform.
 SAP HANA Log Volumes
   For the SAP HANA log volumes, create two parity groups configured as RAID-6 (6D+2P) on 16 × 900 GB drives in total.
   In each of the parity groups, create two LDEVs of 600 GB each.
   Map each SAP HANA log volume to all SAP HANA nodes at each port with the LUN ID 1 of the specified host.
 SAP HANA Data Volumes
   For the SAP HANA data volumes, create four parity groups, each configured as RAID-6 (14D+2P), on 64 × 900 GB drives in total.
   Create four LDEVs with a capacity of 2816 GB in each parity group. Table 8 on page 18 shows the parity groups and LDEVs created for the data volumes.
   Assign four LDEVs (LUN IDs 2 to 5) for use as data volumes to each SAP HANA node, as shown in Table 8.
Figure 3
Table 8 shows the parity groups and LDEV assignment of the boot volumes, the Hitachi NAS Platform volumes, the SAP HANA log volumes, and the SAP HANA data volumes for the production system at both sites.

Table 8. Parity Groups and LDEV Assignment of Operating System, Hitachi NAS Platform, SAP HANA Log Volumes, and SAP HANA Data Volumes for Production System at Primary and Secondary Sites

Parity Group ID | Parity Group RAID Level and Disks | LDEV ID | LDEV Name | LDEV Size
1 | RAID-6 (6D+2P) on 900 GB 10k SAS drives | 00:01:00 | HANA_OS_LUN_N1 | 100 GB
1 | RAID-6 (6D+2P) on 900 GB 10k SAS drives | 00:02:00 | HANA_OS_LUN_N2 | 100 GB
1 | RAID-6 (6D+2P) on 900 GB 10k SAS drives | 00:03:00 | HANA_OS_LUN_N3 | 100 GB
1 | RAID-6 (6D+2P) on 900 GB 10k SAS drives | 00:04:00 | HANA_OS_LUN_N4 | 100 GB
2 | RAID-6 (6D+2P) on 900 GB 10k SAS drives | 00:00:01 | HNAS_VOL_1 | 2400 GB
2 | RAID-6 (6D+2P) on 900 GB 10k SAS drives | 00:00:02 | HNAS_VOL_2 | 2400 GB
3 | RAID-6 (6D+2P) on 900 GB 10k SAS drives | 00:00:03 | HNAS_VOL_3 | 2400 GB
3 | RAID-6 (6D+2P) on 900 GB 10k SAS drives | 00:00:04 | HNAS_VOL_4 | 2400 GB
4 | RAID-6 (6D+2P) on 900 GB 10k SAS drives | 00:00:05 | HNAS_VOL_5 | 2400 GB
4 | RAID-6 (6D+2P) on 900 GB 10k SAS drives | 00:00:06 | HNAS_VOL_6 | 2400 GB
5 | RAID-6 (6D+2P) on 900 GB 10k SAS drives | 00:01:01 | HANA_LOG_N1 | 600 GB
5 | RAID-6 (6D+2P) on 900 GB 10k SAS drives | 00:02:01 | HANA_LOG_N2 | 600 GB
6 | RAID-6 (6D+2P) on 900 GB 10k SAS drives | 00:03:01 | HANA_LOG_N3 | 600 GB
6 | RAID-6 (6D+2P) on 900 GB 10k SAS drives | 00:04:01 | HANA_LOG_N4 | 600 GB
7 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:01:02 | HANA_DATA_N1_01 | 2816 GB
7 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:02:02 | HANA_DATA_N2_01 | 2816 GB
7 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:03:02 | HANA_DATA_N3_01 | 2816 GB
7 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:04:02 | HANA_DATA_N4_01 | 2816 GB
8 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:01:03 | HANA_DATA_N1_02 | 2816 GB
8 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:02:03 | HANA_DATA_N2_02 | 2816 GB
8 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:03:03 | HANA_DATA_N3_02 | 2816 GB
8 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:04:03 | HANA_DATA_N4_02 | 2816 GB
9 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:01:04 | HANA_DATA_N1_03 | 2816 GB
9 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:02:04 | HANA_DATA_N2_03 | 2816 GB
9 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:03:04 | HANA_DATA_N3_03 | 2816 GB
9 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:04:04 | HANA_DATA_N4_03 | 2816 GB
10 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:01:05 | HANA_DATA_N1_04 | 2816 GB
10 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:02:05 | HANA_DATA_N2_04 | 2816 GB
10 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:03:05 | HANA_DATA_N3_04 | 2816 GB
10 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:04:05 | HANA_DATA_N4_04 | 2816 GB
Table 9 shows the dynamic provisioning pool IDs and virtual volume LDEV IDs for Hitachi NAS Platform.
Table 9. Dynamic Provisioning Pool IDs and Virtual Volume LDEV IDs of Hitachi NAS Platform for Production System at Both Sites

Dynamic Provisioning Pool ID | Dynamic Provisioning Pool Name | Virtual Volume Name | Virtual Volume LDEV ID for NAS Platform Shared Binaries | Virtual Volume Size for NAS Platform Shared Binaries | MPB Assignment
0 | HNAS_HDP_Pool | HNAS_HANA_VVOL_1 | 00:0A:01 | 2400.00 GB | MPB0
0 | HNAS_HDP_Pool | HNAS_HANA_VVOL_2 | 00:0A:02 | 2400.00 GB | MPB4
0 | HNAS_HDP_Pool | HNAS_HANA_VVOL_3 | 00:0A:03 | 2400.00 GB | MPB0
0 | HNAS_HDP_Pool | HNAS_HANA_VVOL_4 | 00:0A:04 | 2400.00 GB | MPB4
0 | HNAS_HDP_Pool | HNAS_HANA_VVOL_5 | 00:0A:05 | 2400.00 GB | MPB0
0 | HNAS_HDP_Pool | HNAS_HANA_VVOL_6 | 00:0A:06 | 2400.00 GB | MPB4
While mapping the LUN path assignment for each node, add the LUNs in the following order:
1. Map the operating system LUN for the specific SAP HANA node.
2. Map the log volume and data volume of each SAP HANA node.
Table 10 shows an example configuration of the LUN path assignment for Node01 on the primary site. The LUN
assignment would be the same for all nodes except for the first LUN, which would be the operating system LUN of that
specific node.
Table 10. LUN Path Assignment at Primary Site for Node 01

LUN ID | LDEV ID | LDEV Name
0000 | 00:01:00 | hananode01
0001 | 00:01:01 | LOG_1
0002 | 00:01:02 | DATA_1_01
0003 | 00:01:03 | DATA_1_02
0004 | 00:01:04 | DATA_1_03
0005 | 00:01:05 | DATA_1_04
0006 | 00:02:01 | LOG_2
0007 | 00:02:02 | DATA_2_01
0008 | 00:02:03 | DATA_2_02
0009 | 00:02:04 | DATA_2_03
0010 | 00:02:05 | DATA_2_04
0011 | 00:03:01 | LOG_3
0012 | 00:03:02 | DATA_3_01
0013 | 00:03:03 | DATA_3_02
0014 | 00:03:04 | DATA_3_03
0015 | 00:03:05 | DATA_3_04
0016 | 00:04:01 | LOG_4
0017 | 00:04:02 | DATA_4_01
0018 | 00:04:03 | DATA_4_02
0019 | 00:04:04 | DATA_4_03
0020 | 00:04:05 | DATA_4_04
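For orientation, the first few LUN paths from Table 10 could also be added from the command control interface with raidcom instead of Hitachi Storage Navigator. This is a hedged sketch only; the host group name hana_node01 is an assumption, and the command-device login session setup is omitted:

    REM Operating system LUN, log LUN, and first data LUN of Node01 on storage port 1C (LUN IDs follow Table 10)
    raidcom add lun -port CL1-C hana_node01 -ldev_id 00:01:00 -lun_id 0
    raidcom add lun -port CL1-C hana_node01 -ldev_id 00:01:01 -lun_id 1
    raidcom add lun -port CL1-C hana_node01 -ldev_id 00:01:02 -lun_id 2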
Figure 4 shows the LUN and port assignment for the maximum of 19 SAP HANA server nodes on each Hitachi Virtual Storage Platform G1000.
Figure 4
Table 11 shows the parity groups and LDEV assignment of the Hitachi NAS Platform volumes, the SAP HANA log volumes, and the SAP HANA data volumes for the quality assurance system at the secondary site.
Table 11. Parity Groups and LDEV Assignment of Hitachi NAS Platform, SAP HANA Log Volumes, and SAP HANA Data Volumes for Quality Assurance System at Secondary Site

Parity Group ID | Parity Group RAID Level and Disks | LDEV ID | LDEV Name | LDEV Size
11 | RAID-6 (6D+2P) on 900 GB 10k SAS drives | 00:00:25 | HNAS_VOL_QA_1 | 2400 GB
11 | RAID-6 (6D+2P) on 900 GB 10k SAS drives | 00:00:26 | HNAS_VOL_QA_2 | 2400 GB
12 | RAID-6 (6D+2P) on 900 GB 10k SAS drives | 00:00:27 | HNAS_VOL_QA_3 | 2400 GB
12 | RAID-6 (6D+2P) on 900 GB 10k SAS drives | 00:00:28 | HNAS_VOL_QA_4 | 2400 GB
13 | RAID-6 (6D+2P) on 900 GB 10k SAS drives | 00:00:29 | HNAS_VOL_QA_5 | 2400 GB
13 | RAID-6 (6D+2P) on 900 GB 10k SAS drives | 00:00:30 | HNAS_VOL_QA_6 | 2400 GB
14 | RAID-6 (6D+2P) on 900 GB 10k SAS drives | 00:01:06 | HANA_LOG_QA_N1 | 600 GB
14 | RAID-6 (6D+2P) on 900 GB 10k SAS drives | 00:02:06 | HANA_LOG_QA_N2 | 600 GB
15 | RAID-6 (6D+2P) on 900 GB 10k SAS drives | 00:03:06 | HANA_LOG_QA_N3 | 600 GB
15 | RAID-6 (6D+2P) on 900 GB 10k SAS drives | 00:04:06 | HANA_LOG_QA_N4 | 600 GB
16 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:01:07 | HANA_DATA_QA_N1_01 | 2816 GB
16 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:02:07 | HANA_DATA_QA_N2_01 | 2816 GB
16 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:03:07 | HANA_DATA_QA_N3_01 | 2816 GB
16 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:04:07 | HANA_DATA_QA_N4_01 | 2816 GB
17 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:01:08 | HANA_DATA_QA_N1_02 | 2816 GB
17 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:02:08 | HANA_DATA_QA_N2_02 | 2816 GB
17 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:03:08 | HANA_DATA_QA_N3_02 | 2816 GB
17 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:04:08 | HANA_DATA_QA_N4_02 | 2816 GB
18 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:01:09 | HANA_DATA_QA_N1_03 | 2816 GB
18 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:02:09 | HANA_DATA_QA_N2_03 | 2816 GB
18 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:03:09 | HANA_DATA_QA_N3_03 | 2816 GB
18 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:04:09 | HANA_DATA_QA_N4_03 | 2816 GB
19 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:01:10 | HANA_DATA_QA_N1_04 | 2816 GB
19 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:02:10 | HANA_DATA_QA_N2_04 | 2816 GB
19 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:03:10 | HANA_DATA_QA_N3_04 | 2816 GB
19 | RAID-6 (14D+2P) on 900 GB 10k SAS drives | 00:04:10 | HANA_DATA_QA_N4_04 | 2816 GB
Table 12 shows the dynamic provisioning pool IDs and virtual volume LDEV IDs of Hitachi NAS Platform for the quality assurance system.
Table 12. Dynamic Provisioning Pool IDs and Virtual Volume LDEV IDs of Hitachi NAS Platform for Quality Assurance System at the Secondary Site

Dynamic Provisioning Pool ID | Dynamic Provisioning Pool Name | Virtual Volume Name | Virtual Volume LDEV ID for NAS Platform Shared Binaries | Virtual Volume Size for NAS Platform Shared Binaries | MPB Assignment
1 | HNAS_QA_HDP_Pool | HNAS_QA_VVOL_1 | 00:0A:25 | 2400.00 GB | MPB0
1 | HNAS_QA_HDP_Pool | HNAS_QA_VVOL_2 | 00:0A:26 | 2400.00 GB | MPB4
1 | HNAS_QA_HDP_Pool | HNAS_QA_VVOL_3 | 00:0A:27 | 2400.00 GB | MPB0
1 | HNAS_QA_HDP_Pool | HNAS_QA_VVOL_4 | 00:0A:28 | 2400.00 GB | MPB4
1 | HNAS_QA_HDP_Pool | HNAS_QA_VVOL_5 | 00:0A:29 | 2400.00 GB | MPB0
1 | HNAS_QA_HDP_Pool | HNAS_QA_VVOL_6 | 00:0A:30 | 2400.00 GB | MPB4
Table 13 shows an example configuration of the LUN path assignment for Node01 on secondary site. The LUN
assignment would be the same for all nodes except for the first LUN, which would be the operating system LUN of that
specific node.
Table 13. LUN Path Assignment at Secondary Site

LUN ID | LDEV ID | LDEV Name
0000 | 00:01:00 | hananode01
0001 | 00:01:01 | LOG_1
0002 | 00:01:02 | DATA_1_01
0003 | 00:01:03 | DATA_1_02
0004 | 00:01:04 | DATA_1_03
0005 | 00:01:05 | DATA_1_04
0006 | 00:02:01 | LOG_2
0007 | 00:02:02 | DATA_2_01
0008 | 00:02:03 | DATA_2_02
0009 | 00:02:04 | DATA_2_03
0010 | 00:02:05 | DATA_2_04
0011 | 00:03:01 | LOG_3
0012 | 00:03:02 | DATA_3_01
0013 | 00:03:03 | DATA_3_02
0014 | 00:03:04 | DATA_3_03
0015 | 00:03:05 | DATA_3_04
0016 | 00:04:01 | LOG_4
0017 | 00:04:02 | DATA_4_01
0018 | 00:04:03 | DATA_4_02
0019 | 00:04:04 | DATA_4_03
0020 | 00:04:05 | DATA_4_04
0021 | 00:01:06 | LOG_QA_1
0022 | 00:01:07 | DATA_QA_1_01
0023 | 00:01:08 | DATA_QA_1_02
0024 | 00:01:09 | DATA_QA_1_03
0025 | 00:01:10 | DATA_QA_1_04
0026 | 00:02:06 | LOG_QA_2
0027 | 00:02:07 | DATA_QA_2_01
0028 | 00:02:08 | DATA_QA_2_02
0029 | 00:02:09 | DATA_QA_2_03
0030 | 00:02:10 | DATA_QA_2_04
0031 | 00:03:06 | LOG_QA_3
0032 | 00:03:07 | DATA_QA_3_01
0033 | 00:03:08 | DATA_QA_3_02
0034 | 00:03:09 | DATA_QA_3_03
0035 | 00:03:10 | DATA_QA_3_04
0036 | 00:04:06 | LOG_QA_4
0037 | 00:04:07 | DATA_QA_4_01
0038 | 00:04:08 | DATA_QA_4_02
0039 | 00:04:09 | DATA_QA_4_03
0040 | 00:04:10 | DATA_QA_4_04
Hitachi NAS Platform 4060 Architecture
This describes the architecture for Hitachi NAS Platform 4060 and related systems.
This reference architecture uses two Hitachi NAS Platform 4060 file system modules. They are part of Hitachi Virtual
Storage Platform G1000 in the cluster configuration. The two NAS Platform servers are cluster interconnected with two 10
GbE network links.
Each NAS Platform server connects directly to the Virtual Storage Platform G1000 target ports using two Fibre Channel cables from each NAS Platform 4060 node.
System Management Unit
Web Manager, the graphical user interface of the system management unit, provides front-end server administration and
monitoring tools. It supports clustering and acts as a quorum device in a cluster. This solution uses an external system
management unit that manages two NAS Platform servers.
Use one of the following browsers to run Web Manager:
 Microsoft Internet Explorer: version 9.0 or later
 Mozilla Firefox: version 6.0 or later
Private Management Network
Connect the private management interfaces of the Hitachi NAS Platform 4060 servers and the system management unit
to the Brocade ICX6430-24 port switch. Devices connected to this private ICX6430-24 port switch are only accessible
through the system management unit.
Public Data Network
The public data network consists of the public Ethernet port of Hitachi NAS Platform 4060 and system management unit
connected to the Brocade ICX6430-48 switch.
Storage Subsystem
This solution uses Hitachi Virtual Storage Platform G1000 as the storage subsystem. Hitachi NAS Platform has direct
attached Fibre Channel connections with Virtual Storage Platform G1000.
Server Connections
Figure 5 shows the back of Hitachi NAS Platform 4060.
Figure 5
Port C1 and Port C2 are the NAS Platform 4060 cluster ports. To enable clustering, do the following:
 Connect Port C1 of the first NAS Platform server to Port C1 of the second NAS Platform server.
 Connect Port C2 of the first NAS Platform server to Port C2 of the second NAS Platform server.
Port tg1 and Port tg2 are 10 GbE ports. Link aggregate and connect these ports to the Brocade VDX 6740-48 switches.
Connect Fibre Channel Ports FC1 and FC3 directly to the Hitachi Virtual Storage Platform G1000 ports, as follows:
 1A and 2A for the first NAS Platform server
 1B and 2B for the second NAS Platform server
Connect port eth0 of the NAS Platform server to the management network on the Brocade ICX6430-48 port switch. Connect port eth1 of the NAS Platform server to the private network on the Brocade ICX6430-24 port switch.
For the direct connection between Hitachi NAS Platform and Hitachi Virtual Storage Platform G1000, set the port
properties as shown in Table 14 on page 25.
Table 14. Hitachi Virtual Storage Platform G1000 Port Properties

Property | Value
Port Attribute | Target
Port Security | Disable
Port Speed | Auto (8 Gb/sec)
Fabric | OFF
Connection Type | FC-AL
Network File System Design for Shared Binaries
This solution requires a network file system to store cluster-wide SAP HANA binaries and configuration files of the SAP HANA database. Host this shared file system, called /hana/shared/<SID>, on Hitachi NAS Platform 4060. Mount this file system on all SAP HANA nodes. "<SID>" is the system ID for the SAP HANA production database instance.
Table 8 on page 18 and Table 9 on page 19 show the parity group setup and the LDEVs/virtual volumes used for NAS
Platform in this configuration.
This solution uses six virtual volume LDEVs, as listed in Table 9.
 Refer to each LDEV as a system drive.
 With these system drives, create a single storage pool called HANABIN_<SID>.
Configure two EVSs on the NAS Platform nodes, as follows:
 EVS on NAS Platform node 1 as HNASEVS1
 EVS on NAS Platform node 2 as HNASEVS2
Create the shared file system hana_shared_<SID> using the storage pool HANABIN_<SID> with the following:
 Capacity based on the number of active HANA nodes:
   Up to 4 active HANA nodes: 13 TB
   Up to 8 active HANA nodes: 26 TB
   Up to 12 active HANA nodes: 39 TB
   Up to 16 active HANA nodes: 52 TB
 Block size of 32 KB
 Allocate on demand, to allow automatic expansion up to the size limit
 Thin provisioning enabled
Mount and then export the file system. Mount the NFS export /hana_shared_<SID> on the file system path
/hana/shared/<SID> on all the SAP HANA nodes.
Set the MTU size to 9000 on both NAS Platform nodes.
Follow a similar procedure for the quality assurance system on the secondary site.
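A minimal sketch of the resulting mount on each SAP HANA node follows. The EVS host name, SID, and mount options are assumptions for illustration and must be adapted to the SAP recommendations for your installation:

    # /etc/fstab entry on every SAP HANA node (hypothetical EVS host name hnasevs1 and SID HIT)
    hnasevs1:/hana_shared_HIT  /hana/shared/HIT  nfs  rw,hard,proto=tcp,vers=3,rsize=1048576,wsize=1048576  0 0
    # Mount the shared file system without a reboot
    mount /hana/shared/HIT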
Management Server
This solution uses one node of a four-node Quanta Cloud Technology QuantaPlex T41S-2U server for the management
server. The management server acts as a central device for managing the SAP HANA platform.
Manage the following from the management server:
 Hitachi Compute Blade 2500 chassis
 520X B2 server blades
 Brocade ICX 6430-48 port switch
 Brocade VDX 6740 switches
 System management unit
 SAP HANA nodes
 Hitachi NAS Platform 4060 servers
 Hitachi Virtual Storage Platform G1000
 NTP configuration
 Hi-Track Remote Monitoring system from Hitachi Data Systems
 Hitachi Command Suite and management of the server blades
 SAP HANA Studio
Figure 6 on page 27 shows the management server network ports using one dual-port 1 GbE Base-T Intel i350 mezzanine card.
 Slot 01, Port 2: Connect this port to the customer network. It provides the 1 GbE network to the management server.
 Slot 01, Port 1: Connect this port to the Brocade ICX 6430-48 port switch that provides the 1 GbE network to all other switches, chassis, and Hitachi NAS Platform nodes.
The management server has the following additional components:
 One dual-port 10 GbE Intel 82599ES SFP+ OCP mezzanine card
 One Emulex 2-port 8 Gb/sec Fibre Channel HBA on the PCIe slot
Connect the 10 GbE network ports to two different Brocade switches, VDX6740-A and VDX6740-B, to provide
management access to the SAP HANA nodes from the Quanta 2U4N server using the NFS network.
Figure 6
Install the following software on the management server:
 Hitachi Command Suite
 Hitachi Compute Systems Manager
 Command control interface
 Hitachi Dynamic Link Manager
 NTP server service
 PuTTY
 Teraterm
 JRE version jre-7u51-windows-i586 (not 64-bit)
 Adobe Flash Player
 WinSCP
 SAP HANA Studio
Network Architecture
For the client network, two Brocade VDX 6740 switches provide external connectivity. In the solution rack, the switch placement is as follows:
 The switch at rack unit 40 is the VDX 6740-D switch (Switch 54).
 The switch at rack unit 39 is the VDX 6740-C switch (Switch 53).
The scale-out solution for SAP HANA uses the NFS network and the intra-cluster network to access the highly available NFS file system from Hitachi NAS Platform and for communication between the nodes. These networks can share a switch pair, given that traffic can be strictly separated. Accomplish this by using VLANs on two ISL-paired Brocade VDX 6740 switches. In the solution rack, refer to the switches as follows:
 The switch at rack unit 37 is the VDX 6740-B switch (Switch 52).
 The switch at rack unit 36 is the VDX 6740-A switch (Switch 51).
The scale-out SAP HANA solution requires five separate networks.
 SAP HANA Intra-Cluster Network
  This network provides communication between the SAP HANA instances on the cluster.
   Connect the SAP HANA intra-cluster network to Switch 51 and Switch 52 (Brocade VDX 6740). Set an MTU size of 9100, in accordance with Brocade best practices.
   Isolate using a VLAN of 100.
   Each of these ports utilizes active-active bonding mode:
    - Ports on Switch 51 and Switch 52 (VDX 6740 switches)
    - Port eth9901 and Port eth9902 of the Linux operating system
 SAP HANA Client Network
  This network is dedicated to traffic between the SAP HANA database and its clients.
   Connect the SAP HANA client network to Switch 53 and Switch 54 (Brocade VDX 6740). Set an MTU size of 9100, in accordance with Brocade best practices.
   By default, isolate using a VLAN of 200. However, a different VLAN can be used to isolate this network.
   Each of these ports utilizes active-active bonding mode:
    - Ports on Switch 53 and Switch 54 (Brocade VDX 6740 switches)
    - Port eth9921 and Port eth9922 of the Linux operating system
  Connect the uplink ports of the Brocade VDX 6740-48 switches to the local area network.
 SAP HANA NFS Network
  This network provides access to the SAP HANA shared binaries and configuration files using the NFS protocol.
   Connect the SAP HANA NFS network to Switch 51 and Switch 52 (Brocade VDX 6740). Set an MTU size of 9100, in accordance with Brocade best practices.
   Isolate using a VLAN of 150.
   Each of these ports utilizes active-active bonding mode:
    - Ports on Switch 51 and Switch 52 (VDX 6740 switches)
    - Port eth9911 and Port eth9912 of the Linux operating system
 Hitachi NAS Platform Private Network
  This network is used for the NAS Platform heartbeat.
   A 1 GbE Brocade ICX6430-24 port switch supplies the heartbeat.
 Management Network
  This network is used with the management server. See "Management Server" on page 26.
   This network resides on a 1 GbE Brocade ICX 6430-48 port switch.
   The management network does not need to have a VLAN assigned to it.
   The Brocade ICX 6430-48 port switch uses the default switch configuration.
The SAP HANA intra-cluster network, client network, and NFS network are required to have the following:
 No single point of failure
 At least 10 GbE equivalent throughput
To meet these requirements, use four dual-port 10 GbE PCI-Ex cards per SAP HANA node. Bond two ports from different PCIe network adapters at the operating system level, using link aggregation following the IEEE 802.3ad Link Aggregation standard, for each of the three networks:
 SAP HANA intra-cluster network
 NFS network
 Client network
The connections of each bond need to go to physically different VDX 6740 switches, so that when one switch fails there is still another route to the corresponding host. This solution connects the two switches together using an Inter-Switch Link (ISL), which lets both switches act together as one logical switch: if one switch fails, there is still a path to the hosts.
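The following is a minimal sketch of one such bond on SUSE Linux Enterprise Server, here for the NFS network of one node. The IP address and host MTU value are assumptions for illustration, while the slave port names follow Table 15:

    # /etc/sysconfig/network/ifcfg-bond1  (NFS network bond built from two ports on different PCI-Ex cards)
    STARTMODE='auto'
    BONDING_MASTER='yes'
    BONDING_MODULE_OPTS='mode=802.3ad miimon=100 xmit_hash_policy=layer3+4'
    BONDING_SLAVE0='eth9911'
    BONDING_SLAVE1='eth9912'
    IPADDR='192.168.150.11/24'
    MTU='9000'

Equivalent bonds are created for the intra-cluster ports (eth9901 and eth9902) and the client ports (eth9921 and eth9922), matching the bond assignments in Tables 16 through 18.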
The complete network setup uses the ports on the 10 GbE PCI-Ex cards listed in Table 15 as an example configuration for
chassis 1. Follow a similar configuration for SAP HANA nodes on other chassis as well.
Table 15. Network Setup Using 10 GbE PCI-Ex NIC Cards

PCI-Ex Slot Number | Port | Operating System Level eth Port | Network Description
IOBD 01B | 0 | eth9921 | Client network for Node01
IOBD 01B | 1 | eth9911 | NFS network for Node01
IOBD 02A | 0 | eth9902 | Intra-cluster network for Node01
IOBD 03A | 0 | eth9901 | Intra-cluster network for Node01
IOBD 04B | 0 | eth9922 | Client network for Node01
IOBD 04B | 1 | eth9912 | NFS network for Node01
IOBD 09B | 0 | eth9921 | Client network for Node02
IOBD 09B | 1 | eth9911 | NFS network for Node02
IOBD 10A | 0 | eth9902 | Intra-cluster network for Node02
IOBD 11A | 0 | eth9901 | Intra-cluster network for Node02
IOBD 12B | 0 | eth9922 | Client network for Node02
IOBD 12B | 1 | eth9912 | NFS network for Node02
SAP HANA Intra-Cluster Network Connection for Node 01 and Node 02
Figure 7 shows the SAP HANA intra-cluster network connection for Node 01 and Node 02.
Figure 7
Configure the SAP HANA intra-cluster network using operating system-level bonding on every node. Make the following 10 GbE connections, as listed in Table 16.
Table 16. SAP HANA Intra-Cluster Port Mapping

Chassis, PCI-Ex Slot Number, Port | VDX 6740 Switch and Port | Bond
Chassis 1, IOBD 03A, Port 0 | VDX 6740-A, Port #1 | Bond 0 of Node 01
Chassis 1, IOBD 02A, Port 0 | VDX 6740-B, Port #1 | Bond 0 of Node 01
Chassis 1, IOBD 11A, Port 0 | VDX 6740-A, Port #2 | Bond 0 of Node 02
Chassis 1, IOBD 10A, Port 0 | VDX 6740-B, Port #2 | Bond 0 of Node 02
SAP HANA NFS Network Connection for Node 01 and Node 02
Figure 8 shows the SAP HANA NFS network connection for Node 01 and Node 02.
Figure 8
Make the 10 GbE connections using the mappings in Table 17.
Table 17. SAP HANA NFS Network Port Mappings

Chassis, PCI-Ex Slot Number, Port | VDX 6740 Switch and Port | Bond
Chassis 1, IOBD 01B, Port 1 | VDX 6740-A, Port #21 | Bond 1 of Node 01
Chassis 1, IOBD 04B, Port 1 | VDX 6740-B, Port #21 | Bond 1 of Node 01
Chassis 1, IOBD 09B, Port 1 | VDX 6740-A, Port #22 | Bond 1 of Node 02
Chassis 1, IOBD 12B, Port 1 | VDX 6740-B, Port #22 | Bond 1 of Node 02
SAP HANA Client Network Connection
Figure 9 shows the SAP HANA client network connection.
Figure 9
Make the following 10 GbE connections using the port mappings shown in Table 18.
Table 18. SAP HANA Client Network Port Mappings

Chassis, PCI-Ex Slot Number, Port | VDX 6740 Switch and Port | Bond
Chassis 1, IOBD 01B, Port 0 | VDX 6740-C, Port #1 | Bond 2 of Node 1
Chassis 1, IOBD 04B, Port 0 | VDX 6740-D, Port #1 | Bond 2 of Node 1
Chassis 1, IOBD 09B, Port 0 | VDX 6740-C, Port #2 | Bond 2 of Node 2
Chassis 1, IOBD 12B, Port 0 | VDX 6740-D, Port #2 | Bond 2 of Node 2
SAP Storage Connector API Fibre Channel Client
The SAP HANA Storage Connector API Fibre Channel client defines a set of interface functions called during the following:
 Normal SAP HANA cluster operation
 Failover handling
Storage connector clients implement the functions defined in the Storage Connector API.
The scale-out of Hitachi Unified Compute Platform for the SAP HANA Platform uses the fcClientLVM implementation,
which supports the use of Logical Volume Manager. SAP supports this solution to enable the use of high-performance
Fibre Channel devices for a scale-out installation.
The fcClientLVM implementation uses standard Linux commands, such as multipath and sg_persist. Install and
configure these commands.
The fcClientLVM implementation is responsible for mounting the SAP HANA volumes. It also implements a proper fencing
mechanism during a failover by means of SCSI-3 persistent reservations.
Configuration of the SAP Storage Connector API is contained within the SAP global.ini file in
/hana/shared/<SID>/global/hdb/custom/config.
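A hedged sketch of the [storage] section of such a global.ini file follows. The SID, partition numbers, and logical volume names are assumptions for illustration only and are not values from this configuration:

    # /hana/shared/<SID>/global/hdb/custom/config/global.ini (excerpt, illustrative values only)
    [storage]
    ha_provider = hdb_ha.fcClientLVM
    partition_*_*__prtype = 5
    partition_1_data__lvmname = vgHANAdata01-lvData
    partition_1_log__lvmname = vgHANAlog01-lvLog
    partition_2_data__lvmname = vgHANAdata02-lvData
    partition_2_log__lvmname = vgHANAlog02-lvLog

During a failover, the fcClientLVM connector activates the listed volume groups on the standby node and applies SCSI-3 persistent reservations before SAP HANA mounts the data and log volumes.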
Inter-site Configuration
SAN switches and long-distance amplifiers must be installed between the primary site and the secondary site, as applicable.
Customers who already have SAN switches and long-distance amplifiers in their existing infrastructure can use that infrastructure.
Table 19 lists the initiator and RCU target ports of the two storage systems at each site, along with the zoning aliases. Both sites need one zoning configuration.
Table 19. Zoning Configuration

Initiator Port | RCU Target Port | Zone Name
PrimA_Port3B | SecB_Port7B | PrimA_Port3B_SecB_Port7B
PrimA_Port4B | SecB_Port8B | PrimA_Port4B_SecB_Port8B
SecB_Port3B | PrimA_Port7B | SecB_Port3B_PrimA_Port7B
SecB_Port4B | PrimA_Port8B | SecB_Port4B_PrimA_Port8B
Table 20 has the details on the zoning between the management server and Hitachi Virtual Storage Platform G1000.
 Create a single zone named MGMT_Pri_Sec.
 Add all four Virtual Storage Platform G1000 port aliases and all four management server Emulex port aliases to the single zone MGMT_Pri_Sec.
With this configuration, command devices on the primary system and secondary system are accessible on the
management servers at the primary site and secondary site.
Table 20. Management Server Zoning

Virtual Storage Platform G1000 Alias | Hitachi Compute Rack Management Server Alias | Zone Name
PrimA_Port5B | MGMT_PRI_Port0 | MGMT_Pri_Sec
SecB_Port5B | MGMT_SEC_Port0 | MGMT_Pri_Sec
PrimA_Port6B | MGMT_PRI_Port1 | MGMT_Pri_Sec
SecB_Port6B | MGMT_SEC_Port1 | MGMT_Pri_Sec
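As a hedged illustration, the zones in Table 19 and Table 20 could be created on the Brocade 6505 switches with standard Fabric OS commands. The WWPNs below are placeholders, and the zone configuration name is an assumption:

    alicreate "PrimA_Port3B", "50:06:0e:80:00:00:00:01"
    alicreate "SecB_Port7B", "50:06:0e:80:00:00:00:02"
    zonecreate "PrimA_Port3B_SecB_Port7B", "PrimA_Port3B; SecB_Port7B"
    cfgcreate "UCP_HANA_DR_CFG", "PrimA_Port3B_SecB_Port7B"
    cfgenable "UCP_HANA_DR_CFG"
    cfgsave

Add the remaining zones from Table 19 and the MGMT_Pri_Sec zone from Table 20 to the same configuration in the same way before enabling it.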
Disaster Recovery and Replication Components
In this reference architecture for SAP HANA disaster recovery, Hitachi TrueCopy setup requires doing the following:
 "Install Command Control Interface" on page 34
 "Install Hitachi Dynamic Link Manager" on page 34
 "Configure Command Devices" on page 35
 "Configure Replication using Command Control Interface" on page 35
 "Setup Hitachi TrueCopy" on page 38
Install Command Control Interface
Command control interface enables you to perform storage system operations by issuing commands to Hitachi Virtual
Storage Platform G1000.
In this solution, install command control interface on the management server for the primary site and the secondary site.
Command control interface uses the components residing on the following:
 Storage system: command devices and Hitachi TrueCopy volumes (P-VOLs and S-VOLs)
 Quanta Cloud Technology QuantaPlex T41S-2U management server: Hitachi Open Remote Copy Manager
Install Hitachi Dynamic Link Manager
Install the Hitachi Dynamic Link Manager software on the management server for the primary site and the secondary site.
Dynamic Link Manager manages the access paths to the storage system.
Dynamic Link Manager provides the ability to distribute loads across multiple paths and switch to another path if there is
a failure in the path currently being used, improving system availability and reliability.
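After installation, the Dynamic Link Manager path status can be checked from the management server; the command below is a typical verification step, and the output depends on your environment:

    dlnkmgr view -path      # lists the LUs managed by HDLM, their paths, and the path status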
Configure Command Devices
A command device is a dedicated logical volume on the storage system that functions as the interface to the storage
system from the host. The command device accepts commands from the host that are executed on the storage system.
In this reference solution, create a 100 MB command device logical volume on the local and remote storage system. Each
management server has one dual-port Emulex HBA card, connected through the Brocade 6505 switch to Hitachi Virtual
Storage Platform G1000. Perform zoning and add the LUN path for the command devices in such a way that the primary
site management server and the secondary site management server can access the command devices on both storage
systems.
Configure Replication using Command Control Interface
A key aspect of this reference architecture on Hitachi Virtual Storage Platform G1000 is defining the volume pair
relationship for replication between storage systems. Define and manage storage replication relationships through the
Hitachi Storage Navigator graphical user interface or on the management server running Hitachi Open Remote Copy
Manager.
Hitachi Open Remote Copy Manager operates as a daemon process on the host. When activated, Open Remote Copy
Manager refers to its configuration files. The Hitachi Open Remote Copy Manager instance communicates with the
storage sub-system and remote servers.
Two instances of Open Remote Copy Manager are required for Hitachi TrueCopy to be operational.
 A Hitachi Open Remote Copy Manager instance on the primary management server manages the P-VOLs.
 A Hitachi Open Remote Copy Manager instance on the secondary management server manages the S-VOLs.
The Hitachi Open Remote Copy Manager configuration file defines the communication path and the logical units to be controlled. Each instance has its own configuration file. The configuration file lists the following for replication:
 SAP HANA data volumes
 Log volumes
 Hitachi NAS Platform LUNs
Figure 10 on page 36 shows the sample content of the configuration file (horcm04.conf) used by the Hitachi Open Remote
Copy Manager instance on the primary management server.
Figure 11 on page 37 shows the sample configuration file (horcm06.conf) used by the Hitachi Open Remote Copy Manager instance on the secondary management server.
Figure 12 on page 37 lists the entries that have to be added to the services file of the management server at both sites for Open Remote Copy Manager to function.
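A minimal sketch of such a configuration file is shown below for orientation. The IP addresses, service names, storage serial number, and group name are assumptions, while the LDEV IDs follow Table 8; the matching UDP port entries must also exist in the services file on both management servers:

    # horcm04.conf on the primary management server (illustrative values only)
    HORCM_MON
    #ip_address     service    poll(10ms)   timeout(10ms)
    10.10.10.14     horcm04    1000         3000

    HORCM_CMD
    #dev_name (in-band command device on the primary Virtual Storage Platform G1000)
    \\.\CMD-456789

    HORCM_LDEV
    #dev_group   dev_name          Serial#   CU:LDEV(LDEV#)   MU#
    HANA_PROD    HANA_DATA_N1_01   456789    00:01:02         0
    HANA_PROD    HANA_LOG_N1       456789    00:01:01         0
    HANA_PROD    HNAS_VOL_1        456789    00:00:01         0

    HORCM_INST
    #dev_group   ip_address      service
    HANA_PROD    10.10.10.16     horcm06

    # services file entries on both management servers, for example:
    #   horcm04   11004/udp
    #   horcm06   11006/udp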
Figure 10
Figure 11
Figure 12
Setup Hitachi TrueCopy
The Hitachi TrueCopy test bed setup for this SAP HANA disaster recovery solution is outlined in Table 1 on page 7. The reference solution setup is as follows:
 For the failover to the secondary site, this solution uses two initiator ports on Hitachi Virtual Storage Platform G1000 at the primary site connected to two RCU target ports on Hitachi Virtual Storage Platform G1000 at the secondary site.
 For the failback to the primary site, it uses two initiator ports from the storage system at the secondary site connected to two RCU target ports on the storage system at the primary site.
 The initiator and RCU target ports on Hitachi Virtual Storage Platform G1000 at the primary site connect to the Brocade 6505 Fibre Channel switch at the primary site.
 The initiator and RCU target ports on Hitachi Virtual Storage Platform G1000 at the secondary site connect to the Brocade 6505 Fibre Channel switch at the secondary site.
 Define the port attributes for the initiator and RCU target ports on Hitachi Virtual Storage Platform G1000. Configure the storage system for Hitachi TrueCopy replication by defining the logical paths for replication.
 Four ISL connections exist between the Brocade 6505 Fibre Channel switch at the primary site and the Brocade 6505 Fibre Channel switch at the secondary site.
Configure Hitachi TrueCopy
Configuring Hitachi TrueCopy for Hitachi Unified Compute Platform for SAP HANA requires the following steps. The detailed implementation steps are provided in the Hitachi Virtual Storage Platform G1000 Hitachi TrueCopy Remote Replication User Guide. Contact your Hitachi Data Systems representative if you need this document.
 Configuration work flow
 Defining port attributes
 Adding the remote connection
 Setting the number of volumes copied, path blockade, and other options
Start the Hitachi Open Remote Copy Manager Instances
After configuring Hitachi TrueCopy, start the Hitachi Open Remote Copy Manager (HORCM) instances on both sites.
 Start the HORCM instance on the primary management server. Ensure the correct instance number is used.
 Start the HORCM instance on the secondary management server. Ensure the correct instance number is used.
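For example, assuming instance numbers that match horcm04.conf and horcm06.conf, the instances can be started from a command prompt on the Windows management servers:

    REM On the primary management server
    horcmstart 4
    REM On the secondary management server
    horcmstart 6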
Initial Paircreate Operation
Perform the initial data transfer, called the paircreate operation, between the primary site and the secondary site for the SAP HANA data volumes, log volumes, and Hitachi NAS Platform volumes. Execute the command on the primary site.
When the paircreate command executes, the initial copy takes place. The primary storage system copies all the data in sequence from the P-VOL directly to the S-VOL.
During the initial copy process, the status of the P-VOL and S-VOL is COPY. On completion of the initial copy process, the status of the P-VOL and S-VOL changes to PAIR.
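A hedged example of the commands involved is shown below. The instance number and group name are assumptions that must match the Hitachi Open Remote Copy Manager configuration files, and the fence level is a design decision for your environment:

    REM On the primary management server, address the local HORCM instance
    set HORCMINST=4
    REM Create the TrueCopy pairs; -vl marks the local volumes as the P-VOLs
    paircreate -g HANA_PROD -f never -vl
    REM Status shows COPY during the initial copy and changes to PAIR when it completes
    pairdisplay -g HANA_PROD -fcx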
Initial Configuration for Replication of Hitachi NAS Platform 4060 LUNs
To use the replicated LUNs on Hitachi NAS Platform 4060, perform the initial configuration steps. This is a one-time configuration to set up the mirror relationship between system drives on Hitachi NAS Platform.
The mirroring ensures that the registry information about the replicated system drives of the primary cluster is added to the secondary cluster. Mirroring also ensures that both Hitachi NAS Platform clusters are aware of the replication and mirror relationship.
If one site is permanently lost and the surviving LUs are promoted into the SSWS state, it is necessary to run 'sd-peg -up'
on the surviving NAS Platform cluster to make it treat the S-VOLs as P-VOLs. Otherwise, sd-peg should never be used.
In addition, registry changes made on one cluster while it is in production always need to be recorded when made and then copied to the other cluster after the next failover:
 If creating any new file systems, bind them to EVSs using the evsfs command, and then export them.
 If deleting any file systems on one cluster, delete them from the registry of the other cluster with the filesystem-forget-and-delete-nv-data command.
 If any exports have been created, deleted, or modified on one cluster, make the same changes on the other cluster.
Contact your Hitachi Data Systems representative for installation and configuration details.
Engineering Validation
Validation of this reference architecture was conducted in the Hitachi Data Systems laboratory. The tests covered the steps of a failover to the secondary site and a failback to the primary site for a planned and an unplanned shutdown using Hitachi TrueCopy.
Test Methodology
To test the setup, the following scenarios were executed in the lab:
 Planned failover to the secondary site
 Planned failback to the primary site
 Planned failover to the secondary site during mid-flush, and then failback to the primary site
 Unplanned failover to the secondary site
 Automated failover and failback using Hitachi Disaster Recovery Manager, without quality assurance on the secondary site
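The customized failover scripts referenced earlier typically wrap standard command control interface operations. The following is a hedged sketch of the storage portion of a failover to the secondary site; the instance number and group name are assumptions that must match the HORCM configuration:

    REM Run on the secondary management server against its local HORCM instance
    set HORCMINST=6
    REM Swap the pair roles so the secondary site volumes become writable P-VOLs
    horctakeover -g HANA_PROD -t 300
    REM Confirm the pair status before mounting the volumes and starting SAP HANA at the secondary site
    pairdisplay -g HANA_PROD -fcx

The remaining script steps then bring up the NFS share and the SAP HANA instance at the secondary site; failback reverses the direction with the same commands run at the primary site.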
Test Results
All the tests passed without issues. The RPO was verified to be near zero. The RTO was less than an hour.
Testing with the use of Hitachi Disaster Recovery Manager resulted in a time of less than 8 minutes for the server-storage failover operation and failback operation between the two sites.
 The overall time for the failover was 3 minutes, 45 seconds.
 The overall time for the failback was 3 minutes, 45 seconds.
For More Information
Hitachi Data Systems Global Services offers experienced storage consultants, proven methodologies and a
comprehensive services portfolio to assist you in implementing Hitachi products and solutions in your environment. For
more information, see the Hitachi Data Systems Global Services website.
Live and recorded product demonstrations are available for many Hitachi products. To schedule a live demonstration,
contact a sales representative. To view a recorded demonstration, see the Hitachi Data Systems Corporate Resources
website. Click the Product Demos tab for a list of available recorded demonstrations.
Hitachi Data Systems Academy provides best-in-class training on Hitachi products, technology, solutions and
certifications. Hitachi Data Systems Academy delivers on-demand web-based training (WBT), classroom-based
instructor-led training (ILT) and virtual instructor-led training (vILT) courses. For more information, see the Hitachi Data
Systems Services Education website.
For more information about Hitachi products and services, contact your sales representative or channel partner or visit
the Hitachi Data Systems website.
Corporate Headquarters
2845 Lafayette Street
Santa Clara, CA 95050-2639 USA
www.HDS.com
community.HDS.com
Regional Contact Information
Americas: +1 408 970 1000 or info@hds.com
Europe, Middle East and Africa: +44 (0) 1753 618000 or info.emea@hds.com
Asia Pacific: +852 3189 7900 or hds.marketing.apac@hds.com
HITACHI is a trademark or registered trademark of Hitachi, Ltd. © Hitachi Data Systems Corporation 2016. All rights reserved. Hi-Track is a trademark or registered trademark of
Hitachi Data Systems Corporation. Microsoft, Windows Server, and Internet Explorer are trademarks or registered trademarks of Microsoft Corporation. All other trademarks, service
marks, and company names are properties of their respective owners.
Notice: This document is for informational purposes only, and does not set forth any warranty, expressed or implied, concerning any equipment or service offered or to be offered by
Hitachi Data Systems Corporation.
AS-458-01. April 2016.