In this blog we’ll be discussing one possible option for performing an SAP heterogeneous OS/DB migration from an on-premises SAP ABAP-based system such as SAP ECC to the Microsoft Azure Public Cloud. The process discussed can be tailored for any SAP ABAP- or Java-based system, but in this blog we’ll focus on SAP ECC.

We utilised this process for a customer migration where one of the key requirements was the ability to migrate a 4 TB SAP ECC system within a 36-hour window over a weekend, from the customer’s UK-based data centre to the Azure West Europe region located in the Netherlands.

If the underlying operating system endianness and database are not changing, other options are available, e.g. database backup/restore. However, to take advantage of the opportunity to reorganise and restructure the database during the migration, you may still want to consider the traditional OS/DB migration method using MIGMON/R3LOAD/JLOAD.

You also need to consider the size of the database backup versus the R3LOAD/JLOAD export files. Compression ratios with R3LOAD/JLOAD may be much better than those achieved with a database backup, which matters when you have to transfer this data across a wide area network.

If the underlying operating system endianness or database is changing during the migration, then you have no choice but to perform a traditional OS/DB migration. Such a change may be forced upon you if your source operating system and/or database is not supported in Azure or is costly to license.

Either way, we hope you find the discussion in this blog useful.


Options for Consideration

When performing the heterogeneous SAP migration from on-premises to Azure, the following options could be considered:

  • Option #1 – Perform a heterogeneous SAP migration in standalone mode to a local export file system which is then transferred across the WAN using SFTP.
  • Option #2 – Perform a heterogeneous SAP migration using the parallel export/import option utilising a network file system. The network file system is provided by an export from an NFS server hosted in Azure. The NFS file system is then mounted onto both the source (on-premises) and target (Azure) VM.
  • Option #3 – Perform a heterogeneous SAP migration using the parallel export/import option utilising an Azure Storage Account in combination with custom scripts and blobxfer.


Why Option #3

When facing the challenge on a recent customer migration, option #3 proved to be the most efficient of the three because:

  • Option #1 – This option took too long end-to-end and wouldn’t fit into the migration window offered by the client.
  • Option #2 – The latency across the WAN with the NFS-mounted file system imposed a long runtime for the migration despite using the parallel export/import option.
  • Option #3 – This was the option chosen in this customer case. The upload speed offered by the customer’s internet connection to the Azure Storage Account, combined with the download speed within Azure, produced the best result, allowing the migration to fit into the migration window offered by the customer.


Requirements in Azure

An Azure Storage Account must be created in your Azure subscription and be accessible from both your Azure-hosted target network(s) and your on-premises source network(s). Our tests have shown that direct access to the Storage Account over the internet may be quicker than going via an ExpressRoute link from within your corporate network. Access to the Storage Account is typically via HTTPS on port 443. Coupled with the highly compressed R3LOAD data files, this provides a high level of security for data in transit.
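
As a quick sanity check before the migration window, a short script can confirm that the Storage Account’s blob endpoint is reachable over HTTPS/443 from both the on-premises source hosts and the Azure target hosts. The sketch below is illustrative only; the Storage Account name is a placeholder.

```python
# check_storage_reachability.py - minimal sketch; the Storage Account name is a placeholder.
import socket
import ssl

ACCOUNT = "mymigrationsa"                      # hypothetical Storage Account name
HOST = f"{ACCOUNT}.blob.core.windows.net"      # public blob endpoint
PORT = 443

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Open a TCP connection and complete a TLS handshake to the blob endpoint."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ssl.create_default_context().wrap_socket(sock, server_hostname=host):
                return True
    except OSError:
        return False

if __name__ == "__main__":
    status = "reachable" if can_reach(HOST, PORT) else "NOT reachable"
    print(f"{HOST}:{PORT} is {status}")
```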


Requirements in Source & Target Hosts

To facilitate the export and import, a file system or disk mounted on the source and target operating systems is required for the OS/DB migration-related files created by MIGMON, R3LOAD, R3LDCTL, etc.


Process in Detail

The standard heterogeneous SAP OS/DB migration with the parallel export/import option is started on the source and target systems using SWPM. To supplement the standard SAP migration process, a custom script was created on each of the source and target operating systems. Each script controls the physical movement of the generated OS/DB migration-related files to and from the Azure Storage Account using the standard blobxfer data movement tool.
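
As an illustration, the custom scripts could drive blobxfer through its command-line interface roughly as follows. This is a minimal sketch rather than the exact scripts used in the customer migration: the Storage Account, container, SAS environment variable and paths are placeholders, and the option names should be verified against your installed blobxfer version (see blobxfer upload --help).

```python
# blobxfer_transfer.py - hedged sketch of driving the blobxfer CLI from a custom script.
# The Storage Account, container and environment variable names are placeholders; check
# the option names against your blobxfer version (blobxfer upload --help).
import os
import subprocess

STORAGE_ACCOUNT = "mymigrationsa"          # hypothetical Storage Account
CONTAINER = "sid"                          # the <sid> container used as the root location
SAS_TOKEN = os.environ["MIGRATION_SAS"]    # SAS token supplied via the environment

def blobxfer(direction: str, remote_path: str, local_path: str) -> None:
    """Run 'blobxfer upload' or 'blobxfer download' and raise if it fails."""
    cmd = [
        "blobxfer", direction,
        "--storage-account", STORAGE_ACCOUNT,
        "--sas", SAS_TOKEN,
        "--remote-path", f"{CONTAINER}/{remote_path}",
        "--local-path", local_path,
    ]
    subprocess.run(cmd, check=True)

# Example usage:
#   blobxfer("upload", "DATA", "/export/ABAP/DATA")     # push export files to the container
#   blobxfer("download", "DATA", "/import/ABAP/DATA")   # pull them down on the target side
```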

Migration Flow

The custom upload script is started on the source system and uploads the STR files (and WHR files if table splitting is used). It then starts monitoring for signal files created by MIGMON indicating that a package is ready for transfer. When a signal file is detected, the TOC and data files associated with the package are uploaded to the Azure Storage Account.
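
A minimal sketch of such an upload watcher is shown below. It assumes the export MIGMON writes a <package>.SGN signal file alongside the dump files and reuses the blobxfer() helper sketched above; the directory layout, file naming and polling interval are placeholders that will differ in a real migration.

```python
# upload_watcher.py - hedged sketch of the source-side upload loop.
# Assumes export MIGMON drops a <package>.SGN signal file in EXPORT_DIR when a package
# has been exported; the .SGN marker is uploaded last so the target side can treat its
# presence in the Storage Account as "package complete".
import glob
import os
import time

from blobxfer_transfer import blobxfer    # helper sketched above

EXPORT_DIR = "/export/ABAP/DATA"          # hypothetical export dump directory
DONE = set()                              # packages already uploaded

def upload_package(package: str) -> None:
    """Upload the TOC and data files belonging to one completed package."""
    for path in glob.glob(os.path.join(EXPORT_DIR, f"{package}.*")):
        if not path.endswith(".SGN"):
            blobxfer("upload", "DATA/" + os.path.basename(path), path)

while True:
    for signal in glob.glob(os.path.join(EXPORT_DIR, "*.SGN")):
        package = os.path.splitext(os.path.basename(signal))[0]
        if package not in DONE:
            upload_package(package)
            blobxfer("upload", "DATA/" + os.path.basename(signal), signal)  # marker last
            DONE.add(package)
    time.sleep(30)   # poll interval between scans
```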

The custom download script is started on the target system and downloads the STR files (and WHR files if table splitting is used), followed by the TOC and data files as they become ready. It then creates a signal file to indicate to MIGMON on the target system that a package is ready to be loaded into the target database.
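
A corresponding sketch of the download side is shown below. It assumes the source script uploads each package’s .SGN marker only after its TOC and data files, and that the import MIGMON polls a separate signal directory on the target host; again, all paths and naming conventions are placeholders.

```python
# download_watcher.py - hedged sketch of the target-side download loop.
# Assumes a package is complete once its .SGN marker appears in the container; blobxfer's
# skip-on-match options can be used so repeated download passes only fetch new files.
import glob
import os
import time

from blobxfer_transfer import blobxfer    # helper sketched earlier

DUMP_DIR = "/import/ABAP/DATA"            # hypothetical dump directory for TOC/data files
EXCHANGE_DIR = "/import/ABAP/signals"     # hypothetical directory polled by import MIGMON
DONE = set()                              # packages already signalled to MIGMON

os.makedirs(EXCHANGE_DIR, exist_ok=True)

while True:
    # Incrementally pull the DATA virtual folder from the Storage Account.
    blobxfer("download", "DATA", DUMP_DIR)
    for marker in glob.glob(os.path.join(DUMP_DIR, "*.SGN")):
        package = os.path.splitext(os.path.basename(marker))[0]
        if package not in DONE:
            # The package's TOC and data files are now on disk, so signal import MIGMON.
            open(os.path.join(EXCHANGE_DIR, package + ".SGN"), "w").close()
            DONE.add(package)
    time.sleep(30)   # poll interval between download passes
```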


About blobxfer

blobxfer is an advanced data movement tool and library for Azure Storage Blob and Files. blobxfer offers the following functionality:

  • Upload files into Azure Storage
  • Download files out of Azure Storage
  • Command Line Interface (CLI)
  • Integration into custom Python scripts and other flavours of scripting

For further information see https://github.com/Azure/blobxfer


About Azure Storage Explorer

Microsoft Azure Storage Explorer provides an explorer-like GUI for the administration of data lakes, files, blobs, tables and queues within Azure Storage Accounts. For details on how to get the most out of Azure Storage Explorer, please see the following excellent Red Gate article by Supriya Pande from our LinkedIn network:

https://www.red-gate.com/simple-talk/cloud/cloud-development/using-azure-storage-explorer

Azure Storage Explorer can be used to monitor the progress of the uploaded migration-related files. The first image shows the “DATA” and “DB” virtual folders within the <sid> container, the container being the root location; the second shows an example of the content of the “DATA” virtual folder with STR and TOC files visible.