In this blog I’ll be discussing the various SAP services (dialog, RFC, update, spool, background and workflow) and how they can be distributed across the available application servers of an SAP NetWeaver AS ABAP system, and how the introduction of Cloud Computing has added an extra dimension.
I first wrote a technical article on this very topic in the early days of SAP R/3. Twenty years on, the same core options for distributing the various SAP services persist, although new features and enhancements have appeared along the way.
Summary of Distributable Services
Seasoned SAP Technical Consultants will be familiar with the various distributable SAP services and their associated main configuration transaction. These can be summarised as follows:
- Dialog – SMLG
- RFC – RZ12
- Update – SM14
- Spool – SPAD
- Background – SM61
Dialog Workload Distribution (SMLG)
Dialog workload distributed by logon load balancing includes both normal dialog user logons via SAPGUI and external RFC requests from systems that supply a user and password, e.g. the SAP Enterprise Portal.
During the days of 32-bit CPU architecture, addressable memory was limited to 4 GB. Multiple application servers were often deployed to work around this limitation. Typically, logon groups would be used to segregate users by functional area (e.g. FI, SD, HR) to reduce stress on the ABAP program buffer, or by logon language (e.g. EN, DE, FR) to reduce stress on the text buffers.
Today, 64-bit CPU architecture has removed this limitation and the previous reasons for segregating users are no longer as important. Balancing the dialog workload across multiple application servers allows true scale-out and redundancy without penalising any part of the user community.
RFC Workload Distribution (RZ12)
RFC workload distribution for parallel processing comprises internal RFC requests sent and received within the same SAP system, which can be limited by RFC parameters and server groups in transaction RZ12. Processes such as SGEN (via server group “parallel_generators”) fall into this category.
An application can process data simultaneously by using parallel RFCs. This can also happen indirectly with qRFC inbound queues and the inbound queue scheduler. Every RFC request occupies a dialog work process on the application server onto which it is distributed. This type of processing is known as asynchronous RFC or parallel RFC. Asynchronous RFC with load balancing is implemented using the following ABAP statement:
CALL FUNCTION <function> STARTING NEW TASK <task> DESTINATION IN GROUP <group>
With this command, you’re telling the SAP system to process RFC calls in parallel. It invokes parallel processing by sending asynchronous RFC calls to the appropriate servers, which are defined in the RFC server group <group> in transaction RZ12. A special internal group named “DEFAULT” encompasses all available application servers of the SAP system. This group does not have to be defined in RZ12, but specifying any other group that does not exist in RZ12 will result in an error.
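In practice, the statement above is embedded in a dispatch loop. The following sketch illustrates the standard pattern; Z_PROCESS_PACKAGE is a hypothetical RFC-enabled function module and lt_packages a hypothetical table of work packages:

```abap
DATA: lv_sent     TYPE i,
      lv_received TYPE i.

LOOP AT lt_packages INTO DATA(ls_package).
  " Each call occupies a dialog work process on a server chosen
  " from the RFC server group (here the built-in DEFAULT group).
  CALL FUNCTION 'Z_PROCESS_PACKAGE'      " hypothetical RFC-enabled function module
    STARTING NEW TASK |TASK{ sy-tabix }|
    DESTINATION IN GROUP DEFAULT
    PERFORMING on_end_of_task ON END OF TASK
    EXPORTING
      iv_package            = ls_package
    EXCEPTIONS
      system_failure        = 1
      communication_failure = 2
      resource_failure      = 3.         " no free dialog work process in the group
  IF sy-subrc = 0.
    lv_sent = lv_sent + 1.
  ENDIF.
ENDLOOP.

" The callback form on_end_of_task collects each result with
" RECEIVE RESULTS FROM FUNCTION and increments lv_received.
WAIT UNTIL lv_received >= lv_sent.
```

Note the resource_failure exception: it is raised when the group has no free dialog work processes, so a robust implementation would retry or throttle dispatching at that point.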
The qRFC outbound scheduler uses parallel RFC for processing the outbound queue. For this to be possible, the RFC destinations have to be maintained in transaction SMQS. When scheduling outbound queues, the scheduler will check available resources in the source system, e.g. availability of dialog work processes. If no resources are available, synchronous RFC is used.
If inbound queues are being used, the inbound scheduler takes over the processing of the inbound queue, provided the queues to be processed are registered in transaction SMQR. The inbound scheduler also checks available resources in the target system and executes parallel RFCs if resources permit. If no resources are available, the scheduler waits until resources become available.
Update Workload Distribution (SM14)
Update workload comprises all workload processed by the SAP update services in UPD work processes. The symmetrical distribution of update workload over all available application servers is essential to the performance and continued operation of the SAP system. Update processing should not be limited to the primary application server (PAS) because:
- Failure of the PAS would cause system update activity to come to a standstill
- Distributed update work processes can handle temporary load peaks better
Spool Workload Distribution (SPAD)
Spool work processes can be distributed across many application servers. In the past, the definition of print devices within the SAP system specified the name of a real server that had spool work processes defined. From an administration point of view, this meant that when a device definition was transported from development into production, it had to be amended in each environment to address an available application server.
SAP introduced the concept of logical spool servers to reduce spool administration and to provide reliability and load balancing for SAP printing. From the outside, logical spool servers look much the same as real servers. You assign a name to them and can assign them to devices for directing output requests. Unlike real servers, logical spool servers don’t print anything themselves but only redirect output to real servers or other logical spool servers.
Each logical spool server definition specifies a mapping server and an alternate server. The mapping server takes precedence over the alternate server, so if an active spool server is found on the mapping server branch, the output request is sent there. If this mapping server happens to be another logical server, the mapping branch of that logical server is followed until a real active server is reached. It is only when no active server is found in the mapping branch that the system considers the alternate branch for locating an active spool server.
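The mapping-before-alternate resolution described above can be sketched in pseudo-ABAP as follows. The method and helper names (resolve_spool_server, is_real_server, is_active, get_mapping_server, get_alternate_server) are purely illustrative; the actual resolution is internal to the spool system:

```abap
" Illustrative sketch only - not the actual spool system implementation.
METHOD resolve_spool_server.
  " iv_server: a real or logical spool server name.
  " rv_server: the real, active server found, or initial if none.
  IF is_real_server( iv_server ).
    IF is_active( iv_server ).
      rv_server = iv_server.
    ENDIF.
    RETURN.
  ENDIF.
  " Logical server: follow the mapping branch first ...
  rv_server = resolve_spool_server( get_mapping_server( iv_server ) ).
  " ... and only if the whole mapping branch yields no active server,
  " consider the alternate branch.
  IF rv_server IS INITIAL.
    rv_server = resolve_spool_server( get_alternate_server( iv_server ) ).
  ENDIF.
ENDMETHOD.
```

Because each logical server recursively resolves its own mapping branch before its alternate branch, chains of logical servers can be nested to any depth while still preferring the mapping path end to end.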
With the use of mapping and alternate servers it is possible to configure the spool system for load balancing and high availability. The mapping server is considered for load balancing if the “Allow load balancing” checkbox is selected in the definition of the logical spool server. The alternate server can be used to provide high availability or redundancy for spool work processes, so that production printing is not hindered if an application server suffers an outage.
The following diagram illustrates the configuration of the logical spool servers within the SAP system, with two printers “A” and “B” defined for LOG_SPOOL_SERVER_1 and LOG_SPOOL_SERVER_2. Under normal operation, requests to printer A would be routed to LOG_SPOOL_SERVER_1 and load balanced across the two application servers attached to LOG_SPOOL_SERVER_1 (HOSTA_<SID>_<NN> and HOSTB_<SID>_<NN>). If these application servers were unavailable, the request would be routed to the alternate server of LOG_SPOOL_SERVER_1 (i.e. LOG_SPOOL_SERVER_2) and load balanced across the two application servers attached to LOG_SPOOL_SERVER_2 (HOSTC_<SID>_<NN> and HOSTD_<SID>_<NN>). This logic continues until an available application server is found, providing a highly available solution.
Background Workload Distribution (SM61)
The following diagram shows a simple setup within the SAP system with two job server groups, BGD_GROUP_1 and SAP_DEFAULT_BTC. The job server group BGD_GROUP_1 is served by instances HOSTA_<SID>_<NN>, HOSTB_<SID>_<NN> and HOSTC_<SID>_<NN>; the default job server group SAP_DEFAULT_BTC is served by instance HOSTD_<SID>_<NN>.
Note: It is important that the default background job server group SAP_DEFAULT_BTC exists, with at least one instance assigned, to ensure background jobs always have a target on which to execute.
The instance(s) within job server groups must offer background services (i.e. BGD work processes), in order for background jobs to execute.
Background workload is distributed within a job server group in a “round robin” fashion. In the case illustrated below, job “A” and job “B” are both defined to execute on job server group BGD_GROUP_1, which results in the ABAP programs defined in their respective steps being scheduled on a BGD work process on HOSTA_<SID>_<NN> and HOSTB_<SID>_<NN> respectively, i.e. in a “round robin” fashion.
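The round-robin behaviour amounts to cycling through the group’s instances as jobs arrive. A minimal sketch, with hypothetical names and server list, is:

```abap
" Illustrative sketch of round-robin target selection within a job server group.
DATA(lt_servers) = VALUE string_table( ( `HOSTA_<SID>_<NN>` )
                                       ( `HOSTB_<SID>_<NN>` )
                                       ( `HOSTC_<SID>_<NN>` ) ).
DATA(lv_next) = 0.

LOOP AT lt_jobs ASSIGNING FIELD-SYMBOL(<ls_job>).
  lv_next = ( lv_next MOD lines( lt_servers ) ) + 1.  " cycle through 1..n
  <ls_job>-target_server = lt_servers[ lv_next ].
ENDLOOP.
```

With three jobs and three instances, each instance receives exactly one job; a fourth job would wrap around to the first instance again.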
Workload Distribution and Public Cloud
With the advent of cloud computing, distribution of SAP workload takes on a new dynamic. In the past, if an SAP system was suffering performance problems due to a lack of work process capacity, you probably didn’t have the flexibility to spin up an additional server to help. Cloud computing allows us to provision additional virtual machines from templates and reduces the time to deploy additional SAP application instances.
Of course, the ultimate goal would be to automate the addition of the new SAP application instance and update the necessary workload balancing configuration to make it available for immediate use. SAP Landscape Management (LaMa) is very capable of this. In a future blog, I’ll be discussing this in more detail.