My First SAP
Understanding ERP software
Published: Wednesday, Mar 11, 2020 Last modified: Tuesday, Nov 26, 2024
A post about SAP on HN that I found to be a great introduction.
The next stumbling block is the sheer number of different products SAP have. Some are SaaS, and some are on-premises. Since I’ve joined a partner, I’m focusing on the on-premises solutions, i.e. the software we’re allowed to run ourselves.
For ERP solutions there are at least:
- S/4 HANA, featuring their infamous in-memory database HANA, which runs on Linux
- SAP ByDesign
- SAP Business One
- SAP Cloud Platform
- SAP Fiori
- SAP Ariba
I believe the reason there are so many is that SAP bought up any fledgling ERP competition. However, SAP push S/4 HANA, their latest evolution of ERP. This is a problem for many existing clients since the upgrade path to HANA is usually expensive.
To make matters a little more confusing, SAP are also pushing their SAP Cloud Platform, and market public cloud deployments as SAP hyperscaler environments. They also mandate specific regions, because their own complementary SaaS solutions are only hosted in certain regions. So they might only certify workloads in specific AWS regions, to ensure low-latency connectivity back to their own integrated SaaS applications.
Database is everything
Business logic, configuration, and data are all kept in the HANA database.
Certified
On the topic of “certified”, only certain AWS instance types can be used.
Considering HANA’s high memory requirements, remember:
- R, X - memory optimised
- U - high memory
SAPS (SAP Application Performance Standard) is a hardware benchmark derived from the workload of the SAP Sales and Distribution module. Running Quick Sizer against existing workloads can help you map to AWS instance types via their SAPS ratings.
SSD disks should always be used.
Deployments and networking
These components often require a static IP, aka an AWS Elastic IP (see the sketch after this list):
- SAProuter - so you can “ssh in”
- SAP Web dispatcher - their load balancer
- SAP Transports (/usr/sap/trans) - so you can do RPC calls between environments
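A minimal sketch of reserving an Elastic IP with the AWS CLI and attaching it to, say, the SAProuter instance (the IDs below are placeholders):

```sh
# Allocate an Elastic IP in the VPC, then associate it with an instance.
# The instance ID is a placeholder; the allocation ID comes from the
# output of the first command.
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0
```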
In the AWS context, deployments are usually in a private VPC, requiring a VPN (IPsec) for access.
Production and non-production should probably be split out by AWS account. They could be connected by AWS VPC sharing (a permissions construct) or VPC peering (a networking construct).
AWS Transit Gateway, a more flexible hub-and-spoke model, could also be used. But the gateway and its attachments have associated costs, over and above the data transfer charges of plain VPC peering.
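If VPC peering is enough, a sketch of wiring the two accounts’ VPCs together (all IDs are placeholders):

```sh
# Request a peering connection from the non-production VPC to the production VPC,
# then accept it from the production account's side.
aws ec2 create-vpc-peering-connection --vpc-id vpc-0aaa1111 --peer-vpc-id vpc-0bbb2222 --peer-owner-id 111122223333
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0ccc3333
# Routes to the peer's CIDR still need adding to each VPC's route tables.
```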
To improve connection latency for clients, AWS offers an accelerated VPN option, as well as the interesting Global Accelerator, which basically gives you a single static IP for the fat client, the SAP GUI, to be configured with.
For the age-old “production data in staging” approach to debugging problems, SAP does offer a “system refresh” way of copying data down. But to mask / redact sensitive production data, you need tools like SAP TDMS or third-party tools like Qlik Gold Client.
Monitoring
AWS Data Provider for SAP feeds data into SAP CCMS.
SAP CCMS is the old way of SAP monitoring, and SAP have since released newer monitoring offerings. However, the most forward-looking SAP monitoring strategy appears to be adopting Prometheus with exporters.
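A minimal sketch of what that could look like, assuming node_exporter on each host plus some HANA exporter; the hostnames and the exporter port are assumptions, not SAP or AWS defaults:

```sh
# Write a minimal Prometheus scrape config for the SAP hosts.
cat <<'EOF' > prometheus.yml
scrape_configs:
  - job_name: 'sap-hosts'
    static_configs:
      - targets: ['sapapp01:9100', 'hanadb01:9100']   # node_exporter default port
  - job_name: 'hana-exporter'
    static_configs:
      - targets: ['hanadb01:9668']   # placeholder port for a HANA DB exporter
EOF
```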
AWS EBS volumes
Do not use the Instance store for SAP Workloads.
They should be encrypted by default (see the encryption sketch after the volume lists below). A typical layout uses separate volumes for:
- root OS
- SAP app
- data, aka the database
- logs from database
Depending on performance characteristics, it could just be two volumes:
- root OS & SAP app
- database & logs
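Encryption by default can be switched on per region; a quick sketch:

```sh
# Turn on EBS encryption by default for the current region, so newly
# created volumes are encrypted automatically, then verify the setting.
aws ec2 enable-ebs-encryption-by-default
aws ec2 get-ebs-encryption-by-default
```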
Or use multiple volumes in a RAID-0 striped setup for increased I/O, or try a different volume type. Though it’s often actually cheaper to just allocate more gp2 space, since gp2 IOPS scale with volume size.
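A sketch of the striping approach using LVM, similar to the vghanadata layout in the Quickstart output further down (device names are assumptions):

```sh
# Stripe three EBS volumes into one logical volume for /hana/data.
sudo pvcreate /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1
sudo vgcreate vghanadata /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1
sudo lvcreate --stripes 3 --stripesize 256 --extents 100%FREE --name lvhanadata vghanadata
sudo mkfs.xfs /dev/vghanadata/lvhanadata
sudo mkdir -p /hana/data
sudo mount /dev/vghanadata/lvhanadata /hana/data
```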
Remember you can change volume size and type on the fly. You can even temporarily change the type for increased performance!
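For example, a sketch of temporarily bumping a volume to provisioned IOPS around a heavy import (the volume ID is a placeholder):

```sh
# Switch the volume to io1 with more IOPS, then watch the modification progress.
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --volume-type io1 --iops 5000
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0
```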
For backups, use EBS snapshots for the OS + app volumes. Use SAP tools for the database, and script the backup files to S3, often via an intermediate st1 volume.
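A rough sketch of that backup flow (bucket name, paths and volume ID are assumptions):

```sh
# Snapshot the root/app volume, then sync HANA backup files from the
# intermediate st1 volume (mounted at /backup) up to S3.
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "root + /usr/sap"
aws s3 sync /backup s3://my-sap-backups/hana/ --storage-class STANDARD_IA
```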
S3 is designed for eleven 9s of durability and four 9s of availability.
sapmnt would use EFS for Linux workloads and FSx for Windows File Server for Windows workloads.
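A sketch of mounting EFS for /sapmnt with roughly the NFS options AWS recommend (the file system ID and region are placeholders):

```sh
# Mount the EFS file system over NFSv4.1 at /sapmnt.
sudo mkdir -p /sapmnt
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  fs-0123456789abcdef0.efs.eu-west-1.amazonaws.com:/ /sapmnt
```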
High availability
For high availability workloads, use an ELB in front of two workloads in different AZs. The shared app files could be hosted on EFS (also use ERS, the Enqueue Replication Server), and the database could be replicated with DB log replication. It’s active/passive; it cannot be active/active due to the nature of the DB replication. However, it’s common to use a third-party solution: SLES HAE, the RHEL HA Add-on, or SIOS Protection Suite.
Disaster Recovery
- Passive DR - backing up to S3
- Pilot light - using replication to a minimal standby environment
Quickstart
Disk layout on a HANA node deployed by the AWS Quick Start:

```
ec2-user@imdbmaster:~> lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme0n1                   259:0    0   50G  0 disk
├─nvme0n1p1               259:1    0    2M  0 part
├─nvme0n1p2               259:2    0   20M  0 part /boot/efi
└─nvme0n1p3               259:3    0   50G  0 part /
nvme1n1                   259:4    0  512G  0 disk
└─vghanaback-lvhanaback   254:0    0  511G  0 lvm  /backup
nvme2n1                   259:5    0  300G  0 disk /hana/shared
nvme3n1                   259:6    0  225G  0 disk
└─vghanadata-lvhanadata   254:1    0  585G  0 lvm  /hana/data
nvme4n1                   259:7    0  225G  0 disk
└─vghanadata-lvhanadata   254:1    0  585G  0 lvm  /hana/data
nvme5n1                   259:8    0  225G  0 disk
└─vghanadata-lvhanadata   254:1    0  585G  0 lvm  /hana/data
nvme6n1                   259:9    0  175G  0 disk
└─vghanalog-lvhanalog     254:2    0  325G  0 lvm  /hana/log
nvme7n1                   259:10   0  175G  0 disk
└─vghanalog-lvhanalog     254:2    0  325G  0 lvm  /hana/log
nvme8n1                   259:11   0   50G  0 disk /usr/sap
nvme9n1                   259:12   0   50G  0 disk /media
```
HANA offerings
- SAP HANA BYOL - only S/4 HANA compatible
- SAP Cloud Platform, SAP HANA service - for custom application development
- SAP HANA, express edition - for testing and case studies
- SAP HANA Enterprise Cloud (HEC) on AWS – one throat to choke
You can’t separate your app and database, for latency reasons.