Our customers who run SAP workloads on Amazon Web Services (AWS) are also invested in data and analytics transformations by using data lake solutions on AWS. These customers can use various third-party solutions to extract data from their SAP applications. However, to increase performance and reduce cost, they’re also asking for native integrations that use AWS solutions.

A common pattern that these customers use for extracting data from SAP applications is the IDoc Interface/Electronic Data Interchange. SAP NetWeaver ABAP-based systems have supported IDocs for a long time, and IDocs remain a stable framework that powers master and transactional data distribution across SAP and non-SAP systems.

Architectural approaches for integrating SAP IDocs with Amazon Simple Storage Service (Amazon S3) have been published previously in the SAP community, such as in the blog post Integrating SAP’s IDOC Interface into Amazon API Gateway and AWS Lambda. However, those approaches don’t cover the security aspect, which is key for production use. It’s important to secure business-critical APIs to protect them from unauthorized users.

In this blog post, I show you how to store SAP IDocs in Amazon S3 by using Amazon API Gateway, with an AWS Lambda authorizer and Amazon Cognito providing the authentication layer.

An AWS Lambda authorizer is an Amazon API Gateway feature that uses a Lambda function to control access to your APIs. To learn more about AWS Lambda authorizers, see Use API Gateway Lambda Authorizers. By using Amazon Cognito, you can add user sign-in and access control mechanisms to your web, mobile, and integration apps. To learn more about Amazon Cognito, see Getting Started with Amazon Cognito.

Use cases

First, let’s look at some of the use cases and business processes that benefit from the architecture that I discuss in this blog post.

Master data integration: Let’s say your SAP application is the source of truth for all your master data, like material master and customer master, and you’re integrating this master data with non-SAP applications and other software as a service (SaaS) offerings. You can set up Application Link Enabling (ALE) in SAP, and extract the master data from SAP as IDocs for storing in Amazon S3. Once the data lands in Amazon S3, you can integrate the master data with other applications, or use the data in your data lake solutions. For a list of all master data objects supported by ALE, see Distributable Master Data Objects.

Business-to-business (B2B) integration: IDocs are still extensively used in B2B integration scenarios. Some use cases include finance data integration with banks, and inventory and material master data integration with suppliers. For a full list of business process integrations that are supported through IDocs, see Distributable Master Data Objects. By bringing your IDoc data to Amazon S3, you can tap into existing integration functionality, without much custom development.

Architecture

The following architecture diagram shows the workflow for integrating IDocs with Amazon S3, which incorporates basic authentication.

  1. SAP IDocs can be written as an XML payload to HTTPS endpoints. In this architecture, you create an IDoc port that maps to an HTTPS-based Remote Function Call (RFC) destination in SAP. Out of the box, HTTPS-based RFC destinations support basic authentication with a user name and password. Here, the HTTP destination points to an API Gateway endpoint.
  2. To support basic authentication in the API Gateway, enable a gateway response for code 401 with a WWW-Authenticate:Basic response header. Then, to validate the user name and password, use a Lambda authorizer function.
  3. The Lambda authorizer reads the user name and password from the request header, and the Amazon Cognito user pool ID and client ID from the request query parameters. It then performs admin-initiated authentication against the Amazon Cognito user pool. If the correct user name and password are provided, the user pool issues a JSON Web Token (JWT). If a valid JWT is received, the Lambda authorizer allows the API call to proceed (a sketch of this authorizer logic follows the list).
  4. Once authorized, the API Gateway launches another Lambda function to process the IDoc data.
  5. The Lambda function reads the IDoc payload information from the request body and, using the AWS SDK, writes the IDoc data as an XML file to the S3 bucket.
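
The two Lambda functions in steps 3 and 5 are published as NodeJS code in the GitHub repository referenced later in this post. As a complement, here is a minimal, illustrative Python sketch of the authorizer logic from steps 2 and 3. The header and query-parameter names used here are assumptions for illustration, not the repository’s exact contract.

    # Illustrative Python sketch of the authorizer logic in steps 2 and 3 (the
    # repository's actual function is NodeJS); header and parameter names are
    # assumptions for illustration only.
    import base64
    import boto3

    cognito_idp = boto3.client("cognito-idp")

    def lambda_handler(event, context):
        # Basic auth header from SAP: "Basic base64(username:password)"
        headers = event.get("headers") or {}
        auth_header = headers.get("Authorization") or headers.get("authorization", "")
        decoded = base64.b64decode(auth_header.replace("Basic ", "")).decode("utf-8")
        username, _, password = decoded.partition(":")

        # User pool ID and app client ID passed as query string parameters (assumption)
        params = event.get("queryStringParameters") or {}

        effect = "Deny"
        try:
            # Admin-initiated authentication against the Amazon Cognito user pool
            result = cognito_idp.admin_initiate_auth(
                UserPoolId=params.get("userpool"),
                ClientId=params.get("clientid"),
                AuthFlow="ADMIN_NO_SRP_AUTH",
                AuthParameters={"USERNAME": username, "PASSWORD": password},
            )
            # A JWT in the response means the credentials are valid
            if result.get("AuthenticationResult", {}).get("IdToken"):
                effect = "Allow"
        except (cognito_idp.exceptions.NotAuthorizedException,
                cognito_idp.exceptions.UserNotFoundException):
            pass

        # Return an IAM policy that allows or denies invocation of the API
        return {
            "principalId": username or "anonymous",
            "policyDocument": {
                "Version": "2012-10-17",
                "Statement": [{
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": event["methodArn"],
                }],
            },
        }

The policy document returned by the authorizer is what API Gateway evaluates to allow or deny the request.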

Once the data is available in Amazon S3, you can use other AWS solutions like AWS Glue for data transformations, and then load the data into Amazon Redshift or Amazon DynamoDB.

Setting it up

Prerequisites
  • Configure AWS Command Line Interface (AWS CLI) for your AWS account and region. For more information, see Configuring the AWS CLI.
  • Get administrator access to your AWS account to create resources using AWS CloudFormation.
  • Get administrator access to the SAP application for uploading certificates, and for creating RFC destinations, IDoc ports, and partner profiles.
AWS setup

Next, implement this integration by going through the steps that follow. To make it easy for you to create the required AWS resources, we’ve published an AWS CloudFormation template, Lambda functions, and a deployment script in a GitHub repository.

Please note that there are costs associated with consuming the resources created by this CloudFormation template. See the “CloudFormation resources” section in this blog post for a full list of resources created.

Step 1:

Clone the aws-cloudformation-apigw-sap-idocs GitHub repo to your local machine.

$ git clone https://github.com/aws-samples/aws-cloudformation-apigw-sap-idocs.git

Step 2:

In the terminal/command window, navigate to the downloaded folder.

$ cd aws-cloudformation-apigw-sap-idocs

Step 3:

Make the build.sh file executable, and then run the script.

$ chmod +x build.sh
$ ./build.sh

Step 4:

The build script creates a build folder. Navigate to the newly created build folder.

$ cd build

Step 5:

Open the deploystack.sh file and edit variable values as applicable. Change the value for at least the following variables to suit your needs:

  • S3BucketForArtifacts – The name of the S3 bucket where all the artifacts required by the CloudFormation template will be stored.
  • USERNAME – The Amazon Cognito user name.
  • EMAILID – The email address associated with the Amazon Cognito user.

Step 6:

Change execute access permission for the deploystack.sh file, and execute the script. Make sure your AWS Command Line Interface (AWS CLI) is configured for the correct account and region. For more information, see Configuring the AWS CLI.

$ chmod +x deploystack.sh

$ ./deploystack.sh

The script performs the following actions:

  • Creates an S3 bucket in your AWS account (per the name specified for variable S3BucketForArtifacts in the deploystack.sh file)
  • Uploads all the required files to the S3 bucket
  • Deploys the CloudFormation template in your account
  • Once all the resources are created, creates an Amazon Cognito user (per the value provided for variable USERNAME in the deploystack.sh file)
  • Sets the user’s password (per the value that you provide when you run the script); a boto3 sketch of these two Amazon Cognito calls follows this list
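
If you’d rather script those last two actions yourself, the following is a minimal boto3 sketch of the equivalent Amazon Cognito calls. The repository’s script uses the AWS CLI; the pool ID, user name, email, and password below are placeholders.

    # Equivalent boto3 calls for the last two script actions (the repository's
    # script uses the AWS CLI); all values shown here are placeholders.
    import boto3

    cognito_idp = boto3.client("cognito-idp")

    USER_POOL_ID = "us-east-1_EXAMPLE"   # placeholder: the user pool created by the stack
    USERNAME = "sapidocuser"             # placeholder: value of USERNAME in deploystack.sh
    PASSWORD = "Replace-With-A-Strong-Password-1!"

    # Create the user without sending an invitation email
    cognito_idp.admin_create_user(
        UserPoolId=USER_POOL_ID,
        Username=USERNAME,
        UserAttributes=[{"Name": "email", "Value": "user@example.com"}],
        MessageAction="SUPPRESS",
    )

    # Set a permanent password so the user is not left in FORCE_CHANGE_PASSWORD state
    cognito_idp.admin_set_user_password(
        UserPoolId=USER_POOL_ID,
        Username=USERNAME,
        Password=PASSWORD,
        Permanent=True,
    )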

For more information about the created resources, see the “CloudFormation resources” section in this blog post.

SAP setup

You can perform the following steps in an existing SAP application in your landscape or stand up an SAP ABAP Developer Edition system by using the SAP Cloud Appliance Library. If you’d rather install a standalone SAP ABAP Developer Edition system in your VPC, we’ve provided a CloudFormation template to speed up the process in the GitHub repo.

Configure RFC connection in SAP

Step 1:

When the SAP application connects to the API Gateway endpoint, the endpoint presents a server certificate. For the SAP application to trust this certificate, the issuing root certificates need to be uploaded to the SAP certificate store by using transaction code STRUST. You can download the Amazon server certificates from Amazon Trust Services. In the Root CAs section of that webpage, download all the root CAs (DER format), and upload them under the SSL Client (Standard) node using transaction code STRUST. If this node doesn’t exist, create it. For more information about the SSL client PSE, see Creating the Standard SSL Client PSE.

Step 2:

Open the AWS Management Console and navigate to AWS CloudFormation. Select the stack that you deployed in “AWS setup,” earlier in this blog post. Then, go to the Outputs tab, and note down the values for the IDOCAdapterHost and IDOCAdapterPrefix keys. You will need these fields in the next step.

Step 3:

In your SAP application, go to transaction code SM59, and create an RFC destination of type G (HTTP Connection to External Server). For Target Host, provide the value of the key IDOCAdapterHost from the previous step. Similarly, for Path Prefix, provide the value of the key IDOCAdapterPrefix. Also, in Service No., enter 443. Once all the details are filled in, press Enter. You will receive a warning that query parameters aren’t allowed. You can ignore that warning by pressing Enter again.

Step 4:

While still in transaction SM59, choose the Logon & Security tab, and then choose Basic Authentication. In the User field, enter the value of USERNAME that you used in “AWS setup,” earlier in this blog post. In the Password field, enter the value of PASSWORD that you used in “AWS setup.” Then under Security Options, choose Active for SSL, and choose DEFAULT SSL Client (Standard) for SSL Certificate.

Step 5:

Choose Connection Test. You should get an HTTP 200 response from API Gateway. If you get an error, recheck the Target Host field (it shouldn’t start with http or https), make sure the service number is 443, and make sure the path prefix is correct (it should start with a / and contain the full query string). Check whether you provided the correct user name and password. Also, check that SSL is Active and that the SSL certificate value is DEFAULT SSL Client (Standard).

Configure IDoc port and partner profiles

Step 1:

Go to transaction code WE21 and create a port of type XML HTTP using the RFC destination created in “SAP setup,” in this blog post. In Content Type, choose Text/XML.

Step 2:

Go to transaction code BD54, and create a new logical system—for example, AWSAPIGW.

Step 3:

Go to transaction code WE20, and create a new partner profile of type LS.

Step 4:

From transaction code WE20, create outbound parameters for the partner profile that you created in the previous step. For testing purposes, choose FLIGHTBOOKING_CREATEFROMDAT as the message type, the port that you created in transaction WE21 (for example, AWSAPIGW) as the receiver port, and FLIGHTBOOKING_CREATEFROMDAT01 as the basic IDoc type.

Test with an outbound IDoc

Step 1:

Go to transaction code WE19. In the Via message type field, enter FLIGHTBOOKING_CREATEFROMDAT, and then choose Execute.

Step 2:

To edit the control record fields, double-click the EDIDC field. Fill in the details for Receiver and Sender. Receiver Partner No. will vary based on your system ID and client. In this example, the system ID is NPL and client is 001. Check transaction BD54 for your logical system name.

Step 3:

Double-click the E1SBO_CRE and E1BPSBONEW nodes, and provide some values. It doesn’t matter what you provide here. There are no validations for the field values. Once done, choose Standard Outbound Processing. This should send the IDoc data to the API Gateway endpoint.

Step 4:

Validate whether the IDoc data is stored in the S3 bucket that was created by the CloudFormation template earlier.
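
If you prefer to check from the command line rather than the console, a quick boto3 sketch like the following lists the objects in the IDoc bucket. The bucket name shown is a placeholder; use the <account ID>-<S3BucketForIDOC> bucket created by the stack.

    # Quick check that the IDoc XML landed in the bucket (bucket name is a placeholder)
    import boto3

    s3 = boto3.client("s3")
    response = s3.list_objects_v2(Bucket="123456789012-sapidocs")
    for obj in response.get("Contents", []):
        print(obj["Key"], obj["Size"], obj["LastModified"])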

Amazon Cognito vs AWS Identity and Access Management (IAM)

We use Amazon Cognito in this architecture because it provides the flexibility to authenticate the user against a user store and to issue short-lived credentials. However, if you would rather use the access keys of an IAM user, you can do so by using the access key ID as the user name and the secret access key as the password in the RFC destination.

The Lambda function apigw-sap-idoc-authorizer first tries to authenticate the user with Amazon Cognito. If it fails, it tries to authenticate using the access key and secret key. Make sure that the user of these keys has ‘list’ access to the S3 bucket where the IDoc data is stored. For more information, see the inline documentation of the Lambda function apigw-sap-idoc-authorizer. Also, make sure you follow the best practices for maintaining AWS access keys, if you choose to use them instead of Amazon Cognito.
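
For illustration, the access-key fallback described above amounts to a check like the following hypothetical helper, which treats the supplied user name and password as an access key ID and secret access key and verifies them by listing the IDoc bucket. This is a sketch of the idea, not the repository’s code.

    # Hypothetical sketch of the access-key fallback: treat the supplied user
    # name/password as an access key ID/secret access key and verify them by
    # listing the IDoc bucket. Names are placeholders.
    import boto3
    from botocore.exceptions import ClientError

    def credentials_are_valid(access_key_id, secret_access_key, idoc_bucket):
        s3 = boto3.client(
            "s3",
            aws_access_key_id=access_key_id,
            aws_secret_access_key=secret_access_key,
        )
        try:
            # Succeeds only if the keys are valid and the IAM user can list the bucket
            s3.list_objects_v2(Bucket=idoc_bucket, MaxKeys=1)
            return True
        except ClientError:
            return False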

CloudFormation resources

The following resources are created by the CloudFormation template that you deployed in “AWS setup,” earlier in this blog post.

Amazon Cognito user pool: To support the user name and password authentication flow from the SAP application, the CloudFormation template creates an Amazon Cognito user pool with the name <Environment>_user_pool (for example, sapidocs_user_pool), where <Environment> is the input parameter from the CloudFormation template. The user pool is set up to act as a user store, with email ID as a required user attribute. Password policies are also enforced.

Amazon Cognito user pool client: An app client is also created in the Amazon Cognito user pool. This app client is set up to Enable sign-in API for server-based authentication (ADMIN_NO_SRP_AUTH) and Enable username-password (non-SRP) flow for app-based authentication (USER_PASSWORD_AUTH). These two settings allow the Lambda authorizer functions to authenticate a user against the Amazon Cognito user pool using the credentials supplied by SAP when making API Gateway calls.

Amazon S3 bucket: An S3 bucket with the name <Your AWS Account ID>-<value from S3BucketForIDOC parameter> (for example, 123456789-sapidocs) is created to store the IDoc XML files.

Lambda authorizer function: A NodeJS Lambda function with the name apigw-sap-idoc-authorizer is created for authorizing API Gateway requests from SAP. It performs admin-initiated authentication with Amazon Cognito, using the user name and password provided in the request.

Lambda integration function: A NodeJS Lambda function with the name apigw-sap-idoc-s3 is created to store the IDoc payload received from SAP into the S3 bucket created earlier. The IDoc data is stored as XML files.
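
The repository’s function is NodeJS; the following is an illustrative Python sketch of the same idea, with the bucket name, environment variable, and key-naming scheme as assumptions.

    # Illustrative Python sketch of the integration function's job (the repository's
    # apigw-sap-idoc-s3 function is NodeJS); bucket and key naming are assumptions.
    import os
    import time
    import boto3

    s3 = boto3.client("s3")
    BUCKET = os.environ.get("IDOC_BUCKET", "123456789012-sapidocs")  # placeholder

    def lambda_handler(event, context):
        # API Gateway passes the IDoc XML payload in the request body
        idoc_xml = event.get("body") or ""

        # Store each payload as a time-stamped XML object
        key = f"idocs/idoc-{int(time.time() * 1000)}.xml"
        s3.put_object(
            Bucket=BUCKET,
            Key=key,
            Body=idoc_xml.encode("utf-8"),
            ContentType="application/xml",
        )

        return {"statusCode": 200, "body": f"Stored {key}"}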

IAM roles: Two roles are created for the Lambda functions.

  • A role with the name <Environment>-lambda-authorizer-role (for example, sapidocs-lambda-authorizer-role) is created for providing Amazon Cognito admin-initiated authentication access to the Lambda authorizer function.
  • A role with the name <Environment>-lambda-s3-access-policy (for example, sapidocs-lambda-s3-access-policy) is created for providing write access to the S3 bucket for storing IDocs.

API Gateway API: An API Gateway API with the name sap-idoc-adapter-api is created. A Lambda authorizer (‘Request’ based) with the name IDOC_Adapter_Authorizer is also created for this API. This API has a GET method and a POST method. Both these methods use the Lambda authorizer for authentication. The GET method targets a mock endpoint and is only used for testing connections and authorization from the SAP application. The POST method uses Lambda integration by calling the Lambda function apigw-sap-idoc-s3 for uploading the IDoc payload data from the SAP application to the S3 bucket.

Resource limits
  • Make sure to note Amazon API Gateway Limits and Important Notes, especially the payload size limit (10 MB at the time of writing) and integration timeout (29 seconds at the time of writing). Batching IDocs might increase the payload size or the processing time, which can lead to timeouts. You might want to consider smaller batch sizes.
  • Make sure to note AWS Lambda Limits. There are limits on the invocation payload size and memory allocations that might also affect the IDoc batch size.
Conclusion

This blog post gives you a way to upload SAP IDoc data to Amazon S3 without any coding in the SAP application, while incorporating security best practices. The API is protected via user authentication by Amazon Cognito and user authorizations through IAM policies. Now you can integrate your SAP master data, such as material master, with other applications that are running on AWS. You can also perform B2B integrations, such as integrating finance data with banks.

This approach works for most use cases. However, there are edge cases where the data volume might be high enough to warrant custom coding in the SAP application by using the ABAP HTTP client libraries. For such cases, consider third-party adapters or building your own integration with the ABAP HTTP client libraries.

I hope that you found this blog post useful. Please don’t hesitate to contact us with your comments or questions.


This post is by Kenney Antoney Rajan, an AWS Partner Solutions Architect.

Many organizations that use enterprise resource planning (ERP) software like SAP run and maintain Secure File Transfer Protocol (SFTP) servers to securely transfer business-critical data from SAP to external partner systems. In this series of blog posts, we’ll provide steps for you to integrate your SAP Process Integration and Process Orchestration (SAP PI/PO) and SAP Cloud Platform Integration with AWS Transfer for SFTP (AWS SFTP). We’ll also show you how to use the data that AWS SFTP stores in Amazon Simple Storage Service (Amazon S3) for post-processing analytics.

Use cases

There are many SAP scenarios where an SFTP server is useful for SAP file workloads. For example:

  • Transportation industry. A company can use an SFTP server as a middle layer to place files that contain sales order data. The external transportation company processes the order information from the SFTP server to schedule transportation.
  • Retail industry. A company can send their material data from SAP to the SFTP destination for a data lake solution to process the data. The data lake solution polls and processes raw data files sent from SAP and internal sales applications, to get insights such as fast selling items by material types.
Benefits of using AWS SFTP

Regardless of industry, laws and legislation in many countries mandate that every company keep private information secure. Organizations that require an SFTP server for their SAP integration can now use AWS SFTP to distribute data between SAP ERP and external partner systems, while storing the data in Amazon S3.

AWS SFTP manages the infrastructure behind your SFTP endpoint for you. This includes automated scaling and high availability for your SFTP needs, to process business-critical information between SAP and external partner systems. To learn how to create an AWS SFTP server, see Create an SFTP Server in the AWS documentation.

Architecture overview

You can integrate your SAP environment with AWS SFTP using SAP PI/PO, which acts as an integration broker to facilitate connection between systems. The following diagram shows the high-level architecture of how your SAP PI/PO systems can integrate with AWS SFTP and perform post-processing functions.

Authentication options

To establish a connection with AWS SFTP, you’ll use SAP PI/PO authentication options:

  • SAP key-based authentication. Convert the Secure Shell (SSH) private key that’s generated as part of the AWS SFTP server creation process to a Public Key Cryptography Standards #12 (PKCS#12) keystore. You do this to integrate SAP PI/PO communication channels with AWS SFTP.
  • SAP PI/PO password-based authentication. Use AWS Secrets Manager to enable username- and password-based authentication. You do this to integrate SAP PI/PO communication channels with AWS SFTP.
SAP key-based authentication

You can use OpenSSL to create X.509 and P12 certificates from your local SSH key pair directory, as shown in the following diagram. Enter a password and note it down for the SAP keystore setup. The generated key will be in binary form.
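
If you’d rather script this step than use the OpenSSL CLI, the following sketch uses Python’s cryptography package to wrap the SFTP private key in a self-signed X.509 certificate and export a .p12 keystore. The file names, common name, and password are placeholders; adjust them, and use load_ssh_private_key if your key is in OpenSSH format.

    # Alternative to the OpenSSL CLI: wrap the SFTP private key in a self-signed
    # X.509 certificate and export a PKCS#12 (.p12) keystore. All names and the
    # password are placeholders.
    from datetime import datetime, timedelta
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.serialization import pkcs12

    # Load the existing PEM-encoded private key used for AWS SFTP
    with open("sftp_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)

    # Build a self-signed certificate for the key pair
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"sap-po-sftp")])
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.utcnow())
        .not_valid_after(datetime.utcnow() + timedelta(days=365))
        .sign(key, hashes.SHA256())
    )

    # Export key and certificate as a password-protected PKCS#12 keystore
    p12_bytes = pkcs12.serialize_key_and_certificates(
        name=b"sap-po-sftp",
        key=key,
        cert=cert,
        cas=None,
        encryption_algorithm=serialization.BestAvailableEncryption(b"changeit"),
    )
    with open("sap_po_sftp.p12", "wb") as f:
        f.write(p12_bytes)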

SAP NetWeaver Administrator keystore configuration
  1. Log in to SAP NetWeaver Administrator Key Storage Views, and enter a name and description to create a new key storage view.
  2. Select Import Entry, and then choose the PKCS#12 Key Pair type from the drop-down menu, to import the .p12 file created as part of the earlier OpenSSL step.
  3. To decrypt the file and complete the import, use the same password that you used earlier, and then choose Import.
  4. Make a note of the fingerprints, which you’ll use to integrate the SAP PI/PO systems with the AWS SFTP server when you finish configuring the SAP PI/PO integration directory.

Integrating the SAP PI/PO SFTP communication channel with the AWS SFTP endpoint

Next, you’ll configure a key-based authentication method in SAP PI/PO to transfer your file workloads from SAP ERP Central Component (SAP ECC) to the AWS SFTP destination.

To test the SAP PI/PO integration, you can transfer a MATMAS material intermediate document (IDoc) from the SAP system to the AWS SFTP destination.

In this blog post, it’s assumed that you’ll configure the software and business components in the SAP PI/PO System Landscape Directory, import the MATMAS IDoc structure, and map the raw IDoc structure (XML) to a comma-separated values (CSV) format by using message, service, and operation mappings in the SAP PI/PO Enterprise Services Repository function. You can also use the raw MATMAS intermediate document structure (XML) for testing.

In addition, you’ll need to configure sender and receiver communication channels and integration configuration in the SAP PI/PO integration directory function.

In the SAP PI/PO integration directory configuration, select SFTP adapter type and update the AWS SFTP endpoint and fingerprint created during the SAP NetWeaver Administrator keystore configuration. Update the values for the authentication method and file parameter key in the SAP PI/PO communication channel screen as follows:

  • Authentication method: Private Key
  • Username: The username for the SFTP server created as part of the AWS SFTP setup process.
  • Private Key View: The key view created previously in the SAP NetWeaver Administrator keystore configuration.
  • Private Key Entry: The key entry type created previously in SAP NetWeaver Administrator keystore configuration.
  • Filename: The filename or naming convention that will be transferred from SAP to the AWS SFTP server.
  • Filepath: The Amazon S3 bucket path that’s created as part of the AWS SFTP setup process. This filepath is the AWS SFTP S3 destination where your transferred files will be stored.

SAP PI/PO password-based authentication
  1. See the Enable password authentication for AWS Transfer for SFTP using AWS Secrets Manager blog post to enable password authentication for the AWS SFTP server using AWS Secrets Manager. Note down the username and password from AWS Secrets Manager to enter in the authentication configuration of the SAP PI/PO integration directory.

  2. Update the SAP PI/PO integration directory configuration with the new AWS SFTP endpoint and fingerprint created as part of password authentication. Update the values for your authentication method and file parameter key as follows:
  • Authentication method: Password.
  • Username: The username created as part of password authentication, as mentioned in the previous step.
  • Password: The password created as part of password authentication, as mentioned in the previous step.
  • Filename: The filename or naming convention that will be transferred from SAP to the AWS SFTP server.
  • Filepath: The Amazon S3 bucket path created as part of password authentication. This filepath is the SFTP destination where your transferred files will be stored.

  3. To test the integration, trigger a MATMAS IDoc by using SAP ECC transaction BD10 to send a material master XML file to the AWS SFTP S3 directory through the SAP PI/PO integration.

The file is now successfully placed in the Amazon S3 file path configured in the SAP PI/PO communication channel.

Post-processing analytics using AWS serverless options

AWS serverless options include the following:

  • Building serverless analytics with Amazon S3 data
  • Creating a table and exploring data
Building serverless analytics with Amazon S3 data

With your data stored in Amazon S3, you can use AWS serverless services for post-processing needs like analytics, machine learning, and archiving. Also, by storing your file content in Amazon S3, you can configure AWS serverless architecture to perform post-processing analytics without having to manage and operate servers or runtimes, either in the cloud or on premises.

To build a report on SAP material data, you can use AWS Glue, Amazon Athena, and Amazon QuickSight. AWS Glue is a fully managed data catalog and extract, transform, and load (ETL) service. Once your AWS Glue Data Catalog data is partitioned and compressed for optimal performance, you can run ad hoc queries with Amazon Athena on the data that’s processed by AWS Glue. You can then visualize the data by using Amazon QuickSight, a fully managed visualization tool, to present the material data using pie charts.

See the Build a Data Lake Foundation with AWS Glue and Amazon S3 blog post to learn how to do the following:

  • Create data catalogs using AWS Glue
  • Execute ad-hoc query analysis on AWS Glue Data Catalog using Amazon Athena
  • Create visualizations using Amazon QuickSight
Creating a table and exploring data

Create a table from your material file stored in Amazon S3 by using an AWS Glue crawler. The crawler scans the raw data available in an S3 bucket and creates a table in the AWS Glue Data Catalog. Using AWS Glue ETL jobs, you can transform the SAP MATMAS CSV file into Apache Parquet format, which is well suited for querying the data with Amazon Athena.

The following figure shows how the material table named parquetsappparquet was created from the SAP MATMAS file stored in Amazon S3. For detailed steps on creating a job in parquet format, see the Build a Data Lake Foundation with AWS Glue and Amazon S3 blog post.

After completing the data transformation with AWS Glue, open the Amazon Athena service in the AWS Management Console and use the Athena Query Editor to run a SQL query on the SAP material table created in the earlier step.
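
You can also run the same kind of ad hoc query programmatically. The following boto3 sketch assumes a placeholder Glue database name and query result location; the table name comes from the example above.

    # Minimal boto3 sketch for running an ad hoc Athena query; the database name
    # and query result location are placeholders.
    import boto3

    athena = boto3.client("athena")

    response = athena.start_query_execution(
        QueryString="SELECT * FROM parquetsappparquet LIMIT 10",
        QueryExecutionContext={"Database": "sap_material_db"},   # placeholder database
        ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
    )
    print("Query execution ID:", response["QueryExecutionId"])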

Amazon QuickSight is a data visualization service that you can use to analyze data. Create a new Amazon Athena data source using Amazon QuickSight, and build a visualization of your material data. In the following example, you can see the count of materials by material type using Amazon QuickSight visualization. For more detailed instructions, see the Amazon QuickSight User Guide.

Conclusion

In part 1 of this blog series, we’ve shown how to integrate SAP PI/PO systems with AWS SFTP and how to use AWS analytics solutions for post-processing analytics. We’ve used key AWS services such as AWS SFTP, AWS Secrets Manager, AWS Glue, Amazon Athena, and Amazon QuickSight. In part 2, we’ll talk about SAP Cloud Platform integration with AWS SFTP for your file-based workloads.


This post is by Patrick Leung, a Senior Consultant in the AWS SAP Global Specialty Practice.

As part of Amazon Web Services (AWS) professional services in the SAP global specialty practice, I often assist customers in architecting and deploying SAP on AWS. SAP customers can take advantage of fully managed AWS services such as Amazon Elastic File System (Amazon EFS) and AWS Backup to unburden their teams from infrastructure operations and other undifferentiated heavy lifting.

In this blog post, I’ll show you how to use AWS Single Sign-On (AWS SSO) to enable your SAP users to access your SAP Fiori launchpad without having to log in and out each time. This approach provides a better user experience for your SAP users while maintaining enterprise security. With just a few clicks, you can enable a highly available AWS SSO service without the upfront investment and ongoing maintenance costs of operating your own SSO infrastructure. Moreover, there is no additional cost to enable AWS SSO.

Solution overview

The integration between AWS SSO and an SAP Fiori application is based on the industry standard Security Assertion Markup Language (SAML) 2.0. It works by transferring the user’s identity from the identity provider (AWS SSO) to the service provider (SAP Fiori) through an exchange of digitally signed XML documents.

To configure and test AWS SSO with SAP, you need to complete the following steps:

  1. Activate the required SAP parameters and web services in the SAP system.
  2. Create the SAML 2.0 local provider in SAP transaction SAML2.
  3. Download the SAP local provider SAML 2.0 metadata file.
  4. Configure AWS SSO and exchange the SAML 2.0 metadata files.
  5. Configure the attribute mappings.
  6. Assign users to the application.
  7. Configure the trusted provider in SAP transaction SAML2.
  8. Enable the identity provider.
  9. Configure identity federation.
  10. Test your SSO.
Step 1. Activate the required SAP parameters and web services in the SAP system
  1. Log in to the business client of your SAP system. Validate the single sign-on parameters in the SAP S/4HANA system by using SAP transaction RZ10. Here are the profile parameters I used:
    login/create_sso2_ticket = 2    
    login/accept_sso2_ticket = 1    
    login/ticketcache_entries_max = 1000    
    login/ticketcache_off = 0    
    login/ticket_only_by_https = 1    
    icf/set_HTTPonly_flag_on_cookies = 0    
    icf/user_recheck = 1    
    http/security_session_timeout = 1800    
    http/security_context_cache_size = 2500    
    rdisp/plugin_auto_logout = 1800    
    rdisp/autothtime = 60    
    
  2. Ensure that the HTTPS services are active by using SAP transaction SMICM. In this example, the HTTPS port is 44300 with a keep alive time of 300 seconds and a processing timeout of 7200 seconds.
  3. Use SAP transaction SICF to activate the following two Internet Communication Framework (ICF) services:
    • /default_host/sap/public/bc/sec/saml2
    • /default_host/sap/public/bc/sec/cdc_ext_service
Step 2. Create the SAML 2.0 local provider in SAP transaction SAML2
  1. In the business client of the SAP system, go to transaction code SAML2. It will open a user interface in a browser. In this example, the SAP business client is 100. For Enable SAML 2.0 Support, choose Create SAML 2.0 Local Provider.

    You can select any provider name and keep the clock skew tolerance as the default 120 seconds.
  2. Choose Finish. When the wizard finishes, you will see the following screen.
Step 3. Download the SAP local provider SAML 2.0 metadata

Choose the Metadata tab, and download the metadata.

Step 4. Configure AWS SSO
  1. In the AWS SSO console, in the left navigation pane, choose Applications. Then choose Add a new application.
  2. In the AWS SSO Application Catalog, choose Add a custom SAML 2.0 application from the list.
  3. On the Configure Custom SAML 2.0 application page, under Details, type a Display name for the application. In this example, I am calling my application S/4HANA Sales Analytics.
  4. Under AWS SSO metadata, choose the Download button to download the AWS SSO SAML metadata file.
  5. Under Application properties, in the Application start URL box, enter the Fiori application URL. The standard Fiori launchpad URL is https://<hostname>:<https port>/sap/bc/ui5_ui5/ui2/ushell/shells/abap/FioriLaunchpad.html?sap-client=<client number>. I am using the default values for the Relay state and Session duration.
  6. Under Application metadata, upload the local provider metadata that you downloaded in step 3.
  7. Choose Save changes.
Step 5. Configure the attribute mappings

In this example, the user mapping will be based on email.

  1. On the Attribute mappings tab, enter ${user:subject} and use the emailAddress format.
  2. Choose Save changes.
Step 6. Assign users to the application

On the Assigned users tab, assign any user who requires access to this application. In this example, I am using an existing user in AWS SSO. AWS SSO can be integrated with Microsoft Active Directory (AD) through AWS Directory Service, enabling users to sign in to the AWS SSO user portal by using their AD credentials.

Step 7. Configure the trusted provider in SAP transaction SAML2
  1. Go to SAP transaction code SAML2 and choose the Trusted Providers tab.
  2. Upload the AWS SSO SAML metadata file that you downloaded in step 4.
  3. Choose Next for Metadata Verification and Select Providers.
  4. For Provider Name, enter any alias as the trusted identity provider.
  5. For Signature and Encryption, change the Digest Algorithm to SHA-256 and keep the other configurations as default.

    SHA-256 is one of the successor hash functions to SHA-1 and is one of the strongest hash functions available.
  6. For Single Sign-On Endpoints, choose HTTP POST.
  7. For Single Sign-On Logout Endpoints, choose HTTP Redirect.
  8. For Artifact Endpoints, keep the default.
  9. For Authentication Requirements, leave everything as default and choose Finish.
Step 8. Enable the identity provider
  1. Under List of Trusted Providers, choose the identity provider that you created in step 7.
  2. Choose Enable to enable the trusted provider.
  3. Confirm that the identity provider is active.
Step 9. Configure identity federation

Identity federation provides the means to share identity information between partners. To share information about a user, AWS SSO and SAP must be able to identify the user, even though they may use different identifiers for the same user. The SAML 2.0 standard defines the name identifier (name ID) as the means to establish a common identifier. In this example, I use the email address to establish a federated identity.

  1. Choose the identity provider that you enabled in step 8, and choose Edit.
  2. On the Identity Federation tab, under Supported NameID Formats, choose Add.
  3. Select E-mail in the Supported Name ID Formats box.

    This automatically sets the User ID source to Assertion Subject Name ID and the User ID Mapping Mode to Email.
  4. Choose Save.
  5. In your SAP application, use SAP transaction SU01 to confirm that the user email address matches the one in your AWS SSO directory.
Step 10. Test your SSO

At your AWS SSO start URL, you should see your application. In this example, this is S/4HANA Sales Analytics.

Voilà! Choose the application to open your Fiori launchpad without entering a user name and password.

Conclusion

The beauty of this solution is in its simplicity: The AWS SSO service authenticates you, enabling you to log in to your SAP Fiori applications without having to log in and out each time.

AWS SSO supports any SAML 2.0-compliant identity provider, which means that you can use it as a centralized access point for your enterprise applications. AWS SSO also includes built-in SSO integrations with many business applications, such as Salesforce, ServiceNow, and Office 365. This offers a great way to standardize your enterprise application single sign-on process and reduce total cost of ownership.


SAP and AWS announce their project Embrace, a new partnership program aimed at simplifying customers’ SAP S/4HANA journey and accelerating their ability to innovate.

Orlando, FL — May 9, 2019 — Today, SAP announced the Embrace project, a collaboration program with Amazon Web Services (AWS) and Global Service Integrators. Embrace puts the customer’s move to SAP S/4HANA on AWS in the language and context of their industry through reference architectures.

AWS is a proud participant in the program, which will simplify the move to S/4HANA and accelerate enterprises’ ability to innovate like startups in an industry-relevant context. By using a Market Approved Journey, which is supported by reference architectures that combine SAP Cloud Platform with AWS services, we are easing the dataflow across platforms.

SAP’s Intelligent Enterprise portfolio and AWS Cloud services, with co-innovated automation, are enabling enterprises around the globe to quickly transform themselves to become “start-up-like”: using intelligent technologies natively, innovating business models faster, serving their customers globally by default, and running at the lowest cost. Together, SAP and AWS are building a set of unique offerings aimed at retiring the technical debt that exists in today’s IT landscapes and at accelerating innovation by providing instant and on-demand access to higher-level services like Machine Learning, Data Lakes, and IoT.

Here is an example of how AWS is helping ENGIE in their SAP S/4HANA journey:

“Our CEO, Isabelle Kocher, has defined a new strategy for ENGIE focused on decarbonization, digitalization, and decentralization. This transformation strategy has placed a much greater emphasis on data and operational efficiency,” explains Thierry Langer, Chief Information Officer of the Finance division at ENGIE. “48 hours after the AWS team provided a comprehensive demonstration of what was possible on AWS, we decided to migrate our non-production environment to AWS. After we completed that step, we chose to migrate our entire platform and production environment as well. I saw the difference between running SAP on premises and on AWS, and there was no question it would be advantageous for us to migrate to AWS.”

SAP and AWS will be leveraging our existing collaboration efforts to support the Embrace program:

S/4HANA Move Xperience: A comprehensive offering allowing customers to transform and test their legacy ERP systems on SAP S/4HANA. The offering leverages automation, a prescriptive set of tools and architectures, and packaged partner offerings to migrate and convert in a matter of days. Customers can start with their existing SAP ERP system or choose a fresh but accelerated start by leveraging the SAP Model Company industry templates. The test environments will be made available to customers free of charge on SAP-certified, production-ready AWS environments. The S/4HANA Move Xperience offering enables customers to validate their business case and to assess the effort involved in transforming to S/4HANA.

SAP Cloud Platform on AWS: Over 160 SAP Cloud Platform services are now supported across seven global AWS Regions (Montreal, Virginia, Sao Paulo, Frankfurt, Tokyo, Singapore, and Sydney). SAP and AWS are now connecting both platforms to enable developers to easily design and deploy solutions that are context-aware through the semantic and metadata layers of underlying SAP business applications. On Wednesday, we launched the IoT cloud-to-cloud option and introduced the edge-to-edge offering (coming soon). We also introduced the AWS Lambda for SAP Cloud Platform API Management reference architecture, which was a frequent request from customers who want to mesh business processes powered by SAP S/4HANA applications with AWS services by using SAP Cloud Platform Connectivity.

Market Approved Journeys: To further simplify and help customers innovate faster within their industries, SAP, AWS, and our joint Global System Integration partners will release Market Approved Journeys. These innovation paths map how the services and solutions of both companies enable customers to integrate and extend beyond the core ERP solution. Through Market Approved Journeys, customers and partners can leverage interoperability and design patterns that are common in their industry, and then they can turn their focus to how they want to differentiate their organizations.

According to research firm Gartner, Inc., “Two-thirds of all business leaders believe that their companies must pick up the pace of digitalization to remain competitive.”* As leaders in enterprise application software and cloud services, SAP and AWS are aligning closely to provide customers with the safe and trusted path to digital transformation.

If you are attending SAPPHIRE NOW 2019 in Orlando this week, I hope you stop by the AWS booth (#2000) and talk to our SAP on AWS experts about how we can help you simplify your SAP S/4HANA journey and innovate faster!

* Gartner, Smarter with Gartner, Embrace the Urgency of Digital Transformation, Oct. 30, 2017, https://www.gartner.com/smarterwithgartner/embrace-the-urgency-of-digital-transformation/


Co-authored by KK Ramamoorthy, Principal Partner Solutions Architect, and Brett Francis, Principal Product Solutions Architect

Today, SAP announced its collaboration with Amazon Web Services IoT and the general availability of interoperability between SAP Leonardo IoT and AWS IoT Core. The new collaboration makes it straightforward and cost-effective for you to deploy IoT solutions using the global scalability of the AWS IoT platform and business processes powered by SAP Leonardo IoT. The collaboration provides two new interoperability options.

The cloud-to-cloud option, which integrates SAP Leonardo IoT with AWS IoT Core, is generally available now. With this option, you can build SAP Leonardo IoT solutions that connect to backend intelligent suite solutions like SAP S/4 HANA and AWS IoT with a click of a button. Deployed device models in SAP Leonardo IoT are synced with the AWS IoT device registry and establish a one-to-one connection with SAP business processes. Without a single line of code written, customer data from IoT sensors is received by the AWS IoT platform, aggregated based on business rules established by the thing model, and posted to SAP Leonardo IoT.

The edge-to-edge option (coming soon) enables SAP business processes to execute locally with AWS IoT Greengrass. Essential business function (EBF) modules based on SAP Leonardo IoT Edge will run within the AWS IoT Greengrass environment, reducing latency while optimizing usage of bandwidth and connectivity. The EBF modules extend Intelligent Enterprise business processes to the edge.

While the cloud-to-cloud interoperability combines the power of both AWS and SAP cloud solutions, customers are also looking at increasingly bringing the power of cloud to the edge. They are looking at measuring, sensing, and acting upon data locally while using the cloud as a control plane for deployments and security. This is especially true for business processes that have poor or no internet connection or for businesses that require split-second local processing like running machine learning inferences. A company can take advantage of AWS IoT Greengrass to ensure local data connections are not lost, and then it can use AWS IoT Core to process and aggregate data from multiple, remote facilities.

With this collaboration, SAP will bring the power of SAP’s EBF modules based on SAP Leonardo IoT Edge to AWS IoT Greengrass. Our joint customers now will be able to use AWS IoT as the control plane to deploy powerful SAP edge solutions. For example, an Oil & Gas company will be able to ingest data from various sensors in their oil rigs using AWS IoT Greengrass and use SAP’s EBF modules to execute business processes locally.

Enterprises are constantly looking at ways to improve process efficiency, reduce cost, meet compliance requirements, and develop newer business models by having access to data in real time. Data generated by IoT sensors can provide valuable insights and help line of business owners make meaningful decisions, faster. Consider Bayer Crop Science, a division of Bayer that reduced the time taken to get seed data to analysts from days to a few minutes using AWS IoT. Many other customers are seeing similar business benefits (see the case studies).

However, collecting raw data from IoT sensors will soon become “noise” if that data does not have business context. This problem grows exponentially as enterprises deploy millions and billions of sensors. Until today, such customers had to build costly, custom solutions to marry the sensor data with business context, and they had to build complex custom integrations between IoT platforms and business solutions to bring sense to the data.

AWS and SAP are now able to help our joint customers deploy IoT solutions at scale without having to worry about complex custom integrations between solutions. For example, using AWS IoT, a manufacturing company can deploy, secure, and manage sensors on machines in their production lines. They can then ingest sensor telemetry data and seamlessly stream it to SAP Leonardo IoT where business rules can be applied to the data to determine asset utilization, analyze preventive maintenance needs, and identify process optimizations.

Below is a high-level architecture of both cloud-to-cloud and edge-to-edge interoperability options.

Cloud-to-Cloud integration

The interoperability between SAP Leonardo IoT and AWS IoT is achieved by using a set of AWS resources that are automatically provisioned by the SAP Leonardo IoT platform with AWS CloudFormation. These resources ingest device telemetry data and stream it to an Amazon Kinesis stream by using AWS IoT Rules Engine rules.

Device data streamed in Amazon Kinesis is then picked up by AWS Lambda functions and sent to a customer-specific SAP Leonardo IoT endpoint where further business rules and application integrations are implemented. Processing and error logs are written to Amazon CloudWatch.
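
As a purely illustrative sketch of this flow (the actual resources are provisioned for you by SAP Leonardo IoT through CloudFormation), a Kinesis-triggered Lambda function forwarding telemetry might look like the following. The endpoint URL and bearer token are placeholders, not a real SAP Leonardo IoT API.

    # Illustrative only: a Kinesis-triggered Lambda that forwards device telemetry
    # to a hypothetical SAP Leonardo IoT endpoint. The URL and token are placeholders.
    import base64
    import json
    import os
    import urllib.request

    SAP_ENDPOINT = os.environ.get("SAP_LEONARDO_ENDPOINT", "https://example.leonardo-iot.invalid/ingest")
    SAP_TOKEN = os.environ.get("SAP_BEARER_TOKEN", "placeholder-token")

    def lambda_handler(event, context):
        for record in event["Records"]:
            # Kinesis record payloads are base64 encoded
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))

            request = urllib.request.Request(
                SAP_ENDPOINT,
                data=json.dumps(payload).encode("utf-8"),
                headers={
                    "Content-Type": "application/json",
                    "Authorization": f"Bearer {SAP_TOKEN}",
                },
                method="POST",
            )
            with urllib.request.urlopen(request) as response:
                print("Forwarded record, status:", response.status)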

The interoperability automatically sets up cross-account authentication using secure stores in the AWS Cloud and SAP Leonardo IoT. After the initial setup is complete, you can use the Thing Modeler in SAP Leonardo IoT to create a thing model and sync it to AWS IoT to create matching AWS IoT things.

Customers can use AWS IoT Device Management functionality to onboard, monitor, track, and manage the physical devices. As the devices start sending telemetry information to AWS IoT Core, the telemetry information is seamlessly integrated with SAP Leonardo IoT using the resources created during initial setup.

Edge-to-Edge integration (coming soon)

AWS IoT Greengrass extends AWS to edge devices so that the devices can act locally on the data they generate, while still leveraging the cloud for management and durable storage. With the edge-to-edge option, you can also extend support for your business processes powered by SAP, by running EBF modules based on SAP Leonardo IoT Edge within AWS IoT Greengrass.

You can deploy EBF modules within AWS IoT Greengrass by using AWS IoT Core as the control plane for deployment and security. Once deployed, device telemetry data can be streamed directly from AWS IoT Greengrass to local EBF module endpoints. EBF modules can then invoke local business rules or call an SAP Leonardo IoT cloud endpoint for further processing.

See for yourself at SAPPHIRE NOW 2019

Want to learn more about this integrated IoT solution? Visit the AWS booth (No. 2000) at SAPPHIRE NOW. You will experience feature-rich demos, and you can leverage 1:1 time with an SAP on AWS expert to innovate for your own solution. For more information on where to find us, see the Amazon Web Services at SAPPHIRE NOW website.

Not attending SAPPHIRE NOW? Feel free to contact us directly for more information. Hope to see you soon and Build On!


The business benefits of running SAP workloads on AWS are already well proven with thousands of customers now running such workloads. Tangible benefits like those experienced by ENGIE, an international energy provider, include not only cost savings but also flexibility and speed. For example, as mentioned in an ENGIE case study, ENGIE was able to reduce the expected delivery time of new business frameworks by two times, when they upgraded their SAP platform to SAP S/4HANA on AWS. They did this all while rightsizing their HANA infrastructure, by using AWS-enabled high availability architecture patterns.

Although these are very tangible business benefits, customers are also increasingly looking at driving business innovations by extending core SAP business processes in the areas of Big data & analytics, Internet of Things (IoT), Apps & APIs, DevOps, and Machine learning. In fact, we discussed this SAP extension approach using AWS native services in a Beyond infrastructure: How to approach business transformation at startup speed blog post last year. Over the course of a year, we’ve been working with customers using many of the approaches detailed in that post.

As more and more customers move their SAP estates to AWS, they have also asked for help with additional reference architectures and integration patterns to extend these significant investments, through a combination of SAP Cloud Platform and AWS services.

A solid foundation for building SAP extensions

A building is only as strong as its foundation, and this is also true for any technology platform. As detailed in the SAP Cloud Platform Regions and Service Portfolio document, more than 160 SAP Cloud Platform services are now supported across seven global AWS Regions (Montreal, Virginia, Sao Paulo, Frankfurt, Tokyo, Singapore, and Sydney). Out of this total number of services, 37 of them—which include some of the more foundational digital transformation services like SAP Leonardo Machine Learning Foundation, SAP Leonardo IoT, and SAP Cloud Platform Enterprise Messaging—run exclusively on AWS infrastructure. Global availability, scalability, and elasticity are vital components of any platform as a service (PaaS). With the depth and breadth of SAP Cloud Platform services on AWS, you now have unparalleled opportunities to build SAP extensions on a solid infrastructure foundation.

Interoperability between platforms

You also have multiple options to extend, integrate, and interoperate between AWS services and SAP Cloud Platform, beyond just the services provided natively via SAP Cloud Platform. Let’s look at a few examples.

Simplify cross-cloud connectivity using SAP Cloud Platform Open Connectors

SAP Cloud Platform Open Connectors provides pre-built connectors to simplify connectivity with AWS services and to consume higher-level APIs. This service abstracts cross-cloud authentication and connectivity details, so your developers can focus on building business solutions and not worry about lower-level integration services.

For example, using Open Connectors, you can integrate Amazon DynamoDB with your web applications running on SAP Cloud Platform. Another example is integrating higher-level artificial intelligence (AI) services like Amazon Rekognition with predictive analytics solutions on SAP Cloud Platform.

Getting started with Open Connectors is easy. You can create a new connector or use an existing one.

To access an AWS service—for example, an Amazon Simple Storage Service (Amazon S3) bucket:

  1. Launch the Open Connectors configuration from the SAP Cloud Platform console, and then provide the service endpoint URL.
  2. In Authentication, choose awsv4, and then provide the required AWS authentication information.

Now you can access the AWS service as REST API calls in your applications and other services in SAP Cloud Platform.

Integrate SAP Cloud Platform API Management using AWS Lambda

AWS developers can also consume SAP Cloud Platform services by using AWS Lambda and SAP Cloud Platform API Management. This pattern is especially attractive for customers who want to mesh business processes powered by SAP S/4 HANA applications with AWS services by using SAP Cloud Platform Connectivity.

This pattern also opens up access to other higher-level services running on SAP Cloud Platform, and connectivity to other software-as-a-service (SaaS) applications such as SAP SuccessFactors, SAP Concur, and SAP Ariba. For example, let’s say you want to build a voice-enabled application for accessing inventory information from a backend SAP enterprise resource planning (ERP) application running on Amazon Elastic Compute Cloud (Amazon EC2). You can expose the inventory data as APIs using SAP Cloud Platform API Management, and consume it in a Lambda function over HTTPS. Then, you can create an Alexa skill and connect it to this Lambda function, to provide your users with functionality for voice-enabled inventory management.
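
As a rough sketch of that idea, the Lambda function behind such an Alexa skill might look like the following. The API URL, API key header, and response fields are placeholders for an API that you would expose through SAP Cloud Platform API Management, not a real SAP interface.

    # Hedged sketch of a voice-enabled inventory lookup: an Alexa skill handler that
    # calls a hypothetical API exposed through SAP Cloud Platform API Management.
    # The URL, header, and response fields are placeholders.
    import json
    import os
    import urllib.request

    API_URL = os.environ.get("INVENTORY_API_URL", "https://example.apimanagement.invalid/inventory")
    API_KEY = os.environ.get("API_KEY", "placeholder-key")

    def lambda_handler(event, context):
        material = "FG-100"  # in a real skill, read this from the intent slots

        request = urllib.request.Request(
            f"{API_URL}?material={material}",
            headers={"APIKey": API_KEY},  # placeholder header name
        )
        with urllib.request.urlopen(request) as response:
            stock = json.loads(response.read()).get("quantity", "unknown")

        # Minimal Alexa skill response
        return {
            "version": "1.0",
            "response": {
                "outputSpeech": {
                    "type": "PlainText",
                    "text": f"You have {stock} units of material {material} in stock.",
                },
                "shouldEndSession": True,
            },
        }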

See it at SAPPHIRE NOW

These are just a few examples of how to start integrating SAP Cloud Platform services with AWS services. Want to learn more? Stay tuned for a special blog series devoted to this topic and, if you are at SAPPHIRE NOW 2019, come visit our Build On bar in AWS booth 2000. You will experience feature-rich demos and can talk 1:1 with an SAP on AWS expert, to learn more about building your SAP innovation journeys on AWS. For more information on where to find us, see the Amazon Web Services at SAPPHIRE NOW website. Not attending SAPPHIRE NOW? Feel free to contact us directly for more information. Hope to see you soon, and Build On!


Steven Jones is a Technology Director and Global Technical Lead for the AWS Partner Organization.

At Amazon, we always try to start with the customer’s needs and work backward to build our products and services. Back in 2017, our customers, who were already running production deployments of SAP HANA on Amazon Elastic Compute Cloud (Amazon EC2) X1e instances with 4 TB memory, needed to support the growth of their enterprise data. So they started asking us for Amazon EC2 instances with even larger amounts of RAM.

We asked our customers what features and capabilities were most important to them. Consistent feedback was that they expected the same, familiar experience of running SAP HANA on Amazon Web Services (AWS). They especially wanted the ability to use the same network and security constructs like Amazon Virtual Private Cloud (Amazon VPC), security groups, AWS Identity and Access Management (IAM), and AWS CloudTrail; to manage these systems via APIs and the AWS Management Console; to use elastic storage on Amazon Elastic Block Store (Amazon EBS); and to be able to scale easily when needed.

In a nutshell, customers told us they didn’t want to compromise on performance, elasticity, and flexibility just to run larger instances.

Breaking the mold

We started our journey with the mission to build a product that could meet these requirements and delight our customers. In the fall of 2018, we announced the general availability of Amazon EC2 High Memory instances with up to 12 TB of memory, certified by SAP and ready for mission-critical SAP HANA workloads. Today, Amazon EC2 High Memory instances are available in three sizes—with 6 TB, 9 TB, and 12 TB of memory. You can launch these EC2 bare metal instances within your existing AWS environments using the AWS Command Line Interface (AWS CLI) and/or AWS SDK, and connect to other AWS services seamlessly.

In this blog post, I’ll discuss some of the key attributes that our customers love about EC2 High Memory instances.

These Amazon EC2 High Memory instances are powered by what we call the Nitro system, which includes dedicated hardware accelerators that offer and manage connectivity to Amazon VPC and Amazon EBS. By offloading these functions that have been traditionally supported through a hypervisor, these bare metal instances enable applications to have direct access to the underlying physical hardware. At the same time, the Nitro system enables full and seamless integration of these instances into the broader range of AWS services.

The ability to run SAP HANA on these instances ultra-close to your application servers within the same virtual private cloud (VPC) enables you to achieve ultra-low latency between your database and application servers and get consistent, predictable performance.

The ability to run database and application servers in close proximity offers the best outcome for running your SAP estate, including the SAP HANA database, in the cloud. High Memory instances support the AWS CLI/SDK for launching, managing, and resizing instances, use elastic storage capacity from Amazon EBS, and benefit from direct connectivity to other AWS services.

The Nitro system enables EC2 High Memory instances to operate as fully integrated EC2 instances, while presenting them as bare-metal servers. All the CPU and memory on the host are directly available for your SAP workloads without a hypervisor, allowing for maximum performance. Each EC2 High Memory instance size is offered on an 8-socket host server platform powered by Intel® Xeon® Platinum 8176M (Skylake) processors. The platform provides a total of 448 logical processors that offer 480,600 SAP Application Performance Standard (SAPS). We’ve published both ERP (Sales & Distribution) and BW on HANA benchmarks to transparently disclose the performance of this platform for both OLTP and OLAP SAP HANA workloads.

EC2 High Memory instances are also Amazon EBS-optimized by default, and offer 14 Gbps of dedicated storage bandwidth to both encrypted and unencrypted EBS volumes. These instances deliver high networking throughput and low latency with 25 Gbps of aggregate network bandwidth using Elastic Network Adapter (ENA)-based Enhanced Networking.

Finally, if you want to implement a very large, compute-heavy S/4HANA system with high memory requirements, you now have the option of running S/4HANA in scale-out mode on EC2 High Memory instances. You can scale out to up to 4 nodes of 12 TB High Memory instances, for a total of up to 48 TB of memory, 1,792 logical processors, and 1.9 million SAPS, an unprecedented option in the cloud. For more information, see Announcing support for extremely large S/4HANA deployments on AWS.

Unprecedented flexibility

Our customers love the ability to size their infrastructure on AWS based on current needs rather than overprovisioning up front for future demand. EC2 High Memory instances provide the same scalability for SAP HANA workloads as our virtualized EC2 instances: start with what you need now and scale as your needs grow.

For example, start with a 6 TB EC2 High Memory instance now and, if needed, resize it to a 9 TB or 12 TB instance six months later with a few API calls. Because the persistent block storage is on Amazon EBS, it too can be extended as needed with a few API calls. With other private hosting options, this kind of change typically requires lengthy outages and shuttling data around to migrate servers.
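As a rough illustration of those API calls, the following boto3 sketch resizes an instance in place. It assumes SAP HANA has been shut down cleanly and that the target size is available to your account; the instance ID is a placeholder.

```python
# Sketch: resizing a stopped High Memory instance from 6 TB to 12 TB with boto3.
# The instance ID is a placeholder; shut down SAP HANA cleanly before stopping.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Change the instance type in place; EBS volumes and network settings are retained.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "u-12tb1.metal"},
)

ec2.start_instances(InstanceIds=[instance_id])
```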

The following diagram shows an example of resizing a 6 TB High Memory instance to a 12 TB High Memory instance in minutes. To see how simple this really is, watch this segment from a demo with Whirlpool from AWS re:Invent 2018. Also, learn how Whirlpool is using EC2 High Memory instances in an innovative way.

Commercially, these instances are available with a 3-year reservation, and you can move to larger sizes during that reservation period. This flexibility offers the best total cost of ownership (TCO) and prevents over-provisioning. You can start with an instance size that meets your database sizing requirements today, and then move to larger instance sizes when the growth in your database requires it. Spend only for what you need today, not for what you might need a year or two from now.

A fully integrated experience

When it comes to management, you might think that because these are bare metal instances, they need to be managed or architected differently. Not so! You can use the AWS CLI, SDKs, and the AWS Management Console, and you can apply your existing AWS architecture patterns, frameworks, and processes to secure, maintain, and monitor SAP HANA running on EC2 High Memory instances.

For example, because these instances are natively integrated with other AWS services, you can use services such as the following (a short monitoring example follows this list):

  • IAM to securely manage the access to your EC2 High Memory resources.
  • Amazon CloudWatch to monitor your instance.
  • AWS Systems Manager to gain operational insights.
  • AWS CloudTrail for governance and compliance.
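As one small, hedged example of that integration, the following boto3 sketch creates a CloudWatch CPU utilization alarm for a High Memory instance. The instance ID and SNS topic ARN are placeholders, and in practice you would monitor additional operating system and SAP HANA metrics as well.

```python
# Sketch: a CloudWatch CPU utilization alarm for an EC2 High Memory instance.
# The instance ID and SNS topic ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="hana-prod-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                     # evaluate 5-minute averages
    EvaluationPeriods=3,            # alarm after 15 minutes above threshold
    Threshold=85.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:sap-ops-alerts"],  # placeholder topic
)
```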

And finally, the truly transformative capabilities come from seamless integration with other AWS services, such as Amazon SageMaker for machine learning or the AWS IoT services.

If you’re ready to get started, you have several options to migrate your existing workload to EC2 High Memory instances. Build your new systems with a few API calls, or use an Amazon Machine Image (AMI) or one of the available AWS Quick Starts for SAP. Then, follow the SAP System Migration guidelines by using SAP HANA system replication, database export, or backup/restore.

To further minimize system downtime during migration, use our SAP Rapid Migration Test program (also known as FAST). You can also use downtime-optimized and cost-optimized options to build a resilient environment that meets your high availability and disaster recovery requirements with EC2 High Memory instances. See our SAP on AWS technical documentation site for resources on migration and other operational aspects of running SAP HANA on AWS.

Summary

AWS pioneered running the SAP HANA database in the cloud, and today continues to offer the most comprehensive portfolio of instances and certified configurations. Here is a quick view of our SAP-certified scale-up and scale-out EC2 instance deployment options for SAP HANA OLTP and OLAP workloads. Later this fall we will be releasing two additional sizes with 18 TB and 24 TB of RAM to give you even more options for large scale-up workloads.

SAPPHIRE 2019 – If you are at the SAPPHIRE NOW 2019 conference in Orlando, stop by booth #2000 to learn more about Amazon EC2 High Memory instances. See them in action with live demos, and talk to one of our solutions architects about how easy it is to get started. We also have several other exciting things to share during SAPPHIRE to help you get more from your SAP investments beyond just infrastructure. For more information on where to find us, see the Amazon Web Services at SAPPHIRE NOW website. Not attending SAPPHIRE NOW? Feel free to contact us directly for more information. Stay tuned for more exciting news, and register for one of our upcoming webinars. Build on!

This post was written by Fernando Castillo, Head WW SAP, at Amazon Web Services (AWS).

This week marks my fifth year attending SAPPHIRE NOW as part of the Amazon Web Services (AWS) team. During those five years, we've been able to help thousands of customers in their SAP journeys to AWS.

During this time, I have had the opportunity to travel around the world and connect with many SAP customers to understand their challenges and to help them find ways to overcome and achieve their goals. Through these interactions—from Seaco, Coca-Cola Icecek, and BP in the early days, to AIG, ENGIE, Bristol-Myers Squibb, Fast Retailing, FirstGroup, and many others more recently—two common themes have emerged as the key benefits of moving to AWS: retiring technical debt and accelerating innovation.

Retiring technical debt means that customers can move away from their old, inflexible, on-premises infrastructure and take advantage of modern DevOps practices, automation, and the flexibility that only the cloud provides. Customers can now move to S/4HANA without long-term commitments, with the ability to explore with low risk and deliver value to the business.

AWS and SAP on AWS competency partners have been developing multiple tools to simplify migrations to the AWS Cloud. For example, customers can provision certified HANA environments (2 TB / 4 TB) in minutes and shut them down with a simple command (or by voice, if you prefer to use Alexa). We just launched our AWS Quick Start for SAP S/4HANA, which enables customers to build fully certified S/4HANA environments in less than 2.5 hours! ENGIE and Fast Retailing, known for its Uniqlo brand, are clear examples of how AWS has been instrumental in enterprises' S/4HANA journeys.

Figure 1: How the AWS 100% software-defined cloud infrastructure helps retire technical debt

But customers don’t just want to move their SAP solutions from their on-premises data centers to AWS. They want to accelerate innovation. Customers are using SAP Cloud Platform, which runs in seven AWS Regions worldwide today, to enable innovation from both SAP and AWS. Customers are also taking advantage of the full range of AWS services, such as AWS IoT Core, to combine their data on edge devices with their SAP solutions. We are working very closely with SAP on cloud-to-cloud interoperability; stay tuned for more information this week.

Finally, a key component of how we are helping accelerate innovation is our industry focus. By working with many customers in multiple industries, we have been building industry solutions. For example, we have created reference architectures and compliance enablement, such as GxP for life sciences. At SAPPHIRE, you will be able to connect with our industry team, which covers 13 distinct industries.

Figure 2: AWS innovation pillars and industries

With these topics in mind, when we started thinking about SAPPHIRE NOW 2019 and reflecting on the challenges SAP customers are facing and how we have been helping them, it became evident that "Simplify your S/4HANA journey and innovate faster" captured this year's simple but powerful theme.

Figure 3: Today, AWS is helping customers retire their technical debt by helping them move to S/4HANA faster, with tools, methods, and competency partners. At the same time, AWS is helping accelerate innovation by taking advantage of SAP Cloud Platform, which is leveraging the innovation AWS brings.

I look forward to seeing you in Orlando. It’s going to be a very interesting week, with multiple announcements and great innovations showcased at our booth (#2000). We have a jam-packed week planned, with sessions, demos, social events, and opportunities to hear from our partner community. You’ll also be able to learn first-hand how you can join the many customers who have successfully migrated to AWS to take advantage of the innovations AWS has to offer.

This post was written by Kuang-Shih Howard Lee, who is an SAP solutions architect at Amazon Web Services (AWS).

In today’s business world, speed is everything. Enterprises must transform their IT assets at an increasing pace to stay ahead while dealing with complex technologies and deployment models. At Amazon Web Services (AWS), we’re working on simplifying and fast-tracking your SAP software deployments to the cloud, to save you time and resources. We’re excited to announce a new AWS Quick Start for SAP S/4HANA that enables businesses to deploy their SAP S/4HANA workloads on the AWS Cloud in less than three hours, compared with a manual deployment that can take days or even weeks to complete.

SAP S/4HANA, the newest generation of the enterprise resource planning (ERP) software package from SAP that supports core enterprise business functions, is optimized for SAP HANA in-memory databases. With the recently released Amazon EC2 High Memory instances, SAP customers now have the ability to scale their SAP HANA database up (up to 12 TB of memory) and out (up to 48 TB of memory) for extremely large S/4HANA deployments. For details, see SAP S/4HANA on AWS on the AWS website.

What is the AWS Quick Start for SAP S/4HANA?

The AWS Quick Start for SAP S/4HANA is a deployment automation tool designed by AWS solution architects that is built on AWS CloudFormation, Python, and shell scripts. The Quick Start follows best practices from AWS, SAP, and Linux vendors to automatically set up an AWS environment and ready-to-run SAP S/4HANA system. (This Quick Start is the latest addition to the set of AWS Quick Starts that automate the deployment of SAP workloads; you might also want to check out the Quick Starts for SAP HANA, SAP Business One, version for SAP HANA, and SAP NetWeaver.)

SAP S/4HANA Quick Start components and deployment options

The Quick Start deploys an SAP S/4HANA system that consists of a number of Amazon Elastic Compute Cloud (Amazon EC2) instances and AWS core infrastructure services into a new or an existing virtual private cloud (VPC) in your AWS account. It offers two main deployment options: a single-scenario standard deployment and a multi-scenario distributed deployment with or without SAP software installed. Additionally, you can choose whether to use Amazon Elastic File System (Amazon EFS) or Network File System (NFS) for your shared file system. You can also choose to deploy a bastion host and Remote Desktop Protocol (RDP) server. You can choose a combination of the following components to launch an SAP S/4HANA environment that meets your requirements.

Primary resources:

  • SAP HANA primary database
  • SAP S/4HANA ABAP SAP Central Services (ASCS) server
  • SAP S/4HANA Primary Application Server (PAS)

Secondary and optional resources:

  • SAP HANA secondary database for high availability
  • SAP S/4HANA standby ASCS server for high availability
  • Optional SAP S/4HANA Additional Application Server (AAS)
  • Optional bastion host and RDP instances

To ensure business continuity, the AWS Quick Start for SAP S/4HANA also enables you to create an SAP HANA database with high availability, using SAP HANA System Replication (HSR) across Availability Zones within an AWS Region. In addition, you can set up a standby ASCS server for high availability alongside the SAP HANA database to protect mission-critical SAP workloads from Availability Zone outages.

The Quick Start offers the following standard and distributed deployment options for SAP S/4HANA on AWS.

For a new VPC:

For an existing VPC:

For more information about these deployment options, see the AWS Quick Start for SAP S/4HANA deployment guide.

The S/4HANA architecture on AWS

The following diagram shows the standard deployment architecture of a typical four-server SAP S/4HANA cluster that hosts the SAP HANA database, ASCS, PAS, and AAS separately in a private subnet within the same Availability Zone, and a bastion host and RDP server in a public subnet.

The following diagram shows the deployment architecture of a typical four-server, high-availability cluster that hosts the primary SAP HANA database, primary ASCS, PAS, and AAS separately in a private subnet in one Availability Zone, and the secondary SAP HANA database and standby ASCS in a private subnet in another Availability Zone. This architecture also includes a bastion host in an Auto Scaling group and an RDP server in the public subnet of the first Availability Zone.

Getting started

To get started with this Quick Start deployment, read through the deployment guide to get a general understanding of the components and deployment options, and then follow the instructions in the guide to launch the Quick Start into your AWS account. Depending on your parameter selections, the Quick Start can take between 1.5 and 2.5 hours to complete the deployment.
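If you prefer to script the launch, the following boto3 sketch creates the Quick Start CloudFormation stack. The template URL and parameter keys shown here are illustrative placeholders only; use the actual template location and parameter names documented in the deployment guide.

```python
# Sketch: launching the SAP S/4HANA Quick Start as a CloudFormation stack.
# Template URL and parameter keys are placeholders; see the deployment guide
# for the real values.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.create_stack(
    StackName="s4hana-quickstart",
    TemplateURL="https://example-bucket.s3.amazonaws.com/s4hana-master.template",  # placeholder
    Parameters=[
        {"ParameterKey": "AvailabilityZones", "ParameterValue": "us-east-1a,us-east-1b"},  # placeholder key
        {"ParameterKey": "KeyPairName", "ParameterValue": "my-sap-keypair"},               # placeholder key
    ],
    Capabilities=["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM"],
)

# Deployment typically takes 1.5 to 2.5 hours; extend the waiter accordingly.
cfn.get_waiter("stack_create_complete").wait(
    StackName="s4hana-quickstart",
    WaiterConfig={"Delay": 60, "MaxAttempts": 180},
)
```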

The source templates and code are available for download from GitHub. If you would like to customize this Quick Start to meet your needs, see the AWS Quick Start Contributor's Guide.

What’s next?

We will continue to enhance the SAP S/4HANA Quick Start to support new operating system versions, SAP S/4HANA software packages, and AWS services and instance types. Let us know if you have any comments or questions—we value your feedback.

This post is by Santosh Choudhary, Senior Solution Architect at Amazon Web Services (AWS).

AWS provides services and infrastructure to build reliable, fault-tolerant, and highly available systems in the cloud. Because SAP systems are business critical, high availability is essential.

High availability for SAP applications can be achieved in many ways on AWS, depending on the operating system and database that you use. Options include SUSE High Availability Extension (SUSE HAE), Red Hat Enterprise Linux for SAP with High Availability and Update Services (RHEL for SAP with HA and US), Veritas InfoScale Enterprise for AWS, and SIOS Protection Suite.

In this post, we'll look at how to deploy SAP on AWS in a highly available manner in Windows and Linux environments using SIOS Protection Suite, and cover some of the differences in the SIOS setup between the two environments.

SIOS Protection Suite software is a clustering solution that provides a tightly integrated combination of high availability failover clustering, continuous application monitoring, data replication, and configurable recovery policies to protect business-critical applications and data from downtime and disasters.

To start with, AWS recommends deploying the workload in more than one Availability Zone. Each Availability Zone is isolated, but the Availability Zones in an AWS Region are connected through low-latency links. If one instance fails, an instance in another Availability Zone can handle requests.

Now, let’s explore the architectural layers within an SAP NetWeaver system, single points of failure (SPOFs) within that architecture, and the ways to make these components highly available using SIOS Protection Suite.

Understanding SAP NetWeaver architecture

The SAP NetWeaver stack primarily consists of a set of ABAP SAP Central Services (ASCS) servers, a primary application server (PAS), one or more additional application servers (AAS), and the databases.

ASCS consists of the Message Server and the Enqueue Server. The Message Server acts as a communication channel between the application servers and provides load balancing across them. The Enqueue Server holds the central lock table and is the critical component of ASCS for ensuring data consistency.

In an SAP architecture, the ASCS instance and the database are the SPOFs, so in a high availability scenario they must be made fault tolerant.

To achieve high availability, ASCS instances are deployed in a clustered environment such as Windows Server Failover Clustering (WSFC) or a Linux cluster. One of the requirements of a clustered environment is a shared file system. On the AWS Cloud, SIOS DataKeeper can be used to replicate the common file share across Availability Zones.

Setup for a Windows environment

SIOS DataKeeper, part of SIOS Protection Suite, is an SAP-certified, host-based replication solution that performs block-level replication across Availability Zones and presents the replicated volume as a Server Message Block (SMB) file share.

It is used to make the /<sapmnt> file system highly available by replicating its content in synchronous mode. It can also be used to make /usr/sap/trans a shared file system.

Using SIOS DataKeeper Cluster, you can achieve high availability protection for critical SAP components, including the ASCS instance, back-end databases (Oracle, DB2, MaxDB, MySQL, and PostgreSQL), and the SAP Central Services (SCS) instance, by synchronously replicating data at the block level. In a Windows environment, DataKeeper Cluster integrates seamlessly with Windows Server Failover Clustering (WSFC). WSFC features such as cross-subnet failover and tunable heartbeat parameters make it possible for administrators to deploy geographically dispersed clusters.

The setup consists of Windows Failover Cluster Manager with both ASCS nodes (for example, ASCS-A and ASCS-B, as shown in the following screenshot) and a file server that acts as a witness in the cluster. We recommend deploying the file server in a separate, third Availability Zone.

At any point in time, the cluster is pointing to one active node.

The following diagram shows the architecture of a highly available SAP system on AWS.

Customers can choose either database-specific replication methods (like SQL Server Always On availability groups) or block-level replication using SIOS for both the database and the ASCS instance. The SAP Recovery Kit, which is part of the SIOS Protection Suite, provides monitoring and switchover for different SAP instances. It works in conjunction with other SIOS Protection Suite recovery kits (for example, the IP Recovery Kit, NFS Server Recovery Kit, NAS Recovery Kit, and database recovery kits) to provide comprehensive failover protection.

The following diagram shows the high-level architecture of SIOS DataKeeper used to create the file share for ASCS in a cluster environment while leveraging native SQL Server replication (using an Always On availability group).

This next diagram shows the generic architecture of highly available SAP (running on AnyDB) using SIOS.

Setup for a Linux environment

In a Linux environment, both the DataKeeper and LifeKeeper components of SIOS Protection Suite are used. DataKeeper provides the data replication mechanism, and LifeKeeper is responsible for automatically orchestrating failover of SAP ASCS and databases (for example, SAP HANA, DB2, and Oracle) across Availability Zones. The SAP HANA Recovery Kit within LifeKeeper starts the SAP HANA system on all nodes and performs the takeover process for system replication.

The actual IP address of the SAP ASCS Amazon Elastic Compute Cloud (Amazon EC2) instance and the underlying database is abstracted by an overlay IP address (also called a floating IP address). An overlay IP address is an AWS-specific routing entry that sends network traffic to an instance within a particular Availability Zone. As part of the failover orchestration, LifeKeeper also updates the route table entries during failover to redirect traffic to the new active (primary) node.
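To make the overlay IP mechanism concrete, here is a minimal boto3 sketch of the route table update that happens at failover. LifeKeeper performs this step for you automatically; the route table ID, overlay IP, and network interface ID below are placeholders.

```python
# Sketch: repointing the overlay (floating) IP route to the new primary node
# during failover. LifeKeeper automates this; all IDs below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.replace_route(
    RouteTableId="rtb-0123456789abcdef0",            # route table used by the SAP subnets
    DestinationCidrBlock="192.168.100.10/32",        # overlay IP of the ASCS instance
    NetworkInterfaceId="eni-0123456789abcdef0",      # ENI of the new active node
)
```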

The detailed SIOS guide steps through the deployment of SAP NetWeaver with high availability on AWS using SIOS Protection Suite. That guide uses NFS as part of the setup; however, you can simplify the setup by using Amazon Elastic File System (Amazon EFS) instead.

Amazon EFS provides a simple, scalable file system for Linux-based workloads that are running on AWS Cloud services and on-premises resources. It is designed to provide massively parallel shared access to thousands of Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and IOPS with consistent low latencies.
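As a rough sketch of that simplification, the following boto3 code creates an EFS file system and mount targets in two Availability Zones to serve a shared SAP file system such as /usr/sap/trans. The subnet and security group IDs are placeholders.

```python
# Sketch: creating an Amazon EFS file system and mount targets in two
# Availability Zones for a shared SAP file system. All IDs are placeholders.
import time
import boto3

efs = boto3.client("efs", region_name="us-east-1")

fs = efs.create_file_system(
    CreationToken="sap-shared-fs",
    PerformanceMode="generalPurpose",
)
fs_id = fs["FileSystemId"]

# Wait until the file system is available before adding mount targets.
while efs.describe_file_systems(FileSystemId=fs_id)["FileSystems"][0]["LifeCycleState"] != "available":
    time.sleep(10)

for subnet_id in ["subnet-0aaaaaaaaaaaaaaaa", "subnet-0bbbbbbbbbbbbbbbb"]:   # one subnet per AZ
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],   # must allow NFS (TCP 2049) from the SAP hosts
    )
```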

If you have any questions, please feel free to reach out to us.
