Hey you!

By now you are probably aware that the Unidad de Gestión Pensional y Parafiscales (UGPP), through Resolution 922 of 2018, changed the information that employers are required to provide.

If your company is required by the UGPP to present the payroll information, you can generate an Excel file with the requested information using the Reporte Unidad de Gestión Pensional y Parafiscales UGPP report (RPC_PAYCO_UGPP) from SAP (transaction PC00_M38_UGPP).

SAP Note 2681933 – [CO] UGPP – Resolution 922 of 2018 modified the output of the RPC_PAYCO_UGPP report in accordance with the new requirements. Keep in mind that this Note requires preparatory steps prior to implementation.

We have released a video explaining how you can run the report:

Cómo ejecutar el reporte UGPP - YouTube

And a video demonstrating how you can parameterize the report to generate the UGPP Excel file:

 

Cómo parametrizar el reporte UGPP de SAP - YouTube

Remember to always click the documentation icon! In the documentation, you can find relevant information (in Spanish) about the object you are working with.

In the SAP Help Portal, you can also find documentation about the report, in English (Generating the UGPP file).

In case of doubt, don’t hesitate to contact us!

 

***

 

Hello!

By now, you probably already know that the Unidad de Gestión Pensional y Parafiscales (UGPP), through Resolution 922 of 2018, changed the requirements that the information provided by employers must meet.

If the UGPP requires your company to present the payroll contribution information, you can generate a file in Excel format with the requested information using the Reporte Unidad de Gestión Pensional y Parafiscales UGPP report (RPC_PAYCO_UGPP) from SAP (transaction PC00_M38_UGPP).

SAP Note 2681933 – [CO] UGPP – Resolution 922 of 2018 modified the output of the RPC_PAYCO_UGPP report in accordance with the new requirements. Keep in mind that this Note requires preparatory steps prior to implementation.

We have released a video explaining how you can run the report:

Cómo ejecutar el reporte UGPP - YouTube

And a video showing how you can parameterize the report to generate the Excel file:

Cómo parametrizar el reporte UGPP de SAP - YouTube

Remember to always click the documentation icon in the system! In the documentation, you can find relevant information (in Spanish) about the object you are working with.

In the SAP Help Portal, you can also find documentation about the report, in English (Generating the UGPP file).

In case of doubt, don't hesitate to contact us!


(Note: This blog post focuses on Best Practices for SAP S/4HANA Cloud and not SAP S/4HANA on premise)

The words 'Best Practices' and SAP are almost synonymous: if you're discussing SAP, you're pretty much guaranteed to talk about Best Practices as well. Given this, you might be initially shy (as I was) to ask for a clear explanation of what is meant by a Best Practice. Why not give it a try right now and answer the question out loud: What is a Best Practice? This blog post tries to answer that question.

After asking more than a few times – I got an answer that finally clicked for me:

When SAP mentions a ‘Best Practice’ it’s talking about the best way to execute a process in the SAP system.

That’s it. You can stop reading now if that clarifies it for you, or you can keep going for examples on what a Best Practice is not, and the difference between Best Practices and Best Practice Content.

For example, we have a 'best practice' in the Order to Cash area for when a customer is using SAP S/4HANA Cloud to sell an inventory-managed item. The best practice for Sell from Stock is the 'best' way to carry out this process in SAP S/4HANA Cloud so that the process:

  • Meets legal requirements
  • Can be reviewed/controlled (e.g. monitored start to finish)
  • Makes the correct financial account postings (if relevant),
  • Can be reported on (by standard SAP reports – also sometimes industry/government compliance reporting standards)
  • Integrates properly to other system applications (e.g. the Invoice generation logic and layout)
  • Successfully works with other dependent business processes (e.g. financial reporting, Material Requirements Planning in production etc.)

The ‘best practice’ for a business process is one that meets the above objectives.

The SAP Best Practice does not try to tell you whether, in the physical world, the best way to sell stock (inventory-managed) items is in a single store, a chain of stores, at a lemonade stand, or via a digital web portal fulfilled by Amazon, etc. That's the business side of things that the customer is the expert on. (Note, though, that your consultant should have an opinion on how you map the physical-world entities and processes to the system entities/processes.)

So – knowing that there’s an acknowledged best way to carry out this process in your SAP S/4HANA Cloud system – how do you find out what that is? Well that’s where you begin using the Best Practices Content. The Best Practice content provides baseline examples of how the key objectives highlighted earlier can be met in a typical business based on SAP’s (extensive) experience of customers using our software to fulfil their business needs. Of course individual, industry and local nuances need to be considered as a part of your implementation, but we provide a wealth of documentation and tools based on that baseline best practice to accelerate your implementation – and for SAP S/4HANA Cloud this can all be found in the SAP Best Practices Explorer.

The Best Practices Explorer identifies processes with a concept called ‘Scope Items’. The scope item normally represents a full business process e.g. BD9 – Sell From Stock – Selling inventory managed items to an external customer, J45 – Procurement of Direct Materials – Where you might be purchasing those items that you’ll be selling on from some other suppliers, and J59 – Accounts Receivable – Where you can post into the system incoming payments from customers for the items that you sold earlier with BD9.

So for each scope item, various types of Best Practice Content are available. For example, within the BD9 – Sell from Stock scope item, you have:

  1. Test Script – A full example of how you execute this process in your SAP S/4HANA Cloud system. It uses master data (customers, materials etc) that is set up and provided to you in your Starter System or Trial System. It includes information on what needs to have been done before you can execute this process (You may need to procure stock, provide your system user with appropriate security access for example), what processes follow this one etc. This information is detailed enough that you can actually follow along in the system and see an example of the best way to execute this process in your SAP system. The Test Script is Best Practice content that will outline steps that conform to the SAP Best Practice – and deliver you the benefits discussed earlier.
  2. Process Flow – This is a flowchart/overview that provides higher-level information on the Test Script discussed before. It enables you to see at a glance the major steps that are executed by which business role in order for the process to be completed. Note that the difference between the 'Process Flow' and the 'Process flow (BPMN2)' links is that you can download the BPMN2 file to edit it as required for your specific business scenario, e.g. removing optional steps that you have opted not to use. Also note that the process flow (and Test Script) often includes optional steps and follow-on subsequent processes: only use these if applicable for your business. Including these where appropriate, and excluding them where inapplicable, keeps you aligned to the Best Practice.
  3. Task Tutorials – This is one of my favourite bits of Best Practices Content. This is a recording of someone carrying out various steps that were detailed in the Test Script so that if you just need a quick review of what this flow looks like in the system – you can open this content and get a walkthrough of the full process within a couple of minutes. It’s also great for practice as you can interact with the recording to select the appropriate fields etc. and actually get the feel for the steps included in the Best Practice. Task Tutorials are not available for all scope items but more are being added quarterly.
  4. Set Up Instructions – For some scope items there’s a separate document leading you through relevant additional set up steps such as communication arrangements etc. that must be completed for you to be able to execute the Test Script.

As further backing for the claim that Best Practices help you fulfil legal requirements, you can take a quick look at the difference at the end of the Sell from Stock process flow in the US and Spain.

In Spain, there is an additional step for 'EDocument Cockpit' to fulfil the legal requirement of transmitting the electronic billing document to the government for approval. Differences in processes like this are taken into account in the Best Practice Content, and are the reason why it differs across localizations. (Disclaimer: SAP does not provide legal advice. The localization team for each country works to ensure the SAP solution enables the customer to be legally compliant, but this compliance remains the customer's responsibility.)

So, there are Best Practices for scope items of business processes. There are also scope items for Master Data. Similar to the test scripts for business processes, these illustrate the steps and applications involved in creating each type of master data. Again, these scripts are best practice content, and the steps they show are the recommended way to carry out those actions (e.g. creating a Trading Good material, or creating a customer) in the SAP system.

Best Practice Approach

To take an example of Business Partners – an SAP customer may have a single Business Partner (legal entity) that buys goods or services from them (and so is a Customer) and also sells goods to them (and so is a Vendor). This Business Partner trades with their USA operation, and with their German operation – sometimes with different contract elements such as payment terms etc. Generally the Best Practice approach to setting up this business partner is to use a single Business Partner in your system – with one system Business Partner ID – and then you can differentiate the varying partner information by using different ‘Sales Areas’ that are attached to that partner. There may be very particular circumstances that require multiple business partners with different IDs to be created in your SAP system, but the vast majority of the time – doing so would constitute ‘breaking’ a best practice. This sort of best practice information is not a part of SAP Activate, and will typically come from education courses such as SAP Certification for the relevant line of business, blog articles, white papers and the like.

Conclusion

Of course there are other subtleties to what constitutes a ‘Best Practice’. However for S/4HANA Cloud, if you 1) Follow the guidance from the Best Practices Content (e.g. Test Scripts, Process Flow) and 2) Combine this with an understanding of the best practice approach which comes from the experience, training and certification of your Key Users or Consultants – then you should be on the right track.

 


Jenkins' fundamental architecture is 'Master+Slave', which is distributed in nature.

The master is designed for coordination and handles tasks such as scheduling jobs, monitoring slave nodes, dispatching builds to slave nodes, recording and presenting build results, and executing build jobs directly.

The slave (agent) is designed to perform the work. An agent can run on different operating systems such as Windows and Linux and does not require a Jenkins installation on it.

The following document provides details on setting up a master/slave configuration in Jenkins. Windows is configured as the master and Linux as the slave node.

Steps to be executed on Master server/Windows

Install Jenkins on Master server

Since the agent node is set up on Linux, it will be launched through SSH.

Secure Shell (SSH) is a network protocol that provides authentication and encrypted data communication between two machines connecting over an open network such as the internet.

SSH key authentication is needed to set up the agent node.

Generate ssh key

ssh-keygen

The 'id_rsa' private key and 'id_rsa.pub' public key will be created in the '.ssh' directory.
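
If you prefer to be explicit about the key type, size, and target file, the key pair can also be generated like this (a sketch for PowerShell or Git Bash on the Windows master; the path matches the one used later in this post, and the comment string is just an illustrative label):

# Generate a 4096-bit RSA key pair for the Jenkins master-to-agent connection
ssh-keygen -t rsa -b 4096 -C "jenkins-agent" -f C:\Users\admin\.ssh\id_rsa

Leave the passphrase empty when prompted if you want the agent launch to stay non-interactive, or store the passphrase in the Jenkins credential later.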

 

Steps to be executed on Slave server/Linux

Setting up the slave node requires Java to be installed. Install Java from the Oracle website.

The installed Java version can be checked by executing 'java -version'.

Set up environment variables for the installed Java:

export JAVA_HOME=/usr/java/latest

export JRE_HOME=/usr/java/latest
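
To make these variables survive new shell sessions, they can be persisted in a profile script. This is only a sketch; /etc/profile.d/java.sh is an assumed location, and your distribution may prefer ~/.bashrc or another profile file:

# Persist the Java environment variables system-wide (illustrative path)
echo 'export JAVA_HOME=/usr/java/latest' | sudo tee /etc/profile.d/java.sh
echo 'export JRE_HOME=/usr/java/latest' | sudo tee -a /etc/profile.d/java.sh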

Create a new user named jenkins and a user group named jenkins:

sudo useradd jenkins -U -s /bin/bash

Set a password for the jenkins user:

sudo passwd jenkins

Verify the user and group by checking the /etc/passwd and /etc/group files.

Configure sudo privileges for the jenkins user by adding the following line to /etc/sudoers (edit it with visudo):

jenkins ALL=(ALL)        NOPASSWD: ALL
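
As an alternative to editing /etc/sudoers directly, the same rule can live in a drop-in file, which is easier to review and remove later. A sketch, assuming your distribution reads /etc/sudoers.d:

# Add the jenkins rule as a drop-in, restrict its permissions, and validate the syntax
echo 'jenkins ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/jenkins
sudo chmod 0440 /etc/sudoers.d/jenkins
sudo visudo -c -f /etc/sudoers.d/jenkins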

 

Command to copy the SSH public key from Windows to Linux

After generating the SSH key, place the public key into the slave machine's authorized_keys file by executing the command below:

C:\Users\admin\.ssh> ssh jenkins@<slave machine IP> "umask 077; test -d .ssh || mkdir .ssh ; cat >> .ssh/authorized_keys || exit 1" < C:\Users\admin\.ssh\id_rsa.pub

After copying the public key to the slave machine, try logging in to the slave machine by executing the following command on the Jenkins master machine: ssh jenkins@<slave machine IP>
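
If the key exchange worked, this login should not ask for a password. A quick sanity check that the agent prerequisites are in place might look like this (the host placeholder is the same one used above):

# Confirm passwordless login, the jenkins user, and the Java installation in one call
ssh jenkins@<slave machine IP> "whoami && java -version"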

 

Setup of Credentials on Jenkins

On the Jenkins dashboard, click on Credentials.

 

Click on Global

Click on Add Credentials

Kind: SSH Username with private Key

Username: Jenkins

Private Key: Enter directly and paste the id_rsa private key

Click on OK

 

Jenkins user with SSH key is created.

 

Setup of New Node in Jenkins

Click on Manage Nodes

 

Click on New Node

 

Enter Node name LINUX_DEMO, choose Permanent Agent and click OK

 

  • Description: Slave node
  • Remote root directory: /home/jenkins
  • Launch method: Launch slave agent via SSH
  • Host: IP address
  • Credentials- Jenkins
  • Host Key Verification Strategy – Known hosts file Verification strategy

Click on Save


 

Slave node LINUX_DEMO is created

 

Click on Log to see the connection status details

If all the above steps are executed successfully, the connection to the slave machine will be established.

Conclusion:

To test a cross-platform code base on different operating systems, you can configure slaves with different OSes and run the job against them. A distributed architecture also reduces the load on the master server.

In this blog, we have learned how to set up a master/slave configuration for Jenkins across different operating systems.

 


We will continue from where we left off in the previous part, where we finished configuring the HANA database and the user that will communicate with the database.

STEP 4: Deploy the Application

Let's build the WAR file first. If you are using Eclipse, perform a Maven build and specify the profile as cf. Otherwise, you can also build the application using the Maven command line tool with the following command –

mvn clean install -P cf

Let's push the application to the Cloud Foundry space now. To deploy the application, execute the following command from the project root –

cf push

If everything goes alright, you should have your application deployed in Cloud Foundry. You can check the status of the application using –

cf app spring-hana-cloud-foundry
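
If the push fails or the application crashes on startup, the recent logs are usually the fastest way to find out why. cf logs is a standard cf CLI command; the application name is the one used throughout this series:

cf logs spring-hana-cloud-foundry --recent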

 

Yay! The application is deployed. You can try out the endpoints listed below; a quick curl smoke test follows the list –

  1. GET <APP-ROOT>/employee/count -> Returns count of total employees in database.
  2. GET <APP-ROOT>/employee/all -> Returns all employees in database.
  3. GET <APP-ROOT>/employee/id/{id} -> Returns the employee instance corresponding to the ID.
  4. POST <APP-ROOT>/employee/add -> Add a new employee in database. Use the following payload in body –
    {
    	"id":"1",
    	"firstName":"Boudhayan",
    	"lastName":"Dev",
    	"email":"email@example.com",
    	"contact":"12121212"
    }
  5. DELETE <APP-ROOT>/employee/delete/{id} -> Delete particular employee from database.
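
For a quick smoke test from the command line, the same endpoints can be exercised with curl. This is only a sketch; <APP-ROOT> stands for the route reported by cf app, and the JSON payload is the one shown above:

# Count and list employees
curl <APP-ROOT>/employee/count
curl <APP-ROOT>/employee/all

# Add an employee, read it back by ID, then delete it
curl -X POST <APP-ROOT>/employee/add -H "Content-Type: application/json" -d '{"id":"1","firstName":"Boudhayan","lastName":"Dev","email":"email@example.com","contact":"12121212"}'
curl <APP-ROOT>/employee/id/1
curl -X DELETE <APP-ROOT>/employee/delete/1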

 

STEP 5: Check Database content.

Execute a few POST requests as listed above to insert some items into the database. We will now check the database with the database user to see those records –

Launch SAP HANA Cockpit with the Database user –

If you are still logged on with the SYSTEM user, try logging out by clicking Clear Credentials. This would prompt you to enter credentials. Use the credentials found in the service key that we created earlier in step 3.

Once logged in, click Open SQL Console. This will open the database explorer where you can check the database artifacts created by this user –

As you can see, the table EMPLOYEE exists for the database user. If you open the contents of this table, you'll find the records that were inserted.

 

I inserted 3 records earlier, which can be seen in the database above. We have successfully bound the HANA DB to our application on SAP Cloud Platform (CF).

This completes the 3-part blog series, in which we saw how to create a Spring Boot application and bind it to a HANA database. In case of any doubts or queries, do leave a message. I'll be happy to help.

All the best !

Thanks

 

Find the remaining parts here –
  1. Develop the Spring Boot Application (PART 1) – Develop a Spring Boot (Java) application with HANA database on SAP Cloud Platform (Cloud Foundry) – PART 1
  2. Create instance of HANA service (PART 2) – Develop a Spring Boot (Java) application with HANA database on SAP Cloud Platform (Cloud Foundry) – PART 2
  3. Deploy and Test (PART 3) – Develop a Spring Boot (Java) application with HANA database on SAP Cloud Platform (Cloud Foundry) – PART 3

In this blog post, we continue with the Spring Boot application that we developed in PART 1. We will now look at the steps involved in binding a HANA instance to our application.

STEP 2 : Create HANA service instance.

We will create the HANA service instance in this step. The HANA service provisions a HANA DB for us. This is a paid service, so you'll need to enable it using the ISM tool.

For this exercise, we will use the 64standard service plan.

Navigate to the Service Marketplace of your CF space and select SAP HANA Service.

Create a new instance of this service as follows –

Provide a 15-character-long password.

You can skip the next step, as the binding is not really required. Confirm the creation of the service by giving it a name: haas.

STEP 3 : Create SAP HANA Schema and HDI Container service instance.

Now that we have provisioned a HANA DB, it is time to create an instance of the HANA Schema and HDI Container service. This service generates a schema (if you use the schema plan) which allows your Spring Boot application to create its database artifacts. There are other plans, of course, which cater to different use cases. For example, if you follow the Cloud Application Programming model, it is recommended to model the database artifacts in CDS. In that case, the hdi-shared plan is used, which deploys the database artifacts from your CDS files.

However, in this exercise we will make use of the schema plan, as our database tables are defined in the application layer itself.

Navigate to the Cloud Foundry space and create an instance of the SAP HANA Schema and HDI Container service as follows –

You can leave out the application binding for now, as the binding will take place during application deployment. We will name the instance hana_migration.
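
If you prefer the command line over the cockpit, the same instance can be created with the cf CLI. A sketch under assumptions: the marketplace service name (hana) and the available plans vary by landscape, so check cf marketplace in your space first:

# Inspect available services and plans, then create the schema-plan instance
cf marketplace
cf create-service hana schema hana_migration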

STEP 3 : Assign Roles and permissions for the HDI user.

Now that you have created the HDI schema service, it will have generated a database user for you. We need to assign roles and permissions to this user so that our application can use it to create the database artifacts.

We will use the CF CLI Tool. Documentation for the usage of this tool can be found here – CLI Documentation.

Let us create a service key for the hana_migration service instance (cf csk is the short alias for cf create-service-key).

cf csk hana_migration hanaKey
cf service-key hana_migration hanaKey

This will generate a service key as follows –

{
 "certificate": "-----BEGIN CERTIFICATE-----",
 "driver": "com.sap.db.jdbc.Driver",
 "host": "zeus.hana...",
 "password": "...",
 "port": "2...",
 "schema": "U...",
 "url": "jdbc:sap://...",
 "user": "U..."
}

We will grant roles and permissions to this user.

 

3.1 Open the dashboard for the HANA Service instance.

Open the dashboard for the HANA service instance that we created earlier –

3.2 Open HANA Cockpit

The HANA Cockpit opens with the SYSTEM user that we created while instantiating the SAP HANA Service. Navigate to Role Management and assign the following roles to the user (found in the service key above) –

And finally, we need to grant the package privilege to the user. Navigate to Privilege Assignment.

For System Privileges, grant all the privileges –

 

For Package Privileges, assign the following privilege –

 

With this we have everything in place to deploy the application.  In the next and final part of the blog series, we will deploy our application to Cloud Foundry and check the contents of the database.

See you in next part.

Cheers.

 

Find the remaining parts here –
  1. Develop the Spring Boot Application (PART 1) – Develop a Spring Boot (Java) application with HANA database on SAP Cloud Platform (Cloud Foundry) – PART 1
  2. Create instance of HANA service (PART 2) – Develop a Spring Boot (Java) application with HANA database on SAP Cloud Platform (Cloud Foundry) – PART 2
  3. Deploy and Test (PART 3) – Develop a Spring Boot (Java) application with HANA database on SAP Cloud Platform (Cloud Foundry) – PART 3

Intelligent enterprises expect to transform paper-based processes into logical, easily understood, and automated workflows. Increased use of process automation should free up time for companies to reap the benefits of digitalization through transformed business processes and respond more quickly to challenging situations. But achieving the optimal balance between automation and strategy is not always easy.

In the ASUG Pre-Conference Seminar "Hands-On with SAP Cloud Platform: A Deep Dive into the Digital Experience of Intelligent Business Process Management", our experts will guide you on how to manage live processes in the most effective way. Get hands-on experience with handling business-critical end-to-end scenarios, including:

  • Automating and orchestrating business processes with SAP Cloud Platform Workflow
  • Steering the decision-automation process with SAP Cloud Platform Business Rules
  • Achieving a real-time view of workflows with SAP Process Visibility
  • Infusing intelligence in your processes with digital assistance and conversational UX
  • Leveraging structured workflow automation together with SAP Intelligent Robotic Process Automation
  • Improving the business process experience, connecting X-Data (Qualtrics experience data) with O-Data (operational data)

Benefit from the best practices we will outline for developers and solution architects. Listen to our customer use cases – how they changed, improved and created new innovative live processes. Interact with our experts and build up your knowledge in one day.

Take this opportunity, register now (seats are limited) and be ready to spearhead the evolution of your enterprise to an intelligent enterprise.

ASUG Pre-Conference Seminar

Hands-On with SAP Cloud Platform: A Deep Dive into the Digital Experience of Intelligent Business Process Management

Monday, September 23, 2019

8:00 AM – 5:00 PM

Register NOW!

 

This article provides insights into SAP Cloud Platform Big Data Services, one of the popular solutions from SAP. It allows customers to handle big data in Hadoop on a cloud subscription basis.

Big data in the business ecosystem

It is common knowledge that SAP solutions are used by major international companies from different industries, including metallurgy, oil, and gas, as well as other conservative sectors of the economy. The software giant develops and deploys modern IT solutions for them. Nowadays, these enterprises are investing more and more in cutting-edge technologies, such as the Internet of Things (IoT), machine learning, or big data processing.

One of the motivations for the latter is to benefit from this data in new ways. For instance, metallurgical companies are trying to find new sources of income or ways to cut expenses in the current economic and geopolitical circumstances. Big data can help them generate new ideas as it speaks volumes about global industry trends in general and their business processes in particular.

There are plenty of services out there, both open-source and commercial, to store and work with big data. Hadoop, along with its extra components, is the most popular one. The features that make it the leading solution in its niche include:

  • Dependability
  • Scalability
  • Affordable data storage
  • A vast variety of additional open-source components for processing data, such as Hive, Spark, etc.
  • Lots of experts who are proficient in working with Hadoop

The popularity of open-source solutions with zero cost is understandable. Nevertheless, in scenarios where Hadoop is deployed for industrial use, free open-source services are hardly ever leveraged in their pure, original form. Instead, commercial variants of open-source solutions based on Hadoop have been gaining traction among businesses. A few well-known providers of these products are Hortonworks and Cloudera, but there are many more. These companies’ responsibility boils down to delivering reliable software and ensuring a seamless interaction of all the components. Some services adopt a different approach, enabling their clients to work with big data via the cloud on a subscription basis.

At some point, many companies face a tough choice between the on-premise and cloud approaches to working with big data. Most IT teams prefer the former option due to concerns about the reliability of the cloud.

It’s not hard to deploy Hadoop on local servers at the initial stage of dealing with big data, which is accompanied by commonplace hypothesis testing and other checks. Things get more complicated once you move on to commercial use of the solution, where it needs to meet specific requirements: SLA uptime level of 99.9%, guaranteed high reliability of storing massive amounts of data, as well as compliance with predefined KPIs.

In the event you choose to deploy Hadoop in production on-premise, the following tasks are going to be on your to-do list:

  • Hire skilled IT specialists
  • Buy appropriate hardware
  • Buy the necessary distribution kits, install and optimize the software
  • Launch the solution in production
  • Invest in regular maintenance (personnel salaries, hardware maintenance, etc.)

It’s worth mentioning that this preliminary phase takes quite a bit of time. This is why businesses often find it hard to decide which approach – on-premise or cloud – is the most suitable for them.

Bain & Company, a reputable management consulting firm, touched upon the Netflix case in one of their reports. In 2016, the media services provider claimed they had to work with thousands of nodes under immense load in order to process big data. In particular, they were processing about 350 billion user-generated events and petabytes of data related to their services every single day. Obviously, on-premise servers alone are incapable of addressing the objective, unless you are constantly busy building new data centers.

SAP has joined the cloud boom, too. In 2016, the company teamed up with Altiscale, one of the world’s leading providers of Big Data-as-a-Service. The resulting product is the SAP Cloud Platform Big Data Services. SAP customers can use a cloud subscription model to benefit from this solution. It is also embedded in SAP’s general cloud infrastructure.

So, what is the SAP Cloud Platform Big Data Services, SAP’s cloud-based Hadoop service?

SAP Cloud Platform Big Data Services is a toolkit for working with big data based on the SaaS (Software-as-a-Service) model. It includes the following three main components:

Apache Hadoop cluster

This cluster is built using Hadoop in compliance with the ODPi certification. This means that applications and scripts used in other services' ODPi ecosystems can be successfully executed in SAP Big Data Services.

The cluster, in turn, comprises three nodes: a control node ('namenode'), a maintenance node ('secondary namenode'), and a data node ('resource manager'). The initial set-up of the service already includes the YARN cluster management technology.

The secondary namenode supports additional services, such as Oozie, Hive Metastore, etc. When a customer subscribes to the solution, they get a separate cluster with all the necessary resources. The measurement of these resources is based on storage space and the number of machine-hours. The cluster's resources can be flexibly increased during periods of critical computation or on a permanent basis if necessary.

Workbench, the all-in-one access point

For the sake of security, direct access to the Hadoop cluster is restricted to the operating personnel and the Workbench. The customer can only access the Workbench, which spans local Hadoop as well as Spark, Hive, Oozie, Pig and other components required for data science and engineering, including SAP Predictive Analytics and SAP Lumira.

The customer can use the Workbench to launch scripts, examine data with business intelligence tools, and solve other tasks. The Workbench interacts closely with the Hadoop cluster via a high-capacity channel.

Big Data Services dashboard

The purpose of this element is to deliver a proper user experience, generate keys to access Big Data Services, provide cluster usage statistics, and handle other routine tasks the customer may come across.

The Big Data Services solution is connected to the outside world by means of a jump host server. All network communication is done within the local IP address space: the virtual private cloud and virtual private network. SSH is the default way of accessing Big Data Services, while alternative options are available upon request. The solution additionally supports Kerberos authentication, thus allowing clients to benefit from single sign-on (SSO) technology.

SAP Cloud Platform Big Data Services can interact with other services from SAP and with on-premise solutions. The following properties can facilitate successful integration:

  • Gathering and processing sensor data with Kafka Streams
  • Extracting data from relational databases by means of Kafka Connectors or SAP Data Services
  • Interacting with SAP HANA platform-based SAP systems through Smart Data Access and Smart Data Integration
  • Interacting with on-premise Hadoop at the Hadoop Distributed File System (HDFS) layer

All communication channels integrated with Big Data Services allow for high-speed data exchange with the customers’ system sources.

What makes the SAP Cloud Platform Big Data Services stand out from other cloud-based Hadoop solutions?

The fundamental difference is that the SAP toolkit can be flawlessly embedded into business processes due to the overarching interoperability between its architecture and SAP’s other systems and services. This is the key advantage for businesses seeking to monetize big data. If data scientists are the only ones who see the actual analysis results in a Hadoop usage scenario, they have yet to persuade enterprise users to put the new ideas into practice, and no one can guarantee that the hypotheses will be implemented. The SAP Cloud Platform Big Data Services can be directly interwoven with an organization’s internal IT systems as one of the critical steps towards a successful business process.


You can now subscribe to the Portal service in the trial account on the SAP Cloud Platform Cloud Foundry environment – for the following data centers: Europe (Frankfurt) and US East (VA). You have the full Portal capabilities available in the trial account.

Did you already create a trial account for SAP Cloud Platform? If not, you should do it right now. A trial account lets you explore SAP Cloud Platform for free. Access is open to everyone. Trial accounts are intended for personal exploration, and not for production use or team development. They allow restricted use of the platform resources and services. Usage of the trial is possible for 30 days. You can extend the trial period to a maximum of 90 days. A trial account in the Cloud Foundry environment can contain multiple subaccounts, which is different from the Neo environment, where you can manage only one trial subaccount.

To get more information about the different types of accounts on SAP Cloud Platform (trial and enterprise accounts), check out this link to the official documentation.

I. Prerequisite: Create your Trial account for SAP Cloud Platform

As a prerequisite to get access to the Portal service, you must create a SAP Cloud Platform trial account in the Cloud Foundry environment.

  1. Go to the Cloud Platform site and choose Start your free trial.
  2. Follow the steps on the official Cloud Platform documentation, see section “Get a trial account”.

II. Subscribe to Portal Service

  1. Click on your trial subaccount or the subaccount you created in the SAP Cloud Platform cockpit.
  2. Click Subscriptions from the side menu:
  3. Click on the Portal Service. You see that you are not yet subscribed:
  4. Click Subscribe. The status changes to Subscribed.
  5. You are now subscribed to the Portal service:

Note: If you click on Go to Application, you will realize that you still have no access to the Portal service application. You first have to assign yourself the Super_Admin role.

III. Assign Yourself to Super_Admin Role

To be able to access and work in the Portal service application, you need to do some configuration steps.

  1. Click on your subaccount.
  2. From the side menu, select Security -> Role Collections. Then create a new role collection (you can name it for example “Administrator”) and save:
  3. Click on your role collection to navigate to the Roles screen and then click Add Role.
  4. Select the following values:
    Application Identifier: portal-cf-service!<Your id>
    Role Template: Super_Admin
    Role: Super_Admin
  5. Use the breadcrumbs at the top of your screen to go back to your subaccount.
  6. From your side menu, select Security -> Trust Configuration.
  7. Click the SAP ID Service. If you are not using the SAP IDP (SAP ID Service), click Role Collection Assignment from the side menu.
  8. Enter your email address and then click Show Assignments:
  9. Click Assign Role Collection. In the dialog box that opens, select the Administrator role collection that you defined above and then click Assign Role Collection.

Now you are assigned to the Super_Admin role and you can access the Portal service as follows:

  1. Click again on your trial subaccount or the subaccount you created in the SAP Cloud Platform cockpit.
  2. Click Subscriptions from the side menu.
  3. Search for the Portal Service and click on Go to application.
  4. Enter your email and password and log on. You now have access to the site directory in the administration environment and you can start creating Portal sites:

 

Also watch this video, which shows the steps you need to take to subscribe to the Portal service and how to assign yourself the Super Admin role:

For further information, you can also check out the SAP Cloud Platform Portal Administrator Guide. 

 

 

 

 


Hi all,

SAP Enterprise Threat Detection was released 5 years ago. After more than 3 years of presales experience and 200 customer presentations, I want to share some further information with you.

SAP Enterprise Threat Detection (ETD) and Security Information and Event Management (SIEM). What is the difference and how can they work together?

I will try to give an answer….

Introduction:

A lot of companies already have a running SIEM product or are currently evaluating one.

Seven years ago, SAP IT (which runs our own SAP systems) was looking for a solution to discover suspicious activities at the SAP application level based on SAP log file information. The result of this evaluation was to develop a product in this area, because the existing products did not have the functions SAP IT needed at the SAP application layer. Why?

The general approach of SAP ETD is different. SAP ETD collects data at the SAP application level, based on information the different SAP applications write to several SAP log files. Below is a graphic showing the different log files SAP ETD supports automatically (HANA, ABAP, and Java).

Actions in SAP applications can be monitored via SAP log files. Events like debugging, calling a critical transaction, changing authorizations, or downloading data to a file from a transaction are not visible at the infrastructure level. In addition, meta information such as an employee's position is not available at the infrastructure level.

Result: SIEM products and SAP ETD both monitor log file information, but the focus is very different. Classical SIEM products focus on the infrastructure level and usually can only partly monitor the SAP application level, or cannot monitor the SAP world at all.

So, the most important question we need to answer:

Do you want to use the monitoring tools separately or link the application and infrastructure levels together? To answer this question, you have the following options:

1) Using ETD as an alarm signal with a focused scenario

https://blogs.sap.com/2019/05/06/sap-alarm-system-avoid-data-breaches-with-sap-enterprise-threat-detection/

When running SAP ETD in such an environment, integrating SAP ETD alerts into a leading SIEM product is not necessary. An SAP administrator can maintain ETD and ensure that a data breach is detected. The focus is solely on protecting the most important data stored in SAP applications.

This kind of monitoring of suspicious activities should be the basic protection for every customer who has important data stored in SAP systems.

2) Monitor the IT landscape only at the infrastructure and network level.

This should only be done if your company does not have important data in SAP applications. In such a case, you do not need to monitor activities in SAP.
Please keep in mind that in more than 95% of attacks on data stored in SAP applications, the attack comes from internal sources. These attacks are often executed directly at the application level and cannot be detected if you focus on the IT landscape only.

3) Direct integration of SAP log file information into a leading SIEM product without SAP ETD.

Some SIEM tools can technically integrate SAP log files such as the SAP Security Audit Log or other SAP log files. Does this help?

Let's have a look at the features of SAP ETD that other SIEM systems don't provide:

a) SAP constantly updates security patterns based on information we get from customers, partners, and security agencies.

https://blogs.sap.com/2019/07/03/new-and-updated-attack-detection-patterns-with-sap-enterprise-threat-detection-2.0-and-sap-security-notes/

This helps our customers constantly get the newest protections directly from SAP. If possible, we even give you patterns for the newest SAP vulnerabilities as soon as the vulnerability is announced. This is a way to protect your landscape in case downtime of the productive systems is not possible. It is a kind of virtual patching.

b) Usually a hacker tries to cover their tracks by deleting entries in the different log files to avoid detection. SAP ETD has a unique technology to duplicate log file information in real time, meaning SAP ETD logs and saves the attacker's activities based on log file information, even if the attacker deletes this information in the original data. In the same process, the data is pseudonymized to meet the requirements of the German works council.

c) The current version, SAP ETD 2.0, already supports the first cloud scenario (Multi Tenant Edition (STE)). SAP ETD has an integration with SAP Cloud Platform (SCP), meaning attacks on SCP will be monitored in SAP ETD.

As you can see, the advantages of SAP ETD are tremendous. These features provide unique value in protecting your SAP landscape.

Be aware that SAP log files like the business transaction log can easily produce more than 1 terabyte of data every day. That may have an influence on the performance and license needs of your SIEM product.

Result: That's why a direct integration of SAP log files into a SIEM tool might not be the right choice.

4) Integration of SAP ETD into a leading SIEM product

This is the approach a lot of customers use to combine the best of both worlds: SAP ETD for SAP applications and a SIEM tool for the infrastructure and network level. This approach is mostly used by large and medium-sized companies that already have a SIEM product in mind.

In such an environment, there are two different approaches to integrate with a leading SIEM product: you can use a standard API with JSON, or integrate via the LEEF format.

We work together with several SIEM vendors (IBM QRadar, HP ArcSight, …) to guarantee a smooth integration. Examples of the integration are available in the SAP Security community, such as:

IBM QRadar:

https://blogs.sap.com/2018/04/28/sap-enterprise-threat-detection-integrated-in-ibm-qradar/

HP ArcSight:

https://blogs.sap.com/2016/05/18/sap-enterprise-threat-detection-integrated-into-hewlett-packard-enterprise-arcsight/

Other SIEM tools can be integrated using the same technologies.

In such environments, the alerts created by SAP ETD are published in the leading SIEM system. The first-level support of the Security Operations Center (SOC) will be in charge of handling most of the ETD alerts. Second-level support, mostly located in the SAP administration team, will be contacted only if first-level support cannot solve the problem. In such an environment it is very important to define the SLAs in advance.

Summary: As you can see, there are several options for using monitoring tools. Based on my experience of the last 2 years, I recommend options 1 and 4.

Option 1 is most interesting for companies that want to protect the most important data stored in SAP applications and do not want to invest a lot of time in a Security Operations Center (SOC). This is the basic option which should be used by all customers who have very important data in SAP systems.

Option 4 is the solution for companies that want to integrate SAP security into an existing SOC. This is a very comprehensive solution for customers who already have a SIEM system in place.

In addition, companies should think about who will be in charge of running the SAP ETD application. It is not necessary to run it all in-house. Several providers offer managed services for SAP ETD, which can reduce the in-house effort to nearly zero. Details about this approach can be found via the following link:

https://blogs.sap.com/2019/06/11/managed-service-scenarios-for-sap-enterprise-threat-detection/

I hope this blog helps you find the right monitoring tool for you.

 

Best regards,

Martin


Low-code platforms have been gaining traction within organizations as a fast and efficient method of creating business applications.

The low-code market is projected to reach $27.23 billion in the year 2022, and that is just the beginning of the market relevance that low-code development and low-code tools are going to gain in the coming years. It is projected that the world's software needs are going to increase to such a degree that there will be a shortage of expert coders able to meet the demand. Low-code solutions will be able to fill this increasing demand, offering businesses a cheaper and faster way to develop their ideas without losing any of the capabilities that hard coding offers.

 Let’s review how low-code is becoming the future of enterprise app development:

1. Substituting Traditional Coding with Visual Modeling

Traditional code comes with a very manual process that requires scoping, building, testing and releasing. The process can take months or even years depending on the size of the project, and often requires multiple team members who work both independently and on teams. Alternatively, low-code enables organizations to develop applications without the need to write every single string of code. A low-code platform facilitates such an environment by turning software development from a hand-coded methodology into visually oriented development thanks to flowcharts, cloud-based tools, and APIs.

As a result, developers can build software just by dragging and dropping the elements that make up the software. Hence, application development becomes much easier and less taxing on developers. It can reduce the time and amount of labor needed to move from idea to working application.

2. Opportunity for More Developers

The low-code movement will enable a more diverse team of technology experts. In the past, an organization had to choose a language to code in, like .Net or Angular, then could only hire team members who wrote in that code.

Low code app development allows a team to hire the smartest candidates and those with the right fit, regardless of coding language experience. Low-Code will provide an opportunity to a larger spectrum of developers, helping them develop an application without requiring years of experience.

3. Faster Development

Low code is a simplified coding process, which means that the time it takes to create an application is greatly reduced. A typical multi-platform business app takes about 6 months to complete; the same can be achieved using low-code in less than three months. Simple apps can be developed within a matter of days.

Low-code development means that businesses can have their apps running and market-ready in much less time compared to hand-coded methods. Additionally, organizations can stay more agile and capitalize on opportunities faster.

4. Fewer Errors and Bugs

Errors are common with traditional coding because we don't know in advance where the code goes wrong until it is actually executed on the test bed. The bug testing phase is a long and tricky part of code development, with multiple team members testing to find bugs, then working to correct them and repeating that process. However, with low-code, each API that the developer plans to use in their software has already been tested and verified.

Also, when different application modules are put together, the system itself will check for cross-compatibility between them so that they run better with each other. Hence, the number of errors during and after launch can be reduced and managed better than with traditional alternatives.

5. The Future of AI Backed Coding

Many low-code solutions also have the added benefit of incorporating AI. The right low-code tool may use AI to run tasks, integrate data sets or make sense of unstructured data. AI can be used to test code and identify errors that humans may have missed.

Additionally, some low-code platforms have AI that monitors each and every action performed by the developer. So, if certain modules don't work well with each other or if there are other modules better fit for the app, the AI will notify the developer right away, making coding easier and developer-friendly.

The Future is Here

Low code empowers existing developers to do more in less time. Low code can help organizations build interdisciplinary teams and enable a friendly software development ecosystem for future developers. Working together, developers can create, iterate and release applications in a fraction of the time it takes with traditional methods. The application built with low-code is less prone to bugs and can even be backed by powerful AI tools. Every organization should look to find ways to incorporate low-code in their development process.
