What’s the goal of this tutorial?

By the end of this tutorial, you’ll be able to speak with your own chatbot to get information about temperature and humidity from an IoT device.

I’ll walk you through, step by step, how to configure your SAP Cloud Platform (IoT Service + SAP HANA DB), code your IoT device, and create a new chatbot that integrates with the IoT Service in SAP Cloud Platform, storing data in an SAP HANA database.

The diagram below shows the expected result at the end of this article.

Data flow

Prerequisites
  • SAP Cloud Platform Neo account (Create a free trial account here);
  • SAP Conversational AI (Create a free account here);
    • If you don’t have experience with SAP CAI, I suggest first following this tutorial to understand how to work with it.
  • An IoT device. You can use an existing one or prototype a new one (our case here). But don’t worry if you don’t have a physical IoT device or can’t prototype one – you can simulate one with your computer sending data to the service using Postman.
Steps
  1. Create a new database (SAP HANA MDC)
  2. Activate and configure an IoT Service in your SAP Cloud Platform account
  3. Create the digital twin
  4. Configure your IoT Device
  5. Create a new service (XSJS) in your database
  6. Create a chatbot in SAP Conversational AI
STEP 1: Create a new database (SAP HANA MDC)

In the left-hand menu of your SAP Cloud Platform account, go to SAP HANA / SAP ASE and click on Databases & Schemas:

Click on New, provide the information asked on the next screen, and click on Create:

IMPORTANT: In this step, I activated the option to configure a new user for SHINE, but it isn’t needed; if you prefer, you can instead create a new user in SAP HANA (under Security) and assign the correct roles. I activated it here simply because it’s easier.

Wait for the DB creation. When it’s ready, you’ll see a screen like this one in the Overview menu option:

Here is a trick! Click on the SAP HANA Cockpit link, and on the next page, log on with the SYSTEM user using the password that you’ve just created:

This action will assign some needed roles. After that, you can click on the SAP HANA Web-Based Development Workbench link, and you can log on with your SHINE user to change your password for the first time and to check your SCHEMAS in your new HANA database.

Done, your new DB is created!

STEP 2: Activate and configure an IoT Service in your SAP Cloud Platform account

First, you must activate IoT Services in your SAP Cloud Platform account.

In the left-hand menu of your SAP Cloud Platform account, go to Services and search for Internet of Things:

Click on Go to Service.

STEP 3: Create the digital twin

In the IoT Service Cockpit you must set up a new Message Type, a new Device Type and a new Device in this exact order.
These following steps will create a Digital Twin of your physical IoT device.

Step 1/3

Click on Message Types and on the Create message type icon at the bottom of the page. Following the example below, fill in the information according to the data sent from your IoT Device, and click on Create.

Note: I’ve created these 3 fields because my IoT prototype sends these 3 values.

You’ll need the Message Type ID later on, but you can check it anytime.

Save this information and go back to the Cockpit.

Step 2/3

Click on Device Types and then click on the Create Device Type icon at the bottom of the page.

Enter the name of your device type and click on Add Message Type.

Enter a name in the Assignment Name field and choose the Message Type that you’ve just created, and select the From Device option. Click on Create.

Step 3/3

Click on Devices and then click on the Create Device icon at the bottom of the page.

Fill in Name with a name for your device, select the Device Type that you’ve just created and click on Create.

IMPORTANT: You must save the token that appears in this step because you won’t be able to see it again afterwards. If you lose this token, you’ll need to generate another one.

You’ll need the device ID too, but you can check this information at any time.

Go back to the cockpit and click on Redeploy the Message Management… or Deploy if you’ve never used it before.

Enter the account information and click on Deploy twice.
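If you’re simulating the IoT device instead of using real hardware, the message Postman would send can also be built and sent from a short script. Everything below (host pattern, device ID, message type ID, token, and field names) is a placeholder; substitute the values from your own IoT Service Cockpit:

```python
# Placeholders: substitute values from your own IoT Service Cockpit.
ACCOUNT = "p1234567trial"            # hypothetical SCP Neo trial account
DEVICE_ID = "your-device-id"
MESSAGE_TYPE_ID = "your-message-type-id"
DEVICE_TOKEN = "your-device-token"

def build_request(account, device_id, message_type_id,
                  temperature, humidity, timestamp):
    """Build the URL and JSON body for one HTTP data message."""
    url = (f"https://iotmms{account}.hanatrial.ondemand.com"
           f"/com.sap.iotservices.mms/v1/api/http/data/{device_id}")
    body = {
        "mode": "sync",
        "messageType": message_type_id,
        "messages": [{"timestamp": timestamp,
                      "temperature": temperature,
                      "humidity": humidity}],
    }
    return url, body

url, body = build_request(ACCOUNT, DEVICE_ID, MESSAGE_TYPE_ID, 24.5, 61.0, 1563302400)
# To actually send it (requires the requests package and a valid token):
# import requests
# resp = requests.post(url, json=body,
#                      headers={"Authorization": f"Bearer {DEVICE_TOKEN}"})
```

A 200 response from the Message Management Service means the message was accepted and will be processed.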


In this tutorial, you are going to learn how to integrate your SAP Conversational AI chatbot with a HANA MDC database.

First, we are going to insert data into a HANA MDC database and expose the table via an OData service. Afterwards, we use SAP Conversational AI to connect a chatbot to our previously created HANA table. As a prerequisite, you need an already created chatbot.

This exercise is split into 4 different parts:

  1. Create a new SAP HANA MDC Database
  2. Expose the table data via an OData service
  3. Create a chatbot on SAP Conversational AI
  4. Connect your chatbot to SAP HANA with a Webhook API
1. Create a new SAP HANA MDC Database

First, log into your SAP Cloud Platform Cockpit here.

Navigate to Persistence > Databases & Schemas.

Select the New button to create a new Database on your SAP Cloud Platform account.

Type in the following values:

  • Database ID: scpta
  • Database System: SAP HANA MDC (<trial>)
  • SYSTEM User Password: Keepitshortsweet1 (the password must be at least 15 characters and contain at least one lowercase letter, one uppercase letter, and one number)

Then keep all other parameters the same.

Click Save.

After you click Save, you’ll see an Events screen. This describes the activity within your newly created SAP HANA database.

Select Overview from the left-hand side menu. This will allow you to access the SAP HANA Web-based Development Workbench. Notice your database is still being created. Please wait a few minutes until the CREATING indicator turns into STARTED.

In order to access the SAP HANA Web-based Development Workbench, you first must open the SAP HANA Cockpit and sign in using your new SYSTEM user and password.

You will be alerted that you’re not authorized to access this page. You will then automatically be assigned the Admin roles and brought to the Admin Cockpit Launchpad.

In order to access the SAP HANA Web-based Development Workbench, you will also need to assign the proper roles to your SYSTEM user. Start by opening the Manage Roles tile to access the Security screen.

Navigate to Users > SYSTEM. Then navigate to Granted Roles.

For each item below, select the + button and add the following roles:

  • sap.hana.admin.roles::Administrator
  • sap.hana.ide.roles::CatalogDeveloper
  • sap.hana.ide.roles::SecurityAdmin
  • sap.hana.xs.admin.role::SQLCCAdministrator
  • sap.hana.xs.ide.roles::Developer
  • sap.hana.xs.ide.roles::EditorDeveloper

Select Save and exit the Security screen.

You will now access the SAP HANA Web-based Development Workbench by selecting it beside the Developer Tools label on the scpta-Overview page.

Once open, select Catalog.

Once the Catalog is open, select the SQL icon from the toolbar on the upper part of the screen. This will launch an SQL scripting window.

For this tutorial, I created a table with the following schema:

CREATE COLUMN TABLE "SYSTEM"."CUSTOMER"(
	"customerNumber" NVARCHAR(10),
	"firstName" NVARCHAR(50),
	"lastName" NVARCHAR(50),
	"email" NVARCHAR(50),
	"city" NVARCHAR(50),
	"state" NVARCHAR(50),
	"zip" NVARCHAR(10),
	"totalSpend" DECIMAL,
	"homeCountry" NVARCHAR(50),
	"specialization" NVARCHAR(60),
	"concentration" NVARCHAR(60),
	PRIMARY KEY (
		"customerNumber"
	)
);
COMMENT ON COLUMN "SYSTEM"."CUSTOMER"."customerNumber" is ' ';
COMMENT ON COLUMN "SYSTEM"."CUSTOMER"."firstName" is ' ';
COMMENT ON COLUMN "SYSTEM"."CUSTOMER"."lastName" is ' ';
COMMENT ON COLUMN "SYSTEM"."CUSTOMER"."email" is ' ';
COMMENT ON COLUMN "SYSTEM"."CUSTOMER"."city" is ' ';
COMMENT ON COLUMN "SYSTEM"."CUSTOMER"."state" is ' ';
COMMENT ON COLUMN "SYSTEM"."CUSTOMER"."zip" is ' ';
COMMENT ON COLUMN "SYSTEM"."CUSTOMER"."totalSpend" is ' ';
COMMENT ON COLUMN "SYSTEM"."CUSTOMER"."homeCountry" is ' ';
COMMENT ON COLUMN "SYSTEM"."CUSTOMER"."specialization" is ' ';
COMMENT ON COLUMN "SYSTEM"."CUSTOMER"."concentration" is ' ';

Copy this content into the SQL window. This will create a table called CUSTOMER in your SYSTEM schema of the SAP HANA database, with customerNumber as the primary key.

Click Run.

Verify success with the log below the SQL window. If you wish to have the SQL code structured, select the Format Code icon.

Once the table is created, we can now add data.

Clear your SQL scripting window or close it and open another window.

INSERT INTO "SYSTEM"."CUSTOMER" VALUES('340349','John','Murray','john.murray@test.com','Middle River','MD','21220',7700000,'Spain','Finance','LoB Finance');
INSERT INTO "SYSTEM"."CUSTOMER" VALUES('861474','Alex','Jones','alex.jones@test.com','Roswell','GA','30075',9400000,'Mexico','DSC Products','Manufacturing');
INSERT INTO "SYSTEM"."CUSTOMER" VALUES('341324','Sarah','Hasser','sarah.hasser@test.com','Grand Haven','MI','49417',1800000,'Pakistan','Business Intelligence / Predictive','BI Platform / Tools');
INSERT INTO "SYSTEM"."CUSTOMER" VALUES('340848','Peter','Dent','peter.dent@test.com','Billings','MT','59101',9300000,'Russia','Customer Engagement & Commerce','Marketing');
INSERT INTO "SYSTEM"."CUSTOMER" VALUES('339949','Amanda','Ronan','amanda.ronan@test.com','Randallstown','MD','21133',5700000,'India','Human Resources','not defined');
INSERT INTO "SYSTEM"."CUSTOMER" VALUES('338632','Benjamin','Krygsmen','benjamin.krygsmen@test.com','Franklin','MA','2038',6500000,'Australia','Human Resources','not defined');
INSERT INTO "SYSTEM"."CUSTOMER" VALUES('668939','Stephen','Parker','stephen.parker@test.com','Phillipsburg','NJ','8865',1400000,'Germany','DSC-Supply Chain',' Supply Chain Execution');
INSERT INTO "SYSTEM"."CUSTOMER" VALUES('861443','Brian','Guerrero','brian.guerrero@test.com','Midland','MI','48640',3100000,'Mexico','Cloud & Platform Technologies','SAP Cloud Platform');
INSERT INTO "SYSTEM"."CUSTOMER" VALUES('339623','Chris','Johnson','chris.johnson@test.com','Bronx','NY','10451',4000000,'KSA','DSC-Supply Chain','Supply Chain Execution');
INSERT INTO "SYSTEM"."CUSTOMER" VALUES('861193','Johanna','Benton','johanna.benton@test.com','Roseville','MI','48066',8100000,'Chile','DSC-Supply Chain','Supply Chain Execution');
INSERT INTO "SYSTEM"."CUSTOMER" VALUES('861475','Oliver','Sanchez','oliver.sanchez@test.com','Hernando','MS','38632',5500000,'Mexico','Customer Engagement & Commerce','Marketing');

Copy this content into the SQL window, adjusting the data if you prefer.

Click Run.

Verify success with the log below the SQL window.

2. Expose the table data via an OData service

Navigate back to the SAP HANA Web-based Development Workbench and select Editor.

First we will create a new package for exposing the customer table as an OData service.

Right-click on the top-level ‘Content’ folder and select New > Package.

Name the package ‘customer’ and leave the remaining fields blank (note: package names are case-sensitive, and lowercase is recommended).

Select Create.

Each SAP HANA XS application must have an application descriptor file called .xsapp.

Right-click on the new package you created and select New > File and name the new file ‘.xsapp’.

Select Create.

If needed, replace the code with the code below:

{}
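The OData exposure itself is then defined in an .xsodata file in the same package. A minimal sketch (the file name customer.xsodata and the entity set name Customers are assumptions, not from the original article) that would expose the CUSTOMER table:

```xsodata
service {
    "SYSTEM"."CUSTOMER" as "Customers";
}
```

An .xsaccess file in the same package (for example, {"exposed": true}) is also typically required so the package content can be reached over HTTP.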


When it comes to sharing your chatbot on different channels, Messenger MUST be one of your first choices – if not THE first choice! In fact, we realized many companies immediately put their bots on Facebook once in production, as it’s clearly the most user-friendly and easiest way for a customer to contact a company.

A Facebook chatbot has a lot of advantages:

  • 24/7 availability
  • 100% answers
  • Instant answers (think about your Answer rate!)
  • Tedious tasks are automated

On the SAP Conversational AI platform, we created a step-by-step integration process for our users, so that it only takes a few minutes to reveal your chatbot to your Facebook followers. Let’s dive in!

Step 1: Get your chatbot ready

First of all, you’ll need a chatbot (seems legit, right?!). Note that once your chatbot is online on Facebook, you’ll be able to modify it, and any changes you make to it will appear in your Messenger chat.

For the purpose of this tutorial, we won’t go into how to create a chatbot. Instead, I warmly invite you to create your account (it’s completely free!) and read our tutorial “How To Build Your First Bot With SAP Conversational AI”. Once your “Joke Chatbot” (or whatever you’ve built) is ready, return here!

Step 2: Get your Facebook page ready

Your chatbot will only be available for integration on a Facebook page (not on your personal profile). This means you have to create a Facebook page or have in mind the one you’ll use. Let’s assume your company, business, or group already has a page. (If that’s not the case, hit this link and create one.)

As I said in the introduction, having a chatbot on a Facebook page will automate private messaging once it’s connected to your page. Thus, if you decide to remove the chatbot, you’ll immediately revert to traditional person-to-person conversations (which means nothing will happen when users enter a message until you manually answer them).

Step 3: Create a Messenger Facebook app

Creating an app will help make the connection between SAP Conversational AI and your Facebook page. Without this app, you won’t be able to publish your chatbot on your Facebook page.

Click on this link, choose My Apps in the top menu and then Add New App.


Once your app is created, you’ll have to add a Messenger “product”. There are tons of jobs a Facebook app can be dedicated to, but we specifically want a private messaging application. Go to your app’s dashboard and click Set Up in the Messenger box.

In the left-hand menu, you’ll then see Messenger under PRODUCTS.

Step 4: Get your page token and app secret

Now that we’ve created a Messenger app, we need to link it to your Facebook page (by default, a Facebook app is an independent entity). With this connection, you’ll be given a token, which is basically a unique code that says “OK, this is the code of the Messenger app of the page X”.

In the left-hand menu, click Settings just below the product Messenger.

Choose the page you want your chatbot to appear on.

For security reasons, you’ll probably need to allow the app to interact with your Facebook page. Click the blue Edit Permissions button, select your page, and check the different boxes.

Once the permissions are given, a token will be generated.

Go back to the Connect tab in your SAP Conversational AI chatbot, choose Messenger, and paste your token in the Page token field in step 4.

Yay, we’re halfway through! Let’s now get our “app secret”, which is like a password for your app.

In the left-hand menu, go to Settings > Basic.

For privacy, the app secret is hidden. Click Show and copy and paste it to the App secret field on your chatbot’s Connect tab (similar to what you just did with the page token).

Click Update channel under the SAP Conversational AI form.

Step 5: Connect SAP Conversational AI to your app

It’s time to connect our platform to Messenger!

On the Products > Messenger > Settings page, go to the Webhooks section and click Subscribe To Events.

In the pop-up window, enter the values for Callback URL and Verify token that you’ll find in step 4 of your chatbot’s Connect tab.

Also select the checkboxes shown below:

Once your page has reloaded, select your page in the list so that it can access your webhook.

Step 6: Test and publish the Messenger Chatbot

Now you can test your bot as an administrator (you can also grant some test roles using Roles > Test Users in the left-hand menu). Your bot won’t be publicly accessible until you change the status, so take your time to test it and make sure everything is just fine before releasing it to the world!

Once you’re happy with your bot, change the toggle to ON (in the top right corner). You’ll be redirected to the settings and prompted to provide some extra information before your bot is published. (Tip: You can also access the settings under Settings > Basic in the left-hand menu.)

Very last step: Facebook will want to verify and test your Messenger chatbot. Here’s what they say about this step in their documentation:

"When you are ready to release your bot to the public, you must submit it to our team for review and approval. This review process allows us to ensure your Messenger bot abides by our policies and functions as expected before it is made available to everyone on Messenger."— Facebook Documentation

In the left-hand menu, go to Products > Messenger > Settings and click Add to Submission in the pages_messaging block.

It won’t take long for the Facebook review team to look at your bot and give you the green light to publish it!

And that’s all there is to it!

Hope you enjoyed this tutorial. And remember you’re very welcome to contact us if you need help, through the comment section below or via Stack Overflow.

If you’re looking for another tutorial to improve your chatbot building skills and connect it to Amazon Alexa, read this article.

Happy bot building!

Experiments

In his book, The Master Algorithm, Pedro Domingos imagines the following experiment:

Take a building, extremely well built for two purposes: nothing can enter and, most importantly, nothing can leave. The former assures the safety of any human from the danger inside; the latter assures the safety of everyone on the planet from what is inside. Inside the building, one unlucky fellow finds a 3D printer and the robots it manufactures.

The design and algorithms that define and govern each robot are subject to both “mutation” and “mating”. And each created robot has one purpose – to eradicate every other robot. Such a strategy, through selection of the fittest, could create specialized killer robots and serve any army. 

This is an example of an application of genetic algorithms, a family of potent artificial intelligence algorithms. Genetic algorithms have already been used in real life. For example, NASA designed antennas for satellites with a genetic algorithm.

Nasa Evolved Antenna

This method created antennas whose shape had never before been seen and that were specific to each satellite orbit. 

But why mimic natural selection in AI? If you think about it (and if you don’t listen to creationists), natural selection has already created intelligence several times over the course of life’s history. The human brain is indeed a product of the selection of the fittest.

The ability of any living thing to use a central nervous system to process signals from its environment evolved one generation after the other. 

So it could make sense to try to channel the powerful force of evolution and selection in order to evolve AI algorithms once more.

How is novelty generated in life?

For any living thing, novelty can occur through mutation. The genetic code of any living organism (excluding viruses from the living, whose genome can be made of RNA) is made of DNA. DNA is just a polymer made up of 4 constituents, called nucleobases and denoted by the four letters ATGC (Adenine, Thymine, Guanine, and Cytosine).

Each run of three consecutive letters is called a codon, which the cell machinery can interpret as one building block of a protein, called an amino acid. This codon-by-codon interpretation of DNA directs protein synthesis and is responsible for protein shape and function.

From time to time, through chemical stress or copy error, a nucleobase can be replaced by another (a mutation), which can change the interpretation of a codon, hence altering the properties of a given protein. Over many generations and after subsequent selection, novelties are introduced into every species.

Sexual species can also introduce novelty through the mating process. Basically, each of two mates contributes half of its DNA to create a third individual. The DNA of the offspring can be reshuffled randomly during the mating process, which can create new combinations, hence new proteins. For example, in humans, each non-sexual chromosome is present in two copies, and it is possible for two chromosomes of the same type to exchange part of their DNA. This can create a totally new variation of a given gene. This process is called a cross-over.

Also, through copy errors, some genes can be duplicated or removed, which can create innovation. Then after mutation, two proteins with the same ancestor can differ and display different properties. 

How to mimic evolution inside a computer 

In a way, one can look at a genetic algorithm simply as an optimization technique. Let’s say you want to minimize a function f. At first, you would randomly create several “individuals”, say a hundred. Each individual would receive random attributes, which are inputs of f. Then, individuals would be evaluated through f, and you would only retain the 20 best combinations of inputs.

For each remaining survivor, some of the inputs could vary by a small amount: you need to define the probability of this happening and the magnitude of the change. This mimics the biological process of mutation.

Now you can also imagine a mating strategy. Take two individuals and make them exchange some attributes. If an attribute is a list, you could break the list at a point for both individuals and each would exchange a part of the list. This can be seen as the cross-over of living things. This will produce the offspring of the previous generation.  

By repeating this process of mutation, mating, and cross-over for several generations, it’s possible to learn inputs that minimize f.
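The loop just described can be sketched in Python; the population sizes, mutation parameters, and the sum-of-squares test function below are arbitrary choices:

```python
import random

def evolve(f, dim=3, pop_size=100, survivors=20, generations=50,
           mutation_rate=0.3, mutation_scale=0.5, seed=0):
    """Minimize f by selection, cross-over, and mutation (a toy sketch)."""
    rng = random.Random(seed)
    # Start with a random population; each individual is a list of inputs to f.
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep only the fittest combinations of inputs.
        pop = sorted(pop, key=f)[:survivors]
        # Mating / cross-over: children take a prefix of genes from one
        # parent and the remaining genes from another.
        while len(pop) < pop_size:
            a, b = rng.sample(pop[:survivors], 2)
            cut = rng.randrange(1, dim)
            child = a[:cut] + b[cut:]
            # Mutation: perturb each gene with a small probability.
            pop.append([g + rng.gauss(0, mutation_scale)
                        if rng.random() < mutation_rate else g
                        for g in child])
    return min(pop, key=f)

best = evolve(lambda x: sum(v * v for v in x))   # minimize the sum of squares
```

After a few dozen generations, the best individual sits close to the minimum at the origin, even though no gradient was ever computed.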

Genetic algorithms for hyper-parameter selection

Genetic algorithms can be used for a variety of machine learning tasks, and they have recently gained traction for machine-learning hyper-parameter search. There is a Python library dedicated to this use: TPOT.

Its usage is rather simple:

from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target,
                                                    train_size=0.75, test_size=0.25)

tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_mnist_pipeline.py')

Conclusion

Machine learning algorithms are often inspired by natural processes. Artificial neural networks are inspired by the neural organization of the visual cortex in humans, and genetic algorithms channel evolution for optimization purposes. We’ve described here how, through mutation, mating, and cross-over processes, it is possible to learn to minimize a function.

Genetic algorithms have recently been used for hyper-parameter search, as a replacement for gradient descent in very large networks, as a way to find new neural architectures, and for many other machine learning tasks.

Want to learn more about machine learning? Read this article about how NLP (Natural Language Processing) is teaching computers the meaning of words!


In the previous post, we saw how Natural Language Processing has moved from seeing words as a string of characters to what they mean. This post builds on those ideas.  

Mikolov et al. introduced the world to the power of word vectors by showing two main methods: Skip-Gram and Continuous Bag of Words (CBOW). Soon after, two more popular word embedding methods built on these ideas followed.

In this post, we’ll talk about GloVe and fastText, which are extremely popular word vector models in the NLP world.  

Global Vectors (GloVe) 

Pennington et al. argue that the online scanning approach used by word2vec is suboptimal since it does not fully exploit the global statistical information regarding word co-occurrences.  

In the model they call Global Vectors (GloVe), they say: “The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.”  

In order to understand how GloVe works, we need to understand two main methods which GloVe was built on – global matrix factorization and local context window.  

In NLP, global matrix factorization is the process of using matrix factorization methods from linear algebra to reduce large term frequency matrices. These matrices usually represent the occurrence or absence of words in a document. Global matrix factorizations when applied to term frequency matrices are called Latent Semantic Analysis (LSA).  
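As a toy illustration (the counts below are invented), a truncated SVD of a small term-frequency matrix yields the low-dimensional term and document representations that LSA works with:

```python
import numpy as np

# Toy term-document count matrix: 4 terms x 3 documents.
X = np.array([[2, 0, 1],
              [1, 3, 0],
              [0, 1, 4],
              [1, 0, 2]], dtype=float)

# LSA: a truncated SVD keeps the k largest singular values,
# giving k-dimensional term and document representations.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
term_vectors = U[:, :k] * s[:k]    # one k-dim vector per term
doc_vectors = Vt[:k, :].T          # one k-dim vector per document

# The rank-k product approximates the original count matrix.
X_k = term_vectors @ Vt[:k, :]
```

Similarity between two terms is then measured between their rows of `term_vectors`, rather than between raw count rows.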

Local context window methods are CBOW and Skip–Gram. These were discussed in detail in the previous post. Skip-gram works well with small amounts of training data and represents even words that are considered rare, whereas CBOW trains several times faster and has slightly better accuracy for frequent words.   

The authors of the paper mention that, instead of learning the raw co-occurrence probabilities, it was more useful to learn ratios of these co-occurrence probabilities. This helps to better discriminate the subtleties in term-term relevance and boosts performance on word analogy tasks.

This is how it works: Instead of extracting the embeddings from a neural network that is designed to perform a different task like predicting neighboring words (CBOW) or predicting the focus word (Skip-Gram), the embeddings are optimized directly, so that the dot product of two word vectors equals the log of the number of times the two words will occur near each other.  

For example, if the two words “cat” and “dog” occur in the context of each other, say 20 times within a 10-word window in the document corpus, then:

Vector(cat) · Vector(dog) = log(20)

This forces the model to encode the frequency distribution of words that occur near each other in a more global context.
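As a toy sketch of that idea (the vector dimension, learning rate, and co-occurrence count are invented, and the real GloVe objective adds bias terms and a weighting function), plain gradient descent can drive the dot product of two randomly initialized vectors toward the log co-occurrence count:

```python
import numpy as np

rng = np.random.default_rng(0)
count = 20                       # "cat" and "dog" co-occur 20 times
target = np.log(count)

# Two randomly initialized 5-dimensional word vectors.
v_cat = rng.normal(size=5)
v_dog = rng.normal(size=5)

# Minimize (v_cat . v_dog - log(count))^2 by gradient descent.
lr = 0.01
for _ in range(2000):
    err = v_cat @ v_dog - target
    grad_cat = 2 * err * v_dog
    grad_dog = 2 * err * v_cat
    v_cat -= lr * grad_cat
    v_dog -= lr * grad_dog

print(round(float(v_cat @ v_dog), 3))   # close to log(20), about 2.996
```

In the real model, this squared error is summed over every co-occurring word pair in the corpus, so each vector is pulled toward many targets at once.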

fastText  

fastText is another word embedding method that is an extension of the word2vec model. Instead of learning vectors for words directly, fastText represents each word as an n-gram of characters. So, for example, take the word “artificial” with n=3: the fastText representation of this word is <ar, art, rti, tif, ifi, fic, ici, cia, ial, al>, where the angular brackets indicate the beginning and end of the word.

This helps capture the meaning of shorter words and allows the embeddings to understand suffixes and prefixes. Once the word has been represented using character n-grams, a skip-gram model is trained to learn the embeddings. This model is considered to be a bag of words model with a sliding window over a word because no internal structure of the word is taken into account. As long as the characters are within this window, the order of the n-grams doesn’t matter.  
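The character n-gram extraction just described can be sketched for a single value of n (real fastText uses a range of n-gram lengths, typically 3 to 6, plus the whole word itself):

```python
def char_ngrams(word, n=3):
    """Character n-grams of a word, with '<' and '>' marking its boundaries."""
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

print(char_ngrams("artificial"))
# ['<ar', 'art', 'rti', 'tif', 'ifi', 'fic', 'ici', 'cia', 'ial', 'al>']
```

The word’s vector is then the sum of the vectors of its n-grams, which is why shared prefixes and suffixes end up sharing representation.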

fastText works well with rare words: even if a word wasn’t seen during training, it can be broken down into n-grams to get its embeddings.

Word2vec and GloVe both fail to provide any vector representation for words that are not in the model dictionary, so this is a huge advantage of the fastText method.

Conclusion 

We’ve now seen the different word vector methods that are out there. GloVe showed us how we can leverage global statistical information contained in a document, whereas fastText builds on the word2vec model but considers sub-words instead of whole words.

You might ask which one of the different models is best. Well, that depends on your data and the problem you’re trying to solve! 

If you want to boost your bot performance through data, don’t forget to read this article!  

Introduction of Security Development Lifecycle (SDL)

Security Development Lifecycle is one of the four Secure Software Pillars. By pillars, I mean the essential activities that ensure secure software.
SDL can be defined as the process for embedding security artifacts in the entire software cycle.

SDL activities should be mapped to a typical Software Development Lifecycle (SDLC), using either a waterfall or an agile method. The benefits of following SDL activities are endless, but two of the most important are:

  • Security by default
  • Buy-in and efficiency in both operations and business processes

The Software Development Lifecycle (SDLC) defines the different phases that a software product goes through from the beginning to the end of its life. Different SDLCs exist, and companies are free to define their own. The SDLC not only applies to software directly shipped to customers; it also applies to IT and to Software as a Service (SaaS) projects.

A typical SDLC has the following 7 phases, though these can be modified according to your team’s methodology:

  1. Concept
  2. Planning
  3. Design and development
  4. Testing
  5. Release
  6. Sustain
  7. Disposal

New development methodologies like Agile, Lean, CD, and others require rethinking existing SDLCs and making adaptations. For example, a team with an agile environment may have the following flow:

Once we determine the methodology, each phase in the SDLC should be mapped to the corresponding activities in the SDL, as follows:

A good implementation of SDL in the software development cycle may look like the following:

Now let’s dive into the preparation for SDL implementation.

SDL Discovery / Preparation

When? During the concept phase.
Who? Usually initiated by a sponsor but can be anyone in the team.

Goals
SDL Discovery should address the following questions:

1. What are the security objectives required by the software?
Think in CIA terms (Confidentiality, Integrity, Availability).
2. What are the regulations and policies that we need to follow?
3. What are the possible threats in our environment?

When working on SDL Discovery, focus on the following deliverables:

  • Security milestones throughout the SDLC
  • Security requirements gathered from customers, regulations, and standards
  • Required certifications
  • Risk assessments
  • Required trainings
  • Required third-party software
  • Key security resources (security team, SDL champions, etc.)
  • SDL resources (e.g. coding guidelines, testing tools, compliance checks, etc.)
  • Security metrics
  • Privacy Impact Assessment (PIA)

After the discovery and preparation has been done, we can start mapping SDL artifacts to our own SDLC.

SDL Artifacts 1. Security Baselines (Requirements)

When? Early in the cycle (e.g. planning)
Who?
Senior engineers and project managers

A security baseline is a list of requirements that every product must comply with. Examples of security requirements might include:

  • Only approved cryptography
  • Sanitize your inputs
  • Don’t use backdoors
  • Only use approved libraries
  • Use multifactor authentication, etc.

Evaluating the product against the Security Baseline is vital. This is called a “Gap Analysis”: you compare and contrast the features in your product against the requirements of the baseline. An initial gap analysis at the beginning of the project and a final one prior to product shipment are necessary.

The goal of an initial gap analysis is to identify areas that are not compliant with the Security Baseline, missing features, and other requirements. Each gap is then analyzed and a decision is made: address it in the current release, defer it to the next release, or grant an exception. The gaps to be addressed in the current release are documented and tracked as work items such as Agile user stories, tickets, etc.

The Final Gap Analysis is performed before the product is released, after the work items from the initial gap analysis are finished. Companies can release the product based on its level of compliance with the baseline: some may only release a product that is 100%, 90%, or 80% compliant. The level of compliance required is determined by the project managers or by company standards.
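The release decision above boils down to simple arithmetic over the baseline. As a sketch (the requirement list, flags, and threshold here are hypothetical):

```javascript
// Hypothetical result of an initial gap analysis: each baseline
// requirement flagged with the product's current compliance status.
const baseline = [
  { requirement: 'Only approved cryptography', compliant: true },
  { requirement: 'Sanitized inputs',           compliant: true },
  { requirement: 'No backdoors',               compliant: true },
  { requirement: 'Only approved libraries',    compliant: false },
  { requirement: 'Multifactor authentication', compliant: true },
];

// Gaps become work items (user stories, tickets, ...) or exceptions.
const gaps = baseline.filter(item => !item.compliant);

// Compliance level checked against the company's release threshold.
const complianceLevel = 100 * (baseline.length - gaps.length) / baseline.length;
const releaseThreshold = 80;
const canRelease = complianceLevel >= releaseThreshold;

console.log(gaps.map(g => g.requirement)); // [ 'Only approved libraries' ]
console.log(complianceLevel, canRelease);  // 80 true
```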

2. Security Training & Awareness

When? Anytime, but preferably during the early stages
Who? Everyone

Security trainings aim to raise the security knowledge of developers, designers, architects, and quality assurance. On the other hand, security awareness sessions are targeted at everyone involved in the project. The goal is to encourage a security mindset in all stakeholders.

Security trainings target developers, designers, architects, and QA. They can cover aspects like the SDL, secure design principles, and common security issues, and can also address language-specific coding pitfalls such as secure Java coding or avoiding buffer overflows. They can also focus on specific security areas like web security or encryption.

Security awareness sessions target everyone involved in the project because software security is everyone’s responsibility. Sessions should not require a technical background and they may include topics like CIA (Confidentiality, Integrity, and Availability), understanding threats, risk impact and management, understanding SDL, and lessons learned from previous release experiences followed with examples.

These trainings and awareness programs can also help to fill the gap in education, as most security courses teach how security technologies work, but not how to develop secure software or applications.

3. Threat Modeling

When? During planning
Who? Senior engineers and project managers

Threat modeling aims to identify and manage threats early in the secure development lifecycle and plan for proper mitigations because the cost of remediating issues early on is much lower than later during the cycle. It also helps to validate the architecture with the development team and forces the team to look at the architecture from a security and privacy perspective.

Threat modeling consists of modeling the software components, data stores, trust boundaries, data flows, and external dependencies because the chances of identifying threats are greater if the software is modeled accurately.

There are 4 steps to threat modeling:

 1. Prepare: What are we building? Create architecture diagrams.

 2. Analyze: What can go wrong? For example, map a STRIDE model.

 3. Determine mitigations: What can we do about it? Describe mitigations.

 4. Validate: Did we do it right? Execute a retrospective activity.

Threat modeling can also have different approaches, for example:

  • Asset-centric: protect specific things
  • Attacker-centric: exploit weaknesses
  • Design-centric: focus on system design

Threat modeling should be done early during the planning phase and updated later during development.
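For the “Analyze” step, the STRIDE model pairs six threat categories with the security property each one violates. A minimal sketch of such a mapping (the example threats and component names are illustrative, not from the article):

```javascript
// STRIDE: six threat categories, each violating one security property.
const stride = [
  { threat: 'Spoofing',               violates: 'Authentication',  example: 'forged login token' },
  { threat: 'Tampering',              violates: 'Integrity',       example: 'message modified in transit' },
  { threat: 'Repudiation',            violates: 'Non-repudiation', example: 'action with no audit trail' },
  { threat: 'Information disclosure', violates: 'Confidentiality', example: 'secrets leaked in an error page' },
  { threat: 'Denial of service',      violates: 'Availability',    example: 'request flood on an endpoint' },
  { threat: 'Elevation of privilege', violates: 'Authorization',   example: 'regular user reaches admin API' },
];

// "What can go wrong?": walk every element of the architecture diagram
// against every STRIDE category and record candidate threats.
const components = ['web client', 'API gateway', 'database'];
for (const component of components) {
  for (const { threat, example } of stride) {
    console.log(`${component}: ${threat} (e.g. ${example})`);
  }
}
```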

4. Third-Party Software Tracking

When? As early as possible
Who? Senior technical member / technical lead

Leveraging third-party software can also introduce additional risks that need to be mitigated. Third-party software can be either open source or commercial. The project team needs to keep an inventory of all third-party components used in the project, as this helps ensure that every component in the project has the latest security patches.

The inventory should be done as early as possible in the development cycle. Monitoring vulnerabilities has to be done by a senior technical member who can understand the vulnerabilities and a technical lead who will coordinate the deployment of security patches.

Third-party components can be tracked in a spreadsheet or database with fields such as version, patch level, vulnerability fixes, project dependencies, etc. Some tools will alert you when any of your components need an upgrade. Some companies are also required to keep track of third-party software for financial and legal licensing reasons.
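Such an inventory can start as small as an array of records. A sketch, in which the component names, versions, and advisory identifiers are entirely made up:

```javascript
// Hypothetical third-party inventory; in practice this would live in a
// spreadsheet or database alongside licensing information.
const inventory = [
  { name: 'examplecrypto', version: '2.3.0', latestPatched: '2.3.4', openAdvisories: ['EX-0001'] },
  { name: 'exampleparser', version: '1.8.1', latestPatched: '1.8.1', openAdvisories: [] },
];

// A component needs attention when it lags the latest patched release
// or carries an unresolved advisory; this list drives the patching work
// coordinated by the technical lead.
const needsAttention = inventory.filter(
  c => c.version !== c.latestPatched || c.openAdvisories.length > 0
);

console.log(needsAttention.map(c => c.name)); // [ 'examplecrypto' ]
```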

5. Secure Build

a) Security Design Review

When? During design and implementation
Who? Development team
Goal: Make sure each feature is designed with security in mind

Security design applies to individual features created by the development team. These features can correspond to respective user stories. The security design review can be done at the same time as the functional feature design, for example:

Functional feature design review: Will this feature work as expected?
Security design review: How can this feature be abused? “Think like a hacker.”

A common strategy for Agile teams is to define an “evil user story” corresponding to each user story for security-relevant features.

b) Peer Code Review

When? During implementation
Who? Development team
Goal: Catch security issues before the code is merged

Security checks should be part of the code review. These can be done by training developers on common coding security pitfalls and providing a secure coding checklist.

Examples of security code review checks may include the following:

  • Are important security events logged?
  • If authentication is required, can it be bypassed?
  • Does the software dump user passwords in logs?
  • Is user input properly validated?
  • Does the software properly release resources like memory, system handles, and ports that aren’t needed anymore?

Functional code review can also help to discover security issues, for example improper logging or memory and process synchronization problems.
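As an illustration of the “is user input properly validated?” check, this is the kind of before/after a reviewer might ask for (the function name and pattern are hypothetical):

```javascript
// Before (would be flagged in review): the raw value flows straight
// into a query string.
//   db.query('SELECT * FROM users WHERE id = ' + rawId);

// After: validate against an allow-list pattern and reject bad input.
function parseUserId(rawId) {
  if (!/^\d{1,10}$/.test(rawId)) {
    throw new Error('invalid user id'); // reject instead of passing through
  }
  return Number(rawId);
}

console.log(parseUserId('42')); // 42
try {
  parseUserId('1 OR 1=1');      // attempted SQL injection
} catch (err) {
  console.log(err.message);     // invalid user id
}
```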

6. Security Testing and QA

In the software development cycle, code review and QA mostly focus on functionality, making sure that the software does what it is supposed to do. Security testing, however, means testing from a hacker’s perspective.

Security testing can be done manually or automatically through the use of tools, or a combination of both. Before security testing, threat modeling, design reviews, and user stories must be done.

Security testing includes static and dynamic analysis, vulnerability scanning, fuzzing, third-party penetration testing, fault injections, and others.

In this article, we’ll focus on the following:

a) Static Analysis

When? During development and testing
Who? Developers, QA, or security expert

Static analysis (SA) is the analysis of software performed without executing it. It can be performed on source code or on object code. SA is a white-box testing technique that has full access to the possible behaviors of the software.

Some of the advantages include:

  • Can identify exact locations of weaknesses
  • Allows quick fixes
  • Reduces overall project cost by finding weaknesses earlier
  • Can be done by a QA engineer who understands the code.

Static analysis tools can help to look at the possible program flows to detect potential error conditions at runtime. It also helps to check if the code performs as designed and checks for coding best practices. Modern static analysis tools are getting better at finding security vulnerabilities in source code.

Examples of security issues found by static analysis may include:

  • Potential buffer overflow
  • SQL injections
  • Cross-site scripting (XSS)
  • Use of unsafe functions
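To make the XSS case concrete: a static analyzer tracks untrusted data from sources (user input) to sinks (HTML output) and flags flows with no encoding step in between. A minimal HTML-encoding function that breaks such a flow (illustrative, not a complete sanitizer):

```javascript
// Encode the five HTML metacharacters so untrusted text renders as
// text instead of markup when it reaches an HTML sink.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')   // must run first so entities aren't double-escaped
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(escapeHtml('<script>alert(1)</script>'));
// -> &lt;script&gt;alert(1)&lt;/script&gt;
```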

b) Dynamic Analysis

When? During development and testing
Who? Developers, QA, or security expert

Dynamic analysis (DA) is the analysis of software performed by executing the program on a real or virtual processor in real time. The goal is to find security errors in the program while it’s running. It can also help to ensure that the functionality of the program works as designed.

Some of the advantages of DA testing are:

  • Can find infrastructure, configuration, and patch errors that SA tools might miss
  • Identifies vulnerabilities in a runtime environment
  • Validates the findings from SA tests
  • Can be done on any application written in any language

c) Vulnerability Scanning

When? During development and testing
Who? Developers, QA, or security expert

Vulnerability scanning tools are used against running software. These tools usually work by injecting “malicious” inputs and observing how the software handles them. They are mostly used to scan applications with a web interface, including REST and SOAP APIs. Vulnerabilities like XSS, SQL injections, and poor session management can be found during these scans. Some tools allow you to modify payloads in order to get a more accurate result.

d) Fuzzing

When? Towards the end of development and during the testing phase
Who? Developers, QA, or security expert

Fuzzing is a black-box testing technique that involves feeding invalid, random, or unexpected data into a program. The goal is to test how well protocols and file formats are handled. In simpler terms, fuzzing finds security flaws or bugs through malformed data injection.

Whenever a protocol or file format is involved in an application, there’s a chance of injecting harmful data. It’s very important to execute these tests because the incorrect handling of unexpected protocols and file formats can lead to Denial of Service attacks or even remote code execution.

Fuzzing can be done manually or by a tool. When it is done manually, the tester must tweak the protocol and file format, and observe the outcome. When fuzzing is done by a tool, the protocol or format definition needs to be provided to the tool in advance. Some strategies for fuzzing through the help of a tool include:

  • Let the fuzzing tool generate random input
  • Have the tester define the fuzzing space
  • Have the tool try all possible tweaking

One of the biggest advantages of fuzzing is that the test design is very simple; its random approach allows you to find bugs that would often be missed by humans. Bugs found during fuzzing are usually severe and exploitable by an attacker.
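The “let the tool generate random input” strategy fits in a few lines. A toy sketch, using JSON.parse as a stand-in target for any format handler (in real fuzzing you look for unexpected crashes and hangs, not ordinary parse errors):

```javascript
// Generate a random string of raw character codes as malformed input.
function randomInput(maxLength) {
  const length = 1 + Math.floor(Math.random() * maxLength);
  let s = '';
  for (let i = 0; i < length; i++) {
    s += String.fromCharCode(Math.floor(Math.random() * 256));
  }
  return s;
}

// Feed the target random inputs and record every one it rejects.
function fuzz(target, iterations) {
  const findings = [];
  for (let i = 0; i < iterations; i++) {
    const input = randomInput(32);
    try {
      target(input);
    } catch (err) {
      findings.push({ input, error: err.message }); // triage these later
    }
  }
  return findings;
}

const findings = fuzz(JSON.parse, 1000);
console.log(`${findings.length} of 1000 inputs were rejected by the target`);
```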

e) Third-Party Penetration Testing (Optional)

When? At the end of the development and testing phase

Who? Third-party certified pen tester

Penetration testing is a security analysis in which the tester simulates the actions of a hacker. The goal is to discover coding errors, system configuration faults, and to uncover exploitable vulnerabilities and validate security features.

This test is performed by a skilled security professional who is independent of the development team; independence is a requirement for pen testing. This is why a third-party external tester is recommended, as this provides an “outside” perspective. This is especially critical for high-risk projects.

The key to a successful pen test is to always test the system from a hacker’s perspective!

7. Data Disposal and Retention

When? At the end of a product’s or feature’s life
Who? Developers and a security/data expert

There are many motives for data disposal: an old product that is no longer needed, data with no further use, a defective product, or no legal right to retain the data. Getting rid of data is a huge challenge when there is concern about the confidentiality of the information, and making sure that the data has been properly deleted or overwritten is a very important part of the SDLC. One of the most used practices is “crypto-shredding”, the act of deleting or overwriting encryption keys.

Some recent laws, regulations, and policies like GDPR have very specific requirements for data retention and disposal. Companies need to create their own data destruction policies with the help of an attorney to make sure they are compliant with state laws and with the regulations applicable to their type of data under national and international data protection laws.

Conclusion

SDL is a process that standardizes security best practices and helps you integrate security checks throughout your development cycle, which naturally results in building security in by default.

If you are a developer, designer, or architect, I encourage you to read more about the other three essential pillars to secure software:

  1. QA
  2. Secure design principles
  3. Security technologies

If you are a project manager thinking of establishing an SDL in your team, I suggest starting with the “Discovery” part mentioned earlier, and definitely start creating a security mindset in your team through security trainings and security awareness programs.

Learn more about security practices commonly implemented when working with a chatbot in this article!

Remember: better safe than sorry!

Happy software building!


Conversational interfaces are gaining in popularity, especially for transacting with seemingly opaque backend systems. For example, we can deploy a chatbot to walk a customer through a troubleshooting process and create a ticket if they require further assistance; all without the customer having to know the ticket creation process. This allows for a more intuitive experience for your customer, increasing customer satisfaction, while also improving efficiency by freeing employees from handling the classification and routing of tickets. 

Conversational AI can handle this out of the box, but what if your users want to be able to interact with your front end application? For example, it might be nice for your user to navigate to a certain page within your website without having to find the exact link. Or allow your user to apply a complex filter to a list of products without having to click around menus. Though our webchat can be embedded in any website, it does not have the contextual awareness of the UI necessary for these sorts of interactions. To demonstrate how we can accomplish this contextual awareness, we will create a simple map application with an embedded bot that has the ability to move the map and zoom in or out:

This simple interaction is enabled by defining an “unusual” way to send messages to the chat UI, allowing the map application to intercept the message, parse it and move the map, all before the final message is displayed to the user. 

Resources

Create your first chatbot on SAP Conversational AI

Learn how to self-host the Webchat

Google Maps APIs

Map mover bot

Frontend Application source code

Final Map Application

Pre-requisites
  • First, you will need to be comfortable building a simple bot using SAP Conversational AI. If you are unfamiliar with the platform, head over to this tutorial to learn how to build a hilarious joke bot.
  • You will also need to be able to host our Webchat component somewhere that you control. Our GitHub has all the information to get you started. 
  • It is also expected that you are at least familiar with JavaScript and front end web development basics. 
Tutorial

To start, we will need to define the interface for our bot to be able to send commands and messages to our front end. This will be accomplished by sending a stringified JSON object in the place of the normal message string we generally send to the user. Our modified webchat will be able to understand this JSON object, take the defined action and finally display a “message” to the user. 

This can be accomplished fairly simply: we will send an object with an action of either “move” or “zoom”, and then a message that we can show to the user. Note that we will pass this JSON object as a string, and it is our assumption that the application will parse it and display only the value of “message” to the user.

{
  "action": "move" || "zoom",
  "message": "This will be displayed to the user"
}

If our action type is “move”, the map will need coordinates to navigate to, so we will include a location’s coordinates in our JSON object. Alternatively, if our action is “zoom”, we will need to know whether to zoom in or out. For this, we will include a direction represented as a 1 for in or a -1 for out. With this defined, here are some examples of what our JSON objects could look like:

{
  "action": "move",
  "location": {
    "lat": -8.3405389,
    "lng": 115.0919509
  },
  "message": "Going to Bali, Indonesia!"
}


{
  "action": "zoom",
  "direction": 1,
  "message": "Zooming in!"
}

With that in mind, we can start building our bot. As always, we will start with defining the intents our user could say. In this case we have zoom and move-map. 

Note that we will need to tag the sentences in @zoom with the entity ‘direction’, but ‘location’ is automatically recognized in @move-map. Luckily for us, the location gold entity comes with the longitude and latitude out of the box, so we will be able to easily pass these to the front end. To get the 1 or -1 that represents our zooming direction, we will leverage custom enrichments. We will add the keys “name” and “direction” with the following values. Then map the correct entity values to their respective key values. 

Now that we can recognize our move-map intent, we just need a skill that is triggered if our intent is matched: 

And requires a location: 

And finally sends a message back telling the front-end where to go: 

The zoom skill can be implemented in much the same way; I encourage you to try it for yourself! 

Now that our bot is done, we will need to host the webchat locally so that we can modify it to understand our “unusual” responses. If you are unfamiliar with the self-hosting process, check out this GitHub repository.

Finally, it’s time to build our web application. We will start by including a container div for our map, the script we will write to handle the map interactions (map_controls.js), the necessary script as described in this tutorial from Google, and the script tag pointing to our locally hosted bot. It should look something like this: 
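A minimal index.html sketch for this page (the API key, channel id, token, and local webchat URL are placeholders for your own values):

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Map Mover Bot</title>
    <style>#map { height: 100vh; }</style>
  </head>
  <body>
    <!-- container for the map -->
    <div id="map"></div>
    <!-- our map interaction handlers -->
    <script src="map_controls.js"></script>
    <!-- Google Maps API, calling initMap once loaded (use your own key) -->
    <script src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&callback=initMap" async defer></script>
    <!-- locally hosted webchat build; attributes depend on your self-hosted setup -->
    <script src="http://localhost:8080/webchat.js" channelId="YOUR_CHANNEL_ID" token="YOUR_TOKEN" id="cai-webchat"></script>
  </body>
</html>
```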

To complete our simple application, we will implement our map initialization and zoom/move methods: 

function initMap () {
  window.map = new google.maps.Map(document.getElementById('map'), {
    // OPTIONS
    center: {lat: -34.397, lng: 150.644},
    zoom: 8,
    zoomControl: false,
    streetViewControl: false,
    mapTypeControl: false,
    rotateControl: false,
    scaleControl: false,
    fullscreenControl: false
  });
}
     
const zoom = (direction) => {
  window.map.setZoom(window.map.getZoom() + direction);
}

const setCenter = (lat, lng) => {
  window.map.setCenter({lat: lat, lng: lng});
}

Once we have the chatbot successfully added to our application, we will be able to ask it to move around or zoom in/out, but it will still just display that ugly JSON string to us. To solve that, we will add the following code to Webchat/src/containers/Chat/index.js. This will search the window object for a function called applicationParse and call it if it exists.

const getApplicationParse = messages => {
  return new Promise(resolve => {
    if (!window.webchatMethods || !window.webchatMethods.applicationParse) {
      return resolve()
    }
    // so that we process the message in all cases
    setTimeout(resolve, MAX_GET_MEMORY_TIME)
    try {
      const applicationParseResponse = window.webchatMethods.applicationParse(messages)
      if (!applicationParseResponse) {
        return resolve()
      }
      if (applicationParseResponse.then && typeof applicationParseResponse.then === 'function') {
        // the function returned a Promise
        applicationParseResponse
          .then(applicationParse => resolve())
          .catch(err => {
            console.error(FAILED_TO_GET_MEMORY)
            console.error(err)
            resolve()
          })
      } else {
        resolve()
      }
    } catch (err) {
      console.error(FAILED_TO_GET_MEMORY)
      console.error(err)
      resolve()
    }
  })
}

Now, we will call getApplicationParse before the call to setState in componentWillReceiveProps. This will ensure that our application has a chance to parse the response from the bot before anything is sent back to the user. 

componentWillReceiveProps(nextProps) {
  const { messages, show } = nextProps
  
  if (messages !== this.state.messages) {
    getApplicationParse(messages)
    this.setState({ messages }, () => {
      const { getLastMessage } = this.props
      if (getLastMessage) {
        getLastMessage(messages[messages.length - 1])
      }
    })
  }
  if (show && show !== this.props.show && !this.props.sendMessagePromise && !this._isPolling) {
    this.doMessagesPolling()
  }
}

Finally, we need to implement applicationParse and expose it on the window object from map_controls.js. Here, we loop through the messages; if one is a valid action command from the bot, we take the action and return only the message text to the user.

window.webchatMethods = {
  applicationParse: (messages) => {
    messages.forEach(message => {
      try {
        const obj = JSON.parse(message.attachment.content);
        if (obj !== undefined &&
            obj.action === 'zoom' &&
            typeof obj.direction === 'number') {
          message.attachment.content = obj.message.toString();
          zoom(obj.direction);
        } else if (obj !== undefined &&
                   obj.action === 'move' &&
                   typeof obj.location.lat === 'number' &&
                   typeof obj.location.lng === 'number') {
          message.attachment.content = obj.message.toString();
          setCenter(obj.location.lat, obj.location.lng);
        }
      } catch (err) {
        // Invalid JSON - treat it as a regular message and pass it back to the UI as is
      }
    })
    return messages;
  }
}

You can now ask your bot to move or zoom the map and it will send a message that the application can interpret and act upon. With this tool in your tool belt, you can now integrate a chatbot into any of your web applications and provide users with a fun and intuitive way to interact with the UI! 

Ever wondered how to connect your SAP CAI chatbot to Amazon Alexa? Follow this tutorial!


Today is International Women’s Day, and here at SAP Conversational AI, we’ve always followed a simple commitment: opening our doors to everybody. Following the example of Jasmine Anteunis, one of the four founders of our company, we have had a strong female task force since the beginning, and we’re eager to grow it!

What’s more interesting than letting some of my fellow women colleagues speak? Product owner, designer, project manager… they give us fresh ideas daily thanks to their deep involvement and dedication. Let’s hear their inspiring stories!

Justine, Communication and Marketing Manager – Chief of Staff

I’m Justine, and today I’m in charge of marketing, communication, and am also assisting in the efficient running of our global organization.

My job is to create a shining showcase for our product, inside and outside SAP, and make sure the people who need our technology can easily reach out and get it. No great product can become a reference in its industry if it is not advertised correctly! In that, I love being able to take technical specs and turn them into a story. A story that will be compelling to people and make them want to work with what we’re building.

Understanding how to run a tech unit inside a major corporation is also a very valuable experience, where I learn about people, strategy, and vision every single day. When you put all that together, I’m glad to be a part of, and contribute to, something bigger than me.

Clémentine, UX/UI Designer

I have been a UX/UI Designer at SAP Conversational AI since September 2018. A big challenge in my daily job, especially in AI, is to make a platform, an application, or a website intuitive through good design.

When I started working at SAP Conversational AI, I saw myself as an atom in the middle of big organisms that I didn’t really understand. I worked A LOT every day to understand them better, and I continue to learn about my work and my field, which motivates and challenges me. None of it works without empathy (especially for design) and communication (especially with the developers).

That’s why having an open mind about new things helps us come up with new approaches for future interactions and design. Working with everyone every day is the best learning experience I have ever had!

Jasmine, Product Owner

I’m Product Owner at SAP Conversational AI and previously co-founder of Recast.AI. What is really important to me every day is not linked to my day-to-day job, but more to the impact we can have as an innovative product in SAP and in the world. I guess what really drives me is being able to create something new every day, even if it’s a small thing, and always keeping the creation process going.

A second thing that I really like in this adventure is our team and the people around me every day. At the end of the day you can work on any product; it’s better if it’s with people you like and if everyone shares the same vision and values. When you feel this kind of “symbiosis” you can achieve anything.

Christine, Product Owner

I am a product owner in the digital field, more precisely in AI. This role is exciting as the market is constantly evolving, and this is particularly true with new technologies.

I have the chance and opportunity to collect and challenge product features in order to define a solution that meets customer and market expectations and keeps improving the user experience. Creativity and innovation are needed to bring added value that makes a real difference.
I also define the stories and the priorities to maximize the product value.

Moreover, I really think that collaboration and communication are key to achieving common goals. I enjoy working closely with different teams like Engineering, Data Science, Design, and Marketing, but also with peers from different countries, as this brings diversity and a rich cultural environment.

Juliette, Executive Assistant Intern

I’m the executive assistant of the office. I help with organizing trips and meetings, the recruiting process, and the budget. I like this job because I enjoy organizing events and making sure that all tasks have been completed to deliver a good experience in the end. It is also a way to see if I’d like to take event planning further for my first job.

Doing my internship at SAP also means belonging to a global community. This is a great setting to work in an international environment, and it offers opportunities to work abroad later on.

Aurélie, Project Manager

My job is to get client feedback and help drive the conception of new features. This constant link between our clients and the product is one of the most interesting parts of my job, as it gives us good insights into what we have to improve to keep our platform great.
I also really appreciate training our internal teams at SAP so everybody can understand and apply the same methodology to build high-quality bots. For me, the good atmosphere at our offices and the great relationships between all the team members also play an important part in our daily job, as they’re a major motivation factor!

Of course, this is just an overview of the larger panel of talents we have in our team and, more globally, at SAP. If you wish to take part in this tech adventure, feel free to email me!
