
Today, I’m excited to announce that the Practical DynamoDB Programming in Java demo from AWS re:Invent 2015 is available on GitHub. This project demonstrates how Amazon DynamoDB can be used together with AWS Lambda to perform real-time and batch analysis of domain-specific data. Real-time analysis is performed by using DynamoDB Streams as an event source of a Lambda function. Batch processing uses the parallel scan operation in DynamoDB to distribute work to Lambda.
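To give a flavor of the batch half: a parallel scan divides a table into totalSegments disjoint segments, and each worker (in the demo, a Lambda invocation) scans only its own segment. Here is a minimal sketch of one worker’s call; the table name and segment numbers are placeholders, not the demo’s actual values.

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.dynamodbv2.model.ScanRequest;
import com.amazonaws.services.dynamodbv2.model.ScanResult;

public class ParallelScanWorker {

    // Scans one segment of the table; run totalSegments of these in parallel.
    public static ScanResult scanSegment(AmazonDynamoDBClient client,
                                         String tableName, int segment, int totalSegments) {
        ScanRequest request = new ScanRequest()
                .withTableName(tableName)
                .withSegment(segment)              // this worker's slice of the table
                .withTotalSegments(totalSegments); // total number of parallel workers
        return client.scan(request);
    }
}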

To download the project from GitHub, use:
git clone https://github.com/awslabs/reinvent2015-practicaldynamodb.git

Follow the instructions in the README file and play with the demo code. You’ll see how simple it is to use the AWS Toolkit for Eclipse to upload AWS Lambda functions and invoke them with the AWS SDK for Java.
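As a taste of the latter, invoking a deployed function from Java takes only a few lines with the SDK. The function name and payload below are placeholders:

import java.nio.charset.StandardCharsets;

import com.amazonaws.services.lambda.AWSLambdaClient;
import com.amazonaws.services.lambda.model.InvokeRequest;
import com.amazonaws.services.lambda.model.InvokeResult;

public class InvokeDemo {
    public static void main(String[] args) {
        AWSLambdaClient lambda = new AWSLambdaClient(); // default credential chain

        InvokeRequest request = new InvokeRequest()
                .withFunctionName("MyFunction")      // placeholder function name
                .withPayload("{\"key\":\"value\"}"); // JSON event payload

        InvokeResult result = lambda.invoke(request);
        System.out.println(StandardCharsets.UTF_8.decode(result.getPayload()));
    }
}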


There’s a great post on the AWS Security Blog today. Greg Rubin explains How to Protect the Integrity of Your Encrypted Data by Using AWS Key Management Service and EncryptionContext.

Greg is a security expert and a developer on AWS Key Management Service. He’s helped us out with encryption and security changes in the AWS SDK for Java many times, and he also wrote the AWS DynamoDB Encryption Client project on GitHub.

Go check out Greg’s post on the AWS Security Blog to learn more about keeping your data secure by properly using EncryptionContext in the KMS API.
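To sketch the core idea (an illustration, not code from Greg’s post): an encryption context is a map of non-secret key-value pairs that is cryptographically bound to the ciphertext at encrypt time and must be supplied again, unchanged, at decrypt time. With the AWS SDK for Java, that looks roughly like this; the key alias and context values are placeholders:

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.kms.AWSKMSClient;
import com.amazonaws.services.kms.model.DecryptRequest;
import com.amazonaws.services.kms.model.EncryptRequest;

public class EncryptionContextDemo {
    public static void main(String[] args) {
        AWSKMSClient kms = new AWSKMSClient(); // default credential chain

        Map<String, String> context = new HashMap<String, String>();
        context.put("recordId", "1234"); // non-secret, but bound to the ciphertext

        ByteBuffer ciphertext = kms.encrypt(new EncryptRequest()
                .withKeyId("alias/my-key") // placeholder key alias
                .withPlaintext(ByteBuffer.wrap("secret".getBytes(StandardCharsets.UTF_8)))
                .withEncryptionContext(context))
                .getCiphertextBlob();

        // Decryption fails with InvalidCiphertextException unless the
        // exact same encryption context is supplied.
        ByteBuffer plaintext = kms.decrypt(new DecryptRequest()
                .withCiphertextBlob(ciphertext)
                .withEncryptionContext(context))
                .getPlaintext();
    }
}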


In parts 1 and 2 of this blog post, we saw how easy it is to get started on Java development for AWS Lambda, and use a microservices architecture to quickly iterate on an AuthenticateUser call that integrates with Amazon Cognito. We set up the AWS Toolkit for Eclipse, used the wizard to create a Java Lambda function, implemented logic for checking a user name/password combination against an Amazon DynamoDB table, and then used the Amazon Cognito Identity Broker to get an OpenID token.

In part 3 of this blog post, we will test our function locally as a JUnit test. Upon successful testing, we will then use the AWS Toolkit for Eclipse to configure and upload the function to Lambda, all from within the development environment. Finally, we will test the function from within the development environment on Lambda.

Expand the tst folder in Package Explorer:

You will see the AWS Toolkit for Eclipse has already created some stubs for you to write your own unit test. Double-click AuthenticateUserTest.java. The test must be implemented in the testAuthenticateUser function, which creates a dummy Lambda context and a custom event that serve as the test data for your Java Lambda function. Open the TestContext.java file to see the stub that represents a Lambda context. The Context object in Java lets you interact with the AWS Lambda execution environment and access useful information about it through the context parameter; for example, you can use it to determine the CloudWatch log stream associated with the function. For a full list of available context properties in the programming model for Java, see the documentation.
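For instance, a handler can read the log stream name, the request ID, and the time remaining before timeout straight off the context parameter (a minimal illustration, not part of the generated stubs):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class ContextDemo implements RequestHandler<Object, String> {
    @Override
    public String handleRequest(Object input, Context context) {
        context.getLogger().log("request id: " + context.getAwsRequestId());
        String logStream = context.getLogStreamName();      // CloudWatch log stream
        int remaining = context.getRemainingTimeInMillis(); // ms before timeout
        return logStream + " (" + remaining + " ms remaining)";
    }
}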

As we mentioned in part 1 of our blog post, our custom object is passed as a LinkedHashMap into our Java Lambda function. Create a test input in the createInput function for a valid input (meaning there is a row in your DynamoDB table User that matches your input).

@BeforeClass
    public static void createInput() throws IOException {
        // Sample input: must match a row in your DynamoDB User table.
        input = new LinkedHashMap<String, String>();
        input.put("userName", "Dhruv");
        input.put("passwordHash", "8743b52063cd84097a65d1633f5c74f5");
    }

Fill in any appropriate values for building the context object and then implement the testAuthenticateUser function as follows:

@Test
    public void testAuthenticateUser() {
        AuthenticateUser handler = new AuthenticateUser();
        Context ctx = createContext();

        AuthenticateUserResponse output = (AuthenticateUserResponse) handler.handleRequest(input, ctx);

        // Validate the handler's response.
        if (output.getStatus().equalsIgnoreCase("true")) {
            System.out.println("AuthenticateUser JUnit Test Passed");
        } else {
            Assert.fail("AuthenticateUser JUnit Test Failed");
        }
    }

Save the file. To run the unit test, right-click AuthenticateUserTest, choose Run As, and then choose JUnit Test. If everything goes well, your test should pass. If not, run the test in Debug mode to see if there are any exceptions. The most common causes for test failures are not setting the right region for your DynamoDB table or not setting the AWS credentials in the AWS Toolkit for Eclipse configuration.

Now that we have successfully tested this function, let’s upload it to Lambda. The AWS Toolkit for Eclipse makes this process very simple. To start the wizard, right-click your Eclipse project, choose Amazon Web Services, and then choose Upload function to AWS Lambda.

 

You will now see a page that will allow you to configure your Lambda function. Give your Lambda function the name AuthenticateUser and make sure you choose the region in which you created your DynamoDB table and Amazon Cognito identity pool. Choose Next.

On this page, you will configure your Lambda function. Provide a description for your service. The function handler should already have been selected for you.

You will need to create an IAM role for Lambda execution. Choose Create and type AuthenticateUser-Lambda-Execution-Role. We will need to update this role later so your Lambda function has appropriate access to your DynamoDB table and Amazon Cognito identity pool. You will also need to create or choose an S3 bucket where you will upload your function code. In Advanced Settings, for Memory (MB), type 256. For Timeout (s), type 30. Choose Finish.

Your Lambda function should be created. When the upload is successful, go to the AWS Management Console and navigate to the Lambda dashboard to see your newly created function. Before we execute the function, we need to provide the permissions to the Lambda execution role. Navigate to IAM, choose Roles, and then choose the AuthenticateUser-Lambda-Execution-Role. Make sure the following managed policies are attached.

We need to provide two inline policies for the DynamoDB table and Amazon Cognito. Click Create Role Policy, and then add the following policy document. This will give Lambda access to your identity pool.
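The policy document appeared as a screenshot in the original post. A minimal sketch of what it might look like follows; in practice, scope Resource down to the ARN of your identity pool:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "cognito-identity:GetOpenIdTokenForDeveloperIdentity",
      "Resource": "*"
    }
  ]
}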

The policy document that gives access to the DynamoDB table should look like the following:
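This screenshot is also unavailable, so again as a sketch, assuming the User table is in us-east-1 and substituting your own account ID:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "dynamodb:GetItem",
      "Resource": "arn:aws:dynamodb:us-east-1:YOUR_ACCOUNT_ID:table/User"
    }
  ]
}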

Finally, go back to Eclipse, right-click your project name, choose Amazon Web Services, and then choose Run Function on AWS Lambda. Provide your custom JSON input in the format we provided in part 1 of the blog and click Invoke. You should see the result of your Lambda function execution in the Eclipse console:



In part 1 of this blog post, we showed you how to leverage the AWS Toolkit for Eclipse to quickly develop Java functions for AWS Lambda. We then set up a skeleton project and the structure to handle custom objects sent to your Java function.

In part 2 of this blog post, we will implement the handleRequest function that will handle the logic of interacting with Amazon DynamoDB and then generate an OpenID token by using the Amazon Cognito API.

We will now implement the handleRequest function within the AuthenticateUser class. Our final handleRequest function looks like the following:

@Override
public AuthenticateUserResponse handleRequest(Object input, Context context) {

    AuthenticateUserResponse authenticateUserResponse = new AuthenticateUserResponse();
    @SuppressWarnings("unchecked")
    LinkedHashMap<String, String> inputHashMap = (LinkedHashMap<String, String>) input;
    User user = authenticateUser(inputHashMap);
    if (user != null) {
        authenticateUserResponse.setUserId(user.getUserId());
        authenticateUserResponse.setStatus("true");
        authenticateUserResponse.setOpenIdToken(user.getOpenIdToken());
    } else {
        authenticateUserResponse.setUserId(null);
        authenticateUserResponse.setStatus("false");
        authenticateUserResponse.setOpenIdToken(null);
    }

    return authenticateUserResponse;
}

We will need to implement the authenticateUser function for this Java Lambda function to compile properly. Implement the function as shown here:

public User authenticateUser(LinkedHashMap<String, String> input) {
    User user = null;

    String userName = input.get("userName");
    String passwordHash = input.get("passwordHash");

    try {
        AmazonDynamoDBClient client = new AmazonDynamoDBClient();
        client.setRegion(Region.getRegion(Regions.US_EAST_1));
        DynamoDBMapper mapper = new DynamoDBMapper(client);

        // Look up the user by hash key in the User table.
        user = mapper.load(User.class, userName);

        if (user != null) {
            if (user.getPasswordHash().equalsIgnoreCase(passwordHash)) {
                String openIdToken = getOpenIdToken(user.getUserId());
                user.setOpenIdToken(openIdToken);
                return user;
            }
        }
    } catch (Exception e) {
        System.out.println(e.toString());
    }
    return user;
}

In this function, we use the DynamoDB Mapper to check if a row with the provided username attribute exists in the table User. Make sure you set the region in your code. If a row with the username exists, the code makes a simple check against the provided password hash value. If the passwords match, we authenticate this user and then follow the developer authentication flow to get an OpenID token from the Amazon Cognito Identity Broker. The token will be passed to the client as an attribute in the AuthenticateUserResponse object. For more information about the authentication flow for developer authenticated identities, see the Amazon Cognito documentation here. For this Java Lambda function, we will be using the enhanced authentication flow.

Before we can get an OpenID token, we need to create an identity pool in Amazon Cognito and then register our developer authentication provider with this identity pool. When you create the identity pool, you can keep the default roles provided by the console. In the Authentication Providers field, in the Custom section, type login.yourname.services.

After the pool is created, implement the getOpenIdToken function as shown:

private String getOpenIdToken(Integer userId) {

    AmazonCognitoIdentityClient client = new AmazonCognitoIdentityClient();
    GetOpenIdTokenForDeveloperIdentityRequest tokenRequest =
        new GetOpenIdTokenForDeveloperIdentityRequest();
    tokenRequest.setIdentityPoolId("us-east-1:6dbccdfd-9444-4d4c-9e1b-5d1139cbe863");

    // Map the developer provider name to this user's unique ID.
    HashMap<String, String> logins = new HashMap<String, String>();
    logins.put("login.dhruv.services", userId.toString());

    tokenRequest.setLogins(logins);
    tokenRequest.setTokenDuration(10001L); // token validity in seconds

    GetOpenIdTokenForDeveloperIdentityResult result =
        client.getOpenIdTokenForDeveloperIdentity(tokenRequest);

    return result.getToken();
}

This code calls the GetOpenIdTokenForDeveloperIdentity function in the Amazon Cognito API. You need to pass in your Amazon Cognito identity pool ID along with the unique identity provider string you entered in the Custom field earlier. You also have to provide a unique identifier for the user so Amazon Cognito can map that to its Cognito ID. This unique ID is usually the user ID you use internally, but it can be any other unique attribute that allows both your authentication back end and Amazon Cognito to identify a user.

In part 3 of this blog, we will test the Java Lambda function locally using JUnit. Then we will upload and test the function on Lambda.


Most of us are aware of the support for a developer authentication backend in Amazon Cognito and how one can use a custom backend service to authenticate and authorize users to access AWS resources using temporary credentials. In this blog post, we will create a quick serverless backend authentication API written in Java and deployed on Lambda. You can mirror this workflow in your current backend authentication service, or you can use this service as it is.

The blog will cover the following topics in a four-part series.

  1. Part 1: How to get started with Java development on Lambda using the AWS Toolkit for Eclipse.
  2. Part 1: How to use Java Lambda functions for custom events.
  3. Part 2: How to create a simple authentication microservice that checks users against an Amazon DynamoDB table.
  4. Part 2: How to integrate with the Amazon Cognito Identity Broker to get an OpenID token.
  5. Part 3: How to locally test your Java Lambda functions through JUnit before uploading to Lambda.
  6. Part 4: How to hook up your Lambda function to Amazon API Gateway.

The Lambda workflow support in the latest version of the AWS Toolkit for Eclipse makes it really simple to create Java functions for Lambda. If you haven’t already downloaded Eclipse, you can get it here. We assume you have an AWS account with at least one IAM user with an Administrator role (that is, the user should belong to an IAM group with administrative permissions).

Important: We strongly recommend you do not use your root account credentials to create this microservice.

After you have downloaded Eclipse and set up your AWS account and IAM user, install the AWS Toolkit for Eclipse. When prompted, restart Eclipse.

We will now create an AWS Lambda project. In the Eclipse toolbar, click the yellow AWS icon, and choose New AWS Lambda Java Project.

On the wizard page, for Project name, type AuthenticateUser. For Package Name, type aws.java.lambda.demo (or any package name you want). For Class Name, type AuthenticateUser. For Input Type, choose Custom Object. If you would like to try other predefined events that Lambda supports in Java, such as an S3Event or DynamoDBEvent, see these samples in our documentation here. For Output Type, choose a custom object, which we will define in the code later. The output type should be a Java class, not a primitive type such as an int or float.

Choose Finish.

In Package Explorer, you will now see a Readme file in the project structure. You can close the Readme file for now. The structure below shows the main class, AuthenticateUser, which is your Lambda handler class. It’s where you will be implementing the handleRequest function. Later on, we will implement the unit tests in JUnit by modifying the AuthenticateUserTest class to allow local testing of your Lambda function before uploading.

Make sure you have added the AWS SDK for Java Library in your build path for the project. Before we implement the handleRequest function, let’s create a Data class for the User object that will hold our user data stored in a DynamoDB table called User. You will also need to create a DynamoDB table called User with some test data in it. To create a DynamoDB table, follow the tutorial here. We will choose the username attribute as the hash key. We do not need to create any indexes for this table. Create a new User class in the package aws.java.lambda.demo and then copy and paste the following code:

Note: For this exercise, we will create all our resources in the us-east-1 region. This region, along with the ap-northeast-1 (Tokyo) and eu-west-1 (Ireland) regions, supports Amazon Cognito, AWS Lambda, and API Gateway.

package aws.java.lambda.demo;

import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAttribute;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;

@DynamoDBTable(tableName="User")
public class User {
	
    private String userName;
    private Integer userId;
    private String passwordHash;
    private String openIdToken;
	
    @DynamoDBHashKey(attributeName="username")
    public String getUserName() { return userName; }
    public void setUserName(String userName) { this.userName = userName; }
	
    @DynamoDBAttribute(attributeName="userid")
    public Integer getUserId() { return userId; }
    public void setUserId(Integer userId) { this.userId = userId; }
	
    @DynamoDBAttribute(attributeName="passwordhash")
    public String getPasswordHash() { return passwordHash; }
    public void setPasswordHash(String passwordHash) { this.passwordHash = passwordHash; }
	
    @DynamoDBAttribute(attributeName="openidtoken")
    public String getOpenIdToken() { return openIdToken; }
    public void setOpenIdToken(String openIdToken) { this.openIdToken = openIdToken; }
	
    public User(String userName, Integer userId, String passwordHash, String openIdToken) {
        this.userName = userName;
        this.userId = userId;
        this.passwordHash = passwordHash;
        this.openIdToken = openIdToken;
    }
	
    public User(){ }	
}

You will see we are leveraging annotations so we can use the advanced features provided by the DynamoDB Mapper. The AWS SDK for Java provides DynamoDBMapper, a high-level interface that automates the process of getting your objects into Amazon DynamoDB and back out again. For more information about annotating your Java classes for use in DynamoDB, see the developer guide here.
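As a quick illustration of the mapper against the annotated class above (assuming the User table and region used throughout this post):

import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;

public class UserMapperDemo {
    public static void main(String[] args) {
        AmazonDynamoDBClient client = new AmazonDynamoDBClient();
        client.setRegion(Region.getRegion(Regions.US_EAST_1));
        DynamoDBMapper mapper = new DynamoDBMapper(client);

        // save/load use the @DynamoDBHashKey and @DynamoDBAttribute mappings.
        mapper.save(new User("Dhruv", 123, "8743b52063cd84097a65d1633f5c74f5", null));
        User loaded = mapper.load(User.class, "Dhruv");
        System.out.println(loaded.getUserId());
    }
}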

Our Java function will ingest a custom object from API Gateway and, after execution, return a custom response object. Our custom input is a JSON POST body that will be invoked through an API Gateway endpoint. A sample request will look like the following:

        {
          "userName": "Dhruv",
          "passwordHash": "8743b52063cd84097a65d1633f5c74f5"
        } 

The data is passed in as a LinkedHashMap of key-value pairs to your handleRequest function. As you will see later, you will need to cast your input properly to extract the values of the POST body. Your custom response object looks like the following:

        {
          "userId": "123",
          "status": "true",
          "openIdToken": "eyJraWQiOiJ1cy1lYXN0LTExIiwidHlwIjoiSldTIiwiYWxnIjoiUl"	 
        }

We need to create an implementation of the Response class in our AuthenticateUser class as follows.

public static class AuthenticateUserResponse{
		
    protected Integer userId;
    protected String openIdToken;
    protected String status;
		
    public Integer getUserId() { return userId; }
    public void setUserId(Integer userId) { this.userId = userId; }

    public String getOpenIdToken() { return openIdToken; }
    public void setOpenIdToken(String openIdToken) { this.openIdToken = openIdToken; }
		
    public String getStatus() {	return status; }
    public void setStatus(String status) { this.status = status; }			
}

Now that we have the structure in place to handle a custom event, in part 2 of this blog post, we will finish the implementation of the handleRequest function that will do user validation and interact with Amazon Cognito.


Of the many changes introduced in Java 8, the Stream API is perhaps one of the most exciting.  Java 8 streams, which are unrelated to Java’s I/O streams, allow you to perform a series of mutations and transformations against a collection of items.  You can think of a stream as a form of data pipeline, where a collection of data is passed as input and a series of defined steps are performed against that data.  Streams can produce a result in the form of a new collection or directly perform actions against each element of the stream.  Streams can be created from multiple sources: directly specified values, a collection, or a Spliterator (via a utility method).
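For instance, each of those sources in one place:

import java.util.Arrays;
import java.util.List;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

public class StreamSources {
    public static void main(String[] args) {
        Stream<String> fromValues = Stream.of("a", "b", "c");       // directly specified values

        List<String> letters = Arrays.asList("a", "b", "c");
        Stream<String> fromCollection = letters.stream();           // from a collection

        Stream<String> fromSpliterator =
                StreamSupport.stream(letters.spliterator(), false); // from a Spliterator
    }
}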

The following are some very simple examples of how streams can be used with the Amazon S3 Java client.

Creating a Stream from results
Iterable<S3ObjectSummary> objectSummaries = S3Objects.inBucket(s3Client, "myBucket");
Stream<S3ObjectSummary> objectStream = StreamSupport.stream(objectSummaries.spliterator(), false);

We first make a call through the S3 client to grab a paginated Iterable of result object summaries from the objects in a bucket.  This transparently handles iteration across multiple pages by making additional calls to the service, as needed, to retrieve subsequent result pages.  Now it’s time to create a stream to process our results.  Although Java 8 does not provide a direct way to generate a stream from an Iterable, it does provide a utility class (StreamSupport) with methods to help you do this.  We’re able to use this to pass in a Spliterator (also new to Java 8, it helps facilitate parallelized iteration) grabbed off the Iterable to generate a stream.

Finding the total size of all objects in a bucket

This is a simple example of how using Java 8 streams can reduce the verbosity of an operation.  It’s not uncommon to want to compute the total size of all objects in a bucket and historically one might iterate through the results and keep a running tally of cumulative sizes of each object.

long totalBucketSize = 0L;
for (S3ObjectSummary summary : objectSummaries) {
    totalBucketSize += summary.getSize();
}

Using a stream gives you a neat alternative that does the same thing.

long totalBucketSize = objectStream.mapToLong(obj -> obj.getSize()).sum();

Calling mapToLong on our stream produces a LongStream generated from the results of applying a function (in this case, one that simply grabs the object size from each summary) which allows us to perform subsequent stream operations.  Calling sum (which is a stream terminal reduction operation) returns the sum of all elements of the stream.
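A method reference makes this even terser. (Keep in mind that a stream can be consumed only once, so each example in this post assumes a freshly created stream.)

long totalBucketSize = objectStream.mapToLong(S3ObjectSummary::getSize).sum();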

Delete all bucket objects older than a specified date

You might regularly run a job that goes through the objects in a bucket and deletes those that were last modified before some date.  Again, streams allow us to perform this operation concisely.  Here we’ll say that we want to delete any objects that were last modified over 30 days ago.

Calendar c = Calendar.getInstance();
c.add(Calendar.DAY_OF_MONTH, -30);
Date cutoffDate = c.getTime();

objectStream.filter(obj -> obj.getLastModified().before(cutoffDate))
    .forEach(obj -> s3Client.deleteObject("myBucket", obj.getKey()));

First we generate our target cutoff date.  In this example we call filter on our stream to filter the stream elements down to those matching our condition.  At that point, calling forEach (which itself is a stream terminal operation) executes a function against the remaining stream elements.  In this case it makes a call to the S3 client to delete each object.

This could also be easily modified to simply return a List of these old objects to pass around.

List<S3ObjectSummary> oldObjects = objectStream
			.filter(obj -> obj.getLastModified().before(cutoffDate))
			.collect(Collectors.toList());
Conclusion

I hope these simple examples give you some ideas for using streams in your application.  Are you using Java 8 streams with the AWS SDK for Java?  Let us know in the comments!


AWS re:Invent 2015 kicks off next week! We couldn’t be more excited to hear how you’re using our SDKs and tools to build your applications.

You can find several sessions covering the AWS SDKs and tools in the Developer Tools track. We’ll also be working at the AWS booth in the Expo Hall, so be sure to come by and see us.

I’ll be co-presenting DEV303: Practical DynamoDB Programming with Java on Thursday morning. Come by to see how we use the AWS SDK for Java along with AWS Lambda and the AWS Toolkit for Eclipse to efficiently work with data in DynamoDB.

As always, the re:Invent 2015 technical sessions will be available to watch online, for free, after the event. Here are a few sessions on the AWS SDK for Java from years past:

Will you be at AWS re:Invent this year? What are you most excited about? Let us know in the comments below.


Every Maven project specifies its required dependencies in the pom.xml file. The AWS SDK for Java provides a Maven module for every service it supports. To use the Java client for a service, all you need to do is specify the group ID, artifact ID, and Maven module version in the dependencies section of pom.xml.

The AWS SDK for Java introduces a new Maven bill of materials (BOM) module, aws-java-sdk-bom, to manage all your dependencies on the SDK and to make sure Maven picks the compatible versions when depending on multiple SDK modules. You may wonder why this BOM module is required when the dependencies are specified in the pom.xml file. Let me take you through an example. Here is the dependencies section from a pom.xml file:

  <dependencies>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-ec2</artifactId>
      <version>1.10.2</version>
    </dependency>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-s3</artifactId>
      <version>1.10.5</version>
    </dependency>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-dynamodb</artifactId>
      <version>1.10.10</version>
    </dependency>
  </dependencies>

Here is Maven’s dependency resolution for the above pom.xml file:
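(The resolved dependency tree appeared as an image in the original post. You can reproduce it for your own project with the Maven dependency plugin:)

mvn dependency:tree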

As you see, the aws-java-sdk-ec2 module is pulling in an older version of aws-java-sdk-core. This intermixing of different versions of SDK modules can create unexpected issues. To ensure that Maven pulls in the correct version of the dependencies, import the aws-java-sdk-bom into your dependency management section and specify your project’s dependencies, as shown below.

  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk-bom</artifactId>
        <version>1.10.10</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>
  
  <dependencies>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-ec2</artifactId>
    </dependency>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-s3</artifactId>
    </dependency>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-dynamodb</artifactId>
    </dependency>
  </dependencies>

The Maven version for each dependency will be resolved to the version specified in the BOM. Notice that when you import a BOM, you must specify the type as pom and the scope as import.

Here is Maven’s dependency resolution for the above pom.xml file:

As you can see, all the AWS SDK for Java modules are resolved to a single Maven version. And upgrading to a newer version of the AWS SDK for Java requires you to change only the version of aws-java-sdk-bom module being imported.

Have you been using modularized Maven modules in your project? Please leave your feedback in the comments.


Last year at re:Invent we told you that we were working on the AWS Toolkit for IntelliJ. Since then, the toolkit has been in active development on GitHub.

I’m happy to share that the AWS Toolkit for IntelliJ is now generally available!

The toolkit provides an integrated experience for developing serverless applications. For example, you can:

  • Create a new, ready-to-deploy serverless application in Java.
  • Locally test your code with step-through debugging in an execution environment similar to that of AWS Lambda.
  • Deploy your applications to the AWS Region of your choice.
  • Invoke your Lambda functions locally or remotely.
  • Use and customize sample payloads from different event sources such as Amazon S3, Amazon API Gateway, and Amazon SNS.

Installation

First, install the AWS Serverless Application Model (SAM) CLI. It provides a Lambda-like execution environment and enables you to step-through and debug your code. This toolkit also uses SAM CLI to build and create deployment packages for your applications. You can find installation instructions for your system here.

Next, install the AWS Toolkit for IntelliJ via the JetBrains plugins repository. In the Settings/Preferences dialog, click on Plugins, select Marketplace, search for “AWS Toolkit”, and click the Install button. Then restart the IDE for the changes to take effect.

Building a serverless application with IntelliJ

Now that the IDE is configured and ready, I create a new project, select AWS on the left, and then choose AWS Serverless Application.

In the next window, I choose a name for my project and finish the wizard.

I’m using Maven to manage the project, and because the Project Object Model (pom.xml) file is not in the root directory, I have to select it and right-click to add it as a Maven project.

Before I start deploying the application, I choose the AWS Region from the bottom-right menu. Let’s use Stockholm.

The default application is composed of a single Lambda function that you can call via HTTP using Amazon API Gateway. I open the code in the src/main/java/helloworld directory and change the message to be “Hello World from IntelliJ”.

The default application comes with unit tests that make it easy to build high-quality applications. Let’s update the assertion to make the test pass.

Running a function locally

Back to the function, I click the Lambda icon to the left of the class definition to see the option to run the function locally or start a local step-through debugging session.

Let’s run the function locally. The first time I run the function, I can edit the configuration to choose the AWS credentials I want to use, the Region (for AWS services used by the function), and the input event to provide. I select the API Gateway AWS Proxy to simulate an invocation by API Gateway. I can customize the HTTP request using the syntax described here. I can also pass environment variables to customize the behavior of the function.

I select Run, and two tabs appear:

  • The Build tab, using the SAM CLI to do the build.
  • The Run tab, where I can check the output of my function.

The local invocation of the function uses Docker containers to emulate the Lambda environment.

Debugging a function locally

I’m not really sure how the location, part of the output message, is computed by this application, so I add a breakpoint where the pageContents variable is given a value. I select the option to debug locally by clicking the gutter icon.

I can now use the IntelliJ debugger to get a better understanding of my function. I click Step Into to go in the getPageContents method. There I Step Over a few times to see how the location is taken from the public https://checkip.amazonaws.com website.

I finally resume the program execution to get a similar result as before.

Deploying a serverless application

Everything works as expected, so I am ready to go to production. I deploy the serverless application in the AWS Region of my choice. To do so, I select the template.yaml file in the root directory. This template uses AWS SAM to describe the deployment in terms of:

  • Infrastructure, in this case a Lambda function, API, permissions, and so on.
  • Code, because the Handler property of the function is specifying the source file and the method that is invoked by the Lambda platform.

Right-clicking the template.yaml file, I choose Deploy Serverless Application. AWS SAM uses AWS CloudFormation to create and update the required resources. I choose to create a new AWS CloudFormation stack, but you can use the same deployment option to update an existing stack. I create an S3 bucket to host the deployment packages that the build process creates. The new bucket is automatically created in the AWS Region I selected before. You can reuse the bucket for multiple deployments. The SAM CLI automatically creates unique names for each build.

I don’t have template parameters to pass here, but they can be used by SAM or AWS CloudFormation to customize the behavior of a template for different environments.

If your build process depends on the actual Lambda execution environment, you can choose to run it inside a container to provide the necessary emulation.

I choose Deploy, and after a few minutes, the AWS CloudFormation stack is created.
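(As an aside, the build-and-deploy cycle the toolkit drives can also be reproduced from a terminal with the SAM CLI; a rough sketch, since the exact flags vary by SAM CLI version:)

sam build
sam deploy --guided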

Running a function remotely

Now I can invoke the Lambda function remotely. In the AWS Explorer on the left, I find the function under Lambda, where all functions in the selected Region are listed, and under AWS CloudFormation, where all stacks that have a Lambda function are listed.

I right-click the Lambda function to run it remotely. You can also jump to the source code of the function from here. Again, I create a configuration similar to what I did for the local invocation: I choose the API Gateway AWS Proxy input event, then choose Run to get the output of my serverless application. I can also see the logs of the function here, including the duration, the billed duration, the memory size, and the memory actually used by the invocation.

Invoking the HTTP endpoint

To invoke the API via HTTP, I need to know the API endpoint. I can get it from the output of the AWS CloudFormation stack, for example, using the AWS CLI:

$ aws cloudformation describe-stacks --stack-name hello-world-from-IntelliJ --region eu-north-1

In the output, there is a section similar to this, with the API endpoint in the OutputValue:

{
  "Description": "API Gateway endpoint URL for Prod stage for Hello World function", 
  "OutputKey": "HelloWorldApi", 
  "OutputValue": "https://<API_ID>.execute-api.eu-north-1.amazonaws.com/Prod/hello/"
}

Now I can invoke the API using curl, for example:

$ curl -s https://<API_ID>.execute-api.eu-north-1.amazonaws.com/Prod/hello/
{
  "message": "Hello World from IntelliJ",
  "location": "x.x.x.x"
}

Available now

This toolkit is distributed under the open source Apache License, Version 2.0.

More information is available on the AWS Toolkit for IntelliJ product page.

There are lots of other features I didn’t have time to describe in this post. Just start using this toolkit to discover more. And let us know what you use it for!

