
AWS Identity and Access Management (IAM) now makes it easier for you to control access to your AWS resources by using the AWS organization of IAM principals (users and roles). For some services, you grant permissions using resource-based policies to specify the accounts and principals that can access the resource and what actions they can perform on it. Now, you can use a new condition key, aws:PrincipalOrgID, in these policies to require all principals accessing the resource to be from an account in the organization. For example, let’s say you have an Amazon S3 bucket policy and you want to restrict access to only principals from AWS accounts inside of your organization. To accomplish this, you can define the aws:PrincipalOrgID condition and set the value to your organization ID in the bucket policy. Your organization ID is what sets the access control on the S3 bucket. Additionally, when you use this condition, policy permissions apply when you add new accounts to this organization without requiring an update to the policy.

In this post, I walk through the details of the new condition and show you how to restrict access to only principals in your organization using S3.

Condition concepts

Before I introduce the new condition, let’s review the condition element of an IAM policy. A condition is an optional IAM policy element you can use to specify special circumstances under which the policy grants or denies permission. A condition includes a condition key, operator, and value for the condition. There are two types of conditions: service-specific conditions and global conditions. Service-specific conditions are specific to certain actions in an AWS service. For example, the condition key ec2:InstanceType supports specific EC2 actions. Global conditions support all actions across all AWS services.

Now that I’ve reviewed the condition element in an IAM policy, let me introduce the new condition.

The aws:PrincipalOrgID condition key

You can use this condition key to apply a filter to the Principal element of a resource-based policy. You can use any string operator, such as StringLike, with this condition and specify your AWS organization ID as its value.

Condition key: aws:PrincipalOrgID
Description: Validates if the principal accessing the resource belongs to an account in your organization.
Operator(s): All String operators
Value: Any AWS organization ID
Example: Restrict access to only principals from my organization

Let’s consider an example where I want to give specific IAM principals in my organization direct access to my S3 bucket, 2018-Financial-Data, that contains sensitive financial information. I have two accounts in my AWS organization, and only some IAM users from these accounts need access to this financial report.

To grant this access, I author a resource-based policy for my S3 bucket as shown below. In this policy, I list the individuals who I want to grant access. For the sake of this example, let’s say that while doing so, I accidentally specify an incorrect account ID. This means a user named Steve, who is not in an account in my organization, can now access my financial report. To require the principal account to be in my organization, I add a condition to my policy using the global condition key aws:PrincipalOrgID. This condition requires that only principals from accounts in my organization can access the S3 bucket. This means that although Steve is one of the principals in the policy, he can’t access the financial report because the account that he is a member of doesn’t belong to my organization.



{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPutObject",
            "Effect": "Allow",
            "Principal": ["arn:aws:iam::094697565664:user/Casey",
				"arn:aws:iam::094697565664:user/David",
				"arn:aws:iam::094697565664:user/Tom",
				"arn:aws:iam::094697565664:user/Michael",
				"arn:aws:iam::094697565664:user/Brenda",
				"arn:aws:iam::094697565664:user/Lisa",
				"arn:aws:iam::094697565664:user/Norman",
				"arn:aws:iam::094697565646:user/Steve",
				"arn:aws:iam::087695765465:user/Douglas",
				"arn:aws:iam::087695765465:user/Michelle"],
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::2018-Financial-Data/*",
            "Condition": {"ForAnyValue:StringLike": 
                             {"aws:PrincipalOrgID": [ "o-xxxxxxxxxx" ]}
                         }
        }
    ]
}

In the policy above, I specify the principals that I grant access to using the principal element of the statement. Next, I add s3:GetObject as the action and 2018-Financial-Data/* as the resource to grant read access to my S3 bucket. Finally, I add the new condition key aws:PrincipalOrgID and specify my organization ID in the condition element of the statement to make sure only the principals from the accounts in my organization can access this bucket.
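
If you manage bucket policies programmatically, the following boto3 sketch shows one way to attach a policy that uses this condition key. It is a minimal illustration rather than the policy from this post: the bucket name, principal, and organization ID are placeholders, and StringEquals is used here as one of the supported string operators.

# Minimal sketch: attach a bucket policy that grants access only to principals
# whose account belongs to the specified AWS organization. All values are placeholders.
import json
import boto3

s3 = boto3.client('s3')

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowGetObjectWithinOrg",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:user/ExampleUser"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-financial-data/*",
        # Access is granted only when the caller's account is in this organization.
        "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-xxxxxxxxxx"}}
    }]
}

s3.put_bucket_policy(Bucket='example-financial-data', Policy=json.dumps(policy))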

Summary

You can now use the aws:PrincipalOrgID condition key in your resource-based policies to more easily restrict access to IAM principals from accounts in your AWS organization. For more information about this global condition key and policy examples using aws:PrincipalOrgID, read the IAM documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.


The EU’s General Data Protection Regulation (GDPR) describes data processor and data controller roles, and some customers and AWS Partner Network (APN) partners are asking how this affects the long-established AWS Shared Responsibility Model. I wanted to take some time to help folks understand shared responsibilities for us and for our customers in context of the GDPR.

How does the AWS Shared Responsibility Model change under GDPR? The short answer – it doesn’t. AWS is responsible for securing the underlying infrastructure that supports the cloud and the services provided, while customers and APN partners, acting either as data controllers or data processors, are responsible for any personal data they put in the cloud. The shared responsibility model illustrates the various responsibilities of AWS and our customers and APN partners, and the same separation of responsibility applies under the GDPR.

AWS responsibilities as a data processor

The GDPR does introduce specific regulation and responsibilities regarding data controllers and processors. When any AWS customer uses our services to process personal data, the controller is usually the AWS customer (and sometimes it is the AWS customer’s customer). However, in all of these cases, AWS is always the data processor in relation to this activity. This is because the customer is directing the processing of data through its interaction with the AWS service controls, and AWS is only executing customer directions. As a data processor, AWS is responsible for protecting the global infrastructure that runs all of our services. Controllers using AWS maintain control over data hosted on this infrastructure, including the security configuration controls for handling end-user content and personal data. Protecting this infrastructure is our number one priority, and we invest heavily in third-party auditors to test our security controls and make any issues they find available to our customer base through AWS Artifact. Our ISO 27018 report is a good example, as it tests security controls that focus on protection of personal data in particular.

AWS has an increased responsibility for our managed services. Examples of managed services include Amazon DynamoDB, Amazon RDS, Amazon Redshift, Amazon Elastic MapReduce, and Amazon WorkSpaces. These services provide the scalability and flexibility of cloud-based resources with less operational overhead because we handle basic security tasks like guest operating system (OS) and database patching, firewall configuration, and disaster recovery. For most managed services, you only configure logical access controls and protect account credentials, while maintaining control and responsibility of any personal data.

Customer and APN partner responsibilities as data controllers — and how AWS Services can help

Our customers can act as data controllers or data processors within their AWS environment. As a data controller, the services you use may determine how you configure those services to help meet your GDPR compliance needs. For example, AWS Services that are classified as Infrastructure as a Service (IaaS), such as Amazon EC2, Amazon VPC, and Amazon S3, are under your control and require you to perform all routine security configuration and management that would be necessary no matter where the servers were located. With Amazon EC2 instances, you are responsible for managing: guest OS (including updates and security patches), application software or utilities installed on the instances, and the configuration of the AWS-provided firewall (called a security group).

To help you realize data protection by design principles under the GDPR when using our infrastructure, we recommend you protect AWS account credentials and set up individual user accounts with AWS Identity and Access Management (IAM) so that each user is only given the permissions necessary to fulfill their job duties. We also recommend using multi-factor authentication (MFA) with each account, requiring the use of SSL/TLS to communicate with AWS resources, setting up API/user activity logging with AWS CloudTrail, and using AWS encryption solutions, along with all default security controls within AWS Services. You can also use advanced managed security services, such as Amazon Macie, which assists in discovering and securing personal data stored in Amazon S3.
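
As one concrete illustration of these recommendations, the boto3 sketch below turns on multi-region API activity logging with AWS CloudTrail. The trail and bucket names are placeholders, and it assumes the target S3 bucket already has a bucket policy that allows CloudTrail to write to it.

# Minimal sketch: record API/user activity in all regions with CloudTrail.
# Names are placeholders; the S3 bucket must already permit CloudTrail writes.
import boto3

cloudtrail = boto3.client('cloudtrail')
cloudtrail.create_trail(
    Name='gdpr-activity-trail',
    S3BucketName='example-cloudtrail-logs-bucket',
    IsMultiRegionTrail=True
)
cloudtrail.start_logging(Name='gdpr-activity-trail')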

For more information, you can download the AWS Security Best Practices whitepaper or visit the AWS Security Resources or GDPR Center webpages. In addition to our solutions and services, AWS APN partners can provide hundreds of tools and features to help you meet your security objectives, ranging from network security and configuration management to access control and data encryption.


Since our last System and Organization Control (SOC) audit, our service and compliance teams have been working to increase the number of AWS services in scope, prioritized based on customer requests. Today, we’re happy to report that 11 services are newly SOC compliant, which is a 21 percent increase in the last six months.

With the addition of the following 11 new services, you can now select from a total of 62 SOC-compliant services. To see the full list, go to our Services in Scope by Compliance Program page:

• Amazon Athena
• Amazon QuickSight
• Amazon WorkDocs
• AWS Batch
• AWS CodeBuild
• AWS Config
• AWS OpsWorks Stacks
• AWS Snowball
• AWS Snowball Edge
• AWS Snowmobile
• AWS X-Ray

Our latest SOC 1, 2, and 3 reports covering the period from October 1, 2017 to March 31, 2018 are now available. The SOC 1 and 2 reports are available on-demand through AWS Artifact by logging into the AWS Management Console. The SOC 3 report can be downloaded here.

Finally, prospective customers can read our SOC 1 and 2 reports by reaching out to AWS Compliance.

Want more AWS Security news? Follow us on Twitter.


We have a new resource available to help you meet a requirement for physically separated infrastructure using logical separation in the AWS Cloud. Our latest guide, Logical Separation: An Evaluation of the U.S. Department of Defense Cloud Security Requirements for Sensitive Workloads, outlines how AWS meets the U.S. Department of Defense’s (DoD) stringent physical separation requirement by pioneering a three-pronged logical separation approach that leverages virtualization, encryption, and deploying compute to dedicated hardware.

This guide will help you understand logical separation in the cloud and demonstrates its advantages over a traditional physical separation model. Embracing this approach can help organizations confidently meet or exceed security requirements found in traditional on-premises environments, while also providing increased security control and flexibility.

Logical Separation is the second guide in the AWS Government Handbook Series, which examines cybersecurity policy initiatives and identifies best practices.

If you have questions or want to learn more, contact your account executive or AWS Support.


If you store sensitive or confidential data in Amazon DynamoDB, you might want to encrypt that data as close as possible to its origin so your data is protected throughout its lifecycle.

You can use the DynamoDB Encryption Client to protect your table data before you send it to DynamoDB. Encrypting your sensitive data in transit and at rest helps ensure that your plaintext data isn’t available to any third party, including AWS.

You don’t need to be a cryptography expert to use the DynamoDB Encryption Client. The encryption and signing elements are designed to work with your existing DynamoDB applications. After you create and configure the required components, the DynamoDB Encryption Client transparently encrypts and signs your table items when you call PutItem and verifies and decrypts them when you call GetItem.

You can create your own custom components, or use the basic implementations that are included in the library. We’ve made sure that the classes that we provide implement strong and secure cryptography.

You can use the DynamoDB Encryption Client with AWS Key Management Service (AWS KMS) or AWS CloudHSM, but the library doesn’t require AWS or any AWS service.

The DynamoDB Encryption Client is now available in Python, as well as Java. All supported language implementations are interoperable. For example, you can encrypt table data with the Python library and decrypt it with the Java library.

The DynamoDB Encryption Client is an open-source project. We hope that you will join us in developing the libraries and writing great documentation.

How it works

The DynamoDB Encryption Client processes one table item at a time. First, it encrypts the values (but not the names) of attributes that you specify. Then, it calculates a signature over the attributes that you specify, so you can detect unauthorized changes to the item as a whole, including adding or deleting attributes, or substituting one encrypted value for another.

However, attribute names, and the names and values in the primary key (the partition key and sort key, if one is provided) must remain in plaintext to make the item discoverable. They’re included in the signature by default.

Important: Do not put any sensitive data in the table name, attribute names, the names and values of the primary key attributes, or any attribute values that you tell the client not to encrypt.

How to use it

I’ll demonstrate how to use the DynamoDB Encryption Client in Python with a simple example. I’ll encrypt and sign one table item, and then add it to an existing table. This example uses a test item with arbitrary data, but you can use a similar procedure to protect a table item that contains highly sensitive data, such as a customer’s personal information.

You can see the complete example in the examples directory of the aws-dynamodb-encryption-python repository.

Step 1: Create a table

I’ll start by creating a DynamoDB table resource that represents an existing table. If you use the code, be sure to supply a valid table name.

# Create a DynamoDB table
table = boto3.resource('dynamodb').Table(table_name)
Step 2: Create a cryptographic materials provider

Next, create an instance of a cryptographic materials provider (CMP). The CMP is the component that gathers the encryption and signing keys that are used to encrypt and sign your table items. The CMP also determines the encryption algorithms that are used and whether you create unique keys for every item or reuse them.

The DynamoDB Encryption Client includes several CMPs and you can create your own. And, if you’re in doubt, we help you to choose a CMP that fits your application and its security requirements.

In this example, I’ll use the Direct KMS Provider, which gets its cryptographic material from the AWS Key Management Service (AWS KMS). The encryption and signing keys that you use are protected by a customer master key in your AWS account that never leaves AWS KMS unencrypted.

To create a Direct KMS Provider, you specify an AWS KMS customer master key. Be sure to replace the fictitious customer master key ID (the value of aws-cmk-id) in this example with a valid one.

# Create a Direct KMS provider. Pass in a valid KMS customer master key.
aws_cmk_id = '1234abcd-12ab-34cd-56ef-1234567890ab'
aws_kms_cmp = AwsKmsCryptographicMaterialsProvider(key_id=aws_cmk_id)
Step 3: Create an attribute actions object

An attribute actions object tells the DynamoDB Encryption Client which item attribute values to encrypt and which attributes to include in the signature. The options are: ENCRYPT_AND_SIGN, SIGN_ONLY, and DO_NOTHING.

This sample attribute action encrypts and signs all attribute values except for the value of the test attribute; that attribute is neither encrypted nor included in the signature.

# Tell the encrypted table to encrypt and sign all attributes except one.
actions = AttributeActions(
    default_action=CryptoAction.ENCRYPT_AND_SIGN,
    attribute_actions={
        'test': CryptoAction.DO_NOTHING
    }
)

If you’re using a helper class, such as the EncryptedTable class that I use in the next step, you can’t specify an attribute action for the primary key. The helper classes make sure that the primary key is signed, but never encrypted (SIGN_ONLY).

Step 4: Create an encrypted table

Now I can use the original table object, along with the materials provider and attribute actions, to create an encrypted table.

# Use these objects to create an encrypted table resource.
encrypted_table = EncryptedTable(
    table=table,
    materials_provider=aws_kms_cmp,
    attribute_actions=actions
)

In this example, I’m using the EncryptedTable helper class, which adds encryption features to the DynamoDB Table class in the AWS SDK for Python (Boto 3). The DynamoDB Encryption Client in Python also includes EncryptedClient and EncryptedResource helper classes.

The DynamoDB Encryption Client helper classes call the DescribeTable operation to find the primary key. The application that runs the code must have permission to call the operation.
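
The sketch below shows one way to grant that permission with boto3. The role name and table ARN are hypothetical values used only for illustration, not values from this post.

# Minimal sketch: allow the application's IAM role to call DescribeTable on the
# target table. Role name and table ARN are placeholders.
import json
import boto3

iam = boto3.client('iam')
iam.put_role_policy(
    RoleName='example-dynamodb-app-role',
    PolicyName='AllowDescribeTable',
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "dynamodb:DescribeTable",
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/example-table"
        }]
    })
)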

We’re done configuring the client. Now, we can encrypt, sign, verify, and decrypt table items.

Step 5: Add an item to the table

Let’s add an item to the DynamoDB table.

from boto3.dynamodb.types import Binary  # wrapper type for binary attribute values

plaintext_item = {
    'partition_key': 'key1',
    'sort_key': 'key2',
    'example': 'data',
    'numbers': 99,
    'binary': Binary(b'\x00\x01\x02'),
    'test': 'test-value'
}

When we call the PutItem operation, the item is transparently encrypted and signed, except for the primary key, which is signed, but not encrypted, and the test attribute, which is ignored.

encrypted_table.put_item(Item=plaintext_item)

And, when we call the GetItem operation, the item is transparently verified and decrypted.

item_key = {'partition_key': 'key1', 'sort_key': 'key2'}  # the item's primary key attributes
decrypted_item = encrypted_table.get_item(Key=item_key)['Item']

To view the encrypted item, call the GetItem operation on the original table object, instead of the encrypted_table object. It gets the item from the DynamoDB table without verifying and decrypting it.

encrypted_item = table.get_item(Key=item_key)['Item']

Here’s an excerpt of the output that displays the encrypted item:
 

Figure 1: Output that displays the encrypted item

Client-side or server-side encryption?

The DynamoDB Encryption Client is designed for client-side encryption, where you encrypt your data before you send it to DynamoDB.

But, you have other options. DynamoDB supports encryption at rest, a server-side encryption option that transparently encrypts the data in your table whenever DynamoDB saves the table to disk. You can even use both the DynamoDB Encryption Client and encryption at rest together. The encrypted and signed items that the client generates are standard table items that have binary data in their attribute values. Your choice depends on the sensitivity of your data and the security requirements of your application.
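
As a sketch of the server-side option, the boto3 call below requests encryption at rest when creating a table. The table name and key schema are placeholders; this setting is independent of the client-side encryption shown earlier, and the two can be combined.

# Minimal sketch: create a DynamoDB table with server-side encryption at rest.
import boto3

dynamodb = boto3.client('dynamodb')
dynamodb.create_table(
    TableName='example-encrypted-at-rest-table',
    AttributeDefinitions=[{'AttributeName': 'partition_key', 'AttributeType': 'S'}],
    KeySchema=[{'AttributeName': 'partition_key', 'KeyType': 'HASH'}],
    ProvisionedThroughput={'ReadCapacityUnits': 5, 'WriteCapacityUnits': 5},
    SSESpecification={'Enabled': True}  # server-side encryption at rest
)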

Although the Java and Python versions of the DynamoDB Encryption Client are fully compatible, the DynamoDB Encryption Client isn’t compatible with other client-side encryption libraries, such as the AWS Encryption SDK or the S3 Encryption Client. You can’t encrypt data with one library and decrypt it with another. For data that you store in DynamoDB, we recommend the DynamoDB Encryption Client.

Encryption is crucial

Using tools like the DynamoDB Encryption Client helps you to protect your table data and comply with the security requirements for your application. We hope that you use the client and join us in developing it on GitHub.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Key Management Service forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.


To help protect their assets, many security-conscious enterprises require their system administrators to go through a “bastion” (or “jump”) host to gain administrative access to backend systems in protected or sensitive network segments.

A bastion host is a special-purpose instance that hosts a minimal number of administrative applications, such as RDP for Windows or Putty for Linux-based distributions. All other unnecessary services are removed. The host is typically placed in a segregated network (or “DMZ”), and is often protected with multi-factor authentication (MFA) and monitored with auditing tools. And most enterprises require that the access trail to the bastion host be auditable.

In this post, I demonstrate the use of Amazon AppStream 2.0 as a hardened and auto-scaled bastion host solution, and show how it could reduce the attack surface by stripping away the underlying OS and exposing only the necessary tools to system administrators that need access to protected network segments.

Prerequisites
  • A Virtual Private Cloud (VPC) with a dedicated subnet for AppStream 2.0.
  • An existing Active Directory (AD) domain. This may be on premises, on Amazon EC2 for Windows, or AWS Directory Service for Microsoft Active Directory used as a user directory.
  • Active Directory Federation Services (ADFS).
  • A Linux or Windows instance for which AppStream 2.0 will be acting as a bastion host.
Solution overview

Amazon AppStream 2.0 is a fully managed application streaming service that provides users instant access to their desktop applications from anywhere by using an HTML5-compatible desktop browser. When a user requests access to an application, AppStream 2.0 uses a base image to deploy a streaming instance and destroys the instance after the user closes their session. This ensures the same consistent experience during each logon.

You can use AppStream 2.0 as a bastion solution to enable your system administrators to manage their environment without giving them a full bastion host. Because AppStream 2.0 freshly builds instances each time a user requests access, a compromised instance will only last for the duration of a user session. As soon as the user closes their session and the Disconnect Timeout period is reached, AppStream 2.0 terminates the instance and, with it, you’ve reduced your risks of compromised instances.

You will also potentially reduce your costs because AppStream 2.0 has built-in auto-scaling to increase and decrease capacity based on user demand. It allows you to take advantage of the pay-as-you-go model, where you only pay for what you use.

High-level AppStream 2.0 architecture

The diagram below depicts a high-level AppStream 2.0 architecture used as a bastion host for servers in another VPC.

There are three VPCs shown: AppStream 2.0 VPC, Bastion host VPC, and application VPC. The AppStream 2.0 VPC is an AWS-owned VPC where the AppStream 2.0 maintains its infrastructure. Customers are not responsible for this VPC and have no access to it. AppStream 2.0 builds each streaming instance with two Elastic Network Interfaces (ENI); one in the AppStream 2.0 VPC and one in the VPC where you choose to deploy your AppStream 2.0 instances. The third VPC is the application VPC where you would typically keep your backend servers.

The diagram also depicts the end-user process to access the AppStream 2.0 environment, which works as follows:

  1. Using an HTML5 desktop browser, the user logs on to a Single Sign-On URL. This authenticates the user against the corporate directory using SAML 2.0 federation, with optional MFA.
  2. After successful authentication, the user will see a list of provisioned applications.
  3. The user can launch applications, such as RDP and Putty, which are visible only within the browser, with the underlying OS hidden. The user is then able to connect to the backend systems over the ports that were opened through security groups. The user logs off, and AppStream 2.0 destroys the instance used for the session.

 

Figure 1: Architecture diagram

Step-by-step instructions

This walk-through assumes you have created the following resources as prerequisites.

  • A single VPC with a /23 CIDR range and two private subnets in two AZs.

    Note: “private” subnet refers to a subnet that has no internet gateway (IGW) attached.

    • Bastion Subnets — used for the AppStream 2.0 instances that will be hosting the bastion applications.
    • Apps Subnets — used for the servers for which the AppStream 2.0 instances will be acting as a bastion host.
       

      Figure 2: Screen shot of bastion and apps subnets

  • A peering connection to a VPC where the corporate Active Directory resides and with updated routing tables. This is only necessary if your AD resides in a different VPC.
  • Two EC2 instances with private IP addresses in the app subnet.
Phase 1: Create the DHCP Options Set

For the AppStream 2.0 instances to be able to join the corporate domain, they need to have their DNS entries point to the corporate domain controller(s). To accomplish this, you need to create a DHCP Options Set and assign it to the VPC:

  1. Sign in to the AWS console, and then select VPC Dashboard > DHCP Option Sets > Create DHCP options set.
  2. Give the DHCP Options Set a name, enter the domain name and DNS server(s) of your corporate domain controller(s), and then select Yes, Create.
  3. Select your VPC Dashboard > your VPC > Actions > Edit DHCP Options Set.
  4. Select the DHCP Options Set created in the previous step, and then select Save.
     

    Figure 3: The “Edit DHCP Options Set” dialog
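
If you prefer to script this phase, here is a minimal boto3 sketch of the same configuration. The domain name, DNS server addresses, and VPC ID are placeholders for your corporate values.

# Minimal sketch: create a DHCP options set pointing at the corporate domain
# controllers and associate it with the VPC. All values are placeholders.
import boto3

ec2 = boto3.client('ec2')

dhcp_options_id = ec2.create_dhcp_options(
    DhcpConfigurations=[
        {'Key': 'domain-name', 'Values': ['corp.example.com']},
        {'Key': 'domain-name-servers', 'Values': ['10.0.0.10', '10.0.1.10']}
    ]
)['DhcpOptions']['DhcpOptionsId']

ec2.associate_dhcp_options(DhcpOptionsId=dhcp_options_id, VpcId='vpc-0123456789abcdef0')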

Phase 2: Create the AppStream 2.0 Stack

An AppStream 2.0 stack consists of a fleet, user access policies, and storage configuration. To create a stack, follow these steps:

  1. Sign in to the AWS console and select AppStream 2.0 > Stack > Create Stack.
  2. Give the stack a name, and then select Next.
  3. Enable Home Folders, if you want persistent storage, and then select Review.
     

    Figure 4: The “Enable Home Folders” dialog

  4. Select Create.
Phase 3: Create the AppStream 2.0 Directory Configuration

First create a directory configuration so you can join the AppStream 2.0 instances to an Organizational Unit (OU) in your corporate directory.

Note: AppStream 2.0 instances must be placed in an OU and can’t reside in the Computers container.

To create a directory configuration, follow these steps:

  1. Sign in to the AWS console and select AppStream 2.0 > Directory Configs > Create Directory Config.
  2. Enter the following Directory Config information:
    • Directory name: The FQDN of your corporate domain.
    • Service Account Name: The account AppStream 2.0 uses to join the instances to the corporate domain. The required service account privileges are documented here.
    • Organizational Unit (OU): The OUs where AppStream 2.0 will create your instances. You can add additional OUs by clicking the plus (+) sign.
  3. Select Next, and then select Create.
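
The equivalent API call is sketched below with boto3; the directory name, OU distinguished name, and service account credentials are placeholders for your own values.

# Minimal sketch: create the AppStream 2.0 directory config. All values are placeholders.
import boto3

appstream = boto3.client('appstream')
appstream.create_directory_config(
    DirectoryName='corp.example.com',
    OrganizationalUnitDistinguishedNames=['OU=AppStream,OU=Servers,DC=corp,DC=example,DC=com'],
    ServiceAccountCredentials={
        'AccountName': 'corp\\appstream-service',
        'AccountPassword': 'REPLACE_WITH_SERVICE_ACCOUNT_PASSWORD'
    }
)
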
Create Security Groups

Now, create AWS security groups for your AppStream 2.0 instances and backend servers.

BastionHostSecurityGroup

For your AppStream 2.0 instances, you must attach a “BastionHostSecurityGroup” in order to communicate with the backend servers. This security group is only used as a “source” by the security groups the backend servers are attached to and, therefore, it doesn’t require any inbound ports to be opened.

To create a security group, follow these steps:

  1. Sign in to the AWS console and select VPC > Security Groups > Create Security Group.
  2. Give your “BastionHostSecurityGroup” Security Group a name, select the VPC where you will place the AppStream 2.0 instances, and then select Yes, Create.
BastionHostAccessSecurityGroup

For your backend servers, you must attach a “BastionHostAccessSecurityGroup” that allows incoming traffic from the AppStream 2.0 instance. Unlike the “BastionHostSecurityGroup”, this one requires open inbound ports.

  1. Sign in to the AWS console and select VPC > Security Groups > Create Security Group.
  2. Give your “BastionHostAccessSecurityGroup” security group a name, select the correct VPC, and then select Yes, Create.
  3. In the Security Group console, select the newly created security group, select the Inbound Rule tab, and then select Edit.
  4. Add rules to open port 3389 and 22, use the previously-created security group as the source, and then select Save.
     

    Figure 5: Opening ports 3389 and 22

    Note: In addition to security groups, you can place Network ACLs (NACLs) around the subnet you use for AppStream 2.0 as an additional layer of security. The main differences between security groups and NACLs are that security groups are mandatory and apply at the instance level, while NACLs are optional and apply at the subnet level. Another difference worth pointing out is that NACLs are “stateless” while security groups are “stateful.” This means that any port allowed inbound via NACLs will need a corresponding outbound rule. For more information on NACLs, refer to this documentation.
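
Here is a boto3 sketch of the same two security groups. The VPC ID is a placeholder, and the ingress rule mirrors the console steps by allowing ports 3389 and 22 only from the bastion security group.

# Minimal sketch: create the bastion and access security groups, then allow
# RDP (3389) and SSH (22) into the access group only from the bastion group.
import boto3

ec2 = boto3.client('ec2')
vpc_id = 'vpc-0123456789abcdef0'  # placeholder

bastion_sg = ec2.create_security_group(
    GroupName='BastionHostSecurityGroup',
    Description='Attached to AppStream 2.0 streaming instances',
    VpcId=vpc_id
)['GroupId']

access_sg = ec2.create_security_group(
    GroupName='BastionHostAccessSecurityGroup',
    Description='Attached to backend servers reachable from AppStream 2.0',
    VpcId=vpc_id
)['GroupId']

ec2.authorize_security_group_ingress(
    GroupId=access_sg,
    IpPermissions=[
        {'IpProtocol': 'tcp', 'FromPort': port, 'ToPort': port,
         'UserIdGroupPairs': [{'GroupId': bastion_sg}]}
        for port in (3389, 22)
    ]
)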

Phase 4: Build the AppStream 2.0 Image

An AppStream 2.0 image contains applications that you can stream to users. AppStream 2.0 uses the image to launch streaming instances that are part of an AppStream 2.0 fleet.

Once you have created the stack, create a custom image to make custom applications available to the users:

  1. Sign in to the AWS console and select AppStream 2.0 > Images > Image Builder > Launch Image Builder.
  2. Choose the image you want to use as a starting point, and then select Next. For this example, I chose a generic image from the General Purpose stock.
     

    Figure 6: Choosing an image

  3. Give your image a name, choose the instance family, and then select Next.
  4. Choose the VPC and subnet you want to deploy the AppStream 2.0 instances in.
  5. Select the security group you created for the AppStream 2.0 instances.
  6. Select the directory configuration you created, the OU you want your AppStream 2.0 instances to reside in, and then select Review.
  7. Select Launch.
  8. Once the image is built and in a running state, select the image, and then select Connect. This will open a new browser tab where you’ll be able to connect to and manage the image.
  9. Select Administrator and log in.
     

    Figure 7: Log in as a local administrator

  10. Once logged in as administrator, select the Image Assistant shortcut on the desktop.
     

    Figure 8: The Image Assistant shortcut on the desktop

  11. Add all the applications you want to make available to your users for streaming, and then select Next.

    Note: If you need to upload installation or configuration files, you can use the My Files option in the Control menu. Any files uploaded through this method will show up under the X: drive on the Image Builder.

     

    Figure 9: The “Control” menu

  12. If you want to test the applications as a non-privileged user, follow the on-screen instructions to switch the user. Otherwise, select Next.
     

    Figure 10: “Switch User” on-screen instructions

  13. Select Launch to have the Image Assistant optimize the applications.
  14. Give the image a name, and then select Next.
  15. Select Disconnect and Create Image.
  16. Go back to the AppStream 2.0 console and wait for the “snapshotting” to complete and for the image to be in an available state before continuing to the next step.
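
If you would rather launch the image builder programmatically, the boto3 sketch below shows the general shape of the call; the builder name, base image name, instance type, subnet, security group, and directory values are placeholders.

# Minimal sketch: launch an image builder joined to the corporate domain.
# All values are placeholders.
import boto3

appstream = boto3.client('appstream')
appstream.create_image_builder(
    Name='bastion-image-builder',
    ImageName='your-base-image-name',
    InstanceType='stream.standard.medium',
    VpcConfig={
        'SubnetIds': ['subnet-0123456789abcdef0'],
        'SecurityGroupIds': ['sg-0123456789abcdef0']
    },
    DomainJoinInfo={
        'DirectoryName': 'corp.example.com',
        'OrganizationalUnitDistinguishedName': 'OU=AppStream,DC=corp,DC=example,DC=com'
    }
)
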
Phase 5: Create the AppStream 2.0 Fleet

Once you create your Stack and image, you need to create a Fleet and associate it with your Stack.

AppStream 2.0 fleets consist of streaming instances that run the image that you specify. The fleet type determines when your instances run and how you pay for them. You specify the fleet type when you create a fleet, and you can’t change it after the fleet has been created.

To create a fleet, follow these steps:

  1. Sign in to the AWS console and select AppStream 2.0 > Fleets > Create Fleet.
  2. Give your fleet a name, and then select Next.
  3. Select the newly created image, and then select Next.
  4. Choose your preferred settings, and then select Next.

    Important: Pay special attention to the Fleet capacity value. Fleet capacity determines the number of running instances you have at any given time, and it affects your costs.

     

    Figure 11: The “Fleet capacity” dialog

  5. Select your VPC, subnet(s), security Group(s), Active Directory settings, and then select Next.
  6. Review the information, and then select Create.
     

    Figure 12: Review your settings

Associate the fleet with the stack

Follow these steps:

  1. Sign in to the AWS console and select AppStream 2.0 > Stacks.
  2. Select the stack, select Actions, and then select Associate Fleet.
  3. Select the fleet, and then select Associate.
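
For reference, here is a boto3 sketch of creating, starting, and associating a fleet; the names, image, instance type, capacity, networking, and directory values are all placeholders.

# Minimal sketch: create the fleet, start it, and associate it with the stack.
# All values are placeholders.
import boto3

appstream = boto3.client('appstream')
appstream.create_fleet(
    Name='bastion-fleet',
    ImageName='bastion-image',
    InstanceType='stream.standard.medium',
    ComputeCapacity={'DesiredInstances': 2},  # drives running-instance count and cost
    VpcConfig={
        'SubnetIds': ['subnet-0123456789abcdef0', 'subnet-0123456789abcdef1'],
        'SecurityGroupIds': ['sg-0123456789abcdef0']
    },
    DomainJoinInfo={
        'DirectoryName': 'corp.example.com',
        'OrganizationalUnitDistinguishedName': 'OU=AppStream,DC=corp,DC=example,DC=com'
    }
)
appstream.start_fleet(Name='bastion-fleet')
appstream.associate_fleet(FleetName='bastion-fleet', StackName='bastion-stack')
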
Phase 6: Configure ADFS for AppStream 2.0

To have users authenticate against the corporate directory prior to accessing AppStream 2.0, use a Single Sign-On solution. For this demo, I use ADFS. If you choose another solution, follow the instructions that come with the solution. For help with setting up ADFS with AppStream 2.0, review Enabling Identity Federation with ADFS and Amazon AppStream 2.0.

Note: If you use AWS Directory Service for Microsoft AD (AWS Managed Microsoft AD) as your user directory, you can use ADFS by following the ADFS set-up instructions in the blog on How to Enable Your Users to Access Office 365 with AWS Managed Microsoft AD Credentials.

End User Experience

This section shows you what the AppStream 2.0 end user experience is like when connecting to backend Windows and Linux instances.

Note: Make sure you have backend servers to connect to, as indicated in the prerequisites.

Process
  1. Access the ADFS URL that you created as part of the ADFS setup.
  2. Sign in using your corporate credentials.
  3. Select Remote Desktop from the list of applications.
  4. Enter your corporate credentials.
  5. Enter the private IP address of the backend Windows instance you want to remote in to.
     

    Figure 13: Enter the private IP address


    You’re now logged on to the backend Windows instance through AppStream 2.0.
  6. To test in Linux, open putty. Select the Launch app icon in the Control menu, and then select putty.
     

    Figure 14: The “Control” menu showing putty

  7. Provide the private IP address of a backend Linux host you want to connect to, and then select Open.

    Note: For putty to connect to a Linux instance on AWS, you will need to provide a KeyPair. For information on how to configure putty and KeyPairs, refer to this documentation.

You’re now logged on to a backend Linux host through AppStream 2.0.

Monitoring

You can monitor AppStream 2.0 use by default with the following AWS monitoring services.

  • Amazon CloudWatch is a monitoring service for AWS cloud resources. You can use CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. For more information, refer to this documentation. Here’s a sample CloudWatch metric showing in-use capacity was 100% at 14:30, which indicates the Fleet capacity may need to be adjusted. (A sketch of querying this metric with boto3 follows this list.)
     

    Figure 15: An example CloudWatch metric

  • AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. For more information, refer to this documentation. Here’s a sample CloudTrail event. For example, from this event you can see that user Bob logged on to AppStream 2.0 on March 4, 2018, and you can see his source IP.
     

    Figure 16: An example CloudTrail event
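
A sketch of querying the capacity utilization metric with boto3 follows. The fleet name is a placeholder, and the metric and dimension names are assumptions based on the AWS/AppStream CloudWatch namespace.

# Minimal sketch: pull the fleet's capacity utilization for the last three hours.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client('cloudwatch')
stats = cloudwatch.get_metric_statistics(
    Namespace='AWS/AppStream',
    MetricName='CapacityUtilization',
    Dimensions=[{'Name': 'Fleet', 'Value': 'bastion-fleet'}],
    StartTime=datetime.utcnow() - timedelta(hours=3),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=['Average']
)
for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], point['Average'])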

Summary

Amazon AppStream 2.0 is a cost-effective way to provide administrators with a secure and auditable method to access their backend environments.

The AppStream 2.0 built-in auto-scaling feature offers a pay-as-you-go model, where the number of instances running is based on user demand. This allows you to keep costs down without compromising availability. Another cost-saving benefit of AppStream 2.0 is that its underlying infrastructure is managed and maintained by AWS, so you can deploy AppStream 2.0 with minimal effort.

AppStream 2.0 helps with reducing the attack surface by hiding the shell of the streaming OS. This prevents administrators from interacting with executables that haven’t been made available to them through AppStream 2.0.

Another security benefit of AppStream 2.0 is that it destroys streaming instances after each use, reducing risks. This is a good mitigation strategy against compromised instances, as the lifespan of an instance is limited to the length of a user’s session.

AppStream 2.0 support for SAML provides yet another layer of security, allowing you to restrict access to SAML-federated URLs from corporate networks only, as well as the ability to enforce multi-factor authentication (MFA).

You can monitor the AppStream 2.0 environment through the use of AWS CloudTrail and Amazon CloudWatch, allowing you to monitor and trace the usage of AppStream 2.0.

For all of these reasons, AppStream 2.0 makes for a uniquely attractive bastion host solution.

For more information on the technologies mentioned in this blog, see the links below:


AWS Config enables continuous monitoring of your AWS resources, making it simple to assess, audit, and record resource configurations and changes. AWS Config does this through the use of rules that define the desired configuration state of your AWS resources. AWS Config provides a number of AWS managed rules that address a wide range of security concerns such as checking if you encrypted your Amazon Elastic Block Store (Amazon EBS) volumes, tagged your resources appropriately, and enabled multi-factor authentication (MFA) for root accounts. You can also create custom rules to codify your compliance requirements through the use of AWS Lambda functions.

In this post, we’ll show you how to use AWS Config to monitor your Amazon Simple Storage Service (S3) bucket ACLs and policies for violations that allow public read or public write access. If AWS Config finds a policy violation, we’ll have it trigger an Amazon CloudWatch Events rule that invokes an AWS Lambda function, which either corrects the S3 bucket ACL or notifies you via Amazon Simple Notification Service (Amazon SNS) that the policy is in violation and allows public read or public write access. We’ll show you how to do this in five main steps.

  1. Enable AWS Config to monitor Amazon S3 bucket ACLs and policies for compliance violations.
  2. Create an IAM Role and Policy that grants a Lambda function permissions to read S3 bucket policies and send alerts through SNS.
  3. Create and configure a CloudWatch Events rule that triggers the Lambda function when AWS Config detects an S3 bucket ACL or policy violation.
  4. Create a Lambda function that uses the IAM role to review S3 bucket ACLs and policies, correct the ACLs, and notify your team of out-of-compliance policies.
  5. Verify the monitoring solution.

Note: This post assumes your compliance policies require the buckets you monitor not allow public read or write access. If you have intentionally open buckets serving static content, for example, you can use this post as a jumping-off point for a solution tailored to your needs.

At the end of this post, we provide an AWS CloudFormation template that implements the solution outlined. The template enables you to deploy the solution in multiple regions quickly.

Important: The use of some of the resources deployed, including those deployed using the provided CloudFormation template, will incur costs as long as they are in use. AWS Config rules incur costs in each region in which they are active.

Architecture

Here’s an architecture diagram of what we’ll implement:
 

Figure 1: Architecture diagram


 

Step 1: Enable AWS Config and Amazon S3 Bucket monitoring

The following steps demonstrate how to set up AWS Config to monitor Amazon S3 buckets.

  1. Sign into the AWS Management Console and open the AWS Config console.
  2. If this is your first time using AWS Config, select Get started. If you’ve already used AWS Config, select Settings.
  3. In the Settings page, under Resource types to record, clear the All resources checkbox. In the Specific types list, select Bucket under S3.
     

    Figure 2: The Settings dialog box showing the “Specific types” list


     
  4. Choose the Amazon S3 bucket for storing configuration history and snapshots. We’ll create a new Amazon S3 bucket.
     

    Figure 3: Creating an S3 bucket


     

    1. If you prefer to use an existing Amazon S3 bucket in your account, select the Choose a bucket from your account radio button and, using the dropdown, select an existing bucket.
       

      Figure 4: Selecting an existing S3 bucket


       
  5. Under Amazon SNS topic, check the box next to Stream configuration changes and notifications to an Amazon SNS topic, and then select the radio button to Create a topic.
    1. Alternatively, you can choose a topic that you have previously created and subscribed to.
       

      Figure 5: Selecting a topic that you’ve previously created and subscribed to


       
    2. If you created a new SNS topic you need to subscribe to it to receive notifications. We’ll cover this in a later step.
  6. Under AWS Config role, choose Create a role (unless you already have a role you want to use). We’re using the auto-suggested role name.
     

    Figure 6: Creating a role


     
  7. Select Next.
  8. Configure Amazon S3 bucket monitoring rules:
    1. On the AWS Config rules page, search for S3 and choose the s3-bucket-public-read-prohibited and s3-bucket-public-write-prohibited rules, then click Next.
       

      Figure 7: AWS Config rules dialog


       
    2. On the Review page, select Confirm. AWS Config is now analyzing your Amazon S3 buckets, capturing their current configurations, and evaluating the configurations against the rules we selected.
  9. If you created a new Amazon SNS topic, open the Amazon SNS Management Console and locate the topic you created:
     

    Figure 8: Amazon SNS topic list


     
  10. Copy the ARN of the topic (the string that begins with arn:) because you’ll need it in a later step.
  11. Select the checkbox next to the topic, and then, under the Actions menu, select Subscribe to topic.
  12. Select Email as the protocol, enter your email address, and then select Create subscription.
  13. After several minutes, you’ll receive an email asking you to confirm your subscription for notifications for this topic. Select the link to confirm the subscription.
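
If you want to script this step, the boto3 sketch below enables the same two managed rules; it assumes the configuration recorder and delivery channel have already been set up as described above.

# Minimal sketch: enable the two AWS managed rules for S3 public access checks.
import boto3

config = boto3.client('config')
for rule in ('S3_BUCKET_PUBLIC_READ_PROHIBITED', 'S3_BUCKET_PUBLIC_WRITE_PROHIBITED'):
    config.put_config_rule(
        ConfigRule={
            'ConfigRuleName': rule.lower().replace('_', '-'),
            'Source': {'Owner': 'AWS', 'SourceIdentifier': rule},
            'Scope': {'ComplianceResourceTypes': ['AWS::S3::Bucket']}
        }
    )
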
Step 2: Create a Role for Lambda

Our Lambda function will need permissions that enable it to inspect and modify Amazon S3 bucket ACLs and policies, log to CloudWatch Logs, and publish to an Amazon SNS topic. We’ll now set up a custom AWS Identity and Access Management (IAM) policy and role to support these actions and assign them to the Lambda function we’ll create in the next section.

  1. In the AWS Management Console, under Services, select IAM to access the IAM Console.
  2. Create a policy with the following permissions, or copy the following policy:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "SNSPublish",
                "Effect": "Allow",
                "Action": [
                    "sns:Publish"
                ],
                "Resource": "*"
            },
            {
                "Sid": "S3GetBucketACLandPolicy",
                "Effect": "Allow",
                "Action": [
                    "s3:GetBucketAcl",
                    "s3:GetBucketPolicy"
                ],
                "Resource": "*"
            },
            {
                "Sid": "S3PutBucketACLAccess",
                "Effect": "Allow",
                "Action": "s3:PutBucketAcl",
                "Resource": "arn:aws:s3:::*"
            },
            {
                "Sid": "LambdaBasicExecutionAccess",
                "Effect": "Allow",
                "Action": [
                    "logs:CreateLogGroup",
                    "logs:CreateLogStream",
                    "logs:PutLogEvents"
                ],
                "Resource": "*"
            }
        ]
    }
    
    
  3. Create a role for your Lambda function:
    1. Select Lambda from the list of services that will use this role.
    2. Select the check box next to the policy you created previously, and then select Next: Review
    3. Name your role, give it a description, and then select Create Role. In this example, we’re naming the role LambdaS3PolicySecuringRole.
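
The boto3 sketch below creates the same role and attaches the policy from step 2; it assumes you saved that policy JSON locally as lambda_policy.json.

# Minimal sketch: create the Lambda execution role and attach the inline policy.
import json
import boto3

iam = boto3.client('iam')

assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole"
    }]
}

iam.create_role(
    RoleName='LambdaS3PolicySecuringRole',
    AssumeRolePolicyDocument=json.dumps(assume_role_policy),
    Description='Lets the remediation Lambda function fix S3 ACLs and send alerts'
)

with open('lambda_policy.json') as f:  # the policy JSON shown in step 2
    iam.put_role_policy(
        RoleName='LambdaS3PolicySecuringRole',
        PolicyName='LambdaS3PolicySecuringPolicy',
        PolicyDocument=f.read()
    )
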
Step 3: Create and Configure a CloudWatch Rule

In this section, we’ll create a CloudWatch Rule to trigger the Lambda function when AWS Config determines that your Amazon S3 buckets are non-compliant.

  1. In the AWS Management Console, under Services, select CloudWatch.
  2. On the left-hand side, under Events, select Rules.
  3. Click Create rule.
  4. In Step 1: Create rule, under Event Source, select the dropdown list and select Build custom event pattern.
  5. Copy the following pattern and paste it into the text box:
    {
      "source": [
        "aws.config"
      ],
      "detail": {
        "requestParameters": {
          "evaluations": {
            "complianceType": [
              "NON_COMPLIANT"
            ]
          }
        },
        "additionalEventData": {
          "managedRuleIdentifier": [
            "S3_BUCKET_PUBLIC_READ_PROHIBITED",
            "S3_BUCKET_PUBLIC_WRITE_PROHIBITED"
          ]
        }
      }
    }
    
    			

     
    The pattern matches events generated by AWS Config when it checks the Amazon S3 bucket for public accessibility.

  6. We’ll add a Lambda target later. For now, select your Amazon SNS topic created earlier, and then select Configure details.
     

    Figure 9: The “Create rule” dialog


     
  7. Give your rule a name and description. For this example, we’ll name ours AWSConfigFoundOpenBucket
  8. Click Create rule.
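
For a scripted alternative, here is a boto3 sketch that creates the same rule with the event pattern above and points it at the SNS topic; the topic ARN is a placeholder, and the Lambda target is added in Step 4.

# Minimal sketch: create the CloudWatch Events rule and add the SNS topic as a target.
import json
import boto3

events = boto3.client('events')

event_pattern = {
    "source": ["aws.config"],
    "detail": {
        "requestParameters": {
            "evaluations": {"complianceType": ["NON_COMPLIANT"]}
        },
        "additionalEventData": {
            "managedRuleIdentifier": [
                "S3_BUCKET_PUBLIC_READ_PROHIBITED",
                "S3_BUCKET_PUBLIC_WRITE_PROHIBITED"
            ]
        }
    }
}

events.put_rule(Name='AWSConfigFoundOpenBucket', EventPattern=json.dumps(event_pattern))
events.put_targets(
    Rule='AWSConfigFoundOpenBucket',
    Targets=[{'Id': 'sns-alert', 'Arn': 'arn:aws:sns:us-east-1:111122223333:example-topic'}]
)
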
Step 4: Create a Lambda Function

In this section, we’ll create a new Lambda function to examine an Amazon S3 bucket’s ACL and bucket policy. If the bucket ACL is found to allow public access, the Lambda function overwrites it to be private. If a bucket policy is found, the Lambda function creates an SNS message, puts the policy in the message body, and publishes it to the Amazon SNS topic we created. Bucket policies can be complex, and overwriting your policy may cause unexpected loss of access, so this Lambda function doesn’t attempt to alter your policy in any way.

  1. Get the ARN of the Amazon SNS topic created earlier.
  2. In the AWS Management Console, under Services, select Lambda to go to the Lambda Console.
  3. From the Dashboard, select Create Function. Or, if you were taken directly to the Functions page, select the Create Function button in the upper-right.
  4. On the Create function page:
    1. Choose Author from scratch.
    2. Provide a name for the function. We’re using AWSConfigOpenAccessResponder.
    3. The Lambda function we’ve written is Python 3.6 compatible, so in the Runtime dropdown list, select Python 3.6.
    4. Under Role, select Choose an existing role. Select the role you created in the previous section, and then select Create function.
       

      Figure 10: The “Create function” dialog


       
  5. We’ll now add a CloudWatch Event based on the rule we created earlier.
    1. In the Add triggers section, select CloudWatch Events. A CloudWatch Events box should appear connected to the left side of the Lambda Function and have a note that states Configuration required.
       

      Figure 11: CloudWatch Events in the “Add triggers” section


       
    2. From the Rule dropdown box, choose the rule you created earlier, and then select Add.
  6. Scroll up to the Designer section and select the name of your Lambda function.
  7. Delete the default code and paste in the following code:
    import boto3
    from botocore.exceptions import ClientError
    import json
    import os
    
    ACL_RD_WARNING = "The S3 bucket ACL allows public read access."
    PLCY_RD_WARNING = "The S3 bucket policy allows public read access."
    ACL_WRT_WARNING = "The S3 bucket ACL allows public write access."
    PLCY_WRT_WARNING = "The S3 bucket policy allows public write access."
    RD_COMBO_WARNING = ACL_RD_WARNING + PLCY_RD_WARNING
    WRT_COMBO_WARNING = ACL_WRT_WARNING + PLCY_WRT_WARNING
    
    def policyNotifier(bucketName, s3client):
        try:
            bucketPolicy = s3client.get_bucket_policy(Bucket = bucketName)
            # notify that the bucket policy may need to be reviewed due to security concerns
            sns = boto3.client('sns')
            subject = "Potential compliance violation in " + bucketName + " bucket policy"
            message = "Potential bucket policy compliance violation. Please review: " + json.dumps(bucketPolicy['Policy'])
            # send SNS message with warning and bucket policy
            response = sns.publish(
                TopicArn = os.environ['TOPIC_ARN'],
                Subject = subject,
                Message = message
            )
        except ClientError as e:
            # error caught due to no bucket policy
            print("No bucket policy found; no alert sent.")
    
    def lambda_handler(event, context):
        # instantiate Amazon S3 client
        s3 = boto3.client('s3')
        resource = list(event['detail']['requestParameters']['evaluations'])[0]
        bucketName = resource['complianceResourceId']
        complianceFailure = event['detail']['requestParameters']['evaluations'][0]['annotation']
        # ACL violations (public read or write) are remediated by resetting the ACL to private
        if(complianceFailure == ACL_RD_WARNING or complianceFailure == ACL_WRT_WARNING):
            s3.put_bucket_acl(Bucket = bucketName, ACL = 'private')
        elif(complianceFailure == PLCY_RD_WARNING or complianceFailure == PLCY_WRT_WARNING):
            policyNotifier(bucketName, s3)
        elif(complianceFailure == RD_COMBO_WARNING or complianceFailure == WRT_COMBO_WARNING):
            s3.put_bucket_acl(Bucket = bucketName, ACL = 'private')
            policyNotifier(bucketName, s3)
        return 0  # done
    			
  8. Scroll down to the Environment variables section. This code uses an environment variable to store the Amazon SNS topic ARN.
    1. For the key, enter TOPIC_ARN.
    2. For the value, enter the ARN of the Amazon SNS topic created earlier.
  9. Under Execution role, select Choose an existing role, and then select the role created earlier from the dropdown.
  10. Leave everything else as-is, and then, at the top, select Save.
Step 5: Verify it Works

We now have the Lambda function, an Amazon SNS topic, AWS Config watching our Amazon S3 buckets, and a CloudWatch Rule to trigger the Lambda function if a bucket is found to be non-compliant. Let’s test them to make sure they work.

We have an Amazon S3 bucket, myconfigtestbucket, that’s been created in the region monitored by AWS Config, as well as the associated Lambda function. This bucket has no public read or write access set in an ACL or a policy, so it’s compliant.
 

Figure 12: The “Config Dashboard”


 
Let’s change the bucket’s ACL to allow public listing of objects:
 

Figure 13: Screen shot of “Permissions” tab showing Everyone granted list access
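
If you want to reproduce this test from code rather than the console, the boto3 sketch below grants public-read on the example bucket (for testing only).

# For testing only: make the example bucket's ACL public-read so AWS Config
# flags it as non-compliant.
import boto3

s3 = boto3.client('s3')
s3.put_bucket_acl(Bucket='myconfigtestbucket', ACL='public-read')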


 
After saving, the bucket now has public access. After several minutes, the AWS Config Dashboard notes that there is one non-compliant resource:
 

Figure 14: The “Config Dashboard” shown with a non-compliant resource


 
In the Amazon S3 Console, we can see that the bucket no longer has public listing of objects enabled after the invocation of the Lambda function triggered by the CloudWatch Rule created earlier.
 

Figure 15: The “Permissions” tab showing list access no longer allowed


 
Notice that the AWS Config Dashboard now shows that there are no longer any non-compliant resources:
 

Figure 16: The “Config Dashboard” showing zero non-compliant resources


 
Now, let’s try out the Amazon S3 bucket policy check by configuring a bucket policy that allows list access:
 

Figure 17: A bucket policy that allows list access


 
A few minutes after setting this bucket policy on the myconfigtestbucket bucket, AWS Config recognizes the bucket is no longer compliant. Because this is a bucket policy rather than an ACL, we publish a notification to the SNS topic we created earlier that lets us know about the potential policy violation:
 

Figure 18: Notification about potential policy violation


 
Knowing that the policy allows open listing of the bucket, we can now modify or delete the policy, after which AWS Config will recognize that the resource is compliant.

Conclusion

In this post, we demonstrated how you can use AWS Config to monitor for Amazon S3 buckets with open read and write access ACLs and policies. We also showed how to use Amazon CloudWatch, Amazon SNS, and Lambda to overwrite a public bucket ACL, or to alert you should a bucket have a suspicious policy. You can use the CloudFormation template to deploy this solution in multiple regions quickly. With this approach, you will be able to easily identify and secure open Amazon S3 bucket ACLs and policies. Once you have deployed this solution to multiple regions you can aggregate the results using an AWS Config aggregator. See this post to learn more.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Config forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.


The Internet of Things (IoT) has precipitated an influx of connected devices and data that can be mined to gain useful business insights. If you own an IoT device, you might want the data to be uploaded seamlessly from your connected devices to the cloud so that you can make use of cloud storage and processing power to perform sophisticated analysis of the data. To upload the data to the AWS Cloud, devices must pass authentication and authorization checks performed by the respective AWS services. The standard way of authenticating AWS requests is the Signature Version 4 algorithm, which requires the caller to have an access key ID and secret access key. Consequently, you would need to hardcode the access key ID and the secret access key on your devices. Alternatively, you can use the built-in X.509 certificate as the unique device identity to authenticate AWS requests.

AWS IoT has introduced the credentials provider feature that allows a caller to authenticate AWS requests by having an X.509 certificate. The credentials provider authenticates a caller using an X.509 certificate, and vends a temporary, limited-privilege security token. The token can be used to sign and authenticate any AWS request. Thus, the credentials provider relieves you from having to manage and periodically refresh the access key ID and secret access key remotely on your devices.

In the process of retrieving a security token, you use AWS IoT to create a thing (a representation of a specific device or logical entity), register a certificate, and create AWS IoT policies. You also configure an AWS Identity and Access Management (IAM) role and attach appropriate IAM policies to the role so that the credentials provider can assume the role on your behalf. Finally, you make an HTTP-over-Transport Layer Security (TLS) mutual authentication request to the credentials provider, which uses your preconfigured thing, certificate, policies, and IAM role to authenticate and authorize the request and obtain a security token on your behalf. You can then use the token to sign any AWS request with Signature Version 4.

In this blog post, I explain the AWS IoT credentials provider design and then demonstrate the end-to-end process of retrieving a security token from AWS IoT and using the token to write a temperature and humidity record to a specific Amazon DynamoDB table.

Note: This post assumes you are familiar with AWS IoT and IAM to perform steps using the AWS CLI and OpenSSL. Make sure you are running the latest version of the AWS CLI.

Overview of the credentials provider workflow

The following numbered diagram illustrates the credentials provider workflow. The diagram is followed by explanations of the steps.

Here are the steps of the workflow, as illustrated in the preceding diagram:

  1. The AWS IoT device uses the AWS SDK or custom client to make an HTTPS request to the credentials provider for a security token. The request includes the device X.509 certificate for authentication.
  2. The credentials provider forwards the request to the AWS IoT authentication and authorization module to verify the certificate and the permission to request the security token.
  3. If the certificate is valid and has permission to request a security token, the AWS IoT authentication and authorization module returns success. Otherwise, it returns failure, which goes back to the device with the appropriate exception.
  4. On successful validation, the credentials provider invokes the AWS Security Token Service (AWS STS) to assume the preconfigured IAM role.
  5. If assuming the role succeeds, AWS STS returns a temporary, limited-privilege security token to the credentials provider.
  6. The credentials provider returns the security token to the device.
  7. The AWS SDK on the device uses the security token to sign an AWS request with AWS Signature Version 4.
  8. The requested service invokes IAM to validate the signature and authorize the request against access policies attached to the preconfigured IAM role.
  9. If IAM validates the signature successfully and authorizes the request, the request goes through.

In another solution, you could configure an AWS IoT rule that invokes an AWS Lambda function to ingest your device data and send it to another AWS service. However, in applications that require the uploading of large files such as videos or aggregated telemetry to the AWS Cloud, you may want your devices to be able to authenticate and send data directly to the AWS service of your choice. The credentials provider enables you to do that.

Outline of the steps to retrieve and use a security token

Perform the following steps as part of this solution:

  1. Create an AWS IoT thing: Start by creating a thing that corresponds to your home thermostat in the AWS IoT thing registry database. This allows you to authenticate the request as a thing and use thing attributes as policy variables in AWS IoT and IAM policies.
  2. Register a certificate: Create and register a certificate with AWS IoT, and attach it to the thing for successful device authentication.
  3. Create and configure an IAM role: Create an IAM role to be assumed by the service on behalf of your device. I illustrate how to configure a trust policy and an access policy so that AWS IoT has permission to assume the role, and the token has necessary permission to make requests to DynamoDB.
  4. Create a role alias: Create a role alias in AWS IoT. A role alias is an alternate data model pointing to an IAM role. The credentials provider request must include a role alias name to indicate which IAM role to assume for obtaining a security token from AWS STS. You may update the role alias on the server to point to a different IAM role and thus make your device obtain a security token with different permissions.
  5. Attach a policy: Create an authorization policy with AWS IoT and attach it to the certificate to control which device can assume which role aliases.
  6. Request a security token: Make an HTTPS request to the credentials provider to retrieve a security token.
  7. Use the security token to sign a request: Use the retrieved token to sign a request to DynamoDB and successfully write a temperature and humidity record from your home thermostat in a specific table. Thus, starting with an X.509 certificate on your home thermostat, you can successfully upload your thermostat record to DynamoDB and use it for further analysis. Before the availability of the credentials provider, you could not do this.
Deploy the solution

1.  Create an AWS IoT thing

Register your home thermostat in the AWS IoT thing registry database by creating a thing type and a thing. You can use the AWS CLI with the following command to create a thing type. The thing type allows you to store description and configuration information that is common to a set of things.

aws iot create-thing-type --thing-type-name Thermostat

The following is sample output of the create-thing-type command. It contains the thingTypeName, thingTypeId, and thingTypeArn.

{
    "thingTypeName": "Thermostat", 
    "thingTypeId": "11f6d708-919e-479d-8a37-790ce48204d8",
    "thingTypeArn": "arn:aws:iot:us-east-1:<your_aws_account_id>:thingtype/Thermostat"
}

Run the following command in the AWS CLI to create a thing.

aws iot create-thing --thing-name MyHomeThermostat --thing-type-name Thermostat --attribute-payload "{\"attributes\": {\"Owner\":\"Alice\"}}"

The following is sample output of the create-thing command. It contains the thingArn, thingName, and thingId.

{
    "thingArn": "arn:aws:iot:us-east-1:<your_aws_account_id>:thing/MyHomeThermostat", 
    "thingName": "MyHomeThermostat",
    "thingId": "a9b098ac-2ee9-4ba6-aa40-163113eb18d0"
}
2.  Register a certificate

Now, you need to have a Certificate Authority (CA) certificate, sign a device certificate using the CA certificate, and register both certificates with AWS IoT before your device can authenticate to AWS IoT. If you do not already have a CA certificate, you can use OpenSSL to create a CA certificate, as described in Use Your Own Certificate. To register your CA certificate with AWS IoT, follow the steps on Registering Your CA Certificate.

You then have to create a device certificate signed by the CA certificate and register it with AWS IoT, which you can do by following the steps on Creating a Device Certificate Using Your CA Certificate. Save the certificate and the corresponding key pair; you will use them when you request a security token later. Also, remember the password you provide when you create the certificate.
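
If you want a condensed view of what those linked steps look like from the command line, the following sketch creates a sample CA, proves ownership of it to AWS IoT, and then creates and registers a device certificate signed by that CA. It is only an outline of the referenced documentation: the file names are arbitrary, the OpenSSL prompts are answered interactively (the verification CSR’s Common Name must be the registration code), and you should confirm the exact flag names with the aws iot command help for your CLI version.

# Create a sample CA key and a self-signed CA certificate
openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem

# Prove ownership of the CA to AWS IoT with a verification certificate
aws iot get-registration-code
openssl genrsa -out verificationCert.key 2048
openssl req -new -key verificationCert.key -out verificationCert.csr
openssl x509 -req -in verificationCert.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out verificationCert.pem -days 500 -sha256
aws iot register-ca-certificate --ca-certificate file://rootCA.pem --verification-cert file://verificationCert.pem --set-as-active

# Create a device certificate signed by the CA and register it with AWS IoT
openssl genrsa -out deviceCert.key 2048
openssl req -new -key deviceCert.key -out deviceCert.csr
openssl x509 -req -in deviceCert.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out deviceCert.pem -days 365 -sha256
aws iot register-certificate --certificate-pem file://deviceCert.pem --ca-certificate-pem file://rootCA.pem --set-as-active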

Run the following command in the AWS CLI to attach the device certificate to your thing so that you can use thing attributes in policy variables.

aws iot attach-thing-principal --thing-name MyHomeThermostat --principal <certificate-arn>

If the attach-thing-principal command succeeds, the output is empty.

3.  Configure an IAM role

Next, configure an IAM role in your AWS account that will be assumed by the credentials provider on behalf of your device. You are required to associate two policies with the role: a trust policy that controls who can assume the role, and an access policy that controls which actions can be performed on which resources by assuming the role.

The following trust policy grants the credentials provider permission to assume the role. Put it in a text document and save the document with the name, trustpolicyforiot.json.

{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": {"Service": "credentials.iot.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }
}

Run the following command in the AWS CLI to create an IAM role with the preceding trust policy.

aws iam create-role --role-name dynamodb-access-role --assume-role-policy-document file://trustpolicyforiot.json

The following is sample output of the create-role command.

{
    "Role": {
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17", 
            "Statement": {
                "Action": "sts:AssumeRole", 
                "Effect": "Allow", 
                "Principal": {
                    "Service": "credentials.iot.amazonaws.com"
                }
            }
        }, 
        "RoleId": "AROAJWE6XX2DBF3PMINT6", 
        "CreateDate": "2018-01-18T08:40:03.788Z", 
        "RoleName": "dynamodb-access-role", 
        "Path": "/", 
        "Arn": "arn:aws:iam::<your_aws_account_id>:role/dynamodb-access-role"
    }
}

The following access policy allows DynamoDB operations on the table that has the same name as the thing name that you created in Step 1, MyHomeThermostat, by using credentials-iot:ThingName as a policy variable. I explain after Step 5 about using thing attributes as policy variables. Put the following policy in a text document and save the document with the name, accesspolicyfordynamodb.json.

{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": [
      "dynamodb:GetItem",
      "dynamodb:BatchGetItem",
      "dynamodb:PutItem",
      "dynamodb:UpdateItem",
      "dynamodb:DeleteItem"
    ],
    "Resource": "arn:aws:dynamodb:us-east-1:<your_aws_account_id>:table/${credentials-iot:ThingName}"
  }
}

Run the following command in the AWS CLI to create the access policy.

aws iam create-policy --policy-name accesspolicyfordynamodb --policy-document file://accesspolicyfordynamodb.json

The following is sample output of the create-policy command.

{
    "Policy": {
        "PolicyName": "accesspolicyfordynamodb", 
        "CreateDate": "2018-01-18T08:47:38.368Z", 
        "AttachmentCount": 0, 
        "IsAttachable": true, 
        "PolicyId": "ANPAJMRTTZH25SEHZOBYG", 
        "DefaultVersionId": "v1", 
        "Path": "/", 
        "Arn": "arn:aws:iam::<your_aws_account_id>:policy/accesspolicyfordynamodb", 
        "UpdateDate": "2018-01-18T08:47:38.368Z"
    }
}

Finally, run the following command in the AWS CLI to attach the access policy to your role.

aws iam attach-role-policy --role-name dynamodb-access-role --policy-arn arn:aws:iam::<your_aws_account_id>:policy/accesspolicyfordynamodb

If the attach-role-policy command succeeds, the output is empty.

Configure the PassRole permissions

The IAM role that you have created must be passed to AWS IoT to create a role alias, as described in Step 4. The user who performs the operation requires iam:PassRole permission to authorize this action. You also should add permission for the iam:GetRole action to allow the user to retrieve information about the specified role. Create the following policy to grant iam:PassRole and iam:GetRole permissions. Name this policy, passrolepermission.json.

{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": [
        "iam:GetRole",
        "iam:PassRole"
    ],
    "Resource": "arn:aws:iam::<your_aws_account_id>:role/dynamodb-access-role"
  }
}

Run the following command in the AWS CLI to create the policy in your AWS account.

aws iam create-policy --policy-name passrolepermission --policy-document file://passrolepermission.json

The following is sample output of the create-policy command.

{
    "Policy": {
        "PolicyName": "passrolepermission", 
        "CreateDate": "2018-01-18T08:53:35.016Z", 
        "AttachmentCount": 0, 
        "IsAttachable": true, 
        "PolicyId": "ANPAJ6HSQYSBLAUCR5XRC", 
        "DefaultVersionId": "v1", 
        "Path": "/", 
        "Arn": "arn:aws:iam::<your_aws_account_id>:policy/passrolepermission", 
        "UpdateDate": "2018-01-18T08:53:35.016Z"
    }
}

Now, run the following command to attach the policy to the user.

aws iam attach-user-policy --policy-arn arn:aws:iam::<your_aws_account_id>:policy/passrolepermission --user-name <user_name>

If the attach-user-policy command succeeds, the output is empty.

4.  Create a role alias

Now that you have configured the IAM role, you will create a role alias with AWS IoT. You must provide the following pieces of information when creating a role alias:

  1. RoleAlias: This is the primary key of the role alias data model and hence a mandatory attribute. It is a string; the minimum length is 1 character, and the maximum length is 128 characters.
  2. RoleArn: This is the Amazon Resource Name (ARN) of the IAM role you have created. This is also a mandatory attribute.
  3. CredentialDurationSeconds: This is an optional attribute specifying the validity (in seconds) of the security token. The minimum value is 900 seconds (15 minutes), and the maximum value is 3,600 seconds (60 minutes); the default value is 3,600 seconds, if not specified.

Run the following command in the AWS CLI to create a role alias. Use the credentials of the user to whom you have given the iam:PassRole permission.

aws iot create-role-alias --role-alias Thermostat-dynamodb-access-role-alias --role-arn arn:aws:iam::<your_aws_account_id>:role/dynamodb-access-role --credential-duration-seconds 3600

The following is sample output of the create-role-alias command. It contains the roleAlias as specified in the request and the roleArn.

{
    "roleAlias": "Thermostat-dynamodb-access-role-alias",
    "roleAliasArn": "arn:aws:iot:us-east-1:<your_aws_account_id>:rolealias/Thermostat-dynamodb-access-role-alias"
}
5.  Attach a policy

You created and registered a certificate with AWS IoT earlier for successful authentication of your device. Now, you need to create and attach a policy to the certificate to authorize the request for the security token.

Let’s say you want to allow a thing to get credentials for the role alias, Thermostat-dynamodb-access-role-alias, only when the thing’s owner is Alice, its thing type is Thermostat, and the thing is attached to the certificate (principal) used in the request. The following policy, with thing attributes as policy variables, achieves these requirements. After this step, I explain more about using thing attributes as policy variables. Put the policy in a text document, and save it with the name alicethermostatpolicy.json.

{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "iot:AssumeRoleWithCertificate",
    "Resource": "arn:aws:iot:us-east-1:<your_aws_account_id>:rolealias/${iot:Connection.Thing.ThingTypeName}-dynamodb-access-role-alias",
    "Condition": {
      "StringEquals": {
        "iot:Connection.Thing.Attributes[Owner]": "Alice",
        "iot:Connection.Thing.ThingTypeName": "Thermostat"
      },
      "Bool": {
        "iot:Connection.Thing.IsAttached": "true"
      }
    }
  }
}

Run the following command in the AWS CLI to create the policy in your AWS IoT database.

aws iot create-policy --policy-name AliceThermostatPolicy --policy-document file://alicethermostatpolicy.json

The following is sample output of the create-policy command. It contains the policyName, policyArn, policyDocument, and policyVersionId.

{
    "policyName": "AliceThermostatPolicy", 
    "policyArn": "arn:aws:iot:us-east-1:<your_aws_account_id>:policy/AliceThermostatPolicy", 
    "policyDocument": "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": {\n    \"Effect\": \"Allow\",\n    \"Action\": \"iot:AssumeRoleWithCertificate\",\n    \"Resource\": \"arn:aws:iot:us-east-1:<your_aws_account_id>:rolealias/${iot:Connection.Thing.ThingTypeName}-dynamodb-access-role-alias\",\n    \"Condition\": {\n      \"StringEquals\": {\n        \"iot:Connection.Thing.Attributes[Owner]\": \"Alice\",\n        \"iot:Connection.Thing.ThingTypeName\": \"Thermostat\"\n      },\n      \"Bool\": {\n        \"iot:Connection.Thing.IsAttached\": \"true\"\n      }\n    }\n  }\n}\n", 
    "policyVersionId": "1"
}

Use the following command to attach the policy with the certificate you registered earlier.

aws iot attach-policy --policy-name AliceThermostatPolicy --target <certificate-arn>

If the attach-policy command succeeds, the output is empty.

You have completed all the necessary steps to request an AWS security token from the credentials provider!
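
As a preview of steps 6 and 7, here is a rough sketch of what the request looks like in practice. It assumes the device certificate and key files created in Step 2, the role alias from Step 4, and an existing DynamoDB table named MyHomeThermostat whose key schema matches the item written below; the endpoint, credential values, and item attributes are placeholders for your own values.

# Look up your account-specific credentials provider endpoint
aws iot describe-endpoint --endpoint-type iot:CredentialProvider

# Request a token over mutual TLS; the thing name header lets AWS IoT resolve thing attributes as policy variables
curl --cert deviceCert.pem --key deviceCert.key -H "x-amzn-iot-thingname: MyHomeThermostat" https://<credentials_provider_endpoint>/role-aliases/Thermostat-dynamodb-access-role-alias/credentials

# The response contains accessKeyId, secretAccessKey, and sessionToken; export them so any SigV4-capable client (here, the AWS CLI) can sign the DynamoDB request
export AWS_ACCESS_KEY_ID=<accessKeyId from the response>
export AWS_SECRET_ACCESS_KEY=<secretAccessKey from the response>
export AWS_SESSION_TOKEN=<sessionToken from the response>
aws dynamodb put-item --table-name MyHomeThermostat --item '{"recordId": {"S": "2018-01-18T09:00:00Z"}, "temperature": {"N": "72"}, "humidity": {"N": "45"}}'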

Using thing attributes as policy variables

Before I show how to request a security token, I want to explain more about how to use thing attributes as policy variables and the advantage of using them. As a prerequisite, a device must provide a thing name in the credentials provider request.

Thing substitution variables in AWS IoT policies

AWS IoT Simplified Permission Management allows you to associate a connection with a specific thing, and allow the thing name, thing type, and other thing attributes to be available as substitution variables in AWS IoT policies. You can write a generic AWS IoT policy as in alicethermostatpolicy.json in Step 5, attach it to multiple certificates, and authorize the connection as a thing. For example, you could attach alicethermostatpolicy.json to certificates corresponding to each of the thermostats you have that you want to assume the role alias, Thermostat-dynamodb-access-role-alias, and allow operations only on the table with the name that matches the thing name. For more information, see the full list of


Over the coming weeks, we’ll be adding enhanced domain protections to Amazon CloudFront. The short version is this: the new measures are designed to ensure that requests handled by CloudFront are handled on behalf of legitimate domain owners.

Using CloudFront to receive traffic for a domain you aren’t authorized to use is already a violation of our AWS Terms of Service. When we become aware of this type of activity, we deal with it behind the scenes by disabling abusive accounts. Now we’re integrating checks directly into the CloudFront API and Content Distribution service, as well.

Enhanced Protection against Dangling DNS entries
To use CloudFront with your domain, you must configure your domain to point at CloudFront. You may use a traditional CNAME, or an Amazon Route 53 “ALIAS” record.

A problem can arise if you delete your CloudFront distribution, but leave your DNS still pointing at CloudFront, popularly known as a “dangling” DNS entry. Thankfully, this is very rare, as the domain will no longer work, but we occasionally see customers who leave their old domains dormant. This can also happen if you leave this kind of “dangling” DNS entry pointing at other infrastructure you no longer control. For example, if you leave a domain pointing at an IP address that you don’t control, then there is a risk that someone may come along and “claim” traffic destined for your domain.

In an even more rare set of circumstances, an abuser can exploit a subdomain of a domain that you are actively using. For example, if a customer left “images.example.com” dangling and pointing to a deleted CloudFront distribution which is no longer in use, but they still actively use the parent domain “example.com”, then an abuser could come along and register “images.example.com” as an alternative name on their own distribution and claim traffic that they aren’t entitled to. This also means that cookies may be set and intercepted for HTTP traffic potentially including the parent domain. HTTPS traffic remains protected if you’ve removed the certificate associated with the original CloudFront distribution.

Of course, the best fix for this kind of risk is not to leave dangling DNS entries in the first place. Earlier in February, 2018, we added a new warning to our systems. With this warning, if you remove an alternate domain name from a distribution, you are reminded to delete any DNS entries that may still be pointing at CloudFront.
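
If you want to audit your own records, a quick check along the following lines (a sketch; substitute your own hostnames) shows whether a name still resolves to CloudFront and whether one of your distributions still claims it:

dig +short images.example.com CNAME

aws cloudfront list-distributions --query "DistributionList.Items[].{Id:Id,Domain:DomainName,Aliases:Aliases.Items}"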

We also have long-standing checks in the CloudFront API that ensure this kind of domain claiming can’t occur when you are using wildcard domains. If you attempt to add *.example.com to your CloudFront distribution, but another account has already registered www.example.com, then the attempt will fail.

With the new enhanced domain protection, CloudFront will now also check your DNS whenever you remove an alternate domain. If we determine that the domain is still pointing at your CloudFront distribution, the API call will fail and no other accounts will be able to claim this traffic in the future.

Enhanced Protection against Domain Fronting
CloudFront will also soon be implementing enhanced protections against so-called “Domain Fronting”. Domain Fronting is when a non-standard client makes a TLS/SSL connection to a certain name but then makes an HTTPS request for an unrelated name. For example, the TLS connection may connect to “www.example.com” but then issue a request for “www.example.org”.
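
As a rough illustration of the mismatch, using the example names above, such a non-standard client behaves like the following curl invocation: the TLS handshake (and SNI) goes to one name while the HTTP Host header asks for another.

curl -v https://www.example.com/ -H "Host: www.example.org"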

In certain circumstances this is normal and expected. For example, browsers can re-use persistent connections for any domain that is listed in the same SSL Certificate, and these are considered related domains. But in other cases, tools including malware can use this technique between completely unrelated domains to evade restrictions and blocks that can be imposed at the TLS/SSL layer.

To be clear, this technique can’t be used to impersonate domains. The clients are non-standard and are working around the usual TLS/SSL checks that ordinary clients impose. But clearly, no customer ever wants to find that someone else is masquerading as their innocent, ordinary domain. Although these cases are also already handled as a breach of our AWS Terms of Service, in the coming weeks we will be checking that the account that owns the certificate we serve for a particular connection always matches the account that owns the request we handle on that connection. As ever, the security of our customers is our top priority, and we will continue to provide enhanced protection against misconfigurations and abuse from unrelated parties.

Interested in additional AWS Security news? Follow the AWS Security Blog on Twitter.


In a multi-account environment where you require connectivity between accounts, and perhaps between cloud and on-premises workloads, the demand for a robust Domain Name System (DNS) service that’s capable of name resolution across all connected environments will be high.

The most common solution is to implement local DNS in each account and use conditional forwarders for DNS resolutions outside of this account. While this solution might be efficient for a single-account environment, it becomes complex in a multi-account environment.

In this post, I will provide a solution to implement central DNS for multiple accounts. This solution reduces the number of DNS servers and forwarders needed to implement cross-account domain resolution. I will show you how to configure this solution in four steps:

  1. Set up your Central DNS account.
  2. Set up each participating account.
  3. Create Route53 associations.
  4. Configure on-premises DNS (if applicable).
Solution overview

In this solution, you use AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) as a DNS service in a dedicated account in a Virtual Private Cloud (DNS-VPC).

The DNS service included in AWS Managed Microsoft AD uses conditional forwarders to forward domain resolution to either Amazon Route 53 (for domains in the awscloud.com zone) or to on-premises DNS servers (for domains in the example.com zone). You’ll use AWS Managed Microsoft AD as the primary DNS server for other application accounts in the multi-account environment (participating accounts).

A participating account is any application account that hosts a VPC and uses the centralized AWS Managed Microsoft AD as the primary DNS server for that VPC. Each participating account has a private hosted zone with a unique zone name that represents the account (for example, business_unit.awscloud.com).

You associate DNS-VPC with the unique hosted zone in each of the participating accounts; this allows AWS Managed Microsoft AD to use Route 53 to resolve all registered domains in the private hosted zones in participating accounts.

The following diagram shows how the various services work together:
 

Figure 1: Diagram showing the relationship between all the various services

In this diagram, all VPCs in participating accounts use Dynamic Host Configuration Protocol (DHCP) option sets. The option sets configure EC2 instances to use the centralized AWS Managed Microsoft AD in DNS-VPC as their default DNS Server. You also configure AWS Managed Microsoft AD to use conditional forwarders to send domain queries to Route53 or on-premises DNS servers based on query zone. For domain resolution across accounts to work, we associate DNS-VPC with each hosted zone in participating accounts.

If, for example, server.pa1.awscloud.com needs to resolve addresses in the pa3.awscloud.com domain, the sequence shown in the following diagram happens:
 

Figure 2: How domain resolution across accounts works


  • 1.1: server.pa1.awscloud.com sends a domain name lookup for server.pa3.awscloud.com to its default DNS server. The request is forwarded to the DNS server defined in the DHCP options set (AWS Managed Microsoft AD in DNS-VPC).
  • 1.2: AWS Managed Microsoft AD forwards the name resolution to Route 53 because the name is in the awscloud.com zone.
  • 1.3: Route 53 resolves the name to the IP address of server.pa3.awscloud.com because DNS-VPC is associated with the private hosted zone pa3.awscloud.com.

Similarly, if server.example.com needs to resolve server.pa3.awscloud.com, the following happens:

  • 2.1: server.example.com sends a domain name lookup for server.pa3.awscloud.com to the on-premises DNS server.
  • 2.2: The on-premises DNS server uses a conditional forwarder to forward the lookup to AWS Managed Microsoft AD in DNS-VPC.
  • 1.2: AWS Managed Microsoft AD forwards the name resolution to Route 53 because the name is in the awscloud.com zone.
  • 1.3: Route 53 resolves the name to the IP address of server.pa3.awscloud.com because DNS-VPC is associated with the private hosted zone pa3.awscloud.com.
Step 1: Set up a centralized DNS account

In previous AWS Security Blog posts, Drew Dennis covered a couple of options for establishing DNS resolution between on-premises networks and Amazon VPC. In one of those posts, he showed how you can use AWS Managed Microsoft AD (provisioned with AWS Directory Service) to provide DNS resolution with forwarding capabilities.

To set up a centralized DNS account, you can follow the same steps in Drew’s post to create AWS Managed Microsoft AD and configure forwarders that send DNS queries for awscloud.com to the default, VPC-provided DNS and example.com queries to the on-premises DNS server.

Here are a few considerations while setting up central DNS:

  • The VPC that hosts AWS Managed Microsoft AD (DNS-VPC) will be associated with all private hosted zones in participating accounts.
  • To be able to resolve domain names across AWS and on-premises, connectivity through Direct Connect or VPN must be in place.
Step 2: Set up participating accounts

The steps I suggest in this section should be applied individually in each application account that’s participating in central DNS resolution.

  1. Create the VPC(s) that will host your resources in each participating account.
  2. Create VPC Peering between local VPC(s) in each participating account and DNS-VPC.
  3. Create a private hosted zone in Route 53. Hosted zone domain names must be unique across all accounts. In the diagram above, we used pa1.awscloud.com / pa2.awscloud.com / pa3.awscloud.com. You could also use a combination of environment and business unit: for example, you could use pa1.dev.awscloud.com to achieve uniqueness.
  4. Associate VPC(s) in each participating account with the local private hosted zone (a CLI sketch covering steps 3 and 4 follows this list).
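
As a rough CLI equivalent of steps 3 and 4 (a sketch; the zone name, region, VPC ID, and caller reference are placeholders for your own values), you can create the private hosted zone and associate it with the local VPC in a single call:

aws route53 create-hosted-zone --name pa1.awscloud.com --vpc VPCRegion=us-east-1,VPCId=<local-vpc-id> --caller-reference pa1-zone-2018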

The next step is to change the default DNS servers on each VPC using a DHCP options set:

  1. Follow these steps to create a new DHCP options set. Make sure the DNS Servers field contains the private IP addresses of the two AWS Managed Microsoft AD servers that were created in DNS-VPC (a CLI sketch follows this list):
     

    Figure 3: The “Create DHCP options set” dialog box

  2. Follow these steps to assign the DHCP options set to your VPC(s) in each participating account.
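
For reference, the same DHCP options set can also be created and attached from the CLI. This is a sketch: the IP addresses and IDs are placeholders for the AWS Managed Microsoft AD DNS addresses and your local VPC, and you should confirm the option names with aws ec2 create-dhcp-options help.

aws ec2 create-dhcp-options --dhcp-configurations "Key=domain-name-servers,Values=<AD-DNS-IP-1>,<AD-DNS-IP-2>"

aws ec2 associate-dhcp-options --dhcp-options-id <dhcp-options-id> --vpc-id <local-vpc-id>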
Step 3: Associate DNS-VPC with private hosted zones in each participating account

The next steps associate DNS-VPC with the private hosted zone in each participating account. This allows instances in DNS-VPC to resolve domain records created in these hosted zones. If you need them, here are more details on associating a private hosted zone with a VPC in a different account.

  1. In each participating account, create the authorization using the private hosted zone ID from the previous step, the region, and the VPC ID that you want to associate (DNS-VPC).
     
    aws route53 create-vpc-association-authorization --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=<region>,VPCId=<vpc-id>
     
  2. In the centralized DNS account, associate DNS-VPC with the hosted zone in each participating account.
     
    aws route53 associate-vpc-with-hosted-zone --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=<region>,VPCId=<vpc-id>
     

After completing these steps, AWS Managed Microsoft AD in the centralized DNS account should be able to resolve domain records in the private hosted zone in each participating account.
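
A quick way to verify the resolution path (a sketch; run it from an EC2 instance in a participating account’s VPC after the DHCP options set from Step 2 has taken effect, substituting a record that exists in one of your hosted zones):

nslookup server.pa3.awscloud.com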

Step 4: Setting up on-premises DNS servers

This step is necessary only if you would like to resolve AWS private domains from on-premises servers. The task comes down to configuring conditional forwarders on-premises that send DNS queries for all domains in the awscloud.com zone to AWS Managed Microsoft AD in DNS-VPC.

The steps to implement conditional forwarders vary by DNS product. Follow your product’s documentation to complete this configuration.

Summary

I introduced a simplified solution to implement central DNS resolution in a multi-account environment that can also be extended to support DNS resolution between on-premises resources and AWS. This can help reduce operational effort and the number of resources needed to implement cross-account domain resolution.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Directory Service forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.
