Top AWS Terminology to Understand AWS Security Concepts

Panoptica Team
Monday, May 24th, 2021

Ever feel lost in a sea of AWS terminology, acronyms, and cloud security concepts? You're not alone! Let's take a look at some of the most common terms you'll hear when you're learning about Amazon Web Services and getting to grips with AWS cloud security, and get an overview of what they mean for your own cloud security environment. Consider this your cheat sheet!

Table of contents:
AWS IAM
Amazon Virtual Private Cloud (VPC)
AWS Relational Database Service (RDS)
AWS Lambda
EC2
Amazon Elastic File System
S3 Bucket
AWS Key Pair
AWS Least Privilege
AWS CloudFormation
AWS Security Hub
Amazon EKS
AWS Cross-account Access

AWS IAM

IAM is AWS terminology that stands for Identity and Access Management, and it's the way Amazon Web Services lets you control access to all of your resources on AWS. You'll come across the terms authenticated and authorized: authenticated means the user is signed in, and authorized means they have the permissions they need to use a given resource.

You manage access in AWS by creating IAM identities, such as users, groups, or roles, and then attaching policies to those identities. The policies define the permissions each identity has.
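For a rough idea of what "identity plus policy" looks like in practice, here's a minimal sketch using boto3, the AWS SDK for Python. The user name is hypothetical, and the AWS-managed read-only S3 policy is just one example of a policy you might attach; in a real account you'd scope permissions to your own needs.

```python
# Minimal sketch: create an IAM identity and attach a policy to it.
# The user name is hypothetical.
import boto3

iam = boto3.client("iam")

# Create an IAM identity (a user, in this case).
iam.create_user(UserName="example-analyst")

# Attach a policy to the identity; the policy defines what it is authorized to do.
# Here it's the AWS-managed read-only policy for S3.
iam.attach_user_policy(
    UserName="example-analyst",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```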

The root user is the sign-in identity created along with the AWS account, and it has access to every AWS service and resource in that account. Because those permissions are far too broad for everyday work, AWS recommends using the root user only to create your first IAM user, and then managing the account through IAM identities.

It’s important to note that defining policies using IAM logic doesn’t always mean that your environment is secure. In fact, even by design, some configurations can open you up to risk. Read more about AWS IAM and AWS Authorization logic.

Amazon Virtual Private Cloud

If you're using AWS services in the cloud, you've probably heard the term Virtual Private Cloud, or VPC. With a VPC, instead of relying on the internal and external IP addresses allocated by Amazon, you define your own IP address ranges and subnets, and choose which of your AWS resources will be public facing and which will stay private. This is a great option for customers who need more granular security and don't want all of their AWS services exposed to the public internet.

Using Amazon Virtual Private Cloud is not the same as using Elastic IP addresses. An Elastic IP address is a public IP address that you can reserve and keep even in situations where it would normally change, such as restarting an instance. Elastic IP addresses are typically used for fault-tolerant software or instances, and they remain public.

There are two main ways a VPC supports better security practices than relying on AWS's public defaults. The first is security groups, which control traffic at the instance level, while network access control lists (ACLs) control traffic at the subnet level, much like a firewall would.
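As a hedged illustration of the security-group side of this, here's a boto3 sketch that creates a group allowing only inbound HTTPS; the VPC ID and CIDR range are placeholders you'd replace with your own.

```python
# Simplified sketch: a security group that only allows inbound HTTPS.
# The VPC ID is a placeholder.
import boto3

ec2 = boto3.client("ec2")

# Security groups act at the instance level.
sg = ec2.create_security_group(
    GroupName="web-tier",
    Description="Allow HTTPS only",
    VpcId="vpc-0123456789abcdef0",
)

# Allow inbound HTTPS from anywhere; everything else stays blocked by default.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```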

The second is Dedicated Instances, which run on hardware that is physically isolated from instances belonging to other AWS accounts. It might be worth reading our article, There is no Private in your Public Cloud, for more information on managing applications, instances, and web services in the most secure way possible.

AWS Relational Database Service

AWS Relational Database Service is a managed service that provides the tools and technology needed to work with relational databases, such as Oracle, in the AWS cloud. A relational database stores data in columns and rows and is queried using SQL. Non-relational databases, by contrast, are known as NoSQL databases and need a different AWS service to manage data storage.

In a relational database, columns represent attributes and rows represent records. Relational databases work well for finance, warehousing, CRM, ERP, and many other transactional workloads; however, they aren't always suitable for highly dynamic applications or those that need extreme performance at scale. If you have large amounts of unstructured data, or you're looking to use it for machine learning or to train algorithms, you'll likely want a NoSQL database.

Familiar relational options on AWS include Amazon Aurora and Amazon RDS itself, which stands for Relational Database Service. (DynamoDB, by contrast, is Amazon's managed NoSQL database.) There are six database engines to choose from within RDS: Oracle, PostgreSQL, MySQL, MariaDB, Microsoft SQL Server, and Aurora itself.

As the customer, you'll need to consider whether you want to manage these database engines from a CLI, via API calls, or from your centralized management console. The main point of using a relational database service like this is to automate much of the administrative work, such as provisioning, backups, updates, and initial setup.
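To make the API-call option concrete, here's a hedged sketch of provisioning a small RDS instance with boto3; the identifier and credentials are placeholders, and in practice you'd pull the password from a secrets manager rather than hard-coding it.

```python
# Illustrative sketch only; identifiers and credentials are placeholders.
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="example-postgres",
    Engine="postgres",                # one of the six RDS engine choices
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,              # in GiB
    MasterUsername="admin_user",
    MasterUserPassword="replace-with-a-secret",
    PubliclyAccessible=False,         # keep the database off the public internet
)
```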

AWS Lambda

AWS Lambda makes it easy to run code without any of the overhead of managing and provisioning servers. Think about time-consuming tasks such as integrating events, managing runtimes, or building scaling logic. All of this is eliminated with a serverless compute service like AWS Lambda. You might be asking yourself, what does the common term serverless mean in AWS? With Lambda, you write your code in one of the supported languages, upload it as a container image or a ZIP file, and Lambda allocates the compute capacity and runs your code in response to incoming requests or events, at virtually any scale.

Terms you might hear when reading about AWS Lambda include functions, the scripts or programs that run in Lambda, processing an event and returning a response; runtimes, which let you run functions written in different languages in the same execution environment; Lambda layers, which help you manage and distribute shared code and dependencies across functions; and log streams, which let you add custom logging statements to analyze the flow and performance of your functions.
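To make "function" concrete, here's a minimal Python handler sketch; the event shape is hypothetical, and a real function would do whatever processing your workload needs.

```python
# Minimal Lambda function handler; the event fields are hypothetical.
import json

def lambda_handler(event, context):
    # Lambda invokes this function with the incoming event and a context object.
    name = event.get("name", "world")
    # Anything printed here ends up in the function's log stream.
    print(f"Handling request for {name}")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```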

Amazon EC2 (Elastic Compute Cloud)

Amazon EC2 stands for Elastic Compute Cloud, and offers secure, scalable virtual servers that can be used to run AWS resources and applications. You can configure your Amazon EC2 instances exactly how you like, including with memory, storage, CPU and networking requirements.
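As a hedged sketch of what that configuration looks like, here's a boto3 call launching a single small instance; the AMI ID, key pair name, and security group ID are placeholders you'd swap for values in your own account and region.

```python
# Sketch: launch one small EC2 instance. All IDs and names are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # choose an AMI for your region
    InstanceType="t3.micro",           # CPU/memory sizing
    MinCount=1,
    MaxCount=1,
    KeyName="example-key-pair",        # SSH key pair used to access the instance
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
```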

To secure Amazon EC2 instances, Amazon Web Services suggests using identity federation, IAM users and roles, implementing rules according to least privilege, and ensuring that your OS and applications are patched and up to date. However, this doesn't cover all the risks in the cloud. One good example is managing SSH key pairs, which are often reused across multiple instances, opening an organization up to risk.

At Panoptica, we offer a free, open-source Red Detector that can be used to scan your Amazon EC2 instances to find vulnerabilities using Vuls, as well as for signs of a rootkit using Chkrootkit. You can also use this tool to audit your EC2 instance using Lynis to find misconfigurations.

Amazon Elastic File System

Amazon Elastic File System, also known as Amazon EFS, makes it easy to manage file storage without manually provisioning or deploying capacity in the AWS cloud. The file system is elastic: it automatically scales as files are added or removed, and can burst to higher throughput when needed. Files that are accessed less frequently can be automatically moved to lower-cost storage, which can take a real chunk out of your monthly AWS spend, especially if you're running a low-cost cloud strategy.

In terms of security, the AWS terminology to think about here is your VPC security group rules and your IAM policies, both discussed earlier in this article. While security groups control the traffic that moves in and out of your file system, you can also apply IAM-based policies to your file system that dictate which clients can access your EFS and what permissions they have. Head to the EFS console to see which policies you can add; these are usually options like disabling root access, encrypting all connections, or enforcing read-only access. Of course, these are "one size fits all" policies, and won't reflect your requirements with any business context.
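Here's a hedged sketch of one such file system policy applied via boto3, denying any connection that isn't encrypted in transit; the file system ID is a placeholder.

```python
# Sketch: enforce encryption in transit on an EFS file system.
# The file system ID is a placeholder.
import boto3, json

efs = boto3.client("efs")

deny_unencrypted = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedTransport",
        "Effect": "Deny",
        "Principal": {"AWS": "*"},
        "Action": "*",
        "Resource": "*",
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}

efs.put_file_system_policy(
    FileSystemId="fs-0123456789abcdef0",
    Policy=json.dumps(deny_unencrypted),
)
```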

S3 Bucket

Amazon S3 buckets are storage containers used to hold objects on Amazon Web Services; an object consists of the data itself (a file) plus its metadata. After creating a bucket, you choose a storage tier for your data, which varies in terms of redundancy, price, and how often you need to access the objects inside. A single bucket can store objects across multiple tiers.

The AWS IAM service helps you assign the right privileges and permissions for the objects in any given S3 bucket, alongside bucket policies and access control lists. However, these can quickly become complex to manage, and the access options AWS exposes don't always make it easy to ascertain whether your objects are public.
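One common guardrail, shown here as a small boto3 sketch with a hypothetical bucket name, is to switch on S3 Block Public Access at the bucket level so that stray ACLs or policies can't accidentally expose objects.

```python
# Sketch: block public ACLs and public bucket policies on a bucket.
# The bucket name is hypothetical.
import boto3

s3 = boto3.client("s3")

s3.put_public_access_block(
    Bucket="example-data-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```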

AWS Key Pair

A key pair is a set of security credentials used to access your instances, made up of a public key and a private key. AWS stores the public key, while you create and hold the private key, which is unique to you. In that way, it works like a password for connecting to your instances, for example over SSH.

As your cloud environment grows, it becomes harder to manage these pairs, and, much like password reuse, many companies and users share the same keys across different instances. When organizations reuse a key pair across multiple instances, an attacker who uncovers that single credential can access all of those servers.
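If you want a quick, do-it-yourself check for this, the rough boto3 sketch below inventories which key pair each instance uses and flags any key name attached to more than one instance (for brevity it ignores pagination across large fleets).

```python
# Rough sketch: spot key pairs that are shared across EC2 instances.
from collections import defaultdict
import boto3

ec2 = boto3.client("ec2")
instances_by_key = defaultdict(list)

for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        instances_by_key[instance.get("KeyName", "<none>")].append(instance["InstanceId"])

for key_name, instance_ids in instances_by_key.items():
    if len(instance_ids) > 1:
        print(f"Key pair '{key_name}' is shared by: {', '.join(instance_ids)}")
```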

Panoptica maps and visualizes all EC2 instances, highlighting those that are using the same key pair, and prioritizing these by real-world attack paths, so that you can easily mitigate any misuse that is opening your business up to risk.

AWS Least Privilege

The principle of least privilege (PoLP) is the idea that every user has the permissions and access they need to do their job, and no more. That way, if an attacker compromises an account, they are limited in what they can reach. The principle was conceived for on-premises environments, however, and does not carry over entirely to cloud security.

There are a few reasons for this disconnect. First, in the cloud an attacker may need only one or two permissions to gain complete control over an account. Second, least privilege on AWS isn't always achievable because of pre-set rules that the user can't amend. Lastly, creating the sheer number of policies needed for least privilege means a huge amount of work for DevOps teams.
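Still, the basic shape of a least-privilege policy is worth seeing. Here's a hedged sketch: read-only access to a single, hypothetical bucket instead of a wildcard like `s3:*` on `*`.

```python
# Sketch: a narrowly scoped policy (read-only access to one bucket).
# The bucket name and policy name are hypothetical.
import boto3, json

iam = boto3.client("iam")

read_one_bucket = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-reports-bucket",
            "arn:aws:s3:::example-reports-bucket/*",
        ],
    }],
}

iam.create_policy(
    PolicyName="read-example-reports-only",
    PolicyDocument=json.dumps(read_one_bucket),
)
```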

AWS CloudFormation

Infrastructure as code makes it easy for DevOps teams to model, provision, and manage a collection of AWS resources, and that's what AWS CloudFormation does, using templates. You can declare resources with their dependencies and manage them as a single stack, instead of handling the individual parts. These stacks can even be deployed across multiple accounts and regions.

This not only saves time for DevOps, by allowing them to automate best practices, but also ensures they can then roll any updates out quickly and easily across projects, at low cost.
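As a toy illustration of template plus stack, the sketch below defines a single S3 bucket and deploys it as a CloudFormation stack via boto3; the bucket and stack names are placeholders, and a real template would usually live in version control rather than inline code.

```python
# Toy sketch: a one-resource CloudFormation template deployed as a stack.
# Names are placeholders.
import boto3, json

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "LogsBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-logs-bucket-12345"},
        }
    },
}

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="example-logging-stack",
    TemplateBody=json.dumps(template),
)
```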

AWS Security Hub

AWS Security Hub brings your alerts together from across your AWS accounts into a centralized location, so you can manage your security posture from a single place. It's where you handle findings that come in from your security tools, such as endpoint protection, firewalls, or compliance scanners, both from integrated AWS services and from solutions in the AWS Partner Network. Integrated with SIEM or SOAR tools, or with AWS tools such as Amazon Detective, it can also help you take steps toward remediation.
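For a hedged sense of what working with those findings looks like programmatically, here's a small boto3 sketch that pulls a handful of active, critical-severity findings.

```python
# Sketch: fetch active, critical-severity Security Hub findings.
import boto3

securityhub = boto3.client("securityhub")

findings = securityhub.get_findings(
    Filters={
        "SeverityLabel": [{"Value": "CRITICAL", "Comparison": "EQUALS"}],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    },
    MaxResults=10,
)

for finding in findings["Findings"]:
    print(finding["Title"], "-", finding["Resources"][0]["Id"])
```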

The challenge is that organizations today run so many security tools, both within these categories and beyond them, that alert volumes have grown past what can be managed within AWS Security Hub alone. Security teams see hundreds of alerts per day, and many are impossible to prioritize without context. This leads to alert fatigue: teams simply ignore many of these alerts because they don't have the information required to make contextual decisions, leaving their companies exposed to risk.

Amazon EKS

Amazon EKS (Elastic Kubernetes Service) is how Amazon Web Services runs the Kubernetes control plane across multiple AWS Availability Zones to provide highly available Kubernetes clusters. You can start, run, and scale your Kubernetes applications either on-premises or in the cloud, and get all the benefits of the container orchestration system in terms of deployment, management, and scale.

There are many security considerations for companies running Kubernetes on AWS. For example, some of the AWS defaults can be overly permissive or open you up to risk, and attackers can also target the Kubernetes control plane to attempt account takeover.
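One simple default worth checking is whether a cluster's API endpoint is reachable from the public internet. The rough boto3 sketch below flags any cluster with public endpoint access enabled; whether that's acceptable depends on your own architecture.

```python
# Rough sketch: flag EKS clusters whose API endpoint is publicly reachable.
import boto3

eks = boto3.client("eks")

for name in eks.list_clusters()["clusters"]:
    vpc_config = eks.describe_cluster(name=name)["cluster"]["resourcesVpcConfig"]
    if vpc_config.get("endpointPublicAccess"):
        print(f"Cluster '{name}' exposes a public Kubernetes API endpoint")
```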

AWS Cross-account Access

Using the AWS console, you can set up cross-account access so that users in one AWS account can access resources, such as S3 buckets and objects, that belong to another account. This is usually done with cross-account IAM roles and a trust policy, and by setting up an AWS credentials file.

There are three steps involved in setting up cross-account access. First, you create the necessary IAM role, defining the other account as what's known as a "trusted entity" and specifying the correct permissions policy. For example, is it a read-only relationship, or do you want the other account to be able to update or edit?

Second, following AWS's example scenario, you modify the IAM group policy so that the Testers group can't use the UpdateApp role. This matters where Testers have PowerUser access, so it's important to ensure you've explicitly denied that role for their use.

Finally, it's time to test whether cross-account access works. Using the Developer role, try to run UpdateApp and check that the change goes through; it should take effect in real time.
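Programmatically, that test boils down to assuming the cross-account role with AWS STS and then calling the other account with the temporary credentials. Here's a hedged sketch; the account ID is a documentation-style placeholder, the role name follows the UpdateApp example above, and the final S3 call is just one way to confirm the assumed role works.

```python
# Sketch: assume a cross-account role and act with its temporary credentials.
# The account ID, role name, and session name are placeholders.
import boto3

sts = boto3.client("sts")

assumed = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/UpdateApp",
    RoleSessionName="cross-account-test",
)
creds = assumed["Credentials"]

# Use the temporary credentials to call services in the other account.
other_account_s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(other_account_s3.list_buckets()["Buckets"])
```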
