Ready to have some knowledge dropped about the principle of least privilege? Here at Panoptica, we love knocking those cloud security misconceptions out of the park! In our previous editions of this series, we spoke about how relying on CVEs is not enough to keep you secure on the cloud, and how CWPP is not a good fit for modern cloud environments.
Today, we're going to talk about the principle of least privilege, and why chasing this on-premises concept in the cloud adds a huge amount of work for little reward or risk reduction.
The principle of least privilege (PoLP) is a security concept for computer systems where you give users exactly the permissions that they need to do their job, and no further. It was invented for on-premises security environments, and on-premises at least, it can be extremely effective at reducing risk. Here’s how it works.
All users have access permissions, which govern what they can interact with in any network, whether that's access to read-only versions of files, read-write access, root privileges, or anything in between. Excessive privileged access means that a user has more access to an environment than they need to do their given role.
If a freelance designer is given all-access to their client's network, with virtually unlimited privileges, when all they really need is permission to the single folder where the logos are kept, that's clearly a problem with the policies governing their access. In contrast, if a superuser account can only view read-only files, they obviously don't have enough access, and least privilege has gone wrong somewhere, probably through overly coarse access policies.
In a smart and secure on-premises environment, approaches like micro-segmentation and zero-trust policies apply the principle of least privilege to reduce risk in a number of ways.
The first is isolating specific "crown jewel" applications so that even if an attacker makes it into your environment, they are unable to reach that data or application. As few people as possible are given credentials that allow access, in keeping with least-privilege rules. Crown jewel applications could be anything from stores of sensitive customer data to business-critical systems and processes.
Role-based access control (RBAC) grants specific access to certain data, applications, or parts of the network based on the role a user holds at the company. This goes hand in hand with the principle of least privilege: if credentials are stolen, the attackers are limited to whatever access the employee in question holds. As this is based on users, you could also isolate privileged user sessions specifically, giving them an extra layer of protection. Only if an administrator account, or another account with wide access privileges, were stolen would the business be in real trouble.
This kind of isolation is usually achieved with micro-segmentation, where specific apps, users, data, or other elements of the business are protected from attack by internal, next-gen firewalls. Risk is reduced much as in the examples above: the principle of least privilege is used to grant access only to those who need it, and no one else. In some situations you might need to allow elevated privileges for a short period, for example during an emergency. Watch out for privilege creep, where users gain more access over time and it is never walked back.
More recently, with the rise in cloud-native deployments and data centers, many security professionals have attempted to lift and shift the concept of least privilege to the cloud. Here’s why I believe that when it comes to securing the cloud, least privilege is dead.
Least privilege is predicated on the idea of escalating privileges and credentials. An attacker breaches the network, and they can make lateral moves from one account to another, moving across the network and gaining greater and greater access until they reach the ultimate payload. This is a very “on-prem” kind of idea.
To take full ownership of a cloud account, you really only need a couple of permissions, in an environment where there are often tens of thousands. For example, AWS has more than 10,000 different IAM actions, distributed across its various services. These permissions are divided between read, write, and management actions. To cause a data breach, all an attacker needs is access to a single IAM role or set of user credentials that has s3:GetObject with a wildcard in the resource. In short, any S3 bucket that doesn't explicitly restrict access to the account is open for business to an attacker. We talk a lot about the risks of S3 buckets in this article on misconfigured cloud buckets and what you can do about them.
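As a minimal sketch, this is what such an overly broad policy looks like in IAM JSON (the statement ID is a hypothetical example):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "WildcardReadExample",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "*"
    }
  ]
}
```

One Allow, one action, one wildcard resource: any identity holding this policy can read objects from every bucket the account can reach, unless a bucket policy explicitly denies it.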
This is exactly what happened to Capital One. Security experts continue to cite that famous data breach as an issue that could have been prevented with least privilege in mind, but this really isn't the case. With Capital One, just two IAM actions, 'List S3 buckets' and 'Get objects' (via the sync command), gave the attacker enough access to cause a $190M settlement and a data breach affecting more than 100M customers.
With this attack path available from just two permissions, the principle of least privilege just isn't feasible. It's not about restricting privileged access permissions or using role-based user access: on cloud platforms like AWS, Azure, or GCP, you need a totally different approach.
The same is true when you think about privilege escalation threats in the cloud. While least privilege claims to protect against lateral movement and privilege escalation, on the cloud this can be achieved with just a couple of permissions. Take AWS, for example. As Rhino Security Labs points out in this great blog, an attacker needs only two permissions, ec2:RunInstances and iam:PassRole, to take full ownership of an account. It's not about gaining specifically privileged credentials at all: the attack works with minimal privileges just as well as with superuser privileges.
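To make the risk tangible, here is an illustrative sketch (not Panoptica's product or an official AWS tool) of how you might scan an IAM policy document for that two-permission escalation combo. It only inspects Allow-statement actions; real IAM evaluation also weighs resources, conditions, and explicit denies.

```python
# Sketch: flag policy documents granting both ec2:RunInstances and
# iam:PassRole -- the two-permission escalation path described above.
# Action matching here is simplified: exact names plus service or
# global wildcards; Resource and Condition elements are ignored.

ESCALATION_COMBO = {"ec2:RunInstances", "iam:PassRole"}

def allowed_actions(policy):
    """Collect every action granted by the policy's Allow statements."""
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a lone statement may be a bare object
        statements = [statements]
    actions = set()
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        acts = stmt.get("Action", [])
        actions.update([acts] if isinstance(acts, str) else acts)
    return actions

def grants(acts, action):
    """True if `action` is covered directly or by a service/global wildcard."""
    service = action.split(":")[0]
    return action in acts or service + ":*" in acts or "*" in acts

def has_escalation_path(policy):
    """True if the policy grants the full RunInstances + PassRole combo."""
    acts = allowed_actions(policy)
    return all(grants(acts, a) for a in ESCALATION_COMBO)

risky = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:RunInstances", "iam:PassRole"],
        "Resource": "*",
    }],
}
print(has_escalation_path(risky))  # True
```

Notice that neither action looks "privileged" on its own, which is exactly why an actions-level audit like this catches combinations that a least-privilege review would wave through.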
The principle of least privilege also relies on the idea that you can manipulate and manage permissions and access rules, fully controlling your environment. That assumption doesn't really extend to a hybrid or cloud-native environment either.
On both AWS and Azure, there are managed policies and roles defined by the CSP that can't be manually changed. In my experience, many users and developers are unaware of the risks of these policies and roles. Two examples are AWSLambdaFullAccess and AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy. Although these are designed to enable rapid, agile development cycles out of the box, making DevOps lives easier, both also allow for privilege escalation when exploited.
One good example is the AWSElasticBeanstalkService managed policy, which AWS announced for deprecation; it is no longer available for attachment to users, groups, or roles as of April 15th, 2021. AWS recommends using the new managed policy, AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy, instead.
In this policy, some actions have specific resources and conditions defined, but privilege escalation is still possible through an exploit, leaving a huge security gap. Here's a gist I created with more information and a proof of concept. Don't forget to reach out and let me know what you think of it!
The CIEM (Cloud Infrastructure Entitlement Management) domain is built around tackling these issues and supporting businesses in achieving the principle of least privilege in the cloud. In reality, however, this causes a huge headache for DevOps teams and cloud security owners attempting to maintain lean policies under strict resource requirements. Some CIEM tools claim to offer least-privilege policy creation at the build stage, baked into CI/CD pipelines, but in practice that return on investment never materializes, and the attempt causes endless maintenance and frustration.
Most developers in cloud-native companies are unaware of the risks of managed policies or wildcard policies in the cloud, and that’s fine! It’s not part of their role to gain such in-depth knowledge of cloud risks. In fact, attempting to get up to speed with all of the changing risks of a dynamic environment would slow down the very DevOps pace that they are aiming to achieve by leveraging the cloud.
Consider the time that would be spent every time we modify the policies used by our users or applications, for example when we add a new database or new IAM actions. Trying to enforce least privilege here would be beyond tedious: continuously checking each identity and its specific permissions, asking yourself, is it really necessary for this identity to hold these permissions? If not, and I remove one, will it break production or cause problems for the user? It's simply not maintainable.
Cloud-native companies have just two options. They can use policies shared between many applications, relying on default AWS or Azure permissions, which is not the principle of least privilege; or they can attempt to create policies for each application and user. The second option generates so much maintenance and overhead for security and operations teams that it negates the benefits of the cloud in the first place. You might be working towards a least-privilege model, but you won't have much else!
Ok, enough with the doom and gloom! For system stability, and to restrict access in a smart way, what kind of privilege enforcement does make sense on the cloud?
At Panoptica, our approach is to offer Guardrails to solve this use case. By creating your own managed policy unique to your environment that blocks specific attack paths, there’s no need to fight with the least privilege model. You can apply access controls using Guardrails, and your teams can work unimpeded without any impact to the pace of DevOps.
Instead of getting fine-grained about specific one-to-one connections between the account and a user, or working endlessly on permissions into any given resource (all of which need to be maintained over time), you just set and forget Guardrails that block the potential attack.
As part of our cloud security platform, we offer automated Guardrails that are dynamically built inside your DevOps tools, for example Terraform or JSON. A misconfiguration can be found and remediated automatically, eliminating the vast majority of the manual work DevOps would otherwise need to do. If an asset has an overly permissive role where it should really have just read-only access, the required Guardrail will automatically appear with all the necessary denies to close this gap. No impact on production, and no need to change the read-only permissions themselves.
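To make the idea concrete, a deny-based guardrail can be expressed in plain IAM JSON. The policy below is a hand-written sketch of the concept, not actual Panoptica output; it blocks the PassRole escalation path with an explicit Deny, which in IAM evaluation overrides any Allow, so the identity's existing read-only permissions never need to be touched.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GuardrailBlockPassRole",
      "Effect": "Deny",
      "Action": "iam:PassRole",
      "Resource": "*"
    }
  ]
}
```

Attached alongside whatever Allow policies already exist, this single statement severs the RunInstances-plus-PassRole attack path without any per-identity permission surgery.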
Much smarter than relying on the principle of least privilege for an environment it was never intended for, right?
Ready to discuss your unique cloud environment? Schedule a call.