We have all seen alarming headlines this year, with over a dozen high-profile breaches exposing critical customer data stored in AWS S3 storage.
- [July 2017] Verizon contractor leaked over 10 million customer records stored in S3
- [July 2017] WWE leaked customer information for over 3 million customers stored in S3
- [July 2017] Dow Jones leaked over 2 million customer details from data stored in S3
- [August 2017] A voting machine supplier leaked over 1.8 million voter records from S3
- [September 2017] A vehicle tracking vendor leaked half a million customer records
- [September 2017] Time Warner Cable leaked 4 million customer records from S3
- [September 2017] Accenture left potentially sensitive data exposed on S3
- [November 2017] The US Department of Defense exposed 1.8 billion posts from S3
The list keeps growing as more breaches are disclosed every month. We begin by digging into five fundamental challenges behind these massive breaches, then present three simple steps that enterprises can take proactively to prevent them. As enterprises rush to move their workloads and data to the cloud, the security of S3 and other cloud resources should not be forgotten or ignored. Governance of cloud resources in design, operations and continuous monitoring is critical to keeping data secure.
Five security challenges
Let us dig deeper into five challenges that enterprises face in keeping data secure in S3 buckets in the cloud.
- [Skillset Problem] As enterprises move to the cloud, they often lack the security expertise needed to keep customer data safe there. For example, despite its secure-sounding name, granting the "Authenticated Users" group permission on an S3 bucket is highly insecure: it opens the bucket to any authenticated AWS user, not just users in your account, and it is what caused the Dow Jones exposure. Newcomers to cloud security can easily misinterpret such settings and mistakenly think they are secure when they are not.
- [Detection Problem] Most of the S3 bucket leaks came from a simple misconfigured security setting on AWS S3 that made buckets publicly readable. Can such misconfigurations be easily detected and proactively corrected? For example, setting GET permissions on an S3 bucket to allow global access is insecure for almost every use case – http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html#example-bucket-policies-use-case-2. This permission makes sense for a bucket hosting a public website, but never for sensitive data.
- [Scale Problem] Enterprises have hundreds of accounts and thousands of developers using the cloud, so it is practically impossible to manually audit the hundreds or thousands of S3 buckets they create and continuously monitor their permissions. Sheer volume alone can leave buckets insecure.
- [Agility Problem] Businesses move fast, with thousands of changes made to the cloud every day across new features, new configurations and new services. How does an enterprise make sure that cloud resources such as S3 buckets are secure and compliant every day and every hour?
- [Automation Problem] Finally, as the scale and frequency of changes in the cloud increase (the scale and agility problems above), there is a clear need for automation. Are there tools to help with security automation?
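Public-grant misconfigurations like the ones behind these leaks can be detected programmatically. Below is a minimal sketch, not any vendor's tooling, that flags buckets whose ACLs grant access to the "AllUsers" or "AuthenticatedUsers" groups. It assumes boto3 is installed and AWS credentials are configured when `scan_account` is run against a real account; the grantee URIs are the actual ACL group URIs S3 uses.

```python
# Sketch: flag S3 buckets whose ACL grants access to everyone ("AllUsers")
# or to any AWS account holder ("AuthenticatedUsers").

# ACL grantee URIs that make a bucket world- or AWS-wide accessible.
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"
AUTH_USERS = "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"

def public_grants(grants):
    """Return the subset of ACL grants that expose the bucket broadly."""
    return [
        g for g in grants
        if g.get("Grantee", {}).get("URI") in (ALL_USERS, AUTH_USERS)
    ]

def scan_account():
    """Print every bucket in the account that has a broad grant.

    Requires boto3 and configured AWS credentials; call this from a
    scheduled job to audit continuously.
    """
    import boto3  # imported here so the pure check above needs no AWS SDK
    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        bad = public_grants(acl["Grants"])
        if bad:
            perms = [g["Permission"] for g in bad]
            print(f"INSECURE: {bucket['Name']}: {perms}")
```

Separating the pure `public_grants` check from the AWS calls makes the detection rule itself easy to test without touching a live account.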
We have explored possible causes of insecure buckets. Before turning to solutions, let us consider where the responsibility lies.
Whose fault is it for insecure S3 buckets?
Cloud security is a shared responsibility between the cloud provider (AWS) and its customers, and customers must do their part. AWS S3 buckets are secure by default when created. However, as customers change bucket permissions, incorrect settings such as "Public" or "Authenticated Users" can leave buckets wide open to the world. This configuration drift can happen over time, and there needs to be a way to detect, alert on and respond to it.
As businesses store sensitive data on S3, encrypting that data is a complementary control that should be implemented alongside permissions control. AWS S3 provides server-side encryption capabilities that customers can use to further protect their data. If buckets holding sensitive data are not encrypted, that misconfiguration also needs to be detected, alerted on and responded to.
These are two examples of the customer responsibilities to keep S3 buckets secure.
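As an illustration of auditing the encryption side of that responsibility, the sketch below inspects a bucket's default server-side encryption configuration via the S3 GetBucketEncryption API. `check_bucket_encryption` assumes boto3 and configured credentials; `default_encryption_algorithms` is a pure helper over the API's documented response shape.

```python
# Sketch: check whether an S3 bucket has default server-side encryption.

def default_encryption_algorithms(config):
    """Extract default SSE algorithms from a GetBucketEncryption response."""
    rules = config.get("ServerSideEncryptionConfiguration", {}).get("Rules", [])
    return [
        r["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"]
        for r in rules
        if "ApplyServerSideEncryptionByDefault" in r
    ]

def check_bucket_encryption(bucket_name):
    """Report a bucket's default encryption status (needs boto3 + credentials)."""
    import boto3
    from botocore.exceptions import ClientError
    s3 = boto3.client("s3")
    try:
        config = s3.get_bucket_encryption(Bucket=bucket_name)
    except ClientError as err:
        # S3 returns this error code when no default encryption is configured.
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"INSECURE: {bucket_name} has no default encryption")
            return
        raise
    print(f"{bucket_name}: default SSE = {default_encryption_algorithms(config)}")
```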
Three simple steps to secure S3 buckets
AWS S3 buckets can be kept secure by taking three simple steps.
I. Design Security Model, Policies and Controls: First, enterprises must define a security model for governing all their cloud resources, such as S3 buckets, and create policies to enforce that model. They should encode the security model following "security as code" design best practices, for example by using AWS CloudFormation templates to bake security in early in the design. Example security model policies to enforce for S3 are given below:
- Ensure that no S3 bucket grants public read access to All Users or Authenticated Users
- Ensure that all S3 buckets have a policy requiring server-side encryption and encryption in transit for all objects stored in the bucket
- Ensure that the principle of least privilege is followed for S3 permissions
- Ensure that S3 bucket access logging is enabled
- Ensure that S3 buckets holding CloudTrail logs are not publicly accessible
- Require that any exceptions to the above policies are approved by the application owner of the service
The first policy rule, if correctly designed and continuously checked, could have prevented the vast majority of this year's breaches. The second rule covers encryption at rest and in transit for all data stored in S3; many compliance standards require it, so it is important to get done. The third rule is an IAM security design best practice that should be applied to all cloud resources, including S3 buckets. The fourth and fifth rules matter for audit logging of access, both for monitoring and for forensics. Most of these policies can be sourced from standards such as the CIS AWS Foundations Benchmark and the CIS AWS Three-tier Web Architecture Benchmark.
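The second rule can be enforced in the bucket policy itself: deny uploads that do not request server-side encryption, and deny any access over plain HTTP. Below is a sketch of such a policy built as a Python dict, using the standard `s3:x-amz-server-side-encryption` and `aws:SecureTransport` condition keys; `examplebucket` is a placeholder name.

```python
import json

BUCKET = "examplebucket"  # placeholder bucket name

# Sketch of a bucket policy enforcing encryption at rest and in transit:
# deny PutObject requests that do not specify server-side encryption, and
# deny all access over non-TLS connections.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            # "Null": true means "the encryption header is absent" -> deny.
            "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
        },
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
    ],
}

print(json.dumps(policy, indent=2))
```

A policy like this could be attached with the AWS CLI or baked into a CloudFormation template so every new bucket starts out compliant.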
II. Implement Security Policy Automation: Enterprises should implement a policy automation solution using a cloud security vendor tool such as BMC Policy Service. These tools can programmatically enforce the "security as code" policies and controls. For example, they can continuously assess cloud resources and report on the security posture of S3 buckets even when thousands of changes are happening in an agile way across hundreds of buckets in dozens of AWS accounts. Policy automation is a fundamental mindset shift that needs to happen to prevent such breaches. It addresses not only the scale and agility challenges identified earlier but can also mitigate some of the skillset gap.
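Conceptually, continuous policy assessment boils down to evaluating a set of rules against a regularly refreshed inventory of resources. The simplified sketch below illustrates the idea; the rule names and bucket-metadata fields are illustrative, not any vendor's schema.

```python
# Sketch of continuous policy evaluation: given a snapshot of bucket
# metadata (gathered by any inventory tool), apply each policy rule and
# collect violations per bucket.

def evaluate(buckets, rules):
    """Return {bucket_name: [failed rule names]} for non-compliant buckets."""
    report = {}
    for b in buckets:
        failed = [name for name, check in rules.items() if not check(b)]
        if failed:
            report[b["name"]] = failed
    return report

# Rules mirroring the policy list above (illustrative field names).
RULES = {
    "no-public-read": lambda b: not b.get("public_read", False),
    "default-encryption": lambda b: b.get("encrypted", False),
    "access-logging": lambda b: b.get("logging_enabled", False),
}

# In a real deployment this evaluation would run on a schedule, or on
# every change event, across all accounts, feeding alerts or tickets.
```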
III. Remediation Flows: Finally, enterprises should implement an actionable remediation and response plan for compliance violations through policy automation tools such as BMC Policy Service. This can include automatically remediating misconfigured buckets, creating tickets in Jira or other change control systems, and handling exceptions. Imagine a developer misconfigures an S3 bucket and, within a few minutes, a policy automation tool automatically detects the non-compliant bucket. Based on a remediation policy, the bucket is then automatically remediated and its configuration reverted to a secure state. This is the level of security that enterprises should demand and expect from policy automation and cloud security vendors.
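A minimal sketch of such a remediation flow is below. The S3 client and ticketing function are injected so the logic can be exercised in isolation; in a real deployment the client would be a boto3 S3 client and `open_ticket` a hypothetical Jira integration, both assumptions of this illustration.

```python
# Sketch of an automated remediation flow: when a bucket is found
# non-compliant, revert its ACL to private and open a ticket recording
# what happened, for audit and follow-up.

def remediate_public_bucket(bucket_name, s3_client, open_ticket):
    """Revert a public bucket to owner-only access and file a ticket."""
    # put_bucket_acl with the canned "private" ACL removes broad grants.
    s3_client.put_bucket_acl(Bucket=bucket_name, ACL="private")
    return open_ticket(
        summary=f"S3 bucket {bucket_name} was public; ACL reverted to private"
    )
```

Injecting the dependencies also makes it straightforward to dry-run the flow against a fake client before trusting it with production buckets.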
All of the AWS S3 breaches we heard about this year were completely preventable. Customers should define security models, policies and controls, and use security policy automation tools to implement and enforce them. This ensures that if S3 or other cloud resource misconfigurations creep in over time, they are alerted on and, in many cases, even auto-remediated.
BMC Policy Service is a security automation tool that you can get started with in less than 30 minutes to secure all your buckets and cloud resources – https://www.bmc.com/it-solutions/secops-policy-service.html.