At Boldlink, when it comes to data, we understand that misconfigurations can leave our customers' data exposed or open to exploitation. But with the right guidance towards the best solutions AWS offers, customers can avoid the misconfigurations and pitfalls that come with having an abundance of choice on AWS.
Under the AWS Shared Responsibility Model, access to and protection of your data throughout its lifecycle on AWS is your responsibility as the customer; thinking otherwise will expose you to unrecoverable data loss or compromise.
You can back up your data with AWS Backup. This service covers Amazon EBS, Amazon FSx, Amazon EC2, Amazon RDS, Amazon DynamoDB, Amazon EFS, and AWS Storage Gateway, and its features let you manage backup policies, tagging, scheduling, encryption, and more.
Cross-region and cross-account capabilities let AWS customers natively store their backup data in different AWS Regions and back up across different AWS accounts.
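As a minimal sketch of how this looks in practice, the following Python (boto3) snippet creates a daily backup plan with a cross-region copy action and selects resources by tag. The vault names, role ARN, account ID, and schedule are hypothetical placeholders; adjust them to your environment.

```python
import boto3

backup = boto3.client("backup", region_name="eu-west-1")

# Create a backup plan: daily backups kept for 35 days, with an extra
# copy sent to a vault in another region (a cross-account vault ARN
# would work the same way) and kept for a year.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-backup-plan",
        "Rules": [
            {
                "RuleName": "daily-0300-utc",
                "TargetBackupVaultName": "primary-vault",
                "ScheduleExpression": "cron(0 3 * * ? *)",
                "Lifecycle": {"DeleteAfterDays": 35},
                "CopyActions": [
                    {
                        # Hypothetical destination vault in a second region
                        "DestinationBackupVaultArn": "arn:aws:backup:eu-west-2:111122223333:backup-vault:dr-vault",
                        "Lifecycle": {"DeleteAfterDays": 365},
                    }
                ],
            }
        ],
    }
)

# Select resources by tag: every supported resource tagged backup=daily.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-resources",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [
            {
                "ConditionType": "STRINGEQUALS",
                "ConditionKey": "backup",
                "ConditionValue": "daily",
            }
        ],
    },
)
```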
All backup data is stored on AWS S3 object storage, which is designed for 99.999999999% (eleven 9s) durability and 99.99% availability. Automatically, each PUT (write) operation is replicated across different AWS facilities in the same region, and if one of these copies becomes corrupted or unavailable, another copy is used to automatically and transparently replace the lost version.
It is worth mentioning that backups are stored in an "invisible" S3 structure associated with your AWS account, which protects your backups further (for example, if a user deletes a bucket by mistake, the backups are unaffected). More information is available here.
We recommend getting further familiar with AWS S3's capabilities, since it is a centrepiece of your data strategy on AWS; for more information, go here.
AWS S3 is an object storage service, which means you use it to keep data for short- or long-term storage (analogous to a physical storage warehouse) and retrieve it when needed, but not as real-time read/write storage such as your laptop or PC hard drive, or the AWS EBS volume attached to your AWS EC2 instance.
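To make the warehouse analogy concrete, here is a minimal boto3 sketch of storing an object and retrieving it later; the bucket and key names are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Write an object (put a box into the warehouse)...
s3.put_object(
    Bucket="example-archive-bucket",
    Key="reports/2021/q4.csv",
    Body=b"order_id,total\n1001,49.90\n",
)

# ...and retrieve it when needed.
obj = s3.get_object(Bucket="example-archive-bucket", Key="reports/2021/q4.csv")
data = obj["Body"].read()
```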
Redundancy: AWS provides Regions, which are intended to be independent geographical areas, each made up of Availability Zones, which are separate physical data centres within a single region.
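If you want to see this structure for yourself, a quick boto3 sketch can list the Availability Zones of a region (eu-west-1 here, purely as an example):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Each zone returned is a distinct physical location within the region.
for az in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(az["ZoneName"], az["State"])
```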
When it comes to the redundancy of your storage, you must configure or choose a Multi-AZ configuration as a minimum. Still, for crucial, business-impacting data, you must also enable data replication options. Let us look at three services to show how you can configure and extend their redundancy:
- AWS RDS databases support many different SQL engines. When you configure Multi-AZ, you will typically have the read/write instance of your database in one AZ (e.g. eu-west-1a) and a synchronous standby copy in a different AZ (e.g. eu-west-1b) within the same AWS Region; if the primary's AZ, a real physical data centre, fails, automatic promotion and failover take place. To augment the read performance of your DB, you can increase the number of read-only nodes/instances and place them in different AZs for local performance and redundancy, or, if the selected engine supports it, enable cross-region replication, which creates a read-only copy of your DB in a completely different AWS Region for disaster recovery or performance. Bear in mind that these features are all optional and not enabled by default; check your RDS DB engine options to confirm its capabilities (a sketch of this setup follows the list).
- AWS DynamoDB tables, by default, are always created with native AWS Region redundancy, meaning a copy of your data exists in at least three AZs. Still, you can also enable the Global Tables configuration, which makes your data available cross-region through replication, allowing you to build a more distributed application and become AWS Region redundant (see the second sketch below).
- AWS S3 was part of the initial AWS offering (EC2, S3, SQS) and has grown massively since its launch in 2006. By default, it offers impressive availability and redundancy, but you can also enable cross-region replication (e.g. replicating data in the UK to the US) or, most importantly, cross-account replication, which allows you to copy data to a bucket in a different AWS account (and, if you wish, a different region); replication requires and preserves versioning of your data from source to destination. Further, you can use Object Lock on AWS S3 data, preventing it from being changed or deleted either indefinitely or within a time frame (e.g. 5 years). AWS S3 is designed and architected for durability but also for scale: you can store anything from small KBs to PBs of information, a truly scalable cloud storage solution (see the third sketch below).
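First, a minimal sketch of the RDS setup described above, assuming boto3 and hypothetical instance identifiers and credentials; Multi-AZ is a single opt-in flag, and read replicas are created separately:

```python
import boto3

rds = boto3.client("rds", region_name="eu-west-1")

# Multi-AZ is opt-in: a synchronous standby is kept in a second AZ
# and promoted automatically if the primary fails.
rds.create_db_instance(
    DBInstanceIdentifier="orders-db",
    Engine="postgres",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="dbadmin",
    MasterUserPassword="change-me-please",
    MultiAZ=True,
)

# An optional read replica for read scaling; with a supported engine,
# a similar call can target a different region for disaster recovery.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica",
    SourceDBInstanceIdentifier="orders-db",
    AvailabilityZone="eu-west-1b",
)
```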
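Second, a sketch of turning an existing DynamoDB table into a Global Table by adding a replica region; the table name is hypothetical, and the table must meet the Global Tables prerequisites (for example, streams enabled):

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="eu-west-1")

# Tables are region-redundant by default; adding a replica makes the
# table a Global Table, replicated to a second region.
dynamodb.update_table(
    TableName="sessions",
    ReplicaUpdates=[{"Create": {"RegionName": "us-east-1"}}],
)
```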
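Third, a sketch of S3 cross-region replication; versioning must be enabled on both buckets first, and the bucket names, role ARN, and account ID here are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Replication requires versioning on the source (and destination) bucket.
s3.put_bucket_versioning(
    Bucket="example-source-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

s3.put_bucket_replication(
    Bucket="example-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": ""},  # empty prefix = all objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    # A bucket in another region, or in another AWS account.
                    "Bucket": "arn:aws:s3:::example-dr-bucket",
                },
            }
        ],
    },
)
```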
For long-term storage, we must talk again about AWS S3 and add AWS S3 Glacier and AWS Snowball. AWS S3 object storage is your ideal long-term store from a durability and cost perspective.
It offers different storage classes, which let you choose between high performance and durability, lower durability at lower cost (a single AZ instead of three or more), or lower retrieval performance with high durability, with AWS S3 Glacier costing a fraction of all other storage classes on AWS.
Both S3 and Glacier allow lifecycle policies that can be used to manage your data's lifecycle automatically, whether over months or years, and let us not forget Object Lock, which protects your data from any accidental or malicious deletion or tampering.
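As an illustrative sketch, the boto3 calls below transition objects to Glacier after 90 days, expire them after roughly five years, and apply a default 5-year Object Lock retention; the bucket name is hypothetical, and Object Lock must have been enabled when the bucket was created:

```python
import boto3

s3 = boto3.client("s3")

# Lifecycle policy: move objects to Glacier after 90 days, delete after ~5 years.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # applies to all objects
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 1825},
            }
        ]
    },
)

# Object Lock: every new object version is immutable for 5 years,
# in compliance mode (no user, including root, can shorten it).
s3.put_object_lock_configuration(
    Bucket="example-archive-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 5}},
    },
)
```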
AWS Snowball is a device delivered to the customer's facilities, used when large amounts of data need to be transferred between customer facilities and AWS data centres. The use cases are broad, and the devices were specifically designed to move enormous amounts of data that would otherwise be impractical or too sensitive to send over the internet.
Encryption: AWS customers have four options for encrypting their data at rest on AWS:
- AWS KMS provides customers with complete and centralised control over encryption keys. It solves the challenge of key-rotation management by offering automatic rotation and the ability to define different usage and access policies at the resource (KMS key policy) level. You can manage the key service through the AWS API and SDKs and take advantage of native integration with a large part of AWS services. Additionally, AWS supports bring-your-own-key (BYOK), which allows you to import keys generated outside AWS and manage access and usage through the AWS API and SDKs, with the limitation that rotation must be handled by the customer. AWS KMS is a multi-tenant key storage solution (a short sketch follows this list).
- AWS CloudHSM provides a single-tenant key storage solution, and it's the ideal choice for FIPS 140-2 Level 3 validated HSM compliance and/or single-tenant key storage. This solution is ideal if you don't want AWS to manage your keys but do want to offload time-consuming management tasks such as hardware provisioning, software patching, high availability, and backups. You can still take advantage of KMS features by configuring a custom key store in AWS KMS.
- AWS Marketplace offers a range of key management solutions that can integrate with or extend AWS KMS or AWS CloudHSM further; we recommend investigating these if required for compliance.
- Bring your own solution: you can build your own open-source solution on AWS using a VPC, EC2, etc., keeping your key storage and management completely separate from AWS. The downside, of course, is the added management overhead and the lack of integration with AWS services.
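To ground the KMS option from the list above, here is a minimal boto3 sketch that creates a customer-managed key, enables automatic rotation, uses it directly via the API, and points S3 default encryption at it; the bucket name and key description are placeholders:

```python
import boto3

kms = boto3.client("kms", region_name="eu-west-1")

# Create a customer-managed key and turn on automatic rotation.
key = kms.create_key(Description="data-at-rest key for archive buckets")
key_id = key["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)

# Use the key directly through the API (payloads up to 4 KB)...
ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"sensitive payload")["CiphertextBlob"]
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]

# ...or let an integrated service use it, e.g. S3 default encryption.
boto3.client("s3").put_bucket_encryption(
    Bucket="example-archive-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_id,
                }
            }
        ]
    },
)
```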
In conclusion, if you want to keep versions of your data, you can use AWS Backup. For long-term storage, you can, as an example, use a combination of AWS Backup for up to 365 days of historical backups, keep database dumps in S3 for two years, and use AWS S3 Glacier for five years.
Are you concerned about DR and the rise of data ransomware attacks? Take advantage of AWS S3 cross-account replication or Object Lock in a different AWS region.
The extent to which you can protect your data on AWS is quite broad and very customisable, while at the same time offering many options to keep it cost-effective. We hope this brief introduction to the topic has helped you better understand the power of the AWS cloud.