A few years ago, Skeddly added an “Estimated Cost Savings” report. This report showed you an estimated cost savings, as a percentage, on an action-by-action basis. However, it failed to provide concrete savings information, in actual dollars, against your actual infrastructure.
Today, I want to tell you about Skeddly’s “new & improved” Cost Savings report.
It’s been a while since the Skeddly blog has been updated. Rest assured, Skeddly has been adding new features and enhancements. Today, I’d like to summarize some of the features added to Skeddly over the last two years.
Back in 2019, I introduced you to Skeddly Projects. Projects is a feature in Skeddly that allows you to separate actions, credentials, managed backup plans, and managed start/stop plans. It’s almost like a mini Skeddly account within an account.
Over the last few months, Skeddly’s Projects feature has been significantly enhanced. And I’m going to tell you all about the wonderful new features within Skeddly Projects.
It is very important to create backups for your crucial EC2 instances. While AWS provides mechanisms to increase availability, the cloud is not infallible.
EC2 provides a native backup format for your EC2 instances in the form of AMI images. But the storage costs of those AMI images can build up over time.
Storage in S3, GB-for-GB, is cheaper than EBS snapshots. For example, the starting price for 1 GB of EBS snapshot storage is $0.05 per month, compared to $0.023 for 1 GB of data stored in S3. So if you can move your AMI images to S3, you may be able to lower your storage costs.
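As a quick sketch, the potential savings work out to simple arithmetic. The prices below are the per-GB-month figures quoted above; actual pricing varies by region and changes over time:

```python
# Estimated monthly cost of keeping AMI images as EBS snapshots vs. S3.
# Prices are the per-GB-month figures quoted above (subject to change).
EBS_SNAPSHOT_PRICE = 0.05  # USD per GB-month
S3_PRICE = 0.023           # USD per GB-month

def monthly_savings(total_gb):
    """Estimated monthly savings from archiving AMI images to S3."""
    return total_gb * (EBS_SNAPSHOT_PRICE - S3_PRICE)

# Example: 500 GB of AMI images
print(f"${monthly_savings(500):.2f} per month")  # $13.50 per month
```

At $0.027 saved per GB-month, the savings scale linearly with how much AMI data you're holding.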
AWS now provides a built-in function to copy AMI images to S3 buckets through a new `aws ec2 create-store-image-task` CLI command. At this time, this feature is quite limited.
Scheduling and automating this function can be very helpful in keeping your AMI images archived in S3. To help you with this, Skeddly has added a new action: Copy AMI Images to S3.
Amazon DocumentDB is a managed MongoDB-compatible database service provided by AWS. It provides the database in clusters, with multiple instances, for high availability.
To help with cost-reduction strategies, AWS allows DocumentDB clusters to be stopped and restarted. While the cluster is stopped, you’re not charged. So it’s a great candidate to shut off overnight and on weekends if it’s not needed.
Why is this a good thing?
In the US East 1 region, a DocumentDB cluster with a single `db.r5.large` instance costs $0.277 per hour. That works out to $46.536 per week. But if you only need the cluster during business hours, it could cost you only $16.62 (Monday to Friday, running only 12 hours per day). Larger clusters with more instances simply compound the cost savings.
Instance Type | Hourly Cost | Number of Instances | Weekly Cost (24/7) | Weekly Cost (12/5) |
---|---|---|---|---|
db.r5.large | $0.277 | 1 | $46.536 | $16.62 |
db.r5.xlarge | $0.554 | 1 | $93.072 | $33.24 |
db.r5.xlarge | $0.554 | 2 | $186.144 | $66.48 |
db.r5.12xlarge | $6.648 | 2 | $2,233.728 | $797.76 |
As you can see, by starting & stopping your clusters, you can have a resilient cluster (with 2 instances) for less than the cost of a non-resilient cluster (with only 1 instance).
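The figures in the table reduce to straightforward arithmetic. A small sketch to verify them, using the us-east-1 hourly prices quoted above:

```python
# Weekly DocumentDB cost: always-on (24/7) vs. business hours (12 h/day, Mon-Fri).
# Hourly prices are the us-east-1 figures from the table above.
def weekly_cost(hourly_price, instances, hours_per_week):
    return hourly_price * instances * hours_per_week

ALWAYS_ON = 24 * 7       # 168 hours per week
BUSINESS_HOURS = 12 * 5  # 60 hours per week

print(round(weekly_cost(0.277, 1, ALWAYS_ON), 3))       # ~46.536
print(round(weekly_cost(0.277, 1, BUSINESS_HOURS), 3))  # ~16.62
# A resilient 2-instance cluster, started and stopped on a schedule,
# still costs less per week than one always-on instance:
print(round(weekly_cost(0.277, 2, BUSINESS_HOURS), 3))  # ~33.24
```

This confirms the claim above: two `db.r5.large` instances running 12/5 ($33.24/week) undercut a single instance running 24/7 ($46.536/week).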
Today, I’m happy to announce two new actions to help you reduce your cloud costs: “Start DocumentDB Clusters” and “Stop DocumentDB Clusters”.
In December, AWS announced the new VPC Reachability Analyzer tool. Using this tool, AWS can analyse the paths within your VPC to determine whether the various components in your VPC can communicate with each other or not. This can be useful for both network connectivity troubleshooting and security.
Today, Skeddly’s adding a new action to assist with your network path analyses: Start Network Insights Analyses.
With this new action, you can start analysing your pre-defined network insights paths on any schedule you want, sending you an email summary when the analyses are complete.
Until now, if you wanted to export your Amazon DynamoDB tables to Amazon S3, you needed to use a Data Pipeline solution. Previously, Skeddly added its own version of the Data Pipeline solution.
Earlier this month, AWS announced a built-in method to export data from DynamoDB tables to Amazon S3.
Naturally, we jumped on this and added a new action to support this new functionality. Today, I’m happy to announce this new action: “Export DynamoDB Tables 2”.
Amazon S3 is a highly-scalable object storage system. Amazon S3 can contain any number of objects (files), and those objects can be organized into “folders”. However, to S3, folders don’t really exist.
huh?
That’s right. “Folders” are a human concept, applied to S3 keys for organizational purposes. But they’re nothing special to S3 itself.
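To illustrate the point without touching AWS at all (the bucket contents below are made up), S3 stores a flat list of keys, and everything the console shows as a "folder" is just a prefix computed from the `/` delimiter on the client side:

```python
# A hypothetical S3 bucket's keys -- a flat list, no real directories.
keys = [
    "reports/2021/january.csv",
    "reports/2021/february.csv",
    "reports/readme.txt",
    "logo.png",
]

# What the console renders as top-level "folders" is just the set of
# distinct prefixes up to the first "/" -- derived, not stored in S3.
def top_level_folders(keys, delimiter="/"):
    return sorted({k.split(delimiter)[0] + delimiter
                   for k in keys if delimiter in k})

print(top_level_folders(keys))  # ['reports/']
```

This mirrors what the S3 `ListObjectsV2` API does when you pass it a `Delimiter`: it groups keys into "common prefixes" on the fly, which is why a "folder" vanishes the moment its last object is deleted.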
Before we begin, forget everything you know about the S3 Management Console. The S3 Management Console is a graphical user interface (GUI), and GUIs are built for humans.
We’re going to start with the AWS CLI today, so you can see the truth.
Last October, we added Managed Backups. Managed Backups is a fantastic feature in Skeddly where you simply configure your backup plan, add resources to the plan, and Skeddly manages the actions used to create and delete your backups for you.
Today, I’m going to introduce you to a similar feature for starting and stopping your cloud resources: Managed Start/Stops.
Using Managed Start/Stops, you simply create a plan, add resources to it, and choose your start/stop schedule.
Once you have created your plan and added resources, Skeddly will create the necessary actions to start and stop your resources based on your desired schedule.
In January, AWS announced the ability to export RDS snapshots to S3. This new feature allows you to export your RDS data to S3 buckets in Apache Parquet format.
Today, I’m happy to say that we’ve added a new action to help with this feature: Export RDS Snapshots. This new action will automate the process of exporting RDS snapshots to S3 on a daily basis.